[01:49] cory_fu: thanks !
[03:13] Hello
[03:13] I have been getting an error "ERROR cannot assign unit "mysql/0" to machine: cannot assign unit "mysql/0" to new machine or container: cannot assign unit "mysql/0" to new machine: use "juju add-machine ssh:[user@]" to provision machines"
[03:13] when trying to deploy mysql on a manual environment running wily
[03:46] bkerensa: did you try doing what it suggested?
[07:44] magicaltrout - i replied to the bug. i dont see why open file descriptors would cause a difference one way or another
[07:44] i feel like this was an unrelated
[07:44] "fix" that just happened to work
[08:00] lazyPower: i replied to you, I think you read it wrong, or I just didn't explain it very well :P
[08:02] Ah, that makes sense
[08:02] yeah, if we tweak that upstart job it'll do us some justice. We'll need to re-verify once xenial lands and we convert that to a systemd job
[08:03] but for a hotfix, i'm +1 to setting that as default so we're g2g on lxd as well as public clouds
[08:24] magicaltrout - did you happen to have a patch for that? or was it all manual investigation/fix?
[08:34] morning all
[08:40] o/ jamespage
[08:40] jamespage - can i steal your eyeballs for a minute before you get into full swing?
[08:47] lazyPower, sure
[08:47] jamespage - before we begin, this is what we are visualizing - http://i.imgur.com/ABw9G9r.png
[08:48] lazyPower, okies
[08:48] all the funny little subordinate units are Elastic Beats - the replacement for "logstash forwarder"
[08:48] purple, green, grey and blue, right?
[08:48] but now they are more like FluentD, carbon, et al - it collects and streams system metrics along with log data. one beat per focus group - topbeat (distributed htop), filebeat (log files), packetbeat (network protocols), and dockerbeat
[08:49] yep
[08:49] here's the topbeat dashboard that they have in a demo bundle - http://54.80.82.242/app/kibana#/dashboard/Topbeat-Dashboard
[08:49] you'll see you can click into hosts, and drill down
[08:49] its *somewhat* interactive
[08:49] * jamespage looks
[08:50] this is neat
[08:50] I've got this mostly functional. It's the dashboarding part that's going to wreck my free time
[08:50] there's no dash for filebeat, and the packetbeat shipping doesn't appear to be finding anything on the consul http/dns interface :\
[08:50] but topbeat looks awesome!
[08:51] lazyPower, its like ganglia and nagios combined...
[08:51] kinda, there's no notion of alerting in here
[08:51] i'm pretty sure with some queries, attached to a webservice, that could be changed
[08:52] lazyPower, okies..
[08:52] this all looks super useful
[08:53] jamespage - i think this has implications with our big data bundles, as we can do this one of two ways - river from ES to HDFS for cold storage, or route through logstash to split messages into dashboard and cold storage. Think having this in OIL while running the tests. We can reproduce a visualization of the hosts under load during testing. Long term compute jobs matched with what merlijin and team are doing for "common infrastructure problems", match that to machine metrics as well *shrug*
[08:53] im no data scientist, but i think we've stumbled into something useful that applies everywhere for telemetry
[08:53] lazyPower, hmmm
[08:54] lazyPower, aggregating general telemetry, log data et al into a single place allows for some interesting analytics certainly
[08:54] its that or i'm super late to the party that fluentd/heka/carbond/statsd have been having for a while
[08:54] lazyPower, tbh this is a bit of a gap with the openstack charm set right now; I'd love to integrate with something like this as well...
[08:55] its all juju-info based, and the full config is available for tweaking in the layer
[08:55] this is like concept release quality right now
[08:55] but it should take little to no time to get running on an openstack deploy
[08:55] lazyPower, that's fine...
[08:55] juju deploy ~containers/development/bundle/beats-core
[08:55] lazyPower, this is all based around https://www.elastic.co/products/beats ?
[08:55] yep
[08:55] the dash you're looking at is available as an action on the kibana charm
[08:56] juju action do kibana/0 deploy-dashboard dashboard=beats
[08:56] all the agents self-register with an index upon relating to elasticsearch, i think that pretty well covers it
[08:56] feedback / flames welcome :)
[08:56] oh, and all the repos are in github.com/juju-solutions
[08:57] lazyPower, I'll try to find some time to give it a spin...
[08:57] completely understand :)
[08:57] just wanted to give you an early access peek, see if you're interested in being a stakeholder
[08:57] lazyPower, deadline at the end of the week so maybe next week - but it does look useful
[08:58] lazyPower, our monitoring/metric/log solution is a little fragmented right now - so this might be something we target for next cycle - but that sounds about right timing wise to me...
[08:58] if this is concept right now...
[08:58] oh man i cant wait to get this to the list :P
[08:59] lazyPower, I like the look of this - https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration.html
[08:59] just suck up everything baby!
[08:59] Yeah!
[08:59] its super simple
[08:59] and it does an amazingly good job too
[09:00] jamespage: are you the right person to pester about https://code.launchpad.net/~paulgear/charms/trusty/ntpmaster/sync-charmhelpers/+merge/289605 and https://code.launchpad.net/~paulgear/charms/trusty/ntpmaster/execd-support/+merge/289609 ? Your name is on it. ;-)
[09:00] blahdeblah, eek!
[09:00] jamespage: I'll take that as a yes. ;-)
[09:00] I'll stick it on my list - just working the SRU backlog at the moment
[09:00] they had already dropped into my inbox tbh...
[09:01] cool - thanks
[09:02] Just wanted to make sure they would get looked at. At the moment I'm having to work from a fork, which is always sad.
[09:54] jamespage, I'm hitting a problem running the xenial mitaka amulet tests for nova-cloud-controller. Am I right in thinking that that should work to the best of your knowledge?
[09:54] gnuoy, they should do yes
[09:54] well they did last week at least...
[09:55] xenial maybe foobarred outside of charms...
[09:55] thats quite possible
[09:56] jamespage, no, this is:
[09:56] 2016-03-21 09:48:20.269 29291 ERROR nova.api.openstack.extensions CantStartEngineError: No sql_connection parameter is established
[10:08] lazyPower: no patch i'm afraid, just hacked it and passed out
[10:08] onsite at a client, I can dump something in this evening, or you can push the 1 liner
[10:09] I'll try to circle back, but if you want to make the PR i'm happy to wait for a link
[10:10] would be a big +1 if you made the PR and had associated tests :)
[10:10] okay i'll send something over this evening
[10:14] currently looking at a client's project plan for a tool they've got another consultancy migrating from sql server to hadoop
[10:14] 19 days for Amazon EMR prod env setup
[10:14] should be using juju :P
[10:14] +1 to that sentiment
[10:14] or just employ someone more competent ;)
[10:37] Xenial absolutely hoses my laptop battery, I hope that gets better before release ;(
[10:44] The only laptop i've ever had good battery life under linux is my XPS13
[10:45] which is in a sad state these days :/ I think i tanked the ssd
[10:45] my x1 carbon has a reasonably decent battery but it lasts about 50% as long in Xenial as it did in Trusty
[10:46] but the problem with all these slim laptops is I can't swap it, or add a wedge on
[10:47] Yeah
[10:54] gnuoy, the fix for that should have landed in the master branch already
[10:55] gnuoy, just testing now
[10:55] lazyPower, do you ever sleep?
[10:55] :-)
[10:55] The fix for "No sql_connection parameter is established" ?
[10:55] jamespage, do you mean the master branch of the charm ?
[10:56] jamespage - my sleep schedule is completely borked. i wake up at like 3am and pass out around 8. But the level of work i can get done in this morning wake is off the hook
[10:56] gnuoy, yes
[10:56] i'm seriously considering making this my permanent schedule.
[10:56] I went through years of starting work at 4am
[10:56] get far more done
[10:56] kids stopped that plan, for now at least
[10:57] gnuoy, I just deployed master branch changes with xenial-mitaka OK
[10:57] jamespage, ack, I'll rebase my branch
[10:57] my only dependent is furry and doesn't understand the concept of cat vs computer
[10:57] hehe
[10:58] on the up side he doesn't disturb me when he's going into spaz mode at 4am on this schedule
[12:35] lazyPower: so I debated capitalizing MUST and SHOULD
[12:35] but then the entire thing becomes hard to read
[12:35] I was thinking maybe instead, under each header
[12:35] saying something like "everything below MUST"
[12:35] and then "everything below SHOULD"
[12:35] something that makes it obvious without punching your eyeballs in the face with capitalization every line
[12:35] Separate it into terms that parse well with charm proof. Make them errors and warn level events.
[12:36] shoulds = warn
[12:36] must = error
[12:36] oh I see
[12:36] dude, that's brilliant.
[12:36] I will work on this
[12:36] right on :D
[13:08] lazyPower: re:amulet
[13:09] * lazyPower is all ears
[13:09] * marcoceppi looks at code
[13:09] for context: i'm puzzled by an amulet nuance, i can get the relationship information from one direction - but not the same relation in reverse. https://gist.github.com/anonymous/b252865ec752459f01f8#file-10-deploy-with-logstash-L22
[13:09] looking at metadata, i have no idea how this actually worked in the first place, the relation name is beat, not filebeat
[13:10] lazyPower: so does that line work?
[13:11] its returning the IP address of the unit on the other end of that relation, when i reverse the params - self.unit.relation('filebeat', 'logstash:beat') (or logstash:filebeat according to this) - it yields that the relationship is not fond
[13:11] *found
[13:11] lazyPower: it's the relation as scoped to the unit
[13:12] lazyPower, jcastro +1 and a smile wrt warns being nonfatal, and scooting fatal issues to error.
[13:12] so filebeat, filebeat:logstash?
[13:12] beisner <3
[13:12] lazyPower: so if you want logstash's side
[13:13] lazyPower: self.d.sentry['logstash'].relation('', 'filebeat:logstash')
[13:13] grr
[13:13] lazyPower: self.d.sentry['logstash'][0].relation('', 'filebeat:logstash')
[13:14] lazyPower: everything is scoped to the unit you're calling relation on
[13:14] on that note - we want to prep *os-charms with the min-juju-version metadata as soon as practical ahead of 16.04. do we know when charm proof will allow that?
[13:14] lazyPower: http://pythonhosted.org/amulet/amulet.html#amulet.sentry.UnitSentry.relation
[13:14] beisner: as soon as you open an issue on the repo
[13:15] ah, duh
[13:15] right on, thanks marcoceppi
[13:17] lazyPower: we can probably do a smarter job of this command, but I haven't found a nicer UX for it yet
[13:18] well what bit me was i aliased self.unit to that specific sentry
[13:18] i didnt even trip that i was calling this on filebeat getting the data from its perspective
[13:18] i just knew i had the wrong ip, and was trying to backtrace to where i needed that scope to change. obv the params weren't the issue :P
[13:20] thats it, self.d.sentry['logstash'][0].relation('filebeat', 'filebeat:logstash')
[13:21] now to figure out where this stale artifact came from that i have in our namespace
=== TaLioN- is now known as TaLioN
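For readers following the amulet exchange above, here is a minimal, self-contained sketch of the point marcoceppi makes: UnitSentry.relation() returns the relation data as seen by the unit you call it on, so each side of a relation has to be queried from its own sentry. The deployment scaffolding and endpoint names below are assumptions pieced together from the log, not the actual test from the gist.

    # Sketch only: charm and endpoint names are taken from the log, the rest is assumed.
    import amulet

    d = amulet.Deployment(series='trusty')
    d.add('logstash')
    d.add('filebeat')
    # (In a real test filebeat is a subordinate and would also need a principal
    # related over juju-info; omitted here for brevity.)
    d.relate('filebeat:logstash', 'logstash:beat')
    d.setup(timeout=900)

    logstash_unit = d.sentry['logstash'][0]

    # Data that filebeat published on the relation, as seen from logstash's side
    # (the call that resolved the confusion in the log):
    filebeat_data = logstash_unit.relation('filebeat', 'filebeat:logstash')
    print(filebeat_data.get('private-address'))

    # Asking the filebeat sentry the same question would instead return the data
    # logstash published, i.e. the same relation from filebeat's perspective.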
[13:28] what's the best way to determine whether i'm juju1 or juju2 from a running unit?
[13:28] marcoceppi, ack thx, raised :)
[13:29] tvansteenburgh: from a deployed unit? there really is no way
[13:31] marcoceppi: i was hoping to make benchmark-gui smart enough to work on either w/o being told
[13:32] i guess i could just try both and see which works
[13:32] tvansteenburgh: well, you've got full api access, can't you probe or try login v1 then do login v2 then error?
[13:32] gmta
[14:23] beisner, I'm doing a pull request to add extra-bindings and series anyway
[14:24] what's the juju-min-version key?
[14:25] jamespage: https://docs.google.com/document/d/1ID-r22-UIjl00UY_URXQo_vJNdRPqmSNv7vP8HI_E5U/edit min-juju-version
[14:25] with the notes/etc
[14:36] rick_h_, ta
[14:46] marcoceppi, raised another two issues for new metadata.yaml top-level entries charm-tools needs to support
[14:46] I'll swap you for some package reviews/uploads...
[14:46] :-)
[14:56] marcoceppi: Have you ever seen this apt error before? http://pastebin.ubuntu.com/15463740/
[14:59] Has anyone successfully upgraded an environment from juju 2 beta 1 to beta2? `juju upgrade-juju` just reports no upgrades available.
[15:03] aisrael: you need to use a different stream
[15:04] aisrael: that apt error is a known one
[15:04] jamespage: pushing the packages to a ppa without backportpackage atm
[15:06] jamespage: what is "extra-bindings" ?
[15:12] marcoceppi, network space bind points which don't relate to actual relations...
[15:12] jamespage: interesting, okay
[15:12] jamespage: are they just a dict of dicts?
[15:13] marcoceppi, example - https://github.com/javacruft/charm-neutron-api/commit/665c34bca503edf80c0a5c108b2cf335ec48bcb1
[15:13] jamespage: ack, ta
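A rough sketch of how extra-bindings sit in metadata.yaml next to ordinary relations, per the description above: each entry is just a named network-space bind point with no interface behind it. The charm name and binding names here are invented for illustration and are not taken from the linked neutron-api commit.

    name: example-api
    summary: Example API service
    provides:
      api:
        interface: http
    extra-bindings:
      public:
      internal:
      admin: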
=== cos1 is now known as c0s
[15:56] jamespage: we'll have an initial set of packages today, but charm-tools and charm will be ready tomorrow
[15:56] marcoceppi, ppa location?
[15:57] jamespage: https://launchpad.net/~marcoceppi/+archive/ubuntu/xenial-chopper I just started running dput against that a second ago
[16:05] marcoceppi: could you point me to any docs on streams in 2.0? My google-fu is failing.
[16:06] aisrael https://lists.ubuntu.com/archives/juju/2016-February/006618.html
[16:06] this does however make reference to environments.yaml which doesn't exist in 2.0 :\
[16:06] lazyPower: Yeah, that's where I'm stuck.
[16:07] `juju get-model-config` doesn't have an agent-stream key
[16:07] cherylj ping o/
[16:07] wait, yes it does
[16:07] oh maybe an unping is in order then
[16:08] hey lazyPower, what up?
[16:08] So I have agent-stream: devel. I still can't upgrade my environment to beta2, though
[16:08] Hey, do you know where we stuff the stream info in juju 2.0 to set streams?
[16:08] tl;dr, upgrading from juju 2 beta1 to beta2 isn't working for me
[16:11] aisrael: maybe just destroy environment?
[16:12] marcoceppi: Probably faster at this point :/
[16:12] aisrael: I don't think we support upgrades from betas
[16:12] marcoceppi: ahhh. That's good to know.
[16:13] unping cheryl, thanks for responding!
[16:16] ~charmers it looks like thedac has gotten all the necessary +1 and then some for his ~charmer application
[16:17] congrats thedac
[16:17] arosales: thanks \o/
[16:17] woooooo
[16:17] \o/
[16:17] congrats thedac
[16:17] any ~charmers care to comment on the thread and get him introduced to ~charmer responsibilities ?
[16:17] beisner - we still need to onboard you right?
[16:17] aisrael: there are a few that need to be onboarded still
[16:17] arosales: ^
[16:17] we'll do them all at once
[16:18] marcoceppi: sounds good.
[16:18] its time to assemble!
[16:18] CHARMER TEAM YOOOOOOOOOOO
[16:18] marcoceppi: lazyPower perhaps an official reply on the juju list would be good too
[16:19] charmer-voltron is coming together nicely
[16:22] openstackers: question for you re amulet testing. If i'm onboarding an ISV and say they dont use the openstack-origin config, is the proper way forward here to submit a MP against the charmhelpers.contrib.openstack.amulet.deployments.py class and add themselves? or is this already handled elsewhere?
[16:27] lazyPower, yes plz
[16:27] was that re: amulet or re: onboarding?
[16:27] :D
[16:28] lazyPower, onboarding.
[16:28] * beisner digests the other ?
[16:29] lazyPower, do you have an example/link?
[16:29] i have one better, let me forward over the mail
[16:29] cool thanks
[16:31] tvansteenburgh: you got a few mins? Hate to distract but I've got some charm-tools questions
[16:31] marcoceppi: sure
[16:31] lazyPower, yes, mp @ c-h for exactly that. it's necessary to exclude them from the automagical flux capacitor charm test configuration foo.
[16:31] tvansteenburgh: I'll see you in eco-wx
[16:31] thats what i was thinking, but the confirmation is nice :)
[16:32] if they're using the openstack amulet helper that is
[16:37] thanks beisner
[16:37] lazyPower, yw sir
[16:53] marcoceppi, erm parse == python-parse?
[16:53] jamespage: yes, parse is the source package for python-parse and python3-parse
[16:54] marcoceppi, problemo - we already have python-parse in distro
[16:54] * jamespage looks
[16:54] jamespage: what? where?
[16:54] marcoceppi, I take it you need the py3 support
[16:54] ?
[16:54] I freakin searched everywhere for it
[16:54] marcoceppi, try "rmadison python-parse"
[16:54] -- Cyril Bouthors Mon, 11 Nov 2013 15:37:03 +0100
[16:54] been there a while...
[16:54] bleh!
[16:55] 1.6.3 is so old though
[16:55] 1.6.6 was almost two years ago
[16:56] jamespage: we don't actually need py3, charm-tools is still only py2
[16:57] jamespage: on a fresh xenial machine though, it couldn't find python-parse
[16:58] marcoceppi, I can rev it - leave that one with me
[16:58] jamespage: thanks, we need python(2)-parse >= 1.6.6
[16:59] jamespage: I'm uploading the next one, there's about 5 that aren't in archive today (except for charm and charm-tools) so 7 total
[17:02] jamespage: I have a few packages with wily as their target, should I just upload those as is to the ppa or bump them to xenial?
[17:05] marcoceppi, ok python-parse updated in xenial
[17:05] jamespage: \o/ thank you
[17:05] marcoceppi, bump the target
[17:05] jamespage: just dch -i or is there a better way?
[17:05] marcoceppi, that's fine for now - i'll tidy as I upload...
[17:08] jamespage: going to pm right quick
[17:12] kwmonroe: http://2016.texaslinuxfest.org/call-for-papers
[17:14] there seems to be a discrepancy between the command syntax in 2.0 and their description in here. At least
[17:14] juju add-credential
[17:14] is listed as
[17:14] juju add-credentials -f creds.yaml
[17:14] The plural form isn't getting recognized by the software
[17:18] yeah c0s, that's gotta be a typo in the 2.0-beta2 release notes. probably copy pasta from autoload-credentials (plural) to add-credential (singular). 'add-credential -f creds.yaml' is what works for me.
[17:18] jcastro: ack on the TLF cfp
[17:18] yup, it does for sure. The dev's docs you sent me got it right.
[17:19] obey the release notes, devel docs, and #juju (pick 2) ;)
[17:20] indeed
[17:21] hey jamespage.... Long time no see ;)
[17:21] This is Cos from Bigtop
[17:21] hey c0s - indeed a long time!
[17:21] good to see you around and sticking to the same guns ;)
[17:21] you bet...
[17:21] rick_h urulama tvansteenburgh when a charm is pushed to the store, will the push command validate series at push time? what happens if a malformed series is added?
[17:22] marcoceppi: as in you push trusty but it's xenial only?
[17:22] urulama: as in the metadata.yaml says "- whatevetr"
[17:22] as in you typoed "tursty"
[17:22] it should, yes
[17:23] urulama: but, will it?
[17:23] hehe
[17:23] * urulama goes and tries before saying yes, just in case
[17:23] tvansteenburgh: I think here we just need to make sure that the series is a valid list, otherwise we'll always be behind with supported series
[17:23] marcoceppi: okay
[17:24] marcoceppi: it will fail, yes, with a stupid error though ... i'll add a task for a fix
[17:24] urulama: <3 thanks
[17:24] tvansteenburgh: push will gate that, we just need to check formatting
[17:25] Deploying a local charm in beta2 is hanging - any ideas? http://pastebin.ubuntu.com/15465568/
[17:25] marcoceppi: ack, trying to find out if list is required, or if it can be a string instead if you only support one series
[17:25] marcoceppi, tvansteenburgh: verified for both normal and multi series charms ...
[17:25] urulama: <3 thanks!
[17:25] * urulama now goes: how could you doubt charm store!!! :)
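For context on the series discussion above, a sketch of the metadata.yaml fields in question for a multi-series charm, with min-juju-version as discussed earlier in the day. The charm name and values are illustrative only, and the exact validation rules (list vs. single string) were still being settled at the time.

    name: example-charm
    summary: Example multi-series charm
    series:
      - trusty
      - xenial
    min-juju-version: 2.0.0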
[17:36] Looks like the deploy is triggering this: ERROR juju.apiserver charms.go:114 returning error from POST /model/df5bae38-c251-4ee9-8334-ed014ae6fb80/charms?%3Amodeluuid=df5bae38-c251-4ee9-8334-ed014ae6fb80&series=trusty: [{github.com/juju/juju/apiserver/charms.go:68: } {error processing file upload: unexpected EOF}]
[17:38] * aisrael goes off to file a bug
[17:42] aisrael - was your debug-log spewing errors about hashsum mismatch?
[17:43] thats the only bug i've seen thats kept my local charms stuck in pending
[17:43] lazyPower: Nope. Fat charms that are too fat won't deploy.
[17:43] these aren't fat :/
[17:43] 22M works; 250M doesn't
[17:43] its some weird transient thing i only ever seem to hit with local charms
[17:44] and usually an upgrade-charm triggers it back into being happy
[17:44] the only hashsum issues I had was updating packages in the vagrant image
[17:45] kwmonroe: mbruzek: do either of you know of a big fat charm offhand (one of the ibm ones or big data ones, maybe)?
[17:46] there was a really fat one amir and i worked on that was ~ 190 mb fatpacked
[17:46] * lazyPower digs
[17:48] aisrael: easiest thing to do is "dd if=/dev/zero of=/awheck bs=1M count=250" to make a local charm fat.
[17:48] kwmonroe: Hey, good idea!
[17:48] even better
[17:49] kwmonroe - did you try to replace java with 250mb of /dev/zero data?
[17:49] sure did lazyPower, and no-one cared...
[17:49] ZING!
[17:49] LazyPower, I am sure you won't notice a thing after that - will work just the same ;)
[17:51] aisrael: https://jujucharms.com/websphere-liberty/trusty/
[17:51] mbruzek: Excellent, sir!
[17:51] aisrael: That one is not too bad, but the binary for the sdk and liberty is in there.
[17:51] So not too fat, but the binaries are in the charm.
[18:48] lazyPower: I'm fixing this kibana init script but I don't know what I can do test-wise, I can't define a test that only runs on LXD, can I?
[18:48] hmm, good point
[18:48] you really cant as the substrate is defined by the runner agent, either CI, or myself as the driver.
[18:49] and i can see someone clicking "deploy on AWS" and that test becomes noisy then
[18:49] magicaltrout - point taken, one liner update it is!
[18:50] alrighty, I'll just check it fixes it locally then shunt it up
[18:51] I was working on bootstrapping AWS with beta2 and I got
[18:51] http://paste.ubuntu.com/15466576/
[18:51] so I had to create a cred.yaml file and do
[18:52] juju add-credential aws -f cred.ayml
[18:52] juju add-credential aws -f cred.yaml
[18:52] is this a known issue that the interactive bits aren't there yet
[18:52] jcastro: fiche is updated, but it's old school ingestion, may take a hot min to show up
[18:52] arosales: yes, the team is still working on it
[18:52] arosales: some branches went by last week
[18:52] rick_h_: thanks
[18:53] c0s: ^ fyi -- using a cred yaml file for me I was able to deploy the realtime analytics bundle
[18:53] well, looks like something is different about the accounts.
[18:54] I have setup awscli and am about to try to start a micro instance to see what happens
[18:57] c0s: http://paste.ubuntu.com/15466644/ is what worked for me
[18:59] ah, looks like it is past the error message. Perhaps Marco did something to my account ;)
[19:00] it is spinning up an instance now
[19:00] c0s: great
[19:01] yup, doing the apt-get update and all shenanigans
[19:01] good, all set now I guess
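The pastebin links above have not been preserved; for reference, a minimal sketch of the kind of file that `juju add-credential aws -f cred.yaml` expects. The credential name and key values are placeholders, and the exact schema in the 2.0 beta may differ slightly from later 2.x releases.

    credentials:
      aws:
        my-aws-creds:
          auth-type: access-key
          access-key: AKIAIOSFODNN7EXAMPLE
          secret-key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY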
[19:03] c0s: so Amazon says we have up to 100 instances per region, but I suspect they're lying. us-east-1 was at 20, it's down to 10 now. I think trying us-west-1 instead might work
[19:04] yeah, could be the case too.... I will switch to the local one: looks like I am the only one here anyway ;)
[19:18] pardon my ignorance, when I have bootstrap'd a controller, I see that juju creates the ssh keys for me but looks like I can not ssh into the bootstrap instance using those.
[19:18] perhaps I should use a different user name, not ec2-user@ as usual?
[19:18] could not find any documentation on that....
[19:20] c0s: as the ubuntu user?
[19:20] c0s: ubuntu@hostname
[19:21] c0s: and theres a
[19:21] that's better! :)
[19:21] thanks
[19:21] `juju ssh` command as well
[19:21] c0s: ubuntu@ is the ubuntu cloud image standard account across clouds
[19:22] tvansteenburgh: as an fyi: https://github.com/juju/charm-tools/issues/141#issuecomment-199437756
[19:22] makes sense, thank you!
[19:23] rick_h: one more question I guess (in the next 30 minutes at least): does the bootstrapping process bring up an interface similar to that of demo.jujucharms.com ?
[19:24] or it is all CLI at this point?
[19:24] c0s: it will but for now you have to juju deploy juju-gui
[19:24] ah, self-contained... I like it. Thank you!
[19:24] c0s: it's a new feature landing in 2.0 but not there yet
[19:25] c0s: make sure you juju expose.
[19:25] after deploy
[19:26] ok, doing this right now
[19:33] lazyPower: hey, everything LGTM on the kubernetes PR
[19:33] had a question about etcd, but it's not major
[19:35] sure, whats up?
[19:35] lazyPower: it's on the pr
[19:35] also, is there any documentation about charm build or installing charm-tools ?
[19:36] ah, no, that was the other card that i haven't done yet
[19:37] lazyPower: ack, then this LGTM pending your thoughts about ~containers
[19:54] if the lxd bootstrap node goes missing you can't kill the environment \o/
[19:55] define "goes missing"
[19:55] like, destroying it out from under juju?
[19:56] well I don't know whats actually wrong with my server, probably some borked routing but
[19:56] https://gist.github.com/buggtb/3ebe3e02bc7b2d3479a9
[19:57] surely kill should flatten it regardless
[19:57] that said, maybe my build is just out of date, haven't pulled a new one in a week
[19:59] magicaltrout - i've had that happen
[19:59] the only resolution was to clean up the cache.yaml in your $JUJU_HOME
[19:59] banging
[19:59] kill should kill it regardless of connectivity i'd have thought?
[19:59] i feel like kill-controller should do that for me when i'm banging a --force flag on kill-controller, which is already a RBFH
[20:00] yep, my sentiments exactly
[20:00] lazyPower: magicaltrout file a bug?
[20:00] already on it
[20:00] especially if you can reproduce it
[20:00] kill-controller does try to play nice, but should bypass the api server if it's not there/times out
[20:01] i'm eating steak, I have priorities! :P
[20:01] magicaltrout: fair enough :)
[20:03] https://bugs.launchpad.net/juju-core/+bug/1560191
[20:03] Bug #1560191: kill-controller is hinky without a model-controller behind it
[20:03] no taxation of the feels without representation of the bugs rick_h_ :)
[20:09] lazyPower: lol, "hinky" ?
[20:10] <3 hinky
[20:24] yeah, i didnt know what else to call it, as its not exactly expected, but it works most of the time
[20:31] lazyPower: launchpad confuses the life out of me
[20:32] but the activity log says i've linked my branch
[20:32] with a 1 line patch
[20:32] link me sir
[20:33] https://bugs.launchpad.net/charms/+source/kibana/+bug/1539806
[20:33] Bug #1539806: [ARM64][LXD Provider][ 2.0-alpha1-0ubuntu1~16.04.1~juju1] kibana 'hook failed: "install"'
[20:33] http://bazaar.launchpad.net/~spicule/charms/trusty/kibana/trunk/files
[20:51] bug *and* a fix?
[20:52] man, thats awesome!
[21:01] well i can't yet fix go stuff, but i can hack around upstart scripts until they work ;0
[21:08] :D
[22:17] do I understand correctly, that Juju doesn't support in-flight changes of the relations?
[22:18] Say, if I want to move a service from its own dedicated node to collocate with another service. It is somewhat equivalent to scaling a cluster up and down, but not exactly, per se
[22:28] f
[22:28] fail
[22:31] you do understand correctly c0s, AFAIK you need to shut it down and redeploy if you want to move your unit from one machine or location to another
[22:32] I see. But I should be able to add/remove slaves as needed, right?
[22:32] (presumably, it will be fun on a busy HDFS cluster though)
[22:32] yeah, juju add-unit will add you new slaves
[22:34] cool, that's what I thought. And I guess at this point I can co-locate a new unit with an existing one
[22:35] thanks for the clarification magicaltrout!
[22:36] no worries c0s
[22:46] you actually charming bigtop then c0s or just playing?
[22:46] at this point I am just playing but who knows ... ;)
[22:47] cool
[22:47] One thing I don't get completely, is the fact that Juju has to build/publish its own set of the binary tarballs, then add all this metadata on top instead of resorting to the existing Apache-proper data stack with its native packaging.
[22:47] and of course I refer here to Bigtop ;)
[22:51] yeah but this is canonical you're talking about, they just reinvent stuff because they can ;)
[22:51] Take Wayland/Mir ;)
[22:54] lemme check on that ;)
[22:55] ah, that... yesh
[22:55] Although I like Unity, really
[22:56] regardless of distro the first thing I generally do is remove the default WM ;)
[23:05] c0s: happy to talk decisions sometime if yulou're.interested
[23:05] bah sorry phone typing
[23:05] yes, very
[23:05] wrapping up for today - had good progress and a lot of new things were learnt ;)
[23:05] c0s: arosales worth setting up a call there?^
[23:06] or if you feel like doing it today - I would be ready in 15: need to walk the dog
[23:06] arosales: to help bootstrap c0s on why things work the way they do?
[23:06] c0s: dinner time here, but think arosales can set up something this week
[23:10] c0s: you doing apachecon? i'm trying to round up a loose band of Apache folk interested in juju to sit down and have a beer/chat at some point
[23:15] I don't know yet... planning on it, but it will depend on how business is going for me ;)
[23:16] fair enough!
[23:16] I am not a part of any big corp now, so paying for all the events myself
[23:17] yeah the first apachecon i went to i self funded, then last year I was supposed to apply to TAC and accidentally signed up to LF travel sponsorship
[23:17] would be great to have a chat like that of course. I had some face2face time with some of the Juju folks during Scale14x down in LA, and it was good
[23:17] i was lucky i still got accepted ;)
[23:18] well, for the last three years I was a VP of open source development for a company. With my own budget and all ;)
[23:18] But I think I got tired of corp-stuff, really
[23:18] I assumed you were roman's actual right hand man
[23:18] not just a twitter right hand man ;)
[23:19] I've never been to Pivotal man ;)
[23:19] Roman is their Director of OS; I was at a completely different company
[23:19] yeah, he has his fingers in many open source pies
[23:20] we never lent a hand to each other ;)
[23:20] rick_h_ understood. Let's do a call when something works for all timewise