[06:17] <metsuke> Is there a way to get juju to create containers directly on openstack compute nodes rather than creating nova instances on top?
[06:19] <blahdeblah> metsuke: you can either deploy directly to MAAS-deployed nodes, or into LXC containers on those nodes
[06:23] <metsuke> blahdeblah: the lxc method is for evaluation though so it is not recommended?  We are trying to cut out the additional layer that Nova creates.
[06:24] <bradm> metsuke: do you mean using lxc instead of kvm for nova instances?
[06:25] <blahdeblah> I'm not sure whether the LXC method is officially supported, but it definitely works, and there is an LXD version coming in 16.04.
[06:25]  * blahdeblah defers to bradm on this :-)
[06:25] <metsuke> bradm: It's more like not using nova instances at all
[06:25] <metsuke> maybe this isn't really a problem
[06:26] <bradm> metsuke: there's a nova-compute-lxd coming
[06:26] <metsuke> but it just seemed overbearing for juju to create 50 separate vms for separate containers
[06:26] <hloeung> change OpenStack to use the LXC driver (nova.virt.lxc.LXCDriver)? I haven't tried it myself, but https://wiki.ubuntu.com/OpenStack/LXC might help
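A minimal sketch of what hloeung's suggestion would look like in nova.conf on each compute node. Option names here follow the libvirt-based path (`virt_type = lxc`) rather than the older `nova.virt.lxc.LXCDriver` string quoted above; verify against your OpenStack release before using:

```ini
# /etc/nova/nova.conf (compute nodes) - run instances as LXC containers
# instead of KVM VMs. Assumed option names; check your release's docs.
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = lxc
```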
[06:27] <blahdeblah> metsuke: Why do you want to use openstack compute nodes without nova?  The point of nova is to make it easy to create your compute instances, whether they're containers or VMs.
[06:27] <metsuke> blahdeblah: well, we wanted everything from Openstack except Nova, but it is installed by fuel and juju leverages it automatically
[06:28] <blahdeblah> I think you will find the upcoming LXD version of nova that bradm mentioned pretty good.
[06:29] <metsuke> ok.  Upcoming like mitaka? =)
[06:41] <metsuke> in terms of scaling charms, my company has always used haproxy to scale horizontally for web apps.  However, just looking at the Wordpress charm, that methodology is very different now.  Should all new, non-inherently scalable, http charms use the same methodology as Wordpress (apache load balancer?) or is there a good way to use haproxy?
[06:42] <blahdeblah> metsuke: upcoming in xenial (16.04)
[06:42] <blahdeblah> metsuke: If we're talking about the old public Wordpress charm, it's not a great example.
[06:43] <blahdeblah> The haproxy charm has relations that you can hook up to anything supporting the http interface.
[06:44] <metsuke> blahdeblah: would you recommend that for http scaling as opposed to doing it in-charm?
[06:44] <blahdeblah> That's how we usually do it
[06:45] <metsuke> cool, it would definitely be easier
[06:45] <blahdeblah> It may work well for you that way, or it may be better to do it in-charm, depending on your application.
[06:45] <blahdeblah> But the general idea is that charms are composable pieces that can be hooked together into application suites
[07:19] <metsuke> Can ironic be in place of MAAS for cutting nova vm's out of the equation?
[10:34] <jamespage> gnuoy, if you are around - https://code.launchpad.net/~james-page/charm-helpers/fix-ssl-certs/+merge/287639 could do with a review
[10:35] <gnuoy> jamespage, +1
[10:36] <jamespage> gnuoy, thankyou
[10:36] <gnuoy> np
[11:12] <magicaltrout> this looks rubbish, but is actually pretty cool: http://ibin.co/2Yr5fKlDWBeD the first Juju controlled PDI ETL cluster
[11:15] <jamespage> gnuoy, wanna do a +2 and landing? https://review.openstack.org/#/c/287087/
[11:15] <jamespage> ready to go imho
[11:17] <gnuoy> jamespage, done
[11:25] <jamespage> gnuoy, you have to hit the "workflow +1 as well"
[11:25] <jamespage> to move things along...
[11:26] <gnuoy> What am I +1'ing when I do that
[11:26] <gnuoy> ?
[11:27] <jamespage> gnuoy, http://docs.openstack.org/infra/manual/developers.html#project-gating
[11:27] <jamespage> gnuoy, the +2 is a review; the +1 is actually 'approve'
[11:27] <jamespage> workflow +1 that is
[11:40] <jamespage> gnuoy, and job done..
[11:40] <jamespage> thanks!
[11:46] <jamespage> dosaboy, https://review.openstack.org/#/q/project:%22%255Eopenstack/charm.*%22+status:open is a useful view btw
[11:46] <dosaboy> yep i got that thanks
[12:06] <jamespage> gnuoy, beisner: pushing up a charmhelpers resync for review/test
[12:07] <jamespage> https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
[12:11] <admcleod1> magicaltrout: hey! ...i still havent thought of a comeback.
[12:12] <admcleod1> lazyPower: some of the times the leadership broke, a force destroy of a machine was involved, other times a service destroy / resolved and no force
[12:14] <magicaltrout> hehe
[12:16] <admcleod1> magicaltrout: i was introduced to 2 expats here as 'british but doesnt want to be british'. they were not impressed.
[12:17] <magicaltrout> are they the expats I see on facebook who don't want refugees coming into the UK whilst they live on the costa del sol? ;)
[12:18] <admcleod1> magicaltrout: no.. one was this english artist / yoga teacher who told one of our spanish friends that 'english people are much more interesting' though.. so less beer-drinking-lobster tan but similar amount of strange++
[12:19] <magicaltrout> ah yes
[12:19] <magicaltrout> those interesting english-folk
[12:19] <magicaltrout> of course
[12:19] <admcleod1> magicaltrout: i met another english girl here who told me her parents had a place somewhere in the south of mallorca. i asked her: "are there many foreigners there?" and she replied "no, its mostly english"
[12:19] <magicaltrout> oooh
[12:19] <magicaltrout> thats bad
[12:20] <magicaltrout> did she look that stupid or just act it?
[12:20] <admcleod1> ...yes :}
[12:23] <admcleod> magicaltrout: nice blog post btw
[12:24] <magicaltrout> random ramblings on a monday afternoon
[12:25] <magicaltrout> for the people I work with in the "real world" I'm hopefully doing a PDI ETL cluster demo this afternoon if Amazon speeds the f**k up
[12:25] <admcleod> hah
[12:25] <magicaltrout> now my charm has leader election, config detection stuff in, it should be ready for people to test. I'll get it into the listings and worry about unit tests later ;)
[12:27] <admcleod> magicaltrout: what ver juju are you using?
[12:27] <magicaltrout> once juju<->lxd stops messing up my ip addressing, I reckon most of my development workload will switch to some form of charmed up development process
[12:28] <magicaltrout> Trunk! :)
[12:28] <jamespage> beisner, getting a lot of "ImportError: No module named pika"
[12:28] <jamespage> for amulet test executions....
[12:37] <marcoceppi> magicaltrout: Having had to do Pentaho ETL stuff many years ago, I'm curious what you've whipped up
[12:37] <magicaltrout> having never used PDI clustering before.... so am i
[12:38] <marcoceppi> as a point of reference, the project I was working on basically piloted pentaho, but ended up making us write our own ETL tool
[12:39] <magicaltrout> well my background for the last decade has been mostly Open Source BI around Pentaho and stuff
[12:39] <magicaltrout> I've done some Talend projects, but I absolutely hate the platform
[12:39] <magicaltrout> so we mostly end up using PDI
[12:42] <magicaltrout> anyway so my plan is thus..... get this deployed, which will allow users to either upload jobs & transformations and run them via actions or cron-tab them for scheduled execution, or use PDI Carte as a webservice so people can create stuff in Spoon and either run it as a slave server or, if they want to scale up, juju add-unit pdi and run as a PDI cluster
[12:42] <magicaltrout> thats phase 1
[12:42] <magicaltrout> and will give people scale out capabilities and remote execution capabilities with little effort on my/our part
[12:43] <magicaltrout> this pretty much works, i'm just putting some finishing touches to it, like informing the user of the leaders public ip etc
[12:44] <magicaltrout> phase 2 will work as a subordinate to the hadoop and yarn charms, because PDI these days has Big Data scaleout stuff that the Hadoop folk use, I've never touched it myself but now I have an excuse to
[12:44] <magicaltrout> and you can write MapReduce transformations within PDI and run them in a Hadoop cluster
[12:45] <magicaltrout> so my plan is to create something that allows you to hook it up to Hadoop, at which point it will embed itself like their docs explain, and then allow users to run PDI jobs in the Hadoop/Yarn environment
[12:49] <magicaltrout> http://ibin.co/2Yr5fKlDWBeD as you'll sorta see here marcoceppi on the left, pdi doing some stuff on the right, carte these days doesn't suck too much and you get some basic webservices with a picture of the transformation you select, the run log, etc
[12:49] <magicaltrout> so there might be some scope to write a nice carte UI as well to wrap it all up, but thats somewhere down the road
[12:50] <marcoceppi> sounds awesome
[12:50] <jamespage> marcoceppi, can you take a peek at https://github.com/marcoceppi/leankit.hostmar.co/pull/3 ?
[12:51] <magicaltrout> well if no one else uses it I certainly will. But in reality it should fit in quite well with the stuff kwmonroe and cory_fu do, so I'm sure once the hadoop stuff is up and running there will be some fun stuff to demo around Hadoop ETL work
[12:51] <marcoceppi> jamespage: i just opened it up, looking now!
[12:51] <marcoceppi> magicaltrout: It's a great solution space. Lots of users and little operational knowledge. So this fits well with the write once use many idea of Juju
[12:51] <magicaltrout> yup
[12:53] <marcoceppi> jamespage: deployed, cheers!
[12:55] <magicaltrout> aww marcoceppi
[12:55] <magicaltrout> get amazon to up your quote allowance :P
[12:56] <magicaltrout> -e +a
[12:56] <jamespage> marcoceppi, working nicely - thanks!
[12:56] <marcoceppi> magicaltrout: eu-west-1 is full of a lot of other people
[12:57] <marcoceppi> magicaltrout: you might want to try eu-central-1, no one is in it atm. We may need to spin up another AWS account to support all the CDP members
[12:57] <magicaltrout> k
[12:57] <marcoceppi> magicaltrout: actually, I'm the problem in eu-west-1
[12:57] <marcoceppi> magicaltrout: let me tear down an old demo
[12:58] <magicaltrout> hehe, get back the other side of the pond!
[12:58] <marcoceppi> magicaltrout: when at MWC, it was much more convenient ;)
[12:58] <admcleod> magicaltrout: + kostas + I ;}
[12:59] <magicaltrout> show off
[12:59] <marcoceppi> magicaltrout: I just cleared out 6 instances
[13:00] <marcoceppi> will work with amazon to get better resources
[13:00] <admcleod> magicaltrout: even chocolate blue cheese icecream is all about the showmanship
[13:01] <magicaltrout> don't you worry marcoceppi once LXD stuff is done, I'm all over LXD based development :P
[13:01] <magicaltrout> admcleod: that I can understand
[13:08] <marcoceppi> magicaltrout: well you're not the first to hit limits, I've started a dialog with aws
[13:11] <beisner> jamespage, yep that's the amulet smoke not doing the 00-setup bit.   fixed and will recheck those.
[13:12] <beisner> jamespage, ps.  the amulet smoke is live! ;-)
[13:14] <beisner> jamespage, rechecking just this one to confirm, then will rerun the others  https://review.openstack.org/#/c/286451/2
[13:18] <lazyPower> ..chocolate..bluecheese...icecream?
[13:18] <lazyPower> but...why
[13:18] <magicaltrout> yeah
[13:18] <magicaltrout> weirdly, everyone else on the table also decided to try it
[13:18] <magicaltrout> i was wise
[13:18] <magicaltrout> and refused :P
[13:19] <jamespage> beisner, i saw and did some low risk landings on that basis
[13:21] <admcleod> actually.. i had some 'white chocolate blue cheese icecream' again in valencia and it was quite nice...
[13:21] <magicaltrout> you disgust me
[13:21] <admcleod> thats why im here
[13:21] <magicaltrout> that said
[13:21] <beisner> admcleod, hmmm i'll have to take your word on that
[13:21] <magicaltrout> it would certainly be an improvement on white chocolate and american cheese....
[13:22] <lazyPower> i dont know where you're getting this crazy amount of chocolate and cheese, but they really aren't complementary flavors in my mind
[13:23] <magicaltrout> correct
[13:23] <admcleod> tell that to philadelphia
[13:23] <admcleod> or like, cheese cakes
[13:25] <magicaltrout> thats not a winning argument
[13:25] <magicaltrout> for starters they aren't blue cheese cakes
[13:26] <magicaltrout> thats not to say you can't get such a thing, i'm sure you can
[13:26] <magicaltrout> doesn't make it right though
[13:26] <jamespage> beisner, guess we can turn off the +1 code review from uosci now right?
[13:27]  * beisner looks
[13:27] <beisner> oh sweet!  yep that change landed.
[13:27] <jamespage> beisner, it did - I asked this morning and got perms on the charms-ci group and twiddled things around
[13:27] <jamespage> verification is working fine for uosci
[13:28] <beisner> ok cool i'll make it so
[13:28] <admcleod> magicaltrout: well.. the second time was good.
[13:29] <beisner> jamespage, fyi, anything that's currently running will still code-review vote, but new jobs won't
[13:34] <jamespage> beisner, ok
[13:34] <jamespage> beisner, I also have a juju-deployer branch that deals with using anything other than the master branch with git repo's
[13:34] <beisner> jamespage, ah nice.  -b ftw?
[13:35] <jamespage> beisner, yah
[13:35] <jamespage> beisner, https://github.com/openstack/charm-percona-cluster;stable format
[13:37] <jamespage> beisner, also noticed that recheck-charm also gets picked up by the standard check processes as well
[13:38] <beisner> jamespage, yeah it's all regex based.  so, they must be   ^recheck.*
[13:39] <beisner> jamespage, i noticed that too in the sandbox.  but not everyone retriggered on recheck-charm.    i'm going to add charm-recheck and see if that limits the scope.
[13:45] <beisner> jamespage, ok, charm-recheck is hot, and i just did that on:  https://review.openstack.org/#/c/286669/1
[13:46] <beisner> if that confirms a-ok, i'll update the comment template.  but recheck-charm will continue to work as well.
[13:52] <jamespage> beisner, I guess that we don't have vast capacity for concurrency atm right?
[13:52] <jamespage> check is taking >1.5 hrs atm
[13:54] <jamespage> beisner, good - I think not consuming vast amount of openstack resources when we don't need to is optimal...
[13:56] <beisner> jamespage, each check is about 45 min.  i just bumped the concurrency up from 4 to 6.
[13:58] <beisner> jamespage, depending on which charm some are as low as 30 min for lint, unit, charm-single, and amulet-smoke.   that's from the moment they start running of course.
[14:10] <jamespage> beisner, hmm  - test_charm_single: FAILURE
[14:11] <beisner> jamespage, 00:06:10.118 W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/universe/i18n/Translation-en  Hash Sum mismatch
[14:16] <beisner> jamespage, looks like a launchpad ppa blip got a handful of those jobs at about the same time.
[14:42] <magicaltrout> okay leadership reactive knowitalls
[14:42] <jamespage> beisner, \o/
[14:42] <magicaltrout> i'm clearly not doing something right
[14:42] <jamespage> oh crappers
[14:43] <magicaltrout> marcoceppi: I want to set a bunch of variables when the leader is elected and also reset those variables if a new leader is elected
[14:43] <magicaltrout> but
[14:43] <magicaltrout> I don't want to set those variables if the leader isn't changed
[14:44] <magicaltrout> https://gist.github.com/buggtb/328f36919bd637811a61 that was my latest attempt
[14:44] <magicaltrout> but it seems leadership.changed is only called when you update one of the variables
[14:45] <lazyPower> magicaltrout leader-elected
[14:46] <lazyPower> and all followers should just consume the data via leader-get when leader-settings-changed
[14:46] <lazyPower> leader-elected will be run on *any* leader, the first cycle that they get elected. so control cluster state there. I'm doing something very similar in layer-etcd
[14:46] <magicaltrout> hmm
[14:47] <lazyPower> exchange data over peer-relation-joined on first run, then all subsequent cluster status is formed based on data being received from leader-get, as registration is 2 step w/ a http post (Self registration) and then reading cluster data broadcast from the leader.
[14:47] <magicaltrout> according to the leadership layer it means I don't have to write leader-elected hooks :P
[14:47] <lazyPower> @when('leader.elected')  i guess? i'm not sure, i haven't used stubs leadership layer yet
[14:47] <marcoceppi> magicaltrout: right leadership.changed is for the settings, not the leader itself
[14:48] <marcoceppi> magicaltrout lazyPower there is no leader.elected https://git.launchpad.net/layer-leadership/tree/README.md
[14:48] <marcoceppi> stub:  ^^ opinions?
[14:49] <magicaltrout> nope
[14:49] <marcoceppi> magicaltrout: actually
[14:49] <marcoceppi> magicaltrout: wouldn't this just work? https://gist.github.com/marcoceppi/d91d6b2cc5072f8204d2
[14:50] <lazyPower> marcoceppi - seems like an oversight,  the leader being elected is totally an event we care about
[14:50] <marcoceppi> magicaltrout: or do you only want to have the settings sent once?
[14:50] <marcoceppi> lazyPower: it's not, you care about the state
[14:50] <lazyPower> schenanigans
[14:50] <marcoceppi> either you are the leader or you're not
[14:50] <lazyPower> what if hte leader changed, and i have stale cluster data?
[14:50] <lazyPower> that leader-elected state gives me an opportunity to cleanup and reconcile before i declare the current cluster state again, from my leader perspective
[14:51] <marcoceppi> lazyPower magicaltrout how aobut this? https://gist.github.com/marcoceppi/d91d6b2cc5072f8204d2
[14:51] <magicaltrout> marcoceppi: thats sort of what I had before
[14:51] <marcoceppi> magicaltrout: right, so I added one more state
[14:51] <magicaltrout> but i seem to get dumped into an endless restart cycle because the leader_set appears to trigger a change even if it remains the same
[14:51] <marcoceppi> if the leader changes, that unit becomes the leader and sees it hasn't been configured yet for leadership
[14:52] <marcoceppi> err
[14:52] <marcoceppi> crap, I rushed that
[14:52] <lazyPower> mmhmm
[14:52] <magicaltrout> i was playing around with something along your lines marcoceppi with my init=True stuff
[14:52] <marcoceppi> magicaltrout: https://gist.github.com/marcoceppi/d91d6b2cc5072f8204d2
[14:52] <magicaltrout> seems to me it would be good to have a fire and forget call that runs on leader elected
[14:52] <magicaltrout> maybe I'm just lazy ;)
[14:53] <marcoceppi> magicaltrout: just set and manage the state instead
[14:53] <marcoceppi> that way it'll only be run once if you're the leader and you haven't run through that method yet
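The run-once guard marcoceppi describes can be sketched standalone. In a real charm the `@when`/`@when_not` decorators and `set_state` come from charms.reactive; here a plain set stands in for the reactive state database, and the state name `pdi.leader.configured` is made up for illustration:

```python
# Stand-in for the charms.reactive state database.
states = set()

def setup_leadership():
    """Moral equivalent of:
         @when('leadership.is_leader')
         @when_not('pdi.leader.configured')   # hypothetical state name
    Runs the one-time leader setup, then sets a state so it never
    re-runs just because a leadership hook fired again."""
    if 'leadership.is_leader' in states and \
            'pdi.leader.configured' not in states:
        # ... do the one-time leader_set / config work here ...
        states.add('pdi.leader.configured')
        return 'configured'
    return 'skipped'
```

To re-broadcast settings later, you just remove the state again, as lazyPower notes below.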
[14:53] <magicaltrout> hmm didn't think about that
[14:53] <magicaltrout> still need to use states better
[14:53] <magicaltrout> thanks marcoceppi I'll give it a go
[14:53] <lazyPower> magicaltrout - or, do the dreaded antipattern of using @hook('leader-settings-changed') and defend the users right to choice :P
[14:53] <marcoceppi> magicaltrout: replace "self" with the layer name and you're on your way
[14:53] <lazyPower> rather @hook('leader-elected')
[14:53] <marcoceppi> lazyPower: no! hooks are for fools
[14:54] <magicaltrout> lol
[14:54] <marcoceppi> lazyPower: and you can't combine with states
[14:54] <lazyPower> only if you care about stacked decorators
[14:54] <lazyPower> if i have a single method, decorated by @hook('leader-elected') that runs everytime a leader is elected, i've solved this with a one liner
[14:54]  * marcoceppi beats lazyPower with a pillow full of quarters for using @hook
[14:54]  * lazyPower points at the field where his figs are grown
[14:54] <magicaltrout> i heard if I mix hooks and states cory_fu will force feed me admcleod's ice cream
[14:55] <lazyPower> behold my field of figs, notice they are barren
[14:55] <marcoceppi> lazyPower: however, if you want to update the leadership settings, you can just remove the state
[14:55] <marcoceppi> lazyPower: don't have to wait for a hook now
[15:02] <beisner> jamespage, will be retriggering pxc for amulet smoke after resolving an env var that the test expects:  Please set the vip in local.yaml or env var AMULET_OS_VIP to run this test suite
[15:04]  * magicaltrout wonders if he can get around lxd deploying to localhost by blocking up localhost....
[15:04] <magicaltrout> booo
[15:04] <magicaltrout> bloody thing
[15:04] <cory_fu> Hey, sorry I'm late to the leadership party.  Don't use @hook
[15:05] <magicaltrout> told you....
[15:05] <cory_fu> magicaltrout: I think you're over thinking it.  You don't need to care specifically about when leadership changes, you just need to react to @when('leadership.is_leader') and @when_not('leadership.is_leader')
[15:05] <magicaltrout> hmm
[15:05] <magicaltrout> well i need the leader to broadcast new variables and I need the slaves to pick them up
[15:06] <magicaltrout> I also need to trigger a carte restart
[15:06] <cory_fu> In the @when('leadership.is_leader') handler, write the leadership settings.  If they're different, all the peers will get notified that they changed
[15:06] <cory_fu> And they can then react to leadership.changed
[15:06] <cory_fu> (all the peers, including the leader)
[15:07] <magicaltrout> okay i'll just put something together and you can tell me if you think it works
[15:07] <magicaltrout> then we'll have a bet :)
[15:09] <magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L113 bit of that cory_fu ?
[15:11] <cory_fu> magicaltrout: I was thinking like this: https://gist.github.com/johnsca/8cf0f25cce06c13bdecb
[15:11] <cory_fu> Also, no need to leader_set something that's already in config
[15:12] <magicaltrout> good point
[15:12] <jamespage> beisner, ack
[15:13] <cory_fu> magicaltrout: Note the first line.  It's important to use that version of leader_set to ensure the leader gets notified of the settings changes internally
[15:13] <cory_fu> (It might get notified by another hook call, but I'm not certain, and it would take quite a bit longer even if it works)
[15:13] <magicaltrout> yeah i'm using that one
[15:14] <beisner> jamespage, looks like charm-recheck is the ticket.
[15:15] <jamespage> beisner, awesome
[15:18] <kjackal> Hi everyone, what is your setup to deploy charms in xenial? Do you use the local provider while in trusty?
[15:19] <stub> marcoceppi, magicaltrout: @when('leadership.is_leader') ?
[15:19] <marcoceppi> stub: yeah, we moved on a bit from that
[15:21] <stub> cory_fu: I was tempted to monkeypatch charmhelpers.core.hookenv, but thankfully thought better of that.
[15:21] <cory_fu> I am also interested in kjackal's question about deploying xenial units using local provider on trusty
[15:21] <cory_fu> stub: :)
[15:29] <magicaltrout> right then
[15:30] <magicaltrout> cory_fu: this is why i asked the question in the first place
[15:30] <magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L120
[15:30] <magicaltrout> post bootstrap it all sorted itself out
[15:30] <magicaltrout> but then update_master_config is run
[15:30] <magicaltrout> which triggers a restart
[15:31] <magicaltrout> https://gist.github.com/buggtb/1cab57c0f1d1d4a7fa99
[15:31] <cory_fu> magicaltrout: It should only trigger that if the leadership settings had changed
[15:31] <magicaltrout> well i only have 1 node
[15:31] <cory_fu> Which would imply that it needed to restart
[15:32] <lazyPower> leader-settings-changed seems to run anytime you leader-set, it doesn't retain the same behavior as relation-set
[15:32] <lazyPower> unless i'm completely mistaken
[15:32] <lazyPower> but thats what i've seen the last week or so in 2.x land
[15:32] <magicaltrout> yeah that was my thinking
[15:32] <magicaltrout> its not like the other properties
[15:32] <lazyPower> so i'm 98% certain thats the case
[15:32] <magicaltrout> this stuff is re-executed verbatim
[15:32] <lazyPower> and this entire conversation about not caring when leader changes is fud
[15:32] <lazyPower> well not fud
[15:32] <lazyPower> but not right
[15:33] <cory_fu> magicaltrout: In a meeting now
[15:33] <magicaltrout> oh any excuse when you don't know the answer :P
[15:33] <cory_fu> :)
[15:33] <cory_fu> I haven't tested it, but what you just said doesn't sound right at all, per my understanding.  Maybe stub could help
[15:33] <magicaltrout> sure. But I'm siding with lazyPower, leader_set seems to force leadership.changed to trigger
[15:33] <magicaltrout> which is why it sucks a bit :)
[15:33] <lazyPower> yep
[15:33] <lazyPower> doesn't matter if the data changed or not
[15:33] <lazyPower> it has different behavior characteristics
[15:33] <magicaltrout> sooooo hello stub
[15:34] <magicaltrout> do you have an opinion? :)
[15:34] <lazyPower> magicaltrout what you could do..
[15:34] <lazyPower> there's a @leader.setting_changed context, you can stuff the data into a cache, and do something like the data_changed bits that cory exposed
[15:35] <lazyPower> so it only does a restart if the leader-data differed from the last hearbeat it got from leader-set
[15:35] <lazyPower> s/hearbeat/heartbeat-update-thing-whatever-we-call-this
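lazyPower's cache idea is essentially what charms.reactive's `data_changed` helper does (it stores hashes in the unit's kv store). A standalone sketch, with a plain dict standing in for the kv store and the settings shape invented for illustration:

```python
import hashlib
import json

_cache = {}  # stands in for charmhelpers.core.unitdata.kv()

def data_changed(data_id, data):
    """Return True only if `data` differs from what we last saw for
    data_id, comparing hashes of a canonical JSON encoding."""
    digest = hashlib.sha1(
        json.dumps(data, sort_keys=True).encode('utf8')).hexdigest()
    changed = _cache.get(data_id) != digest
    _cache[data_id] = digest
    return changed

def maybe_restart(leader_settings):
    # Restart carte only when the broadcast settings really differ from
    # the last update, not merely because leader-settings-changed fired.
    if data_changed('leader.settings', leader_settings):
        return 'restart'
    return 'no-op'
```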
[15:36] <stub> charms.leadership.leader_set should set leadership.changed, and that state will remain set for the remainder of the hook
[15:36] <lazyPower> @when('leadership.changed.admin_password')
[15:36] <beisner> jamespage, so are you thinking we could just serialize the next-sync with the existing gh2lp sync job?
[15:36] <lazyPower> magicaltrout ^ is what i was looking for
[15:37] <lazyPower> or does that trigger regardless of the actual value changing?
[15:37] <magicaltrout> yeah stub so if I wanted to set some variables on leader election but not reset them later how do you go about catching it?
[15:38] <magicaltrout> lazyPower: dunno, i was trying to avoid just annotating all the config stuff
[15:38] <magicaltrout> give it a whirl though i guess
[15:39] <magicaltrout> oh that way i need a method per config as well I think
[15:39] <magicaltrout> which is annoying
[15:40] <stub> magicaltrout: http://pastebin.ubuntu.com/15267478/ ?
[15:41] <magicaltrout> ah right you do a state set as well
[15:41] <magicaltrout> okay stub thanks
[15:41] <stub> Or just use @hook('leader-elected') :)
[15:42] <magicaltrout> lol
[15:42]  * magicaltrout ducks the projectiles coming from cory_fu 
[15:43] <cory_fu> magicaltrout, stub: I really think that thinking about it in terms of "I need to do something when leadership has changed" is the wrong way to think about it
[15:43] <cory_fu> 1 min, I'll explain more
[15:43] <magicaltrout> well, I can't ignore a leadership change event :P
[15:43] <stub> magicaltrout: Mixing leadership.is_leader and leadership.changed could be dangerous, as you can't tell if the leadership settings were changed in the current hook by the current leader, or by the previous leader that has been deposed.
[15:44] <stub> cory_fu: Normally, yeah. But if you want to ensure the credentials are cycled every time the leadership has changed, you want to do something when leadership has changed.
[15:44] <stub> I just set the credentials once and leave them no matter who the leader is
[15:45] <stub> But I guess if you were setting a public key, and the private key never left the leader, there might be some sense in regenerating the creds rather than unnecessarily share the secret.
[15:45] <deanman> A fresh install of juju on windows 7 gives "ERROR there was an issue examining the environment: dial tcp 127.0.0.1:37017: ConnectEx tcp: No connection could be made because the target machine actively refused it." when running bootstrap. Any hints?
[15:46] <stub> (although my example is also simple without using @hook either)
[15:46] <magicaltrout> stub: so if running leadership.changed and leadership.is_leader is dangerous, how do you know if you're running on the leader or slave?
[15:47] <stub> magicaltrout: leadership.is_leader is only ever set on the leader. leadership.changed is set at the start of a hook if the leadership settings are different to the last hook.
[15:48] <stub> So if a leader did leader_set, then died, a new leader will get its leader-elected hook run but find leadership.changed already set.
[15:48] <cory_fu> stub: My point is that if you just set the leader settings whenever you are leader, then they will naturally update when it changes
[15:49] <stub> cory_fu: Yup.
[15:49] <cory_fu> And if you also gate your restart on leader settings changed, then, even if you are the only unit, you will still be elected leader and the config write and restart will happen at the right place
[15:49] <magicaltrout> this is turning into a much larger exercise than it should be. How hard can it be for the leader to propagate a few variables, and then the leader sets 1 config, the slaves set another config and they all restart, but only when a leader changes or a config variable is *actually* updated? :P
[15:50] <cory_fu> magicaltrout: That means, don't do a separate config write + (re)start other than the leadership.changed; that can be your main config write + (re)start handler
[15:50] <stub> cory_fu: Where was your example?
[15:51] <cory_fu> magicaltrout: My point is that it's easier than you're making it out to be.  If you just work under the assumption of leadership and don't worry about specifically reacting to changes in leadership status (only to changes in leadership values) then it becomes much simpler
[15:51] <magicaltrout> stub: https://gist.github.com/johnsca/8cf0f25cce06c13bdecb
[15:52] <cory_fu> Yeah, ^
[15:52] <cory_fu> With that, if you don't have any other calls to render_*_config and restart(), then you will only ever get restarted when it
[15:52] <magicaltrout> how can i work under the assumption of leadership when I have a separate master and slave template to fill?
[15:52] <cory_fu> *it's necessary
[15:53] <cory_fu> magicaltrout: I mean, under the assumption that you will always either be leader or not, like in my example
[15:53] <cory_fu> i.e., don't have a separate set of handlers for "no leadership" or "single unit" case
[15:54] <stub> So the crazy edge case that would be next to impossible to engineer is update_master_config being called before change_leader
[15:54] <cory_fu> That's just naturally handled because Juju will always pick a leader, even for a single unit
[15:54] <cory_fu> stub: How could that happen if change_leader is what updates the leader settings?
[15:54] <magicaltrout> I don't assume there will be no leadership. Do i?
[15:54] <jamespage> beisner, that might work ok
[15:55] <stub> cory_fu: Another unit is leader and changes settings, and loses the leadership lease. This unit is elected leader, and runs leader-elected. The leadership settings have changed, so that state is true when the hook starts.
[15:56] <stub> cory_fu: So both handlers are valid, and it is undefined which is called first.
[15:56] <cory_fu> Hrm
[15:56] <stub> but pretty much impossible due to the 30s lease period on leadership. It really means I've thought about this too much :)
[15:56] <cory_fu> ^__^
[15:57] <stub> LET ME SPECIFY PRIORITY OF MY HANDLERS DAMMIT
[15:57] <cory_fu> lol
[15:58] <magicaltrout> i'm still lost
[15:58] <magicaltrout> how on earth do I set the config without it restarting? :P
[15:59] <cory_fu> magicaltrout: If you set the config and it's different, that implies that you *need* to restart.
[15:59] <stub> magicaltrout: What is the actual problem you are trying to solve btw?
[15:59] <stub> magicaltrout: The coordinator layer might be a better fit.
[15:59] <magicaltrout> cory_fu: I set the config because  is_leader is called
[16:00] <magicaltrout> nothing in that configuration actually changes
[16:00] <magicaltrout> but leader_set is called so leadership.changed is fired
[16:00]  * magicaltrout feels like we've been here before....
[16:01] <cory_fu> magicaltrout: If nothing in the settings changes, then it shouldn't trigger that changed state.
[16:01] <magicaltrout> lazyPower: can you have a word with cory_fu please :P
[16:01] <cory_fu> magicaltrout: I could see it triggering on the first run, since the leader settings aren't set before
[16:02] <stub> Yeah, just checked the code. If it sets leadership.changed even when it's a noop, then it's a bug.
[16:02] <cory_fu> stub: Actually, is it the case that on the first run, leadership.changed will always be set?
[16:02] <stub> Yes
[16:02] <cory_fu> Well, that means your "nearly impossible" edge case is guaranteed to happen on the first run.  :(  And that explains magicaltrout's gist
[16:03] <magicaltrout> https://gist.github.com/buggtb/e6faa5b897bcf7d5d282
[16:03] <magicaltrout> thats happened whilst we've been mulling this over
[16:03] <stub> (because in Juju, deleting a key is just setting its value to None)
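[Editor's note: the edge case stub and cory_fu are circling around is two reactive handlers whose relative order is undefined when both guard states are true at the start of a hook. A minimal sketch of the shape being discussed, using the handler names from the conversation; the `when` decorator here is a self-contained stand-in, since charms.reactive isn't importable outside a charm:]

```python
# Stand-in for charms.reactive's @when decorator: it just records the
# guard states on the function so this sketch runs on its own.
def when(*states):
    def wrap(fn):
        fn.guards = states
        return fn
    return wrap

@when('leadership.is_leader')
def change_leader():
    # Runs when this unit holds the leadership lease; it writes the
    # leadership settings via leader_set(), which in turn raises
    # the leadership.changed state.
    pass

@when('leadership.changed')
def update_master_config():
    # Reacts to new leadership settings. If a previous leader changed
    # settings before this unit was elected, both guards are already
    # true when the hook starts, and which handler fires first is
    # undefined -- the "crazy edge case" under discussion.
    pass
```
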
[16:04] <stub> Well, there's your problem. Your charm is full of Java.
[16:04] <magicaltrout> lol
[16:04] <magicaltrout> well that I have no control over, but at least I know how Java works :P
[16:04] <mbruzek> why so much Java hate?
[16:05] <mbruzek> It is *just* a language people!
[16:05] <magicaltrout> the java appeared because after a bunch of restarts that failed its given up and run out of memory :)
[16:05] <magicaltrout> yeah i got called out on an ASF mailing list last week for mocking PHP, I'm keeping quiet now ;)
[16:06] <stub> mbruzek: For me, because of the damn click through license and the effort involved in working around it :-P
[16:06] <cory_fu> magicaltrout: Hrm.  That is a problem.  I really don't understand why update_master_config would be running during update_status
[16:06] <cory_fu> It really shouldn't be
[16:06] <cory_fu> magicaltrout: Can you point me to the code that is running right now
[16:06] <cory_fu> ?
[16:07] <cory_fu> s/right now/currently/
[16:08] <magicaltrout> er http://paste.ubuntu.com/15267724/
[16:08] <magicaltrout> I think its that
[16:08] <magicaltrout> but we've been talking and I've been messing around with ideas
[16:08] <stub> cory_fu: leadership.changed will always be set the first time leader_set is called, not always at the start of the first run.
[16:16] <cory_fu> stub: Sure, but in that case it's guaranteed that you'll end up with both leadership.changed and leadership.is_leader at the start of the hook at the same time
[16:16] <cory_fu> So at least once, it could potentially run in the wrong order
[16:17] <cory_fu> And since changing the values would then not update leadership.changed (since it would already be set), the @when('leadership.changed') handler would not run again after the leader settings were updated
[16:18] <stub> cory_fu: I don't follow. The first handler runs, guarded by @when('leadership.is_leader'). It sets stuff, turning on leadership.changed. Then the other hook runs.
[16:19] <cory_fu> stub: Oh, I misunderstood
[16:20] <magicaltrout> okay so can we try and agree on what everyone thinks will work
[16:20] <magicaltrout> then we can test it in reality
[16:20] <magicaltrout> https://gist.github.com/buggtb/a96f3ea9d1044760b642#file-gistfile1-txt-L111
[16:20] <magicaltrout> if I'm following, leader_set should run
[16:20] <magicaltrout> then on the leader I should get leadership.changed and it update and restart?
[16:21] <cory_fu> stub: However, I think what I was saying was still true, because for the very first hook run, https://git.launchpad.net/layer-leadership/tree/reactive/leadership.py#n41 is going to return an empty dict
[16:22] <cory_fu> magicaltrout: Yes.  I'm concerned that for the very first hook run, you could have leadership.changed set from the start of the hook, which could cause the order of change_leader and update_master_config to be undetermined
[16:22] <magicaltrout> okay i'm gonna refresh and see how knackered it is
[16:22] <cory_fu> It would be correct for the following hook runs, but that first run seems like it will hit the edge case
[16:23] <stub> cory_fu: Right, but there are no keys in an empty dict so we have nothing to iterate over and the flag remains unset
[16:23] <stub> (previous is an empty dict, current is an empty dict)
[16:24] <cory_fu> stub: Oh yeah!  Great
[16:25] <cory_fu> Ok, so I'm back to having no idea why update_master_config is running before change_leader in update_status for magicaltrout
[16:26] <magicaltrout> well i'll run this clean one with code I know what it looks like and we can argue about it some more
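[Editor's note: the comparison stub describes at the end of that exchange can be sketched as a plain dict diff over the previous and current leadership settings. This is a simplification of what layer-leadership does, not its actual code; the point is that on the very first hook run both dicts are empty, so no key differs and the flag stays unset:]

```python
def changed_keys(previous, current):
    # In Juju, deleting a leadership key just sets its value to None,
    # so a value comparison over the union of keys is sufficient.
    return {k for k in set(previous) | set(current)
            if previous.get(k) != current.get(k)}

# Very first hook run: previous and current are both empty dicts,
# so there is nothing to iterate over and leadership.changed stays unset.
assert changed_keys({}, {}) == set()
# A later run where the leader wrote a setting does raise the flag.
assert changed_keys({}, {'master': 'unit/0'}) == {'master'}
```
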
[16:28] <jamespage> beisner, https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync status atm
[16:28] <jamespage> I submitted a few rechecks...
[16:28] <beisner> jamespage, ok cool.  the queue is pretty clear atm.
[16:35] <HollyRain> hi! I've started to use LXD. I would want to know whether Juju is integrated or is going to be integrated with LXD?
[16:35] <magicaltrout> LXD!!!! woop
[16:38] <magicaltrout> HollyRain: from an unofficial POV, 2.0 will include LXD support
[16:38] <HollyRain> magicaltrout, how to install that beta?
[16:39] <magicaltrout> you need to wait a few days or get Xenial installed and compile Juju lxd-container-type branch
[16:42] <cory_fu> stub: Did you see https://github.com/juju-solutions/charms.reactive/pull/56
[16:50] <stub> cory_fu: yes. might simplify some things here.
[16:51] <stub> Are java subordinates rather than java layers the future ?
[16:52] <lazyPower> It seems the easiest way to abstract that without maintaining many copies of a charm that vary only by layers. We used it in layer-logstash and it works really well
[16:53] <lazyPower> granted we have only tested with kwmonroe's openjdk subordinate
[16:53] <stub> cool. I guess even I need to allow people to select between OpenJDK and Oracle, unless a layer supported both.
[16:54] <beisner> jamespage, on your charm-odl-controller review, the single charm deploy fails with http://pastebin.ubuntu.com/15268096/   (Failed to connect to nexus.opendaylight.org port 443: Connection timed out).   We'll need to file an RT to open up access to that.
[16:54] <jamespage> beisner, oh wait - thers is an amulet override for that
[16:54] <lazyPower> interface:java makes that pretty intuitive out of the box. you gate on java.ready
[16:54] <stub> A layer that sticks in OpenJDK until a subordinate is added to override it may still be useful.
[16:55] <lazyPower> true statement
[16:55] <beisner> jamespage, but that's just:  juju deploy XYZ
[16:55] <jamespage> beisner, ah right I see
[16:55] <jamespage> hmm
[16:55] <stub> Someone please write an Oracle java layer so I don't have to :-P
[16:56] <stub> c/layer/subordinate
[16:56] <magicaltrout> you get to make use of the fancy new license acceptance stuff :P
[16:56] <kwmonroe> stub, we've got layers for openjdk, ibmjdk, and zulu8 (all providing subordinates).  none are promulgated yet, but they're all close.
[16:56] <stub> Not me. Last time I tried that the lawyers told me to get knotted. Someone else's turn.
[16:56] <magicaltrout> lol
[16:57] <kwmonroe> stub, i wouldn't write a layer that just sticks in openjdk.  i'd use a layer option instead.. if/when a subordinate comes along, it should do the update-alternatives thing and switch you over.
[16:57] <kwmonroe> layer option meaning this.. just stick in openjdk or whatever in the package section: https://github.com/juju-solutions/layer-ubuntu-devenv/blob/master/layer.yaml
[16:58] <stub> was thinking of an option to specify the default, but can't think of anything that could be a default except OpenJDK (I don't know zulu or ibm)
[16:58] <stub> oh, right.
[16:59] <stub> 'cept I need openjdk 8 under trusty, which means a PPA
[16:59] <beisner> jamespage, i shall raise that rt now
[17:00] <kwmonroe> yeah stub, i'll update layer-openjdk to add ppa:openjdk-r/ppa if someone wants java8 installed on trusty.
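[Editor's note: the "layer option" approach kwmonroe links to amounts to pinning the default JDK in the consuming charm's layer.yaml rather than in code. A hedged sketch of what that might look like; the option keys follow the apt layer's conventions, but treat the exact names as illustrative and check the layer's own docs:]

```yaml
includes:
  - 'layer:basic'
  - 'layer:apt'
options:
  apt:
    # Default JDK, as discussed above; a java subordinate that comes
    # along later can switch the unit over via update-alternatives.
    packages:
      - openjdk-8-jre-headless
    # On trusty, openjdk 8 needs an extra archive (stub's case);
    # ideally this is overridable for sites without network egress.
    install_sources:
      - ppa:openjdk-r/ppa
```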
[17:02] <magicaltrout> awww F**$"£$%(IG race conditions
[17:02]  * magicaltrout goes in search of some food and alcohol to get over the annoyance
[17:02] <stub> kwmonroe: No rush - I'll probably have Xenial released before I get to refactor Cassandra into reactive
[17:02] <kwmonroe> heh, right on.
[17:05] <stub> kwmonroe: And now I think of it, the openjdk8 ppa needs to be a config option so it can be overridden at the sites without network egress.
[17:09] <kwmonroe> yeah stub, probably need a generic ppa config opt regardless of the version.. blank by default, but document setting it for restricted network envs or for getting 8 on trusty.
[17:11] <stub> I'm using the charmhelpers/apt layer install_sources in Cassandra, since it is 'standard'
[17:21] <HollyRain> I've a container with Xenial, where is the info on how to compile the Juju lxd-container-type branch?
[17:24] <HollyRain> magicaltrout, https://github.com/juju/juju do I compile the last release?
[17:25] <HollyRain> there is juju-2.0-beta1 https://github.com/juju/juju/releases
[17:31] <cory_fu> HollyRain: http://marcoceppi.com/2016/03/testing-juju-without-wrecking-juju/ has some useful info
[17:32] <cory_fu> HollyRain: There is also a docker container that lets you run the latest juju w/o building it yourself, if you're ok with running inside docker
[17:35] <HollyRain> cory_fu, thanks!
[17:49] <roryschramm> hi im having a problem deploying openstack to power8 compute nodes, im using pre-release juju 1.25.4 due to bug 1532167.  A fix was released for it but I can't find it in the PPAs and it's preventing me from deploying
[17:49] <mup> Bug #1532167: maas bridge script handles VLAN NICs incorrectly <addressability> <maas-provider> <network> <juju-core:Fix Released by frobware> <juju-core 1.25:Fix Committed by frobware> <https://launchpad.net/bugs/1532167>
[17:50] <roryschramm> juju cant find the ppc64el packages to push to the power nodes after juju add-machine
[18:05] <kwmonroe> roryschramm: is deployment complaining about a specific package not being available for ppc64le, or is it that the ppc64le OS image isn't available in your maas setup?  if the latter, you might have more luck asking in #maas.
[18:06] <roryschramm> its complaining about the package being unavailable
[18:08] <roryschramm> i have the deb files for doing a manual install but i have no idea how to get juju to use those packages for power
[18:10] <kwmonroe> roryschramm: what charm are you trying to deploy?
[18:10] <roryschramm> i was just trying to do juju add-machine to get the host into juju
[18:10] <roryschramm> plan is to put nova-compute on the power nodes
[18:11] <roryschramm> deploying power via maas gui is working fine. just not through juju
[18:16] <Gil> trying to deploy nova-compute to openstack liberty, get in syslog:  "AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds"
[18:20] <kwmonroe> sorry roryschramm, but i'm not sure what's causing add-machine to fail.  beisner, i see you answered an openstack/p8/juju question here: http://askubuntu.com/questions/677199/can-bundle-https-jujucharms-com-openstack-base-39-be-used-for-ppc64el-environm -- any pointers for roryschramm?
[18:21] <roryschramm> well i think the problem is that im using 1.25.4 juju-core. However that code is not available in a ppa currently, so juju is failing when deploying to power. for x86 it's just pushing the locally installed x86 juju code.
[18:23] <jrwren> roryschramm: is the controller on power as well? did you deploy that with --upload-tools then? i'd think add machine would use the tools from controller, but I do not know.
[18:24] <roryschramm> the controller is on x86. I can deploy to x86 nodes just fine. Yes I deployed with upload tools.
[18:45] <roryschramm> is it possible to juju bootstrap --upload-tools for multiple architectures? ie upload 1.25.4 juju tools for x86 and pplc64el?
[18:45] <roryschramm> ppc64el*
[19:11] <magicaltrout> some days I hate my clients "Views are getting dropped randomly in the production server."
[19:12] <magicaltrout> now i'm not a redshift administration expert, but I severely doubt redshift is randomly dropping views without user action
[19:23] <beisner> kwmonroe, i've just regained access to ppc64el hw in our openstack charm testing lab in maas, but haven't exercised 1.25.[3-4] against it yet.  we plan to exercise it late this week, into next week.
[19:46] <mux> can anyone tell me if there's a list of public charms that have been updated to use the new 'storage' subsystem?
[19:47] <mux> as far as I can tell the only one out there is cs:~axwalk/postgresql
[19:47] <mux> I would love to be wrong
[19:48] <aisrael> mux: I have an example charm that demonstrates some of the storage functionality (though it's not full featured): https://github.com/juju-solutions/storage-demo
[19:57] <mux> aisrael: nice, checking it out now
[20:05] <bbaqar> Anyone got a working lxd bundle for trusty?
[20:12] <marcoceppi> bbaqar: what does that mean?
[20:13] <marcoceppi> cory_fu arosales https://jujucharms.com/big-data
[20:13] <cory_fu> Awesome!
[20:13] <cory_fu> kwmonroe: ^
[20:13] <rick_h__> cory_fu: kwmonroe ty for the patience on that.
[20:13] <magicaltrout> ooh that looks snazzy
[20:14] <cory_fu> We still need our elephant hats.  :p
[20:14] <rick_h__> good luck :P
[20:14] <arosales> marcoceppi: cory_fu  \o/
[20:14] <cory_fu> ha
[20:14] <arosales> thanks urulama !
[20:14] <bbaqar> marcoceppi: So the bundle for openstack-lxd available on the charms store (https://code.launchpad.net/~openstack-charmers-next/charms/bundles/openstack-lxd/bundle) pulls the wily charms
[20:14] <cory_fu> rick_h__, marcoceppi: So cards are production now?  Is there a UI for generating them?
[20:14] <bbaqar> i need to run it with the trusty charms
[20:15] <marcoceppi> bbaqar: ah, you can't lxd is only supported in wily for nova
[20:15] <urulama> arosales: you're welcome :)
[20:15] <marcoceppi> cory_fu: no idea
[20:15] <rick_h__> cory_fu: cory_fu https://jujucharms.com/community/cards
[20:15] <rick_h__> oops
[20:15] <marcoceppi> rick_h__ cory_fu so we should start killing off cards.juju.solutions - I'll start by transparently sending requests to this service now
[20:16] <rick_h__> marcoceppi: awesome
[20:16] <marcoceppi> rick_h__: mind if I mail the list about https://jujucharms.com/community/cards ?
[20:16] <rick_h__> marcoceppi: I'm working with urulama and team to do an overall email on the release
[20:16] <marcoceppi> kk
[20:17] <rick_h__> marcoceppi: as the multi-series charms should work now as well
[20:17] <rick_h__> marcoceppi: e.g. upload multi-series, deploy on juju 1.25
[20:17] <magicaltrout> cards?! blimey you guys have so much stuff hidden
[20:17] <rick_h__> marcoceppi: so there's other stuff to note people on
[20:17] <marcoceppi> nioce
[20:17] <magicaltrout> +away
[20:17] <marcoceppi> magicaltrout: we just released it!
[20:17] <magicaltrout> that explains that then.... :P
[20:17] <bbaqar> marcoceppi: Are you sure because the nova-compute charm allows lxd as the virt-type and https://jujucharms.com/u/openstack-charmers-next/lxd/trusty/ exists
[20:18] <magicaltrout> thats great when I get Saiku stabilised, dump that card into the wiki, website & splash screen
[20:18] <rick_h__> marcoceppi: exactly
[20:18] <rick_h__> oops, magicaltrout exactly
[20:19] <marcoceppi> bbaqar: I've just been told it's wily, you might want to check with zuul coreycb or beisner who are still online
[20:19] <marcoceppi> bbaqar: or email the juju list: juju@lists.ubuntu.com
[20:19] <bbaqar> Thanks
[20:20] <coreycb> rockstar, I don't see zul in here but maybe you know the answer for bbaqar
[20:21] <zul> hi
[20:21] <rockstar> bbaqar: afaik we don't support lxd on trusty just yet.
[20:22] <zul> no we dont yet
[20:22] <rockstar> "trusty" in that charm name is merely a placeholder.
[20:24] <beisner> hi bbaqar, keep in mind that the "-next" charms track the current development versions of the charms.
[20:26] <beisner> bbaqar, if you've not seen this yet, pasting for reference:  https://jujucharms.com/u/openstack-charmers-next/openstack-lxd/bundle
[20:36] <bbaqar> Okay thanks guys. I appreciate your help.
[20:39] <HollyRain> with juju, is it necessary to work directly with systemd, or is it handled automatically?
[20:40] <HollyRain> I want my service to be started/stopped from juju, and restarted automatically if it fails, if that's possible
[20:41] <magicaltrout> HollyRain: i did something outside of systemd for that with the help of the charmers
[20:42] <magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L76 start gets called every time and it just checks to see if it's running
[20:42] <magicaltrout> so every 5 minutes, if it's down it'll get restarted
[20:42] <magicaltrout> if you have systemd kicking around, you could probably do something similar where you check the status
[20:42] <magicaltrout> and if its down, kick it
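[Editor's note: the pattern magicaltrout describes -- re-checking on every update-status tick (roughly every five minutes) and kicking the service if it's down -- can be sketched generically. The callables below are hypothetical stand-ins for whatever status check and start command a given charm uses:]

```python
def ensure_running(is_running, start):
    """Self-healing check suitable for an update-status hook:
    start the service if it is down, otherwise do nothing.
    Returns True only when a restart was actually issued."""
    if is_running():
        return False  # already up, nothing to do
    start()
    return True

# Example with a fake service that is down until started.
state = {'up': False}
restarted = ensure_running(lambda: state['up'],
                           lambda: state.update(up=True))
assert restarted and state['up']
# Next tick: already up, so nothing happens.
assert ensure_running(lambda: state['up'],
                      lambda: state.update(up=True)) is False
```
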
[20:43] <HollyRain> thanks
[20:50] <urulama_> marcoceppi: one more thing ... you can publish a multiseries charm now and old clients will work with it :) there's been a regression if it's gated - so make it public, and we'll fix that with the new release
[20:54] <marcoceppi> urulama_: YESSS
[20:56] <urulama_> marcoceppi: https://jujucharms.com/u/uros-jovanovic/wpnew and https://api.jujucharms.com/charmstore/v5/~uros-jovanovic/wpnew/archive/metadata.yaml
[22:17] <magicaltrout> this seems to work cory_fu https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L129 I think i was getting a race condition which was causing the rolling restart; now I schedule a restart instead
[22:17] <magicaltrout> that way it waits for PDI to be installed
[22:19] <cory_fu> magicaltrout: A race condition with what?  Any chance I could see the reactive log lines from your run that works?
[22:19] <cory_fu> Oooh
[22:19] <cory_fu> magicaltrout: Yeah, your update_master_config really ought to have @when('pdi.installed') on it.  Do you ever want to update the config before it's installed?
[22:21] <magicaltrout> well thats part of the problem cory_fu
[22:22] <magicaltrout> from what I could tell in the logs
[22:22] <magicaltrout> with @when('leadership.is_leader', 'leadership.changed')
[22:22] <magicaltrout> its run before the PDI package is installed
[22:22] <magicaltrout> and if you dump a @when('pdi.installed')
[22:22] <magicaltrout> it doesn't execute but then leadership.changed isn't recalled
[22:23] <magicaltrout> leader elected -> leadership changed -> pdi and java installed
[22:24] <magicaltrout> if i defer the restart though of course by then the package exists
[22:25] <cory_fu> magicaltrout: Ok, what I would recommend is putting the @when('pdi.installed') on change_leader.  That way, it will only run after PDI is installed, and would then immediately trigger  the update_master_config.
[22:25] <cory_fu> Well, actually, the solution you came up with is ok, too
[22:25] <cory_fu> Except that you're potentially rendering the config before PDI is installed.  I trust the install won't overwrite the config?
[22:26] <magicaltrout> na it writes elsewhere, although now you mention it i can see yours working without the deferral
[22:27] <magicaltrout> woop, even carte is up and running which has failed miserably all afternoon
[22:27]  * magicaltrout chucks a job at it before testing the slave logic
[22:28] <magicaltrout> http://52.29.99.174:9999/kettle/status/
[22:28] <magicaltrout> cluster/cluster
[22:28] <magicaltrout> look at that cory_fu
[22:28] <magicaltrout> working and everything
[22:29] <cory_fu> Nice
[22:29] <cory_fu> :)
[22:42] <aisrael> tvansteenburgh: Are you still here?
[22:42] <tvansteenburgh> yup
[22:43] <aisrael> tvansteenburgh: Could you kick off a test for me? https://code.launchpad.net/~brad-marshall/charms/trusty/memcached/add-monitors-relation/+merge/276958
[22:44] <tvansteenburgh> aisrael: you can do it from here if you're logged in http://review.juju.solutions/review/2354
[22:44] <aisrael> tvansteenburgh: Oh sweet, I missed that. Thanks!
[22:44] <tvansteenburgh> aisrael: np :)
[22:45] <tvansteenburgh> aisrael: fyi joyent queue is empty
[22:51] <magicaltrout> woop self registering slaves
[23:15] <magicaltrout> cory_fu: on https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L54
[23:15] <magicaltrout> why would I get
[23:15] <magicaltrout> TypeError: check_running() missing 1 required positional argument: 'java'
[23:15] <magicaltrout> haven't i included an argument?
[23:16] <cory_fu> So that error is perhaps misleading.  It means that, at runtime, the function was not passed the argument (java) that it expects.  However, it looks like the java.ready state ought to provide that param, so I don't know why you hit that
[23:17] <magicaltrout> i only hit it when I added the when_not
[23:17] <magicaltrout> but I don't want a restart and check_run to run at the same time
[23:17] <cory_fu> magicaltrout: Can you give me the output of `charms.reactive --format=yaml get_states` from that unit?
[23:18] <magicaltrout> {}
[23:19] <cory_fu> You have to run it from the charm directory
[23:19] <magicaltrout> boo
[23:19] <cory_fu> juju run --unit pdi/0 'charms.reactive --format=yaml get_states'
[23:19] <cory_fu> That would work, but using `juju ssh` to do the same would not
[23:19] <magicaltrout> although helpfully you've answered a question I was going to ask earlier :P
[23:19] <magicaltrout> {leadership.set.hostname: null, leadership.set.public_ip: null, leadership.set.username: null, pdi.installed: null}
[23:19] <cory_fu> (Unless you first cd to the charm dir)
[23:20] <cory_fu> Hrm.  That seems like the java state isn't set at all, so no idea how that handler ran
[23:20] <magicaltrout> well
[23:20] <magicaltrout> funny you say that
[23:20] <magicaltrout> juju status says
[23:20] <magicaltrout> hook failed: "java-relation-changed" for java:java
[23:21] <cory_fu> Oh, well, that makes sense.  It failed during that hook, so the state wasn't flushed
[23:22] <cory_fu> But, again, I don't see how that state was set without a value
[23:23] <cory_fu> Also, btw, the way you're using those gating states is going to cause your check_running handler to run more times than you expect (possibly)
[23:23] <magicaltrout> which is probably whats happening
[23:24] <cory_fu> Because it could run once before restart() and then again after restart finishes since the pdi.restarting state changed
[23:24] <magicaltrout> well if check_running runs >1 thats not a problem
[23:24] <magicaltrout> if its up it does nothing
[23:24] <magicaltrout> if its down it kicks it
[23:24] <cory_fu> I think you would be better served by changing check_running and restart to both use the pdi.restart_scheduled state
[23:24] <magicaltrout> that shouldn't cause an issue
[23:25] <cory_fu> Fair enough
[23:25] <magicaltrout> aww wtf, I took that out because you told me to use @when('pdi.installed') :P
[23:25] <cory_fu> Oh, well, I still saw it in there
[23:25] <cory_fu> But yeah, if you're going to use that instead then... um
[23:25] <magicaltrout> well its still in the code, its not called
[23:26] <magicaltrout> well I don't care, I'll revert to using restart_scheduled
[23:26] <magicaltrout> if it works it works, I just need to trap 2 methods both trying to do the same, but slightly different thing
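[Editor's note: cory_fu's earlier point -- that a handler guarded by a state like pdi.restarting can be eligible once before restart() sets the state and again after it clears -- can be shown with a toy model. This is not charms.reactive's actual dispatch, just a self-contained illustration of why check_running may fire twice around one restart:]

```python
# Toy model of reactive state gating. A handler guarded by
# @when_not('pdi.restarting') runs whenever the state is unset.
states = set()
runs = []

def check_running():
    # Eligible only while 'pdi.restarting' is NOT set.
    if 'pdi.restarting' not in states:
        runs.append('check')

def restart():
    states.add('pdi.restarting')
    # ... the actual restart work would happen here ...
    states.discard('pdi.restarting')
    runs.append('restart')

check_running()   # eligible: state not set yet
restart()
check_running()   # eligible again: state was set, then removed
assert runs == ['check', 'restart', 'check']
```

As magicaltrout notes, this is harmless if the check is idempotent (it only kicks the service when it's down), but gating both handlers on a single scheduled-restart state avoids the double run entirely.
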
[23:26] <cory_fu> magicaltrout: Where is the java.updated state coming from?
[23:27] <magicaltrout> dunno that was something we did in ghent
[23:29] <cory_fu> I don't think that state exists any more.  I don't see it in the current java interface
[23:29] <magicaltrout> k
[23:29] <cory_fu> https://github.com/juju-solutions/interface-java/blob/master/provides.py
[23:29] <magicaltrout> oh actually
[23:29] <magicaltrout> I think it was a kwmonroe "todo"
[23:29] <magicaltrout> so we boiler plated it
[23:33] <magicaltrout> cory_fu: if I do this: https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L52 its just going to give me the same error isn't it?
[23:33] <magicaltrout> as that was the cause and effect from before
[23:33] <magicaltrout> so can I check the state from within the method?
[23:35] <cory_fu> magicaltrout: I don't know.  I honestly have no idea how you got that error
[23:35] <cory_fu> But the problem isn't with the @when_not, it's with that java.ready state.  That's the one that should be giving a relation instance and it's not
[23:36] <magicaltrout> hmm
[23:38] <cory_fu> magicaltrout: My recommendation would be to put a breakpoint in one of your other handlers that (tends to) get triggered before that one and do some get_state checking in that
[23:38] <cory_fu> Unfortunately, my wife is saying I have to call it a day now.  :)
[23:38] <magicaltrout> aye
[23:38] <magicaltrout> hehe
[23:38] <magicaltrout> well someone needs to have a life!
[23:40] <cory_fu> Have a good evening!
[23:41] <magicaltrout> you too