[03:46] <lamont> let's say I have juju 1.24.7-0ubuntu1~14.04.1, and I want to juju deploy local:xenial/foo... is there some obvious reason that the service winds up in this state:
[03:46] <lamont>     service-status:
[03:46] <lamont>       current: unknown
[03:46] <lamont>       message: Waiting for agent initialization to finish
[03:46] <lamont>       since: 19 Nov 2015 03:45:05Z
[03:46] <mgz> lamont: what status is the machine in?
[03:47] <lamont> nova?  doesn't exist at all
[03:47] <lamont> interestingly, local:wily/foo gets me machine-2
[03:47] <mgz> weeelll... that would be why the agent isn't up
[03:47] <lamont>     service-status:
[03:47] <lamont>       current: maintenance
[03:47] <lamont>       message: installing charm software
[03:47] <lamont>       since: 19 Nov 2015 03:47:45Z
[03:48] <lamont> that's with local:wily/foo
[03:49] <lamont> mgz: is there some easy way to cause juju to spill what command it's using for "nova boot"?
[03:49] <mgz> lxc-ls --fancy gives you a view under the hood, as does $JUJU_HOME/$ENV/*
[03:50] <mgz> er, meph
[03:50] <lamont> no lxcs involved... this is openstack
[03:50] <mgz> I was imagining containers where none exist
[03:50] <lamont> can I have multiple series (trusty,wily,xenial) in one juju environment?
[03:50] <mgz> for the true story, you need the nova logs
[03:51] <lamont> as in nova host api logs on the api host?
[03:51] <mgz> but machine-0.log will tell you what the provisioner tried to do
[03:51] <lamont> ah!
[03:51] <mgz> it may be something as simple as nova not having a xenial image
[03:52] <lamont> nova image-list | grep xenial
[03:52] <lamont> | 00e59d4a-b4a5-4b27-8cf9-3fe2bcd0fb79 | ubuntu-xenial-daily-amd64-server-20151105-disk1.img  | ACTIVE |        |
[03:52] <lamont> ^^ that's why I thought I could do this
[03:53] <mgz> okay, but how does juju know about that image?
[03:53] <lamont> 2015-11-19 03:45:07 ERROR juju.provisioner provisioner_task.go:630 cannot find tools for machine "1": no matching tools available
[03:53] <mgz> did you also update local simplestreams
[03:53] <lamont> bingo!
[03:53] <lamont> apparently not
[03:53] <mgz> ...and also it couldn't find xenial tools
[03:53]  * lamont goes to poke his admins
[03:55] <mgz> okay, we haven't published xenial tools yet, because there are no xenial cloud-images yet, and our build chain expects that
[03:55] <lamont> oh!
[03:55] <lamont> well then
[03:55] <mgz> in practice, the tools would be identical to wily's
[03:55] <lamont> mgz: any ETA? (meanwhile, I'll do the wily + upgrade-in-place-while-crying)
[03:56] <lamont> guessing step 1 is poke the cloud-images people?
[03:57] <mgz> yeah, they're trying to change the method they use to make images, which is the hold-up
[03:59] <lamont> ack
[07:57] <jamespage> adam_g, hey - you should hook up with gnuoy, coreycb and thedac as they are about to write three new openstack charms - all based on the new reactive framework which is our standard now
[10:36] <ionutbalutoiu> hi, jamespage. Updated the MP with the unit tests as you advised. Can you take a look whenever you can? (https://code.launchpad.net/~ionutbalutoiu/charms/trusty/neutron-gateway/next/+merge/276833)
[10:37] <jamespage> ionutbalutoiu, on my list for today
[10:40] <ionutbalutoiu> jamespage, just as a note, the amulet tests haven't run yet for this update (only lint_test and unit_test), and it's been two days. I think we have to wait for those as well. Just wanted to give you a ping. Is such a delay common for amulet tests?
[10:41] <jamespage> ionutbalutoiu, long queue in the lab right now as we keep enabling new tests for each charm
[10:41] <jamespage> it's possible something is blocked up - I'll check
[10:41] <jamespage> ionutbalutoiu, if not I'll run them myself...
[10:41] <ionutbalutoiu> good, thank-you.
[10:42] <jamespage> ionutbalutoiu, tbh I'd be happy to land without those reporting in as they passed before, and you've made 0 code change that would impact that since then
[10:42] <jamespage> but let's see
[10:45] <lazypower> o/ mornin jamespage
[11:22] <jamespage> hey lazypower
[11:23] <lazypower> jamespage did you see the somewhat silent post yesterday about charming w/ layers interview feat. Adam Stokes?
[11:24] <jamespage> lazypower, I did - started to watch but then the children turned up
[11:24] <jamespage> so had to stop
[11:24] <jamespage> will resume later today
[11:24] <lazypower> :D just wanted to make sure it crossed your radar as i know your team is about to go head first into reactive/layer territory
[11:25] <lazypower> to anyone lurking that didn't catch the update - https://www.youtube.com/watch?v=lowRfWcxky0
[11:28] <jamespage> gnuoy, coreycb, thedac, ddellav, rockstar, beisner ^^
[11:29] <gnuoy> ta
[14:26] <tvansteenburgh> anyone know how to get the ID of the currently executing juju action from inside the action itself?
[14:27] <tvansteenburgh> i've seen other code using JUJU_ACTION_ID, but it's not set, and I don't see any mention of that env var in the docs
[14:43] <tvansteenburgh> ah, it's JUJU_ACTION_UUID
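[note] For anyone hitting the same thing, a minimal sketch of reading the action ID from inside an action — per the exchange above, JUJU_ACTION_UUID is the variable juju actually sets (the function name here is just illustrative):

```python
import os

def current_action_id():
    """Return the ID of the currently running juju action, or None.

    Juju exposes it to the action's environment as JUJU_ACTION_UUID
    (not JUJU_ACTION_ID, which some older code mistakenly looked for).
    """
    return os.environ.get("JUJU_ACTION_UUID")
```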
[15:06] <stub> Register the decorated function to run when not all desired_states are active.
[15:06] <stub> Which does not mean when all desired_states are not active.
[15:08] <stub> tvansteenburgh: I look in charmhelpers.core.hookenv() for those :)
[15:10] <stub> So we get an AND operation with @when and an OR with @when_not. But I think I might stick with a single state per decorator for now until I get the hang of this ;)
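[note] A toy model of the decorator semantics stub is describing — purely an illustration of the boolean logic as he reads the docstring, not the charms.reactive implementation:

```python
def when_fires(active_states, *desired):
    """Model of @when: fires only when ALL desired states are active (AND)."""
    return all(s in active_states for s in desired)

def when_not_fires(active_states, *desired):
    """Model of @when_not: fires when NOT all desired states are active,
    i.e. when ANY one of them is inactive (an OR over the negations)."""
    return not all(s in active_states for s in desired)
```

So `@when_not('a', 'b')` would fire even if only one of the two states is missing — which is the subtlety stub is pointing out.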
[17:58] <jcastro> rick_h_: we're 7 subscriptions short of the 100 we need to get the custom URL for the youtube channel, can you ask for one more push across the board?
[17:59] <rick_h_> jcastro: sure thing
[18:03] <rick_h_> jcastro: sent, included cloud@ as well this time so hopefully moves fast for you.
[18:04] <jcastro> <3
[18:06] <rick_h_> jcastro: <3 jane's reply
[18:28] <rick_h_> jcastro: and done! woot
[18:58] <natefinch> rick_h_: sort of sad that a 600 person company has trouble getting 100 subscriptions to their own youtube channel ;)
[19:02] <rick_h_> natefinch: naw, it got through pretty well. Folks just need reminders :)
[19:26] <adam_g> gnuoy, coreycb thedac  are there any examples out there of openstack charms using the reactive framework?
[19:27] <gnuoy> adam_g, there is one I think
[19:27]  * gnuoy is looking for the link
[19:28] <gnuoy> adam_g, https://code.launchpad.net/~openstack-charmers/charms/trusty/openvswitch-odl/next
[19:28] <adam_g> gnuoy, thanks.
[19:28] <gnuoy> adam_g, the directory layout is a little outdated though
[19:28] <gnuoy> reactive should be a top level dir I think
[19:31] <adam_g> also--i seem to remember at least one place in the openstack charms where a charm would go and create neutron networks and subnets in neutron using neutronclient as part of a hook. does that still exist?
[19:31] <adam_g> i need to do something similar but can't remember where that was done
[19:32] <gnuoy> adam_g, are you thinking of http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bin/quantum-ext-net
[19:32] <gnuoy> adam_g, oh, as part of a hook
[19:32] <adam_g> gnuoy, oh ya, that was it
[19:33] <gnuoy> adam_g, also http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bin/quantum-tenant-net
[19:33] <adam_g> gnuoy, okay, so those don't actually get called as part of a hook
[19:33] <gnuoy> adam_g, nope, I can't think of a hook that does that
[19:34] <adam_g> as part of my services deployment, i need to go and create a network in neutron on behalf of the service tenant
[19:34] <coreycb> adam_g, btw we're in the midst of creating a base openstack layer and interface layers for rmq, mysql, keystone, etc
[19:35] <adam_g> coreycb, cool. I'm kinda operating on a tight schedule so I'm not sure I'll be able to adapt that just yet. :\ unless all that stuff is about done and ready to use now.
[19:35] <gnuoy> adam_g, that's going to be a pain I would think
[19:35] <gnuoy> I mean creating the network
[19:35] <adam_g> gnuoy, yeah, i was trying to find a good way to make that fit yesterday
[19:36] <adam_g> gnuoy, best i could come up with is to have neutron-api expose a setting in its relation that lets other services know that neutron at least has: keystone, db, rabbit relations
[19:36] <adam_g> then have the other end do what it needs once that's true
[19:37] <gnuoy> adam_g, fwiw we already have a concept of the charm being ready (all mandatory relations and config satisfied)
[19:37] <gnuoy> it's spat out in the workload status
[19:38] <adam_g> gnuoy, right but isn't that just user-facing? i can't block relation A's hooks on the status of B+C+D?
[19:38] <bdx_> charmers, core, dev: Hey what's up everyone? I am finally getting my feet wet and jumping into charming with reactive.... I'm trying to react to the changing of specific configs on config-changed hook, is there a best practice for checking for config change for specific configs? here's what I'm working with... https://github.com/jamesbeedy/puppet-agent/blob/master/reactive/puppet_agent.py
[19:39] <gnuoy> adam_g, yes, you're right. I just meant the charm already has a concept of what it needs to be ready
[19:39] <adam_g> gnuoy, yeah, AIUI that unfortunately doesn't help here
[19:40] <adam_g> i think all i need from neutron-api is a flag i can pass to other services that says 'the api is functional and you can create things'
[19:40] <adam_g> should be fine if that were to happen before the rest of neutron is wired up (neutron-gateway and corresponding compute agents)
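[note] The readiness flag adam_g is describing could be gated on something like this — a hypothetical sketch; the relation names are made up, and a real charm would publish the flag to related services via relation-set:

```python
# Hypothetical set of mandatory relations neutron-api would need before
# its API is actually usable
REQUIRED_RELATIONS = {"identity-service", "shared-db", "amqp"}

def api_ready(joined_relations):
    """True once every mandatory relation is joined, i.e. the point at
    which neutron-api could advertise 'the api is functional and you
    can create things' to other services."""
    return REQUIRED_RELATIONS.issubset(joined_relations)
```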
[19:43] <gnuoy> adam_g, yep, it seems reasonable. I'd like to see what jamespage thinks too as he was pondering a similar issue recently
[19:43] <adam_g> cool
[19:44] <adam_g> i might get something functional later this afternoon and push it somewhere
[19:56] <tvansteenburgh> bdx_: you can do `if config.changed(key)`
[19:57] <bdx_> tvansteenburgh: ha...I knew it was something simple....thanks!
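[note] For the record, `config.changed(key)` comes from the Config class in charmhelpers.core.hookenv. A minimal sketch of the idea — the real class additionally persists the previous hook run's values to disk, so this mock is only illustrating the comparison semantics:

```python
class Config(dict):
    """Sketch of charmhelpers' Config.changed(): compare a key's current
    value against the value saved at the end of the previous hook run."""

    def __init__(self, current, previous=None):
        super().__init__(current)
        self._previous = previous or {}

    def changed(self, key):
        # True if the key's value differs from the previous hook run
        # (including when the key is newly set or newly removed)
        return self._previous.get(key) != self.get(key)
```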
[20:52] <bdx_> 2-vcpu_4-gb-ram_20-gb-hd