[05:48]  * thumper sighs
[05:48] <thumper> axw: you may recall a fix for this
[05:49] <thumper> upgraded an environment from 1.19.4 to 1.20.8
[05:49] <thumper> now I see this:
[05:49] <thumper> machine-0: 2014-09-30 05:47:53 ERROR juju.worker.instanceupdater updater.go:267 cannot set addresses on "0": cannot set addresses of machine 0: cannot set addresses for machine 0: state changing too quickly; try again soon
[05:49] <thumper> machine-0: 2014-09-30 05:47:53 ERROR juju.worker runner.go:218 exited "instancepoller": cannot set addresses of machine 0: cannot set addresses for machine 0: state changing too quickly; try again soon
[05:49] <thumper> machine-0: 2014-09-30 05:47:55 ERROR juju.worker runner.go:218 exited "machiner": cannot set machine addresses of machine 0: cannot set machineaddresses for machine 0: state changing too quickly; try again soon
[05:49] <thumper> every 10s or so
[05:49] <thumper> actually, every 5s
[05:51] <thumper> actually... every 3s
[05:51] <axw> thumper: hmm, I thought that was fixed...
[05:51] <thumper> it is the worker restart delay
[05:51] <thumper> seems not
[05:51] <axw> I will investigate
[05:51] <thumper> could be because I was on a dev version before
[05:55] <axw> thumper: 1.19.4 was broken
[05:55] <axw> https://bugs.launchpad.net/juju-core/+bug/1334773
[05:55] <mup> Bug #1334773: Upgrade from 1.19.3 to 1.19.4 cannot set machineaddress <landscape> <lxc> <maas-provider> <precise> <regression> <upgrade-juju> <juju-core:Fix Released by axwalk> <juju-core 1.20:Fix Released by axwalk> <https://launchpad.net/bugs/1334773>
[05:56] <thumper> ok... so I now have an environment which is broken, how do I fix it?
[05:56] <axw> umm
[05:57] <axw> thumper: I *think* you would have to resort to mongo surgery
[05:57] <axw> renaming the "scope" fields to "networkscope"
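The "mongo surgery" axw describes - renaming `scope` fields to `networkscope` inside the machine's address documents - can be sketched as a pure function over a machine document. This is a hypothetical illustration only: the field names come from the chat, the real collection layout may differ, and on a live controller you would apply the equivalent rename with MongoDB update operators against juju's machines collection rather than in Python.

```python
# Hypothetical sketch of the repair axw suggests: rename the "scope" key
# to "networkscope" in each address entry of a machine document.
# Structure is assumed from the chat, not from juju's actual schema.

def rename_scope_fields(machine_doc):
    """Return a copy of a machine document with address 'scope' fields
    renamed to 'networkscope'; the original document is left untouched."""
    fixed = dict(machine_doc)
    for key in ("addresses", "machineaddresses"):
        addrs = fixed.get(key)
        if not addrs:
            continue
        fixed[key] = [
            {("networkscope" if k == "scope" else k): v for k, v in addr.items()}
            for addr in addrs
        ]
    return fixed
```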
[08:12] <gnuoy> jamespage, while prepping the mp for cells I noticed that I was carrying a fix for a neutron-api/nova-cc endpoint race. I've broken it out into a small mp if you have time https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-fix-endpoint-race/+merge/236459
[08:18] <underyx> hey there
[08:19] <underyx> I'm writing a charm where the install hook needs to install a package hosted on another service
[08:19] <underyx> is it okay to block execution in the install hook until the relation to the package server is live?
[08:23] <jamespage> gnuoy, +1
[08:23] <gnuoy> thanks
[08:39] <jamespage> gnuoy, what does your priority list look like right now?
[08:40] <gnuoy> jamespage, I've got about 1.5 to 2 hours of work to do which I could really do with getting finished, but then I'm free for urgent stuff. Does that help?
[08:40] <jamespage> gnuoy, I was just looking through my list of features we still need to land
[08:41] <jamespage> gnuoy, l2population driver and vxlan overlays being two we must do
[08:41] <gnuoy> jamespage, the l2pop branches are done
[08:41] <gnuoy> I can take a look at vxlan
[08:41] <jamespage> gnuoy, OK - looking at l2pop now then
[08:43] <jamespage> gnuoy, awesome on vxlan - I think we probably just need a toggle option in neutron-api
[08:43] <jamespage> gre|vxlan
[08:44] <jamespage> gnuoy, which charms does l2pop impact?
[08:45] <gnuoy> jamespage,
[08:45] <gnuoy> lp:~gnuoy/charms/trusty/neutron-openvswitch/next-l2-population
[08:45] <gnuoy> lp:~gnuoy/charms/trusty/neutron-api/next-l2-population
[08:45] <gnuoy> lp:~gnuoy/charms/trusty/quantum-gateway/next-l2-population
[08:45] <jamespage> gnuoy, can you propose those please?
[08:46] <gnuoy> jamespage, sorry, it looks like I didn't create mps. I think that was because I hadn't had a chance to check it was blocking the propagation of multicast
[08:46] <jamespage> gnuoy, don't block on that :-)
[08:46] <jamespage> trusty upstream!
[08:55] <jamespage> gnuoy, ah - I just spotted that the neutron-api charm is not nsx-enabled just yet - but that was not working on trusty yet either, so that's OK
[08:55]  * jamespage makes a note to enable that once we get to trusty support with upstream
[08:59] <jamespage> gnuoy, just looking through your charms - neutron-api needs:
[08:59] <jamespage> mechanism_drivers = openvswitch,l2population
[08:59] <gnuoy> ack
[09:00] <jamespage> the l2_population flag is only used by the agents on neutron-gateway and neutron-openvswitch (which is correct)
[09:00] <jamespage> gnuoy, I'd probably unconditionally enable the driver and use the agent flag to turn it on and off
[09:01] <gnuoy> ok, I'll take a look at that
[09:01] <jamespage> gnuoy, thanks
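For reference, jamespage's suggestion above - always listing the l2population mechanism driver and gating the behaviour on the agent-side flag - would look roughly like this in the rendered ml2 configuration. This is a sketch: the exact file, section, and surrounding options depend on the charm's templates.

```
[ml2]
mechanism_drivers = openvswitch,l2population
```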
[09:02] <underyx> huh, does the install hook have to finish execution before a relation can be added?
[09:03] <jamespage> underyx, it does yes
[09:04] <jamespage> underyx, well the relation can be added as soon as you deploy the charm, but its hook won't run until after install->config-changed->start has completed
[09:05] <underyx> jamespage, so if I need a relation to complete installation, can I block execution in the install hook until I get the required data from a relation?
[09:06] <jamespage> underyx, I'd just move what you are blocking for into the -changed hook of the relation - as soon as the remote service signals its done via a changed execution, you can complete things
[09:07] <jamespage> underyx, the install hook does not need to complete the installation, if you see what I mean
[09:07] <jamespage> it's more 'init'
[09:08] <underyx> alright, that makes sense
[09:08] <underyx> thanks!
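The pattern jamespage describes - install does baseline setup only, and the real work completes in the relation's -changed hook once the remote service has published its data - can be sketched as below. The function names, state dict, and relation key are hypothetical stand-ins, not a real charm API; a real charm would read relation data via hook tools such as relation-get.

```python
# Hypothetical sketch of deferring work from install to relation-changed.
# The state dict stands in for whatever the charm persists on disk.

def install_hook(state):
    # Baseline setup only; do NOT block here waiting for the relation.
    state["base_configured"] = True

def pkg_relation_changed(state, relation_data):
    # Runs each time the remote end updates its settings; return quietly
    # until the package server has published its URL, then finish the
    # installation that the install hook deliberately left incomplete.
    url = relation_data.get("package-url")
    if not url:
        return
    state["installed_from"] = url  # stand-in for the real install step
```

Because juju re-runs the -changed hook whenever the remote side updates its settings, the early `return` is safe: the hook simply fires again later with the data present.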
[09:15] <thumper> axw: still around?
[09:15] <axw> thumper: yo
[09:15] <thumper> axw: do you have the commit handy where you fixed that problem?
[09:15] <axw> I'll have a look
[09:15] <thumper> axw: I'm going to look at mongo surgery locally to fix my db
[09:15] <thumper> I don't really want to re-deploy
[09:16] <axw> thumper: https://github.com/juju/juju/commit/80ca2ac1765e5f1ec555939006d14f23325da7d8#diff-0854f7d657770b8adf41defe45fc8cd1
[09:16] <thumper> axw: ta
[09:16] <axw> np
[09:18] <thumper> I should really write this down somewhere
[09:46] <thumper> gah
[09:47] <thumper> for the love of all things good...
[09:47] <thumper> why does a machine have both "addresses" and "machineaddresses" ?
[09:47] <thumper> and why is there overlap in them...
[09:47]  * thumper thinks
[09:47] <marcoceppi> for the glory of satan, of course thumper
[09:47] <thumper> I think, for those following at home, that one is what the provider says
[09:47] <thumper> and one is what the machine says
[09:47] <thumper> o/ marcoceppi
[09:48] <marcoceppi> \o
[10:03] <gnuoy> jamespage, l2pop mps:
[10:03] <gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-l2-population/+merge/236474
[10:03] <gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/quantum-gateway/next-l2-population/+merge/236482
[10:03] <gnuoy> https://code.launchpad.net/~gnuoy/charms/trusty/neutron-openvswitch/next-l2-population/+merge/236477
[10:24] <jamespage> gnuoy, all merged - thanks!
[10:24] <gnuoy> jamespage, fantastic, thanks
[10:25] <gnuoy> :q
[12:25] <ayr-ton> Is it possible to move a service/unit to another environment?
[12:49] <underyx> has anyone ever successfully used charmhelpers.contrib.python.packages with a manually specified index URL for installation?
[12:50] <underyx> because reading this code it seems to be completely broken
[12:50] <underyx> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/python/packages.py#L41
[12:50] <underyx> if I'm reading it correctly, it checks whether the keyword arguments are in the tuple specified here
[12:51] <underyx> but an argument name in Python can't contain a hyphen
[12:51] <underyx> have I got this completely wrong, or did this really go unnoticed for half a year?
[12:53] <Odd_Bloke> underyx: You can pass hyphenated kwargs if you use ** at the call-point.
[12:53] <underyx> Odd_Bloke, right, I just realized and tested if that would work
[12:53] <underyx> and yep, it does
[12:53] <underyx> thanks!
[12:53] <Odd_Bloke> underyx: http://paste.ubuntu.com/8465842/
[12:53] <Odd_Bloke> :)
[12:55] <underyx> it wouldn't be an issue if I were to fix this in charmhelpers though, right?
[12:55] <underyx> if I retained backwards compatibility of course
[12:55] <underyx> it's just not very pythonic to have to do that
[12:59] <Odd_Bloke> underyx: I agree that it's un-Pythonic; don't know the answer to your question though. :p
[13:13] <jamespage> gnuoy, I pushed a  trivial to neutron-api to change the default mcastport - it was conflicting with nova-cc and I ended up with a broken cluster
[13:13] <gnuoy> kk
[13:18] <gnuoy> jamespage, I'm thinking about adding a relation between neutron-gateway and neutron-api so that neutron-api can dictate config like l2pop and network type driver to the neutron-gateway. What do you think?
[13:19] <jamespage> gnuoy, +1
[13:19] <gnuoy> ta
[14:43] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/quantum-gateway/add-neutron-plugin-api-rel/+merge/236533 once that has landed I'll do the vxlan change since I'll use the same mechanism as l2pop
[14:48] <jamespage> gnuoy, merged - thanks!
[14:48] <gnuoy> jamespage, fantastic, thanks
[15:39] <lazyPower> https://github.com/juju/docs/pull/188 - large refactor on the relationship docs
[15:43] <lazyPower> kwmonroe: ^
[15:44] <lazyPower> marcoceppi: when you get time, i'd like your +/- 1 on this as well. ty
[15:45] <kwmonroe> ack lazyPower- i'll fetch my spectacles shortly.
[15:50] <marcoceppi> lazyPower: you've got feedback
[15:51] <lazyPower> ty
[16:22] <jamespage> gnuoy, we might need to make the neutron db migration conditional on >= juno
[16:22] <gnuoy> jamespage, ok
[16:39] <gnuoy> jamespage, I'll add that condition in the morning. fyi I've prepped the vxlan branches, but things aren't looking too chipper when I try and deploy a guest with vxlan running on the overcloud
[16:44] <jamespage> gnuoy, ta
[19:34] <bloodearnest> hazmat: heya, trival deployer MP for your consideration: https://code.launchpad.net/~bloodearnest/juju-deployer/run-build-cmds-in-shell/+merge/236596
[19:40] <hazmat> bloodearnest, thanks looks good
[19:43] <bloodearnest> hazmat: cool, thanks. I have a few ideas for further changes I'd like to run by you before I spend time on them. Shall I drop you an email?
[19:44] <hazmat> bloodearnest, sounds good
[19:44] <hazmat> bloodearnest, or we can g+ now? there's one other pending merge/pull request from yesterday
[19:45] <hazmat> either way
[19:45] <bloodearnest> hazmat: g+ is good - you mean chat or video?
[19:48] <mwenning> stokachu, good afternoon!
[19:55] <stokachu> mwenning, hey there
[19:56] <mwenning> hi, trying to get cloud-install to run a second time - it worked once OK, I tore it back down with -u -
[19:56] <mwenning> now when I try again it loops on error: kvm container creation failed: exit status 1
[19:56] <mwenning> any ideas?
[19:57] <stokachu> mwenning, what does kvm-ok report?
[19:57] <mwenning> /dev/kvm exists
[19:57] <mwenning> KVM acceleration can be used
[19:57] <stokachu> ok hmm
[19:58] <stokachu> does virsh list show an existing vm or did it get removed?
[19:58] <mwenning> shows nothing
[19:58] <stokachu> ok hmm
[19:59] <stokachu> mwenning, does juju give you any other information other than 'kvm creation failed'?
[19:59] <stokachu> probably not iirc
[20:00] <bloodearnest> hazmat: thanks for that, gives me confidence to get started, as it seems we're on the same page.
[20:01] <stokachu> mwenning, you could also do a juju destroy-environment local && juju bootstrap
[20:01] <stokachu> manually to see what happens
[20:01] <mwenning> looks like 0 was created ok, after that I get a succession of "1", "2", etc
[20:01] <mwenning> ok.
[20:01] <stokachu> then run juju debug-log in a separate terminal
[20:01] <stokachu> and juju deploy 'service-name'
[20:01] <stokachu> see if it gives us any other information
[20:11] <mwenning> stokachu, found it. Firewall permissions had expired - it couldn't access any archives. Sorry to bother.