[09:37] <jamespage> gnuoy, could you take a run through verifying these: https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
[09:39] <gnuoy> sure
[09:40] <jamespage> gnuoy, n-gateway and n-openvswitch have relations to n-api right?
[09:41] <gnuoy> right
[09:42] <gnuoy> jamespage, I'm supposed to +1 the workflows on those as well, right?
[09:42] <jamespage> gnuoy, yeah so +2 review and +1 workflow
[09:42] <gnuoy> ack
[09:46] <jamespage> gnuoy, beisner: I've just raised a project-config change that should fix bug tracking for the charms as well - it's currently not functional
[09:49] <jamespage> gnuoy, lets book some time to discuss v3 and get it swinging
[09:49] <jamespage> I have some ideas of what might be up...
[09:51] <gnuoy> jamespage, ta
[09:52] <jamespage> gnuoy, just looking at these - https://review.openstack.org/#/q/status:open+topic:bug/1545886
[09:52] <mup> Bug #1545886: need to support vni_range option for ml2 plugin <openstack> <sts> <neutron-api (Juju Charms Collection):In Progress by xtrusia> <neutron-gateway (Juju Charms Collection):In Progress by xtrusia> <neutron-openvswitch (Juju Charms Collection):In Progress by xtrusia> <https://launchpad.net/bugs/1545886>
[09:52] <jamespage> I think that in fact we only need to set those in the plugin for neutron-api, not all of the agent locations but I'm not 100% sure
[09:53] <gnuoy> jamespage, I would expect to only have to set them for neutron-api
[09:53] <jamespage> gnuoy, yeah - I think thats true
[09:53] <jamespage> gnuoy, let me dig a little
[09:57] <gnuoy> jamespage, Do I still need to mark bugs Fix Committed or do I sit back and let gerrit do it?
[09:57] <jamespage> gnuoy, for now please do manually
[09:58] <jamespage> gnuoy, hopefully my proposed change will fix that
[09:58] <gnuoy> ack
[10:14] <jamespage> gnuoy, beisner: http://stackalytics.com/?metric=commits&project_type=all&company=canonical&release=all
[10:15] <gnuoy> nice
[10:15] <jamespage> gnuoy, and - http://stackalytics.com/?metric=commits&project_type=all&release=all
[10:15] <jamespage> commits not reviews but meh - we don't get metrics on those
[10:17] <jamespage> gnuoy, lets do it
[10:36] <jamespage> gnuoy, http://paste.ubuntu.com/15272811/
[10:39] <tinwood> gnuoy, jamespage: is there an easy way to determine the github repo for an OS charm?  e.g. in my case I'm looking for keystone.
[10:41] <tinwood> I'm guessing it's this one: https://github.com/openstack-charmers/charm-keystone
[10:42] <sparkieg`> anyone else seeing invisible text in the Search field on https://jujucharms.com/docs/ ?
[10:44] <evilnickveitch> sparkieg`,  yes, it has been an issue for a while now - I believe it is being worked on
[10:44] <sparkiegeek> evilnickveitch: ok
[10:45] <evilnickveitch> it is annoying! there are too many pages of docs not to have working search!
[10:45] <sparkiegeek> evilnickveitch: +1
[10:45] <sparkiegeek> evilnickveitch: the search is ... non intuitive too
[10:46] <Muntaner> hello guys! I'm having a problem. My environment was going well, but all of a sudden when I try to deploy a new service, it stays in the state "Waiting for agent initialization to finish" forever. How can I diagnose?
[10:46] <Muntaner> (I use Juju on an openstack all-in-one installation)
[10:54] <jamespage> gnuoy, http://paste.ubuntu.com/15272863/
[10:54] <jamespage> tinwood, yes it is
[10:54] <tinwood> thanks jamespage
[11:58] <jamespage> Muntaner, take a look in /var/log/juju on machine 0
[12:09] <Muntaner> jamespage, too late! I destroyed the environment, bootstrapped it again and it works. Weird
[12:10] <jamespage> gnuoy, hey - when you did the last release of charms to stable branches, I think you did some 'at the point of release' type changes
[12:11] <Muntaner> jamespage, I think it is related to the security groups of openstack... when I delete and create a VM, the sec group created for that VM remains and does not get deleted in openstack
[12:24] <gnuoy> jamespage, update charmhelper location, that sort of thing?
[12:30] <jamespage> gnuoy, yah
[12:35] <gnuoy> jamespage, that + update amulet branches to point at stable branches
[12:59] <jamespage> tvansteenburgh, dpb1_: I know we're not really focussing on deployer any longer but https://code.launchpad.net/~james-page/juju-deployer/git-branch-support/+merge/287929 would help us out in the short term...
[13:19] <beisner> jamespage, woah, the stackalytics thing!
[13:24] <marcoceppi> jamespage: love it
[13:50] <tvansteenburgh> jamespage: i can get that deployer merge released today
[13:54] <jamespage> tvansteenburgh, that would be great - are you OK with the syntax?
[13:54] <jamespage> I think that longer term we'll just be deploying from the charm-store but this works us through the interim...
[13:55] <tvansteenburgh> jamespage: i just merged this yesterday https://code.launchpad.net/~niedbalski/juju-deployer/add-refspecs/+merge/287829
[13:55] <jamespage> beisner, I have a few reviews that I'd like the full amulet suite run against - is that a manual kickoff atm?
[13:55] <tvansteenburgh> jamespage: but it looks like your change can coexist with that
[13:55] <jamespage> tvansteenburgh, it would appear so
[13:56] <tvansteenburgh> jamespage: so yeah, lgtm. will let you know when it's released
[13:57] <beisner> jamespage, it is.  i'll be focusing on the gate job integration and the artifact public linking now that we're in a good place.  will send you info shortly.
[13:58] <jamespage> tvansteenburgh, awesome - thankyou
[14:02] <jamespage> beisner, I fixed up the single deploy for swift-storage - https://review.openstack.org/#/c/287156/
[14:03] <jamespage> if you'd like to review, that will unblock other reviews inflight I think
[14:06] <beisner> jamespage, ah good.  i suspect there may be a few of those.
[14:06] <jamespage> beisner, yes...
[14:06] <jamespage> my resync flushed out a few...
[14:08] <beisner> jamespage, was it the setup_storage bit that was tripping?
[14:09] <jamespage> beisner, basically with no explicit config, there are no block devices found, so the perms changes at the end of setup_storage error out as /srv/node never gets created...
[14:09] <beisner> yah ok
[14:11] <beisner> jamespage, so this is valuable to eke out.  i think each charm should deploy to a blocked state instead of an error state, when deployed with its defaults.
[14:16] <jamespage> beisner, and that would be a valid test ?
[14:17] <jamespage> beisner, ok that does make sense...
[14:17] <beisner> jamespage, so it tests the user experience a bit, in that if i've just added a charm, and it has deployed before i add configs, i don't have to jump through resolved/retry manual steps.
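A minimal, hypothetical sketch of the pattern beisner describes (this is not the actual swift-storage charm code): with no block devices configured, the charm should compute a "blocked" workload status instead of erroring out in setup_storage.

```python
# Hypothetical sketch, not the actual swift-storage code: when deployed
# with default config and no block devices found, report "blocked"
# (waiting on operator config) rather than erroring out.
def storage_status(configured_devices):
    """Return a (status, message) pair for the workload status."""
    devices = [d for d in (configured_devices or '').split() if d]
    if not devices:
        # Nothing to set up yet; this is missing config, not an error.
        return ('blocked', 'no block devices configured')
    return ('active', 'managing {} device(s)'.format(len(devices)))
```

In a real charm this pair would feed something like `status_set`; the point is that a default deploy parks in `blocked` and never forces the operator through resolved/retry steps.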
[14:21] <tvansteenburgh> frankban: when you have time can you add juju-deployer-0.6.4 (on pypi) to the juju/stable ppa?
[14:22] <frankban> tvansteenburgh: oh sorry yes
[14:22] <sparkiegeek> tvansteenburgh: what makes it stable? how much testing has it undergone?
[14:24] <tvansteenburgh> sparkiegeek: aside from my own, and the testing of the patch submitters, none
[14:26] <sparkiegeek> tvansteenburgh: can I suggest proposed is a better place for it, at least for now?
[14:27] <tvansteenburgh> sparkiegeek: sure. frankban, juju/proposed instead please
[14:28] <tvansteenburgh> which i guess is actually juju/devel?
[14:29] <dpb1_> tvansteenburgh: is lp:python-jujuclient not getting any updates?  ahasenack has had ppas building from there forever.
[14:29] <dpb1_> tvansteenburgh: lp:juju-deployer rather.
[14:29] <tvansteenburgh> dpb1_: yeah that's where updates are happening, which ppas are getting updates?
[14:29]  * dpb1_ gets links
[14:30] <dpb1_> https://code.launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient and https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
[14:32] <tvansteenburgh> dpb1_: ok, that's good to know. so if ppl want latest tell them to install from there?
[14:32] <dpb1_> yup
[14:32] <dpb1_> tvansteenburgh: it's where I pull from on all my machines
[14:32] <tvansteenburgh> dpb1_: do those builds ever graduate to juju/stable?
[14:32] <frankban> tvansteenburgh, dpb1_: sounds great, can we just copy those when a deployer needs to be updated in stable ppa?
[14:33] <sparkiegeek> note the failures in https://launchpadlibrarian.net/245134994/buildlog_ubuntu-vivid-amd64.juju-deployer_0.6.1~bzr165~48~ubuntu15.04.1_BUILDING.txt.gz
[14:33] <dpb1_> frankban: tvansteenburgh: there is no workflow: it's always been just ping frank or marco to copy to /stable
[14:33] <sparkiegeek> the test is trying to reach out to reviews.openstack.org and failing
[14:33] <marcoceppi> well, I never copy binaries to stable, so that's mostly a frank thing
[14:34] <marcoceppi> I prefer rebuilding
[14:34] <tvansteenburgh> dpb1_: yes, it's always been a "by request" thing
[14:35] <frankban> marcoceppi: if we already have a working deb that has been tested, why not just copy it over?
[14:36] <beisner> fwiw, with openstack package flow, we build for staging, then rebuild for proposed.  then it's a binary copy to release it.  that way your release binary is identical to the thing that was tested in proposed.
[14:37] <beisner> also, woot! for that deployer daily build ppa.  :-)
[14:38] <sparkiegeek> beisner: note the FTBFS errors though
[14:39] <tvansteenburgh> what is FTBFS
[14:39] <beisner> fail to build from source
[14:39] <tvansteenburgh> ack, i will address that. i didn't realize the build host didn't have egress https access
[14:40] <tvansteenburgh> i did run all the tests before releasing
[14:42] <beisner> tvansteenburgh, ah yes i see jorge's added test is the thing there.
[14:42] <tvansteenburgh> beisner: yeah, i'm gonna ask him to fix
[14:47] <beisner> tvansteenburgh, yah, not sure that test should actually reach out.
[14:47] <tvansteenburgh> agree, asked for a mock
[14:48] <tvansteenburgh> or just a straight unit test
[14:48] <beisner> right.  anyway, cool stuff.   we're all about the refspec now.  thanks for the bits.
[14:59] <magicaltrout> http://www.meteoriteconsulting.com/spinning-up-pentaho-data-integrations-quickly-with-juju/ <- those card things fsck-ing rock
[15:00] <lazyPower> haha!
[15:00] <lazyPower> yeah they do!
[15:01] <magicaltrout> better fix the icon
[15:05] <beisner> jamespage, should we take that swift-storage fixup through the full deal before +2?
[15:05] <jamespage> beisner, meh
[15:05] <jamespage> beisner, it's a fairly innocuous change
[15:05] <beisner> jamespage, ack, i'm good with it
[15:08] <magicaltrout> there's a bug in the cards lazyPower, who's the boss?
[15:08] <lazyPower> magicaltrout - https://github.com/CanonicalLtd/jujucharms.com/issues/ <- that board is
[15:09] <magicaltrout> cool
[15:13] <beisner> icey_, your test update expects one or more ceph-osd processes, whereas we were previously expecting exactly 2.  is that intentional?
[15:14] <icey_> beisner: yes, although it's not ideal; to make the tests repeatably passable in a given deployment, it has to either do that or be able to expect 2 OR 3 OSD processes to be running
[15:15] <beisner> we could plumb that helper to take a list type, then specify [2, 3]
[15:15] <beisner> bahh new keyboard still improving my experience here ;-)
[15:17] <icey> beisner: same, I think my fingers are *nearly* used to this keyboard :); I like the idea of that helper being able to take a list
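The list-capable helper beisner and icey sketch could look roughly like this (hypothetical, not the actual charmhelpers amulet code):

```python
# Hypothetical sketch of the helper change discussed above: accept
# either an exact expected process count, or a list of acceptable
# counts (e.g. [2, 3] ceph-osd processes in a given deployment).
def process_count_ok(actual, expected):
    if isinstance(expected, (list, tuple)):
        return actual in expected
    return actual == expected
```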
[15:17] <icey> beisner: what's the timeline to get a openstack contrib charmhelper change landed?
[15:18] <lazyPower> if its anything like charmhelpers.core.host mods - > a month
[15:19] <beisner> icey, lazyPower - yah the core c-h stuff is likely to be more sensitive than a test helper.  we can usually land test helpers quickly.
[15:20] <lazyPower> i'm just salty that i'm still asking for help with the same merge and received nothing in response, thats all :)
[15:20] <lazyPower> https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers/+merge/285044 <-- really needs some love if anyone has the spare cycles to go +1 that for me
[15:20] <beisner> lazyPower, ha!  where be it?
[15:21] <jacekn> is https://bugs.launchpad.net/charms/+bug/1538573 on somebody's radar? It should be trivial review now. Do I need to do anything for it to appear in the review queue?
[15:21] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1538573>
[15:38] <rick_h__> cory_fu: do you recall the name of the dev from neo4j at the charmer's summit for merlijn's email to the list?
[15:39] <magicaltrout> just google, loud neo4j dev who is canadian but lives in sweeden
[15:39] <magicaltrout> -e
[15:39] <rick_h__> lol
[15:39] <rick_h__> first result: "Ask HN: Who is hiring? (January 2016)"
[15:40] <lazyPower> steven
[15:40] <lazyPower> steven fisher i think?
[15:40] <lazyPower> i follow him on twitter
[15:40] <magicaltrout> steven.baker@neotechnology.com,
[15:40] <lazyPower> thats it
[15:40] <lazyPower> srbaker
[15:40] <rick_h__> <3 ty
[15:40] <magicaltrout> thanks to jcastro for not bcc'ing the emails in his mailshot;)
[15:40] <rick_h__> <3
[15:40] <rick_h__> found it and the charm
[15:41]  * marcoceppi sighs
[15:43]  * magicaltrout cracks out the ear plugs and hides the alcohol
[15:43] <jcastro> the sign up sheet was public; technically, that's like a bcc. /me runs.
[15:43] <lazyPower> magicaltrout that takes all teh fun out of thursday
[15:43] <magicaltrout> hehe
[15:43] <jcastro> not like a bcc I should say
[15:45] <icey> anybody have issues on xenial running `charm build`? I just got `TypeError: write() argument 1 must be unicode, not str` the moment it got to the first interface to process
[15:45] <rick_h__> jcastro: what's the topic for the next charm hangout?
[15:45] <rick_h__> jcastro: can/should I join?
[15:46] <marcoceppi> rick_h__: it's an open office hours, all are welcome
[15:46] <jcastro> rick_h__: it's almost always open ended
[15:46] <jcastro> if you have something to show, show
[15:47] <rick_h__> jcastro: can you toss me an invite to the calendar and I'll show up please?
[15:47] <lazyPower> icey - errr
[15:47] <lazyPower> icey - are you using charm from marco's ppa?
[15:48] <icey> probably not, let me dig up that email; just installed ubuntu on this machine on Monday so it's pretty fresh :)
[15:48] <lazyPower> ok, i'm not sure that will fix anything
[15:48] <lazyPower> but its worth a shot to get started
[15:48] <lazyPower> it has all the road-to-2.0 fixes in there
[15:49] <marcoceppi> lazyPower: road-to-2.0 is now master
[15:49] <marcoceppi> as an fyi
[15:49] <jacekn> kjackal: hey. Thanks for feedback on collectd layer. I think it should be ready to go now: https://bugs.launchpad.net/charms/+bug/1538573
[15:49] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1538573>
[15:49] <jcastro> rick_h__: done
[15:49] <lazyPower> well
[15:49] <lazyPower> its built from FYI-Master apparently
[15:49] <lazyPower> :P
[15:49] <icey> lazy I'm on 1.11.1, do you have a link to marco's PPA?
[15:49] <rick_h__> jcastro: <3 ty much
[15:50] <kjackal> jacekn thank you
[15:51] <icey> nevermind lazyPower: ppa:marcoceppi/charm-tools-2.0
[15:51] <lazyPower> https://launchpad.net/~marcoceppi/+archive/ubuntu/charm-tools-2.0
[15:51] <lazyPower> ye
[15:52] <icey> BAM nope
[15:52] <icey> same exact error
[15:53] <icey> given that xenial is targeting python3, shouldn't charm tools see about updating :)
[15:53] <marcoceppi> icey: i got you
[15:53] <marcoceppi> https://github.com/juju/charm-tools/pull/119
[15:53] <marcoceppi> icey: alpha1 build is being released as I type
[15:54] <icey> three cheers for marcoceppi!
[15:54] <lazyPower> icey - its almost like we were expecting you :)
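For context on the traceback icey hit: on Python 2, io-style text streams demand unicode, and handing them a byte `str` raises exactly that TypeError. A small Python 3 mirror of the same type discipline (bytes into a text stream, names here illustrative) looks like:

```python
import io
import os
import tempfile

# Mirror of the charm-build traceback: text-mode streams reject the
# "wrong" string type. On Python 2 that's a byte str into an io text
# stream ("must be unicode, not str"); on Python 3, shown here, it's
# bytes ("must be str, not bytes").
path = os.path.join(tempfile.mkdtemp(), 'out.txt')
with io.open(path, 'w', encoding='utf-8') as f:
    f.write(u'text is fine')       # unicode/py3-str always works
    try:
        f.write(b'raw bytes')      # wrong type for a text stream
        wrong_type_ok = True
    except TypeError:
        wrong_type_ok = False
assert not wrong_type_ok
```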
[15:54] <jamespage> beisner, can you workflow +1 https://review.openstack.org/#/c/287156/ as well please
[15:54] <lazyPower> we are devx, expect us
[15:54] <lazyPower> jamespage o/ heyo
[15:54] <jamespage> lazyPower, ola
[15:55] <lazyPower> jamespage - friendly ping to get this back on your roadmap - https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers
[15:55] <jamespage> lazyPower, yup
[15:55] <jamespage> on my list still
[15:55] <lazyPower> same bat time same bat channel next week?
[16:08] <nagyz> hey guys a quick question
[16:08] <nagyz> if I specify nodes in my juju-deployer config like this:
[16:08] <nagyz> '0': constraints: name=xxx blahblah
[16:08] <nagyz> should it always pick that node from MAAS?
[16:08] <nagyz> because currently it looks like it ignores the name actually and just picks based on the tag
[16:08] <lazyPower> nagyz - you're looking for tags
[16:09] <lazyPower> --constraints="tags=bootstrap"   for example
[16:09] <nagyz> yes, I know them, but in my juju-deployer config I'd like to pin the charms to nodes
[16:09] <lazyPower> i'm not sure you can use names as a constraint
[16:09] <lazyPower> afaik its only ever supported tags
[16:09] <nagyz> it doesn't complain that it would be invalid... :)
[16:09] <rick_h__> lazyPower: nagyz no, i think you need to add maas tags and use the tags as constraints
[16:10] <nagyz> ok let me rewrite it and test it
[16:10] <nagyz> cheers
[16:10] <lazyPower> cheers nagyz
[16:10] <nagyz> wait, one more thing: will the machine constraints I specify in the config override the constraints I specified when bootstrapping?
[16:11] <lazyPower> yeah, nice thing for you to expect in 2.0
[16:11] <lazyPower> you get bootstrap constraints, and you get model constraints
[16:11] <nagyz> ok so I'm on the latest stable atm :)
[16:11] <nagyz> so they get applied after the bootstrap constraints in 1.25?
[16:11] <lazyPower> yeah if you bootstrap with --constraints=tags=bootstrap
[16:11] <nagyz> so if I have tags=a and then in the juju-deployer config I have tags=b, will it pick a node that has a and b?
[16:11] <lazyPower> all machines launched after the fact will try to launch with --tags=bootstrap
[16:11] <lazyPower> you can clear it, or change it, its not catastrophic
[16:11] <nagyz> yes, that's the behaviour we're seeing
[16:12] <lazyPower> just a minor annoyance
[16:12] <nagyz> so how could I pin different services to different tags? would that even work now then?
[16:12] <nagyz> let's say I clear the global constraints after bootstrapping
[16:12] <lazyPower> if you clear globals and have --constraints=tags=abc   per unit, they will apply those units constraints when the bundle gets deployed
[16:13] <lazyPower> sorry, I mean per charm
[16:13] <nagyz> I currently list the constraints per machine listed in the yaml - and then for each charm I explicitly bind it to a machine id like --to '5'
[16:13] <nagyz> is that not a good thing?
[16:13] <lazyPower> thats fine, it makes the bundle a bit brittle
[16:13] <nagyz> (based on wolsen's tokyo demo yaml :P)
[16:14] <lazyPower> but with how we're modeling networking, storage, etc. i'm sure we'll see more bundles like that start to crop up
[16:15] <nagyz> ok let me give it a try
[16:29] <nagyz> if I have globals set, they will ALWAYS override local constraints, right?
[16:29] <nagyz> or are they getting an "AND" inbetween?
[16:31] <magicaltrout> aahh cory_fu I knew there was a reason for me piping the hookenv stuff through leader_set..... whilst it will get set, if someone changes it, it won't trigger a restart
[16:32] <cory_fu> magicaltrout: Ah, that's true
[16:32] <cory_fu> @when_any('leadership.changed', 'config.changed.foo') would be useful there
[16:32] <cory_fu> I should cut a release of charms.reactive so that's available
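A sketch of the `@when_any` pattern cory_fu suggests. charms.reactive isn't importable here, so a tiny stand-in decorator models its "fire when any flag is set" dispatch; in a real layer you'd use `from charms.reactive import when_any`, and `config.changed.foo` assumes a hypothetical config option named `foo`.

```python
# Stand-in for charms.reactive's when_any, for illustration only:
# run the handler if any of the watched flags is currently set.
def when_any(*flags):
    def decorator(fn):
        def dispatch(active_flags):
            if any(f in active_flags for f in flags):
                return fn()
        return dispatch
    return decorator

@when_any('leadership.changed', 'config.changed.foo')
def restart_on_change():
    # In a real charm this would restart the service.
    return 'restarted'
```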
[16:37] <aisrael> marcoceppi: tvansteenburgh: This item shouldn't be in the queue (it got merged yesterday): http://review.juju.solutions/review/2429
[16:39] <nagyz> lazyPower, do you have a second?
[16:39] <nagyz> lazyPower, it doesn't work as expected... :-)
[16:40] <nagyz> this is the yaml that we're running with juju-deployer: http://pastebin.com/4BVd32L1
[16:40] <nagyz> but for example haproxy gets put on machine "2", while the config says it should go in 3,4,5
[16:40] <nagyz> what did we screw up? :-)
[16:41] <lazyPower> thats...what?
[16:41] <lazyPower> it clearly says -to '2'
[16:41] <lazyPower> i mean - '3'
[16:41] <nagyz> haproxy: (the very last) lists -3, -4, -5
[16:41] <nagyz> yep
[16:42] <nagyz> juju status shows (while the deployer is running): machine: "2"
[16:42] <nagyz> plus it requests a bunch of machines from maas
[16:42] <lazyPower> yeah something went awry here
[16:42] <nagyz> not just the ones we've listed in the yaml
[16:42] <tinwood> thedac, gnuoy: quick question: if I want to run an action inside a unit (juju ssh unit/n ; then "run an action") is there anything I need to set up re: ENV to get it to work?  I need to see the stacktrace that's blowing up on my keystone charm!
[16:42] <lazyPower> tinwood - you can juju debug-hooks on that unit, then execute the action
[16:43] <lazyPower> debug-hooks will trap the action tinwood
[16:43] <tinwood> even for an action?
[16:43] <lazyPower> nagyz - i'm thinking through this and its not obvious to me why this is the case :(
[16:43] <tinwood> ok. will try.  ta!
[16:43] <lazyPower> tinwood - yep, actions run in an anonymous hook context
[16:43]  * thedac learned something too
[16:43] <nagyz> lazyPower, I'm happy to try any suggestion.
[16:44] <nagyz> the debug tells me this:
[16:44] <nagyz> 2016-03-03 16:33:03 [INFO] deployer.import:  Deploying service haproxy using cs:trusty/haproxy
[16:44] <nagyz> 2016-03-03 16:33:03 [ERROR] deployer.deploy: Service haproxy to be deployed with non-existent service 3
[16:44] <nagyz> maybe that's why?
[16:44] <lazyPower> its trying to co-locate with a service huh?
[16:44] <lazyPower> remove the quotes around the numbers declaring the machines? (not even sure this will have any effect)
[16:45] <nagyz> the problem is for every retry I need to destroy the env and then rebootstrap...
[16:45] <nagyz> takes a while.
[16:45] <lazyPower> i understand
[16:45] <nagyz> what does it mean by non-existent service 3? I mean the machines are not "services", are they?
[16:45] <lazyPower> you may have some success dropping that bundle on the store gui and inspecting machine view
[16:45] <lazyPower> i think the 'string' is getting interpolated as a service name
[16:45] <lazyPower> deploy supports the following syntax
[16:45] <lazyPower> to: - 'wordpress/0'
[16:46] <lazyPower> i think its getting confused and trying to colo with a service named 3, so it spins up a machine expecting a parent service to be deployed there
[16:46] <nagyz> ok let me remove the quotes.. :)
[16:46] <lazyPower> s/parent/principal
[16:47] <jacekn> kjackal: so is setting charm review bug status to "New" something I should have done after my reply? Or you prefer for charmers to do it themselves?
[16:48] <lazyPower> jcastro - move it to fix-committed
[16:48] <lazyPower> jacekn ^
[16:48] <kjackal> Thank you lazyPower
[16:48] <lazyPower> we don't move status to anything other than to get them out of the queue after we've replied :)
[16:49] <jacekn> alright, good to know thanks
[16:50] <tvansteenburgh> lazyPower, nagyz: i don't think machine placement works for v3 bundles
[16:50] <lazyPower> ah that would explain it too
[16:52] <rick_h__> tvansteenburgh: lazyPower right, it was added in v4
[16:52] <lazyPower> nagyz - might want to hold off on that re-test
[16:52] <lazyPower> sounds like we've found the blocker
[16:52] <rick_h__> lazyPower: how is it a v3 bundle? We don't do those in the store for a while any more?
[16:53] <rick_h__> lazyPower: are we sure it's the old format?
[16:53] <lazyPower> well this is a labeled bundle
[16:53] <lazyPower> thats by definition the v3 format right?
[16:53] <lazyPower> rick_h__ http://pastebin.com/4BVd32L1
[16:53] <rick_h__> lazyPower: k, just checking
[16:54] <rick_h__> lazyPower: right, remove that, dedent one level and try again?
[16:54] <lazyPower> nagyz - the good news is the format change is super simple to make, delete that openstack: key at the top and fix the indentation now that we've removed the parent key and you're in a v4 bundle
[16:54] <lazyPower> nagyz - give a v4 bundle a go, and lets see if you get better results
[16:57] <nagyz> oh that's how I started out
[16:57] <nagyz> but I had to add the openstack: at the beginning to actually be able to make juju-deployer parse it...
[16:58] <rick_h__> nagyz: juju-deployer should take a v4 bundle. The team updated it to use it
[16:58] <nagyz> it was complaining about not getting any deployment names
[16:58] <nagyz> that's why we added openstack: at the top
[16:58] <rick_h__> nagyz: if you have a problem without it let us know and we can look into what's not happy. https://github.com/juju/charmstore/blob/v4/docs/bundles.md
[16:59] <rick_h__> nagyz: ^ has the docs/description of the v4 vs v3 and what changed
[16:59] <nagyz> looking at it.
[17:01] <jamespage> beisner, does uosci understand which pxc test to run yet?
[17:03] <Gil> Can etcd and flannel be installed on the same physical node?  I want to put etcd & flannel on physical node 2, then add a unit of flannel to phys node 3.  Will it work ok?
[17:04] <lazyPower> Gil - sure can
[17:04] <lazyPower> Gil - which flannel charm are you using?
[17:04] <tvansteenburgh> nagyz: what version of deployer are you using?
[17:05] <beisner> jamespage, checking..
[17:05] <lazyPower> brb
[17:06] <beisner> jamespage, yes:  "Automatically selected test:  ./tests/10-deploy_test.py"
[17:06] <jamespage> beisner, \o/
[17:06] <jamespage> oh wait - I think it was an amulet test failure nm
[17:09] <nagyz> tvansteenburgh, I've got it via pip install
[17:09] <nagyz> as the stable ppa doesn't have the 0.4 package
[17:10] <nagyz> (based on readthedocs there is 0.4, right?)
[17:10] <tvansteenburgh> nagyz: 0.6.4 is latest on pypi
[17:10] <tvansteenburgh> nagyz: if you're on 0.4.x then you probably don't have the v4 support
[17:11] <nagyz> well I just did pip install
[17:12] <nagyz> that I assume gets the latest :)
[17:12] <nagyz> there is no -v or --version
[17:12] <nagyz> ok I'm on 0.6.3
[17:13] <nagyz> according to pip list :)
[17:13] <tvansteenburgh> k, should be fine then
[17:13] <nagyz> ok, changed the bundle to v4, running now
[17:18] <nagyz> so what I specify as machine '0' in my bundle actually becomes '1', right?
[17:18] <nagyz> I should just skip declaring '0'?
[17:18] <nagyz> right now I have 0,1,2 declared with a storage tag and then now according to juju status 0 is the juju bootstrap node, 1 became storage and 2 became a controller which I only list for 3,4,5...
[17:24] <tvansteenburgh> nagyz: the machine keys that you define in the machines section of your bundle do not necessarily map to actual machine numbers
[17:25] <tvansteenburgh> nagyz: for that reason, yes, it is best to not use '0' in your machines definition
[17:25] <tvansteenburgh> to avoid confusion
[17:29] <nagyz> cheers
[17:32] <jamespage> beisner, https://review.openstack.org/#/c/287082/ is looking better now as well
[17:34] <beisner> jamespage, ready to land that?  lgtm
[17:34] <jamespage> beisner, +1
[17:34] <jamespage> can you do the honours?
[17:34] <beisner> jamespage, yep
[17:35] <jamespage> beisner, a little tidy here as well - https://review.openstack.org/#/c/287848/
[17:35] <jamespage> just template rollups and removal
[17:35] <jamespage> amazing how much old stuff we lug around....
[17:39] <beisner> yah, that should be a safe landing too, jamespage
[17:41] <jamespage> lovely
[17:54] <lkraider> Can juju be deployed to third party account with external_id access?
[17:54] <lkraider> (in AWS)
[18:01] <aisrael> How long should it take for updated charms to be ingested into the store?
[18:13] <lazyPower> longest case: an hour and 20 minutes
[18:14] <lazyPower> lkraider - not sure what you mean
[18:14] <lazyPower> external_id access?
[18:15] <lkraider> @lazyPower - I have an IAM user that was granted to another AWS account through a Role (assumeRole permission). To use that I in aws_cli need to set external_id in ~/.aws/config
[18:16] <lazyPower> ah, i dont know that we support that. i would most def. poke the mailing list about it though
[18:16] <lkraider> https://docs.aws.amazon.com/cli/latest/userguide/cli-roles.html#cli-roles-xaccount
[18:16] <lazyPower> someone who's more versed in that specific case may be able to chime in and prove me wrong
[18:17] <lkraider> @lazyPower another question: does juju support user that has MFA enabled?
[18:17] <lazyPower> not that i'm aware of. the IAM credentials you would give juju assumes that i has all the required permissions, and there's no interactive way for it to prompt you for the MFA credentials
[18:18] <lazyPower> s/that i has/that it has/
[18:18] <lkraider> thanks
[18:18] <lazyPower> np lkraider - sorry i wasn't full of good news for your questions :/
[19:15] <ChrisHolcombe> i'm a bit confused about something with magic mock
[19:16] <ChrisHolcombe> i'm trying to mock hookenv.config() and while config('source') works properly config.previous('source') does not.
[19:28] <lazyPower> ChrisHolcombe - it creates another mock
[19:29] <lazyPower> and now that i've said that, i realize how nebulous the question and answer both were
[19:30] <lazyPower> ChrisHolcombe - give me another go at what you're seeing vs what you're trying to do
[19:30] <ChrisHolcombe> lazyPower, yeah i think i get that but i've also done this: self.config.previous.return_value = "blah"
[19:30] <ChrisHolcombe> lazyPower, so i'm wondering.  how do i mock config.previous
[19:30] <icey> beisner: do you think you can weigh in on https://review.openstack.org/#/c/287446/ one more time? jamespage has given it the +1 but wants your +2 for the tests
[19:30] <lazyPower> ah common misconception.   self.config.return_value.previous.return_value="blah"
[19:30] <lazyPower> i think that'll get you sorted ChrisHolcombe ^
[19:30] <ChrisHolcombe> lazyPower, omg .. ugh
[19:31] <lazyPower> i could be wrong, but i recall having ot do something like that with like filepointers
[19:31] <lazyPower> i forget the exact formula for when that works but give it a go, if it works yay
[19:31] <lazyPower> if not, i call shenanigans
[19:34] <beisner> icey, see comment.  i didn't -1 it, but one lil misspelling.  if you want to bump that one more time, i'll watch for it.
[19:36] <icey> beisner: I'll make that one right quick
[19:36] <ChrisHolcombe> lazyPower, you're correct in that it's making another mock but i can't figure out how to get it to return the right thing :)
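A runnable illustration of the distinction lazyPower pointed at: `config.previous` and `config.return_value.previous` are different auto-created child mocks, and only the latter is what `config().previous(...)` actually hits.

```python
from unittest import mock

# Patching config.previous only affects direct attribute calls like
# config.previous('source'):
config = mock.MagicMock()
config.previous.return_value = 'old-source'
assert config.previous('source') == 'old-source'
# ...but config() returns config.return_value, whose .previous is a
# *different*, still-unconfigured child mock:
assert config().previous('source') != 'old-source'

# The chaining lazyPower suggested patches the right one, for code
# under test that does hookenv.config().previous('source'):
config2 = mock.MagicMock()
config2.return_value.previous.return_value = 'old-source'
assert config2().previous('source') == 'old-source'
```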
[19:37] <icey> beisner: pushed, pending jenkins
[19:38] <beisner> icey, thx sir
[19:38] <icey> for a change to c-h, should it be on LP?
[19:38] <lazyPower> icey yep
[19:38] <lazyPower> charm-helpers head is still up on launchpad in /charm-helpers
[20:04] <nagyz> lazyPower tvansteenburgh so I retested it, changed my bundle to v4 format, now it looks much better, except that somehow 6 becomes a compute, 7 becomes a controller yet my deployment descriptor clearly specifies it differently
[20:04] <nagyz> let me repastebin both
[20:06] <nagyz> lazyPower tvansteenburgh my v4 deployer bundle: http://pastebin.com/CY2kDma2, and the status output (clipped, but visible): http://pastebin.com/rGq7BUDH
[20:07] <nagyz> 6 gets compute 7 gets controller and I don't get why
[20:11] <magicaltrout> i see Mark is keynoting ApacheCon US this time
[20:11] <magicaltrout> he's been promoted to the big time ;)
[20:15] <rick_h__> cargonza: ping
[20:15] <cargonza> hi
[20:15] <rick_h__> cargonza: got a sec please?
[20:16] <cargonza> sure
[20:16] <rick_h__> https://plus.google.com/hangouts/_/canonical.com/rick?authuser=1 cargonza
[20:20]  * magicaltrout messed up his ApacheCon submission and instead of 3 presentations, has 1 presentation and 2 - 2 hour tutorials.....
[20:20] <magicaltrout> arse
[20:22] <tvansteenburgh> nagyz: same reason as before, there's no guarantee that the labels for the machines in your bundle will match up to the actual machine numbers
[20:24] <rick_h__> nagyz: bundles are meant to be 'self contained' so they can be reused by others. So it's machine 0, within that bundle.
[20:24] <tvansteenburgh> nagyz: deployer asks juju for the correct number of machines with the specs you want, but it can't tell juju "make it this number"
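A hedged sketch of what a bundle like nagyz's looks like in v4 form (service names, charm, and tags are illustrative): top-level `machines`/`services` keys with no wrapping `openstack:` label, where the machine keys are bundle-local placement labels, not the machine numbers Juju will actually assign.

```yaml
# Illustrative v4-format bundle fragment (names hypothetical).
# The '1'/'2' keys below are labels scoped to this bundle; Juju maps
# them to whatever machine numbers it allocates at deploy time.
machines:
  '1':
    constraints: tags=storage
  '2':
    constraints: tags=controller
services:
  haproxy:
    charm: cs:trusty/haproxy
    num_units: 1
    to: ['2']
```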
[20:30] <icey> beisner: build failed because of a timeout :( http://logs.openstack.org/46/287446/5/check/gate-charm-ceph-osd-python27/326d4fb/console.html
[20:31] <marcoceppi> magicaltrout: sounds like fun! ;)
[20:31] <beisner> icey, ha!  upstream test timed out.   comment back on your review with:   recheck
[20:31] <magicaltrout> well marcoceppi at least with 2 hours, I'll get to demo a crap load more Juju stuff to pad it out :P
[20:32] <icey> also beisner, I'm about to push a change for c-h that adds test coverage for the pid checking method, as well as support for lists :-D
[20:32] <magicaltrout> "hello gang, we're doing data management, so to get started and fake distributed-ness we're going to install juju and spin up some nodes"
[20:32] <beisner> icey, coolio
[20:33] <magicaltrout> also my presentation is at 5:10pm, i'm normally in the bar by then... how rude
[20:33] <magicaltrout> maybe i'll just do the presentation in the bar....
[20:33] <marcoceppi> magicaltrout: we'll have to bring the bar to the presentation
[20:34] <magicaltrout> aye, sounds like a plan
[20:35] <icey> bam beisner: https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 ; once again, I make a 5 line change with 33 lines of tests to support it :)
[20:36] <beisner> icey, imho, as it should be ;-)
[20:36] <icey> I'm happy with it :)
[20:36] <icey> there were no tests covering that function at all, now there's a test covering all of the accepted types :-D
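A hedged sketch of the kind of change icey's merge proposal describes: a pid-checking helper that accepts either a single pid or a list of pids. The function name and behavior here are illustrative, not the actual charm-helpers API.

```python
import os


def pids_alive(pids):
    """Return True if every given pid refers to a live process.

    Accepts a single pid or a list/tuple of pids (the shape of the
    "pids can be a list" change; this is a sketch, not charm-helpers code).
    """
    if not isinstance(pids, (list, tuple)):
        pids = [pids]
    for pid in pids:
        try:
            os.kill(int(pid), 0)  # signal 0 checks existence, sends nothing
        except ProcessLookupError:
            return False
        except PermissionError:
            pass  # the process exists but belongs to another user
    return True
```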
[20:37] <marcoceppi> I love how declarative the branch is, it's so assertive and sassy like "pids can be a list too, DAD"
[20:37] <beisner> icey, but where are the tests for your tests that test the test you updated as a helper to another test to test the feature that you really just want to land?
[20:38] <icey> :)
[20:41] <beisner> fwiw i'm still laughing over here, marcoceppi
[20:41] <icey> I laughed out loud as well marcoceppi
[20:44] <icey> beisner: I just can't catch a break today: http://10.245.162.36:8080/job/test_charm_amulet_smoke/78/console
[20:45] <icey> Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/dists/trusty-updates/main/i18n/Translation-en  Hash Sum mismatch
[20:53] <beisner> icey, apt repo glitch
[20:53] <icey> yeah, build retrying (again)
[21:10] <magicaltrout> duuuh
[21:10] <magicaltrout> there should be a big fat warning sign if you're in a repository charm directory building your charm instead of the development directory :P
[21:11] <magicaltrout> or maybe i should just build in the right place
[21:21] <lazyPower> actually
[21:21] <lazyPower> thats really good feedback magicaltrout - i've been bit by that
[21:21] <lazyPower> and the manifest being present in cwd is a pretty large identifier that you're doing something wrong....
[21:21] <magicaltrout> hehe i think i did 4 updates before realising what i was doing :)
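A hypothetical guard for the failure mode magicaltrout and lazyPower describe: charm-tools writes a manifest file into built output (assumed here to be named `.build.manifest`), so its presence in the current directory is a strong hint that you are about to rebuild inside build output rather than in your source layer.

```shell
# Sketch only: the manifest filename is an assumption, not a documented API.
is_built_charm_dir() {
    [ -f "$1/.build.manifest" ]
}

# Demonstrate against a throwaway directory that mimics built output.
dir=$(mktemp -d)
touch "$dir/.build.manifest"
if is_built_charm_dir "$dir"; then
    echo "refusing to build inside built output: $dir"
fi
rm -rf "$dir"
```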
[21:22] <nagyz> tvansteenburgh, of course it could, it just doesn't do it.
[21:23] <nagyz> tvansteenburgh, but more importantly, will the mappings still work? eg if I map haproxy to 4, but the controller becomes machine 6, will it install haproxy to 6?
[21:25] <tvansteenburgh> nagyz: yes, it will install services to the correct machines
[21:26] <nagyz> ok, thanks. I'll let it run through then we'll see :-)
[21:26] <nagyz> but I think it's confusing that if I map a machine to 3 and it gets to 6...
[21:26] <nagyz> while juju could very well deploy the machines in order
[21:28] <tvansteenburgh> nagyz: actually, i see another possible cause for what's happening
[21:28] <nagyz> happy to experiment :)
[21:28] <nagyz> and to rewrite the bundle to be better
[21:29] <tvansteenburgh> nagyz: deployer iterates over the machine defs in the bundle, requesting juju to create them one at a time. but...dictionaries aren't ordered
[21:29] <tvansteenburgh> nagyz: so it's possible this could be fixed by sorting the machine keys first
[21:30] <tvansteenburgh> nagyz: it'd be a change to deployer, not the bundle
[21:31] <nagyz> depending on what people use as keys. can the dict keys only be numbers in this case (for machines)?
[21:32] <tvansteenburgh> nagyz: it's not required they be ints, but that's the convention
[21:33] <nagyz> so the bundle should just work as-is then, right?
[21:33] <nagyz> regardless of the numbers juju status tells me.
[21:34] <tvansteenburgh> nagyz: yes
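The ordering issue and the proposed fix can be sketched in a few lines of Python (machine labels and constraints are made up for illustration): iterating a plain dict of machine definitions gives no guaranteed order on the Python versions deployer targeted, so requesting machines one at a time can map bundle labels to unexpected provider numbers; sorting the keys numerically first makes the request order deterministic.

```python
# Hypothetical machine section of a bundle, as deployer might see it.
machines = {
    "3": {"constraints": "mem=4G"},
    "1": {"constraints": "mem=2G"},
    "2": {"constraints": "mem=8G"},
}

# Proposed fix: request machines in sorted label order instead of
# whatever order the dict happens to yield.
for label in sorted(machines, key=int):
    spec = machines[label]
    # stand-in for deployer's per-machine add-machine request
    print(label, spec["constraints"])
```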
[21:35] <nagyz> alrighty, let me run it through
[21:35] <tvansteenburgh> run it through with a broadsword
[21:36] <nagyz> maybe the wrong expression ;-)
[21:36] <tvansteenburgh> lol
[22:02] <icey> wolsen: took care of your comments on ceph-osd
[22:02] <wolsen> icey: awesome thanks
[22:03] <wolsen> icey: doh, still typo - search for notr in config.yaml
[22:03] <icey> haha not anymore!
[22:07] <wolsen> haha
[22:19] <nagyz> all my lxc containers are stuck in agent-state: pending... is there a docu on how to debug it? :)
[22:19] <nagyz> after ssh I do see them starting up, running dhclient, but as I don't have DHCP on the network that will fail. is it possible to manually assign IPs to them via the deployment config?
[23:26] <nagyz> ok, it deployed \o/
[23:27] <nagyz> I'll do the more complex stuff tomorrow, thanks for all the help guys!