[04:19] <stub> lazyPower: I was wondering if charm-build should be VCS aware, and able to build directly to a branch (potentially the same repo the source branch is in). But I don't know enough yet to know how much effort that would involve.
[04:22] <lazyPower> stub - i think i missed a message somewhere and have lost context...
[04:24] <stub> oh, building in the wrong dir from 7 hours ago
[04:25] <lazyPower> Interesting thing is, it is *somewhat* vcs aware, insofar as it uses VCS to grab the resources listed in the directory
[04:25] <stub> just mumbling ideas I doubt I'll ever get a chance to investigate :)
[04:25] <lazyPower> i think sniffing for a manifest would be sufficient to quell the build error gotchya of sitting in the charm artifact dir trying to charm build
[04:26] <stub> I end up with crap in my built branches due to test runs and stuff, so getting the stuff 'officially' added and/or committed would mean I don't need to go through later and add/commit.
[04:26] <stub> And I imagine uploading to the charm store would have similar issues, unless that is smart enough to only upload what is in the manifest.
[04:27] <lazyPower> yeah, thats true. then again adding all that to a clean target, and running make clean before you do anything wrt pushing your charm, should be g2g
[04:27] <lazyPower> but thats an interesting idea, only including the artifacts in the manifest...
[04:27] <stub> Yeah, I do that so it isn't anything important. I do need to maintain the rules, so a little bit of maintenance.
[04:27] <lazyPower> that would be a pretty good way to trim the fat, unless you're binpacking in last minute dependencies to a "fat charm"
[04:27] <lazyPower> that could be problematic
[04:29] <stub> A command to add files to the manifest post-build might be better for that, since then it is explicit and you get a nice checksum embedded in the manifest.
[04:30] <lazyPower> ohhhh, i like that too
[04:30] <stub> heck, might even be some resources workflow involved here
[04:30] <lazyPower> ^
[04:30] <lazyPower> that sounds like the winner
[04:30] <lazyPower> manifest for charm code, everything else is treated as a resource
[04:30] <stub> I don't do fat charms so can't really comment ;)
[04:31] <lazyPower> I try not to, but i'm also not limited by an angry egress firewall
[04:31] <blahdeblah> On that note, if my charm is not getting the updated basic layer with "config.changed*" support, where should I be looking to troubleshoot it?  Is $JUJU_REPOSITORY the only place it should look for the layer?
[04:31] <lazyPower> make sure you dont have a local copy of layer:basic in $LAYER_PATH
[04:31] <blahdeblah> lazyPower: All of our egress firewalls are perfectly calm and rational. :-P
[04:31] <lazyPower> blahdeblah - shenanigans!
[04:32] <stub> blahdeblah: Its a LAYER_??? variable
[04:32] <lazyPower> there is a LAYER_PATH env var, yes
[04:32] <lazyPower> oh, nvm
[04:32] <blahdeblah> So, hypothetically, if $LAYER_PATH isn't set, where would my layer be coming from?
[04:32] <lazyPower> from the API, so it reaches out and clones it in the deps dir in $JUJU_REPOSITORY
[04:33] <stub> Oh... and charms.reactive is embedded by pip, so if you have a pip cache
[04:34] <blahdeblah> aaargh
[04:34] <stub> Oh, its def in base layer so pip doesn't matter
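To spell out the lookup order being discussed (an editor's sketch with example paths, assuming the charm-tools behaviour of that era): local layers in $LAYER_PATH take precedence, and anything not found locally is fetched from the API and cached under deps/ in $JUJU_REPOSITORY.

```shell
# Sketch of the environment discussed above; paths are examples, not requirements.
export JUJU_REPOSITORY=$HOME/charms          # built charms and the deps/ cache live here
export LAYER_PATH=$JUJU_REPOSITORY/layers    # local layers, e.g. layers/basic, win over remote ones

# With LAYER_PATH unset, charm build fetches layer:basic from the API
# and clones it into $JUJU_REPOSITORY/deps instead:
# charm build -o "$JUJU_REPOSITORY"          # (commented out: requires charm-tools)
```

So blahdeblah's stale layer:basic could come from either a local copy under $LAYER_PATH or a previously cached clone under deps/.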
[09:06] <jacekn> kjackal: hey, I think setting my collectd bug to "Fix Committed" did not work as expected. I can see my review is still showing as failing tests, they were not rerun. Also I think it probably belongs in the "Incoming Charms" section not "Charm Reviews"
[09:07] <kjackal> Hey jacekn , let me check
[09:08] <jacekn> kjackal: thanks. If my charm stays in "Charm Reviews" I think it can take a long time before somebody will pick it up, it's way down the list and some charms at the top have been waiting for well over a month for review
[09:08] <jacekn> and my review should be just a few lines
[09:10] <kjackal> wait up, there should not be any new round of review for your charm
[09:10] <kjackal> We are talking about this one: https://bugs.launchpad.net/charms/+bug/1538573
[09:10] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1538573>
[09:11] <kjackal> Yes, cool
[09:11] <kjackal> so here is where we are with this charm
[09:11] <kjackal> yesterday i did the review
[09:11] <kjackal> It was the third review iteration
[09:12] <kjackal> I did not find any issue so you got the green light from me
[09:13] <jacekn> so what happens next? I am looking at http://review.juju.solutions/ and it's way down the list there
[09:13] <kjackal> however, I am rather new in the team, so a more senior member will read my report and proceed with the process of promulgating the charm
[09:14] <jacekn> kjackal: I see so there is basically another process outside of review.juju.solutions that my charm is going through?
[09:15] <kjackal> your charm is part of a batch that was reviewed yesterday. The batch is almost finalised. We will send an update on the list with our progress
[09:16] <kjackal> So, you will not have to do anything, I will bring it up on our daily sync but it is a normal internal process
[09:16] <jacekn> kjackal: cool, thanks for explanation!
[09:17] <jacekn> kjackal: just a suggestion - maybe add something to the review queue to indicate that the charm review is "in progress" or something similar
[09:17] <kjackal> The list that we will send our update to is here: juju <juju@lists.ubuntu.com>
[09:18] <kjackal> Are you registered, there?
[09:18] <jacekn> yes I am
[09:18] <jacekn> cool thanks again for explaining this
[09:20] <kjackal> Great! Regarding the review queue, there will be a number of changes. We are working on improving the process, especially since now there is the need to review layers (in addition to "old-style" charms)
[10:03] <jamespage> beisner, gnuoy: I've switched over the official charm branches to the reverse-imported ones from github
[10:03] <jamespage> not that we no longer have different branches for trusty/precise
[10:03] <jamespage> figured out that magic as well...
[10:03] <gnuoy> s/not/note/ ?
[10:04] <jamespage> note
[10:04] <jamespage> indeed
[10:04] <jamespage> lol
[10:04] <gnuoy> ack
[10:04] <jamespage> gnuoy, https://code.launchpad.net/~openstack-charmers/+branches?field.category=OWNED&field.category-empty-marker=1&field.lifecycle=MATURE&field.lifecycle-empty-marker=1&field.sort_by=most+recently+changed+first&field.sort_by-empty-marker=1
[10:04] <jamespage> gnuoy, they show as precise charms as that's the current default series in the charms distro
[10:22] <magicaltrout> stub: ping
[10:22] <stub> magicaltrout: pong
[10:23] <magicaltrout> ooh hi, quick leadership q if you have 2 mins
[10:23] <stub> np
[10:23] <magicaltrout> okay so here: https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L90
[10:24] <magicaltrout> if I do a juju set-config carte_password=1234 or something it will get picked up on the next is_leader execution
[10:24] <magicaltrout> correct?
[10:25] <magicaltrout> that runs every 5 mins or something
[10:25] <stub> You are thinking of update-status, which I think runs every 5 mins with Juju 1.25
[10:25] <stub> At the moment, you seem to be resetting the credentials every hook.
[10:26] <stub> Which is fine, as none of them are random
[10:26] <magicaltrout> actually I lied, the leadership.changed was going to be my question, it is being called :P
[10:26] <magicaltrout> I have no question :)
[10:27] <stub> You probably should rely on update-status here. The config-changed hook is what gets invoked after a 'juju set'
[10:27] <stub> should not
[10:27] <magicaltrout> yes I set the credentials every time, but thats because its something that can be changed by set-config and needs to be propagated to the leader and the slaves because they need to know how to login
[10:29] <magicaltrout> hmm actually you have a point, as the nodes all share the same password, I guess on config change I could set it and trigger a restart
[10:29] <magicaltrout> actually check_running already does that for the password so it might be a bit of a noop anyway
[10:30] <stub> So at the moment, you run 'juju set'. The config-changed hook runs on all units. On the leader, change_leader is called. On the others, update_slave_config.
[10:30] <stub> Sorry - I'll start again
[10:31] <magicaltrout> hehe like I said, in reality I don't think I have a problem; my implementation might jump through a few extra hoops but I think it works, the logs look okay: https://gist.github.com/buggtb/3b65eb1672dc602c98ac
[10:32] <stub> The implementation seems fine.
[10:33] <stub> Apart from the bad choice of method names, but that is just opinion :)
[10:33] <magicaltrout> yeah thats just because they've evolved over time :)
[10:33] <magicaltrout> I'm gonna clean all that up in a bit
[10:33] <magicaltrout> they start doing one thing then cory_fu makes me rework it and they do something else :P
[10:36] <stub> All your units might restart at the same time atm. Is that a problem?
[10:37] <magicaltrout> restart?
[10:37] <magicaltrout> oh the service
[10:37] <magicaltrout> yeah i had wondered about that
[10:37] <magicaltrout> because the slaves register with the master
[10:38] <magicaltrout> currently i'm not sure what happens in the app if the master is unavailable, whether they retry or not
[10:38] <stub> If you want to avoid that, check out the coordinator layer
[10:38] <stub> I've only used it for top level charms though. This seems to be a layer?
[10:39] <stub> https://github.com/stub42/layer-coordinator has the docs
[10:39] <magicaltrout> yeah just reading the readme, thanks!
[10:38] <stub> Of course, rolling restarts mean your slaves are running for longer with outdated credentials so it might not be useful :)
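For readers following along: the pattern stub is describing (react to a password change, then serialize restarts via the coordinator layer) would look roughly like the following in charms.reactive style. This is untested editor pseudocode, not actual layer-pdi code; the `acquire()` call and `coordinator.granted.*` flag follow the layer-coordinator README linked above, and the config key and service name are guesses based on this conversation.

```python
# Untested sketch only -- names are illustrative, not the real charm's.
from charms import coordinator
from charms.reactive import when
from charmhelpers.core import host

@when('config.changed.carte_password')   # flag set by the base layer after 'juju set'
def password_changed():
    coordinator.acquire('restart')       # request the rolling-restart lock instead
                                         # of restarting immediately

@when('coordinator.granted.restart')     # only one unit holds this at a time
def rolling_restart():
    host.service_restart('carte')        # hypothetical service name
```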
[11:45] <magicaltrout> how do I copy files to a unit during an amulet test?
[11:45] <magicaltrout> can I run juju scp somehow?
[11:50] <sparkiegeek> magicaltrout: yes, you can - see https://github.com/juju/amulet/blob/master/amulet/sentry.py#L69 for how amulet itself uses juju scp
[11:51] <magicaltrout> aaah I misread the comment
[11:51] <magicaltrout> I thought it said "juju scp doesn't work"
[11:51] <magicaltrout> :)
[11:51] <magicaltrout> thanks sparkiegeek
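For reference, what amulet's sentry does under the hood is roughly this (a minimal sketch; the helper name is ours, not amulet's, and actually running it requires a bootstrapped juju environment):

```python
import subprocess

def juju_scp(src, unit, dest):
    """Copy a local file onto a unit the way amulet does internally:
    by shelling out to `juju scp`."""
    subprocess.check_call(['juju', 'scp', src, '{}:{}'.format(unit, dest)])
```

In an amulet test you would call this with the unit name from your deployment, e.g. `juju_scp('fixture.sql', 'mycharm/0', '/tmp/fixture.sql')`.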
[12:15] <jamespage> beisner, I think I'm always getting a 'full' amulet execution atm
[12:15] <jamespage> no initial smoke...
[13:44] <beisner> jamespage, indeed.  one too many triggers on that pipeline.  that should be fixed shortly.
[13:48] <tinwood> gnuoy, have you got a moment?  I'm having problems testing keystone.
[13:48] <gnuoy> tinwood, defo
[13:49] <tinwood> gnuoy, so I'm trying to run the 015-basic-trusty-icehouse and it's hanging because rabbitmq-server is failing.  Have you seen anything like that?
[13:50] <gnuoy> tinwood, yes. It'll be reverse DNS
[13:50] <tinwood> ah, as in the rabbitmq can't work out its domain name
[13:51] <tinwood> I see.  I've been having DNS problems - that gives me a new direction to look at.
[13:52] <jcastro> We'll do office hours in about 1 hour!
[13:57] <magicaltrout> I'm trying to test my charm, but it involves changing ports/passwords etc
[13:58] <magicaltrout> whats the best way to reset the state between each test?
[13:58] <magicaltrout> just like an amulet remove() type teardown?
[14:37] <jamespage> gnuoy, coreycb: hey can we chat about aodh briefly?
[14:37] <coreycb> jamespage, yes
[14:37] <gnuoy> I have a 15min window before child shuffling
[14:41] <jamespage> coreycb, gnuoy: ok so sitrep - alarming is split from ceilometer -> aodh
[14:41] <gnuoy> yep
[14:42] <coreycb> yep
[14:42] <jamespage> ceilometer have removed old code from mitaka; aodh has its own api etc...
[14:42] <gnuoy> splendid
[14:42] <jamespage> so I suggest we drop aodh from the ceilometer charm altogether and produce a new aodh one
[14:42] <jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg
[14:42] <gnuoy> yep
[14:42] <jamespage> or we can piggy back then together
[14:42] <gnuoy> split gets my vote
[14:42] <jcastro> ^^^ Hangout for the office hours in ~15 minutes
[14:42] <jamespage> gnuoy, ok I'll do the work to drop the aodh bits for now
[14:43] <coreycb> split makes sense to me since separate apis
[14:43] <gnuoy> jamespage, as coreycb has said before, it's the upgrade that's worrying
[14:43] <coreycb> jamespage, that sounds good, I put those there before I realized there were separate apis
[14:44] <jamespage> gnuoy, well people will lose alarming from ceilometer
[14:45] <jamespage> we'll need a release note to add aodh
[14:45] <gnuoy> jamespage, what about existing data in the db?
[14:46] <tinwood> thanks gnuoy, reverse dns fixed, tests proceeding.
[14:46] <gnuoy> tinwood, tip top
[14:46] <magicaltrout> blimey laptop upgrade from trusty to xenial didn't completely brick it
[14:48] <tinwood> gnuoy, yay! new pause/resume works with keystone :)
[14:49]  * tinwood does a little dance
[14:49] <sparkiegeek> tinwood: woot!
[14:49] <beisner> wraskelly wrabbits
[14:50]  * tinwood now has to work out how to do the git review bit.
[14:57] <marcoceppi> OFFICE HOURS! Starting in just 5 mins
[14:58] <rick_h__> marcoceppi: linky me please
[14:58] <rick_h__> marcoceppi: so I can join and get setup
[14:59] <marcoceppi> http://ubuntuonair.com/ - https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg
[15:01] <magicaltrout> hello bundletester environment resets....
[15:01] <cholcombe> question about charmhelpers related_units.  It looks like from the code it's valid to call this function without any arguments.  However i'm getting a CalledProcessError saying I must specify a relation id.  Maybe the API changed?
[15:01] <magicaltrout> says it happens after every test
[15:02] <magicaltrout> in python land is a single method a test? or is it class level?
[15:02] <cholcombe> magicaltrout, every def fn is a test
[15:02] <magicaltrout> hmm
[15:02] <magicaltrout> thanks cholcombe
[15:02] <cholcombe> magicaltrout, i think the class is just an easy way to group related tests
[15:03] <magicaltrout> yup, same as java-land then, cool
[15:03] <cholcombe> right
[15:03]  * magicaltrout goes off to find the cause of run away procs then
[15:04] <cholcombe> lazypwr, do you know if related_units can be called without any args?
[15:05] <magicaltrout> rick_h__: is that a vanity mirror above your head? ;)
[15:06] <magicaltrout> ooh mic stuff
[15:11] <rick_h__> magicaltrout: yea, do I sound decent? :)
[15:11] <magicaltrout> hehe
[15:11] <rick_h__> but love the vanity mirror idea :)
[15:11] <magicaltrout> better than my dodgy videos
[15:11] <magicaltrout> I like a mirror, juju grooming
[15:13] <lazyPower> cholcombe -  nah i'm pretty sure you have to either a) only call that when you're in a relation-* hook, b) provide the rel_id so it can query for active conversations
[15:20] <magicaltrout> marcoceppi: for those of us used to the gui, there should be a rendering of juju status within the UI, i find myself flipping back and forth
[15:21] <beisner> https://github.com/openstack/?utf8=%E2%9C%93&query=charm-
[15:22] <beisner> https://github.com/openstack-charmers/openstack-community/blob/master/README.dev-charms.md
[15:26] <beisner> https://jujucharms.com/u/openstack-charmers-next/
[15:26] <marcoceppi> magicaltrout: good point, I'll bring it up
[15:32] <bdx_> jcastro, thanks for stressing this
[15:33] <jcastro> \o/
[15:33] <magicaltrout> testing people
[15:34] <magicaltrout> if you set a config option in amulet
[15:35] <magicaltrout> whats the python way of waiting a while for it to take effect? :P
[15:37] <beisner> magicaltrout, in the openstack charms, we address that by waiting for the extended workload status message, which is where the charm declares itself done, ready and settled.  that requires that all of the charms in the deployment possess such logic.
[15:37] <beisner> ex.  https://github.com/openstack/charm-keystone/blob/master/tests/basic_deployment.py#L39
[15:38] <aisrael> tvansteenburgh: I'm seeing an LXC test failure that looks like it's a problem in the environment: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2862/console
[15:39] <beisner> magicaltrout, we do that to avoid race conditions in the tests.  naturally, you want the charm to be done doing its work before poking at it.
[15:39] <tvansteenburgh> aisrael: thanks, fixed
[15:39] <aisrael> tvansteenburgh: no no, thank *you*
[15:39] <magicaltrout> beisner: good idea thanks!
[15:40] <beisner> marcoceppi - is there a vanilla wait-for-readiness amulet method?
[15:40] <bdx_> officehours: so, each openstack tenant would corresponds to a separate cloud, or separate controller?
[15:41] <bdx_> marcoceppi: yes
[15:41] <tvansteenburgh> beisner, yes
[15:42] <tvansteenburgh> beisner https://github.com/juju/amulet/blob/master/amulet/sentry.py#L307
[15:42] <tvansteenburgh> beisner, also https://github.com/juju/amulet/blob/master/amulet/sentry.py#L380
[15:42] <beisner> tvansteenburgh, right, that waits for a specific status.  what i'm asking is actually for charms that don't use extended status.
[15:42] <beisner> such as mongodb or mysql
[15:43] <bdx_> rick_h: awsome!
[15:43] <sparkiegeek> sounds fishy to me
[15:43] <bdx_> awesome
[15:44] <tvansteenburgh> beisner wait_for_status doesn't depend on extended status i don't think
[15:44] <jamespage> beisner, hey so I have a number of 'rollup' reviews up to drop old release configuration files...
[15:44] <beisner> tvansteenburgh, yep looks like that's the one magicaltrout ^
[15:44] <jamespage> could you take a peek? I've been doing a recheck-full
[15:45] <magicaltrout> rick_h__: are you going to make cloud metadata pluggable eventually? so we can inject new instance types etc without upgrading Juju?
[15:46] <tvansteenburgh> beisner, magicaltrout: there is also sentry.wait() which waits for hooks to complete
[15:46] <tvansteenburgh> https://github.com/juju/amulet/blob/master/amulet/sentry.py#L345
[15:47] <beisner> ah yes, thanks tvansteenburgh.  we've wrapped all of those amulet helpers in other test helpers so i've lost memory of their names. :-)
[15:47] <beisner> jamespage, ok so on those, clear to land if all passing?
[15:47] <magicaltrout> hmm
[15:47] <magicaltrout> thanks beisner tvansteenburgh
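The amulet helpers linked above (`sentry.wait`, `wait_for_status`, `wait_for_messages`) all boil down to a poll-until-predicate loop; here is a generic, library-free sketch of that pattern (an editor's illustration, not amulet's actual code):

```python
import time

def wait_for(predicate, timeout=300, interval=5):
    """Poll predicate() until it returns True or timeout seconds elapse.
    This is the shape of amulet's wait helpers, minus the juju-status parsing."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise RuntimeError('timed out waiting for condition')
```

In a test, the predicate would check `juju status` output (or the extended workload status message beisner mentions) for the expected state.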
[15:47] <jamespage> beisner, yah
[15:47] <bdx_> rick_h, officehours: what is the status of lxd <--> network spaces?
[15:48] <jamespage> bdx_, next two weeks I think...
[15:48] <jamespage> so not quite but almost
[15:48] <bdx_> jamespage: awesome! exciting!
[15:48] <lazyPower> YES!
[15:48] <lazyPower> \O/
[15:48] <lazyPower> rick_h__ - is it hard being that awesome?
[15:49] <magicaltrout> lol
[15:51] <beisner> cool stuff, bigdata :)   https://jujucharms.com/big-data
[15:51] <rick_h__> lazyPower: exhausting :P
[15:51] <jamespage> beisner, just trying to de-cruft old stuff in between things...
[15:51] <rick_h__> lazyPower: helps to have tons of awesome people around
[15:51] <lazyPower> i'll let mbruzek know you said that ;D
[15:51]  * rick_h__ runs fast to keep up with them all
[15:51] <beisner> jamespage, +1 for spring cleaning
[15:52] <mbruzek> heyo
[15:53] <deanman> Hello, I'm trying to use juju with the manual provider to connect to an already bootstrapped environment but for some reason it uses my personal key instead of the juju key. Is there a way to define which key to be used when running juju command ?
[15:54] <tinwood> jamespage, do I need to be an Openstack Foundation member before doing a git review request?
[15:54] <jamespage> tinwood, yes
[15:55] <tinwood> jamespage, as a person, I'm guessing - i.e. I have to physically join myself.
[15:55] <bdx_> officehours: thanks everyone!
[15:55] <jamespage> tinwood, yes
[15:56] <tinwood> Thanks jamespage, I'm staring at the form now :)
[15:57] <cherylj> hey lazyPower, I'm going to co-opt your bug 1553059 for just the last part of your problem - providing a way to clean up the cache.  I'm fixing the help text now, but I'd like to use that bug to track the cache.yaml cleanup
[15:57] <mup> Bug #1553059: Help output when killing a `shared model` is incorrect <destroy-model> <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1553059>
[15:57] <lazyPower> oh you betchya
[15:57] <lazyPower> its a polymorphic issue anyway :)
[15:57] <cherylj> so I'll be changing the title
[15:57] <lazyPower> duck-typed problems ftw
[15:57] <bdx_> officehours: As a side note, I'm giving a presentation of juju-openstack-ha at a portland openstack meetup this month .... I need to hammer out a few issues I'm experiencing with different services under ha .... is there someone that would be willing to work with me on this a bit so my demo will be legit?
[16:03] <magicaltrout> tinwood: beisner just some terminology clarification if I want to catch a password change for example, I can set that block under message and then use Amulet wait_for_status?
[16:03] <magicaltrout> sorry not tinwood !:P
[16:03] <magicaltrout> tvansteenburgh: --^
[16:03] <magicaltrout> or is status literally the thing in the services that relays the status?
[16:04] <magicaltrout> (I realise the irony in that question)
[16:04] <tvansteenburgh> magicaltrout: not sure what you mean by "catch it"
[16:04] <tvansteenburgh> you mean you want to change config and wait for the change to complete?
[16:04] <magicaltrout> tvansteenburgh: I need to change a config option and wait for it to actually happen on the unit before proceeding in my test
[16:04] <Gil> ERROR cannot retrieve charm "cs:~hazmat/trusty/etcd-6": cannot get archive: Get https://api.jujucharms.com/charmstore/v4/~hazmat/trusty/etcd-6/archive: dial tcp 162.213.33.121:443: connection timed out
[16:05] <tvansteenburgh> magicaltrout: then you want to make the change and then call senty.wait()
[16:06] <magicaltrout>  /o\
[16:06] <magicaltrout> alrighty thats easier :P
[16:06] <tvansteenburgh> magicaltrout: more specifically, deployment.sentry.wait()
[16:07] <magicaltrout> on a slightly different subject, there are no amulet python api docs published yet are there?
[16:07] <tinwood> np magicaltrout :)
[16:07] <magicaltrout> I had a prod around but I've ended up grepping the source
[16:07] <tvansteenburgh> magicaltrout: sadly, no :(
[16:08] <tvansteenburgh> magicaltrout: there are some docs but they are lagging behind the source
[16:08] <magicaltrout> no probs
[16:08]  * tvansteenburgh makes card to generate api docs for amulet
[16:11] <icey> beisner: it's merged: https://review.openstack.org/#/c/287446/
[16:11] <icey> !
[16:11] <beisner> indeed \o/
[16:12] <marcoceppi> jcastro: let me know when the video lands
[16:12] <jcastro> marcoceppi: Almost, you done editing the notes?
[16:12] <jcastro> I need to paste those in
[16:15] <jcastro> https://www.youtube.com/watch?v=zPLW7cGrrjE&feature=youtu.be
[16:16] <jcastro> marcoceppi: ^^^ unlisted yet so we can fix the description
[16:16] <marcoceppi> I don't have edit access
[16:17] <marcoceppi> jcastro ^
[16:17] <jcastro> you signed in? You're listed as a "manager"
[16:17] <marcoceppi> jcastro: which account?
[16:18] <jcastro> https://plus.google.com/u/0/b/103184405956510785630/+MarcoCeppi/
[16:18] <jcastro> that one
[16:20] <marcoceppi> OIC
[16:26] <magicaltrout> can someone tell me in simplistic terms
[16:26] <magicaltrout> what an environment reset entails in bundletester?
[16:27] <magicaltrout> its supposed to be a machine tear down isn't it? from my look at the code
[16:29] <marcoceppi> magicaltrout: yeah, bundletester -TT is the 1.x equivalent of destroy-model in 2.0
[16:29] <marcoceppi> it kills machines, services, units, etc, but keeps the bootstrap node
[16:29] <magicaltrout> and according to the README reset is true by default
[16:29] <magicaltrout> so after every test I should get a fresh machine?
[16:32] <jcastro> magicaltrout: are you indexing the video or you want me to do that?
[16:32] <magicaltrout> not me :P
[16:40] <marcoceppi> jcastro: I'll index
[16:43] <beisner> jamespage, include ceilometer in those +2s?
[16:44] <magicaltrout> aww yeah, when you think its deploying new code.... but it aint \o/
[16:52] <stub> magicaltrout: Every bundletester target/test gets a fresh environment. What sort of tests are you writing?
[16:53] <magicaltrout> hey stub, yeah it helps if I deploy more than boiler plate I guess :)
[17:00] <jcastro> anyone know if we have the juju->juju2 update-alternatives bits documented? Can't seem to find anything
[17:05] <lazyPower> jcastro - i know that marco did a blog post on it, but i dont think we have it officially documented in any capacity, no
[17:05]  * rick_h__ can't recall where it was, release notes or what
[17:05] <marcoceppi> release notes and my blog, lazyPower rick_h__
[17:05] <magicaltrout> marcoceppi.com
[17:05] <magicaltrout> what he said
[17:06] <lazyPower> marcoceppi do i constitute a committee now?
[17:09] <jamespage> beisner, do you want me to pickup the xenial-enable branches on monday?
[17:10] <jamespage> I can work through those but they all need a fullcheck
[17:10] <beisner> jamespage, yep, fulls are running.  it's not time-sensitive.  i was mainly needing something to double check triggers and workflow and thought i'd see if those are ready to flip on.
[17:11] <beisner> i think they are, but we shall see :-)
[17:11] <jamespage> beisner, \o/
[17:11] <jamespage> beisner, nice work this week btw - I think the transition has gone pretty smoothly
[17:11] <lazyPower> Gil - getting along ok w/ the new pointer to ETCD?
[17:12] <beisner> jamespage, thanks :-)  and thanks for all your good work on it too!
[17:13] <Gil> lazyPower no.  i'm working on getting juju upgraded to 2.0.x which you indicated was a pre-req, and that's not going well either .  I'm on 1.25.3 atm
[17:13] <lazyPower> Gil - ok instead of going through a 2.0 upgrade path
[17:14] <lazyPower> you can also build the charm from source, theres only 1 repository to clone, then you can use a local charm until 2.0 lands as stable
[17:14] <lazyPower> unless you *want* to beta :) Then in which case, disco! i'll lend a hand where i can
[17:14] <Gil> uhhhh
[17:14] <lazyPower> lets go for the path of least resistance
[17:15] <Gil> sounds good
[17:15] <lazyPower> have you built any charms before? literally with `charm build`
[17:16] <lazyPower> Gil - if you clone this repo https://github.com/chuckbutler/layer-etcd and run `charm build -o $JUJU_REPOSITORY`, it will output the charm you were upgrading to 2.0 to get into your charm repo (assuming that env var is set); you can then deploy with `juju deploy local:trusty/etcd`
[17:16] <Gil> no but I think I could - i watched marco's youtube classes and did some work bringing charms down locally to get started on building an "oraclexe" charm
[17:17] <Gil> cloning repos I do know
[17:17] <Gil> from github
[17:17] <Gil> that's a start
[17:19] <lazyPower> \o/ woo, its a great start
[17:19] <lazyPower> the rest is downhill from there
[17:20] <marcoceppi> Step 1 clone, Step 2 ... step 3 profit
[17:22] <Gil> export JUJU_REPOSITORY=/home/gstanden/charms/trusty
[17:22] <Gil> charm build -o $JUJU_REPOSITORY
[17:22] <Gil> returned 4 lines of output, no errors
[17:23] <Gil> Added charm "local:trusty/etcd-0" to the environment.
[17:23] <Gil> seems to have worked
[17:24] <aisrael> tvansteenburgh: Looks like a few items in the review queue that are merged but still showing up: http://review.juju.solutions/review/2429 and http://review.juju.solutions/review/2435 are the ones I've seen
[17:24] <Gil> machine 2 hasn't fired up yet though...
[17:25] <lazyPower> Gil - ah, interesting. i'm surprised that did work, $JUJU_REPOSITORY is typically the directory just before series. it must have recursed into the tree and found it.
[17:25] <tvansteenburgh> aisrael: thanks, updated
[17:25] <Gil> to what should I have set JUJU_REPOSITORY?
[17:26] <aisrael> kwmonroe: Are you still working on the zulu8 review? (it's currently locked)
[17:26] <kwmonroe> yeah aisrael
[17:26] <lazyPower> based off your last paste - /home/gstanden/charms
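To spell out the layout lazyPower is describing (an editor's sketch using the paths from this exchange): $JUJU_REPOSITORY is the directory *above* the series directory, so:

```shell
# Example layout implied by the exchange above.
export JUJU_REPOSITORY=/home/gstanden/charms   # the dir *above* trusty/, xenial/, etc.
# `charm build -o $JUJU_REPOSITORY` then writes the built charm to
#   /home/gstanden/charms/trusty/etcd
# and caches remote layers in /home/gstanden/charms/deps
# (deps/ is a layer cache, not a deployable charm); after which:
# juju deploy local:trusty/etcd
```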
[17:26] <aisrael> kwmonroe: ack, thanks!
[17:26] <kwmonroe> np, it's really a no-op, just need to move tests from ./trunk to his ./source branch.  it'll be done today.  thanks for checking ;)
[17:27] <kwmonroe> well, not so much a no-op, just a not-much-op
[17:27] <Gil> something may still not be right ... normally when I deploy a non-lxc charm a machine starts up (maas).  The metal is not starting - and I know that my maas setup is solid so I'm wondering if something is still not right
[17:27] <lazyPower> yeah that doesn't sound promising Gil
[17:27] <Gil> juju status does show the etcd deployment status
[17:30] <lazyPower> Gil - however the machine seems to be left in pending?
[17:33] <cholcombe> marcoceppi, quick question about juju relation-list.  On ceph-mon it's not showing the related units and just blowing up
[17:33] <marcoceppi> cholcombe: when are you calling relation-list?
[17:33] <cholcombe> hookenv.related_units()
[17:33] <cholcombe> oh when
[17:33] <Gil> I ran "juju remove-service etcd" then deleted machine 2 using the juju gui, then retried after setting JUJU_REPOSITORY to /home/gstanden/charms
[17:34] <cholcombe> marcoceppi, i'm calling it inside a config-changed hook
[17:34] <Gil> then I got: WARNING failed to load charm at "/home/gstanden/charms/trusty/deps": open /home/gstanden/charms/trusty/deps/metadata.yaml: no such file or directory
[17:34] <Gil> so I put JUJU_REPOSITORY back to what I had before...and then retried...and it did launch a machine this time
[17:34] <lazyPower> Gil - thats known behavior. the deps directory is a cache that gets used when building layers
[17:34] <marcoceppi> cholcombe: you have to give it the relation you want to list
[17:34] <lazyPower> it pulls in the remote interface-layers, and required layers that you dont have locally to assemble the charm - all of which are listed on http://interfaces.juju.solutions
[17:34] <cholcombe> marcoceppi, ah ok interesting
[17:35] <Gil>  machine: "3"
[17:37] <marcoceppi> cholcombe: so you can only call relation_list without parameters inside of a relation hook, otherwise you have to pass it a context
[17:38] <cholcombe> marcoceppi, got it.  inside config-changed i'm just trying to see who else is in the cluster and get a list of unit names
[17:39] <aisrael> kwmonroe: I saw you were having troubles with the xenial vagrant images. Did you run into this? http://pastebin.ubuntu.com/15282513/
[17:39] <aisrael> kwmonroe: and if so, did you find a workaround?
[17:40] <kwmonroe> yup aisrael: https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1538547
[17:40] <mup> Bug #1538547: vagrant box fails with private network interface <livecd-rootfs (Ubuntu):Confirmed for utlemming> <https://launchpad.net/bugs/1538547>
[17:41] <kwmonroe> aisrael: but that didn't stop me from being able to 'vagrant ssh xenial'
[17:41] <aisrael> kwmonroe: aha, I can get in.
[17:42] <kwmonroe> aisrael: yeah, looks like it's borked, but it's just messin with you.  once you get in, you can add ubuntu-xenial to the 127.0.0.1 line of /etc/hosts to get rid of the "failure in name resolution"
[17:42] <aisrael> kwmonroe: heh, I just hit that bug.
[17:47] <cholcombe> marcoceppi, are the relation ids that relation-list wants just the names of the relations?
[17:50] <marcoceppi> cholcombe: yes
[17:53] <cholcombe> marcoceppi, looks like hookenv.relations()['mon']['mon:1'].keys() returns the right information
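marcoceppi's point (outside a relation hook you must name the relation explicitly) can be wrapped in a tiny helper. This is an editor's sketch: the helper name is ours, 'mon' comes from this conversation, and since charmhelpers is only importable on a deployed unit the import is deferred into the function.

```python
def peer_units(reltype='mon'):
    """List units on a named relation from any hook (e.g. config-changed),
    where the zero-argument related_units() would blow up."""
    # Deferred import: charmhelpers is only available on a deployed unit.
    from charmhelpers.core import hookenv
    units = []
    for rid in hookenv.relation_ids(reltype):
        units.extend(hookenv.related_units(relid=rid))
    return units
```

This avoids hard-coding a relation id like 'mon:1', which marcoceppi warns against.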
[18:00] <neiljerram> jamespage, hi - a question if I may about the new Git/Gerrit-based process: will changes at https://git.openstack.org/cgit/openstack/charm-neutron-api be mirrored into https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/next ?
[18:03] <jamespage> neiljerram, yes but there is some lag
[18:03] <neiljerram> jamespage, Or alternatively, is there a way in a bundle .yaml file to specify using the Git source directly?
[18:03] <marcoceppi> cholcombe: mon:1 is not a hard coded key
[18:03] <marcoceppi> don't hard code that
[18:03] <cholcombe> marcoceppi, i won't
[18:04] <marcoceppi> cholcombe: is this a peers relation?
[18:04] <neiljerram> jamespage, I'm guessing you mean around once per day?
[18:04] <jamespage> neiljerram, ultimately those will end up under https://jujucharms.com/u/openstack-charmers-next/ and
[18:04] <jamespage> beisner, whats the schedule on the git->bzr sync?
[18:04] <jamespage> neiljerram, I think its more frequent than that
[18:04] <cholcombe> marcoceppi, yeah
[18:04] <marcoceppi> cholcombe: k
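The advice above (don't hard-code 'mon:1') can be sketched as a small helper. This is an illustrative sketch, not code from the channel: the nested-dict shape is an assumption based on what charmhelpers' `hookenv.relations()` returns ({relation_name: {relation_id: {unit_name: settings}}}), and the name `peer_unit_names` is made up here — verify the layout against your charmhelpers version.

```python
# Sketch: collect unit names for a relation without hard-coding an id
# like 'mon:1'. Assumes the {name: {rel_id: {unit: settings}}} shape
# that charmhelpers' hookenv.relations() returns (an assumption to check).

def peer_unit_names(relations, relation_name):
    """Return sorted unit names across every id of the named relation."""
    units = set()
    for rel_id, members in relations.get(relation_name, {}).items():
        # keys of the inner dict are unit names; values are their settings
        units.update(members)
    return sorted(units)
```

Inside a hook, `peer_unit_names(hookenv.relations(), 'mon')` would then replace the hard-coded `hookenv.relations()['mon']['mon:1'].keys()` lookup.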
[18:05] <neiljerram> jamespage, Well that's plenty frequent enough, anyway.  Thanks.
[18:05] <beisner> jamespage, every 4hrs
[18:05] <jamespage> okies
[18:06] <jamespage> neiljerram, ultimately when all the bits for charm push are in place, all of those branches are redundant and charm-store publication becomes a post-commit task in the ci train
[18:06] <neiljerram> jamespage, Right, so then the process will go from git.openstack.org directly to  https://jujucharms.com/u/openstack-charmers-next/ ?
[18:20] <Gil> lazyPower  - there is a lot I can do without getting into etcd at this point.  I will just work on my oraclexe charm for now on my stable trusty.  I've a lot to learn still
[18:20] <lazyPower> Gil - ok, if you need any help with etcd/flannel/etc. let me know and i'll lend a hand :)
[18:21] <lazyPower> thanks for taking a look
[18:21] <Gil> cool thanks!
[18:32] <beisner> neiljerram, it has a few hops and syncs along the way, but yes, the tip of master of a charm at git.openstack.org flows to github.com to Launchpad and then to the openstack-charmers-next space in the charm store. :-)
[18:32] <beisner> unless it goes through albuquerque, then i'm not sure
[18:32] <neiljerram> beisner, thanks!
[18:32] <neiljerram> :-)
[18:37] <beisner> neiljerram, yw.  thanks for those maintenance updates on the neutron-api calico bits.
[19:21] <magicaltrout> right fscking tests.... lets get you working
[19:33] <magicaltrout> charmers
[19:34] <marcoceppi> magicaltrout you rang?
[19:34] <magicaltrout> can i run a test class on a unit thats already running?
[19:34] <magicaltrout> rather than wait for bundletester to mess around
[19:37] <tvansteenburgh> magicaltrout: if you run your amulet test against an env that already has the services deployed, it'll use them instead of deploying new ones
[19:39] <tvansteenburgh> magicaltrout: note that if you have multiple test files (in the tests/ dir), bundletester will reset the environment between files
[19:40] <tvansteenburgh> magicaltrout: you can prevent that by setting `reset: false` in the tests.yaml file
[19:40] <tvansteenburgh> https://github.com/juju-solutions/bundletester#testsyaml
[19:59] <magicaltrout> ah thanks tvansteenburgh never noticed you could just execute them, assumed i'd have to dump some more stuff in there
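For reference, the `reset: false` option tvansteenburgh mentions lives in the charm's tests/tests.yaml. A minimal fragment might look like this (other keys documented in the bundletester README are omitted):

```yaml
# tests/tests.yaml
reset: false   # don't reset the environment between test files
```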
[20:13] <magicaltrout> I'm clearly doing something stupid: is there anything wrong with this line: https://gist.github.com/buggtb/5620a4b5abf403e7b997#file-brokentest-py-L39 ?
[20:16] <lazyPower> magicaltrout - nope, that looks correct to me
[20:16] <lazyPower> magicaltrout - matter of fact, i do something similar here https://github.com/chuckbutler/docker-charm/blob/master/tests/10-deploy-test#L68
[20:17] <magicaltrout> bah! thanks lazyPower
[20:21] <magicaltrout> oooh wtf
[20:21] <magicaltrout> this is where the whole compile, deploy, pull-from-Launchpad stuff gets messed up
[20:22] <magicaltrout> somehow with bundletester I have current rev tests but old charm reactive code
[20:22]  * magicaltrout sobs into his rum and soda
[20:24] <jrwren> magicaltrout: its ok. its not worth drinking about. I too have experienced this pain.
[20:24] <magicaltrout> hehe
[20:25] <magicaltrout> i sit there wondering why the config isn't executing
[20:25] <magicaltrout> it is, but the old code didn't work properly and it runs and fails
[20:25] <magicaltrout> silly automated tests
[20:25] <magicaltrout> clearly not my fault in the slightest :P
[20:26]  * magicaltrout double checks launchpad before running this time
[20:43] <magicaltrout> okay wtf
[20:43] <magicaltrout> does bundletester cache charms or something?
[20:44] <magicaltrout> why does
[20:44] <magicaltrout> bundletester -t lp:~f-tom-n/charms/trusty/pentahodataintegration/trunk
[20:44] <magicaltrout> when it spins up give me old code?
[20:45] <magicaltrout> the code in my reactive pdi.py is stale
[20:45] <magicaltrout> and i have no clue why
[21:06] <lazyPower> there's no older copy of the charm hanging around somewhere in $JUJU_REPOSITORY?
[21:06] <lazyPower> i've had issues where i thought i was going to be clever and just mv a charmdir, cloned, and kicked off a test only to have the one i mv'd get deployed
[22:36] <tvansteenburgh> magicaltrout: did you figure it out?
[22:38] <magicaltrout> yeah i've just this second had one of those "duh" moments
[22:39] <magicaltrout> my test is what I have locally but the charm itself is pulled out of the charm store...... *massive face palm*
[22:41] <magicaltrout> i guess this is why fridays were invented
[22:41] <tvansteenburgh> heh
[22:42] <magicaltrout> its just one of those things isn't it "wtf, my code is in front of me... why is it not right damnit?!" "oh yeah because its pulling the source from somewhere else"......
[22:42] <magicaltrout> bah
[22:43] <magicaltrout> got another talk accepted at ApacheCon that will use juju \o/
[22:43] <tvansteenburgh> magicaltrout: nice
[22:43] <magicaltrout> yeah, i have a lot of writing to do :P
[22:43] <tvansteenburgh> you know you can run bundletester against your local source right?
[22:44] <magicaltrout> well i kinda figured thats what it was there for. but my setup was "borrowed" from kwmonroe's ubuntu-dev env and has cls.d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration') wired in the top
[22:45] <tvansteenburgh> also, to answer your earlier question about the env reset, it destroys everything except the bootstrap node between each test file execution
[22:45] <tvansteenburgh> (same as `juju-deployer -TT`)
[22:48] <tvansteenburgh> magicaltrout: is that the same charm in which these tests reside?
[22:48] <tvansteenburgh> you can just do cls.d.add('pentahodataintegration')
[22:49] <magicaltrout> thanks tvansteenburgh that will work when it gets pushed up to the canonical run tests as well I assume?
[22:49] <tvansteenburgh> yep
[22:49] <magicaltrout> marvelous
[22:56] <tych0> howdy cats. if i do a, `juju bootstrap gce gce`, i get a, `ERROR cloud "gce" not found`. this is with a current trunk build of juju with patches on top, and my environments.yaml has suitable config (it worked in 1.25 and older 2.0s). i must be doing something dumb, but i'm not sure what
[23:01] <rick_h__> tych0: environments.yaml is no longer used
[23:01] <tych0> rick_h__: yeah, i figured as much. are there docs on how to migrate to whatever the new thing is?
[23:02] <rick_h__> tych0: see juju list-clouds, list-credentials and the beta1 release notes
[23:02] <rick_h__> tych0: release notes emailed
[23:03] <tych0> rick_h__: cool, thanks
[23:03] <rick_h__> cherylj: do we have a start to the "getting started" tych0 could try?
[23:04] <magicaltrout> okay tvansteenburgh if I run a config change, what does sentry.wait actually wait for?
[23:04] <tvansteenburgh> magicaltrout: hooks to finish executing
[23:08] <magicaltrout> hmm fair enough
[23:08] <magicaltrout> the config gets executed okay now, but the test gets to the next step too early
[23:08] <tvansteenburgh> magicaltrout: happy to look at your code if you want
[23:09] <magicaltrout> thanks tvansteenburgh
[23:09] <magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/tests/01-deploy.py#L39
[23:09] <magicaltrout> 39 and 40 need to stop a process and wait until its actually happened
[23:10] <magicaltrout> that process is stopped via https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L53
[23:10] <magicaltrout> which then drops down to line 152
[23:10] <magicaltrout> just kills a pid
[23:10] <magicaltrout> but the next line in my test checks its stopped and it returns a pid
[23:11] <magicaltrout> but if you login to the machine it has actually been destroyed
[23:17] <tvansteenburgh> magicaltrout: i don't follow. the config change is supposed to kill the proc right?
[23:17] <tvansteenburgh> and is it getting killed?
[23:18] <magicaltrout> tvansteenburgh: that is correct
[23:18] <magicaltrout> but in the test when I check for the process, it returns a pid
[23:18] <tvansteenburgh> oic
[23:19] <magicaltrout> but if you login to the box after its failed and check for the pid its not there
[23:19] <magicaltrout> so its like the next line executes too soon
[23:19] <magicaltrout> before the pkill has finished doing its thing
[23:21] <tvansteenburgh> yeah, i wonder if wait() is getting called and returns before the hook even starts
[23:22] <tvansteenburgh> i wonder if you could use wait_for_messages() instead
[23:22] <tvansteenburgh> https://github.com/juju/amulet/blob/master/amulet/sentry.py#L380
[23:23] <tvansteenburgh> although then you might end up changing your charm to support the test, which is backwards
[23:23] <magicaltrout> well i don't mind
[23:23] <magicaltrout> nothings finalised
[23:24] <magicaltrout> so extended status, is that like when i do status_set('active', 'PDI Installed. Carte Server Disabled.')
[23:24] <magicaltrout> its the blurb afterwards?
[23:24] <tvansteenburgh> yep
[23:25] <magicaltrout> okay so if I wait for the disabled message it should hang because that certainly comes after the pkill
[23:26] <tvansteenburgh> magicaltrout: cool, try it!
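The race being debugged here (the test checks for the pid before the pkill has actually finished) is the usual poll-instead-of-check-once situation. A generic sketch of the polling side, deliberately independent of amulet (the helper name `wait_until` is made up for illustration):

```python
import time

def wait_until(predicate, timeout=60, interval=2):
    """Poll predicate() until it is truthy or the timeout expires.

    Guards against the race above: the config-changed hook can return
    before the pkill'd process has fully exited, so a single check in
    the test may still see a stale pid. Returns True on success,
    False if the timeout is hit.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

In amulet terms, `d.sentry.wait_for_messages(...)` plays the same role: it blocks until the workload status message set via status_set (e.g. the 'Carte Server Disabled.' blurb above) confirms the hook got past the pkill.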
[23:32] <magicaltrout> other dumb question tvansteenburgh do I need to do anything other than talisman = Talisman([], timeout=self.timeout)
[23:32] <magicaltrout> to get a talisman object?
[23:33] <tvansteenburgh> magicaltrout: generally you don't create your own, you get it from d.sentry
[23:33] <tvansteenburgh> d.sentry is a Talisman object
[23:33] <magicaltrout> ah right
[23:59] <tych0> rick_h__: thanks, i got it sorted. my last question is do you (or anyone) have any idea where i can put the enable-os-upgrade: false flag now to speed up deploys?
[23:59] <tych0> arosales: might know? ^^