=== lazyPower_ is now known as lazyPower === lazyPower is now known as lazypwr === lazypwr is now known as lazyPower === scuttle is now known as scuttle|afk === scuttle|afk is now known as scuttlemonkey === lazyPower is now known as lazypwr === lazypwr is now known as lazyPower [04:19] lazyPower: I was wondering if charm-build should be VCS aware, and able to build directly to a branch (potentially the same repo the source branch is in). But I don't know enough yet to know how much effort that would involve. [04:22] stub - i think i missed a message somewhere and have lost context... [04:24] oh, building in the wrong dir from 7 hours ago [04:25] Interesting thing is, it is *somewhat* vcs aware, insofar that it uses VCS to grab the resources listed in the directory [04:25] just mumbling ideas I doubt I'll ever get a chance to investigate :) [04:25] i think sniffing for a manifest would be sufficient to quell the build error gotchya of sitting in the charm artifact dir trying to charm build [04:26] I end up with crap in my built branches due to test runs and stuff, so getting the stuff 'officially' added and/or committed would mean I don't need to go through later and ad/commit. [04:26] And I imagine uploading to the charm store would have similar issues, unless that is smart enough to only upload what is in the manifest. [04:27] yeah, thats true. then again adding all that to a clean target, and running make clean before you do anything wrt pushing your charm, should be g2g [04:27] but thats an interesting idea, only including the artifacts in the manifest... [04:27] Yeah, I do that so it isn't anything important. I do need to maintain the rules, so a little bit of maintenance. [04:27] that would be a pretty good way to trim the fat, unless you're binpacking in last minute dependencies to a "fat charm" [04:27] that could be problematic [04:29] A command to add files to the manifest post-build might be better for that, since then it is explicit and you get a nice checksum embedded in the manifest. [04:30] ohhhh, i like that too [04:30] heck, might even be some resources workflow involved here [04:30] ^ [04:30] that sounds like the winner [04:30] manfiest for charm code, everything else is treated as a resource [04:30] I don't do fat charms so can't really comment ;) [04:31] I try not to, but i'm also not limited by an angry egress firewall [04:31] On that note, if my charm is not getting the updated basic layer with "config.changed*" support, where should I be looking to troubleshoot it? Is $JUJU_REPOSITORY the only place it should look for the layer? [04:31] make sure you dont have a local copy of layer:basic in $LAYER_PATH [04:31] lazyPower: All of our egress firewalls are perfectly calm and rational. :-P [04:31] blahdeblah - schenanigans! [04:32] blahdeblah: Its a LAYER_??? variable [04:32] there is a LAYER_PATH env var, yes [04:32] oh, nvm [04:32] So, hypothetically, if $LAYER_PATH isn't set, where would my layer be coming from? [04:32] from the API, so it reaches out and clones it in the deps dir in $JUJU_REPOSITORY [04:33] Oh... and charms.reactive is emedded by pip, so if you have a pip cache [04:34] aaargh [04:34] Oh, its def in base layer so pip doesn't matter === lazyPower is now known as lazypwr [09:06] kjackal: hey, I think setting my collectd bug to "Fix commited" did not work as expected. I can see my review is still showing as failing tests, they were not rerun. 
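For reference on the `config.changed*` support asked about above: once a charm is rebuilt against a layer:basic recent enough to set those reactive states, a handler can watch `config.changed.<option>` directly. A minimal sketch, assuming such a rebuild; the `port` option and handler names are hypothetical, not taken from any charm in this log:

```python
# reactive/myservice.py -- minimal sketch; assumes the charm was rebuilt
# against a layer:basic that sets the config.changed.<key> states.
from charmhelpers.core import hookenv
from charms.reactive import when


@when('config.changed.port')        # hypothetical option from config.yaml
def port_changed():
    cfg = hookenv.config()
    hookenv.log('port changed from {} to {}'.format(
        cfg.previous('port'), cfg['port']))
    # re-open ports / rewrite configs / restart the service here


@when('config.changed')             # fires when any option changed this hook
def any_config_changed():
    hookenv.log('some config option changed')
```

If a rebuilt charm still misses these states, the usual suspect from the discussion above is a stale local copy of layer:basic sitting in $LAYER_PATH.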
Also I think it probably belongs in the "Incoming Charms" section not "Charm Reviews" [09:07] Hey jacekn , let me check [09:08] kjackal: thanks. If my charm stays in "Charm Reviews" I think it can take a long time before somebody will pick it up, it's way down the list and some charms at the top have been waiting for well over a month for review [09:08] and my review shoudl be just a few lines === zenlot6 is now known as zenlot [09:10] wait up, there should not be any new round of review for your charm [09:10] We are talking about this one: https://bugs.launchpad.net/charms/+bug/1538573 [09:10] Bug #1538573: New collectd subordinate charm [09:11] Yes, cool [09:11] so here is where we are with this charm [09:11] yesterday i did the review [09:11] It was the third review iteration [09:12] I did not find any issue so you got the green light from me [09:13] so what happens next? I am looking at http://review.juju.solutions/ and it's way down the list there [09:13] however, I am rather new in the team, so a more senior member will read my report and proceed wit the process of promulgating the charm [09:14] kjackal: I see so there is basically another process outside of review.juju.solutions that my charm is going through? [09:15] your charm is part of a batch that was reviewed yesterday. The batch is almost finalised. We will send an update on the list with our progress [09:16] So, you will not have to do anything, I will bring it up on our daily sync but it is a normal internal process [09:16] kjackal: cool, thanks for explanation! [09:17] kjackal: just a suggestion - maybe add somethjign to the review queue to indicate that the charm review is "in progress" or something similar [09:17] The list that we will send our update is here: juju [09:18] Are you registered, there? [09:18] yes I am [09:18] cool thanks again for explaining this [09:20] Great! Regarding the review queue, there will be a number of changes. We are working on improving the process especially since now there is the need to review layers (in addition to "old-style" charms)) [10:03] beisner, gnuoy: I've switched over the offical charm branches to the reverse-imported ones from github [10:03] not that we no longer have different branches for trusty/precise [10:03] figured out that magic as well... [10:03] s/not/note/ ? [10:04] note [10:04] indeed [10:04] lol [10:04] ack [10:04] gnuoy, https://code.launchpad.net/~openstack-charmers/+branches?field.category=OWNED&field.category-empty-marker=1&field.lifecycle=MATURE&field.lifecycle-empty-marker=1&field.sort_by=most+recently+changed+first&field.sort_by-empty-marker=1 [10:04] gnuoy, they show as precise charms as that the current default series in charms distro [10:22] stub: ping [10:22] magicaltrout: pong [10:23] ooh high, quick leadership q if you have 2 mins [10:23] np [10:23] okay so here: https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L90 [10:24] if I do a juju set-config carte_password=1234 or something it will get picked up on the next is_leader execution [10:24] correct? [10:25] that runs every 5 mins or something [10:25] You are thinking of update-status, which I think runs every 5 mins with Juju 1.25 [10:25] At the moment, you seem to be resetting the credentials every hook. [10:26] Which is fine, as none of them are random [10:26] actually I lied, the leadership.changed was going to be my question, it is being called :P [10:26] I have no question :) [10:27] You probably should rely on update-status here. 
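A sketch of the pattern stub and magicaltrout are discussing above (the leader publishes the shared `carte_password`, the other units pick it up when it changes), assuming the charm includes layer:leadership, which provides the `leadership.*` states and the `leader_set`/`leader_get` helpers. The handler names are illustrative only, not the layer-pdi code:

```python
# reactive/pdi_credentials.py -- illustrative sketch, assuming layer:leadership.
from charmhelpers.core import hookenv
from charms.leadership import leader_get, leader_set
from charms.reactive import when


@when('leadership.is_leader', 'config.changed.carte_password')
def publish_password():
    # Only the leader may call leader-set; the other units are notified via
    # leader-settings-changed, surfaced as leadership.changed.* states.
    leader_set(carte_password=hookenv.config()['carte_password'])


@when('leadership.changed.carte_password')
def apply_password():
    password = leader_get('carte_password')
    hookenv.log('updating carte credentials on this unit')
    # rewrite the carte config with `password` and restart the service here
```

Restarting every unit at once is the caveat stub raises a little later; serializing that is what his coordinator layer is for.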
The config-changed hook is what gets invoked after a 'juju set' [10:27] should not [10:27] yes I set the credentials everytime, but thats because its something that can be changed by set-config and needs to be propogated to the leader and the slaves because they need to know how to login [10:29] hmm actually you have a point, as the nodes all share the same password, I guess on config change I could set it and trigger a restart [10:29] actually check_running already does that for the password so it might be a bit of a noop anyway [10:30] So at the moment, you run 'juju set'. The config-changed hook runs on all units. On the leader, change_leader is called. On the others, update_slave_config. [10:30] Sory - I'll start again [10:31] hehe like I said, in reality I don't think i have a problem, my implementation might jump through a few extra hoops but I think it works the logs look okay: https://gist.github.com/buggtb/3b65eb1672dc602c98ac [10:32] The implementation seems fine. [10:33] Apart from the bad choice of method names, but that is just opinion :) [10:33] yeah thats just because they've evolved over time :) [10:33] I'm gonna clean all that up in a bit [10:33] they start doing one thing then cory_fu makes me rework it and they do something else :P [10:36] All your units might restart at the same time atm. Is that a problem? [10:37] restart? [10:37] oh the service [10:37] yeah i had wondered about that [10:37] because the slaves register with the master [10:38] currently i'm not sure what happens in the app if the master is unavailable, whether they retry or not [10:38] If you want to avoid that, check out the coordinator layer [10:38] I've only used it for top level charms though. This seems to be a layer? [10:39] https://github.com/stub42/layer-coordinator has the docs [10:39] yeah just reading the readme, thanks! [10:41] Of course, rolling restarts mean your slave are running for longer with outdated credentials so it might not be useful :) [11:45] how do I copy files to a unit during an amulet test? [11:45] can I run juju scp somehow? [11:50] magicaltrout: yes, you can - see https://github.com/juju/amulet/blob/master/amulet/sentry.py#L69 for how amulet itself uses juju scp [11:51] aaah I misread the comment [11:51] I thought it said "juju scp doesn't work" [11:51] :) [11:51] thanks sparkiegeek [12:15] beisner, I think I'm always getting a 'full' amulet execution atm [12:15] no initial smoke... === rogpeppe2 is now known as rogpeppe === nagyz is now known as angyz === angyz is now known as nagyz [13:44] jamespage, indeed. one too many triggers on that pipeline. that should be fixed shortly. [13:48] gnuoy, have you got a moment? I'm having problems testing keystone. [13:48] tinwood, defo [13:49] gnuoy, so I'm trying to run the 015-basic-trusty-icehouse and it's hanging because rabbitmq-server is failing. Have you seen anything like that? [13:50] tinwood, yes. It'll be reverse DNS [13:50] ah, as in the rabbitmq can't work out its domain name [13:51] I see. I've been having DNS problems - that gives me a new direction to look at. [13:52] We'll do office hours in about 1 hour! [13:57] I'm trying to test my charm, but it involves changing ports/passwords etc [13:58] whats the best way to reset the state between each test? [13:58] just like an amulet remove() type teardown? [14:37] gnuoy, coreycb: hey can we chat about aodh briefly? 
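On the question above about copying files to a unit during an amulet test: amulet itself shells out to `juju scp` (which is what the sentry.py link shows), and a test can do the same directly. A rough sketch; the unit name and file paths are made up for illustration:

```python
# tests/helpers.py -- rough sketch; unit and file names are placeholders.
import subprocess


def copy_to_unit(unit, source, dest):
    """Copy a local file onto a deployed unit via `juju scp`."""
    subprocess.check_call(['juju', 'scp', source, '{}:{}'.format(unit, dest)])


# usage inside a test, once the deployment is up:
#   copy_to_unit('pdi/0', 'tests/files/sample.ktr', '/tmp/sample.ktr')
```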
[14:37] jamespage, yes [14:38] I have a 15mins window before child shuffling [14:41] coreycb, gnuoy: ok so sitrep - alarming is split from ceilometer -> aodh [14:41] yep [14:42] yep [14:42] ceilometer have removed old code from mitaka; aodh has its own api etc... [14:42] splendid [14:42] so I suggest we drop aodh from the ceilometer charm altogether and produce a new aodh one [14:42] https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg [14:42] yep [14:42] or we can piggy back then together [14:42] split gets my vote [14:42] ^^^ Hangout for the office hours in ~15 minutes [14:42] gnuoy, ok I'll do the work to drop the aodh bits for now [14:43] split makes sense to me since separate apis [14:43] jamespage, as coreycb has said before, it's the upgrade that;s is worrying [14:43] jamespage, that sounds good, I put those there before I realized there were separate apis [14:44] gnuoy, well people will lose alarming from ceilometer [14:45] we'll need to release note to add aodh [14:45] jamespage, what about existing data in the db? [14:46] thanks gnuoy, reverse dns fixed, tests proceeding. [14:46] tinwood, tip top [14:46] blimey laptop upgrade from trusty to xenial didn't completely brick it [14:48] gnuoy, yay! new pause/resume works with keystone :) [14:49] * tinwood does a little dance [14:49] tinwood: woot! [14:49] wraskelly wrabbits [14:50] * tinwood now has to work out how to do the git review bit. [14:57] OFFICE HOURS! Starting in just 5 mins [14:58] marcoceppi: linky me please [14:58] marcoceppi: so I can join and get setup [14:59] http://ubuntuonair.com/ - https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg [15:01] hello bundletester environment resets.... [15:01] question about charmhelpers related_units. It looks like from the code it's valid to call this function without any arguments. However i'm getting a CalledProcessError saying I must specify a relation id. Maybe the API changed? [15:01] says it happens after every test [15:02] in python land is a test a single method a test? or is it class level? [15:02] magicaltrout, every def fn is a test [15:02] hmm [15:02] thanks cholcombe [15:02] magicaltrout, i think the class is just an easy way to group related tests [15:03] yup, same as java-land then, cool [15:03] right [15:03] * magicaltrout goes off to find the cause of run away procs then [15:04] lazypwr, do you know of related_units can be called without any args? === lazypwr is now known as lazyPower === lazyPower is now known as lazypwr [15:05] rick_h__: is that a vanity mirror above your head? ;) [15:06] ooh mic stuff === lazypwr is now known as lazyPower [15:11] magicaltrout: yea, do I sound decent? 
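Picking up magicaltrout's earlier question about what counts as a test: as cholcombe says, every `def` is a test and the class only groups them, with one shared deployment made in `setUpClass`. A bare-bones sketch of the kind of amulet module bundletester discovers under tests/ (the file, charm and process names are placeholders):

```python
#!/usr/bin/env python3
# tests/10-deploy -- bare-bones skeleton; 'mycharm' and the checks are placeholders.
import unittest

import amulet


class TestDeployment(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # One deployment shared by every test_* method in this class.
        cls.d = amulet.Deployment(series='trusty')
        cls.d.add('mycharm')
        cls.d.setup(timeout=900)
        cls.d.sentry.wait()            # let hooks finish before testing

    def test_unit_deployed(self):      # each def test_* runs as its own test
        self.assertIn('mycharm/0', self.d.sentry.unit)

    def test_process_running(self):
        output, code = self.d.sentry['mycharm'][0].run('pgrep -f myservice')
        self.assertEqual(code, 0)


if __name__ == '__main__':
    unittest.main()
```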
:) [15:11] hehe [15:11] but love the vanity mirror idea :) [15:11] better than my dodgy videos [15:11] I like a mirror, juju grooming [15:13] cholcombe - nah i'm pretty sure you have to either a) only call that when you're in a relation-* hook, b) provide the rel_id so it can query for active conversations [15:20] marcoceppi: for those of us use to the gui, there should be a rendering of juju status within the UI, i find myself flipping back and forth [15:21] https://github.com/openstack/?utf8=%E2%9C%93&query=charm- [15:22] https://github.com/openstack-charmers/openstack-community/blob/master/README.dev-charms.md [15:26] https://jujucharms.com/u/openstack-charmers-next/ [15:26] magicaltrout: good point, I'll bring it up [15:32] jcastro, thanks for stressing this [15:33] \o/ [15:33] testing people [15:34] if you set a config option in amulet [15:35] whats the python way of waiting a while for it to take effect? :)P [15:37] magicaltrout, in the openstack charms, we address that by waiting for the extended workload status message, which is where the charm declares itself done, ready and settled. that requires that all of the charms in the deployment possess such logic. [15:37] ex. https://github.com/openstack/charm-keystone/blob/master/tests/basic_deployment.py#L39 [15:38] tvansteenburgh: I'm seeing an LXC test failure that looks like it's a problem in the environment: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2862/console [15:39] magicaltrout, we do that to avoid race conditions in the tests. naturally, you want the charm to be done doing its work before poking at it. [15:39] aisrael: thanks, fixed [15:39] tvansteenburgh: no no, thank *you* [15:39] beisner: good idea thanks! [15:40] marcoceppi - is there a vanilla wait-for-readiness amulet method? [15:40] officehours: so, each openstack tenant would corresponds to a separate cloud, or separate controller? [15:41] marcoceppi: yes [15:41] beisner, yes [15:42] beisner https://github.com/juju/amulet/blob/master/amulet/sentry.py#L307 [15:42] beisner, also https://github.com/juju/amulet/blob/master/amulet/sentry.py#L380 [15:42] tvansteenburgh, right, that waits for a specific status. what i'm asking is actually for charms that don't use extended status. [15:42] such as mongodb or mysql [15:43] rick_h: awsome! [15:43] sounds fishy to me [15:43] awesome [15:44] beisner wait_for_status doesn't depend on extended status i don't think [15:44] beisner, hey so I have a number of 'rollup' reviews up to drop old release configuration files... [15:44] tvansteenburgh, yep looks like that's the one magicaltrout ^ [15:44] could you take a peek? I've been doing a recheck-full [15:45] rick_h__: are you going to make cloud metadata pluggable eventually? so we can inject new instance types etc without upgrading Juju? [15:46] beisner, magicaltrout: there is also sentry.wait() which waits for hooks to complete [15:46] https://github.com/juju/amulet/blob/master/amulet/sentry.py#L345 [15:47] ah yes, thanks tvansteenburgh. we've wrapped all of those amulet helpers in other test helpers so i've lost memory of their names. :-) [15:47] jamespage, ok so on those, clear to land if all passing? [15:47] hmm [15:47] thanks beisner tvansteenburgh [15:47] beisner, yah [15:47] rick_h, officehours: what is the status of lxd <--> network spaces? [15:48] bdx_, next two weeks I think... [15:48] so not quite but almost [15:48] jamespage: awesome! exciting! [15:48] YES! [15:48] \O/ [15:48] rick_h__ - is it hard being that awesome? 
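The "extended workload status" beisner describes above only works if the charm itself declares readiness through status-set, so tests have a settled message to wait on. A sketch of that charm-side half; the reactive states and message text here are hypothetical:

```python
# reactive/status.py -- sketch of the charm-side part of the pattern:
# report workload status so `juju status` (and tests) can tell when the
# unit considers itself done.
from charmhelpers.core.hookenv import status_set
from charms.reactive import when, when_not


@when('myservice.installed', 'myservice.configured')   # hypothetical states
def report_ready():
    # Tests can block on this exact message.
    status_set('active', 'Unit is ready')


@when_not('myservice.configured')
def report_waiting():
    status_set('maintenance', 'Configuring myservice')
```

The test-side counterpart is the wait_for_status/wait_for_messages helpers tvansteenburgh links just above.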
[15:49] lol [15:51] cool stuff, bigdata :) https://jujucharms.com/big-data [15:51] lazyPower: exhausting :P [15:51] beisner, just trying to de-cruft old stuff in between things... [15:51] lazyPower: helps to have tons of awesome people around [15:51] i'll let mbruzek know you said that ;D [15:51] * rick_h__ runs fast to keep up with them all [15:51] jamespage, +1 for spring cleaning [15:52] heyo [15:53] Hello, I'm trying to use juju with manual provide to connect to an already bootstrapped environment but for some reason it uses my personal key instead of the juju key. Is there a way to define which key to be used when running juju command ? [15:54] jamespage, do I need to be an Openstack Foundation member before doing a git review request? [15:54] tinwood, yes [15:55] jamespage, as a person, I'm guessing - i.e. I have to physically join myself. [15:55] officehours: thanks everyone! [15:55] tinwood, yes [15:56] Thanks jamespage, I'm staring at the form now :) [15:57] hey lazyPower, I'm going to co-opt your bug 1553059 for just the last part of your problem - providing a way to clean up the cache. I'm fixing the help text now, but I'd like to use that bug to track the cache.yaml cleanup [15:57] Bug #1553059: Help output when killing a `shared model` is incorrect [15:57] oh you betchya [15:57] its a polymorphic issue anyway :) [15:57] so I'll be changing the title [15:57] duck-typed problems ftw [15:57] officehours: As a side note, I'm giving a presentation of juju-openstack-ha at a portland openstack meetup this month .... I need to hammer out a few issues I'm experiencing with different service under ha .... is there someone that would be willing to work with me on this a bit so as my demo might be legit? [16:03] tinwood: beisner just some terminology clarification if I want to catch a password change for example, I can set that block under message and then use Amulet wait_for_status? [16:03] sorry not tinwood !:P [16:03] tvansteenburgh: --^ [16:03] or is status literally the thing in the services that relays the status? [16:04] (I realise the irony in that question) [16:04] magicaltrout: not sure what you mean by "catch it' [16:04] you mean you want to change config and wait for the change to complete? [16:04] tvansteenburgh: I need to change a config option and wait for it to actually happen on the unit before proceeding in my test [16:04] ERROR cannot retrieve charm "cs:~hazmat/trusty/etcd-6": cannot get archive: Get https://api.jujucharms.com/charmstore/v4/~hazmat/trusty/etcd-6/archive: dial tcp 162.213.33.121:443: connection timed out [16:05] magicaltrout: then you want to make the change and then call senty.wait() === JoseeAntonioR is now known as jose [16:06] /o\ [16:06] alrighty thats easier :P [16:06] magicaltrout: more specifically, deployment.sentry.wait() [16:07] on a slightly different subject, there are no amulet python api docs published yet are there? [16:07] np magicaltrout :) [16:07] I had a prod around but I've ended up grepping the source [16:07] magicaltrout: sadly, no :( [16:08] magicaltrout: there are some docs but they are lagging behind the source [16:08] no probs [16:08] * tvansteenburgh makes card to generate api docs for amulet [16:11] beisner: it's merged: https://review.openstack.org/#/c/287446/ [16:11] ! [16:11] indeed \o/ [16:12] jcastro: let me know when the video lands [16:12] marcoceppi: Almost, you done editing the notes? 
[16:12] I need to paste those in [16:15] https://www.youtube.com/watch?v=zPLW7cGrrjE&feature=youtu.be [16:16] marcoceppi: ^^^ unlisted yet so we can fix the description [16:16] I don't have edit access [16:17] jcastro ^ [16:17] you signed in? You're listed as a "manager" [16:17] jcastro: which account? [16:18] https://plus.google.com/u/0/b/103184405956510785630/+MarcoCeppi/ [16:18] that one [16:20] OIC [16:26] can someone tell me in simplistic terms [16:26] what an environment reset entails in bundletester? [16:27] its supposed to be a machine tear down isn't it? from my look at the code [16:29] magicaltrout: yeah, bundletester -TT is the 1.x equivalent of destroy-model in 2.0 [16:29] it kills machines, services, units, etc, but keeps the bootstrap node [16:29] and according to the README reset is true by default [16:29] so after every test I should get a fresh machine? === lazyPower is now known as lazypwr [16:32] magicaltrout: are you indexing the video or you want me to do that? [16:32] not me :P === redelmann is now known as rudi|comida === lazypwr is now known as lazyPower === lazyPower is now known as lazypwr [16:40] jcastro: I'll index [16:43] jamespage, include ceilometer in those +2s? [16:44] aww yeah, when you think its deploying new code.... but it aint \o/ === lazypwr is now known as lazyPower [16:52] magicaltrout: Every bundletester target/test gets a fresh environment. What sort of tests are you writing? [16:53] hey stub, yeah it helps if I deploy more than boiler plate I guess :) [17:00] anyone know if we have the juju->juju2 update-alternatives bits documented? Can't seem to find anything [17:05] jcastro - i know that marco did a blog post on it, bu ti dont think we have it officially documented in any capacity, no [17:05] * rick_h__ can't recall where it was, release notes or what [17:05] release notes and my blog lazyPower rick_h__ lazyPower [17:05] marcoceppi.com [17:05] what he said [17:06] marcoceppi do i constitute a committee now? [17:09] beisner, do you want me to pickup the xenial-enable branches on monday? [17:10] I can work through those but they all need a fullcheck [17:10] jamespage, yep, fulls are running. it's not time-sensitive. i was mainly needing something to double check triggers and workflow and thought i'd see if those are ready to flip on. [17:11] i think they are, but we shall see :-) [17:11] beisner, \o/ [17:11] beisner, nice work this week btw - I think the transition has gone pretty smoothly [17:11] Gil - getting along ok w/ the new pointer to ETCD? [17:12] jamespage, thanks :-) and thanks for all your good work on it too! [17:13] lazyPower no. i'm working on getting juju upgraded to 2.0.x which you indicated was a pre-req, and that's not going well either . I'm on 1.25.3 atm [17:13] Gil - ok instead of going through a 2.0 upgrade path [17:14] you can also build the charm from source, theres only 1 repository to clone, then you can use a local charm until 2.0 lands as stable [17:14] unless you *want* to beta :) Then in which case, disco! i'll lend a hand where i can [17:14] uhhhh [17:14] lets go for the path of least resistance [17:15] sounds good [17:15] have you built any charms before? 
literally with `charm build` [17:16] Gil - if you clone this repo https://github.com/chuckbutler/layer-etcd and run `charm build -o $JUJU_REPOSITORY` it will output the charm you're upgrading to 2.0 to get to in your charm repo (assuming that env var is set) you can then deploy with `juju deploy local:trusty/etcd` [17:16] no but I think I could - i watched marco's youtube classes and did some work bringing charms down locally to get started on building an "oraclexe" charm [17:17] cloning repos I do know [17:17] from github [17:17] that's a start [17:19] \o/ woo, its a great start [17:19] the rest is downhill from there [17:20] Step 1 clone, Step 2 ... step 3 profit [17:22] export JUJU_REPOSITORY=/home/gstanden/charms/trusty [17:22] charm build -o $JUJU_REPOSITORY [17:22] returned 4 lines of output, no errors [17:23] Added charm "local:trusty/etcd-0" to the environment. [17:23] seems to have worked [17:24] tvansteenburgh: Looks like a few items in the review queue that are merged but still showing up: http://review.juju.solutions/review/2429 and http://review.juju.solutions/review/2435 are the ones I've seen [17:24] machine 2 hasn't fired up yet though... [17:25] Gil - ah, interesting. i'm surprised that did work, $JUJU_REPOSITORY is typically the directory just before series. it must have recused into the tree and found it. [17:25] aisrael: thanks, updated [17:25] to what should I have set JUJU_REPOSITORY? [17:26] kwmonroe: Are you still working on the zulu8 review? (it's currently locked) [17:26] yeah aisrael [17:26] based off your last paste - /home/gstanden/charms [17:26] kwmonroe: ack, thanks! [17:26] np, it's really a no-op, just need to move tests from ./trunk to his ./source branch. it'll be done today. thanks for checking ;) [17:27] well, not so much a no-op, just a not-much-op [17:27] something may still not be right ... normally when I deploy a non-lxc charm a machine starts up (maas). The metal is not starting - and I know that my maas setup is solid so I'm wondering if something is still not right [17:27] yeah that doesn't sound promising Gil [17:27] juju status does show the etcd deployment status === natefinch is now known as natefinch-lunch [17:30] Gil - however the machine seems to be left in pending? [17:33] marcoceppi, quick question about juju relation-list. On ceph-mon it's not showing the related units and just blowing up [17:33] cholcombe: when are you calling realtion-list? [17:33] hookenv.related_units() [17:33] oh when [17:33] I ran "juju remove-service etcd" then deleted machine 2 using the juju gui, then retried after setting JUJU_REPOSITORY to /home/gstanden/charms [17:34] marcoceppi, i'm calling it inside a config-changed hook [17:34] then I got: WARNING failed to load charm at "/home/gstanden/charms/trusty/deps": open /home/gstanden/charms/trusty/deps/metadata.yaml: no such file or directory [17:34] so I put JUJU_REPOSITORY back to what I had before...and then retried...and it did launch a machine this time [17:34] Gil - thats known behavior. 
the deps directory is a cache that gets used when building layers [17:34] cholcombe: you have to give it the relation you want to list [17:34] it pulls in the remote interface-layers, and required layers that you dont have locally to assemble the charm - all of which are listed on http://interfaces.juju.solutions [17:34] marcoceppi, ah ok interesting [17:35] machine: "3" [17:37] cholcombe: so you can only call relation_list without parameters inside of a relation hook, otherwise you have to pass it a context [17:38] marcoceppi, got it. inside config-changed i'm just trying to see who else is in the cluster and get a list of unit names [17:39] kwmonroe: I saw you were having troubles with the xenial vagrant images. Did you run into this? http://pastebin.ubuntu.com/15282513/ [17:39] kwmonroe: and if so, did you find a workaround? [17:40] yup aisrael: https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1538547 [17:40] Bug #1538547: vagrant box fails with private network interface [17:41] aisrael: but that didn't stop me from being able to 'vagrant ssh xenail' [17:41] kwmonroe: aha, I can get in. === rudi|comida is now known as redelmann [17:42] aisrael: yeah, looks like it's borked, but it's just messin with you. once you get in, you can add ubuntu-xenial to the 127.0.0.1 line of /etc/hosts to get rid of the "failure in name resolution" [17:42] kwmonroe: heh, I just hit that bug. [17:47] marcoceppi, are the relation ids that relation-list wants just the names of the relations? [17:50] cholcombe: yes [17:53] marcoceppi, looks like hookenv.relations()['mon']['mon:1'].keys() returns the right information [18:00] jamespage, hi - a question if I may about the new Git/Gerrit-based process: will changes at https://git.openstack.org/cgit/openstack/charm-neutron-api be mirrored into https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/next ? [18:03] neiljerram, yes but there is some lag [18:03] jamespage, Or alternatively, is there a way in a bundle .yaml file to specify using the Git source directly? [18:03] cholcombe: mon:1 is not a hard coded key [18:03] don't hard code that [18:03] marcoceppi, i won't [18:04] cholcombe: is this a peers relation? [18:04] jamespage, I'm guessing you mean around once per day? [18:04] neiljerram, utimately those will end up under https://jujucharms.com/u/openstack-charmers-next/ and [18:04] beisner, whats the schedule on the git->bzr sync? [18:04] neiljerram, I think its more frequent than that [18:04] marcoceppi, yeah [18:04] cholcombe: k [18:05] jamespage, Well that's plenty frequent enough, anyway. Thanks. [18:05] jamespage, every 4hrs [18:05] okies [18:06] neiljerram, ultimately when all the bit for charm push are in place, all of those branches are redundant and charm-store publication becomes a post commit task in the ci train [18:06] jamespage, Right, so then the process will go from git.openstack.org directly to https://jujucharms.com/u/openstack-charmers-next/ ? [18:20] lazyPower - there is alot I can do without getting into etcd at this point. I will just work on my oraclexe charm for now on my stable trusty. I've alot to learn still [18:20] Gil - ok, if you need any help with etcd/flannel/etc. let me know and i'll lend a hand :) [18:21] thanks for taking a look [18:21] cool thanks! [18:32] neiljerram, it has a few hops and sync along the way, but yes, the tip of master of a charm at git.openstack.org flows to github.com to lauchpad and then to the openstack-charmers-next space in the charm store. 
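Tying off the related_units/relation-list thread: outside a relation hook the relation id has to be passed in explicitly, and it can be discovered rather than hard-coding a key like 'mon:1'. A sketch assuming a peer relation named 'mon', as in the ceph-mon case above; the helper name is illustrative:

```python
# sketch for use from config-changed (or any non-relation hook); assumes a
# peer relation named 'mon' as in the ceph-mon discussion above.
from charmhelpers.core import hookenv


def peer_units():
    """Return every unit on the 'mon' relation without hard-coding an id."""
    units = []
    for rid in hookenv.relation_ids('mon'):     # e.g. ['mon:1'], discovered
        units.extend(hookenv.related_units(rid))
    return units
```

Inside an actual mon-relation-* hook, plain `hookenv.related_units()` works with no arguments because the relation context is implicit.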
:-) [18:32] unless it goes through albuquerque, then i'm not sure [18:32] beisner, thanks! [18:32] :-) [18:37] neiljerram, yw. thanks for those maintenance updates on the neutron-api calico bits. === natefinch-lunch is now known as natefinch [19:21] right fscking tests.... lets get you working [19:33] charmers [19:34] magicaltrout you rang? [19:34] can i run a test class on a unit thats already running? [19:34] rather than wait for bundletester to mess around [19:37] magicaltrout: if you run your amulet test against an env that already has the services deployed, it'll use them instead of deploying new ones [19:39] magicaltrout: note that if you have multiple test files (in the tests/ dir), bundletester will reset the environment between files [19:40] magicaltrout: you can prevent that by setting `reset: false` in the tests.yaml file [19:40] https://github.com/juju-solutions/bundletester#testsyaml === ejat_ is now known as ejat [19:59] ah thanks tvansteenburgh never noticed you could just execute them, assumed i've have to dump some more stuff in there [20:13] I'm clearly doing something stupid: is there anything wrong with this line: https://gist.github.com/buggtb/5620a4b5abf403e7b997#file-brokentest-py-L39 ? [20:16] magicaltrout - nope, that looks correct to me [20:16] magicaltrout - matter of fact, i do something similar here https://github.com/chuckbutler/docker-charm/blob/master/tests/10-deploy-test#L68 [20:17] bah! thanks lazyPower [20:21] oooh wtf [20:21] this is where the whole compile deploy, suck from Launchpad stuff gets messed up [20:22] somehow with bundletester I have current rev tests but old charm reactive code [20:22] * magicaltrout sobs into his rum and soda [20:24] magicaltrout: its ok. its not worth drinking about. I too have experienced this pain. [20:24] hehe [20:25] i sit there wondering why the config isn't executing [20:25] it is, but the old code didn't work properly and it runs and fails [20:25] silly automated tests [20:25] clearly not my fault in the slightest :P [20:26] * magicaltrout double checks launchpad before running this time [20:43] okay wtf [20:43] does bundletester cache charms or something? [20:44] why does [20:44] bundletester -t lp:~f-tom-n/charms/trusty/pentahodataintegration/trunk [20:44] when it spin up give me old code? [20:45] the code in my reactive pdi.py is stale [20:45] and i have no clue why [21:06] there no older copy of the charm hanging around somewhere in $JUJU_REPOSITORY? [21:06] i've had issues where i thought i was going to be clever and must mv a charmdir, cloned, and kicked off a test only to have the one i mv'd get deployed [22:36] magicaltrout: did you figure it out? [22:38] yeah i've just this second had one of those "duh" moments [22:39] my test is what I have locally but the charm iself is pulled out of the charm store...... *massive face palm* [22:41] i guess this is why fridays were invented [22:41] heh [22:42] its just one of those things isn't it "wtf, my code is infront of me... why is it not right damnit?!" "oh yeah because its pulling the source from somewhere else"...... [22:42] bah [22:43] got another talk accepted at ApacheCon that will use juju \o/ [22:43] magicaltrout: nice [22:43] yeah, i have a lot of writing to do :P [22:43] you know you can run bundletester against your local source right? [22:44] well i kinda figured thats what it was there for. 
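The face-palm above (and tvansteenburgh's suggestion just below) comes down to how the service is added in the test: pointing amulet at a `cs:` URL always deploys the published store revision, while naming the charm the tests live in deploys the local source bundletester just built. A sketch of the difference:

```python
# tests/01-deploy.py (excerpt) -- sketch of the two behaviours discussed here.
import amulet

d = amulet.Deployment(series='trusty')

# Always fetches the published charm store revision, so local edits to the
# reactive code never reach the deployed units:
#   d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration')

# Naming the charm that contains these tests deploys the locally built source:
d.add('pentahodataintegration')

d.setup(timeout=900)
d.sentry.wait()
```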
but my setup was "borrowed" from kwmonroe's ubuntu-dev env and has cls.d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration') wired in the top [22:45] also, to answer your earlier question about the env reset, it destroys everything except the bootstrap node between each test file execution [22:45] (same as `juju-deployer -TT` [22:48] magicaltrout: is that the same charm in which these tests reside? [22:48] you can just do cls.d.add('pentahodataintegration') [22:49] thanks tvansteenburgh that will work when it gets pushed up to the canonical run tests as well I assume? [22:49] yep [22:49] marvelous [22:56] howdy cats. if i do a, `juju bootstrap gce gce`, i get a, `ERROR cloud "gce" not found`. this is with a current trunk build of juju with patches on top, and my environments.yaml has suitable config (it worked in 1.25 and older 2.0s). i must be doing something dumb, but i'm not sure what [23:01] tych0: environmemts.yaml is no longer used [23:01] rick_h__: yeah, i figured as much. are there docs on how to migrate to whatever the new thing is? [23:02] tych0: see juju list-clouds list-credentials and tge beta1 release notes [23:02] tych0: release notes emailed [23:03] rick_h__: cool, thanks [23:03] cherylj: do we a start to the "getting started" tych0 could try? [23:04] okay tvansteenburgh if I run a config change, what does sentry.wait actually wait for? [23:04] magicaltrout: hooks to finish executing [23:08] hmm fair enough [23:08] the config gets executed okay now, but the test gets to the next step too early [23:08] magicaltrout: happy to look at your code if you want [23:09] thanks tvansteenburgh [23:09] https://github.com/OSBI/layer-pdi/blob/master/tests/01-deploy.py#L39 [23:09] 39 and 40 need to stop a process and wait until its actually happened [23:10] that process is stopped via https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L53 [23:10] which then drops down to line 152 [23:10] just kills a pid [23:10] but the next line in my test checks its stopped and it returns a pid [23:11] but if you login to the machine it has actually been destroyed [23:17] magicaltrout: i don't follow. the config change is supposed to kill the proc right? [23:17] and it getting killed? [23:18] tvansteenburgh: that is correct [23:18] but in the test when I check for the process, it returns a pid [23:18] oic [23:19] but if you login ot the box after its failed and check for the pid its not there [23:19] so its like the next line executes too soon [23:19] before the pkill has finished doing its thing [23:21] yeah, i wonder if wait() is getting called and returns before the hook even starts [23:22] i wonder if you could use wait_for_messages() instead [23:22] https://github.com/juju/amulet/blob/master/amulet/sentry.py#L380 [23:23] although then you might end up changing your charm to support the test, which is backwards [23:23] well i don't mind [23:23] nothings finalised [23:24] so extended status, is that like when i do status_set('active', 'PDI Installed. Carte Server Disabled.') [23:24] its the blurb afterwards? [23:24] yep [23:25] okay so if I wait for the disabled message it should hang because that certainly comes after the pkill [23:26] magicaltrout: cool, try it! [23:32] other dumb question tvansteenburgh do I need to do anything other than talisman = Talisman([], timeout=self.timeout) [23:32] to get a talisman object? 
[23:33] magicaltrout: generally you don't create your own, you get it from d.sentry [23:33] d.sentry is a Talisman object [23:33] ah right [23:59] rick_h__: thanks, i got it sorted. my last question is do you (or anyone) have any idea where i can put the enable-os-upgrade: false flag now to speed up deploys? [23:59] arosales: might know? ^^