=== bogdanteleaga_ is now known as bogdanteleaga
=== natefinch_ is now known as natefinch
=== fginther` is now known as fginther
=== bcsaller_ is now known as bcsaller
=== mup_ is now known as mup
=== marlinc_ is now known as marlinc
=== FourDollars_ is now known as FourDollars
=== sarnold_ is now known as sarnold
=== root_ is now known as metsuke
=== Tristit1a is now known as Tristitia
=== bradm_ is now known as bradm
[09:37] gnuoy, could you take a run through the verification of these: https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
[09:39] sure
[09:40] gnuoy, n-gateway and n-openvswitch have relations to n-api right?
[09:41] right
[09:42] jamespage, I'm supposed to +1 the workflows on those as well, right?
[09:42] gnuoy, yeah so +2 review and +1 workflow
[09:42] ack
=== dooferlad_ is now known as dooferlad
[09:46] gnuoy, beisner: I've just raised a project-config change that should fix bug tracking for the charms as well - it's currently not functional
[09:49] gnuoy, let's book some time to discuss v3 and get it swinging
[09:49] I have some ideas of what might be up...
[09:51] jamespage, ta
[09:52] gnuoy, just looking at these - https://review.openstack.org/#/q/status:open+topic:bug/1545886
[09:52] Bug #1545886: need to support vni_range option for ml2 plugin (Juju Charms Collection): In Progress by xtrusia
[09:52] I think that in fact we only need to set those in the plugin for neutron-api, not all of the agent locations, but I'm not 100% sure
[09:53] jamespage, I would expect to only have to set them for neutron-api
[09:53] gnuoy, yeah - I think that's true
[09:53] gnuoy, let me dig a little
[09:57] jamespage, do I still need to mark bugs fix committed or do I sit back and let gerrit do it?
[09:57] gnuoy, for now please do manually
[09:58] gnuoy, hopefully my proposed change will fix that
[09:58] ack
=== Odd_Blok1 is now known as Odd_Bloke
[10:14] gnuoy, beisner: http://stackalytics.com/?metric=commits&project_type=all&company=canonical&release=all
[10:15] nice
[10:15] gnuoy, and - http://stackalytics.com/?metric=commits&project_type=all&release=all
[10:15] commits not reviews but meh - we don't get metrics on those
[10:17] gnuoy, let's do it
[10:36] gnuoy, http://paste.ubuntu.com/15272811/
[10:39] gnuoy, jamespage: is there an easy way to determine the github repo for an OS charm? e.g. in my case I'm looking for keystone.
[10:41] I'm guessing it's this one: https://github.com/openstack-charmers/charm-keystone
[10:42] anyone else seeing invisible text in the Search field on https://jujucharms.com/docs/ ?
[10:44] sparkieg`, yes, it has been an issue for a while now - I believe it is being worked on
=== sparkieg` is now known as sparkiegeek
[10:44] evilnickveitch: ok
[10:45] it is annoying! there are too many pages of docs not to have working search!
[10:45] evilnickveitch: +1
[10:45] evilnickveitch: the search is ... non-intuitive too
[10:46] hello guys! I'm having a problem. My environment was going well, but all of a sudden when I try to deploy a new service, it stays in the state "Waiting for agent initialization to finish" forever. How can I diagnose?
[10:46] (I use Juju on an openstack all-in-one installation)
[10:54] gnuoy, http://paste.ubuntu.com/15272863/
[10:54] tinwood, yes it is
[10:54] thanks jamespage
[11:58] Muntaner, take a look in /var/log/juju on machine 0
[12:09] jamespage, too late! I destroyed the environment, bootstrapped it again and it works. Weird
[12:10] gnuoy, hey - when you did the last release of charms to stable branches, I think you did some 'at the point of release' type changes
[12:11] jamespage, I think it is related to the security groups of openstack... when I delete and create a VM, the sec group created for that VM remains and does not get deleted in openstack
[12:24] jamespage, update charmhelper location, that sort of thing?
[12:30] gnuoy, yah
[12:35] jamespage, that + update amulet branches to point at stable branches
[12:59] tvansteenburgh, dpb1_: I know we're not really focussing on deployer any longer but https://code.launchpad.net/~james-page/juju-deployer/git-branch-support/+merge/287929 would help us out in the short term...
[13:19] jamespage, woah, the stackalytics thing!
[13:24] jamespage: love it
[13:50] jamespage: i can get that deployer merge released today
[13:54] tvansteenburgh, that would be great - are you OK with the syntax?
[13:54] I think that longer term we'll just be deploying from the charm-store but this works us through the interim...
[13:55] jamespage: i just merged this yesterday https://code.launchpad.net/~niedbalski/juju-deployer/add-refspecs/+merge/287829
[13:55] beisner, I have a few reviews that I'd like the full amulet suite run against - is that a manual kickoff atm?
[13:55] jamespage: but it looks like your change can coexist with that
[13:55] tvansteenburgh, it would appear so
[13:56] jamespage: so yeah, lgtm. will let you know when it's released
[13:57] jamespage, it is. i'll be focusing on the gate job integration and the artifact public linking now that we're in a good place. will send you info shortly.
[13:58] tvansteenburgh, awesome - thank you
[14:02] beisner, I fixed up the single deploy for swift-storage - https://review.openstack.org/#/c/287156/
[14:03] if you'd like to review, that will unblock other reviews in flight I think
[14:06] jamespage, ah good. i suspect there may be a few of those.
[14:06] beisner, yes...
[14:06] my resync flushed out a few...
[14:08] jamespage, was it the setup_storage bit that was tripping?
[14:09] beisner, basically with no explicit config, there are no block devices found, so the perms changes at the end of setup_storage error out as /srv/node never gets created...
[14:09] yah ok
[14:11] jamespage, so this is valuable to eke out. i think each charm should deploy to a blocked state instead of an error state, when deployed with its defaults.
[14:16] beisner, and that would be a valid test?
[14:17] beisner, ok that does make sense...
[14:17] jamespage, so it tests the user experience a bit, in that if i've just added a charm, and it has deployed before i add configs, i don't have to jump through resolved/retry manual steps.
=== beuno_ is now known as beuni
=== beuni is now known as beuno
=== jrwren_ is now known as jrwren
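The "blocked instead of errored" behaviour beisner describes above usually comes down to a guard at the top of the charm's config hooks. A minimal sketch, assuming a hypothetical required option named `block-device` (`status_set` and `config` are real charmhelpers.core.hookenv calls; the hook wiring is illustrative):

```python
from charmhelpers.core.hookenv import config, status_set

def config_changed():
    if not config('block-device'):
        # No storage configured yet: report a blocked workload state
        # rather than erroring and forcing resolved/retry on the operator.
        status_set('blocked', 'block-device is not set; waiting for configuration')
        return
    # ... normal setup path ...
    status_set('active', 'unit is ready')
```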
[14:21] frankban: when you have time can you add juju-deployer-0.6.4 (on pypi) to the juju/stable ppa?
[14:22] tvansteenburgh: oh sorry yes
[14:22] tvansteenburgh: what makes it stable? how much testing has it undergone?
[14:24] sparkiegeek: aside from my own, and the testing of the patch submitters, none
[14:26] tvansteenburgh: can I suggest proposed is a better place for it, at least for now?
[14:27] sparkiegeek: sure. frankban, juju/proposed instead please
[14:28] which i guess is actually juju/devel?
[14:29] tvansteenburgh: is lp:python-jujuclient not getting any updates? ahasenack has had ppas building from there forever.
[14:29] tvansteenburgh: lp:juju-deployer rather.
[14:29] dpb1_: yeah that's where updates are happening, which ppas are getting updates?
[14:29] * dpb1_ gets links
[14:30] https://code.launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient and https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
[14:32] dpb1_: ok, that's good to know. so if ppl want latest tell them to install from there?
[14:32] yup
[14:32] tvansteenburgh: it's where I pull from on all my machines
[14:32] dpb1_: do those builds ever graduate to juju/stable?
[14:32] tvansteenburgh, dpb1_: sounds great, can we just copy those when a deployer needs to be updated in the stable ppa?
[14:33] note the failures in https://launchpadlibrarian.net/245134994/buildlog_ubuntu-vivid-amd64.juju-deployer_0.6.1~bzr165~48~ubuntu15.04.1_BUILDING.txt.gz
[14:33] frankban: tvansteenburgh: there is no workflow: it's always been just ping frank or marco to copy to /stable
[14:33] the test is trying to reach out to reviews.openstack.org and failing
[14:33] well, I never copy binaries to stable, so that's mostly a frank thing
[14:34] I prefer rebuilding
[14:34] dpb1_: yes, it's always been a "by request" thing
[14:35] marcoceppi: if we already have a working deb that has been tested, why not just copy it over?
[14:36] fwiw, with openstack package flow, we build for staging, then rebuild for proposed. then it's a binary copy to release it. that way your release binary is identical to the thing that was tested in proposed.
[14:37] also, woot! for that deployer daily build ppa. :-)
[14:38] beisner: note the FTBFS errors though
[14:39] what is FTBFS
[14:39] fails to build from source
[14:39] ack, i will address that. i didn't realize the build host didn't have egress https access
[14:40] i did run all the tests before releasing
[14:42] tvansteenburgh, ah yes i see jorge's added test is the thing there.
[14:42] beisner: yeah, i'm gonna ask him to fix
[14:47] tvansteenburgh, yah, not sure that test should actually reach out.
[14:47] agree, asked for a mock
[14:48] or just a straight unit test
[14:48] right. anyway, cool stuff. we're all about the refspec now. thanks for the bits.
=== lazypower_ is now known as lazyPower
=== rcj` is now known as rcj
[14:59] http://www.meteoriteconsulting.com/spinning-up-pentaho-data-integrations-quickly-with-juju/ <- those card things fsck-ing rock
[15:00] haha!
[15:00] yeah they do!
[15:01] better fix the icon
[15:05] jamespage, should we take that swift-storage fixup through the full deal before +2?
[15:05] beisner, meh
[15:05] beisner, it's a fairly innocuous change
[15:05] jamespage, ack, i'm good with it
[15:08] there's a bug in the cards lazyPower, who's the boss?
[15:08] magicaltrout - https://github.com/CanonicalLtd/jujucharms.com/issues/ <- that board is
[15:09] cool
[15:13] icey_, your test update expects one or more ceph-osd processes, whereas we were previously expecting exactly 2. is that intentional?
[15:14] beisner: yes, although it's not ideal; to make the tests repeatably passable in a given deployment, it has to either do that or be able to expect 2 OR 3 OSD processes to be running
=== icey_ is now known as icey
[15:15] we could plumb that helper to take a list type, then specify [2, 3
[15:15] ]
[15:15] bahh new keyboard still improving my experience here ;-)
[15:17] beisner: same, I think my fingers are *nearly* used to this keyboard :); I like the idea of that helper being able to take a list
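The list-taking helper icey and beisner sketch above might look like the following; the function name and signature are hypothetical, not the actual charmhelpers amulet API:

```python
def validate_process_count(actual, expected):
    """Return True if `actual` matches the expected process count(s).

    `expected` may be a single int (the old behaviour) or a list of
    acceptable counts, e.g. [2, 3] for the ceph-osd case above.
    """
    if isinstance(expected, list):
        return actual in expected
    return actual == expected
```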
[15:17] beisner: what's the timeline to get an openstack contrib charmhelper change landed?
[15:18] if it's anything like charmhelpers.core.host mods -> a month
[15:19] icey, lazyPower - yah the core c-h stuff is likely to be more sensitive than a test helper. we can usually land test helpers quickly.
[15:20] i'm just salty that i'm still asking for help with the same merge and received nothing in response, that's all :)
[15:20] https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers/+merge/285044 <-- really needs some love if anyone has the spare cycles to go +1 that for me
[15:20] lazyPower, ha! where be it?
[15:21] is https://bugs.launchpad.net/charms/+bug/1538573 on somebody's radar? It should be a trivial review now. Do I need to do anything for it to appear in the review queue?
[15:21] Bug #1538573: New collectd subordinate charm
[15:38] cory_fu: do you recall the name of the dev from neo4j at the charmer's summit for merlijn's email to the list?
[15:39] just google, loud neo4j dev who is canadian but lives in sweeden
[15:39] -e
[15:39] lol
[15:39] first result: "Ask HN: Who is hiring? (January 2016)"
[15:40] steven
[15:40] steven fisher i think?
[15:40] i follow him on twitter
[15:40] steven.baker@neotechnology.com,
[15:40] that's it
[15:40] srbaker
[15:40] <3 ty
[15:40] thanks to jcastro for not bcc'ing the emails in his mailshot ;)
[15:40] <3
[15:40] found it and the charm
[15:41] * marcoceppi sighs
[15:43] * magicaltrout cracks out the ear plugs and hides the alcohol
[15:43] the sign up sheet was publicly, technically, that's like a bcc. /me runs.
[15:43] magicaltrout that takes all the fun out of thursday
[15:43] hehe
[15:43] not like a bcc I should say
[15:45] anybody have issues on xenial running `charm build`? I just got `TypeError: write() argument 1 must be unicode, not str` the moment it got to the first interface to process
[15:45] jcastro: what's the topic for the next charm hangout?
[15:45] jcastro: can/should I join?
[15:46] rick_h__: it's an open office hours, all are welcome
[15:46] rick_h__: it's almost always open ended
[15:46] if you have something to show, show
[15:47] jcastro: can you toss me an invite to the calendar and I'll show up please?
[15:47] icey - errr
[15:47] icey - are you using charm from marco's ppa?
=== jhobbs_ is now known as jhobbs
[15:48] probably not, let me dig up that email; just installed ubuntu on this machine on Monday so it's pretty fresh :)
[15:48] ok, i'm not sure that will fix anything
[15:48] but its worth a shot to get started
[15:48] it has all the road-to-2.0 fixes in there
[15:49] lazyPower: road-to-2.0 is now master
[15:49] as an fyi
[15:49] kjackal: hey. Thanks for feedback on collectd layer. I think it should be ready to go now: https://bugs.launchpad.net/charms/+bug/1538573
[15:49] Bug #1538573: New collectd subordinate charm
[15:49] rick_h__: done
[15:49] well
[15:49] its built from FYI-Master apparently
[15:49] :P
[15:49] lazy I'm on 1.11.1, do you have a link to marco's PPA?
[15:50] jcastro: <3 ty much
[15:50] jacekn thank you
[15:51] nevermind lazyPower: ppa:marcoceppi/charm-tools-2.0
[15:51] https://launchpad.net/~marcoceppi/+archive/ubuntu/charm-tools-2.0
[15:51] ye
[15:52] BAM nope
[15:52] same exact error
[15:53] given that xenial is targeting python3, shouldn't charm tools see about updating :)
[15:53] icey: i got you
[15:53] https://github.com/juju/charm-tools/pull/119
[15:53] icey: alpha1 build is being released as I type
[15:54] three cheers for marcoceppi!
[15:54] icey - its almost like we were expecting you :)
[15:54] we are devx, expect us
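The `TypeError` icey hit is the classic Python 2 pitfall of writing a byte string to a text-mode file object; a minimal reproduction of that class of error (not the actual charm-tools code fixed in PR 119):

```python
import io

# io.open returns a text-mode file whose write() requires unicode on
# Python 2; passing a byte str raises the exact error icey saw.
with io.open('/tmp/demo.txt', 'w', encoding='utf-8') as f:
    f.write(u'fine')   # unicode: works on py2 and py3
    f.write('boom')    # py2 str: TypeError: write() argument 1 must be unicode, not str
```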
[15:54] jamespage o/ heyo
[15:54] lazyPower, ola
[15:55] jamespage - friendly ping to get this back on your roadmap - https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers
[15:55] lazyPower, yup
[15:55] on my list still
[15:55] same bat time same bat channel next week?
[16:08] hey guys a quick question
[16:08] if I specify nodes in my juju-deployer config like this:
[16:08] '0': constraints: name=xxx blahblah
[16:08] should it always pick that node from MAAS?
[16:08] because currently it looks like it ignores the name actually and just picks based on the tag
[16:08] nagyz - you're looking for tags
[16:09] --constraints="tags=bootstrap" for example
[16:09] yes, I know them, but in my juju-deployer config I'd like to pin the charms to nodes
[16:09] i'm not sure you can use names as a constraint
[16:09] afaik it's only ever supported tags
[16:09] it doesn't complain that it would be invalid... :)
[16:09] lazyPower: nagyz no, i think you need to add maas tags and use the tags as constraints
[16:10] ok let me rewrite it and test it
[16:10] cheers
[16:10] cheers nagyz
[16:10] wait, one more thing: will the machine constraints I specify in the config override the constraints I specified when bootstrapping?
[16:11] yeah, nice thing for you to expect in 2.0
[16:11] you get bootstrap constraints, and you get model constraints
[16:11] ok so I'm on the latest stable atm :)
[16:11] so they get applied after the bootstrap constraints in 1.25?
[16:11] yeah if you bootstrap with --constraints=tags=bootstrap
[16:11] so if I have tags=a and then in the juju-deployer config I have tags=b, will it pick a node that has a and b?
[16:11] all machines launched after the fact will try to launch with --tags=bootstrap
[16:11] you can clear it, or change it, it's not catastrophic
[16:11] yes, that's the behaviour we're seeing
[16:12] just a minor annoyance
[16:12] so how could I pin different services to different tags? would that even work now then?
[16:12] let's say I clear the global constraints after bootstrapping
[16:12] if you clear globals and have --constraints=tags=abc per unit, they will apply those units constraints when the bundle gets deployed
[16:13] sorry, i mean per charm
[16:13] I currently list the constraints per machine listed in the yaml - and then for each charm I explicitly bind it to a machine id like --to '5'
[16:13] is that not a good thing?
[16:13] that's fine, it makes the bundle a bit brittle
[16:13] (based on wolsen's tokyo demo yaml :P)
[16:14] but with how we're modeling networking, storage, etc. i'm sure we'll see more bundles like that start to crop up
[16:15] ok let me give it a try
[16:29] if I have globals set, they will ALWAYS override local constraints, right?
[16:29] or are they getting an "AND" in between?
[16:31] aahh cory_fu I knew there was a reason for me piping the hookenv stuff through leader_set..... whilst it will get set, if someone changes it, it won't trigger a restart
[16:32] magicaltrout: Ah, that's true
[16:32] @when_any('leadership.changed', 'config.changed.foo') would be useful there
[16:32] I should cut a release of charms.reactive so that's available
[16:37] marcoceppi: tvansteenburgh: This item shouldn't be in the queue (it got merged yesterday): http://review.juju.solutions/review/2429
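The decorator cory_fu quotes is charms.reactive's `when_any` (merged but, per the exchange above, unreleased at the time). A minimal sketch of the restart trigger magicaltrout is after; the handler body and restart helper are illustrative:

```python
from charms.reactive import when_any

@when_any('leadership.changed', 'config.changed.foo')
def restart_on_change():
    # Fires when either the leader settings or the 'foo' config option
    # change, covering both paths magicaltrout describes.
    restart_my_service()  # hypothetical restart helper
```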
[16:39] lazyPower, do you have a second?
[16:39] lazyPower, it doesn't work as expected... :-)
[16:40] this is the yaml that we're running with juju-deployer: http://pastebin.com/4BVd32L1
[16:40] but for example haproxy gets put on machine "2", while the config says it should go in 3,4,5
[16:40] what did we screw up? :-)
[16:41] that's...what?
[16:41] it clearly says -to '2'
[16:41] i mean - '3'
[16:41] haproxy: (the very last) lists -3, -4, -5
[16:41] yep
[16:42] juju status shows (while the deployer is running): machine: "2"
[16:42] plus it requests a bunch of machines from maas
[16:42] yeah something went awry here
[16:42] not just the ones we've listed in the yaml
[16:42] thedac, gnuoy: quick question: if I want to run an action inside a unit (juju ssh unit/n ; then "run an action") is there anything I need to set up re: ENV to get it to work? I need to see the stacktrace that's blowing up on my keystone charm!
[16:42] tinwood - you can juju debug-hooks on that unit, then execute the action
[16:43] debug-hooks will trap the action tinwood
[16:43] even for an action?
[16:43] nagyz - i'm thinking through this and it's not obvious to me why this is the case :(
[16:43] ok. will try. ta!
[16:43] tinwood - yep, actions run in an anonymous hook context
[16:43] * thedac learned something too
[16:43] lazyPower, I'm happy to try any suggestion.
[16:44] the debug tells me this:
[16:44] 2016-03-03 16:33:03 [INFO] deployer.import: Deploying service haproxy using cs:trusty/haproxy
[16:44] 2016-03-03 16:33:03 [ERROR] deployer.deploy: Service haproxy to be deployed with non-existent service 3
[16:44] maybe that's why?
[16:44] it's trying to co-locate with a service huh?
[16:44] remove the quotes around the numbers declaring the machines? (not even sure this will have any effect)
[16:45] the problem is for every retry I need to destroy the env and then rebootstrap...
[16:45] takes a while.
[16:45] i understand
[16:45] what does it mean by non-existent service 3? I mean the machines are not "services", are they?
[16:45] you may have some success dropping that bundle on the store gui and inspecting machine view
[16:45] i think the 'string' is getting interpolated as a service name
[16:45] deploy supports the following syntax
[16:45] to: - 'wordpress/0'
[16:46] i think it's getting confused and trying to colo with a service named 3, so it spins up a machine expecting a parent service to be deployed there
[16:46] ok let me remove the quotes.. :)
[16:46] s/parent/principal
[16:47] kjackal: so is setting charm review bug status to "New" something I should have done after my reply? Or you prefer for charmers to do it themselves?
[16:48] jcastro - move it to fix-committed
[16:48] jacekn ^
[16:48] Thank you lazyPower
[16:48] we don't move status to anything other than to get them out of the queue after we've replied :)
[16:49] alright, good to know thanks
[16:50] lazyPower, nagyz: i don't think machine placement works for v3 bundles
[16:50] ah that would explain it too
[16:52] tvansteenburgh: lazyPower right, it was added in v4
[16:52] nagyz - might want to hold off on that re-test
[16:52] sounds like we've found the blocker
[16:52] lazyPower: how is it a v3 bundle? We don't do those in the store for a while any more?
[16:53] lazyPower: are we sure it's the old format?
[16:53] well this is a labeled bundle
[16:53] that's by definition the v3 format right?
[16:53] rick_h__ http://pastebin.com/4BVd32L1
[16:53] lazyPower: k, just checking
[16:54] lazyPower: right, remove that, dedent one level and try again?
[16:54] nagyz - the good news is the format change is super simple to make, delete that openstack: key at the top and fix the indentation now that we've removed the parent key and you're in a v4 bundle
[16:54] nagyz - give a v4 bundle a go, and let's see if you get better results
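The conversion lazyPower describes is small; roughly as follows (service and machine entries are placeholders, not nagyz's actual bundle):

```yaml
# v3: everything nested under a deployment label
openstack:
  services:
    haproxy:
      charm: cs:trusty/haproxy
      to: ['3', '4', '5']

# v4: drop the label, dedent, and declare machines explicitly
services:
  haproxy:
    charm: cs:trusty/haproxy
    num_units: 3
    to: ['3', '4', '5']
machines:
  '3': {constraints: tags=controller}
  '4': {constraints: tags=controller}
  '5': {constraints: tags=controller}
```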
[16:57] oh that's how I started out
[16:57] but I had to add the openstack: at the beginning to actually be able to make juju-deploy parse it...
[16:58] nagyz: juju-deployer should take a v4 bundle. The team updated it to use it
[16:58] it was complaining about not getting any deployment names
[16:58] that's why we added openstack: at the top
[16:58] nagyz: if you have a problem without it let us know and we can look into what's not happy. https://github.com/juju/charmstore/blob/v4/docs/bundles.md
[16:59] nagyz: ^ has the docs/description of the v4 vs v3 and what changed
[16:59] looking at it.
[17:01] beisner, does uosci understand which pxc test to run yet?
[17:03] Can etcd and flannel be installed on the same physical node? I want to put etcd & flannel on physical node 2, then add a unit of flannel to phys node 3. Will it work ok?
[17:04] Gil - sure can
[17:04] Gil - which flannel charm are you using?
[17:04] nagyz: what version of deployer are you using?
[17:05] jamespage, checking..
[17:05] brb
[17:06] jamespage, yes: "Automatically selected test: ./tests/10-deploy_test.py"
[17:06] beisner, \o/
[17:06] oh wait - I think it was an amulet test failure nm
[17:09] tvansteenburgh, I've got it via pip install
[17:09] as the stable ppa doesn't have the 0.4 package
[17:10] (based on readthedocs there is 0.4, right?)
[17:10] nagyz: 0.6.4 is latest on pypi
[17:10] nagyz: if you're on 0.4.x then you probably don't have the v4 support
[17:11] well I just did pip install
[17:12] that I assume gets the latest :)
[17:12] there is no -v or --version
[17:12] ok I'm on 0.6.3
[17:13] according to pip list :)
[17:13] k, should be fine then
[17:13] ok, changed the bundle to v4, running now
[17:18] so what I specify as machine '0' in my bundle actually becomes '1', right?
[17:18] I should just skip declaring '0'?
[17:18] right now I have 0,1,2 declared with a storage tag and then now according to juju status 0 is the juju bootstrap node, 1 became storage and 2 became a controller which I only list for 3,4,5...
[17:24] nagyz: the machine keys that you define in the machines section of your bundle do not necessarily map to actual machine numbers
[17:25] nagyz: for that reason, yes, it is best to not use '0' in your machines definition
[17:25] to avoid confusion
[17:29] cheers
[17:32] beisner, https://review.openstack.org/#/c/287082/ is looking better now as well
[17:34] jamespage, ready to land that? lgtm
[17:34] beisner, +1
[17:34] can you do the honours?
[17:34] jamespage, yep
[17:35] beisner, a little tidy here as well - https://review.openstack.org/#/c/287848/
[17:35] just template rollups and removal
[17:35] amazing how much old stuff we lug around....
[17:39] yah, that should be a safe landing too, jamespage
[17:41] lovely
[17:54] Can juju be deployed to a third party account with external_id access?
[17:54] (in AWS)
[18:01] How long should it take for updated charms to be ingested into the store?
[18:13] longest case: an hour and 20 minutes
[18:14] lkraider - not sure what you mean
[18:14] external_id access?
[18:15] @lazyPower - I have an IAM user that was granted to another AWS account through a Role (assumeRole permission). To use that in aws_cli I need to set external_id in ~/.aws/config
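For reference, the cross-account setup lkraider describes maps to a profile entry like this in ~/.aws/config, per the AWS docs linked just below (all values are placeholders):

```ini
[profile cross-account]
role_arn = arn:aws:iam::123456789012:role/some-role
source_profile = default
external_id = some-shared-external-id
```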
[18:16] ah, i don't know that we support that. i would most def. poke the mailing list about it though
[18:16] https://docs.aws.amazon.com/cli/latest/userguide/cli-roles.html#cli-roles-xaccount
[18:16] someone who's more versed in that specific case may be able to chime in and prove me wrong
[18:17] @lazyPower another question: does juju support a user that has MFA enabled?
[18:17] not that i'm aware of. the IAM credentials you would give juju assumes that i has all the required permissions, and there's no interactive way for it to prompt you for the MFA credentials
[18:18] s/that i has/that it has/
[18:18] thanks
[18:18] np lkraider - sorry i wasn't full of good news for your questions :/
[19:15] i'm a bit confused about something with magic mock
[19:16] i'm trying to mock hookenv.config() and while config('source') works properly config.previous('source') does not.
[19:28] ChrisHolcombe - it creates another mock
[19:29] and now that i've said that, i realize how nebulous the question and answer both were
[19:30] ChrisHolcombe - give me another go at what you're seeing vs what you're trying to do
[19:30] lazyPower, yeah i think i get that but i've also done this: self.config.previous.return_value = "blah"
[19:30] lazyPower, so i'm wondering. how do i mock config.previous
[19:30] beisner: do you think you can weigh in on https://review.openstack.org/#/c/287446/ one more time? jamespage has given it the +1 but wants your +2 for the tests
[19:30] ah common misconception. self.config.return_value.previous.return_value="blah"
[19:30] i think that'll get you sorted ChrisHolcombe ^
[19:30] lazyPower, omg .. ugh
[19:31] i could be wrong, but i recall having to do something like that with like filepointers
[19:31] i forget the exact formula for when that works but give it a go, if it works yay
[19:31] if not, i call shenanigans
[19:34] icey, see comment. i didn't -1 it, but one lil misspelling. if you want to bump that one more time, i'll watch for it.
[19:36] beisner: I'll make that one right quick
[19:36] lazyPower, you're correct in that it's making another mock but i can't figure out how to get it to return the right thing :)
[19:37] beisner: pushed, pending jenkins
[19:38] icey, thx sir
[19:38] for a change to c-h, should it be on LP?
[19:38] icey yep
[19:38] charm-helpers head is still up on launchpad in /charm-helpers
[20:04] lazyPower tvansteenburgh so I retested it, changed my bundle to v4 format, now it looks much better, except that somehow 6 becomes a compute, 7 becomes a controller yet my deployment descriptor clearly specifies it differently
[20:04] let me repastebin both
[20:06] lazyPower tvansteenburgh my v4 deployer bundle: http://pastebin.com/CY2kDma2, and the status output (clipped, but visible): http://pastebin.com/rGq7BUDH
[20:07] 6 gets compute 7 gets controller and I don't get why
[20:11] i see Mark is keynoting ApacheCon US this time
[20:11] he's been promoted to the big time ;)
[20:15] cargonza: ping
[20:15] hi
[20:15] cargonza: got a sec please?
[20:16] sure
[20:16] https://plus.google.com/hangouts/_/canonical.com/rick?authuser=1 cargonza
[20:20] * magicaltrout messed up his ApacheCon submission and instead of 3 presentations, has 1 presentation and 2 two-hour tutorials.....
[20:20] arse
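On the MagicMock exchange above: which chain to set depends on whether the code under test calls the patched `config` object before reaching `.previous()`. A minimal sketch (`hookenv.config` is the real charm-helpers API; the values are illustrative):

```python
from mock import MagicMock  # the `mock` library, as used on Python 2

config = MagicMock()

# Code that calls the mock's attribute directly, e.g. config.previous('source'):
config.previous.return_value = 'old'
assert config.previous('source') == 'old'

# Code that calls config() first, e.g. config().previous('source') --
# this is the case lazyPower's "common misconception" chain covers:
config.return_value.previous.return_value = 'old'
assert config().previous('source') == 'old'
```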
[20:22] nagyz: same reason as before, there's no guarantee that the labels for the machines in your bundle will match up to the actual machine numbers
[20:24] nagyz: bundles are meant to be 'self contained' so they can be reused by others. So it's machine 0, within that bundle.
[20:24] nagyz: deployer asks juju for the correct number of machines with the specs you want, but it can't tell juju "make it this number"
[20:30] beisner: build failed because of a timeout :( http://logs.openstack.org/46/287446/5/check/gate-charm-ceph-osd-python27/326d4fb/console.html
[20:31] magicaltrout: sounds like fun! ;)
[20:31] icey, ha! upstream test timed out. comment back on your review with: recheck
[20:31] well marcoceppi at least with 2 hours, I'll get to demo a crap load more Juju stuff to pad it out :P
[20:32] also beisner, I'm about to push a change for c-h that adds test coverage for the pid checking method, as well as support for lists :-D
[20:32] "hello gang, we're doing data management, so to get started and fake distributed-ness we're going to install juju and spin up some nodes"
[20:32] icey, coolio
[20:33] also my presentation is at 5:10pm, i'm normally in the bar by then... how rude
[20:33] maybe i'll just do the presentation in the bar....
[20:33] magicaltrout: we'll have to bring the bar to the presentation
[20:34] aye, sounds like a plan
[20:35] bam beisner: https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 ; once again, I make a 5 line change with 33 lines of tests to support it :)
[20:36] icey, imho, as it should be ;-)
[20:36] I'm happy with it :)
[20:36] there were no tests covering that function at all, now there's a test covering all of the accepted types :-D
[20:37] I love how declarative the branch is, it's so assertive and sassy like "pids can be a list too, DAD"
[20:37] icey, but where are the tests for your tests that test the test you updated as a helper to another test to test the feature that you really just want to land?
[20:38] :)
[20:41] fwiw i'm still laughing over here, marcoceppi
[20:41] I laughed out loud as well marcoceppi
[20:44] beisner: I just can't catch a break today: http://10.245.162.36:8080/job/test_charm_amulet_smoke/78/console
[20:45] Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/dists/trusty-updates/main/i18n/Translation-en Hash Sum mismatch
[20:53] icey, apt repo glitch
[20:53] yeah, build retrying (again)
=== thumper is now known as thumper-afk
[21:10] duuuh
[21:10] there should be a big fat warning sign if you're in a repository charm directory building your charm instead of the development directory :P
[21:11] or maybe i should just build in the right place
[21:21] actually
[21:21] that's really good feedback magicaltrout - i've been bit by that
[21:21] and the manifest being present in cwd is a pretty large identifier that you're doing something wrong....
[21:21] hehe i think i did 4 updates before realising what i was doing :)
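The "big fat warning" magicaltrout asks for could key off exactly the cue lazyPower names: a build manifest already sitting in the working directory. A hypothetical guard for `charm build` (the manifest filename is an assumption, not necessarily what charm-tools writes):

```python
import os
import sys

# A leftover build manifest in cwd suggests we're building from a
# previous build's output rather than the charm's source directory.
if os.path.exists('.composer.manifest'):  # assumed filename
    sys.stderr.write('WARNING: build manifest found in the current '
                     'directory; are you building a built charm?\n')
```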
[21:22] tvansteenburgh, of course it could, it just doesn't do it.
[21:23] tvansteenburgh, but more importantly, will the mappings still work? eg if I map haproxy to 4, but the controller becomes machine 6, will it install haproxy to 6?
[21:25] nagyz: yes, it will install services to the correct machines
[21:26] ok, thanks. I'll let it run through then we'll see :-)
[21:26] but I think it's confusing that if I map a machine to 3 and it gets to 6... while juju could very well deploy the machines in order
[21:28] nagyz: actually, i see another possible cause for what's happening
[21:28] happy to experiment :)
[21:28] and to rewrite the bundle to be better
[21:29] nagyz: deployer iterates over the machines defs in the bundle, requesting juju to create them one at a time. but...dictionaries aren't ordered
[21:29] nagyz: so it's possible this could be fixed by sorting the machine keys first
[21:30] nagyz: it'd be a change to deployer, not the bundle
[21:31] depending on what people use as keys. can the dict keys only be numbers in this case (for machines)?
[21:32] nagyz: it's not required they be ints, but that's the convention
[21:33] so the bundle should just work as-is then, right?
[21:33] regardless of the numbers juju status tells me.
[21:34] nagyz: yes
[21:35] alrighty, let me run it through
[21:35] run it through with a broadsword
[21:36] maybe the wrong expression ;-)
[21:36] lol
[22:02] wolsen: took care of your comments on ceph-osd
[22:02] icey: awesome thanks
[22:03] icey: doh, still typo - search for notr in config.yaml
[22:03] haha not anymore!
[22:07] haha
=== thumper-afk is now known as thumper
[22:19] all my lxc containers are stuck in agent-state: pending... is there a docu on how to debug it? :)
[22:19] after ssh I do see them starting up, running dhclient, but as I don't have DHCP on the network that will fail. is it possible to manually assign IPs to them via the deployment config?
[23:26] ok, it deployed \o/
[23:27] I'll do the more complex stuff tomorrow, thanks for all the help guys!
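For reference, the deployer-side ordering fix tvansteenburgh floats at [21:29] could be as small as iterating the machines mapping in sorted order instead of raw dict order (a sketch, not the actual juju-deployer code):

```python
def iter_machines(machines):
    """Yield (key, spec) pairs from a bundle's machines mapping in a
    stable order; keys are conventionally ints-as-strings."""
    def sort_key(k):
        return (0, int(k)) if str(k).isdigit() else (1, str(k))
    for key in sorted(machines, key=sort_key):
        yield key, machines[key]
```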