/srv/irclogs.ubuntu.com/2016/03/04/#juju.txt

=== lazyPower_ is now known as lazyPower
=== lazyPower is now known as lazypwr
=== lazypwr is now known as lazyPower
=== scuttle is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== lazyPower is now known as lazypwr
=== lazypwr is now known as lazyPower
stublazyPower: I was wondering if charm-build should be VCS aware, and able to build directly to a branch (potentially the same repo the source branch is in). But I don't know enough yet to know how much effort that would involve.04:19
lazyPowerstub - i think i missed a message somewhere and have lost context...04:22
stuboh, building in the wrong dir from 7 hours ago04:24
lazyPowerInteresting thing is, it is *somewhat* vcs aware, insofar that it uses VCS to grab the resources listed in the directory04:25
stubjust mumbling ideas I doubt I'll ever get a chance to investigate :)04:25
lazyPoweri think sniffing for a manifest would be sufficient to quell the build error gotchya of sitting in the charm artifact dir trying to charm build04:25
stubI end up with crap in my built branches due to test runs and stuff, so getting the stuff 'officially' added and/or committed would mean I don't need to go through later and add/commit.04:26
stubAnd I imagine uploading to the charm store would have similar issues, unless that is smart enough to only upload what is in the manifest.04:26
lazyPoweryeah, thats true. then again adding all that to a clean target, and running make clean before you do anything wrt pushing your charm, should be g2g04:27
lazyPowerbut thats an interesting idea, only including the artifacts in the manifest...04:27
stubYeah, I do that so it isn't anything important. I do need to maintain the rules, so a little bit of maintenance.04:27
lazyPowerthat would be a pretty good way to trim the fat, unless you're binpacking in last minute dependencies to a "fat charm"04:27
lazyPowerthat could be problematic04:27
stubA command to add files to the manifest post-build might be better for that, since then it is explicit and you get a nice checksum embedded in the manifest.04:29
lazyPowerohhhh, i like that too04:30
stubheck, might even be some resources workflow involved here04:30
lazyPower^04:30
lazyPowerthat sounds like the winner04:30
lazyPowermanifest for charm code, everything else is treated as a resource04:30
stubI don't do fat charms so can't really comment ;)04:30
lazyPowerI try not to, but i'm also not limited by an angry egress firewall04:31
blahdeblahOn that note, if my charm is not getting the updated basic layer with "config.changed*" support, where should I be looking to troubleshoot it?  Is $JUJU_REPOSITORY the only place it should look for the layer?04:31
lazyPowermake sure you dont have a local copy of layer:basic in $LAYER_PATH04:31
blahdeblahlazyPower: All of our egress firewalls are perfectly calm and rational. :-P04:31
lazyPowerblahdeblah - shenanigans!04:31
stubblahdeblah: Its a LAYER_??? variable04:32
lazyPowerthere is a LAYER_PATH env var, yes04:32
lazyPoweroh, nvm04:32
blahdeblahSo, hypothetically, if $LAYER_PATH isn't set, where would my layer be coming from?04:32
lazyPowerfrom the API, so it reaches out and clones it in the deps dir in $JUJU_REPOSITORY04:32
stubOh... and charms.reactive is embedded by pip, so if you have a pip cache04:33
blahdeblahaaargh04:34
stubOh, its def in base layer so pip doesn't matter04:34
=== lazyPower is now known as lazypwr
jaceknkjackal: hey, I think setting my collectd bug to "Fix Committed" did not work as expected. I can see my review is still showing as failing tests, they were not rerun. Also I think it probably belongs in the "Incoming Charms" section not "Charm Reviews"09:06
kjackalHey jacekn , let me check09:07
jaceknkjackal: thanks. If my charm stays in "Charm Reviews" I think it can take a long time before somebody will pick it up, it's way down the list and some charms at the top have been waiting for well over a month for review09:08
jaceknand my review should be just a few lines09:08
=== zenlot6 is now known as zenlot
kjackalwait up, there should not be any new round of review for your charm09:10
kjackalWe are talking about this one: https://bugs.launchpad.net/charms/+bug/153857309:10
mupBug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1538573>09:10
kjackalYes, cool09:11
kjackalso here is where we are with this charm09:11
kjackalyesterday i did the review09:11
kjackalIt was the third review iteration09:11
kjackalI did not find any issue so you got the green light from me09:12
jaceknso what happens next? I am looking at http://review.juju.solutions/ and it's way down the list there09:13
kjackalhowever, I am rather new to the team, so a more senior member will read my report and proceed with the process of promulgating the charm09:13
jaceknkjackal: I see so there is basically another process outside of review.juju.solutions that my charm is going through?09:14
kjackalyour charm is part of a batch that was reviewed yesterday. The batch is almost finalised. We will send an update on the list with our progress09:15
kjackalSo, you will not have to do anything, I will bring it up on our daily sync but it is a normal internal process09:16
jaceknkjackal: cool, thanks for explanation!09:16
jaceknkjackal: just a suggestion - maybe add something to the review queue to indicate that the charm review is "in progress" or something similar09:17
kjackalThe list that we will send our update to is here: juju <juju@lists.ubuntu.com>09:17
kjackalAre you registered, there?09:18
jaceknyes I am09:18
jacekncool thanks again for explaining this09:18
kjackalGreat! Regarding the review queue, there will be a number of changes. We are working on improving the process especially since now there is the need to review layers (in addition to "old-style" charms)09:20
jamespagebeisner, gnuoy: I've switched over the official charm branches to the reverse-imported ones from github10:03
jamespagenot that we no longer have different branches for trusty/precise10:03
jamespagefigured out that magic as well...10:03
gnuoys/not/note/ ?10:03
jamespagenote10:04
jamespageindeed10:04
jamespagelol10:04
gnuoyack10:04
jamespagegnuoy, https://code.launchpad.net/~openstack-charmers/+branches?field.category=OWNED&field.category-empty-marker=1&field.lifecycle=MATURE&field.lifecycle-empty-marker=1&field.sort_by=most+recently+changed+first&field.sort_by-empty-marker=110:04
jamespagegnuoy, they show as precise charms as that's the current default series in the charms distro10:04
magicaltroutstub: ping10:22
stubmagicaltrout: pong10:22
magicaltroutooh hi, quick leadership q if you have 2 mins10:23
stubnp10:23
magicaltroutokay so here: https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L9010:23
magicaltroutif I do a juju set-config carte_password=1234 or something it will get picked up on the next is_leader execution10:24
magicaltroutcorrect?10:24
magicaltroutthat runs every 5 mins or something10:25
stubYou are thinking of update-status, which I think runs every 5 mins with Juju 1.2510:25
stubAt the moment, you seem to be resetting the credentials every hook.10:25
stubWhich is fine, as none of them are random10:26
magicaltroutactually I lied, the leadership.changed was going to be my question, it is being called :P10:26
magicaltroutI have no question :)10:26
stubYou probably should rely on update-status here. The config-changed hook is what gets invoked after a 'juju set'10:27
stubshould not10:27
magicaltroutyes I set the credentials every time, but thats because its something that can be changed by set-config and needs to be propagated to the leader and the slaves because they need to know how to login10:27
magicaltrouthmm actually you have a point, as the nodes all share the same password, I guess on config change I could set it and trigger a restart10:29
magicaltroutactually check_running already does that for the password so it might be a bit of a noop anyway10:29
stubSo at the moment, you run 'juju set'. The config-changed hook runs on all units. On the leader, change_leader is called. On the others, update_slave_config.10:30
stubSorry - I'll start again10:30
magicaltrouthehe like I said, in reality I don't think i have a problem, my implementation might jump through a few extra hoops but I think it works, the logs look okay: https://gist.github.com/buggtb/3b65eb1672dc602c98ac10:31
stubThe implementation seems fine.10:32
stubApart from the bad choice of method names, but that is just opinion :)10:33
magicaltroutyeah thats just because they've evolved over time :)10:33
magicaltroutI'm gonna clean all that up in a bit10:33
magicaltroutthey start doing one thing then cory_fu makes me rework it and they do something else :P10:33
stubAll your units might restart at the same time atm. Is that a problem?10:36
magicaltroutrestart?10:37
magicaltroutoh the service10:37
magicaltroutyeah i had wondered about that10:37
magicaltroutbecause the slaves register with the master10:37
magicaltroutcurrently i'm not sure what happens in the app if the master is unavailable, whether they retry or not10:38
stubIf you want to avoid that, check out the coordinator layer10:38
stubI've only used it for top level charms though. This seems to be a layer?10:38
stubhttps://github.com/stub42/layer-coordinator has the docs10:39
magicaltroutyeah just reading the readme, thanks!10:39
stubOf course, rolling restarts mean your slaves are running for longer with outdated credentials so it might not be useful :)10:41
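
A minimal sketch of the pattern discussed above: reacting to a 'juju set' through the config-changed machinery rather than waiting for the next update-status tick. It assumes a reactive charm built on layer:basic, which raises the config.changed.<key> state; the state names and the restart_carte() helper are illustrative, not the actual pdi charm code.

```python
from charmhelpers.core import hookenv, host
from charms.reactive import when


def restart_carte(password):
    # Illustrative placeholder: in a real charm this would write out the new
    # shared credentials and bounce the service so slaves can still log in.
    host.service_restart('carte')


@when('pdi.installed', 'config.changed.carte_password')
def carte_password_changed():
    # config-changed runs on every unit after 'juju set', so leader and slaves
    # alike pick up the new password here instead of waiting up to five
    # minutes for update-status.
    cfg = hookenv.config()
    hookenv.status_set('maintenance', 'updating carte credentials')
    restart_carte(cfg['carte_password'])
    hookenv.status_set('active', 'PDI Installed. Carte Server Enabled.')
```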
magicaltrouthow do I copy files to a unit during an amulet test?11:45
magicaltroutcan I run juju scp somehow?11:45
sparkiegeekmagicaltrout: yes, you can - see https://github.com/juju/amulet/blob/master/amulet/sentry.py#L69 for how amulet itself uses juju scp11:50
magicaltroutaaah I misread the comment11:51
magicaltroutI thought it said "juju scp doesn't work"11:51
magicaltrout:)11:51
magicaltroutthanks sparkiegeek11:51
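
Amulet does not expose a public file-copy helper, but since it shells out to `juju scp` itself (see the sentry.py link above) a test can do the same directly. A rough sketch, with the charm, unit name and file paths purely illustrative:

```python
import subprocess

import amulet

d = amulet.Deployment(series='trusty')
d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration')
d.setup(timeout=900)
d.sentry.wait()

# Copy a test fixture onto the first unit; 'pdi/0' is the unit name juju
# assigns to the first unit of the service added above.
subprocess.check_call(
    ['juju', 'scp', 'tests/files/sample.ktr', 'pdi/0:/tmp/sample.ktr'])
```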
jamespagebeisner, I think I'm always getting a 'full' amulet execution atm12:15
jamespageno initial smoke...12:15
=== rogpeppe2 is now known as rogpeppe
=== nagyz is now known as angyz
=== angyz is now known as nagyz
beisnerjamespage, indeed.  one too many triggers on that pipeline.  that should be fixed shortly.13:44
tinwoodgnuoy, have you got a moment?  I'm having problems testing keystone.13:48
gnuoytinwood, defo13:48
tinwoodgnuoy, so I'm trying to run the 015-basic-trusty-icehouse and it's hanging because rabbitmq-server is failing.  Have you seen anything like that?13:49
gnuoytinwood, yes. It'll be reverse DNS13:50
tinwoodah, as in the rabbitmq can't work out its domain name13:50
tinwoodI see.  I've been having DNS problems - that gives me a new direction to look at.13:51
jcastroWe'll do office hours in about 1 hour!13:52
magicaltroutI'm trying to test my charm, but it involves changing ports/passwords etc13:57
magicaltroutwhats the best way to reset the state between each test?13:58
magicaltroutjust like an amulet remove() type teardown?13:58
jamespagegnuoy, coreycb: hey can we chat about aodh briefly?14:37
coreycbjamespage, yes14:37
gnuoyI have a 15mins window before child shuffling14:38
jamespagecoreycb, gnuoy: ok so sitrep - alarming is split from ceilometer -> aodh14:41
gnuoyyep14:41
coreycbyep14:42
jamespageceilometer have removed old code from mitaka; aodh has its own api etc...14:42
gnuoysplendid14:42
jamespageso I suggest we drop aodh from the ceilometer charm altogether and produce a new aodh one14:42
jcastrohttps://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg14:42
gnuoyyep14:42
jamespageor we can piggy back then together14:42
gnuoysplit gets my vote14:42
jcastro^^^ Hangout for the office hours in ~15 minutes14:42
jamespagegnuoy, ok I'll do the work to drop the aodh bits for now14:42
coreycbsplit makes sense to me since separate apis14:43
gnuoyjamespage, as coreycb has said before, it's the upgrade that;s is worrying14:43
coreycbjamespage, that sounds good, I put those there before I realized there were separate apis14:43
jamespagegnuoy, well people will lose alarming from ceilometer14:44
jamespagewe'll need to release note to add aodh14:45
gnuoyjamespage, what about existing data in the db?14:45
tinwoodthanks gnuoy, reverse dns fixed, tests proceeding.14:46
gnuoytinwood, tip top14:46
magicaltroutblimey laptop upgrade from trusty to xenial didn't completely brick it14:46
tinwoodgnuoy, yay! new pause/resume works with keystone :)14:48
* tinwood does a little dance14:49
sparkiegeektinwood: woot!14:49
beisnerwraskelly wrabbits14:49
* tinwood now has to work out how to do the git review bit.14:50
marcoceppiOFFICE HOURS! Starting in just 5 mins14:57
rick_h__marcoceppi: linky me please14:58
rick_h__marcoceppi: so I can join and get setup14:58
marcoceppihttp://ubuntuonair.com/ - https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg14:59
magicaltrouthello bundletester environment resets....15:01
cholcombequestion about charmhelpers related_units.  It looks like from the code it's valid to call this function without any arguments.  However i'm getting a CalledProcessError saying I must specify a relation id.  Maybe the API changed?15:01
magicaltroutsays it happens after every test15:01
magicaltroutin python land is a test a single method a test? or is it class level?15:02
cholcombemagicaltrout, every def fn is a test15:02
magicaltrouthmm15:02
magicaltroutthanks cholcombe15:02
cholcombemagicaltrout, i think the class is just an easy way to group related tests15:02
magicaltroutyup, same as java-land then, cool15:03
cholcomberight15:03
* magicaltrout goes off to find the cause of run away procs then15:03
cholcombelazypwr, do you know if related_units can be called without any args?15:04
=== lazypwr is now known as lazyPower
=== lazyPower is now known as lazypwr
magicaltroutrick_h__: is that a vanity mirror above your head? ;)15:05
magicaltroutooh mic stuff15:06
=== lazypwr is now known as lazyPower
rick_h__magicaltrout: yea, do I sound decent? :)15:11
magicaltrouthehe15:11
rick_h__but love the vanity mirror idea :)15:11
magicaltroutbetter than my dodgy videos15:11
magicaltroutI like a mirror, juju grooming15:11
lazyPowercholcombe -  nah i'm pretty sure you have to either a) only call that when you're in a relation-* hook, b) provide the rel_id so it can query for active conversations15:13
magicaltroutmarcoceppi: for those of us used to the gui, there should be a rendering of juju status within the UI, i find myself flipping back and forth15:20
beisnerhttps://github.com/openstack/?utf8=%E2%9C%93&query=charm-15:21
beisnerhttps://github.com/openstack-charmers/openstack-community/blob/master/README.dev-charms.md15:22
beisnerhttps://jujucharms.com/u/openstack-charmers-next/15:26
marcoceppimagicaltrout: good point, I'll bring it up15:26
bdx_jcastro, thanks for stressing this15:32
jcastro\o/15:33
magicaltrouttesting people15:33
magicaltroutif you set a config option in amulet15:34
magicaltroutwhats the python way of waiting a while for it to take effect? :)P15:35
beisnermagicaltrout, in the openstack charms, we address that by waiting for the extended workload status message, which is where the charm declares itself done, ready and settled.  that requires that all of the charms in the deployment possess such logic.15:37
beisnerex.  https://github.com/openstack/charm-keystone/blob/master/tests/basic_deployment.py#L3915:37
aisraeltvansteenburgh: I'm seeing an LXC test failure that looks like it's a problem in the environment: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2862/console15:38
beisnermagicaltrout, we do that to avoid race conditions in the tests.  naturally, you want the charm to be done doing its work before poking at it.15:39
tvansteenburghaisrael: thanks, fixed15:39
aisraeltvansteenburgh: no no, thank *you*15:39
magicaltroutbeisner: good idea thanks!15:39
beisnermarcoceppi - is there a vanilla wait-for-readiness amulet method?15:40
bdx_officehours: so, each openstack tenant would correspond to a separate cloud, or separate controller?15:40
bdx_marcoceppi: yes15:41
tvansteenburghbeisner, yes15:41
tvansteenburghbeisner https://github.com/juju/amulet/blob/master/amulet/sentry.py#L30715:42
tvansteenburghbeisner, also https://github.com/juju/amulet/blob/master/amulet/sentry.py#L38015:42
beisnertvansteenburgh, right, that waits for a specific status.  what i'm asking is actually for charms that don't use extended status.15:42
beisnersuch as mongodb or mysql15:42
bdx_rick_h: awsome!15:43
sparkiegeeksounds fishy to me15:43
bdx_awesome15:43
tvansteenburghbeisner wait_for_status doesn't depend on extended status i don't think15:44
jamespagebeisner, hey so I have a number of 'rollup' reviews up to drop old release configuration files...15:44
beisnertvansteenburgh, yep looks like that's the one magicaltrout ^15:44
jamespagecould you take a peek? I've been doing a recheck-full15:44
magicaltroutrick_h__: are you going to make cloud metadata pluggable eventually? so we can inject new instance types etc without upgrading Juju?15:45
tvansteenburghbeisner, magicaltrout: there is also sentry.wait() which waits for hooks to complete15:46
tvansteenburghhttps://github.com/juju/amulet/blob/master/amulet/sentry.py#L34515:46
beisnerah yes, thanks tvansteenburgh.  we've wrapped all of those amulet helpers in other test helpers so i've lost memory of their names. :-)15:47
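
A condensed sketch of the pattern tvansteenburgh and beisner describe: change config, then either block until the triggered hooks settle, or (where the charms publish extended workload status) wait for a specific message. Service name, config key and message are illustrative.

```python
import amulet

d = amulet.Deployment(series='trusty')
d.add('keystone')
d.setup(timeout=900)

# Change a config option, then block until the triggered hooks finish.
d.configure('keystone', {'admin-password': 'openstack'})
d.sentry.wait()

# Or, where the charm sets extended workload status, wait for the message
# that means it is done, ready and settled.
d.sentry.wait_for_messages({'keystone': 'Unit is ready'}, timeout=900)
```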
beisnerjamespage, ok so on those, clear to land if all passing?15:47
magicaltrouthmm15:47
magicaltroutthanks beisner tvansteenburgh15:47
jamespagebeisner, yah15:47
bdx_rick_h, officehours: what is the status of lxd <--> network spaces?15:47
jamespagebdx_, next two weeks I think...15:48
jamespageso not quite but almost15:48
bdx_jamespage: awesome! exciting!15:48
lazyPowerYES!15:48
lazyPower\O/15:48
lazyPowerrick_h__ - is it hard being that awesome?15:48
magicaltroutlol15:49
beisnercool stuff, bigdata :)   https://jujucharms.com/big-data15:51
rick_h__lazyPower: exhausting :P15:51
jamespagebeisner, just trying to de-cruft old stuff in between things...15:51
rick_h__lazyPower: helps to have tons of awesome people around15:51
lazyPoweri'll let mbruzek know you said that ;D15:51
* rick_h__ runs fast to keep up with them all15:51
beisnerjamespage, +1 for spring cleaning15:51
mbruzekheyo15:52
deanmanHello, I'm trying to use juju with the manual provider to connect to an already bootstrapped environment but for some reason it uses my personal key instead of the juju key. Is there a way to define which key to be used when running the juju command?15:53
tinwoodjamespage, do I need to be an Openstack Foundation member before doing a git review request?15:54
jamespagetinwood, yes15:54
tinwoodjamespage, as a person, I'm guessing - i.e. I have to physically join myself.15:55
bdx_officehours: thanks everyone!15:55
jamespagetinwood, yes15:55
tinwoodThanks jamespage, I'm staring at the form now :)15:56
cheryljhey lazyPower, I'm going to co-opt your bug 1553059 for just the last part of your problem - providing a way to clean up the cache.  I'm fixing the help text now, but I'd like to use that bug to track the cache.yaml cleanup15:57
mupBug #1553059: Help output when killing a `shared model` is incorrect <destroy-model> <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1553059>15:57
lazyPoweroh you betchya15:57
lazyPowerits a polymorphic issue anyway :)15:57
cheryljso I'll be changing the title15:57
lazyPowerduck-typed problems ftw15:57
bdx_officehours: As a side note, I'm giving a presentation of juju-openstack-ha at a portland openstack meetup this month .... I need to hammer out a few issues I'm experiencing with different services under ha .... is there someone that would be willing to work with me on this a bit so that my demo might be legit?15:57
magicaltrouttinwood: beisner just some terminology clarification if I want to catch a password change for example, I can set that block under message and then use Amulet wait_for_status?16:03
magicaltroutsorry not tinwood !:P16:03
magicaltrouttvansteenburgh: --^16:03
magicaltroutor is status literally the thing in the services that relays the status?16:03
magicaltrout(I realise the irony in that question)16:04
tvansteenburghmagicaltrout: not sure what you mean by "catch it"16:04
tvansteenburghyou mean you want to change config and wait for the change to complete?16:04
magicaltrouttvansteenburgh: I need to change a config option and wait for it to actually happen on the unit before proceeding in my test16:04
GilERROR cannot retrieve charm "cs:~hazmat/trusty/etcd-6": cannot get archive: Get https://api.jujucharms.com/charmstore/v4/~hazmat/trusty/etcd-6/archive: dial tcp 162.213.33.121:443: connection timed out16:04
tvansteenburghmagicaltrout: then you want to make the change and then call sentry.wait()16:05
=== JoseeAntonioR is now known as jose
magicaltrout /o\16:06
magicaltroutalrighty thats easier :P16:06
tvansteenburghmagicaltrout: more specifically, deployment.sentry.wait()16:06
magicaltrouton a slightly different subject, there are no amulet python api docs published yet are there?16:07
tinwoodnp magicaltrout :)16:07
magicaltroutI had a prod around but I've ended up grepping the source16:07
tvansteenburghmagicaltrout: sadly, no :(16:07
tvansteenburghmagicaltrout: there are some docs but they are lagging behind the source16:08
magicaltroutno probs16:08
* tvansteenburgh makes card to generate api docs for amulet16:08
iceybeisner: it's merged: https://review.openstack.org/#/c/287446/16:11
icey!16:11
beisnerindeed \o/16:11
marcoceppijcastro: let me know when the video lands16:12
jcastromarcoceppi: Almost, you done editing the notes?16:12
jcastroI need to paste those in16:12
jcastrohttps://www.youtube.com/watch?v=zPLW7cGrrjE&feature=youtu.be16:15
jcastromarcoceppi: ^^^ unlisted yet so we can fix the description16:16
marcoceppiI don't have edit access16:16
marcoceppijcastro ^16:17
jcastroyou signed in? You're listed as a "manager"16:17
marcoceppijcastro: which account?16:17
jcastrohttps://plus.google.com/u/0/b/103184405956510785630/+MarcoCeppi/16:18
jcastrothat one16:18
marcoceppiOIC16:20
magicaltroutcan someone tell me in simplistic terms16:26
magicaltroutwhat an environment reset entails in bundletester?16:26
magicaltroutits supposed to be a machine tear down isn't it? from my look at the code16:27
marcoceppimagicaltrout: yeah, bundletester -TT is the 1.x equivalent of destroy-model in 2.016:29
marcoceppiit kills machines, services, units, etc, but keeps the bootstrap node16:29
magicaltroutand according to the README reset is true by default16:29
magicaltroutso after every test I should get a fresh machine?16:29
=== lazyPower is now known as lazypwr
jcastromagicaltrout: are you indexing the video or you want me to do that?16:32
magicaltroutnot me :P16:32
=== redelmann is now known as rudi|comida
=== lazypwr is now known as lazyPower
=== lazyPower is now known as lazypwr
marcoceppijcastro: I'll index16:40
beisnerjamespage, include ceilometer in those +2s?16:43
magicaltroutaww yeah, when you think its deploying new code.... but it aint \o/16:44
=== lazypwr is now known as lazyPower
stubmagicaltrout: Every bundletester target/test gets a fresh environment. What sort of tests are you writing?16:52
magicaltrouthey stub, yeah it helps if I deploy more than boiler plate I guess :)16:53
jcastroanyone know if we have the juju->juju2 update-alternatives bits documented? Can't seem to find anything17:00
lazyPowerjcastro - i know that marco did a blog post on it, but i dont think we have it officially documented in any capacity, no17:05
* rick_h__ can't recall where it was, release notes or what17:05
marcoceppi release notes and my blog lazyPower rick_h__ lazyPower17:05
magicaltroutmarcoceppi.com17:05
magicaltroutwhat he said17:05
lazyPowermarcoceppi do i constitute a committee now?17:06
jamespagebeisner, do you want me to pickup the xenial-enable branches on monday?17:09
jamespageI can work through those but they all need a fullcheck17:10
beisnerjamespage, yep, fulls are running.  it's not time-sensitive.  i was mainly needing something to double check triggers and workflow and thought i'd see if those are ready to flip on.17:10
beisneri think they are, but we shall see :-)17:11
jamespagebeisner, \o/17:11
jamespagebeisner, nice work this week btw - I think the transition has gone pretty smoothly17:11
lazyPowerGil - getting along ok w/ the new pointer to ETCD?17:11
beisnerjamespage, thanks :-)  and thanks for all your good work on it too!17:12
GillazyPower no.  i'm working on getting juju upgraded to 2.0.x which you indicated was a pre-req, and that's not going well either .  I'm on 1.25.3 atm17:13
lazyPowerGil - ok instead of going through a 2.0 upgrade path17:13
lazyPoweryou can also build the charm from source, theres only 1 repository to clone, then you can use a local charm until 2.0 lands as stable17:14
lazyPowerunless you *want* to beta :) Then in which case, disco! i'll lend a hand where i can17:14
Giluhhhh17:14
lazyPowerlets go for the path of least resistance17:14
Gilsounds good17:15
lazyPowerhave you built any charms before? literally with `charm build`17:15
lazyPowerGil - if you clone this repo https://github.com/chuckbutler/layer-etcd and run `charm build -o $JUJU_REPOSITORY`, it will output the charm (the one you were upgrading to 2.0 to get) into your charm repo (assuming that env var is set); you can then deploy with `juju deploy local:trusty/etcd`17:16
Gilno but I think I could - i watched marco's youtube classes and did some work bringing charms down locally to get started on building an "oraclexe" charm17:16
Gilcloning repos I do know17:17
Gilfrom github17:17
Gilthat's a start17:17
lazyPower\o/ woo, its a great start17:19
lazyPowerthe rest is downhill from there17:19
marcoceppiStep 1 clone, Step 2 ... step 3 profit17:20
Gilexport JUJU_REPOSITORY=/home/gstanden/charms/trusty17:22
Gilcharm build -o $JUJU_REPOSITORY17:22
Gilreturned 4 lines of output, no errors17:22
GilAdded charm "local:trusty/etcd-0" to the environment.17:23
Gilseems to have worked17:23
aisraeltvansteenburgh: Looks like a few items in the review queue that are merged but still showing up: http://review.juju.solutions/review/2429 and http://review.juju.solutions/review/2435 are the ones I've seen17:24
Gilmachine 2 hasn't fired up yet though...17:24
lazyPowerGil - ah, interesting. i'm surprised that did work, $JUJU_REPOSITORY is typically the directory just before series. it must have recursed into the tree and found it.17:25
tvansteenburghaisrael: thanks, updated17:25
Gilto what should I have set JUJU_REPOSITORY?17:25
aisraelkwmonroe: Are you still working on the zulu8 review? (it's currently locked)17:26
kwmonroeyeah aisrael17:26
lazyPowerbased off your last paste - /home/gstanden/charms17:26
aisraelkwmonroe: ack, thanks!17:26
kwmonroenp, it's really a no-op, just need to move tests from ./trunk to his ./source branch.  it'll be done today.  thanks for checking ;)17:26
kwmonroewell, not so much a no-op, just a not-much-op17:27
Gilsomething may still not be right ... normally when I deploy a non-lxc charm a machine starts up (maas).  The metal is not starting - and I know that my maas setup is solid so I'm wondering if something is still not right17:27
lazyPoweryeah that doesn't sound promising Gil17:27
Giljuju status does show the etcd deployment status17:27
=== natefinch is now known as natefinch-lunch
lazyPowerGil - however the machine seems to be left in pending?17:30
cholcombemarcoceppi, quick question about juju relation-list.  On ceph-mon it's not showing the related units and just blowing up17:33
marcoceppicholcombe: when are you calling realtion-list?17:33
cholcombehookenv.related_units()17:33
cholcombeoh when17:33
GilI ran "juju remove-service etcd" then deleted machine 2 using the juju gui, then retried after setting JUJU_REPOSITORY to /home/gstanden/charms17:33
cholcombemarcoceppi, i'm calling it inside a config-changed hook17:34
Gilthen I got: WARNING failed to load charm at "/home/gstanden/charms/trusty/deps": open /home/gstanden/charms/trusty/deps/metadata.yaml: no such file or directory17:34
Gilso I put JUJU_REPOSITORY back to what I had before...and then retried...and it did launch a machine this time17:34
lazyPowerGil - thats known behavior. the deps directory is a cache that gets used when building layers17:34
marcoceppicholcombe: you have to give it the relation you want to list17:34
lazyPowerit pulls in the remote interface-layers, and required layers that you dont have locally to assemble the charm - all of which are listed on http://interfaces.juju.solutions17:34
cholcombemarcoceppi, ah ok interesting17:34
Gil machine: "3"17:35
marcoceppicholcombe: so you can only call relation_list without parameters inside of a relation hook, otherwise you have to pass it a context17:37
cholcombemarcoceppi, got it.  inside config-changed i'm just trying to see who else is in the cluster and get a list of unit names17:38
aisraelkwmonroe: I saw you were having troubles with the xenial vagrant images. Did you run into this? http://pastebin.ubuntu.com/15282513/17:39
aisraelkwmonroe: and if so, did you find a workaround?17:39
kwmonroeyup aisrael: https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/153854717:40
mupBug #1538547: vagrant box fails with private network interface <livecd-rootfs (Ubuntu):Confirmed for utlemming> <https://launchpad.net/bugs/1538547>17:40
kwmonroeaisrael: but that didn't stop me from being able to 'vagrant ssh xenail'17:41
aisraelkwmonroe: aha, I can get in.17:41
=== rudi|comida is now known as redelmann
kwmonroeaisrael: yeah, looks like it's borked, but it's just messin with you.  once you get in, you can add ubuntu-xenial to the 127.0.0.1 line of /etc/hosts to get rid of the "failure in name resolution"17:42
aisraelkwmonroe: heh, I just hit that bug.17:42
cholcombemarcoceppi, are the relation ids that relation-list wants just the names of the relations?17:47
marcoceppicholcombe: yes17:50
cholcombemarcoceppi, looks like hookenv.relations()['mon']['mon:1'].keys() returns the right information17:53
neiljerramjamespage, hi - a question if I may about the new Git/Gerrit-based process: will changes at https://git.openstack.org/cgit/openstack/charm-neutron-api be mirrored into https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/next ?18:00
jamespageneiljerram, yes but there is some lag18:03
neiljerramjamespage, Or alternatively, is there a way in a bundle .yaml file to specify using the Git source directly?18:03
marcoceppicholcombe: mon:1 is not a hard coded key18:03
marcoceppidon't hard code that18:03
cholcombemarcoceppi, i won't18:03
marcoceppicholcombe: is this a peers relation?18:04
neiljerramjamespage, I'm guessing you mean around once per day?18:04
jamespageneiljerram, utimately those will end up under https://jujucharms.com/u/openstack-charmers-next/ and18:04
jamespagebeisner, whats the schedule on the git->bzr sync?18:04
jamespageneiljerram, I think its more frequent than that18:04
cholcombemarcoceppi, yeah18:04
marcoceppicholcombe: k18:04
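
A sketch of what marcoceppi describes: outside a relation hook (for example in config-changed), related_units() needs an explicit relation id, so iterate over relation_ids() rather than hard-coding something like 'mon:1'. The 'mon' relation name follows the ceph-mon example above.

```python
from charmhelpers.core import hookenv


def peer_units():
    # Collect the unit names on the 'mon' peer relation, e.g.
    # ['ceph-mon/1', 'ceph-mon/2'], usable from any hook context.
    units = []
    for rid in hookenv.relation_ids('mon'):
        units.extend(hookenv.related_units(rid))
    return units
```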
neiljerramjamespage, Well that's plenty frequent enough, anyway.  Thanks.18:05
beisnerjamespage, every 4hrs18:05
jamespageokies18:05
jamespageneiljerram, ultimately when all the bits for charm push are in place, all of those branches are redundant and charm-store publication becomes a post commit task in the ci train18:06
neiljerramjamespage, Right, so then the process will go from git.openstack.org directly to  https://jujucharms.com/u/openstack-charmers-next/ ?18:06
GillazyPower  - there is a lot I can do without getting into etcd at this point.  I will just work on my oraclexe charm for now on my stable trusty.  I've a lot to learn still18:20
lazyPowerGil - ok, if you need any help with etcd/flannel/etc. let me know and i'll lend a hand :)18:20
lazyPowerthanks for taking a look18:21
Gilcool thanks!18:21
beisnerneiljerram, it has a few hops and syncs along the way, but yes, the tip of master of a charm at git.openstack.org flows to github.com to launchpad and then to the openstack-charmers-next space in the charm store. :-)18:32
beisnerunless it goes through albuquerque, then i'm not sure18:32
neiljerrambeisner, thanks!18:32
neiljerram:-)18:32
beisnerneiljerram, yw.  thanks for those maintenance updates on the neutron-api calico bits.18:37
=== natefinch-lunch is now known as natefinch
magicaltroutright fscking tests.... lets get you working19:21
magicaltroutcharmers19:33
marcoceppimagicaltrout you rang?19:34
magicaltroutcan i run a test class on a unit thats already running?19:34
magicaltroutrather than wait for bundletester to mess around19:34
tvansteenburghmagicaltrout: if you run your amulet test against an env that already has the services deployed, it'll use them instead of deploying new ones19:37
tvansteenburghmagicaltrout: note that if you have multiple test files (in the tests/ dir), bundletester will reset the environment between files19:39
tvansteenburghmagicaltrout: you can prevent that by setting `reset: false` in the tests.yaml file19:40
tvansteenburghhttps://github.com/juju-solutions/bundletester#testsyaml19:40
=== ejat_ is now known as ejat
magicaltroutah thanks tvansteenburgh never noticed you could just execute them, assumed i'd have to dump some more stuff in there19:59
magicaltroutI'm clearly doing something stupid: is there anything wrong with this line: https://gist.github.com/buggtb/5620a4b5abf403e7b997#file-brokentest-py-L39 ?20:13
lazyPowermagicaltrout - nope, that looks correct to me20:16
lazyPowermagicaltrout - matter of fact, i do something similar here https://github.com/chuckbutler/docker-charm/blob/master/tests/10-deploy-test#L6820:16
magicaltroutbah! thanks lazyPower20:17
magicaltroutoooh wtf20:21
magicaltroutthis is where the whole compile deploy, suck from Launchpad stuff gets messed up20:21
magicaltroutsomehow with bundletester I have current rev tests but old charm reactive code20:22
* magicaltrout sobs into his rum and soda20:22
jrwrenmagicaltrout: its ok. its not worth drinking about. I too have experienced this pain.20:24
magicaltrouthehe20:24
magicaltrouti sit there wondering why the config isn't executing20:25
magicaltroutit is, but the old code didn't work properly and it runs and fails20:25
magicaltroutsilly automated tests20:25
magicaltroutclearly not my fault in the slightest :P20:25
* magicaltrout double checks launchpad before running this time20:26
magicaltroutokay wtf20:43
magicaltroutdoes bundletester cache charms or something?20:43
magicaltroutwhy does20:44
magicaltroutbundletester -t lp:~f-tom-n/charms/trusty/pentahodataintegration/trunk20:44
magicaltroutwhen it spins up give me old code?20:44
magicaltroutthe code in my reactive pdi.py is stale20:45
magicaltroutand i have no clue why20:45
lazyPowerthere's no older copy of the charm hanging around somewhere in $JUJU_REPOSITORY?21:06
lazyPoweri've had issues where i thought i was going to be clever and just mv a charmdir, cloned, and kicked off a test only to have the one i mv'd get deployed21:06
tvansteenburghmagicaltrout: did you figure it out?22:36
magicaltroutyeah i've just this second had one of those "duh" moments22:38
magicaltroutmy test is what I have locally but the charm itself is pulled out of the charm store...... *massive face palm*22:39
magicaltrouti guess this is why fridays were invented22:41
tvansteenburghheh22:41
magicaltroutits just one of those things isn't it "wtf, my code is in front of me... why is it not right damnit?!" "oh yeah because its pulling the source from somewhere else"......22:42
magicaltroutbah22:42
magicaltroutgot another talk accepted at ApacheCon that will use juju \o/22:43
tvansteenburghmagicaltrout: nice22:43
magicaltroutyeah, i have a lot of writing to do :P22:43
tvansteenburghyou know you can run bundletester against your local source right?22:43
magicaltroutwell i kinda figured thats what it was there for. but my setup was "borrowed" from kwmonroe's ubuntu-dev env and has cls.d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration') wired in the top22:44
tvansteenburghalso, to answer your earlier question about the env reset, it destroys everything except the bootstrap node between each test file execution22:45
tvansteenburgh(same as `juju-deployer -TT`)22:45
tvansteenburghmagicaltrout: is that the same charm in which these tests reside?22:48
tvansteenburghyou can just do cls.d.add('pentahodataintegration')22:48
magicaltroutthanks tvansteenburgh that will work when it gets pushed up to the canonical run tests as well I assume?22:49
tvansteenburghyep22:49
magicaltroutmarvelous22:49
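
The distinction tvansteenburgh is pointing at, sketched out (service names as in the discussion; the commented-out line is the store-pulling pattern that caused the stale code above):

```python
import amulet

d = amulet.Deployment(series='trusty')

# Pulls the published revision from the charm store -- handy for dependencies,
# but it will happily ignore your local changes:
# d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration')

# Adding the charm by its own name deploys the local source the tests ship
# with, both locally and when the review-queue infrastructure runs them:
d.add('pentahodataintegration')
d.setup(timeout=900)
```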
tych0howdy cats. if i do a, `juju bootstrap gce gce`, i get a, `ERROR cloud "gce" not found`. this is with a current trunk build of juju with patches on top, and my environments.yaml has suitable config (it worked in 1.25 and older 2.0s). i must be doing something dumb, but i'm not sure what22:56
rick_h__tych0: environments.yaml is no longer used23:01
tych0rick_h__: yeah, i figured as much. are there docs on how to migrate to whatever the new thing is?23:01
rick_h__tych0: see juju list-clouds, list-credentials and the beta1 release notes23:02
rick_h__tych0: release notes emailed23:02
tych0rick_h__: cool, thanks23:03
rick_h__cherylj: do we have a start to the "getting started" tych0 could try?23:03
magicaltroutokay tvansteenburgh if I run a config change, what does sentry.wait actually wait for?23:04
tvansteenburghmagicaltrout: hooks to finish executing23:04
magicaltrouthmm fair enough23:08
magicaltroutthe config gets executed okay now, but the test gets to the next step too early23:08
tvansteenburghmagicaltrout: happy to look at your code if you want23:08
magicaltroutthanks tvansteenburgh23:09
magicaltrouthttps://github.com/OSBI/layer-pdi/blob/master/tests/01-deploy.py#L3923:09
magicaltrout39 and 40 need to stop a process and wait until its actually happened23:09
magicaltroutthat process is stopped via https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L5323:10
magicaltroutwhich then drops down to line 15223:10
magicaltroutjust kills a pid23:10
magicaltroutbut the next line in my test checks its stopped and it returns a pid23:10
magicaltroutbut if you login to the machine it has actually been destroyed23:11
tvansteenburghmagicaltrout: i don't follow. the config change is supposed to kill the proc right?23:17
tvansteenburghand is it getting killed?23:17
magicaltrouttvansteenburgh: that is correct23:18
magicaltroutbut in the test when I check for the process, it returns a pid23:18
tvansteenburghoic23:18
magicaltroutbut if you login to the box after its failed and check for the pid its not there23:19
magicaltroutso its like the next line executes too soon23:19
magicaltroutbefore the pkill has finished doing its thing23:19
tvansteenburghyeah, i wonder if wait() is getting called and returns before the hook even starts23:21
tvansteenburghi wonder if you could use wait_for_messages() instead23:22
tvansteenburghhttps://github.com/juju/amulet/blob/master/amulet/sentry.py#L38023:22
tvansteenburghalthough then you might end up changing your charm to support the test, which is backwards23:23
magicaltroutwell i don't mind23:23
magicaltroutnothings finalised23:23
magicaltroutso extended status, is that like when i do status_set('active', 'PDI Installed. Carte Server Disabled.')23:24
magicaltroutits the blurb afterwards?23:24
tvansteenburghyep23:24
magicaltroutokay so if I wait for the disabled message it should hang because that certainly comes after the pkill23:25
tvansteenburghmagicaltrout: cool, try it!23:26
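
Roughly what that would look like in the test (config key, status message and process name are illustrative; the message has to match exactly what the charm passes to status_set):

```python
# 'd' is the amulet.Deployment the test has already set up.
d.configure('pdi', {'run_carte': False})

# Blocks until the charm reports the post-pkill status, so the assertion
# below cannot race the hook that is still stopping the server.
d.sentry.wait_for_messages(
    {'pdi': 'PDI Installed. Carte Server Disabled.'}, timeout=900)

unit = d.sentry['pdi'][0]
output, code = unit.run('pgrep -f carte')
# pgrep exits non-zero when nothing matches, i.e. the server really is gone.
assert code != 0
```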
magicaltroutother dumb question tvansteenburgh do I need to do anything other than talisman = Talisman([], timeout=self.timeout)23:32
magicaltroutto get a talisman object?23:32
tvansteenburghmagicaltrout: generally you don't create your own, you get it from d.sentry23:33
tvansteenburghd.sentry is a Talisman object23:33
magicaltroutah right23:33
tych0rick_h__: thanks, i got it sorted. my last question is do you (or anyone) have any idea where i can put the enable-os-upgrade: false flag now to speed up deploys?23:59
tych0arosales: might know? ^^23:59
