/srv/irclogs.ubuntu.com/2014/11/06/#juju-dev.txt

perrito666ericsnow: before I go to sleep and forget about this (it's like 2:30 AM here) please add a link to the patch to your last comment and an example of the Version output. I just made a change very similar to your suggestion to accommodate version and need to know how to tell if we are above/below version N, and since I am in it I can review your patch for archive data so it gets merged, or alternatively have a separate branch01:35
perrito666with this fix for when you are merged01:35
perrito666peace out01:35
ericsnowperrito666: k01:36
ericsnowperrito666: sleep well01:36
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
wallyworldaxw: hi, interrupt time. we've come up with a list of achievable bugs to burn down for 1.21. we want to get these fixed this week. bug 1388860 and bug 1389037 relate to ec2 availability zones. can i ask you to please take a look?04:09
mupBug #1388860: ec2 says     agent-state-info: 'cannot run instances: No default subnet for availability zone:       ''us-east-1e''. (InvalidInput)' <deploy> <ec2-provider> <network> <juju-core:Triaged> <https://launchpad.net/bugs/1388860>04:09
mupBug #1389037: provider/ec2: try alternative AZ on InsufficientInstanceCapacity error <ec2-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1389037>04:09
wallyworldthe bugs against beta1 milestone are what we're going to fix04:09
axwwallyworld: no worries04:10
wallyworldthank you04:11
menn0wallyworld: if you have a chance could you look at this again? http://reviews.vapour.ws/r/355/diff/04:14
menn0wallyworld: this takes into account the things we discussed04:15
menn0wallyworld: for the most part, things worked out simpler and clearer04:15
wallyworldmenn0: sorry, was otp, looking04:21
wallyworldmenn0: also, what's the status of bug 138275104:22
mupBug #1382751: non subordinate container scoped relations broken  <regression> <relations> <subordinate> <juju-core:Triaged by menno.smits> <https://launchpad.net/bugs/1382751>04:22
wallyworldwe have this as needing to be fixed for 1.2104:22
wallyworldand we'd like all fixes done by this week04:22
menn0wallyworld: I haven't looked in to it04:28
menn0wallyworld: it was going to be my next thing04:28
menn0wallyworld: i'm about to EOD but I can make it my top priority tomorrow04:29
wallyworldmenn0: ok, thanks, see how you go with it.  we are aiming to get beta1 out end of week if possible04:29
wallyworldsure, ty04:29
wallyworldalmost done review, looks good04:29
menn0wallyworld: regarding the upgrades PR, I've just had an idea.04:30
wallyworldshoot04:30
menn0wallyworld: instead of having Steps() and StateSteps() on Operation04:30
menn0wallyworld: what about defining 2 separate lists of operations04:30
menn0wallyworld: one for state based steps and one for the rest04:30
menn0wallyworld: that could simplify things further04:31
menn0wallyworld: and also i can see the state based stuff eventually happening in a separate worker04:31
menn0wallyworld: so having a separate list seems cleaner if that happens04:31
menn0wallyworld: I guess it doesn't matter too much at this stage04:32
wallyworldyeah, main goal from first review was separate lists and contexts04:32
wallyworldi think what's there works, but if it can be improved....04:32
menn0wallyworld: I'll think about it. I won't land the current PR today anyway (and maybe not tomorrow depending how sorting out this bug goes)04:33
wallyworldok, sounds good04:33
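A minimal sketch of the split menn0 is floating above: two independent lists of upgrade operations (state-server-only steps and everything else) instead of a single Operation type exposing both Steps() and StateSteps(). All type, field, and variable names below are illustrative, not the actual juju/upgrades API.

    package main

    import "fmt"

    // Step and Operation are stand-ins for whatever the upgrades package
    // would define; only the overall shape matters here.
    type Step struct {
        Description string
        Run         func() error
    }

    type Operation struct {
        TargetVersion string
        Steps         []Step
    }

    // Two separate registries, so the state-based upgrades could later move
    // into their own worker without touching the rest.
    var (
        upgradeOperations      []Operation
        stateUpgradeOperations []Operation
    )

    func runAll(ops []Operation) error {
        for _, op := range ops {
            for _, s := range op.Steps {
                fmt.Println("running", op.TargetVersion, "step:", s.Description)
                if err := s.Run(); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        stateUpgradeOperations = append(stateUpgradeOperations, Operation{
            TargetVersion: "1.21.0",
            Steps:         []Step{{Description: "migrate a state collection", Run: func() error { return nil }}},
        })
        upgradeOperations = append(upgradeOperations, Operation{
            TargetVersion: "1.21.0",
            Steps:         []Step{{Description: "rewrite an agent config file", Run: func() error { return nil }}},
        })
        // State-server-only steps run first, then everything else.
        _ = runAll(stateUpgradeOperations)
        _ = runAll(upgradeOperations)
    }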
menn0wallyworld: thanks for the reviews04:36
menn0wallyworld: i'm off now (local Python meetup)04:36
wallyworldmenn0: thanks, have fun, you got a +104:36
jamanastasiamac: ping05:20
wallyworldjam: i talked to anastasiamac about the numactl stuff - the solution is being revised, so what's there is obsolete05:29
jamsure05:30
anastasiamacjam,wallyworld: thank u for ur time! u r  amazing :-005:30
wallyworldjam: tl;dr: original intent was to generate the same script each time, and the script would use bash as per the bug to detect numa, no need for core to detect numactl. but the mongo doc seems to suggest that numactl can be used to start mongo all the time, even when not needed. so elmo will be asked to see if he's ok with that05:32
wallyworldthis will simplify things greatly05:33
jamwallyworld: yeah, I think we'd still want the "is numactl available" which is fine, but I feel like the bug is probably asking us to install it on numa architectures05:33
jamwhich might just mean "always install it, and always call mongo with it"05:33
wallyworldjam: your last line is the current thinking05:34
wallyworldtrivial to install05:34
wallyworldand if no ill effects on non numa arches, then no harm05:34
wallyworldwe already apt install mongo, just would be adding another small pkg to that05:35
wallyworldaxw: thanks for the fixes \o/06:46
axwwallyworld: nps, gotta back port now right?06:47
wallyworldyeah06:47
wallyworldboth06:47
wallyworldaxw: i'm off to soccer, back later, you can just self approve those backports06:53
axwwallyworld: will do06:55
=== urulama_ is now known as urulama
jamdimitern: morning07:28
dimiternmorning jam07:31
jamdimitern: I wanted to get your input on https://bugs.launchpad.net/juju-core/+bug/138886007:31
mupBug #1388860: ec2 says     agent-state-info: 'cannot run instances: No default subnet for availability zone:       ''us-east-1e''. (InvalidInput)' <deploy> <ec2-provider> <network> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1388860>07:31
jamI sent an email earlier07:31
dimiternjam, yeah, I saw that and replied a few minutes ago07:32
jamdimitern: looks like axw is doing that already07:32
jamdimitern: http://reviews.vapour.ws/r/359/07:32
axwa fix has landed on master already, just skipping over them for now07:33
dimiternjam, axw, awesome!07:34
dimiternjam, I've managed to land the juju-br0 patch yesterday, but I still need to backport it for both 1.20 and 1.2107:35
dimiternjam, and then I'll have a look at the "multiple networks" bug for maas07:36
jamaxw: will you be porting your fix to 1.21 ?07:36
axwdimitern: nice work on maas, glad to see that network stuff working nicely07:36
axwjam: yes, doing that right now07:36
dimiternaxw, cheers, yeah it's getting better07:36
jamaxw: I'm slightly worried about capitalization being an issue, that sort of thing really looks fragile07:38
jamotherwise LGTM for 1.2107:38
axwjam: I'm starting to wonder whether we should just try on all AZs regardless of the error07:41
axwit's proven to be a little fragile in general07:41
jamaxw: as in, if we get an error just go to the next one ?07:41
axwyep07:41
axwperhaps we could pick certain errors to bail on, rather than the other way around07:42
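A rough sketch (not juju's actual provider/ec2 code) of what axw is suggesting: walk the availability zones and fall through to the next one on any error, bailing out early only for a short list of errors assumed to be unrecoverable. The function names and error fragments are made up for illustration.

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // bailErrors lists error fragments that should stop the search outright
    // instead of falling through to the next zone (values are illustrative).
    var bailErrors = []string{"UnauthorizedOperation", "AuthFailure"}

    func shouldBail(err error) bool {
        for _, frag := range bailErrors {
            if strings.Contains(err.Error(), frag) {
                return true
            }
        }
        return false
    }

    // startInZones tries each zone in turn, returning on the first success
    // and giving up early only for errors we have decided are fatal.
    func startInZones(zones []string, runInZone func(zone string) error) error {
        var lastErr error
        for _, zone := range zones {
            lastErr = runInZone(zone)
            if lastErr == nil {
                return nil
            }
            if shouldBail(lastErr) {
                return lastErr
            }
        }
        return fmt.Errorf("cannot start an instance in any zone: %v", lastErr)
    }

    func main() {
        zones := []string{"us-east-1e", "us-east-1a"}
        err := startInZones(zones, func(zone string) error {
            if zone == "us-east-1e" {
                return errors.New("No default subnet for availability zone: us-east-1e (InvalidInput)")
            }
            return nil
        })
        fmt.Println("error:", err)
    }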
jamwallyworld: anastasiamac: still around ?07:47
anastasiamacjam: wallyworld @ soccer, m about to go. why?07:47
jamanastasiamac: I was thinking about the numactl stuff a bit more, and I think we need a flag07:48
jamdoing always everywhere is a pretty big cahnge07:48
jamchange07:48
jamand we can easily be doing it wrong07:48
jamwith a flag, we can at least disable/enable it07:48
jamI think I'd be happiest with "use it if its there and not disabled"07:48
jamand maybe the first one is to not install the package by default07:49
jamso people can go install it if they want it07:49
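A minimal sketch of the "use it if it's there and not disabled" idea, assuming a hypothetical disableNUMAWrap flag and example paths; it is not juju's actual mongod service setup.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // mongodCommand returns the argv used to start mongod. disableNUMAWrap
    // stands in for whatever flag or config key would control this; the
    // dbpath is just an example value.
    func mongodCommand(disableNUMAWrap bool) []string {
        args := []string{"/usr/bin/mongod", "--dbpath", "/var/lib/juju/db"}
        if disableNUMAWrap {
            return args
        }
        if path, err := exec.LookPath("numactl"); err == nil {
            // numactl is installed: interleave allocations across all
            // NUMA nodes, as the MongoDB production notes recommend.
            return append([]string{path, "--interleave=all"}, args...)
        }
        return args
    }

    func main() {
        fmt.Println(mongodCommand(false))
    }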
anastasiamacjam: from my understanding of flags, they reflect a preference07:50
anastasiamacthe cmd numactl wrap is not - it's kind of needed based on hardware...07:50
jamanastasiamac: "based on hardware" at the very least07:51
anastasiamacjam: mayb completely off but most of this "feeling" is based on07:51
anastasiamacjam: http://docs.mongodb.org/manual/administration/production-notes/#production-numa07:51
jamanastasiamac: yeah, I'm on that page as well07:52
anastasiamacjam: m happy not to run all the time07:52
anastasiamacjam: my preference would be to run when needed: on NUMA07:53
anastasiamacjam: irrespective of # of nodes07:53
jamanastasiamac: so I certainly want us to DTRT according to our best understanding of what's going on07:53
jambut if that is 100%, then we want to have a way for a user to tell us "you're doing it wrong"07:53
jams/is/is not 100%/07:53
=== liam_ is now known as Guest77235
anastasiamacjam: :-) under what circumstance, if on NUMA, a user would not want to run with numactl wrap?07:57
anastasiamacjam:07:57
jamanastasiamac: reading here: http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/07:58
jamactually every juju installation07:58
jamthe reason you have numactl07:58
jamis for cases where Mongo is consuming >50% of the memory07:58
jambut *juju* mongodb doesn't get that big07:58
jamthe machine isn't "a dedicated mongodb machine"07:58
anastasiamacjam: have to go to c kids, back soon-ish07:58
jamanastasiamac: later07:59
mattywmorning everyone08:15
anastasiamacjam: no reason08:23
anastasiamacjam: was not looking08:23
jam:)08:23
jamso digging into it, the reason Mongo wants the numa stuff is because it thinks it is going to use more memory than one node has access to, and there is a swapping bug on Linux associated with that case.08:24
jambut if Mongo is using < 1 node's worth of RAM, it will actually perform *better* if you don't use numactl --interleave=all08:24
jamsince it won't be trying to fetch from non-local memory.08:24
anastasiamacjam:wow :-)08:25
anastasiamacjam: so either my bug is not a bug08:25
anastasiamacjam: or we just make it an option (flag) and let user set it (don't wrap by default)?08:26
anastasiamacjam: bug 135033708:27
mupBug #1350337: Juju DB should use numactl when running mongo on multi-socket nodes <hours> <maas> <mongodb> <juju-core:Triaged by anastasia-macmood> <https://launchpad.net/bugs/1350337>08:27
jamyeah, I'm responding there08:27
anastasiamacjam: excellent! thnx :-)08:27
jamanastasiamac: can you dig into whether we have a way to get mongodb to be happy without numactl ?08:29
jamLike are they directly touching libnuma ?08:29
anastasiamacjam: k... in couple of hrs :-)08:32
jamanastasiamac: https://github.com/mongodb/mongo/blob/master/src/mongo/db/startup_warnings_mongod.cpp08:35
jamit just does a "does /sys/devices/system/node/node1 exist" check08:35
jamand then tries to read /proc/self/numa_maps08:36
jamand looks for a line that isn't "interleave"08:36
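For reference, a rough Go translation of the check jam is describing (the real mongod code is C++ in startup_warnings_mongod.cpp): a second node directory under /sys/devices/system/node means NUMA hardware, and the policy field on the first line of /proc/self/numa_maps shows whether the process is running interleaved.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // numaHardware reports whether a second NUMA node exists.
    func numaHardware() bool {
        _, err := os.Stat("/sys/devices/system/node/node1")
        return err == nil
    }

    // interleaved reports whether the first mapping of this process uses the
    // "interleave" memory policy. Each numa_maps line is "<address> <policy> ...".
    func interleaved() (bool, error) {
        f, err := os.Open("/proc/self/numa_maps")
        if err != nil {
            return false, err
        }
        defer f.Close()
        scanner := bufio.NewScanner(f)
        if !scanner.Scan() {
            return false, scanner.Err()
        }
        fields := strings.Fields(scanner.Text())
        return len(fields) > 1 && strings.HasPrefix(fields[1], "interleave"), nil
    }

    func main() {
        if !numaHardware() {
            fmt.Println("single NUMA node, nothing to warn about")
            return
        }
        ok, err := interleaved()
        if err != nil {
            fmt.Println("cannot read numa_maps:", err)
            return
        }
        if !ok {
            fmt.Println("warning: NUMA hardware detected but memory is not interleaved")
        }
    }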
jamaxw: if you're around, does https://bugs.launchpad.net/juju-core/+bug/1388860 need to do anything on 1.20? Or did the round-robin behavior only land in 1.21 ?08:43
mupBug #1388860: ec2 says     agent-state-info: 'cannot run instances: No default subnet for availability zone:       ''us-east-1e''. (InvalidInput)' <deploy> <ec2-provider> <network> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1388860>08:43
axwjam: can't remember, I'll check08:44
jamaxw: thanks08:44
axwjam: yeah we'll need to do it for 1.20 too08:44
jamaxw: it may be that 1.20 won't have any default-vpc support, but retrying on round-robin seems good to have there08:45
axwjam: ah, because it's using an older version of the EC2 API? could be08:46
axwagreed08:46
jamaxw: right, it is probably just using ec2-classic08:46
jamdimitern: do you know if VPC (default-vpc) support landed in 1.20 ?08:46
jamaxw: I just got an EC2 instance on "54.171.140.0"08:48
jamI didn't think you could use .008:48
jamheh, it works08:48
davecheneyjam: it'll work if the netmask isn't a /2408:48
axwjam: I'll just test with 1.20 to make sure. I didn't think we were making any calls yet that triggered the VPC support, but I've probably missed something08:51
dimiternjam, the support is there, but unused08:52
dimiternaxw, setting a SubnetID in the RunInstances params struct will trigger the VPC-aware API version to be run, otherwise we use (i.e. EC2 emulates) the classic mode08:54
axwright and we're not setting SubnetID08:55
axwso I should expect this to fail08:55
dimiternaxw, what error are you getting?08:56
axwdimitern: I'm checking whether or not "No default subnet for availability zone" occurs in 1.20 too08:59
axw(it does)08:59
TheMuemorning09:05
voidspacemorning all09:25
voidspacedimitern: what's the region with a default vpc?09:59
jamdimitern: voidspace: standup ?10:01
jamvoidspace: southeast ap 2 IIRC10:01
dimiternjam, omw10:01
voidspacejam: omw10:01
voidspacejam: thanks10:01
jamvoidspace: ap-southeast-210:01
dimiternTheMue, voidspace, a quick review of the LXC/KVM bridge fixes for MAAS backported to 1.21? http://reviews.vapour.ws/r/362/diff/10:44
TheMue*click*10:45
TheMuedimitern: looks well known from the latest change. any special differences because it's a backport where I should take an extra look?10:49
wallyworldappreciate a review of a critical 1.21 fix if some poor soul wants to share the pain http://reviews.vapour.ws/r/363/10:49
dimiternTheMue, no new changes, in fact some code was not needed for 1.2110:50
dimiternwallyworld, looking10:50
wallyworldty :-)10:50
TheMuedimitern: ok, but it also does no harm.10:50
wallyworldyou'll be sorry you offered10:50
wallyworldsimplestreams10:50
dimiternhehe10:51
dimiternwallyworld, btw the diff on RB is somehow screwed10:53
wallyworldoh10:54
wallyworldlet me look10:54
dimiternwallyworld, I got hit by this just minutes ago, because proposing PRs not against master, but 1.21 or 1.20, makes the RB automation pick the wrong parent/branch/tracking-branch apparently10:54
dimiternericsnow, ^^10:55
wallyworlddimitern: diff looks ok10:55
wallyworldi proposed against master, will backport10:55
wallyworldthe "errors" are due to files deleted10:55
wallyworldif the whole file is deleted, it gets confused10:55
dimiternwallyworld, if it was deleted it will display it ok, but I got the same error and traceback for existing files10:56
wallyworldhmmm, i just hit View Diff and it looks fine for me10:56
wallyworldhttp://reviews.vapour.ws/r/363/diff/#10:57
wallyworldcould always review in the pr if needed10:58
dimiternwallyworld, hmm.. weird, now it shows less errors; anyhow, I'll go on with the review10:58
wallyworldty10:59
jamvoidspace: I'm back if you want to chat in juju-sapphire11:01
dimiternwallyworld, reviewed11:06
wallyworlddimitern: awesome, ty11:07
dimiternTheMue, voidspace, ?11:07
wallyworlddimitern: curious - is that formatting suggestion part of our guidelines, or more idiomatic go? i've tended to always use the style as written in the pr personally. i'll make the change, was just curious11:09
voidspacejam: omw - sorry11:10
TheMuecoming, wife needed me11:11
voidspaceTheMue: the guide I'm following for maas11:12
voidspaceTheMue: https://insights.ubuntu.com/2013/11/15/interested-in-maas-and-juju-heres-how-to-try-it-in-a-vm/11:12
TheMuevoidspace: thx11:12
TheMuevoidspace: ah, a nice one11:13
dimiternwallyworld, it helps me personally to read the code better11:16
wallyworldnp11:16
wallyworldsadly you'll not like most of my other code :-)11:16
dimiternwallyworld, i'll take what i can get :)11:17
wallyworldso will I :-)11:17
dimiternTheMue, last backport - for 1.20, if you can have a look? https://github.com/juju/juju/pull/1062 (RB is not configured for 1.20 apparently, so if you can review the PR please)13:52
TheMuedimitern: just opened it a sec ago :)13:53
dimiternTheMue, thanks! there are a few more changes, but not to the logic - just 1.20 stuff13:53
TheMuedimitern: currently RB failed :( hmm, trying again13:53
dimiternTheMue, automatic RB integration for 1.21 and 1.20 is broken13:54
TheMuedimitern: argh, now I've seen you already mentioned it. ok13:55
dimiternTheMue, :) np13:55
* dimitern needs to step out for 1/2h, bbl13:55
TheMuedimitern: done13:58
mattywdimitern, can I ask you a question about charm urls and charm upgrades? specifically if you're on say ubuntu-0 and you upgrade to ubuntu-1. Will ubuntu-0 still exist in state?14:19
voidspaceyay, internet back14:36
voidspaceI was just about to give up - I'd got as far as maas running with a node that failed to commission due to lack of internet14:36
TheMuevoidspace: running your local maas?14:43
voidspaceTheMue: yep14:43
voidspaceTheMue: the guide I linked to earlier "just worked" for me14:44
voidspaceTheMue: although now I need to set the power options for the node so MAAS can boot it14:44
TheMuevoidspace: yes, seems to be the best solution14:44
TheMuevoidspace: most control14:44
voidspaceTheMue: my node just went from "Commissioning" to "Ready"14:44
voidspacecommissioning failed due to lack of internet before14:44
voidspaceTheMue: the one thing I had to do was configure the cluster14:44
TheMuevoidspace: which maas are you running?14:45
voidspaceMy maas controller ip address is 172.16.0.2 - so I configured the "router" as 172.16.0.1 and the dynamic dhcp range 172.16.0.3-128 and the static range 129-25514:45
TheMuevoidspace: 1.7?14:45
voidspaceTheMue: yes - 1.7.0~rc114:45
voidspaceTheMue: from the devel ppa14:46
TheMuevoidspace: good, will then spend some disk space for it ;)14:46
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== katco` is now known as katco
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
dimiternmattyw, sorry, I'm back now15:21
dimiternmattyw, I'll check the code, since it's been a while15:21
mattywdimitern, looking at the code it seems like they stay around15:22
mattywdimitern, but I'm not sure if that's by design or just luck15:22
dimiternmattyw, you mean older versions of charms post-upgrade?15:22
mattywdimitern, correct15:22
dimiternmattyw, it kinda depends whether they've come from the store or local repo15:22
voidspaceTheMue: bootstrapping juju to a MaaS node, pxe booted by MaaS15:25
voidspaceso far so good...15:25
TheMuevoidspace: great15:26
voidspaceTheMue: only took a month or so to get to this point...15:26
TheMuevoidspace: *lol*15:26
ericsnowdimitern: re: reviewboard and 1.20, I only see one request from you for a PR that github reported as being based on 1.20: PR #106215:28
ericsnowdimitern: is that the one where RB set the branch to master?15:28
ericsnowdimitern: (review request #365)15:29
voidspaceTheMue: http://pastebin.ubuntu.com/8853137/15:29
voidspacebootstrap completed15:29
dimiternericsnow, that's the one15:29
ericsnowdimitern: I was worried that the code was busted for the non-master branch case, but it should already be doing the right thing :(15:30
TheMuevoidspace: looks good15:30
dimiternericsnow, well, I had issues with 1.21 as well15:30
ericsnowdimitern: I'll look into it when I get a chance15:30
ericsnowdimitern: ah, okay15:31
ericsnowdimitern: thanks for letting me know15:31
dimiternericsnow, the diff generated by the RB hook was missing every file; i'll send you a link15:31
dimiternericsnow, np, thanks; here's the link for the 1.21 PR http://reviews.vapour.ws/r/362/ (if you look at the first diff you'll see what happened; I had to manually run the following to update the diff: rbt post -u -r 362 --parent 1.21 --tracking-branch origin/1.21 --branch 1.21)15:32
ericsnowdimitern: ewww15:33
dimiternericsnow, and I tried the same for the 1.20 branch, but rbt reported an error: Unable to find a Review Board server for this source code tree.15:33
dimitern(weird how it created #365 in the first place)15:34
ericsnowdimitern: it's because the .reviewboardrc file doesn't exist in the 1.20 branch15:34
dimiternericsnow, ah, I thought this might be the issue15:35
TheMuedimitern: review has btw been done15:35
ericsnowdimitern: it would probably make sense to backport that file15:35
dimiternTheMue, tyvm!15:35
TheMuedimitern: yw15:36
dimiternericsnow, yes, especially if we're going to support 1.20 for some time and backport fixes15:36
TheMuedimitern: has been simple as I know that change already15:36
ericsnowdimitern: I'll put up a PR15:36
dimiternericsnow, cheers15:38
dimiternTheMue, yeah, but I realized that, being in too much of a hurry, I mis-pasted a few lines of doc comments for the previous 1.21 PR, so I'll propose a trivial fix for that15:39
TheMuedimitern: ok, ping me then15:39
dimiternTheMue, sure, it shouldn't take long15:39
ericsnowdimitern: 1.21 has a related problem (the .reviewboardrc points to master)...I'll update that one too15:41
dimiternericsnow, hmm good catch! that's what probably caused the issue15:42
ericsnowdimitern: it should not affect the github automation though15:42
lazyPowerQuestion - And i'm fairly certain I already know the answer - in a manually provisioned environment, when a HOST ip changes  does the agent do anything to discover its addressing and transmit that back to the state server?15:46
dimiternericsnow, yeah, but now I get it why I had to specify --branch, --parent, and --tracking-branch to rbt explicitly15:47
ericsnowdimitern: ah, yep15:47
dimiternlazyPower, I don't believe we expect the API server IP to change (on the same machine, not like when adding another HA slave) in any type of environment, manual included15:58
dimiternlazyPower, but you might ask axw, as I might be missing something15:59
lazyPowerdimitern: thanks for the reply - I figured that was the case. We have a user in #juju that's got a manually provisioned environment and every machine IP changed out from under their deployment, and its causing headaches. I'm trying to come up with ways to triage this - i figure they can edit the jenv to address reachability from client => state server, however i'm not sure how you would repoint the agents @ the state server after an address change.16:01
fwereadelazyPower, dimitern: bit of a driveby, but there *is* code in existence that goes into every machine and hacks the state server addresses in the agent configs, that was originally written for the first attempts at backup/restore16:02
fwereadeericsnow might know what we do now, rogpeppe would better remember what we did then ^^16:03
dimiternTheMue, there it is - trivial https://github.com/juju/juju/pull/106316:03
TheMuedimitern: *click*16:04
ericsnowfwereade: I'd ask perrito666 (he has worked on restore)16:04
ericsnowlazyPower: ^^^16:04
dimiternfwereade, that's true, but I think we only do that initially, before an api connection is available?16:04
fwereadeperrito666, hey, are you around? I'm somewhere near you I think16:04
TheMuedimitern: hmm, thinking about rejecting it. no unit tests. *scnr*16:05
fwereadedimitern, yeah, I'm just wondering whether it's easily repurposable16:05
fwereadedimitern, I feel like it might be16:05
lazyPowerericsnow: thanks for the heads up - perrito666: ping me when you have a minute16:05
ericsnowfwereade, lazyPower: see http://reviews.vapour.ws/r/298/16:05
dimiternlazyPower, you could go into each machine, change the /var/lib/juju/agents/machine-#/agent.conf (api and state IPs), then kill jujud and let upstart restart it16:05
dimiternTheMue, *lol*16:05
fwereadedimitern, IIRC the first version was just juju ssh in a loop16:06
dimiternlazyPower, and the same for the unit agents conf16:06
dimiternfwereade, I wasn't aware we have that code16:06
ericsnowlazyPower: FYI, perrito666 is at ODS in Paris, so he *might* be busy for another couple hours16:07
lazyPowerThats a respectable reason to be occupied :)16:07
mgzand knowing ODS, 'busy' for many hours more after16:07
natefinchwwitzel3: standup?16:07
ericsnowlazyPower, mgz: I've mostly only seen him around after midnight (in Paris) :)16:08
wwitzel3natefinch: oops, omw16:09
wwitzel3fwereade: it is a bit verbose, but I was thinking --skip-remote-unit-check ? open to better ideas16:10
fwereadewwitzel3, it's a lot clearer than --no-remote-unit though :)16:11
wwitzel3fwereade: yes, I initially didn't consider how it read when they were both being used16:14
wwitzel3fwereade: it was very silly in that context16:14
marcoceppifwereade, tvansteenburgh and I were wondering about how the events are managed internally in juju, is there a queue or a way to query the agents to see if they're processing an event or have events queued for processing?16:26
rogpeppefwereade: yeah, it just did an ssh to each machine and hacked the agent config files directly with sed AFAIR16:29
lazyPowerdimitern, ericsnow - solid answers. thanks for the follow up. Another happy customer - http://askubuntu.com/questions/540209/ip-domainname-of-juju-master-or-slaves-changes16:39
dimiternlazyPower, sweet!16:41
fwereademarcoceppi, the answer is close to "no", but the "juju.worker.uniter.filter" logger is likely to be relevant to your interests16:45
marcoceppifwereade: I mean, even a method not of the ways of the api, like say some terrible horrible hacky method?16:46
fwereademarcoceppi, I forget the precise syntax but something like env-set logging-config "<root>=WARNING;juju.worker.uniter.filter=DEBUG" should be pretty close16:51
fwereademarcoceppi, we log ~all interesting events that come in at the debug level16:51
marcoceppifwereade: so this is in the pursuit of determining if an environment is idle, I can see how this would give us the stream of events being dispatched but not necessarily if the event is still occurring or if there are others waiting?16:52
marcoceppiwe're trying to find more efficient ways other than sshing to each machine and seeing if jujud is running a hook or not and if no units are running any hooks for X seconds the environment is idle16:54
marcoceppibut it's not the most reliable or fast method16:54
fwereademarcoceppi, ...and what I suggested will not help you re: relation conversations anyway16:54
marcoceppicool16:54
voidspaceright, I'm off to London17:06
voidspacesee you all tomorrow17:06
natefinchanyone else coming to the team meeting18:06
natefinch?18:06
natefinchkatco: just saw the thing about the Missouri judge overturning the same sex marriage ban.  That's awesome.19:31
=== kadams54 is now known as kadams54-away
menn0wallyworld_: I can reproduce bug 138275121:35
mupBug #1382751: non subordinate container scoped relations broken  <regression> <relations> <subordinate> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1382751>21:35
wallyworld_great, first step to fixing :-)21:35
menn0my guy feel it's related to the recent uniter refactoring21:37
menn0the uniter keeps dying saying "service is not a subordinate"21:37
menn0still digging though21:38
wallyworld_hmmm, maybe fwereade's fault :-)21:38
menn0s/guy feel/gut feel/21:38
wallyworld_lol21:38
menn0a guy feel is a totally different thing :)21:38
wallyworld_yes :-)21:38
wallyworld_menn0: what did you deploy to reproduce?21:40
menn0I used apache2 and a dummy charm I whipped up that establishes a container scoped relation to one of its interfaces21:40
wallyworld_rightio21:41
menn0to speed up testing I might create another dummy charm to replace apache221:41
natefinchwallyworld_: do we have a doc on how to write a provider?21:43
wallyworld_natefinch: william wrote one i think, some time back21:43
wallyworld_not sure where it got to21:43
wallyworld_it will be a little out of date since we don't need storage21:44
natefinchwallyworld_: any idea where that doc is?21:45
wallyworld_ummm, no :-(21:45
wallyworld_i know it was done, don't think i ever saw it21:45
natefinchwallyworld_: it's unfortunate that the provider interface is actually called "Environ" :/21:48
wallyworld_yes21:48
wallyworld_one of many things :-)21:48
natefinchok gotta run21:49
=== urulama is now known as urulama__
menn0wallyworld_: I can make this bug happen in 1.20 as well21:56
menn0:(21:56
wallyworld_menn0: some other stuff is being backported to 1.20 - i didn't think we'd need to with 1.21 almost out, but given other stuff is being backported, this fix should be too i think22:07
menn0man 1.19 sucked compared to 1.21...22:30
menn0wallyworld_: this bug exists in 1.19 as well22:31
menn0wallyworld_: in fact, looking at "git blame" I think it probably existed in 1.18 too22:32
wallyworld_menn0: so good it's being fixed :-)22:32
sinzuiwallyworld_, help22:33
sinzuiwallyworld_, I am testing beta1. I get an error generating devel streams http://pastebin.ubuntu.com/8857965/22:33
sinzuiwallyworld_, ^ I suspect it is because our policy is to delete the product file we are generating before calling generate-tools to ensure juju does checksums. This is also the only way we can retract a dangerous agent22:34
wallyworld_looking22:34
sinzuiyep, juju now requires the product file to exist, which prevents retraction and checksums22:36
wallyworld_sinzui: i am guessing that it would be looking in the existing index file, finding a stanza for devel, and then trying to load the product file22:36
wallyworld_you can't delete product files ad hoc22:36
wallyworld_because that means index file is then lying about what's there22:37
sinzuiThe only hack I can think of is to move the index2.json, generate fresh json, then parse both files to merge22:37
wallyworld_hmmm, i wasn't expecting both index and index2 to be generated in the same source tree22:38
wallyworld_but i guess that was a bad assumption22:38
sinzuiwallyworld_, we are not generating index.json. We have frozen it and we add it after we create.22:38
sinzuihmm22:38
sinzuiso maybe we need to reset all index.json first.22:39
* sinzui reset everything and tries again22:39
wallyworld_sinzui: i have to go run an important errand, will be back online in about 45 mins22:39
sinzuiwallyworld_, This is your plan, one tree. so we need to preserve everything, and remember that streams are generated by the version we are publishing...beta1 cannot be used to make 1.20.12 which we might release next week22:40
wallyworld_sinzui: i think so, but i can modify things to better suit your workflow22:42
sinzuiwallyworld_, I will try changing my scripts first22:42
sinzuiwallyworld_, I haven't reported a bug, so lets try to keep it that way22:42
wallyworld_the way it works is that you have a dir of tarballs, and run generate for each stream22:43
wallyworld_each time generate is run, it adds to the metadata22:43
wallyworld_so adds each new stream22:43
wallyworld_but, before it adds, it reads what's there22:43
wallyworld_so what's there needs to be valid22:43
* wallyworld_ runs away, back soon22:44
=== kadams54 is now known as kadams54-away
perrito666lazyPower: pong?23:28
perrito666like the most extremely belated pong23:29
rick_h_lol TZs a pita :)23:30
perrito666lazyPower: anyway, for whenever you return, I know your question: read the code from current master's cmd/plugins/juju-restore/restore.go, line 451 and on; they are doing what you need, you just need to call that for every client. if you want to call it from the server you need to ssh using identity file /var/lib/juju/system-identity from machine 023:37
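A minimal sketch, with a hypothetical helper name and an assumed ubuntu login, of the approach perrito666 describes: from machine 0, ssh to an agent machine with the system identity key and run a command there. It is not the restore.go code he points at.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runOnMachine runs a command on another agent machine over ssh, using
    // the private key that lives on the state server (machine 0).
    func runOnMachine(addr, command string) ([]byte, error) {
        args := []string{
            "-i", "/var/lib/juju/system-identity",
            "-o", "StrictHostKeyChecking=no",
            "ubuntu@" + addr,
            command,
        }
        return exec.Command("ssh", args...).CombinedOutput()
    }

    func main() {
        out, err := runOnMachine("10.0.0.5", "sudo cat /var/lib/juju/agents/machine-1/agent.conf")
        fmt.Println(string(out), err)
    }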
