[01:35] ericsnow: before I go to sleep and forget about this (it's like 2:30 AM here) please add a link to the patch to your last comment and an example of the Version output, I just made a change very similar to your suggestion to accommodate version and need to know how to tell if we are above/below version N and since I am in it I can review your patch for archive data so it gets merged, or alternatively have a separate branch
[01:35] with this fix for when you are merged
[01:35] peace out
[01:36] perrito666: k
[01:36] perrito666: sleep well
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[04:09] axw: hi, interrupt time. we've come up with a list of achievable bugs to burn down for 1.21. we want to get these fixed this week. bug 1388860 and bug 1389037 relate to ec2 availability zones. can i ask you to please take a look?
[04:09] Bug #1388860: ec2 says agent-state-info: 'cannot run instances: No default subnet for availability zone: ''us-east-1e''. (InvalidInput)'
[04:09] Bug #1389037: provider/ec2: try alternative AZ on InsufficientInstanceCapacity error
[04:09] the bugs against beta1 milestone are what we're going to fix
[04:10] wallyworld: no worries
[04:11] thank you
[04:14] wallyworld: if you have a chance could you look at this again? http://reviews.vapour.ws/r/355/diff/
[04:15] wallyworld: this takes into account the things we discussed
[04:15] wallyworld: for the most part, things worked out simpler and clearer
[04:21] menn0: sorry, was otp, looking
[04:22] menn0: also, what's the status of bug 1382751
[04:22] Bug #1382751: non subordinate container scoped relations broken
[04:22] we have this as needing to be fixed for 1.21
[04:22] and we'd like all fixes done by this week
[04:28] wallyworld: I haven't looked into it
[04:28] wallyworld: it was going to be my next thing
[04:29] wallyworld: i'm about to EOD but I can make it my top priority tomorrow
[04:29] menn0: ok, thanks, see how you go with it. we are aiming to get beta1 out end of week if possible
[04:29] sure, ty
[04:29] almost done review, looks good
[04:30] wallyworld: regarding the upgrades PR, I've just had an idea.
[04:30] shoot
[04:30] wallyworld: instead of having Steps() and StateSteps() on Operation
[04:30] wallyworld: what about defining 2 separate lists of operations
[04:30] wallyworld: one for state based steps and one for the rest
[04:31] wallyworld: that could simplify things further
[04:31] wallyworld: and also i can see the state based stuff eventually happening in a separate worker
[04:31] wallyworld: so having a separate list seems cleaner if that happens
[04:32] wallyworld: I guess it doesn't matter too much at this stage
[04:32] yeah, main goal from first review was separate lists and contexts
[04:32] i think what's there works, but if it can be improved....
[04:33] wallyworld: I'll think about it. I won't land the current PR today anyway (and maybe not tomorrow depending how sorting out this bug goes)
[04:33] ok, sounds good
[04:36] wallyworld: thanks for the reviews
[04:36] wallyworld: i'm off now (local Python meetup)
[04:36] menn0: thanks, have fun, you got a +1
[05:20] anastasiamac: ping
[05:29] jam: i talked to anastasiamac about the numactl stuff - the solution is being revised, so what's there is obsolete
[05:30] sure
[05:30] jam,wallyworld: thank u for ur time! u r amazing :-0
[05:32] jam: tl;dr; original intent was to generate the same script each time, and the script would use bash as per the bug to detect numa, no need for core to detect numactl. but the mongo doc seems to suggest that numactl can be used to start mongo all the time, even when not needed. so elmo will be asked to see if he's ok with that
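A minimal sketch of what that 05:32 plan amounts to - start mongod under numactl whenever the binary is present, rather than teaching core to detect NUMA itself. The package install, mongod path, and flags here are placeholders, not juju's actual generated script:

```bash
# sketch only: not juju's generated startup script; paths/flags are assumed
apt-get install -y numactl   # the "trivial extra package" mentioned just below

MONGOD="/usr/bin/mongod --config /etc/mongodb.conf"
if command -v numactl >/dev/null 2>&1; then
    # interleave allocations across all nodes, per the MongoDB production notes
    exec numactl --interleave=all $MONGOD
else
    exec $MONGOD
fi
```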
[05:33] this will simplify things greatly
[05:33] wallyworld: yeah, I think we'd still want the "is numactl available" which is fine, but I feel like the bug is probably asking us to install it on numa architectures
[05:33] which might just mean "always install it, and always call mongo with it"
[05:34] jam: your last line is the current thinking
[05:34] trivial to install
[05:34] and if no ill effects on non numa arches, then no harm
[05:35] we already apt install mongo, just would be adding another small pkg to that
[06:46] axw: thanks for the fixes \o/
[06:47] wallyworld: nps, gotta back port now right?
[06:47] yeah
[06:47] both
[06:53] axw: i'm off to soccer, back later, you can just self approve those backports
[06:55] wallyworld: will do
=== urulama_ is now known as urulama
[07:28] dimitern: morning
[07:31] morning jam
[07:31] dimitern: I wanted to get your input on https://bugs.launchpad.net/juju-core/+bug/1388860
[07:31] Bug #1388860: ec2 says agent-state-info: 'cannot run instances: No default subnet for availability zone: ''us-east-1e''. (InvalidInput)'
[07:31] I sent an email earlier
[07:32] jam, yeah, I saw that and replied a few minutes ago
[07:32] dimitern: looks like axw is doing that already
[07:32] dimitern: http://reviews.vapour.ws/r/359/
[07:33] a fix has landed on master already, just skipping over them for now
[07:34] jam, axw, awesome!
[07:35] jam, I've managed to land the juju-br0 patch yesterday, but I still need to backport it for both 1.20 and 1.21
[07:36] jam, and then I'll have a look at the "multiple networks" bug for maas
[07:36] axw: will you be porting your fix to 1.21 ?
[07:36] dimitern: nice work on maas, glad to see that network stuff working nicely
[07:36] jam: yes, doing that right now
[07:36] axw, cheers, yeah it's getting better
[07:38] axw: I'm slightly worried about capitalization being an issue, that sort of thing really looks fragile
[07:38] otherwise LGTM for 1.21
[07:41] jam: I'm starting to wonder whether we should just try on all AZs regardless of the error
[07:41] it's proven to be a little fragile in general
[07:41] axw: as in, if we get an error just go to the next one ?
[07:42] yep
[07:42] perhaps we could pick certain errors to bail on, rather than the other way around
[07:47] wallyworld: anastasiamac: still around ?
[07:47] jam: wallyworld @ soccer, m about to go. why?
[07:48] anastasiamac: I was thinking about the numactl stuff a bit more, and I think we need a flag
[07:48] doing always everywhere is a pretty big cahnge
[07:48] change
[07:48] and we can easily be doing it wrong
[07:48] with a flag, we can at least disable/enable it
[07:48] I think I'd be happiest with "use it if it's there and not disabled"
[07:49] and maybe the first one is to not install the package by default
[07:49] so people can go install it if they want it
[07:50] jam: from my understanding of flags, they reflect a preference
[07:50] the cmd numactl wrap is not - it's kind of needed based on hardware...
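The "needed based on hardware" test is cheap to do from a shell script, which is what the 05:32 plan above relies on. A minimal sketch, assuming the usual Linux sysfs layout (one directory per memory node):

```bash
# a second node directory existing means real multi-node NUMA hardware
if [ -d /sys/devices/system/node/node1 ]; then
    echo "multi-node NUMA hardware: numactl wrapping is worth doing"
else
    echo "single memory node: the numactl wrap changes nothing"
fi
```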
[07:51] anastasiamac: "based on hardware" at the very least
[07:51] jam: maybe completely off but most of this "feeling" is based on
[07:51] jam: http://docs.mongodb.org/manual/administration/production-notes/#production-numa
[07:52] anastasiamac: yeah, I'm on that page as well
[07:52] jam: m happy not to run all the time
[07:53] jam: my preference would be to run when needed: on NUMA
[07:53] jam: irrespective of # of nodes
[07:53] anastasiamac: so I certainly want us to DTRT according to our best understanding of what's going on
[07:53] but if that is 100%, then we want to have a way for a user to tell us "you're doing it wrong"
[07:53] s/is/is not 100%/
=== liam_ is now known as Guest77235
[07:57] jam: :-) under what circumstance, if on NUMA, a user would not want to run with numactl wrap?
[07:57] jam:
[07:58] anastasiamac: reading here: http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
[07:58] actually every juju installation
[07:58] the reason you have numactl
[07:58] is for cases where Mongo is consuming >50% of the memory
[07:58] but *juju* mongodb doesn't get that big
[07:58] the machine isn't "a dedicated mongodb machine"
[07:58] jam: have to go to c kids, back soon-ish
[07:59] anastasiamac: later
[08:15] morning everyone
[08:23] jam: no reason
[08:23] jam: was not looking
[08:23] :)
[08:24] so digging into it, the reason Mongo wants the numa stuff is because it thinks it is going to use more memory than one node has access to, and there is a swapping bug on Linux associated with that case.
[08:24] but if Mongo is using < 1 node's worth of RAM, it will actually perform *better* if you don't use numactl --interleave=all
[08:24] since it won't be trying to fetch from non-local memory.
[08:25] jam: wow :-)
[08:25] jam: so either my bug is not a bug
[08:26] jam: or we just make it an option (flag) and let user set it (don't wrap by default)?
[08:27] jam: bug 1350337
[08:27] Bug #1350337: Juju DB should use numactl when running mongo on multi-socket nodes
[08:27] yeah, I'm responding there
[08:27] jam: excellent! thnx :-)
[08:29] anastasiamac: can you dig into whether we have a way to get mongodb to be happy without numactl ?
[08:29] Like are they directly touching libnuma ?
[08:32] jam: k... in couple of hrs :-)
[08:35] anastasiamac: https://github.com/mongodb/mongo/blob/master/src/mongo/db/startup_warnings_mongod.cpp
[08:35] it just does a "does /sys/devices/system/node/node1 exist" check
[08:36] and then tries to read /proc/self/numa_maps
[08:36] and looks for a line that isn't "interleave"
[08:43] axw: if you're around, does https://bugs.launchpad.net/juju-core/+bug/1388860 need to do anything on 1.20? Or did the round-robin behavior only land in 1.21 ?
[08:43] Bug #1388860: ec2 says agent-state-info: 'cannot run instances: No default subnet for availability zone: ''us-east-1e''. (InvalidInput)'
[08:44] jam: can't remember, I'll check
[08:44] axw: thanks
[08:44] jam: yeah we'll need to do it for 1.20 too
[08:45] axw: it may be that 1.20 won't have any default-vpc support, but retrying on round-robin seems good to have there
[08:46] jam: ah, because it's using an older version of the EC2 API? could be
[08:46] agreed
[08:46] axw: right, it is probably just using ec2-classic
[08:46] dimitern: do you know if VPC (default-vpc) support landed in 1.20 ?
[08:48] axw: I just got an EC2 instance on "54.171.140.0"
[08:48] I didn't think you could use .0
[08:48] heh, it works
[08:48] jam: it'll work if the netmask isn't a /24
[08:51] jam: I'll just test with 1.20 to make sure. I didn't think we were making any calls yet that triggered the VPC support, but I've probably missed something
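For reference, the check jam describes at 08:35-08:36 from mongod's startup_warnings_mongod.cpp can be approximated in shell. Note that mongod inspects its own /proc/self/numa_maps; to inspect a running server you would read /proc/<mongod-pid>/numa_maps instead:

```bash
# rough equivalent of mongod's startup warning: NUMA hardware present but
# this process's memory policy is not interleaved
if [ -d /sys/devices/system/node/node1 ]; then
    policy=$(head -n1 /proc/self/numa_maps | awk '{print $2}')
    case "$policy" in
        interleave*) echo "memory policy is interleaved: no warning" ;;
        *)           echo "NUMA hardware without --interleave=all: mongod would warn" ;;
    esac
fi
```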
[08:52] jam, the support is there, but unused
[08:54] axw, setting a SubnetID in the RunInstances params struct will trigger the VPC-aware API version to be run, otherwise we use (i.e. EC2 emulates) the classic mode
[08:55] right and we're not setting SubnetID
[08:55] so I should expect this to fail
[08:56] axw, what error are you getting?
[08:59] dimitern: I'm checking whether or not "No default subnet for availability zone" occurs in 1.20 too
[08:59] (it does)
[09:05] morning
[09:25] morning all
[09:59] dimitern: what's the region with a default vpc?
[10:01] dimitern: voidspace: standup ?
[10:01] voidspace: southeast ap 2 IIRC
[10:01] jam, omw
[10:01] jam: omw
[10:01] jam: thanks
[10:01] voidspace: ap-southeast-2
[10:44] TheMue, voidspace, a quick review of the LXC/KVM bridge fixes for MAAS backported to 1.21? http://reviews.vapour.ws/r/362/diff/
[10:45] *click*
[10:49] dimitern: looks well known from the latest change. any special differences because it's a backport where I should take an extra look?
[10:49] appreciate a review of a critical 1.21 fix if some poor soul wants to share the pain http://reviews.vapour.ws/r/363/
[10:50] TheMue, no new changes, in fact some code was not needed for 1.21
[10:50] wallyworld, looking
[10:50] ty :-)
[10:50] dimitern: ok, but it also does no harm.
[10:50] you'll be sorry you offered
[10:50] simplestreams
[10:51] hehe
[10:53] wallyworld, btw the diff on RB is somehow screwed
[10:54] oh
[10:54] let me look
[10:54] wallyworld, I got hit by this just minutes ago, because proposing PRs not against master, but 1.21 or 1.20, makes the RB automation pick the wrong parent/branch/tracking-branch apparently
[10:55] ericsnow, ^^
[10:55] dimitern: diff looks ok
[10:55] i proposed against master, will backport
[10:55] the "errors" are due to files deleted
[10:56] if the whole file is deleted, it gets confused
[10:56] wallyworld, if it was deleted it will display it ok, but I got the same error and traceback for existing files
[10:57] hmmm, i just hit View Diff and it looks fine for me
[10:57] http://reviews.vapour.ws/r/363/diff/#
[10:58] could always review in the pr if needed
[10:58] wallyworld, hmm.. weird, now it shows fewer errors; anyhow, I'll go on with the review
[10:59] ty
[11:01] voidspace: I'm back if you want to chat in juju-sapphire
[11:06] wallyworld, reviewed
[11:07] dimitern: awesome, ty
[11:07] TheMue, voidspace, ?
[11:09] dimitern: curious - is that formatting suggestion part of our guidelines, or more idiomatic go? i've tended to always use the style as written in the pr personally. i'll make the change, was just curious
[11:10] jam: omw - sorry
[11:11] coming, wife needed me
[11:12] TheMue: the guide I'm following for maas
[11:12] TheMue: https://insights.ubuntu.com/2013/11/15/interested-in-maas-and-juju-heres-how-to-try-it-in-a-vm/
[11:12] voidspace: thx
[11:13] voidspace: ah, a nice one
[11:16] wallyworld, it helps me personally to read the code better
[11:16] np
[11:16] sadly you'll not like most of my other code :-)
[11:17] wallyworld, i'll take what i can get :)
[11:17] so will I :-)
[13:52] TheMue, last backport - for 1.20, if you can have a look? https://github.com/juju/juju/pull/1062 (RB is not configured for 1.20 apparently, so if you can review the PR please)
[13:53] dimitern: just opened it a sec ago :)
[13:53] TheMue, thanks! there are a few more changes, but not to the logic - just 1.20 stuff
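The manual rbt invocation dimitern posts further down (15:32) shows the shape of the workaround for 1.21; a hypothetical equivalent for this 1.20 backport might look like the following. The review id (365) and the server URL are taken from later in the log, and --server is only needed while the branch lacks a .reviewboardrc:

```bash
# hypothetical: update review request 365 for a PR proposed against 1.20
rbt post -u -r 365 \
    --server http://reviews.vapour.ws/ \
    --parent 1.20 \
    --tracking-branch origin/1.20 \
    --branch 1.20
```

Backporting the .reviewboardrc file to the release branches, as ericsnow suggests later on, makes most of these flags unnecessary.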
[13:53] dimitern: currently RB failed :( hmm, trying again
[13:54] TheMue, automatic RB integration for 1.21 and 1.20 is broken
[13:55] dimitern: argh, now I've seen you already mentioned it. ok
[13:55] TheMue, :) np
[13:55] * dimitern needs to step out for 1/2h, bbl
[13:58] dimitern: done
[14:19] dimitern, can I ask you a question about charm urls and charm upgrades? specifically if you're on say ubuntu-0 and you upgrade to ubuntu-1. Will ubuntu-0 still exist in state?
[14:36] yay, internet back
[14:36] I was just about to give up - I'd got as far as maas running with a node that failed to commission due to lack of internet
[14:43] voidspace: running your local maas?
[14:43] TheMue: yep
[14:44] TheMue: the guide I linked to earlier "just worked" for me
[14:44] TheMue: although now I need to set the power options for the node so MAAS can boot it
[14:44] voidspace: yes, seems to be the best solution
[14:44] voidspace: most control
[14:44] TheMue: my node just went from "Commissioning" to "Ready"
[14:44] commissioning failed due to lack of internet before
[14:44] TheMue: the one thing I had to do was configure the cluster
[14:45] voidspace: which maas are you running?
[14:45] My maas controller ip address is 172.16.0.2 - so I configured the "router" as 172.16.0.1 and the dynamic dhcp range 172.16.0.3-128 and the static range 129-255
[14:45] voidspace: 1.7?
[14:45] TheMue: yes - 1.7.0~rc1
[14:46] TheMue: from the devel ppa
[14:46] voidspace: good, will then spend some disk space for it ;)
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== katco` is now known as katco
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[15:21] mattyw, sorry, I'm back now
[15:21] mattyw, I'll check the code, since it's been a while
[15:22] dimitern, looking at the code it seems like they stay around
[15:22] dimitern, but I'm not sure if that's by design or just luck
[15:22] mattyw, you mean older versions of charms post-upgrade?
[15:22] dimitern, correct
[15:22] mattyw, it kinda depends whether they've come from the store or local repo
[15:25] TheMue: bootstrapping juju to a MaaS node, pxe booted by MaaS
[15:25] so far so good...
[15:26] voidspace: great
[15:26] TheMue: only took a month or so to get to this point...
[15:26] voidspace: *lol*
[15:28] dimitern: re: reviewboard and 1.20, I only see one request from you for a PR that github reported as being based on 1.20: PR #1062
[15:28] dimitern: is that the one where RB set the branch to master?
[15:29] dimitern: (review request #365)
[15:29] TheMue: http://pastebin.ubuntu.com/8853137/
[15:29] bootstrap completed
[15:29] ericsnow, that's the one
[15:30] dimitern: I was worried that the code was busted for the non-master branch case, but it should already be doing the right thing :(
[15:30] voidspace: looks good
[15:30] ericsnow, well, I had issues with 1.21 as well
[15:30] dimitern: I'll look into it when I get a chance
[15:31] dimitern: ah, okay
[15:31] dimitern: thanks for letting me know
[15:31] ericsnow, the diff generated by the RB hook was missing every file; i'll send you a link
[15:32] ericsnow, np, thanks; there's the link for the 1.21 PR http://reviews.vapour.ws/r/362/ (if you look at the first diff you'll see what happened; I had to manually run the following to update the diff: rbt post -u -r 362 --parent 1.21 --tracking-branch origin/1.21 --branch 1.21)
[15:33] dimitern: ewww
[15:33] ericsnow, and I tried the same for the 1.20 branch, but rbt reported an error: Unable to find a Review Board server for this source code tree.
[15:34] (weird how it created #365 in the first place)
[15:34] dimitern: it's because the .reviewboardrc file doesn't exist in the 1.20 branch
[15:35] ericsnow, ah, I thought this might be the issue
[15:35] dimitern: review has btw been done
[15:35] dimitern: it would probably make sense to backport that file
[15:35] TheMue, tyvm!
[15:36] dimitern: yw
[15:36] ericsnow, yes, especially if we're going to support 1.20 for some time and backport fixes
[15:36] dimitern: has been simple as I know that change already
[15:36] dimitern: I'll put up a PR
[15:38] ericsnow, cheers
[15:39] TheMue, yeah, but I realized that, being in too much of a hurry, I mis-pasted a few lines of doc comments for the previous 1.21 PR, so I'll propose a trivial fix for that
[15:39] dimitern: ok, ping me then
[15:39] TheMue, sure, it shouldn't take long
[15:41] dimitern: 1.21 has a related problem (the .reviewboardrc points to master)...I'll update that one too
[15:42] ericsnow, hmm good catch! that's what probably caused the issue
[15:42] dimitern: it should not affect the github automation though
[15:46] Question - And i'm fairly certain I already know the answer - in a manually provisioned environment, when a HOST ip changes does the agent do anything to discover its addressing and transmit that back to the state server?
[15:47] ericsnow, yeah, but now I get why I had to specify --branch, --parent, and --tracking-branch to rbt explicitly
[15:47] dimitern: ah, yep
[15:58] lazyPower, I don't believe we expect the API server IP to change (on the same machine, not like when adding another HA slave) in any type of environment, manual included
[15:59] lazyPower, but you might ask axw, as I might be missing something
[16:01] dimitern: thanks for the reply - I figured that was the case. We have a user in #juju that's got a manually provisioned environment and every machine IP changed out from under their deployment, and it's causing headaches. I'm trying to come up with ways to triage this - i figure they can edit the jenv to address reachability from client => state server, however i'm not sure how you would repoint the agents @ the state server after an address change.
[16:02] lazyPower, dimitern: bit of a driveby, but there *is* code in existence that goes into every machine and hacks the state server addresses in the agent configs, that was originally written for the first attempts at backup/restore
[16:03] ericsnow might know what we do now, rogpeppe would better remember what we did then ^^
[16:03] TheMue, there it is - trivial https://github.com/juju/juju/pull/1063
[16:04] dimitern: *click*
[16:04] fwereade: I'd ask perrito666 (he has worked on restore)
[16:04] lazyPower: ^^^
[16:04] fwereade, that's true, but I think we only do that initially, before an api connection is available?
[16:04] perrito666, hey, are you around? I'm somewhere near you I think
[16:05] dimitern: hmm, thinking about rejecting it. no unit tests. *scnr*
[16:05] dimitern, yeah, I'm just wondering whether it's easily repurposable
[16:05] dimitern, I feel like it might be
[16:05] ericsnow: thanks for the heads up - perrito666: ping me when you have a minute
[16:05] fwereade, lazyPower: see http://reviews.vapour.ws/r/298/
[16:05] lazyPower, you could go into each machine, change the /var/lib/juju/agents/machine-#/agent.conf (api and state IPs), then kill jujud and let upstart restart it
[16:05] TheMue, *lol*
[16:06] dimitern, IIRC the first version was just juju ssh in a loop
[16:06] lazyPower, and the same for the unit agents conf
[16:07] fwereade, I wasn't aware we have that code
[16:07] lazyPower: FYI, perrito666 is at ODS in Paris, so he *might* be busy for another couple hours
[16:07] That's a respectable reason to be occupied :)
[16:07] and knowing ODS, 'busy' for many hours more after
[16:07] wwitzel3: standup?
[16:08] lazyPower, mgz: I've mostly only seen him around after midnight (in Paris) :)
[16:09] natefinch: oops, omw
[16:10] fwereade: it is a bit verbose, but I was thinking --skip-remote-unit-check ? open to better ideas
[16:11] wwitzel3, it's a lot clearer than --no-remote-unit though :)
[16:14] fwereade: yes, I initially didn't consider how it read when they were both being used
[16:14] fwereade: it was very silly in that context
[16:26] fwereade, tvansteenburgh and I were wondering about how the events are managed internally in juju, is there a queue or a way to query the agents to see if they're processing an event or have events queued for processing?
[16:29] fwereade: yeah, it just did an ssh to each machine and hacked the agent config files directly with sed AFAIR
[16:39] dimitern, ericsnow - solid answers. thanks for the follow up. Another happy customer - http://askubuntu.com/questions/540209/ip-domainname-of-juju-master-or-slaves-changes
[16:41] lazyPower, sweet!
[16:45] marcoceppi, the answer is close to "no", but the "juju.worker.uniter.filter" logger is likely to be relevant to your interests
[16:46] fwereade: I mean, even a method not of the ways of the api, like say some terrible horrible hacky method?
[16:51] marcoceppi, I forget the precise syntax but something like env-set logging-config "=WARNING;juju.worker.uniter.filter=DEBUG" should be pretty close
[16:51] marcoceppi, we log ~all interesting events that come in at the debug level
[16:52] fwereade: so this is in the pursuit of determining if an environment is idle, I can see how this would give us the stream of events being dispatched but not necessarily if the event is still occurring or if there are others waiting?
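A concrete form of fwereade's 16:51 suggestion, plus a blunt version of the per-machine hook check marcoceppi describes next. The set-env spelling, the <root> syntax, and the hook path pattern are recollections/assumptions about juju 1.x rather than verified commands:

```bash
# quiet everything except the uniter's event filter, then watch the log
juju set-env 'logging-config=<root>=WARNING;juju.worker.uniter.filter=DEBUG'
juju debug-log

# crude "is anything running a hook right now" probe: look on every machine
# for a process executing out of a charm's hooks directory
juju run --all 'pgrep -f "agents/unit-.*/charm/hooks/" || echo idle'
```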
[16:54] we're trying to find more efficient ways other than sshing to each machine and seeing if jujud is running a hook or not and if no units are running any hooks for X seconds the environment is idle
[16:54] but it's not the most reliable or fast method
[16:54] marcoceppi, ...and what I suggested will not help you re: relation conversations anyway
[16:54] cool
[17:06] right, I'm off to London
[17:06] see you all tomorrow
[18:06] anyone else coming to the team meeting
[18:06] ?
[19:31] katco: just saw the thing about the Missouri judge overturning the same sex marriage ban. That's awesome.
=== kadams54 is now known as kadams54-away
[21:35] wallyworld_: I can reproduce bug 1382751
[21:35] Bug #1382751: non subordinate container scoped relations broken
[21:35] great, first step to fixing :-)
[21:37] my guy feel it's related to the recent uniter refactoring
[21:37] the uniter keeps dying saying "service is not a subordinate"
[21:38] still digging though
[21:38] hmmm, maybe fwereade's fault :-)
[21:38] s/guy feel/gut feel/
[21:38] lol
[21:38] a guy feel is a totally different thing :)
[21:38] yes :-)
[21:40] menn0: what did you deploy to reproduce?
[21:40] I used apache2 and a dummy charm I whipped up that establishes a container scoped relation to one of its interfaces
[21:41] rightio
[21:41] to speed up testing I might create another dummy charm to replace apache2
[21:43] wallyworld_: do we have a doc on how to write a provider?
[21:43] natefinch: william wrote one i think, some time back
[21:43] not sure where it got to
[21:44] it will be a little out of date since we don't need storage
[21:45] wallyworld_: any idea where that doc is?
[21:45] ummm, no :-(
[21:45] i know it was done, don't think i ever saw it
[21:48] wallyworld_: it's unfortunate that the provider interface is actually called "Environ" :/
[21:48] yes
[21:48] one of many things :-)
[21:49] ok gotta run
=== urulama is now known as urulama__
[21:56] wallyworld_: I can make this bug happen in 1.20 as well
[21:56] :(
[22:07] menn0: some other stuff is being backported to 1.20 - i didn't think we'd need to with 1.21 almost out, but given other stuff is being backported, this fix should be too i think
[22:30] man 1.19 sucked compared to 1.21...
[22:31] wallyworld_: this bug exists in 1.19 as well
[22:32] wallyworld_: in fact, looking at "git blame" I think it probably existed in 1.18 too
[22:32] menn0: so good it's being fixed :-)
[22:33] wallyworld_, help
[22:33] wallyworld_, I am testing beta1. I get an error generating devel streams http://pastebin.ubuntu.com/8857965/
[22:34] wallyworld_, ^ I suspect it is because our policy is to delete the product file we are generating before calling generate-tools to ensure juju does checksums. This is also the only way we can retract a dangerous agent
[22:34] looking
[22:36] yep, juju now requires the product file to exist, which prevents retraction and checksums
[22:36] sinzui: i am guessing that it would be looking in the existing index file, finding a stanza for devel, and then trying to load the product file
[22:36] you can't delete product files ad hoc
[22:37] because that means the index file is then lying about what's there
[22:37] The only hack I can think of is to move the index2.json, generate fresh json, then parse both files to merge
[22:38] hmmm, i wasn't expecting both index and index2 to be generated in the same source tree
[22:38] but i guess that was a bad assumption
[22:38] wallyworld_, we are now generating index.json. We have frozen it and we add it after we create.
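The flow wallyworld goes on to describe (22:42-22:44) boils down to one metadata tree with one generate run per stream, never hand-deleting product files the index still references. A sketch only - the --stream flag and tree location are assumptions about the 1.21-era juju-metadata plugin, not verified:

```bash
TOOLS_TREE=$HOME/streams/juju-dist    # assumed location of the tools/metadata tree
for stream in released proposed devel; do
    juju metadata generate-tools -d "$TOOLS_TREE" --stream "$stream"
done
```

Retracting a dangerous agent under that scheme would mean regenerating the affected stream without its tarball, rather than deleting the product file out from under the index.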
[22:38] hmm
[22:39] so maybe we need to reset all index.json first.
[22:39] * sinzui resets everything and tries again
[22:39] sinzui: i have to go run an important errand, will be back online in about 45 mins
[22:40] wallyworld_, This is your plan, one tree, so we need to preserve everything, and remember that streams are generated by the version we are publishing... beta1 cannot be used to make 1.20.12, which we might release next week
[22:42] sinzui: i think so, but i can modify things to better suit your workflow
[22:42] wallyworld_, I will try changing my scripts first
[22:42] wallyworld_, I haven't reported a bug, so let's try to keep it that way
[22:43] the way it works is that you have a dir of tarballs, and run generate for each stream
[22:43] each time generate is run, it adds to the metadata
[22:43] so adds each new stream
[22:43] but, before it adds, it reads what's there
[22:43] so what's there needs to be valid
[22:44] * wallyworld_ runs away, back soon
=== kadams54 is now known as kadams54-away
[23:28] lazyPower: pong?
[23:29] like the most extremely belated pong
[23:30] lol TZs a pita :)
[23:37] lazyPower: anyway, for whenever you return - I know your question. Read the code from current master's cmd/plugins/juju-restore/restore.go, line 451 and on; they are doing what you need, you just need to call that for every client. If you want to call it from the server you need to ssh using the identity file /var/lib/juju/system-identity from machine 0
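Putting that last answer together with dimitern's steps at 16:05-16:06, the manual repointing amounts to roughly the following - a sketch under stated assumptions (placeholder addresses, an assumed "ubuntu" login user, run as root on machine 0 since system-identity is root-only), not the restore.go code itself:

```bash
OLD_IP=10.0.0.5     # previous state-server address (placeholder)
NEW_IP=10.0.0.42    # new state-server address (placeholder)

for host in 10.0.0.10 10.0.0.11; do   # the other machines in the environment
    ssh -i /var/lib/juju/system-identity "ubuntu@$host" \
        "sudo sh -c 'sed -i s,$OLD_IP,$NEW_IP,g /var/lib/juju/agents/*/agent.conf'; sudo pkill jujud"
done
# pkill lets upstart respawn the machine and unit agents with the new address;
# the dots in the sed pattern act as regex wildcards, which is fine for a sketch
```

On the client side the corresponding .jenv needs the same address edit, as lazyPower notes at 16:01, so the juju CLI can reach the new state server.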