[00:02] Bug #1519128 opened: lxd provider fails to cleanup on failed bootstrap [00:21] thumper: mwhudson go 1.5.2 release status https://groups.google.com/d/msg/golang-dev/JcZNxZgRR04/yjDXSO_RAQAJ [00:22] thumper: why does fslock have an IsLocked method ? [00:22] raaaaaaaaaaaaaaaaaaacy [00:23] davechen1y: very very close then [00:23] pyup [00:24] i just saw the ppc32 rel cl land [00:24] looks like austin has got another horrid gc bug [00:24] but i don't think it'll hold up the release [00:24] i asked russ last week if there was value in doing a release candidate for the rc [00:25] but he reminded me that they can always relase go 1.5.3, so there is no value in waiting to get 1.5.2 perfect [00:26] Bug #1519133 opened: cmd/jujud/agent: data race [00:26] Found 7 data race(s) [00:26] FAIL github.com/juju/juju/provider/ec2 531.204s [00:32] thumper: mgz, I should have a PR ready to disable the -race build on the remaining failing packages [00:33] would it be pssible to kick off another race job as soon as I check that in, or should I wait for the overnight run ? [00:56] Bug # opened: 1519141, 1519144, 1519145, 1519147 [00:59] Bug #1519149 opened: worker/uniter/remotestate: data race [01:02] axw: thumper menn0 https://github.com/juju/juju/pull/3806 [01:05] davechen1y: LGTM [01:05] Bug #1519149 changed: worker/uniter/remotestate: data race [01:08] Bug #1519149 opened: worker/uniter/remotestate: data race [01:11] ^ eventual consistency eh mup ? [01:17] Bug #1519149 changed: worker/uniter/remotestate: data race [01:26] Bug #1519149 opened: worker/uniter/remotestate: data race [01:41] oh shit [01:42] ? [01:42] wallyworld: is there any way to kick off http://reports.vapour.ws/releases/3346/job/run-unit-tests-race/attempt/597 [01:42] this job immediately ? [01:43] davechen1y: not sure, i'll see if i can [01:45] thanks [01:45] i'd like to get started on restoring the tests i just commented out [01:46] davechen1y: it needs a revision_build param, i'm not sure if that is a git sha or something else [01:46] i can look at previous logs [01:46] unless sinzui is still around? [01:47] davechen1y: to re-run that same attempt? [01:49] sinzui: no, a new attempt please, at 7652514bc7127bf9e1c283479b32733933708da7 [01:50] davechen1y: which branch is that. That will trigger a full round of tests [01:50] oh [01:51] davechen1y: CI got clobbered again. YES I will make CI pickup master tip now [01:52] ericsnow: I forced the ubuntu alias to point to the wily lxd image. We have a pass. I will update the bug with what I did to make lxd and Juju happy http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/job/lxd-deploy-wily-amd64/51/console [01:54] sinzui: yes, that's on master now [01:54] thanks [01:55] davechen1y: your job will start in about 7 minutes. [01:55] I'm finished being mean to anastasiamac, your turn now [01:56] wallyworld: ^^ [01:56] axw: \o/ [01:56] oh, alright [01:56] axw: ur patience is phenominal and greatly appreciated!!! [02:00] sinzui: tahnks [02:02] axw: wallyworld: m hitting "land-me-quickly",... all bruised and battered \o/ [02:02] master is blocked [02:03] just joking!! :-D [02:03] wallyworld: it's k. m on feature branch :D [02:03] wallyworld: which somehow feels even more painful \o/ [02:18] axw: got a second? [02:18] natefinch: heya, what's up? [02:18] axw: I was looking at bug 1517344, but not sure what the actual repro steps are. 
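The IsLocked question and the -race failures being triaged above are related: a query like IsLocked is a check whose answer can be stale the moment it returns, and any unsynchronised read of such a flag is exactly what the race detector reports. A minimal standalone sketch (not the real fslock code) of the check-then-act pattern that go run -race flags:

    // racecheck.go - a sketch of why an IsLocked-style query is racy.
    // This is illustrative only, not juju's fslock implementation.
    package main

    import (
        "fmt"
        "time"
    )

    type fakeLock struct {
        held bool // read and written without synchronisation: a data race
    }

    func (l *fakeLock) IsLocked() bool { return l.held }

    func main() {
        l := &fakeLock{}

        go func() {
            l.held = true // one goroutine takes the lock...
        }()

        // ...while another checks it. Even with locking around the flag,
        // the answer can change right after the check, so acting on
        // IsLocked is inherently time-of-check/time-of-use.
        if !l.IsLocked() {
            fmt.Println("looks unlocked, but may not stay that way")
        }
        time.Sleep(10 * time.Millisecond) // let the writer run before exit
    }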
[02:18] Bug #1517344: state: initially assigned units don't get storage attachments [02:19] * thumper pulls a sadface [02:19] investigating one bug, and found at least four others [02:19] natefinch: you'll need a charm that requires storage. just deploy it, and the storage doesn't get added anymore. I'll push my testing charm to LP [02:20] sinzui: if you see the race job pass, please make it voting :) [02:20] axw: I saw the storage docs mention cs:~axwalk/postgresql ... is that still a valid test case? [02:20] natefinch: I think so, but I haven't tried using it in a while [02:20] * natefinch tries [02:20] thumper: that job cannot be voting for 1.25 though. That will be tricky [02:21] sinzui: no, just master [02:21] sinzui: and the upcoming 1.26 [02:21] thumper: yeah, we don't have branch level voting. it will be tricky [02:21] natefinch: otherwise there's cs:~axwalk/trusty/storagetest [02:21] sinzui: oh? [02:21] axw: cool, thanks [02:21] * thumper is surprised [02:22] natefinch: just "juju deploy cs:~axwalk/trusty/storagetest" *should* give you a unit of that service with a single "filesystem" storage instance [02:22] how do you go about adding ci tests for new branches that don't work on old branches? [02:22] sinzui: ^^ [02:22] natefinch: I think if you deploy the service and then add the unit (separate step), the second unit will get storage [02:23] thumper: the test exit early with pass, I can do that for 1.25 and older, but we also not run the test. [02:23] sinzui: can I watch the progress of the job ? [02:23] davechen1y: yes [02:23] sinzui: that is probably fine [02:23] sinzui: could you tell me the link please [02:24] davechen1y: http://juju-ci.vapour.ws:8080/job/run-unit-tests-race/599/console [02:24] ahh, it's a jenkins job [02:24] that's what I needed to know [02:24] now I can solve my own question [02:24] thumper: Yeah, I thiknk so. We weren't learning anything from the 1.25 runs [02:24] poop [02:24] 404 [02:25] and an empty stack trace for good measure [02:25] davechen1y: you have to be logged in [02:25] how do I log in ? [02:25] i've never logged in [02:25] can the job be changed to public [02:25] i don't think itneeds to be private [02:32] lol "WARNING: config not found or invalid" wait... what? Which is it? [02:33] q [02:35] natefinch: try double negative [02:35] config not not found or not invalid == success! [02:41] I just ... how do you not know if it doesn't exist? Why does the code not differentiate? [02:43] also, juju thinks some of the random juju-* processes in my bin directory are juju plugins and tries to run them when I use tab complete and ends up panicking because that's not what they are [02:44] wallyworld: left some comments on your PR [02:44] s/processes/executables [02:44] axw: ty [02:44] ok github.com/juju/juju/api/uniter1532.421s [02:44] yay cloud [02:44] wow [02:45] davechen1y: the CI job takes >60s for the joyent tests, takes ~2s on my desktop :/ [02:45] yay cloud [02:46] the raison d'etre of false economies [02:48] if only we had some way to run a cloud on top of our own fast bare metal.... [02:48] (not that we don't need to run cloud specific tests, obv...) [02:50] natefinch: you mean like a laptop [02:50] with ubuntu installed on it ? [02:51] davechen1y: I know they're hard to find in this company, but maybe we could scrounge up a few [03:00] thumper: whelp. One more thing to add to my airing of grievances against YAML this year. [03:00] \o/ [03:05] YAML, the configuration format so flexible, no matter what the issue -- it's your fault. 
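The "config not found or invalid" warning complained about above collapses two distinct failure modes into one message. A hypothetical loadConfig (not the actual juju code) showing how the two cases can be told apart and reported separately:

    package config

    import (
        "encoding/json"
        "fmt"
        "io/ioutil"
        "os"
    )

    type Config struct {
        Endpoint string `json:"endpoint"`
    }

    // loadConfig distinguishes "the file does not exist" from "the file
    // exists but cannot be parsed", so callers never have to guess which
    // of the two happened.
    func loadConfig(path string) (*Config, error) {
        data, err := ioutil.ReadFile(path)
        if os.IsNotExist(err) {
            return nil, fmt.Errorf("config %q not found", path)
        }
        if err != nil {
            return nil, fmt.Errorf("reading config %q: %v", path, err)
        }
        var cfg Config
        if err := json.Unmarshal(data, &cfg); err != nil {
            return nil, fmt.Errorf("config %q is invalid: %v", path, err)
        }
        return &cfg, nil
    }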
[03:05] lol [03:07] davechen1y: https://bugs.launchpad.net/juju-core/+bug/1517632 [03:07] Bug #1517632: juju upgrade-juju after upload-tools fails [03:07] davechen1y: hangout to chat about it? [03:07] thumper: i'll me you in the 1:1 [03:07] k [03:09] axw: replied. since there's doubt, i can ask for clarification from rick etc [03:09] my favorite is still base 60 notation... so a value of 8080:22 is interpreted as 484822.0 but a value of 8080:61 is interpreted as a string "8080:61" [03:14] wallyworld: yes, please [03:14] ok [03:15] Bug #1519176 opened: apiserver/provisioner: tests unreliable under -race [03:18] Bug #1519176 changed: apiserver/provisioner: tests unreliable under -race [03:21] Bug #1519176 opened: apiserver/provisioner: tests unreliable under -race [03:27] Bug #1519176 changed: apiserver/provisioner: tests unreliable under -race [03:30] Bug #1519176 opened: apiserver/provisioner: tests unreliable under -race [03:42] FAILgithub.com/juju/juju/featuretests1411.997s [03:43] axw: i was wrong in my reply - deploy --series wily will cause subsequent add unit commands to use wily [03:43] wallyworld: with your branch? [03:43] wallyworld: or that's the expected outcome? [03:43] axw: with my branch and master [03:43] that's te current behavour [03:44] but we still need --force to get such a unit on a non wily machine each time [03:44] that's what i was trying to say [03:44] but got confused [03:44] wallyworld: ok, so can you just record the Force flag on the service then? [03:44] that's what i don't want to do [03:44] juju add-unit will still used wily without --force [03:45] but it will use a new wily machine [03:45] Bug #1519183 opened: featuretests: tests fail under -race because of crappy timing issues [03:45] --force is needed if we want to use a non wily clean/empty machne [03:45] wallyworld: non capisco [03:45] quick hangout? [03:45] ok [03:45] stndup [03:46] axw: am in standup hangout fwiw [04:03] Bug #1519189 opened: worker/leadership: FAIL: TrackerSuite.TestGainLeadership [04:08] davechen1y I see you've inherited bug 1517632. Congrats :) [04:08] Bug #1517632: juju upgrade-juju after upload-tools fails [04:08] davechen1y: could you put in an update before you EOD? [04:08] davechen1y: I'll need to follow up with bootstack in the morning [04:11] cherylj: i'm working on something else today [04:11] i have no update [04:11] feel free to unasign me [04:11] (thumper only gave me the bug 20 minutes ago) [04:11] Bug #20: Sort translatable packages by popcon popularity and nearness to completion [04:11] ha, funny mup [04:11] cherylj: short answer is nothing on this one so far [04:12] Bug #1519189 changed: worker/leadership: FAIL: TrackerSuite.TestGainLeadership [04:12] wallyworld: did you say you had an environment where you can reproduce bug 1517632? [04:12] Bug #1517632: juju upgrade-juju after upload-tools fails [04:13] cherylj: one sec [04:15] Bug #1519189 opened: worker/leadership: FAIL: TrackerSuite.TestGainLeadership [04:15] Bug #1519190 opened: worker/addresser: FAIL: worker_test.go:260: workerEnabledSuite.TestWorkerAcceptsBrokenRelease [04:15] Bug #1519191 opened: worker/addresser: FAIL: worker_test.go:260: workerEnabledSuite.TestWorkerAcceptsBrokenRelease [04:16] cherylj: i think i saw it once on a stock 1.25 deployment i ran to specifically test the issue but thay env is long gone [04:17] but in general, i was not able to repro again after that [04:17] but i am pretty sure wayne said he could repro on 1.25 [04:18] wallyworld: okay, thanks. 
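The base-60 complaint above is YAML 1.1's sexagesimal integer form: colon-separated digit groups, each group after the first in the range 0-59, are read as one base-60 number, so 8080:22 becomes 8080*60+22 = 484822, while 8080:61 does not match the pattern and is kept as a string. A standalone sketch of that arithmetic (no YAML library involved):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // sexagesimal interprets "8080:22" the way a YAML 1.1 scalar resolver
    // would: each group after the first must be in 0..59, and the whole
    // value is accumulated base 60. A group of 60 or more (e.g. "8080:61")
    // does not match, so a YAML parser leaves it as a string.
    func sexagesimal(s string) (int, bool) {
        total := 0
        for i, part := range strings.Split(s, ":") {
            n, err := strconv.Atoi(part)
            if err != nil || (i > 0 && (n < 0 || n > 59)) {
                return 0, false
            }
            total = total*60 + n
        }
        return total, true
    }

    func main() {
        for _, v := range []string{"8080:22", "8080:61"} {
            if n, ok := sexagesimal(v); ok {
                fmt.Printf("%s -> %d\n", v, n) // 8080:22 -> 484822
            } else {
                fmt.Printf("%s -> kept as a string\n", v)
            }
        }
    }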
[04:18] sorry :-( [04:19] maybe 1.25 is ok after all [04:19] and it nly affects 1.22 [04:25] wallyworld: forgot to ask: any issues with adding URL to RemoteService? [04:26] wallyworld: then it'll be possible to have a different service name locally than what's in the service directory [04:27] axw: i don't think so offhand. i havn't got that branch open but i would have thought wed be recording ServiceURL in the remote service data model. we are not? [04:27] wallyworld: nope, just name, endpoints, life, relation-count [04:28] axw: hmmm, i'm sure i meant to add it. sigh [04:28] wallyworld: I'll add it in my branch [04:28] ty [04:32] wallyworld: in another upgrade bug.... Have you ever seen this message" "upgrader.go:185 desired version is 1.24.7, but current version is 1.23.3 and agent is not a manager node" [04:32] cherylj: wow, no :-( [04:32] I think the state server doesn't think it's the state server [04:32] yeah, that's really sucky [04:33] think there's any chance of recovery around that? [04:33] Bug #1498968 changed: ERROR environment destruction failed: destroying storage: listing volumes: An internal error has occurred (InternalError) [04:33] cherylj: looks like something hasn't replicated [04:33] cherylj: we'd need to see juju status to see what version each agent is [04:34] to et an idea of where to start [04:34] I don't think any have upgraded. [04:36] cherylj: ah right ok. hmmm, do retrying the upgrade work? [04:37] wallyworld: no :( [04:37] we'd need logs etc then sadly :-( [04:37] and juju status [04:37] I have machine-0 log. Should I also ask for syslog? [04:38] would be good to see allmacines.log [04:38] or at least logs from all the HA state servers [04:38] and syslog for mongo won't hurt [04:38] here's the worrisome part, though. They went from 1.20 -> 1.23.3->1.24.7. And I know 1.23 had issues. [04:38] and possibly --debug from client [04:38] wondering if that had something to do with it [04:38] if they got off 1.23 that's a good sign [04:39] they didn't. It's the step to 1.24.7 that hit this problem [04:39] 1.23 tended to hang and not be able to be upgraded [04:39] Bug #1498968 opened: ERROR environment destruction failed: destroying storage: listing volumes: An internal error has occurred (InternalError) [04:39] well... [04:39] cherylj: the symptom i was aware of was that agents got dealocked, but this seems different [04:40] if you use juju ssh to log into an lxd machine, then go 'less /var/log/juju/machine-3.log' you might not get things in the right order [04:40] NFI why [04:40] but if you copy the file to the host, then look, the file is all good [04:40] wallyworld: yeah... [04:41] cherylj: it may well be a related problem and the scripts meno did may help, but because that error message i have not seen before, can't be sure [04:41] hence logs etc [04:41] okay, thanks wallyworld [04:41] thumper: lxd hates you. they put in that easter egg just for you and now you'v found it. well done [04:42] \o/ [04:42] * thumper goes to look for wine [04:45] Bug #1498968 changed: ERROR environment destruction failed: destroying storage: listing volumes: An internal error has occurred (InternalError) [04:46] sinzui: can you kick the job off again with 8b4e8b7d037c52c9a0df00d8227366033eea04d9 [04:46] i tried to do it myself but http://juju-ci.vapour.ws:8080/job/run-unit-tests-race/600/console [04:47] davechen1y: CI is still testing the last revision. 
Ci Will automaticlly start the next master or 1.25 revision [04:48] ok, thanks [04:49] davechen1y: I think the next round of testing will be in 15 minutes. master tip will be selected [04:55] sinzui: thanks, I think I have now excluded all the troublesome packages [04:56] davechen1y: you rock [05:17] davechen1y: I am off to sleep: Your job is running. http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/job/run-unit-tests-race/601/console [05:23] wallyworld: I thought you were going to undo all the state changes? aren't they all unnecessary for this branch, since the series is just encoded in the charm URL? [05:24] wallyworld: i.e. the only place we check series is when resolving the URL [05:24] axw: i left in the ones needed for unit placement [05:24] ok [05:24] sorry, but i did remove the clean policy ones [05:32] axw: ahh, I see the problem with storage at service creation time. Service.unitStorageOps tries to read the database to get the service constraints, but obviously they haven't been written yet. [05:35] natefinch: yep, that'll be part of it. assignToNewMachine isn't even calling that though? [05:36] axw: it's during unit creation that I see the difference, not assignment [05:37] natefinch: ok. so the unit's not getting storage associated, and then of course when you go to assign it's got no storage so doesn't create attachments.. makes sense [05:37] correct [05:38] easy enough to do what I did with the rest of the stuff, which is factor out how the constraints are discovered and just pass them in [05:41] yay, unused variable compiler error saves me again [05:43] and fixed. [05:44] natefinch: excellent, thank you [05:44] wallyworld: you're going to be adding support for add-unit --series right? [05:45] wallyworld: or is it just add-unit --to --force [05:45] axw: not in very next branch. am wondering if i should just hold off on that [05:45] thinking about just "juju add-unit --to 1 --force" [05:45] but --to implies force i guess [05:45] as we discussed [05:45] wallyworld: right, so I'm wondering if propagating Force is required at all [05:46] wallyworld: if you specify --series, you'll need to record that on the unit doc [05:46] hmm... I'd prefer a force there.. to keep me from screwing myself if I typo or just forget that a machine is the wrong series [05:46] i am still vascilating on whether we want --force [05:46] or just an "are you sure?" or something [05:46] natefinch: --to already overrides everything else [05:47] as in, your constraints may be ignored if you use --to (e.g. in MAAS if you specify --to ) [05:47] yeah, but almost everything else is a nice to have for performance etc. ... 
series is like "this stuff may totally not function" [05:47] axw: actually, i forgot to mention - --to does not override series mismatch [05:47] currently [05:48] see assignToMachineOps [05:48] so i think that's what convices me we want --force [05:48] wallyworld: right, so the question is do we change --to from meaning "do what I say" to "do what I say (except if it's series, in which case I have to force you to do what I say)" [05:49] so changing semantics for a non 2.0 release would not be good [05:49] i think being conservative here is good and then we get feedback from feature buddies [05:50] that's IMHO [05:50] i can see both sides [05:50] hopefully it'll generally be a non-issue as charms become more multi-series compatible [05:50] you're changing semantics one way or the other [05:50] we aren't changing default --to behaviour [05:50] just adding a new option [05:51] wallyworld: no, but you are changing what --to means [05:51] really? [05:51] axw: --to can't override a charm's series now [05:51] due to an existing limitation, yes [05:51] i'm not sure we are changing what --to means. --to means "do not choose a machine for me or create one, use this one" [05:52] right. but why do I have to --force for series and not anything else? [05:52] partly because that's the current semantics, and also --to now overrides placement, not compatability per se [05:52] seems a bit arbitrary. I may have a charm that requires 64GiB RAM, but I've said to deploy to punydevice and it'll happily do that [05:53] overriding series is more of a compatability issue potentially [05:53] i can se the point about memory [05:53] you can't use --to to put cs:trusty/mysql on a vivid machine in 1.25, can you? [05:53] no [05:54] then having it also fail with series in metadata seems completely consistent [05:54] adding a flag that changes the behavior is certainly not breaking backwards compatibility of the CLI [05:54] that's my argument also [05:54] but i can also see the other side [05:55] but for me, the backwards compatability argument wins [05:55] --to may be inconsistent with respect to constraints, but that's the way it has been. I don't think changing it at this point is a good idea. [05:56] backwards compatibility matters when you're changing something that was possible that something that is not possible. we're doing the opposite. [05:56] anyway, past bedtime for me [05:56] * axw continues review anyway [05:56] natefinch: good night [05:57] wallyworld: what I'm getting at is: why would anyone care if they previously couldn't go "--to " and now they can? [05:57] people may have script that check errors etc [05:58] or expect errors [05:58] wallyworld: I cannot fathom why anyone would script that, except in CI to test for failures [05:59] me either, but then again as we see every day, customers do weird shit with juju [05:59] wallyworld: https://xkcd.com/1172/ [06:00] very relevant [06:00] ok, i'll change it [06:01] WINNER [06:01] I call that argument "appeal to XKCD" [06:01] not sure i fully agree, but we can iterate [06:01] i can honestly see both sides [06:02] wallyworld: yes, this is my opinion of course. I think you should at least bring it up with fwereade [06:02] my not-very-humble opinion [06:02] after i land the branch so that it will be too late [06:02] heh :) up to you [06:03] it was 2v1, maybe he will make it 2v2 [06:12] wallyworld: oh shit. looks like azure-sdk-for-go is using a too-new version of x/crypto :/ [06:12] \o/ [06:12] wallyworld: can't build on 1.2. are we updating soon? 
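The storage-at-creation-time bug discussed above comes down to ordering: the ops builder reads the service's storage constraints back out of the database before the creating transaction has written them. The fix described ("factor out how the constraints are discovered and just pass them in") looks roughly like this; every name here is a hypothetical stand-in, not the real juju/state code:

    package sketch

    // Hypothetical types standing in for the real juju/state ones.
    type storageConstraints struct {
        Pool  string
        Size  uint64
        Count uint64
    }

    type op struct { // stands in for a txn.Op
        Collection string
        DocID      string
    }

    // Before: the ops builder re-reads constraints from the database. At
    // service creation time the service document has not been committed
    // yet, so the read finds nothing and the unit gets no storage.
    func unitStorageOpsBroken(readFromDB func() (map[string]storageConstraints, error), unit string) ([]op, error) {
        cons, err := readFromDB() // too early: nothing has been written yet
        if err != nil {
            return nil, err
        }
        return buildStorageOps(unit, cons), nil
    }

    // After: the caller already holds the constraints it is about to write
    // and simply passes them in, so the same helper works both during
    // service creation and for later add-unit calls.
    func unitStorageOps(unit string, cons map[string]storageConstraints) []op {
        return buildStorageOps(unit, cons)
    }

    func buildStorageOps(unit string, cons map[string]storageConstraints) []op {
        ops := make([]op, 0, len(cons))
        for name := range cons {
            ops = append(ops, op{Collection: "storageinstances", DocID: unit + "/" + name})
        }
        return ops
    }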
[06:12] sigh [07:28] axw: so, this test TestDeployBundleInvalidSeries now fails with --to now not complaining about series mismatch. I think it's valid to accept the bundle case as highlighted in the test as something that should now work [07:28] agree? === urulama__ is now known as urulama [07:40] mgz: the -race build finally passed !! [08:48] dooferlad, do you have a moment to look at http://paste.ubuntu.com/13479943/ please? [09:08] fwereade: on it [09:09] wallyworld: sorry, I missed your message. I think so, yes [09:09] axw: np, i've pushed changes [09:24] fwereade: please review: http://reviews.vapour.ws/r/3221/ [10:02] jam, fwereade, voidspace, standup? [10:04] dimitern: omw [10:05] dimitern: gah, 2fa dance [10:20] dooferlad: http://pastebin.ubuntu.com/13491666/ [10:21] dooferlad: fails with http://pastebin.ubuntu.com/13491674/ [10:40] dooferlad: from that output the space name seems to be being serialised as "string" not "space" [10:41] next issue, unreserved ranges is a map not an array [10:41] just testing with real maas to see if that's my bug - probably is :-) [10:43] voidspace: looking [10:48] dooferlad: hmmm... unreserved-ip-ranges should return an array [10:48] dooferlad: looks like it's returning a map [10:49] dooferlad: I haven't looked at the tests for unreserved ranges, just observing the error in my code (requested array got map) [10:50] dooferlad: although your test is deserialising into an array [10:51] dooferlad: I'll look again, it's possible there's a bug that only sends a map when the array is empty (I didn't populate the ranges first - haven't got that far in my test) [10:51] voidspace: OK [10:53] hmmm... my code looks good [10:54] unless it should be "unreserved_ip_ranges" [10:54] checking the maas docs :-) [10:54] problem is that maas ignores unrecognised ops [10:54] voidspace: I went with underscore [10:54] voidspace: which I think is what the doc has [10:54] dooferlad: underscores are correct, the maas command line translates them [10:55] dooferlad: however I missed an _ip off the middle of the op [10:55] so I'm calling the wrong op [10:55] which is why I'm getting the wrong response [10:55] so another bug that testing has discovered... [10:55] the space name issue is still a real issue though as far as I can see [10:59] dooferlad: dammit, I need "reserved_ip_ranges" not unreserved [10:59] sorry :-/ [10:59] that was a bug in my code too [11:00] voidspace: I did both [11:00] ah, I get an EOF reading reserved_ip_ranges [11:00] I'll look into that [11:04] dooferlad: when I call reserved_ip_ranges I'm looking for the range with "purpose" set to ["dynamic-range"] [11:04] dooferlad: can I set that with the test server? [11:06] It would be easy enough to do [11:08] I can see the Purpose field on AddressRange exists [11:08] dooferlad: note that the address range responses for reserved_ip_ranges and unreserved_ip_ranges are different [11:09] so *technically* having a Purpose field for the unreserved response is incorrect [11:09] however I don't think anyone will actually care [11:09] voidspace: as long as your code doesn't care :-) [11:09] it doesn't :-) [11:09] I'm not currently using reserved ranges [11:09] maybe the code that does address allocation should use it [11:10] but even then it wouldn't care about an extra Purpose field [11:10] grabbing coffee [11:13] voidspace: also grabbing coffee [11:17] fwereade: hey, with series in metadata work, we now support "juju deploy mysql --series vivid --force" to allow a non-vivid charm to be deployed on a vivid machine. 
we'd like to also make --to for unit placement able to override series just as it overrides other machine constraints, ie "juju add-unit mysql --to 1" would deploy mysql on a vivid machine 1 even if mysql does not support vivid. Note that there was no use of --force in [11:17] that add-unit command. The proposal is to make --to put a unit on a nominated machine and treat series as being overidden just the same as mem etc [11:17] the counter option is "juju add-unit mysql --to 1 --force" [11:17] but that gives series a special meaning for --to [11:18] wallyworld, I must say I don't really understand the use case: when is someone expert enough to do that but can't add the series to the charm anyway? [11:18] not for a charm store charm no [11:19] eg charm store charm supports precise, trusty [11:19] wallyworld, and it opens that bag of rattlesnakes re subordinates etc [11:19] well sort of, but when deploying a subordinate, a series check would be done [11:20] unless --to were specified [11:20] wallyworld, you can't place subordinnates [11:20] the semantics would be the same [11:20] wallyworld, and it ends up meaning that the existence of a subordinate relation tells you nothing about whether or not subordinnates will exist [11:20] so there we'd use --force [11:21] wallyworld, add-relation --force? [11:21] wallyworld, don't think that works [11:21] that's orthogonal the current issue though [11:21] wallyworld, we just need to accept that if you allow series-breaking deploys we also cause arbitrary series-breaking deploys of any subordinnates [11:22] wallyworld, it's not -- we're breaking a fundamental assumption on which all manner of things rest [11:22] only to date with one charm one series [11:22] that model is on the way out [11:23] wallyworld, I think "we will deploy stuff that works" is a kinda important principle [11:23] wallyworld, the choice is simple [11:23] so this is paving the way to deal with that but also allow users control [11:23] we have a clear directive to allow this to happen [11:24] wallyworld, if we allow deliberate I-know-better breakage of one service, we inevitably open the dorr to surprising breakage of subordinates [11:24] wallyworld, and that's fine, it's a choice we can make [11:24] wallyworld, but I see no awareness that we're adding a whole new class of inscrutable failure mode to juju or what we're going to do about it [11:25] well, that's moot - we've been told to do it [11:25] wallyworld, ok, so part of that is *addressing the issues that it raises* [11:25] there's not much we can do if a user deploys a precise charm to vivid and it breaks [11:25] wallyworld, right [11:25] wallyworld, that's fine [11:25] wallyworld, the user said "do X, I know what I'm doing" [11:25] right, hence --to [11:26] which breaks mem and other constraints already [11:26] wallyworld, no [11:26] wallyworld, placement overrides constraints [11:26] right [11:26] wallyworld, no complexity, no breakage, clear interaction model [11:26] it breaks things [11:26] if a charm needs 64M and you place it on a 2M machine, boom [11:28] wallyworld, look, when you say "I want service X running on this series" that also, secretly and dynamically implies "I also want a bunch of other services forced onto this series too" [11:28] wallyworld, (also... deploy-time and --to are *very* different...) 
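The two proposals being argued over differ only in where a single compatibility check is short-circuited. A hypothetical sketch (not the real assignToMachineOps logic) of that check, with the --force path marked:

    package sketch

    import "fmt"

    // validateSeriesForPlacement illustrates the debate: supportedSeries
    // comes from the charm metadata, machineSeries from the target of --to.
    func validateSeriesForPlacement(supportedSeries []string, machineSeries string, force bool) error {
        for _, s := range supportedSeries {
            if s == machineSeries {
                return nil // charm claims support: nothing to override
            }
        }
        if force {
            // Option B: the user passed --force, so the unit is knowingly
            // placed on a series the charm does not list.
            return nil
        }
        // Option A would skip this error entirely whenever --to was given.
        return fmt.Errorf("charm does not support series %q (use --force to override)", machineSeries)
    }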
[11:28] wallyworld, deploy-time keeps us to one-service-one-series [11:28] deploy supports --to i think [11:29] wallyworld, you're right there, but that can work just fine [11:29] it breaks the same way though [11:30] wallyworld, --to, when handled by juju rather than a provider, implies force-series-of-target-machine [11:30] wallyworld, do we have a mandate to allow multi-series *services*? because I am pretty sure we agreed to descope that precisely because of these concerns [11:30] in the same way it forces a charm into an environment that might not suit it memeory wise [11:31] services are single series [11:31] once the service doc series attribute is set, that defines the series of the units [11:32] wallyworld, ok, so add-unit --to will barf on series mismatch? [11:32] but we still need to allow those units to be placed on any machine [11:32] it will now [11:32] but not by the end of this [11:32] it will barf on OS mismatch [11:32] wallyworld, if you allow add-unit --to, you just implemented multi-series services [11:33] in a way [11:33] wallyworld, which is a monster of complexity and unintended consequences [11:33] as is a number other things in juju [11:33] which we have had to do [11:34] wallyworld, we agreed *not* to do multi-series services [11:34] wallyworld, at least in phase one [11:34] ok, i'll tell people with pitch forks to come and see yu :-) [11:35] wallyworld, multi-series services will be necessary and will be awesome [11:35] wallyworld, but if we shoehorn them in like this we do nobody any favours [11:35] people want to override juju's behaviour [11:36] as with image id etc [11:36] we are fighting a losing battle [11:36] wallyworld, all I am trying to do is to develop a product that has a chance of fucking *working* [11:36] but we'll hold off and see what pusback we get [11:37] wallyworld, magical thinking is not actually a substitute for engineering [11:37] it works fine even with upload-tools, etc [11:37] which we said was evil [11:37] at some point, we have to cater for what users want even if it is not perfect [11:38] wallyworld, upload-tools? you mean the feature that means we never know what version a client is running in the field? yeah that's fucking awesome [11:38] wallyworld, yes [11:38] wallyworld, I know [11:38] sarcasm aside, upload tools solves user problems [11:38] wallyworld, I am advocating for us *thinking through consequences* [11:38] i don't like it either [11:39] wallyworld, right [11:39] wallyworld, it is a shitty half-assed solution [11:39] we can solve consequences through iteration [11:39] wallyworld, are you fucking kidding me [11:39] wallyworld, we cannot just break the model and hope it'll magically work itself out [11:40] that's not what i said [11:40] wallyworld, look, we talked about all this inn the spec [11:40] and yet users still ask for it [11:40] wallyworld, so the spec changed to include multi-series services? [11:40] wallyworld, and we didn't reestimate? [11:41] wallyworld, or take the time to answer the hard questions that caused us to defer the multi-series service bit? 
[11:42] the spec was updated yes, but full details of consequences not there becauses it's a requirements spec [11:43] i can strikeout some items for now and see what push back we get [11:44] wallyworld, my understanding was that we'd drawn a line before multi-series services because the effort to do them right was *so much* higher [11:46] yeah, we try to [11:46] wallyworld, I hope it's clear that I'm not even against it -- I just do not believe we will do anything but a shitty job of it if it slips in under the radar like this [11:47] sure, understood [11:51] fwereade: i think a lot of it comes down to - are we prepared to allow users to force subordinates onto a machine that the principal may also have been forced onto [11:51] or, we could continue to disallow that [11:52] wallyworld, I *think* that doesn't quite come up? [11:52] wallyworld, so long as each service has 1 series, you can only create subordinate relations between services with matching series [11:53] wallyworld, and so, yes, we will want to be able to force subordinate series too [11:53] wallyworld, but I think it keeps the door to the really surprising behaviour shut [11:55] is there really much difference between forcing a precise mysql charm onto a wily machine, and also forcing a rsyslog subordinate onto that wily machine? [11:57] dooferlad: any idea on why "space" appears to be serialised as "string" in the test server? [11:57] voidspace: not yet [11:57] voidspace: hunting other bugs [11:58] ok [11:58] dooferlad: I can't currently deserialise a subnet - and *all* of my code needs to deserialise subnets [11:58] dooferlad: so I can't currently test any of it :-/ [11:58] voidspace: yea, will get to it ASAP [11:58] it *may* still be my fault, but I can't see why [11:58] dooferlad: cool, thanks [11:59] wallyworld, the difference is entirely in whether it's been explicitly requested [11:59] I'm continuing to write tests, but can't actually run them :-) [11:59] I also need to be able to set the dynamic range to test that code - but I only really need that for a single test [11:59] wallyworld, running a charm in an unexpected environment is reasonable when the user has said they know better [12:00] fwereade: right, so add-relation could do that check, but it would need though on how to make it nice to use [12:00] dooferlad: an NewSubnetWithDynamicRange or similar would be fine [12:00] wallyworld, it's when that leaks into other services -- and especially we don't have a clear model for all the edge cases -- that we have a problem [12:00] wallyworld, add-relation already does that check [12:00] dooferlad: not necessarily any need for a general purpose mechanism for specifying the purpose of ranges [12:02] wallyworld, I wholeheartedly agree that it would be awesome to do clever things that Just Work, but I would rather restrict the domain to simple things that Just Work and figure out how to grow from there [12:02] right, which is sort of what we were doing [12:02] wallyworld, it is, so long as we don't introduce multi-series services [12:03] wallyworld, and I think we really do get plenty of user value out of phase 1 [12:03] it is simple to deploy the initial multi-series units [12:04] that Just Works, and the growing bit is how to allow explcit override for incompatible suborinates :-) [12:04] but we'll skip that for now then [12:04] wallyworld, strongly disagree -- it puts us into broken states and forces us to figure out how to get out of them [12:05] what's broekn? 
it's no more broken that deploying a trusty charm to a vivid machine in the first place [12:05] wallyworld, AIUI the use case is "deploy a charm to a series not explicitly supported" not "deploy cross-series services" [12:05] but the latter boils down to the former [12:06] wallyworld, if I say "deploy trusty-X on vivid", I am explicitly telling juju to do something strange/new [12:06] sure, and if i say replace this trusty subordinate to this vivid mysql, same thing [12:06] relate* [12:06] wallyworld, if it then turns out that it *also* meant "oh, and an arbitrary set of other services might have some of their units deployed on mixed series" [12:07] wallyworld, that feels like a pretty harsh violation of least surprise [12:07] voidspace: space name issue fixed [12:07] but it's the same basic premise - deploying charms onto machines with mismatched series if the user says it;s ok [12:07] voidspace: I also changed the JSON output to be pretty printed so debugging is easier [12:07] there's no surprise, the user has ok'ed everything [12:08] wallyworld, how so? [12:08] by telling juju it's ok to relate this subordinate to this principal even though the series don't match [12:09] maybe this point is moot - all the charms will be migrated to multi-series :-) [12:09] wallyworld, but that's just another implicit inroad into multi-series services -- yeah, exactly [12:10] but as a user it would suck if i had a trusty rsyslog that i couldn't use [12:10] wallyworld, it will suck exactly as much as it does today -- they can deploy another rsyslog to vivid and relate that one [12:10] wallyworld, not great? sure [12:10] wallyworld, but it's what you have to do already [12:11] wallyworld, and it's not the problem we're trying to solve with force-series-deploy [12:11] wallyworld, it's related, it's probably the *next* problem to solve [12:12] fwereade: well today you can't deploy the rsyslog to vivid - well you can, but the series is still trusty [12:12] so you are screwed [12:12] wallyworld, but largely because it's a special case of "multi-series *services* are the logical next step from multi-series charms" [12:12] wallyworld, huh? if you force rsyslog to vivid *surely* the service has series vivid? [12:13] maybe, i'd need to check [12:13] wallyworld, if we're checking charm series not service series then, ok, we need to update the model [12:13] wallyworld, but that's not a major change afaics [12:15] fwereade: so if you have a trusty rsyslog, off hand, i don't think you can force it to vivid. if you try and deploy using --to, it ha to be to a trusty machine [12:16] wallyworld, ok, now I'm super confused [12:16] you can now with series in metadat awork [12:16] wallyworld, I thought this work was, literally, you can now do that [12:16] wallyworld, ok cool [12:16] but not in 1.25 unless i am misremebering [12:16] wallyworld, no we agree there [12:16] so in 1.25 you are screwed [12:17] and as a user that sucks [12:17] i want to tell juju to do my bidding [12:17] and not have juju say "no" [12:17] wallyworld, no argument there... 
but I thought we were talking about the series in metadata work [12:17] right, but if i want multiseries services, then juju should do that [12:17] even if i need to --force or --to [12:18] wallyworld, in which case surely you can deploy whatever charm with whatever forced series you like, and get a service with the forced series that happens to use a charm for a different series [12:18] wallyworld, I agree it should, yes [12:18] wallyworld, but it should not do so by exploding the space of surprising deployment possibilities and hoping they aren't too painful [12:19] wallyworld, especially not when we scoped it to solve a problem and ISTM that it is fulfilling that with deploy-time only [12:19] not surprising if i ask juju to do it :-) but we've already been over that [12:19] wallyworld, if we're missing anything, how about upgrade-charm? [12:19] on the list to look at [12:20] wallyworld, in what world is that done *after* multi-series services though? [12:20] wallyworld, that's necessary for a consistent implementation of deploy-force [12:20] in a world where you diliver stuff incrementally [12:20] into a product notyet released [12:21] but which will be usable by the time of release [12:21] but along the way will have rough edges while stuff is developed [12:21] wallyworld, missing upgrade-charm is absolutely one of those rough edges [12:21] yes it is [12:21] which is why it will be done before release [12:22] wallyworld, but having done deploy-series, fixing those rough edges surely comes before making a massive and unconsidered change to the model [12:22] sigh, it's not unconsidered [12:22] wallyworld, which will create more rough edges than you can fix in a dedicated cycle [12:23] wallyworld, you still don't seem to understand that if you tell juju put put service X on series Y, you will be surprised if service Z also ends up there [12:23] wallyworld, "do what I say"? fine [12:23] i won't be surprised because service Z will not go there with an incompatible series unless i say so [12:24] wallyworld, how will you tell it? [12:24] via --force or similar to be decided syntax [12:24] dimitern, dooferlad, voidspace: if we want to rebase our branch I'll need to push with fast-forward; we'll all need to agree when we should do that as it would be best if you have nothing locally in-flight. [12:25] wallyworld, ...that doesn't sound "considered" to me [12:25] frobware, why should it matter if there's ongoing work? [12:25] wallyworld, afaics your choices are to magically deploy to surprising series, or to *not* deploy to surprising series [12:25] the general principal is, UX needs thought [12:25] dimitern, because you wont' be able to pull into your local maas-spaces branch [12:25] frobware, provided each of us also rebases the in-progress work on top of the rebased maas-spaces [12:26] dimitern, ok that should work too. [12:26] it's not magic [12:26] the user needs to ok it [12:26] but it's moot now a [12:26] dimitern, I was trying to avoid the principal of least surprise [12:27] wallyworld, so, when I add-relation foo bar, and foo and bar are both trusty, but foo/2 is on vivid, what do we do? [12:28] wallyworld, and when I add another vivid unit of foo, what do we do then? [12:28] depends on the charm and what it supports [12:28] if foo charm supports vivid, no problem [12:28] wallyworld, but I thought you said we'd have to force it? [12:28] dimitern, dooferlad, voidspace: OK to rebase our branch? 
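For the gomaasapi test-server work above: per the discussion, reserved_ip_ranges returns an array of range objects whose purpose field is itself an array of strings (e.g. ["dynamic-range"]), not a map and not a bare string. A sketch of decoding that shape; the start, end and num_addresses field names are assumptions about the MAAS API, not taken from the chat:

    package sketch

    import "encoding/json"

    // addressRange models one entry of a reserved_ip_ranges response.
    type addressRange struct {
        Start        string   `json:"start"`         // assumed field name
        End          string   `json:"end"`           // assumed field name
        NumAddresses int      `json:"num_addresses"` // assumed field name
        Purpose      []string `json:"purpose"`       // an array, per the chat
    }

    // dynamicRange picks out the range whose purpose includes
    // "dynamic-range", which is what the provider code above looks for.
    func dynamicRange(body []byte) (*addressRange, error) {
        var ranges []addressRange // an array, not a map
        if err := json.Unmarshal(body, &ranges); err != nil {
            return nil, err
        }
        for i, r := range ranges {
            for _, p := range r.Purpose {
                if p == "dynamic-range" {
                    return &ranges[i], nil
                }
            }
        }
        return nil, nil // no dynamic range on this subnet
    }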
[12:28] only if the charm doesn;'t support the series [12:29] if it is a multiseries charm supporting trusty and vivid then yay [12:30] frobware, +1 from me [12:31] wallyworld, then yay, except upgrades get tangly [12:32] we just check that all the series onto which the charm is deployed are also supported by the new charm [12:32] we can then make a call [12:32] wallyworld, again, multi-series charms *will* be great, but the model does not currently accommodate them, and we do actually have to model them [12:33] we can allow the upgrade but then prevent new units added to any unupported series [12:33] wallyworld, race: I add a unit of precise just as you upgrade the service to a charm that doesn't support precise [12:33] wallyworld, to do that right we need a bunch of synchronisation in state that doesn't currently exist [12:33] wallyworld, again, it will be good [12:33] wallyworld, but it's massive scope creep for phase 1 [12:35] i agree upgrades are messy and we are getting near the end [12:35] wallyworld, yeah -- I think we can solve a problem, get a win, draw a line under it, examine the next problem with more clarity [12:36] dimitern, if you're going to rebase your current branch then I'll communicate the commit ID I rebased to [12:37] frobware, ok, sounds good [12:56] voidspace, just want to confirm you're also OK if I rebase maas-spaces with master [13:06] dooferlad, ans same for you too ^^ [13:12] frobware: is that bug fixed? [13:12] frobware: the failing tests on master bug? [13:12] frobware: if not I'll have failing tests on maas-spaces [13:13] frobware: not the end of the world but we'll need to rebase again as soon as fixes land [13:27] voidspace, which bug? on my desktop (trusty) the unit tests on master and the rebased maas-spaces branch are OK [13:35] frobware, voidspace, all seems green on master; I vote to do the rebase ;) [13:45] dimitern, voidspace: pushing rebase [13:47] frobware, cheers, will have a look in a bit how it went [13:48] dimitern, voidspace, dooferlad: diff against master reveals: http://pastebin.ubuntu.com/13492574/ [13:51] frobware, that sounds correct - these should be the 13 commits that are ahead [13:53] dimitern, voidspace, dooferlad: rebase comit was 8b4e8b7d037c52c9a0df00d8227366033eea04d9 [13:59] frobware, thanks, I'll rebase mine soon [15:03] don't suppose anyone's familiar with menn0's StatePool stuff? [15:04] natefinch, sent a few notes on the juju-min-version PR [15:07] fwereade: awesome, thanks [15:09] dimitern: frobware: this bug: https://bugs.launchpad.net/juju-core/+bug/1517748 [15:09] Bug #1517748: provider/lxd: test suite panics if lxd not installed [15:09] the one that causes tests on master to fail for me [15:09] that we discussed yesterday :-) [15:09] voidspace, I just saw that master is blessed with the same revision that I rebased too [15:10] frobware: nonetheless I have failing tests on master on my machine [15:11] voidspace, does comment #3 make any difference for you? [15:11] frobware: read comment #4! [15:11] frobware: (but no) [15:11] I just responded [15:11] I'll look at the example config he suggests in a bit [15:11] voidspace, gee! I'ms so out touch - there's a comment #4 now... [15:12] but my user is in the lxd group [15:12] frobware: :-) [15:12] frobware: sounds like you've done the rebase anyway [15:12] ah well... 
[15:12] I won't update the branch I'm working on until I'm ready for it to merge into our feature branch [15:12] voidspace, ok [15:13] voidspace, I have a wily machine, in a spare minute I'll try building and running the tests there. [15:14] I don't think wily is the issue [15:14] dimitern has wily and tests pass for him I believe [15:14] well, except the payload/persistence tests that also fail intermittently on master and for which there is another bug [15:19] voidspace, I haven't run make check since the rebase, but will do soon [15:29] jillr, can you ping me when you get a few minutes? [15:32] frobware, I need more coffee, will be there ina minute [15:33] alexisb, ack [15:38] dooferlad: bug in subnetReservedIPRanges [15:39] dooferlad: it panics if there are no InUseIPAddresses on a subnet (index out of range when looking up ipAddresses[0]) [15:45] dooferlad: also, in the current pushed branch the purpose array for the reserved_ip_ranges is nil rather than an empty array if there are no purposes [15:45] dooferlad: defaulting to "assigned-ip" would be better [16:02] voidspace: I think I have both of those fixed. Just found a really odd bug though - need to fix it or unreserved_ip_ranges is broken [16:03] dooferlad: cool [16:03] dooferlad: I've worked around them for the moment [16:03] great [16:04] just fixed another bug in my own code around struct copying [16:04] so all the subnets were missing from Spaces [16:04] to be fair I knew that was probably the case and was waiting for my test to prove it [16:04] fixed now [16:10] Bug #1461957 changed: Does not use security group ids [16:10] Bug #1519403 opened: 1.24 upgrade does not set environ-uuid [16:21] cherylj: hey there, what's up? [16:21] jillr: just wanted to checkpoint with you on the upgrade stuff. I set up a time for us to chat. Would that work for you? (you should have an invite in your inbox) [16:21] jillr: I just saw that you accepted it :) [16:22] cherylj: that time works, just sent an accept [16:22] :) [16:22] cool, chat with you then :) [16:22] good deal [16:22] voidspace, can you move off go 1.3.3 and to 1.5 ? [16:23] voidspace, or put another way - if you switch to 1.5 do you still see the failing test in your branch? [16:25] frobware: I can try that [16:25] frobware: up to my neck in subnet tests right now [16:25] will try in a bit [16:25] voidspace, sure [16:42] voidspace: latest gomaasapi code pushed. [16:42] dooferlad: thanks [16:43] voidspace: didn't fix that index out of range, but the adding a range code is there. [16:43] voidspace: just going to tidy that up now [16:43] dooferlad: now purpose is coming back as a string not an array [16:44] voidspace: is that not OK? [16:44] an array seemed odd and I couldn't work out why it would be >1 thing. [16:44] dooferlad: no, it's an array of strings [16:44] dooferlad: me neither [16:44] but an array of one string is what it is... [16:45] voidspace: easy enough to change. Will do that while I fix the out of range stuff [16:45] thanks [16:54] voidspace: fixed [17:22] dooferlad: SetNodeNetworkLink is new, right? [17:22] dooferlad: could it take a systemId instead of a Node, that would be much more convenient [18:57] ericsnow: why does the controller in an LXD environment need to be wily? [18:58] natefinch: the juju deb is built with Go 1.3+, thus supporting the LXD provider [19:00] ericsnow: we're installing jujud via the deb on the target machine? not just downloading it? 
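Two of the test-server bugs mentioned above, the panic on an empty InUseIPAddresses slice and the purpose field serialising as nil rather than an empty array, are standard Go slice pitfalls; a small standalone sketch, not the gomaasapi code itself:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // 1. Index-out-of-range: guard before touching element 0.
        var inUse []string // no in-use addresses on this subnet
        if len(inUse) > 0 {
            fmt.Println("lowest in-use address:", inUse[0])
        } else {
            fmt.Println("no in-use addresses")
        }

        // 2. nil vs empty slice: a nil slice marshals to JSON null, an
        // empty slice marshals to []. Clients expecting an array notice.
        var nilPurpose []string
        emptyPurpose := []string{}
        a, _ := json.Marshal(nilPurpose)
        b, _ := json.Marshal(emptyPurpose)
        fmt.Println(string(a)) // null
        fmt.Println(string(b)) // []
    }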
[19:01] natefinch: from the stream that was built from the deb [19:01] natefinch: (or use --upload-tools) [19:02] ericsnow: right, but isn't what we download from the stream just a binary? [19:02] natefinch: essentially [19:02] so.... it'll work wherever [19:02] oh wait [19:02] natefinch: the stream matches the series [19:02] I understand [19:03] we're intentionally shooting ourselves in the foot because of Ubuntu. [19:03] natefinch: pretty much [19:03] huzzah [19:03] *lol* [19:11] jillr: I had one more thing I was going to ask! You had an environment that we were able to get to 1.26-alpha1, right? [19:13] cherylj: no to the best of my knowledge we were never able to successfully upgrade from 1.22.8 to 1.26-alpha in our staging environment [19:13] jillr: hmm, I thought wallyworld had gotten you guys a script to force it to that level. But, there was a lot of back and forth, and maybe that was in a test environment on our side [19:14] cherylj: I know there was a script in the works, and we were doing some mongo surgery at one point but that was unsuccessful aiui [19:15] jillr: also, do you want to schedule your test upgrade to 1.25.1? Since we'll be all together in the US, we could get a couple people on a hangout to make sure things go through successfully [19:15] cherylj: if we have that script I can do another test deploy/upgrade later today to confirm where we are [19:15] cherylj: that would be great [19:15] jillr: how does Monday, Dec 7 work for you? [19:15] I imagine we'll have moved 1.25.1 to stable by then [19:17] which releases of MAAS will juju 1.25 and 1.26+ support? 1.8 and 1.9, or older versions too? [19:17] cherylj: if we can shoot for US-west afternoon that would work [19:17] that's actually an excellent question for us as well cherylj ^^ we have a couple MAAS 1.7 deploys we'll be working on with these upgrades [19:18] jillr: if you have time, you could also verify that the VIP switch is fixed in 1.26-alpha1 before then, either by attempting an upgrade again or directly bootstrapping it and trying to recreate bug 1516150 [19:18] Bug #1516150: LXC containers getting HA VIP addresses after reboot [19:19] frobware, jillr, if we don't get an answer here, I'll make sure to bring it up in the release call this afternoon [19:19] (for the MAAS support question) [19:19] thx [19:19] cherylj: can definitely test the VIPs, and thx on the MAAS question [19:20] frobware: will the answer impact your work on that bonding bug? [19:20] cherylj: I dont readily see the script on staging, will want to get a new/current copy of that to be safe please [19:20] jillr: https://github.com/wwitzel3/juju-upgrade-hack [19:21] cherylj: awesome, thx [19:21] I really wish we could all get away from using the word "hack" in anything we produce / comment on [19:21] heh [19:21] I dig the readme :) [19:21] hahaha, yeah [19:22] jillr: you could even try the upgrade to 1.25.1 since it's in proposed. Don't have to futz with 1.26-alpha1 [19:22] cherylj: I'll plan to do a deploy with the VIPs in the high range (to not hit #1516150) first and test 1.26-alpha1 with that script today [19:22] Bug #1516150: LXC containers getting HA VIP addresses after reboot [19:23] cherylj: then if I have time or else tomorrow with the VIPs in the low range for testing 1516150, and also 1.25.1 time permitting [19:23] jillr: awesome. 
Just let me know if you run into problems [19:23] cherylj: will do [19:39] ericsnow: my bug fix: http://reviews.vapour.ws/r/3224/ [19:39] ericsnow: I'll review your fixes now [19:40] natefinch: k [20:07] sinzui: the -race build has passed a few times now [20:07] (minus the ususal mongo db rubbish) [20:07] can run-unit-tests-race be made voting please [20:10] OMG that would be amazing [20:11] can we set the landing bot to run with -race too? [20:11] davecheney: alredy voting :) I added a second retry since the other voting unit tests also retry === natefinch is now known as natefinch-afk [20:22] sinzui: fabulous! [20:50] Bug #1519473 opened: High resource usage and possible memory leak 1.24.5 [20:59] cherylj: jillr: just saw backscroll, didn't read in detail, but i did upgrade a 1.22.8 to 1.26-alpha1 on jujumanage@blah and the next day it was reset again [20:59] and then wayne took that and made it into a script [21:00] wallyworld: thx. I've got a redeploy cooking now, will give it a run via the script soon as this is done [21:01] sinzui, awesome! [21:02] and davecheney, also awesome, thank you very much [21:02] fwereade: hey [21:02] fwereade: can you chat now? [21:02] thumper, let's [21:02] fwereade, menn0: https://plus.google.com/hangouts/_/canonical.com/migrations?authuser=1 [21:02] thumper: on my way [21:03] fwereade: i'm just writing something to the troups [21:49] wallyworld, thumper when the both of you are free we need to chat, after the release call if you would like [21:50] 4 words guys hate [21:50] we need to talk [21:50] :) [21:50] i'm free now if that helps, not sure about thumper [21:51] it should be the three of us [21:52] sure, i was waiting for thumper to confirm also :-) [21:54] wallyworld, maybe we should keep pinging thumper [21:54] yeah, let's ping thumper [21:54] * thumper is on a call with menn0 and fwereade [21:54] but can come now [21:54] so that thumper continues to here "ding" [21:54] :-D [21:54] * rick_h_ wonders if you thump a thumper vs ping a thumper [21:54] lol [21:54] thumper [21:55] cruel menn0 [21:55] WAT? [21:55] thumper ding ding [21:55] thumper ding ding [21:55] https://plus.google.com/hangouts/_/canonical.com/alexis-tim-ian [21:55] thumper, ^^^ [21:55] so not among friends here [21:55] wallyworld, ^^ [22:12] menn0: can you bump up bug 1517748 in your review queue today? We're waiting on that for alpha2 [22:12] Bug #1517748: provider/lxd: test suite panics if lxd not installed [22:13] cherylj: will do ... have been in calls and still am [22:13] menn0: thanks! [22:19] ericsnow, cherylj: ship it with questions [22:20] menn0: thanks [22:27] sinzui: I *need* CI to run over the controller rename [22:27] sinzui: but we are going to have to be careful about how things are defined to be "multiple environment" [22:28] ericsnow: i'm happy with both those responses. thanks. [22:28] menn0: no, thank you :) [22:34] thumper: CI really cannot. most of the functional tests will fail and we need to clean the damage by hand [22:53] thumper: I will ask abentley to take up the controller insullation work. I have fixes for 80% of jobs, the remaining 20% are hard. if Aaron and I agree that we can accept some known failures for a few run, we can run without complete support [22:53] Bug #1519527 opened: 1.25.1 as proposed: 1 or more lxc units lose agent state [22:54] what are the hard points?\ [22:54] sinzui: worth noting that my idea of saying "look for create-environment" will soon be wrong :-| [22:54] as it will be "create-model" [22:54] sinzui: also... 
part of the clouds and credentials spec, the cache.yaml file is likely to be replaced too [22:55] thumper: several functional tests don't use the common args and bootstrap helpers. TRheir bepoke code needs to be removed, or in the case of quickstart tests, we need to add the new intelligence [22:55] ah [23:02] Bug #1519527 changed: 1.25.1 as proposed: 1 or more lxc units lose agent state [23:05] Bug #1519527 opened: 1.25.1 as proposed: 1 or more lxc units lose agent state [23:14] axw: anastasiamac: sorry, in another meeting, delayed by 10 minutes or so [23:14] wallyworld: sure, ping when you're ready [23:15] wallyworld: k [23:35] axw: anastasiamac: almost there, 5 more minutes [23:35] wallyworld: k [23:41] anastasiamac: axw: there now