[01:23] wallyworld: lp:gomaasapi/enum.go only has 7 node status consts, where lp:maas/maasserver/enum.py has 15. Do you know if there is a reason for that, or is gomaasapi out of date? If so, I'll add the missing 8 statuses, along with the failed-deployment status needed to fix this bug.
[01:24] wallyworld anastasiamac: 1.24 release notes are here: https://docs.google.com/document/d/1qKWvSZ06Vx3ZI2RxYg6P7sWdIvOD5QXPfvpUNEtImMA/edit
[01:36] thumper: ping?
[01:37] hey
[01:37] should DestroyEnvironment be moved from apiserver/client to apiserver/environmentmanager so it could be called when you're just logged into a system?
[01:38] ah...
[01:38] no
[01:38] don't think so because it actually operates on an environment...
[01:38] which makes me rethink slightly
[01:38] bugger
[01:38] heh
[01:39] ah FFS
[01:40] * thumper thinks
[01:40] shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit shit
[01:40] :(
[01:41] * thumper rethinks the option of splitting the command up
[01:42] damn it
[01:44] thumper: need to chat about it?
[01:44] yeah
[01:44] okay, one sec.
[01:44] gimme a few minutes to finish munching
[01:45] sure
[01:50] cherylj: https://plus.google.com/hangouts/_/canonical.com/destroy-all-the-environments
[01:54] waigani_: yes, gomaasapi is out of date, which is part of the reason for the bug
[01:54] wallyworld: I've fixed the bug, now I just have to refresh my memory on bzr to propose to gomaasapi...
[01:55] waigani_: does the fix include updating pending machine status to error as per the bug report?
[01:56] bzr push lp:~waigani/gomaasapi/your-branch
[01:56] wallyworld: it adds a TestBootstrapNodeFailedDeploy to the maas provider - that is, bootstrap returns an error
[01:57] waigani_: i see, and so that propagates through to make start instance error
[01:57] it works for any instance starting up, not just bootstrap?
[01:59] wallyworld: yes, the fix is in waitForNodeDeployment - but I'll follow up with more testing to make sure
[01:59] waigani_: thanks, i'm just being cautious
[02:00] wallyworld: yep, of course
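(Aside: a minimal sketch of the enum gap wallyworld and waigani_ discuss above. The status names are drawn from MAAS's maasserver/enum.py NODE_STATUS; the Go identifiers, values, and representation in lp:gomaasapi are assumptions and may differ.)

```go
// NodeStatus mirrors MAAS's NODE_STATUS enum; purely illustrative.
type NodeStatus int

const (
	StatusNew NodeStatus = iota // called DECLARED in older MAAS releases
	StatusCommissioning
	StatusFailedCommissioning
	StatusMissing
	StatusReady
	StatusReserved
	StatusDeployed
	StatusRetired
	StatusBroken
	StatusDeploying
	StatusAllocated
	StatusFailedDeployment // the status this bug fix needs
	StatusReleasing
	StatusDiskErasing
	// ... the exact tail of the list depends on the MAAS version.
)
```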
[02:07] anastasiamac: storage-add is all done on 1.24 right?
[02:07] anastasiamac: just updating the docs
[02:11] axw: anastasiamac had to go out to the school for a bit; add is done except for the 2 remaining PRs for the hook tool fixes
[02:11] wallyworld: thanks
[02:26] wallyworld: is it MAAS 1.8 that's required for disk constraints?
[02:26] axw: yep
[02:31] wallyworld anastasiamac: updated docs, would appreciate your eyes over it when you have time: https://github.com/juju/docs/pull/443
[02:32] axw: ty, am finishing a branch, will look soon
[02:32] axw: tyvm!
[02:32] axw: will look soon :D
[02:32] thanks
[02:33] axw: tyvm for comments on PR as well :D
[02:33] axw: if u have time to cast ur eyes over this one, that would be gr8!!
[02:33] axw: http://reviews.vapour.ws/r/1828/
[02:45] thumper: http://paste.ubuntu.com/11510221/
[02:45] latest state of play
[02:45] still very messy
[02:46] Bug #1460882 was opened: provider/joyent: multiple data races
[02:46] davechen1y: keep at it, you're doing a great job!
[02:46] wallyworld: https://code.launchpad.net/~waigani/gomaasapi/faildeploy/+merge/260786
[02:46] waigani_: thanks, will look soon
[02:47] wallyworld: once that lands I'll update dependencies.tsv and propose the juju-core PR
[02:47] thumper: /me salutes
[02:47] anastasiamac: :( it occurred to me that storage-add is a bit too permissive with the constraints. A unit should not be able to specify which pool to add storage from
[02:48] anastasiamac: that's an operator concern. units should only be able to request more of storage that has already been assigned
[02:48] anastasiamac: i.e. they should be able to specify count -- not even sure about size
[02:48] I think disallowing pool would be sufficient for now
[02:52] anastasiamac: added a comment to review - when doing dependent branches, please use rbt to set the parent branch
[03:14] thumper: here's the PR that adds the logsink log file
[03:15] thumper: http://reviews.vapour.ws/r/1835/
[03:15] k
[03:16] thumper: my only concern with it is that in a large env there's potentially 1000's of request handlers using the same lumberjack.Logger
[03:17] thumper: it's goroutine safe (using a mutex internally) so that's not a problem
[03:17] menn0: are we writing one file per state server or one file per environment?
[03:17] thumper: but I do wonder about the performance impact for the logsink API
[03:17] thumper: one per state server
[03:17] hmm.....
[03:18] thumper: the env UUID is included in each log line
[03:20] I suppose it is only for post-mortems
[03:21] thumper: separate log files would be trickier to manage when envs are destroyed
[03:21] * thumper nods
[03:21] thumper: re the performance aspect I was thinking there could be a single goroutine which writes to the file, using a buffered channel to help cope with bursts
[03:22] thumper: not sure if it's worth the complexity (and it might not really help that much)
[03:22] hmm...
[03:22] yeah, probably not worth it just yet
[03:22] thumper: the way things are now, if the server end starts slowing down it should be fine because there's tons of buffering on the client side
[03:24] I guess we'll see how it performs :)
[03:25] thumper: i could run the performance test again
[03:25] up to you...
[03:25] I'm not too worried at this stage
[03:25] ok
[03:26] i'll make a note to run the test again - maybe when i'm writing specs later this week
[03:27] menn0, thumper: you can easily toss a bufio.Writer around lumberjack and get buffered writing
[03:27] basically zero complexity. I know another application using lumberjack did the same thing, because they were doing a ton of small writes.
[03:28] axw: could you look at this for me? discussing with curtis, we have been wanting this for a little while - bootstrap command waits till the api server is ready before exiting: http://reviews.vapour.ws/r/1836/
[03:29] natefinch: not a bad idea ...
[03:30] waigani_: gomaasapi change looks good
[03:32] wallyworld: looking
[03:32] waigani_: cool - that will allow the maas provider to report the right error
[03:32] natefinch: the problem is making sure things make it to disk when the agent dies
[03:33] menn0: yep, that's a problem
[03:33] natefinch: looking at the Writer code it looks like unless you call Flush or Close before things finish up, the contents of the buffer at the end won't make it to disk
[03:33] natefinch: so i'd still need something watching the api server's tomb which calls flush as it's dying
[03:34] natefinch: anyway i'll leave things as they are for now. the OS's own caching may be just fine.
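(Aside: a minimal sketch of natefinch's bufio.Writer suggestion, including the shutdown flush menn0 is worried about. lumberjack.Logger and its Close method are real; the wrapper type, constructor, and names are hypothetical. lumberjack is goroutine safe, but bufio.Writer is not, hence the mutex.)

```go
package main

import (
	"bufio"
	"sync"

	"gopkg.in/natefinch/lumberjack.v2"
)

// bufferedLogWriter batches many small log writes before they reach
// the rotating lumberjack.Logger underneath.
type bufferedLogWriter struct {
	mu     sync.Mutex // bufio.Writer is not goroutine safe
	buf    *bufio.Writer
	logger *lumberjack.Logger
}

func newBufferedLogWriter(l *lumberjack.Logger, size int) *bufferedLogWriter {
	return &bufferedLogWriter{buf: bufio.NewWriterSize(l, size), logger: l}
}

func (w *bufferedLogWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.buf.Write(p)
}

// Close flushes buffered data and closes the underlying logger; this is
// what would hang off whatever watches the API server's tomb, so the
// tail of the log isn't lost when the agent dies.
func (w *bufferedLogWriter) Close() error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if err := w.buf.Flush(); err != nil {
		return err
	}
	return w.logger.Close()
}
```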
[03:35] thumper: do you have time to look at that PR?
[03:35] menn0: yeah
[03:35] in a few minutes
[03:37] thumper: thanks
[03:39] wallyworld: reviewed, probably good to ship but I have a few questions first
[03:39] sure, ty
[03:43] axw: answers published
[03:45] thumper: here is a nasty one: https://bugs.launchpad.net/juju-core/+bug/1460893
[03:45] Bug #1460893: many unhandled assigned values
[03:46] wallyworld: I'll shipit, but what I had expected was that when you bootstrap, Juju wouldn't even export a limited API until bootstrap+possible upgrade had finished
[03:47] axw: but then that would not allow status to run
[03:47] wallyworld: why does that matter? if we're blocking until the API is available?
[03:47] axw: and it's even harder because the upgrade check needs the api
[03:48] so we need to start the api for the upgrade worker
[03:48] hence everything can see it also
[03:48] and so log in
[03:49] is that making sense?
[03:49] wallyworld: it's not unsolvable, but I understand that that makes it more difficult
[03:50] it would need changes down at the rpc level
[03:50] yep
[03:50] so messy this late in 1.24
[03:50] and we've wanted bootstrap to behave like this anyway
[03:50] it could use loopback-only during bootstrap-upgrade, but that probably wouldn't help for local provider
[03:51] likely, yes
[03:51] we can take another stab in 1.25 or something
[03:51] wallyworld: it's got a rubber stamp now
[03:52] ty
[03:52] tested on amazon, just testing on local to be sure
[03:55] natefinch: how do the log files managed via lumberjack end up with the "correct" ownership and perms?
[03:55] natefinch: i'm not seeing where that happens
[03:57] menn0: https://github.com/natefinch/lumberjack/blob/v2.0/lumberjack.go#L200
[04:00] natefinch: ok right. so b/c the shell script used with upstart/systemd sets the initial mode and owner, lumberjack perpetuates it
[04:00] menn0: yep
[04:00] * menn0 needs to do something similar for this other log file
[04:01] menn0: that's like 2/3rds of the reason I left the upstart script alone, was because I didn't want to have to recreate that logic somewhere else.
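(Aside: a sketch of the "do something similar for this other log file" idea — pre-create the file with the desired mode and ownership, as the upstart script does, so lumberjack perpetuates those attributes across rotations. ensureLogFile and its uid/gid parameters are hypothetical.)

```go
package main

import "os"

// ensureLogFile creates the log file up front with the wanted mode and
// ownership, so that lumberjack, which reuses the existing file's
// attributes when it opens and rotates, carries them forward. The uid
// and gid would come from looking up e.g. the syslog user; that lookup
// is system-specific, so they are plain parameters here.
func ensureLogFile(path string, uid, gid int) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// Chown needs privileges; the machine agent runs as root, so this
	// succeeds in practice.
	return os.Chown(path, uid, gid)
}
```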
[04:04] Bug #1460893 was opened: many unhandled assigned values
[04:58] time to head off and give a juju talk
[05:10] axw: if you have time, i'd like another set of eyes on a problem
[05:11] wallyworld: yup
[05:11] so in that recent PR with bootstrap - it fails on local provider because isAgentUpgrading() never returns false
[05:12] so bootstrap polls and gives up
[05:12] and yet the bootstrap is ok because deploy works
[05:13] so i can't quite see right now why the channel to signal agent upgrade checks are finished isn't closed
[05:13] wallyworld: I don't see any isAgentUpgrading(), did you change something and not upload?
[05:13] no, don't think so. that's in jujud/agent/machine.go
[05:14] isAgentUpgradePending?
[05:14] ok
[05:14] seems to all work fine on aws
[05:14] if i hard code isAgentUpgradePending() to return false it works
[05:15] so for some reason the agent upgrade worker is not closing the channel on local provider
[05:15] that's where i've got to so far
[05:16] wallyworld: I'm guessing it's something to do with local provider auto-bumping the version
[05:16] ah, yeah maybe, that version is stored as agent-version in state i guess
[05:16] yes
[05:17] i'm sure i tried this, let me try closing that channel at the start of the upgrade worker
[05:18] wallyworld: could you have a quick look at this? http://reviews.vapour.ws/r/1838/
[05:18] ok
[05:18] wallyworld: thanks. it's an easy one.
[05:19] menn0: sorry, -1 on that PR
[05:20] menn0: delete the tests and i'll LGTM it
[05:21] davechen1y: fine by me I guess
[05:21] davechen1y: the tests are kinda pointless but i was expecting a reviewer to require them
[05:21] :)
[05:22] davechen1y: the original func within the juju codebase wasn't tested either
[05:23] axw: if i hard code the closing of the agentUpgradeComplete channel at the very start of the agent upgrade worker loop, it is still unhappy, so the version bump may be a red herring
[05:24] wallyworld: I'll pull your branch and see if I can spot anything
[05:24] axw: thanks, i'm sure it's something obvious
[05:24] wallyworld: you ok with me deleting the test as per davechen1y (above)
[05:25] wallyworld: ?
[05:25] menn0: was looking at the test
[05:25] i guess so
[05:25] menn0: os.Chown has tests
[05:25] the gymnastics to mock out the call just to prove that we can call it
[05:25] seem unnecessary
[05:25] davechen1y: I agree it's a little contrived
[05:25] davechen1y: i could go either way on this one
[05:26] davechen1y: the test at least shows that the right things are called
[05:26] if it didn't work, there are other tests that would break in other packages
[05:26] if you want to have tests
[05:26] test that calling the os.Chown wrapper changes permissions on disk
[05:27] otherwise all you're testing is that function dispatch works
[05:27] davechen1y: yeah but you can't do that meaningfully if the tests aren't running as root
[05:27] have two tests
[05:27] if os.Getuid() != 0 {
[05:27] t.Skipf("test skipped, run as root")
[05:27] or something
[05:27] davechen1y: yeah but that will never happen
[05:28] and you shouldn't have to run tests as root
[05:28] again, it comes down to what is the test testing?
[05:28] if the test _requires_ that root can os.Chown, then we have to run as root
[05:28] otherwise, mocking the function doesn't test anything
[05:28] (apart from function dispatch)
[05:29] davechen1y: sure, but it's checking that the correct funcs are called with the correct args, which is something
[05:29] davechen1y: anyway, i don't actually mind much
[05:29] davechen1y: i'll delete the test
[05:31] wallyworld: you have to get a whole new client
[05:31] wallyworld: also, you should close the client in that method
[05:31] menn0: sgtm
[05:31] not worth the argument
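(Aside: davechen1y's suggested test, fleshed out — exercise the real os.Chown on disk and skip unless running as root. The test name and uid/gid values are illustrative, and it leans on the hypothetical ensureLogFile helper from the earlier sketch.)

```go
package main

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"syscall"
	"testing"
)

// TestChownLogFile verifies real ownership changes on disk rather than
// mocking the wrapper (which would only prove function dispatch works).
func TestChownLogFile(t *testing.T) {
	if os.Getuid() != 0 {
		t.Skipf("test skipped, run as root")
	}
	dir, err := ioutil.TempDir("", "logtest")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(dir)
	path := filepath.Join(dir, "machine-0.log")
	const uid, gid = 104, 4 // e.g. syslog:adm on Ubuntu; illustrative values
	if err := ensureLogFile(path, uid, gid); err != nil {
		t.Fatal(err)
	}
	info, err := os.Stat(path)
	if err != nil {
		t.Fatal(err)
	}
	st := info.Sys().(*syscall.Stat_t)
	if int(st.Uid) != uid || int(st.Gid) != gid {
		t.Fatalf("ownership = %d:%d, want %d:%d", st.Uid, st.Gid, uid, gid)
	}
}
```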
[05:31] axw: let me check what i did
[05:31] wallyworld: would be nice if the "Waiting for API..." message came before "Bootstrap complete" too
[05:33] axw: yeah, i thought of that, the bootstrap complete is down in environs - i could move it from there
[05:33] so what do you mean by a new client? is NewAPIRoot() not enough?
[05:34] wallyworld: or change environs/bootstrap.Bootstrap to do the waiting - I think that would be preferable, rather than adding more logic to the command code
[05:34] wallyworld: I mean, when you retry, you need to get a new client
[05:34] wallyworld: otherwise it uses the same apiserver root
[05:34] why a new client when retrying?
[05:34] oh right, yes
[05:34] wallyworld: ^^
[05:35] axw: i thought about putting the check in environs, but that would have meant putting client code down there
[05:35] and i didn't think that to be appropriate
[05:35] wallyworld: well, to be truthful the bootstrap *is* complete, so I guess it can stay as it is
[05:36] yeah, i sorta came to the same conclusion
[05:36] so let me retry getting another client each retry
[05:38] axw: yeah, that was it, thank you. i think i was hoping / expecting that a server side change would be noticed by existing api server roots
[05:39] wallyworld: the current impl doesn't strike me as ideal
[05:39] me either
[05:39] i keep thinking it works differently
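(Aside: a sketch of the retry pattern axw describes — dial a fresh API connection on every attempt, since a cached apiserver root won't observe server-side changes. newAPIRoot, apiClient, and isUpgrading are stand-ins, not juju's actual API.)

```go
package main

import (
	"errors"
	"time"
)

// apiClient is a stand-in for juju's API connection type.
type apiClient interface {
	isUpgrading() (bool, error)
	Close() error
}

// waitForAPI polls until the API server reports that upgrade checks are
// done. Crucially, it dials a *new* client on every attempt: reusing
// one connection means reusing the same apiserver root, which won't
// notice server-side state changes.
func waitForAPI(newAPIRoot func() (apiClient, error), attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		if client, err := newAPIRoot(); err == nil {
			upgrading, err := client.isUpgrading()
			client.Close() // don't leak the connection between retries
			if err == nil && !upgrading {
				return nil
			}
		}
		time.Sleep(delay)
	}
	return errors.New("timed out waiting for the API server")
}
```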
[06:41] axw: just read the release notes - the link in the note to the wip doc - the doc is out of date, eg says placement not supported
[06:41] not sure if we should just cut and paste the wip doc into the release notes as doc
[06:42] wallyworld: where? I updated that
[06:42] wallyworld: sorry, I updated the doc: https://github.com/juju/docs/pull/443
[06:42] in Unimplemented Caveats
[06:42] oh, ok
[06:42] so just needs to be acted on
[06:44] axw: makes more sense now. i think it would maybe also be worthwhile to mention the storage hook tools in the google doc so that at least the doc has mentions of the key features?
[06:46] wallyworld: seeing as it's the first release, I was trying to avoid having all the same information in two places. I can if you really want.
[06:47] axw: the issue is that the release notes are really quite comprehensive, so give the impression they cover "all the things" and so a reader might see them and not realise hook tools are there. i agree about 2 places. so perhaps the release notes need to be wound back?
[06:48] wallyworld: originally it was just "we've done storage, here's a link to the docs" - yes, I think we should go back to that
[06:48] ok, sounds good. i have to head to soccer, will be back later. can you liaise with anastasia if necessary to get her stuff landed?
[06:48] and get her to re-read the release notes and wip doc?
[06:49] wallyworld: no worries
[06:49] tyvm
[06:52] * anastasiamac cleans storage-add stuff
[07:19] ericsnow: ping :D
[08:59] well, that was interesting
[09:00] * fwereade just found a small lizard hiding in his dressing gown
[09:00] fwereade: I'm sure it's not that small
[09:00] don't be so hard on yourself
[09:01] haha
=== mgz is now known as mgz_
[09:39] fwereade, dimitern, voidspace: fancy a little review? (a couple of bug fixes to the juju/schema package) https://github.com/juju/schema/pull/6
[09:46] rogpeppe, LGTM
[09:46] dimitern: thanks!
[10:30] axw: in case u miss my pmsg - doc is LGTM :D loving your writing style!!
[11:51] morning
[12:03] anyone know if there's any documentation for the possible environment config attributes anywhere?
[12:04] perrito666: morning!
[12:04] dimitern, jam, fwereade, perrito666, mgz_, evilnickveitch: ^
[12:05] anastasiamac, perrito666: hiya
[12:05] rogpeppe: o/
[12:05] rogpeppe: sorry, dont know
[12:05] rogpeppe, maybe...
[12:06] do you mean this:
[12:06] https://jujucharms.com/docs/stable/config-general
[12:08] evilnickveitch: that looks good, thanks. except... some of those values don't seem to have any descriptions.
[12:09] evilnickveitch: it's a good starting point though, thanks
=== liam_ is now known as Guest92132
[12:09] rogpeppe, yes indeed, if you could fill them in as you go along, that would be a great help :)
[12:09] evilnickveitch: i'm just about to make a big table inside environs/config that describes them all :)
[12:10] hurrah!
[12:27] Hello Folks, how can i force remove a service stuck with a failed hook?
[12:51] nevermind
[13:06] voidspace, dimitern: https://plus.google.com/hangouts/_/canonical.com/maas-juju-net
[13:10] dooferlad, voidspace, it's over isn't it?
[13:10] dimitern: yea
[13:11] dimitern: I was the only Juju guy who turned up and didn't have much to add.
[13:11] dooferlad, too bad :)
[13:11] dooferlad, yeah, np
[13:40] perrito666: it ended up being faster to just write a dumb program than try to figure out how to install that bzr plugin: https://github.com/natefinch/bsmrt
[13:40] lol
[13:41] natefinch: as ocr, can you give the reviews for this bug precedence? https://bugs.launchpad.net/juju-core/+bug/1451626
[13:41] Bug #1451626: Erroneous Juju user data on Windows for Juju version 1.23 <1.23>
[13:42] natefinch: it's been sitting for a bit, and we would like to land it ASAP for 1.24
[13:42] katco: yep.
[13:42] natefinch: ty sir
[13:42] man, being able to hangout and build juju is so cool
[13:42] :p
[13:43] perrito666: what, did you upgrade to a supercomputer?
[13:43] lol
[13:44] I have a backpack cluster to run hangout
[13:50] the level of annoyance I have to go through to get local provider working despite not having juju local installed is.. well, annoying :p
[13:50] katco: lol, Gabriel (gsamfira) has been trying to get me to review that stuff for ages... he'll be glad it's officially my duty to do so now.
[13:51] natefinch: haha
[13:51] natefinch: sorry... you just happened to be ocr ;p
[13:51] katco: I've wanted to review it, but we've just been so tight on deadlines, I haven't felt like I could spare the time, so I'm glad to be able to now.
[14:04] evilnickveitch: FWIW, all these config attrs are available but don't seem to be documented: block-remove-object provisioner-safe-mode rsyslog-ca-key block-destroy-environment tools-metadata-url storage-default-block-source block-all-changes lxc-use-clone tools-stream allow-lxc-loop-mounts lxc-default-mtu
[14:05] ev: ah, tools-metadata-url is actually documented as deprecated
[14:06] rogpeppe, yup. thanks for the others though.
[14:07] rogpeppe, what is the difference between lxc-clone and lxc-use-clone?
[14:07] or are they the same?
[14:07] evilnickveitch: i've no idea :)
[14:08] hehehe
[14:08] evilnickveitch: i need to find out though
[14:11] Bug #1461111 was opened: Allow status-set/get to a service by its leader unit
[14:14] evilnickveitch: the same, lxc-use-clone is deprecated
[14:14] rogpeppe: ^^^
[14:14] wallyworld: is that documented anywhere?
[14:14] wallyworld, thanks!
[14:15] wallyworld: is lxc-clone itself documented, in fact?
[14:15] rogpeppe: not sure tbh
[14:15] rogpeppe: evilnickveitch: also, anastasiamac sent out notes on the block config
[14:15] wallyworld: i'm trying to gather info on all the config attributes to put into a table inside environs/config
[14:15] rogpeppe: it would have been in release notes but i sadly suspect we didn't do more than that
[14:16] wallyworld, cool, I will sync with her on updating the table
[14:16] ty
[14:18] rogpeppe, lxc-clone is documented, but as it is provider specific, it is on the lxc page
[14:18] https://jujucharms.com/docs/stable/config-LXC
[14:19] evilnickveitch: it's not really provider-specific, is it? all environments can have lxc containers
[14:19] evilnickveitch: rogpeppe: it used to be but not anymore
[14:19] it was changed in 1.20
[14:19] wallyworld: ok
[14:19] i think that's when use-clone was deprecated also
[14:20] * wallyworld is really going away now to sleep
[14:20] heheheh
[14:20] night night wallyworld
[14:20] ttyl
[14:21] wallyworld: thanks for the pointer
[14:25] rogpeppe, provisioner-safe-mode is deprecated too, as of 1.21.1
[14:25] evilnickveitch: ok, thanks
[14:26] evilnickveitch: presumably superseded by harvest-mode
[14:26] yes
[14:27] rogpeppe, tools-stream was replaced with agent-stream
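(Aside: a sketch — not juju's actual implementation — of how deprecated config keys like lxc-use-clone and tools-stream can be folded into their replacements during config processing, so the rest of the code only ever consults the new name.)

```go
package main

// deprecatedAliases maps old config keys to their straight renames.
// provisioner-safe-mode is deliberately absent: it was superseded by
// harvest-mode with different semantics, so it needs translation rather
// than a rename.
var deprecatedAliases = map[string]string{
	"lxc-use-clone": "lxc-clone",
	"tools-stream":  "agent-stream",
}

// normalizeAttrs copies attrs, filling in each new key from its
// deprecated alias when only the old spelling is present.
func normalizeAttrs(attrs map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(attrs))
	for k, v := range attrs {
		out[k] = v
	}
	for old, repl := range deprecatedAliases {
		if v, ok := out[old]; ok {
			if _, exists := out[repl]; !exists {
				out[repl] = v // the new name wins when both are set
			}
		}
	}
	return out
}
```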
=== brandon is now known as web
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[16:15] is anyone willing to run a test in 1.24 to confirm something?
[16:24] Bug #1461150 was opened: lxc provisioning fails on joyent
[16:28] I need to leave for a moment; can anyone check if, while in debug hooks, you can call any helper at all?
[16:28] for 1.24 that is
[16:29] I'm leaving for a while too, or I would
[16:29] I think something is seriously broken there
[16:32] mgz_: ha, looks like 445a79b25d7d7a95127ec36a1f4c41674718a98f changed a little more than you hoped it would
[16:32] mgz_: for instance, i just found a bug link to https://bugs.github.com/juju/juju/+bug/1224492
[16:32] Bug #1224492: environs/config: zero-valued port settings are allowed but ignored
[16:33] mgz_: which should be the link that mup pointed to...
[16:44] DAMMIT: ... Error: Account should have balance of 25000, got 24999
[16:44] I really thought I might have had it that time
[16:57] fwereade: trying to pull a Superman III?
[17:15] I am trying to reduce the memory of containers by setting lxc.cgroup.memory.limit_in_bytes in /var/lib/lxc/<container>/config, but it is throwing errors.
[17:16] How can i change the default memory for containers in juju?
[17:16] lxc_cgmanager - cgmanager.c:cgm_setup_limits:1250 - Error setting cgroup memory:lxc/juju-machine-6-lxc-3 limit type memory.memsw.limit_in_bytes
[17:22] question: I'm running 1.24-beta5.1 and i've found some weird behavior today. I don't have unit output as i tore the env down and stood it back up, wiping the logs in the process - but i've run into edge cases where it appears that juju did not upload is-leader as part of the toolchain
[17:23] is this known behavior that i've missed a bug on? I dont want to file a bug without additional info to support the claim other than lazy's gone skitzo
[17:23] katco: btw - i've put aside some time to review the status doc you sent over, will get you feedback before i EOD, ta for sending that over
[17:24] lazyPower, I'd still go ahead and report it
[17:25] fwereade: ack, will do. Sorry about the zero info bug in advance :|
[17:26] lazyPower, no worries, that points to a pretty specific area of the code, the inputs to which have been changing a bit lately
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[18:49] Is it possible to reduce the memory of a lxc container started by juju?
=== liam_ is now known as Guest52471
[19:09] Can i increase the memory of a lxc container started by juju?
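(Aside: the memory.memsw.limit_in_bytes error pasted above typically means the kernel's swap accounting is disabled — on Ubuntu it is off by default and can be enabled by booting with the swapaccount=1 kernel parameter. A hypothetical config fragment; the value shown is illustrative only.)

```
# /var/lib/lxc/<container>/config -- hypothetical fragment
# A plain memory cap; this does not touch memory.memsw.* and so does
# not require swap accounting:
lxc.cgroup.memory.limit_in_bytes = 512M
```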
=== kadams54 is now known as kadams54-away
[19:54] abentley: btw, I updated the merge proposal, and I believe I addressed all your concerns (for real this time).
[19:54] natefinch: Thanks. I'll have a look soon.
[19:54] abentley: thanks
[19:56] katco: can we chat about min version?
[20:00] marcoceppi: sure: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
[20:01] sinzui: Could you please review https://code.launchpad.net/~abentley/juju-reports/web-enqueue/+merge/260879 ?
[20:02] katco, marcoceppi: we should rename it to "Capability Flags" since that better reflects what we're really implementing (until someone decides we need a different color shed).
[20:06] yes abentley
[20:07] natefinch: You still have this wait_for_started that you haven't explained the need for or removed.
[20:07] abentley: oh, I thought I deleted that.. honestly, I was just copying what deploy_stack did when it was deploying charms.
[20:10] abentley: I can delete it, that's fine. Running a test with it gone right now, just for a sanity check.
[20:10] natefinch: I wish I had a perfect script to point you to. The problem with deploy_stack is that, as the oldest script, it is out-of-date in places.
[20:11] abentley: yeah, I didn't realize that when I was copying everything it did :/
[20:12] abentley: that's part of why I tell people to follow what assess_log_rotation.py does for new CI tests in the wiki page.... because at least that'll be less wrong.
[20:12] Bug #1461246 was opened: UpgradeSuite.TestUpgradeStepsHostMachine consistently fails on utopic
[20:13] abentley: test passes, removed and pushed
[20:13] natefinch: That seemed like a premature choice given that assess_log_rotation hadn't yet passed code review.
[20:15] abentley: but I knew it was being reviewed carefully, so once it was in, it should be a good example.
[20:15] abentley: and if not, then I knew someone could just go change the wiki page anyway ;)
[20:37] sinzui: Could you tell me what you think of the indentation changes in https://code.launchpad.net/~natefinch/juju-ci-tools/logrot/+merge/259750 ?
=== kadams54-away is now known as kadams54
[20:39] abentley, sinzui: for the record, everything was run through autopep8... apologies for the apparent spurious indentation changes.
[20:43] natefinch: The updated style is more my preference, but I believe the previous style is sinzui's preference and I've been writing to that.
=== natefinch is now known as natefinch-afk
[21:04] natefinch-afk: abentley I have pondered switching to autopep8. I disagree with the unpythonic closing brace that is NOT specified in PEP8.
[21:04] natefinch-afk: abentley The formatting is fine. I accept the change.
[21:06] abentley: natefinch-afk setting "ignore": "E123, E133" for pep8 and autopep8 removes the trailing brace from discussion
[21:07] * sinzui uses both to not take sides when he reviews/updates other people's code
[21:08] r=me abentley. I missed the submit button 30 minutes ago
[21:19] waigani: quick status update on the maas fix? should be done today?
[21:20] or this morning even :-) ?
[21:27] Bug #1460171 changed: Deployer fails because juju thinks it is upgrading <Released by sinzui>
[21:29] wallyworld: yes, in stand up - first thing after
[21:29] ty :-)
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
[21:46] wallyworld: what's the best way to merge the branch into gomaasapi?
[21:47] waigani: sorry, in meeting, will need to pull trunk and merge and repush; can you ask thumper
[21:47] wallyworld: right, thanks
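(Aside: the pull-merge-repush workflow wallyworld describes, spelled out with standard bzr commands. The local directory name and commit message are assumptions; the branch URLs come from the log above.)

```
# branch trunk, merge the feature branch, commit, and push back
bzr branch lp:gomaasapi gomaasapi-trunk
cd gomaasapi-trunk
bzr merge lp:~waigani/gomaasapi/faildeploy
bzr commit -m "Merge faildeploy: handle failed node deployment."
bzr push lp:gomaasapi
```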
[21:52] sinzui: i don't think this one will be fixed for 1.24 either: https://bugs.launchpad.net/bugs/1457225
[21:52] Bug #1457225: Upgrading from 1.20.9 to 1.23.3 works, but error: runner.go:219 exited "machiner": machine-0 failed to set status started: cannot set status of machine "0": not found or not alive
[21:53] natefinch-afk: did you see this: https://bugs.launchpad.net/juju-core/+bug/1461246
[21:53] Bug #1461246: UpgradeSuite.TestUpgradeStepsHostMachine consistently fails on utopic
[21:53] oh goody. wallyworld, I don't see an action we can take on it. Agreed.
[22:10] wallyworld: gomaasapi updated, dependencies.tsv updated. Tried to land on 1.24, but it's blocked. Shall I JFDI?
[22:10] waigani: yeah, i think so
[22:11] wallyworld: done
[22:11] tyvm
[22:30] if I'm running off 1.24-beta5-trusty-amd64 locally and juju bootstrap --upload-tools, the tools on my deployed machines should match, right?
[22:31] * whit is seeing tools == 1.24-alpha1.1-trusty-amd64
[22:31] * whit is wondering because is-leader is fairly consistently not present
[22:34] waigani: did you see the gomaasapi dep looks wrong because the build failed?
[22:35] wallyworld: on it
[22:40] wallyworld: I don't get it. TestDependenciesTsvFormat passes and the revision id is from here: http://bazaar.launchpad.net/~juju/gomaasapi/trunk/revision/62
[22:55] waigani: sorry, was in another meeting and about to start another but gotta do something first, can you ask thumper (it's not like he doesn't know bzr :-)
[23:00] wallyworld: found it. there was a space before the revision id but after the tab - the format test didn't pick that up.
[23:01] waigani: you should always run godeps locally before pushing
[23:02] wallyworld: lesson learnt. I'll update the test also.
[23:02] ty :-)
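(Aside: a sketch of the stricter check waigani intends — reject stray whitespace inside dependencies.tsv fields, the exact failure mode hit here. The four-field layout is assumed from godeps' format; the real TestDependenciesTsvFormat may differ.)

```go
package main

import (
	"fmt"
	"strings"
)

// checkDepsLine rejects a field that parses but carries hidden
// whitespace, e.g. a space between the tab and the revision id. The
// assumed layout is godeps' dependencies.tsv: project, vcs, revision
// id, revision number or timestamp.
func checkDepsLine(line string) error {
	fields := strings.Split(line, "\t")
	if len(fields) != 4 {
		return fmt.Errorf("expected 4 tab-separated fields, got %d", len(fields))
	}
	for i, f := range fields {
		if f == "" || f != strings.TrimSpace(f) {
			return fmt.Errorf("field %d is empty or has stray whitespace: %q", i, f)
		}
	}
	return nil
}
```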
[23:02] davechen1y: fwiw, go 1.4 is in wily now
[23:09] ericsnow: ping
[23:09] perrito666: hi
[23:09] and sinzui and team are working on 1.4.2 verification with juju
[23:09] ericsnow: heyhey
[23:10] ericsnow: do you have any extra info on https://bugs.launchpad.net/juju-core/+bug/1434437 besides what's on the bug?
[23:10] Bug #1434437: juju restore failed with "error: cannot update machines: machine update failed: ssh command failed: "
[23:10] perrito666: nope
[23:10] ericsnow: I am quite curious about the original bug, as that would indicate that at least one of the machines was not provisioned?
[23:11] I am not entirely sure a machine of juju can be missing a /var/lib/juju
[23:15] perrito666: yeah, not sure
[23:16] perrito666: could be that thing where juju uninstalls itself
[23:16] sinzui: any issue with juju failing to create lxc containers? http://paste.ubuntu.com/11530516/
[23:16] i've hit this in 2 separate labs, one with proxy and one without
[23:16] on both trusty and precise
[23:17] ericsnow: not sure, although the issue seems to have been resolved since then and replaced by another
[23:17] both using maas
[23:18] perrito666: standup?
[23:18] wallyworld: I am there
[23:30] mwhudson: right
[23:30] so what was all the hubbub about
[23:30] not sure
[23:30] * mwhudson is fighting monitors
[23:31] davechen1y, mostly my lack of understanding of the process, I apologize for the unneeded alarm
[23:32] alexisb: all good
[23:33] i don't care how it happens
[23:33] only that it happens
[23:33] sinzui: so the next step is to get the wily 1.4.4 package backported into the juju ppa
[23:33] how would you like to track that work?