[00:50] thumper: menn0: when u get a chance, where do we stand with this? https://bugs.launchpad.net/juju/+bug/1640535
[00:50] Bug #1640535: HA tests fail after the leader is deleted
[00:51] anastasiamac: it needs someone to work on it. I thought that sinzui was doing some initial investigation, but I could be wrong.
[00:53] menn0: and sinzui believes that Tim is :D or u since u've last commented on it
[00:53] menn0: what work is needed? how many days' effort will it take? who is best to do it?
[00:54] anastasiamac: I just commented b/c I had some initial thoughts and someone (thumper?) asked me to
[00:54] anastasiamac: hard to say. if it's a problem with mongodb then we could be screwed, but I'm guessing it's probably a juju issue.
[00:54] anastasiamac: 2 days?
[00:57] axw, I hear from perrito666 that you did the initial spec for the instance type api. Who helped from the gui side?
[00:58] balloons: it was based on reqs from uros
[00:58] axw, ahh brilliant. I'm trying to formulate a plan for who will write the test for the gui. Any plans to ever expose this via the cli on the juju side?
[00:59] balloons: there's no plan to do that for now
[00:59] balloons: it's more for interactive constraints selection
[01:02] axw, so what exactly is the result in the gui intended to do then? Will the user have to use the cli with a constraint?
[01:03] balloons: the question of trusty juju1/juju2 came up today... where r we with it?
[01:07] balloons: the idea is that in the GUI you'll be able to list instance types, and see what their characteristics are
[01:07] balloons: and potentially also costs
[01:08] balloons: and when you prepare a deployment, you'll be able to see which instance type would be chosen for the default or specified constraints
[01:09] balloons: e.g. the GUI will be able to show which instance type will be started if I deploy something with "cores=2 mem=4G"
[01:13] menn0, babbageclunk: for bug 1641824 is there a way the error message (or similar) is raised via a command, or is it just available in the logs?
[01:13] Bug #1641824: Migrating a model back to controller fails: model XXXX has been removed
[01:38] veebers: i'm afraid not. the problem occurs well after the migration has been kicked off, once the migrate command has returned
[01:41] menn0: how would one determine that the migration has failed?
[01:42] veebers: by looking at the output of "juju show-model"
[01:43] veebers: xtian recently landed the changes so that show-model will show the status of the current or previous (failed) migration
[01:46] menn0: sweet, I use the output of show-model for another test.
[01:46] menn0: has that made it out of develop into master?
[01:47] veebers: highly unlikely that it has, given that develop hasn't had a bless for some time
[01:47] menn0: ok cool, thanks :-)
[01:55] anastasiamac, I sent a mail on juju 1 -- need feedback on bryan's bug
[01:55] for juju2, waiting in proposed to release -- need feedback
[01:56] thumper: fyi ^^
[01:56] balloons: tyvm \o/ u r awesome!
=== axw_ is now known as axw
[02:21] thumper: this feature branch was created by Dave C - https://github.com/juju/juju/tree/api-call-retry
[02:21] thumper: do we still need it?
[02:22] curtis already asked, and I said we could kill it
[02:23] thumper: awesome, tyvm \o/
[03:42] menn0 wallyworld katco: it just occurred to me that we could just close the file on timeout, so it's not that hard
[03:43] menn0 wallyworld katco: (I think that works anyway, need to test..)
[03:43] +1 to try it
[04:31] Does maas support multiple regions right now?
[04:38] natefinch: i don't think so
[04:38] anastasiamac: that's what I thought.. I thought it was future work. Got a bug about it, wanted to double check.
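The two threads above both come down to CLI behaviour: constraints drive instance-type selection at deploy time, and show-model surfaces migration status. A minimal sketch of the commands being discussed (the charm name `mysql` is illustrative; which instance type gets picked depends on the cloud, and these commands need a bootstrapped controller to run):

```shell
# Deploy with explicit constraints; the provider selects an instance
# type that satisfies them, which is what the GUI would preview.
juju deploy mysql --constraints "cores=2 mem=4G"

# Inspect the model, including the status of a current or previous
# (possibly failed) migration, per the bug 1641824 discussion.
juju show-model
```

Both commands operate against the currently selected model.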
[04:39] natefinch: future work is always buggy :)
[04:41] anastasiamac: same as past work ;)
[04:41] natefinch: ah but present work is perfect :)
[04:43] anastasiamac: always :)
[04:43] unless someone is rushing you :)
[06:53] https://github.com/juju/juju/pull/6523 and https://github.com/juju/utils/pull/249, more thoughts?
[11:02] Bug #1642219 opened: [2.0.1] lxd containers fail to start on multiple physical interfaces when using MAAS 2.1
[12:38] macgreagoir: ping
[13:00] hmm, I have done a bad
[13:00] i landed a branch directly into juju master not develop
[13:00] anyone know what I should do now?
[13:01] (it was a 2-year-old branch that I was reminded of, and forgot to retarget it)
[13:01] ahh. no, it's OK it wasn't juju!
[13:05] rogpeppe: it was juju/cmd right?
[13:05] mgz: yeah
[13:06] mgz: i had a nasty feeling for a moment
[13:06] still did a bad, because that is bot-managed
[13:06] but less of a difficult one
[13:06] oh, you did $$merge$$
[13:06] so it's fine
[13:07] no bads were had
[13:07] mgz: hey, hi o/
[13:07] voidspace: heya, internet funs this morning...
[13:07] mgz: what's the best way to contact "the juju QA team" by email? is there a list?
[13:07] mgz: ah, wondered why you weren't around
[13:07] internet fun is not fun...
[13:07] I recommend juju-qa@lists.canonical.com
[13:08] mgz: cool, thanks :-)
[13:08] though juju-dev is also fine for wider questions
[13:12] hey wallyworld, I was just replying to your mail. This is the bug I wanted to highlight: https://bugs.launchpad.net/juju/+bug/1639291
[13:12] Bug #1639291: upgrade-juju fails after bootstrapping with agent-version
[13:13] let me take a peek at the bug
[13:13] mgz: emailed and hit the moderator queue - is the list actively moderated or will it sit there forever?
[13:14] wallyworld, your explanation could make sense. The last agent in the devel stream is rc3
[13:14] balloons: so the 2.0.1 bug has been fixed for 2.0.2. the 2.0.0 behaviour depends on OLD_AGENT_VERSION
[13:14] if that was rc3 then my comment holds true
[13:15] voidspace: it's moderated but I have the keys
[13:15] balloons: we can fix for just 2.1 i suspect - not sure how many rc3 deployments there still are out there
[13:15] mgz: cool, it's because I'm not subscribed - but I don't want to subscribe I'm afraid :-)
[13:16] much as I love you all
[13:19] voidspace: you are now allowed but not subscribed
[13:20] wallyworld, you should be able to repro following those steps from scratch. It takes just a few mins to bootstrap and try the upgrade if you are curious as to what exactly is happening
[13:20] i have a pretty good idea
[13:20] wallyworld, there was also the distro-info issue, but I'm not sure if that has been addressed completely or not. I've seen bugs on it
[13:20] if the agent version is tagged
[13:21] the distro-info issue has been fixed for 2.0.2
[13:21] the workaround for the bug is to set agent-stream to "released"
[13:21] in model config
[14:36] rick_h: crap, lost you
[14:47] anyone else having problems with github?
[14:49] looks like they are having issues: https://status.github.com/
[15:00] perrito666: ping, got a sec to do me a favor and verify this is good in develop please? https://bugs.launchpad.net/juju/+bug/1642236
[15:00] Bug #1642236: MAAS 2 Storage Problem
[15:04] rick_h, perrito666 has stepped out for a bit
[15:05] alexisb: k, ty
[15:06] rick_h: what do you mean by "good in develop"?
[15:06] voidspace: the develop branch
[15:07] rick_h: do you mean, check if it's not a bug in the develop branch?
[15:07] rick_h: I'm assuming it's still a bug... working on a repro
[15:07] voidspace: yes, if using a build from develop, is the spamming gone there
[15:07] rick_h: that bug you linked to is not the spamming bug
[15:08] rick_h: do you mean the vsphere space spam bug?
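The workaround mentioned above for bug 1639291 (keeping upgrade-juju away from devel-stream rc agents) is a model-config setting. A sketch, assuming a Juju 2.x CLI and a bootstrapped controller:

```shell
# Point the model at the released agent stream instead of devel,
# so upgrade-juju will only consider released agent binaries.
juju model-config agent-stream=released

# Confirm the setting took effect.
juju model-config agent-stream
```

The setting applies per model; other models on the same controller are unaffected.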
[15:08] voidspace: oh sorry, perrito666 https://bugs.launchpad.net/juju/+bug/1642031 :)
[15:08] Bug #1642031: show-status-log noise reduction buggy
[15:08] cool
[15:20] rick_h: sorry I stepped out involuntarily
[15:20] back into decent internet now
[15:21] rick_h: ah I just returned and was trying to make sense of the backlog
[15:22] rick_h: that is not yet in develop but I can merge it now if you would like
[15:23] perrito666: wrong one sec
[15:23] perrito666: https://bugs.launchpad.net/juju/+bug/1638401
[15:23] Bug #1638401: vsphere: spaces spams the logs with an error
[15:24] third time is a charm, darn me trying to bug through stuff while on calls
[15:24] mgz: can you sync with alexisb on the status of the openstack reviews please?
[15:25] rick_h: is that the right one? :p
[15:25] perrito666: it's the "vsphere is too chatty make it stop" one
[15:26] perrito666: let me know if it parses that way to you too :P
[15:26] ah ok, well I am not sure I still have access to the vsphere but I can try
[15:26] perrito666: actually, that's right. I asked you since you filed it.
[15:26] perrito666: so it was a "is perrito666 happy now" checkpoint
[15:26] perrito666: ty
[15:26] yes that seems to be the link you intended to pass to me
[15:26] but first, lets try the shortcut :)
[15:26] * perrito666 looks for larrymi
[15:26] sorry all, have to drop kiddo off, will catch up with folks when I get back
[15:31] rick_h: sure thing
[15:31] I'll nab her when she's back
[15:37] rick_h: larrymi will deploy a develop juju and confirm this in the afternoon, is that ok? (it's pretty much the same time frame I can give you, minus the setup time)
[16:49] voidspace: do we support multiple maas regions, do you know?
[16:49] alexisb: yo, you wanted to be caught up on neutron things?
[16:49] yes, one sec in a meeting but I would like to talk with you
[16:51] alexisb: poke me when you're free
=== frankban is now known as frankban|afk
[17:47] natefinch: yes
[17:50] voidspace: neat, ok, thanks.
[17:51] natefinch: know anything about storage in general and maas storage in particular?
[17:51] * natefinch runs away scared
[17:51] not even a little
[17:52] voidspace: the list of people that know about storage is as follows: axw
[17:52] perrito666: :-)
[17:52] perrito666: I want to know what a request for storage converts to for maas - I assume it is just a constraint for node selection
[17:53] is it?
[17:56] perrito666: maas only manages machines, so what else *can* it be
[17:56] perrito666: maas doesn't provide any storage that is separate from a node
[17:56] perrito666: my understanding
[17:57] voidspace: really? I thought there was a way to manage extra storage
[17:57] voidspace: the storage code is quite straightforward, I can look for you if you want, I have been near storage for some providers
[17:58] perrito666: how do you attach storage to maas that isn't part of a machine? there's nothing in the ui for it either.
[17:58] perrito666: that would be cool
[17:58] perrito666: thanks
[18:03] voidspace: yes, it translates to storage constraints, relevant code is in provider/maas/environ.go:908
[18:03] perrito666: thanks
[18:19] perrito666: rick_h: so when I have a maas node with additional storage I can deploy prometheus fine
[18:19] voidspace: yay
[18:19] perrito666: rick_h: the specific bug report is that after deployment the juju agent doesn't start, I'll see if I can repro that in a minute
[18:19] rick_h: hang on...
[18:20] rick_h: all I mean is that the initial constraint appears to be fine and the machine is selected appropriately... :-)
[18:20] I'll find out if I can repro the bug or not shortly...
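The storage request being debugged above is expressed at deploy time. A minimal sketch, assuming a Juju 2.x CLI (the `database` storage label and 10G size are illustrative; the actual label depends on the charm's metadata, and with the MAAS provider the request becomes a node-selection constraint rather than separately attached storage):

```shell
# Request 10 GiB for the charm's (hypothetical) "database" storage entry;
# MAAS must have a node whose disks can satisfy this.
juju deploy prometheus --storage database=10G

# Check what storage was actually allocated and attached.
juju storage
juju status
```

If no node satisfies the storage constraint, the deployment stays pending, which is the status-surfacing gap voidspace mentions filing separately.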
[18:37] rick_h: seems to be stuck in "agent initializing", so it looks like I can repro
[18:38] rick_h: nothing unusual in the logs, I'll repeat with trace logging and start forensics
[18:38] rick_h: being able to repro is good news
[18:38] rick_h: and I'll file a separate issue about the constraint failure when storage isn't available not being surfaced to juju status
[18:42] redir, ping
[18:46] voidspace: awesome, ty for working through it.
[19:31] rick_h: I have a fix for that resources bug. turns out it was a change in some middleware in the charmstore that removed content-length. Nothing to do with the manual provider. I have a fix, it's very simple, just need to tweak the tests, it'll be a very short PR tonight, but gotta run for right now.
[19:32] rick_h: oh yeah, the fix is client-side. We were relying on the content-length being there, but don't actually need it, since we check the hash of the data anyway.
[19:34] natefinch: ah interesting, thanks for the heads up on the breakthrough
=== alexisb is now known as alexisb-afk
[20:01] alexisb-afk: pong sorry missed the ping
=== alexisb-afk is now known as alexisb
[20:29] redir, no worries
[20:29] just wanted to check in
[20:29] hi
[20:30] let me know if you can join a HO
[20:30] redir, ^^
[20:31] alexisb: give me about 10min?
[20:31] sure just ping me when you are available
[20:42] alexisb: ping
[20:42] meet you in our 1x1 HO
[20:43] k
[21:00] Bug #1642385 opened: Juju 2.0.1 with LXD on localhost "provisioning error" "image not imported!"
[22:17] rick_h: yt?
[22:17] redir: party
[22:18] you mean you're at a party?
[22:18] rick_h: you reboot guimaas?
[22:19] redir: I did a bunch of nodes, but not 7/8 or the CI ones
[22:19] redir: and the model running the juju-gui QA stuff blew up as it ran out of disk space
[22:19] redir: so they're looking to rebuild it there
[22:19] :( 7&8 are powered off
[22:19] oh hmm, maybe folks messed with them as they worked on the issue today?
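natefinch's client-side fix above amounts to verifying the downloaded resource by its hash instead of trusting the server's Content-Length header. A self-contained sketch of that idea in shell (the helper name and the choice of sha256 are illustrative; Juju's actual resource-fetching code is Go and may use a different digest):

```shell
#!/bin/sh
# Verify a downloaded file against an expected fingerprint, without
# relying on the server having sent a Content-Length header.
verify_resource() {
    file=$1
    expected=$2
    # sha256sum -c reads "HASH  FILENAME" lines and exits non-zero
    # on mismatch; --quiet suppresses the per-file OK message.
    echo "${expected}  ${file}" | sha256sum -c --quiet -
}

# Demo: write a known payload, compute its hash, and verify it.
printf 'resource payload' > /tmp/resource.bin
expected=$(sha256sum /tmp/resource.bin | awk '{print $1}')
if verify_resource /tmp/resource.bin "$expected"; then
    echo "resource OK"
fi
```

Because the check is on the content itself, a truncated or corrupted download fails verification even when no length header was available.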
[22:20] sorry, when their QA stuff blew up who knows
[22:20] * redir is sad
[22:22] redir: sorry, did you have a running experiment or the like?
[22:23] yeah
[22:23] no big
[22:23] I'll just rebuild
[23:44] axw: ping?
[23:44] perrito666: pong
[23:45] hey, seems life is bitter and sucks and we need to discuss it, care to hangout?
[23:45] perrito666: sure
[23:46] axw: does the standup bluejeans work for you?
[23:46] perrito666: okey dokey, see you there
[23:46] wallyworld, ping
[23:47] yo
[23:47] axw: your image is frozen and you sound like an old radio
[23:48] wallyworld, do you want to meet before standup?
[23:48] if so I can meet you in our 1x1 ho
[23:48] if you want to, unless you want a break
[23:48] nope I am good
[23:48] ok, see you there
[23:49] meh, I think I remotely killed andrew
[23:49] perrito666: go straight to jail, do not pass go \o/
[23:50] perrito666: computers :|
[23:50] perrito666: let's try hangout
[23:50] axw: that is why I dont use computers for these things
[23:50] ill hangout-call you
[23:51] axw: I am calling you so some of your devices will ring