[00:24] perrito666: https://bugs.launchpad.net/juju/+bug/1656205
[00:24] Bug #1656205: Check for lxd ipv6 is failing for various cases
[00:27] wallyworld: ack
[00:39] menn0: free when you are
[00:47] wallyworld: i'm back. pick a HO
[00:48] menn0: standup works
[01:18] anastasiamac_: free to talk bugs now
[01:27] axw: m starving :( can i ping u in 30mins or so?
[01:27] anastasiamac_: np, helping out IS now anyway
[01:28] \o/
[01:41] * redir goes eod
[01:43] to answer my question above: it deploys charm1 with the alias charm2. That is only confusing when you think it is deploying 2 charms. The second pos arg is an alias...
[01:44] after a brief chat with wallyworld I created #1660872 so that the output might be more informative, less misleading.
[01:44] Bug #1660872: juju deploy charm alias should provide more detailed output
[02:37] Bug #1592887 changed: juju destroy-service deletes openstack volumes
[03:17] perrito666: as a developer of an alternative VCS that didn't encourage squashing as standard practice, I'm not a big fan. I have squashed here and there, but it depends what squashes you were asking for
[03:17] perrito666: if you squash merging 2.1=>develop you've negated the whole benefit of the merge, as the *point* is to bring across the individual commits so you know they're in both
[04:28] wallyworld: axw: do u recall why ModelInfo only returns info if user has write access?
[04:28] hmm, not sure, didn't know it did
[04:29] seems like read access should be sufficient unless i'm missing something
[04:29] wallyworld: looking at bug 1660745
[04:29] Bug #1660745: write privileges should not be required to see model machine info
[04:29] wallyworld: i agree.
[04:29] was just wondering in case i've missed something
[04:29] well we both have if that's the case :-)
[04:30] that never happens \o/
[05:02] axw: only if you have time, a look at this PR would be great https://github.com/juju/juju/pull/6867
[05:03] i have other work to go on with as well
[05:03] wallyworld: ok, in a little while
[05:03] sure, no rush at all
[05:03] do your other stuff first
[06:59] Bug #1660542 changed: container mac addresses should use 'locally assigned' section
[08:02] Bug #1660907 opened: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster
[08:05] Bug #1660907 changed: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster
[08:08] Bug #1660907 opened: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster
=== frankban|afk is now known as frankban
[09:59] Morning
[10:23] Bug #1660907 changed: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster
[11:41] small patch up for review: https://github.com/juju/juju/pull/6896
[11:52] jam: I'll review it (not like you have other choices of reviewer at this time :p )
[11:53] thanks perrito666
[11:59] jam: lgtm with a small comment
[11:59] I did not do the actual deploying, I'm trusting you on this one
=== mskalka|afk is now known as mskalka
=== admcleod_ is now known as admcleod
[15:53] jam: ping
[15:53] jam: I have an hour before I EOD, anything I can help with?
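The ModelInfo access question above (bug #1660745) boils down to ordering permission levels and gating on the wrong one. A minimal sketch of the behaviour wallyworld and anastasiamac_ agree on — read access should be enough to see model machine info — using hypothetical type and function names, not Juju's actual API code:

```go
package main

import "fmt"

// Access models a user's permission level on a model, ordered from
// least to most privileged. These names are illustrative only.
type Access int

const (
	NoAccess Access = iota
	ReadAccess
	WriteAccess
	AdminAccess
)

// canSeeModelInfo gates model info on read access rather than write,
// per the fix proposed in bug #1660745.
func canSeeModelInfo(a Access) bool {
	return a >= ReadAccess
}

func main() {
	fmt.Println(canSeeModelInfo(ReadAccess)) // read-only users can see info
	fmt.Println(canSeeModelInfo(NoAccess))   // users with no access cannot
}
```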
[15:54] frobware: actually there is
[15:54] frobware: https://github.com/juju/juju/pull/6898
[15:54] should be enough to have containers work on AWS
[15:55] I still want to do more work there, to remove the code in "worker/provisioner" that treats any error as falling back to lxdbr0
[15:55] but between that pull and one that just landed
[15:55] I think I have lxd containers working again on AWS
[15:56] frobware: perrito666: reviews of it are welcome, as that should unblock my parts for 2.1b5, and I can focus more on polish/other bug fixing
[15:56] I'm going to try and trigger a CI run by landing it into 2.1-dynamic-bridges
[16:13] jam: ill try to look at it
[16:21] jam: ping
[16:25] frobware: pong
[16:25] jam: heh, nm... Might not have been running the right binary. testing again...
[16:26] frobware: so *just* that branch needs to be merged with 2.1
[16:26] I had another patch that was in merge review and pending landing
[16:26] and I didn't want to merge it into the other branch
[16:26] (always-observe is the extra bit)
[16:30] frobware: the code also all landed in upstream/2.1-dynamic-bridging if you want to try from there
[16:31] jam: I was trying your branch: 2.1-containerizer-config-1657449
[16:31] jam: my wrong binary was... I hadn't sourced my `require juju-dev` bash scripts which ensure $HOME/go/bin is on my PATH first.
[16:32] frobware: ah sure, that particular branch needs to merge 2.1, I'll do it now
[16:32] done
[16:35] jam: that branch seemed to work OK
[16:35] jam: for AWS
[16:35] * frobware needs to find some power; back in a minute.
[17:10] jam: love this on azure - dns-search vlmw5tgdmhlejaynio3hmmlt4b.gx.internal.cloudapp.net
[17:10] jam: presumably the first part is hungarian notation :p
[17:19] jam: reviewed, QA'd and approved. I did not try MAAS, but I did try Azure, GCE and AWS.
[17:19] jam: I'm unlikely to be around much tomorrow.
[17:22] frobware: thanks for the help and the heads up
[17:22] have a good day
[17:26] forgot to say...
[17:26] morning juju-dev
[17:27] afternoon redir
[17:27] yay relativity
=== frankban is now known as frankban|afk
=== mskalka is now known as mskalka|afk
=== mskalka|afk is now known as mskalka
=== mskalka is now known as mskalka|afk
[22:01] EOD all, see you
[22:07] perrito666: ciao
[22:13] veebers, anastasiamac: I'm fairly sure that the migration credentials failure is due to a change axw merged yesterday (fff2d6ff710ead222741eb61b7fbee3510eab769)
[22:13] confirming now
[22:28] thnx, menn0
[22:42] ugh, I'm having real trouble testing my change for the ec2 provider.
[22:57] babbageclunk: anyway I can help?
[22:57] what's the difficulty?
[23:00] redir: no I think I just need to power through it - just hard to extract the information I want to check to see if it's done the right thing. I think I've broken the back of it now.
[23:00] redir: thanks though!
[23:01] phew
[23:18] http://www.businessinsider.com/gitlab-outage-due-to-human-error-2017-2
[23:22] menn0: oy, sorry. I'm here now
[23:26] axw: so fff2d6ff710ead222741eb61b7fbee3510eab769 has broken migrations on LXD
[23:26] menn0: got a CI failure handy that I can look at?
[23:27] axw: I'll get one for you but I can also tell you what's failing
[23:27] axw: in state/migration_import.go, the Import method does a credentials check
[23:27] axw: and fails the migration if they don't match
[23:28] axw: it's the comparison of existingCreds.Attributes() to creds.Attributes() that's failing
[23:28] axw: the client-cert and client-key don't match
[23:29] axw: i'm not sure if the migration check is invalid and we've been getting away with it, or if something's not right with the LXD creds change
[23:30] menn0: I'll need to see the steps taken to know why they might be different
[23:31] menn0: each bootstrap creates new credentials, so that might explain it?
[23:31] axw: I can repro locally
[23:31] axw: that's probably it
[23:31] axw: i'll have to figure out why tim added that check
[23:31] menn0: ok.
[23:31] menn0: I have an idea about caching the creds on disk for lxd, I can do that if it helps
[23:32] menn0: i.e. so we get the same creds each time, so long as the on-disk creds remain the same
[23:33] axw: were you thinking of doing that anyway?
[23:34] menn0: yeah, was planning to do it later but I can look at doing it now
[23:34] axw: another approach would be to not include cloud creds in the model description for LXD models
[23:35] axw: it looks like the model migration logic allows for that on the import side
[23:35] menn0: would that make it more difficult to support remote-LXD?
[23:35] because we're pretty close to being able to support that now I think
[23:42] axw: yeah I guess that could be an issue. quick chat after standup?
[23:43] menn0: I will have to drop my daughter off at school, I'll ping you as soon as I get back
[23:43] axw: sounds good
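The failing check menn0 describes is, in essence, a map comparison over credential attributes. A minimal sketch of that kind of check (not the actual `state/migration_import.go` code; `credentialsMatch` is a hypothetical name) shows why locally-bootstrapped LXD trips it: client-cert and client-key are freshly generated per bootstrap, so source and target controllers disagree on exactly those keys:

```go
package main

import "fmt"

// credentialsMatch reports whether two credential attribute maps are
// identical. Illustrative only: the real import check compares
// existingCreds.Attributes() against the incoming creds.Attributes().
func credentialsMatch(existing, incoming map[string]string) bool {
	if len(existing) != len(incoming) {
		return false
	}
	for k, v := range existing {
		if got, ok := incoming[k]; !ok || got != v {
			return false
		}
	}
	return true
}

func main() {
	// Each local LXD bootstrap generates a new client cert/key pair,
	// so the two sides of a migration see different values here.
	source := map[string]string{"client-cert": "CERT-A", "client-key": "KEY-A"}
	target := map[string]string{"client-cert": "CERT-B", "client-key": "KEY-B"}
	fmt.Println(credentialsMatch(source, target)) // differing certs fail the check
}
```

axw's proposed fix — caching the generated cert/key on disk and reusing it — would make both maps equal across bootstraps on the same machine, so a comparison like this would pass.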