[00:24] <wallyworld> perrito666: https://bugs.launchpad.net/juju/+bug/1656205
[00:24] <mup> Bug #1656205: Check for lxd ipv6 is failing for various cases <juju:Triaged by jameinel> <https://launchpad.net/bugs/1656205>
[00:27] <perrito666> wallyworld: ack
[00:39] <wallyworld> menn0: free when you are
[00:47] <menn0> wallyworld: i'm back. pick a HO
[00:48] <wallyworld> menn0: standup works
[01:18] <axw> anastasiamac_: free to talk bugs now
[01:27] <anastasiamac_> axw: m starving :( can i ping u in 30mins or so?
[01:27] <axw> anastasiamac_: np, helping out IS now anyway
[01:28] <anastasiamac_> \o/
[01:41]  * redir goes eod
[01:43] <redir> to answer my question above: it deploys charm1 under the alias charm2. That is only confusing if you think it is deploying 2 charms. The second positional arg is an alias...
[01:44] <redir> after a brief chat with wallyworld I created #1660872 so that the output might be more informative and less misleading.
[01:44] <mup> Bug #1660872: juju deploy charm alias should provide more detailed output  <juju:New> <https://launchpad.net/bugs/1660872>
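As a minimal sketch of the semantics redir describes (names here are illustrative, not juju's actual CLI code), the optional second positional argument selects the application name, not a second charm:

```go
package deploy

import "fmt"

// parseDeployArgs sketches how "juju deploy <charm> [<application>]"
// treats its positional arguments: the optional second argument is an
// alias (application name) for the single charm being deployed.
func parseDeployArgs(args []string) (charm, app string, err error) {
	switch len(args) {
	case 1:
		return args[0], args[0], nil // application name defaults to the charm name
	case 2:
		return args[0], args[1], nil // second arg is an alias, not a second charm
	default:
		return "", "", fmt.Errorf("expected <charm> [<application>], got %d arguments", len(args))
	}
}
```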
[02:37] <mup> Bug #1592887 changed: juju destroy-service deletes openstack volumes <canonical-is> <landscape> <uosci> <juju:In Progress by axwalk> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1592887>
[03:17] <jam> perrito666: as a developer of an alternative VCS that didn't encourage squashing as standard practice, I'm not a big fan. I have squashed here and there, but it depends on what squashes you were asking for
[03:17] <jam> perrito666: if you squash when merging 2.1=>develop, you've negated the whole benefit of the merge, as the *point* is to bring across the individual commits so you know they're in both branches
[04:28] <anastasiamac_> wallyworld: axw: do u recall why ModelInfo only returns info if the user has write access?
[04:28] <wallyworld> hmm, not sure, didn't know it did
[04:29] <wallyworld> seems like read access should be sufficient unless i'm missing something
[04:29] <anastasiamac_> wallyworld: looking at bug 1660745
[04:29] <mup> Bug #1660745: write privileges should not be required to see model machine info <juju:New> <https://launchpad.net/bugs/1660745>
[04:29] <anastasiamac_> wallyworld: i agree. was just wondering in case i've missed something
[04:29] <wallyworld> well we both have if that's the case :-)
[04:30] <anastasiamac_> that never happens \o/
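A hedged sketch of the fix being discussed for bug 1660745 (the types and names are assumptions, not the juju apiserver code): the gate on model info should accept read access rather than require write access:

```go
package modelinfo

import "errors"

// Access levels, ordered from weakest to strongest; illustrative only.
type Access int

const (
	NoAccess Access = iota
	ReadAccess
	WriteAccess
	AdminAccess
)

// canSeeModelInfo shows the intended check: read access suffices to view
// model and machine info, instead of the write access currently required.
func canSeeModelInfo(userAccess Access) error {
	if userAccess < ReadAccess {
		return errors.New("permission denied")
	}
	return nil
}
```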
[05:02] <wallyworld> axw: only if you have time, a look at this PR would be great https://github.com/juju/juju/pull/6867
[05:03] <wallyworld> i have other work to go on with as well
[05:03] <axw> wallyworld: ok, in a little while
[05:03] <wallyworld> sure, no rush at all
[05:03] <wallyworld> do your other stuff first
[06:59] <mup> Bug #1660542 changed: container mac addresses should use 'locally assigned' section <lxd> <mac-address> <network> <juju:Triaged> <lxd (Ubuntu):New> <https://launchpad.net/bugs/1660542>
[08:02] <mup> Bug #1660907 opened: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster <juju-core:New> <https://launchpad.net/bugs/1660907>
[08:05] <mup> Bug #1660907 changed: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster <juju-core:New> <https://launchpad.net/bugs/1660907>
[09:59] <perrito666> Morning
[10:23] <mup> Bug #1660907 changed: jujud and mongod consume 800% CPU, lots of RAM after restart in controller cluster <juju:Incomplete> <https://launchpad.net/bugs/1660907>
[11:41] <jam> small patch up for review: https://github.com/juju/juju/pull/6896
[11:52] <perrito666> jam: I'll review it (not like you have other choices of reviewer at this time :p )
[11:53] <jam> thanks perrito666
[11:59] <perrito666> jam: lgtm with a small comment
[11:59] <perrito666> I did not do the actual deploying, I'm trusting you on this one
[15:53] <frobware> jam: ping
[15:53] <frobware> jam: I have an hour before I EOD, anything I can help with?
[15:54] <jam> frobware: actually there is
[15:54] <jam> frobware: https://github.com/juju/juju/pull/6898
[15:54] <jam> should be enough to have containers work on AWS
[15:55] <jam> I still want to do more work there, to remove the code in "worker/provisioner" that treats any error as falling back to lxdbr0
[15:55] <jam> but between that pull and one that just landed
[15:55] <jam> I think I have lxd containers working again on AWS
[15:56] <jam> frobware: perrito666: reviews of it are welcome, as that should unblock my parts for 2.1b5, and I can focus more on polish/other bug fixing
[15:56] <jam> I'm going to try and trigger a CI run by landing it into 2.1-dynamic-bridges
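An illustrative sketch (assumed names, not the actual worker/provisioner code) of the error handling jam wants to remove above, and the narrower fallback that would replace it:

```go
package provisioner

import "errors"

// errNotSupported stands in for the specific "host can't bridge this
// container" condition that should be the only trigger for the fallback.
var errNotSupported = errors.New("container networking not supported")

// bridgeForContainer picks the bridge device for a new container. The
// behaviour being removed treats any error from query as a cue to fall
// back to lxdbr0; the safer version falls back only on the known case
// and propagates everything else.
func bridgeForContainer(query func() (string, error)) (string, error) {
	bridge, err := query()
	if err == nil {
		return bridge, nil
	}
	if err == errNotSupported {
		return "lxdbr0", nil // deliberate fallback to the default LXD bridge
	}
	return "", err // don't mask transient or real failures behind lxdbr0
}
```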
[16:13] <perrito666> jam: I'll try to look at it
[16:21] <frobware> jam: ping
[16:25] <jam> frobware: pong
[16:25] <frobware> jam: heh, nm... Might not have been running the right binary. testing again...
[16:26] <jam> frobware: so *just* that branch needs to be merged with 2.1
[16:26] <jam> I had another patch that was in merge review and pending landing
[16:26] <jam> and I didn't want to merge it into the other branch
[16:26] <jam> (always-observe is the extra bit)
[16:30] <jam> frobware: the code also all landed in upstream/2.1-dynamic-bridging if you want to try from there
[16:31] <frobware> jam: I was trying your branch: 2.1-containerizer-config-1657449
[16:31] <frobware> jam: my wrong binary was because I hadn't sourced my `require juju-dev` bash scripts, which ensure $HOME/go/bin is first on my PATH.
[16:32] <jam> frobware: ah sure, that particular branch needs to merge 2.1, I'll do it now
[16:32] <jam> done
[16:35] <frobware> jam: that branch seemed to work OK
[16:35] <frobware> jam: for AWS
[16:35]  * frobware needs to find some power; back in a minute.
[17:10] <frobware> jam: love this on azure - dns-search vlmw5tgdmhlejaynio3hmmlt4b.gx.internal.cloudapp.net
[17:10] <frobware> jam: presumably the first part is hungarian notation :p
[17:19] <frobware> jam: reviewed, QA'd and approved. I did not try MAAS, but I did try Azure, GCE and AWS.
[17:19] <frobware> jam: I'm unlikely to be around much tomorrow.
[17:22] <jam> frobware: thanks for the help and the heads up
[17:22] <jam> have a good day
[17:26] <redir> forgot to say... morning juju-dev
[17:27] <rick_h> afternoon redir
[17:27] <redir> yay relativity
[22:01] <perrito666> EOD all, see you
[22:07] <menn0> perrito666: ciao
[22:13] <menn0> veebers, anastasiamac: I'm fairly sure that the migration credentials failure is due to a change axw merged yesterday (fff2d6ff710ead222741eb61b7fbee3510eab769)
[22:13] <menn0> confirming now
[22:28] <anastasiamac> thnx, menn0
[22:42] <babbageclunk> ugh, I'm having real trouble testing my change for the ec2 provider.
[22:57] <redir> babbageclunk: any way I can help?
[22:57] <redir> what's the difficulty?
[23:00] <babbageclunk> redir: no, I think I just need to power through it - it's just hard to extract the information I need to check whether it's done the right thing. I think I've broken the back of it now.
[23:00] <babbageclunk> redir: thanks though!
[23:01] <redir> phew
[23:18] <redir> http://www.businessinsider.com/gitlab-outage-due-to-human-error-2017-2
[23:22] <axw> menn0: oy, sorry. I'm here now
[23:26] <menn0> axw: so fff2d6ff710ead222741eb61b7fbee3510eab769 has broken migrations on LXD
[23:26] <axw> menn0: got a CI failure handy that I can look at?
[23:27] <menn0> axw: I'll get one for you but I can also tell you what's failing
[23:27] <menn0> axw: in state/migration_import.go, the Import method does a credentials check
[23:27] <menn0> axw: and fails the migration if they don't match
[23:28] <menn0> axw: it's the comparison of existingCreds.Attributes() to creds.Attributes() that's failing
[23:28] <menn0> axw: the client-cert and client-key don't match
[23:29] <menn0> axw: i'm not sure if the migration check is invalid and we've been getting away with it, or if something's not right with the LXD creds change
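In outline, the check menn0 describes looks something like this (a sketch, not the actual state/migration_import.go code): the credential attributes carried in the model description are compared against the credential already held by the target controller, and any mismatch, such as a freshly generated client-cert/client-key pair, fails the import:

```go
package migration

import (
	"errors"
	"reflect"
)

// checkCredentialMatch sketches the import-time comparison: the credential
// in the migrated model's description must have the same attributes (for
// LXD, client-cert and client-key among them) as the one the target
// controller already holds.
func checkCredentialMatch(existing, incoming map[string]string) error {
	if !reflect.DeepEqual(existing, incoming) {
		return errors.New("cloud credential attributes do not match")
	}
	return nil
}
```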
[23:30] <axw> menn0: I'll need to see the steps taken to know why they might be different
[23:31] <axw> menn0: each bootstrap creates new credentials, so that might explain it?
[23:31] <menn0> axw: I can repro locally
[23:31] <menn0> axw: that's probably it
[23:31] <menn0> axw: i'll have to figure out why tim added that check
[23:31] <axw> menn0: ok. I have an idea about caching the creds on disk for lxd, I can do that if it helps
[23:32] <axw> menn0: i.e. so we get the same creds each time, so long as the on-disk creds remain the same
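A minimal sketch of that idea, with assumed paths and helpers (not juju's implementation): reuse the on-disk pair when it exists, otherwise generate and cache one, so repeated bootstraps present the same credential:

```go
package lxdcreds

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// loadOrCreateClientCert returns a cached client cert/key pair from dir,
// generating and caching a new pair only when none exists. As long as the
// on-disk files are untouched, every bootstrap sees the same credentials.
func loadOrCreateClientCert(dir string, generate func() (cert, key []byte, err error)) ([]byte, []byte, error) {
	certPath := filepath.Join(dir, "client.crt")
	keyPath := filepath.Join(dir, "client.key")
	cert, certErr := ioutil.ReadFile(certPath)
	key, keyErr := ioutil.ReadFile(keyPath)
	if certErr == nil && keyErr == nil {
		return cert, key, nil // cached pair: credentials stay stable
	}
	cert, key, err := generate()
	if err != nil {
		return nil, nil, err
	}
	if err := os.MkdirAll(dir, 0700); err != nil {
		return nil, nil, err
	}
	if err := ioutil.WriteFile(certPath, cert, 0600); err != nil {
		return nil, nil, err
	}
	if err := ioutil.WriteFile(keyPath, key, 0600); err != nil {
		return nil, nil, err
	}
	return cert, key, nil
}
```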
[23:33] <menn0> axw: were you thinking of doing that anyway?
[23:34] <axw> menn0: yeah, was planning to do it later but I can look at doing it now
[23:34] <menn0> axw: another approach would be to not include cloud creds in the model description for LXD models
[23:35] <menn0> axw: it looks like the model migration logic allows for that on the import side
[23:35] <axw> menn0: would that make it more difficult to support remote-LXD?
[23:35] <axw> because we're pretty close to being able to support that now I think
[23:42] <menn0> axw: yeah I guess that could be an issue. quick chat after standup?
[23:43] <axw> menn0: I will have to drop my daughter off at school, I'll ping you as soon as I get back
[23:43] <menn0> axw: sounds good