[00:02] <redir> good good
[01:21] <babbageclunk> Anyone around for a review? https://github.com/juju/juju/pull/6731
[09:22] <perrito666> Morning
[09:22] <perrito666> Babbageclunk still need that review?
[13:02] <perrito666> brb errands
[14:21] <hoenir> could someone please review my patch? https://github.com/juju/juju/pull/6523
[14:30] <rick_h> macgreagoir: can you peek at ^ and try to get natefinch or katco to second when they're around?
[14:30] <macgreagoir> rick_h: ack
[15:20] <aisrael> Does juju 2 support the use of lxc copy (cloning a base container)?
[16:25] <natefinch> aisrael: I don't think so.  Our philosophy was to let lxd manage everything itself
[16:32] <aisrael> natefinch: Ok. Here's the scenario: I'm looking to see if the base ubuntu image used when launching machines can be customized, i.e., pre-installing packages that will be needed, setting up proxy detection, etc. The goal is to get a faster start-up time for local development.
[16:34] <natefinch> aisrael: so, I believe you can fool juju/lxd by making an image called ubuntu-xenial  (or whatever series) and then fix that up to be what you want
[16:34] <aisrael> natefinch: Ok, awesome, let me give that a try. Thanks!
[16:35] <natefinch> aisrael: also, for local development, manual provider is even faster and easier.  There's no machine startup and you can prepopulate easily.
[16:36] <aisrael> natefinch: manual instead of lxd provider? Hmm
[16:36] <natefinch> aisrael: yeah.  You can still use lxd to spin up the machine (if you want), but then just use it as a manual machine.
[16:40] <aisrael> natefinch: That's an interesting approach. I'd be interested in benchmarking the two for comparison.
[16:53] <natefinch> aisrael: I'd love to see benchmarks.
[17:54] <deanman> aisrael, If you manage to fool juju into using a local lxc image please give a shout. I did try the other day to copy an existing image and give it an alias like ubuntu-xenial but somehow I failed.
[18:57] <aisrael> deanman: will do!
[19:12] <frobware> natefinch: ping
[20:22] <babbageclunk> perrito666: thanks for the review! it was a bit lonely here yesterday.
[20:23] <babbageclunk> perrito666: Also, we didn't get to have that discussion about squashing commits at the sprint. We should! I think you're right, not being able to bisect is a problem.
[20:28] <perrito666> babbageclunk: sorry I left early
[20:29] <perrito666> babbageclunk: of course I am right, that usually is the case :p
[20:29] <babbageclunk> perrito666: :)
[20:31] <perrito666> mm, I believe I found a rather ugly bug in our logic
[20:37] <babbageclunk> perrito666: oh dear
[20:41] <babbageclunk> perrito666: do you know anything about macaroons and authentication? I've been bumping my head on a bug but it might be something obvious to someone more familiar with it.
[21:12] <petevg> Hi, everyone. I'm running into an interesting issue while testing matrix: the landscape bundle in the store deploys successfully when deployed w/ the command line client, but it fails when deployed via the gui, or by python-libjuju (the latter is used in matrix).
[21:13] <petevg> Specifically, the config-changed hook in haproxy fails, with a KeyError getting "services" from the config.
[21:13] <petevg> In the bundle.yaml, "services" is an empty string. Does anyone know if the command line client does anything special with falsey config values?
[21:13] <petevg> Possibly something that the gui and python-libjuju should replicate?
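A minimal sketch of petevg's suspicion above, using hypothetical helper names (none of these are real juju, python-libjuju, or charm APIs): if a client drops falsey config values such as the empty-string "services" before deploying, a hook that indexes the config directly will raise the observed KeyError, while a client that passes every declared option through will not.

```python
# Hypothetical illustration only; these helpers are not juju's actual code.

def build_config_drop_falsey(bundle_options):
    """A client that skips falsey values (the suspected gui/libjuju path)."""
    return {k: v for k, v in bundle_options.items() if v}

def build_config_keep_all(bundle_options):
    """A client that passes every declared option through (CLI-like)."""
    return dict(bundle_options)

def config_changed_hook(config):
    """Stand-in for the haproxy hook: it indexes "services" directly."""
    return config["services"]  # KeyError if the option was dropped

# In the bundle.yaml, "services" is an empty string.
bundle_options = {"services": ""}

config_changed_hook(build_config_keep_all(bundle_options))  # fine, returns ""
try:
    config_changed_hook(build_config_drop_falsey(bundle_options))
except KeyError:
    print("KeyError: 'services' was dropped before deploy")
```

If this is what is happening, the fix would be for the gui and python-libjuju to preserve falsey config values the way the command line client appears to.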
[21:18] <natefinch> babbageclunk: I know a tiny bit about macaroons
[21:30] <perrito666> babbageclunk: nope, but we can try to think together
[21:33] <babbageclunk> perrito666, natefinch: yay thanks! I'm working on bug 1650451
[21:33] <mup> Bug #1650451: Migration silently fails when performed by a newly registered and granted super user <juju:New> <https://launchpad.net/bugs/1650451>
[21:34] <babbageclunk> What seems to be happening is that there are normal API requests to the target controller that succeed (so must have the right macaroons)...
[21:35] <babbageclunk> And then the websocket connection to the logtransfer endpoint fails with "cannot get discharge: interaction not possible"
[21:37] <natefinch> do the previous actions need superuser creds?
[21:37] <natefinch> I wonder if it's just a timing issue
[21:40] <babbageclunk> natefinch: yeah, everything on the migrationtarget facade requires superuser
[21:40] <babbageclunk> natefinch: maybe? It happens reliably though.
[21:45] <babbageclunk> natefinch: I think thumper had to fix something in the connection to the target api for a similar scenario (maybe the same test). But I haven't been able to find his change, and he's on holiday. I might ping him on alternative channels to see if he can give me a quick pointer.
[21:53] <natefinch> babbageclunk: so, the last login I see before migration starts here: 2016-12-16 01:44:21 TRACE juju.rpc.jsoncodec codec.go:120 <- {"request-id":1,
[21:53] <natefinch> that's the response to the login, which returns some data, which seems relevant: "controller-access":"superuser","model-access":""
[21:54] <natefinch> I wonder if the lack of model access is a problem.  It would be interesting to see if that access field is different in a case where the user is a superuser to start
[21:54] <natefinch> I would expect that controller superuser access would just override needing model access, but I don't know for sure
[21:55] <babbageclunk> natefinch: yeah, that makes sense - it should work in the case where the user doesn't have model access, but maybe the auth in logtransfer isn't getting that right.
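The hypothesis in the last few messages can be sketched as follows (hypothetical function names, not juju's real authorizer code): the login response shows controller-access "superuser" with an empty model-access, and the suspicion is that the logtransfer endpoint's auth check consults only model access, falling back to a macaroon discharge that then fails as non-interactive.

```python
# Hypothetical sketch of the access-check hypothesis; not juju's actual code.

SUPERUSER = "superuser"

def buggy_can_transfer_logs(controller_access, model_access):
    """Suspected logtransfer behaviour: only model access is consulted, so a
    superuser with model-access "" gets bounced into macaroon discharge,
    which then fails with "interaction not possible"."""
    return bool(model_access)

def expected_can_transfer_logs(controller_access, model_access):
    """Expected behaviour: controller superuser overrides model access."""
    return controller_access == SUPERUSER or bool(model_access)

# Values from the traced login response: "controller-access":"superuser",
# "model-access":""
print(buggy_can_transfer_logs("superuser", ""))     # False: the failing path
print(expected_can_transfer_logs("superuser", ""))  # True: what should happen
```

Under this reading, the fix would live in the logtransfer endpoint's authorization, not in the migration client.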
[22:00] <natefinch> I gotta run to make dinner, but I'll be back on later.
[22:01] <babbageclunk> natefinch: ok, thanks!
[22:43] <mup> Bug #1651260 opened: landscape bundle error when deployed via gui (KeyError in config changed hook in haproxy charm) <juju-core:New> <https://launchpad.net/bugs/1651260>
[23:16] <babbageclunk> perrito666: ok, what about bug 1650251
[23:16] <mup> Bug #1650251: Model migrations fail if cloud names don't match <model-migration> <juju:New> <https://launchpad.net/bugs/1650251>
[23:21] <perrito666> babbageclunk: cannot open the bug for lack of bw
[23:22] <perrito666> bootstrapping a vmware machine is very bw intensive
[23:22] <perrito666> bbl when this is finished and I don't have 3 seconds of lag each time I hit enter
[23:23] <babbageclunk> perrito666: no worries - I was about to type my findings and current thinking to you, but then realised I should put them on the bug instead of in irc
[23:45] <redir> perrito666: babbageclunk are we doing standup?
[23:50] <babbageclunk> redir: I thought we weren't but I can!
[23:50]  * redir shrugs it's on my calendar still. but that might just be mine
[23:55] <perrito666> I also thought we weren't
[23:56] <perrito666> also, I have been uploading a vmware image for the past 50 mins and it would seem it will take another 50 mins
[23:56] <perrito666> so no chance I can do a video call