[00:04] <mup> Bug #1346597 opened: cannot get replset config: not authorized for query on local.system.replset <cloud-installer> <landscape> <oil> <juju-core:Incomplete> <juju-core 1.24:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1346597>
[00:04] <mup> Bug #1575895 opened: juju loses apt-http/s-proxy information if a model is deleted and a new one created <add-model> <juju-release-support> <landscape> <rc1> <usability> <juju-core:Incomplete> <https://launchpad.net/bugs/1575895>
[00:20] <redir> axw mostly just a question, when I `juju add-model --region us-east-1 foo` from a controller bootstrapped in `us-west-1`, does anything actually happen in `us-east-1`? or is that all stored on the controller and nothing happens in us-east-1 until I `juju deploy something`?
[00:21] <redir> axw: I was under the impression from my convo with wallyworld that I should see something come up when I add a model to another region, but it makes sense that it wouldn't, too.
[00:34] <menn0> axw: rubber stamp pls http://reviews.vapour.ws/r/5467/
[00:36] <thumper> menn0, axw, wallyworld: thoughts ?  http://reviews.vapour.ws/r/5468/
[00:37] <wallyworld> output looks nice
[00:38] <menn0> thumper: +1 the output and args are nice
[00:44] <menn0> thumper: ship it
[00:48] <axw> redir: atm, nothing happens. we may change it soon so that the model's security group gets created when you add the model
[01:01] <redir> axw tx
[01:16] <redir> gtg. Until tomorrow, #juju-dev
[01:26] <wallyworld> cmars: yo
[01:27] <wallyworld> cmars: you and roger really need to talk more, or you need to read his email :-D
[02:03] <natefinch> axw or anyone else.... trying to run juju on windows, but it seems to die mysteriously right away? http://pastebin.ubuntu.com/23066001/
[02:03] <axw> natefinch: probably https://bugs.launchpad.net/juju-core/+bug/1612836
[02:03] <mup> Bug #1612836: Cannot bring up hosted model machines in azure <azure-provider> <ci> <deploy> <regression> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1612836>
[02:04] <axw> natefinch: one mo, I have a small patch you can apply to test
[02:04] <axw> natefinch: http://paste.ubuntu.com/23066004/
[02:06] <natefinch> axw: will give it a go
[02:06] <natefinch> axw: thanks
[02:33] <axw> natefinch: if you have a moment to review, the fix is in a PR now: http://reviews.vapour.ws/r/5469/
[02:33] <natefinch> axw: sure
[02:39] <natefinch> axw: doh - axw/juju-tools/main.go:45: cannot use buildToolsCommand literal (type *buildToolsCommand) as type modelcmd.ModelCommand
[02:39] <natefinch> axw: bah. it's because gnuflag moved
[02:39] <axw> natefinch: ah, I'll update it in a moment
[02:41] <axw> natefinch: pushed
[02:43] <natefinch> axw: cool
[02:43] <natefinch> I really love that go install has no middle man between git push and me getting and installing the code.
[02:44] <natefinch> like, no npm repository to update or anything.  Just get the code from git and build it.
[02:46] <natefinch> I wish kill controller had a --jfdi that would just kill the machines via the provider... waiting for azure to nicely close everything down is so slow...
[02:46] <mup> Bug #1614329 opened: Cannot deploy charm to new lxd container on machine: permissions error <lxd> <juju-core:New> <https://launchpad.net/bugs/1614329>
[02:46] <mup> Bug #1614330 opened: agree command uses 'less' for a pager, fails on windows <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1614330>
[02:54] <axw> natefinch: I often just "azure group delete -q <resource group> --nowait", then "juju unregister -y <controller>"
[02:58] <natefinch> axw: nice
[02:59] <axw> natefinch: groups are quite neat - only one thing to delete, and your model or controller is cleaned out
[03:00] <axw> the time they take to delete is just a bit painful
[03:04] <anastasiamac> pretty awesome suffix git put on commit 0806e96e6e79b530161cbb40f54bd8b338e0bad1
[03:06] <thumper> axw: where will I find the code that stores storage-pool settings?
[03:10] <thumper> think I've found it
[03:12] <axw> thumper: sorry went afk, sorted?
[03:12] <thumper> yeah
[03:15] <axw> wallyworld: if we start making API calls in list-controllers, then one bad controller is going to hang the whole command. can we isolate the access to show-controller at least?
[03:16] <wallyworld> that becomes rather inconvenient for a user to see at a glance what they have access to
[03:16] <wallyworld> but you are right about a bad controller
[03:17] <wallyworld> maybe it has to be show controller
[03:37] <thumper> axw: will there ever be storage pools for pools that aren't defined on volumes or filesystems?
[03:38] <axw> thumper: non capisco. do you mean "will there ever be storage pools that aren't referred to by volumes or filesystems"?
[03:38] <thumper> kinda
[03:39] <thumper> will there be pool settings for pools not currently referred to
[03:39] <axw> thumper: if so, yes. e.g. when you bootstrap ec2, you get a storage pool out of the box called "ebs-ssd"
[03:39] <thumper> how do I know all the pool names?
[03:39] <axw> thumper: storage/poolmanager.PoolManager.List
[03:40] <axw> name is part of the config returned
[03:40] <thumper> actually, I could just iterate through all settings looking for "pool#" prefix
[03:41] <thumper> I have all the settings read
[03:41] <axw> or that
[03:41] <thumper> perhaps that :)
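The prefix scan thumper describes above can be sketched as follows. This is purely illustrative (the real juju state code is Go, and the settings document shapes here are hypothetical): given a dump of the settings collection keyed by document ID, pool configs are the documents whose key carries the "pool#" prefix.

```python
def pool_settings(settings):
    """Return {pool_name: config} for every "pool#<name>" settings doc.

    Illustrative sketch only: `settings` stands in for the already-read
    settings collection thumper mentions, as a plain dict.
    """
    prefix = "pool#"
    return {
        key[len(prefix):]: config
        for key, config in settings.items()
        if key.startswith(prefix)
    }


# Hypothetical example data: one pool doc and one unrelated doc.
settings = {
    "pool#ebs-ssd": {"type": "ebs", "volume-type": "ssd"},
    "e": {"uuid": "deadbeef"},
}
```

With data like the above, `pool_settings(settings)` yields only the `ebs-ssd` pool config, which matches axw's point that a pool such as ec2's out-of-the-box "ebs-ssd" exists even when no volume refers to it.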
[03:46] <mup> Bug #1614345 opened: add-subnet subcommand references absent create-subnet command <usability> <juju-core:New> <https://launchpad.net/bugs/1614345>
[04:16] <wallyworld> axw: i did it for show controllers, but also added a tabular option to show controllers to make the output nice if needed
[04:17] <axw> wallyworld: sounds good
[04:17] <wallyworld> i'll now move on to list-models and list-users
[04:23] <veebers> wallyworld: re: --build-agent, can it use just a binary in path (i.e. for jujud) or does it need to build from source?
[04:24] <wallyworld> veebers: it uses a binary in the path if build-agent is not specified
[04:24] <wallyworld> build-agent means build
[04:26] <veebers> wallyworld: hmm ok. Is it still possible to pass something in to upgrade-juju to get the build number bumped (as I was trying with --upload-tools). Does that require --build-agent?
[04:26] <wallyworld> no need, the build number is bumped regardless
[04:28] <veebers> wallyworld: ok cool, thanks :-)
[04:28] <wallyworld> if you find an issue, let me know
[04:28] <veebers> will do :-)
[04:39] <thumper> menn0: http://reviews.vapour.ws/r/5472/
[04:39] <menn0> thumper: about to do a kid pickup. will review after.
[04:40] <thumper> np
[04:49] <mup> Bug #1614364 opened: manual provider lxc units are behind NAT, fail by default <manual-provider> <uosci> <juju-core:New> <https://launchpad.net/bugs/1614364>
[04:51] <axw> wallyworld: I have reviewed http://reviews.vapour.ws/r/5471/, would appreciate if you would also take a look later, after anastasiamac has responded to comments
[04:52] <wallyworld> axw: np, just popping out to get kid from camp, bbiab
[04:52] <axw> wallyworld: there's lots of changes, but there's a bunch of discrete changes in separate commits on github
[04:52] <axw> thanks, later
[05:19] <mup> Bug #1604474 changed: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <ci> <juju2.0> <oil> <oil-2.0> <regression> <vpil> <windows> <juju-core:Fix Released by natefinch> <https://launchpad.net/bugs/1604474>
[05:20] <thumper> axw: hmm... part of the reason of using the database implementation to get at the storage pools is that is exactly what the migration import and export code does, it does low level database stuff, which is why the entire setting collection is read at once
[05:20] <thumper> all the migration code has deep knowledge of implementation details
[05:20] <thumper> that is how it works
[05:21] <thumper> I'm happy enough changing the core description definition
[05:21] <axw> thumper: I guess that's fine, changing core description would be enough then
[05:21] <thumper> to make it slightly less hacky
[05:21] <axw> thumper: though I thought importing didn't involve inserting docs?
[05:21] <axw> thumper: but using AddMachine, etc.
[05:21] <thumper> haha, no
[05:21] <axw> no?
[05:21] <axw> ok
[05:21] <thumper> not at all
[05:22] <thumper> much hacky doc op fu
[05:22] <axw> thumper: yeah that wouldn't work, not sure what I was thinking :)  very well then
[05:22] <thumper> most of the AddMachine etc. high-level functions try to apply checks we don't want
[05:22]  * axw nods
[05:22] <mup> Bug #1604474 opened: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <ci> <juju2.0> <oil> <oil-2.0> <regression> <vpil> <windows> <juju-core:Fix Released by natefinch> <https://launchpad.net/bugs/1604474>
[05:23] <thumper> I'll tweak the description :)
[05:23] <thumper> hmm.. why has review board stopped updating with some changes?
[05:24] <thumper> https://github.com/juju/juju/pull/6025
[05:24] <thumper> phew... romulus command branch finally landed
[05:25] <thumper> weird, because it found http://reviews.vapour.ws/r/5474/
[05:37] <menn0> thumper: ship it on the loggo one
[05:37] <thumper> menn0: I'm just updating the storage pools
[05:37] <thumper> which loggo one?
[05:37] <menn0> thumper: the small one which changes the default writer
[05:37] <thumper> the cmd one?
[05:38] <thumper> I see it :)
[05:38] <thumper> I hope it lands due to deps
[05:38] <thumper> if not, I'll poke veebers :)
[05:39] <thumper> menn0: https://github.com/juju/juju/pull/6025
[05:40] <menn0> thumper: RB now working?
[05:40] <thumper> menn0: didn't pick up that one
[05:40] <thumper> no idea why
[05:43] <menn0> thumper: LGTM, with one query
[05:44]  * menn0 wants this stuff to land so he can see what it looks like
[05:44] <thumper> menn0: it is different due to loggo.Level vs. string key
[05:44] <thumper> debug-log sends just string representations of the level
[05:44] <menn0> thumper: ah right. makes sense.
[05:44] <menn0> thumper: all good.
[05:45] <veebers> thumper: Hey I'm about to EOD, did you want me to take a look at something?
[05:45] <thumper> perhaps
[05:45] <thumper> gimmie two ticks
[05:46] <thumper> veebers: nope, all good
[05:46] <veebers> thumper: sweet :-)
[05:56] <menn0> thumper: VALIDATION phase done... pushing now
[05:56] <thumper> sweet
[06:08] <thumper> heh
[06:08] <thumper> hmm
[06:08] <thumper> I have two branches
[06:08] <thumper> and was getting confused
[06:08] <thumper> color-debug-log
[06:08] <thumper> and debug-log-color
[06:08] <thumper> was editing wrong one
[06:10] <blahdeblah> Hi folks; anyone know if there has ever been any attempt to deploy juju environments in a fully disconnected environment?  i.e. no streams available, no charm store, no external repos
[06:13] <thumper> not something I do personally
[06:13] <thumper> blahdeblah: some of the folk with orange boxes do I think
[06:13] <blahdeblah> thumper: OK, interesting; any suggestion who to talk to?
[06:14] <thumper> try scott
[06:14]  * thumper looks for nick
[06:14] <thumper> txspud
[06:14] <thumper> Scott Croft
[06:14] <thumper> I think he has
[06:21] <blahdeblah> thanks thumper
[09:07] <voidspace> frobware: ping
[09:07] <frobware> voidspace: hi
[09:07] <voidspace> frobware: hey, hi
[09:08] <voidspace> frobware: I'm looking at a 1.25 port of the fix for bug 1602716
[09:08] <mup> Bug #1602716: MAAS provider bridge script doesn't handle alias interfaces IP <2.0> <maas-provider> <network> <sts> <juju-core:Fix Released by freyes> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1602716>
[09:08] <voidspace> frobware: even with the new code in place, 1.25 doesn't bridge the alias
[09:08] <voidspace> frobware: is this because 1.25 does a lot less bridging than 2.0?
[09:08] <frobware> voidspace: yes
[09:09] <frobware> voidspace: we only bridge the default route
[09:09] <voidspace> frobware: so I'll leave the code in place, but mark the specific bug as wontfix for 1.25 (again)
[09:09] <frobware> voidspace: is there harm in it being there? in general I think the bridge script should be identical between 1.25 and 2.0
[09:10] <voidspace> frobware: I've put the code in there - but there was a card to handle it linked to the LP bug
[09:10] <voidspace> frobware: so I'm landing the code but it doesn't actually fix the bug... I was just checking this was expected.
[09:11] <voidspace> frobware: which you've confirmed. So thanks.
[09:11] <frobware> voidspace: this is not expected. why doesn't it fix the bug?
[09:11] <voidspace> frobware: on 1.25 it doesn't fix the bug because we don't bridge the alias (which was the bug)
[09:11] <voidspace> frobware: surely?
[09:12] <frobware> voidspace: hmm. can you create an alias for the default route (MAAS UI) and validate that we bridge the alias
[09:13] <frobware> voidspace: you'll need an alias for this to make sense
[09:14] <voidspace> frobware: I have an alias for eth0, eth0 is bridged but eth0:1 isn't - as expected
[09:14] <voidspace> frobware: if I now disable eth0 should that make the alias the default route, or should I do something else?
[09:16] <frobware> voidspace: can you PB your original ENI
[09:17] <voidspace> frobware: http://pastebin.ubuntu.com/23066815/
[09:18] <voidspace> frobware: that's before and after
[09:19] <voidspace> frobware: the gateway is specified on the bond, should I create an alias for that?
[09:20] <frobware> voidspace: looking at the PB
[09:21] <voidspace> re-bootstrapping with an alias to the bond as well and grabbing coffee
[09:23] <frobware> voidspace: the script seems broken
[09:25] <frobware> voidspace: I take that back. apologies.
[09:26] <frobware> voidspace: you can do this without rebooting - just run the script on the original eni, observing the output depending on what you pass to --interface-to-bridge
[09:29] <voidspace> frobware: so what should I test for 1.25?
[09:29] <frobware> voidspace: still looking and understanding...
[09:29] <voidspace> frobware: ok
[09:29] <frobware> voidspace: do you have an ENI where bond0 has an alias?
[09:29] <voidspace> frobware: this is the branch https://github.com/juju/juju/pull/6013
[09:30] <voidspace> frobware: I just did that - bond0 is still active though
[09:30] <voidspace> frobware: juju-br0 is created but the bond alias is left in place, no additional bridge is created
[09:31] <voidspace> frobware: if we only bridge the interface with the gateway (which I assume is the default route) then that is still expected I think
[09:32] <frobware> voidspace: if you're running the script by hand, you can ignore the gateway bit. the arguments --interface-to-bridge and --bridge-name control what gets bridged
[09:32] <frobware> voidspace: on 2.0 we don't use those arguments so everything (appropriate) gets bridged
[09:34] <voidspace> frobware: so it won't fix the specific bug for 1.25 (unless the alias is the default route) but it's nice to have the code the same in both 1.25 / 2.0 for maintenance
[09:35] <frobware> voidspace: yes, for the latter. but you can have an alias so we should check what it would do
[09:35] <frobware> voidspace: just a little confused about what its doing for the 2.0 case atm
[09:35] <voidspace> frobware: ok
[09:36] <voidspace> frobware: dimitern and I have both tested this branch with deploys and it seems to work fine
[09:36] <voidspace> frobware: http://reviews.vapour.ws/r/5460/
[09:36] <frobware> voidspace: deploys that involve an alias on 1.25?
[09:36] <voidspace> frobware: you can see what dimitern used in his comment on the review
[09:37] <frobware> voidspace: and an alias on the interface that is the default route?
[09:37] <voidspace> I'm just looking now
[09:37] <voidspace> ah no, not an alias
[09:37] <dimitern> frobware: no aliases
[09:37] <frobware> voidspace: isn't that a different PR?
[09:37] <dimitern> is it broken with them?
[09:38] <frobware> voidspace: that PR is about the MTU
[09:38] <voidspace> frobware: yeah, but I rolled the code into that
[09:38] <voidspace> frobware: so no, that's the right PR
[09:38] <frobware> ???
[09:38] <frobware> why
[09:38] <voidspace> frobware: I did one PR with all the new code
[09:38] <frobware> voidspace: why not separate PRs?
[09:39] <frobware> voidspace: that makes no sense to me. if we ever had to revert the MTU for example we just added/created more work
[09:39] <voidspace> frobware: because the patches didn't apply cleanly - so it would have been a lot more work to do as separate patches
[09:39] <frobware> voidspace: why not separate patches?
[09:39] <voidspace> frobware: we created "possibly more work" - doing them separately now would be "definitely more work"
[09:39] <frobware> voidspace: disagree. we should do discrete things.
[09:39] <voidspace> frobware: because the patches don't apply cleanly as separate patches
[09:40] <frobware> voidspace: but we can fix the patches per PR
[09:41] <voidspace> frobware: I understand your disagreement. This patch is really "bring 1.25 up to using the same code for bridging as 2.0" which is a discrete thing.
[09:41] <frobware> voidspace: fwiw, I think this is the wrong approach. patches should be discrete so that they can be reverted if necessary. seems like we have conflated things.
[09:42] <voidspace> frobware: I understand and I don't think it is worth the extra effort. If this patch introduces regressions it can be reverted.
[10:05] <frobware> voidspace: so i think the script is broken for the single use case, where single use means we explicitly pass the interface name to be bridged
[10:05]  * frobware was trying to look at yakkety... and wonders how much more of this fun and games is to come in netplan
[10:18] <frobware> voidspace, dimitern: https://bugs.launchpad.net/juju-core/+bug/1614471
[10:18] <mup> Bug #1614471: MAAS bridge script doesn't bridge aliases correctly where --interface-to-bridge is specified <juju-core:New> <https://launchpad.net/bugs/1614471>
[10:19] <voidspace> frobware: ok
[10:19] <voidspace> frobware: what do you think the right fix is for 1.25, or will that need some thinking about?
[10:20] <frobware> voidspace: just going to try it
[10:20] <frobware> voidspace: not a fix, just to understand the current behaviour
[10:20] <voidspace> frobware: I can look at breaking the patch into pieces and talk to Rick about it. I have a critical bug I'm assigned to in the meantime (although that should be easy).
[10:20] <voidspace> frobware: sure
[10:21] <frobware> voidspace: take a look at the critical patch - was going to try the alias stuff
[10:21] <voidspace> frobware: I can't create an alias that is the default route on my maas setup, but it can be done by directly calling the script
[10:21] <frobware> voidspace: even if that means I'm just doing some additional QA
[10:21] <voidspace> frobware: ok
[10:22] <frobware> voidspace: I'm wondering if the alias stuff would have worked if we had landed 395cd8d812c004e8c9a8783c47fcfc22ded9de2e some time earlier.
[10:23] <frobware> voidspace: or more conveniently https://github.com/juju/juju/pull/5792
[10:23] <frankban> axw: could you please take a look at http://reviews.vapour.ws/r/5475/ ?
[10:29] <babbageclunk> My ssd died. :(
[10:32] <mup> Bug #1614471 opened: MAAS bridge script doesn't bridge aliases correctly where --interface-to-bridge is specified <juju-core:New> <https://launchpad.net/bugs/1614471>
[10:48] <babbageclunk> dimitern, voidspace, frobware: my ssd died. Setting up on a loaner machine now (my ansible script has mostly worked!).
[11:02]  * dimitern is back; catching up
[11:08] <dimitern> babbageclunk: oh bugger :/ any notable losses?
[11:09] <voidspace> babbageclunk: :-( but glad you're getting set back up
[11:09] <dimitern> frobware: re bug 1614471
[11:09] <mup> Bug #1614471: MAAS bridge script doesn't bridge aliases correctly where --interface-to-bridge is specified <juju-core:New> <https://launchpad.net/bugs/1614471>
[11:09] <dimitern> frobware: I'm not sure we should create juju-br0:1 in that case
[11:10] <frobware> dimitern: because?
[11:10] <dimitern> frobware: how will that be useful?
[11:11] <dimitern> frobware: juju-br0 will work with its primary IP coming from eth0, which will be on the default route as well
[11:11] <frobware> dimitern: so... this is what I'm trying to understand. why was it broken and would the presence of an auto stanza have fixed the initial bug report
[11:11] <dimitern> frobware: eth0:1 will also work, if you need for some reason not to go through juju-br0
[11:12] <dimitern> or perhaps it won't actually..
[11:12]  * dimitern gives it a quick test
[11:24] <babbageclunk> dimitern: Lost a couple of days work. :( Really wish I'd pushed yesterday afternoon.
[11:25] <dimitern> babbageclunk: oh man :( sorry!
[11:25] <dimitern> yeah, pushing at EOD is a good rule of thumb
[11:26] <dimitern> frobware: so if we're bridging aliases as well, what happens if the alias is on a different subnet?
[11:26] <dimitern> frobware: that would imply a separate bridge, not just moving the alias address over to the bridge
[11:30] <fwereade> so, I am without power where I'm staying, and am rather stuck with cafes if I want internet
[11:33] <fwereade> I'll be catching up on mail and reviews and stuff while I'm here, but if anyone wants my advice/company/whatever, they should talk to me while I'm here
[11:44] <dimitern> fwereade: are you in the uk?
[12:02] <fwereade> dimitern, yeah
[12:02] <dimitern> fwereade: so brexit reached the power companies? :)
[12:03] <fwereade> dimitern, haha
[12:03] <fwereade> dimitern, not yet, that I'm aware of ;p
[12:03] <dimitern> fwereade: ;) too early
[12:03] <fwereade> dimitern, trouble in the building, people are working on it
[12:04] <dimitern> fwereade: is your bandwidth wide enough for a HO, if you have some time as well?
[12:04] <fwereade> dimitern, HO probably not great, there's background noise
[12:04] <dimitern> fwereade: I'd like to chat about the ipam story and how it fits in core
[12:05] <dimitern> fwereade: right, ok - irc then :)
[12:26] <voidspace> frobware: I think you're right - with auto the interface would have come up and the fact that it wasn't bridged would only have mattered for containers
[12:26] <voidspace> frobware: so for 1.25 just ensuring the auto is there is enough
[12:27] <voidspace> frobware: *unless* it's an ipv6 specific thing
[12:27] <voidspace> frobware: can you effectively have two default routes, one for ipv6 and one for ipv4?
[12:27] <voidspace> frobware: the alias in the original bug report was ipv6
[12:38] <dimitern> frobware, voidspace: I've tested with eth0 bridged to juju-br0 and eth0:1 not, deployed 3 lxc containers - works fine
[12:42] <voidspace> frobware: ah no, in the original bug report the alias does have auto in the generated /e/n/i
[12:42] <voidspace> dimitern: cool
[12:43] <frobware> voidspace: yeah, I don't know how ipv6 crept into that
[12:48] <frobware> voidspace: I think you raise a good point about containers - should aliases be bridged for containers?
[12:48] <frobware> dimitern: ^^
[12:49] <frobware> voidspace, dimitern: can we HO re: aliases?
[12:49] <voidspace> frobware: ok with me
[12:50] <voidspace> frobware: in 1.25 we only bridge the default route, we don't provide multi-nic support - which probably means no ipv6 support if that's on a separate interface/alias
[12:50] <dimitern> frobware: sure
[12:50] <voidspace> frobware: which is probably fine as we're doing it in 2.0
[12:50] <voidspace> frobware: HO channel?
[12:50] <frobware> voidspace: standup
[12:51] <voidspace> kk
[12:51] <voidspace> omw
[12:52] <dimitern> me2
[12:58] <babbageclunk> mgz: ping?
[12:58] <voidspace> mgz: ping
[12:58] <babbageclunk> voidspace: dibs
[12:58] <voidspace> mgz: do you still want a review from me? you were going to create some PRs
[12:59] <niedbalski> fwereade, ping
[13:00] <voidspace> babbageclunk: haha, good luck with that...
[13:00] <fwereade> niedbalski, pong
[13:00] <fwereade> niedbalski, what can I do for you?
[13:00] <babbageclunk> voidspace: I'm pretty sure etiquette dictates that he talk to me first.
[13:01] <mgz> babbageclunk: yo
[13:01] <mgz> voidspace: alyo
[13:02] <niedbalski> fwereade, hey William :) , Could you check this https://pastebin.canonical.com/163030/? It's preventing the units from starting. I think it's related to LP: #1534103
[13:02] <mup> Bug #1534103: "unknown operation kind run-action" (1.26alpha3) <2.0-count> <actions> <sts> <juju-core:Triaged by rharding> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1534103>
[13:02] <babbageclunk> mgz: Hey, the mgo fix is in upstream, so we no longer need to apply the nasty patch in our build.
[13:02] <niedbalski> fwereade, here is the current action list on the database (I've a full mongo dump) http://pastebin.ubuntu.com/23067364/
[13:02] <babbageclunk> mgz: If I land a change to remove the .diff file, juju/juju/patches is probably going to disappear (since there's nothing else in it).
[13:03] <fwereade> niedbalski, that just looks like... utterly broken uniter code
[13:03] <babbageclunk> mgz: Is that going to break your script that applies the patch?
[13:03] <fwereade> niedbalski, looking further
[13:04] <babbageclunk> mgz: Or, I could leave a readme file in patches to keep the dir alive in the repo. Would your script cope with that?
[13:05] <mgz> babbageclunk: a readme sounds good to me
[13:07] <mgz> babbageclunk: it does not at present but that should change I think
[13:08] <mgz> voidspace: I shall link you my prs, onesec
[13:08] <voidspace> mgz: cool, thanks
[13:08] <babbageclunk> mgz: I guess the alternative is a null patch, but if you're happy to change the script that's probably less obtuse.
[13:08] <voidspace> babbageclunk: pthbthpthbthpt
[13:08] <mgz> babbageclunk: so, options, delete the dir, add note to general hacking docs somewhere
[13:08] <mgz> babbageclunk: or add dummy file for git in patches dir, make sure that lands after I update the script
[13:08] <babbageclunk> voidspace: don't care, he responded to me first. I'll allow multiplexing.
[13:09] <voidspace> babbageclunk: I refer the honourable gentleman to my previous comment (pthbthpthbthpt)
[13:09] <mgz> voidspace: https://code.launchpad.net/~gz/juju-ci-tools/jujupy_ssh_keys
[13:09] <mgz> voidspace: https://code.launchpad.net/~gz/juju-ci-tools/assess_ssh_keys
[13:09] <voidspace> mgz: looking
[13:10] <mgz> voidspace: I'll bug a qa guy as well, so don't worry too much about juju-ci-tools specifics, but feedback on approach/code welcome
[13:10] <fwereade> niedbalski, and, yeah, that bug looks like completely broken uniter code as well
[13:10] <babbageclunk> mgz: I'm ok with the latter option - having the note in the dir is probably more useful than in the docs somewhere.
[13:11] <mgz> babbageclunk: so, we think patcher should only apply things named *.patch in the dir given?
[13:11] <mgz> or some other rule would be more suitable?
[13:12] <fwereade> niedbalski, I don't have any immediate insight, except that it's 99.99% certainly a uniter problem and mongo isn't relevant
[13:12] <voidspace> mgz: ok, one of those branches has conflicts (the assess one)
[13:12] <babbageclunk> mgz: The existing one is called *.diff, so should probably use that.
[13:12] <mgz> voidspace: yeah, I did the merge for its prereq but haven't propagated that merge yet
[13:12] <babbageclunk> mgz: but otherwise yes.
[13:12] <mgz> hm...
[13:13] <babbageclunk> mgz: Although now you say it *.patch is better.
[13:13] <niedbalski> fwereade, OK, I tried restarting the agent, then it enters the agent.go:17 [AGENT-STATUS] failed: resolver loop error, state
[13:13] <niedbalski> fwereade, can you think on any way to manually workaround this?
[13:14] <mgz> let's allow both, I will just not be able to glob
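The rule mgz and babbageclunk settle on - the patcher applies only files whose names end in `.diff` or `.patch` from the patches dir, ignoring things like a README - can be sketched like this. Function and variable names here are hypothetical, not the actual juju-release-tools code:

```python
# Sketch of the agreed patcher rule: accept both .diff and .patch files,
# skip everything else (e.g. a README kept to hold the dir in git).
PATCH_EXTENSIONS = (".diff", ".patch")


def patch_files(dir_entries):
    """Filter a directory listing down to applyable patch files."""
    return sorted(
        name for name in dir_entries
        if name.endswith(PATCH_EXTENSIONS)  # str.endswith accepts a tuple
    )
```

For example, `patch_files(["fix-mgo.diff", "README", "other.patch"])` keeps only the two patch files, so a placeholder README in `juju/juju/patches` is harmless.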
[13:15] <voidspace> mgz: first one is fine, nice and straightforward
[13:15] <voidspace> mgz: I hate __metaclass__ = type
[13:15] <voidspace> mgz: :-(
[13:15] <voidspace> mgz: but I'm sure we've had this discussion before
[13:16] <voidspace> and future imports generally
[13:16] <voidspace> magic stuff at the start of a file that changes the semantics of code later in the file
[13:17] <mgz> voidspace: I am also not a fan, but am following style
[13:17] <voidspace> mgz: yeah
[13:17] <fwereade> niedbalski, not without hand-hacking the uniter state file and I am very uncertain about what consequences would follow from that
[13:17] <voidspace> mgz: hardcoding the key in the script?
[13:18] <voidspace> mgz: VALID_KEY
[13:18] <voidspace> mgz: ah, is that a test
[13:18] <voidspace> fair enough then
[13:18] <fwereade> niedbalski, my immediate judgement is more "omg that bug is clearly critical, how has it been hanging around for 8 months" :(
[13:19] <mgz> voidspace: yeah, could use ssh-keygen to make one up on the fly, but just using a known valid thing is fine I think?
[13:19] <niedbalski> fwereade, agree :)
[13:19] <voidspace> mgz: if it's just for a test fine
[13:20] <voidspace> mgz: that all looks fine to me
[13:20] <mgz> voidspace: thanks!
[13:21] <voidspace> some interesting patterns in the testing there
[13:21] <voidspace> looks good
[13:24] <niedbalski> anastasiamac, fwereade Would it be possible to flag LP: #1534103 as critical for the next beta release?
[13:24] <mup> Bug #1534103: "unknown operation kind run-action" (1.26alpha3) <2.0-count> <actions> <sts> <juju-core:Triaged by rharding> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1534103>
[13:26] <anastasiamac> niedbalski: fwereade: sure :D William, shall I assign it to you? Happy to tackle it?
[13:27] <fwereade> anastasiamac, niedbalski: just did so
[13:27] <fwereade> anastasiamac, mmmmmmmight be better targeted to axw/wallyworld, I am having real trouble with the charm-deletion bits
[13:28] <fwereade> anastasiamac, and they're likely more immediately familiar with that code
[13:28] <anastasiamac> fwereade: k.. we'll assign at release call, unless rick_h_ has a better plan
[13:28] <fwereade> anastasiamac, cheers
[13:29] <anastasiamac> fwereade: niedbalski: \o/
[13:29]  * rick_h_ 's ears tingle 
[13:29] <niedbalski> fwereade, anastasiamac thank you!
[13:30] <anastasiamac> niedbalski: nps - it's not fixed yet :D
[13:30] <rick_h_> niedbalski: do we have something to replicate with?
[13:32] <niedbalski> rick_h_, unfortunately I am not able to replicate that bug locally; It's happening on a beta7 installation on which we have access/logs/dumps.
[13:33] <rick_h_> niedbalski: right, so the question for us will be how to replicate/verify it's fixed
[13:33] <mgz> babbageclunk: please review https://code.launchpad.net/~gz/juju-release-tools/non_patch_ext/+merge/303263
[13:34] <rick_h_> it looks like it's happened with a few different actions on different workloads, benchmarking, casey's stuff, and openstack in the bug
[13:34] <fwereade> rick_h_, niedbalski: it looks like it should be reproable by unit test
[13:34] <niedbalski> rick_h_, yep, we need to work on getting consistent reproducer, so far, no luck for me.
[13:34] <rick_h_> fwereade: can you brain dump any helpful info for someone to chase down into the bug please?
[13:35] <babbageclunk> mgz: LGTM! Spelling error - should be PATCH_EXTENSIONS
[13:35] <fwereade> rick_h_, it'll be about 1 vague sentence but I will try to compose a helpful one
[13:35] <mup> Bug #1614471 changed: MAAS bridge script doesn't bridge aliases correctly where --interface-to-bridge is specified <juju-core:Invalid> <https://launchpad.net/bugs/1614471>
[13:36] <mgz> babbageclunk: ehe, whoops
[13:37] <mgz> did I tyop that twice, or copy or, or can I just not spell? I am not sure.
[13:41] <natefinch> that feeling when you typo the word typo
[13:55] <dimitern> frobware: so with the eni produced by the script, it boots ok and here's the routes and addresses: http://paste.ubuntu.com/23067462/
[13:56] <frobware> dimitern: and presumably an ipv6 route
[13:56] <frobware> dimitern: curious that it doesn't fail on 'post-up ifup eth0:1'
[13:59] <dimitern> frobware: yeah - both ipv6 addrs ended up on br-eth1 though
[14:00] <rick_h_> dimitern: fwereade voidspace macgreagoir ping for standup
[14:00] <dimitern> omw T-10s
[14:04] <voidspace> rick_h_: omw
[14:36] <mup> Bug #1614559 opened: Juju rejects lxd 2.1 <blocker> <bootstrap> <ci> <jujuqa> <lxd> <regression> <juju-core:Triaged by rharding> <https://launchpad.net/bugs/1614559>
[14:38] <natefinch> rick_h_:  you want me to fix that security bug?
[14:45] <rick_h_> natefinch: yes please
[14:45] <rick_h_> natefinch: hopefully more a backport than anything
[14:47] <natefinch> rick_h_: it's tricky, because the code is in the juju/utils repo, so updating 1.25 with a new version of that may break other things.  But maybe I'll get lucky and it won't be a big deal
[14:56] <natefinch> fwereade: I really wish juju controllers listed the juju version
[15:03] <mup> Bug #1614571 opened: Data race: TestLogOutput <ci> <jujuqa> <race-condition> <unit-tests> <juju-core:Triaged by rharding> <https://launchpad.net/bugs/1614571>
[15:13] <katco> rick_h_: ta for the response; updated 1614329. lmk what you think
[15:13] <katco> bug 1614329 that is
[15:13] <mup> Bug #1614329: Cannot deploy charm to new lxd container on machine: permissions error <lxd> <juju-core:New> <https://launchpad.net/bugs/1614329>
[15:14] <rick_h_> katco: looking ty
[15:17] <katco> rick_h_: another update with a simpler/alt repro. sorry if i'm not being clear
[15:54] <redir> morning juju-dev
[15:55] <katco> redir: morning
[15:55] <redir> :)
[16:01]  * rick_h_ lunches
[16:25] <mup> Bug #1614599 opened: i/o timeout downloading resource <ci> <intermittent-failure> <jujuqa> <resources> <juju-core:Incomplete> <https://launchpad.net/bugs/1614599>
[16:32] <bleepbloop> I have a bundle I exported from the juju gui in the current beta version and it fails to import and complains about `cannot unmarshal !!str `xenial` into charm.legacyBundleData`, is this a known issue?
[16:33] <perrito666> Pseudo morning
[16:36] <rick_h_> bleepbloop: not triggering any bells at the moment. can you file a bug over at the gui tracker https://github.com/juju/juju-gui please?
[16:36] <bleepbloop> Sure will do
[16:42] <redir> pseudo morning, perrito666
[16:42] <redir> or maybe sudo morning
[16:43] <bleepbloop> redir is not in the sudoers file.  This incident will be reported.
[16:44] <redir> hah
[16:44] <redir> that's awesome
[16:46] <redir> rick_h_: can we mark bug 1608533 as invalid or incomplete or something?
[16:46] <mup> Bug #1614622 opened: Cannot list-resources <ci> <intermittent-failure> <jujuqa> <list-resources> <resources> <juju-core:Incomplete> <https://launchpad.net/bugs/1614622>
[16:46] <mup> Bug #1608533: Race in github.com/juju/juju/apiserver/tools_test <ci> <race-condition> <regression> <unit-tests> <juju-core:In Progress by reedobrien> <https://launchpad.net/bugs/1608533>
[16:47] <rick_h_> redir: looking
[16:47] <redir> tx
[16:47] <redir> also are you working on bug 1614571?
[16:47] <mup> Bug #1614571: Data race: TestLogOutput <ci> <jujuqa> <race-condition> <unit-tests> <juju-core:In Progress by rharding> <https://launchpad.net/bugs/1614571>
[16:47] <rick_h_> redir: rgr, looks like it's not occurred since June so definitely ok putting it to bed unless it shows up again
[16:48] <redir> rick_h_: ^^^
[16:48] <rick_h_> redir: I am the target to assign bugs so I can get folks assigned to work on them
[16:48] <rick_h_> redir: so no, I'm not working on it but need to find someone to work on it.
[16:49] <redir> OK I'm happy to start trying to repro that in the bg while working... and switch to fixing when it triggers and I am otherwise waiting for a build or bootstrap
[16:53] <redir> rick_h_: ^
[16:53] <rick_h_> redir: rgr ty much. Feel free to take it
[16:54] <redir> ack
[17:10] <mup> Bug #1613487 changed: Fails to build tools when no matching tools found. <bootstrap> <ci> <regression> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1613487>
[17:11] <natefinch> mgz, perrito666: I see you both have committed stuff to the juju/utils 1.25 branch lately.  What's the policy? Do we cherry-pick from master?
[17:11] <mgz> natefinch: I landed the same code on both branches
[17:12] <mgz> basically in parallel
[17:12] <mgz> but whatever works
[17:12] <natefinch> mgz: ok
[17:14] <natefinch> sinzui, mgz: 1.25 still builds with go 1.2, right?
[17:17] <mgz> natefinch: it needs to, yes
[17:17] <mgz> it will build with 1.6 but shouldn't require it
[17:18] <natefinch> mgz: right
[17:18] <natefinch> mgz: ok
[17:22] <sinzui> natefinch: mgz: no, we switched the same day we switched master
[17:22] <mup> Bug #1614633 opened: A unit with a failed storage-detaching hook cannot be destroyed <juju-core:New> <https://launchpad.net/bugs/1614633>
[17:22] <sinzui> mgz: natefinch there is a bug in Juju's Makefile that installs the wrong golang
[17:23] <mgz> sinzui: all the backported versions are also going to be 1.6?
[17:23] <sinzui> mgz: natefinch: CI is testing with Go 1.6 installed and the agents and clients are built with go 1.6
[17:24] <mgz> I guess they can be, though I'm not crazy about changing compiler in sru minor updates
[17:25] <sinzui> mgz: golang-1.6 is in trusty universe. Of course the people preparing the ubuntu packages need to update control and rules to match the ~juju packages
[17:25] <natefinch> wait... so we are building 1.25 with go 1.6?
[17:25] <mgz> natefinch: we are in CI
[17:26] <mgz> and curtis is saying we should be for the distro as well (but we aren't atm)
[17:26] <sinzui> natefinch: I keep saying that on IRC and in emails since April 10 2016
[17:27] <sinzui> oh look, I pointed out the Makefile deps were wrong on April 1- too
[17:28] <mup> Bug #1614633 changed: A unit with a failed storage-detaching hook cannot be destroyed <juju-core:New> <https://launchpad.net/bugs/1614633>
[17:28] <natefinch> I would fix the makefile except I think it's an abomination against nature
[17:29] <natefinch> well good, I'll stop worrying about why I can't build the go tool for go 1.2 then
[17:32]  * natefinch goes back to go 1.7
[17:37] <perrito666> natefinch: lately?
[17:38] <natefinch> perrito666: top two PRs. More lately than anyone else :)
[17:38] <perrito666> natefinch: wow, those must be at least 3 months old
[17:40] <mup> Bug #1614633 opened: A unit with a failed storage-detaching hook cannot be destroyed <juju-core:New> <https://launchpad.net/bugs/1614633>
[17:40] <mup> Bug #1614635 opened: Bundle or app deploy fails behind a proxy <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1614635>
[17:42] <rick_h_> cmars: filed 8min ago: https://bugs.launchpad.net/juju-core/+bug/1614635
[17:42] <mup> Bug #1614635: Bundle or app deploy fails behind a proxy <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1614635>
[17:43] <cmars> rick_h_, thanks, i'll keep an eye on it
[18:08] <natefinch> for anyone with a little time to kill and who has ever played D&D: http://whothefuckismydndcharacter.com/
[18:09]  * perrito666 uses his +10 in social life to repell that link
[18:09] <natefinch> katco: ^^
[18:10] <katco> lol
[18:10] <natefinch> omg, it's so good
[18:20] <natefinch> quick backport review anyone? http://reviews.vapour.ws/r/5479/
[18:20] <natefinch> literally copy and paste from master, all new code, so... anyone want to rubber stamp me?
[18:21] <mgz> natefinch: lgtm
[18:22] <natefinch> mgz: thanks
[19:26] <redir> what version of go are we using with juju on AWS currently?
[19:26] <natefinch> another backport using the previously mentioned utils backport.  This one was manually created because for some reason cherry-pick was getting mad.... but it's just a mechanical change: http://reviews.vapour.ws/r/5480/
[19:26] <natefinch> redir: 1.6 for everything
[19:27] <redir> 1.6.?
[19:27] <redir> natefinch: ?
[19:28] <natefinch> redir: 1.6.0 IIRC
[19:28] <natefinch> redir: why?
[19:28] <redir> natefinch: but if I bootstrap from my machine it runs the agent built with my version, yes?
[19:29] <natefinch> redir: if you have a self-built client, it'll use the self-built server, yes.
[19:29] <redir> natefinch: trouble reproducing races recently, and the only difference I can think of is the version of go that CI is building juju with
[19:31] <natefinch> races might be load-dependent. I've definitely had ones that only happen if I'm deploying a large bundle etc.
[19:35] <bleepbloop> Could someone help me with respect to completely nuking the configuration of juju and restarting from scratch? I had juju 2 installed and I uninstalled it but it seems there are still some artifacts hanging around and I want to start completely fresh
[19:36] <rick_h_> bleepbloop: sudo updatedb && locate juju ?
[19:36] <natefinch> for juju2, everything is stored in ~/.local/share/juju
[19:37] <natefinch> if you're switching to juju 1.x, you don't need to worry about it, they can run side by side
[19:37] <natefinch> (they use different storage paths, different environment variables, everything)
[19:38] <redir> natefinch: yeah, I also have a process loading the CPU to help drive contention and trigger the race -- hopefully
[19:39]  * natefinch goes to see who is on call reviewer, sees that it is himself.
[19:40] <bleepbloop> Thanks Nate and Rick, the leftover files that were messing things up were located in ~/.local/share/juju
[19:41]  * redir lunches
[20:26] <mup> Bug #1614689 opened: azure-arm provider is very slow <azure-provider> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1614689>
[20:38] <voidspace> rick_h_: did you get an ok on that 1.25 JFDI for network backports?
[20:39] <rick_h_> voidspace: sorry, I never looked for it tbh
[20:39] <voidspace> rick_h_: ok, no problem
[20:39] <voidspace> rick_h_: I'm hanging around to see if thumper appears so I thought I'd ask
[20:41] <rick_h_> voidspace: yea, I can ask folks now
[20:46] <natefinch> anyone for a rubber stamp on a backport?  http://reviews.vapour.ws/r/5480/
[20:47] <rick_h_> redir: can you peek at ^ please?
[20:47] <redir> rick_h_: ack
[20:47] <rick_h_> redir: <3 ty
[21:06] <redir> natefinch-afk: LGTM
[21:08] <redir> rick_h_: I followed the link in that race bug. It led me to http://goo.gl/R8J8zJ
[21:09] <redir> which is linked to a fixed race, rick_h_. So sinzui is creating a new bug for it and updating.
[21:10] <redir> and which I've repro'd and will fix before the one assigned I guess.
[21:23] <rick_h_> redir: I didn't follow that but ... ok?
[21:26] <mup> Bug #1614724 opened: Juju isn't protected from new versions of LXD <juju-core:New> <juju-release-tools:New> <https://launchpad.net/bugs/1614724>
[21:26] <thumper> hmm... forgot to open this...
[21:28] <rick_h_> thumper: voidspace was going to look for you
[21:28] <rick_h_> thumper: not sure if he found you before this or not then
[21:28] <thumper> ah... nope
[21:30] <thumper> epic review up for grabs:  86 files changed, 775 insertions(+), 719 deletions(-)
[21:30] <thumper> perhaps I'll save it for menno
[21:30] <thumper> he loves those
[21:32] <perrito666> Thumper i owe you one of these
[21:33] <thumper> :)
[21:33] <thumper> just found one failing test...
[21:33] <thumper> two failing tests...
[21:33]  * thumper gets to fixing
[21:35] <voidspace> thumper: hey, hi
[21:35] <thumper> voidspace: hey
[21:35] <voidspace> thumper: so we have another bug related to comparing structs in asserts
[21:35] <thumper> yep
[21:36] <voidspace> thumper: however this one is actually comparing a slice of slices of structs, so building it into a collection of asserts is annoying (double loop to build)
[21:36] <voidspace> thumper: fwereade suggested just comparing txn-revno instead
[21:36] <voidspace> thumper: I wondered if you had any insight before I went ahead and did that
[21:36] <thumper> nope
[21:36] <thumper> :)
[21:36] <voidspace> hah, ok
[21:36] <thumper> sounds reasonable
[21:37] <voidspace> thanks, thought I'd ask
[21:37] <voidspace> yeah, if I have to I can build the asserts with the double loop - but I'd rather not
[21:41] <mup> Bug #1614732 opened: Race in github.com/juju/juju/api/state.go <ci> <intermittent-failure> <race-condition> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1614732>
[21:52] <thumper> sinzui: in the bug ^^^ title says race in state, but text says catacomb
[21:53] <sinzui> thumper: typo/paste error on my part. I will fix
[21:54] <thumper> ta
[21:59]  * rick_h_ runs away for the day
[22:03]  * thumper sighs
[22:03] <thumper> conflicts with master
[22:03]  * thumper pulls, merges, and resolves
[22:08] <mup> Bug #1614749 opened: juju ssh fails on azure <azure-provider> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1614749>
[22:08] <thumper> I guarantee that this branch will be less painful to review than it was to write
[22:12] <thumper> perrito666, menn0: http://reviews.vapour.ws/r/5481/
[22:13] <menn0> thumper: 1 min
[22:13] <thumper> no worries
[22:13]  * thumper goes back to reviewing menno's branch
[22:19] <menn0> thumper: must .... stay .... awake... :)
[22:19] <thumper> heh
[22:25] <perrito666> thumper: QA steps
[22:26] <thumper> heh... test everything?
[22:26] <perrito666> man, you fail the checklist at step 3
[22:26] <perrito666> shame
[22:26]  * thumper builds and tests
[22:26] <thumper> again
[22:28] <thumper> wallyworld: why would 'juju list-models' hang?
[22:29] <thumper> surely it shouldn't
[22:29] <thumper> oh... I have a cunning idea...
[22:29] <thumper> for later
[22:32] <thumper> perrito666: can you check `juju models`?
[22:33] <thumper> wallyworld: fyi http://pastebin.ubuntu.com/23068729/
[22:36] <menn0> thumper: while I'm still working through it, QA steps for that PR? (especially that the smart formatter still works with the feature flag)
[22:37] <thumper> lots of the jujuc tests show smart working with the feature flag
[22:37] <menn0> thumper: ok
[22:37] <thumper> but I can manually check
[22:37] <thumper> I should have done that before deploying elastic search
[22:37] <thumper> ...
[22:38] <thumper> well they all failed due to bad squid proxy
[22:38] <thumper> so I'll create controller
[22:39] <thumper> wallyworld: I think it may have been a more up to date client and older controller
[22:39] <thumper> will check
[22:43] <menn0> thumper: done
[22:43] <perrito666> thumper: what should I check about juju models?
[22:44] <thumper> perrito666: nm
[22:45] <thumper> menn0: I'm just doing some manual QA
[22:45] <thumper> and yes, for application get if you ask for JSON and give a single name, then you don't get json
[22:45] <thumper> this is the existing behaviour
[22:45] <thumper> however dumb
[22:51] <menn0> thumper: worth fixing before 2.0 final?
[22:51] <thumper> maybe
[22:54] <thumper> weird
[22:54] <thumper> apt.cache.FetchFailedException: W:Failed to fetch http://packages.elastic.co/elasticsearch/2.x/debian/dists/stable/main/binary-amd64/Packages  403  Forbidden
[22:54] <thumper> from the install hook
[22:54] <thumper> in lxd
[22:56] <mup> Bug #993557 changed: Charm store should delete charms that have been removed. <store> <pyjuju:Won't Fix> <juju-core:Invalid> <https://launchpad.net/bugs/993557>
[22:56] <mup> Bug #1178497 changed: Provide an API for listing all charms that are present in the store <charmbrowser> <store> <juju-core:Fix Released> <https://launchpad.net/bugs/1178497>
[23:03] <wallyworld> thumper: sorry, was out buying dog food
[23:03] <thumper> that's fine
[23:03] <wallyworld> i also got a 403 when using squid deb proxy
[23:07] <thumper> I've restarted my lxd with no proxy
[23:32] <mup> Bug #1602572 changed: Handler function is not being called even after changed one of the config values of the config.yaml <juju-core:Invalid by johnsca> <https://launchpad.net/bugs/1602572>
[23:43] <perrito666> oh man wallyworld now you have me humming ready for the 80s
[23:46] <wallyworld> lol
[23:46] <redir> I was getting a lot of 403s with squid deb proxy until I allowed world.
[23:50] <thumper> wallyworld: w00t, looks like the big command branch will land, all tests look like they passed
[23:52] <wallyworld> yay
[23:52] <wallyworld> that is most excellent
[23:53] <katco> thumper: i'm afraid to ask, but: does this affect the deploy command?