[00:32] <menn0> anastasiamac: ok, good to know. better just let curtis know.
[00:33] <anastasiamac> menn0: i have emailed Curtis. There is a tiny possibility too that the master was taken from a commit that precedes ur and my changes... hence our work was re-targeted to next milestone although it is in the codebase..
[00:34] <menn0> anastasiamac: could be, although the specific issue xtian and I were working on was one of the reasons for beta13 and I'm fairly certain it did make it
[00:35] <anastasiamac> menn0: \o/ then it's just an oversight and will get cleared tonight
[00:38] <menn0> anastasiamac/thumper: quick review please: http://reviews.vapour.ws/r/5298/
[00:38] <anastasiamac> menn0: looking
[02:12] <thumper> axw: I have some questions around volumes when you have some time
[02:14] <blahdeblah> axw: FWIW, re: https://bugs.launchpad.net/juju-core/+bug/1599503 I reverted the charm storage change which was exercising this bug.  So the production urgency is gone from our perspective.
[02:14] <mup> Bug #1599503: Cannot upgrade charm if storage is modified, even if the service doesn't use said storage <juju-core:In Progress by axwalk> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1599503>
[02:17] <axw> blahdeblah: OK, thanks for letting me know. I was thinking that would be the fastest course of action. I'm on the case anyway, hope to have it fixed today or tomorrow in the 2.0 branch.
[02:17] <axw> thumper: fire away
[02:17] <thumper> axw: probably best over hangout
[02:18] <blahdeblah> axw: FWIW, we don't care about 2.0 for production envs. :-)
[02:18] <axw> blahdeblah: yeah I just mean I'm fixing it there, then will back port :)
[02:18] <axw> back port shouldn't be long behind
[02:19] <thumper> axw: https://hangouts.google.com/hangouts/_/canonical.com/volumes?authuser=1
[02:19] <blahdeblah> axw: cool - thanks
[05:54] <thumper> see ya tomorrow folks
[08:00]  * dimitern waves ;)
[08:02] <dimitern> morning all
[08:02] <macgreagoir> \o
[08:11] <hoenir> morning
[08:11] <frobware> dimitern: welcome back; want to sync?
[08:12] <dimitern> frobware: thanks! sure, just give me ~15m need to sort out the sprint stuff quickly first
[08:13] <frobware> dimitern: ok
[08:35] <dimitern> frobware: hey, let's sync? joining the new "Standup" HO now..
[09:14] <babbageclunk> dimitern: welcome back!
[09:15] <dimitern> babbageclunk: thanks! how's it going ? :)
[09:15] <dimitern> babbageclunk: I see you'll be deserting our motley team for NZ :D
[09:15] <babbageclunk> dimitern: pretty good, I think! How was gophercon?
[09:16] <babbageclunk> dimitern: Yeah :( but :)
[09:16] <dimitern> babbageclunk: awesome! lots of good talks and 2x as many people as 2014
[09:17] <babbageclunk> dimitern: how big is it?
[09:17] <dimitern> babbageclunk: >1500
[09:17] <babbageclunk> dimitern: nice
[09:18] <dimitern> babbageclunk: I'll prep a summary and send it some time this week
[09:20] <dimitern> jam1: ping?
[09:23] <dimitern> jam1: np, ignore that :)
[09:33] <babbageclunk> fwereade_: ping?
[09:37] <babbageclunk> dimitern: got a moment for some advice?
[09:38] <dimitern> babbageclunk: sure
[09:38] <babbageclunk> dimitern: I'm working on bug 1585878
[09:38] <mup> Bug #1585878: Removing a container does not remove the underlying MAAS device representing the container unless the host is also removed. <2.0> <hours> <maas-provider> <network> <reliability> <juju-core:Triaged by 2-xtian> <https://launchpad.net/bugs/1585878>
[09:39] <dimitern> babbageclunk: yeah..
[09:39] <babbageclunk> Talking to Will, it turns out it needs a bit more structure than it first seemed from the bug.
[09:39] <dimitern> babbageclunk: wanna HO or IRC is ok?
[09:40] <babbageclunk> dimitern: Actually HO would be better, now that you say.
[09:40] <babbageclunk> dimitern: In juju-sapphire?
[09:40] <dimitern> babbageclunk: ok, joining the upcoming standup call - "core" I think it's called
[09:40] <dimitern> I don't think I have that one anymore on my cal
[09:41] <babbageclunk> ok, I still see sapphire.
[09:41] <babbageclunk> how about that
[09:41] <babbageclunk> ?
[09:42] <dimitern> babbageclunk: I see it from last friday
[09:52] <fwereade_> babbageclunk, sorry! what can I do for you?
[09:52] <fwereade_> babbageclunk, dimitern: shall I join somewhere?
[11:22] <mup> Bug #1605714 changed: juju2 beta11: LXD containers always pending on ppc64el systems <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605714>
[11:22] <mup> Bug #1605747 changed: [ juju2 beta11 ] Maas system is deployed but agent remains pending <oil> <oil-2.0> <juju-core:Invalid> <https://launchpad.net/bugs/1605747>
[11:22] <mup> Bug #1605756 changed: [ juju2 beta11 ] system show up in juju status as pending but there is no attempt to deploy in maas <oil> <oil-2.0> <juju-core:Invalid> <MAAS:New> <https://launchpad.net/bugs/1605756>
[11:31] <mup> Bug #1605790 changed: Unable to initialize agent <vpil> <juju-core:New> <https://launchpad.net/bugs/1605790>
[11:46] <mup> Bug #1605790 opened: Unable to initialize agent <vpil> <juju-core:New> <https://launchpad.net/bugs/1605790>
[11:52] <mup> Bug #1605790 changed: Unable to initialize agent <vpil> <juju-core:New> <https://launchpad.net/bugs/1605790>
[11:52] <mup> Bug #1605986 changed: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
[11:58] <mup> Bug #1605986 opened: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
[12:01] <mup> Bug #1605986 changed: Creating container: can't get info for image 'ubuntu-trusty' <oil> <oil-2.0> <juju-core:New> <https://launchpad.net/bugs/1605986>
[13:46] <frobware> dimitern: apt-get update generally working for you atm?
[13:47] <dimitern> frobware: I've been using apt update instead, for a while now
[13:47] <dimitern> frobware: but apt-get update worked - just checked
[13:48] <frobware> dimitern: my network is flaky, possibly because I'm setting up IPv6, but lots of things seem to work, equally lots don't...
[13:49] <dimitern> frobware: oh I see :/
[13:49] <frobware> dimitern: I can generally do enough but any update eventually fails. some machines make lots of progress, others stop after 1 hit
[13:52] <dimitern> frobware: maas-proxy could be messing some reqs?
[13:58] <rick_h_> fwereade_: can you make sure to have a card in kanban with the links and such for your PR: https://github.com/juju/juju/pull/5863 please?
[14:01] <frobware> dimitern: ping - standup
[14:01] <dimitern> oops omw
[14:02] <rick_h_> dimitern: natefinch dooferlad ping for standups and such
[14:02] <frobware> dooferlad: ^^
[14:04] <rick_h_> fwereade_: ping for standup ^
[14:13] <mup> Bug #1606256 opened: AWS failed to bootstrap environment refreshing addresses <bootstrap> <ci> <ec2-provider> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1606256>
[14:22] <mup> Bug #1606256 changed: AWS failed to bootstrap environment refreshing addresses <bootstrap> <ci> <ec2-provider> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1606256>
[14:28] <mup> Bug #1605756 opened: [ juju2 beta11 ] system show up in juju status as pending but there is no attempt to deploy in maas <oil> <oil-2.0> <juju-core:New> <MAAS:Invalid> <https://launchpad.net/bugs/1605756>
[14:28] <mup> Bug #1606256 opened: AWS failed to bootstrap environment refreshing addresses <bootstrap> <ci> <ec2-provider> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1606256>
[14:29] <katco> fwereade_: hey, how did you come up with 100 for the batch size? i'm trying to find documentation for that flag
[14:32] <fwereade_> katco, "more pessimistic than the 1000 many reported success when using"
[14:32] <fwereade_> katco, I drew a blank on docs too, went entirely by empirical "does it still work"
[14:32] <katco> fwereade_: ahh :)
[14:33] <katco> fwereade_: yeah is this a flag on mongod? doesn't seem to be there
[14:33] <katco> fwereade_: or looks like maybe mongorestore
[14:33] <fwereade_> katco, I have half a suspicion that a sufficiently-runaway txn-queue could make specific docs problematic in themselves, which is why I asked for a dump
[14:33] <fwereade_> katco, yeah, mongorestore
[14:34] <dimitern> gtv
[14:37] <katco> fwereade_: sorry got distracted. that sounds interesting (i.e. horrible)... what is a "run away txn-queue"? writes/s txns > reads/s txns?
[14:38] <fwereade_> katco, in particular, if you run transactions that *only* have asserts, the txn-queue fields in the affected documents never get cleaned up and grow without bound
[14:39] <fwereade_> katco, mgopurge fixes it; we have code we run on a timer to catch it; but it's a Thing That Can Happen
[14:39] <katco> ew
[14:39] <fwereade_> katco, especially in older environments from before we discovered this
[14:39] <fwereade_> katco, well put ;)
[14:40] <katco> everything about that is ew haha. the situation, our fix, ew
[14:41] <fwereade_> katco, indeed, we *should* better filter the txns so we don't even allow them to hit mgo/txn, to prevent the issue in the first place
[14:41] <katco> fwereade_: ship it, with a request for just a little documentation
[14:41] <fwereade_> katco, ack, tyvm
[14:55] <babbageclunk> fwereade_: I don't really understand the relationship between state/watchers and watcher/watchers. Can you explain?
[14:55] <rogpeppe1> a question for anyone: given that model names are relative to usernames, how can i switch to a model owned by someone else that has the same name as a model owned by me?
[14:56] <frobware> dooferlad: does this work on your network: wget -6 http://security.ubuntu.com/ubuntu/
[14:56] <babbageclunk> I *think* at the bottom everything ends up being a state watcher, right?
[14:58] <babbageclunk> fwereade_: The admonitions in Boring Techniques against workers depending on state watchers are just a specific case of "workers should use the api rather than talking directly to state".
[15:01] <mup> Bug #1606265 opened: Bogus upgrade in progress <ci> <list-controllers> <regression> <reliability> <juju-core:Triaged> <https://launchpad.net/bugs/1606265>
[15:09] <fwereade_> babbageclunk, hey, sorry
[15:10] <fwereade_> babbageclunk, it *is* a special case of that, yes; but more generally it's "don't use watchers that close their channels except where obliged to by reasons surrounding state"
[15:10] <babbageclunk> fwereade_: Ok.
[15:10] <fwereade_> babbageclunk, the relationship is pretty much *just* that one closes its Changes chan and the other one doesn't
[15:11] <fwereade_> babbageclunk, and they have different types in a sort of attempt to encourage people to distinguish between them
[15:11] <babbageclunk> fwereade_: So in my case, I'll add a state one, and then expose that in the API as a watcher/watcher and make a worker that uses that?
[15:11] <fwereade_> babbageclunk, watchers are basically all implemented either in state, or in api/watcher
[15:12] <fwereade_> babbageclunk, exactly, yeah
[15:12] <babbageclunk> fwereade_: Also, I can't find an implementation of a notifywatcher that watches a whole collection (rather than a specific doc).
[15:12] <fwereade_> babbageclunk, hmm; cleanup watcher maybe? *checks*
[15:13] <babbageclunk> fwereade_: Aha, thanks!
[15:13] <babbageclunk> ok, cool.
[15:14] <fwereade_> babbageclunk, state.newNotifyCollWatcher?
[15:14] <fwereade_> babbageclunk, ah yes indeed
[15:21] <fwereade_> babbageclunk, (derail: I don't think there's anything stopping one from implementing a `watcher.WhateverWatcher` against whatever one chooses -- I think the watcher model is pretty useful -- but it is true that the vast majority, and perhaps all, of our live watcher.WhateverWatchers *are* backed by state.WhateverWatchers on a controller somewhere)
[15:22] <fwereade_> babbageclunk, (if one *doesn't* swap out the implementation when writing worker tests, one generally has an unpleasant time and ends up with slow and/or flaky tests)
[15:23] <babbageclunk> fwereade_: Makes sense.
[15:27] <frobware> dimitern: ping - care to debug some ipv6 issues? Only if you have time...
[15:28] <dimitern> frobware: I have some time before I go out at the top of the hour
[15:29] <frobware> dimitern: 1:1 HO?
[15:29] <dimitern> frobware: omw
[15:30] <frobware> dimitern: oh, think I've deleted it. link? :)
[15:30] <dimitern> frobware: me too :)
[15:30] <dimitern> frobware: let's use the last one (standup)
[15:31] <mup> Bug #1606278 opened: juju (2.0) deploy <charm-name>/<revision#> fails <juju-core:New> <https://launchpad.net/bugs/1606278>
[15:31] <mup> Bug #1606282 opened: juju (2.0) deploy <bundle-name> fails as current working directory has a bundle <juju-core:New> <https://launchpad.net/bugs/1606282>
[15:56] <katco> does the phrase "ABA problem" mean something to anyone?
[15:57] <katco> ah... should have searched first: https://en.wikipedia.org/wiki/ABA_problem
[15:59] <mgz> liking swedish pop too much?
[15:59] <katco> mgz: ha, the same joke occurred to me
[16:05] <perrito666> uh, I arrived way too late for the joke
[16:06] <perrito666> katco: interesting how they try to convey the issue in the name but just make it way more unrelated
[16:06] <katco> perrito666: yeah
[16:07] <katco> perrito666: i feel like it should have "race condition" somewhere in the name
[16:07] <katco> "ABA race condition" at least hints at what's happening
[16:08] <mup> Bug #1606300 opened: Race in github.com/altoros/gosigma <cloudsigma-provider> <intermittent-failure> <race-condition> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1606300>
[16:08] <mup> Bug #1606302 opened: testsuite.TestWatchUnitAssignment got Next <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1606302>
[16:14] <redir> morning juju-dev
[16:15] <katco> hey redir
[16:30] <dooferlad> frobware: sorry about the delay, yes, that works for me
[16:31]  * dooferlad goes back to running around like a crazy person
[16:31] <frobware> dooferlad: yep, things work from my router, not my clients.
[16:31] <dooferlad> frobware: ip -6 route ?
[16:32] <frobware> dooferlad: I've fiddled as this was mostly working before. :)
[16:38] <mup> Bug # opened: 1606303, 1606308, 1606310, 1606313
[17:53] <mup> Bug #1606337 opened: Change single to multiple 'auto' stanzas in generated network configuration. <juju-core:New> <https://launchpad.net/bugs/1606337>
[19:59] <mup> Bug #1606354 opened: Created user has no display-name <juju-core:Triaged> <https://launchpad.net/bugs/1606354>
[20:05] <mup> Bug #1606354 changed: Created user has no display-name <juju-core:Triaged> <https://launchpad.net/bugs/1606354>
[20:06] <endomorphosis_> does anyone know how to deal with this error message?
[20:06] <endomorphosis_> cmd supercommand.go:458 storing charm for URL "cs:juju-gui-130": cannot retrieve charm "cs:juju-gui-130": cannot get archive: Get https://api.jujucharms.com/charmstore/v5/juju-gui-130/archive?channel=stable: dial tcp 162.213.33.122:443: getsockopt: connection timed out
[20:08] <mup> Bug #1606354 opened: Created user has no display-name <juju-core:Triaged> <https://launchpad.net/bugs/1606354>
[20:58] <natefinch> sinzui, thumper: do you know what kind of cert we're using for TLS on the juju server?  i.e. RSA or ECDSA (or both if that's possible?  I don't know).   Working on https://bugs.launchpad.net/juju-core/+bug/1604474
[20:58] <mup> Bug #1604474: Juju 2.0-beta12  userdata execution fails on Windows <azure-provider> <juju2.0> <oil> <oil-2.0> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1604474>
[20:59] <sinzui> natefinch: I don't know
[21:00] <natefinch> gotta run for a while, but will be back later
[21:00] <natefinch> sinzui: ok, np
[21:59] <mup> Bug #1488245 changed: Recurring lxc issue: failed to retrieve the template to clone  <canonical-bootstack> <landscape> <lxc> <oil> <juju-core:Invalid> <https://launchpad.net/bugs/1488245>
[23:05] <mup> Bug #1605669 changed: grant-revoke User could not check status with read permission <ci> <grant> <regression> <juju-core:Fix Released by gz> <https://launchpad.net/bugs/1605669>