[00:01] <menn0> perrito666: haha
[00:02] <perrito666> I have the exact same, but no one ever rings it since my office has a window to the outside and I see people coming
[00:08] <perrito666> mm, GoRename in an interface will do the smart thing with its implementation....
[00:10] <redir> perrito666: smart meaning what you want?
[00:10] <perrito666> redir: what else?
[00:11] <redir> :)
[00:11] <redir> perrito666: the right thing?
[00:12] <redir> I wish it updated comments.
[00:12] <redir> I understand why it doesn't but I can wish.
[00:12] <perrito666> ah the comments part is really annoying
[00:12] <perrito666> I once tried GoRename on the implementation instead of the interface, not nice
[00:13] <redir> I also with GoTestFunc worked with check, or vice-versa
[00:13] <redir> *also wish
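A minimal gocheck sketch (hypothetical package and test names, not from the log) of why an editor command aimed at ordinary TestXxx functions can't target these tests: every gocheck test is a method on a registered suite behind a single entry point, so individual tests are selected at run time with go test -check.f rather than -run:

    package mypkg_test

    import (
        "testing"

        gc "gopkg.in/check.v1"
    )

    // The single TestXxx entry point that the standard testing package
    // (and editor tooling like GoTestFunc) actually sees.
    func Test(t *testing.T) { gc.TestingT(t) }

    type mySuite struct{}

    var _ = gc.Suite(&mySuite{})

    // An individual gocheck test: a method on the suite, not a TestXxx
    // function, hence invisible to go test -run.
    func (s *mySuite) TestSomething(c *gc.C) {
        c.Assert(1+1, gc.Equals, 2)
    }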
[00:15] <perrito666> menn0: funny, delegatos in spanish (with a space "dele gatos") means "give that person cats"
[00:15] <menn0> perrito666: LOL!
[00:18] <menn0> ok
[00:22] <menn0> perrito666: ship it
[00:23] <perrito666> menn0: tx a lot
[00:31] <perrito666> well look at that, a 4k monitor costs 6X more in my country than in amazon....
[00:32]  * perrito666 stays in HD
[00:42] <menn0> perrito666: ping
[00:52] <perrito666> menn0: pong
[00:52] <menn0> perrito666: https://github.com/juju/juju/blame/master/apiserver/client/status.go#L853
[00:53] <menn0> perrito666: should that really be || ?
[00:53] <menn0> perrito666: I'm wondering if it's supposed to be &&
[00:56] <perrito666> menn0: let me refresh my memory
[00:56] <menn0> perrito666: sure
[00:56] <menn0> perrito666: I'm not completely sure myself, but it looks suspicious
[00:59] <perrito666> menn0: I am pretty sure I am blamed there for a move since I can't recall wth is going on there, but it makes sense because it might be possible to be in Maintenance but with the message Installing
[00:59] <perrito666> I would track MessageInstalling to determine that
[01:00] <perrito666> I bet it is set when Agent is in Maintenance, this would be a good reason for such ||
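A sketch of the kind of condition being discussed (the names and constant values are assumptions, not the real apiserver/client/status.go code): the agent can already be in Maintenance while the installing message is still set, so either signal alone is enough to display the unit as installing, which is why || rather than && can be intentional:

    package status

    const (
        statusMaintenance = "maintenance"
        // Message the unit agent sets while the charm is still being installed.
        messageInstalling = "installing charm software"
    )

    // displayAsInstalling reports whether the unit should be shown as still
    // installing: either the agent status or the status message is enough.
    func displayAsInstalling(agentStatus, agentMessage string) bool {
        return agentStatus == statusMaintenance || agentMessage == messageInstalling
    }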
[01:00] <menn0> perrito666: actually, I've misread the code
[01:00] <menn0> perrito666: ignore me
[01:00] <perrito666> menn0: sure thing :p
[01:00] <perrito666> ill go have dinner then return if you need a rubber duck
[01:02] <redir> wallyworld: http://reviews.vapour.ws/r/5563/ or menn0 ...
[01:02] <redir> bbiab
[01:02] <wallyworld> ok
[01:02] <redir> ~8pm or so PDT
[01:16] <perrito666> redir: what's with US people and Time standards
[01:17] <perrito666> menn0: need a hand?
[01:17] <natefinch> perrito666: what, we have 6 time zones, which differ based on what time of year it is, thanks to daylight saving time. Not so bad, right?
[01:18] <perrito666> natefinch: actually the problem is not how many you have, it's the names you use
[01:18] <perrito666> much like with every other form of numeric representation apparently
[01:18] <natefinch> lol
[01:21] <natefinch> unless anyone has some special insight into https://bugs.launchpad.net/juju-core/1.25/+bug/1610880 - I'm just going to bail on it.  I've burned a week trying to figure this crap out, and now it's just decided to stop being reproducible :/
[01:21] <mup> Bug #1610880: Downloading container templates fails in manual environment <juju-core 1.25:Triaged by natefinch> <https://launchpad.net/bugs/1610880>
[01:41] <wallyworld> menn0: juju/agent/MigrateParams - there's a check for empty Model, but to me, model should never be empty right?
[01:41] <menn0> perrito666: all good at the moment
[01:42] <menn0> wallyworld: loading context
[01:42] <wallyworld> menn0: also, we now are going to need to start explicitly passing around a controller UUID since that will not be the same as the controller model uuid anymore
[01:43] <menn0> wallyworld: MigrateParams has nothing to do with migrations
[01:43] <menn0> wallyworld: model migrations that is
[01:43] <wallyworld> oh, ffs
[01:44] <wallyworld> it's for migrating older format files
[01:44] <wallyworld> hmmm
[01:44] <wallyworld> 2.0 will be clean slate
[01:49] <perrito666> gnight all
[01:49] <menn0> perrito666:  good night
[01:50] <perrito666> wow, tests are especially unreliable today
[02:26] <natefinch> thumper, wallyworld: either of you have thoughts on a bug I should tackle?  Looks like all the criticals under juju are assigned, except this one:  https://bugs.launchpad.net/juju/+bug/1614635
[02:26] <mup> Bug #1614635: Deploy sometimes fails behind a proxy <landscape> <juju:Triaged by rharding> <https://launchpad.net/bugs/1614635>
[02:27] <wallyworld> natefinch: anything landscape related is good to pick up
[02:32] <thumper> ffs
[02:32] <natefinch> wallyworld: I was hoping to avoid that one, just because it sounds setup-intensive.  I'd need a maas environment behind a firewall using a proxy...
[02:32] <thumper> I'm giving up on the manual ha bug right now
[02:32] <thumper> since you can never get into ha, it won't matter that you can't kill it
[02:32] <natefinch> haha
[02:32] <thumper> this is a rabbit hole
[02:33] <thumper> and a lot more work than I thought
[02:33] <thumper> so perhaps better to work on other more important bugs
[02:34] <natefinch> thumper: exactly the reasoning I used when abandoning my manual bug.... only after following the rabbit hole way too deep.
[02:34] <thumper> natefinch: what was your manual bug?
[02:34] <wallyworld> natefinch: what about https://bugs.launchpad.net/juju/+bug/1617190
[02:34] <mup> Bug #1617190: Logout required after failed login <juju:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1617190>
[02:34] <natefinch> thumper: https://bugs.launchpad.net/juju-core/1.25/+bug/1610880
[02:34] <mup> Bug #1610880: Downloading container templates fails in manual environment <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1610880>
[02:35] <natefinch> thumper: it was 100% reproducible until today, when it became 100% unreproducible, which is when I threw in the towel
[02:35] <thumper> haha
[02:35] <wallyworld> thumper: manual ha will be important for system z etc
[02:36] <wallyworld> but not this week
[02:36] <natefinch> wallyworld: I could do that one, sure.  I wasn't sure if alexis was actually working on it
[02:36] <wallyworld> natefinch: no, she assigns bugs to herself
[02:36] <wallyworld> so she knows what to track
[02:36] <natefinch> wallyworld: ok cool
[02:42] <thumper> wallyworld: http://juju-ci.vapour.ws:8080/job/github-merge-juju/8993/artifact/artifacts/trusty-out.log this is failing often
[02:42] <thumper> but intermittently
[02:42] <thumper>  cmdControllerSuite.TestAddModelWithCloudAndRegion
[02:42] <thumper> [LOG] 0:00.377 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: cannot open api: unable to connect to API: websocket.Dial wss://localhost:59081/model/deadbeef-0bad-400d-8000-4b1d0d06f00d/api: dial tcp 127.0.0.1:59081: getsockopt: connection refused
[02:43] <thumper> not sure why it is failing to connect to the api server
[02:43] <wallyworld> yeah, weird
[02:44] <thumper> actually
[02:44] <thumper> perhaps it failed the first time
[02:44] <thumper> and retried
[02:44] <thumper> but the error was still written out
[02:44] <thumper> the expected string is in the obtained string
[02:44] <thumper> but after an error line
[02:44] <wallyworld> hah, that could be it
[03:33] <thumper> fuck sticks
[03:42] <thumper> menn0-afk: really afk?
[03:42] <thumper> hmm, only 12 minutes, so probably
[03:42] <thumper> anyway, remember the i/o timeout retry patch?
[03:42] <thumper> well the rpc layer returns errors.Trace(err)
[03:42] <thumper> so none of the errors had an error code, and were all fatal
[03:42] <thumper> even the ones that said retry :)
[03:42] <thumper> anyway, identified, and fix QA'd
[03:42] <thumper> then pushing
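A minimal sketch (assumed error type and names, not the actual rpc-layer code) of the failure mode thumper describes: errors.Trace returns a new wrapper value, so a direct type or code check on the returned error stops matching, and the retry classification is lost unless the check unwraps with errors.Cause first:

    package main

    import (
        "fmt"

        "github.com/juju/errors"
    )

    // codedError stands in for an API error that carries a machine-readable code.
    type codedError struct {
        msg, code string
    }

    func (e *codedError) Error() string { return e.msg }

    // isRetryable inspects the underlying error; checking err directly would
    // fail once a layer has done `return errors.Trace(err)`.
    func isRetryable(err error) bool {
        coded, ok := errors.Cause(err).(*codedError)
        return ok && coded.code == "retry"
    }

    func main() {
        var err error = &codedError{msg: "i/o timeout", code: "retry"}
        traced := errors.Trace(err) // what the rpc layer handed back

        _, direct := traced.(*codedError)
        fmt.Println(direct)              // false: the trace wrapper hides the code
        fmt.Println(isRetryable(traced)) // true: errors.Cause recovers it
    }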
[03:43] <thumper> wallyworld: cmdRegistrationSuite.TestAddUserAndRegister failing intermittently too
[03:43] <natefinch> heh, katco fixed a bug just like that somewhere recently
[03:43] <thumper> ...     "ERROR \"api-caller\" manifold worker returned unexpected error: cannot open api: unable to connect to API: websocket.Dial wss://localhost:50983/model/deadbeef-0bad-400d-8000-4b1d0d06f00d/api: dial tcp 127.0.0.1:50983: getsockopt: connection refused\n" +
[03:44] <thumper> in the middle of expected output
[03:44] <wallyworld> same root cause then
[03:44] <thumper> probably worthwhile figuring out why we are getting this connection refused
[03:44] <thumper> it is obviously new
[03:44] <thumper> and causing many failures
[03:44] <wallyworld> might just be slow back end
[03:48] <thumper> part of the problem is the apiserver errors are showing in the client output
[03:48] <thumper> that's just wrong
[03:48] <thumper> but due to how our tests do everything...
[03:51] <thumper> wallyworld: I can see why we are getting the errors
[03:52] <thumper> [LOG] 0:00.168 INFO juju.apiserver listening on "[::]:60736"
[03:52] <thumper> [LOG] 0:00.226 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: cannot open api: unable to connect to API: websocket.Dial wss://localhost:50983/model/deadbeef-0bad-400d-8000-4b1d0d06f00d/api: dial tcp 127.0.0.1:50983: getsockopt: connection refused
[03:52] <thumper> different port
[03:52] <wallyworld> oh, that race condition
[03:52] <thumper> perhaps also ipv6 localhost
[03:52] <wallyworld> that's been there for ages
[03:52] <thumper> but port will do it
[03:53] <wallyworld> apparently it's hard to fix
[03:53] <wallyworld> and the window was considered small enough that we wouldn't see it, but i may be misremembering
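A generic sketch of the pattern that removes this class of race in tests (plain net/http, not juju code): let the OS pick the port and derive the client URL from the live listener, rather than dialing a port that was recorded before the server was (re)started:

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // Bind to port 0 so the OS chooses a free port; the listener itself is
        // the only reliable source of the address to dial.
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        }))

        url := fmt.Sprintf("http://%s/", l.Addr()) // derived from the live listener
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
        fmt.Println("connected to", l.Addr(), "status:", resp.Status)
    }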
[04:10] <thumper> wallyworld: http://reviews.vapour.ws/r/5565/
[04:10] <thumper> wallyworld: if you are busy, I could take another look
[04:10] <wallyworld> i am busy, separating controller and model tag, what a mess, but i can look at pr
[04:12] <wallyworld> thumper: lgtm
[04:17] <thumper> pr is very small
[04:17] <thumper> ta
[04:39] <natefinch> has anyone done anything about not having to pass --config=bootstrap.yaml every time we bootstrap? I keep forgetting, so I end up bootstrapping without all the config options I normally want to use.
[04:46] <thumper> these featuretests are failing almost every time now
[04:46] <thumper> ffs
[04:50] <natefinch> --debug isn't displayed as a flag anywhere anymore?
[04:50] <natefinch> it still works... but how is anyone supposed to know it exists?
[04:52] <natefinch> ahh, help global-options... which is like 3 levels deep in the help :/
[04:59] <menn0> thumper: back
[04:59] <thumper> menn0: got a few minutes to discuss this annoying intermittent failure?
[05:00] <menn0> thumper: yep
[05:00] <thumper> 1:1
[05:49] <menn0> thumper: I used -check.list to see which tests run before TestAddModel
[05:49] <menn0> thumper: passing this should run just those tests: -check.f '(meterStatusIntegrationSuite|UserSuite|debugLogDbSuite|syslogSuite|annotationsSuite|CloudAPISuite|apiEnvironmentSuite|BakeryStorageSuite|blockSuite|cmdControllerSuite.TestAddModel)'
[05:49] <menn0> thumper: i've got the stress tester running now with that
[06:00] <dimitern> wow I've got a full novel as a review from babbageclunk :)
[07:09] <TheMue> morning dimitern
[07:10] <dimitern> hey TheMue :)
[07:10] <TheMue> found some time to lurk around here again ;)
[07:11] <dimitern> TheMue: how's life? ;)
[07:13] <TheMue> dimitern: fine, nice project here using a mix of JS frontend (mostly done by others), Go microservices (here I'm the evangelist), and couchdb as the database
[07:14] <TheMue> dimitern: only fighting with some fences of this bigger company, in total 12K worldwide
[07:15] <TheMue> dimitern: so many old-fashioned processes I have to break for a new product
[07:16] <dimitern> TheMue: sounds interesting :)
[07:16] <TheMue> dimitern: yip, it is, and a good local team here
[07:17] <TheMue> dimitern: only the travel is less interesting than with Canonical :D was the last trip to London again?
[07:20]  * TheMue continues with his crypto stuff ...
[07:21] <dimitern> TheMue: actually I was in Leiden most recently
[07:21] <dimitern> TheMue: but I'm sure we'll be in London soon enough :)
[07:23] <TheMue> dimitern: Netherlands, interesting. a new location.
[07:23] <TheMue> dimitern: I'm missing San Francisco *sigh*
[07:23] <dimitern> TheMue: yeah - was the first time for me there
[07:24] <dimitern> TheMue: I'll be at the Juju Charmers Summit in Pasadena, CA in a couple of weeks
[07:24] <TheMue> dimitern: quickly have to join *lol*
[07:25] <dimitern> TheMue: here's a link: http://summit.juju.solutions/
[07:30] <TheMue> dimitern: don't think my company would send me
[07:39] <rogpeppe1> i see the get-controller-config command. is there any way of setting controller config values?
[07:39] <rogpeppe1> axw, menn0: ^
[07:40] <axw> wallyworld: ^
[07:40] <axw> rogpeppe1: (I don't think so)
[07:40] <rogpeppe1> axw: hmm
[07:40] <axw> AFAICR, you can only set them at bootstrap time right now
[07:53] <wallyworld> rogpeppe1: axw: juju set-model-defaults
[07:53] <wallyworld> ah
[07:54] <wallyworld> controller-config, that is immutable
[07:54] <wallyworld> apiport, stateport etc
[07:54] <wallyworld> controller uuid
[07:54] <wallyworld> did you mean the model defaults?
[07:56] <axw> wallyworld: for argument's sake, it would be nice to be able to toggle auditing-enabled
[07:57] <wallyworld> axw: i thought there was CLI for that, but maybe not
[07:57] <wallyworld> the auditing stuff is POC only
[07:57] <wallyworld> needs to be cleaned up
[09:26] <macgreagoir> babbageclunk: Following up on that lxd bootstrap failure stuff, I'm not seeing any issues with amd64 (trusty or xenial), but I am with ppc64le. Have you tested, by any chance?
[09:27]  * macgreagoir should look for your bug...
[09:39] <mup> Bug #1618798 opened: endpoint not used in lxd provider <juju-core:New> <https://launchpad.net/bugs/1618798>
[09:42] <babbageclunk> macgreagoir: sorry, was afk - it's bug 1618636
[09:42] <mup> Bug #1618636: Bootstrapping with beta16 on lxd gives "unable to connect" error <lxd> <juju:New> <https://launchpad.net/bugs/1618636>
[09:42] <macgreagoir> babbageclunk: Cheers, let me compare notes with that...
[09:43] <babbageclunk> It happens when I bootstrap inside a container - I did that just to sidestep the built-from-source juju binary, but I'll try reproducing it outside the container now.
[09:45] <voidspace> menn0: you should try this: http://www.bbc.co.uk/news/world-europe-37228413
[09:46] <babbageclunk> voidspace: gah!
[09:46] <voidspace> babbageclunk: hardcore
[09:47] <babbageclunk> voidspace: ah, it's fine, he's holding onto that stick for most of it.
[09:48] <voidspace> babbageclunk: ah, fair point - I hadn't noticed that
[09:48] <babbageclunk> voidspace: (I think it's a selfie stick.)
[09:48] <voidspace> babbageclunk: :-)
[09:48] <voidspace> babbageclunk: dilemna
[09:48] <voidspace> babbageclunk: I'm working with CloudImageMetadata
[09:48] <voidspace> babbageclunk: should the plural be CloudImageMetadatas
[09:48] <voidspace> babbageclunk: ?
[09:49] <voidspace> it just sounds awful
[09:49] <voidspace> I'm being inconsistent at the moment - I have a type called cloudimagemetadata and then a collection type that I kind of have to call cloudimagemetadatas
[09:49] <babbageclunk> Not as awful as dilemna
[09:49] <macgreagoir> voidspace: Data is plural ;-)
[09:49] <voidspace> macgreagoir: yes it is
[09:50] <macgreagoir> :-)
[09:50] <voidspace> macgreagoir: hence the dilemna
[09:50] <babbageclunk> Yeah, should be CloudImageMetadatum
[09:50] <voidspace> hah
[09:50] <voidspace> it's not a datum it has data
[09:50] <voidspace> at a higher level of abstraction its a datum I guess
[09:51] <babbageclunk> CloudImageMetadataSet?
[09:51] <macgreagoir> CloudImageMetadatabase :-D
[09:51] <voidspace> babbageclunk: dilemna is valid
[09:51] <voidspace> babbageclunk: and better
[09:51] <voidspace> macgreagoir: hah
[09:51] <voidspace> babbageclunk: maybe
[09:52] <babbageclunk> you're invalid
[09:52] <voidspace> babbageclunk: ableist
[09:53] <babbageclunk> Does that really have an e? When I'm the boss it will not.
[09:54] <voidspace> babbageclunk: your spelling is horrifical
[09:54] <voidspace> babbageclunk: whenever there's an option you pick the most confusing one so as to seem superior
[09:54] <voidspace> babbageclunk: *everyone* knows that
[09:54] <voidspace> gah
[09:56] <babbageclunk> Anyway, there are lots of infos in the codebase, but I don't think datas would be so easily tolerated.
[09:56] <voidspace> babbageclunk: yeah, it's pretty horrible
[09:56] <voidspace> infos isn't great
[10:01] <babbageclunk> voidspace: I think dataset's alright, as long as one isn't too bothered by the implication that it should be a set.
[10:02] <voidspace> babbageclunk: right, I think I agree
[10:02] <voidspace> and there is no real set in go anyway
[10:03] <babbageclunk> And really, if you're ok with dilemna, something purporting to be a set but then not really being one is the least of your problems.
[10:03] <voidspace> :-)
[10:04] <macgreagoir> voidspace: +1 metadataset, fwiw
[10:04] <voidspace> macgreagoir: cool, thanks - sounds like it's decided
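For the record, roughly the shape that was agreed on (illustrative package and field names only, not the real state types):

    package cloudimagemetadata

    // Metadata describes one cloud image metadata record.
    type Metadata struct {
        Region string
        Series string
        Arch   string
    }

    // MetadataSet is the collection type: "set" sidesteps the awkward plural
    // "metadatas" without claiming the elements are actually deduplicated.
    type MetadataSet []Metadata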
[10:27] <babbageclunk> macgreagoir: I haven't managed to reproduce that bug outside a container. How can I determine whether the agent is being uploaded from my machine rather than pulled from a stream?
[10:29] <babbageclunk> macgreagoir: (because I think that might be confounding things)
[10:30] <macgreagoir> babbageclunk: Let me check my logs. I can repro on amd64 only in a nested container, like you.
[10:32] <macgreagoir> babbageclunk: I see tools downloaded (in both archs). The error in ppc64le is clearly from the code that tries to use local as remote.
[10:32] <macgreagoir> I guess that's a fair starting point for amd64 too.
[10:33] <macgreagoir> The ppc64le bug is back on beta11.
[10:43] <ashipika> mgz: ping?
[10:51] <macgreagoir> babbageclunk: Just fyi, https://bugs.launchpad.net/juju/+bug/1618636/comments/1
[10:52] <mup> Bug #1618636: Bootstrapping with beta16 on lxd gives "unable to connect" error <lxd> <juju:New> <https://launchpad.net/bugs/1618636>
[10:53] <babbageclunk> macgreagoir: another interesting data point, I get much more output (from cloud-init, apt installs etc) from the bootstrap process in the container than I do from the one running on my host - do you know why that might be?
[10:54] <macgreagoir> An interesting datum? :-)
[10:55] <macgreagoir> babbageclunk: I'm trying to get a ppc64le build with some more logging output, but not succeeding yet.
[10:56]  * babbageclunk lols
[11:00] <mattyw> hey folks, how can one login to the models' mongo db now?
[11:07] <babbageclunk> mattyw: the juju-mongodb package includes the mongo binary now, so you can run that, connecting to 127.0.0.1:37017 on the controller. Get the admin user's password from agent.conf, and pass --sslAllowInvalidCertificates.
[11:10] <mattyw> babbageclunk, I get errors about --sslAllowInvalidCertificates not being a valid arg
[11:11] <babbageclunk> mattyw: You need to run the mongo from /usr/lib/juju/mongo3.2/bin - check the version?
[11:13] <rogpeppe1> fwereade: i'm seeing a lot of this kind of test failure. any idea why we might be seeing more of it recently? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9009/artifact/artifacts/trusty-out.log/*view*/
[11:14] <babbageclunk> mattyw: Oh, you need to pass --ssl as well.
[11:15] <babbageclunk> mattyw: I'm trying to do this now too, but I haven't gotten in yet.
[11:15] <mattyw> babbageclunk, I think I wasn't using juju-mongodb, I'm updating my script now
[11:16] <mattyw> babbageclunk, /usr/lib/juju/mongo3.2/bin/mongo: No such file or directory
[11:16] <babbageclunk> mattyw: Really? On xenial?
[11:16] <mattyw> babbageclunk, trusty
[11:17] <babbageclunk> mattyw: that would do it - sorry, all of this has been mongo 3.2 advice.
[11:17] <mattyw> babbageclunk, do we not install 3.2 on trusty as well?
[11:18] <babbageclunk> mattyw: Not sure. Does the directory exist?
[11:18] <mattyw> babbageclunk, it doesn't
[11:19] <babbageclunk> mattyw: Then I think the mongo binary will be in /usr/lib/juju/bin instead, if it's anywhere.
[11:19] <mattyw> babbageclunk, I see all the server side packages but none of the client ones
[11:20] <babbageclunk> mattyw: Ok, in that case you should be able to install mongodb-clients
[11:22] <macgreagoir> babbageclunk: Again, just fyi, I found a beta12 ppc64el deb. I don't see the lxd issue in this test.
[11:23] <mattyw> babbageclunk, so this is the script I'm using http://paste.ubuntu.com/23115673/
[11:23] <babbageclunk> macgreagoir: I don't see the lxd problem in beta15 either.
[11:23] <mattyw> babbageclunk, must be something wrong with the last line - the args sent to mongo
[11:24] <babbageclunk> mattyw: On trusty it's mongo 2.4, so the args are different
[11:26] <mattyw> babbageclunk, any idea what they should be?
[11:27] <macgreagoir> babbageclunk: I can't find or build a ppc beta15 to test :-/
[11:27] <babbageclunk> mattyw: just trying it out now - had to bootstrap a trusty controller
[11:28] <babbageclunk> mattyw: This works for me: mongo -u admin -p <password> 127.0.0.1:37017/juju  --authenticationDatabase admin --ssl
[11:32] <mattyw> babbageclunk, sorry mate, how do I login to the controller, juju ssh -m default 0?
[11:33] <babbageclunk> mattyw: juju ssh -m controller 0
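For anyone scripting the same connection from Go, a sketch with mgo (the driver juju uses); the password is a placeholder you would read from agent.conf, and the options mirror --authenticationDatabase admin, --ssl and --sslAllowInvalidCertificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"

        mgo "gopkg.in/mgo.v2"
    )

    func main() {
        info := &mgo.DialInfo{
            Addrs:    []string{"127.0.0.1:37017"},
            Database: "admin", // authentication database
            Username: "admin",
            Password: "password-from-agent.conf", // placeholder
            // TLS without verifying the controller's self-signed certificate,
            // the equivalent of --ssl --sslAllowInvalidCertificates.
            DialServer: func(addr *mgo.ServerAddr) (net.Conn, error) {
                return tls.Dial("tcp", addr.String(), &tls.Config{InsecureSkipVerify: true})
            },
        }
        session, err := mgo.DialWithInfo(info)
        if err != nil {
            panic(err)
        }
        defer session.Close()
        names, err := session.DatabaseNames()
        if err != nil {
            panic(err)
        }
        fmt.Println("connected; databases:", names)
    }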
[11:37] <fwereade> rogpeppe1, those look like it's "just" a logging change... and that ideally we'd have the logging going somewhere else so we could inspect the command's contributions to stderr alone
[11:38] <rogpeppe1> fwereade: it's a sporadic failure
[11:40] <rogpeppe1> fwereade: do you think that the API connection failure is expected there then?
[11:41] <fwereade> rogpeppe1, right -- most of the time we connect flawlessly, sometimes the odd connection failure should be retried transparently
[11:41] <fwereade> rogpeppe1, that was my read
[11:41] <rogpeppe1> fwereade: interesting. i'll go back and look at the other test failures in that light
[11:43] <rogpeppe1> fwereade: do you think that perhaps the manifold worker shouldn't be reporting at ERROR level?
[11:44] <rogpeppe1> fwereade: and... this is client-side, right? what's the worker doing logging its errors there?
[11:45] <rogpeppe1> fwereade: or does the client side have workers too now?
[11:45] <fwereade> rogpeppe1, this is a featuretest: it is indeed probable that it's just part of the appropriate chunk-of-controller
[11:46] <rogpeppe1> fwereade: i guess we could paper over the issue by configuring the logger used for the command output to exclude all logs from juju.worker.*
[11:47]  * fwereade winces a little, and worries how well that's going to work in practice... but logs and commands are tangly, so, well, I suppose it'd be expedient
[11:47] <rogpeppe1> fwereade: FWIW this test failure and others like it have wasted a lot of my time since yesterday. it's definitely worth some kind of fix :)
[11:48] <rogpeppe1> fwereade: yeah, i'm not sure either.
[11:48] <fwereade> rogpeppe1, yeah, definitely not arguing it doesn't need a solution, just pushing for a bit of find-the-ultimate-cause ;)
[11:49] <rogpeppe1> fwereade: alternatively, maybe that manifold error doesn't justify ERROR status and could be logged at INFO level
[11:49] <rogpeppe1> fwereade: after all, it's fairly run-of-the-mill to get errors there.
[11:50] <fwereade> rogpeppe1, probably also reasonable, yeah
[11:50] <fwereade> rogpeppe1, but similarly leaves us vulnerable to similar changes infecting these tests in future
[11:50] <rogpeppe1> fwereade: yeah, i was thinking that too
[11:51] <fwereade> rogpeppe1, mainly a matter of how-much-time-can-you-justify, I think :(
[11:52] <rogpeppe1> fwereade: zero currently - i'm off on hols tomorrow :)
[11:52] <rogpeppe1> fwereade: i'm just trying to remind myself how that output gets captured in tests
[12:08] <rogpeppe1> fwereade: ha, interesting realisation: there's no way to get a logging source to totally shut up, because CRITICAL messages will always be logged.
[12:09] <rogpeppe1> luckily nothing logs at CRITICAL level
[12:16] <fwereade> rogpeppe1, two opposing bugs in perfect balance, eh
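A small loggo sketch (loggo is the logging library juju uses) of both points from the discussion above: a module can be configured up to CRITICAL so its ERROR lines stay out of captured command output, but CRITICAL itself is the floor, so a logger can never be silenced completely:

    package main

    import (
        "github.com/juju/loggo"
    )

    func main() {
        // The "paper over it" idea: leave everything else at WARNING but raise
        // juju.worker so its ERROR spam no longer reaches the default writer.
        if err := loggo.ConfigureLoggers("<root>=WARNING;juju.worker=CRITICAL"); err != nil {
            panic(err)
        }

        logger := loggo.GetLogger("juju.worker.dependency")
        logger.Errorf("manifold worker returned unexpected error") // suppressed
        logger.Criticalf("this still gets through")                // always logged
    }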
[12:44] <babbageclunk> jhobbs: around?
[12:58] <perrito666> fwereade: ping?
[13:22] <babbageclunk> jhobbs: ping?
[13:24] <frobware> dimitern: ping - can we sync before standup please?
[13:28] <dimitern> frobware: ok, just a couple of minutes and I'll join standup HO
[13:28] <frobware> dimitern: ack
[13:37] <dimitern> frobware: oops, sorry
[14:01] <katco> fwereade: mgz: standup time
[14:06] <dimitern> babbageclunk: ping
[14:12] <babbageclunk> dimitern: pong!
[14:30] <dimitern> babbageclunk: hey, I've pushed some changes to http://reviews.vapour.ws/r/5559/
[14:30] <babbageclunk> dimitern: ok, I'll take a look.
[14:30] <dimitern> babbageclunk: ta!
[14:31] <dimitern> voidspace, frobware: please take a look if you can as well ^^
[14:33] <dimitern> babbageclunk, voidspace, frobware: sorry, I've realized I didn't actually push them :/ now I did
[14:59] <dimitern> fwereade_: you've got a review
[15:04] <fwereade_> dimitern, tyvm
[15:07] <rogpeppe1> mgz: any idea why this failed? i can't see any errors. http://juju-ci.vapour.ws:8080/job/github-merge-juju/9014/
[15:09] <dimitern> rogpeppe1: check trusty-err.log: state/modelmigration_test.go:29: undefined: state.ModelMigrationSpec
[15:10] <macgreagoir> babbageclunk: Out of interest, your test for lxd bootstrap with a from-source binary, did you also have a jujud available in ./ ?
[15:10] <voidspace> it compiles!
[15:10] <rogpeppe1> dimitern: oh i didn't realise compiler errors were hidden in the other artifact
[15:10] <voidspace> Doesn't *work*, but it compiles...
[15:10] <rogpeppe1> dimitern: that's... unintuitive
[15:11] <babbageclunk> macgreagoir: no, I was in ~, and mangled my path and gopath so that there were no from-source juju binaries around.
[15:11] <babbageclunk> macgreagoir: But I've worked out my problem - we ripped out the bit of the lxd provider that tells LXD to listen to https.
[15:12] <dimitern> rogpeppe1: it says where to look plainly in the console log :)
[15:13] <macgreagoir> babbageclunk: OK, cool. One down :-)
[15:13] <rogpeppe1> dimitern: it does? where does it say that?
[15:13] <babbageclunk> macgreagoir: well, not quite sure where to put it back in, but yeah, the mystery's solved!
[15:14] <dimitern> rogpeppe1: See /var/lib/jenkins/workspace/github-merge-juju/artifacts/trusty-err.log
[15:14] <macgreagoir> babbageclunk: I can't reproduce my bug on ppc with tip.
[15:14] <dimitern> rogpeppe1: ~10 lines from the bottom going up
[15:14] <babbageclunk> macgreagoir: yay, it's fixed! ;)
[15:14] <macgreagoir> :-D
[15:15] <rogpeppe1> dimitern: i know that's where the error is
[15:15] <rogpeppe1> dimitern: but i think it's unintuitive that most test failure output goes into trusty-out.log but compiler errors go into trusty-err.log
[15:16] <rogpeppe1> dimitern: i think that both stderr and stdout from the go test should go into the same stream
[15:16] <dimitern> rogpeppe1: ah, sorry - yeah, that's not so obvious
[15:16] <babbageclunk> rogpeppe1: +1
[15:19] <rock___>  Hi. I deployed OpenStack on LXD using https://github.com/openstack-charmers/openstack-on-lxd. I created a "cinder-storagedriver" charm and pushed it to the public charm store. When I tried to deploy the "cinder-storagedriver" charm from the Juju GUI by adding a relation to "cinder", it threw an error.
[15:19] <rock___> ERROR: Relation biarca-openstack:juju-info to cinder:juju-info: cannot add relation   "biarca-openstack:juju-info cinder:juju-info" : principal and subordinate applications' series must match.
[15:19] <rock___> But I can deploy my charm through the juju CLI by taking it from the charm store, and I can add the relation to cinder successfully. Everything works fine.
[15:20] <dimitern> rock___: can you paste a link to the charm you pushed?
[15:20] <marcoceppi> rock___: biarca-openstack needs to be the same operating system series as the charm deployed
[15:20] <dimitern> rock___: also, how did you deploy it?
[15:20] <marcoceppi> rock___: I imagine the GUI is pulling the wrong series of the charm
[15:21] <rock___> https://jujucharms.com/u/siva9296/biarca/0
[15:22] <rock___> my charm supports xenial, trusty and precise. The rest of the openstack deployment supports xenial. juju status pasted info: http://paste.openstack.org/show/565199/
[15:23] <rock___> As part of debugging, I deployed juju-gui as $juju deploy cs:juju-gui-134 on our setup. But it was showing the series as trusty even though we had chosen xenial.
[15:25] <dimitern> rock___: that paste link does not open for me - can you retry pasting it with paste.ubuntu.com or some other service please?
[15:27] <rock___> dimitern: juju status pasted info http://paste.openstack.org/show/565217/
[15:28] <dimitern> rock___: ok, I opened that fine, but the charm urls are not there - please paste `juju status --format=yaml` for more details?
[15:28] <rock___> marcoceppi: Yes. GUI is pulling the wrong series of the charm. When I deployed my charm using JUJU CLI. It worked fine.
[15:30] <dimitern> rock___: if your config has 'default-series: trusty' that might be the cause; alternatively, deploy biarca with --series xenial to be explicit about it
[15:31] <dimitern> (it should complain if that's required, but it might just be picking the first entry in the list of series from metadata - trusty in this case)
[15:32] <dimitern> babbageclunk, frobware, voidspace: a friendly review poke :)
[15:35] <rock___> dimitern: yes. In metadata.yaml trusty is the first one in the list of series. But then how did it work fine when I deployed my charm through the juju CLI?
[15:36] <babbageclunk> dimitern: I was still reviewing it! It takes a long time to type all of my complaining. ;)
[15:37] <dimitern> babbageclunk: ah, sorry :) sure, take your time!
[15:37] <dimitern> rock___: are you using the CLI from a trusty machine or a xenial one?
[15:38] <rock___> dimitern: From a Xenial machine.
[15:38] <babbageclunk> dimitern: Finished now.
[15:38] <babbageclunk> dimitern: that approach is neat!
[15:38] <dimitern> babbageclunk: sweet! thanks :)
[15:39] <rock___> dimitern: juju status --format=yaml pasted info http://paste.openstack.org/show/565227/
[15:42] <dimitern> rock___: thanks! there you have it - the gui is running as trusty
[15:43] <dimitern> how that happened I'm not sure... but does it work if you deploy your charm from the CLI?
[15:43] <rock___> dimitern: As part of debugging, I deployed juju-gui as $juju deploy cs:juju-gui-134 on our setup. But it was showing the series as trusty even though we had chosen xenial.
[15:45] <rock___> Actually, we don't need to deploy juju-gui separately. If we run "$sudo juju gui" it will give https://10.75.116.66:17070/gui/3d58eec3-a3ed-4430-8eb4-8f3ec7db7ea8/.
[15:46] <dimitern> rock___: yeah, it sounds like it should refuse to deploy it if it's a multi-series charm and no series is given (it used to - perhaps something changed recently)
[15:47] <rock___> dimitern: We manually deployed juju-gui to know which series it is going to take.
[15:48] <rock___> dimitern: we chose xenial but it has taken trusty.
[15:48] <dimitern> rock___: but you should then use the full cs: url - e.g. cs:xenial/juju-gui-134
[15:48] <dimitern> or instead, juju deploy cs:juju-gui-134 --series xenial
[15:51] <dimitern> rock___: unless some config gets in the way - you can check whether `juju model-config | grep -i trusty` returns anything
[15:53] <voidspace> dimitern: you've had a review on 5559 - or do you want a review on something else?
[15:53] <voidspace> dimitern: ah, I'm just slow
[15:53]  * voidspace has now read the backscroll
[15:53] <dimitern> voidspace: :)
[15:54] <babbageclunk> voidspace: other people are allowed to review it too!
[15:55] <voidspace> babbageclunk: frankly it's better for everyone if they do...
[15:55] <frobware> dimitern: hey, sorry. was sidetracked with maas-2.1 and bridge_all=True
[15:55] <rock___> dimitern:  I tried $sudo juju deploy cs:juju-gui-134 --series xenial.  pasted log for this is http://paste.openstack.org/show/565229/. sudo juju model-config | grep -i trusty returns nothing.
[15:56] <babbageclunk> voidspace: I feel like you're saying the same thing as me but mean the opposite. :)
[15:57] <voidspace> babbageclunk: uhm, that would assume I have even the faintest idea what I'm talking about
[15:57] <voidspace> babbageclunk: a very dangerous assumption
[15:57] <dimitern> rock___: oops, it seems you've got a panic there - sorry about that
[15:57] <babbageclunk> voidspace: you know what they say about assumption
[15:57] <voidspace> heh
[15:58] <dimitern> rock___: it might be worth trying the next (last) beta16 to see if it's any better (and produces no panic)
[16:00] <rock___> dimitern: So Juju 2.0 is the development channel. So the charms present in development channel were migrated to edge?
[16:00] <dimitern> rock___: if it's still there, please file a bug with that output, a link to the charm, and the version of juju!
[16:00] <babbageclunk> jhobbs: ping?
[16:01] <rock___> dimitern: So I have to upgrade to juju latest version[Juju 2.0-beta16] right?
[16:02] <dimitern> rock___: I'd first try that to see if it was fixed between beta15 and beta16
[16:02] <dimitern> rock___: if not - then please file a bug and we'll triage it
[16:02] <jhobbs> hi babbageclunk
[16:03] <babbageclunk> jhobbs: Hey! I'm trying to do some digging on https://bugs.launchpad.net/juju/+bug/1611159
[16:03] <mup> Bug #1611159: model not successfully destroyed, and error on "juju list-models" <oil> <oil-2.0> <juju:Triaged by 2-xtian> <https://launchpad.net/bugs/1611159>
[16:04] <babbageclunk> jhobbs: How can I reproduce it?
[16:05] <jhobbs> well it's an automated test setup
[16:06] <jhobbs> babbageclunk: i have a maas server with 12 machines commissioned in it, and 1 vm. i bootstrap with the VM as the controller node, and then jenkins adds a couple of models and deploys openstack to them, runs some tests, then destroys the models
[16:06] <jhobbs> each model is handled by a separate worker in jenkins and done in parallel
[16:07] <jhobbs> if i were going to try to reproduce it outside of oil i would use juju with a maas provider, one controller, and add and remove a bunch of models in parallel
[16:08] <rock___> dimitern: ok. Thank you.
[16:08] <redir> morning
[16:08] <babbageclunk> jhobbs: ok, I'll try that. Do you feel like the bundles being deployed would make a difference? Or is it more about the models being added and removed?
[16:09] <jhobbs> i really don't know
[16:09] <jhobbs> it seems the failure is around models being created and destroyed
[16:09] <jhobbs> but maybe other stuff going on at the same time contributes to that
[16:09] <babbageclunk> jhobbs: Yeah, that makes sense.
[16:11] <babbageclunk> jhobbs: Is this maas 2, or 1.9? I'll start trying to reproduce it with beta16 - are you working with that or another version in particular?
[16:14] <jhobbs> babbageclunk: MAAS 2, and that was with beta 14
[16:14] <jhobbs> i can try to reproduce again with 16, maybe later today, if i do, are there any settings i should enable for logging?
[16:14] <frobware> dimitern: oooh bridges! http://178.62.20.154/~aim/bridges.png
[16:15] <frobware> dimitern: minor quibble -  I can't actually login to the node
[16:16] <dimitern> frobware: weeell - it's alpha1 :)
[16:16] <babbageclunk> jhobbs: I think the logging you're using is good - mostly I want to be able to poke around in the system once it's in this state to try to work out what to look at next.
[16:17] <frobware> dimitern: and, heh... all the things we've discovered you can't do.... and some of them are back again.
[16:18] <frobware> dimitern: MTU issues -> http://pastebin.ubuntu.com/23116684/
[16:18] <dimitern> frobware: we've been there, done that
[16:18] <dimitern> frobware: but that's likely as much curtin's fault as maas'es
[16:18] <frobware> dimitern: I'll raise this via email/bug with maas folks
[16:19] <dimitern> frobware: +1
[16:20] <frobware> dimitern: I need to EOD now - will pick this up in the morning.
[16:20] <dimitern> frobware: ok, have a good one ;)
[16:21] <dimitern> I'll be EODing soon as well
[16:41] <mup> Bug #1618963 opened: Local provider can't deploy on xenial <juju-core:New> <https://launchpad.net/bugs/1618963>
[18:10] <jcastro> anyone have opinions on: https://bugs.launchpad.net/juju/+bug/1618996
[18:10] <mup> Bug #1618996: unable to specify manually added machines from bundle.yaml <juju:New> <https://launchpad.net/bugs/1618996>
[19:26] <katco> jcastro: do you know if that worked in b15?
[19:26] <katco> jcastro: or is this a new feature request?
[19:27] <natefinch> jcastro: is the bundle supposed to declare the machines, too?  I haven't really dealt with bundles yet, but I'm sorta surprised that it would work without machines specified in the bundle itself
[19:33] <jcastro> the bundle is declaring the machines
[19:33] <katco> natefinch: there is a "To" field that i think might allow you to place things on existing machines.
[19:33] <jcastro> right
[19:33] <katco> jcastro: i'm not sure if this is a regression or a new feature. could you check to see if it works on b15?
[19:36] <katco> jcastro: also, is this bundle hand-crafted? wondering if the to directives are wrong: https://github.com/juju/charm/blob/v6-unstable/bundledata.go#L201
[19:37] <katco> jcastro: i.e. maybe it should be "cinder/0" or "lxc:0", not "1"
[19:40] <natefinch> also, does it work somewhere other than manual?
[19:41] <jcastro> marcoceppi: ^^
[20:04] <redir> Review anyone? http://reviews.vapour.ws/r/5569/
[20:20] <marcoceppi> jcastro: what's the action item?
[20:21] <alexisb> katco, can you help out redir
[20:21] <katco> alexisb: yes in a bit
[20:21] <redir> no major rush
[20:42]  * katco just finished a true unit test for deploying a bundle :)
[20:42]  * redir plays trumpet, pops champagne and passes a glass to katco
[20:42] <katco> i haven't looked at the diff, but i think this will be a pain to review.
[20:42]  * redir backs away slowly
[20:43] <katco> i need to move some things around too
[20:44] <redir> alexisb: you have some controllers created ?
[20:44] <redir> anyone have a couple controllers created
[20:44] <redir> just laying around...
[20:45] <katco> redir: i have some bootstrapped
[20:45] <redir> katco: what's the output of juju show-controllers?
[20:46] <natefinch> hey, color, neat
[20:46] <katco> redir: gah.... i am lying to you. when did i destroy that...
[20:46] <natefinch> redir:
[20:46] <natefinch> CONTROLLER  MODEL       USER         CLOUD/REGION
[20:46] <natefinch> google*     default     admin@local  google/us-east1
[20:46] <natefinch> localhost   controller  admin@local  lxd/localhost
[20:46] <natefinch> lxd         default     admin@local  lxd/localhost
[20:46] <katco> natefinch: ta
[20:46] <natefinch> google is in blue
[20:46] <redir> natefinch: that is show or list?
[20:47] <redir> looks like show
[20:47] <natefinch> redir: oh... I didn't realize there were two different commands
[20:47] <natefinch> that's horrible
[20:47] <katco> god me either...
[20:47] <redir> which is that?
[20:47] <redir> that you have the output from?
[20:47] <katco> that is awful. who upon first use knows the difference between show and list?
[20:47] <natefinch> that's "juju controllers"  which uh... is the same as list-contollers
[20:48]  * katco face-palms hard
[20:48] <redir> ok so what's the output of show-controllers?
[20:48] <redir> just one entry?
[20:48] <natefinch> yes, just one
[20:48] <katco> juju show-controllers should be juju controllers --detail{s,ed}
[20:48] <katco> or something
[20:48] <natefinch> katco: +10000000000000000000000000000000
[20:48] <katco> but not 2 commands with verbs that are synonymous
[20:48] <redir> whelp
[20:48] <alexisb> redir, yes I do
[20:49] <redir> alexisb: natefinch answered my question
[20:49] <redir> but it leaves me new questions
[20:49] <natefinch> yeah, show controllers only ever shows one, as far as I can tell, the current one
[20:49] <natefinch> which uh, makes it poorly named
[20:50] <redir> natefinch: yes it appears to be an alias to show-controller
[20:50] <natefinch> aaaaahhhhhhh
[20:50] <katco> natefinch: juju controllers <name> could do the right thing
[20:50] <redir> at least help for plural shows the help for singular
[20:50] <natefinch> redir: yes that appears to be true
[20:50] <katco> wait... so show-controller is just an alias for list-controllers?
[20:50] <natefinch> no no
[20:50] <redir> no
[20:51] <natefinch> show-controllers is an alias for show-controller
[20:51] <redir> show-controllers appears to be an alias for what natefinch said
[20:51] <katco> ...why
[20:51] <natefinch> except that every other command is very careful not to alias plurals... because show-xxx  is supposed to show exactly one, and list-xxx is supposed to show many
[20:51] <katco> we should remove that alias. show-controller makes *more* sense to me at least
[20:51] <alexisb> natefinch, redir, katco: the cli is consistent (or should be consistent) with "<somecommand>s" being the same as "list-<command>s"
[20:52] <redir> because strange attractors
[20:52] <alexisb> if show-controller is aliased to controllers that is wrong
[20:52] <katco> alexisb: show-controllers is aliased to show-controller (i think?)
[20:52] <alexisb> ah yeah that needs to be cleaned up still
[20:52] <redir> ding ding I think katco said the right thing
[20:52] <katco> which makes no sense to me
[20:52] <redir> and natefinch before that
[20:52] <alexisb> we still have plurals where we shouldnt
[20:52] <alexisb> list should be plural and show should be singular
[20:53] <redir> so I just added agent version info to show-controllers
[20:53] <redir> but it only shows me one
[20:53] <alexisb> the original design didnt start that way and we havent cleaned up yet
[20:53] <alexisb> there is a bug open
[20:53] <natefinch> oh man
[20:53] <redir> and I am trying to understand if it should show more
[20:54] <redir> Oh then I just added version to show-controller and plural is a vestigial alias
[20:54] <redir> yes?
[20:54] <natefinch> is there a bug open to hide aliases from juju show commands? cause, right now:
[20:54] <natefinch> r$ juju help commands | wc -l
[20:54] <natefinch> 168
[20:54] <alexisb> redir, there should not be an alias for show controller
[20:54] <redir> OK
[20:55] <redir> alexisb: care to HO?
[20:55] <alexisb> redir, sure
[20:55] <jcastro> while we're at it, all the action CLI commands need a redo IMO
[20:55] <redir> standup?
[20:55] <alexisb> jcastro, adding to the pile will not help (also you will need to elaborate)
[20:55] <jcastro> heh
[20:55] <katco> jcastro: only we can criticize our CLI! we're retaking these complaints
[20:56] <jcastro> yeah so, if you think of doing an action, show-action-status and show-action-output break the flow
[20:56] <jcastro> I can never remember them so I constantly have to refer to the docs
[20:57] <natefinch> Our CLI: https://1.bp.blogspot.com/-ZM7ejcL9pk8/Vr4roZBEJsI/AAAAAAAACpY/oyyCEKiAs7A/s1600/TheSimpsons1218-1.jpg
[20:57] <jcastro> but none of my complaints are 2.0 material I don't think
[21:02] <katco> redir: bootstrapping your change now, only 1 minor comment for the review
[21:07] <thumper> morning
[21:08] <redir> katco: k tx
[21:09] <redir> katco: and going to eradicate the alias for that too
[21:09] <katco> redir: ship it
[21:10] <redir> katco: tx
[23:13] <menn0> wallyworld: here's the change to extract the unit status logic in the apiserver http://reviews.vapour.ws/r/5571/
[23:14] <wallyworld> ok
[23:16] <alexisb> thumper, axw ping
[23:16] <alexisb> perrito666, ping
[23:16] <axw> alexisb: pong, just joined
[23:16] <perrito666> alexisb: pong
[23:17] <alexisb> perrito666, standup
[23:33] <perrito666> yay bug landed