[00:01] perrito666: haha [00:02] I have the exact same, but no one ever rings it since my office has a window outside and I see people coming [00:08] mm, GoRename in an interface will do the smart thing with its implementation.... [00:10] perrito666: smart meaning what you want? [00:10] redir: what else? [00:11] :) [00:11] perrito666: the right thing? [00:12] I wish it updated comments. [00:12] I understand why it doesn't but I can wish. [00:12] ah the comments part is really annoying [00:12] I tried once GoRename on implementation instead of the interface, not nice [00:13] I also with GoTestFunc worked with check, or vice-versa [00:13] *also wish [00:15] menn0: funny, delegatos in spanish (with a space "dele gatos") means "give that person cats" [00:15] perrito666: LOL! [00:18] ok [00:22] perrito666: ship it [00:23] menn0: tx a lot [00:31] well look at that, a 4k monitor costs 6X more in my country than in amazon.... [00:32] * perrito666 stays in HD [00:42] perrito666: ping [00:52] menn0: pong [00:52] perrito666: https://github.com/juju/juju/blame/master/apiserver/client/status.go#L853 [00:53] perrito666: should that really be || ? [00:53] perrito666: I'm wondering if it's supposed to be && [00:56] menn0: let me refresh my memory [00:56] perrito666: sure [00:56] perrito666: I'm not completely sure myself, but it looks suspicious === natefinch-afk is now known as natefinch [00:59] menn0: I am pretty sure I am blamed there for a move since I can't recall wth is going on there but, it makes sense because it might be possible to be in Maintenance but with message Installing [00:59] I would track MessageInstalling to determine that [01:00] I bet it is set when Agent is in Maintenance, this would be a good reason for such || [01:00] perrito666: actually, I've misread the code [01:00] perrito666: ignore me [01:00] menn0: sure thing :p [01:00] I'll go have dinner then return if you need a rubber duck [01:02] wallyworld: http://reviews.vapour.ws/r/5563/ or menn0 ... [01:02] bbiab [01:02] ok [01:02] ~8pm or so PDT [01:16] redir: what's with US people and Time standards [01:17] menn0: need a hand? [01:17] perrito666: what, we have 6 time zones, which differ based on what time of year it is, thanks to daylight saving time. Not so bad, right? [01:18] natefinch: actually the problem is not how many you have, it's the names you use [01:18] much like with every other form of numeric representation apparently [01:18] lol [01:21] unless anyone has some special insight into https://bugs.launchpad.net/juju-core/1.25/+bug/1610880 - I'm just going to bail on it. I've burned a week trying to figure this crap out, and now it's just decided to stop being reproducible :/ [01:21] Bug #1610880: Downloading container templates fails in manual environment [01:41] menn0: juju/agent/MigrateParams - there's a check for empty Model, but to me, model should never be empty right? [01:41] perrito666: all good at the moment [01:42] wallyworld: loading context [01:42] menn0: also, we now are going to need to start explicitly passing around a controller UUID since that will not be the same as the controller model uuid anymore [01:43] wallyworld: MigrateParams has nothing to do with migrations [01:43] wallyworld: model migrations that is [01:43] oh, ffs [01:44] it's for migrating older format files [01:44] hmmm [01:44] 2.0 will be clean slate [01:49] gnight all [01:49] perrito666: good night [01:50] wow, tests are especially unreliable today [02:26] thumper, wallyworld: either of you have thoughts on a bug I should tackle?
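For context on the || question about apiserver/client/status.go#L853 above: perrito666's point is that a unit agent can report Maintenance while the unit's status message still says the charm is installing, so either signal on its own is taken to mean "still installing". A minimal sketch of that shape — the identifiers below are hypothetical, not the actual status.go code:

```go
package status

// unitStillInstalling sketches the shape of the check being questioned: a
// unit whose agent status is "maintenance" OR whose status message still
// says the charm is installing is reported as installing. The constants and
// the function are hypothetical stand-ins, not the real juju code.
func unitStillInstalling(agentStatus, statusMessage string) bool {
	const (
		statusMaintenance      = "maintenance"
		messageInstallingCharm = "installing charm software"
	)
	return agentStatus == statusMaintenance || statusMessage == messageInstallingCharm
}
```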
Looks like all the criticals under juju are assigned, except this one: https://bugs.launchpad.net/juju/+bug/1614635 [02:26] Bug #1614635: Deploy sometimes fails behind a proxy [02:27] natefinch: anything landscape related is good to pick up [02:32] ffs [02:32] wallyworld: I was hoping to avoid that one, just because it sounds setup-intensive. I'd need a maas environment behind a firewall using a proxy... [02:32] I'm giving up on the manual ha bug right now [02:32] since you can never get into ha, it won't matter that you can't kill it [02:32] haha [02:32] this is a rabbit hole [02:33] and a lot more work than I thought [02:33] so perhaps better to work on other more important bugs [02:34] thumper: exactly the reasoning I used when abandoning my manual bug.... only after following the rabbit hole way too deep. [02:34] natefinch: what was your manual bug? [02:34] natefinch: what about https://bugs.launchpad.net/juju/+bug/1617190 [02:34] Bug #1617190: Logout required after failed login [02:34] thumper: https://bugs.launchpad.net/juju-core/1.25/+bug/1610880 [02:34] Bug #1610880: Downloading container templates fails in manual environment [02:35] thumper: it was 100% reproducible until today, when it became 100% unreproducible, which is when I threw in the towel [02:35] haha [02:35] thumper: manual ha will be important for system z etc [02:36] but not this week [02:36] wallyworld: I could do that one, sure. I wasn't sure if alexis was actually working on it [02:36] natefinch: no, she assigns bugs to herself [02:36] so she knows what to track [02:36] wallyworld: ok cool [02:42] wallyworld: http://juju-ci.vapour.ws:8080/job/github-merge-juju/8993/artifact/artifacts/trusty-out.log this is failing often [02:42] but intermittently [02:42] cmdControllerSuite.TestAddModelWithCloudAndRegion [02:42] [LOG] 0:00.377 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: cannot open api: unable to connect to API: websocket.Dial wss://localhost:59081/model/deadbeef-0bad-400d-8000-4b1d0d06f00d/api: dial tcp 127.0.0.1:59081: getsockopt: connection refused [02:43] not sure why it is failing to connect to the api server [02:43] yeah, weird [02:44] actually [02:44] perhaps it failed the first time [02:44] and retried [02:44] but the error was still written out [02:44] the expected string is in the obtained string [02:44] but after an error line [02:44] hah, tht could be it === menn0 is now known as menn0-afk [03:33] fuck sticks [03:42] menn0-afk: really afk? [03:42] hmm, only 12 minutes, so probably [03:42] anyway, remeber the i/o timeout retry patch? [03:42] well the rpc layer returns errors.Trace(err) [03:42] so none of the errors had an error code, and were all fatal [03:42] even the ones that said retry :) [03:42] anyway, identified, and fix QA [03:42] then pushing [03:43] wallyworld: cmdRegistrationSuite.TestAddUserAndRegister failing intermittently too [03:43] heh, katco fixed a bug just like that somewhere recently [03:43] ... 
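On menn0's i/o-timeout retry patch above — rpc-layer errors coming back wrapped in errors.Trace and losing their error code, so even "please retry" errors looked fatal — here is a minimal, self-contained illustration of that failure mode. It uses github.com/juju/errors, but the ErrorCode interface and the codedError type are hypothetical stand-ins rather than juju's real rpc/params error types:

```go
package main

import (
	"fmt"

	"github.com/juju/errors"
)

// codedError is a hypothetical stand-in for an error that carries a retry
// code, like the "please retry" errors mentioned above.
type codedError struct {
	msg, code string
}

func (e *codedError) Error() string     { return e.msg }
func (e *codedError) ErrorCode() string { return e.code }

func main() {
	var original error = &codedError{msg: "try again later", code: "retry"}

	// errors.Trace wraps the error to record the call site. The wrapper does
	// not itself expose ErrorCode, so a direct type assertion on the returned
	// error no longer sees the code - every error looks fatal.
	traced := errors.Trace(original)
	_, ok := traced.(interface{ ErrorCode() string })
	fmt.Println("code visible on traced error:", ok) // false

	// Unwrapping with errors.Cause recovers the original error and its code,
	// which is the kind of fix being described above.
	_, ok = errors.Cause(traced).(interface{ ErrorCode() string })
	fmt.Println("code visible after errors.Cause:", ok) // true
}
```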
"ERROR \"api-caller\" manifold worker returned unexpected error: cannot open api: unable to connect to API: websocket.Dial wss://localhost:50983/model/deadbeef-0bad-400d-8000-4b1d0d06f00d/api: dial tcp 127.0.0.1:50983: getsockopt: connection refused\n" + [03:44] in the middle of expected output [03:44] same root cause then [03:44] probably worthwhile figuring out why we are getting this connectino refused [03:44] it is obviously new [03:44] and causing many failures [03:44] might just be slow back end [03:48] part of the problem is the apiserver errors are showing in the client output [03:48] that's just wrong [03:48] but due to how our tests do everything... [03:51] wallyworld: I can see why we are getting the errors [03:52] [LOG] 0:00.168 INFO juju.apiserver listening on "[::]:60736" [03:52] [LOG] 0:00.226 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: cannot open api: unable to connect to API: websocket.Dial wss://localhost:50983/model/deadbeef-0bad-400d-8000-4b1d0d06f00d/api: dial tcp 127.0.0.1:50983: getsockopt: connection refused [03:52] different port [03:52] oh, that race condition [03:52] perhaps also ipv6 localhost [03:52] that's been there for ages [03:52] but port will do it [03:53] apparently it's hard to fix [03:53] and the window was considered small enough that we wouldn;t see it, but i may be mis remembering [04:10] wallyworld: http://reviews.vapour.ws/r/5565/ [04:10] wallyworld: if you are busy, I could take another look [04:10] i am busy, separating controller and model tag, what a a mess, but i can look at pr [04:12] thumper: lgtm [04:17] pr is very small [04:17] ta [04:39] has anyone done anything about not having to pass --config=bootstrap.yaml every time we bootstrap? I keep forgetting, so I end up bootstrapping without all the config options I normally want to use. [04:46] these featuretests are failing almost every time now [04:46] ffs [04:50] --debug isn't displayed as a flag anywhere anymore? [04:50] it still works... but how is anyone supposed to know it exists? [04:52] ahh, help global-options... which is like 3 levels deep in the help :/ === menn0-afk is now known as menn0 [04:59] thumper: back [04:59] menn0: got a few minutes to discuss this annoying intermittent failure? [05:00] thumper: yep [05:00] 1:1 [05:49] thumper: I used -check.list to see which tests run before TestAddModel [05:49] thumper: passing this should run just those tests: -check.f '(meterStatusIntegrationSuite|UserSuite|debugLogDbSuite|syslogSuite|annotationsSuite|CloudAPISuite|apiEnvironmentSuite|BakeryStorageSuite|blockSuite|cmdControllerSuite.TestAddModel)' [05:49] thumper: i've got the stress tester running now with that [06:00] wow I've got a full novel as a review from babbageclunk :) [07:09] morning dimitern [07:10] hey TheMue :) [07:10] found some time to lurk around here again ;) [07:11] TheMue: how's life? ;) [07:13] dimitern: fine, nice project here using a mix of JS frontend (mostly done by others), Go microserves (here I'm the evangelist), and couchdb as database [07:14] dimitern: only fighting with some fences of this bigger company, in total 12K worldwide [07:15] dimitern: so many old-fashioned processes I have to break for a new product [07:16] TheMue: sounds interesting :) [07:16] dimitern: yip, it is, and a good local team here [07:17] dimitern: only less interesting travels than with Canonical :D last tour has been to London again? === frankban|afk is now known as frankban [07:20] * TheMue continuous with his crypto stuff ... 
[07:21] TheMue: actually I was in Leiden most recently [07:21] TheMue: but I'm sure we'll be in London soon enough :) [07:23] dimitern: Netherlands, interesting. a new location. [07:23] dimitern: I'm missing San Francisco *sigh* [07:23] TheMue: yeah - was the first time for me there [07:24] TheMue: I'll be at the Juju Charmers Summit in Pasadena, CA in a couple of weeks [07:24] dimitern: quickly have to join *lol* [07:25] TheMue: here's a link: http://summit.juju.solutions/ [07:30] dimitern: don't think my company would send me === jamespag` is now known as jamespage [07:39] i see the get-controller-config command. is there any way of setting controller config values? [07:39] axw, menn0: ^ [07:40] wallyworld: ^ [07:40] rogpeppe1: (I don't think so) [07:40] axw: hmm [07:40] AFAICR, you can only set them at bootstrap time right now === gnuoy` is now known as gnuoy [07:53] rogpeppe1: axw: juju set-model-defaults [07:53] ah [07:54] controller-config, that is immutable [07:54] apiport, stateport etc [07:54] controller uuid [07:54] did you mean the model defaults? [07:56] wallyworld: for argument's sake, it would be nice to be able to toggle auditing-enabled [07:57] axw: i thought there was CLI for that, but maybe not [07:57] the auditing stuff is POConly [07:57] needs to be cleaned up [09:26] babbageclunk: Following up on that lxd bootstrap failure stuff, I'm not seeing any issues with amd64 (trusty or xenial) but with ppc64le. Have you tested, by any chance? [09:27] * macgreagoir should look for your bug... [09:39] Bug #1618798 opened: endpoint not used in lxd provider [09:42] macgreagoir: sorry, was afk - it's bug 1618636 [09:42] Bug #1618636: Bootstrapping with beta16 on lxd gives "unable to connect" error [09:42] babbageclunk: Cheers, let me compare notes with that... [09:43] It happens when I bootstrap inside a container - I did that just to sidestep the built-from-source juju binary, but I'll try reproducing it outside the container now. [09:45] menn0: you should try this: http://www.bbc.co.uk/news/world-europe-37228413 [09:46] voidspace: gah! [09:46] babbageclunk: hardcore [09:47] voidspace: ah, it's fine, he's holding onto that stick for most of it. [09:48] babbageclunk: ah, fair point - I hadn't noticed that [09:48] voidspace: (I think it's a selfie stick.) [09:48] babbageclunk: :-) [09:48] babbageclunk: dilemna [09:48] babbageclunk: I'm working with CloudImageMetadata [09:48] babbageclunk: should the plural be CloudImageMetadatas [09:48] babbageclunk: ? [09:49] it just sounds awful [09:49] I'm being inconsistent at the moment - I have a type called cloudimagemetadata and then a collection type that I kind of have to call cloudimagemetadatas [09:49] Not as awful as dilemna [09:49] voidspace: Data is plural ;-) [09:49] macgreagoir: yes it is [09:50] :-) [09:50] macgreagoir: hence the dilemna [09:50] Yeah, should be CloudImageMetadatum [09:50] hah [09:50] it's not a datum it has data [09:50] at a higher level of abstraction its a datum I guess [09:51] CloudImageMetadataSet? [09:51] CloudImageMetadatabase :-D [09:51] babbageclunk: dilemna is valid [09:51] babbageclunk: and better [09:51] macgreagoir: hah [09:51] babbageclunk: maybe [09:52] you're invalid [09:52] babbageclunk: ableist [09:53] Does that really have an e? When I'm the boss it will not. 
[09:54] babbageclunk: your spelling is horrifical [09:54] babbageclunk: whenever there's an option you pick the most confusing one so as to seem superior [09:54] babbageclunk: *everyone* knows that [09:54] gah [09:56] Anyway, there are lots of infos in the codebase, but I don't think datas would be so easily tolerated. [09:56] babbageclunk: yeah, it's pretty horrible [09:56] infos isn't great [10:01] voidspace: I think dataset's alright, as long as one isn't too bothered by the implication that it should be a set. [10:02] babbageclunk: right, I think I agree [10:02] and there is no real set in go anyway [10:03] And really, if you're ok with dilemna, something purporting to be a set but then not really being one is the least of your problems. [10:03] :-) [10:04] voidspace: +1 metadataset, fwiw [10:04] macgreagoir: cool, thanks - sounds like it's decided [10:27] macgreagoir: I haven't managed to reproduce that bug outside a container. How can I determine whether the agent is being uploaded from my machine rather than pulled from a stream? [10:29] macgreagoir: (because I think that might be confounding things) [10:30] babbageclunk: Let me check my logs. I can repro on amd64 only in a nested container, like you. [10:32] babbageclunk: I see tools downloaded (in both archs). The error in ppc64le is clearly from the code that tries to use local as remote. [10:32] I guess that's a fair starting point for amd64 too. [10:33] The ppc64le bug is back on beta11. [10:43] mgz: ping? [10:51] babbageclunk: Just fyi, https://bugs.launchpad.net/juju/+bug/1618636/comments/1 [10:52] Bug #1618636: Bootstrapping with beta16 on lxd gives "unable to connect" error [10:53] macgreagoir: another interesting data point, I get much more output (from cloud-init, apt installs etc) from the bootstrap process in the container than I do from the one running on my host - do you know why that might be? [10:54] An interesting datum? :-) [10:55] babbageclunk: I'm trying to get a ppc64le build with some more loggin output, but not succeeding yet. [10:56] * babbageclunk lols [11:00] hey folks, how can one login to the models' mongo db now? [11:07] mattyw: the juju-mongodb package includes the mongo binary now, so you can run that connecting to 127.0.0.1:37017 on the controller. Get the password from the admin user from the agent.conf, and pass --sslAllowInvalidCertificates. [11:10] babbageclunk, I get errors about --sslAllowInvalidCertificates not being a valid arg [11:11] mattyw: You need to run the mongo from /usr/lib/juju/mongo3.2/bin - check the version? [11:13] fwereade: i'm seeing a lot of this kind of test failure. any idea why we might be seeing more of it recently? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9009/artifact/artifacts/trusty-out.log/*view*/ [11:14] mattyw: Oh, you need to pass --ssl as well. [11:15] mattyw: I'm trying to do this now too, but I haven't gotten in yet. [11:15] babbageclunk, I think I wasn't using juju-mongodb, I'm updating my script now [11:16] babbageclunk, /usr/lib/juju/mongo3.2/bin/mongo: No such file or directory [11:16] mattyw: Really? On xenial? [11:16] babbageclunk, trusty [11:17] mattyw: that would do it - sorry, all of this has been mongo 3.2 advice. [11:17] babbageclunk, do we not install 3.2 on trusty as well? [11:18] mattyw: Not sure. Does the directory exist? [11:18] babbageclunk, it doesn't [11:19] mattyw: Then I think the mongo binary will be in /usr/lib/juju/bin instead, if it's anywhere. 
[11:19] babbageclunk, I see all the server side packages but none of the client ones [11:20] mattyw: Ok, in that case you should be able to install mongodb-clients [11:22] babbageclunk: Again, just fyi, I found a beta12 ppc64el deb. I don't see the lxd issue in this test. [11:23] babbageclunk, so this is the script I'm using http://paste.ubuntu.com/23115673/ [11:23] macgreagoir: I don't see the lxd problem in beta15 either. [11:23] babbageclunk, must be something wrong with the last line - the args sent to mongo [11:24] mattyw: On trusty it's mongo 2.4, so the args are different [11:26] babbageclunk, any idea what they should be? [11:27] babbageclunk: I can't find or build a ppc beta15 to test :-/ [11:27] mattyw: just trying it out now - had to bootstrap a trusty controller [11:28] mattyw: This works for me: mongo -u admin -p 127.0.0.1:37017/juju --authenticationDatabase admin --ssl [11:32] babbageclunk, sorry mate, how do I login to the controller, juju ssh -m default 0? [11:33] mattyw: juju ssh -m controller 0 [11:37] rogpeppe1, those look like it's "just" a logging change... and that ideally we'd have the logging going somewhere else so we could inspect the command's contributions to stderr alone [11:38] fwereade: it's a sporadic failure [11:40] fwereade: do you think that the API connection failure is expected there then? [11:41] rogpeppe1, right -- most of the time we connect flawlessly, sometimes the odd connection failure should be retried transparently [11:41] rogpeppe1, that was my read [11:41] fwereade: interesting. i'll go back and look at the other test failures in that light [11:43] fwereade: do you think that perhaps the manifold worker shouldn't be reporting at ERROR level? [11:44] fwereade: and... this is client-side, right? what's the worker doing logging its errors there? [11:45] fwereade: or does the client side have workers too now? [11:45] rogpeppe1, this is a featuretest: it is indeed probable that it's just part of the appropriate chunk-of-controller [11:46] fwereade: i guess we could paper over the issue by configuring the logger used for the command output to exclude all logs from juju.worker.* [11:47] * fwereade winces a little, and worries how well that's going to work in practice... but logs and commands are tangly, so, well, I suppose it'd be expedient [11:47] fwereade: FWIW this test failure and others similar has wasted me a lot of time yesterday. it's definitely worth some kind of fix :) [11:48] fwereade: yeah, i'm not sure either. [11:48] rogpeppe1, yeah, definitely not arguing it doesn't need a solution, just pushing for a bit of find-the-ultimate-cause ;) [11:49] fwereade: alternatively, maybe that manifold error doesn't justify ERROR status and could be logged at INFO level [11:49] fwereade: after all, it's fairly run-of-the-mill to get errors there. [11:50] rogpeppe1, probably also reasonable, yeah [11:50] rogpeppe1, but similarly leaves us vulnerable to similar changes infecting these tests in future [11:50] fwereade: yeah, i was thinking that too [11:51] rogpeppe1, mainly a matter of how-much-time-can-you-justify, I think :( [11:52] fwereade: zero currently - i'm off on hols tomorrow :) [11:52] fwereade: i'm just trying to remind myself how that output gets captured in tests [12:08] fwereade: ha, interesting realisation: there's no way to get a logging source to totally shut up, because CRITICAL messages will always be logged. 
[12:09] luckily nothing logs at CRITICAL level [12:16] rogpeppe1, two opposing bugs in perfect balance, eh [12:44] jhobbs: around? [12:58] fwereade: ping? [13:22] jhobbs: ping? [13:24] dimitern: ping - can we sync before standup please? [13:28] frobware: ok, just a couple of minutes and I'll join standup HO [13:28] dimitern: ack [13:37] frobware: oops, sorry [14:01] fwereade: mgz: standup time [14:06] babbageclunk: ping [14:12] dimitern: pong! [14:30] babbageclunk: hey, I've pushed some changes to http://reviews.vapour.ws/r/5559/ [14:30] dimitern: ok, I'll take a look. [14:30] babbageclunk: ta! [14:31] voidspace, frobware: please take a look if you can as well ^^ [14:33] babbageclunk, voidspace, frobware: sorry, I've realized I didn't actually push them :/ now I did [14:59] fwereade_: you've got a review [15:04] dimitern, tyvm [15:07] mgz: any idea why this failed? i can't see any errors. http://juju-ci.vapour.ws:8080/job/github-merge-juju/9014/ [15:09] rogpeppe1: check trusty-err.log: state/modelmigration_test.go:29: undefined: state.ModelMigrationSpec [15:09] [15:10] babbageclunk: Out of interest, your test for lxd bootstrap with a from-source binary, did you also have a jujud available in ./ ? [15:10] it compiles! [15:10] dimitern: oh i didn't realise compiler errors were hidden in the other artifact [15:10] Doesn't *work*, but it compiles... [15:10] dimitern: that's... unintuitive [15:11] macgreagoir: no, I was in ~, and mangled my path and gopath so that there were no from-source juju binaries around. [15:11] macgreagoir: But I've worked out my problem - we ripped out the bit of the lxd provider that tells LXD to listen to https. [15:12] rogpeppe1: it's says where to look plainly in the console log :) [15:13] babbageclunk: OK, cool. One down :-) [15:13] dimitern: it does? where does it say that? [15:13] macgreagoir: well, not quite sure where to put it back in, but yeah, the mystery's solved! [15:14] rogpeppe1: See /var/lib/jenkins/workspace/github-merge-juju/artifacts/trusty-err.log [15:14] [15:14] babbageclunk: I can't reproduce my bug on ppc with tip. [15:14] rogpeppe1: ~10 lines from the bottom going up [15:14] macgreagoir: yay, it's fixed! ;) [15:14] :-D [15:15] dimitern: i know that's where the error is [15:15] dimitern: but i think it's unintuitive that most test failure output goes into trusty-out.log but compiler errors go into trusty-err.log [15:16] dimitern: i think that both stderr and stdout from the go test should go into the same stream [15:16] rogpeppe1: ah, sorry - yeah, that's not so obvious [15:16] rogpeppe1: +1 [15:19] Hi. I deployed OpenStack on LXD using https://github.com/openstack-charmers/openstack-on-lxd. I have created "cinder-storagedriver" charm . I pushed the created charm to public charm market place(charm store). So Using JUJU GUI when i was trying to deploy "cinder-storagedriver" charm by adding relation to "cinder" it was throwing an error. [15:19] ERROR: Relation biarca-openstack:juju-info to cinder:juju-info: cannot add relation "biarca-openstack:juju-info cinder:juju-info" : principal and subordinate applications' series must match. [15:19] But I can able to deploy my charm through juju cli by taking from charm store. And I can able to add relation to cinder successfully. And everything working fine. [15:20] rock___: can you paste a link to the charm you pushed? [15:20] rock___: biarca-openstack needs to be the same operating system series as the charm deployed [15:20] rock___: also, how did you deploy it? 
[15:20] rock___: I imagine the GUI is pulling the wrong series of the charm [15:21] https://jujucharms.com/u/siva9296/biarca/0 [15:22] my charm support xenial, trusty and precise. Rest of the openstack deployment support Xenial. juju status pasted info : http://paste.openstack.org/show/565199/ [15:23] As part of debug, I deployed juju-gui as $juju deploy cs:juju-gui-134 on our setup. But it was showing series as trusty even we have chosen xenial. [15:25] rock___: that paste link does not open for me - can you retry pasting it with paste.ubuntu.com or some other service please? [15:27] dimitern: juju status pasted info http://paste.openstack.org/show/565217/ [15:28] rock___: ok, I opened that fine, but the charm urls are not there - please paste `juju status --format=yaml` for more details? [15:28] marcoceppi: Yes. GUI is pulling the wrong series of the charm. When I deployed my charm using JUJU CLI. It worked fine. [15:30] rock___: if your config has 'default-series: trusty' that might be the cause; alternatively, deploy biarca with --series xenial to be explicit about it [15:31] (it should complain if that's required, but it might be just picking the first entry in the list of series-es from metadata - trusty in this case) [15:32] babbageclunk, frobware, voidspace: a friendly review poke :) [15:35] dimitern: yes. In metadata.yaml trusty is the first one of the list of series. But how it worked fine when I deployed my charm through JUJU CLI. [15:36] dimitern: I was still reviewing it! It takes a long time to type all of my complaining. ;) [15:37] babbageclunk: ah, sorry :) sure, take your time! [15:37] rock___: are you using the CLI from a trusty machine or a xenial one? [15:38] dimitern: From a Xenial machine. [15:38] dimitern: Finished now. [15:38] dimitern: that approach is neat! [15:38] babbageclunk: sweet! thanks :) [15:39] dimitern: juju status --format=yaml pasted info http://paste.openstack.org/show/565227/ [15:42] rock___: thanks! there you have it - the gui is running as trusty [15:43] how did that happen I'm not sure.. but does it work if you deploy your charm from the CLI? [15:43] dimitern: As part of debug, I deployed juju-gui as $juju deploy cs:juju-gui-134 on our setup. But it was showing series as trusty even we have chosen xenial. [15:45] Actually, We no need to deploy juju-gui separately. If we run "$sudo juju gui" It will give https://10.75.116.66:17070/gui/3d58eec3-a3ed-4430-8eb4-8f3ec7db7ea8/. [15:46] rock___: yeah, it sounds like it should refuse to deploy it, if it's multi-series charm and no series is given (it used to - perhaps something changed recently) [15:47] dimitern: We manually deployed juju-gui to know which series it is going to take. [15:48] dimitern: we chose xenial but it has taken trusty. [15:48] rock___: but you should then use the full cs: url - e.g. cs:xenial/juju-gui-134 [15:48] or instead, juju deploy cs:juju-gui-134 --series xenial [15:51] rock___: unless some config gets in the way - you check if `juju model-config | grep -i trusty` returns anything [15:53] dimitern: you've had a review on 5559 - or do you want a review on something else? [15:53] dimitern: ah, I'm just slow [15:53] * voidspace has now read the backscroll [15:53] voidspace: :) [15:54] voidspace: other people are allowed to review it too! [15:55] babbageclunk: frankly it's better for everyone if they do... [15:55] dimitern: hey, sorry. was sidetracked with maas-2.1 and bridge_all=True [15:55] dimitern: I tried $sudo juju deploy cs:juju-gui-134 --series xenial.
pasted log for this is http://paste.openstack.org/show/565229/. sudo juju model-config | grep -i trusty returns nothing. [15:56] voidspace: I feel like you're saying the same thing as me but mean the opposite. :) [15:57] babbageclunk: uhm, that would assume I have even the faintest idea what I'm talking about [15:57] babbageclunk: a very dangerous assumption [15:57] rock___: oops, it seems you've got a panic there - sorry about that [15:57] voidspace: you know what they say about assumption [15:57] heh [15:58] rock___: it might be worth trying the next (last) beta16 to see if it's any better (and produces no panic) [16:00] dimitern: So Juju 2.0 is the development channel. So the charms present in development channel were migrated to edge? [16:00] rock___: if it's still there, please file a bug with that output, link to the charm, and the version of juju! [16:00] jhobbs: ping? [16:01] dimitern: So I have to upgrade to juju latest version [Juju 2.0-beta16] right? [16:02] rock___: I'd first try that to see if it was fixed between beta15 and beta16 [16:02] rock___: if not - then please file a bug and we'll triage it [16:02] hi babbageclunk [16:03] jhobbs: Hey! I'm trying to do some digging on https://bugs.launchpad.net/juju/+bug/1611159 [16:03] Bug #1611159: model not successfully destroyed, and error on "juju list-models" [16:04] jhobbs: How can I reproduce it? [16:05] well it's an automated test setup [16:06] babbageclunk: i have a maas server with 12 machines commissioned in it, and 1 vm. i bootstrap with the VM as the controller node, and then jenkins adds a couple of models and deploys openstack to them, runs some tests, then destroys the models [16:06] each model is handled by a separate worker in jenkins and done in parallel [16:07] if i were going to try to reproduce it outside of oil i would use juju with a maas provider, one controller, and add and remove a bunch of models in parallel [16:08] dimitern: ok. Thank you. [16:08] morning [16:08] jhobbs: ok, I'll try that. Do you feel like the bundles being deployed would make a difference? Or is it more about the models being added and removed? [16:09] i really don't know [16:09] it seems the failure is around models being created and destroyed [16:09] but maybe other stuff going on at the same time contributes to that [16:09] jhobbs: Yeah, that makes sense. [16:11] jhobbs: Is this maas 2, or 1.9? I'll start trying to reproduce it with beta16 - are you working with that or another version in particular? [16:14] babbageclunk: MAAS 2, and that was with beta 14 [16:14] i can try to reproduce again with 16, maybe later today, if i do, are there any settings i should enable for logging? [16:14] dimitern: oooh bridges! http://178.62.20.154/~aim/bridges.png [16:15] dimitern: minor quibble - I can't actually login to the node [16:16] frobware: weeell - it's alpha1 :) [16:16] jhobbs: I think the logging you're using is good - mostly I want to be able to poke around in the system once it's in this state to try to work out what to look at next. [16:17] dimitern: and, heh... all the things we've discovered you can't do.... and some of them are back again. [16:18] dimitern: MTU issues -> http://pastebin.ubuntu.com/23116684/ [16:18] frobware: we've been there, done that [16:18] frobware: but that's likely as much curtin's fault as maas'es [16:18] dimitern: I'll raise this via email/bug with maas folks [16:19] frobware: +1 [16:20] dimitern: I need to EOD now - will pick this up in the morning.
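jhobbs's reproduction recipe above (one controller, add and remove a bunch of models in parallel) is easy to script. A rough sketch driving the juju CLI from Go follows — the model names, the count of four workers, and the assumption that a controller is already bootstrapped (e.g. against MAAS) are arbitrary, and destroy-model may need a confirmation flag depending on the beta in use:

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// run shells out to the juju CLI and returns any failure with its output.
func run(args ...string) error {
	out, err := exec.Command("juju", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("juju %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	// Hammer the current (already bootstrapped) controller with parallel
	// model creation and destruction, per jhobbs's recipe.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			name := fmt.Sprintf("stress-%d", i)
			if err := run("add-model", name); err != nil {
				fmt.Println(err)
				return
			}
			// Deploying a bundle here would mimic the OIL runs more closely,
			// but parallel create/destroy alone may be enough to hit the bug.
			if err := run("destroy-model", name); err != nil {
				fmt.Println(err)
			}
		}(i)
	}
	wg.Wait()
}
```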
[16:20] frobware: ok, have a good one ;) [16:21] I'll be EODing soon as well [16:41] Bug #1618963 opened: Local provider can't deploy on xenial === frankban is now known as frankban|afk === rmcall_ is now known as rmcall [18:10] anyone have options on: https://bugs.launchpad.net/juju/+bug/1618996 [18:10] Bug #1618996: unable to specify manually added machines from bundle.yaml [19:26] jcastro: do you know if that worked in b15? [19:26] jcastro: or is this a new feature request? [19:27] jcastro: is the bundle supposed to declare the machines, too? I haven't really dealt with bundles yet, but I'm sorta surprised that it would work without machines specified in the bundle itself [19:33] the bundle is declaring the machines [19:33] natefinch: there is a "To" field i think might allow you to place things on existing machines. [19:33] right [19:33] jcastro: i'm not sure if this is a regression or a new feature. could you check to see if it works on b15? [19:36] jcastro: also, is this bundle hand-crafted? wondering if the to directives are wrong: https://github.com/juju/charm/blob/v6-unstable/bundledata.go#L201 [19:37] jcastro: i.e. maybe it should be "cinder/0" or "lxc:0", not "1" [19:40] also, does it work somewhere other than manual? [19:41] marcoceppi: ^^ [20:04] Review anyone? http://reviews.vapour.ws/r/5569/ [20:20] jcastro: what's the action item? [20:21] katco, can you help out redir [20:21] alexisb: yes in a bit [20:21] no major rush [20:42] * katco just finished a true unit test for deploying a bundle :) [20:42] * redir plays trumpet, pops champagne and passes a glass to katco [20:42] i haven't looked at the diff, but i think this will be a pain to review. [20:42] * redir backs away slowly [20:43] i need to move some things around too [20:44] alexisb: you have some controllers created ? [20:44] anyone have a couple controllers created [20:44] just laying around... [20:45] redir: i have some bootstrapped [20:45] katco: what's the output of juju show-controllers? [20:46] hey, color, neat [20:46] redir: gah.... i am lying to you. when did i destroy that... [20:46] redir: [20:46] CONTROLLER MODEL USER CLOUD/REGION [20:46] google* default admin@local google/us-east1 [20:46] localhost controller admin@local lxd/localhost [20:46] lxd default admin@local lxd/localhost [20:46] natefinch: ta [20:46] google is in blue [20:46] natefinch: that is show or list? [20:47] looks like show [20:47] redir: oh... I didn't realize there were two different commands [20:47] that's horrible [20:47] god me either... [20:47] which ist that? [20:47] that you have the output from? [20:47] that is awful. who upon first use knows the difference between show and list? [20:47] that's "juju controllers" which uh... is the same as list-contollers [20:48] * katco face-palms hard [20:48] ok so what's the output of show-controllers? [20:48] just one entry? 
[20:48] yes, just one [20:48] juju show-controllers should be juju controllers --detail{s,ed} [20:48] or something [20:48] katco: +10000000000000000000000000000000 [20:48] but not 2 commands with verbs that are synonymous [20:48] whelp [20:48] redir, yes I do [20:49] alexisb: natefinch answered my question [20:49] but it leaves me new questions [20:49] yeah, show controllers only ever shows one, as far as I can tell, the current one [20:49] which uh, makes it poorly named [20:50] natefinch: yes it appears to be an alias to show-controller [20:50] aaaaahhhhhhh [20:50] natefinch: juju controllers could do the right thing [20:50] at least help for plural shows the help for singular [20:50] redir: yes that appears to be true [20:50] wait... so show-controller is just an alias for list-controllers? [20:50] no no [20:50] no [20:51] show-controllers is an alias for show-controller [20:51] show-controllers appears to be an alias for what natefinch said [20:51] ...why [20:51] except that every other command is very careful not to alias plurals... because show-xxx is supposed to show exactly one, and list-xxx is supposed to show many [20:51] we should remove that alias. show-controller makes *more* sense to me at least [20:51] natefinch, redir, katco: the cli is consistent (or should be consistent) with "s" being the same as "lists" [20:52] because strange attractors [20:52] if show-controller is aliased to controllers that is wrong [20:52] alexisb: show-controllers is aliased to show-controller (i think?) [20:52] ah yeah that needs to be cleaned up still [20:52] ding ding I think katco said the right thing [20:52] which makes no sense to me [20:52] and natefinch before that [20:52] we still have plurals where we shouldn't [20:52] list should be plural and show should be singular [20:53] so I just added agent version info to show-controllers [20:53] but it only shows me one [20:53] the original design didn't start that way and we haven't cleaned up yet [20:53] there is a bug open [20:53] oh man [20:53] and I am trying to understand if it should show more [20:54] Oh then I just added version to show-controller and plural is a vestigial alias [20:54] yes? [20:54] is there a bug open to hide aliases from juju show commands? cause, right now: [20:54] r$ juju help commands | wc -l [20:54] 168 [20:54] redir, there should not be an alias for show controller [20:54] OK [20:55] alexisb: care to HO? [20:55] redir, sure [20:55] while we're at it, all the action CLI commands need a redo IMO [20:55] standup? [20:55] jcastro, adding to the pile will not help (also you will need to elaborate) [20:55] heh [20:55] jcastro: only we can criticize our CLI!
we're retaking these complaints [20:56] yeah so, if you think of doing an action show-action-status and show-action-output break the flow [20:56] I can never remember them so I constantly have to refer to the docs [20:57] Our CLI: https://1.bp.blogspot.com/-ZM7ejcL9pk8/Vr4roZBEJsI/AAAAAAAACpY/oyyCEKiAs7A/s1600/TheSimpsons1218-1.jpg [20:57] but none of my complaints are 2.0 material I don't think === natefinch is now known as natefinch-afk [21:02] redir: bootstrapping your change now, only 1 minor comment for the review [21:07] morning [21:08] katco: k tx [21:09] katco: and going to eradicate the alias for that too [21:09] redir: ship it [21:10] katco: tx [23:13] wallyworld: here's the change to extract the unit status logic in the apiserver http://reviews.vapour.ws/r/5571/ [23:14] ok [23:16] thumper, axw ping [23:16] perrito666, ping [23:16] alexisb: pong, just joined [23:16] alexisb: pong [23:17] perrito666, standup [23:33] yay bug landed
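For the show-controllers alias that redir removes above: one common way a juju command grows an alias is via its cmd.Info, so dropping the plural entry is the whole fix. The sketch below is a hypothetical stand-in using github.com/juju/cmd, not the actual show-controller command:

```go
package controller

import "github.com/juju/cmd"

// showControllerCommand is a hypothetical stand-in; only Info matters here.
type showControllerCommand struct {
	cmd.CommandBase
}

func (c *showControllerCommand) Info() *cmd.Info {
	return &cmd.Info{
		Name:    "show-controller",
		Purpose: "Shows detailed information of a controller.",
		// Removing this entry is the cleanup being discussed: "show-*"
		// stays singular, and the plural belongs to list-controllers /
		// the bare "juju controllers".
		Aliases: []string{"show-controllers"},
	}
}

func (c *showControllerCommand) Run(ctx *cmd.Context) error { return nil }
```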