[00:07] veebers: i just hacked up something in code as a temp thing on my system
[00:07] wallyworld: got any availability in the next 60-90m?
[00:07] redir: sure
[00:07] wallyworld: pick a time any time
[00:07] redir: now is good, that way you can escape and go do dinner or whatever
[00:07] k
[00:08] tanzanite?
[00:08] ok
[00:29] thumper: here's the fix for the debug-log sorting issue: http://reviews.vapour.ws/r/5229/
[00:29] menn0: kk, will look shortly
[01:01] menn0: http://reviews.vapour.ws/r/5230/
[01:05] thumper: will take a look in a sec
[01:06] thumper, wallyworld: I can't land the fix for 1590605 b/c it isn't a blocker
[01:06] menn0: JFDI
[01:06] +1
[01:06] ok
[01:32] wallyworld: hacked up the ACL spec based on conversations I had with jam and urulama today
[01:32] wallyworld: PTAL sometime and let me know if that's ok by you and any issues/concerns from here
[01:33] rick_h_: will do, in meeting, will look straight after, ty
[01:33] wallyworld: all good, thanks for giving it a go when you're free
[01:58] Bug #1602010 changed: juju status doens't display proper error when machine fails
[01:59] wallyworld: every time I enable log forwarding (and it works, i.e. I use the right host IP) it triggers that logging bug (#1599779)
[01:59] Bug #1599779: Very high CPU usage from 'creating cloud image metadata storage' emitted in debug log every second
[01:59] wallyworld: is there any useful information that I can provide you? I doubt I'm doing anything special or out of the ordinary
[02:00] veebers: i think i have enough now - i am rewriting some stuff, i will have something fixed today
[02:00] wallyworld: ah cool :-)
[02:00] veebers: but apart from that, good that it works, thanks for testing
[02:02] rick_h_: one thing, thumper said that mark himself was the one who originally wanted the add-user command to include the --shared models option, so you will need to be sure that he is across the change to remove that
[02:35] wallyworld: FYI I found that state was still allowing controller config into model config in some cases (I think only in tests), so currently fixing a bunch more things.
[02:36] ok
[02:51] thumper menn0: was environs.MigrationConfigUpdater added only for azure? the need for it has gone away, so I'll remove it unless there's another use case
[02:51] not sure TBH
[02:52] (we don't have controller-uuid in model config anymore. we'll need a way to retag things, but should be done a bit differently I think)
[02:52] thumper: it's not implemented by anything other than dummy, so I'll remove it and we can reinstate if needed
[02:58] Bug #1556153 changed: ERROR destroying instances: cannot release nodes: gomaasapi: got error back from server: 504 GATEWAY TIMEOUT (Unexpected exception: TimeoutError
[03:01] Bug #1556153 opened: ERROR destroying instances: cannot release nodes: gomaasapi: got error back from server: 504 GATEWAY TIMEOUT (Unexpected exception: TimeoutError
[03:07] Bug #1556153 changed: ERROR destroying instances: cannot release nodes: gomaasapi: got error back from server: 504 GATEWAY TIMEOUT (Unexpected exception: TimeoutError
[03:08] thumper: ship it for 5230
[03:08] wallyworld: chat?
[03:08] menn0: cheers
[03:09] menn0: sure, https://hangouts.google.com/hangouts/_/canonical.com/tanzanite-stand
[03:09] axw: you can pop in too if you're free - talk about log streamer etc
[03:47] wallyworld: can you PTAL at http://reviews.vapour.ws/r/5224/, just the last commit/rev. I've removed ca-private-key from controller config attrs, plugged a hole in state where we allowed controller config through, and fixed fallout
[03:47] wallyworld: there was a bit of model migration removed too, which was added for azure but is no longer needed
[03:47] ok
[03:53] axw: minor typo. so the private key should just go to state serving info (from memory) and nowhere else
[03:54] it is used by the cert update worker
[03:54] wallyworld: thanks. yes, that is correct
[03:55] so godd that it is also removed from controller config as nothing else needs to see it
[03:55] good
[05:02] Bug #1602508 opened: LastLogTracker accesses mongo collections directly <2.0>
[05:29] axw: when you have time, here's that log forward fix http://reviews.vapour.ws/r/5231/
[05:41] wallyworld: reviewed
[05:41] ta
[05:43] argh, juju-ci is Service Unavailable
[05:44] axw: i didn't want to make any assumptions about order - i guess i could pick the one with the biggest timestamp
[05:44] wallyworld: well we forward them in order don't we?
[05:44] I'm pretty sure that's part of the contract
[05:45] fair enough, i was being overly cautious i guess
[05:52] axw: fixed
[05:54] wallyworld: LGTM, thanks
[05:54] ta
[05:54] wallyworld: CI is buggered atm
[05:54] oh, damn
[05:54] seems the jenkins agent died, I don't know how to fix it tho
[05:54] axw: same thing happened yesterday too
[05:55] interesting
[05:58] jam: it was changed from SetList to SetLastSent
[06:11] axw: ah, just read it backwards then.
[06:11] thanks
[06:47] axw: the juju login command - where does that write to the db? ie the macaroon and record of login
[06:54] wallyworld: it requests a macaroon from the controller, the controller generates a root key for the macaroon. that's the only thing that's stored in the db
[06:54] wallyworld: otherwise it's all stored client side
[06:55] axw: np, thanks. i'm commenting on https://docs.google.com/document/d/1xRO2tbeC-Dg5JdSV-wMYIZQel11EfWqPk8LU-0XdH8Y
[06:57] wallyworld: when you send the LoginRequest, you can optionally include a model UUID. if you don't include it, you get a controller-only login. so that much already exists, if I'm interpreting it correctly
[06:57] axw: yep, that's what i thought too
[06:58] wallyworld: do you recall why you made this change? https://github.com/juju/juju/commit/f751ea7b54876a5a38dbef6dfc90e1d56b1531d5
[06:59] wallyworld: I'm trying to weed out unnecessary environs.Environ constructions, this code stuck out as a bit odd
[07:00] axw: not 100% sure - i think the intent was to not do an upgrade (or attempt it) if there was an issue with the cloud and the machines could not be contacted
[07:00] so a sanity / pre-upgrade check
[07:00] hm, ok.
[07:02] axw: i can't recall where the requirement came from, but i recall "someone" asked for it
[07:02] wallyworld: alright. I'll leave it alone
[07:19] axw: can you join us? https://hangouts.google.com/hangouts/_/canonical.com/regular-catchup
[07:20] wallyworld: be there in a moment
[07:34] frobware: ping
[07:55] axw: hiya
[07:55] rogpeppe2: hey
=== rogpeppe2 is now known as rogpeppe
[07:56] rogpeppe: I'm hopping on a call shortly, but will be around after that (in an hour or so)
[07:56] axw: ok, cool
[07:56] axw: let's do it then
[07:56] rogpeppe: I just did a little test, if you set the value of "user" in accounts.yaml to nothing, and remove the password line, it should trigger external auth
[07:56] rogpeppe: except there's a bug, fixed by http://reviews.vapour.ws/r/5232/
[07:57] axw: ah yes, i came across that
[07:57] axw: it's still not ideal though
[07:57] rogpeppe: re your email, I think we could more or less drop accounts.yaml, we just need the logged-in user's details. we used to support multiple users logged in simultaneously from the same client, but not now
[07:58] so models.yaml shouldn't need the account qualifier. they would implicitly be relative to the logged-in user
[07:59] axw: ah, ok - i guess that the jujuclient API is already due for an update then
[07:59] rogpeppe: it wasn't really necessary until now, but I do think this tips it over the edge
[08:00] axw: wouldn't you still need accounts.yaml 'cos you can still have a different user for each controller?
[08:01] rogpeppe: yeah, sorry, we can't drop that - we can only drop the account part from models.yaml
[08:01] axw: tech board?
[08:01] rogpeppe: accounts.yaml would just become controller -> {user creds}
[08:01] jam: coming
[08:01] axw: right, cool
[08:01] axw: as i think i suggested in my email
[08:30] axw: just looking at http://reviews.vapour.ws/r/5205/diff/# - doesn't our suggested approach above rather go against what that's doing?
[08:38] Bug #1602572 opened: Handler function is not being called even after changed one of the config values of the config.yaml
=== admcleod_ is now known as admcleod
[09:02] night all
[09:03] rogpeppe: back. looking...
[09:05] rogpeppe: suggested approach of changes to accounts.yaml?
[09:05] axw: well, of changes to models.yaml really
[09:06] axw: if a model isn't relative to an account, then it's not going to be that easy to namespace model names by user
[09:06] rogpeppe: models.yaml will only store information about models that the logged in user has access to. and you can still tell what user you're logged in as
[09:07] rogpeppe: I think the only thing that changes in that diff is to replace the use of AccountName with the logged-in user name
[09:07] which we get back from login
[09:10] axw: i guess i'm wondering whether the model namespace is actually still per-user
[09:10] axw: have you got some time to pair for a bit?
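[Editor's note: the client-file restructuring axw and rogpeppe discuss above could look roughly like the sketch below. This is a hypothetical illustration of the proposal — accounts.yaml collapsing to a controller → user-credentials mapping, and models.yaml dropping the per-account nesting — not the actual jujuclient file format; key names are assumptions.]

```yaml
# accounts.yaml (sketch): one logged-in user per controller
controllers:
  my-controller:
    user: bob@local
    password: hunter2

---
# models.yaml (sketch): no account qualifier; models are implicitly
# relative to the controller's logged-in user
controllers:
  my-controller:
    models:
      default:
        uuid: deadbeef-0bad-400d-8000-4b1d0d06f00d
    current-model: default
```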
[09:10] rogpeppe: yes
[09:14] Bug #1602591 opened: Juju 2.0 Resources: Issue while fetching empty resources from charm store
[10:20] Bug #1602591 changed: Juju 2.0 Resources: Issue while fetching empty resources from charm store
[10:20] Bug #1584616 opened: mtu configuration should not be moved to bridge interface
[10:21] mgz: are you around? do you know why juju-ci is busted?
[10:47] dooferlad: ping - do you have some time to look over my shoulder at some networky things....
[10:47] frobware: sure
[10:48] dooferlad: 1:1 HO
[11:09] rogpeppe: https://github.com/juju/juju/pull/5788
[11:39] morning all
[12:14] rogpeppe: back for 10 mins or so - how's it going?
[12:18] axw: yeah, I think it's just the proxy server, master seems to be up and doing things
[12:19] mgz: when I ssh'd to the machine, the jenkins job log said the agent had terminated or something like that
[12:23] have restarted jenkins to see if it gets stuck in the same loop
[12:25] axw: well, it's up now, possibly still not very happy?
[12:25] mgz: I'll try landing my branch again, we'll see :)
[12:25] mgz: thanks
[13:26] axw: sorry, was on lunch
[14:03] mgz: got a few minutes to help me look at the 1.25 CI failures?
[14:04] Help - does anyone know how cloud-init works?
[14:04] cherylj: sure, daily hangout?
[14:05] mgz: yeah, omw
[14:05] babbageclunk: you may have some luck asking in #cloudware on the canonical IRC
[14:06] Ok, thanks - I pinged smoser but caught him on the way out.
[14:22] anybody see this "bad ports from GCE: invalid port range 0-0/tcp" from GCE recently?
[14:45] Bug #1602716 opened: MAAS provider bridge script doesn't handle alias interfaces IP
[14:59] frobware - i have not, i've been running nearly exclusively on GCE since beta-8
[14:59] frobware - any clue on reproduction? I'm happy to give it a go
[15:06] controlleruuid and controllermodeluuid are the same always? and if so, is that by design?
[15:09] frobware, looks like a bug just got opened
[15:15] Bug #1602732 opened: GCE bad ports 0-0/tcp
[15:16] lazyPower, frobware just be aware that that bug also leaves the controller instance alive.
[15:16] yay!
[15:16] mgz: ^^ fyi
[15:16] perrito666: yes, that is by design
[15:17] babbageclunk: have you been able to get help with your cloud-init question?
[15:17] * perrito666 decides not to create a controllerTag
[15:17] perrito666: that being said, I don't know if there are plans to change that
[15:18] cherylj: yes, some - I'll do an update on the bug.
[15:18] cherylj: ah, that was my next question
[15:18] Bug #1602732 changed: GCE bad ports 0-0/tcp
[15:18] babbageclunk, should this bug be fix committed: https://bugs.launchpad.net/juju-core/+bug/1598897 ??
[15:18] Bug #1598897: juju status: relation name oscillates for peer relations
[15:18] if so I will update it for you
[15:19] alexisb_: just fyi - that GCE bug means that no one can bootstrap on GCE now with the current master
[15:19] alexisb_: oops, yes please.
[15:19] cherylj, yes
[15:19] * cherylj sadface
[15:19] it will need to get fixed before we release
[15:19] heh
[15:21] Bug #1602732 opened: GCE bad ports 0-0/tcp
[15:21] ah ok, i'm currently on beta-11
[15:22] that makes sense why i haven't seen it if it's only affecting master
[15:23] lazyPower: yeah, just injected yesterday
[15:36] gah, who decided that github.com/juju/juju/cloud.Cloud shouldn't have a Name field :/
[15:38] one might assume the name of a cloud might be important info :/
[15:43] oh, great, another breaking change, people are going to hate me
[15:44] perrito666, what change?
[15:44] alexisb_: oh, not that breaking
[15:44] mgz: thanks for raising the bug
[15:45] alexisb_: I am writing controller permissions and thinking of unifying some code, but on second thought it might be better to have a bit of code duplication at this point
[15:45] lazyPower: yep, fails on tip (plus my changes, which don't seem to make it worse/better)
[15:51] Bug #1602749 opened: no visible sign that HA is degraded when lost <2.0>
[15:52] searching for the string literal "lxd" returns a distressing number of hits. did we not all have it beaten into our heads to always use constants and not write out string literals everywhere?
[15:54] cherylj: any objections to raising bug 1598063 as critical so i can land the fix?
[15:54] Bug #1598063: Data race in apiserver/observer package
[15:54] katco: no objections :)
[15:54] cherylj: great, thx!
[15:55] you'll need to add the blocker tag too to use the $$fixes-####$$ notation
[15:55] cherylj: ah ok
[15:57] dooferlad: thanks for the link to the alias bug
[15:59] frobware: I assume that is sarcasm...
[15:59] dooferlad: check!
[15:59] sigh
[16:00] dooferlad: we used to bridge aliases but removed it. I cannot immediately recall why we removed bridges for aliases...
[16:03] oh man, we love spooky action at a distance in this codebase
[16:04] frobware: I am hoping that https://bugs.launchpad.net/juju-core/+bug/1602054 is related to some interfaces juggling, but I am not seeing much hope.
[16:04] Bug #1602054: juju deployed lxd containers are missing a default gateway when configured with multiple interfaces
[16:05] dooferlad: you wrote "we save the original in /etc/network" - we do?
[16:05] frobware: unfortunately when trying to recreate the issue I set up two NICs on the MAAS controller, put the new NIC on another subnet and then I couldn't add a LXD because Juju was trying to connect to the wrong IP address. Yay.
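[Editor's note: the "bad ports from GCE: invalid port range 0-0/tcp" failure discussed above suggests a firewall rule was generated with a zeroed port range, which GCE rejects. A validator along these lines would catch it before the API call — a hypothetical sketch, not juju's actual provider code; the type and method names are assumptions.]

```go
package main

import "fmt"

// PortRange describes a contiguous range of ports for one protocol,
// in the style a provider firewall layer might use (illustrative only).
type PortRange struct {
	FromPort, ToPort int
	Protocol         string
}

// Validate rejects ranges like 0-0/tcp, which GCE refuses:
// valid TCP/UDP ports run from 1 to 65535.
func (p PortRange) Validate() error {
	if p.Protocol != "tcp" && p.Protocol != "udp" {
		return fmt.Errorf("invalid protocol %q", p.Protocol)
	}
	if p.FromPort < 1 || p.ToPort > 65535 || p.FromPort > p.ToPort {
		return fmt.Errorf("invalid port range %d-%d/%s",
			p.FromPort, p.ToPort, p.Protocol)
	}
	return nil
}

func main() {
	bad := PortRange{FromPort: 0, ToPort: 0, Protocol: "tcp"}
	fmt.Println(bad.Validate()) // prints "invalid port range 0-0/tcp"
	ok := PortRange{FromPort: 8080, ToPort: 8080, Protocol: "tcp"}
	fmt.Println(ok.Validate() == nil) // prints "true"
}
```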
[16:06] * dooferlad goes to make dinner and ponder the wrongs of trying to make computers communicate
[16:06] dooferlad: have you tried putting both on the same subnet?
[16:07] cherylj, mgz: not having cloud-city creds for GCE, is there a way to get to the dashboard to kill off my rogue instance? Up until recently I was using my own (free/trial) account.
[16:08] frobware: I can add you if you promise to be good
[16:09] mgz: I can be good-ish. Promise-ish. :)
[16:09] lol
[16:09] * frobware steps out of being bad to "kill" an instance... ???
[16:12] frobware: you can now login to the webconsole via your canonical account and the details in cloud-city
[16:12] mgz: thanks & trying now...
[16:14] frobware: would you be able to review https://github.com/juju/juju/pull/5790 ?
[16:14] cherylj: we used to do this. then removed it - I mentioned this in the bug.
[16:16] frobware: ah, so the reasons need to be investigated?
[16:16] cherylj: the change is absolutely fine. it's the semantics that concern me a little.
[16:17] cherylj: but... this could have been at a time when there were /other/ things that were broken.
[16:20] cherylj: either way, I just commented in the PR
[16:20] thanks, frobware
=== akhavr1 is now known as akhavr
[16:27] dooferlad, voidspace, babbageclunk: PTAL @ http://reviews.vapour.ws/r/5234/
[16:27] frobware: looking
[16:35] cherylj: wow, that got merged fast... is this the new branching structure in play? it doesn't look like any tests were run? http://juju-ci.vapour.ws:8080/job/github-merge-juju/8403/console
[16:39] umm.....
[16:39] balloons: can you look at http://juju-ci.vapour.ws:8080/job/github-merge-juju/8403/console ?
[16:39] oh, hmm
[16:39] cherylj: i knew it was too good to be true =|
[16:39] maybe the output from the unit test isn't displayed anymore
[16:39] ?
[16:40] 2016-07-13 16:08:45 DEBUG Waiting for trusty to finish
[16:40] 2016-07-13 16:30:32 DEBUG trusty finished
[16:40] ...
[16:40] 22 minutes seems reasonable
[16:40] i'm not sure how i feel about that...
[16:40] i guess if it succeeds you don't need the scrollback?
[16:40] "reasonable"
[16:40] yeah
[16:40] dunno
[16:40] maybe we could at least put something there about "tests finished successfully"
[16:40] we used to print out the packages running tests... I don't think there's any reason not to do that
[16:40] yeah
[16:41] maybe if the io was causing slowdown, but i don't think that was the case?
[16:41] yeah, but if we're running multiple things in parallel, where would the output go?
[16:41] oh it does say ALL passed
[16:42] I think we're just running trusty
[16:42] abentley: for the new parallel merge job, where is the output from the parallel jobs logged?
[16:42] what's our concern about the merge? The speed? The missing results in the console?
[16:42] balloons: the missing results made me think it hadn't run
[16:42] cherylj: In the jenkins artifacts for the build.
[16:43] http://juju-ci.vapour.ws:8080/job/github-merge-juju/lastSuccessfulBuild/artifact/artifacts/trusty-out.log/*view*/
[16:43] balloons: i wasn't sure if the tests ran. i missed the "All passed, sending merge" which i assume is referring to the tests?
[16:43] well, I guess I should link to your specific build: http://juju-ci.vapour.ws:8080/job/github-merge-juju/8403/artifact/artifacts/trusty-out.log/*view*/. But indeed they are there
[16:43] balloons: any reason not to just put that right in the jenkins console output?
[16:43] abentley: ah, got it
[16:44] natefinch: The reason for not doing that is because we are running 3 sets of tests at once, and outputting them all at the same time would be confusing.
[16:44] right. We could echo them to the console in order upon completion I suppose
[16:45] balloons: I would like to do that for failing tests. I'm not sure it's desirable for successes.
[16:45] cherylj: did an email go out about this? looks like not everyone knew about the change?
[16:46] well, desire is a funny thing. Developers might rather have the giant log.
[16:46] katco: an email did go out
[16:46] katco`: Yes, I sent it to juju-dev: "Windows and --race tests are now gating"
[16:46] katco: don't think it covered the change in the test output, maybe
[16:46] cherylj: It did.
[16:46] abentley: oh that one
[16:46] * cherylj is corrected :)
[16:46] abentley: must have missed that part
[16:46] abentley: ty
[16:47] ha, we core devs are great about reading entire emails
[16:47] anyways, I guess it's good that you are concerned about potential failures -- most would be happy it landed, but you care enough to know how it did so as well
[16:47] kudos :-)
[16:47] abentley: could we at least write links to the console so we don't have to try to figure out how to get to the artifacts from that page?
[16:48] cherylj: well i had thought i got the gist of it: make sure your stuff works on windows :) i didn't expect additional information about log formatting in that chain
[16:48] lol
[16:48] katco`: ditto for me. My bad for not reading the whole email.
[16:49] balloons: goal is definitely a stable product, not jfdi ;)
[16:49] natefinch: We could, but I would rather print out the logs for failures first, and then see if we still want it.
[16:50] abentley: I'd like the links printed out always, and then print out failures. The test output has a lot of very useful information in it... like test times etc.
[16:50] right, if you can adjust to the idea that no output is a "good thing", the output can become more usable and grokkable, as it will only ever have useful information on failures
[16:50] balloons: no output = good is very dangerous. no output can also mean "nothing ran"
[16:51] natefinch, well, you are getting a positive result back, so no output isn't literal
[16:52] balloons: ideally I'd like something like this for each test run - Windows: SUCCESS output: http://juju-ci.vapour.ws:8080/job/github-merge-juju/lastSuccessfulBuild/artifact/artifacts/trusty-out.log/*view*/
[16:52] so, like, for example, it looks to me like the only thing we ran for this merge was trusty (so, no windows): http://juju-ci.vapour.ws:8080/job/github-merge-juju/8403/console
[16:52] I've no desires either way, but I can definitely see both sides
[16:53] the right amount of information at the right time should be the goal
[16:53] balloons: That is the only thing we ran. We are temporarily kneecapped.
[16:53] natefinch: ^^
[16:54] abentley: understood
[16:54] abentley: for the record, I disagree with Ian. The reason the tests on Windows don't pass is *because* we never make them gating.
[16:55] natefinch: we have a spectrum of folks on the team: ||-(jfdi we're so busy)------(do it perfect or not at all)-||
[16:55] natefinch: I'd say it's because CI failures don't block beta releases.
[16:55] natefinch: as always, the middle way is preferable
[16:57] abentley: I don't understand how we can ship anything, beta or not, if we know even one test is failing. That means we *know* the product doesn't work.
[16:57] I mean, I do understand. We assume we know better than the test and that it's not *really* a bug.
[16:58] which is dangerous but also often true.
[16:58] natefinch: The rationale with the betas is more like "It's a beta, it's allowed to have flaws".
[16:59] abentley: natefinch: yes. and i think that direction comes from the top
[16:59] natefinch: also, lots of products in beta (and even not) come out with a list of "known issues"
[17:01] I guess a bug with a failing test is better than a bug without one.... and we certainly have plenty of bugs without accompanying tests.
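[Editor's note: natefinch's suggestion above — one console line per parallel test suite, with status and a link to the artifact log — could be sketched like this. A hypothetical helper, not the actual CI job's code; the type and field names are assumptions.]

```go
package main

import "fmt"

// SuiteResult captures the outcome of one parallel test suite in the
// merge job (illustrative data model, not the real CI job's).
type SuiteResult struct {
	Name   string // e.g. "trusty", "windows", "race"
	Passed bool
	LogURL string
}

// Summarize renders one summary line per suite for the Jenkins
// console: status plus a link to the full artifact log.
func Summarize(results []SuiteResult) []string {
	lines := make([]string, 0, len(results))
	for _, r := range results {
		status := "FAILURE"
		if r.Passed {
			status = "SUCCESS"
		}
		lines = append(lines, fmt.Sprintf("%s: %s output: %s", r.Name, status, r.LogURL))
	}
	return lines
}

func main() {
	summary := Summarize([]SuiteResult{
		{Name: "trusty", Passed: true, LogURL: "http://juju-ci.vapour.ws:8080/job/github-merge-juju/8403/artifact/artifacts/trusty-out.log/*view*/"},
	})
	for _, line := range summary {
		fmt.Println(line)
	}
}
```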
[17:27] Bug #1602780 opened: "cannot allocate memory" errors on 2.0beta11 controller
=== mup_ is now known as mup
[17:32] bbl
[19:31] Bug #1556137 changed: Juju 2.0: cannot connect to LXC container when deploying bundle with LXC container placement with maas 1.9.1 as provider <2.0-count>
[19:31] Bug #1600300 changed: [2.0 RC1] can't install lxd - E: The value 'trusty-backports' is invalid for APT - daily fix hasn't been promoted to release
[19:39] hey, what's up team? I've a few questions concerning network spaces on aws ...
[19:40] 1. Is juju capable/aware of pre-existing subnets/networks associated with the aws account?
[19:41] eeerrr ... * capable of using them, * aware that the networks exist
[19:43] 1.a By what method can I instruct Juju to use the pre-existing subnets, if the functionality exists?
[19:43] 2. Feature request if functionality doesn't exist?
[19:43] 2.a Where can I find the docs on this functionality if it does exist?
[19:44] can't seem to locate anything in juju's docs .. I know they are not complete ... guess I'm just hoping this is a thing ... anyone?
[19:45] bdx: most of the network gurus are on European time
[19:48] bdx: juju help add-subnet probably is what you want
[19:50] perrito666, natefinch quick question; I am facing https://bugs.launchpad.net/juju-core/+bug/1449633, is there any way to modify the db to flag the stuck machines as non-voting? because right now, if i try to terminate (--force) those machines juju claims that 'machine xxx is required by the environment'.
[19:50] Bug #1449633: Cannot terminate/remove broken state server after ensure-availability
[19:51] niedbalski_: rerunning ensure-ha should mark them as non-voting and eventually remove them
[19:52] Bug #1556137 opened: Juju 2.0: cannot connect to LXC container when deploying bundle with LXC container placement with maas 1.9.1 as provider <2.0-count>
[19:52] Bug #1600300 opened: [2.0 RC1] can't install lxd - E: The value 'trusty-backports' is invalid for APT - daily fix hasn't been promoted to release
[19:55] Bug # changed: 1556137, 1594958, 1595360, 1600300
[19:55] niedbalski_: it's possible this is an edge case that we can't recover from. A replica never even being created due to lack of cloud availability is probably not something we have actively tested for. I would hope that ensure-availability would do the right thing, but I can't be sure.
[19:55] natefinch, I don't see any evident effect, they remain in error state; and machine 0 keeps screaming this: machine-0: 2016-07-13 19:54:39 ERROR juju.worker runner.go:223 exited "firewaller": machine 31 not provisioned
[19:56] natefinch, actually ha is working with another set of machines; but at the time we ran ensure-ha those machines failed to provision, now they are stuck in error state.
[19:56] niedbalski_: natefinch nuking the machines won't do the trick?
[19:58] perrito666, I would be more than happy to nuke them or else flag them as non-voting and terminate those .. any trick?
[19:58] niedbalski_: have you tried the same thing in 1.25? We did make some fixes at some point, though I don't know if any of them would have fixed this problem.
[19:58] natefinch: oooh niceeee! you da' man!
[19:58] natefinch: not sure I have enough authority in the subject to confidently say to nuke something
[19:59] natefinch, this is 1.25.5
[19:59] niedbalski_: oh, sorry, the bug mentioned 1.20. Well, good, sorta
[20:00] niedbalski_: maybe try adding two new machines and doing ensure-availability --to x,y, where x and y are the new machines' numbers?
[20:01] natefinch, yep, that's the way we got ha working; now the issue is to get rid of the machines that remain in error state.
[20:01] 30 error pending trusty
[20:01] 31 error pending trusty
[20:04] niedbalski_: you might be stuck with them, unfortunately. It's probably worth a bug report to say that remove-machine --force should work if the server isn't even running yet (and/or is non-voting etc)
[20:05] niedbalski_: you could, in theory, go hack the database, but it's not something I'd feel super comfortable about.
[20:07] natefinch, understood, I'll file an extra bug; I don't feel comfortable with hacking the database, but those stuck errored machines are driving me somewhat crazy.
[20:08] niedbalski_: make an alias for juju status that strips them out ;)
[20:10] natefinch, lol .. yeah, probably I can mock the status output too.
[20:58] Bug #1602838 opened: Charm upgrade should use bulk calls for whole model, not one per charm
[20:58] Bug #1602840 opened: juju status (or equivalent) should show all addresses a machine has
=== natefinch is now known as natefinch-afk
[21:24] anybody successfully deploying LXD containers today? I ask because the download seems to stop at about 30% for me then nothing more.
[21:26] well, well... it may be something at my end as downloading latest from kernel.org also hangs at ~40%
[21:27] * frobware goes to bed and hopes the internet fairies sprinkle unicorns over his internet connection....
[21:28] Bug #1598708 changed: juju use many DEPRECATED apis, is there new juju version using the latest api?
[22:38] veebers: the latest log forward stuff has just landed, cpu issue hopefully fixed
[22:39] wallyworld: awesome, I'll give it a spin shortly
[22:39] hope it works :-)
[22:40] heh I'll let you know either way :-)
[22:57] hello juju-core devs :-)
[22:57] I am hitting a lxd perms issue
[22:57] http://paste.ubuntu.com/19316751/
[22:58] is there a workaround for this?
[22:58] wallyworld: quick Q: what's the best way to get a model's (i.e. the controller) uuid? I'll be using it to confirm the right log message appears in the rsyslog sink
[22:58] * arosales tried to reconfigure the network per https://jujucharms.com/docs/devel/getting-started but no luck
[22:58] hit this on beta7, beta10, and beta11
[22:59] veebers: juju show-controller --format yaml should print the controller uuid i think
[23:00] wallyworld: cheers
[23:00] arosales: i've not seen that particular issue. thumper? ^^^^
[23:01] * thumper looks
[23:01] * arosales trying to clean up other containers to see if that's the issue
[23:01] ah... nope
[23:04] wallyworld: fyi that works, thanks
[23:04] veebers: awesome, thanks for testing
[23:05] wallyworld: oh sorry to mislead, that was re: show-controller. I'll have tested the cpu stuff soon :-P
[23:05] arosales, I have bootstrapped several times on lxd in the last week (both from devel, beta7 and master) and not seen that issue
[23:05] alexisb_: on Z :-)
[23:05] ah ok
[23:05] ?
[23:05] that I have not done
[23:06] :-(
[23:06] :)
[23:07] arosales, someone on the QA team can provide details on system z tests in CI, maybe balloons
[23:07] if he is still around
[23:07] alexisb_: ok, I'll see if any qa folks respond
[23:08] alexisb_: thanks
[23:08] got some z folks interested in a juju demo tomorrow and would like to show 20
[23:08] 2.0
[23:08] but . . . the above error is a little bit of an issue
[23:08] luckily 1.25 is working
[23:11] arosales, looking at this we have many working lxd deploys on s390x: http://reports.vapour.ws/releases/4135/job/lxd-deploy-xenial-s390x/attempt/217
[23:13] arosales, not that that really helps for your specific issue
[23:13] alexisb_: well good to know that it has been working
[23:13] alexisb_: thanks for the information
[23:37] Bug #1602885 opened: machine allocation failed due to maas error, but machines stay in pending state forever
[23:43] wallyworld: it was a combination of my code and your code that caused the problem, so I eat my words
[23:43] we both suck :-)
[23:43] wallyworld: the provider is using StateServingInfo, should be using ControllerConfig
[23:44] hmm, i thought it was using controller config, oh well
[23:44] wallyworld: my change made it so that StateServingInfo is no longer populated when the Environ.Bootstrap call is made
[23:44] wallyworld: I'm planning to make InstanceConfig write-only when I get some time, any input should be in StartInstanceParams
[23:45] ok
[23:50] menn0: http://reviews.vapour.ws/r/5167/diff/# updated
[23:53] * perrito666 is back
=== urulama is now known as urulama|___
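[Editor's note: the StateServingInfo vs ControllerConfig exchange above — providers should read controller-scoped values from controller config — could be sketched roughly as below. This is a hypothetical illustration; juju's controller config is a key/value bag, but the exact key names and accessor shown here are assumptions, not the actual API.]

```go
package main

import (
	"errors"
	"fmt"
)

// Config is a sketch of a controller-level configuration bag: the
// place a provider would look for controller-scoped values instead
// of reaching into StateServingInfo.
type Config map[string]interface{}

// CACert returns the controller CA certificate, if present.
// The "ca-cert" key name is illustrative.
func (c Config) CACert() (string, error) {
	v, ok := c["ca-cert"].(string)
	if !ok || v == "" {
		return "", errors.New("controller config has no ca-cert")
	}
	return v, nil
}

func main() {
	cfg := Config{"ca-cert": "-----BEGIN CERTIFICATE-----\n..."}
	cert, err := cfg.CACert()
	fmt.Println(err == nil, len(cert) > 0) // prints "true true"
}
```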