* thumper goes to make a test faster | 00:00 | |
thumper | well... when the test run with -race is done | 00:00 |
thumper | perhaps food now | 00:00 |
alexisb | ok axw, ready when you are | 00:14 |
axw | alexisb: brt | 00:15 |
redir | I think this testing with snap isn't going to work out so slick | 00:32 |
redir | it requires uploading the agent with the features baked into the backend and the snap can't do that | 00:33 |
redir | alexisb: ^ | 00:34 |
redir | or maybe not. I might not understand how juju uses streams | 00:37 |
alexisb | redir, I am pretty sure both menno and wallyworld have done snaps with agents baked in | 00:39 |
alexisb | menn0, ^^^ ?? | 00:39 |
redir | alexisb: deploying one now to see | 00:39 |
menn0 | redir, alexisb: it can be done. https://github.com/mjs/juju-menno-snap | 00:40 |
menn0 | redir, alexisb: here's how to build off an alternate repo and branch: https://github.com/mjs/juju-menno-snap/blob/MM-tabular-trial/snapcraft.yaml | 00:42 |
menn0 | that also shows how to include other files from outside of the juju build | 00:43 |
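A minimal sketch of the alternate-repo trick menn0 is pointing at (his linked snapcraft.yaml is the authoritative version; the plugin and key names here are assumptions based on snapcraft's go plugin):

```sh
# Point the juju part at a fork and branch instead of upstream master.
cat >> snapcraft.yaml <<'EOF'
parts:
  juju:
    plugin: go
    go-importpath: github.com/juju/juju
    source: https://github.com/mjs/juju.git
    source-branch: MM-tabular-trial
EOF
snapcraft
```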
redir | menn0: I just used . as the repo... | 00:43 |
* redir looks | 00:43 | |
menn0 | redir: if that works that's great too. using a specific branch is probably a bit more repeatable though. | 00:44 |
redir | menn0: so it turns out that it didn't like my named branch earlier because I had a typo in the name. | 00:49 |
redir | :( | 00:49 |
menn0 | redir: ah right | 00:49 |
redir | I went from the snapcraft.yaml in the juju repo | 00:49 |
redir | I am guessing adding jujud to the list of snap binaries will give me a working jujud | 00:50 |
=== thumper is now known as thumper-dogwalk | ||
redir | but --upload-tools is gone, will a prebuilt jujud work with --build-agent? | 00:50 |
redir | me tries | 00:50 |
redir | unhandled snapcraft exceptions, yay:) | 00:54 |
natefinch | redir: if you have a jujud in the same directory as your juju client with the same version, it automatically uses upload tools | 00:57 |
natefinch | redir: as long as your version is greater than what's in streams (otherwise it'll use streams). | 00:58 |
natefinch | redir: if you have the code, you can use --build-agent to force a rebuild and upload of jujud (basically like the old upload tools) | 00:58 |
redir | natefinch: thanks. I think that will work with menn0's hint to include jujud in the list of binaries in parts: juju: snap:... | 00:58 |
redir | natefinch: it's a snap so there's no code with the binaries | 00:59 |
natefinch | redir: right. The whole idea is to support snaps... it's just overly complicated because we *also* still support streams | 00:59 |
redir | we need streamcraft ! | 00:59 |
natefinch | redir: personally, I really wish we just still had --upload-tools to force it to do what we want. It removes a lot of the guessing about what juju bootstrap would do. | 01:00 |
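natefinch's rules above, sketched as a shell session (the cloud name and paths are illustrative; `--build-agent` is the flag named in the discussion):

```sh
# 1. A jujud sitting next to the juju client, with a matching version
#    that is newer than what streams offer, is uploaded automatically.
cp "$GOPATH/bin/jujud" "$(dirname "$(command -v juju)")/"
juju bootstrap my-maas test

# 2. With the source tree available, force a rebuild and upload of
#    jujud (roughly the old --upload-tools behaviour).
juju bootstrap my-maas test --build-agent
```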
redir | looks promising this build and push | 01:00 |
* redir goes to get some exercise and will be back later | 01:00 | |
menn0 | axw: ping? | 02:59 |
axw | menn0: pong! | 03:00 |
menn0 | axw: I'm trying to determine whether Will fixed this before he left: https://bugs.launchpad.net/juju/+bug/1608956 | 03:01 |
mup | Bug #1608956: local charms can be deleted while still referenced <juju:Triaged> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1608956> | 03:01 |
menn0 | axw: b/c I *think* he has, based on the last PRs he submitted and our discussions with him | 03:01 |
axw | menn0: pretty sure it's resolved for 2.0. I'll double check | 03:02 |
menn0 | axw: any idea how to have the same local charm references by 2 applications? | 03:02 |
menn0 | axw: referenced even | 03:02 |
axw | menn0: hrm, not sure, I think we auto increment each time we upload don't we? | 03:03 |
axw | menn0: normally there would be refs from multiple units, not applications | 03:03 |
menn0 | axw: exactly. i've been playing around with a model and a local charm for a while and I can't think how to get to the situation the ticket describes. | 03:04 |
axw | doesn't make sense for an app to be removed before units though... | 03:04 |
menn0 | yeah... either Will was confused or there's more to it | 03:05 |
axw | menn0: I suspect it's theoretical, not sure though. there definitely is code to do ref checking now though. Application.removeOps decrefs, and then schedules a cleanup that will fail if the charm is still in use | 03:09 |
menn0 | axw: I saw that too, but thought I'd try it out. I guess it's theoretical like you said. | 03:10 |
menn0 | axw: i'm going to add some references to PRs and close the ticket. sound good? | 03:10 |
axw | menn0: sounds good | 03:11 |
anastasiamac | menn0: closing theoretical tickets always sounds good \o/ | 03:18 |
thumper-dogwalk | winning !!! | 03:19 |
thumper-dogwalk | oh | 03:19 |
=== thumper-dogwalk is now known as thumper | ||
menn0 | anastasiamac: closing tickets for theoretical problems that were actually fixed despite being theoretical is even better :) | 03:19 |
thumper | got StatusHistorySuite.TestPruneStatusHistoryBySize from 42s to 1.5s | 03:19 |
anastasiamac | thumper: \o/ | 03:19 |
anastasiamac | menn0: closing any tickets is amazing! | 03:20 |
thumper | and under race, from 192s to 8s | 03:20 |
thumper | 8s still seems long | 03:20 |
thumper | but we are inserting 20000 documents | 03:20 |
anastasiamac | i'd take 8s over 192s and run with it ;D | 03:20 |
thumper | changing batch size from 1000 to 10000 takes it from 1.5s to 1.3s | 03:22 |
* thumper waiting for race test to run | 03:23 | |
menn0 | thumper: nice! | 03:23 |
thumper | still 8s for -race | 03:24 |
thumper | so will stick with smaller batch size | 03:24 |
* thumper looks at next on hit list | 03:25 | |
anastasiamac | thumper: out of curiosity, what was the fix? how did u reduce time so dramatically? | 03:25 |
thumper | stopped inserting one document at a time | 03:25 |
anastasiamac | ha :) | 03:25 |
thumper | with 20000 documents | 03:25 |
thumper | kinda dumb | 03:25 |
thumper | doing 1000 at a time | 03:25 |
anastasiamac | awesome \o/ i wonder if there is something we can do more globally - even if just to detect and improve places where we have similarly large sequential operations... | 03:26 |
thumper | not really, it is very specific | 03:42 |
mup | Bug #1597601 opened: ERROR cannot deploy bundle: cannot deploy application: i/o timeout <2.0> <bundles> <deploy> <oil> <oil-2.0> <repeatability> <retry> <juju:Fix Committed by menno.smits> <juju-core:In Progress by menno.smits> <https://launchpad.net/bugs/1597601> | 03:49 |
Mian | Hi, does anyone here know something about the Xenial version of mongodb charm? it's not available in the charm store right now | 03:51 |
Mian | is there a schedule or calendar as to when we will release mongodb charm on Xenial | 03:51 |
thumper | menn0: https://github.com/juju/juju/pull/6296 | 03:56 |
menn0 | thumper: looking | 04:06 |
menn0 | Mian: you're likely to have better luck on #juju | 04:07 |
Mian | menn0: got it, thanks ... | 04:07 |
menn0 | thumper: did you know there's already a state/clock.go | 04:09 |
menn0 | ? | 04:09 |
menn0 | thumper: it looks like you've done things in a better way | 04:09 |
menn0 | thumper: but all the things that use state.GetClock should probably get updated to use the injected clock | 04:10 |
thumper | ah | 04:14 |
thumper | I thought there was a clock somewhere | 04:14 |
thumper | but I thought I had imagined it | 04:14 |
thumper | let me fix them up | 04:14 |
menn0 | thumper: looks like there's only one place? | 04:17 |
thumper | well, half a dozen | 04:18 |
thumper | on it already | 04:18 |
* thumper runs the tests | 04:21 | |
thumper | yay | 04:22 |
thumper | removed lots of TODO | 04:22 |
thumper | and bug references with this branch | 04:22 |
thumper | and *state.State now has a clock | 04:22 |
menn0 | thumper: yeah, really good to have this done - thank you! | 04:22 |
thumper | all GetClock references removed, running tests now | 04:23 |
thumper | just to make sure | 04:23 |
thumper | hmm | 04:23 |
thumper | found a reference in worker/uniter | 04:23 |
thumper | has to be the suites with the longest test runs, doesn't it | 04:24 |
thumper | three tests failed in state | 04:28 |
* thumper enfixorates | 04:28 | |
thumper | hmm... think I have broken the uniter tests... | 04:41 |
thumper | I think they are waiting to time out | 04:41 |
thumper | PASS: uniter_test.go:1173: UniterSuite.TestActionEvents 39.168s | 04:42 |
thumper | no | 04:42 |
thumper | just that one | 04:42 |
thumper | no, uniter tests all good | 04:42 |
thumper | phew | 04:42 |
thumper | menn0: review updated | 04:48 |
menn0 | thumper: looking | 04:48 |
thumper | menn0: makes many things simpler | 04:50 |
thumper | I was caught out by the lease managers not working | 04:50 |
thumper | but that was because I needed to restart the workers when you set the clock | 04:51 |
menn0 | thumper: I thought of that too but when I checked, so had you :) | 04:53 |
menn0 | thumper: ship it | 04:53 |
thumper | w00t | 04:53 |
menn0 | thumper: that shaves a bit off landing times then | 04:54 |
menn0 | thumper: as well as making state testing a whole lot better | 04:54 |
thumper | at least a minute | 04:54 |
thumper | no... | 04:54 |
thumper | only about 20s | 04:54 |
thumper | the race is where it really sucks | 04:54 |
redir | axw: yt? if you have a minute PTAL https://github.com/juju/juju/pull/6297 | 04:55 |
thumper | given how long I worked yesterday, I'm going to call it now | 05:00 |
thumper | I'll check to make sure the branch lands | 05:00 |
thumper | but apart from that, dinner making time | 05:00 |
thumper | laters | 05:00 |
* menn0 is done for now too... long tech board meeting tonight + more calls | 05:01 | |
=== menn0 is now known as menn0-afk | ||
axw | redir: looking | 05:01 |
redir | tx | 05:07 |
redir | having trouble getting snaps to upload a backend with support, but I guess that won't really help either now that I think about it | 05:08 |
redir | :| | 05:08 |
axw | redir: reviewed | 05:11 |
redir | I haven't yet gotten the snap to deploy with an appropriate jujud | 06:20 |
redir | giving up for the day. | 06:20 |
* redir goes eod | 06:20 | |
redir | axw: started on changes per your review | 06:22 |
redir | axw: was going to use an environs.RegionSpec since that is what we use elsewhere to help generate the region key | 06:24 |
axw | redir: works for me | 06:24 |
redir | but then found that the param structs for [Unset|Set]ModelDefaults are different shapes | 06:25 |
redir | so I'll refactor UnsetModelDefaults to be shaped like SetModelDefaults and generate the regionspec there to pass to state. | 06:26 |
redir | unless that sets off some alarm bells for you | 06:26 |
redir | axw ^ | 06:26 |
redir | nite | 06:26 |
axw | redir: don't really understand what you mean by them being shaped differently. they look the same to me. | 06:28 |
axw | good night | 06:28 |
redir | in case it comes up menn0-afk axw the cli snap for reviewing modeldefaults isn't working because I've been unable to make the snap use the right jujud -- so not going to make the tech board agenda | 06:28 |
=== redir is now known as redir-afk | ||
redir-afk | axw one has region and cloud tag on the top of the paramas struct the other has it with each item in the list | 06:29 |
axw | redir-afk: okey dokey. possibly because rc1 images have been released, and you're building code with version=rc1? may need to rebase. anyway, a job for tomorrow | 06:29 |
axw | redir-afk: neither SetModelDefaults nor UnsetModelDefaults have it at the top. | 06:29 |
redir-afk | axw: you're right it is at the top in the private setModelDefaults. | 06:35 |
redir-afk | sigh | 06:35 |
=== frankban|afk is now known as frankban | ||
=== menn0-afk is now known as menn0 | ||
marcoceppi | frobware: ping, maas question | 08:27 |
voidspace | finally submitted some of my expenses for the juju core sprint | 09:26 |
voidspace | the one in june... | 09:26 |
marcoceppi | voidspace: hah, I still have expense reports from May outstanding >.> | 09:32 |
voidspace | marcoceppi: :-) | 09:32 |
perrito666 | Really? I submit them on the go, way easier to track | 09:50 |
voidspace | perrito666: that's much more sensible... | 09:52 |
voidspace | frobware: ping | 10:08 |
voidspace | frobware: do you have opinions on bug 1624495? | 10:08 |
mup | Bug #1624495: operations fails on rackspace because of ipv6 address in dns-name <rackspace> <status> <juju:Triaged by rharding> <juju-ci-tools:Triaged> <https://launchpad.net/bugs/1624495> | 10:08 |
voidspace | frobware: should we filter ipv6 addresses out of dns-name? | 10:08 |
voidspace | frobware: that doesn't seem very forward compatible with work coming "soonish" | 10:08 |
=== jamespag` is now known as jamespage | ||
marcoceppi | halp please | 10:44 |
marcoceppi | juju deploy bundle is hanging on this error | 10:45 |
marcoceppi | we can curl the request, we have DNS and network connectivity | 10:45 |
marcoceppi | http://paste.ubuntu.com/23210954/ | 10:46 |
marcoceppi | any help is appreciated, onsite, etc | 10:46 |
frobware | voidspace, marcoceppi: pong - (sorry was at the opticians) | 10:51 |
marcoceppi | we can't deploy anythin gatm | 10:52 |
frobware | marcoceppi: juju version? | 10:52 |
marcoceppi | beta18 | 10:55 |
marcoceppi | frobware: ^ | 10:55 |
voidspace | frobware: hey, hi | 11:11 |
voidspace | frobware: did you see my question about bug 1624495 | 11:11 |
mup | Bug #1624495: operations fails on rackspace because of ipv6 address in dns-name <rackspace> <status> <juju:Triaged by rharding> <juju-ci-tools:Triaged> <https://launchpad.net/bugs/1624495> | 11:11 |
frobware | voidspace: was partially looking through backlog | 11:11 |
voidspace | frobware: rather than filtering ipv6 out we could just prefer ipv4 (always give an ipv4 address if one is available) | 11:12 |
voidspace | frobware: but if only ipv6 addresses are available still return one for dns-name | 11:12 |
frobware | voidspace: in the PB I didn't see anything IPv6 related - is that captured elsewhere? | 11:12 |
voidspace | frobware: PB? | 11:12 |
frobware | voidspace: pastebin from marcoceppi | 11:12 |
frobware | voidspace: http://paste.ubuntu.com/23210954/ | 11:13 |
voidspace | frobware: my question is unrelated | 11:13 |
frobware | voidspace: oh | 11:13 |
voidspace | frobware: I'm talking about bug 1624495 and ways to fix it | 11:13 |
mup | Bug #1624495: operations fails on rackspace because of ipv6 address in dns-name <rackspace> <status> <juju:Triaged by rharding> <juju-ci-tools:Triaged> <https://launchpad.net/bugs/1624495> | 11:13 |
marcoceppi | rc1 "fixed" it | 11:14 |
voidspace | frobware: you can address marcoceppi first - it sounds like a higher priority | 11:14 |
frobware | marcoceppi: huzzah | 11:14 |
frobware | voidspace, marcoceppi: I'm confused. we're all talking about the same bug... or so I thought. | 11:15 |
voidspace | frobware: marcoceppi: I have no idea... | 11:16 |
marcoceppi | I just need help, in general, unrelated to any bugs | 11:17 |
marcoceppi | we're onsite in a high pressure situation and weird things are cropping up | 11:17 |
frobware | marcoceppi: ok you got bumped; can I help? what's broke? did rc1 fix things and you are no longer stuck? | 11:18 |
marcoceppi | rc1 got us deploying again | 11:18 |
marcoceppi | we're having issues now where machines aren't being requested from maas by juju, but we're still looking | 11:18 |
frobware | marcoceppi: MAAS 2.0? | 11:19 |
marcoceppi | maas 2.0, juju 2.0rc1 | 11:19 |
marcoceppi | I think it's related to requiring us to put IP addresses into noproxy | 11:19 |
marcoceppi | this environment has an http-proxy, but it means ALL traffic gets routed to that proxy, including traffic to the controller and maas | 11:20 |
marcoceppi | which is annoying | 11:20 |
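What marcoceppi is describing would be configured along these lines (model-config key names as in juju 2.0; the addresses are illustrative):

```sh
# Send general traffic through the site proxy, but exempt MAAS and the
# controller so internal juju<->MAAS traffic stays direct.
juju bootstrap my-maas prod \
  --config http-proxy=http://proxy.internal:3128 \
  --config https-proxy=http://proxy.internal:3128 \
  --config no-proxy=localhost,127.0.0.1,192.168.0.2,192.168.0.251
```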
rick_h_ | morning | 11:22 |
marcoceppi | I'm getting a TON of activity in the log, but still no instances in maas booted | 11:24 |
rick_h_ | marcoceppi: this is in the MAAS log? | 11:24 |
frobware | marcoceppi: care to share the log | 11:25 |
marcoceppi | juju machine- log | 11:25 |
rick_h_ | marcoceppi: k, yea can you pastebin that log and peek at the maas log? | 11:25 |
marcoceppi | trying to pastebin | 11:26 |
marcoceppi | yeah, I can't really, juju scp doesn't run as root, I don't have access to run a pastebin from that server | 11:28 |
rick_h_ | marcoceppi: k | 11:28 |
rick_h_ | marcoceppi: email? | 11:29 |
marcoceppi | we're getting a few instances booted, but it's taking a very long time for requests to come through | 11:29 |
marcoceppi | http://paste.ubuntu.com/23211052/ | 11:29 |
marcoceppi | enjoy 5mb of text | 11:30 |
rick_h_ | marcoceppi: hmm, some sort of timeout maybe...what would slow down maas provisioning... | 11:30 |
rick_h_ | marcoceppi: heh, yea browser is choking on it | 11:30 |
marcoceppi | we've gotten three of the 10 machines deployed | 11:31 |
marcoceppi | just seems like juju is taking its sweet time making these requests | 11:31 |
marcoceppi | rick_h_: http://paste.ubuntu.com/23211059/ that's the bundle | 11:32 |
rick_h_ | marcoceppi: yea, just trying to think wtf. You're saying it's taking a long time for maas to show the machine is pulled by Juju, not that the charms are taking a long time to come up, or a long time for juju to upgrade the machines once up. | 11:33 |
rick_h_ | marcoceppi: all the things I'd expect to be slow behind the proxy you aren't hitting, it's the stuff that should be pretty damn fast | 11:34 |
frobware | marcoceppi: can we isolate juju and/or maas. Can you deploy a machine - does that take similar time? | 11:34 |
marcoceppi | frobware: deploy a machine from maas? | 11:34 |
frobware | marcoceppi: deploy from MAAS, taking Juju out of the equation | 11:34 |
marcoceppi | or add-machine from juju | 11:34 |
marcoceppi | frobware: it's instantly allocated, then comes up in a few mins time | 11:34 |
frobware | :( | 11:34 |
frobware | marcoceppi: can you pase $(ip route) from the maas controller | 11:35 |
frobware | *paste | 11:35 |
marcoceppi | we're 23 minutes in, and only three machines were requested from maas | 11:36 |
marcoceppi | jk, another machine was just allocated | 11:36 |
marcoceppi | frobware: http://paste.ubuntu.com/23211068/ | 11:37 |
frobware | marcoceppi: so are the machines taking a long time to install packages? (just guessing now) | 11:39 |
marcoceppi | frobware: http://paste.ubuntu.com/23211072/ | 11:39 |
marcoceppi | frobware: they're just not being requested from maas | 11:39 |
marcoceppi | frobware: http://paste.ubuntu.com/23211079/ | 11:40 |
frobware | marcoceppi: and can you see the console messages when a machine is booting? | 11:40 |
marcoceppi | frobware: http://i.imgur.com/RgSyCAQ.png | 11:40 |
* frobware wonders at 72 cores | 11:41 | |
marcoceppi | it's not the speed of the machine booting, in fact it's like 3 minutes from acquired -> ready in juju | 11:41 |
marcoceppi | it's that I've got a lot of machines ready, and I'm 30 mins in, and Juju has only made the request for a handful of the machines | 11:41 |
marcoceppi | we were doing this with beta18 yesterday, without this problem; we moved to rc1 because we lost the ability to deploy a bundle with beta18 this morning | 11:42 |
frobware | marcoceppi: can you login/ssh to a node once it is mostly up and look to see what CPU usage is being consumed | 11:42 |
marcoceppi | 0.00 | 11:43 |
marcoceppi | there's a minimal load | 11:43 |
frobware | bleh | 11:43 |
marcoceppi | you want me to look at the controller cpu usage? | 11:43 |
frobware | marcoceppi: ok, so, $(juju add-machine) without deploying is how long? Do you have a spare machine to do that operation? | 11:43 |
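frobware's isolation test, sketched (the hostname placement mirrors the `juju add-machine nfv145.maas` run that follows below):

```sh
# Request a single machine with no charm attached, then watch how long
# it takes MAAS to show it Acquired and juju to show it started.
juju add-machine nfv145.maas   # placement by MAAS hostname
watch -n 10 juju status        # machine should go pending -> started
```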
rick_h_ | voidspace: ping, need a hand here. I'm trying to look at marcoceppi's logs and find the maas communication section where juju asks for the machine and juju gets it and starts responding: http://paste.ubuntu.com/23211052/ | 11:44 |
marcoceppi | well, technically we have a bunch of machines not being used, but they're supposed to be allocated | 11:44 |
frobware | marcoceppi: also, what's the CPU load on the MAAS controller? | 11:44 |
rick_h_ | voidspace: any clue as to what in the logs I'm looking for besides all the lines that have something like "machine-6" in them? | 11:44 |
marcoceppi | frobware: 0.23 | 11:44 |
marcoceppi | I'll try to add-machine | 11:44 |
marcoceppi | frobware: equally as slow. I did an add-machine and we're 2 mins in since I ran the command and it's not yet acquired | 11:46 |
marcoceppi | it's like maas provider is serializing requests only after the last machine agent fully boots | 11:47 |
frobware | marcoceppi: and could you send truncated logs around the time you did add-machine - what was the machine number? | 11:47 |
marcoceppi | machine # 10 | 11:47 |
marcoceppi | let me try | 11:47 |
frobware | rick_h_: given log size, trying to narrow down to a known machine allocation ^ | 11:48 |
rick_h_ | frobware: yea, understand | 11:48 |
frobware | rick_h_: that other log made my browser behave like ye-olde-netscape | 11:48 |
rick_h_ | need 32gb of ram :) took a sec but loaded here | 11:49 |
marcoceppi | frobware: the machine never came up, I had to remove-machine because we need it for the deploy | 11:49 |
frobware | marcoceppi: ok | 11:49 |
marcoceppi | frobware: http://paste.ubuntu.com/23211105/ | 11:50 |
marcoceppi | that's 8000 lines, from just around the time I did a `juju add-machine nfv145.maas` | 11:50 |
rick_h_ | frobware: voidspace any clue if this means anything? {"request-id":100,"response":"'body redacted'"} Provisioner[""].MachinesWithTransientErrors | 11:51 |
voidspace | rick_h_: not seen it before, I can grep the code though | 11:52 |
mup | Bug #1613992 opened: 1.25.6 "ERROR juju.worker.uniter.filter filter.go:137 tomb: dying" <canonical-is> <cdo-qa-blocker> <landscape> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1613992> | 11:53 |
voidspace | rick_h_: with an empty response I'd say that means no machines - it's an api facade method that calls into the provisioner_task | 11:54 |
voidspace | rick_h_: no machines with errors I mean | 11:54 |
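A rough way to narrow a multi-megabyte machine log to the lines in play here (the grep patterns are just the strings visible in the pastes; `machine-0.log` is the conventional juju agent log name):

```sh
# Pull the provisioner conversation for one machine out of the log,
# plus the transient-errors polls rick_h_ spotted.
grep -nE 'machine-6|MachinesWithTransientErrors' machine-0.log | less
```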
frobware | rick_h_, voidspace: the other oddity, and repeated, is: "2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [1EC] {"request-id":1,"type":"Admin","version":3,"request":"Login","params":"'params redacted'"} | 11:54 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver admin.go:201 hostPorts: [[192.168.0.251:17070 127.0.0.1:17070 [::1]:17070]] | 11:54 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [1EC] 23.963626ms {"request-id":1,"response":"'body redacted'"} Admin[""].Login | 11:54 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [1EC] user-admin@local {"request-id":2,"type":"Client","version":1,"request":"FullStatus","params":"'params redacted'"} | 11:54 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:181 Applications: map[openstack-dashboard:openstack-dashboard cinder:cinder ntp:ntp ceph-radosgw:ceph-radosgw neutron-api:neutron-api cinder-ceph:cinder-ceph glance:glance mysql:mysql neutron-openvswitch:neutron-openvswitch nova-cloud-controller:nova-cloud-controller ceph-mon:ceph-mon ceph-osd:ceph-osd nova-compute:nova-compute rabbitmq-server:rabbitmq-server keystone:keystone neutron-gateway:neutron-gateway] | 11:54 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver.client status.go:716 error fetching public address: no public address [this line repeated 14 times] | 11:54 |
rick_h_ | voidspace: the body is redacted so not sure if the body is empty | 11:55 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [1EC] user-admin@local 75.777822ms {"request-id":2,"response":"'body redacted'"} Client[""].FullStatus | 11:55 |
frobware | 2016-09-21 11:42:19 INFO juju.apiserver request_notifier.go:80 [1EC] user-admin@local API connection terminated after 112.508087ms | 11:55 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12490,"type":"InstancePoller","version":3,"request":"InstanceId","params":"'params redacted'"} | 11:55 |
frobware | 2016-09-21 11:42:19 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 865.024µs {"request-id":12490,"response":"'body redacted'"} InstancePoller[""].InstanceId | 11:55 |
frobware | [... near-identical InstancePoller InstanceId request/response pairs, request-ids 12491-12494, elided ...] | 11:55 |
voidspace | frobware: how many mb did you paste... | 11:55 |
rick_h_ | wheeee | 11:55 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [D7] unit-keystone-0 {"request-id":481,"type":"LeadershipService","version":2,"request":"ClaimLeadership","params":"'params redacted'"} | 11:55 |
frobware | 2016-09-21 11:42:20 DEBUG juju.worker.lease manager.go:217 waking to check leases at 2016-09-21 11:43:00.159215809 +0000 UTC | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [D7] unit-keystone-0 19.749863ms {"request-id":481,"response":"'body redacted'"} LeadershipService[""].ClaimLeadership | 11:56 |
frobware | [... InstancePoller InstanceId traffic, request-ids 12495-12499, elided ...] | 11:56 |
babbageclunk | Oh dear | 11:56 |
marcoceppi | goodbye IRC bouncer | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12500,"type":"InstancePoller","version":3,"request":"InstanceStatus","params":"'params redacted'"} | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 1.395166ms {"request-id":12500,"response":"'body redacted'"} InstancePoller[""].InstanceStatus | 11:56 |
marcoceppi | it was nice knowing you | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12501,"type":"InstancePoller","version":3,"request":"ProviderAddresses","params":"'params redacted'"} | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 587.746µs {"request-id":12501,"response":"'body redacted'"} InstancePoller[""].ProviderAddresses | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12502,"type":"InstancePoller","version":3,"request":"Status","params":"'params redacted'"} | 11:56 |
frobware | 2016-09-21 11:42:20 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 749.181µs {"request-id":12502,"response":"'body redacted'"} InstancePoller[""].Status | 11:56 |
marcoceppi | BIG WHEELS KEEP ON TURNNINGGGG | 11:57 |
frobware | [... InstancePoller InstanceId traffic, request-ids 12503-12513, elided ...] | 11:57 |
frobware | 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:115 <- [6C] machine-0 {"request-id":12514,"type":"Singular","version":1,"request":"Claim","params":"'params redacted'"} | 11:58 |
frobware | 2016-09-21 11:42:22 DEBUG juju.worker.lease manager.go:217 waking to check leases at 2016-09-21 11:43:22.822052442 +0000 UTC | 11:58 |
frobware | 2016-09-21 11:42:22 DEBUG juju.apiserver request_notifier.go:140 -> [6C] machine-0 10.463619ms {"request-id":12514,"response":"'body redacted'"} Singular[""].Claim | 11:58 |
frobware | [... more InstancePoller InstanceId traffic, request-ids 12515-12519, elided ...] | 11:58 |
frobware | 2016-09-21 11:42:24 INFO juju.apiserver request_notifier.go:70 [1ED] API connection from 192.168.0.1:32796 | 11:58 |
frobware | 2016-09-21 11:42:24 DEBUG juju.apiserver utils.go:72 validate model uuid: 0af2d6d9-6c9c-4bc4-843c-00c6bccb3675 | 11:58 |
frobware | 2016-09-21 11:42:24 DEBUG juju.apiserver request_notifier.go:115 <- [1ED] {"request-id":1,"type":"Admin","version":3,"request":"Login","params":"'params redacted'"} | 11:58 |
frobware | 2016-09-21 11:42:24 DEBUG juju.apiserver admin.go:201 hostPorts: [[192.168.0.251:17070 127.0.0.1:17070 [::1]:17070]] | 11:59 |
marcoceppi | <3 | 11:59 |
marcoceppi | I suppose I could have done that, looking at the ops | 11:59 |
voidspace | :-) | 11:59 |
mgz | surprised flood prevention didn't hit | 11:59 |
mgz | frobware: sorry about that | 11:59 |
* rick_h_ has to run the boy to school, voidspace frobware please see if there's anything we can figure out that would cause a delay in juju asking for a machine and it getting sent back. I see the machines get asked for/found around line 2001 of the log http://paste.ubuntu.com/23211052/ (and it might be worth trying to curl the /plain url on that) | 11:59 | |
frobware | mgz: well, me too. My machine is crawling atm | 12:00 |
voidspace | marcoceppi: we'd need trace logging enabled to see the body of those MachinesWithTransientErrors calls | 12:00 |
voidspace | marcoceppi: and tracing generates a shit-ton of logging | 12:00 |
* frobware will try and not paste that into IRC | 12:01 | |
marcoceppi | voidspace: well, I already have a shit-ton, wouldn't mind making it a shit-tonne | 12:01 |
voidspace | frobware: :-) | 12:01 |
voidspace | marcoceppi: heh, it would at least tell us if that's the issue | 12:01 |
babbageclunk | You could use something like ngrep to watch the requests - they're not https. | 12:01 |
marcoceppi | voidspace: well, we just got the last machine requested | 12:02 |
marcoceppi | and we need to validate the deployment, but I imagine we'll be redeploying in about 20-30 mins | 12:02 |
marcoceppi | and I'll turn on trace at that point | 12:02 |
voidspace | marcoceppi: cool, thanks | 12:03 |
marcoceppi | voidspace: `<root>=TRACE;unit=DEBUG` seem about right? | 12:03 |
voidspace | marcoceppi: yep that should do it | 12:03 |
marcoceppi | okay, different problems | 12:20 |
marcoceppi | we're bringing up lxd machines in this maas, and they are not getting the cloud-init manifest/userdata and as a result, they are not getting agents or networking configured | 12:21 |
marcoceppi | voidspace frobware steel yourselves, trace logging is about to be enabled | 12:25 |
* frobware makes another pledge to not copy it to the channel | 12:25 | |
babbageclunk | https://media.giphy.com/media/OCu7zWojqFA1W/giphy.gif | 12:26 |
frobware | babbageclunk: hey, text is cheap! | 12:26 |
* marcoceppi preemptively kicks frobware ;) | 12:27 |
voidspace | :-) | 12:28 |
marcoceppi | do you have to enable trace logging before or after deployment | 12:29 |
voidspace | marcoceppi: it takes effect from whenever you set it | 12:29 |
marcoceppi | I'm still only seeing debug output | 12:29 |
voidspace | marcoceppi: so whenever really... | 12:29 |
voidspace | hmmm | 12:29 |
voidspace | marcoceppi: even in the all-machines log? | 12:30 |
marcoceppi | voidspace: what all-machines log ;) | 12:30 |
voidspace | has that gone now? | 12:31 |
voidspace | logsink.log maybe | 12:31 |
voidspace | I'm bootstrapping locally to check | 12:32 |
marcoceppi | yeah, I'm looking at that one, still just debug | 12:32 |
marcoceppi | oh well, guess you'll get it next time | 12:32 |
voidspace | mgz: you know much about setting juju logging? we want trace logging from the request_notifier | 12:33 |
voidspace | mgz: does this sound sensible, it looks good to me: `<root>=TRACE;unit=DEBUG` | 12:34 |
voidspace | marcoceppi: if I set that logging-config (on the controller model) I see TRACE logs | 12:38 |
marcoceppi | voidspace: I set it on the current model | 12:38 |
marcoceppi | let me do that | 12:38 |
voidspace | probably both would be wise... current model *should* be fine I think (but obviously isn't if we're not getting TRACE) | 12:39 |
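The eventual working incantation, sketched (assuming the 2.0-era `juju model-config` command; the key/value itself is the one quoted above):

```sh
# Bump logging to TRACE on the current model and, per voidspace's
# advice, on the controller model as well.
juju model-config logging-config='<root>=TRACE;unit=DEBUG'
juju model-config -m controller logging-config='<root>=TRACE;unit=DEBUG'
```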
marcoceppi | helllooooooooooo data | 12:39 |
mgz | voidspace: seems reasonable | 12:40 |
marcoceppi | voidspace frobware enjoy: http://paste.ubuntu.com/23211230/ | 12:41 |
marcoceppi | I can ship these logs to a server to deliver via plaintext if that's easier | 12:42 |
* frobware temporarily closes IRC, and fetches the trace. "I'll be back!" | 12:42 | |
voidspace | :-) | 12:44 |
voidspace | marcoceppi: doesn't look like MachinesWithTransientErrors is anything interesting | 12:46 |
voidspace | marcoceppi: is it just as slow this time round? | 12:46 |
aisrael | rick_h_, do bundles support deploying from a specific channel? | 12:51 |
rick_h_ | aisrael: looking | 12:56 |
marcoceppi | voidspace: yes | 12:57 |
marcoceppi | voidspace: we're rolling back to beta18, but aisrael will be updating his juju to 2.0 rc1 to try to replicate | 12:57 |
rick_h_ | aisrael: can you try juju deploy $bundle --channel=edge | 12:58 |
aisrael | rick_h_, I think that would work, but the case I'm looking at is wanting to deploy some components from stable, like mariadb, but other components from edge | 12:59 |
=== petevg_afk is now known as petevg | ||
rick_h_ | aisrael: there's nothing in the bundle definition right now. | 13:00 |
rick_h_ | aisrael: the idea was that you have a working solution, you want to test "does the upcoming" one work. Adding channels to the bundles leads to a hodge podge of bundles that are in different 'states' | 13:00 |
aisrael | rick_h_, ack. I'll try pointing it at the charm revision in edge and see if it'll pull in the right bits | 13:00 |
rick_h_ | aisrael: +1 revision always works | 13:01 |
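A sketch of the two options discussed (bundle and charm names are illustrative):

```sh
# Everything from one channel:
juju deploy cs:bundle/openstack-base --channel=edge

# Mixing states: pin the app under test to the explicit revision
# currently published to edge; an explicit revision always resolves
# to the same bits regardless of channel.
juju deploy cs:~aisrael/my-charm-7
```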
frobware | rick_h_: are we meeting today? | 13:07 |
rick_h_ | frobware: oh sorry yea | 13:07 |
frobware | rick_h_: we can leave it in lieu of helping marcoceppi | 13:07 |
rick_h_ | frobware: actually yes please. I just saw I was invited to a maas cross-team meeting at this time that I want to check out | 13:07 |
marcoceppi | frobware rick_h_ voidspace we've downgraded to beta18 to progress | 13:08 |
frobware | rick_h_: ack | 13:08 |
marcoceppi | too many sharp pointy edges and too little time to grind them down | 13:08 |
frobware | marcoceppi: ack - also trying your bundle to see if I can repro | 13:08 |
marcoceppi | frobware: gl | 13:08 |
frobware | marcoceppi: please let me know if beta18 is radically different in its behaviour | 13:08 |
marcoceppi | frobware: | 13:08 |
frobware | marcoceppi: do your MAAS nodes commission with trusty? | 13:10 |
marcoceppi | frobware: xenial | 13:11 |
marcoceppi | frobware voidspace not good news, we're seeing this with beta18 now | 13:16 |
marcoceppi | the only thing we changed from yesterdday to today was the juju bundle | 13:16 |
marcoceppi | (and added more hardware to maas) | 13:16 |
abentley | sinzui: let's chat when you hit a lull in the release. | 13:18 |
frobware | marcoceppi: well I think that helps us stop chasing ghosts between 18 and rc1 | 13:22 |
marcoceppi | frobware: we're waiting to see if the lxd issue persists, but it's taking forever, still, to get machines allocated in maas | 13:27 |
frobware | marcoceppi: the lxd issue being they end up using lxdbr0? that's true in 18, but should now be fixed in rc1 | 13:28 |
rick_h_ | marcoceppi: so this same network setup ran just fine yesterday? | 13:28 |
marcoceppi | frobware: the lxd issue we had is that cloud-init didn't run, we didn't get agents for the machines and networking wasn't configured in rc1 | 13:29 |
marcoceppi | we rolled back to address that, but we're still experiencing a long ass time getting juju to ask for machines | 13:29 |
marcoceppi | rick_h_: the deltas from yesterday were juju rc1, 5 more machines in maas, changes to the bundle | 13:29 |
frobware | marcoceppi: ok - any chance that issue is: https://bugs.launchpad.net/juju/+bug/1611981 | 13:30 |
mup | Bug #1611981: LXD guests not configured due to the lack of DHCP on the interface selected as eth0 <network> <sts> <juju:In Progress by macgreagoir> <https://launchpad.net/bugs/1611981> | 13:30 |
marcoceppi | rick_h_: we eliminated rc1; we're going to try the bundle in a minute as long as we can verify lxd machines are working | 13:30 |
marcoceppi | frobware: that looks like it | 13:30 |
marcoceppi | frobware: the interface was setup as eth2 in the lxd machine | 13:30 |
marcoceppi | frobware: after dhclient on eth2 in lxd machine, address was allocated | 13:30 |
frobware | marcoceppi: so the "fix" there (assuming this is your issue) is to reorder the NICs in MAAS | 13:30 |
frobware | macgreagoir: ^^ | 13:30 |
macgreagoir | frobware: Reading back... | 13:31 |
marcoceppi | frobware: reorder the nics for the bare metal? | 13:31 |
frobware | macgreagoir: ^^ yep? | 13:31 |
macgreagoir | marcoceppi: If you are able to rename the nics in maas so that the pxe iface sorts lower, you should work-around that. Is that a possibility? | 13:31 |
marcoceppi | macgreagoir frobware this is what we have now | 13:32 |
macgreagoir | marcoceppi: Aye, on the metal. | 13:32 |
marcoceppi | macgreagoir: http://i.imgur.com/seIUIiZ.png | 13:32 |
rick_h_ | voidspace: chat or are you helping ^ ? | 13:33 |
macgreagoir | marcoceppi: Are you able to see what network-interfaces looks like in <container>/var/lib/cloud/seed/nocloud-net/user-data, please? | 13:36 |
marcoceppi | macgreagoir: we're waiting for another container to come online | 13:36 |
marcoceppi | macgreagoir: we're hitting two major issues with maas today, this and one where Juju takes 10 minutes to request 1 machine from maas | 13:36 |
macgreagoir | marcoceppi: Is eno3 the consistently used pxe iface? | 13:37 |
marcoceppi | macgreagoir: yes | 13:37 |
marcoceppi | on all the metal | 13:37 |
macgreagoir | If you can try renaming it to... dynamic0 (or something else lower than eno1), it would be a good test of the dhcp/eth0 bug, at least. | 13:39 |
frobware | macgreagoir: wouldn't user-data have the correct list of interfaces? | 13:39 |
macgreagoir | frobware: It should, yes, I'd like to get a picture of the full config. | 13:39 |
marcoceppi | macgreagoir: so rename eno1, which is not configured, to dynamic0 | 13:43 |
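If attempted via the MAAS 2.0 CLI, that rename would look roughly like this (assuming `interface update` accepts a `name` parameter, as it does in MAAS 2.0; profile and ids are placeholders):

```sh
# Rename the PXE NIC (eno3 here) so it sorts before eno1 and ends up
# as eth0 inside containers - the workaround suggested for bug 1611981.
maas $PROFILE interface update $SYSTEM_ID $INTERFACE_ID name=dynamic0
```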
frobware | macgreagoir, marcoceppi: I think to ensure that we are seeing this bug we should try and repro: juju add-machine lxd:<X>; then let's enter the container ($lxc exec juju-<container-name> bash) and poke around | 13:43 |
marcoceppi | frobware macgreagoir we are not seeing the DHCP/misconfiguration on beta18 | 13:43 |
marcoceppi | we are getting containers with IP addresses and agents | 13:43 |
marcoceppi | frobware macgreagoir we're under the gun to get this out, and fighting maas's delayed instance allocation is killing our iterations | 13:44 |
macgreagoir | marcoceppi: What is the 10.95.172.x subnet? | 13:44 |
marcoceppi | frobware macgreagoir if we get the green light to continue for the demo with Mark next week, we'll try to repro tomorrow | 13:44 |
marcoceppi | macgreagoir: that's an external network not managed by MAAS | 13:44 |
frobware | marcoceppi: are you able to HO and screenshare? | 13:44 |
marcoceppi | frobware: we can hangout and screen share, yes, might be easier to explain | 13:45 |
frobware | marcoceppi: let's do that; we're going too slowly - sending link... | 13:45 |
frobware | macgreagoir: https://hangouts.google.com/hangouts/_/canonical.com/maasissues?authuser=0 | 13:46 |
mup | Bug #1626097 opened: juju deploy lxd provider inconsistent ipv4 or ipv6 names <cpe-sa> <juju-core:New> <https://launchpad.net/bugs/1626097> | 13:47 |
voidspace | rick_h_: sorry, missed your msg | 13:49 |
voidspace | rick_h_: was taking a break | 13:49 |
rick_h_ | voidspace: k | 13:50 |
voidspace | rick_h_: oh bugger, forgot we were supposed to chat today! | 13:50 |
voidspace | rick_h_: you free now? | 13:50 |
rick_h_ | voidspace: in the room now | 13:52 |
rick_h_ | voidspace: though we've got standup in 8 | 13:52 |
rick_h_ | macgreagoir: standup if you're free | 14:01 |
katco | \o/ | 14:01 |
macgreagoir | rick_h_: On HO with marcoceppi, sorry | 14:01 |
rick_h_ | macgreagoir: all good, that's the best priority | 14:02 |
babbageclunk | Anyone know how to talk to the introspection worker? | 14:03 |
babbageclunk | It looks like it exposes a web UI with profile info like stack traces, but it's listening on an abstract domain socket and I can't work out how to get a web client to talk to it. | 14:03 |
alexisb | babbageclunk, heh sorry | 14:25 |
alexisb | I dropped too fast | 14:25 |
alexisb | is there something else? | 14:25 |
babbageclunk | alexisb: No worries! | 14:26 |
babbageclunk | alexisb: Was just going to say, I'm chatting with thumper tonight so I'll pick his brains about the introspection worker | 14:26 |
babbageclunk | (If I haven't already sussed it out by then.) | 14:26 |
alexisb | cool | 14:27 |
mup | Bug #1626097 changed: juju deploy lxd provider inconsistent ipv4 or ipv6 names <cpe-sa> <usability> <juju:Triaged> <https://launchpad.net/bugs/1626097> | 14:32 |
babbageclunk | Worked it out - you can't do it with ncat but you can with socat. | 14:37 |
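For the record, the working approach looks something like this (the abstract socket name is an assumption based on jujud's naming conventions; ncat can't dial Linux abstract sockets, socat can via ABSTRACT-CONNECT):

```sh
# Speak plain HTTP over the introspection worker's abstract unix socket
# to pull the standard Go pprof goroutine dump.
printf 'GET /debug/pprof/goroutine?debug=1 HTTP/1.0\r\n\r\n' | \
  socat - ABSTRACT-CONNECT:jujud-machine-0
```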
babbageclunk | yay, now I have 18k lines of stack traces. | 14:47 |
frobware | rick_h_: did you want to sync? | 16:12 |
rick_h_ | frobware: on a manager call atm, easy to email so I don't hold up your EOD? | 16:32 |
=== frankban is now known as frankban|afk | ||
CorvetteZR1 | hello. i'm trying to do juju bootstrap --upload-tools using xenial image but get connection refused on port 22 | 17:01 |
CorvetteZR1 | google turned up some old posts with people having similar issues, but i haven't found any solution | 17:01 |
CorvetteZR1 | i can see the container is running, but bootstrap can't auth using ssh key-auth. any suggestions? | 17:01 |
natefinch | CorvetteZR1: what provider? AWS, Google, Openstack, Local? | 17:02 |
rick_h_ | CorvetteZR1: and what version of juju? | 17:03 |
CorvetteZR1 | local. got Maas going and want to play with openstack | 17:03 |
CorvetteZR1 | version 2.0-beta15 i think | 17:03 |
CorvetteZR1 | whatever is latest in 16.04... | 17:04 |
CorvetteZR1 | as far as maas goes, it's up and running and i got a few servers deployed | 17:04 |
CorvetteZR1 | with juju i'm kind of lost. should i be bootstrapping it on the maas server or on one of the deployed nodes? i get the same error on both...both are same version of ubuntu and juju | 17:05 |
CorvetteZR1 | natefinch, local | 17:17 |
CorvetteZR1 | rick_h_, 2.0-beta15 | 17:18 |
rick_h_ | CorvetteZR1: can you get the RC1 from the PPA please? There was some networking issues with containers binding to the correct interface/address pre-RC1 | 17:19 |
CorvetteZR1 | ok | 17:19 |
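The upgrade path, sketched (assuming the juju 2.0 pre-releases were published to ppa:juju/devel, as they were at the time):

```sh
sudo add-apt-repository -y ppa:juju/devel
sudo apt update
sudo apt install juju
juju version   # expect something like 2.0-rc1-xenial-amd64
```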
CorvetteZR1 | ok, looks like it logged in now on 2.0-rc1 | 17:26 |
CorvetteZR1 | it's doing apt inside the container | 17:26 |
CorvetteZR1 | thanks rick_h_ ! | 17:27 |
natefinch | yay! | 17:27 |
rick_h_ | CorvetteZR1: <3 ty | 17:28 |
redir-afk | \o/ | 17:51 |
redir-afk | what I am not afk | 17:52 |
=== redir-afk is now known as redir | ||
natefinch | lol I can't tell you how often I do that :) | 17:55 |
hml | has anyone bootstrapped a remote openstack cloud with juju 2.0 recently? i’m having some challenges - it appears that the charm deployed didn’t get assigned a floating ip, causing (i’m assuming) the install of the charm instance to get stuck | 17:57 |
redir | natefinch: right. I usually just don't | 17:57 |
redir | if I don't anwer I'm afk or focused | 17:58 |
natefinch | *nod* | 17:58 |
* rick_h_ goes to grab lunchables | 18:17 | |
redir | anyone else use chromium and having issues with it displaying jpegs? | 18:47 |
CorvetteZR1 | stuck on fetching juju gui 2.1.10 | 19:15 |
CorvetteZR1 | debug doesn't have anything interesting | 19:18 |
CorvetteZR1 | although, could it have something to do with eno1 and eno2 having no address? | 19:18 |
natefinch | not sure... maybe it's a networknig issue? Does your environment have internet access? | 19:27 |
natefinch | CorvetteZR1: you can also bootstrap with --no-gui to skip that step if you're not going to use it | 19:28 |
CorvetteZR1 | it does have internet access. it's possibly a network issue. i rebooted the maas node, now it just fails saying it can't find a node in the zone | 19:29 |
CorvetteZR1 | meh...i'll poke around :) | 19:29 |
CorvetteZR1 | k, different issue now. cannot acquire a node in the zone | 19:40 |
CorvetteZR1 | i have 2 deployed nodes in my zone...do they need to be ready or allocated instead of deployed? | 19:40 |
natefinch | definitely not deployed. Deployed is effectively "in use" so juju won't mess with them. I believe ready is what they should be in. | 19:42 |
CorvetteZR1 | ah...i think it's doing something now | 19:42 |
CorvetteZR1 | i released it | 19:42 |
natefinch | cool | 19:42 |
CorvetteZR1 | now juju is powering it up | 19:42 |
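The state machine here, for reference: juju can only acquire nodes MAAS shows as Ready, so a Deployed node must be released first. A MAAS 2.0 CLI sketch (profile and system id are placeholders):

```sh
# Release a Deployed node back to Ready so juju can acquire it.
maas $PROFILE machine release $SYSTEM_ID
maas $PROFILE machines read | grep status_name   # look for "Ready"
```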
thumper | so close to a bless | 20:28 |
perrito666 | sounds like a cheap pop song | 20:32 |
rick_h_ | heh, almost | 20:32 |
natefinch | rick_h_: can I get your input on an error message? This is what we show if the user has no current controller specified: | 20:41 |
natefinch | $ juju models | 20:41 |
natefinch | ERROR No current controller. | 20:41 |
natefinch | Please use "juju controllers" to view all controllers available to you. You can | 20:41 |
natefinch | set the current controller by running "juju switch" or "juju login". | 20:41 |
rick_h_ | natefinch: I feel like that's in the wrong order and a lot of info | 20:41 |
natefinch | this is what it used to say: | 20:42 |
natefinch | `not logged in | 20:42 |
natefinch | Please use "juju controllers" to view all controllers available to you. | 20:42 |
natefinch | You can login into an existing controller using "juju login -c <controller>". | 20:42 |
rick_h_ | natefinch: if you don't have a controller specified the first thing is switch, then if you don't know what controller to go to you'd use juju controllers | 20:42 |
natefinch | what it used to say is actually not a good error message because it has nothing to do with being logged in or not. Although I think I should check that, too. Hmm. | 20:43 |
natefinch | rick_h_: the big error messages were from a mark bug: https://bugs.launchpad.net/juju/+bug/1589061 | 20:43 |
mup | Bug #1589061: Juju status with no controllers offers up juju switch <juju:Fix Released by anastasia-macmood> <https://launchpad.net/bugs/1589061> | 20:43 |
redir | anyone name a charm with optional-storage? | 20:43 |
rick_h_ | natefinch: hmm, ok | 20:45 |
natefinch | rick_h_: ¯\_(ツ)_/¯ | 20:47 |
rick_h_ | natefinch: yea, sorry, multi-tasking | 20:47 |
natefinch | rick_h_: oh, no big deal. Just... error messages are hard] | 20:49 |
rick_h_ | natefinch: yea, agree | 20:49 |
natefinch | "I don't know how you got into this situation, but here's a few ways to get out that may or may not be what you really should do" | 20:50 |
natefinch | dinner time, back later | 20:53 |
=== natefinch is now known as natefinch-afk | ||
rick_h_ | natefinch-afk: https://pastebin.canonical.com/166172/ | 20:55 |
perrito666 | mm, I was expecting add-cloud to be interactive | 21:23 |
perrito666 | anybody know the magic incantation to add-cloud for a maas cloud? | 21:30 |
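For reference, the 2.0 incantation is along these lines - add-cloud only became interactive in later releases, so it takes a YAML file here (endpoint and names are illustrative):

```sh
# Describe the MAAS cloud in YAML, register it, then attach the API key.
cat > maas-cloud.yaml <<'EOF'
clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.0.2:5240/MAAS
EOF
juju add-cloud my-maas maas-cloud.yaml
juju add-credential my-maas   # prompts for the MAAS API key
```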
alexisb | thumper, have you opened your wine? | 22:32 |
thumper | no | 22:32 |
alexisb | I love that our first bless begins with a rev ID of "bad..." | 22:33 |
thumper | :) | 22:38 |
alexisb | axw, great minds ;) ^^^ | 22:40 |
axw | alexisb: :) | 22:40 |
mup | Bug #1626304 opened: Unit disk fills with inprogress-n files <juju-core:New> <https://launchpad.net/bugs/1626304> | 22:54 |
axw | alexisb: just checking, you said we're not worrying about "graduated reviewer" business now that we have the checklist? | 22:56 |
alexisb | axw, no | 22:56 |
alexisb | axw, but we need to update the process given we don't have assigned mentors (as we did before) | 22:57 |
axw | alexisb: no you didn't say that, or no we're not worrying about it? | 22:57 |
alexisb | no I didn't mean to say that | 22:58 |
axw | okey dokey | 22:59 |
axw | anastasiamac: do you have a moment to stamp https://github.com/juju/juju/pull/6294 ? | 23:03 |
anastasiamac | axw: sure thing | 23:04 |
axw | anastasiamac: thanks | 23:08 |
menn0 | redir: thanks for the review. after proposing I decided to do more - hence the lack of QA steps. | 23:11 |
redir | menn0: np | 23:11 |
redir | I figured it was that, or that changing to GH reviews altered the workflow | 23:12 |
alexisb | axw, is this still a bug: https://bugs.launchpad.net/juju/+bug/1623761 | 23:12 |
mup | Bug #1623761: drop userpass auth-type from azure <azure-provider> <juju:Triaged> <https://launchpad.net/bugs/1623761> | 23:12 |
menn0 | redir: nah, I shouldn't have created the PR yet | 23:12 |
axw | alexisb: yes, still need to drop it | 23:13 |
redir | in that case Not LGTM:) | 23:13 |
axw | alexisb: before 2.0 | 23:13 |
thumper | hmm... | 23:16 |
thumper | ugh | 23:22 |
thumper | can't hear anyone properly | 23:22 |
thumper | all robot | 23:22 |
mwhudson | whee funtimes mongodb 3.2.10-rc1 fails to build on arm64 & s390x | 23:50 |
perrito666 | can anyone with maas experience check http://paste.ubuntu.com/23213786/ and give me an opinion? | 23:50 |
perrito666 | mwhudson: oh? mmap? or the new stuff? | 23:50 |
mwhudson | perrito666: no, nothing so deep i think | 23:51 |
mwhudson | perrito666: https://launchpadlibrarian.net/285886652/buildlog_ubuntu-yakkety-arm64.juju-mongodb3.2_3.2.10~rc1-0ubuntu1~ppa1_BUILDING.txt.gz https://launchpadlibrarian.net/285880249/buildlog_ubuntu-yakkety-s390x.juju-mongodb3.2_3.2.10~rc1-0ubuntu1~ppa1_BUILDING.txt.gz | 23:51 |
perrito666 | mwhudson: AttributeError: Values instance has no attribute 'use-s390x-crc32': > | 23:52 |
perrito666 | ? | 23:52 |
mwhudson | perrito666: yeah, that's going to be something in my s390x patches i guess | 23:53 |
perrito666 | this one is a bit surprising __wt_checksum_init | 23:53 |
mwhudson | yeah | 23:53 |
mwhudson | going to see if that happens with the upstream source and report a bug if it does | 23:54 |
perrito666 | I would not expect wt to try to link to nonexistent things :( | 23:54 |
mwhudson | bet it's some per-arch thing | 23:54 |
mwhudson | oh hey someone's done it already i think | 23:56 |