[00:06] <wallyworld> vinodhini: pr looks good so far
[00:08] <vinodhini> thank u. wallyworld: i should just add it in all call points to verify its credentials
[00:08] <vinodhini> that's all the work.
[00:32] <veebers> sigh, I just wrote something down then went to select it and copy it :-|
[00:50] <babbageclunk> ha
[01:20] <anastasiamac> wallyworld: thumper: PTAL https://github.com/juju/juju/pull/9101 - a fix for that cmd/plugins issue with no controllers (bug linked in PR)
[01:22] <wallyworld> lgtm, thanks for fixing
[01:22] <wallyworld> can backport to 2.4 after release
[01:22] <anastasiamac> nws, i will :)
[01:22] <anastasiamac> thnx for such a quick review \o/ m speechless
[01:26] <wallyworld> wasn't a big PR :-)
[01:26] <wallyworld> and the logic made sense
[01:27] <anastasiamac> babbageclunk: thnx for review \o/
[01:28] <anastasiamac> babbageclunk: i like to name fail() within a func but this one was a variable that was package-visible.. i was naming it to avoid conflicts with other potential fail methods...
[01:29] <babbageclunk> anastasiamac: yeahhhh, but it's a new package right?
[01:29] <anastasiamac> babbageclunk: no, it has other stuff there...
[01:29] <anastasiamac> babbageclunk: i can put a fail in both funcs but it'll b just a copy.. i think i'll do it anyway.. might be neater
[01:30] <babbageclunk> anastasiamac: oh, ok then - I'd get rid of the function in that case, `return empty, errors.Trace(err)` is better than `return veryLongFunctionThatDoesntDoAnything(errors.Trace(err))` ;)
[01:31] <anastasiamac> babbageclunk: +1
[01:31] <babbageclunk> (exaggerating, obvs)
[01:33] <anastasiamac> :)
[01:34] <veebers> Is there a way to discover bundles on the charmstore? I'm looking for one that doesn't use trusty to test something
[02:27] <externalreality_> veebers, whats a good strategy for freeing up some space on grumpig?
[02:28] <externalreality_> veebers, du says there are a lot of old jobs living on grumpig at 1G apiece.
[02:28] <veebers> externalreality_: hey o/ I just cleared it out :-)
[02:29] <externalreality_> veebers, thank you veebers! :-D
[02:29] <veebers> externalreality_: this is a known issue that I'm working on, I have a PR in the works for fixing how we run the pr jobs
[02:29] <externalreality_> veebers, understood, thank you sir.
[02:29] <veebers> externalreality_: long-short: we pull the source locally and attach that dir to the lxd container, if it fails we keep the container for debugging, but the next job comes along, tries to clean up the build dir (can't as it's being used by lxd) and so moves it aside
[02:30] <veebers> my fixes make everything self-contained in the lxd container (as well as simplify the config file: one 20-line file encompassing all jobs instead of 20 many-line files)
[02:31] <externalreality_> veebers, when you say "moves it aside" where does it move it aside to?
[02:32] <veebers> in the workspace dir and adds _ws-cleanup-<timestamp>
[02:32] <veebers> (those are what I deleted just now)
[02:34] <externalreality_> veebers, gotcha - cool, Yes and those are the old jobs that du was claiming occupy 1G+ of disk space apiece. They must build up quick.
[02:34] <externalreality_> thank you veebers!
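(A rough sketch of the move-aside behaviour veebers describes above — the function and suffix format are taken from the chat; everything else is hypothetical, not the actual job script:)

```python
import os
import shutil
import time


def clean_workspace(ws_dir):
    # Try to delete the previous job's build dir outright; if deletion
    # fails (e.g. the dir is still attached to an lxd container kept for
    # debugging), move it aside with a timestamped suffix instead.
    try:
        shutil.rmtree(ws_dir)
    except OSError:
        os.rename(ws_dir, "%s_ws-cleanup-%d" % (ws_dir, int(time.time())))
```

The moved-aside `_ws-cleanup-<timestamp>` dirs are what pile up at 1G+ apiece until someone clears them out by hand.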
[03:30] <wallyworld> veebers: just lifting my head out of the fog of cmr stuff, what's the status of the maas tests etc?
[03:30] <veebers> wallyworld: ugh
[03:31] <wallyworld> oh, that good huh
[03:31] <veebers> wallyworld: Maas deploy test works (that happened earlier), I'm debugging the container network one, I've made a little progress
[03:32] <veebers> wallyworld: re: vsphere test I've made progress there, can bootstrap etc. The test itself is pretty, uh, pants though. I'm just looking at what it's actually trying to do. I think we can pare it back to a 'sensible' deploy for now
[03:32] <wallyworld> sadly i don't have much advice to offer
[03:32] <wallyworld> yeah, minimal smoketest is probably ok for now
[03:32] <veebers> wallyworld: s390x unit test, still unknown, I'm rebooting that machine at the mo (oh I hope it comes back up)
[03:32] <veebers> (oh yay it did)
[03:32] <wallyworld> win
[03:33]  * veebers checks his notes
[03:34] <veebers> got a couple things on the run, um, oh it's possible the charm-store failure is transient, re-run in progress now
[03:34] <veebers> the maas and vsphere stuff takes *ages*, not sure if that's expected
[03:34] <wallyworld> yeah, i think it is
[03:53] <veebers> wallyworld: the network health test is all over the place, I'm proposing a deploy test for vsphere for the meantime to unblock us, then we can untangle the network health mess
[03:53] <wallyworld> sgtm
[03:53] <wallyworld> imo we could still ship even with network test not working
[03:53] <wallyworld> the deployment works and maas 1.9 is only transitional
[04:01] <veebers> wallyworld: I'm also sceptical that the assess_container_networking test worked, it does a 'juju run reboot', which will *always* result in a CalledProcessError as the session is terminated by the reboot, but it's not handled
[04:01] <wallyworld> ugh
[04:02] <wallyworld> we need to look at who wrote that test and perhaps ask about the history of it
[04:02] <wallyworld> i wouldn't waste any more time on it
[04:03] <veebers> wallyworld: I have fixes for the issue, I think. I'll plonk them in and re-run to see. (leave it running in the background etc.)
[04:03] <wallyworld> ok
[04:03] <veebers> wallyworld: the unit test on s390x is still unknown :-| Looking at that now
[04:03] <wallyworld> ty
[04:06] <wallyworld> anastasiamac: https://github.com/juju/juju/pull/8350 breaks aspects of status updates, we will need to discuss how best to fix
[04:08] <wallyworld> i think we could achieve the intent using a check in setStatusOps
[04:35] <anastasiamac> wallyworld: k.. let's discuss it... when?
[04:35] <wallyworld> anytime suits me
[04:35] <anastasiamac> ho? standup?
[04:35] <wallyworld> sure
[04:39] <veebers> wallyworld: FYI charm-storage got a success, I used a different region, I suspect perhaps storage quota used up in the other one? (looks like teardown on error results in storage left lying around)
[04:41] <wallyworld> plausible
[04:41] <wallyworld> it did seem that it could be transient
[04:51] <veebers> babbageclunk: FYI, s390x unit test, a more helpful error message: no "bionic" images in some-region with arches [s390x]
[04:51] <babbageclunk> veebers: bloody red herrings!
[04:51] <veebers> it's looking but not finding, not yet sure why. I do know that https://github.com/juju/juju/blob/develop/environs/simplestreams/simplestreams.go#L392 is returning the herring
[04:52] <babbageclunk> Is it because the test isn't seeding the local server with bionic images?
[04:52] <babbageclunk> (might be a stupid question)
[04:53] <veebers> babbageclunk: a possibility, need to dig in. (now I have more info I can do so locally, instead of on a distant machine via vanilla vi :-)
[04:54] <babbageclunk> oh nice
[04:54] <veebers> hmm, but it works on other arches, maybe it's going bung for s390x
[04:56] <veebers> ok, vsphere deploy job passes (have the work, needs a pr), charm-storage passes (changed region, have the work, needs a pr), container networking passing (have the work, needs a pr), s390x unit test still on going
[04:56] <veebers> PRs will probably come after dinner at this rate
[04:57] <veebers> (oh and we nuke network-health vsphere as that test is pants, have a deploy instead)
[04:57] <anastasiamac> veebers: what was the problem/fix for container networking? (well done btw!!)
[04:58] <anastasiamac> we have tests that are pants?
[04:58] <veebers> anastasiamac: unf-ing the test a bit ;-) will have a pr up shortly. but 1. change a string comparison 2. change a reboot command to handle the immediate termination of the ssh session that it's issued on
[04:59] <anastasiamac> veebers: niice
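(The second fix veebers mentions maps onto a pattern like this — a hedged sketch, not the actual assess_container_networking code: a reboot tears down the ssh session it was issued over, so the resulting non-zero exit must be tolerated rather than treated as a failure:)

```python
import subprocess


def run_remote(argv, expect_disconnect=False):
    # Run a command (e.g. over ssh). When expect_disconnect is set, a
    # non-zero exit is treated as the expected session teardown that a
    # reboot causes, instead of propagating as a CalledProcessError.
    try:
        return subprocess.check_output(argv, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        if expect_disconnect:
            return e.output
        raise
```

Something like `run_remote(['ssh', host, 'sudo reboot'], expect_disconnect=True)` would then swallow the abrupt exit, while ordinary commands still fail loudly.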
[05:09] <veebers> FYI https://github.com/juju/juju/pull/9102
[05:10]  * anastasiamac looking
[05:53] <babbageclunk> JujuConnSuite is an abomination and I am making it worse
[05:57] <anastasiamac> :(
[05:57] <veebers> babbageclunk: yay \o/ burn baby burn
[05:58] <veebers> jam: re: the s390x test, I was going to look at provider/openstack/local_test.go localServerSuite.SetUpTest, perhaps the uploadfaketools or UseTestImageData isn't prepping things correctly
[06:04] <jam> veebers: thanks for the pointer. This line seems particularly interesting
[06:04] <jam> [LOG] 0:00.722 DEBUG juju.environs.instances matching constraints {region: some-region, series: bionic, arches: [s390x], constraints: mem=3584M, storage: []} against possible image metadata [{Id:1 Arch:amd64 VirtType:pv} {Id:id-1604arm64 Arch:arm64 VirtType:pv} {Id:id-1604ppc64el Arch:ppc64el VirtType:pv}]
[06:04] <jam> It seems to find "ppc64el" binaries, but not s390x ones.
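(The DEBUG line above boils down to an arch filter coming back empty; a toy reproduction of the mismatch — the real simplestreams matching in Go is far more involved:)

```python
# The candidate metadata from the log line above: no s390x entry.
images = [
    {"id": "1", "arch": "amd64"},
    {"id": "id-1604arm64", "arch": "arm64"},
    {"id": "id-1604ppc64el", "arch": "ppc64el"},
]


def match_by_arch(images, arches):
    # Keep only the images whose arch appears in the requested list.
    return [img for img in images if img["arch"] in arches]
```

`match_by_arch(images, ["s390x"])` comes back empty, which is exactly the "no \"bionic\" images ... with arches [s390x]" error: the test's seeded image data appears never to include an s390x entry.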
[06:50] <thumper> well fuck...
[06:50] <thumper> my brilliant thought on why my tests are failing was wrong
[06:50] <thumper> now I'm back to not knowing why they are failing
[06:51]  * thumper digs more in the 10m before the meeting
[07:18] <wallyworld> if anyone can do a small review that would be gr8 https://github.com/juju/juju/pull/9103
[07:24] <manadart> wallyworld: Looking.
[07:24] <wallyworld> yay ty
[07:33] <manadart> wallyworld: Approved with comments.
[07:34] <wallyworld> manadart: thanks, i will fix the error logging. i had it in my head that Wait() returns a non-nil error, which is bogus
[10:00] <babbageclunk> wallyworld: ping?
[10:04] <babbageclunk> wallyworld: just in case you're around later on: I've updated all the tests that were claiming leases through state to use the dummy provider lease manager. The only package that still has failing tests is cmd/jujud/agent, which looks like it starts a full agent with dependency engine and raft workers...
[10:05] <babbageclunk> wallyworld: it seems like it's falling prey to the startup/bouncing-apiserver issue I was planning on tackling next.
[10:06] <babbageclunk> wallyworld: I'm tempted to set the legacy-leases flag for that test and land it, then fix the startup issue and that test at the same time. What do you think?
[10:07] <babbageclunk> wallyworld: Actually, I'll do that now but not land it, I'll check with you in the morning.
[10:25] <wallyworld> babbageclunk: heyu
[10:26] <wallyworld> your plan sgtm
[10:30] <babbageclunk> wallyworld: sweet
[13:26] <BlackDex_> Hello :). I wonder if it is possible to have lxd 3.0.x installed on xenial during a juju deployment instead of having the default 2.0.x version
[13:35] <stickupkid> BlackDex_: yes that's possible
[13:37] <stickupkid> BlackDex_: you can follow this video, which does the same thing https://www.youtube.com/watch?v=RnBu7t2wD4U
[15:29] <stickupkid> rick_h_: how backwards compatible should we be with lxd 2.x?
[15:52] <stickupkid> manadart: we never read that file, if the bridge name is "lxdbr0", I need to work out if that's been changed or not
[15:58] <stickupkid> manadart: ignore me... think i got that wrong
[15:59] <rick_h_> stickupkid: sorry, so what do you mean? :)
[16:00] <stickupkid> HO?
[16:00] <rick_h_> stickupkid: k, omw
[16:03] <stickupkid> manadart: we're missing this function https://github.com/juju/juju/blob/2.2/container/lxd/initialisation_linux.go#L179
[16:12] <manadart> stickupkid: Missing?
[16:14] <stickupkid> manadart: it's not there inside the container, which causes it to error out
[18:39] <manadart> externalreality: I tacked on a fix for the tag conversion panic to https://github.com/juju/juju/pull/9105. I know you conditionally approved it, but if you could take a look on account of the commits added since...
[18:40] <manadart> externalreality: I really have to sign off now, but if it goes green, merge it to get the fix in. I am happy to take on defence of the other changes there.
[18:52] <externalreality> manadart, ack
[19:44] <rick_h_> hml: for those posts I've got a ci job category under development. I've moved those two over.
[19:46] <hml> rick_h_: cool.  i’ve figured out the discourse categories, but not the nested ones.  :-)
[20:35] <rick_h_> externalreality: QA note inbound on your PR. Let me know if I missed something
[20:49] <veebers> Morning all
[20:50] <rick_h_> morning veebers
[20:50] <veebers> How are things today rick_h_ ?
[20:51] <rick_h_> veebers: wheeeeee
[20:51] <rick_h_> veebers: once you get settled can you please check in with hml and make sure she carried through the WIP you had going last night?
[20:51] <veebers> can do
[20:51] <rick_h_> veebers: we tried to put together what the status was from the PRs and IRC backlog we had to go off of
[20:51] <rick_h_> but good to make sure we figured it out right
[21:38] <veebers> rick_h_, hml any idea where we landed with the s390x unit test, I believe jam took a bit of a look?
[21:38] <hml> veebers: i didn’t look at it today, so we’re in the same place
[21:38] <veebers> ack
[21:48] <thumper> babbageclunk: morning, got a few minutes?
[21:56] <babbageclunk> thumper: sure!
[21:56] <babbageclunk> in 1:1?
[21:56] <thumper> ack
[23:07] <babbageclunk> wallyworld: do you think I should squash up the commits before I land the megabranch?
[23:07] <wallyworld> i thnk so
[23:08] <babbageclunk> ok, but it's going to be one huge commit
[23:14]  * thumper sighs
[23:14] <thumper> there is a test in state that isn't timing out...
[23:15] <thumper> hmm...
[23:15] <thumper> WatchPodSpec
[23:15] <thumper> hmm maybe not
[23:15]  * thumper digs more
[23:24] <thumper> nope, it is state pool tests