[00:25] wallyworld: looks like the latest update didn't help the s390x build, still failing
[00:27] thumper: hmmmm, snaps built on lp
[00:27] ok
[00:27] https://launchpadlibrarian.net/469345132/buildlog_snap_ubuntu_xenial_s390x_juju-edge_BUILDING.txt.gz
[00:27] vendor/golang.org/x/crypto/chacha20/chacha_s390x.go:11:15: undefined: cpu.S390X
[00:27] I wonder if this is an additional flag that is set by a later version of the go compiler
[00:28] https://code.launchpad.net/~juju-qa-bot/+snap/2.7-edge
[00:28] https://code.launchpad.net/~juju-qa-bot/+snap/2.7-edge/+build/870841
[00:28] email six minutes ago
[00:28] build log shows correct/latest sha
[00:28] wallyworld: do you have a link to the bug for pod DNS from last week?
[00:28] hang on
[00:29] perhaps I'm looking at the latest develop build
[00:29] which won't have the fix yet
[00:29] it is the juju-edge snap package
[00:29] not 2.7 edge
[00:29] wallyworld: I think we may be good then
[00:29] thanks for pointing this out
[00:29] all good, thanks for checking
[00:30] good to be sure
[00:30] wallyworld: do you agree that the failure above is because we don't yet have the 2.7 update in develop?
[00:30] thumper: let me finish standup
[00:31] * thumper nods
[01:02] babbageclunk: you happy to +1 my 2.7 merge pr?
[01:05] wallyworld: oh, yup - just scrolling through it now
[01:07] ty
[01:09] wallyworld: approved
[01:11] ty
[02:51] babbageclunk: hmm...
[02:51] machine-1: 15:49:17 WARNING juju.worker.globalclockupdater.raft timed out updating clock, retrying in 1s
[02:51] given that my machine isn't doing much...
[02:51] not sure why we'd get this
[02:52] thumper: uhoh
[02:52] let's look!
[02:52] are you seeing it a lot?
[02:52] or is it just the one?
[02:52] just once
[02:53] was there a network blip that meant raft leadership changed?
[02:53] it's not surprising in that case.
[02:53] hmm...
[02:53] (or maybe just a machine being slow to respond to a heartbeat because compiling/running tests)
[02:54] it was deploying ubuntu-lite
[02:54] * thumper just thought of a really useful command addition
[02:54] I mean, that doesn't sound like it would make the machine work too hard - it says lite right in the name
[02:54] if juju exec specifies --out, and there are multiple targets, we should write a file per target
[02:55] +1
[02:55] so we could say "juju exec -m controller --all --out engine-report juju_engine_report"
[02:55] easy tee
[02:55] and end up with juju_engine_report.0 .1 and .2
[02:55] is exec the new run?
[02:55] yeah
[02:55] yes
[02:56] or...
[02:56] that behaviour is similar to how wget works with --no-clobber
[02:56] maybe treat the filename as a template
[02:56] --out engine-report-{}.txt
[02:56] and replace the {} with the machine id
[02:56] jinx
[02:56] that'd be sweet
[02:57] perhaps add engine-report-0.txt.stderr if there is any stderr
[02:57] or something
[02:58] so... I wonder if I broke something in my status code...
[02:58] my machine is saying "pending"
[02:58] but it has a connection
[02:58] poo
[02:59] hmm...
[02:59] controller shows agents as "started"
[02:59] but my default model is just showing "pending"
[03:26] wallyworld: https://github.com/juju/juju/pull/11315 got this PR for CRD lifecycle, could u take a look when you get some time? ty
[03:26] ok
[03:40] thumper: did you work it out?
[03:40] no
[03:40] digging
[03:40] babbageclunk: meet?
[03:42] fark!!!
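A minimal sketch of the per-target `--out` idea floated above (around 02:54-02:57). The `{}` template and one-file-per-target output are a proposed behaviour for `juju exec`, not an existing flag, and the file names are only illustrative:

```
# Proposed (not implemented) behaviour: one output file per target,
# with {} in the --out template replaced by each machine id.
juju exec -m controller --all --out 'engine-report-{}.txt' juju_engine_report

# Expected result under the proposal:
#   engine-report-0.txt  engine-report-1.txt  engine-report-2.txt
# plus e.g. engine-report-0.txt.stderr if a target wrote to stderr.
```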
[03:42] babbageclunk: good news bad news kinda thing
[03:42] the machine agent really isn't running properly
[03:42] hence "pending"
[03:43] because the upgrade-steps-runner had issues
[03:43] unknown object type "Machiner" (not implemented)
[03:45] I don't even know how we ended up in this state...
[03:45] and how we haven't caught this issue earlier
[03:45] * thumper feels sad
[04:49] babbageclunk: ok found out my problem
[04:58] oh nice
[04:58] * thumper is sending email
[04:58] not my fault
[08:03] manadart, got a sec?
[08:05] stickupkid: In Daily.
[08:46] manadart, thoughts about where to put the new series stuff? core/series seems like it has potential
[08:47] stickupkid: Yes.
[08:58] hi all,
[08:58] little question
[08:59] when upgrading the ubuntu OS (apt) on hosts running a percona cluster, I upgraded the non-leaders first (pause, upgrade, reboot, resume). Do I need to move the leader to another node before performing the upgrade on the current leader node?
[09:00] if yes, how do I move leader ownership to another node?
[09:05] manadart: it's goimports that complains about already merged stuff on develop; I will push a commit to fix the linter warnings
[09:07] flxfoo: The upgrade process handles freezing leadership so that it doesn't thrash while performing series upgrades.
[09:07] achilleasa: OK.
[09:11] manadart: thanks, can you give more details? I am not upgrading percona or going one OS release up...
[09:18] flxfoo: Ah, I see. You are not using `juju upgrade-series ...`. If you take the agent offline for longer than the lease period (up to a minute IIRC) a new leader will be elected.
[09:21] manadart: ah ok, I need to wait longer then, for it to elect a new leader... no way to force that then?
[09:23] flxfoo: No, it's in the hands of Raft consensus. You should be able to see it change in `juju status` if the agent is offline.
[09:25] achilleasa: Looks like you need to fix some import groupings there.
[09:25] manadart: by agent, you mean the `mysql-hacluster` unit?
[09:26] manadart: if yes, does offline mean `stopped` rather than `paused`?
[09:27] manadart: guess goimports is not to be trusted...
[09:28] manadart: if I `pause` the hacluster, it just stays like that... no election seems to appear in debug-log
[09:32] flxfoo: What is the charm you're using, percona-cluster?
[09:33] manadart: I think it all looks correct now; can you take another look in case I missed an import?
[09:34] achilleasa: I think it looks OK now.
[09:41] manadart: yeah
[09:41] manadart: +hacluster
[09:56] flxfoo: If you definitely want the leader to relinquish while you upgrade it, try stopping the primary mysql/percona unit agent rather than using the pause command. I'm not terribly familiar with that charm, but that should do it.
[09:59] manadart: ok, if I understood properly, I could actually upgrade the non-leader ones first (with pause etc... and resume), then shut down the leader node (stop mysql) in order for an election to take place?
[09:59] flxfoo: Yes, I think so.
[10:06] manadart: thanks for your time :)
[10:06] flxfoo: Sure thing.
[12:21] stickupkid: where did you find ineffassign and unconvert for static analysis? i saw a few private repos perhaps, but that's it
[12:24] hml, ineffassign - github.com/gordonklaus/ineffassign
[12:24] hml, unconvert - https://github.com/mdempsky/unconvert
[12:25] ha, nice that i kept my urls the same - hmm
[12:25] stickupkid: okay, those are the ones i saw, just wasn't sure.
[12:34] hml: can you try rebasing 11318 on top of current develop? That should get rid of the unrelated lint errors
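For reference, a rough sketch of how the linters mentioned above (goimports, ineffassign, unconvert) can be run locally before pushing; the exact invocations and targets here are assumptions, not necessarily what the juju CI runs:

```
# Install the static-analysis tools referenced above
go get -u golang.org/x/tools/cmd/goimports
go get -u github.com/gordonklaus/ineffassign
go get -u github.com/mdempsky/unconvert

# Rewrite import groupings in place
goimports -w .

# Flag ineffectual assignments and unnecessary type conversions
# (argument style varies by tool version: a directory for older
# ineffassign releases, package patterns like ./... for newer ones)
ineffassign .
unconvert ./...
```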
[12:35] achilleasa: sure
[12:52] achilleasa: no, there are a few more changes to be made. working on it now
[12:52] hml: left a small comment
[12:52] doing the QA steps now
[13:56] manadart, https://github.com/juju/juju/pull/11332
[14:03] achilleasa: pushed up changes to 11318. need to remember to squash them before merging. :-)
[14:04] hml: btw, when I tried the QA steps it failed to download (or was it upload?) a resource
[14:04] stickupkid: are you free to join back in daily?
[14:05] achilleasa: yeah… i saw that too. i think it's because we don't know how to run the charm correctly. that didn't seem to inhibit a few metrics from being collected.
[15:07] manadart, rick_h_ ping
[15:08] stickupkid: In Daily.
[15:08] stickupkid: otp with interview
=== jsing` is now known as jsing
[16:26] hml: 11318 is approved
[16:26] achilleasa: ty
[16:27] achilleasa: do you have a few minutes?
[16:28] sure
[16:28] achilleasa: daily?
[16:28] omw
[16:31] manadart, hml, CR please https://github.com/juju/juju/pull/11332
[18:49] stickupkid: looking at 11332
[19:08] hi,
[19:08] just trying to install `canonical's kubernetes` with juju. Inside the worker nodes I get:
[19:08] `fs.go:540] stat failed on /dev/loop24 with error: no such file or directory`
[19:08] And on the host I get this:
[19:08] ```
[19:08] $ losetup
[19:08] /dev/loop24 0 0 1 1 /var/lib/snapd/snaps/lxd_13741.snap (deleted) 0 512
[19:08] ```
[19:08] What can I do to dig deeper into what is happening here?
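A few generic commands that might help narrow down the stale `/dev/loop24` entry above. This is only a diagnostic sketch, assuming the loop device is left over from a removed lxd snap revision; it is not a confirmed fix for the kubelet `stat failed` messages:

```
# Show every loop device and its backing file
losetup -a

# Check whether the old lxd snap revision is still installed or mounted
snap list --all lxd
mount | grep loop24

# If the backing .snap really is gone and nothing still mounts the device,
# detaching the orphaned loop device should stop the stat noise
sudo losetup -d /dev/loop24
```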