[00:25] <thumper> wallyworld: looks like the latest update didn't help the s390x build, still failing
[00:27] <wallyworld> thumper: hmmmm, snaps built on lp
[00:27] <wallyworld> ok
[00:27] <thumper> https://launchpadlibrarian.net/469345132/buildlog_snap_ubuntu_xenial_s390x_juju-edge_BUILDING.txt.gz
[00:27] <thumper> vendor/golang.org/x/crypto/chacha20/chacha_s390x.go:11:15: undefined: cpu.S390X
[00:27] <thumper> I wonder if this is an additional flag that is set by a later version of the go compiler
[00:28] <wallyworld> https://code.launchpad.net/~juju-qa-bot/+snap/2.7-edge
[00:28] <wallyworld> https://code.launchpad.net/~juju-qa-bot/+snap/2.7-edge/+build/870841
[00:28] <thumper> email six minutes ago
[00:28] <wallyworld> build log shows correct/latest sha
[00:28] <tlm[m]> wallyworld: do you have a link to the bug for pod DNS from last week ?
[00:28] <thumper> hang on
[00:29] <thumper> perhaps I'm looking at the latest develop build
[00:29] <thumper> which won't have the fix yet
[00:29] <thumper> it is juju-edge snap package
[00:29] <thumper> not 2.7 edge
[00:29] <thumper> wallyworld: I think we may be good then
[00:29] <thumper> thanks for pointing this out
[00:29] <wallyworld> all good, thanks for checking
[00:30] <wallyworld> good to be sure
[00:30] <thumper> wallyworld: do you agree that the failure above is because we don't yet have the 2.7 update in develop?
[00:30] <wallyworld> thumper: let me finish standup
[00:31]  * thumper nods
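For reference, the failing line in chacha_s390x.go gates the assembly fast path on a CPU-feature flag exported by golang.org/x/sys/cpu, so a vendored x/sys that predates that flag breaks the build with exactly this undefined-symbol error. A minimal sketch of that kind of check, assuming the current x/sys/cpu API:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// cpu.S390X is populated at startup; HasVX reports whether the
	// z/Architecture vector facility is available. x/crypto's chacha20
	// consults a flag like this to decide whether to use its assembly path.
	fmt.Println("s390x vector facility:", cpu.S390X.HasVX)
}
```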
[01:02] <wallyworld> babbageclunk: you happy to +1 my 2.7 merge pr?
[01:05] <babbageclunk> wallyworld: oh, yup - just scrolling through it now
[01:07] <wallyworld> ty
[01:09] <babbageclunk> wallyworld: approved
[01:11] <wallyworld> ty
[02:51] <thumper> babbageclunk: hmm...
[02:51] <thumper> machine-1: 15:49:17 WARNING juju.worker.globalclockupdater.raft timed out updating clock, retrying in 1s
[02:51] <thumper> given that my machine isn't doing much...
[02:51] <thumper> not sure why we'd get this
[02:52] <babbageclunk> thumper: uhoh
[02:52] <babbageclunk> let's look!
[02:52] <babbageclunk> are you seeing it a lot?
[02:52] <babbageclunk> or is it just the one?
[02:52] <thumper> just once
[02:53] <babbageclunk> was there a network blip that meant raft leadership changed?
[02:53] <babbageclunk> it's not surprising in that case.
[02:53] <thumper> hmm...
[02:53] <babbageclunk> (or maybe just a machine being slow to respond to a heartbeat because compiling/running tests)
[02:54] <thumper> it was deploying ubuntu-lite
[02:54]  * thumper just thought of a really useful command addition
[02:54] <babbageclunk> I mean, that doesn't sound like it would make the machine work too hard - it says lite right in the name
[02:54] <thumper> if juju exec specifies --out, and there are multiple targets, we should write a file per target
[02:55] <timClicks> +1
[02:55] <thumper> so we could say "juju exec -m controller --all --out engine-report juju_engine_report"
[02:55] <timClicks> easy tee
[02:55] <thumper> and end up with juju_engine_report.0 .1 and .2
[02:55] <babbageclunk> is exec the new run?
[02:55] <thumper> yeah
[02:55] <timClicks> yes
[02:56] <thumper> or...
[02:56] <timClicks> that behaviour is similar to how wget works with --no-clobber
[02:56] <babbageclunk> maybe treat the filename as a template
[02:56] <thumper> --out engine-report-{}.txt
[02:56] <thumper> and replace the {} with the machine id
[02:56] <babbageclunk> jinx
[02:56] <thumper> that'd be sweet
[02:57] <thumper> perhaps add engine-report-0.txt.stderr if there is any stderr
[02:57] <thumper> or something
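A rough sketch of the --out idea floated above. Everything here is hypothetical: the helper name, the {} expansion, and the .stderr sibling are just the proposal as discussed, not anything juju implements.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// writeExecOutput writes one target's output to a per-target file,
// expanding "{}" in the template to the machine ID, or falling back
// to the ".0/.1/.2" suffix scheme when no placeholder is given.
func writeExecOutput(template, machineID string, stdout, stderr []byte) error {
	name := template
	if strings.Contains(name, "{}") {
		name = strings.ReplaceAll(name, "{}", machineID) // engine-report-{}.txt -> engine-report-0.txt
	} else {
		name += "." + machineID // engine-report -> engine-report.0
	}
	if err := os.WriteFile(name, stdout, 0o644); err != nil {
		return err
	}
	if len(stderr) > 0 {
		// Only create the .stderr sibling when there is anything in it.
		return os.WriteFile(name+".stderr", stderr, 0o644)
	}
	return nil
}

func main() {
	for _, id := range []string{"0", "1", "2"} {
		if err := writeExecOutput("engine-report-{}.txt", id, []byte("report\n"), nil); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```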
[02:58] <thumper> so... I wonder if I broke something in my status code...
[02:58] <thumper> my machine is saying "pending"
[02:58] <thumper> but it has a connection
[02:58] <thumper> poo
[02:59] <thumper> hmm...
[02:59] <thumper> controller shows agents as "started"
[02:59] <thumper> but my default model just showing "pending"
[03:26] <kelvinliu> wallyworld: https://github.com/juju/juju/pull/11315 got this PR for CRD lifecycle, could u take a look when got some time? ty
[03:26] <wallyworld> ok
[03:40] <babbageclunk> thumper: did you work it out?
[03:40] <thumper> no
[03:40] <thumper> digging
[03:40] <thumper> babbageclunk: meet?
[03:42] <thumper> fark!!!
[03:42] <thumper> babbageclunk: good news bad news kinda thing
[03:42] <thumper> the machine agent really isn't running properly
[03:42] <thumper> hence "pending"
[03:43] <thumper> because the upgrade-steps-runner had issues
[03:43] <thumper> unknown object type "Machiner" (not implemented)
[03:45] <thumper> I don't even know how we ended up in this state...
[03:45] <thumper> and how we haven't caught this issue earlier
[03:45]  * thumper feels sad
[04:49] <thumper> babbageclunk: ok found out my problem
[04:58] <babbageclunk> oh nice
[04:58]  * thumper is sending email
[04:58] <thumper> not my fault
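For context, an "unknown object type ... (not implemented)" error from the API server generally means the named facade was never registered. A toy sketch of how that lookup fails — this is not the actual juju apiserver code, just the shape of the failure:

```go
package main

import "fmt"

// registry stands in for the server-side table of RPC facades.
// "Machiner" is deliberately absent, mirroring the broken state above.
var registry = map[string]func() interface{}{}

func lookup(name string) (func() interface{}, error) {
	factory, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown object type %q (not implemented)", name)
	}
	return factory, nil
}

func main() {
	if _, err := lookup("Machiner"); err != nil {
		fmt.Println(err) // unknown object type "Machiner" (not implemented)
	}
}
```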
[08:03] <stickupkid> manadart, got a sec?
[08:05] <manadart> stickupkid: In Daily.
[08:46] <stickupkid> manadart, thoughts about where to put the new series stuff, core/series seems like it has potential
[08:47] <manadart> stickupkid: Yes.
[08:58] <flxfoo> hi all,
[08:58] <flxfoo> little question
[08:59] <flxfoo> when upgrading the ubuntu os (apt) on machines hosting a percona cluster, I upgraded the non-leader nodes first (pause, upgrade, reboot, resume). Do I need to move the leader to another node before performing the upgrade on the current leader node?
[09:00] <flxfoo> if yes, how do I move leader ownership to another node?
[09:05] <achilleasa> manadart: it's goimports that complains about already merged stuff on develop; I will push a commit to fix the linter warnings
[09:07] <manadart> flxfoo: The upgrade process handles freezing leadership so that it doesn't thrash while performing series upgrades.
[09:07] <manadart> achilleasa: OK.
[09:11] <flxfoo> manadart: thanks, can you give more details? I am not upgrading percona or going one OS release up...
[09:18] <manadart> flxfoo: Ah, I see. You are not using `juju upgrade-series ...`. If you take the agent offline for longer than the lease period (up to a minute, IIRC), a new leader will be elected.
[09:21] <flxfoo> manadart: ah ok, so I needed to wait longer for it to elect a new leader... no way to force that then?
[09:23] <manadart> flxfoo: No, it's in the hands of Raft consensus. You should be able to see it change in `juju status` if the agent is offline.
[09:25] <manadart> achilleasa: Looks like you need to fix some import groupings there.
[09:25] <flxfoo> manadart: by agent, do you mean the `mysql-hacluster` unit?
[09:26] <flxfoo> manadart: if yes, does offline mean `stopped` rather than `paused`?
[09:27] <achilleasa> manadart: guess goimports is not to be trusted...
[09:28] <flxfoo> manadart: if I `pause` the hacluster, it just stays like that... no election seems to appear in the debug-log
[09:32] <manadart> flxfoo: What is the charm you're using, percona-cluster?
[09:33] <achilleasa> manadart: I think it all looks correct now; can you take another look in case I missed an import?
[09:34] <manadart> achilleasa: I think it looks OK now.
[09:41] <flxfoo> manadart: yeah
[09:41] <flxfoo> manadart: +hacluster
[09:56] <manadart> flxfoo: If you definitely want the leader to relinquish leadership while you upgrade it, try stopping the primary mysql/percona unit agent rather than using the pause command. I'm not terribly familiar with that charm, but that should do it.
[09:59] <flxfoo> manadart: ok, if I understood properly, I could actually upgrade the non-leader ones first (with pause etc... and resume), then shut down the leader node (stop mysql) in order for an election to take place?
[09:59] <manadart> flxfoo: Yes, I think so.
[10:06] <flxfoo> manadart: thanks for your time :)
[10:06] <manadart> flxfoo: Sure thing.
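A toy model of the behaviour manadart describes — not juju's lease code, and the one-minute period is only the rough figure from the chat: leadership is a time-bounded lease that only a running unit agent renews, which is why pausing the workload leaves the leader in place while stopping the agent lets the lease lapse and an election happen.

```go
package main

import (
	"fmt"
	"time"
)

// lease is a toy leadership lease: valid until expiry, renewed only
// while the unit agent is running.
type lease struct{ expiry time.Time }

func (l *lease) renew(d time.Duration) { l.expiry = time.Now().Add(d) }
func (l *lease) expired() bool         { return time.Now().After(l.expiry) }

func main() {
	const leasePeriod = time.Minute // "up to a minute IIRC", per the chat

	l := &lease{}

	// Workload paused but agent still up: the lease keeps being
	// renewed, so no new leader is elected.
	l.renew(leasePeriod)
	fmt.Println("paused workload, lease expired?", l.expired()) // false

	// Agent stopped: renewal ceases. Simulate the period lapsing.
	l.expiry = time.Now().Add(-time.Second)
	fmt.Println("stopped agent, lease expired?", l.expired()) // true -> election
}
```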
[12:21] <hml> stickupkid: where did you find ineffassign and unconvert for static analysis? i saw a few private repos perhaps, but that’s it
[12:24] <stickupkid> hml, ineffassign - github.com/gordonklaus/ineffassign
[12:24] <stickupkid> hml, unconvert - https://github.com/mdempsky/unconvert
[12:25] <stickupkid> ha, nice that i kept my urls the same - hmm
[12:25] <hml> stickupkid: okay, those are the ones i saw, just wasn’t sure.
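Tiny illustrations (mine, not from the chat) of what each of those linters catches:

```go
package main

import "fmt"

func main() {
	// ineffassign flags this: the first value assigned to n is
	// overwritten before it is ever used.
	n := 1
	n = 2

	// unconvert flags this: n is already an int, so the conversion
	// is redundant.
	m := int(n)

	fmt.Println(m)
}
```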
[12:34] <achilleasa> hml: can you try rebasing 11318 on top of current develop? That should get rid of the unrelated lint errors
[12:35] <hml> achilleasa:  sure
[12:52] <hml> achilleasa:  no, there are a few more changes to be made.  working on it now
[12:52] <achilleasa> hml: left a small comment
[12:52] <achilleasa> doing the QA steps now
[13:56] <stickupkid> manadart, https://github.com/juju/juju/pull/11332
[14:03] <hml> achilleasa:  pushed up changes to 11318.  need to remember to squash them before merging.  :-)
[14:04] <achilleasa> hml: btw, when I tried the QA steps it failed to download (or was it upload) a resource
[14:04] <rick_h_> stickupkid:  are you free to join back in daily?
[14:05] <hml> achilleasa:  yeah… i saw that too.  i think it’s because we don’t know how to run the charm correctly.  that didn’t seem to inhibit a few metrics from being collected.
[15:07] <stickupkid> manadart, rick_h_ ping
[15:08] <manadart> stickupkid: In Daily.
[15:08] <rick_h_> stickupkid:  otp with interview
[16:26] <achilleasa> hml: 11318 is approved
[16:26] <hml> achilleasa:  ty
[16:27] <hml> achilleasa:  do you have a few minutes?
[16:28] <achilleasa> sure
[16:28] <hml> achilleasa:  daily?
[16:28] <achilleasa> omw
[16:31] <stickupkid> manadart, hml, CR please https://github.com/juju/juju/pull/11332
[18:49] <hml> stickupkid: looking at 11332
[19:08] <sdhd-sascha> hi,
[19:08] <sdhd-sascha> just trying to install `canonical's kubernetes` with juju. Inside the worker nodes I get:
[19:08] <sdhd-sascha> `fs.go:540] stat failed on /dev/loop24 with error: no such file or directory`
[19:08] <sdhd-sascha> And on the host i get this:
[19:08] <sdhd-sascha> ```
[19:08] <sdhd-sascha> $ losetup
[19:08] <sdhd-sascha> /dev/loop24         0      0         1  1 /var/lib/snapd/snaps/lxd_13741.snap (deleted)           0     512
[19:08] <sdhd-sascha> ```
[19:08] <sdhd-sascha> What can I do to dig deeper into what is happening here?