[00:03] <thumper> ugh...
[00:03] <thumper> my update clock branch now conflicts...
[00:03] <thumper> when you touch that many files, I suppose it shouldn't be surprising
[01:03] <wallyworld> veebers: what's the best way for me to ssh into a machine to run windows unit tests?
[01:28] <wallyworld> thumper: here's a small charm.v6 tweak to hopefully fix the windows unit tests https://github.com/juju/charm/pull/256
[01:29]  * thumper looks
[01:30] <wallyworld> why the fark does windows insist on screwed up path separators
[01:36] <thumper> wallyworld: approved
[01:37] <wallyworld> ty
[01:37] <thumper> wallyworld: does it fix the bug?
[01:37] <wallyworld> thumper: nfi, i don't have a windows box. but reading the code i think so
[01:37] <thumper> wallyworld: why can't you log into the windows box?
[01:38] <wallyworld> is it documented somewhere?
[01:46] <thumper> yes...
[01:46] <thumper> somewhere
[01:50] <wallyworld> change is good regardless as it makes the generated zips standards-compliant
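[Editor's note: the "standards-compliant" point above is that the zip specification (APPNOTE.TXT, 4.4.17.1) requires forward slashes in entry names on every platform, which Windows-native path separators violate. A minimal Python sketch of the normalisation — illustrative only, not the actual charm.v6 fix:]

```python
import io
import zipfile


def write_entries(paths_and_data):
    """Build a zip archive in memory, normalising any Windows-style
    backslash separators to the forward slashes the zip specification
    requires, so archives built on Windows stay standards-compliant."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in paths_and_data:
            zf.writestr(name.replace("\\", "/"), data)
    buf.seek(0)
    return buf


# A Windows-built entry comes out with portable forward slashes:
archive = write_entries([("hooks\\install", b"#!/bin/sh\n")])
with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())  # → ['hooks/install']
```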
[01:52] <veebers> wallyworld: (soz was at lunch) there is no automated way, you would scp and ssh into the windows machine and do it that way (you could crib off the windows unit test job)
[01:53] <veebers> wallyworld: I had issues ssh-ing into the machine, you may have more luck rdp-ing into the machine after scp-ing the code onto it
[01:53] <wallyworld> gawd
[01:53] <wallyworld> easier to land the fix and see. it's a good fix regardless
[01:53] <veebers> hah ok. We have plans to add windows unit tests to the PR experience
[02:04] <veebers> where is JUJU_DATA for windows?
[02:27] <wallyworld> veebers: i think under the User home dir
[02:27] <veebers> wallyworld: ack thanks
[02:27] <wallyworld> thumper: here's the juju bit of that fix https://github.com/juju/juju/pull/9094
[02:53] <thumper> wallyworld: that is a lot more than just the charms update
[02:53] <wallyworld> tell me about it
[02:53] <thumper> wallyworld: and there are two test failures
[02:53] <wallyworld> yeah, fixing now
[02:53] <wallyworld> due to new hooks
[02:54] <wallyworld> the sands shifted underneath the deps
[02:54] <wallyworld> so bringing in tip of charm.v6 also needed latest of other things
[02:54] <wallyworld> the other things are oci-image and devices support in charm metadata
[02:54] <wallyworld> which are unused in 2.4
[02:54] <wallyworld> and also the new update-series hooks, so tests need fixing
[02:58]  * thumper nods
[03:27] <wallyworld> thumper: tests pass now
[03:39]  * thumper looks again
[03:39] <thumper> wallyworld: approved
[03:39] <wallyworld> ty
[03:58] <thumper> wallyworld: I'm going to go and make a coffee before our chat
[03:58] <thumper> wallyworld: are you wanting to meet this afternoon?
[03:58] <wallyworld> thumper: sgtm, i'll do the same
[03:58] <wallyworld> yeah let's briefly
[03:58] <thumper> ok
[03:58]  * thumper goes to make coffee
[04:06] <wallyworld> thumper: in HO
[04:34] <wallyworld> babbageclunk: still on target to land the big mother branch?
[04:38] <babbageclunk> Hopefully - have been chasing the openstack s390x failures with veebers
[04:38] <veebers> sorry to distract you babbageclunk :-P
[04:38] <babbageclunk> and dumb gce problem
[05:23] <veebers> babbageclunk: hah, this commit introduced that test failure: https://github.com/juju/juju/commit/48471e3bac7cd694540067dc5fa823c3a28f52c2
[05:24] <babbageclunk> bums
[05:24] <veebers> it's super complicated :-P I'm not sure why yet but it does. I suspect that the error message is wrong and it should be something different
[05:24] <veebers> we didn't see it come up earlier because I broke the ci-run and we missed a handful of commits
[05:30] <veebers> wallyworld, thumper: I updated the doc, the invalid url thing isn't related to proxy issues (at least for the unit test) but was introed with the lts name change (not sure of the root cause of the failure, potentially a masked error?)
[05:31]  * wallyworld is otp with is
[05:32] <veebers> ack
[05:32] <veebers> I need to go sort tea, I'll be back to push a PR for the assess_recovery failure after that (just testing a run now).
[05:33] <anastasiamac> veebers: just out of curiosity..
[05:34] <anastasiamac> veebers: is bionic in /usr/share/distro-info/ubuntu.csv on the ci machines that run our tests?...
[05:34] <veebers> anastasiamac: I'll have a look for you
[05:34] <veebers> anastasiamac: yep, including cosmic
[05:34] <anastasiamac> veebers: thnx... ok..
[05:35] <anastasiamac> veebers: coz we also have hardcoded mapping (m rolling my eyes here) but m hoping that we do not use it
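[Editor's note: the check anastasiamac asked for — whether a series name like "bionic" appears in `/usr/share/distro-info/ubuntu.csv` — is a one-liner over the file's `series` column. A hedged Python sketch; the column layout below is assumed from the distro-info-data package format:]

```python
import csv
import io


def known_series(csv_file):
    """Return the set of Ubuntu series names (the 'series' column,
    e.g. 'bionic', 'cosmic') from a distro-info ubuntu.csv file object."""
    return {row["series"] for row in csv.DictReader(csv_file)}


# Illustrative two-row sample in the assumed distro-info column layout;
# on a real machine you would open /usr/share/distro-info/ubuntu.csv.
sample = io.StringIO(
    "version,codename,series,created,release,eol\n"
    "18.04 LTS,Bionic Beaver,bionic,2017-10-19,2018-04-26,2023-04-26\n"
    "18.10,Cosmic Cuttlefish,cosmic,2018-04-26,2018-10-18,2019-07-18\n"
)
print("bionic" in known_series(sample))  # → True
```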
[05:35] <veebers> the s390x machine does say it needs a reboot, perhaps I'll do that after dinner
[05:36] <anastasiamac> veebers: ack
[05:36] <veebers> anastasiamac: ack, I need to confirm but I think this might just happen on s390x
[05:36]  * anastasiamac sighs
[05:36] <anastasiamac> veebers: i was under impression we no longer supported s390x
[05:37] <thumper> no... we do support s390x
[05:37] <anastasiamac> thumper: \o/
[05:47] <anastasiamac> veebers: m not sure if it'll help but it looks like the juju/os commit in dependencies.tsv needs to be updated... 2.4 branch points to the one I did in July and misses the update from uros to support new mac os (done 14 days ago)... thumper <<
[05:47] <thumper> agreed
[05:47] <thumper> if we are rolling a new hash for testing anyway, we should update it
[05:48] <anastasiamac> k. i'll pause what m doing and will propose now
[05:48] <anastasiamac> just juju/os or can u think of anything else that needs to be updated?
[06:00] <anastasiamac> thumper: veebers: https://github.com/juju/juju/pull/9095
[06:01] <thumper> anastasiamac: approved
[06:01] <anastasiamac> \o/
[11:33] <anastasiamac> a simple review plz https://github.com/juju/juju/pull/9097
[11:42] <stickupkid> anastasiamac: I'll give a look
[11:47] <stickupkid> anastasiamac: AND done... :D
[13:37] <rick_h_> stickupkid: can you please make sure to keep an eye out on that OS PR and land it once it's good?
[13:38] <manadart> externalreality: Do you have any objection to suffixing the status constants in the model package, with "UpgradeSeries" or "Status"?
[13:38] <manadart> Disambiguates them from other exports.
[13:39] <stickupkid> rick_h_: yeap sure can
[13:40] <rick_h_> stickupkid: ty!
[14:15] <hml> stickupkid: rick_h_ : the ci-run for 2.4.2 doc is updated with the latest and greatest.
[14:17] <cory_fu> stub: You around still?
[14:18] <stickupkid> hml: i'll have a quick look at this one - Export bundle test shouldn’t be running in 2.4 ci
[14:18] <rick_h_> hml: awesome
[14:19] <rick_h_> stickupkid: that was moved behind a feature flag. Might just need to make sure the tests are engaging the flag?
[14:19] <rick_h_> stickupkid: or since it's behind a flag skip the tests for the moment in 2.4
[14:19] <rick_h_> and we'll sort it in 2.5/backport in the future
[14:23] <hml> rick_h_: stickupkid: the export bundle test wasn’t being run in 2.4 ci for a bit… then appeared overnight
[14:24] <rick_h_> hml: like last night overnight?
[14:24] <hml> rick_h_: yes… it wasn’t run in the 1089 run, but is in 1093
[14:24] <rick_h_> lol
[14:28] <stickupkid> haha
[14:28] <stickupkid> tada
[14:29] <rick_h_> well, should make the git bisect easy :)
[14:30] <stickupkid> https://github.com/CanonicalLtd/juju-qa-jenkins/blob/master/jobs/ci-run/functional/functionaltests.yml#L197
[14:32] <stickupkid> so it seems this was changed, to run again
[14:33] <stickupkid> I'll make a PR
[14:33] <rick_h_> well that reads ! doesn't it?
[14:33] <rick_h_> I'm confused
[14:33] <rick_h_> stickupkid: time for our 1-1 chat?
[14:33] <stickupkid> one sec, on a phone :S
[14:35] <rick_h_> stickupkid: lol ok
[14:36] <stickupkid> wife broke her new phone :|
[14:38] <rick_h_> oops, did you tell her that's not how you're supposed to use it?
[14:38] <rick_h_> I find that if I tell my wife that she shouldn't have done that it helps a lot and she's grateful for my advice :)
[14:42]  * hml snickering :-D
[16:09] <kwmonroe> cory_fu: will config.changed.foo always be set on initial deployment?  i seem to recall config-changed always runs, but am unsure if the c.c.foo flag will be set at that time, or if it's only set if foo is not the default, or if it's not set at all until a 'juju config' operation.
[16:11] <cory_fu> kwmonroe: Yes, it will always be set on the first run
[16:12] <cory_fu> kwmonroe: Specifically, it uses `config.changed(opt)` (see https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L211) which will evaluate to True the first time it's called (https://github.com/juju/charm-helpers/blob/master/charmhelpers/core/hookenv.py#L351)
[16:13] <kwmonroe> excellent cory_fu.  thx
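[Editor's note: the behaviour cory_fu describes — `config.changed(opt)` evaluating to True the first time it's called — falls out of comparing against state saved on the previous hook run. A self-contained sketch of that mechanism (not the actual charm-helpers code, which persists state to disk):]

```python
class Config(dict):
    """Minimal sketch of the pattern behind config.changed(opt):
    compare the current value to the one saved at the end of the
    previous hook run.  Before the first run there is no saved
    state to compare against, so every option reports as changed,
    which is why config.changed.foo is set on initial deployment
    even when foo still holds its default value."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._prev = None  # no saved state before the first hook run

    def changed(self, opt):
        if self._prev is None:
            return True  # first run: everything counts as changed
        return self._prev.get(opt) != self.get(opt)

    def save(self):
        self._prev = dict(self)  # snapshot for the next run to compare


cfg = Config(foo="default")
print(cfg.changed("foo"))  # first run: True, even for the default value
cfg.save()
print(cfg.changed("foo"))  # second run, value unchanged: False
```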
[16:18] <cory_fu> Cynerva_: I think I've addressed all of your comments on https://github.com/juju-solutions/kubernetes/pull/148 if you don't mind taking another look when you have some time
[16:41] <Cynerva> cory_fu: thanks, will look soon
[16:44] <externalreality> rick_h_, (upgrade series feature) - As far as allowing retries when hooks fail goes, my assumption is that retry on failure should be implicit from a CLI perspective. A user can simply run the command again after hook failure and not need to explicitly specify a flag (--retry).
[16:45] <rick_h_> externalreality: right, currently though it says you can't because it's already started, for instance
[16:45] <externalreality> rick_h_, understood
[16:45] <rick_h_> externalreality: so there's a question then on if we want to warn the user it's already done and if they want to try again make them express that they're aware of that
[16:45] <rick_h_> externalreality: so I guess that was just a "hmm, how do we want to handle this?"
[16:50] <externalreality> rick_h_, where would that warning come from. `juju status` doesn't display that kind of information (?I think?) and the `prepare` command, of course, runs async with respect to the CLI (so cannot warn as command output). The only option would be to log that warning - correct?
[16:50] <externalreality> rick_h_, Where would the user get that warning info from? that is.
[16:51] <rick_h_> externalreality: so currently if I run prepare after I've already run it the command outputs that the lock document already exists (or something to that effect)
[16:51] <rick_h_> externalreality: maybe I'm confused. let me get the test thing setup again.
[16:52] <externalreality> rick_h_, Yes, if you run again the command says the lock already exists.
[16:53] <externalreality> rick_h_, do you mean that if you should run it again and the hook is in error, the command then notifies you that "The hook has failed on a previous run, would you like to try again..." kind of thing?
[16:53] <rick_h_> externalreality: thinking
[16:53] <externalreality> rick_h_, ack
[16:53] <rick_h_> sorry, yea my first thought was along that line but I recall a conversation with jam we had around existing methods of dealing with failing hooks and our existing resolved --retry mechanisms
[16:53] <rick_h_> externalreality: so thinking through how that might work
[16:54] <rick_h_> and any cases where it might fail/abort but not be tied to a pre/post hook execution
[16:56] <externalreality> rick_h_, cool
[16:56] <rick_h_> externalreality: ok, let's hold off on that for now then. Let's just make sure we can resolved/--retry the prepare hooks
[16:56] <externalreality> rick_h_, ok
[16:56] <rick_h_> externalreality: and we should validate that once those hooks are successful the process will still move forward doing the machine-related steps
[17:21] <externalreality> grumpig is out of disk space
[17:21] <rick_h_> wheeee
[20:21] <rick_h_> tfw you forget to turn on the feature flag before you bootstrap and deploy the test charm...
[20:25] <rick_h_> oh dammit, and I also forgot to run make install...wheeee take 4
[20:37] <thumper> morning
[20:38] <rick_h_> morning thumper
[21:07] <veebers> Morning o/
[21:09] <rick_h_> morning veebers
[21:14] <veebers> Morning rick_h_ o/
[21:15] <babbageclunk> morning veebers and rick_h_
[21:15] <babbageclunk> and thumper
[21:15] <rick_h_> wheeee
[22:48] <babbageclunk> wallyworld: just doing the last bit of changes, flipping the sense of that feature flag. Do you think I should call it disable-raft-leases or something else?
[22:49] <wallyworld> babbageclunk: what about use-legacy-leases or something?
[22:49] <wallyworld> keep the sense of the flag what to use rather than what not to use
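[Editor's note: the naming point above — the flag should say what to *use*, not what to disable — can be sketched as: the new raft-based leases are the default, and `use-legacy-leases` opts back into the old path. The flag name is from the chat; the helper itself is hypothetical:]

```python
def use_raft_leases(feature_flags):
    """Hypothetical sketch of the flag sense wallyworld suggests:
    raft-based leases are the default behaviour, and the flag names
    the legacy behaviour you opt back into, avoiding the
    'disable-raft-leases' double negative."""
    return "use-legacy-leases" not in feature_flags


print(use_raft_leases(set()))                  # default: raft leases on
print(use_raft_leases({"use-legacy-leases"}))  # opted back to legacy
```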
[22:50] <babbageclunk> wallyworld: yeah, that's nicer thanks. Also, do you want to look through my other responses in case there's something I'm being silly about?
[22:50] <wallyworld> babbageclunk: should be good, i can take a quick look
[22:51] <babbageclunk> wallyworld: awesome thanks
[22:51] <babbageclunk> wallyworld: ping me if you want to argue about anything :)
[23:29] <veebers> wallyworld, babbageclunk: yay https://github.com/juju/juju/blob/develop/acceptancetests/jujupy/client.py#L227
[23:31] <babbageclunk> veebers yay, delete it!
[23:39] <anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/9096
[23:43] <babbageclunk> anastasiamac: wilco - are you blocked on it? Do you mind if I finish off my raftlease featureflag change first?
[23:44] <anastasiamac> babbageclunk: m kind of blocked on it but i can also try to exercise my patience... ;) wanted to land it within 24hr of proposing...
[23:44] <anastasiamac> babbageclunk: i can always beg wallyworld or thumper for a review :D
[23:48] <wallyworld> i can look soonish
[23:48] <wallyworld> add it to the queue
[23:50] <veebers> maas spaces and lxd, if I'm seeing the error: ('1/lxd/0', 'no obvious space for container "1/lxd/0", host machine has spaces: "ha-space", "space-0"') how can I add an obvious space? (this is maas 2.3, juju 2.4)
[23:55] <wallyworld> babbageclunk: changes/comments look ok to me - maybe add a todo or a trello card for the fsm optimisation for expired lease removal
[23:55] <anastasiamac> thanks, wallyworld \o/
[23:55] <babbageclunk> wallyworld: great thanks
[23:56] <wallyworld> babbageclunk: when it's landed i expect a video of you doing a happy dance
[23:56] <wallyworld> and a discourse post :-)
[23:57] <babbageclunk> anastasiamac: I had to run outside to get washing off the line - I'll take a look now!
[23:57] <anastasiamac> babbageclunk: :)
[23:57] <anastasiamac> babbageclunk: it's not that urgent.. i'll need to take my 5yo to do a blood test
[23:57] <babbageclunk> oh no!
[23:58] <anastasiamac> babbageclunk: which apparently requires several specialists to hold him down
[23:58] <babbageclunk> :(
[23:58] <babbageclunk> poor wee guy
[23:58] <anastasiamac> babbageclunk: so when i said 24 hrs... i meant we have until 10pm bne time
[23:58] <anastasiamac> babbageclunk: yeah :(