[02:25] <wallyworld> hpidcock: pushed up a couple of changes for the run socket and deployment mode stuff, just testing locally. also a makefile fix to get us out of the current build dilemma in the least-cost way until we redo the process
[02:27] <thumper> if anyone wants to review a lot of deleted code... https://github.com/juju/juju/pull/11377/files
[02:28] <hpidcock> wallyworld: thanks
[03:44] <wallyworld> thumper: small PR to fix an annoyance seen in those Solutions QA logs https://github.com/juju/juju/pull/11378
[03:45]  * thumper looks
[03:46] <thumper> lgtm
[03:46] <wallyworld> ta
[04:44] <wallyworld> kelvinliu: here's a small PR to fix an upgrade step https://github.com/juju/juju/pull/11379
[04:44] <kelvinliu> wallyworld: looking now
[05:00] <kelvinliu> lgtm, ty wallyworld
[05:00] <wallyworld> \o/ ty
[05:01] <wallyworld> lots of loose threads to tidy up before beta
[05:21] <kelvinliu> wallyworld: can I get ur 2mins to discuss the new option for storage mode?
[05:22] <wallyworld> kelvinliu: sure, just give me 5
[05:22] <kelvinliu> sure
[05:29] <wallyworld> kelvinliu: standup?
[05:29] <kelvinliu> yep
[06:18] <wallyworld> kelvinliu: hpidcock: if you guys could give early feedback on https://github.com/juju/juju/pull/11381 that would be great
[06:19] <kelvinliu> yep
[06:19] <tlm[m]> lol was going to tidy up a few things first. If it seems obvious or unsafe I am probably already on it
[07:45] <elox> Good morning
[10:01] <achilleasa> stickupkid_: can you take a quick look at this tiny change? https://github.com/juju/names/pull/104
[10:01] <achilleasa> also cmars, when online can you please take a look at https://github.com/juju/terms-client/pull/23
[10:02] <stickupkid_> achilleasa, done
[10:02] <achilleasa> (turns out adding new errors to juju/errors is hard)
[10:02] <stickupkid_> transitive dependencies
[13:28] <hml> achilleasa: changes made to https://github.com/juju/juju/pull/11339
[13:30] <hml> achilleasa:  i grabbed the card to move facade calls to common
[15:11] <stickupkid> openstack doesn't clear its own ports up, well that's annoying
[15:52] <achilleasa> hml: can I have a green tick? https://github.com/juju/terms-client/pull/24
[15:52] <hml> achilleasa:  looking
[15:54] <achilleasa> hml: 11339 approved with minor doc-related comments
[15:54] <hml> achilleasa:  ty
[15:56] <hml> achilleasa:  approved
[15:56] <achilleasa> thanks
[16:16] <achilleasa> stickupkid and hml: do you see any strange warning if you `juju deploy x; juju destroy-model -y default`?
[16:16] <achilleasa> https://pastebin.canonical.com/p/NRPQMkwTCT/
[16:17] <achilleasa> (that IP is where lxd binds on my dev box)
[16:19] <hml> achilleasa:  not on 2.7.4, though i’ve noticed some commands take really long on develop, like add-model
[16:19] <achilleasa> hml: yes, I get the same with add-model
[16:19] <hml> achilleasa:  not on my local branch either
[16:27] <stickupkid> achilleasa, I've seen that before
[16:27] <stickupkid> I believe manadart has spotted that in CI, but was unable to reproduce it
[16:27] <stickupkid> we did suspect it was a systemd issue, but that was just a theory
[16:28] <achilleasa> stickupkid: hitting it consistently :-(
[16:28] <hml> achilleasa:  which branch?
[16:28] <hml> achilleasa:  localhost?
[16:29] <achilleasa> develop/HEAD, local lxd exposed on eth0 ip
[16:30] <stickupkid> achilleasa, can you get info from journal etc
[16:31] <stickupkid> that way we can see if the systemd theory holds
[16:38] <achilleasa> stickupkid: journalctl on the host?
[16:38] <stickupkid> yeah
[16:41] <achilleasa> stickupkid: lxd.daemon[529331]: 2020/03/31 17:40:58 http: TLS handshake error from 10.176.227.245:41548: remote error: tls: bad certificate
[16:41] <stickupkid> there we go, not idea why though
[16:43] <stickupkid> s/not/no
[16:45] <rick_h_> achilleasa:  got the merge to go through so think we're good now
[16:45] <rick_h_> stickupkid:  achilleasa when do you time swap? (and why are you still around today? :p )
[16:45] <stickupkid> rick_h_, last weekend
[17:04] <hml> achilleasa:  what kind of comments on ErrNoSavedState?  it seems self-explanatory to me
[17:18] <achilleasa> hml: usually it's something along the lines of 'ErrNoSavedState is returned by X when Y happens'. Just mentioned it because there is a linter (we don't use ATM) that complains about missing docs for exported types
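For context, the doc-comment convention achilleasa is describing looks roughly like this; the package name and wording are illustrative, not the actual juju code:

```go
// Package uniter is illustrative; only the comment style matters here.
package uniter

import "errors"

// ErrNoSavedState is returned by operations that read persisted uniter
// state when no state has been saved yet for the unit.
var ErrNoSavedState = errors.New("no saved operation state")
```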
[17:32] <josephillips> hey
[17:33] <josephillips> question: i made a commit to add a functionality to the openstack swift-proxy charm
[17:33] <josephillips> how long does it take for someone to review the commit?
[17:37] <rick_h_> josephillips:  that depends on the folks around/etc. beisner might be able to give you a better answer more official-like
[17:38] <josephillips> ok, and another question: i want to commit another change
[17:38] <josephillips> to support another functionality on the same charm
[17:38] <josephillips> do i have to wait until this commit is accepted, or can i make a second commit?
[17:39] <josephillips> that makes me think of another question
[17:39] <josephillips> do i have to clone the master repo again for the new one, or can i keep working with my last clone and changes?
[17:43] <rick_h_> josephillips:  well if the changes are independent then I'd expect you could create two pull requests no problem
[17:43] <rick_h_> josephillips:  if they're dependent on each other you can queue up two pull requests, but the one will have to land/rebase before the other goes through
[17:44] <josephillips> no, they are not dependent, they just add new functionality to configure new middlewares
[17:44] <josephillips> on swift-proxy
[17:44] <rick_h_> ok
[18:17] <hml> achilleasa: rgr
[21:27] <babbageclunk> wallyworld: do you think we should restrict acceptable downgrades to max of one minor version? I guess one major in the case of 3.0 -> 2.9 (or whatever)?
[21:27] <wallyworld> babbageclunk: i don't think there's any reason we need to?
[21:27] <babbageclunk> (Although that's difficult without knowing what the maximum minor version of the prev major was)
[21:28] <babbageclunk> wallyworld: ok, that's much simpler obvs - I think I'm just getting the jibblies from ripping out this checking code.
[21:28] <wallyworld> so long as the db matches the agent version?
[21:29] <wallyworld> i guess there's the possibility of local file incompatibility, eg uniter state file
[21:29] <wallyworld> that's one thing we've just ignored
[21:32] <babbageclunk> ugh
[21:33] <babbageclunk> I'd thought about config file inconsistency but not state file
[21:34] <babbageclunk> Is the state file part of the stuff being moved into the db? Or is that just charm state?
[21:34] <wallyworld> i would hope everything
[21:36] <babbageclunk> also reference to https://bugs.launchpad.net/juju-core/+bug/1299802 in the code - I don't think this is still a problem because everything looks at the model version rather than the units looking to the machine to work out what version they should be. Does that seem right?
[21:36] <mup> Bug #1299802: upgrade-juju 1.16.6 -> 1.18 (tip) fails <juju-core:Fix Released by jameinel> <juju-core 1.18:Fix Released by jameinel> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Trusty):Fix Released> <https://launchpad.net/bugs/1299802>
[21:36] <wallyworld> will read that after call
[21:37] <babbageclunk> wallyworld: ok, that would remove the problem in the future, although it doesn't help for the existing versions
[21:37] <babbageclunk> ok, sorry!
[21:40] <babbageclunk> hey hml, are we right in thinking that the unit-state is being moved into the db as well as the charm state?
[21:41] <hml> babbageclunk: yes.. uniter internal state, storage, relations, metrics
[21:41] <hml> babbageclunk: only bundles and the deployer will be left in the state dir… those can be more easily gotten
[21:41] <hml> babbageclunk: but it’s a work in progress right now… not everything is there
[21:41] <hml> or moved
[21:43] <babbageclunk> hml: awesome - thanks!
[22:07] <pmatulis> is there something unstable about ~containers namespace? i'm also wondering whether failed attempts to use the stable/promulgated namespace should leave a hint to the user that another namespace contains the charm in question
[22:08] <wallyworld> babbageclunk: unit state file compatibility would still be an issue i think on downgrades, but is negated by that stuff moving to the controller as hopefully a structured doc that is then served via an api and hence compatible versions can be served
[22:10] <babbageclunk> wallyworld: well, if it's coming from the db then it would be reset along with the restore, right? Once the agents downgrade themselves then the state would be the right format.
[22:11] <wallyworld> that is true. i was thinking also that the api would serve a version of the data consistent with the api version, but you are right, we get that by default with the db version being downgraded also
[22:15] <wallyworld> rick_h_: wadda ya reckon about creating a 2.9 milestone to park bugs that we know we want to do next cycle?
[22:22] <rick_h_> wallyworld:  wfm
[22:41] <wallyworld> thumper: i've always wondered why we don't enforce relation limits, did we want to target it at 2.8.1? bug 1869840
[22:41] <mup> Bug #1869840: Enforce limit: specified in relation metadata <canonical-is> <juju:Triaged> <https://launchpad.net/bugs/1869840>
[22:50] <thumper> is it something we could do post beta 1 before rc?
[22:52] <babbageclunk> wallyworld: making the same change in caasupgrader - k8s isn't really handled in juju-restore at the moment, but when it is it'll be the same situation, right?
[22:52] <wallyworld> thumper: we could, but if code freeze is friday.....
[22:52] <thumper> freeze for features
[22:52] <wallyworld> it is potentially a non-trivial change if there are corner cases
[22:53] <thumper> it could just be validation in the api-sever call for relate
[22:53] <thumper> where if the limit is hit, we just reject the relate call
[22:53] <thumper> doesn't feel like a lot of corner cases
[22:53] <thumper> what we don't have
[22:53] <wallyworld> it could be but i am always wary
[22:53] <thumper> is "my charm only works with one of mysql or pgsql"
[22:53] <wallyworld> this late in the piece
[22:54] <thumper> trying to define that could get very messy
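A minimal sketch of the validation thumper is suggesting, assuming the existing relation count and the charm metadata `limit:` value are already in hand; the function and its signature are hypothetical, not Juju's actual API:

```go
package main

import "fmt"

// checkRelationLimit rejects a relate call when the endpoint already has
// as many relations as its charm metadata allows. A limit of zero (or
// less) means no limit was declared.
func checkRelationLimit(endpoint string, limit, existing int) error {
	if limit <= 0 {
		return nil
	}
	if existing >= limit {
		return fmt.Errorf("cannot add relation to %q: limit of %d would be exceeded", endpoint, limit)
	}
	return nil
}

func main() {
	// e.g. a charm endpoint that declares limit: 1 and already has one relation.
	fmt.Println(checkRelationLimit("db", 1, 1))
}
```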
[22:54] <wallyworld> thumper: also bug 1869795, i can't recall why we send --verbose to stderr
[22:54] <mup> Bug #1869795: --verbose output from juju CLI is written to stderr <juju:New> <https://launchpad.net/bugs/1869795>
[22:54] <thumper> I saw that too...
[22:54] <wallyworld> i changed the title as it is all juju cli
[22:55] <thumper> I had thought that we agreed to send info/verbose to stdout except if --format yaml/json
[22:55] <wallyworld> yeah, i thought so too, just wanted to double check as it is a wide-reaching change
[22:56] <thumper> we could quickly see where we actually call ctx.Infof and ctx.Verbosef
[22:56] <thumper> the problem is the machine readable format bits
[22:57] <wallyworld> yup
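A sketch of the routing being agreed on here, assuming a writer is picked per command; the helper is hypothetical, not the actual cmd.Context implementation. Info/verbose messages go to stdout, unless the command emits machine-readable output, in which case they go to stderr so they cannot corrupt the --format json/yaml stream:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// verboseWriter picks the stream for Infof/Verbosef-style messages.
// Commands producing machine-readable output keep stdout clean for the
// formatted document and push human-readable chatter to stderr.
func verboseWriter(machineReadable bool, stdout, stderr io.Writer) io.Writer {
	if machineReadable {
		return stderr
	}
	return stdout
}

func main() {
	w := verboseWriter(false, os.Stdout, os.Stderr)
	fmt.Fprintln(w, "adding model ...")
}
```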
[22:59] <wallyworld> babbageclunk: k8s upgrades are slightly different in how they're triggered, i'd need to look specifically at the code etc
[23:01] <babbageclunk> wallyworld: Ok - presumably the agent version being different will still trigger this code though right? I'll try chasing it through.
[23:01] <wallyworld> babbageclunk: the difference is that the agent does not shut itself down and restart, it's a whole new docker image
[23:03] <babbageclunk> ah right
[23:03] <wallyworld> thumper: i want to do a burn down list of bugs for 2.8-beta1. i've created 2.8-rc1, 2.8.1, and 2.9-beta1 milestones. i'd like us to shuffle bugs around to reflect reality and allow folks to pick off bugs to fix once we hit feature freeze
[23:09] <babbageclunk> wallyworld: it looks like essentially the same situation (from my not very familiar reading) - that's the place that decides whether this upgrade is sensible (and unlocks downstream workers), the API accepts the version change and updates the image in k8s to the requested version
[23:11] <wallyworld> babbageclunk: you are most likely right, i need to go look at the code. i just wanted to be sure that any k8s differences were accounted for etc
[23:12] <babbageclunk> wallyworld: cool cool - I'll make the change for now, we can discuss in stdup/review, thanks!
[23:12] <hpidcock> wallyworld: feeling it might soon be time to move ProviderID for Deployments/Daemonsets to pod name, not pod uuid
[23:14] <wallyworld> hpidcock: given pod name is always regenerated as a unique value in those cases, it would work to do that. is the rationale to make it easier to match things up?
[23:15] <hpidcock> wallyworld: the rationale is that UUIDs, for now and probably for a long time, are not indexed in etcd. So you can't just get by UUID, you must list all pods and filter on uuid
[23:16] <hpidcock> wallyworld: with k8s usage increasing, and us better supporting non-statefulset deployments, probably a good time to do it
[23:16] <wallyworld> ah that is true, and unfortunate
[23:16] <hpidcock> I'm happy to do it as a driveby
[23:16] <wallyworld> let's discuss in standup
[23:16] <hpidcock> sure thing
[23:20] <babbageclunk> hpidcock: that's weird - why don't they index on uuid? I'd have expected it to be way cheaper than indexing on a string?
[23:21] <hpidcock> babbageclunk: because k8s objects use namespace + name as their key
[23:22] <babbageclunk> ok - so the uuid is just some other field and you can't query by it directly?
[23:22] <hpidcock> correct, it's used to determine if two objects are different when you delete and recreate an object with the same name
[23:23] <babbageclunk> ah, right - thanks!
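A sketch of the lookup cost hpidcock is describing, using client-go (assumed to be a release where Get/List take a context; signatures differ in older versions). Fetching by name is a single keyed read, while finding a pod by UID means listing the namespace and filtering client-side:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// podByName is cheap: namespace+name is the native key for k8s objects.
func podByName(ctx context.Context, c kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	return c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
}

// podByUID has to list everything and filter, because UID is not a key
// you can query by directly.
func podByUID(ctx context.Context, c kubernetes.Interface, ns string, uid types.UID) (*corev1.Pod, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for i := range pods.Items {
		if pods.Items[i].UID == uid {
			return &pods.Items[i], nil
		}
	}
	return nil, nil // not found
}

func main() {}
```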