[02:25] hpidcock: pushed up a couple of changes for the run socket and deployment mode stuff, just testing locally. also a makefile fix to get us out of the current build dilemma in the least-cost way until we redo the process
[02:27] if anyone wants to review a lot of deleted code... https://github.com/juju/juju/pull/11377/files
[02:28] wallyworld: thanks
[03:44] thumper: small PR to fix an annoyance seen in those Solutions QA logs https://github.com/juju/juju/pull/11378
[03:45] * thumper looks
[03:46] lgtm
[03:46] ta
[04:44] kelvinliu: here's a small PR to fix an upgrade step https://github.com/juju/juju/pull/11379
[04:44] wallyworld: looking now
[05:00] lgtm, ty wallyworld
[05:00] \o/ ty
[05:01] lots of loose threads to tidy up before beta
[05:21] wallyworld: can I get ur 2 mins to discuss the new option for storage mode?
[05:22] kelvinliu: sure, just give me 5
[05:22] sure
[05:29] kelvinliu: standup?
[05:29] yep
[06:18] kelvinliu: hpidcock: if you guys could give early feedback on https://github.com/juju/juju/pull/11381 that would be great
[06:19] yep
[06:19] lol, was going to tidy up a few things first. If it seems obvious or unsafe I am probably already on it
[07:45] Good morning
[10:01] stickupkid_: can you take a quick look at this tiny change? https://github.com/juju/names/pull/104
[10:01] also cmars, when online can you please take a look at https://github.com/juju/terms-client/pull/23
[10:02] achilleasa, done
[10:02] (turns out adding new errors to juju/errors is hard)
[10:04] transitive dependencies
[13:28] achilleasa: changes made to https://github.com/juju/juju/pull/11339
[13:30] achilleasa: i grabbed the card to move facade calls to common
[15:11] openstack doesn't clean up its own ports, well that's annoying
[15:52] hml: can I have a green tick? https://github.com/juju/terms-client/pull/24
[15:52] achilleasa: looking
[15:54] hml: 11339 approved with minor doc-related comments
[15:54] achilleasa: ty
[15:56] achilleasa: approved
[15:56] thanks
[16:16] stickupkid and hml: do you see any strange warning if you `juju deploy x; juju destroy-model -y default`?
[16:16] https://pastebin.canonical.com/p/NRPQMkwTCT/
[16:17] (that IP is where lxd binds on my dev box)
[16:19] achilleasa: not on 2.7.4, though i've noticed some commands take really long on develop, like add-model
[16:19] hml: yes, I get the same with add-model
[16:19] achilleasa: not on my local branch either
[16:27] achilleasa, I've seen that before
[16:27] I believe manadart has spotted that in CI, but was unable to reproduce
[16:27] we did suspect it was a systemd issue, but that was only a theory
[16:28] stickupkid: hitting it consistently :-(
[16:28] achilleasa: which branch?
[16:28] achilleasa: localhost?
[16:29] develop/HEAD, local lxd exposed on eth0 ip
[16:30] achilleasa, can you get info from the journal etc
[16:31] that way we can see if it's the systemd theory
[16:38] stickupkid: journalctl on the host?
[16:38] yeah
[16:41] stickupkid: lxd.daemon[529331]: 2020/03/31 17:40:58 http: TLS handshake error from 10.176.227.245:41548: remote error: tls: bad certificate
[16:41] there we go, no idea why though
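That LXD log line is worth unpacking: the "remote error:" prefix means the peer (the dialing client, i.e. the machine running the juju CLI) sent the bad_certificate alert, so the client rejected the certificate LXD presented during the handshake. A minimal, self-contained Go sketch of that failure mode follows; it assumes nothing about Juju's or LXD's actual code and exists only to show which side each error surfaces on.

```go
// Minimal sketch (not Juju or LXD code): reproduces the "remote error: tls:
// bad certificate" class of log line. The listener presents a self-signed
// certificate; the dialing client verifies it, rejects it, and sends the
// bad_certificate alert, which the listener logs as a handshake error.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSigned generates a throwaway self-signed certificate that no client
// will trust by default.
func selfSigned() tls.Certificate {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "demo"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}
}

func main() {
	ln, err := tls.Listen("tcp", "127.0.0.1:0", &tls.Config{
		Certificates: []tls.Certificate{selfSigned()},
	})
	if err != nil {
		panic(err)
	}
	done := make(chan struct{})
	go func() {
		defer close(done)
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		// The handshake runs on first read; the client's rejection of our
		// self-signed certificate surfaces here, as it does in the LXD log.
		_, err = conn.Read(make([]byte, 1))
		fmt.Println("server side:", err) // remote error: tls: bad certificate
		conn.Close()
	}()

	// Default client config: verify the server certificate against the
	// system roots, which the self-signed one fails.
	_, err = tls.Dial("tcp", ln.Addr().String(), &tls.Config{})
	fmt.Println("client side:", err) // certificate signed by unknown authority
	<-done
}
```

In the LXD case this suggests the juju client side did not trust the certificate LXD presented, rather than LXD rejecting the client.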
[16:45] achilleasa: got the merge to go through so I think we're good now
[16:45] stickupkid: achilleasa when do you time swap? (and why are you still around today? :p )
[16:45] rick_h_, last weekend
[17:04] achilleasa: what kind of comments on ErrNoSavedState? it seems self-explanatory to me
[17:18] hml: usually it's something along the lines of 'ErrNoSavedState is returned by X when Y happens'. Just mentioned it because there is a linter (we don't use it ATM) that complains about missing docs for exported types
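The convention hml sketches, written out as a Go doc comment (all names here are illustrative, not the actual juju code):

```go
// Package deployer is illustrative; only the comment style is the point.
package deployer

import "errors"

// ErrNoSavedState is returned by RestoreState when the unit has no
// previously persisted state to restore. (Hypothetical wording: the linter
// mentioned above only checks that an exported identifier has a doc comment
// starting with its name.)
var ErrNoSavedState = errors.New("no saved state")
```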
[17:32] hey
[17:33] question: i made a commit to add some functionality to the openstack swift-proxy charm
[17:33] how long does it usually take for someone to review the commit?
[17:37] josephillips: that depends on the folks around/etc. beisner might be able to give you a better answer, more official-like
[17:38] ok, and another question: i want to commit another change
[17:38] to support another piece of functionality on the same charm
[17:38] do i have to wait until this commit is accepted, or can i make a second commit?
[17:39] that makes me think of another question
[17:39] do i have to clone the master repo again for the new one, or can i work with my last clone and changes?
[17:43] josephillips: well if the changes are independent then I'd expect you could create two pull requests no problem
[17:43] josephillips: if they're dependent on each other you can queue up two pull requests, but the one will have to land/rebase before the other goes through
[17:44] no, they are not dependent, they just add new functionality to configure new middlewares
[17:44] on swift-proxy
[17:44] ok
[18:17] achilleasa: rgr
[21:27] wallyworld: do you think we should restrict acceptable downgrades to a max of one minor version? I guess one major in the case of 3.0 -> 2.9 (or whatever)?
[21:27] babbageclunk: i don't think there's any reason we need to?
[21:27] (Although that's difficult without knowing what the maximum minor version of the prev major was)
[21:28] wallyworld: ok, that's much simpler obvs - I think I'm just getting the jibblies from ripping out this checking code.
[21:28] so long as the db matches the agent version?
[21:29] i guess there's the possibility of local file incompatibility, eg uniter state file
[21:29] that's one thing we've just ignored
[21:32] ugh
[21:33] I'd thought about config file inconsistency but not state file
[21:34] Is the state file part of the stuff being moved into the db? Or is that just charm state?
[21:34] i would hope everything
[21:36] also a reference to https://bugs.launchpad.net/juju-core/+bug/1299802 in the code - I don't think this is still a problem because everything looks at the model version rather than the units looking to the machine to work out what version they should be. Does that seem right?
[21:36] Bug #1299802: upgrade-juju 1.16.6 -> 1.18 (tip) fails
[21:36] will read that after call
[21:37] wallyworld: ok, that would remove the problem in the future, although it doesn't help for the existing versions
[21:37] ok, sorry!
[21:40] hey hml, are we right in thinking that the unit state is being moved into the db as well as the charm state?
[21:41] babbageclunk: yes.. uniter internal state, storage, relations, metrics
[21:41] babbageclunk: only the bundles deployer will be left in the state dir… that can be easily gotten again
[21:41] babbageclunk: but it's a work in progress right now… not everything is there
[21:41] or moved
[21:43] hml: awesome - thanks!
[22:07] is there something unstable about the ~containers namespace? i'm also wondering whether failed attempts to use the stable/promulgated namespace should leave a hint to the user that another namespace contains the charm in question
[22:08] babbageclunk: unit state file compatibility would still be an issue i think on downgrades, but it is negated by that stuff moving to the controller as hopefully a structured doc that is then served via an api, and hence compatible versions can be served
[22:10] wallyworld: well, if it's coming from the db then it would be reset along with the restore, right? Once the agents downgrade themselves then the state would be in the right format.
[22:11] that is true. i was thinking also that the api would serve a version of the data consistent with the api version, but you are right, we get that by default with the db version being downgraded also
[22:15] rick_h_: wadda ya reckon about creating a 2.9 milestone to park bugs that we know we want to do next cycle?
[22:22] wallyworld: wfm
[22:41] thumper: i've always wondered why we don't enforce relation limits, did we want to target it at 2.8.1? bug 1869840
[22:41] Bug #1869840: Enforce limit: specified in relation metadata
[22:50] is it something we could do post beta 1, before rc?
[22:52] wallyworld: making the same change in caasupgrader - k8s isn't really handled in juju-restore at the moment, but when it is it'll be the same situation, right?
[22:52] thumper: we could, but if code freeze is friday.....
[22:52] freeze is for features
[22:52] it is potentially a non-trivial change if there are corner cases
[22:53] it could just be validation in the api-server call for relate
[22:53] where if the limit is hit, we just reject the relate call
[22:53] doesn't feel like a lot of corner cases
[22:53] what we don't have
[22:53] it could be, but i am always wary
[22:53] is "my charm only works with one of mysql or pgsql"
[22:53] this late in the piece
[22:54] trying to define that could get very messy
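A rough sketch of the validation being proposed here, with illustrative names only (the real apiserver types and the exact semantics of `limit:` in charm metadata are not reproduced; this is the shape of the check, not juju's code):

```go
// Illustrative sketch only, not the actual juju apiserver code: reject an
// add-relation call once an endpoint already has the number of relations
// declared by `limit:` in the charm metadata.
package relation

import "fmt"

// Endpoint is a simplified stand-in for an application's relation endpoint.
type Endpoint struct {
	ApplicationName string
	RelationName    string
	Limit           int // 0 is taken here to mean "no limit declared"
}

// relationCounter abstracts however the API server would count the
// established relations for an endpoint (hypothetical interface).
type relationCounter interface {
	RelationCount(appName, relName string) (int, error)
}

// checkRelationLimit returns an error if relating would exceed the
// endpoint's declared limit, so the relate call can be rejected up front.
func checkRelationLimit(st relationCounter, ep Endpoint) error {
	if ep.Limit <= 0 {
		return nil // no limit declared
	}
	n, err := st.RelationCount(ep.ApplicationName, ep.RelationName)
	if err != nil {
		return err
	}
	if n >= ep.Limit {
		return fmt.Errorf(
			"cannot relate: endpoint %s:%s already has %d of maximum %d relations",
			ep.ApplicationName, ep.RelationName, n, ep.Limit)
	}
	return nil
}
```

Note that, as thumper says, a simple count check like this cannot express "my charm only works with one of mysql or pgsql"; that is a constraint across endpoints, not a per-endpoint limit.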
[22:54] thumper: also bug 1869795, i can't recall why we send --verbose output to stderr
[22:54] Bug #1869795: --verbose output from juju CLI is written to stderr
[22:54] I saw that too...
[22:55] i changed the title as it affects the whole juju cli
[22:55] I had thought that we agreed to send info/verbose to stdout except if --format yaml/json
[22:55] yeah, i thought so too, just wanted to double check as it is a wide-reaching change
[22:56] we could quickly see where we actually call ctx.Infof and ctx.Verbosef
[22:56] the problem is the machine-readable format bits
[22:57] yup
[22:59] babbageclunk: k8s upgrades are slightly different in how they're triggered, i'd need to look specifically at the code etc
[23:01] wallyworld: Ok - presumably the agent version being different will still trigger this code though, right? I'll try chasing it through.
[23:01] babbageclunk: the difference is that the agent does not shut itself down and restart; it's a whole new docker image
[23:03] ah right
[23:03] thumper: i want to do a burn down list of bugs for 2.8-beta1. i've created 2.8-rc1, 2.8.1, and 2.9-beta1 milestones. i'd like us to shuffle bugs around to reflect reality and allow folks to pick off bugs to fix once we hit feature freeze
[23:09] wallyworld: it looks like essentially the same situation (from my not very familiar reading) - that's the place that decides whether this upgrade is sensible (and unlocks downstream workers); the API accepts the version change and updates the image in k8s to the requested version
[23:11] babbageclunk: you are most likely right, i need to go look at the code. i just wanted to be sure that any k8s differences were accounted for etc
[23:12] wallyworld: cool cool - I'll make the change for now, we can discuss in standup/review, thanks!
[23:12] wallyworld: feeling it might soon be time to move ProviderID for Deployments/Daemonsets to pod name, not pod uuid
[23:14] hpidcock: given pod name is always regenerated as a unique value in those cases, it would work to do that. is the rationale to make it easier to match things up?
[23:15] wallyworld: the rationale is that UUIDs, for now and probably for a long time, are not indexed in etcd. So you can't just get by UUID; you must list all pods and filter on uuid
[23:16] wallyworld: with k8s usage increasing, and us better supporting non-statefulset deployments, probably a good time to do it
[23:16] ah that is true, and unfortunate
[23:16] I'm happy to do it as a drive-by
[23:16] let's discuss in standup
[23:16] sure thing
[23:20] hpidcock: that's weird - why don't they index on uuid? I'd have expected it to be way cheaper than indexing on a string?
[23:21] babbageclunk: because k8s objects use namespace + name as their key
[23:22] ok - so the uuid is just some other field and you can't query by it directly?
[23:22] correct, it's used to determine if two objects are different when you delete and recreate an object with the same name
[23:23] ah, right - thanks!
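For context on the lookup cost being discussed, here is a hedged client-go sketch (illustrative helper names and modern client-go signatures; not juju's provider code). Because namespace + name is the storage key, a name lookup is a single keyed Get, while finding a pod by UID means listing the namespace and filtering client-side:

```go
// Illustrative client-go sketch, not juju's provider code: contrasts a keyed
// lookup by name with the list-and-filter needed to find a pod by UID.
package podlookup

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// podByName is a single keyed GET: namespace + name is the key the API
// server (and etcd behind it) can answer directly.
func podByName(ctx context.Context, c kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	return c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
}

// podByUID has no keyed lookup: we must list every pod in the namespace and
// filter client-side, which is the cost the conversation above complains about.
func podByUID(ctx context.Context, c kubernetes.Interface, ns string, uid types.UID) (*corev1.Pod, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for i := range pods.Items {
		if pods.Items[i].UID == uid {
			return &pods.Items[i], nil
		}
	}
	return nil, fmt.Errorf("pod with uid %q not found in namespace %q", uid, ns)
}
```

This is why storing the pod name in ProviderID, rather than the UID, turns every lookup into the cheap keyed form.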