[01:12] <babbageclunk> tlm: looking through the model manifolds I clicked about something that was confusing me - there are lots of workers wrapped in ifNotMigrating that seem like there should only be one across the controller agents.
[01:12] <babbageclunk> tlm: It turns out they all depend on environTracker/caasBrokerTracker, which *is* wrapped with isResponsible, so they only run once.
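The layering babbageclunk describes can be sketched in toy form (this is not Juju's dependency engine; the names and booleans are purely illustrative):

```go
package main

import "fmt"

// shouldRun captures the gating described above: a worker wrapped in
// ifNotMigrating still ends up effectively single-instance if one of its
// dependencies is wrapped in isResponsible, which holds on only one
// controller agent at a time.
func shouldRun(notMigrating, responsibleDepAvailable bool) bool {
	return notMigrating && responsibleDepAvailable
}

func main() {
	// Two controller agents, neither migrating; only one is responsible.
	fmt.Println(shouldRun(true, true))  // the responsible agent's worker runs
	fmt.Println(shouldRun(true, false)) // the other agent's worker does not
}
```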
[02:09] <evhan> I have a situation where I've upgraded a charm twice in quick succession, and have ended up with one working unit and one unit stuck in state "unknown", and the application "waiting" with 2/1 units running. Apart from the logs is there some place I can look to see what it's waiting *for*?
[02:10] <evhan> Or is that entirely defined by the charm (e.g. it went into "waiting" because some hook has not been implemented)?
[02:12] <tlm> is this k8s?
[02:12] <evhan> Yeah. The thing is when I give it a minute to go green between deploys, things work OK.
[02:13] <evhan> This was just a case of overcaffeination, but now that it's stuck I figure I should investigate.
[02:13] <tlm> what does the status of the pod in k8s say ?
[02:14] <evhan> Just Running, looking for interesting events now...
[02:15] <tlm> hmmm, my theory is that the k8s watcher in juju has not caught up. If you force some event to happen to the pod does that change the status in Juju? For example changing a pod label, restarting it, etc etc
[02:18] <evhan> Huh, indeed. I just added a dummy env entry to the pod and it came right.
[02:18] <evhan> Thanks.
[02:19] <tlm> np, we just re-implemented the watchers for k8s in 2.7.2 (out soon) and I think this will have solved that problem
[02:20] <wallyworld> 2.7.3
[02:20] <wallyworld> watchers change won't be in 2.7.2
[02:21] <tlm> ah yep
[02:21]  * tlm chants make watchers great again
[02:52]  * thumper needs moar coffee too
[04:11] <anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/11215
[04:11] <anastasiamac> babbageclunk: ends up being mostly mechanical
[04:12] <babbageclunk> anastasiamac: ok, taking a look
[04:13] <anastasiamac> babbageclunk: no rush!
[04:20] <babbageclunk> anastasiamac: ooh, it's big - probably won't get it finished today
[05:49] <wallyworld> hpidcock: i've left some thoughts on the PR, happy to discuss
[06:30] <kelvinliu> wallyworld: https://github.com/juju/juju/pull/11211 PR for adding ValidatingWebhookConfigurations support and upgrading the k8s API to 1.17, could u take a look? ty
[06:30] <wallyworld> sure
[06:31] <kelvinliu> im going out now, I will respond tmr morning if there are any questions. thank you
[06:32] <wallyworld> np
[09:37] <stickupkid> I like how openstack api ref has object fields with colons in it - hmmm
[10:47] <achilleasa> shouldn't the SetPodSpec call check for leadership on the server-side? It seems that this is only checked at the client
[11:26] <nammn_de> manadart: thanks for patch and suggestion! I updated the code and corresponding tests
[11:46] <stickupkid> manadart, here is the code, I just need to speak to hml about how to test this locally https://github.com/go-goose/goose/pull/77
[13:11] <achilleasa> anyone knows if it is possible to be running an agent binary that is *newer* than the controller? The reverse is possible (agent version gets synced when you juju upgrade-model)
[13:17] <rick_h> achilleasa:  I don't think so. I think the controller has to be the newest so we know the apis are available
[13:20] <achilleasa> rick_h: so basically, I am trying to figure out if this check is required or not: https://github.com/juju/juju/blob/develop/api/uniter/unit.go#L786-L788
[13:21] <rick_h> achilleasa:  heh, sure would be nice if there was a comment on the logic of the assertion...
[13:22] <nammn_de> manadart: regarding your comment to add the DocID to the existing constraintsDoc.. I had that initially in the PR. That led to problems because the existing Update code was not expecting that. I may be able to work it out and rewrite the update code. Just thought that would not be relevant for this patch
[13:22] <rick_h> achilleasa:  yea, so I know that in Juju upgrades the controller has to be done first and is promised to be later than any of the unit agents at first.
[13:22] <rick_h> achilleasa:  then when you upgrade the other models they slowly come up to speed
[13:23] <rick_h> achilleasa:  so I can't think of any way that hits, but someone clearly wrote it for some reason.
[13:23] <achilleasa> rick_h: that's a pretty old facade version (we are currently at v15)
[13:24] <rick_h> achilleasa:  yea...and being it's in the uniter it's not susceptible to having an old client
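The kind of guard being discussed can be sketched like this (hypothetical names and version numbers; not the actual uniter client code in api/uniter/unit.go):

```go
package main

import "fmt"

// minAddStorageVersion is an assumed facade version that introduced the
// call; the value here is illustrative only.
const minAddStorageVersion = 2

// checkFacade mimics a client-side best-API-version guard: refuse the call
// when the negotiated facade version predates the method, so an old
// controller never receives a request it cannot serve.
func checkFacade(bestVersion int) error {
	if bestVersion < minAddStorageVersion {
		return fmt.Errorf("controller facade v%d too old (need v%d)", bestVersion, minAddStorageVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkFacade(1))  // old controller: error
	fmt.Println(checkFacade(15)) // current facade: no error
}
```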
[13:31] <stickupkid> https://github.com/juju/juju/pull/11093#pullrequestreview-358214590
[13:31] <stickupkid> hml, tiny nit
[14:27] <hml> stickupkid: notes have “go test -live -check.v”, which now gets a “-live flag not valid” error?  also “go test -live -check.v -image <id> -flavor <name> -vendor canonistack ./…” for nova live tests
[14:28] <stickupkid> hml, it's a start :)
[14:31] <hml> stickupkid: hand written notes from fall 2019.  ha!
[14:52] <nammn_de> manadart: i've fixed and changed the constraints DocID. While working on it a migration_internal_test failed. I added the DocID to let it pass. Does adding the DocID have implications for a migration?
[14:58] <manadart> nammn_de: What you've done is fine.
[15:37] <hml> stickupkid: https://github.com/juju/juju/pull/11217
[15:50] <hml> so is errors.WithStack the new errors.Trace?
[15:51] <stickupkid> ah, yeah sorry i meant errors.Trace()
[15:51] <stickupkid> hml
[15:51] <hml> :-)
[16:03] <hml> stickupkid: we’ve proven that the “InstanceMutaterV2 interface” is a waste, right?  I could remove it now
[16:03] <stickupkid> i believe so
[16:28] <stickupkid> hml, holy batman, so many tests testing the test code
[16:29] <stickupkid> goose needs to learn about mocking
[16:29] <rick_h> hah, I think it was a bit pre-mocking party time
[16:29] <hml> hahahahaha
[16:30] <rick_h> careful, you're going to hurt hml with all the laughing
[16:30] <stickupkid> actual code change +184, tests that test the test +1000000000
[16:32] <achilleasa> stickupkid: hmmm... but what verifies that the test runner runs the tests properly instead of cheating? ;-)
[16:33] <hml> achilleasa:  -live
[16:34] <stickupkid> achilleasa, turtles all the way down
[16:55] <achilleasa> I am reading the docs for AddUnitStorage (https://github.com/juju/juju/blob/develop/apiserver/facades/agent/uniter/storage.go#L351-L353). The implementation looks like it does a best-effort attempt to provision storage and does not stop after an error. However, the client-side logic seems to treat any failure as an error and will bubble it up to the hook context flush
[16:56] <achilleasa> Should the batch call (processes storage reqs for a single unit) bail out with an error as early as possible given that no other change will be applied?
[16:56] <achilleasa> rick_h: ^
[17:06] <hml> stickupkid: still around?
[17:10] <achilleasa> rick_h: looks like the facade version check was there because the same API call is used by the juju cli: https://github.com/juju/juju/blob/develop/cmd/juju/storage/add.go#L128
[17:18] <stickupkid> hml, yarp
[17:18] <stickupkid> speaking to builder
[17:20] <hml> stickupkid: ho?
[17:43] <rick_h> achilleasa:  :/ bah that explains it then
[17:43] <rick_h> achilleasa:  not sure on the storage question. Maybe shoot an email to the list and I can see what wallyworld thinks there and what his experience is
[17:44] <achilleasa> rick_h: My theory is that this is for the cli (add storage to multiple things)
[17:44] <achilleasa> for flushing it should be atomic anyway
[17:45] <achilleasa> (we will keep retrying)
[20:14] <thumper> morning team
[20:15] <hml> hi thumper
[20:23] <rick_h> howdy thumper
[20:59] <nammn_de> rick_h: I finished the remove-space cmd and the edge cases. If you find some time can you give some feedback on ux and output? https://github.com/juju/juju/pull/11183
[20:59] <rick_h> nammn_de:  trying, spending this afternoon trying to get my VPC setup correct.
[20:59] <rick_h> Error details: subnet "subnet-bb18f687" not associated with VPC "vpc-c6391da1" main route table
[20:59] <rick_h> :(
[21:00] <nammn_de> rick_h: no worries,  if that doesn't work I printed the console output to make it look nice :D.
[21:00] <rick_h> nammn_de:  yea, have your PR up and trying to bootstrap with it but :( on my vpc setup
[21:30] <evhan> I've just switched my client from stable to edge, and I'm trying to run upgrade-controller to do the same for the agent, but there doesn't seem to be a corresponding version available(?). Is there a way to see available versions for the agent, and to synchronise the two?
[21:32] <evhan> i.e. `juju upgrade-controller --dry-run --agent-stream=edge` says "no upgrades available".
[21:33] <evhan> Actually, I might be confusing snap channels with juju... Streams?
[21:33] <evhan> Yeah, never mind. RTFM, evhan.
[21:35] <anastasiamac> evhan: this might be worth a discourse post! glad u figured it out so quickly :D
[21:36] <evhan> Although, setting agent-stream=devel says the same.
[21:39] <evhan> Ah, I need to specify both an agent-stream *and* an agent-version, since both have a value set in the model-config.
[21:40] <rick_h> evhan:  yea exactly
[21:40] <rick_h> the snap/edge is only for the local client and the controllers get their data from streams vs the snap world
[21:42] <evhan> Yeah, makes sense.
[21:42] <evhan> I'm still flailing here a bit though: https://paste.ubuntu.com/p/xZ3BX3DqTX/
[21:44] <rick_h> evhan:  yea that last one is right. It's offering to upload the jujud binary from your client snap to the controller
[21:45] <rick_h> evhan:  are you looking to try the edge (2.8?) or 2.7.2 (the current candidate stable?)
[21:45] <evhan> rick_h: edge, having just updated my client version via snap to the same.
[21:47] <rick_h> evhan:  ok, sec testing
[21:48] <rick_h> evhan:  though tbh if you care about this controller it's going to get funny in the future as the updates come along
[21:48] <hml> wallyworld:  are you around to talk actions and k8s charms?
[21:48] <wallyworld> i am
[21:48] <hml> wallyworld:  ho?
[21:49] <evhan> OK, so when I changed the arguments to `juju upgrade-controller --agent-stream=released --agent-version=2.7.2` just to test upgrading full stop (note without --dry-run) it did what I wanted: https://paste.ubuntu.com/p/Scc7j5KfR8/
[21:49] <wallyworld> join the team one, still talking to kelvin
[21:49] <evhan> But not what I would have expected given the CLI flags. So I got the result I wanted but I'm still a little confused.
[21:50] <evhan> rick_h: Yeah, not a controller I care about, just testing.
[21:52] <rick_h> :/ yea that pastebin seems odd
[21:55] <evhan> Worth filing an issue? I can try to reproduce it, or just provide what I have there. I don't really have a guess about what's going on, perhaps the --dry-run flag is interfering?
[21:59] <rick_h> evhan:  I guess. Yes, the dry-run thinks it'll just use the local client as 2.7.2.1 but what it actually did for you is move you to 2.8 even though what you asked for was 2.7.2. :/
[21:59] <rick_h> in the end it's a little bit like "don't do that, why would anyone go from a stable juju controller to a local pre-beta thingy" but here we are :)
[22:02] <evhan> Yeah, fair. The 2.8 version was uploaded successfully when I look on the controller, although nothing was restarted so it's still running the original 2.7.1 version.
[23:44] <anastasiamac> babbageclunk: m trying to run juju-restore and can't...
[23:44] <babbageclunk> anastasiamac: what's happening?
[23:45] <anastasiamac> babbageclunk: m on my primary and running the binary but providing just pwd is not enough
[23:45] <anastasiamac> babbageclunk: m getting auth failed error
[23:47] <anastasiamac> babbageclunk: it's me ofc.. i was using statepwd instead of the actual oldpwd... :)
[23:47] <babbageclunk> anastasiamac: which password are you passing
[23:47] <anastasiamac> all good
[23:47] <babbageclunk> yeah, that one got me as well!
[23:48] <anastasiamac> \o/
[23:48] <babbageclunk> I'll add that to the readme - it's not at all clear which password would be the right one from looking at them
[23:49] <anastasiamac> nice