[00:13] <anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/11131 as discussed
[00:13] <babbageclunk> T'ing an L
[00:14] <anastasiamac> :)
[00:33] <stub> skay: If you remove all the units, but don't remove the application, then the Juju leadership state hangs around. When you add new units to the application, they have no way of telling if they need to wait for what the leadership settings tell them is the master, or if the data is stale. Well, they can now thanks to the goal state feature, but that didn't exist when that code was written.
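[editor's note] stub's mention of the goal state feature refers to the `goal-state` hook tool (available since Juju 2.4), which lets a unit see which peer units are expected rather than trusting possibly-stale leadership settings. A minimal sketch, with sample output inlined so it runs outside a hook context (the JSON shape and unit names here are illustrative):

```shell
# Inside a real charm hook you would run something like:
#   goal_state_json=$(goal-state --format=json)
# Here we inline a plausible sample so the snippet is self-contained.
goal_state_json='{"units":{"postgresql/0":{"status":"active"},"postgresql/1":{"status":"waiting"}}}'

# Count how many units the model *expects* for this application by
# counting the per-unit "status" entries in the goal state.
expected_units=$(printf '%s' "$goal_state_json" | grep -o '"status"' | wc -l | tr -d ' ')
echo "expected units: $expected_units"
```

A new unit can compare this expected set against the stale leadership settings to decide whether the recorded master is still meaningful.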
[00:35] <stub> It might be a mojo (or juju-deployer) bug, where it assumes removing all the units is equivalent to removing the application. It has come up a few times with mojo.
[00:45] <anastasiamac> stub: ah! thanks for explanation - that makes sense :D
[02:42] <babbageclunk> anastasiamac: approved
[04:23] <anastasiamac> babbageclunk: \o/
[10:21] <stickupkid> anyone know if these files are still generated?
[10:21] <stickupkid> https://github.com/juju/juju/blob/2.7/service/windows/zservice_windows.go
[10:21] <stickupkid> and https://github.com/juju/juju/blob/2.7/service/windows/zpassword_windows.go
[10:22] <stickupkid> both are flagging up with import errors in the linter, but I'm trying to work out how they're generated
[10:49] <stickupkid> turns out the new goimports in the latest Go release just finds that they're not correct
[11:07] <achilleasa> stickupkid: were the linter errors in your PR caused by the lint_go.sh bits that your last commit tweaks?
[11:08] <stickupkid> only in tests
[11:08] <stickupkid> very strange
[12:05] <nammn_de> stickupkid achilleasa: just for my understanding: the settings collection is responsible for saving the config settings of a model? E.g. changes to the mysql config are reflected there?
[13:23] <skay> stub: I've removed all of the applications and re-run my spec, but it still can't tell it's the leader. Is there a bug about that? Meanwhile, I guess someone could delete and recreate the model for me?
[13:58] <skay> when I run remove-application on postgresql it's not removing anything. I have to call it with --force and --no-wait
[13:58] <skay> oh wait, it just takes freaking forever.
[14:03] <rick_h> skay:  yes, it could take forever depending on the underlying cloud, state of things, etc
[14:03] <rick_h> skay:  --force will tell it to try to do things right but, if it has to, start cheating in potentially bad ways
[14:05] <skay> rick_h: might that explain why I got the problem with the messed up leader state?
[14:06] <skay> rick_h: in the scrollback I asked about my situation. I've been testing a mojo spec and when I want to rerun it I remove all the applications. I thought postgresql was stuck somehow because of how long it was taking
[14:06] <skay> rick_h: so I've been using --force --no-wait on it and other things
[14:38] <nammn_de> manadart: was double-checking with achilleasa. It seems that the method `AllSpaces()` uses `AllAddresses`, which uses the addresses collection, which on AWS does not contain the public IPs. Therefore I would change that to use `machine.Addresses()`
[14:39] <nammn_de> does that make sense?
[14:47] <manadart> nammn_de: The public IPs are shadow addresses. Those are not reasoned about in terms of spaces, so it shouldn't matter for this purpose.
[14:48] <nammn_de> manadart: quick ho?
[14:49] <manadart> nammn_de: I'm there.
[14:51] <skay> ok, the answer to my problem was that there were still instances running even though juju status did not show them
[14:51] <skay> I deleted the instances and the postgresql unit became leader
[14:51] <achilleasa> nammn_de: that is the ipaddresses collection
[15:17] <nammn_de> rick_h: I updated the description. Talked to manadart about why a machine-count=0 can happen. That's because I happen to bind after a deploy.
[15:17] <nammn_de> Additionally, it seems that we do not care about the public part of the addresses, so we can just skip them for space information. I updated the QA section in the PR and added a comment. It seems no code change is needed for this.
[15:39] <achilleasa> hml: what do you think about adding a dirty check to the in-memory state map in the uniter's context? We can set it each time we write/delete something and clear it when flushing. We could avoid needless round trips to the controller if the state has not been mutated
[20:05] <skay> I have a puzzling situation with the autocert charm. I'm using mojo to deploy it, and the charm is not getting configured with values in the services file
[20:05] <skay> mojo uses juju-deployer
[20:06] <skay> I can assign the values by hand using juju config