[02:04] <anastasiamac> timClicks: PTAL https://github.com/juju/juju/pull/10790
[02:09] <hpidcock> wallyworld: https://github.com/juju/juju/pull/10785 I added a few comments
[02:11] <anastasiamac> timClicks: thnx!
[02:11] <timClicks> anastasiamac: np :)
[02:14] <wallyworld> hpidcock: will look after call
[02:55] <wallyworld> hpidcock: with the Message_ thing, we use that elsewhere when we need to have stuff available for serialisation but it's really internal (like in the juju/description package). maybe worth avoiding in juju/juju though
[02:55] <hpidcock> sounds good
[02:55] <hpidcock> I don't mind either way
[03:56] <anastasiamac> wallyworld: r u aware of these caas operator failures? I've seen them a couple of times in the last couple of days on develop - https://jenkins.juju.canonical.com/job/github-make-check-juju/1928/testReport/junit/github/com_juju_juju_apiserver_facades_agent_caasoperator/TestAll/
[03:58] <wallyworld> i blame hpidcock :-)
[03:58] <anastasiamac> :) ok... lemme re-phrase hpidcock r u aware of these failures? :D
[03:58] <wallyworld> he is now :-)
[03:59] <anastasiamac> x2
[03:59] <hpidcock> yep I'll have a look
[04:00] <anastasiamac> \o/
[04:08] <hpidcock> https://github.com/juju/juju/pull/10792
[04:14] <thumper> patch for windows agent failures: https://github.com/juju/juju/pull/10793
[04:17] <hpidcock> thumper: LGTM
[05:02] <babbageclunk> jam: here's the test charm
[05:02] <babbageclunk> https://github.com/juju/juju/pull/10794
[05:02] <babbageclunk> almost forgot to paste the link
[05:03] <jam> babbageclunk: thx
[05:14] <jam> babbageclunk: reviewed
[05:16] <wallyworld> manadart: so the mongo dial info is being served with 2 addresses: "localhost" and "X.X.X.X". "localhost" is correct, so it seems we (or mgo) fork off separate dial attempts and don't shut down the one that doesn't work
[05:17] <wallyworld> need to look deeper into it but that's what i have so far
[05:21] <manadart> wallyworld: Ack.
[05:24] <jam> wallyworld: mongo driver doesn't let us change the original list of possible addresses, and keeps pinging on all addresses we passed in
[05:25] <wallyworld> jam: well that sucks, we will get log spam forever then if an address changes
[05:26] <jam> wallyworld: yep. known bug. I came across it when doing HA stuff (juju remove-machine controller) will cause endless debug spam.
[05:26] <jam> wallyworld: I thought we spammed DEBUG not INFO+
[05:26] <jam> wallyworld: I don't know the bug #
[05:27] <wallyworld> jam: machine-0: 15:25:01 WARNING juju.mongo mongodb connection failed, will retry: dial tcp 10.152.183.140:37017: i/o timeout
[05:27] <wallyworld> it's warning :-(
[05:28] <wallyworld> and it happens in k8s every time since we serve up all api addresses
[05:28] <wallyworld> and we only need localhost for mongo
[05:28] <wallyworld> we have forked mgo right
[05:28] <wallyworld> so we should fix
[05:28] <jam> wallyworld: bug #1761237
[05:29] <mup> Bug #1761237: remove-machine seems to leave a mongo address that is gone <enable-ha> <mongodb> <juju:Triaged> <https://launchpad.net/bugs/1761237>
[05:29] <jam> wallyworld: so if you want to take the time to figure out how to update the list of addresses the driver has, we can do that, but it isn't trivial
[05:29] <jam> wallyworld: the warning is in *our* code
[05:29] <jam> as it would be a problem if you actually couldn't connect to a mongo address that should be valid
[05:29] <jam> wallyworld: the "keep trying addresses that might be valid" is mgo code
[05:30] <wallyworld> jam: even debug is wrong surely. we need to be able to say "this address is no longer applicable"
[05:30] <jam> comment #3 is about not being able to update the "userSeeds" list
[05:31] <jam> wallyworld: if mongo is local, how does the IP address change without the agent being restarted?
[05:32] <wallyworld> in k8s the controller agent and mgo run together in the same pod. so localhost. but we can serve up the controller service ip address which will not work
[05:32] <wallyworld> we serve up all api addresses which we assume are relevant for connecting to mongo
[05:33] <wallyworld> but that doesn't hold with k8s
[05:33] <wallyworld> since the controller and mgo containers are co located
[05:33] <wallyworld> so we haven't mapped the service port
[05:35] <wallyworld> i guess for now we can make it debug
[05:48] <jam> wallyworld: so I was hoping to use a separate list that tracked "known valid addresses" and would log debug if it thought the address shouldn't be used
[05:48] <jam> I think updating mgo to allow you to poke at userSeeds would be ok, just non-trivial
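The approach jam sketches above — tracking a separate "known valid addresses" list and demoting the noise to debug — might look something like this minimal Go sketch. All names here are hypothetical for illustration, not the actual juju or mgo API:

```go
package main

import "fmt"

// filterValidAddrs keeps only the dial addresses present in the known-valid
// set. Addresses outside the set (e.g. a k8s service IP when mongo is only
// reachable on localhost) are logged at debug level instead of warning.
func filterValidAddrs(addrs []string, valid map[string]bool) []string {
	var kept []string
	for _, a := range addrs {
		if valid[a] {
			kept = append(kept, a)
			continue
		}
		// In juju this would be logger.Debugf rather than WARNING-level spam.
		fmt.Printf("DEBUG skipping mongo address %q: not in known-valid set\n", a)
	}
	return kept
}

func main() {
	valid := map[string]bool{"localhost:37017": true}
	addrs := []string{"localhost:37017", "10.152.183.140:37017"}
	fmt.Println(filterValidAddrs(addrs, valid))
}
```

The hard part jam mentions remains updating mgo's internal userSeeds list; the filter above only controls what we hand the driver in the first place.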
[05:57] <kelvinliu> wallyworld: got this PR to introduce RBAC for caas credentials , could you take a look? thanks! https://github.com/juju/juju/pull/10776
[08:49] <wallyworld> jam: i just added debug to get past the current noise issue but +1 on adding additional changes
[08:50] <nammn_de> jam: coming back to our discussion yesterday, here would be the pr for the charm logging https://github.com/juju/charm/pull/296/files
[08:50] <nammn_de> stickupkid for interested ^
[08:59] <stickupkid> nammn_de, nice, let me look
[08:59] <achilleasa> jam: have you seen my response to your comment on the juju bind PR? Any objection to landing it as-is?
[08:59] <nammn_de> stickupkid: still need to fix the unit tests, but logic-wise it should be fine
[09:00] <jam> achilleasa: no issues, I see your point
[09:00] <jam> achilleasa: note I didn't actively review, just raised the question
[09:00] <achilleasa> jam: nw, Heather has already reviewed it
[09:19] <manadart> achilleasa: Were you going to review https://github.com/juju/juju/pull/10779 ?
[09:20] <nammn_de> stickupkid: just saw your comment - we're trying to get rid of gh actions again? Any plans for what replacement we will use?
[09:21] <stickupkid> nammn_de, they don't really fulfil a need once we get jenkins running the same tests
[09:21] <stickupkid> nammn_de, they're burning cpu cycles for fun but not profit
[09:22] <nammn_de> stickupkid: hmm true. I thought that they were kind of used as a gatekeeper to jenkins for the small tests. If they pass we would run the whole suite on jenkins
[09:22] <nammn_de> stickupkid: now unit tests have been fixed/added for the pr mentioned before
[09:22] <nammn_de> wdyt?
[09:22] <nammn_de> but yes, right now they are just happily destroying our good nature :D
[09:22] <stickupkid> nammn_de, the problem with github is they don't merge in the same way as jenkins, so we're actually comparing apples and oranges
[09:23] <stickupkid> nammn_de, this doesn't affect juju/os though, that package needs those tests as setting up a windows bot elsewhere is just plain painful
[09:24] <nammn_de> stickupkid: ha, now that makes me happy. Making it run there was a pain :D
[09:26] <nammn_de> stickupkid: thanks simon for the review
[09:30] <achilleasa> manadart: looking now
[10:05] <wallyworld> manadart: want a call?
[10:05] <wallyworld> you need to use --agent-version 2.7-rc1
[10:06] <wallyworld> plus make push-operator-image
[10:06] <wallyworld> with your docker hub user name set
[10:06] <wallyworld> cause it queries available binaries using the configured repo
[10:06] <wallyworld> similar to looking up simple streams
[10:07] <wallyworld> you probs don't need --agent-version if you push-operator-image
[10:07] <wallyworld> export DOCKER_USERNAME=fred
[10:07] <wallyworld> make push-operator-image
[10:08] <wallyworld> juju bootstrap microk8s --config caas-image-repo=fred
[10:08] <manadart> wallyworld: In team-standup.
[10:21] <wallyworld> manadart: to confirm, you're running "make microk8s-operator-update" right? exporting the docker username and setting caas-image-repo should be enough to tell microk8s to use your custom jujud. but i do suspect you will need push-operator-image also so that the upgrade command can query what's available (not 100% on the last bit but i think i'm right). you only need to push once to seed the list of available tagged images
[10:22] <wallyworld> you can check upgrade with --dry-run
[10:26] <nammn_de> stickupkid: https://github.com/juju/juju/pull/10795
[10:27] <nammn_de> stickupkid: regarding changing the charm version gopkg
[10:27] <stickupkid> nammn_de, k, will check in a bit
[12:35] <stickupkid> nammn_de, done
[13:48] <skay> I have a charm I haven't used in a while, and I'm getting errors when I try to attach a resource. https://paste.ubuntu.com/p/sTbg5Rh7fW/
[13:49] <skay> I get a "connection reset by peer" message when I run the attach command
[13:49] <skay> And on the machine-0.log I get "no kvm containers possible". the pastebin has the full details
[13:57] <jam> btw nammn_de, I am also seeing quite a few "charm is not versioned" warnings in 'juju debug-log' after having done a 'juju deploy' from our acceptancetests repo. Shouldn't it have grabbed the git version from juju/juju ? The charm *is* in a subdir of a versioned dir
[14:07] <N3tw0rK> When deploying a new openstack cluster for dev, I try to deploy a container (LXD) to a node and when it applies the network bridge all my interfaces stop passing traffic (still up). Everything works fine on the host up until the point the container tries to start.
[14:26] <jam> manadart: I had a controller that seemed to be bootstrapped to 2.7-beta1, and just tried to upgrade to my devel branch which claims 2.7-rc1. However, I'm seeing it stuck in upgrade
[14:26] <jam> with no obvious errors in the log other than "lease operation timed out"
[14:27] <jam> not HA
[14:27] <jam> it's plausible the 2.7-beta1 wasn't the official release, and I know my 2.7-rc1 isn't 'devel', but I'm surprised to see it think it is stuck in "upgrading since...." in 'juju status'
[14:45] <nammn_de> jam: let me follow up on that one. With the change added it should not take the version from the place where you ran the command, but from the dir where the charm itself is located
[14:46] <nammn_de> because there was a bug where deploying from the current working dir, which is under vcs but has nothing to do with the charm,
[14:47] <nammn_de> could lead to a charm version which has nothing to do with the charm. The code now takes the version string from the charm path, instead of the cwd
[14:47] <nammn_de> jam: https://github.com/juju/charm/blob/v6/charmdir.go#L490-L491
[14:51] <jam> nammn_de: if you're under a versioned dir, you *are* versioned. eg, acceptancetests/repository/charms/dummy-sink is very much at a known version if you want to go back the original source
[14:52] <nammn_de> jam: ah sorry did not know that the charms are under that repository. I am not sure how far git is willing to follow though. let me try that
[14:53] <nammn_de> jam: You are right, this should work. Let me quickly test whether it works or not. I did think it worked
[14:55] <achilleasa> hml: got a min?
[14:56] <hml> achilleasa: sure, what’s up
[14:56] <achilleasa> quick ho?
[14:56] <hml> achilleasa: omw
[14:57] <manadart> jam, I will see if I can replicate.
[14:57] <nammn_de> jam: ah i see the, yep that warrants change how this is currently done in the code
[14:57] <nammn_de> *see the error
[14:57] <jam> manadart: don't stress too much, it is plausible I was running a custom version of beta1, but it is a bit of a "are upgrades currently broken"? we should intend to be able to upgrade from beta1 to rc1
[14:58] <nammn_de> dammnit this thing had so many loopholes
[14:58] <nammn_de> *has
[15:02] <nammn_de> jam: right now the code only checks whether a .vcs folder exists and if yes, then executes the corresponding vcs code. My change would do something along the lines of: just try each vcs. If one returns with $success, take that one; if they all fail, return that warning. Wdyt?
[15:03] <jam> nammn_de: Given things like "we do it frequently" I'd be a little concerned. Especially if we are seeing it on controllers, where the charm they have is fixed, it won't become versioned if it wasn't previously
[15:04] <nammn_de> jam: what would your suggestion be? We can HO for few min if you can
[15:05] <nammn_de> *btw i changed the warn to an info in a previous pr
[15:23] <nammn_de> jam stickupkid: If we want to make sure to take parent vcs dirs, I thought of something along this line https://github.com/juju/charm/pull/297 It is more of a draft to talk about
[15:26] <stickupkid> nammn_de, i like it already tbh, it cleans up the code, but I'd just inline the strategy part
[15:26] <stickupkid> nammn_de, i.e.  vcsStrategies["hg"] = []string{"hg", "id", "-n"}
[15:27] <stickupkid> nammn_de, or even better, have the strategy be a struct i.e. vcsStrategies["hg"] = vcsCmd{cmd: "hg", args: []string{}}
[15:28] <nammn_de> stickupkid: being a struct is nicer thats very much true!
[15:28] <nammn_de> stickupkid: was just hacking it together to have a discussion point
[15:28] <nammn_de> btw. added a comment to the pr description with possible downside
[15:29] <stickupkid> nammn_de, you can add a func to each struct to handle the errors
[15:30] <stickupkid> nammn_de, vcsCmd{cmd: "hg", args: []string{}, errorHandler: func(err error) error}
[15:30] <stickupkid> nammn_de, that way you can handle the error exactly how you want
[15:31] <nammn_de> stickupkid: my unsure point was more about: right now I just try each vcs. If they fail i can either log or not. The best solution would be only to log, if I know that the underlying vcs is actually git. So only log if git fails. But this would need even more checks
[15:32] <nammn_de> makes sense?
[15:32] <stickupkid> nammn_de, that's what the errorHandler does, i.e. if you only have it for git, then it's fine
[15:33] <stickupkid> nammn_de, i.e. if cmds.errorHandler != nil { err = cmd.errorHandler(err) }
[15:33] <stickupkid> done
[15:35] <nammn_de> stickupkid: not sure if I follow or if we're on the same page. What would you put in the errorHandler? Reasonably I would only implement it for git for now. But the errhandler has to contain something along the lines of: if you fail, check if you are even running under git
[15:35] <nammn_de> and if yes, then log
[15:35] <nammn_de> that would be my approach on the errfunc
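The strategy-struct idea stickupkid suggests above could be sketched in Go roughly like this. Field and function names are illustrative only, not the actual juju/charm API:

```go
package main

import (
	"fmt"
	"os/exec"
)

// vcsCmd is one version-lookup strategy: a command to run in the charm dir,
// plus an optional errorHandler (e.g. only git's would decide to log).
type vcsCmd struct {
	cmd          string
	args         []string
	errorHandler func(error) error // nil means propagate the error silently
}

// versionFrom tries each strategy in order and returns the first successful
// output; if they all fail, the caller can decide whether to warn.
func versionFrom(dir string, strategies []vcsCmd) (string, error) {
	var lastErr error
	for _, s := range strategies {
		c := exec.Command(s.cmd, s.args...)
		c.Dir = dir
		out, err := c.Output()
		if err == nil {
			return string(out), nil
		}
		if s.errorHandler != nil {
			err = s.errorHandler(err)
		}
		lastErr = err
	}
	return "", lastErr
}

func main() {
	strategies := []vcsCmd{
		{cmd: "git", args: []string{"describe", "--dirty", "--always"}},
		{cmd: "hg", args: []string{"id", "-n"}},
	}
	if v, err := versionFrom(".", strategies); err == nil {
		fmt.Printf("charm version: %s", v)
	}
}
```

Running the command with `Dir` set to the charm path (rather than the cwd) is what lets a vcs in a parent directory, like acceptancetests/repository/charms/dummy-sink, still be picked up.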
[15:58] <bdx> good morning
[15:59] <bdx> I wanted to have a quick chat about peer relations and unit data
[15:59] <bdx> peer._data['private-address']
[16:01] <bdx> from what I can gather, the 'private-address' in the peer data above is derived from the 'private-address' attribute in the unit_data
[16:01] <bdx> for each peer
[16:02] <bdx> should we be populating the unit_data with information from network_get()?
[16:04] <bdx> in my use case, all peers need to know each other's ip address
[16:05] <bdx> previously, I had always ascertained peer information through the unit_data exposed for each unit of the peer relation
[16:05] <bdx> I get the feeling we are moving away from the unit_data model
[16:05] <bdx> or possibly just moving to replace the unit_data with information from other sources
[16:07] <bdx> wallyworld recently added some functionality that adds ip address data to the return of network-get for k8s application charm units
[16:08] <bdx> I'm trying to see my way through to being able to consume that newly added ip address information from the view of a peer
[16:09] <bdx> one thing I'm thinking about doing is extending the Endpoint class via a peer relation to allow for each peer to set its ip address data (the return of network_get())
[16:11] <bdx> in this way I would be able to expose each peer's ip (returned via network_get()) to each other peer
[16:11] <bdx> does this sound like a legitimate way to go about this?
[16:51] <stickupkid> bdx, probably best to speak to manadart or jam (rick_h is away). maybe open a discourse post so the right people see it
[16:51] <bdx> stickupkid: will do, thanks!
[17:05] <skay> I'm seeing weird errors when I deploy an app, and when I try to attach a resource. https://paste.ubuntu.com/p/xQQ4nwcqR3/
[17:06] <skay> The controller is localhost based on lxd.
[17:16] <skay> to take my charm out of the equation I deployed juju-hello https://paste.ubuntu.com/p/7TQ3StsXFz/
[17:19] <stickupkid> skay, what version of juju?
[17:19] <skay> 2.6.9-bionic-amd64
[17:21] <stickupkid> skay, so what are you bootstrapping to? i.e. juju bootstrap lxd (or localhost)?
[17:22] <skay> localhost
[17:22] <stickupkid> skay, if you're bootstrapping to lxd, those errors are normal, there is ticket to try and reduce the verbose nature of that error message, but I've yet to get around to it
[17:22] <skay> stickupkid: ok. is there a hello-world type of charm that I can attach a resource to to see if I get the other error?
[17:23] <skay> those errors might be misleading me
[17:23] <skay> hold on, I thought I included it in a pastebin. https://paste.ubuntu.com/p/7trQtnn5zP/
[17:24] <skay> that's when I tried to run the attach command with a charm I'm working on. Last time I worked on it was pre-bionic and I haven't done anything with it in a while
[17:24] <stickupkid> skay, ah ok, that's a different error, yeah does indicate that api server isn't up and caused a connection reset
[17:25] <stickupkid> skay, i would honestly open up a bug, so we can discuss it
[17:25] <stickupkid> skay, https://bugs.launchpad.net/juju/+bugs
[17:25] <skay> will do! thank you
[17:26] <stickupkid> skay, it would be great if you can add as much info as possible about what you're bootstrapping to and what charm you're using
[17:26] <stickupkid> skay, that would be fantastic
[17:26] <skay> stickupkid: It's a private charm, but I'll see what I can do.
[17:27] <skay> maybe I can make a charm to reproduce it
[17:27] <stickupkid> skay, sure, if we can boil it down to a simple test case then that would help
[17:27] <skay> unless there is another I can use
[17:27] <skay> one that already exists
[17:29] <stickupkid> skay, we have test charms that might be of some use https://github.com/juju/juju/tree/develop/testcharms/charm-repo/bionic/dummy-resource
[17:30] <stickupkid> skay, or https://jaas.ai/u/juju-qa/upgrade-charm-resource-test/bionic/1
[17:37] <skay> stickupkid: the upgrade-charm-resource-test worked when I attached a resource to it. that is a helpful clue.
[19:29] <skay> stickupkid: I think it has to do with the filesize I was trying to attach. I got a connection refused eventually while trying to attach it to the test charm
[21:24] <skay> how big can those files be? I attached a tarball that includes all of my app's dependencies
[21:47] <webstrand> I'm trying to update an ebuild for juju 2.6 on gentoo
[21:48] <webstrand> The one I'm modifying (2.1) has a list of dependencies in the format "github.com/lestrrat-go/jspointer:f4881e6"
[21:48] <webstrand> any idea how I can get a list of dependencies like that for the new version?
[21:55] <wallyworld> webstrand: juju 2.6 uses golang's go dep tool. there's a Gopkg.toml file with all the deps
[21:56] <webstrand> wallyworld: thanks!
[21:58] <wallyworld> no problem. we will be moving to modules at some point soon hopefully
[22:00] <webstrand> Do you know if `dep status` is just a nicely formatted version of Gopkg.toml? Or does it include other stuff
[22:03] <thumper> babbageclunk: https://github.com/juju/juju/pull/10798
[22:03] <thumper> webstrand: there is a 'make dep' command
[22:04] <thumper> yes Juju has a Makefile :)
[22:05] <webstrand> thumper: According to the doc, that only checks to make sure the Gopkg.toml and lock are in sync?
[22:06] <thumper> webstrand: it does pull them into the vendor dir
[22:06] <thumper> and that they are in sync
[22:06] <thumper> I use it daily, so I'm pretty sure it works :)
[22:07] <webstrand> ah, I need the dependency source and revision for the ebuild. Something to do with reproducible builds
[22:10] <thumper> webstrand: the Gopkg.toml file does list the hashes for all the dependencies
[22:10] <thumper> so it is entirely reproducible
[22:10] <thumper> what are you missing?
[22:12] <webstrand> I'm just trying to figure out how to parse Gopkg.toml and extract what I need, that's all.
[22:12] <thumper> ah
[22:12] <thumper> ok
[22:14] <babbageclunk> webstrand: the .lock file is probably what you should use for fixing a reproducible build
[22:15] <webstrand> oh, that makes sense
[22:16] <babbageclunk> it has all the transitive dependencies which the .toml file might not have (although I think we do try to do that)
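Pulling name:revision pairs out of Gopkg.lock for an ebuild list could be done with a line-oriented Go sketch like the one below. This is not a real TOML parser, and the 7-character revision truncation is an assumption based on the "github.com/lestrrat-go/jspointer:f4881e6" format quoted earlier:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// lockDeps extracts "name:shortrev" pairs from Gopkg.lock content. It assumes
// each [[projects]] stanza has a name = "..." line followed by revision = "...".
func lockDeps(lock string) []string {
	var deps []string
	var name string
	sc := bufio.NewScanner(strings.NewReader(lock))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := tomlString(line, "name"); ok {
			name = v
		} else if v, ok := tomlString(line, "revision"); ok && name != "" {
			if len(v) > 7 {
				v = v[:7] // shorten to the 7-char form used in the ebuild list
			}
			deps = append(deps, name+":"+v)
			name = ""
		}
	}
	return deps
}

// tomlString returns the quoted value of a simple `key = "value"` line.
func tomlString(line, key string) (string, bool) {
	if !strings.HasPrefix(line, key+" ") && !strings.HasPrefix(line, key+"=") {
		return "", false
	}
	_, rest, ok := strings.Cut(line, "=")
	if !ok {
		return "", false
	}
	return strings.Trim(strings.TrimSpace(rest), `"`), true
}

func main() {
	lock := "[[projects]]\n  name = \"github.com/lestrrat-go/jspointer\"\n  revision = \"f4881e611bdbe9fb413a7780721ef8400a1f2341\"\n"
	for _, d := range lockDeps(lock) {
		fmt.Println(d)
	}
}
```

As babbageclunk notes, the .lock file is the right input here because it pins the transitive dependencies that the .toml may omit.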
[22:20] <webstrand> Oh no... the lock file references the bad packages too: github.com/lestrrat/go-jsschema has now become github.com/lestrrat-go/jsschema but still depends on github.com/lestrrat/go-jsschema
[22:21] <webstrand> I'll deal with this tomorrow
[22:52] <pmatulis> the 19.10 OpenStack Charms release is now available...
[23:23] <babbageclunk> thumper: approved