[02:04] <anastasiamac> timClicks: PTAL https://github.com/juju/juju/pull/10790
[02:09] <hpidcock> wallyworld: https://github.com/juju/juju/pull/10785 I added a few comments
[02:11] <anastasiamac> timClicks: thnx!
[02:11] <timClicks> anastasiamac: np :)
[02:14] <wallyworld> hpidcock: will look after call
[02:55] <wallyworld> hpidcock: with the Message_ thing, we use that elsewhere when we need to have stuff available for serialisation but it's really internal (like in the juju/description package). maybe worth avoiding in juju/juju though
[02:55] <hpidcock> sounds good
[02:55] <hpidcock> I don't mind either way
[03:56] <anastasiamac> wallyworld: r u aware of these caas operator failures? I've seen it a couple of times in the last couple of days on develop - https://jenkins.juju.canonical.com/job/github-make-check-juju/1928/testReport/junit/github/com_juju_juju_apiserver_facades_agent_caasoperator/TestAll/
[03:58] <wallyworld> i blame hpidcock :-)
[03:58] <anastasiamac> :) k... lemme re-phrase: hpidcock, r u aware of these failures? :D
[03:58] <wallyworld> he is now :-)
[03:59] <hpidcock> yep I'll have a look
[04:14] <thumper> patch for windows agent failures: https://github.com/juju/juju/pull/10793
[04:17] <hpidcock> thumper: LGTM
[05:02] <babbageclunk> jam: here's the test charm
[05:02] <babbageclunk> almost forgot to paste the link
[05:03] <jam> babbageclunk: thx
[05:14] <jam> babbageclunk: reviewed
[05:16] <wallyworld> manadart: so the mongo dial info is being served with 2 addresses: "localhost" and "X.X.X.X". "localhost" is correct, so it seems we (or mgo) fork off separate dial attempts and don't shut down the one that doesn't work
[05:17] <wallyworld> need to look deeper into it but that's what i have so far
[05:21] <manadart> wallyworld: Ack.
[05:24] <jam> wallyworld: the mongo driver doesn't let us change the original list of possible addresses, and keeps pinging all the addresses we passed in
[05:25] <wallyworld> jam: well that sucks, we will get log spam forever then if an address changes
[05:26] <jam> wallyworld: yep. known bug. I came across it when doing HA stuff (juju remove-machine of a controller will cause endless debug spam)
[05:26] <jam> wallyworld: I thought we spammed DEBUG not INFO+
[05:26] <jam> wallyworld: I don't know the bug #
[05:27] <wallyworld> jam: machine-0: 15:25:01 WARNING juju.mongo mongodb connection failed, will retry: dial tcp i/o timeout
[05:27] <wallyworld> it's WARNING :-(
[05:28] <wallyworld> and it happens in k8s every time since we serve up all api addresses
[05:28] <wallyworld> and we only need localhost for mongo
[05:28] <wallyworld> we have forked mgo right
[05:28] <wallyworld> so we should fix it
[05:28] <jam> wallyworld: bug #1761237
[05:29] <mup> Bug #1761237: remove-machine seems to leave a mongo address that is gone <enable-ha> <mongodb> <juju:Triaged> <https://launchpad.net/bugs/1761237>
[05:29] <jam> wallyworld: so if you want to take the time to figure out how to update the list of addresses the driver has, we can do that, but it isn't trivial
[05:29] <jam> wallyworld: the warning is in *our* code
[05:29] <jam> as it would be a problem if you actually couldn't connect to a mongo address that should be valid
[05:29] <jam> wallyworld: the "keep trying addresses that might be valid" part is mgo code
[05:30] <wallyworld> jam: even debug is wrong surely. we need to be able to say "this address is no longer applicable"
[05:30] <jam> comment #3 is about not being able to update the "userSeeds" list
[05:31] <jam> wallyworld: if mongo is local, how does the IP address change without the agent being restarted?
[05:32] <wallyworld> in k8s the controller agent and mongo run together in the same pod, so localhost. but we can serve up the controller service ip address, which will not work
[05:32] <wallyworld> we serve up all api addresses which we assume are relevant for connecting to mongo
[05:33] <wallyworld> but that doesn't hold with k8s
[05:33] <wallyworld> since the controller and mongo containers are co-located
[05:33] <wallyworld> and so we haven't mapped the service port
[05:35] <wallyworld> i guess for now we can make it debug
[05:48] <jam> wallyworld: so I was hoping to use a separate list that tracked "known valid addresses" and would log debug if it thought the address shouldn't be used
[05:48] <jam> I think updating mgo to allow you to poke at userSeeds would be ok, just non-trivial
[05:57] <kelvinliu> wallyworld: got this PR to introduce RBAC for caas credentials, could you take a look? thanks! https://github.com/juju/juju/pull/10776
[08:49] <wallyworld> jam: i just added debug to get past the current noise issue but +1 on adding additional changes
[08:50] <nammn_de> jam: coming back to our discussion yesterday, here would be the pr for the charm logging https://github.com/juju/charm/pull/296/files
[08:50] <nammn_de> stickupkid: ^ in case you're interested
[08:59] <stickupkid> nammn_de, nice, let me look
[08:59] <achilleasa> jam: have you seen my response to your comment on the juju bind PR? Any objection to landing it as-is?
[08:59] <nammn_de> stickupkid: still need to fix the unit tests, but logic-wise it should be fine
[09:00] <jam> achilleasa: no issues, I see your point
[09:00] <jam> achilleasa: note I didn't actively review, just raised the question
[09:00] <achilleasa> jam: nw, Heather has already reviewed it
[09:19] <manadart> achilleasa: Were you going to review https://github.com/juju/juju/pull/10779 ?
[09:20] <nammn_de> stickupkid: just saw your comment. Are we trying to get rid of gh actions again? Any plans for what replacement we will use?
[09:21] <stickupkid> nammn_de, they don't really fulfil a need once we get jenkins running the same tests
[09:21] <stickupkid> nammn_de, they're burning cpu cycles for fun but not profit
[09:22] <nammn_de> stickupkid: hmm true. I thought they were kind of used as a gatekeeper to jenkins for the small tests. If they pass, we would run the whole suite on jenkins
[09:22] <nammn_de> stickupkid: now unit tests have been fixed/added for the pr mentioned before
[09:22] <nammn_de> but yes, right now they are just happily destroying our good nature :D
[09:22] <stickupkid> nammn_de, the problem with github is they don't merge in the same way as jenkins, so we're actually comparing apples and oranges
[09:23] <stickupkid> nammn_de, this doesn't affect juju/os though, that package needs those tests as setting up a windows bot elsewhere is just plain painful
[09:24] <nammn_de> stickupkid: ha, now that makes me happy. Making it run there was a pain :D
[09:26] <nammn_de> stickupkid: thanks simon for the rev
[09:30] <achilleasa> manadart: looking now
[10:05] <wallyworld> manadart: want a call?
[10:05] <wallyworld> you need to use --agent-version 2.7-rc1
[10:06] <wallyworld> plus make push-operator-image
[10:06] <wallyworld> with your docker hub user name set
[10:06] <wallyworld> cause it queries available binaries using the configured repo
[10:06] <wallyworld> similar to looking up simplestreams
[10:07] <wallyworld> you probs don't need --agent-version if you push-operator-image
[10:07] <wallyworld> export DOCKER_USERNAME=fred
[10:07] <wallyworld> make push-operator-image
[10:08] <wallyworld> juju bootstrap microk8s --config caas-image-repo=fred
[10:08] <manadart> wallyworld: In team-standup.
[10:21] <wallyworld> manadart: to confirm, you're running "make microk8s-operator-update" right? exporting the docker username and setting caas-image-repo should be enough to tell microk8s to use your custom jujud. but i do suspect you will need push-operator-image also so that the upgrade command can query what's available (not 100% on the last bit but i think i'm right). you only need to push once to seed the list of available tagged images
[10:22] <wallyworld> you can check upgrade with --dry-run
[10:26] <nammn_de> stickupkid: https://github.com/juju/juju/pull/10795
[10:27] <nammn_de> stickupkid: regarding changing the charm version gopkg
[10:27] <stickupkid> nammn_de, k, will check in a bit
[12:35] <stickupkid> nammn_de, done
=== skay is now known as Guest48297
=== Guest48297 is now known as skay
[13:48] <skay> I have a charm I haven't used in a while, and I'm getting errors when I try to attach a resource. https://paste.ubuntu.com/p/sTbg5Rh7fW/
[13:49] <skay> I get a "connection reset by peer" message when I run the attach command
[13:49] <skay> And in machine-0.log I get "no kvm containers possible". the pastebin has the full details
[13:57] <jam> btw nammn_de, I am also seeing quite a few "charm is not versioned" warnings in 'juju debug-log' after having done a 'juju deploy' from our acceptancetests repo. Shouldn't it have grabbed the git version from juju/juju? The charm *is* in a subdir of a versioned dir
[14:07] <N3tw0rK> When deploying a new openstack cluster for dev, I try to deploy a container (LXD) to a node and when it applies the network bridge all my interfaces stop passing traffic (still up). Everything works fine on the host up until the point the container tries to start.
[14:26] <jam> manadart: I had a controller that seemed to be bootstrapped to 2.7-beta1, and just tried to upgrade to my devel branch which claims 2.7-rc1. However, I'm seeing it stuck in upgrade
[14:26] <jam> with no obvious errors in the log other than "lease operation timed out"
[14:27] <jam> not HA
[14:27] <jam> it's plausible the 2.7-beta1 wasn't the official release, and I know my 2.7-rc1 isn't 'devel', but I'm surprised to see it think it is stuck in "upgrading since...." in 'juju status'
[14:45] <nammn_de> jam: let me follow up on that one. With the change added it should not take the version from the place where you have run the command (juju/juju), but the dir where the charm itself is located
[14:46] <nammn_de> because there was a bug where deploying from the current working dir, which is under vcs but has nothing to do with the charm,
[14:47] <nammn_de> can lead to a charm version which has nothing to do with the charm. The code now takes the version string from the charm path, instead of the cwd
[14:47] <nammn_de> jam: https://github.com/juju/charm/blob/v6/charmdir.go#L490-L491
[14:51] <jam> nammn_de: if you're under a versioned dir, you *are* versioned. eg, acceptancetests/repository/charms/dummy-sink is very much at a known version if you want to go back to the original source
[14:52] <nammn_de> jam: ah sorry, did not know that the charms are under that repository. I am not sure how far up git is willing to look though. let me try that
[14:53] <nammn_de> jam: You are right, this should work. Let me quickly test whether it works or not. I did think it works
[14:55] <achilleasa> hml: got a min?
[14:56] <hml> achilleasa: sure, what's up
[14:56] <achilleasa> quick ho?
[14:56] <hml> achilleasa: omw
[14:57] <manadart> jam, I will see if I can replicate.
[14:57] <nammn_de> jam: ah i see the error, yep that warrants changing how this is currently done in the code
[14:57] <jam> manadart: don't stress too much, it is plausible I was running a custom version of beta1, but it is a bit of an "are upgrades currently broken?" question. we should intend to be able to upgrade from beta1 to rc1
[14:58] <nammn_de> dammit this thing had so many loopholes
[15:02] <nammn_de> jam: right now the code only checks whether a .vcs folder exists and if yes, then executes the corresponding vcs code. My change would do something along the lines of: just try each vcs. If one returns with $success, take that one; if every one fails, return that warning. Wdyt?
[15:03] <jam> nammn_de: Given things like "we do it frequently" I'd be a little concerned. Especially if we are seeing it on controllers, where the charm they have is fixed, it won't become versioned if it wasn't previously
[15:04] <nammn_de> jam: what would your suggestion be? We can HO for a few min if you can
[15:05] <nammn_de> *btw i changed the warn to an info in a previous pr
[15:23] <nammn_de> jam stickupkid: If we want to make sure to take parent vcs dirs, I thought of something along these lines https://github.com/juju/charm/pull/297 It is more of a draft to talk about
[15:26] <stickupkid> nammn_de, i like it already tbh, it cleans up the code, but I'd just inline the strategy part
[15:26] <stickupkid> nammn_de, i.e. vcsStrategies["hg"] = []string{"hg", "id", "-n"}
[15:27] <stickupkid> nammn_de, or even better, have the strategy be a struct, i.e. vcsStrategies["hg"] = vcsCmd{cmd: "hg", args: []string{}}
[15:28] <nammn_de> stickupkid: being a struct is nicer, that's very much true!
[15:28] <nammn_de> stickupkid: was just hacking it together to have a discussion point
[15:28] <nammn_de> btw. added a comment to the pr description with a possible downside
[15:29] <stickupkid> nammn_de, you can add a func to each struct to handle the errors
[15:30] <stickupkid> nammn_de, vcsCmd{cmd: "hg", args: []string{}, errorHandler: func(err error) error}
[15:30] <stickupkid> nammn_de, that way you can handle the error exactly how you want
[15:31] <nammn_de> stickupkid: my unsure point was more about: right now I just try each vcs. If they fail i can either log or not. The best solution would be to only log if I know that the underlying vcs is actually git. So only log if git fails. But this would need even more checks
[15:32] <nammn_de> makes sense?
[15:32] <stickupkid> nammn_de, that's what the errorHandler does, i.e. if you only have it for git, then it's fine
[15:33] <stickupkid> nammn_de, i.e. if cmd.errorHandler != nil { err = cmd.errorHandler(err) }
[15:35] <nammn_de> stickupkid: not sure if I follow or we're on the same page. What would you put in the errorHandler? Reasonably I would only implement it for now for git. But the errhandler has to contain something along the lines of: if you fail, check if you are even running under git
[15:35] <nammn_de> and if yes, then log
[15:35] <nammn_de> that would be my approach on the errfunc
[15:58] <bdx> good morning
[15:59] <bdx> I wanted to have a quick chat about peer relations and unit data
[16:01] <bdx> from what I can gather, the 'private-address' in the peer data above is derived from the 'private-address' attribute in the unit_data
[16:01] <bdx> for each peer
[16:02] <bdx> should we be populating the unit_data with information from network_get()?
[16:04] <bdx> in my use case, all peers need to know each other's ip address
[16:05] <bdx> previously, I had always ascertained peer information through the unit_data exposed for each unit of the peer relation
[16:05] <bdx> I get the feeling we are moving away from the unit_data model
[16:05] <bdx> or possibly just moving to replace the unit_data with information from other sources
[16:07] <bdx> wallyworld recently added some functionality that adds ip address data to the return of network-get for k8s application charm units
[16:08] <bdx> I'm trying to see my way through to being able to consume that newly added ip address information from the view of a peer
[16:09] <bdx> one thing I'm thinking about doing is extending the Endpoint class via a peer relation to allow for each peer to set its ip address data (the return of network_get())
[16:11] <bdx> in this way I would be able to expose each peer's ip (returned via network_get()) to each other peer
[16:11] <bdx> does this sound like a legitimate way to go about this?
[16:51] <stickupkid> bdx, probably best to speak to manadart or jam (rick_h is away). maybe open a discourse post so the right people see it
[16:51] <bdx> stickupkid: will do, thanks!
[17:05] <skay> I'm seeing weird errors when I deploy an app, and when I try to attach a resource. https://paste.ubuntu.com/p/xQQ4nwcqR3/
[17:06] <skay> The controller is localhost based on lxd.
[17:16] <skay> to take my charm out of the equation I deployed juju-hello https://paste.ubuntu.com/p/7TQ3StsXFz/
[17:19] <stickupkid> skay, what version of juju?
[17:21] <stickupkid> skay, so what are you bootstrapping to? i.e. juju bootstrap lxd (or localhost)?
[17:22] <stickupkid> skay, if you're bootstrapping to lxd, those errors are normal, there is a ticket to try and reduce the verbose nature of that error message, but I've yet to get around to it
[17:22] <skay> stickupkid: ok. is there a hello-world type of charm that I can attach a resource to, to see if I get the other error?
[17:23] <skay> those errors might be misleading me
[17:23] <skay> hold on, I thought I included it in a pastebin. https://paste.ubuntu.com/p/7trQtnn5zP/
[17:24] <skay> that's when I tried to run the attach command with a charm I'm working on. Last time I worked on it was pre-bionic and I haven't done anything with it in a while
[17:24] <stickupkid> skay, ah ok, that's a different error, yeah, it does indicate that the api server isn't up and caused a connection reset
[17:25] <stickupkid> skay, i would honestly open up a bug, so we can discuss it
[17:25] <stickupkid> skay, https://bugs.launchpad.net/juju/+bugs
[17:25] <skay> will do! thank you
[17:26] <stickupkid> skay, it would be great if you can add as much info as possible about what you're bootstrapping to and what charm you're using
[17:26] <stickupkid> skay, that would be fantastic
[17:26] <skay> stickupkid: It's a private charm, but I'll see what I can do.
[17:27] <skay> maybe I can make a charm to reproduce it
[17:27] <stickupkid> skay, sure, if we can boil it down to a simple test case then that would help
[17:27] <skay> unless there is another I can use
[17:27] <skay> one that already exists
[17:29] <stickupkid> skay, we have test charms that might be of some use https://github.com/juju/juju/tree/develop/testcharms/charm-repo/bionic/dummy-resource
[17:30] <stickupkid> skay, or https://jaas.ai/u/juju-qa/upgrade-charm-resource-test/bionic/1
[17:37] <skay> stickupkid: the upgrade-charm-resource-test worked when I attached a resource to it. that is a helpful clue.
[19:29] <skay> stickupkid: I think it has to do with the filesize I was trying to attach. I got a connection refused eventually while trying to attach it to the test charm
[21:24] <skay> how big can those files be? I attach a tarball that includes my app's dependencies
[21:47] <webstrand> I'm trying to update an ebuild for juju 2.6 on gentoo
[21:48] <webstrand> The one I'm modifying (2.1) has a list of dependencies in the format "github.com/lestrrat-go/jspointer:f4881e6"
[21:48] <webstrand> any idea how I can get a list of dependencies like that for the new version?
[21:55] <wallyworld> webstrand: juju 2.6 uses the go dep tool. there's a Gopkg.toml file with all the deps
[21:56] <webstrand> wallyworld: thanks!
[21:58] <wallyworld> no problem. we will be moving to modules at some point soon hopefully
[22:00] <webstrand> Do you know if `dep status` is just a nicely formatted version of Gopkg.toml? Or does it include other stuff
[22:03] <thumper> babbageclunk: https://github.com/juju/juju/pull/10798
[22:03] <thumper> webstrand: there is a 'make dep' command
[22:04] <thumper> yes Juju has a Makefile :)
[22:05] <webstrand> thumper: According to the doc, that only checks to make sure the Gopkg.toml and lock are in sync?
[22:06] <thumper> webstrand: it does pull them into the vendor dir
[22:06] <thumper> and checks that they are in sync
[22:06] <thumper> I use it daily, so I'm pretty sure it works :)
[22:07] <webstrand> ah, I need the dependency source and revision for the ebuild. Something to do with reproducible builds
[22:10] <thumper> webstrand: the Gopkg.toml file does list the hashes of all the dependencies
[22:10] <thumper> so it is entirely reproducible
[22:10] <thumper> what are you missing?
[22:12] <webstrand> I'm just trying to figure out how to parse Gopkg.toml and extract what I need, that's all.
[22:14] <babbageclunk> webstrand: the .lock file is probably what you should use for fixing a reproducible build
[22:15] <webstrand> oh, that makes sense
[22:16] <babbageclunk> it has all the transitive dependencies which the .toml file might not have (although I think we do try to do that)
[22:20] <webstrand> Oh no... the lock file references the bad packages too: github.com/lestrrat/go-jsschema has now become github.com/lestrrat-go/jsschema but still depends on github.com/lestrrat/go-jsschema
[22:21] <webstrand> I'll deal with this tomorrow
[22:52] <pmatulis> the 19.10 OpenStack Charms release is now available...
[23:23] <babbageclunk> thumper: approved

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!