/srv/irclogs.ubuntu.com/2020/04/16/#juju.txt

[00:03] <babbageclunk> wallyworld: can you take a look at this? https://github.com/juju/juju-restore/pull/14
[00:04] <wallyworld> sure
[00:10] <hpidcock> Very quick PR https://github.com/juju/rpcreflect/pull/1
[00:12] <babbageclunk> hpidcock: approved
[00:12] <babbageclunk> I didn't read it all
[00:13] <babbageclunk> hpidcock: is that pulled out of the juju/rpc package?
[00:14] <babbageclunk> (I mean rpcreflect, not the licence)
[00:14] <hpidcock> I'm not sure, ask Simon
[00:14] <babbageclunk> but he's always asleep!
[00:15] <babbageclunk> and you're right there
[00:15] <hpidcock> but the files themselves already have a license
[00:15] <hpidcock> doing a quick license audit
[00:15] <babbageclunk> oh right
[00:22] <hpidcock> another very quick PR https://github.com/juju/jsonschema-gen/pull/5
[01:08] <hpidcock> babbageclunk?
[01:08] <babbageclunk> hpidcock: looking
[01:09] <babbageclunk> approved
[01:22] <hpidcock> thanks
[01:30] <hpidcock> sorry babbageclunk, because you are so awesome I missed something: https://github.com/juju/jsonschema-gen/pull/6
[01:31] * babbageclunk clicks the button
[01:31] * hpidcock appreciates babbageclunk
[01:32] <babbageclunk> :)
=== lifeless_ is now known as liffeless
=== liffeless is now known as lifeless
[03:43] <wallyworld> babbageclunk: if you are free, given the circumstances, a second +1 and even QA on this would be awesome: https://github.com/juju/juju/pull/11449
[03:44] <babbageclunk> wallyworld: looking
[03:51] <wallyworld> hpidcock: a few tweaks if you have a chance sometime today: https://github.com/juju/juju/pull/11444
[04:06] <hpidcock> looking
[04:53] <babbageclunk> turns out a lot of things get sad when you delete the credential for a cloud.
[04:53] <babbageclunk> I mean controller model
[04:53] <wallyworld> true that. i did my testing with a different model
[04:54] <wallyworld> to simulate the field issue
[04:54] <babbageclunk> yeah, I'll try again with that
[04:54] <babbageclunk> suddenly couldn't get status or ssh to the machines
[04:56] <wallyworld> status won't work with the cred missing from any model
[05:08] <babbageclunk> I just kept my ssh sessions open this time
[05:09] <wallyworld> babbageclunk: if you still have a model, remove-application --destroy-storage is a test i should have done
[05:09] <wallyworld> since that will fail without --force
[05:10] <babbageclunk> controller's still up
[05:11] <babbageclunk> removing the bad model is difficult, since the storage is still there
[05:11] <wallyworld> babbageclunk: what i did was rename the cred id so i still had it to rename back after
[05:12] <wallyworld> then i could clean up as normal
[05:12] <babbageclunk> ah right - turns out I could update it back into the controller
[05:12] <wallyworld> yeah, that would work too
[05:13] <babbageclunk> ok, redeploying postgres and trying again
[05:14] <babbageclunk> how did the credential go away in the first place?
[05:17] <babbageclunk> wallyworld: so if I do `remove-application --destroy-storage` without --force after renaming the cred, it should fail? Or will the cleanup fail?
[05:18] <wallyworld> babbageclunk: with destroy storage, the cleanup has to be able to talk to the cloud, which will fail. i am guessing there will be errors logged and the app will hang around in dying state
[05:19] <babbageclunk> but then redoing the remove-application with --force should still clean it up
[05:19] <babbageclunk> right?
[05:19] <wallyworld> yeah
[05:19] <wallyworld> assuming --force works as expected
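
A sketch of the QA flow being described, using the postgres application babbageclunk redeploys above (application name illustrative; whether the first command wedges cleanly is exactly what is being tested):

    # with the model's cloud credential made invalid, the storage cleanup
    # cannot reach the cloud, so the app should hang around in dying state
    juju remove-application postgresql --destroy-storage
    # retrying with --force should tear everything down regardless
    juju remove-application postgresql --destroy-storage --force
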
[05:23] <hpidcock> wallyworld: can I grab the charm you were using for the juju-run bug?
[05:24] <wallyworld> oh bollocks, sorry forgot
[05:26] <babbageclunk> wallyworld: I don't get how you are updating the _id? Did you just paste it in again?
[05:26] * babbageclunk just does that
[05:27] <wallyworld> babbageclunk: i loaded the record to a doc variable using FindOne(), updated the doc._id, inserted it into the collection, then removed the old record. then later I updated the doc._id again and inserted/removed again
[05:28] <babbageclunk> from go code?
[05:28] <babbageclunk> I'm in the shell
[05:31] <babbageclunk> wallyworld: ok, that seemed to work fine
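
What wallyworld describes works because MongoDB's _id is immutable, so the "rename" has to be a copy-and-delete. A minimal mongo-shell sketch of the same steps, assuming the credential lives in a cloudCredentials collection (the collection name and ids here are illustrative):

    // load the doc, re-insert it under a new _id, remove the original
    var doc = db.cloudCredentials.findOne({_id: "aws#admin#default"})
    doc._id = "aws#admin#default-hidden"
    db.cloudCredentials.insert(doc)
    db.cloudCredentials.remove({_id: "aws#admin#default"})
    // reverse the same steps later to restore the credential
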
[05:32] <babbageclunk> wallyworld_: did you see my comments on the PR?
[05:39] <wallyworld> babbageclunk: thanks for QA etc, I've attempted to answer your questions on the PR, see if they make sense
[05:39] <wallyworld> one core issue is that we've over-applied the use of --force
[05:39] <wallyworld> in general
[05:40] <babbageclunk> wallyworld: makes sense to me - just checking
[06:23] <hpidcock> wallyworld: the problem is that on caas, juju-run always runs the commands on the remote, not in the operator: https://github.com/juju/juju/commit/d780faaf774bddd0b904e4428ae6c33076683f55#diff-89a10e4adacaacbb2dabc98d1b50011e
[06:23] <hpidcock> the reason it was working for me is I misunderstood the question
[06:24] <hpidcock> we will need to add a --operator flag to juju-run I think
[06:52] <wallyworld> hpidcock: ah, doh. of course. for charms with deploymentMode=operator, it should only ever run on the operator
[06:52] <hpidcock> well we have no juju-run --operator flag
[06:53] <hpidcock> for operator charms and normal iaas charms there isn't a problem at the moment
[06:53] <wallyworld> right, but as for run-action, we can handle the deployment mode case without the flag first up
[06:54] <hpidcock> yeah, well for juju run and actions we already have the information we need
[06:54] <hpidcock> but juju-run has no arg for operator or workload
[06:55] <wallyworld> so you're saying it works as expected for deploymentMode=operator charms?
[06:55] <wallyworld> it's just workload charms that need any changes?
[06:55] <hpidcock> I'm not sure, depends on the changes you made
[06:55] <wallyworld> yeah, i'd need to check the code as well
[06:57] <hpidcock> ok so actions and juju-run (the action made by juju run) should handle this
[06:57] <hpidcock> juju-run doesn't handle either case properly
[06:57] <wallyworld> sounds about right
[06:58] <wallyworld> juju-run should execute on the pod it was invoked from
[06:58] <wallyworld> without any --operator flag, now that i think about it
[06:58] <hpidcock> can't juju-run call across units/applications though?
[06:59] <wallyworld> yes, i mean more operator vs workload
[06:59] <wallyworld> as a start anyway
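
For context, juju-run is the hook tool for running commands in a unit's context, and the --operator flag discussed here is hypothetical at this point in the conversation. Roughly what the proposal would look like (unit name and command are illustrative):

    # today on caas this always runs in the workload pod
    juju-run postgresql/0 "status-set active"
    # proposed (does not exist yet): target the operator pod instead
    juju-run --operator postgresql/0 "status-set active"
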
[08:36] <stickupkid_> manadart, trolling you via a git diff
[08:37] <manadart> stickupkid_: Cunning.
[08:39] <stickupkid_> manadart, give me 5-10 minutes and wanna discuss VIPs?
[08:39] <manadart> stickupkid_: Sure.
[08:48] <manadart> stickupkid_: Be there in a sec.
[08:49] <stickupkid_> was scoffing my breakfast down tbh, just there now
[09:03] <elox> pmatulis: This guide: https://juju.is/docs/aws-cloud
[09:04] <elox> Good morning
[10:46] <manadart> stickupkid_, achilleasa: Anyone able to tick a little race fix? https://github.com/juju/juju/pull/11452
[10:49] <stickupkid_> manadart, tick
[10:52] <manadart> stickupkid_: Ta.
[12:24] <stickupkid_> manadart, https://github.com/juju/juju/pull/11454
[12:25] <manadart> stickupkid_: Yep gimme a couple; got one to swap you.
[12:25] <stickupkid_> manadart, runaway
[12:27] <stickupkid_> manadart, https://i.imgflip.com/3wva2g.jpg
[12:33] <manadart> stickupkid_: https://github.com/juju/juju/pull/11455. It's the real fix for the one you approved.
[12:34] <stickupkid_> manadart, well that's a horrid test :( // so TearDownTest does not reattempt.
[12:41] <stickupkid_> I have a serious question: if github actions says use "ubuntu-latest", but that comes with a ton of other services that aren't supported by ubuntu, is that not a trademark issue?
[12:42] <stickupkid_> I'm getting angry at github for saying something is ubuntu, when it clearly isn't just ubuntu here
[13:31] <stickupkid_> manadart, WINNER -> https://github.com/juju/juju/pull/11456
[13:31] <stickupkid_> CR please
[13:46] <achilleasa> mongo snap PR is ready for review: https://github.com/juju/juju/pull/11453
[13:46] <achilleasa> QA-ing this will be hard, so please spend some time to try different things...
[14:33] <hml> stickupkid_: https://github.com/juju/juju/pull/11457. review pls?
[14:34] <stickupkid_> hml, ah this makes sense, it's because I turned them on by default for aws the other month
[14:35] <hml> stickupkid_: i think i caught them all
[14:36] <stickupkid_> hml, ticked
[14:36] <hml> stickupkid_: cool. ty!
[14:37] <stickupkid_> damn it, why won't anything land
[14:37] <hml> stickupkid_: i'm seeing that mongo is not installing on the static analysis
[14:38] <hml> sorry - client tests… static analysis is good. :-)
[14:45] <manadart> stickupkid_: This fixes one of the intermittent failures: https://github.com/juju/juju/pull/11458
[14:45] <stickupkid_> manadart, I'm seeing two failures
[14:45] <manadart> stickupkid_: There are loads.
[15:06] <achilleasa> can someone remind me how to 'juju add-machine' and get an eoan one?
[15:09] <rick_h> achilleasa: juju add-machine --series=eoan
[15:09] <achilleasa> rick_h: thanks... thought that was only for charms :D
[15:20] <achilleasa> rick_h: AFAICT the eoan image comes with an lxd snap; this means that the lxd-snap-channel flag that I am adding will only affect focal+ (if lxd is already installed on the host we do nothing)
[15:20] <achilleasa> is that ok?
[15:21] <rick_h> achilleasa: it's not going to be that way in focal?
[15:21] <rick_h> achilleasa: I'd expect it to be part of the focal testing forward with the mongodb snap
[15:22] <achilleasa> rick_h: it's a separate PR, but currently install-lxd is a noop if lxd is already installed. So the channel will only do something if lxd is not already there
[15:22] <achilleasa> testing now... brb
[15:23] <rick_h> achilleasa: right, but if you change the model config and juju add-machine, it should use the track for the new machine in focal, since focal is getting it from a pre-seeded snap?
[15:25] <achilleasa> rick_h: that's the expectation. Just wanted to point out that it will have no effect when spinning up machines with series < focal
[15:25] <rick_h> achilleasa: yep, understood
[15:25] <achilleasa> (iow, we won't force it to use the channel if lxd is already there)
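
A sketch of how the flag under discussion would be exercised, per the behaviour achilleasa describes (the flag is still in an unmerged PR, so the semantics here are only as stated above, and the channel value is illustrative):

    # set the channel juju uses when it installs the lxd snap on new machines;
    # per the PR as described here, it is a no-op if lxd is already installed
    juju model-config lxd-snap-channel=4.0/stable
    juju add-machine --series=focal
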
[15:59] <hml> stickupkid_: something is definitely horked in ci for make check and the merge job. my make check job failed at a 60 min timeout… the merge job queue is 3 long
[15:59] <hml> stickupkid_: i found a place where we lost 15 min… but still
[16:09] <stickupkid_> hml, we need to get manadart's patch to land
[16:10] <hml> stickupkid_: which one?
[16:10] <stickupkid_> https://github.com/juju/juju/pull/11458
[16:11] <stickupkid_> although there is an issue with it: worker/caasunitprovisioner/worker_test.go:47:22: undefined: "github.com/juju/juju/worker/caasunitprovisioner".MockProvisioningStatusSetter
[16:11] <hml> stickupkid_: i was gunna say… that could be a problem
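
An undefined Mock* symbol like the one above is the usual sign that gomock mocks were not regenerated after an interface change; assuming the package carries juju's usual //go:generate mockgen directives, the fix would be along these lines:

    # regenerate the mocks for the affected package, then re-run its tests
    go generate ./worker/caasunitprovisioner/...
    go test ./worker/caasunitprovisioner/...
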
[16:18] <achilleasa> hmm... are we installing lxd when we create new machines? I have patched container/lxd/initialization_linux.ensureDependencies and added logging. That code path gets triggered when adding an lxd container to the machine. It seems like the machine already has the lxd snap installed for focal, even though if I 'lxc launch ubuntu:20.04 focal' I don't see it installed. Any ideas?
[16:19] <achilleasa> There doesn't seem to be anything related in the cloudinit bits (just apt pkgs)
[16:21] <stickupkid_> achilleasa, lxc launch needs daily as well
[16:21] <stickupkid_> achilleasa, lxc launch ubuntu-daily:focal focal
[16:22] <achilleasa> stickupkid_: doesn't 'lxc launch images:ubuntu/focal focal' work for you?
[16:22] <stickupkid_> achilleasa, it might now, it didn't back when i was testing it
[16:23] <achilleasa> stickupkid_: let me try with the image you suggested. Is that the one we pull in juju?
[16:24] <achilleasa> stickupkid_: ok... mystery solved; that one has lxd
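
The comparison is easy to reproduce; the image aliases are the ones named in the conversation, and at the time of this log only the daily image pre-seeded the lxd snap:

    # daily cloud image (what juju pulls): the lxd snap comes pre-seeded
    lxc launch ubuntu-daily:focal focal-daily
    lxc exec focal-daily -- snap list lxd
    # the release image achilleasa tried first had no lxd snap yet
    lxc launch ubuntu:20.04 focal-release
    lxc exec focal-release -- snap list lxd   # expect no matching snap here
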
[16:25] <achilleasa> rick_h: not sure how to proceed ^
[16:38] <rick_h> achilleasa: stickupkid_: sorry, was otp... I'm confused? Yes, you still need image streams for focal. It'll be cleaner once the images start rolling out of daily in the coming week
[16:39] <achilleasa> rick_h: what I meant was that the focal image seems to have the lxd snap pre-installed
[16:39] <stickupkid_> achilleasa, that's the case
[16:39] <achilleasa> rick_h: I could change the code to 'snap refresh lxd' though
[16:45] <rick_h> achilleasa: yes, that's expected. Yes, I think that's the goal: to snap refresh to the track in the config as part of our install/provision bits.
[16:50] <achilleasa> rick_h: ok. that probably needs additional changes to juju/packaging, so I will need to revisit this on Tuesday
[16:52] <rick_h> achilleasa: ok, are the other bits landing for the mongodb changes?
[16:52] <rick_h> we can pick up and see what steps we need to update/move forward for focal tests/etc
[16:53] <achilleasa> rick_h: PR is up; lots of timeout issues with tests (keep !!build!! to get it green) but it is ready for review/QA. Will fire an email to the list in a few min
[17:28] <achilleasa> hml: small PR for you: https://github.com/juju/packaging/pull/8
[17:28] <hml> achilleasa: looking
[17:40] <hml> achilleasa: approved.
[17:51] <achilleasa> rick_h: got it working with snap refresh: https://paste.ubuntu.com/p/mgkhwrxwWK. Will push a PR on Tuesday
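
The paste is not reproduced here, but per the discussion the change amounts to refreshing an already-seeded lxd snap onto the configured track rather than skipping the install step; roughly (channel value illustrative):

    # if the lxd snap is already installed, move it to the requested channel
    # instead of treating the install step as a no-op
    snap refresh lxd --channel=4.0/stable
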
[17:52] <rick_h> achilleasa: ok, got EOD and run. Thanks
[17:52] <rick_h> sorry, go EOD I mean
[17:52] <achilleasa> ;-)
[17:57] <elox> pmatulis: So, we actually got through it now. But there are tons of errors and warnings in this step, which makes it very uncertain whether the adding of credentials was successful or not.
[17:58] <elox> pmatulis: We also had to include the "juju add-model foobar aws/eu-west-1 --credential credname". Omitting that seems not to upload the credentials.
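
For reference, the working sequence elox describes (foobar, eu-west-1 and credname are their placeholders, and the credential is assumed to have been registered locally first):

    # register the credential locally, then name it explicitly at model
    # creation so it is uploaded to the controller
    juju add-credential aws
    juju add-model foobar aws/eu-west-1 --credential credname
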
[23:11] <wallyworld_> babbageclunk: the PR from yesterday, here's the forward port :-) https://github.com/juju/juju/pull/11451
[23:28] <babbageclunk> wallyworld_: sorry otp with thumper - no conflicts presumably
[23:28] <wallyworld_> nah, easy peasy
[23:39] <babbageclunk> Well, that was a thoroughly brain-melting discussion
[23:43] <hpidcock> sounds like it was fun
[23:56] <hpidcock> wallyworld_: would love some feedback on https://github.com/juju/juju/pull/11462 before I jump into writing unit tests
