[00:07] <veebers> wallyworld, kelvin : ahh, I think I get it, the jujud running in the operator pod comes from the operator image right? So if I want to update the logging on the client end I need to build an image, push it and somehow overwrite the one being used, right?
[00:09] <kelvin> veebers, there is a handy cmd in the makefile to replace the image cache on the k8s node. so u just build the image on ur host, then push it to the k8s node; k8s will always use the cache if the image is present
[00:09] <veebers> kelvin: many thanks!
[00:10] <kelvin> veebers, or u can kubectl exec into the pod, then change the python source code, and test it.
[00:10] <veebers> kelvin: its jujuc code I've added logging
[00:10] <kelvin> if it's charm py lib
[00:10] <veebers> and now thinking about it, I think I have some ideas as to whats going wrong
[00:10] <kelvin> ok
[00:10] <veebers> ugh me need type faster
[00:13] <kelvin> yeah 1st thing is for go bin, the 2nd is for py charm
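Kelvin's first suggestion (seeding the k8s node's image cache with a locally built operator image) can be sketched as below. The image tag, node address, and exact commands are assumptions for illustration, not the makefile target kelvin mentions; the script defaults to printing the commands rather than running them.

```shell
# Sketch of the build-then-seed-the-cache loop kelvin describes.
# IMAGE and K8S_NODE are hypothetical placeholders.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

IMAGE="jujusolutions/jujud-operator:dev"   # hypothetical tag
K8S_NODE="ubuntu@k8s-worker-0"             # hypothetical node address

run docker build -t "$IMAGE" .                        # build with the patched jujud/jujuc
run docker save "$IMAGE" -o /tmp/operator.tar         # export the image to a tarball
run scp /tmp/operator.tar "$K8S_NODE":/tmp/           # copy it to the k8s node
run ssh "$K8S_NODE" docker load -i /tmp/operator.tar  # load it into the node's cache
```

With the image already in the node cache, kubelet uses it without pulling, which is why the cached copy "wins" as long as it is present.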
[00:14] <anastasiamac_> wallyworld: ping
[00:15] <veebers> WTF, does anyone else see this error when 'make install'-ing? service/systemd/service.go:61: syntax error: unexpected = in type declaration
[00:15] <veebers> hmm, seems an env issue?
[00:32] <veebers> kelvin, wallyworld can I set controller config caas-operator-image-path to point at a local docker?
[00:33] <veebers> (my juju status -m ${JUJU_K8S_MODEL} kubernetes-worker has no machines :-\ want to use the image from the start)
[00:37] <veebers> nw, pushing to docker hub, will use that
[00:38] <kelvin> veebers, i don't know how to do that. what i would do is either ensure the node image cache is ur build, or scp ur jujuc into the operator pod (assuming ur image build host is Ubuntu, same as the operator os)
[00:38] <veebers> kelvin: ack, if this fails (or if I need to iterate quicker) I can do that, cheers; need to fully get my head around the operator pod etc.
[00:41] <veebers> kelvin: yay I now get the logging I was expecting, cheers!
[00:41] <kelvin> veebers, or u can work on the k8s node directly, clone juju to the host, change, build bin, build image, and redeploy the charm, then u don't need to push then pull images
[00:41] <veebers> now to actually figure out and fix the issue
[00:41] <kelvin> veebers, awesome
[00:56] <wallyworld> anastasiamac_: hi, sorry was immersd in code, missed ping
[00:57] <wallyworld> veebers: operator pod logs don't go to controller
[00:57] <wallyworld> i think i gave you the command yesterday?
[00:58] <wallyworld> kubectl logs -f ....
[00:59] <veebers> wallyworld: aye, sorted now, needed the operator image updated with the new jujud
[00:59] <wallyworld> ok
[01:02] <wallyworld> veebers: you figured out that caas-operator-image-path can point to your own image on dh?
[01:03] <veebers> wallyworld: yep, I can get the new image and juju there fine, so I have my logging now, just need to determine why it's complaining
[01:03] <wallyworld> gr8 ok
[01:03] <veebers> wallyworld: I have this error message, I wonder if it's something to do with how a caas model httpgetter does something? https://pastebin.canonical.com/p/pfP3dMDY2M/
[01:04] <wallyworld> veebers: it is, i can explain, HO?
[01:04] <veebers> wallyworld: yay yep standup omw
[01:05] <balloons> Happy Friday to all you people from the future!
[01:06] <babbageclunk> yay! Happy Friday balloons!
[01:10] <blahdeblah> \o balloons - how's it going?  Where are you working now?
[01:10] <balloons> I hope you're still keeping veebers on his toes babbageclunk.
[01:11] <balloons> hey blahdeblah, heh, never forget that handle :p
[01:12] <balloons> I'm enjoying my time presently working for DO. And yes, no provider for DO means I've been juju free for several months now
[01:12] <balloons> I'm clean, I swear it.
[01:12] <blahdeblah> cool
[01:31] <veebers> hey balloons o/
[01:57] <anastasiamac_> wallyworld: PTAL https://github.com/juju/juju/pull/8929 - disabled users access check
[02:03] <thumper> balloons: I thought you'd be writing a DO provider?
[02:04] <blahdeblah> haha
[02:32] <anastasiamac_> or thumper, PTAL at https://github.com/juju/juju/pull/8929? would b awesome to land it in 2.4.1..
[02:32] <wallyworld> anastasiamac_: lgtm ty
[02:32] <anastasiamac_> wallyworld: \o/
[02:44] <anastasiamac_> wallyworld: fco... apiserver tests fail coz apparently disabled users have login access... which makes no sense to me - disabled users should not be able to login, m fixing the tests.... i might need u to have another look unless u r happy as-is
[02:48] <wallyworld> hmmm, i sort of think they should have no access
[02:48] <anastasiamac_> wallyworld: exactly :)
[02:49] <wallyworld> veebers: i want to keep some aws instances running for demo under the juju-qa account, remind me, so i just go in and mark them in aws console as don't termnate or something
[02:50] <veebers> wallyworld: yep, set termination protection
[02:50] <veebers> wallyworld: if you wanted to be super extra sure you could just disable the aws cleanup job for now
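The termination protection veebers suggests can also be set from the AWS CLI, not just the console. A hedged sketch (the instance ID is a placeholder; the command is echoed here rather than executed):

```shell
# Enable EC2 termination protection for one instance.
# INSTANCE_ID is a placeholder, not a real juju-qa instance.
INSTANCE_ID="i-0123456789abcdef0"
CMD="aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --disable-api-termination"
echo "$CMD"
```

`--disable-api-termination` sets the DisableApiTermination attribute to true, so API-driven terminate calls (like a cleanup job's) fail until it is unset.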
[02:50] <wallyworld> ta. i think i just lost them all
[02:50] <veebers> wallyworld: oh shit :-( hopefully not a biggie to get back?
[02:50] <wallyworld> it's juju, just redeploy :-)
[02:51] <veebers> hah true ^_^
[03:19] <anastasiamac_> wallyworld: and disabled users do not have access to application offers either, right? oh well, shouldn't since they do now...
[03:19] <wallyworld> yeah, shouldn't
[03:51] <veebers> wallyworld: is "Labels:    map[string]string{labelApplication: appName}}," important for a secret? (if so I'll pass in app name to EnsureSecret too)
[03:51] <wallyworld> yeah, we should label it with app name
[03:51] <veebers> cool, can do
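The labelled Secret veebers and wallyworld are discussing would roughly look like this as a Kubernetes manifest; the label key and names here are illustrative assumptions, not Juju's actual labelApplication value:

```yaml
# Sketch of an application-labelled Secret (label key and names assumed)
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
  labels:
    juju-app: mariadb   # the appName label makes the secret findable by app
type: Opaque
```

Labelling by application name lets the operator later list or garbage-collect an app's secrets with a label selector instead of tracking names individually.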
[03:53] <anastasiamac_> wallyworld: updated PR to ensure no access for disabled user, PTAL?
[03:53] <wallyworld> ok
[03:54] <anastasiamac_> wallyworld: also i think we have a bug in that we should still provide a means to list access for disabled users, so admins can see what they'd have when re-enabled... currently i don't think it'll work: it'll come back with "none"... i'll file a bug and work on it according to my priorities :D
[03:54] <wallyworld> sgtm
[03:58] <wallyworld> anastasiamac_: just a typo
[03:58] <anastasiamac_> phew! that's good news \o/ tyvm for review :D
[04:10] <veebers> wallyworld: FYI have pushed the fixes and the requested changes to the PR
[04:10] <wallyworld> looking
[04:21] <wallyworld> veebers: just a couple of small things
[04:22] <veebers> wallyworld: cool, on it
[04:26] <veebers> wallyworld: hub.docker.io or just docker.io?
[04:26] <wallyworld> docker.io
[04:26] <veebers> cheers
[04:26] <wallyworld> i tested with docker pull
[04:27] <veebers> ah right, I can see that in my terminal backscroll. /me wonders if hub.docker.com redirects to docker.io
[04:33] <wallyworld> kelvin: lgtm, just a couple of small things
[04:33] <veebers> wallyworld: ok just fixing a couple of unit test failures that fell out of that and will squash && push && merge
[04:33] <wallyworld> ty
[04:34] <kelvin> wallyworld, yup thanks
[05:22] <veebers> It seems there are occasional test issues with github.com/juju/juju/cmd/juju/machine.TestPackage (from github.com_juju_juju_cmd_juju_machine)
[06:54] <anastasiamac_> wallyworld: m pretty sure we de-dup local charms when they r deployed multiple times, right? i seem to recall ppl doing the work...
[06:55] <wallyworld> the blob contents i think so
[09:06] <stickupkid> manadart: you go 5 minutes?
[09:07] <stickupkid> s/go/got/ ?
[09:08] <manadart> stickupkid: Sure HO?
[09:08] <stickupkid> sure
[09:09] <stickupkid> one sec got to restart
[10:34] <rick_h_> stickupkid: I do want to say let's keep this method around though. This feels like the right autoload-credentials type thing where if you've got it, great!
[10:42] <manadart> Need a review for: https://github.com/juju/juju/pull/8933
[11:40] <valentina_> Hello All,
[11:40] <valentina_> could anyone give me a hint please: I'm using pylxd in my charm: https://git.launchpad.net/charms-6wind/tree/charm-layers/6wind-common/lib/charm/openstack/utils.py
[11:40] <valentina_> I want to add a constraint so that my charm installs and uses only pylxd>=2.2.7 from PyPI, not python3-pylxd from the official Ubuntu repos.
[11:40] <valentina_> I've found a small notice about the wheelhouse.txt file in the official docs
[11:40] <valentina_> I'm hesitating over where I should put this constraint: should I create wheelhouse.txt in the 6wind-common layer directory: https://git.launchpad.net/charms-6wind/tree/charm-layers/6wind-common
[11:40] <valentina_> Or should I put it in the charm's dir here:  https://git.launchpad.net/charms-6wind/tree/virtual-accelerator-compute
[11:41] <valentina_> The second question: there are actually no wheelhouse.txt files in the charms-6wind repo, but when I build the charm, the build folder is created and I can see a wheelhouse.txt generated there. How is it generated by charm build, which template does it take, and how does it decide which python libraries it has to put into wheelhouse.txt as needed dependencies?
[11:41] <valentina_> Thanks for any help
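For the pylxd pin valentina_ asks about: a wheelhouse.txt is just pip-style requirement lines, so the file itself is tiny. As I understand layered charm builds (this placement is my reading of the convention, not confirmed in this channel), putting it in the layer directory means every charm built on that layer inherits the constraint, and charm build merges all layers' wheelhouse.txt files into the generated one she sees in the build folder:

```text
# wheelhouse.txt -- one pip requirement per line
pylxd>=2.2.7
```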
[11:55] <stickupkid> manadart: that worked
[11:55] <stickupkid> * goes for lunch with joy
[11:55] <manadart> stickupkid: Nice.
[11:56] <stickupkid> manadart: i need to just make sure, it seems rather convoluted tbh, but it is what it is
[11:56] <stickupkid> :)
[12:40] <alephnull> Could someone point me to some explanatory guide about relations? I'm looking at https://jujucharms.com/rabbitmq-server/ and it extracts information from relations and I am not entirely sure where this information is coming from.
[16:04] <stickupkid> rick_h_: after finalize cloud when adding a credential, do you think we should update the credential file?
[16:04] <stickupkid> rick_h_: the issue we have is that trust.password isn't guaranteed to always be available, it might be turned off for security reasons
[16:05] <stickupkid> rick_h_: we have the server cert, so we should remove the trust.password and swap it with that
[16:24] <rick_h_> stickupkid: hmm, so we go through add-credential and ask for the trust password, setup the keys. Do we setup the keys on bootstrap? Or as part of the add-credential call?
[16:24] <rick_h_> stickupkid: e.g. do we actually need to write out that trust pass to anything persistent?
[16:41] <stickupkid> rick_h_: the problem is, we don't finalize the cloud if you're not in interactive mode. It's only when we bootstrap that we finalize the cloud...
[16:42] <stickupkid> rick_h_: so we need to either finalize the cloud when we add-credentials, for all credentials added (maybe make it optional so i don't break other providers), or re-write the credentials at bootstrap
[16:42] <stickupkid> rick_h_: the latter feels wrong
[16:43] <rick_h_> stickupkid: yea, I wouldn't expect us to wait until bootstrap to do something with creds
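The swap stickupkid describes (dropping trust.password in favour of the server certificate once trust is established) would leave a credentials.yaml along these lines. This is a sketch of my understanding of Juju's LXD certificate-style credential; the exact key names are assumptions, not verified against the provider code:

```yaml
# ~/.local/share/juju/credentials.yaml (sketch; key names assumed)
credentials:
  localhost:
    admin:
      auth-type: certificate
      client-cert: <PEM>
      client-key: <PEM>
      server-cert: <PEM>   # replaces the one-shot trust-password
```

Once the client cert is trusted by the LXD daemon, the password is no longer needed, which is why persisting it past finalization buys nothing.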
[16:44] <stickupkid> rick_h_: i'll propose it, see if people hate it :)
[16:44] <rick_h_> stickupkid: can you raise an email to the team about finalizing at add-credential time? It feels like the right place but if we've never messed with that...I mean what does azure do?
[16:44] <rick_h_> it has an interactive add-credential that walks back/forth and such
[16:44] <rick_h_> I wonder if there's lessons/style to crib from there
[16:45] <rick_h_> stickupkid: kk, on the propose and get feedback method
[16:45] <stickupkid> rick_h_: so interactive works nicely; it's when you pass a file that it assumes you know what you're doing?
[16:46] <stickupkid> rick_h_: quick HO?
[16:55] <rick_h_> stickupkid: sec
[16:55] <rick_h_> stickupkid: I see...yea the file path. Hmmm
[17:00] <rick_h_> stickupkid: still around?
[17:00] <rick_h_> actually, just go have a good weekend stickupkid
[17:01] <stickupkid> rick_h_: haha, i'm still around, if i get this sorted, i won't have to think about it all weekend :)
[17:01] <rick_h_> stickupkid: sounds like a plan
[17:01] <rick_h_> stickupkid: with a cool beverage in hand to assist the thinking :)
[17:01] <stickupkid> HO if you want
[17:02] <rick_h_> stickupkid: no, I want you to go enjoy your weekend and get out of here
[17:02] <rick_h_> stickupkid: appreciate your thinking on the problem.
[17:29] <stickupkid> rick_h_: muhahaha - got it working
[17:29] <rick_h_> stickupkid: lol