[01:19] <veebers> wallyworld: so for mariadb (and probably mysql, maybe gitlab?) we're proposing the charm does this instead: http://paste.ubuntu.com/p/NBrGSTyMxy/ (re: cloud container status)
[01:20] <wallyworld> pretty much except the message is wrong
[01:20] <veebers> wallyworld: oh oops, got lazy. Yeah will update message too
[01:20] <veebers> heh 'got' lazy, more like remained lazy
[01:20] <wallyworld> but yeah, i think that will work
[01:38] <thumper> wallyworld, veebers, kelvinliu__: http://10.125.0.203:8080/view/Unit%20tests/job/RunUnittests-amd64/859/console
[01:38] <thumper> not sure what is causing that issue
[01:42] <wallyworld> thumper: the jenkins job needs to have a make arg ste to not do the dep thing twice
[01:42] <kelvinliu_> veebers, is it related to the warning: unable to access '/home/ubuntu/.config/git/ignore': Permission denied ?
[01:42] <wallyworld> i fixed it last week but seems it got reverted
[01:42] <thumper> wallyworld: I think this is different isn't it?
[01:43] <wallyworld> it's running make build followed by make check
[01:43] <wallyworld> both do the dep thing
[01:43] <wallyworld> only the first one needs to
[01:44] <wallyworld> I previously added JUJU_SKIP_DEP=true along with the VERBOSE=1 arg to make
[01:49]  * veebers looking
[01:50] <veebers> thumper, wallyworld, kelvinliu_ this rings a bell. I'm sure I put a fix in for this; let me double check (to do with "make lxd-setup" on an empty system)
[01:50] <wallyworld> veebers: i just edited the job
[01:50] <wallyworld> the RunUnittests job itself
[01:50] <veebers> wallyworld: ack, what change?
[01:51] <wallyworld> the same one i added last week
[01:51] <wallyworld> JUJU_SKIP_DEP=true
[01:51] <veebers> wallyworld: ah, you didn't add that to the yaml config so it'll survive a redeploy?
[01:51] <wallyworld> the make check command needed that added
[01:51] <wallyworld> that i didn't know about
[01:52] <wallyworld> i edited several job yamls which looked similar but were not the same
[01:52] <wallyworld> so i didn't realise we had a template
[01:53] <veebers> wallyworld: all jenkins jobs are configured via jenkins job builder, no changes made via the web ui are guaranteed to last at all (any redeploy will overwrite)
[01:53] <veebers> wallyworld: I see your change, I have that stuff open and setup, I'll make the change in the yaml now so it persists
[01:53] <wallyworld> tyvm
[01:59] <veebers> wallyworld: FYI all RunUnittests-* jobs have been updated to skip deps on check
[02:00] <wallyworld> gr8
[02:12] <anastasiamac> wallyworld: do u have another min? i *think* i know what m seeing but m not sure i know the solution
[02:12] <wallyworld> ok
[02:12] <anastasiamac> and m sure someone (probably u) has come up with a solution
[02:12] <anastasiamac> mobile k?
[02:12] <wallyworld> ok
[02:20] <veebers> sigh, my test error is because I said one thing out loud but typed the opposite
[02:29] <wallyworld> kelvinliu_: sadly, ebs volumes do not support ReadWriteMany, so the idea of sharing a single PV for each operator can't easily work
[02:30] <kelvinliu_> wallyworld, that's annoying!
[02:30] <wallyworld> yeah, back to square one
[02:35] <kelvinliu_> wallyworld, how about DaemonSet
[02:35] <wallyworld> not sure, will have to look
[02:36] <wallyworld> veebers: btw, i just confirmed you can deploy an operator without storage so no idea why you had issues yesterday
[02:36] <veebers> wallyworld: ugh :-| I'll try again later, but it's not blocking me right now. Perhaps I'm doing something different/unexpected (doubt it though)
[02:37] <wallyworld> no rush
[02:43] <wallyworld> kelvinliu_: i think AWS EFS is worth looking at instead
[03:31] <veebers> wallyworld: you have a couple moments?
[03:32] <wallyworld> ok
[03:32] <veebers> wallyworld: HO ok? would be quicker
[03:32] <wallyworld> ok
[03:32] <veebers> see you in standup
[04:58] <veebers> wallyworld: Pushed updates to https://github.com/juju/juju/pull/9081 for when you have a moment
[04:58] <wallyworld> ok, almost done in meeting
[05:15] <wallyworld> veebers: there's still an outstanding todo item to not ignore getting container status
[05:16] <veebers> wallyworld: err, is that in state/unit.go? /me checks for todo
[05:17] <wallyworld> state/status.go
[05:18] <wallyworld> and also a block of code to be deleted
[05:19] <veebers> wallyworld: I'm not following with that todo comment, something from your previous review?
[05:19] <wallyworld> yeah, there's comments that have not been addressed
[05:21] <veebers> wallyworld: oh wow, yeah I didn't see those comments until I went to the /files url, I just saw what was on the root page for the PR. taking a look now, sorry
[05:21] <wallyworld> no worries
[05:22] <wallyworld> veebers: sorry, i left a few more also
[05:24] <veebers> wallyworld: if I 'resolve conversation' do you get an email?
[05:25] <wallyworld> maybe, but reviews emails are filtered so i may miss them sometimes
[05:25] <veebers> ack, no I was hoping you wouldn't so I could tick them off without spamming you
[05:26] <wallyworld> :-) hence the filter
[05:30] <veebers> wallyworld: should I add a SetCloudContainerStatus(...) and CloudContainerStatus() to Unit? Set.. instead of update ops, and CloudContainerStatus() so I can check the status (I take it TestCloudContainerStatus doesn't cover what you expected)
[05:31] <wallyworld> veebers: i think we have a SetAgentStatus already?
[05:32] <veebers> wallyworld: but that doesn't set the cloud container status?
[05:33] <wallyworld> sure, was just wanting to see what we had
[05:33] <veebers> ah, aye
[05:36] <wallyworld> veebers: we have this func (u *UnitAgent) SetStatus
[05:36] <wallyworld> so we'd need something similar for cloud container i would think
[05:37] <veebers> wallyworld: ack, I can make that happen
[05:38] <wallyworld> veebers: there is a StatusSetting and StatusGetter interface which are used in places; that's why the status methods exist like that
[10:14] <stickupkid> question: when we deploy, do we know before hand what the provider is in the deploy stage?
[10:14] <stickupkid> jam: you may know?
[10:15] <jam> stickupkid: we generally want to not couple what you can deploy with what the provider is, so we *might* but I would be hesitant to expose that.
[10:16] <jam> stickupkid: its Juju's job to abstract the provider, so the charms don't have to know about everywhere they might be deployed.
[10:17] <stickupkid> jam: I'm looking at the lxd profile stuff, with the idea of having a lxd profile for the charm, but I don't want to validate the profile if it's not hitting lxd
[10:17] <stickupkid> jam: my only option then is to validate when deploying via the agent (if that's the right terminology?)
[10:18] <stickupkid> jam: officially we want to validate early so we can tell users early on, but then to do so we break abstraction rules - which I 100% agree with, by the way
[10:26] <jam> stickupkid: lxd profile isn't about the lxd provider
[10:26] <jam> stickupkid: it is actually about "juju deploy charm --to lxd:1"
[10:26] <jam> stickupkid: so it applies to all providers
[10:27] <stickupkid> jam: quick HO?
[10:27] <jam> give me a sec to make coffee, and sure
[10:33] <jam> stickupkid: I'm joining the guild ho now
[11:36] <rick_h_> manadart: heads up, had the call with the openstack folks last night and had more promising test results. The updates with the agent work was helpful with things post-reboot so <3
[14:14] <hml> anyone know why juju uses two versions of charmrepo?
[14:14] <stickupkid> hml: I didn't have a look properly yesterday either, but that's one of my questions as well
[14:15] <hml> stickupkid: it looks like v3 and v4 are both being updated… and juju uses v2 and v3.  :-)
[14:17] <hml> stickupkid: maybe we’re not using v2, just didn’t take it out of the deps?
[14:18] <stickupkid> hml: looks like it
[14:23] <hml> stickupkid: i’m looking at removing it
[14:23] <stickupkid> hml: sounds good to me
[17:43] <cory_fu> rick_h_: Do you know if there's a way for a charm to determine what region it's running in, specifically on OpenStack?
[17:44] <rick_h_> cory_fu: not that I know of.
[17:44] <cory_fu> rick_h_: Though ideally in a cloud-agnostic way.  I know there's JUJU_AVAILABILITY_ZONE, but that's different from the region (though it might contain it, in some fashion)
[17:44] <cory_fu> rick_h_: Hrm.  That's annoying.  Why do we have JUJU_A_Z but not JUJU_REGION?
[17:45] <rick_h_> cory_fu: so I think that we want units to be able to make sure they're spread across the AZ in a region
[17:45] <rick_h_> cory_fu: but not sure on what the charm would do different based on a region since it's agnostic
[17:45] <rick_h_> cory_fu: and you can't deploy the same app across regions and have any logic around that
[17:45] <rick_h_> cory_fu: what are you trying to do?
[17:47] <cory_fu> rick_h_: This is specifically for the integrator charm, and one of the things that k8s needs from it is to know what region it's running in.
[17:49] <cory_fu> rick_h_: If it comes down to it, the integrator charm could query it from the OpenStack API, but that adds back in a lot of complexity that we had removed for other reasons
[17:49] <rick_h_> cory_fu: looking, is there a method to find it given the current info on the instance it's running on?
[17:49] <rick_h_> cory_fu: understand
[17:50] <cory_fu> rick_h_: I don't think OpenStack has something like the metadata service that most clouds support that would allow it to be queried easily, or else k8s wouldn't need us to provide it
[17:50] <rick_h_> cory_fu: I see
[17:51] <rick_h_> cory_fu: have you dumped out the env in the hook context?
[17:51] <cory_fu> Yeah.  it's not there
[17:52] <cory_fu> rick_h_: Oh, I'm wrong: https://docs.openstack.org/nova/latest/user/metadata-service.html
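In a hook, the document cory_fu links can be fetched from `http://169.254.169.254/openstack/latest/meta_data.json`. A Go sketch of parsing a few of its fields; the struct models only a subset of the document, and note the region itself is not among the documented fields, which is why the conversation falls back to the availability zone and OS_REGION_NAME:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// metaData models a few fields of the Nova metadata service's
// meta_data.json document (a subset, for illustration).
type metaData struct {
	UUID             string `json:"uuid"`
	Name             string `json:"name"`
	AvailabilityZone string `json:"availability_zone"`
	ProjectID        string `json:"project_id"`
}

// parseMetaData decodes the JSON body returned by the metadata
// service; in a charm you would http.Get the URL above first.
func parseMetaData(raw []byte) (metaData, error) {
	var md metaData
	err := json.Unmarshal(raw, &md)
	return md, err
}

func main() {
	sample := []byte(`{"uuid":"d8e02d56","name":"worker-0","availability_zone":"nova","project_id":"f7ac731c"}`)
	md, err := parseMetaData(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(md.AvailabilityZone) // nova
}
```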
[17:53] <rick_h_> cory_fu: k, yea I see some stuff with OS_REGION_NAME but it's on the client side for setting up your credentials
[17:53] <rick_h_> cory_fu: I'd be curious if there's any region related info in the credentials you get via trust
[17:53] <rick_h_> cory_fu: but it might be a list...still looking
[17:54] <cory_fu> rick_h_: There might be region info in the trust creds, but the charm also supports creds via config.  But you said on OpenStack there's actually an OS_REGION_NAME?  I could probably use that
[17:54] <rick_h_> cory_fu: so I see that showing up in the credential adding code such that if you have that env var set juju reads it in, does some detection, etc.
[17:55] <rick_h_> cory_fu: so it might end up as part of the creds data but possibly not all the time
[17:56] <cory_fu> rick_h_: But if it's in the env for Juju to pick up, then I could pick that up in the charm as well, presumably.  I don't have an OpenStack instance that I can test this on.   I should get that resolved so I can see what's on the actual system.
[17:56] <rick_h_> cory_fu: yea, I was just looking I had a bastion at one point but can't recall where I put the creds lol
[17:59] <rick_h_> cory_fu: have a sec for a hangout?
[18:04] <cory_fu> rick_h_: Sorry, chatting about this with David in another channel.  Might be resolved
[18:04] <rick_h_> cory_fu: k
[18:04] <cory_fu> It sounds like OS_REGION_NAME is reliable
[18:04] <rick_h_> cory_fu: even if there's > 1 region?
[18:05] <cory_fu> rick_h_: Can an instance be in more than one region?
[18:05] <rick_h_> cory_fu: no, but there can be > 1 region in the openstack and so we'd have to find which region that instance is in?
[18:05] <cory_fu> rick_h_: Is this value getting accessed by Juju on the instance, or on the client?
[18:05] <rick_h_> cory_fu: because region is part of the credential data vs the juju instance
[18:05] <rick_h_> cory_fu: the client
[18:05] <cory_fu> Oh, that's not useful.  I need it from the instance
[18:05] <rick_h_> cory_fu: right, that's what I'm saying
[18:06] <rick_h_> cory_fu: so if you get the credentials via trust and OS_REGION_NAME is singular you have the answer there, but if > 1 region there I'm not sure if there's another path
[18:07] <rick_h_> cory_fu: I guess that leads into the API calls you noted against the metadata service? Can you easily get the instance info you need to call that api?
[18:08] <rick_h_> cory_fu: I've got to run for few, biab
[18:09] <cory_fu> rick_h_: Ok.  The metadata service is easy to use, but I'm not sure if it includes the region.
[18:37] <rick_h_> hml: do you know if there's any secret way to get the OS region out from an instance that's running?
[18:37] <hml> rick_h_: with juju or any tool?
[18:38] <rick_h_> hml: bonus points with Juju from the running instance, secondary points for any tool that the charm can be updated to leverage
[18:50] <hml> rick_h_:  it could be that the controller region is all you’ll get.  i’m not sure there is a way to change the region once the controller is bootstrapped??
[18:50] <rick_h_> hml: no, but how would you get the controller region from within an instance in a charm?
[18:50] <hml> rick_h_: the openstack commands apply to one Region - that you define in the environment variables or running the command
[18:53] <hml> rick_h_: checking something
[18:54] <hml> rick_h_: out of curiosity, why would a charm need to know?
[18:54] <rick_h_> hml: seems that the openstack integrator charm needs to know the region of the instance for something in k8s.
[18:55] <rick_h_> hml: I'm still wondering then if the info out of juju trust (the credentials) will be limited to the one region that the model is in or if it'll provide a list as an answer if there's > 1 region on the cloud.
[18:55] <hml> rick_h_: i think it’d have to be… creds only allow for 1 region
[18:56] <rick_h_> hml: ok, I couldn't find a way to confirm that. In looking at the discoverregions code I saw it could get a list.
[18:56] <hml> rick_h_: but i’m wrong
[18:56] <hml> rick_h_: creds don’t include a region
[18:56] <rick_h_> hml: oh? is it the cloud that's reading the OS_REGION_NAME then?
[18:57] <hml> rick_h_: yes
[18:57] <rick_h_> hml: gotcha, ugh
[19:01] <hml> rick_h_: so far the only place i found it was in the bootstrap-params on the controller
[19:01] <hml> but any charm wouldn’t have access
[19:32] <magicaltrout> rick_h_: yolo
[19:34] <rick_h_> magicaltrout: doing dad stuff. What's up?
[19:37] <magicaltrout> doesn't matter, just looking for some advice on the Openstack LXD conjure-up stuff
[20:04] <rick_h_> magicaltrout: how so?
[20:05] <rick_h_> magicaltrout: the nova-lxd stuff?
[21:04] <magicaltrout> sorry yeah rick_h_
[21:04] <magicaltrout> whatever I do
[21:04] <magicaltrout> nova-compute moans about image not found
[21:18] <babbageclunk> thumper: have space for a review? https://github.com/juju/juju/pull/9161
[21:19] <rick_h_> magicaltrout: oh hmm. Yea not sure on that. Might have to get with the openstack folks.
[22:13] <babbageclunk> Ooh, wallyworld, can I get a review? https://github.com/juju/juju/pull/9161
[22:13] <wallyworld> sure
[22:20] <babbageclunk> thanks
[22:30] <wallyworld> babbageclunk: done
[22:32] <babbageclunk> wallyworld: awesome, thanks
[23:49] <veebers> wallyworld: I added a type UnitCloudContainer to cloudcontainer.go cribbing off UnitAgent (as per convo from last night), I didn't think adding State to cloudContainer would be ideal when it's only needed for status bits, not the provider/address/ports etc.
[23:51] <veebers> ah shoot, the other car still has a dead battery. Being stuck at home sucks. /me puts it on charge