veebers | wallyworld: so for mariadb (and probably mysql, maybe gitlab?) we're proposing the charm does this instead: http://paste.ubuntu.com/p/NBrGSTyMxy/ (re: cloud container status) | 01:19 |
wallyworld | pretty much except the message is wrong | 01:20 |
veebers | wallyworld: oh oops, got lazy. Yeah will update message too | 01:20 |
veebers | heh, 'got' lazy, more like remained lazy | 01:20 |
wallyworld | but yeah, i think that will work | 01:20 |
thumper | wallyworld, veebers, kelvinliu__: http://10.125.0.203:8080/view/Unit%20tests/job/RunUnittests-amd64/859/console | 01:38 |
thumper | not sure what is causing that issue | 01:38 |
wallyworld | thumper: the jenkins job needs to have a make arg set to not do the dep thing twice | 01:42 |
kelvinliu_ | veebers, is it related to the warning: unable to access '/home/ubuntu/.config/git/ignore': Permission denied | 01:42 |
kelvinliu_ | ? | 01:42 |
wallyworld | i fixed it last week but seems it got reverted | 01:42 |
thumper | wallyworld: I think this is different isn't it? | 01:42 |
wallyworld | it's running make build followed by make check | 01:43 |
wallyworld | both do the dep thing | 01:43 |
wallyworld | only the first one needs to | 01:43 |
wallyworld | I previously added JUJU_SKIP_DEP=true along with the VERBOSE=1 arg to make | 01:44 |
* veebers looking | 01:49 | |
veebers | thumper, wallyworld, kelvinliu_ this rings a bell. I'm sure I put a fix in for this; let me double check (to do with "make lxd-setup" on an empty system) | 01:50 |
wallyworld | veebers: i just edited the job | 01:50 |
wallyworld | the RunUnittests job itself | 01:50 |
veebers | wallyworld: ack, what change? | 01:50 |
wallyworld | the same one i added last week | 01:51 |
wallyworld | JUJU_SKIP_DEP=true | 01:51 |
veebers | wallyworld: ah, but you didn't add that to the yaml config, so it won't survive a redeploy? | 01:51 |
wallyworld | the make check command needed that added | 01:51 |
wallyworld | that i didn't know about | 01:51 |
wallyworld | i edited several job yamls which looked similar but were not the same | 01:52 |
wallyworld | so i didn't realise we had a template | 01:52 |
veebers | wallyworld: all jenkins jobs are configured via jenkins job builder; no changes made via the web ui are guaranteed to last at all (any redeploy will overwrite) | 01:53 |
veebers | wallyworld: I see your change, I have that stuff open and setup, I'll make the change in the yaml now so it persists | 01:53 |
wallyworld | tyvm | 01:53 |
veebers | wallyworld: FYI all RunUnittests-* jobs have been updated to skip deps on check | 01:59 |
wallyworld | gr8 | 02:00 |
anastasiamac | wallyworld: do u have another min? i *think* i know what m seeing but m not sure i know the solution | 02:12 |
wallyworld | ok | 02:12 |
anastasiamac | and m sure someone (probably u) has come up with a solution | 02:12 |
anastasiamac | mobile k? | 02:12 |
wallyworld | ok | 02:12 |
veebers | sigh, my test error is because I said one thing out loud but typed the opposite | 02:20 |
wallyworld | kelvinliu_: sadly, ebs volumes do not support ReadWriteMany, so the idea of sharing a single PV for each operator can't easily work | 02:29 |
kelvinliu_ | wallyworld, that's annoying! | 02:30 |
wallyworld | yeah, back to square one | 02:30 |
kelvinliu_ | wallyworld, how about a DaemonSet? | 02:35 |
wallyworld | not sure, will have to look | 02:35 |
wallyworld | veebers: btw, i just confirmed you can deploy an operator without storage so no idea why you had issues yesterday | 02:36 |
veebers | wallyworld: ugh :-| I'll try again later, but it's not blocking me right now. Perhaps I'm doing something different/unexpected (doubt it though) | 02:36 |
wallyworld | no rush | 02:37 |
wallyworld | kelvinliu_: i think AWS EFS is worth looking at instead | 02:43 |
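For context on the EBS limitation above: EFS can be mounted read-write by many pods because it is exposed over NFS. Below is a minimal Go sketch, using the Kubernetes API types, of what a ReadWriteMany PersistentVolume backed by an EFS/NFS endpoint might look like; the server address, path, and capacity are hypothetical placeholders, and this is only an illustration of the access mode in question, not the approach juju ended up taking.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// efsPersistentVolume builds a PersistentVolume definition that allows
// ReadWriteMany access by mounting an EFS file system over NFS. EBS-backed
// volumes only support ReadWriteOnce, which is why a shared operator volume
// would need a different backend such as EFS.
func efsPersistentVolume(name, efsServer, path string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{
				corev1.ReadWriteMany, // the mode EBS cannot provide
			},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				NFS: &corev1.NFSVolumeSource{
					Server: efsServer, // hypothetical EFS mount target DNS name
					Path:   path,
				},
			},
		},
	}
}

func main() {
	pv := efsPersistentVolume("operator-shared", "fs-12345678.efs.us-east-1.amazonaws.com", "/")
	fmt.Println(pv.Name, pv.Spec.AccessModes)
}
```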
veebers | wallyworld: you have a couple moments? | 03:31 |
wallyworld | ok | 03:32 |
veebers | wallyworld: HO ok? would be quicker | 03:32 |
wallyworld | ok | 03:32 |
veebers | see you in standup | 03:32 |
veebers | wallyworld: Pushed updates to https://github.com/juju/juju/pull/9081 for when you have a moment | 04:58 |
wallyworld | ok, almost done in meeting | 04:58 |
wallyworld | veebers: there's still an outstanding todo item to not ignore getting container status | 05:15 |
veebers | wallyworld: err, is that in state/unit.go? /me checks for todo | 05:16 |
wallyworld | state/status.go | 05:17 |
wallyworld | and also a block of code to be deleted | 05:18 |
veebers | wallyworld: I'm not following with that todo comment, something from your previous review? | 05:19 |
wallyworld | yeah, there's comments that have not been addressed | 05:19 |
veebers | wallyworld: oh wow, yeah I didn't see those comments until I went to the /files url, I just saw what was on the root page for the PR. taking a look now, sorry | 05:21 |
wallyworld | no worries | 05:21 |
wallyworld | veebers: sorry, i left a few more also | 05:22 |
veebers | wallyworld: if I 'resolve conversation' do you get an email? | 05:24 |
wallyworld | maybe, but reviews emails are filtered so i may miss them sometimes | 05:25 |
veebers | ack, no I was hoping you wouldn't so I could tick them off without spamming you | 05:25 |
wallyworld | :-) hence the filter | 05:26 |
veebers | wallyworld: should I add a SetCloudContainerStatus(...) and CloudContainerStatus() to Unit? Set.. instead of update ops, and CloudContainerStatus() so I can check the status (I take it TestCloudContainerStatus doesn't cover what you expected) | 05:30 |
wallyworld | veebers: i think we have a SetAgentStatus already? | 05:31 |
veebers | wallyworld: but that doesn't set the cloud container status? | 05:32 |
wallyworld | sure, was just wanting to see what we had | 05:33 |
veebers | ah, aye | 05:33 |
wallyworld | veebers: we have this func (u *UnitAgent) SetStatus | 05:36 |
wallyworld | so we'd need something similar for cloud container i would think | 05:36 |
veebers | wallyworld: ack, I can make that happen | 05:37 |
wallyworld | veebers: there is a StatusSetting and StatusGetter interface which are used in places; that's why the status methods exist like that | 05:38 |
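To make the shape being discussed concrete, here is a minimal, self-contained Go sketch of a UnitCloudContainer-style type exposing SetStatus/Status in the spirit of UnitAgent and the StatusSetter/StatusGetter interfaces mentioned above. The type names, fields, key scheme, and in-memory store are illustrative assumptions, not the actual juju/state implementation.

```go
package main

import (
	"fmt"
	"time"
)

// StatusInfo mirrors the general shape of a status record (value, message,
// timestamp); the real juju types carry more detail.
type StatusInfo struct {
	Status  string
	Message string
	Since   time.Time
}

// StatusSetter and StatusGetter echo the interfaces referred to above; the
// exact signatures in juju may differ.
type StatusSetter interface {
	SetStatus(StatusInfo) error
}

type StatusGetter interface {
	Status() (StatusInfo, error)
}

// UnitCloudContainer is a hypothetical wrapper, cribbed from the UnitAgent
// pattern: it knows only the unit it belongs to and delegates persistence
// to a status store keyed by a global key.
type UnitCloudContainer struct {
	unitName string
	store    map[string]StatusInfo // stand-in for the status collection
}

func (c *UnitCloudContainer) globalKey() string {
	return "cc#" + c.unitName // illustrative key scheme
}

// SetStatus records the cloud container status for the unit.
func (c *UnitCloudContainer) SetStatus(info StatusInfo) error {
	info.Since = time.Now()
	c.store[c.globalKey()] = info
	return nil
}

// Status returns the last recorded cloud container status.
func (c *UnitCloudContainer) Status() (StatusInfo, error) {
	info, ok := c.store[c.globalKey()]
	if !ok {
		return StatusInfo{}, fmt.Errorf("status for %q not found", c.unitName)
	}
	return info, nil
}

func main() {
	cc := &UnitCloudContainer{unitName: "mariadb/0", store: map[string]StatusInfo{}}
	_ = cc.SetStatus(StatusInfo{Status: "waiting", Message: "waiting for container"})
	got, _ := cc.Status()
	fmt.Println(got.Status, got.Message)
}
```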
=== alephnull_ is now known as alephnull | ||
stickupkid | question: when we deploy, do we know before hand what the provider is in the deploy stage? | 10:14 |
stickupkid | jam: you may know? | 10:14 |
jam | stickupkid: we generally want to not couple what you can deploy with what the provider is, so we *might* but I would be hesitant to expose that. | 10:15 |
jam | stickupkid: its Juju's job to abstract the provider, so the charms don't have to know about everywhere they might be deployed. | 10:16 |
stickupkid | jam: I'm looking at the lxd profile stuff, with the idea of having a lxd profile for the charm, but I don't want to validate the profile if it's not hitting lxd | 10:17 |
stickupkid | jam: my only option then is to validate when deploying via the agent (if that's the right terminology?) | 10:17 |
stickupkid | jam: officially we want to validate early so we can tell users early on, but then to do so we break abstraction rules - which I 100% agree with by the way | 10:18 |
jam | stickupkid: lxd profile isn't about the lxd provider | 10:26 |
jam | stickupkid: it is actually about "juju deploy charm --to lxd:1" | 10:26 |
jam | stickupkid: so it applies to all providers | 10:26 |
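As an illustration of the kind of early validation being weighed up here, this small, self-contained Go sketch checks a charm-supplied LXD profile against a blocklist of config key prefixes before deploy. The profile structure and the specific disallowed prefixes are assumptions for the example, not the rules juju actually enforces.

```go
package main

import (
	"fmt"
	"strings"
)

// LXDProfile is a minimal stand-in for a profile a charm might ship
// (e.g. in an lxd-profile.yaml); the real structure has more fields.
type LXDProfile struct {
	Config  map[string]string
	Devices map[string]map[string]string
}

// disallowedConfigPrefixes is a hypothetical blocklist: config keys a charm
// profile should not be allowed to set, so problems surface at deploy time
// rather than later on the agent.
var disallowedConfigPrefixes = []string{"boot.", "limits.", "migration."}

// ValidateProfile returns an error listing every offending config key, so a
// user can be told about all problems in one pass.
func ValidateProfile(p LXDProfile) error {
	var bad []string
	for key := range p.Config {
		for _, prefix := range disallowedConfigPrefixes {
			if strings.HasPrefix(key, prefix) {
				bad = append(bad, key)
			}
		}
	}
	if len(bad) > 0 {
		return fmt.Errorf("invalid lxd-profile config keys: %s", strings.Join(bad, ", "))
	}
	return nil
}

func main() {
	profile := LXDProfile{Config: map[string]string{
		"security.nesting": "true",
		"boot.autostart":   "true", // would be rejected by this sketch
	}}
	if err := ValidateProfile(profile); err != nil {
		fmt.Println("validation failed:", err)
	}
}
```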
stickupkid | jam: quick HO? | 10:27 |
jam | give me a sec to make coffee, and sure | 10:27 |
jam | stickupkid: I'm joining the guild ho now | 10:33 |
rick_h_ | manadart: heads up, had the call with the openstack folks last night and had more promising test results. The updates with the agent work were helpful with things post-reboot so <3 | 11:36 |
hml | anyone know why juju uses two versions of charmrepo? | 14:14 |
stickupkid | hml: I didn't have a look properly yesterday either, but that's one of my questions as well | 14:14 |
hml | stickupkid: it looks like v3 and v4 are both being updated… and juju uses v2 and v3. :-) | 14:15 |
hml | stickupkid: maybe we’re not using v2, just didn’t take it out of the deps? | 14:17 |
stickupkid | hml: looks like it | 14:18 |
hml | stickupkid: i’m looking at removing it | 14:23 |
stickupkid | hml: sounds good to me | 14:23 |
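Before dropping the older charmrepo copy, one way to confirm which majors the tree actually imports is a quick scan of the Go source files. This standalone sketch (the path handling and output format are my own, and it assumes the repo is checked out locally) just lists the distinct charmrepo import paths it finds.

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"os"
	"path/filepath"
	"strings"
)

// main walks a source tree and reports every distinct import path that
// mentions charmrepo, which shows whether an old major version is still
// referenced or is only lingering in the dependency manifest.
func main() {
	root := "."
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	seen := map[string]int{}
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
			return nil
		}
		fset := token.NewFileSet()
		f, err := parser.ParseFile(fset, path, nil, parser.ImportsOnly)
		if err != nil {
			return nil // skip files that fail to parse
		}
		for _, imp := range f.Imports {
			p := strings.Trim(imp.Path.Value, `"`)
			if strings.Contains(p, "charmrepo") {
				seen[p]++
			}
		}
		return nil
	})
	if err != nil {
		fmt.Println("walk error:", err)
	}
	for importPath, count := range seen {
		fmt.Printf("%s: imported by %d file(s)\n", importPath, count)
	}
}
```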
cory_fu | rick_h_: Do you know if there's a way for a charm to determine what region it's running in, specifically on OpenStack? | 17:43 |
rick_h_ | cory_fu: not that I know of. | 17:44 |
cory_fu | rick_h_: Though ideally in a cloud-agnostic way. I know there's JUJU_AVAILABILITY_ZONE, but that's different from the region (though it might contain it, in some fashion) | 17:44 |
cory_fu | rick_h_: Hrm. That's annoying. Why do we have JUJU_A_Z but not JUJU_REGION? | 17:44 |
rick_h_ | cory_fu: so I think that we want units to be able to make sure they're spread across the AZs in a region | 17:45 |
rick_h_ | cory_fu: but not sure what the charm would do differently based on a region since it's agnostic | 17:45 |
rick_h_ | cory_fu: and you can't deploy the same app across regions and have any logic around that | 17:45 |
rick_h_ | cory_fu: what are you trying to do? | 17:45 |
cory_fu | rick_h_: This is specifically for the integrator charm, and one of the things that k8s needs from it is to know what region it's running in. | 17:47 |
cory_fu | rick_h_: If it comes down to it, the integrator charm could query it from the OpenStack API, but that adds back in a lot of complexity that we had removed for other reasons | 17:49 |
rick_h_ | cory_fu: looking, is there a method to find it given the current info on the instance it's running on? | 17:49 |
rick_h_ | cory_fu: understand | 17:49 |
cory_fu | rick_h_: I don't think OpenStack has something like the metadata service that most clouds support that would allow it to be queried easily, or else k8s wouldn't need us to provide it | 17:50 |
rick_h_ | cory_fu: I see | 17:50 |
rick_h_ | cory_fu: have you dumped out the env in the hook context? | 17:51 |
cory_fu | Yeah. it's not there | 17:51 |
cory_fu | rick_h_: Oh, I'm wrong: https://docs.openstack.org/nova/latest/user/metadata-service.html | 17:52 |
rick_h_ | cory_fu: k, yea I see some stuff with OS_REGION_NAME but it's on the client side for setting up your credentials | 17:53 |
rick_h_ | cory_fu: I'd be curious if there's any region related info in the credentials you get via trust | 17:53 |
rick_h_ | cory_fu: but it might be a list...still looking | 17:53 |
cory_fu | rick_h_: There might be region info in the trust creds, but the charm also supports creds via config. But you said on OpenStack there's actually an OS_REGION_NAME? I could probably use that | 17:54 |
rick_h_ | cory_fu: so I see that showing up in the credential adding code such that if you have that env var set juju reads it in, does some detection, etc. | 17:54 |
rick_h_ | cory_fu: so it might end up as part of the creds data but possibly not all the time | 17:55 |
cory_fu | rick_h_: But if it's in the env for Juju to pick up, then I could pick that up in the charm as well, presumably. I don't have an OpenStack instance that I can test this on. I should get that resolved so I can see what's on the actual system. | 17:56 |
rick_h_ | cory_fu: yea, I was just looking I had a bastion at one point but can't recall where I put the creds lol | 17:56 |
rick_h_ | cory_fu: have a sec for a hangout? | 17:59 |
cory_fu | rick_h_: Sorry, chatting about this with David in another channel. Might be resolved | 18:04 |
rick_h_ | cory_fu: k | 18:04 |
cory_fu | It sounds like OS_REGION_NAME is reliable | 18:04 |
rick_h_ | cory_fu: even if there's > 1 region? | 18:04 |
cory_fu | rick_h_: Can an instance be in more than one region? | 18:05 |
rick_h_ | cory_fu: no, but there can be > 1 region in the openstack and so we'd have to find which region that instance is in? | 18:05 |
cory_fu | rick_h_: Is this value getting accessed by Juju on the instance, or on the client? | 18:05 |
rick_h_ | cory_fu: because region is part of the credential data vs the juju instance | 18:05 |
rick_h_ | cory_fu: the client | 18:05 |
cory_fu | Oh, that's not useful. I need it from the instance | 18:05 |
rick_h_ | cory_fu: right, that's what I'm saying | 18:05 |
rick_h_ | cory_fu: so if you get the credentials via trust and OS_REGION_NAME is singular you have the answer there, but if > 1 region there I'm not sure if there's another path | 18:06 |
rick_h_ | cory_fu: I guess that leads into the API calls you noted against the metadata service? Can you easily get the instance info you need to call that api? | 18:07 |
rick_h_ | cory_fu: I've got to run for few, biab | 18:08 |
cory_fu | rick_h_: Ok. The metadata service is easy to use, but I'm not sure if it includes the region. | 18:09 |
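For reference, a hedged Go sketch of the two lookups discussed here: reading JUJU_AVAILABILITY_ZONE from the hook environment and falling back to the Nova metadata service on the link-local address. It assumes the metadata JSON carries an availability_zone field (per the docs linked above); note that this yields an availability zone, not a region, which is exactly why the thread keeps circling back to OS_REGION_NAME and the trust credentials.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// instanceMetadata holds the subset of the OpenStack metadata document we
// care about; availability_zone is documented, while a usable region field
// may or may not be present depending on the deployment.
type instanceMetadata struct {
	AvailabilityZone string `json:"availability_zone"`
}

// locationHint prefers the hook environment variable juju already provides,
// then falls back to the Nova metadata service.
func locationHint() (string, error) {
	if az := os.Getenv("JUJU_AVAILABILITY_ZONE"); az != "" {
		return az, nil
	}
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://169.254.169.254/openstack/latest/meta_data.json")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var md instanceMetadata
	if err := json.NewDecoder(resp.Body).Decode(&md); err != nil {
		return "", err
	}
	return md.AvailabilityZone, nil
}

func main() {
	hint, err := locationHint()
	if err != nil {
		fmt.Println("could not determine location:", err)
		return
	}
	fmt.Println("availability zone:", hint)
}
```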
rick_h_ | hml: do you know if there's any secret way to get the OS region out from an instance that's running? | 18:37 |
hml | rick_h_: with juju or any tool? | 18:37 |
rick_h_ | hml: bonus points with Juju from the running instance, secondary points for any tool that the charm can be updated to leverage | 18:38 |
hml | rick_h_: it could be that the controller region is all you’ll get. i’m not sure there is a way to change the region once the controller is bootstrapped?? | 18:50 |
rick_h_ | hml: no, but how would you get the controller region from within an instance in a charm? | 18:50 |
hml | rick_h_: the openstack commands apply to one Region - that you define in the environment variables or when running the command | 18:50 |
hml | rick_h_: checking something | 18:53 |
hml | rick_h_: out of curiosity, why would a charm need to know? | 18:54 |
rick_h_ | hml: seems that the openstack integrator charm needs to know the region of the instance for something in k8s. | 18:54 |
rick_h_ | hml: I'm still wondering then if the info out of juju trust (the credentials) will be limited to the one region that the model is in or if it'll provide a list as an answer if there's > 1 region on the cloud. | 18:55 |
hml | rick_h_: i think it’d have to be… creds only allow for 1 region | 18:55 |
rick_h_ | hml: ok, I couldn't find a way to confirm that. In looking at the discoverregions code I saw it could get a list. | 18:56 |
hml | rick_h_: but i’m wrong | 18:56 |
hml | rick_h_: creds don’t include a region | 18:56 |
rick_h_ | hml: oh? is it the cloud that's reading the OS_REGION_NAME then? | 18:56 |
hml | rick_h_: yes | 18:57 |
rick_h_ | hml: gotcha, ugh | 18:57 |
hml | rick_h_: so far the only place i found it was in the bootstrap-params on the controller | 19:01 |
hml | but any charm wouldn’t have access | 19:01 |
magicaltrout | rick_h_: yolo | 19:32 |
rick_h_ | magicaltrout: doing dad stuff. What's up? | 19:34 |
magicaltrout | doesn't matter, just looking for some advice on the Openstack LXD conjure-up stuff | 19:37 |
rick_h_ | magicaltrout: how so? | 20:04 |
rick_h_ | magicaltrout: the nova-lxd stuff? | 20:05 |
magicaltrout | sorry yeah rick_h_ | 21:04 |
magicaltrout | whatever I do | 21:04 |
magicaltrout | nova-compute moans about image not found | 21:04 |
babbageclunk | thumper: have space for a review? https://github.com/juju/juju/pull/9161 | 21:18 |
rick_h_ | magicaltrout: oh hmm. Yea not sure on that. Might have to get with the openstack folks. | 21:19 |
babbageclunk | Ooh, wallyworld, can I get a review? https://github.com/juju/juju/pull/9161 | 22:13 |
wallyworld | sure | 22:13 |
babbageclunk | thanks | 22:20 |
wallyworld | babbageclunk: done | 22:30 |
babbageclunk | wallyworld: awesome, thanks | 22:32 |
veebers | wallyworld: I added a type UnitCloudContainer to cloudcontainer.go, cribbing off UnitAgent (as per convo from last night). I didn't think adding State to cloudContainer would be ideal when it's only needed for the status bits, not the provider/address/ports etc. | 23:49 |
veebers | ah shoot, the other car still has a dead battery. Being stuck at home sucks. /me puts it on charge | 23:51 |