[02:38] few test fixes for someone https://github.com/juju/juju/pull/10916 https://github.com/juju/juju/pull/10922 https://github.com/juju/juju/pull/10923
[03:11] hpidcock: approved the two quick ones!
[03:11] still looking at the pregen certs one
[03:11] babbageclunk: danke
[04:18] hpidcock: also approved the certs one
[04:19] thank you!
[08:49] can someone confirm that juju is not using any code related to nova-network for the openstack provider?
[08:50] i checked the code and i think we are OK, but just wanted to double-check with you since nova-network will be completely removed in the next release of nova
[09:04] sahid: thank you for checking
[09:05] sahid: if you are able, could you please ask on the discourse forum?
[09:05] sahid: that way team members who are currently offline will have a chance to check
[09:07] timClicks: sure I will do that
[09:45] manadart, you got time for a CR https://github.com/juju/juju/pull/10881
[09:48] stickupkid: Yep.
[10:01] stickupkid: Done.
[10:01] manadart, need to fix the tests fyi
[12:24] Need a tick on a forward-merge: https://github.com/juju/juju/pull/10925
[12:27] manadart, tick
[12:28] stickupkid: Ta.
[12:33] morning party folks
[12:35] /join #maas
[13:39] I think juju deploy is installing a different version of the charm than the one in the local build directory. The version file and codetree-collect-info.yaml in the local charms dir have a different hash.
[13:40] what could be causing that?
[13:40] skay: reactive charm that needs to be deployed from the build dir?
[13:41] or some missing path expectation?
[13:41] rick_h: reactive charm that needs to be deployed from the build dir
[13:56] the normal workflow uses mojo. the path to the charms dir is set up by mojo and the charms are collected there via a collect file in my spec.
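For context on the build-dir question above: a bundle that deploys a charm straight from a local build directory rather than the store typically looks like the sketch below. The application name and paths are hypothetical, not skay's actual spec:

```yaml
# Hypothetical bundle.yaml for deploying a locally built reactive charm.
# The charm: field points at the build output directory instead of a
# "cs:" charm-store identifier.
series: trusty
applications:
  my-app:
    charm: /home/user/charms/builds/my-app   # local build dir
    num_units: 1
```

If `juju deploy ./bundle.yaml` resolves the charm path to somewhere else (e.g. an older mojo-collected copy of the charm), the deployed version file and the one in the build directory will have different hashes, as described above.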
[13:57] while trying to figure this out I've also done it by factoring mojo out of the equation
[14:17] nammn_de: so those are the old canonistack region urls
[14:17] nammn_de: I've forwarded you the email on the new region, I'd suggest we update to use that
[14:17] nammn_de: let me know if that'll work or not
[14:17] cool, can do that
[14:18] rick_h: should they work, if I try them locally over vpn?
[14:18] nammn_de: but yea, I think we just had test data that's on old cloud endpoints and we have to update for the changes
[14:18] nammn_de: yep, over vpn should be good
[14:18] rick_h: because of the yaml in jenkins/I couldn't reach any of those endpoints afaict
[14:19] nammn_de: yea, I think they're shut down
[14:19] nammn_de: or moved
[14:19] nammn_de: so have to update to the current ones
[14:19] rick_h: cool, will update
[14:21] achilleasa: if we can, anywhere we have some sort of interval we should be doing backoff if it might cause any sort of issue
[14:21] achilleasa: just one of those lessons: when debugging something that's bouncing, without the backoff it's hard to recover
[14:21] rick_h: and for manual-xenial-amd64, finfolk-vmaas and vsphere? They don't seem to work either
[14:21] nammn_de: sounds like achilleasa sent you notes on vsphere, is that IP still correct?
[14:22] nammn_de: and for the vmaas I'm not sure about. Have to check our infra, probably listed in cloud-city details somewhere.
[14:22] rick_h: yeah, should be. I can use those as long as jenkins has the creds for them internally
[14:23] nammn_de: in the end this is just to test you can add multiple clouds, right?
[14:23] rick_h: yeah, I can just steal some from clouds.yaml or just remove some
[14:23] nammn_de: so I like canonistack as it has a couple of regions as well, but honestly we just need to verify a few clouds. Find a few that work and run with it.
[14:23] nammn_de: exactly, I don't think we have to overdo it
[14:24] nammn_de: this was just easy for the original test.
let's keep it easy as long as the test validation is still legit
[14:24] skay: how did you take mojo out of the equation?
[14:24] Just wanted to make sure that we align here. Yeah, didn't plan to add additional complexity. Will do
[14:26] rick_h: I'll do this all again momentarily to make sure I remember it right. I didn't take as good notes as I should have yesterday. but, from memory...
[14:28] rick_h: locally, to run things without mojo, I have env variables set up for my charm repo. I build the charm from my clone. I have a bundle file that specifies the charm in the build directory and run `juju deploy` on that bundle
[14:30] re mojo, I took different steps yesterday to make sure it wasn't somehow pulling from an old checkout. in the /path/to/mojo/workspace I removed spec/, build/, charms/.
[14:31] rick_h: do you have more troubleshooting tips to try?
[14:35] rick_h: in that particular PR, implementing a backoff inside the watcher will not be easy and I think it might cause more problems than it actually solves (I have proposed a way to do it, if we really really want to, in the PR comments)
[14:37] achilleasa: ok, it's something I'd really like jam's signoff on in that case
[14:37] achilleasa: thanks for the notes
[14:37] skay: version of Juju?
[14:38] rick_h: 2.6.10-trusty-amd64
[14:38] rick_h: yeap, not in a hurry to merge that one (still have to fix the agent test I was talking about)
[14:38] achilleasa: +1
[14:39] skay: hmmm, we started with the effort to have the charm build process add a version file noting git/vcs info in the charm
=== narindergupta is now known as narinderguptamac
[14:39] skay: might check if that's there and match the sha up
[14:39] skay: but not sure, lots of moving parts in this situation
[14:39] rick_h: yeah, see above. they don't match.
[14:39] rick_h: nor do the hashes in the codetree yaml file
[15:15] can actions be run from within a charm's config file that is used during deploy?
[15:15] pmatulis: no, charms don't have creds to call actions themselves
[15:16] pmatulis: they're operator triggered
[15:16] rick_h, thank you sir
[15:16] pmatulis: if the charm wants to share some code they can manage that internally and wire it up into the install hook and such
[15:16] pmatulis: but that's just code at that point vs actions/not actions
[16:03] hml, CR https://github.com/juju/juju/pull/10926
[16:04] stickupkid: any rush on this?
[16:05] hml, so this fixes the controller test in 2.7
[16:05] hml, no rush
[16:05] hml, helps if i base it on the correct branch :|
[16:33] stickupkid: trade ya: https://github.com/juju/juju/pull/10927. gave up on unit test, integration test proves fix
[16:34] hml, deal
[16:34] hml, should target 2.7 right though?
[16:34] stickupkid: yes. oops
[16:35] stickupkid: fixed
[16:38] hml, done
[21:21] is it possible to use juju scp as the root user? I'm encountering a permissions (?) issue
[21:21] ERROR exit status 1 (Please login as the user "NONE" rather than the user "root".)
[21:25] timClicks: no, the ssh key is only on the ubuntu user
[21:26] timClicks: you'd have to scp it there and then juju run to mv it
[21:26] rick_h: okay, good to know
[21:26] timClicks: what are you copying? worth it just to juju scp it to /tmp and then root has access?
[21:27] i'm writing a relations tutorial and want to scp before/after versions of config files that have changed
[21:27] i'll just do the work via juju run, rather than locally client-side
[21:29] timClicks: ah yea, might just juju run --wait -- cat ....
[21:47] fwiw we've just had a request for ipv6 support in the #cdk slack channel
[23:37] part 1 of a new tutorial series on relations is up https://discourse.jujucharms.com/t/what-is-a-juju-relation-and-what-purpose-do-they-serve/2347