[02:38] <hpidcock> few test fixes for someone https://github.com/juju/juju/pull/10916 https://github.com/juju/juju/pull/10922 https://github.com/juju/juju/pull/10923
[03:11] <babbageclunk> hpidcock: approved the two quick ones!
[03:11] <babbageclunk> still looking at the pregen certs one
[03:11] <hpidcock> babbageclunk: thanks
[04:18] <babbageclunk> hpidcock: also approved the certs one
[04:19] <hpidcock> thankyou!
[08:49] <sahid> can someone confirm for me that juju is not using any code related to nova-network for the openstack provider?
[08:50] <sahid> i checked the code and i think we are OK, but just wanted to double-check with you since nova-network will be completely removed in the next release of nova
[09:04] <timClicks> sahid: thank you for checking
[09:05] <timClicks> sahid: if you are able, could you please ask on the discourse forum?
[09:05] <timClicks> sahid: that way team members who are currently offline will have a chance to check
[09:07] <sahid> timClicks: sure I will do that
[09:45] <stickupkid> manadart, you got time for a CR https://github.com/juju/juju/pull/10881
[09:48] <manadart> stickupkid: Yep.
[10:01] <manadart> stickupkid: Done.
[10:01] <stickupkid> manadart, need to fix the tests fyi
[12:24] <manadart> Need a tick on a forward-merge: https://github.com/juju/juju/pull/10925
[12:27] <stickupkid> manadart, tick
[12:28] <manadart> stickupkid: Ta.
[12:33] <rick_h> morning party folks
[13:39] <skay> I think juju deploy is installing a different version of the charm than the one in the local build directory. The version file and codetree-collect-info.yaml in the local charms dir have a different hash.
[13:40] <skay> what could be causing that?
[13:40] <rick_h> skay: reactive charm that needs to be deployed from the build dir?
[13:41] <rick_h> or some missing path expectation?
[13:41] <skay> rick_h: reactive charm that needs to be deployed from the build dir
[13:56] <skay> the normal workflow uses mojo. the path to the charms dir is set up by mojo and the charms are collected there via a collect file in my spec.
[13:57] <skay> while trying to figure this out I've also done it by factoring mojo out of the equation
[14:17] <rick_h> nammn_de:  so those are the old canonistack region urls
[14:17] <rick_h> nammn_de:  I've forwarded you the email on the new region, I'd suggest we update to use that
[14:17] <rick_h> nammn_de:  let me know if that'll work or not
[14:17] <nammn_de> cool, can do that
[14:18] <nammn_de> rick_h: should they work, if I try them locally over vpn?
[14:18] <rick_h> nammn_de:  but yea, I think we just had test data that's on old cloud endpoints and we have to update for the changes
[14:18] <rick_h> nammn_de:  yep, over vpn should be good
[14:18] <nammn_de> rick_h: because of the yaml in jenkins / I couldn't reach any of those endpoints afaict
[14:19] <rick_h> nammn_de:  yea, I think they're shut down
[14:19] <rick_h> nammn_de:  or moved
[14:19] <rick_h> nammn_de:  so have to update to the current ones
[14:19] <nammn_de> rick_h: cool will update
[14:21] <rick_h> achilleasa:  if we can, anywhere we have some sort of interval we should be doing backoff if it might cause any sort of issue
[14:21] <rick_h> achilleasa:  just one of those lessons from debugging something bouncing: without the backoff it's hard to recover
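The backoff pattern rick_h is describing can be sketched in shell; the function name and the delay schedule here are illustrative, not juju code — the point is just that each failed attempt doubles the wait instead of retrying at a fixed interval.

```shell
#!/bin/sh
# Sketch of retry with exponential backoff (illustrative, not juju code):
# on each failure, double the wait before the next attempt so a bouncing
# service isn't hammered at a fixed interval.
retry_with_backoff() {
    max_attempts=$1; shift
    delay=1
    attempt=1
    while ! "$@"; do
        [ "$attempt" -ge "$max_attempts" ] && return 1
        sleep "$delay"
        delay=$((delay * 2))        # 1s, 2s, 4s, ...
        attempt=$((attempt + 1))
    done
}
```

Usage would be e.g. `retry_with_backoff 5 curl -fsS "$endpoint"`.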
[14:21] <nammn_de> rick_h: and for manual-xenial-amd64, finfolk-vmaas and vsphere?  They don't seem to work either
[14:21] <rick_h> nammn_de:  sounds like achilleasa sent you notes on vsphere, is that IP still correct?
[14:22] <rick_h> nammn_de:  and for the vmaas I'm not sure about. Have to check our infra, probably listed in cloud-city details somewhere.
[14:22] <nammn_de> rick_h: yeah should be, I can use those as long as jenkins internally has the creds for them
[14:23] <rick_h> nammn_de: in the end this is just to test you can add multiple clouds right?
[14:23] <nammn_de> rick_h: yeah, I can just steal some from clouds.yaml or just remove some
[14:23] <rick_h> nammn_de:  so I like the canonistack as it has a couple of regions as well, but honestly we just need to verify a few clouds. Find a few that work and run with it.
[14:23] <rick_h> nammn_de:  exactly, I don't think we have to over do it
[14:24] <rick_h> nammn_de:  this was just easy for the original test. let's keep it easy as long as the test validation is still legit
[14:24] <rick_h> skay:  how did you take mojo out of the question?
[14:24] <nammn_de> Just wanted to make sure that we align here. Yeah, didn't plan to add additional complexity. Will do
[14:26] <skay> rick_h:  I'll do this all again momentarily to make sure I remember it right. I didn't take as good of notes as I should have yesterday. but, from memory...
[14:28] <skay> rick_h: locally to run things without mojo I have env variables set up for my charm repo. I build the charm from my clone. I have a bundle file that specifies the charm in the build directory and run `juju deploy` on that bundle
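A minimal bundle along the lines skay describes might look like this (the application name and build path are illustrative):

```yaml
# bundle.yaml — points juju at the locally built charm rather than the store
applications:
  my-charm:
    charm: ./builds/my-charm   # the charm-build output directory
    num_units: 1
```

followed by `juju deploy ./bundle.yaml`.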
[14:30] <skay> re mojo, I took different steps yesterday to make sure it wasn't somehow pulling from an old checkout. in the /path/to/mojo/workspace I removed spec/, build/, charms/.
[14:31] <skay> rick_h: do you have more troubleshooting tips to try?
[14:35] <achilleasa> rick_h: in that particular PR implementing a backoff inside the watcher will not be easy and I think might cause more problems than it actually solves (I have proposed a way to do it if we really really want to in the PR comments)
[14:37] <rick_h> achilleasa:  ok, it's something I'd really like jam's signoff on in that case
[14:37] <rick_h> achilleasa:  thanks for the notes
[14:37] <rick_h> skay:  version of Juju?
[14:38] <skay> rick_h: 2.6.10-trusty-amd64
[14:38] <achilleasa> rick_h: yeap, not in a hurry to merge that one (still have to fix the agent test I was talking about)
[14:38] <rick_h> achilleasa:  +1
[14:39] <rick_h> skay:  hmmm, we started an effort to have the charm build process add a version file noting the git/vcs info in the charm
[14:39] <rick_h> skay:  might check if that's there and match the sha up
[14:39] <rick_h> skay:  but not sure, lots of moving parts in this situation
[14:39] <skay> rick_h: yeah, see above. they don't match.
[14:39] <skay> rick_h: nor do hashes in the codetree yaml file
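One way to check rick_h's suggestion mechanically is to pull the unit's copy of the version file and compare it against the local build's. The helper below is a small sketch (not part of juju or mojo), and the unit name and paths in the usage comment are illustrative assumptions.

```shell
#!/bin/sh
# check_versions FILE1 FILE2 — prints "match" when the recorded shas agree
# (a small helper for comparison; not part of juju or mojo).
check_versions() {
    if [ "$(cat "$1")" = "$(cat "$2")" ]; then
        echo "match"
    else
        echo "MISMATCH"
    fi
}

# Against a live unit it might be used like this (hypothetical names/paths):
#   juju run --unit my-charm/0 -- cat "$CHARM_DIR/version" > /tmp/deployed-version
#   check_versions ./builds/my-charm/version /tmp/deployed-version
```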
[15:15] <pmatulis> can actions be run from within a charm's config file that is used during deploy?
[15:15] <rick_h> pmatulis:  no, charms don't have creds to call actions themselves
[15:16] <rick_h> pmatulis:  they're operator triggered
[15:16] <pmatulis> rick_h, thank you sir
[15:16] <rick_h> pmatulis:  if the charm wants to share some code they can manage that internally and wire it up into the install hook and such
[15:16] <rick_h> pmatulis:  but that's just code at that point vs actions/not actions
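rick_h's point about sharing code could be sketched as one shipped shell function that both the install hook and an action script call, so the operator-triggered path and the charm's own path run identical code. The file layout and function name below are illustrative, not a juju convention.

```shell
#!/bin/sh
# scripts/common.sh — illustrative shared logic; both hooks/install and an
# action script would source this file and call the same function, so the
# same code runs whether the charm or an operator triggers it.
do_setup() {
    echo "configuring service"
}
```

hooks/install and the action script would each contain something like `. "$CHARM_DIR/scripts/common.sh" && do_setup` (paths assumed for illustration).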
[16:03] <stickupkid> hml, CR https://github.com/juju/juju/pull/10926
[16:04] <hml> stickupkid: any rush on this?
[16:05] <stickupkid> hml, so this fixes the controller test in 2.7
[16:05] <stickupkid> hml, no rush
[16:05] <stickupkid> hml, helps if i base it on the correct branch :|
[16:33] <hml> stickupkid: trade ya:  https://github.com/juju/juju/pull/10927. gave up on unit test, integration test proves fix
[16:34] <stickupkid> hml, deal
[16:34] <stickupkid> hml, should target 2.7 right though?
[16:34] <hml> stickupkid: yes.  oops
[16:35] <hml> stickupkid: fixed
[16:38] <stickupkid> hml, done
[21:21] <timClicks> is it possible to use juju scp as the root user? I'm encountering a permissions (?) issue
[21:21] <timClicks> ERROR exit status 1 (Please login as the user "NONE" rather than the user "root".)
[21:25] <rick_h> timClicks:  no, the ssh key is only on the ubuntu user
[21:26] <rick_h> timClicks:  you'd have to scp it there and then juju run to mv it
[21:26] <timClicks> rick_h: okay, good to know
[21:26] <rick_h> timClicks:  what are you copying? worth it just to juju scp it to /tmp and then root has access?
[21:27] <timClicks> i'm writing a relations tutorial and want to scp before/after versions of config files that have changed
[21:27] <timClicks> i'll just do the work via juju run, rather than locally client-side
[21:29] <rick_h> timClicks:  ah yea, might just juju run --wait -- cat ....
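The workaround rick_h outlines — land the file in /tmp as the ubuntu user, then move it into place with juju run — can be wrapped in a small helper. The function name, unit name, and paths here are illustrative; the assumption is that `juju run --unit` has enough privilege on the machine to move the file into root-owned locations.

```shell
#!/bin/sh
# Sketch of the scp-then-move workaround (names and paths illustrative):
# juju scp only authenticates as the "ubuntu" user, so copy the file to
# /tmp first, then use juju run to move it into its final location.
scp_via_tmp() {
    unit=$1; src=$2; dest=$3
    tmp="/tmp/$(basename "$src")"
    juju scp "$src" "$unit:$tmp" &&
        juju run --unit "$unit" -- mv "$tmp" "$dest"
}
```

Usage would look like `scp_via_tmp my-charm/0 ./foo.conf /etc/foo.conf`.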
[21:47] <timClicks> fwiw we've just had a request for ipv6 support in the #cdk slack channel
[23:37] <timClicks> part 1 of a new tutorial series on relations is up https://discourse.jujucharms.com/t/what-is-a-juju-relation-and-what-purpose-do-they-serve/2347