[00:34] <veebers> thumper: It should talk about how it's used by the tests (not juju) to store the details for substrates to test against
[01:16] <thumper> veebers: do you have a few minutes
[01:16] <veebers> thumper: I do
[01:16] <thumper> veebers: HO?
[01:16] <veebers> thumper: sounds good, up release-call?
[01:16] <thumper> sure
[01:21] <wallyworld> anastasiamac: small one https://github.com/juju/juju/pull/9085
[01:22]  * anastasiamac looking
[01:24] <anastasiamac> wallyworld: lgtm as long as 'hooks' dir is created.. m guessing it is otherwise it would not have worked for u :D
[01:24] <wallyworld> the python code does all that
[01:24] <wallyworld> you just need to assign the hooks
[01:24] <anastasiamac> yep. i assumed so :) thanks for a quick fix!!
[01:24] <wallyworld> np, i should do the same fix for the other charm in the same pr
[01:24] <wallyworld> the constraints one
[01:25] <wallyworld> will be the same thing
[01:25] <anastasiamac> yes, all charms should have it now...
[01:25] <anastasiamac> (unless they r testing the actual failure to deploy invalid charms, and I do not think we have ci tests for that, only unit ones)
[01:36] <veebers> wallyworld, thumper it seems the upgrade test failure for 2.4 is legit, upgrade commands states to use proposed, controller logs show it's looking in released: https://pastebin.canonical.com/p/NG8PRCg26f/
[01:37] <wallyworld> shit eh
[01:38] <wallyworld> lucky we have tests
[01:46] <veebers> I've noted in the doc, moving on to the next one
[01:48] <veebers> team, what's the haps with https://bugs.launchpad.net/juju/+bug/1782803 (just noticed it as I was filing a bug)
[01:48] <mup> Bug #1782803: juju 2.4.1: juju status failure <cdo-qa> <cdo-qa-blocker> <foundation-engine> <juju:New> <Juju Wait Plugin:Invalid> <https://launchpad.net/bugs/1782803>
[01:48] <veebers> just noticed it was critical*
[01:55] <wallyworld> veebers: from memory we told them it wasn't for juju to retry
[01:55] <wallyworld> if it's the one i'm thinking of
[01:56] <wallyworld> anastasiamac: i pushed a couple of small fixes for the other 2 CI failures
[01:56] <anastasiamac> veebers: it was marked as New overnight but it should not be critical
[01:57] <veebers> wallyworld: ok, the bug has been updated 6 hours ago. It might not be clear there as it's still marked crit
[01:57] <veebers> ack anastasiamac ^^
[01:57] <anastasiamac> veebers: it's something with their setup and yes, wallyworld is right - not on us
[01:58] <wallyworld> looks like they've reopened it with logs attached
[01:59] <wallyworld> it can be looked at but IMO we'll push back as not a release stopper
[01:59] <anastasiamac> wallyworld: m not convinced that the api change is needed but m not too attached to it :D so my +1 still stands unless u want multiple +1 from me :D
[02:00] <wallyworld> anastasiamac: what api change?
[02:00] <anastasiamac> "zip file spec 4.4.17.1 says that separators are always "/" even on Windows."
[02:00] <wallyworld> ok that. that's why the unit tests are failing on windows
[02:00] <wallyworld> we were looking for a hooks\install
[02:00] <anastasiamac> wallyworld: oh ic... good to know
[02:01] <wallyworld> i have not rerun the windows unit tests but that *has* to be the reason i think
[02:01] <wallyworld> we'll see soon enough
[02:01] <anastasiamac> :)
[02:02] <veebers> kelvinliu__: nice work with the enable-condition
[02:04] <wallyworld> kelvinliu__: has that fix above landed? if so i'll strike out the issue in the doc
[02:05]  * thumper groans
[02:06] <thumper> veebers: the test failed with an unrelated failure AFAICT
[02:06] <kelvinliu__> veebers, wallyworld just in 1:1 meeting with Tim. going to land it now,
[02:06] <wallyworld> gr8 ty
[02:07] <veebers> thumper: which test
[02:07] <kelvinliu__> veebers, I deployed the RunFunctionaltests-amd64 job with the fix, and tested
[02:07] <veebers> thumper: ah log rotation right
[02:07] <thumper> yeah...
[02:07] <veebers> kelvinliu__: sweetbix
[02:08] <thumper> I'm just deploying the charm myself locally and testing that way
[02:08] <kelvinliu__> wallyworld, landed and tested. going to re-test crd now
[02:08] <wallyworld> ty
[02:09] <veebers> thumper: what was the new failure?
[02:10] <thumper> https://pastebin.canonical.com/p/rQfKs22C4d/
[02:12] <veebers> thumper: you weren't expecting "ERROR:root:Wrong unit name: Expected: /var/log/juju/machine-0.log, actual: /var/log/juju/machine-lock.log" ?
[02:14] <thumper> veebers: oh, I was just looking at the last error...
[02:15] <veebers> thumper: ack, that last failure is probably jujupy choking because you used --existing and it screwed up and got confused :-|
[02:15] <thumper> ah
[02:26] <thumper> wallyworld: quick call?
[02:27] <wallyworld> ok
[02:27] <thumper> wallyworld: release call HO?
[02:31] <thumper> wallyworld: https://github.com/juju/juju/pull/9086
[02:50] <veebers> lxc list
[02:50] <veebers> lol, wrong window
[02:58] <anastasiamac> veebers: lol :) at least no password... we've all put our password into irc chat at least once :)
[02:58] <veebers> hah, I have done that too :-P
[02:59] <veebers> or perhaps 'lxc list' *is* my password >_>
[03:00] <anastasiamac> hmmm k that would b pretty sad pass phrase :) altho m not better - i usually use song lyrics as my pass phrases :)
[03:01] <anastasiamac> like a variation on 'a spoonful of sugar' :D
[03:06] <veebers> ^_^
[03:08] <veebers> ok, I'm redoing how we do the manual provider test, it's silly how we're currently doing it
[03:13] <babbageclunk> Did someone clean up the GCE addresses? The quota is saying 4/23 in use.
[03:15] <veebers> babbageclunk: I didn't, is it split by region?
[03:16] <babbageclunk> veebers: I think so, but this is for the us-central1 region that's in the error.
[03:16] <babbageclunk> huh, curiouser and curiouser.
[03:16] <veebers> babbageclunk: hmm odd, perhaps it was a perfect storm and there were heaps of jobs running in that region at the time and we got unlucky and ran out
[03:17] <babbageclunk> Yeah, maybe.
[03:19] <veebers> babbageclunk: could be worth checking what regions are used in tests and perhaps manually distributing them out a bit?
[03:20] <babbageclunk> veebers: ok, just looking at the job config to understand what it's doing.
[03:20] <veebers> babbageclunk: heh, let me know if you need anything clarified :-) Most of the job configs are setup; the test run is a single build step
[03:22] <babbageclunk> Thanks, I'll have a go at working it out first before roping you in! :)
[03:22] <veebers> is it possible to set a UserKnownHostsFile option for juju (i.e. ssh option)?
[03:51] <wallyworld> veebers: juju help ssh says yes
[03:52] <wallyworld> i assume you are talking about for running juju ssh
[03:52] <veebers> wallyworld: I meant for everything ssh that juju does (i.e. with a manual provider how it gets into the machine)
[03:53] <wallyworld> oh, juju use of ssh internally. i think that's all fixed
[03:53] <veebers> it's ok I've gone with a different approach that'll work. It's just not so fancy
[03:53] <wallyworld> fixed as in hard coded
[03:53] <veebers> wallyworld: ack, thanks for confirming. I've got something working though
[03:54] <veebers> (the reason was: I was 'lxc copy'-ing new machines from a base, but needed to auth them to ssh in; using a generated known_hosts key would work, but I need to set which file that actually is).
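For the UserKnownHostsFile question, a plain OpenSSH config fragment is one way to pin a generated known_hosts file for test machines. The host pattern and path below are illustrative, and this only covers ssh invocations that honour ~/.ssh/config; juju's internal ssh handling, as noted in the thread, may be hard-coded:

```
# ~/.ssh/config fragment (illustrative) pinning a known_hosts file
# for containers copied from a base image
Host 10.0.8.*
    UserKnownHostsFile /tmp/test-known-hosts
    StrictHostKeyChecking yes
```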
[03:54] <wallyworld> veebers: i left a comment on that upgrade bug - not something we can fix quickly / easily sadly IIANM
[03:54] <veebers> I've since created manual tests for the different clouds and locked down which IPs they start with. The lxd network management seems pretty nifty https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
[03:55] <veebers> wallyworld:  oh, :-(
[03:55] <veebers> wallyworld: it worked previously though right?
[03:55] <wallyworld> not that i can see
[03:55] <wallyworld> i can't have
[03:55] <wallyworld> it
[03:56] <veebers> ah ok
[03:56] <veebers> oh, it works in develop though
[03:56] <wallyworld> simple controller model works
[03:56] <wallyworld> but not agents on machines
[03:56] <wallyworld> probably broken in devel too, or not?
[03:57] <wallyworld> need to check but if it works in develop my theory is wrong
[03:59] <wallyworld> veebers: is the pexpect() stuff a substring match? eg does child.expect('(?i)password') match "some text here password:"
[04:02] <veebers> wallyworld: that test is green for develop branch (upgrades)
[04:03] <veebers> wallyworld: re: pexpect, should just be regex IIRC
[04:03] <wallyworld> it could be green because the agent binaries get cached
[04:03] <wallyworld> so my theory could be wrong. the code looks correct though
[04:04] <wallyworld> for the controller we use the supplied agent stream
[04:05] <veebers> wallyworld: FYI https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.expect
[04:06] <wallyworld> hmmm, that test should work then
[04:07] <wallyworld> unless it needs ^.* etc
[04:09] <veebers> we should make it as promiscuous as possible, we only care if it's asking for a password
[04:12] <veebers> wallyworld: FYI I found a 2.4 branch run of the upgrade test that passed: http://10.125.0.203:8080/job/nw-upgrade-juju-amd64-lxd/199/console (2.4-rc2)
[04:12] <wallyworld> veebers: it could be the error is misleading then
[04:13] <wallyworld> the agents will only look in release streams
[04:13] <wallyworld> but if the controller has been done successfully first, the agents will be cached
[04:14] <veebers> although this one fails as we're seeing now: http://10.125.0.203:8080/job/nw-upgrade-juju-amd64-lxd/233/console
[04:25] <thumper> I can't seem to get the dbLog feature tests that fail intermittently to fail on my machine at all
[04:26] <wallyworld> veebers: if my reading of the pexpect doc is correct, our test is broken. http://pexpect.sourceforge.net/pexpect.html#spawn-expect seems to say that expect("bar") will not match "foobar". so our expect("password") will not match "Enter a password:"
[04:29] <veebers> wallyworld: huh, that seems to be the case if we're just passing in the string. We could pass in a compiled regex instead
[04:29] <veebers> is the test really just using ("password")? that sucks
[04:30] <wallyworld> child.expect('(?i)password')
[04:30] <wallyworld> which hopefully is treated as an uncompiled regex
[04:31] <wallyworld> all other usages seem to do the right thing and use the whole prompt
[04:32] <wallyworld> eg
[04:32] <wallyworld> child.expect('Enter client-email:')
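For what it's worth, the regex semantics in question can be checked directly with Python's re module. pexpect compiles plain-string patterns with re under the hood; whether a given pexpect version then applies a substring-style search or something stricter to the stream buffer is the open question in the thread, so this only illustrates re.search itself:

```python
import re

# The pattern from the test under discussion: case-insensitive "password".
pattern = re.compile('(?i)password')
prompt = "Enter a password:"

# search() finds the pattern anywhere in the text (substring match)...
loose = pattern.search(prompt) is not None
# ...and matching the exact prompt, as the other tests do, also works.
exact = re.search('Enter a password:', prompt) is not None
```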
[04:32] <babbageclunk> did someone delete that GCE quota?
[04:32] <wallyworld> not me said the duck
[04:32] <veebers> babbageclunk: I haven't touched it
[04:32] <babbageclunk> Weird, it's not listed on the quota page anymore. :/
[04:33] <veebers> wallyworld: "Strings will be compiled to re types"
[04:33] <veebers> babbageclunk: that's really odd
[04:33] <kelvinliu__> wallyworld, the crd works as expected.
[04:34] <wallyworld> veebers: ok, i'll look to follow convention elsewhere and use the exact prompt
[04:34] <wallyworld> kelvinliu__: awesome ty
[04:34] <kelvinliu__> wallyworld, np
[04:34] <veebers> wallyworld: a regex would be better surely? so we don't get tripped up by minor text changes
[04:35] <wallyworld> our preferred convention elsewhere (in juju also) is to use exact text
[04:35] <wallyworld> so we get breakages
[04:35] <wallyworld> so we think about the consequences of changing
[04:35] <wallyworld> and also so we can see when error messages are dumb
[04:36] <wallyworld> if you just match on a small regexp, you miss things like "could not do this because: could not do this: because could not do it" etc
[04:37] <veebers> wallyworld: ack
[04:37] <veebers> good point
[04:39] <veebers> thumper: it seems like the commands in that job are failing which feeds bad input into the next command. one sec I'll line something up
[04:48]  * thumper nods
[04:59] <veebers> vinodhini: looks like the timeout extension worked, it needed an extra 10 minutes apparently
[04:59] <veebers> 100 minutes is a long time for that test though, maybe there is an issue with azure-arm. Did you try a different region too? Perhaps the default we use is slow etc.
[04:59] <vinodhini> i didnt try diff region.
[05:00] <vinodhini> it's just the timeout period i increased first, in the default region
[05:00] <vinodhini> http://localhost:18080/job/nw-model-migration-amd64-azure-arm/647/console
[05:05] <veebers> vinodhini: I would attempt trying a different region to see if that goes faster; having a test take 1hr 40 min is a bit gross :-)
[05:07] <vinodhini> i will try with actual time period and diff region
[05:07] <vinodhini> i mean the orig time period
[05:16] <vinodhini> veebers: just a quick clarification plz correct me if i am wrong here - ENV=parallel-azure-arm -- i am setting this to a different region. and i am listing out the regions from juju list-regions azure
[05:16] <veebers> vinodhini: no, that env stays the same (it's the part that says run this test in azure-arm). just below that should be the assess_<blah> call, that should take a --region arg
[05:16] <veebers> one sec, let me check
[05:17] <wallyworld> veebers: a small PR for the pexpect fix
[05:17] <wallyworld> https://github.com/juju/juju/pull/9087
[05:17] <vinodhini> ok. i am seeing it in acceptance test assess_model_migration
[05:17] <vinodhini> i got that.
[05:18] <vinodhini> --region is the option which overrides it.
[05:18] <vinodhini> it's alright, thanks veebers
[05:18] <veebers> vinodhini: yeah --region should be there for the model migration test
[05:18] <veebers> sweet :-_
[05:18] <veebers> wallyworld: ack, looking
[05:18] <veebers> wallyworld: you've used a json query CLI tool before? something like jq or so?
[05:18] <wallyworld> i have
[05:19] <wallyworld> can't remember the syntax though
[05:19] <wallyworld> been a while but very useful
[05:20] <vinodhini> its ok. i verified in py script
[05:20] <veebers> wallyworld: ack cool I'll look it up, should be able to replace this 5-command pipe of grep/sed/head etc. ^_^
[05:20] <vinodhini> now i have set time 90 and diff region and started it
[05:20] <vinodhini> lets see
[05:20] <wallyworld> veebers: yep, i pipe from stdin etc when i used it
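The kind of grep/sed pipeline replacement being discussed is jq's bread and butter. A small sketch; the JSON shape and key names below are illustrative stand-ins, not juju's actual output schema, with the jq one-liner shown in a comment and an equivalent extraction in Python:

```python
import json

# Hypothetical blob shaped roughly like `juju show-controller --format json`
# output; the key names here are illustrative only.
doc = json.loads(
    '{"test-controller": {"details": {"api-endpoints": ["10.0.0.1:17070"]}}}'
)

# jq equivalent of the multi-stage grep/sed/head pipeline (illustrative names):
#   juju show-controller --format json \
#     | jq -r '.["test-controller"].details["api-endpoints"][0]'
endpoint = next(iter(doc.values()))["details"]["api-endpoints"][0]
```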
[05:22] <wallyworld> veebers: i thought about controller_name but that is the one bit we don't really care about that could change
[05:22] <veebers> wallyworld: ack, fair enough
[05:22] <wallyworld> and it may not be controller_name
[05:23] <wallyworld> the test should be using a different controller
[05:23] <wallyworld> for true multi-controller cmr
[05:35] <babbageclunk> Is anyone else getting complaints from gometalinter about gomock-generated files not being goimported?
[05:44] <babbageclunk> wallyworld: ^
[05:44] <veebers> I've updated the nw-bootstrap-constraints-maas-2-2 job so it should get the right input for the test, going to have tea will check back in later on.
[05:53] <wallyworld> babbageclunk: i haven't so far
[05:53] <wallyworld> kelvin added some new mocks yesterday
[05:53] <wallyworld> but they are all committed in tree
[05:54] <babbageclunk> wallyworld: I tried running it again and it went away, so I don't know what was happening there.
[05:54]  * wallyworld shrugs
[05:54]  * babbageclunk also
[06:25] <veebers> wallyworld: don't forget to propose your fixes to develop too :-)
[06:30] <vinodhini>  veebers: are u still around? i did revert back the time and changed the region and it's all good, Success.
[06:30] <vinodhini>  http://localhost:18080/job/nw-model-migration-amd64-azure-arm/648/console
[06:48] <vinodhini> wallyworld: looks like veebers is not around
[06:49] <vinodhini> i would like to know abt this azure failure, which is actually fine if we change the region.
[06:50] <stub> go go gadget gometalinter
[06:50] <vinodhini> so what should be the solution? i have made the modification directly in the Web UI
[06:59] <vinodhini> I have updated the doc.
[07:15] <wallyworld> vinodhini: not sure, i'll have to read the failure, i am not familiar with it
[07:18] <wallyworld> vinodhini: wouldn't it be better to increase the test timeout? that's what i seem to recall may have been discussed this morning
[07:43] <vinodhini> wallyworld: i was away to get some dinner.
[07:43] <vinodhini> Yes. initially i increased the timeout period and it was successful.
[07:43] <vinodhini> but veebers was asking me not to do that way
[07:52] <wallyworld> vinodhini: ok, i'm surprised at that. i'll talk to him tomorrow. just changing the region is quite fragile as that could slow down also
[07:52] <wallyworld> thanks for looking into it
[07:52] <vinodhini> its ok.
[07:53] <vinodhini> i was working on the credentials part, it was just running side by side.
[07:53] <wallyworld> good plan
[07:53] <vinodhini> this is not a potential failure. it's slow, that's why it's an issue
[07:53] <wallyworld> yeah, azure is very slow at instance creation/destroy
[07:54] <vinodhini> So we arent doing release today ?
[07:54] <vinodhini> I am sure veebers will look into the status a bit later :-)
[07:54] <wallyworld> maybe, maybe not, depends on how the other guys go with the remaining issues. i'd say not today but tomorrow if i had to guess
[07:55] <vinodhini> In this case i am not sure how to target the solution. Modifying a config option is not a fix.
[07:55] <vinodhini> So we should focus on solution.
[07:57] <vinodhini> ok. wallyworld. I am drafting a mail to you. I won't be there tomorrow morning hours as i have an appointment with the Indian consulate.
[07:59] <wallyworld> it depends on the root cause. if the substrate is slow, then increasing a timeout seems reasonable to me
[08:04] <veebers> vinodhini, wallyworld: The timeout is already 90 minutes, any more seems like a huge amount. My suggestion was to try a different region in case the original is having troubles etc.
[08:04] <wallyworld> wow 90 minutes!!!
[08:04] <wallyworld> fark
[08:05] <veebers> wallyworld: if it's still taking ages in another region there is an issue there
[08:06] <wallyworld> yeah, let's see
[08:06] <veebers> yeah, it times out after 90 :-) Takes about 1hr 45 min for a successful run
[08:07] <wallyworld> veebers: do you know the gce quota status? was that sorted?
[08:08] <veebers> wallyworld: no idea sorry. I know babbageclunk was looking. We thought perhaps it was bad timing and we had a bunch of stuff all running in the same region etc. Not sure if the suggestion to check which region is used across tests (with the thought to share it out a bit) went anywhere
[08:08] <wallyworld> ok, np
[08:10] <veebers> wallyworld: the jq way is much better: https://github.com/CanonicalLtd/juju-qa-jenkins/pull/81/files
[08:56] <wallyworld> veebers: looks good
[09:10] <veebers> wallyworld: this is an easy one: https://github.com/juju/juju/pull/9088
[09:10] <wallyworld> looking
[09:11] <wallyworld> lgtm ty
[10:56] <stickupkid> manadart: you got 5 minutes for a quick HO?
[11:04]  * stickupkid gone for lunch
[11:14] <rick_h_> morning party folks
[11:30] <rick_h_> stickupkid: morning
[11:31] <rick_h_> stickupkid: can I ask you to pause WIP and grab an issue from the release blocking doc please?
[12:11] <stickupkid> sure can
[12:12] <rick_h_> stickupkid: ty, the other side of the world cranked out a lot of notes/fixes and we need to help move forward today.
[12:12] <stickupkid> rick_h_: just reading up on the doc
[12:12] <rick_h_> stickupkid: k, let me or hml know if you have any questions/issues
[14:04] <manadart> externalreality: Approved #9084
[14:15] <externalreality> manadart, cool. I spotted that the removal of the Id field I attempted to push did not make it in. Gonna add that before attempting to land.
[15:16] <manadart> externalreality: Didn't quite get all of my PR done before EoD, but I've put it up as a WIP, if you are able to review: https://github.com/juju/juju/pull/9090
[15:27] <hml> stickupkid: quick pr pls: https://github.com/juju/juju/pull/9091
[15:28] <stickupkid> hml: done
[15:28] <hml> stickupkid: ty
[15:30] <hml> stickupkid: i’m off to long lunch shortly.  do you have anything for me to review?
[15:30] <stickupkid> hml: nope, nothing atm, just digging
[15:30] <stickupkid> pretty sure I'm just making the hole deeper
[15:30] <hml> stickupkid: ha!
[15:34] <externalreality> manadart, reviewing now
[15:38] <rick_h_> stickupkid: "I'm gonna need a bigger shovel!"
[15:39] <stickupkid> rick_h_: true
[15:52] <stickupkid> has anyone seen this recently "16:51:11 DEBUG juju.provider.common bootstrap.go:575 connection attempt for 10.156.96.10 failed: /var/lib/juju/nonce.txt does not exist" - it's been happening a couple of times today
[15:52] <stickupkid> ?
[15:53] <stickupkid> Just doing a "juju bootstrap localhost --debug" on the 2.4 branch
[15:53] <stickupkid> it works in the end, but really takes its time...
[15:57] <rick_h_> stickupkid: looks like some history https://bugs.launchpad.net/juju-core/+bug/1314682
[15:57] <mup> Bug #1314682: Bootstrap fails, missing /var/lib/juju/nonce.txt (containing 'user-admin:bootstrap') <bootstrap> <juju> <maas-provider> <juju:Expired> <juju-core:Won't Fix> <https://launchpad.net/bugs/1314682>
[15:57] <stickupkid> rick_h_: nice, i'll give that a read
[16:00] <stickupkid> rick_h_: so i guess the retry that's implemented to fix this, does work... maybe my computer was just being slow...
[16:01] <rick_h_> stickupkid: yea, not sure.
[16:01]  * stickupkid back to digging...
[20:55] <veebers> Morning o.
[20:58] <rick_h_> wheeeee
[21:20] <cory_fu> wallyworld, kelvinliu_, knobby: This call reminded me of this, if you haven't seen it: https://www.youtube.com/watch?v=JMOOG7rWTPg  :p
[21:24] <kelvinliu_> cory_fu, ^.@
[21:24] <babbageclunk> wallyworld, veebers: I had a look at the GCE quota thing. As far as I could see the quota was now fine - IP addresses in use was fluctuating between 4 and 0 when the test was running. I couldn't change the region tests were using because it's defined as us-central1 in environments.yaml. Maybe I could duplicate parallel-gce as parallel-gce-us-east1 and move some jobs to use that instead?
[21:25] <veebers> babbageclunk: using --region with an assess script should overwrite that IIRC
[21:25] <hml> babbageclunk: it looks like we may hit it when there are two ci-run going at the same time
[21:26] <hml> babbageclunk: that’s what was going on when it was hit again in run 1089
[21:26] <babbageclunk> veebers: ah, thanks - so if I change the jobs to use different regions that might avoid it? It definitely looks like a per-region quota.
[21:27] <babbageclunk> veebers: ok, I'm going to do that now.
[21:29] <babbageclunk> (dumb question, but what does the nw- prefix mean?)
[22:31] <babbageclunk> gah, my brain's stopped accepting "likelihood" as a real word.
[22:31] <babbageclunk> likeli
[22:32] <rick_h_> it does look strange written out
[22:46] <babbageclunk> veebers: can you take a look at https://github.com/CanonicalLtd/juju-qa-jenkins/pull/83 ? I've checked there are no errors from jenkins-jobs.
[22:46] <veebers> babbageclunk: can do
[22:46] <babbageclunk> ta
[22:46] <babbageclunk> After it's deployed I'll make sure to run each of the changed jobs, just in case I missed a \
[22:48] <veebers> babbageclunk: LGTM. a redeploy should be just doing nw-* so it redeploys all the functional jobs (no need to screw around cherry picking names etc.)
[22:49] <babbageclunk> veebers: more detail? Hang on, I'll read more of the readme.
[22:50] <veebers> babbageclunk: oh, your question earlier re: the nw- prefix; hah it's because while we were spinning up the new CI run bits we continued to run the original jobs; you couldn't run both at the same time as they stomped on each other (workspace/$JOBNAME is the working dir for a job). So I added nw- (new world); it was supposed to be changed when we did the roll over but never was
[22:51] <veebers> babbageclunk: ah sorry, hah yeah the arg for jenkins-jobs . . . . -r jobs/ci-run nw-*
[22:52] <babbageclunk> Ah, ok - so running `jenkins-jobs update` like in the deploying jobs section, but with a wildcard to do all the new-world jobs.
[22:52] <babbageclunk> veebers: cool, thanks!
[22:52] <veebers> babbageclunk: yep that's the one
[22:54] <babbageclunk> veebers: ok, having a go at deploying them now.
[22:55] <veebers> babbageclunk: sweet, let me know when it's done as I'm deploying and testing some changes I'm making
[23:43] <babbageclunk> how do you add a private key interactively (in juju add-credential)? Remove all the linebreaks?
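One way to sidestep pasting a multi-line key interactively is to feed add-credential a YAML file instead (`juju add-credential <cloud> -f credentials.yaml`), using a YAML block scalar to preserve the key's line breaks. A sketch; the cloud, credential, and attribute names below are illustrative, and which attributes a cloud actually accepts depends on its auth-type:

```yaml
credentials:
  mycloud:                # illustrative cloud name
    test-cred:            # illustrative credential name
      auth-type: userpass         # depends on the cloud
      private-key: |              # "|" block scalar keeps line breaks intact
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
```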