[00:05] <alexisb> anastasiamac, I will be right there
[00:05] <anastasiamac> alexisb: me 2
[00:05] <perrito666> alexisb: ill grab a bite and be back is that ok?
[00:05] <alexisb> yes perrito666
[02:16] <babbageclunk> anyone have a script for quickly opening a mongo shell on a controller?
[02:16] <veebers> babbageclunk: I think menn0 had something that I cribbed off a while ago
[02:17] <menn0> babbageclunk: yes, I do... give me a sec
[02:17] <wgrant> https://github.com/juju/juju/wiki/Login-into-MongoDB is what I normally use
[02:18]  * anastasiamac cheers for ppl using wiki \o/
[02:18] <wgrant> So I see that various Juju 1.25.7 bugs are now Fix Released, but I can't see the release anywhere. Is it mid-release, so we can expect to be able to upgrade environments in the next few days?
[02:18] <menn0> babbageclunk: http://paste.ubuntu.com/23418892/
[02:18]  * menn0 updates the wiki
[02:19] <anastasiamac> wgrant: mid-release. should go into proposed soon. just finalising release notes :D
[02:19] <wgrant> anastasiamac: Marvellous, thanks.
[02:29] <menn0> wgrant, babbageclunk, anastasiamac: i've updated and reorganised that wiki page
[02:29] <babbageclunk> menn0: Thanks!
[02:30] <anastasiamac> menn0: \o/
[02:30] <menn0> babbageclunk: also remember that "juju dump-db" often negates the need for using the mongo shell
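For reference, the wiki recipe linked above boils down to something like the following. This is a hedged sketch only: the `agent.conf` field names (`tag`, `statepassword`), the port, and the mongo flags are taken from the wiki approach and may differ across juju releases.

```shell
# Sketch of the wiki approach: run on the controller machine itself.
# Field names and mongo flags are assumptions from the wiki page,
# not verified against every juju version.
conf=$(ls /var/lib/juju/agents/machine-*/agent.conf | head -n1)
user=$(sudo awk '/^tag:/ {print $2}' "$conf")
password=$(sudo awk '/^statepassword:/ {print $2}' "$conf")
mongo --ssl --sslAllowInvalidCertificates \
  --authenticationDatabase admin \
  -u "$user" -p "$password" localhost:37017/juju
```

As menn0 notes below, `juju dump-db` often covers the same need without a shell at all.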
[02:43] <babbageclunk> Can anyone tell me why sometimes when I build juju the version number gets increased and sometimes it doesn't? Or rather: I think it used to (so rebuild/upgrade-juju would push out the new version), and now it appears not to.
[02:51] <menn0> babbageclunk: if you're deploying a local build the build number gets incremented
[02:52] <menn0> babbageclunk: and every time you run upgrade-juju with a local build the number gets incremented again
[02:53] <babbageclunk> menn0: ugh, the reason my redirection wasn't happening was because my bootstrapped controllers were using binaries from the stream instead of my local ones, until I bumped the version locally and upgrade-juju'd.
[02:53] <babbageclunk> menn0: but based on your description I don't understand why.
[02:54] <menn0> babbageclunk: if you want to be sure your local build is used, pass --build-agent to bootstrap / upgrade-juju
[02:54] <menn0> babbageclunk: i'm guessing you haven't rebased lately so your tree still has the version as 2.0.1
[02:55] <menn0> babbageclunk: which means the client will use the recently released 2.0.1 instead of your own build
[02:55] <menn0> it's bitten me enough that I always use --build-agent now
[02:55] <babbageclunk> menn0: yeah, that'll be it. I'll do that too.
[02:56] <babbageclunk> menn0: I think I'm not in the habit because it's *usually* ahead, but not in my tree at the moment.
[02:58] <babbageclunk> menn0: Hmm - this tree is rebased though. Although I think it might be based on the wrong branch?
[02:58] <menn0> babbageclunk: I suspect that's true for all of us
[02:58] <menn0> babbageclunk: we're usually ahead, but sometimes not
[02:58] <menn0> babbageclunk: and it often takes a while to figure out what's going on
[02:59] <babbageclunk> menn0: It's branched off staging at the moment.
[02:59] <menn0> that's it then
[03:00] <babbageclunk> What should I be branching off? develop?
[03:00] <menn0> babbageclunk: staging where possible
[03:01] <menn0> babbageclunk: sticking with --build-agent should avoid the issue
[03:01] <babbageclunk> menn0: ok, cool - thanks
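The build-number behaviour menn0 describes above can be sketched as a tiny version-bumping rule (a hypothetical illustration of the scheme only; juju's real logic lives in its Go source): a local build of 2.0.1 gets a build number appended to become 2.0.1.1, and each `upgrade-juju` with a local build bumps that trailing number.

```shell
# Hypothetical sketch of the local-build versioning scheme described
# above, not juju's actual implementation.
bump_build() {
  ver=$1
  case $ver in
    *.*.*.*)
      # already carries a build number: increment it
      build=${ver##*.}
      echo "${ver%.*}.$((build + 1))" ;;
    *)
      # first local build: append build number 1
      echo "$ver.1" ;;
  esac
}

bump_build 2.0.1     # -> 2.0.1.1  (first local build)
bump_build 2.0.1.1   # -> 2.0.1.2  (after upgrade-juju with a local build)
```

This is also why `--build-agent` matters: if your tree's base version lags the released one (2.0.1 here), the client may pick the released agent binaries from streams instead of your local build unless you force it.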
[03:21] <menn0> wallyworld or axw: https://github.com/juju/juju/pull/6534
[03:21] <menn0> it's big and dull (sorry)
[03:21] <wallyworld> :-(
[03:21] <wallyworld> :-)
[03:22] <menn0> wallyworld: very mechanical, you can probably skim a lot of it
[03:22] <wallyworld> will do, didn't mean the :-(
[03:23] <axw> wallyworld: I'm thinking I'll add an option to nova.Client to disable API version discovery, and then update juju 2.0 branch to set it. sound OK to you?
[03:23] <axw> wallyworld: we've got another bug: https://bugs.launchpad.net/juju/+bug/1638704
[03:23] <mup> Bug #1638704: openstack provider: if use-floating-ip=true, uses incorrect compute API endpoint to determine available floating IPs <juju:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1638704>
[03:23] <wallyworld> axw: when would we enable it again?
[03:24] <axw> wallyworld: when we use non-deprecated APIs
[03:24] <wallyworld> so all the neutron ones support the versioning?
[03:24] <axw> wallyworld: we might need to get a little bit smarter about selecting microversions too
[03:24] <wallyworld> but the nova network ones don't?
[03:25] <axw> wallyworld: the nova API has versioning, but we don't specify which microversion to use. so you get the latest one
[03:25] <axw> wallyworld: and the latest one has the old network API endpoints removed
[03:25] <wallyworld> ah
[03:26] <wallyworld> maybe we need to consider specifying micro version for 2.1 as well then? assuming we disable for 2.0.x as you say
[03:26] <axw> wallyworld: yes, probably a good idea to do that. maybe not immediately, but certainly need to keep it in mind
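Pinning a microversion explicitly, as discussed above, usually means sending Nova's `X-OpenStack-Nova-API-Version` request header with a version both sides support. A minimal sketch of that negotiation (a hypothetical helper, not goose's actual code):

```shell
# Given the client's maximum supported microversion and the server's
# advertised maximum, request the lesser of the two. sort -V gives a
# version-aware comparison; the chosen value would then be sent in
# the X-OpenStack-Nova-API-Version request header.
pick_microversion() {
  printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1
}

# e.g. a client capped at 2.1 talking to a server offering up to 2.87
pick_microversion 2.1 2.87   # -> 2.1
```

Without such pinning the server defaults as axw describes, and newer deployments have removed the old nova-network endpoints entirely.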
[03:26] <wallyworld> yup
[03:27] <babbageclunk> menn0: The client side of the redirection isn't working because it doesn't pass any credentials.
[03:27] <babbageclunk> menn0: https://github.com/juju/juju/blob/staging/juju/api.go#L79
[03:28] <babbageclunk> menn0: rogpeppe doesn't seem to be around (presumably at the sprint)
[03:28] <babbageclunk> Oh duh, he's no longer in the same timezone as me!
[03:30] <babbageclunk> menn0: I don't know enough about macaroons to understand "we'll use macaroon authentication directly without sending account details"
[03:36] <babbageclunk> menn0: I can see in the redirected login request it doesn't send any macaroons
[03:38] <menn0> babbageclunk: i've got to pick up a kid... will help when i'm back
[03:38] <babbageclunk> menn0: ok
[03:50] <axw> wallyworld: https://github.com/go-goose/goose/pull/29 <- adds method so we can revert to old behaviour in juju 2.0.x
[03:50] <wallyworld> ok
[03:50] <axw> wallyworld: https://github.com/go-goose/goose/pull/28 <- another fix related to volume attachments
[03:54] <wallyworld> axw: they both look good, thanks
[03:54] <axw> wallyworld: thanks
[03:56] <wallyworld> menn0: oh and i reviewed yours too, had a question but +1
[04:21] <menn0> wallyworld: tyvm
[04:25] <menn0> babbageclunk: so getting back to this redirect issue
[04:25] <menn0> babbageclunk: a user account can either use username/password or macaroons
[04:26] <menn0> it looks like the client code doesn't accommodate that
[04:27] <babbageclunk> menn0: I mean, I think it's right that it can't know what the user should be on the destination controller, isn't it?
[04:27] <menn0> the auth details will in fact be the same for the migration case
[04:27] <babbageclunk> menn0: ah, ok
[04:27] <menn0> actually... they probably will be
[04:27] <menn0> not guaranteed
[04:29]  * menn0 ponders
[04:29] <babbageclunk> so how can I find what macaroons should be attached to the login request? At the moment nothing gets passed.
[04:30] <menn0> babbageclunk: macaroons will already get attached if there are some
[04:30] <menn0> babbageclunk: but the admin user won't have any, it'll be using username/password
[04:31] <babbageclunk> menn0: How would I have gotten macaroons? If it'd been registered instead of being admin? I can check that.
[04:32] <menn0> babbageclunk: yes, when juju register is used you get a macaroon
[04:32] <babbageclunk> ok, I'll try that.
[04:32] <menn0> babbageclunk: there's also a way to switch the admin user to macaroons, via "juju login" I think
[04:33] <babbageclunk> I mean, I guess we need to solve the other problem too, but it'd be good to see it working with macaroons
[04:33] <menn0> babbageclunk: yeah and i'm not sure what the right answer is for the user/password case
[04:36] <menn0> babbageclunk: we don't necessarily want to be sending off passwords to arbitrary controllers
[04:37] <menn0> maybe it's ok...
[04:39] <axw> babbageclunk: also if you use "juju change-user-password", your initial password is cleared and you get a macaroon
[04:40] <menn0> axw: thanks. I was trying to remember the trick you've mentioned before
[04:40] <axw> anastasiamac: teeny weeny review please: https://github.com/juju/juju/pull/6535
[04:40] <axw> fixes 3 openstack bugs
[04:41] <menn0> babbageclunk: the more I think about it, the more I think using the username and password, if they were provided originally, is probably ok
[04:41] <babbageclunk> axw: thanks - annoyingly the sequence of things I tried seems to have left me in an unrecoverable state, other than blowing away the lxcs for my dest controller.
[04:41] <menn0> babbageclunk: not ideal but ok
[04:41] <menn0> babbageclunk: I just saw this: // TODO(rog) update cached model addresses.
[04:41] <menn0> babbageclunk: that'll need to be taken care of now
[04:42] <babbageclunk> menn0: yes - was about to ping him about that.
[04:43] <menn0> babbageclunk: that would involve reporting back to the caller somehow that the redirect has happened, and some access to the new addresses will be required so that the caller can save them
[04:44] <menn0> babbageclunk: I wonder if it would be saner to have NewAPIConnection return the RedirectError instead of handling it all transparently
[04:44] <menn0> babbageclunk: then the caller can take care of updating the cached addresses
[04:45] <menn0> babbageclunk: and the caller can decide if it's ok to retry with username and password in place
[04:45] <menn0> babbageclunk: changing that will involve coordination with rog though
[04:51] <anastasiamac> axw: when asked so amazingly smoothly, of course m looking \o/
[05:08] <axw> veebers: any ideas about this? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9607/artifact/artifacts/trusty-err.log/*view*/
[06:20] <axw> mgz: when you're around, can you please see if any of the CI ec2 instances can be cleaned up? merge jobs are failing like in http://juju-ci.vapour.ws:8080/job/github-merge-juju/9608/artifact/artifacts/trusty-err.log/*view*/
[06:56] <veebers> axw: ugh sorry went EOD without disconnecting and I'm just heading out the door to an appointment now. I can try to take a look when I get back in. I'm pretty sure there are scripts that clean up stale machines so it might rectify itself in a little
[06:56] <axw> veebers: np, figured you were EOD
[09:16] <mgz> morning all
[09:16] <mgz> axw: taking a look
[09:21] <mgz> axw: well, that's a little scary, we really shouldn't be close to quota on aws
[09:21] <mgz> axw: but mjs' branch has landed since, so we should be back under, I'll retrigger merge on yours
[09:22] <axw> mgz: gracias
[09:25] <mgz> axw: the pre-check on pr #6536 also failed for that reason, can use !!build!! to retrigger
[09:26] <axw> mgz: ok, thanks
[09:27] <axw> mgz: !!build!! never seems to work for me. I just added it, and there's nothing in http://juju-ci.vapour.ws/job/github-check-merge-juju/
[09:27] <axw> nothing running
[09:28] <mgz> axw: fun. it didn't work for me either, but does for some people. I'll see if I can find out why.
[09:28] <axw> mgz: thanks
[09:49] <mgz> well, mystery
[09:50] <mgz> config still doesn't have user restrictions set at all, the match is to rebuild on the `!!.*!!` pattern
[09:50] <mgz> and the log just states PR ... not changed
[09:51] <mgz> docs for the jenkins plugin not very useful
[12:31] <mgz> rick_h___: are you around or busy with sessions?
[12:32] <rick_h___> mgz: bit of both
[12:33] <rick_h___> On my phone, what's up?
[12:33] <mgz> rick_h___: 1:1 - nothing urgent from me though, so I'm fine leaving it till next week
[12:34] <mgz> only news is I've done a bunch of CI stuff this morning as we'd leaked lots of machines in us-east-1 which was causing job failures
[12:34] <mgz> back to windows this afternoon
[12:36] <rick_h___> mgz: ah sorry, did I not update the calendar?
[12:37] <rick_h___> mgz: thanks for the heads up in ci stuff
[13:00] <voidspace> mgz: care to rubber stamp this?
[13:00] <voidspace> mgz: https://github.com/juju/juju/pull/6537
[13:00] <voidspace> mgz: reviewed and QA'd for develop (and still in the merge queue I think)
[13:01] <mgz> voidspace: sure thing
[13:02] <mgz> voidspace: lgtm
[13:03] <voidspace> mgz: tyvm
[13:20] <rick_h___> frobware: room opereta in 10min
[13:21] <frobware> rick_h___: heading there now
[13:35] <natefinch> redir: thanks for the review of my add-cloud stuff!
[14:01] <dooferlad> voidspace, katco, natefinch, mgz: standup?
[14:01] <katco> dooferlad: oh wow... time warp. brt
[14:44] <alexisb> katco, ping
[14:44] <katco> alexisb: pong
[14:44] <alexisb> good morning
[14:44] <katco> alexisb: howdy
[14:45] <alexisb> katco, are there folks on the rteam that would be able to look at a regression on devel:
[14:45] <alexisb> https://bugs.launchpad.net/juju/+bug/1638944
[14:45] <mup> Bug #1638944: HA failed: timeout waiting for controller response <ci> <ha> <jujuqa> <juju:Triaged> <https://launchpad.net/bugs/1638944>
[14:46] <katco> alexisb: nate is the only one between things; however, i just emailed torsten. it looks like issues with the CI environment are actually obfuscating blesses
[14:46] <alexisb> katco, lovely, did you include me on that email?
[14:46] <katco> alexisb: no, i will forward to you and rick
[14:46] <alexisb> thanks
[14:47] <katco> alexisb: there you are
[14:47] <katco> alexisb: let me look at the bug
[14:48] <alexisb> katco, thanks, let me know if you guys can pick it up
[14:48] <katco> mgz: do you think this bug ^^^ is legit?
[14:49] <katco> alexisb: also, do we know this is a regression? with HA it might be more of an intermittent test failure? although it doesn't look like it's occurred since june?
[14:50] <alexisb> hmm, if that is the case then we need to push back
[14:50] <alexisb> by putting a comment in the bug and marking it invalid
[14:52] <katco> abentley: ping
[14:52] <abentley> katco: pong
[14:52] <mgz> katco: the recovery test failures do seem to have an actual regression recently,
[14:52] <katco> abentley: hey, how are you
[14:52] <mgz> but it's somewhat obscured by being split across several generic issues
[14:52] <abentley> katco: Not bad.  How're you doing?
[14:53] <katco> abentley: doing ok. a bit mad that it's fall here and 85F
[14:53] <katco> abentley: not sure how it is up there
[14:53] <mgz> bug 1626573 seems to be the most actionable one
[14:53] <mup> Bug #1626573: Restore-backup cannot initiate replica set <ci> <intermittent-failure> <restore-backup> <juju:Triaged> <https://launchpad.net/bugs/1626573>
[14:53] <abentley> katco: 11C, and it's been closer to 3 in the past week.
[14:53] <katco> abentley: lucky :)
[14:54] <katco> mgz: ok, you think looking at this would solve more issues including the regression?
[14:55] <mgz> I think we do need to look at the ha-recovery issue, though we don't have a clean window
[14:56] <mgz> due to all the build failures end of last week
[14:56] <katco> mgz: ok ta. it sounds like alexisb is saying this is lower priority than the crit bugs we have on our board anyway, so maybe a moot point
[14:57] <katco> alexisb: fwiw mick is out and frobware is at the sprint. so it's just voidspace natefinch mgz and i
[14:58] <alexisb> katco, I will try to get someone on it today but we are all pretty booked atm and most of NZ/AUS is out today given they are getting on a plane
[14:58] <katco> alexisb: yep
[15:07] <deanman> Should proxy settings be forwarded to all users of a newly spawned LXD?
[15:26] <dooferlad> voidspace, katco, mgz... three tiny PRs to improve your karma: https://github.com/juju/juju/pull/6538 https://github.com/juju/juju/pull/6539 https://github.com/juju/juju/pull/6540
[15:26]  * dooferlad goes for tea.
[15:28] <mgz> dooferlad: a couple of those are small but scary :)
[16:02] <katco> oh cool a 264 line test!
[16:47] <redir> natefinch: np
[17:03] <voidspace> mgz: care to apply another rubber stamp?
[17:03] <voidspace> mgz: https://github.com/juju/juju/pull/6541
[17:05] <mgz> voidspace: a-looking
[17:06] <mgz> voidspace: lgtm
[17:06] <voidspace> mgz: ta :-)
[18:24] <mgz> dooferlad: your prs are being guinea pigs, don't mind me
[20:49]  * menn0 has 2 blocks of jelly-tip chocolate for wallyworld 
[20:50] <wallyworld> yay :-D
[21:17]  * redir steps out for a few
[21:18] <babbageclunk> menn0: didn't get a response from rogpeppe - do you think I should just start moving the redirect handling up?
[21:18] <babbageclunk> menn0: Or I can pause on this and start doing the status change until I/we can discuss with him.
[21:19] <menn0> babbageclunk: that might be better
[21:19] <babbageclunk> menn0: ok cool
[21:19] <menn0> babbageclunk: i'm worried about breaking the API
[21:19] <menn0> i've already managed to annoy them once recently by breaking the API of the "api" package
[21:32] <babbageclunk> menn0: yeah, would much rather talk to him about it first - he's likely to have a good idea about how we could handle it.
[21:47] <menn0> babbageclunk: agreed
[21:52] <redir> back