[00:41] <mup> Bug #1521017 opened: adding a ssh key to juju does not take a file path <juju-core:New> <https://launchpad.net/bugs/1521017>
[00:41] <mup> Bug #1521020 opened: juju authorized-keys import fails without any output <juju-core:New> <https://launchpad.net/bugs/1521020>
[00:44] <mup> Bug #1521017 changed: adding a ssh key to juju does not take a file path <juju-core:New> <https://launchpad.net/bugs/1521017>
[00:44] <mup> Bug #1521020 changed: juju authorized-keys import fails without any output <juju-core:New> <https://launchpad.net/bugs/1521020>
[00:54] <davecheney> https://github.com/godbus/dbus/issues/45
[00:54] <davecheney> no love
[00:54] <davecheney> guess we'll be fixing that one
[00:55] <davecheney> quit
[01:54] <menn0> wallyworld or axw: I suspect this might make you happy: http://reviews.vapour.ws/r/3269/
[01:54] <wallyworld> yay
[01:55] <wallyworld> menn0: i did go to review your PR from last week but it already had a shipit by the time i got there
[01:56] <menn0> wallyworld: yep, no worries. already landed. this new one depended on that one.
[02:01] <wallyworld> menn0: lgtm
[02:04] <menn0> wallyworld: cheers
[02:21] <axw> wallyworld: BTW, I've renamed params.RelationUnitsChange to params.RemoteRelationUnitsChange, because we'll use tokens and so on
[02:21] <wallyworld> ok
[02:21] <axw> wallyworld: so I think there shouldn't be a concern about overlap with the existing RelationUnitsChange any more
[02:22] <wallyworld> that sounds right
[04:33] <menn0> axw: could you have a quick look at this one please: http://reviews.vapour.ws/r/3270/
[04:33] <axw> menn0: sure
[04:33] <menn0> axw: there's another upgradesteps cleanup PR ready straight after this one
[04:37] <axw> menn0: LGTM
[04:37] <menn0> axw: cheers
[04:56] <menn0> axw: last cleanup PR here: http://reviews.vapour.ws/r/3271/
[04:57] <axw> looking
[05:05] <axw> menn0: done
[05:48] <menn0> axw: thanks
[09:00] <frobware> jam: rebooting... be there in a bit...
[09:11] <voidspace> dimitern: looks like I finally connected...
[09:16] <dimitern> voidspace, welcome ;)
[09:22] <voidspace> dimitern: shall I just recreate that branch based on a clean patch?
[09:22] <voidspace> dimitern: no need for you to do it
[09:24] <dimitern> voidspace, yes please - that will be easiest I think
[09:24] <voidspace> doing it
[09:34] <voidspace> dimitern: http://reviews.vapour.ws/r/3273/
[09:34] <dimitern> voidspace, cheers, looking
[09:34] <voidspace> dimitern: it's already had a "Ship It" as a previous PR, so no particular need to look again
[09:34] <voidspace> dimitern: I'm doing a full test run here
[09:34] <voidspace> dimitern: and we need to talk about implementation strategy for listing spaces on bootstrap
[09:34] <voidspace> dimitern: we can topic that at/after standup though
[09:35] <dimitern> voidspace, LGTM
[09:35] <dimitern> voidspace, sure
[09:38] <voidspace> dimitern: ta
[09:38] <voidspace> dimitern: waiting for the test run here to finish before I hit $$merge$$
[09:40] <dimitern> voidspace, ack
[09:45] <voidspace> anyone seen this failure with the lxd client test?
[09:45] <voidspace> ERROR juju.utils cannot find network interface "lxcbr0": no such network interface
[09:45] <voidspace> this is on maas-spaces feature branch, so there may already be a fix on master that we haven't picked up
[10:01] <voidspace> jam: fwereade: dimitern: standup?
[10:01] <dimitern> frobware, voidspace, fwereade, omw - 1m
[10:33] <voidspace> frobware: I need coffee
[10:34] <frobware> voidspace, me too. 5 mins?
[10:35] <voidspace> frobware: sounds good
[10:41] <voidspace> frobware: ready when you are
[11:29] <voidspace> frobware: I get the same failures on master, so no *requirement* for an urgent rebase
[11:29] <frobware> voidspace, ack
[13:26] <perrito666> hello all btw
[14:46] <mup> Bug #1519527 opened: juju 1.25.1:  lxc units all have the same IP address <openstack> <sts> <uosci> <juju-core:New> <MAAS:Triaged by mpontillo> <MAAS 1.9:Triaged by mpontillo> <MAAS trunk:Triaged by mpontillo> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1519527>
[14:46] <mup> Bug #1521217 opened: TestWorkerAcceptsBrokenRelease fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core cross-model-relations:Triaged> <https://launchpad.net/bugs/1521217>
[14:55] <mup> Bug #1521220 opened: TestShortPollIntervalExponent fails <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1521220>
[15:01] <mup> Bug #1521220 changed: TestShortPollIntervalExponent fails <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1521220>
[15:03] <cherylj> dimitern: is the issue with bug 1519527 just that in 1.25.1 juju uses some new MAAS functionality which is not working as expected?
[15:03] <mup> Bug #1519527: juju 1.25.1:  lxc units all have the same IP address - changed to claim_sticky_ip_address <openstack> <sts> <uosci> <juju-core:New> <MAAS:Triaged by mpontillo> <MAAS 1.9:Triaged by mpontillo> <MAAS trunk:Triaged by mpontillo> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1519527>
[15:03] <dimitern> cherylj, that's my understanding (proved by mpontillo as well)
[15:04] <dimitern> or at least observed to not work as expected
[15:04] <cherylj> dimitern: what about other levels of MAAS?  does the functionality exist on older levels, but works there?
[15:06] <dimitern> cherylj, yes, it exists in 1.8 and is confirmed to work there as expected, so it's a maas 1.9beta2+ regression
[15:06] <cherylj> dimitern: thanks!
[15:17] <cherylj> dimitern: sorry to bug you again :)  a note from mgz indicated that you might be working on bug 1516989?
[15:18] <mup> Bug #1516989: juju status <service_name> broken <sts> <juju-core:Triaged by cherylj> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1516989>
[15:18] <beisner> hi cherylj, dimitern - i just re-deployed with MAAS 1.9b2 + Juju 1.25.0, and all lxc units get unique and working IP addresses.  With 1.9b2 + 1.25.1, all lxc units get the same IP.  While it may be an underlying MAAS bug, there is definitely a regression in 1.25.1 Juju.  Will be adding a generic non-openstack reproducer as soon as this next deploy wraps up.
[15:18] <dimitern> cherylj, I haven't started yet, but I plan to start tomorrow on that one
[15:18] <dimitern> beisner, it's still a maas issue
[15:18] <dimitern> beisner, hey btw :)
[15:18] <beisner> dimitern, i don't disagree.
[15:19] <cherylj> dimitern: okay, I can pass it off to onyx since they're on bug squad
[15:19] <dimitern> beisner, you can verify by trying to create a device + parent and then claim-sticky-ip-address for it
[15:19] <beisner> dimitern, but i do know that if 1.25.1 lands, my lab will be borked.  i presume others who are less-automated will eventually see it too.
[15:19] <cherylj> beisner: yeah, we don't plan on moving 1.25.1 out of proposed until the MAAS fix has been released
[15:20] <beisner> cherylj, much appreciated
[15:22] <rick_h_> cherylj: beisner <3 ty both for working through that.
[15:37] <frobware> dimitern, voidspace: http://reviews.vapour.ws/r/3275/  - I'm still doing some manual testing against MAAS 1.8/1.9 and precise, trusty, vivid & wily, but if you could take an initial look it would be appreciated.
[15:37] <dimitern> frobware, sure, looking
[15:38] <voidspace> frobware: cool
[15:54] <beisner> o/ rick_h_  yw, happy to help exercise these things
[16:16] <dimitern> frobware, ping
[16:17] <frobware> dimitern, pong
[16:18] <dimitern> frobware, I'm a bit confused - do we use bash or python or both?
[16:18] <frobware> dimitern, both
[16:19] <frobware> dimitern, I could do 90% python. in fact I did for a while. but we will always need the bash shim to run the python script.
[16:19] <frobware> dimitern, want to HO?
[16:19] <dimitern> frobware, right
[16:19] <dimitern> frobware, got it
[16:20] <dimitern> frobware, can do a HO, but the script looks fine
[16:20] <frobware> dimitern, let's do 10 mins anyway. would be good to talk about it.
[16:20] <dimitern> frobware, ok
[16:21] <frobware> dimitern, standup HO?
[16:21] <dimitern> frobware, sure, omw
[16:40] <mpontillo> cherylj: dimitern: I'm currently fixing a separate issue in MAAS IP allocation which is blocking me from fully triaging, but from what I saw last week, it's a MAAS issue
[16:40] <cherylj> thanks, mpontillo
[16:42] <mpontillo> cherylj, I think dimitern and Andreas were going to rebuild their MAAS setup from scratch just to be sure though; did that happen?
[16:42] <cherylj> mpontillo: I'm not sure.  dimitern?
[16:54] <voidspace> dimitern: ping if you're still around
[17:03] <dimitern> mpontillo, I have rc2 installed from scratch in lxc, will test with it tomorrow as I'm fixing the related juju bug
[17:03] <dimitern> cherylj, ^^
[17:03] <dimitern> voidspace, pong
[17:04] <frobware> cherylj, I have a LGTM on 1516891 - is 1.25.2 going to be cut today?
[17:04] <dimitern> frobware, cherylj, I'd wait for that until tomorrow TBH
[17:04] <cherylj> frobware: I'm not sure.  There are a couple bugs we're looking at for that cutoff
[17:06] <frobware> cherylj, if the answer is "possibly not" I might implement some changes dimitern and I just talked about and do some additional testing tomorrow.
[17:06] <cherylj> frobware: that should be fine
[17:11] <voidspace> dimitern: I may not need you...
[17:12] <jam> voidspace: how could you say such a thing
[17:12] <jam> we all need dimitern
[17:12] <dimitern> :D
[17:12] <voidspace> :-)
[17:26] <mup> Bug #1521267 opened: After upgrade juju status no longer works <juju-core:New> <https://launchpad.net/bugs/1521267>
[17:35] <mup> Bug #1521267 changed: After upgrade juju status no longer works <juju-core:New> <https://launchpad.net/bugs/1521267>
[17:59] <mup> Bug #1519403 opened: 1.24 upgrade does not set environ-uuid <juju-core:New for thumper> <https://launchpad.net/bugs/1519403>
[18:05] <mup> Bug #1519403 changed: 1.24 upgrade does not set environ-uuid <juju-core:New for thumper> <https://launchpad.net/bugs/1519403>
[18:58] <natefinch> well that only took 4 hours of retries.
[19:16] <perrito666> natefinch: ?
[19:17] <natefinch> connecting to freenode
[19:18] <perrito666> ah, yes DoS
[19:54] <perrito666> could anyone run go test -gocheck.list=true github.com/juju/juju/cmd/jujud/agent/... and paste me their output?
[20:05] <davecheney> ping, anyone on call reviewer today ? http://reviews.vapour.ws/r/3266/
[20:08] <perrito666> Ill review it
[20:10] <perrito666> davecheney:  ship it
[20:14] <davecheney> perrito666: ta
[20:20] <thumper> morning
[20:21] <natefinch> morning thumper
[20:21] <alexisb> morning thumper
[20:23] <thumper> davecheney: hey, look, master is cursed due to the race build
[20:24] <thumper> davecheney: I'm hoping we actually caught something new and not a mistake
[20:24] <thumper> oh
[20:24] <thumper> http://reports.vapour.ws/releases/3375/job/run-unit-tests-race/attempt/630
[20:24] <thumper> not a race
[20:24] <thumper> just a different failure
[21:14] <mup> Bug #1517992 changed: juju-upgrade to 1.24.7 leaves juju state server unreachable <juju-core:Won't Fix> <https://launchpad.net/bugs/1517992>
[21:14] <mup> Bug #1521327 opened: API client talking to 1.22 server failed: method Service(1).ServicesDeploy is not implemented <api> <ci> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1521327>
[21:33] <katco> wwitzel3: ericsnow: sorry i'm taking so long; nothing for you today. we'll have lots to discuss in tomorrow's standup
[21:44] <mup> Bug #1521354 opened: juju should check common failure conditions before upgrading <juju-core:Triaged> <https://launchpad.net/bugs/1521354>
[21:46] <menn0> cherylj, davecheney: did you notice that the race detector CI job is failing due to a shell script issue: http://data.vapour.ws/juju-ci/products/version-3370/run-unit-tests-race/build-627/consoleText
[21:46] <wwitzel3> katco: ok, np
[21:47] <mup> Bug #1521354 changed: juju should check common failure conditions before upgrading <juju-core:Triaged> <https://launchpad.net/bugs/1521354>
[22:05] <davecheney> thumper: yup, sadly not a real failure, just fat fingers
[22:14] <mup> Bug #1513492 opened: add-machine with vsphere triggers machine-0: panic: juju home hasn't been initialized <add-machine> <panic> <vsphere> <juju-core:Triaged by s-matyukevich> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1513492>
[22:16] <davecheney> thumper: xenial, go 1.5 build is now passing
[22:17] <thumper> \o/
[22:18] <mup> Bug #1513492 changed: add-machine with vsphere triggers machine-0: panic: juju home hasn't been initialized <add-machine> <panic> <vsphere> <juju-core:Triaged by s-matyukevich> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1513492>
[22:23] <davecheney> thumper: juju/container contains its own on-disk lock ....
[22:25] <davecheney> thumper: in addition to juju/utils.fslock ...
[22:33] <thumper> ugh
[22:33] <thumper> hang on
[22:33] <thumper> I added that
[22:33] <thumper> wait
[22:33] <thumper> wat?
[22:33] <davecheney> yup
[22:33] <thumper> a different lock type?
[22:33] <davecheney> slightly different
[22:33] <davecheney> but not different enough
[22:43] <davecheney> thumper: https://github.com/juju/juju/pull/3859
[22:45] <mup> Bug #1521327 changed: API client talking to 1.22 server failed: method Service(1).ServicesDeploy is not implemented <api> <ci> <regression> <juju-ci-tools:New> <juju-core:Won't Fix> <https://launchpad.net/bugs/1521327>
[22:45] <thumper> davecheney: there are issues with this, but I'm otp right now
[22:46] <davecheney> mkay
[22:46]  * davecheney waits
[23:11] <perrito666> sinzui: ping
[23:12] <davecheney> thumper: ping, still otp ?
[23:12] <thumper> yes
[23:14] <davecheney> mkay
[23:15] <wallyworld> anastasiamac: axw: perrito666: 1 minute late, finishing another meeting
[23:15]  * perrito666 clocks wallyworld to make sure it's 1 minute
[23:16] <sinzui> perrito666: otp
[23:19] <thumper> davecheney: off the phone now, gimmie 5 minutes?
[23:20] <davecheney> sure
[23:20] <sinzui> hi perrito666
[23:21] <perrito666> sinzui: hi, I just came back from a short week off and am curious about the mongo3 package :)
[23:21] <sinzui> perrito666: still working on it. I am right now looking at "testbed auxverb failed with exit code 255"
[23:22] <perrito666> lovely
[23:23] <sinzui> perrito666: I think it will be a few more days to see packages being used in tests. We are still waiting on arm hardware, so we won't be sure we are done until it is available to us (which should be this week)
[23:23] <perrito666> k, tx a lot, just needed a status update
[23:24] <thumper> davecheney: 1:1 hangout?
[23:25] <davecheney> thumper: why
[23:25] <thumper> fine, I'll just put it here
[23:25] <davecheney> thanks
[23:26] <thumper> I believe that the lock in container prefers flock, and falls back to fslock on windows
[23:26] <thumper> and was done because there were too many failures with fslock during container template creation
[23:26] <thumper> I'm going to email juju-dev about replacing fslock
[23:27] <thumper> I think we are spending too much time trying to fix a broken system
[23:27] <thumper> instead of investing a little effort into making a good, OS agnostic, dies with the process, lock
[23:27] <thumper> email coming soon from me about that
[23:28] <davecheney> thumper: no argument there
[23:28] <davecheney> i've said I want to use the linux facility to do this
[23:28] <davecheney> sure, it's linux only
[23:29] <davecheney> but really, so is juju to a first approximation
[23:29] <davecheney> wrt most of that PR
[23:29] <thumper> What I want is an OS abstraction
[23:29] <davecheney> i argue it's still fine to move that code out of container
[23:29] <davecheney> thumper: sure
[23:29] <thumper> davecheney: your branch doesn't change the package name of the windows build file
[23:29] <davecheney> but if you want it to be os agnostic that will come with a reduced 'feature' set
[23:29] <thumper> it'll fail for windows
[23:29] <davecheney> right, thanks
[23:29] <davecheney> i'll fix that
[23:30] <thumper> I'm fine with that
[23:30] <davecheney> things like fslock.IsLocked are racy
[23:30] <davecheney> it cannot be used safely
[23:30] <sinzui> davecheney: the xenial unit tests vote. you rock
[23:30] <davecheney> for the same reason there is no os.IsExists
[23:30] <davecheney> sinzui: thanks
[23:30] <sinzui> davecheney: is it difficult to do the same for 1.25 which will get a few more releases?
[23:30] <thumper> davecheney: some of the fslock "features" were added because they could, not because they should
[23:30] <thumper> also
[23:31] <davecheney> thumper: precisely
[23:31] <thumper> they were added to work around the problem of the lock not being released when the process dies
[23:31] <davecheney> i want to kill it with fire
[23:31]  * sinzui wants wily unit tests on master passing more than 1.25 on xenial
[23:31] <davecheney> sinzui: no idea
[23:31] <davecheney> but I can look at the failures
[23:48]  * thumper heading out for dogwalk