[00:21] <cherylj> sinzui, katco, alexisb - here's the PR for reverting the rename of environments.yaml:  http://reviews.vapour.ws/r/3629/
[00:22] <cherylj> rb, not pr.  whatever :P
[03:52] <thumper> cherylj, sinzui: failure on machine-dep-engine appears to be bad-record-mac
[03:53] <cherylj> thumper: yeah, I saw that.
[03:53] <cherylj> If other tests pass, I'd say merge away
[03:54] <cherylj> looks like the maas tests (and a few others) are still running / queued
[03:57] <thumper> yeah... not final yet
[09:12] <voidspace> dimitern: ping
[09:13] <voidspace> dimitern: do you know the root cause of the trusty deploy CI failure we're seeing
[09:13] <voidspace> dimitern: "dpkg-deb: error: --extract needs a target directory"
[09:13] <voidspace> dimitern: it looks like a build failure
[09:14] <voidspace> mgz: ping
[09:15] <voidspace> we have two failures the same on xenial
[09:16] <voidspace> a lxd-deploy and aws-deploy-trusty
[09:17] <dimitern> voidspace, hmm let me have a look
[09:17] <voidspace> dimitern: http://reports.vapour.ws/releases/3533
[09:20] <dimitern> voidspace, mgz, it looks like it failed to build xenial deb package, but the job still succeeded: http://data.vapour.ws/juju-ci/products/version-3533/build-binary-xenial-amd64/build-130/consoleText
[09:22] <voidspace> debian/rules:26: recipe for target 'binary' failed
[09:22] <voidspace> dh_install: juju-core missing files: src/github.com/juju/juju/cmd/plugins/juju-backup/juju-backup
[09:22] <voidspace> dimitern: this doesn't fail for master though it seems
[09:24] <voidspace> cmd/plugins/juju-backup is also missing on master
[09:24] <voidspace> maybe a CI rule has been updated for master
[09:25] <dimitern> voidspace, it seems the debian/rules for master and other branches are different
[09:26] <dimitern> which tells me they fixed it for master, but apparently each branch uses its own rules
[09:29] <voidspace> dimitern: ok, cool
[09:29] <voidspace> dimitern: so it's their fault :-)
[09:29] <voidspace> mgz: ^^^
[09:41] <dimitern> :)
[09:46] <voidspace> dimitern: where did you check the debian rules?
[09:47] <dimitern> voidspace, in the log - check for error 2
[09:47] <dimitern> at the end of that "successful" job log I pasted earlier
[09:48] <voidspace> dimitern: thanks
[10:21] <jam> hey guys, sorry I missed the standup. I am around, just spaced the actual event.
[10:21] <jam> Anyone remember who set up reviewboard? I'm noticing that Tycho has submitted stuff to Github but it doesn't seem to end up in RB as well.
[10:22] <jam> https://github.com/juju/juju/pulls/4191
[10:22] <voidspace> jam: it was ericsnow who set up reviewboard
[10:22] <voidspace> we're still talking FWIW
[12:55] <perrito666> Jam: i believe review board requires the user to link their github account for it to work
[12:55] <perrito666> Btw did you see axw email regarding xdg?
[13:22] <frobware> dimitern, voidspace, dooferlad: please could you take a look at my (mostly complete) bridge script that allows an explicit interface to be specified: http://reviews.vapour.ws/r/3630/
[13:23] <frobware> dimitern, voidspace, dooferlad: if we're happy with the approach I'll take maas-spaces script to 1.25 and test accordingly.
[13:27] <dimitern> frobware, sure, looking
[13:33] <frobware> dimitern, what's the right answer for this? http://pastebin.ubuntu.com/14671696/
[13:42] <dimitern> frobware, reviewed
[13:42] <dimitern> frobware, re the paste - the only difference is no auto eth1 in initial, right?
[13:43] <dimitern> frobware, I guess the answer should be "if it worked before, the script shouldn't break it" :)
[13:44] <frobware> dimitern, correct.
[13:44] <dimitern> s/script/script, running it/
[13:44] <frobware> dimitern, I think we should mimic what we got, hence I just pushed an additional test to cover this case.
[13:45] <dimitern> frobware, +1
[13:46] <frobware> dimitern, I haven't seen that in the wild, but some time ago I put a check in to only add the 'auto' stanza if there was one before. That all happened in the _bridge() functions, but in the main loop we skip over interfaces and this now gets missed, hence the additional test.
[13:48] <frobware> dimitern, bridge-prefix is a default so it will always have a value even if bridge-name && interface-to-bridge are specified.
[13:48] <dimitern> frobware, right - the more cases we cover in tests, the better I think
[13:48] <dimitern> frobware, ah, well I missed that
[13:48] <frobware> dimitern, yeah, the new functionality is a little at odds with the existing behaviour.
[13:48] <dimitern> frobware, feel free to ignore the second issue then :)
[13:56] <dimitern> wow go 1.6b2 got smarter error messages :)
[13:56] <dimitern> ../../state/watcher.go:2395: doc.MachineId undefined (type networkInterfaceDoc has no field or method MachineId, but does have MachineID)
[13:58] <frobware> voidspace, ping
[13:59] <perrito666> mgz: ping
[14:03] <voidspace> frobware: is maas meeting on?
[14:03] <dooferlad> frobware, dimitern, voidspace: https://plus.google.com/hangouts/_/canonical.com/maas-juju-net
[14:03] <voidspace> dooferlad: just grabbing coffee
[14:03] <voidspace> be there in 2mins
[14:24] <perrito666> does anyone know if we have specs to add relations to tabular format?
[14:44] <voidspace>  destroy-environment --force is gone and destroy-controller has no force option but just hangs... :-(
[14:45] <voidspace> mgz: sinzui: ping
[14:45] <sinzui> hi voidspace
[14:45] <voidspace> sinzui: hey, hi
[14:46] <voidspace> sinzui: maas-spaces CI jobs are failing the xenial tests for us because they're looking for juju-backup
[14:46] <voidspace> sinzui: and it looks like this is configured in debian rules per CI task
[14:46] <voidspace> sinzui: as the same doesn't happen on master
[14:46] <sinzui> voidspace: when you need to use the force in that situation, use kill-controller, (a safe force)
[14:46] <voidspace> sinzui: ah, thanks
[14:47] <sinzui> voidspace: I fixed packaging last Friday to not look for a plugin
[14:47] <voidspace> sinzui: that worked
[14:47] <sinzui> voidspace: but let me see if xenial has a different rule
[14:48] <voidspace> sinzui: we have a fail on monday: http://reports.vapour.ws/releases/3533
[14:48] <voidspace> sinzui: aws-deploy-trusty...xenial and lxd-deploy-xenial both failed for that reason
[14:49] <sinzui> voidspace: the lxd-deploy-xenial issue was a temporary network failure
[14:50] <voidspace> ah, my apologies then
[14:50] <voidspace> I'm fairly sure about the other one :-)
[14:52] <sinzui> voidspace: we can see next test of the branch did have a good network http://reports.vapour.ws/releases/3542
[14:52] <voidspace> sinzui: ah, awesome - I hadn't seen that
[14:52] <voidspace> sinzui: appreciated
[14:52] <voidspace> still cursed though
[14:52] <voidspace> *sigh*
[14:53] <voidspace> frobware: dimitern: dooferlad: new CI run for maas-spaces http://reports.vapour.ws/releases/3542
[14:54] <voidspace> frobware: dimitern: dooferlad: those tests that failed now pass (rules fixed), but we have a wily unit test failure
[14:54] <voidspace> I'm running wily and I don't think that test fails for me
[14:54] <voidspace> investigating
[14:54] <voidspace> frobware: dimitern: dooferlad: http://reports.vapour.ws/releases/3542/job/run-unit-tests-wily-amd64/attempt/1210
[14:57] <dimitern> voidspace, that looks like flakiness
[14:59] <dimitern> voidspace, hmm maybe not - I can see it here on wily
[14:59] <voidspace> dimitern: cool, we'll see in the next test run then
[14:59] <voidspace> it's running for me
[14:59] <voidspace> slowly...
[14:59] <voidspace> heh
[15:00] <voidspace> dimitern: so with my code that blocks logins until space discovery is complete
[15:00] <voidspace> dimitern: bootstrap completes - but shows that error
[15:00] <voidspace> dimitern: so bootstrap doesn't wait, it just appears to error out
[15:00] <voidspace> so more work needed
[15:00] <voidspace> frobware: ^^
[15:00] <voidspace> dimitern: that test passes for me on wily
[15:00] <voidspace> hmm, I may not be up to date though
[15:01] <voidspace> dimitern: right, I've pulled in the merge from master now
[15:01] <voidspace> I wonder if that is the cause
[15:13] <mup> Bug #1526072 changed: Juju-deployer 0.6.0 and juju-core 1.25 - Build doesn't time out and keeps running until aborted <oil> <juju-core:New> <juju-deployer:New> <https://launchpad.net/bugs/1526072>
[15:13] <dimitern> voidspace, I've run cmd/jujud/agent tests a few times - it fails sometimes, but no more than usual
[15:17] <voidspace> dimitern: right, passes for me every time so far
[15:23] <alexisb> I am seeing random failures while running test locally due to mgo not cleaning up (on several branches)
[15:23] <alexisb> is there something I need to do locally to make sure things are getting cleaned up?
[15:24] <perrito666> Nope, the errors you see might be the cleanup errors after a panic from something else
[15:24] <perrito666> Tmp tends to fill up with mgo stuff though so you might want to flush that
[15:24] <voidspace> alexisb: there really shouldn't be!
[15:24] <voidspace> I gotta go pick up the daughter from school, sorry
[15:25] <alexisb> ok, just seems test on master have gotten really flaky
[15:25] <alexisb> around cmd/jujud
[15:26] <perrito666> Alexisb which are failing?
[15:26] <alexisb> agent and unit
[15:26] <perrito666> Pastebin?
[15:26] <alexisb> github.com/juju/juju/cmd/jujud/agent/...
[15:27] <alexisb> fails consistently
[15:27] <perrito666> I did stumble upon agent failing lately
[15:29] <dimitern> alexisb, try GOMAXPROCS=1 go test -check.v github.com/juju/juju/cmd/jujud/agent/... to see if it helps
[15:38] <natefinch> GOMAXPROCS is not the cause.  I run with GOMAXPROCS=8 all the time, and don't have many consistent failures.
[15:40] <natefinch> alexisb: what version of Go?  (run go version)
[15:40] <dimitern> voidspace, dooferlad, frobware, care to review a -3080 line diff :) ? http://reviews.vapour.ws/r/3631/
[16:23] <dimitern> frobware, voidspace, dooferlad, any of you still there?
[16:25] <frobware> dimitern, otp
[16:25] <dimitern> ok
[16:25] <dimitern> just pestering for a review btw
[16:25] <dooferlad> dimitern: can take a look in a moment
[16:26] <dimitern> dooferlad, cheers
[16:32] <frobware> dimitern, one question I have is the "risk" of landing your change in light of merging back to master. just curious...
[16:32] <frobware> dimitern, I don't want to derail the opportunity to be back in master.
[16:33] <alexisb> natefinch, I am running 1.5
[16:33] <frobware> dimitern, ooh. Now that I've actually looked at your PR it's all deletes.. \o/
[16:34] <dimitern> frobware, yeah - for a change :)
[16:37] <dooferlad> dimitern: +1 delete all the old things!
[16:37] <frobware> dimitern, so my question still stands though: do you see any risk for CI tests, stopping a merge into master. It's likely that master may be blocked for a few days so that we can land.
[16:39] <dimitern> frobware, I'm not sure about that PR causing CI tests to fail - not that I know of, and since it's dropping code which was not running for quite a while, I think we're safe
[16:39] <frobware> dimitern, k
[16:40] <frobware> dimitern, just trying to avoid another 24-hour cycle on getting maas-spaces into master
[16:41] <natefinch> alexisb: 1.4 is more reliable.... 1.5 has some known problems with juju still, last I heard
[16:43] <frobware> alexisb, I have pre-built deb of go 1.4.3 if it helps. http://178.62.20.154/~aim/go1.4.3_1.0.0-14_amd64.deb - it installs into /usr/local/go1.4.3
[16:56] <alexisb> natefinch, perrito666 this is the failure I am seeing consistently with latest master and golang 1.5:
[16:56] <alexisb> https://pastebin.canonical.com/148393/
[16:57] <natefinch> alexisb: yeah, pretty sure I was seeing the sockets thing with go 1.5 too
[16:57] <perrito666> yup, that is go 1.5
[16:57] <perrito666> I have the same issues, only on my go1.5 machine
[16:57] <alexisb> yay!
[16:58] <natefinch> sort of :/
[16:58] <alexisb> any idea if it is addressed with 1.5.2+?
[16:58] <natefinch> alexisb: I don't think it's a Go bug, I think it's a juju bug, but I haven't looked super deep into it.
[16:58] <alexisb> natefinch, ok
[17:02] <cmars> ashipika, were you seeing that sockets error as well? ^^ (https://pastebin.canonical.com/148393/)
[17:03] <ashipika> cmars: yes, i was
[17:03] <ashipika> cmars: in cmd/jujud/agent package
[17:04] <ashipika> cmars: and golang 1.5.3
[17:05] <perrito666> how would you call both ends of a relation ?
[17:06] <perrito666> I am trying to add relations to tabular status and finding this problem
[17:08] <ashipika> cmars, alexisb: https://pastebin.canonical.com/148394/
[17:10] <alexisb> ashipika, that is the same failure I am seeing
[17:10] <alexisb> trying with go 1.4 now just to see if it is different
[17:15] <natefinch> ericsnow: is your branch pushed in a good state for me to rebase?  I just got my branch all building, tests passing, etc.
[17:15] <ericsnow> natefinch: working on it
[17:21] <ericsnow> natefinch: should be good now
[17:25] <natefinch> ericsnow: cool
[17:46] <alexisb> so it looks like go 1.4 is allowing the agent tests to pass but I am still seeing failures on master for the status test
[17:46] <alexisb> anyone else seeing failures on master for the status tests?
[18:13] <mup> Bug #1538241 opened: 2.0-alpha2 stabilization <blocker> <juju-core:Triaged> <https://launchpad.net/bugs/1538241>
[18:51] <natefinch> dammit
[18:52] <perrito666> natefinch: ?
[18:53] <natefinch> machine was frozen when I came back and now it's doing that "running in low graphics mode" crap
[18:53] <perrito666> natefinch: something is very wrong with some part of your computer
[18:53] <perrito666> did you ever replace the battery?
[18:53] <natefinch> yes I did
[18:57] <natefinch> bad when I can't even get to a command prompt
[18:59] <natefinch> ahh Ctrl alt f1 works at least
[19:00] <katco> ericsnow: natefinch: finally finished revising the user-stories. i posted comments in the doc, but i'd like to review the diff with you two after i do lunch
[19:00] <ericsnow> katco: k
[19:01] <natefinch> katco - hopefully I can figure out my laptop problems before then
[19:01] <katco> natefinch: if not, do you have any other machine/phone you could attend on? i need you there
[19:02] <katco> natefinch: also, is this going to affect your ability to finish your card?
[19:04] <natefinch> I can attend on my tablet if needed.  yes it'll screw up getting the card done, but I'm hopeful I can fix it
[19:04] <katco> natefinch: k, keep me posted please.
[19:04]  * katco lunches
[19:06] <natefinch> hmmm less hopeful now
[19:09] <perrito666> natefinch: need a hand?
[19:11] <natefinch> perrito666 yes
[19:11] <perrito666> natefinch: lets privmsg
[19:11] <natefinch> k
[19:17] <frobware> cherylj, ping; I'm around for 30 mins. anything I can do or help with regard to maas-spaces CI builds... ?
[19:38] <natefinch_> it would be really nice if linux would notice that all possible video drivers have been blacklisted and give me some kind of option to de-blacklist one of them
[19:40] <natefinch_> perrito666: thanks for the help... I figured it out.  I foolishly tried to enable the nvidia driver, but it evidently didn't actually take until I rebooted after ubuntu froze... but when it did that, it blacklisted the nouveau driver...
[19:41] <perrito666> but why would the nvidia kernel module fail?
[19:41] <natefinch> perrito666: because I'm special like that
[19:43] <perrito666> don't worry, this is the year of linux on the desktop :p
[19:49] <cherylj> frobware: I haven't looked at any of the failures, do you guys need me to?
[19:49] <cherylj> (sorry, was out for a walk before the rain starts)
[20:27] <perrito666> ok, something weird just happened
[20:27] <perrito666> I ran a whole test suite and didn't get spammed by mongo
[20:34] <katco> natefinch: meeting time
[20:36] <natefinch> katco: oops, sorry, coming
[21:14] <natefinch> ericsnow: did you say you had a charm that utilized resource-get?
[21:14] <ericsnow> natefinch: yeah, yours :)
[21:15] <ericsnow> natefinch: check it out in my branch in test-charms/.../starsay
[21:16] <natefinch> ericsnow: ahh, I see it.  linux's default directory listing order always messes me up
[21:18] <perrito666> yeah, yeah, blame the OS
[21:19] <natefinch> perrito666: it's not my fault they decided to go top to bottom then left to right, rather than left to right, top to bottom
[21:47] <mup> Bug #1538303 opened: 1.25.0: bootstrap failure - WARNING discarding API open error: EOF <oil> <juju-core:New> <https://launchpad.net/bugs/1538303>
[21:50] <mup> Bug #1538303 changed: 1.25.0: bootstrap failure - WARNING discarding API open error: EOF <oil> <juju-core:New> <https://launchpad.net/bugs/1538303>
[21:56] <mup> Bug #1538303 opened: 1.25.0: bootstrap failure - WARNING discarding API open error: EOF <oil> <juju-core:New> <https://launchpad.net/bugs/1538303>
[21:59] <natefinch> katco, ericsnow: trying to debug why the unit resources aren't getting returned when I call show-service-resources.  I'm not sure resource-get is working correctly.
[21:59] <mup> Bug #1538303 changed: 1.25.0: bootstrap failure - WARNING discarding API open error: EOF <oil> <juju-core:New> <https://launchpad.net/bugs/1538303>
[21:59] <natefinch> katco, ericsnow: but I have to run for a while for dinner.
[21:59] <ericsnow> natefinch: k
[22:02] <mup> Bug #1538303 opened: 1.25.0: bootstrap failure - WARNING discarding API open error: EOF <oil> <juju-core:New> <https://launchpad.net/bugs/1538303>
[22:43] <ericsnow> katco: FYI, I have 6 patches up on RB now for the resource-get stuff
[22:45] <katco> ericsnow: awesome! in a meeting, i'll try and review tonight
[22:45] <ericsnow> katco: note that tests bloat the line count substantially on a couple of those patches :)
[22:47] <katco> ericsnow: lol
[22:47] <ericsnow> katco: I'm going to land them all in 1 PR once they all pass review
[22:48] <katco> ericsnow: that's clever. good idea
[22:48] <ericsnow> katco: yeah, rbt post --parent <parent> was my friend here
[23:30] <ericsnow> katco: so we'll revisit our iteration backlog after standup tomorrow?
[23:31] <katco> ericsnow: yes
[23:31] <ericsnow> katco: sounds good
[23:31] <katco> ericsnow: sorry not sure i can get to the reviews tonight. need to start dinner and the family gets home soon
[23:31] <ericsnow> katco: np
[23:32] <ericsnow> katco: you wrote a decent chunk of it :)
[23:41] <wallyworld> perrito666: we will end standup, can't hear you
[23:42] <perrito666> wallyworld: yes, apparently my 3g network sucks at upload
[23:42] <wallyworld> perrito666: mongo3 will be next week's focus
[23:42] <wallyworld> after we get the next alpha out
[23:42] <perrito666> wallyworld: my question was, regarding master
[23:42] <perrito666> my last branch, merged with master, when running the test suite
[23:43] <perrito666> it doesn't spam syslog
[23:43] <perrito666> that is new
[23:43] <wallyworld> perrito666: rsyslog is removed from juju core, you talking about mongo syslog spam?
[23:44] <perrito666> wallyworld: that is a problem from mongo not syslog
[23:44] <perrito666> it was at least :p
[23:44] <perrito666> you know, when running the full suite, there was a spam to all the terminals at one point
[23:44] <wallyworld> sure, but what i meant was, there will be no juju core syslog spam now
[23:45] <perrito666> I assumed that spam was being done by mongo, not juju
[23:45] <wallyworld> i thought it was core, but not sure now. anyway, mongo2 still being used for now
[23:46] <perrito666> wallyworld: k
[23:46] <perrito666> bbl, I'll go try to resurrect internet