[03:10] <thumper> davecheney: so how do we define a custom error type again?
[03:10] <thumper> davecheney: what do I search for in code?
[03:13] <thumper> davecheney: actually nm.
[03:14] <thumper> davecheney: the error I care about is a NotFoundError, which seems sane...
[03:26] <davecheney> anything that has an Error() string method satisfies the error interface
[03:31] <thumper> gah...
[03:31] <thumper> tests seem to be blocking again for some reason
[03:31] <thumper> how the hell can I know if the tests have stopped running?
[03:32] <thumper> I think it is getting deadlocked somewhere
[03:32] <davecheney> ctrl-\
[03:33] <davecheney> speaking of wtf
[03:33] <thumper> oh, handy
[03:33] <davecheney> we have a method to override the default dial timeout when calling the state.Open
[03:33] <davecheney> BUT IT IS NEVER USED !!!
[03:33] <davecheney> well, that isn't strictly true, it is used inconsistently
[03:34] <thumper> hmm...
[03:34] <thumper> it seemed to be stuck in the guts of go
[03:34] <thumper> not in our code
[03:34] <thumper> wtf?
[03:35] <thumper> davecheney: can I pastebin for you to take a look at?
[03:35] <davecheney> kk
[03:35] <thumper> http://paste.ubuntu.com/5633106/
[03:35] <davecheney> you might get a cleaner stack trace if you send SIGQUIT to the test process
[03:35] <davecheney> in go 1.0
[03:36] <davecheney> ctrl-\ will panic both the test program, and the test runner
[03:36] <davecheney> so you get two stack traces intermixed
[03:36] <davecheney> i fixed that for 1.1
[03:36] <davecheney> try again, and hit the $PKG.test with a SIGQUIT
[03:36] <thumper> kk
[03:40] <thumper> http://paste.ubuntu.com/5633115/
[03:41] <thumper> davecheney: ^^
[03:43] <thumper> davecheney: can you see what it is doing?
[03:45] <davecheney> sorry, i think that is the wrong process
[03:45] <davecheney> that is the `go` process
[03:45] <davecheney> you want its child
[03:47]  * thumper sighs
[03:47] <thumper> I seem to have a bucketload of mongod processes and go test processes running
[03:48] <davecheney> you will have at least 8
[03:50] <thumper> I had about 30 something tests, and 4 different mongod processes with 8 children each
[03:50] <thumper> I've killed them all, and will try again
[03:51] <davecheney> might be a good idea to clean out your /tmp
[03:51] <davecheney> there will be lots of junk in there
[03:51] <thumper> hmm...
[03:52] <thumper> there is a lot of mongo*.sock
[03:55] <thumper> davecheney: ok, it has blocked again...
[03:55] <thumper> davecheney: using htop to look at the process tree
[03:56] <thumper> davecheney: it seems that go test ./... has five different /tmp/go-build/.... something.test children
[03:56] <thumper> I thought it wouldn't run in parallel?
[03:56] <davecheney> it does not run the tests inside a package in parallel
[03:57] <davecheney> but it will test packages in parallel
[03:57] <thumper> ok
[03:57] <davecheney> go test -p 1 ./...
[03:57] <thumper> well, they seem to have somehow deadlocked each other
[03:57] <davecheney> will disable
[04:01] <thumper> oh ffs
[04:01] <thumper> even with -p 1, the tests hang
[04:01] <davecheney> which test is it
[04:01] <davecheney> there will be one
[04:05] <thumper> can't tell
[04:05]  * thumper is busy killing things
[04:06] <thumper> I just killed the system mongod process
[04:06] <thumper> and rerunning the tests after switching to trunk
[04:07] <thumper> best to start from a mostly known state
[04:07] <thumper> getting further this time
[04:07] <davecheney> btw, there is a working mongo PPA now
[04:07]  * thumper hates how gocheck doesn't clean up its test directories
[04:07] <davecheney> if you want to switch to that
[04:07] <thumper> davecheney: what was the problem?
[04:07] <davecheney> no idea
[04:08] <thumper> so... what changed?
[04:08] <davecheney> the latest build jools did worked
[04:08] <davecheney> dunno, ask him
[04:08] <davecheney> i just copied his package into our ppa
[04:08] <thumper> not for me it didn't
[04:09] <davecheney> https://launchpad.net/~juju/+archive/experimental
[04:09] <davecheney> pls try this one
[04:09] <thumper> I timed the tests here:
[04:09] <thumper> real	2m42.342s
[04:09] <thumper> user	5m34.444s
[04:09] <thumper> sys	0m26.396s
[04:16] <thumper> gah...
[04:16] <thumper> second run through fails
[04:18] <davecheney> do you have the test output ?
[04:18] <thumper> by fails I mean hangs
[04:18] <thumper> I had switched to my branch though
[04:18] <thumper> I'm back on trunk again, and trying that
[04:19] <thumper> I'm beginning to wonder if it is my work
[04:19] <thumper> which while not impossible
[04:19] <thumper> would surprise me
[04:19] <thumper> as I didn't think I was doing anything weird
[04:19] <thumper> but I must be
[04:19] <thumper> trunk worked again
[04:20] <thumper> well, got further than before
[04:57]  * thumper back for the meeting later
[04:57] <davecheney> kk
[05:56] <wallyworld> jam: can you change ownership of goose to ~juju instead of ~gophers since i can't triage bugs etc anymore
[05:57] <jam> wallyworld: I'm not in ~gophers anymore either. I thought you said you had super powers still for LP
[05:57] <wallyworld> i was wrong :-(
[05:58] <wallyworld> jam: but you registered the project, so you should be able to do it
[05:58] <wallyworld> change ownership
[05:58] <jam> https://launchpad.net/~gophers/+members#active looks like only Gustavo is in ~gophers. I'll try
[05:59] <jam> wallyworld: shouldn't it be an edit link beside "Maintainers" ?
[05:59] <wallyworld> jam: yeah, if you can't see it then you don't have permission. just to check, did you try the administer link?
[06:00] <jam> yeah, I tried that one too
[06:00] <jam> just lets me set "aliases" for the project
[06:00] <wallyworld> balls, i thought you would be able to
[06:00] <jam> I can apparently set whether it tracks bugs in launchpad, but not change who can access those
[06:00] <wallyworld> oh well, we'll have to ask gustavo to change ownership etc
[06:01] <wallyworld> i filed a bug before but couldn't triage or set importance etc
[06:03]  * wallyworld relocates
[06:39] <jam> wallyworld_: seconded your rsyslogd stuff
[07:02] <wallyworld_> ta, will land
[07:45] <rogpeppe> mornin' all
[07:46] <dimitern> rogpeppe: morning
[07:46] <rogpeppe> dimitern: hiya
[07:51] <thumper> hi rogpeppe, dimitern
[07:51] <rogpeppe> thumper: hya
[07:51] <rogpeppe> thumper: hi, even
[07:56] <dimitern> thumper: yo!
[07:58] <fwereade__> hey all
[07:59] <dimitern> fwereade__: heyhey
[07:59] <fwereade__> dimitern, heyhey
[08:09] <jam> morning mgz
[08:10] <jam> wallyworld: side note I didn't want to forget about. Did we need to change the default firewall ports to include the rsyslogd port?
[08:10] <wallyworld> jam: i didn't change anything, but it worked regardless
[08:11] <wallyworld> jam: i had a mysql node and it logged back to bootstrap with no fw changes
[08:11] <jam> wallyworld: so supposedly they are all in the same security group, so everything can talk to everything in the group, I wasn't sure if that was true on HP/Canonistack
[08:11] <wallyworld> it appears to be true; i tried it out and it works :-)
[08:43] <thumper> jam: can blue take this one ? https://canonical.leankit.com/Boards/View/103148069/104151606
[08:46] <jam> thumper: that looks like one that is intended for us, but I have a really hard time going from URL links to finding the actual cards on the board
[08:46] <thumper> jam: I'll move it to blue backlog
[08:46] <thumper> jam: it has been moved to blue todo
[08:51] <wallyworld> jam: mgz: dimitern:  this is a critical bug for release tomorrow, could you +1 it and i'll land after the standup later? https://codereview.appspot.com/7937044/
[08:51] <jam> wallyworld: I have an in-progress review of it. I'll make sure to finish that.
[08:51] <jam> your String() function looks like it exists in 2 places.
[08:51] <jam> ah, nm
[08:51] <wallyworld> yes, different structs
[08:51] <jam> genericId vs genericInstanceId
[08:51] <dimitern> wallyworld: i'm on it
[08:51] <wallyworld> thanks guys :-)
[09:08] <jam> rogpeppe: you submitted https://code.launchpad.net/~rogpeppe/juju-core/212-api-doc/+merge/147919 as of yesterday, can we move your 'write API design' card to Merged?
[09:09] <rogpeppe> jam: i wasn't aware that i'd submitted it
[09:09] <rogpeppe> jam: i still haven't seen two LGTMs
[09:11] <rogpeppe> jam: in fact, i definitely haven't submitted it
[09:12] <jam> rogpeppe: yeah sorry, I was looking at https://codereview.appspot.com/7919043/ and saw the last thing was submitted, but clearly that was the wrong link I had followed.
[09:12] <rogpeppe> jam: np
[09:12] <jam> rogpeppe: as a doc, which is better than having nothing, I'm willing to have you land it with a single +1
[09:12] <rogpeppe> jam: thanks for your review BTW
[09:12] <jam> so 'trivial'
[09:12] <rogpeppe> jam: ok, thanks. will do.
[09:13] <jam> TheMue: looking here: https://launchpad.net/~gophers/+related-projects there is "golxc" is that something that should stay in ~gophers or move to ~juju?
[09:13] <rogpeppe> jam: did my responses to your questions seem reasonable, BTW?
[09:15] <jam> rogpeppe: so I think the answer is: (a) it is stateful, but we can just put more processes/machines in front
[09:15] <jam> (b) as a websocket, you maintain the connection until it gets interrupted, in which case you have to set up the state from scratch again
[09:16] <jam> (c) we know this design is a bit worrisome because it needs to know about all changes, but that is a whole different discussion I'm happy to defer for now :)
[09:16] <rogpeppe> jam: a) i'm not sure "in front" is right there - we'd be replicating the API server itself, not putting more things in front of it.
[09:16] <jam> rogpeppe: in front of mongo
[09:17] <rogpeppe> jam: ah yes, indeed
[09:17] <jam> rogpeppe: on that point, could you put haproxy sort of thing to load balance?
[09:17] <rogpeppe> jam: yeah, that's what i'd do
[09:17] <jam> as long as it knew to maintain the websocket to the same api server?
[09:17] <rogpeppe> jam: yup
[09:17] <jam> I'm not as familiar with websocket, but presumably it is "give me a connection and then stop pretending I'm talking HTTP"
[09:18] <rogpeppe> jam: yeah, it hijacks the connection AFAIR
[09:19] <rogpeppe> jam: but it has some cruft in the middle too (it does packets)
[09:19] <jam> rogpeppe: so i'm happy to chat about the design, etc, but none of that blocks the landing of a doc that describes what we have/you are actually building.
[09:19] <rogpeppe> jam: sure
[09:32] <thumper> fwereade__: Rietveld: https://codereview.appspot.com/7943043
[09:34]  * thumper is done for the day now
[10:14] <fwereade__> dimitern, ping
[10:14] <dimitern> fwereade__: pong
[10:15] <fwereade__> dimitern, was the source of that bug yesterday clear?
[10:15] <dimitern> fwereade__: which one?
[10:15] <fwereade__> dimitern, the one roger saw, that cascaded nastily
[10:15] <fwereade__> dimitern, rogpeppe: actually would you just update me quickly on what's done/planned wrt that issue?
[10:16] <dimitern> fwereade__: not really, no (for me at least, and couldn't reproduce it)
[10:16] <rogpeppe> fwereade__: i just disabled the test
[10:16] <rogpeppe> fwereade__: and filed a bug
[10:16] <rogpeppe> fwereade__: assigned to dimitern :-)
[10:16] <dimitern> fwereade__: i filed the one about mgo today btw
[10:16] <rogpeppe> dimitern: what one was that?
[10:17] <dimitern> rogpeppe: bug 1158190
[10:17] <_mup_> Bug #1158190: intermittent failure with go tip and GOMAXPROCS=5 <mgo:New> < https://launchpad.net/bugs/1158190 >
[10:18] <fwereade__> dimitern, cool, thanks
[10:18] <fwereade__> rogpeppe, ok, cool, thanks
[10:18] <rogpeppe> dimitern: which revision of mgo are you using?
[10:18] <dimitern> rogpeppe: how can i check? bzr info doesn't say
[10:19] <rogpeppe> dimitern: i use bzr log
[10:20] <dimitern> rogpeppe: ah, bzr info -h also shows that i found. so rev 183
[10:20] <rogpeppe> dimitern: ok, cool, that's the same one as me
[10:20] <dimitern> sorry, that was bzr info -v
[10:21] <rogpeppe> dimitern: that's weird then; i can't see how it could get a nil pointer error on that line
[10:21] <dimitern> rogpeppe: panic dumps don't lie :)
[10:22] <rogpeppe> dimitern: actually, they can quite often be out by a line
[10:22] <dimitern> rogpeppe: oh, i didn't know this
[10:25] <rogpeppe> dimitern: it's *usually* fairly obvious
[10:25] <rogpeppe> dimitern: and in this case, i can't see an obvious candidate (servers is not nil, and that's the only way that line can panic AFAICS)
[10:26] <dimitern> rogpeppe: weird..
[10:26] <rogpeppe> dimitern: tip has had some very significant changes recently. i wouldn't entirely rule out some memory-corruption issue.
[10:48] <fwereade__> long lunch today, bbl
[11:03] <TheMue> lunchtime too
[11:06] <rogpeppe> dimitern, fwereade__: i just saw another uniter test failure in trunk: http://paste.ubuntu.com/5633753/
[11:06] <dimitern> rogpeppe: this looks like the same issue
[11:07] <rogpeppe> dimitern: i'm not sure. it first dies in a different test, for a different initial reason
[11:07] <rogpeppe> dimitern: (it first dies in "hook error service dying")
[11:09] <dimitern> rogpeppe: is it consistently failing?
[11:10] <rogpeppe> dimitern: nope
[11:10] <rogpeppe> dimitern: i saw that one after about 10 runs
[11:10] <rogpeppe> dimitern: all with different values for GOMAXPROCS
[11:10] <rogpeppe> dimitern: (that one was with GOMAXPROCS=60)
[11:11] <dimitern> rogpeppe: so values of n > 2xnumber of cores still work?
[11:11] <rogpeppe> dimitern: yeah, you can have any number
[11:11] <rogpeppe> dimitern: it's just the number of processes that can be running cpu-bound stuff at once
[11:11] <rogpeppe> dimitern: i'm continually running dfc's stresstest shell script
[11:11] <dimitern> rogpeppe: i see, so what should we do about it?
[11:12] <rogpeppe> dimitern: we should delve in and try to understand what's happening
[11:13] <rogpeppe> dimitern: i'd start by comparing the logs from the passing test with the logs from the failing test and see where they diverge
[11:14] <dimitern> rogpeppe: ok
[11:25] <jam> rogpeppe, fwereade__: I'm trying to make the change to default to "precise". I can easily update the test. However, when I change it, it only tests the logic inside Config.New() (where if the value is empty it gets auto-set to a new value)
[11:25] <jam> it does not test the value in schema.Defaults
[11:25] <jam> do you know how to trigger schema.Defaults?
[11:26] <rogpeppe> jam: schema.Defaults is triggered when a config attribute isn't specified, no?
[11:27] <jam> rogpeppe: I would think so, but if I just change the line in "New" then all paths return the value I specify
[11:27]  * rogpeppe goes to look
[11:27] <jam> So, AIUI, there should be "not specified" as separate from "specified as the empty string"
[11:28] <jam> rogpeppe: ah, maybe I'm on crack because "default-series": version.Current.Series *is* precise on my machine.
[11:28] <jam> And even though I'm monkey-patching the value during testing, it is too late, because the value is already in the map
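The distinction jam is poking at — a schema-style default firing only when the key is absent, versus Config.New-style logic that also replaces an explicitly empty value — can be sketched like this (illustrative code, not the actual juju-core schema package):

```go
package main

import "fmt"

// applyDefaults fills in attributes that are missing entirely. This mimics
// schema.Defaults-style behaviour: a default fires only when the key is
// absent from the map, not when it is present but empty. Config.New-style
// logic would additionally replace an explicitly empty value.
// (Illustrative sketch, not the actual juju-core schema package.)
func applyDefaults(attrs, defaults map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		out[k] = v
	}
	for k, d := range defaults {
		if _, present := out[k]; !present {
			out[k] = d // key absent: default applies
		}
	}
	return out
}

func main() {
	defaults := map[string]string{"default-series": "precise"}
	fmt.Println(applyDefaults(map[string]string{}, defaults))                     // absent key: default fires
	fmt.Println(applyDefaults(map[string]string{"default-series": ""}, defaults)) // explicit empty value kept
}
```

This is why testing only via Config.New can't distinguish the two code paths: both end up producing a value, but for different reasons.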
[11:28] <rogpeppe> jam: ha! (you're still running precise?)
[11:29] <jam> rogpeppe: the last one to support Unity-2D, and 3D doesn't work very well in a VM
[11:29] <rogpeppe> jam: ah, i see
[11:29] <rogpeppe> jam: i didn't realise you used a VM
[11:29] <jam> not always, but I'm in Windows to do the windows building, etc.
[11:29] <jam> but I can't run the test suite there
[11:29] <jam> so VM
[11:30] <rogpeppe> of course
[11:30] <jam> I do have a raring VM which works ok, but the "3D" support is pretty poor for virtualbox
[11:30] <jam> even with the "allow 3D for guest" checked.
[13:11] <gary_poster> fwereade__, fwiw, bug 1131608 is still a blocker for us to fully deliver (developing the charm needs it)
[13:11] <_mup_> Bug #1131608: deployed series is arbitrary <juju-core:New for fwereade> < https://launchpad.net/bugs/1131608 >
[13:13] <fwereade__> gary_poster, I *think* that, as of thumper's branch proposed this morning, it should actually be resolved
[13:13] <gary_poster> fwereade__, oh, that would be great, thanks.
[13:13] <fwereade__> gary_poster, I'm pretty sure that's the last piece that needed to be added
[13:18]  * TheMue forgot to restart irc after reboot *facepalm*
[14:10] <fwereade__> rogpeppe, ping
[14:10] <rogpeppe> fwereade__: pong
[14:10] <fwereade__> rogpeppe, kanban :)
[14:10] <rogpeppe> fwereade__: oh, bugger
[15:20] <fwereade__> rogpeppe, ping
[15:20] <rogpeppe> fwereade__: pong
[15:21] <fwereade__> rogpeppe, since you can repro it, would you try out a fix for that uniter test please?
[15:21] <rogpeppe> fwereade__: sure
[15:21] <fwereade__> rogpeppe, line 570 should read:
[15:22] <rogpeppe> fwereade__: let me just clone $GOROOT :-)
[15:22] <fwereade__> info: "upgrade failed",
[15:22] <fwereade__> ...and that's it
[15:22] <fwereade__> rogpeppe, I *think*
[15:26] <rogpeppe> fwereade__: so where's the source of the indeterminacy?
[15:26] <rogpeppe> fwereade__: i'm first verifying i can still reproduce the bug; then i'll try the fix.
[15:27] <fwereade__> rogpeppe, that we wait until a few steps *before* the point we care about, and then assert we're at that point
[15:27] <rogpeppe> fwereade__: ah, i think i see - "hook failed: "start"" is just a stage on the way to "upgrade failed"
[15:28] <rogpeppe> fwereade__: yeah
[15:29] <fwereade__> rogpeppe, but actually wait
[15:29] <fwereade__> rogpeppe, I am suddenly very confused by the test
[15:29] <fwereade__> rogpeppe, even if it works, something's funny
[15:30] <fwereade__> rogpeppe, fuck, it's harder than I thought
[15:32] <rogpeppe> fwereade__: yeah, sorry, still fails
[15:32] <rogpeppe> fwereade__: it would be great to sort out the test isolation issue too
[15:33] <fwereade__> rogpeppe, yeah, I'm feeling a bit blocked on the thing I picked up, might take a proper look at both of those
[15:38] <rogpeppe> pwd
[15:52]  * rogpeppe wishes that gocheck printed searchable-for string with every assertion failure
[15:53] <mgz> rogpeppe: like what exactly?
[15:53] <rogpeppe> mgz: like "assertion failed"
[15:54] <mgz> ah, you mean just searchable in the output
[15:54] <rogpeppe> mgz: yeah
[15:54] <mgz> yeah.
[15:54] <rogpeppe> mgz: 'cos some tests (looking at uniter here) produce heroic quantities of output, and finding the errors is not easy!
[15:55] <rogpeppe> mgz: ah, this'll work pretty well: search for _test\.go
[15:55] <mgz> ha
[15:56] <rogpeppe> fwereade__: i just saw another uniter test failure (in trunk, or nearly trunk, this time). in steadyUpgradeTests.
[15:56] <rogpeppe> fwereade__: same symptom (never got expected hooks)
[15:56] <fwereade__> rogpeppe, cool, I'll take a look there too, might be similar
[15:57] <rogpeppe> fwereade__: will paste you the output if you want
[15:57] <fwereade__> rogpeppe, please
[15:58] <rogpeppe> fwereade__: http://paste.ubuntu.com/5634430/
[15:58] <fwereade__> rogpeppe, thanks
[16:03] <rogpeppe> pretty simple cleanup CL, if anyone wants to take a look: https://codereview.appspot.com/7945044
[16:46] <fwereade__> rogpeppe, when you have a moment, would you try lp:~fwereade/juju-core/fix-1157898 please?
[16:47] <rogpeppe> fwereade__: running tests on it now
[16:47] <fwereade__> rogpeppe, that other one you saw is profoundly weird... it looks like the hook is (maybe?) running but the juju-log tool is not
[16:49] <fwereade__> rogpeppe, if that fix works I might add a couple of logging lines before I propose, to help diagnose the other one if we see it again
[16:52] <rogpeppe> fwereade__: looking good so far - three full tests without incident
[16:52] <fwereade__> rogpeppe, excellent
[16:52] <rogpeppe> fwereade__: so what's the significance of the code move in Uniter.deploy?
[16:53] <fwereade__> rogpeppe, to make the things I said in the test true -- specifically, by delaying the SetCharm until the operation is not stoppable
[16:53] <fwereade__> rogpeppe, the critical point is doing so after the download has completed, after that it doesn't check Dying until it's done
[16:54] <rogpeppe> fwereade__: so is that a genuine bug?
[16:54] <fwereade__> rogpeppe, motivationwise, a bit of a hack; otherwise a reasonable change, I think... could definitely be argued to be more correct to not set a charm until you actually *have* the charm in hand
[16:55] <rogpeppe> fwereade__: sounds reasonable to me
[16:55] <fwereade__> rogpeppe, I think it is arguable that it was -- I'm not quite sure how it would behave after having set a charm but finding itself unable to download
[16:56] <fwereade__> rogpeppe, but the test was definitely racy, and I think that now it no longer is
[16:56] <rogpeppe> fwereade__: 5 iterations good so far
[16:56] <fwereade__> rogpeppe, awesomesauce
[16:56] <rogpeppe> fwereade__: and the Reset fix looks... why didn't we see that before? :-)
[16:57] <rogpeppe> 6
[16:57] <rogpeppe> pwd
[16:59] <fwereade__> rogpeppe, haha :)
[17:01] <benji> I'm seeing test failures on trunk, is that a known thing?  After skimming the scroll-back I didn't see mention of it.
[17:02] <fwereade__> benji, I am not aware of any myself, would you paste them please?
[17:02] <benji> fwereade__: http://paste.ubuntu.com/5634632/
[17:03] <benji> I'll check back after lunch and see if things are better.
[17:04] <fwereade__> benji, I think that might be jam's change
[17:04] <fwereade__> jam, are you (1) around and (2) running on precise?
[17:05] <dimitern> fwereade__: i think he mentioned he's running on precise
[17:05] <fwereade__> dimitern, grumble grumble, I bet that's it
[17:06] <rogpeppe> fwereade__: 12
[17:07] <fwereade__> rogpeppe, more awesomeness, I'm starting to feel good about it
[17:07] <mgz> `; beep` ha, a sign of a test suite that takes too long to run #;0~~
[17:07] <dimitern> fwereade__: i think frankban could reproduce these failures as well
[17:09] <fwereade__> dimitern, good point
[17:09] <fwereade__> frankban, am I right in thinking you were seeing the uniter test failure that rog disabled yesterday?
[17:10] <mgz> confirmed that's a regression due to r1044 on quantal
[17:10] <mgz> shall I just back the change out for now, as it's jam's eod?
[17:10] <frankban> fwereade__, dimitern: yes you are
[17:10] <frankban> fwereade__: 8 failures in uniter_test
[17:10] <fwereade__> frankban, would you try out lp:~fwereade/juju-core/fix-1157898 please?
[17:10] <mgz> fwereade__: ^
[17:10] <frankban> fwereade__: sure
[17:11] <fwereade__> mgz, yes please
[17:11] <fwereade__> mgz, I'll write him a bug
[17:11] <fwereade__> mgz, hmm, should it be a bug? I'll just mail him
[17:11] <mgz> mail or note in the review should be fine I'd say
[17:14] <mgz> proposed, and will go ahead and submit
[17:15] <mgz> benji: please pull lp:juju-core
[17:16] <mgz> jam: merge lp:~gz/juju-core/backout_r1044 into your feature branch, revert+fix, and repropose
[17:17] <dimitern> mgz: i doubt jam is around to do this at this time
[17:17] <mgz> he has the log :)
[17:18] <mgz> I don't expect him to do it till his next work day
[17:18] <mgz> (true, this kind of thing could also go to the list)
[17:18] <dimitern> but that would be sunday, right? i think it's better to back out that now, since trunk is broken
[17:19] <fwereade__> dimitern, it is backed out :)
[17:19] <frankban> fwereade__: tests pass in your branch
[17:19] <fwereade__> frankban, sweet, tyvm
[17:19] <frankban> fwereade__: np
[17:20] <dimitern> fwereade__: oh, ok :) sorry i missed this
[17:24] <rogpeppe> fwereade__: i just saw this after 22 successful runs: http://paste.ubuntu.com/5634687/
[17:25] <fwereade__> rogpeppe, that's outside the scope of what I did, so I think I'm going to propose it as is for now
[17:25] <rogpeppe> fwereade__: sure. just so's you know :-)
[17:28] <dimitern> rogpeppe: are you running the tests with dave's stress testing script?
[17:28] <rogpeppe> dimitern: yeah
[17:28] <dimitern> rogpeppe: cool! so it's as stable as it gets for now after 22 runs
[17:29] <rogpeppe> dimitern: yeah
[17:29] <rogpeppe> dimitern: it's still not good though
[17:29]  * rogpeppe says, with a serious face on
[17:29] <dimitern> rogpeppe: why? have you seen the same issue again
[17:30] <rogpeppe> dimitern: no, but any intermittently failing test is Bad
[17:30] <rogpeppe> dimitern: even if it's "only" once every 22 times
[17:30] <dimitern> rogpeppe: ah, sure :) but this one at least seems fixed
[17:30] <rogpeppe> dimitern: yup
[17:31] <fwereade__> dammit, must dash: if anyone fancies https://codereview.appspot.com/7950043 I'd be really happy
[17:31] <rogpeppe> fwereade__: will take a look
[18:06] <rogpeppe> reminder to self: never interrupt bzr at work
[18:17] <rogpeppe> g'night all
[18:18] <rogpeppe> i have a few CLs up for review if anyone fancies taking a look; they all have kanban tickets
[18:18] <benji> fwereade__: the trunk appears happy now, thanks
[20:29] <thumper> morning
[20:36] <thumper> fwereade__: don't suppose you are around?
[21:47]  * thumper wonders if there was anything else to email the list with...
[22:01] <fwereade__> thumper, heyhey
[22:02] <fwereade__> thumper, not really, but kinda
[22:08]  * thumper was confused there for a minute
[22:08] <thumper> fwereade__: I'm just wondering about testing the tools selection for start instance
[22:08] <thumper> fwereade__: also, tools are specified by bootstrap...
[22:09] <thumper> fwereade__: trying to finish this off before looking at the machine things we talked about last night my time
[22:10] <fwereade__> thumper, still thinking about the tests, but not quite following on bootstrap
[22:11] <thumper> well... you said that the tools weren't set in the start instance params
[22:11] <thumper> well, they aren't if coming from StartInstance, but are from Bootstrap
[22:11] <fwereade__> thumper, tools selection with --upload-tools should Just Work once we get rid of the weird multi-bucket fallback stuff in tools
[22:12] <fwereade__> thumper, assuming we set default-series and agent-version at upload time, anyway
[22:12] <fwereade__> thumper, ofc none of that is written yet
[22:12] <thumper> :)
[22:13] <fwereade__> thumper, but I think it's the stuff I was talking about in the saner bits of my last email
[22:13] <thumper> I'm not clear on why the multibucket stuff needs to change
[22:13] <thumper> fwereade__: which we do, except for it being mildly broken
[22:13] <thumper> fwereade__: by version.Current anyway
[22:13] <fwereade__> thumper, mainly because the falling back is deeply confusing to me
[22:14] <fwereade__> thumper, so long as we only use the first bucket that has any tools, I can figure it out quite easily
[22:14] <fwereade__> thumper, but the way bad matches from closer buckets beat good matches from distant buckets breaks my brain
[22:14] <thumper> hmm...
[22:15] <thumper> so, here is a question...
[22:15] <thumper> if I have a development version of tools I have uploaded
[22:15] <thumper> how does this interact with start instance?
[22:15] <thumper> when it is looking for tools
[22:15] <thumper> this bit still confuses me
[22:16] <thumper> but I could probably just talk with dave about that when he starts
[22:16] <thumper> if we just used the version defined in agent-version, we may not get my special uploaded tools
[22:16] <fwereade__> thumper, I *think* that if we do agent-version right, we can easily pick the tools right
[22:17] <fwereade__> thumper, but I confess to some uncertainty around the magic insertion of the dev-version flag
[22:17] <fwereade__> thumper, there might be a "development" field in env-config that comes into play somewhere
[22:18] <fwereade__> thumper, but I shouldn't really be getting into this now tbh
[22:18] <thumper> :)
[22:18] <fwereade__> thumper, if I'm still awake when nobody else is I'll swing by again
[22:18] <thumper> fwereade__: just sleep :)
[22:18] <fwereade__> thumper, if I don't, happy weekend :)
[22:18] <thumper> you too
[22:18] <thumper> long weekend for me
[22:56] <thumper> davecheney: morning
[22:56] <thumper> davecheney: I have a question (or two)
[22:56] <davecheney> thumper: shoot mate
[22:56] <thumper> davecheney: apart from all those other questions on the email list
[22:56] <davecheney> i haven't got through all the correspondence on the list yet
[22:57] <thumper> davecheney: https://canonical.leankit.com/Boards/View/103148069/104140367 is this done by the change I recently landed in worker/provisioner/provisioner.go line 264?
[22:57] <thumper> davecheney: in trunk as of this morning
[22:57] <davecheney> gimme a sec
[22:57] <thumper> if so, yay, another thing done quickly
[22:57] <davecheney> i suspect it is
[22:57] <davecheney> the provisioner calls StartInstance
[22:57] <thumper> cool, I'll move it to done
[22:58] <davecheney> please hold, confirming
[22:59] <davecheney> thumper: LGTM,
[23:00]  * thumper holds (even though he has already moved the card)
[23:01]  * davecheney has much love for bzr log --show-diff
[23:02] <davecheney> wow, 3 LGTMs on the logging bikeshed
[23:02] <davecheney> fuck yeah
[23:02] <davecheney> i'm gonna submit that before anyone changes their mind
[23:05] <thumper> :)
[23:05] <thumper> davecheney: I saw two, but thought I'd throw mine in too for good measure
[23:06] <davecheney> wallyworld_ has a point
[23:06] <davecheney> but I prefer cut -f for log splitting
[23:06] <davecheney> so that is how I wrote it
[23:06] <thumper> I don't think wallyworld_ has a point
[23:07] <thumper> don't put the colon in
[23:07] <thumper> better without
[23:07] <thumper> given a defined timestamp, and a defined severity
[23:07] <thumper> it is trivial to get those two out, or ignore them if need be
[23:07] <thumper> adding a colon adds a character for no real benefit
[23:07] <davecheney> okdie dokes
[23:08] <davecheney> shit, all the lanes are full
[23:08] <davecheney> better get cracking
[23:08] <thumper> one reason I did a few reviews :)
[23:08] <davecheney> i'm going to move rogers api doc back out of review
[23:08]  * thumper heads out for lunch
[23:08] <thumper> back later
[23:08] <davecheney> it's been in there since before atlanta