[01:20] <davecheney> grrr, something is leaking /tmp/gui* turds
[01:25] <axw> thumper: "Generally LGTM" = shipit? I've answered your question about newline
[01:26] <axw> thanks for reviewing btw
[01:31] <thumper> axw: yes
[01:31] <axw> thumper: ta
[02:50] <davecheney>         // If any post-MVP command suite enabled the flag, keep it.
[02:50] <davecheney>         hasFeatureFlag := featureflag.Enabled(feature.PostNetCLIMVP)
[02:50] <davecheney>         s.BaseSuite.SetUpTest(c)
[02:50] <davecheney>         s.FakeJujuHomeSuite.SetUpTest(c)
[02:50] <davecheney>         if hasFeatureFlag {
[02:50] <davecheney>                 s.BaseSuite.SetFeatureFlags(feature.PostNetCLIMVP)
[02:50] <davecheney>         }
[02:50] <davecheney> wut
[02:50] <davecheney> if the feature flag is enabled, then enable the feature flag
[02:57] <davecheney> that feel when you find the same bug copy pasted into several test suites
[02:57] <natefinch> heh
[04:26] <thumper> heh
[04:26] <thumper> golint needs more smarts
[04:27] <thumper> don't use underscores in Go names; type interface_ should be interface (golint)at line 13 col 6
[04:27] <thumper> also complains about type_
[04:53] <menn0> thumper: here's an important one: http://reviews.vapour.ws/r/4414/
[04:53]  * thumper looks
[04:54] <menn0> that was fun
[04:54]  * menn0 hasn't been in the zone like that for a long time
[04:54] <thumper> I'm sure Will will like that one
[04:55] <menn0> thumper: yes, he doesn't like these custom watchers
[04:55] <menn0> thumper: and I now pretty much agree with him
[04:57] <thumper> shipit
[04:58] <menn0> thumper: cheers
[05:00] <thumper-afk> bbl to catch europeans
[07:28] <davecheney> fixing cleanup suite failures in the ec2 tests is making me sad
[07:28] <davecheney> all day
[07:29] <davecheney> no fucking idea what is broken
[07:29] <davecheney> touch one thing, and magic values in other parts of the program don't get overwritten
[07:29] <davecheney> i hate patch value
[07:29] <davecheney> it's a tumor
[07:39] <rogpeppe> davecheney: sorry about that
[07:39] <rogpeppe> davecheney: it seemed like a good idea at the time
[07:51] <davecheney> FFFFFFFFFFFFFFFFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
[07:51] <davecheney> Cleanup suite uses a package singleton
[07:51] <davecheney> so you can have multiple CleanupSuites
[07:51] <davecheney> pooped all the way through your suite construction
[07:51] <davecheney> and they all COMPETE FOR THE SAME FUCKING SingleTON!
[07:51] <davecheney> oh
[07:51] <davecheney> no
[07:51] <davecheney> i was wrong
[07:52] <davecheney> i take that rant back
[07:58] <davecheney> https://github.com/juju/testing/pull/94
[09:00] <dooferlad> frobware, voidspace: hangout time
[09:01] <voidspace> dooferlad: frobware: be there in 2 minutes!
[09:06] <voidspace> dooferlad: omw
[09:32] <voidspace> babbageclunk: I'll ping you shortly, just fending off some emails
[09:32] <babbageclunk> ok cool
[09:34] <voidspace> babbageclunk: I've pulled in the changes from use-boot-resources onto our maas2instance branch
[09:34] <voidspace> babbageclunk: and am doing the rename
[09:34] <babbageclunk> voidspace: the one Tim suggested? ok
[09:34] <voidspace> babbageclunk: yeah, I think it's clearer
[09:35]  * babbageclunk nods
[09:35] <voidspace> babbageclunk: so the old maasInstance becomes maas1Instance and the interface becomes maasInstance
[09:36] <voidspace> babbageclunk: we need to update to the latest version of gomaasapi and fix verifyCredentials to not hit Machines endpoint plus use the new gomaasapi error instead of the net/http one
[09:36] <voidspace> babbageclunk: we can do that in a new branch though
[09:45] <babbageclunk> voidspace: shall I make a start on the verifyCredentials change?
[10:26] <voidspace> babbageclunk: sorry, missed your message
[10:26] <voidspace> babbageclunk: sure you can look at errors too
[10:28] <voidspace> babbageclunk: just grabbing coffee then I'm ready
[10:30] <voidspace> babbageclunk: use-boot-resources landed
[11:40] <voidspace> frobware: dooferlad: gah, updating to use the latest version of gomaasapi from thumper causes 23 test failures :-/
[11:40] <voidspace> just venting
[11:41] <voidspace> now bisecting to find what caused it
[12:03] <frobware> voidspace: 23 test failures in juju or gomaasapi?
[12:23] <perrito666> marcoceppi: hello, you had mentioned the name of a charm that implements update-status what was it?
[12:23] <perrito666> also mgz ping
[12:49] <babbageclunk> frobware: juju - it turned out to be an errors.Trace that was added in gomaasapi.
[12:49] <babbageclunk> frobware: it meant adding errors.Cause in a few places.
[12:54] <perrito666> anyone knows the exact version of lxd that works with master?
[13:18] <voidspace> frobware: that was in juju
[13:18] <voidspace> frobware: we found the cause, and fixed it
[13:18] <voidspace> ah yes
[13:18] <voidspace> babbageclunk: grabbing coffee
[14:06] <voidspace> frobware: dooferlad: http://reviews.vapour.ws/r/4422/
[14:17] <marcoceppi> perrito666: any of the bigdata charms
[14:18] <perrito666> yup, already found tx a lot marcoceppi
[14:18] <mup> Bug # changed: 1229903, 1292157, 1323446, 1365542, 1531954
[14:27] <mup> Bug #972515 changed: Charm store needs search functionality <store> <juju-core:Invalid by niemeyer> <https://launchpad.net/bugs/972515>
[14:42] <mup> Bug #972515 opened: Charm store needs search functionality <store> <juju-core:Invalid by niemeyer> <https://launchpad.net/bugs/972515>
[14:46] <frobware> voidspace, sorry, sidetracked by a customer issue.
[14:46] <frobware> dooferlad: would you mind obliging with voidspace's review ^^
[14:46] <dooferlad> frobware: sure
[14:48] <mup> Bug #972515 changed: Charm store needs search functionality <store> <juju-core:Invalid by niemeyer> <https://launchpad.net/bugs/972515>
[14:48] <mup> Bug #1565826 opened: Unable to build juju2 from master <juju-core:New> <https://launchpad.net/bugs/1565826>
[14:48] <mup> Bug #1565827 opened: TestTimeoutRun fails <ci> <test-failure> <juju-core:Incomplete> <juju-core feature-juju-run-action:Triaged> <https://launchpad.net/bugs/1565827>
[14:52] <dooferlad> voidspace: got a +1
[14:59] <voidspace> dooferlad: thanks!
[15:01] <katco> natefinch: ericsnow: standup time
[15:05] <alexisb> morning all
[15:05] <alexisb> if anything needs attention from me today please ping me directly
[15:06] <alexisb> crawling through email atm
[15:08] <katco> alexisb: wb
[15:08] <alexisb> thanks katco !
[15:09] <mup> Bug #1565831 opened: unable to create authenticated client with maas 1.9 <bootstrap> <ci> <maas-provider> <juju-core:New> <juju-core maas2:Triaged> <https://launchpad.net/bugs/1565831>
[15:15] <perrito666> returning from holidays my brain is all :"oh that english thing again, sure, lemme switch everything, in the meanwhile, talk like scooby doo"
[15:17] <katco> perrito666: you did fine ;)
[15:17] <perrito666> katco: when I just connected my brain was in "what are these people talking?" mode
[15:22] <natefinch> perrito666: lol
[15:47] <ericsnow> fwereade: thanks for the review; processing now
[16:00] <voidspace> rick_h_: is the meeting on today?
[16:01] <voidspace> frobware: dooferlad: is the meeting with rick_h_ on today?
[16:01] <voidspace> I added babbageclunk as a guest
[16:02] <natefinch> ericsnow: charmstore meeting
[16:09] <mup> Bug #1565826 changed: Unable to build juju2 from master <juju-core:Invalid> <https://launchpad.net/bugs/1565826>
[16:10] <rick_h_> voidspace: frobware babbageclunk sorry!
[16:10] <voidspace> dooferlad: frobware: rick_h_ is here!
[16:12] <katco> ericsnow: natefinch: so when we all have a moment, we should point new work, and i can give you update on projected completion of project to see if we're in trouble
[16:13] <frobware> voidspace, rick_h_: omw
[16:13] <ericsnow> katco: k
[16:15] <frobware> rick_h_: sorry was clicking as you were speaking...
[16:16] <rick_h_> frobware: all good
[16:18] <mup> Bug #1565872 opened: As a juju user I would like to use docker on the local provider <adoption> <juju-core:New> <https://launchpad.net/bugs/1565872>
[16:19] <alexisb> katco, ping
[16:19] <katco> alexisb: pong
[16:27] <mup> Bug #1565872 changed: Juju needs to support LXD profiles as a constraint <adoption> <juju-core:New> <https://launchpad.net/bugs/1565872>
[16:39] <mup> Bug #1565872 opened: As a juju user I would like to use docker on the local provider <adoption> <juju-core:New> <https://launchpad.net/bugs/1565872>
[16:39] <mup> Bug #1565880 opened: juju list-credentials --show-secrets does not do anything <docteam> <juju-core:New> <https://launchpad.net/bugs/1565880>
[16:40] <natefinch> katco: done meeting with the charmstore guys, would like to get lunch if that's ok?
[16:42] <katco> natefinch: that's fine
[16:42] <katco> ericsnow: natefinch: we'll meet after
[16:42] <ericsnow> katco: k
[16:44] <perrito666> natefinch: just as a reference, re lxd, the version required for juju to work is the one in the ppa no the one in wily
[16:45] <natefinch> perrito666: wait, I thought they reversed that
[16:46] <perrito666> natefinch: so did I but juju says otherwise
[16:46] <natefinch> perrito666: whatever, my lxd works, that's all I care about for now
[16:47] <perrito666> natefinch: heh, mine too
[16:47] <natefinch> perrito666: you're right, mines from the PPA
[16:47]  * natefinch lunches
[16:58] <voidspace> frobware: dooferlad: review if you get a chance http://reviews.vapour.ws/r/4425/
[17:50] <natefinch> katco, ericsnow: ready when you guys are
[17:51] <ericsnow> natefinch: k
[17:52] <katco> natefinch: ericsnow: let's do it!
[18:17] <marcoceppi> can I remove storage from a service?
[18:19] <marcoceppi> why do we have a detaching hook, but no way to detach storage?
[18:19] <marcoceppi> rick_h_: ^
[18:39] <katco> urulama: rick_h_: hey, can you be authorized to a specific channel?
[18:39] <urulama> yes
[18:39] <urulama> acls are different for each channel
[18:39] <natefinch> urulama: so a macaroon is channel-specific?
[18:40] <mup> Bug #1503029 changed: juju plugins which exit > 0 report a subprocess ERROR <charmers> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1503029>
[18:40] <rick_h_> katco: yes, the idea being you give your tester folks access to dev channel
[18:40] <katco> rick_h_: makes sense... suspicion confirmed ty :)
[18:41] <urulama> natefinch: macaroon is not, but ACL is
[18:46] <mup> Bug #1565943 opened: Can't bootstrap on VSphere <vsphere> <juju-core:Triaged> <https://launchpad.net/bugs/1565943>
[18:59] <natefinch> urulama: so that's kinda weird... there's no channel parameter sent up when you get a charm archive.. I guess if that charm is in a public channel it'll just work? and if it's not it'll return an access denied error of some sort?
[19:09] <urulama> natefinch: sure it does. check out https://api.jujucharms.com/charmstore/v5/~jorge/bundle/wiki-simple/archive?channel=unpublished vs https://api.jujucharms.com/charmstore/v5/~jorge/bundle/wiki-simple/archive
[19:11] <natefinch> urulama: ahh, ok, I was looking at the docs, where it doesn't seem to indicate channels get passed to get the archives
[19:12] <natefinch> urulama: https://github.com/juju/charmstore/blob/v5-unstable/docs/API.md#get-idarchive
[19:12] <urulama> natefinch: ha, ok, that needs some updating then
[19:12] <natefinch> urulama: oh, I guess the blurb about channels at the top covers that, for any endpoint that takes an id can take a channel
[19:13] <urulama> natefinch: works also on /meta endpoint
[19:13] <natefinch> urulama: cool, thanks for clarifying
[19:13] <urulama> np
[19:57] <mbruzek> cherylj: Have you seen issues with LXD deploying xenial where the juju agent doesn't finish installation?
[19:57] <katco> can anyone tell me why we aren't using logging here? https://github.com/juju/juju/blob/master/provider/common/bootstrap.go#L133
[19:59] <mbruzek> ping anyone else working with LXD and xenial
[19:59] <natefinch> katco: weird
[20:00] <cherylj> mbruzek: I haven't tried it in a couple days
[20:00] <cherylj> mbruzek: are you using lxd provider on a daily xenial image?
[20:00] <mbruzek> cherylj: yes.
[20:01] <mbruzek> Marco and I looked at the logs, we saw a modprobe fail.
[20:01] <cherylj> mbruzek: k, let me provision a new xenial instance
[20:01] <natefinch> katco: that's the output for the bootstrap command, to the CLI
[20:01] <katco> natefinch: was just typing that
[20:01] <katco> natefinch: got lost in a lot of --debug output, so things incorrectly seemed odd when I was looking at backtraces
[20:01] <natefinch> katco: it would still make sense to wrap it in a logger so you don't risk interleaved writes, but it's less likely on the client, I guess
[20:03] <mbruzek> cherylj: I assert the agent will be stuck "Waiting for agent initialization to finish"
[20:03] <mbruzek> after deploying a charm in xenial series
[20:04] <mbruzek> Or at least that is what I am seeing
[20:04]  * thumper dives into email
[20:05] <cherylj> mbruzek: so it's also a xenial container you're deploying too?
[20:05] <cherylj> mbruzek: is it a trusty controller?
[20:05] <mbruzek> I was following this page: https://jujucharms.com/docs/devel/config-LXD
[20:06] <mbruzek> That bootstrap command looks like I am requesting a xenial controller.
[20:13] <mbruzek> cherylj: Once you get bootstrapped, deploy a xenial charm.
[20:13] <mbruzek> juju deploy ubuntu --series xenial --force
[20:13] <mbruzek> cherylj: ^
[20:14] <cherylj> mbruzek: are you using beta3, or tip of master?
[20:14] <mbruzek> beta3
[20:14] <cherylj> omg, I'm deploying an openstack bundle on my vmaas and my laptop is screaming!
[20:15] <cherylj> much lag, such slow
[20:15] <mbruzek> cherylj: Oh
[20:15] <mbruzek> cherylj: Yikes!
[20:16] <mbruzek> cherylj: I just deployed the ubuntu charm on both trusty and xenial
[20:16] <cherylj> k, my instance in aws is coming up :)
[20:16] <mbruzek> LXD inside aws?
[20:17] <cherylj> mbruzek: yeah, I like to spin up a clean xenial install when testing the lxd provider
[20:19] <mbruzek> cherylj: so I got both of the charms to deploy completely, but I see an error in the machine log about something that will never work on lxd
[20:20] <mbruzek> The juju agent checks for kvm:  WARNING juju.cmd.jujud machine.go:760 determining kvm support: INFO: /dev/kvm does not exist
[20:20] <mbruzek> Not there in the base cloud image
[20:20] <mbruzek> HINT:   sudo modprobe kvm_intel
[20:21] <mbruzek> cherylj: Then I see an error when the juju agent tries to do modprobe.
[20:21] <mbruzek> modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-15-generic/modules.dep.bin'
[20:21] <mbruzek> : exit status 1
[20:21] <mbruzek> no kvm containers possible
[20:21] <cherylj> mbruzek: and that's causing the machine agent to barf?
[20:21] <cherylj> mbruzek: that should be fine
[20:22] <cherylj> bootstrapping now...
[20:22] <mbruzek> cherylj: This time it does not appear that they barfed, both trusty and xenial seem OK now.
[20:23] <cherylj> mbruzek: did you do anything differently?  or did just doing it a second time work?
[20:23] <mbruzek> cherylj: why do you think that is OK? doing a modprobe within a LXD container is never going to work.
[20:23] <cherylj> mbruzek: it's just a check to see if we have the capability of hosting KVMs
[20:24] <cherylj> happens on all providers
[20:26] <mbruzek> cherylj: but since LXD is sharing the kernel, a modprobe will not work in KVM. So the error is handled appropriately?
[20:26] <mbruzek> cherylj: s/KVM/LXD
[20:26] <mbruzek> Because they share the kernel
[20:26] <mbruzek> you can not insert any more mods
[20:30] <mbruzek> cherylj: OK I was able to bootstrap, deploy and clean up correctly this time.
[20:32] <cherylj> mbruzek: is there a better way to detect whether or not we can host KVMs?  (I'm not familiar with how we would do that)
[20:33] <cherylj> mbruzek: and on this lxd instance issue - when the unit was waiting for agent initialization, did it have an instance ID yet?
[20:33] <mbruzek> cherylj: Not that I am aware of, but perhaps we should check for LXD / containers first and if not in a container then check for kvm.
[20:33] <mbruzek> cherylj: let me check my status
[20:34] <mbruzek> cherylj: yes, I was even able to ssh to it, but the juju agent was dead
[20:34] <cherylj> ah, I'm running into a different issue then
[20:34] <cherylj> mbruzek: were both unit and machine agents dead?
[20:34] <mbruzek> cherylj: I think it was the machine unit, and I was not able to destroy the controller
[20:35] <cherylj> mbruzek: when you did destroy-controller, what was the error?  or did it just hang?
[20:35] <mbruzek> cherylj: it went into an infinite loop
[20:36] <cherylj> mbruzek: ah, in the "waiting for x service" blurb?
[20:36] <mbruzek> I hit control-c and re-ran it with --debug on
[20:36] <mbruzek> Waiting on 2 models, 1 machine, 2 services
[20:37] <mbruzek> cherylj: with debug:
[20:37] <mbruzek> 2016-04-04 19:32:20 INFO cmd cmd.go:129 Waiting on 2 models, 1 machine, 2 services
[20:37] <mbruzek> 2016-04-04 19:32:20 INFO cmd cmd.go:141 admin@local/default (alive), 1 service
[20:37] <mbruzek> for ever, I could not --force it
[20:37] <cherylj> mbruzek: this sounds familiar.  I think I've run into this before, but couldn't recreate
[20:38] <mbruzek> cherylj: well kill the machine agent, and try to destroy the controller
[20:38] <cherylj> mbruzek: you can manually terminate the controller machine, then run kill-controller
[20:38] <mbruzek> But this time everything came down OK
[20:39] <cherylj> ok
[20:39] <cherylj> let me poke more, see if I can recreate
[20:39] <mbruzek> cherylj: Yes I figured out the lxc commands to stop and delete the images
[20:40] <mbruzek> But all the Juju stuff was left in the .local/share/juju
[20:40] <cherylj> mbruzek: yeah, if you just kill the controller machine and run kill-controller, it will give up talking to the controller and then go to the provider to clean up
[20:40] <cherylj> and then it will clean up your local cache information
[20:41] <mbruzek> cherylj: OK I was not able to reproduce that, but I didn't know that worked
[20:41] <cherylj> mbruzek: I wrote a lot of the destroy / kill controller code so I know ALL ITS SECRETS
[20:41] <mbruzek> cherylj: OK well thank you for the information
[20:42] <cherylj> mbruzek: there's also a bug open to provide a way to clean up stale controller information
[20:42] <cherylj> It'd be super awesome if we could get that done for 2.0
[20:43] <mbruzek> cherylj: OK so you were able to get a xenial ubuntu and everything looks good?
[20:43] <cherylj> mbruzek: no, I'm running into a different bug
[20:43] <cherylj> mbruzek: what I see is that on a new xenial install, deploying in lxd doesn't even get an instance associated with the service
[20:43] <mbruzek> cherylj: so let me understand, you are using amazon to run a local LXD provider?
[20:44] <cherylj> mbruzek: yes, I manually deploy a xenial machine, install juju2 on it, and bootstrap lxd
[20:44] <mbruzek> cherylj: OK, I just wanted to understand the workflow
[20:44] <cherylj> mbruzek: :)  I do it to make sure it's clean and I haven't messed things up some how
[20:45] <mbruzek> that is a good idea!
[20:45] <cherylj> mbruzek: if you haven't seen it before, I use the cloud image finder to quickly launch a particular instance in a particular region:  https://cloud-images.ubuntu.com/locator/daily/
[20:46] <mup> Bug #1564622 changed: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1564622>
[20:46] <mup> Bug #1565991 opened: juju commands don't detect a fresh juju 1.X user and helpfully tell them where to find juju 1.X <juju-core:Triaged> <https://launchpad.net/bugs/1565991>
[20:46] <mbruzek> cherylj: Thanks for the link
[20:47] <mbruzek> cherylj: which type do you pick, I imagine the ssd ones are more expensive
[20:47] <cherylj> mbruzek: I just go with the ssd ones
[20:53] <voidspace> thumper: ping
[20:53] <thumper> voidspace: hey
[20:53] <voidspace> thumper: morning
[20:54] <voidspace> thumper: I just wanted to say thanks
[20:54] <thumper> for?
[20:54] <voidspace> thumper: your changes to gomaasapi broke 24 tests and it took us nearly two hours to work out why
[20:54] <voidspace> ;-)
[20:54] <voidspace> thumper: we fixed it, but I do wonder if we ought to change it (which is why I'm really pinging)
[20:54] <thumper> won't make that mistake again then, will you?
[20:54] <voidspace> hehe
[20:54] <voidspace> thumper: you added an errors.Trace to the error the client returns
[20:55] <thumper> gomaasapi.GetServerError(err error) (ServerError, bool)
[20:55] <voidspace> thumper: which means that anything that attempts to cast to a ServerError fails
[20:55] <voidspace> sure
[20:55] <thumper> I added that for that exact reason
[20:55] <voidspace> but it's backwards incompatible
[20:55] <thumper> eh...
[20:55] <thumper> yeah
[20:55] <thumper> ish
[20:55] <thumper> a bit
[20:55] <thumper> not heaps
[20:55] <voidspace> well, it broke 23 tests on master...
[20:55] <voidspace> so yeah - ish
[20:55] <thumper> you're welcome
[20:55] <voidspace> :-D
[20:55] <voidspace> it's *better*
[20:56] <voidspace> maybe we just see if anyone else complains
[20:56] <voidspace> gomaasapi does have users
[20:56] <voidspace> not many probably but some :-)
[20:56] <voidspace> (I mean the change is better)
[20:57] <voidspace> anyway, I'm off to bed
[20:57] <voidspace> well - watch TV anyway
[20:57] <voidspace> thumper: just thought as you were around I'd hassle you :-)
[20:57] <voidspace> thumper: have a good day
[20:57] <thumper> :)
[21:34] <mup> Bug # opened: 1564157, 1564163, 1564165, 1566011, 1566014
[21:37] <mup> Bug # changed: 1564157, 1564163, 1564165, 1566011, 1566014
[21:46] <mup> Bug #1565880 changed: juju list-credentials --show-secrets does not do anything <docteam> <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1565880>
[21:46] <mup> Bug # opened: 1564157, 1564163, 1564165, 1564168, 1564670, 1566011, 1566014, 1566023, 1566024
[22:19] <menn0-afk> thumper: PR #20 LGTM. It's a bit hard to read #21 b/c it builds on #20, so let me know when you've merged #20.
[22:20] <thumper> menn0: ack, will land, and update pr
[22:37] <mup> Bug #1566044 opened: list-models and show-model should represent similar keys for controller model <juju-core:New> <https://launchpad.net/bugs/1566044>
[22:49] <mup> Bug #1566044 changed: list-models and show-model should represent similar keys for controller model <juju-core:New> <https://launchpad.net/bugs/1566044>
[22:52] <mup> Bug #1566044 opened: list-models and show-model should represent similar keys for controller model <juju-core:New> <https://launchpad.net/bugs/1566044>
[23:15] <perrito666> axw: anastasiamac is standup still happening?
[23:15] <axw> perrito666: yes, omw
[23:15] <anastasiamac> m here
[23:38] <davecheney> # github.com/juju/juju/provider/ec2_test
[23:38] <davecheney> local_test.go:93: ambiguous selector t.AddCleanup
[23:38] <davecheney> FATAL: command "test" failed: exit status 2
[23:39] <davecheney> pungent smell that CleanupSuite is present more than once in this suite
[23:44] <perrito666> davecheney: odd, although I found someone had refactored something upstream last time I was nearby