[02:18] <mbruzek> waigani on a scale of 1 to 10 how hard was Ubuntu to install on a Mac?
[02:20] <waigani> hmmm, not too bad actually. I'd give it a 7
[02:20] <waigani> the trackpad sucks though
[02:21] <waigani> selecting, dragging and dropping - I have not sworn so much in a long time!
[02:23] <waigani> I found this very helpful for the install https://help.ubuntu.com/community/MacBookPro11-1/Saucy
[07:58] <dimitern> jam, ping
[07:59] <dimitern> jam, if you haven't yet started working on bug 1268471 i'd like to pick it up
[07:59] <_mup_> Bug #1268471: cache bootstrap address <bootstrap> <juju-core:Triaged by dimitern> <https://launchpad.net/bugs/1268471>
[08:00]  * dimitern is away for 1h
[08:56] <rogpeppe> anyone have an idea why my /boot partition might have filled up recently?
[09:01] <axw> rogpeppe: usually for me that's because of kernel upgrades, and the old ones being kept around
[09:01] <rogpeppe> axw: yeah, i'd just figured that out, thanks
[09:01] <axw> are there lots of vmlinuz files?
[09:02] <axw> ok
[09:02] <jam> dimitern: I haven't started it yet
[09:02] <rogpeppe> axw: i've just removed some old vmlinuz and initrd.img files
[09:02] <rogpeppe> axw: hopefully no crucial ones
[09:02] <rogpeppe> axw: i'm assuming that old versions will never be used again
[09:02] <axw> rogpeppe: you should probably apt-get remove the packages
[09:02] <rogpeppe> axw: ah
[09:03] <axw> removing the packages updates grub too
[09:03] <axw> rogpeppe: linux-image-<version>-generic
[09:03] <axw> remove them for whichever ones you deleted :)
[09:05] <rogpeppe> axw: how can i list all the packages i've got installed?
[09:06] <rogpeppe> axw: i'd like to do <list all packages> | grep linux-image, just to make sure i'm doing the right thing
[09:06] <axw> rogpeppe: dpkg --get-selections
[09:06] <rogpeppe> axw: ha ha
[09:07] <rogpeppe> axw: you can't do it with apt-get?
[09:07] <rogpeppe> axw: fair enough
[09:07] <axw> not sure actually
[09:10] <rogpeppe> axw: much better! /boot down from 100% to 30%
[09:10] <axw> :)
[09:11] <rogpeppe> axw: one might think that the usual upgrade procedure would garbage collect every now and again
[09:12] <rogpeppe> axw: thanks a lot BTW
[09:12] <axw> rogpeppe: yeah, though you wouldn't want to upgrade and lose your old kernel if the new one is busted
[09:12] <axw> rogpeppe: nps
[09:14] <rogpeppe> axw: yeah, but keeping around 8 versions is probably slightly overkill...
[09:14] <axw> I won't argue with that :)
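For the record, the cleanup axw describes boils down to something like the following sketch (the kernel version in the remove command is illustrative, not one from this conversation; keep whichever kernel `uname -r` reports as running):

```shell
# List the installed kernel image packages (axw's dpkg suggestion)
dpkg --get-selections | grep linux-image

# The kernel currently running -- never remove this one
uname -r

# Remove an old kernel's *package* instead of deleting files from /boot
# by hand; apt runs the grub update for you. The version is an example:
sudo apt-get remove linux-image-3.11.0-12-generic
```

On newer Ubuntu releases `sudo apt-get autoremove` can prune superseded kernels in one step, which addresses rogpeppe's "garbage collect" wish.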
[09:59] <jam> axw: did something change with the test suite that it is trying to read /home/ubuntu/.ssh/authorized_keys ?
[10:00] <jam> axw: https://code.launchpad.net/~rogpeppe/juju-core/479-desired-peer-group/+merge/201245 for the traceback
[10:00] <jam> it happens that the machine has an ubuntu user
[10:00] <jam> but the test suite is run as "tarmac"
[10:00] <jam> so doesn't have rights
[10:00] <axw> jam: that'd be the authenticationworker I think
[10:00] <jam> and *IMO* shouldn't need them
[10:00] <dimitern> jam, rogpeppe, wallyworld_, any idea how to work around this error while bootstrapping? bootstrap failed: cannot find bootstrap tools: XML syntax error on line 9: element <hr> closed by </body>
[10:01] <jam> dimitern: wallyworld_ has a patch up for it, but also just "juju bootstrap --upload-tools" which I think you need anyway ?
[10:01] <dimitern> jam, oh, silly me, ofc
[10:01] <dimitern> thanks
[10:01] <axw> jam: so, I'm pretty sure the error there is not due to the lack of access
[10:02] <axw> it is very spammy though
[10:02] <jam> axw: [LOG] 31.15149 ERROR juju.worker.authenticationworker reading ssh authorized keys for "machine-0": reading ssh authorised keys file: open /home/ubuntu/.ssh/authorized_keys: permission denied
[10:02] <jam> that may not be why the test suite failed, true
[10:02] <jam> but it does look like something is broken in our test infrastructure
[10:02] <jam> which hides real stuff
[10:10] <axw> jam: I don't suppose I can do multiple prereqs on a proposal, can I?
[10:10] <jam> axw: just one
[10:10] <axw> rats
[10:10] <jam> you could merge them into another branch and use that as the prereq
[10:11] <axw> doesn't matter, I'll just wait for the others to be approved
[10:11] <axw> anyway: hooray, Windows can now bootstrap again (in my sandbox)
[10:42] <rogpeppe> jam: i just saw that
[10:42] <rogpeppe> jam: i suspect that something's not setting up things for the fake home correctly
[10:43] <jam> rogpeppe: well the tests are run as Tarmac, so it should be reading a different home dir
[10:43] <rogpeppe> jam: nothing should be trying to read from /home/ubuntu
[10:43] <jam> I have the feeling authentication worker has hard-coded "ubuntu" user's .ssh/authorized-keys
[10:43] <rogpeppe> jam: yeah
[10:44] <axw> jam: it could be changed, but I wonder if it should be run at all in tests. I don't think it's a good idea for the test user's authorized_keys to be updated...
[10:46] <rogpeppe> jam: it doesn't seem to hard code /home/ubuntu actually
[10:46] <rogpeppe> jam: but even so, it shouldn't be looking at the real home directory
[10:47] <axw> that's a good point
[10:48] <jam> standup
[10:48] <jam> fwereade: ^^
[11:26] <mattyw> fwereade, hi there - the id meeting - I could move it one hour earlier - would that be better?
[11:32] <mgz> wallyworld: lp:~gz/juju-core/1.16_ssl_verification_bootstrap_state_1268913 as an idea. would really like to write a test for it, but have few ideas.
[11:32] <natefinch> jam, rogpeppe: this might be the problem: 6.8G	./test-mgo172948267
[11:32] <rogpeppe> natefinch: probably
[11:32] <rogpeppe> natefinch: but why is it so big?
[11:32] <wallyworld_> mgz: let's see if it works first :-)
[11:33] <natefinch> rogpeppe: I don't know. I'm going to try passing --oplog-size and see if I can trim it down. The help says the oplog defaults to 5% of disk space
[11:33] <jam> rogpeppe, natefinch: note that I cleaned all of /tmp before we restarted the tarmac bot a couple of hours ago, so that is a 'new' run
[11:33] <jam> natefinch: did you submit that one yourself?
[11:33] <rogpeppe> natefinch: oh wow
[11:33] <natefinch> jam: that's on my local machine
[11:33] <rogpeppe> natefinch: perhaps we could make it zero...
[11:33] <jam> rogpeppe: natefinch: mongo defaults are very much about "this will be a mongo machine"
[11:33] <jam> which is very much against a test suite :)
[11:34] <jam> natefinch: ah, good. I just checked the bot and I had swapped the "used" and "available". The bot currently has 6.9GB free
[11:35] <natefinch> jam: there's no reason why that can't be plenty.  I'll do some tests locally to see if the op log thing helps.
[11:38] <natefinch> jam, rogpeppe: that did it.  oplogSize 100   (100MB) gets the directories down to 387M on my machine
[11:39] <rogpeppe> natefinch: cool
[11:39] <jam> natefinch: even 100 is rather large for the test suite. I don't know what we want in practice
[11:39] <rogpeppe> natefinch: even 10 would seem like plenty
[11:39] <jam> my understanding is you can't resize it
[11:39] <natefinch> I'll try 10 and see if anything blows up
[11:40] <jam> "The oplog exists internally as a capped collection, so you cannot modify its size in the course of normal operations. "
[11:40] <jam> http://docs.mongodb.org/manual/tutorial/change-oplog-size/
[11:40] <jam> so you can restart mongo to change the size (with some real hacks from what I can see)
[10:40] <jam> but you can't just adjust it on the fly
[11:41] <jam> anyway, 10MB should work, but will mean we have to do full syncs more often if a slave gets out of date
[11:41] <jam> (more entries in the oplog than it can keep up with)
[11:41] <natefinch> this is just for the tests.... and it seems to run fine with 10
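For reference, the setting natefinch is experimenting with corresponds to starting the test mongod roughly like this (the dbpath and port are illustrative; `--oplogSize` is in megabytes, and per the mongodb doc jam linked, it cannot be resized later without a restart and some manual surgery):

```shell
# Throwaway mongod with a small fixed-size oplog for a test suite
mkdir -p /tmp/test-mongo
mongod --dbpath /tmp/test-mongo \
       --replSet rs0 \
       --oplogSize 10 \
       --port 37017
```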
[12:03] <natefinch> I feel somewhat morbid with all these error messages about waiting for sockets to die
[12:06] <mgz> natefinch: the poor sockets... :)
[12:18] <nate_finch> oh USB ethernet adapter.... why do you hate me so?
[12:44] <frankban> hi juju-core devs, trying to bootstrap an azure env i get this: http://pastebin.ubuntu.com/6750245/ . It might be related to bug 1250007
[12:44] <_mup_> Bug #1250007: Bootstrapping azure causes memory to fill <amd64> <apport-bug> <saucy> <juju-core:Incomplete> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1250007>
[12:55] <mgz> frankban: looks like it could be, a recursive error or something
[12:56] <mgz> frankban: you have the stack there, so add some printing and de-incomplete the bug
[12:56] <mgz> hm, actually writing config, not printing an error
[12:57] <frankban> mgz: an encoding error writing the jenv?
[12:57] <mgz> frankban: add some debug stuff on top of a local 1.16.5 and find out
[13:18] <natefinch> mgz: wonder if azure is bootstrapping a machine that's too small?  512mb RAM or something?
[13:22] <mgz> natefinch: the OOM is on the local machine, no?
[13:22] <natefinch> mgz: duh.  yeah.  That's weird
[13:24] <mgz> as frankban can reproduce it, should be pretty trivial for him to narrow down the issue
[13:26] <frankban> mgz: it seems to affect trunk too: data, err := goyaml.Marshal(info) in configstore/disk/Write seems to never return
[13:26] <dimitern> fwereade, ping
[13:27] <mgz> frankban: good, being a branch-only issue would be more annoying
[13:31] <dimitern> fwereade, sorry, never mind
[14:00] <frankban> mgz: it seems to be a failure marshalling the azure management-certificate
[14:01] <dimitern>  rogpeppe, jam, fwereade: a quick review? https://codereview.appspot.com/52050043 fixes bug 1268471
[14:01] <_mup_> Bug #1268471: cache bootstrap address <bootstrap> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1268471>
[14:01] <rogpeppe> dimitern: will look in a mo
[14:01] <dimitern> rogpeppe, ta
[14:12] <frankban> mgz, so maybe the bug is in the documentation: https://juju.ubuntu.com/docs/config-azure.html : the management-certificate-path is the pem file, not the cer one.
[14:12] <frankban> mgz: the bootstrap node has been successfully created, but status fails: 2014-01-14 14:10:30 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 137.117.132.43:37017: connection refused
[14:13] <frankban> mgz: and now it works... it has taken several minutes. thanks for your help mgz
[14:13] <mgz> frankban: presumably because the client OOMed before actually finishing up everything
[14:14] <mgz> or... okay, what did you change?
[14:16] <frankban> mgz: nothing, it just takes a lot more time than I was used to with ec2. anyway, the problems I see here are two: 1) the documentation is ambiguous and 2) juju explodes very badly if you set up your azure environment with the wrong certificate file
[14:17] <mgz> okay, so what did you actually change? the management-certificate setting?
[14:17] <mgz> having any value that OOMs on serialisation just needs fixing
[14:18] <frankban> mgz: the management-certificate-path setting
[14:18] <mgz> frankban: can you record that in the bug?
[14:18] <frankban> mgz: sure
[14:19] <sinzui> hi natefinch. I have some go+win questions that you might help me answer
[14:22] <natefinch> sinzui: sure
[14:22] <rogpeppe> dimitern: argh, i've just discovered that this CL, which should have gone in a month ago, never made it in: https://codereview.appspot.com/37650048/
[14:23] <sinzui> natefinch, I have setup a windows instance to build juju and the installer. There is a arch mismatch though
[14:23] <dimitern> rogpeppe, looking
[14:23] <dimitern> rogpeppe, ah yes
[14:23] <rogpeppe> dimitern: i'm sorry, it probably means that the logic in your branch will need looking at again, but i'd like to get it in, if that's ok
[14:23] <sinzui> natefinch, do we need 386? If so can I choose go 1.2 or go 1.1rc3?
[14:24] <smoser> hey. is streams.canonical.com supposed to have some data ?
[14:24] <sinzui> smoser, not yet
[14:24] <dimitern> rogpeppe, get it in, I'll merge the changes and fix mine
[14:24] <rogpeppe> dimitern: thanks.
[14:24] <sinzui> smoser, http://streams.canonical.com/juju will have something soon though
[14:24] <rogpeppe> dimitern: i was looking at the code and thinking "i'm sure i made this a bit simpler"..
[14:24] <natefinch> sinzui: we build 386 for go because it's compatible with both x86 and x64  but the OS bitness doesn't matter
[14:24] <dimitern> rogpeppe,  :)
[14:25] <natefinch> sinzui: 1.2 should be fine
[14:25] <sinzui> fab, thank you natefinch.
[14:25] <smoser> k. thanks, sinzui . jamespage ^.
[14:25] <natefinch> sinzui: np
[14:25] <smoser> jamespage noticed that it was being hit from his juju.
[14:26] <jamespage> sinzui, yeah - I was trying to deploy a saucy charm under the local provider (running on trusty)
[14:26] <jamespage> #fail
[14:27] <sinzui> jamespage, 1.16.x always checks streams, then fails over to aws. But also note that many charms won't deploy on saucy because they rely on a package from a ppa that doesn't exist.
[14:27] <jamespage> sinzui, thats 1.17
[14:27] <jamespage> sinzui, this charm should work on saucy
[14:27] <frankban> mgz: commented on bug 1250007
[14:27] <_mup_> Bug #1250007: Bootstrapping azure causes memory to fill <amd64> <apport-bug> <saucy> <juju-core:Incomplete> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1250007>
[14:28] <sinzui> jamespage, you can always set the tools-metadata-url to the location you expect tools to be found
[14:28] <sinzui> jamespage, which cloud? your own?
[14:28] <jamespage> sinzui, local provider so yes
[14:29] <dimitern> rogpeppe, but you can review mine anyway, at least as it is now maybe?
[14:29] <rogpeppe> dimitern: yeah, i will
[15:24] <rogpeppe> dimitern: i am still looking at your CL BTW, and trying to work out how to avoid apiConfigConnect calling prepareAPIInfo. i'm thinking that it's perhaps not quite right that prepareAPIInfo calls Environ.StateInfo, but i haven't quite worked out what it *should* do
[15:27] <dimitern> rogpeppe, why avoid calling it?
[15:27] <rogpeppe> dimitern: firstly, because it's really slow
[15:28] <rogpeppe> dimitern: secondly, because if we've got an API connection, we can get the most up to date API server addresses by asking the API
[15:28] <dimitern> rogpeppe, well, apiConfigConnect calls NewAPIConn, which does the same
[15:29] <rogpeppe> dimitern: your code is invoking Environ.StateInfo twice, AFAICS
[15:30] <dimitern> rogpeppe, yes
[15:30] <dimitern> rogpeppe, but it's not really slow if we have the cache
[15:30] <dimitern> rogpeppe, it's slow only once initially
[15:30] <rogpeppe> dimitern: isn't this happening when we *don't* have the cache?
[15:30] <dimitern> rogpeppe, I was thinking how to avoid calling it twice but couldn't find a way
[15:31] <rogpeppe> dimitern: i don't see why apiConfigConnect needs to call NewAPIConn at all
[15:31] <rogpeppe> dimitern: can't it just call apiOpen directly?
[15:32] <rogpeppe> dimitern: we're discarding the APIConn type anyway
[15:33] <dimitern> rogpeppe, ah, that's a good point
[15:35] <rogpeppe> dimitern: that old branch of mine has landed, BTW
[15:36] <dimitern> rogpeppe, ok, will merge mine
[15:40] <dimitern> another small review anyone? https://codereview.appspot.com/52130043/ - fixes bug 1259925
[15:40] <_mup_> Bug #1259925: juju destroy-environment does not delete the local charm cache <destroy-environment> <local-provider> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1259925>
[15:43] <mgz> dimitern: do you need the prereq reviewed first?
[15:43] <dimitern> mgz, they're not really related
[15:43] <dimitern> mgz, just in the same pipeline, and rogpeppe is already reviewing the prereq
[15:43] <mgz> k
[15:43] <mgz> I shall assume sanity then
[15:44] <dimitern> :) ta
[15:48] <mgz> dimitern: revie
[15:48] <mgz> -wed
[15:48] <dimitern> mgz, cheers!
[15:49] <rogpeppe> dimitern: reviewed
[15:49] <dimitern> rogpeppe, thanks
[15:51] <natefinch> rogpeppe: are there tests besides the ones involving git that *always* fail on trusty?
[15:52] <rogpeppe> natefinch: i don't *think* so, but i'll just check again.
[16:13] <natefinch> rogpeppe: FYI, QA just ran the test suite on trusty and it passed, so you don't have to bend over backwards trying to figure out if it's possible to pass on trusty
[16:25] <rogpeppe> natefinch: i'm running the test suite continuously now. so far i've had 2/2 failures
[16:26] <natefinch> rogpeppe: that's about the ratio I get on saucy :/
[16:26] <rogpeppe> natefinch: one with the juju package failing with "Waiting for sockets to die", the other with that and minunits tests failing with "no reachable servers"
[16:27] <natefinch> rogpeppe: yeah, I get those fairly often
[16:48] <marcoceppi> Hey guys, maas question, where does juju sync-tools put the files on the maas master?
[16:49] <mgz> marcoceppi: in the filestorage thing, which I think boils down to postgres blobs
[16:50] <marcoceppi> mgz: ack, thanks!
[16:50] <mgz> you can get at them using the maas cli thing... I think the bug with that got fixed
[17:16] <rogpeppe> natefinch: 6/6 test failures so far
[17:16] <rogpeppe> natefinch: test run times varying between 9 and 20 minutes
[17:17] <natefinch> rogpeppe:  yeesh
[17:19] <TheMue> Tschakka, it passes. *phew* Some more details tomorrow morning and I can push it again. *happy*
[17:19] <rogpeppe> natefinch: the juju package fails every time. other packages i've seen fail: worker/minunitsworker, state/apiserver/upgrader, worker/provisioner
[17:30] <arges> hi. Are there simple instructions for 'seeding' a juju maas provider with juju tools? I'm imaginging first i need to mirror the tools directory, then 'juju sync-tools --source <path to tools>'. is there a better way to do this? and how do I mirror the tools easily?
[17:32] <natefinch>  juju bootstrap --upload-tools should work: http://maas.ubuntu.com/docs/juju-quick-start.html
[17:32] <natefinch> arges: ^
[17:33] <arges> natefinch: well what i'm asking is, imagine i'm using juju from a machine that has s3 access blocked. how would i get tools on that machine so I can use juju
[17:33] <arges> and it's a maas environment
[17:34] <mgz> args, no there should be simple instructions, but that sort of thing with sync-tools is the right idea
[17:35] <mgz> *arges
[17:35] <mgz> **kwarges
[17:35] <arges> heh
[17:35] <natefinch> lol
[17:35] <arges> mgz: now the question is. how do I easily mirror juju tools
[17:36] <arges> wget -m on the s3 url doesn't work. and I can download specific files, but i don't want to miss anything
[17:36] <arges> i also thought of 'juju sync-tools -e lxc' and downloading tools locally, then syncing that to maas (which seems like a hack)
[17:39] <mgz> arges: I think --local-dir, then rsync or whatever that to somewhere with maas access, then --source from there should work... but the streams.canonical.com being broken bug prevents me from actually testing that
[17:39] <arges> mgz: --local-dir is an argument to which command?
[17:39] <mgz> both to sync-tools
[17:39] <arges> oh ok
[17:40] <arges> mgz: i think --local-dir is in newer version of juju
[17:41] <arges> its not in 1.16.5
[17:41] <mgz> arges: that seems likely then...
[17:44] <mgz> your trick with the local provider seems worth trying, or just getting the tar bits you need from the ec2 bucket and generating the metadata files, which then requires several more commands and is also not well documented, and probably only nice and usable on trunk...
[17:44] <mgz> this stuff has been semi-broken for too long
[17:44] <arges> mgz: yea the local provider trick works for now. I'll check out trunk when I have time
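mgz's two-hop workaround, sketched out for completeness; it assumes a juju new enough to have --local-dir (1.17/trunk, not 1.16.5, as arges found), and the host name and paths are made up for illustration:

```shell
# On a machine with internet access: mirror the tools to a local directory
juju sync-tools --local-dir /tmp/juju-tools

# Carry the mirror across to the restricted network
rsync -a /tmp/juju-tools/ user@maas-host:/srv/juju-tools/

# From a machine that can reach MAAS: upload tools from the mirror
juju sync-tools --source /srv/juju-tools
```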
[18:20] <rogpeppe> natefinch: 10/10 failures so far
[18:27] <natefinch> rogpeppe: dang.  I'm 100% fail as well..... trying to figure out what's going on.  Given that a lot of it seems to have to do with the test mongo server, I wonder if I managed to break it in some odd way
[18:30] <natefinch> rogpeppe: seems like it's probably one specific problem that is cascading across tests
[18:31] <rogpeppe> natefinch: yeah, i think so
[18:31] <rogpeppe> natefinch: i suspect some kind of race in mgo, but i dunno really
[18:31] <rogpeppe> natefinch: i need to muster up the energy to dive in again, now that i can repro it on my machine for the first time reliably
[18:42] <rogpeppe> i'm done for the day though
[18:42] <rogpeppe> g'night all!
[18:47] <natefinch> heh, running go test -race somehow froze my computer
[18:49] <natefinch> ....twice
[18:49] <natefinch> maybe I'll stop doing that
[18:57] <sinzui> ha ha. The windows client for 1.17.1 gives up on the public address and attempts to connect on the private address
[18:59] <natefinch> good luck with that
[19:03] <sinzui> I think this is another day to hit the sauce
[19:03] <natefinch> hot sauce?
[19:03] <sinzui> natefinch, beer
[19:03] <sinzui> Before developing in Windows, I drank 1 beer a month
[19:04] <sinzui> Except for yesterday's hacking via python and ssh, I drink to numb the pain
[19:04] <natefinch> sinzui: yeah, windows is pretty terrible.
[19:05] <natefinch> So, I went to upgrade to trusty, and it runs through the whole huge thing.  At the end it says:
[19:05] <natefinch> Upgrade complete
[19:05] <natefinch> The upgrade has completed but there were errors during the upgrade
[19:05] <natefinch> process.
[19:06] <sinzui> natefinch, The good news is that CI can build the windows installer along with the release tarball. Running the client tests looks like an exercise in adapting bash and *nix-isms
[19:06] <sinzui> natefinch, don't remove any packages
[19:06] <natefinch> ..... so... am I on trusty?  did it work?  Were the errors fatal?  Do I need to do something to fix them?
[19:07] <sinzui> natefinch, I have found that re-enabling the disabled archives (most still using saucy), then updating again fixes the issues
[19:07] <natefinch> sinzui: cool, I'll give that a try.  Still don't get why they think it's ok to just disable a bunch of my archives :/
[19:09] <sinzui> natefinch, yeah, that is a pain. They do it to ensure only tested/compatible packages are installed, but as developers, we are crippled. I once let upgrade remove the packages that came from other archives and that caused a chain of breakages.
[19:09]  * sinzui wont do that again
[19:11] <natefinch> brb rebooting
[19:12] <marcoceppi> did you guys change the default behavior of destroy-environment from using the -e flag to a list of environments as parameters? Or is that how it's always been?
[19:13] <natefinch> well great, now I can't open "software & updates" :/
[19:51] <_thumper_> morning
[19:53]  * thumper sighs
[19:53] <thumper> big email backlog...
[20:25] <natefinch> sinzui: lsb_release -a says trusty, but the "About This Computer" window still says 13.04.... is that normal for being on the release branch, or did my mucked up upgrade screw things up?
[20:25] <natefinch> sinzui: s/release branch/development branch/
[20:25] <sinzui> natefinch, yes
[20:26] <natefinch> sinzui: ok, phew
[20:27] <thumper> natefinch: the about computer package gets updated later
[20:28] <thumper> sometimes the login screen will show old version number too
[20:28] <thumper> until that is updated
[20:28] <natefinch> thumper: yeah, it does.  boo.
[20:28] <thumper> natefinch: tests passing?
[20:29] <natefinch> .....and that answers the question of whether go test -race still crashes my laptop
[20:32] <thumper> haha
[21:29] <sinzui> natefinch, do you have any insights into this error? I can run the script from powershell over rdp, but ssh errors on the bootstrap command (but not the version command) http://pastebin.ubuntu.com/6752700/
[21:34] <natefinch> sinzui: looks like it's looking for the juju home directory, which doesn't exist for some reason
[21:35] <sinzui> JUJU_HOME is defined (at least is in powershell)
[21:35]  * sinzui looks in ssh env
[21:35] <natefinch> C:\Users\Administrator\AppData\Roaming\Juju  would be the default
[21:37] <sinzui> natefinch, thanks for the clue. It isn't defined when I execute the script via ssh
[21:37] <natefinch> sinzui: welcome
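sinzui's symptom is the classic interactive-vs-non-interactive environment trap: a variable set in a PowerShell profile is not inherited by a command run over ssh. A sketch of the check and workaround, assuming the ssh session lands in a POSIX-style shell (e.g. cygwin) and using the default path natefinch quoted:

```shell
# See what the non-interactive ssh session actually inherited
printf 'JUJU_HOME=%s\n' "${JUJU_HOME:-<unset>}"

# If unset, define it for this session before invoking juju
export JUJU_HOME='C:\Users\Administrator\AppData\Roaming\Juju'
printf 'JUJU_HOME=%s\n' "$JUJU_HOME"
# ...then run: juju bootstrap
```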
[21:48] <natefinch> thumper, wallyworld_:  This test consistently fails for me: TestStartInstanceWithDefaultSecurityGroup  (in provider/openstack/live_test.go)
[21:57] <thumper> natefinch: is this run normally with make check?
[21:57] <thumper> natefinch: if so, just passed for m,e
[21:58] <natefinch> thumper: not sure what you mean about make check, I just did go test in that directory (also fails when running the full test suite)
[21:59] <thumper> ok, works for me then
[22:03] <natefinch> thumper: weird.  Fails for me every single time
[22:03] <natefinch> live_test.go:248:
[22:03] <natefinch>     c.Assert(defaultGroupFound, gc.Equals, useDefault)
[22:03] <natefinch> ... obtained bool = false
[22:03] <natefinch> ... expected bool = true
[22:04] <natefinch> gotta go
[22:58] <davecheney> sinzui: hold on, imma coming