[00:24] <thumper> wallyworld: checked the bot recently?
[00:25] <wallyworld> thumper: yep and it's farked. i deleted 100000000 old test processes and restarted and it's been running tests for the 1.17.5 upgrade branch for ages now
[00:25] <wallyworld> so it's still broken
[00:25] <thumper> :-(
[00:26] <wallyworld> thumper: my next step was to beg in #is for more resources to be allocated to that instance?
[00:26] <thumper> wallyworld: we can try, right?
[00:26] <wallyworld> yeah
[01:31] <waigani> thumper: are we standing up?
[01:31] <thumper> yes
[01:31] <thumper> wallyworld: coming?
[09:14] <mgz> morning all
[09:17] <dimitern> morning from blue fin ;)
[10:10] <dimitern> natefinch, \o
[10:10] <natefinch> fwereade, dimitern, mgz, rogpeppe: morning
[10:10] <dimitern> natefinch, we have a camera setup for you
[10:11] <natefinch> dimitern: that is awesome, thanks guys.
[10:12] <mgz> hey nate!
[10:14] <dimitern> natefinch, i've sent you a g+ link as a pm
[10:14] <natefinch> dimitern: oh, thanks, sorry
[10:16] <natefinch> dimitern: one sec, having trouble getting G+ to use my canonical account instead of my gmail
[10:17] <dimitern> natefinch, sure, join when you can, just to check it out and say hi, we're about to have a little break
[10:17] <natefinch> dimitern:  ok
[10:20] <dimitern> natefinch, you're frozen
[10:20] <dimitern> natefinch, can you hear us well?
[10:21] <natefinch> it froze for a while seems better now
[10:21] <natefinch> the sound is pretty soft and somewhat choppy
[10:22] <natefinch> dimitern: seems the connection is not very good
[10:23] <natefinch> dim that's a bit better
[10:23] <natefinch> dimitern: ^
[10:23] <mgz> how mean nate
[10:24] <natefinch> mgz: ug, stupid hangouts
[10:32] <mgz> I have a coffee grinding story for you nate
[10:38] <natefinch> mgz: heh, a story? :)
[10:39] <rogpeppe> natefinch: lp:~rogpeppe/juju-core/507-peergrouper-integration
[10:44] <rogpeppe> natefinch: in worker/peergrouper, go test -gocheck.vv -gocheck.f TestEnsureAvailability
[10:50] <wallyworld> jam: mgz: landing bot is broken. i found it was cpu starved and on aloja, where it was running, there was a high load average. i tried resizing the landing instance to m1.large but that failed. so now the only option is to redeploy with a constraint to force it to use a larger instance. is the landing bot fully charmed or do we still have manual steps?
[10:53] <mgz> yeah, I'm not sure that resize attempt was wise
[10:53] <mgz> I tried sorting it at the end of friday, but the machine was pretty borked
[10:54] <mgz> we probably need to redeploy, which does have several manual steps
[10:57] <Beret> hi
[10:57] <Beret> is there an add-machine API call?
[10:57] <Beret> note I said add machine, not add machines
[10:57] <natefinch> Beret: not sure what the difference is?
[10:57] <wallyworld> mgz: yeah, the resize was a last resort to avoid redeploying
[10:58] <wallyworld> mgz: today, it took about 12 hours for a test run which eventually failed
[10:58] <wallyworld> totally cpu bound
[10:58] <wallyworld> i reckon it needs to be on prodstack
[11:02] <dimitern> wallyworld, so you're sprinting as well this week?
[11:02] <wallyworld> dimitern: no?
[11:03] <mgz> he's just doing the normal wallyworld week, which is sprinting by everyone else's standard
[11:05] <dimitern> ah :)
[11:06] <dimitern> wallyworld, we're told au/nz guys are doing a sprint at the same time
[11:06] <wallyworld> dimitern: nah, we did get together a few weeks back though
[11:07] <wallyworld> i would have preferred london to dunedin :-)
[11:09] <wallyworld> the maas guys are in brisbane this week also
[11:17] <mattyw> dimitern, ping?
[11:20] <natefinch> mattyw: he's in a sprint, only partly paying attention
[11:20] <mattyw> natefinch, ah ok, thanks
[13:46] <mgz> natefinch: we're back! will get on the hangout again shortly, yell when you're around
[14:03] <frankban> hi core devs: am I right assuming the API ServiceDeploy, when called without a ToMachineSpec, uses the following unit placement policy? if clean and empty machines/containers are available, use those, otherwise create a new top level machine for each unit
[14:13] <natefinch> frankban: should be, yes
[14:15] <frankban> natefinch: thanks
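The placement policy frankban describes above (use clean, empty machines or containers if available, otherwise create a new top-level machine per unit) can be sketched as a toy Go function. This is an illustration only, assuming invented `Machine` fields; it is not juju-core's real state API:

```go
package main

import "fmt"

// Machine is a simplified stand-in for a provisioned machine
// (hypothetical type; not juju-core's real state.Machine).
type Machine struct {
	ID    string
	Clean bool // no unit has ever been assigned to it
	Empty bool // hosts no units or containers right now
}

// placeUnits assigns each of n units to the next clean, empty machine
// if one is available, and otherwise allocates a new top-level machine.
func placeUnits(n int, machines []Machine) []string {
	var assigned []string
	next := 0
	for i := 0; i < n; i++ {
		placed := false
		for ; next < len(machines); next++ {
			if m := machines[next]; m.Clean && m.Empty {
				assigned = append(assigned, m.ID)
				next++
				placed = true
				break
			}
		}
		if !placed {
			// No reusable machine left: provision a fresh one.
			assigned = append(assigned, fmt.Sprintf("new-machine-%d", i))
		}
	}
	return assigned
}

func main() {
	machines := []Machine{
		{ID: "1", Clean: true, Empty: true},
		{ID: "2", Clean: false, Empty: true}, // dirty: skipped
	}
	fmt.Println(placeUnits(3, machines)) // [1 new-machine-1 new-machine-2]
}
```

With `ToMachineSpec` supplied, this policy is bypassed and the unit goes to the named machine or container instead.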
[14:52] <rogpeppe> natefinch: a little review for you :-) https://codereview.appspot.com/70770043
[14:52] <rogpeppe> natefinch: hmm, i think you're frozen
[14:53] <rogpeppe> natefinch: oh, no, just immobile :-)
[14:57] <natefinch> rogpeppe: haha
[14:57] <rogpeppe> natefinch: took you a while to see that :)
[14:58] <rogpeppe> natefinch: i'm serious though - a review would be appreciated
[14:58] <natefinch> rogpeppe: sure thing
[15:01] <natefinch> rogpeppe: why is errgo github.com/juju/errgo/errors and not just github.com/juju/errgo ?  Seems like errgo would never clash with anything, and it makes it more clear that it's errgo, not the std errors package
[15:02] <rogpeppe> natefinch: because i think "errors" is a better name for the identifier, and it's a strict superset of the standard errors package
[15:09] <rogpeppe> natefinch: links to your reviews?
[15:09] <natefinch> rogpeppe: https://codereview.appspot.com/69600043/
[15:53] <tasdomas`> I'm having problems constructing a manual environment from lxc containers
[15:53] <tasdomas`> the error I get is:
[15:54] <tasdomas`> Fetching tools: curl -o $bin/tools.tar.gz 'http://10.0.3.40:8040/tools/releases/juju-1.17.4.1-saucy-amd64.tgz'
[15:54] <tasdomas`> 2014-03-03 15:50:01 ERROR juju.environs.manual provisioner.go:78 provisioning failed, removing machine 5: rc: 1
[15:54] <tasdomas`> 2014-03-03 15:50:01 ERROR juju.cmd supercommand.go:293 rc: 1
[16:41] <mgz> sinzui: thanks for resolving bug 1286885 - it is indeed deliberate to restore 1.16 compat
[16:41] <_mup_> Bug #1286885: juju 1.17.3 incompatible with tools in canonistack bucket <compatibility> <regression> <juju-core:Won't Fix> <https://launchpad.net/bugs/1286885>
[16:42] <sinzui> mgz, no
[16:42] <sinzui> mgz, I mean NP
[18:07] <marcoceppi> uh, so how do you remove a subordinate?
[19:27] <coreycb> has anyone seen this with juju-core 1.17.4?  https://pastebin.canonical.com/105803/
[20:03] <rick_h_> coreycb: is that with lxc?
[20:05] <coreycb> rick_h_, no it's using kvm
[20:05] <rick_h_> coreycb: ok, but local provider? The local provider doesn't support debug-log yet. I think that's still in progress
[20:14] <coreycb> rick_h_, I'm using openstack
[20:14] <coreycb> rick_h_, I think that means I'm not using a local provider
[20:14] <rick_h_> coreycb: ah ok then not sure.
[20:15] <coreycb> rick_h_, btw this worked last week on 1.17.3
[20:16] <rick_h_> coreycb: k, yea sorry. Can you file a bug with your error and environment details?
[20:17] <coreycb> rick_h_, sure and not a problem.  do you know how I can get back to 1.17.3?  it doesn't appear to be available in ppa:juju/devel
[20:18] <rick_h_> coreycb: no, I'm not sure. sinzui is that possible without going compiling it custom?
[20:20] <sinzui> coreycb, I think the error means the log is not where juju expects it to be. I understand the machine can be configured to put the logs elsewhere...but I think you would remember doing that
[20:22] <sinzui> coreycb, is the env you're talking to 1.17.3? If so I think you need to exec "juju upgrade-juju" to bring the server and agents in line with your client
[20:23] <sinzui> coreycb, 1.17.[0-3] were not backwards compatible with 1.16.x; the fix for 1.17.4 made it not work so well with previous devel versions of Juju.
[20:26] <coreycb> sinzui, good question.. I'm not sure what the server is at
[20:27] <coreycb> sinzui, juju upgrade-juju didn't change anything, but I'll look into what the server is at
[20:29] <sinzui> coreycb, juju status will report the server and agent machine versions...and since 1.17.4 shows empty info for 1.17.x, you will know you will need to upgrade-juju
[20:30] <coreycb> sinzui, thanks.  juju status shows 1.17.3.
[20:31] <sinzui> and you are using juju 1.17.4 on your local machine?
[20:31] <coreycb> sinzui, yes
[20:32] <sinzui> coreycb, I am surprised you can get that much info out of status. If you upgrade, you will get better debug and logging
[20:33] <coreycb> sinzui, ok.  I'll try to get them both to the same version.  I think I need to do more than juju upgrade-juju to upgrade.
[20:35] <sinzui> coreycb, 1.17.x looks for public versions of the tools...
[20:36] <sinzui> coreycb, oh, I didn't see streams.canonical.com get the new tools. you may be right. I will look into this now
[20:37] <coreycb> sinzui, appreciate it
[20:38] <bodie_> Go newbie here, any recommendations for getting my feet wet?
[20:44] <bodie_> in terms of bugfixes or tasks that need taking a whack at
[21:00] <marcoceppi> bodie_: you can find a list of the bugs here: https://bugs.launchpad.net/juju-core/
[21:00] <marcoceppi> bodie_: and "small" bugs are typically tagged "papercut" https://bugs.launchpad.net/juju-core/+bugs?field.tag=papercut
[21:06] <bodie_> Okay, cool.
[21:07] <thumper> o/ bodie_ marcoceppi
[21:08] <thumper> good luck finding a simple bug
[21:08] <thumper> juju is quite a complicated system
[21:09] <bodie_> heh... I noticed that.  I'm taking a look over a few things on Github before taking the plunge, I think.  :)
[21:14] <thumper> bodie_: what is your background?
[21:15] <sinzui> coreycb, upgrade-juju will work with an open network now. streams.canonical.com has the new tools
[21:16] <bodie_> undergrad CS analysis, math, etc, graduated last year, self taught in a bunch of languages.  Good w/ python, perl, C, java
[21:16] <bodie_> I know my way around linux and such
[21:17]  * thumper nods
[21:17]  * thumper takes a quick look to see if he can find a simple bug
[21:19] <bodie_> Juju, because I was actually thinking about making a similar service and discovered this
[21:19] <thumper> bodie_: nice
[21:20] <bodie_> yeah, the one thing I see as problematic here is being tied to openstack
[21:20] <bodie_> if i read it right
[21:20] <thumper> bodie_: I have found that often a good place to start with new projects is to look at low priority bugs (which the main dev team seldom get to) or tech-debt type bugs
[21:20] <thumper> ?!
[21:20] <thumper> juju isn't tied to open stack
[21:20] <bodie_> ahhh
[21:21] <thumper> open stack is just one provider
[21:21] <bodie_> I must have misread the site then
[21:21] <bodie_> i was at digitalocean for a while and a couple of us were talking about making a platform agnostic cluster deploy tool
[21:21] <bloodearnest> bodie_: I think you just found it :)
[21:21] <thumper> hazmat has a plugin for dealing with digital ocean
[21:21] <bodie_> i'll have to get my head adjusted and re-read ;)
[21:21] <thumper> also first class ec2, azure, soon joyent
[21:21] <bodie_> nice!
[21:21] <thumper> also maas, and a local provider
[21:21] <thumper> local uses LXC containers on the host machine
[21:22] <thumper> we also have manual provisioning
[21:22] <thumper> to work with any machine you happen to have
[21:22] <thumper> bodie_:  this could be a simple(ish) intro bug https://bugs.launchpad.net/juju-core/+bug/1197365
[21:22] <_mup_> Bug #1197365: instance.Instance.WaitDNSName() no longer needed <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1197365>
[21:23] <bodie_> right on
[21:23] <thumper> bodie_: or this one: https://bugs.launchpad.net/juju-core/+bug/1217868
[21:23] <_mup_> Bug #1217868: move from state.D to bson.D <hours> <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1217868>
[21:23] <thumper> both are easy(ish) tasks that are just refactoring
[21:24] <thumper> and noone else is looking at them
[21:27] <bodie_> good deal, I'll take a look
[21:30] <hazmat> bodie_, http://github.com/kapilt/juju-digitalocean
[21:36] <coreycb> sinzui, I should be able to get that with ppa:juju/devel
[21:36] <coreycb> right?
[21:36] <coreycb> sinzui, upgrade still isn't working for some reason
[21:37] <sinzui> coreycb, no. Ubuntu and the PPA provide command line and local deploy tools. They provide what goes on your computer.
[21:38] <sinzui> coreycb, "juju --show-log upgrade-juju" tells the server and agents to find newer tools in the the current cloud or on streams.canonical.com
[21:39] <sinzui> coreycb, you can also use your local installation to put the tools in the environment...
[21:39] <sinzui> coreycb, "juju --show-log upgrade-juju --upload-tools"
[21:39] <sinzui> coreycb, while the latter always works, I prefer letting juju choose the correct origin so that I never think I have to provide more tools
[21:40] <coreycb> sinzui, ok so the good news is that the latter worked.   but the former got "INFO juju upgradejuju.go:120 no upgrades available"
[21:42] <sinzui> coreycb, okay. That means juju isn't finding tools
[21:42] <bloodearnest> hazmat: about to have a play with jlxc
[21:42] <sinzui> in the list of places to work
[21:43] <hazmat> bloodearnest, that one isn't end user polished.. lots of undocumented setup.. i'm doing a talk on thursday.. i'm hoping to polish it up before then
[21:43] <sinzui> coreycb, you are in a private cloud? Someone, perhaps you, placed juju-*.tgz into a container?
[21:44] <bloodearnest> hazmat: sure, np, I'm just gonna see if I can get it working, if not no worries
[21:44] <hazmat> bloodearnest, actually with the aufs option, its reasonably straight-forward.. i've had some minor issues with aufs though and some software.
[21:44] <hazmat> nfs in particular
[21:45] <coreycb> sinzui, it's private.. this is on the server team's openstack on openstack deployment (serverstack)
[21:46] <coreycb> sinzui, but anyway not sure.. I'll have to check into that
[21:47] <hazmat> sinzui, if you don't provide tools you have no idea what tools juju will select, and no dry-run mode.. that's terrifying for prod
[21:47] <sinzui> coreycb, okay. they probably seeded the tools. does the cloud have an open network? eg, can you deploy any charm without needing to fork and repackage its deps
[21:47] <coreycb> sinzui, hmm, now juju destroy-environment gets "error: no environment specified"
[21:47] <coreycb> sinzui, ah, it may not be open
[21:48] <sinzui> coreycb, I would have thought the "--upload-tools" arg would have worked from your 1.17.4 host. The feature works too well
[21:50] <sinzui> coreycb, juju 1.17.x changed the destroy-environment args. You must specify the environment name to be clear you know what you are doing...
[21:50] <sinzui> juju destroy-environment my-env
[22:01] <coreycb> sinzui, I think I'm good to go.  --upload-tools did work, and debug-log works after I successfully upgrade-juju to 1.17.4.  and destroy-environment works as you said.
[22:02] <sinzui> coreycb, great
[22:02] <coreycb> sinzui, oh and there is a juju-1.17.3-precise-amd64.tgz getting pulled down in the bootstrap
[22:02] <coreycb> sinzui, anyway, thanks!
[22:50] <bloodearnest> hazmat: Great success! :)
[22:52] <bloodearnest> hmm, maybe I spoke too early
[22:53] <bloodearnest> but progress, anyway
[23:09] <hazmat> bloodearnest, what's the hangup.. it's pretty instantaneous if it's working.
[23:10] <hazmat> bloodearnest, if you don't have btrfs.. you need a code mod (it's already commented) to enable the aufs support... i should toss that into a cli param.
[23:15] <hazmat> bloodearnest, oh.. that install.sh.. i was going to yank it.. doh.
[23:16] <hazmat> i haven't actually used it, its an old artifact from some previous exploration.