[00:24] wallyworld: checked the bot recently?
[00:25] thumper: yep and it's farked. i deleted 100000000 old test processes and restarted and it's been running tests for the 1.17.5 upgrade branch for ages now
[00:25] so it's still broken
[00:25] :-(
[00:26] thumper: my next step was to beg in #is for more resources to be allocated to that instance
[00:26] wallyworld: we can try, right?
[00:26] yeah
[01:31] thumper: are we standing up?
[01:31] yes
[01:31] wallyworld: coming?
[09:14] morning all
[09:17] morning from blue fin ;)
[10:10] natefinch, \o
[10:10] fwereade, dimitern, mgz, rogpeppe: morning
[10:10] natefinch, we have a camera setup for you
[10:11] dimitern: that is awesome, thanks guys.
[10:12] hey nate!
[10:14] natefinch, i've sent you a g+ link as a pm
[10:14] dimitern: oh, thanks, sorry
[10:16] dimitern: one sec, having trouble getting G+ to use my canonical account instead of my gmail
[10:17] natefinch, sure, join when you can, just to check it out and say hi, we're about to have a little break
[10:17] dimitern: ok
[10:20] natefinch, you're frozen
[10:20] natefinch, can you hear us well?
[10:21] it froze for a while, seems better now
[10:21] the sound is pretty soft and somewhat choppy
[10:22] dimitern: seems the connection is not very good
[10:23] dim that's a bit better
[10:23] dimitern: ^
[10:23] how mean nate
[10:24] mgz: ugh, stupid hangouts
[10:32] I have a coffee grinding story for you nate
=== rogpeppe is now known as rogpeppe1
[10:38] mgz: heh, a story? :)
[10:39] natefinch: lp:~rogpeppe/juju-core/507-peergrouper-integration
[10:44] natefinch: in worker/peergrouper, go test -gocheck.vv -gocheck.f TestEnsureAvailability
[10:50] jam: mgz: landing bot is broken. i found it was cpu starved and on aloja, where it was running, there was a high load average. i tried resizing the landing instance to m1.large but that failed. so now the only option is to redeploy with a constraint to force it to use a larger instance.
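For anyone following along, rogpeppe's suggestion above can be run from a juju-core checkout roughly like this; the `launchpad.net/juju-core` import path is an assumption based on where the project was hosted at the time.

```shell
# Run a single gocheck test verbosely, filtering by test name,
# as suggested above. The checkout path is an assumption; adjust
# to wherever your juju-core branch lives under $GOPATH.
cd "$GOPATH/src/launchpad.net/juju-core/worker/peergrouper"
go test -gocheck.vv -gocheck.f TestEnsureAvailability
```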
is the landing bot fully charmed or do we still have manual steps?
[10:53] yeah, I'm not sure that resize attempt was wise
[10:53] I tried sorting it at the end of friday, but the machine was pretty borked
[10:54] we probably need to redeploy, which does have several manual steps
[10:57] hi
[10:57] is there an add-machine API call?
[10:57] note I said add machine, not add machines
[10:57] Beret: not sure what the difference is?
[10:57] mgz: yeah, the resize was a last resort to avoid redeploying
[10:58] mgz: today, it took about 12 hours for a test run which eventually failed
[10:58] totally cpu bound
[10:58] i reckon it needs to be on prodstack
[11:02] wallyworld, so you're sprinting as well this week?
[11:02] dimitern: no?
[11:03] he's just doing the normal wallyworld week, which is sprinting by everyone else's standard
[11:05] ah :)
[11:06] wallyworld, we're told au/nz guys are doing a sprint at the same time
[11:06] dimitern: nah, we did get together a few weeks back though
[11:07] i would have preferred london to dunedin :-)
[11:09] the maas guys are in brisbane this week also
[11:17] dimitern, ping?
[11:20] mattyw: he's in a sprint, only partly paying attention
[11:20] natefinch, ah ok, thanks
=== psivaa is now known as psivaa-lunch
[13:46] natefinch: we're back! will get on the hangout again shortly, yell when you're around
[14:03] hi core devs: am I right assuming the API ServiceDeploy, when called without a ToMachineSpec, uses the following unit placement policy?
if clean and empty machines/containers are available, use those, otherwise create a new top level machine for each unit
[14:13] frankban: should be, yes
[14:15] natefinch: thanks
=== edu-afk_ is now known as edamato
=== psivaa-lunch is now known as psivaa
[14:52] natefinch: a little review for you :-) https://codereview.appspot.com/70770043
[14:52] natefinch: hmm, i think you're frozen
[14:53] natefinch: oh, no, just immobile :-)
[14:57] rogpeppe: haha
[14:57] natefinch: took you a while to see that :)
[14:58] natefinch: i'm serious though - a review would be appreciated
[14:58] rogpeppe: sure thing
[15:01] rogpeppe: why is errgo github.com/juju/errgo/errors and not just github.com/juju/errgo ? Seems like errgo would never clash with anything, and it makes it more clear that it's errgo, not the std errors package
[15:02] natefinch: because i think "errors" is a better name for the identifier, and it's a strict superset of the standard errors package
[15:09] natefinch: links to your reviews?
[15:09] rogpeppe: https://codereview.appspot.com/69600043/
[15:53] I'm having problems constructing a manual environment from lxc containers
[15:53] the error I get is:
[15:54] Fetching tools: curl -o $bin/tools.tar.gz 'http://10.0.3.40:8040/tools/releases/juju-1.17.4.1-saucy-amd64.tgz'
[15:54] 2014-03-03 15:50:01 ERROR juju.environs.manual provisioner.go:78 provisioning failed, removing machine 5: rc: 1
[15:54] 2014-03-03 15:50:01 ERROR juju.cmd supercommand.go:293 rc: 1
=== tasdomas` is now known as tasdomas
=== edamato is now known as edu-afk
[16:41] sinzui: thanks for resolving bug 1286885 - it is indeed deliberate to restore 1.16 compat
[16:41] <_mup_> Bug #1286885: juju 1.17.3 incompatible with tools in canonistack bucket
[16:42] mgz, no
[16:42] mgz, I mean NP
=== edu-afk is now known as edamato
[18:07] uh, so how do you remove a subordinate?
[19:27] has anyone seen this with juju-core 1.17.4? https://pastebin.canonical.com/105803/
[20:03] coreycb: is that with lxc?
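The placement policy frankban confirms can be seen from the CLI as well; a sketch using the 1.17-era `--to` syntax (`mysql` is just an example charm, not from the conversation):

```shell
# Default placement (no ToMachineSpec): reuse clean, empty machines
# if any exist, otherwise create a new top level machine per unit.
juju deploy mysql -n 2

# Explicit placement: pin the unit to machine 1, or to a new
# lxc container on machine 1.
juju deploy mysql --to 1
juju deploy mysql --to lxc:1
```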
[20:05] rick_h_, no it's using kvm
[20:05] coreycb: ok, but local provider? The local provider doesn't support debug-log yet. I think that's still in progress
[20:14] rick_h_, I'm using openstack
[20:14] rick_h_, I think that means I'm not using a local provider
[20:14] coreycb: ah ok then not sure.
[20:15] rick_h_, btw this worked last week on 1.17.3
[20:16] coreycb: k, yea sorry. Can you file a bug with your error and environment details?
[20:17] rick_h_, sure and not a problem. do you know how I can get back to 1.17.3? it doesn't appear to be available in ppa:juju/devel
[20:18] coreycb: no, I'm not sure. sinzui is that possible without compiling it custom?
[20:20] coreycb, I think the error means the log is not where juju expects it to be. I understand the machine can be configured to put the logs elsewhere... but I think you would remember doing that
[20:22] coreycb, is the env you're talking to 1.17.3? If so I think you need to exec "juju upgrade-juju" to bring the server and agents in line with your client
[20:23] coreycb, 1.17.[0-3] were not backwards compatible with 1.16.x; the fix for 1.17.4 made it not work so well with previous devel versions of Juju.
[20:26] sinzui, good question.. I'm not sure what the server is at
[20:27] sinzui, juju upgrade-juju didn't change anything, but I'll look into what the server is at
[20:29] coreycb, juju status will report the server and agent machine versions... and since 1.17.4 shows empty info for 1.17.x, you will know you need to upgrade-juju
[20:30] sinzui, thanks. juju status shows 1.17.3.
[20:31] and you are using juju 1.17.4 on your local machine?
[20:31] sinzui, yes
[20:32] coreycb, I am surprised you can get that much info out of status. If you upgrade, you will get better debug and logging
[20:33] sinzui, ok. I'll try to get them both to the same version. I think I need to do more than juju upgrade-juju to upgrade.
[20:35] coreycb, 1.17.x looks for public versions of the tools...
[20:36] coreycb, oh, I didn't see streams.canonical.com get the new tools. you may be right. I will look into this now
[20:37] sinzui, appreciate it
[20:38] Go newbie here, any recommendations for getting my feet wet?
[20:44] in terms of bugfixes or tasks that need taking a whack at
[21:00] bodie_: you can find a list of the bugs here: https://bugs.launchpad.net/juju-core/
[21:00] bodie_: and "small" bugs are typically tagged "papercut" https://bugs.launchpad.net/juju-core/+bugs?field.tag=papercut
[21:06] Okay, cool.
[21:07] o/ bodie_ marcoceppi
[21:08] good luck finding a simple bug
[21:08] juju is quite a complicated system
[21:09] heh... I noticed that. I'm taking a look over a few things on Github before taking the plunge, I think. :)
[21:14] bodie_: what is your background?
[21:15] coreycb, juju upgrade-juju will work with an open network now. streams.canonical.com has the new tools
[21:16] undergrad CS analysis, math, etc, graduated last year, self taught in a bunch of languages. Good w/ python, perl, C, java
[21:16] I know my way around linux and such
[21:17] * thumper nods
[21:17] * thumper takes a quick look to see if he can find a simple bug
[21:19] Juju, because I was actually thinking about making a similar service and discovered this
[21:19] bodie_: nice
[21:20] yeah, the one thing I see as problematic here is being tied to openstack
[21:20] if i read it right
[21:20] bodie_: I have found that often a good place to start with new projects is to look at low priority bugs (which the main dev team seldom get to) or tech-debt type bugs
[21:20] ?!
[21:20] juju isn't tied to openstack
[21:20] ahhh
[21:21] openstack is just one provider
[21:21] I must have misread the site then
[21:21] i was at digitalocean for a while and a couple of us were talking about making a platform agnostic cluster deploy tool
[21:21] bodie_: I think you just found it :)
[21:21] hazmat has a plugin for dealing with digital ocean
[21:21] i'll have to get my head adjusted and re-read ;)
[21:21] also first class ec2, azure, soon joyent
[21:21] nice!
[21:21] also maas, and a local provider
[21:21] local uses LXC containers on the host machine
[21:22] we also have manual provisioning
[21:22] to work with any machine you happen to have
[21:22] bodie_: this could be a simple(ish) intro bug https://bugs.launchpad.net/juju-core/+bug/1197365
[21:22] <_mup_> Bug #1197365: instance.Instance.WaitDNSName() no longer needed
[21:23] right on
[21:23] bodie_: or this one: https://bugs.launchpad.net/juju-core/+bug/1217868
[21:23] <_mup_> Bug #1217868: move from state.D to bson.D
[21:23] both are easy(ish) tasks that are just refactoring
[21:24] and no one else is looking at them
[21:27] good deal, I'll take a look
[21:30] bodie_, http://github.com/kapilt/juju-digitalocean
[21:36] sinzui, I should be able to get that with ppa:juju/devel
[21:36] right?
[21:36] sinzui, upgrade still isn't working for some reason
[21:37] coreycb, no. Ubuntu and the PPA provide command line and local deploy tools. They provide what goes on your computer.
[21:38] coreycb, "juju --show-log upgrade-juju" tells the server and agents to find newer tools in the current cloud or on streams.canonical.com
[21:39] coreycb, you can also use your local installation to put the tools in the environment...
[21:39] coreycb, "juju --show-log upgrade-juju --upload-tools"
[21:39] coreycb, while the latter always works, I prefer letting juju choose the correct origin so that I never think I have to provide more tools
[21:40] sinzui, ok so the good news is that the latter worked.
but the former got "INFO juju upgradejuju.go:120 no upgrades available"
[21:42] coreycb, okay. That means juju isn't finding tools
[21:42] hazmat: about to have a play with jlxc
[21:42] in the list of places to work
[21:43] bloodearnest, that one isn't end user polished.. lots of undocumented setup.. i'm doing a talk on thursday.. i'm hoping to polish it up before then
[21:43] coreycb, you are in a private cloud? Someone, perhaps you, placed juju-*.tgz into a container?
[21:44] hazmat: sure, np, I'm just gonna see if I can get it working, if not no worries
[21:44] bloodearnest, actually with the aufs option, it's reasonably straight-forward.. i've had some minor issues with aufs though and some software.
[21:44] nfs in particular
[21:45] sinzui, it's private.. this is on the server team's openstack on openstack deployment (serverstack)
[21:46] sinzui, but anyway not sure.. I'll have to check into that
[21:47] sinzui, if you don't provide tools you have no idea what tools juju will select, and no dry-run mode.. that's terrifying for prod
[21:47] coreycb, okay. they probably seeded the tools. does the cloud have an open network? eg, can you deploy any charm without needing to fork and repackage its deps
[21:47] sinzui, hmm, now juju destroy-environment gets "error: no environment specified"
[21:47] sinzui, ah, it may not be open
[21:48] coreycb, I would have thought the "--upload-tools" arg would have worked from your 1.17.4 host. The feature works too well
[21:50] coreycb, juju 1.17.x changed the destroy-environment args. You must specify the environment name to be clear you know what you are doing...
[21:50] juju destroy-environment my-env
[22:01] sinzui, I think I'm good to go. --upload-tools did work, and debug-log works after I successfully ran upgrade-juju to 1.17.4. and destroy-environment works as you said.
[22:02] coreycb, great
[22:02] sinzui, oh and there is a juju-1.17.3-precise-amd64.tgz getting pulled down in the bootstrap
[22:02] sinzui, anyway, thanks!
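Pulling sinzui's upgrade and destroy advice together in one place; the commands are the ones quoted in the conversation, and `my-env` is a placeholder environment name:

```shell
# Ask the server and agents to find newer tools in the current
# cloud or on streams.canonical.com (juju picks the origin):
juju --show-log upgrade-juju

# Or build tools from the local client and upload them; this
# always works, but pins you to exactly the client's version:
juju --show-log upgrade-juju --upload-tools

# 1.17.x requires naming the environment explicitly on destroy:
juju destroy-environment my-env
```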
[22:50] hazmat: Great success! :)
[22:52] hmm, maybe I spoke too early
[22:53] but progress, anyway
[23:09] bloodearnest, what's the hangup.. it's pretty instantaneous if it's working.
[23:10] bloodearnest, if you don't have btrfs.. you need a code mod (it's already commented) to enable the aufs support... i should toss that into a cli param.
[23:15] bloodearnest, oh.. that install.sh.. i was going to yank it.. doh.
[23:16] i haven't actually used it, it's an old artifact from some previous exploration.