[04:37] Alright, good progress..
[04:38] But it's way past bedtime.. night all
=== renato_ is now known as renato
[12:46] Hallo!
[12:51] niemeyer: hi!
[12:54] rog: Yo!
[12:54] niemeyer: i'm considering putting (at least part of) the ec2 test server under launchpad.net/goamz/ec2/test rather than juju/ec2, because it's bound so tightly to the ec2 protocol.
[12:54] does that seem reasonable?
[13:39] g'morning
[13:43] rog: Yeah, depending on how generic it is
[13:43] rog: Sorry, missed your message earlier
[13:43] hazmat: morning
[13:43] hazmat: morning
[13:43] niemeyer: np
[14:04] niemeyer: hi. I'm around if you feel like talking about what you'd like me to start with.
[14:08] mpl: Hey!
[14:09] mpl: Cool, so..
[14:09] mpl: The first thing I'd recommend is getting comfortable with the concepts around the project
[14:10] mpl: Trying to use it in the first place, so you understand the overall idea, would be a great start
[14:10] mpl: Do you have an account on EC2?
[14:15] niemeyer: nope. I quickly tried locally with lxc but got some errors. some dude pointed me to some troubles he had locally as well, but I haven't had time to investigate more.
[14:17] niemeyer: but maybe I can get an account. Do I need to pay if I just want an account and play with juju on it?
[14:20] mpl: You do, even though it's going to be quite cheap for what you'll have to do
[14:20] mpl: Like cents
[14:20] ok, gonna set one up then
[14:20] mpl: I recommend playing on EC2 rather than local, because if you're happy with it I'll try to get you involved in developing some features around the EC2 support
[14:21] mpl: FWIW, if it starts to get expensive because you're spending a good deal of time hacking, I'll be very happy to sponsor your costs
[14:22] bah, we're not there yet. besides, at some point I suppose I'll need an EC2 account to test some camlistore stuff there too.
[14:22] but thx, good to know.
[14:34] <_mup_> Bug #889125 was filed: Initial implementation of basic ec2 functionality. < https://launchpad.net/bugs/889125 >
[14:54] niemeyer: new merge proposal for you. it relies on the previous error fixes though, so i'm not expecting an immediate look.
[15:02] rog: Cool
[15:02] rog: Still pumping through..
[15:02] rog: Bazaar branch diffing works, and auth works
[15:02] niemeyer: nice
[15:02] rog: Now just need to sort out the actual rietveld interaction
[15:02] rog: Hopefully this afternoon I can nail the whole thing down
[15:03] niemeyer: are you gonna branch upload.py, use the stuff from the go tree, or do it from scratch?
[15:03] rog: Did it from scratch..
[15:03] niemeyer: probably best
[15:04] rog: Want to integrate it into lbox, so upload.py wasn't suitable
[15:04] niemeyer: right
[15:04] rog: The patch stuff in the Go tree was interesting, but ended up being a red herring
[15:04] rog: So I'm writing a rietveld library, then will glue on lbox
[15:05] niemeyer: yeah, i saw your CL
[15:05] But lunch first! :)
=== jcastro_ is now known as jcastro
[16:13] mpl, if you have any notes from your troubles with the local provider i'd be happy to help debug it
[16:14] hazmat: I think you're the one who pointed me to the troubles you had reported. I hadn't even rebooted after installing lxc (which is recommended), so that may just be it. as I said, I haven't had time to look more into it. but thanks anyway.
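For context on the local-provider setup being discussed: the options that come up later in this log (data-dir at 16:51, default-series at 18:14, authorized-keys-path at 18:29) all live in ~/.juju/environments.yaml. A minimal sketch of a local environment for the Python-era juju of this log; the paths and values are illustrative, not taken from anyone's actual config, and other required options are omitted:

    environments:
      local:
        type: local
        # where the provider keeps per-environment state and logs,
        # including master-customize.log (see 16:51)
        data-dir: /home/me/juju-data
        # the Ubuntu release used for the container template (see 18:14)
        default-series: oneiric
        # public key installed on launched machines; juju also looks for
        # defaults such as ~/.ssh/id_dsa.pub (see 18:29)
        authorized-keys-path: ~/.ssh/id_rsa.pub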
[16:40] hi
[16:40] I'm trying to deploy a juju formula to lxc locally, and it seems to be hanging on "Starting container..."
[16:41] At the charm school at UDS, I remember being pointed to a more helpful log file
[16:41] but I can't find that now
[16:51] jml, there's a master-customize.log under the units directory in the data-dir specified in environments.yaml for the local provider
[16:51] jml, there are several log files in that directory
[17:06] hazmat: thanks.
[17:07] jml, the log for the initial lxc container modification used as a template for units is in master-customize.log
[17:08] hazmat: that file doesn't exist. i've destroyed the environment and am trying again
[17:08] jml, it helps to check the status of containers with lxc-ls, or to check for debootstrap in a process listing
[17:09] jml, cool, i'll be around but running errands (national holiday in the US)
[17:09] hazmat: thanks
[17:19] happy weekends everyone :)
[17:27] Oh. Hmm.
[17:33] fwereade: Have a great one!
[17:38] fwereade: et toi!
[17:39] niemeyer: william raised an interesting point in his review of the Go code: i'm conflating machines and instances, and that's probably not a good thing to do
[17:39] niemeyer: what do you think?
[17:41] argh, ubuntuone-syncdaemon strikes again! 5GiB real memory and rising
[17:46] rog: jsyk, the non-literal and more correct translation would be "toi aussi" :)
[17:46] mpl: there's always one. dammit. :-) :-)
[17:47] mpl: i did know that once, honest.
[17:47] I don't doubt it. just pointing it out because I'm not sure you could get away with "et toi" while speaking.
[17:47] mpl: but thanks for letting me know.
[17:47] np
[17:48] mpl: and if i was using the 3rd person? is "et vous" incorrect too?
[17:48] yep. just go with "vous aussi" or "vous de même" if you want to be fancy.
[17:49] lovely.
[17:58] heh, the amazon automated call lady's voice sounds a bit like GLaDOS :)
[18:05] :(
[18:06] OK. So, no master-customize.log exists; debug-log says "starting container", no debootstrap in the process list
[18:14] in my environments file, should I change "default-series: oneiric" if I'm using Lucid Lynx?
[18:15] blew away all my lxc environs and the juju data dir, killed the relevant twistd & zookeeper processes -- seems to be working now, at least I have a master-customize.log
[18:16] Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-crypto/python-crypto_2.3-2_amd64.deb Hash Sum mismatch
[18:22] jml, destroy-environment is a safe reset, you shouldn't ever have to kill processes by hand, unless you blew away the data-dir beforehand
[18:22] hazmat: it was giving me errors about "address already in use"
[18:23] jml, in general using destroy-environment is the best way to clean state/kill processes, it sounds like you had a process around from a previous attempt
[18:23] jml, so what do you see now from lxc-ls ?
[18:23] or juju status
[18:24] hazmat: ... a previous attempt where destroy-environment didn't do what it says on the tin
[18:24] machines:
[18:24]   0: {dns-name: localhost, instance-id: local}
[18:24] services:
[18:24]   etherpad-lite:
[18:24]     charm: local:oneiric/etherpad-lite-7
[18:24]     relations: {}
[18:24]     units:
[18:24]       etherpad-lite/0:
[18:24]         machine: 0
[18:24]         public-address: null
[18:24]         relations: {}
[18:24]         state: null
[18:24] 2011-11-11 18:24:04,680 INFO 'status' command finished successfully
[18:24] jml-local-0-template jml-local-etherpad-lite-0
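Pulling together the checks hazmat suggests above (16:51, 17:08, 18:23) into one place; a sketch only, with a placeholder data-dir path, and the container name taken from jml's lxc-ls output:

    # list the containers the local provider has created
    lxc-ls

    # a debootstrap still in the process listing means the container
    # template is still being built
    ps aux | grep debootstrap

    # per-environment logs live under the data-dir from environments.yaml,
    # e.g. <data-dir>/<user>-<env>/units/master-customize.log
    less /path/to/data-dir/jml-local/units/master-customize.log

    # and juju's own view of things
    juju status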
[18:26] jml, do you see any files in data-dir/user-env/units/etherpad-lite ?
[18:26] hazmat: container.log and a broken symlink to unit.log
[18:27] so juju bootstrap says "SSH authorized/public key not found.", because I followed the getting started guide and entered my access key and secret key. is the guide out of date on that aspect?
[18:27] jml hmm.. so the unit agent never started if the symlink is broken, could you paste the master-customize.log
[18:27] hmm
[18:28] http://paste.ubuntu.com/735521/
[18:28] hazmat: ^ the master-customize.log.
[18:29] mpl, the ssh key isn't about the aws keys or usage, juju is looking for a public key on your machine to set up for ssh on all launched machines, it looks for some defaults (id_dsa.pub, etc).. you can specify one directly with authorized-keys-path in your environments.yaml under the provider block
[18:31] I've blown away the python-crypto deb in apt-cacher-ng
[18:31] hazmat: thx. I haven't seen that in the getting started guide. does it come later?
[18:31] and now I'm going to tear down in this clean & safe way that doesn't involve manual process killing
[18:31] mpl, it's described in the provider docs, but this should probably be a FAQ entry
[18:32] http://paste.ubuntu.com/735524/
[18:32] and look, juju processes still around: http://paste.ubuntu.com/735525/
[18:32] hmm
[18:33] jml, indeed.. lxc hanging on a shutdown command was unexpected
[18:33] that needs some reordering then
[18:34] so it looks like that would mean there is another lxc-wait operation being done concurrently
[18:35] jml, could you paste a ps aux | grep lxc
[18:39] hazmat: nothing comes up, but I'd already manually killed the processes in http://paste.ubuntu.com/735525/
[18:42] hazmat: that did it, thx. now to wait for the machine to be initialized, I suppose...
[18:43] jml, one last thing that might have useful information.. could you paste the unit's container.log
[18:44] will do.
[18:45] note that this is from a run where I have resolved the python-crypto hash mismatch problem
[18:45] (there's now a different problem)
[18:46] http://paste.ubuntu.com/735534/
[18:46] http://paste.ubuntu.com/735536/ <- master-customize.log from that run
[18:46] http://paste.ubuntu.com/735537/ <- debug log from that run
[18:50] jml, thanks
[18:53] jml, same question re the process grep on lxc, if it's still in the same state. the lxc address error is from another lxc process binding to the address
[18:53] http://paste.ubuntu.com/735545/
[18:53] jml, this should get a bug report
[18:54] i'm not sure the code does a very good job here; it assumes serial use of the lxc api
[18:54] at least it doesn't hold up across processes
[18:55] hazmat: I suspect you'd be able to file a more useful bug report than I could.
[18:55] jml, sounds good
[18:55] And besides, my direct problem is that I can't get an LXC instance deployed. Dodgy cleanup when I fail to do so is second-tier :)
[18:56] Although maybe this is the language thing.
[18:56] crap. I need to pack.
[18:56] it still doesn't really answer the problem in this case, which is why the container isn't up. it might be that the error comes when we try to check the status and something else is still listening for the status. jml, thanks, cheers
[18:56] i'll have a look at it some more next week
[18:57] hazmat: thanks :)
[19:17] right, i'm off for the weekend. see y'all monday.
[19:52] rog: Have a good one
[19:53] I'll head outside for a while too..
[23:38] <_mup_> juju/robust-zk-connect r414 committed by jim.baker@canonical.com
[23:38] <_mup_> Initial version
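On the serialization problem hazmat raises at 18:54: the log doesn't say how (or whether) it was later fixed, but one conventional way to stop two processes from racing on the same lxc operations is an advisory file lock, e.g. via flock(1). A hypothetical sketch only; the lock path is made up, and the container name is the one from jml's session:

    #!/bin/sh
    # take an exclusive lock around each lxc call, so a concurrent
    # juju/lxc invocation blocks instead of failing with errors like
    # "Address already in use"
    LOCK=/var/lock/juju-lxc.lock
    flock "$LOCK" lxc-ls
    flock "$LOCK" lxc-wait -n jml-local-etherpad-lite-0 -s RUNNING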