[05:30] i just discovered juju
[05:30] is it similar at all to opennebula?
[05:30] nm, i think i know the answer to my own question
[05:31] rather, can juju use something like opennebula to start VMs in a private cloud?
[10:11] good morning
[11:09] hazmat: hiya
[11:09] hi rog, how's plan9 kicking it these days?
[11:10] hazmat: fragmenting...
[11:10] * hazmat vaguely remembers that was the topic of the conference
[11:10] hazmat: but always interesting
[11:20] hazmat: i drank too much beer too
[11:20] ... and discovered that Madrid 3am finishes don't mix well with conference 7am starts.
[11:21] rog, yeah.. rude awakening it is.
[11:21] :-)
[11:22] rog, there are these crazy kids at the python conference who go drinking to the wee hrs, and then go running at like 8am before the conf starts..
[11:22] * hazmat tries rebooting/kicking xchat to kill a unity problem
[11:23] hazmat: i guess that would wake you up. although it might not get rid of the headache too well...
[11:23] * rog can't quite bring himself to upgrade to oneiric
[11:23] for the most part it's a good upgrade, unity2d is a pretty solid fallback
[11:35] SpamapS, you at hadoop world?
[11:35] oh.. pre uds
[13:48] hazmat: no.. ?
[13:48] SpamapS, oh.. just saw your email early in the am, assumed you were east coast
[13:49] ah, no, stupid smoke alarms
[14:13] hazmat: to experiment with juju, is it possible to run a complete environment using lxc running within a kvm virtual machine?
[14:15] tauren, it is possible to use the local/lxc provider in a virtual machine
[14:15] and create a service deployment that way
[14:16] cool. but will that cause more headaches that I wouldn't have if I just set it all up on a physical box?
[14:16] tauren: we've had people doing it in virtualbox vms too
[14:16] there is a caveat that the vm performance can cause some failures atm, it's something we're working on (handling transient disconnects for the internal connection topology)
[14:17] hmm, ok
[14:17] tauren, it definitely works, just that you need to be aware that the vm can be overloaded if you're going past its capacity
[14:17] ok, good to know.
[14:17] yeah I've overloaded my laptop using the local provider a few times..
[14:17] now i assume it isn't possible to set up a full openstack environment without multiple physical systems. is that true?
[14:18] i end up doing most of my dev testing / charm development with a local provider
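
A minimal sketch of the local/lxc provider setup discussed above, assuming an oneiric-era juju; the package list and environments.yaml keys follow the docs of that period and may differ on other releases, and the data-dir and charm repository paths are just placeholders:

    # install juju and the local provider's dependencies (package names may vary by release)
    sudo apt-get install juju lxc libvirt-bin apt-cacher-ng zookeeper

    # ~/.juju/environments.yaml -- illustrative keys for a local environment
    environments:
      local:
        type: local
        data-dir: /home/ubuntu/juju-local    # placeholder path
        admin-secret: some-secret
        default-series: oneiric

    # bootstrap on this machine and deploy a charm from a local repository
    juju bootstrap
    juju deploy --repository ~/charms local:oneiric/mysql
    juju status
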
[14:18] installing packages can be hard on the disks ;)
[14:18] ssd ftw ;-)
[14:18] SpamapS, did you end up ordering an ssd?
[14:18] no
[14:18] I can't decide on one. Analysis paralysis.
[14:18] SpamapS, which laptop model are you doing this for?
[14:19] i can give a single suggestion if that would help
[14:19] * hazmat is still waiting for native 7mm ssd
[14:19] macbook pro 5,1
[14:19] SATA II only
[14:20] tho some SATA III drives are known to work fine
[14:20] ooo, i'm interested too. macbook pro 5,5
[14:21] hazmat: considering getting the superdrive replacement kit too since my superdrive no longer seems to function
[14:28] SpamapS, hard to narrow down to one, so i'd probably go with an ocz vertex3 or kingston hyperx (i'm partial to the sandforce controllers), i think the crucial/micron m4 is pretty good as well, the intel 510 or 320 are both pedestrian but solid. here's a nice roundup http://www.anandtech.com/show/4421/the-2011-midrange-ssd-roundup.. i'd probably just go with the crucial/micron after verifying size compatibility. the lack of sata III means you won't really
[14:28] be pushing any of these to their limits, as you'll saturate the bus, the sandforce controllers have slightly higher power usage (they're doing more work in the controller).
[14:29] don't care much about power consumption on this beast
[14:29] 2 hours is a triumph
[14:29] hazmat: now to see which of those is available via Amazon Prime ;)
[14:30] if you do go with the crucial m4 you'll need to do a firmware double check to make sure it's on the latest.. http://www.anandtech.com/show/4712/the-crucial-m4-ssd-update-faster-with-fw0009
[14:30] SpamapS, pretty much all of them are on prime ;-)
[14:32] You're the second person to suggest kingston hyperx
[14:32] so I'll probably look at that
[14:32] I wonder if I can have it shipped to the Caribe Royale. ;)
[14:34] Now if my machine could actually resume from suspend, I'd be quite happy. :-P
[14:34] doh
[14:34] It's unbelievable that this is an issue, still.
[14:45] * hazmat takes a break from email bbiab
[15:10] * hazmat attacks the review queue
[15:11] rog, will you be able to do a review on niemeyer's go-new-revisions branch?
[15:11] hazmat: yes - coming up. there's been lots of discussion to try to absorb!
[15:12] indeed, lots of good discussion
[15:40] hazmat: hmm, when i do bzr qdiff go-trunk go-new-revisions i get an error. how is go-trunk not a parent of go-new-revisions? the log looks like it is. http://paste.ubuntu.com/717939/
[15:40] cd go-new-revisions && bzr qdiff -r ancestor:../go-trunk
[15:40] rog, ^
[15:41] rog, you need to tell bzr that you want an ancestor branch diff
[15:41] hazmat: ah, i haven't seen ancestor: before
[15:42] and i thought -r just took a revision number.
[15:43] cool.
[15:43] thanks
[15:43] bzr help revisionspec has some gory details of the many ways to spell a revision
[15:43] i just looked. ultra-simple it is not.
[16:02] hazmat: hmm, something's really wrong. the diff should not look like this: http://i.imgur.com/Q2PoM.png
[16:03] (note invalid code on right, redundant ifs on left)
[16:04] the output from bzr diff looks ok, so maybe qdiff isn't coping with the ancestor: spec very well
[16:04] rog it's a context diff, you can switch to complete by clicking the radio button
[16:04] the output between qdiff and diff should be equivalent, just formatted differently (side by side vs unidiff)
[16:05] hazmat: ah, the gray spaces represent omitted lines. i see.
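
For reference, a few of the revisionspec spellings bzr accepts (bzr help revisionspec covers the rest); the branch paths here are placeholders matching the example above:

    # diff against the common ancestor with another branch, as suggested at 15:40
    cd go-new-revisions && bzr diff -r ancestor:../go-trunk

    # a few other ways to spell a revision
    bzr diff -r revno:100         # an absolute revision number
    bzr log  -r -3..-1            # the last three revisions
    bzr diff -r date:yesterday    # first revision committed since yesterday
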
[17:04] * rog is heading off.
[17:05] hazmat: sorry, haven't quite got to the end of go-new-revisions, got side tracked by thinking about error handling vs identifiable errors.
[17:08] rog, no worries, if you can it would be great to have it reviewed by end of day, it's been pending for a while
[17:08] i'm digging into some of the other items in the queue
[17:08] hopefully next week we can talk about better team review practices to keep the queue down
[19:13] jimbaker, thanks for the review
[19:20] hazmat, sounds good, i can't wait until the retry work lands, it's going to be very helpful
[19:35] jimbaker, yeah.. i'm looking forward to local environments surviving suspend
[19:35] it should close out about 4 related issues
[19:36] hazmat: any chance you could answer this question I asked previously? "i assume it isn't possible to set up a full openstack environment without multiple physical systems. is that true?"
[19:37] tauren, oh.. sorry missed that earlier.. it is possible to set up openstack without multiple machines.. it depends on how you want to do it though.. if you want to do it with juju or just run openstack
[19:37] tauren, openstack can run with uml or lxc on a local machine in a developer setup.. it can also be set up that way in ec2.. as for being driven by juju
[19:38] there was a guy who had virtualbox set up with netboot and orchestra in a vm to manage the other vms, which he pointed juju to
[19:39] but if you mean like best practice... for prod use, it's physical machines + orchestra + juju deploying openstack
[19:39] we have some docs on the orchestra wiki
[19:39] my goal is to experiment with openstack, juju, lxc, etc before moving forward with purchasing multiple physical machines. if i could do it all on a single box for testing, that would be awesome.
[19:41] thanks for the answer! i'll look for those docs.
[19:50] tauren, this blog post has the links for getting all the pieces in place https://wiki.ubuntu.com/ServerTeam/OrchestraJuju
[19:50] great! thanks for the help.
[21:22] rog ping?
[21:23] oh not here
[21:54] jimbaker, hazmat: thanks for the review on statusd
[21:56] bcsaller, np
[21:56] fwereade, what's the base for dynamic-unit-state?
[22:16] woot only one more review till i've hit the full queue
[22:19] * SpamapS would be interested in working on a gamification of lp's merge review UI ;)
[22:19] "You earned the Review WARRIOR badge (10 reviews in a day)"
[22:20] nice
[22:20] i started working on a touch ui for launchpad, but i'm doubtful now i'll have it ready for a uds lightning talk
[22:21] sigh.. so little time
[22:22] * hazmat saves the last review for tomorrow
[22:22] tis beer o clock, cheers
[22:25] hazmat: nastravi!
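
A very rough sketch of the "juju deploying openstack" flow mentioned around 19:37-19:39, once an environment is bootstrapped (orchestra-backed for physical machines, or the local provider for a single test box); the charm names below are illustrative, and the OrchestraJuju wiki page linked above is the authoritative reference:

    juju bootstrap
    # deploy the openstack components as charms (names illustrative)
    juju deploy mysql
    juju deploy rabbitmq-server
    juju deploy nova-cloud-controller
    juju deploy nova-compute
    # relate the services to each other
    juju add-relation mysql nova-cloud-controller
    juju add-relation rabbitmq-server nova-cloud-controller
    juju add-relation nova-cloud-controller nova-compute
    juju status
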