[05:13] <liam> is there an ubuntu EC2 AMI with only a 10GB EBS root volume that anyone can link?
[06:21] <rsvp> liam, check http://cloud.ubuntu.com/ami
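Ubuntu's EBS-backed AMIs let you override the root volume size at launch, which is one way to get the 10GB root liam asks about. A minimal sketch with the ec2-api-tools; the AMI ID is a placeholder, and the mapping requests a 10GB root volume that is deleted on termination:

    # ami-xxxxxxxx is a placeholder; pick an Ubuntu EBS AMI for your region
    ec2-run-instances ami-xxxxxxxx \
      --instance-type m1.small \
      --block-device-mapping /dev/sda1=:10:true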
[09:34] <asac> smoser: please let me know when there are new AMIs out after the kernel fix lands for the high-memory instances?
[09:34] <asac> thx!!
[10:28] <flaccid> asac: you have a problem with the current AMIs?
[10:29] <asac> flaccid: yes ... kernel bug https://bugs.launchpad.net/ubuntu/+source/linux/+bug/651370
[10:29] <asac> for high memory instances
[10:30] <flaccid> ah that one
[10:30]  * asac would love to see a verification AMI being made for this bug ;)
[10:31] <flaccid> smoser is most likely on the case
[10:31] <asac> yeah ... talked about that with him
[10:31] <asac> eventually it will go to -updates ... and then new AMIs to be made
[10:32] <asac> i hope the -proposed pocket doesn't take much longer though :/
[10:49] <asac> hmm ... i wonder if i need to tweak some /proc/sys (sysctl) settings to get _more_ file caching on high-memory instances
[10:49] <asac> IO is slow so i would like the full build tree to live in memory
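The "/proc/sys stuff" asac is wondering about would be the vm.* sysctls. A sketch of the knobs that bias the kernel toward keeping file data in RAM; the values here are illustrative, not tested recommendations:

    sysctl -w vm.vfs_cache_pressure=50      # hold on to dentry/inode caches longer
    sysctl -w vm.dirty_ratio=40             # allow more dirty page cache before writers block
    sysctl -w vm.dirty_background_ratio=20  # start background writeback later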
[10:50] <asac> btw, are there other cloud services with better IO that are nicely integrated with ubuntu?
[10:50] <asac> i found storm and gogrid, but those don't seem very strong API/tools-wise
[10:51] <flaccid> rackspace is available
[10:57] <asac> but rackspace doesnt really have better IO, right?
[10:59] <flaccid> test it out i guess
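One quick way to "test it out" on any provider is a raw throughput check with dd; oflag=direct bypasses the page cache so the underlying disk is actually measured (the path is illustrative):

    dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 oflag=direct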
[11:01] <asac> http://blog.cloudharmony.com/2010/06/disk-io-benchmarking-in-cloud.html
[11:01] <asac> yeah ...
[11:01] <asac> i guess i will wait for high mem and put whole build on tmpfs ;)
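Putting the build on tmpfs as asac plans is essentially a one-liner; the mount point and size are illustrative, and the size must fit comfortably inside the instance's RAM:

    mount -t tmpfs -o size=16G tmpfs /mnt/build
    cp -a ~/build-tree /mnt/build/   # everything under /mnt/build now lives in RAM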
[12:14] <Evet> i want a whole server to act as a single cloud node
[12:14] <Evet> do i still need the host OS / guest OS separation?
[12:44] <kim0> Evet: The only supported configuration is at least 2 machines for a cloud, although some people have made it work on a single machine (sort of)
[12:44] <Evet> kim0: i have multiple servers
[12:45] <Evet> want to build one node per physical server
[12:47] <kim0> Evet: the way it works is to have one cloud controller node (CLC) .. and all other boxes become Node Controllers (NCs) .. would that work for you ?
[12:48] <Evet> kim0: yes
[12:48] <kim0> Evet: great .. that's the standard setup :)
[12:49] <kim0> Evet: https://help.ubuntu.com/community/UEC/CDInstall
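For reference, the package split behind the CLC/NC topology kim0 describes, assuming the stock UEC package names of this era (the CDInstall page above covers the full procedure):

    # on the front-end (cloud controller) box:
    sudo apt-get install eucalyptus-cloud eucalyptus-cc eucalyptus-walrus eucalyptus-sc
    # on each physical server that should host VMs:
    sudo apt-get install eucalyptus-nc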
[12:49] <Evet> kim0: so i can create a 4GB-RAM node on a 4GB box?
[12:50] <kim0> Evet: a 4G VM on a 4G physical box .. yeah, should be ok I guess (3.5G or so would surely work), otherwise your bare-metal installation would be close to running out of RAM
[12:55] <Evet> hmm, max is 3.5
[13:10] <kim0> Evet: I'm not saying that .. I'm just guessing .. feel free to try it
[14:11] <smoser> asac, i'm really hoping that as soon as that kernel is through we'll get updated amis.
[14:12] <smoser> asac, remember, you'll get better IO to instance store disk than EBS.
[14:12] <smoser> also, people play with RAIDing EBS volumes, which, depending on the report, can increase performance.
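The EBS RAIDing smoser mentions is typically done with mdadm; a sketch assuming two volumes are already attached (the device names are assumptions and vary by kernel, e.g. /dev/sdf vs /dev/xvdf):

    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/fast   # stripes reads/writes across both volumes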
[17:11] <Evet> is there an automatic solution to make the slave nodes access the public internet through the cloud controller master?
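What Evet describes is plain NAT on the controller; whether UEC automates it depends on the networking mode, but the manual equivalent looks roughly like this, assuming eth0 faces the internet and eth1 the private node network:

    echo 1 > /proc/sys/net/ipv4/ip_forward                 # enable packet forwarding
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT node traffic out eth0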
[18:36] <kim0> Hi folks .. Can two UEC installations talk to each other ? Check out this question http://ubuntuforums.org/showthread.php?t=1644440
[18:41] <kiall> kim0, I don't believe that is possible .. and I don't believe it should be supported either :) .. what you're probably looking for is 1 cloud with an HA CLC+Walrus ...