=== jfluhmann__ is now known as jfluhmann
=== dendrobates is now known as dendro-afk
[10:56] Hi all, can anyone explain to me what Cloud Computing is?
[11:15] ranjan: try the entry in wikipedia
[14:05] kim0, regarding python versions
[14:05] https://launchpad.net/ubuntu/+source/python-defaults
[14:05] python 2.7 would be ok, but then any utility instances that you fire up have to have python2.7 if you end up running python code there.
[14:06] that said, limiting yourself to python2.6 means you run on everything supported except for hardy.
=== dendro-afk is now known as dendrobates
[16:40] I've got a test cloud set up where launching instances works just fine, but creating a storage volume immediately fails: http://pastie.org/1537407
[16:43] I pastied the applicable log entries from both the CLC & SC
[16:45] does a file actually get created in //var/lib/eucalyptus/volumes/vol-*
[16:46] kim0: let me check
[16:48] kim0: yep, it creates the file, and it looks like it actually stays there until I do a euca-describe-volumes
[16:49] can you increase the log verbosity level
[16:49] it's already on the default debug
[16:49] -h
[16:50] they definitely stick around until you run a describe-volumes...not sure if I can actually mount/use them
[16:50] elasticdog: do you have too many volumes already running
[16:50] I have no volumes running and plenty of free space
[16:50] It seems
[16:50] losetup /dev/loop0 //var/lib/eucalyptus/volumes/vol-59920626
[16:50] is failing
[16:51] can you try that on any file you create
[16:52] running that manually seems to exit cleanly
[16:52] ls /dev/loop0 exists ?
[16:52] yep
[16:53] hmm
[16:53] well clean it up for now .. losetup -d /dev/loop0
[16:56] odd that it only seems to disappear after running a euca-describe-volumes on it...let me see if I can actually mount it
[17:04] it won't let me attach the volume either, but the file sticks around still
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
[17:53] found another error in the cloud-error.log: ERROR com.eucalyptus.util.EucalyptusCloudException: Could not export AoE device /dev/vg--g73dA../lv-5GN9hA..
[17:55] might be VNET_INTERFACE related on the CLC per http://open.eucalyptus.com/wiki/EucalyptusTroubleshooting_v1.5
[18:02] does anyone know if RackSpace is going to be API compatible with AWS at any point?
[18:10] kim0: I figured it out...the VNET_*INTERFACE settings in eucalyptus.conf do NOT apply to the SC for whatever reason, but using the admin web interface I was able to properly set it to eth1 instead of eth0
[18:10] there's a caveat listed about that in the eucalyptus.conf man page
[18:35] is it correct that neither the SC nor Walrus have any built-in data replication capabilities?
=== mrjazzcat is now known as mrjazzcat-lunch
[19:04] elasticdog: AFAIK, openstack already provides partial EC2 api compat
[19:06] kim0: yeah, I'm investigating OpenStack as well...looks like integration with a SAN or using a homebrew solution with something like GlusterFS is the way to get data replication for the SC/Walrus
[19:07] elasticdog: agree .. drbd is an option as well
[19:19] so I assume that you are limited to a single SC per cluster, and a single Walrus machine per cloud?
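(As an aside on the 16:50 loop-device check above: a minimal shell sketch of trying losetup by hand against a scratch file, assuming a host with losetup available and a free /dev/loop0; the backing file path here is made up purely for illustration.)

    # create a small scratch file to stand in for a vol-* backing file
    dd if=/dev/zero of=/tmp/test-volume bs=1M count=16
    # attach it to the loop device, as the SC would when creating a volume
    losetup /dev/loop0 /tmp/test-volume
    # list current loop associations to confirm the attachment
    losetup -a
    # detach and clean up, as suggested in the chat
    losetup -d /dev/loop0
    rm /tmp/test-volume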
[19:20] it seems like OpenStack's ring architecture approach might fit in better with our use case
=== mrjazzcat-lunch is now known as mrjazzcat
=== Kiall is now known as Kiall|AFK
[20:36] Any idea why the following is failing as unauthorized
[20:36] ec2-describe-image-attribute --block-device-mapping ami-0601f16f
[20:36] that's a public natty ami
[20:37] smoser ^
[20:37] you dont have access to that.
[20:37] its strange, but true.
[20:38] :)
[20:38] I wanna know the snapshot of the ami
[20:38] so that we can clone it
[20:38] you dont have access to the snapshot anyway
[20:39] my understanding is .. we get AMI-ID -> Volume -> snapshot -> create new vol -> mount that
[20:39] this is something that i've considered doing, for exactly this reason
[20:39] is that incorrect
[20:39] that's for the tool to copy AMIs across regions
[20:39] i dont really follow what you're suggesting, but you can do that for images you own
[20:40] you're not guaranteed to be able to see snapshots or apparently some image attributes of images you don't own.
[20:40] smoser: so basically the tool to copy AMIs .. gets an AMI ID .. How is it possible to find out the volume to attach/mount
[20:41] like you're suggesting
[20:41] Are you saying I can only copy AMIs that I own!
[20:41] you just can't copy an AMI that you dont own.
[20:41] duh
[20:41] that make sense.
[20:41] err.. that *does* make sense.
[20:41] I can already start it .. and copy it then
[20:42] right
[20:42] well, no.
[20:42] the ability to launch an instance of something is not the ability to read it "raw"
[20:42] ie, chmod 0711 /bin/foo
[20:42] you have execute access to ami-0601f16f but not read
[20:43] so for the purposes of this tool .. to test it ..
[20:43] we'd need to upload our own image ?
[20:43] and copy that
[20:43] any easier path to get a golden image
[20:43] to copy/test with
=== Kiall|AFK is now known as Kiall
[20:44] like do you have any image that has that read permission allowed
[20:48] well, its easy enough to test in 1 of 2 ways
[20:49] 1.) who cares if the instance runs or not, use a 1G volume, and register it as an AMI. will cost basically $0.00, it just wont boot.
[20:49] 2.) populate a snapshot from uec-images, and make one that *does* boot. still can use a 1G filesystem.
[21:24] smoser: thanks
[21:25] kim0, if you want to push on making snapshots public... i do think that would be useful
[21:25] but in the describe-image-attribute case, it would still fail
[21:26] there would be no data publicly available in AWS that would link AMI -> SNAPHOST
[21:26] or even SNAPSHOT :)
[21:26] interesting anagram in this case
[22:29] kim0, question: why don't you put the website in the topic ?
=== dendrobates is now known as dendro-afk
=== daker_ is now known as daker
=== dendro-afk is now known as dendrobates
=== tubadaz_ is now known as tubadaz
=== jfluhmann is now known as jfluhmann_away
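(For reference, a rough sketch of smoser's first test option at 20:49, registering a 1G volume as an AMI you own so the copy tool has something readable to work with. This assumes the standard ec2-api-tools are configured; the vol-/snap-/ami- IDs and the availability zone are placeholders, and the resulting image is not expected to boot.)

    # create a 1G scratch volume in a placeholder availability zone
    ec2-create-volume --size 1 --availability-zone us-east-1a
    # snapshot the (empty) volume; note the snap-* id it returns
    ec2-create-snapshot vol-xxxxxxxx
    # register the snapshot as an EBS-backed AMI that you own
    ec2-register --name copy-test --architecture x86_64 \
      --root-device-name /dev/sda1 -b /dev/sda1=snap-xxxxxxxx:1:true
    # the image attributes are now readable, since you own the AMI
    ec2-describe-image-attribute --block-device-mapping ami-xxxxxxxx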