=== jfluhmann__ is now known as jfluhmann | ||
=== dendrobates is now known as dendro-afk | ||
ranjan | Hi all, can anyone explain to me what Cloud Computing is? | 10:56 |
---|---|---|
flaccid | ranjan: try the entry in wikipedia | 11:15 |
smoser | kim0, regarding python versions | 14:05 |
smoser | https://launchpad.net/ubuntu/+source/python-defaults | 14:05 |
smoser | python 2.7 would be ok, but then any utility instances that you fire up have to have python2.7 if you end up running python code there. | 14:05 |
smoser | that said, limiting yourself to python2.6 means you run on everything supported except for hardy. | 14:06 |
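smoser's constraint above can be checked mechanically before shipping 2.7-only code to a utility instance: probe the interpreter version and fall back to 2.6-safe constructs when needed. A minimal sketch (the probe and messages are illustrative, not from the discussion; on the releases being discussed the interpreter would just be `python`):

```shell
PY=${PYTHON:-python3}   # on hardy/lucid-era instances this would be plain "python"

# Exit 0 when 2.7-only syntax (e.g. dict comprehensions) is safe, non-zero otherwise.
if "$PY" -c 'import sys; sys.exit(0 if sys.version_info[:2] >= (2, 7) else 1)'
then
    MSG="python2.7 features ok"
else
    MSG="stick to 2.6-safe code"
fi
echo "$MSG"
```

Running this once on the utility instance tells you which dialect your fired-up code may use.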
=== dendro-afk is now known as dendrobates | ||
elasticdog | I've got a test cloud set up where launching instances works just fine, but creating a storage volume immediately fails: http://pastie.org/1537407 | 16:40 |
elasticdog | I pastied the applicable log entries from both the CLC & SC | 16:43 |
kim0 | does a file actually get created in //var/lib/eucalyptus/volumes/vol-* | 16:45 |
elasticdog | kim0: let me check | 16:46 |
elasticdog | kim0: yep, it creates the file, and it looks like it actually stays there until I do a euca-describe-volumes | 16:48 |
kim0 | can you increase log verbosity level | 16:49 |
elasticdog | it's already on the default debug | 16:49 |
elasticdog | -h | 16:49 |
elasticdog | they definitely stick around until you run a describe-volumes...not sure if I can actually mount/use them | 16:50 |
kim0 | elasticdog: do you have too many volumes already running | 16:50 |
elasticdog | I have no volumes running and plenty of free space | 16:50 |
kim0 | It seems | 16:50 |
kim0 | losetup /dev/loop0 //var/lib/eucalyptus/volumes/vol-59920626 | 16:50 |
kim0 | is failing | 16:50 |
kim0 | can you try that on any file you create | 16:51 |
elasticdog | running that manually seems to exit cleanly | 16:52 |
kim0 | ls /dev/loop0 exists ? | 16:52 |
elasticdog | yep | 16:52 |
kim0 | hmm | 16:53 |
kim0 | well clean it up for now .. losetup -d /dev/loop0 | 16:53 |
elasticdog | odd that it only seems to disappear after running a euca-describe-volumes on it...let me see if I can actually mount it | 16:56 |
elasticdog | it won't let me attach the volume either, but the file sticks around still | 17:04 |
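The loop-device step kim0 was probing can be reproduced by hand outside of Eucalyptus. A rough sketch (paths are illustrative; the losetup lines mirror what the SC runs and need root, so they are shown commented):

```shell
# Create a sparse 1G backing file like the ones the SC drops under
# /var/lib/eucalyptus/volumes/ -- no root required for this part:
truncate -s 1G /tmp/vol-test.img

# The SC's attach/verify/cleanup sequence, to be run as root:
#   losetup /dev/loop0 /tmp/vol-test.img
#   losetup /dev/loop0            # confirm the mapping exists
#   losetup -d /dev/loop0         # detach, as kim0 suggested above

ls -l /tmp/vol-test.img
```

If the manual attach succeeds (as it did for elasticdog) while the SC's own attempt fails, the fault is in the SC's environment rather than in losetup itself.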
=== dendrobates is now known as dendro-afk | ||
=== dendro-afk is now known as dendrobates | ||
elasticdog | found another error in the cloud-error.log: ERROR com.eucalyptus.util.EucalyptusCloudException: Could not export AoE device /dev/vg--g73dA../lv-5GN9hA.. | 17:53 |
elasticdog | might be VNET_INTERFACE related on the CLC per http://open.eucalyptus.com/wiki/EucalyptusTroubleshooting_v1.5 | 17:55 |
terje | does anyone know if RackSpace is going to be API compatible with AWS at any point? | 18:02 |
elasticdog | kim0: I figured it out...the VNET_*INTERFACE settings in eucalyptus.conf do NOT apply to the SC for whatever reason, but using the admin web interface I was able to properly set it to eth1 instead of eth0 | 18:10 |
elasticdog | there's a caveat listed about that in the eucalyptus.conf man page | 18:10 |
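For reference, the settings elasticdog is describing look roughly like this in /etc/eucalyptus/eucalyptus.conf (variable names as in the Eucalyptus 1.x/2.x docs; per the caveat he found, the SC does not honor them, so the SC's interface had to be set to eth1 through the admin web interface instead):

```shell
# Illustrative fragment of /etc/eucalyptus/eucalyptus.conf
VNET_PUBINTERFACE="eth1"    # interface facing the public network
VNET_PRIVINTERFACE="eth1"   # interface on the private VM network
```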
elasticdog | is it correct that neither the SC nor Walrus have any built-in data replication capabilities? | 18:35 |
=== mrjazzcat is now known as mrjazzcat-lunch | ||
kim0 | elasticdog: AFAIK, openstack already provides partial EC2 api compat | 19:04 |
elasticdog | kim0: yeah, I'm investigating OpenStack as well...looks like integration with a SAN or using a homebrew solution with something like GlusterFS is the way to get data replication for the SC/Walrus | 19:06 |
kim0 | elasticdog: agree .. drbd is an option as well | 19:07 |
elasticdog | so I assume that you are limited to a single SC per cluster, and a single Walrus machine per cloud? | 19:19 |
elasticdog | it seems like OpenStack's ring architecture approach might fit in better with our use-case | 19:20 |
=== mrjazzcat-lunch is now known as mrjazzcat | ||
=== Kiall is now known as Kiall|AFK | ||
kim0 | Any idea why the following is failing as unauthorized | 20:36 |
kim0 | ec2-describe-image-attribute --block-device-mapping ami-0601f16f | 20:36 |
kim0 | that's a public natty ami | 20:36 |
kim0 | smoser ^ | 20:37 |
smoser | you don't have access to that. | 20:37 |
smoser | it's strange, but true. | 20:37 |
smoser | :) | 20:38 |
kim0 | I wanna know the snapshot of the ami | 20:38 |
kim0 | so that we can clone it | 20:38 |
smoser | you don't have access to the snapshot anyway | 20:38 |
kim0 | my understanding is .. we get AMI-ID -> Volume -> snapshot -> create new vol -> mount that | 20:39 |
smoser | this is something that i've considered doing, for exactly this reason | 20:39 |
kim0 | is that incorrect | 20:39 |
kim0 | that's for the tool to copy AMIs across regions | 20:39 |
smoser | i don't really follow what you're suggesting, but you can do that for images you own | 20:39 |
smoser | you're not guaranteed to be able to see snapshots or apparently some image attributes of images you don't own. | 20:40 |
kim0 | smoser: so basically the tool to copy AMIs .. gets an AMI ID .. How is it possible to find out the volume to attach/mount | 20:40 |
smoser | like you're suggesting | 20:41 |
kim0 | Are you saying I can only copy AMIs that I own! | 20:41 |
smoser | you just can't copy an AMI that you dont own. | 20:41 |
kim0 | duh | 20:41 |
smoser | that make sense. | 20:41 |
smoser | err.. that *does* make sense. | 20:41 |
kim0 | I can already start it .. and copy it then | 20:41 |
kim0 | right | 20:42 |
smoser | well, no. | 20:42 |
smoser | the ability to launch an instance of something is not the ability to read it "raw" | 20:42 |
smoser | ie, chmod 0711 /bin/foo | 20:42 |
smoser | you have execute access to ami-0601f16f but not read | 20:42 |
kim0 | so for the purposes of this tool .. to test it .. | 20:43 |
kim0 | we'd need to upload our own image ? | 20:43 |
kim0 | and copy that | 20:43 |
kim0 | any easier path to get a golden image | 20:43 |
kim0 | to copy/test with | 20:43 |
=== Kiall|AFK is now known as Kiall | ||
kim0 | like do you have any image that has that read permission allowed | 20:44 |
smoser | well, its easy enough to test in 1 of 2 ways | 20:48 |
smoser | 1.) who cares if the instance runs or not: use a 1G volume and register it as an AMI. will cost basically $0.00, it just won't boot. | 20:49 |
smoser | 2.) populate a snapshot from uec-images, and make one that *does* boot. still can use a 1G filesystem. | 20:49 |
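smoser's option 1 can be sketched as a short command sequence. This is a dry run that only prints the commands rather than executing them — the command names come from ec2-api-tools, the ids and zone are placeholders, and actually running them creates (cheap) billable AWS resources:

```shell
ZONE=us-east-1a        # placeholder availability zone
SNAP=snap-00000000     # placeholder; the real id comes from ec2-create-snapshot output

# Echoed rather than executed; drop the echo to run for real.
echo "ec2-create-volume -s 1 -z $ZONE"
echo "ec2-create-snapshot vol-00000000"
echo "ec2-register -n copy-test --root-device-name /dev/sda1 -b /dev/sda1=$SNAP:1:true"
```

Registering the empty snapshot yields an AMI that will never boot, but it is one you own end to end, which is all an AMI-copy tool needs for testing.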
kim0 | smoser: thanks | 21:24 |
smoser | kim0, if you want to push on making snapshots public... i do think that would be useful | 21:25 |
smoser | but in the describe-image-attribute case, it would still fail | 21:25 |
smoser | there would be no data publicly available in AWS that would link AMI -> SNAPHOST | 21:26 |
smoser | or even SNAPSHOT :) | 21:26 |
smoser | interesting anagram in this case | 21:26 |
daker_ | kim0, question: why don't you put the website in the topic? | 22:29 |
=== dendrobates is now known as dendro-afk | ||
=== daker_ is now known as daker | ||
=== dendro-afk is now known as dendrobates | ||
=== tubadaz_ is now known as tubadaz | ||
=== jfluhmann is now known as jfluhmann_away |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!