[00:04] develop: I use s3fs to mount S3 buckets as a FUSE file system. EBS and S3 each have their uses.
[00:04] thanks
[00:04] develop: Only one EC2 instance can connect to a given EBS volume at a time, and it must be in the same availability zone.
[00:04] Any number of instances (and outside computers) can connect to an S3 bucket with s3fs.
[00:05] but it has different performance characteristics, and operates on complete files instead of blocks, and ...
[00:46] and you risk corrupting file integrity if you have more than one client writing back to an S3 bucket
[00:48] as far as I know you won't corrupt the actual object that is stored in S3. S3 guarantees write consistency. That is, exactly one write will succeed. You don't know which one, but data is not interleaved.
[00:49] well, technically the integrity of the file is broken if the wrong write remains
[00:49] not from the infrastructure's perspective. certainly from the user's perspective, but that is the user's job.
[00:50] so one client opens the file contents, does some stuff for a while, then writes back after another client has done the same thing. the data from the 2nd client is lost
[00:50] yep, that is why you use the ETag to make sure you read what you expect.
[00:50] well yes. but it's infrastructure that is connected to the bucket as well as the user..
[00:50] well that's why you use EBS
[00:51] they have very different characteristics: API-wise, performance-wise, scale-wise, reliability-wise.
[00:51] yes
[01:24] it seems that i do not understand the following: i launched an AMI with a db server, attached an EBS volume, then rebundled and uploaded. But i still cannot stop the instance to start it later. Do i have to bundle as an EBS root device? take a snapshot? where does the db data get saved?
=== flaccid_ is now known as flaccid
[02:37] erichammond: cheers for that. this will set all components: sudo passwd root -u && sudo sed -i 's/disable_root=1/disable_root=0/g' /etc/ec2-init/ec2-config.cfg && sudo cp -v /home/ubuntu/.ssh/authorized_keys /root/.ssh/
[02:38] flaccid: You probably don't want to unlock the root password. Stick with ssh keys.
[02:38] oh i must have misread the man page
[02:39] ah yeah that's PAM only
[02:40] hmm there are these ec2 mirrors in the sources.list that seem down
[02:40] [Connecting to us.ec2.archive.ubuntu.com (174.129.36.139)]
[02:41] flaccid: Those only allow access from within the same EC2 region.
[02:42] ah rightio. they must have set a firewall rule
[02:42] that's us-east?
[02:42] us-east-1, yes
[02:42] okies
[02:43] Gotta be specific as Amazon will someday launch us-east-2, etc. as they take over the world.
[02:43] just going to see if this build process works with nps on karmic, then i'll re-save that readme
[02:43] hehe take over the world. well if you can get them to get a cloud going here in AU that would be good
[02:43] flaccid: Are you on the ec2ubuntu Google group? There's somebody there asking for help getting Karmic to work with RightScale.
[02:44] I was considering publishing how I did it for a personal client, but don't know if RightScale wants to provide the official way.
[02:44] i don't think i'm on that group. i'll join
[02:44] http://groups.google.com/group/ec2ubuntu/
[02:45] technically no official way yet, but this will provide a POC to show internally, to move towards that
[02:45] http://groups.google.com/group/ec2ubuntu/browse_thread/thread/0e56d0c5f2f224ca
[02:47] ah yeah, cheers for that. i'm a python guy so this should be cool
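
The s3fs setup described at the top of this log, as a minimal sketch; the bucket name "mybucket" and the mount point are hypothetical, and s3fs reads AWS credentials from a passwd file:

    # store AWS credentials for s3fs (format: ACCESS_KEY_ID:SECRET_ACCESS_KEY)
    echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs

    # mount the bucket as a FUSE file system; unlike an EBS volume,
    # any number of machines can mount the same bucket this way
    mkdir -p ~/mybucket
    s3fs mybucket ~/mybucket -o passwd_file=${HOME}/.passwd-s3fs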
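
The "use the etag" advice from the [00:50] exchange, sketched here with the aws CLI (which postdates this conversation); bucket and key names are hypothetical. Re-reading with If-Match fails with 412 Precondition Failed when another client has written the object in between, which is how a reader detects the lost-update case discussed above:

    # remember the ETag from the first read
    ETAG=$(aws s3api head-object --bucket mybucket --key data.txt \
        --query ETag --output text)

    # ... work on the local copy for a while ...

    # re-fetch only if the object is unchanged; a 412 error means
    # a second client wrote to the key since our first read
    aws s3api get-object --bucket mybucket --key data.txt \
        --if-match "$ETAG" /tmp/data.txt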
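
Following erichammond's advice to stick with ssh keys, the same effect without unlocking the root password is flaccid's one-liner minus the passwd step (paths are the Karmic-era ones quoted in the chat):

    # let ec2-init keep root's authorized_keys on this image
    sudo sed -i 's/disable_root=1/disable_root=0/g' /etc/ec2-init/ec2-config.cfg

    # copy the ubuntu user's key to root; the root password stays locked
    sudo cp -v /home/ubuntu/.ssh/authorized_keys /root/.ssh/authorized_keys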
[03:10] yeah python 2.6 is in karmic, so we'll show him how to do the rightimage build
[04:12] ping erichammond, around?
[04:12] smoser: 'lo
[04:12] you have a few minutes?
[04:13] sure thing
[04:17] see private message
=== dendrobates is now known as dendro-afk
[09:12] morning everyone
[09:12] howdy
=== dendro-afk is now known as dendrobates
=== fairwinds__ is now known as fairwinds
=== erichammond1 is now known as erichammond
=== rberger_ is now known as rberger
=== zul_ is now known as uzl
=== uzl is now known as zul
=== zul_ is now known as zul
=== erichammond1 is now known as erichammond