[02:46] what's the difference between the .pem key you get with your instance and the access keys, X.509 certs and key pairs that are on your account access credentials page in AWS?
[02:49] does anyone know if, after doing the API tools steps outlined here https://help.ubuntu.com/community/EC2StartersGuide, you can ssh in to your EC2 instance without having to use the option "-i key.pem"?
[02:49] liam: http://docs.amazonwebservices.com/AWSSecurityCredentials/1.0/AboutAWSCredentials.html
[02:50] liam: no, a private key is required by default with images
[02:50] you might like to add the private key to your ssh client
[02:50] the private key is the .pem, right?
[02:50] i would assume so
[02:51] flaccid: I still can't find how to add it to the ssh client...
[02:51] rtfm
[02:51] is it just ssh-add key.pem??
[03:00] liam: Instead of having Amazon generate an ssh key, you can simply upload your default key to avoid needing "-i"
[03:00] liam: I wrote about it here: http://alestic.com/2010/10/ec2-ssh-keys
[03:01] erichammond: thank you.
[03:01] This is my recommended way of working with EC2 now. There's no need to have Amazon generate ssh keys any more.
[03:01] sorry that i am short today. gotta do a release in 2 hours
[03:02] flaccid: I often say nothing to avoid saying nothing nice :)
[03:02] true
[03:06] #ubuntu-* channels have higher standards: https://wiki.ubuntu.com/IRC/Guidelines
[03:06] It's one of the things that drew me to Ubuntu
[03:07] besides the fact that it just worked better and didn't break when upgrading.
[03:07] lol
[03:07] i wasted 3 years with ubuntu, i ain't going back
[03:08] and the software packages were more recent versions
[03:08] and...
[03:08] i have much higher standards than ubuntu
[09:54] hi guys. Is there a way to re-register (deregister, then register) walrus without losing any of the already uploaded images, kernels, etc?
[10:30] progre55: never tried that, but I don't think this is possible :(
[10:31] TeTeT: I've managed to change it from the admin web UI, but still, when I run "euca-describe-images" I get "no route to host" after some time..
[10:33] TeTeT: Oh, I haven't changed the environment variables after I changed the IP address =)
[10:33] progre55: you probably need to d/l new credentials from the admin interface
[10:34] TeTeT: it's working now, after I changed the EC2_URL env variable to the correct one =)
[10:34] thanks
[10:35] progre55: was merely reading, didn't do anything
[10:36] TeTeT: but still, it's nice to know that there are people trying to help you =)
[10:37] :)
[10:46] TeTeT: a question, if I may =)
[10:47] I have downloaded one of my images using "euca-download-bundle" and it's a bunch of 10MB files with a manifest. how can I convert it to a single image file now?
[10:52] oh, euca-unbundle, I guess
[10:54] progre55: yes, euca-unbundle
[10:54] TeTeT: thanks =)
[10:58] hi there, just out of curiosity.. is UEC the only distribution able to make use of the POWERSAVE option?
[11:58] what is the port parameter in boto.connect_ec2()?
[12:16] gijo: probably 8773
[12:17] gijo: for UEC that is, don't know for Amazon EC2
[12:18] thx
[12:32] as far as you know, is UEC the only distribution able to make use of the POWERSAVE option? since I get powernap error 255 out of the box, is there anything I need to change in the default config to make it work in maverick? is apparmor aware of powernap policies?
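A note on the key pair exchange above (02:46-03:01): both suggestions can be sketched in a few shell commands. This is a minimal sketch, assuming EC2 API tools recent enough to include ec2-import-keypair; the key file name, key pair name, and AMI ID are placeholders.

```sh
# Option 1: load the Amazon-generated private key into your ssh agent,
# so "-i key.pem" is no longer needed for the rest of the session.
chmod 600 ~/.ssh/key.pem
ssh-add ~/.ssh/key.pem

# Option 2 (the approach from http://alestic.com/2010/10/ec2-ssh-keys):
# upload your existing default public key as an EC2 key pair, then launch
# instances with it, so a plain "ssh" works with ~/.ssh/id_rsa.
ec2-import-keypair my-default-key --public-key-file ~/.ssh/id_rsa.pub
ec2-run-instances ami-xxxxxxxx --key my-default-key
```

With option 2 the private key never leaves your machine, which is part of why it is recommended over Amazon-generated keys.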
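On the "no route to host" problem around 10:31-10:34: the tools were still pointing at the old endpoint after the IP change. A minimal sketch of the fix, with a placeholder address; re-downloading and sourcing fresh credentials from the admin interface achieves the same thing:

```sh
# Point euca2ools at the cloud controller's new address (placeholder IP).
export EC2_URL=http://192.0.2.10:8773/services/Eucalyptus
export S3_URL=http://192.0.2.10:8773/services/Walrus
euca-describe-images   # should now respond instead of "no route to host"
```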
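For the bundle question at 10:47-10:54, a sketch of the round trip; bucket, manifest, and directory names are examples, and exact option names may vary between euca2ools versions:

```sh
# Fetch the manifest and the 10MB parts from the bucket, then reassemble
# (decrypt, concatenate, decompress) them into a single image file.
mkdir parts
euca-download-bundle -b mybucket -m myimage.img.manifest.xml -d parts
euca-unbundle -m parts/myimage.img.manifest.xml -s parts -d .
# result: ./myimage.img
```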
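On the boto.connect_ec2() question at 11:58: port is the TCP port of the API endpoint, and a UEC/Eucalyptus front end listens on 8773, whereas Amazon EC2 uses the standard HTTPS port. A minimal Python sketch against a UEC cloud; the host, credentials, and region name are placeholders:

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder endpoint and credentials for a UEC/Eucalyptus cloud.
region = RegionInfo(name="eucalyptus", endpoint="192.0.2.10")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,            # UEC's default API endpoint is plain HTTP
    region=region,
    port=8773,                  # the "port" parameter asked about above
    path="/services/Eucalyptus",
)
print(conn.get_all_images())    # same query euca-describe-images makes
```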
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
[16:01] hey all
[16:17] samuel: hey
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
[21:15] is UEC the only distribution able to make use of the POWERSAVE option? since I get powernap error 255 out of the box, is there anything I need to change in the default config to make it work in maverick? is apparmor aware of powernap policies?
[21:25] smoser: hi!
[21:25] smoser: do you know if someone has looked at hosting an apt repository in S3?
[21:26] smoser: given that S3 is accessible via HTTP and that apt uses HTTP to download files, I wonder if an apt repository could be hosted in S3?
[21:27] i had read something about this at one point
[21:27] iirc there was some issue with it, but i don't really recall what. i only went as far as seeing someone say they had issues...
[21:27] but don't let that stop you from trying
[21:28] https://github.com/kyleshank/apt-s3 might have more info
[21:28] hm.. that seems to do authenticated s3... so maybe unauthed s3 "just works"
[23:08] mathiaz: I think S3 should be fine for an apt repository as long as you can get the files uploaded in a timely manner and detect which files need to be updated on each refresh (following the correct order).
[23:09] mathiaz: I tried a couple of years ago using s3fs (and rsync?). I gave it up because my initial upload was taking forever and RightScale (then Canonical) started mirroring Ubuntu repositories on EC2 instances.
[23:15] erichammond: right
[23:15] erichammond: I'm looking at hosting my own packages as part of the infrastructure
[23:15] erichammond: and I would also need private access to S3
[23:15] erichammond: as some of the packages may include private data
[23:16] erichammond: for a public repository I think that a normal apt repo can just be mirrored to an S3 bucket
[23:16] erichammond: if file names in the Packages.gz files are relative
[23:16] erichammond: it should even be possible to mirror a pool/ dists/ layout in S3?
[23:16] erichammond: if S3 supports sub-directories
[23:18] mathiaz: S3 supports "keys" where the key can include slashes. This means that you can emulate subdirectories as far as HTTP goes, and many tools like s3fs and web UIs also present a subdirectory paradigm.
[23:18] erichammond: that's great - so it should be possible to host a public apt repository in S3, following the same structure as a local repo
[23:19] mathiaz: I believe so, yes.
[23:19] erichammond: however, providing a private repo in S3 may be a bit more complicated
[23:19] It won't auto-generate subdirectory listings or replace /dir/ with /dir/index.html, but I don't think apt repositories require those.
[23:19] erichammond: correct - apt doesn't require those
[23:20] mathiaz: Using the HTTP(S) protocol, the only way you could get privacy would be to pick a very long, random bucket name and always use SSL.
[23:20] The bucket name would effectively be your password.
[23:24] Not a highly recommended approach, but I toss it out for consideration as it may work for some situations.
[23:25] erichammond: right - smoser pointed out an apt method that supports authentication
[23:25] erichammond: using an access id and a secret key
[23:25] mathiaz: Yep, I saw that when it came out (2008) but haven't tried it.
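The S3-as-apt-repository idea discussed above can be sketched end to end. This assumes s3cmd (any uploader that can set public-read ACLs would do) and uses example bucket and suite names; the last part is the "long random bucket name over SSL" trick from 23:20, with the same caveats mathiaz gives:

```sh
# Mirror a standard dists/ + pool/ apt tree into a bucket; S3 keys can
# contain slashes, so the repository layout maps onto keys directly.
s3cmd mb s3://my-apt-repo
s3cmd sync --acl-public dists/ s3://my-apt-repo/dists/
s3cmd sync --acl-public pool/  s3://my-apt-repo/pool/

# Client side, in /etc/apt/sources.list (example suite "lucid"):
#   deb http://my-apt-repo.s3.amazonaws.com lucid main
#
# "Private" variant: a long random bucket name, fetched over SSL only,
# so the bucket name effectively acts as the password:
#   deb https://s3.amazonaws.com/<long-random-bucket> lucid main
```

Uploading dists/ after pool/ would better match the "correct order" caveat at 23:08, since clients read the indices in dists/ first and then fetch packages from pool/.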