liam | what's the difference between the .pem key you get with your instance and the access keys, X.509 certs and key pairs that are on your account access credentials page in aws? | 02:46 |
---|---|---|
liam | does anyone know if, when you do the api tools steps outlined here https://help.ubuntu.com/community/EC2StartersGuide, you can ssh in to your ec2 instance without having to use the command "-i key.pem" ?? | 02:49 |
flaccid | liam: http://docs.amazonwebservices.com/AWSSecurityCredentials/1.0/AboutAWSCredentials.html | 02:49 |
flaccid | liam: no, a private key is required by default with images | 02:50 |
flaccid | you might like to add the private key to your client ssh | 02:50 |
liam | the private key is the .pem right? | 02:50 |
flaccid | i would assume so | 02:50 |
liam | flaccid: I still can't find how to add it to the ssh client... | 02:51 |
flaccid | rtfm | 02:51 |
liam | is it just ssh-add key.pem?? | 02:51 |
erichammond | liam: Instead of having Amazon generate an ssh key, you can simply upload your default public key to avoid needing "-i" | 03:00 |
erichammond | liam: I wrote about it here: http://alestic.com/2010/10/ec2-ssh-keys | 03:00 |
liam | erichammond: thank you. | 03:01 |
erichammond | This is my recommended way of working with EC2 now. There's no need to have Amazon generate ssh keys any more. | 03:01 |
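
A hedged aside on the approach erichammond describes (assuming a boto version recent enough to expose ImportKeyPair): the same thing can be scripted, so an existing public key is registered once and `-i key.pem` is never needed. The key name and file path below are placeholders, not values from the discussion.

```python
# Sketch only: register an existing local public key with EC2 so that
# instances launched with it accept ~/.ssh/id_rsa and "ssh -i key.pem"
# is no longer required. Key name and path are illustrative.
import os
import boto

conn = boto.connect_ec2()  # credentials come from the environment / boto config

with open(os.path.expanduser('~/.ssh/id_rsa.pub')) as f:
    public_key = f.read()

# Depending on the boto version, the key material may need to be
# base64-encoded before being passed to import_key_pair.
conn.import_key_pair('my-default-key', public_key)

# Then launch with e.g.: ec2-run-instances --key my-default-key ami-xxxxxxxx
```
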
flaccid | sorry that i am short today. gotta do a release in 2 hours | 03:01 |
erichammond | flaccid: I often say nothing to avoid saying nothing nice :) | 03:02 |
flaccid | true | 03:02 |
erichammond | #ubuntu-* channels have higher standards: https://wiki.ubuntu.com/IRC/Guidelines | 03:06 |
erichammond | It's one of the things that drew me to Ubuntu | 03:06 |
erichammond | besides the fact that it just worked better and didn't break when upgrading. | 03:07 |
flaccid | lol | 03:07 |
flaccid | i wasted 3 years with ubuntu, i aint going back | 03:07 |
erichammond | and the software packages were more recent versions | 03:08 |
erichammond | and... | 03:08 |
flaccid | i have much higher standards than ubuntu | 03:08 |
progre55 | hi guys. Is there a way to re-register (deregister - register) walrus without losing any of the already uploaded images, kernels, etc? | 09:54 |
TeTeT | progre55: never tried that, but I don't think that this is possible :( | 10:30 |
progre55 | TeTeT: I've managed to change it, from the admin web ui, but still, when I say "euca-describe-images" I get "no route to host" after some time.. | 10:31 |
progre55 | TeTeT: Oh, I hadn't changed the environment variables after I changed the IP address =) | 10:33 |
TeTeT | progre55: you probably need to d/l new credentials from the admin interface | 10:33 |
progre55 | TeTeT: it's working now, after I changed the EC2_URL env. variable to the correct one =) | 10:34 |
progre55 | thanks | 10:34 |
TeTeT | progre55: was merely reading, didn't do anything | 10:35 |
progre55 | TeTeT: but still, it's nice to know that there are people trying to help you =) | 10:36 |
TeTeT | :) | 10:37 |
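
For context on the fix above, a minimal sketch of why EC2_URL mattered: euca2ools and boto-based clients send their API calls to whatever endpoint that variable (set by the downloaded eucarc) points at, so a stale address gives exactly the "no route to host" error seen here. The environment variable names follow the usual eucarc conventions; the code is illustrative, not the actual euca2ools source.

```python
# Illustrative sketch: build a client connection from eucarc-style
# environment variables so the endpoint always follows EC2_URL.
import os
from urlparse import urlparse  # Python 2, matching boto of this era

import boto
from boto.ec2.regioninfo import RegionInfo

url = urlparse(os.environ['EC2_URL'])  # e.g. http://<cloud-controller-IP>:8773/services/Eucalyptus

conn = boto.connect_ec2(
    aws_access_key_id=os.environ['EC2_ACCESS_KEY'],
    aws_secret_access_key=os.environ['EC2_SECRET_KEY'],
    is_secure=(url.scheme == 'https'),
    region=RegionInfo(name='eucalyptus', endpoint=url.hostname),
    port=url.port or 8773,
    path=url.path or '/services/Eucalyptus',
)

images = conn.get_all_images()  # the call behind euca-describe-images
```
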
progre55 | TeTeT: a question, if I may =) | 10:46 |
progre55 | I have downloaded one of my images using "euca-download-image" and it's a bunch of 10Mb files with a manifest. how can I convert it to a single image file now? | 10:47 |
progre55 | oops, i.e. euca-download-bundle | 10:49 |
progre55 | oh, euca-unbundle, I guess | 10:52 |
TeTeT | progre55: yes, euca-unbundle | 10:54 |
progre55 | TeTeT: thanks =) | 10:54 |
TritoLux | hi there, just out of curiosity.. is UEC the only distribution able to make use of the POWERSAVE option? | 10:58 |
gijo | what is the port parameter in boto.connect_ec2()? | 11:58 |
TeTeT | gijo: probably 8773 | 12:16 |
TeTeT | gijo: for UEC that is, don't know for Amazon EC2 | 12:17 |
gijo | thx | 12:18 |
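
A short, hedged illustration of what that parameter controls (credentials and address below are placeholders): port is just the TCP port of the API endpoint, 8773 for a UEC/Eucalyptus front end, while for Amazon EC2 boto's defaults can normally be left alone.

```python
# Sketch with placeholder credentials and endpoint: point boto at a
# UEC/Eucalyptus cloud, which listens on port 8773 rather than 443.
import boto
from boto.ec2.regioninfo import RegionInfo

conn = boto.connect_ec2(
    'MY_ACCESS_KEY', 'MY_SECRET_KEY',                              # placeholders
    is_secure=False,
    region=RegionInfo(name='eucalyptus', endpoint='192.168.1.1'),  # hypothetical front-end address
    port=8773,
    path='/services/Eucalyptus',
)
```
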
TritoLux | that you know of, is UEC the only distribution able to make use of the POWERSAVE option? since I get powernap error 255 out of the box, is there anything I need to change in the default config to make it work in maverick? is apparmor aware of powernap policies? | 12:32 |
samuel | hey all | 16:01 |
kim0 | samuel: hey | 16:17 |
TritoLux | is UEC the only distribution able to make use of the POWERSAVE option? since I get powernap error 255 out of the box, is there anything I need to change in the default config to make it work in maverick? is apparmor aware of powernap policies? | 21:15 |
mathiaz | smoser: hi! | 21:25 |
mathiaz | smoser: do you know if someone looked at hosting an apt repository in S3? | 21:25 |
mathiaz | smoser: given that S3 is accessible via http and that apt uses http to download files, I wonder if an apt repository could be hosted in S3? | 21:26 |
smoser | i had read something about this at one point | 21:27 |
smoser | iirc there was some issue with it, but i don't really recall what. i only went as far as seeing someone say they had issues... | 21:27 |
smoser | but don't let that stop you from trying | 21:27 |
smoser | https://github.com/kyleshank/apt-s3 might have more info | 21:28 |
smoser | hm.. that seems to do authenticated s3... so maybe unauthenticated s3 "just works" | 21:28 |
erichammond | mathiaz: I think S3 should be fine for an apt repository as long as you can get the files uploaded in a timely manner and detect which files need to be updated on each refresh (following the correct order). | 23:08 |
erichammond | mathiaz: I tried a couple years ago using s3fs (and rsync?). I gave it up because my initial upload was taking forever and RightScale (then Canonical) started mirroring Ubuntu repositories on EC2 instances. | 23:09 |
mathiaz | erichammond: right | 23:15 |
mathiaz | erichammond: I'm looking at hosting my own packages as part of the infrastructure | 23:15 |
mathiaz | erichammond: and I would also need private access to S3 | 23:15 |
mathiaz | erichammond: as some of the packages may include private data | 23:15 |
mathiaz | erichammond: for a public repository I think that a normal apt repo can just be mirrored to an S3 bucket | 23:16 |
mathiaz | erichammond: if file names in the Packages.gz files are relative | 23:16 |
mathiaz | erichammond: it should even be possible to mirror a pool/ dists/ layout in S3? | 23:16 |
mathiaz | erichammond: if S3 supports sub-directories | 23:16 |
erichammond | mathiaz: S3 supports "keys" where the key can include slashes. This means that you can emulate subdirectories as far as HTTP goes, and many tools like s3fs and web UIs also present a subdirectory paradigm. | 23:18 |
mathiaz | erichammond: that's great - so it should be possible to host a public apt repository in S3 following the same structure as a local repo | 23:18 |
erichammond | mathiaz: I believe so, yes. | 23:19 |
mathiaz | erichammond: however providing a private repo in S3 may be a bit more complicated | 23:19 |
erichammond | It won't auto-generate subdirectory listings or replace /dir/ with /dir/index.html, but I don't think apt repositories require those. | 23:19 |
mathiaz | erichammond: correct - apt doesn't require those | 23:19 |
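
As a hedged sketch of the public-repository case being discussed (the bucket name and paths are invented), boto can mirror a local dists/ + pool/ tree into S3 by using object keys that contain slashes and marking each object public-read, so apt can fetch everything over plain HTTP:

```python
# Sketch only: mirror a local apt repository tree into S3, preserving the
# dists/ and pool/ layout via slash-containing keys, with public-read ACLs.
import os
import boto

conn = boto.connect_s3()
bucket = conn.create_bucket('my-apt-repo-example')   # hypothetical bucket name

repo_root = '/srv/apt'                               # local mirror holding dists/ and pool/
for dirpath, dirnames, filenames in os.walk(repo_root):
    for name in filenames:
        local_path = os.path.join(dirpath, name)
        key_name = os.path.relpath(local_path, repo_root)  # e.g. dists/maverick/Release
        key = bucket.new_key(key_name)
        key.set_contents_from_filename(local_path)
        key.make_public()                            # equivalent to ACL public-read

# Clients could then use a sources.list line along the lines of
#   deb http://my-apt-repo-example.s3.amazonaws.com maverick main
# (the URL form is illustrative).
```
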
erichammond | mathiaz: Using the HTTP(S) protocol, the only way you could get private would be to pick a very long, random bucket name and always use SSL. | 23:20 |
erichammond | The bucket name would effectively be your password. | 23:20 |
erichammond | Not a highly recommended approach, but I toss it out for consideration as it may work for some situations. | 23:24 |
mathiaz | erichammond: right - smoser pointed out an apt method that supports authentication | 23:25 |
mathiaz | erichammond: using an access id and a secret key | 23:25 |
erichammond | mathiaz: Yep, I saw that when it came out (2008) but haven't tried it. | 23:25 |
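
One further hedged note, separate from the authenticated apt method smoser linked: boto can also emit expiring signed URLs for objects in a private bucket, which shows that private S3 content can be fetched over HTTPS without relying on an unguessable bucket name. The bucket and key names below are invented, and this does not by itself plug into apt.

```python
# Sketch: time-limited signed URL for an object in a private bucket. The
# query string carries the access key id, expiry and signature, so anyone
# holding the URL can download the object until it expires.
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-private-apt-repo')                   # hypothetical bucket
key = bucket.get_key('pool/main/h/hello/hello_2.5-1_amd64.deb')   # illustrative key

url = key.generate_url(expires_in=3600)  # valid for one hour
print(url)
```
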