[02:25] <areay> hi all -- i'm using bash scripts for cloud-init, and essentially including files from within (cat << EOF > file)... it didn't take long before i hit the 16k limit. guides online suggest storing any auxiliary files in S3, but i've never used S3 and i'm unsure of how to access it from within an instance... is that even the best way of doing it?
[02:34] <flaccid> areay: i guess in your script used for cloud-init, do bash < <( curl -s https://mybucket.s3.amazonaws.com:443/myscript.bash ) 2>&1 | tee /var/log/install.log
[02:42] <areay> flaccid, thanks man :) i just realized i meant WS3 and not S3 tho... but just the url you've given me there tells me a lot about Walrus in general and i understand a lot more now
[02:42] <flaccid> okies
[02:42] <areay> am i right in assuming i can use the same kind of url to access a walrus bucket?
[02:42] <flaccid> i assume so
[02:43] <areay> awesome, i'll get on it... thanks
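A minimal sketch of the pattern flaccid suggests, adapted for Walrus: Eucalyptus exposes its S3-compatible Walrus API on port 8773 under /services/Walrus, so the auxiliary script lives at a bucket URL on the front-end. The host address, bucket, and key names here are placeholders, and this assumes the object is publicly readable (otherwise requests must be signed).

```shell
# Fetch an auxiliary script from a Walrus bucket at boot, then run it.
# WALRUS_HOST, BUCKET, and KEY are placeholders -- substitute your own
# front-end address and object names.
WALRUS_HOST="192.168.1.1"   # placeholder cloud front-end address
BUCKET="mybucket"           # placeholder bucket name
KEY="setup.bash"            # placeholder object key
URL="http://${WALRUS_HOST}:8773/services/Walrus/${BUCKET}/${KEY}"
echo "$URL"
# At boot, stream the script straight into bash and log the output:
# bash < <( curl -s "$URL" ) 2>&1 | tee /var/log/install.log
```

This keeps the cloud-init user-data itself well under the 16k limit: it only needs to contain the one-line fetch, not the included files.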
[07:19] <areay> hi all -- i've set MAX_CORES="16" in eucalyptus.conf on my node, and restarted both eucalyptus-nc and eucalyptus-cc but euca-describe-availability-zones verbose still shows the same number of available instances... i haven't yet reached the limit but i'm assuming this means that my MAX_CORES setting didn't work :/
[07:29] <TeTeT> areay: did you try a $ sudo restart eucalyptus-nc CLEAN=1 on the NC?
[07:31] <areay> TeTeT, yup... and when that didn't work i did a sudo restart eucalyptus-cc on the CC :/
[07:32] <areay> not sure if that would have made any difference... i also tried adding it to the eucalyptus.conf on the CC which didn't work either
[07:33] <TeTeT> areay: that's weird, if you do a 'grep MAX_CORES /etc/eucalyptus/*' what is the result?
[07:33] <TeTeT> on the NC
[07:34] <areay> TeTeT, just one result: /etc/eucalyptus/eucalyptus.conf:MAX_CORES="16"
[07:39] <areay> should i try it without the quotes maybe?
[07:40] <TeTeT> areay: let me check my host
[07:40] <areay> kk thx
[07:43] <TeTeT> areay: nope, I have it in quotes as well, see /etc/eucalyptus/eucalyptus.conf:MAX_CORES="32"
[07:43] <areay> i'm using lucid -- does that make any difference to any of this?
[07:44] <TeTeT> areay: i'm with this cloud on maverick, but I have others running on Lucid and it worked for me
[07:45] <TeTeT> areay: are you positively sure that you changed the MAX_CORES on the right NC? just to be 100% certain
[07:47] <areay> yeah definitely -- i just have one NC (i will add more if and when i get this working ;)
[07:54] <TeTeT> areay: the only remaining restart that came to my mind would 'sudo restart eucalyptus CLEAN=1' on the front-end, but it should not be needed
[07:55] <TeTeT> areay: and are you sure you have enough disk and RAM on the NC, so it can hold the 16 VMs? UEC will use the lowest of those numbers, e.g. I have a NC with MAX_CORES="32" but I see only 30 available instances, as I don't have enough disk space
[07:56] <areay> ah fair enough, i'll investigate... i even restarted both machines so i'm guessing it's to do with RAM or disk space
[07:56] <areay> i would have expected it to change at least slightly though
[08:04] <TeTeT> areay: note that 'sudo restart eucalyptus CLEAN=1' is more thorough than a cold reboot!
[08:09] <areay> TeTeT, tried it, no luck :/ i'm still being told i can only have 3 m1.small instances (i'm guessing that's because the host is using one of the four available physical cores)
[08:17] <TeTeT> areay: weird, what's your memory and diskspace on the NC? 3 is an odd number, if you have a quad core it should be 4
[08:18] <areay> TeTeT, disk space is 291gb, memory is a measly 1gb
[08:21] <TeTeT> areay: what's the mem size of m1.small? 256 or 312 mb? Try to change that to 64 in the web console and see if suddenly you have more instances
[08:22] <areay> TeTeT, m1.small is 192mb -- i'll change it now though to see what i get
[08:23] <areay> TeTeT, ahhh it was RAM... lol - just got 11 available instances :)
[08:27] <TeTeT> areay: ah! finally :)
[08:30] <areay> TeTeT, thx :)
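The capacity rule TeTeT describes can be sketched as a min over per-resource limits. FREE_RAM_MB=740 is an assumption (the NC has 1 GB total but the host reserves some of it), chosen because it is consistent with both counts seen in the log: 3 instances at 192 MB and 11 at 64 MB.

```shell
# Sketch: available instances on a node = the lowest of the per-resource
# limits (here just MAX_CORES vs guest-usable RAM; disk works the same way).
avail() {
    local cores=$1 free_ram_mb=$2 vm_ram_mb=$3
    local by_ram=$(( free_ram_mb / vm_ram_mb ))
    echo $(( cores < by_ram ? cores : by_ram ))
}

echo "m1.small at 192 MB: $(avail 16 740 192) instances"   # 3, matching the log
echo "m1.small at  64 MB: $(avail 16 740 64) instances"    # 11, matching the log
```

This is why changing MAX_CORES alone had no visible effect: with only ~740 MB of guest-usable RAM, the RAM limit was always the binding one.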
[18:00] <mathiaz> smoser: hi!
[18:01] <smoser> yo
[18:01] <mathiaz> do the latest ec2 images for 10.04 support kernel upgrade?
[18:02] <smoser> if launched with proper pv-grub kernel
[18:03] <smoser> https://lists.ubuntu.com/archives/ubuntu-cloud/2010-December/000466.html
[18:03] <smoser> and the latest dailies have pv-grub by default
[18:04] <mathiaz> smoser: great - thanks!
[18:05] <smoser> mathiaz, if you run with pv-grub and find bugs, please, please let me know.
[18:05] <smoser> i'm about to pull the trigger on pv-grub by default
[18:05] <mathiaz> smoser: sure!
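For images that don't yet default to pv-grub, the launch smoser describes means passing a pv-grub AKI explicitly. A hedged sketch with ec2-api-tools; the AMI and AKI IDs are placeholders, and the correct pv-grub AKI for your region comes from the announcement smoser linked above.

```shell
# Launch a 10.04 image with a pv-grub kernel so in-instance kernel upgrades
# take effect on reboot. Both IDs below are placeholders.
AMI="ami-xxxxxxxx"          # placeholder Ubuntu 10.04 image ID
PVGRUB_AKI="aki-xxxxxxxx"   # placeholder region-specific pv-grub kernel ID
CMD="ec2-run-instances $AMI --kernel $PVGRUB_AKI --key mykey"
echo "$CMD"
# Run the command above once real IDs are substituted; the latest dailies
# already default to pv-grub, so --kernel is only needed for older images.
```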
[23:19] <terje> hi, I'm having an issue with an ec2 instance I've built
[23:20] <terje> or image, rather
[23:20] <terje> I can't really find an aws or ec2 channel - so I thought I'd drop by here and ask.
[23:20] <terje> when I launch my instance, and specify the key to use, nothing ends up in my /root/.ssh/authorized_keys file.
[23:21] <flaccid> how did you check that?
[23:29] <terje> I have a user account I created
[23:30] <terje> and gave that user sudo access
[23:32] <flaccid> there should be a service in init that fetches the public key and puts it in authorized_keys. the ami is not designed for plain password authentication with ssh, you'd have to change sshd_config
[23:45] <terje> well, I did that.
[23:45] <terje> I thought that when you start an AMI, it asks which keys you wish to start it with
[23:45] <flaccid> i don't even know what ami you are using.
[23:46] <terje> and it somehow injected those keys into your authorized_keys file
[23:46] <terje> I'm using an AMI that I created myself.
[23:46] <flaccid> yes and you keep the private key, aws/ec2 doesn't store that for you
[23:46] <flaccid> well thats why...
[23:46] <terje> yes, I have that key.
[23:46] <terje> so I have to do something when the thing boots, curl from 169...
[23:46] <terje> > authorized_keys
[23:46] <flaccid> pretty much
[23:46] <terje> got it, thanks.
[23:47] <flaccid> np
[23:47] <flaccid> i wrote an lsb-compliant script, i'll get a link
[23:47] <flaccid> vmbuilder essentially does this which is what they build the official images with
[23:49] <flaccid> terje: https://rightscale-services.s3.amazonaws.com:443/scripts%2Finit%2Fgetsshkey.rc.debian.bash
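The boot-time fetch terje alludes to ("curl from 169... > authorized_keys") can be sketched as below: the EC2 metadata service at 169.254.169.254 serves the public half of the keypair the instance was launched with, and an init script appends it to root's authorized_keys. Paths and permissions follow common convention; error handling is minimal.

```shell
# Pull the launch keypair's public key from the EC2 metadata service and
# install it for root. Run as root, e.g. from an init script.
MD="http://169.254.169.254/latest/meta-data"
KEY_URL="$MD/public-keys/0/openssh-key"
echo "$KEY_URL"
# At boot:
# mkdir -p /root/.ssh && chmod 700 /root/.ssh
# curl -sf "$KEY_URL" >> /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/authorized_keys
```

This is essentially what the official images' init service (and flaccid's script) do, which is why a self-built AMI without such a script ends up with an empty authorized_keys.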