[06:57] <Makere> So I downloaded maverick-server-uec-amd64.tar.gz from http://uec-images.ubuntu.com/maverick/current/ and used uec-publish-tarball with --resize 8G
[06:57] <Makere> It's stuck on pending when I'm trying to launch it
[06:57] <Makere> any tips?
[06:57] <Makere> I don't see what I'm doing wrong
[07:00] <Makere> writing GET/GetDecryptedImage output to /var/lib/eucalyptus/instances//admin/i-51A60A11/kernel is the last thing I see about it in nc.log
[07:01] <Makere> after that it just keeps doDescribeInstances and gives info about it
[07:01] <Makere> I get similar behaviour with my own image
[07:02] <Makere> but the UEC image works without modifications
[08:52] <Makere> after some time I get EXPIRED
[09:59] <kiall> Makere, I haven't seen that before .. but I presume the instance type you're using is allowed 8GB of disk?
[10:00] <kiall> (I have no idea what the defaults are anymore .. but I seem to remember the smaller instance types being about 5GB)
[10:08] <Makere> yea
[10:08] <Makere> adjusted the amounts
[10:09] <Makere> also tried launching with larger
[10:11] <kiall> so do you see a kvm process start up? (or the dd process preparing the disk image before that..)
[10:11] <kiall> (or xen)
[10:12] <kiall> wait .. this is #ubuntu-cloud not #eucalyptus .. of course it's kvm ;)
[10:12] <Makere> didn't actually check this time, but last time it did this, no kvm process
[10:14] <Makere> I'll check inside an hour
[12:57] <Makere> "hour"
[12:57] <Makere> no kvm processes lol
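For reference, the check kiall describes can be scripted on the node controller like this (a hedged sketch; the instance id and the standard UEC log path are taken from the messages above):

```shell
# On the node controller: did kvm (or the dd disk-preparation step) ever start?
pgrep -lx kvm || echo "no kvm process running"
pgrep -lx dd  || echo "no dd process running"
# Tail the NC log for the instance id seen earlier (standard UEC log path).
grep i-51A60A11 /var/log/eucalyptus/nc.log 2>/dev/null | tail -n 5 || true
```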
[13:20] <kevinw> how does one go about de-registering a node controller in euc?
[13:37] <kevinw> let me rephrase that, i see the node controller registered but with 0 available vm's, whats causing this?
[13:43] <Makere> anything in cc/nc/registration logs?
[13:46] <kevinw> nope
[13:47] <Makere> well I recommend giving up on cloud now while you can
[13:53] <kiall> lol ...
[13:55] <kiall> kevinw, what have you got in /var/log/eucalyptus/euca_test_nc.log ?
[13:55] <kiall> (for total memory and cores)
[14:00] <kevinw> total_memory=1026
[14:00] <kevinw> nr_cores=2
[14:09] <kevinw> i see the node as registered: root@cloudcc1:/var/log/eucalyptus# euca_conf --list-nodes
[14:09] <kevinw> registered nodes:
[14:09] <kevinw>    213.xxx.xxx.xx  cluster1
[14:09] <kevinw> root@cloudcc1:/var/log/eucalyptus#
[14:35] <mcella> hi there, we are using ubuntu amis on ec2
[14:36] <mcella> what's the proper/best way to launch new instances with a custom hostname?
[14:36] <mcella> we need a predictable hostname for our own applications (+ rabbitmq)
[14:39] <kiall> mcella, you can "fudge" the hostname with RabbitMQ ..
[14:39] <kiall> HOSTNAME=test / NODENAME=rabbit@test in /etc/rabbitmq/rabbitmq.conf gives a fudged hostname of "test" to rabbitmq
[14:40] <kiall> (assuming you have test in your hosts file or DNS)
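Spelled out, the fudge kiall describes (in this era the file is sourced as shell-style environment variables by the rabbitmq init scripts):

```shell
# /etc/rabbitmq/rabbitmq.conf
# "test" must also resolve via /etc/hosts or DNS for the node to start.
HOSTNAME=test
NODENAME=rabbit@test
```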
[14:41] <mcella> kiall: that's interesting, but we still require a known hostname for our own apps :-/
[14:42] <kiall> DNS and an elastic IP will give you that ..
[14:43] <kiall> if you set the hostname on the server to microsoft.com, you don't start getting MS's traffic .. so an elastic (static) IP and a DNS record like server.yourdomain.com pointing at it will work ..
[14:44] <mcella> kiall: yep, but our instances are short lived, we need to start and stop them for demonstration purpose, so we decided to not use elastic ips
[14:45] <mcella> it would cost too much
[14:45] <mcella> we are also exploring route 53
[14:46] <kiall> then there really isn't much you can do - without an elastic IP, there is no standard way for one instance to determine the dynamic IP of another .. You'd have to set up dynamic DNS, or push the IPs to specific S3 keys, or something like that ..
[14:46] <kiall> but only dynamic DNS will help with a consistent hostname ..
[14:50] <kiall> ah wait .. I read S3 there .. not Route 53, their new service
[14:50] <kiall> I'm betting that can help you now that you mention it .. been meaning to read up on it
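One hedged sketch of the dynamic-DNS route kiall mentions: at boot, the instance reads its own public address from the EC2 metadata service and hands it to whatever DNS-update mechanism you use (that last step is provider-specific and omitted; demo.domain.com is a placeholder, not a name from the channel):

```shell
# Query the EC2 metadata service for this instance's public IP.
# Off EC2 the call fails, so fall back to a placeholder value.
PUBLIC_IP=$(curl -s --max-time 2 http://169.254.169.254/latest/meta-data/public-ipv4 || true)
PUBLIC_IP=${PUBLIC_IP:-unknown}
# Record the mapping to push to DNS (nsupdate, Route 53 API, etc.).
echo "demo.domain.com -> $PUBLIC_IP" | tee /tmp/ddns-update.txt
```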
[14:51] <mcella> kiall: yeah, anyway we don't need intra instances communication
[14:52] <mcella> we just need a way to launch/create an instance and have it accessible at a known location (parametrized by the instance user data probably)
[14:52] <mcella> we have a base ami (built from the ubuntu one) to create new instances
[14:53] <mcella> another option is to set a static known hostname, but cloud-init will override it at first boot right? is there a way to avoid that?
[14:54] <kiall> by "known location", you mean a hostname like "demo.domain.com" I presume? You need DNS for that - be it static pointing at an elastic IP .. or dynamic pointing at a non-elastic IP ..
[14:54] <kiall> and by accessible, you mean you can point your browser at it?
[14:55] <kiall> (I'm assuming a web app here .. but you get the idea)
[14:57] <kiall> yea - cloud-init will override it .. I haven't looked, but I'm sure you can turn that off .. otherwise, you can just use cloud-init to set it to something after it's reset it
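A hedged sketch of both options kiall mentions, as cloud-config user data: set the hostname once via cloud-init and ask it not to keep resetting it ("demo01" and the launch command's ami/key are placeholders):

```shell
# Write cloud-config user data: set the hostname once, then preserve it.
cat > user-data.txt <<'EOF'
#cloud-config
hostname: demo01
preserve_hostname: true
EOF
# Then pass it at launch, e.g.:
#   ec2-run-instances ami-xxxxxxxx -k mykey --user-data-file user-data.txt
```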
[14:59] <mcella> kiall: exactly
[15:00] <mcella> kiall: at what stage does cloud-init run?
[15:01] <mcella> before or after rabbit?
[15:01] <mcella> I assume it runs as early as possible in the boot process right?
[15:09] <kiall> It's pretty early
[15:09] <kiall> rc.local -ish if i remember right
[15:10] <kiall> either way - adjusting the hostname on the instance is not required, nor will it help with accessing the instance from outside amazon, or from other instances ...
[15:12] <mcella> yep, that's true
[15:12] <kiall> It would help with RabbitMQ .. but it's an awful hack in my opinion .. Telling RabbitMQ to use a specific, additional hostname that's been planted in /etc/hosts (or DNS) is way less hacky...
[15:13] <kiall> (... instead of fudging the entire server .. you're only fudging RabbitMQ.. aka less hacky ;))
[15:13] <mcella> :-)
[17:18] <jmgalloway> does anyone know why I'm getting this error "bash: .ssh/authorized_keys: Permission denied" when I try to exchange keys from the cluster controller and the cloud controller?
[17:20] <RoAkSoAx> jmgalloway: probably something related to the permissions of authorized_keys in the CC
[17:20] <RoAkSoAx> i experienced something similar
[17:21] <jmgalloway> I'm doing a clean install and this step is failing
[17:21] <RoAkSoAx> jmgalloway: yeah it is that then
[17:22] <RoAkSoAx> jmgalloway: might wanna take a look at eucalyptus's authorized_keys file, and check that the ownership is set to the eucalyptus user
[17:23] <jmgalloway> ok let me look
[17:24] <RoAkSoAx> jmgalloway: i experienced exactly the same issue
[17:24] <RoAkSoAx> Daviey: ^^ might be of your interest
[17:24] <jmgalloway> it is owned by root
[17:24] <jmgalloway> -rw-r--r-- 1 root       root        396 2010-12-06 12:26 authorized_keys
[17:26] <jmgalloway> should I change "sudo -u eucalyptus ssh-copy-id -i ~eucalyptus/.ssh/id_rsa.pub eucalyptus@cc" to "sudo -u root ssh-copy-id -i ~eucalyptus/.ssh/id_rsa.pub eucalyptus@cc"
[17:29] <jmgalloway> or I could just chmod the permissions of the keys file..
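For reference, the usual fix here is the file's ownership and mode rather than running ssh-copy-id as root. A sketch, demonstrated on a scratch directory (on the real cloud controller the path is ~eucalyptus/.ssh and the owner should be the eucalyptus user, as RoAkSoAx says):

```shell
# Stand-in for ~eucalyptus/.ssh on the cloud controller.
SSH_DIR=/tmp/euca-demo/.ssh
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # sshd rejects lax directory modes
chmod 600 "$SSH_DIR/authorized_keys"  # keys file: owner read/write only
# On the real host, additionally fix the owner (needs root):
#   chown -R eucalyptus:eucalyptus ~eucalyptus/.ssh
stat -c '%a' "$SSH_DIR/authorized_keys"   # prints 600
```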