#ubuntu-cloud 2010-12-13
<liam> is there an ubuntu ec2 ami image with only 10gb hdd ebs that anyone can link?
<rsvp> liam, check http://cloud.ubuntu.com/ami
<asac> smoser: please let me know when there are new AMIs out after kernel landing for the high memory instances?
<asac> thx!!
<flaccid> asac: you have problem with current AMIs?
<asac> flaccid: yes ... kernel bug https://bugs.launchpad.net/ubuntu/+source/linux/+bug/651370
<uvirtbot> Launchpad bug 651370 in linux "ec2 kernel crash invalid opcode 0000 [#1]" [Medium,Fix committed]
<asac> for high memory instances
<flaccid> ah that one
 * asac would love to see a verification AMI being made for this bug ;)
<flaccid> smoser is most likely on the case
<asac> yeah ... talked about that with him
<asac> eventually it will go to -updates ... and then new AMIs to be made
<asac> i hope -proposed doesnt take much longer though :/
<asac> hmm ... i wonder if i need to tweak some sys/proc stuff to get _more_ file caching on high memory
<asac> IO is slow so i would like the full build tree to live in mem
<asac> btw, are there other cloud services with better IO that are nicely integrated with ubuntu?
<asac> i found storm and gogrid, but those don't seem very good API/tools-wise
<flaccid> rackspace is available
<asac> but rackspace doesnt really have better IO, right?
<flaccid> test it out i guess
<asac> http://blog.cloudharmony.com/2010/06/disk-io-benchmarking-in-cloud.html
<asac> yeah ...
<asac> i guess i will wait for high mem and put whole build on tmpfs ;)
<Evet> i want a whole server to be a single cloud node
<Evet> do i still need host os / guest os separation?
<kim0> Evet: The only supported configuration is 2 machines at least for a cloud, although some people have made it work on a single machine (sort of)
<Evet> kim0: i have multiple servers
<Evet> want to build one node per physical server
<kim0> Evet: the way it works is to have one cloud controller node (CLC) .. and all other boxes become Node Controllers (NCs) .. would that work for you ?
<Evet> kim0: yes
<kim0> Evet: great .. that's the standard setup :)
<kim0> Evet: https://help.ubuntu.com/community/UEC/CDInstall
<Evet> kim0: so can i create a 4gb ram node on a 4gb box?
<kim0> Evet: 4G VM on 4G physical box .. yeah should be ok I guess (3.5G or so would surely work) otherwise your bare-metal installation would be close to running out of ram
<Evet> hmm, max is 3.5
<kim0> Evet: I'm not saying that .. I'm just guessing .. feel free to try it
<smoser> asac, i'm really hoping that as soon as that kernel is through we'll get updated amis.
<smoser> asac, remember, you'll get better IO to instance store disk than EBS.
<smoser> also people play with raiding EBS volumes, which, depending on the report, can increase performance.
<Evet> is there an automatic solution to make slave nodes access public internet through cloud controller master?
<kim0> Hi folks .. Can two UEC installations talk to each other ? Check out this question http://ubuntuforums.org/showthread.php?t=1644440
<kiall> kim0, I dont believe that is possible .. And I don't believe it should be supported either :) .. What your probably looking for, is 1 cloud with a HA CLC+Walrus ...
#ubuntu-cloud 2010-12-14
<cuyler> trying to install
<cuyler> ubuntu 10.04.1 server edition on amd 64
<flaccid> sweet
<cuyler> yea been using desktop for a while tryin to learn somethin new
<cuyler> but i am getting a boot error along the lines of unknown keyword in configuration file gfxboot
<flaccid> which cloud?
<cuyler> front end clc
<cuyler> first machine
<flaccid> never heard of that cloud before
<smoser> cuyler, you get that when booting off a usb creator usb key ?
<cuyler> I'm trying to install the ubuntu server edition 64 .iso
<cuyler> yup
<cuyler> didnt see you there smoser
<cuyler> thats where i got it from
<flaccid> a usb key is a cloud?
<smoser> i think you can actually just type 'linux' or maybe even hit enter.
<smoser> its a usb-creator bug
<smoser> flaccid, he's trying to install UEC (i think)
<cuyler> yes i am
<flaccid> oh
<flaccid> is the boot error from the installation media or from booting the OS after install?
<cuyler> installation media
<smoser> http://ubuntuforums.org/showthread.php?p=9704603 is a hint.. there is a bug, but i don't see it
<flaccid> could be a bug, this is suited for #ubuntu . search launchpad bugs and consider the alternate cd
<smoser> i dont know how it was fixed, but the problem was that usb-creator uses the host OS's isolinux
<cuyler> thx- gona give it a shot
<smoser> and that maverick and lucid have different enough versions that there were problems creating one from the other (both ways)
<cuyler> haha thats a nice bug
<smoser> https://bugs.launchpad.net/ubuntu/+source/usb-creator/+bug/608382
<uvirtbot> Launchpad bug 608382 in usb-creator "Maverick images burned to usb key on lucid fail to boot - different syslinux version" [High,Fix released]
<smoser> it should be fixed though, if you're up to date on lucid
<smoser> (comment 42 there)
<smoser> cuyler, ^
<cuyler> thx i appreciate it
<cuyler> it doesnt seem to be fixed tho
<cuyler> im gona read up on it a little
<liam> any ideas why I wouldn't be able to reach my ec2 server? ssh times out and so does pinging the public dns...
<Makere> it's dead?
<Makere> dunno
<Makere> :p
<progre55> Hi guys. I had a cluster and a node functioning properly. But then the cluster and node addresses changed on the LAN (cluster: 172.16.4.30, node: 172.16.5.217) and now they dont even see each other. Any suggestions, please?
<Makere> so with how many nodes have you tried the cloud?
<Makere> general question, not aimed for anyone :)
<Makere> currently running with 38 nodes
<TeTeT> progre55: in general UEC won't work well in a DHCP environment, you want static IPs for all components
<TeTeT> progre55: for testing you probably can get away with using euca-register-* and euca-discover-nodes
<TeTeT> Makere: only got 2 nodes :( Do you run all of the 38 in one cluster or spread them over several clusters?
<Makere> one cluster
<Makere> mainly because secondary cluster installation is broken
<Makere> and all we are doing is running folding@home :p
<TeTeT> Makere: broken on the same LAN you mean? per the problem you reported a couple of days ago?
<TeTeT> he he
<Makere> ye
<Makere> http://fah-web.stanford.edu/cgi-bin/main.py?qtype=teampage&teamnum=197910 F@H statistics
<Makere> doesn't list all instances as active CPU's yet for some reason
<Makere> I guess it would be better with more clusters, now it seems that the network has slowed down to a crawl
<TeTeT> Makere: check the cluster controller for network I/O. I bet it's strained
<silencieux> hello?
<silencieux> I am trying to set up a private cloud for a school project and my noob backside is having a heck of a time.  Can anyone help me with an error:  Failed check! Invalidating registration: image-store-1289694986/ramdisk.manifest.xml
<nijaba> silencieux: it seems that the image checksum failed.  Indicates damage in download.  I guess you should try again
<silencieux> fun... reload the images then... well thanks a ton
<silencieux> hey have any of you been able to get a <gasp!> M$ Windows image to run in the cloud yet?
<silencieux> again... pardon my n00bness
<Makere> I think you can run them, but it's not officially supported in the free version
<Makere> http://www.youtube.com/watch?v=PATWTb0llSU <- something we made for the lulz at school project :D
<flaccid> "the best experience of cloud computing"
<flaccid> really?
<Makere> let's not go there
<Makere> it's there for a joke
<Makere> :D
<flaccid> you just did
<flaccid> oh
<Makere> a bad joke I may add
<flaccid> where can i read the project plan and project objectives? just looks like someone using uec to me
<Makere> the project plan is in finnish on the blog
<flaccid> what is the objective in 1 sentence in english?
<Makere> well the plan was to research UEC to see if it's worth teaching at school (university) and is there any value of it for the students
<flaccid> you needed a project for that?
<Makere> we're (hopefully) wrapping up the project tomorrow, presenting the end report and demoing the system to our Project instructor
<flaccid> ok, well good luck
<Makere> well the studies include a project related to windows, networking or linux
<Makere> and it was only an 8 week project
<flaccid> personally i would be making a course that is about teaching cloud computing fundamentals, then move in to private and public clouds. do eucalyptus and maybe at least 1 other private cloud and use UEC to be independent of linux distribution
<Makere> I think cloud computing is just not ready yet
<Makere> especially UEC
<Makere> too many problems, with not enough solutions
<flaccid> really? it's been ready for years. eucalyptus is still a bit young though. aws ec2 is not ..
<flaccid> UEC != cloud computing
<Makere> I know
<Makere> I still wouldn't put my entire server room into the EC2 cloud
<Makere> I must say that I don't have much experience on EC2
<Makere> so let me correct myself and say that UEC is not ready yet
<flaccid> yeah
<flaccid> many companies have moved prod or dev to the ec2 cloud and quite effectively. companies like zynga wouldn't be able to operate without it
<smoser> fyi: http://ubuntu-smoser.blogspot.com/2010/12/ubuntu-natty-cluster-compute-instances.html
<smoser> ubuntu cluster compute instances now available.
<liam> I cannot connect to my ec2 instance... It is created from the daily ubuntu ami release... Could it be that the instance can only be reached from the US?
<flaccid> connect with what?
<liam> flaccid: ssh and I can't even ping the server....
<flaccid> you need both those open in the security group used by the instance
<liam> flaccid: could you try to ping it at ec2-50-18-57-32.us-west-1.compute.amazonaws.com please
<flaccid> thats the internal iface
<flaccid> nobody can reach that remotely
<liam> flaccid: its says public dns...?
<flaccid> root@sandbox:/opt/rightscale# host ec2-50-18-57-32.us-west-1.compute.amazonaws.com
<flaccid> ec2-50-18-57-32.us-west-1.compute.amazonaws.com has address 10.160.119.152
<liam> flaccid: so can you reach the server?
<flaccid> liam: nobody can reach that remotely
<flaccid> what is in your security group?
<liam> flaccid: ok so I didn't have ssh in the security group.... that is probably why?
<flaccid> yes
<flaccid> same goes for ping
<liam> flaccid: ok thank you.
<flaccid> you should read up on security groups. its your nat firewall.
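[editor's note: flaccid's advice maps onto the boto library discussed later in this log. A minimal sketch, assuming a boto 2.x-style connection object; `ssh_and_ping_rules`, `open_group`, and the 'default' group name are illustrative, not an existing tool, and a real connection needs AWS credentials:]

```python
# Sketch only: open the two things liam was missing -- ssh (tcp/22) and
# ping (ICMP) -- in a security group. Names here are made-up examples.
def ssh_and_ping_rules(cidr='0.0.0.0/0'):
    """The two ingress rules an instance needs to answer ssh and ping."""
    return [
        dict(ip_protocol='tcp',  from_port=22, to_port=22, cidr_ip=cidr),  # ssh
        dict(ip_protocol='icmp', from_port=-1, to_port=-1, cidr_ip=cidr),  # ping (all ICMP types)
    ]

def open_group(conn, group_name='default'):
    # conn would be e.g. boto.ec2.connect_to_region('us-west-1') with credentials
    sg = conn.get_all_security_groups(groupnames=[group_name])[0]
    for rule in ssh_and_ping_rules():
        sg.authorize(**rule)
```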
<liam> yes I have used it before with other servers I just completely forgot....
#ubuntu-cloud 2010-12-15
<erichammond> http://cloud.ubuntu.com/ami/ is broken on my browser (chromium): "DataTables warning: JSON data from server failed to load or be parsed. This is most likely to be caused by a JSON formatting error."
<kim0> erichammond: It's not browser specific .. something seems to have gone wrong .. thanks for the heads up
<kim0> Can anyone help me answer that:   Will Eucalyptus 2.0.1 (upstream released Oct 20) be in Natty ?
<smoser> Daviey, ?
<Daviey> kim0: can't say for certain, but 'probably'... depending if there are any difficult regressions when one of us has a smoke test.
<Daviey> kim0: certainly will not have 2.1
<kim0> Daviey: thanks :)
<kim0> Daviey: Any luck thinking of some community contribution venues
<kim0> smoser: your input is much appreciated as well :)
<Daviey> kim0: smoser has an AMAZING idea.
<smoser> https://blueprints.launchpad.net/ubuntu/+spec/cloud-server-n-image-rebundle
<smoser> :)
<smoser> mostly seriously on that.  rebundling is a very important area of our images that isn't addressed in documentation or tools terribly well.
<TeTeT> smoser: we have an exercise on it in the UEC training
 * kim0 in a call
<kim0> but I would like to understand exactly what would contributors do ?
<kim0> is it documentation
<smoser> some is documentation, yes. but some is development of utilities for "cleaning" or making that process easier.
<kazade> hi smoser, I've just been asking some questions in #ubuntu-uk about Amazon EC2 and Daviey mentioned you might be able to help me (they aren't Ubuntu related, just general EC2 questions). you free?
<smoser> sure.
<kazade> basically, I have this running EC2 instance and I'd like to create a duplicate
<kazade> the EC2 instance has a single EBS root device..
<smoser> kim0, this is the type of thing that i'd like better covered
<smoser> (it almost smells like a setup)
<smoser> kazade, what release ? 10.10 ? 10.04 ?
<kazade> 10.04
<kazade> basically, if I click "Create Image" will it freeze/pause the running instance? and will that create me an image I can create a new EC2 vm with?
 * kazade is confused generally
<kim0> smoser: I'll ping ya in a 20 mins or so .. thanks a lot though! :)
<kazade> also, if I create a snapshot of the EBS device... would restoring that snapshot essentially "rollback" the EC2 vm?
<smoser> if you hit 'Create Image', that makes the 'CreateImage' (i think thats the api name) call , which can also be done via 'ec2-create-image'.
<smoser> it will stop the instance (by issuing 'shutdown'), and then snapshot the volume and create a new ami from that snapshot, then start the instance back up.
<smoser> this is probably what i would recommend you do. it is very easy and very effective.
<kazade> ok cool that sounds like exactly what I needed to know. How long would the image creation take normally?
<smoser> couple minutes
<smoser> i might advise that you shut down your instance (/sbin/poweroff) yourself
<smoser> it just seems cleaner to me, but i believe it does work the other way.
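[editor's note: the CreateImage flow smoser describes is a single call in boto, the library mentioned elsewhere in this log. A hedged sketch assuming a boto 2.x-style connection; the instance id and AMI name in the usage comment are made-up:]

```python
# Sketch of the CreateImage call smoser describes, given a boto-style
# connection (e.g. boto.ec2.connect_to_region('us-east-1') with credentials).
def clone_instance(conn, instance_id, name):
    # no_reboot=False lets EC2 issue a shutdown, snapshot the EBS root
    # volume, register a new AMI from the snapshot, then restart the instance.
    return conn.create_image(instance_id, name, no_reboot=False)

# e.g. ami_id = clone_instance(conn, 'i-12345678', 'lucid-web-clone')
```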
<kazade> smoser, thanks  just one final question: is taking a snapshot of the EBS device exactly equivalent to taking a snapshot in virtualbox (for example)
<smoser> i dont know what it means in virtualbox, but its pretty much a volume snapshot
<smoser> ie, it saves all data at the block device level
<kazade> what I mean is, if I reverted the snapshot and booted the EC2 vm, would it essentially rollback? (I mean, the EC2 wouldn't break or anything)
<smoser> did i answer your question ?
<smoser> the running instance is not affected by any snapshots of its root device
<kazade> right..
<smoser> you can even do them to a mounted filesystem
<kazade> if I restored a snapshot of the device though, what would that do to the running machine? (or is that not possible?)
<smoser> you cannot reset an existing volume's content to a snapshot you can only create new volumes from the snapshot.
<kazade> smoser, right ok, that makes sense
<kazade> Can an EC2 machine change it's root device?
<smoser> kazade, sort of.
<smoser> what do you want to do ?
<kazade> I just wondered if I could snapshot the EBS device, and then if I screwed up the VM, restore the EBS device
<kazade> (e.g. by pointing the EC2 machine at a newly restored snapshot)
<smoser> Daviey, if you see kazade, you can tell him that you can easily attach a volume of a snapshot to a running instance, mount that volume, diff it against /, and sync contents that way.
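[editor's note: smoser's recovery path, sketched with boto 2.x call names; you cannot roll an existing volume back to a snapshot, but you can build a new volume from it and attach that. The device name and ids are illustrative assumptions:]

```python
# Sketch: create a fresh volume from a snapshot and attach it to a running
# instance, so its contents can be mounted and diffed/rsynced against /.
def restore_from_snapshot(conn, snapshot_id, zone, instance_id, device='/dev/sdf'):
    # size is left as None: EC2 sizes the new volume from the snapshot
    vol = conn.create_volume(size=None, zone=zone, snapshot=snapshot_id)
    conn.attach_volume(vol.id, instance_id, device)
    return vol.id

# then, inside the instance: mount the device somewhere and diff against /
```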
<kim0> Hi everyone, anyone new around here ? say Hi please
<daker> poor kim0
<kim0> daker: It takes some effort man :)
<kim0> However I am persistent hehe
<daker> yes
<kim0> Daviey: must the CLC be configured to NAT packets manually ? i.e. is that not part of setup ?
<kim0> Daviey: I'm trying to answer http://ubuntuforums.org/showthread.php?t=1646048 .. where basically NC doesn't have internet connectivity
<kim0> kirkland: ^
<kim0> smoser: Hi! so you're basically proposing people attack https://blueprints.launchpad.net/ubuntu/+spec/cloud-server-n-image-rebundle
<smoser> sure.
<kim0> smoser: did you ever blog or write something explaining what needs to be done
<kim0> manually
<smoser> well if you have a way they can do it automatically (as opposed to manually) i'm happy to hear it!
<kim0> what I'm saying is .. did you ever explain the manual steps, so that I have a better understanding of what needs to be done
 * kim0 reading https://wiki.ubuntu.com/CloudServerNattyCloudUtils
<DigitalFlux> kim0: Interesting
<DigitalFlux> smoser was talking about it i think today at #openstack
<kim0> yeah interesting indeed :)
<smoser> kim0, no. not really. theres a bunch of things covered there.
<kim0> smoser: by first impression .. that's a whole lot of things :) can u pick an important one or two scenarios to tackle
<kim0> I guess downloading ec2 images .. boot in kvm .. modify .. republish
<smoser> i've probably covered many of them before on posts to ec2ubuntu and such
<kim0> smoser: what about if this "cleaning" utility records files at ~ while booting .. and at cleaning stage presents the user with all newly created files to delete or keep
<kim0> we could have defaults based on regex matches .. like for /var/log/* -> delete
<kim0> for /etc/ -> keep
<kim0> it may monitor the whole system .. not just ~
<kim0> don't know .. comments
<smoser> yeah, its really crappy and heuristic
<smoser> so it needs config options
<smoser> and its going to be wrong in many cases in both ways
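[editor's note: kim0's regex-defaults idea can be sketched in a few lines of Python; all of the patterns and the `ask` fallback here are illustrative assumptions, not an existing tool, and as smoser says any such defaults will be wrong in both directions without config options:]

```python
# Sketch of kim0's "cleaning" defaults: classify each recorded path as
# delete/keep/ask based on a configurable, ordered rule list.
import re

DEFAULT_RULES = [
    (re.compile(r'^/var/log/'), 'delete'),  # logs: safe to wipe before rebundling
    (re.compile(r'^/etc/'),     'keep'),    # config: keep by default
    (re.compile(r'^/root/\.'),  'ask'),     # root dotfiles: let the user decide
]

def classify(path, rules=DEFAULT_RULES, default='ask'):
    """Return the action for the first rule matching path, else the default."""
    for pattern, action in rules:
        if pattern.search(path):
            return action
    return default
```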
<kim0> smoser: what's the main purpose of uncloud-init
<kim0> can't find any proper page talking about that
<smoser> to hackily boot one of our images without an ec2 metadata service
<kim0> without cloud-init blocking for ages
<kim0> that once bit me :)
<smoser> that problem sucks in general
<smoser> there is no way to tell if i'm on EC2, and thus should wait for the metadata service
<smoser> but on EC2 (or on UEC), i *have* to wait for the metadata service, indefinitely.
<kim0> yeah!
<raymdt> hey
<kim0> raymdt: hey
<soren> smoser: This is my last working day of the year. If you want to chat about stuff, this is a good time :)
<smoser> hm..
<smoser> give me a couple minutes ?
<soren> smoser: Sure, I'm not going anywhere.
<soren> yet :)
<smoser> soren, ok
<smoser> so
<smoser> our cloud images, i *want* them to be bit for bit identical on different clouds.
<smoser> or as close to it as possible
<smoser> i want the download on uec-images.ubuntu.com to not need a bunch of "fixing" before its usable on EC2, UEC, openstack, and ideally "raw kvm/libvirt"
<smoser> but we have this metadata service thing, that i want/have-to wait around for in some cases (uec / ec2) and not in others ('raw kvm/libvirt')
<smoser> right now in the .tar.gz files, there is a -floppy file, which is a bootable floppy, and documentation that says how to boot it (boot with the floppy).
<soren> smoser: Right.
<smoser> the floppy then boots the kernel in the instance with special args telling it "hey, theres no cloud here"
<soren> smoser: What is the canonical use case that makes us have to wait for the meta-data service (rather than just handling it in the background once it turns up)?
<smoser> but even that is yucky. ideally, it would boot up, find that there was no metadata service or any other method of letting a user log in, and "uncloud" itself.
<smoser> well, right now it actually blocks boot.
<smoser> which allows the user to do things (via user data) when it runs
<smoser> some of these things might include modifying the system in ways that are difficult to do once it is further up
<smoser> ie, at that point you can still mount another volume over the top of /home or modify config of a server before it has started.
<smoser> so those things are valuable, and backgrounding would make them impossible (or require restart/reboot)
<soren> right, ok.
<soren> Do we have any data on how long it takes before the meta-data service turns up?
<smoser> we have annoyances
<soren> min/max/mean time sort of stuff?
<smoser> i would have marked this as not an issue sometime about 6 months ago on EC2.
<smoser> i have never really seen a "waiting for metadata service" message there.
<smoser> and i had even turned it down to something small, 20 seconds or something, maybe less.
<smoser> but UEC started failing, and we were told "you have to wait longer"
<smoser> so, wait longer we did.
<soren> :(
<smoser> in the vast majority of the cases, waiting more than 30 seconds was not that useful, i think, but dont have a lot of data on it. although we did at the time.
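[editor's note: the bounded wait being discussed can be sketched as a simple retry loop with a deadline. This is a minimal illustration, not the actual cloud-init code; the fetch/clock/sleep callables are injectable assumptions so the sketch runs anywhere:]

```python
# Sketch: poll the metadata service until it answers or a deadline passes.
# On EC2/UEC the fetch eventually succeeds; on raw kvm it never will, and
# the timeout is the only way out -- the tension smoser describes.
import time

METADATA_URL = 'http://169.254.169.254/2009-04-04/meta-data/instance-id'

def wait_for_metadata(fetch, max_wait=30, interval=2,
                      clock=time.monotonic, sleep=time.sleep):
    """Call fetch() until it succeeds or max_wait seconds elapse."""
    deadline = clock() + max_wait
    while True:
        try:
            return fetch()            # e.g. an HTTP GET of METADATA_URL
        except Exception:
            if clock() >= deadline:
                return None           # no metadata service: maybe not a cloud
            sleep(interval)
```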
<soren> smoser: So the difficult case is UEC.
<soren> smoser: Luckily, that's the one we control.
<smoser> mostly, yes, but then it came about because of EC2
<smoser> so UEC is feature compatible :)
<soren> Sure, sure.
<soren> How about we change UEC to pass a "Hi, I'm UEC. Please bear with me while I get off my a*** and get the meta-data service up and running for you" sort of kernel parameter?
<smoser> soren, well, i can somewhat control the kernel parameters that are passed
<smoser> however, the control i have over that is via /boot/grub/grub.cfg
<soren> Eh?
<smoser> which would ideally be read by kvm or openstack "full disk" or ...
<soren> UEC never boots an out-of-image kernel anymore?
<smoser> remember my 'loader' hack/feature in eucalyptus
<soren> I didn't think that was used all the time.
<smoser> it does, and in that case, sure, you could hard code a "you're running on UEC" parameter.
<soren> That's a start, at least.
<smoser> its not used all the time, but that is the "right way" going forward (or some other mechanism that is more 'normal' to allow guest to control its own kernel loading)
<smoser> so that solution is really only for "legacy"
<soren> Can't we ask the BIOS how it booted?
<soren> If it's from floppy, there's a 99.999% chance it's UEC.
<smoser> well, the way i have to run kvm without cloud for those right now is via a floppy also
<soren> Doh.
<smoser> that was the easiest way to inject kernel parameters
<soren> Wait, what?
<smoser> (the -floppy disk distributed is hacky)
<smoser> but it basically does: boot (hd0,1)/vmlinuz some-kernel-args NOCLOUD
<smoser> so its quite less than ideal.
<smoser> in natty cycle, i hope to have full disk images, and just say "kvm -hda disk.img"
<smoser> btw, i'm thankful for your input, please don't think i'm just shooting down ideas
<smoser> one thing i've considered is putting something into the image, which could be written to without understanding the filesystem
<smoser> and the providing a tool that could muck that bit
<kim0> smoser: what about providing different kernels, such that the user would choose a different kernel for a direct kvm boot
<smoser> well that would incur the overhead of maintaining 2 kernels.
<smoser> we recently got a *major* feature add when we dropped the -ec2 kernel
<soren> smoser: You've lost me. I thought you said you used grub.cfg for something.
<kim0> smoser: not really different kernels .. just 2 kernel entries .. booting same kernel with different params
<soren> smoser: (/me is catching up on the conversation... Many things going on at the same time)
<soren> smoser: re: "btw, i'm thankful for your input, please don't think i'm just shooting down ideas"
<soren> smoser: Not a problem at all.
<smoser> kim0, in which case i'd still have to get the user (or cloud) to select the kernel to boot.
<smoser> soren, currently we just ship a partition image
<kim0> smoser: defaults to cloud kernel .. and kvm user chooses a different one
<smoser> that partition image has no bootloader in it
<smoser> but, it has both /boot/grub/grub.cfg and /boot/grub/menu.lst in it
<smoser> grub.cfg is used if you boot with 'loader support' on UEC
<smoser> menu.lst is used if you boot with pv-grub on EC2
<smoser> then, there is a floppy disk in the .tar.gz file.
<smoser> when you boot off of that, it does a dirty hack and says "boot /vmlinuz from inside disk 1 partition 1, and pass 'nocloud' as a kernel parameter"
<kim0> what about a count down .. like Press N for no-cloud 5 4 3 2 1
<smoser> kim0, i'd like to avoid manual action on every reboot (even the first)
<kim0> so cloud kernel works ok for ec2 and uec .. only problem is, direct kvm case
<kim0> well u just have to detect run time env I guess then .. somehow
<kim0> no detectable info on ec2 xen store ?
<smoser> there is definitely stuff in the xen store, but not something published that i could rely on being there.
<smoser> the thing that is documented is the metadata service :)
<smoser> and even then... ideally.... the image is easily used as xen instance.
<smoser> i know that i'm asking a lot, but thas what the goal is... this disk image "just works"
<smoser> so far, your timeout on selection of non-EC2 cloud kernel entry is about the best i have.
<smoser> the issue with that is that
<smoser> a.) then you penalize the UEC boot by 5 seconds
<smoser> b.) that default isn't easily modifiable without mounting the image -- this is what i really want to stay away from.
<kim0> what about a no-penalty .. press shift while booting kinda thing ?
<euca> hi guys! I'm trying to run a private cloud with UEC and I want all my machines to have a static IP on the network. The VMs get an IP, but then they don't boot. There is a script that tries to obtain metadata from a server it can't reach, and so it just keeps looping.
<euca>  DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
<euca>  DataSourceEc2.py[WARNING]:   16:06:10 [ 1/100]: url error [timed out]
<kim0> lol :)
<kim0> haahaahh .. speak of the devil
<kim0> euca: how long did you wait for them to boot ?
<euca> I've read that this happens when the private IP is the same as the public IP.. but I can't seem to make them differ. Has anyone dealt with this? My cloud controller runs Ubuntu Server 10.10, the node runs CentOS, both with eucalyptus 2.0.0
<euca> kim0:  I let it stay overnight
<kim0> ew!
<kim0> why aren't you running one software stack in all ur cloud
<euca> kim0:  because Ubuntu Server failed to install on the node
<kim0> :)
<euca> and CentOS didn't
<kim0> euca: how did it fail
<euca> it didn't detect any storage
<euca> devices
<kim0> can you post a bug report please
<euca> well, I can, but this is sort of urgent for me
<kim0> hmm
<euca> I'm sure the underlying OS is not the issue
<kim0> try asking in #eucalyptus maybe
<smoser> euca, make sure you're not running a dhcp server on your network
<euca> smoser:  I don't have control over the network
<euca> but there is a dhcp server
<euca> on that network
<smoser> then you really can't run eucalyptus
<smoser> the guests will dhcp, which gets put onto the network of the host
<smoser> there are 2 dhcp servers
<smoser> it will take the lease from whichever runs first
<smoser> returns first
<kim0> wild west competition
<euca> the guests get the correct IP
<euca> I configured it to be STATIC
<euca> I can traceroute and ping the machine. I even checked the MAC and the IP. They are both correct.
<euca> when I try to ssh, I get a connection was reset error. and the guest is trying to fetch metadata from that URL
<euca> I tried the URL manually, and it returns 404 Not Found
<euca> I've read that this used to be a bug in lucid, when both the public and private IPs are the same
<smoser> euca, that url will only be good from inside the guest
<euca> smoser: I suspected as much, but I thought I'd try ;)
<smoser> i wonder how you configured the IP to be static, though, and which IP you chose.
<euca> smoser:  hold on, I'll post the eucalyptus.conf somewhere
<smoser> because the instance should be configured with the private ip
<smoser> (ie, ifconfig inside it will show the private IP)
<smoser> and routing magic on the node lets it through
<smoser> euca, for debugging, you can try running my ttylinux images, which do not depend on the metadata service
<smoser> http://smoser.brickies.net/ubuntu/ttylinux-uec/
<kim0> smoser: btw for UEC, does CLC perform NATing as part of default setup, or is the user expected to do that manually
<smoser> use the latest there, use uec-publish-tarball to publish it, run an instance, and you can ssh in.  of course you have to somehow set your IP address (those expect that they get dhcp)
<smoser> they will also, after a time, start dumping debug info on their view of the world.
<smoser> kim0, i dont believe it does any natting, i'm not really certain on the config of that stuff.
<kim0> so out of the box, NCs don't have internet connectivity
<euca> smoser: http://codepad.org/uwgop8kl (eucalyptus.conf on the CC)  http://codepad.org/BoFMV18z (part of eucalyptus.conf from the NC)
<euca> smoser:  euca-describe-instances output http://codepad.org/OUozYObx
<smoser> so how do you think that you made the instances use a static IP address ?
<smoser> i could be incorrect, but i believe that everything is based on dhcp
<euca> I thought so too.. but then I ran nmap and saw the MAC associated to that IP
<smoser> well, it could very well get that.
<smoser> but its a race
<smoser> run one of those ttylinux images
<euca> I'll try doing that in a sec
<smoser> and see what happens.
<smoser> they'll open up a ssh server on 22 with a password
<euca> what's the username and password?
 * euca 30k/sec
<smoser> really ?
<euca> yes
<smoser> i think its ubuntu:passw0rd
<smoser> hm..
<euca> and it's stuck now
<euca> hm
<euca> 40k now ;)
<smoser> hm.. that stinks. i just pulled to here (michigan) at 600k, but my ec2 instance is only getting it at ~ 30 k
<euca> I'll wait
<smoser> no idea whats wrong there, maybe i should get a mirror for those
<euca> smoser: can I run it in a t1.small ?
<smoser> m1.small you mean ?
<smoser> yes, it should fit inside your pocket
<euca> yes, m1.small :)
<smoser> (it has a 24M root filesystem i think)
<euca> pending..
<euca> interesting
<euca> I can't run it
<euca> the machine doesn't show up in 'xen list'
<euca> and there are no errors in the error log
<euca> and it's not in /usr/local/eucalyptus/admin/
<euca> i.e the folder is there, but there are no files
<euca> and on the second try, only kernel and kernel-digest were copied to the NC
<euca> hm, apparently it failed to delete  /usr/local/eucalyptus//eucalyptus/cache/eki-457115F8/kernel-staging
<euca> i'm deleting the cache and retrying
<euca> nope
<euca> it's not running it
<euca> and I can't find an error in nc.log
<euca> nothing in xend.log
<euca> :/
#ubuntu-cloud 2010-12-16
<timholum1> hello, I know ubuntu-cloud requires 2 servers to run. Does it require them to be in the same location? I am asking because I am looking at renting some rack space from a company, and i would rather have only one server there
<timholum1> I would like the vm's that I am running to run off of that server, but I dont need the control server to be there. Any ideas as to how to get that to work?
<smoser> timholum1, i dont know that it'd really work out all that well
<smoser> its a difficult thing to have hosted
<smoser> as the CLC really needs a lot of control of the network
<timholum1> ya, i have been doing some more research, and it looks like I am just going to end up with a machine that has kvm installed
<timholum1> with one of the many web interfaces out there to manage it
<rsvp> when one creates a S3 connection with BOTO, under what conditions does it close?? Is it just left open, or is there an explicit close() ?
<mcurran> friggin' signed-up for EC2, it says I already have service, but when I click on the EC2 console, in AWS, it says sign-up again...  over and over...  friggin' annoying...
<flaccid> check and see if your c/c was accepted
<liam> this is probably the wrong place but does anyone know how I can execute git commands on my ec2 instance using the .pem key?
<liam> does anyone know how to set up ssh so I don't have to specify the .pem key every time I connect to my ec2 server?
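[editor's note: an `~/.ssh/config` entry answers both of liam's questions, since git-over-ssh uses the same key lookup as plain ssh. The host alias and key filename below are placeholders; the hostname is the one liam pasted earlier in this log:]

```
Host myec2
    HostName ec2-50-18-57-32.us-west-1.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/mykey.pem
```

With that in place, `ssh myec2` and git remotes pointing at `myec2` pick up the .pem key automatically, with no `-i` flag needed.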
<gijo> how can i run an ec2 instance using boto
<kim0> gijo: Are you looking for something like http://spazidigitali.com/2009/03/17/controlling-an-ec2-instance-with-python/
<gijo> kim0: i have checked that already, conn.run_instances() will create a new instance and run it . i want to run an existing instance
<gijo> kim0: instance.start() causes the error AttributeError: 'Instance' object has no attribute 'start'
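[editor's note: gijo's AttributeError suggests an older boto where `Instance.start()` does not exist; the connection-level call sketched below (boto 2.x name) starts an existing EBS-backed instance without it. The region and instance id are made-up examples:]

```python
# Sketch: start an already-created (stopped, EBS-backed) instance by id,
# given a boto-style connection such as boto.ec2.connect_to_region('us-east-1').
def start_existing(conn, instance_id):
    started = conn.start_instances(instance_ids=[instance_id])
    return started[0].id if started else None

# e.g. start_existing(conn, 'i-12345678')
```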
<robbiew> mathiaz!
<mathiaz> robbiew!
<robbiew> mathiaz: any chance you could provide me admin access to http://ubuntuserver.wordpress.com/ ?
<robbiew> I'm "undacuvabrutha" on wordpress.com
<mathiaz> robbiew: I need to have your email address
<mathiaz> robbiew: so that I can invite/add you to the blog
<mathiaz> robbiew: ie the email address associated with your wordpress.com account
<robbiew> mathiaz: one sec.../me checks
<robbiew> mathiaz: robbie.v.williamson@gmail.com
<mathiaz> robbiew: done - you're an admin
<mathiaz> robbiew: if you could remove me from the list of users I'd appreciate it
<mathiaz> robbiew: last time we tried it didn't work
<robbiew> mathiaz: ok
<mathiaz> robbiew: (may be because I created the blog)
<robbiew> mathiaz: nope...can't
<robbiew> you might be right
<mathiaz> robbiew: I'll poke around and ask for help
<robbiew> cool
<mathiaz> robbiew: I may be able to move *ownership* to someone else
<robbiew> mathiaz: ah..yeah
<euca> hi! does anyone know if it's ok to have your guest vm's public ip be the same as the private ip?
<kim0> not sure but my guess is it is NOT ok
<euca> how would I make the 2 different?
<kim0> AFAIK, the private IP range is inherently different from the public network's IP
<kim0> when installing you choose a priv network .. that's it .. make it different
<smoser> kirkland, i just pushed to cloud-utils (lp:~ubuntu-on-ec2/ubuntu-on-ec2/cloud-utils/)
<smoser> ./uec-query-builds latest lucid i386 us-east-1
<smoser> lucid	server	release	20101020
<smoser> er...
<euca> hi smoser
<smoser> ./uec-query-builds latest-ec2 lucid i386 us-east-1
<smoser> lucid	server	release	20101020	ebs	i386	us-east-1	ami-480df921	aki-6603f70f	
<smoser> lucid	server	release	20101020	instance-store	i386	us-east-1	ami-a403f7cd	aki-6603f70f	
<euca> i couldn't run your vm yesterday
<smoser> yeah, it wont work on xen hypervisor
<euca> lol
<smoser> and that may well have been why your NC's didn't install for ubuntu
<euca> why? because of Xen?
<euca> so all ubuntu uec's are kvm images?
<smoser> so i don't know for a fact that the ttylinux won't run in xen paravirt
<smoser> i guess it could...
<kim0> smoser: http://cloud.ubuntu.com/ami/ :)
<smoser> ubuntu uec images can run under xen (they do, in fact, on ec2)
<mathiaz> smoser: hi!
<mathiaz> smoser: are the AWS IAM cli tools packaged somewhere?
<smoser> in natty, yes
<smoser> iam-cli
<smoser> and maverick and lucid at
<smoser> mathiaz, https://launchpad.net/~awstools-dev/+archive/awstools
<mathiaz> smoser: awesome! thanks!
<euca> what's this cloud-init business anyways?
<euca> and how can I make it find 169.254.169.254?
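On euca's cloud-init question: cloud-init pulls its per-instance configuration (user-data, ssh keys, hostname) at boot from the metadata service at 169.254.169.254, a link-local address the cloud answers on; if an instance can't reach it, that's a networking problem on the node rather than something cloud-init configures. A sketch of how the lookup works (only the URL construction is exercised here, since the fetch itself has to run inside an instance):

```python
METADATA_BASE = 'http://169.254.169.254/latest/'

def metadata_url(path):
    """Build the URL for one metadata item, e.g. 'instance-id'."""
    return METADATA_BASE + 'meta-data/' + path

def fetch_metadata(path):
    """Fetch a metadata item; this only works from inside an instance."""
    import urllib.request
    return urllib.request.urlopen(metadata_url(path), timeout=2).read()

# From a shell inside an instance, the equivalent is roughly:
#   curl http://169.254.169.254/latest/meta-data/instance-id
```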
<kirkland> smoser: neat, thanks
#ubuntu-cloud 2010-12-17
<liam> what's the difference between the .pem key you get with your instance and the access keys, X.509 certs and key pairs that are in your account access credentials page in aws?
<liam> does anyone know, if you do the api tools steps outlined here https://help.ubuntu.com/community/EC2StartersGuide, whether you can ssh in to your ec2 instance without having to use "-i key.pem"??
<flaccid> liam: http://docs.amazonwebservices.com/AWSSecurityCredentials/1.0/AboutAWSCredentials.html
<flaccid> liam: no, a private key is required by default with images
<flaccid> you might like to add the private key to your client ssh
<liam> the private key is the .pem right?
<flaccid> i would assume so
<liam> flaccid: I still can't find how to add it to the ssh client...
<flaccid> rtfm
<liam> is it just ssh-add key.pem??
<erichammond> liam: Instead of having Amazon generate an ssh key, you can simply upload your default public key to avoid needing "-i"
<erichammond> liam: I wrote about it here: http://alestic.com/2010/10/ec2-ssh-keys
<liam> erichammond: thank you.
<erichammond> This is my recommended way of working with EC2 now.  There's no need to have Amazon generate ssh keys any more.
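The client-side route flaccid was hinting at is an entry in ~/.ssh/config; roughly like this, where the host alias, hostname, and key path are all example values:

```
# ~/.ssh/config -- example entry, not real values
Host myec2
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/key.pem
```

With that entry, plain `ssh myec2` picks up the .pem key automatically; erichammond's approach of uploading your existing default public key to EC2 removes even that step.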
<flaccid> sorry that i am short today. gotta do a release in 2 hours
<erichammond> flaccid: I often say nothing to avoid saying anything that isn't nice :)
<flaccid> true
<erichammond> #ubuntu-* channels have higher standards: https://wiki.ubuntu.com/IRC/Guidelines
<erichammond> It's one of the things that drew me to Ubuntu
<erichammond> besides the fact that it just worked better and didn't break when upgrading.
<flaccid> lol
<flaccid> i wasted 3 years with ubuntu, i aint going back
<erichammond> and the software packages were more recent versions
<erichammond> and...
<flaccid> i have much higher standards than ubuntu
<progre55> hi guys. Is there a way to re-register (deregister - register) walrus without losing any of the already uploaded images, kernels, etc?
<TeTeT> progre55: never tried that, but I don't think that this is possible :(
<progre55> TeTeT: I've managed to change it, from the admin web ui, but still, when I say "euca-describe-images" I get "no route to host" after some time..
<progre55> TeTeT: Oh, I hadn't changed the environment variables after I changed the IP address =)
<TeTeT> progre55: you probably need to d/l new credentials from the admin interface
<progre55> TeTeT: it's working now, after I changed the EC2_URL env. variable to the correct one =)
<progre55> thanks
<TeTeT> progre55: was merely reading, didn't do anything
<progre55> TeTeT: but still, it's nice to know that there are people trying to help you =)
<TeTeT> :)
<progre55> TeTeT: a question, if I may =)
<progre55> I have downloaded one of my images using "euca-download-image" and it's a bunch of 10MB files with a manifest. how can I convert it to a single image file now?
<progre55> oops, i.e. euca-download-bundle
<progre55> oh, euca-unbundle, I guess
<TeTeT> progre55: yes, euca-unbundle
<progre55> TeTeT: thanks =)
<TritoLux> hi there, just out of curiosity.. is UEC the only distribution able to make use of the POWERSAVE option?
<gijo> what is the parameter port in boto.connect_ec2()
<TeTeT> gijo: probably 8773
<TeTeT> gijo: for UEC that is, don't know for Amazon EC2
<gijo> thx
<TritoLux> that you know of, is UEC the only distribution able to make use of the POWERSAVE option? since I get powernap error 255 out of the box, is there anything I need to change in the default config to make it work in maverick? is apparmor aware of powernap policies?
<samuel> hey all
<kim0> samuel: hey
<mathiaz> smoser: hi!
<mathiaz> smoser: do you know if someone looked at hosting an apt repository in S3?
<mathiaz> smoser: given that S3 is accessible via http and that apt uses http to download files, I wonder if an apt repository could be hosted in S3?
<smoser> i had read something about this at one point
<smoser> iirc there was some issue with it, but i don't really recall what. i only went as far as seeing someone say they had issues...
<smoser> but don't let that stop you from trying
<smoser> https://github.com/kyleshank/apt-s3 might have more info
<smoser> hm.. that seems to do authenticated s3... so maybe unauthed s3 "just works"
<erichammond> mathiaz: I think S3 should be fine for an apt repository as long as you can get the files uploaded in a timely manner and detect which files need to be updated on each refresh (following the correct order).
<erichammond> mathiaz: I tried a couple years ago using s3fs (and rsync?).  I gave it up because my initial upload was taking forever and RightScale (then Canonical) started mirroring Ubuntu repositories on EC2 instances.
<mathiaz> erichammond: right
<mathiaz> erichammond: I'm looking at hosting my own packages as part of the infrastructure
<mathiaz> erichammond: and I would also need private access to S3
<mathiaz> erichammond: as some of the packages may include private data
<mathiaz> erichammond: for a public repository I think that a normal apt repo can just be mirrored to an S3 bucket
<mathiaz> erichammond: if file names in the Packages.gz files are relative
<mathiaz> erichammond: it should even be possible to mirror a pool/ dists/ layout in S3?
<mathiaz> erichammond: if S3 supports sub-directories
<erichammond> mathiaz: S3 supports "keys" where the key can include slashes.  This means that you can emulate subdirectories as far as HTTP goes, and many tools like s3fs and web UIs also present a subdirectory paradigm.
<mathiaz> erichammond: that's great - so it should be possible to host a public apt repository in S3 following the same structure as a local repo
<erichammond> mathiaz: I believe so, yes.
<mathiaz> erichammond: however providing a private repo in S3 may be a bit more complicated
<erichammond> It won't auto-generate subdirectory listings or replace /dir/ with /dir/index.html, but I don't think apt repositories require those.
<mathiaz> erichammond: correct - apt doesn't require those
<erichammond> mathiaz: Using the HTTP(S) protocol, the only way you could get private would be to pick a very long, random bucket name and always use SSL.
<erichammond> The bucket name would effectively be your password.
<erichammond> Not a highly recommended approach, but I toss it out for consideration as it may work for some situations.
<mathiaz> erichammond: right - smoser pointed out an apt method that supports authentication
<mathiaz> erichammond: using an access id and a secret key
<erichammond> mathiaz: Yep, I saw that when it came out (2008) but haven't tried it.
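If the public-repository half of this pans out, the client side is just a plain http source pointing at the bucket's HTTP endpoint; a sketch, with a hypothetical bucket name and the lucid suite discussed above:

```
# /etc/apt/sources.list.d/s3-repo.list -- bucket name is hypothetical
deb http://my-repo-bucket.s3.amazonaws.com/ubuntu lucid main
```

This only works if the objects are world-readable and the bucket keys mirror the usual dists/ and pool/ layout, as erichammond described; the private case needs the authenticated apt transport smoser linked, or the long-random-bucket-name workaround.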
