#ubuntu-cloud 2011-02-07
<ranjan>  Hi all can anyone explain to me what Cloud Computing is?
<flaccid> ranjan: try the entry in wikipedia
<smoser> kim0, regarding python versions
<smoser> https://launchpad.net/ubuntu/+source/python-defaults
<smoser> python 2.7 would be ok but then, any utility instances that you fire have to have python2.7 if you end up running python code there.
<smoser> that said, limiting yourself to python2.6 means you run on everything supported except for hardy.
<elasticdog> I've got a test cloud set up where launching instances works just fine, but creating a storage volume immediately fails: http://pastie.org/1537407
<elasticdog> I pastied the applicable log entries from both the CLC & SC
<kim0> does a file actually get created in //var/lib/eucalyptus/volumes/vol-*
<elasticdog> kim0: let me check
<elasticdog> kim0: yep, it creates the file, and it looks like it actually stays there until I do a euca-describe-volumes
<kim0> can you increase log verbosity level
<elasticdog> it's already on the default debug
<elasticdog> -h
<elasticdog> they definitely stick around until you run a describe-volumes...not sure if I can actually mount/use them
<kim0> elasticdog: do you have too many volumes already running
<elasticdog> I have no volumes running and plenty of free space
<kim0> It seems
<kim0> losetup /dev/loop0 //var/lib/eucalyptus/volumes/vol-59920626
<kim0> is failing
<kim0> can you try that on any file you create
<elasticdog> running that manually seems to exit cleanly
<kim0> ls /dev/loop0 exists ?
<elasticdog> yep
<kim0> hmm
<kim0> well clean it up for now .. losetup -d /dev/loop0
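The loop-device setup the SC is attempting can be reproduced by hand. A minimal sketch (the path is illustrative; the real backing files live under /var/lib/eucalyptus/volumes/, and attach/detach needs root, so those steps are guarded):

```shell
# Reproduce by hand what the SC does: a sparse backing file plus losetup.
img=$(mktemp /tmp/vol-test.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=0 seek=1024 2>/dev/null   # 1 GiB, sparse
if [ "$(id -u)" -eq 0 ]; then
    loopdev=$(losetup -f)         # first free loop device
    losetup "$loopdev" "$img"     # attach, as the SC's losetup call does
    losetup -d "$loopdev"         # detach, i.e. the manual cleanup above
fi
```

If the manual attach succeeds but the SC's does not, the difference is usually environment (wrong interface/permissions on the SC), not the loop machinery itself.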
<elasticdog> odd that it only seems to disappear after running a euca-describe-volumes on it...let me see if I can actually mount it
<elasticdog> it won't let me attach the volume either, but the file sticks around still
<elasticdog> found another error in the cloud-error.log: ERROR com.eucalyptus.util.EucalyptusCloudException: Could not export AoE device /dev/vg--g73dA../lv-5GN9hA..
<elasticdog> might be VNET_INTERFACE related on the CLC per http://open.eucalyptus.com/wiki/EucalyptusTroubleshooting_v1.5
<terje> does anyone know if RackSpace is going to be API compatible with AWS at any point?
<elasticdog> kim0: I figured it out...the VNET_*INTERFACE settings in eucalyptus.conf do NOT apply to the SC for whatever reason, but using the admin web interface I was able to properly set it to eth1 instead of eth0
<elasticdog> there's a caveat listed about that in the eucalyptus.conf man page
<elasticdog> is it correct that neither the SC nor Walrus have any built-in data replication capabilities?
<kim0> elasticdog: AFAIK, openstack already provides partial EC2 api compat
<elasticdog> kim0: yeah, I'm investigating OpenStack as well...looks like integration with a SAN or using a homebrew solution with something like GlusterFS is the way to get data replication for the SC/Walrus
<kim0> elasticdog: agree .. drbd is an option as well
<elasticdog> so I assume that you are limited to a single SC per cluster, and a single Walrus machine per cloud?
<elasticdog> it seems like OpenStack's ring architecture approach might fit in better with our use-case
<kim0> Any idea why the following is failing as unauthorized
<kim0> ec2-describe-image-attribute --block-device-mapping  ami-0601f16f
<kim0> that's a public natty ami
<kim0> smoser ^
<smoser> you dont have access to that.
<smoser> its strange, but true.
<smoser> :)
<kim0> I wanna know the snapshot of the ami
<kim0> so that we can clone it
<smoser> you dont have access to the snapshot anyway
<kim0> my understanding is .. we get AMI-ID -> Volume -> snapshot -> create new vol -> mount that
<smoser> this is something that i've considered doing , for exactly this reason
<kim0> is that incorrect
<kim0> that's for the tool to copy AMIs across regions
<smoser> i dont really follow what you're suggesting, but you can do that for images you own
<smoser> you're not guaranteed to be able to see snapshots or apparently some image attributes of images you don't own.
<kim0> smoser: so basically the tool to copy AMIs .. gets an AMI ID .. How is it possible to find out the volume to attach/mount
<smoser> like you're suggesting
<kim0> Are you saying I can only copy AMIs that I own!
<smoser> you just can't copy an AMI that you dont own.
<kim0> duh
<smoser> that make sense.
<smoser> err.. that *does* make sense.
<kim0> I can already start it .. and copy it then
<kim0> right
<smoser> well, no.
<smoser> the ability to launch an instance of something is not the ability to read it "raw"
<smoser> ie, chmod 0711 /bin/foo
<smoser> you have execute access to ami-0601f16f but not read
<kim0> so for the purposes of this tool .. to test it ..
<kim0> we'd need to upload our own image ?
<kim0> and copy that
<kim0> any easier path to get a golden image
<kim0> to copy/test with
<kim0> like do you have any image that has that read permission allowed
<smoser> well, its easy enough to test in 1 of 2 ways
<smoser> 1.) who cares if the instance runs or not, use a 1G volume, and register it as an AMI.  will cost basically $0.00, it just wont boot.
<smoser> 2.) populate a snapshot from uec-images, and make one that *does* boot. still can use a 1G filesystem.
<kim0> smoser: thanks
<smoser> kim0, if you want to push on making snapshots public... i do think that would be useful
<smoser> but in the describe-image-attribute case, it would still fail
<smoser> there woudl be no data publically available in AWS that would link AMI -> SNAPHOST
<smoser> or even SNAPSHOT :)
<smoser> interesting anagram in this case
<daker_> kim0, question: why don't you put the website on the topic ?
#ubuntu-cloud 2011-02-08
<daker> kim0, why you don't put the link of the portal on the topic ?
<kim0> daker: hey .. yeah I can do that
<daker> :D
* kim0 changed the topic of #ubuntu-cloud to: All questions relating to Ubuntu Enterprise Cloud (UEC), Ubuntu over EC2, cloud configuration management tools are welcome | Ask clearly, and wait patiently for an answer | Ubuntu-Cloud mailing list at
* kim0 changed the topic of #ubuntu-cloud to: All questions relating to Ubuntu Enterprise Cloud (UEC), Ubuntu over EC2, cloud configuration management tools are welcome | Ask clearly, and wait patiently for an answer | Ubuntu-Cloud mailing list at http://goo.gl/fpm0F | News, Venus and Involvement at Cloud Portal http://cloud.ubuntu.com/
<kim0> Attaching a disk to an ec2 instance as /dev/sdh .. it actually appears as /dev/xvdh .. any idea why ? Is there any consistent renaming scheme?
<TeTeT> kim0: I think the hotplug driver is responsible for that, but not 100% certain
<flaccid> kim0: thats usually the xen kernel used. ubuntu has that modified iirc from the debian xen kernel
<flaccid> something there is
<flaccid> its xvda* in debian kernel
<Hussain> can i get the ubuntu url for network vm installation?
<Error404NotFound> I am using AWS. I have set up 2 webservers and one NFS server. Both of the webservers are under an Elastic Load Balancer and autoscaled. Both mount a directory from the NFS server. My problem? Due to autoscaling more instances will be fired, while the access list is defined statically inside NFS like 10.0.0.1/32. How can I counter that? I don't want to open access to the whole 10.0.0.x or any subnet, and due to autoscaling I have no way of knowing the IPs of newly started servers.
<Error404NotFound> Without NFS dir being mounted the webserver started as part of autoscaling will not work
<flaccid> um its a limitation of elb i think
<flaccid> i.e. no security group group permissions
<flaccid> thats more relevent to #aws
<Error404NotFound> hmmm, pasting it there.
<jmgalloway> does anyone know why a vm will not leave the pending state?
<jmgalloway> nothing has changed on my uec cloud since yesterday...and now vm's will not go to the running state
<jmgalloway> anyone know why my vm stays in a pending state?
<RoAkSoAx> kirkland: ping
<RoAkSoAx> ups
<RoAkSoAx> wrong channel
<kim0> erichammond: Hi o/
<kim0> erichammond: Is the ec2-consistent-snapshot tool only supposed to be used inside the instance ?
<erichammond> kim0: ec2-consistent-snapshot was originally written to be used on the instance as that is where you need to freeze the XFS file system.  I believe a community patch was submitted to let you run it remotely with the freeze being done automatically over ssh, but it doesn't look like I applied it in the public version.
<kim0> erichammond: aha .. I'm probably throwing a screencast demo'ing it soonish. Just wanted to check this is how it's supposed to work
<erichammond> Yep.
<erichammond> It basically adds XFS filesystem flushing and freezing and/or MySQL table flushing and locking on top of the standard EBS snapshot API call.
<kim0> erichammond: cool tool :)
<erichammond> It doesn't have a lot of use if you aren't using XFS and/or MySQL.
<kim0> yeah indeed .. I guess a LAMP setup is quite common .. and putting it on an ebs xfs vol makes a lot of sense
<erichammond> It started out very minimal, but ended up with a lot of error handling and retrying to catch various situations that come up in the real world.
<kim0> Is that perl library still not packaged for natty ?
<kim0> that final one to install from cpan
<erichammond> Net::Amazon::EC2 ?
<kim0> yeah
<erichammond> I don't run natty, so don't know.
<kim0> libnet-amazon-ec2-perl actually exists
<kim0> for natty
<erichammond> nice
<erichammond> Let me know if it works.
<kim0> the ppa builds for natty right
<erichammond> Hm, libnet-amazon-ec2-perl even exists on 10.04 Lucid.
<kim0> sweet
<kim0> add a dependency then :)
<erichammond> I'll do some testing and update the ec2-consistent-snapshot package with a dependency and documentation.
<erichammond> That one step has tripped up a lot of people as installing CPAN packages can be tough for the uninitiated.
<kim0> erichammond: will you do that today (since I'll play first thing in my morning)
<kim0> will you be able to*
<kim0> fingers dropping complete phrases ;)
<erichammond> If I did it, it would probably be late tonight my time (US/Pacific).
<erichammond> kim0: Are you recording something or doing it live?
<kim0> erichammond: recording a screencast yeah
<kim0> if you do it tonight for you anytime .. I think I will catch it tomorrow morning .. leave me a line if you do
<kim0> if it's not possible .. I can always wait
<kim0> since as you say .. cpan kinda makes things not silk smooth
#ubuntu-cloud 2011-02-09
<erichammond> will do
<erichammond> Let me know if you have any other questions or want me to give feedback on any content.
<kim0> erichammond: Thanks man!
<kim0> erichammond: drop me a line please if you update the ppa with the dependency .. Thanks
 * kim0 → /dev/bed
<jmgalloway> anyone online?
<flaccid> nope
<erichammond> kim0:  ec2-consistent-snapshot 0.37 has been uploaded to the Alestic PPA for Lucid, Maverick, Natty.  It includes the dependency on libnet-amazon-ec2-perl and updated install documentation.
<erichammond> All that's needed to install it now is: sudo add-apt-repository ppa:alestic && apt-get update && sudo apt-get install ec2-consistent-snapshot
<kim0> erichammond: awesome .. you're da man ;)
<erichammond> er, sudo apt-get update
<atretes> Hi all, I'm currently setting up a cloud base on 10.04 and I've downloaded a repackaged image from ubuntu uec but when I launch it into the cloud and attempt to ssh into it I get 'No route to host'?
<atretes> However, I downloaded the 10.04 Lucid image from the Image Store and that image works fine with ssh etc. What could possibly be the problem?
<kim0> atretes: did you publish the repackaged image using uec-publish-tarball ?
<atretes> kim0: no I used the euca-bundle-image, euca-upload-bundle and euca-register
<kim0> atretes: uec-publish-tarball is made to make this procedure easy and error free .. would you mind trying with it ?
<smoser> atretes, pastebin euca-get-console-output <instance-id>
<smoser> sorry to jump in, kim0 , but want to see if there is anything obvious there.
<kim0> sure .. you're da man ;)
<smoser> no route to host likely means higher level eucalyptus issues, though, i fear
<kim0> smoser: while working with a community fellow yesterday, I noticed a natty instance did not launch in eu-west. Unfortunately I don't know the exact AMI id, but what I'm asking is, do we have some auto-testing of all uploaded AMIs to make sure they can boot and be ssh'able ?
<kim0> if not, that'd be a nice tool to write I guess
<smoser> we do not.
<smoser> yes, we need much testing.
<smoser> i386 t1.micro is known broken in alpha2
 * kim0 notes down
<smoser> but all others "should work"
<kim0> indeed .. that was probably it
<smoser> it does work on dailies now
<kim0> and I thought alpha2 had more testing than dailies :)
<atretes> smoser, http://pastebin.com/w8gv6Bh8
<smoser> well, kim0 it did. i *knew* alpha2 didn't work on t1.micro i386.
<smoser> if you asked me about yesterday's daily, i wouldn't have known for sure :)
<kim0> atretes: hehe
<kim0> sorry .. that was to smoser
<smoser> atretes, where did you get this image ?
<smoser> it would seem to me that it is a lucid image, and it is waiting on the availability of eth0.
<atretes> smoser, I downloaded the tarball from http://uec-images.ubuntu.com/releases/lucid/release/ and then added the postgresql package to it by mounting the image via loop and then bundled it with euca2ools
<atretes> smoser, might is be a udev issue?
<smoser> hm...
<RobertLaptop> I have a licensing question.  We are currently using ESXi and wanting to move a more cloud based structure but money is an issue there is no budget for the conversion.  I was looking at various cloud options and found a webnair on ubuntu-cloud but what I can't tell is if you have a CE version or a non-pay version?  Does anyone have an info on that?
<atretes> smoser, I tried building my own kvm images from scratch and got the same issue but the image from the Ubuntu Image Store works with no problems
<smoser> atretes, could you do me a favor and cut out the "added postgresql package" step ?
<smoser> ie, just take the iamge you downloaded, do not modify and test it
<smoser> via the same bundle and upload that you did after you modified it
<kim0> RobertLaptop: if you don't need support .. you're free to use Ubuntu server/cloud for free forever
<RobertLaptop> kim0, Cool.  What about Landscape Dedicated Server is that true as well?
<atretes> smoser, sure let me try
<kim0> RobertLaptop: nope, afaik landscape only offers a free trial
<kim0> RobertLaptop: I'm sure price wise, it would be quite competitive however compared to other options
<kim0> RobertLaptop: and you don't necessarily need it to manage UEC, although it does make things nicer
<RobertLaptop> Is landscape a one time cost or a required subscription?  I am referring to the Dedicated Server not the hosted version.  Also does that mean all node servers have to be licensed as well?
<smoser> atretes, but, in the end, i woudl recommend either uec-publish-tarball, or uec-publish-image (as kim0 suggested).  they're just much easier to work with IMO than euca-bundle-image, euca-upload-bundle, euca-register
<atretes> smoser, yeah I will give that a bash too
<kim0> RobertLaptop: I think you should contact Sales https://forms.canonical.com/sales/
<RobertLaptop> kim0, I tried that a few months ago and never got a reply.  I guess I could retry.
<atretes> smoser, Ok this sucks - I used the base 'untarnished' image that I downloaded and bundled it with the euca2ools, and ssh is working. So what could possibly have changed with my custom image?
<smoser> i do suspect udev persistent network
<smoser> but don't really understand how you would have gotten those there.
<smoser> hold on
<smoser> atretes, do you have /etc/udev/rules.d/70-persistent-net.rules or /etc/udev/rules.d/z25_persistent-net.rules in the instance ?
<smoser> err.. in your re-bundled image
<atretes> smoser, well to get apt running in my mounted image I had to 'mount --bind /dev /tmp-mnt/dev' and then chroot... that might've messed with things
<smoser> do you have those files there ? (/etc/udev)
<atretes> smoser, well in the instance that is currently running it is 70-persistent-net.rules
<smoser> well, i suspect that that file has a mac addr in it that is different than your instance
<smoser> ie, your instance probably has eth1 and is sitting waiting forever for eth0
<atretes> smoser, and the same in the re-bundled image
<atretes> smoser, hmmm yeah that would be a problem - so should I just remove it completely?
<smoser> atretes, yes, remove that.
<smoser> in your image, remove it, then re-register an ami, and try again
<smoser> i really dont know what would have gotten that file there... i guess somehow udev got started in your chroot ... and wrote that.
<smoser> i'd not seen that before though
<atretes> smoser, ok so I assume eucalyptus creates a new rule file when the image get instantiated with the generated mac?
<smoser> no
<smoser> eucalyptus does not modify the image contents. (other than in the networking setups that do not have a metadata service, and then, they only insert .ssh/authorized_keys)
<atretes> ok
<smoser> fwiw, i find it a serious bug for them to tinker inside the image contents.
<smoser> (i would be pretty ticked off if my thinkpad bios decided it should read the filesystem and modify some things on my behalf)
<atretes> I think so too
<atretes> but what confuses me is that if I ssh into the instance that is currently running it does contain a 70-persistent-net.rules file...
<atretes> smoser, you are a legend - my image is working! thanks so much :)
<smoser> atretes, it *should* have that file
<smoser> the instance should, but the image should not.
<atretes> smoser, ah I understand
<smoser> bug 341006 has more info
<uvirtbot> Launchpad bug 341006 in udev "ease cloning of virtual images by disabling mac address rules" [Wishlist,Fix released] https://launchpad.net/bugs/341006
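The fix smoser describes, stripping the stale udev rule out of the image before re-bundling, amounts to the following. The directory here simulates a loop-mounted image root ($MNT and the MAC address are illustrative):

```shell
# Simulate an image root and strip the persistent-net rules before
# bundling; $MNT stands in for wherever the image is loop-mounted.
MNT=$(mktemp -d)
mkdir -p "$MNT/etc/udev/rules.d"
# A stale rule pinning a MAC address from the build host (illustrative):
echo 'SUBSYSTEM=="net", ATTR{address}=="00:16:3e:00:00:01", NAME="eth0"' \
    > "$MNT/etc/udev/rules.d/70-persistent-net.rules"
# Remove both filename variants so the instance enumerates eth0 fresh:
rm -f "$MNT/etc/udev/rules.d/70-persistent-net.rules" \
      "$MNT/etc/udev/rules.d/z25_persistent-net.rules"
```

The running instance will regenerate the file for its own MAC on first boot, which is why the instance should have it but the image should not.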
<kim0> erichammond: Pushed the cast, thanks for the help http://www.youtube.com/user/ubuntucloud#p/a/u/0/SPVqJWWiLVI
<atretes> kim0, Very nice cast, does this work with eucalyptus?
<kim0> atretes: um probably erichammond might know better
<kim0> I guess it depends on the api compatibility level .. but generally should work I'd think
<atretes> nice, I will check it out a bit further
<erichammond> kim0, atretes: I've never used Eucalyptus/UEC and have no plans to as I'm happy to get out of the hardware maintenance business and let Amazon take care of it for me.
<Abd4llA> ping kim0
<kim0> Abd4llA: hey
<kim0> How's it going
<Abd4llA> fine, I've started implementing the EC2 AMI migration tool, saw u were looking for volunteers on ur blog
<kim0> woohoo
<Abd4llA> was just gonna send a mail to mailing list
<kim0> that sounds awesome
<kim0> where can we check out the code
<Abd4llA> https://code.launchpad.net/~abd4lla/+junk/ec2-ebs-migrate
<Abd4llA> nothing fancy yet
 * kim0 clicking
<Abd4llA> was thinking to take some opinions and ask for help, the tool is not big though, but opinions at this point would be valuable
<kim0> Abd4llA: Is this your first contribution to ubuntu
<Abd4llA> first code contribution idd, I delivered a session previously in AppDev week
 * kim0 hugs Abd4llA 
<Abd4llA> hehe
<Abd4llA> so what do u think the plan should be?
<kim0> Abd4llA: Ok, first of all ... Indeed I think you should send an email to the ubuntu-cloud list
<kim0> so that others wanting to work on this tool can join forces
<kim0> actually I'll probably try to hack on it a bit too
<kim0> Other than that .. do you feel like you have concrete questions or parts you'd like help with ?
<Abd4llA> not really, but would be gr8 if someone from the servers guys did a quick review or something, as I said, opinions at this early stage are valuable
<Abd4llA> specially regarding the general implementation approach
<kim0> aha
<kim0> smoser is generally the man who'd know best about that tool ..
<kim0> smoser would you be able to quickly check out the implementation approach
<kim0> Abd4llA: well don't expect something realtime :) but it'll come
<kim0> hang in here for a while if that's ok
<Abd4llA> sure
<smoser> Abd4llA, so, reading a bit, overall looks reasonable. i like that you laid things out and documented what you're expecting to do
<smoser> def prepareDestinationVol(dstInstance, volumeSize):
<smoser> really, the ideal migrate of the instance involves copying the filesystem type and LABEL also.
<kim0> I notice a couple of issues .. Do we always assume ext3 ? Do we always assume the ebs vol is not partitioned ?
<smoser> and i would even suggest UUID.
<kim0> are those reasonable assumptions ?
<smoser> ebs root volumes are not partitioned.  amazon/xen does tricks such that the root volume comes up when booted named /dev/sda1
<smoser> (xen is really weird...actually, the device you're used to seeing as /dev/xvda1 or /dev/sda1 is not a partition, it is a funny named block device -- look in /sys and you'll see what i mean)
<smoser> but, no, you can't assume ext3
<Abd4llA> smoser: yeah , would do that idd,
<Abd4llA> kim0: I was thinking about detecting the fs, so far the only idea I've is using the "file" command
<smoser> we're cheating in some way by not copying the full volume. we're only copying the filesystem contents, which is good, but if you lose attributes of that filesystem, its bad.
<smoser> :)
<kim0> what about blkid
<kim0> I hate running commands like so .. I wish Linux servers had a low level api :)
<smoser> i recently did this for euca-bundle-vol and ec2-bundle-vol
<smoser> and use blkid to get UUID and LABEL and TYPE
<Abd4llA> ok, good enough for me
<Abd4llA> smoser: are the kernel_ids avaible across regions ?
<smoser> you can see mkfs at http://bazaar.launchpad.net/~ubuntu-virt/ubuntu/natty/euca2ools/natty/view/head:/euca2ools/euca2ools/__init__.py if you're interested.
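Reading the filesystem attributes smoser mentions with blkid might look like this. It is run here against a scratch image file rather than a real EBS device; the label is illustrative (it happens to be the one Ubuntu cloud images use):

```shell
# Build a small scratch filesystem image and read back its attributes
# with blkid -- the same LABEL/UUID/TYPE data discussed above.
img=$(mktemp /tmp/fs-test.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mkfs.ext3 -q -F -L cloudimg-rootfs "$img"   # -F: image file, not a device
# -o export prints KEY=VALUE pairs, easy to grep or eval:
attrs=$(blkid -o export "$img")
echo "$attrs"
```

Those three values are what you would want to carry over to the destination volume (mkfs with the same type, then tune2fs/mkfs flags for label and UUID) so the migrated image's fstab and boot config still resolve.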
<smoser> oh, good question.
<kim0> consistent you mean ?
<smoser> no.
<Abd4llA> kim0: yes :)
<smoser> for migrating, what you'll have to do is fish through available kernels/ramdisks in the target region and try to match
<smoser> fun, eh?
<Abd4llA> :S
<kim0> smoser: try to match based on ?
<smoser> based on manifest path
<Abd4llA> I C
<smoser> what i would suggest is first looking and seeing if there is a single match by owner-id and manifest-basename in the target region
<smoser> if so, use it.
<smoser> if there are no candidates matching basename, give up, require user to tell you
<smoser> if there are more than 1 (and there will be for anything we've published), then you really get to fixh
<smoser> fish
<smoser> we use a naming convention, we have different buckets, and you have to be careful to stay in the same "bucket basename".
<smoser> our buckets are named
<smoser> according to https://wiki.ubuntu.com/UEC/Images/NamingConvention
<smoser> ubuntu-kernels-us -> ubuntu-kernels-eu-west-1 -> ubuntu-kernels-ap-southeast-1
<smoser> but we also have
<smoser> ubuntu-kernels-testing-us -> ubuntu-kernels-testing-ap-southeast-1 ...
<kim0> can't we just download the kernel and re-register it on the other side
<kim0> smoser: also with newish pvgrub images .. we don't need to do anything right?
<smoser> well, you still have to basename match
<smoser> you cannot download kernels, and only privileged accounts can register on the other side
<kim0> Abd4llA: how are you planning on establshing ssh keys across the 2 instances ?
<kim0> for rsync'ing
<Abd4llA> kim0: I plan to generate a key pair on a machine and then copy the key to other one
<kim0> so you'd download it locally .. and upload to the other side
<kim0> I guess we'd have to do it that way
<Abd4llA> I'd just remotely cat it from one machine, and remotely write it to the file on the other one
<Abd4llA> if u consider that *downloading*
<kim0> Abd4llA: catloading is better :)
<Abd4llA> smoser: any suggestion regarding that point ?
<smoser> i didn't follow it
<kim0> copying ssh keys between the 2 sides
<smoser> oh. i see. yeah, i think his solution is good.
<smoser> you could use cloud-config to add the ssh key to the instances
<kim0> smoser: inst1 needs to ssh into inst2 .. neither have private keys .. so we'd need to generate them I guess
<smoser> create one locally, then, launch both instances such that they have that key (then you can even *use* that key to get to them)
<smoser> then you wouldn't have to muck around with '--key' in the launching of the instance
<smoser> that make sense ?
<kim0> not to me .. I still think we *have* to generate
<kim0> cloud-init puts public keys right ?
<Abd4llA> I'm not full aware of cloud-init
 * Abd4llA googles
<kim0> instance won't have a private key .. How can ubuntu@i1 ssh to ubuntu@i2 then ?
<smoser> right, kim0
<smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
<smoser> Abd4llA, thats what you're looking for.
<smoser>  * generate a new ssh private/public key
<smoser>  * launch both instances with 'ssh_authorized_keys'
<kim0> aha
<smoser>  * connect to source instance, put private key in place (you can't do this from cloud-init, but you could do it with a runcmd or user-script)
<erichammond> Info on uploading an ssh key to ec2: http://alestic.com/2010/10/ec2-ssh-keys
<smoser> at that point, src can talk to dest
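Generating the shared key pair locally, as smoser outlines, is a single ssh-keygen call; distributing it is then cloud-config/scp detail. The file names and the scp target below are illustrative:

```shell
# Create a dedicated key pair for the migration, as outlined above.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/migrate_key"
# The public half goes into both instances' authorized_keys
# (e.g. via cloud-config's ssh_authorized_keys); the private half is
# copied to the *source* instance only, so it can reach the destination:
cat "$keydir/migrate_key.pub"
# scp "$keydir/migrate_key" ubuntu@SOURCE-INSTANCE:.ssh/migrate_key   # illustrative
```

This keeps the private key off EC2's key-pair API entirely, which is the point made below: you upload only public keys, never private ones.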
<erichammond> though smoser's approach sounds simpler for this use.
<smoser> erichammond, yeah, that works.
<Abd4llA> but that'd add a dependency on cloud-init
<erichammond> no need to involve EC2 account
<smoser> i dont know if my use case is simpler or not.
<smoser> Abd4llA, its a dependency on the utility instances.
<smoser> the end user doesn't give 2 hoots which instances you use to do this for them
<kim0> erichammond: that uploads pub keys only right? no way to send over priv keys ?
<smoser> i would suggest not requiring utility instance-ids to be input, but using either hard coded values, or values from http://uec-images.ubuntu.com/query (for known regions)
<erichammond> kim0: Correct.  You are not giving EC2 access to your private keys, just the public side.
<smoser> kim0, correct. you really don't ever want to give someone your private keys !
<smoser> that was one of the benefits of the "upload keypair" functionality
 * kim0 scratches head
<kim0> if I'm only uploading pub keys to the 2 instances .. how can I expect them to ssh into one another
<smoser> so, for erichammond's solution with upload-keypair to work, you'd still have to deal with getting the private key to source instance.
<smoser> you have to do that.
<smoser> period
<Abd4llA> hmm, maybe I'm not getting the full requirements, so this utility would be used by end users, but they'd use predefined ids provided by ubuntu
<smoser> why not?
<kim0> the utility instance could be ubuntu .. but the tool could be copying centos or windows
<erichammond> What "ids"?
<kim0> erichammond: the utility ami
<smoser> they have to 2 utility instances to write to EBS volumes.
<smoser> have to run 2 utility instances
<erichammond> ah, sure.  AMIs don't matter.
<kim0> Windows .. I guess we can't really copy that yet :)
<smoser> you can't possibly expect that you can work with *any* 2 utility instance image ids
<smoser> right ?
<smoser> ie, it can't be windows, it has to have ssh...
<kim0> I mean the vol to be copied .. cant even be windows
<erichammond> er, don't matter to the user.
<smoser> right.
<Abd4llA> yeah, I thought we'd just document the utility instance requirements
<kim0> so it's Ubuntu instance copying any Linux
<Abd4llA> but ok, that works even better for me :)
<Abd4llA> smoser: one final Q, ubuntu instances have their apt repos configured per region, some blog post did a manual cleanup to the sources list of the AMI after migration
<kim0> I wonder if there's some higher level tool than tar .. to copy (potential partitions, fs, label, uuid, data, acl, xattr...)
<Abd4llA> *ubuntu AMIs
<smoser> Abd4llA, you should not need to do that.
<Abd4llA> I was considering offering the option to provide the end user with access to the AMI mounted under some directory and prompt him to do any manual cleanup
<smoser> /etc/apt/sources.list is written on instance-first-boot with appropriate data.
<kim0> nice
<Abd4llA> :)
<Abd4llA> nice
<smoser> Abd4llA, you could allow for something like tha tthough.
<smoser> it is possible that there are other things that someone would want to do.
<smoser> i'd suggest allowing the end user to input scripts to run, and execute those scripts on the utliity instance, passing them the path to the mount point, and possibly information like "region" or something.
<smoser> but thats getting fancy
<Abd4llA> hehe :) , but yeah that's possible, maybe running the scripts in a chroot ?
<kim0> smoser: do you think using some higher level tools (partimage ..etc) might make sense ?
<kim0> we're still losing acls, xattrs, selinux contexts ..etc right?
<kim0> with tar that is
<smoser> oh, i didn't see there was using tar
<smoser> dont use tar
<smoser> :)
<kim0> hehe
<kim0> actually I think it was rsync
<kim0> I still wonder if it can copy those
<smoser> rsync -aXHAS
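That flag set is archive mode plus the attributes worried about above: -a (archive), -X (xattrs), -H (hard links), -A (ACLs), -S (sparse). A local dry run of the same invocation, with throwaway directories:

```shell
# rsync -aXHAS: archive + xattrs + hard links + ACLs + sparse handling.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "hello" > "$src/etc/motd"
ln "$src/etc/motd" "$src/etc/motd.hardlink"   # hard link, preserved by -H
rsync -aXHAS "$src/" "$dst/"
```

The trailing slash on the source matters: it copies the *contents* of the directory rather than the directory itself, which is what you want when filling a freshly made filesystem on the destination volume.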
 * kim0 nods
<smoser> you could optionally allow the user to specify volume-copy, which you'd just use 'dd'.
<kim0> at which stage you would have done a full enterprise datacenter cloning utility :)
<Abd4llA> dd over nc ?
<kim0> I guess we'd wanna compress the ssh connection as well
<kim0> ssh I'd think
<Abd4llA> interesting :)
<kim0> cool
<kim0> Abd4llA: great work man .. rock on
<Abd4llA> thnx kim0 smoser
<kim0> Abd4llA: ping me if you need any help .. if I don't know, I'll at least point you
<Abd4llA> sure thing
<smoser> Abd4llA, no problem. feel free to ping.
<smoser> you would probably do better to just use rsync -z, than to compress the ssh session.
<smoser> hm..
<smoser> i think it would work:
<smoser> rsync -some-options-here -S /dev/sdg other-host:/dev/sdg
<smoser> would be better than dd as it wouldn't send zeros, or write zeros
 * Abd4llA consulting his big rsync man page
<smoser> the -some-option- was because in that case you dont want it to copy the node, but the contents of the device. so -a isn't right i dont think
<kim0> that block mode is probably simpler to implement
<smoser> yeah, i think you'd get it with no arguments.
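For the whole-device mode, a plain dd ships every zero block over the wire; GNU dd's conv=sparse (like rsync -S) skips them on the write side. A file-backed illustration of the same idea (the "volumes" here are ordinary sparse files, not real block devices):

```shell
# Copy a mostly-empty "volume" without writing out its zero blocks.
src=$(mktemp /tmp/src-vol.XXXXXX); dst="/tmp/dst-vol.$$"
dd if=/dev/zero of="$src" bs=1M count=0 seek=64 2>/dev/null   # 64 MiB, sparse
printf 'data' | dd of="$src" bs=1 seek=4096 conv=notrunc 2>/dev/null
dd if="$src" of="$dst" bs=1M conv=sparse 2>/dev/null
# Same logical size, but neither file occupies 64 MiB on disk:
du -h "$src" "$dst"
```

Against real devices the destination already exists, so there is nothing to truncate; the win is purely in skipped writes (and, with rsync -S over ssh, skipped transfer).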
<kim0> smoser: thanks for all the help
<jwstasiak> hey all - new to the cloud-init/config world. I'm running on an ec2 instance of maverick (ami: ami-cef405a7) and having a few problems: 1. I can't get ouput to send output to a file. 2. I haven't been able to turn off interactivity - installing sun-java6-bin looks like it's clobbering my apt packages. Any ideas?
<kim0> jwstasiak: for silent java install .. you need something like http://mmcgrana.github.com/2010/07/install-java-ubuntu.html
<kim0> jwstasiak: for redirecting logs to a file check user_setup in http://smoser.brickies.net/ubuntu/uec-seed/user-data
<kim0> for a sample
<jwstasiak> kim0: thanks - I had something similar in a user-data script I've been working on  - I was hoping there'd be a way to do it via cloud-config, but didn't see anyway of doing it after looking through the source (.5.15 ubuntu3)
<jwstasiak> kim0: after poking around everything today, I think the user-data scrpt is prolly the way to go for now
#ubuntu-cloud 2011-02-10
<smoser> jwstasiak is gone now, but if he comes back... in natty's version of cloud-init output is configurable via cloud-config.  see 'output' section at http://bazaar.launchpad.net/%7Ecloud-init-dev/cloud-init/trunk/annotate/head%3A/doc/examples/cloud-config.txt
<smoser> kim0, ^^ if you're ever asked again
<smoser> regarding debconf, look for debconf_selections in above url
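The cloud-config knob smoser points at looks roughly like this; the 'output' key is documented in the cloud-config.txt example linked above, while the log path here is illustrative:

```yaml
#cloud-config
# Redirect stdout/stderr of cloud-init's stages; "all" covers init,
# config and final. The path below is an illustrative choice.
output:
  all: ">> /var/log/cloud-init-output.log"
```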
#ubuntu-cloud 2011-02-11
<superxgl> hi all, i don't know much about managed-novlan mode, does it really need *two* nics and one switch ??
<superxgl> in this mode, the VMs use private IP addresses, right ?
<superxgl> i want to change to this mode
<superxgl> because i think this mode can solve my problem (i.e. a lack of sufficient IP addresses)
<superxgl> smoser : ok, sorry, i want to ask: can managed-novlan mode solve my problem (i.e. i don't have so many ip addresses)?
<smoser> TREllis, was just playing with that yesterday i think.
<superxgl> oh, TREllis, are you there?
<superxgl> i don't have many IP addresses and right now i use "system" mode, so some of my instances *CAN'T* acquire IP addresses
<superxgl> they are running with their IP address stuck at "0.0.0.0/0.0.0.0"..
<TREllis> superxgl: hi
<TREllis> superxgl: with system mode, they try and grab dhcp leases from your public network afaik
<TREllis> superxgl: best bet, if you are low on public addresses is to just remove the address range from eucalyptus.conf on the cc and use managed-novlan... then when launching instances use --addressing private
<TREllis> unless anyone else has a better idea :)
<TREllis> I have the same problems, public network doesn't have too many free ips... flat network for clc/cc/nc's each with public addresses but none to give to instances, not a problem if you want people to use those instances via the CLC/CC...
<TREllis> in this setup for web development, I've just added mod_proxy to the front end, which redirects to instances on the backend
<superxgl> TREllis: so that means my only choice is to use managed-novlan, or maybe managed mode?
<TREllis> superxgl: I *think* so, my knowledge of the different eucalyptus networking modes is rather fresh
<TREllis> superxgl: with managed of course, you could do vlan tagging too... this page is rather helpful: http://open.eucalyptus.com/wiki/EucalyptusNetworking_v1.6
<superxgl> TREllis: thanks. so maybe i will choose managed-vlan then - and it needs two NICs?
<superxgl> hmm...right now i only have 1 NIC.
<TREllis> superxgl: no, I'm using it on a flat network at the moment, of course, if you want the instances to be accessible outside of the cc you'll need to use public ips
<superxgl> no need for *two* NICs? i only want the instances to be accessible via the cc
<superxgl> but i see the doc says it needs two NICs
<superxgl> the book: eucalyptus beginners guide
<superxgl> anyway, if one NIC works, that sounds good to me
<TREllis> superxgl: works for me :)
<superxgl> TREllis : so ,  you use managed mode or managed novlan mode ?
<TREllis> superxgl: managed-novlan
<superxgl> TREllis: cool... and you said you just added mod_proxy to the front end - how do i do that? sorry, i am really a newbie at this..
<TREllis> superxgl: sure, one sec
<superxgl> Ok :)
<TREllis> superxgl: some rough notes http://pastebin.ubuntu.com/565937/
<TREllis> if you want to disable the http://clc/ redirect just take it out of /var/www/index.html
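A minimal sketch of the mod_proxy setup TREllis describes - a vhost on the front end forwarding to a backend instance (the hostname and private address below are hypothetical, not taken from his notes):

```apache
# /etc/apache2/sites-available/dev1
# Enable with: a2enmod proxy proxy_http && a2ensite dev1 && service apache2 reload
<VirtualHost *:80>
    ServerName dev1.example.com
    ProxyPreserveHost On
    # Forward all requests to a backend instance on the private network
    ProxyPass        / http://172.19.1.2/
    ProxyPassReverse / http://172.19.1.2/
</VirtualHost>
```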
<superxgl> TREllis: thank you very much for your kind help and the nice guide :)
<superxgl> you helped me a lot today
<TREllis> superxgl: np
<superxgl> then i can finally solve my problem - you know, it has bothered me for a few days :)
<TREllis> superxgl: it's not so apparent, although if you look at the debconf question you get asked on install (when it asks for the public ip range) it does state it pretty clearly - i missed that myself :)
<TREllis> superxgl: https://help.ubuntu.com/community/UEC/CDInstall?action=AttachFile&do=get&target=uec4.png that one :)
<superxgl> TREllis: i forgot to say that i use CentOS - i couldn't use UEC because my box doesn't support kvm :(
<superxgl> but it should be the same
<superxgl> TREllis: i will try it tomorrow :) it's late at night here, going to go now. good night and thanks again :)
<jmgalloway> I have a general ubuntu install question...why does it hang on the language selection screen?
#ubuntu-cloud 2011-02-12
<superxgl> TREllis: are you there?
<superxgl> TREllis: how did you configure this for managed-novlan mode?
<superxgl> VNET_DHCPDAEMON="/usr/sbin/dhcpd"
<superxgl> VNET_PUBINTERFACE="eth0"
<superxgl> VNET_PRIVINTERFACE="eth1"
<superxgl> VNET_MODE="MANAGED"
<superxgl> VNET_SUBNET="192.168.0.0"
<superxgl> VNET_NETMASK="255.255.0.0"
<superxgl> VNET_DNS="173.205.188.129"
<superxgl> VNET_ADDRSPERNET="32"
<superxgl> VNET_PUBLICIPS="173.205.188.131-173.205.188.150"
<superxgl> TREllis: since you said there's no need for two NICs, did you specify eth1 here?
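Note that the paste above actually shows VNET_MODE="MANAGED", not MANAGED-NOVLAN. A single-NIC MANAGED-NOVLAN config on the CC could look like this (reusing the addresses from the paste; with one NIC, pointing both interface settings at the same device is a common approach, though this is a sketch, not a verified config):

```shell
VNET_MODE="MANAGED-NOVLAN"
VNET_PUBINTERFACE="eth0"
VNET_PRIVINTERFACE="eth0"   # one NIC: public and private on the same interface
VNET_DHCPDAEMON="/usr/sbin/dhcpd"
VNET_SUBNET="192.168.0.0"
VNET_NETMASK="255.255.0.0"
VNET_DNS="173.205.188.129"
VNET_ADDRSPERNET="32"
VNET_PUBLICIPS="173.205.188.131-173.205.188.150"
```

Instances that shouldn't consume a public IP can then be launched with `--addressing private`, as TREllis suggested earlier.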
#ubuntu-cloud 2011-02-13
<balachmar> Hi, I want to download everything from an s3 bucket onto an external HD. For that I use s3fs to mount it as a local drive, and then rsync to create a copy. However, running tar c /data-dir/ | md5sum on the external hd and running the same command on an ec2 instance (also using s3fs to mount the bucket) results in different md5sums.
<balachmar> How do I know if the download was successful?
<kim0> balachmar: what about md5sum'ming individual files
<balachmar> kim0: I thought getting one for the entire dir was easier.
<kim0> balachmar: yeah but tracking where the difference occurs, helps spot the problem
<balachmar> kim0: Thanks for the idea. It seems that the fact that S3 doesn't have directories makes my way of md5summing a dir go wrong. At least for one dir the md5sums of the files are good. An ec2 instance is still calculating the md5sums for the other dir.
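kim0's suggestion of per-file checksums can be scripted. A sketch (paths are placeholders): checksum every regular file in each tree in a deterministic order, then diff the two lists - this sidesteps tar's sensitivity to directory entries and metadata, which s3fs does not reproduce faithfully.

```shell
# Compare per-file md5sums of two directory trees.
# Exits 0 if every file matches, non-zero (printing the differences) otherwise.
compare_trees() {
    local a="$1" b="$2"
    (cd "$a" && find . -type f -print0 | sort -z | xargs -0 -r md5sum) > /tmp/tree_a.md5
    (cd "$b" && find . -type f -print0 | sort -z | xargs -0 -r md5sum) > /tmp/tree_b.md5
    diff /tmp/tree_a.md5 /tmp/tree_b.md5
}
```

Usage would be e.g. `compare_trees /mnt/s3bucket /media/external/backup` on the machine that has both mounted.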
<kim0> smoser: Hi! Do those tools remove the need for a utility to migrate AMIs across regions??
<kim0> smoser: http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/index.html?CLTRG-ami-migrate-bundle.html
<kim0> smoser: http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/index.html?ApiReference-cmd-MigrateImage.html
<daker> hello kim0
<kim0> daker: hey
<flaccid> got this python error with cloud-init on debian squeeze (using 0.6.0 natty package) http://dpaste.org/Omy4/
<flaccid> smoser or erichammond you wouldn't be around by any chance
<benlake> Hmm, I believe kvm requires hardware support for virtualization and UEC/Ubuntu have focused on using kvm as the hypervisor of choice. So I'm wondering if anyone thinks it would be a good idea to put a notice up on http://www.ubuntu.com/cloud/private/deploy and https://help.ubuntu.com/community/UEC/CDInstall that says, "Don't bother running NCs without VT support."? Thoughts?
<kim0> benlake: the CDinstall page lists "VT extensions" in the min reqs table for NCs
<benlake> kim0: ah, indeed it does...
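The VT check benlake is talking about can be done before installing anything: the CPU flags in /proc/cpuinfo advertise the extensions KVM needs. A small sketch:

```shell
# Check whether a CPU advertises hardware virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V). Without them KVM full virtualization
# won't run, so the box is no use as a UEC node controller.
has_vt() {
    grep -Eq '(vmx|svm)' "$1"
}

# Typical use on a candidate NC:
#   has_vt /proc/cpuinfo && echo "VT present" || echo "no VT - don't bother"
```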
#ubuntu-cloud 2012-02-06
<jo-erlend> I installed qemu-kvm-spice. This also means kvm, etc. Do I need to do anything else, such as add myself to certain groups? I haven't done this stuff in a long while. :)
<jo-erlend> oh. I'm also installing virt-manager since I noticed it now supports spice.
<arosales> jcastro: Nice blog post on S3-backed EC2 mirrors.
#ubuntu-cloud 2012-02-08
<uksysadmin> hi all
<uksysadmin> I've got a bug to raise in Precise A2 against OpenStack - do I raise it as a nova bug or is there one specific to this distro?
<TREllis> uksysadmin: good question, I think either is fine the good thing about launchpad is you can target bugs to multiple distros/projects
<uksysadmin> cheers TREllis
<TREllis> uksysadmin: it's even easier for nova as their bugs are in launchpad too of course
<TREllis> uksysadmin: also, if you can, use 'ubuntu-bug' to file it
<uksysadmin> yeah cheers - i was never quite sure. it made sense for me to raise bugs in openstack when I was using their ppas, but now that it's in the main distro, I wondered if there was a more direct route.
<uksysadmin> I've raised this under openstack-nova in this case anyway now.
<TREllis> uksysadmin: Cool. It'll always get put right by someone anyway. For precise it probably doesn't even matter yet since it's not released - if it gets fixed by openstack upstream it'll be pulled into precise before release
<uksysadmin> true
<uksysadmin> cheers
<TREllis> uksysadmin: what's the bug number?
<uksysadmin> https://bugs.launchpad.net/nova/+bug/928819
<uvirtbot> Launchpad bug 928819 in nova "Launching instance fails in Precise A2 - network/manager.py too many values to unpack" [Undecided,New]
<TREllis> uksysadmin: thanks
#ubuntu-cloud 2012-02-10
<msw> smoser: around?
<msw> I just started a new 64-bit instance of 10.04 (ami-55dc0b3c) and apt-get commands are failing: https://gist.github.com/1791804
<msw> anyone seen this?
<msw> ok, apt-get update fixed it
<msw> nevermind
<smoser> msw, here.
<smoser> msw, right. that is expected.
<msw> smoser: *nod* - sorry, should have looked closer. :-P
<smoser> no problem.
<nOStahl> hi guys
<nOStahl> i'm gearing up to start a small local cloud
<nOStahl> found some towers with vt enabled
<nOStahl> got 2 for nodes and one for the controller.
<nOStahl> starting out, each node will have 2 gigs of ram and the controller will have 2 gigs as well
<nOStahl> i've been trying to find info on what gets stored where.
<nOStahl> what size hard drives do I need for each node?
<smoser> hm.. so anyone have thoughts about boto upgrade?
<erichammond> smoser: If an upgrade means that more AWS services are supported, then I'm all for it.
<erichammond> though I have no idea what context this conversation is in.
<smoser> erichammond, mostly just an upgrade
<smoser> and yeah, there are more services in it.
