[00:02] <erichammond> will do
[00:02] <erichammond> Let me know if you have any other questions or want me to give feedback on any content.
[00:06] <kim0> erichammond: Thanks man!
[00:06] <kim0> erichammond: drop me a line please if you update the ppa with the dependency .. Thanks
[00:09]  * kim0 → /dev/bed
[04:38] <jmgalloway> anyone online?
[04:38] <flaccid> nope
[07:37] <erichammond> kim0:  ec2-consistent-snapshot 0.37 has been uploaded to the Alestic PPA for Lucid, Maverick, Natty.  It includes the dependency on libnet-amazon-ec2-perl and updated install documentation.
[07:40] <erichammond> All that's needed to install it now is: sudo add-apt-repository ppa:alestic && sudo apt-get update && sudo apt-get install ec2-consistent-snapshot
[07:55] <kim0> erichammond: awesome .. you're da man ;)
[08:48] <erichammond> er, sudo apt-get update
[14:21] <atretes> Hi all, I'm currently setting up a cloud based on 10.04 and I've downloaded a repackaged image from ubuntu uec, but when I launch it into the cloud and attempt to ssh into it I get 'No route to host'.
[14:25] <atretes> However, I downloaded the 10.04 Lucid image from the Image Store and that image works fine with ssh etc. What could possibly be the problem?
[14:36] <kim0> atretes: did you publish the repackaged image using uec-publish-tarball ?
[14:38] <atretes> kim0: no I used the euca-bundle-image, euca-upload-bundle and euca-register
[14:39] <kim0> atretes: uec-publish-tarball is made to make this procedure easy and error free .. would you mind trying with it ?
[14:39] <smoser> atretes, pastebin euca-get-console-output <instance-id>
[14:39] <smoser> sorry to jump in, kim0 , but want to see if there is anything obvious there.
[14:39] <kim0> sure .. you're da man ;)
[14:39] <smoser> no route to host likely means higher level eucalyptus issues, though, i fear
[14:41] <kim0> smoser: while working with a community fellow yesterday, I noticed a natty instance did not launch in eu-west. Unfortunately I don't know the exact AMI id, but what I'm asking is, do we have some auto-testing of all uploaded AMIs to make sure they can boot and be ssh'able ?
[14:41] <kim0> if not, that'd be a nice tool to write I guess
[14:41] <smoser> we do not.
[14:41] <smoser> yes, we need much more testing.
[14:42] <smoser> i386 t1.micro is known broken in alpha2
[14:42]  * kim0 notes down
[14:42] <smoser> but all others "should work"
[14:42] <kim0> indeed .. that was probably it
[14:42] <smoser> it does work on dailies now
[14:42] <kim0> and I thought alpha2 had more testing than dailies :)
[14:52] <atretes> smoser, http://pastebin.com/w8gv6Bh8
[14:53] <smoser> well, kim0 it did. i *knew* alpha2 didn't work on t1.micro i386.
[14:53] <smoser> if you asked me about yesterday's daily, i wouldn't have known for sure :)
[14:53] <kim0> atretes: hehe
[14:53] <kim0> sorry .. that was to smoser
[14:53] <smoser> atretes, where did you get this image ?
[14:54] <smoser> it would seem to me that it is a lucid image, and it is waiting on the availability of eth0.
[14:55] <atretes> smoser, I downloaded the tarball from http://uec-images.ubuntu.com/releases/lucid/release/ and then added the postgresql package to it by mounting the image via loop and then bundled it with euca2ools
[14:56] <atretes> smoser, might it be a udev issue?
[14:56] <smoser> hm...
[14:57] <RobertLaptop> I have a licensing question.  We are currently using ESXi and want to move to a more cloud-based structure, but money is an issue and there is no budget for the conversion.  I was looking at various cloud options and found a webinar on ubuntu-cloud, but what I can't tell is if you have a CE version or a non-pay version?  Does anyone have any info on that?
[14:57] <atretes> smoser, I tried building my own kvm images from scratch and got the same issue but the image from the Ubuntu Image Store works with no problems
[14:57] <smoser> atretes, could you do me a favor and cut out the "added postgresql package" step ?
[14:57] <smoser> ie, just take the image you downloaded, do not modify it, and test it
[14:57] <smoser> via the same bundle and upload that you did after you modified it
[14:57] <kim0> RobertLaptop: if you don't need support .. you're free to use Ubuntu server/cloud for free forever
[14:59] <RobertLaptop> kim0, Cool.  What about Landscape Dedicated Server is that true as well?
[14:59] <atretes> smoser, sure let me try
[14:59] <kim0> RobertLaptop: nope, afaik landscape only offers a free trial
[14:59] <kim0> RobertLaptop: I'm sure price wise, it would be quite competitive however compared to other options
[15:00] <kim0> RobertLaptop: and you don't necessarily need it to manage UEC, although it does make things nicer
[15:03] <RobertLaptop> Is landscape a one time cost or a required subscription?  I am referring to the Dedicated Server not the hosted version.  Also does that mean all node servers have to be licensed as well?
[15:03] <smoser> atretes, but, in the end, i would recommend either uec-publish-tarball, or uec-publish-image (as kim0 suggested).  they're just much easier to work with IMO than euca-bundle-image, euca-upload-bundle, euca-register
[15:06] <atretes> smoser, yeah I will give that a bash too
[15:06] <kim0> RobertLaptop: I think you should contact Sales https://forms.canonical.com/sales/
[15:07] <RobertLaptop> kim0, I tried that a few months ago and never got a reply.  I guess I could retry.
[15:13] <atretes> smoser, Ok this sucks - I used the base 'untarnished' image that I downloaded and bundled it with euca2ools, and ssh is working. So what could possibly have changed with my custom image?
[15:14] <smoser> i do suspect udev persistent network
[15:15] <smoser> but don't really understand how you would have gotten those there.
[15:15] <smoser> hold on
[15:16] <smoser> atretes, do you have /etc/udev/rules.d/70-persistent-net.rules or /etc/udev/rules.d/z25_persistent-net.rules in the instance ?
[15:16] <smoser> err.. in your re-bundled image
[15:16] <atretes> smoser, well to get apt running in my mounted image I had to 'mount --bind /dev /tmp-mnt/dev' and then chroot... that might've messed with things
[15:18] <smoser> do you have those files there ? (/etc/udev)
[15:19] <atretes> smoser, well in the instance that is currently running it is 70-persistent-net.rules
[15:19] <smoser> well, i suspect that that file has a mac addr in it that is different than your instance
[15:20] <smoser> ie, your instance probably has eth1 and is sitting waiting forever for eth0
[15:20] <atretes> smoser, and the same in the re-bundled image
[15:21] <atretes> smoser, hmmm yeah that would be a problem - so should I just remove it completely?
[15:21] <smoser> atretes, yes, remove that.
[15:22] <smoser> in your image, remove it, then re-register an ami, and try again
[15:22] <smoser> i really don't know what would have gotten that file there... i guess somehow udev got started in your chroot ... and wrote that.
[15:22] <smoser> i'd not seen that before though
[15:23] <atretes> smoser, ok so I assume eucalyptus creates a new rule file when the image gets instantiated with the generated mac?
[15:24] <smoser> no
[15:24] <smoser> eucalyptus does not modify the image contents. (other than in the networking setups that do not have a metadata service, and then, they only insert .ssh/authorized_keys)
[15:25] <atretes> ok
[15:25] <smoser> fwiw, i find it a serious bug for them to tinker inside the image contents.
[15:25] <smoser> (i would be pretty ticked off if my thinkpad bios decided it should read the filesystem and modify some things on my behalf)
[15:25] <atretes> I think so too
[15:26] <atretes> but what confuses me is that if I ssh into the instance that is currently running, it does contain a 70-persistent-net.rules file...
[15:46] <atretes> smoser, you are a legend - my image is working! thanks so much :)
[15:47] <smoser> atretes, it *should* have that file
[15:47] <smoser> the instance should, but the image should not.
[15:47] <atretes> smoser, ah I understand
[15:48] <smoser> bug 341006 has more info
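The fix smoser walks atretes through — deleting the stale persistent-net udev rules from the image before re-bundling — could be sketched like this. This is a minimal illustrative helper, not code from any of the tools mentioned; the rule file paths are the two standard udev locations smoser names above, but the mount-point layout is an assumption:

```python
import os

# udev rule files that pin interface names to MAC addresses. If udev gets
# started inside a chroot while customizing an image, one of these can be
# written into the image; the instance then sees a different MAC, names its
# interface eth1, and waits forever for an eth0 that will never appear.
STALE_RULES = (
    "etc/udev/rules.d/70-persistent-net.rules",
    "etc/udev/rules.d/z25_persistent-net.rules",
)

def clean_persistent_net(mount_point):
    """Remove stale persistent-net udev rules from an image mounted at
    mount_point.  Returns the list of rule files actually removed."""
    removed = []
    for rel in STALE_RULES:
        path = os.path.join(mount_point, rel)
        if os.path.exists(path):
            os.remove(path)
            removed.append(rel)
    return removed
```

Run against the loop-mounted image (not the running instance — as smoser notes below, the instance *should* have the file; only the image should not), then re-bundle and re-register.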
[16:18] <kim0> erichammond: Pushed the cast, thanks for the help http://www.youtube.com/user/ubuntucloud#p/a/u/0/SPVqJWWiLVI
[17:16] <atretes> kim0, Very nice cast, does this work with eucalyptus?
[17:17] <kim0> atretes: um probably erichammond might know better
[17:17] <kim0> I guess it depends on the api compatibility level .. but generally should work I'd think
[17:21] <atretes> nice, I will check it out a bit further
[18:01] <erichammond> kim0, atretes: I've never used Eucalyptus/UEC and have no plans to as I'm happy to get out of the hardware maintenance business and let Amazon take care of it for me.
[18:04] <Abd4llA> ping kim0
[18:04] <kim0> Abd4llA: hey
[18:04] <kim0> How's it going
[18:05] <Abd4llA> fine, I've started implementing the EC2 AMI migration tool, saw you were looking for volunteers on your blog
[18:05] <kim0> woohoo
[18:05] <Abd4llA> was just gonna send a mail to mailing list
[18:05] <kim0> that sounds awesome
[18:05] <kim0> where can we check out the code
[18:06] <Abd4llA> https://code.launchpad.net/~abd4lla/+junk/ec2-ebs-migrate
[18:06] <Abd4llA> nothing fancy yet
[18:06]  * kim0 clicking
[18:06] <Abd4llA> was thinking of getting some opinions and asking for help; the tool is not big, but opinions at this point would be valuable
[18:07] <kim0> Abd4llA: Is this your first contribution to ubuntu
[18:08] <Abd4llA> first code contribution indeed, I delivered a session previously in AppDev week
[18:09]  * kim0 hugs Abd4llA 
[18:09] <Abd4llA> hehe
[18:09] <Abd4llA> so what do you think the plan should be?
[18:10] <kim0> Abd4llA: Ok, first of all ... Indeed I think you should send an email to the ubuntu-cloud list
[18:10] <kim0> so that others wanting to work on this tool can join forces
[18:10] <kim0> actually I'll probably try to hack on it a bit too
[18:10] <kim0> Other than that .. do you feel like you have concrete questions or parts you'd like help with ?
[18:12] <Abd4llA> not really, but it would be great if someone from the server guys did a quick review or something; as I said, opinions at this early stage are valuable
[18:12] <Abd4llA> specially regarding the general implementation approach
[18:12] <kim0> aha
[18:12] <kim0> smoser is generally the man who'd know best about that tool ..
[18:14] <kim0> smoser would you be able to quickly check out the implementation approach
[18:14] <kim0> Abd4llA: well don't expect something realtime :) but it'll come
[18:15] <kim0> hang in here for a while if that's ok
[18:15] <Abd4llA> sure
[18:17] <smoser> Abd4llA, so, reading a bit, overall looks reasonable. i like that you laid things out and documented what you're expecting to do
[18:18] <smoser> def prepareDestinationVol(dstInstance, volumeSize):
[18:19] <smoser> really, the ideal migration of the instance involves copying the filesystem type and LABEL also.
[18:19] <kim0> I notice a couple of issues .. Do we always assume ext3 ? Do we always assume the ebs vol is not partitioned ?
[18:19] <smoser> and i would even suggest UUID.
[18:19] <kim0> are those reasonable assumptions ?
[18:19] <smoser> ebs root volumes are not partitioned.  amazon/xen does tricks such that the root volume comes up when booted named /dev/sda1
[18:20] <smoser> (xen is really weird... actually, the device you're used to seeing as /dev/xvda1 or /dev/sda1 is not a partition, it is a funny named block device -- look in /sys and you'll see what i mean)
[18:20] <smoser> but, no, you can't assume ext3
[18:21] <Abd4llA> smoser: yeah, would do that indeed,
[18:21] <Abd4llA> kim0: I was thinking about detecting the fs, so far the only idea I've is using the "file" command
[18:21] <smoser> we're cheating in some way by not copying the full volume. we're only copying the filesystem contents, which is good, but if you lose attributes of that filesystem, it's bad.
[18:21] <smoser> :)
[18:22] <kim0> what about blkid
[18:22] <kim0> I hate running commands like so .. I wish Linux servers had a low level api :)
[18:23] <smoser> i recently did this for euca-bundle-vol and ec2-bundle-vol
[18:23] <smoser> and use blkid to get UUID and LABEL and TYPE
[18:24] <Abd4llA> ok, good enough for me
[18:25] <Abd4llA> smoser: are the kernel_ids available across regions ?
[18:25] <smoser> you can see mkfs at http://bazaar.launchpad.net/~ubuntu-virt/ubuntu/natty/euca2ools/natty/view/head:/euca2ools/euca2ools/__init__.py if you're interested.
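The blkid approach smoser describes could be sketched as follows. This is a hypothetical helper, not the euca2ools code he links to; it assumes blkid's `-o export` output format (one KEY=value pair per line):

```python
import subprocess

def get_fs_attrs(device):
    """Return the UUID, LABEL, and TYPE of the filesystem on a block
    device, using `blkid -o export` (KEY=value output, one per line)."""
    out = subprocess.check_output(["blkid", "-o", "export", device]).decode()
    return parse_blkid_export(out)

def parse_blkid_export(text):
    """Parse `blkid -o export` style output into the three attributes a
    migration needs to recreate the destination filesystem faithfully."""
    attrs = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            attrs[key] = value
    return {k: attrs.get(k) for k in ("UUID", "LABEL", "TYPE")}
```

The returned values would then drive mkfs (and label/UUID options) on the destination volume, so the copy keeps the same filesystem type, LABEL, and UUID as the source rather than assuming ext3.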
[18:25] <smoser> oh, good question.
[18:25] <kim0> consistent you mean ?
[18:25] <smoser> no.
[18:25] <Abd4llA> kim0: yes :)
[18:26] <smoser> for migrating, what you'll have to do is fish through available kernels/ramdisks in the target region and try to match
[18:26] <smoser> fun, eh?
[18:26] <Abd4llA> :S
[18:26] <kim0> smoser: try to match based on ?
[18:26] <smoser> based on manifest path
[18:26] <Abd4llA> I see
[18:27] <smoser> what i would suggest is first looking and seeing if there is a single match by owner-id and manifest-basename in the target region
[18:27] <smoser> if so, use it.
[18:27] <smoser> if there are no candidates matching basename, give up, require user to tell you
[18:27] <smoser> if there are more than 1 (and there will be for anything we've published), then you really get to fish
[18:28] <smoser> we use a naming convention, we have different buckets, and you have to be careful to stay in the same "bucket basename".
[18:28] <smoser> our buckets are named
[18:28] <smoser> according to https://wiki.ubuntu.com/UEC/Images/NamingConvention
[18:29] <smoser> ubuntu-kernels-us -> ubuntu-kernels-eu-west-1 -> ubuntu-kernels-ap-southeast-1
[18:29] <smoser> but we also have
[18:29] <smoser> ubuntu-kernels-testing-us -> ubuntu-kernels-testing-ap-southeast-1 ...
[18:30] <kim0> can't we just download the kernel and re-register it on the other side
[18:31] <kim0> smoser: also with newish pvgrub images .. we don't need to do anything right?
[18:32] <smoser> well, you still have to basename match
[18:32] <smoser> you cannot download kernels, and only privileged accounts can register on the other side
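The matching heuristic smoser outlines — accept a kernel in the target region only when exactly one image there shares the source kernel's owner id and manifest basename, otherwise make the user choose — might look roughly like this. The image records here are hypothetical dicts standing in for whatever the EC2 describe-images call returns, not a real API shape:

```python
import os

def find_matching_kernel(source_kernel, target_images):
    """Given the source region's kernel record and a list of kernel image
    records from the target region, return the id of the unique image with
    the same owner and manifest basename.  Returns None when there are zero
    or multiple candidates; the caller must then fall back to asking the
    user (or to fishing through the bucket naming convention)."""
    basename = os.path.basename(source_kernel["manifest"])
    candidates = [
        img for img in target_images
        if img["owner"] == source_kernel["owner"]
        and os.path.basename(img["manifest"]) == basename
    ]
    if len(candidates) == 1:
        return candidates[0]["id"]
    return None  # ambiguous or missing: give up, require user input
```

As smoser notes, for Ubuntu-published kernels there will usually be more than one basename match, so the bucket-basename convention from the NamingConvention wiki page is what disambiguates.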
[18:33] <kim0> Abd4llA: how are you planning on establishing ssh keys across the 2 instances ?
[18:33] <kim0> for rsync'ing
[18:35] <Abd4llA> kim0: I plan to generate a key pair on a machine and then copy the key to other one
[18:37] <kim0> so you'd download it locally .. and upload to the other side
[18:37] <kim0> I guess we'd have to do it that way
[18:38] <Abd4llA> I'd just remotely cat it from one machine, and remotely write it to the file on the other one
[18:38] <Abd4llA> if you consider that *downloading*
[18:39] <kim0> Abd4llA: catloading is better :)
[18:40] <Abd4llA> smoser: any suggestion regarding that point ?
[18:40] <smoser> i didn't follow it
[18:40] <kim0> copying ssh keys between the 2 sides
[18:41] <smoser> oh. i see. yeah, i think his solution is good.
[18:41] <smoser> you could use cloud-config to add the ssh key to the instances
[18:42] <kim0> smoser: inst1 needs to ssh into inst2 .. neither have private keys .. so we'd need to generate them I guess
[18:42] <smoser> create one locally, then, launch both instances such that they have that key (then you can even *use* that key to get to them)
[18:42] <smoser> then you wouldn't have to muck around with '--key' in the launching of the instance
[18:42] <smoser> that make sense ?
[18:43] <kim0> not to me .. I still think we *have* to generate
[18:43] <kim0> cloud-init puts public keys right ?
[18:43] <Abd4llA> I'm not fully aware of cloud-init
[18:43]  * Abd4llA googles
[18:43] <kim0> instance won't have a private key .. How can ubuntu@i1 ssh to ubuntu@i2 then ?
[18:44] <smoser> right, kim0
[18:44] <smoser> http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
[18:44] <smoser> Abd4llA, thats what you're looking for.
[18:44] <smoser>  * generate a new ssh private/public key
[18:45] <smoser>  * launch both instances with 'ssh_authorized_keys'
[18:45] <kim0> aha
[18:46] <smoser>  * connect to source instance, put private key in place (you can't do this from cloud-init, but you could do it with a runcmd or user-script)
[18:46] <erichammond> Info on uploading an ssh key to ec2: http://alestic.com/2010/10/ec2-ssh-keys
[18:46] <smoser> at that point, src can talk to dest
[18:46] <erichammond> though smoser's approach sounds simpler for this use.
[18:46] <smoser> erichammond, yeah, that works.
[18:46] <Abd4llA> but that'd add a dependency on cloud-init
[18:46] <erichammond> no need to involve EC2 account
[18:46] <smoser> i dont know if my use case is simpler or not.
[18:47] <smoser> Abd4llA, it's a dependency on the utility instances.
[18:47] <smoser> the end user doesn't give 2 hoots which instances you use to do this for them
[18:48] <kim0> erichammond: that uploads pub keys only right? no way to send over priv keys ?
[18:48] <smoser> i would suggest not requiring utility instance-ids to be input, but using either hard coded values, or values from http://uec-images.ubuntu.com/query (for known regions)
[18:48] <erichammond> kim0: Correct.  You are not giving EC2 access to your private keys, just the public side.
[18:48] <smoser> kim0, correct. you really don't ever want to give someone your private keys !
[18:48] <smoser> that was one of the benefits of the "upload keypair" functionality
[18:48]  * kim0 scratches head
[18:49] <kim0> if I'm only uploading pub keys to the 2 instances .. how can I expect them to ssh into one another
[18:49] <smoser> so, for erichammond's solution with upload-keypair to work, you'd still have to deal with getting the private key to the source instance.
[18:49] <smoser> you have to do that.
[18:49] <smoser> period
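Putting smoser's steps together — generate a keypair locally, launch both utility instances with the public half in cloud-config, then copy the private half to the source instance — the user-data side might be built like this. A sketch only: `ssh_authorized_keys` is the cloud-config key from the example file linked above, and the function name is made up for illustration:

```python
def build_user_data(public_key):
    """Build a #cloud-config user-data document that authorizes the given
    public key for the default user on a utility instance.  The private
    half of the pair is later pushed to the source instance (e.g. via scp,
    a runcmd, or a user-script -- cloud-init itself should never carry
    private keys), so the source can rsync to the destination."""
    return ("#cloud-config\n"
            "ssh_authorized_keys:\n"
            "  - %s\n" % public_key.strip())
```

The same user-data would be passed to both instance launches, which also means the local machine holding the private key can ssh into either of them, as smoser points out.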
[18:49] <Abd4llA> hmm, maybe I'm not getting the full requirements, so this utility would be used by end users, but they'd use predefined ids provided by ubuntu
[18:49] <smoser> why not?
[18:50] <kim0> the utility instance could be ubuntu .. but the tool could be copying centos or windows
[18:50] <erichammond> What "ids"?
[18:50] <kim0> erichammond: the utility ami
[18:50] <smoser> they have to run 2 utility instances to write to EBS volumes.
[18:50] <erichammond> ah, sure.  AMIs don't matter.
[18:50] <kim0> Windows .. I guess we can't really copy that yet :)
[18:50] <smoser> you can't possibly expect that you can work with *any* 2 utility instance image ids
[18:50] <smoser> right ?
[18:50] <smoser> ie, it can't be windows, it has to have ssh...
[18:51] <kim0> I mean the vol to be copied .. can't even be windows
[18:51] <erichammond> er, don't matter to the user.
[18:51] <smoser> right.
[18:51] <Abd4llA> yeah, I thought we'd just document the utility instance requirements
[18:51] <kim0> so it's Ubuntu instance copying any Linux
[18:51] <Abd4llA> but ok, that works even better for me :)
[18:54] <Abd4llA> smoser: one final Q, ubuntu instances have their apt repos configured per region, some blog post did a manual cleanup to the sources list of the AMI after migration
[18:54] <kim0> I wonder if there's some higher level tool than tar .. to copy (potential partitions, fs, label, uuid, data, acl, xattr...)
[18:54] <Abd4llA> *ubuntu AMIs
[18:55] <smoser> Abd4llA, you should not need to do that.
[18:55] <Abd4llA> I was considering offering the option to provide the end user with access to the AMI mounted under some directory and prompt him to do any manual cleanup
[18:55] <smoser> /etc/apt/sources.list is written on instance-first-boot with appropriate data.
[18:56] <kim0> nice
[18:56] <Abd4llA> :)
[18:56] <Abd4llA> nice
[18:56] <smoser> Abd4llA, you could allow for something like that though.
[18:56] <smoser> it is possible that there are other things that someone would want to do.
[18:57] <smoser> i'd suggest allowing the end user to input scripts to run, and execute those scripts on the utility instance, passing them the path to the mount point, and possibly information like "region" or something.
[18:57] <smoser> but thats getting fancy
[18:58] <Abd4llA> hehe :) , but yeah that's possible, maybe running the scripts in a chroot ?
[19:00] <kim0> smoser: do you think using some higher level tools (partimage ..etc) might make sense ?
[19:00] <kim0> we're still losing acls, xattrs, selinux contexts ..etc right?
[19:00] <kim0> with tar that is
[19:01] <smoser> oh, i didn't see that it was using tar
[19:01] <smoser> dont use tar
[19:01] <smoser> :)
[19:01] <kim0> hehe
[19:01] <kim0> actually I think it was rsync
[19:01] <kim0> I still wonder if it can copy those
[19:02] <smoser> rsync -aXHAS
[19:03]  * kim0 nods
[19:03] <smoser> you could optionally allow the user to specify volume-copy, for which you'd just use 'dd'.
[19:03] <kim0> at which point you would have built a full enterprise datacenter cloning utility :)
[19:04] <Abd4llA> dd over nc ?
[19:04] <kim0> I guess we'd wanna compress the ssh connection as well
[19:04] <kim0> ssh I'd think
[19:04] <Abd4llA> interesting :)
[19:04] <kim0> cool
[19:04] <kim0> Abd4llA: great work man .. rock on
[19:05] <Abd4llA> thnx kim0 smoser
[19:05] <kim0> Abd4llA: ping me if you need any help .. if I don't know, I'll at least point you
[19:06] <Abd4llA> sure thing
[19:06] <smoser> Abd4llA, no problem. feel free to ping.
[19:07] <smoser> you would probably do better to just use rsync -z, than to compress the ssh session.
[19:07] <smoser> hm..
[19:07] <smoser> i think it would work:
[19:07] <smoser> rsync -some-options-here -S /dev/sdg other-host:/dev/sdg
[19:08] <smoser> would be better than dd as it wouldn't send zeros, or write zeros
[19:08]  * Abd4llA consulting his big rsync man page
[19:09] <smoser> the -some-option- was because in that case you don't want it to copy the node, but the contents of the device. so -a isn't right, i don't think
[19:11] <kim0> that block mode is probably simpler to implement
[19:11] <smoser> yeah, i think you'd get it with no arguments.
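The two copy modes discussed above could be captured in one helper: filesystem mode uses smoser's `-aXHAS` (archive plus xattrs, hardlinks, ACLs, and sparse handling, answering kim0's concern about losing attributes), while device mode drops `-a` so rsync copies contents rather than recreating the device node. A sketch under those assumptions; note smoser is speculating that rsync handles raw devices here, and stock rsync may actually need the `--copy-devices` patch for that:

```python
def rsync_args(mode, src, dest, compress=False):
    """Return the rsync argv for one of the two copy modes discussed:
    'fs'     -- copy a mounted filesystem tree, preserving xattrs (-X),
                hardlinks (-H), ACLs (-A), and sparseness (-S)
    'device' -- copy raw device contents; no -a, since we want the
                contents of the device, not the device node itself
                (stock rsync may need the --copy-devices patch for this)
    """
    if mode == "fs":
        flags = "-aXHAS"
    elif mode == "device":
        flags = "-S"
    else:
        raise ValueError("mode must be 'fs' or 'device'")
    args = ["rsync", flags]
    if compress:
        args.append("-z")  # rsync-level compression, per smoser's -z tip
    return args + [src, dest]
```

For example, `rsync_args("fs", "/mnt/src/", "desthost:/mnt/dst/", compress=True)` builds the filesystem-copy command, with compression done by rsync itself rather than by the ssh transport.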
[19:12] <kim0> smoser: thanks for all the help
[20:26] <jwstasiak> hey all - new to the cloud-init/config world. I'm running on an ec2 instance of maverick (ami: ami-cef405a7) and having a few problems: 1. I can't get 'output' to send output to a file. 2. I haven't been able to turn off interactivity - installing sun-java6-bin looks like it's clobbering my apt packages. Any ideas?
[22:03] <kim0> jwstasiak: for silent java install .. you need something like http://mmcgrana.github.com/2010/07/install-java-ubuntu.html
[22:04] <kim0> jwstasiak: for redirecting logs to a file check user_setup in http://smoser.brickies.net/ubuntu/uec-seed/user-data
[22:04] <kim0> for a sample
[22:11] <jwstasiak> kim0: thanks - I had something similar in a user-data script I've been working on - I was hoping there'd be a way to do it via cloud-config, but didn't see any way of doing it after looking through the source (.5.15 ubuntu3)
[22:12] <jwstasiak> kim0: after poking around everything today, I think the user-data script is prolly the way to go for now