[03:45] <mr-russ1> dummy cloud question.  How is the private cloud different from just using KVM?
[04:20] <sidnei> mr-russ1, it's got an ec2-compatible API, so if you ever decide to move out, you can use the same tools.
[04:22] <mr-russ1> okay, so mainly it provides a better management interface (possibly) and the ability to pick up the machines and move to another cloud.  does it provide HA by default? you need a bit more config for that with KVM.
[04:26] <RoAkSoAx> ha as in infrastructure or ha as in instances?
[04:26] <RoAkSoAx> and it doesn't by default
[04:28] <RoAkSoAx> mr-russ1
[04:28] <RoAkSoAx> ^^
[04:31] <mr-russ1> ha as in instances.  If I put up X nodes, it will fail over if one node drops out.  like VMWare's HA stuff.
[04:32] <RoAkSoAx> so as in live migration
[04:33] <RoAkSoAx> you could do HA by running a second instance if the application running in the first one fails
[04:34] <RoAkSoAx> but as in the instance itself, like live migration, not that i know of
[04:34] <mr-russ1> okay, I still don't understand cloud computing stuff properly at all.
[04:35] <RoAkSoAx> well i see it as a highly scalable cluster to be able to run virtualized servers
[04:36] <mr-russ1> by scalable you mean add lots of nodes and can then run more machines?
[04:37] <RoAkSoAx> btw you can also set up HA in kvm by failing over if the HW or OS fails or similar, and that is not instance related
[04:37] <RoAkSoAx> i believe vmware does the same but
[04:37] <RoAkSoAx> mr-russ1 yes that as scalable
[04:38] <mr-russ1> I've got kvm running on a host and wondering what benefit I might get if I moved to the cloud when I'm looking at expanding.  Mainly I've had great difficulty understanding the difference between cloud and kvm ha.
[04:40] <RoAkSoAx> KVM HA is just 2 nodes. when one fails it fails over to the slave node that takes control of the service
[04:40] <mr-russ1> okay.  on the cloud, do you assign machines to specific nodes?
[04:41] <RoAkSoAx> in the cloud you have a cluster of nodes that run instances using a hypervisor such as kvm in the case of ubuntu
[04:42] <RoAkSoAx> you might have 10 physical nodes running various virtual instances
[04:43] <RoAkSoAx> and provides scalability by allowing you to add more nodes easily to run more images
[04:44] <mr-russ1> and you move images between nodes if you want to?
[04:45] <RoAkSoAx> idk, but it should be possible. i'm not a cloud expert, but kvm does provide live migration so i believe it should be possible
[04:46] <mr-russ1> it feels a lot like vmware vmotion and ha together.  run X physical servers with Y vm's/images and if 1 physical server dies, you keep going.  Need more grunt, add another node.
[04:46] <mr-russ1> RoAkSoAx: you look like an expert compared with myself :)
[04:46] <mr-russ1> thanks for answering my questions.
[04:49] <RoAkSoAx> well in kvm in a 2 node cluster you do live migration and HA, but limited to two nodes, while a cloud provides scalability mainly. i just read that it should support live migration soon
[04:49] <RoAkSoAx> and no problem glad to help
[04:51] <mr-russ1> hmm, reading the ubuntu install guide, you can't oversubscribe your cpus with ec2.
[11:22] <kim0> Hey folks, any idea why this is not working
[11:22] <kim0> ec2-describe-images -o canonicalteam
[13:35] <progre55> hi guys. how to bundle an image from a running instance? euca-bundle-image?
[14:46] <niemeyer> progre55: No, that works with local files
[14:47] <niemeyer> progre55: Is it an EBS image?
[14:48] <progre55> niemeyer: no, just a simple image
[14:49] <niemeyer> progre55: You'll likely need ec2-bundle-vol then
[14:49] <progre55> oh, how about euca-bundle-vol?
[14:57] <progre55> niemeyer: ^^
[14:58] <niemeyer> progre55: Yeah, that should do it
[14:59] <progre55> thanks
[15:02] <niemeyer> progre55: np!
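(The bundle-from-a-running-instance flow above can be sketched with euca2ools; a hedged example, assuming standard EC2-style credential environment variables are set, with bucket name, prefix, and size as placeholders:)

```shell
# Bundle the running instance's root volume into /mnt (size in MB),
# then upload the bundle and register it as an image.
# All names here are illustrative, not from the conversation.
sudo -E euca-bundle-vol -d /mnt -p myimage -s 2048 \
    -c "${EC2_CERT}" -k "${EC2_PRIVATE_KEY}" -u "${EC2_USER_ID}"
euca-upload-bundle -b my-bucket -m /mnt/myimage.manifest.xml
euca-register my-bucket/myimage.manifest.xml
```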
[18:10] <RoAkSoAx> Hey anyone know if UEC already supports live migration?
[18:49] <erichammond> kim0: -o expects a numerical AWS user id.  "amazon"  and "self" are special exceptions.
[18:49] <kim0> erichammond: hey eric :)
[18:49] <kim0> thanks
[18:49] <kim0> any idea how to map the username into an ID
[18:50] <erichammond> There is no username in AWS.  It's a figment of your imagination.
[18:50] <erichammond> :)
[18:50] <erichammond> (or some other tool you might be using?)
[18:51] <kim0> hmm .. I see
[18:51] <kim0> thanks
[18:51] <erichammond> Though I suppose the user identifier in the new IAM might be considered a username of sorts. It just doesn't map to an account.
[18:56] <kim0> The thing is .. when I view an AMI like http://developer.amazonwebservices.com/connect/entry.jspa?externalID=3872
[18:56] <kim0> It says submitted by: canonicalteam, so I was thinking I can probably filter by that .. but it seems not
[18:57] <erichammond> Ah, that is the username of the person who submitted the article on the AWS forum software.  It is unrelated to AWS itself.
[18:58] <erichammond> You can see the numerical user id of the user that created any given AMI and then find out other AMIs created by the same account.
[18:59] <erichammond> For example, ami-1a837773 on that page was created by userid 099720109477
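(Putting erichammond's two points together, the command kim0 wanted would use the numeric account id from the AMI page rather than a forum username; a sketch:)

```shell
# -o ("owner") takes a numerical AWS account id, or the special
# values "amazon" and "self" -- not a display name.
# 099720109477 is the account id mentioned above for ami-1a837773.
ec2-describe-images -o 099720109477
```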
[19:46] <RoAkSoAx> kim0: ping
[19:46] <kim0> RoAkSoAx: pong
[19:47] <RoAkSoAx> kim0: quick question. If I change the Cloud IP, would it be better to manually change it too in the CLC at CLOUD_IP_ADDR="$addr" ?
[19:47] <RoAkSoAx> in /etc/eucalyptus/eucalyptus-ipaddr.conf
[19:50] <RoAkSoAx> kim0: or when I change it in the Web Interface, it is automatically detected?
[19:50] <kim0> RoAkSoAx: I'm not sure .. hang on for an answer from the devs
[19:51] <RoAkSoAx> kim0: ok ty ;)
[19:52] <RoAkSoAx> hggdh smoser Daviey kirkland any ideas ^^ ?
[19:53] <Daviey> RoAkSoAx: I think it's a suck it and see.  I know i have never needed to do it.
[19:54] <Daviey> Sorry, couldn't be more help
[19:55] <hggdh> RoAkSoAx: yes. I would expect changing the CLC ip address would need a restart
[19:55] <RoAkSoAx> Daviey: npp :) thanks though
[19:55] <hggdh> you might also need to check the other components -- probably they will update the registration, but I never tried it
[19:55] <smoser> kim0, fwiw, those pages i hope to get assigned and maintained by a different ec2 account
[19:56] <smoser> so in the future that 'canonicalteam' would be something else anyway.
[19:56] <smoser> was hoping to start that process today.
[19:56] <smoser> ami pages are real PITA
[19:56] <kim0> smoser: got ya .. thanks
[19:56] <smoser> creating them takes 2+ weeks before they get acked
[19:56] <RoAkSoAx> hggdh: right... well for one, the keys will change, so that's for sure. then in the CC I can just specify the CLC ip, however IDK if I should also do that for the CLC itself
[19:57] <RoAkSoAx> i guess I'll just have to try
[19:57] <hggdh> you should do it on the CLC also
[19:58] <RoAkSoAx> hggdh: well I have changed the IP and everything seemed to be working, but I just wasn't sure if i should do a manual change of the CLC ip in the same CLC at /etc/eucalyptus/eucalyptus-ipaddr.conf instead of letting it be obtained automatically
[19:58] <RoAkSoAx> because either way, i'm using a VirtualIP as the IP for the cloud
[19:59] <RoAkSoAx> in an HA environment i'm setting up
[19:59] <hggdh> RoAkSoAx: you should not need to touch eucalyptus-ipaddr.conf, there are no IPs there
[19:59] <hggdh> it is sourced by other scripts
[19:59] <RoAkSoAx> hggdh: that's the thing. I'm not using a "regular static" ip for the CLC so idk if the sourcing will work as I expect
[20:00] <RoAkSoAx> because I'm using a VirtualIP shared between two CLC's in HA
[20:00] <hggdh> oh
[20:00] <hggdh> now this is interesting
[20:00] <RoAkSoAx> so if CLC1 fails, the CLC2 will have that VIP for the CLC
[20:01] <hggdh> so it is a head-of-cluster scenario, where only one is active
[20:01] <RoAkSoAx> hggdh: I did a really simple test over the weekend, but I'm not sure how that'd work. So i'm redoing it :)
[20:01] <RoAkSoAx> hggdh: yes, an Active/Passive HA Cluster
[20:01] <hggdh> and the DBs, where are they?
[20:01] <RoAkSoAx> hggdh: replicated with DRBD
[20:02] <hggdh> I wonder what would happen with currently-running instances
[20:02] <hggdh> well, anyway, on an active-passive scenario, the backup CLC would be down
[20:02] <hggdh> so, after you move the VIP, you start it, and all is fine
[20:03] <hggdh> both should be set to the VIP
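(The active/passive CLC-behind-a-VIP setup being discussed can be sketched with Pacemaker's crm shell; this is a guess at the configuration, not RoAkSoAx's actual one — resource names, the IP, and the use of an LSB wrapper for eucalyptus-cloud are all assumptions:)

```shell
# Floating VIP that follows the active CLC node.
crm configure primitive clc-vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=10s

# The CLC service itself (assumes an init/LSB script is available).
crm configure primitive clc-svc lsb:eucalyptus-cloud \
    op monitor interval=30s

# Group them so the VIP and the service always move together;
# the DBs underneath would be kept in sync with DRBD as mentioned.
crm configure group clc clc-vip clc-svc
```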
[20:04] <RoAkSoAx> hggdh: I have 4 nodes, 1 CLC, 1 Walrus, 1 CC/SC, 1 NC. So, theoretically, if CLC1 goes down, it shouldn't affect the CC nor the NC because CLC2 will take over the service with the VIP
[20:04] <RoAkSoAx> and that VIP is the Cloud IP
[20:04] <hggdh> yes, I understand that. I am just unsure how the CLC recovers (never tested this scenario)
[20:05] <hggdh> RoAkSoAx: what I mean: I am *very* interested on your results
[20:05] <RoAkSoAx> hggdh: these are actually part of the Cluster Stack blueprint so there's still a long way to go... :)
[20:05] <hggdh> heh
[20:05] <hggdh> RoAkSoAx: keep in mind that the CLC is the glue between all components
[20:06] <hggdh> I know the CC recovers from failures, I just do not know what happens when the CLC fails
[20:08] <RoAkSoAx> hggdh: I thought the CC had in memory which instances are running, and if it fails, that tracking would be lost...?
[20:12] <hggdh> RoAkSoAx: I found that it recovers the sessions -- when it comes up it queries the NCs
[20:13] <hggdh> I am not sure about how far it goes (security groups, iptables, etc)
[20:13] <hggdh> I only found it by accident, when upgrading my test rig
[20:14] <RoAkSoAx> hggdh: if that's the case, it would be really simple to provide HA
[20:15] <RoAkSoAx> hggdh: if not, there would have to be some kind of sync daemon between two CC's in HA for the running instances
[20:16] <hggdh> yes
[20:16] <hggdh> I think the best course here would be to ask upstream about it
[20:17] <RoAkSoAx> hggdh: indeed, but I remember reading in one eucalyptus forum post that they will provide HA only for their paid version
[20:18] <hggdh> RoAkSoAx: we can ask some slightly different questions: what would happen if the CLC is restarted? The SC? The CC?
[20:19] <hggdh> and then plan around it
[20:20] <RoAkSoAx> hggdh: from what I tested over the weekend, if CLC1 fails, CLC2 takes control over the cluster without any problems. However, I haven't tested this with running instances
[20:21] <hggdh> RoAkSoAx: which is good. Now we should try to find what happens with running sessions. But I think this is the way to go indeed
[20:21] <hggdh> RoAkSoAx: thank you for doing it :-)
[20:21] <RoAkSoAx> hggdh: either way, the hardest part will be to provide HA to the NC... which will have to be with live migration
[20:22] <hggdh> RoAkSoAx: why HA the NC? If the NC goes down, all instances there are already lost
[20:22] <RoAkSoAx> hggdh: and in any HA environment, it is expected to "lose" the connection for a few seconds while performing the failover
[20:24] <RoAkSoAx> hggdh: it is possible to set up a 2 node KVM cluster. Imagine that images are running on node1. If node1 fails, then node2 will take control of the service, by "live migrating" the instances from node1
[20:24] <RoAkSoAx> hggdh: however, to be able to do this, you of course need shared storage between the two nodes
[20:25] <RoAkSoAx> so it is simple: if node1 fails, node2 will start the instances that were running on node1
[20:25] <hggdh> I can understand having the libvirt storage on a NAS-something, but there is no instance to recover -- they went down
[20:25] <hggdh> so there is more than just restarting them -- the services being run must be set for recovery also
[20:27] <RoAkSoAx> hggdh: http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent)
[20:29] <hggdh> empty page?
[20:30] <RoAkSoAx> hggdh: check that _(resource_agent) is in the URL
[20:32] <hggdh> RoAkSoAx: heh, that was it, thank you
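(The VirtualDomain resource agent from the linked linux-ha page manages a libvirt/KVM guest under Pacemaker, which is what the node1/node2 failover above describes. A hedged sketch — the domain name, config path, and timeouts are placeholders, and shared storage for the image is assumed:)

```shell
# Manage KVM guest "vm1" as a cluster resource; allow-migrate lets
# Pacemaker live-migrate it instead of stop/start when possible.
crm configure primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm1.xml" \
           hypervisor="qemu:///system" \
           migration_transport="ssh" \
    meta allow-migrate="true" \
    op monitor interval=10s timeout=30s \
    op migrate_to timeout=120s \
    op migrate_from timeout=120s
```

If the node hosting vm1 dies outright there is nothing left to migrate, so in that case the cluster simply restarts the guest on the surviving node — matching hggdh's point that the services inside must also be set up for recovery.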
[20:38] <hggdh> RoAkSoAx: yes, this is interesting. How would we recover the security groups?
[20:39] <RoAkSoAx> hggdh: idk yet :) I actually haven't looked into HA for CC, NC and Walrus... but seems it's going to be a hard process :)
[20:40] <hggdh> mind keeping me posted of your results?
[20:42] <RoAkSoAx> hggdh: sure ;)
[20:43] <RoAkSoAx> hggdh: btw, one more thing: what is the eucalyptus-cloud-publication upstart script for?
[20:45] <hggdh> it runs the avahi service for the CLC
[20:45] <hggdh> auto-registration
[20:46] <RoAkSoAx> hggdh: so, when everything is already registered, we only need eucalyptus and eucalyptus-cloud started?
[20:46] <hggdh> hum
[20:46] <hggdh> this is a question for Daviey ;-)
[20:47] <RoAkSoAx> Daviey: ^^ :)
[20:48] <RoAkSoAx> hggdh: thanks btw :)
[22:19] <RoAkSoAx> hggdh: re-tested. So far, so good
[22:20] <hggdh> RoAkSoAx: \o/
[22:21] <RoAkSoAx> hggdh: will post config steps someday this week xD
[23:14] <RoAkSoAx> hggdh: how/where do I tell the walrus which CLC IP to use?
[23:22] <hggdh> RoAkSoAx: you have to register it
[23:23] <hggdh> from the CLC via 'sudo euca_conf'
[23:39] <RoAkSoAx> hggdh: yeah but what I mean is that I want the walrus to contact the CLC via a specific ip address. For example, the CC uses VNET_CLOUDIP
[23:42] <RoAkSoAx> hggdh: or, where do I tell the CLC "use XX network interface"