[03:45] Dumb cloud question: how is the private cloud different from just using KVM?
[04:20] mr-russ1, it's got an EC2-compatible API, so if you ever decide to move out, you can use the same tools.
[04:22] okay, so mainly it provides a better management interface (possibly) and the ability to pick up the machines and move them to another cloud. Does it provide HA by default? You need a bit more config for that with KVM.
[04:26] HA as in infrastructure, or HA as in instances?
[04:26] and it doesn't, by default
[04:28] mr-russ1
[04:28] ^^
[04:28] RoAkSoAx: Error: "^" is not a valid command.
[04:31] HA as in instances. If I put up X nodes, it will fail over if one node drops out. Like VMware's HA stuff.
[04:32] so as in live migration
[04:33] you could do HA by running a second instance, if the application running in the first one fails
[04:34] but as in the instance itself, like live migration? Not that I know of
[04:34] okay, I still don't understand cloud computing stuff properly at all.
[04:35] well, I see it as a highly scalable cluster for running virtualized servers
[04:36] by scalable you mean add lots of nodes and you can then run more machines?
[04:37] btw, you can also set up HA in KVM by failing over if the HW or OS fails or similar, and that is not instance-related
[04:37] I believe VMware does the same, but
[04:37] mr-russ1: yes, that's what I mean by scalable
[04:38] I've got KVM running on a host and I'm wondering what benefit I might get if I moved to the cloud when I'm looking at expanding. Mainly I've had great difficulty understanding the difference between cloud and KVM HA.
[04:40] KVM HA is just 2 nodes. When one fails, it fails over to the slave node, which takes control of the service
[04:40] okay. On the cloud, do you assign machines to specific nodes?
[04:41] in the cloud you have a cluster of nodes that run instances using a hypervisor, such as KVM in the case of Ubuntu
[04:42] you might have 10 physical nodes running various virtual instances
[04:43] and it provides scalability by allowing you to add more nodes easily to run more images
[04:44] and you can move images between nodes if you want to?
[04:45] idk, but it should be possible. I'm no cloud expert, but KVM does provide live migration, so I believe it should be possible
[04:46] it feels a lot like VMware vMotion and HA together. Run X physical servers with Y VMs/images, and if 1 physical server dies, you keep going. Need more grunt, add another node.
[04:46] RoAkSoAx: you look like an expert compared with myself :)
[04:46] thanks for answering my questions.
[04:49] well, with KVM in a 2-node cluster you get live migration and HA, but limited to two nodes, while a cloud mainly provides scalability. I just read that it should support live migration soon
[04:49] and no problem, glad to help
[04:51] hmm, reading the Ubuntu install guide, you can't oversubscribe your CPUs with EC2.
[11:22] Hey folks, any idea why this is not working?
[11:22] ec2-describe-images -o canonicalteam
[13:35] hi guys. How do I bundle an image from a running instance? euca-bundle-image?
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
=== niemeyer_ is now known as niemeyer
[14:46] progre55: No, that works with local files
[14:47] progre55: Is it an EBS image?
[14:48] niemeyer: no, just a simple image
[14:49] progre55: You'll likely need ec2-bundle-vol then
[14:49] oh, how about euca-bundle-vol?
[14:57] niemeyer: ^^
[14:58] progre55: Yeah, that should do it
[14:59] thanks
[15:02] progre55: np!
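In practice, niemeyer's suggestion comes down to something like the following — a minimal sketch run from inside the instance, with hypothetical bucket/prefix names and euca2ools-era flags:

    # Inside the running instance: bundle the root filesystem into /mnt,
    # then upload the bundle to Walrus/S3 and register it as an image.
    # "myimage" and "mybucket" are hypothetical names; -s is the size in MB.
    sudo euca-bundle-vol -d /mnt -p myimage -s 2048 \
        -u "$EC2_USER_ID" -c "$EC2_CERT" -k "$EC2_PRIVATE_KEY"
    euca-upload-bundle -b mybucket -m /mnt/myimage.manifest.xml
    euca-register mybucket/myimage.manifest.xml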
=== xfaf is now known as zul
[18:10] Hey, anyone know if UEC already supports live migration?
[18:49] kim0: -o expects a numerical AWS user id. "amazon" and "self" are special exceptions.
=== dendrobates is now known as dendro-afk
[18:49] erichammond: hey eric :)
[18:49] thanks
[18:49] any idea how to map the username to an ID?
[18:50] There is no username in AWS. It's a figment of your imagination.
[18:50] :)
[18:50] (or some other tool you might be using?)
[18:51] hmm .. I see
[18:51] thanks
[18:51] Though I suppose the user identifier in the new IAM might be considered a username of sorts. It just doesn't map to an account.
[18:56] The thing is .. when I view an AMI like http://developer.amazonwebservices.com/connect/entry.jspa?externalID=3872
[18:56] it says "submitted by: canonicalteam", so I was thinking I could probably filter by that .. but it seems not
[18:57] Ah, that is the username of the person who submitted the article on the AWS forum software. It is unrelated to AWS itself.
[18:58] You can see the numerical user id of the user that created any given AMI and then find other AMIs created by the same account.
[18:59] For example, ami-1a837773 on that page was created by userid 099720109477
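So the lookup kim0 was attempting works by account id rather than by name — e.g., with the id erichammond gives above:

    # List the public images owned by the account that created ami-1a837773
    ec2-describe-images -o 099720109477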
=== dendro-afk is now known as dendrobates
=== dendrobates is now known as dendro-afk
[19:46] kim0: ping
[19:46] RoAkSoAx: pong
[19:47] kim0: quick question. If I change the cloud IP, would it be better to manually change it too in the CLC, at CLOUD_IP_ADDR="$addr"?
[19:47] in /etc/eucalyptus/eucalyptus-ipaddr.conf
[19:50] kim0: or, when I change it in the web interface, is it automatically detected?
[19:50] RoAkSoAx: I'm not sure .. hang on for an answer from the devs
[19:51] kim0: ok ty ;)
[19:52] hggdh smoser Daviey kirkland any ideas ^^ ?
[19:53] RoAkSoAx: I think it's a suck-it-and-see. I know I have never needed to do it.
[19:54] Sorry, couldn't be more help
=== dendro-afk is now known as dendrobates
[19:55] RoAkSoAx: yes. I would expect changing the CLC IP address to need a restart
[19:55] Daviey: np :) thanks though
[19:55] you might also need to check the other components -- probably they will update the registration, but I never tried it
[19:55] kim0, fwiw, I hope to get those pages assigned to and maintained by a different EC2 account
[19:56] so in the future that 'canonicalteam' would be something else anyway.
[19:56] was hoping to start that process today.
[19:56] AMI pages are a real PITA
[19:56] smoser: got ya .. thanks
[19:56] creating them takes 2+ weeks before they get acked
[19:56] hggdh: right... well, for one, the keys will change, that's for sure. Then in the CC I can just specify the CLC IP; however, IDK if I should also do that for the CLC itself
[19:57] I guess I'll just have to try
[19:57] you should do it on the CLC also
[19:58] hggdh: well, I have changed the IP and everything seemed to be working, but I just wasn't sure if I should do a manual change of the CLC IP on the same CLC in /etc/eucalyptus/eucalyptus-ipaddr.conf, instead of letting it be obtained automatically
[19:58] because either way, I'm using a virtual IP as the IP for the cloud
[19:59] in an HA environment I'm setting up
[19:59] RoAkSoAx: you should not need to touch eucalyptus-ipaddr.conf, there are no IPs there
[19:59] it is sourced by other scripts
[19:59] hggdh: that's the thing. I'm not using a "regular static" IP for the CLC, so idk if the sourcing will work as I expect
[20:00] because I'm using a virtual IP shared between two CLCs in HA
[20:00] oh
[20:00] now this is interesting
[20:00] so if CLC1 fails, CLC2 will take over that VIP for the CLC
[20:01] so it is a head-of-cluster scenario, where only one is active
[20:01] hggdh: I did a really simple test over the weekend, but I'm not sure how that'd work. So I'm redoing it :)
[20:01] hggdh: yes, an active/passive HA cluster
[20:01] and the DBs, where are they?
[20:01] hggdh: replicated with DRBD
[20:02] I wonder what would happen with currently-running instances
[20:02] well, anyway, in an active/passive scenario, the backup CLC would be down
[20:02] so, after you move the VIP, you start it, and all is fine
[20:03] both should be set to the VIP
[20:04] hggdh: I have 4 nodes: 1 CLC, 1 Walrus, 1 CC/SC, 1 NC. So, theoretically, if CLC1 goes down, it shouldn't affect the CC or the NC, because CLC2 will take over the service with the VIP
[20:04] and that VIP is the cloud IP
[20:04] yes, I understand that. I am just unsure how the CLC recovers (never tested this scenario)
[20:05] RoAkSoAx: what I mean is: I am *very* interested in your results
[20:05] hggdh: these are actually part of the Cluster Stack blueprint, so there's still a long way to go... :)
[20:05] heh
[20:05] RoAkSoAx: keep in mind that the CLC is the glue between all components
[20:06] I know the CC recovers from failures, I just do not know what happens when the CLC fails
[20:08] hggdh: I thought the CC had in memory which instances are running, and if it fails, that tracking would be lost...?
[20:12] RoAkSoAx: I found that it recovers the sessions -- when it comes up it queries the NCs
[20:13] I am not sure how far it goes (security groups, iptables, etc.)
[20:13] I only found it by accident, when upgrading my test rig
[20:14] hggdh: if that's the case, it would be really simple to provide HA
[20:15] hggdh: if not, there would have to be some kind of sync daemon between two CCs in HA for the running instances
[20:16] yes
[20:16] I think the best course here would be to ask upstream about it
[20:17] hggdh: indeed, but I remember reading in a eucalyptus forum post that they will provide HA only in their paid version
[20:18] RoAkSoAx: we can ask some slightly different questions: what would happen if the CLC is restarted? The SC? The CC?
[20:19] and then plan around it
[20:20] hggdh: from what I tested over the weekend, if CLC1 fails, CLC2 takes control of the cluster without any problems. However, I haven't tested this with running instances
[20:21] RoAkSoAx: which is good. Now we should try to find out what happens with running sessions. But I think this is the way to go indeed
[20:21] RoAkSoAx: thank you for doing it :-)
[20:21] hggdh: either way, the hardest part will be to provide HA for the NC... which will have to be done with live migration
[20:22] RoAkSoAx: why HA the NC? If the NC goes down, all instances there are already lost
[20:22] hggdh: and in any HA environment, it is expected to "lose" the connection for a few seconds while performing the failover
[20:24] hggdh: it is possible to set up a 2-node KVM cluster. Imagine that images are running on node1. If node1 fails, then node2 will take control of the service, by "live migrating" the instances from node1
[20:24] hggdh: however, to be able to do this, you of course need shared storage between the two nodes
[20:25] so it is simple: if node1 fails, node2 will start the instances that were running on node1
[20:25] I can understand having the libvirt storage on a NAS-something, but there is no instance to recover -- they went down
[20:25] so there is more than just restarting them -- the services being run must be set up for recovery also
[20:27] hggdh: http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent)
[20:29] empty page?
[20:30] hggdh: check that _(resource_agent) is in the URL
[20:32] RoAkSoAx: heh, that was it, thank you
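For reference, the setup being discussed — a floating cloud IP plus a KVM guest under cluster control via the VirtualDomain agent — might look roughly like this in the crm shell; the resource names, address, and domain config path are hypothetical:

    # Hypothetical Pacemaker sketch: a VIP that follows the active CLC, and a
    # libvirt guest with live migration allowed (requires shared storage).
    primitive clc-vip ocf:heartbeat:IPaddr2 \
        params ip="192.168.10.100" cidr_netmask="24" \
        op monitor interval="10s"
    primitive instance1 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/instance1.xml" \
               hypervisor="qemu:///system" migration_transport="ssh" \
        meta allow-migrate="true" \
        op monitor interval="30s"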
[20:38] RoAkSoAx: yes, this is interesting. How would we recover group security?
[20:39] hggdh: idk yet :) I actually haven't looked into HA for the CC, NC and Walrus... but it seems it's going to be a hard process :)
[20:40] mind keeping me posted on your results?
[20:42] hggdh: sure ;)
[20:43] hggdh: btw.. one more thing: what is the eucalyptus-cloud-publication upstart script for?
[20:45] it runs the avahi service for the CLC
[20:45] auto-registration
[20:46] hggdh: so, when everything is already registered, we only need eucalyptus and eucalyptus-cloud started?
[20:46] hum
[20:46] this is a question for Daviey ;-)
[20:47] Daviey: ^^ :)
[20:48] hggdh: thanks btw :)
=== dendrobates is now known as dendro-afk
=== dendro-afk is now known as dendrobates
[22:19] hggdh: re-tested. So far, so good
[22:20] RoAkSoAx: \o/
[22:21] hggdh: will post config steps someday this week xD
=== dendrobates is now known as dendro-afk
[23:14] hggdh: how/where do I tell the Walrus which CLC IP to use?
=== dendro-afk is now known as dendrobates
[23:22] RoAkSoAx: you have to register it
[23:23] from the CLC via 'sudo euca_conf'
=== dendrobates is now known as dendro-afk
[23:39] hggdh: yeah, but what I mean is that I want the Walrus to contact the CLC via a specific IP address. For example, the CC uses VNET_CLOUDIP
[23:42] hggdh: or, where do I tell the CLC "use XX network interface"?
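For reference, the registration step and the CC-side cloud-IP setting might look roughly like this — a sketch assuming the euca_conf options shipped with UEC's Eucalyptus, with hypothetical addresses:

    # On the CLC: register the Walrus by its host address (hypothetical IP)
    sudo euca_conf --register-walrus 192.168.10.21

    # On the CC, /etc/eucalyptus/eucalyptus.conf can pin the cloud (CLC/VIP)
    # address the CC and instances use, rather than letting it be detected:
    VNET_CLOUDIP="192.168.10.100"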