[02:10] <tgm4883> bkerensa, ping
[07:43] <bkerensa> tgm4883: ping
[07:43] <bkerensa> pong even
[07:45] <bkerensa> tgm4883: when you see this, e-mail me if your ping is time-sensitive, since I will be gone most of tomorrow :)
[14:29] <tgm4883> bkerensa, no worries, I was going to ask you about juju stuff
[15:24] <nathwill> morning all
[17:32] <bkerensa> nathwill: morning
[17:33] <bkerensa> tgm4883: I'm actually in for a little bit; I don't have to leave for an hour or two
[17:33] <bkerensa> :D
[17:34] <tgm4883> bkerensa, how much do you know about juju/other similar types of software?
[17:35] <bkerensa> tgm4883: I know a decent amount about juju.... Not much about puppet/chef etc
[17:36] <bkerensa> nathwill: :P yahoo epic fail with the private key
[17:36] <tgm4883> Here's my issue: We have a decent VMWare ESXi environment in our datacenter. We have a range of different servers (Intel, Sun, HP, Dell). We've noticed a bunch of the older ones (Intel) don't like ESX so much (either it doesn't install, or it has kernel panics)
[17:37] <tgm4883> we'd like to get something working on these machines, in a similar capacity to ESX
[17:37] <tgm4883> (not to mention that ESX licensing is lots of $$)
[17:37] <pwnguin> ganeti!
[17:37] <pwnguin> tgm4883: do you need vm migration?
[17:37] <tgm4883> So I'm wondering if there is other software out there that would help, don't know much about juju
[17:37] <pwnguin> between disparate archs
[17:38] <tgm4883> pwnguin, no. We would want live migration for our production vm's, but that would be all the same arch
[17:38] <tgm4883> I've looked at openstack and kvm (using convirt)
[17:38] <bkerensa> tgm4883: Juju will not work for you yet
[17:38] <tgm4883> bkerensa, ok
[17:38] <bkerensa> Juju only works on AWS atm
[17:39] <bkerensa> I mean you could run it on LXC but that would not be good
[17:39] <bkerensa> :)
[17:39] <bkerensa> they are working to get it working on more platforms though
[17:40] <pwnguin> tgm4883: here at the OSL we use ganeti
[17:40] <pwnguin> http://code.google.com/p/ganeti/
[17:40] <tgm4883> What I'd like to do is install 12.04 on the boxes, install something like KVM on all of them, and then have a central way to manage them
[17:40] <tgm4883> pwnguin, I'm looking at that page now :)
[17:40] <pwnguin> we also write ganeti web manager
[17:40] <tgm4883> pwnguin, ah so it does have a web interface?
[17:40] <bkerensa> pwnguin: nice... OSL does such good things
[17:41] <pwnguin> it can, if you use GWM
[17:41] <pwnguin> tgm4883: ganeti has a json remote api
[17:41] <tgm4883> that is a key requirement, as I can deal with cmd line stuff, but if I have to give the banner team some access they won't like that
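The remote API pwnguin mentions is Ganeti's RAPI, which per the Ganeti docs speaks JSON over HTTPS on port 5080 by default. A minimal stdlib-only sketch of building a query against it; the cluster hostname is a placeholder, and the request is constructed but not sent:

```python
import urllib.request

# Sketch of a Ganeti RAPI query. The RAPI listens on HTTPS port 5080 by
# default; "cluster.example.org" is a placeholder, not a real cluster.
def build_instance_list_request(base_url):
    """Build (but don't send) a GET request for the cluster's instance list."""
    # /2/instances is the RAPI instance-list resource; bulk=1 asks for
    # full detail rather than just names
    req = urllib.request.Request(base_url + "/2/instances?bulk=1")
    req.add_header("Accept", "application/json")
    return req

req = build_instance_list_request("https://cluster.example.org:5080")
print(req.full_url)      # the /2/instances resource with bulk output
print(req.get_method())  # GET
```

The same pattern covers the rest of the RAPI (cluster info, instance operations); a tool like Ganeti Web Manager sits on top of exactly this interface.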
[17:41] <pwnguin> tgm4883: which uni do you work for?
[17:41] <tgm4883> chemeketa community college
[17:43] <pwnguin> oh, that's in salem
[17:44] <tgm4883> yea, Salem, and we have some outreach sites in some other cities
[17:44] <pwnguin> yea, who doesn't these days
[17:45] <pwnguin> so one of the nice features we have in GWM is an ajax/html5 vnc client
[17:46] <pwnguin> as you may know, esx does not like linux desktops
[17:46] <tgm4883> pwnguin, is there a demo of this anywhere?
[17:46] <tgm4883> by demo, I mean like a youtube video of features?
[17:46] <pwnguin> good question
[17:46] <pwnguin> there's probably a dozen or so
[17:47] <pwnguin> let me confer with my senior folk on this
[17:47] <tgm4883> pwnguin, we don't use esx for anything really desktop related yet. It runs most of our servers though (web, email, etc)
[17:47] <tgm4883> pwnguin, ok
[17:47] <pwnguin> tgm4883: no, i mean, if you run an ubuntu desktop
[17:47] <pwnguin> the esx client hates you
[17:47] <tgm4883> ah yes
[17:47] <tgm4883> it really does hate that
[17:47] <pwnguin> so Ramereth is a better ganeti salesman than I ;)
[17:47] <Ramereth> tgm4883: i don't have a video (yet) but i have a vagrant setup that will make it easy for you to try it out
[17:48] <Ramereth> i'm working on better documentation on it but I can at least point you at a few things
[17:48] <tgm4883> I RDP into a box in the datacenter that has the vmware client on it
[17:48] <Ramereth> https://github.com/ramereth/vagrant-ganeti <- vagrant repo
[17:48] <Ramereth> http://ftp.osuosl.org/pub/osl/ganeti-tutorial/presentation-ganeti-tutorial.pdf <- old tutorial I used from OSCON last year (pre-vagrant setup). If you ignore the node setup and just follow the ganeti command walkthrough it should be a good setup
[17:49] <Ramereth> i may have time this weekend to clean up that tutorial pdf so it's more accurate
[17:49] <Ramereth> oops, that's the presso
[17:49] <Ramereth> http://ftp.osuosl.org/pub/osl/ganeti-tutorial/GanetiTutorialPDFSheet.pdf <- that's the pdf
[17:49] <tgm4883> Ramereth, sounds good
[17:49] <Ramereth> ganeti gets installed by puppet automatically in the vagrant setup i have
[17:50] <tgm4883> so this is what you guys use at OSUOSL for your VM cluster?
[17:50] <pwnguin> clusters
[17:50] <Ramereth> basically if you go from step #6 onward it should be fine
[17:50] <Ramereth> tgm4883: yup, all of our virtualization and we have several clusters
[17:50] <tgm4883> sweet
[17:51]  * tgm4883 brb
[17:56] <tgm4883> back
[17:56] <tgm4883> I've got one server I can work with now, i'll need to get cabling for the others
[17:57] <tgm4883> right now these intel servers are just sitting in the rack without cabling
[17:58] <Ramereth> what os are you planning on using?
[17:59] <tgm4883> Ubuntu for the hosts
[18:00] <Ramereth> that shouldn't be hard to deploy then
[18:00] <Ramereth> although i've had issues with 12.04
[18:00] <tgm4883> bummer, that's what we're installing
[18:01] <Ramereth> mostly related to gnutls and openssl
[18:01] <Ramereth> although i was installing from source, the ppa might be better
[18:03] <tgm4883> looks like it's available directly in the repos
[18:03] <tgm4883> at least some is
[18:03] <Ramereth> ya, but it's probably pretty old
[18:03] <tgm4883> true
[18:03] <Ramereth> i'm not sure what version they include
[18:03] <Ramereth> you really want to keep up on the versions as they include improved features and bug fixes
[18:03] <bkerensa> =o
[18:03] <bkerensa> wow
[18:03] <tgm4883> Version: 2.4.5-1
[18:03] <bkerensa> Ramereth is in our channel :P
[18:04] <Ramereth> tgm4883: that's not too bad, 2.5 has some better features
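Since the packaged version matters here, a quick sketch of comparing the Debian-style version string from the 12.04 repos against the 2.5 release Ramereth mentions (version numbers taken from the conversation above):

```python
def upstream_version(debian_version):
    """Split off the Debian revision, e.g. '2.4.5-1' -> (2, 4, 5)."""
    upstream = debian_version.split("-", 1)[0]
    return tuple(int(part) for part in upstream.split("."))

packaged = upstream_version("2.4.5-1")  # version found in the repos
wanted = (2, 5)                         # release with the newer features

print(packaged)           # (2, 4, 5)
print(packaged < wanted)  # True: the packaged ganeti predates 2.5
```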
[18:04] <Ramereth> bkerensa: yup
[18:05] <tgm4883> so DRBD replicates the disk to other nodes in the cluster
[18:05] <Ramereth> to a secondary node
[18:05] <tgm4883> and that is required for HA
[18:06] <tgm4883> While I can set it up in test to the local drives, can we use an iSCSI mount on each of the nodes instead and forgo the DRBD replication?
[18:07] <tgm4883> forgive me if I'm not seeing something, I've got an ESXi mindset here
[18:09] <Ramereth> ya the ganeti mindset is very different
[18:10] <tgm4883> I'm just wondering if there is any benefit to using DRBD over iSCSI mounts (although I don't know much about DRBD)
[18:10] <tgm4883> it seems like unnecessary overhead if you've got a SAN
[18:10] <Ramereth> sorry i keep getting stuck in a conversation in my office
[18:10] <Ramereth> so ganeti does support "shared storage"
[18:10] <tgm4883> Ramereth, no worries, I'm reading though the documentation and PDF's you posted
[18:11] <Ramereth> but i guess it depends on how you want to go about it. we have one cluster where we do have an iscsi device and we decided to only use drbd for the system disks of the VMs and have iscsi mounted inside the VMs themselves
[18:11] <Ramereth> and that seems to work pretty well
[18:11] <Ramereth> but it depends on how you plan on using the storage too I suppose
[18:11] <tgm4883> the way we currently do it, and how we'd probably like to continue is this
[18:12] <Ramereth> we used to have iscsi backed storage for VMs in the past in the pre-ganeti days and we hated every minute of it
[18:12] <tgm4883> we have 4 production ESXi servers for our main services. Each ESX server mounts the same iSCSI share.
[18:13] <tgm4883> so 4 servers, 1 share (although technically we've broken that 1 share up as well)
[18:13] <tgm4883> then the VM's sit on the iSCSI share
[18:13] <tgm4883> share isn't the correct terminology :/
[18:14] <Ramereth> well, same lun is shared among each node?
[18:14] <tgm4883> yes
[18:14] <Ramereth> and esx moves it from node to node as it needs to?
[18:14] <tgm4883> ESX moves the VM from node to node, yes
[18:14] <Ramereth> ya that makes sense
[18:14] <tgm4883> the storage doesn't need to be moved though, as all 4 hosts have access to the same storage
[18:15] <Ramereth> we've separated our large storage needs from the VM technology itself
[18:17] <tgm4883> Ramereth, ok, so you have a bunch of hosts, where does the storage for the VM's reside?
[18:18] <tgm4883> or rather
[18:18] <tgm4883> how much storage do you need on each node for that setup?
[18:18] <Ramereth> depends on your needs
[18:19] <Ramereth> but i generally get as much storage as I can afford and then use an nfs or iscsi storage solution for larger needs
[18:19] <Ramereth> so in the case of one client, we only have 10-20G system disks on drbd w/ ganeti but have 100-500G volumes mounted over nfs
[18:20] <tgm4883> and the VM's live on the system disks or the NFS?
[18:21] <Ramereth> system
[18:21] <tgm4883> hmm
[18:21] <bkerensa> nathwill: Ramereth is the OSU OSL guy who gives +1 or -1 for Colo
[18:21] <tgm4883> I don't think we could fit many VMs on a system disk. Even if we were just installing minimal images
[18:22] <Ramereth> tgm4883: for you you might look into the shared storage option that ganeti now supports
[18:22] <Ramereth> i haven't personally used it before so I can't tell you how it works but I think the idea is that you have a large slice mounted on each node and ganeti just deals with managing it
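For illustration, the storage mode is just the disk template chosen at instance-creation time; DRBD replication and shared storage are two values of the same field. A sketch of a hypothetical RAPI creation body (field and template names follow the Ganeti RAPI docs; the instance name, disk size, and OS are placeholders):

```python
import json

# Hypothetical body for creating an instance via Ganeti's RAPI
# (POST /2/instances). The disk_template field is where DRBD replication
# vs. shared storage gets chosen.
def create_instance_body(name, disk_template, disk_size_mb):
    return json.dumps({
        "__version__": 1,
        "mode": "create",
        "name": name,
        "disk_template": disk_template,  # "drbd" replicates to a secondary
                                         # node; "sharedfile" relies on
                                         # storage every node can reach
        "disks": [{"size": disk_size_mb}],
        "nics": [{}],
        "os_type": "debootstrap+default",
    })

body = create_instance_body("web1.example.org", "sharedfile", 10240)
print(json.loads(body)["disk_template"])  # sharedfile
```

The equivalent choice on the command line is the `-t` flag to `gnt-instance add`, so switching a cluster's storage strategy is mostly a matter of which template new instances are created with.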
[18:23] <tgm4883> looking
[18:23] <Ramereth> i'm not sure how well documented that feature is :/
[18:25] <tgm4883> Ramereth, yea this looks like exactly what we're doing
[18:27] <tgm4883> the only PPA I found with ganeti is https://launchpad.net/~sdeziel/+archive/ganeti
[19:09] <MarkDude> Ubuntu Oregon logs make the news >>> http://www.itworld.com/node/279368
[19:10] <MarkDude> http://irclogs.ubuntu.com/2012/05/24/%23ubuntu-us-or.txt
[19:10]  * MarkDude was against logging, figured he might as well make it work to his advantage, announcing stuff in a stealth manner
[19:17] <MarkDude> Hmmmm apparently  <blkperl> has their address on public record
[19:17] <bkerensa> MarkDude: what?
[19:17] <MarkDude> read the log
[19:18] <MarkDude> wait, maybe that's an event location
[19:29] <Ramereth> tgm4883: if you need any more help i tend to idle in #ganeti or #osuosl. i'm going to head out of this channel
[19:47] <bkerensa> ttyl folks
[20:56] <nathwill> bye
[21:44] <blkperl> MarkDude: yeah, it's the Engineering Building address
[21:49] <MarkDude> Fair enough-
[21:50] <MarkDude> It looks like I will need to clarify with the councilfolk- it was ME that created the Twitter account
[21:50]  * MarkDude thought it would be Fedora logs that would trace it to me.
[21:51]  * MarkDude had it explained there was a delay in logs being posted with Ubuntu- apparently not :)
[21:52] <MarkDude> maybe I will say it was all nathwill 's idea, he told me to do it
[21:52] <MarkDude> :D