[02:22] <smoser> hallyn_, stgraber what happened with this
[02:22] <smoser> http://sourceforge.net/mailarchive/message.php?msg_id=29829732
[02:22] <smoser> did that get in in some form ?
[02:45] <sarnold> smoser: I saw some keen monitor mechanism when I did the lxc MIR audit; check out src/lxc/monitor.c and see if it looks like what you need or want
[02:46] <sarnold> (normally I'm against anything that deviates too far from APUE's daemonize() or daemon() function, but this was surprising in a nice way. :)
[02:49] <smoser> hm.. i didn't even bother looking at code.
[02:49] <smoser> reading that thread, i kind of wished they'd gone a similar route to qemu guest agent
[04:09] <roasted> hello friends
[06:17] <ScottK> Looks like nodejs can be synced.  That would fix installability of node-resolve.
[07:56] <Pupeno> Is there a way to see the stdout of a process that cron is running right now?
[10:29] <cefk> hi everyone. about routing - without NAT - is there some trick, or is setting ip_forward = 1 in sysctl all there is to it? I can't get it working - by now I've also edited routing rules etc. .. I'm out of ideas. - does anyone have a moment and the patience for this?
[11:00] <cefk> wrong language, sorry. I have an 8-NIC ubuntu server, that should route between 8 different networks. I do not want it to do NAT, so as I understand the Routing-Wiki, I need to set ip_forward to 1 via sysctl and be done. Is that true, or do I have to use the nat-rule (postrouting/masquerading)?
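A minimal sketch of the no-NAT check cefk is asking about, assuming plain routing between locally attached subnets (the persistence file is the stock Ubuntu one):

```shell
# check whether IPv4 forwarding is already enabled (1 = on)
cat /proc/sys/net/ipv4/ip_forward

# enable it for the running system (needs root):
#   sysctl -w net.ipv4.ip_forward=1
# persist across reboots by uncommenting in /etc/sysctl.conf:
#   net.ipv4.ip_forward=1
```

No POSTROUTING/MASQUERADE rule is needed for plain routing; NAT rules only rewrite addresses. The hosts on each subnet do still need routes (or a default gateway) pointing back at this box.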
[11:29] <ivoks> zul: pingy pongy
[11:29] <zul> ivoks:  just waking up
[11:29] <ivoks> zul: here's something to make it quicker ;) https://bugs.launchpad.net/cloud-archive/+bug/1214275
[11:29] <zul> ivoks:  cool ill have a look
[11:30] <ivoks> or jamespage ^
[11:30] <jamespage> ivoks, thanks....
[11:30] <jamespage> I'll let zul backport that  :-)
[11:30] <ivoks> it's not easy
[11:30] <ivoks> we also have this one: https://bugs.launchpad.net/horizon/+bug/1210253
[11:39] <jamespage> Daviey, if you have time python-websocket-client, python-jujuclient and juju-deployer are all in the NEW queue for saucy now
[11:43] <ivoks> zul: i believe this is specific to django 5.1
[11:43] <zul> greeeeat...
[11:49] <Daviey> jamespage: Yes, i'm sure i can do them today
[11:50] <jamespage> Daviey, thanks!
[11:50] <Daviey> jamespage: I am wondering if UCA needs some better tooling to check deps are satisfied
[11:50] <Daviey> There have been a few whoopsies
[11:54] <jamespage> Daviey, probably - do you know how the installability checker works in -proposed? that would be useful
[11:55] <Daviey> jamespage: it's plain Britney ?
[12:55] <Daviey> jamespage: Hmm, do you know why the Havana uploads for UCA use 'b3' rather than 'h3'?
[12:55] <Daviey> For Grizzly we did G3.
[12:56] <jamespage> Daviey, just lining up behind how upstream named things for havana
[12:57] <Daviey> jamespage: Oh, you are quite right
[12:59] <zul> jamespage:  im going to be synching up the cloud-archive today
[12:59] <jamespage> zul, +1 please do
[12:59] <jamespage> we are a few weeks off h3 but good to get it done in advance
[12:59] <jamespage> zul, pls can you remember to backport the packages in the lab as well so we stay in sync
[13:00] <zul> jamespage:  ack
[13:02] <zul> jamespage:  the new openvswitch is ok for the CA i know that we had problems with libvirt so im delaying libvirt for now at least
[13:03] <jamespage> zul, openvswitch -> CA - OK
[13:03] <jamespage> zul, ack on libvirt
[13:08] <hallyn_> smoser: we've never agreed on a sufficiently distro-generic solution.  In the end (in another thread iirc) people agreed they should do it manually for now.  For example have lxc.mount.entry bind-mount a file or dir into the container which userspace in the container updates when it is read
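hallyn_'s manual approach might look like this in a container config; the paths and names here are hypothetical, not from the thread:

```
# bind-mount a host-side file into the container at /run/guest-status;
# the guest updates it, the host reads it
lxc.mount.entry = /var/lib/lxc/mycontainer/guest-status run/guest-status none bind,create=file 0 0
```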
[13:09] <hallyn_> smoser: i can mail you an mbox with most of that thread for easier perusal than the web interface
[13:09] <hallyn_> (i hate the sf mail archive interface)
[13:09] <smoser> hallyn_, nah. no worries.
[13:09] <smoser> one comment i have is that it seems *very* similar to the qemu-agent
[13:10] <smoser> in overall goal / function / usefulness
[13:10] <smoser> it'd be very good if a solution could be made that utilized that
[13:11] <hallyn_> do you use qemu-ga?
[13:12] <hallyn_> it's not too far from what i'm suggesting.  we bind-mount a file or a socket into the container so that the container can update it.
[13:12] <zul> jamespage:  looking at the qa report greenlet is at ubuntu2 in the ppa but ubuntu1 is in the ca
[13:13] <jamespage> zul, it's ok in the staging PPA
[13:13] <zul> jamespage:  i think so
[13:13] <jamespage> zul, yeah - I did the update in ubuntu so I pushed to the staging ppa as well
[13:13] <zul> jamespage:  ack
[13:16] <hallyn_> smoser: http://www.securityfocus.com/bid/61388  QEMU Guest Agent CVE-2013-2231 Local Privilege Escalation Vulnerability
[13:17]  * hallyn_ dives deeper to make sure it's not a fundamental design issue
[13:18] <smoser> hallyn_, nice. :)
[13:18] <hallyn_> yeah no so qemu-ga wants to provide full QMP access.  i really don't think we want anything like that.  just a 'done booted' status update, maybe a few more like that (maybe a free-form text field)
[13:19] <hallyn_> that's all you want right?
[13:19] <smoser> hallyn_, i really dont care about a silly 'booted'
[13:19] <hallyn_> then what do you care about?
[13:19] <hallyn_> i'm sure it's not "silly"
[13:19] <smoser> if you're going to create a communication pipe, then you should create a communication pipe.
[13:20] <hallyn_> to what end
[13:20] <hallyn_> hm, http://www.securiteam.com/securitynews/5DP2Y2KAWA.html seems to be a different one
[13:20] <smoser> people want such things. i'm not sure why.
[13:20] <smoser> but generally, it seems like a useful thing
[13:20] <smoser> but doing a one way pipe seems not that useful to me.
[13:20] <hallyn_> to what end, when you can just lxc-attach into the container
[13:21] <hallyn_> well, two-way pipe, maybe.  having guest send requests for action to host, i don't think so
[13:21] <smoser> those actions can be ignored.
[13:21] <smoser> the pipe is the big thing really.
[13:21] <hallyn_> yeah sure if the host is functioning perfectly
[13:21] <hallyn_> or, "always ignored"?  ok :)
[13:21] <smoser> and qemu-ga is trying to have some communication protocol over a generic pipe
[13:22] <smoser> thats what i think is sane.
[13:23] <hallyn_> i'm not opposed, once we have specific examples of needs.  so far,
[13:23] <smoser> i dont have strong feelings here. but i dont think that doing a little one way status pipe is particularly useful.
[13:23] <hallyn_> we've had several requests for 'done booting' message - from different ppl at different times.
[13:23] <hallyn_> and that's all we've ever gotten that i can recall
[13:23] <smoser> yeah.
[13:24] <smoser> i just saw the discussion and thought it was interesting.
[13:24] <smoser> openstack basically has this same desire
[13:24] <smoser> in the end its very guest dependent.
[13:24] <smoser> thus my desire for some standard communication protocol
[13:25] <smoser> over a hypervisor specific bi-directional socket
[13:25] <hallyn_> right, so you want lxc to provide what qemu provides so guests can be hypervisor independent
[13:25] <hallyn_> i'm not opposed to that, i just need more specific examples :)  nova uses something like that?
[13:25] <hallyn_> could cloud-init use it by chance?
[13:25] <smoser> nova does not use anything like this yet.
[13:26] <smoser> the only usecase that i can ever actually come up with where the guest would need such a thing is "snapshot"
[13:26] <smoser> ie, where guest would freeze its filesystem, and then ask the host to snapshot it, and then unfreeze.
[13:26] <smoser> to ensure consistency.
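The freeze/snapshot/unfreeze sequence smoser describes roughly maps to fsfreeze plus whatever snapshot mechanism the host has; a sketch assuming an LVM-backed guest volume (names are made up, commands shown commented out since they need root):

```shell
# sketch only - all of this needs root
#   fsfreeze --freeze /srv/guest        # guest flushes and blocks writes
#   lvcreate --snapshot --size 1G --name guest-snap /dev/vg0/guest
#   fsfreeze --unfreeze /srv/guest      # writes resume
```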
[13:27] <smoser> i figure at some point nova will end up with something like this.
[13:27] <smoser> and at that point i would like a generic guest agent (or at least generic communication protocol)
[13:28] <smoser> and at that point i'd like our images to have an agent, and i'd like for them to "just work" to do the same generic stuff on lxc as on openstack
[13:28] <smoser> :)
[13:35] <jamespage> zul, I already backported ceph - http://people.canonical.com/~jamespage/ca/
[13:35] <jamespage> but I've not uploaded yet
[13:36] <zul> jamespage:  +1, im just doing build testing here
[13:42] <Hakameda> Anyone that could help me with a script for running a minecraft server? Specifically it will load from a hard drive to a ramdisk, run, then when it's done save back to the hard drive before exiting. I'm making some sort of silly mistake in the script, no doubt
[13:50] <RoyK> Hakameda: you shouldn't need a ramdisk - linux does good caching as it is, and with a ramdisk, you'll lose it all if the system goes down
[13:51] <Hakameda> It's just an old server, trying to run 121 mods on it. Figured I may get some extra out of it with a ramdisk. The script does a save and a backup on intervals as well
[13:52] <RoyK> Hakameda: linux should do this caching for you automatically
[13:53] <Hakameda> So if we did set it up, It most likely wouldn't help? Still new to using it. Lots to learn
[13:53] <RoyK> well, it would help, but if you just use a normal disk, linux will cache whatever is most used and keep that in RAM
[13:54] <Hakameda> We had it set up and running without the ram disk, It was getting massive lag spikes when it came to world generation
[13:57] <RoyK> oh, ic
[13:57] <RoyK> perhaps an SSD would do better?  ;)
[13:58] <Hakameda> I guess worst case remove some mods but that does take away the fun haha. Upgrading hardware just wasn't an option atm
[14:05] <RoyK> it's true that linux' caching won't help much for writes
[14:06] <RoyK> ext4 supports delayed writes, and that would help a bit, but not a whole lot
[14:06] <RoyK> which filesystem are you using?
[14:06] <mardraum> how much ram are you giving to minecraft
[14:07] <Hakameda> I used just the default install, The server just runs TS3 and now this MC Server
[14:07] <Hakameda> It's getting 2GB
[14:08] <mardraum> 2GB is fine for a lightly loaded minecraft server, try it without your mods and it should run ok
[14:10] <Hakameda> I can run it with 2GB on Windows and it does fine, But it just stutters when trying to do it on the ubuntu server. The mods are the whole reason for the server =)
[14:11] <mardraum> so you try it on windows with identical mods and no other services on the same hardware and internet connection and it works fine?
[14:13] <Hakameda> Yes, I first ran it from Windows. Thought i could get more performance by using an Ubuntu Server
[14:14] <mardraum> looks like you can't hey
[14:15] <mardraum> if, as you say, all is equal on each side, then perhaps your mods are better optimised to run under the windows java implementation
[14:17] <RoyK> Hakameda: is this with oracle java or openjdk?
[14:18]  * LargePrime runs a Tekkit Classic server with 100 people that never uses over 1GB
[14:18] <Hakameda> Royk: openjdk
[14:20] <mardraum> LargePrime: 100 simultaneous on 1GB?
[14:20] <LargePrime> are the mods using mysql?
[14:20] <LargePrime> ya
[14:20] <LargePrime> mardraum: ya
[14:20] <LargePrime> mcpc+ for the win
[14:21] <mardraum> is it public?
[14:21] <LargePrime> well an old spigot build
[14:21] <LargePrime> ya, and cracked
[14:21] <Hakameda> The one im doing is 120 Mods, Specfically Its FTB Unleashed + Gregtech and some other smaller mods
[14:21] <LargePrime> FTB and greg is very heavy and laggy
[14:22] <mardraum> you run a cracked minecraft server for 100 people on 1GB of ram
[14:22] <LargePrime> i dont know why
[14:22] <LargePrime> mardraum:  no, it has 3GB, but never uses over 1GB
[14:22] <Hakameda> Its only the world generation thats causing grief, I tried to grab Nallar's Tickthreading but the jenkins for it was down when i last checked
[14:22] <LargePrime> mardraum:  so sorts?
[14:23] <mardraum> is it because you all stand around the same area being dickheads?
[14:23] <LargePrime> mardraum: I object!  we dont have to stand around together to be dickheads
[14:23] <LargePrime> we can be dickheads apart
[14:23] <Hakameda> Only trying to run a 4 person server too
[14:24] <LargePrime> I got a World border at 3000 blocks
[14:24] <LargePrime> Hakameda:  I also have a FTB server, ultimate and 1.4.7.  it has 8 people and no server lag, and get tonns of client lag
[14:25] <LargePrime> and it makes no sense to me
[14:25] <Hakameda> I don't get client lag, Its Worldgen Lag. Block Lag.
[14:26] <zul> jamespage:  looks like flask needs python-itsdangerous
[14:26] <LargePrime> hak, does it cause tick lag?  like does /lag show a drop?
[14:27] <Hakameda> When new chunks are generated, It does cause Tick lag
[14:27] <LargePrime> you can pregen all the chunks
[14:27] <LargePrime> with worldguard?
[14:27] <Hakameda> So says cofh tps, As i can't download tickthreading atm
[14:27] <Hakameda> The jenkins is down
[14:28] <LargePrime> you run MCPC+
[14:28] <LargePrime> ?
[14:28] <Hakameda> Nope, no need, it's private. I thought the plugins would be extra strain that i didn't need
[14:28] <jamespage> zul, hrm - yes
[14:29] <jamespage> zul, I thought I uploaded to saucy already?
[14:29] <jamespage> nope
[14:29] <jamespage> zul, in which case yes we do need to include python-itsdangerous
[14:29] <LargePrime> MCPC+ is also lighter and faster
[14:29] <jamespage> sorry - I must have missed that
[14:29] <LargePrime> i recommend
[14:30] <LargePrime> you dont HAVE to run the plugins
[14:30] <Hakameda> MCPC+ itself would be another larger mod to add
[14:30] <zul> jamespage:  for the CA
[14:31] <jamespage> zul, yep
[14:31] <jamespage> ceilometer uses entry points so everything #explodes unless deps line up!
[14:32] <LargePrime> Can I ask a silly mysql noob question?  I want to move all my mysql stuff to /home because I need room.  can i just move it all and symlink /var/lib/mysql to the dir?
[14:32] <LargePrime> and restart?
[14:32] <LargePrime> or is that stupid
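One caveat on the symlink plan: on Ubuntu, mysqld runs under an AppArmor profile that only allows /var/lib/mysql, so a bare symlink into /home usually fails until the profile is extended. A hedged sketch of the usual procedure (paths illustrative):

```shell
# sketch only - needs root, and mysql must be stopped first
#   service mysql stop
#   cp -a /var/lib/mysql /home/mysql          # preserve ownership/modes
#   mv /var/lib/mysql /var/lib/mysql.old
#   ln -s /home/mysql /var/lib/mysql
# then allow the new path in /etc/apparmor.d/usr.sbin.mysqld:
#   /home/mysql/ r,
#   /home/mysql/** rwk,
#   service apparmor reload && service mysql start
```

Pointing datadir at the new location in /etc/mysql/my.cnf instead of symlinking is the tidier variant, but it needs the same AppArmor change.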
[14:37] <LargePrime> was it something i said?
[14:57] <zul> jamespage:  also python-pbr will need to be built before the clients and swift as well
[14:58] <jamespage> zul, why?
[14:58] <jamespage> if so please version the build dependency
[14:58] <jamespage> otherwise things get complicated...
[14:58] <zul> jamespage:  otherwise ftbfs because a newer pbr is required
[14:58] <jamespage> zul, OK _ so version the BD
[14:58] <zul> jamespage:  ack
[14:58] <jamespage> that way it gets a dep-wait rather than fud
[14:59] <jamespage> and things just sort themselves out
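The versioned build-dependency jamespage asks for is just a constraint in debian/control, so the builder records a dep-wait instead of a hard failure; a sketch (the package list and version number are illustrative, not the real pbr requirement):

```
Source: python-ceilometerclient
# version below is a placeholder
Build-Depends: debhelper (>= 9),
               python-all,
               python-pbr (>= 0.5.21)
```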
[15:12] <zul> http://people.canonical.com/~chucks/ca
[15:29] <Siebjee> Does some one know the new IRC channel of Canonical, or has a phone numer of the sales department ?
[15:31] <arosales> smoser, fyi you are on the hook for todays ubuntu server meeting
[15:32] <arosales> https://wiki.ubuntu.com/ServerTeam/Meeting
[15:33] <smoser> oh joy.
[15:33] <smoser> Siebjee, what are you looking for help with ?
[15:33] <Siebjee> Support Sales
[15:34] <smoser> there isn't a canonical support irc channel
[15:34] <Siebjee> #canonical has been moved, and the link they have in the irc channel no one has permissions to view it :)
[15:36] <smoser> Siebjee, http://www.ubuntu.com/about/contact-us/form
[15:36] <smoser> i think the reason for the move was that '#canonical' was an internal canonical channel, which is now moved to an internal server.
[15:36] <zul> jamespage:  https://code.launchpad.net/~zulcss/python-ceilometerclient/pbr/+merge/181079
[15:36] <smoser> but you should be able to use that web form. above.
[15:37] <Siebjee> smoser, already filled it in 2 days ago. However, i need to have a chat with them like yesterday. Was hoping to speak to them sooner before they contact me
[15:38] <zul> jamespage/roaksoax: https://code.launchpad.net/~zulcss/python-novaclient/pbr/+merge/181080
[15:38] <Pici> Siebjee: The "Our Address" section of this page has phone numbers: http://www.canonical.com/about-canonical/contact
[15:39] <Siebjee> Pici, already tried. They aint redirecting my call correctly.. All of them seem to be out of the office :x
[15:40] <zul> jamespage/roaksoax: https://code.launchpad.net/~zulcss/python-cinderclient/pbr/+merge/181082
[15:40] <jamespage> zul, +1 +1
[15:41] <jamespage> cinderclient - not sure - do you still need to drop all the patches?
[15:42] <jamespage> zul, for reference - https://code.launchpad.net/~james-page/python-cinderclient/drop-patches/+merge/180230
[15:42] <jamespage> that was a trunk fix
[15:42] <jamespage> zul, oh - a btw flash looks like its creating issues in ceilometer
[15:43] <zul> flask you mean?
[15:43] <smoser> for anyone reading backscroll, i got Siebjee hooked up.  Sorry that was difficult.
[15:44] <jamespage> zul: https://bugs.launchpad.net/ceilometer/+bug/1212851
[15:45] <jamespage> zul; yeah- I do mean that
[15:46] <zul> im going to get rid of iso8601 - last update according to pypi was 2007
[15:48] <rbasak> zul: what do you mean by get rid of it?
[15:48] <zul> rbasak:  replace it with dateutil
[15:49] <rbasak>  ValueError: Unable to parse date string u'Mon, 27 Aug 2012 07:00:00 GMT'
[15:49] <rbasak> That's not an iso8601 formatted date anyway
[15:50] <rbasak> Anyway my concern was just that I use python-iso8601 in other stuff. No comment on the ceilometer bug.
[16:09] <Patrickdk> people use iso dates? other than mysql?
[16:10] <ogra_> Patrickdk, yes, 99% of the world use iso dates
[16:47] <arosales> smoser, thanks for following up with Siebjee
[17:07] <Arrick> hey all, I am trying to get cron jobs to run on ubuntu 12.04 LTS... every time I try to run it, it tells me cron is not running.... how do I get it to run?
[17:09] <rbasak> !details | Arrick
[17:09] <rbasak> Please define "every time I try to run it", paste the actual output, not your interpretation of what it says, etc.
[17:12] <Arrick> I am trying to get a cron.php script to run every 5 minutes, and I have used crontab -e to put in this minus quotes "*/5 * * * * /usr/bin/php /path/to/cron.php" .... If I try to run the cron.php file using "php /path/to/cron.php" it runs fine... I also have a cron_watcher.php in the same directory that "*/15 * * * * /usr/bin/php /path/to/cron.php" is supposed to run every 15 minutes.... when I run it manually, it reports "Cron is not running".
[17:13] <Arrick> I am using ubuntu server 12.04 LTS, which is running "Totara LMS"... I need to setup the cron job for the updates, etc.
[17:13] <Arrick> so... my question is "how do I get Cron running?"
[17:13] <Pici> Arrick: What do you mean by "when I run it manually, it reports "Cron is not running""? What exactly are you typing in?
[17:14] <Arrick> I am typing in php  /path/to/cron_watcher.php
[17:14] <Arrick> which reports if cron is running or not.
[17:14] <Arrick> totara lms is basicaly a customized moodle install.
[17:15] <ogra_> did you check your logs ?
[17:15] <Arrick> I havent got a clue where to look.
[17:15] <Pici> Does  service cron status   actually tell you that cron is running?
[17:15] <ogra_> /var/log/syslog for a start
[17:16] <Arrick> cron start/running, process 1152
[17:16] <ogra_> so cron runs fine
[17:18] <rbasak> Sounds like a problem or incompatibility with your cron_watcher.php. It might be better to try their community channel for help.
[17:19] <Arrick> thats what i am starting to think now that I see what that said.
[17:19] <rbasak> For cron.php itself, are you sure you're running the cron job as the correct user?
[17:20] <rbasak> Also note that PATH may not match, since cron's default PATH is rather insane. It's documented in the manpage.
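Concretely, both of rbasak's points (user and PATH) can be handled in the crontab itself; a sketch reusing the hypothetical path from above, with output logged so failures are visible:

```
# crontab fragment: explicit PATH, stdout/stderr captured for debugging
PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * /usr/bin/php /path/to/cron.php >>/tmp/cron.php.log 2>&1
```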
[17:20] <Arrick> running it as root presently for testing, should be running properly.
[17:20] <Arrick> im looking at the logs now
[17:20] <Arrick> it looks like the cron.php is running now.
[17:51] <hallyn_> smoser: good news is i've got snapshot dependency tracking working..
[17:51] <smoser> nice.
[17:52] <hallyn_> right now though i'm out for a walk and lunch - will post the patch later today
[19:03] <arges> whats the difference between updated and outstanding merges?
[19:03] <arges> oops, this was meant for -devel
[21:10] <roasted_> hello friends
[21:10] <roasted_> have any of you had success with having a server running root on an SSD
[21:10] <roasted_> cause so far I've burnt out two SSDs running /
[21:11]  * RoyK has
[21:11] <roasted_> in a matter of 5 months
[21:11] <roasted_> RoyK: do you have swap on your server
[21:11] <roasted_> with the SSD on
[21:11] <roasted_>  /
[21:11] <RoyK> yes
[21:11] <roasted_> why the snot did I have two fry
[21:11] <roasted_> this is enraging me
[21:11] <RoyK> but not much memory pressure - 8 gigs of ram
[21:11] <roasted_> yeah - I have 4gb
[21:11] <roasted_> I rarely hit half that
[21:15] <roasted_> I don't think I'm going to bother putting an SSD back in this
[21:15] <roasted_> I'm sick of this :/
[21:15] <roasted_> plus I have no other SSDs to spare
[21:17] <LargePrime> 2 ssd in 5 months, means all under warranty?
[21:17] <LargePrime> ssd had a bad batch some while ago
[21:17] <LargePrime> could be pure luck
[21:18] <genii> Those Vertex 2 had issues
[21:19] <genii> eg: They would hibernate and never wake up
[21:19] <roasted_> these are Corsairs
[21:19] <roasted_> V4 and M4
[21:19] <roasted_> bought about 4 months apart
[21:22] <LargePrime> you play with swappiness?
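For reference, the knob LargePrime means is vm.swappiness: lower values make the kernel less eager to push pages to swap, which can cut write traffic to an SSD that holds the swap partition. A minimal check-and-tune sketch:

```shell
# current value (Ubuntu default is 60)
cat /proc/sys/vm/swappiness
# to lower it (needs root):
#   sysctl -w vm.swappiness=10
# persist with a 'vm.swappiness=10' line in /etc/sysctl.conf
```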
[21:25] <genii> Hm. Looks like some of the Corsairs have the same type of issue, according to http://forum.corsair.com/v3/showthread.php?t=96663   ( in this case they do a Windows regedit to disable powersave )
[21:27] <roasted_> genii: I wonder if that's kind of a different issue, though...
[21:27] <roasted_> genii: it'll run great for months, then the issue comes up again. Some people are saying, yeah been 24h running great...
[21:27] <roasted_> on a brand new SSD, it was perfect from the get go for a long time, then out of no where, RO
[21:28]  * genii sips and ponders.
[21:28] <roasted_> I have some spare hdds. I just might throw one in and be done with it.
[21:29] <roasted_> the main advantage with a server/ssd combo was less moving parts, so I figured it would last longer. But at this rate, screw that.
[21:29] <roasted_> I didn't even have this many failures with seagate HDDs. :P
[21:30] <roasted_> fortunately I tar /var and /etc nightly and rsync it to a 2nd server on my LAN, so I have backups of my configs. I can install ubuntu server, install some services, copy/paste, done in no time.
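That nightly tar-and-rsync scheme can be sketched in a few lines; the paths and remote host name are illustrative, and only one small file is archived here where the real script would cover /etc and /var:

```shell
# nightly config backup sketch: archive a config file, then ship it off-box
backup="/tmp/config-backup-$(date +%F).tar.gz"
tar -czf "$backup" -C / etc/hostname     # real script: etc/ and var/
ls -lh "$backup"
# off-site copy (commented out; 'nettop' stands in for the second server):
#   rsync -a "$backup" backupuser@nettop:/srv/backups/
```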
[21:32] <roasted_> plus I have 5 160GB HDDs on the shelf... I might use one of them, pull a full disk image, and that way I have a full image + spare drives on the ready.
[21:33] <LargePrime> while ssd's have less moving parts, they still are a newer tech.  so failures are bound to happen?  Are lower MTBFs expected with SSDs?
[21:33] <roasted_> LargePrime: what gave me confidence about these SSDs is we have the same model SSDs running Linux on desktops at work...
[21:33] <roasted_> there's been 1 or 2 failures, but there's like... 200? 300 systems?
[21:34] <roasted_> been running all day every day (during school anyway, not on during summers or at night) for a year.
[21:34] <LargePrime> so you are saying this was just luck
[21:34] <roasted_> I swear by SSDs on my end user systems 10000X
[21:34] <roasted_> what I'm wondering is if an SSD in a *server* plays a different role that is working against me.
[21:34] <LargePrime> cant see it
[21:35] <LargePrime> smart have any help for you?
[21:35] <roasted_> I ran smart against the last drive - it was definitely toast. I'll run it against this one quick...
[21:36] <roasted_> if I can remember the command...
[21:37] <roasted_> I didn't install smartmontools, and it's refusing to let me install right now due to the fact it's hosed.
[21:37] <LargePrime> lol
[21:37] <roasted_> I'll pull the drive and USB bridge it into my laptop and scan it. That's what I did last time.
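The command being reached for is smartctl from the smartmontools package; a sketch (the device name is whatever the SSD enumerates as):

```shell
# sketch - needs root and the smartmontools package installed
#   apt-get install smartmontools
#   smartctl -H /dev/sda      # quick overall health verdict
#   smartctl -a /dev/sda      # full SMART attribute report
```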
[21:37] <roasted_> this SSD has only been 'live' for... 3 weeks? a month?
[21:37] <roasted_> long enough for me to fine tune it, at the very least.
[21:37] <roasted_> ironically I was going to take it down this weekend and pull an image of it since it's *finally* where I want it. derp.
[21:37] <LargePrime> that seems right.  if they fail, typically, they fail fast
[21:38] <roasted_> at any rate, it's not the image that matters most to me. What matters most to me are the configs.
[21:38] <roasted_> and I have a nettop running ubuntu server on my LAN too... I rsync configs to it nightly.
[21:38] <roasted_> so I can have all of my services running as quickly as I can nano into each config and paste the contents in
[21:41] <roasted_> can you run / on a pair of mdadm RAID'd drives?
[21:41] <roasted_> I run mdadm on my data drives in a mirror, but I never tried to run / on a software RAID'd drive.
[21:42] <roasted_> bad idea, perhaps? good idea? eh...
[21:51] <Patrickdk> it's ok, but you must configure it correctly
[21:51] <Patrickdk> or it won't boot
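Patrickdk's "configure it correctly" for a mirrored / mostly means: create the array, tell the initramfs about it, and install GRUB on both disks so either one can boot alone. A hedged sketch with illustrative device names:

```shell
# sketch only - destructive and needs root
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
#   mkfs.ext4 /dev/md0                       # then install / onto it
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u                      # assemble the array at boot
#   grub-install /dev/sda && grub-install /dev/sdb
```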
[21:52] <roasted_> maybe I'll just stick to a single drive for /
[21:52] <roasted_> and rely on image backups and my nightly config backups
[21:53] <roasted_> the data is what matters most, and that's backed up and on a mirrored array anyway, so... I just want to avoid having to redo the OS drive as much as possible as it's a real pain
[22:01] <sidnei> genii: i had issues with Vertex too, had to RMA it, when i got a replacement I sold it and got an intel one instead *wink*.
[22:03] <roasted_> vertex is.... corsair?
[22:03] <genii> roasted_: OCZ
[22:03] <roasted_> ah
[22:03] <roasted_> yeah I read about OCZ issues
[22:03] <roasted_> these are Crucial
[22:03] <roasted_> I think I said corsair earlier on accident.
[22:03] <genii> I think there are more generalized problems with the Sandforce controller though, which is in models by many different manufacturers
[22:04] <sarnold> (including new intel)
[22:08] <Patrickdk> heh? pretty sure the sandforce issues where solved
[22:08] <Patrickdk> many ssd's using them don't have issues
[22:08] <Patrickdk> lots of them using it, does have issues