[00:01] <sdeziel> pcfreak30: I'm not very familiar with GPT, especially not when using legacy boot :/ 
[00:01] <pcfreak30> it seems to force it to be gpt? i would do normal mbr if i could
[00:02] <sdeziel> pcfreak30: that said, I'd try to have 2 partitions on each of the drives you'd like to have raid'ed. The 1st part would be the bios_grub part and the 2nd would take the rest of the drive
[00:02] <pcfreak30> thats what ive been TRYING to do :P. on one drive only though
[00:02] <sdeziel> pcfreak30: after that, presumably you'd be able to feed all the 2nd parts to create a software raid (MD)
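The layout sdeziel describes might look like the sketch below: a tiny bios_grub partition for legacy boot on GPT, the rest of each disk as a RAID member. Device names and the 3-disk count are assumptions, not from the log, and these commands destroy the existing partition tables.

```shell
# Sketch: GPT label + bios_grub partition + one big RAID-member partition.
# Device names (/dev/sda etc.) are assumed; adapt before running.
for d in /dev/sda /dev/sdb /dev/sdc; do
    parted -s "$d" mklabel gpt
    parted -s "$d" mkpart primary 1MiB 2MiB      # tiny partition for grub's core image
    parted -s "$d" set 1 bios_grub on            # marks it so grub-install works on GPT+BIOS
    parted -s "$d" mkpart primary 2MiB 100%      # the rest becomes a RAID member
done

# Then feed all the 2nd partitions to mdadm:
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
```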
[00:03] <pcfreak30> it seems to be very anal. the debian version doesnt give a shit but grub install borks and i couldnt find the tools in a shell to fix it as it uses busybox
[00:03] <sdeziel> pcfreak30: OK then in that case, your single 2nd part should not be ext4, isn't it?
[00:03] <pcfreak30> no. it wasnt going to be formatted as ext4. at that point i was having issues.
[00:04] <pcfreak30> right now i have it setup for 3 drives in raid, but would have to add a 4th post install
[00:04] <mwhudson> yeah d-i is great for letting you configure setups that won't actually work
[00:05] <mwhudson> good luck, i have to run now
[00:05] <sdeziel> pcfreak30: doesn't sound fun if that's even possible to grow a raid 0 like that
[00:06] <sdeziel> pcfreak30: it's probably not the best way but here's what I'd do personally
[00:07] <sdeziel> pcfreak30: I'd drop to a console, use fdisk to nuke the partitions, replace GPT by msdos, create single big part on all 4 drives
[00:07] <sdeziel> pcfreak30: then feed all 4 to mdadm
[00:07] <sdeziel> put a pv on the md array
[00:07] <pcfreak30> yea ive been doing all that already, just not explicitly msdos
[00:07] <sdeziel> then switch back into the installer and see if it's happier
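The sequence sdeziel lists (nuke GPT, relabel msdos, one big partition per drive, mdadm, then a PV) could be sketched as follows. This uses non-interactive parted rather than the interactive fdisk he mentions; device and VG names are assumed, and it wipes all four disks.

```shell
# Sketch of the steps above (device names assumed; this destroys existing data!).
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    parted -s "$d" mklabel msdos              # replace the GPT label with msdos
    parted -s "$d" mkpart primary 1MiB 100%   # one big partition per drive
done

# Feed all four partitions to mdadm as a raid 0:
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Put an LVM physical volume on the md array, then a volume group:
pvcreate /dev/md0
vgcreate vg0 /dev/md0
```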
[00:08] <pcfreak30> ive gone and done the 3 drives just to get this BS installed and will see what I can do on the 4th disk. may be wiping this server a few times for tests anyways. its 8 pm also.
[00:08] <sdeziel> you seem to be aware already but I need to stress that a raid 0 made of 4 disks is putting your whole array at great risk ;)
[00:08] <pcfreak30> i know
[00:09] <sdeziel> yeah, 8PM on a Friday ;)
[00:09] <pcfreak30> There isnt going to be any critical data on this server. its going to be for processing and copying data out. its sort of going to be a container with a lot of temp storage as an analogy
[00:10] <pcfreak30> So if the whole thing borks then i would reinstall. reconfiguring may be a pain but i need capacity and not redundancy in my case
[00:10] <sdeziel> pcfreak30: for those use cases, I would use 2 arrays, a small raid 1 spanning all 4 drives and hosting the OS and the unimportant data on a raid 0 spanning the bulk of the 4 disks
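The two-array layout suggested here could look like this: a small OS slice and a bulk slice on each drive, mirrored and striped respectively. The 24GiB boundary, device names, and msdos label are assumptions for illustration.

```shell
# Sketch: small raid 1 for the OS + big raid 0 for scratch data
# (sizes and device names assumed; destroys existing partition tables).
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    parted -s "$d" mklabel msdos
    parted -s "$d" mkpart primary 1MiB 24GiB    # small OS partition
    parted -s "$d" mkpart primary 24GiB 100%    # bulk data partition
done

# OS on a 4-way mirror: survives any single (or triple) disk failure:
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1

# Unimportant data on a stripe across the remainders:
mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sd[abcd]2
```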
[00:11] <pcfreak30> and the drives i need are only 900GB SAS drives and im limited without getting a jbod enclosure. so going a step at a time
[00:11] <sdeziel> with the system on a raid 1, you get the best of both worlds
[00:12] <pcfreak30> the system is going to be write heavy, so i need better write than read i believe. its part of my bigger project. Need to see if this hardware performs at all for my tasks.
[00:12] <sdeziel> the OS part can typically fit inside 16-24G depending on what you do so you'd get (900-24)GB usable times 4
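The capacity arithmetic behind that figure, assuming a 24GB OS slice carved out of each 900GB drive: the raid 1 still presents 24GB for the OS, and the raid 0 stripes the four remainders.

```shell
# Usable capacity of the suggested layout (4 x 900GB drives, 24GB OS slice each).
echo "OS (raid 1 across the 24GB slices): 24 GB"
echo "Data (raid 0 across the remainders): $(( (900 - 24) * 4 )) GB"
```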
[00:13] <sdeziel> pcfreak30: gotta run, good luck and have a good weekend
[00:13] <pcfreak30> are you saying a ~25 GB raid 1 partition across the drives for the os, and the rest for the temp work?
[00:13] <pcfreak30> sdeziel: ^
[00:15] <sdeziel> pcfreak30: yeah
[00:15] <pcfreak30> thx.
[00:16] <sdeziel> pcfreak30: you'd then be relatively sure to not go through the reinstallation pain due to a dead disk ;)
[00:17] <pcfreak30> np. ill have to weigh that after my initial experiments. If it turns out well its likely ill do that and then only need to reinstall twice. Have to judge if losing 100 gb is worth it
[00:18] <pcfreak30> thanks for the idea.
[00:19] <sdeziel> pcfreak30: if you want to extract more, you might be better served with raid 10 on those 4 small partitions
[00:20] <sdeziel> or maybe raid 5, I don't know, play with http://www.raid-calculator.com/default.aspx
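The same math the linked calculator does, for four 900GB members, ignoring metadata overhead:

```shell
# Rough usable capacity of 4 x 900GB members under different raid levels.
n=4; size=900
echo "raid 0:  $(( n * size )) GB"         # stripes everything, no redundancy
echo "raid 10: $(( n * size / 2 )) GB"     # striped mirrored pairs
echo "raid 5:  $(( (n - 1) * size )) GB"   # one disk's worth of parity
echo "raid 1:  $size GB"                   # everything mirrored
```

raid 10 halves capacity but tolerates a disk loss and usually writes faster than raid 5, which pays a parity-update penalty on writes.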
[00:21] <sdeziel> pcfreak30: if you want to extract the most performance out of those spinning rust disks, consider putting the OS partition at the end of the disks
[00:21] <pcfreak30> ill have to think when i can actually think.  for my needs as the disks i have are 10k rpm sas and i couldnt get a 1t or 2t at a decent price,  space matters a lot for my needs potentially. so it will all depend on the outputs of what im trying to do. as i dont even know if im wasting my time yet :P
[00:22] <pcfreak30> i got 16 netapp sas 900 gb drives for ~280 and reformatted them
[00:22] <sdeziel> pcfreak30: you can also check with ZFS that supports some form of raid but most importantly super fast compression (lz4) which might give a decent performance boost
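A minimal ZFS sketch of that suggestion, with assumed pool and device names. raidz1 gives up one disk to parity; a plain striped pool (just listing the disks) would maximize space at raid-0-like risk.

```shell
# Sketch: ZFS pool with lz4 compression (pool/device names assumed).
zpool create scratch raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs set compression=lz4 scratch

# Later, check how much the (very cheap) lz4 compression is buying:
zfs get compressratio scratch
```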
[00:23] <sdeziel> anyway, I'm out now
[00:23] <pcfreak30> thanks. basically what im doing will be a ton of writes out more than reads if i researched right. 
[07:27] <patstoms> i got a usbmux user after ubuntu server upgrade
[07:27] <patstoms> is there any documentation about why do i need it?
[07:28] <patstoms> documentation says that it is "iPhone/iPod Touch USB multiplex server daemon"
[07:28] <patstoms> what could be a reason that i got it in ubuntu server?
[10:04] <tomreyn> patstoms: that's a standard user, but it should not have a shell
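A quick way to verify what tomreyn says, assuming a standard Ubuntu server (output varies by release):

```shell
# Inspect the usbmux account; the last field is its login shell,
# which should be a non-interactive one such as /bin/false or /usr/sbin/nologin.
getent passwd usbmux
```

The account is created by the usbmuxd package (the "iPhone/iPod Touch USB multiplex server daemon" the documentation mentions), typically pulled in as a dependency during the upgrade.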
[19:04] <pcfreak30> sdeziel: was able to get the install done smoothly with msdos partition. GPT causes a lot of pain.