[01:12] <pmatulis> teward: not sure what you mean by 'isn't depending on the management features'. nothing is forcing you to send it management commands
[04:13] <ksx4system> what would be safest possible disk pool solution without hardware RAID controller? what would be safest filesystem?
[04:13] <ksx4system> (preferably something that works on 14.04 LTS)
[04:14] <sarnold> ksx4system: it's not exactly easy or transparent, but zfs provides checksums, compression, multiple redundancy methods to choose from
[04:14] <ksx4system> sarnold: unfortunately ZFS is insanely resource hungry :(
[04:15] <sarnold> ksx4system: there's a lot to learn about zfs before jumping in, http://zfsonlinux.org/ https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ good starting points..
[04:15] <sarnold> ksx4system: oh? granted you do currently want a 64 bit system, but if you don't use dedup (and you shouldn't use dedup) it shouldn't be too bad; lots of folks use it in 4 or 8 gig machines..
[04:15] <ksx4system> I'd like to build a server which eats up under 100W at full load (so some kind of 64-bit Atom board with 2-4GB of ram and four 2TB critical appliance grade disks)
[04:17] <ksx4system> is ZFS the only solution? (given that I'm going to build that array of super-expensive critical appliance grade disks)
[04:17] <sarnold> yeah that sounds like something zfs should do alright; you could do raidz2 if you want any two drives to die and IOPS aren't too important, or mirrors if iops are more important, but with mirrors only the 'right' two drives could die, hehe
[04:18] <sarnold> no, you could also use mdadm to build your system, but it doesn't do checksums; it's more designed to handle disks outright dying, rather than corrupting data slowly via cosmic rays
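The two zfs layouts sarnold describes can be sketched as `zpool create` invocations. This is a hedged sketch only: the pool name `tank` and the device names `/dev/sdb`..`/dev/sde` are assumptions, and both commands need root and four real disks.

```shell
# raidz2: ANY two of the four drives may fail; usable capacity of two drives
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# striped mirrors (RAID10-like): better IOPS, but only the "right" two
# drives (one per mirror pair) may fail at the same time
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
```

Either way, scrubbing (`zpool scrub tank`) is what actually exercises the checksums against slow bit rot.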
[04:18] <ksx4system> IOPS are quite important (it'll act as NAS for 3-4 of my desktop computers for nearly everything)
[04:18] <sarnold> I went right to zfs because you started with "safest" :)
[04:18] <ksx4system> I know, I know
[04:18] <ksx4system> but I don't want to wreck my ultra low power home with loud and extremely power hungry server to handle ZFS properly
[04:18] <sarnold> to be fair there's more people using mdadm than zfs on linux, but checksums and compression are good stuff..
[04:19] <sarnold> amen.
[04:20] <ksx4system> sarnold: if I don't really want ZFS (because of its requirements) - what else? mdadm with ext4?
[04:20] <sarnold> ksx4system: yeah, that's a decent second choice
[04:21] <ksx4system> how many drives could die in 4 disk array with this one?
[04:21] <ksx4system> (one for sure, two maybe?)
[04:21] <sarnold> two, if the right two die
[04:21] <ksx4system> yeah, not the 1+mirror1 drive
[04:23] <ksx4system> isn't it better to run ZFS on Solaris anyway?
[04:23] <sarnold> aha, here's the guide on md method, https://help.ubuntu.com/lts/serverguide/advanced-installation.html
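The mdadm-plus-ext4 second choice from above looks roughly like this. A sketch under assumptions: device names `/dev/sdb`..`/dev/sde` and the RAID10 level are illustrative (RAID6 would be the md analogue of raidz2), and the commands need root.

```shell
# build a 4-disk RAID10 array, then put ext4 on top of it
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0

# persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```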
[04:23] <ksx4system> oh, one more thing: OS will not sit on data array
[04:23] <ksx4system> dedicated SSD for this
[04:24] <sarnold> perhaps; but the zfsonlinux software is storing unfathomable amounts of data at the moment and the devs only know a few data-loss events that were caused by the software
[04:25] <sarnold> that really helps, I think ubuntu's installer or boot or something is cranky with some kinds of md raid, and getting rpool on zfs is possible but looks like more effort than I want to spend :)
[04:25] <ksx4system> would it be safer to just buy hardware RAID card for PCI-express and run two RAID1/one RAID10/one RAID5 or even 6?
[04:26] <sarnold> I don't much like the hardware raid solutions, I've heard of too many cases of the raid card dying and the resulting pile of disks not reassembling when the card is replaced
[04:26] <ksx4system> sarnold: while music, those ultra large backup tarballs and other non-exec stuff could live on HDD, operating systems are unusably slow without SSD
[04:26] <ksx4system> and 16-32Gb ones are really cheap
[04:26] <sarnold> other people swear by them, of course, they sell like hotcakes :) -- but I'd rather have something that is software only so I stand a chance of rebuilding it without exotic or expensive or high-end services..
[04:27] <sarnold> hehe, I've had my eye on those pcie intel 750 ssd monsters for a while.. $950 for 1.2 tb of 400kiops.
[04:28] <ksx4system> an example: server dies, I move disks to new identical one (software RAID) - do I have 100% chance to access my data again?
[04:28] <ksx4system> (ofc given that hard drives are ok)
[04:28] <sarnold> ksx4system: yeah, assuming the drives survive
[04:28] <ksx4system> ok, so I don't care about hardware cards (more money to build server, less reliable solution)
[04:28] <sarnold> right
[04:30]  * ksx4system will backup most important stuff to some cheapskate board like Orange Pi/Banana Pi with 1Tb 2,5" drive on SATA anyway 
[04:31] <ksx4system> and the uber important stuff on DVDs also
[04:31] <ksx4system> is there anything to improve?
[04:31] <sarnold> good plan, raid is for availability, not to avoid backups :)
[04:31] <ksx4system> I never really did backups
[04:32] <ksx4system> after another lost drive (80% data retrieved, the rest... well, fsck it) I've decided to backup *everything possible*
[04:32] <ksx4system> daily
[04:32] <sarnold> you may wish to look into systems with multiple NICs that you can gang together to get e.g. 2gig ethernet out of them; doing LACP may require a nicer switch, too, but might be worth it if you've got several machines doing IO to it at once
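On 14.04 an LACP bond is configured in `/etc/network/interfaces` (the `ifenslave` package must be installed, and the switch must support 802.3ad). A minimal sketch; interface names and the address are assumptions:

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves eth0 eth1

auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0
```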
[04:33] <ksx4system> well, those will be 100Base-T boxes (two Raspberry Pis, one Intel Compute Stick with USB 100Base-T and another Raspberry Pi)
[04:33] <sarnold> one nice thing about zfs is you can use an ssd as both l2arc and slog, to handle data that hasn't fallen out of RAM and to take bursts of synchronous storage requests faster than spinning metal drives can keep up
[04:33] <ksx4system> single gigabit will handle it without even hitting 50% load
[04:33] <sarnold> ohhh, so the l2arc and slog probably aren't a big deal either :)
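For reference, attaching an SSD as slog and l2arc is just two `zpool add` commands. Sketch only: pool name `tank` and the SSD partitions `/dev/sda4`/`/dev/sda5` are assumptions.

```shell
# SLOG: absorbs bursts of synchronous writes faster than spinning disks
zpool add tank log /dev/sda4
# L2ARC: second-level read cache for data that has fallen out of RAM (ARC)
zpool add tank cache /dev/sda5
```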
[04:34] <ksx4system> laptops will sync over Wi-Fi so the same crappy performance
[04:35] <ksx4system> still, I have quad 100Base-T nic - I could use it for those tiny computers on desktop
[04:36] <sarnold> do you use those little pis and intel compute stick as desktops? server things? bridges to devices?
[04:36] <ksx4system> desktops
[04:36] <sarnold> how do they work out for that?
[04:37] <ksx4system> ICS is wonderful for low resource stuff (LibreOffice, Spotify, HexChat, Cygwin, modern browser)
[04:37] <sarnold> when I installed my pandaboard es I played in the desktop for ten minutes and thought it did well enough but I never _used_ it..
[04:37] <ksx4system> one Pi runs RISC OS (B+ and it's blazing fast)
[04:37] <ksx4system> but RISC OS will be blazing fast on anything 200MHz+
[04:38] <ksx4system> another one will run... some kind of Linux, i've got to install stuff there (probably bare-bones Ubuntu/Debian with Fluxbox)
[04:39] <ksx4system> for more demanding tasks (audio editing) I still have that old quadcore/16Gb RAM/500+1000 HDDs/60 SSD
[04:39] <ksx4system> 90% of time I'm happy with those ultra low power boxes :)
[04:40] <ksx4system> btw ICS feels faster than dual core AMD box with 5Gb ram but only HDD (320Gb, for system and data)
[04:41] <sarnold> wow
[04:41] <ksx4system> this one is ultra-sluggish and will retire as soon as I'll finish with Pi2
[04:41] <sarnold> it's been so long since I used hard-drive based OS, it's hard to even remember those days..
[04:42] <ksx4system> I was forced to do so (failed power supply in quad core monster, failed motherboard in monster ThinkPad)
[04:42] <ianorlin> sarnold: yeah I can relate to that I have an ssd in my core 2 duo laptop and it feels much more responsive than a junky hp laptop with a 5400 rpm drive
[04:42] <sarnold> ksx4system: ouch :/
[04:42] <ksx4system> ianorlin: good SD card is faster than 5400rpm drive...
[04:42] <sarnold> ianorlin: yeah, an ssd is a cheap way to bring an older system back to life for a while..
[04:43] <ksx4system> i3 dual 1,33GHz laptop with two gigs of ram and SSD faster than its SATA bus = this thing fscking flies
[04:43] <sarnold> my panda killed an sd card in ~one year of light torrenting, package updates, light logging..
[04:44] <ksx4system> I log to ram (and copy periodically to USB drive)
[04:44] <ksx4system> noatime mounts
[04:44] <ksx4system> cards live more than two years without problems
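The `noatime` trick mentioned above is a one-word change in `/etc/fstab`; it stops every read from triggering an access-time metadata write, which is what grinds down SD cards. The device and mountpoint below are a typical Raspberry Pi layout, assumed for illustration:

```
# /etc/fstab sketch: noatime spares the card a metadata write on every file read
/dev/mmcblk0p2  /  ext4  defaults,noatime  0  1
```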
[04:44] <ianorlin> yeah but it won't magically give it vt-x so I still can't host virtual machines on my laptop, I can however use virt-manager over ssh and even create them with that
[04:45] <ksx4system> ianorlin, omg no vt-x/amdv
[04:45]  * ksx4system doesn't even remember seeing desktop/laptop box that old
[04:45] <sarnold> ksx4system: yeah.. no more torrenting on that machine for me, and the second card seems to have lasted 1.2 years or so now :)
[04:45] <ianorlin> although since I got this beast desktop I have done so many testcases in virtual machines to help quality
[04:46] <ksx4system> even ICS has vt-x (but it's useless with two gigs of ram)
[04:47] <ianorlin> is setting up vnc on the ICS hard? putting it in the back of a tv and running like inkscape on it could do interesting stuff like dnd maps in inkscape
[04:48] <ksx4system> ianorlin: no, just install your VNC server of choice :) quadcore Atom will take care
[04:49] <ksx4system> but it might not support your TV (only goes up to 1920x1080, nothing higher)
[04:50] <ksx4system> not even 1920x1440 (those huge LCDs)
[04:50] <ianorlin> ksx4system: unfortunately my tvs are ancient, they don't have component in
[04:51] <ksx4system> only HDMI :( but accepts HDMI-DVI dumb converter (but then you have to use USB soundcard/DAC)
[04:51] <ksx4system> and yes, afaik it will boot Linux (but it'll break warranty to install it on eMMC instead of Windows 8)
[04:53]  * ksx4system runs his desktop on three identical XGA 15" LCDs, ultra low power (around 10-12W/piece)
[05:28] <arooni> would like to run two vhosts on the same vps using ubuntu 14.04 and nginx.... but now getting "[emerg] 19896#0: duplicate listen options for [::]:80 in /etc/nginx/sites-enabled/sitename"  ... can i have no default server listed?
[05:29] <sarnold> virtual hosts share an ip address, listening socket, etc
[05:29] <sarnold> they only notice which virtual host to use when they accept and then read from the socket
[05:30] <sarnold> you'll need to have exactly one listen config option in the entire process
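In nginx terms: that error appears when more than one `server` block attaches socket options (like `default_server`) to the same `listen` address. Exactly one vhost may carry the options; the rest use a bare `listen`. A sketch with assumed hostnames:

```nginx
# sites-enabled/siteA -- the ONE block that carries the listen options
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name a.example.com;
}

# sites-enabled/siteB -- every other vhost listens plainly
server {
    listen 80;
    listen [::]:80;
    server_name b.example.com;
}
```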
[07:58] <lordievader> Good morning.
[15:39] <penw> hello beautiful people
[15:40] <penw> how can I go about running a cronjob every 30 minutes between hours
[15:40] <penw> e.g 8-14
[15:40] <penw> on one line
[15:40] <penw> if I specify 30 8-14 it will run on 14:30 as well
[15:40] <penw> well 0,30 :^)
[15:43] <penw> after 3 minutes of using the powerful search engine named Google the cleanest option would be to just have two lines
[15:43] <penw> oh well
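The two-line crontab penw settled on would look like this (the job path is hypothetical). A single `0,30 8-14` line would also fire at 14:30, which is what the split avoids:

```
# :00 and :30 from 08:00 through 13:30
0,30 8-13 * * * /path/to/job
# final run at 14:00, no 14:30
0    14   * * * /path/to/job
```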
[16:32] <jamespage> coreycb, just testing to see if we can patch out ryu from the neutron deps
[16:33] <coreycb> jamespage, ok, tough go getting it into main I see
[16:33] <jamespage> coreycb, I have a hunch that the test suite is patch-tastic
[16:34] <jamespage> coreycb, and its for an experimental driver anyway
[16:34] <jamespage> alternative to the command line driver for ovs
[16:34] <jamespage> so its not default
[16:34] <coreycb> jamespage, ok
[16:34] <jamespage> coreycb, testr did not crap out with an import error so fingers crossed
[16:34] <coreycb> jamespage, cool