[02:33] <kuld> hi
[02:33] <kuld> i recently uploaded my website files to my site using filezilla but it doesn't show any pictures
[02:33] <kuld> i gave permission using filezilla to the files permission 744
[03:50] <MACscr> my 12.04 LTS server has a 3.2 kernel, but im being told by zimbra that 3.0 to 3.9 isnt supported and supposedly canonical doesnt even support them anymore and its only 3.13 and above as of august. If thats the case, why isnt apt-get upgrade seeing the newer kernel?
[03:52] <MACscr> all the instructions im seeing for updating the kernel are for installing the raring or another release's kernel on a precise system, and thats obviously not correct.
[04:29] <sarnold> MACscr: "It's complicated"
[04:31] <sarnold> MACscr: there were a few releases of 12.04 LTS -- 12.04, 12.04.1, 12.04.2 .. 12.04.5.  12.04 and 12.04.1 were released with the same kernel / X11 / etc., and those kernels are still supported. the .2, .3, .4 versions were all sharing kernels / X11 / whatever with raring, saucy, trusty, and those packages have reached EOL. Those can be upgraded to the packages shipped with 12.04.5, which shares its HWE stack with 14.04.1.
[04:31] <sarnold> MACscr: there are thus two supported ways forward for 12.04 LTS -- the original 12.04 release (and first point release) and the newest, 12.04.5.
[04:31] <MACscr> my system is virtual, so the hardware stack isnt supposed to apply
[04:31] <sarnold> MACscr: of course what third-party vendors choose to support varies
[04:32] <sarnold> MACscr: virtualbox, for instance, only supports the original stack (and -maybe- the .2 version? I forget..) -- in any event, if you wanted to run a virtualbox host off of it, you'd NEED the original 12.04 kernel etc
[04:33] <MACscr> well its not specifically virtualbox, but any VM should be running the virtual kernel
[04:33] <sarnold> MACscr: I don't know what specifically zimbra cares about but I'd expect the 12.04.5 HWE stack to work for you
[04:33] <MACscr> so from what I am reading, Canonical is no longer supplying kernel updates for Precise?
[04:33] <sarnold> MACscr: note, virtualbox _host_ -- kvm has no restrictions because kvm is built alongside the kernel
[04:33] <sarnold> MACscr: virtualbox _guests_ should all just work
[04:34] <MACscr> because some Hardware stack addon has nothing to do with regular updates
[04:34] <sarnold> MACscr: canonical is supporting TWO different kernels for precise -- the original stack and the new stack
[04:34] <MACscr> so 2.3 is still supported? Thought it was EOL in August
[04:34] <sarnold> what's 2.3?
[04:35] <MACscr> 3.2.x
[04:35] <sarnold> yes, 3.2.x is still supported via the 'linux' package: https://launchpad.net/ubuntu/+source/linux
[04:37] <sarnold> aha, finally found the new HWE stack, a 3.13.x kernel: https://launchpad.net/ubuntu/+source/linux-lts-trusty
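For reference, sarnold's find amounts to roughly the following recipe; this is a sketch, wrapped in a function so it can be reviewed without running, and it needs root on a real 12.04 box:

```shell
# Sketch: move a 12.04 (precise) server onto the 12.04.5 HWE kernel
# (the 3.13.x trusty stack) via the linux-generic-lts-trusty metapackage.
upgrade_to_trusty_hwe() {
  sudo apt-get update
  sudo apt-get install -y linux-generic-lts-trusty
  sudo reboot   # boot into the new 3.13.x kernel
}
```

The old 3.2.x kernel stays installed, so GRUB can still fall back to it if the new stack misbehaves.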
[04:38] <MACscr> wow, thats so convoluted
[04:38] <sarnold> MACscr: yes.
[04:38] <MACscr> why make it such a pain in the arse on admins? From what i have read, any kernel between 3.0 and 3.9 is horrible and shouldnt be used because of disk performance issues
[04:39] <sarnold> MACscr: this wiki page tries to explain it all, but because it was written _before_ the .2, .3, and .4 stacks died, it suffers from a lot of other problems: https://wiki.ubuntu.com/1204_HWE_EOL
[04:39] <sarnold> MACscr: naaaah, that's far too gloomy a representation of earlier linux versions
[04:40] <sarnold> MACscr: it was Good Enough from 1.2.13 through 2.6.16, and got pretty decent at 2.6.17, I'd say 2.6.32 and newer is much the same, though specific versions along the way have had slightly better or worse performance...
[04:40] <sarnold> MACscr: honestly I haven't noticed any actual -changes- since 2.6.37 except perhaps for the nicer perf / tracing tools.
[04:41] <sarnold> MACscr: I'm surprised zimbra would care; most applications can run just on 2.6 kernels without trouble
[04:50] <MACscr> sarnold: and from what i am seeing, there are not virtual kernels for precise in this newer stack?
[04:50] <sarnold> let me see...
[04:51] <MACscr> BTW, i am using them in xen pvm's. The point of the virtual kernel is to make it more efficient and slim because of the lack of needed hardware support
[04:55] <sarnold> interesting, I don't see any of the -virtual packages, nor the -powerpc and -powerpc64 packages that the standard linux package provides...
[04:56] <MACscr> yep, screwed over again….
[04:57] <sarnold> how much performance penalty have you found with the non-PV guest kernels?
[04:58] <sarnold> .. and do your CPUs really not have virtualization extensions? they've been around for a while now, I'm surprised anyone still ran PV hosts
[04:59] <MACscr> sarnold: of course i do, but they are still faster
[04:59] <MACscr> heck, you even do PV with KVM when you can
[05:01] <sarnold> MACscr: interesting, I didn't know kvm had any paravirt hypercalls available..
[05:02] <Logos01> Oh yeah, this is going to end well.
[05:02] <Logos01> "33 packages are going to be removed. 681 packages are going to be installed. 1515 packages are going to be upgraded."
[05:02] <MACscr> lol
[05:02]  * Logos01 is upgrading his laptop from 12.04 to 14.04
[05:02] <MACscr> good luck!
[05:02]  * Logos01 is also using ZFS as the rootfs for this setup
[05:02] <sarnold> Logos01: \o/ :)
[05:02] <sarnold> Logos01: YIKES
[05:03] <Logos01> As I said. This will end well.
[05:03] <sarnold> Logos01: are you also upgrading from 0.6.2 to 0.6.3?
[05:03] <Logos01> I don't think so
[05:04] <Logos01> Naw, I'm on 0.6.3
[05:04] <sarnold> okay, that upgrade sounded like it hurt some folks, the userspace <-> kernel protocols changed and the tools do not handle that well :/
[05:04] <sarnold> Logos01: oh okay
[05:04]  * sarnold exhales again
[05:04] <sarnold> Logos01: how much work was it to get zfs root? :) I've discounted it as too much work and too dangerous but would LOVE the features..
[05:04] <Logos01> I've done it enough times that I'm used to it.
[05:04] <Logos01> I use rlaager's walkthrough.
[05:05] <sarnold> Logos01: thanks :D
[05:05] <Logos01> I don't think I have a single Ubuntu system right now that *doesn't* use ZFS as its rootfs.
[05:06] <Logos01> Wait, no, I forgot my NAS and hypervisor are still on ext4+LVM
[05:06] <Logos01> I'll probably change that soon.
[05:08] <Logos01> https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem <-- that's pretty valid
[05:10] <sarnold> wow, rlaager's guide is 0.6.1..
[05:10] <sarnold> .. from the mists of time :)
[05:11] <Logos01> Haha, it hasn't been updated recently no.
[05:11] <Logos01> But he and ryao both are awesome.
[05:12] <sarnold> *nod*
[05:12]  * Logos01 lurks in #zfsonlinux ... for years
[05:12] <MACscr> hmm, how should i properly recreate my /boot/grub/default?
[05:13] <sarnold> Logos01: crazy though, I haven't seen either rlaager's guide or this pkg-zfs guide, both look less daunting than the other guides I've seen
[05:14] <MACscr> nvm, got it
[05:14] <Logos01> sarnold: Yeah, I can't account for that.
[05:14] <sarnold> MACscr: oh, what was it?
[05:14] <MACscr> sarnold: no idea, it was created again when i installed the trusty kernels
[05:14] <sarnold> MACscr: oh :) hehe
[05:14]  * Logos01 probably shouldn't watch the exact packages being installed
[05:14] <Logos01> "libthai" <-- O_O
[05:15] <sarnold> mmmm
[05:15] <sarnold> dammit now I'm hungry
[05:15] <Logos01> HAhahahahaha
[05:15] <Logos01> If only.  libudon
[05:15] <sarnold> OMG
[05:15] <MACscr> hmm, so with Ubuntu 10.04 LTS server. Can i install the trusty kernel on it too?
[05:15] <Logos01> MACscr: Well, I don't see why you particularly *couldn't*
[05:15] <Logos01> But ... why not just distro upgrade?
[05:16] <Logos01> I mean, aside from that whole /var/run -> /run nonsense?
[05:16] <MACscr> because with zimbra, you have to completely reinstall it
[05:16] <sarnold> MACscr: it would -probably- work...
[05:17] <Logos01> Well I mean worst thing that happens is he has to reboot into an older kernel version
[05:17] <Logos01> Right?
[05:17] <MACscr> Couldn't find package linux-generic-lts-trusty
[05:17] <sarnold> Logos01: probably, I'd hope newer kernels' post-inst scripts wouldn't do anything that a 10.04 couldn't handle..
[05:17] <Logos01> MACscr: Actually ...
[05:18] <sarnold> MACscr: yeah, you'd need to add new package lists to apt and perhaps add some pinning or similar to make sure they are lower priority than the lucid lists
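If anyone tries sarnold's pinning route on lucid, a hypothetical /etc/apt/preferences.d fragment might look like this; the names and priorities are illustrative, not a tested recipe:

```
# /etc/apt/preferences.d/trusty-pin (hypothetical)
# Keep the added trusty lists at low priority overall...
Package: *
Pin: release n=trusty
Pin-Priority: 100

# ...but allow the HWE kernel metapackage to come from trusty.
Package: linux-generic-lts-trusty
Pin: release n=trusty
Pin-Priority: 990
```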
[05:18] <Logos01> MACscr: try adding the backports repo and seeing what kernel versions show up
[05:18] <Logos01> Barring that you might be able to add xorg-edgers and get it that way.
[05:18] <sarnold> Logos01: he just wants kernel, doesn't care about x11
[05:18] <Logos01> sarnold: They ship the kernels
[05:18] <sarnold> oh :)
[05:19] <MACscr> well its a vm too
[05:19] <Logos01> He could also probably grab the exact kernel off of the mainline-ppa (you can't add that as a PPA though)
[05:19] <MACscr> i know i cant use mainlines with ksplice
[05:19] <MACscr> im 100% sure on that
[05:19] <Logos01> Oh, you're using ksplice??
[05:20] <MACscr> yep
[05:20] <Logos01> Throw everything we just said out the window
[05:20] <Logos01> You're on your own.
[05:20] <Logos01> (I don't mean to be hostile, or seem it, but ksplice destabilizes the entire conversation)
[05:20] <MACscr> ksplice is awesome
[05:20] <Logos01> It is, but it also is highly unsupportable in situations exactly like this.
[05:21] <Logos01> There's no way to know exactly what it's done, and it *definitely* won't support whatever kernel you try to add.
[05:21] <sarnold> just out of curiosity, how many digits per year do you have to throw to oracle to get that?
[05:21] <Logos01> sarnold: You can still get it for free for Ubuntu desktop kernels.
[05:21] <MACscr> like this? im only using official precise kernels
[05:21] <sarnold> Logos01: really?? cool
[05:21] <Logos01> (And Fedora)
[05:21] <MACscr> sarnold: about 3 bucks per month per server
[05:21] <Logos01> Oh, you have one of the pre-Oracle licenses?
[05:21] <sarnold> MACscr: damn. that's far cheaper than I expected.
[05:21] <Logos01> Good on you.
[05:21] <MACscr> but only because i dont have that many that im using it with
[05:22] <MACscr> yes i do =)
[05:22] <sarnold> oh :)
[05:22] <Logos01> sarnold: That pricing isn't available.
[05:22] <MACscr> and i can keep adding servers too =)
[05:22] <sarnold> $3/mo really didn't seem like larry to me, hehe
[05:22] <Logos01> It's from when it was ksplice.
[05:22] <sarnold> *sigh*
[05:22] <sarnold> oracle
[05:22] <sarnold> where good things go to die
[05:22] <Logos01> Yuuuup
[05:22] <sarnold> and horrible things get worse
[05:22] <Logos01> ... I know far too many former Sun employees.
[05:25] <sarnold> Logos01: hmmm, step 3.2.1 has create a sparse file for mirroring, then delete the file and degrade the mirror .. but mirrors can be created and broken at any time with zfs, why bother with the sparse file here?
[05:25] <sarnold> Logos01: do you skip that step now?
[05:28] <Logos01> sarnold: Umm...
[05:28] <Logos01> 3.2.1 from what?
[05:29] <Logos01> Oh, I see.
[05:29] <Logos01> I should've read that closer. I didn't have to do anything like that.
[05:29] <sarnold> Logos01: yeah it seems like needless complication..
[05:30] <Logos01> Yeah
[05:33] <Logos01> If you found the rlaager walkthrough it's probably still better.
[05:33] <Logos01> Yup, errors encountered in install
[05:34] <sarnold> Logos01: wow, it's worth it for this line alone, "The /etc/zfs/zpool.cache file embedded in the initrd for each kernel image must be the same as the /etc/zfs/zpool.cache file in the regular system. Run update-initramfs -c -k all after any /sbin/zpool command changes the /etc/zfs/zpool.cache file."
[05:34] <sarnold> Logos01: I've also never seen this advice written down anywhere before :) hehe
[05:35] <Logos01> Hehe... yeah, that's a valid assertion.
[05:35] <sarnold> (a) I never knew the initrd had a copy of the cache (b) oh man keeping it in sync sounds miserable :)
[05:36] <Logos01> Well, if you don't futz around with zpool.cache then you're okay
[05:36] <Logos01> Not many commands do that.
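The wiki advice sarnold quoted boils down to a two-step habit after any command that rewrites the cache file; a sketch, wrapped in a function so the block can be sourced without touching a real pool ('rpool' is an illustrative pool name):

```shell
# Keep the initrd's embedded copy of /etc/zfs/zpool.cache in sync with the
# regular system's copy after any zpool change, per the ZFS-root guide.
refresh_zfs_initrd() {
  sudo zpool set cachefile=/etc/zfs/zpool.cache rpool
  sudo update-initramfs -c -k all
}
```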
[05:36] <Logos01> Oh dear.
[05:36] <sarnold> uhoh, upgrade problem?
[05:36] <Logos01> Yarp
[05:36] <sarnold> :/
[05:36] <Logos01> 804 left to upgrade
[05:37] <sarnold> oww
[05:41] <Logos01> Oh yeah ... I'm a little hosed here.
[05:41] <Logos01> It's that grub bug I mentioned before.
[05:41] <Logos01> Yay
[06:11] <lordievader> Good morning.
[06:13] <Logos01> sarnold: Yeah, this is pretty badly hosed now.
[06:14] <Logos01> grub-probe cannot find filesystem for /
[06:14] <sarnold> Logos01: owwww :(
[06:14] <sarnold> Logos01: so -- rollback to 12.04 and pretend it never happened? or .. try to go forward and fix?
[06:14] <Logos01> I tried downgrading grub to the version provided by zfs-native PPA for precise, but no dice.
[06:15] <Logos01> Rollback's not an option here, because like an idiot I didn't snapshot this.
[06:15] <sarnold> ashift 9 or 12?
[06:15] <Logos01> I thought I had zfs-auto-snapshot running and I don't.
[06:15] <sarnold> :(
[06:15] <Logos01> blargh, ashift 12
[06:15] <Logos01> This was working before.
[06:51] <Logos01> sarnold: Thanks for helping/trying though. :)
[06:52] <sarnold> Logos01: good for me too, hehe
[06:52] <Logos01> Heh
[06:52] <Logos01> You might have more luck.
[06:52] <Logos01> But all I did was add the ZFS grub PPA and then apt-get install zfs-grub/precise
[06:52] <Logos01> I had to then do an apt-get install --reinstall grub
[06:52] <Logos01> And *THAT* took.
[06:52] <sarnold> if I'd just gone to bed at the right time I might have thought "oh look how easy zfs rpool is these days" without knowing that it is in fact a ticking timebomb of annoying :)
[06:53] <Logos01> Wwwwweeeeellll
[06:53] <Logos01> This is an unusual case.
[06:53] <Logos01> I kinda got where I am because everything that can't go wrong decides "Oh hey, it's Logos, let's break anyhow."
[06:57] <sarnold> well, you're not the first to have that error message :) I don't think the previous guy I saw with it got it sorted :/
[06:59] <sarnold> night Logos01 :) hope it finishes off well enough
[08:14] <Logos01> sarnold: I had a successful boot.
[08:14] <Logos01> sarnold: Though it was ... troublesome.  Booted into initramfs a few times before I figured out kernel flags were missing from the GRUB config.
[09:34] <MACscr> can i install the enablement stack on 10.04?
[09:46] <rbasak> MACscr: that's not supported. You might be able to make it work but I suspect there will be more issues than it's worth.
[09:46] <MACscr> np
[09:47] <MACscr> i need to upgrade it anyway
[10:13] <Siebjee> Hi There, does anyone have a quick fix for initramfs saying that the root uuid disk doesn't exist?
[10:15] <Siebjee> ubuntu 14.04 and 12.04 are giving me the same issue.
[10:18] <alex88> hi guys, I'm trying to install libvips-dev package on a 12.04 box from circleci CI service
[10:19] <alex88> https://gist.github.com/alex88/6b0d829b591a07af8a40 problem is that it's not able to install them since the deps are never met
[10:30] <MACscr> hmm, when should i run barrier=0 and barrier=1 in fstab? Its a Xen PVM and the file system is ext3
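Nobody picked this one up, so for the record: barrier=1 keeps ext3 write barriers on, which protects journal ordering across power loss; barrier=0 trades that safety for write speed and is generally only defensible when a battery-backed cache below the guest guarantees ordering. An illustrative fstab line (the device path is assumed):

```
# /etc/fstab (illustrative) -- keep barriers on unless the storage layer
# has a battery-backed write cache that preserves ordering.
/dev/xvda1  /  ext3  defaults,barrier=1  0  1
```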
[11:26] <DenBeiren> hi people,.. i'm having issues with /boot filling up to 100% and i seem to be unable to purge older kernels :s
[11:27] <DenBeiren> http://pastie.org/9627549
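A common way out of a full /boot is to purge every versioned linux-image package except the running one. A sketch; the filter is pure text processing so it can be dry-run anywhere, and the package names in the demo are invented:

```shell
# Print versioned kernel image packages that are safe to purge, i.e. all
# linux-image-<version> entries except the currently running kernel.
filter_old_kernels() {
  current="$1"
  grep -E '^linux-image-[0-9]' | grep -v -- "$current"
}

# Demo with a made-up package list:
printf 'linux-image-3.13.0-32-generic\nlinux-image-3.13.0-35-generic\nlinux-image-generic\n' \
  | filter_old_kernels 3.13.0-35-generic
# -> linux-image-3.13.0-32-generic
```

On a real box: `dpkg -l 'linux-image-*' | awk '/^ii/{print $2}' | filter_old_kernels "$(uname -r)"`, then `sudo apt-get purge` the result.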
[13:30] <rbasak> ivoks: did you file a bug for your pacemaker issue? To fix it we're going to need an SRU justification, test case, etc.
[13:42] <paco1> hi folks!
[13:48] <nido> when i log in to an ubuntu 12.04 LTS server i get a message saying "N packages can be updated. M updates are security updates."; is it possible to get an overview of which packages specifically have security updates?
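One common answer: simulate the upgrade and keep only the lines coming from the -security pocket. The demo below filters sample output; the package and version strings are invented for illustration:

```shell
# Keep only pending upgrades that come from a *-security repository.
list_security_updates() {
  grep '^Inst' | grep -- '-security'
}

# Demo with made-up dry-run output:
printf '%s\n' \
  'Inst openssl [1.0.1-4ubuntu5.16] (1.0.1-4ubuntu5.17 Ubuntu:12.04/precise-security [amd64])' \
  'Inst vim [2:7.3.429-2ubuntu2] (2:7.3.429-2ubuntu2.1 Ubuntu:12.04/precise-updates [amd64])' \
  | list_security_updates
# -> prints only the openssl line
```

On a real box: `apt-get -s dist-upgrade | list_security_updates`. The MOTD counts themselves come from `/usr/lib/update-notifier/apt-check`.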
[13:50] <ivoks> rbasak: not yet; will do tomorrow i guess; it's a holiday, so i'll find some time to do things like that :)
[13:54] <rbasak> ivoks: thanks :)
[14:22] <bencc> can I install utopic package on ubuntu 14.04? http://packages.ubuntu.com/utopic/gstreamer1.0-tools
[14:22] <cfhowlett> bencc, mixing repo versions is not recommended and will probably break things
[14:22] <jamespage> zul, hmm - I'm seeing a pretty bad performance degradation on glance in juno
[14:22] <jamespage> zul, 10x slower
[14:22] <jamespage> ceph backend
[14:23] <zul> jamespage:  thats *not* good
[14:23] <cfhowlett> bencc, if it's a package you really need, perhaps someone will backport it
[14:23] <jamespage> zul, for uploading images at least
[14:23] <jamespage> I was getting 200MB/s out of icehouse - ~20 out of juno
[14:24] <bencc> cfhowlett: even the ppa doesn't have gstreamer 1.4 https://launchpad.net/~gstreamer-developers/+archive/ubuntu/ppa
[14:24] <cfhowlett> bencc, you can always try installing from source rather than repo mixing
[14:26] <bencc> cfhowlett: I've found instructions but it seems that it might interfere with the existing gstreamer installation and not sure if it includes the plugins http://askubuntu.com/questions/517910/installing-can-i-install-gstreamer-1-4-on-ubuntu-14-04
[14:28] <jamespage> zul, lets have a bug - https://bugs.launchpad.net/ubuntu/+source/glance/+bug/1378388
[14:28] <zul> jamespage:  glance store was split out wasnt it?
[14:34] <jamespage> zul, yeah it was
[15:34] <paco1> how can i read bind9 binary zone en 14.04lts? thanks!
[16:19] <jamespage> coreycb, zul: could one of you look at enabling the tests in websockify to support the MIR for nova?
[16:19] <zul> jamespage: sure
[16:19] <zul> jamespage:  i have some bandwidth
[16:41] <blackyboy> Hi everyone, how does RAID 5 work? If i have 6 drives and any one drive fails, will it rebuild from the 6th drive, or from all 5 other drives? Please explain a little.
[16:44] <genii> blackyboy: It rebuilds from the information left on the remaining good drives
[16:45] <blackyboy> genii: :) bunch of thanks, really happy for your reply thank you.
[16:45] <genii> np
[17:59] <sarnold> Logos01: oh, wow, yeah... :) so you're up and running?
[18:00] <Logos01> Yeah, yeah I am.
[18:01] <Logos01> It was a trial. My path to getting bootable is not replicable, but then again apparently neither is the exact issue.
[18:01] <Logos01> So I mean there's that.
[18:03] <RoyK> blackyboy: see https://en.wikipedia.org/wiki/Standard_RAID_levels - it uses parity blocks - with 6 drives, I think you should consider raid6, if you're concerned about safety
[18:05] <blackyboy> RoyK: yes got it now i have cleared my doubts thanks
[18:05] <mgw> I've upgraded some of my Trusty systems to the latest kernel (3.17) for the better btrfs support/bug fixes. What are potential problems this could lead to down the road?
[18:10] <sarnold> Logos01: ugh :) well at least it's all going now... and the next one will hopefully not be so horrible
[18:10] <sarnold> mgw: it's on you to keep up with security updates, but beyond that, not much.
[18:11] <mgw> sarnold: ok, thanks
[18:11] <mgw> so if I keep the latest 3.17 release in my private apt repo, and keep updated to that, I should be ok?
[18:12] <RoyK> blackyboy: what sort of drives do you have in that raid?
[18:12] <mgw> Any reason then why the LTS releases don't just keep up with the latest kernel?
[18:12] <blackyboy> RoyK: Just ask for knowledge base. I don't have a RAID here
[18:12] <sarnold> mgw: heh, they kind of do, but last night a user spent two hours in here complaining about that being confusing (he's not wrong)
[18:13] <mgw> i see
[18:13] <mgw> so they backport everything and call it 3.13?
[18:13] <mgw> .x
[18:13] <sarnold> mgw: the 12.04.{2,3,4,5} point releases all had newer kernels -- I expect the same from 14.04.{2,3,4...}...
[18:13] <sarnold> mgw: no, they had different packages entirely -- linux-lts-trusty, iirc
[18:14] <mgw> if I download the kernels from here (http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.17-utopic/) and put them in my local apt repo, will they probably get selected over the stock trusty packages?
[18:15] <sarnold> mgw: I think so
[18:15] <rberg> mgw: note that those packages are missing some things the official kernel has, I noticed aufs was missing from the mainline 3.16 build
[18:15] <RoyK> blackyboy: ok - just wondered - if you have 5+ drives of 2TB or larger, then given standard error rates, which really are stable per sector for modern drives, I'd suggest using raid-6 (or raidz2 with zfs on linux)
[18:16] <mgw> as long as it's not missing btrfs :-)
[18:16] <mgw> btw, nilfs was just brought to my attention
[18:16] <mgw> how does it compare to btrfs
[18:17] <blackyboy> RoyK: yes your suggestion is a good one, but compared to other RAID levels, RAID6 is poor in performance, right?
[18:17] <RoyK> blackyboy: it performs worse than raid5 for writes, yes
[18:18] <RoyK> blackyboy: what sort of data are you planning to put on this?
[18:19] <RoyK> blackyboy: if it's music/movies/archival things, raid-6 works well
[18:19] <blackyboy> RoyK: Not planned to build now. May be later, Just reading RAID article and i want to know about it some better than before.
[18:19] <blackyboy> Oh cool Music movies
[18:20] <RoyK> blackyboy: storage all depends on what sort of data you have
[18:20] <blackyboy> ok
[18:20] <RoyK> blackyboy: for VM storage in a production system, use RAID-1+0
[18:20] <RoyK> blackyboy: same as for transactional databases
[18:20] <RoyK> blackyboy: for a home server? just stick with raid-6 or something similar
[18:21] <RoyK> blackyboy: I'm a ZFS fan, so I use RAIDz2, ZFS' equivalent of RAID-6
[18:21] <blackyboy> Cool, Then can i use RAID 1+0 for my Proxmox servers ?
[18:21] <blackyboy> Cool!
[18:23] <RoyK> blackyboy: what's proxmox? I can't reach their site
[18:23] <blackyboy> https://www.proxmox.com/
[18:23] <blackyboy> Same as XEN, Hyper-v
[18:23] <sarnold> nice gui in front of containers and xen or kvm..
[18:24] <RoyK> blackyboy: yeah - google said so, but it's dead slow
[18:24] <RoyK> sarnold: ah - ok
[18:24] <blackyboy> Proxmox VE is a complete open source virtualization management solution for servers. It is based on KVM virtualization and container-based virtualization and manages virtual machines, storage, virtualized networks, and HA Clustering.
[18:24] <RoyK> ncie
[18:24] <RoyK> nice
[18:24] <RoyK> gotta look more into that one
[18:24] <blackyboy> Cool!
[18:24] <RoyK> anyway - take a look at http://wiki.illumos.org/download/attachments/1146951/zfs_last.pdf before you decide using mdraid
[18:25] <blackyboy> sarnold we use Hyper-V, Xen, Proxmox
[18:25] <RoyK> zfs is a wee bit cooler
[18:25] <RoyK> not as flexible, though
[18:25] <blackyboy> XEN is good if you need only GUI option :D
[18:25] <blackyboy> RoyK: Oh Thanks for link
[18:26] <RoyK> I never liked Xen - KVM for me (or vmware at work)
[18:26] <blackyboy> ok
[18:26] <blackyboy> I have installed FreeNAS in one of the local server in office it has ZFS, i dont have good idea in it so left as it.
[18:26] <RoyK> is proxmox open source?
[18:27] <blackyboy> RoyK: yes its Open source
[18:27] <RoyK> hm. doesn't look that way
[18:28] <blackyboy> RoyK: Read this to know all about Proxmox http://www.amazon.in/Mastering-Proxmox-Wasim-Ahmed/dp/1783980826
[18:28] <RoyK> proxmox.com is *really* slow
[18:29] <RoyK> seems it's AGPLv3, though, which is good :)
[18:30] <blackyboy> oh i can access it right now with good speed
[18:30] <blackyboy> http://www.amazon.com/Mastering-Proxmox-Wasim-Ahmed/dp/1783980826
[18:30] <RoyK> that's amazon.com, not proxmox.com :P
[18:30] <blackyboy> Sorry before i have provided indian site link
[18:31] <blackyboy> RoyK: yea, we used this guide to learn all about building a cluster using proxmox
[18:31] <RoyK> blackyboy: anyway - about this raid question - was that for home use or large scale/production use?
[18:32] <blackyboy> RoyK: no its for office use, not a large scale use just a small one
[18:32] <RoyK> blackyboy: then look more into zfs
[18:33] <blackyboy> oh sure
[18:33] <RoyK> blackyboy: if you want to use zfs for VM storage, I'd suggest using striped mirrors (aka RAID-1+0) and a SLOG (separate ZFS intent log) on an SSD (or perhaps two, in a mirror)
[18:33] <blackyboy> Okay
[18:34] <RoyK> sync writes to ZFS aren't very fast, since they have to flush the ZIL at every write
[18:34] <RoyK> that's especially bad with things like RAIDz (or RAIDz2 etc), but an SLOG helps out a lot for that
[18:35] <blackyboy> hmm
[18:35] <RoyK> blackyboy: the reason ZFS isn't very quick at sync writes, is that it really takes care of the data
[18:35] <sarnold> adding more vdevs can help though
[18:36] <RoyK> sarnold: sure, thus striped mirrors
[18:36] <blackyboy> RoyK: thanks for the above links seems it got enough information to understand about ZFS let me go through it first .
[18:36] <RoyK> blackyboy: it's a good introduction
[18:36] <blackyboy> adding vdevs ?
[18:36] <blackyboy> 8-)
[18:36] <RoyK> blackyboy: in ZFS land a VDEV is something like a drive, a mirror or a RAIDz set
[18:36] <sarnold> blackyboy: that's a good looking presentation, but this might give a better flavor of how zfs is used: https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
[18:37] <blackyboy> oh cool
[18:37] <RoyK> erm
[18:37] <RoyK> "RAID-0 is faster than RAID-1, which is faster than RAIDZ-1, which is faster than RAIDZ-2, which is faster than RAIDZ-3."
[18:38] <RoyK> RAID-1 is faster on iops than RAID-0 :P
[18:38] <blackyboy> yes
[18:39] <RoyK> blackyboy: that's the thing to consider with storage - do you want the storage fast for sequential I/O or random I/O? For VM storage, it's basically all random I/O
[18:39] <blackyboy> RAID 0 stripes so its faster than mirroring
[18:39] <RoyK> blackyboy: zpool create mypool mirror dev1 dev2 mirror dev3 dev4 mirror dev5 dev6
[18:40] <blackyboy> ok
[18:40] <RoyK> that'll create three mirrors over which data is striped, and a very good place to store VMs
[18:40] <RoyK> (preferably more than six drives, though, and better add a SLOG)
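RoyK's full suggestion (striped mirrors plus an SSD SLOG and a read cache) sketched out as zpool commands; device names are illustrative and the function wrapper keeps the block from running against a real pool:

```shell
# Striped mirrors (RAID-1+0 equivalent) for VM storage, with a mirrored
# SLOG for sync writes and an L2ARC cache device for reads.
create_vm_pool() {
  zpool create mypool \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/se /dev/sdf
  zpool add mypool log mirror /dev/ssd1 /dev/ssd2   # SLOG (ZFS intent log)
  zpool add mypool cache /dev/ssd3                  # L2ARC read cache
}
```

As noted below, log and cache devices can be removed again later, so they are cheap to experiment with.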
[18:40] <blackyboy> oh while creating VMs i don't used those options let me check.
[18:40] <RoyK> log devices and cache devices (think of it as write and read cache) can be removed
[18:41] <blackyboy> Choosing I/O options will give performance to VMs ?
[18:41] <RoyK> blackyboy: these options are used for storage, not what you put on them
[18:41] <RoyK> blackyboy: obviously
[18:41] <blackyboy> Wow Cool!
[18:41] <cyber_dweller> trying to setup a server in my office, the server should do http, mail, and some more services from the wan side. also, it'll serve the lan with some more common services like nfs, cifs, internal mail, backup server, dhcp and remote openvpn access from the wan side to the lan. trying to figure out if tap is the way to go to create two different vlans, one for public access and the second for openvpn and local. what do you think? using a single nic
[18:42] <RoyK> cyber_dweller: so long as you're not going to use a DMZ or use the server as a router or otherwise separate the traffic from the internet from that on the LAN, yes
[18:43] <RoyK> cyber_dweller: one NIC will do, and will be easier to setup
[18:43] <RoyK> cyber_dweller: I guess the basic answer is "use two machines" (or one machine with two VMs) on separate networks
[18:44] <blackyboy> RoyK: hear about ansible ?
[18:44] <blackyboy> heard about ansible
[18:45] <blackyboy> http://www.ansible.com/home
[18:48] <blackyboy> Ok good night everyone, time to bed. Thanks Royk
[18:48] <RoyK> blackyboy: why not puppet?
[18:49] <blackyboy> RoyK: Puppet no idea about it.
[18:49] <qman__> I also recommend breaking those roles up into separate servers or VMs, a lot more maintainable that way
[18:49] <RoyK> cyber_dweller: what sort of machine do you have? how many users? how many networks?
[18:50] <RoyK> qman__++
[18:50] <qman__> That way when you have to upgrade your web server, you can leave your mail server alone for the time being
[18:51] <qman__> This problem happened to me, because my postfix+dovecot config isn't compatible with current versions
[18:51] <RoyK> qman__: it's quite common - that's why we have a few hundred VMs at work :P
[18:52] <qman__> I will fix it eventually, but separating it allows for piecemeal upgrades
[18:52] <cyber_dweller> RoyK, i'm running i5 machine as the server, no services exposed to wan side yet. i5 serves other machines through hba and ethernet, running a i7 as a xen machine.
[18:53] <RoyK> cyber_dweller: get a xeon machine (or something from AMD) with ECC memory and setup a new machine (or two) to host VMs. virtualize those machines and make use of them elsewhere
[18:53] <RoyK> that's what I'd do, anyway
[18:59] <cyber_dweller> RoyK, hardware is not the issue, of course ecc would be much appreciated but uptimes are long enough so this machine can be used as a server. i want the i5 server to keep serving the lan with remote access capability from wan to lan using openvpn. and i want to open some wan traffic to the server to run http, mail and stuff. what would be the best approach?
[19:01] <cyber_dweller> RoyK, not risking running dmz :)
[19:01] <RoyK> cyber_dweller: just saying something with ECC (i[357] doesn't support that) because errors do happen and you don't want those errors in production servers - and - having a dedicated xen/vmware/kvm/something box doing the hard parts, separating networks etc, is nice
[19:03] <Logos01> sarnold: It had better not be. The last straggler is my router.
[19:03] <Logos01> <_<
[19:04] <sarnold> Logos01: hahaha
[19:04]  * Logos01 isn't kidding
[19:04] <Logos01> I have this SFF miniITX box whose original purpose in life was to be a lab for rpm buildouts and cluster software deployment a couple of jobs ago.
[19:04] <sarnold> Logos01: how many routers in the world run zfs do you figure? :)
[19:05] <sarnold> nice
[19:05] <Logos01> Probably just the one.
[19:05] <Logos01> Yeah, dual-core hyperthreaded Core i3 CPU, 16GB DDR3 RAM (2x8 1600MHz), 128GB SSD.
[19:06] <sarnold> nice router :)
[19:06] <Logos01> Originally running a KVM hypervisor with about 16 essentially idle CentOS VMs.
[19:06] <Logos01> (Except the Spacewalk and OpenLDAP servers.)
[19:06] <Logos01> Anyhoo, I paid for this thing out of pocket because the company wasn't budging on giving me the resources I needed to, well, get my job done.
[19:06] <Logos01> So when I left there, I took it with me.
[19:07] <sarnold> *nod*
[19:07] <Logos01> And I just couldn't let it sit idle, so I threw a 4-port gigabit NIC PCI card in that badboy and called it a router.
[19:07] <sarnold> that's the way that goes :)
[19:07] <Logos01> I also use it as a media pc, because why not.
[19:08] <cyber_dweller> RoyK, i'm saying the ecc is not the issue here; the question is what would be the best approach for setting up two different vlans using a single nic?
[19:09] <RoyK> cyber_dweller: I just mentioned ecc because it's good. using two different VLANs on a single nic isn't a problem, if the switch/firewall in front supports that - just ifconfig eth0.myvlanid (something)
[19:10] <RoyK> cyber_dweller: the separation between VLANs in linux are good
[19:10] <RoyK> s/are/is/ :P
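To make RoyK's eth0.myvlanid suggestion concrete, an /etc/network/interfaces sketch for the era's Ubuntu releases (requires the vlan package; the VLAN ids and addresses are made up):

```
# /etc/network/interfaces (illustrative): two tagged VLANs on one NIC.
auto eth0.10
iface eth0.10 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    vlan-raw-device eth0

auto eth0.20
iface eth0.20 inet static
    address 192.168.20.1
    netmask 255.255.255.0
    vlan-raw-device eth0
```

The switch port in front has to carry both VLANs tagged for this to work.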
[22:59] <ruben23> guys, can anyone help me check and isolate the source of this issue --> http://i58.tinypic.com/w70ug2.jpg
[23:02] <sarnold> ruben23: what issue?
[23:11] <Patrickdk> sarnold, obviously, all the colors
[23:11] <sarnold> Patrickdk: heheh
[23:12] <sarnold> related https://i.imgur.com/pQT0l.gif
[23:12] <RoyK> ruben23: pastebin the text
[23:14] <ruben23> the cpu cores are spiking straight to 100 percent, causing lag
[23:14] <ruben23> and i see its coming from mysql application
[23:14] <Patrickdk> well, don't turn on your computer, and that wouldn't happen
[23:14] <RoyK> ruben23: using hyperthreading?
[23:16] <sarnold> ruben23: you could try turning on query execution plan or something if mysql has that information available, so you could optimize your SQL
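sarnold's suggestion, sketched as commands; MySQL does have execution plans via EXPLAIN. The query and table names below are invented for illustration, and the function wrapper means the block can be sourced without a running server:

```shell
# Find what MySQL is busy with, then ask for the plan of a suspect query.
diagnose_mysql_load() {
  mysql -e "SHOW FULL PROCESSLIST;"                              # what's running now
  mysql -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;" # hypothetical hot query
}
```

Queries that EXPLAIN shows doing full table scans are usually the first candidates for an index.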
[23:16] <RoyK> ruben23: just turn off hyperthreading
[23:16] <sarnold> RoyK: really? o_O
[23:16] <RoyK> ruben23: it's normally not good except for small data loads
[23:17] <RoyK> sarnold: ?!?
[23:17] <Patrickdk> well, hyperthreading only works on *different* loads at once
[23:17] <Patrickdk> mysql + mysql, not so much
[23:17] <sarnold> RoyK: the last time I did testing I got consistently better compile speeds with hyperthreading :) heh
[23:17] <Patrickdk> mysql + vlc, more likely :)
[23:17] <Patrickdk> sarnold, compile is lots of different stuff though
[23:18] <Patrickdk> you're using lots of different cpu subsets
[23:18] <Patrickdk> most people only do integer stuff with sql, not so much floating point and other things
[23:18] <RoyK> sarnold: hyperthreading makes each core have half the cache size usable
[23:19] <RoyK> sarnold: I've seen different results, but mostly, if you stay close to what you're doing and at least with a high memory load, hyperthreading doesn't work that well
[23:20] <RoyK> with little memory and thus cache load, ht works well
[23:21] <Patrickdk> or if you're doing vm's :) busy vm's on real cores, and idle vm's on ht cores :)
[23:21] <Patrickdk> but never assign more cores than real cores to a single vm
[23:21] <sarnold> heh, a pal recently assigned a VM 1024 cores ..
[23:22] <sarnold> .. but he only had 64 real cores to work with. he said it was slow to boot.
[23:22] <Patrickdk> my friend configured the db servers for 24 cores, servers have 12 + 12ht
[23:22] <Patrickdk> it's a bear
[23:22] <Patrickdk> now, that vm has 20 cores, and we have 20 real, and 20 ht
[23:22] <Patrickdk> it doesn't take down the box when it's used anymore :)
[23:28] <sarnold> Patrickdk: hehehe, did he do it just to see if it could be done? or mistake? or ... what lead him to that madness? :)
[23:30] <Patrickdk> he just wasn't thinking
[23:30] <Patrickdk> 24cores? well, this vm is rarely used, but when it is, we need it to go fast!
[23:31] <Patrickdk> did not think about the sideeffects
[23:38] <sarnold> :)