[02:16] <MACscr> whats the proper way to limit the number of kernels installed on my ubuntu servers? I only want to keep the newest one and whatever is installed
[02:19] <MACscr> bikeshed seems cool, but for a small server, it installs way too many packages
[02:21] <pmatulis> MACscr: cron-triggered shell script?
[02:21] <sarnold> I thought apt had some clever setting about that
[02:21] <MACscr> i know yum does, but doesnt seem like apt does
[02:38] <pmatulis> MACscr: http://askubuntu.com/questions/563483/why-doesnt-apt-get-autoremove-remove-my-old-kernels perhaps?
[02:41] <MACscr> pmatulis: oh there are tons of ways to do it, but im quite surprised there isnt an official way that doesnt require scripting
[02:41] <sarnold> pmatulis: YES!
[02:41] <sarnold> pmatulis: now why isn't that in the first three pages of google results? heh
[02:50] <RoyK> MACscr: why the newest kernel?
[02:53] <MACscr> RoyK: not sure. why does it matter?
[02:53] <RoyK> MACscr: the older ones work well too ;)
[02:54] <MACscr> and?
[02:54] <MACscr> i have limited space for the OS storage
[02:54] <MACscr> so want to keep things as minimal as possible
[02:55] <RoyK> a new kernel won't help that
[02:56] <MACscr> still not sure what that has to do with my question
[04:31] <k2gremlin> Hello all, quick and easy question for you guys. I am trying to setup a br0 interface. I have installed bridge-utils and configured in my /etc/network/interfaces file. When I try to do ifup br0 it tells me cannot find device br0. Thoughts?
[04:43] <ianorlin> k2gremlin: did you reload the config file?
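[Editor's note: for reference, a minimal br0 stanza in /etc/network/interfaces looks like the sketch below. Using eth0 as the bridged port is an assumption; if ifup still can't find the device after a config reload, running `brctl addbr br0` by hand is a quick way to check that bridge-utils itself works.]

```
# /etc/network/interfaces -- minimal bridge, assuming eth0 is the port
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```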
[04:58] <lordievader> MACscr: Apt should only keep the current and current -1 kernels when running apt-get autoremove.
[05:23] <qman__> MACscr, lordievader: it should, but often doesn't, I'm not sure exactly why. I solved the problem for me by writing a script which removes all but the currently running and most recent kernels, and put it in cron.weekly. https://deadface.org/index.php?p=kernelkeeper
[05:24] <qman__> it's also variant aware, so if you have more than one kernel variant (such as -generic and -rt) it manages both variants separately
[05:25] <MACscr> it does seem to work sometimes and other times it doesnt. its weird
[05:26] <lordievader> qman__: Nice ;)
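[Editor's note: the keep-running-plus-newest approach qman__ describes can be sketched as below. This is not his actual kernelkeeper script, just an illustration; it only prints candidates, and the real dpkg/uname plumbing is shown in a comment so you can review the list before purging anything.]

```shell
# prune_list CURRENT PKG... -> prints the kernel packages to remove,
# keeping any package matching the running version CURRENT plus the
# version-newest package.
prune_list() {
    current="$1"; shift
    newest=$(printf '%s\n' "$@" | sort -V | tail -n 1)
    for pkg in "$@"; do
        case "$pkg" in
            *"$current"*|"$newest") ;;   # keep running + newest
            *) echo "$pkg" ;;            # candidate for apt-get purge
        esac
    done
}

# Real use would be something like:
#   prune_list "$(uname -r)" $(dpkg -l 'linux-image-[0-9]*' | awk '/^ii/{print $2}')
prune_list 3.19.0-25 \
    linux-image-3.19.0-25-generic \
    linux-image-3.19.0-28-generic \
    linux-image-3.19.0-30-generic
# -> linux-image-3.19.0-28-generic
```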
[09:36] <adun153> Anybody here experienced with LVM in DRBD?
[09:45] <adun153> In DRBD, If I re-create an internal MD, will it delete the data?
[11:11] <atralheaven_> how can I create a user that can do nothing, and access to nothing, only for using ssh socks proxy?
[11:12] <atralheaven_> I mean ssh -D port user@ip
[12:43] <jonah> Hi I just wanted a bit of advice. Up to now I've always rented servers from a datacentre, but I now want to have a physical office server. I've started building it and have ordered most of the parts BUT before I start installing ubuntu server I just wanted to check about the best options for disk/raid setups. I've read it is good to keep the OS on a separate disk and then have a raid for the data?
[12:43] <jonah> But the server I'm building is mainly going to be for websites, cloud login storage and basically just a LAMP
[12:44] <jonah> so I ordered an SSD to put ubuntu on and then 4 x 5.25" drives for the raid
[12:44] <rbasak> Why do you want RAID?
[12:44] <jonah> will this work, or is it just as well to stick everything on one raid?
[12:44] <rbasak> For reliability or performance or something else?
[12:45] <jonah> rbasak: well I already have a big enough backup drive to backup the raid, so I just wanted the speed and the potential to hotswap and expand/repair if a drive goes down
[12:45] <rbasak> So all of the above :)
[12:46] <jonah> rbasak: haha yeah
[12:47] <jonah> rbasak: but before I put the disks in the server case, I'm wondering if I should just install ubuntu on the raid and then install lamp as normal and leave it that way. Is there an advantage of having the extra SSD drive there? I've read a lot of conflicting things and also don't know how I'd actually get it to work right and set it up
[12:47] <rbasak> My home server runs on two disks with RAID-1 and LVM on top of that.
[12:47] <rbasak> I have no need for an SSD on my server. Cache suffices for me.
[12:48] <rbasak> You could look into bcache but we don't have installer support for that yet. Depends on how much you're prepared to do manually, skill level, etc.
[12:49] <jonah> so do you think I'm best just leaving that SSD out of there or is there a way I can use it to improve performance. I suppose I thought the OS would be faster and the boot/reboot fast etc.
[12:49] <jonah> I don't have that much skills with linux, I use it as a desktop daily and also do a few ssh into data centre and copy the odd thing etc but not too sure about raids, fdisks, partitions, caches and the like
[12:50] <rbasak> If you have enough RAM then you won't get much of an OS speedup with an SSD, except when doing things that you haven't done in a while (presumably non-workload things).
[12:51] <rbasak> Boot will be faster but does that really matter on a server?
[12:51] <rbasak> I'd stick to what the installer lets you set up. Keep it simple. The closer your configuration is to others, the less likely you are to be on your own for any problems.
[12:52] <shauno> I think one thing to remember putting the OS on a separate disk, is that you've chosen raid for reliability, and then introduced another disk as a single-point-of-failure anyway.  murphy says the non-raid disk will be the one that goes.
[12:55] <jonah> ok great thanks, so i just put the normal drives in (leave out the ssd) and just use what ubuntu installer offers me and it will set up the software raid and install all the defaults?
[12:55] <jonah> then I just back up the whole raid to my backup drive?
[12:57] <rbasak> I'd say so, yes. Though I haven't used the installer in a while so I can't really help with that part.
[12:58] <rbasak> I would definitely have RAID-1 at a minimum for a server nowadays though. Disks are guaranteed to fail eventually and it's a very easy way to get continuity.
[13:00] <jonah> baffle: ok thanks I'll opt for just the raid in that case, I was going to use raid 5 as most lamp servers I think use that, will that be ok?
[13:01] <jonah> rbasak: sorry sent the last reply to baffle by mistake! oops
[13:01] <rbasak> RAID-5 does what it says on the tin. It's just a cost/risk thing.
[13:02] <rbasak> But understand that your disks will fail eventually. If they're both from the same batch and have had a similar workload (eg. by being part of a RAID) then they are likely to fail at around the same time.
[13:03] <rbasak> I've seen disks fail during RAID-1 and RAID-5 reconstruction.
[13:03] <rbasak> I do not put disks from the same batch into a RAID.
[13:03] <rbasak> (well 1 or 5)
[13:04] <rbasak> IMHO buying five of one SKU all at once and putting them into a RAID-5 is pointless. Might as well just have a RAID-0 for all the good it does.
[13:04] <Walex2> rbasak: that's a bit excessive...
[13:05] <rbasak> Walex2: which bit?
[13:05] <Walex2> rbasak: anyhow I have seen commercial storage systems with hundreds of identical drives with virtually consecutive serial numbers...
[13:06] <Walex2> rbasak: the "five of one SKU ... just have a RAID-0 for all the good it does" bit
[13:06] <rbasak> Maybe those storage systems are doing more to handle concurrent failures?
[13:07] <rbasak> In a previous job we won business due to concurrent RAID disk failures by previous suppliers not doing this.
[13:07] <rbasak> It was a reasonably regular thing, in that I've seen it multiple times.
[13:07] <rbasak> With both commodity SATA and expensive "server grade" SCSI drives.
[13:08] <rbasak> Or perhaps they didn't wait for drives to fail before replacing them? I don't know.
[13:20] <Walex2> rbasak: I agree that is a bad idea, but concurrent failures can wait years to happen even among hundreds of identical drives.
[13:22] <rbasak> Walex2: depends on how the drives are used, and whether they're from the same batch!
[13:23] <rbasak> Walex2: having the same usage pattern and the same environmental conditions from the same batch will make it more likely that they will fail close together in time, clearly.
[13:23] <rbasak> As I say, I have seen it happen multiple times.
[13:24] <rbasak> So for a small business buying one server, it makes sense to avoid that risk because there's virtually no cost to doing so.
[13:24] <rbasak> If OTOH you are backblaze or someone similar, then clearly you can't achieve that. But your usage patterns are probably different enough that the risk is lower anyway.
[13:25] <rbasak> And in any case, you probably aren't using a minimum level of redundancy like RAID-5 that is more at risk.
[13:32] <Walex2> rbasak: your level of optimism is astounding... :-)
[13:33] <Walex2> rbasak: imagine rows and rows of racks with identical drives with nearly consecutive serial numbers arranged in 16-wide RAID5s "because it optimizes the space".
[13:34] <jpds> Walex2: Dude, build a Ceph cluster at that point
[13:36] <RoyK> Walex2: I remember an email on some zfs mailing list some 4-5 years back. someone had built a raidz1 (similar safety as with raid5) with 30 drives and some drives were failing...
[13:39] <Walex2> RoyK: I collect emails like that. The 32-wide RAID5 was particularly amusing, but a 30-wide RAIDZ1 is good too :-).
[13:40] <Walex2> http://www.sabi.co.uk/blog/14-two.html#141019 for the 32-wide RAID5
[13:50] <jonah> sorry to pipe back in but say I use the Raid 5 and have 4 hard drives. My system is running nice but then one fails, how do i hotswap in a new drive and rebuild the array? Won't ubuntu just see a new drive, not a replacement for the failed one if I just whip it out and stick a new one in?
[13:57] <RoyK> Walex2: this guy had even added three spares
[13:57] <Walex2> jonah: depends on what the RAID system is. Most require you to explicitly label a drive as a spare before it will be added into a RAID.
[13:58] <RoyK> jonah: mdadm --add /dev/md0 /dev/newdisk
[13:59] <jonah> Walex2: well i mean just a standard ubuntu server install running on software raid5
[13:59] <RoyK> jonah: that adds a disk to the raid and unless you grow it, that disk is flagged a spare and will work like one
[14:00] <jonah> RoyK: I see Roy, so you'd power off and take out the dead one, then power back up with the new one added in its bay then run that command?
[14:00] <RoyK> jonah: then just mdadm --remove the failed drive, unplug it, install a new one and mdadm --add it
[14:00] <RoyK> jonah: if you don't have hotpluggable disks, yes, but both SATA and SAS should handle hotplug
[14:01] <RoyK> it's part of the specification
[14:01] <jonah> RoyK: ah so with mine being sata 3 i can just hotswap with it all still turned on
[14:02] <RoyK> should work
[14:02] <jonah> RoyK: so I'd just find the dead one, pull it out and put the new one in and run mdadm
[14:02] <RoyK> you should probably mdadm --remove the dead one first, then mdadm --add the new one
[14:02] <RoyK> jonah: are you using partitions? if so, you'll need to create those first, obviously
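[Editor's note: RoyK's remove/add sequence, collected into one place. The sketch below is a dry-run helper that only *prints* the mdadm commands for review; the device names in the example call are made up, and you'd run the printed commands yourself once you're sure the right disk is going out.]

```shell
# Print the mdadm commands for swapping a failed member out of an array.
# Nothing is executed -- review the output, then run it by hand.
replace_cmds() {
    md="$1"; failed="$2"; new="$3"
    echo "mdadm $md --fail $failed"      # mark as failed (if not already)
    echo "mdadm $md --remove $failed"    # drop it from the array
    echo "mdadm $md --add $new"          # add replacement; resync starts
}

# Example device names -- substitute your own:
replace_cmds /dev/md0 /dev/sdc1 /dev/sde1
```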
[14:03] <jonah> RoyK: haha I know it sounds silly but how do you know which is the dead one
[14:03] <RoyK> hehe
[14:03] <rbasak> /proc/mdstat will tell you what the system considers to be alive or dead
[14:03] <rbasak> (or hot spare)
[14:03] <RoyK> jonah: try smartctl -i /dev/nameofdisk
[14:03] <rbasak> etc
[14:04] <RoyK> jonah: that should give you the make and serial number
[14:04] <RoyK> jonah: otherwise, it should be in /dev/disk/by-id
[14:04] <rbasak> RoyK: assuming that the disk isn't timing out on commands :)
[14:04] <rbasak> (I agree, but you might want to know your mapping in advance if that might be a problem)
[14:04] <RoyK> rbasak: yeah, but the data in /dev/disk/by-uuid should stick
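[Editor's note: rbasak's /proc/mdstat suggestion can be scripted -- mdadm marks failed members with "(F)" there. A sketch; the mdstat line in the example is fabricated for illustration, and in real use you'd feed the function `/proc/mdstat` itself.]

```shell
# Print md members flagged as failed -- mdadm marks them "(F)" in
# /proc/mdstat. Reads mdstat-format text on stdin.
failed_members() {
    grep -oE '[a-z]+[0-9]*\[[0-9]+\]\(F\)' | sed -E 's/\[[0-9]+\]\(F\)//'
}

# Example with a captured line (real use: failed_members < /proc/mdstat):
echo 'md0 : active raid5 sdd1[3] sdc1[2](F) sdb1[1] sda1[0]' | failed_members
# -> sdc1
```

Once you have the member name, `smartctl -i /dev/sdc` or a look in /dev/disk/by-id gives the serial number to match against the physical drive, as RoyK says.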
[14:05] <jonah> RoyK: well my plan was to just partition 4 drives all with 10% swap partition and the rest free. then set up the raid on the ubuntu server installer. So if I add a label name to them all when I partition them I'll know which has died if one fails?
[14:05] <RoyK> jonah: I'd recommend using a pair of smallish (2.5" perhaps) drives for the root and the rest for data with LVM on top
[14:05] <jonah> RoyK: ah I see so I just have the serials numbers of each drive on the front of them so I can see when I open the bays up to swap them out
[14:06] <RoyK> better poweroff first, so you don't unplug something in use
[14:08] <jonah> RoyK: well the plan is to just have a big lamp server and then run owncloud and some other cms stuff on there. Do I need the LVM and different drives/partitions or can I just have the bog standard raid5 and just install?
[14:08] <RoyK> jonah: I always use LVM - it doesn't hurt and it's more flexible
[14:09] <RoyK> jonah: but really - if you have a couple of old drives, use them as a mirror for the root, don't mix root and data
[14:10] <RoyK> jonah: some even use USB sticks for the root - it's not much in use anyway
[14:10] <jonah> RoyK: ah this is interesting as when I first came into this chat my initial question was whether I should have an SSD for the OS (or two I suppose if mirrored) and then a seperate raid array for the data.
[14:10] <lordievader> LVM ftw :D
[14:11] <jonah> RoyK: but I wasn't sure how complex or necessary this was to set up. Especially if I'm backing everything up anyway
[14:11] <RoyK> jonah: I helped a friend of mine to setup her home server, and we chose a smallish ssd and an old laptop HDD for the root, the HDD set to write-mostly
[14:12] <RoyK> jonah: that gives you the read speed of an SSD and HDD write speed
[14:12] <RoyK> which is quite handy :)
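[Editor's note: the SSD-reads/HDD-writes mirror RoyK describes uses mdadm's --write-mostly flag. A sketch only -- the device names are examples and creating an array DESTROYS existing data on the members, so this is shown for illustration, not for pasting.]

```shell
# Mirror an SSD with an HDD; the HDD is marked write-mostly so reads go
# to the SSD and the HDD only serves reads if the SSD fails.
# /dev/sda (SSD) and /dev/sdb (HDD) are EXAMPLE names -- destructive!
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda --write-mostly /dev/sdb
```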
[14:16] <jonah> RoyK: sounds good, but I'm thinking more of the overall lamp with webmin virtualmin and all the rest of it installed. There is a bit of a mixture of data and config/os - not sure how I could separate it or benefit. For example if the OS drive failed and I had to reinstall, would the data stuff work correctly with it still. Sounds like a minefield.
[14:16] <RoyK> don't use webmin
[14:16] <RoyK> !webmin
[14:17] <RoyK> jonah: with a mirror of an ssd and a hdd, you can handle a disk failure
[14:17] <RoyK> jonah: and no, it's not a minefield, mixing root and data is, though
[14:26] <jonah> RoyK: Blimey RoyK - I'm used to cpanel currently but thought webmin was the best open source had to offer if I don't want to pay?
[14:27] <RoyK> jonah: use the commandline :P
[14:27] <disposable> i'm trying to install 14.04 on hp microserver (n54l). i have a problem with the installer though. as soon as the purple ncurses based interface starts, my usb keyboard stops working. does anybody know a workaround? (it works fine with centos installer and omnios (solaris)) and yes, i have tried a different usb keyboard. same story. i'm installing from a usb2 drive onto a usb3(in usb2 port) drive,
[14:27] <disposable>  in case that matters. google shows many people with the problem but no solution.
[14:29] <RoyK> jonah: it's not that hard, and once you've learned it, you'll never look back
[14:32] <jonah> RoyK: yes fair enough. The thing that scares me I suppose is the security, I can probably follow some guides and get something working but if it's served to the outside world hackers could well have a field day. I've just had a cpanel hacked recently and that had all the modsecurity addons and a firewall running and cloudflare etc - so doing it commandline is really honourable and I'd love that but the last thing I need is a hackfest!
[14:34] <RoyK> jonah: just choose good passwords, like http://xkcd.com/936/, setup ufw to block anything you don't want to be open (it's simple, really) and use SSL/TLS whenever possible
[14:34] <RoyK> don't allow ssh login with passwords, only keys, or at least only keys with root (which I think is default now)
[14:34] <rbasak> Also, install and use unattended-upgrades, and pay close attention to anything you don't install from the archive (or even from the archive and in universe).
[14:35] <rbasak> Which also means: be biased against any technology not in the archive. Ask why it isn't packaged as part of Ubuntu.
[14:35] <RoyK> jonah: it will *not* be any more secure if you trust some flashy GUI to do the job
[14:35] <RoyK> jonah: and at the end, you'll even end up with more linux knowledge, which won't hurt :)
[14:36] <jonah> RoyK: you've talked me into it!
[14:42] <RoyK> jonah: :)
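[Editor's note: the checklist RoyK and rbasak give above (ufw, key-only ssh, unattended-upgrades) boils down to a few commands. A sketch, assuming ssh is the only service you want reachable at first -- these touch system config, so they're shown for orientation rather than as a paste-ready recipe.]

```shell
# Minimal hardening pass -- adjust the allowed ports to your services.
sudo apt-get install ufw unattended-upgrades
sudo ufw allow OpenSSH           # for the LAMP part, also: ufw allow 80/tcp 443/tcp
sudo ufw enable                  # default-deny everything else
sudo dpkg-reconfigure -plow unattended-upgrades   # turn on automatic security updates

# Key-only logins: in /etc/ssh/sshd_config set
#   PasswordAuthentication no
# then restart ssh:
sudo service ssh restart
```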
[18:23] <fx159> hello, is there anything I can do to debug reboot problems? my server gets stuck after displaying "all processes ended after 2 seconds", I'm using 14.04.3
[18:24] <sarnold> fx159: try fiddling with shutdown -H, shutdown -P, shutdown -r, I've heard some systems handle some of those poorly
[18:25] <RoyK> fx159: how do you reboot the server? have you checked the logs?
[18:25] <fx159> RoyK: I enter reboot into the console
[18:26] <RoyK> fx159: should work
[18:26] <RoyK> fx159: if you have another linux machine around, try setting up rsyslog to log to that machine as well to see if you get anything useful out of the logs
[18:26] <fx159> RoyK: also nothing obvious in the syslog
[18:27] <fx159> RoyK: I have a serial console to the machine... it just sits there after "all processes ended after 2 seconds", no further output, also no errors before
[18:27] <RoyK> no idea, sorry
[18:28] <fx159> too bad, well I can live with it... machine is online 24/7 anyways
[18:28] <TJ-> fx159: it's usually a firmware ACPI bug; there are some workarounds, such as matching the expected OSI string with a kernel command-line entry of the form "acpi_osi=Windows XXXX" where XXXX is some Windows version string present in the ACPI DSDT (the most recent Windows version usually). "sudo strings /sys/firmware/acpi/tables/DSDT | grep -i windows" might help you find those strings
[18:29] <sarnold> TJ-: good idea
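[Editor's note: to make TJ-'s acpi_osi workaround persistent, it goes on the kernel command line via /etc/default/grub. "Windows 2009" below is a placeholder -- use a string actually present in your DSDT, found with the strings command TJ- gives above.]

```
# /etc/default/grub -- pin the ACPI OSI string (placeholder value!)
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=\"Windows 2009\""
# then apply it:  sudo update-grub && sudo reboot
```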
[18:30] <fx159> TJ-: I know it worked with earlier versions of 14.04, before rebooting it displayed a message saying something like "rebooting system now", that message no longer appears, still a firmware bug?
[18:30] <TJ-> fx159: maybe a regression in the kernel
[18:30] <sarnold> fx159: there's a 'fwts' package that's supposed to help test firmwares; I haven't used it myself so I can't really say if it is appropriate for end users or just hardware distributors, but it may be useful too
[18:31] <sarnold> fx159: ooh, interesting. if you're up for testing the 14.04.2 and 14.04.1 kernels, that might be worthwhile. granted, it'd take some time, but it'd make a bug report more interesting :)
[18:32] <fx159> sarnold: oh well, already tested a lot of the 3.19 kernels for another bug report, not again hehe
[18:32] <sarnold> fx159: hehehe
[18:33] <sarnold> there are more fun ways to spend your time, that's for certain.
[18:33] <fx159> https://bugs.launchpad.net/bugs/1504909 any ideas regarding this bug? hehe
[18:35] <sarnold> fx159: argh. that looks -really- annoying
[18:36] <sarnold> fx159: does a scrub repair it?
[18:36] <fx159> sarnold: scrub fixes the errors, yes
[18:37] <fx159> sarnold: but I'd prefer spin down to work without data corruption ;)
[18:37] <sarnold> fx159: yes :)
[18:37] <sarnold> especially since scrubs aren't exactly fast
[18:39] <fx159> sarnold: 600M/s is kinda fast, but scanning the pool still takes about 3 hours :/
[18:40] <fx159> sarnold: I'm considering going back to the 14.04.2 kernel
[18:41] <sarnold> fx159: There's a few approaches that might be worthwhile, but none of them are fun. maybe try 14.04.1's kernel, see how well that works; try to reproduce on a single drive without zfs; try replacing the controller with something else (funny, I'd heard really good things about the m1015, but perhaps not many people spin them down..)
[18:41] <fx159> sarnold: spindown is something that also worked with earlier versions :(
[18:41] <sarnold> fx159: have you asked around #zfsonlinux or filed github issues there? those guys are helpful and might know something that I don't..
[18:42] <fx159> sarnold: yes I did, initially there was also a zfsonlinux bug... but that got fixed...
[18:42] <sarnold> fx159: heh
[18:42] <fx159> sarnold: https://github.com/zfsonlinux/zfs/issues/3785
[18:43] <sarnold> fx159: doing the full bisection is probably the best bet, though that might be dozens of compiles and reboots..
[18:44] <fx159> sarnold: too bad, I don't have that much spare time at the moment...
[18:53] <fx159> sarnold: I believe it has something to do with zfs... error occurs with 3.16 as well now, wtf?
[18:54] <fx159> sarnold: I'm 100% sure it never appeared with 3.16 before ._.
[18:54] <sarnold> fx159: interesting. did you upgrade pool or dataset features? if not you could try an older zfs...
[18:55] <sarnold> bisecting zfs/spl may be easier than the kernel :)
[18:55] <fx159> sarnold: upgraded the pool :-(
[18:56] <sarnold> :(
[18:57] <fx159> sarnold: I always wanted to try out FreeNAS... hm... whatever
[18:57] <fx159> sarnold: no more spin down or reboots for me, for now hehe
[18:57] <sarnold> fx159: did you notice any decent power savings or noise savings when spinning down the disks?
[18:59] <fx159> sarnold: noise is not a concern, power draw with spinning disks is about 90 - 100W, with spun down disks something like 45W, so yes, there is potential
[19:08] <sarnold> fx159: wow. thanks. I'm sooner or later going to be putting together my own smallish zfs system and was curious about power draw, heat, and noise from all those drives..
[19:10] <fx159> sarnold: I'm using a supermicro 4U server case, 8 bay hot swap in the front, heat is no problem, noise...well... there are systems that are quieter, power draw is quite good as you see :)
[19:12] <sarnold> fx159: hehe, yeah, server gear is never going to be -quiet- but it still seems surprising to me that there's not much in the middle ground of ~dozen drives systems for home use. it's all "look! four drives in this nas!" or "this chassis holds 24 drives" :)
[19:28] <fx159> sarnold: Yea, four is just not enough, and 24 is overkill...
[19:40] <dasjoe> sarnold: is http://cdimage.ubuntu.com/releases/wily/release/ supposed to contain just powerpc and ppc64el images?
[19:40] <sarnold> dasjoe: hah, good question
[19:42] <dasjoe> sarnold: same for vivid. trusty has some weird images I don't recognize, too: "64-bit Mac (AMD64) desktop image"
[19:42] <OerHeks> dasjoe, use the server and install the desktop you want
[19:42] <sarnold> OerHeks: those -are- the server images, and it's only two oddball arches :)
[19:42] <sarnold> dasjoe: I've poked infinity in #ubuntu-devel, he seems most likely to know what's going on..
[19:43] <sarnold> off to lunch..
[19:44] <dasjoe> OerHeks: thanks, but that's not what I'm after :) I just noticed cdimages does not contain images for any arch I use
[19:45] <tarpman> dasjoe: are you looking for http://releases.ubuntu.com/wily/
[19:45] <OerHeks> images like logos / artwork ?
[19:46] <Obelus> Probably disc images...
[19:48] <shauno> dasjoe: it seems to be intentional.  the front page of http://releases.ubuntu.com has an explanation.  in all honesty, I'm surprised the root of cdimage. doesn't
[19:48] <dasjoe> shauno: thanks, just found that explanation, too
[19:50] <OerHeks> oh the regular versions http://releases.ubuntu.com/15.10/ , i was lost in powerpc
[20:00] <dasjoe> I finally found what I was looking for in the first place, the netboot minimal ISOs: http://cdimage.ubuntu.com/netboot/
[20:01] <Obelus> Ah netboot.
[20:30] <rattking> hello, does anyone know if the grub2 password behavior was supposed to change between precise and trusty? I just upgraded a test node to trusty and it's asking for a PW to boot when it used to require a PW only to edit or access grub's cli
[20:38] <rattking> ohh yes there was a change in behavior there, and it's documented :)
[21:00] <Wicaeed> how long does it take to sync a single image with uvt-simplestreams-libvirt? I'm running the command to sync a single image yet I'm not seeing any noticeable network activity
[21:01] <Wicaeed> I see it taking a boatload of CPU though
[21:06] <atralheaven_> Hello, what tool do you suggest for downloading torrents on vps?
[21:10] <bittin> rtorrent
[21:14] <keithzg> atralheaven_: Yeah, bittin's suggestion of rtorrent is probably the best one. Although, I remember some time ago having to go with ctorrent instead because of dependencies. That was a while ago and it was on an OpenBSD VPS, though.
[21:15] <atralheaven_> I think I've used it before...
[21:15] <atralheaven_> im installing it
[21:17] <atralheaven_> how can I add a user that can only use ssh socks proxy? nothing more, only ssh socks proxy, is it possible?
[21:18] <atralheaven_> the user shouldn't be able to run any command
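[Editor's note: atralheaven_'s question never got an answer in this log. The usual recipe is an account with no usable shell plus an sshd_config Match block; "socksonly" below is an example username, and the client would connect with `ssh -N -D 1080 socksonly@ip` (-N requests no command, so only the SOCKS forwarding runs).]

```
# Create the user with no login shell (run as root):
#   useradd -m -s /usr/sbin/nologin socksonly
#   passwd socksonly

# /etc/ssh/sshd_config -- restrict the account to forwarding only
Match User socksonly
    AllowTcpForwarding yes
    X11Forwarding no
    AllowAgentForwarding no
    PermitTTY no
    ForceCommand /usr/sbin/nologin
```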