[03:27] <smapty> Earlier today, I had a server running 12.04 go down. Found out that a package got updated that removed firmware for an aic94xx controller. Seems like it got removed from a package for legal reasons, but there was no warning before it removed the firmware.
[03:31] <Patrickdk> there is always a warning
[03:32] <Patrickdk> it's in the change logs
[03:42] <smapty> Fair enough. Unless I was looking for it in the changelogs specifically, amongst all the other updates that the machine received, it’s incredibly easy to overlook.
[03:48] <Patrickdk> hmm, I have mine set to show me all the changelog when I update the machine
[03:48] <Patrickdk> I also have it set so all updates get emailed to me, and I look over them
[03:49] <Patrickdk> just sign up on the maillist to receive them
[03:49] <Patrickdk> it just matters exactly how much you care
[03:53] <smapty> It’s not a matter of whether I cared or not; the server is a tool and its maintenance is important, but it’s not my primary responsibility. It’s a tool I use for development of other things. Thanks for the direction toward getting the changelists when I go to update.
[03:54] <smapty> I’ll take a look at getting that set up.
[05:27] <morenoh152> hello all
[05:30] <morenoh152> can I use gparted to make my current server raid 1?
[05:30] <morenoh152> I already used it to format my new hdd. But I need to make the new hdd work with my current hdd. Any way to avoid a reinstallation?
[06:01] <morenoh152> how can I make a partition extended with gparted?
[08:21] <lordievader> Good morning.
[10:14] <bitbyte> Is there a way to run chmod on a full directory and tell it only to do files, not folders?
[10:14] <bitbyte> so I want it to run through, check every folder and amend every file
[10:15] <maswan> you'd need to use find to do that
[10:15] <TJ-> bitbyte: " find /path/to/base -type d -execdir chmod .... {} \;" maybe?
[10:16] <TJ-> bitbyte: oops, "-type f" !
[10:16] <bitbyte> mmmm i’ll give it a shot, i’m currently just amending my plex directory so you can see lots of folders and files scattered all over
[10:17] <bitbyte> TJ- : so you think this will do without the quotes “ find /media/data/plex_source/ -type f -execdir chmod 644 {} \; “
[10:36] <bitbyte> TJ- it worked like a charm thanks for the help
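TJ-'s find/chmod pattern, replayed on a scratch tree so the effect is visible. The /tmp/plex_demo path is made up for the demo; substitute your own directory.

```shell
# Build a small tree with a nested file, then give files restrictive modes.
mkdir -p /tmp/plex_demo/season1
touch /tmp/plex_demo/show.mkv /tmp/plex_demo/season1/ep1.mkv
chmod 600 /tmp/plex_demo/show.mkv /tmp/plex_demo/season1/ep1.mkv

# -type f matches regular files only, so directory modes (and their
# execute bits, needed to traverse them) are left untouched.
find /tmp/plex_demo -type f -execdir chmod 644 {} \;
```

`-execdir` runs chmod from each file's own directory, which is slightly safer than `-exec` against hostile path names.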
[11:02] <bitbyte> Another quick one guys if you don’t mind, im trying to run “ find /media/data/plex_source/anime/One\ Piece/ -type f -execdir rename -v 's/L@mBerT/ /' *.mkv {} \; “ without the quotes but when it gets to the @ sign it dies thinking it needs a package
[11:12] <TJ-> bitbyte: does it need escaping, with "\@"
[11:12] <bitbyte> i’ll give it a shot thanks
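For the record, TJ-'s `\@` suggestion is likely right: in perl's `s///`, an unescaped `@` starts array interpolation, and "needs a package" reads like perl's "Global symbol @mBerT requires explicit package name" error. If escaping doesn't pan out, a plain shell loop avoids the perl-rename dependency entirely. The /tmp/rename_demo tree and the sample filename here are stand-ins for illustration.

```shell
# Create a demo file carrying the tag, then strip it from every .mkv name.
mkdir -p /tmp/rename_demo
touch '/tmp/rename_demo/EpisodeL@mBerT01.mkv'

cd /tmp/rename_demo
for f in *.mkv; do
    new=$(printf '%s\n' "$f" | sed 's/L@mBerT//g')   # delete every occurrence of the tag
    [ "$f" = "$new" ] || mv -- "$f" "$new"
done
```

`@` is not special to the shell or to sed, so no escaping is needed here.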
[14:35] <histo> t
[14:37] <ahmadgbg> Hi, i need to put 20TB in raid 5. Which drives should i use? I heard something about "Nonrecoverable Read Errors per Bits read". Thanks for the help!
[14:42] <ahmadgbg> anyone
[14:49] <RoyK> ahmadgbg: well
[14:50] <RoyK> ahmadgbg: what sort of use?
[14:50] <ahmadgbg> RoyK: Storage and like 5 websites
[14:51] <RoyK> what sort of storage? VMs? archive?
[14:51] <RoyK> what sort of i/o pattern do you expect?
[14:52] <RoyK> ahmadgbg: if you don't know, just describe the use of the server as well as you can
[14:53] <ahmadgbg> Royk: Archive, i will put raw videos from the cam there to save them. I will even use a NAS to back them up.
[14:53] <ahmadgbg> Royk: The websites will be wordpress
[14:54] <andol> ahmadgbg: Not sure if you want to use RAID5 for a 20TB array. With that amount of data there is supposedly a significant risk of a second disk failing during the rebuild which happens after a first disk dies.
[14:54] <RoyK> ahmadgbg: do you expect lots of traffic on the websites? if not, it shouldn't matter a lot
[14:54] <RoyK> andol++
[14:54] <RoyK> ahmadgbg: better use raid6 - rebuild time for a large raid is significant
[14:54] <ahmadgbg> Royk: not so much.. like 2000 unique visitors/month
[14:54] <RoyK> ahmadgbg: I guess my mobile phone could do that ;)
[14:55] <ahmadgbg> Royk: haha :D... So if i use Raid 6.. which drives should i use
[14:55] <RoyK> ahmadgbg: for such use, probably WD Red or something
[14:55] <bekks> ahmadgbg: how many drive ports does your controller have?
[14:55] <RoyK> ahmadgbg: doesn't seem to be very i/o intensive
[14:56] <RoyK> bekks: I guess it's better to get enough controllers, or large enough controllers, than just basing it on what's there
[14:56] <ahmadgbg> Royk: so i can use drives rated 10^14?
[14:56] <ahmadgbg> bekks: 6 ports
[14:57] <bekks> RoyK: yeah, so it would be interesting whether he already has a controller.
[14:57] <tonyyarusso> Does it need to be hardware RAID or would mdadm work?
[14:57] <RoyK> ahmadgbg: IMHO there's very little difference between enterprise drives and consumer drives. it's mostly marketing BS
[14:57] <RoyK> tonyyarusso: MD usually does a better job than hw raid ;)
[14:57] <tonyyarusso> RoyK: I often use MD myself.  It's an ongoing debate at the office.  :)
[14:58] <RoyK> ahmadgbg: I'd recommend md raid for flexibility, or zfs if you're nervous about data consistency
[14:58] <RoyK> ahmadgbg: but then - a bitflip or two won't ruin a video
[14:58] <ahmadgbg> RoyK: i was thinking about zfs2 i think its named?
[14:58] <tonyyarusso> So, add a couple of plain SATA cards and you can support that amount a heck of a lot cheaper
[14:58] <andol> Well, having the HW RAID battery backup can sometimes be nice, but otherwise I too have a preference for mdadm.
[14:58] <RoyK> ahmadgbg: zfs2? no such thing afaik
[14:59] <tonyyarusso> I also normally go for 10 rather than 5/6 - a few more drives, but better performance and safer.
[14:59] <RoyK> tonyyarusso: sure, but doesn't look like he needs the performance
[14:59] <ahmadgbg> Royk: Z2 :D
[14:59] <RoyK> tonyyarusso: and no, raid1+0 isn't really safer than raid6
[14:59] <RoyK> ahmadgbg: ah - raidz2 - it's zfs' implementation of raid6
[15:00] <RoyK> ahmadgbg: raidz3 if you're really paranoid (and don't care much about write performance :P)
[15:00] <ahmadgbg> RoyK: ye.. should i go with that and WD red drives?
[15:00] <RoyK> ahmadgbg: keep in mind that if you choose zfs, you'll lose most of the flexibility offered by md
[15:00] <ahmadgbg> RoyK: well im going to backup the data on a NAS so that wont be needed right?
[15:00] <ahmadgbg> RoyK: like?
[15:01] <RoyK> ahmadgbg: so, off the top of my head, I'll suggest setting up an mdraid with 4TB WD Red drives in RAID6
[15:01] <tonyyarusso> I've had good luck with my handful of reds so far, fwiw.  Small sample size, but since when does the Internet care about reproducible validity of opinions?  :P
[15:01] <RoyK> ahmadgbg: flexibility of adding/removing drives to the raid etc
[15:01] <ahmadgbg> RoyK: ok then i will use that :D
[15:02] <RoyK> so, 7 4TB drives in RAID6, that'll give you 20TB, or 18TiB (TiB as in what the OS reports as terabyte)
[15:02] <RoyK> so perhaps get 8 drives
[15:02] <ahmadgbg> RoyK: but i saw something about if i use 10^14 drives, there is a chance that it wont rebuild if a drive crashes
[15:03] <RoyK> ahmadgbg: that's why you should use raid6 ;)
[15:03] <ahmadgbg> RoyK: smart :D
[15:03] <RoyK> ahmadgbg: is this a home server or a production thing?
[15:03] <ahmadgbg> RoyK: its in between :D
[15:04] <tonyyarusso> Home *is* production!
[15:05] <RoyK> if it's a production thing and you have the budget, I'd go for seagate constellation 4TB drives
[15:05] <RoyK> tonyyarusso: hehehe
[15:05] <ahmadgbg> RoyK: its a home then :D
[15:05] <ahmadgbg> RoyK: WD red are much cheaper...
[15:05] <RoyK> ahmadgbg: so, 7 or 8 drives in raid6 plus maybe a spare drive if you're nervous
[15:06] <ahmadgbg> RoyK: it wont be a problem right :D
[15:06] <RoyK> ahmadgbg: and perhaps a controller like this http://www.ebay.com/itm/LSI-SAS-9211-8i-6Gbps-8Port-PCI-Express-SATA-SAS-Host-Bus-Adapter-New-/380703631558?pt=US_Server_Disk_Controllers_RAID_Cards&hash=item58a3b468c6
[15:06] <RoyK> those are good
[15:07] <RoyK> you'll need sas-sata cables, though, but sata plugs neatly into sas, so it'll work well
[15:07] <RoyK> http://www.ebay.com/itm/Mini-SAS-4i-SFF-8087-36P-36-Pin-Male-to-4-SATA-7-Pin-Splitter-Adapter-Cable-0-5M-/111316051784?pt=US_Drive_Cables_dapters&hash=item19eaf42748
[15:07] <RoyK> a cable like that
[15:08]  * RoyK likes working with storage ;)
[15:08] <ahmadgbg> Nice :D.. and if i want to add more drives it will be easy right
[15:08] <RoyK> with mdraid, yes
[15:08] <RoyK> with zfs, no
[15:09] <ahmadgbg> is it easy to setup?
[15:09] <RoyK> it'll probably take a week (or two) to add a drive to a raid that size, but it'll work well (and the raid will be usable during that time)
[15:09] <RoyK> yes
[15:09] <ahmadgbg> nice :D
[15:09] <ahmadgbg> Now its buy time :D
[15:10] <RoyK> hehe
[15:10] <RoyK> good luck :)
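A sketch of the mdraid layout discussed above. The device names /dev/sd[b-h] are assumptions, so check lsblk first; the DRY_RUN guard only prints the commands, which keeps the block safe to paste.

```shell
DRY_RUN=1
run() {
    # Print instead of execute while DRY_RUN=1; flip to 0 on the real box.
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# 7 x 4TB drives in RAID6 -> 5 data drives, ~20TB (~18TiB) usable.
run mdadm --create /dev/md0 --level=6 --raid-devices=7 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
run mkfs.ext4 /dev/md0

# The flexibility RoyK mentions: growing the array later is two commands.
# The reshape runs in the background and the array stays usable meanwhile.
run mdadm --add /dev/md0 /dev/sdi
run mdadm --grow /dev/md0 --raid-devices=8
```

With DRY_RUN=0 both create and grow need root, and the grow can take days on arrays this size, as noted above.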
[16:32] <luminous> hi! what permissions does logstash need to read /var/log/* ?
[16:32] <luminous> I've added logstash to the adm group as an initial attempt on this, but that is not sufficient, and logstash is still getting permissions errors
[16:50] <RoyK> luminous: normally group membership in adm
[16:50] <luminous> I thought that would work too
[16:50] <luminous> apparently not
[16:50] <RoyK> luminous: restarted logstash?
[16:50] <luminous> yes, even restarted the whole system
[16:51] <RoyK> which logs does it fail to read?
[16:51] <luminous> the init.d does some chroot stuff, but it can read other logs in /var/log just fine
[16:51] <luminous> RoyK: auth.log and all those that are owned by root:adm
[16:51] <luminous> or root:root
[16:52] <RoyK> chrooting it so it can't read /var/log prevents it from doing its job :P
[16:53] <luminous> RoyK: don't get confused there, it can read /var/log/*
[16:54] <luminous> but it doesn't have permissions to read some of these files
[16:54] <luminous> let me look at the script to be more specific
[16:55] <luminous> is it ok to paste 4 lines here?
[16:55] <luminous> sorry, 5
[16:55] <RoyK> !pastebin | luminous
[16:55] <luminous> ok, thanks for confirming
[16:59] <RoyK> luminous: find out which files aren't readable
[17:00] <luminous> RoyK: right now, it's anything that logstash does not own
[17:00] <luminous> but actually, let me add world readable to auth.log
[17:00] <luminous> and we can confirm
[17:00] <luminous> I was also thinking of dropping the chroot in that init script
[17:00] <luminous> to test
[17:01] <luminous> but the chroot chroot's logstash to /
[17:04] <luminous> hrm, it seems removing the chroot lets logstash read auth.log with the perms I've given it (added to adm group)
[17:04] <luminous> that's weird
[17:04] <luminous> I don't know if I can debug this deeply enough right now, it might have to wait
[17:04] <luminous> somehow the chroot --userspec seems to be limiting what logstash can see
[17:05] <luminous> even though it's in the adm group
[17:05] <RoyK> luminous: sudo su -c "cat /var/log/auth.log" logstash
[17:05] <luminous> RoyK: logstash has no shell, so that doesn't work
[17:06] <RoyK> luminous: sudo su -s /bin/bash -c "cat /var/log/auth.log" logstash
[17:06] <luminous> that worked
[17:06] <RoyK> then the logstash user can read those
[17:07] <luminous> but not when run in the chroot
[17:07] <luminous> but I'll need to redeploy this vm to test that theory some more
[17:07] <luminous> but I'll need to push that out till later
[17:07] <luminous> too many deadlines
[17:07] <luminous> :)
[17:07] <luminous> thanks for the assistance so far RoyK :)
[17:08] <RoyK> :)
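One way to test the chroot theory later: compare the Groups: line in /proc/&lt;pid&gt;/status for logstash inside and outside the chroot. Depending on the coreutils version, `chroot --userspec` may set only the uid and primary gid, dropping supplementary groups like adm unless `--groups=adm` is also passed, which would explain exactly what luminous saw. Demonstrated here against the current shell's PID as a stand-in; on the real box you'd substitute logstash's PID, e.g. from pgrep.

```shell
pid=$$    # stand-in; use logstash's PID on the real box
# Uid/Gid are the primary ids; Groups lists supplementary groups. If adm's
# gid (4 on Ubuntu) is missing from Groups, adm-readable logs stay closed.
grep -E '^(Uid|Gid|Groups):' "/proc/$pid/status"
```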
[20:42] <SpaceBass> hey folks
[20:43] <SpaceBass> trying to upgrade an old 10.10 (PPC) server which hasn't been online in a while. when I run apt-get update I get a lot of 404s and it fails. Any tips?
[20:46] <Patrickdk> use the archive
[20:46] <andol> SpaceBass: That being the result of 10.10 not having been supported for a while, and no longer being available on the regular repo server.
[20:46] <SpaceBass> andol, exactly
[20:46] <SpaceBass> Patrickdk, is that http://old-releases.ubuntu.com ?
[20:47] <andol> SpaceBass: Yepp
[20:47] <Patrickdk> yes
[20:47] <Patrickdk> edit your /etc/apt/sources.list to use it
[20:47] <SpaceBass> any way to apply that change to the entire sources.list ?
[20:47] <Patrickdk> your best text editor
[20:47] <Patrickdk> or sed
[20:47] <Patrickdk> or perl
[20:47] <Patrickdk> or ...
[20:47] <SpaceBass> yeah...my sed and regex are too rusty... vi it is
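For the record, the sed version is a one-liner. Shown here against a demo copy (/tmp/sources.list.demo) so nothing real is touched; on the server you'd run it on /etc/apt/sources.list after taking a backup.

```shell
# Stand-in for a 10.10 sources.list.
cat > /tmp/sources.list.demo <<'EOF'
deb http://us.archive.ubuntu.com/ubuntu maverick main universe
deb http://security.ubuntu.com/ubuntu maverick-security main
EOF

# Point every mirror (archive, country mirrors, security) at old-releases;
# -i.bak keeps the original file around with a .bak suffix.
sed -E -i.bak \
    's,http://[a-z.]*(archive|security)\.ubuntu\.com,http://old-releases.ubuntu.com,g' \
    /tmp/sources.list.demo
```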
[20:48] <andol> SpaceBass: Still, a reinstall might be preferable, given that your current upgrade path looks something like 10.10 -> 11.04 -> 11.10 -> 12.04.
[20:49] <SpaceBass> yeah...probably easier to stand up a basic amazon instance
[20:49] <SpaceBass> had the hardware and thought it might be worth reviving
[20:56] <Havenstance> so i installed ubuntu 14.04 server, and it put GRUB on my USB Key, not really a problem except it wasn't the USB Key I'd like it on. how can I low level copy one drive to the other?
[21:10] <Havenstance> anyone able to answer a question about Grub?
[21:11] <bekks> Havenstance: why dont you install grub onto where you want it?
[21:13] <Havenstance> bekks, I haven't the foggiest how. I hit the guided Whole Disk Encryption with LVM option, and when I do grub-install /dev/sdb it gives an error about installing on an encrypted disk, so I did what it said; it said it installed successfully and I now have a /boot directory which I didn't have before
[21:14] <Havenstance> but when i reboot it throws up "non-system disk or disk error, replace and strike any key when ready". If I put in the flash drive and reboot, it boots fine
[21:14] <Havenstance> May need to reinstall, I've had this issue before and I usually pull the USB key and let it fail then tell it where I want it to go.
[21:29] <dkorras> hi all. please can you help me: my ubuntu server has WLAN and LAN enabled and connected to the same network. when i boot without LAN connected and then connect it later, all traffic still flows over WLAN. how can i make the switch automatic to use LAN when available?
[21:35] <SierraAR> dkorras: If by WLAN you mean wireless and by LAN you mean wired, disconnect from the wireless network when you hook up the cable
[21:36] <SierraAR> dkorras: I'm not certain if there's a way to automatically switch from wireless to the wired connection
[21:36] <Patrickdk> sure, using metrics
[21:36] <SierraAR> I also just realised this is for a server not a desktop, and I probably gave a rather stupid answer
[21:36] <Patrickdk> but it won't switch, just new connections will use the lower one
[21:39] <dkorras> i have searched; there is a program called guessnet
[21:39] <dkorras> there is no documentation to set it up though
[21:53] <r4do> hi guys. i'm trying to open my perl application in a browser (the http service is nginx) and i'm getting this error: An error occurred while reading CGI reply (no response received)
[21:53] <r4do> i'm using unix socket fcgiwrap, ubuntu 14.04
[21:53] <Patrickdk> and?
[21:53] <Patrickdk> fix your perl application
[21:54] <r4do> what do i need to fix? it worked on another server
[21:54] <Patrickdk> how should I know?
[21:54] <Patrickdk> your perl application does log its errors right?
[21:54] <r4do> yes, but there are no errors in log file
[21:55] <Patrickdk> I'm not talking about the nginx log file
[21:55] <Patrickdk> did you attempt to run the perl app yourself?
[21:55] <r4do> nginx error.log is clear
[21:55] <Patrickdk> yes I know that
[21:56] <Patrickdk> that is why you should not be looking at it
[21:56] <Patrickdk> but at your perl applications log file
[22:00] <r4do> Patrickdk: my application log file is clear, just the data which i write there
[22:00] <Patrickdk> well, your perl application is crashing for some reason
[22:00] <r4do> running it from bash is ok, no errors
[22:35] <Thatguy> does anyone here use proftpd?
[22:38] <luminous> openssh sftp?
[22:52] <Thatguy> Luminous: if you're talking to me, for some reason all my config files for it need to be readable by everyone
[22:52] <Thatguy> so 777 or so
[22:59] <Patrickdk> 777?
[22:59] <Patrickdk> everyone needs to write to your config files?
[22:59] <Thatguy> well i got it as
[22:59] <Thatguy> 744
[23:00] <Thatguy> for some reason it cant read the config files unless they can be read by everyone even though the process runs as root
[23:00] <Thatguy> really weird
[23:01] <Patrickdk> well, that is simple
[23:01] <Patrickdk> the file is owned by the wrong user/group
[23:02] <Thatguy> currently root, but when i look at ps aux to see who is running proftpd it's root
[23:02] <Patrickdk> what is the /etc/proftpd folder owned by?
[23:02] <Thatguy> "root      8720  0.0  0.0 118484  2400 ?        Ss   Jun28   0:00 proftpd: (accepting connections)"
[23:03] <Thatguy> root
[23:03] <Thatguy> and group root
[23:03] <Patrickdk> hmm
[23:03] <Patrickdk> mine is the same
[23:04] <Patrickdk> except the config files I have passwords in, is owned by proftpd and only readable via proftpd user
[23:04] <Thatguy> hmm
[23:04] <Thatguy> I'll see if there's a user called proftpd
[23:04] <Patrickdk> proftpd   1446  0.0  0.0  84076  2184 ?        Ss   Jun22   0:02 proftpd: (accepting connections)
[23:06] <Thatguy> hmm
[23:06] <Thatguy> just set the user of the files to proftpd and nothing
[23:06] <Patrickdk> ya, in the proftpd config you configure the user/group
[23:06] <Patrickdk> so it's up to you
[23:06] <Patrickdk> the debian maintainer never really *debianized* it
[23:07] <Thatguy> says root
[23:07] <Thatguy> I believe its set to root so I can set the ftp user uid
[23:07] <Thatguy> so say if you login as
[23:08] <Thatguy> user1 it will use say thatguy on the system
[23:08] <Thatguy> how i got it set up for my websites
[23:08] <Thatguy> each website under different user
[23:08] <Thatguy> then each ftp account needs to be under a different user as well
[23:08] <Patrickdk> same with me
[23:09] <Thatguy> got mine reading off mysql db :D
[23:09] <Thatguy> for the users
[23:10] <Thatguy> in your init.d/proftpd does it say what user to run as?
[23:10] <Patrickdk> no
[23:11] <Thatguy> ok
[23:12] <Thatguy> I think i have fixed it
[23:12] <Thatguy> soon know
[23:21] <Thatguy> Patrickdk thanks I think we've fixed it
[23:21] <Thatguy> yay
[23:22] <Patrickdk> :)
[23:25] <z1haze> would someone please help me make a script that would make backups of a directory for me, like every hour? im trying to set up a backup system for this game server im running.. but the console is very primitive because it's still in alpha
[23:28] <histo> z1haze: use cron
[23:28] <histo> z1haze: where are you copying the backup to?
[23:28] <z1haze> i just want to create a backup folder within the game folder
[23:29] <z1haze> i dont know what cron is, im sorry i dont know how to program i just play games :\ lol
[23:29] <z1haze> i found a tut on making a shell script to backup
[23:29] <histo> z1haze: cron is just what you want.
[23:29] <z1haze> but can it be automated, like if the server process is running, it does this every hour
[23:30] <z1haze> ok great: how do i use this?
[23:30] <histo> z1haze: you can schedule a task to run every hour of every day of the month to cp /some/file /to/another/place
[23:30] <histo> z1haze: crontab -e
[23:30] <z1haze> ok great, but in my case its a series of region files
[23:31] <z1haze> well actually i want to do the entire contents of the region folder
[23:31] <z1haze> only because new region files are generated when players scout new land
[23:31] <z1haze> and of course id like to have something like FILENAME=ug-$(date +%-Y%-m%-d)-$(date +%-T).tgz
[23:31] <Thatguy> I would be zip my self
[23:31] <histo> z1haze: so you want to create a tgz?
[23:32] <z1haze> i just want the concept of a backups folder, generating a new zip/tar each time so it's not overwriting
[23:32] <z1haze> say like, someone gets griefed 2 days ago, but i wasnt on
[23:32] <z1haze> i dont want the backups to overwrite the loss, i want to be able to go back and fix that region if i need to
[23:33] <Thatguy> "zip -r $(date +FORMAT).zip /path/to/folder/"
[23:33] <histo> z1haze: then use tar czf /path/to/save/location.tgz /path/to/compress/directory
[23:34] <histo> z1haze: oh nvm you want sequential backups
[23:34] <z1haze> ok can we kinda start from crontab -e
[23:34] <z1haze> yes sequential
[23:34] <z1haze> im in crontab now, and i opened it with nano
[23:35] <histo> z1haze: http://catlingmindswipe.blogspot.com/2010/02/linux-how-to-incremental-backups-using.html
[23:36] <histo> z1haze: keep in mind that you will fill up your drive unless you monitor how many of the backups you are going to create
[23:36] <z1haze> oh wait histo, i dont need this type
[23:37] <z1haze> it seems the backup you're describing will overwrite the same file, only adding new files or ones that have been changed
[23:38] <z1haze> hmm lets say i have region files 1.0 1.1 1.2 1.3 1.4 and 1.5.. i want to backup all region files every hour
[23:39] <z1haze> but lets say region file 1.5 has some issues with it that i need to investigate, i learn the time of the incident, i find the region backup with that timestamp, and i pull the 1.5 region file out of that backup and replace the real region file
[23:40] <z1haze> basically, i need to schedule the execution of a shell script based on if a process is running
[23:40] <z1haze> i will write my own shell backup script, how can i schedule this based on IF the game is running
[23:42] <histo> z1haze: just call the script with cron
[23:45] <z1haze> can you give me an example
[23:45] <z1haze> i dont understand the 0 0 5 0 stuff
[23:45] <z1haze> also, i dont see how i can make it dependent upon a certain process is running
[23:48] <histo> z1haze: You will have to learn about scripting, cron is easy enough to explain in here.
[23:48] <z1haze> so cant you explain it if its easy enough
[23:49] <histo> z1haze: first fields are minutes, hour, day of month, month, day of week.
[23:49] <z1haze> so what about one that is just every hour
[23:50] <histo> z1haze: 0 * * * *
[23:50] <histo> z1haze: that would be at 0 minutes, every hour, every day of month, every month, every day of week.
[23:50] <histo> z1haze: or you could just do @hourly
[23:51] <histo> z1haze: there are special strings, @reboot, @yearly, @annually, @monthly, @weekly, @daily, @midnight, @hourly
[23:51] <z1haze> ok 1 sec let me try this out on a new instance
[23:51] <z1haze> how would i go about these 2 issues ill run into though:
[23:52] <z1haze> it will backup even if the server is offline (pointless)
[23:52] <z1haze> and secondly, how do i say, i only want to keep.. 72 backups
[23:52] <z1haze> but now that im thinking about it, your idea of the incremental is not bad, because if nothing is changed in those previous regions, why do i need them backed up every time
[23:53] <z1haze> it will make it harder to find the files i need though i bet
[23:53] <histo> z1haze: you could just back up the changes or the whole thing.
[23:53] <z1haze> yea i want to backup changes
[23:53] <z1haze> do you know how to write that
[23:53] <histo> z1haze: yes
[23:54] <histo> z1haze: just write a script that checks if your process is running, if it is then create a backup.
[23:54] <z1haze> great, let me get this instance up and running so it can be tested
[23:55] <z1haze> haha i have no idea how to do that but ill be back
[23:55] <z1haze> 1 min, thanks for helping me
[23:55] <histo> z1haze: i'm not going to write it for you. Then you won't learn, I will answer questions that you come across though.
[23:56] <z1haze> ive never written scripts in my life im not a programmer
[23:57] <histo> z1haze: you don't have to be a programmer. It's plain text just the commands that you would type in a console.
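A sketch of the script histo describes (check whether the process is running, then make a timestamped archive and prune old ones). Everything here is an assumption to adapt: the process name "gameserver", the paths, and keeping 72 archives as mentioned above. The crontab entry to run it hourly would be `0 * * * * /home/game/backup.sh`.

```shell
# backup_region SRC DEST KEEP [PROCNAME]
# Archives SRC into DEST as a timestamped .tgz, then prunes all but the
# newest KEEP archives. If PROCNAME is given, does nothing unless a
# process with that exact name is running, so offline servers are skipped.
backup_region() {
    src=$1; dest=$2; keep=$3; proc=$4
    if [ -n "$proc" ] && ! pgrep -x "$proc" >/dev/null; then
        return 0    # server offline: skip this run
    fi
    mkdir -p "$dest"
    tar czf "$dest/region-$(date +%Y%m%d-%H%M%S).tgz" -C "$src" .
    # ls -t lists newest first; delete everything past the first $keep.
    # (Assumes archive paths contain no whitespace.)
    ls -t "$dest"/region-*.tgz | tail -n +$((keep + 1)) | xargs -r rm --
}

# Example invocation with assumed paths; a no-op while "gameserver" is down.
backup_region /home/game/region /home/game/backups 72 gameserver
```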