=== Guest61418 is now known as esde
=== not_phunyguy is now known as phunyguy
=== hxm is now known as Guest36663
=== Adri2000 is now known as Guest49824
[03:27] Earlier today, I had a server running 12.04 go down. Found out that a package got updated that removed firmware for an aic94xx controller. Seems like it got removed from a package for legal reasons, but there was no warning before it removed the firmware.
[03:31] there is always a warning
[03:32] it's in the changelogs
[03:42] Fair enough. Unless I was looking for it in the changelogs specifically, amongst all the other updates that the machine received, it's incredibly easy to overlook.
[03:48] hmm, I have mine set to show me the full changelog when I update the machine
[03:48] I also have it set so all updates get emailed to me, and I look over them
[03:49] just sign up on the mailing list to receive them
[03:49] it just matters exactly how much you care
[03:53] It's not a matter of whether I cared or not; the server is a tool and its maintenance is important, but it's not my primary responsibility. It's a tool I use for development of other things. Thanks for the direction toward getting the changelogs when I go to update.
[03:54] I'll take a look at getting that set up.
[05:27] hello all
[05:30] can I use gparted to make my current server raid 1?
[05:30] I already used it to format my new hdd. But I need to make the new hdd work with my current hdd. Any way to avoid a reinstallation?
[06:01] how can I make a partition extended with gparted?
=== Lcawte|Away is now known as Lcawte
=== mswart_ is now known as mswart
[08:21] Good morning.
=== Lcawte is now known as Lcawte|Away
=== airtonix_ is now known as airtonix
=== Ursinha is now known as Ursinha-afk
[10:14] Is there a way to run chmod on a full directory and tell it only to do files, not folders?
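The files-only chmod question above is the classic use case for find's `-type f`; here is a sketch against a throwaway directory (the `/media/data` paths in the channel are the asker's; everything here is illustrative). The `rename` failure that comes up a bit later in the log is a Perl interpolation issue: an unescaped `@` inside the `s///` expression is read as an array variable, so it needs a backslash.

```shell
set -e
d=$(mktemp -d)
mkdir -p "$d/sub"
touch "$d/a.mkv" "$d/sub/b.mkv"
chmod 777 "$d/sub" "$d/a.mkv" "$d/sub/b.mkv"

# -type f restricts the action to regular files; directories keep their mode
find "$d" -type f -execdir chmod 644 {} \;

file_mode=$(stat -c '%a' "$d/a.mkv")   # now 644
dir_mode=$(stat -c '%a' "$d/sub")      # still 777, untouched

# The later rename problem: escape the @ so Perl doesn't interpolate it
# (assumes the File::Rename rename(1), as shipped on Ubuntu):
#   find "$d" -type f -execdir rename -v 's/L\@mBerT/ /' {} \;
rm -rf "$d"
```

Note that `*.mkv` in the command quoted below shouldn't be mixed with `{}`; with `-execdir`, `{}` already stands for each found file.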
[10:14] so I want it to run through every folder and amend every file
[10:15] you'd need to use find to do that
[10:15] bitbyte: "find /path/to/base -type d -execdir chmod .... {} \;" maybe?
[10:16] bitbyte: oops, "-type f"!
[10:16] mmmm I'll give it a shot, I'm currently just amending my plex directory so you can see lots of folders and files scattered all over
[10:17] TJ-: so you think this will do, without the quotes: "find /media/data/plex_source/ -type f -execdir chmod 644 {} \;"
[10:36] TJ-: it worked like a charm, thanks for the help
[11:02] Another quick one guys if you don't mind, I'm trying to run "find /media/data/plex_source/anime/One\ Piece/ -type f -execdir rename -v 's/L@mBerT/ /' *.mkv {} \;" without the quotes, but when it gets to the @ sign it dies thinking it needs a package
=== Ursinha-afk is now known as Ursinha
[11:12] bitbyte: does it need escaping, with "\@"?
[11:12] I'll give it a shot, thanks
=== Ursinha is now known as Ursinha-afk
=== Lcawte|Away is now known as Lcawte
=== luminous_ is now known as luminous
[14:35] t
[14:37] Hi, I need to put 20TB in raid 5. Which drives should I use? I heard something about "Nonrecoverable Read Errors per Bits Read". Thanks for the help!
[14:42] anyone
[14:49] ahmadgbg: well
[14:50] ahmadgbg: what sort of use?
[14:50] RoyK: Storage and like 5 websites
[14:51] what sort of storage? VMs? archive?
[14:51] what sort of i/o pattern do you expect?
[14:52] ahmadgbg: if you don't know, just describe the use of the server as well as you can
[14:53] RoyK: Archive, I will put raw videos from the cam there to save them; I will even use a NAS to back them up.
[14:53] RoyK: The websites will be wordpress
[14:54] ahmadgbg: Not sure if you want to use RAID5 for a 20TB array. With that amount of data there is supposedly a significant risk of a second disk failing during the rebuild which happens after a first disk dies.
[14:54] ahmadgbg: do you expect lots of traffic on the websites?
if not, it shouldn't matter a lot
[14:54] andol++
[14:54] ahmadgbg: better use raid6 - rebuild time for a large raid is significant
[14:54] RoyK: not so much.. like 2000 unique visitors/month
[14:54] ahmadgbg: I guess my mobile phone could do that ;)
[14:55] RoyK: haha :D... So if I use RAID6.. which drives should I use
[14:55] ahmadgbg: for such use, probably WD Red or something
[14:55] ahmadgbg: how many drive ports does your controller have?
[14:55] ahmadgbg: doesn't seem to be very i/o intensive
[14:56] bekks: I guess it's better to get enough controllers, or controllers large enough, than just basing it on what's there
[14:56] RoyK: so I can use 10^14 drives?
[14:56] bekks: 6 ports
[14:57] RoyK: yeah, so it would be interesting whether he already has a controller.
[14:57] Does it need to be hardware RAID or would mdadm work?
[14:57] ahmadgbg: IMHO there's very little difference between enterprise drives and consumer drives. it's mostly marketing BS
[14:57] tonyyarusso: MD usually does a better job than hw raid ;)
[14:57] RoyK: I often use MD myself. It's an ongoing debate at the office. :)
[14:58] ahmadgbg: I'd recommend md raid for flexibility, or zfs if you're nervous about data consistency
[14:58] ahmadgbg: but then - a bitflip or two won't ruin a video
[14:58] RoyK: I was thinking about zfs2, I think it's named?
[14:58] So, add a couple of plain SATA cards and you can support that amount a heck of a lot cheaper
[14:58] Well, having the HW RAID battery backup can sometimes be nice, but otherwise I too have a preference for mdadm.
[14:58] ahmadgbg: zfs2? no such thing afaik
[14:59] I also normally go for 10 rather than 5/6 - a few more drives, but better performance and safer.
[14:59] tonyyarusso: sure, but doesn't look like he needs the performance
[14:59] RoyK: Z2 :D
[14:59] tonyyarusso: and no, raid1+0 isn't really safer than raid6
[14:59] ahmadgbg: ah - raidz2 - it's zfs' implementation of raid6
[15:00] ahmadgbg: raidz3 if you're really paranoid (and don't care much about write performance :P)
[15:00] RoyK: ye.. should I go with that and WD Red drives?
[15:00] ahmadgbg: keep in mind that if you choose zfs, you'll lose most of the flexibility offered by md
[15:00] RoyK: well I'm going to back up the data on a NAS so that won't be needed, right?
[15:00] RoyK: like?
[15:01] ahmadgbg: so, off the top of my head, I'll suggest setting up an mdraid with 4TB WD Red drives in RAID6
[15:01] I've had good luck with my handful of Reds so far, fwiw. Small sample size, but since when does the Internet care about reproducible validity of opinions? :P
[15:01] ahmadgbg: flexibility of adding/removing drives to the raid etc
[15:01] RoyK: ok then I will use that :D
[15:02] so, 7 4TB drives in RAID6, that'll give you 20TB, or 18TiB (TiB as in what the OS reports as terabytes)
[15:02] so perhaps get 8 drives
[15:02] RoyK: but I saw something about if I use 10^14, there is a chance that it won't rebuild if a drive crashes
[15:03] ahmadgbg: that's why you should use raid6 ;)
[15:03] RoyK: smart :D
[15:03] ahmadgbg: is this a home server or a production thing?
[15:03] RoyK: it's in between :D
[15:04] Home *is* production!
[15:05] if it's a production thing and you have the budget, I'd go for Seagate Constellation 4TB drives
[15:05] tonyyarusso: hehehe
[15:05] RoyK: it's a home then :D
[15:05] RoyK: WD Red are much cheaper...
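The 7x4TB RAID6 sizing above works out as follows; a sketch, with the mdadm commands for the array itself left as comments since they need root and real disks (the device names are made up, not from the conversation):

```shell
# Usable capacity of an n-drive RAID6 is (n - 2) data drives' worth:
n=7; drive_tb=4
usable_tb=$(( (n - 2) * drive_tb ))
# Drive vendors sell decimal TB; the OS reports binary TiB:
usable_tib=$(awk -v t="$usable_tb" 'BEGIN { printf "%.1f", t * 1e12 / (1024 ^ 4) }')
echo "${usable_tb} TB usable = ${usable_tib} TiB as reported"   # 20 TB = 18.2 TiB

# Creating and later growing such an array (illustrative, requires root):
#   mdadm --create /dev/md0 --level=6 --raid-devices=7 /dev/sd[b-h]
#   mdadm --add /dev/md0 /dev/sdi
#   mdadm --grow /dev/md0 --raid-devices=8
```

The `--grow` step is the md flexibility RoyK mentions; as he notes, reshaping an array this size can take a week or more, though it stays usable throughout.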
[15:05] ahmadgbg: so, 7 or 8 drives in raid6 plus maybe a spare drive if you're nervous
[15:06] RoyK: it won't be a problem, right :D
[15:06] ahmadgbg: and perhaps a controller like this http://www.ebay.com/itm/LSI-SAS-9211-8i-6Gbps-8Port-PCI-Express-SATA-SAS-Host-Bus-Adapter-New-/380703631558?pt=US_Server_Disk_Controllers_RAID_Cards&hash=item58a3b468c6
[15:06] those are good
[15:07] you'll need sas-sata cables, though, but sata plugs neatly into sas, so it'll work well
[15:07] http://www.ebay.com/itm/Mini-SAS-4i-SFF-8087-36P-36-Pin-Male-to-4-SATA-7-Pin-Splitter-Adapter-Cable-0-5M-/111316051784?pt=US_Drive_Cables_dapters&hash=item19eaf42748
[15:07] a cable like that
[15:08] * RoyK likes working with storage ;)
[15:08] Nice :D.. and if I want to add more drives it will be easy, right
[15:08] with mdraid, yes
[15:08] with zfs, no
[15:09] is it easy to set up?
[15:09] it'll probably take a week (or two) to add a drive to a raid that size, but it'll work well (and the raid will be usable during that time)
[15:09] yes
[15:09] nice :D
[15:09] Now it's buy time :D
[15:10] hehe
[15:10] good luck :)
[16:32] hi! what permissions does logstash need to read /var/log/* ?
[16:32] I've added logstash to the adm group as an initial attempt at this, but that is not sufficient, and logstash is still getting permission errors
[16:50] luminous: normally group membership in adm
[16:50] I thought that would work too
[16:50] apparently not
=== Sieb is now known as sieb
[16:50] luminous: restarted logstash?
[16:50] yes, even restarted the whole system
[16:51] which logs does it fail to read?
[16:51] the init.d does some chroot stuff, but it can read other logs in /var/log just fine
[16:51] RoyK: auth.log and all those that are owned by root:adm
[16:51] or root:root
[16:52] chrooting it so it can't read /var/log keeps it from doing its job :P
[16:53] RoyK: don't get confused there, it can read /var/log/*
[16:54] but it doesn't have permissions to read some of these files
[16:54] let me look at the script to be more specific
[16:55] is it ok to paste 4 lines here?
[16:55] sorry, 5
[16:55] !pastebin | luminous
[16:55] luminous: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
[16:55] ok, thanks for confirming
[16:59] luminous: find out which files aren't readable
[17:00] RoyK: right now, it's anything that logstash does not own
[17:00] but actually, let me add world-readable to auth.log
[17:00] and we can confirm
[17:00] I was also thinking of dropping the chroot in that init script
[17:00] to test
[17:01] but the chroot chroots logstash to /
[17:04] hrm, it seems removing the chroot lets logstash read auth.log with the perms I've given it (added to adm group)
[17:04] that's weird
[17:04] I don't know if I can debug this deeply enough right now, it might have to wait
[17:04] somehow the chroot --userspec seems to be limiting what logstash can see
[17:05] even though it's in the adm group
[17:05] luminous: sudo su -c "cat /var/log/auth.log" logstash
[17:05] RoyK: logstash has no shell, so that doesn't work
[17:06] luminous: sudo su -s /bin/bash -c "cat /var/log/auth.log" logstash
[17:06] that worked
[17:06] then the logstash user can read those
[17:07] but not when run in the chroot
[17:07] but I'll need to redeploy this vm to test that theory some more
[17:07] but I'll need to push that out till later
[17:07] too many deadlines
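RoyK's "find out which files aren't readable" step can be scripted with GNU find's `-readable` test; a sketch against a scratch directory, since checking the real /var/log as the logstash user requires the `sudo su -s /bin/bash -c '…' logstash` trick shown above (logstash having no login shell). File names here are stand-ins.

```shell
set -e
d=$(mktemp -d)
touch "$d/auth.log" "$d/public.log"
chmod 000 "$d/auth.log"      # simulate a log this user may not read
chmod 644 "$d/public.log"

# List files the current user cannot open.
# (Run as root, everything is readable and the list will be empty.)
unreadable=$(find "$d" -type f ! -readable)
echo "unreadable: $unreadable"
rm -rf "$d"
```

Against the real logs, the equivalent check would be `sudo su -s /bin/bash -c 'find /var/log -type f ! -readable' logstash`, which would have shown the root:adm 640 files immediately.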
[17:07] :)
[17:07] thanks for the assistance so far RoyK :)
[17:08] :)
=== sieb is now known as Sieb
=== Guest49824 is now known as Adri2000
[20:42] hey folks
[20:43] trying to upgrade an old 10.10 (PPC) server which hasn't been online in a while. when I run apt-get update I get a lot of 404s and it fails. Any tips?
[20:46] use the archive
[20:46] SpaceBass: That being the result of 10.10 not having been supported for a while, and no longer being available on the regular repo server.
[20:46] andol, exactly
[20:46] Patrickdk, is that http://old-releases.ubuntu.com ?
[20:47] SpaceBass: Yepp
[20:47] yes
[20:47] edit your /etc/apt/sources.list to use it
[20:47] any way to apply that change to the entire sources.list ?
[20:47] your best text editor
[20:47] or sed
[20:47] or perl
[20:47] or ...
[20:47] yeah... my sed and regex are too rusty... vi it is
[20:48] SpaceBass: Still, a reinstall might be preferable, given that your current upgrade path looks something like 10.10 -> 11.04 -> 11.10 -> 12.04.
[20:49] yeah... probably easier to stand up a basic amazon instance
[20:49] had the hardware and thought it might be worth reviving
[20:56] so I installed ubuntu 14.04 server, and it put GRUB on my USB key. Not really a problem, except it wasn't the USB key I'd like it on. how can I low-level copy one drive to the other?
[21:10] anyone able to answer a question about GRUB?
[21:11] Havenstance: why don't you install grub onto where you want it?
[21:13] bekks, I haven't the foggiest how. I hit the guided Whole Disk Encryption with LVM, and when I do grub-install /dev/sdb it gives an error about installing it on an encrypted disk, so I did what it said and it said it installed successfully, and I now have a /boot directory which I didn't have before
[21:14] but when I reboot it throws up "non-system disk or disk error, replace and strike any key when ready".
I put in the flash drive, reboot, and it boots fine
[21:14] May need to reinstall; I've had this issue before and I usually pull the USB key and let it fail, then tell it where I want it to go.
=== Lcawte is now known as Lcawte|Away
[21:29] hi all. please can you help me, my ubuntu server has WLAN and LAN enabled and connected to the same network. when I boot without LAN connected and then connect it later, all traffic still flows over WLAN. how can I make the switch automatic to use LAN when available?
[21:35] dkorras: If by WLAN you mean wireless and by LAN you mean wired, disconnect from the wireless network when you hook up the cable
[21:36] dkorras: I'm not certain if there's a way to automatically switch from wireless to the wired connection
[21:36] sure, using metrics
[21:36] I also just realised this is for a server, not a desktop, and I probably gave a rather stupid answer
[21:36] but it won't switch, just new connections will use the lower one
[21:39] I have searched; there is a program called guessnet
[21:39] there is no documentation to set it up though
=== lool- is now known as lool
[21:53] hi guys. I'm trying to open my perl application in a browser (the http service is nginx) and I'm getting this error: An error occurred while reading CGI reply (no response received)
[21:53] I'm using a unix socket, fcgiwrap, ubuntu 14.04
[21:53] and?
[21:53] fix your perl application
[21:54] what do I need to fix? it worked on another server
[21:54] how should I know?
[21:54] your perl application does log its errors, right?
[21:54] yes, but there are no errors in the log file
[21:55] I'm not talking about the nginx log file
[21:55] did you attempt to run the perl app yourself?
[21:55] nginx error.log is clear
[21:55] yes I know that
[21:56] that is why you should not be looking at it
[21:56] but at your perl application's log file
[22:00] Patrickdk: my application log file is clear, just the data which I write there
[22:00] well, your perl application is crashing for some reason
[22:00] run from bash it's ok, no errors
=== circ-user-8TaE5 is now known as Thatguy
[22:35] does anyone here use proftpd?
[22:38] openssh sftp?
[22:52] luminous, if you're talking to me: for some reason all my config files for it need to be readable by everyone
[22:52] so 777 or so
[22:59] 777?
[22:59] everyone needs to write to your config files?
[22:59] well I got it as
[22:59] 744
[23:00] for some reason it can't read the config files unless they can be read by everyone, even though the process is run under root
[23:00] really weird
[23:01] well, that is simple
[23:01] the file is owned by the wrong user/group
[23:02] currently root, but when I look at ps aux to see who is running proftpd, it's root
[23:02] what is the /etc/proftpd folder owned by?
[23:02] "root 8720 0.0 0.0 118484 2400 ? Ss Jun28 0:00 proftpd: (accepting connections)"
[23:03] root
[23:03] and group root
[23:03] hmm
[23:03] mine is the same
[23:04] except the config files I have passwords in, which are owned by proftpd and only readable by the proftpd user
[23:04] hmm
[23:04] I'll see if there's a user called proftpd
[23:04] proftpd 1446 0.0 0.0 84076 2184 ? Ss Jun22 0:02 proftpd: (accepting connections)
[23:06] hmm
[23:06] just set the user of the files to proftpd and nothing
[23:06] ya, in the proftpd config you configure the user/group
[23:06] so it's up to you
[23:06] the debian maintainer never really *debianized* it
[23:07] says root
[23:07] I believe it's set to root so I can set the ftp user uid
[23:07] so say if you login as
[23:08] user1 it will use, say, thatguy on the system
[23:08] how I got it set up for my websites
[23:08] each website under a different user
[23:08] then each ftp account needs to be under a different user as well
[23:08] same with me
[23:09] got mine reading off a mysql db :D
[23:09] for the users
[23:10] in your init.d/proftpd does it say what user to run as?
[23:10] no
[23:11] ok
[23:12] I think I have fixed it
[23:12] soon know
[23:21] Patrickdk thanks, I think we've fixed it
[23:21] yay
[23:22] :)
[23:25] would someone please help me make a script that would make backups of a directory for me, like every hour? I'm trying to set up a backup system for this game server I'm running.. but the console is very primitive because it's in alpha still
[23:28] z1haze: use cron
[23:28] z1haze: where are you copying the backup to?
[23:28] I just want to create a backup folder within the game folder
[23:29] I don't know what cron is, I'm sorry, I don't know how to program, I just play games :\ lol
[23:29] I found a tut on making a shell script to backup
[23:29] z1haze: cron is just what you want.
[23:29] but can it be automated, like if the server process is running, it does this every hour
[23:30] ok great: how do I use this?
[23:30] z1haze: you can schedule a task to run every hour of every day of the month to cp /some/file /to/another/place
[23:30] z1haze: crontab -e
[23:30] ok great, but in my case it's a series of region files
[23:31] well actually I want to do the entire contents of the region folder
[23:31] only because new region files are generated when players scout new land
[23:31] and of course I'd like to have something like FILENAME=ug-$(date +%-Y%-m%-d)-$(date +%-T).tgz
[23:31] I would use zip myself
[23:31] z1haze: so you want to create a tgz?
[23:32] I just want the concept of a backups folder, generating a new zip/tar each time so it's not overwriting
[23:32] say like, someone gets griefed 2 days ago, but I wasn't on
[23:32] I don't want the backups to overwrite the loss, I want to be able to go back and fix that region if I need to
[23:33] "zip -r date(date thinghere).zip /path/to/folder/"
[23:33] z1haze: then use tar czf /path/to/save/location.tgz /path/to/compress/directory
[23:34] z1haze: oh nvm, you want sequential backups
[23:34] ok can we kinda start from crontab -e
[23:34] yes sequential
[23:34] I'm in crontab now, and I opened it with nano
[23:35] z1haze: http://catlingmindswipe.blogspot.com/2010/02/linux-how-to-incremental-backups-using.html
[23:36] z1haze: keep in mind that you will fill up your drive unless you monitor how many of the backups you are going to create
[23:36] oh wait histo, I don't need this type
[23:37] it seems that backup you're describing will overwrite the same file, only modifying new files or ones that have been changed
[23:38] hmm let's say I have region files 1.0 1.1 1.2 1.3 1.4 and 1.5..
I want to backup all region files every hour
[23:39] but let's say region file 1.5 has some issues with it that I need to investigate. I learn the time of the incident, I find the region backup with that timestamp, and I pull the 1.5 region file out of that backup and replace the real region file
[23:40] basically, I need to schedule the execution of a shell script based on whether a process is running
[23:40] I will write my own shell backup script; how can I schedule this based on IF the game is running
[23:42] z1haze: just call the script with cron
[23:45] can you give me an example
[23:45] I don't understand the 0 0 5 0 stuff
[23:45] also, I don't see how I can make it dependent upon a certain process running
[23:48] z1haze: You will have to learn about scripting; cron is easy enough to explain in here.
[23:48] so can't you explain it, if it's easy enough
[23:49] z1haze: the first fields are minutes, hour, day of month, month, day of week.
[23:49] so what about one that is just every hour
[23:50] z1haze: 0 * * * *
[23:50] z1haze: that would be at 0 minutes, every hour, every day of month, every month, every day of week.
[23:50] z1haze: or you could just do @hourly
[23:51] z1haze: there are special strings: @reboot, @yearly, @annually, @monthly, @weekly, @daily, @midnight, @hourly
[23:51] ok 1 sec let me try this out on a new instance
[23:51] how would I go about these 2 issues I'll run into though:
[23:52] it will backup even if the server is offline (pointless)
[23:52] and secondly, how do I say, I only want to keep.. 72 backups
[23:52] but now that I'm thinking about it, your idea of the incremental is not bad, because if nothing is changed in those previous regions, why do I need them backed up every time
[23:53] it will make it harder to find the files I need though, I bet
[23:53] z1haze: you could just back up the changes or the whole thing.
[23:53] yea I want to backup changes
[23:53] do you know how to write that
[23:53] z1haze: yes
[23:54] z1haze: just write a script that checks if your process is running; if it is, then create a backup.
[23:54] great, let me get this instance up and running so it can be tested
[23:55] haha I have no idea how to do that but I'll be back
[23:55] 1 min, thanks for helping me
[23:55] z1haze: I'm not going to write it for you. Then you won't learn. I will answer questions that you come across though.
[23:56] I've never written scripts in my life, I'm not a programmer
[23:57] z1haze: you don't have to be a programmer. It's plain text, just the commands that you would type in a console.
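Putting the thread together, here is a sketch of the script histo describes: back up only while the game process is running, write timestamped archives so nothing is ever overwritten, and prune to the newest 72. The process name, paths, and function name are all assumptions for illustration; z1haze would adapt them and install the script with a `0 * * * *` (or `@hourly`) crontab line as explained above.

```shell
#!/bin/sh
# Timestamped, pruned backups of a game's region directory (sketch).
backup_regions() {
    src=$1 dest=$2 proc=$3 keep=${4:-72}
    # do nothing while the server process isn't running
    pgrep -x "$proc" > /dev/null || return 0
    mkdir -p "$dest"
    # a new archive per run, so older backups are never overwritten
    tar czf "$dest/region-$(date +%Y%m%d-%H%M%S).tgz" \
        -C "$(dirname "$src")" "$(basename "$src")"
    # keep only the newest $keep archives
    ls -1t "$dest"/region-*.tgz | tail -n +$((keep + 1)) | xargs -r rm -f
}

# Example invocation; cron would run this hourly via:
#   0 * * * * /usr/local/bin/backup.sh
backup_regions /opt/game/region /opt/game/backups gameserver 72
```

The timestamped filenames answer the "don't overwrite the loss" requirement, and the `ls -1t | tail | xargs rm` prune answers the "only keep 72 backups" one; restoring a single region file is then just extracting it from the archive with the right timestamp.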