[00:07] howdy.
[00:08] how do I start the "gui"-based partition editor used in installation?
[00:08] (I want to use it to set up sw raid on some other disks, ubuntu-server is installed on its own drive)
[00:13] Is ext4 stable in 10.10?
[00:19] I think it's safe to assume the developers believe it to be stable or else they wouldn't have made it the default file system type.
=== RoyK is now known as Guest61560
=== RoyKa is now known as RoyK
[01:55] Anyone in here that could help? I need a good hard drive recovery program. My system disk died on me. I am in a live CD now, could get to the /home partition but not the /root
[01:55] Please, any help.
[02:16] New bug: #674768 in dhcp3 (main) "wrong reference in description" [Undecided,New] https://launchpad.net/bugs/674768
=== squishy is now known as rbniknej
[05:24] I set up RAID 1 using mdadm the other day, and md4 appears not to have synced, can anyone help? cat /proc/mdstat: http://paste.ubuntu.com/531032/
[05:24] it doesn't show UU like I think it should :/
[05:26] or does anyone know a better place to ask?
=== niekie_ is now known as niekie
[06:53] hello all!
[06:53] I'm having trouble connecting to my router from my external address
[06:55] I have the appropriate ports forwarded
[06:55] and I've even put it in the DMZ
[06:55] it's not helping. any suggestions?
=== ivoks-afk is now known as ivoks
[11:17] is memory ballooning available in kvm in Lucid?
[11:17] yes
[11:18] by default, or will I have to enable it somehow?
[11:18] I have this machine with currently 4 VMs - I haven't overcommitted (much) yet - just wondering
[11:18] pretty sure it's on by default; it was an early feature of kvm IIRC
[11:19] anyhow, have to run
[11:19] ciao - I'm sure others can answer any other questions you have
[11:29] just wondering, how do I get the unstable package from archive.ubuntu.com/ubuntu/pool/universe/p/proftpd-dfsg/ ? I tried adding unstable to several configs, yet I can't download the package using apt-get
[11:56] Yo! How can I optimize the performance of virtual guests running in Ubuntu 10.04 with KVM?
[11:57] Processor Intel Xeon E5420 @ 2.5 GHz. 16 GB RAM
[11:57] Lucid guests
[11:58] Both x86 and x86_64
[12:00] Yo! How can I optimize the performance of virtual guests running in Ubuntu 10.04 with KVM? Processor Intel Xeon E5420 @ 2.5 GHz. 16 GB RAM. Lucid guests. Both x86 and x86_64
=== ewook_ is now known as ewook
[12:06] Yo! How can I optimize the performance of virtual guests running in Ubuntu 10.04 with KVM? Processor Intel Xeon E5420 @ 2.5 GHz. 16 GB RAM. Lucid guests. Both x86 and x86_64
[12:11] k5673: asking the same question every few minutes is not going to help you get an answer quicker fwiw.
[12:12] OK
[12:34] k5673: what problems do you experience?
[12:37] mgolisch: Running an x86-only database under a 32-bit Lucid guest with 3GB of allocated RAM and 4 assigned processors is slower than running the same database on a real 32-bit Lucid machine with 3GB RAM and an Intel Xeon quad-core processor.
[12:38] mgolisch: The virtual one is slower than the real one
[12:38] that's to be expected.
[12:38] what did you expect?
[12:39] especially virtual SMP doesn't work too well in many virtualisation products
[12:50] Mmmmmmmm
[12:50] OK
[13:25] hello
[13:25] need some help
[13:25] for some reason, the 'ipmisensors' module isn't included in the ubuntu server kernel
=== NG_ is now known as ng_
=== ng_ is now known as NG_
[13:56] When considering disk-based backup for a SOHO NAS... What is the most important consideration? Mirroring capacity? Speed? Offsite vs. same rack?
[14:05] ehcah: zfs?
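Regarding the RAID 1 question above where /proc/mdstat does not show [UU] for md4: a minimal sketch of how the array state might be inspected and a dropped member re-added. The member device /dev/sdb1 is an assumption for illustration; the real device names would come from the --detail output.

    # Overall state of all md arrays; a healthy two-disk RAID 1 ends in [UU]
    cat /proc/mdstat
    # Detailed view of the suspect array: state, plus active/failed/spare devices
    sudo mdadm --detail /dev/md4
    # If a member is missing or was kicked out, try re-adding it (device name assumed)
    sudo mdadm /dev/md4 --re-add /dev/sdb1
    # Follow the resync until both members show up as [UU]
    watch cat /proc/mdstat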
[14:05] mirroring is a good thing until you get fs corruption or someone deletes a file by accident
[14:06] hey guys
[14:06] with zfs (or btrfs if you're brave) you have snapshotting, which is rather nice
[14:06] hi girls
[14:06] I'm testing KVM on my desktop running Kubuntu, but I'm just wondering: will aqemu allow me to set up a guest on a remote machine?
[14:06] That's what I'm fearing. I don't know about having a 24-bay chassis and its backup in the same rack, or in the same house either.
[14:11] I've been all over the net and, as a result of purchasing an Areca RAID controller, decided that I will simply use Ubuntu Server and allow the RAID controller to configure my disks for RAID 6.
[14:11] why raid6?
[14:11] I know ZFS, or BTRFS when available in production, sits on top of my config, but I'm not sure I need it?
[14:13] ehcah: hardware raid sucks hard when it comes to silent disk errors, and with terabytes of data, you'll get silent errors from the drives, meaning either corrupted data or (in case of metadata) perhaps a panic
[14:13] NightDragon: I think it offers me the most protection for my data. Keeping in mind that RAID 6 is still a single point of failure for me.
[14:14] unless you *really* think 2 drives could fail at once...
[14:14] RoyK: Can nothing be easy for the incompetent like myself?
[14:14] (which has rather astronomical odds)
[14:14] me, personally I prefer RAID 5 + hot spare
[14:14] less overhead
[14:14] NightDragon: You haven't met me. If it can go wrong, it will.
[14:14] lol fair enough
[14:14] NightDragon: heh - I somehow guess you don't have too much data around :)
[14:15] well if you think about it
[14:15] take the odds of a drive failing
[14:15] NightDragon: RAID 5 + hot spare leaves me the same usable disk space. How does performance improve?
[14:15] and multiply that by the odds of a second drive failing within the timeframe of data transfer to the hot spare
[14:15] NightDragon: the most common thing is (a) drive fails, (b) insert new drive, (c) start rebuilding/resilvering and (d) corruption is found on one of the other drives - whoops - data corruption
[14:15] ehcah: because one of the drives isn't in use until it's needed. less overhead
[14:15] NightDragon: that is, with linux sw raid or hw raid, you might not see the data is corrupted, so it's ignored, which is rather sad
[14:16] hmmz, an interesting problem
[14:16] I see your point
[14:16] RoyK: Is there any software that can automatically run corruption tests? Or are they simply found on a rebuild or specific file access?
[14:16] NightDragon: it's not hypothetical - I've seen it several times
[14:16] ehcah: zfs?
[14:17] well then Helllllloooooo tape drive!
[14:17] If I run ZFS, doesn't that make the $1,300 I spent on my Areca card wasted?
[14:17] (j/k.. I get your point lol)
[14:17] ehcah: return it
[14:17] * NightDragon is just jealous that his DRAC card doesn't support RAID 6 :(
[14:18] ehcah: zfs is way better than what areca can give you
[14:18] * RoyK is setting up a couple of 110TB boxes these days - all on zfs
[14:18] All I can say is ARGH!!! Every time I think I've got things worked out, there is a contrary argument against my solution.
[14:18] This is good though.
[14:18] :)
[14:18] I'm trying to start out right!
[14:18] ehcah: didn't say it wasn't good enough :)
[14:18] I know.
[14:18] Interesting discussion.
[14:19] I just wish I were more technical.
[14:19] It would make my life easier.
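On the question above about software that can automatically run corruption tests: with ZFS the usual approach is a periodic scrub, which re-reads every block and verifies it against its checksum, repairing from redundancy where it can. A minimal sketch, assuming a pool named tank:

    # Read every block in the pool and verify its checksum
    zpool scrub tank
    # Show scrub progress plus per-device read/write/checksum error counters
    zpool status -v tank
    # Many setups simply run the scrub weekly from cron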
[14:19] ehcah: dig further and it'll bleed in :)
[14:19] When all is said and done, I'll have spent more than $3,500 on a solution I intended to reuse older hardware for....
[14:19] RoyK: I don't know about that. I've proven to be pretty thick! :)
[14:20] ehcah: that's about as much as we paid for this 10TB test unit
[14:20] * RoyK grins
[14:20] I bet your disks are better quality than the 2TB Samsung ones I intend to continue using.
[14:20] why?
[14:20] most disks are about the same quality
[14:21] speed differs, obviously, but the error rate is quite constant
[14:21] My disks are currently $80 at NewEgg.
[14:21] according to google's tests
[14:21] well, they'll work
[14:21] The 12 I have now work flawlessly.
[14:21] It's unRAID I'm not quite happy with.
[14:21] For a ZFS solution, I was ready to use Nexenta.
[14:22] why not?
[14:22] or openindiana....
[14:22] A few forums I've posted in suggested going back to a straight linux server and HW or SW RAID.
[14:22] RoyK: I have OpenIndiana in a VM as well.
[14:22] I really wouldn't suggest using linux software (or hardware) raid over zfs
[14:22] I know the FAQs say it has a Server + Desktop focus, but I find it very desktop-like?
[14:23] Any idea when BTRFS will be readily supported?
[14:23] zfs is a little slower, because of the checksumming, but when you get those silent errors from the drives, those will be detected by zfs, not by other solutions
[14:23] perhaps by btrfs
[14:23] but then, btrfs only supports mirroring
[14:23] Didn't know that.
[14:23] ehcah: you can install on btrfs from 10.10
[14:24] I know, but I didn't think it was supposed to be ready for full usage until sometime in mid 2011?
[14:24] ehcah: no current linux fs (except btrfs) checksums data
[14:25] ehcah: I guess btrfs will get up to current zfs usability around 2015 with the current progress :þ
[14:25] RoyK: Hypothetically, if I could return my Areca card, what would you recommend that would ultimately get me to 24 SATA drives running on ZFS?
[14:25] pci-x or pci?
[14:25] Between the two, I'd go with ZFS, hands down. Way more support and implementations out there...
[14:26] Let me check the MOBO I've bought before I answer. The areca is PCI x8
[14:26] http://www.newegg.ca/Product/Product.aspx?Item=N82E16813182211
[14:27] x8, or x4 I guess.
[14:27] Straight PCI is probably slower than I want?
[14:27] and that MOBO only has 1 slot.
[14:28] LSI SAS9211-8i is quite cheap
[14:28] that and a SAS expander will allow you to connect a truckload of drives with good speed
[14:29] Ahh, expander... I was thinking I was limited to 8 drives with the card above.
[14:29] 8 6Gbps SAS ports
[14:29] with an expander you can utilize those quite well
[14:30] usually the expander takes 4 SAS ports
[14:30] Not to push my health insurance too far, but any suggestion for an expander?
[14:30] meaning 24Gbps
[14:30] I think those are quite generic
[14:31] If I could save enough money on the cards, I'd purchase a second Norco RPC-4224 chassis.
[14:31] k.
[14:31] a sas expander is like an ethernet switch
[14:31] only that it switches sas
[14:31] Stupid question: Externally mounted or on a PCI-type card?
[14:32] SAS expanders connect to SAS, so usually externally
[14:32] (or at the backplane)
[14:32] most backplanes have a sas expander these days
[14:32] at least the larger ones
[14:33] The RPC-4224 may have 6 already?
[14:33] I'll need to check its specs.
[14:33] I know it only requires 6 connections.
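For the point above that btrfs only supported mirroring (no raidz-style parity at the time): a minimal sketch of a two-device btrfs mirror, with the device names and mount point as assumptions:

    # Mirror both data (-d) and metadata (-m) across the two devices
    sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    sudo mount /dev/sdb /mnt/storage
    # Show how data and metadata chunks are allocated across the mirror
    sudo btrfs filesystem df /mnt/storage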
[14:33] Six internal SFF-8087 Mini SAS connectors support up to twenty-four 3.5" or 2.5" SATA (I or II) or SAS hard drives;
[14:33] meaning it has an expander.....
[14:34] yes.
[14:34] dunno if that's 3Gbps or 6Gbps, though
[14:34] 3Gbps will probably suffice, so you can get a cheaper controller
[14:34] 3Gbps I believe.
[14:34] then the 9211 will be overkill
[14:36] I'm looking at other LSI options to see....
[14:37] http://www.newegg.ca/Product/Productcompare.aspx?Submit=ENE&N=100006520%2050001833%2040000410&IsNodeId=1&Manufactory=1833&bop=And&SpeTabStoreType=1&CompareItemList=410|16-118-100^16-118-100-S01,16-118-099^16-118-099-S01
[14:39] Actually, on closer observation... I'm probably best with 3 of http://www.newegg.ca/Product/Product.aspx?Item=N82E16816118100
[14:39] With LSI that is.
[14:41] ehcah: try asking on #openindiana - you might not need three of those - but again - that will depend on the backplane/expander used
[14:41] Will do.
[14:42] The only trouble is adding multiple cards creeps back up to the same price as I was already paying... http://www.newegg.ca/Product/Product.aspx?Item=N82E16816151052
[14:42] ehcah: what sort of application will this be?
[14:42] Home media collection.
[14:42] No database or web serving.
[14:42] ehcah: then you can probably live with a single controller
[14:43] the bandwidth will suffice
[14:43] and I guess you're only on gigabit ethernet or lower anyway
[14:43] hi... newbie question. I want to change the umask for the 'pootle' user to 0002. How do I do it? It's a user that runs PootleService.
[14:43] OK. I need to get my head around connecting the six 8087s on the Norco backplane.
[14:44] it doesn't matter if the server can deliver 10Gbps if you connect that to an 802.11g network
[14:44] s/PootleService/PootleServer
[14:44] My LAN includes 24 gigabit ports for this type of usage. Not faster.
[14:44] how many concurrent users?
[14:44] All of my media-touching devices are hard-wired to the gigabit switch.
[14:44] We're a family of 4.
[14:45] so worst case 4 concurrent users
[14:45] It would be tough to hit more than that.
[14:45] Yes.
[14:45] you can use anything for that
[14:45] it'll work well
[14:45] Maybe a backup or ripping session on top. That's it.
[14:45] you really don't need a truckload of controllers
[14:45] That's why I liked the Areca card, back when I was convinced HW RAID was the way to go.
[14:45] * RoyK just ordered some 10Gbps switches :D
[14:46] Good thing all my gear hasn't arrived yet.
[14:46] oh man. for home or business?
[14:46] business :)
[14:46] two 110TB servers for disk-based backup connected by 10Gbps to the main datacentre
[14:46] quite fun :)
[14:47] Yep. Sounds like it. All my data will require a 1:1 ratio for backup.
[14:47] I don't know how to plan for that capacity beyond the 450 Blu-rays and 200 DVDs I'm ripping now...
[14:47] I also struggle with having duplicate copies in the same rack.
[14:48] I have a 70Mbit fibre connection, but no friends willing to house a server for me.
[14:48] ehcah: I wrote a perl script to find duplicated files on a filesystem...
[14:49] Not sure if this makes a difference, but I should have written Mbps
[14:49] ehcah: for your setup, if you want to use 24 drives, I'd recommend either 3 RAIDz2 VDEVs of 8 drives each (for performance/safety) or 2 RAIDz2 VDEVs with 12 drives each
[14:49] ehcah: how many drives are you getting initially?
[14:50] RoyK: I can't write a script, but I thought de-dup looks for those instances?
[14:50] ehcah: don't use zfs dedup as of now
[14:50] 12 total, to begin with.
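A minimal sketch of the 24-drive layout recommended above, built as three 8-drive RAIDz2 vdevs in a single pool. The pool name and the Solaris-style device names are assumptions, and the zpool tool would come from an OpenIndiana/Nexenta install rather than from Ubuntu 10.04:

    # Three RAIDz2 vdevs of eight drives each; every vdev tolerates two drive failures
    zpool create tank \
      raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
      raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 \
      raidz2 c0t16d0 c0t17d0 c0t18d0 c0t19d0 c0t20d0 c0t21d0 c0t22d0 c0t23d0
    # Confirm the vdev layout and health
    zpool status tank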
[14:50] ehcah: also, don't use zfs dedup now
[14:50] 12 drives can live happily in a raidz2
[14:50] Although, with NewEgg.ca's sale price, I could easily add more. Or, share those with a second unit for backup...
[14:51] OK.
[14:51] I've been testing dedup quite extensively, and it sucks hard
[14:51] zfs dedup, that is
[14:51] 12 x 2TB in RAIDz2 is about 18TB usable?
[14:51] good to know.
[14:51] (12-2)x2
[14:51] ok, or 1.8 I think?
[14:52] so 20TB or ~18TiB
[14:52] 1TB ~ .9TiB
[14:52] TiB is what's reported by the OS
[14:52] http://en.wikipedia.org/wiki/Tebibyte
[14:52] I also own a Sans Digital 8-bay external enclosure with a port multiplier. Would this be good for backup? I was going to sell it and try to buy a second Norco case.
[14:53] That's why I always assume about 1.8 on a 2TB drive.
[14:54] how do I set the umask for a user that runs a daemon?
[14:54] Based on the 450 BDs I'm ripping at an average of 25GB each, I only need just under 12TB. And even with a machine that will have 4 BD readers in it, it will take me quite a while to get there.
[14:56] RoyK: I know this is an Ubuntu forum... But do you prefer OpenIndiana to Nexenta? Or was that recommendation simply for me, as OI comes with a full desktop environment?
[14:56] My Nexenta VM has napp-it as the GUI.
[15:02] nexenta isn§t a desktop system
[15:02] isn't
[15:02] OI installs as a desktop system, but isn't really meant to be one
[15:03] Ah. OK, that makes sense.
[15:03] From what I've read, they both run the same version of ZFS and have all of the same options.
[15:03] OI has a newer zpool version
[15:04] but most of the good stuff is in nexenta as well
[15:04] freebsd zfs support lacks stuff like removing an slog, which is rather bad
[15:04] meaning - if you add an slog (zil on ssd) and you lose that, the whole pool is lost
[15:04] but then, that's not really relevant to your use
[15:05] I don't fully understand where FreeNAS is going, but I think at this point, I'd hold off until version 8 comes out?
[15:05] I think I had decided on Nexenta for ZFS, but had not entirely ruled out OI.
[15:05] ehcah: I'd use OI if I were to choose
[15:05] That's really good to know.
[15:06] :)
[15:06] ehcah: we're setting up OI on these 110TB units
[15:06] 77 2TB drives in 11 7-drive RAIDz2s
[15:06] whee!
[15:06] As I mentioned, I have it running in a VM. Do I need to add anything to the base install?
[15:06] COOL.
[15:07] ehcah: try to add a bunch of virtual HDs to that VM and try to remove them, rearrange them etc
[15:07] try to fuck it up badly
[15:07] If I could figure out my cards, I'd actually mount 2 of those Norco chassis and start with a single 8-disk RAIDz2 in each.
[15:08] RoyK: I plan to when all my gear arrives.
[15:08] why not ...
[15:08] then just use zfs send/receive between them
[15:08] * RoyK diverts to #openindiana
[15:08] RoyK: Unfortunately, my drives are in that 8-bay enclosure and unRAID for now.
[15:08] k.
=== rbniknej is now known as jenkinbr
=== ivoks is now known as ivoks-afk
=== hackeron_ is now known as hackeron
[16:33] hi... I want to ensure my iptables rules are persisted through reboot (I have fail2ban adding permanent bans)... I've found an article suggesting I add pre-up and post-down commands to my /etc/network/interfaces file... reading the man page for that file has made me nervous... could anyone spare me a moment to check what I'm planning to do? the server is remote and I don't want to lose connectivity... http://dpaste.
[16:34] I don't know if I'm adding those pre-up and post-down commands correctly...
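A minimal sketch of the zfs send/receive approach mentioned above for keeping a second box in sync; the pool, dataset, snapshot, and host names are all assumptions:

    # Snapshot the media dataset on the primary box
    zfs snapshot tank/media@weekly-46
    # Full initial copy to the backup machine over ssh
    zfs send tank/media@weekly-46 | ssh backupbox zfs receive backup/media
    # Subsequent runs only send the difference between two snapshots
    zfs snapshot tank/media@weekly-47
    zfs send -i tank/media@weekly-46 tank/media@weekly-47 | ssh backupbox zfs receive backup/media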
[16:35] personally I have the script in /etc/network/if-pre-up.d which does an iptables-restore from a pre-saved list of rules.
[16:36] mrmist: but my rules are being added to pretty much every day by fail2ban
[16:36] fail2ban will sort itself out, you don't need to save those rules
[16:36] mrmist: really???? oh, that's the only reason I want to save the config
[16:36] you can do some other stuff around persisting them, though, I believe, but I've not really looked deeply into that
[16:37] mrmist: I should've thought fail2ban would be smart like that... thanks :)
=== ivoks-afk is now known as ivoks
[16:54] kinygos: personally I'd use denyhosts over fail2ban
[16:54] it's distributed and works well
[16:55] it doesn't cover stuff that doesn't use tcpwrapper's hosts.deny, but then, most services do
[17:02] RoyK: I've just seen your note... I'm afraid to use denyhosts... I only have remote access to the server and I'm not confident in my ability to configure it correctly the first time
[17:03] RoyK: if I lose access to it, I'll end up having to rebuild it, which will be a massive setback in terms of timescales for me
[17:07] kinygos: don't you have some sort of console access to the host?
=== _GoRDoN__ is now known as _GoRDoN_
[17:10] RoyK: I have a lights-out board... actually, thinking about it, I might have remote KVM
[17:12] RoyK: can denyhosts run alongside fail2ban, or is that a silly suggestion?
[17:17] if you only need ssh protection, denyhosts will be the best imho
[17:26] kinygos: using both for the same services will be jolly stupid
[17:31] my server froze up and I couldn't even connect to ssh. I terminated the instance using my web hosting company's control panel and started it back up; it booted back up fine and is working now.
[17:31] how so i investigate what happened to it?
[17:31] err, do
[17:34] RoyK: well... I'm quite naive... I couldn't believe how many people come knocking on my ssh door... since I put fail2ban on permanent ban, I'm still banning 3-4 new addresses a day...
[17:35] RoyK: so on my todo list is a serious look at what other measures I can take to secure my server
[17:39] kinygos: there are people knocking all over
[17:39] kinygos: but so long as your passwords are secure, they can knock all night
[17:40] RoyK: I use "knocking" in the kindest possible sense... I think it's criminal that they even try
[17:40] * RoyK welcomes kinygos to the Internet
[17:40] * kinygos rolls on the floor laughing
[17:41] lock the door with a safe key
[17:41] if you have a good password, they can probe on forever
[17:41] New bug: #674943 in autofs5 (main) "autofs5 attempts bind mounts with nfs4, but can't perform them correctly" [Undecided,New] https://launchpad.net/bugs/674943
[17:42] more than 7 characters, mixed-case letters, numbers, and non-alphanumeric characters.... also root is not allowed
[17:44] http://stuff.group.is/ismypasswordsecure.php
[17:52] Hi,
[17:52] Is there a reason umount and mount are suid?
[17:53] lil_cain: so regular users can unmount and mount filesystems if they are allowed to in the fstab
[17:54] fstab can allow regular users to mount and umount filesystems?
[17:54] sure, if you put "user" in the fourth field
[17:55] huh. I was not aware of this.
[17:55] Cool, thanks.
[17:55] lil_cain: np
[18:15] hello, I just managed to install ubuntu server on a very old machine ;) would like to use it as a NAS
[18:15] what's the best filesystem for the "storage" drive, so that CPU usage is as low as possible?
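A minimal sketch of the /etc/network/if-pre-up.d approach described above for restoring a saved iptables ruleset before the interface comes up; the file paths are conventional choices, not mandated ones:

    # 1) Save the current ruleset once (and again whenever it changes)
    sudo sh -c 'iptables-save > /etc/iptables.rules'

    # 2) Create /etc/network/if-pre-up.d/iptables with the contents below
    #    and mark it executable (sudo chmod +x /etc/network/if-pre-up.d/iptables)
    #!/bin/sh
    iptables-restore < /etc/iptables.rules
    exit 0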
[18:20] ext4 will do well
[19:31] is there any way to see a complete history of this IRC chat? I'm trying to find a chroot link I posted months ago but can't seem to find the right site.
[19:33] thesheff17: you posted 4 links, which one are you after?
[19:34] Nafallo: I'm not 100% sure... it had to do with a chroot env that I was helping someone with.
[19:34] hmm. all of them about chrooting ssh, right?
[19:34] Nafallo: yea
[19:36] http://www.cyberciti.biz/tips/howto-linux-unix-rssh-chroot-jail-setup.html http://www.howtoforge.com/chroot_ssh_sftp_debian_etch http://www.marthijnvandenheuvel.com/2010/03/10/how-to-create-a-chroot-ssh-user-in-ubuntu/
[19:36] thesheff17: archive is here, fyi: http://irclogs.ubuntu.com/
[19:37] excellent, thanks guys.
[19:43] !logs
[19:43] Official channel logs can be found at http://irclogs.ubuntu.com/ - For LoCo channels, http://logs.ubuntu-eu.org/freenode/
[21:39] how do I set up a mail server? I've set up apache countless times, but I've got a VPS and I want to switch over to that instead of this shared host for my website, but I don't want to lose my emails
[21:40] does anyone here use NFSv4 ACL inheritance on their machines? Does it make it so that when you copy a file to a folder, the file inherits the folder's permissions?
=== SirFiChi is now known as ihCiFriS
=== Patrickdk_ is now known as PatrickDK
=== JanC_ is now known as JanC
[23:56] New bug: #675052 in openldap (main) "Upgrade from hardy (8.04) to lucid (10.04) sets bad permissions on olcDatabase={-1}frontend,cn=config" [Undecided,New] https://launchpad.net/bugs/675052
[23:58] SpamapS: ping
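Following up the ext4 suggestion above for the low-CPU NAS "storage" drive: a minimal sketch of creating and mounting it with low-overhead options, with the device name and mount point as assumptions:

    # Create the filesystem and give it a label
    sudo mkfs.ext4 -L storage /dev/sdb1
    sudo mkdir -p /srv/storage
    # /etc/fstab entry; noatime skips access-time writes on every read
    # LABEL=storage  /srv/storage  ext4  defaults,noatime  0  2
    sudo mount /srv/storage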