[00:30] no [00:31] even though the disks in a mirror contain the same user data, they are not the same disk [00:31] the metadata will be wrong and it will rebuild anyway [00:31] Nice article about choosing a hard drive http://www.directron.com/howtochoosha.html [00:32] qman__: oh I see, thanks! [00:33] So what is more reliable: SSD or HDD? [00:35] neither is particularly reliable [00:35] if you care about your data, implement a good backup system [00:36] and, the old adage is true: raid is not backup [00:37] raid is there to possibly prevent you from needing your backup, but it doesn't replace it [00:37] Question: what is the best strategy to schedule replacement of all old hard drives for multiple servers with RAID1? For the moment, I think we need two planned maintenance windows: first – remove one of the two existing drives and add a new drive so md will recover data onto the new drive; second – remove the second old drive and add a new one instead, so md will recover data onto it from the previous new drive. So the system will have two completely new [00:37] drives. [00:38] that's a good strategy [00:38] qman__: OK [00:38] back up early, back up often, back up off site. [00:38] it has the benefit of that first-removed drive being a backup copy [00:38] if your hardware supports it, there is a third option [00:39] add your disks to the existing mirror without removing the old ones [00:39] then remove the old ones after the resync [00:39] But I'm wondering, when md starts the recovery process, may I use the server? Or is it better to unplug it from the network and let md finish recovering? [00:40] qman__: you mean 3 drive slots on a single system?
[00:40] the recovery process happens in the background and does not require downtime; that said, if your system is on the raggedy edge in terms of performance, you may slow down beyond the point of workability [00:40] actually I mean 4, but yes [00:41] qman__: for these servers we have only two slots and they're not removable [00:41] in a raid 1 you can add as many mirrors as you want [00:41] qman__: really? [00:41] so adding 3 or 4 mirrors, then removing the old ones, is a valid strategy too [00:41] I didn't know that [00:46] qman__: how does the system know what to recover if at the beginning of the recovery process MyFile.doc was a 15-page document, and then I turned on my server in degraded raid mode so recovery could proceed in the background, and then when the system had recovered this MyFile.doc 50% (theoretically) I saved a new version of that file, which contains 30 pages and different content in some paragraphs of the first 3 pages. How does the system know [00:46] what is the valid version of MyFile.doc at that moment? [01:10] vlad_starkov, it updates it on the fly [01:11] how exactly it does this is a bit voodoo magic to me, but it automatically handles writes while resyncing [01:11] qman__: OK np [01:12] Want to read a good book about raid and related topics [01:16] Some stats on cloning a 1TB SATA2 drive: from 0 to the first ~400GB the speed was ~86Mbyte/sec. Now it's at 485GB and the speed is ~77Mbyte/sec [01:17] qman, it's very simple [01:17] when you write, it writes [01:17] no voodoo needed [01:17] when you read, it will limit reads to the good drive, till it's done [01:17] that works on a raid 1, but how it does it on a raid 5 or 6 is still a bit confusing [01:18] ya, it tracks it with a bitmap [01:18] that was going to be my next point [01:19] vlad_starkov, you know hard drives start fast, then drop to like 1/3 of their speed at the end? [01:19] if it starts at 120mb/sec, you can expect 50mb/sec at the end of the disk [01:19] patdk-lap: I didn't [01:19] heh?
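The one-drive-at-a-time replacement discussed above can be sketched with mdadm. This is only a sketch, not runnable without real hardware: /dev/md0, /dev/sda1, /dev/sdb1 and /dev/sdc1 are placeholder names, so check your actual array and member devices in /proc/mdstat first.

```
# replace one mirror member: fail and remove the outgoing drive
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# swap the physical drive, copy the partition layout from the survivor,
# then add the new member; the resync runs in the background
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat        # watch resync progress
# the third option with a spare slot: grow to a 3-way mirror first,
# and remove the old drives only after the resync completes
mdadm /dev/md0 --add /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=3
```

As noted in the conversation, the first removed drive stays a consistent point-in-time copy until you wipe it, which is a nice side benefit of this approach.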
[01:19] you have never seen a hard drive benchmark ever in your life? [01:20] I noticed that but never knew that it's normal behaviour [01:20] it's because the beginning of a hard drive is at the outside edge of the disk [01:20] which, by the nature of rotation, is capable of a much faster speed than the inner edge [01:20] http://img40.imageshack.us/img40/5864/hdtuneproahcibenchmarkv.png [01:20] 514GB done, speed is ~74MByte/sec [01:21] optical discs are set up the opposite way [01:21] with the start on the inside, working out [01:21] A century study [01:21] ever since they switched from constant sectors per track [01:21] What about SSD? [01:22] ssd's don't normally rotate [01:22] )) [01:22] they're just flash chips [01:22] should be a flatline [01:22] well, if it was ram it would be [01:23] flash has page/block issues for writing [01:23] for reading, it's just how many chips you have [01:23] So when a manufacturer specifies read/write timeouts, speeds and so on, is it an average value or what? [01:24] depends what value you're looking at [01:24] average latency is just that, avg [01:24] they would normally publish both [01:24] speed will be max [01:25] if you notice, you get basically max speed for half the disk [01:25] so if you shortstroked the disk 50% :) [01:25] it's actually a common tactic [01:25] well, was, before SSD [01:25] the hdd technology now seems so old... [01:25] if you needed faster disks but didn't need the space, just make your partitions small at the start of the disk [01:26] stay within the high range of speed and reduce latency because you're only seeking over part of the disk [01:26] What max disk sizes does Ubuntu support? [01:26] qman__: I see. Good trick [01:27] now you just buy SSD because it's almost an order of magnitude faster and seek times are nonexistent [01:28] hmm, for 64bit? something like pb's per disk [01:28] for ext4? are we still at the 16tb cap? for the ext4 tools?
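The 120 → 50 MB/s falloff quoted above follows from geometry: at constant RPM the linear velocity, and hence the sequential throughput, scales with track radius, and on a typical 3.5-inch platter the innermost data track sits at very roughly 40% of the outer radius. Both numbers below are illustrative assumptions, not from any datasheet:

```shell
# sequential speed scales with track radius at constant RPM;
# assume an outer-track speed of 120 MB/s and an inner/outer radius ratio of 0.42
awk 'BEGIN { outer = 120; ratio = 0.42; printf "%.0f MB/s\n", outer * ratio }'
```

which lands right at the ~50 MB/s end-of-disk figure mentioned in the chat.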
[01:28] yeah, the software is way ahead of the hardware on that one [01:28] nah [01:28] ext4 is something like 8eb [01:28] no [01:28] ext3 is 16TB [01:28] ext4 could do it, but the tools couldn't [01:29] so it wasn't possible to format it [01:29] at least when 12.04 came out [01:29] so RAID 10 of 16TB is ok for Ubuntu 12.04? [01:29] yes, no problem [01:29] and even if you can't go bigger with ext4, there's still xfs [01:29] 16Tb with ext4 or ext3? [01:29] yes [01:29] both [01:29] nice [01:30] just the ext lib needs to be fixed up in order to format bigger, that was the issue [01:30] I haven't looked back into that issue for a while [01:30] mainly cause I've gone zfs [01:30] should be straightforward, I'd be really surprised if it isn't fixed yet [01:30] don't know if it's reliable to build a raid 10 of 4+4 drives. There's a chance that more than one will fail [01:30] I have a few 30+tb ntfs volumes [01:31] ouch [01:31] well, generally, going >2tb with raid1 or raid5 is a bad idea [01:31] qman, it works well :) just backups [01:31] ntfs is just plain awful in terms of performance [01:32] really suggest only using raid6 and 3-way mirrors for that size [01:32] I can't imagine how bad the fragmentation would be on 30TB [01:32] qman__, I'm getting a nice 1400MB/sec [01:32] qman__, fragmentation? on files >100gigs? [01:32] I want to read about it to dig deeper into file systems [01:33] I guess it wouldn't be TOO bad if you _only_ have giant files [01:33] I did say, backups [01:33] but windows just does absolutely ridiculous things on write [01:33] a brand new system install is something like 15% fragmented [01:33] literally copy and paste from CD [01:34] well, copy/paste is a good method really [01:34] but in order to get that fragmented something else is going on [01:34] yeah, the ntfs driver [01:35] it's better than the alternatives :) [01:35] fat12? [01:35] Do you guys know who is the best on the SSD market today?
[01:35] !best [01:35] Usually, there is no single "best" application to perform a given task. It's up to you to choose, depending on your preferences, features you require, and other factors. Do NOT take polls in the channel. If you insist on getting people's opinions, ask BestBot in #ubuntu-bots. [01:36] !best on the ssd market [01:36] vlad_starkov: I am only a bot, please don't think I'm intelligent :) [01:36] ) [01:37] 586GB done, speed is 72Mbyte/sec [01:37] Don't know if that's a normal result for a 1TB drive? [01:37] 100mb/sec max speed [01:37] so 40mb/sec slow [01:38] ok [01:38] you're just over halfway [01:38] so you will see the speed drop off a good deal [01:38] but is that for the whole array? or per disk? [01:39] so while cloning there are 0 errors. Does it mean that the disk does not contain BADs, or could BADs be detected on a write operation? [01:39] I clone a single 1Tb SATA2 drive to another exactly the same [01:40] well, that only means it didn't get a read error from the disk [01:40] using GNU ddrescue [01:40] it doesn't mean the data it got from the disk was correct :) [01:40] so errors are classified as read and write? [01:40] oh [01:40] disks are only supposed to return bad data 1 out of 10^14 times [01:41] almost never [01:41] nah, that would be enterprise disks, 1 out of 10^15 times [01:41] I have had several bad data reads from disks [01:41] with the size of modern disks, silent corruption is a real issue [01:42] ok you convinced me: backups, backups, backups... [01:42] if your data is important you need to verify it with good checksums [01:43] qman__: I always wanted to ask, how do checksums work?
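The 1-in-10^14 figure above is worth turning into an expected value: reading a full 1 TB (decimal) drive touches 8×10^12 bits, so at the consumer-class unrecoverable-read-error rate you expect on the order of 0.08 errors per full read — roughly a 1-in-12 chance of hitting one, which is why silent corruption at modern disk sizes is a real concern. A quick back-of-the-envelope check:

```shell
# expected unrecoverable read errors over one full 1 TB (decimal) read,
# assuming the consumer-class rate of 1 bad bit per 10^14 bits read
awk 'BEGIN { bits = 1e12 * 8; rate = 1e-14; printf "%.2f\n", bits * rate }'
```

Enterprise drives at 10^15 cut that by a factor of ten, which is part of what the price difference buys.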
[01:43] I wanted to go with zfs for the data integrity but it kept locking up my server [01:44] a checksum, in its most basic form, adds up all the bits of a given set of data and returns the total [01:45] in simple, small scale stuff like ECC, it just records whether a byte has an even or odd number of ones [01:45] with more complex stuff like md5 or sha1, it performs a hashing algorithm on the data [01:45] if any bits get flipped, the checksums won't add up the same [01:46] that's interesting. A few days ago I learned how SSL/TLS actually works [01:47] in the case of large files and algorithms like md5 or sha1, collisions can and do happen, so you have to take more than just that string into consideration [01:48] qman__: many times when I downloaded files from the Internet there were additional .md5 or .sha1 files with the same basename. How should I use these files to validate a checksum on Ubuntu? [01:48] while it's reasonably possible that any two given files could have the same checksum (and even likely in some cases), it's astronomically unlikely that two files of identical size with different data will match [01:48] especially in the case of a handful of bit flips [01:48] md5sum filename [01:49] and then compare it with the content of the .md5 file? in the case of sha1 what will the command be? [01:51] sha1sum [01:52] you can also verify a burned disc that way, md5sum /dev/cdrom [01:52] or whatever your disc drive is [01:53] a properly burned iso will match exactly on-disc [01:54] nice [01:54] how do you know all this? how long have you used Linux? [01:54] I first started taking it seriously around 2004 [01:57] my formal education is windows focused, but linux is my preferred operating system [01:57] I also have some experience with solaris and freebsd [01:58] qman__: I just started learning it seriously three months ago. Reading a book and it's all new. Runlevels were pretty complicated but I dealt with them. [01:59] The thing I still do not understand is the mail system.
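The md5sum/sha1sum verification workflow described above, end to end; the file name is made up for the example, and with a downloaded .md5 file you would skip straight to the `-c` step:

```shell
# make a sample file and checksum files next to it (what the publisher does)
printf 'some downloaded data\n' > sample.iso
md5sum sample.iso > sample.iso.md5
sha1sum sample.iso > sample.iso.sha1
# verify after downloading; prints "sample.iso: OK" and exits 0 when intact,
# and fails with a nonzero exit status if any bit has changed
md5sum -c sample.iso.md5
sha1sum -c sample.iso.sha1
```

The `-c` mode reads the checksum file and compares for you, so there is no need to eyeball two hex strings.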
[01:59] runlevels don't really exist in ubuntu, upstart works differently [01:59] Yep, I read it in the book) [01:59] it still deals with them in order to support sysv-style software [02:01] it's a bit harder when you learn something that has a history. Linux is such a thing. I think it should be much faster to learn new things when you have some experience. [02:02] very much so [02:02] qman__: you're right [02:02] once you reach a certain level of knowledge, you can work toward pretty much any problem you encounter [02:02] yepp [02:02] you need a basic level of experience and knowledge of tools to work through problems [02:03] coupled with good google skills, you can solve nearly anything [02:05] I think you have to know a set of technologies. There's just a set of them that is a part of almost every product today. After you deal with them then you just have to keep your knowledge up to date. [02:08] qman__: about SMART, is it really objective and helpful? [02:09] yes [02:09] it's not perfect, and you shouldn't expect disks to always warn you before they fail [02:09] but around 9 out of 10 times, disks will show errors in the SMART log before they go [02:09] giving you a nice heads-up [02:10] how much time do I have after the first disk error notification? [02:10] could be a month, could be an hour [02:11] the good news is, any errors in the SMART log normally qualify for RMA status [02:11] except for temperature ones [02:11] what is RMA?
[02:11] return to the manufacturer for replacement [02:12] 730GB done; 64MByte/sec [02:12] oh nice [02:13] I heard WD gives a 5-year warranty [02:13] only on some drives [02:13] 2 years at least [02:13] only on some drives [02:13] both manufacturers offer anywhere from 1 to 5 year warranties depending on the class of drive you purchase [02:13] As far as I know many manufacturers give at least 2 years [02:14] server drives [02:14] there are only two hard drive manufacturers, seagate and western digital [02:14] they own all the other brands [02:14] seagate bought samsung's hard drive division, and western digital bought hitachi [02:16] and I'm pretty sure fujitsu stepped out of hard drive manufacturing [02:17] which one do you prefer? [02:17] WD or Seagate? [02:17] My faulty drive is Hitachi [02:19] I don't really have much of a preference, I've lost the most seagates but I've also bought the most seagates [02:19] my last purchase was WD reds, trying those out [02:21] What's the difference between WD series?
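The SMART checks discussed a moment ago are done with smartmontools on Ubuntu (sudo apt-get install smartmontools); /dev/sda is a placeholder device, and this is a sketch rather than something runnable without a real disk:

```
sudo smartctl -H /dev/sda          # overall health self-assessment
sudo smartctl -a /dev/sda          # all attributes plus the error and self-test logs
sudo smartctl -t short /dev/sda    # kick off a short self-test (a couple of minutes)
```

Attributes like Reallocated_Sector_Ct or Current_Pending_Sector climbing above zero are exactly the kind of early warning (and RMA evidence) mentioned above.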
[02:21] the warranty, mostly [02:21] that, and blues and greens will probably not work with a hardware raid controller [02:22] because they're designed specifically not to [02:22] I also recommend against any and all green drives [02:22] they're by and large crap [02:23] greens are intended for low-use patterns, such as external hard drives or a web-browsing desktop [02:23] blues are intended for typical desktops, reds for NASes, blacks for high performance desktops, and then they have raid class drives for servers [02:25] the raid class drives are a lot more expensive so I'm trying the reds, since that's pretty similar to the role my server plays [02:26] my personal experience has been about a 20% failure rate within the warranty period across all drives and manufacturers that I've purchased [02:31] Ok that's valuable for me [02:31] qman__: thanks for the advice [04:17] qman__: Success! I recovered my RAID1 === akashj87_ is now known as akashj87 [09:08] hello [09:09] i have a dedicated server with a company, i wonder how i can manage the backups [09:09] i'd like to use something like dropbox or google drive [09:09] is that possible? [09:53] hXm: Hosting providers often offer some kind of backup service [09:56] yes [09:56] but in my case they only do it with a professional plan [09:58] Backup isn't a simple service [09:58] i know [09:59] but dropbox could be a temporary solution [10:01] http://lassebunk.dk/2011/03/16/linux-dropbox-remote-backup/ [10:02] AFAIK, there's no official Linux client for Google Drive [10:02] gdrive gives 7gb, i thought it gave more [10:02] i prefer dropbox since i have the free 20gb [10:03] i installed dropbox with something like that, now i just want to know how to sync different folders [10:03] didn't google recently combine mail and gdrive amounts?
[10:04] regardless of whether they did or didn't, your "backup solution" still sucks :P [10:04] haha i know :) [10:04] i just dont want to lose something before i decide whether to switch to pro or not [10:05] You are using 6 MB of 5 GB (0 MB in the Trash). [10:05] still 5gb even [10:05] google drive sucks more than my backup [10:06] why not just use rsync to home? [10:06] my home to where, an external hd? [10:06] I don't know about your home [10:06] it's a remote dedicated server [10:06] that's nice [10:07] why not rsync your settings from your remote dedicated server to home? [10:07] that means incremental backups in cron, isn't it? [10:08] subversion services are so cool, they don't lose a line of typed code [10:10] yes, you can run rsync from cron, I don't understand your question [10:10] if you don't know what rsync is, you really should go and learn [10:10] what i mean is, rsync requires a cron, so it's not real-time backup [10:11] did i explain it right now? [10:15] dont be rude, i have neuronal deficit because im spanish [10:43] hXm: What are you backing up? [10:46] jacobw: sites, sql databases and a few tcl scripts [10:51] For the code, you should consider developing with git on your workstation and pushing to your server, or pushing to a repository on your server and cloning the repository to your working directories [10:51] Are you dumping the MySQL databases to SQL files with cron? [10:53] i use transact-sql [10:53] yep, with cron [10:54] about git, i tried to configure it twice, then i gave up (in this case i also still live with the dropbox crap-solution) [10:54] i know git is easy to configure and use but for some reason it won't work for me [10:54] i didn't pay much attention so probably that's the fault [10:54] i wish the day had 32 hours instead [10:59] how can I communicate with a memcache server ? [11:15] hXm: What problem did you have with Git?
It's a bit counter-intuitive the first time but there are a lot of Git users here and in #git [11:17] hXm: Check out gitref.org and https://www.youtube.com/watch?v=ZDR433b0HJY [11:37] jacobw: im watching it right now [11:37] do you know if google apps support git hosting? [11:39] hXm: A git repository is just a directory, so you can put it anywhere that supports file storage, but I don't think any Google Apps integrate with Git right now [12:48] hey guys can you help me find a way to get apt-get working again [12:48] keep getting issues with E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_quantal-security_main_i18n_Translation-en [12:53] bitbyte: try apt-get --list-cleanup update [12:53] ill give that a shot now [12:55] bitbyte: Otherwise you can try manually deleting the problematic file. Those files in that directory (/var/lib/apt/lists) will be recreated upon apt-get update [12:56] andol: i get bitbyte@bitbyte-core:~$ sudo apt-get -list -cleanup update [12:56] E: Command line option ‘l’ [from -list] is not known. [12:56] bitbyte: That wasn't the option I gave you :P [12:56] apt-get --list-cleanup update [12:57] haha sorry my eyesight's bad, [12:57] but just removed the file and it went through [12:58] removing the file seemed to work [12:59] mmmmmm any reason why the package list would fall over ? [13:27] bitbyte: Well, could have been some hiccup on the repo server, and that you managed to try to download the old version of that file at the very wrong moment, or perhaps cron was making an apt-get update in the background while you rebooted the machine, or perhaps your local disk going bad, or something else. [13:27] mmmm [13:27] well thanks for getting back to me [13:27] I'm following this guide [13:27] http://www.thefanclub.co.za/how-to/how-setup-ubuntu-business-box-server-ubb-part-1 [13:28] to setup my server for a test [13:28] bitbyte: Unless that same thing happens again, I wouldn't worry about it.
[13:30] andol: thanks for taking a look :) === wizonesolutions_ is now known as wizonesolutions [13:50] hi guys i have an ubuntu server with a 250GB HDD set up to use the entire disk with LVM and i have added a new 2 terabyte HDD, how do i configure this and add it to the current storage somehow..? [13:52] add it? [13:52] you mean you want to do like a raid0? where if one disk fails it all fails? [13:52] pvcreate, vgextend, lvextend [13:53] patdk-lap: just wanted to increase the existing HDD storage..how [13:55] patdk-lap: this is my output after i connected the additional HDD --> http://pastebin.com/mxHgdK55 [13:57] guys any help [14:13] hey, anyone with experience in bonding nics? I'm having some issues where im guessing one of the interfaces isn't coming up in time and im getting a message saying something along the lines of 'waiting up to 60 more seconds for network'. after the message it boots into the server fine. no mac address associated w/ bond0. im using mode0 bond and 3 nics teamed up. ubuntu 12.04.2LTS. more or less a clean install. [14:15] bugzc: have been (and still are) using bonding, yes [14:15] bugzc: what is mode0? [14:17] balance-rr it seems [14:17] * RoyK likes the text-based names better [14:18] bugzc: I haven't used balance-rr, though. Currently using active-backup, and have used LACP earlier [14:20] how is it connected? [14:20] hopefully not to a switch [14:22] all ethernet cards are connected to a cisco unmanaged switch [14:22] hmm, balance-rr doesn't work that way [14:22] no wonder [14:22] bugzc: get a managed switch and use LACP [14:23] What I'm trying to achieve is network aggregation so bandwidth is provided/shared by all nics [14:23] yes, that works rather well with LACP [14:23] Can't replace the switch here. Gotta work with what ive got, unfortunately [14:23] heh?
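Spelling out the pvcreate/vgextend/lvextend answer above for the 250GB + 2TB case. This is a sketch with placeholder names, not runnable without the actual disk: the new disk is assumed to be /dev/sdb, the volume group "myvg", and the root LV on ext4 — check yours with sudo vgs and sudo lvs first.

```
sudo pvcreate /dev/sdb                      # initialise the new disk as a physical volume
sudo vgextend myvg /dev/sdb                 # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/myvg/root   # grow the logical volume into the new space
sudo resize2fs /dev/myvg/root               # grow the ext filesystem to match, online
```

Note this just pools the space (like the raid0 caveat raised in the chat): losing either disk loses the volume, so it is no substitute for backups.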
just use tlb mode [14:23] or alb, but alb causes issues with lots of things [14:23] if you need reliable incoming balancing, lacp is best [14:23] outgoing only, tlb works good [14:24] it's basically a caching proxy [14:24] and that means to me? nothing? [14:24] well, it should be mostly outgoing, then [14:24] well it needs to provide a lot of bandwidth, thats what I mean. But not much for incoming [14:24] try tlb [14:25] but [14:25] if you need >1Gbps for a caching proxy, why on earth don't you have a managed switch? [14:25] because there is no budget and im improvising :) [14:26] http://www.ebay.com/itm/CISCO-WS-C2950T-24-CATALYST-SWITCH-10-100B-TX-1000Base-T-GIGABIT-UPLINKS-MANAGED-/290839605531?pt=US_Network_Switches&hash=item43b764311b [14:26] well, not gigabit, though [14:26] but you can get them quite cheap [14:27] and managed switches save you a *lot* of headache [14:28] aye but the company wont let me expense anything any time soon, alas. So my priority right now is to improvise and get bonding to work. I have it working quite nicely on my windows server with Intel nics. [14:28] what sort of bonding do you use on windoze_ [14:28] ? [14:29] well, windows only supports failback and lacp I think [14:29] the intel nic driver supports aggregation+fallback [14:29] yes but Intel has a proprietary driver for that [14:29] so does hp [14:29] so im getting a 2gbps link shared for samba etc [14:29] the hp one does lots of options [14:29] heh? [14:30] it's doing lacp without confirming it with the switch [14:30] so it's basically doing tlb [14:30] nice [14:30] seems so [14:30] Alright let me set it up as tlb and see what I get here [14:31] bugzc: but really, tell your boss you need a managed switch [14:31] I have many times lol [14:31] a managed 10gbit 24port switch [14:31] do you have an internet link > 1Gbps and no managed switches? [14:31] patdk-lap: we have a few ;) [14:32] patdk-lap: 10Gbps internet access from work... 
[14:32] royk, I'm close to getting one (for home) [14:32] * RoyK wonders who might need 10Gbps at home [14:32] I could use 100gbit, but that is kind of pricy [14:33] patdk-lap: infiniband? [14:33] ya, I have infiniband [14:33] ok [14:33] but attempting to drop it [14:33] trying to cut down? ;) [14:33] just not worth the pain of keeping it working, and playing with its compatibility issues [14:33] Aye :/ [14:49] I wonder why all the samples/docs use mode balance-rr [14:53] I'm trying to follow this page http://manpages.ubuntu.com/manpages/raring/man4/sge.4freebsd.html but I've never compiled a driver. [14:55] Looks like switching to tlb did not resolve the problem [14:55] well, then, LACP! [14:56] bugzc: if you have 10G connectivity to the net, you're bound to have a managed switch [14:56] it's 1gbps [14:56] then why do you need >1Gbps to the proxy? [14:57] the proxy is running a raid stripe with a large cache of commonly accessed objects [14:57] so clients download them through the transparent proxy instead of WAN [14:57] yes, but the bottleneck isn't the network connection to the server if the internet connection is 1Gbps [14:57] no no the WAN is not 1gbps lol [14:57] was this a forward proxy or reverse proxy? [14:58] bugzc: do you monitor the network use on this proxy? [14:58] it's squid3 with caching enabled and http traffic redirected through it [14:59] usually networking isn't a bottleneck for squid - often memory or I/O [15:00] Aye. but I want at least a 2Gbps link for the large files. This serves a dual purpose as I am setting up something similar for a different use case elsewhere.
Here it's not as critical [15:02] just get a managed switch, then [15:02] LACP is well tested [15:02] well proven [15:02] it works [15:02] that's not really a solution as it's not an option for me [15:03] I would gladly get one if the guys higher up gave me the budget to do so [15:03] well, tell them there's no other option [15:03] you have indeed tried [15:04] Well, how come it works just fine with the intel software? I have also seen it done on centos with a similar configuration [15:05] seen exactly what done? [15:05] I did stumble onto this bug specific to 12.04: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/839595 [15:05] I somewhat wonder why someone couldn't afford a managed switch to make a proper setup when you need >1Gbps [15:05] Launchpad bug 839595 in upstart "failsafe.conf's 30 second time out is too low" [High,Fix released] [15:06] RoyK: Over here it's more of a want than a need. It will save me time. That's why it's not high priority for the higher ups :) [15:06] what sort of company is this? [15:07] it's a computer service place, nothing fancy [15:09] well, it's rather bad if they want you to create a good solution and can't even get a managed switch for something like $500 [15:10] I could use some help compiling a kernel [15:10] !ask | KennettAZ [15:10] KennettAZ: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience [15:10] RoyK: agreed, but again, not in my control heh [15:10] heh [15:11] bugzc: but what makes you think 2x1Gbps will make things go faster? Usually bandwidth to a squid proxy isn't the bottleneck [15:12] I'm new to linux and I need help with this. http://manpages.ubuntu.com/manpages/raring/man4/sge.4freebsd.html [15:12] So right now after waiting for the 60 sec thing it boots up and im getting no mac addr but im getting an IP i can ping.
if i bring down eth0 (not part of the team) and try to ping the router it fails. [15:13] RoyK: Well imagine 15 clients needing to download windows updates at the same time. [15:13] bugzc: 1Gbps is still rather a lot [15:13] time is money :) [15:13] bugzc: if time is money, get a proper switch [15:14] bugzc: as in *really* - you'll spend hours/days to work out something that may work, or may not, and the time you spent (time is money) could be used on a proper switch instead [15:15] we considered replacing VMware with KVM, because KVM is so much cheaper, but decided managing KVM was much more time consuming than VMware, and we still run VMware [15:15] same thing [15:17] And there is no professional support for KVM in case of need. [15:18] from redhat there is [15:18] but then, kvm from redhat is about as expensive as vmware [15:18] and there are consultants around that can help [15:19] but still, paying for good software or hardware pays off in the long run [15:19] Well the problem is that this is a pet project of mine. it serves a dual purpose since there is something similar I want to set up elsewhere. So the company isn't spending much if anything on my time per se [15:19] i understand what you're saying but it's just not happening in my case alas [15:19] then get a managed switch second hand from ebay or something [15:19] RoyK: You're better off using vmware then. :) [15:20] RoyK: That would mean paying out of pocket [15:20] you get them rather cheap, and you learn a bit more :) [15:20] bugzc: if it's a private project, sure, but not sure what you mean [15:21] Even $100 would be a bit of a stretch for me right now [15:21] If it's a private project, which isn't used for business purposes, you could use vbox, too. [15:22] it's a pet project that the business approved as long as I don't spend a lot of money on it and it proves fruitful. The whole proxy thing is approved, not the teaming bit. The teaming was an idea for a bit more optimisation.
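For reference, here is roughly what the balance-tlb setup suggested earlier looks like on 12.04 in /etc/network/interfaces, after installing the ifenslave package; the address, netmask, gateway and NIC names below are placeholders:

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode balance-tlb
    bond-miimon 100
    bond-slaves eth0 eth1 eth2
```

With a managed switch you would set bond-mode 802.3ad instead and configure a matching LACP port-channel on the switch ports, since LACP has to be agreed at both ends of the link.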
[15:25] bugzc: monitor network traffic first [15:25] bugzc: I really doubt you need more than 1Gbps for a proxy [15:25] bugzc: install munin or something to see the actual traffic levels [15:29] RoyK: traffic isnt a huge issue right now, though there are peak times, but this box will also be running PXE deployment very soon at which point it will become more of a problem [15:32] RoyK: So instead of using one nic for PXE and one nic for squid, the idea was to aggregate and balance etc [15:35] bugzc: I see, but I still would recommend using something standardised like LACP. Saves you a headache or three [15:38] RoyK: I appreciate that and when I have the budget a managed switch will be my first investment. But how would you explain this: I did the same thing on a ESXI5.1 VM with 4 nics, 3 of them teamed in LACP, and it was doing the same thing. ESXI virtual switch supports LACP natively [15:55] Any other ideas? :) [15:59] any ideas of how to add noip2 into cron so it will update on its own [15:59] I've followed all the guides and it just seems to never update [15:59] bugzc: really, I don't understand why you don't get a proper switch. non-managed stuff isn't meant for production [16:01] RoyK: that's one issue. But now Im talking about a setup /with/ a managed switch - and it was the same exact outcome [16:05] bugzc: if you have a managed switch, enable LACP on those ports [16:06] LACP needs to be configured at both ends [16:06] there's no automatic magic at this level [16:06] or layer [16:09] RoyK: *facepalm* :) [16:09] :) [16:10] I thought though on the bond driver end it would be oblivious to all that with a static ip configuration [16:10] not with LACP but the other modes ive tried like tlb/rr [16:12] I was looking for a way to monitor directories. I was thinking about using inotify but I can't seem to figure it all out. Is inotify already on my system because I tried apt-get inotify and that didn't do anything. 
[16:15] zanzacar: inotify is there [16:15] zanzacar: what sort of monitoring are you trying to do? [16:15] I created a sftp chroot account and I wanted to be able to monitor it to check for uploaded files. Instead of checking through it every so often. [16:16] * RoyK wrote this little perl thing some time ago to monitor an ftp server's incoming dir [16:16] RoyK: did you write this? http://jmorano.moretrix.com/2012/03/watch-directory-uploaded-files/ [16:16] zanzacar: you can script that easily [16:16] no, not me [16:16] that is one of the things I saw, but since I am much more familiar with and like python more i figured this would work http://pyinotify.sourceforge.net/ [16:17] zanzacar: you setup a monitor for the directory and wait for a create, when that happens, setup another for the file created and monitor for file close, then do your stuff [16:18] seems reasonable. but all this is based on inotify, correct? [16:19] Hey. I set up a WebDAV virtual host for apache2. Now I can only delete / change files in that webdav folder if I change permissions to 777. Is there another way to avoid getting 403 forbidden in my WebDAV client? [16:20] zanzacar: you hook into inotify in the kernel, yes [16:20] germanstudent: not sure, perhaps someone can tell you at #httpd [16:20] RoyK: ok so inotify isn't something I can run by myself from the terminal since it is already part of the kernel. [16:20] Thanks RoyK [16:21] zanzacar: inotify is a kernel interface, so no, it's not a userspace thing [16:21] or kernel api or whatever [16:21] interesting, this is all new to me [16:21] http://en.wikipedia.org/wiki/Inotify [16:22] there are APIs for most programming languages [16:23] cool thanks I will have to read up on it. Thank you. [16:23] so pick your choice - python? perl? c? [16:23] I have always enjoyed python [16:23] then stick to it :) [16:23] * RoyK sticks to perl [16:23] out of old habit [16:24] gotcha, I don't know perl I only know python and it hasn't failed me yet.
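Besides the pyinotify route, the watch-for-create-then-close pattern described above can be done straight from a shell with inotifywait from the inotify-tools package; the directory path is a placeholder, and this is a long-running monitor rather than something that exits on its own. Waiting on close_write means the handler only fires once a file has been fully uploaded and closed:

```
# sudo apt-get install inotify-tools
inotifywait -m -e close_write --format '%w%f' /home/sftpuser/incoming |
while read -r path; do
    echo "finished upload: $path"
done
```

The echo is where you would hook in whatever processing the uploaded file needs.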
[16:24] then just use it :)
[16:25] * RoyK doesn't like wars, neither distro wars, programming language wars, nor otherwise
[16:25] editor wars, though...
[16:26] ya, for sure. thank you for your help. Now that I know what I am dealing with I can move forward. I appreciate it.
[16:26] not sure what an editor war is.
[16:26] :)
[16:26] well, talks about vim versus emacs are common
[16:26] * RoyK is highly addicted to vim
[16:28] gotcha gotcha gotcha, I definitely can say I enjoy vim over anything else.
[16:28] Nothing to have an editor war about really. The Emacs people are just wrong.
[16:29] ;-)
[16:29] ScottK++
[16:32] haha
[16:32] well, thanks, I really appreciate the help/nudge in the correct direction. Always appreciated.
[16:43] any of you guys know libmemcache6?
[16:49] or, to my question above, anyone good with aptitude?
[16:49] I've got a package which seems to be missing from there
[17:04] bitbyte: What's the exact error you're getting? Please pastebin it.
[17:04] http://pastebin.com/ksn1XZvq
[17:04] it's on pastebin there
[17:04] when trying to apt-get sogo
[17:38] looking
[17:39] bitbyte: What release are you on?
[17:39] Ubuntu 12.10 (GNU/Linux 3.5.0-28-generic x86_64)
[17:43] bitbyte: The sogo in 12.10 depends on libmemcached10, which does exist. Where are you trying to get the package from?
[17:44] I'm following http://www.thefanclub.co.za/how-to/how-setup-ubuntu-business-box-server-ubb-part-2
[17:45] and just doing the command: sudo apt-get install sogo sope4.9-gdl1-mysql memcached rpl
[17:54] bitbyte: Those instructions tell you to add deb http://inverse.ca/ubuntu precise precise to your sources.list.
[17:54] You aren't on precise, so sogo built for precise (12.04) isn't installable on 12.10 due to the libmemcached library change.
[17:54] I'd check and see if they have a package for quantal.
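The mismatch ScottK identifies comes down to the release codename in the third-party repository line. A sketch of the corrected entry, assuming (this would need checking first) that the inverse.ca repository actually publishes a build for quantal:

```
# /etc/apt/sources.list -- hypothetical fix: use your own release's
# codename (quantal = 12.10) instead of precise, if the repo provides it
deb http://inverse.ca/ubuntu quantal quantal
```

After editing, `sudo apt-get update` refreshes the package lists before retrying the install.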
[17:57] so in theory, if I change the package to precise it should install
[17:59] In theory, yes, but then you're using a memcached which is using the newer libmemcached, and so it starts to get complicated.
[17:59] Alternately, remove that repository, sudo apt-get update, and then sudo apt-get install sogo
[18:00] sogo wasn't in the official repositories for 12.04, but it is for 12.10.
[18:00] ok, well I'll try removing it and see how it goes
[18:00] thanks
[18:01] well, sogo is installing, but it does complain about not finding sope4.9-gdl1-mysql
[18:04] bitbyte: I'd recommend using LTS releases for servers :P
[18:04] they should already be there
[18:05] Or at least, if you're following a guide for 12.04 and are on 12.10, you should expect to have to make some adjustments.
[18:05] heh, some :)
[18:05] http://pastebin.com/VHjPzT6Z
[18:05] those are the packages on it right now
[18:12] thanks for the help anyway, guys, I'll tinker with it later
[18:30] I've got about 2 dozen packages on my server tagged as ip or id. They show up in aptitude search '~g', but when I try to purge them using aptitude purge '~g', nothing is purged.
[18:33] Those letter codes don't seem quite right to me, if they are what I guess them to be... are you sure / can you give more context?
[18:35] Oh, right
[18:35] How helpful: aptitude uses similar yet different codes to dpkg
[18:40] Well, if you want to review and run pending aptitude actions, I'd say using the interactive aptitude UI is the nicest way, though you could also go for 'aptitude install'
[19:01] Is there any way to get inotify working for remote FUSE/sshfs directories?
[20:36] can I reinstall my VPS via ssh, from ubuntu 12.10 x86_64 to ubuntu 12.10 x86?
[20:37] no
[20:37] why would you even want to
[20:47] Ben64: because the service I'm trying to run works better on 32-bit, and atm I can't access my VPS control panel
[21:08] Hey, my Ubuntu server keeps freezing every 30 minutes and I need to do a hard reboot. How do I diagnose random crashes?
[21:10] I thought it might be the native zfs package, but I've been stress testing my pool and I can't crash it. However, running rtorrent will consistently crash it within an hour.
[21:13] I've uploaded the kern.log here, not sure what other logs are important: http://temp.ancientpc.net/kern.log
[22:40] qexit
[22:41] wrong window.
[22:52] anyone feel like helping me with a system that won't boot?
[22:52] after the grub menu it just reboots, doesn't give me any error
[22:53] already tried repairing grub with the rescue disk
[22:54] I have the disk mounted on another system and can access the file system, so it's not a disk issue
[22:57] nobody home?
[23:02] krys, can you modify grub options?
[23:03] I mean, modify the linux line on the grub entry
[23:06] RoyK: thanks for the assistance earlier, and thanks to everyone else.
[23:18] lenios: probably
[23:18] what should I try?
[23:18] I used the boot-repair disk already, thinking that would fix it, but it didn't
[23:19] I just don't understand why I don't get any error messages or anything
[23:54] weird, I ran all the grub commands manually expecting to see an error message, same thing... no error, just a reboot
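What lenios is suggesting can be done from the grub menu itself: press `e` on the boot entry and edit the `linux` line so the kernel prints boot messages instead of hiding them. A sketch; the kernel version and UUID here are placeholders for whatever the entry already contains:

```
# original line (example):
#   linux /boot/vmlinuz-3.5.0-28-generic root=UUID=... ro quiet splash
# edited: drop 'quiet splash' so console output is shown, and disable
# the reboot-on-panic behaviour so any panic message stays on screen
linux /boot/vmlinuz-3.5.0-28-generic root=UUID=... ro debug panic=0
```

Edits made this way are one-shot and are not written to disk, so it is safe to experiment until an actual error message appears.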