[01:04] Hi, could you please tell me what is wrong with this config? http://paste.ubuntu.com/10918180/
[01:09] seems fine, what issues are you having?
[01:10] sarnold: content of the /etc/resolve.conf showing something else.
[01:11] also when I type sudo ifdown eth0, it says interface eth0 not configured
[01:11] mojtaba: how about /etc/resolv.conf?
[01:12] sarnold: http://paste.ubuntu.com/10918591/
[01:13] sarnold: Do you know why I get that message after running sudo ifdown eth0 or sudo ifdown wlan0?
[01:13] mojtaba: do you have 8.8.8.8 configured manually in /etc/resolvconf/ somewhere?
[01:13] sarnold: No, I entered it via GUI, but I removed it.
[01:14] mojtaba: is network-manager installed on this machine?
[01:14] sarnold: yes
[01:16] mojtaba: can you uninstall it? I suspect nothing will work quite right so long as you have NM installed. it's a crazy wrench to throw into any problem..
[01:16] sarnold: can I disable it for a moment?
[01:17] mojtaba: no idea
[01:17] servers and network-manager don't get along very well :P
[01:17] sarnold: what should I do after that?
=== markthomas is now known as markthomas|away
[07:10] hi all
[07:10] root@ubuntuserver:/usr/local/src/noip-2.1.9-1# make install
[07:10] gcc -Wall -g -Dlinux -DPREFIX=\"/usr/local\" noip2.c -o noip2
[07:10] make: gcc: Command not found
[07:10] make: *** [noip2] Error 127
[07:10] root@ubuntuserver:/usr/local/src/noip-2.1.9-1#
=== DenBeiren is now known as zz_DenBeiren
[07:32] linocisco: apt-get install build-essential
[07:48] Good morning.
[07:51] Hey everyone! I have a question regarding Landscape. Is it normal that I need to reboot my server for it to generate CPU/Memory graphs, and show the current process list?
[07:52] Matsy: throw that shit out the window and get a graphite/collectd system going
[07:52] OpenTokix: I actually really like the looks / functionality of Landscape. Started a 'trial' today with a few of my non-important servers
[07:53] Matsy: I have tried it too - also tried observium, nagiosgraph, cacti and munin - and graphite/collectd - oh yes, so much win
[07:53] Well, I need something to replace my salt environment
[07:54] So, it needs to do a bit more than collecting logs
[07:54] why are you moving away from salt?
[07:54] Company firewalls seem to have a lot of issues with salt
[07:54] Matsy: you think landscape will replace a fully fledged CM system?
[07:54] OpenTokix: I don't.
[07:55] But, since Landscape also allows the instant deployment of custom scripts to an arbitrary number of servers, it seems to do the job
[07:55] landscape shouldn't require reboots to generate those graphs
[07:56] jcastro: That's what I figured. But it didn't show anything, other than 'Virtual Environment: vmware'
[07:56] That's the only piece of information it seemed to gather
[07:56] Which is odd, because finding out what kind of virtual environment the computer is in without the tools installed is quite a bit more complicated than doing a ps for the list of running processes
[07:56] the landscape guys are in #landscape
[07:57] I don't know enough about landscape to help
[07:57] You guys have a channel for everything
[07:57] heh
=== zz_DenBeiren is now known as DenBeiren
[08:58] http://pastebin.ubuntu.com/10922486/
[09:01] Yes?
[09:02] linocisco: Probably a DNS issue on your server.
[09:05] Matsy, so what do I do?
[09:05] linocisco: Fix the internet?
[09:05] linocisco: dyn-update.no-ip.com resolves to 8.23.224.120 here.
[09:05] Do a simple ping, and see if that works
[09:06] If it doesn't, just set your nameservers to a proper DNS service (8.8.8.8 / 8.8.4.4 always work)
[09:07] Matsy, ping is blocked
[09:07] linocisco: Try adding dyn-update to your hosts file then
[09:10] Matsy, it is no-ip. not dyn
[09:10] linocisco: I know. See what hostname it tries to contact: dyn-update.no-ip.com
[09:11] dyn-update.no-ip.com: command not found
[09:53] jpds: ping
[10:15] Does anyone use/still use monit?
[10:16] As I'm running a Teamspeak server on my Ubuntu server and recently discovered monit, I wonder how effective this is to keep your server up and running in case something like a crash occurs
=== Joel is now known as Guest81232
[10:52] kevyes
[11:02] Is there an issue with Ubuntu 14.04 and disk performance?
[11:03] If there is I never noticed it.
[11:03] okay, i am using rsync to copy some data from a RAID-5 array with 5 drives to an SSD and it's only copying at 134MB/s
[11:04] and it's actually dipping down as low as 60MB/s
=== Lcawte|Away is now known as Lcawte
[11:06] Teduardo: "performance" is not a property of a distribution, but of its configuration. If you are unhappy with the defaults (which are often not the best for anything in particular), perhaps you need a system administrator to configure your system for your applications.
[11:06] Teduardo: Sequential writes are not as fast on SSDs
[11:07] Teduardo: What options are you using for your rsync? --whole-file ? - Are you doing something else on the machine? - What performance are you expecting?
[11:07] Teduardo: 130M/sec from a raid5 on 5 sata drives sounds reasonable.
[11:07] Teduardo: hardware or software raid?
[11:07] hardware
[11:08] Teduardo: you have to remember rsync is doing hashing and stuff on the files you are copying - you can try --whole-file - that will generally be faster
[11:10] I guess I will just use bonnie++ to figure out if i can get it to go faster. I figured 5 drives each capable of 100MB/s separately -1 drive for parity would be about 400MB/s
[11:10] Teduardo: it is not that simple
[11:10] Teduardo: it is not like the file is perfectly divided into four chunks of exactly the same size
[11:11] ah, im guessing the seek time is horrible on 4tb drives too
[11:11] The limit is not MB/sec, but operations per second - while you are doing your rsync, do a iostat 3 to see how many ops, and how much time is spent on each operation
[11:11] Depends on the number of platters
[11:12] there are 2- and 3-platter drives; 3-platter drives generally have better seek times. - Also, is it many small files, or few large files?
[11:12] its one gigantic file
[11:12] ok
[11:13] Then the absolutely fastest thing you can do is use dd
[11:13] or cp
[11:13] dd if=largefile of=/new/path/of/largefile bs=8M
[11:13] This will max out your machine, and you can check speed with kill -USR1 pid-of-dd
[11:14] Over 8M chunks generally will not give better performance - but you can always try
[11:15] This will probably grind your machine to a halt, since it is doing only io more or less
[11:15] so careful if its an internet-facing system
[11:15] that does something =)
[11:15] nah this is a t630 i have in a lab
[11:15] ok, dd at full speed
[11:15] then
[11:15] will be quickest
[11:16] dd > cp > scp > rsync > > > > > > > windows explorer > > > > > os x finder
[11:17] Hehe, osx ;)
[11:18] Teduardo: Tell me what you get in terms of performance (if you try dd)
[11:20] 53141921792 bytes (53 GB) copied, 234.048 s, 227 MB/s
[11:20] nice
[11:20] only 53G =)
[11:21] I thought you said large file =)
[11:21] he he he
[11:21] Teduardo: there you go - and I guess you learned something in the process
[11:22] okay, i will add 10 more drives to the array and see if the performance scales
[11:23] i need to be able to restore 11TB of data in less than 10 days
[11:23] which is why i'm going on this quest in the first place
[11:23] you are already doing it in 13 hours
[11:23] with 227M/sec
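The dd recipe above is worth rehearsing on a throwaway file before pointing it at real data; a minimal sketch (the /tmp paths are placeholders, not the real source or destination):

```shell
# Create a small test file, then copy it the way suggested above.
dd if=/dev/zero of=/tmp/largefile bs=1M count=64 2>/dev/null

# bs=8M as suggested in the channel; progress-reporting options vary
# between dd versions, so the portable way to check speed is kill -USR1.
dd if=/tmp/largefile of=/tmp/largefile.copy bs=8M 2>/dev/null

# While a long copy runs, from another terminal:
#   kill -USR1 "$(pgrep -n dd)"   # dd prints bytes copied and MB/s

# Verify the copy is byte-identical before trusting the method.
cmp /tmp/largefile /tmp/largefile.copy && echo "copy verified"
```

Newer GNU dd also accepts `status=progress` for a live throughput readout, which is friendlier than signalling the process.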
[11:24] yea that's a dd, not the restore process of this wacky backup software
[11:24] ok
[11:24] i'm trying to make sure that the underlying system is capable of what i need before i yell at the sw vendor
[11:24] then I guess its a software issue more than actual hardware, but hardware helps
[11:24] What backup software is it?
[11:24] server backup manager by idera
[11:25] ok
[11:25] bleh
[11:25] commercial backup...
[11:25] bleh
[11:25] Never used any that wasn't completely useless
[11:25] good luck!
[11:26] it works flawlessly it just takes forever
[11:26] it restored the 11tb of data
[11:26] So its always 0 or 11TB?
[11:29] yeah it's just backed up data.. which i dont need until it needs to be restored
[11:31] but i can't have it take 10 days to restore over 10Gbps ethernet
[11:31] But sometimes you only need a small part of the backup.
[11:32] Like one config file.
[11:32] oh, yeah i'm kind of worst case scenario planning
[11:33] but it could be that the software is poorly written but the restore I ran was bare metal and it was just bits sent to the block device rather than files
[11:34] so there is no excuse for 10 days
[11:37] Teduardo: did you also tune your network settings for 10Gbps?
[11:37] ie. txqueue and such?
[11:38] Teduardo: do _NOT_ use jumboframes on LAN
[11:41] Teduardo: if its over 10Gbps, your limit is network - and not disk speed, since your network will top out about 100MB/sec
[11:45] 10Gbps = 1.25GB/s
[11:46] Yes you are correct - I got a slight case of the dumb for a minute or two there.
[11:46] no worries.
[11:47] why no jumboframes?
[11:47] I only use jumboframes
[11:47] i dont think jumboframes actually hurts anything
[11:47] and amazon has changed to only using jumboframes also
[11:47] Teduardo, depends on your switch
[11:47] unless the switch in between is set to 1500 =D
[11:48] Patrickdk: For a LAN jumboframes give no added benefit - beyond increasing the complexity of your network.
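The throughput arithmetic traded above (13 hours for 11TB at 227MB/s; 10Gbps = 1.25GB/s) is easy to check with shell integer math:

```shell
# 11 TB moved at the 227 MB/s measured with dd, in whole hours:
echo $(( 11 * 1000*1000*1000*1000 / (227 * 1000*1000) / 3600 ))   # 13

# 10 Gbit/s expressed in MB/s: divide the bit rate by 8 bits per byte.
echo $(( 10 * 1000*1000*1000 / 8 / (1000*1000) ))                 # 1250
```

So at dd speed the network is not the bottleneck: a 227 MB/s stream uses well under a fifth of a 10GbE link's 1250 MB/s.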
[11:48] Patrickdk: jumboframes is not faster on a local network
[11:48] no, if the switch has a normal (small) packet buffer, you will overflow it quickly
[11:48] but i'm not even worried about the network yet, like i said i'm just trying to get the read performance of the volume on the server itself showing me a little life
[11:48] Patrickdk: Did a lab on this, with switches from different vendors, many different 10G cards, multiple OSes - no difference
[11:49] jumboframes or not highly depends on the nic used
[11:49] Patrickdk: it is a networking myth many believe - since it is annoying as hell to test
[11:49] I do see an increase in performance on my local network
[11:49] on the old nics, it was a huge improvement
[11:50] on newer nics it's down to like 10% or so
[11:50] but then, in vm's it goes up again
[11:50] it depends if you can take advantage of the nic's tso/lsr/gro/gso or not
[11:51] so what should 5x4TB WD RE4s in RAID-5 do read-wise in a bonnie++?
[11:51] ok, if you're running in vms - performance isn't a top issue anyway =) - so nevermind
[11:51] and since those only work for tcp
[11:52] OpenTokix: I only manage VM clusters
[11:52] do a crapload of udp stuff and rtp
[11:52] Patrickdk: About 350-450 ios/s
[11:52] iops*
[11:52] Matsy: ok
[11:52] OpenTokix: Performance is one of my highest priorities
[11:52] Matsy: ok
[11:52] and a 3.5" 7k rpm disk is going to max out around 80 iops per disk
[11:52] OpenTokix: How can you say that 'if you are running in vms, performance isn't a top issue'
[11:52] Matsy: its not
[11:53] a top, it is
[11:53] the top, no
[11:53] it's normally second to the top
[11:53] Yeah
[11:54] Availability is number one
[11:54] But performance is a very close one
[11:54] or simplicity of management
[11:54] ability to move to new hardware without it going nuts
[11:54] Matsy: Depends also if your traffic is counted in the hundreds, thousands or millions per second - or if your latency is in seconds, ms or us
[11:54] lots of single vm per machine
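One reason the jumbo-frame argument above is so inconclusive: the raw header-overhead win is small. A rough sketch of payload fraction per frame, counting only 40 bytes of TCP/IPv4 headers and ignoring Ethernet framing and the tso/gro offloads mentioned in the channel (this simple model is my own, not from the discussion):

```shell
# Fraction of each frame left for payload after 40 bytes of
# TCP/IPv4 headers (no options): (MTU - 40) / MTU.
awk 'BEGIN { printf "%.4f\n", (1500 - 40) / 1500 }'   # standard MTU: 0.9733
awk 'BEGIN { printf "%.4f\n", (9000 - 40) / 9000 }'   # jumbo MTU:    0.9956
```

A ceiling of roughly 2% extra goodput is why jumbo frames mostly paid off in the era when per-packet CPU cost, not header bytes, was the limit.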
[11:55] Patrickdk: Oh, I never use single VM machines
[11:55] Patrickdk: A good hot-swappable environment takes care of the hardware changes
[11:58] so if you add more disks to a raid-5 volume does that make the performance better or does it just mean that the seek time goes up?
[11:58] More disks in raid 5 does not mean more performance
[11:58] you shouldn't run big raid5 sets :)
[11:59] ^
[11:59] I'd consider 6 disks the max
[11:59] beyond that the performance impact is big and, much worse, your rebuild times will be very long
[11:59] during which your array has no fault-protection at all
[11:59] And will probably fail (the rebuild)
[11:59] yeah the rebuild times and stuff arent that big of a deal given it's cold/backup storage
[12:00] Deploy Ceph.
[12:00] It depends on which performance though, read performance on a raid5 is roughly equivalent to the read performance of an n-1 raid0
[12:00] Why does performance matter on cold storage?
[12:00] I'd say, use zfs
[12:00] Matsy: restore not taking 10 days
[12:00] Of course, for a parity raid, raid5 is very brave with modern disk sizes
[12:01] Teduardo: Go for 1+0 or something
[12:01] Should only take 5 days then
[12:01] raid 6 would work as well, if your controller supports it
[12:01] gives a bit more slack
[12:01] r6 is slower than r5 right?
[12:01] Teduardo: yes
[12:01] r6 tends to work on faster controllers
[12:01] so in theory yes, in practice no
[12:01] or 'hardly'
[12:02] raid5: 4 iops/write, raid6: 6 iops/write
[12:02] ah, the controller in question is a 2GB PERC H730
[12:02] but if you're doing many writes, r5/6 is not good anyway
[12:02] Which H730?
[12:03] what do ya mean?
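The iops figures above follow from the usual write-penalty rule of thumb: each random small write costs 1 backend op on raid0, 2 on raid1/10, 4 on raid5 (read data, read parity, write data, write parity), 6 on raid6. A sketch using the ~80 iops per 7.2k rpm disk estimated in the channel (the disk counts are illustrative):

```shell
# Effective random-write iops = disks * iops_per_disk / write_penalty.
echo $(( 5 * 80 / 4 ))   # raid5,  5 disks: 100
echo $(( 5 * 80 / 6 ))   # raid6,  5 disks: 66
echo $(( 4 * 80 / 2 ))   # raid10, 4 disks: 160
```

This is the worst case; as noted above, full-stripe sequential writes avoid the read-modify-write cycle, which is why the bare-metal restore can still stream far faster than these numbers suggest.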
[12:03] go for 1+0 if you want fast writes :)
[12:03] it also depends on the size of the writes, if you do full stripe writes it is not so bad, as opposed to doing random small writes and getting a read-modify-write cycle in there
[12:04] like i said i'm more concerned about how fast the data can be restored back to the client machine
[12:04] it took 10 days for an 11TB restore in a DR test
[12:05] trying to you know... make that.. not suck
[12:05] 10 days?!
[12:05] That's 700MB per minute
[12:05] That's very very slow :p
[12:05] and like i said that was a bits-to-block-device restore, not a file restore
[12:05] so it was hot garbage
[12:08] anyway i will check out raid-10 and maybe enable ssd caching on the volume and see if that gets me anywhere
[12:08] Wait, you're using SSDs?
[12:08] no
[12:09] the controller can use SSDs as read cache
[12:09] i will install some for that purpose
[12:09] just to test around with
[12:09] since i have pretty much unlimited hw
[12:10] i imagine that i'm going to find that this is all limited by the software and then im going to be unhappy
[12:12] so I have an init script with 'Required-Start: $network $remote_fs $syslog', but during bootup it fails to bind to the interfaces defined in /etc/network/interfaces, after booting is done I restart the service, and it works fine
[12:12] Unlimited hardware, I envy you
[12:12] what could be causing the network service to report as up to init/upstart/whatever but the interface to be still unavailable for binding?
[12:12] service in question is pdns-recursor
[12:13] Teduardo: well, if you are not seeing lots of io-wait state, you're not waiting for the storage
[12:13] Matsy: well, its the company's and not mine of course haha =)
[12:14] Teduardo: I work at a university. Unlimited hardware is still very rare. Every time I need to buy a new server, there's 200+ pages of bureaucracy
[12:14] yeah, currently i am playing with a bunch of Intel NVMe PCI Express drives
[12:14] Sigh
[12:15] Those are such beasts
[12:15] yeah, bios support for booting them is dodgy so have to use uefi and uefi + pxelinux is... errr.. wacky (for me anyway)
[12:16] but they are monsters, yes
[12:16] Why do you boot them though?
[12:16] Seems overkill for just booting
[12:16] I mean, servers restarting in 0.1 second or 0.2 seconds
[12:16] I'd love to switch my mongo disks to them though :p
[12:17] Maybe in the new fiscal year...
[12:17] hehe yeah, it's kind of my job to make sure that we know all of the potential answers to all of the potential questions beforehand, so i just test
[12:17] im not sure i would boot from them
=== bilde2910|away is now known as bilde2910
[12:34] and now we wait for the background initialization..
[12:34] woe
=== Lcawte is now known as Lcawte|Away
=== Lcawte|Away is now known as Lcawte
=== shirgall is now known as Guest37225
[13:38] can rsync be used to just sync folders remote and local?
[13:40] delinquentme: rsync can be used to sync folders and files between two locations, yes, I believe, but depending on what you want to do you may have to provide additional arguments and options to it. You may wish to refer to 'man rsync' (without the ' characters) to read up on the various options.
=== Faylite_ is now known as Faylite
=== markthomas|away is now known as markthomas
[17:37] Hi. Not sure where to go with this. I'm having a landscape issue. When I login to landscape and attempt to select the 'Support' option at the top (I have valid entitlements), it takes me to a Salesforce.com login screen instead of the Canonical support portal.
[17:39] The link is the 'Support' link here: https://landscape.canonical.com/account/hp-l3-support/activities
[17:41] rickbeldin: I think that's intentional; at least, when I go to the url you provided, I wind up at an Ubuntu SSO login prompt.. I login, see the landscape interface, hit my own "support" link, which brings me to https://eu1.salesforce.com/500/o -- which appears fully active and live..
[17:42] I was using the interface yesterday, and it logged me right in to where you could see cases.
[17:42] rickbeldin: I've heard some sso oddities can be poked in the eye by visiting https://login.ubuntu.com/ first
[17:43] !!!
[17:43] +1 sarnold.
[17:44] rickbeldin: did that sort it?
[17:44] Not very intuitive. Sort of along the lines of ctrl-alt-delete or close your browser and restart. Yes, it fixed it. Thanks very much.
[17:44] no kidding...
[17:46] rickbeldin: I don't know much about the support / landscape end of things.. it feels like one of those support tickets there would be the place to report the bug, but if those cost you money, _maybe_ this is a better place to start: https://launchpad.net/ubuntu/+source/landscape -- seems a bit empty there, but it might be worth a try
[17:46] I have a question about ufw. The log files (small sample here: http://pastebin.com/5pguu4kN ) show many connection requests to port 80 being blocked.
[17:47] Thanks. This was the first time and now I know which way to hold my nose. : )
[17:47] rickbeldin: hehe :)
[17:47] Port 80 is open because the server hosts a website. My nginx logs don't show any requests from these IPs. Is ufw blocking legitimate requests? The IP addresses seem like legit users, not server farms.
=== Guest37225 is now known as shirgall
[18:36] Hi. Can someone please help me replace a broken disk in a RAID1? I am getting really confused with the output of mdstat to the point where I am not even sure which disk is broken (it is showing 5 md12* disks, with inactive sdb2[0](S), active sdb5[0] [U_], active sdb1[0] [U_], active sda5[1] [_U] and active sda[2] [_U]).
[18:37] I think it was sda that failed because for a bit I couldn't boot (no operating system) but that seems to have gone away. I get a login prompt but I can't login when booting from HDD, and am now using a Server 14.04 rescue CD to get a prompt.
[18:39] CompuChip: I think it'll be easier for others to help you debug the issue if you can pastebin status outputs or similar; the pastebinit package can be very helpful here
[18:39] Thanks sarnold, can I install that from the rescue disk prompt?
[18:40] CompuChip: I hope so; apt-get update && apt-get install pastebinit to find out :)
[18:40] apt-get not found :)
[18:41] dang :)
[18:41] If I select that I want /dev/sda1 as root and open a shell there I get apt-get but no internet connection
[18:43] Don't seem to have a DNS, I can ping 8.8.8.8 but not google.com.
[18:44] CompuChip: try adding 8.8.8.8 to your /etc/resolv.conf
[18:44] CompuChip: this might work for you once you've got dns up: http://unix.stackexchange.com/a/5918/7064
[18:48] sarnold: thanks, got a bunch of errors but think I installed pastebinit
[18:49] Yay it worked http://paste.ubuntu.com/10928895/
[18:50] nice! now you get to solve your actual problem :)
[18:52] Shall I ask the question again? :)
[18:55] One of the disks in my RAID1 is broken. I am getting really confused with the output of mdstat to the point where I am not even sure which disk is broken - http://paste.ubuntu.com/10928895/
[18:59] What's even more confusing is that the Rescue Disk refuses to mount /dev/sdb1 as the root, even when I physically swap the connections.
[19:00] I can only get a shell in /dev/sda1.
=== markthomas is now known as markthomas|away
[19:52] been a while since I've run Ubuntu as a server, and getting back into it. seems like there used to be a post-install graphical command where you could change some of the settings, such as ip address, mount points, etc. that looked a lot like the install process. make sense?
[19:53] no idea
[19:54] thought it was some tk scripts that were packaged with it... maybe I'm thinking of a different distro.
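The [U_]/[_U] notation that confused CompuChip above encodes one slot per RAID1 member: U means that member is up, _ means it is missing, and the position corresponds to the device's role number (sda5[1] is role 1). A small sketch that decodes lines shaped like the ones quoted; the sample lines are made up to match the description in the channel, not taken from the real pastebin:

```shell
# Feed mdstat-style status lines through awk and report missing roles.
printf '%s\n' \
  'md126 : active raid1 sdb5[0] [2/1] [U_]' \
  'md127 : active raid1 sda5[1] [2/1] [_U]' |
awk '{
  map = $NF                          # last field, e.g. "[U_]"
  gsub(/[][]/, "", map)              # strip brackets -> "U_"
  for (i = 1; i <= length(map); i++)
    if (substr(map, i, 1) == "_")
      print $1 ": role " i - 1 " is missing"
}'
```

So `[U_]` means role 1 is gone and `[_U]` means role 0 is gone; a mix of both across md12* arrays is why the output looks contradictory. In practice `mdadm --detail /dev/md126` (device name illustrative) gives a much clearer per-member view than eyeballing /proc/mdstat.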
=== bilde2910 is now known as bilde2910|away
=== klaas_ is now known as klaas
=== DenBeiren is now known as zz_DenBeiren
=== bastidra1or is now known as bastidrazor
=== markthomas|away is now known as markthomas
=== Joel is now known as Guest14185
=== Lcawte is now known as Lcawte|Away