[01:04] <mojtaba> Hi, could you please tell me what is wrong with this config? http://paste.ubuntu.com/10918180/
[01:09] <sarnold> seems fine, what issues are you having?
[01:10] <mojtaba> sarnold: the content of /etc/resolve.conf shows something else.
[01:11] <mojtaba> also when I type sudo ifdown eth0, it says interface eth0 not configured
[01:11] <sarnold> mojtaba: how about /etc/resolv.conf?
[01:12] <mojtaba> sarnold: http://paste.ubuntu.com/10918591/
[01:13] <mojtaba> sarnold: Do you know why I get that message after running sudo ifdown eth0 or sudo ifdown wlan0?
[01:13] <sarnold> mojtaba: do you have 8.8.8.8 configured manually in /etc/resolvconf/ somewhere?
[01:13] <mojtaba> sarnold: No, I entered it via the GUI, but I removed it.
[01:14] <sarnold> mojtaba: is network-manager installed on this machine?
[01:14] <mojtaba> sarnold: yes
[01:16] <sarnold> mojtaba: can you uninstall it? I suspect nothing will work quite right so long as you have NM installed. it's a crazy wrench to throw into any problem..
[01:16] <mojtaba> sarnold: can I disable it for a moment?
[01:17] <sarnold> mojtaba: no idea
[01:17] <teward> servers and network-manager don't get along very well :P
[01:17] <mojtaba> sarnold: what should I do after that?
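[aside: on Ubuntu 14.04, /etc/resolv.conf is normally a resolvconf-managed symlink, and NetworkManager feeds it entries; a minimal sketch for finding where 8.8.8.8 comes from and taking NM out of the picture temporarily (eth0 is just an example):
    readlink -f /etc/resolv.conf        # usually points at /run/resolvconf/resolv.conf
    grep -r 8.8.8.8 /etc/resolvconf /etc/network/interfaces   # any manually configured entry?
    sudo service network-manager stop   # 14.04 uses upstart; "start" undoes this
    # ifdown/ifup only know about interfaces declared in /etc/network/interfaces,
    # which is why "interface eth0 not configured" appears when NM manages eth0
]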
[07:10] <linocisco> hi all
[07:10] <linocisco> root@ubuntuserver:/usr/local/src/noip-2.1.9-1# make install
[07:10] <linocisco> gcc -Wall -g -Dlinux -DPREFIX=\"/usr/local\" noip2.c -o noip2
[07:10] <linocisco> make: gcc: Command not found
[07:10] <linocisco> make: *** [noip2] Error 127
[07:10] <linocisco> root@ubuntuserver:/usr/local/src/noip-2.1.9-1#
[07:32] <OpenTokix> linocisco: apt-get install build-essential
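[aside: "gcc: Command not found" simply means no compiler toolchain is installed; OpenTokix's fix, spelled out:
    sudo apt-get update
    sudo apt-get install build-essential   # gcc, make, libc dev headers
    gcc --version                          # verify, then re-run make install
]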
[07:48] <lordievader> Good morning.
[07:51] <Matsy> Hey everyone! I have a question regarding Landscape. Is it normal that I need to reboot my server for it to generate CPU/Memory graphs, and show the current process list?
[07:52] <OpenTokix> Matsy: throw that shit out the window and get a graphite/collectd-system going
[07:52] <Matsy> OpenTokix: I actually really like the looks / functionality of Landscape. Started a 'trial' today with a few of my non-important servers
[07:53] <OpenTokix> Matsy: I have tried it too - also tried Observium, nagiosgraph, cacti and munin - and graphite/collectd - oh yes, so much win
[07:53] <Matsy> Well, I need something to replace my salt-environment
[07:54] <Matsy> So, it needs to do a bit more than collecting logs
[07:54] <OpenTokix> why are you moving away from salt?
[07:54] <Matsy> Company firewalls seem to have a lot of issues with salt
[07:54] <OpenTokix> Matsy: you think landscape will replace a fully fledged CM-system?
[07:54] <Matsy> OpenTokix: I don't.
[07:55] <Matsy> But, since Landscape also allows the instant deployment of custom scripts to an arbitrary amount of servers, it seems to do the job
[07:55] <jcastro> landscape shouldn't require reboots to generate those graphs
[07:56] <Matsy> jcastro: That's what I figured. But it didn't show anything, other than 'Virtual Environment: vmware'
[07:56] <Matsy> That's the only piece of information it seemed to gather
[07:56] <Matsy> Which is odd, because finding out what kind of virtual environment the computer is in without the tools installed is quite a bit more complicated than doing a ps for the list of running processes
[07:56] <jcastro> the landscape guys are in #landscape
[07:57] <jcastro> I don't know enough about landscape to help
[07:57] <Matsy> You guys have a channel for everything
[07:57] <jcastro> heh
[08:58] <linocisco> http://pastebin.ubuntu.com/10922486/
[09:01] <Matsy> Yes?
[09:02] <Matsy> linocisco: Probably a DNS issue on your server.
[09:05] <linocisco> Matsy, so what do I do?
[09:05] <Matsy> linocisco: Fix the internet?
[09:05] <Matsy> linocisco: dyn-update.no-ip.com resolves to 8.23.224.120 here.
[09:05] <Matsy> Do a simple ping, and see if that works
[09:06] <Matsy> If it doesn't, just set your nameservers to a proper DNS service (8.8.8.8 / 8.8.4.4 always work)
[09:07] <linocisco> Matsy, ping is blocked
[09:07] <Matsy> linocisco: Try adding dyn-update to your hosts file then
[09:10] <linocisco> Matsy, it is no-ip. not dyn
[09:10] <Matsy> linocisco: I know. See what hostname it tries to contact: dyn-update.no-ip.com
[09:11] <linocisco> dyn-update.no-ip.com: command not found
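[aside: what Matsy is suggesting, as commands rather than chat - a sketch; 8.23.224.120 is the address quoted above and may well have changed:
    host dyn-update.no-ip.com                                          # does DNS resolve at all?
    echo 'nameserver 8.8.8.8' | sudo tee -a /etc/resolv.conf           # retry with a known-good resolver
    echo '8.23.224.120 dyn-update.no-ip.com' | sudo tee -a /etc/hosts  # last-resort hosts-file pin
]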
[09:53] <xqcao> jpds: ping
[10:15] <kevinde> Does anyone use/still use monit?
[10:16] <kevinde> As I'm running a TeamSpeak server on my Ubuntu server and recently discovered monit, I wonder how effective it is at keeping your server up and running in case something like a crash occurs
[10:52] <OpenTokix> kevinde: yes
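[aside: a minimal monit stanza for kevinde's use case - a sketch with hypothetical TeamSpeak paths; adjust to the real install:
    sudo tee /etc/monit/conf.d/teamspeak >/dev/null <<'EOF'
    check process ts3server with pidfile /opt/teamspeak3/ts3server.pid
      start program = "/opt/teamspeak3/ts3server_startscript.sh start"
      stop program = "/opt/teamspeak3/ts3server_startscript.sh stop"
      if 5 restarts within 5 cycles then alert
    EOF
    sudo service monit reload
]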
[11:02] <Teduardo> Is there an issue with Ubuntu 14.04 and disk performance?
[11:03] <lordievader> If there is I never noticed it.
[11:03] <Teduardo> okay, i am using rsync to copy some data from a RAID-5 array with 5 drives to an SSD and it's only copying at 134MB/s
[11:04] <Teduardo> and it's actually dipping down as low as 60MB/s
[11:06] <Walex> Teduardo: "performance" is not a property of a distribution, but of its configuration. If you are unhappy with the defaults (which are often not the best for anything in particular), perhaps you need a system administrator to configure your system for your applications.
[11:06] <OpenTokix> Teduardo: Sequential writes aren't as fast on SSDs
[11:07] <OpenTokix> Teduardo: What options are you using for your rsync? --whole-file ? - Are you doing something else on the machine? - What performance are you expecting?
[11:07] <OpenTokix> Teduardo: 130M/sec from a raid5 on 5 sata-drives sounds reasonable.
[11:07] <OpenTokix> Teduardo: hardware or software-raid?
[11:07] <Teduardo> hardware
[11:08] <OpenTokix> Teduardo: you have to remember rsync is doing hashing and stuff on the files you are copying - you can try --whole-file - that will generally be quicker
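[aside: the flag OpenTokix mentions, in context - --whole-file disables rsync's delta-transfer algorithm, which only pays off over slow links anyway; paths are placeholders:
    rsync -a --whole-file /raid/largefile /mnt/ssd/
]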
[11:10] <Teduardo> I guess I will just use bonnie++ to figure out if i can get it to go faster. I figured 5 drives each capable of 100MB/s separately -1 drive for parity would be about 400MB/s
[11:10] <OpenTokix> Teduardo: it is not that simple
[11:10] <OpenTokix> Teduardo: it is not like the file is perfectly divided into four chunks of exactly the same size
[11:11] <Teduardo> ah, im guessing the seek time is horrible on 4tb drives too
[11:11] <OpenTokix> The limit is not MB/sec, but operations per second - while you are doing your rsync, do an iostat 3 to see how many ops there are, and how much time is spent on each operation
[11:11] <OpenTokix> Depends on the number of platters
[11:12] <OpenTokix> there are 2- and 3-platter drives, 3 generally have better seek times than 2. - Also, is it many small files, or a few large files?
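[aside: the iostat run suggested above; -x adds the extended per-device columns (iostat ships in the sysstat package):
    iostat -x 3    # watch r/s and w/s (ops), await (ms per op), %util
]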
[11:12] <Teduardo> it's one gigantic file
[11:12] <OpenTokix> ok
[11:13] <OpenTokix> Then the absolute fastest thing you can do is use dd
[11:13] <OpenTokix> or cp
[11:13] <OpenTokix> dd if=largefile of=/new/path/of/largefile bs=8M
[11:13] <OpenTokix> This will max out your machine, and you can check speed with kill -USR1 pid-of-dd
[11:14] <OpenTokix> Chunks over 8M generally will not give better performance - but you can always try
[11:15] <OpenTokix> This will probably grind your machine to a halt, since it is doing only io more or less
[11:15] <OpenTokix> so be careful if it's an internet-facing system
[11:15] <OpenTokix> that does something =)
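[aside: the dd-plus-SIGUSR1 trick above as a runnable sketch; file paths are placeholders:
    dd if=/raid/largefile of=/mnt/ssd/largefile bs=8M &
    DD_PID=$!
    while kill -0 "$DD_PID" 2>/dev/null; do
        kill -USR1 "$DD_PID"   # dd reports bytes copied and MB/s on stderr
        sleep 10
    done
]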
[11:15] <Teduardo> nah this is a t630 i have in a lab
[11:15] <OpenTokix> ok, dd at full speed
[11:15] <OpenTokix> then
[11:15] <OpenTokix> will be quickest
[11:16] <OpenTokix> dd > cp > scp > rsync > > > > > > > windows explorer > > > > > os x finder
[11:17] <lordievader> Hehe, osx ;)
[11:18] <OpenTokix> Teduardo: Tell me what you get in terms of performance (if you try dd)
[11:20] <Teduardo> 53141921792 bytes (53 GB) copied, 234.048 s, 227 MB/s
[11:20] <OpenTokix> nice
[11:20] <OpenTokix> only 53G =)
[11:21] <OpenTokix> I thought you said large file =)
[11:21] <Teduardo> he he he
[11:21] <OpenTokix> Teduardo: there you go, - and I guess you learned something in the process
[11:22] <Teduardo> okay, i will add 10 more drives to the array and see if the performance scales
[11:23] <Teduardo> i need to be able to restore 11TB of data in less than 10 days
[11:23] <Teduardo> which is why i'm going on this quest in the first place
[11:23] <OpenTokix> you are already doing it in 13 hours
[11:23] <OpenTokix> with 227M/sec
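[aside: the arithmetic behind "13 hours" - 11 TB at 227 MB/s:
    echo '11*10^12 / (227*10^6) / 3600' | bc -l   # = 13.46 hours, ignoring overhead
]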
[11:24] <Teduardo> yeah that's a dd, not the restore process of this wacky backup software
[11:24] <OpenTokix> ok
[11:24] <Teduardo> i'm trying to make sure that the underlying system is capable of what i need before i yell at the sw vendor
[11:24] <OpenTokix> then I guess it's a software issue more than actual hardware, but hardware helps
[11:24] <OpenTokix> What backup software is it?
[11:24] <Teduardo> server backup manager by idera
[11:25] <OpenTokix> ok
[11:25] <OpenTokix> bleh
[11:25] <OpenTokix> commercial backup...
[11:25] <OpenTokix> bleh
[11:25] <OpenTokix> Never used one that wasn't completely useless
[11:25] <OpenTokix> good luck!
[11:26] <Teduardo> it works flawlessly it just takes forever
[11:26] <Teduardo> it restored the 11tb of data
[11:26] <OpenTokix> So it's always 0 or 11TB?
[11:29] <Teduardo> yeah it's just backed up data.. which i don't need until it needs to be restored
[11:31] <Teduardo> but i can't have it take 10 days to restore over 10Gbps ethernet
[11:31] <lordievader> But sometimes you only need a small part of the backup.
[11:32] <lordievader> Like one config file.
[11:32] <Teduardo> oh, yeah i'm kind of worst case scenario planning
[11:33] <Teduardo> it could be that the software is poorly written, but the restore I ran was bare metal - just bits sent to the block device rather than files
[11:34] <Teduardo> so there is no excuse for 10 days
[11:37] <OpenTokix> Teduardo: did you also tune your network settings for 10Gbps?
[11:37] <OpenTokix> i.e. txqueuelen and such?
[11:38] <OpenTokix> Teduardo: do _NOT_ use jumboframes on LAN
[11:41] <OpenTokix> Teduardo: if it's over 10Gbps, your limit is network - and not disk speed, since your network will top out at about 100MB/sec
[11:45] <Teduardo> 10Gbps = 1.25GB/s
[11:46] <OpenTokix> Yes you are correct, - I got a slight case of the dumb for a minute or two there.
[11:46] <Teduardo> no worries.
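[aside: the sort of 10GbE tuning OpenTokix alludes to - common knobs, not a recipe; good values depend on kernel, NIC and workload:
    sudo ip link set eth0 txqueuelen 10000
    sudo sysctl -w net.core.rmem_max=67108864 net.core.wmem_max=67108864
    sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 67108864'
    sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 67108864'
]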
[11:47] <Patrickdk> why no jumboframes?
[11:47] <Patrickdk> I only use jumboframes
[11:47] <Teduardo> i dont think jumboframes actually hurts anything
[11:47] <Patrickdk> and amazon has changed to only using jumboframes also
[11:47] <Patrickdk> Teduardo, depends on your switch
[11:47] <Teduardo> unless the switch in between is set to 1500 =D
[11:48] <OpenTokix> Patrickdk: For a LAN, jumboframes give no added benefit - beyond increasing the complexity of your network.
[11:48] <OpenTokix> Patrickdk: jumboframes are not faster on a local network
[11:48] <Patrickdk> no, if the switch has a normal (small) packet buffer, you will overflow it quickly
[11:48] <Teduardo> but i'm not even worried about the network yet like i said i'm just trying to get the read performance of the volume on the server itself showing me a little life
[11:48] <OpenTokix> Patrickdk: Did a lab on this, with switches from different vendors, many different 10G cards, multiple OSes - no difference
[11:49] <Patrickdk> jumboframes or not highly depends on the nic used
[11:49] <OpenTokix> Patrickdk: it is a networking myth many believe - since it is annoying as hell to test
[11:49] <Patrickdk> I do see an increase in performance on my local network
[11:49] <Patrickdk> on the old nics, it was a huge improvement
[11:50] <Patrickdk> on newer nics it's down to like 10% or so
[11:50] <Patrickdk> but then, in vm's it goes up again
[11:50] <Patrickdk> it depends if you can take advantage of the nic's tso/lro/gro/gso or not
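[aside: for reference, enabling jumbo frames is just an MTU change, but as Teduardo notes above it has to match on every host and switch in the path; 10.0.0.2 is a placeholder peer:
    sudo ip link set eth0 mtu 9000
    ping -M do -s 8972 10.0.0.2   # 9000 minus 28 bytes of IP+ICMP headers; succeeds only if the whole path is jumbo-clean
]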
[11:51] <Teduardo> so what should 5x4TB WD RE4s in RAID-5 do read wise in a bonnie++?
[11:51] <OpenTokix> ok, if you're running in vms - performance isn't a top issue anyway =) - so never mind
[11:51] <Patrickdk> and since those only work for tcp
[11:52] <Matsy> OpenTokix: I only manage VM clusters
[11:52] <Patrickdk> do a crapload of udp stuff and rtp
[11:52] <OpenTokix> Patrickdk: About 350-450 ios/s
[11:52] <OpenTokix> iops*
[11:52] <OpenTokix> Matsy: ok
[11:52] <Matsy> OpenTokix: Performance is one of my highest priorities
[11:52] <OpenTokix> Matsy: ok
[11:52] <Patrickdk> and a 3.5" 7k rpm disk is going to max out around 80 iops per disk
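[aside: putting Patrickdk's 80 iops/disk together with the 5-disk array - rough arithmetic, assuming the usual RAID5 random-write penalty of 4:
    echo '5 * 80' | bc       # ~400 random-read iops across the array
    echo '5 * 80 / 4' | bc   # ~100 random-write iops after the RAID5 penalty
]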
[11:52] <Matsy> OpenTokix: How can you say that 'if you are running in vms, performance isn't a top issue'
[11:52] <OpenTokix> Matsy: it's not
[11:53] <Patrickdk> a top, it is
[11:53] <Patrickdk> the top, no
[11:53] <Patrickdk> it's normally second to the top
[11:53] <Matsy> Yeah
[11:54] <Matsy> Availability is number one
[11:54] <Matsy> But performance is a very close one
[11:54] <Patrickdk> or simplicity of management
[11:54] <Patrickdk> ability to move to new hardware without it going nuts
[11:54] <OpenTokix> Matsy: Depends also if your traffic is counted in the hundreds, thousands or millions per second - or if your latency is in seconds, ms or us
[11:54] <Patrickdk> lots of setups run a single vm per machine
[11:55] <Matsy> Patrickdk: Oh, I never use single VM machines
[11:55] <Matsy> Patrickdk: A good hot-swappable environment takes care of the hardware changes
[11:58] <Teduardo> so if you add more disks to a raid-5 volume does that make the performance better or does it just mean that the seek time goes up?
[11:58] <Matsy> More disks to raid 5 does not mean more performance
[11:58] <Sling> you shouldn't run big raid5 sets :)
[11:59] <Matsy> ^
[11:59] <Sling> I'd consider 6 disks the max
[11:59] <Sling> beyond that the performance impact is big and, much worse, your rebuild times will be very long
[11:59] <Sling> during which your array has no fault-protection at all
[11:59] <OpenTokix> And will probably fail (the rebuild)
[11:59] <Teduardo> yeah the rebuild times and stuff aren't that big of a deal given it's cold/backup storage
[12:00] <jpds> Deploy Ceph.
[12:00] <maswan> It depends on which performance though, read performance on a raid5 is roughly equivalent to the read performance of an (n-1)-disk raid0
[12:00] <Matsy> Why does performance matter on cold storage?
[12:00] <Sling> I'd say, use zfs
[12:00] <Teduardo> Matsy: restore not taking 10 days
[12:00] <maswan> Of course, for a parity raid, raid5 is very brave with modern disk sizes
[12:01] <Matsy> Teduardo: Go for 1+0 or something
[12:01] <Matsy> Should only take 5 days then
[12:01] <Sling> raid 6 would work as well, if your controller supports it
[12:01] <Sling> gives a bit more slack
[12:01] <Teduardo> r6 is slower than r5 right?
[12:01] <OpenTokix> Teduardo: yes
[12:01] <Sling> r6 tends to work on faster controllers
[12:01] <Sling> so in theory yes, in practice no
[12:01] <Sling> or 'hardly'
[12:02] <OpenTokix> raid5: 4 iops per write, raid6: 6 iops per write
[12:02] <Teduardo> ah, the controller in question is a 2GB PERC H730
[12:02] <Sling> but if you're doing many writes, r5/6 is not good anyway
[12:02] <Matsy> Which H730?
[12:03] <Teduardo> what do ya mean?
[12:03] <Sling> go for 1+0 if you want fast writes :)
[12:03] <maswan> it also depends on the size of the writes, if you do full stripe writes it is not so bad, as opposed to doing random small writes and getting a read-modify-write cycle in there
[12:04] <Teduardo> like i said i'm more concerned about how fast the data can be restored back to the client machine
[12:04] <Teduardo> it took 10 days for a 11TB restore in a DR test
[12:05] <Teduardo> trying to you know... make that.. not suck
[12:05] <Matsy> 10 days?!
[12:05] <Matsy> That's 700MB per minute
[12:05] <Matsy> That's very very slow :p
[12:05] <Teduardo> and like i said, that was a bits-to-block-device restore, not a file restore
[12:05] <Teduardo> so it was hot garbage
[12:08] <Teduardo> anyway i will check out raid-10 and maybe enable ssd caching on the volume and see if that gets me anywhere
[12:08] <Matsy> Wait, you're using SSDs?
[12:08] <Teduardo> no
[12:09] <Teduardo> the controller can use SSDs as read cache
[12:09] <Teduardo> i will install some for that purpose
[12:09] <Teduardo> just to test around with
[12:09] <Teduardo> since i have pretty much unlimited hw
[12:10] <Teduardo> i imagine that i'm going to find that this is all limited by the software and then i'm going to be unhappy
[12:12] <Sling> so I have an init script with 'Required-Start:    $network $remote_fs $syslog', but during bootup it fails to bind to the interfaces defined in /etc/network/interfaces, after booting is done I restart the service, and it works fine
[12:12] <Matsy> Unlimited hardware, I envy you
[12:12] <Sling> what could be causing the network service to report as up to init/upstart/whatever but the interface to be still unavailable for binding?
[12:12] <Sling> service in question is pdns-recursor
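[aside: one common cause/workaround for Sling's symptom - $network in an LSB Required-Start header only guarantees the networking job ran, not that every interface in /etc/network/interfaces has its address yet; a sketch (address is a placeholder):
    # /etc/network/interfaces
    auto eth0
    iface eth0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        post-up service pdns-recursor restart || true   # re-bind once the address exists
    # alternative: allow daemons to bind to addresses that are not up yet
    sudo sysctl -w net.ipv4.ip_nonlocal_bind=1
]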
[12:13] <maswan> Teduardo: well, if you are not seeing lots of io-wait state, you're not waiting for the storage
[12:13] <Teduardo> Matsy: well, it's the company's and not mine of course haha =)
[12:14] <Matsy> Teduardo: I work at a university. Unlimited hardware is still very rare.
[12:14] <Matsy> Every time I need to buy a new server, there's 200+ pages of bureaucracy
[12:14] <Teduardo> yeah, currently i am playing with a bunch of Intel NVMe PCI Express drives
[12:14] <Matsy> Sigh
[12:15] <Matsy> Those are such beasts
[12:15] <Teduardo> yeah, bios support for booting them is dodgy so I have to use uefi, and uefi + pxelinux is... errr.. wacky (for me anyway)
[12:16] <Teduardo> but they are monsters, yes
[12:16] <Matsy> Why do you boot them though?
[12:16] <Matsy> Seems overkill for just booting
[12:16] <Matsy> I mean, servers restarting in 0.1 second or 0.2 seconds
[12:16] <Matsy> I'd love to switch my mongo-disks to them though :p
[12:17] <Matsy> Maybe in the new fiscal year...
[12:17] <Teduardo> hehe yeah, it's kind of my job to make sure that we know all of the potential answers to all of the potential questions beforehand, so i just test
[12:17] <Teduardo> im not sure i would boot from them
[12:34] <Teduardo> and now we wait for the background initialization..
[12:34] <Teduardo> woe
[13:38] <delinquentme> can rsync be used to just sync folders remote and local?
[13:40] <teward> delinquentme: rsync can be used to sync folders and files between two locations, yes, I believe, but depending on what you want to do you may have to provide additional arguments and options to it.  You may wish to refer to 'man rsync' (without the ' characters) to read up on the various options.
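[aside: the simplest forms of what delinquentme asks - push and pull folder sync; host and paths are placeholders:
    rsync -av /home/me/project/ user@remote:/srv/project/   # local -> remote
    rsync -av user@remote:/srv/project/ /home/me/project/   # remote -> local
    # a trailing slash on the source means "sync the contents", not the directory itself
]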
[17:37] <rickbeldin> Hi.  Not sure where to go with this.  I'm having a landscape issue. When I login to landscape and attempt to select the 'Support' option at the top (I have valid entitlements), it takes me to a Salesforce.com login screen instead of the Canonical support portal.
[17:39] <rickbeldin> The link is the 'Support' link here:  https://landscape.canonical.com/account/hp-l3-support/activities
[17:41] <sarnold> rickbeldin: I think that's intentional; at least, when I go to the url you provided. I wind up at an Ubuntu SSO login prompt.. I login, see the landscape interface, hit my own "support" link, which brings me to https://eu1.salesforce.com/500/o -- which appears fully active and live..
[17:42] <rickbeldin> I was using the interface yesterday, and it logged me right in to where you could see cases.
[17:42] <sarnold> rickbeldin: I've heard some sso oddities can be poked in the eye by visiting https://login.ubuntu.com/ first
[17:43] <rickbeldin> !!!
[17:43] <rickbeldin> +1 sarnold.
[17:44] <sarnold> rickbeldin: did that sort it?
[17:44] <rickbeldin> Not very intuitive.  Sort of along the lines of ctrl-alt-delete or close your browser and restart.  Yes, it fixed it.  Thanks very much.
[17:44] <sarnold> no kidding...
[17:46] <sarnold> rickbeldin: I don't know much about the support / landscape end of things.. it feels like one of those support tickets there would be the place to report the bug, but if those cost you money, _maybe_ this is a better place to start: https://launchpad.net/ubuntu/+source/landscape -- seems a bit empty there, but it might be worth a try
[17:46] <jazzorius> I have a question about ufw. The log files (small sample here: http://pastebin.com/5pguu4kN ) show many connection requests to port 80 being blocked.
[17:47] <rickbeldin> Thanks. This was the first time and now I know which way to hold my nose.  : )
[17:47] <sarnold> rickbeldin: hehe :)
[17:47] <jazzorius> Port 80 is open because the server hosts a website. My nginx logs don't show any requests from these IPs. Is ufw blocking legitimate requests? The IP addresses seem like legit users, not server farms.
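[aside: quick checks for jazzorius's question - a sketch; [UFW BLOCK] entries for an open port 80 are often late/out-of-state segments (ACK or FIN arriving after the conntrack entry expired), not dropped legitimate requests:
    sudo ufw status verbose                         # confirm 80/tcp really is allowed
    grep 'DPT=80' /var/log/ufw.log | grep -c ACK    # count blocks that are bare ACK/FIN stragglers
]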
[18:36] <CompuChip> Hi. Can someone please help me replace a broken disk in a RAID1. I am getting really confused with the output of mdstat to the point where I am not even sure which disk is broken (it is showing 5  md12* disks, with inactive sdb2[0](S), active sdb5[0] [U_], active sdb1[0] [U_], active sda5[1] [_U] and active sda[2] [_U]).
[18:37] <CompuChip> I think it was sda that failed because for a bit I couldn't boot (no operating system) but that seems to have gone away. I get a login prompt but I can't login when booting from HDD, and now using a Server 14.04 rescue CD to get a prompt.
[18:39] <sarnold> CompuChip: I think it'll be easier for others to help you debug the issue if you can pastebin status outputs or similar; the pastebinit package can be very helpful here
[18:39] <CompuChip> Thanks sarnold, can I install that from the rescue disk prompt?
[18:40] <sarnold> CompuChip: I hope so; apt-get update && apt-get install pastebinit to find out :)
[18:40] <CompuChip> apt-get not found :)
[18:41] <sarnold> dang :)
[18:41] <CompuChip> If I select that I want /dev/sda1 as root and open a shell there I get apt-get but no internet connection
[18:43] <CompuChip> Don't seem to have a DNS, I can ping 8.8.8.8 but not google.com.
[18:44] <sarnold> CompuChip: try adding 8.8.8.8 to your /etc/resolv.conf
[18:44] <sarnold> CompuChip: this might work for you once you've got dns up: http://unix.stackexchange.com/a/5918/7064
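[aside: sarnold's suggestion as a one-liner - no sudo needed since the rescue shell is already root:
    echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
]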
[18:48] <CompuChip> sarnold: thanks, got a bunch of errors but think I installed pastebinit
[18:49] <CompuChip> Yay it worked http://paste.ubuntu.com/10928895/
[18:50] <sarnold> nice! now you get to solve your actual problem :)
[18:52] <CompuChip> Shall I ask the question again? :)
[18:55] <CompuChip> One of the disks in my RAID1 is broken. I am getting really confused with the output of mdstat to the point where I am not even sure which disk is broken - http://paste.ubuntu.com/10928895/
[18:59] <CompuChip> What's even more confusing is that the Rescue Disk refuses to mount /dev/sdb1 as the root, even when I physically swap the connections.
[19:00] <CompuChip> I can only get a shell in /dev/sda1.
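[aside: the usual RAID1 member-replacement flow with mdadm, as a sketch - md0/sdb1 are placeholders, and CompuChip's paste shows several small md arrays, so the steps repeat per array:
    cat /proc/mdstat              # [U_] / [_U] shows which slot of each mirror is missing
    mdadm --detail /dev/md0       # names the faulty/removed member explicitly
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1
    # after swapping the disk and recreating a matching partition table:
    sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the MBR layout from the surviving disk
    mdadm --manage /dev/md0 --add /dev/sdb1
    watch cat /proc/mdstat        # wait for the resync to finish
]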
[19:52] <wk5h> been a while since I've run Ubuntu as a server, and getting back into it.  seems like there used to be a post-install graphical command where you could change some of the settings, such as ip address, mount points, etc. that looked a lot like the install process.  make sense?
[19:53] <patdk-wk> no idea
[19:54] <wk5h> thought it was some tk scripts that were packaged with it...  maybe I'm thinking of a different distro.