/srv/irclogs.ubuntu.com/2014/06/16/#ubuntu-server.txt

=== pHcF_ is now known as pHcF
IbnSaeedis it a proper method of learning iptables, setting up MAAS in ubuntu 14 server inside a VM?04:32
=== a1berto_ is now known as a1berto
PupenoWhat's the appropriate way of mounting volumes that require net access in Ubuntu and thus cannot be automatically mounted at boot time?08:08
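(Editor's aside, a hedged sketch — not from the channel: the usual answer to Pupeno's question is the `_netdev` and systemd automount mount options; the server name and paths below are placeholders.)

```
# /etc/fstab — hedged example for a network volume; "nas:/export/media"
# and "/mnt/media" are placeholders.
# "_netdev" marks the filesystem as needing the network, so mounting
# waits for it; on systemd releases, "x-systemd.automount" instead
# mounts lazily on first access.
nas:/export/media  /mnt/media  nfs  _netdev,x-systemd.automount  0  0
```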
jamespagecoreycb, zul: all uploaded apart from keystone (still running unit tests) and nova (pending zul actions)09:23
med_jamespage, is there an SRU bug (or something similar) for OpenStack 2014.1.1 ? And is that going into Trusty proper or will the UCA be created for it?10:16
jamespagemed_, bug https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1328134 10:17
uvirtbotLaunchpad bug 1328134 in nova "[SRU] packaging for openstack icehouse 2014.1.1 release" [Undecided,New]10:17
jamespagemed_, no UCA for trusty - this goes in as an SRU10:17
jamespagemed_, just pending the last few uploads today and we'll get it acked into -proposed10:17
med_nod10:18
med_I misread the tail of that bug, looked like everything was marked invalid10:18
=== mrmist is now known as mist
med_and thanks james. Much appreciated.10:19
med_and thanks zul (didn't think you'd be on yet)10:19
cariboujamespage: would you eventually have time to review http://bugs.launchpad.net/bugs/1313602 10:32
uvirtbotLaunchpad bug 1313602 in nova-cloud-controller "Nova-cloud-controller charms failed to sync ssh keys between compute nodes" [Undecided,In progress]10:32
cariboujamespage: you or someone else that does openstack charming10:33
jamespagecaribou, sure - not sure why it's not in the review queue.10:33
jamespagecaribou, that bug has no linked branches?10:33
cariboujamespage: yeah, I noticed that10:34
pmatulismorning10:34
cariboujamespage: let me dig the MP for you10:35
jamespagecaribou, can you link the branches to that bug report and set the merges back to ready for review.10:35
cariboujamespage: sure; I'm just surprised that the LP: #bug in the changelog went unnoticed by launchpad10:36
cariboujamespage: I put the status to 'resubmit' by mistake. How can I get them back to 'ready for review'. By resubmitting them ?10:44
jamespagecaribou, URL?10:44
cariboujamespage: https://code.launchpad.net/~louis-bouchard/charms/precise/nova-compute/lp1313602-multiline-known-hosts/+merge/218440 10:45
cariboujamespage: and https://code.launchpad.net/~louis-bouchard/charms/precise/nova-cloud-controller/lp1313602-multiline-known-hosts/+merge/218442 10:45
jamespagecaribou, hmm - that looks OK10:46
Voyage what's the limit of concurrent connections for one 64-core CPU server machine? what's the bottleneck? 64-core CPU and 2 TB RAM.11:01
OpenTokixVoyage: for what?11:03
jamespageVoyage, by default probably 1024 - the open file descriptor limit is low by default, which constrains the number of network connections a single process can support11:04
jamespagenote that is for 1 process11:04
OpenTokixjamespage: but that is a software limit, the question is regarding hardware.11:04
jamespageOpenTokix, in which case the answer is it depends on what software you are running11:05
OpenTokixVoyage: if it has a 1Gbps interface and the packets are one ethernet frame per client, around 60k connections, since the 1Gbps will top out at about 120kpps (x10 for 10GE)11:05
OpenTokixjamespage: yes =) as I asked "for what?"11:05
jamespageOpenTokix, indeed :-)11:05
jamespageOpenTokix, but the open fd limit is pretty much always the first bottleneck unless the software is really crappy :-)11:06
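(Editor's sketch of inspecting the fd limit jamespage describes; the 4096 figure is an arbitrary example, and persistent limits live in /etc/security/limits.conf.)

```shell
# Per-process soft limit on open file descriptors (often 1024 by default):
ulimit -n

# Hard ceiling for this shell, and the system-wide maximum:
ulimit -Hn
cat /proc/sys/fs/file-max

# Raise the soft limit for the current shell only; persistent changes
# use the "nofile" item in /etc/security/limits.conf:
ulimit -n 4096 2>/dev/null || echo "hard limit too low; raise it first"
```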
OpenTokixjamespage: indeed it is11:06
OpenTokixjamespage: and it is also very very traffic dependent where the bottleneck is11:07
jamespageindeed11:07
jamespageI love 'it depends' answers :-)11:07
OpenTokix=)11:07
OpenTokixIt always does11:07
OpenTokixI have seen some really weird bottlenecks on high volume stuff11:07
OpenTokixLike running out of source ports for instance11:07
jamespageindeed - I reckon the best answer is really to configure, test, see what breaks, re-configure....11:07
OpenTokixwith a 3s fin_timeout =)11:07
jamespageof course generating that sort of load does become tricky11:08
OpenTokiximpossible to benchmark11:08
OpenTokixbtw. =)11:08
OpenTokixThe biggest houses for benchmarking can only muster up about 5000 unique hosts11:09
jamespageOpenTokix, that sounds expensive but 100% correct11:10
jamespageOpenTokix, I used to use a Testing as a Service product that ran from ec2 - but that did not go that large11:10
jamespage100's11:10
OpenTokixjamespage: crazy expensive, between 10-50k dollars for one test11:10
jamespageOpenTokix, esp when someone forgets to bump the ulimit and you hit the wall fast!11:11
OpenTokixjamespage: EC2 is too slow for the stuff I was testing11:11
OpenTokixjamespage: I designed the system we tested, and I didn't forget =)11:11
OpenTokixjamespage: handled about 3 billion requests/day from millions of ips11:11
jamespageOpenTokix, awesome11:11
OpenTokixit was =)11:13
OpenTokixThen the company got acquired by idiots and I left =)11:13
OpenTokixjamespage: and needless to say, their availability is going down =)11:13
Voyagejamespage,  so normally, what's the max number of connections that reasonably powerful servers reach?11:48
jamespageVoyage, it really does depend on what you are actually doing on it11:49
VoyageOpenTokix,  the 60k connections is not the limit on the server side, correct? it's on the client side (which is of course never reached)11:49
VoyageOpenTokix,  a server can have more than  60k connections concurrently. right?11:50
VoyageOpenTokix,  the file descriptor limit can be customized to be more than millions, no?11:50
Voyagejamespage,  web server. apache or apache tomcat..11:51
OpenTokixVoyage: on a single 1Gbps nic you have a limit of about 120kpps11:52
OpenTokixVoyage: generally11:52
OpenTokixVoyage: you can do some tweaks, if you know _a lot_ about your traffic, to reach higher11:52
OpenTokix(at the cost of latency)11:52
Voyageignore latency11:53
OpenTokixYou can't get a 1G much higher than 120kpps11:56
OpenTokixfor internet traffic11:56
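(Editor's back-of-envelope check of the pps figures above. The 38 bytes of per-frame overhead — 14 header, 4 FCS, 8 preamble, 12 inter-frame gap — is the standard Ethernet assumption; the ~120kpps quoted in-channel sits between the small-frame and full-frame bounds, depending on traffic mix.)

```shell
# Theoretical frames-per-second on a 1 Gbit/s link for a few payload
# sizes, counting 38 bytes of per-frame Ethernet overhead:
for payload in 64 512 1500; do
    awk -v p="$payload" 'BEGIN {
        bits = (p + 38) * 8                 # bits on the wire per frame
        printf "payload %4d B -> max %d frames/s\n", p, int(1e9 / bits)
    }'
done
```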
=== Ursinha is now known as Ursinha-afk
droven[obscene Arabic spam message]13:11
cfhowlettdroven in English please13:11
droven[obscene Arabic spam message, repeated]13:17
cfhowlett!english|droven13:18
ubottudroven: The main Ubuntu channels require that you speak in calm, polite English. For other languages, please visit https://wiki.ubuntu.com/IRC/ChannelList13:18
droven[obscene Arabic spam message, repeated]13:21
davidparkerHi everyone! I have a low-traffic website, audio streaming, archiving, and file serving needs to accomplish, and I have two computers to do it. Should I install ubuntu server on both of them, using one as backup? Or should I use MAAS? My needs are not complex so using a cloud-style elastic architecture is probably overkill, right? I want to use Juju to set up my stack, though. Thanks!13:22
remix_tjdavidparker: maybe you need only an active/passive cluster?13:38
remix_tjdavidparker: something like this http://www.thomas-krenn.com/en/wiki/HA_Cluster_with_Linux_Containers_based_on_Heartbeat,_Pacemaker,_DRBD_and_LXC13:39
rbasakdavidparker: if you want to use Juju in your stack, and you have two computers, then I think your options are: separate local environments on each computer, one manual provider environment covering both, or MAAS. For two machines, MAAS is overkill I think. I'd probably go with the manual provider.13:41
rbasakdavidparker: I suggest that you use the juju stable PPA for now. I'm working on getting the latest updates to Trusty, and making that happen faster. But right now the PPA is the best option.13:42
davidparkerIs using Juju also overkill? I've installed ubuntu server with the following packages: apache, PHP, MySQL, DNS server, OpenSSH, etc. So if that's all I'm using, plus some audio specific stuff like Rotter and LiquidSoap, then why use Juju? I've already got all the software I need on ubuntu server.13:54
davidparkerThen I could just use the second computer as a RAID, or some type of backup.13:55
=== Ursinha-afk is now known as Ursinha
hallynzul: all right, let's get some libvirt 1.2.5 up in this utopic.  (i wanna finish the cgmanager patch against it)  are you working on it right now?  (if not i'll take a look)14:27
zulhallyn:  please have a look14:30
hallynk14:35
DenBeirenwhat would be the easy way to change the locale to a Belgian/Dutch layout through the cli?14:50
rbasakDenBeiren: LANG is usually set in /etc/default/locale. Older releases used /etc/environment14:53
rbasakDenBeiren: not sure about keyboard though, sorry.14:53
rbasak"sudo dpkg-reconfigure console-setup" maybe. I'm not sure if that's still used.14:53
rbasakMaybe /etc/default/keyboard as provided by the keyboard-configuration package.14:54
rbasakThe loadkeys command will change things immediately for you on a VT if you need it updated without a reboot.14:54
rbasakNo idea about X.14:54
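(Editor's sketch pulling rbasak's pointers together; the Belgian values are assumptions — check `locale -a` for the exact locale name.)

```
# /etc/default/locale — system-wide language:
LANG="nl_BE.UTF-8"

# /etc/default/keyboard — layout read by keyboard-configuration:
XKBLAYOUT="be"

# Apply without a reboot:
#   sudo dpkg-reconfigure keyboard-configuration
# or, on a VT only:
#   sudo loadkeys be
```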
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== |Jurgen| is now known as |J-W|
=== |J-W| is now known as |Jurgen|
superbootHi all. I've got a mdadm RAID-10 array, and want to upgrade the os to a new version with a fresh install. Can I just copy the /etc/mdadm/mdadm.conf to the new install, and start/mount the array?16:12
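(Editor's sketch of the usual procedure for superboot's question — copying the old mdadm.conf generally works, but it isn't strictly required, since array metadata lives in the member disks' superblocks. Device names below are placeholders.)

```
# Assemble from the superblocks on the member disks:
sudo mdadm --assemble --scan

# Persist the discovered arrays for the new install's initramfs:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Then mount as usual (device name is an assumption):
sudo mount /dev/md0 /mnt
```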
tgm4883Generally speaking, shouldn't iSCSI be faster than NFS? I've been doing some testing, and I'm getting better performance from NFS. Is my thinking backwards?16:15
superbootMeaning your tests are showing that NFS is faster? (for clarification for the group)16:15
tgm4883exactly16:15
tgm4883I've been using IOMeter for testing16:15
tgm4883I'm getting higher IOPS and more throughput16:16
tgm4883Tests consisting of SAN storage mounted from a QNAP on an ESXi host using both iSCSI and NFS (two different volumes). A virtual machine on each storage type runs IOMeter16:17
superbootSorry, I don't have any experience with iSCSI.16:19
superbootI'm sure there is someone in the channel that does.16:19
RoyKtgm4883: sync writes to NFS should be slower than writes to iSCSI, but NFS also supports async writes, meaning the writes are buffered (at the VFS layer IIRC)16:24
RoyKiSCSI is always sync on the block level16:25
RoyK(but then, the filesystem (or VFS?) should do some caching anyway)16:25
RoyKtgm4883: how do you compare these two? what sort of NFS server? what filesystem on top of the iSCSI thing?16:26
RoyKand btw, for iSCSI, you should use jumboframes16:26
RoyKtgm4883: standard 1500 byte frames will generate rather a lot of tiny frames; network stacks and switches are happier with 9000b jumboframes16:27
RoyKtgm4883: but please describe your infrastructure, network etc16:28
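(Editor's sketch of checking MTU end-to-end as RoyK suggests; "eth0" and the ping target are placeholders, and the read-only check below uses loopback so it runs anywhere.)

```shell
# Current MTU of an interface (loopback shown; substitute eth0 etc.):
cat /sys/class/net/lo/mtu

# Enabling jumbo frames (needs root, and every switch port on the path
# must allow 9000-byte frames or they are silently dropped):
#   sudo ip link set dev eth0 mtu 9000
# Verify with a do-not-fragment ping just under the jumbo payload:
#   ping -M do -s 8972 <target>
```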
=== a1berto_ is now known as a1berto
tgm4883RoyK: The SAN is a QNAP server, which I believe uses NFSv3. The filesystem on the iSCSI mount is VMFS5. I'm not currently using jumbo frames, but I'll check with my networking team to ensure that is possible. I'd also have to check with them to see what switches we're using, I know we have a combination of Cisco and Brocade/Foundry.16:31
tgm4883I'm comparing them by running IOMeter on a Ubuntu 14.04 VM. I have it located on one type of storage, run the test, then use vmotion to move it to the other storage and test again16:32
=== Ursinha is now known as Ursinha-afk
RoyKguess all switches produced in the last 10 years or so should support jumboframes16:36
RoyKnot necessarily enabled in the config, though16:36
tgm4883yea, I wouldn't be surprised if it's not enabled16:36
=== |Jurgen| is now known as Dieltjes
=== Dieltjes is now known as Dieltjens
RoyKbut.. do I understand you correctly? Are you comparing vmfs5 on iscsi with nfs on a linux vm?16:37
RoyKif so, that's not quite fair16:38
=== Dieltjens is now known as |Jurgen|
tgm4883not exactly. both are mounted directly on the ESXi server. VMFS5 on iSCSI, NFS3 (for NFS).16:38
tgm4883Both are mounted on the ESXi server, not on the IOMeter client16:39
RoyKnot sure, then - we're using iSCSI for most stuff on ESXi (Dell Equallogic storage)16:40
RoyKthose boxes just support direct block access anyway16:41
RoyKtgm4883: what sort of QNAP thing? The QNAPs I've used, just use linux and software RAID/LVM16:43
=== Ursinha-afk is now known as Ursinha
RoyKtgm4883: nothing wrong with that, though...16:44
RoyKtgm4883: btw, check output of 'ifconfig' on that QNAP thing - check the MTU16:45
tgm4883RoyK: it's a TS-EC1279U-RP16:46
tgm4883MTU's at 1500 in the web interface16:47
RoyKk16:47
tgm4883I'll check with our networking team on the switches, they are in a meeting rightnow16:47
RoyKok16:48
tgm4883Wiring wise, the ESXi hosts have dual 1Gbit links to the switch, the QNAP has 4 × 1Gbit links to the switch in an LACP config16:48
RoyKtgm4883: not sure if it's relevant, but we don't use LACP in our setups - we use quad 1G links to the EQL storage with multipath16:50
RoyKLACP probably doesn't scale as well as iSCSI multipath16:50
RoyKespecially with few hosts16:50
RoyKhow many ESXi hosts?16:50
tgm4883Which EQL?16:51
RoyKEQL as in EqualLogic - we have a few16:51
RoyKfrom large/slowish ones to small/fast ones (15k in raid1-0)16:51
tgm4883So a bit more about our environment, I'm testing the QNAP for another team, so it's only got one host attached to it. We've got 7 hosts connected to our EQL ps4100. I recently removed our PS6000x as it's pretty small storage and performance wasn't great when I tested it16:52
tgm4883I'm not 100% sure that my predecessor had any of this tuned for anything, and AFAIK there was no performance testing done on any of this16:53
RoyKwith a single iSCSI connection, you'll probably only get 1Gbps throughput because of how LACP works16:53
RoyKdepending on setup, though16:53
lordievaderGood evening.16:53
tgm4883RoyK: our 10k EQL was barely outperforming the QNAP in these IOMeter tests16:54
tgm4883Then again, I'm not 100% I like any of these IOps numbers I'm getting compared to what sales is telling me on the new SANs they want me to buy16:54
RoyKthesheff17: doesn't surprise me, but then, EQL have some nifty features for moving volumes around easily. Still wouldn't recommend them. We're looking at Compellent now to see if that can replace what we're using now16:55
RoyKDell's talking about allowing replication between EQL and Compellent soon (Q2 2015 or so), which could mean we can use the current hardware for the DR site16:56
RoyKCompellent looks a *lot* better, albeit a bit more expensive at first16:56
tgm4883RoyK: yea there was some fancy auto-management between the two SANs when I had both in production, but it was renewal time and I didn't feel like paying $4K for 6TB of storage when performance was so bad16:56
RoyKlol16:57
RoyKit's rather convenient to be able to upgrade a controller at a time, though, something you can't do with things like a QNAP or some homebrew ZFS-setup16:58
tgm4883true, although we have blackout days every quarter that we can take any system down we need to work on16:59
tgm4883but yea, that is pretty convenient to work on16:59
RoyK*days*?16:59
tgm4883sorry, just 1 blackout sunday16:59
tgm48834 per year16:59
RoyKok17:00
RoyKthat's convenient17:00
tgm4883yea it's a carryover from the bad old days17:00
tgm4883mostly we don't need to take stuff down anymore because everything is so redundant17:00
RoyKanother thing that's rather annoying with EQL is that its controllers are active/standby, not active/active17:01
bcsallerThis is an odd one, I'm sprinting on location and our partner here is seeing an odd case as they spin up VMs. Under high network load, when they bring up a container with a previously used IP but a changed MAC address, the ARP cache seems to persist long enough that they lose traffic. This isn't something they were seeing under Lucid but are now seeing as they transition to Trusty. Anyone seen something like this?17:15
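(Editor's sketch of a common workaround for the symptom bcsaller describes — announce the new IP-to-MAC mapping with gratuitous ARP and flush stale neighbour entries. Interface and address are placeholders; this is a mitigation sketch, not a confirmed fix for the Lucid-to-Trusty change.)

```
# From the host (or inside the container) after it comes up:
arping -c 3 -U -I eth0 10.0.3.42      # gratuitous ARP from the new MAC

# Drop any stale entry in the local neighbour cache:
sudo ip neigh flush to 10.0.3.42
```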
=== armenb_ is now known as armenb
=== |Jurgen| is now known as |J-W|
ChiarotQuestion for you guys, I have a ubuntu server that is doing DHCP, I'd like to add a PXE server into my network as well, what would I add into the dhcpd.conf to ensure it points to the correct box for pxe?21:04
lordievaderChiarot: filename "pxelinux.0"; and probably "next-server".21:13
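(Editor's sketch expanding lordievader's answer into a dhcpd.conf fragment; the subnet and the 192.168.1.50 PXE/TFTP host are placeholders.)

```
# /etc/dhcp/dhcpd.conf
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.50;       # the separate PXE/TFTP box
    filename "pxelinux.0";          # boot file to request from it
}
```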
Hornetevening21:20
Hornetsort of hybrid issue here; having 'fun' getting my sshfs mount to behave as expected21:20
Horneta mv command as the remote user in terminal, works, but sshfs gets operation not permitted errors21:20
Horneteverything has correct rights21:21
Hornetonly possible cause I can think of is the move is between two physical remote drives21:21
Hornetbut both in the same box21:21
Hornetany insights much appreciated21:22
sarnoldHornet: seems like you're not alone, at least one other person reported cross-mount moves don't work through sshfs: http://ubuntuforums.org/showthread.php?t=212405822:01
sarnold(as much I dislike forum-based research, this one seems clueful :)22:02
Hornetsarnold: already seen that, but thanks22:02
HornetI need to get sshfs working as I need transparent access so that file utilities can work, I don't just want to use gui file managers22:03
sarnoldHornet: you could use multiple sshfs mounts, one per underlying filesystem22:03
sarnoldHornet: then your local 'mv' command would recognize the situation correctly and perform the copy + unlink exactly as it should22:04
Hornetconsidered it, it'd break smart transfers, eg it'd download from the server then upload again22:04
Hornetrather than just moving on the host22:04
sarnoldto be honest, I can't expect any networked filesystem to get that correct22:04
Hornetsshfs does22:04
sarnoldI thought you were here because it doesn't get it correct?22:05
Hornetif I move the files to the destination, or close to it, and do mass complex renames, it does it instantly22:05
Hornetit just can't move things from the original source correctly22:05
Hornetwhich is the crux of the matter22:05
Hornetwe're talking about 60gb of files presently22:05
Hornetwill vary wildly per use case though22:05
Horneteg, on the server, /volatile represents a single 1tb drive in the box22:07
Hornetthe main filesystem is a 6tb raid array22:07
Hornetif I try to use sshfs to move files from /volatile to /home it screws the proverbial pooch22:07
Hornetbut if I move the files there first with mv, then do my mass renaming in situ, it works22:08
Hornetthis is basically bug territory, but sshfs are -glacial- at fixing things22:08
sarnoldand this one would take a fair amount of fiddling to get right22:09
Hornetsftp seems to bypass the issue22:10
Hornetbut when you look into sftp and fuse, guess where you end up?22:10
Hornetsshfs22:10
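(Editor's sketch of sarnold's per-filesystem suggestion from earlier: with one sshfs mount per remote filesystem, the local mv sees a cross-device move and falls back to copy + unlink itself. Host and paths are placeholders; note this re-transfers the data over the network, which is exactly the cost Hornet wants to avoid.)

```
sshfs user@server:/volatile ~/remote/volatile
sshfs user@server:/home     ~/remote/home

# mv now detects the cross-device case and does copy + unlink locally:
mv ~/remote/volatile/bigfile ~/remote/home/
```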
hacktronany advice would be much appreciated... I have an ubuntu server that as far as I know is working fine, hosting apkapps.com. But after going for about a week with no reboot the site will time out until I reboot again. that is one issue.22:11
hacktronthe other is I cannot access from any device within my office work network22:12
hacktroncurrently it just times out22:12
Hornetlatter sounds like router/firewall issues22:12
Hornetformer, god knows22:12
sarnoldbut with an sftp, at least you -know- that your moves aren't necessarily atomic; when 'source' and 'destination' are both on the same filesystem mount point, you'd expect a rename() system call to be atomic, but sshfs cannot provide that guarantee if it does system(mv) behind the scenes. I wonder what would break if they implemented it.22:12
Hornetsarnold, I've tried the classic workaround=rename, no luck there22:13
Hornetfirst thing I tried22:13
sarnoldhacktron: from the internal systems, look up apkapps.com -- I suspect you're getting an external IP, one that the internal machines can't route to.22:13
hacktronwell I can access other pages on the server just not the apkapps.com site22:13
hacktronfor example the ip for the server is 173.55.24.67 22:14
hacktronif I type that directly I will get the page being served22:15
hacktronbut if I go to 173.55.24.67/apkapps/apk, which is where the site is being held, it times out22:16
hacktronapkapps.com just points to that location22:16
hallynzul: hm, so libvirt virstoragetest is hanging in buildds on:   cmd = virCommandNewArgList(qemuimg, "create", "-f", "qcow2", NULL);22:16
hacktronthat is how I currently have it setup, not sure if that is correct22:16
hallyn    virCommandAddArgFormat(cmd, "-obacking_file=raw,backing_fmt=raw%s",22:16
hallyn                           compat ? ",compat=0.10" : "");22:16
hallyn    virCommandAddArg(cmd, "qcow2");22:16
sarnoldhacktron: does apkapps.com configuration or code do any hostname logging rather than IP logging?22:17
hacktronin apache2 config file is setup with hostname to point to directory,22:18
Hornetslight tangent to the prior issue: what's the sanest way to move many files via a mv command? I've tried command line substitution but that seemed to not work22:18
hacktronnot sure if that is what you are asking about.22:19
sarnoldHornet: I often put together pretty hacky shell commands: cd source/ ; for f in *foo* ; do mv $f ../destination/ ; done22:20
Hornetah, of course22:21
sarnoldhacktron: heh, not really; sometimes log files are set to log hostnames instead of IP addresses; that's usually a bad idea because (a) anyone can set reverse dns to anything they want (b) when there are timeouts resolving names, odd things can happen.22:21
Hornetwould have to make a comprehesive list though, the files are awkwardly named22:21
sarnoldHornet: the "string operations" here are sometimes useful for making similar for loops when things get worse: http://tldp.org/LDP/abs/html/refcards.html22:21
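(Editor's hardening of sarnold's loop above: quoting handles names with spaces, and `--` stops mv from treating dash-prefixed names as options. The file names are made-up examples.)

```shell
# Set up a couple of awkwardly named files to move:
mkdir -p source destination
touch "source/a foo 1.txt" "source/-foo2.txt"

# Quoted, option-safe version of the loop:
( cd source && for f in *foo* ; do mv -- "$f" ../destination/ ; done )

ls destination
```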
Hornetnoted, thanks22:23
hacktronsarnold__: do you have a resource that can help me setup correctly, I just followed this http://httpd.apache.org/docs/2.2/vhosts/name-based.html22:23
sarnoldhacktron: sorry, that's the same guide (or the 2.4 version) I use when trying to figure my way around apache. :/22:25
hacktronsarnold__: ok thanks..so make sure its logging ip address not hostname?22:40
sarnoldhacktron: rather the other way around -- logging hostnames rather than IPs. (This is not common.)22:41
hacktronok tyvm22:42
=== not_phunyguy is now known as phunyguy

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!