=== pHcF_ is now known as pHcF
[04:32] is setting up MAAS on Ubuntu 14 server inside a VM a proper method of learning iptables?
=== a1berto_ is now known as a1berto
[08:08] What's the appropriate way of mounting volumes that require net access in Ubuntu and thus cannot be automatically mounted at boot time?
[09:23] coreycb, zul: all uploaded apart from keystone (still running unit tests) and nova (pending zul actions)
[10:16] jamespage, is there an SRU bug (or something similar) for OpenStack 2014.1.1? And is that going into Trusty proper or will the UCA be created for it?
[10:17] med_, bug https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1328134
[10:17] Launchpad bug 1328134 in nova "[SRU] packaging for openstack icehouse 2014.1.1 release" [Undecided,New]
[10:17] med_, no UCA for trusty - this goes in as an SRU
[10:17] med_, just pending the last few uploads today and we'll get it acked into -proposed
[10:18] nod
[10:18] I misread the tail of that bug, it looked like everything was marked invalid
=== mrmist is now known as mist
[10:19] and thanks james. Much appreciated.
[10:19] and thanks zul (didn't think you'd be on yet)
[10:32] jamespage: would you eventually have time to review http://bugs.launchpad.net/bugs/1313602 ?
[10:32] Launchpad bug 1313602 in nova-cloud-controller "Nova-cloud-controller charms failed to sync ssh keys between compute nodes" [Undecided,In progress]
[10:33] jamespage: you or someone else that does openstack charming
[10:33] caribou, sure - not sure why it's not in the review queue.
[10:33] caribou, that bug has no linked branches?
[10:34] jamespage: yeah, I noticed that
[10:34] morning
[10:35] jamespage: let me dig up the MP for you
[10:35] caribou, can you link the branches to that bug report and set the merges back to ready for review?
[10:36] jamespage: sure; I'm just surprised that the LP: #bug in the changelog went unnoticed by Launchpad
[10:44] jamespage: I set the status to 'resubmit' by mistake. How can I get them back to 'ready for review'? By resubmitting them?
[10:44] caribou, URL?
[10:45] jamespage: https://code.launchpad.net/~louis-bouchard/charms/precise/nova-compute/lp1313602-multiline-known-hosts/+merge/218440
[10:45] jamespage: and https://code.launchpad.net/~louis-bouchard/charms/precise/nova-cloud-controller/lp1313602-multiline-known-hosts/+merge/218442
[10:46] caribou, hmm - that looks OK
[11:01] what's the limit of concurrent connections for one 64-core CPU server machine? what's the bottleneck? 64-core CPU and 2 TB of RAM.
[11:03] Voyage: for what?
[11:04] Voyage, by default probably 1024 - the open file descriptor limit is low by default, which constrains the number of network connections a single process can support
[11:04] note that is for 1 process
[11:04] jamespage: but that is a software limit, the question is regarding hardware.
[11:05] OpenTokix, in which case the answer is it depends on what software you are running
[11:05] Voyage: if it has a 1Gbps interface and the packets are one ethernet frame per client, around 60k connections, since the 1Gbps will top out at about 120kpps (x10 for 10GE)
[11:05] jamespage: yes =) as I asked "for what?"
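To make the file descriptor limit mentioned above concrete, here is a minimal sketch of how it is usually inspected and raised on Ubuntu; the 65536 value and the www-data user are illustrative assumptions, not figures from the discussion.

    # per-process soft limit for the current shell (often 1024 by default)
    ulimit -n
    # raise it for this shell session, up to the hard limit
    ulimit -n 65536
    # make it persistent for a given user via /etc/security/limits.conf, e.g.:
    #   www-data  soft  nofile  65536
    #   www-data  hard  nofile  65536
    # the system-wide ceiling on open files is separate:
    sysctl fs.file-max

Each process gets its own limit, which is why a single server process hits this wall long before the hardware does.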
[11:05] OpenTokix, indeed :-)
[11:06] OpenTokix, but the open fd limit is always pretty much the first bottleneck unless the software is really crappy :-)
[11:06] jamespage: indeed it is
[11:07] jamespage: and it is also very, very traffic dependent where the bottleneck is
[11:07] indeed
[11:07] I love 'it depends' answers :-)
[11:07] =)
[11:07] It always does
[11:07] I have seen some really weird bottlenecks on high-volume stuff
[11:07] Like running out of source ports, for instance
[11:07] indeed - I reckon the best answer is really to configure, test, see what breaks, re-configure....
[11:07] with a 3s fin_timeout =)
[11:08] of course generating that sort of load does become tricky
[11:08] impossible to benchmark
[11:08] btw. =)
[11:09] The biggest houses for benchmarking can only muster up about 5000 unique hosts
[11:10] OpenTokix, that sounds expensive but 100% correct
[11:10] OpenTokix, I used to use a Testing as a Service product that ran from EC2 - but that did not go that large
[11:10] 100's
[11:10] jamespage: crazy expensive, between 10-50k dollars for one test
[11:11] OpenTokix, esp when someone forgets to bump the ulimit and you hit the wall fast!
[11:11] jamespage: EC2 is too slow for the stuff I was testing
[11:11] jamespage: I designed the system we tested, and I didn't forget =)
[11:11] jamespage: handled about 3 billion requests/day from millions of IPs
[11:11] OpenTokix, awesome
[11:13] it was =)
[11:13] Then the company got acquired by idiots and I left =)
[11:13] jamespage: and needless to say, their availability is going down =)
[11:48] jamespage, so normally, what's the max no. of connections that reasonably powerful servers reach?
[11:49] Voyage, it really does depend on what you are actually doing on it
[11:49] OpenTokix, the 60k connections is not the limit on the server side, correct? it's on the client side (which is of course never reached)
[11:50] OpenTokix, a server can have more than 60k connections concurrently, right?
[11:50] OpenTokix, the file descriptor limit can be customized to be more than millions, no?
[11:51] jamespage, web server. apache or apache tomcat..
[11:52] Voyage: on a single 1Gbps NIC you have a limit of about 120kpps
[11:52] Voyage: generally
[11:52] Voyage: you can do some tweaks, if you know _a lot_ about your traffic, to reach higher
[11:52] (at the cost of latency)
[11:53] ignore latency
[11:56] You can't get a 1G link much higher than 120kpps
[11:56] for internet traffic
=== Ursinha is now known as Ursinha-afk
[13:11] عاوز زب من الزقازيق
[13:11] droven in English please
[13:17] عاوز زب من الزقازيق
[13:18] !english|droven
[13:18] droven: The main Ubuntu channels require that you speak in calm, polite English. For other languages, please visit https://wiki.ubuntu.com/IRC/ChannelList
[13:21] عاوز زب من الزقازيق
[13:22] Hi everyone! I have a low-traffic website, audio streaming, archiving, and file serving needs to accomplish, and I have two computers to do it. Should I install Ubuntu Server on both of them, using one as backup? Or should I use MAAS? My needs are not complex, so using a cloud-style elastic architecture is probably overkill, right? I want to use Juju to set up my stack, though. Thanks!
[13:38] davidparker: maybe you need only an active/passive cluster?
[13:39] davidparker: something like this http://www.thomas-krenn.com/en/wiki/HA_Cluster_with_Linux_Containers_based_on_Heartbeat,_Pacemaker,_DRBD_and_LXC
[13:41] davidparker: if you want to use Juju in your stack, and you have two computers, then I think your options are: separate local environments on each computer, one manual provider environment covering both, or MAAS. For two machines, MAAS is overkill I think. I'd probably go with the manual provider.
[13:42] davidparker: I suggest that you use the juju stable PPA for now. I'm working on getting the latest updates to Trusty, and making that happen faster. But right now the PPA is the best option.
[13:54] Is using Juju also overkill? I've installed Ubuntu Server with the following packages: apache, PHP, MySQL, DNS server, OpenSSH, etc. So if that's all I'm using, plus some audio-specific stuff like Rotter and LiquidSoap, then why use Juju? I've already got all the software I need on Ubuntu Server.
[13:55] Then I could just use the second computer as a RAID, or some type of backup.
=== Ursinha-afk is now known as Ursinha
[14:27] zul: all right, let's get some libvirt 1.2.5 up in this utopic. (i wanna finish the cgmanager patch against it) are you working on it right now? (if not i'll take a look)
[14:30] hallyn: please have a look
[14:35] k
[14:50] what would be the easy way to change the locale to a Belgian / Dutch layout through the CLI?
[14:53] DenBeiren: LANG is usually set in /etc/default/locale. Older releases used /etc/environment
[14:53] DenBeiren: not sure about the keyboard though, sorry.
[14:53] "sudo dpkg-reconfigure console-setup" maybe. I'm not sure if that's still used.
[14:54] Maybe /etc/default/keyboard as provided by the keyboard-configuration package.
[14:54] The loadkeys command will change things immediately for you on a VT if you need it updated without a reboot.
[14:54] No idea about X.
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== |Jurgen| is now known as |J-W|
=== |J-W| is now known as |Jurgen|
[16:12] Hi all. I've got an mdadm RAID-10 array, and want to upgrade the OS to a new version with a fresh install. Can I just copy the /etc/mdadm/mdadm.conf to the new install, and start/mount the array?
[16:15] Generally speaking, shouldn't iSCSI be faster than NFS? I've been doing some testing, and I'm getting better performance with NFS. Is my thinking backwards?
[16:15] Meaning your tests are showing that NFS is faster? (for clarification for the group)
[16:15] exactly
[16:15] I've been using IOMeter for testing
[16:16] I'm getting higher IOPS and more throughput
[16:17] Tests consist of SAN storage mounted from a QNAP on an ESXi host using both iSCSI and NFS (two different volumes). A virtual machine on each of the storage types runs IOMeter
[16:19] Sorry, I don't have any experience with iSCSI.
[16:19] I'm sure there is someone in the channel that does.
[16:24] tgm4883: sync writes to NFS should be slower than writes to iSCSI, but NFS also supports async writes, meaning the writes are buffered (at the VFS layer IIRC)
[16:25] iSCSI is always sync on the block level
[16:25] (but then, the filesystem (or VFS?) should do some caching anyway)
[16:26] tgm4883: how do you compare these two? what sort of NFS server? what filesystem on top of the iSCSI thing?
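A minimal sketch of the sync/async distinction described above, using made-up addresses and paths; async (the usual default) lets writes be buffered, while sync forces each write to stable storage before the call returns.

    # client-side NFS mount, async vs sync (192.0.2.10 and /export/data are hypothetical)
    sudo mount -t nfs -o vers=3,async 192.0.2.10:/export/data /mnt/nfs-async
    sudo mount -t nfs -o vers=3,sync  192.0.2.10:/export/data /mnt/nfs-sync
    # the server side has its own sync/async choice in /etc/exports, e.g.:
    #   /export/data  192.0.2.0/24(rw,async,no_subtree_check)

Benchmarking an async NFS datastore against iSCSI, which commits at the block level, can therefore flatter NFS on write-heavy tests.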
[16:26] and btw, for iSCSI, you should use jumbo frames
[16:27] tgm4883: standard 1500-byte frames generate rather a lot of small frames; network stacks and switches are happier with 9000-byte jumbo frames
[16:28] tgm4883: but please describe your infrastructure, network etc
=== a1berto_ is now known as a1berto
[16:31] RoyK: The SAN is a QNAP server, which I believe uses NFSv3. The filesystem on the iSCSI mount is VMFS5. I'm not currently using jumbo frames, but I'll check with my networking team to ensure that is possible. I'd also have to check with them to see what switches we're using; I know we have a combination of Cisco and Brocade/Foundry.
[16:32] I'm comparing them by running IOMeter on an Ubuntu 14.04 VM. I have it located on one type of storage, run the test, then use vMotion to move it to the other storage and test again
=== Ursinha is now known as Ursinha-afk
[16:36] guess all switches produced in the last 10 years or so should support jumbo frames
[16:36] not necessarily enabled in the config, though
[16:36] yea, I wouldn't be surprised if it's not enabled
=== |Jurgen| is now known as Dieltjes
=== Dieltjes is now known as Dieltjens
[16:37] but.. do I understand you correctly? Are you comparing VMFS5 on iSCSI with NFS on a Linux VM?
[16:38] if so, that's not quite fair
=== Dieltjens is now known as |Jurgen|
[16:38] not exactly. both are mounted directly on the ESXi server. VMFS5 on iSCSI, NFSv3 (for NFS).
[16:39] Both are mounted on the ESXi server, not on the IOMeter client
[16:40] not sure, then - we're using iSCSI for most stuff on ESXi (Dell EqualLogic storage)
[16:41] those boxes just support direct block access anyway
[16:43] tgm4883: what sort of QNAP thing? The QNAPs I've used just use Linux and software RAID/LVM
=== Ursinha-afk is now known as Ursinha
[16:44] tgm4883: nothing wrong with that, though...
[16:45] tgm4883: btw, check the output of 'ifconfig' on that QNAP thing - check the MTU
[16:46] RoyK: it's a TS-EC1279U-RP
[16:47] MTU's at 1500 in the web interface
[16:47] k
[16:47] I'll check with our networking team on the switches, they are in a meeting right now
[16:48] ok
[16:48] Wiring-wise, the ESXi hosts have dual 1Gbit links to the switch; the QNAP has 4 x 1Gbit links to the switch in an LACP config
[16:50] tgm4883: not sure if it's relevant, but we don't use LACP in our setups - we use quad 1G links to the EQL storage with multipath
[16:50] LACP probably doesn't scale as well as iSCSI multipath
[16:50] especially with few hosts
[16:50] how many ESXi hosts?
[16:51] Which EQL?
[16:51] EQL as in EqualLogic - we have a few
[16:51] from large/slowish ones to small/fast ones (15k in RAID 1+0)
[16:52] So a bit more about our environment: I'm testing the QNAP for another team, so it's only got one host attached to it. We've got 7 hosts connected to our EQL PS4100. I recently removed our PS6000x as it's pretty small storage and performance wasn't great when I tested it
[16:53] I'm not 100% sure that my predecessor had any of this tuned for anything, and AFAIK there was no performance testing done on any of this
[16:53] with a single iSCSI connection, you'll probably only get 1Gbps throughput because of how LACP works
[16:53] depending on setup, though
[16:53] Good evening.
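A short sketch of the MTU checks discussed above; eth1 and the target address are hypothetical, and the switch ports and the QNAP must be set to the same MTU for jumbo frames to help.

    # current MTU on the storage-facing interface
    ip link show eth1            # or: ifconfig eth1 | grep -i mtu
    # raise it to a jumbo-frame size (lost on reboot unless made persistent,
    # e.g. an 'mtu 9000' line in the iface stanza in /etc/network/interfaces)
    sudo ip link set dev eth1 mtu 9000
    # end-to-end test: send a full-size frame with the don't-fragment bit set
    ping -M do -s 8972 192.0.2.20   # 8972 = 9000 - 20 (IP) - 8 (ICMP) header bytes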
[16:54] RoyK: our 10k EQL was barely outperforming the QNAP in these IOMeter tests
[16:54] Then again, I'm not 100% sure I like any of these IOPS numbers I'm getting compared to what sales is telling me on the new SANs they want me to buy
[16:55] thesheff17: doesn't surprise me, but then, EQL has some nifty features for moving volumes around easily. Still wouldn't recommend them. We're looking at Compellent now to see if that can replace what we're using now
[16:56] Dell's talking about allowing replication between EQL and Compellent soon (Q2 2015 or so), which could mean we can use the current hardware for the DR site
[16:56] Compellent looks a *lot* better, albeit a bit more expensive at first
[16:56] RoyK: yea there was some fancy auto-management between the two SANs when I had both in production, but it was renewal time and I didn't feel like paying $4K for 6TB of storage when performance was so bad
[16:57] lol
[16:58] it's rather convenient to be able to upgrade one controller at a time, though, something you can't do with things like a QNAP or some homebrew ZFS setup
[16:59] true, although we have blackout days every quarter when we can take down any system we need to work on
[16:59] but yea, that is pretty convenient to work on
[16:59] *days*?
[16:59] sorry, just 1 blackout Sunday
[16:59] 4 per year
[17:00] ok
[17:00] that's convenient
[17:00] yea it's a carryover from the bad old days
[17:00] mostly we don't need to take stuff down anymore because everything is so redundant
[17:01] another thing that's rather annoying with EQL is that its controllers are active/standby, not active/active
[17:15] This is an odd one: I'm sprinting on location and our partner here is seeing an odd case as they spin up VMs. Under high network load, when they bring up a container with a previously used IP but a changed MAC address, the ARP cache seems to persist long enough that they lose traffic. This isn't something they were seeing under Lucid but are now seeing as they transition to Trusty. Anyone seen something like this?
=== armenb_ is now known as armenb
=== |Jurgen| is now known as |J-W|
[21:04] Question for you guys: I have an Ubuntu server that is doing DHCP, and I'd like to add a PXE server into my network as well. What would I add into the dhcpd.conf to ensure it points to the correct box for PXE?
[21:13] Chiarot: filename "pxelinux.0"; and probably "next-server".
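A minimal isc-dhcp-server sketch of those two options; all addresses here are hypothetical, with next-server pointing at the separate PXE/TFTP box.

    # /etc/dhcp/dhcpd.conf
    subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.100 192.168.10.200;
        option routers 192.168.10.1;
        next-server 192.168.10.5;      # the box running the TFTP/PXE service
        filename "pxelinux.0";         # boot file that box serves
    }

After editing, something like "sudo service isc-dhcp-server restart" picks up the change.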
[21:20] evening
[21:20] sort of hybrid issue here; having 'fun' getting my sshfs mount to behave as expected
[21:20] a mv command as the remote user in a terminal works, but sshfs gets 'operation not permitted' errors
[21:21] everything has the correct rights
[21:21] the only possible cause I can think of is that the move is between two physical remote drives
[21:21] but both are in the same box
[21:22] any insights much appreciated
[22:01] Hornet: seems like you're not alone, at least one other person reported that cross-mount moves don't work through sshfs: http://ubuntuforums.org/showthread.php?t=2124058
[22:02] (as much as I dislike forum-based research, this one seems clueful :)
[22:02] sarnold: already seen that, but thanks
[22:03] I need to get sshfs working as I need transparent access so that file utilities can work; I don't just want to use GUI file managers
[22:03] Hornet: you could use multiple sshfs mounts, one per underlying filesystem
[22:04] Hornet: then your local 'mv' command would recognize the situation correctly and perform the copy + unlink exactly as it should
[22:04] considered it, but it'd break smart transfers, e.g. it'd download from the server then upload again
[22:04] rather than just moving on the host
[22:04] to be honest, I can't expect any networked filesystem to get that correct
[22:05] sshfs does
[22:05] I thought you were here because it doesn't get it correct?
[22:05] if I move the files to the destination, or close to it, and do mass complex renames, it does it instantly
[22:05] it just can't move things from the original source correctly
[22:05] which is the crux of the matter
[22:05] we're talking about 60GB of files presently
[22:05] will vary wildly per use case though
[22:07] e.g., on the server, /volatile represents a single 1TB drive in the box
[22:07] the main filesystem is a 6TB RAID array
[22:07] if I try to use sshfs to move files from /volatile to /home it screws the proverbial pooch
[22:08] but if I move the files there first with mv, then do my mass renaming in situ, it works
[22:08] this is basically bug territory, but the sshfs developers are -glacial- at fixing things
[22:09] and this one would take a fair amount of fiddling to get right
[22:10] sftp seems to bypass the issue
[22:10] but when you look into sftp and fuse, guess where you end up?
[22:10] sshfs
[22:11] any advice would be much appreciated... I have an Ubuntu server that as far as I know is working fine... hosting apkapps.com there. But after going for about a week with no reboot, the site will time out until I reboot again. That is one issue.
[22:12] the other is that I cannot access it from any device within my office work network
[22:12] currently it just times out
[22:12] the latter sounds like router/firewall issues
[22:12] the former, god knows
[22:12] but with an sftp, at least you -know- that your moves aren't necessarily atomic; when 'source' and 'destination' are both on the same filesystem mount point, you'd expect a rename() system call to be atomic, but sshfs cannot provide that guarantee if it does system(mv) behind the scenes. I wonder what would break if they implemented it.
[22:13] sarnold, I've tried the classic workaround=rename, no luck there
[22:13] first thing I tried
[22:13] hacktron: from the internal systems, look up apkapps.com -- I suspect you're getting an external IP, one that the internal machines can't route to.
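One way to run the lookup suggested above, sketched with standard tools; apkapps.com is the name from the discussion, but the interpretation in the comments is only a guess until the outputs are compared.

    # from a machine inside the office network:
    host apkapps.com                 # or: dig +short apkapps.com
    # on the server itself, list the addresses it actually has:
    ip addr show
    # if the name resolves to the public IP and the router can't hairpin
    # (NAT loopback), internal clients will time out exactly as described;
    # split DNS or an /etc/hosts entry on the internal clients is the usual workaround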
[22:13] well I can access other pages on the server, just not the apkapps.com site
[22:14] for example, the IP for the server is 173.55.24.67
[22:15] if I type that directly I will get the page being served
[22:16] but if I go to 173.55.24.67/apkapps/apk, which is where the site is being held, it times out
[22:16] apkapps.com just points to that location
[22:16] zul: hm, so libvirt virstoragetest is hanging in buildds on: cmd = virCommandNewArgList(qemuimg, "create", "-f", "qcow2", NULL);
[22:16] that is how I currently have it set up, not sure if that is correct
[22:16] virCommandAddArgFormat(cmd, "-obacking_file=raw,backing_fmt=raw%s",
[22:16] compat ? ",compat=0.10" : "");
[22:16] virCommandAddArg(cmd, "qcow2");
[22:17] hacktron: does the apkapps.com configuration or code do any hostname logging rather than IP logging?
[22:18] in the apache2 config file it is set up with a hostname pointing to the directory
[22:18] slight tangent to the prior issue: what's the sanest way to move many files via a mv command? I've tried command-line substitution but that seemed to not work
[22:19] not sure if that is what you are asking about.
[22:20] Hornet: I often put together pretty hacky shell commands: cd source/ ; for f in *foo* ; do mv "$f" ../destination/ ; done
[22:21] ah, of course
[22:21] hacktron: heh, not really; sometimes log files are set to log hostnames instead of IP addresses; that's usually a bad idea because (a) anyone can set reverse DNS to anything they want (b) when there are timeouts resolving names, odd things can happen.
[22:21] would have to make a comprehensive list though, the files are awkwardly named
[22:21] Hornet: the "string operations" here are sometimes useful for making similar for loops when things get worse: http://tldp.org/LDP/abs/html/refcards.html
[22:23] noted, thanks
[22:23] sarnold__: do you have a resource that can help me set this up correctly? I just followed this http://httpd.apache.org/docs/2.2/vhosts/name-based.html
[22:25] hacktron: sorry, that's the same guide (or the 2.4 version) I use when trying to figure my way around apache. :/
[22:40] sarnold__: ok thanks.. so make sure it's logging IP addresses, not hostnames?
[22:41] hacktron: rather the other way around -- logging hostnames rather than IPs. (This is not common.)
[22:42] ok tyvm
=== not_phunyguy is now known as phunyguy
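For reference, a minimal name-based vhost of the kind that guide describes, sketched for the apkapps.com case; the paths are assumptions rather than hacktron's actual config, and the access log records client IPs since HostnameLookups defaults to Off.

    # /etc/apache2/sites-available/apkapps.conf (Apache 2.4; drop the .conf suffix for 2.2)
    <VirtualHost *:80>
        ServerName apkapps.com
        ServerAlias www.apkapps.com
        DocumentRoot /var/www/apkapps/apk
        ErrorLog  ${APACHE_LOG_DIR}/apkapps-error.log
        CustomLog ${APACHE_LOG_DIR}/apkapps-access.log combined
    </VirtualHost>
    # then: sudo a2ensite apkapps && sudo service apache2 reload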