[04:32] <IbnSaeed> is it a proper method of learning iptables, setting up MAAS on Ubuntu 14.04 server inside a VM?
[08:08] <Pupeno> What's the appropriate way of mounting volumes that require net access in Ubuntu and thus cannot be automatically mounted at boot time?
[09:23] <jamespage> coreycb, zul: all uploaded apart from keystone (still running unit tests) and nova (pending zul actions)
[10:16] <med_> jamespage, is there an SRU bug (or something similar) for OpenStack 2014.1.1 ? And is that going into Trusty proper or will the UCA be created for it?
[10:17] <jamespage> med_, bug https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1328134
[10:17] <jamespage> med_, no UCA for trusty - this goes in as an SRU
[10:17] <jamespage> med_, just pending the last few uploads today and we'll get it acked into -proposed
[10:18] <med_> nod
[10:18] <med_> I misread the tail of that bug, looked like everything was marked invalid
[10:19] <med_> and thanks james. Much appreciated.
[10:19] <med_> and thanks zul (didn't think you'd be on yet)
[10:32] <caribou> jamespage: would you eventually have time to review http://bugs.launchpad.net/bugs/1313602
[10:33] <caribou> jamespage: you or someone else that does openstack charming
[10:33] <jamespage> caribou, sure - not sure why it's not in the review queue.
[10:33] <jamespage> caribou, that bug has no linked branches?
[10:34] <caribou> jamespage: yeah, I noticed that
[10:34] <pmatulis> morning
[10:35] <caribou> jamespage: let me dig the MP for you
[10:35] <jamespage> caribou, can you link the branches to that bug report and set the merges back to ready for review.
[10:36] <caribou> jamespage: sure; I'm just surprised that the LP: #bug in the changelog went unnoticed by launchpad
[10:44] <caribou> jamespage: I put the status to 'resubmit' by mistake. How can I get them back to 'ready for review'? By resubmitting them?
[10:44] <jamespage> caribou, URL?
[10:45] <caribou> jamespage: https://code.launchpad.net/~louis-bouchard/charms/precise/nova-compute/lp1313602-multiline-known-hosts/+merge/218440
[10:45] <caribou> jamespage: and https://code.launchpad.net/~louis-bouchard/charms/precise/nova-cloud-controller/lp1313602-multiline-known-hosts/+merge/218442
[10:46] <jamespage> caribou, hmm - that looks OK
[11:01] <Voyage>  what's the limit of concurrent connections on one 64-core CPU server machine? what's the bottleneck? 64-core CPU and 2 TB RAM.
[11:03] <OpenTokix> Voyage: for what?
[11:04] <jamespage> Voyage, by default probably 1024 - the open file descriptor limit is low by default, which constrains the number of network connections a single process can support
[11:04] <jamespage> note that is for 1 process
[11:04] <OpenTokix> jamespage: but that is a software limit, the question is regarding hardware.
[11:05] <jamespage> OpenTokix, in which case the answer is it depends on what software you are running
[11:05] <OpenTokix> Voyage: if it has a 1Gbps interface and the packets are one ethernet frame per client, around 60k connections, since the 1Gbps will top out at about 120kpps (x10 for 10GE)
[11:05] <OpenTokix> jamespage: yes =) as I asked "for what?"
[11:05] <jamespage> OpenTokix, indeed :-)
[11:06] <jamespage> OpenTokix, but the open fd limit is pretty much always the first bottleneck unless the software is really crappy :-)
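The fd-limit checks being discussed can be sketched in a shell session like this (a sketch: the values and the www-data user are illustrative, and the limits.conf path is the usual Ubuntu default):

```shell
# Show the per-process open file descriptor limit (1024 is a common default)
ulimit -n

# Raise the soft limit for the current shell, up to the hard limit
# (guarded so a restricted environment doesn't abort the session)
ulimit -S -n 4096 2>/dev/null || true
ulimit -n

# Persistent limits go in /etc/security/limits.conf, e.g. (illustrative):
#   www-data  soft  nofile  65536
#   www-data  hard  nofile  65536
```

A daemon started at boot reads these via PAM, so a re-login (or service restart) is needed before the new limit takes effect.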
[11:06] <OpenTokix> jamespage: indeed it is
[11:07] <OpenTokix> jamespage: and it is also very very traffic dependent where the bottleneck is
[11:07] <jamespage> indeed
[11:07] <jamespage> I love 'it depends' answers :-)
[11:07] <OpenTokix> =)
[11:07] <OpenTokix> It always does
[11:07] <OpenTokix> I have seen some really weird bottlenecks on high volume stuff
[11:07] <OpenTokix> Like running out of source ports for instance
[11:07] <jamespage> indeed - I reckon the best answer is really to configure, test, see what breaks, re-configure....
[11:07] <OpenTokix> with a 3s fin_timeout =)
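A back-of-envelope sketch of the source-port exhaustion OpenTokix describes, assuming typical Linux defaults (ephemeral ports 32768-60999, and a closed socket holding its port for roughly 60 seconds):

```shell
# Width of the default ephemeral (source) port range
ports=$((60999 - 32768 + 1))           # 28232 ports

# How long each closed connection holds its source port
default_linger=60                      # seconds, a typical default
short_linger=3                         # the 3s fin_timeout mentioned above

# Sustainable new connections/sec from one client IP to one destination
echo "default: $((ports / default_linger)) conns/sec"     # -> 470
echo "3s timeout: $((ports / short_linger)) conns/sec"    # -> 9410
```

So a single load-generating client tops out around a few hundred new connections per second to one destination until the linger time is tuned down, which is why the short fin_timeout mattered.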
[11:08] <jamespage> of course generating that sort of load does become tricky
[11:08] <OpenTokix> impossible to benchmark
[11:08] <OpenTokix> btw. =)
[11:09] <OpenTokix> The biggest houses for benchmarking can only muster up about 5000 unique hosts
[11:10] <jamespage> OpenTokix, that sounds expensive but 100% correct
[11:10] <jamespage> OpenTokix, I used to use a Testing as a Service product that ran from ec2 - but that did not go that large
[11:10] <jamespage> 100's
[11:10] <OpenTokix> jamespage: crazy expensive, between 10-50k dollars for one test
[11:11] <jamespage> OpenTokix, esp when someone forgets to bump the ulimit and you hit the wall fast!
[11:11] <OpenTokix> jamespage: EC2 is too slow for the stuff I was testing
[11:11] <OpenTokix> jamespage: I designed the system we tested, and I didn't forget =)
[11:11] <OpenTokix> jamespage: handled about 3 billion requests/day from millions of IPs
[11:11] <jamespage> OpenTokix, awesome
[11:13] <OpenTokix> it was =)
[11:13] <OpenTokix> Then the company got acquired by idiots and I left =)
[11:13] <OpenTokix> jamespage: and needless to say, their availability is going down =)
[11:48] <Voyage> jamespage,  so normally, what's the max number of connections that reasonably powerful servers reach?
[11:49] <jamespage> Voyage, it really does depend on what you are actually doing on it
[11:49] <Voyage> OpenTokix,  the 60k connections is not the limit on the server side, correct? it's on the client side (which is of course never reached)
[11:50] <Voyage> OpenTokix,  a server can have more than  60k connections concurrently. right?
[11:50] <Voyage> OpenTokix,  the file descriptor limit can be customized to be millions or more. no?
[11:51] <Voyage> jamespage,  web server. apache or apache tomcat..
[11:52] <OpenTokix> Voyage: on a single 1Gbps nic you have a limit of about 120kpps
[11:52] <OpenTokix> Voyage: generally
[11:52] <OpenTokix> Voyage: you can do some tweaks, if you know _a lot_ about your traffic, to reach higher
[11:52] <OpenTokix> (at the cost of latency)
[11:53] <Voyage> ignore latency
[11:56] <OpenTokix> You can't get a 1G much higher than 120kpps
[11:56] <OpenTokix> for internet traffic
[13:11] <droven> عاوز زب من الزقازيق [Arabic, roughly: "I want a dick from Zagazig"]
[13:11] <cfhowlett> droven in English please
[13:17] <droven> عاوز زب من الزقازيق
[13:18] <cfhowlett> !english|droven
[13:21] <droven> عاوز زب من الزقازيق
[13:22] <davidparker> Hi everyone! I have a low-traffic website, audio streaming, archiving, and file serving needs to accomplish, and I have two computers to do it. Should I install ubuntu server on both of them, using one as backup? Or should I use MAAS? My needs are not complex so using a cloud-style elastic architecture is probably overkill, right? I want to use Juju to set up my stack, though. Thanks!
[13:38] <remix_tj> davidparker: maybe you need only an active/passive cluster?
[13:39] <remix_tj> davidparker: something like this http://www.thomas-krenn.com/en/wiki/HA_Cluster_with_Linux_Containers_based_on_Heartbeat,_Pacemaker,_DRBD_and_LXC
[13:41] <rbasak> davidparker: if you want to use Juju in your stack, and you have two computers, then I think your options are: separate local environments on each computer, one manual provider environment covering both, or MAAS. For two machines, MAAS is overkill I think. I'd probably go with the manual provider.
[13:42] <rbasak> davidparker: I suggest that you use the juju stable PPA for now. I'm working on getting the latest updates to Trusty, and making that happen faster. But right now the PPA is the best option.
[13:54] <davidparker> Is using Juju also overkill? I've installed ubuntu server with the following packages: apache, PHP, MySQL, DNS server, OpenSSH, etc. So if that's all I'm using, plus some audio specific stuff like Rotter and LiquidSoap, then why use Juju? I've already got all the software I need on ubuntu server.
[13:55] <davidparker> Then I could just use the second computer as a RAID, or some type of backup.
[14:27] <hallyn> zul: all right, let's get some libvirt 1.2.5 up in this utopic.  (i wanna finish the cgmanager patch against it)  are you working on it right now?  (if not i'll take a look)
[14:30] <zul> hallyn:  please have a look
[14:35] <hallyn> k
[14:50] <DenBeiren> what would be the easy way to change the locale to a Belgian/Dutch layout through the CLI?
[14:53] <rbasak> DenBeiren: LANG is usually set in /etc/default/locale. Older releases used /etc/environment
[14:53] <rbasak> DenBeiren: not sure about keyboard though, sorry.
[14:53] <rbasak> "sudo dpkg-reconfigure console-setup" maybe. I'm not sure if that's still used.
[14:54] <rbasak> Maybe /etc/default/keyboard as provided by the keyboard-configuration package.
[14:54] <rbasak> The loadkeys command will change things immediately for you on a VT if you need it updated without a reboot.
[14:54] <rbasak> No idea about X.
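Collecting rbasak's pointers into one sketch (nl_BE and the "be" layout are assumptions for a Belgian Dutch setup; the system-changing commands need root and are shown commented):

```shell
# Show the session's current locale settings
locale

# System-wide locale: writes LANG= to /etc/default/locale on Ubuntu
#   sudo update-locale LANG=nl_BE.UTF-8

# Console/X keyboard layout: set XKBLAYOUT="be" in /etc/default/keyboard,
# then re-run the configuration step
#   sudo dpkg-reconfigure keyboard-configuration

# Apply a layout immediately on a virtual terminal, without rebooting
#   sudo loadkeys be
```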
[16:12] <superboot> Hi all. I've got an mdadm RAID-10 array, and want to upgrade the OS to a new version with a fresh install. Can I just copy the /etc/mdadm/mdadm.conf to the new install, and start/mount the array?
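For reference, mdadm can usually reassemble the array from the on-disk superblocks alone, so the old mdadm.conf is a convenience rather than a requirement. A sketch of the usual sequence (device and mount names are examples; run as root on the new install):

```shell
# Scan all devices for md superblocks and start any arrays found
#   mdadm --assemble --scan

# Regenerate the config from what was actually assembled
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled at boot
#   update-initramfs -u

# Then mount as usual
#   mount /dev/md0 /srv
```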
[16:15] <tgm4883> Generally speaking, shouldn't iSCSI be faster than NFS? I've been doing some testing, and I'm getting better performance with NFS. Is my thinking backwards?
[16:15] <superboot> Meaning your tests are showing that NFS is faster? (for clarification for the group)
[16:15] <tgm4883> exactly
[16:15] <tgm4883> I've been using IOMeter for testing
[16:16] <tgm4883> I'm getting higher IOPS and more throughput
[16:17] <tgm4883> Tests consist of SAN storage mounted from a QNAP on an ESXi host using both iSCSI and NFS (two different volumes), with a virtual machine on each type of storage running IOMeter
[16:19] <superboot> Sorry, I don't have any experience with iSCSI.
[16:19] <superboot> I'm sure there is someone in the channel that does.
[16:24] <RoyK> tgm4883: sync writes to NFS should be slower than writes to iSCSI, but NFS also supports async writes, meaning the writes are buffered (at the VFS layer IIRC)
[16:25] <RoyK> iSCSI is always sync on the block level
[16:25] <RoyK> (but then, the filesystem (or VFS?) should do some caching anyway)
[16:26] <RoyK> tgm4883: how do you compare these two? what sort of NFS server? what filesystem on top of the iSCSI thing?
[16:26] <RoyK> and btw, for iSCSI, you should use jumboframes
[16:27] <RoyK> tgm4883: standard 1500-byte frames will generate rather a lot of tiny frames; network stacks and switches are happier with 9000-byte jumboframes
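A rough illustration of why jumbo frames help here, plus the usual enable/verify commands (eth0 and the ping target are examples; the MTU must match on the NIC, every switch port in the path, and the QNAP, or you get silent fragmentation):

```shell
# Usable payload per frame after 20 bytes IP + 20 bytes TCP headers
std=$((1500 - 40))
jumbo=$((9000 - 40))
echo "frames to move 1 MiB: standard=$((1048576 / std)) jumbo=$((1048576 / jumbo))"

# Enable jumbo frames on an interface (run as root):
#   ip link set eth0 mtu 9000

# Verify 9000-byte frames actually pass unfragmented
# (8972 = 9000 - 20 IP header - 8 ICMP header):
#   ping -M do -s 8972 <san-ip>
```

Roughly 6x fewer frames per megabyte means 6x fewer interrupts and switch-forwarding decisions for the same throughput.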
[16:28] <RoyK> tgm4883: but please describe your infrastructure, network etc
[16:31] <tgm4883> RoyK: The SAN is a QNAP server, I believe uses NFSv3. The filesystem on the iSCSI mount is VMFS5. I'm not currently using jumbo frames, but I'll check with my networking team to ensure that is possible. I'd also have to check with them to see what switches we're using, I know we have a combination of Cisco and Brocade/Foundry.
[16:32] <tgm4883> I'm comparing them by running IOMeter on a Ubuntu 14.04 VM. I have it located on one type of storage, run the test, then use vmotion to move it to the other storage and test again
[16:36] <RoyK> I guess all switches produced in the last 10 years or so should support jumboframes
[16:36] <RoyK> not necessarily enabled in the config, though
[16:36] <tgm4883> yea, I wouldn't be surprised if it's not enabled
[16:37] <RoyK> but.. do I understand you correctly? Are you comparing vmfs5 on iscsi with nfs on a linux vm?
[16:38] <RoyK> if so, that's not quite fair
[16:38] <tgm4883> not exactly. both are mounted directly on the ESXi server. VMFS5 on iSCSI, NFS3 (for NFS).
[16:39] <tgm4883> Both are mounted on the ESXi server, not on the IOMeter client
[16:40] <RoyK> not sure, then - we're using iSCSI for most stuff on ESXi (Dell Equallogic storage)
[16:41] <RoyK> those boxes just support direct block access anyway
[16:43] <RoyK> tgm4883: what sort of QNAP thing? The QNAPs I've used, just use linux and software RAID/LVM
[16:44] <RoyK> tgm4883: nothing wrong with that, though...
[16:45] <RoyK> tgm4883: btw, check output of 'ifconfig' on that QNAP thing - check the MTU
[16:46] <tgm4883> RoyK: it's a TS-EC1279U-RP
[16:47] <tgm4883> MTU's at 1500 in the web interface
[16:47] <RoyK> k
[16:47] <tgm4883> I'll check with our networking team on the switches, they are in a meeting right now
[16:48] <RoyK> ok
[16:48] <tgm4883> Wiring wise, the ESXi hosts have dual 1Gbit links to the switch, the QNAP has 4 x 1Gbit links to the switch in LACP config
[16:50] <RoyK> tgm4883: not sure if it's relevant, but we don't use LACP in our setups - we use quad 1G links to the EQL storage with multipath
[16:50] <RoyK> LACP probably doesn't scale as well as iSCSI multipath
[16:50] <RoyK> especially with few hosts
[16:50] <RoyK> how many ESXi hosts?
[16:51] <tgm4883> Which EQL?
[16:51] <RoyK> EQL as in EqualLogic - we have a few
[16:51] <RoyK> from large/slowish ones to small/fast ones (15k in raid1-0)
[16:52] <tgm4883> So a bit more about our environment, I'm testing the QNAP for another team, so it's only got one host attached to it. We've got 7 hosts connected to our EQL ps4100. I recently removed our PS6000x as it's pretty small storage and performance wasn't great when I tested it
[16:53] <tgm4883> I'm not 100% sure that my predecessor had any of this tuned for anything, and AFAIK there was no performance testing done on any of this
[16:53] <RoyK> with a single iSCSI connection, you'll probably only get 1Gbps throughput because of how LACP works
[16:53] <RoyK> depending on setup, though
[16:53] <lordievader> Good evening.
[16:54] <tgm4883> RoyK: our 10k EQL was barely outperforming the QNAP in these IOMeter tests
[16:54] <tgm4883> Then again, I'm not 100% sure I like any of these IOPS numbers I'm getting compared to what sales is telling me on the new SANs they want me to buy
[16:55] <RoyK> thesheff17: doesn't surprise me, but then, EQL have some nifty features for moving volumes around easily. Still wouldn't recommend them. We're looking at Compellent now to see if that can replace what we're using now
[16:56] <RoyK> Dell's talking about allowing replication between EQL and Compellent soon (Q2 2015 or so), which could mean we can use the current hardware for the DR site
[16:56] <RoyK> Compellent looks a *lot* better, albeit a bit more expensive at first
[16:56] <tgm4883> RoyK: yea there was some fancy auto-management between the two SANs when I had both in production, but it was renewal time and I didn't feel like paying $4K for 6TB of storage when performance was so bad
[16:57] <RoyK> lol
[16:58] <RoyK> it's rather convenient to be able to upgrade a controller at a time, though, something you can't do with things like a QNAP or some homebrew ZFS-setup
[16:59] <tgm4883> true, although we have blackout days every quarter that we can take any system down we need to work on
[16:59] <tgm4883> but yea, that is pretty convenient to work on
[16:59] <RoyK> *days*?
[16:59] <tgm4883> sorry, just 1 blackout sunday
[16:59] <tgm4883> 4 per year
[17:00] <RoyK> ok
[17:00] <RoyK> that's convenient
[17:00] <tgm4883> yea it's a carryover from the bad old days
[17:00] <tgm4883> mostly we don't need to take stuff down anymore because everything is so redundant
[17:01] <RoyK> another thing that's rather annoying with EQL is that its controllers are active/standby, not active/active
[17:15] <bcsaller> This is an odd one, I'm sprinting on location and our partner here is seeing an odd case as they spin up VMs. Under high network load when they bring up a container with a previously used IP but a changed MAC address the ARP cache seems to persist long enough that they lose traffic. This isn't something they were seeing under Lucid but are now seeing as they transition to Trusty. Anyone seen something like this?
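One common mitigation sketch for the stale-ARP symptom described above: announce the new MAC from inside the container, or flush the stale entry on the peers holding it (the interface name and address are examples; arping is from the iputils-arping package):

```shell
# From the container/host that just took over the IP with a new MAC:
# send gratuitous ARP so neighbours update their caches immediately
#   arping -U -c 3 -I eth0 192.0.2.10

# On a peer or gateway still holding the stale entry:
#   ip neigh flush to 192.0.2.10

# Make the kernel announce itself whenever the interface comes up
#   sysctl -w net.ipv4.conf.eth0.arp_notify=1
```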
[21:04] <Chiarot> Question for you guys, I have a ubuntu server that is doing DHCP, I'd like to add a PXE server into my network as well, what would I add into the dhcpd.conf to ensure it points to the correct box for pxe?
[21:13] <lordievader> Chiarot: filename "pxelinux.0"; and probably "next-server".
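Expanding lordievader's answer into a dhcpd.conf fragment (all addresses are examples; next-server points at the separate PXE/TFTP box):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.5;      # the PXE/TFTP server
  filename "pxelinux.0";        # path relative to the TFTP root
}
```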
[21:20] <Hornet> evening
[21:20] <Hornet> sort of hybrid issue here; having 'fun' getting my sshfs mount to behave as expected
[21:20] <Hornet> a mv command run as the remote user in a terminal works, but via sshfs I get operation-not-permitted errors
[21:21] <Hornet> everything has correct rights
[21:21] <Hornet> only possible cause I can think of is the move is between two physical remote drives
[21:21] <Hornet> but both in the same box
[21:22] <Hornet> any insights much appreciated
[22:01] <sarnold> Hornet: seems like you're not alone, at least one other person reported cross-mount moves don't work through sshfs: http://ubuntuforums.org/showthread.php?t=2124058
[22:02] <sarnold> (as much as I dislike forum-based research, this one seems clueful :)
[22:02] <Hornet> sarnold: already seen that, but thanks
[22:03] <Hornet> I need to get sshfs working as I need transparent access so that file utilities can work, I don't just want to use gui file managers
[22:03] <sarnold> Hornet: you could use multiple sshfs mounts, one per underlying filesystem
[22:04] <sarnold> Hornet: then your local 'mv' command would recognize the situation correctly and perform the copy + unlink exactly as it should
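A sketch of the per-filesystem mount workaround sarnold describes (the host and paths are examples, modelled on the /volatile and /home layout mentioned later):

```shell
# Mount each remote filesystem separately
#   sshfs user@host:/volatile /mnt/volatile
#   sshfs user@host:/home     /mnt/home

# mv now sees two distinct local filesystems and correctly falls back
# to copy + unlink instead of a failing cross-device rename()
#   mv /mnt/volatile/big.file /mnt/home/

# Caveat: that copy round-trips the data through the client; to keep
# the move entirely server-side, run mv over plain ssh instead
#   ssh user@host 'mv /volatile/big.file /home/'
```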
[22:04] <Hornet> considered it, it'd break smart transfers, eg it'd download from the server then upload again
[22:04] <Hornet> rather than just moving on the host
[22:04] <sarnold> to be honest, I can't expect any networked filesystem to get that correct
[22:04] <Hornet> sshfs does
[22:05] <sarnold> I thought you were here because it doesn't get it correct?
[22:05] <Hornet> if I move the files to the destination, or close to it, and do mass complex renames, it does it instantly
[22:05] <Hornet> it just can't move things from the original source correctly
[22:05] <Hornet> which is the crux of the matter
[22:05] <Hornet> we're talking about 60gb of files presently
[22:05] <Hornet> will vary wildly per use case though
[22:07] <Hornet> eg, on the server, /volatile represents a single 1tb drive in the box
[22:07] <Hornet> the main filesystem is a 6tb raid array
[22:07] <Hornet> if I try to use sshfs to move files from /volatile to /home it screws the proverbial pooch
[22:08] <Hornet> but if I move the files there first with mv, then do my mass renaming in situ, it works
[22:08] <Hornet> this is basically bug territory, but sshfs are -glacial- at fixing things
[22:09] <sarnold> and this one would take a fair amount of fiddling to get right
[22:10] <Hornet> sftp seems to bypass the issue
[22:10] <Hornet> but when you look into sftp and fuse, guess where you end up?
[22:10] <Hornet> sshfs
[22:11] <hacktron> any advice would be much appreciated... I have an ubuntu server that as far as I know is working fine. Hosting apkapps.com there. But after going for about a week with no reboot, the site will time out until I reboot again. That is one issue.
[22:12] <hacktron> the other is I cannot access from any device within my office work network
[22:12] <hacktron> currently it just times out
[22:12] <Hornet> latter sounds like router/firewall issues
[22:12] <Hornet> former, god knows
[22:12] <sarnold> but with sftp, at least you -know- that your moves aren't necessarily atomic; when 'source' and 'destination' are both on the same filesystem mount point, you'd expect a rename() system call to be atomic, but sshfs cannot provide that guarantee if it does system(mv) behind the scenes. I wonder what would break if they implemented it.
[22:13] <Hornet> sarnold, I've tried the classic workaround=rename, no luck there
[22:13] <Hornet> first thing I tried
[22:13] <sarnold> hacktron: from the internal systems, look up apkapps.com -- I suspect you're getting an external IP, one that the internal machines can't route to.
[22:13] <hacktron> well I can access other pages on the server just not the apkapps.com site
[22:14] <hacktron> for example the ip for the server is 173.55.24.67
[22:15] <hacktron> if I type that directly I will get the page being served
[22:16] <hacktron> but if I go to 173.55.24.67/apkapps/apk which is where the site is being held, it times out
[22:16] <hacktron> apkapps.com just points to that location
[22:16] <hallyn> zul: hm, so libvirt virstoragetest is hanging in buildds on:   cmd = virCommandNewArgList(qemuimg, "create", "-f", "qcow2", NULL);
[22:16] <hacktron> that is how I currently have it setup, not sure if that is correct
[22:16] <hallyn>     virCommandAddArgFormat(cmd, "-obacking_file=raw,backing_fmt=raw%s",
[22:16] <hallyn>                            compat ? ",compat=0.10" : "");
[22:16] <hallyn>     virCommandAddArg(cmd, "qcow2");
[22:17] <sarnold> hacktron: does apkapps.com configuration or code do any hostname logging rather than IP logging?
[22:18] <hacktron> in apache2 config file is setup with hostname to point to directory,
[22:18] <Hornet> slight tangent to the prior issue: what's the sanest way to move many files via a mv command? I've tried command substitution but that seemed to not work
[22:19] <hacktron> not sure if that is what you are asking about.
[22:20] <sarnold> Hornet: I often put together pretty hacky shell commands: cd source/ ; for f in *foo* ; do mv "$f" ../destination/ ; done
[22:21] <Hornet> ah, of course
[22:21] <sarnold> hacktron: heh, not really; sometimes log files are set to log hostnames instead of IP addresses; that's usually a bad idea because (a) anyone can set reverse dns to anything they want (b) when there are timeouts resolving names, odd things can happen.
[22:21] <Hornet> would have to make a comprehensive list though, the files are awkwardly named
[22:21] <sarnold> Hornet: the "string operations" here are sometimes useful for making similar for loops when things get worse: http://tldp.org/LDP/abs/html/refcards.html
[22:23] <Hornet> noted, thanks
[22:23] <hacktron> sarnold__: do you have a resource that can help me setup correctly, I just followed this http://httpd.apache.org/docs/2.2/vhosts/name-based.html
[22:25] <sarnold> hacktron: sorry, that's the same guide (or the 2.4 version) I use when trying to figure my way around apache. :/
[22:40] <hacktron> sarnold__: ok thanks.. so make sure it's logging IP addresses, not hostnames?
[22:41] <sarnold> hacktron: rather the other way around -- logging hostnames rather than IPs. (This is not common.)
[22:42] <hacktron> ok tyvm