[00:20] <coxaLT> How could I start xterm from ssh for vncserver?
[00:20] <coxaLT> Or, how could I start xterm on vnc startup?
[00:21] <coxaLT> I am totally new and need extensive care
[01:42] <lkthomas> hey guys
[01:43] <lkthomas> will upstart constantly check whether the running process still exists?
[04:03] <zzxc> Hey, what are the requirements for running RAID 5?
[04:35] <Sachiru> Query: Is ntopng paid, or free, and does it work with collectors other than nprobe?
[06:10] <Alina-malina> Alice uploads a video to YouTube. Bob lives in the US and the video is NOT available for him; Charlie lives in Pakistan and the video IS available for him. So the question: how can Charlie find out whether the video Alice uploaded is available for Bob?
[07:05] <lordievader> Good morning.
[07:07] <Omicronpersei8> echo morning.
[07:07] <lordievader> Hey Omicronpersei8, how are you?
[07:08] <Omicronpersei8> All ok here..
[08:34] <ExeciN> I installed gnome and every time ubuntu-server boots, gnome comes up at a resolution of 800x600. How do I permanently change the resolution?
[08:35] <lordievader> ExeciN: I suppose that question is more suited for #ubuntu.
[08:54] <Sachiru> Query: What would you guys recommend as a DNS and WINS server for Ubuntu that is a) lightweight, b) intended for forwarding/caching only, and c) fast?
[09:40] <psih0man> hello all! I need some help turning power off on a PCIe device. on the net I can't find the files that are supposed to exist in /sys/bus/pci. I'm running ubuntu-server 14.04 and the device I want turned off is a PCIe slot that connects an add-on SAS HBA
[10:23] <jamespage> zul, I fixed up pyscss
[11:06] <pmatulis> morning
[11:07] <lordievader> Hey pmatulis, how are you?
[11:31] <pmatulis> lordievader: very fine and you?
[11:32] <lordievader> Doing good here :)
[12:09] <rbasak> jamespage: can you comment on bug 1370049 please?
[12:09]  * rbasak can't remember the details right now.
[12:11] <rbasak> jamespage: also see horizon bug 1370107
[12:28] <jamespage> rbasak, I think upstream dropped support for spidermonkey in 2.6
[12:29] <rbasak> jamespage: was there a question about security as well, for example if two clients connect to the server then they need to be isolated?
[12:29] <rbasak> Or was that a v8 issue?
[12:29] <jamespage> erm maybe
[12:29] <rbasak> mwhudson might know, maybe?
[12:30] <jamespage> there is certainly a problem with out-of-memory behaviour in newer v8 versions that do support things like arm64
[12:35] <jamespage> coreycb, zul: I extended coreycb's original MIR with the rest of the xstatic packages that are in archive - https://bugs.launchpad.net/ubuntu/+source/python-xstatic-jquery-ui
[12:37] <zul> jamespage: cool the packages have the server team subscribed to them right?
[12:37] <zul> good morning btw
[12:37] <jamespage> zul, not yet
[12:37] <jamespage> zul, I got fed up with using the LP web UI
[12:37] <jamespage> so was going to write a script
[12:38] <zul> jamespage: ackles
[12:38] <coreycb> jamespage, thanks
[12:40] <hydrajump> hi, I'm configuring some ubuntu servers as web servers, and a best practice is to create a new user to run the web app, e.g. node.js in my case.
[12:40] <hydrajump> googling for the correct secure way to do this I've seen this: useradd -d / -M -U -c "nodejs user" -s /usr/sbin/nologin node
[12:41] <hydrajump> or doing something like this: groupadd -r node; useradd --create-home --gid node unprivilegeduser
[12:42] <pmatulis> hydrajump: what's the problem?
[12:44] <hydrajump> pmatulis: just want feedback/advice on how to create that "web" user sensibly.
[12:44] <hydrajump> I'm googling the options and info as well
[12:45] <hydrajump> seems that useradd is low-level and adduser is the suggested way to do it
[12:45] <pmatulis> hydrajump: i've never heard of best practices for the creation of a user. adduser is a frontend to useradd; the latter can be used for a more customized setup
[12:47] <Odd_Bloke> I would recommend using adduser.
[12:49] <pmatulis> hydrajump: if you do this you will see useradd being invoked (grep useradd adduser.strace):
[12:49] <pmatulis> $ sudo strace -f -o adduser.strace adduser john
[12:50] <pmatulis> 11345 execve("/usr/sbin/useradd", ["/usr/sbin/useradd", "-d", "/home/john", "-g", "john", "-s", "/bin/bash", "-u", "1001", "john"], [/* 17 vars */]) = 0
[12:54] <hydrajump> pmatulis: ok, so this "useradd -d / -M -U -c "nodejs user" -s /usr/sbin/nologin node" from the man pages will create a user called node in group node who can't log in; the home directory will not be created but will be set to /
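That reading of the flags can be sanity-checked offline; a minimal Python sketch of the passwd(5) fields those useradd flags would produce (the sample line and its uid/gid values are invented for illustration, not output from a real system):

```python
# Parse a passwd(5)-style entry to check what those useradd flags produce:
# "useradd -d / -M -U -c 'nodejs user' -s /usr/sbin/nologin node" should
# yield a 'node' user with home "/" (never created) and a nologin shell.
def parse_passwd_line(line):
    name, _pw, uid, gid, gecos, home, shell = line.strip().split(":")
    return {"name": name, "uid": int(uid), "gid": int(gid),
            "gecos": gecos, "home": home, "shell": shell}

# Hypothetical resulting /etc/passwd entry; uid/gid are made up.
entry = parse_passwd_line("node:x:998:998:nodejs user:/:/usr/sbin/nologin")
print(entry["name"], entry["home"], entry["shell"])
```

On a real box the same check is just `getent passwd node`.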
[12:55] <pmatulis> hydrajump: i'm not here to confirm man pages
[12:56] <pmatulis> hydrajump: test out your understanding.  if you have a problem then come back here and ask about it
[12:58] <hydrajump> cool no worries.
[13:41] <jrwren_> hydrajump: why not run nodejs as www-data?
[13:44] <hydrajump> jrwren_: no idea. I don't know node. My task is to set up the servers for node, or in this case I'm using docker containers. I've been googling best practices for deploying node in production and came across examples creating a new "node" user and putting the node app in /var/www/my-app.
[13:44] <hydrajump> is www-data an existing user on ubuntu for this purpose?
[13:45] <jrwren_> hydrajump: i don't know much about docker, but given each docker container only runs 1 process, I'd run everything as root ;)
[13:45] <jrwren_> hydrajump: www-data is the web user. apache runs as it.
[13:46] <jrwren_> hydrajump: the only reason you might want to isolate www-data from your web app is if you have other web resource which you want to isolate from your nodejs app. e.g. nodejs app should not be able to modify a static web page being served by apache.
[13:46] <hydrajump> jrwren_: I've learnt enough docker this weekend to say that's a very bad idea and not recommended.
[13:47] <jrwren_> hydrajump: i don't understand. why? is it not only 1 process? protection from when a remote vulnerability is found? Can you link me?
[13:47] <hydrajump> jrwren_: a docker container running root is no different than running root on the host.
[13:48] <hydrajump> jrwren_: root in docker is not a "special" case of root. best practice is to treat containers with regards to permissions and security no different than without containers.
[13:48] <jrwren_> hydrajump: cgrouped root should be more limited. the container jail should make the guest root user very different from host root user. if that is not the case... *ugh*
[13:48] <jrwren_> hydrajump: wow. that is pretty terrible.
[13:48] <jrwren_> hydrajump: so.. its not a container.
[13:48] <hydrajump> jrwren_: I'm just sharing what I've learnt and in my discussions on #docker. Best practice always have USER in your dockerfiles
[13:49] <jrwren_> hydrajump: good to know. thanks for the information.
[13:50] <smoser> hallyn, 'lxc-start -n foo'
[13:50] <smoser> that should block until foo stops. right ?
[13:53] <jdstrand> jrwren_: "container" is a bit of a misnomer if you are talking about security. when container technologies like docker and LXC are used with an LSM like apparmor or selinux (among other things), things are a lot better. you still have the kernel syscall interface and don't have a hypervisor to protect untrusted guests, so one has to consider the problem space
[13:54] <jrwren_> jdstrand: I thought this was the point of the cgroups interface.
[13:54] <jdstrand> cgroups limit resources, not access to files or the syscall interface
[13:55] <jrwren_> jdstrand: huh, looks like i was misunderstanding.
[13:55] <jdstrand> containers are great for a lot of things. they are not a wholesale replacement for VMs, etc. it depends on ones needs
[13:58] <hallyn> smoser: 'lxc-start -n foo' in recent lxc will start backgrounded,
[13:58] <patdk-wk> nothing can help you against someone hacking the kernel
[13:58] <hallyn> so you need a lxc-start -F to make it block
[13:58] <hallyn> smoser: or, just 'lxc-start -n foo -d; lxc-wait -n foo -s RUNNING; lxc-wait -n foo -s STOPPED'
[13:59] <smoser> :(
[13:59] <smoser> that seems like a backwards incompatible change.
[13:59] <jrwren_> jdstrand: i must be thinking of something else, like this: https://lwn.net/Articles/515034/
[13:59] <hallyn> smoser: it is
[14:00] <hallyn> stgraber: ^ smoser doesn't like the lxc-start default change to running backgrounded :)
[14:02] <smoser> i think if people have scripts (like I did) built on the expectation that 'lxc-start' runs the container until it shuts down
[14:02] <smoser> then those scripts will be broken.
[14:02] <hallyn> smoser: we're not going to change that in utopic for certain; and i thought for a while we were going to just leave lxc-start -n foo running foregrounded, but i don't want anyone new to depend on it
[14:02] <smoser> that's the thing about interfaces...
[14:03] <hallyn> agreed. we'd considered waiting for the new lxc command to give that behavior.
[14:05] <stgraber> well, LXC never guaranteed the command line behavior to be compatible between versions and indeed it never has been. If you need something stable, use the API :)
[14:05] <stgraber> however realistically most people have been using lxc-start with -d and -d is still accepted in current git master (basically ignored)
[14:06] <stgraber> and the new -F argument has been backported to LXC 1.0 so that people can use it consistently with both old and new versions
[14:06] <hallyn> lxc never guaranteed - but that won't keep a bunch of users from getting really pissed when their infrastructure breaks
[14:07] <stgraber> sure, though the amount of people actually using lxc-start without -d is pretty minimal. From the check I did against the Ubuntu archive, we only have one such case currently.
[14:07] <jdstrand> jrwren_: that article applies. that article is talking about changes to libvirt for it to be a container technology that uses an LSM (in that particular case, selinux). work is ongoing. for example, upstream docker 1.2 can use either selinux or apparmor to help secure root containers. libvirt 1.2.8 in Ubuntu will utilize apparmor when using libvirt-lxc. LXC uses apparmor for its 'root' containers, but can also use the newer 'userns' capability
[14:07] <smoser> stgraber, that change is not SRUable
[14:08] <smoser> for example.
[14:08] <hydrajump> jrwren_: no worries ;)
[14:09] <stgraber> smoser: indeed and we have no plan to SRU it. The behavior change will happen with LXC 1.1, the only thing we'll SRU as part of the 1.0.x series is the support for -F so that people can build software working with both the LTS and current dev release
[14:10] <smoser> well, that's my $0.02. it's a backwards-incompatible and unexpected change to a command-line interface, and one that is not terribly necessary.
[14:10] <hallyn> stgraber: smoser: agreed i can't imagine scripting lxc-start without -d.  i don't know why you'd do it
[14:10] <hallyn> oh, to test cloud-init maybe
[14:11] <smoser> do you think we should make 'ls' background by default ?
[14:11] <smoser> or 'grep'.
[14:11] <smoser> how about top
[14:11] <hallyn> anyway my stance remains i like the new behavior better but am queasy about changing the default
[14:12] <smoser> i'll shut up now.
[14:12] <hallyn> smoser: your comparisons are not reasonable :)
[14:12] <smoser> they're not unreasonable.
[14:12] <smoser> those programs have blocked for as long as i've ever used them.
[14:12] <smoser> same as 'lxc-start' has.
[14:14] <hallyn> i do fear we'll (or our users will) regret it
[14:16] <stgraber> hallyn: I can agree that 1.1 isn't the best time for it, we should have done it with 1.0 really but it's still worth doing because options are really meant to be options, not something you pass every single time
[14:17] <hallyn> scripts may almost always use -d, but i personally almost never do
[14:17] <hallyn> so it's not "something i pass every time".  in fact -F will become that
[14:17] <hallyn> it's an option, exactly an option
[14:17] <hallyn> as for woes with the lxc command line, we have far greater ones :)
[14:17] <hallyn> i.e. "-n"
[14:18] <smoser> :)
[14:18] <stgraber> hmm, ok, not sure how you can stand working from a straight lxc-start but ok (I tend to be annoyed by various messages coming through /dev/console and by some of the odd console behavior) :)
[14:18] <stgraber> even before we had lxc-attach working I'd pretty much exclusively stick to lxc-start -n blah -d + lxc-console
[14:19] <smoser> humans can be told "change your behavior"
[14:19] <smoser> it's harder to tell programs that.
[14:19] <hallyn> other than tmux splitting not working well, i've not had console issues since dwight reworked the sigwinch support
[14:19] <smoser> well, without having people find out that their programs are broken and their $*#( no longer works.
[14:19] <hallyn> smoser: still for scripting lxc-start -d + lxc-wait is much more reliable
[14:20] <hallyn> maybe we should have /etc/lxc/lxc.conf have an option :)
[14:20] <hallyn> "default-foreground"
[14:20] <hallyn> (i say "yuck" as the one who'd probably have to implement it)
[14:20] <stgraber> if that gives us a proper parser for lxc.conf, sure ;)
[14:25] <smb> hallyn, darn you. it would have been too nice if you could have told me you planned another libvirt merge when I complained about the current one again dropping my xen patches :-P
[14:25] <hallyn> smb: d'oh.  we've been talking about it for weeks!
[14:25] <hallyn> sorry
[14:26] <hallyn> maybe instead of talking on irc we should have a mailing list
[14:26] <hallyn> or a ubuntu-libvirt channel at least
[14:26] <hallyn> smb: does this mean you'll need to refresh patches?
[14:27] <smb> Well back to the patches for me. yeah, let me see maybe those I just did for 1.2.6 still apply
[14:27] <hallyn> smb: why are you needing so many patches? is upstream not taking them?
[14:28] <smb> hallyn, one is about xend detection, which we can probably drop in U+1 (and then rip out the old toolstack in xen completely)
[14:30] <smb> The other two are about gfx device support. and yeah, one of them Jim Fehlig and I keep failing to upstream due to other things getting important
[14:30] <hallyn> ok. fwiw.  it must have applied cleanly when zul did the initial merge :)
[14:30] <smb> hallyn, no he dropped all of them when doing 1.2.6
[14:30] <smb> not sure why
[14:31] <hallyn> tsk tsk zul
[14:32] <smb> part of the reason I was whining yesterday in the server team meeting. ;)
[14:33] <Kunzem1989> Hi everyone (don't know if I can ask for help about my issue here). I have been given an Ubuntu server 12.04 with two virtual PCs running on it using virtualbox. I'm trying to log into these virtual machines, which are owned by root. I think they have been set up with a console. Is there a standard way of accessing them? I tried google, but searching for "ubuntu server virtualbox login" and searches like that can mean a lot of things.
[14:33] <hallyn> smb: i was also waiting for this merge to reply about the merging with debian's apparmor
[14:33] <hallyn> as of this package we're basically the same, minus a few new patches i should send.
[14:34] <ikonia> Kunzem1989: the login will be set by the people who built them
[14:34] <smb> hallyn, ah ok. Yeah.
[14:34] <hallyn> i didn't want to break the packaging so i left our files under debian/apparmor, but they are the same as the ones shipped by upstream basically
[14:34] <smb> hallyn, Oh, ok, including potential xen paths? Well I can check that when I look at what you just uploaded
[14:34] <hallyn> so for U+1 we can drop our custom apparmor (or introduce any new delta using quilt patches, encouraging us to send them upstream too)
[14:35] <hallyn> uh, i think so. i was looking at the diffs of the files at any rate
[14:35] <rbasak> jamespage: shall I sort this mysql-5.6 FTBFS on arm64 then, or are you doing it?
[14:37] <hallyn> hm.
[14:38] <smb> hallyn, I'll have a look and get back to you (though it might be tomorrow). Need to sort out whether and what I need in 1.2.8
[14:38] <hallyn> smb: sadly the /usr/lib/xen* lines are still not in 1.2.8
[14:38] <hallyn> (in examples/apparmor/usr.sbin.libvirtd)
[14:39] <hallyn> once this settles we should simply send the deltas upstream
[14:39] <smb> ok, so that would be what we need to submit upstream ... right
[14:53] <RoyK> Kunzem1989: guess that'll be a #virtualbox question - basically, you'll do it either with bridging network configuration (which I would recommend) or with virtualbox' included port forwarding
[14:55] <RoyK> Kunzem1989: also, I'd recommend KVM over virtualbox for server stuff - virtualbox has its nice things on the client side, but IMHO it's not as good as kvm for servers, and kvm comes well integrated with ubuntu
[15:41] <StolenToast> huh
[15:41] <StolenToast> when did my server acquire this '.5' after '12.04'?
[15:42] <cfhowlett> StolenToast, you did a dist-upgrade.  check your logs.
[15:43] <StolenToast> I was going to respond to him
[15:43] <StolenToast> oh well
[15:43] <StolenToast> I remember doing a dist-upgrade but I remember it failing due to low disk space
[15:43] <sdeziel> StolenToast: base-files is the package you upgraded
[15:44] <StolenToast> what kinds of things does that include?
[15:47] <sdeziel> StolenToast: https://lists.ubuntu.com/archives/ubuntu-announce/2014-August/000189.html => all I could find
[15:48] <sdeziel> StolenToast: apparently nobody created the page https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/ChangeSummary/12.04.5 so it's hard to see what changed from 12.04.4
[16:14] <garbagegod> For new installations, is 14.04 recommended?
[16:14] <garbagegod> Are there any big changes between 12.04 and 14.04?
[16:15] <rbasak> Yes, and yes.
[16:16] <rbasak> See https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes
[16:16] <garbagegod> Just what I was looking for, thanks so much
[19:06] <jsonperl> I'm going to refactor a game server I wrote to create a new process per "world", rather than the persistent server we currently have
[19:07] <jsonperl> My thoughts are that it would need to be a new port every time, and re-use ports as they become available
[19:07] <jsonperl> Any thoughts on how you would do it?
[19:11] <patdk-wk> heh?
[19:11] <patdk-wk> tcp/udp ports?
[19:12] <sarnold> jsonperl: the typical approach is to have a master server process that does the usual socket(), bind(), listen() dance and then in a loop accept() fork() and have the child handle the new connection..
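The socket()/bind()/listen()/accept()/fork() dance sarnold describes can be sketched in a few lines of Python; the upper-casing handler and the port number are invented for illustration, and error handling is omitted:

```python
import os
import signal
import socket

def serve_forever(host="127.0.0.1", port=9901):
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # kernel reaps dead children
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))                         # bind()
    srv.listen(16)                                 # listen()
    while True:
        conn, _addr = srv.accept()                 # accept() a client
        if os.fork() == 0:                         # fork(): child owns conn
            srv.close()                            # child drops the listener
            data = conn.recv(1024)
            conn.sendall(data.upper())             # toy per-connection work
            conn.close()
            os._exit(0)
        conn.close()                               # parent loops back to accept()
```

The parent never blocks on a client; each connection gets its own process, which is exactly the model that event-loop designs like libevent later displaced.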
[19:13] <jsonperl> patdk-wk no udp, just tcp
[19:13] <patdk-wk> sarnold, highly depends
[19:13] <patdk-wk> if he is doing one port per world
[19:13] <patdk-wk> and how his protocol works
[19:14] <patdk-wk> if his client connects to an inventory server to look up the port to connect to for the right world
[19:14] <sarnold> of course, forking servers also fell out of favor a decade back as wasteful compared to e.g. libevent or libev or libuv based server designs..
[19:14] <patdk-wk> or if he has a process that accepts connections, reads the world info, then forwards/handsoff to the right one
[19:14] <jsonperl> patdk-wk it will handoff to the correct one IF the world is already running
[19:15] <patdk-wk> using libevent/libev is great, but it requires good programming :)
[19:15] <jsonperl> patdk-wk we can likely instruct the machine to have it loaded by the time the user gets there
[19:15] <patdk-wk> sounds much like a php-fpm model then
[19:15] <patdk-wk> one master process opens one port
[19:16] <patdk-wk> then it just moves that socket to the client, once it's started
[19:16] <patdk-wk> or just moves it, if it's already started
[19:16] <patdk-wk> same for apache prefork mpm
[19:16] <jsonperl> is putting this behind nginx or haproxy crazy for non-web traffic?
[19:17] <patdk-wk> actually, that would be more, worker mpm wouldn't it?
[19:17] <patdk-wk> nginx, likely, haproxy not so much
[19:18] <jsonperl> so route the client based on some metadata via one external port
[19:18] <jsonperl> through a reverse proxy
[19:18] <jsonperl> can nginx handle LOADS of traffic?
[19:18] <jsonperl> like 10-20Mb/s
[19:18] <jsonperl> via many different connections
[19:19] <sarnold> I suspect a decade-old machine could handle 20Mbps :)
[19:20] <patdk-wk> a modern machine, should be able to easily handle 40-80gbit
[19:21] <patdk-wk> a 10year old machine, around 4gbit or so
[19:21] <patdk-wk> maybe 10gbit
[19:25] <jsonperl> haha ok
[19:25] <jsonperl> so here's my basic premise, i'd love you to shoot holes in this
[19:25] <jsonperl> right now i have 8 processor cores (HT or otherwise)
[19:26] <jsonperl> I have a single process that runs a game-server; it does not run across cores
[19:26] <jsonperl> i currently run a number of them (say 15 or so), some wind up using a lot of cpu, some very little
[19:26] <jsonperl> and we distribute worlds / players to lesser-used servers
[19:28] <patdk-wk> solution is easy, just stop accepting connections, server load will go down, and you won't need to solve this problem :)
[19:28] <jsonperl> i think we would be much better served allowing many servers, 1 per "world", and allowing the OS to distribute them amongst its available cores
[19:28] <jsonperl> since it's like... smart at that
[19:28] <jsonperl> patdk-wk haha
[19:28] <sarnold> patdk-wk :)
[19:28] <patdk-wk> but you just said you run one process per world
[19:28] <patdk-wk> and many on the same server
[19:28] <jsonperl> it's one process per SERVER, but many worlds
[19:29] <patdk-wk> oh
[19:29] <jsonperl> and a limited number on the server, pre-loaded and running always
[19:29] <jsonperl> sorry machine
[19:29] <sarnold> jsonperl: how many worlds? are any of the worlds more popular than others?
[19:29] <jsonperl> a limited number per machine
[19:29] <jsonperl> sarnold many many worlds, thousands
[19:29] <jsonperl> and some are very popular and stay up always
[19:29] <jsonperl> and some are rarely used
[19:29] <patdk-wk> you need to thread your process
[19:29] <jsonperl> some are large some are small
[19:29] <jsonperl> patdk-wk ABSOLUTELY agree
[19:29] <patdk-wk> now likely you can thread per world
[19:29] <jsonperl> but that is a whole other ball of wax
[19:29] <patdk-wk> if you could thread lower than that, better
[19:30] <patdk-wk> well, you won't fix your problem, any other way
[19:30] <patdk-wk> except if you run 1 world per server
[19:30] <jsonperl> understood
[19:30] <patdk-wk> and that is going be resource draining
[19:30] <sarnold> dunno, more processes could do it, but I'd be leery of starting more than ~100 processes for this
[19:30] <jsonperl> 1 world per server seems like a good interim solution
[19:31] <patdk-wk> ya, till you get to thousands :)
[19:31] <jsonperl> i suspect we won't need more than 100-200 processes per machine
[19:31] <patdk-wk> that could be doable
[19:31] <patdk-wk> just context switching will skyrocket
[19:31] <jsonperl> at any one time we have at most 500 worlds running
[19:31] <patdk-wk> threading will keep it down
[19:31] <patdk-wk> but that really only matters with how many active worlds there are
[19:32] <garbagegod> For those of you who manage different servers, how do you keep track of credentials and whatnot? In a txt file on your desktop? In a custom built webapp?
[19:32] <jsonperl> so my premise is, a multithreaded server would be best, and runs many worlds
[19:32] <jsonperl> but a better solution than fixed servers with many worlds would be many servers with one world
[19:32] <garbagegod> I was about to sit down and create a server information management app for my company and I wanted to consult you guys and see if there's an existing solution for that
[19:32] <jsonperl> allowing the server to balance
[19:32] <jsonperl> would you agree?
[19:32] <patdk-wk> garbagegod, in a public github
[19:33] <patdk-wk> jsonperl, yes
[19:33] <jsonperl> patdk-wk :)
[19:33] <garbagegod> patdk-wk: not a bad idea! I've been meaning to implement version control for a long time
[19:33] <patdk-wk> multithreaded will work great, cause it won't be switching contexts all the time like a one-process-per-world setup would be doing
[19:33] <patdk-wk> garbagegod, keepass :)
[19:35] <garbagegod> doesn't seem like a bad solution
[19:35] <garbagegod> anything else?
[19:37] <patdk-wk> keepass2? but it's a hacky windows port :)
[19:37] <jsonperl> patdk-wk I understood very little when we wrote the game server years ago
[19:38] <sarnold> :)
[19:38] <jsonperl> and did it all with MRI ruby and custom C extensions... very pleasant to code but limited in scalability
[19:38] <patdk-wk> I wouldn't expect you to switch to libev
[19:38] <patdk-wk> threading probably won't be too painful though
[19:39] <patdk-wk> but switching to libev, event based, will require a totally new design and a from-scratch rewrite :)
[19:39] <patdk-wk> and good understanding of state machines :)
[19:39] <jsonperl> it's all heavily based on a reactor pattern currently
[19:39] <jsonperl> ruby-eventmachine
[19:39] <patdk-wk> ah
[19:40] <sarnold> oh
[19:40] <sarnold> then you're already using libev or similar behind the scenes
[19:40] <jsonperl> it is libev i believe
[19:40] <sarnold> neat stuff :)
[19:40] <jsonperl> but multicore is a no go
[19:41] <patdk-wk> ya, multicore with event is not a lot of fun
[19:45] <jsonperl> i lied, eventmachine does NOT use libev
[19:46] <jsonperl> it's essentially a similar paradigm written for use with ruby (in C)
[19:47] <sarnold> interesting, I wonder why they didn't use libev.
[19:52] <patdk-wk> jsonperl, just remember to track your context switching :)
[19:52] <patdk-wk> it would be very interesting to do before-and-after graphs
[19:52] <patdk-wk> comparing how much context switching increases your cpu usage
[19:55] <jsonperl> even if it's a lot, i suspect the benefit of squeezing every inch out of each core will outweigh the switching
[19:56] <jsonperl> we have a LOT of spare cycles on a machine now, since basically one beastly server can max out a core
[20:02] <jsonperl> patdk-wk also, how would i go about tracking the context switches?
[20:02] <patdk-wk> cacti/munin/......
[20:03] <patdk-wk> whatever you're currently (hopefully) monitoring your servers with
[20:03] <jsonperl> wrote some custom stuff
[20:03] <jsonperl> pretty much just cpu/memory/process uptime
[20:04] <patdk-wk> heh, install munin :)
[20:04] <jsonperl> nothing seemed to be able to pull core statistics
[20:04] <patdk-wk> core stats?
[20:04] <jsonperl> everything was reporting cpu usage as a total
[20:05] <jsonperl> like usage on core 3
[20:05] <patdk-wk> that is easy to do
[20:05] <jsonperl> munin is your fav?
[20:05] <patdk-wk> ya
[20:06] <patdk-wk> basically, you just throw in an additional monitor onto the servers
[20:06] <patdk-wk> something like http://www.matija.si/system-administration/2014/04/01/a-munin-plugin-to-monitor-each-cpu-core-separately/
[20:06] <jsonperl> hmm, will you look at that
[20:06] <jsonperl> will do
[20:06] <patdk-wk> or maybe http://munin-monitoring.org/browser/munin-contrib/plugins/system/cpu-usage-by-process
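Both numbers discussed here, per-core usage and the context-switch count, come straight out of /proc/stat, which is what plugins like the ones patdk-wk links read; a minimal Linux-only Python sketch:

```python
def read_proc_stat(path="/proc/stat"):
    """Return ({'cpu0': (busy_jiffies, total_jiffies), ...}, context_switches)."""
    per_core, ctxt = {}, None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields[0].startswith("cpu") and fields[0] != "cpu":
                vals = [int(v) for v in fields[1:]]
                # field order: user nice system idle iowait irq softirq ...
                idle = vals[3] + (vals[4] if len(vals) > 4 else 0)
                per_core[fields[0]] = (sum(vals) - idle, sum(vals))
            elif fields[0] == "ctxt":
                ctxt = int(fields[1])  # context switches since boot
    return per_core, ctxt

cores, switches = read_proc_stat()
print(len(cores), "cores,", switches, "context switches since boot")
```

Sampling this twice a few seconds apart and dividing each core's busy delta by its total delta gives per-core utilisation, roughly what the linked munin plugin graphs.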
[20:09] <jsonperl> every time i jump in this channel I discover weeks worth of work I HAVE to do
[20:09] <jsonperl> haha
[21:13] <mwhudson> rbasak: that bug title annoys me