[00:03] <teward> i think the php5 changes that changed the chmod permissions for php5-fpm's socket/listener introduced a regression for some webserver compatibility - it runs as root:root instead of www-data:www-data, and without the 0666 permissions it breaks with nginx's default php5-fpm + nginx configuration file setup...
[00:04] <teward> while it fixed a CVE, I think it might've introduced future issues...
[00:04] <teward> ... should I be poking the security team on this or just file a new bug?
[00:05] <teward> (this currently affects Precise, I have not tested later versions yet, as i'm still re-spinning my VMs)
[00:13] <rbasak> teward: that rings a bell. Might be worth checking if there's a bug for that already.
[00:13] <teward> rbasak, i didn't see one when i looked, but I'm testing Trusty right now
[00:13] <teward> working on spinning up a Utopic VM to test and preempt the issue
[00:13] <teward> rbasak, I found a solution, and it's literally a two-line change
[00:14] <teward> which just forces php5-fpm to listen as www-data instead of root
[00:14] <teward> (leaving 0660 as the perms)
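The two-line change teward describes maps onto php5-fpm's pool directives for socket ownership. A sketch, assuming the stock Precise pool file location (the path and socket name may differ per setup):

```ini
; Hypothetical excerpt of /etc/php5/fpm/pool.d/www.conf: make FPM create
; its socket as www-data:www-data while keeping the tightened 0660 mode.
[www]
listen = /var/run/php5-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
```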
[00:14] <teward> works on my production server on Precise, gonna test shortly on Trusty
[00:14] <teward> assuming the VM ever turns on >.<
[00:14] <rbasak> teward: as long as the change doesn't introduce a regression in itself. That's always the worry :)
[00:14] <teward> rbasak, exactly why i wouldn't mind looping in the sec team on the consideration for the patch
[00:15] <teward> noting, of course, the php5-fpm change that WAS made kinda introduced a regression into out-of-the-box usage for nginx at the very least...
[00:16] <teward> rbasak, if Marc were around I'd poke them on this but meh
[00:17] <teward> rbasak, i also don't immediately see any bug(s) on this on Launchpad
[00:19] <rbasak> teward: I recall some kind of claimed regression in this area. It might be unrelated, or resolved.
[00:19] <rbasak> teward: or I might be getting mixed up with something else.
[00:20] <sarnold> I seem to remember there was a regression bug that got a second update..
[00:20] <sarnold> teward,rbasak http://www.ubuntu.com/usn/usn-2254-2/
[00:22] <teward> sarnold, if that's the case it didn't fix it in Precise
[00:23] <teward> maybe Trusty but not Precise
[00:23] <teward> sarnold, looks like it works in Trusty
[00:55] <delinquentme> I'm configuring a number of ubuntu servers to do number crunching ... and each of the nodes is using SSH to share resources ... is there some explicit reason NOT to use a single public key between all of the compute nodes??
[00:58] <Sachiru> @delinquent: Are the nodes public (i.e. internet) facing or not?
[00:59] <z1haze> heh, so TJ- just watched me the last few days in the various channels, enjoyed watching me sweat huh?
[01:00] <z1haze> hope its been entertaining
[01:02] <TJ-> z1haze: No, it's quite painful actually. As I advised, you need to spend some time learning the underlying concepts and command configurations or else you'll end up with something less than 100% correct or secure. These are complex issues, they take many months or years for professionals to master.
[01:02] <z1haze> right which is why i obviously am not going to be able to sit and read a couple pages and do it on my own
[01:03] <z1haze> when i was here i offered to pay someone to do it with me
[01:05] <teward> rbasak, sarnold:  since this isn't a security bug i'm not going to keep bugging you two on this issue in -hardened, but this is the bug for the observed issue in Precise:  https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1352617
[01:06] <sarnold> teward: nice bug, thanks
[01:08] <teward> sarnold, note I referenced the regression bug which prompted the second update on that USN, to make a note the bug is similar.  I'm not sure how you solved that in Trusty and don't have time to dissect the diffs, but my posted solution in the bug appears to force the socket to be made with www-data:www-data
[01:10] <teward> sarnold, and you're welcome, I tried to be as detailed with the bug as I could without putting in unnecessary details :P
[01:11] <sarnold> teward: heh yes, it's a tough balance isn't it? :) a good report is hard to write..
[01:11] <teward> sarnold, indeed, and having done a bunch of SRUs, I'm pretty sure that, at least in this case, I know how to write a decent report :)
[01:12] <teward> and of course with nobody for me to pay attention to today, I'm bored and don't mind doing some bug hunting
[01:12] <sarnold> oh ho ho ho! :D
[01:14] <teward> that, and this affected a production server, so bleh
[01:14] <teward> sarnold, i'd make a diff, but at the moment I'm stuck on [CENSORED] [CENSORED] [CENSORED] Micro[CENSORED]t right now
[01:17] <teward> sarnold, you can't approve series nominations can you?
[01:18] <sarnold> teward: nope :/
[01:18] <teward> meh
[01:18] <teward> sarnold, i'll go make a debdiff either way
[01:18] <teward> but meh
[01:32] <teward> sarnold: question, do you know how to force quilt to put patches into debian/patches and refer to debian/patches instead of putting them in the source dir (and not the debian/ dir)
[01:33] <teward> probably some devscripts syntax, but I dunno...
[01:34] <sarnold> teward: I've got an alias 'dq' that helps with that: alias dq='export QUILT_PATCHES=debian/patches'
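sarnold's alias, expanded a bit (the patch and file names in the comments are made up purely for illustration):

```shell
# quilt reads $QUILT_PATCHES to decide where patches live; pointing it at
# debian/patches keeps packaging patches out of the source root. Putting
# the export in ~/.quiltrc makes it permanent.
export QUILT_PATCHES=debian/patches
mkdir -p "$QUILT_PATCHES"
# Typical flow from here (names are hypothetical):
#   quilt new fix-fpm-socket-owner.patch
#   quilt add sapi/fpm/fpm/fpm_unix.c
#   ...edit the file...
#   quilt refresh
echo "$QUILT_PATCHES"
```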
[01:41] <teward> sarnold: should this be a security fix, or just a standard update...?  trying to figure where i should target my debdiff's changelog entry
[01:44] <sarnold> teward: did this update break it? http://www.ubuntu.com/usn/usn-2254-1/
[01:44] <teward> sarnold: got a diff for that update?
[01:45] <teward> (it looks like it might've broken it because 5.3.10-1ubuntu3.12 was the version that changed the chmod permissions)
[01:45] <teward> sarnold: there seems to be additional fixes in there though somewhere
[01:45] <teward> because you have to FORCE php5-fpm to make the socket as www-data:www-data now in precise
[01:45] <sarnold> teward: 346k :/  https://launchpad.net/ubuntu/+source/php5/5.3.10-1ubuntu3.12
[01:46] <teward> whereas it works ootb in trusty
[01:46] <teward> sarnold: yep that broke it
[01:47] <teward> sarnold: i think it got missed in the regression fix due to Precise *not* having a socket created by default
[01:47] <teward> (it listens by IP)
[01:47] <teward> as i mentioned in the bug it's a custom FPM configuration case, but it's still going to create the socket as root:root with 0660
[01:49] <sarnold> teward: ahhhh. that makes sense.
[01:50] <teward> sarnold: there seems to have been some changes made or something to FIX that in trusty and others, but I don't know which changes fixed that issue
[01:50] <teward> or whether it was fixed separately
[01:51] <teward> sarnold: the workaround I do is basically this, but after the package is built:  http://paste.ubuntu.com/7957082/
[01:52] <teward> since that's populated by www-data in Ubuntu right now, I'm not sure if this introduces any additional security issues
[01:53] <teward> sarnold: but yes, that USN-2254-1 fix (5.3.10-1ubuntu3.12) introduced the issue.
[03:35] <Sierra> Is there any need to reboot an ubuntu server after running an apt-get upgrade?
[03:35] <Sierra> It never seems to prompt for a reboot, but ubuntu desktop seems to always want to reboot after updating
[03:44] <teward> Sierra: at kernel updates, maybe.
[03:45] <z1haze> im confused.. so i mounted my ftp backup server with nfs and backed up all my stuff on it.. and when i check the usage of my ftp server it's at 0mb..
[04:09] <z1haze> what would someone recommend to be able to use my dedicated server as 'normal' and also be able to create VPSes on it? would you recommend proxmox? or is that solely for use of VMs
[06:27] <monokrome> Hey. Does anyone know why this has no results?    $ sudo virsh list --all
[06:27] <monokrome> I installed libvirt-bin, and maas has downloaded 12 images.
[09:26] <liquid-silence> hi all, looking to migrate our email from google to a VPS, imap and pop3 including smtp with multiple domains
[09:26] <liquid-silence> anyone can suggest the packages I might require?
[09:26] <liquid-silence> I am thinking postfix + dovecot + postgresql
[09:26] <liquid-silence> but not sure yet (not really wanting the database dependency)
[09:26] <liquid-silence> We need virtual domains
[09:27] <liquid-silence> and virtual users
[09:27] <Abhijit> !email
[09:27] <Abhijit> :-(
[10:26] <rbasak> teward: what was the ownership of the socket before the listen directive was customised, OOI?
[10:41] <liquid-silence> gah I hate dovecot
[10:41] <liquid-silence> ffs
[10:42] <ashd> alice help
[10:43] <weeb1e_> Has anyone ever successfully disabled power saving (CPU scaling) in a Dell PowerEdge bios?
[10:43] <weeb1e_> I am being driven insane
[10:43] <weeb1e_> Ubuntu was unable to stop CPU scaling even with software in full control of CPU management, which I assumed to be an Ubuntu 14.04 bug
[10:44] <weeb1e_> But now after trying everything imaginable, the technician has been unable to stop the CPU from scaling with software control disabled in the bios
[10:45] <weeb1e_> System Profile is set to Max Performance and C States + C1E are disabled
[10:45] <ogra_> just keep it on and force the performance governor (if you really want to waste power)
[10:45] <weeb1e_> Yet the CPUs cores still enter C6 state and scale down
[10:45] <weeb1e_> ogra_: That did not work, I tried for over a day
[10:45] <ogra_> how did you try ?
[10:45] <weeb1e_> Unlike every other one of my boxes, the performance governor, and even the userspace governor, had no effect
[10:46] <weeb1e_> So now we are trying to use the bios to disable scaling, and that does not work either
[10:46] <ogra_> what did you do to enforce its usage ?
[10:47] <weeb1e_> ogra_: I used cpufrequtils and also /proc directly
[10:47] <weeb1e_> Nothing worked
[10:47] <cfhowlett> !flash
[10:47] <weeb1e_> I had a few knowledgeable people in here try and help too, with no success ogra_
[10:48] <weeb1e_> Which is why I'm really hoping someone knows how to force no scaling from the bios
[10:48] <ogra_> weeb1e_, well, make sure to "mv /etc/rc2.d/S99ondemand /etc/rc2.d/K99ondemand" to make sure the system does not forcefully load ondemand 1min after boot
[10:49] <weeb1e_> ogra_: Using cpufreq-set, I was able to set each core's governor to performance or userspace, and cpufreq-info said the cores were at max frequency
[10:49] <weeb1e_> Yet they continue to scale down to 1.6ghz
[10:49] <weeb1e_> The same happens with software cpu control, C States and C1E disabled in the bios
[10:50] <weeb1e_> So if both software and hardware based control cannot disable scaling, how the hell can it be disabled
[10:50] <weeb1e_> The technician who has physical access is going to run out of time soon
[10:50] <ogra_> heh, ask dell i guess :)
[10:50] <weeb1e_> This has been going on for over a week now
[10:51] <ogra_> you surely can hack around it somehow by forcing the min frequency up etc
[10:51] <weeb1e_> How?
[10:51] <weeb1e_> The bios apparently has no such option
[10:51] <ogra_> same way you set the governor in /proc
[10:51] <weeb1e_> and the userspace governor with a frequency had no effect
[10:51] <ogra_> there are other proc nodes next to it
[10:52] <weeb1e_> ogra_: That is the userspace governor
[10:52] <weeb1e_> Which like I said, does not work any more than performance with software control enabled in the bios
[10:52] <weeb1e_> It thinks it is working
[10:52] <weeb1e_> But it has zero effect
[10:52] <ogra_> cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq
[10:52] <weeb1e_> I use the performance governor on all my other boxes
[10:52] <ogra_> you should be able to set it there
[10:53] <ogra_> (note this is super hackish but might work)
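ogra_'s sysfs hack, sketched as a small function. Needs root on a real machine; it takes the sysfs root as a parameter so the logic can be dry-run against a fake tree:

```shell
# Pin every core's minimum frequency to its hardware maximum by copying
# cpuinfo_max_freq into scaling_min_freq under the standard cpufreq layout.
# On the real box (as root): pin_min_freq /sys
pin_min_freq() {
    root="$1"
    for c in "$root"/devices/system/cpu/cpu[0-9]*/cpufreq; do
        [ -d "$c" ] || continue
        cat "$c/cpuinfo_max_freq" > "$c/scaling_min_freq"
    done
}
```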
[10:53] <weeb1e_> I doubt it'll work, considering the userspace governor didn't
[10:53] <weeb1e_> But I'll try once I get the techie to enable software control in the bios again
[10:53] <weeb1e_> They are going to phone dell now
[10:53]  * ogra_ never used the userspace governor, so i cant tell 
[10:53] <weeb1e_> Based on your advice
[10:54] <ogra_> yeah, sounds very HW or BIOS/UEFI specific
[10:54] <weeb1e_> Ok, they are going to enable software control and boot it up before phoning
[10:54] <weeb1e_> So I'll try setting scaling itself
[10:54] <ogra_> right ...
[10:55] <ogra_> you should definitely talk to #ubuntu-kernel too ...
[10:56] <weeb1e_> Alright, thanks
[11:00] <weeb1e_> ogra_: Setting core 0's scaling value to 2.49ghz instead of 1.59 has no effect and all cores still scale down to 1.59
[11:01] <ogra_> sounds like a bug (in either HW or SW)
[11:01] <weeb1e_> Well considering hardware control is also bugged
[11:01] <weeb1e_> It seems like hardware
[11:04] <weeb1e_> Lets hope something comes out of phoning Dell
[11:04] <weeb1e_> Because I'm now out of ideas
[11:12] <weeb1e_> So dell is useless. Apparently they need to log a call to the warranty department so that they can send someone out to look at it
[11:13] <weeb1e_> They cannot help at all over the phone
[11:15] <weeb1e_> I recommended trying a BIOS update
[11:15] <weeb1e_> Since I have literally no other ideas now
[11:18] <weeb1e_> The company which spends around 100 million a year with dell is going to consider moving to HP based on their response to that phone call
[11:38] <weeb1e_> ogra_: I found a software solution!
[11:38] <weeb1e_> "To dynamically control C-states, open the file /dev/cpu_dma_latency and write the maximum allowable latency to it. This will prevent C-states with transition latencies higher than the specified value from being used, as long as the file /dev/cpu_dma_latency is kept open. Writing a maximum allowable latency of 0 will keep the processors in C0"
[11:38] <ogra_> awesome
[11:39] <weeb1e_> ogra_: It only applies to the second socket
[11:39] <weeb1e_> So only half a solution
[11:41] <Lunario> Is there a way to run a webbrowser or some other program on ubuntu server but view it on another pc? kinda like teamviewer or vnc but faster?
[11:46] <weeb1e_> Nevermind, forcing c state 0 effectively disables Turbo Boost
[11:46] <weeb1e_> Which means it is not a viable option
[11:54] <peetaur2> Lunario: X11 forwarding
[11:55] <peetaur2> Lunario: ssh -X user@host, then run some command
[11:56] <Lunario> great, will check it out, thanks!
[12:01] <weeb1e_> ogra_: So the final solution is to compromise by writing about 80 to that file and keeping the file open forever
[12:02] <weeb1e_> That will limit the C states to between C0 and C3, stop scaling under minimal load and allow turbo to function correctly
[12:02] <weeb1e_> I will just need to build a custom service which will keep that file open at all times
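The "custom service" can be tiny. A sketch of the kernel PM QoS trick weeb1e_ quotes: the device path is the real /dev/cpu_dma_latency interface, the 80 µs value is the compromise from the discussion, and the function takes the device path as a parameter so it can be tried against a plain file:

```shell
# Hold the PM QoS device open with a latency cap written into it; the kernel
# honours the request only while the file descriptor stays open.
hold_dma_latency() {
    dev="$1"; max_us="$2"
    exec 3> "$dev"             # fd 3 stays open for the life of this shell
    printf '%s' "$max_us" >&3  # ASCII decimal, no trailing newline
}
# On the real box (as root):
#   hold_dma_latency /dev/cpu_dma_latency 80
#   sleep infinity   # keep the process (and thus the fd) alive
```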
[12:51] <toyotapie> Can I debootstrap a 64-bit OS from a 32-bit installation or vice versa?
[12:51] <rbasak> Vice versa only I think.
[12:52] <rbasak> Though you might be able to use qemu-user-static or something to debootstrap 64 bit from 32. I'm not sure.
[12:53] <rbasak> You might just want to use Ubuntu Core images instead.
[12:53] <rbasak> They're pretty much a tarballed debootstrap.
[12:53] <toyotapie> Nice, with or without kernel ?
[12:54] <toyotapie> and grub*
[12:54] <toyotapie> either way, I am downloading it now
[13:07] <rbasak> toyotapie: no kernel or bootloader IIRC. You can install those yourself though, either in a chroot (on a 64-bit host for 32-bit system).
[13:07] <rbasak> toyotapie: or by getting the debs and booting the system manually to get started.
[13:39] <frobware> I have access to an APM Mustang board and wanted to know if it is possible to configure eth1 and eth2; eth0 seems to be detected fine.
[14:17] <^Lestat> make: no target specified http://pastebin.com/vFD0FNqg
[14:18] <^Lestat> what am I doing wrong?
[14:19] <lordievader> ^Lestat: What are you trying to build? Have you configured it?
[14:19] <foolhardy> I have an ubuntu 12.04 server vm and I'm finding that the nightly suspend (for backups) causes the clock to be off, drifting further behind each night. How do I go about telling ubuntu to pull time daily from NTP?
[14:19] <^Lestat> i typed ./configure
[14:19] <^Lestat> Im trying to install pdflib
[14:19] <^Lestat> following this http://linuxhelp-kavanathai.blogspot.com/2011/08/how-to-install-pdflib-lite-pdflib-on.html
[14:19] <lordievader> ^Lestat: And it completed without error?
[14:20] <^Lestat> I get all the way to step 9
[14:20] <^Lestat> thus far yes
[14:20] <^Lestat> just trying to get a local install on my dev box so Im not push/pulling all day
[14:21] <^Lestat> I dont even understand what phpize does.
[14:21] <Pici> A 2011 tutorial on how to install a library from source aimed at Centos installs?
[14:21] <^Lestat> ah crap. I didnt even read that it was centos
[14:22] <lordievader> ^Lestat: Could you pastebin the output of your last ./configure?
[14:22] <^Lestat> sure...
[14:23] <^Lestat> ohhh. ok I feel foolish
[14:23] <^Lestat> http://pastebin.com/MMgp4RXw
[14:23] <^Lestat> yea there are errors
[14:25] <^Lestat> I dunno anything about makes
[14:27] <Pici> There is php-fpdf (which appears to be a free alternative to pdflib) in the repositories, if you aren't tied to pdflib
[14:27] <sbalneav> Hello all.  I'm getting bitten by https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1352876
[14:28] <sbalneav> Looks like the latest glib has some problems, I'm down hard on several of my servers.  Anybody know of any fixes?
[14:33] <^Lestat> Im using pdflib in my production server, so I'd rather not change my codebase on my local dev
[14:34] <lordievader> ^Lestat: ;) Fixing the errors will probably solve your problem.
[14:35] <^Lestat> no idea how/where
[14:35] <^Lestat> Im not a server dude.
[14:36] <^Lestat> and that is for centOS not ubuntu
[14:37] <lordievader> ^Lestat: Have you installed build-essential? Source code is source code, CentOS code should work here too. Unless the code is very specific... but then it is bad code.
[14:43] <^Lestat> installing... ;-)
[14:48] <sbalneav> I have updated the bug.  Looks like ubuntu has buggered up libc6 2.11.1-0ubuntu7.14
[14:48] <^Lestat> ok that changed everything
[14:48] <^Lestat> now how was a noob supposed to know that?
[14:48] <sbalneav> If you back down to the 2.11.1-0ubuntu7.13 version of libc6 and libc-bin, it fixes the problem.
[14:48] <lordievader> ^Lestat: The error tells you ;)
[14:49] <^Lestat> That-> ? configure: error: in `/home/vagrant/Downloads/pdflib-3.0.4': configure: error: C++ preprocessor "/lib/cpp" fails sanity check
[14:49] <lordievader> Yes.
[14:49]  * ^Lestat scratches head
[14:50] <lordievader> ^Lestat: Now if you had build-essential installed, that would've been interesting ;)
[14:52] <^Lestat> heh now I messed up my php
[14:58] <msx> hello everyone, do you know where i can find the script that produces this output? http://i.imgur.com/WNGS9nC.png   I already tried the usual places like /etc/issue, /etc/issue.net, /etc/sshd/, hushlogin, etc. to no avail :S
[15:01] <msx> xD
[15:03] <msx> oops wrong # sorry
[15:04] <pmatulis> msx: seems server-related no?
[15:04] <msx> pmatulis: hi, well, it is the standard ubuntu server notice when you login via a tty
[15:05] <pmatulis> answer: update-notifier-common and landscape-common packages.  you can safely remove them
[15:05] <msx> pmatulis: believe it or not i can't find it anywhere
[15:05] <msx> okay, but i don't want to remove it, just modify it. Now i know it is related to the landscape suite i know where to look
[15:06] <msx> pmatulis: tnx!
[15:06] <pmatulis> msx: good deal
[15:08] <msx> ahh, nice, it's a python script i see
[15:09] <pmatulis> msx: it will be in danger of being overwritten during a package update
[15:10] <Tzunamii> You can always flag it to be ignored, if you want
[15:11] <pmatulis> true
[15:18] <^Lestat> unreal
[15:18] <^Lestat> the php manual had everything i needed, in someones comment
[15:19] <^Lestat> But just looking at those command line instances looks daunting to a noob
[15:27] <^Lestat> Anyone recall being a noob?
[15:27] <msx> pmatulis: Tzunamii: yes, tnx :)
[15:30] <csst0111> I'm using crontab to run a python script. The script is in /home/user/foo/bar/script.py and creates some files. When the cron job runs, the output files are created in /home/user and not /home/user/foo/bar
[15:31] <csst0111> Is there something I can add to the crontab, or should I change the script so it creates the files in the desired destination?
[15:33] <rbasak> csst0111: you could prefix the crontab command with "cd foo/bar &&"
[15:33] <rbasak> csst0111: or have your python script change the pwd
[15:33] <rbasak> csst0111: import os; os.chdir("...")
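rbasak's first option, written out as a crontab line (the five-minute schedule and the absolute path are illustrative):

```crontab
# m   h  dom mon dow   command
*/5   *   *   *   *    cd /home/user/foo/bar && python script.py
```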
[15:33] <csst0111> rbasak, oh yes the first one is great!!
[15:34] <csst0111> thank you!
[15:34] <rbasak> Yeah that keeps your script nice and generic
[15:34] <rbasak> No problem
[15:34] <Tzunamii> You really should let cron be and do it in the script
[15:34] <rbasak> Tzunamii: but then the script ends up with the directory hardcoded. Move the script and it breaks or produces unusual behaviour.
[15:34] <rbasak> Tzunamii: better to define the script as producing the output in the current directory.
[15:35] <rbasak> Tzunamii: and then make sure it is called with the current directory correct. As an example, GNU make does exactly this.
[15:35] <lordievader> ^Lestat: Sure we do, I remember my first time installing an nvidia driver like it was yesterday.
[15:35] <rbasak> (as well as provide a -C option for convenience)
[15:35] <^Lestat> I feel like a total moron. I don't even know the right terms to use.
[15:36] <Tzunamii> The script is where everything should be at. In 6 months when he wants to make changes he will have forgotten where that output fix was.
[15:36] <lordievader> ^Lestat: As long as the other person understands what you are talking about, who cares.
[15:37] <rbasak> Tzunamii: hardcoding path locations into scripts is usually a bad idea.
[15:37] <rbasak> It generally ties the script to a particular path and thus a particular machine.
[15:37] <rbasak> Makes it really inflexible to deploy, develop and test.
[15:38] <Tzunamii> Depends on what kind of script it is. Usually scripts worth mentioning have a parameter where you set the path
[15:39] <Tzunamii> If he wants to do your solution he really should add a comment in the script saying where the output goes
[15:41] <rbasak> The output will be at "."
[15:55] <^Lestat> well, I don't want ppl to think Im lazy either
[15:57] <Tzunamii> rbasak: I think you're missing the point. Never spread out things when you don't have to. In order to have everything in the script you add the line:  BASEDIR=$(dirname $(readlink -f $0))   in the script and the script knows where to cd to/use as its basedir
[15:57] <Tzunamii> That way the script can be moved around and/or used in cron and it will always know its basedir
[15:58] <Tzunamii> Sorry for the late reply. I'm working atm
[15:58] <rbasak> Tzunamii: that still stops you testing the script with a different set of input and output files.
[15:59] <rbasak> (without further support to override in the script)
[15:59] <rbasak> I'm in favour of doing things the Unix way. Less surprise then.
[15:59] <rbasak> Unix commands generally act on the current working directory.
[15:59] <rbasak> The caller gets to decide that.
[15:59] <Tzunamii> rbasak: So you're saying adding commands outside a scripted environment in cron is the solution?
[16:00] <rbasak> It makes sense to separate the script from the data it operates on, just like any other command.
[16:00] <Tzunamii> No, it doesn't
[16:00] <rbasak> That makes testing and deployment easier.
[16:00] <rbasak> It's consistent with everything else on the system.
[16:00] <Tzunamii> I'm not going to argue coding practices here in this channel
[16:52] <vedic> What is the fastest way of sending small size files to another server? For security I am looking to setup VPN between machines and transfer files on that
[16:52] <vedic> small size means about 50kb to 100 kb each file
[16:53] <lordievader> vedic: rsync + ssh?
[16:53] <vedic> And if possible, I would prefer to automate it. i.e. as soon as the file comes to a directory, it should get transferred to another machine without waiting
[16:53] <rbasak> tar + ssh might be slightly faster the first time.
[16:54] <vedic> rbasak: Do you think tar is required as the size is less than 100kb?
[16:54] <rbasak> tar doesn't have anything to do with size
[16:54] <vedic> lordievader: I guess rsync first spends some time in calculating checksum etc
[16:54] <RoyK> rbasak: not a lot
[16:54] <RoyK> vedic: not if the files have the same size/timestamp
[16:54] <rbasak> rsync introduces some more latency, since the file list has to be sent across first, etc.
[16:55] <RoyK> rbasak: with ssh compression, that's not a lot
[16:55] <lordievader> vedic: True, however it only sends over new/changed stuff. And it can compress things.
[16:55] <RoyK> vedic: unionfs, perhaps?
[16:55] <RoyK> erm - not unionfs
[16:55] <rbasak> RoyK: compression doesn't help latency
[16:55] <RoyK> my bad
[16:56] <RoyK> rbasak: obviously, but that depends on your original latency
[16:56] <vedic> RoyK, rbasak: Files are not coming to machine 1 very quickly. Something like 100 files per minute, but they should get moved (not copied) to machine 2 as soon as possible
[16:56] <rbasak> If there were a large pile of small files for a one-off transfer, I'd use tar over ssh. It'd be noticeably faster because there wouldn't be an initial delay while the disk seeks to find the entire file list to send over.
[16:56] <RoyK> vedic: unison is what I meant
[16:56] <rbasak> That's why I said tar initially.
[16:57] <rbasak> But if it's a regular thing, you probably want rsync.
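rbasak's tar-over-ssh idea streams the whole tree in one pass instead of negotiating a file list first. A sketch (the hosts and paths in the comment are placeholders; the same pipeline is demonstrated locally below so it runs without ssh):

```shell
# Over the network it would look like:
#   tar -C /srv/files -cf - . | ssh machine2 'tar -C /srv/incoming -xf -'
# Locally, the identical tar-to-tar pipe:
src=$(mktemp -d); dst=$(mktemp -d)
echo sample > "$src/a.txt"
tar -C "$src" -cf - . | tar -C "$dst" -xf -
```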
[16:58] <vedic> rbasak: It's regular and all files are different for sure. The files moved before will never come again to machine 1. They will remain on machine 2, so partial content transfer is not the case.
[16:58] <RoyK> vedic: or perhaps FreeFileSync
[16:58] <RoyK> vedic: even if they are changed on machine 1?
[16:59] <RoyK> zfs send/receive :D - but that implies zfs :P
[16:59] <vedic> RoyK: On machine 1, a file stays only until it is moved, and I am looking to move it asap. A file comes to Machine 1 because a user is uploading via API or web form. But they are processed on Machine 2
[17:00] <RoyK> vedic: I'd write a little daemon using inotify to grab a file once it's uploaded and closed, send it over and remove the original
[17:00] <RoyK> shouldn't be too hard in perl/python/whatever
[17:00] <vedic> RoyK: I see
[17:00] <rbasak> I'd consider re-engineering a little.
[17:01] <rbasak> Rather than using files themselves, maybe use git-annex or something like that to better manage files between multiple machines.
[17:01] <rbasak> It will track what machine has what, and handles sync on demand.
[17:01] <vedic> I use Python so yea, that can be done. I was just thinking if there is something already ready to do such thing as a service
[17:01] <rbasak> I hear that it has an automatic sync feature now, too.
[17:01] <RoyK> no reason to setup mass synchronisation if you only want to move a file whenever it lands on your computer - inotify is quite simple
[17:02] <rbasak> Then you don't get yourself into an odd state in failure cases.
[17:02] <RoyK> rbasak: seems like overkill to me, really. use inotify on the directory, wait for an incoming file, wait for it to close, rsync it over to the other machine and unlink it
[17:02] <rbasak> RoyK: that's fine for the simple common case, yes.
[17:03] <rbasak> RoyK: if there's a larger deployment that depends on it, then it's a maintenance nightmare.
[17:03] <vedic> rbasak, RoyK: I see max 100 files coming per minute on Machine 1
[17:03] <RoyK> rbasak: well, he didn't say anything about deployment size other than that it's two machines
[17:03] <rbasak> Suddenly what file exists where becomes part of your deployment state.
[17:03] <rbasak> By larger deployment I probably should have said a complex deployment
[17:03] <rbasak> A web app on one machine that sends files to another to be processed is complex, in my book.
[17:03] <RoyK> vedic: ouch - what sort of files are these? what are you going to do with identical filenames?
[17:04] <RoyK> vedic: you could use unison, though
[17:04] <vedic> RoyK: File names are UUID so I don't think it can be identical. These are small music samples that I need to signal process and send the reply back to the user asap.
[17:06] <RoyK> vedic: then I suppose something like the webapp receiving the file could use http post (or something) to the receiving server and get the answer quickly. shouldn't be much of an overhead if it's on a LAN, and it should be easy to make it quick
[17:07] <rbasak> +1
[17:07] <vedic> RoyK, rbasak: What if I set up a TCP/IP server and client? Whenever a file lands on Machine 1, Machine 1's client connects to the server and sends the file to machine 2. Or the connection is not terminated and the server just waits to get the next file from the client?
[17:07] <rbasak> Or stick them in a message queue, though admittedly that is one more component to manage in the deployment.
[17:08] <RoyK> vedic: didn't you say the file was received by a webapp on server 1? if so, it should be simple to do the magic from there with a webservice on server 2
[17:09] <vedic> RoyK: yea, Machine 1 is a web server where the file comes in. It can come via API or via web form upload. Machine 2 is the place where I process these files. So I can run a TCP/IP server on Machine 2 which is waiting to listen for Machine 1. So machine 1 will actually run a web server to get the file from the user and a TCP/IP client to send it to Machine 2
[17:12] <vedic> RoyK, rbasak: Do you think I am just repeating what the tools already provide? or is it just not a good solution
[17:12] <RoyK> vedic: a TCP/IP server like a small webserver and then another webapp to do the dirty work is what I'd do
[17:12] <TJ-> vedic: Does machine 1 do anything to the files except write them to a file-system?
[17:12] <rbasak> vedic: it's important to consider the error states. That's what costs time and effort maintaining a deployment.
[17:13] <vedic> TJ: Machine 1 just holds those files temporarily, waiting to get them moved permanently
[17:13] <rbasak> vedic: what you want to do is reduce the state space so that the system doesn't get stuck or broken.
[17:13] <vedic> rbasak: I see
[17:13] <rbasak> vedic: or that it self-corrects from an errant state.
[17:13] <vedic> rbasak: yea, thats priority
[17:13] <TJ-> vedic: So why not NFS mount a file-system from machine 2 onto machine 1? machine 1 writes into it, machine 2 sees the files arrive
[17:13] <rbasak> I would avoid NFS since it makes handling errors harder.
[17:14] <rbasak> What if machine 2 is down? Should the web app on machine 1 hang?
[17:14] <vedic> TJ: What if Machine 1 and Machine 2 are not on a LAN but on cloud, and may be hosted in different locations?
[17:14] <rbasak> vedic: the issue with moving files about is that if there's a "confused" state possible, you'll inevitably end up in it eventually.
[17:15] <RoyK> vedic: If I understand your application correctly, I'd use webservices - just that - it'll make error handling easy - far easier than nfs or other shared filesystems
[17:15] <rbasak> vedic: eg. races like failures when shutting down for restart, and a file was half copied. Or old half copied temp files filling up all space.
[17:15] <vedic> RoyK: I see
[17:16] <rbasak> Or a file copied across but not removed at the sending end. The receiving end processes it, deletes the file, and then it accidentally gets processed twice.
[17:16] <vedic> rbasak: yea, I would surely prefer that this doesn't happen
[17:16] <rbasak> A message queue basically solves this problem. But it is complex to deploy.
[17:16] <vedic> rbasak: hmm
[17:16] <RoyK> vedic: with webservices (or similar) it'll be stateful from end to end - keep it simple ;)
[17:16] <vedic> rbasak: RabbitMQ?
[17:17] <rbasak> RoyK's solution will also work cleanly I think, assuming that files can be processed immediately. Otherwise you'll want a queue.
[17:17] <rbasak> Something like that, yes.
[17:17] <rbasak> Amazon has SQS.
[17:17] <vedic> rbasak: yea
[17:19] <vedic> rbasak, RoyK: I think I will need something like a message queue. Hope that doesn't add too much latency of its own
[17:19] <TJ-> I agree with RoyK, simple inotifywait + rsync, as in the example at https://github.com/rvoicilas/inotify-tools/wiki#info
[17:19] <TJ-> I think you're over-engineering the solution
[17:24] <vedic> TJ: hmm... I see pyinotify
[17:24] <vedic> I use Python so I will surely check this along with message queue solution
[17:28] <rbasak> I just remembered watershed
[17:29] <rbasak> That might be even easier than inotify
[17:29] <RoyK> TJ-: I thought so at first, but as of now I think it'd be better for the receiving webapp to just use a webservice with server 2, which may do some queueing if needed
[17:29] <RoyK> (on server 2, that is)
[17:29] <rbasak> Just call it with rsync every time after you finish writing a file. It will make sure that rsync only runs once, and one more final time.
[17:29] <rbasak> Be careful with writing files though. You don't want to rsync half a file, so mv it in from another directory.
[17:29] <rbasak> (this is one of the error states I was talking about)
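The write-then-mv hand-off rbasak describes might look roughly like this in shell. This is a minimal sketch: the spool layout, the `publish` helper, and the remote target in the final comment are all hypothetical, and watershed is the Ubuntu tool he mentions for serializing the rsync runs.

```shell
#!/bin/sh
# Sketch of the "never expose a half-written file" pattern: write into a
# staging directory on the SAME filesystem, then mv(1) into the outgoing
# directory. Within one filesystem mv is an atomic rename, so anything
# scanning outgoing/ only ever sees complete files.
set -eu

SPOOL=$(mktemp -d)            # stand-in for e.g. /var/spool/myapp
STAGING=$SPOOL/staging        # partial writes live here
OUTGOING=$SPOOL/outgoing      # only complete files ever appear here
mkdir -p "$STAGING" "$OUTGOING"

publish() {
    # $1 = final file name; file content arrives on stdin
    tmp="$STAGING/$1.$$"
    cat > "$tmp"
    mv "$tmp" "$OUTGOING/$1"  # atomic rename within one filesystem
}

printf 'hello\n' | publish report.txt

# After each publish you would kick the transfer, serialized through
# watershed(1) so that concurrent writers trigger at most one running
# rsync plus one final catch-up run (remote path is hypothetical):
#   watershed rsync -a "$OUTGOING/" machine2:/var/spool/myapp/incoming/
```

Note that mv is only an atomic rename when source and destination are on the same filesystem, which is why the staging directory sits next to outgoing/ rather than in /tmp.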
[17:36] <hallyn> smb: i'm thinking on thursday morning (my morning :) I may go through the debian.vs.ubuntu libvirt packages.
[17:37] <hallyn> stgraber: I really wish '-F' was an option in download template.  I always mis-spell '--flushcache'.
[18:56] <zartoosh> hi this might be the wrong place to ask but here it is: we recently moved from 12.04 to 14.04. One of our applications, which is single-threaded, no forking, and does floating point calculation, is running 3 times slower on Ubuntu 14.04 compared to 12.04. The floating point uses math library calls: tan, ceil, floor. Any hints greatly appreciated. thx
[19:01] <sarnold> zartoosh: perhaps related: https://gcc.gnu.org/ml/gcc/2012-02/msg00469.html
[19:02] <MavKen> with 1GB ram...any reason to use 64bit?  basic lamp setup with a few wordpress sites
[19:02] <Guest20842> how to access my local machine from another machine ?
[19:02] <KM0201> Guest12249: remote desktop?  vnc?
[19:02] <bekks> !ssh | Guest20842
[19:02] <MavKen> Guest20842, ssh
[19:03] <KM0201> oh.. forgot i was in -server  :)
[19:11] <qman__> MavKen: consistency, application support
[20:06] <fridaynext> if i sudo update-rc.d sickbeard defaults - will that cause it to start up at system boot each time?
[20:08] <arrrghhh> fridaynext, you have sickbeard installed and an entry in /etc/init.d for it?
[20:08] <fridaynext> arrrghhh: yes
[20:08] <arrrghhh> then yes, it will.
[20:10] <fridaynext> arrrghhh: gotcha - thanks
[20:10] <arrrghhh> http://manpages.ubuntu.com/manpages/precise/man8/update-rc.d.8.html
[20:10] <arrrghhh> if you want to know moar
[20:23] <delinquentme> best way to get the internal network IP from a command like ifconfig ... but without needing to clean up the other stuff?
[20:23] <sarnold> delinquentme: "the" IP? it's possible for a machine to have thousands, if not millions..
[20:24] <delinquentme> so im making a number of cloned machines so all of the info returned should be fairly similar
[20:25] <xibalba> anyone here work with pure-ftpd much?
[20:26] <xibalba> trying to use pure-quotacheck -u ftpuser -d /home/some/user/directory. it runs, but returns nothing.
[20:26] <delinquentme> ifconfig | perl -nle'/dr:(\S+)/ && print $1'
[20:27] <sarnold> delinquentme: ip addr show  may be easier to parse
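Following sarnold's suggestion, `ip -o` emits one record per line, which is much friendlier to awk than ifconfig's multi-line blocks. A minimal sketch, parsed here against a captured sample line (the interface and address are made up); on a live machine you would pipe `ip -4 -o addr show` straight in instead:

```shell
#!/bin/sh
# Each `ip -4 -o addr show` record is a single line; field 4 holds the
# address in CIDR form (e.g. 192.168.1.42/24).
sample='2: eth0    inet 192.168.1.42/24 brd 192.168.1.255 scope global eth0'

first_ipv4() {
    # Strip the /prefix from field 4 and print only the first record
    awk '{ sub(/\/.*/, "", $4); print $4; exit }'
}

addr=$(printf '%s\n' "$sample" | first_ipv4)
echo "$addr"
```

For identically-configured clones you could also restrict it to one interface, e.g. `ip -4 -o addr show dev eth0 | first_ipv4`.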
[20:27] <xibalba> oh it only creates the file...doesn't check the quota
[20:57] <monokrome> Does anyone here have experience running OpenStack on Ubuntu server?
[20:58] <monokrome> I was going through the documentation for Ubuntu OpenStack Cloud and it says to use `virsh list --all` which doesn't list anything
[20:58] <monokrome> So, I created 6 VMs with `virsh install` (hopefully that is what is expected) but don't know the appropriate way to get them to talk to MAAS
[20:59] <monokrome> I figure that they need to be on a private network, but am not sure how to set one up. They are set up to use PXE.
[21:06] <rbasak> monokrome: there's a ton of work in this area at the moment. Try the cloud installer: http://ubuntu-cloud-installer.readthedocs.org/en/latest/
[21:11] <stokachu> http://askubuntu.com/questions/144531/how-do-i-install-openstack
[21:11] <stokachu> rbasak, ^
[21:13] <rbasak> monokrome: stokachu's link should help
[21:14] <stokachu> http://ubuntu-cloud-installer.readthedocs.org/en/stable/single-installer.guide.html
[21:14] <stokachu> monokrome, thats our guide for the cloud installer
[21:15] <stokachu> monokrome, http://ubuntu-cloud-installer.readthedocs.org/en/latest/single-installer.guide.html
[21:15] <stokachu> sorry you want that one instead
[21:15] <qman__> I set it up once and found that it's quite difficult and complex. It took me a few days to actually get things working, and in the end I found that it didn't suit my needs and replaced it with a normal KVM setup
[21:18] <qman__> I also found that it's changing a lot between versions, so old documentation is usually more harm than good
[21:19] <rbasak> The Ubuntu cloud installer documentation linked above is current, AFAIK.
[21:20] <stokachu> our installer guides are also autogenerated on each commit
[21:20] <stokachu> so they'll be the most current
[21:22] <qman__> Hopefully it's much better now, I used 12.04 when I set it up
[21:25] <rbasak> Things have progressed massively in the last two years.
[21:29] <qman__> The main reason I decided not to use it was that I needed persistent VMs, and while possible, doing that was awkward and difficult, and regular KVM just made more sense
[21:31] <z1haze> if i were to use a bare metal hypervisor, such as esxi, on my server, would i still be able to install, say, ubuntu on it so i can use the server itself as a regular server and just use esxi to create the vm's? im confused as to how that works
[21:31] <z1haze> my host has an install that basically replaces the o/s but they are telling me its not a full o/s
[21:31] <qman__> z1haze: no, the bare metal hypervisor becomes the server's OS
[21:32] <z1haze> so what would you recommend i use then?
[21:32] <qman__> z1haze: you then create everything in VMs
[21:32] <z1haze> just like create a large portioned vm for myself?
[21:33] <z1haze> i guess i want the functionality of using the hypervisor and have ubuntu.
[21:33] <z1haze> i suppose i could just create a vm for myself and install ubuntu on it huh?
[21:33] <qman__> Yes
[21:33] <qman__> That is the point of bare metal hypervisors
[21:33] <z1haze> ok yea, im really new to this. im sorry
[21:34] <qman__> Only the minimum runs non-virtualized
[21:34] <z1haze> ok well, what if i want my vm, the one ill be using for myself, to utilize basically whatever portion of the server i want.. how would i configure that so as to not be restricted on cpu or RAM or w/e? can it be configured dynamically to where it takes whatever it needs?
[21:35] <qman__> That way, the hardware layer is abstracted and marginalized
[21:36] <qman__> Some hypervisors support dynamic hardware changes but generally you don't do that
[21:36] <qman__> The concept is that you create a VM for each role or service you are performing
[21:37] <qman__> with appropriate resources assigned to that role
[21:39] <qman__> So instead of one bare metal server that does lots of things, you have lots of VMs that do one thing each, sharing hardware
[21:42] <qman__> It simplifies upgrades and management, and allows you to reduce downtime when problems arise
[21:44] <qman__> For example, I am able to upgrade my VMs to 12.04 and 14.04 one at a time, only taking down one service or role, even though they run on the same hardware
[21:44] <qman__> I can also take snapshots and roll back if it fails
[21:45] <qman__> Which for my mail server, it did
[21:46] <qman__> All the while, my spam filter running as another VM on the same box, kept receiving my mail
[21:50] <Lunario> when I ssh into my ubuntu server via ssh -X -t  and then start a program in the terminal, I would like that program to be accessible from other terminal windows created via ssh. I would also like to have access via ssh to particular programs running on my server and open them in the terminal (say an always connected irssi client). How do I do that?
[21:51] <qman__> Lunario: not including the X11 forwards, you can use GNU screen
[21:51] <fridaynext> so i've got files in /etc/init.d/ as well as /etc/defaults, and i've update-rc.d'd them and chmod +x'd them, but they still don't start at bootup. Ideas why?
[21:55] <Wylley> Hi everyone
[21:55] <Lunario> qman__: just searched for it and am checking it out, thanks for the hint!
[21:56] <Lunario> seems to be able to do what I want to do, so great :)
[21:56] <Wylley> having a weird issue trying to install Ubuntu Server (14) on to a machine that has FakeRAID built in to the motherboard. Installation goes normally, but on reboot, I just get an endless loop of "Incrementally started RAID arrays" and "mdadm: CREATE user root not found", etc.
[21:57] <qman__> Wylley: rule of thumb, don't use fakeraid, just turn it off and use mdadm
[21:58] <qman__> It will have more features, be more reliable, and be portable
[21:58] <fridaynext> and Wylley, if you need a tutorial, I used this one to set up RAID5, and it helped me understand it immensely: http://zackreed.me/articles/38-software-raid-5-in-debian-with-mdadm
[21:58] <fridaynext> Also has great tutorials on SMART drive status, UPS, email notifications, etc.
[21:59] <Wylley> qman__, ok, killing the onboard "raid" controller, going to reinstall.
[22:00] <Wylley> fridaynext, thanks. I'll go check it out.
[22:01] <qman__> Wylley: the installer's raid option during disk setup is mdadm in case that wasn't clear
[22:03] <Wylley> qman__, during the install, it says it found drives containing mdadm containers. Do I want to activate these?
[22:04] <qman__> No, you want to delete them and start over
[22:04] <Wylley> ok, and do I want "entire disk" or "entire disk with lvm"?
[22:04] <qman__> I've found that sometimes you can get into a situation where you have to manually zero out the drives otherwise the installer keeps trying to assemble old stuff and never works
[22:05] <qman__> You want custom
[22:07] <qman__> https://help.ubuntu.com/14.04/serverguide/advanced-installation.html
[22:14] <Wylley_> qman__ thanks for your help. I think I'm on my way to a working server now. :-)
[22:43] <Lunario> qman__: coming back to gnu screen: is it also possible to keep a gtk process started via gnu screen running after detaching from the session?
[22:47] <rbasak> Lunario: look into xpra to do screen-like things to graphical (X/GTK) programs
[22:48] <Lunario> rbasak: thanks, will do