/srv/irclogs.ubuntu.com/2014/08/05/#ubuntu-server.txt

tewardi think the php5 changes that changed the chmod permissions for php5-fpm's socket/listener introduced a regression for some webserver compatibility - it runs as root:root instead of www-data:www-data, and without the 0666 permissions it breaks with nginx's default php5-fpm + nginx configuration file setup...00:03
tewardwhile it fixed a CVE, I think it might've introduced future issues...00:04
teward... should I be poking the security team on this or just file a new bug?00:04
teward(this currently affects Precise, I have not tested later versions yet, as i'm still re-spinning my VMs)00:05
rbasakteward: that rings a bell. Might be worth checking if there's a bug for that already.00:13
tewardrbasak, i didn't see one when i looked, but I'm testing Trusty right now00:13
tewardworking on spinning up a Utopic VM to test and preempt the issue00:13
tewardrbasak, I found a solution, and it's literally a two-line change00:13
tewardwhich just forces php5-fpm to listen as www-data instead of root00:14
teward(leaving 0660 as the perms)00:14
tewardworks on my production server on Precise, gonna test shortly on Trusty00:14
tewardassuming the VM ever turns on >.<00:14
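(teward's two-line change isn't pasted here, but for reference, php5-fpm exposes socket ownership and mode through its pool configuration — a sketch of the kind of directives involved, not necessarily the exact patch teward applied:)

    ; /etc/php5/fpm/pool.d/www.conf -- only relevant when "listen" points at a unix socket
    listen = /var/run/php5-fpm.sock
    listen.owner = www-data   ; create the socket as www-data instead of root
    listen.group = www-data
    listen.mode = 0660        ; keep the tightened permissions from the CVE fix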
rbasakteward: as long as the change doesn't introduce a regression in itself. That's always the worry :)00:14
tewardrbasak, exactly why i wouldn't mind looping the sec team on the consideration for the patch00:14
tewardnoting, of course, the php5-fpm change that WAS made kinda introduced a regression into out-of-the-box usage for nginx at the very least...00:15
tewardrbasak, if Marc were around I'd poke them on this but meh00:16
tewardrbasak, i also don't immediately see any bug(s) on this on Launchpad00:17
rbasakteward: I recall some kind of claimed regression in this area. It might be unrelated, or resolved.00:19
rbasakteward: or I might be getting mixed up with something else.00:19
sarnoldI seem to remember there was a regression bug that got a second update..00:20
sarnoldteward,rbasak http://www.ubuntu.com/usn/usn-2254-2/00:20
tewardsarnold, if that's the case it didn't fix it in Precise00:22
tewardmaybe Trusty but not Precise00:23
tewardsarnold, looks like it works in Trusty00:23
delinquentmeIm configuring a number of ubuntu servers to do number crunching ... and each of the nodes are using SSH to share resources ... is there some explicit reason to NOT use a single public key between all of the compute nodes??00:55
Sachiru@delinquent: Are the nodes public (i.e. internet) facing or not?00:58
z1hazeheh, so TJ- just watched me the last few days in the various channels, enjoyed watching me sweat huh?00:59
z1hazehope its been entertaining01:00
TJ-z1haze: No, it's quite painful actually. As I advised, you need to spend some time learning the underlying concepts and command configurations or else you'll end up with something less than 100% correct or secure. These are complex issues, they take many months or years for professionals to master.01:02
z1hazeright which is why i obviously am not going to be able to sit and read a couple pages and do it on my own01:02
z1hazei was here, i offered to pay someone to do it with me01:03
tewardrbasak, sarnold:  since this isn't a security bug i'm not going to keep bugging you two on this issue in -hardened, but this is the bug for the observed issue in Precise:  https://bugs.launchpad.net/ubuntu/+source/php5/+bug/135261701:05
uvirtbotLaunchpad bug 1352617 in php5 "php5-fpm UNIX sockets do not listen as www-data:www-data, cause 502s with webservers trying to use socket" [Undecided,New]01:05
sarnoldteward: nice bug, thanks01:06
tewardsarnold, note I referenced the regression bug which prompted the second update on that bug, to make a note the bug is similar.  I'm not sure how you solved that in Trusty and don't have time to dissect the diffs, but my posted solution in the bug appears to force the socket to be made with www-data:www-data01:08
tewards/on that bug/that USN/01:08
tewardsarnold, and you're welcome, I tried to be as detailed with the bug as I could without putting in unnecessary details :P01:10
sarnoldteward: heh yes, it's a tough balance isn't it? :) a good report is hard to write..01:11
tewardsarnold, indeed, and having done a bunch of SRUs, I'm pretty sure that, at least in this case, I know how to write a decent report :)01:11
tewardand of course with nobody for me to pay attention to today, I'm bored and don't mind doing bug hunting today01:12
sarnoldoh ho ho ho! :D01:12
tewardthat, and this affected a production server, so bleh01:14
tewardsarnold, i'd make a diff, but at the moment I'm stuck on [CENSORED] [CENSORED] [CENSORED] Micro[CENSORED]t right now01:14
tewardsarnold, you can't approve series nominations can you?01:17
=== arrrghhh is now known as arrrghhhAWAY
sarnoldteward: nope :/01:18
tewardmeh01:18
tewardsarnold, i'll go make a debdiff either way01:18
tewardbut meh01:18
tewardsarnold: question, do you know how to force quilt to put patches into debian/patches and refer to debian/patches instead of putting it in the source dir (and not the debian/ dir)01:32
tewardprobably some devscripts syntax, but I dunno...01:33
=== peter is now known as Guest26783
sarnoldteward: I've got an alias 'dq' that helps with that: alias dq='export QUILT_PATCHES=debian/patches'01:34
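(A minimal sketch of the workflow that alias enables, for anyone following along — the patch name and file path are only examples:)

    export QUILT_PATCHES=debian/patches     # same effect as sarnold's 'dq' alias
    quilt new fix-fpm-socket-owner.patch    # start a new patch under debian/patches/
    quilt add sapi/fpm/fpm/fpm_unix.c       # register the file before editing it
    # ...edit the file...
    quilt refresh                           # write the diff and update debian/patches/series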
tewardsarnold: should this be a security fix, or just a standard update...?  trying to figure where i should target my debdiff's changelog entry01:41
sarnoldteward: did this update break it? http://www.ubuntu.com/usn/usn-2254-1/01:44
tewardsarnold: got a diff for that update?01:44
teward(it looks like it might've broken it because 5.3.10-1ubuntu3.12 was the version that changed the chmod permissions)01:45
tewardsarnold: there seems to be additional fixes in there though somewhere01:45
tewardbecause you have to FORCE php5-fpm to make the socket as www-data:www-data now in precise01:45
sarnoldteward: 346k :/  https://launchpad.net/ubuntu/+source/php5/5.3.10-1ubuntu3.1201:45
tewardwhereas it works ootb in trusty01:46
tewardsarnold: yep that broke it01:46
tewardsarnold: i think it got missed in the regression fix due to Precise *not* having a socket created by default01:47
teward(it listens by IP)01:47
tewardas i mentioned in the bug it's a custom FPM configuration case, but it's still going to create the socket as root:root with 066001:47
sarnoldteward: ahhhh. that makes sense.01:49
tewardsarnold: there seems to have been some changes made or something to FIX that in trusty and others, but I don't know which changes fixed that issue01:50
tewardor whether it was fixed separately01:50
tewardsarnold: the workaround I do is basically this, but after the package is built:  http://paste.ubuntu.com/7957082/01:51
tewardsince that's populated by www-data in Ubuntu right now, I'm not sure if this introduces any additional security issues01:52
tewardsarnold: but yes, that USN-2254-1 fix (5.3.19-1ubuntu3.12) introduced the issue.01:53
=== Sachiru_ is now known as Sachiru
=== CripperZ- is now known as cripperz
=== cripperz is now known as CripperZ
=== CripperZ is now known as cripperz
=== cripperz is now known as CripperZ
SierraIs there any need to reboot an ubuntu server after running an apt-get upgrade?03:35
SierraIt never seems to prompt for a reboot, but ubuntu desktop seems to always want to reboot after updating03:35
tewardSierra: at kernel updates, maybe.03:44
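(A quick way to check on a stock Ubuntu server — the marker file is created by packages, typically the kernel or libc, that want a reboot:)

    if [ -f /var/run/reboot-required ]; then
        cat /var/run/reboot-required.pkgs   # which packages asked for the reboot
    fi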
z1hazeim confused.. so i mounted my ftp backup server with nfs and backed up all my stuff on it.. and when i check the usage of my ftp server it's at 0mb..03:45
z1hazewhat would someone recommend to be able to use my dedicated server as 'normal' and also be able to create vps' on it? would you recommend proxmox? or is that like solely for use of vm's04:09
=== Acilim is now known as Acilim_A
=== tortib is now known as lulzer
=== lulzer is now known as tortib
monokromeHey. Does anyone know why this has no results?    $ sudo virsh list --all06:27
monokromeI installed libvirt-bin, and maas has downloaded 12 images.06:27
=== Abhijit_ is now known as Abhijit
liquid-silencehi all, looking to migrate our email from google to a VPS, imap and pop3 including smtp with multiple domains09:26
liquid-silenceanyone can suggest the packages I might require?09:26
liquid-silenceI am thinking postfix + dovecot + postgresql09:26
liquid-silencebut not sure yet (not really wanting the database dependency)09:26
liquid-silenceWe need virtual domains09:26
liquid-silenceand virtual users09:27
Abhijit!email09:27
Abhijit:-(09:27
=== psivaa is now known as psivaa-reboot
rbasakteward: what was the ownership of the socket before the listen directive was customised, OOI?10:26
=== CripperZ is now known as CripperZ-
liquid-silencegah I hate dovecot10:41
liquid-silenceffs10:41
ashdalice help10:42
weeb1e_Has anyone ever successfully disabled power saving (CPU scaling) in a Dell PowerEdge bios?10:43
weeb1e_I am being driven insane10:43
weeb1e_Ubuntu was unable to stop CPU scaling even with software in full control of CPU management, which I assumed to be an Ubuntu 14.04 bug10:43
weeb1e_But now after trying everything imaginable, the technician has been unable to stop the CPU from scaling with software control disabled in the bios10:44
weeb1e_System Profile is set to Max Performance and C States + C1E are disabled10:45
ogra_just keep it on and force the performance governor (if you really want to waste power)10:45
weeb1e_Yet the CPU cores still enter C6 state and scale down10:45
weeb1e_ogra_: That did not work, I tried for over a day10:45
ogra_how did you try ?10:45
weeb1e_Unlike every other one of my boxes, the performance governor, and even the userspace governor, had no effect10:45
weeb1e_So now we are trying to use the bios to disable scaling, and that does not work either10:46
ogra_what did you do to enforce its usage ?10:46
weeb1e_ogra_: I used cpufrequtils and also /proc directly10:47
weeb1e_Nothing worked10:47
cfhowlett!flash10:47
ubottuTo install Flash see https://help.ubuntu.com/community/RestrictedFormats/Flash - See also  !Restricted and !Gnash10:47
weeb1e_I had a few knowledgeable people in here try and help too, with no success ogra_10:47
weeb1e_Which is why I'm really hoping someone knows how to force no scaling from the bios10:48
ogra_weeb1e_, well, make sure to "mv /etc/rc2.d/S99ondemand /etc/rc2.d/K99ondemand" to make sure the system does not forcefully load ondemand 1min after boot10:48
weeb1e_ogra_: Using cpufreq-set, I was able to set each core's governor to performance or userspace, and cpufreq-info said the cores were at max frequency10:49
weeb1e_Yet they continue to scale down to 1.6ghz10:49
weeb1e_The same happens with software cpu control, C States and C1E disabled in the bios10:49
weeb1e_So if both software and hardware based control cannot disable scaling, how the hell can it be disabled10:50
weeb1e_The technician who has physical access is going to run out of time soon10:50
ogra_heh, ask dell i guess :)10:50
weeb1e_This has been going on for over a week now10:50
ogra_you surely can hack around it somehow by forcing the min frequency up etc10:51
weeb1e_How?10:51
weeb1e_The bios apparently has no such option10:51
ogra_same way you set the governor in /proc10:51
weeb1e_and the userspace governor with a frequency had no effect10:51
ogra_there are other proc nodes next to it10:51
weeb1e_ogra_: That is the userspace governor10:52
weeb1e_Which like I said, does not work any more than performance with software control enabled in the bios10:52
weeb1e_It thinks it is working10:52
weeb1e_But it has zero effect10:52
ogra_cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq10:52
weeb1e_I use the performance governer on all my other boxes10:52
ogra_you should be able to set it there10:52
ogra_(note this is super hackish but might work)10:53
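(Roughly what ogra_ is describing — pin the minimum frequency to the maximum for every core; as noted it is hackish, and it assumes the cpufreq sysfs interface is actually in control of the hardware. Run as root:)

    for cpu in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
        cat "$cpu/scaling_max_freq" > "$cpu/scaling_min_freq"   # raise the floor to the ceiling
    done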
weeb1e_I doubt it'll work, considering the userspace governor didn't10:53
weeb1e_But I'll try once I get the techie to enable software control in the bios again10:53
weeb1e_They are going to phone dell now10:53
* ogra_ never used the userspace governor, so i cant tell 10:53
weeb1e_Based on your advice10:53
ogra_yeah, sounds very HW or BIOS/UEFI specific10:54
weeb1e_Ok, they are going to enable software control and boot it up before phoning10:54
weeb1e_So I'll try setting scaling itself10:54
ogra_right ...10:54
ogra_you should definitely talk to #ubuntu-kernel too ...10:55
weeb1e_Alright, thanks10:56
weeb1e_ogra_: Setting core 0's scaling value to 2.49ghz instead of 1.59 has no effect and all cores still scale down to 1.5911:00
ogra_sounds like a bug (in either HW or SW)11:01
weeb1e_Well considering hardware control is also bugged11:01
weeb1e_It seems like hardware11:01
weeb1e_Lets hope something comes out of phoning Dell11:04
weeb1e_Because I'm now out of ideas11:04
weeb1e_So dell is useless. Apparently they need to log a call to the warranty department so that they can send someone out to look at it11:12
weeb1e_They cannot help at all over the phone11:13
weeb1e_I recommended trying a BIOS update11:15
weeb1e_Since I have literally no other ideas now11:15
weeb1e_The company which spends around 100 million a year with dell is going to consider moving to HP based on their response to that phone call11:18
weeb1e_ogra_: I found a software solution!11:38
weeb1e_"To dynamically control C-states, open the file /dev/cpu_dma_latency and write the maximum allowable latency to it. This will prevent C-states with transition latencies higher than the specified value from being used, as long as the file /dev/cpu_dma_latency is kept open. Writing a maximum allowable latency of 0 will keep the processors in C0"11:38
ogra_awesome11:38
weeb1e_ogra_: It only applies to the second socket11:39
weeb1e_So only half a solution11:39
LunarioIs there a way to run a webbrowser or some other program on ubuntu server but view it on another pc? kinda like teamviewer or vnc but faster?11:41
weeb1e_Nevermind, forcing c state 0 effectively disables Turbo Boost11:46
weeb1e_Which means it is not a viable option11:46
peetaur2Lunario: X11 forwarding11:54
peetaur2Lunario: ssh -X user@host, then run some command11:55
Lunariogreat, will check it out, thanks!11:56
weeb1e_ogra_: So the final solution is to compromise by writing about 80 to that file and keeping the file open forever12:01
weeb1e_That will limit the C states to between C0 and C3, stop scaling under minimal load and allow turbo to function correctly12:02
weeb1e_I will just need to build a custom service which will keep that file open at all times12:02
=== sync0new is now known as sync0pate
toyotapieCan I debootstrap a 64-bit OS from a 32-bit installation or visa versa?12:51
toyotapievice versa*12:51
rbasakVice versa only, I think.12:51
rbasakThough you might be able to use qemu-user-static or something to debootstrap 64 bit from 32. I'm not sure.12:52
rbasakYou might just want to use Ubuntu Core images instead.12:53
rbasakThey're pretty much a tarballed debootstrap.12:53
toyotapieNice, with or without kernel ?12:53
toyotapieand grub*12:54
toyotapieeither way, I am downloading it now12:54
rbasaktoyotapie: no kernel or bootloader IIRC. You can install those yourself though, either in a chroot (on a 64-bit host for 32-bit system).13:07
rbasaktoyotapie: or by getting the debs and booting the system manually to get started.13:07
=== sync0new is now known as sync0pate
=== TDog_ is now known as TDog
frobwareI have access to an APM Mustang board and wanted to know if it is possible to configure eth1 and eth2; eth0 seems to be detected fine.13:39
=== Malinux_ is now known as Malinux
^Lestatmake: no target specified http://pastebin.com/vFD0FNqg14:17
^Lestatwhat am I doing wrong?14:18
lordievader^Lestat: What are you trying to build? Have you configured it?14:19
foolhardyI have an ubuntu 12.04 server vm and I'm finding that the nightly suspend (for backups) causes the clock to be off, making it slower and slower each night. How do I go about telling ubuntu to pull time daily from NTP?14:19
^Lestati typed ./configure14:19
^LestatIm trying to install pdflib14:19
^Lestatfollowing this http://linuxhelp-kavanathai.blogspot.com/2011/08/how-to-install-pdflib-lite-pdflib-on.html14:19
lordievader^Lestat: And it completed without error?14:19
^LestatI get all the way to step 914:20
^Lestatthus far yes14:20
^Lestatjust trying to get a local install on my dev box so Im not push/pulling all day14:20
^LestatI dont even understand what phpize does.14:21
PiciA 2011 tutorial on how to install a library from source aimed at Centos installs?14:21
^Lestatah crap. I didnt even read that it was centos14:21
lordievader^Lestat: Could you pastebin the output of your last ./configure?14:22
^Lestatsure...14:22
^Lestatohhh. ok I feel foolish14:23
^Lestathttp://pastebin.com/MMgp4RXw14:23
^Lestatyea there are errors14:23
^LestatI dunno anything about makes14:25
PiciThere is php-fpdf (which appears to be a free alternative to pdflib) in the repositories, if you aren't tied to pdflib14:27
sbalneavHello all.  I'm getting bitten by https://bugs.launchpad.net/ubuntu/+source/apt/+bug/135287614:27
uvirtbotLaunchpad bug 1352876 in apt ""apt-get update" crashes" [Undecided,New]14:27
sbalneavLooks like the latest glib has some problems, I'm down hard on several of my servers.  Anybody know of any fixes?14:28
^LestatIm using pdflib in my production server. So I'd rather not change my codebase on my local dev?14:33
lordievader^Lestat: ;) Fixing the errors will probably solve your problem.14:34
^Lestatno idea how/where14:35
^LestatIm not a server dude.14:35
^Lestatand that is for centOS not ubuntu14:36
lordievader^Lestat: Have you installed build-essential? Source code is source code, CentOS code should work here too. Unless the code is very specific... but then it is bad code.14:37
^Lestatinstalling... ;-)14:43
sbalneavI have updated the bug.  Looks like ubuntu has buggered up libc6 2.11.1-0ubuntu7.1414:48
^Lestatok that changed everything14:48
^Lestatnow how was a noob supposed to know that?14:48
sbalneavIf you back down to the 2.11.1-0ubuntu7.13 version of libc6 and libc-bin, it fixes the problem.14:48
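(A sketch of that downgrade, assuming the .13 packages are still available to apt — e.g. from a local mirror or /var/cache/apt/archives; otherwise dpkg -i the old .debs directly. Downgrading libc6 is risky, so test on a non-critical box first:)

    sudo apt-get install libc6=2.11.1-0ubuntu7.13 libc-bin=2.11.1-0ubuntu7.13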
lordievader^Lestat: The error tells you ;)14:48
^LestatThat-> ? configure: error: in `/home/vagrant/Downloads/pdflib-3.0.4': configure: error: C++ preprocessor "/lib/cpp" fails sanity check14:49
lordievaderYes.14:49
* ^Lestat scratches head14:49
lordievader^Lestat: Now if you had build-essential installed, that would've been interesting ;)14:50
^Lestatheh now I messed up my php14:52
=== chuck_ is now known as zul
msxhello everyone, do you know where i can find the script that produce this output? http://i.imgur.com/WNGS9nC.png   I already tried the usual places like /etc/issue, /etc/issue.net, /etc/sshd/, hushlogin, etc. to no avail :S14:58
msx*produces14:58
msxxD15:01
msxoops wrong # sorry15:03
pmatulismsx: seems server-related no?15:04
msxpmatulis: hi, well, it is the standard ubuntu server notice when you login via a tty15:04
pmatulisanswer: update-notifier-common and landscape-common packages.  you can safely remove them15:05
msxpmatulis: believe it or not i can't find it anywhere15:05
msxokay, but i don't want to remove it, just modify it. Now i know it is related to the landscape suite i know where to look15:05
msxpmatulis: tnx!15:06
pmatulismsx: good deal15:06
msxahh, nice, it's a python script i see15:08
pmatulismsx: it will be in danger of being overwritten during a package update15:09
TzunamiiYou can always flag it to be ignored, if you want15:10
pmatulistrue15:11
^Lestatunreal15:18
^Lestatthe php manual had everything i needed, in someone's comment15:18
^LestatBut just looking at those command line instances looks daunting to a noob15:19
^LestatAnyone recall being a noob?15:27
msxpmatulis: Tzunamii: yes, tnx :)15:27
csst0111I'm using crontab to run a python script. This script is in /home/user/foo/bar/script.py and creates some files. When the cron job runs, the output files are created in /home/user and not /home/user/foo/bar15:30
csst0111Is there something I can add to the crontab or I should change the script so it can create the files in the desired destination ?15:31
=== med_ is now known as Guest26240
rbasakcsst0111: you could prefix the crontab command with "cd foo/bar &&"15:33
rbasakcsst0111: or have your python script change the pwd15:33
rbasakcsst0111: import os; os.chdir("...")15:33
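(A crontab line along the lines of the first suggestion, using the path from the question — the schedule is just an example:)

    # m   h  dom mon dow  command
    */5   *  *   *   *    cd /home/user/foo/bar && python script.py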
csst0111rbasak, oh yes the first one is great!!15:33
csst0111thank you!15:34
rbasakYeah that keeps your script nice and generic15:34
rbasakNo problem15:34
TzunamiiYou really should let cron be and do it in the script15:34
rbasakTzunamii: but then the script ends up with the directory hardcoded. Move the script and it breaks or produces unusual behaviour.15:34
rbasakTzunamii: better to define the script as producing the output in the current directory.15:34
rbasakTzunamii: and then make sure it is called with the current directory correct. As an example, GNU make does exactly this.15:35
=== Acilim_A is now known as Acilim
lordievader^Lestat: Sure we do, I remember my first time installing an nvidia driver like it was yesterday.15:35
rbasak(as well as provide a -C option for convenience)15:35
^LestatI feellike a total moron. I don't even know the right terms to use.15:35
TzunamiiThe script is where everything should be at. In 6 months when he wants to do changes he will have forgotten where that output fix was.15:36
lordievader^Lestat: As long as the other person understands what you are talking about, who cares.15:36
rbasakTzunamii: hardcoding path locations into scripts is usually a bad idea.15:37
rbasakIt generally ties the script to a particular path and thus a particular machine.15:37
rbasakMakes it really inflexible to deploy, develop and test.15:37
TzunamiiDepends on what kind of script it is. Usually scripts worth mentioning have a parameter where you set the output path15:38
TzunamiiIf he wants to do your solution he really should comment the script with where the output location is set15:39
rbasakThe output will be at "."15:41
^Lestatwell, I don't want ppl to think Im lazy either15:55
Tzunamiirbasak: I think you're missing the point. Never spread out things when you don't have to. In order to have everything in the script you add the line:  BASEDIR=$(dirname $(readlink -f $0))   in the script and the script knows where to cd to / use as its basedir15:57
TzunamiiThat way the script can be moved around and/or used in cron and it will always know its basedir15:57
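(Tzunamii's variant as a complete sketch, for contrast — the script resolves its own location and works from there, so cron can call it without a cd prefix:)

    #!/bin/sh
    BASEDIR=$(dirname "$(readlink -f "$0")")   # directory this script lives in, symlinks resolved
    cd "$BASEDIR" || exit 1
    exec python script.py                      # output now lands next to the script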
TzunamiiSorry for the late reply. I'm working atm15:58
rbasakTzunamii: that still stops you testing the script with a different set of input and output files.15:58
rbasak(without further support to override in the script)15:59
rbasakI'm in favour of doing things the Unix way. Less surprise then.15:59
rbasakUnix commands generally act on the current working directory.15:59
rbasakThe caller gets to decide that.15:59
Tzunamiirbasak: So you're saying adding commands outside a scripted environment in cron is the solution?15:59
rbasakIt makes sense to separate the script from the data it operates on, just like any other command.16:00
TzunamiiNo, it doesn't16:00
rbasakThat makes testing and deployment easier.16:00
rbasakIt's consistent with everything else on the system.16:00
TzunamiiI'm not going to argue coding practices here in this channel16:00
=== arrrghhhAWAY is now known as arrrghhh
vedicWhat is the fastest way of sending small size files to another server? For security I am looking to set up a VPN between machines and transfer files over that16:52
vedicsmall size means about 50kb to 100 kb each file16:52
lordievadervedic: rsync + ssh?16:53
vedicAnd if possible, I would prefer to automate it. i.e. as soon as the file comes to a directory, it should get transferred to another machine without waiting16:53
rbasaktar + ssh might be slightly faster the first time.16:53
vedicrbasak: Do you think tar is required, given size is less than 100kb?16:54
rbasaktar doesn't have anything to do with size16:54
vediclordievader: I guess rsync first spends some time in calculating checksum etc16:54
RoyKrbasak: not a lot16:54
RoyKvedic: not if the files have the same size/timestamp16:54
rbasakrsync introduces some more latency, since the file list has to be sent across first, etc.16:54
RoyKrbasak: with ssh compression, that's not a lot16:55
lordievadervedic: True, however it only send over new/changed stuff. And it can compress things.16:55
RoyKvedic: unionfs, perhaps?16:55
RoyKerm - not unionfs16:55
rbasakRoyK: compression doesn't help latency16:55
RoyKmy bad16:55
RoyKrbasak: obviously, but that depends on your original latency16:56
vedicRoyK, rbasak: Files are not coming to machine 1 very quickly. Something like 100 files per minute, but they should get moved (not copied) to machine 2 as soon as possible16:56
rbasakIf there were a large pile of small files for a one-off transfer, I'd use tar over ssh. It'd be noticeably faster because there wouldn't be an initial delay while the disk seeks to find the entire file list to send over.16:56
RoyKvedic: unison is what I meant16:56
rbasakThat's why I said tar initially.16:56
rbasakBut if it's a regular thing, you probably want rsync.16:57
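(The two options being compared, roughly — hostnames and paths are placeholders:)

    # one-off bulk copy: stream a tarball over ssh, no per-file negotiation up front
    tar -C /srv/incoming -cf - . | ssh user@machine2 'tar -C /srv/processing -xf -'

    # recurring transfer: only sends new/changed files, at the cost of exchanging a file list first
    rsync -az /srv/incoming/ user@machine2:/srv/processing/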
vedicrbasak: It's regular and all files are different for sure. The files moved before will never come again to machine 1. They will remain on machine 2 so partial content transfer is not the case.16:58
RoyKvedic: or perhaps FreeFileSync16:58
RoyKvedic: even if they are changed on machine 1?16:58
RoyKzfs send/receive :D - but that implies zfs :P16:59
vedicRoyK: On machine 1, a file stays only until it is moved, and I am looking to move asap. File comes to Machine 1 because user is uploading via API or Web form. But they are processed on Machine 216:59
RoyKvedic: I'd write a little daemon using inotify to grab a file once it's uploaded and closed, send it over and remove the original17:00
RoyKshouldn't be too hard in perl/python/whatever17:00
vedicRoyK: I see17:00
rbasakI'd consider re-engineering a little.17:00
rbasakRather than using files themselves, maybe use git-annex or something like that to better manage files between multiple machines.17:01
rbasakIt will track what machine has what, and handles sync on demand.17:01
vedicI use Python so yea, that can be done. I was just thinking if there is something already ready to do such thing as a service17:01
rbasakI hear that it has an automatic sync feature now, too.17:01
RoyKno reason to setup mass synchronisation if you only want to move a file whenever it lands on your computer - inotify is quite simple17:01
rbasakThen you don't get yourself into an odd state in failure caes.17:02
rbasakcases17:02
RoyKrbasak: seems like overkill to me, really. use inotify on the directory, wait for an incoming file, wait for it to close, rsync it over to the other machine and unlink it17:02
rbasakRoyK: that's fine for the simple common case, yes.17:02
rbasakRoyK: if there's a larger deployment that depends on it, then it's a maintenance nightmare.17:03
vedicrbasak, RoyK: It sees max 100 files coming per minute on Machine 117:03
RoyKrbasak: well, he didn't say anything about deployment size other than that it's two machines17:03
rbasakSuddenly what file exists where becomes part of your deployment state.17:03
rbasakBy larger deployment I probably should have said a complex deployment17:03
rbasakA web app on one machine that sends files to another to be processed is complex, in my book.17:03
RoyKvedic: ouch - what sort of files are these? what are you going to do with identical filenames?17:03
RoyKvedic: you could use unison, though17:04
vedicRoyK: File names are UUIDs so I don't think they can be identical. These are small music samples that I need to signal-process and send the reply back to the user asap.17:04
RoyKvedic: then I suppose something like the webapp receiving the file could use http post (or something) to the receiving server and get the answer quickly. shouldn't be much of an overhead if it's on a LAN, and it should be easy to make it quick17:06
rbasak+117:07
vedicRoyK, rbasak: What if I set up a TCP/IP server and client? Whenever a file lands on Machine 1, Machine 1's client connects to the server and sends the file to machine 2. Or the connection is not terminated but the server is just waiting to get the next file from the client?17:07
rbasakOr stick them in a message queue, though admittedly that is one more component to manage in the deployment.17:07
RoyKvedic: didn't you say the file was received by a webapp on server 1? if so, it should be simple to do the magic from there with a webservice on server 217:08
vedicRoyK: yea, Machine 1 is a web server where the file comes in. It can come via API or via Web form upload. Machine 2 is the place where I process these files. So I can run a TCP/IP server on Machine 2 which is waiting to listen for Machine 1. So machine 1 will actually run a web server to get the file from the user and a TCP/IP client to send it to Machine 217:09
vedicRoyK, rbasak: Do you think I am just repeating what the tools already provide? or is it just not a good solution17:12
RoyKvedic: a TCP/IP server like a small webserver and then another webapp to do the dirty work is what I'd do17:12
TJ-vedic: Does machine 1 do anything to the files except write them to a file-system?17:12
rbasakvedic: it's important to consider the error states. That's what costs time and effort maintaining a deployment.17:12
vedicTJ: Machine 1 just holds those files temporarily, waiting to get them moved permanently17:13
rbasakvedic: what you want to do is reduce the state space so that the system doesn't get stuck or broken.17:13
vedicrbasak: I see17:13
rbasakvedic: or that it self-corrects from an errant state.17:13
vedicrbasak: yea, that's the priority17:13
TJ-vedic: So why not NFS mount a file-system from machine 2 onto machine 1? machine 1 writes into it, machine 2 sees the files arrive17:13
rbasakI would avoid NFS since it makes handling errors harder.17:13
rbasakWhat if machine 2 is down? Should the web app on machine 1 hang?17:14
vedicTJ: What if Machine 1 and Machine 2 are on not on LAN but on cloud and may be hosted in different locations?17:14
vedic"on not" => "not"17:14
rbasakvedic: the issue with moving files about is that if there's a "confused" state possible, you'll inevitably end up in it eventually.17:14
RoyKvedic: If I understand your application correctly, I'd use webservices - just that - it'll make error handling easy - far easier than nfs or other shared filesystems17:15
rbasakvedic: eg. races like failures when shutting down for restart, and a file was half copied. Or old half copied temp files filling up all space.17:15
vedicRoyK: I see17:15
rbasakOr a file copied across but not removed at the sending end. The receiving end processes it, deletes the file, and then it accidentally gets processed twice.17:16
vedicrbasak: yea, I would surely prefer that this doesn't happen17:16
rbasakA message queue basically solves this problem. But it is complex to deploy.17:16
vedicrbasak: hmm17:16
RoyKvedic: with webservices (or similar) it'll be stateful from end to end - keep it simple ;)17:16
=== matsubara is now known as matsubara-lunch
vedicrbasak: RabbitMQ?17:16
rbasakRoyK's solution will also work cleanly I think, assuming that files can be processed immediately. Otherwise you'll want a queue.17:17
rbasakSomething like that, yes.17:17
rbasakAmazon has SQS.17:17
vedicrbasak: yea17:17
vedicrbasak, RoyK: I think I will need something like a message queue. Hope that doesn't add its own latency to a large extent17:19
TJ-I agree with RoyK, simple inotifywait + rsync, as in the example at https://github.com/rvoicilas/inotify-tools/wiki#info17:19
TJ-I think you're over-engineering the solution17:19
vedicTJ: hmm... I see pyinotify17:24
vedicI use Python so I will surely check this along with message queue solution17:24
rbasakI just remembered watershed17:28
rbasakThat might be even easier than inotify17:29
RoyKTJ-: I thought so first, but as of now, I think it'd be better with just the receiving webapp to use a webservice with server 2, which may have some queueing if needed17:29
RoyK(on server 2, that is)17:29
rbasakJust call it with rsync every time after you finish writing a file. It will make sure that rsync only runs once, and one more final time.17:29
rbasakBe careful with writing files though. You don't want to rsync half a file, so mv it in from another directory.17:29
rbasak(this is one of the error states I was talking about)17:29
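(Pulling that together — a sketch of the inotify-tools approach RoyK and TJ- describe, with rbasak's caveat about half-written files; paths and host are placeholders:)

    #!/bin/sh
    # push each fully-written file to machine 2, then remove the local copy
    inotifywait -m -e close_write -e moved_to --format '%w%f' /srv/incoming |
    while read -r f; do
        rsync -a "$f" user@machine2:/srv/processing/ && rm -- "$f"
    done
    # have uploads written to a temp dir and mv'd into /srv/incoming, so partial files never match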
=== psivaa is now known as psivaa-holiday
hallynsmb: i'm thinking on thursday morning (my morning :) I may go through the debian.vs.ubuntu libvirt packages.17:36
hallynstgraber: I really wish '-F' was an option in download template.  I always mis-spell '--flushcache'.17:37
=== Acilim is now known as Acilim_A
=== matsubara-lunch is now known as matsubara
zartooshhi this might be the wrong place to ask but here it is: we recently moved from 12.04 to 14.04. One of our applications, which is single-threaded, does no forking and does floating point calculation, is running 3 times slower on ubuntu 14.04 compared to 12.04. The floating point uses math library calls: tan, ceil, floor. Any hints greatly appreciated. thx18:56
sarnoldzartoosh: perhaps related: https://gcc.gnu.org/ml/gcc/2012-02/msg00469.html19:01
MavKenwith 1GB ram...any reason to use 64bit?  basic lamp setup with a few wordpress sites19:02
=== mohammad is now known as Guest20842
Guest20842how to access my local machine from another machine ?19:02
KM0201Guest12249: remote desktop?  vnc?19:02
bekks!ssh | Guest2084219:02
ubottuGuest20842: SSH is the Secure SHell protocol, see: https://help.ubuntu.com/community/SSH for client usage. PuTTY is an SSH client for Windows; see: http://www.chiark.greenend.org.uk/~sgtatham/putty/ for it's homepage. See also !scp (Secure CoPy) and !sshd (Secure SHell Daemon)19:02
MavKenGuest20842, ssh19:02
KM0201oh.. forgot i was in -server  :)19:03
qman__MavKen: consistency, application support19:11
fridaynextif i sudo update-rc.d sickbeard defaults - will that cause it to start up at system boot each time?20:06
arrrghhhfridaynext, you have sickbeard installed and an entry in /etc/init.d for it?20:08
fridaynextarrrghhh: yes20:08
arrrghhhthen yes, it iwll.20:08
arrrghhhwill*20:09
fridaynextarrrghhh: gotcha - thanks20:10
arrrghhhhttp://manpages.ubuntu.com/manpages/precise/man8/update-rc.d.8.html20:10
arrrghhhif you want to know moar20:10
=== mcclurmc_ is now known as mcclurmc
delinquentmebest way to get the internal network IP from a command like ifconfig ... but without needing to clean up the other stuff?20:23
sarnolddelinquentme: "the" IP? it's possible for a machine to have thousands, if not millions..20:23
delinquentmeso im making a number of cloned machines so all of the info returned should be fairly similar20:24
xibalbaanyone here work with pure-ftpd much?20:25
xibalbatrying to use pure-quotacheck -u ftpuser -d /home/some/user/directory. it runs, but returns nothing.20:26
delinquentmeifconfig | perl -nle'/dr:(\S+)/ && print $1'20:26
sarnolddelinquentme: ip addr show  may be easier to parse20:27
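(Two common ways to get just the address, assuming a single IPv4 address on a known interface — the interface name is a placeholder:)

    hostname -I | awk '{print $1}'                                  # first address the host reports
    ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1    # address on a specific interface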
xibalbaoh it only creates the file...doesn't check the quota20:27
=== markthomas is now known as markthomas_lunch
monokromeDoes anyone here have experience running OpenStack on Ubuntu server?20:57
monokromeI was going through the documentation for Ubuntu OpenStack Cloud and it says to use `virsh list --all` which doesn't list anything20:58
monokromeSo, I created 6 VMs with `virsh install` (hopefully that is what is expected) but don't know the appropriate way to get them to talk to MAAS20:58
monokromeI figure that they need to be on a private network, but am not sure how to set one up. They are set up to use PXE.20:59
rbasakmonokrome: there's a ton of work in this area at the moment. Try the cloud installer: http://ubuntu-cloud-installer.readthedocs.org/en/latest/21:06
stokachuhttp://askubuntu.com/questions/144531/how-do-i-install-openstack21:11
stokachurbasak, ^21:11
rbasakmonokrome: stokachu's link should help21:13
stokachuhttp://ubuntu-cloud-installer.readthedocs.org/en/stable/single-installer.guide.html21:14
stokachumonokrome, thats our guide for the cloud installer21:14
stokachumonokrome, http://ubuntu-cloud-installer.readthedocs.org/en/latest/single-installer.guide.html21:15
stokachusorry you want that one instead21:15
qman__I set it up once and found that it's quite difficult and complex, it took me a few days to actually get things working and in the end I found that it didn't suit my needs, and replaced it with a normal KVM setup21:15
qman__I also found that it's changing a lot between versions, so old documentation is usually more harm than good21:18
rbasakThe Ubuntu cloud installer documentation linked above is current, AFAIK.21:19
stokachuour installer guides are also autogenerated on each commit21:20
stokachuso they'll be the most current21:20
qman__Hopefully it's much better now, I used 12.04 when I set it up21:22
rbasakThings have progressed massively in the last two years.21:25
qman__The main reason I decided not to use it was that I needed persistent VMs, and while possible, doing that was awkward and difficult, and regular KVM just made more sense21:29
z1hazeif i were to use a bare metal hypervisor, such as esxi on my server, would i still be able to like install say.. ubuntu or something on it so i can use the server itself as a regular server and just use the esxi to create the vm's? im confused as how that works21:31
z1hazemy host has an install that basically replaces the o/s but they are telling me it's not a full o/s21:31
qman__z1haze: no, the bare metal hypervisor becomes the server's OS21:31
z1hazehow what would you recommend i use then?21:32
qman__z1haze: you then create everything in VMs21:32
z1hazejust like create a large portioned vm for myself?21:32
z1hazei guess i want the functionality of using the hypervisor and have ubuntu.21:33
z1hazei suppose i could just create a vm for myself and install ubuntu on it huh?21:33
qman__Yes21:33
=== markthomas_lunch is now known as markthomas
qman__That is the point of bare metal hypervisors21:33
z1hazeok yea, im really new to this. im sorry21:33
qman__Only the minimum runs non-virtualized21:34
z1hazeok well, what if i want my vm, the one ill be using for myself, to utilize basically whatever portion of the server i want.. how would i configure that so as to not be restricted on cpu or RAM or w/e? can it be like configured dynamically to where it takes whatever it needs?21:34
qman__That way, the hardware layer is abstracted and marginalized21:35
qman__Some hypervisors support dynamic hardware changes but generally you don't do that21:36
qman__The concept is that you create a VM for each role or service you are performing21:36
qman__with appropriate resources assigned to that role21:37
qman__So instead of one bare metal server that does lots of things, you have lots of VMs that do one thing each, sharing hardware21:39
qman__It simplifies upgrades and management, and allows you to reduce downtime when problems arise21:42
qman__For example, I am able to upgrade my VMs to 12.04 and 14.04 one at a time, only taking down one service or role, even though they run on the same hardware21:44
qman__I can also take snapshots and roll back if it fails21:44
qman__Which for my mail server, it did21:45
qman__All the while, my spam filter running as another VM on the same box, kept receiving my mail21:46
=== quix_ is now known as quix
Lunariowhen I ssh into my ubuntu server via ssh -X -t  and then start a program in the terminal, I would like that program to be accessible from other terminal windows created via ssh. I would also like to have access via ssh to particular programs running on my server and open them in the terminal (say an always connected irssi client). How do I do that?21:50
qman__Lunario: not including the X11 forwards, you can use GNU screen21:51
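(For the irssi case, a typical screen workflow — the session name is arbitrary:)

    screen -S irc irssi   # start irssi inside a named screen session
    # detach with Ctrl-a d; irssi keeps running on the server
    screen -r irc         # reattach from any later ssh session
    screen -ls            # list sessions if you forget the name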
fridaynextso i've got files in /etc/init.d/ as well as /etc/default, and i've update-rc.d'd them and chmod +x'd them, but they still don't start at bootup. Ideas why?21:51
WylleyHi everyone21:55
Lunarioqman__: just searched for it and am checking it out, thanks for the hint!21:55
Lunarioseems to be able to do what I want to do, so great :)21:56
Wylleyhaving a weird issue trying to install Ubuntu Server (14) on to a machine that has FakeRAID built in to the motherboard. Installation goes normally, but on reboot, I just get an endless loop of "Incrementally started RAID arrays" and "mdadm: CREATE user root not found", etc.21:56
qman__Wylley: rule of thumb, don't use fakeraid, just turn it off and use mdadm21:57
qman__It will have more features, be more reliable, and be portable21:58
fridaynextand Wylley, if you need a tutorial, I used this one to set up RAID5, and it helped me understand it immensely: http://zackreed.me/articles/38-software-raid-5-in-debian-with-mdadm21:58
fridaynextAlso has great tutorials on SMART drive status, UPS, email notifications, etc.21:58
Wylleyqman__, ok, killing the onboard "raid" controller, going to reinstall.21:59
Wylleyfridaynext, thanks. I'll go check it out.22:00
qman__Wylley: the installer's raid option during disk setup is mdadm in case that wasn't clear22:01
Wylleyqman__, during the install, it says it found drives containing mdadm containers. Do I want to activate these?22:03
qman__No, you want to delete them and start over22:04
Wylleyok, and do I want "entire disk" or "entire disk with lvm"?22:04
qman__I've found that sometimes you can get into a situation where you have to manually zero out the drives otherwise the installer keeps trying to assemble old stuff and never works22:04
qman__You want custom22:05
qman__https://help.ubuntu.com/14.04/serverguide/advanced-installation.html22:07
=== arrrghhh is now known as arrrghhhAWAY
=== TDog_ is now known as TDog
Wylley_qman__ thanks for your help. I think I'm on my way to a working server now. :-)22:14
=== arrrghhhAWAY is now known as arrrghhh
Lunarioqman__: coming back to gnu screen: is it also possible to keep a gtk process started via gnu screen running after detaching from the session?22:43
rbasakLunario: look into xpra to do screen-like things to graphical (X/GTK) programs22:47
Lunariorbasak: thanks, will do22:48
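(A rough xpra equivalent of the screen workflow, assuming xpra is installed on both ends — the display number and program are placeholders:)

    # on the server: start a persistent virtual display with a program inside it
    xpra start :100 --start-child=firefox
    # on the client: attach over ssh; detaching leaves the program running
    xpra attach ssh:user@server:100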
