[05:00] Hello, what is the easiest way to reconfigure the network? I just converted a VDI to VMDK and I was hoping that Ubuntu 14.04 LTS would figure things out during boot time...
[05:06] People are asleep
[05:09] !patience
[05:09] Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/
[05:15] I didn't repeat the question :)
[05:15] Anyway, it doesn't appear there's a way, judging from the debian mailing list
[06:00] lutchy: server stuff doesn't expect many changes; see /etc/network/interfaces for what you need to modify
[06:39] hallyn, yes :)
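For reference, a minimal /etc/network/interfaces along the lines suggested above. This is only a sketch: the interface name (eth0) and the choice of DHCP are assumptions, not details from the discussion.

    # /etc/network/interfaces -- minimal example (eth0 and DHCP are assumptions)
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp

After editing, something like `sudo ifdown eth0 && sudo ifup eth0` (or a reboot) should apply the change.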
[07:33] I have a Mac OS X computer with two Ubuntu Server VMs in VirtualBox (doesn't really matter, but I'm just mentioning it). They are in the same subnet (192.168.56.4, 192.168.56.5) and they both have the same kind of folder tree structure. One is Ubuntu 12.04 LTS and the other 14.04 LTS. How can I mirror my /var/www folder as is?
[07:34] I have read many things about rsync, will it work?
[07:52] freezevee, should do, although I think the structure of /var/www changed a bit with apache 2.4 in 14.04 - worth checking the 14.04 release notes
[07:53] jamespage: thanks, I'll check this out. Is there any significant change?
[07:53] freezevee, nothing major but worth reading the upgrade notes for apache - https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Ubuntu_Server
[07:54] the link to the package upgrade notes is bust
[07:54] * jamespage sighs
[08:04] ummm... 2.4 is very different in regard to configs
[08:05] or at least, different enough that it requires work to get it going.
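A minimal rsync invocation for the mirroring discussed above might look like this. The host address comes from the discussion; the flags, the root login, and the --dry-run habit are suggestions, not something the channel specified.

    # Preview first, then mirror /var/www to the 14.04 box over SSH.
    # Trailing slashes matter: "src/ dst/" copies the contents, not the directory.
    # --delete makes it a true mirror (removes files absent on the source).
    rsync -avz --delete --dry-run /var/www/ root@192.168.56.5:/var/www/
    rsync -avz --delete /var/www/ root@192.168.56.5:/var/www/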
[11:06] jamespage: quick bit of advice on mysql, please?
[11:06] jamespage: there are various bits that I'm looking at all at once, but I'm thinking of doing this more incrementally.
[11:07] jamespage: so my plan is: 1) flip the bits to make 5.6 standalone (without depending on 5.5 binaries), including new mysql-common (with breaks/replaces the old one), etc.
[11:07] rbasak, sure
[11:07] 2) remove those bits on 5.5.
[11:07] For both of these to hit the archive (I'll try a PPA first).
[11:08] Then 3) analyse the Ubuntu delta and add apport, ufw as necessary.
[11:08] 4) virtual-mysql-server etc.
[11:08] This will mean that there will be a period during which Utopic's mysql, though functional, will not be fully featured.
[11:08] How does this sound to you?
[11:09] Or maybe I can stage this all in a PPA first.
[11:09] Are there any other bits I'm missing here?
[11:22] rbasak, I'd probably do the delta first and then transition
[11:26] I'm running 14.04 and I ran apt-get update; apt-get upgrade -- one of the packages it wanted to update was grub. the machine is a software raid1 (with 2 TB x 2), so when it got to grub it asked me if I wanted to keep the local version of /etc/default/grub or overwrite with the maintainer's version. so I chose to keep the local version. but now I'm at this prompt: http://i.imgur.com/HRtBgeK.png - should I check all 3 boxes or just the last one?
[11:26] smoser, care to review https://code.launchpad.net/~james-page/software-properties/juno-support/+merge/228828 ?
[12:24] Hi guys. Any real-time log monitoring app recommendations that run commands on specific log occurrences? For instance, I want to monitor a log file, and send specific log lines to a redis cluster. Or what would be the best way to accomplish this?
[12:25] Is there a command or parameter to ifconfig I can use to see if a network card is being used to packet sniff?
[12:26] I see that dmesg says "eth1 entered promiscuous mode" but I don't see anything in ifconfig
[12:29] progre55: logstash is pretty popular. I'm not sure how well it matches what you want though. And it requires Java, which seems pretty heavyweight to me.
[12:31] rbasak: logstash seems a bit too much for this simple task. I just want to monitor logs and execute a script if a line matches. I could write some bash script myself, but then it wouldn't be as flexible, and would end up with duplicate entries, etc. on a restart or logrotate..
[12:36] progre55: maybe rsyslog has something you can use?
[12:36] netstat -i only shows the P (promiscuous) flag when I use 'ip link set eth0 promisc on', but both 'ip link set eth0 promisc on' and 'tcpdump -i eth0' generate an 'eth0 entered promiscuous mode' in dmesg. What am I missing?
[12:40] Relevant discussion over at redhat https://bugzilla.redhat.com/show_bug.cgi?id=199979
[12:40] toyotapie: Error: Could not parse XML returned by bugzilla.redhat.com: HTTP Error 404: Not Found
[12:42] found the solution. I need to read from /sys/class/net/[net_if]/flags
[12:43] and the definitions for the flags are in ./include/linux/if.h in the kernel source
[12:43] thanks guys :)
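A small sketch of that flags check. IFF_PROMISC is 0x100 in the kernel's if.h, so testing that bit shows whether the interface is sniffing even when ifconfig stays silent; the interface name here is just an example.

    # Report whether eth0 has IFF_PROMISC (0x100) set in its interface flags.
    flags=$(cat /sys/class/net/eth0/flags)
    if (( flags & 0x100 )); then
        echo "eth0 is in promiscuous mode"
    else
        echo "eth0 is not in promiscuous mode"
    fi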
[12:46] rbasak: thanks, I'll have a look at rsyslog
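For the simple match-and-run case progre55 described, a bare-bones shell loop is possible, with exactly the caveat raised above: it does not remember its position across restarts. The log path, pattern, and handler script here are hypothetical.

    # Follow the log across rotations (-F) and run a handler for matching lines.
    tail -F /var/log/myapp.log | while IFS= read -r line; do
        case "$line" in
            *ERROR*) /usr/local/bin/handle-error.sh "$line" ;;
        esac
    done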
[13:14] jamespage, it's fine with me, other than I don't see why you'd drop the CA_ALLOW_CODENAME
[13:15] smoser, ack - I'll tweak that back in
[13:38] Does anyone have any idea what could cause /proc/cpuinfo to report the wrong number of physical cores per CPU?
[13:39] model name : Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
[13:39] cpu cores : 2
[13:39] weeb1e, since /proc/cpuinfo doesn't report physical cores, heh
[13:39] patdk-wk: It does
[13:39] no, it doesn't
[13:39] I've already used `cat /proc/cpuinfo | grep "^cpu cores" | uniq` to measure physical cores
[13:39] On many different boxes in the past
[13:41] so by your definition, a hyperthreaded cpu is a physical core?
[13:42] and then, it gets really fucky on virtual systems
[13:44] this is what one of my systems says
[13:44] model name : Intel(R) Xeon(R) CPU E5345 @ 2.33GHz
[13:44] cpu cores : 2
[13:44] patdk-wk: Virtual cores can be measured with `cat /proc/cpuinfo | grep "^processor" | wc -l`, but since hyperthreading is disabled, that reports 8, which is the correct number of physical cores (2x4)
[13:44] but it's actually running on a X5660 with 6 cores
[13:44] and an E5345 has 4 cores
[13:45] Strange, perhaps that is not actually the command I used in the past
[13:45] you sure you're not running as a vm?
[13:45] virt-what?
[13:51] patdk-wk: It's a bare-metal machine
[13:51] I do not use VMs
[13:51] only thing I can guess then is
[13:51] the BIOS disabled some cores
[13:52] That seems unlikely
[13:52] The host would have had to disable half the cores, instead of hyperthreading
[13:52] depends, it's a BIOS option in every computer I own, at least
[13:53] cores, 1, 2, all
[13:53] Making them hyperthreaded 2-core CPUs, instead of non-hyperthreaded 4-core ones
[13:56] patdk-wk: Using `cat /proc/cpuinfo | grep "^cpu cores" | uniq` on a machine with 1 physical CPU, 4 physical cores + hyperthreading returns 4, which is the correct number of physical cores
[13:57] Doing the same on another box, which has no hyperthreading, 2 CPUs, each with 4 cores, returns 4
[13:57] So I have no idea why this new box is reporting 2 :/
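A more robust way to count cores than grepping "cpu cores" is to count unique (physical id, core id) pairs, since "cpu cores" is a per-package value that firmware can report oddly. This is a general idiom, not something suggested in the channel.

    # Physical cores: one per unique (physical id, core id) pair.
    awk -F': ' '/^physical id/ {p=$2} /^core id/ {print p "/" $2}' /proc/cpuinfo | sort -u | wc -l
    # Logical processors (threads):
    grep -c '^processor' /proc/cpuinfo
    # lscpu (from util-linux) summarizes the same topology in one view.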
[14:10] arrrghhhAWAY: doh! I was off a week. No engineering hangout today.
[15:00] does anybody know how to install ubuntu server 14.04 offline?
[15:00] i keep getting asked to choose a mirror
[15:05] depends on *how* you're installing it
[15:05] how?
[15:06] yes
[15:06] netboot, pxe, iso, dvd, usb, ......
[15:06] i used USB
[15:06] but I've also used DVD in the past too
[15:07] created from what? a full dvd?
[15:07] then you likely can just skip that
[15:07] as it's just looking for updates
[15:07] i downloaded the ubuntu.14.04-amd-64.iso image
[15:07] and created a USB out of that
[15:07] it wouldn't let me skip it
[15:07] i'll try DVD again tomorrow
[15:35] Nivex, ah so it's next week?
[15:35] yeah. I think I may have found some of the holdup too: https://wiki.ubuntu.com/QATeam/ReleaseReports/TrustyPoint1TestingReport
[15:36] I see a lot of -desktop issues... nothing server specific tho?
[15:37] all the way at the bottom. update-manager
[15:39] oh, 1348067 update-manager crashed with TypeError: pulse() takes exactly 1 argument (2 given)
[15:39] that one?
[15:55] heh? it says *fixed/closed*
[15:56] more likely it's 1347721 or 1347964
[17:28] rbasak, smoser: http://people.canonical.com/~jamespage/server-sru/trusty-sru.html
[18:03] how do i disable a service in ubuntu 14 server?
[18:11] ploo_: if it has an upstart job, echo manual >> /etc/init/servicename.override -- see http://upstart.ubuntu.com/cookbook/#override-files for details
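Spelled out with a concrete job name (mysql here is only an example -- use whatever .conf file you see under /etc/init/):

    # Keep an Upstart job from starting at boot, and stop the running instance.
    echo manual | sudo tee /etc/init/mysql.override
    sudo stop mysql
    # Verify the job's state:
    initctl list | grep mysql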
[18:26] hello, can someone explain how to traffic shape between two ubuntu servers which are connected via a gre tunnel? if i shape on the ubuntu2 gre server, the upload from ubuntu1 changes? can someone help?
[18:26] but the upload from ubuntu1 to ubuntu2 remains untouched?
[18:26] *but the download from ubuntu1 to ubuntu2 remains untouched, i mean
[18:28] even if i set up traffic shaping on both sides and shape upload to 512kbit, it's still 4mbit download and 512kbit upload on ubuntu1
[18:28] but i want 512kbit upload and 512kbit download on ubuntu1. how can i get this to work?
[18:37] you can only shape outgoing traffic
[18:38] attempts to shape incoming are a best effort, and can't be reliable
[18:40] patdk-wk yes, but if i enable nat on ubuntu2 then i can
[18:40] why is it not possible without nat?
[18:41] what does nat have to do with anything?
[18:41] I thought we were talking about traffic shaping
[18:42] look, if i just enable iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE on ubuntu2 with the same rules as before, then the upload of ubuntu1 remains untouched but download goes down to 512kbit/s
[18:42] nothing changed except the nat rule
[18:42] there is more than *just* that nat rule
[18:42] but since I can't see any of it
[18:42] I can't comment
[18:42] trust me, i have only added the nat command above and all works
[18:43] oh, I believe you *JUST* added it
[18:43] but as soon as i delete nat, the traffic shaping is reversed to upload instead of download
[18:43] but I don't believe you understand how that interacts with everything else that was already there
[18:43] cause that one line should have HUGE effects on the traffic shaping
[18:43] depending on how you set it up
[18:44] yes, cause the traffic shaping rules you set up are likely *not what you really wanted*
[18:44] but what you think you wanted
[18:44] i just wanted my stupid sdsl line to not get congested
[18:45] because one download destroys latency (600ms)
[18:45] with nat it works perfectly, so certain applications get prioritized, but without it my upload gets prioritized, which shouldn't be possible
[18:45] and even the udp/tcp ports get prioritized correctly?!
[18:46] so patdk-wk, tell me, how would you set up just a simple internet connection without congestion?
[18:46] personally? using shorewall :)
[18:46] i asked in some forums and they told me to use tunnels
[18:46] not by using tc by hand, too much work
[18:47] tunnels?
[18:47] I dunno what you want anymore
[18:47] like vpn, ipsec, gre
[18:47] at first it was gre traffic shaping
[18:47] now it's internet traffic shaping
[18:47] then it's tunnels
[18:47] you should be shaping EVERYTHING
[18:47] look, 2 ubuntu servers are connected via gre tunnels over the internet. ubuntu#1 is my home gateway
[18:48] just shaping that single link between them won't solve it
[18:48] you need to shape everything
[18:48] the default route from ubuntu1 goes through ubuntu2 over the gre tunnel. 1 static route from ubuntu1 to the wan ip of ubuntu2
[18:48] is there a purpose you route everything over the gre tunnel?
[18:48] yes, to control what the sender sends me
[18:48] otherwise i don't have a single control point
[18:49] you will need to set up shaping on both ends
[18:49] With this latest MongoDB merge (the monolithic refactor to charmhelpers) could you run an openstack deploy using the trusty mongodb charm and validate we are indeed g2g for the openstack use cases? My prelim test just validates that the config is written using the public address - as i'm no domain expert in ceilometer, i want to make sure I'm not breaking the charm again.
[18:49] s/you/anyone
[18:49] just set up proper shaping on each side
[18:50] this is my shaping script i have added to each ubuntu server: http://pastebin.com/C4Scq2rz
[18:50] so what's not correct here, can you tell me?
[18:51] instead of these dummy 10.0.x.x addresses, my wan ips are there
[18:52] i route a /29 subnet through ubuntu2
[18:52] I thought you wanted to limit to 4mbit?
[18:53] no, my connection is 4mbit/4mbit
[18:53] so you should set the cap to no more than like 3.6mbit
[18:53] and if i limit or shape on both sides, then on my side it's 4mbit/512kbit
[18:53] the cap is set to 512kbit
[18:53] the cap for what?
[18:53] classid 1:24, sorry, it's 256kbit
[18:53] exactly what did you TEST?
[18:54] for the ip address
[18:54] for my wan ip net
[18:54] how exactly do you test an ip address?
[18:54] normally it requires sending data
[18:54] and you can't send data to an ip address
[18:54] you can send it to a port though
[18:54] tc class add dev eth0 parent 1:2 classid 1:24 htb rate 256kbit prio 2
[18:54] you're making this pretty damned hard
[18:55] iptables -A POSTROUTING -t mangle -s 212.27.84.48/29 -j MARK --set-mark 24
[18:55] why would you do that?
[18:55] and as i said, with nat enabled it's set to 256kbit on that range
[18:55] because i want to use this server for other things
[18:56] as it should be
[18:56] and if i download something, the server won't have enough bandwidth
[18:56] the source of your gre tunnel is in that ip range, right?
[18:56] so yes, it should be limited
[18:56] what did you expect?
[18:56] you told it to limit it, it did
[18:57] the remote address is in this range
[18:57] all those iptables rules need to go
[18:58] you should be shaping all your traffic going ONLY over the gre first
[18:59] if you only shape that, you should be fine
[18:59] you mean replace eth0 with tun0 in this script?
[18:59] well, i have tried that, then no traffic shaping takes place
[19:00] and some people say that you cannot shape on the gre interface, you need to shape on your physical interface
[19:00] odd, maybe a limit of gre, never used it
[19:00] if that is the case, you're screwed
[19:00] cause you won't be able to prioritize traffic over the gre
[19:00] if that is the case, why do you have all those 10.0.0.0?
[19:01] just dummy ips
[19:01] they don't mean anything
[19:01] you will need to shape everything BUT your gre tunnel on eth0
[19:01] in the original script they are removed
[19:01] in fact only the /29 stands there
[19:01] what you need to do is download the wondershaper
[19:02] then edit it, to exclude your gre tunnel from being shaped
[19:02] for your external host
[19:02] for your internal one, I guess you need to shape only the gre
[19:02] or well, the whole internet interface would do it too
[19:02] for internal i can do normal traffic shaping with tc
[19:02] it works well
[19:03] but you lose all control of prioritizing outgoing packets
[19:03] well, that's not true
[19:03] on my side all is behaving normally
[19:08] patdk-wk how can i exclude the gre interface in wondershaper?
[19:08] likely by setting the whitelist or exclude ip
[19:11] well, in wondershaper i have done this: wondershaper eth0 512 512
[19:11] so there is no exclude tun0
[19:12] so what did you mean, should i remove the gre port number from being shaped?
[19:30] patdk-wk even with wondershaper eth0 200 200 on both sides, nothing changes
[19:30] except the upload
[19:30] on my side
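To make patdk-wk's "shape everything BUT your gre tunnel" suggestion concrete, a sketch like the following could replace the pastebinned script. The 3600kbit ceiling follows the "no more than 3.6mbit" advice above; the class rates, the interface name, and the decision to classify GRE by IP protocol number (47) rather than iptables marks are all assumptions.

    # Root HTB qdisc on the physical interface; unclassified traffic -> 1:20.
    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 3600kbit ceil 3600kbit
    # GRE gets priority and may borrow up to the full ceiling...
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 3600kbit prio 1
    # ...everything else shares what is left.
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1600kbit ceil 3600kbit prio 2
    # Match GRE by IP protocol number, so no mangle rules are needed.
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 47 0xff flowid 1:10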
[19:38] hm... anyone here using LSI SAS controllers with supermicro SAS expanders? Seems smartctl can't report much from those drives. This is an LSI SAS 9207-4i4e
[19:58] royk, works for me :)
[20:09] patdk-wk: well, it somewhat works here as well, and -i identifies the drives and so on, and -a gives some data, but not the data I want. See http://paste.ubuntu.com/7907948/
[20:10] sda is directly connected to a local SATA thing. sdc etc. is on the SAS expander
[20:29] royk, you're confused :)
[20:29] you're expecting sata output
[20:29] the output for sdc is *NORMAL* for enterprise disks
[20:30] that is the same output you would get using an FC or SCSI disk that is enterprise rated
[20:33] patdk-wk: erm - how can I get proper output from a sata drive? I have seen that before
[20:34] heh?
[20:34] that's an enterprise disk though
[20:34] it has nothing to do with *proper* output
[20:34] that is what the disk returns
[20:34] enterprise disks use that format
[20:35] they have always been that way
[20:35] at least since I started using enterprise disks, back in 2004
[20:35] you would have to swap out the firmware on the disk to change it
[20:47] patdk-wk: I've used enterprise disks that give proper output too. Also, the sda is an enterprise disk, but not connected to the SAS controller
[21:00] how do i disable a service in ubuntu 14 server?
[21:04] ploo_: see update-rc.d or just remove the service from /etc/rc2.d/
[21:04] ploo_: in /etc/rc2.d/ it's just symlinks for the services to start
[21:04] ploo_: which service is this about?
[21:13] How can I execute a bash script in a new shell and have it not exit the shell when the script exits? I need to do that, so I can open that script in a new named screen session, and have it exit to a bash shell if the process running in screen exits
[21:13] I'm really tired of having to start things manually because I don't know how to do the above
[21:15] weeb1e: that doesn't quite make sense to me. There are two things you can do here I think.
[21:15] hey, i'm having some trouble with upstart. i keep getting 'invalid job class'
[21:15] weeb1e: look into the "source" command (also shortened to .). This runs the commands in the file as if you're typing them in, so when done you'll get a shell that's in the same environment those commands are in.
[21:16] Hi all, I am setting gfxmode=800x600 in /etc/default/grub and grub comes up at that resolution. however, after boot the resolution changes to 752x413 and I am not sure what is causing this.. any ideas?
[21:16] rbasak: I am specifically asking how not to do that
[21:16] weeb1e: or the second option is just to run another shell at the end of your script. Just have "$SHELL" (with the quotes) as a line at the end.
[21:16] Since when you're running a shell script, that's what happens - non-interactively.
[21:17] And the reason I don't want to run another shell is it will lose history, and instead of having to press "up arrow -> enter" to restart the application, I will have to type out the full command line
[21:17] If you then want it interactive, start an interactive shell.
[21:17] rbasak: So I am really hoping for a proper solution
[21:17] I simply want to start a new shell, execute a command as if it was typed into the shell, and leave the shell as is afterwards
[21:18] Maybe that's something that screen can do for you. It's not a feature of the shell itself.
[21:18] (unless it's some newer thing I don't know about, or e.g. zsh or something can do it)
[21:18] I'm very surprised, as this seems like a rather big oversight
[21:19] I think most people arrange things so that it's just not needed. I've never needed it, for example.
[21:19] You don't use screen for programmatically spawned processes then, I guess
[21:19] I require it for my use case
[21:20] No, people generally don't do that.
[21:20] So I guess I will have to continue to manually start each application in a screen session after every reboot
[21:20] They fix the processes to not need to run interactively, so no need for screen.
[21:20] On 10 different boxes
[21:21] Or they wrap the command and add an init job, etc.
[21:21] The processes are under constant development, and although they support multiple remote interfaces for REPL interaction, they also have an interactive console at the command line
[21:21] Interactivity is not the reason I want this configuration, however
[21:21] As I explained, I specifically want to be able to ctrl + c to exit the application, and use "up arrow -> enter" to restart it
[21:22] The inconvenience of having to type out the entire command line manually to restart the applications is far worse than having to manually start the screen sessions after every reboot
[21:22] Make the entire command line shorter.
[21:23] Or wrap the screen invocation, with the automatic "start session if session is not already running" thing.
[21:23] It can only be so short, and it's still an inconvenience
[21:24] That isn't an option, since in the very rare instance that the application crashes, if it exits the screen session completely, the console output would be lost
[21:24] I suppose you could even kludge the history.
[21:24] screen can log the output
[21:24] I am still really surprised bash lacks such a trivial feature
[21:25] Go ahead and suggest it to them, then.
[21:25] I'm surprised that you feel it's trivial, and aren't wondering what you're doing differently that nobody else has faced this problem before.
[21:26] I guess it's pretty specific to screen, since executing the application using most other methods would be wrapped in a shell
[21:27] I could approach the issue from many angles, but I'll probably just continue manually for now and deal with it later
[21:28] I could even automate the TTY used to spawn screen
[21:29] weeb1e: requiring such a setup is considered bad design, so there isn't much in terms of hacks to make it work, other than screen
[21:30] royk, hmm, no, while seagate calls it enterprise, it's not a real enterprise disk, it's nearline
[21:30] I have never had an enterprise disk give me the output that normal ide/sata disks do
[21:30] weeb1e: easy way out here is writing an init script (or just a regular script) that starts your app in screen, so it's easy to stop/start/restart and maintain interactivity
[21:30] but the *sata enterprise* disks I have do though, do that normal sata format
[21:31] but none of my 10krpm or 15krpm disks have EVER produced that output, but always that other *enterprise* format
[21:31] patdk-wk: nearline or enterprise, it's the same thing
[21:31] not really
[21:31] qman: That would mean any console output would be lost if the application was terminated badly
[21:31] they have different hardware in them
[21:32] patdk-wk: what sorts? why would el-cheapo drives report better than expensive drives?
[21:32] weeb1e: you should make it log properly at the app level, but screen can also log
[21:32] royk, no, it's cause of the different firmware
[21:32] on the cheapos :) you have to manage the smart checks and tests yourself
[21:32] on the *enterprise one* it is always doing its own checks and reporting back
[21:32] you don't have to schedule checks
[21:33] it's just the different ways they do the smart stuff
[21:33] qman: It does, but if it was terminated really badly, say due to a system function or something related to the kernel, that won't necessarily be enough
[21:33] For applications that require constant development, there is simply no more convenient setup than running in screen
[21:33] patdk-wk: I've done some tests on those 'enterprise drives' as well, and the ones I've tested worked with smart tests as well
[21:34] yes, you can manually do it
[21:34] and it will ignore or do them
[21:34] but it also does them itself
[21:34] without you asking
[21:34] patdk-wk: it's pretty hairy if an enterprise drive can't report simple smart data
[21:34] patdk-wk: what sort of protocol?
[21:35] patdk-wk: or is that something disclosed, only used for smartass systems?
[21:35] no, it's there in the output
[21:35] but it doesn't give you those values you're used to
[21:36] does recent smartmontools support that?
[21:36] I'm not sure, I haven't played with them in a system that I could use smartmontools on
[21:37] as I normally shove them on a hardware raid card, or a SAN system
[21:39] weeb1e: then make your init script use screen, and tell screen to log
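A sketch of such a wrapper script, following qman's suggestion. The session name, path, and command are hypothetical; the trailing exec bash is what keeps the window open on an interactive shell after the app exits, which is the behavior weeb1e asked for earlier.

    #!/bin/bash
    # Start the app detached in a named, logging screen session (-L writes
    # screenlog.0 in the working directory). When the app exits or crashes,
    # exec bash leaves a shell in the window instead of closing the session.
    screen -dmS procmgr -L bash -c \
        'cd /home/admin/process_manager && ruby process_manager.rb; exec bash'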
[21:47] hey all, i'm looking for a hand with some upstart issues
[21:48] what are the valid job types?
[21:48] i keep getting an 'invalid job type' error
[21:49] initctl: Invalid job class:
[21:49] qman: I ended up just automating the PTY to spawn the application in screen, the same way I have been using it manually for years
[21:51] It gave me something different to do anyway. I've always wanted to automate a PTY app, but never had a good enough use case
[22:17] Has the sudoers file format changed since 11.10?
[22:20] weeb1e: I don't see anything in the /usr/share/doc/sudo/{NEWS*,changelog*} files suggesting anything big changed; a ton of little things, but nothing along the lines of "new format for sudoers"
[22:21] I'll do a little more testing, but I seem to be unable to use NOPASSWD on a group or user without breaking the file's parsing
[22:22] Even using a Cmnd_Alias does not work :(
[22:24] This line fails to parse, with or without quotes around the second command:
[22:24] Cmnd_Alias PASSWORDLESS_CMDS = /usr/sbin/iftop, ruby /home/admin/process_manager/process_manager.rb
[22:24] weeb1e: do you get any error messages from visudo?
[22:25] sarnold: visudo? That sounds visual, and this is a server
[22:25] weeb1e: in the same way that 'vi' is the VIsual editor, compared to ed :)
[22:25] After adding that line and attempting to use sudo, it says: >>> /etc/sudoers: syntax error near line 18 <<< / sudo: parse error in /etc/sudoers near line 18 / sudo: no valid sudoers sources found, quitting / sudo: unable to initialize policy plugin
[22:27] sarnold: Well I tried visudo, and that definitely looks like a safer way than using a second shell to restore the file, but it gives the same line 18 error :P
[22:28] weeb1e: darn. I hoped it would be more verbose about what went wrong :(
[22:28] weeb1e: the best part of visudo is that it prevents you from locking yourself out of sudo access by accident
[22:28] Yeah, if my SSH session was lost before I manually restored the file, I would get locked out
[22:28] So I will be using visudo in future
[22:29] Although, in future, this updating will be scripted
[22:29] weeb1e: try giving the full path to ruby?
[22:29] That is, as soon as somebody can tell me what is wrong with that above line
[22:30] sarnold: Surely that shouldn't be necessary?
[22:30] Even ruby scripts use /bin/env instead of an absolute ruby path
[22:31] Systems often have multiple ruby installations
[22:31] Although, it would actually be very unsafe without an absolute path
[22:32] weeb1e: I fear the sudoers file :) more specific is probably better...
[22:33] sarnold: That does indeed solve the issue
[22:33] And fear of it is good
[22:33] But it is also sometimes necessary
[22:33] welcome to the cult :)
[22:33] Especially if you use iftop
[22:33] Which I believe every server admin should be using
[22:34] More specifically iftop -B :P
[22:34] hehe, iftop++ -- the things it finds..
[22:35] First thing I do when connecting to any box is start htop and iftop -B, then use a third tab for actually doing anything
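Putting the fix together: the parse error went away once every command in the alias was an absolute path, which sudoers requires. A corrected pair of entries might look like this; the username and the /usr/bin/ruby path are assumptions, and edits should go through visudo as discussed.

    # Every command in a Cmnd_Alias must be a fully qualified path.
    Cmnd_Alias PASSWORDLESS_CMDS = /usr/sbin/iftop, /usr/bin/ruby /home/admin/process_manager/process_manager.rb
    admin ALL=(root) NOPASSWD: PASSWORDLESS_CMDS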
[22:49] so, trying btrfs... and I'm getting some weird metrics on df. so I heard running a newer kernel is a good idea for btrfs... so in my haste waiting for 14.04, I just forced that kernel to install - and now the btrfs drive won't mount
[22:49] this is what dmesg says
[22:49] http://hastebin.com/vugepogoci.sm
[22:52] arrrghhh: zfs sometimes gives non-intuitive results to 'df' as well -- compression, deduplication, etc. all kinda make ls vs du vs df give different results than people might be used to
[22:55] So this is rather odd: using that command added to sudoers manually works fine, but when run inside bash inside screen, it asks for a password
[22:55] well, available made sense...
[22:55] but used / free did not. 345 avail, 207 used, 18 free - doesn't make sense
[22:59] sarnold, but still, how do I mount the drive now? I've never seen an error like this where
[22:59] 'mount' fails because there's not enough space ^^
[22:59] arrrghhh: sorry, no idea there, I haven't spent any time looking at btrfs
[23:00] Not sure what sudo depends on in order to check users against the sudoers file
[23:00] But the only thing I can think of is when screen starts bash, it must remove some environment variable that sudo depends on
[23:07] sarnold, ok. I started using it because I thought I could build a RAID 0 "array" which could be built with different sized disks
[23:08] seemed to work, but maybe not... heh
[23:12] I managed to mount it ro
[23:12] but not sure what happened exactly...
[23:18] hm. there's a lot I don't understand about btrfs. :D
[23:19] arrrghhh: that's ok, it hates you too =)))))))) *giggle*
[23:19] arrrghhh: different sized disks + btrfs as raid0 is not that good, as it would constantly be rebalancing and deduplicating.
[23:20] arrrghhh: LVM2 would have been a better choice.
[23:21] hm. I didn't need to set these up as RAID 0
[23:21] t'was just convenient. I s'pose I'll blow them up and go back to ext4
[23:22] arrrghhh: well, LVM2 gives you a pool of storage, from which you can create volumes of any size and resize/snapshot them etc. Or just have one spanning the whole amount.
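A minimal sketch of that LVM workflow with two different-sized disks. The device names, volume names, and sizes are hypothetical.

    # Pool two disks into one volume group, then carve out a logical volume.
    pvcreate /dev/sdb /dev/sdc
    vgcreate pool0 /dev/sdb /dev/sdc
    lvcreate -n build -L 100G pool0    # or -l 100%FREE to span the whole pool
    mkfs.ext4 /dev/pool0/build
    # Grow later if needed: lvextend --resizefs -L +50G /dev/pool0/build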
[23:23] ya, I have my 4tb disk using lvm
[23:23] arrrghhh: with btrfs, there is no way to control pools much, thus all drives with btrfs end up in one =/ (unlike lvm2, or e.g. ext4), and then hilarious properties start to dawn on you with time. E.g. with btrfs, your IO throughput and performance will degrade with time, and with fullness.
[23:23] I'm just not sure that would be the best for 2 SSDs of different size
[23:23] arrrghhh: using them as dm-cache for even bigger spinny drives is the best =)
[23:24] I use the SSDs to try and decrease build times
[23:24] arrrghhh: well, I bought 32GB of RAM and build everything in tmpfs =) that does decrease build times.
[23:24] arrrghhh: plus I have fsync disabled and eatmydata enabled.
[23:25] hah, if only RAM was cheap. I only have 16gb
[23:25] arrrghhh: and builds on btrfs would be slower than xfs or ext4.
[23:25] I can't figure this one out... If anyone knows why running sudo inside GNU Screen stops the sudoers file from being used, please let me know
[23:25] especially a lot of throwaway builds and/or ccache
[23:25] weeb1e, I use sudo inside of screen all the time... no problems
[23:26] arrrghhh: I take it you do not use it in conjunction with a NOPASSWD entry in the sudoers file
[23:27] The NOPASSWD sudoers entry works fine from a shell under my user, even if I start a new shell with `bash`, but as soon as I start screen from my user, using sudo in that bash shell asks for a password
[23:29] oh, yes, I want sudo to prompt for a password
[23:29] arrrghhh: NOPASSWD is generally used when you want to run specific binaries with sudo without a password
[23:35] hi all. small question. how do you make it possible to have two cascaded redundant web servers, where the second webserver on machine 2 takes over when the first webserver on machine 1 shuts down? I have read round robin is no solution for such a case because it does not provide any cascade!
[23:39] xperia: I'd recommend using a third server which will always be up as an nginx reverse proxy, which will monitor each of your upstream servers and route requests to one which is available
[23:42] weeb1e: thanks a lot for your reply. highly appreciate it. is there no other solution, like having some heartbeat exchange between the two servers for monitoring and a takeover in case of an emergency?
[23:49] I'm new to ubuntu server, and with that being said, I'm wanting to set up a server to store all my files and also act as a backup server. Would I need to set up a database server or a fileserver?
[23:50] depends
[23:50] maybe both
[23:50] but if you're still looking at the install screen, probably none
[23:51] but generally, storing all your files + backups is not a good idea
[23:51] as it can't safely store the backups of the files it's storing
[23:52] xperia: Are the two servers at different physical locations?
[23:52] If so, a reverse proxy would not be the correct solution
[23:52] backups are for cowards - real men trust their drives
[23:53] weeb1e: yeah, they are at totally different physical locations all over the world, with different ip addresses also.
[23:53] Patrickdk, well, for starting out, I'll have 2 hard drives in raid 1 for now
[23:53] xperia, there is no solution to that, other than anycasted ip space
[23:53] real men run 50-drive systems in raid0
[23:53] what does having a raid1 have to do with this?
[23:53] xperia: Some kind of heartbeat-based monitoring could work then, although you will likely get false positives from time to time due to routes between the servers going down
[23:53] raid1 doesn't help provide fileservers or backups
[23:54] it just provides space
[23:55] xperia: What most people (including google) do in that situation is use a round robin for the different IPs in your DNS, set the TTL very low (not more than a few minutes), monitor the servers, and when one goes down, script removal of that IP from the DNS
[23:55] If you use a very low TTL, the DNS update will propagate quickly
[23:56] that assumes, though, that your website is sessionless
[23:56] If your session is cookie based, there is no issue
[23:57] If the session is based on local server storage, that is more complex
[23:57] if all the session data is inside the cookie
[23:57] weeb1e: ahh yeah, I have seen this behavior on the google side. I have already tried round robin and it works very well. the thing is that it balances the traffic instead of cascading it. but yeah, I guess manipulating the dns entry in case of an emergency is the best solution.
[23:57] Session-based cookies are very popular; it's easy to encode and store the user's id in the cookie
[23:58] xperia: You can do the same thing without a round robin if you want
[23:58] It'll just maximize any downtime
[23:58] You just leave only the first server as the DNS A record, and swap it with the other one if the first server goes down
[23:58] It's the only solution in this case
[23:59] if you need to balance traffic better
[23:59] you can always use the same ip address multiple times
[23:59] Unless you want to have a third PC which loads an initial page and redirects to an available server
[23:59] That is also a possibility, if redirects are possible
[23:59] you don't need a 3rd pc for that :)
[23:59] both of the existing ones can do that
[23:59] Not if one could go down...
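A sketch of the low-TTL DNS failover described above: a health check plus a scripted record swap. Everything here is an assumption -- the hostnames, addresses, the TSIG key file, and the choice of nsupdate (which requires a DNS server that accepts dynamic updates, e.g. BIND) are illustrative, since the channel only outlined the technique.

    #!/bin/bash
    # If the primary stops answering HTTP, point www at the standby instead.
    PRIMARY=203.0.113.10
    STANDBY=203.0.113.20
    if ! curl -fsS --max-time 5 "http://$PRIMARY/" >/dev/null; then
        nsupdate -k /etc/failover.key <<EOF
    server ns1.example.com
    update delete www.example.com. A $PRIMARY
    update add www.example.com. 120 A $STANDBY
    send
    EOF
    fi

Note the 120-second TTL on the replacement record, matching the "very low TTL" advice above.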