[00:11] hello
[00:12] hello
[00:12] xczxczx
[00:13] hello
[00:14] i have a question about installing ubuntu-server 64bit on vmware
[00:15] is there anyone willing to help me ?
[00:21] !anyone
[00:21] A high percentage of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? See also !details, !gq, and !poll.
[00:21] mares: vmware what, esxi?
[00:27] twb: hey, i downloaded ubuntu-server 64bit and tried to install it on vmware
[00:28] twb: but when i restart the machine, a black screen pops up with no options !?
[00:28] mares: vmware is a company, not a program.
[00:28] mares: what vmware product are you using?
[00:29] twb: sry, vmware workstation
[00:30] i just want to set up a lamp server to practice with php, mysql etc.
[00:30] Does that product normally give you any (emulated) BIOS boot screens?
[00:30] twb: yes
[00:31] Do you see those?
[00:31] yep
[00:31] And it goes black immediately after that? You don't see *anything* from the install CD?
[00:32] i installed it, but when i restart my machine it goes black
[00:33] i installed it from the iso image downloaded from the ubuntu site
[00:33] OK, so after the install, you reboot, and see the BIOS prompts again? And immediately after the BIOS part, it goes black?
[00:34] yea
[00:34] This is 10.04
[00:34] ?
[00:34] 11
[00:34] latest version
[00:35] is there any guide that i can follow on installing a virtual ubuntu server
[00:35] I'm not sure what's happening, but I would guess that it's either switching to the wrong VT (in which case, try Ctrl+Alt+F1), or the splash crap is doing the wrong thing, or it's switching to a video mode that confuses vmware-workstation.
[00:36] Also try hitting Escape once you hit the black screen
[00:36] ill try that, thanks
[00:36] Try booting a live CD, and turning off vga/vesa/splash-related stuff in grub's config
[00:36] Try installing 10.04 instead of 11.04.
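The "turning off vga/vesa/splash-related stuff in grub's config" suggestion above can be sketched as follows. This is a hedged example assuming GRUB 2 (standard on Ubuntu since 9.10) and its usual config file path; `nomodeset` is one commonly tried kernel option for video-mode problems, not something prescribed in the discussion:

```shell
# Sketch: replace "quiet splash" with "nomodeset" in GRUB 2's defaults
# (assumes /etc/default/grub, the usual location on Ubuntu).
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"/' /etc/default/grub

# Regenerate /boot/grub/grub.cfg so the change takes effect on reboot.
sudo update-grub
```

Booting once with the option edited interactively at the GRUB menu (press `e` on the boot entry) is a less invasive way to test the same thing before making it permanent.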
[00:37] ok
[00:37] Also try using kvm instead of vmware crap :)
[00:40] hehe, i installed vm virtualbox and the same happens
[00:40] i went through the installation wizard and all that stuff
[00:40] and when i restart, black screen :P
[00:41] oracle virtualbox is also proprietary crap
[00:41] lol
[00:41] so u suggest kvm ?
[00:41] Yes.
[00:41] ok, lets try it, thanks!
[01:05] hmm, is php-fpm not available directly from ubuntu?
[01:05] aka, is a third party ppa required?
[01:05] php5-fpm
[01:06] no diff
[01:08] im following a third party tutorial and they never mentioned loading an extra ppa. so thats why im wondering
[01:09] i loaded the brianmercer ppa, but then noticed that after i got php5-fpm installed and nginx, then tried to install php-apc, it tries to install apache as well, which i obviously dont need
[01:10] MACscr: you are wrong. php5-fpm is in Ubuntu.
[01:10] Enable the "universe" component.
[01:11] http://paste.debian.net/122267/
[01:14] i dont see lucid mentioned there
[01:15] Ah, sorry
[01:15] so maybe its not available for LTS?
[01:15] Looks like it was removed for a while, then came back in oneiric
[01:15] which is what i would think most server users would be using
[01:15] Probably because it had release-critical bugs when lucid was released
[01:15] So yeah, you will have to do a PPA or something
[01:16] Or maybe PEAR, dunno if that's a good idea on Ubuntu
[01:16] ok, so how about installing php-apc without it wanting to do apache stuff?
[01:18] MACscr: as in, you "apt-get install php-apc" and it pulls in apache?
[01:18] That's because php-apc depends on phpapi-20090626, which is a virtual package provided by apache, php5-fpm, php5-cgi and php5-cli
[01:19] Ah, php5-fpm is a binary package built from the php5 source package. So as to why it is absent, you will have to look at /usr/share/doc/php5-fpm/Debian.changelog.gz
[01:21] Looks like FPM was turned off in 5.3.3-2 and reenabled in http://bugs.debian.org/603174 (5.3.5-1).
[01:28] hmm, so i need to be running maverick or newer?
[01:40] Sorry, I have work to do
=== medberry is now known as med_out
[02:51] New bug: #807324 in bind9 (main) "BIND 9.7.0 (ie., lucid) is overly strict on authoritative responses missing the "aa" flag" [Undecided,New] https://launchpad.net/bugs/807324
[04:21] how can i change the name of my partitions from xvda to sda, etc? its a xen guest, but im using a premade image and i want to change it so that its using the same naming scheme as the rest of my guests
[04:22] its a xen guest btw. i know how to change it within grub and fstab, and with the guest.cfg, but im not 100% sure where else it needs to be changed
[04:25] MACscr: well the bootloader/initrd is probably going by UUID, so you only need to edit /etc/fstab
[04:26] i dont think it is, because it didnt boot when i tried that. got to busybox and i did: cat /proc/partitions and it still showed xvda
[04:29] Well, grub is exceptionally stupid
[04:29] MACscr: OK, what *is* in partitions?
[04:34] twb: http://pastebin.com/ahY0MEQQ
[04:34] Guess you want xvda1 then
[04:35] right, but im trying to change it to sda1 and so on =P
[04:35] Uh, what?
[05:08] What channel would be good to ask questions about SATA vs. e-SATA connectors? (I have an allegedly-e-SATA cardbus card that has SATA connectors, and the same for a drive. I also have cables that have a SATA connector on one end and e-SATA on the other.)
[05:11] uhm
[05:12] here, or perhaps google around for FAQs about sata?
[05:16] serge_: lxc-start looks like C; I'll just do the shell script for now ;)
[05:21] Well, my question's above. WTF is up with an external SATA card and external SATA drive both using internal connectors?
[05:21] cjs: the main reason esata has different connectors is because the cable needs to work outside the case's shielding
[05:22] cjs: there's no real reason internal sata cabling won't work externally, although I admit it's weird and dumb to ship gear that way
[05:23] I understand that.
The cables are driven at higher voltage, better shielded, and have stronger connectors that are rated for more insertions/removals.
[05:23] Well, I'd wonder about RF issues if using internal cables externally.
[05:23] But this silly PC-Card says "e-SATA" right on it.
[05:24] Perhaps they're using SATA connectors due to space issues (the limited height on the edge of the card) but it's otherwise an e-SATA interface?
[05:26] ahem, how do you get bash to run a command for you?
[05:26] e.g. "bash -e ls", but that seems to spit out some weird error
[05:26] Did you want "bash -c ls"?
[05:27] Oh, I see: "bash -e -c ls".
[05:27] thanks
[05:32] cjs: I think he expected bash -e to be like perl/sed -e
[05:37] rurufufuss: -e means exit on any untested error. E.g., "false" will exit, but "if false; then true; fi" will not.
[05:38] ah, I see
[05:38] yeah I thought -e is execute
[05:38] rurufufuss: that's called -c in bash
[05:38] cjs: technically both will EXIT
[05:39] But cf. bash -xec 'false;pwd' vs. bash -xec ':;pwd'
[05:40] Of course I meant, "exit immediately after executing the failing command."
[07:08] serge_: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/807351
[07:08] Launchpad bug 807351 in lxc "it would be cool to be able to clone an lxc container onto aufs for test runs - ephemeral containers" [Undecided,New]
[07:35] why would ipv6 addresses be showing in ifconfig if they arent listed in /etc/network/interfaces?
[07:35] i definitely dont have any type of dhcp going either
[07:45] you should disable ipv6 on ur system
[08:07] grr, i had a system running, installed linux-server (so i could switch from generic to server for the kernel), ran grub-update, rebooted. now it says the disk im trying to load doesnt exist. its trying it using whatever UUID it created.
So in grub, i tried doing root=/dev/xvda instead, but same error
[08:07] just seems odd that the UUID would be wrong if thats obviously what was automatically generated because thats what it had found
[08:08] but either way, the /dev/xvda should have worked
[08:08] now neither kernel will load
[08:17] jamespage, about bug 791454, you think the test case is wrong ?
[08:17] Launchpad bug 791454 in mdadm "RAID1 Test Failed: Device need to be readded manually" [High,Opinion] https://launchpad.net/bugs/791454
[08:39] jibel: well it might have been right once - but its not for natty or oneiric
[08:39] I have not had time to test maverick/lucid
[08:40] its kind of an odd test to do anyway
[08:40] if you had an actual drive failure and had to replace it, then automated recovery would not be an option
[08:40] as you would have to create the partition table first and then re-assemble the array
[08:44] jibel: I was thinking the same as jamespage.
[08:47] Daviey, jamespage: I tend to agree. Could someone from the server team update the test case with the expected behavior then ?
[08:49] jibel: is this step 16 or 17?
[08:51] Daviey, 16: "There should be no need to add any missing devices back to the RAIDs manually. Otherwise, there is a bug!"
[08:53] Ah!
[08:54] This isn't testing inserting a new disk.. but if a disk gets disconnected, reconnected - does mdadm rebuild it without requiring input
=== tobias is now known as Guest19514
[08:56] hi, would appreciate help with updating midnight commander
[08:57] i'm running 10.04 LTS, and just ran "sudo apt-get install mc"
[08:57] however, this resulted in version 4.7.0 of mc, while their website says that 4.7.5 is stable
[08:57] how do I get that new version?
=== mendel__ is now known as mendel_
[09:02] Guest19514: looks like the debian maintainer hasn't updated the package yet. You should probably file a bug against it in debian at bugs.debian.org
[09:05] hey SpamapS !
[09:39] Daviey: hello!
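Returning to the bash -e/-c exchange earlier in the log, a minimal demonstration (assumed stock bash behavior, nothing specific to this channel): -c takes a command string, while -e makes the shell exit as soon as an untested command fails.

```shell
# -c runs a command string; without -e, a failing command is ignored:
bash -c 'false; echo reached'        # prints "reached"

# with -e, the shell exits at the first untested failure:
bash -e -c 'false; echo reached'     # prints nothing, exit status 1

# a *tested* failure (e.g. the condition of an "if") does not trigger -e,
# which is the "false" vs "if false; then true; fi" distinction above:
bash -e -c 'if false; then :; fi; echo reached'   # prints "reached"
```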
[11:56] hi, we noticed that one of our web servers started to appear in public proxy lists. previously we received a huge amount of traffic and disabled foreign ip blocks via iptables. obviously these two are related. any advice for preventing this from happening again?
[11:57] if that is all you did, you still have the proxy issue
[11:57] fix the proxy issue?
[11:57] patdk-wk, i suppose so
[11:58] that can either be a webserver config issue, or a security hole in a webcgi
[11:58] no we haven't fixed it yet. iptables was just a temporary solution.
[11:58] and frankly, i have no idea about the solution
[11:59] I don't blame you there, cause we don't know where the issue is yet, other than that there is one
[12:00] this is where looking at your log files, during that massive traffic usage, normally helps
[12:00] i think probably something is wrong with our apache mod_proxy configuration, but i'm not sure what it is.
[12:01] the log files showed outgoing http requests from a lot of different ips
[12:10] good morning
[12:30] Ursinha: bom dia [Portuguese: good morning]
[12:31] что? [Russian: what?]
[12:31] lynxman: buenos dias :) [Spanish: good morning]
[12:31] jpds: kak dela [Russian: how are things?]
[12:32] Ursinha: все мне очень хорошо, и у тебя? [Russian: everything's great with me, and you?]
[12:36] I understand, but don't know how to reply
[12:36] hahahaha
[12:36] jpds: are you fluent in Russian?
[12:37] Ursinha: he is by now
[12:38] haha
[12:57] zul: hey, you think you can get to my package today? :)
[13:09] lynxman: yes hopefully, but there are other people who can review it as well
[13:10] zul: that's why I'm asking, don't want to stress you ;)
[13:10] lynxman: im not stressed... just busy
[13:11] zul: np then :)
[13:19] whats a good way to upgrade ubuntu server from 10.04 to 11.04 ?
[13:20] going from 10.04 -> 10.10 -> 11.04
[13:31] hello
[13:31] after an error on /dev/sda1 i rebooted my system with a livecd
[13:31] where i have done
[13:32] fsck.ext3 /dev/sda1
[13:32] is it the correct way ?
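For the open-proxy problem discussed above (a web server turning up in public proxy lists), the usual Apache-side culprit is mod_proxy accepting forward-proxy requests from anyone. A hedged config sketch, assuming Apache 2.2 with mod_proxy enabled; the file path is illustrative and exact locations vary by setup:

```apache
# e.g. /etc/apache2/mods-available/proxy.conf (illustrative path)
# Forward proxying must stay off unless explicitly needed.
# Reverse proxying via ProxyPass/ProxyPassReverse works regardless
# of this setting, so disabling it does not break a reverse proxy.
ProxyRequests Off

<Proxy *>
    Order deny,allow
    Deny from all
</Proxy>
```

If forward proxying is genuinely needed, the `<Proxy>` block should allow only trusted source addresses instead of denying all.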
[13:35] after an error on /dev/sda1 i rebooted my system with a livecd
[13:36] hmm, I just tried sudo do-release-upgrade
[13:37] but i'm getting - already at the latest version
[13:42] anyone ?
[13:42] New bug: #807534 in exim4 (main) "package exim4-base 4.74-1ubuntu1.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/807534
[13:43] I have server 10.04 and when I do sudo do-release-upgrade it tells me already at the latest version
[13:43] any idea what I'm doing wrong here ?
=== med_out is now known as medberry2
[13:59] orudie, no newer lts release is out yet; the next will be 12.04
=== NG_ is now known as ng_
[15:05] Daviey: hey, in an hour or so, can I get you to sponsor a cgroup package for me?
[15:05] serge_: sure thing!
[15:05] Daviey: thx
[15:05] serge_: fixing the init script? :)
[15:06] stgraber: yeah, rolling that in with one other fix
[15:07] cool
[15:07] it was an idiotic snafu on my part
[15:07] I also need to upload a new arkose today as libcgroup broke it :)
[15:07] now i just want to make sure i didn't accidentally break something :)
[15:07] how did it do that?
[15:07] i'm looking for some time to try out arkose
[15:07] i want to start running everything under it :)
[15:07] especially once you integrate apparmor (you said you were doing that right?)
[15:08] arkose used to mount a cgroup filesystem just before calling lxc-execute. That used to work quite well, but now the mount call fails as cgroup is already mounted ;)
[15:08] ah, so what really broke it was lxc now recommending cgroup-bin?
[15:08] yep, the idea is to get apparmor support into it and just use apparmor or lxc or both depending on what the profile describes
[15:08] yeah, arkose depends on lxc, which recommends cgroup-bin :)
[15:09] anyway I made this specific mount() call optional, so if it fails it'll just continue and use whatever cgroup fs already exists
[15:09] just need to release 1.2.2 with that fix and upload it to the archive
[15:13] serge_: Are you able to assist with bug 776103?
[15:13] Launchpad bug 776103 in open-vm-tools "package open-vm-dkms 2011.03.28-387002-0ubuntu2 fails to build against 2.6.39 kernels, due to missing linux/smp_lock.h" [High,In progress] https://launchpad.net/bugs/776103
[15:16] Daviey: i was sort of hoping to get some time on user namespaces after finishing with all this qemu, libvirt, lxc, and cgroup stuff :)
[15:16] Daviey: this is in multiverse? or universe?
[15:17] serge_: open-vm-tools, is this not main?
[15:17] Daviey: the libcgroup package is at "dget http://people.canonical.com/~serge/libcgroup_0.37.1-1ubuntu3-package/libcgroup_0.37.1-1ubuntu3.dsc"
[15:17] Daviey: no it's not, lemme check
[15:17] Daviey: multiverse
[15:17] bah
[15:18] right, we need to decide whether to move it up to at least universe, bc it's taking up a lot of time
[15:18] serge_: The assignee is looking for assistance in solving the ftbfs. If it's not too time intensive, would you be able to help?
[15:18] does it not have a maintainer?
[15:18] sure
[15:18] serge_: not urgent for *today*.. but you made yourself the team expert in open-vm-tools :)
[15:18] who is that guy who has been posting all the patches? does he care to be its maintainer?
[15:19] serge_: shrug.
[15:19] i know almost nothing about open-vm-tools, i looked at the last one bc i'm comfortable with kernel stuff
[15:19] all right, your sponsoring of libcgroup squashes two bugs, i'll go look at open-vm-tools :)
[15:20] Daviey: on bug 791850, it looks like a deadlock.
I spent two hours yesterday with Amazon taking a look at it. The kernel initializes the CPUs and then just sits and spins with high CPU.
[15:20] Launchpad bug 791850 in linux "oneiric cluster compute instances do not boot" [Undecided,Confirmed] https://launchpad.net/bugs/791850
[15:20] Daviey: oh, ok - at least it's on oneiric. i was afraid this was against natty with a newer kernel or something bogus
[15:20] wonder if nmuench ever hangs out on irc
=== medberry2 is now known as medberry
[15:25] Daviey: what's the role of the person that attends the release meeting for each team?
[15:26] Ursinha: traditionally it's been the tech lead, but there is no reason it has to be that.
[15:26] zul has also taken on the burden of driving it previously as well.
[15:26] right
[15:26] trying to understand the teams and who is who
[15:27] right i did..
[15:27] 6
[15:27] argh
=== mconigliaro_ is now known as mconigliaro
[16:36] Hey o/, Got something interesting to talk about in Ubuntu cloud days? → Please add a session to https://wiki.ubuntu.com/UbuntuCloudDays/Timetable .. Thanks
=== ng_ is now known as NG_
=== micahg_ is now known as micahg
[18:02] Folks, I asked some USB 3.0 questions on my local lug mailing list but got only one response saying I should ask here. I've purchased a few 3 TB external USB 3.0 hard drives to use as backup devices on ubuntu server 10.04 x86_64. I am currently using them as USB 2.0 because I have no hardware with 3.0 ports. I'd sure like to speed this up. This brings me to a couple of questions before I purchase any cards. Is this a good place to ask?
[18:11] ColoBill: for where to get hardware?
[18:13] ColoBill: what do you need to know? if there are drivers available or not?
[18:14] btw, personally I'd recommend setting up a backup server instead of using USB-connected drives, but that's up to you
[18:16] New bug: #807649 in nagios3 (main) "package nagios3-common 3.2.3-1ubuntu1.2 failed to install/upgrade: the installed post-installation script subprocess returned an error exit status 1" [Undecided,New] https://launchpad.net/bugs/807649
[18:29] Roy, the drives are to be put on the backup server to take another copy offsite
[18:30] Q1: I believe USB 3.0 is supported in recent kernels. I am going to put the card in a box running Ubuntu Server 10.04 x86_64. It should be fine, right?
[18:31] Q2: I just found one 2-port card on newegg.com for $30 and then went to the manufacturer's website to read the specs. Although USB 3.0 speeds can be up to 10x USB 2.0 speeds, they are honest enough to say that with their card you will only get up to 2x USB 2.0. Is this a function of the card, PCI or both? Can I find better that will work?
[18:31] Q3: Has anybody out there done this, and do you have card suggestions?
[18:37] ColoBill: 10.04 drivers haven't been updated in a while, so you may need to backport drivers or use a newer distro - try first or get the PCI ID of the card to verify
[18:42] RoyK, good idea. I didn't even think of that.
[19:03] zul: ping
[19:03] adam_g: whats up?
[19:04] zul: are those lio-utils packages available anywhere?
[19:04] adam_g: they are still sitting in binary NEW; i can upload them to a ppa
[19:05] zul: if you could that'd be sweet, i'd like to test. i haven't touched lio in a while and looked at the utils earlier this week but couldnt get them to work with a recent kernel
[19:07] adam_g: what went wrong?
[19:08] zul: it wasn't working with whatever's changed in lio's use of sysfs.
[19:08] adam_g: interesting
[19:09] zul: what version of the utils did you package?
[19:09] they should be in ppa:zulcss/ppa in a bit
[19:10] cool
[19:10] thanks
[19:26] New bug: #807675 in augeas (main) "please port 0.8.1 for Natty" [Undecided,New] https://launchpad.net/bugs/807675
=== dannf is now known as dannf-lunch
=== utlemming is now known as utlemming_lunch
=== erichammond1 is now known as erichammond
[21:17] Daviey: did you ever push libcgroup?
[21:17] (not seeing it in rmadison)
[21:20] smoser: ping
[21:22] RoAkSoAx: do you mind sponsoring http://people.canonical.com/~serge/libcgroup_0.37.1-1ubuntu3-package/libcgroup_0.37.1-1ubuntu3.dsc ?
[21:23] sure thing
[21:28] Hello, I'm having problems with my ubuntu dedicated server: the IPs of my VPSs are not visible from outside; only nmap x.x.x.x -PN shows me that the server is up. So I guess there is a firewall in between. I removed ufw and bastille, and only iptables is running, but it seems to be open: http://pastebin.com/szVgpb5P
[21:29] How can I find another firewall that is blocking my IP?
[21:29] I'd appreciate your help so much
[21:29] ask the provider
[21:29] RoyK would the provider block all ports of my IP subnet?
[21:30] give me the IP/subnet and I'll run a scan if you like ;)
[21:31] RoyK thanks! but I'd like to learn. Is there a command to find out at which level the firewall is active?
[21:32] serge_: I get this patch: debian-changes-0.37.1-1ubuntu1 http://paste.ubuntu.com/640383/
[21:32] xamanu: what happens if you nmap -sT -O x.x.x.1-254 ?
[21:33] substitute 1-254 with your range
[21:34] serge_: which comes from an upload to natty
[21:34] RoyK Nmap done: 14 IP addresses (0 hosts up) scanned in 12.28 seconds
[21:35] serge_: is that intended or something created by quilt :)
[21:35] xamanu: ask the provider - if you haven't set up a firewall yourself, and ufw is set to allow ICMP, the machine(s) should be visible
=== utlemming_lunch is now known as utlemming
[21:36] RoyK ok thank you. I'll do that.
I have set up the firewall myself, but now opened up everything for testing and couldn't find anything else
[21:37] xamanu: most providers have a firewall protecting things - I have asked my provider to allow everything through so that I can use ufw to control it myself
[21:38] RoAkSoAx: i'm not sure. i don't remember why that showed up
[21:39] RoAkSoAx: jbernard may remember. as i recall he did push it
[21:39] (that is, he applied a debdiff from me)
[21:39] serge_: k, other than that it looks good, but I think we'd need to figure out why that's been created and if we really want it
[21:39] if not we could just drop it
[21:40] serge_: im building now and will upload after
[21:40] RoyK ok. but weird that they would activate this from one day to another. anyway I'll just ask them. Thanks!
[21:41] RoAkSoAx: i think i'll open a bug for it, bc none of it rings any bells for me
[21:41] serge_: k, uploaded
[21:42] RoAkSoAx: plus, it changes things (like /etc/init.d/cgred.in) which we don't use. it's weird
[21:42] RoAkSoAx: thanks!
[21:43] serge_: yeah, that must be a leftover from some changes that are not reflected in a patch, or changes that are not really necessary
=== medberry is now known as med_out
[21:46] accidental git update maybe
=== Ursinha is now known as Ursinha-afk
[21:51] Is it possible to somehow limit/throttle the percentage of CPU my Ubuntu (or a specific user) is allowed to use? The reason I ask is that if my virtual server reaches 100% cpu for several seconds, Amazon starts throttling it down to extremely slow speeds. So I want to make sure no process can reach higher than 80%; or, if thats not possible, that any process can not reach above 50%; or if thats not possible EITHER, that any user can not go above X percentage.
[21:52] Deathray: are you using a t1.micro?
[21:52] Yes, exactly.
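One way to cap a process group's CPU share, relevant to the throttling question above, is the kernel's cgroup CPU controller. A sketch, with assumptions to note: it presumes a cgroup-v1 `cpu` controller mounted at /sys/fs/cgroup/cpu (mount points vary, and on kernels of this log's era cgroup tools may need installing); the `limited` group name and `$APACHE_PID` are made up for illustration; and the hard-cap `cpu.cfs_quota_us`/`cpu.cfs_period_us` interface only exists on newer kernels, while older ones offer only the relative `cpu.shares` weight:

```shell
# Hypothetical cgroup; these commands require root and a mounted
# cpu controller at the path below.
CG=/sys/fs/cgroup/cpu/limited
PERIOD=100000                    # scheduling period, microseconds
QUOTA=$((PERIOD * 80 / 100))     # allow at most 80% of one CPU

sudo mkdir -p "$CG"
echo "$PERIOD" | sudo tee "$CG/cpu.cfs_period_us"
echo "$QUOTA"  | sudo tee "$CG/cpu.cfs_quota_us"

# Move an already-running process (e.g. one apache worker) into the
# group; every PID written to "tasks" shares the same 80% budget.
echo "$APACHE_PID" | sudo tee "$CG/tasks"
```

Unlike cpulimit, which attaches to one PID, every task placed in the group is throttled collectively, which is what makes this approach fit multi-worker daemons.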
[21:53] I tried cpulimit, which works great, but it will not work with apache2, since it has several workers, and cpulimit will just bind itself to the first PID it finds named apache2 and neglect the others
[21:53] Deathray: the t1.micro, while cheap, is prone to that due to the severe resource starvation.
[21:53] Have you tried cgroups?
[21:54] How do I list all packages that depend on a certain package -- installed or not?
[21:54] Yeah, exactly, which is why I'm trying to work around that by throttling myself. But never heard of it; quite new to Linux, so I'll read up on it and see if it can help :)
[21:57] Deathray: cgroups will likely do what you want. But if you are taking any sort of consistent load, then upgrading to an m1.small might make your life easier.
[21:58] apt-cache rdepends foo
[21:58] Deathray: I have lost more sleep over the t1.micro than I care to admit. While it is a useful instance type for prototyping, using as a shell account, etc., any production usage should probably move to an m1.small or bigger.
[21:59] Yeah that is true, but the thing is im just running a personal blog & teamspeak3 for the small gaming community im in, so the price difference is big for this small project when cheap alternatives are available. but since I'm a nerd i want THIS to work :)
[22:00] free tier first year / 15 usd a month after vs. 70 a month I think it is, is too much :(
[22:01] clear
[22:01] Deathray: Give cgroups a look. The other thing I would watch is memory usage with Apache, or even switch to lighttpd to reduce your memory footprint. A common problem with the micro is that they are very memory starved, so swapping is easy. Once you get into a swapping situation, that can push your CPU usage up and lead to hitting the scheduler.
[22:04] Deathray: Another idea would be to limit the inbound traffic to keep Apache from doing too much, i.e. set up security groups
[22:05] Hello everyone... is anyone here pretty familiar with setting up web servers?
[22:06] That can help me understand how to set up virtual hosts, and not need permissions higher than 755?
[22:06] Interesting, I'll have to look into that. I already implemented CloudFlare to filter out botnets and other bad stuff, which has saved my server lots of bandwidth, which translates to resources, which helps a bit. But sometimes when google crawls my website or some other random linux process decides to do something which spikes at 100% for a couple of seconds, Amazon's incapacitating throttle kicks in and my server dies to the point of not even accepting SSH.
[22:08] Deathray: Google robots.txt and how to opt out of Google indexing -- unless you want the indexing.
[22:09] Deathray: The not accepting SSH is the scheduler, and is not surprising if your instance is working hard.
[22:09] Although I don't believe Amazon's incapacitating throttling kicks in if I use too much memory; I think it's solely based off of CPU utilization (i even mounted a few gigs of EBS for swap). Based off of Amazon's own documentation I can actually confirm that: http://bit.ly/cGwR3o
[22:10] Aha, cool
[22:10] Deathray: the other thing you might want to look at is fail2ban. It is a script that sets up iptables for you, looks for patterns in logs, and then blocks on them.
[22:12] That sounds like some cool stuff I would like looking into. But I think a better solution would be to find something more global for the entire OS and all processes, since it would just be a matter of time before some cron task or other Linux task uses enough cpu % to induce the Amazon throttle.
[22:12] Which is where the cgroups that you mentioned may be the solution; I'll have to work on it :)
[22:13] Deathray: One problem that you'll have with it is that your performance may go out the window with cgroups, because you'll have to figure out what the max CPU utilization is and then limit the spikes.
[22:14] Deathray: If you figure it out, blog it.
It would be immensely useful to the community
=== erichammond1 is now known as erichammond
[22:30] utlemming, hah, this is turning into an interesting project :D You can bet i will! Do you have any tools you can suggest for benchmarking the cpu for testing purposes, so I dont have to open 30 tabs of my blog to cause the throttle to occur?
=== dannf-lunch is now known as dannf
[22:33] Deathray: :) I'm an OS and Cloud Guy. I don't have much experience with benchmarking application stacks.
[22:42] New bug: #807770 in backuppc (main) "package backuppc 3.2.0-3ubuntu4 failed to install/upgrade: ErrorMessage: the installed post-installation script subprocess returned an error exit status 1" [Undecided,New] https://launchpad.net/bugs/807770
[23:03] Is it bad to have postfix running publicly? won't spammers abuse it for sending bulk emails? It seems many sites and ISPs have public SMTP servers; doesn't this get abused?
[23:05] Rinsmaster: define public
[23:05] public as in open relay
[23:05] yes, this will get abused
[23:06] and any serious business shouldnt run an open relay, as they will get blacklisted very soon (and can no longer serve their customers an appropriate mail service)
[23:06] isn't this why usernames and passwords were invented?
[23:07] this is why all kinds of authentication mechanisms in the smtp field were invented
[23:09] Ah okay, I understand. Thanks guys :)
[23:17] virgin runs a semi-restricted smtp server; you can only use it from the virgin network.
[23:17] but no auth needed
[23:20] what good is that for spammers outside the virgin network? :/
[23:21] if they get a drone inside the virgin network they also get an open relay ;)
=== erichammond1 is now known as erichammond
[23:28] utlemming, I've done some tests which seem to prove that limiting the CPU % will not help. They don't throttle you based off of the percentage of CPU your instance is currently using, but the average over time.
I ran sysbench to calculate primes as many times as possible for 10 seconds, repeating for a minute. And despite throttling to 10-20-50%, the total amount of calculations is close to the same at the end, although the results dont fluctuate as much on the ones where the benchmark was limited to low amounts such as 10%
[23:29] So even though sysbench was staying at 50% for the one minute duration, the results every 10 seconds were sometimes very high, but sometimes dropped immensely, which is where the amazon throttle kicks in.
[23:30] But the total amount compared to the test results from the test i made capped to 10% was the same; just the independent results every 10 seconds were more "stable" and not sometimes dropping to ridiculous amounts.
[23:31] Deathray: What is your out-of-cgroup CPU utilization?
[23:31] I don't know :/ I used cpulimit to limit the benchmark. How would I find that out?
[23:33] As a conclusion though, I guess I was stupid assuming the amazon throttle was unintelligent enough to never touch my server if i just didnt reach the cap, but it seems it works in a more dynamic way, balancing your server.. Actually looking at this graph: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/images/Micro_Bad_Fit_Background_Throttled.png also proves that (which eric hammond actually pointed out)
[23:35] The interesting thing I found out though was that, looking at "top" when the results severely dropped, indicating where the amazon throttle occurs, the small "st" increases to 95-100%, but the cpu total % would stay the same. What does the "st" in top exactly mean?
[23:35] and when the test results go back to normal, the st goes back to 0%
[23:36] Deathray: "st" is more or less "stolen time", which indicates that the guest is blocked on the hypervisor
[23:36] Aha!
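The "st" (steal) figure explained above can also be read directly from /proc/stat rather than from top. A small sketch, assuming a Linux guest; on the aggregate "cpu" line, the eighth value after the label is cumulative steal time, in scheduler ticks:

```shell
# Print cumulative steal time (in ticks) across all CPUs.
# A value that keeps rising while your own utilisation is flat means
# the hypervisor is withholding cycles (what top shows as "st").
awk '/^cpu /{print "steal ticks:", $9}' /proc/stat
```

Sampling this twice with a sleep in between and differencing the values gives a steal rate, which is handy for logging throttle episodes like the ones described above.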
[23:40] Hmm, so I guess throttling my ubuntu server on my own won't really benefit me in any way; I'll have to live with amazon's throttle and find other ways to optimize my server so it uses less CPU, such as the fail2ban you mentioned
[23:40] are there any UEC images for oneiric server that contain all the kernel modules that typically come with -server?
=== erichammond1 is now known as erichammond