[00:07] apt-get update on EC2 us-east-1 results in: Failed to fetch bzip2:/var/lib/apt/lists/partial/us-east-1.ec2.archive.ubuntu.com_ubuntu_dists_precise-updates_main_source_Sources Hash Sum mismatch
[00:09] http://askubuntu.com/questions/209844/
=== n0ts_off is now known as n0ts
=== mcclurmc is now known as mcclurmc_away
[01:53] I have an issue going on with a post-up script — the script adds custom routes, but sometimes one or more of the routes is not added. This is on a KVM guest.
=== n0ts is now known as n0ts_off
[02:07] what services are being run on your ubuntu server?
[02:11] bsdman: this particular system has redis and nginx — but it was happening on another server that had entirely different services
[02:11] unfortunately, it's hard to reproduce
[02:13] is this an appropriate place to ask about AWSOME?
[02:21] benlake, what do you wish to know?
[02:22] does the implementation provide for non-EC2-compliant API implementations?
[02:22] bsdman:
[02:23] no idea. little beyond my knowledge.
[02:25] bsdman: no worries
[02:30] hmm, looks like it might not be that bad to plug in a new cloud provider
[02:43] bsdman: here's an excerpt from my log — https://gist.github.com/6d8169c1da423e07a70d
[02:47] bsdman: also, it seems to just be the first boot after a destroy/create/start
[05:29] stgraber: i saw the emails - excellent
[05:41] Just curious, what's the best policy for adding shares? I noticed that I had pre-existing directories and the ACLs didn't seem to take properly. I have another drive mounted as /Share, and the folders in there are owned by root:root with RWX for all users, but if I set no guest access, and set authenticated users to read-only and admins as RW, it gives everyone RW. I'm still trying to get used to this, so any suggestions are appreciated. I don't want them to be created under /homes/samba/shares/..... is the main thing. I'm running zentyal 3.0 right now and that's where it creates a share if I use the interface.
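The Hash Sum mismatch at the top usually means the cached index files are stale or came from an inconsistent mirror; a common workaround (a sketch, not an official fix) is to clear the list cache and re-fetch. The helper below is our own and takes the lists directory as a parameter so it can be tried safely outside /var/lib/apt:

```shell
#!/bin/sh
# clean_apt_lists DIR
# Remove cached package index files under DIR so that `apt-get update`
# re-downloads them from scratch. On a real system DIR is /var/lib/apt/lists
# and the commands need root; the argument exists only so the function can
# be exercised against a scratch directory.
clean_apt_lists() {
    rm -rf "$1"/partial
    rm -f "$1"/*_Sources "$1"/*_Packages "$1"/*_Release*
    mkdir -p "$1"/partial
}

# Typical use on the affected EC2 instance:
#   sudo sh -c 'rm -rf /var/lib/apt/lists/* && mkdir /var/lib/apt/lists/partial'
#   sudo apt-get clean && sudo apt-get update
```

If the mismatch persists after a clean re-fetch, switching away from the regional EC2 mirror in /etc/apt/sources.list is worth trying, since the mirror itself may be mid-sync.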
Are there any special permissions I have to set on these folders or something?
=== n0ts_off is now known as n0ts
[06:58] here is a snippet from my smb.conf which works with ACLs:
[06:58] writable = yes
[06:58] valid users = @Users
[06:58] hide unreadable = yes
[06:58] inherit permissions = yes
[06:58] security mask = 0000
[06:58] map archive = no
[06:59] force directory mode = 0777
[06:59] force create mode = 0777
[06:59] force directory security mode = 777
=== n0ts is now known as n0ts_off
=== Ursinha-afk is now known as Ursinha
[08:36] New bug: #1073463 in maas (main) "User-configurable files are not installed in /etc" [Undecided,New] https://launchpad.net/bugs/1073463
[08:39] hi! when using iscsi as the storage backend to kvm/libvirt i can migrate running virtualized systems from one host to another, but can't create snapshots. when using qcow2-based file images it is the other way round. am i right?
[08:41] New bug: #1070775 in maas/1.2 "The zone name (attached to a cluster controller) can still be changed when it contains in-use nodes and DNS is managed." [Critical,Fix committed] https://launchpad.net/bugs/1070775
[08:42] New bug: #1059645 in maas "URI in API description wrong when accessing machine via alternative interface" [High,In progress] https://launchpad.net/bugs/1059645
[08:42] New bug: #1064224 in maas "IPMI detection ends up with power_address of 0.0.0.0" [Undecided,New] https://launchpad.net/bugs/1064224
[08:42] New bug: #1064527 in maas "detect_ipmi needs improvement. detects non-existant device in nested kvm" [Undecided,New] https://launchpad.net/bugs/1064527
[08:42] New bug: #1070522 in maas "maas-cli nodes new incomplete documentation" [Critical,Fix committed] https://launchpad.net/bugs/1070522
[08:42] New bug: #1070765 in maas/1.2 "DNS forward zone ends up with nonsensical entries" [Critical,In progress] https://launchpad.net/bugs/1070765
[08:42] New bug: #1070774 in maas/1.2 "The hostname of a node can still be changed once the node is in use" [Critical,Fix committed] https://launchpad.net/bugs/1070774
[08:51] New bug: #1066775 in maas/1.2 "Main page slow to load with many nodes" [Critical,Fix committed] https://launchpad.net/bugs/1066775
[09:15] New bug: #1073478 in squid3 (main) "squid3 does not clean up pid file" [Undecided,New] https://launchpad.net/bugs/1073478
[09:27] Hello, it seems the Ubuntu 12.04 ami-9c78c0f5 is broken. Does anyone know when the next repository update is scheduled?
=== n0ts_off is now known as n0ts
[09:42] Morning all
[09:43] I'm hitting some annoying hurdles with munin on 12.10. Anyone else got any experience with it (I am getting html files, but no graphs)
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[10:05] Within an EC2 server instance, while running a ./configure, the terminal suddenly hangs during execution, e.g. at config.status: executing depfiles commands. I experienced the same thing once during e.g. sudo checkinstall. The whole terminal becomes unresponsive. What could be the reason for that? This happens on an EC2 micro instance.
=== n0ts is now known as n0ts_off
[10:41] New bug: #1073510 in nova (main) "Upstart init script executes before network bridge is available" [Undecided,New] https://launchpad.net/bugs/1073510
[10:57] hi guys, can anyone tell me why smbmount //10.29.39.23/share /mnt/hd2/share -o user=samba,pass=blablabla,gid=ubuntu,uid=ubuntu mounts the drive correctly as the ubuntu user on one server and mounts as root on another server....?
=== mcclurmc_away is now known as mcclurmc
=== cpg is now known as cpg|away
[11:10] <_cronus_> bubu\a, maybe you need to pass the forceuid and forcegid options as well
[11:12] ignore -channels #linuxoutlaws * JOINS PARTS QUITS NICKS
[11:13] hi, can i easily install ubuntu server 12.04 lts on a usb key using dd?
[11:13] I have done this with archlinux
[11:13] but I found no docs related to this specifically for ubuntu
[11:14] nope, use startup disk creator if running ubuntu
[11:15] or unetbootin with windows
[11:15] If I install something with --no-install-recommends, will it never bother me with the recommends again?
[11:15] vezq: I am running a different distro
[11:15] unetbootin is also available for linux
[11:16] vezq: thanks
=== mcclurmc is now known as mcclurmc_away
[12:02] I have an "amd64" version of ubuntu server 12.04 - this works on intel x86_64 architecture too, right?
[12:03] ak5: yes, it's called amd64 since it was AMD who first made that architecture - intel followed when they finally found that the itanium run wasn't good enough
[12:03] ok good, just checking
[12:04] thanks
=== n0ts_off is now known as n0ts
[12:15] I am confused about what options to use in unetbootin for my ubuntu server 12.04 amd64 iso file
[12:17] oh nevermind, seems like my download is just corrupt -.-
=== Ursinha is now known as Ursinha-afk
=== n0ts is now known as n0ts_off
=== yofel_ is now known as yofel
=== mcclurmc_away is now known as mcclurmc
=== shantorn__ is now known as shantorn
=== mcclurmc is now known as mcclurmc_away
=== mcclurmc_away is now known as mcclurmc
[14:03] I just logged in to a KVM node which has a 100GB swap partition... someone hasn't been paying attention during installation, jesus lol
[14:11] jamespage: free by any chance?
[14:15] tuxloaf: is there 256GB of RAM? ;)
[14:18] has 100gb of ram :p
[14:19] also a 48gb udev and a 48gb cgroup partition, of which just a couple MB is in use
=== matsubara is now known as matsubara-lunch
[14:47] :)
[14:52] Umm..
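The corrupt-download diagnosis above can be confirmed against the MD5SUMS file Ubuntu publishes next to each image. A small sketch of the check (the helper name is ours, not a standard tool):

```shell
#!/bin/sh
# verify_checksum FILE EXPECTED_MD5
# Print OK when FILE's md5 digest matches EXPECTED_MD5, FAILED otherwise.
verify_checksum() {
    actual=$(md5sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then echo OK; else echo FAILED; fi
}

# Against the published list, in the download directory, the usual form is:
#   md5sum -c MD5SUMS 2>/dev/null | grep server-amd64.iso
```

A failed checksum rules out unetbootin option confusion entirely; re-download first, then worry about boot media.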
does this sound familiar to some1? The file path or username are not showing up next to the dollar sign when I connect to my server via SSH === mcclurmc is now known as mcclurmc_away === n0ts_off is now known as n0ts [15:00] fosfori, what shell does your user have on the remote machine? [15:00] bash [15:01] New bug: #1073571 in cinder "[Folsom] Conflict between nova-api and cinder-api" [Undecided,New] https://launchpad.net/bugs/1073571 [15:01] you sure? [15:01] cat the passwd file [15:02] fosfori: perhaps someone has messed up PS1 [15:03] bubu\a: done [15:03] try [15:03] PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' [15:03] Just got this baby up yesterday [15:03] fosfori, def bash then? :/ [15:03] fosfori: is this for root, or a newly created user? [15:04] I'm not too familiar with the commands :3 [15:04] RoyK: new user [15:04] then it's probably not bash [15:04] http://paste.ubuntu.com/1320990/ [15:05] fosfori: as that user, run chsh -s /bin/bash, or run chsh -s /bin/bash as root [15:06] fosfori, what shell does that user have when you cat /etc/passwd? [15:06] Okay, let's see [15:06] default shell is /bin/sh [15:07] which is dash [15:07] ahh [15:07] no idea why that's the default, since imho dash sucks badly [15:07] yeah, I looked at the wrong line [15:07] bingo [15:08] So how does this one help me getting that path to show up? [15:09] yes [15:09] as that user, just try to run bash [15:09] and you'll see the difference [15:11] seems it's a useradd default set in /etc/default/useradd [15:12] <[conrad]> Does anyone know of anything more elaborate than https://wiki.ubuntu.com/EasyUbuntuClustering , and possibley more specific to a newer version such as 12.04 ? I'm essentially a duplicate of the first use case. We have 4 Dell servers that were recently freed up that we'd like to use together to hopefully achieve this. [15:14] [conrad]: what is your target application for this? === cpg|away is now known as cpg [15:14] I guess it's a compute cluster? 
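The "cat the passwd file" step in the prompt discussion above can be narrowed down with getent instead of reading the whole file; the tiny helper below (our own, for illustration) pulls the login-shell field out of a passwd-format line:

```shell
#!/bin/sh
# shell_of PASSWD_LINE
# Print the login-shell field (the 7th, colon-separated) of a passwd entry.
shell_of() {
    printf '%s\n' "$1" | cut -d: -f7
}

# On the server itself you would feed it the real entry and, if it reports
# /bin/sh (dash on Ubuntu), switch with: chsh -s /bin/bash
#   shell_of "$(getent passwd "$USER")"
```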
=== cpg is now known as Guest42437
[15:22] <[conrad]> RoyK: What we'd like to do is get the 4 servers ( they all have 4 cores ) working together as one, and if possible, run a single Virtual Machine off of the cluster, that would have access to all 16 cores.
[15:24] I have a test server set up by my university IT; they don't like me because I'm too much of an open source advocate.... at any rate! I did get a test box to play with Ubuntu Server 12.04. I've tried installing the Canvas LMS and only got so far, because I didn't know some of the params IT set up.
[15:25] But I was pretty sure I installed PHP and such... shouldn't it have started by default? I did a 'ps aux | grep php' and didn't see it there
=== mcclurmc_away is now known as mcclurmc
=== Error404NotFound is now known as ChuckNorrisLives
=== ChuckNorrisLives is now known as YouShallNotPass
=== YouShallNotPass is now known as Error404NotFound
=== Error404NotFound is now known as LiamNeesonKicksA
=== LiamNeesonKicksA is now known as JackNicholosonMa
=== JackNicholosonMa is now known as Error404NotFound
[15:31] i see php and php5 in /usr/bin/
[15:33] ah I had to do sudo ps aux | grep php before I saw php there
[15:33] [conrad]: you can't do that
[15:33] so it looks to be running.
[15:34] i can only access the domain locally
[15:34] but if I try something like domainname/phpinfo.php it doesn't list anything
[15:34] I do have /var/www/index.html and that does display
[15:35] [conrad]: compute clusters are based on queueing - you can't just combine four servers into one
[15:35] [conrad]: rather get a new dual 16-core opteron instead ;)
[15:36] <[conrad]> RoyK: I wish it were that easy :-). Do you have any idea of the approach that was going to be taken in the link I provided? The first use case is pretty dead on with the trouble we have.
[15:38] [conrad]: I know linux and clustering quite well, and AFAIK there's no possible way to run a single VM on a cluster of machines working as a single machine.
It might be possible with a good infiniband network and the knowledge to set it up, but before you've installed that, you've already spent more money than that dual 16-core opteron machine would cost
[15:39] New bug: #1073603 in apache2 (main) "disable ssl compression to mitigate the BEAST attack" [Undecided,New] https://launchpad.net/bugs/1073603
[15:40] [conrad]: what sort of job are you planning to run that needs 16 cores?
[15:41] ah good news... I got my php info to show up!
[15:42] step .001 done
[15:43] <[conrad]> RoyK: We don't necessarily have to do a single VM. We're open to any solution. Our problem now is that we're constantly either under-utilizing our resources ( while they're sitting idle ), or running into situations where a single box is pinned, and other boxes in the office sit idle. We have several threaded applications that we've built and use that do a variety of things. I'm not sure any single application would efficiently use all 16, but if it was one large VM ( I understand it can't be ), we could have 2-3 users shelled or RDP'd into the machine doing similar tasks, maximizing the resources..
[15:44] then what you need is a queueing system
[15:44] so that jobs can be queued to the cluster and distributed to available resources
[15:45] that's how they build large compute clusters these days
[15:45] why not build an internal cloud?
[15:46] [conrad]: something like http://en.wikipedia.org/wiki/SLURM
[15:46] bubu\a: not suitable for compute clusters
[15:47] trade in your 4 servers for a SAN? :)
[15:47] bubu\a: not my servers, but still a bad idea - no chance you can run a compute cluster on a SAN alone ;)
[15:48] sorry, have missed almost all of this convo
[15:48] A SAN is nice to have for other uses...
[15:48] what are you trying to accomplish?
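The phpinfo.php check mentioned above is just a one-line file dropped into the DocumentRoot; a sketch of creating it (the /var/www path is the stock Ubuntu 12.04 default, assumed rather than confirmed from the log):

```shell
#!/bin/sh
# write_phpinfo PATH
# Create a minimal phpinfo() page at PATH for checking that mod_php runs.
write_phpinfo() {
    printf '<?php phpinfo();\n' > "$1"
}

# On the server:
#   sudo sh -c 'echo "<?php phpinfo();" > /var/www/phpinfo.php'
#   curl -s http://localhost/phpinfo.php | head -n 3
```

If curl returns the literal PHP source instead of HTML, Apache is serving the file without mod_php; if it returns nothing, the vhost or DocumentRoot is the thing to check.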
[15:48] and why
[15:48] for a compute cluster, you can easily use local storage on each node
[15:49] <[conrad]> bubu\a: See the first use case on https://wiki.ubuntu.com/EasyUbuntuClustering.
[15:49] bubu\a: read above - I'm merely answering [conrad], who wants to set up a compute cluster
[15:49] [conrad]: setting up SLURM isn't very hard, and it's well proven - see the wikipedia article above
[15:49] <[conrad]> RoyK: This seems like it might be able to cover us. I see some interesting features which might benefit us ( IE idle nodes can be powered down, different operating systems can be booted for each job ).
[15:50] SLURM is a linux thing
[15:50] <[conrad]> The upcoming feature "Integration with Apache Hadoop + Open MPI based job launch" could also be very beneficial to us.
[15:50] but you can possibly hack up something integrating with virsh
[15:50] <[conrad]> Ahh, yes, all the OSes listed are *nix variants.
[15:51] so what you do is create an executable with some data and post that to the queue, then it'll run when resources are available
[15:51] hmm
[15:51] tis a good question!
[15:52] I've never set it up myself; I quit my last job when we were in the process of moving to that
[15:52] <[conrad]> The only issue I see is that this would have to be a console-based executable, right?
[15:52] yes
[15:53] supercomputers don't run a GUI ;)
[15:53] or, i guess some do, but that's another ballpark
=== Ursinha is now known as Ursinha-afk
[15:55] <[conrad]> Right, I understand. But this would only solve half of our problem. Some of our utilities are GUI based, and others ( both console and GUI ) are Windows based.
[15:56] then I guess you'll need two solutions, one for windows and one for linux
[15:56] RoyK, could you not set up a big VM box on a SAN? people can then TS into it to run whatever is needed...
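RoyK's "create an executable with some data and post that to the queue" maps onto a SLURM batch script submitted with sbatch. A minimal sketch of such a job file (the job name, core count, and program are placeholders; it needs a running SLURM cluster to submit, so this is a fragment, not a runnable example):

```shell
#!/bin/sh
# run_job.sbatch -- submitted from a login node with: sbatch run_job.sbatch
#SBATCH --job-name=example
#SBATCH --ntasks=4                 # request 4 of the cluster's 16 cores
#SBATCH --output=example-%j.log    # %j expands to the job id

srun ./my_batch_program input.dat
```

The scheduler then holds the job until four cores are free somewhere in the cluster, which is exactly the "distributed to available resources" behaviour described above.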
[15:57] I don't know any queueing solutions for windows, but IIRC Microsoft has something
[15:57] bubu\a: no, a SAN is only storage
[15:57] well i know that, but with the use of the servers too
[15:57] well, of course, you can run a hypervisor on a machine connected to the SAN
[15:58] but a SAN is only storage
[15:58] yes
[15:58] but that could be the storage for the big VM
[15:58] and the rest can run off the cores/ram from the servers
[15:58] so in this perspective, the SAN is irrelevant
[15:59] think of a SAN as an external harddrive
[15:59] just bigger and more fancy
[15:59] yes i know
[16:00] get proper servers conrad?
[16:00] a G6 or something?
[16:00] fill it full of cores/ram - hook it up to a san for the storage
[16:00] * RoyK thinks G6 is only a model series from HP and rather irrelevant to computing power
[16:00] create a big VM?
[16:01] bubu\a: really, there's no point in running a big VM on a single node unless you have a cluster of nodes (for failover)
[16:02] hmm yeah, fair point
[16:03] [conrad]: but... if you just need up-to-4-core VMs
[16:03] take a look at https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial
[16:04] it's a rather thorough description of setting up KVM in a cluster
[16:04] the howto uses DRBD for shared storage, but I don't think that'll be very good for four nodes
[16:05] better use shared storage somewhere; NFS works, but is a bit slow, glusterfs should work, and if you have a SAN, then OCFS2 or GFS2 should do the job
[16:06] [conrad]: it takes a while to understand how the cluster works, so don't think a simple apt-get install will do
[16:06] <[conrad]> I'm sure of that. I've spent countless hours over time getting familiar with LDAP and the other services we have implemented in our network currently.
[16:08] [conrad]: I've set up a two-node cluster like that (with GFS2) on CentOS - tried with Ubuntu first, but there wasn't much knowledge about it in the ubuntu world, at least not what I could find.
Also, the number of users on such systems seems to be far higher in the redhat world
[16:10] [conrad]: btw, https://wiki.ubuntu.com/EasyUbuntuClustering seems to be rather outdated, based on Hardy 8.04
[16:11] <[conrad]> RoyK: I saw that. That's why I mentioned in my original message something more related to the current 12.04
[16:12] [conrad]: I guess that KVM cluster I linked to above would be the best start. You can implement that with ubuntu - most of the same tools exist
[16:13] except sanlock, which is a bit stupid, since without it, and with shared storage (like GFS2 or NFS), there's no way of stopping two nodes from starting the same VM, and doing so will definitely result in disk corruption (believe me, I've tried...)
[16:16] Can anyone help me? yum isn't working anymore after installing EPEL.. whatever I do I get this error: http://pastebin.com/qzBeLNXL
[16:16] [conrad]: if you need a mixed environment, I'd say setting up a KVM cluster will be a good start. Then you can possibly set up a few Linux VMs to do the batchable work and then a windows VM or two to do the rest
[16:16] niel: erm - yum/EPEL is a redhat thing, and this is an Ubuntu channel ;)
=== Ursinha-afk is now known as Ursinha
[16:17] oh yeah lol
=== mcclurmc is now known as mcclurmc_away
=== mcclurmc_away is now known as mcclurmc
[16:38] [conrad]: you might want to read http://www.microsoft.com/hpc/en/us/product/high-performance-computing-faqs.aspx
[16:40] [conrad]: you can of course set up hyper-v to do the virtualization, but my experience running Linux VMs on Hyper-V is not good - they tend to be offlined from the network if network traffic is high.
I've seen that with ubuntu lucid - never made a test with precise; we moved the linux VMs to KVM before Precise landed
[16:46] New bug: #1073639 in rabbitmq-server (main) "Option to not boot on install" [Undecided,New] https://launchpad.net/bugs/1073639
=== mcclurmc is now known as mcclurmc_away
[17:35] Hi Folks, I would appreciate any help with my problem since I've spent so much time trying to solve it. I got a snapshot including files to be deployed, and I did the following steps to achieve that:
[17:35] Creating a small EC2 Ubuntu Server 11.10 ( 64 bit ) instance
[17:36] Create a new volume from the snapshot needed ( I assigned 6 GB for it )
[17:36] Attached the volume to my EC2 instance
[17:36] Created a new directory to hold the data ( /site )
[17:36] Edit /etc/fstab and add this: /dev/xvdf /site auto defaults,nobootwait,noatime 0 0 and reboot my instance
[17:37] but when I try to mount using sudo mount /dev/xvdf /site, i'm told that I need to specify the filesystem type
[17:38] if it was an ext filesystem, the ordinary linux one, it would have been auto-detected
[17:38] the format of the partition*
[17:38] can anyone help with this issue please?
[17:38] !mount
[17:38] mount is used to attach devices to directories. See also https://help.ubuntu.com/community/Mount
[17:38] moskydos: i would test that i can mount the directory before adding it to fstab
[17:39] u r right holstein, but i'm doing that and it is not working so far, let me try to show you some results here
[17:39] moskydos: i would take fstab out of the equation
[17:39] moskydos: if you cant mount it, fstab cant
[17:40] u r right, let's forget about fstab at this stage; my main problem is that i can't mount the snapshot
[17:40] moskydos: take some time and troubleshoot just mounting the partition; then, when you can do that, you'll learn how to add it to fstab
[17:40] moskydos: how is it formatted?
[17:41] https://gist.github.com/8b7543922893f0771a8b [17:41] Disk /dev/xvda1 doesn't contain a valid partition table - that is pretty relevant [17:41] i'm just trying to figure out the format so I can do something like sudo mount -t vfat /dev/xvdf /site or whatever is the format [17:41] where are you getting this partition? [17:42] if its broken, its broken... [17:42] HPFS/NTFS/exFAT [17:42] mm I didn't setup /dev/xvda1 , it is there already when I initiate new ubuntu instance [17:43] the volume that I want to attach is at xvdf [17:43] moskydos: i would ask whoever made it if they can mount it, or to test it, or to se if it is functioning properly [17:43] moskydos: i hear you, but if its bad, its bad.. [17:44] http://www.linuxquestions.org/questions/linux-laptop-and-netbook-25/problems-mounting-disk-in-ubuntu-11-10-ntfs-signature-missing-917973/ might be relevant [17:44] mmm u recommend me to use another ec2 type somehow ? [17:44] moskydos: im just suggesting that you dont know anything about that partition.. you didnt create it. it could be the issue and you could waste lots of time trying to mount it and its broken [17:45] im not saying that is the case, just that its an option === fleish_ is now known as fleish [18:07] hey guys im not finding a good how to for ubuntu 12.04 on how to setup virtual users and domains for use with dovecot + postfix any good how to's for this? [18:08] holstein sorry for the headache here , I just wanna go through this step by step, I just created a new ec2 instance , I didn't do anything with it , I didn't even attach the snapshot , can u see the conf so far here https://gist.github.com/735b4adb8425df3fa920 [18:08] I just wanna make sure that at this stage nothing broken u know [18:08] before I go further [18:35] eagles: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ [18:57] humm [19:24] hi there [19:25] is there any postifix or mail server guru ? [19:26] Hi, guys! 
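For the attached-volume problem above, the filesystem type can usually be probed before guessing at mount -t. A sketch of the sequence (device and mount point as in the log, the example outputs are illustrative; this needs the actual instance, so no runnable demo here):

```shell
# Probe what is actually on the attached volume before mounting it:
sudo file -s /dev/xvdf     # e.g. "Linux rev 1.0 ext4 filesystem data"
sudo blkid /dev/xvdf       # prints TYPE="ext4", TYPE="ntfs", etc.

# If blkid reports nothing, the snapshot may carry a partition table
# (look for /dev/xvdf1) or be damaged, as holstein suspected.
sudo mount -t ext4 /dev/xvdf /site   # substitute the reported type

# Only once a manual mount works, persist it in /etc/fstab:
# /dev/xvdf  /site  ext4  defaults,nobootwait,noatime  0 0
```

This matches holstein's advice: prove the manual mount first, then let fstab merely repeat what already works.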
I just installed Ubuntu 12.04 LTS with Apache2, but I must be doing something wrong. The "It works" page takes about 15 sec to respond, then it responds quickly, but after a while it does the same thing again. I suspect DNS problems. I enabled "options debug" in resolv.conf and apache and *.debug in syslog to /var/log/debug but I have absolutely no output.
[19:27] I suppose it's not a local installation, right?
[19:27] does the error log say something?
[19:27] I also did strace -r -p $APACHE_PID for all apache processes (I managed to reduce the clients to only 3) and the problematic function seems to be accept().
[19:28] gatsby7, what do you mean by local? It's a physical box that I installed myself and have full control over.
[19:28] gatsby7, error logs are empty.
[19:28] gatsby7, I may be able to help with postfix.
[19:29] While the web browser is waiting for the response, strace keeps on the accept() function. I think it has something to do with the resolver library.
[19:30] I also fully upgraded it, made sure nsswitch.conf has "hosts: files dns" and disabled ipv6, just to put it out of the way.
[19:31] Any ideas on this weird behavior, or pointers on how I can debug this? :-S
[19:31] alvarezp, did you make any changes to apache2.conf?
[19:33] btw i will explain my postfix problems
[19:34] when mail comes from mobile devices, or from mailing lists, the sender address is shown as sendername+SOME_WEIRD_CODE@domain.tld
[19:34] gatsby7, I installed cacti and nagios, which add their respective files to conf.d (but I don't find any problems there). Then I changed StartServers and Min/MaxSpareThreads to reduce the number of processes to be able to use strace easily, and changed the logging level to "Debug" in apache2/apache2.conf and sites-available/default.
[19:36] alvarezp, do you reach nagios properly?
[19:36] gatsby7, Yes: Nagios, Cacti and the "It works" page, but all of them with the same problem: a 15-sec delay on the first try and immediate responses on recurring tries. After a while, the pattern repeats.
[19:37] gatsby7, sendername+SOME_WEIRD_CODE@domain.tld is a valid mail format and it's the same recipient as sendername@domain.tld. Are you sure it is an error?
[19:38] alvarezp, unfortunately 20 days ago we didn't have this problem
[19:38] gatsby7, never seen that behavior
[19:38] gatsby7, I have mail servers of my own.
[19:39] gatsby7, if you have configuration backups, have you tried diff-ing?
[19:39] think i have to tell more details
[19:39] :
[19:39] 20 days ago we had one domain, then we decided
[19:40] to have one more domain, using the same mail infrastructure
[19:40] so we used a virtual domain in postfix and forced with a script
[19:41] that all users in the old domain will send mail with the new domain.
[19:42] everything is working fine except for mailing list services and mobile devices
[19:43] the strange fact is we force our users' sender address, but this behaviour occurs when others mail us...
[19:44] gatsby7, ohh, that's a perfect use case for configuration diffing
[19:46] gatsby7, it'd be useful to know which directives changed, and see which ones are potential candidates for that behavior.
[19:48] gabrtv, have you enabled debugging logs and checked?
[19:54] has anyone upgraded their server from 12.04 to 12.10?
=== n0ts is now known as n0ts_off
[19:56] Also, HostnameLookups Off
=== n0ts_off is now known as n0ts
[19:59] Hi everyone, how can I bundle an instance into a new AMI and share the AMI with some account IDs? any up-to-date article demonstrating this?
=== n0ts is now known as n0ts_off
[20:35] Ohh god!!!!!! I found that the slow responses are not my server's fault: it's my web cache's fault. :( Sorry all for the noise.
=== Guest42437 is now known as cpg
[20:49] Hi
[20:49] I'm looking for a way to upgrade apache to 2.2.22 on ubuntu server 10.04
[20:54] allanm, reason?
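The HostnameLookups suggestion above refers to an Apache directive: when it is On, Apache does a reverse DNS lookup per request, which produces exactly the kind of first-hit delay described earlier (the file path below is the stock Ubuntu location):

```apache
# /etc/apache2/apache2.conf
# Avoid a reverse DNS lookup on every request; access logs then record
# raw client IPs instead of hostnames.
HostnameLookups Off
```

Off is the Ubuntu default, so this is only worth checking when it has been overridden; a quick timing probe such as `time curl -s -o /dev/null http://localhost/` before and after a change shows whether DNS was really the culprit.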
[20:55] cause you will break all security patchs doing that [21:00] well I'm actually trying to get a bunch of vulnerabilities patched [21:00] CVE-2011-3192 CVE-2010-2068 CVE-2010-1623 CVE-2010-1452 [21:00] CVE-2012-0031 CVE-2011-3607 CVE-2012-0053 CVE-2011-3348 CVE-2011-1928 CVE-2011-0419 [21:00] CVE-2010-0434 CVE-2011-4415 [21:00] allanm: The byterange filter in the Apache HTTP Server 1.3.x, 2.0.x through 2.0.64, and 2.2.x through 2.2.19 allows remote attackers to cause a denial of service (memory and CPU consumption) via a Range header that expresses multiple overlapping ranges, as exploited in the wild in August 2011, a different vulnerability than CVE-2007-0086. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3192) [21:01] allanm: mod_proxy_http.c in mod_proxy_http in the Apache HTTP Server 2.2.9 through 2.2.15, 2.3.4-alpha, and 2.3.5-alpha on Windows, NetWare, and OS/2, in certain configurations involving proxy worker pools, does not properly detect timeouts, which allows remote attackers to obtain a potentially sensitive response intended for a different client in opportunistic circumstances via a normal HTTP request. (http://cve.mitre.org/cgi-bin/cvename.cgi?nam [21:01] allanm: Memory leak in the apr_brigade_split_line function in buckets/apr_brigade.c in the Apache Portable Runtime Utility library (aka APR-util) before 1.3.10, as used in the mod_reqtimeout module in the Apache HTTP Server and other software, allows remote attackers to cause a denial of service (memory consumption) via unspecified vectors related to the destruction of an APR bucket. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-1623) [21:01] allanm: The (1) mod_cache and (2) mod_dav modules in the Apache HTTP Server 2.2.x before 2.2.16 allow remote attackers to cause a denial of service (process crash) via a request that lacks a path. 
(http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-1452) [21:01] allanm: scoreboard.c in the Apache HTTP Server 2.2.21 and earlier might allow local users to cause a denial of service (daemon crash during shutdown) or possibly have unspecified other impact by modifying a certain type field within a scoreboard shared memory segment, leading to an invalid call to the free function. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0031) [21:01] allanm: Integer overflow in the ap_pregsub function in server/util.c in the Apache HTTP Server 2.0.x through 2.0.64 and 2.2.x through 2.2.21, when the mod_setenvif module is enabled, allows local users to gain privileges via a .htaccess file with a crafted SetEnvIf directive, in conjunction with a crafted HTTP request header, leading to a heap-based buffer overflow. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3607) [21:01] allanm: protocol.c in the Apache HTTP Server 2.2.x through 2.2.21 does not properly restrict header information during construction of Bad Request (aka 400) error documents, which allows remote attackers to obtain the values of HTTPOnly cookies via vectors involving a (1) long or (2) malformed header in conjunction with crafted web script. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0053) [21:01] allanm: The mod_proxy_ajp module in the Apache HTTP Server before 2.2.21, when used with mod_proxy_balancer in certain configurations, allows remote attackers to cause a denial of service (temporary "error state" in the backend server) via a malformed HTTP request. 
(http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3348) [21:01] allanm: The fnmatch implementation in apr_fnmatch.c in the Apache Portable Runtime (APR) library 1.4.3 and 1.4.4, and the Apache HTTP Server 2.2.18, allows remote attackers to cause a denial of service (infinite loop) via a URI that does not match unspecified types of wildcard patterns, as demonstrated by attacks against mod_autoindex in httpd when a /*/WEB-INF/ configuration pattern is used. NOTE: this issue exists because of an incorrect fix f [21:01] allanm: Stack consumption vulnerability in the fnmatch implementation in apr_fnmatch.c in the Apache Portable Runtime (APR) library before 1.4.3 and the Apache HTTP Server before 2.2.18, and in fnmatch.c in libc in NetBSD 5.1, OpenBSD 4.8, FreeBSD, Apple Mac OS X 10.6, Oracle Solaris 10, and Android, allows context-dependent attackers to cause a denial of service (CPU and memory consumption) via *? sequences in the first argument, as demonstrated [21:01] allanm: The ap_read_request function in server/protocol.c in the Apache HTTP Server 2.2.x before 2.2.15, when a multithreaded MPM is used, does not properly handle headers in subrequests in certain circumstances involving a parent request that has a body, which might allow remote attackers to obtain sensitive information via a crafted request that triggers access to memory locations associated with an earlier request. (http://cve.mitre.org/cgi-bi [21:01] allanm: The ap_pregsub function in server/util.c in the Apache HTTP Server 2.0.x through 2.0.64 and 2.2.x through 2.2.21, when the mod_setenvif module is enabled, does not restrict the size of values of environment variables, which allows local users to cause a denial of service (memory consumption or NULL pointer dereference) via a .htaccess file with a crafted SetEnvIf directive, in conjunction with a crafted HTTP request header, related to (1) [21:01] sorry about that [21:03] heh? 
those are patched
[21:03] I'm currently running
[21:03] I would recommend you check Ubuntu's CVE tracker, or maybe even the apache package
[21:03] 2.2.14
[21:03] version numbers != vulnerable
[21:04] lol, the pci scan is marking me as if it was
[21:04] ya, it's a pci scan
[21:04] you think it actually checks if you're vulnerable or not?
[21:04] read up on how to do pci scans :)
[21:04] :)
[21:04] run scan, it checks server version, says you have issues
[21:05] you supply a link that shows the issue was corrected, done
[21:06] sir you are 100% right
[21:06] https://launchpad.net/ubuntu/lucid/+source/apache2/+changelog
[21:06] check those cve's off that list :)
[21:07] attempting to pass pci scans via software versions leaves you having to manually patch or upgrade stuff yourself
[21:09] I've had a lot of people email me about apache 2.4 for ubuntu, so they could pass pci scans :)
[21:13] :)
[21:14] patdk-wk, you are amazing; now I have to figure out how to deal with the company that does the PCI scan
=== matsubara is now known as matsubara-afk
[21:43] New bug: #1073725 in elinks (universe) "links and elinks don't have js support enabled by default" [Undecided,New] https://launchpad.net/bugs/1073725
=== cpg|away is now known as cpg
=== cpg is now known as Guest81619
=== Guest81619 is now known as Guest81619|away
=== Guest81619|away is now known as cpg
[23:27] I'm having a really weird issue
[23:27] apache becomes super slow to respond
[23:27] if i do ufw enable
[23:28] even with 80 and 443 allowed
[23:42] what's that? SourceGuardian requires Zend Engine API version 220090626.
[23:42] The Zend Engine API version 220100525 which is installed, is newer.
[23:43] this started after upgrade to 12.10
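patdk-wk's point about backported fixes can be checked mechanically: Ubuntu records the CVEs it patches in the package changelog, so grepping it is what documents the fix for the scan vendor. A small sketch (the helper name is ours; the doc path is typical but may vary by package):

```shell
#!/bin/sh
# cve_in_changelog CHANGELOG_FILE CVE_ID
# Print "fixed" if CVE_ID is mentioned in the (already decompressed)
# changelog, "not found" otherwise.
cve_in_changelog() {
    if grep -q "$2" "$1"; then echo fixed; else echo "not found"; fi
}

# On a 10.04 server the real check would be something like:
#   zgrep CVE-2011-3192 /usr/share/doc/apache2.2-common/changelog.Debian.gz
```

A "fixed" hit in the changelog, together with the Launchpad changelog link pasted above, is exactly the evidence a PCI scan vendor needs to clear a version-number-only finding.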