[00:07] <erichammond> apt-get update on EC2 us-east-1 results in: Failed to fetch bzip2:/var/lib/apt/lists/partial/us-east-1.ec2.archive.ubuntu.com_ubuntu_dists_precise-updates_main_source_Sources  Hash Sum mismatch
[00:09] <erichammond> http://askubuntu.com/questions/209844/
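The usual recovery for this class of "Hash Sum mismatch" error (and what the linked askubuntu question suggests) is to wipe the cached index files and re-run apt-get update. A sketch below, demonstrated on a stand-in directory; on a real system the path is /var/lib/apt/lists and the rm/mkdir need sudo.

```shell
# Stand-in for /var/lib/apt/lists so the sketch is harmless to run:
lists=$(mktemp -d)
mkdir -p "$lists/partial"
touch "$lists/partial/stale_Sources"   # simulate a corrupt partial index

rm -rf "$lists"/*                      # real: sudo rm -rf /var/lib/apt/lists/*
mkdir -p "$lists/partial"              # real: sudo mkdir -p /var/lib/apt/lists/partial
# real: sudo apt-get update            # re-downloads fresh, consistent indexes
```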
[01:53] <mgw> I have an issue going on with a post-up script — the script adds custom routes, but sometimes one or more of the routes is not added.  This is on a kvm guest.
[02:07] <bsdman> what services are being run on your ubuntu server?
[02:11] <mgw> bsdman: this particular system has redis and nginx — but it was happening on another server that had entirely different services
[02:11] <mgw> unfortunately, it's hard to reproduce
[02:13] <benlake> is this an appropriate place to ask about AWSOME?
[02:21] <bsdman> benlake, what do you wish to know?
[02:22] <benlake> does the implementation provide for non-EC2-compliant API implementations?
[02:22] <benlake> bsdman:
[02:23] <bsdman> no idea. little beyond my knowledge.
[02:25] <benlake> bsdman: no worries
[02:30] <benlake> hmm, looks like it might not be that bad to plug in a new cloud provider
[02:43] <mgw> bsdman: here's an excerpt from my log — https://gist.github.com/6d8169c1da423e07a70d
[02:47] <mgw> bsdman: also, it seems to just be the first boot after a destroy/create/start
[05:29] <hallyn> stgraber: i saw the emails - excellent
[05:41] <p7ank5te7> Just curious, what's the best policy for adding shares? I noticed that I had pre-existing directories and the ACLs didn't seem to take properly. I have another drive mounted as /Share, and the folders in there are owned by root:root with them at RWX for all users, but if I set no guest access, and set authenticated users to read-only and admins as RW it gives everyone RW. I'm still trying
[05:41] <p7ank5te7> to get used to this, so any suggestions are appreciated. I don't want them to be created under /homes/samba/shares/..... is the main thing. I'm running zentyal 3.0 right now and that's where it creates a share if I use the interface. Are there any special permissions I have to set on these folders or something?
[06:58] <vezq> here is a snippet from my smb.conf which works with acls:
[06:58] <vezq> writable = yes
[06:58] <vezq> valid users = @Users
[06:58] <vezq> hide unreadable = yes
[06:58] <vezq> inherit permissions = yes
[06:58] <vezq> security mask = 0000
[06:58] <vezq> map archive = no
[06:59] <vezq> force directory mode = 0777
[06:59] <vezq> force create mode = 0777
[06:59] <vezq> force directory security mode = 777
[08:39] <webwurst> hi! when using iscsi as storage-backend to kvm/libvirt i can migrate running virtualized systems from one host to another, but can't create snapshots. when using qcow2 based file-images it is the other way round. am i right?
[09:27] <houman> Hello, it seems the Ubuntu 12.04 ami-9c78c0f5 is broken. Does anyone know when the next repository update is scheduled?
[09:42] <Gargoyle> Morning all
[09:43] <Gargoyle> I'm hitting some annoying hurdles with munin on 12.10. Anyone else got any experience with it (I am getting html files, but no graphs)
[10:05] <houman> Within an EC2 server instance, while running ./configure, the terminal suddenly hangs during execution, e.g. at config.status: executing depfiles commands. I experienced the same thing once during e.g. sudo checkinstall. The whole terminal becomes unresponsive.  What could be the reason for that? This happens on an EC2 micro instance.
[10:57] <bubu\a> hi guys, can anyone tell me why - smbmount //10.29.39.23/share /mnt/hd2/share -o user=samba,pass=blablabla,gid=ubuntu,uid=ubuntu mounts the drive correctly as ubuntu user on one server and mounts as root on another server....?
[11:10] <_cronus_> bubu\a, maybe you need to pass the forceuid and forcegid options as well
[11:12] <vezq> ignore -channels #linuxoutlaws * JOINS PARTS QUITS NICKS
[11:13] <ak5> hi, can i easily install ubuntu server 12.04 lts on usb key using dd?
[11:13] <ak5> I have done this with archlinux
[11:13] <ak5> but I found no docs related to this specifically for ubuntu
[11:14] <vezq> nope, use startup disk creator if running ubuntu
[11:15] <vezq> or unetbootin with windows
[11:15] <Jeeves_> If I install something with --no-install-recommends, will it never bother me with the recommends again?
[11:15] <ak5> vezq: I am running a different distro
[11:15] <vezq> unetbootin also available for linux
[11:16] <ak5> vezq: thanks
[12:02] <ak5> I have an "amd64" version of ubuntu server 12.04 - this works on Intel x86_64 architecture too, right?
[12:03] <RoyK> ak5: yes, it's called amd64 since it was AMD who first made that architecture - intel followed that when they finally found that the itanium run wasn't good enough
[12:03] <ak5> ok good, just checking
[12:04] <ak5> thanks
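A quick way to confirm the point RoyK makes above: `uname -m` reports the machine architecture, and "x86_64" means 64-bit Intel/AMD hardware, which runs the "amd64" images.

```shell
# Print the hardware architecture; x86_64 here means the amd64 ISO is correct.
arch=$(uname -m)
echo "$arch"
```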
[12:15] <ak5> I am confused what options to use in unetbootin for my ubuntu server 12.04 amd64 iso file
[12:17] <ak5> oh nevermind seems like my download is just corrupt -.-
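A corrupt download like ak5's is exactly what checksum verification catches before you waste time in unetbootin. A sketch below on a stand-in file; for a real ISO you would compare against the MD5SUMS (or SHA256SUMS) file published next to the image on releases.ubuntu.com.

```shell
# Stand-in file; a real check hashes the downloaded ISO itself.
printf 'pretend iso contents\n' > ubuntu-12.04-server-amd64.iso
hash=$(md5sum ubuntu-12.04-server-amd64.iso | cut -d' ' -f1)
echo "$hash"    # compare this against the published checksum line
```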
[14:03] <TuxLof> I just logged in a KVM node which has a 100GB swap partition... someone hasn't been paying attention during installation  jesus lol
[14:11] <roaksoax> jamespage: free by any chance?
[14:15] <vezq> tuxloaf: is there 256GB of RAM? ;)
[14:18] <TuxLof> has 100gb of ram :p
[14:19] <TuxLof> also a 48gb udev and 48gb cgroup partition from which just a couple MB is in use
[14:47] <vezq> :)
[14:52] <fosfori> Umm.. does this sound familiar to someone? The file path and username aren't showing up next to the dollar sign when I connect to my server via SSH
[15:00] <bubu\a> fosfori, what shell does your user have on the remote machine?
[15:00] <fosfori> bash
[15:01] <bubu\a> you sure?
[15:01] <bubu\a> cat the passwd file
[15:02] <RoyK> fosfori: perhaps someone has messed up PS1
[15:03] <fosfori> bubu\a: done
[15:03] <RoyK> try
[15:03] <RoyK> PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
[15:03] <fosfori> Just got this baby up yesterday
[15:03] <bubu\a> fosfori, def bash then? :/
[15:03] <RoyK> fosfori: is this for root, or a newly created user?
[15:04] <fosfori> I'm not too familiar with the commands :3
[15:04] <fosfori> RoyK: new user
[15:04] <RoyK> then it's probably not bash
[15:04] <RoyK> http://paste.ubuntu.com/1320990/
[15:05] <RoyK> fosfori: as that user, run chsh -s /bin/bash, or run chsh -s /bin/bash <username> as root
[15:06] <bubu\a> fosfori, what shell does that user have when you cat /etc/passwd?
[15:06] <fosfori> Okay, let's see
[15:06] <RoyK> default shell is /bin/sh
[15:07] <RoyK> which is dash
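The diagnosis above in one step: the login shell is field 7 of the user's /etc/passwd entry. The entry below is made-up illustration data; the real check and fix are in the trailing comments.

```shell
# "newuser" and its passwd line are fabricated for illustration.
entry='newuser:x:1001:1001::/home/newuser:/bin/sh'
login_shell=$(printf '%s\n' "$entry" | cut -d: -f7)
echo "$login_shell"            # /bin/sh, which is dash on Ubuntu

# real check:    getent passwd newuser | cut -d: -f7
# fix (as root): chsh -s /bin/bash newuser
```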
[15:07] <fosfori> ahh
[15:07] <RoyK> no idea why that's the default, since imho dash sucks badly
[15:07] <fosfori> yeah, I looked at the wrong line
[15:07] <RoyK> bingo
[15:08] <fosfori> So how does this one help me getting that path to show up?
[15:09] <RoyK> yes
[15:09] <RoyK> as that user, just try to run bash
[15:09] <RoyK> and you'll see the difference
[15:11] <RoyK> seems it's a useradd default set in /etc/default/useradd
[15:12] <[conrad]> Does anyone know of anything more elaborate than https://wiki.ubuntu.com/EasyUbuntuClustering , and possibly more specific to a newer version such as 12.04? My situation is essentially a duplicate of the first use case. We have 4 Dell servers that were recently freed up that we'd like to use together to hopefully achieve this.
[15:14] <RoyK> [conrad]: what is your target application for this?
[15:14] <RoyK> I guess it's a compute cluster?
[15:22] <[conrad]> RoyK: What we'd like to do is get the 4 servers ( they all have 4 cores ) working together as one, and if possible, run a single Virtual Machine off of the cluster, that would have access to all 16 cores.
[15:24] <designbybeck> I have a test server set up by university IT; they don't like me because I'm too much of an open source advocate.... at any rate!  I did get a test box to play with Ubuntu Server 12.04. I've tried installing the LMS Canvas and only got so far, because I didn't know some of the params IT set up.
[15:25] <designbybeck> But I was pretty sure I installed PHP and such...shouldn't it have started by default? I did a 'ps aux | grep php' and didn't see it there
[15:31] <designbybeck> i see php and php5 in /usr/bin/
[15:33] <designbybeck> ah I had to do sudo ps aux | grep php before I saw php there
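Incidentally, a grep for a process name usually matches the grep command itself; the bracket trick (or pgrep) avoids that false positive. The ps output below is simulated for illustration.

```shell
# Simulated "ps aux" output (made up): one real php process plus the grep line.
ps_out='www-data  1234  0.2  1.1  php5-cgi
user      5678  0.0  0.1  grep [p]hp'
printf '%s\n' "$ps_out" | grep '[p]hp'     # matches only the real process
matches=$(printf '%s\n' "$ps_out" | grep -c '[p]hp')
echo "$matches"    # 1

# real checks:  ps aux | grep '[p]hp'      or simply:  pgrep -l php
```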
[15:33] <RoyK> [conrad]: you can't do that
[15:33] <designbybeck> so it looks to be running.
[15:34] <designbybeck> i can only access the domain locally
[15:34] <designbybeck> but if I try something like domainname/phpinfo.php it doesn't list anything
[15:34] <designbybeck> I do have /var/www/index.html and that does display
[15:35] <RoyK> [conrad]: compute clusters are based on queueing - you can't just combine four servers to one
[15:35] <RoyK> [conrad]: rather get a new dual 16-core opteron instead ;)
[15:36] <[conrad]> RoyK: I wish it were that easy :-). Do you have any idea on the approach that was going to be taken in the link I provided? The first use case is pretty dead on with what trouble we have.
[15:38] <RoyK> [conrad]: I know linux and clustering quite well, and AFAIK there's no possible way to run a single VM on a cluster of machines working as a single machine. It might be possible with a good infiniband network and the knowledge to set it up, but before you've installed that, you've already spent more money than that dual 16-core opteron machine would cost
[15:40] <RoyK> [conrad]: what sort of job are you planning to run that needs 16 cores?
[15:41] <designbybeck> ah good news... I got my php info to show up!
[15:42] <designbybeck> step .001 done
[15:43] <[conrad]> RoyK: We don't necessarily have to do a single VM. We're open to any solution. Our problem now is that we're constantly either under-utilizing our resources ( while they're sitting idle ), or running into situations where a single box is pinned, and other boxes in the office sit idle. We have several threaded applications that we've built and use that do a variety of things. I'm not sure any single application would efficiently use
[15:43] <[conrad]> all 16, but if it was one large VM ( I understand it can't be ), we could have 2-3 users shelled or RDP'd into the machine doing similar tasks, and maximizing the resources..
[15:44] <RoyK> then what you need is a queueing system
[15:44] <RoyK> so that jobs can be queued to the cluster and distributed to available resources
[15:45] <RoyK> that's how they build large compute clusters these days
[15:45] <bubu\a> why not build an internal cloud?
[15:46] <RoyK> [conrad]: something like http://en.wikipedia.org/wiki/SLURM
[15:46] <RoyK> bubu\a: not suitable for compute clusters
[15:47] <bubu\a> trade in your 4 servers for a SAN? :)
[15:47] <RoyK> bubu\a: not my servers, but still a bad idea - no chance you can run a compute cluster on a SAN alone ;)
[15:48] <bubu\a> sorry have missed almost all of this convo
[15:48] <RoyK> A SAN is nice to have for other uses...
[15:48] <bubu\a> what are you trying to accomplish?
[15:48] <bubu\a> and why
[15:48] <RoyK> for a compute cluster, you can easily use local storage on each node
[15:49] <[conrad]> bubu\a: See the first use case on https://wiki.ubuntu.com/EasyUbuntuClustering.
[15:49] <RoyK> bubu\a: read above - I'm merely answering [conrad], who wants to setup a compute cluster
[15:49] <RoyK> [conrad]: setting up SLURM isn't very hard, and it's well-proven - see the wikipedia article above
[15:49] <[conrad]> RoyK: This seems like it might be able to cover us. I see some interesting features which might benefit us ( e.g. idle nodes can be powered down, different operating systems can be booted for each job ).
[15:50] <RoyK> SLURM is a linux thing
[15:50] <[conrad]> The upcoming feature "Integration with Apache Hadoop + Open MPI based job launch" could also be very beneficial to us.
[15:50] <RoyK> but you can possibly hack up something integrating with virsh
[15:50] <[conrad]> Ahh, yes all the OS's listed are *nix variants.
[15:51] <RoyK> so what you do is create an executable with some data and post that to the queue, then it'll run when resources are available
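RoyK's "create an executable and post that to the queue" looks roughly like the sketch below: a minimal SLURM batch script, submitted with `sbatch job.sh`. All directive values and the job body are illustrative assumptions, not from the conversation.

```shell
#!/bin/bash
#SBATCH --job-name=example    # illustrative; name shown in the queue
#SBATCH --ntasks=4            # request 4 cores from the scheduler
#SBATCH --time=01:00:00       # wall-clock limit for the job
# The script body is the job itself; "uname -n" stands in here for a real
# compute binary (on a real cluster, e.g. srun ./my_compute_job input.dat).
uname -n
```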
[15:51] <bubu\a> hmm
[15:51] <bubu\a> tis a good question!
[15:52] <RoyK> I've never set it up myself, I quit my last job when we were in the process of moving to that
[15:52] <[conrad]> The only issue I see is that this would have to be a console-based executable, right?
[15:52] <RoyK> yes
[15:53] <RoyK> supercomputers don't run a GUI ;)
[15:53] <RoyK> or, i guess some do, but that's another ballpark
[15:55] <[conrad]> Right, I understand. But this would only solve half of our problem. Some of our utilities are GUI based, and others ( both console and GUI ) are Windows based.
[15:56] <RoyK> then I guess you'll need two solutions, one for windows and one for linux
[15:56] <bubu\a> RoyK, could you not setup a big VM box on a SAN? people can then TS into it to run whatever needed...
[15:57] <RoyK> I don't know any queueing solutions for windows, but IIRC Microsoft has something
[15:57] <RoyK> bubu\a: no, a SAN is only storage
[15:57] <bubu\a> well i know that but with the use of the servers too
[15:57] <RoyK> well, of course, you can run a hypervisor on a machine connected to the SAN
[15:58] <RoyK> but SAN is only storage
[15:58] <bubu\a> yes
[15:58] <bubu\a> but that could be the storage for the big VM
[15:58] <bubu\a> and the rest can run off the cores/ram from the servers
[15:58] <RoyK> so in this perspective, the SAN is irrelevant
[15:59] <RoyK> think of a SAN as an external harddrive
[15:59] <RoyK> just bigger and more fancy
[15:59] <bubu\a> yes i know
[16:00] <bubu\a> get proper servers conrad?
[16:00] <bubu\a> a G6 or something?
[16:00] <bubu\a> fill it full of cores/ram - hook it up to a san for the storage
[16:00]  * RoyK thinks G6 is only a model series from HP and rather irrelevant to computing power
[16:00] <bubu\a> create a big VM?
[16:01] <RoyK> bubu\a: really, there's no point of running a big VM on a single node unless you have a cluster of nodes (for failover)
[16:02] <bubu\a> hmm yeah fair point
[16:03] <RoyK> [conrad]: but... if you just need up-to-4-core-VMs
[16:03] <RoyK> take a look at https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial
[16:04] <RoyK> it's a rather thorough description on setting up KVM in a cluster
[16:04] <RoyK> the howto uses DRBD for shared storage, but I don't think that'll be very good for four nodes
[16:05] <RoyK> better use shared storage somewhere, NFS works, but is a bit slow, glusterfs should work, if you have a SAN, then OCFS2 or GFS2 should do the job
[16:06] <RoyK> [conrad]: it takes a while to understand how the cluster works, so don't think a simple apt-get install will do
[16:06] <[conrad]> I'm sure of that. I've spent countless hours over time getting familiar with LDAP and the other services we have implemented in our network currently.
[16:08] <RoyK> [conrad]: I've setup a two-node cluster like that (with GFS2) on CentOS - tried with Ubuntu first, but there wasn't much knowledge about it in the ubuntu world, at least not what I could find. Also, the amount of users on such systems seem to be far higher in the redhat world
[16:10] <RoyK> [conrad]: btw, https://wiki.ubuntu.com/EasyUbuntuClustering seems to be rather outdated, based on Hardy 8.04
[16:11] <[conrad]> RoyK: I saw that. That's why I mentioned in my original message about something more related to the current 12.04
[16:12] <RoyK> [conrad]: I guess that KVM cluster I linked to above would be the best start. You can implement that with ubuntu - most of the same tools exist
[16:13] <RoyK> except sanlock, which is a bit stupid, since without it, and with shared storage (like GFS2 or NFS), there's no way of stopping two nodes from starting the same VM, and doing so will definitely result in disk corruption (believe me, I've tried...)
[16:16] <niel> Can anyone help me, yum isn't working anymore after installing EPEL.. whatever I do I get this error: http://pastebin.com/qzBeLNXL
[16:16] <RoyK> [conrad]: if you need a mixed environment, I'd say setting up a KVM cluster will be a good start. Then you can possibly setup a few Linux VMs to do the batchable work and then a windows VM or two to do the rest
[16:16] <RoyK> niel: erm - yum/EPEL is a redhat thing, and this is an Ubuntu channel ;)
[16:17] <niel> oh yeah lol
[16:38] <RoyK> [conrad]: you might want to read http://www.microsoft.com/hpc/en/us/product/high-performance-computing-faqs.aspx
[16:40] <RoyK> [conrad]: you can of course setup hyper-v to do the virtualization, but my experience in running Linux VMs on Hyper-V is not good - they tend to be offlined from the network if network traffic is high. I've seen that with ubuntu lucid - never made a test with precise, we moved the linux VMs to KVM before Precise landed
[17:35] <moskydos> Hi Folks, I would appreciate any help with my problem since I spent so much time to solve it . I got a snapshot include files to be deployed , now I did the following steps to achieve that :
[17:35] <moskydos> Creating small ec2 Ubuntu Server 11.10 ( 64 bit)
[17:36] <moskydos> Create new volume with snapshot needed ( I assigned 6 GB for it)
[17:36] <moskydos> Attached the volume to my ec2 instance
[17:36] <moskydos> Created a new directory to hold the data ( /site )
[17:36] <moskydos> Edit /etc/fstab and add this : /dev/xvdf        /site    auto    defaults,nobootwait,noatime     0   0 and reboot my instance
[17:37] <moskydos> but when I try to mount using sudo mount /dev/xvdf /site , i'm getting that i need to specify the format of the file
[17:38] <moskydos> if it were an ext filesystem, the ordinary linux one, it would have auto-detected it
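Before giving mount an explicit type, the standard tools can identify what is actually on the device. The commands and the fstab line below are a sketch against the /dev/xvdf from this conversation, and the ext3 type is only an assumption standing in for whatever the tools report.

```
# Identify the filesystem on the attached volume (run with sudo):
#   file -s /dev/xvdf
#   blkid /dev/xvdf
# Then mount with the reported type, e.g. if it turns out to be ext3:
#   mount -t ext3 /dev/xvdf /site
# and the /etc/fstab line becomes (ext3 is an assumption here):
/dev/xvdf  /site  ext3  defaults,nobootwait,noatime  0  0
```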
[17:38] <holstein> the format of the partition*
[17:38] <moskydos> anyone can help with this issue please ?
[17:38] <holstein> !mount
[17:38] <holstein> moskydos: i would test that i can mount the directory before adding it to fstab
[17:39] <moskydos> u r right holstein , but i'm doing that and it is not working so far , let me try to show you some results here
[17:39] <holstein> moskydos: i would take fstab out of the equation
[17:39] <holstein> moskydos: if you cant mount it, fstab cant
[17:40] <moskydos> u r right , let's forget about fstab at this stage , my main problem is that i can't mount the snapshot
[17:40] <holstein> moskydos: take some time and troubleshoot just mounting the partition, then when you can do that, you'll learn how to add it to fstab
[17:40] <holstein> moskydos: how is it formatted?
[17:41] <moskydos> https://gist.github.com/8b7543922893f0771a8b
[17:41] <holstein> Disk /dev/xvda1 doesn't contain a valid partition table - that is pretty relevant
[17:41] <moskydos> i'm just trying to figure out the format so I can do something like sudo mount -t vfat  /dev/xvdf /site or whatever is the format
[17:41] <holstein> where are you getting this partition?
[17:42] <holstein> if its broken, its broken...
[17:42] <holstein> HPFS/NTFS/exFAT
[17:42] <moskydos> mm I didn't setup /dev/xvda1 , it is there already when I initiate new ubuntu instance
[17:43] <moskydos> the volume that I want to attach is at xvdf
[17:43] <holstein> moskydos: i would ask whoever made it if they can mount it, or to test it, or to see if it is functioning properly
[17:43] <holstein> moskydos: i hear you, but if its bad, its bad..
[17:44] <holstein> http://www.linuxquestions.org/questions/linux-laptop-and-netbook-25/problems-mounting-disk-in-ubuntu-11-10-ntfs-signature-missing-917973/ might be relevant
[17:44] <moskydos> mmm u recommend me to use another ec2 type somehow ?
[17:44] <holstein> moskydos: im just suggesting that you dont know anything about that partition.. you didnt create it. it could be the issue and you could waste lots of time trying to mount it and its broken
[17:45] <holstein> im not saying that is the case, just that its an option
[18:07] <eagles0513875__> hey guys im not finding a good how to for ubuntu 12.04 on how to setup virtual users and domains for use with dovecot + postfix any good how to's for this?
[18:08] <moskydos> holstein sorry for the headache here , I just wanna go through this step by step, I just created a new ec2 instance , I didn't do anything with it , I didn't even attach the snapshot , can u see the conf so far here https://gist.github.com/735b4adb8425df3fa920
[18:08] <moskydos> I just wanna make sure that at this stage nothing broken u know
[18:08] <moskydos> before I go further
[18:35] <vezq> eagles: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/
[18:57] <eagles0513875__> humm
[19:24] <gatsby7> hi there
[19:25] <gatsby7> is there any postifix or mail server guru ?
[19:26] <alvarezp> Hi, guys! I just installed Ubuntu 12.04 LTS with Apache2 but I must be doing something wrong. The "it works" page takes about 15 sec to respond, then it responds quickly but after a while it does the same thing again. I suspect DNS problems. I enabled "options debug" in resolv.conf and apache and *.debug in syslog to /var/log/debug but I have absolutely no output.
[19:27] <gatsby7> suppose is not a local installation, right?
[19:27] <gatsby7> does error log say something?
[19:27] <alvarezp> I also did strace -r -p $APACHE_PID for all apache processes (I managed to reduce the clients to only 3) and the problematic function seems to be accept().
[19:28] <alvarezp> gatsby7, what do you mean by local? It's a physical box that I installed myself and I have full control over.
[19:28] <alvarezp> gatsby7, error logs are empty.
[19:28] <alvarezp> gatsby7, I may be able to help with postfix.
[19:29] <alvarezp> While the web browser is waiting for the response, strace keeps on the accept() function. I think it has something to do with the resolver library.
[19:30] <alvarezp> I also fully upgraded it, made sure nsswitch.conf has "hosts: files dns" and disabled ipv6, just to put it out of the way.
[19:31] <alvarezp> Any ideas on this weird behavior, or pointers on how can I debug this? :-S
[19:31] <gatsby7> alvarezp, did you make any changes to apache2.conf?
[19:33] <gatsby7> btw i will explain my postfix problems
[19:34] <gatsby7> when mail comes from mobile devices, or from mailing lists, the sender address is shown as sendername+SOME_WEIRD_CODE@domain.tld
[19:34] <alvarezp> gatsby7, I installed cacti and nagios which adds their respective files to conf.d (but I don't find any problems there). Then I changed StartServers and Min/MaxSpareThreads to reduce the number of processes to be able to use strace easily and changed the logging level to "Debug" on apache2/apache.conf and sites-available/default.
[19:36] <gatsby7> alvarezp, do you reach nagios properly?
[19:36] <alvarezp> gatsby7, Yes: Nagios, Cacti and the "it works" page, but all of them with the same problem: 15-sec delay on the first try and immediately after recurring tries. After a while, the pattern repeats.
[19:37] <alvarezp> gatsby7, sendername+SOME_WEIRD_CODE@domain.tld is a valid mail format and it's the same recipient as sendername@domain.tld. Are you sure it is an error?
[19:38] <gatsby7> alvarezp, unfortunately 20 days ago we didn't have this problem
[19:38] <alvarezp> gatsby7, never seen that behavior
[19:38] <alvarezp> gatsby7, I have mail servers of my own.
[19:39] <alvarezp> gatsby7, if you have configuration backups, have you tried diff-ing?
[19:39] <gatsby7> think i have 2 tell more details
[19:39] <gatsby7> :
[19:39] <gatsby7> 20 days ago we had one domain, then we decided
[19:40] <gatsby7> to have one more domain, using the same mail infrastructure
[19:40] <gatsby7> so we used virtual domain in postfix and forced with a script
[19:41] <gatsby7> that all user in the old domain will send mail with the new domain.
[19:42] <gatsby7> everything is working fine except for mailing list services and mobile devices
[19:43] <gatsby7> the strange fact is we force our user sender address, but this behaviour occurs when others mail us...
[19:44] <alvarezp> gatsby7, ohh, that's a perfect use case for configuration diffing
[19:46] <alvarezp> gatsby7, it'd be useful to know what directives changed, and see which ones are potential candidates for that behavior.
[19:48] <alvarezp> gatsby7, have you enabled debugging logs and checked?
[19:54] <Troy^> has anyone upgraded their server from 12.04 to 12.10?
[19:56] <alvarezp> Also, HostnameLookups Off
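For context on the HostnameLookups suggestion: with it enabled, Apache does a reverse-DNS lookup on every request, and a slow or broken resolver can stall responses for the full lookup timeout, which matches the multi-second delays described above. A config fragment sketch:

```
# apache2.conf fragment: skip the per-request reverse-DNS lookup
# (log raw client IPs instead of hostnames).
HostnameLookups Off
```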
[19:59] <ehab33> Hi everyone,  how can I bundle an instance into a new AMI and share the AMI with some account IDs? any up-to-date articles demonstrating this?
[20:35] <alvarezp> Ohh god!!!!!! I found that the slow responses are not my server's fault: it's my web cache fault. :( Sorry all for the noise.
[20:49] <allanm> Hi
[20:49] <allanm> I'm looking for a way to upgrade apache to 2.2.22 on ubuntu server 10.04
[20:54] <patdk-wk> allanm, reason?
[20:55] <patdk-wk> cause you will break all security patchs doing that
[21:00] <allanm> well I'm actually trying to get a bunch of vulnerabilities patched
[21:00] <allanm> CVE-2011-3192 CVE-2010-2068 CVE-2010-1623 CVE-2010-1452
[21:00] <allanm> CVE-2012-0031 CVE-2011-3607 CVE-2012-0053 CVE-2011-3348 CVE-2011-1928 CVE-2011-0419
[21:00] <allanm> CVE-2010-0434 CVE-2011-4415
[21:01] <allanm> sorry about that
[21:03] <patdk-wk> heh? those are patched
[21:03] <allanm> I'm currently running
[21:03] <patdk-wk> I would recommend you check ubuntu cve, or maybe even the apache package
[21:03] <allanm> 2.2.14
[21:03] <patdk-wk> version numbers != vulnerable
[21:04] <allanm> lol the pci scan is marking me as if it was
[21:04] <patdk-wk> ya, it's a pci scan
[21:04] <patdk-wk> you think it actually checks if you're vulnerable or not?
[21:04] <patdk-wk> read up on how to do pci scans :)
[21:04] <allanm> :)
[21:04] <patdk-wk> run scan, it checks server version, says you have issues
[21:05] <patdk-wk> you supply link that shows issue was corrected, done
[21:06] <allanm> sir you are 100% right
[21:06] <patdk-wk> https://launchpad.net/ubuntu/lucid/+source/apache2/+changelog
[21:06] <patdk-wk> check those cve's off that list :)
[21:07] <patdk-wk> attempting to pass pci scans via software versions leaves you having to manually patch or upgrade stuff yourself
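The check patdk-wk describes can be sketched as below: Ubuntu lists back-ported CVE fixes in the package changelog even though the upstream version number stays at 2.2.14, and that entry is the evidence to hand to a PCI assessor. The changelog excerpt here is made up for illustration; on a real lucid box you would run something like `zgrep CVE-2011-3192 /usr/share/doc/apache2.2-common/changelog.Debian.gz`.

```shell
# Fabricated changelog excerpt standing in for the real file:
changelog='apache2 (2.2.14-5ubuntu8.10) lucid-security; urgency=low
  * SECURITY UPDATE: denial of service via overlapping byterange requests
    - CVE-2011-3192'
found=$(printf '%s\n' "$changelog" | grep -c 'CVE-2011-3192')
echo "$found"   # 1 -> the fix is recorded despite the old version number
```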
[21:09] <patdk-wk> I've had a lot of people email me, about apache 2.4 for ubuntu, so they could pass pci scans :)
[21:13] <allanm> :)
[21:14] <allanm> patdk-wk, you are amazing. now I have to figure out how to deal with the company that does the PCI scan
[23:27] <zastern> I'm having a really weird issue
[23:27] <zastern> apache becomes super slow to respond
[23:27] <zastern> if i do ufw enable
[23:28] <zastern> even with 80 and 443 allowed
[23:42] <Aison> what's that? SourceGuardian requires Zend Engine API version 220090626.
[23:42] <Aison> The Zend Engine API version 220100525 which is installed, is newer.
[23:43] <Aison> this started after upgrade to 12.10