[08:47] <zkvvoob> Hello everyone. I'm looking for some extremely kind soul who would be willing to help me figure out a mess that I got into after upgrading from 12.10 to 13.10 and then 14.04.2. No websites load, I can't get the ISPConfig that had been managing the vhosts to run, and besides Apache is giving me some strange errors. Please?
[08:48] <Sling> zkvvoob: what errors exactly?
[08:48] <Sling> (i know a lot about apache, nearly nothing about ispconfig)
[08:48] <zkvvoob> Sling:
[08:48] <zkvvoob> * Restarting web server apache2
[08:48] <zkvvoob> AH00548: NameVirtualHost has no effect and will be removed in the next release /etc/apache2/conf-enabled/000-ispconfig.local.conf:62
[08:48] <zkvvoob> (98)Address already in use: AH00072: make_sock: could not bind to address [::]:8081
[08:48] <zkvvoob> (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8080
[08:48] <zkvvoob> no listening sockets available, shutting down
[08:48] <zkvvoob> AH00015: Unable to open logs
[08:48] <zkvvoob> Action 'start' failed.
[08:48] <zkvvoob> The Apache error log may have more information.
[08:49] <Sling> please use a pastebin for pasting more than a few lines
[08:49] <zkvvoob> sorry
[08:49] <Sling> but it seems that its trying to listen on ports 8080 and 8081 but there is already something listening there
[08:49] <Sling> see 'lsof -i:8080' and 'lsof -i:8081' to check what is
[08:50] <zkvvoob> Nothing happens, just new command line
[08:50] <Sling> are you running this as root?
[08:50] <Sling> or with sudo
[08:50] <zkvvoob> sudo su, root
[08:51] <OpenTokix> Sling: netstat -anp |grep :8080
[08:51] <Sling> OpenTokix: lsof is a lot easier ;)
[08:51] <zkvvoob> OpenTokix: that did nothing either :(
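The two checks suggested above can be sketched as a small script (port 8080 as in the log; on a real box run it as root, since other users' sockets are hidden from `lsof`/`netstat -p`):

```shell
#!/bin/sh
# Hedged sketch of the port-in-use checks discussed above. Any of these tools
# may be missing on a given system; "|| true" keeps the script going when a
# tool is absent or simply finds nothing listening.
port=8080
lsof -i:"$port" || true                            # Sling's suggestion
netstat -anp 2>/dev/null | grep ":$port " || true  # OpenTokix's variant (net-tools)
ss -ltnp 2>/dev/null | grep ":$port " || true      # modern iproute2 equivalent
echo "port check done"
```

An empty result from all three (as zkvvoob saw) means nothing currently holds the port, which points at Apache trying to bind the same address twice itself.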
[08:53] <OpenTokix> zkvvoob: if you do a grep :8080 /etc/apache2/sites-enabled/*   Maybe you are trying to bind multiple times to some ip or such? And apache will kill itself.
[08:55] <zkvvoob> OpenTokix: Maybe it's this, I got two lines that go "/etc/apache2/sites-enabled/000-ispconfig.vhost:NameVirtualHost *:8080
[08:55] <zkvvoob> and then /etc/apache2/sites-enabled/000-ispconfig.vhost:<VirtualHost _default_:8080>
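That grep can be reproduced against a throwaway directory standing in for /etc/apache2/sites-enabled (file contents are illustrative, mirroring what zkvvoob pasted):

```shell
#!/bin/sh
# Hedged sketch of the duplicate-binding grep, run on a scratch config tree
# so it is safe to execute anywhere.
d=$(mktemp -d)
printf 'NameVirtualHost *:8080\n<VirtualHost _default_:8080>\n' > "$d/000-ispconfig.vhost"
printf '<VirtualHost *:80>\n' > "$d/000-default"
grep -H ':8080' "$d"/*   # -H prefixes each hit with its filename, as in the log
rm -rf "$d"
```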
[08:55] <OpenTokix> bleu.... ispconfig
[08:55] <OpenTokix> That braindead howto infecting everything
[08:55] <OpenTokix> ok
[08:56] <OpenTokix> zkvvoob: That is probably it, yes - remove the _default_ one
[08:56] <Sling> uuh
[08:56] <Sling> no
[08:56] <Sling> you're looking for Listen directives
[08:56] <zkvvoob> default.vhost or one of the lines?
[08:56] <Sling> that is what tells apache on which interface:port to listen, you can have multiple vhosts defined for the same port/ip
[08:56] <OpenTokix> zkvvoob: Can you pastebin the 000-ispconfig.vhost and /etc/apache2/ports.conf
[08:56] <Sling> the vhost won't cause that error
[08:57] <zkvvoob> OpenTokix: just a minute
[08:57] <OpenTokix> Sling: You can not bind to the same ip:port muliple times, but you can bind to *:port multiple times.
[08:57] <Sling> OpenTokix: the vhost doesnt bind
[08:57] <Sling> that is what the Listen directive does
[08:58] <Sling> and the _default_:443 vhost is a special case for non-SNI stuff
[08:58] <zkvvoob> OpenTokix: http://pastebin.com/EWYRjSgR
[08:58] <Sling> it won't cause the errors he just mentioned, since it has nothing to do with 8080/8081 :)
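Sling's distinction can be sketched as a minimal config fragment (hypothetical values; on Ubuntu the Listen lines normally live in /etc/apache2/ports.conf and the vhosts in sites-enabled):

```apache
# Listen is what actually binds the socket; two Listen lines for the same
# ip:port are what produce "(98) Address already in use".
Listen 8080

# Any number of vhosts can then share that port without binding anything:
<VirtualHost *:8080>
    ServerName example.com
</VirtualHost>
<VirtualHost _default_:8080>
    # catch-all for requests on this port that match no other vhost
</VirtualHost>
```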
[08:59] <OpenTokix> zkvvoob: Do you have a Listen 8080 in your /etc/apache2/ports.conf too?
[08:59] <Sling> wow that is a weird config
[08:59] <zkvvoob> OpenTokix: No
[09:00] <Sling> so if you load mod_fcgid you get one documentroot, and if you load the itk mpm you get another
[09:00] <OpenTokix> zkvvoob: try grep -r -i  Listen /etc/apache2/*
[09:01] <zkvvoob> OpenTokix: http://pastebin.com/0swhqd0P
[09:04] <OpenTokix> zkvvoob: I would try to change the _default_:8080 to *:8080
[09:05] <zkvvoob> OpenTokix: where should I do that?
[09:05] <OpenTokix> zkvvoob: ispconfig.vhost
[09:05] <zkvvoob> OpenTokix: got it, just a moment
[09:07] <zkvvoob> OpenTokix: restarted apache2, but got the same message "AH00072: make_sock: could not bind to address [::]:8081;  AH00072: make_sock: could not bind to address 0.0.0.0:8080"
[09:08] <OpenTokix> interesting - one complains about ipv6, the other about ipv4
[09:12] <zkvvoob> OpenTokix: I honestly don't get it, why an upgrade would screw things so badly! Everything was running just perfectly up until 15 hrs ago
[09:12] <OpenTokix> zkvvoob: Nature of sysadmin - stuff breaks and one has to fix it.
[09:13] <zkvvoob> Trouble is I am only a sysadmin wannabe :(
[09:13] <OpenTokix> zkvvoob: you always use service apache2 restart; not service apache2 stop && service apache2 start ?
[09:13] <OpenTokix> zkvvoob: Since restart and stop/start it very different for apache2
[09:14] <zkvvoob> OpenTokix: yes, service apache2 restart
[09:21] <lordievader> Good morning.
[09:26] <zkvvoob> OpenTokix: are you out of ideas?
[09:27] <OpenTokix> zkvvoob: I was waiting for you to do a stop/start
[09:28] <zkvvoob> apologies
[09:28] <zkvvoob> did that, same thing
[09:29] <OpenTokix> zkvvoob: and ps ax |grep apache show nothing?
[09:29] <zkvvoob> OpenTokix:  8858 pts/0    S+     0:00 grep --color=auto apache
[09:29] <OpenTokix> zkvvoob: ofc. Annoying
[09:53] <zkvvoob> OpenTokix: any other ideas? Sorry, if you're fed up with my troubles, I just really don't know who to turn to
[09:53] <OpenTokix> zkvvoob: I love troubleshooting, best part of the job.
[09:54] <OpenTokix> zkvvoob: did you try replace all _default_ with * ?
[09:54] <OpenTokix> zkvvoob: in apps.vhost and ispconfig.vhost
[09:55] <zkvvoob> OpenTokix: only in ispconfig.vhost, will do the apps now
[09:56] <OpenTokix> zkvvoob: iirc the error changed slightly when you removed it in ispconfig.vhost
[09:58] <zkvvoob> OpenTokix: Changed it in apps.vhost as well, did service apache2 stop && service apache2 start, but the error remains the same: (98) Address already in use: AH00072: make_sock: could not bind to address [::]:8081; (98) Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8080
[10:00] <OpenTokix> zkvvoob: There are only two files in /etc/apache2/sites-enabled ?
[10:01] <zkvvoob> OpenTokix: no, there are: 000-apps.vhost, 000-aps.vhost, 000-default, 000-ispconfig.conf and 000-ispconfig.vhost
[10:02] <OpenTokix> ok
[10:02] <OpenTokix> what is the difference between 000-ispconfig .conf and .vhost
[10:02] <OpenTokix> zkvvoob: Can you pastebin them all? - Make sure you do ----------- NAME ------- between them
[10:03] <zkvvoob> OpenTokix: http://pastebin.com/GK4PQcmX
[10:03] <zkvvoob> OpenTokix: http://pastebin.com/EWYRjSgR
[10:04] <OpenTokix> zkvvoob: Can you do grep -i include /etc/apache2/apache2.conf
[10:05] <zkvvoob> OpenTokix: http://pastebin.com/kPQdsyNM
[10:06] <OpenTokix> zkvvoob: are there any files in /etc/apache2/conf-enabled ?
[10:07] <zkvvoob> OpenTokix: 000-ispconfig.local.conf, apache2-doc.conf, charset.conf, javascript-common.conf, localized-error-pages.conf, other-vhosts-access-log.conf, security.conf, serve-cgi-bin.conf
[10:08] <OpenTokix> zkvvoob: grep -i listen /etc/apache2/conf-enabled/*
[10:09] <zkvvoob> OpenTokix: nothing, empty line
[10:11] <d4rks1d3r> hi all, noob question, I'm trying to install ZNC as a daemon, where do I need to create the systemd script?
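d4rks1d3r's question went unanswered in channel; for the record, a systemd unit for ZNC would usually live in /etc/systemd/system/znc.service, along these lines (a sketch: the znc user and binary path are assumptions, and note that Ubuntu 14.04 itself still uses Upstart, so this applies to systemd-based releases):

```ini
[Unit]
Description=ZNC IRC bouncer
After=network-online.target

[Service]
User=znc
ExecStart=/usr/bin/znc --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file: `systemctl daemon-reload`, then `systemctl enable znc` and `systemctl start znc`.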
[10:42] <ReScO> I'm in a pinch, somehow HSTS got enabled on my Apache2 installation and now i cannot access non-https resources.
[10:53] <rbasak> ReScO: http://stackoverflow.com/q/10629397/478206
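For context on ReScO's problem: HSTS is normally switched on by a single mod_headers directive somewhere under /etc/apache2 (values here are hypothetical). Removing the line stops the header being sent, but browsers that have already seen it keep forcing HTTPS until max-age expires or their HSTS cache is cleared:

```apache
# Typical shape of the directive to search for and remove:
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```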
[17:45] <dmsimard> Hi. Anyone know why 14.04.2 isn't shipped yet for cloud images ? https://cloud-images.ubuntu.com/trusty/current/
[17:48] <rbasak> Odd_Bloke: ^^
[17:48] <rbasak> dmsimard: I suspect that the images are effectively 14.04.2 anyway. Apart from maybe the kernel.
[17:48] <sarnold> probably the <h1> is broken..
[17:49] <dmsimard> rbasak: 3.16 doesn't seem to be in them indeed, e.g: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64.manifest
[17:49] <rbasak> That might be too big a change to be worth making.
[17:50] <sarnold> .. and unlikely to make much difference, since it is mostly there for hardware enablement, not as big a deal with VMs
[17:51] <dmsimard> The specific use case is usage with Ceph (rbd kernel module), which for certain features requires kernel >3.13
[17:51] <dmsimard> I have no particular problems with building my own images based off of that (aka, install the utopic lts package on top of the existing image and re-package) but I would've expected that to be in the cloud image built-in
[17:52] <rbasak> I suppose the difference is that users might not expect to suddenly end up with a different kernel version if they use the current image.
[17:52] <dmsimard> So it's going to stay at 3.13 until 16.04 then ?
[17:52] <rbasak> So there also exist users who expect the original kernel.
[17:53] <Odd_Bloke> dmsimard: rbasak is correct, the daily images always track the latest packages and so are 14.04.2 sans the HWE bits.
[17:53] <Odd_Bloke> And, in fact, he is correct again. :p
[17:53] <dmsimard> I understand that, no problem here
[17:53] <dmsimard> Just want to adjust my expectations :)
[17:54] <utlemming> Odd_Bloke: looks like freenode let me join :)
[17:54] <Odd_Bloke> utlemming: o/
[17:54] <rbasak> I suppose what you need is a separate stream of images that include the HWE kernel. But that would be a whole load more for Canonical to maintain and support I guess, and more than we do right now.
[17:54] <rbasak> OTOH, you can probably arrange some cloud-init userdata magic to upgrade the kernel and reboot on first boot.
[17:54] <rbasak> It would just mean slightly slower "boot".
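rbasak's upgrade-on-first-boot idea could look roughly like this as cloud-init user-data (a sketch; `linux-generic-lts-utopic` is assumed to be the trusty HWE kernel metapackage wanted here):

```yaml
#cloud-config
# Install the HWE kernel on first boot, then reboot into it.
package_update: true
packages:
  - linux-generic-lts-utopic
power_state:
  mode: reboot
  message: rebooting into the HWE kernel
```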
[17:54] <utlemming> Odd_Bloke, rbasak: I understand that there is some question about the HWE kernel?
[17:55] <utlemming> Odd_Bloke, rbasak: are these in re:  to the downloadable images or EC2?
[17:55] <Odd_Bloke> dmsimard: ^
[17:55] <dmsimard> rbasak: I'd tend to push an image that is already up to date and use that image :)
[17:55] <dmsimard> Well, I mean, on my end
[17:55] <utlemming> On GCE, Azure, VMWare's vCHS and a few other clouds the default is to use the HWE kernel at the behest of the cloud or because of virtual hardware compatibility.
[17:55] <dmsimard> Download cloud image -> update it -> push it -> use it
[17:56] <utlemming> dmsimard: the reason why we don't default is that it was nacked at a v-UDS.
[17:57] <rbasak> utlemming: so dmsimard's need comes from Ceph, which makes use of newer kernel features. Firmly in userspace, AIUI.
[17:57] <utlemming> dmsimard: and we did discuss the idea of having an HWE image download, but that was killed because of the maintenance burden, and user confusion. Not to mention that the HWE kernel can royally screw with DKMS packages (i.e. Virtualbox and open-vm-tools, not to mention the filesystem modules)
[17:57] <dmsimard> Fair enough
[17:57] <rbasak> But save for bumping all users to the HWE kernel for that, or maintaining separate HWE images, I don't see any other solution really, except for an upgrade-on-first-boot thing.
[17:58] <dmsimard> FWIW, here's the Ceph recommendations off of the docs: http://ceph.com/docs/master/start/os-recommendations/
[17:59] <dmsimard> And features that might not work depending on Kernel versions: http://cephnotes.ksperis.com/blog/2014/01/21/feature-set-mismatch-error-on-ceph-kernel-client
[17:59] <utlemming> dmsimard: believe me, I really would like to make the switch. But the decision is made on a cloud-by-cloud basis. We only use the HWE kernel by default on clouds where the pain of not having the HWE kernel is worse than the pain of breaking things like DKMS.
[17:59] <dmsimard> utlemming: I hear ya, not trying to make this happen
[18:00] <mgagne> utlemming: where are those cloud-by-cloud image located? aren't they the same as the ones publicly available already?
[18:00] <dmsimard> Just saying this wasn't about HWE, more about fixes/improvements made in later releases
[18:01] <utlemming> dmsimard: they are not published generally since they are published into the clouds. GCE is probably the best one if you want in-cloud. Their NVMe SSDs are quite performant
[18:01] <rbasak> dmsimard: there is of course the claim that LTS release + new features = more recent release :-)
[18:01] <mgagne> utlemming: who is maintaining those images?
[18:01] <utlemming> dmsimard: fixes are why we just switched Azure 12.04 over to HWE yesterday, incidentally.
[18:02] <utlemming> mgagne: The Canonical Certified Public Cloud Team (me, Odd_Bloke and rcj, with our fearless manager gaughen).
[18:03] <mgagne> utlemming: pardon my hasty conclusion but it looks like someone is already maintaining different versions of those images and adding one more isn't that much of an issue. except for the potential public confusion it could cause
[18:05] <rbasak> mgagne: I don't think you quite appreciate the sheer number of images we're talking about, and the QA and maintenance work involved. Adding HWE is adding a whole dimension. There are two or three (?) HWE kernels each LTS, so you'd be multiplying the whole set by that number.
[18:05] <utlemming> mgagne: I wouldn't call your conclusion hasty. It goes to the downloadable images. The GCE, Azure, et al, images that are published are not downloadable. You run them in cloud. And yes, I agree that there is a degree of confusion that is introduced here.
[18:06] <utlemming> mgagne: and to rbasak's argument, you should see the bills for testing.
[18:07] <mgagne> utlemming: sure, I'm a pragmatic person, I'm not trying to corner people into doing things that don't make sense from a financial, time and effort perspective
[18:07] <utlemming> mgagne: but the choice of HWE in some environments (i.e. Azure, GCE, VMware) is dictated by either the cloud or a technical requirement that makes choosing otherwise problematic.
[18:08] <utlemming> mgagne: ack. fwiw, I really do appreciate your concern. And that's why I want to re-raise this for 16.04.
[18:08] <utlemming> mgagne: I think that the HWE kernel has proven to offer far more benefits than problems, especially for bug fixes, and new features that make older versions of Ubuntu more usable.
[18:08] <mgagne> utlemming: but since it was mentioned that some cloud providers have "special" images built from them, I guess it's fair to wonder why it's ok for them but not the "public" ones.
[18:09] <mgagne> utlemming: if it comes down to finance and time resources reasons, I can't argue with that. those are fair reasons
[18:09] <utlemming> mgagne: the downloadable images are the "ideal", where the in-cloud published images are pragmatic compromise.
[18:10] <mgagne> utlemming: right
[18:10] <Odd_Bloke> mgagne: Furthermore, in places that we support HWE, we only support HWE; there is still only one 14.04 image.
[18:11] <mgagne> utlemming: let's say a provider wishes to have images built and maintained by Ubuntu, would this be something provided by the "Canonical Certified Public Cloud Program" ?
[18:12] <mgagne> Odd_Bloke: right, so you get the one with HWE, nothing else: no image without HWE. right?
[18:13] <Odd_Bloke> mgagne: Yep; anything else would be (a) confusing for users ("wait, which of these is actually 14.04?"), and (b) too resource-intensive (both human and technical).
[18:14] <mgagne> Odd_Bloke: makes sense
[18:22] <rostam> Hi I am using ubuntu 14.04. I would like to disable console messages during boot. This is for production. How could I do that? thanks
[18:35] <sarnold> rostam: try console=/dev/null on kernel command line? just a guess...
[18:37] <Sling> rostam: why would you want to disable that?
[18:37] <Sling> keep in mind that anybody with physical access should be considered to have full access
[19:06] <rostam> Sling: at the customer site the console messages look like something is wrong, and they do not understand them.
[19:06] <rostam> sarnold, thanks will try that.
[19:07] <sarnold> of course when something does break you might regret turning them off :)
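Beyond sarnold's guess, the usual Ubuntu way to quiet boot output is adding `quiet` (and optionally `loglevel=3`) to `GRUB_CMDLINE_LINUX_DEFAULT` in /etc/default/grub and then running `update-grub`. A sketch, demonstrated on a scratch copy so nothing real changes:

```shell
#!/bin/sh
# Hedged sketch: edit a stand-in for /etc/default/grub in place with sed,
# appending the kernel parameters that suppress console messages.
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="splash"' > "$cfg"   # stand-in for the real file
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 quiet loglevel=3"/' "$cfg"
cat "$cfg"   # GRUB_CMDLINE_LINUX_DEFAULT="splash quiet loglevel=3"
# on the real file you would finish with: update-grub
rm -f "$cfg"
```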
[19:16] <londoncalling> hi, want to get 3 vm's running on a digialocean droplet (1GB RAM, 20GB disk), how would I do this purely from command line. I don't want to have to use VNC.
[19:18] <sarnold> londoncalling: you can use libvirt via command line using virsh; or you can start your vms directly with qemu and command line options...
[19:22] <londoncalling> sarnold, thanks a lot. Ill whip out the google-fu
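sarnold's two options can be sketched as follows. Names, sizes and the install URL are placeholders, and the commands are only printed here rather than run (both require the relevant packages installed and enough RAM, which a 1GB droplet will struggle with for three VMs):

```shell
#!/bin/sh
# Hedged sketch of fully command-line, VNC-free VM launches.
# libvirt route: virt-install with a serial console instead of graphics
echo "virt-install --name vm1 --memory 256 --disk size=5 --graphics none \
  --console pty,target_type=serial --extra-args 'console=ttyS0' \
  --location http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/"
# raw qemu route: -nographic ties the guest serial console to this terminal
echo "qemu-system-x86_64 -m 256 -drive file=vm1.img,format=qcow2 -nographic"
```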
[21:45] <blizzow> Ok, I downloaded the 14.10 iso and formatted a USB stick with unetbootin.  Stuck the USB stick into a Dell Poweredge R510 and started the installer.  The installer complains about being unable to detect or scan a CDROM.  So I copied the iso to the USB stick as well.  As soon as the installation starts, I tap alt+f1 and mount the iso at /cdrom as a loop device.  The installation makes it through the disk partitioning and some package installation, but 
[21:53] <bekks> Just dd the iso onto the stick.
[22:00] <genii> blizzow: The images made now are hybrid images, you don't need to do anything special so that they run off a USB stick instead of the CD/DVD. You just need to use dd if on a linux machine. If Windows then  you need to find a dd for it like RawWrite or WinDD
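The dd approach plus a verification step can be sketched like this, run here against a scratch file standing in for the USB device (on a real machine the target is the whole device, e.g. /dev/sdX, not a partition):

```shell
#!/bin/sh
# Hedged sketch: write an image and verify it by comparing the first
# <image-size> bytes of the target back against the source.
iso=$(mktemp); stick=$(mktemp)
head -c 65536 /dev/urandom > "$iso"   # stand-in for ubuntu-14.10-server-amd64.iso
dd if="$iso" of="$stick" bs=4k conv=fsync 2>/dev/null
cmp -n "$(stat -c%s "$iso")" "$iso" "$stick" && echo "image verified"
rm -f "$iso" "$stick"
```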
[22:00] <blizzow> bekks: I'm going to try that next.
[22:05] <blizzow> Just flashes isolinux.bin missing or corrupt and then boots the disk
[22:06] <blizzow> :(
[22:07] <bekks> then how did you create the bootable usb stick?
[22:08] <blizzow> dd if=/home/myusername/ubuntu-14.10-server-amd64.iso of=/dev/sdc
[22:09] <blizzow> I mounted the iso with mount -o loop -t iso9660 /home/myusername/ubuntu-14.10-server-amd64.iso /mnt/ and did an md5sum of /mnt/isolinux/isolinux.bin and it's the same as on my drive.
[22:13] <genii> Did you check the md5 sum to make sure the iso is not corrupted?
[22:15] <blizzow> yes, it's the correct hash
[22:20] <genii> blizzow: I suspect an EFI issue
[22:27] <blizzow> Maybe so, but really, I've installed ubuntu on these suckers before.  The installer should not be this sensitive
[22:28] <genii> blizzow: Did you fsck the USB stick ?
[22:31] <blizzow> genii: yeah.  and dd if=/dev/zero of=/path/to/usbdev to wipe it first.
[22:32] <blizzow> The weird part is I can boot to the installer if I use unetbootin to make the stick, but the cdrom isn't seen.  If I use dd, I can't even get to the installer.
[22:34] <blizzow> Forcing mounting /path/usb/partition /cdrom  during the install from the unetbootin usb stick leads to an install that doesn't complete.  I'm wondering if there is a different way to mount/detect the USB stick as my CDROM or if I can do a network based install..
[22:53] <blizzow> Got it!  There is a setting in BIOS about usb drive emulation auto/floppy/hard drive.  It was set to auto and I set it to hard drive.
[23:06] <RoyK> Pratchett just died :(
[23:59] <sarnold> RoyK: i've been sad all day.. no new discworld books.