[05:08] <cpaelzer_> good morning
[06:05] <lordievader> Good morning.
[07:10] <caribou> cpaelzer: I see that you looked at LP: #1644428 I should give you a bit of context on that bug
[07:11] <caribou> cpaelzer: it was created after we pushed a change to make some library statically linked which caused unexpected breakage
[07:11] <caribou> cpaelzer: so the change was reverted and re-SRUed
[07:12] <caribou> people have been piggybacking on that bug but it concerns a change that was reverted and never made available on the other releases
[07:12] <sarnold> :(
[07:14] <zioproto> hello all
[07:14] <zioproto> does anyone have a Neutron network node running on Ubuntu Xenial??
[07:15] <zioproto> I have a funny problem with udp packet drop in the router namespace
[08:55] <zioproto> I have no idea how to fix this. The Neutron network node, now running Xenial, is dropping UDP packets instead of doing the DNAT to the internal IP of the instances. With a tool called Dropwatch I can see that the packets are dropped at the function __udp4_lib_rcv. Basically it is as if the packet is not processed by iptables but it gets to the host, and
[08:55] <zioproto> because there is no socket listening on that UDP port it is dropped
[08:56] <zioproto> I feel I am hitting some limit introduced by some systemd config or some other weird tooling in Xenial
[09:44] <zioproto> Found the problem, I had to reset the conntrack entries
[09:44] <zioproto> there is a race condition between floating IP interface creation and iptables rule creation by the network node
[09:44] <zioproto> conntrack will cache the traffic to local, and will ignore the DNAT iptables rule
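For anyone hitting the same race, a minimal sketch of flushing the stale conntrack entries so the DNAT rule gets re-evaluated. This assumes the conntrack-tools package is installed and you run as root; the namespace name qrouter-<uuid> and the floating IP 203.0.113.10 are placeholders, not values from the discussion:

```shell
# delete cached UDP conntrack entries for the floating IP, inside the router namespace
ip netns exec qrouter-<uuid> conntrack -D -d 203.0.113.10 -p udp

# verify nothing is cached for that address anymore
ip netns exec qrouter-<uuid> conntrack -L -d 203.0.113.10
```

After the flush, the next UDP packet should traverse the iptables NAT chains again and hit the DNAT rule.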
[09:47] <fishcooker> let's say i have an ubuntu server with many installed packages. how do i revert back to the initial state without rebuilding or reinstalling the built-in packages?
[09:57] <tomreyn> zioproto: interesting problem and solution, thanks for sharing.
[09:58] <tomreyn> fishcooker: you could apt-get download the package and dpkg -x it, then diff and cherry pick the files you want / need.
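A sketch of tomreyn's suggestion, using openssh-server as a placeholder package name; this needs network access to the archive but no root:

```shell
# fetch the .deb into the current directory without installing it
apt-get download openssh-server

# unpack the package contents into a scratch directory
mkdir -p /tmp/pkg
dpkg -x openssh-server_*.deb /tmp/pkg

# compare the pristine files against what's on disk, then copy back what you need
diff -r /tmp/pkg/etc/ssh /etc/ssh
```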
[09:59] <tomreyn> ...but i'm not sure i understood your need properly.
[10:15] <fishcooker> i accidentally upgraded to 16.04 from 14.04 and now i can't reboot it http://vpaste.net/u84uO
[10:15] <fishcooker> noted tomreyn
[10:59] <tomreyn> fishcooker: hmm, this looks like it could be a broken upgrade.
[10:59] <tomreyn> how did you upgrade? are you actually looking for assistance with it?
[11:32] <fishcooker> tomreyn: actually i changed the sources.list to the local repository then ran apt install -f
[11:34] <fishcooker> i copied the xenial repository into my trusty sources.list ... just noticed, tomreyn
[11:43] <tomreyn> fishcooker: hmm, well that's not a supported upgrade process, but i assume you're aware.
[13:19] <thatstevecena> Good morning. Are there any Postfix people here? I'm having a problem with Ubuntu 14.04.5 LTS, Postfix and DKIM. I'm able to validate signatures for a few hours but ultimately they all start failing. On other testing sites, all the signatures that fail here pass.
[14:17] <ahasenack> cpaelzer: hi, do you have an example of a server SRU that used the git workflow?
[15:46] <tasslehoff> I have a server with 4 disks in RAID5 (mdadm). One broke, and now I'm swapping all of them. My plan was: Delete RAID -> Swap disks -> Create RAID, but do I need to? Can I just swap the disks and then create the new RAID?
[15:46] <nacc> tasslehoff: why are you swapping all of them if only one broke?
[15:47] <bindi> y u no zfs
[15:47] <tasslehoff> nacc: They are old, and I fear more will break soon. Also I bought better disks.
[15:47] <nacc> ahasenack: there isn't a strict workflow for SRU in git, as it's not typically a merge. I find it easiest (presuming versions are the same) to use cherry-pick across the branches
[15:48] <nacc> tasslehoff: are they the same size as the old disks?
[15:48] <tasslehoff> nacc: yes.
[15:49] <nacc> tasslehoff: so why not swap them one at a time and let mdadm rebuild the array?
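The one-at-a-time swap nacc describes might look like this; /dev/md0 and /dev/sdb1 are placeholder names, and everything needs root:

```shell
# mark the old member failed and pull it from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# physically swap the disk, partition the new one to match, then add it back
mdadm --manage /dev/md0 --add /dev/sdb1

# watch the rebuild progress before swapping the next disk
watch cat /proc/mdstat
```

Repeat per disk, waiting for each rebuild to finish, so the array never loses more than one member at a time.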
[15:49] <nacc> tasslehoff: if you are planning on wiping the RAID, then I don't see why you wouldn't delete and recreate the array
[15:50] <tasslehoff> nacc: I have a usb drive that can hold all the data, so I thought it faster to backup the data there.
[15:51] <nacc> tasslehoff: oh so you're backing up the RAID first?
[15:51] <tasslehoff> nacc: yep! should have mentioned that :)
[15:52] <nacc> tasslehoff: well, i think mdadm configuration uses the disk by name (depends on how you configured it, i guess) -- i think you're best off deleting the array first
[15:54] <tasslehoff> nacc: ok. https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04 seems a good guide
[16:43] <ppetraki> nacc, tasslehoff, md uses metadata to determine array membership, drive letter name ordering is not deterministic on Linux.
[16:45] <nacc> ppetraki: ah ok, i wasn't sure, thanks!
[16:45] <nacc> ppetraki: yeah, i figured that would be pretty error-prone
[16:45] <ppetraki> nacc, tasslehoff, it's also a *really good idea* to save a copy of the mdadm.conf off box
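One way to capture that config off box, with backup.example.com as a placeholder destination host:

```shell
# regenerate mdadm.conf content from the live arrays (run as root)
mdadm --detail --scan > /tmp/mdadm.conf.backup

# keep a copy somewhere that isn't this machine
scp /tmp/mdadm.conf.backup admin@backup.example.com:~/
```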
[16:45] <nacc> ppetraki: yeah i'd say so :)
[16:46] <ppetraki> nacc, it writes that guid to the superblock location on media. The only thing you really need to worry about is if you start making dd clones of your array members
[16:46] <nacc> ppetraki: makes sense
[16:47] <nacc> ppetraki: thanks for clarifying!
[16:47] <ppetraki> nacc, you're welcome
[16:47] <ppetraki> nacc, it's also a good idea if you're doing SSDs to over provision, give the garbage collector some scratch to run. Assuming these are consumer grade SSDs
[16:50] <nacc> ppetraki: yep, that seems like reasonable advice
[16:54] <ppetraki> nacc, drive vendors keep about 7% to themselves for that purpose that you can't get back, but if you only ever write to the first 80% of the drive, it's smart enough to know it can use the remaining 20% for tmp space while it frees up larger ranges of clean pages for you to write to. They're really thin provisioned under the hood.
[16:54] <ppetraki> nacc, pdf warning, it improves performance too :) http://www.samsung.com/semiconductor/minisite/ssd/downloads/document/Samsung_SSD_845DC_04_Over-provisioning.pdf
[16:56] <ahasenack> nacc: for SRU MPs, the target git repo should be lpusdp? ~ubuntu-server-dev/ubuntu/+source/?
[16:56] <ahasenack> I cloned lpusip, i.e., ~usd-import-team/ubuntu/+source/
[16:56] <ahasenack> to prepare it
[16:57] <ppetraki> nacc, so to over provision on a md array you would simply provision to 80% less size when creating the array. Do not mess with partitions unless you want to figure out alignment issues.
[16:57] <nacc> ahasenack: no, you can ignore usdp now
[16:57]  * ppetraki means 20% less
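So, per ppetraki's correction, use roughly 80% of each member's capacity when creating the array. A sketch of the arithmetic; the device size below is an assumed figure for a ~4 TB member, not one from the discussion:

```shell
# per-device size in KiB (placeholder figure for a ~4 TB disk)
DEV_KIB=3907018584

# leave 20% unwritten as scratch for the SSD garbage collector
USE_KIB=$(( DEV_KIB * 80 / 100 ))
echo "$USE_KIB"

# then create the array capped at that per-device size, e.g.:
# mdadm --create /dev/md0 --level=5 --raid-devices=4 --size="$USE_KIB" /dev/sd[bcde]1
```

mdadm's --size is the amount of space to use from each member device, so capping it leaves the remaining flash untouched.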
[16:57] <nacc> ahasenack: you can propose merging to the appropriate series-devel on lpusip
[16:57] <ahasenack> nacc: thx
[16:57]  * ppetraki over provisioning is a stupid term
[16:57] <nacc> ppetraki: :)
[17:00] <ppetraki> nacc, https://www.percona.com/blog/2011/06/09/aligning-io-on-a-hard-disk-raid-the-theory/ , save a copy of this, I swear it moved.
[17:03] <ahasenack> nacc: "target/reference path" in lp is the target branch, right? ubuntu/zesty-devel for example?
[17:03] <nacc> ahasenack: yeah
[17:03] <ahasenack> ok
[17:03] <nacc> ppetraki: bookmarked :)
[17:17] <azeem> nacc: ok, I've created a bug for now: https://bugs.launchpad.net/ubuntu/+source/resource-agents/+bug/1688613
[17:35] <nacc> azeem: cool
[18:55] <RoyK_Home> hm - I've moved a server to another location - some users are on ecryptfs and they don't seem to get their homedirs mounted - any idea what to do? I have the old vm - I just wonder what I might have forgotten to restore
[19:03] <jamespage> semiosis: around?
[20:01] <coreycb> jamespage: nacc has the new python-django in a ppa in case we want to test with it -- https://bugs.launchpad.net/ubuntu/+source/python-django/+bug/1605278
[20:19] <coreycb> jamespage: pkgs for the latest newton point releases are uploaded to the sru queue
[20:24] <pmatulis> re high availability with postgresql, i see reference to 'pgsql RA'. what is "RA"?
[20:27] <sarnold> guessing https://raw.githubusercontent.com/ClusterLabs/resource-agents/a6f4ddf76cb4bbc1b3df4c9b6632a6351b63c19e/heartbeat/pgsql
[20:29] <TLoFP> If I want to join two drives in Raid 0 (software) is it possible to keep the data that is on one of the drives?
[20:29] <pmatulis> sarnold, thanks
[20:30] <azeem> pmatulis: yeah, RA is resource agent
[20:30] <azeem> pmatulis: where did you see it?
[20:31] <pmatulis> azeem, reading stuff on the net
[20:32] <sarnold> TLoFP: it seems unlikely to me; but if you like to live dangerously -maybe- you could try an in-place conversion to btrfs and then see if you can add a second drive to the btrfs thingy
[20:33] <sarnold> TLoFP: but (a) i'm not sure I trust btrfs yet (b) i'm doubly-unsure if you can trust btrfs's multiple drives stuff yet (c) only one copy of data scares me now (d) two drives in one filesystem like that doubles the chances for catastrophic failure compared to just one drive..
[20:33] <azeem> pmatulis: ah ok, I just wondered cause I filed #1688613 a few hours ago
[20:34] <azeem> which says "pgsql RA"
[20:34] <sarnold> TLoFP: .. and half the point of raid-0 ish things is so you could spread IOs across multiple disks for higher throughput, which this wouldn't achieve if you just leave all the data on the one drive untouched
[20:39] <gartral> hey all, I'm at my literal wits' end here, I have an apache install on 16.04 that refuses to cooperate, I try to reload it after changing some site-configs and it tells me apache2.service isn't running... the site is up!
[20:39] <TLoFP> sarnold: thanks. I thought this wasn't easily possible, but wanted to check
[20:39] <sarnold> gartral: netstat -lntp
[20:39] <sarnold> TLoFP: just please be sure that you've got backups of anything you care about :)
[20:40] <TLoFP> right that is the issue sarnold
[20:40] <TLoFP> I have 4 TB of data that I would like to keep but it is low priority
[20:41] <TLoFP> I am adding another 4TB disk to the system to allow for 8TB of storage.
[20:41] <gartral> sarnold: what am I looking for in here?
[20:41] <TLoFP> I have no ability to backup 4TB. So I am stuck
[20:41] <sarnold> gartral: something bound to your web ports
[20:41] <gartral> sarnold: nothing is, at all, but the apache welcome page is up... I'm very confused now
[20:42] <sarnold> TLoFP: I'm a huge fan of zfs, I really like the redundancy and checksums and compression and snapshots and so on
[20:42] <sarnold> TLoFP: but it's not very .. consumer-oriented. it's not much for 'just add one more drive to this pool'
[20:42] <TLoFP> sarnold: but it doesn't seem that ZFS is a straightforward configuration
[20:42] <gartral> wait...
[20:42] <sarnold> gartral: grab a different browser perhaps, maybe it's stuck in cache
[20:43] <sarnold> TLoFP: I found zfs way simpler to understand than mdadm, but that might just be me
[20:43] <gartral> I might have figured it out... openvpn and apache both try to use port 443, don't they?
[20:44] <gartral> yea, apache isn't loaded at all, so now what?
[20:46] <sarnold> gartral: check apache logs to see if it emitted any reasons why it couldn't start
[20:46] <sarnold> gartral: /var/log/apache* and perhaps journalctl -u apache2 or whatever the service name
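A sketch of those checks, assuming the unit really is named apache2; reading the logs typically needs root or membership in the adm group:

```shell
# recent apache errors, if the log is actually a file here
sudo tail -n 50 /var/log/apache2/error.log

# what systemd saw the last time the unit tried to start
sudo journalctl -u apache2 --no-pager -n 50

# and whether the unit thinks it's running at all
systemctl status apache2
```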
[20:46] <gartral> sarnold: apache logs are unreadable for me, it's a shared host
[20:46] <sarnold> gartral: please explain further
[20:47] <sarnold> TLoFP: if you're curious about zfs I suggest this blog post series https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
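For comparison with the mdadm workflow, a minimal zfs sketch; the pool name tank and the device names are placeholders, and this assumes root plus the zfsutils-linux package on Xenial:

```shell
# mirror two whole disks into a pool mounted at /tank
zpool create tank mirror /dev/sdb /dev/sdc

# turn on compression and carve out a dataset for the data
zfs set compression=lz4 tank
zfs create tank/data

# check pool health
zpool status tank
```

Note sarnold's caveat above: classic zfs pools aren't great at "just add one more drive" growth the way some consumer NAS setups are.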
[20:47] <gartral> sarnold: I can't read /var/log/apache, at all.. even with sudo su
[20:49] <sarnold> gartral: curious. can you pastebin the output of sudo namei -l /var/log/apache2/error.log ?
[20:50] <nacc> gartral: i assume you meant /var/log/apache2 not /var/log/apache
[20:51] <gartral> nacc: indeed
[20:51] <gartral> sarnold: http://paste.ubuntu.com/24519145/
[20:51] <gartral> yea... NO PERMS, at all
[20:53] <nacc> gartral: is this a VPS or something?
[20:54] <gartral> sarnold: I think I know why, too... ls -la shows /var/log/apache2 -> /dev/zero so it's just dumping all apache logs into the garbage bin
[20:54] <gartral> nacc: it is
[20:55] <nacc> gartral: sounds like a bad VPS provider
[20:55] <nacc> gartral: and not really ubuntu, as that is definitely not the ubuntu configuration
[20:55] <gartral> nacc: no... it's ubuntu, it's just configured to keep as few logs as possible..
[20:56] <nacc> gartral: which ... is not ubuntu
[20:56] <nacc> gartral: sounds fundamentally broken
[20:56] <nacc> gartral: as you can't debug why things don't work without logs
[20:57] <gartral> nacc: http://paste.ubuntu.com/24519180/
[20:57] <nacc> gartral: that's just reading files in /etc
[20:57] <nacc> gartral: unfortunately, VPS are terrible in this regard
[20:57] <nacc> gartral: what does `uname -a` report?
[20:57] <nacc> we've had people come into #ubuntu saying they are running 16.04.2 and the kernel is 2.6 based
[20:57] <gartral> nacc: Linux kitsunet-emergency-znc 4.4.0-66-generic #87-Ubuntu SMP Fri Mar 3 15:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[20:58] <nacc> because the VPS provider can override whatever they want
[20:58] <nacc> (particularly for containers)
[20:58] <nacc> gartral: i would first reproduce it with a stock ubuntu (no modification to the configuration)
[20:59] <gartral> nacc: unfortunately I'm lacking resources for that
[20:59] <nacc> gartral: and/or you can change your server's configuration to not drop all logging
[20:59] <gartral> hang on, I might be able to redirect the symlink
[20:59] <gartral> nacc: my thoughts exactly
[21:00] <gartral> we want error.log, right?
[21:03] <gartral> nacc: the timestamps are bizarre but http://paste.ubuntu.com/24519201/
[21:03] <gartral> they're off and padded with extra 0s
[21:06] <nacc> gartral: the extra 0s could just be the resolution of your timesource
[21:06] <nacc> gartral: that doesn't indicate any errors
[21:06] <nacc> gartral: is apache running?
[21:06] <nacc> gartral: not the service, the process
[21:14] <gartral> nacc: not that i can see, no
[21:16] <gartral> nacc: I gotta move and head out, I'll be connected but I'll be slow to respond for a bit
[21:35] <gartral> back
[21:37] <gartral> nacc: so apache doesn't appear to be running
[22:52] <nacc> gartral: ok, how are you checking that?
[23:40] <CodeMouse92__> I must be missing something freakin' obvious.
[23:40] <CodeMouse92__> I'm trying to set up SquirrelMail on a subdomain (webmail.example.com), without something else running on example.com
[23:41] <CodeMouse92__> I've got the <VirtualHost *:80> in both
[23:41] <CodeMouse92__> And the ServerName is set to 'example.com' and 'webmail.example.com' in their .confs, respectively)
[23:42] <CodeMouse92__> The main site is in mousepawgames.conf, and the squirrelmail is in squirrelmail.conf, both in sites-available, both a2ensite'd up
[23:42] <CodeMouse92__> No access errors
[23:42] <CodeMouse92__> HOWEVER: squirrelmail is NOT serving to webmail.example.com - it's actually serving to example.com (if I shut off port 80 on the main site to prevent blocking)
[23:42] <CodeMouse92__> Freaky as heck - what am I missing here?
[23:45] <tarpman> CodeMouse92__: does apachectl -S provide any info?
[23:45] <CodeMouse92__> tarpman: Plenty of info, no errors that I can see. Want me to pastebin this sucker?
[23:45] <tarpman> not sure I'll be able to make anything of it
[23:46] <tarpman> but please do, someone else might
[23:46] <CodeMouse92__> Point is, it IS showing both sites....stand by
[23:46] <CodeMouse92__> https://bpaste.net/show/2e7940cb773a
[23:47] <CodeMouse92__> But webmail.mousepawgames.net literally goes nowhere.
[23:47] <CodeMouse92__> (Worth noting that subdomains aren't being blocked...the 'mail.' subdomain works fine in its context of Postfix/Dovecot)
[23:48] <tarpman> webmail.mousepawgames.net gets me a squirrelmail login
[23:48] <CodeMouse92__> You're kidding
[23:48] <tarpman> once I /etc/hosts it, anyway. DNS?
[23:48] <tarpman> looks like DNS and not apache, to me
[23:48] <CodeMouse92__> ...
[23:49] <CodeMouse92__> That's bizarre. I'm remoting into the server (Linode) in question, so it won't be *my* DNS per se
[23:49] <tarpman> (also this is an argument for not doing the "example.com" thing - harder to see your actual problem)
[23:49] <CodeMouse92__> Yeah, I get that
[23:49] <tarpman> are you remoting into it via the name "webmail.mousepawgames.net"?
[23:49] <tarpman> 8.8.8.8 says NXDOMAIN
[23:49] <CodeMouse92__> Well, no, I mean I'm SSHing into the server. I'm remote.
[23:50] <tarpman> and ns1.linode.com says NXDOMAIN too
[23:50] <CodeMouse92__> Yeah, without overriding with /etc/hosts, webmail.mousepawgames.net is *not* working...
[23:50] <CodeMouse92__> Uhm, hm. Do I need to literally add each subdomain to Linode's DNS?
[23:50] <tarpman> yes.
[23:50] <CodeMouse92__> HAH. Told you I thought I was missing the obvious
[23:51] <tarpman> or well rather
[23:51] <tarpman> depends what you mean by 'subdomain'
[23:51] <tarpman> mousepawgames.net is the domain
[23:51] <tarpman> webmail.mousepawgames.net is a host within that domain
[23:51] <tarpman> if you had mail.internal.mousepawgames.net, I'd call internal.mousepawgames.net a subdomain
[23:51] <CodeMouse92__> Yeah, mousepawgames.net is set up and all...
[23:52] <tarpman> https://www.linode.com/docs/assets/912-hosting-2.png
[23:52] <tarpman> so in the manager for mousepawgames.net
[23:52]  * CodeMouse92__ nods and adds the A/AAAA record
[23:52] <tarpman> you want to add an A record for webmail.mousepawgames.net (bottom table in there)
[23:52] <tarpman> or alternatively a CNAME (an alias) pointing at mousepawgames.net itself
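In zone-file terms, the two options tarpman mentions look like this; 203.0.113.10 is a placeholder address, and a name gets one or the other, never both:

```
; option 1: an A record pointing the host directly at the server
webmail.mousepawgames.net.  86400  IN  A      203.0.113.10

; option 2: a CNAME aliasing it to the apex name
webmail.mousepawgames.net.  86400  IN  CNAME  mousepawgames.net.
```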
[23:52] <CodeMouse92__> Actually, middle table, but yeah
[23:52] <tarpman> either or
[23:53] <tarpman> HTH anyway
[23:55] <CodeMouse92__> Yeah, thank you tarpman