[07:14] <lordievader> Good morning
[09:18] <kstenerud> cpaelzer: It turns out that the php7.2 issue was exactly as I'd suspected: A missing #include, which causes the call to a macro to be interpreted as a call to an undeclared function, thus the undefined symbol when linking.
[09:19] <kstenerud> So now, if I want to fix this and test it, is there a way for me to make my own fixed version of php-intl, and then build horde-lz4 against it and run autopkgtests?
[09:19] <kstenerud> I'm hoping there's a better way than just pushing an MP through and then praying nothing goes wrong on the excuses page once it migrates
[10:05] <cpaelzer> kstenerud: yeah there is
[10:06] <cpaelzer> kstenerud: for starters you can put a fixed php7.2 (which builds php-intl binary IIRC) in your PPA
[10:06] <cpaelzer> kstenerud: and then you can upload the horde package to the same PPA
[10:06] <cpaelzer> kstenerud: it will build against that php it finds in that PPA then
[10:06] <cpaelzer> and you can then check e.g. the build log or even install from the PPA on a container to check for more
[10:07] <cpaelzer> kstenerud: will that achieve what you need?
[10:07] <kstenerud> how do I get the horde package uploaded to the ppa? It fails the signature because the last changelog is not mine
[10:07] <cpaelzer> kstenerud: just "dch -i" it and add a ~ppa1 to the version
[10:07] <cpaelzer> the last changelog will be yours then and you can upload
[10:07] <kstenerud> ok thanks!
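[Editor's note] cpaelzer's PPA recipe can be sketched as a command sequence. The PPA name and version numbers below are made up for illustration.

```shell
# Hypothetical PPA upload flow. In the php7.2 source tree:
#   dch -i                    # adds a new changelog entry under your name;
#                             # edit the version to end in ~ppa1
#   debuild -S -sa            # build and sign the source package
#   dput ppa:your-lp-id/php-fixes ../php7.2_*_source.changes
# Repeat dch/debuild/dput for horde-lz4 into the same PPA; its build
# there picks up the fixed php-intl binaries already published in it.

# Why the ~ppa1 suffix: in Debian/Ubuntu version ordering, "~" sorts
# before everything (even the empty string), so the eventual archive
# upload supersedes the PPA build. GNU sort -V shows the same ordering:
ver="7.2.15-0ubuntu1"             # made-up archive version
ppaver="${ver}~ppa1"              # PPA version
printf '%s\n%s\n' "$ver" "$ppaver" | sort -V | head -n1   # ~ppa1 sorts first
```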
[10:14] <jamespage> cpaelzer: hey not sure we got to a conclusion on where to fix/patch or whatever for the bionic backport of libvirt 5.0.0 with regards to udev rules
[10:15] <jamespage> we can hold a patch in the backport to re-apply the rules and maintainer script parts so that it's bionic-only - does that work for you?
[10:16] <cpaelzer> jamespage: hiho, yeah you can add back on the Bionic-backport what I dropped
[10:17] <cpaelzer> jamespage: do you need a pointer to the former change?
[10:17] <jamespage> cpaelzer: yes
[10:17] <jamespage> please
[10:21] <cpaelzer> jamespage: old change #1 https://salsa.debian.org/qemu-team/qemu/commit/5a90d7b9
[10:21] <cpaelzer> jamespage: old change #2 https://salsa.debian.org/qemu-team/qemu/commit/447402b0
[10:21] <cpaelzer> jamespage: dropped because https://salsa.debian.org/qemu-team/qemu/blob/ubuntu-disco-3.1/debian/changelog#L109
[10:47] <jamespage> cpaelzer: how does http://paste.ubuntu.com/p/Wy26sYSjVQ/ look ?
[10:47] <jamespage> basically i resurrected the maintainer scripts and udev rule from cosmic's qemu
[10:51] <kstenerud> cpaelzer: I'm taking another look at debian's php, and it looks like they're still updating their 7.2 branch: https://salsa.debian.org/php-team/php/tree/master-7.2
[10:51] <cpaelzer> jamespage: seems ok to me
[10:51] <kstenerud> but it doesn't show up in rmadison
[10:51] <jamespage> cpaelzer: throwing it into the UCA for stein now
[10:51] <kstenerud> So would it make more sense for me to update disco's php7.2 off that?
[10:51] <cpaelzer> kstenerud: they might update the branch because someone still uses it themselves
[10:52] <cpaelzer> kstenerud: I'd not take anything from there blindly (like rebasing onto it)
[10:52] <cpaelzer> kstenerud: but evaluating and picking changes one by one might be ok if you think they are helpful
[10:53] <kstenerud> oh wait, looks like their 7.2 doesn't use the newer libicu, so it wouldn't help
[10:54] <kstenerud> However, I'm noticing that in their 7.3 icu (upstream) changes, there are a lot of differences besides just the namespace change. There are a bunch of ifdefs around icu version 57 (above or below)...
[10:54] <kstenerud> so now I'm wondering if I should revisit what changes I made, and maybe include all that extra code?
[10:55] <kstenerud> I'm worried that just "making the code compile" might hide some submarine bugs
[10:56] <cpaelzer> kstenerud: ack, sounds worth a check
[11:04] <rbasak> kstenerud: I like that attitude :)
[11:40] <jamespage> coreycb: qemu fix on its way into the stein uca
[11:40] <jamespage> holding a uca patch
[11:52] <ahasenack> good morning
[13:13] <jamespage> coreycb: how are we looking on the stein uploads? I don't see any updates in the tracker?
[13:23] <Delvien_> ubuntu-server 18.04 - i had set my hosts file manually at one point; even after i changed to a DNS server, it seems to be keeping that information and directing "ping hostname" to its old IP. Where can I clear this?
[13:24] <ahasenack> try "systemd-resolve --flush-caches"
[13:24] <Delvien_> tried that already, still trying to ping old IP
[13:25] <blackflow> Delvien_: "changed to a DNS server" -- how?
[13:25] <ahasenack> cat /etc/nsswitch.conf |grep hosts:
[13:25] <ahasenack> does that return "files" before dns?
[13:25] <Delvien_> yes
[13:28] <Delvien_> blackflow: local dns server to resolve local queries before passing to net for FQDN
[13:29] <Delvien_> because editing /etc/hosts on every machine is a pain
[13:29] <blackflow> Delvien_: which one, bind?
[13:29] <blackflow> (which one is the local dns server)
[13:30] <Delvien_> blackflow: tried bind but with my setup it wasnt working, im having my pihole serve as my DHCP and local DNS
[13:30] <Delvien_> ahasenack: yes, it returns files
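[Editor's note] The nsswitch check ahasenack walks Delvien_ through can be wrapped in a tiny script. The sample line below is illustrative, mimicking a stock Ubuntu `/etc/nsswitch.conf`.

```shell
# Does "files" (i.e. /etc/hosts) win over "dns" for hostname lookups?
# On a real system, read the line from the actual config instead:
#   line=$(grep '^hosts:' /etc/nsswitch.conf)
line="hosts: files mdns4_minimal [NOTFOUND=return] dns"

# Position of each source on the hosts: line (1-based word index):
files_pos=$(echo "$line" | tr ' ' '\n' | grep -n '^files$' | cut -d: -f1)
dns_pos=$(echo "$line" | tr ' ' '\n' | grep -n '^dns$' | cut -d: -f1)

if [ "$files_pos" -lt "$dns_pos" ]; then
    echo "files before dns: /etc/hosts overrides the resolver"
fi
```

This is why a stale `/etc/hosts` entry beats any DNS server you configure: the lookup never reaches DNS at all.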
[13:31] <blackflow> Delvien_: well you'll have to trace the resolution and see what exactly is authoritative for the zone and what that particular authority is returning. using dig
[13:31] <blackflow> dig hostname +trace    dig hostname @ns-that's-supposed-to-be-the-authority
[13:32] <blackflow> dig hostname @configured_resolve.conf_nameserver
[13:32] <Delvien_> blackflow: i did that, its pointing inward (127.0.0.53) so its trying to self-resolve the query, when it shouldnt because i set a nameserver
[13:32] <Delvien_> plus hosts file is now blank so im not sure where its pulling this info
[13:33] <blackflow> +trace will tell you
[13:34] <Delvien_> trace?
[13:34] <blackflow> also.... do you perchance have a "search" domain in resolv.conf and using just "hostname" is actually checking "hostname.search.domain.tld" and that still points to the old IP?
[13:34] <blackflow> Delvien_: `dig <hostname> +trace`
[13:34] <blackflow> you said "I did that" but you obviously did not understand a word I said :)
[13:34] <Delvien_> yeah its still pointing 127.0.0.53
[13:36] <blackflow> Delvien_: and what is listening on that IP? pihole?
[13:37] <Delvien_> pihole is a different machine, different ip
[13:37] <blackflow> so what is that then, systemd-resolved?
[13:39] <Delvien_> Im not sure, thats what im trying to figure out
[13:39] <Delvien_> or are you asking about pihole?
[13:39] <Delvien_> ubuntu-server seems to be using systemd-resolved
[13:40] <Delvien_> and ignoring settings in netplan for nameserver
[13:40] <blackflow> ss -4lnp will tell you what's listening on that ip
[13:40] <blackflow> so anyway, here's how this works. it's defined by nsswitch, but assuming defaults now: resolving first consults /etc/hosts, then asks whoever is configured in /etc/resolv.conf as the "nameserver" entry.
[13:41] <blackflow> so THAT is your first stop, to check its config and clear its caches if any. BUT, if /etc/hosts is the only place you forced that hostname-IP combo, then that's a different problem
[13:42] <blackflow> if that's systemd-resolved, the command ahasenack gave you should've fixed it but.... systemd-resolved is a steaming pile of feces which everyone should be disabling and masking out on servers. I've seen that not work in the past and I had to fully restart the systemd-resolved service.
[13:43] <Delvien_> ss states nothing is listening on 127.0.0.53, resolv.conf is auto-regenerating to point to 127.0.0.53, so yeah i think its systemd-resolved
[13:43] <Delvien_> noted.. getting rid of this resolved. lol
[13:45] <blackflow> if nothing is listening on that IP but resolv.conf is pointing at it with "nameserver 127.0.0.53", then you've got broken resolving there.
[13:45] <Delvien_> yeah, disabled resolved, fixed resolv.conf, works now
[13:45] <blackflow> you'll have to mask, not just disable, resolved. other services (like NM) have this ugly tendency to resurrect it, if only disabled.
[13:46] <Delvien_> I had fixed resolv.conf before but something kept pointing it right back.. it's quite broken.
[13:46] <blackflow> it's a symlink into /run  by default. with resolved masked out, and no NM (or something else), you'll need to unlink /etc/resolv.conf and make a proper file
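[Editor's note] blackflow's procedure, sketched end-to-end. Hedged: the nameserver IPs are placeholders (192.168.1.2 stands in for a local pihole), and the target path is parameterized so the sketch can be dry-run outside /etc.

```shell
# Disable AND mask systemd-resolved so nothing can re-enable it:
#   sudo systemctl disable --now systemd-resolved
#   sudo systemctl mask systemd-resolved

# /etc/resolv.conf is normally a symlink into /run; replace it with a
# plain file. The target defaults to a local demo path here -- point
# RESOLV_PATH at /etc/resolv.conf (as root) to do it for real.
resolv="${RESOLV_PATH:-./resolv.conf.demo}"
rm -f "$resolv"                      # drop the old symlink/file
{
    echo "nameserver 192.168.1.2"    # placeholder: your local DNS (pihole)
    echo "nameserver 1.1.1.1"        # public fallback
} > "$resolv"
cat "$resolv"
```

With the symlink replaced by a regular file and resolved masked, nothing regenerates the stub `nameserver 127.0.0.53` entry.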
[14:22] <coreycb> jamespage: ok thanks for the qemu patch
[14:22] <jamespage> coreycb: bumping it to proposed now
[14:23] <coreycb> jamespage: no progress, i've been sidetracked. i'll try to focus on stein today.
[14:23] <jamespage> coreycb: have cycles to spend on pkg-ing right now - can I work on unassigned items on the tracker?
[14:23] <coreycb> jamespage: sure have at it please. i'm going to get the keystone ldap SRU going and then i'll work on stein too.
[14:24] <jamespage> coreycb: +1 ok
[17:13] <Delvien> blackflow: So I found out that 127.0.0.53 is the "stub listener" for systemd-resolved
[17:13] <sarnold> that was a very nice thing of them to do :)
[17:14] <sarnold> it makes it stand out clear as day what tool you're using
[17:54]  * mason has been naughty and ripping out a bunch of that stuff lately.
[17:55] <mason> Zapping netplan.io, networkd-dispatcher, adding ifupdown and having =gasp= a manually managed resolv.conf for servers.
[17:59] <sarnold> mason: *the horrors*!
[18:09] <blackflow> mason: good!
[18:09] <blackflow> out with NIHtrocities!
[18:12] <Odd_Bloke> Surely manually managing your resolv.conf is NIH too? :p
[18:12] <sarnold> what's to manage though?
[18:14] <mason> Odd_Bloke: Eh? Managing resolv.conf has been a non-issue for decades...? I'm not sure I understand. Even managing it with cfengine and its descendants is layered with a fine coating of dust at this point. That it's seen any attention at all is a bit weird, and that that attention has made it fragile and breaky is bizarre.
[18:15] <lordcirth> mason, what breakage have you run into?
[18:16] <mason> lordcirth: None here, because I yank the stuff out, but wasn't there something big and visible on SlashDot a year or so ago?
[18:16] <lordcirth> iirc there was a default fallback to 8.8.8.8 if it couldn't contact anything?
[18:17] <lordcirth> Which doesn't seem like a huge deal to me
[18:17] <lordcirth> And it was fixed
[18:17] <mason> That sounds right.
[18:18] <mason> Yeah. "We're going to randomly depend on a particular company's services for EVERYONE." Questionable thinking behind that.
[18:18] <lordcirth> Not depend, just fallback to as the last resort
[18:18] <cyphermox> it's also configurable, like everything else
[18:18] <lordcirth> Had it not been set, then it would have just broken instead. The concern was privacy.
[18:18] <Odd_Bloke> And I don't think 8.8.8.8 is random either; it's a common fallback.
[18:19] <Odd_Bloke> (It's what I switch to when I find a malfunctioning DNS server in the wild.)
[18:19] <lordcirth> Yeah, my home router DNS is 1.1.1.1, then 8.8.8.8
[18:19] <lordcirth> If cloudflare and google stop working - well they don't.
[18:19] <blackflow> 17.04 was uninstallable for me, systemd-resolved had to be removed from the live env. then systemd-resolved had issues with my upstream resolver, some responses were considered erroneous. I've been enjoying the peace and quiet of a local bind instance (because bind is what I use otherwise for authoritative purposes, so 'tis a familiar tool) ever since.
[18:20] <lordcirth> blackflow, are you sure the upstream responses *weren't* erroneous? It wouldn't be the first time that a tool written to spec broke because the one it replaced didn't follow spec.
[18:20] <cyphermox> ^ that, quite often
[18:21] <cyphermox> not so much the responses themselves as how they are conveyed
[18:21] <blackflow> lordcirth: in fact, no, it was systemd-resolved's inability to handle DNSSEC properly.
[18:22] <lordcirth> blackflow, ah, that's interesting. Do you know if there's a bug page for it?
[18:22] <blackflow> I think this was it https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1682499
[18:22] <blackflow> I don't remember now, there was quite a number of 'em, including upstream
[18:23] <cyphermox> that looks quite relevant, if you did have dnssec; it was very much busted
[18:23] <mason> Anyway, this all strikes me the same way "predictable naming" does - everyone gets it, even though really a very small subset of users need it. I'd want to default to less-complex and make the more complex solution available for folks who benefit from it.
[18:23] <blackflow> quite a number of domains did not work for me until I ripped that out in place of a local bind
[18:25] <blackflow> btw, the argument made about broken spec is a moot one. if you're writing a critical tool intended to replace another, then you MUST make sure your tool supports spec "quirks", if you want it to replace anything
[18:25] <blackflow> saying "FU, your upstream is broken" is just gonna make _me_ go "FU and your <...>", this worked for the past 10 years until you came along.
[18:26] <mason> I suspect everyone has good intentions and just wants to make cool new software.
[18:26] <lordcirth> blackflow, I think that depends on the situation. Sometimes I've updated, and it stopped working, and clearly said that the other end was broken. And I looked into it and fixed the other end, and lived happily ever after.
[18:26] <blackflow> mason: the problem with "predictable naming" is that you can't reliably predict it in advance. case in point the recent shtstorm that hit the upstream about their change in udev
[18:27] <lordcirth> The way I like now is to use netplan's mac matching, then rename it.
[18:27] <mason> blackflow: Well. The problem is that most people have one or two NICs at most in most systems. Not being able to rely on "eth0" being what you want is unfortunate.
[18:27] <lordcirth> My management iface is 'mgmt'.
[18:27] <blackflow> it very much affects me because we rent our servers and I don't control PCI slots or the exact hardware used. we install via debootstrap because we use encrypted ZFS root, and I'm left guessing via udevadm which name it is GONNA be after reboot.....
[18:27] <mason> MAC matching makes sense.
[18:28] <lordcirth> Of course, if you only have one NIC, it would be nice if you could rely on it to be eth0...
[18:28] <cyphermox> you can't rely on having just one or two NICs on servers nowadays though
[18:28] <blackflow> lordcirth: the ironic part is that MAC binding has been done for years to fix _exactly_ the issue that "predictable naming" is supposed to fix, and it only made it worse, so I am _STILL_ forced to bind via MAC and use ethX naming again.
[18:28] <cyphermox> pizzaboxes do have two usually, and anything 2U often has 4
[18:29] <cyphermox> (and if you netboot, order may vary slightly, etc. etc.)
[18:30] <blackflow> i've learned to appreciate the process management part of systemd and the ability to utilize kernel features through simple unit files. everything else? purge it out with nuclear fire!
[18:30] <cyphermox> the best way to summarize this is "it's not that simple"
[18:32] <blackflow> big red flag when RH is not dogfooding their own product properly. they use NM and not networkd for example. I'm not sure but I don't think they use resolved even?
[18:33] <Ussat> blackflow, um...I know LOTS of RH employees that do........
[18:34] <blackflow> but their flagship moneymaker isn't. like, orders of magnitude more important than an anecdotal mention of "LOTS of employees".
[18:34] <Ussat> ...
[18:34] <Ussat> Yes, yes it is, but whatever
[18:36] <Ussat> I use both Ubuntu and RH extensively......both have their strengths and weaknesses
[18:39] <blackflow> I like SELinux. Wish I had more time to learn it and use it properly. Wish it was supported in 'buntu instead of AppArmor. I'll ask Santa in ten months :)
[18:40] <Ussat> SElinux is great
[18:41] <blackflow> 'tis.
[18:42] <blackflow> incidentally, I found gentoo's selinux policies to be far more usable and polished than CentOS/Fedora's. you could actually run a SELINUXTYPE=strict server (openrc instead of systemd though, and no xorg) just fine with no issues.
[18:46]  * jdstrand notes that selinux is available in Ubuntu via kernel command line and the policy/tools that come from Debian. that said, I would expect that policy to need a lot of work
[18:47] <blackflow> quite a lot. that's what I meant by "supported in Ubuntu instead of AppArmor" -- that level of commitment to AppArmor.
[18:48] <jdstrand> it's community supported so if people wanted to invest their time, it would be great. but sure
[18:50] <blackflow> I totally would if, say, Ubuntu shipped with it by default and had a bootable server instance supported, even if in targeted SELINUXTYPE. Like a platform to get started. So I'm doing the same but for AppArmor: all the services we use are apparmor'ed, even though ubuntu does not ship policies for them.
[18:56] <jdstrand> yeah, I mean, AppArmor is backed by Canonical and used in many places, so that is the focus. the lack of decent selinux policy is unfortunate but also reflects that the community is largely ok with the choice. my only point is that there is a path forward to make it better; someone needs to do it and bootstrapping the policy and then maintaining it release to release is hard
[18:57] <jdstrand> years ago there was decent policy from the community, but the maintenance burden got to be too much for those devs and now we just get whatever policy Debian has