[00:16] sarnold, alright, done, any chance you'd be willing to sponsor it for us?
[00:18] kyrofa: sorry, I'm not a coredev :(
[00:21] python-pygraphviz is in universe.
[00:22] oh! so motu would suffice?
[00:22] (which, alas, I am also not, but that's probably a larger pool :)
[00:23] Yes sir.
=== pieq__ is now known as pieq
[04:42] Is there anything preventing someone sync'ing libjson-c from experimental?
[04:45] Unit193: .
[04:45] When are you going to become a Core Developer, again? :P
[04:45] Why would I? I like the fringe packages. :3
[04:45] ¯\_(ツ)_/¯
[04:45] (Read: I'm only looking into this because of 'sway')
[04:45] OOOOOH
[04:46] What's the progress on that?
[04:46] A little birdie tells me someone is interested in starting an Ubuntu Sway flavor.
[04:46] depwait on json-c .13 :P
[04:46] :P
[04:46] Urgh, not another...
[04:46] your what hurts?
[04:49] Unit193: Between Ubuntu Sway, Ubuntu Cinnamon, and Ubuntu Unity, I wonder if someone is already betting who actually pulls through first. :P
[04:49] (All three of those are currently in some stage of "in the works.")
[04:49] Now, if we had an active TB that could actually process a flavor application...
[04:49] * tsimonq2 runs
[04:50] Who's behind them?
[04:50] Unit193: Oh look, your binaries are in the NEW queue.
[04:52] Ubuntu Unity has... Khurshid Alam? I'm not sure who else. Ubuntu Cinnamon is by someone I was told I should meet when I went to SELF but never got the chance to, so *shrug*. Ubuntu Sway is a few flavor folks.
[04:53] * tsimonq2 glares at bashfulrobot
[04:53] :P
[04:54] i'd be interested in trying a sway flavor actually
[04:55] wxl: That makes four existing flavor people interested.
[04:55] Right, so reading proposed-migration output...
[04:55] oh yeah and i'm starting a fbui flavor.
[04:55] Unit193: Oh?
[04:56] skipped: elogind (0, 5, 11) got: 7+0: a-1:a-4:a-0:i-0:p-0:s-2 * arm64: elogind, libelogind-dev, libelogind0, libpam-elogind
[04:56] Actually, that's a good point that I should add to the wiki page.
[04:56] Going to guess it's installability issues, which yeah that's sort of expected in Ubuntu since there's no systemd alternative.
[04:57] Yeah, even on notest.
[04:57] Hm.
[04:57] Which if sway depends on that... Hrm!
[04:57] Oh, well, I wonder why I thought that in the first place, since it wouldn't show as a candidate anyway if autopkgtests didn't pass.
[04:57] Unit193: It does?
[04:57] Wat?
[04:58] - Make build-deps libsystemd-dev and libelogind-dev alternatives
[04:58] So we'll want the new sway, good.
[04:58] * libelogind0 is ABI compatible with libsystemd0 so conflict, replace and
[04:59] provide libsystemd0 (=${source:Upstream-Version}), and also install
[04:59] Wait... wat?
[04:59] libsystemd.so symlinks (Closes: #923244).
[04:59] ...You understand the point of elogind, yes?
[04:59] I actually don't. To Google I go.
[05:00] To replace systemd's logind, so one can have gnome or other such things on a sysvinit/openrc system.
[05:01] Elogind has been developed for use in GuixSD, the OS distribution of GNU Guix.
[05:01] Hmm.
[05:01] Okay.
[05:01] Unit193: So you're telling me that sway depends on this thing?
[05:02] * wxl blinks
[05:02] Oh.
[05:02] No, I'm glad it isn't.
[05:02] I guess they just want it to work in Devuan? :P
[05:03] https://sources.debian.org/src/sway/1.1.1-1/meson.build/#L55 well it *has* support.
[05:04] I wonder what went through Drew's mind to do that.
[05:04] Why not?
[05:05] Seems like a good thing to do.
[05:05] I'm just wondering why. :)
[05:06] https://files.devuan.org/devuan_ascii/Release_notes.txt flavors are (consolekit + slim) + (XFCE|MATE) || (elogind + lightdm) + (Cinnamon|KDE|LXQt)
[05:06] That's like wondering why one would support logind..
[05:06] almost exactly
[05:06] Anyway, doesn't matter. Things are sync'd.
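The changelog entries quoted above boil down to a packaging pattern along these lines (a sketch of the idea, not the actual sway debian/control):

```
# Sketch of the build-dep alternative described in the changelog above:
# build against systemd's logind development headers, or against
# elogind's drop-in replacement on non-systemd systems.
Build-Depends: libsystemd-dev | libelogind-dev
```

Because libelogind0 declares Provides/Conflicts/Replaces against libsystemd0 and ships the libsystemd.so symlinks, binaries built either way link against the same ABI.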
[05:07] Ohhh, it just clicked to me that it's drop-in for their use case.
[05:07] Okay, that's cool.
[05:07] the sleeper has awakened
[05:12] There, added update_output_notest.txt to ProposedMigration.
[05:26] Wasn't there an openbox-style Wayland compositor?
=== mwhudson_ is now known as mwhudson
[09:09] seb128, i'm -1 about nagging uploaders about infra issues, too, but this is not the sru team's fault, the infra should retry automatically
[09:10] rbalint, right, fair enough
[09:52] seb128: I agree it's tedious to drive SRUs because of the false positive autopkgtests
=== ricab is now known as ricab|bbl
[09:52] seb128: but I don't think it makes sense for the process to require that the limited-in-size SRU team do that driving when any uploader can do it. It'd be more efficient if everyone pitched in.
[09:53] seb128: separately, if we can figure out how to reduce the false positives, that'd be good too :)
[09:53] rbasak, well, I know that some uploaders get confused on what to do when the test failure is an infra issue and just do nothing, and then the SRU sits there for ages
[09:53] indeed
[09:54] sil2100, seb128: I suggest a tag that means "autopkgtest results have false positives"
[09:55] The notification could ask the uploader to tag if they've checked, and we could have the pending-sru report mark the autopkgtest failures green or something like that.
[09:55] Then uploaders will know what to do and the SRU team can focus on the ones that have been looked at first.
[09:57] yeah, that would be better/less confusing I think
[09:57] rbasak: might be a possibility
[09:57] Anyway, even without the bug notification, in most cases people would anyway 'be confused' if they checked the autopkgtest failures, which we expect them to do
[09:57] That indicator-session case from yesterday passed once the kernel had fully published out, by the way.
[09:58] As I said, nothing changed in the process or anything, we always expected people to look at autopkgtest results of their SRUs
[09:58] Yes - adding the notification hasn't made anything worse IMHO.
[09:58] Laney, you mean gnome-settings-daemon?
[09:58] That was the trigger.
[09:58] ah, right
[09:58] crap, I did a test retry
[09:59] thx awesome bar for completing to the retry url for me :/
=== ricab|bbl is now known as ricab
[12:19] xnox: I've been testing apache2 with client cert auth following the openssl 1.1.1 update
[12:19] I don't have a conclusion yet, but I've been finding some things
[12:19] for example, in disco, it doesn't even establish a connection, because browsers try tls 1.3, and that fails with this error in the logs:
[12:20] [Thu Jun 27 20:55:43.549681 2019] [ssl:error] [pid 2620:tid 140250923878144] [client 10.0.100.1:60000] AH10129: verify client post handshake, referer: https://bionic-apache-client-cert.lxd/
[12:20] [Thu Jun 27 20:55:43.549715 2019] [ssl:error] [pid 2620:tid 140250923878144] [client 10.0.100.1:60000] AH10158: cannot perform post-handshake authentication, referer: https://bionic-apache-client-cert.lxd/
[12:20] [Thu Jun 27 20:55:43.549732 2019] [ssl:error] [pid 2620:tid 140250923878144] SSL Library Error: error:14268117:SSL routines:SSL_verify_client_post_handshake:extension not received
[12:20] ahasenack: sounds like we should bump it to Critical?
[12:20] I googled that extension, firefox seems to have it in some version, and chrome/chromium hasn't implemented it yet
[12:20] rbasak: I just want to be sure my setup is ok first, client cert auth is messy to set up and prone to errors
[12:21] I tried disco thinking it would be my good case, but it failed there too. I wanted to try xenial now
[12:21] Ah
[12:21] Somehow I missed that it was limited to client cert auth only, sorry.
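For context, the kind of vhost being exercised here probably looks something like the sketch below (certificate paths invented for illustration; the actual steps are in the pastebin linked later). The key point is that requiring the client cert only for part of the site forces a renegotiation mid-connection, which is exactly what TLS 1.3 replaced with post-handshake authentication:

```apache
# Hypothetical minimal mod_ssl client-cert setup (paths are made up).
<VirtualHost *:443>
    ServerName bionic-apache-client-cert.lxd
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.pem
    SSLCertificateKeyFile /etc/ssl/private/server.key
    SSLCACertificateFile  /etc/ssl/certs/client-ca.pem
    # Requiring the cert only inside a <Location>/<Directory> triggers a
    # renegotiation after the initial handshake; with TLS 1.3 that
    # becomes post-handshake auth, which browsers don't all support yet.
    <Location /secure>
        SSLVerifyClient require
        SSLVerifyDepth 1
    </Location>
</VirtualHost>
```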
[12:21] we run on client cert auth all over at $work, and as of today everything is horribly slow :/
[12:21] on tls v1.2, bionic, I get the same error as reported in https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
[12:21] Launchpad bug 1833039 in openssl (Ubuntu) "18.04/Apache2: rejecting client initiated renegotiation due to openssl 1.1.1" [Undecided,Confirmed]
[12:22] maswan: that's the bug above, a long timeout
[12:22] rbasak: there is also something about mod_reqtimeout, which we have set to 20s, I think the timeouts are related to that kicking in. I tried disabling it, but then the connection never times out
[12:22] something else to investigate
[12:23] yeah, our wiki maintainer already found that bug, and luckily it's almost the weekend here and we hope for better times after the weekend. :)
[12:23] the upstream commit pointed at in the bug talks about tls v1.3 only: mod_ssl: disable check for client initiated renegotiations with TLS 1.3.
[12:23] but we are seeing the timeouts with 1.2
[12:23] I think there are many things going on and we need to separate them
[12:24] first, I don't think apache in bionic has tls v1.3 support
[12:24] hm. it's 1.2 here, at least when it finally manages to connect and load
[12:33] ahasenack, so in your setup it just fails, no long waits
[12:33] not what I said
[12:33] ahasenack: ouch
[12:33] * paride re-reads
[12:33] in disco it fails, because tls 1.3 is negotiated, but apache complains the browser lacks a certain extension
[12:33] I found bugs about that in firefox and chromium
[12:35] in bionic I reproduce the timeout
[12:35] and I think mod_reqtimeout is playing a role here, maybe even helping, I'm not sure yet
[12:36] ahasenack, and in disco there is no fallback to tls1.2, if I get it right
[12:36] for client cert auth, nope
[12:37] the upstream bugs need a closer inspection (firefox, chromium). There was one for apache too, when it was thought it was an apache bug, but apparently that was closed as invalid (that one I didn't even skim yet)
[12:40] ahasenack, I wonder if changing the MinProtocol / MaxProtocol options in /etc/ssl/openssl.cnf would have an effect in this case
[12:41] apache has settings for that
[12:42] I mean, if you are thinking about a workaround, it could be set in apache, if there is one
[12:42] I'm not clear on the renegotiation feature, if it's allowed or not
[12:42] openssl's s_client can trigger a renegotiation, via the R key after the connection is established, and it's refused all the time by the server
[12:42] I'm not sure yet what's the expected behavior
[12:45] these are my raw test setup instructions: http://paste.ubuntu.com/p/2fTg2yM2Ht/
=== ricab_ is now known as ricab
[14:18] ahasenack: paride: so upstream have refactored SSLVerifyClient processing inside location stanzas
[14:18] ahasenack: paride: due to observed brokenness.
[14:19] ahasenack: paride: i don't think that's in any release.
[14:19] xnox: are you saying this in response to my comment about things working if I move that outside of a location block?
[14:19] ahasenack: paride: shall we try capping apache2 to tlsv1.2 for now, across the board, until we can get a new apache upstream release which is actually compatible and performant with tlsv1.3
[14:20] xnox: I don't think capping to 1.2 fixes it. That's in fact where I see the problem
[14:20] ahasenack: yes, moving things outside of the block fixes things. but existing deployments will not change their apache config.
[14:20] ahasenack: SIGH
[14:20] tls 1.3 has another issue with browser support, that post-auth handshake situation
[14:20] ahasenack: it seems like we are about to do quite a fat apache2 update of core & mod_ssl then
[14:20] that we can "fix" by capping to 1.2
[14:20] ahasenack: right, so capping would fix that bit only, but not everything.
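On 18.04 and later, the system-wide knob mentioned above lives in /etc/ssl/openssl.cnf; a sketch of what capping it would look like (section name as shipped by Ubuntu's openssl 1.1.1 packaging, MaxProtocol added here as an assumption, not a default):

```
# /etc/ssl/openssl.cnf -- system-wide OpenSSL defaults. Capping
# MaxProtocol here affects every OpenSSL consumer on the machine, which
# is why the per-vhost SSLProtocol directive in apache is usually the
# more targeted place for such a workaround.
[system_default_sect]
MinProtocol = TLSv1.2
MaxProtocol = TLSv1.2
```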
[14:21] the timeout is still there
[14:21] yes, timeout / long delay got fixed in the refactor patch series
[14:21] ahasenack: what about eoan? i guess it's also still broken?
[14:21] xnox: in this bug the commenter says it's fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=62691#c5
[14:21] bz.apache.org bug 62691 in mod_ssl "OpenSSL 1.1.1 and client-certificate renegotiation causes 1 minute delay" [Normal,Resolved: fixed]
[14:21] yeap that.
[14:21] the 1.3 patch is fully in 2.4.39
[14:22] which we don't have in eoan, because of the Debian freeze
[14:22] when i looked at backporting that to bionic, it looked awkward.
[14:22] I was just discussing this with the team this week, if we should go ahead of debian for some packages
[14:22] ahasenack: it's not unusual for us to do that during a freeze.
[14:22] ahasenack: and/or ask debian to upload things into experimental for us to merge/sync from
[14:23] xnox: I haven't tried eoan. I stopped at disco while still trying to understand the issues at play, differentiating the timeout from the tls v1.3 post-auth handshake issue
[14:24] disco+ can be tested if we disable tls 1.3
[14:24] I can do that
[14:24] just need to be careful, I'm trying multiple things at the same time
[14:24] and certificate auth quickly gets messy
[14:24] indeed
[14:25] ahasenack, true; I'm trying to replicate your setup from the pastebin
[14:25] yeah, it's a bit raw
[14:25] the reqtimeout change, for example,
[14:26] with that module disabled, the connection doesn't complete, it stays stuck
[14:26] and I saw in another comment in one of these multiple bugs that 2.4.39 has an extra setting in that module specifically for ssl handshakes
[14:26] which one user was hoping to use as a workaround
[14:27] hmmmm
[14:27] ahasenack: rebuild apache against openssl 1.0.2
[14:33] hmmm
[14:33] not wanting to get hopes up
[14:33] but I think I found a patch that worked
[14:33] xnox: fwiw
[14:33] https://bugs.launchpad.net/ubuntu/+source/kopano-webapp/+bug/1834052
[14:33] Launchpad bug 1834052 in kopano-webapp (Ubuntu) "autopkgtest failures: Chromium-Related in Tests" [Medium,In progress]
[14:33] regarding the badtest
[14:33] https://github.com/apache/httpd/commit/bbedd8b80e50647e09f2937455cc57565d94a844
[14:34] xnox: just to mention i noticed the same problem when nginx got stuck as well
[14:34] i've asked in -release for it to either be ignored or badtested several times with E:NoResponse
[14:37] yea, that patch worked
[14:37] xnox: paride: https://github.com/apache/httpd/commit/bbedd8b80e50647e09f2937455cc57565d94a844
[14:37] I'll ask for people to test the ppa I have
[14:38] https://launchpad.net/~ahasenack/+archive/ubuntu/apache2-client-cert-1833039
[14:38] ahasenack, the description and comments sound very promising
[14:39] it fixed my test case at least
[14:39] tried multiple times: package from distro, fail/timeout; upgrade to ppa, works
[14:45] ahasenack: that looks ok
[14:46] ahasenack: but please test it in all apache modes.
[14:46] which modes do you mean?
[14:46] ahasenack: i.e. forking / threading / blocking / non-blocking "worker" engine stuff. => not sure about terminology, but does that make any sense at all?
[14:46] ah, mpm
[14:47] given the setup instructions I have, I could even add a dep8 test for this probably, but that would take more time
[14:47] ahasenack: there are cases where unsetting auto_retry may result in things not consumed right (hang) if the app doesn't know how to handle the SSL read/write_more_wanted error codes.
[14:47] well, that is committed upstream already
[14:47] ahasenack: just restart the server locally with one vs the other engine. should be good enough.
[14:47] I could check if there were follow-up commits
[14:48] ahasenack: true, just thinking if it has more commit deps or follow-ups => yes that.
[14:48] ok
[14:48] (you are on track there)
[14:49] maybe it's not even the auto-retry that fixed it, the other bit of that commit looks suspicious
[14:49] " For OpenSSL >=1.1.1, turn on client cert support "
[14:49] the autoretry could be specific to tls v1.3
[14:50] anyway
[14:53] xnox: do we want to push any Ubuntu level changes to fix the tests in kopano-webapp so we don't need to badtest it? oSoMoN did some debugging and found it's specifically due to passing a log-path arg to chromedriver (and snap isolation being the problem)
[14:53] rather than just badtesting the tests
[14:53] at least, until Debian does something about the test (oSoMoN made a Pull Req in Debian to fix the test, but Debian is frozen so...)
[15:11] teward: if there is a fix, i'm happy to upload it into ubuntu.
[15:11] teward: link?
[15:12] xnox: https://salsa.debian.org/giraffe-team/kopano-webapp/merge_requests/1/diffs
[15:12] it requires alteration to d/tests/test_webapp.py
[15:12] but just adjusting the service_args
[15:12] author is oSoMoN
[15:12] ack
[15:12] xnox: I can push it in also (yay for coredev) but wanted to check BEFORE making any changes
[15:12] ahasenack: yeah that other bit, is weird.
[15:13] teward: it looks harmless, go for it.
[15:13] ack
[15:13] .. bleh where'd i put the code...
[15:13] teward: as long as, like we don't collect the log from that path. Or like we assert on stderr and without the log, it will start producing stderr.
[15:13] ... oh that's right it was in my tmpfs *derp*
[15:13] xnox: AFAICT that's not even a retrievable artifact after autopkgtests have run
[15:13] my guess is it'll spit out to stdout/stderr
[15:14] but oSoMoN would have details there
[15:14] test it yourself
[15:14] xnox: that is, however, the reason chromedriver is crashing
[15:14] don't blindly upload
[15:14] Laney: i am testing it - slowly ;)
[15:14] * teward is currently stuck in a "Downloading files" state
[15:19] * Laney sends some bandwidth over in the post
[15:22] would be good if the installing snap stage showed progress
[15:23] indeed
[15:23] Laney: i'm running the autopkgtest locally, it's slow only because of the Internet being stupid.
[15:25] it's the uplink speed here; there's a lot of visitors on site too, so it breaks things. and apparently my LXD image for Eoan autopkgtests is broken soooo..... i'mma have to rebuild it.
[15:25] * teward rebuilds the LXD image
[15:27] * teward runs the autopkgtest
[15:28] you might have some trouble installing the snap there
[15:28] (that's the armhf problem)
[15:30] Laney: amd64 can install it fine
[15:30] my infra's amd64
[15:30] the amd64 failure is due to log-path and snap confining
[15:30] it is not an arch problem, it is a VM vs LXD problem
[15:30] Laney: well SO FAR I haven't had the issue in my local snaps
[15:30] s/snaps/LXD tests
[15:31] it's possible to have lxd containers which can install snaps, sure
[15:31] the ones we have for autopkgtest can't, though
[15:31] right but i'm talking for my local tests, not the ones in use on the autopkgtest env.
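The gist of the fix being discussed is removing the --log-path argument from the chromedriver service_args in d/tests/test_webapp.py; chromedriver running from the confined snap cannot write to an arbitrary log path. A minimal illustration of that change (argument values invented here; the real diff is in the salsa merge request linked above):

```python
# Sketch of the d/tests/test_webapp.py change: drop --log-path from the
# args passed to chromedriver, since snap confinement blocks writing to
# that path; --verbose output then goes to stderr instead, which the
# test's allow-stderr restriction already permits.
service_args = ["--verbose", "--log-path=/tmp/chromedriver.log"]  # before (values invented)
service_args = [arg for arg in service_args
                if not arg.startswith("--log-path")]              # after
print(service_args)  # only --verbose remains
```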
[15:31] if we can pass the amd64/i386 tests we can badtest the armhf still
[15:31] THAT i already asked for :p
[15:31] I said might
[15:31] and got E:NoReply in -release
[15:32] and that badtest is already there
[15:34] teward, it's likely that without a --log-file parameter, the --verbose switch to chromedriver will make it write to stderr, but that should be fine because the autopkgtest already has the allow-stderr restriction
[15:35] oSoMoN: indeed.
[15:35] https://sites.google.com/a/chromium.org/chromedriver/logging confirms it will write to stderr
[15:35] i don't have a VM to autopkgtest with damn snaps.
[15:36] does autopkgtest have a vmware hook for creating VMs for testing? just curious
[15:36] i have VMware Workstation on this system which is why I ask :|
[15:39] nein
[15:40] meh, qemu it is then.
[15:40] * teward installs into the system
[15:40] at least i have a local archive mirror so it will do THAT part fast heh
[15:48] teward: oSoMoN: It's still failing for me fwiw https://paste.ubuntu.com/p/GJJPNB3BwR/
[15:48] Laney: *with* the patch?
[15:48] yes
[15:48] oSoMoN: sounds like it's still fubar.
[15:49] meh
[15:49] still going to get my Eoan qemu autopkgtest env set up
[15:49] sorry :(
[15:49] but it's slowwwwwwwwwww (56% of 500MB and I'm jacked right into the network)
[15:57] Laney, passing here, in an eoan amd64 VM
[15:57] run it with autopkgtest, that's what I've been doing
[15:57] yes, I've run it with autopkgtest -B . -- null
[15:57] verified failing without the patch, and passing with it
[15:58] ...
[15:58] I mean it fails
[15:58] try the qemu runner
[15:58] autopkgtest --output-dir out2 --shell-fail --timeout-copy=6000 --apt-upgrade kopano-webapp_3.5.6+dfsg1-1ubuntu1.dsc -- qemu --ram-size=8192 ../reallytemp/autopkgtest-eoan-amd64.img
[15:59] gotta go now, family waiting for me, but please keep me posted (comments on the bug), thanks!
[16:03] Laney: i'll do so as soon as this thing finishes building the image. It's sat on libc6 triggers :|
=== ShibaInu is now known as Shibe
[16:56] hello xnox, can I fix borgbackup please?
[17:03] xnox: confirmed disco works with client cert auth with no timeouts if I disable tlsv1.3 in the vhost (SSLProtocol all -SSLv3 -TLSv1.3). With TLSv1.3 I get
[17:03] out of the box, I get https://pastebin.ubuntu.com/p/XdWq2xzq7Z/ (out of the box we have tlsv1.3 enabled)
[17:03] so that will need a different fix, even if it's disabling tlsv1.3
[17:04] FF needs release 68 to work as far as I can tell, we are at 67
=== ricab is now known as ricab|bbl
=== ricab|bbl is now known as ricab
=== Wryhder is now known as Lucas_Gray
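The "SSLProtocol all -SSLv3 -TLSv1.3" workaround confirmed above amounts to capping the negotiable protocol range at TLS 1.2 so clients never reach the post-handshake-auth path. The same cap can be expressed with Python's stdlib ssl module; this is just an illustration of the version cap, not part of the apache test setup:

```python
import ssl

# Mirror of "SSLProtocol all -SSLv3 -TLSv1.3": a server-side context
# that refuses to negotiate TLS 1.3, so renegotiation-based client cert
# auth keeps working with browsers lacking post-handshake auth support.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # SSLv3/early TLS stay off
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # the "-TLSv1.3" part
print(ctx.maximum_version.name)  # prints TLSv1_2
```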