[00:16] <kyrofa> sarnold, alright, done, any chance you'd be willing to sponsor it for us?
[00:18] <sarnold> kyrofa: sorry, I'm not a coredev :(
[00:21] <Unit193> python-pygraphviz is in universe.
[00:22] <sarnold> oh! so motu would suffice?
[00:22] <sarnold> (which, alas, I am also not, but that's probably a larger pool :)
[00:23] <Unit193> Yes sir.
[04:42] <Unit193> Is there anything preventing someone sync'ing libjson-c from experimental?
[04:45] <tsimonq2> Unit193: .
[04:45] <tsimonq2> When are you going to become a Core Developer, again? :P
[04:45] <Unit193> Why would I?  I like the fringe packages. :3
[04:45] <tsimonq2> ¯\_(ツ)_/¯
[04:45] <Unit193> (Read: I'm only looking into this because of 'sway')
[04:45] <tsimonq2> OOOOOH
[04:46] <tsimonq2> What's the progress on that?
[04:46] <tsimonq2> A little birdie tells me someone is interested in starting an Ubuntu Sway flavor.
[04:46] <Unit193> depwait on json-c .13 :P
[04:46] <tsimonq2> :P
[04:46] <Unit193> Urgh, not another...
[04:46] <wxl> your what hurts?
[04:49] <tsimonq2> Unit193: Between Ubuntu Sway, Ubuntu Cinnamon, and Ubuntu Unity, I wonder if someone is already betting who actually pulls through first. :P
[04:49] <tsimonq2> (All three of those are currently in some stage of "in the works.")
[04:49] <tsimonq2> Now, if we had an active TB that could actually process a flavor application...
[04:49]  * tsimonq2 runs
[04:50] <Unit193> Who's behind them?
[04:50] <tsimonq2> Unit193: Oh look, your binaries are in the NEW queue.
[04:52] <tsimonq2> Ubuntu Unity has... Khurshid Alam? I'm not sure who else. Ubuntu Cinnamon is by someone I was told I should meet when I went to SELF but never got the chance to, so *shrug*. Ubuntu Sway is a few flavor folks.
[04:53]  * tsimonq2 glares at bashfulrobot 
[04:53] <tsimonq2> :P
[04:54] <wxl> i'd be interested in trying a sway flavor actually
[04:55] <tsimonq2> wxl: That makes four existing flavor people interested.
[04:55] <Unit193> Right, so reading proposed migration output...
[04:55] <wxl> oh yeah and i'm starting a fbui flavor.
[04:55] <tsimonq2> Unit193: Oh?
[04:56] <Unit193> skipped: elogind (0, 5, 11) got: 7+0: a-1:a-4:a-0:i-0:p-0:s-2 * arm64: elogind, libelogind-dev, libelogind0, libpam-elogind
[04:56] <tsimonq2> Actually, that's a good point that I should add to the wiki page.
[04:56] <Unit193> Going to guess it's installability issues, which yeah that's sort of expected in Ubuntu since there's no systemd alternative.
[04:57] <tsimonq2> Yeah, even on notest.
[04:57] <tsimonq2> Hm.
[04:57] <Unit193> Which if sway depends on that...Hrm!
[04:57] <tsimonq2> Oh, well, I wonder why I thought that in the first place, since it wouldn't show as a candidate anyway if autopkgtests didn't pass.
[04:57] <tsimonq2> Unit193: It does?
[04:57] <tsimonq2> Wat?
[04:58] <Unit193>     - Make build-deps libsystemd-dev and libelogind-dev alternatives   So we'll want the new sway, good.
[04:58] <tsimonq2>   * libelogind0 is ABI compatible with libsystemd0 so conflict, replace and
[04:59] <tsimonq2>     provide libsystemd0 (=${source:Upstream-Version}), and also install
[04:59] <tsimonq2> Wait... wat?
[04:59] <tsimonq2>     libsystemd.so symlinks (Closes: #923244).
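The "build-deps ... alternatives" line from that changelog corresponds to a one-line debian/control change; a sketch of the pattern, with the package list trimmed since only the alternation matters:

    Build-Depends: ...,
                   libsystemd-dev | libelogind-dev,
                   ...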
[04:59] <Unit193> ...You understand the point of elogind, yes?
[04:59] <tsimonq2> I actually don't. To Google I go.
[05:00] <Unit193> To replace systemd's logind, so one can have gnome or other such things on a sysvinit/openrc system.
[05:01] <tsimonq2> Elogind has been developed for use in GuixSD, the OS distribution of GNU Guix.
[05:01] <tsimonq2> Hmm.
[05:01] <tsimonq2> Okay.
[05:01] <tsimonq2> Unit193: So you're telling me that sway depends on this thing?
[05:02]  * wxl blinks
[05:02] <tsimonq2> Oh.
[05:02] <tsimonq2> No, I'm glad it isn't.
[05:02] <tsimonq2> I guess they just want it to work in Devuan? :P
[05:03] <Unit193> https://sources.debian.org/src/sway/1.1.1-1/meson.build/#L55 well it *has* support.
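A rough meson sketch of the fallback pattern the linked meson.build implements; variable and option names here are illustrative, not copied from sway's actual file:

    logind = dependency('libsystemd', required: false)
    if not logind.found()
        logind = dependency('libelogind', required: false)
    endif
    if logind.found()
        add_project_arguments('-DHAVE_LOGIND=1', language: 'c')
    endif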
[05:04] <tsimonq2> I wonder what went through Drew's mind to do that.
[05:04] <Unit193> Why not?
[05:05] <Unit193> Seems like a good thing to do.
[05:05] <tsimonq2> I'm just wondering why. :)
[05:06] <wxl> https://files.devuan.org/devuan_ascii/Release_notes.txt flavors are (consolekit + slim) + (XFCE|MATE) || (elogind + lightdm) + (Cinnamon|KDE|LXQt)
[05:06] <Unit193> That's like wondering why one would support logind..
[05:06] <wxl> almost exactly
[05:06] <Unit193> Anyway, doesn't matter.  Things are sync'd.
[05:07] <tsimonq2> Ohhh, it just clicked to me that it's drop-in for their use case.
[05:07] <tsimonq2> Okay, that's cool.
[05:07] <wxl> the sleeper has awakened
[05:12] <tsimonq2> There, added update_output_notest.txt to ProposedMigration.
[05:26] <Unit193> Wasn't there an openbox style Wayland compositor?
[09:09] <rbalint> seb128, i'm -1 about nagging uploaders about infra issues, too, but this is not the sru team's fault, the infra should retry automatically
[09:10] <seb128> rbalint, right, fair enough
[09:52] <rbasak> seb128: I agree it's tedious to drive SRUs because of the false positive autopkgtests
[09:52] <rbasak> seb128: but I don't think it makes sense for the process to require that the limited-in-size SRU team do that driving when any uploader can do it. It'd be more efficient if everyone pitched in.
[09:53] <rbasak> seb128: separately, if we can figure out how to reduce the false positives, that'd be good too :)
[09:53] <seb128> rbasak, well, I know that some uploaders get confused on what to do when the test failure is an infra issue and just do nothing, and then the SRU sits there for ages
[09:53] <seb128> indeed
[09:54] <rbasak> sil2100, seb128: I suggest a tag that means "autopkgtest results have false positives"
[09:55] <rbasak> The notification could ask the uploader to tag if they've checked, and we could have the pending-sru report mark the autopkgtest failures green or something like that.
[09:55] <rbasak> Then uploaders will know what to do and the SRU team can focus on the ones that have been looked at first.
[09:57] <seb128> yeah, that would be better/less confusing I think
[09:57] <sil2100> rbasak: might be a possibility
[09:57] <sil2100> Anyway, even without the bug notification, in most cases people would 'be confused' anyway if they checked the autopkgtest failures, which we expect them to do
[09:57] <Laney> That indicator-session case from yesterday passed once the kernel had fully published out, by the way.
[09:58] <sil2100> As I said, nothing changed in the process or anything, we always expected people to look at autopkgtest results of their SRUs
[09:58] <rbasak> Yes - adding the notification hasn't made anything worse IMHO.
[09:58] <seb128> Laney, you mean gnome-settings-daemon?
[09:58] <Laney> That was the trigger.
[09:58] <seb128> ah, right
[09:58] <seb128> crap, I did a test retry
[09:59] <seb128> thx awesome bar for autocompleting to the retry url for me :/
[12:19] <ahasenack> xnox: I've been testing apache2 with client cert auth following the openssl 1.1.1 update
[12:19] <ahasenack> I don't have a conclusion yet, but I've been finding some things
[12:19] <ahasenack> for example, in disco, it doesn't even establish a connection, because browsers try tls 1.3, and that fails with this error in the logs:
[12:20] <ahasenack> [Thu Jun 27 20:55:43.549681 2019] [ssl:error] [pid 2620:tid 140250923878144] [client 10.0.100.1:60000] AH10129: verify client post handshake, referer: https://bionic-apache-client-cert.lxd/
[12:20] <ahasenack> [Thu Jun 27 20:55:43.549715 2019] [ssl:error] [pid 2620:tid 140250923878144] [client 10.0.100.1:60000] AH10158: cannot perform post-handshake authentication, referer: https://bionic-apache-client-cert.lxd/
[12:20] <ahasenack> [Thu Jun 27 20:55:43.549732 2019] [ssl:error] [pid 2620:tid 140250923878144] SSL Library Error: error:14268117:SSL routines:SSL_verify_client_post_handshake:extension not received
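Those AH10129/AH10158 errors are what mod_ssl emits when a per-location SSLVerifyClient forces post-handshake authentication under TLS 1.3; a minimal sketch of the kind of vhost that triggers them, with all paths and names illustrative:

    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/server.pem
        SSLCertificateKeyFile /etc/ssl/private/server.key
        SSLCACertificateFile  /etc/ssl/certs/client-ca.pem
        <Location />
            SSLVerifyClient require
        </Location>
    </VirtualHost>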
[12:20] <rbasak> ahasenack: sounds like we should bump it to Critical?
[12:20] <ahasenack> I googled that extension, firefox seems to have it in some version, and chrome/chromium hasn't implemented it yet
[12:20] <ahasenack> rbasak: I just want to be sure my setup is ok first, client cert auth is messy to setup and prone to errors
[12:21] <ahasenack> I tried disco thinking it would be my good case, but it failed there too. I wanted to try xenial now
[12:21] <rbasak> Ah
[12:21] <rbasak> Somehow I missed that it was limited to client cert auth only, sorry.
[12:21] <maswan> we run on client cert auth all over at $work, and as of today everything is horribly slow :/
[12:21] <ahasenack> on tls v1.2, bionic, I get the same error as reported in https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
[12:22] <ahasenack> maswan: that's the bug above, a long timeout
[12:22] <ahasenack> rbasak: there is also something about mod_reqtimeout, which we have set to 20s, I think the timeouts are related to that kicking in. I tried disabling it, but then the connection never times out
[12:22] <ahasenack> something else to investigate
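For context, the 20s value comes from mod_reqtimeout's stock configuration, which on Ubuntu is roughly this (quoted from memory; the shipped file is /etc/apache2/mods-available/reqtimeout.conf):

    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500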
[12:23] <maswan> yeah, our wiki maintainer already found that bug, and luckily it's almost weekend here and we hope for better times after the weekend. :)
[12:23] <ahasenack> the upstream commit pointed at in the bug talks about tls v1.3 only:     mod_ssl: disable check for client initiated renegotiations with TLS 1.3.
[12:23] <ahasenack> but we are seeing the timeouts with 1.2
[12:23] <ahasenack> I think there are many things going on and we need to separate them
[12:24] <ahasenack> first, I don't think apache in bionic has tls v1.3 support
[12:24] <maswan> hm. it's 1.2 here, at least when it finally manages to connect and load
[12:33] <paride> ahasenack, so in your setup it just fails, no long waits
[12:33] <ahasenack> not what I said
[12:33] <xnox> ahasenack:  ouch
[12:33]  * paride re-reads
[12:33] <ahasenack> in disco it fails, because tls 1.3 is negotiated, but apache complains the browser lacks a certain extension
[12:33] <ahasenack> I found bugs about that in firefox and chromium
[12:35] <ahasenack> in bionic I reproduce the timeout
[12:35] <ahasenack> and I think mod_reqtimeout is playing a role here, maybe even helping, I'm not sure yet
[12:36] <paride> ahasenack, and in disco there is no fallback to tls1.2, if I get it right
[12:36] <ahasenack> for client cert auth, nope
[12:37] <ahasenack> the upstream bugs need a closer inspection (firefox, chromium). There was one for apache too, when it was thought it was an apache bug, but apparently that was closed as invalid (that one I didn't even skim yet)
[12:40] <paride> ahasenack, I wonder if changing the MinProtocol / MaxProtocol options in /etc/ssl/openssl.cnf would have an effect in this case
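A sketch of the system-wide knobs paride means, roughly as they appear in Ubuntu's /etc/ssl/openssl.cnf (the stock file differs in detail):

    [system_default_sect]
    MinProtocol = TLSv1.2
    MaxProtocol = TLSv1.2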
[12:41] <ahasenack> apache has settings for that
[12:42] <ahasenack> I mean, if you are thinking about a workaround, it could be set in apache, if there is one
[12:42] <ahasenack> I'm not clear on the renegotiation feature, if it's allowed or not
[12:42] <ahasenack> openssl's s_client can trigger a renegotiation, via the R key after the connection is established, and it's refused all the time by the server
[12:42] <ahasenack> I'm not sure yet what's the expected behavior
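The renegotiation probe ahasenack describes looks like this (hostname reused from the earlier log lines; s_client's interactive "R" command applies to TLS 1.2 and below only):

    openssl s_client -connect bionic-apache-client-cert.lxd:443 -tls1_2
    # once connected, type a line containing just "R" to request renegotiation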
[12:45] <ahasenack> these are my raw test setup instructions: http://paste.ubuntu.com/p/2fTg2yM2Ht/
[14:18] <xnox> ahasenack:  paride: so upstream have refactored SSLVerifyCLient processing inside location stanzas
[14:18] <xnox> ahasenack:  paride: due to observed brokenness.
[14:19] <xnox> ahasenack:  paride: i don't think that's in any release.
[14:19] <ahasenack> xnox: are you saying this in response to my comment about things working if I move that outside of a location block?
[14:19] <xnox> ahasenack:  paride: shall we try capping apache2 to tlsv1.2 for now, across the board, until we can get a new apache upstream release which is actually compatible and performant with tlsv1.3?
[14:20] <ahasenack> xnox: I don't think capping to 1.2 fixes it. That's in fact where I see the problem
[14:20] <xnox> ahasenack:  yes, moving things outside of the block fixes things. but existing deployments will not change their apache config.
[14:20] <xnox> ahasenack:  SIGH
[14:20] <ahasenack> tls 1.3 has another issue with browser support, that post-auth handshake situation
[14:20] <xnox> ahasenack:  it seems like we are about to do a quite a fat apache2 update of core & mod_ssl then
[14:20] <ahasenack> that we can "fix" by capping to 1.2
[14:20] <xnox> ahasenack:  right, so capping would fix that bit only, but not everything.
[14:21] <ahasenack> the timeout is still there
[14:21] <xnox> yes, timeout  / long delay got fixed in the refactor patch series
[14:21] <xnox> ahasenack:  what about eoan? i guess it's also still broken?
[14:21] <ahasenack> xnox: in this bug the commenter says it's fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=62691#c5
[14:21] <xnox> yeap that.
[14:21] <ahasenack> the 1.3 patch is fully in 2.4.39
[14:22] <ahasenack> which we don't have in eoan, because of the debian freeze
[14:22] <xnox> when i looked at backporting that to bionic, it looked awkward.
[14:22] <ahasenack> I was just discussing this with the team this week, if we should go ahead of debian for some packages
[14:22] <xnox> ahasenack:  it's not unusual for us to do that during a freeze.
[14:22] <xnox> ahasenack:  and/or ask debian to upload things into experimental for us to merge/sync from
[14:23] <ahasenack> xnox: I haven't tried eoan. I stopped at disco while still trying to understand the issues at play, differentiating the timeout from the tls v1.3 post-auth handshake issue
[14:24] <ahasenack> disco+ can be tested if we disable tls 1.3
[14:24] <ahasenack> I can do that
[14:24] <ahasenack> just need to be careful, I'm trying multiple things at the same time
[14:24] <ahasenack> and certificate auth quickly gets messy
[14:24] <xnox> indeed
[14:25] <paride> ahasenack, true; I'm trying to replicate your setup from the pastebin
[14:25] <ahasenack> yeah, it's a bit raw
[14:25] <ahasenack> the reqtimeout change, for example,
[14:26] <ahasenack> with that  module disabled, the connection doesn't complete, it stays stuck
[14:26] <ahasenack> and I saw in another comment in one of these multiple bugs that 2.4.39 has an extra setting in that module specifically for ssl handshakes
[14:26] <ahasenack> which one user was hoping to use as a workaround
[14:27] <xnox> hmmmm
[14:27] <xnox> ahasenack:  rebuild apache against openssl 1.0.2
[14:33] <ahasenack> hmmm
[14:33] <ahasenack> not wanting to get hopes up
[14:33] <ahasenack> but I think I found a patch that worked
[14:33] <teward> xnox: fwiw
[14:33] <teward> https://bugs.launchpad.net/ubuntu/+source/kopano-webapp/+bug/1834052
[14:33] <teward> regarding the badtest
[14:33] <ahasenack> https://github.com/apache/httpd/commit/bbedd8b80e50647e09f2937455cc57565d94a844
[14:34] <teward> xnox: just to mention i noticed the same problem when nginx got stuck as well
[14:34] <teward> i've asked in -release for it to either be ignored or badtested several times with E:NoResponse
[14:37] <ahasenack> yea, that patch worked
[14:37] <ahasenack> xnox: paride: https://github.com/apache/httpd/commit/bbedd8b80e50647e09f2937455cc57565d94a844
[14:37] <ahasenack> I'll ask for people to test the ppa I have
[14:38] <ahasenack> https://launchpad.net/~ahasenack/+archive/ubuntu/apache2-client-cert-1833039
[14:38] <paride> ahasenack, the description and comments sound very promising
[14:39] <ahasenack> it fixed my test case at least
[14:39] <ahasenack> tried multiple times: package from distro, fail/timeout; upgrade to ppa, works
[14:45] <xnox> ahasenack:  that looks ok
[14:46] <xnox> ahasenack:  but please test it in all apache modes.
[14:46] <ahasenack> which modes do you mean?
[14:46] <xnox> ahasenack:  i.e. forking / threading / blocking / non-blocking "worker" engine stuff. => not sure about terminology, but does that make any sense at all?
[14:46] <ahasenack> ah, mpm
[14:47] <ahasenack> given the setup instructions I have, I could even add a dep8 test for this probably, but that would take more time
[14:47] <xnox> ahasenack:  there are cases where unsetting auto_retry may result in things not consumed right (hang) if the app doesn't know how to handle the SSL read/write_more_wanted error codes.
[14:47] <ahasenack> well, that is committed upstream already
[14:47] <xnox> ahasenack:  just restart the server locally with one vs the other engine. should be good enough.
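Switching the "worker engine" xnox means is an MPM swap, which on Ubuntu looks like this (mpm_event is the default; mpm_prefork and mpm_worker are the other two):

    sudo a2dismod mpm_event
    sudo a2enmod mpm_prefork    # or mpm_worker
    sudo systemctl restart apache2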
[14:47] <ahasenack> I could check if there were follow-up commits
[14:48] <xnox> ahasenack:  true, just thinking if it has more commit deps or follow-ups => yes that.
[14:48] <ahasenack> ok
[14:48] <xnox> (you are on track there)
[14:49] <ahasenack> maybe it's not even the auto-retry that fixed it, the other bit of that commit looks suspicious
[14:49] <ahasenack> " For OpenSSL >=1.1.1, turn on client cert support "
[14:49] <ahasenack> the autoretry could be specific to tls v1.3
[14:50] <ahasenack> anyway
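The auto-retry half of that commit amounts to undoing an OpenSSL 1.1.1 default; a hedged C sketch of the idea, not the literal httpd patch:

    #include <openssl/ssl.h>

    /* OpenSSL 1.1.1 turned SSL_MODE_AUTO_RETRY on by default; a
     * non-blocking server that expects SSL_ERROR_WANT_READ/WRITE
     * can appear to hang with it set, so clear it again. */
    static void disable_auto_retry(SSL_CTX *ctx)
    {
    #ifdef SSL_MODE_AUTO_RETRY
        SSL_CTX_clear_mode(ctx, SSL_MODE_AUTO_RETRY);
    #endif
    }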
[14:53] <teward> xnox: do we want to push any Ubuntu level changes to fix the tests in kopano-webapp so we don't need to badtest it? oSoMoN did some debugging and found it's specifically due to passing a log-path arg to chromedriver (and snap isolation being the problem)
[14:53] <teward> rather than just badtesting the tests
[14:53] <teward> at least, until Debian does something about the test ( oSoMoN made a Pull Req in Debian to fix the test, but Debian is frozen so...)
[15:11] <xnox> teward:  if there is a fix, i'm happy to upload it into ubuntu.
[15:11] <xnox> teward:  link?
[15:12] <teward> xnox: https://salsa.debian.org/giraffe-team/kopano-webapp/merge_requests/1/diffs
[15:12] <teward> it requires alteration to d/tests/test_webapp.py
[15:12] <teward> but just adjusting the service_args
[15:12] <teward> author is oSoMoN
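The gist of that merge request, sketched in selenium 3 style (the real diff lives in d/tests/test_webapp.py; values here are illustrative):

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    # drop --log-path=... (unwritable under snap confinement); keeping
    # --verbose makes chromedriver log to stderr instead of a file
    driver = webdriver.Chrome(service_args=['--verbose'], options=options)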
[15:12] <xnox> ack
[15:12] <teward> xnox: I can push it in also (yay for coredev) but wanted to check BEFORE making any changes
[15:12] <xnox> ahasenack:  yeah that other bit, is weird.
[15:13] <xnox> teward:  it looks harmless, go for it.
[15:13] <teward> ack
[15:13] <teward> .. bleh where'd i put the code...
[15:13] <xnox> teward:  as long as, like we don't collect the log from that path. Or like we assert on stderr and without the log, it will start producing stderr.
[15:13] <teward> ... oh that's right it was in my tmpfs *derp*
[15:13] <teward> xnox: AFAICT that's not even a retrievable artifact after autopkgtests have run
[15:13] <teward> my guess is it'll spit out to stdout/stderr
[15:14] <teward> but oSoMoN would have details there
[15:14] <Laney> test it yourself
[15:14] <teward> xnox: that is, however, the reason chromedriver is crashing
[15:14] <Laney> don't blindly upload
[15:14] <teward> Laney: i am testing it - slowly ;)
[15:14]  * teward is currently stuck in a "Downloading files" state
[15:19]  * Laney sends some bandwidth over in the post
[15:22] <Laney> would be good if the installing snap stage showed progress
[15:23] <teward> indeed
[15:23] <teward> Laney: i'm running the autopkgtest locally, it's slow only because of the Internet being stupid.
[15:25] <teward> it's the uplink speed here, there's a lot of visitors on site too, so it breaks things.  and apparently my LXD image for Eoan autopkgtests is broken soooo..... i'mma have to rebuild it.
[15:25]  * teward rebuilds the LXD image
[15:27]  * teward runs the autopkgtest
[15:28] <Laney> you might have some trouble installing the snap there
[15:28] <Laney> (that's the armhf problem)
[15:30] <teward> Laney: amd64 can install it fine
[15:30] <teward> my infra's amd64
[15:30] <teward> the amd64 failure is due to log-path and snap confining
[15:30] <Laney> it is not an arch problem, it is a VM vs LXD problem
[15:30] <teward> Laney: well SO FAR I haven't had the issue in my local snaps
[15:30] <teward> s/snaps/LXD tests
[15:31] <Laney> it's possible to have lxd containers which can install snaps, sure
[15:31] <Laney> the ones we have for autopkgtest can't, though
[15:31] <teward> right but i'm talking for my local tests, not the ones in use on the autopkgtest env.
[15:31] <teward> if we can pass the amd64/i386 tests we can badtest the armhf still
[15:31] <teward> THAT i already asked for :p
[15:31] <Laney> I said might
[15:31] <teward> and got E:NoReply in -release
[15:32] <Laney> and that badtest is already there
[15:34] <oSoMoN> teward, it's likely that without a --log-file parameter, the --verbose switch to chromedriver will make it write to stderr, but that should be fine because the autopkgtest already has the allow-stderr restriction
[15:35] <teward> oSoMoN: indeed.
[15:35] <oSoMoN> https://sites.google.com/a/chromium.org/chromedriver/logging confirms it will write to stderr
[15:35] <teward> i don't have a VM to autopkgtest with damn snaps.
[15:36] <teward> does autopkgtest have a vmware hook for creating VMs for testing? just curious
[15:36] <teward> i have VMware Workstation on this system which is why I ask :|
[15:39] <Laney> nein
[15:40] <teward> meh qemu it is then.
[15:40]  * teward installs into the system
[15:40] <teward> at least i have a local archive mirror so it will do THAT part fast heh
[15:48] <Laney> teward: oSoMoN: It's still failing for me fwiw https://paste.ubuntu.com/p/GJJPNB3BwR/
[15:48] <teward> Laney: *with* the patch?
[15:48] <Laney> yes
[15:48] <teward> oSoMoN: sounds like it's still fubar.
[15:49] <oSoMoN> meh
[15:49] <teward> still going to get my Eoan qemu autopkgtest env set up
[15:49] <Laney> sorry :(
[15:49] <teward> but it's slowwwwwwwwwww (56% of 500MB and I'm jacked right into the network)
[15:57] <oSoMoN> Laney, passing here, in an eoan amd64 VM
[15:57] <Laney> run it with autopkgtest, that's what I've been doing
[15:57] <oSoMoN> yes, I’ve run it with autopkgtest -B . -- null
[15:57] <oSoMoN> verified failing without the patch, and passing with it
[15:58] <Laney> ...
[15:58] <Laney> I mean it fails
[15:58] <Laney> try the qemu runner
[15:58] <Laney> autopkgtest --output-dir out2 --shell-fail --timeout-copy=6000 --apt-upgrade kopano-webapp_3.5.6+dfsg1-1ubuntu1.dsc  -- qemu --ram-size=8192 ../reallytemp/autopkgtest-eoan-amd64.img
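For anyone following along, the image that command consumes is typically built with the helper shipped in the autopkgtest package, e.g.:

    autopkgtest-buildvm-ubuntu-cloud -r eoan -a amd64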
[15:59] <oSoMoN> gotta go now, family waiting for me, but please keep me posted (comments on the bug), thanks!
[16:03] <teward> Laney: i'll do so as soon as this thing finishes building the image.  It's sit on libc6 triggers :|
[16:56] <LocutusOfBorg> hello xnox, can I fix borgbackup please?
[17:03] <ahasenack> xnox: confirmed disco works with client cert auth with no timeouts if I disable tlsv1.3 in the vhost (SSLProtocol all -SSLv3 -TLSv1.3). With TLSv1.3 I get
[17:03] <ahasenack> out of the box, I get https://pastebin.ubuntu.com/p/XdWq2xzq7Z/ (out of the box we have tlsv1.3 enabled)
[17:03] <ahasenack> so that will need a different fix, even if it's disabling tlsv1.3
[17:04] <ahasenack> FF needs release 68 to work as far as I can tell, we are at 67