[10:23] <doko> ddstreet: I see you touched knot-resolver before. would it be possible to look at the merge? currently fails autopkg tests with the new knot version
[11:45] <doko> xnox: did you follow-up on https://github.com/ntop/nDPI/pull/840 ?
[11:53] <xnox> doko:  i have not
[11:53] <xnox> doko:  i failed at configuring wireshark and inspecting packets to understand if nDPI is broken, or if the test data is broken (and is endian specific), or both.
[11:57] <danboid> didrocks: Will zsys handle spare disks ie automatically replacing dead disks with spares?
[11:58] <danboid> I've not had any luck getting this to work with zed
[12:01] <ahasenack> good morning
[12:01] <danboid> In my experience, when a disk fully gives up the ghost, zfs sees it as REMOVED, which is when it would ideally be auto-replaced with a hot spare
[12:04] <danboid> At the moment I'm having to replace failed disks manually when they fail with my proxmox RAID2 array
[12:04] <danboid> RAIDZ2
[12:13] <ahasenack> danboid: is zed running? afaik it's the one responsible for the replacement actions
[12:13] <ahasenack> maybe it's missing a config
[12:13] <ahasenack> sorry, just jumping in on what you said last, no idea about context
[12:14] <danboid> ahasenack, Yeah it's running but I've read reports of others having this issue with zed
[12:14] <danboid> claiming it doesn't actually work for swapping spares. Have you got zed to work?
[12:15] <ahasenack> I remember having to do something with it to enable hot spares
[12:15] <danboid> I have attempted to configure it according to the docs and a guide I found
[12:15] <ahasenack> back when I tested this, in an older system
[12:15] <danboid> Don't suppose you still have your zed config do you?
[12:16] <ahasenack> no
[12:16] <ahasenack> I'm trying to remember
[12:16] <ahasenack> I think it had to be told to listen to this particular event
[12:16] <ahasenack> I'm checking the default config to see if something rings a bell
[12:16] <danboid> OK thanks
[12:16] <danboid> I'm wondering if zsys may expand to cover this
[12:19] <ahasenack> danboid: do you have autoreplace=on in the pool?
[12:19] <ahasenack> zpool get autoreplace
[12:21] <danboid> ahasenack, No! That could be my problem then. I'll enable that
[12:23] <danboid> Thanks!
[12:23] <ahasenack> hope it works
[12:24] <ahasenack> more details I don't have at the moment
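For reference, the `autoreplace` property ahasenack mentions can be checked and enabled roughly as below. This is a sketch, not the exact commands from the conversation; the pool name `tank` and device `/dev/sdX` are placeholders, and zed must be running for fault events to trigger a replacement:

```shell
# Check whether automatic replacement of failed devices is enabled
# (it is off by default)
zpool get autoreplace tank

# Enable it for the pool
zpool set autoreplace=on tank

# A hot spare must be attached to the pool for replacement to happen
# (/dev/sdX is a placeholder for the spare device)
zpool add tank spare /dev/sdX

# zed is the daemon that reacts to the fault event and swaps in the spare
systemctl status zfs-zed.service
```

Whether zed actually performs the swap also depends on the zed configuration (`/etc/zfs/zed.rc`) and the ZFS version in use, which is consistent with the mixed reports danboid mentions.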
[12:31] <doko> sforshee, apw: looking at https://launchpad.net/ubuntu/+source/gkrellm2-cpufreq/0.6.4-6/+build/19239466 is libcpupower-dev something which should be built from the kernel sources?
[12:31] <LocutusOfBorg> tjaalton, http://debomatic-amd64.debian.net/distribution#unstable/renderdoc/1.7+dfsg-3.1/buildlog do you care about debian?
[12:31] <LocutusOfBorg> I uploaded in groovy the fix
[12:36] <ahasenack> doko: upstream golang fix for that s390x issue :) https://go-review.googlesource.com/c/go/+/238628/
[12:46] <didrocks> danboid: this is rather a built-in ZFS feature. But in the future we'll propose built-in RAID and spares in the installer
[12:46] <xnox> ahasenack:  so obvious!
[12:46] <ahasenack> xnox: I'm confused, which timezone are you in again? :)
[12:46] <ahasenack> I thought UK
[12:46] <ahasenack> but then I saw you online at like 10pm my time
[12:50] <xnox> i have midnight weekly calls
[12:55] <danboid> didrocks: Yes, adding spare ZFS disk support to the installer would be a very nice feature to have
[12:56] <danboid> didrocks: Any idea when the installer will better support zfs ie creating zfs partitions, RAIDZ etc?
[12:57] <danboid> Might we see any of this in 20.10?
[12:57] <ahasenack> xnox: ouch
[13:36] <oSoMoN> I doubt this is going to collide with anyone else doing +1 maintenance, but just in case, I'm looking at rocksdb FTBFS (bug #1884072)
[13:37] <oSoMoN> ha, just saw rbalint's comment on the bug, ok…
[13:37] <oSoMoN> so I guess I'm not working on this any longer :)
[14:06] <oSoMoN> python-cogent autopkgtests pass locally, can anyone please retry https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=amd64&package=python-cogent&trigger=python-cogent%2F2020.2.7a%2Bdfsg-2 , https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=arm64&package=python-cogent&trigger=python-cogent%2F2020.2.7a%2Bdfsg-2 , https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=ppc64el&package=python-cogent&trigger=python-cogent%2F2020.2.7a%2Bdfsg-2 and https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=s390x&package=python-cogent&trigger=python-cogent%2F2020.2.7a%2Bdfsg-2 ?
[14:07] <seb128> oSoMoN, I clicked those
[14:07] <oSoMoN> thanks!
[14:26] <oSoMoN> seb128, looks like they're failing again, at least on arm64 (didn't catch the output on other arches and the logs aren't there yet), but one difference with my local run is that I was using all of groovy-proposed, including pytest=4.6.11-1, can you rerun with that additional trigger?
[14:30] <seb128> oSoMoN, done
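The "additional trigger" oSoMoN asks for is, to my understanding, just another `trigger` parameter appended to the same retry URL (request.cgi accepts the parameter repeatedly). A sketch for the amd64 case, with the pytest version taken from the message above:

```
https://autopkgtest.ubuntu.com/request.cgi?release=groovy&arch=amd64&package=python-cogent&trigger=python-cogent%2F2020.2.7a%2Bdfsg-2&trigger=pytest%2F4.6.11-1
```

This makes the test run install both triggering packages from groovy-proposed, approximating oSoMoN's local all-of-proposed environment.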
[14:30] <oSoMoN> thanks!
[14:30] <seb128> np!
[14:55] <oSoMoN> still failed, bleh
[16:39] <oSoMoN> seb128, are you looking at the libvorbis autopkgtest failures? if not I'll take it
[16:40] <oSoMoN> (the fix looks trivial)
[17:00] <oSoMoN> seb128, I'll take the answer to my first question as a no :) I filed https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=963082 and submitted https://salsa.debian.org/multimedia-team/libvorbis/-/merge_requests/1, will share a debdiff for an ubuntu upload in a moment
[17:08] <oSoMoN> there we go: https://people.canonical.com/~osomon/+1maintenance/libvorbis.debdiff
[17:09] <oSoMoN> core devs: sponsoring welcome ^
[17:46] <rbasak> bryce: https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/386013 is ready for an initial look please
[17:46] <rbasak> I expect this will take you quite some time
[17:49] <bryce> rbasak, ok noted
[17:53] <rbalint> oSoMoN, thanks, sponsoring libvorbis
[17:53] <oSoMoN> cheers
[17:53] <rbalint> oSoMoN, also upstreaming the fix to Debian
[17:53] <oSoMoN> rbalint, done already
[17:53] <oSoMoN> https://salsa.debian.org/multimedia-team/libvorbis/-/merge_requests/1
[17:55] <rbalint> oSoMoN, merging then as well :-)
[17:55] <oSoMoN> thx
[17:57] <oSoMoN> gmenuharness was removed in focal (bug #1866434), but is back (and FTBFS) in groovy, I guess it should be removed again? (and how did it manage to creep back in?)
[17:57] <rbalint> oSoMoN, usually I prefer updating the changelog with gbp dch in a separate commit
[17:58] <oSoMoN> rbalint, feel free to break that up in two commits (or keep only the functional changes part)
[17:58] <rbalint> oSoMoN, ok, doing that
[19:42] <seb128> oSoMoN, sorry i was too slow to reply :)
[19:43] <oSoMoN> seb128, no worries, it's all handled now
[19:43] <oSoMoN> do you know what happened with gmenuharness (see my question above)?
[19:46] <seb128> oSoMoN, no, I don't know, https://launchpad.net/ubuntu/+source/gmenuharness/+publishinghistory is weird
[19:46] <seb128> it was deleted from focal on 2020-04-01.
[19:47] <seb128> but on 2020-04-24
[19:47] <seb128>  Copied from ubuntu focal in Primary Archive for Ubuntu  to G
[19:47] <seb128> but yeah, it should be removed again I guess