[07:11] <LocutusOfBorg> mwhudson, you mean the sphinx theme failure?
[07:12] <LocutusOfBorg> "sudo apt-get install python-sphinx-rtd-theme"
[07:12] <LocutusOfBorg> :)
[07:19] <LocutusOfBorg> mwhudson, interestingly, the bionic package builds docs with "sphinx-build" while the eoan package does: python3 -m sphinx
[07:19] <LocutusOfBorg> so, the python3-sphinx-rtd-theme is probably useless, and python-sphinx-rtd-theme is needed instead
[09:21] <mitya57> LocutusOfBorg: sphinx-build is managed by alternatives, so both python3-sphinx and python-sphinx provide it.
[09:22] <mitya57> And upstream sphinx dropped Python 2 support, so I highly recommend using only the python3 versions, especially for new code.
[09:34] <LocutusOfBorg> mitya57, we are talking about bionic, cosmic
[09:34] <LocutusOfBorg> cosmic and later are already using the python3 version
[09:35] <LocutusOfBorg> but probably mwhudson is not referring to that issue...
[09:45] <xnox> rbasak, for the server team review ;-) https://code.launchpad.net/~xnox/ubuntu-seeds/+git/ubuntu-seeds/+merge/366639
[09:47] <rbasak> Thanks :)
[09:50] <juliank> hmm, did we drop support for apt-btrfs-snapshot in grub recently? IIRC, I got boot entries for snapshots previously, but don't seem to get any anymore in disco/eoan
[09:51] <juliank> But I might misremember, I don't know
[10:08] <mitya57> LocutusOfBorg: ok. I had a general comment :)
[10:41] <LocutusOfBorg> mwhudson, the problem is this SRU from xnox https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1797386
[10:41] <LocutusOfBorg> looks like the new openssl makes some test fail on it
[10:42] <LocutusOfBorg> do you want me to subscribe you to the bug report?
[10:57] <xnox> LocutusOfBorg, problem for what?
[10:57] <xnox> python-tornado?
[10:58] <LocutusOfBorg> yes, python-tornado builds with -release pocket and doesn't build with -proposed, and the diffs are ssl-related...
[10:58] <LocutusOfBorg> I added something on the bug report
[11:00] <xnox> i'll look into that, thanks.
[11:00] <LocutusOfBorg> I also added some upstream commits that might fix the issue; I don't think the python-tornado in bionic/cosmic fails its testsuite
[11:01] <LocutusOfBorg> we might have to consider SRUing the latest version to bionic, not sure; I don't want to enable the proposed pocket on my laptop right now, so I can't really test it further
[11:05] <LocutusOfBorg> also updated https://bugs.launchpad.net/ubuntu/+source/python-tornado/+bug/1801184
[11:05] <LocutusOfBorg> mwhudson, I wasn't aware of the bug report before
[11:06] <LocutusOfBorg> I'm also committing your change in debian
[12:03] <juliank> Hmm, I just had an idea: We currently ship the mirror lists in python-apt-common, how about we let launchpad generate those lists and ship them as part of the archive, referenced from the release file? Then we don't have to update them, just ship an apt.conf snippet to download them? (should be easy right, cjwatson wgrant)?
[12:04] <juliank> Though I guess we need to ship a seed on the image
[12:04] <cjwatson> Impractical for stable series where we don't regenerate the release pocket's InRelease routinely
[12:04] <juliank> that's true
[12:05] <cjwatson> So I'm not particularly keen
[12:05] <juliank> Maybe a link to a mirror list
[12:05] <juliank> (which must be https)
[12:06] <juliank> Ah but then we need more than a snippet
[12:07] <juliank> I guess just moving the list to distro-info-data is probably the best idea
[13:21] <xnox> rbasak, doko - ported etckeeper to python3 and forwarded all patches to debian, and uploading to ubuntu.
[13:56] <sahid> xnox: hi, it looks like we have a timeout issue here: http://autopkgtest.ubuntu.com/packages/h/heat-dashboard/cosmic/armhf
[13:56] <sahid> any chance you could have a look?
[13:56] <sahid> coreycb: ^
[13:57] <coreycb> xnox: sahid: the test is timing out. is there any way we can increase the timeout?
[13:57] <xnox> sahid, i have no idea.... why ping me? =)
[13:57] <coreycb> xnox: that was my suggestion
[13:58] <xnox> coreycb, you can make a merge proposal to add to the big_packages... but if it didn't timeout before as a small one, it's really a bandaid now.
[13:58] <coreycb> xnox: it has a history of it
[13:58] <xnox> coreycb, we had io problems, and did scale workers down, and retry things..... but i was not involved in that. it was more of a vorlon / Laney thing
[13:59] <coreycb> xnox: sahid: ok maybe big_packages is the way forward for us
[13:59] <xnox> coreycb, make a merge proposal similar to this -> https://git.launchpad.net/autopkgtest-cloud/commit/?id=0e3fbc0cb6844e2ce20b129d6262aaacf4c22065
[13:59] <xnox> against lp:autopkgtest-cloud
[13:59] <xnox> but probably like for all arches / all configs.
[14:00] <coreycb> xnox: great thanks, will do
[14:03] <Laney> io problems were on the controller, nothing to do with the compute hosts
[14:03] <juliank> xnox: that etckeeper ftbfs
[14:04] <xnox> hahahhahahahha
[14:04] <juliank> xnox: rm: cannot remove 'debian/etckeeper/usr/lib/python2.7/dist-packages/bzrlib/plugins/etckeeper/__init__.pyc': No such file or directory
[14:04] <juliank> gotta remove the rm I guess
[14:04] <juliank> But I'm not sure how you can make sure upgrades work correctly
[14:05] <xnox> juliank, nah, add back bzr build-dep
[14:05] <juliank> YOu need the transitional bzr -> breezy first, and then Breaks for the old bzr?
[14:05] <xnox> cause it means that bzr plugin was not built.
[14:05] <xnox> want to support both at the moment
[14:05] <juliank> ah you got both?
[14:06] <xnox> yeap
[14:42] <xnox> juliank, all built now. And also, since the default is git now, python deps are not strictly required and are optional. And would be there, if bzr/brz are installed, so dropped those too.
[14:42] <xnox> now i need to quickly port heartbeat, wait for desktop team to upgrade dejadup/duplicity, and bash swift upstream.
[14:54] <juliank> xnox: nice
[15:01] <ddstreet> is there any tooling to more easily run autopkgtest.u.c tests from ppas?  or is the only way to hand-edit urls to submit/check tests?
[15:46] <xnox> ddstreet, not really. but i do create ppa's on bileto.ubuntu.com and upload into those
[15:46] <xnox> ddstreet, as those have automatic autopkgtests runners setup against them.
[16:28] <Wafficus> hi there, question regarding the Ubuntu ISO tester. How do I modify it to test Lubuntu's ISO instead?
[16:58] <vorlon> coreycb: why is the heat-dashboard autopkgtest so much slower in cosmic than in disco?  should we not instead sru the fixes for that?
[16:58] <vorlon> coreycb: i.e. 1.4.0-0ubuntu3 became a lot faster than 1.4.0-0ubuntu1
[16:59] <vorlon> (did it only get faster because of dropping python2 support? :P)
[17:02] <vorlon> jbicha: hi, what's the point of carrying the gparted delta for LP: #1737248 to disable xhost root, while we're also still linking against gtk2?  "discouraging other upstreams from following this pattern" doesn't sound like a very strong argument
[17:12] <coreycb> vorlon: all true statements, i think we can attribute at least half of the performance increase to dropping py2.
[17:39] <ddstreet> xnox ah ok, guess all the cool kids have bileto access, not me though :(
[17:49] <sarnold> a user in #debian found something funny with our locales and it looks like a bug to me. I'm not sure where to file a bug report: ubuntu http://paste.ubuntu.com/p/bC22mnH8c2/ vs a fedora 29 lxd guest http://paste.ubuntu.com/p/P2Y88sfTW8/
[17:59] <infinity> sarnold: Which release?  No can reproduce.
[18:00] <sarnold> infinity: my ubuntu paste is from bionic
[18:00] <ginggs> vorlon: i responded in LP: #1713615 - from here, it seems like switching to github is the right thing to do
[18:00] <vorlon> ginggs: yeah, +1. do you want to upload that change?
[18:00] <infinity> sarnold: Mmkay.  Well, it's fine on disco/eoan at least.
[18:01] <ginggs> vorlon: thanks, i'll upload and look at SRUs, and try to convince Debian to switch too
[18:03] <infinity> sarnold: Ahh, I knew it seemed familiar.  This is https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1774857
[18:04] <infinity> sarnold: If you have an overwhelming urge to bisect, pointers welcome. :P
[18:04] <infinity> (Though, I fear it'll end up fixed in a 2MB commit labelled "update from Unicode 9 to Unicode 10" or similar)
[18:06] <sarnold> infinity: yiiikes. :/ thanks
[18:10] <infinity> sarnold: And sure enough, there's such a commit (though I was off by one version)
[18:10] <infinity> b11643c21c5c9d67a69c8ae952e5231ce002e7f1
[18:10] <infinity>     Bug 23308: Update to Unicode 11.0.0
[18:10] <infinity> 10040 lines.
[18:10] <infinity> Hopefully that's not the "fix". :P
[18:10] <infinity> But it probably is.
[18:11] <infinity> Since it comes with new translit tables, which are likely at fault here.
[18:44] <juliank> ginggs, vorlon Can't we just distribute the .exe in the installer .deb?
[18:45] <juliank> ah no, may not be included in your own product
[18:45] <juliank> but the github mirror distributing it is fine
[18:46] <ginggs> juliank: do you think it's still worth filing a bug in apt?
[18:46] <juliank> ginggs: there is no bug in apt
[18:46] <juliank> ginggs: it does not allow https->http redir on purpose
[18:47] <juliank> The problem here is the sourceforge mirror doing the redirect
[18:48] <ginggs> juliank: right, but redirecting to http for the failedmirror page happens after something has already gone wrong
[18:48] <infinity> It's irksome that there isn't a Canonical microsoft.com URL to get this stuff from.  It feels so sketchy downloading it from random people who downloaded it in the past.
[18:48] <infinity> s/Canonical/canonical/
[18:49] <infinity> Stupid fingers can't type that without the upper case C.
[18:49] <juliank> ginggs: yeah, well, their mirror system probably does garbage collection on mirrors or something?
[18:49] <juliank> so it redirects to mirrorfailure which then should trigger a resync of the file from master or something
[18:50] <infinity> On the other hand, a larger issue here seems to be people complaining that the package failing to download is breaking upgrades?  Is that actually true?
[18:50] <ginggs> juliank: i'm pretty sure the file is there, i can download it successfully and then immediately after i can get a failure
[18:50] <ginggs> infinity: yes
[18:50] <infinity> Cause the whole point of Steve moving all this to update-manager downloader jobs was to prevent broken downloads from breaking upgrades.
[18:50] <infinity> So, what's gone wrong there?
[18:50] <juliank> ginggs: it is there on _some_ mirrors
[18:50] <infinity> It's meant to just silently skip over the breakage and try again later, and pop up update-manager nag windows telling you that it failed.
[18:51] <infinity> vorlon: Any thoughts on that part of the bug?
[18:51] <juliank> ah so
[18:51] <juliank> I guess it fails on partial requests
[18:51] <ginggs> juliank: i'm pretty sure i'm getting intermittent failures from the same mirror
[18:51] <infinity> Like, the entire delta seems pointless if it's not doing its job.
[18:52] <juliank> so I'm downloading https://newcontinuum.dl.sourceforge.net/project/corefonts/cabextract/0.6/cabextract-0.6-1.src.rpm
[18:53] <juliank> look here: https://paste.ubuntu.com/p/CM2dCdRsSJ/
[18:53] <ginggs> infinity: it's a bit silly that it downloads the fonts again during an upgrade, but I don't know if that's avoidable
[18:54] <juliank> If we send the host a Range and a If-Range it fails
[18:55] <juliank> The mirror should respond with a 206 Partial Content or a 200 OK
[18:55] <juliank> This only happens if we already have the full file downloaded
[18:55] <juliank> for a partial file, it's fine
[18:56] <juliank> e.g. that file is 94927 bytes large
[18:56] <infinity> ginggs, vorlon: Also note (maybe not a big deal, I might be rare?) that changing the download host from dl.sourceforge.net to something else will break people who had to whitelist sf.net in their proxies. :/
[18:56] <juliank> Range: bytes=94927- fails
[18:57] <juliank> Sourceforge should fix their servers really
[18:57] <juliank> but as a workaround we could always delete the last byte if we have a partial file
[18:59] <juliank> FWIW, the correct server response would be
[18:59] <juliank> HTTP/1.1 416 Requested Range Not Satisfiable
[18:59] <juliank> Content-Range: bytes */94927
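[Editor's note] The correct-response cases juliank lists could be captured in a small classifier. This is a hypothetical helper illustrating the HTTP range semantics under discussion, not apt or update-notifier code; the function name and return labels are invented:

```python
def classify_resume_response(status, total_size, content_range=None):
    """Classify a mirror's reply to 'Range: bytes=<total_size>-' sent
    for a file that is already fully downloaded.

    Per RFC 7233, the spec-correct reply is 416 with
    'Content-Range: bytes */<total_size>'. A 200 (full resend) or 206
    also lets a client make progress; anything else is the broken
    behaviour seen on the sourceforge mirrors.
    """
    if status == 416 and content_range == "bytes */%d" % total_size:
        return "complete"   # correct: nothing left to fetch
    if status in (200, 206):
        return "usable"     # acceptable fallback
    return "broken"         # the failure mode discussed above
```

For the 94927-byte example file above, a healthy mirror would yield `classify_resume_response(416, 94927, "bytes */94927") == "complete"`.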
[19:03] <infinity> juliank: Huh, yep.  Chopping off the last byte on the existing file lets it be happy.  So, is your hypothesis that all the failed downloads are people who already have the files locally?
[19:03] <juliank> I'd think so
[19:03] <infinity> Or have we just found one of the many ways sf mirrors suck? :P
[19:04] <juliank> as a bonus point, we could shave off like 4 bytes and then check on resumption that they match what we expect
[19:05] <infinity> We store the md5sums locally that we expect for all the .exe files.  Why would we be downloading at all if we already have them?
[19:05] <infinity> sha256, rather.
[19:05] <infinity> But sums.
[19:06] <juliank> Well, I don't know
[19:07] <juliank> I'll just email sf ops
[19:10] <infinity> def download_file(uri, sha256_hashsum):
[19:10] <infinity>     """Download a URI and checks the given hashsum using apt-helper
[19:10] <infinity>     Returns: path to the downloaded file or None
[19:10] <infinity>     """
[19:10] <infinity>     download_dir = os.path.join(STAMPDIR, "partial")
[19:10] <infinity>     dest_file = os.path.join(download_dir, os.path.basename(uri))
[19:10] <infinity>     ret = subprocess.call(
[19:10] <infinity>         ["/usr/lib/apt/apt-helper",
[19:10] <infinity>          "download-file", uri, dest_file, "SHA256:" + sha256_hashsum])
[19:10] <infinity>     if ret != 0:
[19:10] <infinity>         if os.path.exists(dest_file):
[19:10] <infinity>             os.remove(dest_file)
[19:10] <infinity>         return None
[19:10] <infinity>     return dest_file
[19:10] <infinity> Yeah, so.  That's silly.
[19:10] <infinity> We have a path, we have a sha256, and we don't check the state of the path before we start.
[19:10] <infinity> That could check the sum, skip if it's what's expected, unlink if not.
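[Editor's note] infinity's check-then-skip idea could be sketched like this. Hedged: `fetch` is a stand-in parameter for the real apt-helper invocation, and the function is illustrative, not the actual update-notifier code:

```python
import hashlib
import os

def download_file_cached(uri, sha256_hashsum, dest_file, fetch):
    """Check the local file before downloading: skip the fetch if the
    checksum already matches, unlink and refetch if it doesn't.

    `fetch(uri, dest_file, sha256_hashsum)` is a hypothetical callable
    standing in for the apt-helper download-file call.
    """
    if os.path.exists(dest_file):
        with open(dest_file, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest == sha256_hashsum:
            return dest_file        # verified local copy: no download
        os.remove(dest_file)        # wrong contents: unlink, start over
    return fetch(uri, dest_file, sha256_hashsum)
```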
[19:10] <sarnold> unlink would prevent resume
[19:11] <sarnold> though if we're having this discussion because resume doesn't work, then maybe we should unlink :)
[19:11] <infinity> So does data corruption.  http isn't rsync.
[19:11] <infinity> http range requests against broken servers (or with locally broken data) won't produce a magically fixed file.
[19:12] <infinity> Either way, the part where we don't check the local file *at all* before redownloading seems silly.
[19:13] <infinity> Maybe someone assumed apt-helper does such a thing, which it very much doesn't appear to.
[19:14] <sarnold> heh, I would indeed have made that assumption :)
[19:16] <infinity> juliank: sf.net server suckage aside, I'm somewhat inclined to blame the update-notifier implementation here for being silly.
[19:17] <juliank> infinity: will check
[19:17] <infinity> juliank: If nothing else, we're pointlessly downloading files we already have.
[19:17] <juliank> infinity: Yeah, well, it should pass the file size to apt-helper
[19:18] <infinity> We don't appear to know the sizes, just the sums.
[19:18] <infinity> Which is silly too, cause if we had the file to make the sums, we could have obtained the sizes. :P
[19:18] <infinity> But the implementation didn't call for that.
[19:18] <infinity> Thankfully, I think the consumers of this API can be counted on two fingers, so changing it to be more robust is doable.
[19:18] <infinity> (msttcorefonts, adobe-flash-plugin, not sure if anything else uses it)
[19:19] <juliank> hmm it does not seem to help
[19:19] <juliank> we still perform a GET even if we have the size
[19:19] <juliank> Anyhow, I pinged sourceforge
[19:19] <infinity> juliank: I'm a bit surprised too, mind you, that passing the sha256 to apt-helper doesn't mean "hash the local file first and don't download if it matches".
[19:20] <juliank> odd, yes
[19:21] <juliank> there is code for that, but probably only for .deb
[19:23] <infinity> But yeah, as for the update-notifier implementation, while I can see angry people on 28.8k modems (all 2 of them) complaining if we break resume, I'd suggest http resume is more likely to be harmful than helpful in this sort of situation, and it'd be easier to just "check local, if sum match, no download, else unlink and download".
[19:23] <juliank> infinity: Oh, partial is fine, as long as it does not match the full file size
[19:23] <infinity> Given that the sort of connections where resume really matters are also the sort of connections where half a file is likely to have a few bad bytes...
[19:23] <juliank> infinity: Just truncate the file to st.st_size -1
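[Editor's note] juliank's truncate workaround could look roughly like this; `trim_partial` is a hypothetical helper name, not existing code:

```python
import os

def trim_partial(path):
    """Drop the final byte of a partial download so the next
    'Range: bytes=<size>-' request starts before EOF; a mirror that
    mishandles ranges at exactly EOF then returns data instead of
    failing. Returns the new size."""
    size = os.stat(path).st_size
    if size > 0:
        with open(path, "r+b") as f:
            f.truncate(size - 1)
        return size - 1
    return 0
```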
[19:24] <infinity> juliank: Sure, partial is fine, if it's fine.  But if it's not? :)
[19:24] <juliank> But yeah, I guess re-download is fine too
[19:24] <infinity> Oh, I guess if we do an actual partial resume, then it hits full size, then we sum, we'll unlink and try again later.
[19:24] <juliank> let's do that I guess
[19:24]  * juliank grabs code
[19:24] <infinity> So maybe that's okay.
[19:25] <infinity> I dunno.  I can count on one hand the number of times I've bothered to http resume anything smaller than an ISO in the last few years.
[19:26] <infinity> I realise that my connection is MUCH better than the world average, but I also don't think things are as dire for our users as we might think.
[19:27] <juliank> infinity: Just resuming also does not handle the case were the hash is wrong
[19:28] <juliank> like I don't think it falls back to downloading the entire file if it turns out the resumed one is wrong
[19:28] <infinity> juliank: You have to remember that the whole process tries later if it fails.  In theory.
[19:29] <infinity> In practice, I'm suspicious, given people claiming this is breaking upgrades. :P
[19:29] <juliank> infinity: ah, right it deletes
[19:29] <infinity> But in theory, if you resume, finish download, hash fails, it'll unlink and return EPISSOFF, and then try the download again later.
[19:29] <infinity> But... If that was working right, people wouldn't be complaining about broken upgrades, so there's some digging to be done there too.
[19:29] <juliank> infinity: If it failed for more than 3 days, then it becomes an error
[19:30] <infinity> I know that USED to work right.
[19:30] <infinity> juliank: But not an error at dpkg time!
[19:30] <juliank> infinity: Now people install the ttf installer from Debian which works fine
[19:30] <infinity> That's my confusion about the bug complaints.
[19:30] <juliank> but then people never get the stamp removed
[19:30] <juliank> so next time the installer is updated in Ubuntu, it will be a year old or so
[19:30] <juliank> it being the stamp
[19:30] <infinity> I had msttcorefonts broken for ages because I didn't have dl.sf.net whitelisted in squid-deb-proxy.  That never broke dpkg/apt upgrades, just caused annoying popups.
[19:31] <juliank> infinity: I don't think they say install fails
[19:31] <infinity> Some of the comments claimed that.
[19:31] <infinity> Something about it breaking "in the middle of a 1000 package upgrade", etc.
[19:31] <juliank> hmm, odd
[19:32] <infinity> Which is the only reason it's any more than a minor bug, really.
[19:32] <infinity> I mean, it's more than minor at the package scope, obviously. :P
[19:32] <infinity> But not the distro scope.
[19:32] <infinity> Unless it interrupts upgrades.
[19:34] <infinity> "Not easy to do if an upgrade fails with 1341 packages left to update!", comment 18.
[19:34] <infinity> But maybe that dude was misdiagnosing a different upgrade failure and just blamed the first "error" in the log.
[19:34] <infinity> Cause I don't see anyone else mentioning it interrupting an upgrade.
[19:36] <infinity> juliank: Yeah, I guess ignore that.  I see no other indication that the package fails to configure for anyone else, and that guy provided no evidence.
[19:37] <juliank> infinity: https://paste.ubuntu.com/p/dMFw6QRr3D/
[19:37] <juliank> oops, should have added -f diff
[19:37] <juliank> that's better (with syntax highlighting): https://paste.ubuntu.com/p/BYwqFkc5sF/
[19:38] <juliank> infinity: I can change that to truncate if we want to, but I'm not sure
[19:38] <juliank> this one is definitely safer
[19:38] <infinity> I think this is safer, TBH.
[19:39] <infinity> What does apt itself do for partials?
[19:40] <infinity> check || resume && check || unlink && retry?
[19:40] <infinity> Would be my guess.
[19:40] <infinity> Which would also be fine here, but it feels heavy.
[19:40] <juliank> infinity: I think it hashes the partial file
[19:40] <juliank> infinity: Then it does the GET rather than calculating the digest and checking if it matches
[19:41] <infinity> juliank: Right, s/check/hash/ above, and maybe we mean the same thing?
[19:41] <juliank> well, it does not check the hash
[19:41] <juliank> it just calculates it, and then does the GET
[19:42] <juliank> This should be fixable in apt
[19:42] <juliank> infinity:  see https://salsa.debian.org/apt-team/apt/blob/master/methods/http.cc#L1012
[19:42] <infinity> I guess I'm curious if apt stumbles this hard in similar situations and we just don't notice because most apt mirrors are not braindead.
[19:43] <juliank> infinity: Before the Req.StartPos > 0, we should check if the hashes we have so far match what we expect and then report that we are done
[19:43] <infinity> Or if it's a more robust fail&&remove&&retry
[19:43] <juliank> I'm not sure if it can retry
[19:43] <infinity> Well, retry is the natural sane option for a failed resume.
[19:44] <infinity> Or maybe unlink-and-let-the-user-retry, I suppose.
[19:47] <infinity> juliank: Anyhow, given that the data-downloader API doesn't include filesize, I think your pastebin is sane.
[19:47] <juliank> infinity: I think that too, and I already uploaded it to eoan
[19:47] <juliank> I think fixing apt to not issue GET requests for files it already has downloaded fully is sensible too
[19:48] <infinity> juliank: If we extended it to require size, then you could add an "if len != expected: resume" in there.
[19:48] <juliank> but it's a lot more work
[19:49] <juliank> because um, once you call Result() on your hashes object in apt, it freezes them
[19:49] <infinity> juliank: But without size, I think all we can assume about a file on disk is it's either correct (matching sum) or completely useless.  Not much in between.
[19:50] <juliank> hmm
[19:51] <juliank> you don't know if it's incomplete, sure, but the server _should_ tell you that
[19:51] <infinity> juliank: Oh, from an http client, yes.  I'm talking update-notifier, not apt.
[19:51] <infinity> juliank: From apt, you can absolutely ask the server what it thinks the length is.
[19:52] <infinity> update-notifier, not so much, cause the http bit is opaque to us.
[19:53] <infinity> juliank: If apt-helper were robustified so that update-notifier could just call it blind and get "sane" (for some definition thereof) results, that'd be spiffy, but that's a much deeper rabbit hole.
[19:55] <juliank> infinity: it might be eventually
[19:56] <juliank> I should not have done this now I'm too awake
[19:57] <infinity> Story of my life.
[19:57] <infinity> juliank: OOI, should we revert ginggs' upload now that you've done the update-notifier one?
[19:58] <infinity> I'm probably in a tiny minority, but I know his upload will actually fail to upgrade on my system. :P
[19:58] <juliank> infinity: don't really care
[19:58] <juliank> infinity: would be useful for testing I guess
[19:59] <juliank> Also seems like my TV remote batteries are empty, ugh
[20:47] <ginggs> infinity, juliank: ttf-mscorefonts-installer seems to be working with update-notifier 3.192.20, three successes in a row
[20:48] <juliank> I think there might be one more-flaky mirror
[20:49] <juliank> because remove && install should fix it and some reported it did not
[20:49] <juliank> but maybe that was only changed in recent versions
[20:50] <ginggs> juliank: purge and install did not work for me
[20:50] <ginggs> yesterday and today
[20:50] <ginggs> i have in /var/lib/update-notifier/package-data-downloads/partial/
[20:50] <ginggs> -rw-r--r-- 1 _apt root 55047 Apr 28 13:22 andale32.exe.FAILED
[20:51] <juliank> ginggs: heh, nevermind, it does not delete the file in there
[20:51] <ginggs> so does that mean at least yesterday i had a partial download from a mirror?
[20:51] <juliank> so um, that's suboptimal because if you remove the package, the .exes don't get removed
[20:51] <juliank> you must have had some complete download
[20:52] <ginggs> ok, but the weird thing was it would fail on different files
[20:56] <ahasenack> what does this deb10u1 suffix usually mean in a package's version? 200+deb10u1
[20:57] <ahasenack> this is a native package, from debian (postgresql-common)
[20:57] <ahasenack> they had version 200, and now 200+deb10u1
[20:59] <juliank> ahasenack: 1st stable release update for debian 10
[20:59] <ahasenack> hm
[20:59] <juliank> ahasenack: I know, debian 10 is not out yet, but it's ok to use the versioning now I guess :)
[21:01] <ahasenack> juliank: thanks
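[Editor's note] juliank's versioning answer can be sanity-checked with a rough comparator. This is an illustrative sketch of Debian's ordering rules, not dpkg's actual implementation (no epochs or revisions; `dpkg --compare-versions` is the authoritative tool):

```python
def deb_vercmp(a, b):
    """Minimal sketch of Debian version ordering, enough to show why
    200+deb10u1 sorts after 200, making +deb10u1 a valid stable-update
    suffix. Negative/zero/positive result means a<b, a==b, a>b."""
    def char_order(c):
        if c == "~":
            return -1            # '~' sorts before everything, even ''
        if c.isalpha():
            return ord(c)        # letters sort before other characters
        return ord(c) + 256      # punctuation sorts after letters

    def cmp_nondigits(x, y):
        for cx, cy in zip(x, y):
            if char_order(cx) != char_order(cy):
                return char_order(cx) - char_order(cy)
        if len(x) == len(y):
            return 0
        longer, sign = (x, 1) if len(x) > len(y) else (y, -1)
        # a trailing '~' sorts even before the empty string
        return -sign if longer[min(len(x), len(y))] == "~" else sign

    i = j = 0
    while i < len(a) or j < len(b):
        xa = xb = ""
        while i < len(a) and not a[i].isdigit():
            xa += a[i]; i += 1
        while j < len(b) and not b[j].isdigit():
            xb += b[j]; j += 1
        c = cmp_nondigits(xa, xb)
        if c:
            return c
        na = nb = ""
        while i < len(a) and a[i].isdigit():
            na += a[i]; i += 1
        while j < len(b) and b[j].isdigit():
            nb += b[j]; j += 1
        if int(na or 0) != int(nb or 0):
            return int(na or 0) - int(nb or 0)
    return 0
```

So `deb_vercmp("200+deb10u1", "200") > 0`: the `+` suffix sorts after the end of the shorter version, which is exactly why stable updates use it.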
[21:15] <mwhudson> LocutusOfBorg: oh huh i didn't suspect openssl
[22:04] <infinity> mwhudson: Even as a completely context-free statement, that makes total sense.
[22:04] <mwhudson> infinity: heh
[22:43] <vorlon> infinity: hmmm in fact I don't see anything showing that dpkg or apt fails, I only see people quoting the error messages