[00:02] <juliank> test works now (and fails in 1.2.8!)
[00:04] <infinity> juliank: Always a good sign for a test.
[00:04]  * infinity decides it's pizza and TV time.
[00:11] <juliank> lamont: Fixed in https://anonscm.debian.org/cgit/apt/apt.git/commit/?id=6df5632
[00:13] <juliank> infinity: It was TV time for me, and then lamont returned with bug details. So it became bugfixing time :D
[00:15] <juliank> Off to bed now, it's 1am...
[00:22]  * juliank should write a tool to create minimal apt test cases ...
[00:33] <juliank> lamont: infinity: Uploaded 1.2.9 to Debian, should be part of the next dinstall run in an hour.
[00:33] <juliank> => can be synced in a few hours
[00:33] <infinity> juliank: Is "aptget" in the testcase a typo, or a wrapper?
[00:33] <juliank> infinity: A wrapper
[00:33] <infinity> Check.
[00:35] <juliank> infinity: apt-get still runs the host apt-get, and aptget the one in the build tree
[00:39] <juliank> Oh really, the armhf test suite for 1.2.8 now failed in another test... :/
[00:40] <juliank> We run those flaky tests 10 times, and they still fail :/
[00:41] <juliank> Maybe we should run them in qemu to slow things down ...
[00:43] <infinity> juliank: Meh, they'll magically all work in 1.2.9 for no good reason anyway.
[00:44] <juliank> Let's hope so.
[00:44] <juliank> It's almost always armhf and i386 failing
[00:44] <infinity> Coincidentally, the only 32-bit arches in adt?
[00:45] <juliank> Yeah. I suppose those are the slowest ones
[00:45] <infinity> I didn't say slowest, just 32-bit.
[00:45] <juliank> Yes, but I said
[00:45] <infinity> The amd64 and i386 tests run on exactly the same hardware.
[00:45] <juliank> hmm
[00:46] <juliank> Somewhat surprising
[00:47] <infinity> These same tests are run at build-time, right?
[00:47] <juliank> No
[00:47] <infinity> Ahh.  Kay.
[00:47] <infinity> Cause I'd be even more confused if they did.
[00:47] <juliank> At build-time we only run small unit tests
[00:47] <juliank> CI systems (autopkgtest, travis) also run full integration test suite
[00:48] <juliank> or actually, they only run the integration tests...
[00:48] <juliank> Well, travis runs both
[00:49] <juliank> I think the problem with the download progress testing is that curl does not really respect speed limits you tell it
[00:50] <juliank> It starts fetching at full speed and then adjusts down to reach the requested speed
[00:50] <juliank> which fails on localhost for 800KB files
[00:50] <infinity> So you need bigger files?
[00:50] <infinity> Or less curl.
[00:51] <juliank> That might make sense
[00:51] <juliank> Well, the https method uses curl
[00:51] <infinity> Oh, right.
[00:51] <infinity> Bigger files, then.
[00:52] <juliank> Yes
[00:52] <juliank> And the other failing test, I have no idea yet.
[00:52] <juliank> It's supposed to test that fetching A, and B (which redirects to A) only cause one download of A
[00:53] <juliank> It actually does, but the 103 Redirect message from the method to the main process is never shown
[00:53] <juliank> It says " @ Queue: Action combined for http://localhost:35267/foo and http://localhost:35267/foo"
[00:54] <infinity> Weren't you supposed to be sleeping? :)
[00:54] <juliank> Yes, unfortunately, yes.
[00:55]  * juliank should just grep for "@ Queue: Action combined for" instead of "103 Redirect", that would be more reliable
[00:57] <juliank> I see what's going on
[00:58] <juliank> GET /foo with "Range: bytes=64350-" cannot work on a 64350 byte file
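The bug described above matches the HTTP range-request rules (RFC 7233): a `bytes=` range whose first byte position is at or past the end of the file is unsatisfiable and should get a 416 response, not a partial-content reply. A minimal sketch of that decision (the helper name is hypothetical, not apt's actual webserver code):

```python
import re

def range_status(range_header: str, file_size: int) -> int:
    """Decide the HTTP status for a bytes Range request.

    Returns 206 for a satisfiable range, 416 when the first byte
    position is at or past the end of the file (RFC 7233), and 200
    when the header cannot be parsed (serve the whole file instead).
    """
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", range_header.strip())
    if not m:
        return 200  # unparseable: ignore the header, send everything
    first = int(m.group(1))
    if first >= file_size:
        return 416  # e.g. "bytes=64350-" on a 64350-byte file
    return 206

# Resuming at offset 64350 of a 64350-byte file asks for zero
# remaining bytes, so the range is unsatisfiable:
print(range_status("bytes=64350-", 64350))  # → 416
print(range_status("bytes=0-", 64350))      # → 206
```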
[00:58] <juliank> Going to feed that bug to DonKult
[01:03] <juliank> So this one might actually be a real bug
[01:07] <juliank> I feel like I just lost one hour :/
[01:08] <juliank> Hello summertime, I suppose :/
[01:12] <lamont> juliank: thanks!
[01:24] <karstensrage> infinity, cant thank you enough, works flawlessly on 16.04
[01:25] <karstensrage> freaking brilliant
[01:25] <karstensrage> very happy
[01:34] <karstensrage> can someone help me with backports?
[05:03] <fo0bar> to request a (universe) package be synced from debian after the import freeze, should I request an SRU?  I've only had to do this once before years ago, and seem to remember the process was SRU-like, but can't remember if it was an SRU specifically
[05:05] <fo0bar> (package update, not new package)
[05:18] <fo0bar> ah, found my previous request (from 2012).  looks like it's just a matter of "please sync" against the package, with an explanation of the rationale
[05:21] <karstensrage> fo0bar, are you familiar with the rationales for backports?
[05:21] <karstensrage> for new stuff
[05:23] <fo0bar> karstensrage: in general, or for my specific request?
[05:26] <karstensrage> fo0bar, in general
[05:27] <karstensrage> i have two new packages that got into xenial and id like them backported to trusty and precise
[05:29] <fo0bar> karstensrage: ahh yes, I do happen to.  that is a Stable Release Update (SRU), and is explained at https://wiki.ubuntu.com/StableReleaseUpdates
[05:30] <karstensrage> that doesnt seem like its for new packages?
[05:31] <fo0bar> karstensrage: I believe the process is the same.  I'm not an Ubuntu developer, but I do know that once releases are finalized (and especially LTSes), SRU requests get more scrutiny than normal sync requests, so you'll need to read over and closely follow that doc
[05:31] <karstensrage> hmm ok
[05:48] <infinity> fo0bar: That's not an SRU he's asking for, it's a backport.
[05:48] <infinity> https://wiki.ubuntu.com/UbuntuBackports
[05:48] <infinity> Very different (goes to a different pocket, managed by different people, etc)
[05:49] <infinity> karstensrage: The process is as laid out on the wiki page.  Expecting immediate response to backports bugs is probably your failing.  Be patient, once you've followed the right steps.
[05:49] <infinity> fo0bar: What are you looking to get synced?
[05:50] <fo0bar> infinity: ah sorry, my mistake
[05:51] <fo0bar> infinity: https://bugs.launchpad.net/ubuntu/+source/2ping/+bug/1562455
[05:51] <infinity> ubottu: Your parser sucks.
[05:51] <fo0bar> haha
[05:51] <fo0bar> 2ping used to be the first sorted package in the archive, before that damned 0ad came around
[05:51] <infinity> fo0bar: Done.
[05:52] <fo0bar> infinity: ta!
[05:55] <infinity> fo0bar: Oh, if I'd seen who the upstream was, I might have not synced it.
[05:55] <infinity> fo0bar: Can we trust that guy?
[05:55] <fo0bar> that jerk
[05:57] <fo0bar> 2ping protocol is about 99.5% binary data and lives happily (and safely) in bytearray() land, but that other .5% is optional text notice data, and of course that's the part I screwed up
[05:57] <fo0bar> luckily when not in debug mode, all untrusted parsing is handled in a failsafe exception handler
[05:58] <infinity> I should write a second implementation of the protocol called bigping.
[05:59] <infinity> Full name, of course, Notorious P.I.N.G.
[05:59] <fo0bar> haha
[06:39] <karstensrage> infinity, i just want to make sure i have a properly worded rationale for the backport, i am anxious but not expecting an immediate response
[06:40] <karstensrage> ive been looking over other rationales that i can find but i dont see any patterns
[06:40] <karstensrage> i mean for golang backports the rationale i guess i obvious
[06:40] <karstensrage> ... is obvious
[06:40] <infinity> karstensrage: The only real rationale for a NEW package needs to be "this package doesn't exist in trusty, but I'd like to use it there".
[06:40] <infinity> karstensrage: The backports pocket isn't particularly strict in what it accepts.
[06:41] <karstensrage> ok that makes sense
[06:41] <infinity> (Which is why we don't install it by default)
[06:42] <karstensrage> oh sure, these packages dont even make sense as defaults
[06:42] <karstensrage> for anything
[06:42] <infinity> I meant we don't install anything from backports by default.  ie: if there's an upgraded version of something in backports, apt won't offer it to you unless you explicitly request it.
[06:43] <infinity> With, say "apt-get install package/trusty-backports"
[06:43] <karstensrage> hmm so like even if you already have the package installed it doesnt say an update is available?
[06:43] <infinity> Exactly.
[06:43] <infinity> Because we don't support backports.
[06:43] <infinity> So auto-upgrading people to it would be irresponsible.
[06:43] <karstensrage> oh but security updates are handled differently right?
[06:43] <infinity> It's a use-at-your-own-risk service to people who prefer new shiny over well-supported.
[06:44] <infinity> -security and -updates pockets are automatic.  -backports isn't.
[06:44] <karstensrage> oh i see
[06:44] <karstensrage> ah now i see
[06:45] <karstensrage> ok that really is different than what i thought, let me re-read backports with this new context to see if i get something different
[06:46] <infinity> karstensrage: Backports is almost certainly where your "I want my new package on old releases" request belongs.  It's not even remotely acceptable as an SRU.
[06:46] <infinity> karstensrage: Alternately, you could just tell your users to use xenial if they want your package.  Your call.
[06:47] <karstensrage> yeah i get that, but what happens if a security issue or bug is found, and then you want that backported as well?
[06:47] <infinity> You just submit another backport request.
[06:47] <karstensrage> but it wont show to be updated?
[06:47] <infinity> So, the security fix lands in xenial-security, then you ask the backporty people to re-backport the xenial-security version to trusty-backports.
[06:47] <karstensrage> since its -backport?
[06:47] <infinity> Oh, once something is installed from -backports, you get updates from -backports.
[06:47] <karstensrage> ah ok
[06:47] <karstensrage> ok gotcha
[06:48] <infinity> It's the initial install/upgrade to/from backports that has to be explicit.
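The behaviour infinity describes falls out of apt's pin priorities: the backports pocket is published NotAutomatic (with ButAutomaticUpgrades), so its versions get priority 100, the same as already-installed versions, instead of the usual 500. An illustration of the effective defaults, with a hypothetical opt-in pin (values per apt_preferences(5); the package name is a placeholder):

```
# Default pin priorities, as documented in apt_preferences(5):
#   500  normal pockets (trusty, trusty-updates, trusty-security)
#   100  NotAutomatic/ButAutomaticUpgrades pockets such as
#        trusty-backports, and the currently installed version
#
# So a backports version never beats the installed one initially;
# but once a package IS installed from backports, newer backports
# versions tie on priority (100) and win on version number.
#
# Hypothetical pin to always track one package from backports:
Package: some-package
Pin: release a=trusty-backports
Pin-Priority: 500
```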
[06:48] <karstensrage> yes of course
[06:48] <karstensrage> yes that makes sense
[06:49] <karstensrage> alright thank you, ill re-read again in the morning, kind of bleary eyed, must get some sleep, good night
[06:51] <infinity> 'Night.
[09:38] <ginggs> infinity: was that you rebuilding fpc just now?
[09:51] <juliank> infinity: I guess we're not that lucky with apt 1.2.9, the i386 autopkgtest failed the flaky progress detection twice :(  - either retry a third time (or how much more is needed) or ignore the issue (if possible). I'm currently testing a way to make the test less flaky, by using 16MB test files instead of 800KB ones
[12:15] <juliank> cjwatson: Does openssh-server really need a strictly versioned dep on openssh-client? This can cause APT to switch architectures if the native arch is not available in the wanted version, but the other arch is (https://unix.stackexchange.com/questions/272416/why-installing-openssh-server-would-remove-openssh-client)
[12:16]  * juliank has a fix for that on the APT side too, but not sure if he will roll it out
[12:17] <juliank> APT fix: https://github.com/julian-klode/apt/compare/master...julian-klode:bugfix/cross-arch-candidate?expand=1
[12:35] <mapreri> can somebody mark lp #1562114 as triaged/minor (or whatever severity launchpad offers for this)
[16:54] <mitya57> mapreri, done
[16:54] <mapreri> mitya57: ta :)
[18:19] <infinity> ginggs: It was not me.
[18:20] <infinity> juliank: Retried yet again.  The odds aren't playing out for us right now, though. :P
[18:22] <juliank> infinity: Yeah, I have not seen that fail that often (3 runs until now, and all 3 failed the same flaky test) - hopefully this gets better when the test uses 16 MB files instead of 800 KB ones...
[18:22] <juliank> I could also start at 1MB, and then double the size with every failure
[18:23] <infinity> Heh.
[18:23] <infinity> That's assuming upping the size really helps.
[18:23] <infinity> If the problem is that you need a (vaguely) consistent throttled speed, could you throttle it from the server side instead?
[18:23] <infinity> (not sure what you're using as your https server in the tests)
[18:27] <juliank> It's a custom built server, so the answer is yes, we can throttle there.
[18:28] <juliank> s/built/written/
[18:28] <juliank> I tried adding usleep(100) between each 500 byte block, that made things less bad
[18:29] <juliank> but larger file size produced less retries on my test system, so I suspect it's the same on i386
[18:31] <juliank> It tries 16 MB files, starting with a limit of 16 MB/s and then throttling down, dividing by the number of the retry
[18:32] <juliank> This actually seems to throttle, as a run at half speed takes 5 seconds instead of 3
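The server-side throttle discussed here can be sketched as a pacing loop: send fixed-size chunks and sleep so that the average rate holds the target from the first chunk, instead of curl's start-fast-then-adjust behaviour. A sketch under stated assumptions (function names are hypothetical, not apt's actual test webserver; 500-byte chunks as in the usleep() experiment above):

```python
import time

CHUNK = 500  # bytes per write, as in the usleep() experiment

def chunk_deadlines(total_bytes: int, rate_bps: float, chunk: int = CHUNK):
    """Time offsets (seconds from start) at which each chunk may be
    sent so the average throughput never exceeds rate_bps."""
    deadlines = []
    sent = 0
    while sent < total_bytes:
        deadlines.append(sent / rate_bps)  # bytes already sent / rate
        sent += chunk
    return deadlines

def send_throttled(write, data: bytes, rate_bps: float):
    """Write data in CHUNK-sized pieces, sleeping to hold rate_bps."""
    start = time.monotonic()
    for off, deadline in zip(range(0, len(data), CHUNK),
                             chunk_deadlines(len(data), rate_bps)):
        delay = deadline - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        write(data[off:off + CHUNK])
```

With these numbers, a 16 MB file at 16 MB/s paces out over roughly one second (32000 chunks), and halving the rate doubles the run time, matching the 5-seconds-versus-3 observation.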
[18:51] <ginggs> infinity: no worries, i guess it was Logan wondering why his lazarus sync wouldn't build on powerpc. I've filed bug LP: #1562480
[18:53] <juliank> infinity: no luck :(
[18:54] <juliank> infinity: Is it possible to just force accept it?
[18:55] <infinity> juliank: It is, yeah.
[18:55] <infinity> juliank: But I'm bored enough to retry a few dozen more times too.
[19:00] <juliank> Such a waste of resources ...
[19:01] <juliank> With the larger size coming in 1.2.10 I have not yet seen a single failure or even a retry
[19:01] <infinity> juliank: That's promising.
[19:01] <juliank> in my i386 chroot
[19:02] <juliank> which failed quite easily before (1 of 5 tries I think)
[19:02] <juliank> s/tries/runs/
[19:03] <juliank> Now I have done 15 runs already
[19:03] <juliank> every check passed
[19:15] <infinity> juliank: Hey look, 5th time's the charm. :P
[19:15] <juliank> <insert random exclamation here>!
[19:16]  * juliank should really implement rate limiting on the server
[19:17] <infinity> juliank: That's probably the only way tests like this won't be an arms race.
[19:18] <infinity> juliank: I note that the 800k file was "testfile.big", which is hilarious.
[19:18] <infinity> If 800k is "big", I think we found a time machine.
[19:19] <juliank> I think with 800k, the bytes were already buffered in the kernel sometimes
[19:20] <juliank> causing the whole issue of directly going from 0 to 100
[19:20] <juliank> Actually, I'm not sure how much the method(s) request at once.
[19:20] <juliank> But it's definitely less than 16 MB :D
[19:23] <juliank> Our timeout is 500ms
[19:24] <juliank> That is, we print progress every 500 ms
[19:24] <juliank> If two tests take 1.3 seconds or something like that as they used to do, that can't work
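The 500 ms progress interval explains the flakiness: a test only observes intermediate progress if the download spans multiple ticks. A quick back-of-envelope check (the localhost rate is an assumption for illustration; sizes and the 500 ms interval are from the discussion above):

```python
def progress_ticks(size_bytes: float, rate_bps: float,
                   interval_s: float = 0.5) -> int:
    """Number of complete 500 ms progress intervals during a download."""
    return int(size_bytes / rate_bps / interval_s)

# 800 KB at an assumed fast localhost rate (8 MB/s) finishes inside
# one tick, so no intermediate progress is ever printed:
print(progress_ticks(800_000, 8_000_000))      # → 0
# 16 MB throttled to 16 MB/s takes about a second: two full ticks.
print(progress_ticks(16_000_000, 16_000_000))  # → 2
```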
[22:16] <juliank> Is it possible to lock down launchpad bugs?
[22:16] <juliank> bug #1558331 is getting a bit out of control
[22:20] <cjwatson> juliank: There's some common functionality in both packages and I'd rather not have confusion due to "apt-get install openssh-server" not upgrading both.  Surely this isn't the only package with this pattern of dependencies?
[22:21] <juliank> OK
[22:40]  * juliank does not understand why users are so whiny about a very small minority of neglected repositories not working with a not-yet-released Ubuntu release...
[22:41] <juliank> Especially Nvidia is terrifying: They only have MD5 checksums.
[22:42] <ari-tczew> cjwatson: is there still actual problem with syncpackage?
[22:43] <juliank> ari-tczew: (not cj, but: ) It at least imported apt fine, so it seems OK to me ...
[22:44] <ari-tczew> juliank: I've already synced one package for me and it works as well. however, I did a sponsor sync and I guess there is something missed..
[23:02] <Logan> ginggs: good deductive skills! :P
[23:02] <Logan> that's exactly why I retried
[23:40] <duobix> Hi there, is there a plan to support devices with 32bit uefi and 64bit cpus? (bay trail/cheap x86 tablets)