[00:02] test works now (and fails in 1.2.8!)
[00:04] juliank: Always a good sign for a test.
[00:04] * infinity decides it's pizza and TV time.
[00:11] lamont: Fixed in https://anonscm.debian.org/cgit/apt/apt.git/commit/?id=6df5632
[00:13] infinity: It was TV time for me, and then lamont returned with bug details. So it became bugfixing time :D
[00:15] Off to bed now, it's 1am...
[00:22] * juliank should write a tool to create minimal apt test cases ...
[00:33] lamont: infinity: Uploaded 1.2.9 to Debian, should be part of the next dinstall run in an hour.
[00:33] => can be synced in a few hours
[00:33] juliank: Is "aptget" in the testcase a typo, or a wrapper?
[00:33] infinity: A wrapper
[00:33] Check.
[00:35] infinity: apt-get still runs the host apt-get, and aptget the one in the build tree
[00:39] Oh really, the armhf test suite for 1.2.8 now failed in another test... :/
[00:40] We run those flaky tests 10 times, and they still fail :/
[00:41] Maybe we should run them in qemu to slow things down ...
[00:43] juliank: Meh, they'll magically all work in 1.2.9 for no good reason anyway.
[00:44] Let's hope so.
[00:44] It's almost always armhf and i386 failing
[00:44] Coincidentally, the only 32-bit arches in adt?
[00:45] Yeah. I suppose those are the slowest ones
[00:45] I didn't say slowest, just 32-bit.
[00:45] Yes, but I said
[00:45] The amd64 and i386 tests run on exactly the same hardware.
[00:45] hmm
[00:46] Somewhat surprising
[00:47] These same tests are run at build-time, right?
[00:47] No
[00:47] Ahh. Kay.
[00:47] Cause I'd be even more confused if they did.
[00:47] At build-time we only run small unit tests
[00:47] CI systems (autopkgtest, travis) also run full integration test suite
[00:48] or actually, they only run the integration tests...
[00:48] Well, travis runs both
[00:49] I think the problem with the download progress testing is that curl does not really respect speed limits you tell it
[00:50] It starts fetching at full speed and then adjusts down to reach the requested speed
[00:50] which fails on localhost for 800KB files
[00:50] So you need bigger files?
[00:50] Or less curl.
[00:51] That might make sense
[00:51] Well, the https method uses curl
[00:51] Oh, right.
[00:51] Bigger files, then.
[00:52] Yes
[00:52] And the other failing test, I have no idea yet.
[00:52] It's supposed to test that fetching A, and B (which redirects to A) only cause one download of A
[00:53] It actually does, but the 103 Redirect message from the method to the main process is never shown
[00:53] It says " @ Queue: Action combined for http://localhost:35267/foo and http://localhost:35267/foo"
[00:54] Weren't you supposed to be sleeping? :)
[00:54] Yes, unfortunately, yes.
[00:55] * juliank should just grep for "@ Queue: Action combined for" instead of "103 Redirect", that would be more reliable
[00:57] I see what's going on
[00:58] GET /foo with "Range: bytes=64350-" cannot work on a 64350 byte file
[00:58] Going to feed that bug to DonKult
[01:03] So this one might actually be a real bug
[01:07] I feel like I just lost one hour :/
[01:08] Hello summertime, I suppose :/
[01:12] juliank: thanks!
[01:24] infinity, cant thank you enough, works flawlessly on 16.04
[01:25] freaking brilliant
[01:25] very happy
[01:34] can someone help me with backports?
=== juliank is now known as Guest47366
=== juliank_ is now known as juliank
[05:03] to request a (universe) package be synced from debian after the import freeze, should I request an SRU? I've only had to do this once before years ago, and seem to remember the process was SRU-like, but can't remember if it was an SRU specifically
[05:05] (package update, not new package)
[05:18] ah, found my previous request (from 2012).
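[Editorial aside on the Range bug spotted above: valid byte offsets in an HTTP range run from 0 to length-1, so a resume request starting at offset 64350 on a 64350-byte file asks only for bytes past the end and is unsatisfiable; a server should answer 416 Range Not Satisfiable. A minimal sketch of the check, using a hypothetical helper name rather than apt's actual code:]

```python
import re

def range_is_satisfiable(range_header, file_size):
    """Check whether a "Range: bytes=N-" header can be satisfied for a
    file of file_size bytes. Valid offsets are 0..file_size-1, so a range
    whose first byte position is at or past file_size is unsatisfiable
    and should get a 416 Range Not Satisfiable response."""
    m = re.fullmatch(r"bytes=(\d+)-", range_header)
    if not m:
        return False  # only the open-ended single-range form is handled here
    first_byte = int(m.group(1))
    return first_byte < file_size

# The case from the log: resuming at byte 64350 of a 64350-byte file.
print(range_is_satisfiable("bytes=64350-", 64350))  # → False (unsatisfiable)
print(range_is_satisfiable("bytes=64349-", 64350))  # → True (one byte remains)
```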
looks like it's just a matter of "please sync" against the package, with an explanation of the rationale
[05:21] fo0bar, are you familiar with the rationales for backports?
[05:21] for new stuff
[05:23] karstensrage: in general, or for my specific request?
[05:26] fo0bar, in general
[05:27] i have two new packages that got into xenial and id like them backported to trusty and precise
[05:29] karstensrage: ahh yes, I do happen to. that is a Stable Release Update (SRU), and is explained at https://wiki.ubuntu.com/StableReleaseUpdates
[05:30] that doesnt seem like its for new packages?
[05:31] karstensrage: I believe the process is the same. I'm not an Ubuntu developer, but I do know that once releases are finalized (and especially LTSes), SRU requests get more scrutiny than normal sync requests, so you'll need to read over and closely follow that doc
[05:31] hmm ok
[05:48] fo0bar: That's not an SRU he's asking for, it's a backport.
[05:48] https://wiki.ubuntu.com/UbuntuBackports
[05:48] Very different (goes to a different pocket, managed by different people, etc)
[05:49] karstensrage: The process is as laid out on the wiki page. Expecting immediate response to backports bugs is probably your failing. Be patient, once you've followed the right steps.
[05:49] fo0bar: What are you looking to get synced?
[05:50] infinity: ah sorry, my mistake
[05:51] infinity: https://bugs.launchpad.net/ubuntu/+source/2ping/+bug/1562455
[05:51] Error: launchpad bug 2 not found
[05:51] ubottu: Your parser sucks.
[05:51] infinity: I am only a bot, please don't think I'm intelligent :)
[05:51] haha
[05:51] 2ping used to be the first sorted package in the archive, before that damned 0ad came around
[05:51] fo0bar: Done.
[05:52] infinity: ta!
[05:55] fo0bar: Oh, if I'd seen who the upstream was, I might have not synced it.
[05:55] fo0bar: Can we trust that guy?
[05:55] that jerk
[05:57] 2ping protocol is about 99.5% binary data and lives happily (and safely) in bytearray() land, but that other .5% is optional text notice data, and of course that's the part I screwed up
[05:57] luckily when not in debug mode, all untrusted parsing is handled in a failsafe exception handler
[05:58] I should write a second implementation of the protocol called bigping.
[05:59] Full name, of course, Notorious P.I.N.G.
[05:59] haha
[06:39] infinity, i just want to make sure i have a properly worded rationale for the backport, i am anxious but not expecting an immediate response
[06:40] ive been looking over other rationales that i can find but i dont see any patterns
[06:40] i mean for golang backports the rationale i guess i obvious
[06:40] ... is obvious
[06:40] karstensrage: The only real rationale for a NEW package needs to be "this package doesn't exist in trusty, but I'd like to use it there".
[06:40] karstensrage: The backports pocket isn't particularly strict in what it accepts.
[06:41] ok that makes sense
[06:41] (Which is why we don't install it by default)
[06:42] oh sure, these packages dont even make sense as defaults
[06:42] for anything
[06:42] I meant we don't install anything from backports by default. ie: if there's an upgraded version of something in backports, apt won't offer it to you unless you explicitly request it.
[06:43] With, say "apt-get install package/trusty-backports"
[06:43] hmm so like even if you already have the package installed it doesnt say an update is available?
[06:43] Exactly.
[06:43] Because we don't support backports.
[06:43] So auto-upgrading people to it would be irresponsible.
[06:43] oh but security updates are handled differently right?
[06:43] It's a use-at-your-own-risk service to people who prefer new shiny over well-supported.
[06:44] -security and -updates pockets are automatic. -backports isn't.
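[Editorial aside illustrating the mechanics described above. The release name comes from the discussion; the package name is a placeholder:]

```
# /etc/apt/sources.list entry enabling the backports pocket:
deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse

# Nothing from -backports is installed or offered as an upgrade by default;
# the initial install must name the pocket explicitly, e.g.
#   apt-get install <package>/trusty-backports
# or equivalently
#   apt-get install -t trusty-backports <package>
# After that initial explicit install, further updates to that package
# are picked up from -backports automatically.
```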
[06:44] oh i see
[06:44] ah now i see
[06:45] ok that really is different than what i thought, let me re-read backports with this new context to see if i get something different
[06:46] karstensrage: Backports is almost certainly where your "I want my new package on old releases" request belongs. It's not even remotely acceptable as an SRU.
[06:46] karstensrage: Alternately, you could just tell your users to use xenial if they want your package. Your call.
[06:47] yeah i get that, but what happens if a security issue or bug is found, and then you want that backported as well?
[06:47] You just submit another backport request.
[06:47] but it wont show to be updated?
[06:47] So, the security fix lands in xenial-security, then you ask the backporty people to re-backport the xenial-security version to trusty-backports.
[06:47] since its -backport?
[06:47] Oh, once something is installed from -backports, you get updates from -backports.
[06:47] ah ok
[06:47] ok gotcha
[06:48] It's the initial install/upgrade to/from backports that has to be explicit.
[06:48] yes of course
[06:48] yes that makes sense
[06:49] alright thank you, ill re-read again in the morning, kind of bleary eyed, must get some sleep, good night
[06:51] 'Night.
[09:38] infinity: was that you rebuilding fpc just now?
[09:51] infinity: I guess we're not that lucky with apt 1.2.9, the i386 autopkgtest failed the flaky progress detection twice :( - either retry a third time (or how much more is needed) or ignore the issue (if possible). I'm currently testing a way to make the test less flaky, by using 16MB test files instead of 800KB ones
[12:15] cjwatson: Does openssh-server really need a strictly versioned dep on openssh-client?
This can cause APT to switch architectures if the native arch is not available in the wanted version, but the other arch is (https://unix.stackexchange.com/questions/272416/why-installing-openssh-server-would-remove-openssh-client)
[12:16] * juliank has a fix for that on the APT side too, but not sure if he will roll it out
[12:17] APT fix: https://github.com/julian-klode/apt/compare/master...julian-klode:bugfix/cross-arch-candidate?expand=1
[12:35] can somebody mark lp #1562114 as triaged/minor (or whatever severity launchpad offers for this)
[12:35] Launchpad bug 1562114 in scribus (Ubuntu) "text box editor return always on the top when changing window" [Undecided,Confirmed] https://launchpad.net/bugs/1562114
[16:54] mapreri, done
[16:54] mitya57: ta :)
=== pavlushka_ is now known as pavlushka
[18:19] ginggs: It was not me.
[18:20] juliank: Retried yet again. The odds aren't playing out for us right now, though. :P
[18:22] infinity: Yeah, I have not seen that fail that often (3 runs until now, and all 3 failed the same flaky test) - hopefully this gets better when the test uses 16 MB files instead of 800 KB ones...
[18:22] I could also start at 1MB, and then double the size with every failure
[18:23] Heh.
[18:23] That's assuming upping the size really helps.
[18:23] If the problem is that you need a (vaguely) consistent throttled speed, could you throttle it from the server side instead?
[18:23] (not sure what you're using as your https server in the tests)
[18:27] It's a custom built server, so the answer is yes, we can throttle there.
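[Editorial aside: the server-side throttling suggested here can be sketched roughly as follows. Serve the payload in fixed-size blocks and sleep between blocks so the cumulative rate never exceeds a target; unlike curl's client-side limiting, the sender then never bursts ahead of its budget. This is a hypothetical helper, not apt's actual test webserver (which is custom-written in another language); the 500-byte block size echoes the figure mentioned in the log:]

```python
import time

def serve_throttled(data, write, rate_limit, block_size=500):
    """Send `data` through the `write` callback in block_size chunks,
    sleeping so the average throughput stays at or below rate_limit
    bytes per second. The budget is cumulative from the start time, so
    oversleeping on one block self-corrects on later ones."""
    start = time.monotonic()
    sent = 0
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        write(block)
        sent += len(block)
        # Earliest moment at which `sent` bytes are allowed on the wire.
        allowed_at = start + sent / rate_limit
        delay = allowed_at - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return sent

# Example: 50 KB at 100 KB/s takes roughly half a second.
chunks = []
serve_throttled(b"x" * 50_000, chunks.append, rate_limit=100_000)
```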
[18:28] s/built/written/
[18:28] I tried adding usleep(100) between each 500 byte block, that made things less bad
[18:29] but larger file size produced less retries on my test system, so I suspect it's the same on i386
[18:31] It tries 16 MB files, starting with a limit of 16 MB/s and then throttling down, dividing by the number of the retry
[18:32] This actually seems to throttle, as a run at half speed takes 5 seconds instead of 3
[18:51] infinity: no worries, i guess it was Logan wondering why his lazarus sync wouldn't build on powerpc. I've filed bug LP: #1562480
[18:51] Launchpad bug 1562480 in glibc (Ubuntu) "fp-compiler not installable on powerpc since glibc 2.23" [Undecided,New] https://launchpad.net/bugs/1562480
[18:53] infinity: no luck :(
[18:54] infinity: Is it possible to just force accept it?
[18:55] juliank: It is, yeah.
[18:55] juliank: But I'm bored enough to retry a few dozen more times too.
[19:00] Such a waste of resources ...
[19:01] With the larger size coming in 1.2.10 I have not yet seen a single failure or even a retry
[19:01] juliank: That's promising.
[19:01] in my i386 chroot
[19:02] which failed quite easily before (1 of 5 tries I think)
[19:02] s/tries/runs/
[19:03] Now I have done 15 runs already
[19:03] every check passed
[19:15] juliank: Hey look, 5th time's the charm. :P
[19:15] !
[19:16] * juliank should really implement rate limiting on the server
[19:17] juliank: That's probably the only way tests like this won't be an arms race.
[19:18] juliank: I note that the 800k file was "testfile.big", which is hilarious.
[19:18] If 800k is "big", I think we found a time machine.
[19:19] I think with 800k, the bytes were already buffered in the kernel sometimes
[19:20] causing the whole issue of directly going from 0 to 100
[19:20] Actually, I'm not sure how much the method(s) request at once.
[19:20] But it's definitely less than 16 MB :D
[19:23] Our timeout is 500ms
[19:24] That is, we print progress every 500 ms
[19:24] If two tests take 1.3 seconds or something like that as they used to do, that can't work
[22:16] Is it possible to lock down launchpad bugs?
[22:16] bug #1558331 is getting a bit out of control
[22:16] bug 1558331 in apt (Ubuntu) "message "The repository is insufficiently signed by key (weak digest)" is poorly worded" [High,Fix released] https://launchpad.net/bugs/1558331
[22:20] juliank: There's some common functionality in both packages and I'd rather not have confusion due to "apt-get install openssh-server" not upgrading both. Surely this isn't the only package with this pattern of dependencies?
[22:21] OK
[22:40] * juliank does not understand why users are so whiny about a very small minority of neglected repositories not working with a not-yet-released Ubuntu release...
[22:41] Especially Nvidia is terrifying: They only have MD5 checksums.
[22:42] cjwatson: is there still an actual problem with syncpackage?
[22:43] ari-tczew: (not cj, but: ) It at least imported apt fine, so it seems OK to me ...
[22:44] juliank: I've already synced one package for me and it works as well. however, I did a sponsor sync and I guess there is something missed..
[23:02] ginggs: good deductive skills! :P
[23:02] that's exactly why I retried
[23:40] Hi there, is there a plan to support devices with 32bit uefi and 64bit cpus? (bay trail/cheap x86 tablets)
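[Closing editorial aside on the 500 ms progress interval discussed above: a reporter that emits at most one update per interval will show a transfer finishing inside a single interval as a direct jump from 0 to 100%, which is exactly what the flaky progress test trips over when small files finish too fast. A hypothetical sketch (not apt's actual reporter) that replays sampled progress events through such an interval filter:]

```python
def progress_updates(events, interval=0.5):
    """Given (timestamp, bytes_done) samples of a transfer, return the
    updates a reporter printing at most every `interval` seconds would
    emit. If the whole transfer fits inside one interval, no intermediate
    progress is ever shown -- the 0-to-100 jump described in the log."""
    emitted = []
    next_due = 0.0
    for t, done in events:
        if t >= next_due:
            emitted.append((t, done))
            next_due = t + interval
    return emitted

# A transfer sampled every 100 ms that completes within 300 ms:
fast = [(0.0, 0), (0.1, 400), (0.2, 800), (0.3, 800)]
print(progress_updates(fast))  # → [(0.0, 0)]: nothing between start and done
```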