[01:06] -queuebot:#ubuntu-release- New binary: haskell-http-link-header [amd64] (artful-proposed/universe) [1.0.3-2] (no packageset)
[01:06] -queuebot:#ubuntu-release- New binary: haskell-http-link-header [ppc64el] (artful-proposed/universe) [1.0.3-2] (no packageset)
[01:06] -queuebot:#ubuntu-release- New binary: haskell-http-link-header [i386] (artful-proposed/universe) [1.0.3-2] (no packageset)
[01:09] -queuebot:#ubuntu-release- New binary: haskell-http-link-header [arm64] (artful-proposed/universe) [1.0.3-2] (no packageset)
[03:29] -queuebot:#ubuntu-release- New: accepted haskell-http-link-header [arm64] (artful-proposed) [1.0.3-2]
[03:29] -queuebot:#ubuntu-release- New: accepted haskell-http-link-header [ppc64el] (artful-proposed) [1.0.3-2]
[03:29] -queuebot:#ubuntu-release- New: accepted haskell-http-link-header [i386] (artful-proposed) [1.0.3-2]
[03:29] -queuebot:#ubuntu-release- New: accepted haskell-cabal-install [s390x] (artful-proposed) [1.24.0.2-2]
[03:29] -queuebot:#ubuntu-release- New: accepted haskell-http-link-header [amd64] (artful-proposed) [1.0.3-2]
[03:45] jbicha: are you tracking the mapnik that you synced from experimental? It has missing symbols relative to the previous version and regresses node-mapnik
[03:46] (and probably more; but node-mapnik has the autopkgtest that notices)
[04:07] -queuebot:#ubuntu-release- New binary: haskell-github [amd64] (artful-proposed/universe) [0.15.0-1build1] (no packageset)
[04:07] -queuebot:#ubuntu-release- New binary: haskell-github [ppc64el] (artful-proposed/universe) [0.15.0-1build1] (no packageset)
[04:07] -queuebot:#ubuntu-release- New binary: haskell-github [i386] (artful-proposed/universe) [0.15.0-1build1] (no packageset)
[04:32] -queuebot:#ubuntu-release- New: accepted haskell-github [amd64] (artful-proposed) [0.15.0-1build1]
[04:32] -queuebot:#ubuntu-release- New: accepted haskell-github [ppc64el] (artful-proposed) [0.15.0-1build1]
[04:32] -queuebot:#ubuntu-release- New: accepted haskell-github [i386] (artful-proposed) [0.15.0-1build1]
[08:03] slangasek: I do it frequently
[08:03] But not at the weekend ......
[08:03] slangasek: Are you interested in helping out, or something? You seem to be keeping an eye on this.
[09:05] hello, did anybody stop haskell autosync? btw slangasek great success :)
[09:08] -queuebot:#ubuntu-release- New binary: paste [amd64] (artful-proposed/main) [2.0.3+dfsg-4ubuntu1] (ubuntu-desktop, ubuntu-server)
[09:21] btw why is ninja failing in that strange way? I would blame debhelper or similar...
[09:22] src:ninja-build, seems that debhelper is missing some target
[09:23] I see no evidence that haskell autosyncing is stopped
[09:26] I had to manually sync ghc some minutes ago
[09:33] LocutusOfBorg: what happens if you rebuild in unstable now?
[09:33] 10.5 is there
[09:35] Laney, I finished 10 seconds ago
[09:35] damn, same issue
[09:35] adding an empty override_dh_auto_install works, meh
[09:35] I'm opening an RC bug in debian
[09:36] or maybe I'll open it if you ask me, because it might be a debhelper bug
[09:38] seems likely that it is
[09:38] -queuebot:#ubuntu-release- New binary: linux-signed-lts-xenial [amd64] (trusty-proposed/main) [4.4.0-82.105~14.04.1] (kernel)
[09:38] I'll ask in debian devel irc
[09:39] LocutusOfBorg: ghc> you're just impatient
[09:40] LocutusOfBorg: by about an hour or so
[09:40] ok, it was uploaded yesterday... anyway, even better
[09:44] LocutusOfBorg: didn't get imported into LP last night because of losing a race by a couple of minutes, which delayed it by six hours
[09:44] oh ok
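
(A minimal debian/rules sketch of the workaround mentioned above for src:ninja-build — an empty override_dh_auto_install target, so the dh sequencer runs nothing for that step instead of hitting the suspected debhelper bug. The wildcard rule is the generic dh boilerplate assumed here for illustration, not ninja-build's actual packaging; recipe lines must be tab-indented:)

    #!/usr/bin/make -f
    # Generic dh sequencer boilerplate (assumed for illustration).
    %:
    	dh $@

    # Empty override: dh runs this (empty) target in place of dh_auto_install,
    # working around the failure until the debhelper bug is sorted out.
    override_dh_auto_install:
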
[10:07] -queuebot:#ubuntu-release- Unapproved: at-spi2-atk (xenial-proposed/main) [2.18.1-2ubuntu1 => 2.24.1-0ubuntu1] (kubuntu, ubuntu-desktop)
[10:09] -queuebot:#ubuntu-release- Unapproved: rejected at-spi2-atk [source] (xenial-proposed) [2.24.1-0ubuntu1]
[10:15] * cjwatson finally gets round to proposing a crontab merge to fix said race. Only been meaning to do that for about six years
[10:32] cjwatson, <3
[10:32] lovely cron
[10:52] -queuebot:#ubuntu-release- New: accepted linux-signed-lts-xenial [amd64] (trusty-proposed) [4.4.0-82.105~14.04.1]
[11:35] slangasek: yes I believe we just need a newer node-mapnik version, we can do that ourselves or wait a bit longer for Debian
[11:51] -queuebot:#ubuntu-release- Unapproved: libinput (xenial-proposed/main) [1.2.3-1ubuntu1 => 1.6.3-1ubuntu1~16.04.1] (kubuntu, ubuntu-desktop)
[12:12] -queuebot:#ubuntu-release- Unapproved: xorg-server (xenial-proposed/main) [2:1.18.4-0ubuntu0.2 => 2:1.18.4-0ubuntu0.3] (desktop-core, xorg)
[12:18] -queuebot:#ubuntu-release- Unapproved: xfonts-utils (xenial-proposed/main) [1:7.7+3ubuntu0.16.04.1 => 1:7.7+3ubuntu0.16.04.2] (core)
[12:57] -queuebot:#ubuntu-release- Unapproved: mesa (xenial-proposed/main) [12.0.6-0ubuntu0.16.04.1 => 17.0.7-0ubuntu0.16.04.1] (core, xorg)
[13:40] -queuebot:#ubuntu-release- New: accepted paste [amd64] (artful-proposed) [2.0.3+dfsg-4ubuntu1]
[14:21] -queuebot:#ubuntu-release- New binary: rustc [amd64] (artful-proposed/universe) [1.17.0+dfsg2-7] (no packageset)
[14:21] -queuebot:#ubuntu-release- New binary: rustc [s390x] (artful-proposed/universe) [1.17.0+dfsg2-7] (no packageset)
[14:27] -queuebot:#ubuntu-release- New binary: rustc [i386] (artful-proposed/universe) [1.17.0+dfsg2-7] (no packageset)
[14:36] -queuebot:#ubuntu-release- Unapproved: accepted curtin [source] (xenial-proposed) [0.1.0~bzr505-0ubuntu1~16.04.1]
[15:33] -queuebot:#ubuntu-release- Unapproved: intel-microcode (xenial-proposed/restricted) [3.20151106.1 => 3.20170511.1~ubuntu16.04.0] (ubuntu-desktop, ubuntu-server)
[15:34] -queuebot:#ubuntu-release- Unapproved: intel-microcode (yakkety-proposed/restricted) [3.20160714.1 => 3.20170511.1~ubuntu16.10.0] (ubuntu-desktop, ubuntu-server)
[15:34] -queuebot:#ubuntu-release- Unapproved: intel-microcode (zesty-proposed/restricted) [3.20161104.1 => 3.20170511.1~ubuntu17.04.0] (ubuntu-desktop, ubuntu-server)
[15:46] infinity: xnox: ^ who's reviewing these? Want me to do it? I don't see sil2100 here.
[15:50] -queuebot:#ubuntu-release- New binary: rustc [arm64] (artful-proposed/universe) [1.17.0+dfsg2-7] (no packageset)
[15:50] rbasak, i review no, i applaud only =)
[15:53] xnox: huh?
[15:53] rbasak, i'm just the uploader, and i have not pinged anybody to review it yet.
[15:54] * xnox translates ranglish to english
[15:54] Ah
[15:54] OK. Well then I'll take it.
[15:58] xnox: not doing Trusty?
[16:00] rbasak, nope.
[16:01] rbasak, these new microcodes require early initramfs loading code; which is not in trusty. And I don't think intel-microcode is widely installed in trusty.
[16:02] rbasak, in xenial we have started automated intel-microcode installation with ubuntu-drivers
[16:02] I see. Thanks.
[16:04] xnox: in the past the intel-microcode SRU updated the dat file only. With your backport it has the effect of changing more, such as debhelper compat level and the initramfs-tools hook. Do you have an opinion on one approach over the other?
[16:11] -queuebot:#ubuntu-release- New: accepted rustc [arm64] (artful-proposed) [1.17.0+dfsg2-7]
[16:11] -queuebot:#ubuntu-release- New: accepted rustc [amd64] (artful-proposed) [1.17.0+dfsg2-7]
[16:11] -queuebot:#ubuntu-release- New: accepted rustc [s390x] (artful-proposed) [1.17.0+dfsg2-7]
[16:11] -queuebot:#ubuntu-release- New: accepted rustc [i386] (artful-proposed) [1.17.0+dfsg2-7]
[16:11] rbasak, i'd rather do wholesome backports. the changes improve text output during initramfs generation. Pre-v4.4 handling of kernels is not as relevant on xenial, unless people (still) boot the trusty kernel
[16:12] xnox: that's not general SRU policy though :-/
[16:14] Laney: well, I was working on transitions over the weekend, and the number of stuck tests that I was running across was significant enough that I batch retried, and 800 seemed a bit high. Do you have some idea of the failure rate here, that would inform whether it's worth putting effort into the cause vs. just batch retrying on the regular?
[16:15] rbasak, this is an hwe backport of binary blobs for restricted.
[16:15] xnox: it is; however the packaging is not a binary blob.
[16:15] Laney: but also, if you need someone else to hit the mass-retry button, I can certainly do that ;)
[16:15] slangasek: do you know what the problem was?
[16:15] rbasak, e.g. are nvidia packages backported piecewise, or just straight backports of the new upstream releases (and the new packaging)?
[16:15] no clue
[16:15] it's not regular, at least not on that scale
[16:15] but there could be a systemic problem that hits at one time
[16:15] xnox: I don't know. Is it?
[16:15] something like: the initial dist-upgrade fails
[16:16] rbasak, there is more room for error by modifying this, since one has to be careful and exclude packaging of the broken microcodes.
[16:16] I can see that a wholesale packaging update might be necessary depending on the nature of the upstream blob update (e.g. if old packaging cannot cope with it).
[16:16] infinity, can you help us to review the xenial sru for fwupdate?
[16:17] Laney: well, there had also been several big packages going through the system (nodejs, perl, ...), so it's possible 800 was in line with the overall error rate? But I don't know how to even measure that error rate
[16:17] In this case, isn't the packaging more closely tied to the Free bits, rather than the non-Free bits? The blob itself remains a blob all the way through to the hardware, no?
[16:18] zesty diff is smaller, since zesty base is newer thus that one looks a lot more like just the blob update
[16:21] That statement sounds likely to be true, but I don't follow what point you're making there. I was reviewing the Xenial diff to start with.
[16:21] that's backwards no? =)
[16:21] yakkety all diffs are needed.
[16:21] slangasek: We have the information needed (exit code of autopkgtest) in a database, so it could be exposed on http://autopkgtest.ubuntu.com/statistics
[16:21] 800 at once is definitely more than I'd expect
[16:21] I suspect a package was failing to install or similar
[16:21] I find it easier to start with the most invasive set of decisions. Then reviewing the smaller diffs is easy :)
[16:22] I pushed a change earlier that should result in more runs being recorded as failures (with version 'unknown') rather than being left in running state
[16:22] Laney: you have an error code for these test requests that end up stuck in 'running'? They don't seem to ever show up on the per-package pages
[16:23] It's because they fail too early for autopkgtest-web to know the current version, so it skips them
[16:23] ah
[16:23] that's what my fix remedies
[16:23] rbasak, the initramfs hook updates are good / wanted in xenial; the update from debhelper compat 7->9 is imho also good as we do want the microcode package to build exactly the same way on all releases. And ideally we do want to update it on a continuous basis.
[16:23] well, it fakes up a version of 'unknown'
[16:23] not taking compat changes creates more work and divergence
[16:23] yeah, that seems like a good improvement :)
[16:24] so you should at least easily be able to browse them rather than having to surf swift via its API
[16:24] * xnox ponders if intel-microcode really should be a snap
[16:24] makes the state more discoverable
[16:24] xnox: perhaps so, but in that case should we not have those changes go through the usual SRU process with their own bugs, justifications, regression tests, etc?
[16:25] xnox: and for the future "exactly the same way" / "update it on a continuous basis", I agree this might be what we want in general, but I think that should probably form an SRU exception that we want to formulate, review, approve, etc.
[16:26] rbasak, perhaps, but all of them are "backport the latest microcode package to stable releases" which is what the bug report in question is.
[16:26] rbasak, the bug here is not "cherrypick high-severity microcode updates only for Skylake"
[16:26] OTOH this particular update we should probably just get through, and for that, I think minimising it to the blob update makes more sense, unless there's a particular reason why that can't be done or there's a specific reason it carries higher risk.
[16:27] my request here is for a wholesome backport to latest, not a targeted fix for just this one skylake HT bug.
[16:27] *wholesale
[16:27] xnox: OK, but if you want to drive the more general bug, it'll end up in the slow lane.
[16:27] cjwatson, yes, checked dictionary.
[16:28] rbasak, it definitely should be in the slow lane, with extra long phasing. I am expecting it will be in proposed for a month or so.
[16:28] rbasak, as we need to collect multiple positive results on multiple CPU SKUs / Families.
[16:29] I agree with extra long phasing and a long time in proposed. I was expecting though to be able to fast track a blob update (only) into proposed so users had somewhere official from which to get it (for opt-in updates).
[16:29] there are a lot of updates to a lot of microcodes, and any one of them can regress anything (to the point of graphics not working, or systems not booting)
[16:30] rbasak, that would diverge packages and create sru-only delta which i do not want to support, nor maintain, nor debug going forward.
[16:30] (delta vs debian)
[16:31] imho it is a mistake that we do not routinely update microcode packages, the way we e.g. update the hwe-kernels.
[16:31] It's been done before. I'm not ruling out your longer term plans at all here. If those are successful, then any difficult maintainability point becomes moot.
[16:32] rbasak, given lack of dependencies it would be nice to copy down the .deb verbatim. but we will not do that.
[16:33] rbasak, please review yakkety/zesty, i think you will have less objections about those.
[16:33] rbasak, and i prefer for later releases to be fix released, before even accepting the xenial package.
[16:34] unless you object to s/7/9 in those too
[16:40] AFAICT, either I need to accept the packaging changes on the basis that they are approved under current SRU policy (perhaps with approving whatever I need to that is within my remit), and so I can accept X/Y/Z subject to review this way, or I don't. I don't think it makes sense to accept Y/Z and not X on a policy basis. And if I did, but we later decided to do something different, that'd be even more of a pain to manage in the future.
[16:40] So I think ~ubuntu-sru needs to resolve what to do here first, if you want to include packaging changes in this update.
[16:41] OTOH I'd accept a blob-only update into all three releases as I don't think there's any SRU policy question there, but you've said and given reasons as to why you don't want that.
[16:41] rbasak, right, my thinking behind this upload was that this is backport-published-to-updates-via-proposed-testing
[16:42] That's a summary of my thinking right now. I'd like opinions from other ~ubuntu-sru if any are around please.
[16:42] i did not prepare these as classic SRUs.
[16:42] and backports minimise delta / changes, to only those necessary to get something building on an older release; no changes are required here, as debian too backports this package as far back as is reasonable.
=== ratliff_ is now known as ratliff
[16:44] The regression risk there though is for example that the delivery mechanism changes to something that older kernels cannot support.
[16:45] OTOH, it seems unlikely to me that just updating the blobs would pose any difficulty for a system on which the delivery mechanism already works (and if it doesn't, that's a matter for a more specific SRU following the usual process).
[16:45] the risk of buggy microcodes / microcodes with regressions is much higher. The last delivery change was in the v3.13 kernels, and has not changed since.
[16:45] "the risk of buggy microcodes / microcodes with regressions is much higher" - how so?
[16:46] (the requirement around built-in / non-builtin module is imho insignificant)
[16:46] "The last delivery change was in the v3.13 kernels" - only because you specifically know that. What might you not know about that is in the same class of problems?
[16:46] rbasak, re: buggy microcode, the infamous lock elision update
[16:46] that needed newer glibc on the host too, to prevent it from using the now disabled lock elision, resulting in crashes.
[16:47] e.g. one needed newer userspace glibc and the new microcode.
[16:47] that was non-obvious during the microcode update.
[16:47] because of lack of disclosures from intel
[16:47] xnox: and how would backporting the latest packaging have avoided that, if that packaging didn't have the versioned dependency because we didn't know about it?
[16:48] it doesn't, due to lack of information. Therefore bisecting and cherrypicking individual microcode updates and/or cherrypicking changes does not reduce risk at all. The only way we can reduce risk is extensive in-the-field testing on multiple microcode families and watching for regression bug reports.
[16:49] because we need positive tests that things are not broken horribly on the N latest cpu families. E.g. at least as far back as ivy bridge today.
[16:49] xnox: fine, but it makes no difference to your example whether we backport the packaging or backport only the blogs.
[16:49] *blobs.
[16:50] validation matrix explodes. I do not think we will be able to get comprehensive coverage for every cpu family, on every release series. Therefore we will have to trust the validation from multiple releases.
[16:51] I still fail to see how it would be different in your example whether we backport blobs or backport the packaging. The packaging made no difference to that situation.
[16:51] which is an argument for not doing these updates, and keeping systems knowingly vulnerable.
[16:51] rbasak, if the packaging makes no difference, why should i be creating N versions of packaging, instead of maintaining one version of packaging?!
[16:51] =)
[16:51] and then debugging the case where the packaging did make a difference.
[16:52] Because minimal changes to stable releases minimised regression risk, as the packaging includes interactions with other components (such as the kernel) where there is a non-zero risk of regression that can be mitigated by not changing it.
[16:52] *minimises
[17:03] my understanding is that e.g. snapd, firefox, linux-hwe, cloud-archive use a backport model of complete updates to latest upstream and packaging changes with minimal delta w.r.t. the version being backported. Rather than minimal delta w.r.t. each of the targeted releases.
[17:05] i understand not taking 99% of changes where the actual bugfix is a one-liner; i do not understand rejecting 1% of packaging changes, when 99% are all needed.
[17:05] Yes, and each of those has a documented exception and individual policy. We don't currently have one of those for intel-microcode.
[17:06] where can i look at that?
[17:06] And each of those has had the pros and cons considered on a case by case basis.
[17:06] https://wiki.ubuntu.com/StableReleaseUpdates#Documentation_for_Special_Cases
[17:06] ack thanks
[17:06] Firefox isn't there. Technically it doesn't have an SRU exception AFAIK. It goes in via the security pocket, and the security team have their own policies.
[17:07] Though in practice an SRU would probably be fine because we do it all the time anyway (because security).
[17:12] I've written up what I think is the current situation in the bug. I'm EOD for now - I think from our discussion you're not in a hurry?
[17:19] Depends: libsbml perl (not considered)
[17:19] hmph
[19:14] cjwatson: do you have any idea why auto-sync says dictem_1.0.4.orig.tar.gz content has changed, given that it arrived in Ubuntu via auto-sync?
[19:45] libfile-sharedir-projectdistdir-perl/1.000008-1
[19:45] which are worse, perl package names or perl package version numbers?
[19:58] slangasek: Because it did.
[19:59] infinity: so Debian's orig.tar.gz changed?
[19:59] slangasek: dak is slack about maintaining the constraint if things are no longer on disk. :(
[19:59] ah, so this was a package removal/readd?
[20:00] slangasek: Debian's original orig only existed briefly. It went (accidentally, probably) native in 1.0.4-2, then back to non-native in -4
[20:00] slangasek: But we have a record of -1, and LP won't let you do that:
[20:00] slangasek: https://launchpad.net/ubuntu/+source/dictem/1.0.4-1
[20:00] slangasek: So, the only solution here would be to fake the sync using the orig from the -1 upload.
[20:00] right
[20:01] slangasek: Not that it matters, given that -3 and -4 were no-ops, really. Just the maintainer asserting his non-demise.
[20:03] (Well, they should be no-ops, according to the changelog.... Of course, the new orig might have made them excitingly different!)
[21:57] -queuebot:#ubuntu-release- Unapproved: apport (zesty-proposed/main) [2.20.4-0ubuntu4.2 => 2.20.4-0ubuntu4.3] (core)
[21:58] -queuebot:#ubuntu-release- Unapproved: apport (yakkety-proposed/main) [2.20.3-0ubuntu8.4 => 2.20.3-0ubuntu8.5] (core)
[22:00] -queuebot:#ubuntu-release- Unapproved: apport (xenial-proposed/main) [2.20.1-0ubuntu2.7 => 2.20.1-0ubuntu2.8] (core)
=== bregma__ is now known as bregma
[22:30] Laney, infinity: I'm interested in your thoughts on https://bugs.launchpad.net/britney/+bug/1700668
[22:30] Ubuntu bug 1700668 in britney "make it easier to reset baseline for autopkgtests that regress in release" [Undecided,New]
[23:06] -queuebot:#ubuntu-release- New sync: ocamlbuild (artful-proposed/primary) [0.10.1-1]
[23:16] ^ would unblock ocaml transition going further
[23:18] not an autosync? :)
[23:19] -queuebot:#ubuntu-release- New: accepted ocamlbuild [sync] (artful-proposed) [0.10.1-1]
[23:19] slangasek, from experimental
[23:19] ah
[23:21] -queuebot:#ubuntu-release- New binary: ocamlbuild [amd64] (artful-proposed/none) [0.10.1-1] (no packageset)
[23:21] -queuebot:#ubuntu-release- New binary: ocamlbuild [ppc64el] (artful-proposed/none) [0.10.1-1] (no packageset)
[23:22] -queuebot:#ubuntu-release- New binary: ocamlbuild [arm64] (artful-proposed/none) [0.10.1-1] (no packageset)
[23:22] -queuebot:#ubuntu-release- New binary: ocamlbuild [s390x] (artful-proposed/none) [0.10.1-1] (no packageset)
[23:22] -queuebot:#ubuntu-release- New binary: ocamlbuild [i386] (artful-proposed/none) [0.10.1-1] (no packageset)
[23:23] -queuebot:#ubuntu-release- New binary: ocamlbuild [armhf] (artful-proposed/none) [0.10.1-1] (no packageset)
[23:32] slangasek, seems to me that all the failed-to-install/upgrade bugs are filed against whatever random package happens to be the unlucky one to call insserv or run out of disk space, etc.
[23:32] https://bugs.launchpad.net/ubuntu/+source/systemd/+bugs?field.searchtext=install%2Fupgrade&search=Search&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.assignee=&field.bug_reporter=&field.omit_dupes=on&field.has_patch=&field.has_no_package=
[23:32] triaging these is not fun.
[23:33] xnox: experiencing the upgrade failures is also not fun; can we get rid of insserv yet?
[23:33] slangasek, my understanding is that we have and debian did too. but not in xenial?!
[23:33] -queuebot:#ubuntu-release- New: accepted ocamlbuild [amd64] (artful-proposed) [0.10.1-1]
[23:33] -queuebot:#ubuntu-release- New: accepted ocamlbuild [armhf] (artful-proposed) [0.10.1-1]
[23:33] -queuebot:#ubuntu-release- New: accepted ocamlbuild [ppc64el] (artful-proposed) [0.10.1-1]
[23:34] -queuebot:#ubuntu-release- New: accepted ocamlbuild [arm64] (artful-proposed) [0.10.1-1]
[23:34] -queuebot:#ubuntu-release- New: accepted ocamlbuild [s390x] (artful-proposed) [0.10.1-1]
[23:34] -queuebot:#ubuntu-release- New: accepted ocamlbuild [i386] (artful-proposed) [0.10.1-1]
[23:34] right, it is gone, but still causes upgrade failures in xenial
[23:35] hm, my system seems to be broken
[23:35] since insserv in xenial isn't integrated with systemd the way it was with upstart, I think the right fix is to SRU insserv to make it non-fatal
[23:35] I'm getting segfaults
[23:35] [661832.017428] schroot[30225]: segfault at 500000002 ip 00007fc0023262b8 sp 00007ffde3b7f060 error 4 in pam_cgfs.so[7fc002323000+6000]
[23:35] [657710.517534] pcscd[12977]: segfault at 10 ip 00007f7792767d44 sp 00007f778abc8e20 error 4 in libpthread-2.23.so (deleted)[7f779275e000+18000]
[23:35] that can't be good.
[23:35] slangasek, does systemd in xenial need any of the insserv orderings? and e.g. do we still use insserv orderings for e.g. shutdown?
[23:36] * xnox thought we didn't
[23:38] xnox: that's what I'm saying, I don't believe it's meaningfully integrated with systemd at all
[23:39] should be verified, but AIUI it's mostly pointless
[23:39] ack. let me add a trello card about that
[23:39] (and so shouldn't be allowed to cause upgrade errors)