=== zomble is now known as grumble
[01:16] Hi folks! I'm an upstream developer of certbot (previously known as "letsencrypt")
[01:16] and hlieberman is our debian developer
[01:16] Hi there!
[01:16] we're trying to figure out the process of getting an up-to-date copy of certbot into Ubuntu
[01:17] most especially xenial, where there's an increasingly old and unnecessarily buggy version
[01:17] though we also have questions about other Ubuntu versions, I suspect
[01:18] pdeee: So, the certbot package in Yakkety is synced, unchanged, from Debian; Zesty has just opened, so we'll shortly be pulling in whatever the current version in Debian unstable is (and further uploads to Debian will auto-sync, until the Zesty freeze)
[01:19] RAOF, which debian channel would Yakkety sync from? Testing?
[01:19] No; Yakkety is released. That covers the development release.
[01:20] As for stable, released versions, you'll be interacting with our stable release updates process:
[01:20] https://wiki.ubuntu.com/StableReleaseUpdates
[01:21] we've been looking at that and trying to figure out what we need to do next
[01:22] it seems one thing is to get 0.9.3 (which is in Debian unstable and testing) into Zesty, since that's much better than 0.8.1, and also very well tested at this point
[01:22] then follow the process to pull 0.9.3 back into Xenial and Yakkety
[01:23] Yes, that's mostly a prerequisite for getting anything into yakkety-updates. Bugs have to be fixed in the development release before we accept fixes in -updates ☺
[01:23] and that's currently blocked because Zesty is new and not yet syncing?
[01:23] I think it actually is syncing, or maybe it's slightly blocked for a perl transition.
[01:24] But that will be over soon, and 0.9.3 will get autosynced into zesty.
[01:25] The trick with SRUs is sometimes working out whether to do them; typically we *don't* want the latest release as a stable-release-update, we want bugfix-only releases (or just bugfixes).
[01:25] This interacts… interestingly with online services like letsencrypt, though :)
[01:26] So it's entirely possible that 0.9.3 will be appropriate to put into -updates.
[01:26] RAOF, we wouldn't ask you to take all of our latest releases
[01:26] when we did 0.9.0, it contained a giant set of both new features and bug fixes
[01:27] we have lots of tests in our tree, but even so such releases usually contain minor regressions
[01:27] RAOF: Believe me, I've been beating them over the head with this lesson as the stretch freeze approaches. ;)
[01:27] so we did 0.9.1, 0.9.2 and 0.9.3 to fix those regressions as we became aware of them
[01:27] hlieberman: :)
[01:28] at this point we've issued a few hundred thousand certificates to users of 0.9.3, and are pretty sure it contains no further substantial regressions
[01:28] pdeee: So, one important point: does 0.9.3 break any workflows that someone on 0.8.1 would have set up?
[01:28] definitely not
[01:28] You've added new features - has anything changed in a backward-incompatible way? (Including workarounds for bugs that are now fixed?)
[01:29] we're extremely protective of those
[01:29] no, again we'd try very very hard to ensure we don't break our own users that way
[01:29] Excellent. That's what we're trying to avoid in the SRU process, so if you care about that it makes it easier to SRU :)
[01:30] afk for a few minutes, but hlieberman a question i have for you is whether there are any steps in https://wiki.ubuntu.com/StableReleaseUpdates that you might want help with
[01:30] So, you've got a bunch of tests in the tree. Can we run those as part of packaging?
[01:31] yes
[01:31] Good, good. That also makes it easier to approve SRUs!
[01:31] hlieberman already does (we know, because occasionally they break for fascinating context-dependent reasons)
[01:32] RAOF: Yup. And I consider any failing tests to render the package RC-buggy, so there should never be any broken tests in Debian.
[01:32] I haven't yet gotten around to integrating it into Debian CI, but it is run as part of the build.
[01:33] (both by me, in an sbuild schroot, and by the buildds.)
[01:34] Excellent. Once you get the DEP-8 metadata in place, our britney will gate on it too.
[01:35] Yeah. I think I need to do something vaguely gross to get the DEP-8 stuff to work with actual as-installed testing, but it's on the roadmap.
[01:57] * pdeee is back for a bit
=== ZarroBoogs is now known as Pici
[05:07] nacc: does gbp import-dsc support multitar? if so, you could pick apart what that does
[07:58] Mirv: hi! are there plans to land qt 5.6 in xenial?
[08:07] mardy: xenial-overlay already has it; that's as far as xenial is concerned. major Qt upgrades do not fall under https://wiki.ubuntu.com/StableReleaseUpdates criteria.
[08:09] Mirv: ok. Then do you know what's the state of bug 1615265? It's marked as released, but the bug is still there in xenial
[08:09] bug 1615265 in qtlocation-opensource-src (Ubuntu RTM) "OpenStreetMap Plugin for Map QML type broken" [High,In progress] https://launchpad.net/bugs/1615265
[08:10] mardy: LP bugs refer to the latest development series. if something is wanted to be backported as an SRU, it'd need to be nominated for xenial, for example.
[08:12] mardy: hmm, in what use case are you asking about that bug? I mean, it's fixed in yakkety + xenial-overlay; do you have a desktop xenial application you're considering or something like that?
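The DEP-8 metadata mentioned here lives in debian/tests/control in the source package. A minimal, hypothetical fragment (the test name and dependencies below are illustrative, not certbot's actual ones):

```
Tests: smoke
Depends: certbot, python3-pytest
Restrictions: allow-stderr
```

With metadata like this in place, autopkgtest runs debian/tests/smoke against the *installed* package rather than the build tree, which is the "as-installed testing" referred to above, and britney can gate proposed migration on the result.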
[08:13] Mirv: well yes :-)
[08:17] Mirv: I might end up shipping qt 5.6 with my app anyway, but having the bug fixed can be useful especially for developers, who test their apps on the xenial desktop
[08:45] @pilot in
=== udevbot changed the topic of #ubuntu-devel to: Yakkety Yak (16.10) Released | Archive: open | Devel of Ubuntu (not support or app devel) | build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of precise-xenial | #ubuntu-app-devel for app development on Ubuntu http://wiki.ubuntu.com/UbuntuDevelopment | Patch Pilots: dholbach
[09:21] mardy: yeah, if you snap using the xenial overlay it'd solve the problem too. a vivid backport seems hard; not sure if 5.6 -> 5.5 plugin backporting could be easier.
[09:22] the mapbox plugin is needed
=== dholbach_ is now known as dholbach
=== hikiko is now known as hikiko|ln
[12:33] @pilot out
=== udevbot changed the topic of #ubuntu-devel to: Yakkety Yak (16.10) Released | Archive: open | Devel of Ubuntu (not support or app devel) | build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of precise-xenial | #ubuntu-app-devel for app development on Ubuntu http://wiki.ubuntu.com/UbuntuDevelopment | Patch Pilots:
=== hikiko|ln is now known as hikiko
=== alan_g is now known as alan_g|lunch
=== alan_g|lunch is now known as alan_g
[13:54] * Mirv hugs dholbach
[13:57] * dholbach hugs Mirv back :-)
[15:03] Hi to all, if you would like to help with Google Code-in 2016 (GCI) this year as a mentor, please help us by adding your tasks to the wiki: https://wiki.ubuntu.com/GoogleCodeIn2016
[15:11] slangasek: ack, i think i've got a method that works with dpkg-source -x --skip-debianization, but I still need to tweak a few of the assumptions made by gbp import-orig. I think I'm pretty close
=== JanC_ is now known as JanC
[18:06] smoser: hrm, so `gbp import-orig` with --pristine-tar is working, but it is producing tar.bz2 files when the corresponding dsc files use tar.gz.
I assume that's not ideal; is it also incorrect? I have verified the contents of the tarballs are identical, just differently compressed -- I'm not sure how to avoid it yet
[18:07] the perl transition is over, right? For the most part?
[18:08] teward: seems to be one package remaining https://people.canonical.com/~ubuntu-archive/transitions/html/perl5.24.html
[18:08] nacc, i don't really *care*..
[18:08] but uploading the .bz2 if the .gz was already uploaded could fail, i guess.
[18:08] right?
[18:08] i don't know if it does.
[18:09] hjd: more or less asked because nginx finally got out of proposed, which was on my radar. So, more or less complete except that one package?
[18:09] but launchpad cries if you upload the same version with a different orig tarball (i know it rejects it if you do that with the existing orig name)
[18:11] smoser: ah yes, true, when you went to build, it might
[18:12] teward: looks like it, but I don't know anything more than what the page says :)
[18:12] :)
[18:12] nacc, well i think it would just reject the upload. and i guess it would fail if you were trying to re-build an existing .dsc, as you don't actually have a file that is referenced.
[18:13] smoser: right, so not good :)
[18:21] smoser: ok, so gbp import-orig 0.8.0 has support for multiple orig tarballs
[18:21] smoser: do you know if any of the packages on our list actually have multiple orig tarballs?
[18:27] I believe LibreOffice has multiple orig tarballs
[18:27] jbicha: thanks, good to know
[18:28] jbicha: there are definitely packages that do, i'm just not sure they are in the purview of the packages we care about for 1.0 of the importer :)
[18:28] yeah, LO is not a very minimal test case
[18:34] jbicha: i think spamassassin is probably a better one for server right now
[18:34] smoser: i assume that one is on your list?
[18:34] nacc: I'm not sure I'm keen on the importer importing orig tarballs.
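The compression mismatch above can be detected by reading the Files stanza of the published .dsc, which names each tarball exactly. A minimal sketch (the parsing is simplified and the sample .dsc text is made up, not a real package):

```python
import re

# Hypothetical .dsc contents; a real file has more fields and real checksums.
DSC_SNIPPET = """\
Format: 3.0 (quilt)
Source: example
Version: 0.9.3-1
Files:
 d41d8cd98f00b204e9800998ecf8427e 0 example_0.9.3.orig.tar.gz
 d41d8cd98f00b204e9800998ecf8427e 0 example_0.9.3-1.debian.tar.xz
"""

def orig_tarballs(dsc_text):
    """Return the orig tarball filenames listed in a .dsc Files stanza.

    The filename (including its .tar.gz / .tar.bz2 / .tar.xz suffix) is
    the last whitespace-separated field of each continuation line.
    """
    names = []
    in_files = False
    for line in dsc_text.splitlines():
        if line.startswith("Files:"):
            in_files = True
            continue
        if in_files:
            if not line.startswith(" "):  # stanza ended
                break
            name = line.split()[-1]
            # match foo_1.0.orig.tar.* and component foo_1.0.orig-comp.tar.*
            if re.search(r"\.orig(-[^.]+)?\.tar\.", name):
                names.append(name)
    return names

print(orig_tarballs(DSC_SNIPPET))  # one orig tarball, gzip-compressed
```

Comparing the suffix of each name against what pristine-tar regenerated would catch the .bz2-vs-.gz divergence before an upload is rejected.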
I think I'm quite happy to leave that to the Debian archive and to Launchpad to maintain, and just have tooling pull them from those sources when needed.
[18:34] rbasak: ok, i believe that means we can't be compatible with gbp then
[18:35] rbasak: at least, on my first reading of it
[18:35] Do we need to be?
[18:35] What benefit would that give us (genuine question)?
[18:35] i thought it was in our plans to try to be, yes
[18:36] other developers brought it up at some point, i'd need to go look in my logs
[18:37] nacc, i don't think spamassassin was in the list.
[18:37] i've a ton of dsc files on that node that i'm doing the import on
[18:37] feel free to look around and grep
[18:38] rbasak, my interest in pristine-tar is only in its promise.
[18:39] i'm fine with having a tool that says "get me the orig tarball" and having it work.
[18:39] i think what we want is a usd build-package
[18:39] probably
[18:39] which knows that we may not have the corresponding orig and dtrt
[18:39] but if that tool is 'git pristine-tar' and i've *already* got that orig tarball, then that seems a nice side effect.
[18:39] I'd be happy with a well-known local cache directory or something like that.
[18:39] however, if that costs me a 20% increase in my .git dir, i'm not so keen on it.
[18:40] rbasak, that is what pristine-tar does. if it works.
[18:40] Then any tool could use it, including a usd build-package
[18:40] Or even populate the cache at usd clone time.
[18:40] smoser: didn't you object to having a cache?
[18:41] i don't think i have an objection to a cache
[18:41] ok
[18:41] but if pristine-tar actually means that i basically can create the orig tarballs "for free" from that same git clone
[18:41] which it does
[18:41] then yeah, that's wonderful
[18:41] pristine-tar checkout
[18:41] s/which it does/which it might do/
[18:41] "for free" is the part i'm not sure of
[18:41] and that is the part that i think is a requirement
[18:42] at least there is some cost at which i'd say forget it
[18:42] Using the git workflow already has a bandwidth downside. When you clone a branch, you clone the entire history (usually), not just the current source package.
[18:42] if every clone cost me 2x, then forget it.
[18:42] if every clone cost me 1.02x, then yeah. magic is nice.
[18:42] On top of that difference, the extra bandwidth to download an orig tarball seems insignificant to me.
[18:43] even if we had tars in pristine-tar, it feels like we'd still wrap it in some knowledge of the helper tools, so it becomes irrelevant to the end-user
[18:44] not entirely irrelevant. waste is not irrelevant. it really is just a matter of how much.
[18:45] ok, it's irrelevant to the process
[18:45] where you get the tarball from, as long as it's correct, is not important (yet)
[18:45] if 'usd-clone' got you all possible orig tarballs along with the source that you cloned, and came at the cost of about one download of an orig tarball
[18:45] then of course we'd want that.
[18:46] but, again, that's an optimization
[18:46] not a requirement either way, afaict
[18:46] nacc, it *is* important when you're offline
[18:46] it's not an optimization then
[18:46] :)
[18:46] smoser: we never said we'd support offline-ness
[18:46] smoser: you're doing feature creep again
[18:46] no
[18:46] smoser: or broadening my scope beyond what we intended to support
[18:46] well of course.
[18:47] yes, absolutely it is feature creep to have access to all orig tarballs
[18:47] right
[18:47] nacc, but to be clear, you were talking about the same feature creep above
[18:47] yes, and in both cases, an optimization
[18:48] so it is a matter of how that feature is implemented. and pristine-tar has the potential to be very nice.
[18:48] pristine-tar overhead is generally insignificant
[18:49] cjwatson: ack, it doesn't seem big, except for my tooling dealing with it :)
[18:50] nacc, given the difficulty you're having, it might be worth re-considering what i'd originally said...
[18:50] i don't think we *have* to do this now
[18:51] smoser: i think i have it, tbh -- just not sure we can support multiple origs with it
[18:51] is there value in doing it now versus adding the pristine-tar later?
[18:51] but if rbasak doesn't want it at all, not sure :)
[18:51] let me get some numbers
[18:53] smoser: i'll just get a size on the resulting .git directories
[18:53] honestly, if the numbers are like 2%, then rbasak doesn't get to complain. especially since he's already said the workflow "has a bandwidth downside" and has discounted that
[18:53] :)
[18:54] nacc, in theory... i think you should be able to do the import with pristine-tar and all that there, then just delete those branches and git-gc and compare
[18:54] right?
[18:54] I'm not complaining about the bandwidth. I'm complaining about the extra complexity, maintenance burden, bug surface, etc.
[18:54] rbasak, we can also drop it later
[18:54] The bandwidth I don't care about.
[18:55] rbasak: it doesn't really do anything to the rest of the imported tree
[18:55] nacc, ie, comparison is very easy.
[18:55] If instead we have tooling that just pulls and caches from LP when needed, it feels like there's less to go wrong.
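The size comparison being proposed (import with pristine-tar, then delete those branches, git-gc, and compare .git sizes) boils down to measuring two directory trees. A minimal sketch of the measurement step; the helper names are illustrative, not part of usd-importer:

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path (e.g. a .git dir)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):  # skip sockets, dangling symlinks, etc.
                total += os.path.getsize(full)
    return total

def overhead(size_with_pristine_tar, size_without):
    """Relative growth of a repo that carries pristine-tar delta data.

    0.02 means a 2% larger .git directory -- the threshold being argued
    about above (2% is fine, 2x is not).
    """
    return size_with_pristine_tar / size_without - 1.0
```

Usage would be `overhead(dir_size("with-pt/.git"), dir_size("without-pt/.git"))` after running the two imports and `git gc --aggressive` on each.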
[18:55] rbasak: and i'm trying to use gbp-import-orig as much as possible
[18:55] smoser: yeah, i just need to actually run the imports locally :)
[18:56] So if we end up with this, it feels like you're taking away the simple option from me :-)
[18:56] smoser: rbasak: also, i wonder -- we're sort of optimizing the corner-case(s). That is, our primary issue is that it's not trivial to build when you use the importer tree
[18:56] having *all* orig tarballs available fixes a much larger problem than that
[18:58] rbasak: an interesting thing is that there is not already a trivial way to just get the corresponding upstream for a to-build version. smoser has a script to wrap it
[18:58] nacc, fyi, https://code.launchpad.net/~smoser/usd-importer/+git/usd-importer/+merge/309777
[19:00] smoser: nice
[19:00] nacc: yeah, I imagine that to be part of a usd build command.
[19:00] rbasak: ack
[19:01] rbasak: the advantage we get from pristine-tar for that is, presuming the tarball is already defined (e.g., for the debian version we're merging to), we don't need to do any searching, we can just `pristine-tar`. Otherwise, we have to 'figure out' where to get the upstream tarball from?
[19:03] nacc: can we get it from Launchpad - Ubuntu if defined in Ubuntu, else Debian? Doesn't that cover all cases?
[19:05] nacc, do you have a published branch of what you've got so far?
[19:07] smoser: let me push it up
[19:11] rbasak: i guess so; you'd basically look for the parent's version and pull that
[19:13] nacc, http://paste.ubuntu.com/23407877/ that was my list, and spamassassin was in it.
[19:15] smoser: ack
[19:20] nacc: I'd look for the upstream orig tarball version mentioned in debian/changelog, if it's not already in the parent directory and also not in the cache.
[19:21] I'm not sure if the LP API means that you have to walk the publishing history to find a version that matches the same upstream version, or if there's a more direct way.
[19:21] But I'd look in the Ubuntu distro first, then fall back to the Debian distro.
[19:21] Assuming it's a non-native package, of course.
[19:22] rbasak: right, i didn't find a way to get upstream tarballs directly, but hadn't looked exhaustively
[19:23] smoser: https://git.launchpad.net/~nacc/usd-importer/log/?h=orig_tarball
[19:24] smoser: i just switched it from using a dpkg-source -x'd directory for import-orig to the tarball directly
[19:25] smoser: testing that now
[19:26] rbasak: it feels like in our process, it would always be the nearest import tag's orig tarball?
[19:27] nacc: what if I'm bumping to a new upstream version ahead of Debian?
[19:27] then we'd have no way to do that in any process
[19:27] Though in that case I suppose I'd have it in the parent directory.
[19:27] you'd need to use uupdate
[19:27] or uscan
[19:27] It just feels to me that it really should match the upstream version in the first entry of debian/changelog.
[19:27] launchpad wouldn't have it already either
[19:28] If you want to use the tags to try and locate a version that LP might have, I'm OK with that.
[19:28] It feels like there's no need to couple this with the tags, though. It is possible to do it independently.
[19:29] rbasak: the issue is, aiui, there's no link to upstream tarballs in launchpad
[19:29] rbasak: i'd have to manually munge the dsc file still
[19:29] and even then, only for dsc files that are published
[19:29] which are exactly those that are tagged in the importer
[19:30] I don't follow. Here's my algorithm:
[19:31] 1) If native package, fail; 2) Look up the required upstream version based on the first entry in debian/changelog; 3) if in the parent directory or in the cache, succeed immediately with the first of those found;
[19:31] [ 2) doesn't work for multiple origs aiui ]
[19:32] rbasak: ok, i'm presuming there is a 4) coming?
[19:32] 4) walk Launchpad publishing entries for the source package name in Ubuntu, most recent first.
First match for the upstream version wins; 5) try 4) again but with Debian
[19:32] I don't understand why 2 wouldn't work even with multiple origs
[19:33] what is the 'upstream version'? it doesn't give you the name of the tarballs
[19:33] oh i see
[19:33] so you're just saying run srcpkg.pull() again
[19:33] for a specific srcpkg
[19:34] Yes, though I didn't know the API details. If possible, grab orig tarballs only. If not possible, grab everything then throw everything but the orig tarballs away.
[19:34] not possible, afaict
[19:34] A large number of packages have a `get-orig-source` rule in debian/rules that should perform the package-specific logic to get the files necessary. Is the use case for this sufficiently different from that to need separate logic?
[19:34] persia: could be a try/catch kind of thing
[19:34] persia: what we really want to avoid, though, is in any way using a tarball different from what is published
[19:34] persia: we could fall back to that maybe, but it's also not guaranteed that the thing that "get-orig-source" will fetch is what the uploader uploaded, in which case we'd end up with a mismatch and a reject.
[19:35] If you want bitwise compatibility, you want pristine-tar (which is where you were before I mentioned anything, so I'll leave you to it)
[19:35] persia: it's the archive (whether Debian or Ubuntu) that holds the definitive binary blob, and that's what we want to grab ideally.
[19:36] pristine-tar is just another way of getting it. What was proposed before (AIUI) is round-tripping that through the git repo via pristine-tar.
[19:37] rbasak: ok, so we could make our own API for this pretty easily, I guess
[19:37] rbasak: that downloads any tarball in dsc files that contains orig.tar
[19:37] or whatever the appropriate regex would be
[19:37] nacc: not possible, aiui> the web UI seems to be able to do it, in that I can find the URL associated with an orig tarball only, IIRC. But fair enough if the Python binding doesn't expose that to you.
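The algorithm in steps 1-5 above can be sketched in a few lines. This is a simplified illustration, not the usd-importer code: the Launchpad publishing-history walk is replaced by a plain list of version strings (most recent first), and the function names are made up:

```python
def upstream_version(debian_version):
    """Strip the epoch and Debian revision from a full Debian version.

    '1:0.9.3-1ubuntu1' -> '0.9.3'. A version with no '-' (native-style)
    comes back unchanged; step 1 of the algorithm rejects those earlier.
    """
    v = debian_version.split(":", 1)[-1]  # drop 'epoch:' if present
    return v.rsplit("-", 1)[0]            # drop the Debian revision

def find_publication(published_versions, wanted_upstream):
    """Steps 4/5: walk publishing entries, most recent first, and return
    the first full version whose upstream part matches.

    In the real thing `published_versions` would come from Launchpad's
    source package publishing history, first for Ubuntu, then Debian.
    """
    for version in published_versions:
        if upstream_version(version) == wanted_upstream:
            return version
    return None
```

For example, looking for upstream `0.9.3` in `["0.9.3-0ubuntu1", "0.9.3-1", "0.8.1-1"]` returns the most recent publication carrying that orig tarball, whose .dsc then names the tarball files exactly.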
[19:37] rbasak: i think that's being built from lplib
[19:37] rbasak: i mean, i could generate that too, i suppose
[19:38] OK, beyond my knowledge now. I'll leave that to you :-)
[19:38] Anyway, do you get my gist? I'm not saying it *has* to be that way. It just feels less error-prone/complex to me.
[19:38] rbasak: yeah, i see what you have; it's definitely less complex
[19:39] At the cost of bandwidth, admittedly, but it feels to me that it's worth the gain in simplicity.
[19:39] rbasak: how do you want the cache to work? in $HOME? or in the repo itself?
[19:39] rbasak: and how/what manages the cache?
[19:39] Right - given that we're already paying the bandwidth cost of cloning the entire repo.
[19:39] yep
[19:40] cache> I don't mind particularly. $XDG_CACHE_DIR would be nice, additionally with support inside pull-lp-source! But that's perhaps scope creep. Inside the repo would be fine, I think.
[19:40] rbasak: right, so we'd store them somewhere in .git? How does that work normally? will it 'just work' to push those objects up?
[19:42] I meant in .git, but not inside the git data structure.
[19:42] So it wouldn't push or pull.
[19:42] Another user's tooling would find it missing and use my algorithm above to fetch it directly from LP as needed.
[19:43] ah ok
[19:43] so purely a local cache
[19:43] got it
[19:43] Right
[19:43] smoser: my branch does some stuff to stop using xgit for the importer itself, which i think i might commit anyway
[19:43] as that's more a usd-clone detail than an import details
[19:43] *detail
[19:44] rbasak: ok, i think we're on the same page now
[19:44] rbasak: but all of that is in a (yet non-existent) usd-build, right?
[19:45] nacc: right :)
=== JanC_ is now known as JanC
[19:45] nacc: though it would be fairly independent in a usd-get-orig script easily enough
[19:45] rbasak: yep, i think i'm going to push this into one of our python libs
[19:45] rbasak: the code is reorged quite a bit now to support the various tools all being in python, hopefully
[19:46] and pulling in whatever deps they need
[19:46] nacc: great!
[19:46] nacc: thank you for all your work on this
[19:46] i can probably get a usd-build done today, that at least does the common stuff correctly, from HEAD
[19:46] rbasak: we also need to sit down for UOS, probably? maybe after the team meeting thursday?
[19:47] Yes, we should.
[19:47] One caveat with my algorithm above: will it be reasonably quick or dog slow?
[19:47] (to walk LP spph)?
[19:48] well, it'd be slower than following the tags, but that's ok; and in *most* cases, I think it would only be going a few publishes back
[19:48] that's kind of why i thought following tags would be better -- as we'd find the 'nearest' publish faster
[19:49] we could then sanity check that version, to be sure
[19:49] but that would then miss the case of pushing a new upstream in an SRU
[19:49] rare though it might be
[19:49] cd/win 54
[19:49] oops
[19:51] 54
[19:52] heh
[20:04] rbasak: actually, question on 2), there's no way to go from an upstream version to an orig tarball afaict
[20:04] rbasak: no canonical way, i mean
[20:05] nacc: you're not getting an orig tarball, not for step 2. Just determining the upstream version number.
[20:05] Or am I answering the wrong question?
=== robert_ancell_ is now known as robert_ancell
[20:05] rbasak: right, err, 3) then
[20:06] rbasak: how do i go from purely an upstream version to something to look up in the cache?
[20:06] rbasak: and/or something to download
[20:06] rbasak: i think 3) may also be what doesn't work with multiple origs?
[20:07] rbasak: ah maybe i can do this:
[20:08] Ah, I hadn't thought of that.
[20:08] that's i think why i was getting hung up on the import tags
[20:08] or, as you referred to it, the spph
[20:08] But you could structure the cache to have <package>/<version> directories, where the files inside are named matching the orig tarballs exactly
[20:08] yes
[20:08] that's a good point
[20:09] Where <version> is the upstream version only, not the package version.
[20:09] right
[20:11] rbasak: let's say someone accidentally deleted one file in a cache of two orig tarballs (certainly possible). We would actually need to know which orig tarballs we expect to find in there, right? so we know if we can use the cache?
[20:13] Good point. Maybe leave a "MANIFEST" or similar file in there? Or indeed the .dsc, if you have it?
[20:14] yeah, i think we should keep the .dsc file around to know what is there
[20:14] I accept that your questions are demonstrating how this isn't as simple as I first thought :)
[20:14] yeah :)
[20:14] there are corner cases either way
[20:14] At the moment I still favour this over pristine-tar, I think, though
[20:14] understood
[20:19] rbasak: is there a flag to dpkg-buildpackage (i guess to dpkg-source?) to use tarballs from arbitrary locations?
[20:25] Not that I know about, sorry.
[20:25] Would placing a symlink work?
[20:27] rbasak: probably, in ../ ?
[20:28] rbasak: i think that's where dpkg-buildpackage ends up looking, right?
[20:37] nacc: right
[20:38] And I think developers won't be surprised to find a symlink there.
[20:38] Or at least they'd find it reasonable.
[20:38] The only surprise might be to find that the symlink breaks if the git directory is deleted.
[20:38] But I think that's OK - it's entirely recoverable.
[20:38] rbasak: right
[21:51] 1953. By Iain Lane 6 hours ago
[21:51] Merge xnox's branch to sign with the 4K key for current releases
[21:51] horum, i see my commit on launchpad now
[21:52] Laney, somehow there is still one commit outstanding from my branch =/ the "use full fingerprint" one
[21:52] and i did the split-channel thing by accident.
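The cache scheme discussed above (per-package, per-upstream-version directories, with the .dsc kept alongside as a manifest so a partially deleted cache can be detected) can be sketched as follows. The function names and layout are illustrative assumptions, not usd-importer's actual implementation:

```python
import os

def cache_path(base, package, upstream_ver):
    """One cache directory per package per *upstream* version, e.g.
    <base>/certbot/0.9.3/ -- the orig tarballs inside keep their exact
    archive filenames."""
    return os.path.join(base, package, upstream_ver)

def cache_usable(cache_dir, expected_tarballs):
    """The cache entry is only usable if every orig tarball named by the
    kept .dsc (passed here as a plain list of filenames) is still present;
    one accidentally deleted file invalidates the whole entry."""
    return all(
        os.path.isfile(os.path.join(cache_dir, name))
        for name in expected_tarballs
    )
```

On a miss, the tooling would fall back to the Launchpad lookup discussed earlier and repopulate the directory; since the cache lives under .git but outside git's object store, it is never pushed or pulled.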
[22:58] re: autoimporter, could someone take a stab at otto's question? https://bugs.launchpad.net/bugs/1638125 thanks
[22:58] Launchpad bug 1638125 in mariadb-5.5 (Ubuntu) "USN-3109-1: MySQL vulnerabilities partially applies to MariaDB too" [Medium,New]
[22:59] sarnold: done
[22:59] rbasak: thanks!
[23:00] rbasak: hah :) very nice value-per-byte :)
[23:03] * mwhudson wonders if the snapd autopkgtest just needs some zesty upload to ppa:snappy-dev/image
=== JanC is now known as Guest13101
=== JanC_ is now known as JanC
=== g2` is now known as g2[cubs-ATL]