=== freyes__ is now known as freyes
=== maclin1 is now known as maclin
=== maclin1 is now known as maclin
[09:41] cjwatson I've been referred to you by jamespage for this and hope you can help. We're still seeing 'Hash Sum mismatch' errors for apt update executions. According to http://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html this should be a thing of the past from Xenial onwards, but we're still seeing it. I'm trying to figure out how to verify whether this is actually being used or not.
[09:43] If I do 'apt-get -oDebug::Acquire::http=true update' then it shows the InRelease download happening from the standard ubuntu sources, but our failures are from those sources - for example: E: Failed to fetch http://mirror.rackspace.com/ubuntu/dists/xenial-updates/universe/binary-amd64/Packages.gz  Hash Sum mismatch
[09:44] odyssey4me: I wonder whether you have to use a specific mirror configuration to pick up the extra bits
[09:45] Yeah, I don't think you should be hitting packages.gz if you are by-hash.
[09:45] that's what's confusing me - unfortunately docs on all this are very sparese
[09:45] *sparse
[09:46] according to http://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html I should see the InRelease file fetched, then the by-hash ... and I'm not seeing that
[09:46] so I'm wondering if there's some sort of apt or whatever config that needs to be in place to activate this, or if using it can be forced somehow
[09:48] By-Hash  Try to download indexes via an URI constructed from a hashsum of the expected file rather than downloaded via a well-known stable filename. True by default, but automatically disabled if the source indicates no support for it. Usage can be forced with the special value "force".
[09:48] man 5 apt.conf
[09:48] man:apt.conf(5)
[09:49] odyssey4me: can you pastebin a full debug dump?
[09:51] the debug option you quoted earlier should be enough to get started with
[09:52] you shouldn't need any special config, though perhaps there's something weird about the mirror you're using or about your existing config
[09:54] cjwatson ok, posting info to https://gist.github.com/odyssey4me/549131501f752dfe957d1ec151d62914 - just did one with a mixed set of repo sources, which also goes through apt-cacher-ng... I'll post another one now with just a plain default set of sources
[09:54] (and no apt-cacher-ng)
[09:55] while acng should work, I'd rather start with as few things in the mix as possible
[09:56] odyssey4me: so that info you've posted doesn't show a failure - I need one that does
[09:56] nor does it even show fetching any Packages file at all
[09:56] oh, well, that's a problem - because it's not consistent
[09:57] right, but you should be able to get it eventually presumably
[09:57] there's no point in me spending time debugging the successful cases
[09:57] we're seeing it in CI jobs, so by the time we see it the host it happened on is gone
[09:57] temporarily add -oDebug::Acquire::http=true to the CI job config?
[09:58] is there some sort of config I can add to have apt log things, then we can collect those logs?
[09:58] you could drop in e.g. /etc/apt/apt.conf.d/99debug with Debug::Acquire::http "true";
[09:59] if that's easier than a command-line option
[09:59] will that drop info into /var/log/apt ?
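For reference, the drop-in file cjwatson describes might look like the sketch below. The Debug::Acquire::http line is exactly what he suggests above; the Acquire::By-Hash "force" line is an assumption based on the apt.conf(5) excerpt quoted at 09:48 and shouldn't normally be needed on a source that already advertises by-hash support.

    # /etc/apt/apt.conf.d/99debug -- sketch only
    # Log per-request HTTP acquire activity to apt's stdout/stderr
    Debug::Acquire::http "true";
    # Assumption: force by-hash index downloads even if the source does not advertise it
    Acquire::By-Hash "force";

With that file in place, a plain apt-get update run inside the CI job should emit the per-request debug output on its stdout/stderr, which the job log can then capture.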
[09:59] no, stdout/stderr (I forget which) of apt
[09:59] hmm, ok - that's worth a try - thanks for the advice
[09:59] lemme give that a go, and I'll come back when I have something useful to peruse
[10:00] thanks Faux cjwatson jamespage :)
[10:00] thanks. it's worth capturing /etc/apt/sources.list too
[10:02] the only fail I have on record right now is the output from an ansible task which I've just added to https://gist.github.com/odyssey4me/549131501f752dfe957d1ec151d62914 - not sure if that helps at all
[10:04] mm, not really enough detail unfortunately
[10:04] odyssey4me: oh, and if you're using apt-cacher-ng, make sure that you have the fix or workaround linked from my blog entry
[10:05] I thought that was trusty only?
[10:05] It looked to me like xenial's package got patched?
[10:05] if you're using xenial's acng then that should be OK, yes
[10:06] but this is the kind of symptom you can get from a bug there - i.e. acng serves a by-hash file that turns out to not actually match the requested hash, then apt falls back to the non-by-hash version, and finds that it's out of sync due to old-fashioned mirror update in progress or whatever
[10:07] the debug output should hopefully make this kind of thing clear
[10:08] the by-hash scheme is generally more robust against cache breakage, but acng is a special case because it's sufficiently clever about the archive structure that if misconfigured it can undo the robustness savings
[10:09] and I suppose it's also possible that the mirror.rackspace.com mirror sync script is incorrect and fails to put the new by-hash files in place before InRelease, which would also have a similar effect
[10:10] in that case the debug output should show a 404 for the by-hash file followed by 200 for plain old Packages
[10:14] hmm, let me get hold of our mirror folks and see whether they've implemented the 2-step mechanism
[10:14] thanks again - really appreciate your time
[10:15] they probably have two-stage sync (most decent mirrors do), but worth checking the details
[10:16] specifically whether InRelease is excluded from the first stage
[10:17] ubumirror is also unfortunately a bit wrong and we should reeeeally fix it
[10:18] (it needs to exclude only Packages* Sources* Release* InRelease from the first stage, not all of dists/
[10:18] )
[10:30] ohhai :-)
[10:45] cjwatson: we don't have a bug about that, right? (I'll create one otherwise)
[10:46] I don't think so
[10:49] bug 1771796
[10:49] bug 1771796 in Ubuntu Mirror scripts "Fix the two-stage sync" [Medium,Triaged] https://launchpad.net/bugs/1771796
[11:41] hi all
[11:42] please consider for bionic because I don't feel like dealing with broken segmentation in files
[11:42] https://bugs.launchpad.net/ubuntu/+source/gsequencer/+bug/1770324
[11:42] Launchpad bug 1770324 in gsequencer (Ubuntu) "Sync gsequencer 1.4.29-1 (universe) from Debian unstable (main)" [Undecided,Fix committed]
=== Guest24334 is now known as _hc
[12:19] <_0kx__> hi
[12:23] <_0kx__> is the bug known: 1.) i've entered the password in gdm3 in the first time wrong. after this, i've entered the password in the right way (second) and the login hangs up.
[12:23] <_0kx__> ?
[12:23] <_0kx__> thanks!
[12:24] <_0kx__> the bug appears after the upgrade from 17.10 to 18.04!
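As an illustration of the two-stage sync cjwatson is describing: the point is that the files InRelease refers to must land before InRelease itself, so clients never see signed metadata for indexes the mirror doesn't yet have. A minimal sketch with plain rsync might look like the following; the source, destination and options are hypothetical and are not what ubumirror or mirror.rackspace.com actually run.

    # Stage 1: sync everything except the signed entry points, so new pool and
    # by-hash contents arrive first (paths are placeholders)
    rsync -a --exclude='Packages*' --exclude='Sources*' \
          --exclude='Release*' --exclude='InRelease' \
          rsync://archive.example.com/ubuntu/ /srv/mirror/ubuntu/

    # Stage 2: full pass that brings in Release/InRelease and the plain index files,
    # and cleans up anything removed upstream
    rsync -a --delete-after rsync://archive.example.com/ubuntu/ /srv/mirror/ubuntu/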
[12:25] _0kx__, hey, yes, a fix has been committed this week and is being backported for a SRU
[12:26] it's in the SRU queue waiting for review in fact now
[12:26] https://launchpadlibrarian.net/370642960/gdm3_3.28.0-0ubuntu1.1_source.changes
[12:39] <_0kx__> seb128: yeah, that's my bug. thanks a lot! there is hope.:-)
[12:40] _0kx__, yw!
[13:10] <_0kx__> bye
[13:18] rbasak: Morning! Will you be around in about three hours to continue our conversation from yesterday?
=== caravena_ is now known as caravena
[13:34] tsimonq2: yes. Ping me when you're ready.
[13:34] nacc: ^ FYI
[13:53] ACK
[14:32] hello. anyone willing to sponsor a package change for bionic for me? the package became unusable this morning. http://people.ubuntu.com/~nafallo/lastpass-cli/
[14:33] let me know if this is supposed to be in -motu :-)
[14:34] quite sure I need to follow some policy I don't know about yet :-)
[14:40] Nafallo: Please file a bug, attach your diff, subscribe the sponsors team, and link it here.
[14:41] can this be fixed somehow for 18.04? https://bugs.launchpad.net/ubuntu/+source/protracker/+bug/1769693
[14:41] Launchpad bug 1769693 in protracker (Ubuntu) "protracker does not run due to SDL2 library version" [Undecided,Confirmed]
[14:43] tsimonq2: cheers :-)
[14:44] I suppose I should do this against cosmic to begin with :-)
[14:50] tarzeau: I commented on the bug
[14:54] rbasak, Nafallo: Cheers
[15:14] I think bug 1555562 is ready for sponsorship now :-)
[15:14] bug 1555562 in lastpass-cli (Ubuntu) "lastpass-cli changed bundled CA certificates" [Undecided,In progress] https://launchpad.net/bugs/1555562
[16:01] coreycb hey are ddebs getting built for uca pkgs yet, do you know?
[16:02] ddstreet: i dont think so. jamespage, do you know?
[16:03] ddstreet: nope
[16:03] well they get built but I don't think we've figured out the sync process yet
[16:12] rbasak: ack
[16:16] jamespage they get built in the private ppa tho, not -staging right
[16:17] we need them in -staging so they're publicly available
[16:17] ddstreet: we really need to sync them to the actual cloud archive
[16:18] ddstreet: the packages get rebuilt in proposed so its not the same binary
[16:18] jamespage that's fine, but unless you plan to leave them all there forever for all versions, that's not good enough
[16:18] getting rebuilt is a problem, too
[16:18] why
[16:18] ?
[16:18] you may not have to support (debug) old versions, but some people do, like my team
[16:19] only making the latest ddebs available is nice for development, but not terribly useful for support/debugging
[16:19] ddstreet: I'd be happy to commit to doing the same as we do in the ubuntu archive - whats the policy for ddeb retention there?
[16:19] jamespage 1-2 most recent versions, which is exactly why i said it's not enough
[16:19] however all pkgs have ddebs in LP, which is what we are asking you to do
[16:20] there's an existing bug for this, rather old, i'll find it
[16:20] lp has a full record of all build artefacts?
[16:20] ddstreet: yeah I know the one
[16:20] k
[16:20] no progress on it then?
[16:20] no
[16:21] its never quite managed to bubble to the top of the priorities list
[16:21] any plan for there to be progress on it? ;-)
[16:21] ddstreet: do PPA builds keep full history as well?
[16:21] jamespage yes
[16:22] so you can always grab the binaries from older versions - I did not know that
[16:22] yes
[16:22] see ppa:ddstreet/ubuntu-dev-tools which includes pull-lp-ddebs
[16:22] can get ddebs for any package in LP history as long as it was actually built with ddebs
[16:23] pull-lp-debs too, which is handy to reproduce issues on older versions (common requirement)
[16:24] anyway it really would be nice to finally have ddebs for uca pkgs, it's a bit of a pain to debug stuff without any dbgsyms, which is what we have to do currently
[16:26] rbasak, nacc: Hi.
[16:27] Let's continue.
[16:28] o/
[16:29] tsimonq2: you mentioned lubuntu-artwork and one other yesterday. How long is the complete list?
[16:30] tsimonq2: and are there any on your list where the workflow (including VCS location, where it's derived from etc) is different from lubuntu-artwork?
[16:31] ddstreet,jamespage: PPA builds do get garbage-collected after a while once they've been superseded by later versions.
[16:31] rbasak: Default settings and artwork (as well as calamares-settings-ubuntu) we keep an up-to-date VCS to.
[16:31] rbasak: The rest would be good to have for tracking.
[16:31] cjwatson that should be turned off for the UCA public ppa, then
[16:31] ddstreet,jamespage: you may like to look at lp:ddeb-retriever, which is very carefully constructed to use the correct LP APIs for keeping up with publication flow
[16:32] as i assume it is turned off for the ubuntu archives
[16:32] ddstreet: oh, maybe it is for that particular case
[16:32] tsimonq2: how big is the rest?
[16:32] tsimonq2: and what do you mean by tracking?
[16:32] ddstreet: Yeah, it is, assuming this is owned by ~ubuntu-cloud-archive
[16:33] we have a blacklist of stuff that never gets expired
[16:33] rbasak: https://phab.lubuntu.me/diffusion/ is what we currently keep eyes on all the time.
[16:33] " No repositories found for this query."
[16:33] Waat.
[16:35] rbasak: tsimonq2: i see them all here
[16:35] Is a login required perhaps?
[16:35] cjwatson hopefully the UCA -staging ppa is on that blacklist
[16:35] shouldn't be but checking
[16:35] BTW, I only have about 25 minutes.
[16:35] yep
[16:35] that's it
[16:35] rbasak: Perhaps I need to play with permissions, but it's just settings, artwork, and the Calamares settings that we actively keep a rich history to, right now.
[16:35] First I'm just trying to understand your workflows
[16:35] ddstreet: it's by owner, so if it's owned by ~ubuntu-cloud-archive then it is
[16:35] I would like to be able to extend that to be able to do some rich commits in some LXQt packages. We have some git repos which just contain me branching from Debian and adding patches.
[16:35] I hope it doesn't seem too much like an interrogation :)
[16:36] tsimonq2: so you'd be asking us to initially import those three packages for you? Just trying to understand scope.
[16:36] rbasak: You're fine, but I'm determined to figure something out here. ;)
[16:36] yes, that's it, my fat fingers would surely misspell ~ubuntuy-cloud-archive ;-)
[16:36] rbasak: Yeah, but ideally we could have everything we explicitly seed imported.
[16:36] man the policies suggest it should be viewable
[16:37] Let's start with just considering this set of three
[16:37] OK.
[16:37] nope
[16:37] actually it's per repository :/
[16:37] What do you expect to happen to the LP project VCS repositories you have at the moment?
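For context on the ddeb discussion above: the pull-lp-ddebs and pull-lp-debs tools ddstreet mentions follow the same calling convention as the other pull-* helpers, roughly as sketched below. The package name and version are placeholders, and the exact syntax is worth confirming against the tool's --help.

    # Fetch ddebs for the version of a source package currently in a given series
    pull-lp-ddebs mypackage xenial

    # Or pin an exact historical version, provided it was built with ddebs at the time
    pull-lp-ddebs mypackage 1.2-3ubuntu4

    # pull-lp-debs works the same way for the regular binary packages
    pull-lp-debs mypackage xenial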
[16:37] fixing
[16:37] cjwatson yeah ddeb-retriever is interesting, but it just grabs all ddebs for all pkgs in lp, which is a bit less fine-grained than pull-lp-ddebs ;-)
[16:38] Will they become a read only archive only, or will you still be pushing to them?
[16:38] oh actually i think i made it all visible now
[16:38] rbasak: I still would like rich history, so pushing.
[16:38] ddstreet: sure, I meant it for the case where jamespage perhaps wants to publish them in the cloud archive
[16:38] no, that's just for new repos. ugh
[16:38] ddstreet: or something along those lines
[16:38] tsimonq2: wouldn't you end up with two divergent repositories for each package then?
[16:39] ah right yep definitely, i assume that's the standard way to publish them
[16:39] I see "rART Lubuntu Artwork" under https://phab.lubuntu.me/diffusion/ now - only one entry.
[16:39] rbasak: It would be good for rich history to be there, but if someone just dputs, importing that would be good.
[16:39] keep refreshing
[16:39] more are coming
[16:40] cjwatson jamespage looks from the code like it might need to be modified tho, since it logs into LP anonymously while the ~ubuntu-cloud-archive source ppas are private
[16:40] anyway
[16:40] tsimonq2: I still think you'll end up with divergence
[16:40] You'll get two parallel repositories
[16:40] they're all there no
[16:40] w
[16:40] wxl: I see them now thanks
[16:40] rbasak: Is there a way to not have them diverge?
[16:41] ddstreet: sure, there would be plenty of details
[16:41] rbasak: nacc said something along those lines yesterday.
[16:41] tsimonq2: the easiest way would be for you to drop the other repository and use the git-ubuntu imported repository for everything only.
[16:42] That repository would become your single source of truth, and you wouldn't push any commits except in that repository.
[16:42] rbasak: Is that r/w or just r?
[16:42] It's r/o.
[16:42] If you need a holding area for changes yet to be uploaded, you could put that in a team repository branched from ubuntu/devel in the importer repository.
[16:42] When you upload from the holding area, you could provide that to the importer as rich history.
[16:43] How would that work?
[16:43] Providing the importer with rich history is currently limited to ~usd-import-team, but the plan is to make it possible for any uploader to provide the rich history automatically.
[16:43] Because that seems like what I'm looking for.
[16:43] rbasak: Is that Canonical-only right now?
[16:43] Let's see if I can summarise the importer's operation.
[16:44] tsimonq2: it's what i described yesterday (approved MPs, e.g)
[16:44] The importer repository is read-only. We consider this essential as it is supposed to represent Launchpad's publication history as the single source of truth, and allowing anyone to push directly would break that.
[16:44] Right.
[16:45] So only the importer pushes to the "official" repositories and only in response to Launchpad publications of uploads.
[16:45] However, if you make available rich history to the importer, it can choose to adopt that rich history as part of the "official record" of how it got to a commit that matches a Launchpad publication.
[16:46] It will only adopt the rich history if the final commit of the rich history matches the published version exactly.
[16:46] (it might be helpful to point to a server team merge to see the result)
[16:46] rbasak: So where does it look for that rich history?
[16:47] Currently the rich history is provided by pushing a tag with the appropriate name to the official repository. Which is not ideal, because we want to keep the repository read-only for all other purposes, and Launchpad currently doesn't permit refspec-based ACLs.
[16:47] In the long term, I think we'll be wrapping dput and supplying the importer with information on how to find the rich history corresponding to the upload in the changes file.
[16:48] Then the rich history can be obtained by the importer from any Launchpad git repository branch such as the one from your MP.
[16:48] rbasak: That's scheduled for this cycle, FYI
[16:48] Can I have rich history and tag it, but still manually dpput?
[16:48] *dput
[16:49] We have an intermediate plan for the importer to be able to grab rich history by looking for them amongst approved MPs.
[16:49] Approved but not merged MPs, right?
[16:49] Right now the process to ensure that rich history is adopted is to push the tag with the rich history first (someone in ~usd-import-team and we call it the "upload tag") before dput.
[16:49] There is no wrapper currently.
[16:49] tsimonq2: yeah something like that.
[16:49] (We haven't started it yet, but it's about halfway down my dept's "Infrastructure" roadmap list so it has a decent chance of finally getting done.)
[16:50] cjwatson: thanks. You understand that our plan no longer requires refspec-based ACLs though right?
[16:50] I mean it'd help right now, but long term we won't need them.
[16:50] I lose track, but you mentioned it, that's all :)
[16:50] rbasak: Do you have an example of this in action? (Can you walk me through itt?)
[16:50] *it
[16:50] (You're not the only people who've wanted it at various points)
[16:51] tsimonq2: maybe follow https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/345498?
[16:51] Actually that's not ready for upload.
[16:51] I can of course show you an MP where it happened, but I'm not sure whether that'll be helpful as it'll be in the past and not in action.
[16:51] sorry, I don't have the time this evening to make it ready rbasak
[16:52] rbasak: OK.
[16:52] * tsimonq2 looks.
[16:52] In that MP, cpaelzer is working on an upload for chrony.
[16:52] When it's ready, he'll have a branch inside ~paelzer with it ready.
[16:52] Someone in ~usd-import-team will run "git ubuntu tag --upload" on it, and then push the tag to the official repo.
[16:53] cpaelzer will then dput.
[16:53] dput to Debian?
[16:53] When the importer sees the upload published by Launchpad, it will look for the upload tag, find it, verify that the tree matches the upload and adopt it into the formal record.
[16:53] dput to Ubuntu.
[16:53] "Merge into: ubuntu/+source/chrony:debian/sid" is a hack.
[16:53] but dput to Debian would work just the same
[16:53] Really it won't be merged by anything.
[16:54] The importer will create the commit based primarily on Launchpad's publication history.
[16:54] rbasak: What's the logic behind the hack?
[16:54] We use debian/sid so the preview diff looks sane.
[16:54] Ohh, it's a merge from Debian?
[16:54] That'd make sense.
[16:54] It's a hack because usually MPs are intended to be merged by something like "git checkout target-branch && git merge proposed-branch".
[16:54] Right.
[16:55] Whereas our importer repositories reflect Launchpad's publications as the single source of truth.
[16:55] Will this be the way it is long term?
[16:55] So instead of doing a merge directly, we round trip through a Launchpad publication via dput
[16:55] When the importer sees the upload, it creates the merge commit based on the publication and not the MP.
[16:55] Right.
[16:56] For as long as Launchpad's publication history forms the official record for Ubuntu uploads, this will be how it has to be.
[16:56] One day very far in the future and not on any roadmap, Ubuntu may wish to switch to git repositories as the single source of truth, with uploads secondary to that. If and only if that happens, only then will MPs get merged directly.
[16:57] OK.
[16:57] So does Launchpad then have the tag from Debian on the Ubuntu tree as part of the merge, or at least a reference that this is a "merge" commit?
[16:57] (Once merged.)
[16:57] tsimonq2: which way do you mean merge?
[16:57] tsimonq2: Git-merge or Ubuntu-merge ?
[16:58] I need to go soon.
[16:58] If nacc is around he can take over.
[16:58] rbasak: I can try and pick up a bit here
[16:58] Thanks
[16:58] My main concern is that you don't end up with two diverging repositories as I don't think that'll be useful for you workflow-wise.
[16:58] right
[16:58] As I said, I'm quite happy to add your stuff to the whitelist if that's what you decide you want in the end.
[16:59] I'd say, do it.
[16:59] It would be good to try it out and play with it.
[16:59] tsimonq2: afaict, what you'd end up doing is having the ubuntu LP repo for your srcpkg as 'pkg' (the default with `git-ubuntu`) and then you'd have your personal (or team's, any LP user reference) as "". Your active development would be in the remote at
[17:00] you can do whatever you want there, but you would eventually propose changes to the 'pkg' remote via a MP
[17:00] tsimonq2: send us an MP against https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/345498 please
[17:00] rbasak: presumably not that? :)
[17:00] Oh
[17:00] Yeah. I meant: https://git.launchpad.net/usd-importer/tree/gitubuntu/source-package-whitelist.txt
[17:00] We've got some CI brokenness going on at the moment.
[17:00] Will do later this evening.
[17:00] Thanks!
[17:01] I can manually use a newer whitelist, but I'd prefer to get it landed properly before activating it, unless you're in a real hurry.
[17:01] I'd estimate a week or so to do it properly.
[17:01] Nah, let's do it properly.
[17:01] But if you really want it sooner I can hack our importer instance up a bit.
[17:01] OK, thanks.
[17:02] * rbasak EODs
[17:02] o/
[17:02] powersj: is there a reason I can't get to https://jenkins.ubuntu.com/server/job/git-ubuntu-ci/427/?
[17:02] wxl: So we can stay on the same page... ^
[17:02] referred to from https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/345670
[17:02] It's broken for me too
[17:02] weird
[17:03] rbasak: ok
[17:03] nacc: we have had some jenkins issues and the latest update blew away our pipeline jobs :\
[17:03] powersj: urgh
[17:03] yeah...
[17:03] @tsimonq2: you referring me to your conversation with rbasak? if so, sounds good. let's see it when it's done. :)
[17:03] Error: "tsimonq2:" is not a valid command.
[17:03] heh
[17:03] wxl: Yep; OK. :)
[17:04] * tsimonq2 kicks the differently named udevbot.
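Pulling together the workflow rbasak and nacc describe above, an upload with rich history would go roughly as follows. This is a sketch only: the package, branch and remote names are placeholders, and as noted above the upload-tag step is currently restricted to ~usd-import-team.

    # Clone the importer's read-only repository; 'pkg' is the default remote name
    git ubuntu clone chrony
    cd chrony

    # Work on a branch in your own or your team's namespace, based on ubuntu/devel
    git checkout -b my-change pkg/ubuntu/devel    # branch name is hypothetical
    # ...make commits, update debian/changelog...
    git push my-personal-remote my-change         # remote name is hypothetical

    # When the upload is ready, someone in ~usd-import-team tags it and pushes the tag
    git ubuntu tag --upload
    git push pkg <upload-tag>                     # placeholder for the generated tag name

    # Then dput as usual; once Launchpad publishes the upload, the importer finds the
    # tag, checks the tree matches the publication, and adopts the rich history
    dput ubuntu ../chrony_<version>_source.changes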
[17:04] wxl: tsimonq2: if you have any other questions, though, i can answer them in the meanwhile
[17:05] i'm mainly going to let tsimonq2 run point on this and otherwise keep my hands out of the pot, but i'll find ya'll if things go south XD
[17:05] heh
[17:21] I need more space for my VMs, so I scored a 970 Evo on amazon gift cards that have been building from birthday gifts and such
[17:54] externalreality: wrong channel?
[17:59] nacc, ha, thx
[17:59] externalreality: np :)
[18:35] bdmurray: could you review gnome-initial-setup for promotion to bionic? we're going to fix the failed (missing) bugfix in our next upload
[18:43] jbicha: looking
[18:45] jbicha: Could you explain what went wrong? How is it still the old ones?
[18:46] I must have badly merged the branches. Our packaging workflow wasn't very good as we're just starting to switch our packaging to git
[18:47] one complication was that it wasn't obvious to the person doing the previous uploads that we could actually do binary git patches to include the .png we needed
[18:49] jbicha: okay
[20:43] ricotz: What's the difference between the ways nvidia's being packaged now?
[21:17] hi, I need someone from desktop to take a look at LP: #1765914, it seems dconf is reporting a wrong scale-factor, but I have no idea of the cause
[21:17] Launchpad bug 1765914 in openjdk-lts (Ubuntu) "Java windows and fonts are huge running in openjdk-11-jre" [Undecided,New] https://launchpad.net/bugs/1765914
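On jbicha's point about binary git patches: git can encode binary blobs such as a .png inside an ordinary text patch. A generic sketch follows, not specific to how the desktop team's packaging applies such patches; the file name is hypothetical.

    # Commit the binary file, then generate a patch that carries it
    git add data/background.png
    git commit -m "Add missing background image"
    git format-patch -1 --binary    # writes 0001-Add-missing-background-image.patch

    # On the receiving side, git am (or git apply) reconstructs the binary content
    git am 0001-Add-missing-background-image.patch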