[09:41] <odyssey4me> cjwatson I've been referred to you by jamespage for this and hope you can help. We're still seeing 'Hash Sum mismatch' errors for apt update executions. According to http://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html this should be a thing of the past from Xenial onwards, but we're still seeing it. I'm trying to figure out how to verify whether this is actually being used or not.
[09:43] <odyssey4me> If I do 'apt-get -oDebug::Acquire::http=true update' then it shows the InRelease download happening from the standard ubuntu sources, but our failures are from those sources - for example: E: Failed to fetch http://mirror.rackspace.com/ubuntu/dists/xenial-updates/universe/binary-amd64/Packages.gz  Hash Sum mismatch
[09:44] <jamespage> odyssey4me: I wonder whether you have to use a specific mirror configuration to pickup the extra bits
[09:45] <Faux> Yeah, I don't think you should be hitting packages.gz if you are by-hash.
[09:45] <odyssey4me> that's what's confusing me - unfortunately docs on all this are very sparse
[09:46] <odyssey4me> according to http://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html I should see the InRelease file fetched, then the by-hash ... and I'm not seeing that
[09:46] <odyssey4me> so I'm wondering if there's some sort of apt or whatever config that needs to be in place to activate this, or if using it can be forced somehow
[09:48] <Faux> By-Hash: Try to download indexes via a URI constructed from a hashsum of the expected file rather than via a well-known stable filename. True by default, but automatically disabled if the source indicates no support for it. Usage can be forced with the special value "force".
[09:48] <Faux> man 5 apt.conf
[09:48] <Faux> man:apt.conf(5)
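Per that apt.conf(5) entry, forcing the behaviour would look something like this (a sketch only; needs root and network access, so not something to run blindly):

```shell
# Force by-hash index fetching even if the Release file doesn't
# advertise support for it (Acquire::By-Hash per apt.conf(5)).
apt-get update -o Acquire::By-Hash=force
```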
[09:49] <cjwatson> odyssey4me: can you pastebin a full debug dump?
[09:51] <cjwatson> the debug option you quoted earlier should be enough to get started with
[09:52] <cjwatson> you shouldn't need any special config, though perhaps there's something weird about the mirror you're using or about your existing config
[09:54] <odyssey4me> cjwatson ok, posting info to https://gist.github.com/odyssey4me/549131501f752dfe957d1ec151d62914 - just did one with a mixed set of repo sources, which also goes through apt-cacher-ng... I'll post another one now with just a plain default set of sources
[09:54] <odyssey4me> (and no apt-cacher-ng)
[09:55] <cjwatson> while acng should work, I'd rather start with as few things in the mix as possible
[09:56] <cjwatson> odyssey4me: so that info you've posted doesn't show a failure - I need one that does
[09:56] <cjwatson> nor does it even show fetching any Packages file at all
[09:56] <odyssey4me> oh, well, that's a problem - because it's not consistent
[09:57] <cjwatson> right, but you should be able to get it eventually presumably
[09:57] <cjwatson> there's no point in me spending time debugging the successful cases
[09:57] <odyssey4me> we're seeing it in CI jobs, so by the time we see it the host it happened on is gone
[09:57] <cjwatson> temporarily add -oDebug::Acquire::http=true to the CI job config?
[09:58] <odyssey4me> is there some sort of config I can add to have apt log things, then we can collect those logs?
[09:58] <cjwatson> you could drop in e.g. /etc/apt/apt.conf.d/99debug with Debug::Acquire::http "true";
[09:59] <cjwatson> if that's easier than a command-line option
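A sketch of the drop-in cjwatson describes; it writes to a scratch directory here so the sketch is self-contained, but the real target is /etc/apt/apt.conf.d/99debug (needs root):

```shell
# Create the debug drop-in; using a scratch directory as a stand-in
# for /etc/apt/apt.conf.d so this sketch can run anywhere.
aptconfdir=$(mktemp -d)
cat > "$aptconfdir/99debug" <<'EOF'
Debug::Acquire::http "true";
EOF
cat "$aptconfdir/99debug"
```

Since the output goes to apt's stdout/stderr rather than /var/log/apt, the CI job would also need to capture those, e.g. `apt-get update > apt-debug.log 2>&1`.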
[09:59] <odyssey4me> will that drop info into /var/log/apt ?
[09:59] <cjwatson> no, stdout/stderr (I forget which) of apt
[09:59] <odyssey4me> hmm, ok - that's worth a try - thanks for the advice
[09:59] <odyssey4me> lemme give that a go, and I'll come back when I have something useful to peruse
[10:00] <odyssey4me> thanks Faux cjwatson jamespage :)
[10:00] <cjwatson> thanks.  it's worth capturing /etc/apt/sources.list too
[10:02] <odyssey4me> the only fail I have on record right now is the output from an ansible task which I've just added to https://gist.github.com/odyssey4me/549131501f752dfe957d1ec151d62914 - not sure if that helps at all
[10:04] <cjwatson> mm, not really enough detail unfortunately
[10:04] <cjwatson> odyssey4me: oh, and if you're using apt-cacher-ng, make sure that you have the fix or workaround linked from my blog entry
[10:05] <odyssey4me> I thought that was trusty only?
[10:05] <odyssey4me> It looked to me like xenial's package got patched?
[10:05] <cjwatson> if you're using xenial's acng then that should be OK, yes
[10:06] <cjwatson> but this is the kind of symptom you can get from a bug there - i.e. acng serves a by-hash file that turns out to not actually match the requested hash, then apt falls back to the non-by-hash version, and finds that it's out of sync due to old-fashioned mirror update in progress or whatever
[10:07] <cjwatson> the debug output should hopefully make this kind of thing clear
[10:08] <cjwatson> the by-hash scheme is generally more robust against cache breakage, but acng is a special case because it's sufficiently clever about the archive structure that if misconfigured it can undo the robustness savings
[10:09] <cjwatson> and I suppose it's also possible that the mirror.rackspace.com mirror sync script is incorrect and fails to put the new by-hash files in place before InRelease, which would also have a similar effect
[10:10] <cjwatson> in that case the debug output should show a 404 for the by-hash file followed by 200 for plain old Packages
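The by-hash path apt requests is derived from the checksum that the signed (In)Release file lists for the index. A rough sketch of the path construction ('abc' is just a stand-in for real index contents):

```shell
# Sketch: build the by-hash path for an index file. Real apt takes
# the SHA256 from the signed (In)Release file; 'abc' is a stand-in.
index='abc'
hash=$(printf '%s' "$index" | sha256sum | cut -d' ' -f1)
echo "dists/xenial-updates/universe/binary-amd64/by-hash/SHA256/$hash"
```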
[10:14] <odyssey4me> hmm, let me get hold of our mirror folks and see whether they've implemented the 2-step mechanism
[10:14] <odyssey4me> thanks again - really appreciate your time
[10:15] <cjwatson> they probably have two-stage sync (most decent mirrors do), but worth checking the details
[10:16] <cjwatson> specifically whether InRelease is excluded from the first stage
[10:17] <cjwatson> ubumirror is also unfortunately a bit wrong and we should reeeeally fix it
[10:18] <cjwatson> (it needs to exclude only Packages* Sources* Release* InRelease from the first stage, not all of dists/)
[10:30] <Nafallo> ohhai :-)
[10:45] <Nafallo> cjwatson: we don't have a bug about that, right? (I create one otherwise)
[10:46] <cjwatson> I don't think so
[10:49] <Nafallo> bug 1771796
[11:41] <joelkraehemann> hi all
[11:42] <joelkraehemann> please consider this for bionic because I don't feel like dealing with broken segmentation in files
[11:42] <joelkraehemann> https://bugs.launchpad.net/ubuntu/+source/gsequencer/+bug/1770324
[12:19] <_0kx__> hi
[12:23] <_0kx__> is this bug known? I entered the password in gdm3 wrong the first time; after entering it correctly the second time, the login hangs.
[12:23] <_0kx__> thanks!
[12:24] <_0kx__> the bug appears after the upgrade from 17.10 to 18.04!
[12:25] <seb128> _0kx__, hey, yes, a fix has been committed this week and is being backported for a SRU
[12:26] <seb128> it's in the SRU queue waiting for review in fact now
[12:26] <seb128> https://launchpadlibrarian.net/370642960/gdm3_3.28.0-0ubuntu1.1_source.changes
[12:39] <_0kx__> seb128: yeah, that's my bug. thanks a lot! there is hope.:-)
[12:40] <seb128> _0kx__, yw!
[13:10] <_0kx__> bye
[13:18] <tsimonq2> rbasak: Morning! Will you be around in about three hours to continue our conversation from yesterday?
[13:34] <rbasak> tsimonq2: yes. Ping me when you're ready.
[13:34] <rbasak> nacc: ^ FYI
[13:53] <tsimonq2> ACK
[14:32] <Nafallo> hello. anyone willing to sponsor a package change for bionic for me? the package became unusable this morning. http://people.ubuntu.com/~nafallo/lastpass-cli/
[14:33] <Nafallo> let me know if this is supposed to be in -motu :-)
[14:34] <Nafallo> quite sure I need to follow some policy I don't know about yet :-)
[14:40] <tsimonq2> Nafallo: Please file a bug, attach your diff, subscribe the sponsors team, and link it here.
[14:41] <tarzeau> can this be fixed somehow for 18.04? https://bugs.launchpad.net/ubuntu/+source/protracker/+bug/1769693
[14:43] <Nafallo> tsimonq2: cheers :-)
[14:44] <Nafallo> I suppose I should do this against cosmic to begin with :-)
[14:50] <rbasak> tarzeau: I commented on the bug
[14:54] <tsimonq2> rbasak, Nafallo: Cheers
[15:14] <Nafallo> I think bug 1555562 is ready for sponsorship now :-)
[16:01] <ddstreet> coreycb hey are ddebs getting built for uca pkgs yet, do you know?
[16:02] <coreycb> ddstreet: i dont think so. jamespage, do you know?
[16:03] <jamespage> ddstreet: nope
[16:03] <jamespage> well they get built but I don't think we've figured out the sync process yet
[16:12] <nacc> rbasak: ack
[16:16] <ddstreet> jamespage they get built in the private ppa tho, not -staging right
[16:17] <ddstreet> we need them in -staging so they're publicly available
[16:17] <jamespage> ddstreet: we really need to sync them to the actual cloud archive
[16:18] <jamespage> ddstreet: the packages get rebuilt in proposed so it's not the same binary
[16:18] <ddstreet> jamespage that's fine, but unless you plan to leave them all there forever for all versions, that's not good enough
[16:18] <ddstreet> getting rebuilt is a problem, too
[16:18] <jamespage> why
[16:18] <jamespage> ?
[16:18] <ddstreet> you may not have to support (debug) old versions, but some people do, like my team
[16:19] <ddstreet> only making the latest ddebs available is nice for development, but not terribly useful for support/debugging
[16:19] <jamespage> ddstreet: I'd be happy to commit to doing the same as we do in the ubuntu archive - whats the policy for ddeb retention there?
[16:19] <ddstreet> jamespage 1-2 most recent versions, which is exactly why i said it's not enough
[16:19] <ddstreet> however all pkgs have ddebs in LP, which is what we are asking you to do
[16:20] <ddstreet> there's an existing bug for this, rather old, i'll find it
[16:20] <jamespage> lp has a full record of all build artefacts?
[16:20] <jamespage> ddstreet: yeah I know the one
[16:20] <ddstreet> k
[16:20] <ddstreet> no progress on it then?
[16:20] <jamespage> no
[16:21] <jamespage> it's never quite managed to bubble to the top of the priorities list
[16:21] <ddstreet> any plan for there to be progress on it? ;-)
[16:21] <jamespage> ddstreet: do PPA builds keep full history as well?
[16:21] <ddstreet> jamespage yes
[16:22] <jamespage> so you can always grab the binaries from older versions - I did not know that
[16:22] <ddstreet> yes
[16:22] <ddstreet> see ppa:ddstreet/ubuntu-dev-tools which includes pull-lp-ddebs
[16:22] <ddstreet> can get ddebs for any package in LP history as long as it was actually built with ddebs
[16:23] <ddstreet> pull-lp-debs too, which is handy to reproduce issues on older versions (common requirement)
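Hypothetical invocations of those tools (the package name and version are examples only; the tools are from ppa:ddstreet/ubuntu-dev-tools as described above):

```shell
# Fetch debug symbols / binaries for a specific historical version
# straight from Launchpad's build records.
pull-lp-ddebs openssl 1.0.2g-1ubuntu4   # ddebs for an old version
pull-lp-debs openssl 1.0.2g-1ubuntu4    # the matching binary debs
```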
[16:24] <ddstreet> anyway it really would be nice to finally have ddebs for uca pkgs, it's a bit of a pain to debug stuff without any dbgsyms, which is what we have to do currently
[16:26] <tsimonq2> rbasak, nacc: Hi.
[16:27] <tsimonq2> Let's continue.
[16:28] <rbasak> o/
[16:29] <rbasak> tsimonq2: you mentioned lubuntu-artwork and one other yesterday. How long is the complete list?
[16:30] <rbasak> tsimonq2: and are there any on your list where the workflow (including VCS location, where it's derived from etc) is different from lubuntu-artwork?
[16:31] <cjwatson> ddstreet,jamespage: PPA builds do get garbage-collected after a while once they've been superseded by later versions.
[16:31] <tsimonq2> rbasak: Default settings and artwork (as well as calamares-settings-ubuntu) are what we keep an up-to-date VCS for.
[16:31] <tsimonq2> rbasak: The rest would be good to have for tracking.
[16:31] <ddstreet> cjwatson that should be turned off for the UCA public ppa, then
[16:31] <cjwatson> ddstreet,jamespage: you may like to look at lp:ddeb-retriever, which is very carefully constructed to use the correct LP APIs for keeping up with publication flow
[16:32] <ddstreet> as i assume it is turned off for the ubuntu archives
[16:32] <cjwatson> ddstreet: oh, maybe it is for that particular case
[16:32] <rbasak> tsimonq2: how big is the rest?
[16:32] <rbasak> tsimonq2: and what do you mean by tracking?
[16:32] <cjwatson> ddstreet: Yeah, it is, assuming this is owned by ~ubuntu-cloud-archive
[16:33] <cjwatson> we have a blacklist of stuff that never gets expired
[16:33] <tsimonq2> rbasak: https://phab.lubuntu.me/diffusion/ is what we currently keep eyes on all the time.
[16:33] <rbasak> "    No repositories found for this query."
[16:33] <tsimonq2> Waat.
[16:35] <wxl> rbasak: tsimonq2: i see them all here
[16:35] <rbasak> Is a login required perhaps?
[16:35] <ddstreet> cjwatson hopefully the UCA -staging ppa is on that blacklist
[16:35] <wxl> shouldn't be but checking
[16:35] <rbasak> BTW, I only have about 25 minutes.
[16:35] <wxl> yep
[16:35] <wxl> that's it
[16:35] <tsimonq2> rbasak: Perhaps I need to play with permissions, but it's just settings, artwork, and the Calamares settings that we actively keep a rich history for, right now.
[16:35] <rbasak> First I'm just trying to understand your workflows
[16:35] <cjwatson> ddstreet: it's by owner, so if it's owned by ~ubuntu-cloud-archive then it is
[16:35] <tsimonq2> I would like to be able to extend that to be able to do some rich commits in some LXQt packages. We have some git repos which just contain me branching from Debian and adding patches.
[16:35] <rbasak> I hope it doesn't seem too much like an interrogation :)
[16:35] <cjwatson> ddstreet: if it's not then you need to stop abbreviating :)
[16:36] <rbasak> tsimonq2: so you'd be asking us to initially import those three packages for you? Just trying to understand scope.
[16:36] <tsimonq2> rbasak: You're fine, but I'm determined to figure something out here. ;)
[16:36] <ddstreet> yes, that's it, my fat fingers would surely misspell ~ubuntuy-cloud-archive ;-)
[16:36] <tsimonq2> rbasak: Yeah, but ideally we could have everything we explicitly seed imported.
[16:36] <wxl> man the policies suggest it should be viewable
[16:37] <rbasak> Let's start with just considering this set of three
[16:37] <tsimonq2> OK.
[16:37] <wxl> nope
[16:37] <wxl> actually it's per repository :/
[16:37] <rbasak> What do you expect to happen to the LP project VCS repositories you have at the moment?
[16:37] <wxl> fixing
[16:37] <ddstreet> cjwatson yeah ddeb-retriever is interesting, but it just grabs all ddebs for all pkgs in lp, which is a bit less fine-grained than pull-lp-ddebs ;-)
[16:38] <rbasak> Will they become a read only archive only, or will you still be pushing to them?
[16:38] <wxl> oh actually i think i made it all visible now
[16:38] <tsimonq2> rbasak: I still would like rich history, so pushing.
[16:38] <cjwatson> ddstreet: sure, I meant it for the case where jamespage perhaps wants to publish them in the cloud archive
[16:38] <wxl> no, that's just for new repos. ugh
[16:38] <cjwatson> ddstreet: or something along those lines
[16:38] <rbasak> tsimonq2: wouldn't you end up with two divergent repositories for each package then?
[16:38] <ddstreet> ah right yep definitely, i assume that's the standard way to publish them
[16:39] <rbasak> I see "rART Lubuntu Artwork" under https://phab.lubuntu.me/diffusion/ now - only one entry.
[16:39] <tsimonq2> rbasak: It would be good for rich history to be there, but if someone just dputs, importing that would be good.
[16:39] <wxl> keep refreshing
[16:39] <wxl> more are coming
[16:40] <ddstreet> cjwatson jamespage looks from the code like it might need to be modified tho, since it logs into LP anonymously while the ~ubuntu-cloud-archive source ppas are private
[16:40] <ddstreet> anyway
[16:40] <rbasak> tsimonq2: I still think you'll end up with divergence
[16:40] <rbasak> You'll get two parallel repositories
[16:40] <wxl> they're all there now
[16:40] <rbasak> wxl: I see them now thanks
[16:40] <tsimonq2> rbasak: Is there a way to not have them diverge?
[16:41] <cjwatson> ddstreet: sure, there would be plenty of details
[16:41] <tsimonq2> rbasak: nacc said something along those lines yesterday.
[16:41] <rbasak> tsimonq2: the easiest way would be for you to drop the other repository and use the git-ubuntu imported repository for everything only.
[16:42] <rbasak> That repository would become your single source of truth, and you wouldn't push any commits except in that repository.
[16:42] <tsimonq2> rbasak: Is that r/w or just r?
[16:42] <rbasak> It's r/o.
[16:42] <rbasak> If you need a holding area for changes yet to be uploaded, you could put that in a team repository branched from ubuntu/devel in the importer repository.
[16:42] <rbasak> When you upload from the holding area, you could provide that to the importer as rich history.
[16:43] <tsimonq2> How would that work?
[16:43] <rbasak> Providing the importer with rich history is currently limited to ~usd-import-team, but the plan is to make it possible for any uploader to provide the rich history automatically.
[16:43] <tsimonq2> Because that seems like what I'm looking for.
[16:43] <tsimonq2> rbasak: Is that Canonical-only right now?
[16:43] <rbasak> Let's see if I can summarise the importer's operation.
[16:44] <nacc> tsimonq2: it's what i described yesterday (approved MPs, e.g)
[16:44] <rbasak> The importer repository is read-only. We consider this essential as it is supposed to represent Launchpad's publication history as the single source of truth, and allowing anyone to push directly would break that.
[16:44] <tsimonq2> Right.
[16:45] <rbasak> So only the importer pushes to the "official" repositories and only in response to Launchpad publications of uploads.
[16:45] <rbasak> However, if you make available rich history to the importer, it can choose to adopt that rich history as part of the "official record" of how it got to a commit that matches a Launchpad publication.
[16:46] <rbasak> It will only adopt the rich history if the final commit of the rich history matches the published version exactly.
[16:46] <nacc> (it might be helpful to point to a server team merge to see the result)
[16:46] <tsimonq2> rbasak: So where does it look for that rich history?
[16:47] <rbasak> Currently the rich history is provided by pushing a tag with the appropriate name to the official repository. Which is not ideal, because we want to keep the repository read-only for all other purposes, and Launchpad currently doesn't permit refspec-based ACLs.
[16:47] <rbasak> In the long term, I think we'll be wrapping dput and supplying the importer with information on how to find the rich history corresponding to the upload in the changes file.
[16:48] <rbasak> Then the rich history can be obtained by the importer from any Launchpad git repository branch such as the one from your MP.
[16:48] <cjwatson> rbasak: That's scheduled for this cycle, FYI
[16:48] <tsimonq2> Can I have rich history and tag it, but still manually dput?
[16:49] <rbasak> We have an intermediate plan for the importer to be able to grab rich history by looking for them amongst approved MPs.
[16:49] <tsimonq2> Approved but not merged MPs, right?
[16:49] <rbasak> Right now the process to ensure that rich history is adopted is for someone in ~usd-import-team to push the tag with the rich history (we call it the "upload tag") before dput.
[16:49] <rbasak> There is no wrapper currently.
[16:49] <rbasak> tsimonq2: yeah something like that.
[16:49] <cjwatson> (We haven't started it yet, but it's about halfway down my dept's "Infrastructure" roadmap list so it has a decent chance of finally getting done.)
[16:50] <rbasak> cjwatson: thanks. You understand that our plan no longer requires refspec-based ACLs though right?
[16:50] <rbasak> I mean it'd help right now, but long term we won't need them.
[16:50] <cjwatson> I lose track, but you mentioned it, that's all :)
[16:50] <tsimonq2> rbasak: Do you have an example of this in action? (Can you walk me through it?)
[16:50] <cjwatson> (You're not the only people who've wanted it at various points)
[16:51] <rbasak> tsimonq2: maybe follow https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/345498?
[16:51] <rbasak> Actually that's not ready for upload.
[16:51] <rbasak> I can of course show you an MP where it happened, but I'm not sure whether that'll be helpful as it'll be in the past and not in action.
[16:51] <cpaelzer> sorry, I don't have the time this evening to make it ready rbasak
[16:52] <tsimonq2> rbasak: OK.
[16:52]  * tsimonq2 looks.
[16:52] <rbasak> In that MP, cpaelzer is working on an upload for chrony.
[16:52] <rbasak> When it's ready, he'll have a branch inside ~paelzer with it ready.
[16:52] <rbasak> Someone in ~usd-import-team will run "git ubuntu tag --upload" on it, and then push the tag to the official repo.
[16:53] <rbasak> cpaelzer will then dput.
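As a sketch, the sequence rbasak just described (branch name, tag name shape, and changes filename are illustrative, not exact):

```shell
# Prepare the change on a branch based on the importer's tip
# ('pkg' is the remote pointing at the official imported repo).
git checkout -b chrony-upload pkg/ubuntu/devel
# ... commit the packaging changes ...
# Someone in ~usd-import-team creates and pushes the upload tag:
git ubuntu tag --upload
git push pkg upload/<version>      # tag name shape is illustrative
# The uploader then dputs as normal; the importer later matches the tag:
dput ubuntu ../chrony_<version>_source.changes
```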
[16:53] <tsimonq2> dput to Debian?
[16:53] <rbasak> When the importer sees the upload published by Launchpad, it will look for the upload tag, find it, verify that the tree matches the upload and adopt it into the formal record.
[16:53] <rbasak> dput to Ubuntu.
[16:53] <rbasak> "Merge into: ubuntu/+source/chrony:debian/sid" is a hack.
[16:53] <cpaelzer> but dput to Debian would work just the same
[16:53] <rbasak> Really it won't be merged by anything.
[16:54] <rbasak> The importer will create the commit based primarily on Launchpad's publication history.
[16:54] <tsimonq2> rbasak: What's the logic behind the hack?
[16:54] <rbasak> We use debian/sid so the preview diff looks sane.
[16:54] <tsimonq2> Ohh, it's a merge from Debian?
[16:54] <tsimonq2> That'd make sense.
[16:54] <rbasak> It's a hack because usually MPs are intended to be merged by something like "git checkout target-branch && git merge proposed-branch".
[16:54] <tsimonq2> Right.
[16:55] <rbasak> Whereas our importer repositories reflect Launchpad's publications as the single source of truth.
[16:55] <tsimonq2> Will this be the way it is long term?
[16:55] <rbasak> So instead of doing a merge directly, we round trip through a Launchpad publication via dput
[16:55] <rbasak> When the importer sees the upload, it creates the merge commit based on the publication and not the MP.
[16:55] <tsimonq2> Right.
[16:56] <rbasak> For as long as Launchpad's publication history forms the official record for Ubuntu uploads, this will be how it has to be.
[16:56] <rbasak> One day very far in the future and not on any roadmap, Ubuntu may wish to switch to git repositories as the single source of truth, with uploads secondary to that. If and only if that happens, only then will MPs get merged directly.
[16:57] <tsimonq2> OK.
[16:57] <tsimonq2> So does Launchpad then have the tag from Debian on the Ubuntu tree as part of the merge, or at least a reference that this is a "merge" commit?
[16:57] <tsimonq2> (Once merged.)
[16:57] <nacc> tsimonq2: which way do you mean merge?
[16:57] <nacc> tsimonq2: Git-merge or Ubuntu-merge ?
[16:58] <rbasak> I need to go soon.
[16:58] <rbasak> If nacc is around he can take over.
[16:58] <nacc> rbasak: I can try and pick up a bit here
[16:58] <rbasak> Thanks
[16:58] <rbasak> My main concern is that you don't end up with two diverging repositories as I don't think that'll be useful for you workflow-wise.
[16:58] <nacc> right
[16:58] <rbasak> As I said, I'm quite happy to add your stuff to the whitelist if that's what you decide you want in the end.
[16:59] <tsimonq2> I'd say, do it.
[16:59] <tsimonq2> It would be good to try it out and play with it.
[16:59] <nacc> tsimonq2: afaict, what you'd end up doing is having the ubuntu LP repo for your srcpkg as 'pkg' (the default with `git-ubuntu`) and then you'd have your personal (or team's, any LP user reference) as "<LP user name>". Your active development would be in the remote at <LP user name>
[17:00] <nacc> you can do whatever you want there, but you would eventually propose changes to the 'pkg' remote via an MP
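The remote layout nacc describes, sketched (the LP user, package name, and URL are placeholders):

```shell
# git-ubuntu sets up the official importer repo as the 'pkg' remote:
git ubuntu clone lubuntu-artwork
cd lubuntu-artwork
# A personal/team remote for active development (name/URL illustrative):
git remote add tsimonq2 \
    git+ssh://git.launchpad.net/~tsimonq2/ubuntu/+source/lubuntu-artwork
# Work happens on branches pushed there; changes land in 'pkg' via an MP.
```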
[17:00] <rbasak> tsimonq2: send us an MP against https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/345498 please
[17:00] <nacc> rbasak: presumably not that? :)
[17:00] <rbasak> Oh
[17:00] <rbasak> Yeah. I meant: https://git.launchpad.net/usd-importer/tree/gitubuntu/source-package-whitelist.txt
[17:00] <rbasak> We've got some CI brokenness going on at the moment.
[17:00] <tsimonq2> Will do later this evening.
[17:00] <tsimonq2> Thanks!
[17:01] <rbasak> I can manually use a newer whitelist, but I'd prefer to get it landed properly before activating it, unless you're in a real hurry.
[17:01] <rbasak> I'd estimate a week or so to do it properly.
[17:01] <tsimonq2> Nah, let's do it properly.
[17:01] <rbasak> But if you really want it sooner I can hack our importer instance up a bit.
[17:01] <rbasak> OK, thanks.
[17:02]  * rbasak EODs
[17:02] <tsimonq2> o/
[17:02] <nacc> powersj: is there a reason I can't get to https://jenkins.ubuntu.com/server/job/git-ubuntu-ci/427/?
[17:02] <tsimonq2> wxl: So we can stay on the same page... ^
[17:02] <nacc> referred to from https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/345670
[17:02] <rbasak> It's broken for me too
[17:02] <nacc> weird
[17:03] <nacc> rbasak: ok
[17:03] <powersj> nacc: we have had some jenkins issues and the latest update blew away our pipeline jobs :\
[17:03] <nacc> powersj: urgh
[17:03] <powersj> yeah...
[17:03] <wxl> @tsimonq2: you referring me to your conversation with rbasak? if so, sounds good. let's see it when it's done. :)
[17:03] <udevbot> Error: "tsimonq2:" is not a valid command.
[17:03] <nacc> heh
[17:03] <tsimonq2> wxl: Yep; OK. :)
[17:04]  * tsimonq2 kicks the differently named udevbot.
[17:04] <nacc> wxl: tsimonq2: if you have any other questions, though, i can answer them in the meanwhile
[17:05] <wxl> i'm mainly going to let tsimonq2 run point on this and otherwise keep my hands out of the pot, but i'll find ya'll if things go south XD
[17:05] <nacc> heh
[17:21] <externalreality> I need more space for my VMs, so I scored a 970 Evo with amazon gift cards that I've been building up from birthday gifts and such
[17:54] <nacc> externalreality: wrong channel?
[17:59] <externalreality> nacc, ha, thx
[17:59] <nacc> externalreality: np :)
[18:35] <jbicha> bdmurray: could you review gnome-initial-setup for promotion to bionic? we're going to fix the failed (missing) bugfix in our next upload
[18:43] <bdmurray> jbicha: looking
[18:45] <bdmurray> jbicha: Could you explain what went wrong? How is it still the old ones?
[18:46] <jbicha> I must have badly merged the branches. Our packaging workflow wasn't very good as we're just starting to switch our packaging to git
[18:47] <jbicha> one complication was that it wasn't obvious to the person doing the previous uploads that we could actually do binary git patches to include the .png we needed
[18:49] <bdmurray> jbicha: okay
[20:43] <dmj_s76> ricotz: What's the difference between the ways nvidia's being packaged now?
[21:17] <tdaitx> hi, I need someone from desktop to take a look at LP: #1765914; it seems dconf is reporting a wrong scale-factor, but I have no idea of the cause