[00:05] <lifeless> blr: you're welcome
[11:02] <wgrant> cjwatson: Do we actually need the depopulation job, or will update-pkgcache do that automatically if we explicitly set it to None there?
[11:07] <cjwatson> wgrant: I initially thought we needed to set fti to None as well in order to get ftiupdate to do anything, but looking at it again I think I was wrong.
[11:09] <wgrant> cjwatson: The triggers? They should fire on any column change.
[11:09] <wgrant> Any change to a relevant column, that is.
[11:09] <cjwatson> Yeah, I just misread the guard at the top of ftiupdate
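The guard being discussed can be sketched like this: an illustrative plain-Python version, not Launchpad's actual PL/Python `ftiupdate` function, with made-up column names, showing a trigger-style check that only recomputes the full-text index when a watched column actually changed.

```python
# Illustrative sketch of a trigger-style guard (not the real ftiupdate):
# skip the expensive full-text reindex when no watched column changed.
# The column names here are hypothetical.
WATCHED_COLUMNS = ("name", "summary", "description")

def needs_fti_update(old_row, new_row):
    """Return True if any watched column differs between OLD and NEW,
    including a column being set to or from None."""
    return any(old_row.get(col) != new_row.get(col)
               for col in WATCHED_COLUMNS)
```

Under this model, explicitly setting a watched column to None still counts as a change, so the trigger would fire without anyone needing to touch the fti column directly.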
[11:09] <cjwatson> Apparently yesterday wasn't one of my cleverer days
[11:10] <wgrant> Heh, I outright rejected one of your branches for the first time, so indeed.
[11:11] <wgrant> Ah, which I see you've fixed. Excellent, thanks.
[11:14] <cjwatson> Yep, although I think it needs further work as per my comment.
[11:15] <wgrant> What's your concern?
[11:15] <wgrant> Unless someone has requested a great many builds recently, or snuck in thousands for an architecture that will never build, there should not be many living BuildQueues.
[11:15] <wgrant> BuildQueues exist only for pending or building builds, that is.
[11:17] <cjwatson> Ah, good.  That was my initial assumption when I wrote that code and then I realised I didn't remember if BQ was left around afterward.
[11:19] <cjwatson> fixing the excessive join
[11:19] <wgrant> Great.
[11:19] <wgrant> BFJ is forever, but BQ lives up to its name.
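The lifecycle wgrant describes can be modelled like this (an illustrative sketch, not Launchpad's real build code; the status names are simplified): a BuildFarmJob record is permanent, but a BuildQueue row lives only while its build is waiting or in progress.

```python
# Illustrative model of the BQ/BFJ lifecycle (not Launchpad's real
# schema): BuildFarmJob is forever, BuildQueue is transient.
LIVE_STATUSES = frozenset({"NEEDSBUILD", "BUILDING"})

def build_queue_should_exist(build_status):
    """A BuildQueue row is only kept for pending or building builds."""
    return build_status in LIVE_STATUSES

def live_build_queues(builds):
    """Return the builds whose BuildQueue rows would still be alive."""
    return [b for b in builds if build_queue_should_exist(b["status"])]
```

This is why joining through BuildQueue is cheap: barring a flood of recent requests, the table stays small.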
[11:20] <cjwatson> What's the difference between the two feature flags you propose in git-ref-commits?
[11:21] <wgrant> One prevents memcached from being DoSed or causing problems, while the other prevents turnip from being DoSed.
[11:21] <wgrant> Useful for working out which is screwed, if nothing else.
[11:21] <wgrant> Our memcached cluster has not been tested with significant load in some time.
[11:22] <wgrant> But we at least need the latter one.
[11:22] <cjwatson> Oh I see, right.
[11:23] <cjwatson> Probably need to at least synthesise the most recent commit out of the information we have in the latter case.
[11:23] <wgrant> If it's not going to be on by default, maybe, yeah.
[11:23] <wgrant> Avoid regressing behaviour from current, plus it's a handful of lines of TAL, right?
[11:23] <cjwatson> Doesn't even need to be in TAL, I was thinking of having getCommits do it
[11:24] <cjwatson> That's entirely dynamic, the results of that never wind up in the DB
[11:24] <cjwatson> So getCommits could reasonably say "shrug, this is all I've got"
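The degraded path described above might look roughly like this (a hypothetical sketch, not Launchpad's real `getCommits` signature): when fetching from turnip is disabled by flag, synthesise a minimal entry for the tip commit from the SHA-1 already held locally, rather than returning nothing.

```python
# Hypothetical sketch of a getCommits-style fallback (not the real
# Launchpad API): degrade gracefully when turnip access is flag-disabled.
def get_commits(ref, fetch_from_turnip=None):
    """Return commit dicts for a ref.

    ref is a dict with at least 'commit_sha1'; fetch_from_turnip is a
    callable returning full commit details, or None when the flag is off.
    """
    if fetch_from_turnip is not None:
        return fetch_from_turnip(ref)
    # "Shrug, this is all I've got": just the tip SHA-1, no message or
    # author, so the page still shows something for the latest commit.
    return [{"sha1": ref["commit_sha1"]}]
```

Since the results are entirely dynamic and never stored in the DB, the synthesised entry can safely vary with the flag.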
[11:24] <wgrant> Ah, of course.
[11:26] <wgrant> I would also consider potential for mischief through very large commit messages. But with the feature flags in place that is easy to manually mitigate once discovered.
[11:26] <cjwatson> Disabling the "use memcache" flag would of course significantly increase the load on turnip.
[11:26] <wgrant> It would.
[11:31] <cjwatson> Probably ought to see if we can do something about turnip scaling soon, at least splitting some of the layers.  There's the cgit bodge, but it occurred to me that perhaps we could proxy it to the pack backend.
[11:31] <wgrant> Yeahhh, I'd been considering that.
[11:32] <wgrant> That makes it a slightly more complicated, marginally less bodgy bodge that doesn't inhibit scaling.
[13:41] <cjwatson> wgrant: https://code.launchpad.net/~cjwatson/launchpad/package-cache-drop-changelog/+merge/294894 updated
[13:42] <wgrant> cjwatson: Much simpler, thanks.
[14:01] <cjwatson> Hm, re getLog, I think I also need to ensure a fallback for the frozen GitRef case.
[14:01] <cjwatson> Oh, no, the commits will still be in the repository, so that bit is fine.
[14:01] <wgrant> Exactly.
[14:03] <cjwatson> wgrant: Also, I was thinking last night of adding a ~registry-viewable memcache stats page to the web UI.
[14:03] <cjwatson> Unless scripts/memcache-stats output is already available somewhere we can see.
[14:04] <wgrant> I've never seen it.
[14:08] <cjwatson> Though graphs would be more useful.
[14:08] <cjwatson> https://lpstats.canonical.com/graphs/ allegedly has various memcached things but they're all empty.
[14:09] <cjwatson> I have no idea where to look for what's supposed to generate that; no matches in tuolumne
[14:09] <wgrant> They'd be in tuolumne-lp-configs, but nothing there either.
[14:09] <cjwatson> Yeah that's what I meant
[14:10] <cjwatson> Presumably they'd have to come off the appservers
[14:12] <cjwatson> There's a bit of Nagios monitoring but it's very basic
[14:27]  * cjwatson tries to understand why update-pkgcache even exists at all
[14:27] <cjwatson> I don't quite see why we couldn't just fill in cache rows when we touch xPPHs
[14:28] <cjwatson> We can't fill in all the columns of DSPC until we have binaries, but we could fill that in when we get BPPHs
[14:29] <cjwatson> Er, that's DistributionSourcePackageCache not DistroSeriesPackageCache
[14:32] <wgrant> Calculating the entire everything on every BPPH creation would be a bit odd. But it could probably be optimised to be reasonable.
[14:32] <wgrant> Just need to watch out for bloat etc.
[14:33] <cjwatson> We still need to do counters and such in update-pkgcache
[14:33] <cjwatson> It's a bit odd because you have to look at all BPRs for a source, yes; but it would probably do a hell of a lot less work in total
[14:33] <wgrant> Less work than current update-pkgcache, indeed.
[14:34] <cjwatson> Seems to make more sense to only update caches when things change
[14:34] <wgrant> Less work than an update-pkgcache that was written with a mind, probably not.
[14:35] <cjwatson> Could be when we process a build rather than on each individual BPPH, though still watching out for copies and removals
[14:36] <wgrant> Right, the only sensible place for it is in publishBinaries
[14:36] <wgrant> If we were going to do it inline.
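The inline alternative being discussed might be sketched like this (a hypothetical illustration, not Launchpad's real `publishBinaries`; the cache here is just a dict standing in for DistributionSourcePackageCache rows): refresh the touched source's cache entry at publication time, instead of rescanning every publication in a periodic update-pkgcache run.

```python
# Hypothetical sketch of inline cache maintenance (not Launchpad's real
# publishBinaries): update one source's cache row as its binaries are
# published, so cost scales with what changed, not with the archive.
def publish_binaries(cache, source_name, binary_names):
    """Publish one build's binaries and refresh that source's cache."""
    row = cache.setdefault(source_name, {"binaries": set()})
    row["binaries"].update(binary_names)
    return row
```

Copies and removals would still need their own cache updates, and counters would stay in update-pkgcache, as noted above.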
[14:38] <cjwatson> How else had you been thinking of improving update-pkgcache (aside from obvious query optimisations and such)?  Its current overall design is all about getting all the published sources and binaries and materialising them, so that would be a similar order of work to the queries for NMAF publication of all archives, which doesn't sound like the ten minutes you suggested in Asana