=== pieq_ is now known as pieq
[04:42] -queuebot:#ubuntu-release- New: accepted python-xstatic-dagre [amd64] (impish-proposed) [0.6.4.0-1]
[04:42] -queuebot:#ubuntu-release- New: accepted ssshtest [amd64] (impish-proposed) [0.0+git20190416.6f5438a-2]
[04:42] -queuebot:#ubuntu-release- New: accepted ssshtest [amd64] (impish-proposed) [0.0+git20190416.6f5438a-1]
[08:28] tjaalton, vorlon: I'm going to try to look at the linux-firmware-raspi2 and openssl SRUs this morning to try and finish what I started in my shift on Wednesday.
[08:43] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [source] (impish-proposed) [465.27-0ubuntu1]
[08:49] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [amd64] (impish-proposed/restricted) [465.27-0ubuntu1] (i386-whitelist)
[08:55] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [i386] (impish-proposed/restricted) [465.27-0ubuntu1] (i386-whitelist)
[08:59] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [amd64] (impish-proposed) [465.27-0ubuntu1]
[08:59] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [i386] (impish-proposed) [465.27-0ubuntu1]
[09:02] -queuebot:#ubuntu-release- Unapproved: accepted linux-firmware-raspi2 [source] (groovy-proposed) [4-0ubuntu0~20.10.1]
[09:03] -queuebot:#ubuntu-release- Unapproved: accepted linux-firmware-raspi2 [source] (focal-proposed) [4-0ubuntu0~20.04.1]
[09:08] sil2100, hey, can you check why this change doesn't seem to apply to groovy and focal, please? https://code.launchpad.net/~albertomilone/ubuntu-archive-tools/nvidia-1000-whitelist/+merge/391356
[09:08] juliank -- I've just run across something odd in browse.cgi. Have a look at https://git.launchpad.net/autopkgtest-cloud/tree/charms/focal/autopkgtest-web/webcontrol/browse.cgi#n185 -- the "continue" within that loop is, I *think*, intended for the outer loop, but of course it hits the enclosing (inner) loop. That makes the inner loop entirely redundant (I've verified this by excising the whole thing and checking the output matches the original, which it does)
[09:09] waveform: true
[09:09] waveform: Maybe the lines after need to be indented further and a break added after found_valid = True
[09:10] it's hard to figure out the origin since git log does not follow the rename
[09:10] juliank, yup - I've got a fix, but I'm hoping to optimize the whole thing down to a single (fast) query. Now -- the interesting question: do I leave the (potentially faulty but apparently acceptable) logic alone, which results in a *very* fast query ... or do I fix the logic, which results in a *much* more complicated and rather slower query (although still faster than the original)?
[09:11] (it's very fast because it basically ignores the triggers column entirely)
[09:12] (which is pretty obviously "wrong" but apparently no-one's complained about it being wrong so far so ... maybe that's okay?)
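
A minimal sketch of the loop-structure issue being described; the names (rows, current_versions, emit) are hypothetical stand-ins for illustration, not the actual browse.cgi code:

    # Hypothetical reconstruction of the pattern under discussion: the
    # `continue` only advances the inner loop over triggers, so the inner
    # loop filters nothing and every row is emitted regardless.
    for row in rows:
        for trigger in row["triggers"].split():
            if trigger not in current_versions:
                continue      # intended to skip `row`, actually skips `trigger`
        emit(row)             # reached for every row either way

    # One possible fix along the lines suggested above (flag plus break):
    for row in rows:
        found_valid = False
        for trigger in row["triggers"].split():
            if trigger in current_versions:
                found_valid = True
                break
        if found_valid:
            emit(row)
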
[09:12] waveform: that was added in commit 1305721d4d8d604b4b531c4b9727050fd351ea8c based on vorlon's concerns but oh well
[09:13] waveform: I think the right answer to make it work and sensibly fast would be to add a triggers (test_id, package, version) table and then join it with current_version and match on package name and version
[09:14] Well, maybe the current version table should be a version table that contains all versions and then test should link to that
[09:14] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [source] (hirsute-proposed) [465.27-0ubuntu0.21.04.1]
[09:14] But schema optimization is hard
[09:15] juliank, that would indeed be my preference (the triggers(test_id, run_id, package, version) table) but I'm also a bit concerned about removing a column that people may be relying upon. I can split that column easily enough with a bit of recursive SQL (although that'll probably look a bit frightening to any future maintainers) but obviously that's not terribly fast as it's unindexable
[09:16] and yes, a versions table would be the ideal but frankly that's an even bigger schema change and I'm wary of changing the schema in ways that'll break existing users of the downloadable db (who may be external and be running queries that we can't predict)
[09:16] yup
[09:16] waveform: We can always add a triggers table and column and keep the existing one, but ugh space
[09:17] yes, that was the other option I considered, but as you note -- it's a pretty big portion of the overall db's size
[09:17] waveform: So I'd try the recursive SQL thing I guess
[09:17] yup -- I'll see what I can do -- and I'll try and leave some comments for the poor beggar who next deals with the query :)
[09:17] waveform: Or renaming result and making result a view that builds the triggers column dynamically might work? I don't remember much SQL
[09:18] oh! now I feel like an idiot ... why didn't I think of that ...
[09:18] yes, SQLite's got group_concat (thoroughly non-standard but perfect for this use)
[09:18] okay, will hack on that
[09:23] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [source] (groovy-proposed) [465.27-0ubuntu0.20.10.1]
[09:25] Laney: What's left on the arm64 worker side? We're not making any progress at all, it seems
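
A rough illustration of the rename-plus-view idea discussed above, using SQLite's group_concat to rebuild the triggers column on the fly. The table and column names are assumptions for the sketch, not the real autopkgtest.db schema:

    import sqlite3

    db = sqlite3.connect("autopkgtest.db")  # illustrative path
    db.executescript("""
        ALTER TABLE result RENAME TO result_raw;

        -- one row per (test run, trigger) instead of a space-separated column
        CREATE TABLE triggers (
            test_id INTEGER,
            run_id  TEXT,
            package TEXT,
            version TEXT
        );

        -- existing consumers keep querying "result" and still see a triggers
        -- column, rebuilt dynamically with the (non-standard) group_concat
        CREATE VIEW result AS
            SELECT r.*,
                   (SELECT group_concat(t.package || '/' || t.version, ' ')
                      FROM triggers AS t
                     WHERE t.test_id = r.test_id AND t.run_id = r.run_id) AS triggers
              FROM result_raw AS r;
    """)
    db.commit()
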
[09:26] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [i386] (hirsute-proposed/multiverse) [465.27-0ubuntu0.21.04.1] (i386-whitelist)
[09:27] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [amd64] (hirsute-proposed/multiverse) [465.27-0ubuntu0.21.04.1] (i386-whitelist)
[09:28] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [source] (focal-proposed) [465.27-0ubuntu0.20.04.1]
[09:32] juliank: I don't agree with no progress at all
[09:32] Laney: I meant in terms of queue length :D
[09:32] still disagree
[09:32] I think progress is being made
[09:32] but I am going to add some more workers to bos01, now that there are new servers there
[09:32] Two days ago we were at 7.9K tests, now we're at 8.33
[09:33] yesterday we were at 8.45K
[09:33] so we're 1.4% lower than yesterday, ok
[09:33] at 05:00 we were at 8.70K, now we are at 8.34K
[09:33] sooooo
[09:34] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [amd64] (groovy-proposed/multiverse) [465.27-0ubuntu0.20.10.1] (no packageset)
[09:34] Laney: Maybe we need to set scale to logarithmic on graph
[09:35] well no, that does opposite
[09:36] it would show useful graph for other archs again
[09:36] s/useful/visible/
[09:37] maybe it's just running super long tests atm
[09:39] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [amd64] (focal-proposed/none) [465.27-0ubuntu0.20.04.1] (no packageset)
[09:39] there's quite a few systemd-upstream / libreoffice
[09:40] anyway adding some more workers, we can watch how they get on
[09:43] -queuebot:#ubuntu-release- Unapproved: irssi (focal-updates/main) [1.2.2-1ubuntu1.1 => 1.2.2-1ubuntu1.1] (core) (sync)
[09:45] -queuebot:#ubuntu-release- Unapproved: accepted irssi [sync] (focal-updates) [1.2.2-1ubuntu1.1]
[09:45] ^ that's seb128 fixing a lost ddeb publication - should be no functional change otherwise
[09:48] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [amd64] (hirsute-proposed) [465.27-0ubuntu0.21.04.1]
[09:48] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [i386] (hirsute-proposed) [465.27-0ubuntu0.21.04.1]
[09:48] juliank: ok, now 12 -> 24 arm64 in bos01
[09:48] Laney: niiiiiiiiiiiice
[09:48] and the charm added them all properly, weeeeeeeeeeeeee
[09:48] time to attack!
[09:49] racing down the queue
[09:49] that or the servers fall over or the cloud hates us or we hit limits on the cloud-worker
[09:49] :D
[09:49] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [source] (bionic-proposed) [465.27-0ubuntu0.18.04.1]
[09:59] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [i386] (bionic-proposed/none) [465.27-0ubuntu0.18.04.1] (no packageset)
[10:05] -queuebot:#ubuntu-release- New binary: nvidia-graphics-drivers-465 [amd64] (bionic-proposed/none) [465.27-0ubuntu0.18.04.1] (no packageset)
[10:23] -queuebot:#ubuntu-release- Unapproved: accepted openssl [source] (hirsute-proposed) [1.1.1j-1ubuntu3.1]
[10:23] -queuebot:#ubuntu-release- Unapproved: accepted openssl [source] (groovy-proposed) [1.1.1f-1ubuntu4.4]
[10:24] -queuebot:#ubuntu-release- Unapproved: accepted openssl [source] (focal-proposed) [1.1.1f-1ubuntu2.4]
[10:38] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [amd64] (focal-proposed) [465.27-0ubuntu0.20.04.1]
[10:38] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [amd64] (groovy-proposed) [465.27-0ubuntu0.20.10.1]
[10:48] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [amd64] (bionic-proposed) [465.27-0ubuntu0.18.04.1]
[10:48] -queuebot:#ubuntu-release- New: accepted nvidia-graphics-drivers-465 [i386] (bionic-proposed) [465.27-0ubuntu0.18.04.1]
[10:50] Laney: it looks like xenial results aren't showing anymore on the autopkgtest servers...can we please put them back so we can compare our xenial updates?
[10:50] Laney: for example: https://autopkgtest.ubuntu.com/packages/apport
[10:56] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-460 [source] (groovy-proposed) [460.73.01-0ubuntu0.20.10.2]
[10:56] mdeslaur: it's a bit hard, where do you want to do the cutoff?
[10:56] mdeslaur: currently we list all supported ubuntu releases
[10:57] we could add ESM I guess
[10:57] well, we need to support xenial for a few years
[10:57] OK
[10:57] We can do that
[10:57] Add ESM
[10:57] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-460 [source] (hirsute-proposed) [460.73.01-0ubuntu1.21.04.1]
[10:57] thanks Laney juliank
[10:59] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-460 [source] (focal-proposed) [460.73.01-0ubuntu0.20.04.2]
[11:01] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-460 [source] (bionic-proposed) [460.73.01-0ubuntu0.18.04.2]
[11:03] juliank: https://code.launchpad.net/~laney/autopkgtest-cloud/+git/autopkgtest-cloud-1/+merge/402399 I think
[11:03] Laney: no
[11:04] Laney: They are lists and both contain bionic/focal
[11:04] Laney: I just pushed the same change, with a bug, to staging :/
[11:05] Laney: So we need to build that list, then filter ALL_UBUNTU_RELEASES by it, such that we get a list of supported+esm without duplicates and in correct order
[11:05] SUPPORTED_UBUNTU_RELEASES_SET = set(UDI.supported( ) + UDI.supported_ESM())
[11:05] wait
[11:05] well, we shouldn't duplicate work -- who is doing it?
[11:05] I was, since I got asked
[11:05] SUPPORTED_UBUNTU_RELEASES = [r for r in ALL_UBUNTU_RELEASES if r in SUPPORTED_UBUNTU_RELEASES_SET] :)
[11:05] but if you want to...
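
For reference, the two snippets pasted above combine into something like the following. UbuntuDistroInfo comes from python3-distro-info; the exact spelling of the ESM helper in the deployed version is an assumption here:

    from distro_info import UbuntuDistroInfo

    UDI = UbuntuDistroInfo()
    ALL_UBUNTU_RELEASES = UDI.all          # all codenames, oldest first

    # union of regular-support and ESM releases; the set drops the duplicates
    # (bionic/focal appear in both lists)
    SUPPORTED_UBUNTU_RELEASES_SET = set(UDI.supported() + UDI.supported_esm())

    # filter the ordered master list rather than concatenating the two lists,
    # so the result keeps release order and contains no duplicates
    SUPPORTED_UBUNTU_RELEASES = [
        r for r in ALL_UBUNTU_RELEASES if r in SUPPORTED_UBUNTU_RELEASES_SET
    ]
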
[11:06] I can do it, but I can also go get groceries
[11:07] freaky fridays
[11:08] k, let me, one second
[11:08] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-450-server [source] (hirsute-proposed) [450.119.04-0ubuntu0.21.04.1]
[11:09] Laney: Hmm, maybe it would actually be more useful to just use supported() with a date a month earlier
[11:10] Laney: Adding ESM adds trusty too, and we don't release stuff to that, but when stuff becomes ESM we might spend a month fixing it up
[11:10] Or 3 months
[11:11] It's about the security team being able to see results, I think it's probably fine to just show it for the whole ESM period
[11:11] months is not good enough
[11:11] ah so a backlog view for them
[11:12] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-450-server [source] (groovy-proposed) [450.119.04-0ubuntu0.20.10.1]
[11:12] that makes sense, I guess
[11:12] something like that, ideally they would probably download the db and use that somehow in their process so it's not a manual cross check
[11:12] but work eh
[11:12] https://code.launchpad.net/~laney/autopkgtest-cloud/+git/autopkgtest-cloud-1/+merge/402399 that one is better
[11:12] oops
[11:12] https://code.launchpad.net/~laney/autopkgtest-cloud/+git/autopkgtest-cloud-1/+merge/402401
[11:12] -queuebot:#ubuntu-release- Packageset: Added nvidia-graphics-drivers-465 to i386-whitelist in focal
[11:12] -queuebot:#ubuntu-release- Packageset: Added nvidia-graphics-drivers-465 to i386-whitelist in groovy
[11:14] Laney: optimally, they'd initialize their esm instance with the db from the archive one I'd say
[11:14] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-450-server [source] (focal-proposed) [450.119.04-0ubuntu0.20.04.1]
[11:16] Laney: lgtm, push it to edge/staging and release to stable/prod if it works there
[11:16] merci
[11:16] can't properly test on staging though, no xenial there
[11:17] -queuebot:#ubuntu-release- Unapproved: accepted nvidia-graphics-drivers-450-server [source] (bionic-proposed) [450.119.04-0ubuntu0.18.04.1]
[11:17] * apw notes that he added those nvidia-graphics-drivers-465 entries to the i386-whitelist as that is already covered by the existing rules.
[11:17] Laney: hmm, just see if it works though :
[11:18] I made a typo in mine so I know what I'm talking about :D
[11:18] heh
[11:18] right, I will do in a minute, will review/merge wavefor_m stuff at the same time
[11:18] +1
[11:18] apw: you mean it's covered by the script so it won't get reverted next time, right?
[11:19] Laney, right, it won't get reverted as we attempt to add every nvidia package for the next 1000 versions
[11:19] heh
[11:19] ok, just checking, we got bitten by manual packageset editing in the past
[11:19] Laney: not sure if you followed our discussion this morning about renaming the results table and making it a view so we can split up triggers into its own table so that we can query on it fast inside sql, but it's fancy
[11:19] Laney, i did not run the script as it is going to make more wide ranging changes and i think i want someone else to review those
[11:19] more fancy changes coming I think :)
[11:20] making autopkgtest-cloud fast
[11:20] I still want to rewrite -web in Go, build a single binary, use FastCGI, and invoke it via systemd socket activation. Much nicer experience :D
[11:21] riiiiiiiiiight
[11:21] (OK OK, just fastcgi on the python would be nice I guess such that you can put it into a systemd service and apply resource limits or isolation)
[11:21] Laney: That's how I build my web services
[11:21] I was looking at gunicorn the other day
[11:22] (oh that so should be utter sarcasm)
[11:22] Just use builtin Go web server to serve autopkgtest-web, haproxy in front makes things happy enough
[11:23] Laney: I don't like processes, I'm more of an event loop webserver person myself
[11:24] But yes, there's little reason to have Apache invoke Python as CGI
[11:26] CGI is just evil
[11:27] WSGI vs FastCGI or just doing HTTP in Python itself, meh
[11:29] (CGI for me does not work because my services are tightly AppArmor confined, as is my http server)
[11:30] autopkgtest-web arguably could use some confinement
[11:37] wouldn't be a bad idea
[11:42] mdeslaur: is there now
[11:43] awesome, thanks Laney!
[11:44] hum, https://launchpad.net/~ubuntu-cdimage/+livefs/ubuntu/impish/ubuntu-canary is failing on
[11:44] ' libatomic1 : Depends: gcc-11-base (= 11-20210417-1ubuntu1) but 11.1.0-1ubuntu1 is to be installed'
[11:45] I wonder why, it shouldn't be much different from ubuntu proper ... I wonder if that's transient? or gcc-11 just migrated and it's going to impact ubuntu tomorrow?
[11:53] seb128: I bet you got unlucky with the publisher and it's a one off, let's try a retry
[12:04] Laney, thanks
[12:19] hi tjaalton, if you get a chance in your sru rota today could you take a look at software-properties in the focal unapproved queue?
[12:38] seb128: it worked
[12:38] Laney, thanks!
[12:52] Laney: publish-db is broken now it seems, running for 19 minutes
[12:52] Laney: on web/0
[12:53] 5 minutes on web/1
[12:54] Laney: but when did you push it? publish-db on web/0 started 12:33
[12:56] oh, 11:42 I guess
[12:57] waveform: ^ regression in publish-db
[12:58] I'm restarting it
[12:58] juliank, got a stack trace or anything like that?
[12:58] waveform: nah, it hangs
[12:58] that's ... weird
[12:59] waveform: I can attach pdb
[12:59] yeah, would be interested to know where it's hanging
[12:59] (presumably a lock-wait on something db related)
[13:00] waveform: just need help using pdb
[13:01] gdb I can ctrl+c and it stops executing and I can bt
[13:01] haven't played with pdb yet
[13:01] going to revert the charm
[13:02] pdb's not so useful for live-debugging - generally I'd stick a few breakpoints at the start of things like init_db and get_sources to see which one(s) are hanging and then step from there
[13:03] Laney: but not right now, I'm in the middle of sth
[13:03] Laney: need to clean up again :/
[13:03] yes now
[13:03] stop messing on the prod servers, download to your machine or something
[13:03] or at least make copies
[13:03] Laney: it's not possible
[13:04] Laney: can't download the file as I need to get the same state, and I did not modify stuff, just stopped publish-db to run it manually in pdb
[13:05] Laney: publish-db.service needs a stop or restart after the revert
[13:06] I'm not sure how to get the database for debugging this
[13:07] you can use juju scp to extract the rw one
[13:08] but can I get it on my laptop?
[13:08] yes, then scp off of wendigo
[13:09] juliank: or it would be ok (but less ideal imho) to copy it to a temporary dir, make a copy of publish-db and hack there
[13:10] Laney: if it's a locking issue with writers or sth it might only reproduce on production, though
[13:12] it could be, and then you can test that by taking a copy and hacking the output db
[13:12] a copy of publish-db*
[13:13] sil2100: somehow riscv64 builds did not get built in the bileto ppa, nor have they been dispatched in focal either. as if riscv64 is no longer there in focal. I am confused.
[13:13] sil2100: for the dkms rebuilds SRUs for focal security.
[13:14] Laney: we probably should throw away the old data in public/ and start fresh, I think it added more data, the runtime got exponentially longer
[13:14] So web/1 has been running for 4 minutes now
[13:14] Ah, now it finished
[13:15] 4:30 runtime
[13:16] too many writers in parallel it seems
[13:16] the initial copy is what is taking so long
[13:18] Laney: Oh, why is download-all-results running?
[13:18] that explains the slowness
[13:19] heh
[13:19] juliank: I guess it got pulled in by autopkgtest-web.target when the charm upgraded
[13:19] seems undesirable
[13:19] can I stop it?
[13:19] should be safe, right?
[13:20] don't see why not
[13:21] ack, stopped it
[13:24] xnox: huh
[13:25] xnox: there was a bug where they didn't get created for linux-any packages, a self copy should sort it out
[13:26] fixed on 05/05 so if only the upload/accept was before then...
[13:27] But does that mean LP tried building the riscv64 binaries with -proposed enabled in focal-proposed now?
[13:27] xnox: ^
[13:30] Laney, waveform: so the publish-db job on web/0 has been running for 10 minutes now, which seems like it hung again, despite having the changes reverted, so maybe the problem is actually elsewhere
[13:30] It's doing pread() from the r/w and r/o database, and then pwrite to r/o
[13:30] since forever now
[13:30] Probably should have increased the page size from 4k
[13:32] we um need to add logging to publish-db
[13:32] so presumably that's the backup call
[13:32] in other words, backup's safe-but-slooooow
[13:33] waveform: Yeah, it seems to be that I guess, we only had those two dbs open
[13:33] waveform: Not sure if we closed the old r/o one or not
[13:33] waveform: Optimally, sqlite3 would just dump us a .db copy whenever it does a wal checkpoint :D
[13:33] Instead of us going around copying stuff
[13:36] -queuebot:#ubuntu-release- Unapproved: netplan.io (groovy-proposed/main) [0.102-0ubuntu1~20.10.2 => 0.102-0ubuntu1~20.10.3] (core)
[13:37] -queuebot:#ubuntu-release- Unapproved: rejected libdrm [source] (hirsute-proposed) [2.4.105-1ubuntu0.1]
[13:40] waveform: I feel like it would have helped to go to 4MB large pages, but really sqlite backup needs some work
[13:40] well, backup's meant for ... erm ... backup, where speed doesn't *really* matter -- we're kind of abusing it as a publication mechanism
[13:43] still, 4K (being the base block size of most disks these days) is pretty tiny for a default page size (DB2 was recommending 32K for page sizes back in the late 90s!)
[13:43] Laney: i see, thanks.
[13:43] coreycb: ok, this is not needed in groovy?
[13:43] sil2100: i've done a no-change rebuild for that one package now. will copy it once it finishes publishing.
[13:44] tjaalton: correct, it's only needed in focal. thanks for looking!
[13:46] xnox: ok
[13:46] xnox: all the others are fine as they are, right?
[13:46] -queuebot:#ubuntu-release- Unapproved: accepted software-properties [source] (focal-proposed) [0.98.9.5]
[13:47] waveform: Should have used btrfs on the web workers, then could have kept db in subvolume, snapshotted subvolume, and then published the snapshot :D
[13:48] waveform: would have been instant
[13:50] that has historically been a genuinely documented method of backing up databases (from sqlite to postgres to db2, albeit with lvm snapshots instead, but same difference)
[13:50] still, with -wal logging in sqlite that would get a bit messy (more than one file to distribute, unless you could guarantee the -wal was redundant at that point)
[13:51] waveform: well, you'd just run pragma journal_mode = delete before publishing the snapshot (given r/w snapshot)
[13:51] istr that only works if you can guarantee your process is the only connection to the database at that time
[13:52] (which is probably still do-able)
[13:52] waveform: yeah you create snapshot.new, run sqlite3 on snapshot.new/autopkgtest.db to do the pragma, then swap snapshot and snapshot.new
[13:53] waveform: So I read the backup API restarts if the database content changes rather than finishing up whatever state it was in
[13:53] waveform: I'm wondering if doing a duplication using insert into ... select * from ... would be faster
[13:53] restarts? Oh, I hadn't read that -- had assumed it was "fixing up" changed pages after the initial run
[13:54] it ... could well be in that case
[13:54] Duplication with attaching two dbs and insert into select * from would give it a snapshot without the need to recover
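
A sketch of what the attach-and-copy duplication mentioned above could look like. The table names and file paths are made up for illustration (CREATE TABLE ... AS SELECT is used as a shortcut for creating the target table plus INSERT INTO ... SELECT), and whether this actually beats the backup API was left untested in the discussion:

    import sqlite3

    # autocommit mode so the transaction below is managed explicitly
    src = sqlite3.connect("autopkgtest.db", isolation_level=None)
    src.execute("ATTACH DATABASE 'snapshot.db' AS snap")

    # one explicit transaction so every table is copied from a single
    # consistent view of the source database
    src.execute("BEGIN")
    for table in ("test", "result", "current_version"):   # assumed names
        src.execute(f"CREATE TABLE snap.{table} AS SELECT * FROM main.{table}")
    src.execute("COMMIT")

    src.execute("DETACH DATABASE snap")
    src.close()
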
[13:54] * juliank writes a script, but has no writer load to test against
[13:57] indeed
[14:10] sil2100: yeah, i think they use "any" rather than "linux-any", or like ftbfs on riscv64 anyway, or like don't build for riscv64, or built fine. seems like just the one package affected.
[14:12] -queuebot:#ubuntu-release- Unapproved: iptables-netflow (focal-proposed/universe) [2.4-2ubuntu0.3 => 2.4-2ubuntu0.4] (kernel-dkms) (sync)
[14:14] sil2100: ^^^
[14:30] oh waveform, now that we have WAL, we can stop doing incremental backup because we no longer block the writers, I guess that would avoid the restarts too
[14:36] oh, that's a good point too
[14:36] yes - definitely worth trying
[14:37] waveform: It now passed in 18s
[14:38] that's more like it :)
[14:38] 22s
[14:38] looking much better
[14:38] lovely
[14:40] juliank: what is charm revision 48?
[14:41] Laney: current master, cb97ce7b107ab90755c5110f8c17492fa25abac6
[14:41] Laney: "publish-db: Avoid incremental backup"
[14:41] Laney: we're now down from 10 minute publish-db peak times to 20s
[14:42] Laney: Just by removing the pages=... argument
[14:44] Laney: do you get notifications on charm store releases? :D
[14:45] Laney: FWIW, we need more disk space, at 1.1GB size and 1.7GB left at times, we cannot ever safely vacuum the database
[14:45] juliank: no, I noticed the code had gone back to the previous version and went checking to see what you did, was expecting a merge proposal
[14:45] yeah, we can resize at some point, those machines are probably too small
[14:45] can live resize one at a time
[14:45] but it needs to be done with an IS person because some manual recovery is required
[14:45] Laney: oh, did you not commit the revert in the git repo?
[14:46] no, it was charm-level only
[14:46] Laney: but we believe now the code was fine, and it was just the backup API acting up
[14:46] ack
[14:47] Laney: because it copied 1/3 of the db, then slept and waited, writers wrote, and then sqlite3 _restarted_ the backup, so it could take ages
[14:47] Laney: now we copy in one go, so we're safe
[14:47] I'm happy with performance now :)
[14:54] So next step is to split out the current_version generation into update-versions, which writes it into the r/w database using a BEGIN CONCURRENT transaction, and then point browse.cgi at the r/w database and get live updates :D
[14:55] then publish-db every 5 minutes only
[14:56] it's good stuff but I am conscious it's not multiple cloud workers or anything else we identified
[14:56] right
[14:56] it's just a silly side project to get my mind off things
[14:57] but yeah, it would be good to remove that silly two-part DB stuff
[15:00] Laney: need to add metrics for publish-db runtime too
[15:00] or can that be abstracted for systemd services, so that we plug it in for some?
[15:01] Everything that has a timer should have a metric to track runtime
[15:02] Added an issue
[15:05] oops, clicked on the statistics link
[15:05] going to eat up 100% CPU for a minute now
[15:05] :D
[15:13] tseliot_: fwiw the i386-whitelist has a peculiar behavior that if a named source package doesn't exist anywhere in the series, adding it to the packageset is a no-op. It looks like someone has added it now though?
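
The "avoid incremental backup" change described above (14:30-14:47) amounts to copying the whole database in a single backup pass. A minimal sketch using Python's sqlite3 backup API, with illustrative paths rather than the real charm layout:

    import sqlite3

    src = sqlite3.connect("autopkgtest.db")             # r/w database (WAL mode)
    dst = sqlite3.connect("public/autopkgtest.db.new")  # copy to be published

    # With pages left at the default the whole database is copied in one pass,
    # so the backup never restarts; passing a small positive pages value is
    # what made it start over whenever a writer committed between steps.
    src.backup(dst)

    dst.close()
    src.close()
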
[15:29] hey SRU folks, I'm about to disappear for several days but wanted to peek at the stopped phased update for https://launchpad.net/ubuntu/+source/horizon/3:18.3.3-0ubuntu1 (https://errors.ubuntu.com/?release=Ubuntu%2020.04&package=horizon&period=week&version=3%3A18.3.3-0ubuntu1 is the error page) - the traceback in that seems completely unrelated to Horizon, as the entire traceback (excepting the top-level caller) is in django. Given that, I think it would be OK to override the crash
[15:29] -queuebot:#ubuntu-release- Packageset: Added debugedit to i386-whitelist in focal
[15:29] -queuebot:#ubuntu-release- Packageset: Added xwayland to i386-whitelist in focal
[15:52] icey: I can look at that for you and override it if appropriate.
[16:13] mwhudson: initramfs-tools> I have not yet looked, no
[16:18] tseliot_: so running the update-i386-whitelist script does not show nvidia-graphics-drivers-465 as to-be-added, only nvidia-graphics-drivers-465-server; yet somehow it doesn't appear in the output of https://people.canonical.com/~ubuntu-archive/packagesets/focal/i386-whitelist so now I'm confused
[16:19] tseliot_: it's possible the package must exist in the main archive in that series in order for the packageset to take effect?
[16:23] -queuebot:#ubuntu-release- Unapproved: libreoffice (hirsute-proposed/main) [1:7.1.2~rc2-0ubuntu2 => 1:7.1.3-0ubuntu0.21.04.1] (ubuntu-desktop)
[16:29] * enyc meows
[16:34] icey: I've overridden it now
[16:57] -queuebot:#ubuntu-release- Unapproved: nfs-ganesha (groovy-proposed/main) [3.2-2ubuntu1 => 3.2-2ubuntu2] (no packageset)
[16:57] -queuebot:#ubuntu-release- Unapproved: nfs-ganesha (hirsute-proposed/main) [3.4-1 => 3.4-1ubuntu0.21.04.1] (no packageset)
[17:24] -queuebot:#ubuntu-release- Unapproved: nfs-ganesha (focal-proposed/main) [3.0.3-0ubuntu3 => 3.0.3-0ubuntu3.1] (no packageset)
[17:36] -queuebot:#ubuntu-release- Unapproved: accepted netplan.io [source] (groovy-proposed) [0.102-0ubuntu1~20.10.3]
[20:51] -queuebot:#ubuntu-release- Unapproved: accepted pi-bluetooth [source] (focal-proposed) [0.1.15ubuntu0~20.04.1]
[21:02] -queuebot:#ubuntu-release- Unapproved: initramfs-tools (groovy-proposed/main) [0.137ubuntu12 => 0.137ubuntu12.1] (core, i386-whitelist)
[21:03] -queuebot:#ubuntu-release- Unapproved: initramfs-tools (focal-proposed/main) [0.136ubuntu6.4 => 0.136ubuntu6.5] (core, i386-whitelist)
[21:13] -queuebot:#ubuntu-release- Unapproved: initramfs-tools (bionic-proposed/main) [0.130ubuntu3.11 => 0.130ubuntu3.12] (core)
[21:24] -queuebot:#ubuntu-release- Unapproved: speedtest-cli (focal-proposed/universe) [2.1.2-2 => 2.1.2-2ubuntu0.20.04.1] (no packageset)
[21:26] -queuebot:#ubuntu-release- Unapproved: rejected speedtest-cli [source] (focal-proposed) [2.1.2-2ubuntu0.20.04.1]
[21:29] -queuebot:#ubuntu-release- Unapproved: speedtest-cli (focal-proposed/universe) [2.1.2-2 => 2.1.2-2ubuntu0.20.04.1] (no packageset)
[21:29] -queuebot:#ubuntu-release- Unapproved: speedtest-cli (groovy-proposed/universe) [2.1.2-2 => 2.1.2-2ubuntu0.20.10.1] (no packageset)
[23:45] still no ML announcement on 16.04's end-of-standard-support :(