[00:15] <arraybolt3[m]> How on earth do you even build Chromium in a PPA? It takes massive amounts of RAM, I thought.
[00:18] <JanC> seems like it works
[00:19] <sarnold> until recently the builder vms had 8gigs
[00:19] <sarnold> which was half of what I thought they had
[00:19] <sarnold> I think they fiddle with build options to reduce concurrency, use less memory in the linker, etc
[00:19] <JanC> wouldn't the build process adapt to available memory also?
[00:20] <JanC> or to the number of cores available
[00:20] <sarnold> number of cores, usually
[00:20] <JanC> which is likely quite limited in a VM?
[00:20] <sarnold> yeah, perhaps by enough that the memory limit wasn't as painful
[00:21] <sarnold> I've thought vaguely about writing an nproc replacement that also takes available memory into consideration
[00:21] <sarnold> but then when I thought about what "available memory" actually means I decided it'd be easier to just go insane directly
[00:22] <JanC> might be useful on some SoCs that have plenty of cores but little RAM (and can't expand the RAM)
[00:27] <sarnold> yeah, my rpi for example has four cores but one gigabyte of memory. you'd be insane to try make -j4 on there
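The memory-aware `nproc` replacement sarnold describes could be sketched roughly like this (Linux-only, since it reads `/proc/meminfo`; the default of 2 GiB per compile job is an arbitrary assumption that varies wildly by project):

```python
import os

def jobs(mem_per_job_gib=2):
    """Pick a parallelism level capped by both CPU count and available memory.

    Uses the kernel's MemAvailable estimate from /proc/meminfo; the
    per-job memory figure is a guess, not a measured value.
    """
    cores = os.cpu_count() or 1
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    avail_kib = int(meminfo["MemAvailable"].strip().split()[0])
    # one GiB is 1024 * 1024 kiB
    by_memory = max(1, avail_kib // (mem_per_job_gib * 1024 * 1024))
    return min(cores, by_memory)

print(jobs())  # e.g. feed this to make -j$(...)
```

On the 4-core, 1 GiB rpi above this would return 1 rather than 4, which is the whole point.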
[00:29] <arraybolt3> sarnold: (re: go insane directly) lol I've heard that it was pretty hard to figure out available memory, is there a particular reason that it's that hard?
[00:29] <JanC> depending on what you are trying to compile, but for something big for sure
[00:29] <arraybolt3> htop seems to do a decent job of it.
[00:30] <JanC> it's hard to say what is *really* available (or can become available, if needed) because of caching/buffers/etc.
[00:31] <arraybolt3> I figured that buffers and cache could just be ignored, and that all that mattered was the total size of all active processes and shared libraries in use, minus whatever was swapped out. I suspect I don't know a whole lot about this area though.
[00:31] <sarnold> arraybolt3: there's just so many variables -- the physical memory is easy to get, but after that things get fuzzy, because most processes share some amounts of memory with all the other processes on the system; a lot of memory is used for cache or buffers; and then there's rlimits, cgroups, overcommit limits..
[00:32] <arraybolt3> Oy, I didn't know about those last few (rlimits etc).
[00:32] <JanC> but not all buffers/cache are shown as such (e.g. when using ZFS), and some of it would be needed for the actual build process, etc.  :)
[00:34] <sarnold> and perhaps the database running on the system is far more important than the build :)
[00:34] <JanC> and maybe you don't mind using some zram- or flash-based swap, but HDD swap would be a problem, etc.
[00:35] <JanC> I mean, pushing some other stuff into swap
[00:36] <arraybolt3> That explains why Linux goes into thrashing when it runs out of RAM rather than just dying or destroying stuff outright?
[00:36] <arraybolt3> It's even designed to kill processes via the OOM killer, it just seems like without systemd-oomd it takes forever to actually do that.
[00:37] <JanC> OOM killers are a bad design, except in emergency cases really
[00:37] <JanC> except when you can really finetune how they work (what they kill when), I suppose
[00:38] <JanC> ideally applications should get a warning before killing them is necessary & they react to it  :)
[00:38] <sarnold> oh hah I forgot systemd-oomd, the thing that kills things before it's a catastrophe (and causes its own catastrophes :)
[00:38] <sarnold> one more variable to consider
[00:39] <JanC> you don't have to install it :)
[00:39] <sarnold> *nod*
[09:42] <nibbon_> o/
[09:42] <lotuspsychje> welcome nibbon_ 
[09:42] <nibbon_> is this the right place to talk about subiquity (https://github.com/canonical/subiquity)
[09:47] <lotuspsychje> nibbon_: the ubuntu support channels will take all questions around ubuntu and their flavours
[09:48] <nibbon_> actually, my question is related to ubuntu server, that's why I came here :)
[09:48] <lotuspsychje> shoot, and if volunteers are awake, they will try to answer
[09:53] <nibbon_> with the d-i installer, I have a procedure to build an ISO that performs an unattended installation 
[09:56] <nibbon_> my takeaway from reading the subiquity GH repo is that I can still repackage an ISO, but then I need to have an external http server to serve the configuration
[09:58] <nibbon_> I would like to know whether or not I can still put everything into the ISO 
[11:32] <tomreyn> nibbon_: https://www.pugetsystems.com/labs/hpc/ubuntu-22-04-server-autoinstall-iso/
[14:23]  * nibbon_ reads
[15:17] <nibbon_> tomreyn: thanks, that's the example I was looking for
[15:24] <tomreyn> you're welcome 
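For the archives: subiquity's autoinstall config is cloud-init user-data, so a repacked ISO can carry it via the NoCloud datasource (a `user-data` file plus an empty `meta-data` file) instead of fetching it over HTTP, which is the approach the linked article walks through. A minimal sketch of the `user-data` file — the hostname, username, and password hash are placeholders:

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: example-host          # placeholder
    username: ubuntu                # placeholder
    # crypted hash, e.g. generated with `mkpasswd -m sha-512` (placeholder)
    password: "$6$examplesalt$examplehash"
  ssh:
    install-server: true
```

The ISO also needs the kernel command line pointed at the datasource (e.g. `autoinstall ds=nocloud;s=/cdrom/nocloud/`); the exact repacking steps are in the article above.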
[20:00] <Exterminador> hello  folks. any idea on why my VPS isn't completing the `sudo apt update` command?
[20:01] <Exterminador> nvm.. it's because I sought help that it completed, after hanging for almost 5m -_-
[20:02] <arraybolt3> Tape the IRC logs to the monitor and that should make it keep working in the future :P
[20:02] <arraybolt3> (OK so not really...)
[20:02] <arraybolt3> Anyway, my guess would probably be either a bogged-down or glitchy server on Canonical's end, or possibly a really low-performance VPS.
[20:11] <Exterminador> the VPS isn't exactly low-performance even being from Kimsufi (OVH Eco branch)
[20:12] <Exterminador> usually all goes smooth for the exception for today. so, yes, it was probably a glitch in the matrix :)
[20:52] <ChmEarl> world premiere: https://pb.psychotic.ninja/view/90bc0efd
[21:45] <arraybolt3> ChmEarl: Excuse me?
[23:01] <Woet> just like theres iftop for network traffic, is there something for storage "traffic"?
[23:01] <Woet> ah, iotop
[23:03] <sarnold> Woet: also take a look at biotop, filetop, and other tools https://github.com/iovisor/bcc/tree/master