=== genii is now known as genii-core
=== Kilroy9 is now known as Kilroy
[04:34] octav1a: for a yet more complete answer: 20.04 LTS server would also install updates as snaps, for packages that were installed as snaps (or have been converted to snaps, but i don't think this will happen during an LTS release). there are ways to delay those, but - as far as i know - none to prevent it.
[04:35] (the linux kernel or nvidia drivers are not packaged as snaps at this time)
[04:36] I don't think I'm using any non-default snaps
=== hornbill047 is now known as hornbill
[06:22] good morning
=== gschanuel215 is now known as gschanuel21
=== ErmJurCerrDurr is now known as MJCD
=== mirespace_ is now known as mirespace
[09:42] ahasenack: :)!
=== vlm_ is now known as vlm
=== thegodsq- is now known as thegodsquirrel
[11:10] ahasenack: rsync focal package regression on dgit (arm64), in skew-clone test: local file not found ... digging into it
[11:11] ahasenack: rsync bionic, regression on lxc (ppc64el) and s3ql (armhf) ... checking it also
[11:21] ahasenack: timeout for lxc (bionic, ppc64el)
[11:32] ahasenack: fileNotFound on t4_fuse test (standard) for s3ql (armhf, bionic)
[12:04] good morning!
[12:04] hi athos
[12:04] morning
[12:04] hi Andreas
[12:05] hi mirespace
[12:06] \o
=== shabbz is now known as peninsularshabbz
=== lotuspsychje_ is now known as lotuspsychje
[14:36] sergiodj: looking at squid now
[14:38] I'm unsure if bileto is actually working wrt dep8 tests. Last I tried a few weeks ago, the test run never left the "queued" state
[15:04] ahasenack: thanks
[16:15] mirespace: ahasenack: I will try retriggering dgit again; the error doesn't seem related to mirespace's patch, and I know that dgit's dep8 is pretty extensive and sometimes a bit flaky
[16:15] ok, that will be the 2nd retry
[16:15] remember this is also an opportunity to fix flaky tests ;)
[16:16] or else the next uploader will go through this ordeal again
[16:16] of course, it depends on what is going on, we can't fix the world
[16:20] this is for an SRU, so I'm not expecting us to fix flaky tests in this case
[16:22] also, based on what I've seen from dgit's tests, investigating and fixing its dep8 tests might not be very trivial
[16:44] sergiodj: ahasenack: the lxc failure is now different from the one I saw this morning in the log: "ERROR: Failed to download http://images.linuxcontainers.org//images/ubuntu/xenial/ppc64el/default/20220207_07:42//rootfs.tar.xz"
[16:44] exercise test failing
[16:52] now it's showing the failure to download? The latest run?
[16:58] yes
[16:59] it went from a timeout to a failure to download
[18:54] Odd_Bloke: what did you mean the other day by "artifactory doesn't support zstd compression"? Isn't it "just" storage? Does it need to open the debs?
[19:44] mirespace: thanks, I retried those tests
[19:44] let's see tomorrow
[20:49] mirespace: ahasenack: the second retrigger of dgit (with rsync as the trigger) has passed
[20:50] phew
[20:54] \o/
[21:06] Hello, I am running 30+ docker containers, orchestrated using nomad, vault, consul -- lots of processes. Sometimes during the day I will get high load for an hour or so (1m, 5m, 15m > 25 @ 16 threads), but it's been hard to diagnose which processes are causing the high load. Using top, htop, etc. and referencing the %CPU usage has not been helpful because they are real-time values and very fleeting. Any suggestions on how I can diagnose which processes are contributing to the high load?
[21:16] axsuul: I think perf could be helpful in this case
[21:18] you can do some nice things with e.g. "perf top"
[21:29] Thanks, will give that a try!
[21:33] sergiodj: "perf top" seems pretty low level, in that it's showing which calls are taking up the most CPU. Is there anything more high level, something like "this process is contributing xx amount of load"?
[21:34] did perf top not point at the largest contributors?
[21:34] Here's what I'm seeing, it's hard to decipher what these are https://0x0.st/oXxL.txt
[21:35] axsuul: perf top will let you know that. if you want something finer grained (over a longer period of time), you can also use "perf record" to monitor the activity on CPUs, and then "perf report" to analyse the data (beware that "perf record" will require non-trivial disk space to save its data file)
[21:35] heh, it's saying the measurement itself is contributing the most to cpu load
[21:36] axsuul: a simple top with the list sorted by CPU time consumed would let you see which processes are burning cycles (assuming they are long lived)
[21:37] sdeziel: is that the same thing as sorting by %CPU?
=== Poster` is now known as Poster
[21:38] axsuul: you'd need to press "shift-t" to sort by time consumed, so not the same I think
[21:40] sdeziel: thanks, however it's hard to debug with the CPU time if many of my processes have already been running for a month or so, right? It would be useful to reset the CPU time so I can see what's contributing in the short term
[21:54] axsuul: you could compare the top X TIME consumers now and in a few hours
[22:02] Thanks, I suppose I could graph it in grafana
=== genii is now known as genii-core
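The suggestions in the exchange above boil down to a short diagnostic workflow. The commands below are a minimal sketch of it, assuming a reasonably recent perf and procps-ng; the 60-second recording window and the /tmp/cputime.* snapshot file names are illustrative choices only, not anything taken from the discussion.

    # System-wide, real-time view of the hottest code paths (kernel and user space):
    sudo perf top

    # Record activity on all CPUs for 60 seconds with call graphs, then browse the result.
    # As noted above, the perf.data file can take non-trivial disk space.
    sudo perf record -a -g -- sleep 60
    sudo perf report

    # Sort processes by cumulative CPU time instead of the fleeting instantaneous %CPU
    # (same effect as pressing shift-t inside an interactive top session):
    top -o TIME+

    # Snapshot the biggest cumulative CPU-time consumers now and again a few hours later,
    # then compare the two files to see which processes burned cycles in between:
    ps -eo pid,comm,time --sort=-time | head -n 20 > /tmp/cputime.before
    # ... a few hours later ...
    ps -eo pid,comm,time --sort=-time | head -n 20 > /tmp/cputime.after
    diff /tmp/cputime.before /tmp/cputime.after

The last step is the manual version of sdeziel's "compare the top X TIME consumers now and in a few hours"; graphing per-process CPU time in grafana, as axsuul suggests, automates the same comparison for processes that have been running for months.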