/srv/irclogs.ubuntu.com/2022/02/10/#ubuntu-server.txt

=== genii is now known as genii-core
=== Kilroy9 is now known as Kilroy
[04:34] <tomreyn> octav1a: for a yet more complete answer: 20.04 LTS server would also install updates as snaps for packages that were installed as snaps (or have been converted to snaps, but I don't think that will happen during an LTS release). There are ways to delay those, but, as far as I know, none to prevent it.
[04:35] <tomreyn> (the Linux kernel and NVIDIA drivers are not packaged as snaps at this time)
[04:36] <octav1a> I don't think I'm using any non-default snaps
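A minimal sketch of how those snap refreshes can be deferred, assuming snapd's standard refresh options; the window and hold date below are placeholders only:

    sudo snap set system refresh.timer=fri,23:00-01:00         # confine automatic refreshes to a weekly window
    sudo snap set system refresh.hold="2022-03-01T00:00:00Z"   # postpone refreshes until this date
    snap refresh --time                                        # show the schedule currently in effect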
=== hornbill047 is now known as hornbill
[06:22] <cpaelzer> good morning
=== gschanuel215 is now known as gschanuel21
=== ErmJurCerrDurr is now known as MJCD
=== mirespace_ is now known as mirespace
[09:42] <mirespace> ahasenack: :)!
=== vlm_ is now known as vlm
=== thegodsq- is now known as thegodsquirrel
[11:10] <mirespace> ahasenack: rsync focal package regression on dgit (arm64): in the skew-clone test a local file is not found ... digging into it
[11:11] <mirespace> ahasenack: rsync bionic: regressions on lxc (ppc64el) and s3ql (armhf) ... checking those as well
[11:21] <mirespace> ahasenack: timeout for lxc (bionic, ppc64el)
[11:32] <mirespace> ahasenack: FileNotFound on the t4_fuse test (standard) for s3ql (armhf, bionic)
[12:04] <athos> good morning!
[12:04] <mirespace> hi athos
[12:04] <ahasenack> morning
[12:04] <mirespace> hi Andreas
[12:05] <ahasenack> hi mirespace
[12:06] <athos> \o
=== shabbz is now known as peninsularshabbz
=== lotuspsychje_ is now known as lotuspsychje
[14:36] <ahasenack> sergiodj: looking at squid now
[14:38] <ahasenack> I'm unsure if bileto is actually working wrt dep8 tests. Last time I tried, a few weeks ago, the test run never left the "queued" state
[15:04] <sergiodj> ahasenack: thanks
[16:15] <sergiodj> mirespace: ahasenack: I will try retriggering dgit again; the error doesn't seem related to mirespace's patch, and I know that dgit's dep8 suite is pretty extensive and sometimes a bit flaky
[16:15] <ahasenack> ok, that will be the 2nd retry
[16:15] <ahasenack> remember this is also an opportunity to fix flaky tests ;)
[16:16] <ahasenack> or else the next uploader will go through this ordeal again
[16:16] <ahasenack> of course, it depends on what is going on; we can't fix the world
[16:20] <sergiodj> this is for an SRU, so I'm not expecting us to fix flaky tests in this case
[16:22] <sergiodj> also, based on what I've seen of dgit's tests, investigating and fixing its dep8 tests might not be trivial
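A rough sketch of how a flaky dep8 test like dgit's can be reproduced locally with autopkgtest's qemu runner; the focal/amd64 image is only the easiest local approximation (the archive failures above were on arm64 and ppc64el), and any packages from -proposed that acted as triggers would still need to be pulled in by hand:

    sudo apt install autopkgtest qemu-system-x86                          # tooling for local dep8 runs
    autopkgtest-buildvm-ubuntu-cloud -r focal -a amd64                    # build a focal test VM image
    autopkgtest --apt-upgrade dgit -- qemu ./autopkgtest-focal-amd64.img  # run dgit's dep8 tests in the VM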
[16:44] <mirespace> sergiodj: ahasenack: the lxc failure is now different from the one I saw this morning; the log shows "ERROR: Failed to download http://images.linuxcontainers.org//images/ubuntu/xenial/ppc64el/default/20220207_07:42//rootfs.tar.xz"
[16:44] <mirespace> the exercise test is failing
[16:52] <ahasenack> now it's showing the failure to download? In the latest run?
[16:58] <mirespace> yes
[16:59] <mirespace> it went from a timeout to a failure to download
[18:54] <ahasenack> Odd_Bloke: what did you mean the other day with "artifactory doesn't support zstd compression"? Isn't it "just" storage? Does it need to open the debs?
[19:44] <ahasenack> mirespace: thanks, I retried those tests
[19:44] <ahasenack> let's see tomorrow
[20:49] <sergiodj> mirespace: ahasenack: the second retrigger of dgit (with rsync as the trigger) has passed
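For reference, such retriggers are requested via autopkgtest.ubuntu.com; the retry links take roughly this form (the rsync version is a placeholder, and the request requires an Ubuntu SSO login):

    https://autopkgtest.ubuntu.com/request.cgi?release=focal&arch=arm64&package=dgit&trigger=rsync/<version>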
[20:50] <ahasenack> phew
[20:54] <mirespace> \o/
[21:06] <axsuul> Hello, I am running 30+ docker containers, orchestrated using nomad, vault, consul -- lots of processes. Sometimes during the day I will get high load for an hour or so (1m, 5m, 15m >25 @ 16 threads), but it's been hard to diagnose which processes are causing it. Using top, htop, etc. and watching the %CPU column has not been helpful, because those are real-time values and very fleeting. Any suggestions on how I can diagnose which processes are contributing to the high load?
[21:16] <sergiodj> axsuul: I think perf could be helpful in this case
[21:18] <sergiodj> you can do some nice things with e.g. "perf top"
[21:29] <axsuul> Thanks, will give that a try!
[21:33] <axsuul> sergiodj: "perf top" seems pretty low level, in that it shows which calls are taking up the most CPU. Is there anything more high level, something like "this process is contributing xx amount of load"?
[21:34] <sarnold> did perf top not point at the largest contributors?
[21:34] <axsuul> Here's what I'm seeing; it's hard to decipher what these are: https://0x0.st/oXxL.txt
[21:35] <sergiodj> axsuul: perf top will let you know that. If you want something finer grained (over a longer period of time), you can also use "perf record" to monitor activity on the CPUs, and then "perf report" to analyse the data (beware that "perf record" will require non-trivial disk space for its data file)
[21:35] <sarnold> heh, it's saying the measurement itself is contributing the most to CPU load
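A minimal sketch of the kind of "perf record"/"perf report" run described above, assuming the perf tooling for the running kernel (linux-tools) is installed; the 60-second window is arbitrary:

    sudo perf record -a -g -- sleep 60    # sample all CPUs, with call graphs, for 60 seconds
    sudo perf report --sort comm,dso      # summarise the samples per process (command) and binary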
[21:36] <sdeziel> axsuul: a simple top with the list sorted by CPU time consumed would let you see which processes are burning cycles (assuming they are long-lived)
[21:37] <axsuul> sdeziel: is that the same thing as sorting by %CPU?
=== Poster` is now known as Poster
[21:38] <sdeziel> axsuul: you'd need to press "shift-t" to sort by time consumed, so not the same, I think
[21:40] <axsuul> sdeziel: thanks, however it's hard to debug with the CPU time if many of my processes have already been running for a month or so, right? It would be useful to reset the CPU time so I can see what's contributing in the short term
[21:54] <sdeziel> axsuul: you could compare the top X TIME consumers now and again in a few hours
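One way to make that comparison with stock tools is to snapshot cumulative CPU time and diff it later; a sketch (the file names and head count are arbitrary):

    ps -eo pid,comm,time --sort=-time | head -20 > /tmp/cputime.before
    # ... wait an hour or two, then:
    ps -eo pid,comm,time --sort=-time | head -20 > /tmp/cputime.after
    diff /tmp/cputime.before /tmp/cputime.after    # processes whose TIME advanced the most were the busiest in between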
[22:02] <axsuul> Thanks, I suppose I could graph it in grafana
=== genii is now known as genii-core
