/srv/irclogs.ubuntu.com/2020/01/07/#ubuntu-server.txt

=== BenderRodriguez is now known as RobertMuellerIsG
=== RobertMuellerIsG is now known as RobertMuellerIII
[01:48] <Tuv0x> Anybody know of a sort of web portal to all your services? (Plex, Portainer, Deluge, Node-red, etc...)
=== RobertMuellerIII is now known as BenderRodriguez
[01:56] <Tuv0x> Guess I'll write my own.
[07:09] <lordievader> Good morning
[10:14] <Skyrider> Morning :D
[10:42] <lordievader> 👋
[11:26] <Skyrider> Greetings all.. Got a quick question.
[11:27] <Skyrider> I've set up quite a few .service / .timers on my systemd, and noticed after I rebooted the server a few times, those timers ran when they shouldn't have.
[11:28] <Skyrider> I don't have anything in the timer and or service that they should be running on system boot either.
[11:35] <lordievader> Are they persistent?
[11:38] <tomreyn> systemctl list-timers   helps reviewing timers' status (in case you didn't do this yet)
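    A quick way to review timer state along the lines tomreyn suggests; the unit name below is taken from the pastes later in the log and stands in for whichever timer is misbehaving:

        systemctl list-timers --all                      # list all timers with their last and next activation
        systemctl status inferno-daily-archiver.timer    # is the timer active, and when did it last trigger?
        systemctl cat inferno-daily-archiver.timer       # show the unit file(s) systemd actually loaded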
[11:48] <Skyrider> Meh, I can't seem to figure it out
[11:49] <lordievader> In case you missed my question: Are they persistent?
[11:50] <Skyrider> Oh, sorry. Rebooted, so I didn't see.
[11:50] <Skyrider> 1 sec, will paste the timer/service file.
[11:51] <Skyrider> https://paste.ubuntu.com/p/mR6WdpTV6z/ - timer and Service: https://paste.ubuntu.com/p/XwfY6zgYgv/
[11:55] <tomreyn> <tomreyn> systemctl list-timers   helps reviewing timers' status (in case you didn't do this yet)
[11:55] <lordievader> Skyrider: How often should this run? (And why specify 2020?)
[11:56] <lordievader> As [1] suggests, `systemd-analyze calendar <time>` is a useful way of checking your interval notation.
[11:56] <lordievader> [1]: https://unix.stackexchange.com/questions/126786/systemd-timer-every-15-minutes
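    A minimal check of the calendar expression with systemd-analyze on a reasonably recent systemd; the spec shown here is only an example, loosely based on the weekly one Skyrider mentions later:

        systemd-analyze calendar 'Mon *-*-* 23:58:00'    # prints the normalized form and the next elapse time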
[11:56] <frickler> coreycb: jamespage: do you have some planned schedule for updating rocky uca? or do I just have to nag you often enough? ;) https://bugs.launchpad.net/cloud-archive/+bug/1853320
[11:56] <ubottu> Launchpad bug 1853320 in Ubuntu Cloud Archive rocky "[SRU] rocky stable releases" [High,Triaged]
[11:58] <Skyrider> I specified 2020 because I set up the script in 2019.
[11:58] <Skyrider> Didn't want it to run in 2019 :)
[12:00] <Skyrider> Link doesn't show anything to prevent it to run on boot though
[12:00] <lordievader> Was just giving my source ;)
[12:00] <lordievader> Can you answer my question, though?
[12:06] <Skyrider> persistent, how?
[12:06] <Skyrider> There are no persistent rules in the service/timer files.
[12:08] <lordievader> No, how often should it run, I meant ;)
[12:11] <Skyrider> Multiple scripts. one per 10m, the other per day and another per week.
[12:12] <Skyrider> Seeing remove/move commands are also in the scripts, it's screwing up.
[12:13] <lordievader> Right, so this is doing every 10 minutes. Can't it be that the next run has passed during the boot?
[12:14] <Skyrider> The 10 min? Sure.
[12:14] <Skyrider> Mon 2020-01-13 23:58:00 CET  6 days left   n/a                          n/a      inferno-daily-archiver.timer inferno-daily-archiver.service
[12:14] <Skyrider> That one however, ran after boot.
[12:15] <jamespage> frickler: I'd defer to coreycb on that one - we try to have a monthly cadence for each series but that does not always work out
[12:16] <lordievader> Skyrider: Was the service "enabled" by accident?
[12:18] <frickler> jamespage: great, I just noticed that I also need neutron-fwaas releases being made, so maybe not a bad thing if it takes another couple of days. https://review.opendev.org/701358
[12:19] <Skyrider> I only enabled the timer.
[12:20] <Skyrider> Even when I run the disable service command, no response is given.
[12:20] <Skyrider> So there are no symlinks.
[12:22] <frickler> jamespage: regarding this glibc bug, do you think bumping the minor version in bionic-updates would seem feasible? or do we need to locate specific patches? https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1839592
[12:22] <ubottu> Launchpad bug 1839592 in glibc (Ubuntu) "Open vSwitch (Version 2.9.2) goes into deadlocked state" [High,Confirmed]
[12:22] <frickler> we did run for 3 weeks now without hitting the issue again on >100 machines
[12:22] <Skyrider> When I check the multi-user.target.wants directory, only timers are shown, not the services.
[12:23] <Skyrider> So that looks normal as well.
[12:26] <lordievader> Skyrider: What does `systemctl cat inferno-daily-archiver.timer` return?
[12:28] <Skyrider> Same as https://paste.ubuntu.com/p/mR6WdpTV6z/ - But different timers "OnCalendar=Mon 2020-*-* 23:58:00" and WantedBy=multi-user.target, changed it to timers.target in minute timer as I was trying a few things.
[12:35] <lordievader> Yeah, that should be timers.target.
[12:35] <lordievader> Did you test it with that target?
[12:36] <Skyrider> Let me adjust it.
[12:37] <Skyrider> Do I have to disable/enable after changing it, or will daemon-reload do
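    A sketch of the usual sequence after editing the [Install] section; daemon-reload alone re-reads unit files but does not move the enablement symlink, so the timer needs to be re-enabled:

        sudo systemctl daemon-reload                               # pick up the edited unit file
        sudo systemctl reenable inferno-daily-archiver.timer       # disable + enable, recreating the symlink under the new WantedBy= target
        systemctl list-dependencies timers.target | grep archiver  # confirm the timer is now pulled in by timers.target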
[12:51] <Skyrider> Okay, rebooted.
[12:51] <Skyrider> All timers were run.
[12:51] <Skyrider> ***All timers still ran regardless.
[12:52] <lordievader> Are they present in the multi-user.target?
[12:53] <Skyrider> Only 3 timers still exist in multi-user.target, none of them are important atm :p
[12:53] <Skyrider> I disabled/enabled all timers as well just to be sure before the reboot
[12:53] <Skyrider> Created symlink /etc/systemd/system/timers.target.wants/inferno-daily-archiver.timer → /etc/systemd/system/inferno-daily-archiver.timer.
[12:54] <Skyrider> if I cat the file, WantedBy=timers.target exists.
[12:55] <lordievader> Did all of them run or just those from multi-user.target?
[12:55] <Skyrider> All.
[12:56] <lordievader> Even though persistent is by default false, what happens when you make this explicit?
[12:57] <Skyrider> Persistent=False in timer?
[12:57] <lordievader> Yes
[12:57] <lordievader> As per https://www.freedesktop.org/software/systemd/man/systemd.timer.html
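    A minimal timer sketch with the setting made explicit; the unit name matches the one discussed above, but the schedule shown here is only illustrative:

        # /etc/systemd/system/inferno-daily-archiver.timer
        [Unit]
        Description=Run the daily archiver

        [Timer]
        OnCalendar=*-*-* 23:58:00
        Persistent=false          # do not catch up on runs missed while the machine was off (false is also the default)

        [Install]
        WantedBy=timers.target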
[13:17] <coreycb> frickler: jamespage: i'll push on getting what's currently in -proposed released today, and will get some bugs created for the next batch. we try to make sure not to release prior to holidays but the goal is a monthly cadence.
[13:19] <coreycb> frickler: I'm going to grab that rocky one from sahid
[14:51] <Skyrider> Anyone here happen to be familiar with CSF and its web UI?
[14:57] <vlm> Not very confident with the dd tool; is it possible to read only, say, from block 1100 to 1200 of a file or device?
[14:58] <sdeziel> vlm: yes, use seek= and count=
[15:01] <vlm> sdeziel: I'll look that up, thanks!
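    One note on the options: for reading a range, skip= is what skips blocks on the input (seek= positions within the output file). A sketch, assuming 512-byte blocks and a hypothetical source device:

        # copy 100 blocks starting at block 1100 (blocks 1100-1199) into slice.bin
        dd if=/dev/sdX of=slice.bin bs=512 skip=1100 count=100 status=progress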
[15:09] <sdeziel> is there a way to insert routes on boot? I'm trying to blackhole a bunch of IPs and would like the blackhole routes to persist through reboots. netplan/systemd-networkd is unfortunately not an option for various reasons
[15:15] <vlm> I'm interested in that too; I've used rc.local with ip route on my non-systemd boxes before. It would be nice if there were some way through a systemd unit file or so
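    One way to do what vlm describes with a unit file instead of rc.local; a sketch with a hypothetical unit name, and documentation-range prefixes standing in for the real blackhole targets:

        # /etc/systemd/system/blackhole-routes.service
        [Unit]
        Description=Install blackhole routes at boot
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/sbin/ip route add blackhole 192.0.2.0/24
        ExecStart=/sbin/ip route add blackhole 198.51.100.0/24

        [Install]
        WantedBy=multi-user.target

    Enable it once with `systemctl enable blackhole-routes.service`.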
[15:47] <tds> sdeziel: if you're just blackholing, you could handle that at the firewall level with ipsets?
[15:48] <sdeziel> tds: yes but then I need to bother with OUTPUT and FORWARD which I'd like to avoid
[15:49] <tds> do you need to be able to easily make live changes?
[15:49] <sdeziel> tds: re ipsets, how do you restore them on boot? Is there any documentation you could point me to?
[15:49] <sdeziel> tds: yes
[15:50] <tds> for ipsets, look at the ipset-persistent package
[15:50] <tds> it's a hook for netfilter-persistent
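    For reference, the firewall-level variant being discussed might look roughly like this (set name and prefix are illustrative); the OUTPUT and FORWARD rules are exactly the extra bother sdeziel mentions:

        ipset create blackhole4 hash:net                                  # a set of IPv4 networks to drop
        ipset add blackhole4 192.0.2.0/24
        iptables -I OUTPUT  -m set --match-set blackhole4 dst -j DROP     # locally generated traffic
        iptables -I FORWARD -m set --match-set blackhole4 dst -j DROP     # forwarded traffic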
[15:50] <sdeziel> tds: thanks that's what I wanted
[15:50] <tds> awesome
[15:50] <sdeziel> but that's my plan b ;)
[15:51] <tds> if you want to be able to make live but persisting changes to routes, a rather overkill solution would be a routing daemon like BIRD
[15:51] <teward[m]> sdeziel: ummmm `ipset-persistent`?
[15:51] <teward[m]> don't you mean `iptables-persistent`?
[15:52] <teward[m]> ah no i'm blind NEVERMIND
[15:52] * teward[m] can't read today
[15:53] <sdeziel> vlm: I found a semi-decent way using systemd-networkd drop-in snippets. This is a little annoying to make work with netplan but it does work
[15:53] <sdeziel> vlm: let me know if you want me to elaborate
[16:32] <vlm> sdeziel: was afk a while, please do tell your solution
[16:34] <vlm> also I didn't know there was a netfilter hook for ipset, made my own unit file just for that, nice find
[16:34] <sdeziel> vlm: I am hesitating between 2 ways. One uses a dummy NIC (https://paste.ubuntu.com/p/MzJHfkGK2T/) and the other uses drop-in files that supplement the netplan-generated config
[16:36] <vlm> sdeziel: interesting, I've not dealt too much with systemd-networkd, thanks for sharing
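    A sketch of the drop-in approach sdeziel describes, assuming the netplan-generated unit is called 10-netplan-eth0.network (check /run/systemd/network/ for the real name); the prefix is illustrative:

        # /etc/systemd/network/10-netplan-eth0.network.d/blackhole.conf
        [Route]
        Destination=192.0.2.0/24
        Type=blackhole

    followed by `systemctl restart systemd-networkd` to apply it.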
[16:39] <sdeziel> tds: looks like ipset-persistent is not packaged but is easy enough to set up according to https://selivan.github.io/2018/07/27/ipset-save-with-ufw-and-iptables-persistent-and.html
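    In the same spirit as that link, a hand-rolled restore on 18.04 could be a small oneshot unit ordered before the firewall rules are loaded; the unit name, ordering details, and file path are hypothetical:

        # /etc/systemd/system/ipset-restore.service
        [Unit]
        Description=Restore ipsets before netfilter rules are loaded
        DefaultDependencies=no
        Before=network-pre.target netfilter-persistent.service
        Wants=network-pre.target
        After=local-fs.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/sh -c '/sbin/ipset restore < /etc/iptables/ipsets'

        [Install]
        WantedBy=multi-user.target

    The set file would be refreshed with `ipset save > /etc/iptables/ipsets` whenever the sets change.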
[16:39] <tds> what version of ubuntu is this?
[16:40] <sdeziel> tds: I'm on 18.04
[16:41] <tds> oh yes, looks like ipset-persistent only appears on 19.04 onwards :(
[16:43] <shibboleth> apt-get install ifupdown
[16:43] <shibboleth> apt-get purge netplan.io
[16:43] <shibboleth> tada
[16:44] <sdeziel> tds: ah, right, thx
[16:45] <vlm> sdeziel: if you want I might be able to paste the unit file for ipset I made, although I think I didn't make it all by myself, some of it I found on the net, but it seems to work ok though
[17:09] <sdeziel> vlm: thanks but I'll backport the package to my PPA
[19:54] <shubjero> Hi, I am trying to achieve greater than 10Gbps network throughput on an instance that lives on a compute node which has two 10G NICs in LACP (layer 3+4 hash policy). At the hypervisor level I can iperf test with two other clients and achieve 20Gbps, but when I try to do a similar test at the virtualized level I can only hit 10Gbps, with all that 10Gbps traffic only going over one of the hypervisor's NICs in the bond. Any thoughts?
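    For context, a bond with that hash policy would be expressed in netplan roughly as follows; the interface names are made up and the real host config only shows up in the pastes later on:

        # /etc/netplan/01-bond.yaml (illustrative)
        network:
          version: 2
          ethernets:
            ens1f0: {}
            ens1f1: {}
          bonds:
            bond0:
              interfaces: [ens1f0, ens1f1]
              parameters:
                mode: 802.3ad
                lacp-rate: fast
                transmit-hash-policy: layer3+4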
[20:05] <Ussat> which hypervisor, whit nic drivers ?
[20:05] <Ussat> what
[20:05] <shubjero> Ubuntu 18.04 kvm/qemu. Virtio nic driver
[20:05] <compdoc> eww
[20:07] <shubjero> This is just a single compute node in a large openstack cluster
[20:07] <sdeziel> shubjero: the hashing policy could have both sides of your iperf going over a single NIC if you are unlucky
[20:10] <shubjero> Well I noticed that sometimes the iperf3 tests I would perform at the hypervisor level would sometimes only reach 10Gbps but a simple re-start of one of the iperf3 clients would allow it to be load balanced across both links and utilize 20Gbps.
[20:10] <shubjero> I've not been able to repeat those results at the virtualized level though. Seems strange
[20:11] <shubjero> And in each iperf3 test scenario, we are talking about traffic from 3 different servers.
[20:11] <shubjero> virtual and bare metal
[20:12] <sdeziel> shubjero: the hashing fans out flows to the underlying NICs in a semi-random fashion (depends on src/dst IPs and ports used), so it's important to repeat them a couple of times, as each time your src port will be different and you get another chance to move the flow to another NIC
[20:13] <sdeziel> that said, you probably have been touching both NICs already
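    For what it's worth, the kernel bonding documentation gives roughly this hash for layer3+4 on unfragmented TCP/UDP, which is why a new source port gives each flow a fresh chance to land on the other slave:

        hash = ((src_port XOR dst_port) XOR ((src_ip XOR dst_ip) AND 0xffff)) modulo slave_count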
[20:13] <shubjero> sdeziel: yeah i've definitely done that. I am wondering if perhaps the lack of balancing is happening at the bridge level
[20:16] <sdeziel> shubjero: dunno if that's still relevant but I remember loading the vhost_net module to get better performance. Seems to be done automatically on 18.04 now though
[20:17] <tds> shubjero: how are you running iperf exactly, testing on multiple flows?
[20:18] <shubjero> tds: On the server with 20Gbps, I'm spawning two instances of iperf3 server (listening on two ports). Then on each client I am connecting to the IP on the iperf3 server.., on each I've tried multiple streams but it doesnt change the result
[20:20] <tds> hmm, if you're doing full l3+4 hashing, i'd expect a single iperf run with -P 10 or something to get hashed over multiple links fine
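    Roughly the kind of test being described, with the parallel-stream client run tds suggests; addresses and ports are placeholders:

        # on the receiving host: two iperf3 servers on separate ports
        iperf3 -s -p 5201 &
        iperf3 -s -p 5202 &

        # on each client: 10 parallel streams for 30 seconds, giving the flows a chance to hash onto different slaves
        iperf3 -c 203.0.113.10 -p 5201 -P 10 -t 30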
[20:20] <shubjero> tds: Yeah, and that all works at the hypervisor/bare metal level. It's when I run the same test at the instance level that I don't see balancing occurring
[20:21] <tds> what does the network config on the host look like? just a plain bridge with the bond as a slave?
[20:22] <shubjero> tds: http://paste.openstack.org/show/788141/
[20:24] <tds> how is the vm tied to that bond exactly? full ip a / ip l output would be great
[20:25] <shubjero> tds: Here's the ip a output from the compute node: http://paste.openstack.org/show/47tQwySQU1GmAd0tDtZ4/
[20:28] <tds> shubjero: that's definitely not a standard network config, we'll need more details on how the vm's connectivity works - it looks like ovs is involved as well, are you running overlay networks of some kind?
[20:29] <tds> the output of `ovs-vsctl show` may say a little more about what's going on if so
[20:29] <shubjero> Yes, this is a compute node in openstack so neutron is involved.
[20:31] <shubjero> tds: http://paste.openstack.org/show/v9s6nrLJNFcKrzAUpen1/ there's brctl show and ovs-vsctl show
[20:31] <tds> i'd take a wild guess that it's running some kind of overlay network (eg vxlan) between nodes, and your vxlan flows are getting hashed onto a single link
[20:31] <shubjero> Yep, GRE actually
[20:31] <tds> ah right, there's your issue then
[20:31] <shubjero> Elaborate?
[20:33] <tds> your bond will hash on tcp/udp flows, so probably only l3 info for gre (i'd guess) - therefore, you only end up with one flow as far as the bonding driver is concerned
[20:34] <shubjero> I would have assumed though that because I was performing the iperf3 tests from multiple VM's that live on different compute nodes with different GRE IP's that it would at some point, be load balanced across the two links.. I've yet to see that so far
[20:35] <shubjero> I even tried changing the bonding hash method to encap3+4
[20:35] <shubjero> encap3+4: This policy uses the same formula as layer3+4 but it relies on skb_flow_dissect to obtain the header fields, which might result in the use of inner headers if an encapsulation protocol is used.
[20:35] <shubjero> The default value is layer2. This option was added in bonding version 2.6.3. In earlier versions of bonding, this parameter does not exist, and the layer2 policy is the only policy. The layer2+3 value was added for bonding version 3.2.2.
[20:35] <shubjero> from: https://help.ubuntu.com/community/UbuntuBonding
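    Two quick ways to confirm which policy the bond is actually using right now (a hypothetical bond name bond0 is assumed); switching it persistently would go through netplan's transmit-hash-policy parameter as quoted above:

        grep 'Transmit Hash Policy' /proc/net/bonding/bond0    # what the bonding driver reports
        cat /sys/class/net/bond0/bonding/xmit_hash_policy      # same value via sysfs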
[20:36] <shubjero> I can reach out to some openstack operators to see if any of them have dabbled in this
[20:36] <tds> encap3+4 sounds a lot more like what you might want, though i've no idea if it behaves properly with ovs rather than linux doing the encap
[20:36] <tds> yes, that would probably be sensible, or ovs people
[20:36] <tds> do you have 10g links between switches anywhere in the topology?
[20:36] <shubjero> Do you know of an OVS-centric channel?
[20:37] <tds> since even if you can persuade linux to hash on l3/4, that's a bit useless if the switch will hash it all down a single 10g link later
[20:37] <shubjero> tds: Well, the iperf3 CLIENTS are 10G limited.. but I am using multiple clients so that shouldnt be the issue.. plus it all works at the non-virtualized layer (bypassing neutron, gre, ovs, etc)
[20:38] <tds> but I would expect the tests over multiple nodes to get split over the different links
[20:38] <shubjero> me too
[20:38] <tds> if you have even more nodes handy for testing, i'd definitely try that, might've just been you got unlucky if you only tried a few
[20:38] <shubjero> yeah I can throw more at it
