[01:48] <Tuv0x> Anybody know of a sort of web portal to all your services? (Plex, Portainer, Deluge, Node-red, etc...)
[01:56] <Tuv0x> Guess I'll write my own.
[07:09] <lordievader> Good morning
[10:14] <Skyrider> Morning :D
[10:42] <lordievader> 👋
[11:26] <Skyrider> Greetings all.. Got a quick question.
[11:27] <Skyrider> I've set up quite a few .service / .timers on my systemd, and noticed after I rebooted the server a few times, those timers ran when they shouldn't have.
[11:28] <Skyrider> There's nothing in the timer or service saying they should run on system boot either.
[11:35] <lordievader> Are they persistent?
[11:38] <tomreyn> systemctl list-timers   helps reviewing timers' status (in case you didn't do this yet)
[11:48] <Skyrider> Meh, I can't seem to figure it out
[11:49] <lordievader> In case you missed my question: Are they persistent?
[11:50] <Skyrider> Oh, sorry. Rebooted, so I didn't see.
[11:50] <Skyrider> 1 sec, will paste the timer/service file.
[11:51] <Skyrider> https://paste.ubuntu.com/p/mR6WdpTV6z/ - timer and Service: https://paste.ubuntu.com/p/XwfY6zgYgv/
[11:55] <lordievader> Skyrider: How often should this run? (And why specify 2020?)
[11:56] <lordievader> As [1] suggests, `systemd-analyze calendar <time>` is a useful way of checking your interval notation.
[11:56] <lordievader> [1]: https://unix.stackexchange.com/questions/126786/systemd-timer-every-15-minutes
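As a minimal sketch of that check, feeding a 10-minute interval to it prints the normalized form and the next trigger time (the output below is illustrative; the exact layout varies by systemd version):

    $ systemd-analyze calendar "*:0/10"
      Original form: *:0/10
    Normalized form: *-*-* *:00/10:00
        Next elapse: Mon 2020-01-06 12:10:00 CET
           From now: 8min left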
[11:56] <frickler> coreycb: jamespage: do you have some planned schedule for updating rocky uca? or do I just have to nag you often enough? ;) https://bugs.launchpad.net/cloud-archive/+bug/1853320
[11:58] <Skyrider> I specified 2020 because I set up the script in 2019.
[11:58] <Skyrider> Didn't want it to run in 2019 :)
[12:00] <Skyrider> Link doesn't show anything to prevent it from running on boot, though
[12:00] <lordievader> Was just giving my source ;)
[12:00] <lordievader> Can you answer my question, though?
[12:06] <Skyrider> persistent, how?
[12:06] <Skyrider> There are no persistent rules in the service/timer files.
[12:08] <lordievader> No, how often should it run, I meant ;)
[12:11] <Skyrider> Multiple scripts: one every 10 minutes, another daily, and another weekly.
[12:12] <Skyrider> Since remove/move commands are also in the scripts, it's screwing things up.
[12:13] <lordievader> Right, so this one runs every 10 minutes. Couldn't it be that the next run passed during the boot?
[12:14] <Skyrider> The 10 min? Sure.
[12:14] <Skyrider> Mon 2020-01-13 23:58:00 CET  6 days left   n/a                          n/a      inferno-daily-archiver.timer inferno-daily-archiver.service
[12:14] <Skyrider> That one however, ran after boot.
[12:15] <jamespage> frickler: I'd defer to coreycb on that one - we try to have a monthly cadence for each series but that does not always work out
[12:16] <lordievader> Skyrider: Was the service "enabled" by accident?
[12:18] <frickler> jamespage: great, I just noticed that I also need neutron-fwaas releases being made, so maybe not a bad thing if it takes another couple of days. https://review.opendev.org/701358
[12:19] <Skyrider> I only enabled the timer.
[12:20] <Skyrider> Even when I run the disable service command, no response is given.
[12:20] <Skyrider> So there are no symlinks.
[12:22] <frickler> jamespage: regarding this glibc bug, do you think bumping the minor version in bionic-updates would seem feasible? or do we need to locate specific patches? https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1839592
[12:22] <frickler> we did run for 3 weeks now without hitting the issue again on >100 machines
[12:22] <Skyrider> When I check the multi-user.target.wants directory, only timers are shown, not the services.
[12:23] <Skyrider> So that looks normal as well.
[12:26] <lordievader> Skyrider: What does `systemctl cat inferno-daily-archiver.timer` return?
[12:28] <Skyrider> Same as https://paste.ubuntu.com/p/mR6WdpTV6z/ - but with a different trigger ("OnCalendar=Mon 2020-*-* 23:58:00") and WantedBy=multi-user.target; I changed it to timers.target in the minute timer while trying a few things.
[12:35] <lordievader> Yeah, that should be timers.target.
[12:35] <lordievader> Did you test it with that target?
[12:36] <Skyrider> Let me adjust it.
[12:37] <Skyrider> Do I have to disable/enable after changing it, or will a daemon-reload do?
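For reference, `systemctl daemon-reload` only re-reads the unit files; moving the WantedBy= symlink requires a disable/enable cycle. A minimal sequence, using the timer name from the log:

    sudo systemctl disable inferno-daily-archiver.timer   # drops the old multi-user.target.wants/ symlink
    sudo systemctl daemon-reload                          # pick up the edited [Install] section
    sudo systemctl enable inferno-daily-archiver.timer    # creates the timers.target.wants/ symlink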
[12:51] <Skyrider> Okay, rebooted.
[12:51] <Skyrider> All timers still ran regardless.
[12:52] <lordievader> Are they present in the multi-user.target?
[12:53] <Skyrider> Only 3 timers still exist in multi-user.target, none of them are important atm :p
[12:53] <Skyrider> I disabled/enabled all timers as well just to be sure before the reboot
[12:53] <Skyrider> Created symlink /etc/systemd/system/timers.target.wants/inferno-daily-archiver.timer → /etc/systemd/system/inferno-daily-archiver.timer.
[12:54] <Skyrider> if I cat the file, WantedBy=timers.target exists.
[12:55] <lordievader> Did all of them run or just those from multi-user.target?
[12:55] <Skyrider> All.
[12:56] <lordievader> Even though Persistent= is false by default, what happens when you make it explicit?
[12:57] <Skyrider> Persistent=False in timer?
[12:57] <lordievader> Yes
[12:57] <lordievader> As per https://www.freedesktop.org/software/systemd/man/systemd.timer.html
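A minimal sketch of the timer with Persistent= made explicit, reusing the OnCalendar= value quoted above (the description is a placeholder):

    [Unit]
    Description=Daily archiver timer (sketch)

    [Timer]
    OnCalendar=Mon 2020-*-* 23:58:00
    Persistent=false    # do not catch up on runs missed while the machine was off

    [Install]
    WantedBy=timers.target

With Persistent=true, a timer whose scheduled run was missed while the machine was powered off fires immediately at boot, which matches the symptom described here.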
[13:17] <coreycb> frickler: jamespage: i'll push on getting what's currently in -proposed released today, and will get some bugs created for the next batch. we try to make sure not to release prior to holidays but the goal is a monthly cadence.
[13:19] <coreycb> frickler: I'm going to grab that rocky one from sahid
[14:51] <Skyrider> Anyone here happen to be familiar with CSF and its web UI?
[14:57] <vlm> Not very confident with the dd tool; is it possible to read only, say, blocks 1100 to 1200 from a file or device?
[14:58] <sdeziel> vlm: yes, use skip= and count= (skip= positions within the input; seek= would position within the output)
[15:01] <vlm> sdeziel: ill look that up thanks!
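A minimal sketch of what sdeziel suggests (the device name and block size are placeholders; dd defaults to 512-byte blocks):

    # copy 100 blocks starting at block 1100 of the input
    dd if=/dev/sdX of=slice.bin bs=512 skip=1100 count=100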
[15:09] <sdeziel> is there a way to insert routes on boot? I'm trying to blackhole a bunch of IPs and would like the blackhole routes to persist through reboots. netplan/systemd-networkd is unfortunately not an option for various reasons
[15:15] <vlm> I'm interested in that too. I've used rc.local on my non-systemd boxes before with ip route; would be nice if there's some way through a systemd unit file or so
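A minimal sketch of such a oneshot unit (the file name and the blackholed prefixes are placeholders; the leading '-' lets boot proceed if a route already exists):

    # /etc/systemd/system/blackhole-routes.service
    [Unit]
    Description=Install blackhole routes at boot
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/ip route add blackhole 192.0.2.0/24
    ExecStart=-/sbin/ip route add blackhole 198.51.100.7

    [Install]
    WantedBy=multi-user.target

Enable it once with `systemctl enable blackhole-routes.service`.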
[15:47] <tds> sdeziel: if you're just blackholing, you could handle that at the firewall level with ipsets?
[15:48] <sdeziel> tds: yes but then I need to bother with OUTPUT and FORWARD which I'd like to avoid
[15:49] <tds> do you need to be able to easily make live changes?
[15:49] <sdeziel> tds: re ipsets, how do you restore them on boot? Is there any documentation you could point me to?
[15:49] <sdeziel> tds: yes
[15:50] <tds> for ipsets, look at the ipset-persistent package
[15:50] <tds> it's a hook for netfilter-persistent
[15:50] <sdeziel> tds: thanks that's what I wanted
[15:50] <tds> awesome
[15:50] <sdeziel> but that's my plan b ;)
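A minimal sketch of the ipset approach tds describes (the set name and addresses are placeholders); the netfilter-persistent hook then saves/restores the set across reboots:

    ipset create blackhole hash:ip
    ipset add blackhole 203.0.113.5
    # these are the OUTPUT/FORWARD rules sdeziel mentioned wanting to avoid
    iptables -I OUTPUT  -m set --match-set blackhole dst -j DROP
    iptables -I FORWARD -m set --match-set blackhole dst -j DROP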
[15:51] <tds> if you want to be able to make live but persisting changes to routes, a rather overkill solution would be a routing daemon like BIRD
[15:51] <teward[m]> sdeziel: ummmm `ipset-persistent`?
[15:51] <teward[m]> don't you mean `iptables-persistent`?
[15:52] <teward[m]> ah no i'm blind NEVERMIND
[15:52]  * teward[m] can't read today
[15:53] <sdeziel> vlm: I found a semi-decent way using systemd-networkd drop-in snippets. This is a little annoying to make work with netplan but it does work
[15:53] <sdeziel> vlm: let me know if you want me to elaborate
[16:32] <vlm> sdeziel: was afk a while, please do tell your solution
[16:34] <vlm> also I didn't know there was a netfilter hook for ipset; I made my own unit file just for that. Nice find
[16:34] <sdeziel> vlm: I am hesitating between 2 ways. One uses a dummy NIC (https://paste.ubuntu.com/p/MzJHfkGK2T/) and the other uses drop-in files that supplement the netplan-generated config
[16:36] <vlm> sdeziel: interesting, I've not dealt too much with systemd-networkd. Thanks for sharing
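A minimal sketch of the drop-in variant (the unit name must match whatever netplan generated, e.g. 10-netplan-eth0.network on 18.04, and the destination is a placeholder):

    # /etc/systemd/network/10-netplan-eth0.network.d/blackhole.conf
    [Route]
    Type=blackhole
    Destination=192.0.2.0/24

networkd applies drop-ins on top of the files netplan writes under /run/systemd/network; restarting systemd-networkd (or a reboot) picks them up.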
[16:39] <sdeziel> tds: looks like ipset-persistent is not packaged but is easy enough to set up according to https://selivan.github.io/2018/07/27/ipset-save-with-ufw-and-iptables-persistent-and.html
[16:39] <tds> what version of ubuntu is this?
[16:40] <sdeziel> tds: I'm on 18.04
[16:41] <tds> oh yes, looks like ipset-persistent only appears on 19.04 onwards :(
[16:43] <shibboleth> apt-get install ifupdown
[16:43] <shibboleth> apt-get purge netplan.io
[16:43] <shibboleth> tada
[16:44] <sdeziel> tds: ah, right, thx
[16:45] <vlm> sdeziel: if you want I might be able to paste the unit file for ipset I made, although I didn't make it all by myself (some of it I found on the net), but it seems to work ok
[17:09] <sdeziel> vlm: thanks but I'll backport the package to my PPA
[19:54] <shubjero> Hi, I am trying to achieve greater than 10Gbps network throughput on an instance that lives on a compute node which has two 10G NICs in LACP (layer 3+4 hash policy). At the hypervisor level I can run iperf tests with two other clients and achieve 20Gbps, but when I try a similar test at the virtualized level I can only hit 10Gbps, with all of that traffic going over only one of the
[19:54] <shubjero> hypervisor's NICs in the bond. Any thoughts?
[20:05] <Ussat> which hypervisor, what NIC drivers?
[20:05] <shubjero> Ubuntu 18.04 kvm/qemu. Virtio nic driver
[20:05] <compdoc> eww
[20:07] <shubjero> This is just a single compute node in a large openstack cluster
[20:07] <sdeziel> shubjero: the hashing policy could have both sides of your iperf going over a single NIC if you are unlucky
[20:10] <shubjero> Well I noticed that the iperf3 tests I would perform at the hypervisor level would sometimes only reach 10Gbps, but a simple restart of one of the iperf3 clients would allow it to be load balanced across both links and utilize 20Gbps.
[20:10] <shubjero> I've not been able to repeat those results at the virtualized level though. Seems strange
[20:11] <shubjero> And in each iperf3 test scenario, we are talking about traffic from 3 different servers.
[20:11] <shubjero> virtual and bare metal
[20:12] <sdeziel> shubjero: the hashing fans out flows to the underlying NICs in a semi-random fashion (it depends on the src/dst IPs and ports used), so it's important to repeat the tests a couple of times: each time your src port will be different and you get another chance to move the flow to another NIC
[20:13] <sdeziel> that said, you probably have been touching both NICs already
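A rough sketch of why this happens, simplified from the layer3+4 formula in the kernel bonding docs (real kernels mix the bits further):

    hash  = (src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xffff)
    slave = hash % n_slaves    # with 2 NICs, only the low bit picks the link

So a flow between fixed endpoints only changes link when a port changes, which is why re-running iperf (new ephemeral src port) can land it on the other NIC.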
[20:13] <shubjero> sdeziel: yeah i've definitely done that. I am wondering if perhaps the lack of balancing is happening at the bridge level
[20:16] <sdeziel> shubjero: dunno if that's still relevant but I remember loading the vhost_net module to get better performance. Seems to be done automatically on 18.04 now though
[20:17] <tds> shubjero: how are you running iperf exactly, testing on multiple flows?
[20:18] <shubjero> tds: On the server with 20Gbps, I'm spawning two instances of the iperf3 server (listening on two ports). Then on each client I connect to the server's IP on one of those ports; I've tried multiple streams on each but it doesn't change the result
[20:20] <tds> hmm, if you're doing full l3+4 hashing, i'd expect a single iperf run with -P 10 or something to get hashed over multiple links fine
[20:20] <shubjero> tds: Yeah, and that all works at the hypervisor/bare-metal level. It's when I run the same test at the instance level that I don't see balancing occurring
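A minimal sketch of such a test (the ports and server address are arbitrary placeholders):

    # on the 20Gbps server: two listeners on different ports
    iperf3 -s -p 5201 &
    iperf3 -s -p 5202 &

    # on each client: 10 parallel streams to one of the listeners
    iperf3 -c <server-ip> -p 5201 -P 10

With layer3+4 hashing, each of the 10 TCP flows gets its own source port and can hash onto a different bond member.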
[20:21] <tds> what does the network config on the host look like? just a plain bridge with the bond as a slave?
[20:22] <shubjero> tds: http://paste.openstack.org/show/788141/
[20:24] <tds> how is the vm tied to that bond exactly? full ip a / ip l output would be great
[20:25] <shubjero> tds: Here's the ip a output from the compute node: http://paste.openstack.org/show/47tQwySQU1GmAd0tDtZ4/
[20:28] <tds> shubjero: that's definitely not a standard network config, we'll need more details on how the vm's connectivity works - it looks like ovs is involved as well, are you running overlay networks of some kind?
[20:29] <tds> the output of `ovs-vsctl show` may say a little more about what's going on if so
[20:29] <shubjero> Yes, this is a compute node in openstack so neutron is involved.
[20:31] <shubjero> tds: http://paste.openstack.org/show/v9s6nrLJNFcKrzAUpen1/ there's brctl show and ovs-vsctl show
[20:31] <tds> i'd take a wild guess that it's running some kind of overlay network (eg vxlan) between nodes, and your vxlan flows are getting hashed onto a single link
[20:31] <shubjero> Yep, GRE actually
[20:31] <tds> ah right, there's your issue then
[20:31] <shubjero> Elaborate?
[20:33] <tds> your bond will hash on tcp/udp flows, so probably only l3 info for gre (i'd guess) - therefore, you only end up with one flow as far as the bonding driver is concerned
[20:34] <shubjero> I would have assumed, though, that because I was performing the iperf3 tests from multiple VMs that live on different compute nodes with different GRE IPs, it would at some point be load balanced across the two links. I've yet to see that so far
[20:35] <shubjero> I even tried changing the bonding hash method to encap3+4
[20:35] <shubjero> encap3+4: This policy uses the same formula as layer3+4 but it relies on skb_flow_dissect to obtain the header fields, which might result in the use of inner headers if an encapsulation protocol is used.
[20:35] <shubjero> The default value is layer2. This option was added in bonding version 2.6.3. In earlier versions of bonding, this parameter does not exist, and the layer2 policy is the only policy. The layer2+3 value was added for bonding version 3.2.2.
[20:35] <shubjero> from: https://help.ubuntu.com/community/UbuntuBonding
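For reference, the policy can be flipped at runtime through sysfs on reasonably recent kernels (the bond name is a placeholder; make it persistent via your netplan/ifupdown bond options):

    echo encap3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
    cat /sys/class/net/bond0/bonding/xmit_hash_policy    # verify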
[20:36] <shubjero> I can reach out to some openstack operators to see if any of them have dabbled in this
[20:36] <tds> encap3+4 sounds a lot more like what you might want, though i've no idea if it behaves properly with ovs rather than linux doing the encap
[20:36] <tds> yes, that would probably be sensible, or ovs people
[20:36] <tds> do you have 10g links between switches anywhere in the topology?
[20:36] <shubjero> Do you know of an ovs-centric channel?
[20:37] <tds> since even if you can persuade linux to hash on l3/4, that's a bit useless if the switch will hash it all down a single 10g link later
[20:37] <shubjero> tds: Well, the iperf3 CLIENTS are 10G limited, but I am using multiple clients so that shouldn't be the issue; plus it all works at the non-virtualized layer (bypassing neutron, gre, ovs, etc)
[20:38] <tds> but I would expect the tests over multiple nodes to get split over the different links
[20:38] <shubjero> me too
[20:38] <tds> if you have even more nodes handy for testing, i'd definitely try that, might've just been you got unlucky if you only tried a few
[20:38] <shubjero> yeah I can throw more at it