[15:27] <moha> Hi
[15:29] <moha> I have installed MicroStack within a VM (=172.16.250.10) and I can access the Horizon console on this IP from outside of the VM; when I create a machine in this MicroStack, it gets an IP in the 10.20.x.x range. How can I access this machine (e.g. 10.20.20.5) via the VM's IP? I mean reaching that 10.20.20.5 from outside the VM; I was thinking of running a router within the VM alongside MicroStack, right?
[15:40] <RoyK> moha: both 172.16.* and 10.* are addresses specified in RFC 1918 - they are non-routable on the internet and typically have a NAT router in front where you set up which ports should be forwarded to them
[15:40] <RoyK> moha: alongside 192.168.*
[15:41] <moha> The 172.x NATing is handled by the hypervisor, but I want to somehow route to that 10.20.20.5 through 172.x on a specific port
[15:42] <RoyK> dunno - this isn't really standard ubuntu as far as I know, though
[15:43] <moha> Ubuntu is installed within the VM; Microstack's installed on Ubuntu.
[15:43] <moha> Should I search for "How to configure Ubuntu as a router"?
[15:51] <RoyK> I guess you'll figure that out in Microstack
[15:53] <RoyK> I don't use that - I use kvm/libvirt - with that I use a bridge on my server (debian, not ubuntu, but generally the same thing), so that the VMs can connect to the network like anything else behind the router. It also allows for direct non-nat access for IPv6, which is nice
[15:53] <RoyK> !bridge
[15:54] <RoyK> https://www.tecmint.com/create-network-bridge-in-ubuntu/ might do
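[Editor's note: the port-forwarding approach moha describes can be sketched with plain iptables DNAT rules on the Ubuntu VM. This is a hedged sketch, not MicroStack's own mechanism — the interface-free rules below assume the VM's outside address is 172.16.250.10 and the instance is 10.20.20.5, per the conversation; the port numbers are arbitrary examples. MicroStack's more idiomatic route may be OpenStack floating IPs.]

```shell
# On the Ubuntu VM hosting MicroStack (run as root).
# Assumption: VM outside address 172.16.250.10, instance 10.20.20.5.

# Let the kernel forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Forward TCP port 2222 on the VM's address to SSH (22) on the instance
iptables -t nat -A PREROUTING -p tcp -d 172.16.250.10 --dport 2222 \
  -j DNAT --to-destination 10.20.20.5:22

# Rewrite the source so return traffic flows back through the VM
iptables -t nat -A POSTROUTING -d 10.20.20.5 -j MASQUERADE
```

With rules like these in place, `ssh -p 2222 user@172.16.250.10` from outside the VM would land on the instance — assuming MicroStack's own security groups also allow the traffic.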
[16:52] <znf> what's that new annoying thing in 22.04 that asks you about services to be restarted every damn time you use apt?
[16:56] <znf> answer: 'needrestart' package
[16:57] <frickler> znf: if you find a way to disable that, I'd be interested to hear about it, too
[16:57] <znf> just remove the needrestart package
[16:57] <znf> I don't need that sort of negativity in my life
[17:00] <frickler> oh, easy, thx
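[Editor's note: removing the package works, but needrestart can also just be made non-interactive via its config file. A sketch, assuming the stock Ubuntu 22.04 layout where the default line in /etc/needrestart/needrestart.conf is the commented-out `#$nrconf{restart} = 'i';`:]

```shell
# Keep needrestart installed but stop the interactive prompt:
# 'a' = restart affected services automatically, 'l' = list only.
sudo sed -i "s/#\$nrconf{restart} = 'i';/\$nrconf{restart} = 'a';/" \
  /etc/needrestart/needrestart.conf
```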
[17:04] <RoyK> Your mouse moved! Windows must be restarted for the change to take effect.
[19:04] <ahasenack> does anybody know if I should get (slightly) better performance when using Soft-RoCE instead of the normal network stack? For example, for an NFS server?
[19:05] <ahasenack> in a couple of test vms, I actually got 3x better performance without RoCE, but I'm unsure how much the fact that these are VMs is interfering
[19:05] <ahasenack> (and they used virtio networking)
[19:05] <patdk-lap> well, vm changes everything so
[19:06] <sarnold> i've never heard of soft-roce but i'm not too surprised to hear virtio networking is pretty good already
[19:07] <patdk-lap> the advantages of rdma just go downhill quick when using jumbo frames and especially when using tso, lro, gro, ...
[19:07] <patdk-lap> it's more the large packet rollups that make it all feel good
[19:07] <patdk-lap> offloading all those packet stuff to hardware
[19:08] <patdk-lap> I found using infiniband with max mtu size, 64k I think, was nearly as fast as rdma
[19:08] <patdk-lap> and that is what offloading gets you without having to use extra large mtu sizes
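[Editor's note: the offloads patdk-lap is referring to (TSO, GRO, LRO, scatter/gather) can be inspected per NIC with ethtool. The interface name `eth0` below is a placeholder; substitute your own.]

```shell
# Show which large-packet offload features a NIC currently has enabled
ethtool -k eth0 | grep -E \
  'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload|large-receive-offload|scatter-gather'
```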
[19:16] <ahasenack> I'll try on real hardware next (but still not specialized hardware, just off-the-shelf nics)
[19:16] <patdk-lap> what nic? most nics will support it
[19:16] <ahasenack> whatever is in my laptops :)
[19:17] <patdk-lap> oh, not good nics at all
[19:17] <ahasenack> some generic intel gigabit I think
[19:17] <patdk-lap> but should still support offloading
[19:17] <ahasenack> I mean, it's *soft* RoCE
[19:17] <ahasenack> I'm not expecting a leap
[19:17] <ahasenack> but was wondering if it would be better, since it would bypass the network stack, IIRC
[19:18] <ahasenack> or at least similar
[19:18] <patdk-lap> well, it's just rdma hasn't been a huge push, since all the large packet offload and scatter/gather offload features got added to nics
[19:18] <patdk-lap> the improvement of using rdma was very complex and minimal when compared
[19:18] <ahasenack> I see
[19:18] <patdk-lap> the network stack in linux is so ideally tuned (for tcp at least)
[19:18] <patdk-lap> it's kindof hard to do better :)
[19:19] <patdk-lap> anything is possible, and everything is highly different outcomes depending on exact usecase so :)
[19:20] <patdk-lap> when I did my big testing in 2011, jumboframes and rdma was ideal
[19:20] <patdk-lap> but then a few years later, neither made sense because of the large offloads; I even debated moving all my jumbo frames back to 1500 mtu
[19:20] <patdk-lap> but overall laziness said, keep it the same, conversion takes time and energy :)
[19:33] <ahasenack> well, this started with me trying to make sure rdma was being used, and the performance difference confirmed it, I just didn't expect it to be 3x slower ;)
[19:33] <ahasenack> but I'll try outside of a vm next, to be more fair
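[Editor's note: for readers wanting to reproduce this, Soft-RoCE (the rxe driver) can be attached to an ordinary NIC with the iproute2 `rdma` tool. A hedged sketch — the interface name `enp3s0` is a placeholder, and the NFS-over-RDMA hint assumes a server already exporting with RDMA enabled on the standard port 20049:]

```shell
# Create a Soft-RoCE device on a plain Ethernet NIC (run as root)
modprobe rdma_rxe                            # kernel soft-RoCE driver
rdma link add rxe0 type rxe netdev enp3s0    # bind rxe to the NIC
rdma link show                               # verify rxe0 is listed

# Then, on an NFS client, RDMA transport is selected at mount time:
#   mount -o proto=rdma,port=20049 server:/export /mnt
```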
[22:45] <blackboxsw> Hey folks, I just responded to a bug about lintian warnings we are receiving in cloud-init packaging because of the ~XX.YY.Z diminished package version syntax "W: cloud-init source: binary-nmu-debian-revision-in-source 22.2-0ubuntu1~18.04.2". If anyone gets a chance to look my response over tomorrow for corrections/omissions, that'd be great
[22:45] <blackboxsw>  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014584#18
[22:46] <blackboxsw> it seems something changed in lintian within the last 6-12 months related to nmu version handling that is generating these lintian warnings; I don't recall seeing them during package builds even last year, and cloud-init packaging has used the ~XX.YY.Z syntax for quite some time.
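[Editor's note: if the tag turns out to be a false positive for Ubuntu-style stable-update versions, lintian supports silencing it with a source override. A sketch using the exact tag from the warning above; whether an override is the right fix (versus a change in lintian itself) is what the linked bug discusses:]

```shell
# In the cloud-init packaging tree: add a source-level lintian override
# for the tag reported against versions like 22.2-0ubuntu1~18.04.2
echo 'cloud-init source: binary-nmu-debian-revision-in-source' \
  >> debian/source/lintian-overrides
```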