[06:19] <nsnzero> morning all
[06:30] <superfly> morning nsnzero. got my internet working at my new house, wanna cry, I mean, see?
[06:30] <superfly> :-P
[06:31] <nsnzero> welcome back superfly - lol
[06:31] <superfly> nsnzero: http://www.speedtest.net/my-result/a/2798383711
[06:33] <nsnzero> i dont even get 1Mbps 
[06:34] <superfly> I got a 300Mbps down, 30Mbps up connection.
[06:35] <nsnzero> i am so jealous
[06:38] <superfly> I'll be working from home, so I need to make sure that I can always do what I need to (which includes video conferencing).
[06:40] <MaNI> only 30Mbps up, not that impressive, shitty american internet :p
[06:54] <andrewlsd> Mornings Ubuntu-ZA
[06:55] <nsnzero> 300Mbps is faster than my lan
[06:56]  * andrewlsd lurks again
[06:59] <MaNI> but 10 is not :p
[07:18] <theblazehen> Hi all
[07:22] <nsnzero> morning theblazehen 
[07:22] <theblazehen> hi nsnzero
[07:30] <nsnzero> one question : when i start a virtual machine with vboxmanage inside tmux, why does the vm close after i detach from tmux and close the terminal?
[07:36] <theblazehen> nsnzero: Does a vm console pop up, or is it headless?
[07:38] <nsnzero> theblazehen: headless - it starts fine - runs fine - but closes as soon as i end the ssh session - it runs in tmux 
[07:38] <theblazehen> nsnzero: Any reason for virtualbox over kvm?
[07:39] <theblazehen> Running a relatively recent distro?
[07:39] <theblazehen> Does your tmux stay alive?
[07:39] <nsnzero> no reason - it was the first vm i tried out
[07:40] <theblazehen> (Thinking about https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825394)
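[The Debian bug linked above concerns systemd's logind killing a user's background processes, including tmux sessions and headless VMs, when the user logs out. If that is what is happening here, a hedged sketch of the usual workarounds:]

```shell
# Sketch, assuming the linked logind bug is the cause. Either disable the
# behaviour system-wide (note: restarting logind can be disruptive)...
sudo sed -i 's/^#\?KillUserProcesses=.*/KillUserProcesses=no/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind

# ...or allow just this user's processes to linger after logout:
loginctl enable-linger "$USER"
```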
[07:40] <nsnzero> running 16.04 
[07:40] <nsnzero> tmux staying alive - good question - let me check
[07:40] <theblazehen> If you're running on linux, I'd go with kvm (or Xen, if you like. But kvm is more popular)
[07:40] <theblazehen> Simple too
[07:41] <theblazehen> Just `sudo tasksel`, select Virtual Machine Host and install `virt-manager`
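[A sketch of the suggested setup on Ubuntu 16.04; the domain name `myvm` is a placeholder:]

```shell
# tasksel's "Virtual Machine Host" task pulls in qemu-kvm and libvirt
sudo tasksel                     # select "Virtual Machine Host"
sudo apt install virt-manager    # graphical front-end for libvirt/KVM

# a libvirt VM runs as a system daemon, so unlike the VirtualBox case it
# survives the end of the ssh session that started it:
virsh start myvm                 # "myvm" is a placeholder domain name
```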
[07:43] <nsnzero> nice - theblazehen 
[07:52] <andrewlsd> +1 for virt-manager
[07:53] <andrewlsd> virtualbox is nice for running Windows in a VM, with shared folders etc. but if you need to run  other VMs and/or have them start at boot time or even primarily run them headless,  then KVM via virt-manager FTW.
[07:55] <MaNI> Only annoying thing with kvm is networking, there's no nice solution to bridge AND be able to access the host IP 
[07:55] <andrewlsd> ^ hmm yip. I tend to run two bridges.
[07:55] <andrewlsd> or use laptop wifi for internet and laptop eth for bridge of vms.
[07:56] <andrewlsd> (when they need internet access)
[07:59] <MaNI> I have some gross script that adds a macvlan interface onto the hypervisor with the same IP as main network card :(
[08:00] <MaNI> https://pastebin.com/N8xQ2GRk < I don't like it though and I worry that at times it might be the cause of weird network issues
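[The pastebin above is not reproduced here; what follows is only a hedged sketch of the general macvlan-shim technique MaNI describes. Guests attached via macvtap cannot reach the host through the physical NIC, so the host's IP is moved onto a macvlan interface instead. Interface names and addresses are placeholders:]

```shell
# create a macvlan "shim" on top of the physical NIC (names are placeholders)
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip addr flush dev eth0                      # move the IP off the NIC...
ip addr add 192.168.1.10/24 dev macvlan0    # ...onto the macvlan shim
ip link set macvlan0 up
ip route add default via 192.168.1.1 dev macvlan0
```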
[08:00] <nsnzero> virtual box is good for us beginners 
[08:01] <MaNI> I really wish there was just an official way to do this
[08:01] <MaNI> otherwise really happy with kvm though :p
[08:01] <theblazehen> MaNI: A normal bridge Just Works for me?
[08:02]  * nsnzero is slowly becoming a hard core linux user
[08:02] <theblazehen> nsnzero: Easier to `tasksel` and `apt install virt-manager` than virtualbox :) 
[08:03] <theblazehen> It's okay on windows though
[08:03] <MaNI> theblazehen, with 2 network cards you mean? It's easy enough to get it working on a machine with two cards but this machine only has one
[08:04] <MaNI> or you mean you are using bridging instead of e.g. macvtap? IIRC that doesn't scale well if you are running multiple VMs; it's fine for 1 VM
[08:04] <theblazehen> MaNI: Nope, just one. Just a normal bridge, with your IP on the bridge not directly on the nic
[08:04] <theblazehen> > or you mean you are using bridging instead of e.g. macvtap? IIRC that doesn't scale well if you are running multiple VMs it's fine for 1 VM
[08:04] <theblazehen> yeah, that
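[A hedged sketch of the "normal bridge" setup theblazehen describes: the host's IP lives on the bridge rather than on the NIC, so host and guests share one segment and can reach each other. Names and addresses are placeholders:]

```shell
ip link add br0 type bridge
ip link set eth0 master br0     # enslave the physical NIC to the bridge
ip addr flush dev eth0          # the IP goes on br0, not on the NIC
ip addr add 192.168.1.10/24 dev br0
ip link set br0 up
# libvirt guests then attach to "br0" as a bridged network interface
```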
[08:04] <nsnzero> virtual box doesnt auto mount usb disks - which is irritating 
[08:04] <theblazehen> Well, works great with plenty of containers
[08:05] <MaNI> I'll probably just shove another network card in here at some point and be done with it
[08:06] <theblazehen> MaNI: Or are you talking say 100+ VMs on a host, with decent hardware?
[08:06] <MaNI> I tend to have 5 VMs running on my dev box at any given time
[08:07]  * theblazehen didn't have issues with around 6 VMs and 40 containers on old desktop-running-as-server
[08:08] <MaNI> I don't recall bridging working for me in this scenario, but I may remember wrong, or something may have changed - it's been a few years since I last looked; I've just been using the same solution (the script) since I first set it up
[08:08] <theblazehen> Was only getting around 7 gbit/s between containers, but that was more an issue of mtu / cpu / ram speed I think
[08:08] <theblazehen> Yeah, I've only been running that many VMs + containers for like a year or so
[08:09] <MaNI> or it may have even been some hardware (or kernel) specific gotcha - can try it again when I get a break I guess
[08:09] <MaNI> though maybe I should anyway just shove in an extra card - it's only like R100 or whatever for another network card and that solves everything
[08:10] <nsnzero> can i import vbox images into kvm ?
[08:11] <MaNI> you can import vbox hard drives, but you'll have to reconfigure the hardware part of the machine
[08:11] <theblazehen> nsnzero: Yeah, check `qemu-img`
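[A sketch of the `qemu-img` conversion mentioned above; the filenames are placeholders:]

```shell
# convert a VirtualBox disk image to qcow2 for use with KVM/libvirt
qemu-img convert -f vdi -O qcow2 disk.vdi disk.qcow2
qemu-img info disk.qcow2    # verify the converted image

# as noted above, the machine definition (CPU/RAM/NIC) is not converted;
# recreate it with virt-manager or virt-install, pointing at the new disk
```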
[08:11] <nsnzero> mani theblazehen thanks 
[08:13] <theblazehen> MaNI, eh. On new server just getting 11 gbit/s between containers. dunno how exactly it's done between them, but it may be slowed down a bit due to NUMA stuff?
[08:23] <MaNI> No idea, I'm not a hardware/networking guy :p 
[08:27] <theblazehen> Either way, IMO if you're doing more than 10 gbit/s between VMs then it's either storage - in which case, pass through an iSCSI LUN as a direct block device, so it's just guest <-> host storage, not storage <-> guest <-> host <-> guest - or you'll likely be CPU bound anyway (I'm guessing) if you're passing that much application data. Or memory speed bound
[08:28] <theblazehen> Or different NUMA zones like this case I guess, could slow you down
[08:28] <theblazehen> Right. That's a valid use case if you have multiple CPUs
[08:30] <theblazehen> Although in that case, if the application isn't NUMA aware - which it should be if you run multiple instances, and not just for replication - you can use shared memory between containers afaik, which may be faster than the overhead of TCP/IP
[08:31] <theblazehen> Not a huge improvement, but container -> guest network gets 12.6 gbit/s over TCP/IP
[08:32] <theblazehen> Just running a http://ark.intel.com/products/64590/Intel-Xeon-Processor-E5-2650-20M-Cache-2_00-GHz-8_00-GTs-Intel-QPI though
[08:32] <theblazehen> maxing out a single cpu core
[08:34] <theblazehen> For comparison, direct to localhost is just 30.6 gbit/s
[08:34] <MaNI> hehe, my needs are quite a bit more modest than anything like that, I just need dev VMs that can access the rest of the network at reasonable speeds while being able to also ping the hypervisor and not have to be on a different subnet :)
[08:35]  * theblazehen still thinks if your application traffic needs more than 10 gbit/s you should probably use more physical hardware
[08:35] <theblazehen> Heh, yeah :p I like taking things too far though
[08:35] <theblazehen> Have you _seen_ my hardware specs? lol
[08:36] <MaNI> yeah I can only dream of hardware like that
[08:37] <MaNI> which reminds me, I should upgrade soon now that theres finally some consumer CPU competition again
[08:37] <theblazehen> https://linx.li/lawf60tu.txt (I normally use around 64 GiB more RAM, so it's not all wasted)
[08:37] <theblazehen> Although an i7-4790k still ends up a bit faster than my Xeon though. At least I have 2 of them
[08:37] <theblazehen> And an i7-4790k can't address 384 GiB RAM :p
[08:38] <theblazehen> If that nigerian prince gets back to me soon I'll be upgrading to full flash storage though
[09:03] <theblazehen> ... Would anyone be interested in a south african FidoNet node?
[09:09] <nsnzero> is it like freenode ?
[09:09] <theblazehen> nsnzero: It's a BBS basically
[09:10] <nsnzero> ok nice old school 
[09:10] <theblazehen> Yeah
[09:11]  * theblazehen thinks that phone call costs will make people not really want to use it though
[09:11] <theblazehen> (FidoNet is basically a network of BBSs if I understand right)
[09:12] <theblazehen> And exposing it over telnet or something kinda gets rid of the cool part of running a BBS anyway
[09:13] <nsnzero> they all use dsl lines - before it was only dial-up 
[09:14] <theblazehen> nsnzero: yeah. So these days people would probably prefer to telnet into node, rather than dial in
[09:14] <theblazehen> In which case, why bother running a BBS
[09:16] <nsnzero> nostalgia theblazehen 
[09:17] <theblazehen> nsnzero: Heh. /me never got to experience it in the first place :(
[09:18] <nsnzero> looked cool in the old movies - but i also didnt have the thrill of bbs 
[09:18]  * theblazehen got rid of a PCI modem because I never expected to want to use it :(
[09:18] <theblazehen> Yeah. Wargames ftw
[09:18] <theblazehen> nsnzero: Did you know that hackthissite has a phreaking section?
[09:19] <nsnzero> no never knew that
[09:19]  * theblazehen also liked that kind-of phreaking? scene in wargames
[09:19] <nsnzero> war dialing 
[09:20] <nsnzero> telnet telehack.com 
[09:20] <theblazehen> And https://www.youtube.com/watch?v=o5b5GWDqYrk a real phreaking scene
[09:20] <theblazehen> nice ty nsnzero
[09:20] <nsnzero> its got a WOPR server there somewhere 
[09:23] <theblazehen> Nice
[09:26] <theblazehen> nsnzero: Did you get to it, or just know it's there?
[09:30] <nsnzero> i got to it - lol - everything from the movie is there including thermonuclear war
[09:30] <theblazehen> nice
[09:33] <nsnzero> its got 25 000 hosts to explore and hack into
[09:33] <nsnzero> http://telehack.com/telehack.html
[09:33] <theblazehen> wow, nice. ty nsnzero
[10:08] <MaNI> just shoved a second NIC in and put all the VMs on macvtap|eth1 - hooray for hardware solutions
[10:13] <theblazehen> MaNI: but then you're limited by speed of the NIC :/
[10:14] <MaNI> most of the traffic is to external boxes anyway
[10:14] <theblazehen> Ah
[10:15]  * theblazehen wishes my whole network was 10 gbit :( Only (storage + my pc) and server have 10 gbit connection between them
[10:16] <andrewlsd> MaNI: sudo brctl show
[10:17] <andrewlsd> http://pastebin.methlab.lsd.co.za/5zvv316p.txt
[10:17] <andrewlsd> I use a bridge to share network, so that VMs and containers can all talk to each other too.
[10:17] <andrewlsd> one network.
[10:17] <andrewlsd> admittedly, sometimes I remove eth0 from it so that they can't access external stuff.
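[A hedged sketch of detaching the uplink as andrewlsd describes, so VMs and containers keep talking to each other but lose external access; `br0`/`eth0` are placeholders:]

```shell
# legacy bridge-utils syntax (matches the `brctl show` output above)
brctl delif br0 eth0

# or the iproute2 equivalent
ip link set eth0 nomaster
```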
[10:18] <theblazehen> andrewlsd: Nice idea. Have you looked at `ebtables`?
[10:18] <andrewlsd> theblazehen: I have _looked_ at ebtables. I haven't had a cause to use it yet.
[10:18] <theblazehen> http://pastebin.methlab.lsd.co.za/boehzrnn.txt my `brctl show`. Basically the same thing
[10:18] <theblazehen> andrewlsd: Heh. Try to avoid it :p
[10:19] <andrewlsd> except you have a bond device :-D
[10:19] <theblazehen> Ended up just dropping everything to iptables when I used it
[10:19] <theblazehen> andrewlsd: Yeah, but bridge is the same :p
[10:19] <andrewlsd> I configured LXD not to start its own bridge.
[10:20] <andrewlsd> (ditto for `libvirt`)
[10:20] <andrewlsd> interesting that your `virtbr0` has Spanning Tree Protocol enabled.
[10:21] <theblazehen> Hmm. Was the default IIRC
[10:21] <theblazehen> I use br0 for VMs anyway
[10:22] <theblazehen> nsnzero: `apt install bsdgames`, `wargames` :)
[10:23] <theblazehen> Hmm. /me should actually rewrite my `hangman` solver properly
[10:34] <theblazehen> The more I use perl the less I like it
[11:26]  * theblazehen never knew that going from web interface admin -> command execution was a big deal... /me has some reporting to do in that case...
[11:26] <theblazehen> https://www.cvedetails.com/cve/CVE-2017-6334
[12:13] <nsnzero> have a good afternoon everyone
[17:50] <nsnzero> evening all
[18:03] <superfly> Hi nsnzero
[18:05] <nsnzero> hi superfly 
[18:06]  * superfly is busy getting all his Red Hat accounts set up
[18:06] <superfly> They take security seriously.
[18:07]  * nsnzero wonders why he cant ssh into his server
[18:08] <theblazehen> Hi nsnzero, superfly
[18:08] <theblazehen> superfly: Nice. What you going to be working on there?
[18:09] <nsnzero> hi theblazehen 
[18:09] <superfly> theblazehen: I'm a testing engineer on the CloudForms team. CloudForms is Red Hat's "product" version of ManageIQ
[18:10] <nsnzero> congrats superfly 
[18:11] <nsnzero> theblazehen: do you think installing kvm messed up my ssh settings ? i didnt reboot after installing as well 
[18:16] <theblazehen> superfly: Nice
[18:16] <theblazehen> nsnzero: How so?
[18:16] <theblazehen> Can't ssh in?
[18:18] <nsnzero> nope - no errors just no response - server is up 
[18:20] <theblazehen> Can you `ssh -vvv teh.server`?
[18:20] <nsnzero> it emails me  its system state every 30 minutes 
[18:20] <theblazehen> Does it hang after sending version string?
[18:20] <theblazehen> You can ping it?
[18:20] <nsnzero> connection timeout after a long wait 
[18:20] <theblazehen> Does IP come from dhcp or static?
[18:21] <nsnzero> dhcp 
[18:21] <theblazehen> You should have gotten a new IP
[18:21] <theblazehen> Can probably check hostname on dhcp server
[18:22] <theblazehen> Otherwise check arp cache if your local pc is on same lan perhaps
[18:22] <theblazehen> Or just nmap the network if you run on a /24
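[The debugging steps suggested above can be sketched as follows; hostname and subnet are placeholders:]

```shell
ssh -vvv user@the.server      # does it hang after exchanging the version string?
ping -c 3 the.server          # is the host reachable at all?
ip neigh show                 # check the local ARP cache if you're on the same lan
nmap -sn 192.168.1.0/24       # sweep the /24 for the server's new DHCP address
```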
[18:22] <nsnzero> i suppose it  just needs a reboot 
[18:23] <nsnzero> it connected fine on the lan earlier - now it's just not responding 
[18:28] <nsnzero> evening Kilos 
[18:39] <nsnzero> have a good night all 
[19:00] <Kilos> night guys.