[06:06] <lordievader> Good morning
[06:20] <ZPQ> morning
[12:50] <Jenshae> Hi. I am trying to connect remotely to a Ubuntu server. Between me and it, there is a UTM + Windows domain controller. Currently, I can get into it via Windows RDP -> Win terminal session -> PuTTY -> SSH. Ideally, I would like to have a graphical session via freerdp. Any config / guides that you can recommend to help?
[13:10] <jamespage> coreycb: hey do you have any bionic-proposed updates pending testing? running a regression test now so may as well mark any other bugs pending as tested
[13:11] <jamespage> Jenshae: erm well
[13:11] <jamespage> Jenshae: ubuntu server does not come with any sort of graphical environment installed by default that you can connect to
[13:12] <jamespage> SSH is the default method
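(For reference: a minimal sketch of running graphical programs over that default SSH access, assuming OpenSSH on both ends, an X server on the client side, and some X application installed on the server; the hostname and user below are made up.)

    # X11-forwarded SSH session; GUI programs started on the server display locally.
    ssh -X admin@ubuntu-server.example.com
    # On the server (assuming something like xterm is installed):
    xterm &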
[13:15] <coreycb> jamespage: no, nothing in particular. if you're also testing queens-proposed there are security updates that need regression testing.
[13:16] <jamespage> coreycb: I will be doing the UCA next
[13:16] <coreycb> jamespage: great, thanks
[13:16] <jamespage> coreycb: np
[13:42] <aruns> Hey guys, I need some help. I'm working on a dedicated backup server running Ubuntu 18.04 for a client and can't get connectivity on an Ethernet interface named ens1f1 - it shows up under ip link show and in dmesg | grep 'ens1f1', but I'm not sure how to proceed. If I try to SSH into the machine, I get the following: nc: getaddrinfo: Servname not supported for ai_socktype
[13:43] <aruns> So I presume that I cannot SSH into the machine because it has no network connectivity.
[13:43] <aruns> I can bring up the contents of /etc/network/interfaces if needed.
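(For reference, a hedged sketch of a static ifupdown stanza for that interface, since /etc/network/interfaces was mentioned; note that 18.04 normally uses netplan instead, and the address/gateway values below are placeholders, not aruns's real config.)

    # /etc/network/interfaces (hypothetical values)
    auto ens1f1
    iface ens1f1 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
    # Apply and verify (from the console, since SSH is unavailable):
    sudo ifdown ens1f1; sudo ifup ens1f1
    ip -br addr show ens1f1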
[14:05] <avu> aruns: does `ip a` show valid addresses for the interface?
[14:08] <aruns> Yes.
[14:09] <aruns> A static IP of 192.168.111.25 has been set for the interface.
[14:09] <aruns> ip a shows both 192.168.111.25/24 and 192.168.111.202/24
[14:09] <aruns> For the ens1f1 interface.
[14:10] <aruns> Does that seem correct?
[14:14] <avu> having two different IPs in the same subnet doesn't seem correct, no
[14:19] <Ussat> why not? You can have multiple IPs in the same subnet
[14:22] <avu> sure, if you do it on purpose and set up the system to know which address to use in which case
[14:27] <avu> since aruns only mentioned a static IP being configured, I assumed the second one didn't happen on purpose, which leads me to believe that the interface configuration doesn't reflect what the user wants to achieve
[14:32] <aruns> Yeah, it's a client's server as well; they gave us carte blanche to do what we want, but also a limited deadline :/
[14:35] <avu> aruns: not sure how that relates. Did you mean to give the machine two different IPs on the same subnet on the same interface?
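(If that second address really is unintended, a minimal sketch of dropping it, using the interface and addresses quoted above; only do this once it's clear which address should survive.)

    # Remove the presumably stray secondary address; the intended static one stays.
    sudo ip addr del 192.168.111.202/24 dev ens1f1
    # Confirm what is left on the interface:
    ip -br addr show ens1f1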
[16:34] <TJ-> Unexpected issue on 18.04 - adding a gretap interface also creates an erspan interface, and then "ip link del XXXX" seems to silently fail for each of the erspan/gretap/gre interfaces that were created. Anyone have experience of this or suggestions on what's going on?
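(A rough way to reproduce and inspect what TJ- describes, with made-up tunnel endpoints; the automatic erspan0 device is a side effect of the ip_gre module loading, and the behaviour of the deletes is exactly what is in question here, so treat this as a probe rather than a fix.)

    # Hypothetical endpoints, just to trigger the side effect:
    sudo ip link add gretap1 type gretap local 192.0.2.1 remote 192.0.2.2
    # Show gre-family devices in detail, including any auto-created fallback ones:
    ip -d link show
    # Attempt the delete and print the exit status rather than trusting silence:
    sudo ip link del gretap1; echo "del exit status: $?"
    # The gre0/gretap0/erspan0 fallback devices belong to the module and normally
    # only disappear when it is unloaded (and not in use):
    sudo modprobe -r ip_gre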
[18:16] <Jenshae> jamespage: I did install LXDE and x2goserver, and I can connect to it on the LAN. What I am struggling with is routing a connection from a remote site to it. Is there some sort of config I need to do for it to listen for connections from the UTM, etc.?
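(Since x2go runs over SSH, one hedged option for the remote-site leg is an SSH jump or tunnel through whatever machine is already reachable; host names below are made up, the jump host must itself run an SSH server, and whether the UTM permits this depends entirely on its rules.)

    # Jump through an already-reachable SSH host to the server's SSH/x2go port:
    ssh -J user@reachable-host.example.com user@ubuntu-server.lan
    # Or forward a local port and point x2goclient at localhost:2222:
    ssh -L 2222:ubuntu-server.lan:22 user@reachable-host.example.com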
[18:18] <Jenshae> Oh and on an 18.04 note, I couldn't install it via manual partitioning. I ended up using a desktop persistent USB to use gparted, then I could only install 16.04; I upgraded to 18.04 and now things like NetworkManager don't work.
[18:19] <Jenshae> Can't remember the name of the new thing, some nmap config thing that search results keep returning?
[18:24] <sarnold> Jenshae: there are some known issues mentioned in the release notes that might be related to your partitioning problem https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes#Known_issues
[18:28] <Jenshae> sarnold: It was to do with swap space: it kept trying to grab space that I wanted to leave free for the ZFS raid, and then it wanted an encrypted space but couldn't mount the partitions I allocated to it after encrypting them. That was raw.
[18:29] <Jenshae> I was trying to have drive 1 as: 5.5GB for /boot, 10GB for LVM / RAID 5 /root, and the rest of the space as a software raid.
[18:30] <Jenshae> On drives 2-3, the first partition was LVM swap; then I tried having the first partitions as raw.
[18:34] <nacc> 5.5 GB /boot ??
[18:36] <teward> um... that's huge
[18:36] <nacc> and a waste of space in general, I'd think
[18:37] <nacc> a 10GB /root is also ... weird
[18:38] <teward> nacc: I can understand a 512MB or 1GB /boot if you don't want to have to autoremove old kernels regularly, but 5GB is obscene :|
[18:38] <nacc> teward: agreed
[18:41] <Ussat> I do 2GB
[18:41] <Ussat> 5 is ...big
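(For reference, the autoremove teward mentions is the usual way of keeping /boot small on a modest partition; a one-line sketch, with --purge being optional.)

    # Remove old, no-longer-needed kernels and their files under /boot:
    sudo apt autoremove --purge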
[18:42] <jelly> I always put a 4GB recovery live .iso in /boot, you don't?
[18:42]  * jelly hides
[18:42] <teward> *finds jelly, drags jelly out into the desert, ties them to a pole, then returns, leaving jelly in the desert alone*
[18:42] <teward> (just kidding!)
[18:43] <jelly> you can't drag a jelly anyway
[18:43] <nacc> Jenshae: --^ fwiw, those comments were for you :) [not the stuff about jelly, before that]
[18:44] <jelly> well they weren't for them as much as about their unusual specs
[18:45] <jelly> (grml-rescueboot is neat tho, even if not completely serious)
[18:46] <sarnold> when the drives are 10TB, fiddling over a few gigs here or there feels a bit funny :)
[18:46] <Jenshae> nacc: 10GB x4 for /root, and the 5.5GB /boot is because sometimes, for some reason, apt will use /boot to temporarily store files, plus symmetry with the other drives.
[18:48] <nacc> sarnold: ah, sorry, missed that context
[18:49] <sarnold> nacc: I don't know how large Jenshae's drives *actually* are.. it's just amused me in the past that this problem feels easy enough to address by the application of more money :)
[18:49] <jelly> using raid5 for / seems silly as well, in that case
[18:49] <nacc> sarnold: absolutely
[18:49] <jelly> nothing wrong with raid10 or raid1
[18:50] <Jenshae> Well, in the end I resorted to getting another drive and setting it up as the boot one, nothing interesting, no RAID config, etc. Then I just attached the four drives as a ZFS pool.
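(A rough sketch of that final layout with hypothetical device names; raidz is an assumption, since Jenshae doesn't say which vdev layout the pool actually uses.)

    # OS on its own plain disk; the four data disks form a single ZFS pool:
    sudo zpool create backup raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Verify the pool layout and health:
    zpool status backup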