[06:20] Good morning
=== m1dnight_ is now known as sometesttempnick
=== sometesttempnick is now known as m1dnight_
[10:44] lordievader: good localtime(); # ;)
[10:48] Heho RoyK
[10:48] Doing alright?
[11:15] it's ok - quarantined atm, waiting for a test result, but hell, it's ok
[20:59] OK, so... I just changed the hostname / IP on an Ubuntu 18.04 system: modified the netplan file, changed the hostname with hostnamectl, edited the /etc/hosts file, but sshing in is REAL slow. What did I miss?
[21:00] Ussat: generally that means name resolving is timing out
[21:01] sshd hasn't done PTR lookups by default for a while now
[21:02] Ya... that's the issue, trying to find out why
[21:02] I just don't know what I missed
[21:02] I mean it just changed from xxx.xxx.xxx.120 to xxx.xxx.xxx.119, same subnet
[21:03] Ussat: anything special in "ssh -v"?
[21:03] nothing needed to change re resolver
[21:03] Ussat: once the SSH session is up, do sudo ... commands also delay?
[21:04] why does apt autostart everything before you're allowed to configure it?
[21:05] TJ-, yes
[21:05] and is there any way to make it stop doing that?
[21:05] I know it's name resolution, just not sure what config I missed
[21:05] especially for things like webservers, bitlbee, irc bouncers, etc
[21:05] Ussat: right, thought as much. the issue is that nsswitch is still trying to find the old hostname
[21:06] (and even transmission-daemon tbh)
[21:06] TJ-, ok... how to tell it to stop :)
[21:06] Ussat: maybe check you don't have UseDNS in sshd_config?
[21:06] The RHEL system I did 20 mins before this was fine
[21:07] sdeziel, that wouldn't fix the nsswitch issue though
[21:07] (just... how do you stop apt from making the system pwnable by default .-.)
[21:07] Ussat: it would at least prevent sshd from doing a reverse lookup upon connection
[21:08] it's not set anyway
[21:09] OK, dunno then
[21:09] Ussat: it's quite a while since I had that issue but I vaguely recall it was always due to having made a mistake somewhere
[21:09] maybe PAM stuff kicking in?
[21:10] Ussat: do /etc/hostname and the 127.0.1.1 entry in /etc/hosts match?
[21:11] THAT'S it, missed the /etc/hostname file
[21:11] Thanks
[21:12] wait... they match
[21:12] grr
[21:14] OK... typo, "," instead of "."
[21:14] hanks
[21:14] Thanks
=== not_phunyguy is now known as phunyguy
[21:27] hey does anyone know of an app to build a distribution iso from a current ubuntu server install?
[21:27] i want something i can run from the command line, i don't want to have to install a gui
[21:33] FFS..... my esxi admin put it on the wrong vlan
[21:33] Ussat: oh sheeeesh
[21:33] yup
[21:34] Decided to give her a call, "Hey can we verify this vlan please"
[21:34] 129... 192
[21:34] sigh
[21:35] grendal-prime: you can prepare a cloud-init configuration to do what needs to be done and use the standard server installer iso or cloud images etc https://ubuntu.com/server/docs/install/autoinstall-quickstart
[21:42] sarnold, sooo i have an installation that i have removed all the bs from the 20.04 installation media. For instance, i don't need snapd or cloud-init, and i removed netplan and set up init.d/networking
[21:43] i want to take this runnable working instance and write it to an iso with an installer, so when i need to do a bare metal install i don't have to go through all the removing of all that bs again.
[21:43] but i don't want to use the gui tool because, if i'm understanding this correctly, it needs to run this process on the actual server itself.
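The slow-login thread above ([20:59]–[21:14]) boils down to a few consistency checks. A minimal sketch, assuming an Ubuntu 18.04 host whose hostname and IP were just changed (the file paths and the 127.0.1.1 convention are stock Debian/Ubuntu, not anything specific to this box):

    # the static hostname, /etc/hostname and the 127.0.1.1 line in /etc/hosts
    # should all agree after the change
    hostnamectl status
    cat /etc/hostname
    grep '^127.0.1.1' /etc/hosts
    getent hosts "$(hostname)"      # should answer locally, without waiting on DNS

    # rule out reverse lookups by sshd; UseDNS defaults to "no" on current releases
    sudo sshd -T | grep -i usedns

The typo Ussat eventually found (a "," where a "." belonged) is exactly the kind of mistake the grep/getent pair makes visible.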
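On the side question about apt starting daemons the moment they are installed ([21:04]–[21:07]): one long-standing Debian/Ubuntu mechanism is a policy-rc.d script, which invoke-rc.d consults before starting anything from a maintainer script. A rough sketch (bitlbee is just an example package):

    # while this script exists and exits 101, package installs won't start services
    printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
    sudo chmod +x /usr/sbin/policy-rc.d

    sudo apt-get install -y bitlbee   # installs without the daemon coming up
    # ... edit the config here ...

    sudo rm /usr/sbin/policy-rc.d     # restore normal behaviour
    sudo systemctl start bitlbee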
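sarnold's suggestion at [21:35] amounts to feeding the stock 20.04 server ISO an autoinstall seed instead of rebuilding the ISO itself. A sketch along the lines of the linked quickstart; the hostname, username and the purged package are placeholders, and the password must be a crypted hash:

    mkdir -p seed && cd seed
    cat > user-data <<'EOF'
    #cloud-config
    autoinstall:
      version: 1
      identity:
        hostname: vhost1
        username: ubuntu
        password: "<crypted-password-hash>"
      late-commands:
        - curtin in-target -- apt-get -y purge snapd
    EOF
    touch meta-data
    # serve the seed over HTTP and boot the installer with the autoinstall /
    # nocloud-net kernel arguments described in the quickstart
    python3 -m http.server 3003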
[21:44] it just seems like it'd be easier to write down what you did so it'd be easy to re-do it :)
[21:45] I could have sworn there was a way to build install media from the current system.
[21:45] agreed sarnold but sometimes things change.
[21:46] I have a virtual system that i spin off clones from for virtual servers.. but the vhosts i have to build... well i had to build 4 of them... i would have preferred to build one, make an iso... and build the other three and change their hostnames.
[21:47] i thought it was genisoimage but i am just getting errors from that application
[21:54] grendal-prime: hmm.. there's a p2v tool somewhere that'll let you turn a physical machine into a virtual machine, maybe that'd then let you use your tooling to get the rest of the way?
[22:09] best way to do that kind of thing in my experience is to originally install to a COW file-system, then snapshot as required. When you want to make 'installers', either take a literal copy of the snapshot, or script post-install steps based on the diff between the virgin install snapshot and a later one
[22:35] this is just crazy
[22:35] i can't believe this is this difficult
[22:42] grendal-prime: there seems to be a split -- folks who want to package up a laptop as a server aim for docker so they can run what worked on their devs' laptops as containers in their kubernetes or something; and the other group of people use juju or ansible or puppet or chef or salt to script the deployments in the first place
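One way to flesh out the snapshot-and-diff idea from [22:09], assuming the machine was installed onto btrfs (the paths are illustrative; LVM or ZFS snapshots would serve the same purpose):

    # read-only snapshot straight after the virgin install
    sudo mkdir -p /.snapshots
    sudo btrfs subvolume snapshot -r / /.snapshots/virgin

    # ... strip snapd, cloud-init, netplan, finish the rest of the setup ...

    sudo btrfs subvolume snapshot -r / /.snapshots/configured

    # a dry-run rsync lists everything the post-install steps changed, which can
    # then be turned into a script to replay on the other vhosts
    sudo rsync -anv --delete /.snapshots/configured/ /.snapshots/virgin/ | less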