[05:40] <patz0r> hey guys, I have an Ubuntu server running 18.04 and I just noticed that timedatectl is not running and fails to start
[05:41] <patz0r> this is what I see in the logs, any idea what I can try to fix this?
[05:41] <patz0r> # timedatectl
[05:41] <patz0r> Failed to query server: Connection timed out
[05:41] <patz0r> # tail /var/log/syslog
[05:41] <patz0r> systemd[1]: systemd-timedated.service: Failed with result 'exit-code'.
[05:41] <patz0r> systemd[1]: Failed to start Time & Date Service.
[05:41] <patz0r> dbus-daemon[16875]: [system] Activating systemd to hand-off: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.76' (uid=0 pid=20578 comm="timedatectl " label="unconfined")
[06:35] <patz0r> I think this is actually my issue.. [system] Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
[06:36] <patz0r> any ideas on how to fix it without a reboot?
[06:36] <patz0r> haven't touched this machine for a few months, logs like that go back to at least december
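The dbus activation timeout patz0r quotes usually means PID 1 itself has stopped answering on the bus. One recovery sequence sometimes suggested for this, without a reboot (a sketch only; it was not confirmed as the fix in this channel):

```shell
# Re-execute the systemd manager in place; PID 1 keeps its state,
# so no reboot is required.
systemctl daemon-reexec

# Then restart the D-Bus-activated time service and retry the query.
systemctl restart systemd-timedated
timedatectl
```

If dbus.service itself is wedged, `journalctl -u dbus` is the next place to look; restarting dbus on a live system can disconnect other bus clients, so that is a last resort before rebooting.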
[07:43] <lordievader> Good morning
[14:46] <DammitJim> definitely not the right channel, but I don't know where else to ask... do you guys know of the proper technology/tool to continuously transfer files @ 5 files per second over WAN? I think at this point, the files are about 56KB
[14:49] <lordcirth> DammitJim, are these files coming in constantly, and need to be synced across when they do?
[14:50] <DammitJim> they are coming in constantly
[14:50] <DammitJim> what do you mean by synced?
[14:51] <lordcirth> I mean you want to immediately start copying them over WAN, best-effort, non-blocking
[14:51] <lordcirth> Firstly I suspect you'll want inotify to trigger the copy
[14:52] <DammitJim> I see what you are saying. Yes, I'd like for something in the stack to ensure the delivery of the files
[14:52] <DammitJim> in this scenario, I'm looking for what kind of transport/protocol to use
[14:52] <DammitJim> We'll be the receivers of this data
[14:52] <lordcirth> Yeah, this is an interesting question actually.
[14:53] <lordcirth> DammitJim, what's the latency between sites?
[14:53] <DammitJim> I'm not sure about the latency
[14:53] <lordcirth> Also, are the files compressible?
[14:53] <DammitJim> I actually don't know how to measure that right now
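For reference, the latency lordcirth is asking about can be measured with a quick round-trip test from one site to the other (the hostname here is a placeholder):

```shell
# The "rtt min/avg/max" summary line gives the latency between sites.
ping -c 10 receiver.example.com
```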
[14:54] <lordcirth> DammitJim, do you have a sample file?
[14:54] <lordcirth> Or have any idea about their format?
[14:54] <DammitJim> the files are already compressed, but I'm sure we can try more compression... my fear is just the amount of files and the fact that they are constantly being sent
[14:54] <DammitJim> no, no sample, yet
[14:54] <lordcirth> ok, if they are already compressed, and 56kb, there's probably no point
[14:56] <lordcirth> DammitJim, at 5/s, you might be able to just have inotify trigger an scp.
[14:56] <lordcirth> depending on latency.
[14:57] <lordcirth> However, if you lose connection, reboot, etc, you want a good way to catch up, possibly while more stuff is coming in. That might be the hard part.
[14:58] <DammitJim> don't worry about inotify (that's the client's issue)
[14:58] <DammitJim> I'm worried about picking the right implementation so that there isn't all this overhead for connections
[14:59] <lordcirth> Yeah, scp isn't an *efficient* way, but at 5/s it might work
[14:59] <DammitJim> so, perform 5 connections every second?
[14:59] <DammitJim> I guess that could work....
[14:59] <lordcirth> It's far from optimal, but probably functional
[15:00] <DammitJim> hhmmmm
[15:00] <DammitJim> gotta go to a meeting
[15:00] <DammitJim> bbl.. thanks for your input
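The inotify-plus-scp idea discussed above can be sketched as follows, using SSH connection multiplexing so that five transfers per second don't each pay a full handshake (hosts and paths are placeholders; this assumes inotify-tools is installed):

```shell
#!/bin/sh
# Watch a spool directory and ship each finished file over one
# persistent, multiplexed SSH connection.
SRC=/var/spool/outgoing
REMOTE=user@receiver
MUX=/tmp/ssh-mux-%r@%h

# Open the master connection once; every scp below reuses it.
ssh -o ControlMaster=auto -o ControlPath="$MUX" -o ControlPersist=600 \
    -Nf "$REMOTE"

# close_write fires only once the writer has finished the file.
inotifywait -m -e close_write --format '%w%f' "$SRC" |
while read -r file; do
    scp -o ControlPath="$MUX" "$file" "$REMOTE:/var/spool/incoming/" \
        && rm -- "$file"
done
```

Because sent files are removed from the spool, catching up after an outage (the hard part lordcirth mentions) reduces to a single rsync of whatever is left in $SRC.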
[16:35] <Ussat> Guys...srsly......Disco Dingo ?
[16:36] <andol> Yepp, that's the part I look forward to most about Ubuntu 19.04.
[16:39] <Ussat> There was a time I was wondering how people came up with these names...now..I really dont wanna know
[16:42] <Ussat> Guess I should test the beta, but its not LTS so....
[17:30] <blizzow> I installed an ubuntu 18.04 server using a ZFS root. After the installation was complete, I added a second disk as a completely new pool and set mountpoint=/path/to/my/dir newpool/newzfs. Now the machine hangs during boot because the new pool is not imported automagically. How do I get the zpool imported on boot?
[17:33] <lordcirth> blizzow, did you use /dev/sd* device names? That can cause problems
[17:33] <blizzow> lordcirth: I did because this is a qemu based VM. The disk is actually a ceph based rbd image.
[17:34] <blackflow> blizzow: there's some race condition with systemd, so I'm mounting all datasets on boot via legacy
[17:34] <blackflow> otherwise, the zfs-mount.service is responsible for automounting
[17:35] <blackflow> blizzow: which path are you trying to mount it at? that's important. if it's a path managed by tmpfiles, it won't work
[17:35] <blizzow> blackflow: I'm trying to mount /var/lib/graphite/whisper/
[17:36] <blackflow> blizzow: is that directory (supposed to be) empty on boot?
[17:36] <blizzow> currently I have this line in /etc/fstab:
[17:36] <blizzow> sdb/stats /var/lib/graphite/whisper zfs nodev,noatime,x-systemd.requires-import-sdb.service	0 0
[17:36] <blackflow> ah no, use the dataset name
[17:36] <blizzow> blackflow: definitely not supposed to be empty.
[17:36] <blackflow> newpool/whatever  /var/lib/.../  zfs .... 0 0
[17:37] <blizzow> blackflow: I named my newpool "sdb" :(
[17:37] <blackflow> blizzow: I meant is there anything on boot that predefines any files in there, like tmpfiles
[17:37] <blackflow> oh k. so does the sdb/stats have the mountpoint=legacy attribute?
[17:37] <blizzow> nothing should happen on boot that adds to that directory, no.
[17:38] <blizzow> blackflow: no, it doesn't.
[17:38] <blackflow> it should
[17:39] <blizzow> okay, so I did zfs set mountpoint=legacy sdb/stats
[17:39] <blackflow> if you're using fstab, then the dataset needs mountpoint=legacy, or else zfs-mount.service will try to mount it which may enter into race conditions. in particular, is /var or /var/lib also a zfs dataset? you said this was root on ZFS
[17:40] <blackflow> I separate /var/log, /var/tmp and /tmp  from the root pool so I can rollback root without affecting those dirs, which usually should NOT roll back along with root.
[17:40] <blizzow> It's ZFS root and I did not break out any subdirectories onto other pools.
[17:41] <blackflow> well you should. if you rollback to fix botched update, you'll rollback logs and whatever else. /var/lib databases should also be separate
[17:41] <blackflow> (of specific programs, not /var/lib itself, and definitely not apt's)
[17:41] <sarnold> blizzow: have you tried dhe's advice yet?
[17:41] <blizzow> sarnold: trying it now.
[17:46] <blizzow> When I boot, it goes into default mode. I press enter for maintenance and do not see the zpool in there. I do a zpool import -a, then I do a mount /var/lib/graphite/whisper
[17:46] <blizzow> Then I exit and things are up.
[17:47] <blizzow> It's like the zpool is not being picked up during boot.
[17:52] <blackflow> blizzow: did you set mountpoint=legacy?
[17:53] <blackflow> btw, are you also asking in #zfsonlinux? I'd like to not waste my time here if you are :)
[17:57] <blizzow> blackflow: I didn't even think to look for #zfsonlinux, I did ask in #zfs though ;).
[17:58] <blackflow> ah, k
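Putting the pieces of this exchange together, the legacy-mount approach looks roughly like this (the x-systemd.requires= target below is an assumption; the custom import-sdb.service referenced in blizzow's original fstab line was never confirmed to exist):

```shell
# Take the dataset out of zfs-mount.service's hands:
zfs set mountpoint=legacy sdb/stats

# /etc/fstab entry: dataset name rather than a device path, ordered
# after the pool import so the dataset exists at mount time:
#   sdb/stats  /var/lib/graphite/whisper  zfs  nodev,noatime,x-systemd.requires=zfs-import.target  0 0
```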
[18:30] <plsuh> Would this be the right place to ask about trying to run a pfSense(FreeBSD) domU on Ubuntu Server using Xen?
[18:32] <sdeziel> plsuh: do you have to use Xen? I believe you'd get a better experience with KVM instead
[18:33] <sdeziel> plsuh: that said, if your question is about setting up that domU in the dom0 context, it's not pfSense specific so feel free to ask here
[18:33] <plsuh> will FreeBSD run on top of KVM?
[18:33] <sdeziel> yes
[18:33] <sdeziel> I am myself running OpenBSD inside KVM for years now
[18:34] <plsuh> ok, I will try that
[18:34] <plsuh> I can get the HVM domU to execute, but cannot get any kind of response out of the domU whether via VNC or serial console
[18:40] <sdeziel> plsuh: it's been a while since I played with Xen but I believe you still need to pass a cmdline arg to get a serial console on the Xen device (don't remember the name)
[18:41] <sdeziel> ah found it, console=/dev/hvc0
[18:41] <codefriar> Hello! I've an ubuntu 18.10 server reporting 'temporary failure in name resolution' whenever I try to ping a subdomain running on it.
[18:42] <plsuh> I did that with a linux guest -- Alpine Linux specifically
[18:42] <codefriar> I'm wondering if it could have something to do with setting up the bond on 3 of the 4 ethernet ports?
[18:42] <plsuh> but that was in the Alpine init, not the HVM setup
[18:43] <sdeziel> plsuh: by cmdline I mean kernel boot arg
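As an aside, the kernel `console=` parameter takes a device name without the /dev/ prefix, so the guest's boot arguments would look something like this (hvc0 for a PV console, ttyS0 for an HVM guest's emulated serial port; which one applies here is an assumption):

```shell
# In the guest's /etc/default/grub, followed by update-grub:
GRUB_CMDLINE_LINUX="console=hvc0"     # PV/PVH console
# GRUB_CMDLINE_LINUX="console=ttyS0"  # HVM emulated serial port
```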
[18:43] <codefriar> any idea what might cause the 'temporary failure in name resolution' error?
[18:44] <sarnold> codefriar: try pastebinning some dig output that shows what you're trying, where it fails, and maybe someone will be able to spot something
[18:45] <codefriar> sarnold https://pastebin.com/3rHezetg this is ping output. I'm on the server hydra.local. there is a running docker service that is supposed to be attached to plex.hydra.local. However, I can't ping it, because of the name resolution error
[18:46] <sdeziel> codefriar: are you using systemd-resolved? If yes, please share /etc/resolv.conf content
[18:46] <codefriar> https://pastebin.com/58Aw9z39 here's the pastebin with dig info
[18:47] <codefriar> sdeziel i'm using 18.10, does that use systemd-resolved?
[18:47] <sdeziel> codefriar: yes, and your dig output confirmed
[18:47] <sdeziel> codefriar: I suspect that your search domain doesn't include 'local' in it so systemd-resolved tries mDNS to resolve this name
[18:49] <codefriar> sdeziel this contains the only two uncommented lines in resolv.conf
[18:49] <codefriar> https://pastebin.com/miw87PaU
[18:50] <sdeziel> codefriar: you can check systemd-resolved config with "systemd-resolve --status"
[18:52] <codefriar> sdeziel wow so much info here. Any clue what I'm looking for? How can I add .local ?
[18:52] <sdeziel> codefriar: the domain name used
[18:53] <sdeziel> codefriar: as to how to add .local to the list, it depends on how your network is configured. It's probably done using netplan as the config generator so please pastebin the /etc/netplan/*.yaml files
[18:54] <codefriar> here's the output of the status command. I don't see a top level domain listed? https://pastebin.com/skwZBYW1
[18:55] <codefriar> sdeziel here's my netplan: https://pastebin.com/HeWvb9Qu
[18:55] <codefriar> sdeziel the netplan information was created by the installer.
[18:57] <sarnold> if you picked .local yourself, you'd probably be better served by picking a different tld entirely.
[18:58] <sdeziel> codefriar: hmm, I am not knowledgeable enough with netplan but I think you can achieve what you want by adding Domain=hydra.local to /etc/systemd/resolved.conf and restarting systemd-resolved
[18:58] <sarnold> if you're trying to use mdns as it was designed, of course, I don't know why it's not going well.
[18:58] <codefriar> sarnold well hostname returns 'hydra'
[18:58] <sdeziel> codefriar: right, avoiding the abuse of .local is the right way
[18:58] <codefriar> sdeziel how am I abusing local? (not a combative question, just ignorant)
[18:59] <sdeziel> codefriar: understood. .local is reserved for mDNS resolution
[19:00] <sdeziel> codefriar: technically, this mDNS thing is supposed to only be applicable to single labels under .local but many resolver/stub libs get this wrong, systemd-resolved included
[19:00] <codefriar> sdeziel ok... so adding the domain bit should help?
[19:01] <sdeziel> codefriar: adding .local to the domain list used by systemd-resolved disables mDNS for the said domain
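A minimal sketch of that change; note that the key in /etc/systemd/resolved.conf is actually spelled `Domains=` (plural):

```ini
# /etc/systemd/resolved.conf
[Resolve]
Domains=hydra.local
```

followed by `systemctl restart systemd-resolved`.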
[19:02] <codefriar>  will that cause other machines on the network to fail to find hydra.local?
[19:03] <codefriar> hmm, well that didn't end up fixing the 'temporary failure in name resolution' issue
[19:04] <sdeziel> codefriar: did you restart systemd-resolved?
[19:04] <codefriar> sdeziel Yep!
[19:04] <codefriar> (twice)
[19:04] <sdeziel> codefriar: pastebin the /etc/systemd/resolved.conf and systemd-resolve --status output please
[19:04] <codefriar> --status now shows hydra.local as the DNS Domain
[19:04] <sdeziel> hmm
[19:05] <codefriar> https://pastebin.com/7CgzCrhv
[19:05] <codefriar> this is /etc/systemd/resolved.conf https://pastebin.com/KwZQENnq
[19:06] <sdeziel> codefriar: could you try with Domain=local instead?
[19:06] <codefriar> sdeziel sure thing, one second
[19:07] <codefriar> no dice.
[19:07] <codefriar> sdeziel i did restart systemd-resolved
[19:10] <sdeziel> codefriar: can you "dig @auth-server plex.hydra.local" ?
[19:11] <sdeziel> or "dig @upstream-resolver plex.hydra.local"
[19:12] <codefriar> dig: couldn't get address for 'auth-server': failure
[19:12] <codefriar> same for upstream
[19:12] <sarnold> you've got to give the ip address for those nameservers
[19:12] <sdeziel> that was meant to be replaced with I think 192.168.1.65
[19:13] <sdeziel> as this is the IP systemd-resolved will turn to for DNS resolution that it cannot fulfill from its cache
[19:15] <plsuh> sdeziel: thanks, yeah I mis-typed -- it was in the kernel args. Thanks for the assistance.
[19:58] <codefriar> sdeziel sorry, I misunderstood. one second. (also, sorry, wife's car wouldn't start, had to fix)
[19:59] <codefriar> sdeziel  no A record returned. resulted in SERVFAIL
[20:00] <sdeziel> codefriar: OK so the upstream server has an issue, you'd need to fix this as it's what systemd-resolved would ask for the A record
[20:00] <sdeziel> codefriar: assuming I really understood what you really want to achieve ;)
[20:04] <codefriar> sdeziel that's fair. here's the dime tour of the goal. brand new (4 hr old) ubuntu-server 18.10 install on a dell r710. 3 of 4 NICs bonded to bond0, one left alone. bond pulls 192.168.1.80. En4 pulls 192.168.1.81. Server name is Hydra. Avahi installed, mac can ssh in via hydra.local . I've attempted to use homelabos (an Ansible script) to set up a bunch of docker services that are supposed to be available on
[20:04] <codefriar>  servicename.host.domain aka: plex.hydra.local however, none of the subdomains are functional, and whenever I try to ping one of them, I get nothing but the temp failure to resolve name error. That error seems unrelated to the docker based stuff, so I was trying to work on resolving that issue.
[20:04] <codefriar> OOC, is there a big networking change between 18.04 LTS and 18.10?
[20:05] <sarnold> nothing that I can recall
[20:05] <sdeziel> codefriar: so you seem to need/want mDNS/avahi to work
[20:05] <sdeziel> but only for the first label under .local
[20:06] <sdeziel> codefriar: who/what publishes the "servicename.host.domain aka: plex.hydra.local" ?
[20:06] <sdeziel> I'd hope for a dnsmasq or something ;)
[20:13] <sdeziel> https://gitlab.com/NickBusey/HomelabOS says: "A domain configured with a A type DNS record of *.yourdomain.com pointed at your server's IP address."
[20:14] <sdeziel> codefriar: maybe you should go to #homelabos
[20:15] <codefriar> sdeziel no one there.
[20:16] <codefriar> sdeziel the instructions say just put a whatever.local there
[20:16] <sarnold> there's at least six people there now :)
[20:16] <codefriar> sarnold thats new
[20:17] <sarnold> now whether or not anyone is at their keyboard, good question..
[20:17] <sdeziel> codefriar: since you don't seem to have an authoritative source of information to resolve plex.hydra.local, you can probably put it under /etc/hosts as a quick and dirty hack
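The quick and dirty hack sdeziel describes, on the machine doing the lookups (the IP is an assumption based on the bond0 address codefriar mentioned above):

```shell
# Pin the name locally so resolution no longer depends on mDNS/DNS:
echo '192.168.1.80    plex.hydra.local' >> /etc/hosts
```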
[20:19] <sdeziel> codefriar: I haven't looked closely at the project but it seems that all the HTTP(S) traffic needs to be directed to a reverse proxy (traefik) that hopefully knows how to reach the actual backend
[20:19] <codefriar> sdeziel yeah, traefik is up and running
[22:32] <gbkersey> Doing a pxe install of 16.04lts and it is hanging because there is a dependency problem with the kernel...  Seems that the net installer installs linux-generic which pulls linux-image-generic 4.4.0.143 but that version of the kernel depends on linux-base >= 4.1, but linux-base 4.0 is being installed by the installer....
[22:38] <sarnold> gbkersey: let me look around a bit
[22:40] <sarnold> gbkersey: this looks like your bug .. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1820419
[22:43] <sarnold> gbkersey: comment #9 may help https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1820755
[23:25] <keithzg[m]> My co-worker spent aaaaaages trying to figure out why nullmailer wasn't working on one Ubuntu instance atop Windows but was on another; turns out the working one was 16.04 while the non working one was 18.04, and nullmailer apparently no longer ships with a SysV init script for /etc/init.d and the WSL builds of Ubuntu don't use systemd for some reason?
[23:27] <tomreyn> !wsl
[23:44] <keithzg[m]> tomreyn: I mean, I'm not surprised to see that WSL is weird, my comment in many respects was more about the lack of SysV init support in the current packaging of nullmailer (which would affect more than just WSL builds)
[23:49] <gbkersey> sarnold: Thanks...
[23:49] <tomreyn> keithzg[m]: according to https://github.com/systemd/systemd/issues/8036 systemd works on WSL, but then this is the wrong channel to discuss WSL specifics, as you read above.
[23:51] <keithzg[m]> tomreyn: Yeah, I'm not too worried about that, I was just surprised by it. The only thing I really care about, which is why I bothered commenting here in #ubuntu-server, is that nullmailer apparently no longer ships with a script for `/etc/init.d` (as far as I can tell, using the one from the old package works just fine in 18.04 too).
[23:51] <tomreyn> why would it, if the init system is systemd
[23:52] <sarnold> I really wish we'd kill all the old sysv scripts from packages that support both sysv-init and systemd unit files
[23:52] <sarnold> it doesn't feel like a cohesive system to have files for N different service managers in each package
[23:53] <tomreyn> yes that'd be more than nice. probably also means a lot of work.
[23:54] <sarnold> yeah
[23:54] <sarnold> I mean, I haven't volunteered to do it :)
[23:58] <tomreyn> if you had, I'd have asked you to share your scientific advances in human cloning and / or time travelling.
[23:58] <sarnold> alas I keep hoping a future-me will show up with some good news but that bastard hasn't done it yet