[05:40] hey guys, I have an Ubuntu server running 18.04 and I just noticed that timedatectl is not running and fails to start
[05:41] this is what I see in the logs, any idea what I can try to fix this?
[05:41] # timedatectl
[05:41] Failed to query server: Connection timed out
[05:41] # tail /var/log/syslog
[05:41] systemd[1]: systemd-timedated.service: Failed with result 'exit-code'.
[05:41] systemd[1]: Failed to start Time & Date Service.
[05:41] dbus-daemon[16875]: [system] Activating systemd to hand-off: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.76' (uid=0 pid=20578 comm="timedatectl " label="unconfined")
[06:35] I think this is actually my issue.. [system] Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
[06:36] any ideas on how to fix it without a reboot?
[06:36] haven't touched this machine for a few months, logs like that go back to at least December
=== aditya_ is now known as aditya
[07:43] Good morning
=== cpaelzer__ is now known as cpaelzer
=== aditya_ is now known as aditya
=== lotuspsychje_ is now known as lotuspsychje
[14:46] definitely not the right channel, but I don't know where else to ask... do you guys know of the proper technology/tool to continuously transfer files @ 5 files per second over WAN? I think at this point, the files are about 56KB
[14:49] DammitJim, are these files coming in constantly, and need to be synced across when they do?
[14:50] they are coming in constantly
[14:50] what do you mean by synced?
[14:51] I mean you want to immediately start copying them over WAN, best-effort, non-blocking
[14:51] Firstly I suspect you'll want inotify to trigger the copy
[14:52] I see what you are saying. Yes, I'd like for something in the stack to ensure the delivery of the files
[14:52] in this scenario, I'm looking for what kind of transport/protocol to use
[14:52] We'll be the receivers of this data
[14:52] Yeah, this is an interesting question actually.
[14:53] DammitJim, what's the latency between sites?
[14:53] I'm not sure about the latency
[14:53] Also, are the files compressible?
[14:53] I actually don't know how to measure that right now
[14:54] DammitJim, do you have a sample file?
[14:54] Or have any idea about their format?
[14:54] the files are already compressed, but I'm sure we can try more compression... my fear is just the amount of files and the fact that they are constantly being sent
[14:54] no, no sample, yet
[14:54] ok, if they are already compressed, and 56KB, there's probably no point
[14:56] DammitJim, at 5/s, you might be able to just have inotify trigger an scp.
[14:56] depending on latency.
[14:57] However, if you lose connection, reboot, etc, you want a good way to catch up, possibly while more stuff is coming in. That might be the hard part.
[14:58] don't worry about inotify (that's the client's issue)
[14:58] I'm worried about picking the right implementation so that there isn't all this overhead for connections
[14:59] Yeah, scp isn't an *efficient* way, but at 5/s it might work
[14:59] so, perform 5 connections every second?
[14:59] I guess that could work....
[14:59] It's far from optimal, but probably functional
[15:00] hhmmmm
[15:00] gotta go to a meeting
[15:00] bbl.. thanks for your input
=== techmagus_ is now known as techmagus
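For reference, a minimal sketch of the inotify-triggered scp approach discussed above. It assumes the inotify-tools package is installed and key-based SSH access to the receiving side; the directory and host names (SRC, receiver.example.com) are placeholders, not taken from the conversation, and whether one SSH connection per file keeps up at 5 files/s depends on the WAN latency that was never measured here.

    # push every newly written file to the receiver as soon as it is closed
    SRC=/var/spool/outgoing                 # hypothetical local spool directory
    DEST=receiver.example.com:/incoming     # hypothetical remote target
    mkdir -p "$SRC"
    inotifywait -m -e close_write --format '%w%f' "$SRC" |
    while read -r f; do
        # one scp (and one SSH connection) per file; log failures for later catch-up
        scp -q "$f" "$DEST" || echo "$f" >> /var/log/push-failures.log
    done

Something like rsync over a persistent SSH connection would reduce the per-file connection overhead raised in the channel and also gives a natural catch-up path after an outage.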
[16:35] Guys... srsly... Disco Dingo?
[16:36] Yepp, that's the part I look forward to most about Ubuntu 19.04.
[16:39] There was a time I was wondering how people came up with these names... now... I really don't wanna know
[16:42] Guess I should test the beta, but it's not LTS so....
[17:30] I installed an Ubuntu 18.04 server using a ZFS root. After the installation was complete, I added a second disk as a completely new pool and set mountpoint=/path/to/my/dir newpool/newzfs. Now the machine hangs during boot because the new pool is not imported automagically. How do I get the zpool imported on boot?
[17:33] blizzow, did you use /dev/sd* device names? That can cause problems
[17:33] lordcirth: I did because this is a qemu-based VM. The disk is actually a ceph-based rbd image.
[17:34] blizzow: there's some race condition with systemd, so I'm mounting all datasets on boot via legacy
[17:34] otherwise, the zfs-mount.service is responsible for automounting
[17:35] blizzow: which path are you trying to mount it at? that's important. if it's a path managed by tmpfiles, it won't work
[17:35] blackflow: I'm trying to mount /var/lib/graphite/whisper/
[17:36] blizzow: is that directory (supposed to be) empty on boot?
[17:36] currently I have this line in /etc/fstab:
[17:36] sdb/stats /var/lib/graphite/whisper zfs nodev,noatime,x-systemd.requires-import-sdb.service 0 0
[17:36] ah no, use the dataset name
[17:36] blackflow: definitely not supposed to be empty.
[17:36] newpool/whatever /var/lib/.../ zfs .... 0 0
[17:37] blackflow: I named my newpool "sdb" :(
[17:37] blizzow: I meant is there anything on boot that predefines any files in there, like tmpfiles
[17:37] oh k. so does sdb/stats have the mountpoint=legacy attribute?
[17:37] nothing should happen on boot that adds to that directory, no.
[17:38] blackflow: no, it doesn't.
[17:38] it should
[17:39] okay, so I did zfs set mountpoint=legacy sdb/stats
[17:39] if you're using fstab, then the dataset needs mountpoint=legacy, or else zfs-mount.service will try to mount it, which may lead to race conditions. in particular, is /var or /var/lib also a zfs dataset? you said this was root on ZFS
[17:40] I separate /var/log, /var/tmp and /tmp from the root pool so I can roll back root without affecting those dirs, which usually should NOT roll back along with root.
[17:40] It's ZFS root and I did not break out any subdirectories onto other pools.
[17:41] well you should. if you roll back to fix a botched update, you'll roll back logs and whatever else. /var/lib databases should also be separate
[17:41] (of specific programs, not /var/lib itself, and definitely not apt's)
[17:41] blizzow: have you tried dhe's advice yet?
[17:41] sarnold: trying it now.
[17:46] When I boot, it goes into default mode. I press enter for maintenance and do not see the zpool in there. I do a zpool import -a, then I do a mount /var/lib/graphite/whisper
[17:46] Then I exit and things are up.
[17:47] It's like the zpool is not being picked up during boot.
[17:52] blizzow: did you set mountpoint=legacy?
[17:53] btw, are you also asking in #zfsonlinux? I'd like to not waste my time here if you are :)
[17:57] blackflow: I didn't even think to look for #zfsonlinux, I did ask in #zfs though ;).
[17:58] ah, k
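To make the legacy-mount arrangement above concrete, here is a rough sketch using the pool/dataset names from the conversation. The x-systemd.requires= value assumes the stock zfs-import-cache.service shipped by zfsutils-linux is what imports pools at boot, and the cachefile step is only a guess at why the new pool was not being picked up; it is not something confirmed in the channel.

    # keep zfs-mount.service away from this dataset so fstab is the only mounter
    zfs set mountpoint=legacy sdb/stats

    # make sure the pool is recorded in the cache file that zfs-import-cache.service reads at boot
    zpool set cachefile=/etc/zfs/zpool.cache sdb

    # /etc/fstab entry (single line), ordered after the pool import
    sdb/stats  /var/lib/graphite/whisper  zfs  nodev,noatime,x-systemd.requires=zfs-import-cache.service  0  0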
[18:30] Would this be the right place to ask about trying to run a pfSense (FreeBSD) domU on Ubuntu Server using Xen?
[18:32] plsuh: do you have to use Xen? I believe you'd get a better experience with KVM instead
[18:33] plsuh: that said, if your question is about setting up that domU in the dom0 context, it's not pfSense specific so feel free to ask here
[18:33] will FreeBSD run on top of KVM?
[18:33] yes
[18:33] I've myself been running OpenBSD inside KVM for years now
[18:34] ok, I will try that
[18:34] I can get the HVM domU to execute, but cannot get any kind of response out of the domU whether via VNC or serial console
[18:40] plsuh: it's been a while since I played with Xen but I believe you still need to pass a cmdline arg to get a serial console on the Xen device (don't remember the name)
[18:41] ah found it, console=/dev/hvc0
[18:41] Hello! I've got an Ubuntu 18.10 server reporting 'temporary failure in name resolution' whenever I try to ping a subdomain running on it.
[18:42] I did that with a Linux guest -- Alpine Linux specifically
[18:42] I'm wondering if it could have something to do with setting up the bond on 3 of the 4 ethernet ports?
[18:42] but that was in the Alpine init, not the HVM setup
[18:43] plsuh: by cmdline I mean kernel boot arg
[18:43] any idea what might cause the 'temporary failure in name resolution' error?
[18:44] codefriar: try pastebinning some dig output that shows what you're trying, where it fails, and maybe someone will be able to spot something
[18:45] sarnold https://pastebin.com/3rHezetg this is ping output. I'm on the server hydra.local. there is a running docker service that's supposed to be attached to plex.hydra.local. However, I can't ping it, because of the name resolution error
[18:46] codefriar: are you using systemd-resolved? If yes, please share /etc/resolv.conf content
[18:46] https://pastebin.com/58Aw9z39 here's the pastebin with dig info
[18:47] sdeziel i'm using 18.10, does that use systemd-resolved?
[18:47] codefriar: yes, and your dig output confirmed it
[18:47] codefriar: I suspect that your search domain doesn't include 'local' in it so systemd-resolved tries mDNS to resolve this name
[18:49] sdeziel this contains the only two uncommented lines in resolv.conf
[18:49] https://pastebin.com/miw87PaU
[18:50] codefriar: you can check systemd-resolved config with "systemd-resolve --status"
[18:52] sdeziel wow so much info here. Any clue what I'm looking for? How can I add .local?
[18:52] codefriar: the domain name used
[18:53] codefriar: as to how to add .local to the list, it depends on how your network is configured. It's probably done using netplan as the config generator so please pastebin the /etc/netplan/*.yaml files
[18:54] here's the output of the status command. I don't see a top level domain listed? https://pastebin.com/skwZBYW1
[18:55] sdeziel here's my netplan: https://pastebin.com/HeWvb9Qu
[18:55] sdeziel the netplan information was created by the installer.
[18:57] if you picked .local yourself, you'd probably be better served by picking a different TLD entirely.
[18:58] codefriar: hmm, I am not knowledgeable enough with netplan but I think you can achieve what you want by adding Domain=hydra.local to /etc/systemd/resolved.conf and restarting systemd-resolved
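Sketched out, that suggestion looks like the following; note that the option in resolved.conf is spelled Domains= (plural), and hydra.local is the name taken from the conversation rather than a recommendation.

    # /etc/systemd/resolved.conf -- the option is Domains= (plural)
    [Resolve]
    Domains=hydra.local

    # restart the stub resolver and confirm the domain now shows up
    systemctl restart systemd-resolved
    systemd-resolve --status

The same domain can also be declared per interface in netplan under nameservers: search:, which keeps the installer-generated YAML as the single place the network configuration lives.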
[18:58] if you're trying to use mdns as it was designed, of course, I don't know why it's not going well.
[18:58] sarnold well hostname returns 'hydra'
[18:58] codefriar: right, avoiding the abuse of .local is the right way
[18:58] sdeziel how am I abusing .local? (not a combative question, just ignorant)
[18:59] codefriar: understood. .local is reserved for mDNS resolution
[19:00] codefriar: technically, this mDNS thing is supposed to only be applicable to single labels under .local but many resolver/stub libs get this wrong, systemd-resolved included
[19:00] sdeziel ok... so adding the domain bit should help?
[19:01] codefriar: adding .local to the domain list used by systemd-resolved disables mDNS for that domain
[19:02] will that cause other machines on the network to fail to find hydra.local?
[19:03] hmm, well that didn't end up fixing the 'temporary failure in name resolution' issue
[19:04] codefriar: did you restart systemd-resolved?
[19:04] sdeziel Yep!
[19:04] (twice)
[19:04] codefriar: pastebin the /etc/systemd/resolved.conf and systemd-resolve --status output please
[19:04] --status now shows hydra.local as the DNS Domain
[19:04] hmm
[19:05] https://pastebin.com/7CgzCrhv
[19:05] this is /etc/systemd/resolved.conf https://pastebin.com/KwZQENnq
[19:06] codefriar: could you try with Domain=local instead?
[19:06] sdeziel sure thing, one second
[19:07] no dice.
[19:07] sdeziel i did restart systemd-resolved
[19:10] codefriar: can you "dig @auth-server plex.hydra.local" ?
[19:11] or "dig @upstream-resolver plex.hydra.local"
[19:12] dig: couldn't get address for 'auth-server': failure
[19:12] same for upstream
[19:12] you've got to give the IP address for those nameservers
[19:12] that was meant to be replaced with, I think, 192.168.1.65
[19:13] as this is the IP systemd-resolved will turn to for DNS resolution that it cannot fulfill from its cache
[19:15] sdeziel: thanks, yeah I mistyped -- it was in the kernel args. Thanks for the assistance.
[19:58] sdeziel sorry, I misunderstood. one second. (also, sorry, wife's car wouldn't start, had to fix)
[19:59] sdeziel no A record returned. resulted in SERVFAIL
[20:00] codefriar: OK so the upstream server has an issue, you'd need to fix this as it's what systemd-resolved would ask for the A record
[20:00] codefriar: assuming I really understood what you really want to achieve ;)
[20:04] sdeziel that's fair. here's the dime tour of the goal. brand new (4-hour-old) ubuntu-server 18.10 install on a Dell R710. 3 of 4 NICs bonded to bond0, one left alone. bond pulls 192.168.1.80. En4 pulls 192.168.1.81. Server name is Hydra. Avahi installed, mac can ssh in via hydra.local. I've attempted to use HomelabOS (an Ansible script) to set up a bunch of docker services that are supposed to be available on
[20:04] servicename.host.domain aka plex.hydra.local; however, none of the subdomains are functional, and whenever I try to ping one of them, I get nothing but the temp failure to resolve name error. That error seems unrelated to the docker-based stuff, so I was trying to work on resolving that issue.
[20:04] OOC, is there a big networking change between 18.04 LTS and 18.10?
[20:05] nothing that I can recall
[20:05] codefriar: so you seem to need/want mDNS/avahi to work
[20:05] but only for the first label under .local
[20:06] codefriar: who/what publishes the "servicename.host.domain aka: plex.hydra.local"?
[20:06] I'd hope for a dnsmasq or something ;)
[20:13] https://gitlab.com/NickBusey/HomelabOS says: "A domain configured with a A type DNS record of *.yourdomain.com pointed at your server's IP address."
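Putting the dig check suggested earlier (around [19:10]-[19:13]) together with that wildcard requirement, the direct query against the upstream resolver looks like this, assuming 192.168.1.65 really is the server systemd-resolved forwards to:

    # bypass the local stub and ask the upstream resolver directly
    dig @192.168.1.65 plex.hydra.local A

    # per the conversation this currently yields SERVFAIL / no A record, i.e. the
    # wildcard (*.hydra.local) record HomelabOS expects does not exist upstream;
    # a working setup would answer with the reverse-proxy address, e.g. 192.168.1.80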
[20:14] codefriar: maybe you should go to #homelabos
[20:15] sdeziel no one there.
[20:16] sdeziel the instructions say just put a whatever.local there
[20:16] there's at least six people there now :)
[20:16] sarnold that's new
[20:17] now whether or not anyone is at their keyboard, good question..
[20:17] codefriar: since you don't seem to have an authoritative source of information to resolve plex.hydra.local, you can probably put it under /etc/hosts as a quick and dirty hack
[20:19] codefriar: I haven't looked closely at the project but it seems that all the HTTP(S) traffic needs to be directed to a reverse proxy (traefik) that hopefully knows how to reach the actual backend
[20:19] sdeziel yeah, traefik is up and running
[22:32] Doing a PXE install of 16.04 LTS and it is hanging because there is a dependency problem with the kernel... Seems that the net installer installs linux-generic which pulls linux-image-generic 4.4.0.143, but that version of the kernel depends on linux-base >= 4.1, while linux-base 4.0 is being installed by the installer....
[22:38] gbkersey: let me look around a bit
[22:40] gbkersey: this looks like your bug .. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1820419
[22:40] Launchpad bug 1820419 in linux (Ubuntu Xenial) "linux-generic should depend on linux-base >=4.1" [High,Fix committed]
[22:43] gbkersey: comment #9 may help https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1820755
[22:43] Launchpad bug 1820419 in linux (Ubuntu Xenial) "duplicate for #1820755 linux-generic should depend on linux-base >=4.1" [High,Fix committed]
[23:25] My co-worker spent aaaaaages trying to figure out why nullmailer wasn't working on one Ubuntu instance atop Windows but was on another; turns out the working one was 16.04 while the non-working one was 18.04, and nullmailer apparently no longer ships with a SysV init script for /etc/init.d, and the WSL builds of Ubuntu don't use systemd for some reason?
[23:27] !wsl
[23:27] Windows 10 has a feature called Windows Subsystem for Linux, which allows it to run Ubuntu (and other Linux distro) userspace programs without porting/recompilation. For discussion and support, see #ubuntu-on-windows or ##windows. For installation instructions, see https://msdn.microsoft.com/en-us/commandline/wsl/install_guide
[23:44] tomreyn: I mean, I'm not surprised to see that WSL is weird; my comment in many respects was more about the lack of SysV init support in the current packaging of nullmailer (which would affect more than just WSL builds)
[23:49] sarnold: Thanks...
[23:49] keithzg[m]: according to https://github.com/systemd/systemd/issues/8036 systemd works on WSL, but then this is the wrong channel to discuss WSL specifics, as you read above.
[23:51] tomreyn: Yeah, I'm not too worried about that, I was just surprised by it. The only thing I really care about, which is why I bothered commenting here in #ubuntu-server, is that nullmailer apparently no longer ships with a script for `/etc/init.d` (as far as I can tell, using the one from the old package works just fine in 18.04 too).
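A quick way to check which service hooks a package actually ships (here nullmailer, assuming it is installed), which is how the missing /etc/init.d script described above can be confirmed:

    # list any init scripts or systemd units installed by the package
    dpkg -L nullmailer | grep -E '/etc/init\.d/|systemd/system/'

    # without systemd (as on current WSL) the sysv-style wrapper below only works
    # if an /etc/init.d script exists, which matches the behaviour described here
    service nullmailer status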
[23:51] why would it, if the init system is systemd
[23:52] I really wish we'd kill all the old sysv scripts from packages that support both sysv-init and systemd unit files
[23:52] it doesn't feel like a cohesive system to have files for N different service managers in each package
[23:53] yes that'd be more than nice. probably also means a lot of work.
[23:54] yeah
[23:54] I mean, I haven't volunteered to do it :)
[23:58] if you had, I'd have asked you to share your scientific advances in human cloning and / or time travelling.
[23:58] alas I keep hoping a future-me will show up with some good news but that bastard hasn't done it yet