[02:51] <xibalba> what're you guys using for a time server nowadays? ntpd/openntpd/chrony ?
[02:52] <xibalba> i need it to provide time to a bunch of downstream devices
[02:54] <tomreyn> many prefer chrony over ntpd for its reduced complexity / newer code. but i don't know how well it works as a server (i suspect it does).
[03:04] <xibalba> i'm going to point a lot of network gear at it, i know ntpd is tried and true but chrony is the newcomer and has better/quicker time sync code
[03:04] <xibalba> i've seen it demoed more as a client-based time sync than as a server for various downstream gear
[03:07] <tomreyn> maybe also ask in ##linux , ##networking , ##security to get more opinions
[03:10] <xibalba> will do, thanks tomreyn
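For the curious: chrony does work as a server; it hands out time to downstream clients once an allow directive is set. A minimal /etc/chrony/chrony.conf sketch, in which the upstream pool and the client subnet are assumptions to adjust:

    # upstream source to sync from
    pool ntp.ubuntu.com iburst
    # let downstream network gear on this subnet query us (assumed range)
    allow 10.0.0.0/8
    # keep serving time to clients even if upstream is briefly unreachable
    local stratum 10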
[06:26] <lordievader> Good morning
[13:45] <Ussat> 19.02 won't be an LTS release, correct ?
[13:45] <sdeziel> Ussat: correct, next LTS will be 20.04
[13:45] <Ussat> OK, thought as much, just making sure
[13:46] <sdeziel> s/19\.02/19.04/
[13:46] <Ool> every 2 years
[13:46] <Ussat> we only run LTS, I might spin up a 19.04 for testing etc....
[13:46] <Ussat> so even years LTS, odd years not ?
[13:47] <sdeziel> in the past, the following were LTSes: 6.06, 8.04, 10.04, 12.04, 14.04, 18.04
[13:47] <sdeziel> and 16.04 :)
[14:06] <ahasenack> rbasak: hi, what would you expect to happen with an SRU for two packages, where one introduces a new api call (samba), and the other one just needs a rebuild in order to detect it and use it
[14:06] <ahasenack> rbasak: samba will land later today, but the other one (gvfs) needs to wait for samba to arrive in proposed and then be rebuilt
[14:07] <ahasenack> so for the second one, would you expect an MP? Or just a dch -R like change and upload?
[14:07] <ahasenack> both packages are tasks on the same bug
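The "dch -R like change and upload" being described is essentially a no-change rebuild. A hedged sketch of what that upload could look like once the new samba is in -proposed (version handling and changelog wording are assumptions; the upload target is deliberately left out):

    # fetch the current source of the package that only needs a rebuild
    apt-get source gvfs
    cd gvfs-*/
    # add a changelog entry describing the rebuild (wording is hypothetical)
    dch -i "No-change rebuild to pick up the new samba API."
    # finalize the changelog entry without opening an editor
    dch -r ""
    # build a source-only upload for the SRU queue
    debuild -S
    # then upload with dput to the appropriate target (elided)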
[15:52] <teward> hmm... has anyone had any issues submitting DNS queries to 127.0.0.53, the stub resolver for SystemD?  `dig` to it always times out as if it doesn't know how to reply...
[15:52] <teward> or do I have to configure it differently?
[16:38] <blackflow> teward: I have no issues, I don't use that pile of ...
[16:47] <lordcirth> teward, systemd-resolve has worked pretty well for me. I just tried dig google.com @127.0.0.53 and it worked fine too.
[16:48] <teward> hmm
[16:48] <teward> wonder why then it's being a derp
[16:50] <blackflow> yeah I wonder...
[16:57] <lordcirth> teward, what does systemd-resolve --status show?
[16:58] <teward>          DNS Servers: 127.0.2.1  <-- but this server uses a local bind9 recursive resolver and only the stub resolver derps on replies
[16:58] <teward> DNS *does* work systemwide though, and querying the bind server direct works too (127.0.2.1, and yes that address does exist on-box)
[16:59]  * teward shrugs
[16:59] <teward> probably some SystemD nonsense
[16:59] <lordcirth> teward, does /etc/resolv.conf point to systemd?
[16:59] <teward> lordcirth: yes
[16:59] <teward> sudo nano /etc/resolv.conf
[16:59] <teward> oops
[16:59] <teward> yep
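For anyone reproducing this check: in the stub setup, /etc/resolv.conf is a symlink into systemd-resolved's runtime directory and lists only the stub address. The output below is what that usually looks like, not necessarily this exact box:

    $ readlink -f /etc/resolv.conf
    /run/systemd/resolve/stub-resolv.conf
    $ grep nameserver /etc/resolv.conf
    nameserver 127.0.0.53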
[17:00] <teward> so we know the stub resolver works NORMALLY, but it just doesn't like something about the rest of the infra for some reason :?
[17:00] <teward> not sure what that's about
[17:00]  * teward checks the progress on his local packages mirror's sync, and sees it's completed.
[17:00] <teward> ... wow 1.2TB in a little over 8 hours o.O
[17:04] <sarnold> nice
[17:05] <teward> sarnold: yeah, ~40MB/s average (and yes that's mega*bytes*) is pretty decent :P
[17:06] <lordcirth> shiny. I'm testing a Ceph cluster right now, getting 3 to 5 GiB/s read O_o
[17:06] <sarnold> teward: man how'd you win the archive mirror lottery? :)
[17:06] <teward> sarnold: well, gigabit internet is nice... and sleeping for 12 hours means nothing's using the internet majorly.  :P
[17:06] <lordcirth> Turns out that sticking enough 7200rpm drives together can get things going pretty fast, as long as you have nvme for write journaling
[17:07] <teward> heh nice
[17:07] <lordcirth> Know what the bottleneck is? 100Gb/s ethernet.
[17:08] <lordcirth> On reads, that is
[17:08] <lordcirth> On writes, it's the Write-ahead-log (WAL) on nvme.
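For context, the data/WAL split lordcirth describes is set at OSD creation time with ceph-volume; a sketch with made-up device names (assumptions: sdb is a 7200rpm data drive, and the NVMe partitions hold the WAL and DB):

    # BlueStore OSD: bulk data on the spinner, WAL and DB on NVMe
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.wal /dev/nvme0n1p1 \
        --block.db  /dev/nvme0n1p2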
[17:09] <sarnold> then you need more nvmes! :D
[17:09] <lordcirth> Well, considering that the prod cluster is doing fine, and this next gen one I'm testing is several times faster on every metric, no, not really :P
[17:09] <lordcirth> Not that I would *mind* more vroom
[17:10]  * teward anti-vrooms the Ceph cluster with magicks
[17:10] <teward> lordcirth: this server's recursive resolver IS set up to pass through a pihole though...
[17:10] <teward> ... wonder if *that* is the first bottleneck breaking the replies
[17:11] <teward> yep that'd be the problem
[17:11] <teward> lordcirth: solved it xD
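The pass-through teward found would look roughly like this on the bind9 side (the Pi-hole address is an assumption); if the Pi-hole mishandles replies, the stub resolver sitting in front of bind times out exactly as described above:

    // named.conf.options snippet: forward recursion through a Pi-hole
    options {
        forwarders { 192.168.1.53; };   // assumed Pi-hole address
        forward only;                   // never recurse directly ourselves
    };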
[17:13] <lordcirth> teward, cool. blackflow ha, not systemd's fault :P
[17:55] <blackflow> lordcirth: lies!