=== mundus2018 is now known as mundus | ||
xibalba | what're you guys using for a time server nowadays? ntpd/openntpd/chrony? | 02:51 |
xibalba | i need it to provide time to a bunch of downstream devices | 02:52 |
tomreyn | many prefer chrony over ntpd for its reduced complexity / newer code. but i don't know how well it works as a server (i suspect it works fine). | 02:54 |
xibalba | i'm going to point a lot of network gear at it. i know ntpd is tried and true, but chrony is the newcomer and has better/quicker time sync code | 03:04 |
xibalba | i have seen it demo'd more as a client-side time sync than as a server for downstream gear | 03:04 |
tomreyn | maybe also ask in ##linux , ##networking , ##security to get more opinions | 03:07 |
xibalba | will do, thanks tomreyn | 03:10 |
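For reference, chronyd can also act as a server: it answers NTP queries from clients once an `allow` directive is present. A minimal sketch of `/etc/chrony/chrony.conf` for serving downstream gear (the pool name and subnet are placeholders, adjust to your network):

```
# Upstream sources chrony syncs from
pool 2.ubuntu.pool.ntp.org iburst

# Answer NTP queries from downstream devices (placeholder subnet)
allow 10.0.0.0/8

# Keep serving plausible time to clients even if all upstreams become unreachable
local stratum 10
```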
lordievader | Good morning | 06:26 |
=== cpaelzer__ is now known as cpaelzer | ||
=== azidhaka_ is now known as azidhaka | ||
Ussat | 19.02 won't be an LTS release, correct? | 13:45 |
sdeziel | Ussat: correct, next LTS will be 20.04 | 13:45 |
Ussat | OK, thought as much, just making sure | 13:45 |
sdeziel | s/19\.02/19.04/ | 13:46 |
Ool | every 2 years | 13:46 |
Ussat | we only run LTS, but I might spin up a 19.04 for testing etc. | 13:46 |
Ussat | so even years are LTS, odd are not? | 13:46 |
sdeziel | in the past, the following were LTSes: 6.06, 8.04, 10.04, 12.04, 14.04, 18.04 | 13:47 |
sdeziel | and 16.04 :) | 13:47 |
ahasenack | rbasak: hi, what would you expect to happen with an SRU for two packages, where one introduces a new api call (samba), and the other one just needs a rebuild to detect and use it? | 14:06 |
ahasenack | rbasak: samba will land later today, but the other one (gvfs) needs to wait for samba to arrive in proposed and then be rebuilt | 14:06 |
ahasenack | so for the second one, would you expect an MP? Or just a dch -R-style change and upload? | 14:07 |
ahasenack | both packages are tasks on the same bug | 14:07 |
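For context, a sketch of what that rebuild-only upload could look like once the new samba is published in -proposed (the package versions and changelog wording here are illustrative, not the actual SRU):

```
# Hypothetical no-change rebuild of gvfs against the samba in -proposed
apt-get source gvfs
cd gvfs-*/
dch -i      # add an entry like: "No-change rebuild against the new samba."
dch -r      # finalize the entry for the target release
debuild -S -d
# then upload the source package to the SRU queue as usual
```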
=== lotuspsychje__ is now known as lotuspsychje | ||
=== Wryhder is now known as Lucas_Gray | ||
teward | hmm... has anyone had any issues submitting DNS queries to 127.0.0.53, the stub resolver for SystemD? `dig` to it always times out as if it doesn't know how to reply... | 15:52 |
teward | or do I have to configure it differently? | 15:52 |
blackflow | teward: I have no issues, I don't use that pile of ... | 16:38 |
lordcirth | teward, systemd-resolve has worked pretty well for me. I just tried dig google.com @127.0.0.53 and it worked fine too. | 16:47 |
teward | hmm | 16:48 |
teward | wonder why then it's being a derp | 16:48 |
blackflow | yeah I wonder... | 16:50 |
lordcirth | teward, what does systemd-resolve --status show? | 16:57 |
teward | DNS Servers: 127.0.2.1 <-- but this server uses a local bind9 recursive resolver and only the stub resolver derps on replies | 16:58 |
teward | DNS *does* work systemwide though, and querying the bind server direct works too (127.0.2.1, and yes that address does exist on-box) | 16:58 |
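For reference, that `DNS Servers: 127.0.2.1` value is typically set in `/etc/systemd/resolved.conf` (or per-link via netplan); a sketch of the global variant, using the address from the session above:

```
# /etc/systemd/resolved.conf excerpt -- forward resolved's lookups
# to the local bind9 instance
[Resolve]
DNS=127.0.2.1
```

followed by `systemctl restart systemd-resolved` to apply it.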
* teward shrugs | 16:59 | |
teward | probably some SystemD nonsense | 16:59 |
lordcirth | teward, does /etc/resolv.conf point to systemd? | 16:59 |
teward | lordcirth: yes | 16:59 |
teward | sudo nano /etc/resolv.conf | 16:59 |
teward | oops | 16:59 |
teward | yep | 16:59 |
teward | so we know the stub resolver works NORMALLY, but it just doesn't like something about the rest of the infra for some reason :? | 17:00 |
teward | not sure what that's about | 17:00 |
* teward checks the progress on his local package mirror's sync, and sees it's completed. | 17:00 | |
teward | ... wow 1.2TB in a little over 8 hours o.O | 17:00 |
sarnold | nice | 17:04 |
teward | sarnold: yeah, 20MB/s average (and yes that's mega*bytes*) is pretty decent :P | 17:05 |
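A quick sanity check on those numbers: 1.2 TB in a little over 8 hours works out to about 1.2e12 B / 28800 s ≈ 40 MB/s; at a flat 20 MB/s the sync would have taken closer to 17 hours, so one of the two figures is a rough estimate.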
lordcirth | shiny. I'm testing a Ceph cluster right now, getting 3 to 5 GiB/s read O_o | 17:06 |
sarnold | teward: man how'd you win the archive mirror lottery? :) | 17:06 |
teward | sarnold: well, gigabit internet is nice... and sleeping for 12 hours means nothing's using the internet majorly. :P | 17:06 |
lordcirth | Turns out that sticking enough 7200rpm drives together can get things going pretty fast, as long as you have nvme for write journaling | 17:06 |
teward | heh nice | 17:07 |
lordcirth | Know what the bottleneck is? 100Gb/s ethernet. | 17:07 |
lordcirth | On reads, that is | 17:08 |
lordcirth | On writes, it's the Write-ahead-log (WAL) on nvme. | 17:08 |
sarnold | then you need more nvmes! :D | 17:09 |
lordcirth | Well, considering that the prod cluster is doing fine, and this next gen one I'm testing is several times faster on every metric, no, not really :P | 17:09 |
lordcirth | Not that I would *mind* more vroom | 17:09 |
* teward anti-vrooms the Ceph cluster with magicks | 17:10 | |
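For reference, the WAL/DB-on-NVMe layout lordcirth describes is chosen when each BlueStore OSD is created; a sketch with placeholder device names:

```
# Hypothetical BlueStore OSD: data on a 7200rpm drive, DB and WAL on NVMe
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```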
teward | lordcirth: this server IS set up with its recursive resolver to pass through a pihole though... | 17:10 |
teward | ... wonder if *that* is the first bottleneck breaking the replies | 17:10 |
teward | yep that'd be the problem | 17:11 |
teward | lordcirth: solved it xD | 17:11 |
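The quick way to confirm a break like that is to query each hop of the chain directly and see where replies stop. A sketch (the pihole address is a placeholder):

```
dig +time=2 example.com @127.0.0.53    # systemd-resolved stub listener
dig +time=2 example.com @127.0.2.1     # local bind9 recursor (from the log)
dig +time=2 example.com @192.0.2.10    # upstream pihole -- placeholder address
```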
lordcirth | teward, cool. blackflow ha, not systemd's fault :P | 17:13 |
blackflow | lordcirth: lies! | 17:55 |
=== drrzmr is now known as holoturoide |