[07:38] Good morning
[08:30] Guys, I always install the server from the minimal iso and I don't use netplan. The ifupdown setup works fine during boot but...
[08:31] "systemctl restart networking" doesn't apply new settings from /etc/network/interfaces
[08:35] ok, what happens? nothing?
[08:35] yes
[08:36] any hints in 'journalctl -u networking'?
[08:37] Stopping/Stopped/Starting/Started Raise network interfaces...
[08:37] for each attempt
[08:40] so the interface is stopped and restarted?
[08:41] Does it really matter if the new settings are not applied?
[08:41] And I don't know whether the messages indicate the interfaces actually went up/down
[08:45] probably there are other people here using ifupdown instead of the new uber super tech "netplan" gizmo
[08:53] Maybe netplan will be deprecated again... once we get used to using it. I hope playing with core components doesn't happen so often.
[08:57] i'm using ifupdown myself, haven't seen this issue
[09:00] https://paste.ubuntu.com/p/9QpPmgyJSY/
[09:01] * lordievader +1 for ifupdown, netplan.io is one of the first things to go on a new install.
[09:47] jamespage: coreycb: I'm not seeing the qemu/libvirt from Disco in https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+index?batch=300
[09:47] since we agreed that stein will have those, I wanted to double check that it is going well?
[09:47] cpaelzer: next on my list to backport
[09:47] not started on it yet
[09:47] thank you jamespage
[10:22] Greetings! Does anybody know what kind of performance is to be expected from an MD RAID6?
[10:22] I'm running ubuntu 18.04.1 with kernel 4.18.0-14-generic and have a 12-drive RAID6.
[10:22] I expected a single dd write to the MD device from /dev/zero would give me something close to 2 GiB/s, but I get 96 MiB/s.
[10:23] What am I missing?
[11:45] Is there such a thing as an apt proxy that caches downloads and can be addressed by other machines on the LAN?
[11:45] such that you set up a deb entry for the caching machine
[11:51] kstenerud: apt-cacher{,-ng} is smart like that, iirc it uses avahi-daemon to let others know it exists
[11:52] you can also install squid-deb-proxy, and squid-deb-proxy-client on the client machines
[12:17] ahasenack: So does that mean that if I have a proxy on the LAN, any system will pick it up automatically via avahi-daemon?
[12:19] kstenerud: that's the theory, but I'm not using it, I set up an actual squid proxy and point my machines at it
[12:21] JulietDeltaGolf: maybe the raid is still being built by the time you're testing?
[12:22] tomreyn > no, I waited patiently
[12:23] for it to finish
[12:23] but I've just found out 4 drives were doing a background check
[12:23] JulietDeltaGolf: so you had checked /proc/mdstat
[12:23] that helped a bit
[12:23] oh ok
[12:23] then I did a dd with an absurdly large blocksize
[12:23] and now I get ~600 MiB/s
[12:24] and the raid process is using around 93% of the cpu
[12:24] are you using oflag=direct ?
[12:25] 93% of CPU?! that sounds wrong
[12:25] sooooooooooo I guess o_direct is not only bypassing the VFS cache
[12:25] but the MD and the disk cache as well ...
[12:25] and I didn't expect that... that should be the O_SYNC flag's job
[12:27] tomreyn > yeah ...
[12:30] sorry I meant 93% of a core :)
[12:32] hum ... so it would mean the RAID6 implementation is single-threaded and I will never go above that, ever.
[12:34] JulietDeltaGolf: what sort of drives are these?
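
For reference, a minimal sketch of the checks being discussed; the array name (/dev/md0) and the transfer sizes are illustrative assumptions, not values from the channel:

    # confirm the array is idle (not resyncing or running a background check)
    cat /proc/mdstat
    # sequential write test with a large block size; oflag=direct bypasses the
    # page cache, which (as observed above) also seems to sidestep the MD cache
    dd if=/dev/zero of=/dev/md0 bs=64M count=64 oflag=direct
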
[12:35] JulietDeltaGolf: keep in mind that dd is a very poor performance test tool - try fio
[12:35] fio is built to benchmark disks - dd is made to move data
[12:36] RoyK > old spinning rust :)
[12:38] setting group_thread_cnt to 1
[12:38] seems to help
[12:39] JulietDeltaGolf: as for the single-threadedness - I really don't know. How large are these drives? what's their rpm? what's the expected workload?
[12:39] JulietDeltaGolf: for raid-specific questions, there's also #linux-raid
[12:41] RoyK > 8TB 7200 rpm NL SAS
[15:02] with 18.04, we have both rsyslogd and systemd-journald. isn't this redundant - wouldn't it make sense to have only one or the other?
[15:06] tomreyn: well, you can't get rid of the journal. and removing rsyslog would probably break legacy stuff that relies on those logfiles being available.
[15:07] and there'd be additional pitchfork mobs if rsyslog was removed. so it's a good default for now.
[15:08] personally I prefer to use the journal as a short (100M) in-memory (volatile only) buffer, and forward everything to a syslog daemon for persistent logging.
[15:11] blackflow: yes, that's probably true, and the redundancy doesn't really hurt. maybe the addition of systemd-journal should be mentioned in the release notes, possibly accompanied by a hint that users may want to replace rsyslogd if they don't need it.
[15:11] I've also heard that systemd-journald makes a poor network log aggregator (doesn't scale well)
[15:12] i see, that's good to know.
[15:13] tomreyn: journald is a core component of systemd, it has been here since 15.04
[15:15] oh, it's been in 16.04, too? i wasn't aware!
[15:15] yup.
[15:16] rbasak: i may have to add a third-party plugin to nginx-full and -extras - there's a headache with GeoIP legacy being discontinued and not available freely anymore; there's a GeoIP2 third-party module that was requested in Debian, and I asked sarnold to do a cursory review, and he said for -full and -extras it'd be acceptable (it's Universe so..). Just wanted to give the FYI. It adds to the delta, but I'd rather have a 'freely usable' GeoIP module in the NGINX config than one which you have to pay MaxMind to utilize.
[15:16] thoughts?
[15:16] (all server team members are permitted to give comments)
[15:30] teward: OK, thanks. And good job on checking with the security team and letting them know :)
[15:31] rbasak: always :P
[15:31] sarnold gets pokes from me about that stuff regularly :P
[15:32] rbasak: sarnold's usually the first one I ask for cursory code reviews for any major evils
[15:32] in this case there weren't any major problems identified, but it's also in Universe so we're not needing a heavy in-depth analysis at this point, just the ultra major headaches/concerns (since it's not in main)
[15:33] rbasak: though in the future we may need to talk about an MIR to include that module, since GeoIP's... well, now a 'paid product' for the geoip data from MaxMind.
[20:33] cyphermox: noticed something odd on an 18.04 machine - netplan's 51-local-config.yaml (made in house) and 99-vmware-cfg.yaml (provided by a VMware template): netplan only processes 51 and not 99, is this normal?
[20:33] (similar behavior when 50-cloud-init.yaml is present too)
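
For context on the 20:33 question: netplan merges every *.yaml under /etc/netplan in lexical filename order, with later files overriding earlier ones key by key, so a 99- file should normally take effect on top of a 51- file. A minimal illustration, with a made-up interface name and addresses:

    # /etc/netplan/51-local-config.yaml
    network:
      version: 2
      ethernets:
        ens192:
          addresses: [192.168.1.10/24]

    # /etc/netplan/99-vmware-cfg.yaml -- sorts later, so its values win where keys collide
    network:
      version: 2
      ethernets:
        ens192:
          addresses: [192.168.1.20/24]
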
[20:42] Hi! I need to pick someone's brain... :)
[20:42] I want to copy data from an external drive back to the Ubuntu Server 18.04 machine and I use rsync for that.
[20:43] However, it is painfully slow.
[20:43] When I was copying data from the server to the drive, it was rather fast (USB3)...
[20:43] What could be the cause for this, or what should I look at...
[20:44] just how slow is it going?
[20:45] I started rsync maybe a minute ago and it still says `sending incremental file list`
[20:45] But when I earlier tried to copy files, it was up to 2MB/s
[20:46] Given that I have almost 1TB of data, this is crazy...
[20:46] hello
[20:46] o/ amcclure
[20:46] hi Ben
[20:46] The drive is partitioned as ext4
[20:47] small world, hey?
[20:47] yep lol
[20:48] super_koza: you might want to try something like https://github.com/jbd/msrsync
[20:48] Ok, when I try copying big files, the speeds reach up to 150MB/s
[20:50] that's not a big surprise
[20:50] getting information about small files can take forever, lots of syncs, lots of syscalls, etc
[20:51] but copying huge files can issue huge read() calls, usually it's sequential copies..
[20:52] For example: 2.03G 100% 8.38MB/s 0:03:51
[20:52] I don't really know why, since I didn't have this issue with older Ubuntu Server versions or other distros, but when I type "sudo [command]" there's always a long delay before it asks me to enter my password. After that, however, there aren't any delays unless I try running a command with sudo again.
[20:52] readahead also possibly helps with bigger files
[20:53] is there anything I could do to fix this issue?
[20:53] amcclure: is that delay about six, twelve, or eighteen seconds?
[20:53] It was showing ~150MB/s, but as it took long, the avg is only ~8MB/s
[20:54] super_koza: yes
[20:54] wrong tab sorry
[20:54] sarnold: yes
[20:54] smells like DNS lookups
[20:55] amcclure: smells like busted dns lookups, as sdeziel mentions
[20:55] dns lookups?
[20:55] you may be able to configure the PAM module doing the lookups to not do them, or install records in your DNS server or /etc/hosts to give speedy answers..
[20:56] or maybe it's sudo doing the lookups..
[20:56] sarnold: I'm curious to know where you got those 6, 12, and 18 s? resolv.conf mentions the default timeout to be 5s and retry 2
[20:57] is there anything else that I could use?
[20:57] sdeziel: hmm, I thought it'd be six seconds forever, and the 12 and 18 in case the second and third listed resolvers give trouble..
[20:57] it was a hosts issue
[20:58] didn't think sudo and hosts were connected at all
[20:58] sudo's config file lets you specify specific permissions on specific hosts
[20:58] super_koza: for this transfer I don't know, but if you need to do this frequently, there are possibly better ways than rsync
[20:58] then you can distribute one config file to every server in your organization
[20:59] sdeziel: no, I won't do this frequently, but please do tell me what would be a better option in that case
[21:02] super_koza: you could use "modern" filesystems (like btrfs or zfs) to send snapshots around instead of copying individual files
[21:03] and depending upon how much changed vs how much stayed the same, it might be faster to skip rsync and just cp -R instead
[21:03] computing diffs between source and dest can require reading them both in.. but cp -R will have fewer reads of the destination drive..
[21:04] sarnold: I think that rsync defaults to using --whole-file when the src/dest are both local
[21:05] but maybe cp -R can still beat it, I never compared the two, to be honest
[21:06] sdeziel: ooh, nice
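
A brief sketch contrasting the two local-copy approaches just discussed; the source and destination paths are illustrative only:

    # rsync: per-file bookkeeping, but --info=progress2 (rsync >= 3.1)
    # gives an overall transfer progress view instead of per-file lines
    rsync -a --info=progress2 /mnt/external/ /srv/data/
    # cp: plain sequential copy, never reads the destination to compute diffs
    cp -a /mnt/external/. /srv/data/
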
[21:19] msrsync doesn't seem to do much
[21:19] the terminal stays blank even though I have specified the -P flag
[21:22] When I try copying the files with rsync to my laptop over a USB2 port, speeds of 30MB/s are reached for larger files
[21:22] Which seems decent, I would say.
[21:22] Why the hell is my server so slow then?
[21:23] How can I check USB speeds?
[21:26] amcclure, does your hostname for 127.0.0.1 in /etc/hosts match your hostname in /etc/hostname ?
[21:26] Oh, nvrm, you found that
[21:29] super_koza: maybe your server is doing other things taking away IO bandwidth?
[21:29] Nope, it is doing nothing
[21:31] super_koza, in future, using tar might make this a lot faster
[21:32] btrfs/zfs snapshots are great, but if the problem is random IO being really slow, tar will turn it into a single sequential IO
[21:34] I have done a very very stupid thing
[21:35] oh?
[21:35] I have changed permissions on my home with -R 777 :D
[21:35] I have no idea why I did this...
[21:35] Well, that's not nearly as bad as doing it to /
[21:36] How can I recover from it?
[21:36] your subconscious wanted to test your backup procedure ;)
[21:36] I can't ssh anymore
[21:37] are there other untrusted users on the system?
[21:38] super_koza, you need to remove permissions from ~/.ssh. chmod -R o-rwx ~/.ssh
[21:38] -> recursively remove 'rwx' from 'others'
[21:38] .ssh should just be 700
[21:38] ok
[21:38] let me try it
[21:40] so I should do chmod -R 770 /home/myuser?
[21:41] then once you're done with that, figure out how to remove that execute bit from files that shouldn't have it
[21:43] damn
[21:43] :D
[21:44] I think the last time I needed to clean up after someone's chmod 777, I used find . -type f -exec .. and find . -type d -exec ..
[21:46] what does that do?
[21:51] super_koza: "-type f" will only run the -exec stuff on files, "-type d" will deal with dirs only
[21:55] Now I have mounted the drive into /mount
[21:55] and now I get decent speeds
[21:55] Not sure how that makes any difference, but hey, whatever works...
[23:10] evening all
[23:11] anyone have any idea why *the* default installer for 18.04.1 LTS Server is a subiquity-based cloud one?
[23:15] I don't think that question can be answered in a way that would satisfy you. Defaults get decided by the project according to our governance structure.
[23:16] rbasak: wondering because; there is no warning, it *requires* a network connection to install or start, and it litters the install with cloud-based programs and configuration files.
[23:18] See bug 1750819.
[23:18] bug 1750819 in subiquity "Impossible to install without network" [Undecided,Triaged] https://launchpad.net/bugs/1750819
[23:19] rbasak: yeah, I found that entry *after* I ran head first into that bug.
[23:20] As for "litters", you'll need to explain why that's actually a problem. It's common to end up with things that people don't specifically need in order to provide a more comfortable experience and to help reduce the bug count (fewer possible combinations generally means fewer bugs).
[23:22] For example, you also end up with a ton of hardware drivers you don't need.
[23:22] The advantage is that everyone* is using the same kernel, which makes bugs shallower.
[23:43] back everyone...
[23:51] bye now.
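
As a postscript, a hedged sketch of the find-based cleanup outlined at 21:44 above; the 755/644 modes and the strict ~/.ssh permissions are the conventional choices, assumed here rather than quoted from the channel:

    # restore sane permissions after a recursive chmod 777 on $HOME
    find ~ -type d -exec chmod 755 {} +     # dirs need the execute bit to be traversable
    find ~ -type f -exec chmod 644 {} +     # strip execute from regular files
    chmod 700 ~/.ssh && chmod 600 ~/.ssh/*  # sshd ignores key files readable by others
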