[07:38] <lordievader> Good morning
[08:30] <terra> Guys, I always install server from the minimal iso and I don't use netplan. The ifupdown setup works fine during boot, but...
[08:31] <terra> "systemctl restart networking" doesn't apply the new settings in /etc/network/interfaces
[08:35] <ducasse> ok, what happens? nothing?
[08:35] <terra> yes
[08:36] <ducasse> any hints in 'journalctl -u networking'?
[08:37] <terra> Stopping/Stopped/Starting/Started Raise network interfaces...
[08:37] <terra> for each attempt
[08:40] <ducasse> so the interface is stopped and restarted?
[08:41] <terra> Does it really matter, if the new settings aren't applied?
[08:41] <terra> And I don't know if those messages indicate the interfaces actually went up/down
[08:45] <terra> probably there are other people here using ifupdown instead of the new uber super tech "netplan" gizmo
[08:53] <terra> Maybe netplan will be deprecated again... just when we get used to using it. Hope playing with core components doesn't happen so often.
[08:57] <ducasse> i'm using ifupdown myself, haven't seen this issue
[09:00] <terra> https://paste.ubuntu.com/p/9QpPmgyJSY/
[09:01]  * lordievader +1 for ifupdown, netplan.io is one of the first things to go on a new install.
[09:47] <cpaelzer> jamespage: coreycb: I'm not seeing the qemu/libvirt from Disco in https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/stein-staging/+index?batch=300
[09:47] <cpaelzer> since we agreed that stein will have those I wanted to double check if that is going well?
[09:47] <jamespage> cpaelzer: next on my list to backport
[09:47] <jamespage> not started on it yet
[09:47] <cpaelzer> thank you jamespage
[10:22] <JulietDeltaGolf> Greetings ! Does anybody know what kind of performance are to be expected with a MD RAID6 ?
[10:22] <JulietDeltaGolf> I'm running ubuntu 18.04.1 with kernel 4.18.0-14-generic and have a 12 drives RAID6.
[10:22] <JulietDeltaGolf> I expected a single dd write to the MD devices from /dev/zero would give me something close to 2 GiB/s but I get 96 MiB/s.
[10:23] <JulietDeltaGolf> What am I missing ?
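As a rough sanity check on that expectation: a full-stripe RAID6 write stripes data across (n - 2) members, so the theoretical ceiling is (drives - 2) × per-drive speed. The 200 MiB/s per-drive figure below is an assumed value for 7200rpm spinning disks, not a measured one.

```shell
# Rough sequential-write ceiling for RAID6: two drives' worth of
# capacity goes to parity, so data is striped over (n - 2) members.
# per_drive_mibs=200 is an assumed figure for 7200rpm spinning disks.
drives=12
per_drive_mibs=200
data_drives=$(( drives - 2 ))
ceiling=$(( data_drives * per_drive_mibs ))
echo "theoretical full-stripe write ceiling: ${ceiling} MiB/s"
```

In practice parity computation, stripe-cache pressure, and background checks (as seen above) pull real numbers well below this ceiling.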
[11:45] <kstenerud> Is there such a thing as an apt proxy that caches downloads, and can be addressed by other machines on the LAN?
[11:45] <kstenerud> such that you set up a deb entry for the caching machine
[11:51] <ahasenack> kstenerud: apt-cacher{,-ng} is smart like that, iirc it uses avahi-daemon to let others know it exists
[11:52] <ahasenack> you can also install squid-deb-proxy, and squid-deb-proxy-client on the client machines
[12:17] <kstenerud> ahasenack: So does that mean that if I have a proxy on the lan, any system will pick it up automatically via avahi-daemon?
[12:19] <ahasenack> kstenerud: that's the theory, but I'm not using it, I set up an actual squid proxy, and point my machines at it
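A sketch of the manual (non-avahi) client side of that setup: the hostname apt-cache.lan is hypothetical, and 3142 is apt-cacher-ng's default listen port. Written to /tmp here so the snippet is safely runnable; the real location is /etc/apt/apt.conf.d/.

```shell
# Point apt at a LAN cache via an apt.conf.d snippet.
# "apt-cache.lan" is a hypothetical hostname; 3142 is
# apt-cacher-ng's default port.
cat > /tmp/01proxy <<'EOF'
Acquire::http::Proxy "http://apt-cache.lan:3142";
EOF
# On a real client this file would be /etc/apt/apt.conf.d/01proxy.
cat /tmp/01proxy
```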
[12:21] <tomreyn> JulietDeltaGolf: maybe the raid is still being built by the time you're testing?
[12:22] <JulietDeltaGolf> tomreyn > no, I waited patiently for it to finish
[12:23] <JulietDeltaGolf> but I've just found out 4 drives were doing a background check
[12:23] <tomreyn> JulietDeltaGolf: so you had checked /proc/mdstat
[12:23] <JulietDeltaGolf> that helped a bit
[12:23] <tomreyn> oh ok
[12:23] <JulietDeltaGolf> then I did a dd with an absurdly large blocksize
[12:23] <JulietDeltaGolf> and now I get ~600 MiB/s
[12:24] <JulietDeltaGolf> and the raid process is using around 93% of the cpu
[12:24] <tomreyn> are you using oflag=direct ?
[12:25] <tomreyn> 93% of CPU?! that sounds wrong
[12:25] <JulietDeltaGolf> sooooooooooo I guess o_direct is not only bypassing the VFS cache
[12:25] <JulietDeltaGolf> but the MD and the disk cache as well ...
[12:25] <JulietDeltaGolf> and I didn't expect that... that should be O_SYNC flag's job
[12:27] <JulietDeltaGolf> tomreyn > yeah ...
[12:30] <JulietDeltaGolf> sorry I meant 93% of a core :)
[12:32] <JulietDeltaGolf> hum ... so it would mean the RAID6 implementation is single threaded and I will never go above that ever.
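For reference, a hedged sketch of the kind of dd run being discussed. The md device path in the comment is a placeholder; the runnable part writes a small scratch file instead, since benchmarking a real array needs root and a disposable device.

```shell
# Benchmark-style dd as discussed: large blocks, O_DIRECT to bypass
# the page cache. On the real array it would target the md device:
#   dd if=/dev/zero of=/dev/md0 bs=64M count=64 oflag=direct status=progress
# Safe stand-in below: write 8 MiB to a scratch file with conv=fsync
# (no oflag=direct, since some filesystems, e.g. tmpfs, reject O_DIRECT).
dd if=/dev/zero of=/tmp/dd_scratch bs=1M count=8 conv=fsync 2>/dev/null
stat -c %s /tmp/dd_scratch
```

Note O_DIRECT only bypasses the page cache; md and the drives' own write caches still sit underneath, which is part of the surprise voiced above.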
[12:34] <RoyK> JulietDeltaGolf: what sort of drives are these?
[12:35] <RoyK> JulietDeltaGolf: keep in mind that dd is a very poor performance test tool - try fio
[12:35] <RoyK> fio is built to benchmark disks - dd is made to move data
[12:36] <JulietDeltaGolf> RoyK > old spinning rust :)
[12:38] <JulietDeltaGolf> setting 1 in group_thread_cnt
[12:38] <JulietDeltaGolf> seems to help
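The tunable in question lives in sysfs; a minimal sketch, assuming the array is md0 (adjust to the real device name, and note it needs root):

```shell
# Spread md's RAID5/6 stripe handling over extra worker threads.
# "md0" is a placeholder for the actual array; run as root.
echo 4 > /sys/block/md0/md/group_thread_cnt
```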
[12:39] <RoyK> JulietDeltaGolf: as for that of singlethreadedness - I really don't know. How large are these drives? what's their rpm? what's the expected workload?
[12:39] <RoyK> JulietDeltaGolf: for raid-specific questions, there's also #linux-raid
[12:41] <JulietDeltaGolf> RoyK > 8TB 7200rpm NL SAS
[15:02] <tomreyn> with 18.04, we have both rsyslogd and systemd-journald. isn't this redundant - wouldn't it make sense to have only one or the other?
[15:06] <blackflow> tomreyn: well, you can't get rid of journal. and removing rsyslog would probably break legacy stuff that relies on those logfiles being available.
[15:07] <blackflow> and there'd be additional pitchfork mobs if rsyslog was removed. so it's a good default for now.
[15:08] <blackflow> personally I prefer to use journal as a short (100M) in memory (volatile only) buffer, and forward everything to a syslog for persistent logging.
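That setup corresponds roughly to a journald.conf like the following; written to /tmp here so it's safely runnable, the real file being /etc/systemd/journald.conf.

```shell
# journald as described above: volatile (RAM-only) storage capped at
# 100M, with everything forwarded to a classic syslog daemon for
# persistence. Demo copy in /tmp; real path is /etc/systemd/journald.conf.
cat > /tmp/journald.conf <<'EOF'
[Journal]
Storage=volatile
RuntimeMaxUse=100M
ForwardToSyslog=yes
EOF
cat /tmp/journald.conf
```

After editing the real file, `systemctl restart systemd-journald` applies it.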
[15:11] <tomreyn> blackflow: yes, that's probably true, and the redundancy doesn't really hurt. maybe the addition of systemd-journal should be mentioned in the release notes, possibly accompanied by a hint that users may want to replace rsyslogd if they don't need it.
[15:11] <sdeziel> I've also heard that systemd-journald makes a poor network log aggregator (doesn't scale well)
[15:12] <tomreyn> i see, that's good to know.
[15:13] <blackflow> tomreyn: journald is a core component of systemd; it's been there since 15.04
[15:15] <tomreyn> oh, it's been in 16.04, too? i wasn't aware!
[15:15] <blackflow> yup.
[15:16] <teward> rbasak: I may have to add a third-party plugin to nginx-full and -extras - there's a headache with GeoIP legacy being discontinued and no longer freely available; there's a GeoIP2 third-party module that was requested in Debian, and I asked sarnold to do a cursory review, and he said for -full and -extras it'd be acceptable (it's Universe so..).  Just wanted to give the FYI.  It adds to the delta, but I'd rather have a 'freely usable' GeoIP
[15:16] <teward> module in the NGINX config than one you have to pay MaxMind for to utilize.
[15:16] <teward> thoughts?
[15:16] <teward> (all server team members are permitted to give comments)
[15:30] <rbasak> teward: OK, thanks. And good job on checking with the security team and letting them know :)
[15:31] <teward> rbasak: always :P
[15:31] <teward> sarnold gets pokes from me about that stuff regularly :P
[15:32] <teward> rbasak: sarnold's usually the first one I ask for cursory code reviews for any major evils
[15:32] <teward> in this case there weren't any major problems identified, but it's also in Universe so we don't need a heavy in-depth analysis at this point, just the ultra major headaches/concerns (since it's not in Main)
[15:33] <teward> rbasak: though in the future we may need to talk on a MIR to include that module, since GeoIP's... well, now a 'paid product' for the GeoIP data from MaxMind.
[20:33] <teward> cyphermox: noticed something odd on an 18.04 machine - netplan's 51-local-config.yaml (made in house) and 99-vmware-cfg.yaml (provided by VMware Template), netplan only processes 51 and not 99, is this normal?
[20:33] <teward> (similar behavior when 50-cloud-init.yaml is present too)
[20:42] <super_koza> Hi! I need to pick someones brain... :)
[20:42] <super_koza> I want to copy data from an external drive back to the Ubuntu Server 18.04 and I use rsync for that.
[20:43] <super_koza> However, it is painfully slow.
[20:43] <super_koza> When I was copying data from the server to the drive, it was rather fast (USB3)...
[20:43] <super_koza> What could be the cause for this, or what should I look at...
[20:44] <sarnold> just how slow is it going?
[20:45] <super_koza> I started rsync maybe a minute ago and it still says `sending incremental file list`
[20:45] <super_koza> But when I earlier tried to copy files, it was up to 2MB/s
[20:46] <super_koza> Given that I have almost 1TB of data, this is crazy...
[20:46] <amcclure> hello
[20:46] <benharri> o/ amcclure
[20:46] <amcclure> hi Ben
[20:46] <super_koza> The drive is partitioned as ext4
[20:47] <benharri> small world, hey?
[20:47] <amcclure> yep lol
[20:48] <benharri> super_koza: you might want to try something like https://github.com/jbd/msrsync
[20:48] <super_koza> Ok, when I try copying big files, the speeds reach up to 150MB/s
[20:50] <sarnold> that's not a big surprise
[20:50] <sarnold> getting information about small files can take forever, lots of syncs, lots of syscalls, etc
[20:51] <sarnold> but copying huge files can issue huge read() calls, usually it's sequential copies..
[20:52] <super_koza> For example: 2.03G 100%    8.38MB/s    0:03:51
[20:52] <amcclure> I don't really know why since I didn't have this issue with older Ubuntu Server versions or other distros, but when I type "sudo [command]" there's always a long delay before it asks to enter my password. After that however there aren't any delays unless I try running a command with sudo again.
[20:52] <sdeziel> readahead also possibly help with bigger files
[20:53] <amcclure> is there anything I could do to fix this issue?
[20:53] <sarnold> amcclure: is that delay about six, twelve, or eighteen seconds?
[20:53] <super_koza> It was showing ~150MB/s, but as it took long, the avg is only ~8MB/s
[20:54] <amcclure> super_koza: yes
[20:54] <amcclure> wrong tab sorry
[20:54] <amcclure> sarnold: yes
[20:54] <sdeziel> smells like DNS lookups
[20:55] <sarnold> amcclure: smells like busted dns lookups, as sdeziel mentions
[20:55] <amcclure> dns lookups?
[20:55] <sarnold> you may be able to configure the PAM module doing the lookups to not do them, or install records in your DNS server or /etc/hosts to give speedy answers..
[20:56] <sarnold> or maybe it's sudo doing the lookups..
[20:56] <sdeziel> sarnold: I'm curious where you got those 6, 12, and 18 s from? resolv.conf says the default timeout is 5s with 2 retries
[20:57] <super_koza> is there anything else that I could use?
[20:57] <sarnold> sdeziel: hmm, I thought it'd be six seconds forever, and the 12 and 18 in case the second and third listed resolvers give trouble..
[20:57] <amcclure> it was a hosts issue
[20:58] <amcclure> didn't think sudo and hosts were connected at all
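The usual shape of that fix is making the machine's hostname resolve locally in /etc/hosts; a sketch with a hypothetical hostname ("myserver"), written to a scratch file rather than the real /etc/hosts.

```shell
# Classic fix for slow sudo: ensure the local hostname resolves
# without hitting DNS. "myserver" is a hypothetical hostname and
# should match the contents of /etc/hostname. Demo copy in /tmp;
# the real file is /etc/hosts.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1   localhost
127.0.1.1   myserver
EOF
grep myserver /tmp/hosts.demo
```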
[20:58] <sarnold> sudo's config file lets you specify specific permissions on specific hosts
[20:58] <sdeziel> super_koza: for this transfer I don't know but if you need to do this frequently, there are possibly better ways than rsync
[20:58] <sarnold> then you can distribute one config file to every server in your organization
[20:59] <super_koza> sdeziel: no, I won't do this frequently, but please do tell me what would be a better option in that case
[21:02] <sdeziel> super_koza: you could use "modern" filesystems (like btrfs or zfs) to send snapshots around instead of copying individual files
[21:03] <sarnold> and depending upon how much changed vs how much stayed the same, it might be faster to skip rsync and just cp -R instead
[21:03] <sarnold> computing diffs between source and dest can require reading them both in.. but cp -R will have fewer reads of the destination drive..
[21:04] <sdeziel> sarnold: I think that rsync defaults to using --whole-file when the src/dest are both local
[21:05] <sdeziel> but maybe cp -R can still beat it, I never compared both to be honest
[21:06] <sarnold> sdeziel: ooh, nice
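A minimal illustration of the cp -R route for a one-off local copy where the destination starts empty; the rsync equivalent is noted in a comment.

```shell
# For a one-off local copy with nothing pre-existing at the
# destination, plain cp -R avoids rsync's per-file comparison.
mkdir -p /tmp/copydemo/src
echo hello > /tmp/copydemo/src/file.txt
cp -R /tmp/copydemo/src /tmp/copydemo/dst
# rsync's local equivalent would be: rsync -a --whole-file src/ dst/
cat /tmp/copydemo/dst/file.txt
```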
[21:19] <super_koza> msrsync doesn't seem to do much
[21:19] <super_koza> the terminal stays blank even though I have specified -P flag
[21:22] <super_koza> When I try copying the files with rsync to my laptop over USB2 port, speeds of 30MB/s are reached for larger files
[21:22] <super_koza> Which seems decent, I would say.
[21:22] <super_koza> Why the hell is my server so slow then?
[21:23] <super_koza> How can I check USB speeds?
[21:26] <lordcirth__> amcclure, does your hostname for 127.0.0.1 in /etc/hosts match your hostname in /etc/hostname ?
[21:26] <lordcirth__> Oh, nvrm, you found that
[21:29] <sdeziel> super_koza: maybe your server is doing other things taking away IO bandwidth?
[21:29] <super_koza> Nope, it is doing nothing
[21:31] <lordcirth__> super_koza, in future, using tar might make this a lot faster
[21:32] <lordcirth__> btrfs/zfs snapshots are great, but if the problem is random IO being really slow, tar will turn it into a single sequential IO
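The tar trick looks like this; both sides of the pipe run locally here against scratch directories, but the same pattern works with ssh on either end.

```shell
# Stream many small files as one sequential tar stream: tar up the
# source and untar at the destination in a single pipe.
mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo a > /tmp/tardemo/src/a.txt
echo b > /tmp/tardemo/src/b.txt
tar -C /tmp/tardemo/src -cf - . | tar -C /tmp/tardemo/dst -xf -
ls /tmp/tardemo/dst
```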
[21:34] <super_koza> I have done a very very stupid thing
[21:35] <lordcirth__> oh?
[21:35] <super_koza> I have changed permission on my home with -R 777 :D
[21:35] <super_koza> I have no idea why I did this...
[21:35] <lordcirth__> Well, that's not nearly as bad as doing it to /
[21:36] <super_koza> How can I recover from it?
[21:36] <sdeziel> your subconscious wanted to test your backup procedure ;)
[21:36] <super_koza> I can't ssh anymore
[21:37] <sarnold> are there other untrusted users on the system?
[21:38] <lordcirth__> super_koza, you need to remove permissions from ~/.ssh. chmod -R o-rwx ~/.ssh
[21:38] <lordcirth__> -> recursively remove 'rwx' from 'others'
[21:38] <benharri> .ssh should just be 700
[21:38] <super_koza> ok
[21:38] <super_koza> let me try it
[21:40] <super_koza> so I should do chmod -R 770 /home/myuser?
[21:41] <sarnold> then once you're done with that, figure out how to remove that execute bit from files that shouldn't have it
[21:43] <super_koza> damn
[21:43] <super_koza> :D
[21:44] <sarnold> I think the last time I needed to clean up after someone's chmod 777, I used find . -type f -exec .. and find . -type d -exec ..
[21:46] <super_koza> what does that do?
[21:51] <sdeziel> super_koza: "-type f" will only run the -exec stuff on files, "-type d" will deal with dirs only
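Putting those two find invocations together, a sketch of the cleanup on a throwaway tree (on the real system the target would be the home directory, and 644/755 are assumed sane defaults, not necessarily right for every file, e.g. scripts that need +x):

```shell
# Undo a recursive chmod 777: files back to 644, directories back
# to 755. Demo tree under /tmp; the real target would be ~.
mkdir -p /tmp/fixperms/dir
touch /tmp/fixperms/dir/file
chmod -R 777 /tmp/fixperms
find /tmp/fixperms -type f -exec chmod 644 {} +
find /tmp/fixperms -type d -exec chmod 755 {} +
stat -c %a /tmp/fixperms/dir/file
```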
[21:55] <super_koza> Now I have mounted the drive into /mount
[21:55] <super_koza> and now I get decent speeds
[21:55] <super_koza> Not sure how that makes any difference, but hey, whatever works...
[23:10] <jilocasin> evening all
[23:11] <jilocasin> anyone have any idea why *the* default installer for 18.04.1 LTS Server is a subiquity based cloud one?
[23:15] <rbasak> I don't think that question can be answered in a way that would satisfy you. Defaults get decided by the project according to our governance structure.
[23:16] <jilocasin> rbasak: wondering because: there is no warning, it *requires* a network connection to install or start, and it litters the install with cloud-based programs and configuration files.
[23:18] <rbasak> See bug 1750819.
[23:19] <jilocasin> rbasak: yea I found that entry *after* I ran head first into that bug.
[23:20] <rbasak> As for "litters", you'll need to explain why that's actually a problem. It's common to end up with things that people don't specifically need in order to provide a more comfortable experience and to help reduce the bug count (fewer possible combinations generally means fewer bugs).
[23:22] <rbasak> For example, you also end up with a ton of hardware drivers you don't need.
[23:22] <rbasak> The advantage is that everyone* is using the same kernel, which makes bugs shallower.
[23:43] <jilocasin> back everyone...
[23:51] <jilocasin> bye now.