/srv/irclogs.ubuntu.com/2019/03/28/#ubuntu-server.txt

qwebirc24999it is automated by configure_networking though00:01
qwebirc24999how do I make it so that it'd work properly00:01
qwebirc24999alright, so if I manually fix the DNS I now get a curl 60 error (SSL certificate problem: unable to get local issuer certificate). I have the certificates in /etc/ssl/certs. What is wrong?00:06
sarnolddid you run update-ca-certificates?00:09
qwebirc24999yes00:34
qwebirc24999sarnold00:34
sarnoldhmm, I don't know what to suggest next00:35
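For the record, the usual fix for curl error 60 with a private or self-signed issuer on Ubuntu looks like the sketch below. This assumes the issuer certificate is in PEM format; the filename and URL are made up. Certificates dropped directly into /etc/ssl/certs are not picked up; update-ca-certificates only scans .crt files under /usr/local/share/ca-certificates.

```shell
# Hypothetical sketch: install a local issuer CA so curl can verify the chain.
sudo cp my-issuer-ca.crt /usr/local/share/ca-certificates/my-issuer-ca.crt  # must end in .crt, PEM format
sudo update-ca-certificates            # regenerates the /etc/ssl/certs symlinks and bundle
curl -v https://internal.example.com/  # curl 60 should be gone if the chain is complete
```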
=== qwebirc24999 is now known as umhello
=== disposable3 is now known as disposable2
lordievaderGood morning08:44
=== Wryhder is now known as Lucas_Gray
=== aditya_ is now known as aditya
kstenerudI have some questions about the MIR process12:00
kstenerudA package I want promoted depends on just a few other packages, but some of them are pretty big12:00
kstenerudso is there some rule of thumb over what gets in and what remains out?12:01
kstenerudOr are there any packages that should simply be dismissed out of hand?12:01
=== Wryhder is now known as Lucas_Gray
rbasakkstenerud: importance gets weighed against difficulty12:21
kstenerudOK. In this case it's a command line tool that would pull in python-tornado and ruby-sinatra12:21
kstenerudSo now I'm wondering... will this become an uphill battle? Should I reconsider?12:22
ksteneruder python3-tornado that is12:23
rbasakBoth of those seem really major to me. For Ubuntu, Ruby is particularly painful as we don't have most of that stack in main currently.12:34
rbasakWhy are they needed by the tool? Are they really mandatory?12:34
kstenerudrbasak I haven't dug in too deeply yet. It looks like it produces a command line tool, and a web API server. If these can somehow be split out, that could work... I'm trying to decide if this has a realistic chance of being a MIR or not12:42
kstenerudwhat I'd been hoping for was just a command line tool that people use to configure pacemaker/corosync. This web stuff being mixed in was a nasty surprise12:43
kstenerudI'd still need to dig deeper to see if the cmdline tool is just a web client in disguise, which would kibosh the whole thing I imagine. But even if it can be standalone, would a split in theory be feasible?12:45
rbasakkstenerud: there aren't any rules precluding a split. A single source package can produce multiple binary packages, some of which we can move to main and some of which can remain in universe. The only feasibility question is how involved the packaging work and maintaining that delta would be.12:48
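The split rbasak describes is just multiple binary-package stanzas in debian/control; a minimal sketch (the source and binary package names here are hypothetical, not the actual package under discussion):

```text
Source: clustertool
Section: admin

Package: clustertool-cli
Architecture: all
Depends: ${misc:Depends}, python3
Description: command line tool for pacemaker/corosync configuration
 Candidate for promotion to main.

Package: clustertool-web
Architecture: all
Depends: ${misc:Depends}, clustertool-cli, python3-tornado, ruby-sinatra
Description: web API server for pacemaker/corosync configuration
 Can remain in universe with its heavier dependencies.
```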
kstenerudah ok, cool. I'll do some more digging then12:48
evitAnyone know why the https://usn.ubuntu.com/ website and mailing list are always behind the release of patches. Seems like they should be in sync15:13
lotuspsychjeevit: which patch are you talking about?15:13
evitThe patch comes and then it isn't announced for a whole day some times15:13
evitThe patch will be available but people might not know cuz its not on the https://usn.ubuntu.com/ or announced via the mailing list15:14
evitfor nearly a day sometimes15:14
evitSeems like that should be more 'realtime'15:14
tomreynevit: smtp does not guarantee instant message delivery. a mailing list with thousands of users can take days until messages are delivered to all recipients.15:14
evittomreyn, I'm aware. That's not why15:15
tomreynhave you tried the newsfeeds (RSS+ Atom), are those also behind?15:15
evittomreyn, no but all sources should be updated as soon as possible15:17
tomreynevit: maybe bring it up in #ubuntu-hardened once you have specific observations (with timestamps) you can share.15:18
lotuspsychjeevit: whats your end goal with this anyway? if a security flaw comes out, the flaw is being worked on to fix15:18
lotuspsychjewhen the fix is there, the system updates15:19
tomreynit's relevant in companies where security updates are managed.15:20
tomreynor any form of organization15:20
evitlotuspsychje, Not my point. We know about vulns but need good info on when patches are available to mitigate them15:20
evitlotuspsychje, Time is of the essence in many cases. The longer it's not patched, the more likely it is to be exploited15:21
lotuspsychjewe have seen in the past the hardened guys were behind on work.. it's human to need time to get work done too, right15:21
Odd_BlokeI don't know any details, but I wouldn't be surprised if the usn.u.c updates can't start until embargo is lifted in some cases.15:22
evitlotuspsychje, I'm not blaming anyone I'm just suggesting it should be more synced up and done in a more coordinated/timely fashion for the sake of the security of the community15:22
Odd_BlokeSo the update goes out to the archive as the notification process starts.15:23
lotuspsychjeyeah i think Odd_Bloke is right15:23
evitI hear you both15:23
lotuspsychjeperhaps, as tomreyn suggests, talk to the hardened guys about it evit ?15:24
evitlotuspsychje, Yes, I will. Thanks to you all15:24
tomreynyou're welcome. ;)15:26
laurenwho would I talk to to argue for zstd and brotli, both very small binaries, being installed by default in the base server install? it would be cool to be able to distribute things as .tar.brotli or .tar.zstd, and the main bottleneck for this being useful is brotli or zstd being installed by default17:00
tewardlauren: a bit late in the dev cycle for this discussion.17:02
laurenI don't really care which iteration it makes it into. I just realized it was something that would make sense in the future, and someone had to bring it up, so i might as well17:03
tewardgetting it included into the main server images would require us syncing in the Security team and the Security team doing analysis on zstd and brotli to determine if there's any major issues with the package(s) that would prevent them landing in Main17:03
lauren:)17:03
tewardtry reading https://wiki.ubuntu.com/MainInclusionProcess though17:03
tewardbecause Main Inclusion is... tricky17:05
sarnoldboth brotli and libzstd are in main17:08
tewardthey are?17:08
tewardsarnold: then why does rmadison say they aren't?17:08
teward brotli | 1.0.7-2                | disco/universe           | amd64, arm64, armhf, i386, ppc64el, s390x17:09
sarnoldteward: http://paste.ubuntu.com/p/J9Zk8J2Jfn/17:09
tewardhmm17:09
sarnoldteward: probably usual source / binary things17:09
rbasaklauren: what's the full justification for why it'd be useful, please, for someone who doesn't know much about this area?17:10
rbasak(I understand the default part)17:12
laurenrbasak: brotli and zstd are the current state of the art open source lossless compressors; I'm not sure who zstd was originally by, but it's now permissively licensed and maintained by Facebook, and brotli is permissively licensed by Google. both are already in universe as of xenial. both brotli and zstd can reach much higher compression rates than nearly anything else, and can do it at much higher throughput than anything else open17:29
laurensource; the only exception seems to be lzma, which gets just barely better compression in the very best case, is also not installed by default, and is much slower to compress17:29
laurenhttps://quixdb.github.io/squash-benchmark/#results https://sites.google.com/site/powturbo/home/benchmark17:29
teward> maintained by Facebook17:32
tewardgiven their current Security track record I'd call that a negative impacting factor17:32
laurenthe relevant use case for me is I'd like to be able to distribute software brotli-compressed; brotli seems to average 2x or so smaller file size for binary data than bzip2, in the powturbo benchmark17:32
tewardjust saying17:32
laurenah pretty good point17:32
sarnoldrbasak: zstd's --adapt feature is pretty neat; folks like using it with eg zfs send ... | zstd --adapt | ssh remote@host zfs recv ...17:32
sarnoldrbasak: .. if the network's really fast, it'll compress very fast; if the network is very slow, it'll spend more time compressing17:33
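The adaptive mode sarnold mentions can also be tried locally; a minimal round-trip sketch, assuming the zstd CLI is installed (on a local pipe it simply settles on a fast level, since there is no slow network to adapt to):

```shell
# Compress with adaptive level selection and verify a lossless round trip.
head -c 1048576 /dev/urandom > sample.bin         # 1 MiB of incompressible test data
zstd --adapt -q -o sample.bin.zst sample.bin      # level adapts to how fast output drains
zstd -d -q -o roundtrip.bin sample.bin.zst        # decompress
cmp -s sample.bin roundtrip.bin && echo "round trip OK"
```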
sdezielwow, that's interesting ^17:33
supamanno matter who the developers are (google, facebook, kaspersky ... ) everything should be vetted right?17:33
sarnoldlauren: for "offline" use cases like that, be sure to compare against xz; xz is slow as sin to compress but gets great ratios17:33
laurenfor sure, guessing inclusion by developer is just a heuristic17:34
lordcirthadaptive streaming compression? Ok that's really cool17:35
laurenand fair enough. is xz installed by default? if not maybe I just want to suggest that be included by default17:35
lordcirthBy the way, I hear you can also speed up those sorts of ssh-piping things by including | buffer | on one or both ends17:35
sarnoldlordcirth: did you mean mbuffer?17:35
laurenobviously since these are in the repos, it doesn't make that big of a difference17:35
lordcirthsarnold, nope, the command is just 'buffer', also apt install buffer17:36
lordcirthIt just buffers pipes17:36
lordcirthAh, it seems mbuffer is an upgraded version?17:36
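In practice the buffering trick lordcirth mentions tends to look like the sketch below; this assumes the mbuffer package is installed on both ends, and the host and dataset names are invented:

```shell
# Decouple the sender from network stalls with a large in-memory ring buffer.
zfs send tank/data@snap \
  | mbuffer -q -s 128k -m 1G \
  | ssh backup@remote 'mbuffer -q -s 128k -m 1G | zfs recv backup/data'
```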
laurenhuh good to know. I assumed ssh would be reasonable internally17:36
lordcirthlauren, it is reasonable in the sense of being safe and not using 500MB of RAM.17:38
lordcirthMost unix tools, especially older ones, will err on the side of using less resources.17:39
lordcirthFor example, dd uses 512b blocks by default, and you can often get a big speed boost by specifying bs=1M17:39
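lordcirth's dd point is easy to demonstrate with a file-backed sketch (safe to run anywhere):

```shell
# Same 10 MiB written with tiny vs large block sizes; the contents are identical,
# but the bs=1M run issues ~2000x fewer write() calls, which is where the speedup comes from.
dd if=/dev/zero of=small-bs.img bs=512 count=20480 status=none
dd if=/dev/zero of=large-bs.img bs=1M count=10 status=none
cmp -s small-bs.img large-bs.img && echo "identical contents"
```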
laurenah makes sense. I always do something like that with dd yup17:40
sdeziellordcirth: I've reduced my use of dd after reading https://www.vidarholen.net/contents/blog/?p=479 which I found interesting17:42
lordcirthsdeziel, cool, thanks17:43
sdeziellordcirth: I still use dd when I need to seek/skip though17:44
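sdeziel's seek/skip case, sketched on a throwaway file; dd can copy from and to arbitrary offsets, which cp and cat cannot do directly:

```shell
# Extract the middle 4-byte chunk of a 12-byte file.
printf 'AAAABBBBCCCC' > src.bin
dd if=src.bin of=middle.bin bs=4 skip=1 count=1 status=none   # skip one 4-byte block, copy one
cat middle.bin    # prints: BBBB
```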
sarnoldI'm pretty sure that cp behaviour is pretty new..17:44
lordcirthsarnold, which cp behaviour?17:45
sarnoldcp foo /dev/sdb17:45
sarnoldOnce Upon A Time it would just unlink sdb and then put the data there. as you asked.17:46
lordcirthYeah, that was my first thought too17:46
lordcirthsarnold, but old cat would work, right?17:47
tomreynthere are pitfalls in working with raw devices like this17:47
tomreynsome aren't block special but symlinks17:47
tomreynalso accessing them will often require root / sudo, which makes piping and redirecting a tad bit more difficult than this blog post suggests17:48
sdezieltomreyn: symlinks should be resolved so that cp ends up writing to the real destination17:48
sarnoldlordcirth: yeah I'd expect cat to work. but then you've got to use a root shell rather than just using sudo on dd ..17:48
lordcirthI've used sudo tee > /dev/null, but that's maybe slow for ISOs?17:49
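For reference, the three approaches being compared look like this. Illustrative only: /dev/sdX is a placeholder for a real device, and all three will destroy its contents, so do not run them as-is:

```shell
sudo dd if=image.iso of=/dev/sdX bs=4M conv=fsync status=progress   # classic, explicit block size
sudo cp image.iso /dev/sdX                        # modern GNU cp writes into the device node
cat image.iso | sudo tee /dev/sdX > /dev/null     # avoids a root shell, but smaller writes
```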
tomreyni remember that i ended up replacing a symlink in /dev/mapper by a file when using .. i don't know what tool exactly in the past.17:50
rbasaklauren: thanks, that's useful to know. Can I suggest that you file a bug (a single bug for both should be fine) with that justification?17:50
rbasakI wonder what the longevity of this stuff will be (so we don't end up with a big pile of them that we can't remove)17:50
sdezielsarnold: this old cp behaviour you describe sounds like "cp --remove-destination", I am a bit surprised it ever was the default17:50
laurenoh hmm longevity is a good point17:51
sarnoldrbasak: what, you don't love having compress and gzip and bzip2 and lz4 and brotli and zstd and xz all installed at once because once upon a time each one was the best tool available? :)17:51
rbasak:)17:52
lordcirthI've personally seen zstd referenced many more times than brotli, anecdotally.17:52
rbasakPerhaps we should operate a one in one out policy :)17:52
sdezielconsidering that zstd can be used with btrfs, that pretty much guarantees it needs to remain supported forever17:52
laurennah, none of them are dead, it's a great criticism17:52
sarnoldsdeziel: the difference between in-kernel implementation vs userspace utilities..17:52
sarnoldthough I haven't actually seen a compress file in AGES17:53
sdezielah right17:53
lordcirth'compress' isn't installed in my 18.04 Xubuntu?17:53
laurenmaybe compress could be removed from default by now, but someone is going to be irritated and need to install it17:53
laurenoh nice!17:53
lordcirthOr on my 18.04 server17:53
sarnoldwow :)17:53
sarnoldI sure didn't expect that17:53
lordcirthIt hints to install 'ncompress' as it should, of course.17:54
sdezielgunzip  can currently decompress files created by gzip, zip, compress, compress -H or pack17:54
tewardsarnold: well given ncompress is in Universe I'm not surprised it's not a default :p17:54
teward(yay for umt search!)17:54
sarnoldhahaha17:54
laurenahh interesting. so maybe the reasonable thing is just to tell users to install one. I expect both zstd and brotli to have solid longevity, because if I understand correctly, both have a lot of room for improvement in terms of more efficient compressors without changing the protocol17:56
rbasakDo zstd and brotli have different use cases?17:56
laurenI don't think so. I just don't want to pick sides.17:56
sdezielI've only heard of brotli being used for web stuff (browsers, libs, servers)17:57
sarnoldyeah brotli seems to be used with webfonts or something similar17:58
rbasakPart of Ubuntu's purpose, for defaults, at least, _is_ to pick sides, to help us focus rather than dilute attention to detail. Perhaps that means we should wait until there's a clear winner between the two then. I don't have a strong feeling in any direction right now - just conscious that it's hard to go back once we do put something in.17:58
sarnoldzstd's adaptive compression feature is nice for some potentially-network-bound usecases17:58
sdezielsarnold: Firefox nowadays sends Accept-Encoding: gzip, deflate, br17:59
sdezielsame for Chromium18:00
sarnoldgah why would it not put br first?18:00
sdezieloh, I didn't know the ordering played a role in the preference18:01
sarnoldoh well. web is their own world18:01
sarnoldthey do what they do :)18:01
sdezielI bet the reason was because some middle boxes would explode if gzip and deflate were not seen first18:07
lordcirthAh, middleboxes. Breaking compatibility since forever18:08
sarnoldheh :( probably true18:13
tomreyns/(compatibility)/\1 and encryption/19:40

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!