[00:01] <qwebirc24999> it is automated by configure_networking though
[00:01] <qwebirc24999> how do I make it so that it'd work properly
[00:06] <qwebirc24999> alright, so if I manually fix the DNS I now get a curl 60 error (SSL certificate problem: unable to get local issuer certificate). I have the certificates in /etc/ssl/certs. What is wrong?
[00:09] <sarnold> did you run update-ca-certificates?
[00:34] <qwebirc24999> yes
[00:34] <qwebirc24999> sarnold
[00:35] <sarnold> hmm, I don't know what to suggest next
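(For the record, the usual fix for that curl 60 error with a local/internal CA looks something like the sketch below; the certificate filename and URL are placeholders. The common gotcha is that update-ca-certificates only scans /usr/local/share/ca-certificates for *.crt files and regenerates the bundle in /etc/ssl/certs itself, so dropping files straight into /etc/ssl/certs doesn't help.)

```shell
# Hypothetical paths; these need root and an internal CA cert to be meaningful.
sudo cp internal-ca.crt /usr/local/share/ca-certificates/   # must end in .crt
sudo update-ca-certificates                                  # rebuilds /etc/ssl/certs
curl https://internal.example.com/                           # should verify now
# Or, without touching the system store, point curl at the CA directly:
curl --cacert internal-ca.crt https://internal.example.com/
```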
[08:44] <lordievader> Good morning
[12:00] <kstenerud> I have some questions about the MIR process
[12:00] <kstenerud> A package I want promoted depends on just a few other packages, but some of them are pretty big
[12:01] <kstenerud> so is there some rule of thumb over what gets in and what remains out?
[12:01] <kstenerud> Or are there any packages that should simply be dismissed out of hand?
[12:21] <rbasak> kstenerud: importance gets weighed against difficulty
[12:21] <kstenerud> OK. In this case it's a command line tool that would pull in python-tornado and ruby-sinatra
[12:22] <kstenerud> So now I'm wondering... will this become an uphill battle? Should I reconsider?
[12:23] <kstenerud> er python3-tornado that is
[12:34] <rbasak> Both of those seem really major to me. For Ubuntu, Ruby is particularly painful as we don't have most of that stack in main currently.
[12:34] <rbasak> Why are they needed by the tool? Are they really mandatory?
[12:42] <kstenerud> rbasak I haven't dug in too deeply yet. It looks like it produces a command line tool, and a web API server. If these can somehow be split out, that could work... I'm trying to decide if this has a realistic chance of being a MIR or not
[12:43] <kstenerud> what I'd been hoping for was just a command line tool that people use to configure pacemaker/corosync. This web stuff being mixed in was a nasty surprise
[12:45] <kstenerud> I'd still need to dig deeper to see if the cmdline tool is just a web client in disguise, which would kibosh the whole thing I imagine. But even if it can be standalone, would a split in theory be feasible?
[12:48] <rbasak> kstenerud: there aren't any rules precluding a split. A single source package can produce multiple binary packages, some of which we can move to main and some of which can remain in universe. The only feasibility question is how involved the packaging work and maintaining that delta would be.
[12:48] <kstenerud> ah ok, cool. I'll do some more digging then
[15:13] <evit> Anyone know why the https://usn.ubuntu.com/ website and mailing list are always behind the release of patches. Seems like they should be in sync
[15:13] <lotuspsychje> evit: which patch are you talking about?
[15:13] <evit> The patch comes out and then it isn't announced for a whole day sometimes
[15:14] <evit> The patch will be available but people might not know cuz it's not on the https://usn.ubuntu.com/ site or announced via the mailing list
[15:14] <evit> for nearly a day sometimes
[15:14] <evit> Seems like that should be more 'realtime'
[15:14] <tomreyn> evit: SMTP does not guarantee instant message delivery. a mailing list with thousands of users can take days until messages are delivered to all recipients.
[15:15] <evit> tomreyn, I'm aware. That's not why
[15:15] <tomreyn> have you tried the newsfeeds (RSS+ Atom), are those also behind?
[15:17] <evit> tomreyn, no but all sources should be updated as soon as possible
[15:18] <tomreyn> evit: maybe bring it up in #ubuntu-hardened once you have specific observations (with timestamps) you can share.
[15:18] <lotuspsychje> evit: what's your end goal with this anyway? if a security flaw comes out, a fix for the flaw is being worked on
[15:19] <lotuspsychje> when the fix is there, the system updates
[15:20] <tomreyn> it's relevant in companies where security updates are managed.
[15:20] <tomreyn> or any form of organizations
[15:20] <evit> lotuspsychje, Not my point. We know about vulns but need good info on when patches are available to mitigate them
[15:21] <evit> lotuspsychje, Time is of the essence in many cases. The longer its not patched the more likely it is to be exploited
[15:21] <lotuspsychje> we have seen in the past the hardened guys were behind on work.. it's human to need time to get work done too, right
[15:22] <Odd_Bloke> I don't know any details, but I wouldn't be surprised if the usn.u.c updates can't start until embargo is lifted in some cases.
[15:22] <evit> lotuspsychje, I'm not blaming anyone I'm just suggesting it should be more synced up and done in a more coordinated/timely fashion for the sake of the security of the community
[15:23] <Odd_Bloke> So the update goes out to the archive as the notification process starts.
[15:23] <lotuspsychje> yeah i think Odd_Bloke is right
[15:23] <evit> I hear you both
[15:24] <lotuspsychje> perhaps as tomreyn suggests talk to the hardened guys about it evit ?
[15:24] <evit> lotuspsychje, Yes, I will. Thanks to you all
[15:26] <tomreyn> you're welcome. ;)
[17:00] <lauren> who would I talk to to argue for zstd and brotli, both very small binaries, being installed by default in the base server install? it would be cool to be able to distribute things as .tar.brotli or .tar.zstd, and the main bottleneck for this being useful is brotli or zstd being installed by default
[17:02] <teward> lauren: a bit late in the dev cycle for this discussion.
[17:03] <lauren> I don't really care which iteration it makes it into. I just realized it was something that would make sense in the future, and someone had to bring it up, so i might as well
[17:03] <teward> getting it included into the main server images would require us syncing in the Security team and the Security team doing analysis on zstd and brotli to determine if there are any major issues with the package(s) which would prevent them from landing in main
[17:03] <lauren> :)
[17:03] <teward> try reading https://wiki.ubuntu.com/MainInclusionProcess though
[17:05] <teward> because Main Inclusion is... tricky
[17:08] <sarnold> both brotli and libzstd are in main
[17:08] <teward> they are?
[17:08] <teward> sarnold: then why does rmadison say they aren't?
[17:09] <teward>  brotli | 1.0.7-2                | disco/universe           | amd64, arm64, armhf, i386, ppc64el, s390x
[17:09] <sarnold> teward: http://paste.ubuntu.com/p/J9Zk8J2Jfn/
[17:09] <teward> hmm
[17:09] <sarnold> teward: probably usual source / binary things
[17:10] <rbasak> lauren: what's the full justification for why it'd be useful, please, for someone who doesn't know much about this area?
[17:12] <rbasak> (I understand the default part)
[17:29] <lauren> rbasak: brotli and zstd are the current state of the art open source lossless compressors; I'm not sure who zstd was originally by, but it's now permissively licensed and maintained by Facebook, and brotli is permissively licensed by Google. both are already in universe as of xenial. both brotli and zstd can reach much higher compression rates than nearly anything else, and can do it at much higher throughput than anything else open
[17:29] <lauren> source; the only exceptions seem to be lzma which gets just barely better compression in the very best case, which is also not installed by default and is much slower on compress
[17:29] <lauren> https://quixdb.github.io/squash-benchmark/#results https://sites.google.com/site/powturbo/home/benchmark
[17:32] <teward> > maintained by Facebook
[17:32] <teward> given their current Security track record I'd call that a negative impacting factor
[17:32] <lauren> the relevant use case for me is I'd like to be able to distribute software brotli-compressed; brotli seems to average 2x or so smaller file size for binary data than bzip2, in the powturbo benchmark
[17:32] <teward> just saying
[17:32] <lauren> ah pretty good point
[17:32] <sarnold> rbasak: zstd's --adapt feature is pretty neat; folks like using it with eg zfs send ... | zstd --adapt | ssh remote@host zfs recv ...
[17:33] <sarnold> rbasak: .. if the network's really fast, it'll compress very fast; if the network is very slow, it'll spend more time compressing
[17:33] <sdeziel> wow, that's interesting ^
[17:33] <supaman> no matter who the developers are (google, facebook, kaspersky ... ) everything should be vetted right?
[17:33] <sarnold> lauren: for "offline" use cases like that, be sure to compare against xz; xz is slow as sin to compress but gets great ratios
[17:34] <lauren> for sure, guessing inclusion by developer is just a heuristic
[17:35] <lordcirth> adaptive streaming compression? Ok that's really cool
[17:35] <lauren> and fair enough. is xz installed by default? if not maybe I just want to suggest that be included by default
[17:35] <lordcirth> By the way, I hear you can also speed up those sorts of ssh-piping things by including | buffer | on one or both ends
[17:35] <sarnold> lordcirth: did you mean mbuffer?
[17:35] <lauren> obviously since these are in the repos, it doesn't make that big of a difference
[17:36] <lordcirth> sarnold, nope, the command is just 'buffer', also apt install buffer
[17:36] <lordcirth> It just buffers pipes
[17:36] <lordcirth> Ah, it seems mbuffer is an upgraded version?
[17:36] <lauren> huh good to know. I assumed ssh would be reasonable internally
[17:38] <lordcirth> lauren, it is reasonable in the sense of being safe and not using 500MB of RAM.
[17:39] <lordcirth> Most unix tools, especially older ones, will err on the side of using less resources.
[17:39] <lordcirth> For example, dd uses 512-byte blocks by default, and you can often get a big speed boost by specifying bs=1M
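(lordcirth's dd point in runnable form — the paths are just scratch files; the output is byte-identical either way, only the number of read()/write() syscalls changes:)

```shell
# 16 MiB with the 512-byte default: 32768 syscall round-trips.
dd if=/dev/zero of=/tmp/dd_small.bin count=32768 2>/dev/null
# Same 16 MiB with bs=1M: 16 round-trips, usually much faster on real devices.
dd if=/dev/zero of=/tmp/dd_big.bin bs=1M count=16 2>/dev/null
cmp /tmp/dd_small.bin /tmp/dd_big.bin && echo identical-output
rm -f /tmp/dd_small.bin /tmp/dd_big.bin
```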
[17:40] <lauren> ah makes sense. I always do something like that with dd yup
[17:42] <sdeziel> lordcirth: I've reduced my use of dd after reading https://www.vidarholen.net/contents/blog/?p=479 which I found interesting
[17:43] <lordcirth> sdeziel, cool, thanks
[17:44] <sdeziel> lordcirth: I still use dd when I need to seek/skip though
[17:44] <sarnold> I'm pretty sure that cp behaviour is pretty new..
[17:45] <lordcirth> sarnold, which cp behaviour?
[17:45] <sarnold> cp foo /dev/sdb
[17:46] <sarnold> Once Upon A Time it would just unlink sdb and then put the data there. as you asked.
[17:46] <lordcirth> Yeah, that was my first thought too
[17:47] <lordcirth> sarnold, but old cat would work, right?
[17:47] <tomreyn> there are pitfalls in working with raw devices like this
[17:47] <tomreyn> some aren't block special but symlinks
[17:48] <tomreyn> also accessing them will often require root / sudo, which makes piping and redirecting a tiny bit more difficult than this blog post suggests
[17:48] <sdeziel> tomreyn: symlinks should be resolved so that cp ends up writing to the real destination
[17:48] <sarnold> lordcirth: yeah I'd expect cat to work. but then you've got to use a root shell rather than just using sudo on dd ..
[17:49] <lordcirth> I've used sudo tee > /dev/null, but that's maybe slow for ISOs?
[17:50] <tomreyn> i remember that i ended up replacing a symlink in /dev/mapper by a file when using .. i don't know what tool exactly in the past.
[17:50] <rbasak> lauren: thanks, that's useful to know. Can I suggest that you file a bug (a single bug for both should be fine) with that justification?
[17:50] <rbasak> I wonder what the longevity of this stuff will be (so we don't end up with a big pile of them that we can't remove)
[17:50] <sdeziel> sarnold: this old cp behaviour you describe sounds like "cp --remove-destination", I am a bit surprised it ever was the default
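(The difference sdeziel and sarnold are circling can be checked with inode numbers — scratch paths are arbitrary. Current cp writes into the existing destination inode, while --remove-destination unlinks it first, which is the old behaviour sarnold remembers and the dangerous one for device nodes:)

```shell
echo old > /tmp/cp_dst; echo new > /tmp/cp_src
ino_before=$(stat -c %i /tmp/cp_dst)
cp /tmp/cp_src /tmp/cp_dst                        # writes into the existing inode
ino_plain=$(stat -c %i /tmp/cp_dst)
[ "$ino_before" = "$ino_plain" ] && echo inode-preserved
cp --remove-destination /tmp/cp_src /tmp/cp_dst   # unlinks first, then creates anew
rm -f /tmp/cp_src /tmp/cp_dst
```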
[17:51] <lauren> oh hmm longevity is a good point
[17:51] <sarnold> rbasak: what, you don't love having compress and gzip and bzip2 and lz4 and brotli and zstd and xz all installed at once because once upon a time each one was the best tool available? :)
[17:52] <rbasak> :)
[17:52] <lordcirth> I've personally seen zstd referenced many more times than brotli, anecdotally.
[17:52] <rbasak> Perhaps we should operate a one in one out policy :)
[17:52] <sdeziel> considering that zstd can be used with btrfs, that pretty much guarantees it needs to remain supported forever
[17:52] <lauren> nah, none of them are dead, it's a great criticism
[17:52] <sarnold> sdeziel: the difference between in-kernel implementation vs userspace utilities..
[17:53] <sarnold> though I haven't actually seen a compress file in AGES
[17:53] <sdeziel> ah right
[17:53] <lordcirth> 'compress' isn't installed in my 18.04 Xubuntu?
[17:53] <lauren> maybe compress could be removed from default by now, but someone is going to be irritated and need to install it
[17:53] <lauren> oh nice!
[17:53] <lordcirth> Or on my 18.04 server
[17:53] <sarnold> wow :)
[17:53] <sarnold> I sure didn't expect that
[17:54] <lordcirth> It hints to install 'ncompress' as it should, of course.
[17:54] <sdeziel> gunzip  can currently decompress files created by gzip, zip, compress, compress -H or pack
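(sdeziel is quoting the gzip manpage; the gzip half of the claim is trivially checkable — the scratch file path is arbitrary — while the .Z half needs ncompress installed, so it's only sketched in a comment:)

```shell
printf 'hello\n' > /tmp/g.txt
gzip -f /tmp/g.txt           # replaces it with /tmp/g.txt.gz
gunzip -c /tmp/g.txt.gz      # prints: hello
# With ncompress installed, the same tool also handles LZW .Z files:
#   compress some-file && gunzip some-file.Z
rm -f /tmp/g.txt.gz
```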
[17:54] <teward> sarnold: well given ncompress is in Universe I'm not surprised it's not a default :p
[17:54] <teward> (yay for umt search!)
[17:54] <sarnold> hahaha
[17:56] <lauren> ahh interesting. so maybe the reasonable thing is just to tell users to install one. I expect both zstd and brotli to have solid longevity, because if I understand correctly, both have a lot of room for improvement in terms of more efficient compressors without changing the protocol
[17:56] <rbasak> Do zstd and brotli have different use cases?
[17:56] <lauren> I don't think so. I just don't want to pick sides.
[17:57] <sdeziel> I've only heard of brotli being used for web stuff (browsers, libs, servers)
[17:58] <sarnold> yeah brotli seems to be used with webfonts or something similar
[17:58] <rbasak> Part of Ubuntu's purpose, for defaults, at least, _is_ to pick sides, to help us focus rather than dilute attention to detail. Perhaps that means we should wait until there's a clear winner between the two then. I don't have a strong feeling in any direction right now - just conscious that it's hard to go back once we do put something in.
[17:58] <sarnold> zstd's adaptive compression feature is nice for some potentially-network-bound usecases
[17:59] <sdeziel> sarnold: Firefox nowadays sends Accept-Encoding: gzip, deflate, br
[18:00] <sdeziel> same for Chromium
[18:00] <sarnold> gah why would it not put br first?
[18:01] <sdeziel> oh, I didn't know the ordering played a role in the preference
[18:01] <sarnold> oh well. web is their own world
[18:01] <sarnold> they do what they do :)
[18:07] <sdeziel> I bet the reason was because some middle boxes would explode if gzip and deflate were not seen first
[18:08] <lordcirth> Ah, middleboxes. Breaking compatibility since forever
[18:13] <sarnold> heh :( probably true
[19:40] <tomreyn> s/(compatibility)/\1 and encryption/