[00:11] <blackflow> sarnold: you showoff :) triple mirror stripes for volatile (easily replaceable) data! :)
[00:12] <sarnold> blackflow: well, the intention is to some day power on the big stack of hard drives that have music and photos and consolidate decades of stuff into one place
[00:13] <sarnold> blackflow: funny thing is, now that there's special vdevs, I've got a vague feeling of replacing both my pools. currently 3 three-way mirrors on one, 2 two-way SSD mirrors on the other... a 9-drive raidz3 plus two mirrored special vdevs from the ssds...
[00:14]  * blackflow faints
[00:15] <blackflow> I never found any need to go beyond the any-2 failure margin of a raidz2. once, in the past 10 years, I had one two-disk mirror fail when the other drive failed mid-resilvering of the replacement for the first one.
[00:15] <sarnold> juggling two pools is slightly annoying, and Big Files are probably fast enough from a raidz3 vdev .. metadata and smaller files from the ssds..
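The layout sarnold describes could be sketched roughly as follows. This is a hypothetical sketch, not his actual commands; the pool name, device names, and the small-block threshold are all placeholders:

```shell
# 9-drive raidz3 for bulk data, plus two mirrored SSD special vdevs
# for metadata and small files (all device names are placeholders):
zpool create tank \
    raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi \
    special mirror ssd0 ssd1 mirror ssd2 ssd3

# steer small file blocks (not just metadata) to the special vdevs;
# 32K here is an arbitrary example threshold:
zfs set special_small_blocks=32K tank
```

With `special_small_blocks` left at 0, only metadata lands on the special vdevs; raising it sends small files there too, which is the "metadata and smaller files from the ssds" idea above.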
[06:03] <lordievader> Good morning
[07:40] <ZZlatev> Hey guys
[08:50] <jamespage> sahid, coreycb: I've added openstack notes to https://wiki.ubuntu.com/DiscoDingo/ReleaseNotes
[08:50] <jamespage> we should make sure we keep doing that
[08:52] <sahid> jamespage: ack
[08:53] <jamespage> sahid: also prepped a release email for the UCA which I shared with you and coreycb - pending a successful smoke test I'm running, will send that shortly!
[08:56] <sahid> jamespage: ok i'm reviewing it right now
[09:00] <sahid> jamespage: ack for me thanks for it
[09:00] <jamespage> np
[09:00] <jamespage> it's a bit of a copy/paste/search/replace exercise
[09:00] <jamespage> but useful nonetheless
[09:01] <sahid> :)
[13:57] <shubjero> You have to download a total of 416 M. This download will take about
[13:57] <shubjero> 52 minutes with a 1Mbit DSL connection and about 16 hours with a 56k
[13:57] <shubjero> modem.
[13:57] <shubjero> Still providing estimates using 56k modem :)
[13:58] <shubjero> (do-release-upgrade for 16.04 > 18.04)
[14:01] <tomreyn> there is IoT
[14:02] <blackflow> Soon bases on the moon, the BDP there will kill! jumbo frames ftw.
[14:04] <shubjero> PING moon (moon): 56 data bytes
[14:04] <shubjero> 64 bytes from moon: icmp_seq=0 ttl=122 time=2.849 s
[14:04] <shubjero> 64 bytes from moon: icmp_seq=1 ttl=122 time=2.995 s
[14:04] <shubjero> 64 bytes from moon: icmp_seq=2 ttl=122 time=2.933 s
[14:04] <shubjero> "low ping bastard!"
[14:05] <blackflow> uhm.... moon will be ipv6 only. so yeah, fake ping output! busted!
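The ~2.9 s round-trip times in the joke output are actually about right; the minimum Earth-Moon RTT at light speed can be sanity-checked with a one-liner (assuming the average Earth-Moon distance of roughly 384400 km):

```shell
# minimum round-trip time to the Moon at light speed:
# 2 * distance (km) / speed of light (km/s)
awk 'BEGIN { printf "%.3f\n", 2 * 384400 / 299792.458 }'
# -> 2.564 (seconds), so ~2.8-3.0 s pings are plausible once you
#    add routing and processing overhead
```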
[14:15] <shubjero> https://www.theverge.com/2019/4/17/18411843/uk-porn-block-delayed-start-date-july-15th
[14:16] <shubjero> well thats interesting
[14:16] <shubjero> anyways
[14:18] <lotuspsychje> shubjero: please keep offtopic chat in other channels, like #ubuntu-offtopic
[14:18] <shubjero> yeah not gonna lie that was a wrong-window paste
[14:18] <shubjero> my apologies
[14:46] <adac> guys on one of my servers I have this file
[14:46] <adac>  /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal
[14:46] <adac> on the other however not
[14:47] <adac> where is this coming from which package? How can I create it?
[14:50] <sdeziel> adac: the kernel or one of its modules creates it
[14:50] <tomreyn> adac: this is not a file, it is just presented as such. read the sysctl man page.
[14:50] <adac> ok thanks
[14:51] <adac> would need to know how this module is being activated
[14:51] <sdeziel> adac: I'd guess nf_conntrack_ipv4 or nf_conntrack_ipv6
[14:51] <sdeziel> or maybe the generic one nf_conntrack.
[14:52] <adac> I try to execute this ansible task due to some kernel issues with ufw
[14:52] <adac> https://github.com/ansible/ansible/issues/45446
[14:52] <adac> but the file is not found
[14:52] <adac> So that means the module is not enabled
[14:52] <tomreyn> documentation on this setting https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt
[14:53] <adac> makes me just wonder why on my other server it is enabled
[14:53] <adac> maybe ufw with some command enables it
[14:53] <sdeziel> (those are the modules I load to get a different but related sysctl key: net.netfilter.nf_conntrack_tcp_loose)
[14:54] <tomreyn> sdeziel: copy / paste bug?
[14:55] <adac>  tomreyn sdeziel how can I activate this module?
[14:55] <tomreyn> adac: that's a possible explanation. compare the 'lsmod' output of both systems
[14:55] <sdeziel> tomreyn: no but maybe I wasn't clear. I'm trying to explain that I load nf_conntrack_ipv4 & nf_conntrack_ipv6 so that I have those proc files (also available as sysctl keys)
[14:55] <sdeziel> adac: you can add those to /etc/modules-load.d/nf-conntrack.conf
[14:55] <tomreyn> sdeziel: oh, yes, i misunderstood.
[14:55] <sdeziel> adac: this will have them loaded on boot
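A minimal sketch of the file sdeziel suggests (module names per the discussion; systemd-modules-load reads one module name per line and loads them at boot):

```text
# /etc/modules-load.d/nf-conntrack.conf
nf_conntrack_ipv4
nf_conntrack_ipv6
```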
[14:56] <adac> would this only temporarily enable it?
[14:56] <adac> https://github.com/ansible/ansible/issues/45446#issuecomment-467829815
[14:56] <sdeziel> adac: no, that would be permanent for the module load
[14:56] <adac> sdeziel, ok thanks
[14:57] <sdeziel> adac: the manual module loading is to make the sysctl command work reliably
[14:57] <adac> sdeziel, guess this ansible module would work
[14:57] <adac> https://docs.ansible.com/ansible/latest/modules/modprobe_module.html
[14:57] <adac> to enable the module in the first place
[14:57] <sdeziel> adac: normally the conntrack modules are loaded on demand based on your ip{,6}tables rules
[14:58] <adac> sdeziel, yes but that only happens later in time when ufw is setup
[14:58] <sdeziel> adac: but sometimes this on demand loading is too late for the sysctl to happen reliably
[14:58] <adac> I need to do this first
[14:58] <sdeziel> adac: hehe, I ran into the same issue and my workaround was to manually load the modules on boot
[14:59] <adac> So maybe I can via ansible just enable the module, then enable this liberal tcp thingy
[14:59] <sdeziel> (gee, took me way too many words to explain that one... sorry, not fully awake it seems)
[14:59] <adac> and then do the ufw stuff
[14:59] <adac> sdeziel, welcome to the club :)
[15:00] <sdeziel> adac: IIRC TCP liberal is for when you want to accept an ongoing connection you see for the first time. In other words it removes some of the stateful checks
[15:01] <sdeziel> adac: have you tried "iptables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT" as a workaround instead?
[15:01] <adac> sdeziel, I need it because of this issue: https://github.com/ansible/ansible/issues/45446
[15:02] <adac> sdeziel, would that be a better option?
[15:02] <adac> I mean your workaround?
[15:05] <sdeziel> adac: dunno, it looks like the connection states are reset so possibly my workaround wouldn't work.
[15:06] <adac> sdeziel, for me the question is simply also what are the effects of setting this liberal TCP thingy
[15:06] <sdeziel> adac: if it was my machine though, I'd probably do the TCP liberal thing before and undo it after, assuming you have that flexibility with ansible
[15:06] <adac> on other connections
[15:06] <adac> ok that sounds good yes
[15:06] <sdeziel> adac: that's where tomreyn's link becomes handy, look at what it says under nf_conntrack_tcp_be_liberal
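sdeziel's "do it before, undo it after" suggestion can be sketched as a short root-only shell sequence. This is a sketch under the assumptions discussed above (IPv4 conntrack, a kernel where the module is not yet loaded), not a tested recipe:

```shell
# load the module so the sysctl key exists (see the discussion above):
modprobe nf_conntrack_ipv4

# relax TCP window tracking before the ansible/ufw step...
sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1

# ... run the ansible play / ufw setup here ...

# ... then restore strict stateful checking afterwards:
sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=0
```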
[15:07] <adac> sdeziel, hmm loading the module does not automagically create this "file"
[15:08] <adac> is this the module nf_conntrack or is this its own module as well: nf_conntrack_tcp_be_liberal
[15:08] <sdeziel> adac: hmm, I'm assuming that you checked "lsmod | grep nf_conntrack" ?
[15:08] <adac> the latter should be a variable right?
[15:08] <sdeziel> more or less yes
[15:09] <sdeziel> it's a pseudo file under /proc and is also available as a sysctl key
[15:10] <adac> sdeziel, this is what ufw enable or a similar command enables:
[15:10] <adac> https://pastebin.com/MDzNvWy6
[15:10] <sdeziel> adac: I'll try to reproduce on a local VM
[15:13] <adac> sdeziel, kk thanks
[15:14] <sdeziel> adac: so loading nf_conntrack_ipv4 is what created /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal
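sdeziel's VM reproduction can be sketched as (needs root; a sketch of the check, not his exact commands):

```shell
# load the conntrack module manually:
modprobe nf_conntrack_ipv4

# the pseudo-file should now exist under /proc:
ls -l /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal

# and the same setting is readable as a sysctl key:
sysctl net.netfilter.nf_conntrack_tcp_be_liberal
```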
[15:15] <adac> sdeziel, ohh thanks so much!!
[15:16] <sdeziel> adac: np
[15:17] <adac> sdeziel, i will set this to 0 again after the ansible task is finished
[15:17] <adac> thank you as well for this good advice
[15:34] <paulatx> anybody happen to know how long the Ubuntu 16.04 EC2 images (https://cloud-images.ubuntu.com/locator/ec2/) that use the AWS specific kernel will support new hardware? The first graphic on this page: https://www.ubuntu.com/about/release-cycle makes it look like hardware support has already stopped for 16.04 but I'm not sure if that also applies to the AWS specific kernels which are based off of the 4.4.0 GA kernel and not the 4.
[15:34] <paulatx> I'm trying to figure out when the support for new hardware will stop on those AWS tuned kernels.  I spoke with tomreyn yesterday about this and he said I should try again today during UK business hours
[16:34] <tomreyn> paulatx: it looks like you hit the late part of this day (very understandable if you're on the U.S. west coast). maybe we can try together to get a clearer picture. by the way, i'm just a volunteer here, don't work for canonical or anything.
[16:35] <tomreyn> paulatx: also, i assume commercial support is available for those EC2 instances also, if you need a fully reliable response to this question.
[16:37] <tomreyn> i run only 'standard' (non EC2) ubuntu installations myself (including 16.04 LTS) but those should be similar to EC2 instances apart from the kernel and some of the packages installed by default.
[16:38] <tomreyn> i see that there is a linux-image-aws package available in 'xenial' (16.04 LTS), currently at version 4.4.0.1079.82
[16:39] <rbasak> paulatx: what do you mean by new hardware in the context of EC2? New virtual hardware?
[16:39] <tomreyn> in ubuntu 18.04 LTS, there is also this package (currently version 4.15.0.1035.34) as well as a linux-image-aws-edge package (in case one would need to use the very latest hardware, mostly for testing purposes, currently at version 4.18.0.1012.11, and in the community supported 'universe' section)
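One way to check the current versions of those packages yourself, assuming an Ubuntu system with the release's standard archives configured:

```shell
# current version of the AWS-tuned kernel metapackage for this release:
apt-cache policy linux-image-aws

# on 18.04, the testing-oriented edge flavour mentioned above:
apt-cache policy linux-image-aws-edge
```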
[16:39] <sarnold> paulatx: there's an #ubuntu-kernel channel that may have more folks specifically aware of hardware support on amazon clouds
[16:40] <rbasak> In general, hardware enablement, which includes virtual hardware in the case of EC2, is permitted until end of standard support, which for 16.04 is April 2021. However I don't know of specific intentions in the case of these particular images.
[16:42] <paulatx> sarnold: ahh it sounds like #ubuntu-kernel is where I should be asking.. they may have a better pulse on what the deal is with support of the AWS specific kernels/images
[16:42] <rbasak> rcj: would you know the answer to paulatx's question above please? I don't see fginther or gaughen in here.
[16:42] <tomreyn> paulatx: can you discuss the use case? based on your hostname, i assume this may be graphics hardware related?
[16:44] <paulatx> tomreyn: the quick and dirty is that AWS is always announcing new hardware so we want to know how long the Ubuntu maintained 16.04 LTS EC2 images will support new hardware.  If they will support new h/w through April 2021 then we may put off upgrading to 18.04 LTS but if support for new h/w ends sooner then we will upgrade sooner
[16:45] <tomreyn> thanks.
[16:48] <rcj> rbasak: thanks for the ping.  I'm seeing who from the kernel team is here to answer that completely (rather than bumping paulatx to #ubuntu-kernel)
[16:50] <paulatx> rcj: thanks
[16:52] <rcj> The short answer is that the linux-aws custom kernel was created to allow for AWS specific tuning and for new AWS instance/feature support that would not be possible with the generic kernel.
[16:59] <paulatx> rcj: right.. so are there details somewhere on the support / lifespan for the linux-aws custom kernel?
[17:02] <lotuspsychje> fresh from the press: https://blog.ubuntu.com/2019/04/17/ubuntu-server-development-summary-16-april-2019
[17:04] <bjf> rcj, paulatx, what's up?
[17:05] <teward> assuming rcj summoned you, bjf, questions re: linux-aws custom kernel.  trying to find the exact original question in the scrollback though.
[17:05] <teward> ah here it is.
[17:06] <teward> bjf: sent you a small number of lines in PM of the original question :P
[17:06] <bjf> teward, got them
[17:06] <bjf> teward, so .. in general we don't backport new support for new HW back to earlier releases
[17:06] <teward> paulatx: ^
[17:06] <bjf> teward, however
[17:06] <teward> bjf: mishighlight, targets -> paulatx :)
[17:06] <teward> i'm just helping :)
[17:07] <bjf> teward, paulatx, however if Amazon asks us for specific support and that support isn't too intrusive (low risk of regressions) then we will do it
[17:08] <bjf> teward, paulatx, it's not a simple rule .. in general, the idea with an LTS every 2 years is that you have time to plan for your upgrade
[17:09] <paulatx> bjf: interesting.. ok.  So it sounds like the new hardware support essentially boils down to how hard it will be to add support for the h/w, if it is simple it will be added and if the level of effort / risk is high then no dice
[17:10] <bjf> paulatx, yes, that's basically it
[17:11] <bjf> paulatx, if you are a Canonical customer then you can raise your specific request through our support org. otherwise you can create a launchpad bug and point us at it and we will look at it
[17:12] <paulatx> bjf: no I think this answers my question, thank you very much.  Sorry for pulling in so many people.. didn't realize this was going to be a hard question to answer!  :)
[17:13] <sarnold> part of the beauty of aws is not caring much about the hardware :)
[17:13] <bjf> paulatx, no problem
[17:14] <blackflow> I don't know how anyone can use public clouds/VMs in the post-Meltre (Meltdown+Spectre) world.
[17:14] <sarnold> meltre :D
[17:15] <blackflow> mmmh-hmm. :)
[17:16] <blackflow> I stopped using them after I saw with my own five eyes, cross-VM ssh pubkey injection via rowhammer and non-ECC host side RAM. yes, 'sright, one VM injecting ssh keys into another's memory.
[17:16] <paulatx> sarnold: haha.. but when they come out with <insert cool new tech here> that is AWS specific and you can only take full advantage if you have kernel support then we have to care
[17:16] <blackflow> I haven't yet seen personally Meltre exploited like that in public clouds, but I've heard stories.
[17:17] <paulatx> anyway, thanks for the help everyone
[17:17] <sarnold> blackflow: wow. crazy. it feels almost criminal that They still sell non-ecc ram on new machines. (Lookin at you intel.)
[17:17] <blackflow> well all hetzner's non-enterprise baremetals are non-ECC and they're used a lot by the eastern bloc for VMs and games.
[17:18] <sarnold> I hadn't heard that about hetzner before :(
[17:18] <blackflow> I've seen a few offers at lowendtalk dot com that were non-ECC too
[17:18] <blackflow> sarnold: their entire EX line, the oldest, is non-ECC corei7 https://www.hetzner.de/dedicated-rootserver/matrix-ex
[17:19] <blackflow> only with PX they started offering ECC
[17:23] <sarnold> blackflow: seems silly to have dual nvme in those things but no ecc
[17:25] <blackflow> they're aimed at companies offering cheap shared hosting and VMs, and for games.
[20:22] <DammitJim> man, I have to say that I am very happy with Canonical support
[20:27] <OerHeks> :-)