00:11 <blackflow> sarnold: you showoff :) triple mirror stripes for volatile (easily replaceable) data! :)
00:12 <sarnold> blackflow: well, the intention is to some day power on the big stack of hard drives that have music and photos and consolidate decades of stuff into one place
00:13 <sarnold> blackflow: funny thing is, now that there are special vdevs, I've got a vague feeling of replacing both my pools. currently three 3-way mirrors on one, two 2-way SSD mirrors on the other... a 9-drive raidz3 plus two mirrored special vdevs from the SSDs...
00:14 * blackflow faints
00:15 <blackflow> I never found any need to go beyond the any-2 failure margin of a raidz2. once, in the past 10 years, I had a two-disk mirror fail when the other drive failed mid-resilver of the replacement for the first one.
00:15 <sarnold> juggling two pools is slightly annoying, and big files are probably fast enough from a raidz3 vdev .. metadata and smaller files from the SSDs..
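The pool layout sarnold sketches above could look roughly like this. This is a hedged sketch only: the pool name and device names are invented, it needs root and real (empty!) disks, and special vdevs require OpenZFS 0.8 or newer.

```shell
# Hypothetical sketch of the layout discussed above: one 9-drive raidz3
# data vdev plus a mirrored SSD "special" vdev for metadata.
# Pool and device names are made up; adjust to your hardware.
zpool create tank \
    raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi \
    special mirror nvme0n1 nvme1n1

# Optionally route small file blocks to the special vdev as well,
# so "metadata and smaller files from the SSDs" as sarnold describes:
zfs set special_small_blocks=32K tank
```

Note that `special_small_blocks` is a dataset property: blocks at or below the threshold land on the special vdev instead of the raidz3 vdev.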
06:03 <lordievader> Good morning
07:40 <ZZlatev> Hey guys
08:50 <jamespage> sahid, coreycb: I've added openstack notes to https://wiki.ubuntu.com/DiscoDingo/ReleaseNotes
08:50 <jamespage> we should make sure we keep doing that
08:52 <sahid> jamespage: ack
08:53 <jamespage> sahid: also prepped a release email for the UCA which I shared with you and coreycb - pending a successful smoke test I'm running, will send that shortly!
08:56 <sahid> jamespage: ok i'm reviewing it right now
09:00 <sahid> jamespage: ack for me, thanks for it
09:00 <jamespage> it's a bit of a copy/paste/search/replace exercise
09:00 <jamespage> but useful nonetheless
=== DerRaiden`afk is now known as DerRaiden
=== gislaved65 is now known as gislaved
=== gislaved34 is now known as gislaved
=== Wryhder is now known as Lucas_Gray
=== cryptodan_d is now known as cryptodan
13:57 <shubjero> You have to download a total of 416 M. This download will take about
13:57 <shubjero> 52 minutes with a 1Mbit DSL connection and about 16 hours with a 56k
13:57 <shubjero> Still providing estimates using a 56k modem :)
13:58 <shubjero> (do-release-upgrade for 16.04 > 18.04)
14:01 <tomreyn> there is IoT
14:02 <blackflow> Soon bases on the moon, the BDP there will kill! jumbo frames ftw.
14:04 <shubjero> PING moon (moon): 56 data bytes
14:04 <shubjero> 64 bytes from moon: icmp_seq=0 ttl=122 time=2.849 s
14:04 <shubjero> 64 bytes from moon: icmp_seq=1 ttl=122 time=2.995 s
14:04 <shubjero> 64 bytes from moon: icmp_seq=2 ttl=122 time=2.933 s
14:04 <shubjero> "low ping bastard!"
14:05 <blackflow> uhm.... the moon will be ipv6 only. so yeah, fake ping output! busted!
14:16 <shubjero> well that's interesting
14:18 <lotuspsychje> shubjero: please keep offtopic chat in other channels, like #ubuntu-offtopic
14:18 <shubjero> yeah not gonna lie, that was a wrong-window paste
14:18 <shubjero> my apologies
14:46 <adac> guys, on one of my servers I have this file
14:46 <adac> /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal
14:46 <adac> on the other however not
14:47 <adac> where is this coming from, which package? How can I create it?
14:50 <sdeziel> adac: the kernel or one of the modules creates it
14:50 <tomreyn> adac: this is not a file, it is just presented as such. read the sysctl man page.
14:50 <adac> ok thanks
14:51 <adac> would need to know how this module is being activated
14:51 <sdeziel> adac: I'd guess nf_conntrack_ipv4 or nf_conntrack_ipv6
14:51 <sdeziel> or maybe the generic one, nf_conntrack.
14:52 <adac> I try to execute this ansible task due to some kernel issues with ufw
14:52 <adac> but the file is not found
14:52 <adac> So that means the module is not enabled
14:52 <tomreyn> documentation on this setting https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt
14:53 <adac> makes me just wonder why on my other server it is enabled
14:53 <adac> maybe ufw with some command enables it
14:53 <sdeziel> (those are the modules I load to get a different but related sysctl key: net.netfilter.nf_conntrack_tcp_loose)
14:54 <tomreyn> sdeziel: copy / paste bug?
14:55 <adac> tomreyn sdeziel how can I activate this module?
14:55 <tomreyn> adac: that's a possible explanation. compare the 'lsmod' output of both systems
14:55 <sdeziel> tomreyn: no, but maybe I wasn't clear. I'm trying to explain that I load nf_conntrack_ipv4 & nf_conntrack_ipv6 so that I have those proc files (also available as sysctl keys)
14:55 <sdeziel> adac: you can add those to /etc/modules-load.d/nf-conntrack.conf
14:55 <tomreyn> sdeziel: oh, yes, i misunderstood.
14:55 <sdeziel> adac: this will have them loaded on boot
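The boot-time loading sdeziel describes is a plain modules-load.d fragment. A sketch (the filename is the one he suggests; module names as discussed in the thread - on kernels 4.19 and later the address-family modules were folded into nf_conntrack itself):

```shell
# /etc/modules-load.d/nf-conntrack.conf
#
# systemd-modules-load(8) reads this file at boot and loads each listed
# module, which in turn creates the net.netfilter.nf_conntrack_* sysctl
# keys (e.g. nf_conntrack_tcp_be_liberal) before any firewall rules run.
nf_conntrack_ipv4
nf_conntrack_ipv6
```

One line per module, `#` for comments; no other syntax is needed.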
14:56 <adac> would this only temporarily enable it?
14:56 <sdeziel> adac: no, that would be permanent for the module load
14:56 <adac> sdeziel, ok thanks
14:57 <sdeziel> adac: the manual module loading is to make the sysctl command work reliably
14:57 <adac> sdeziel, guess this ansible module would work
14:57 <adac> to enable the module in the first place
14:57 <sdeziel> adac: normally the conntrack modules are loaded on demand based on your ip{,6}tables rules
14:58 <adac> sdeziel, yes but that only happens later in time when ufw is set up
14:58 <sdeziel> adac: but sometimes this on-demand loading is too late for the sysctl to happen reliably
14:58 <adac> I need to do this first
14:58 <sdeziel> adac: hehe, I ran into the same issue and my workaround was to manually load the modules on boot
14:59 <adac> So maybe I can via ansible just enable the module, then enable this liberal tcp thingy
14:59 <sdeziel> (gee, took me way too many words to explain that one... sorry, not fully awake it seems)
14:59 <adac> and then do the ufw stuff
14:59 <adac> sdeziel, welcome to the club :)
15:00 <sdeziel> adac: IIRC TCP liberal is for when you want to accept an ongoing connection you see for the first time. In other words it removes some of the stateful checks
15:01 <sdeziel> adac: have you tried "iptables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT" as a workaround instead?
15:01 <adac> sdeziel, I need it because of this issue: https://github.com/ansible/ansible/issues/45446
15:02 <adac> sdeziel, would that be a better option?
15:02 <adac> mean your workaround?
15:05 <sdeziel> adac: dunno, it looks like the connection states are reset so possibly my workaround wouldn't work.
15:06 <adac> sdeziel, for me the question is simply also what are the effects of setting this liberal TCP thingy
15:06 <sdeziel> adac: if it was my machine though, I'd probably do the TCP liberal thing before and undo it after, assuming you have that flexibility with ansible
15:06 <adac> on other connections
15:06 <adac> ok that sounds good yes
15:06 <sdeziel> adac: that's where tomreyn's link becomes handy, look at what it says under nf_conntrack_tcp_be_liberal
15:07 <adac> sdeziel, hmm loading the module does not automagically create this "file"
15:08 <adac> is this the module nf_conntrack or is this its own module as well: nf_conntrack_tcp_be_liberal
15:08 <sdeziel> adac: hmm, I'm assuming that you checked "lsmod | grep nf_conntrack" ?
15:08 <adac> the latter should be a variable, right?
15:08 <sdeziel> more or less, yes
15:09 <sdeziel> it's a pseudo file under /proc and is also available as a sysctl key
15:10 <adac> sdeziel, this is what ufw enable or a similar command enables:
15:10 <sdeziel> adac: I'll try to reproduce on a local VM
15:13 <adac> sdeziel, kk thanks
15:14 <sdeziel> adac: so loading nf_conntrack_ipv4 is what created /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal
15:15 <adac> sdeziel, ohh thanks so much!!
15:16 <sdeziel> adac: np
15:17 <adac> sdeziel, i will set this to 0 again after the ansible task is finished
15:17 <adac> thank you as well for this good advice
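Putting the thread together, the toggle-around approach adac settles on might look like this as plain shell. A hedged sketch only: it needs root, and the exact ufw/ansible step in the middle is whatever triggered the connection-reset issue linked above.

```shell
# Load the conntrack module up front so the sysctl key exists
# before the firewall is touched (on demand loading can be too late,
# as sdeziel notes above).
modprobe nf_conntrack_ipv4

# Relax the mid-stream TCP state checks while the ruleset is rebuilt,
# so established connections (e.g. the ansible ssh session) survive.
sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1

# ... run the ufw / ansible firewall steps here, e.g.:
ufw reload

# Restore strict stateful checking afterwards, as adac plans above.
sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=0
```

In Ansible terms this would be a `modprobe` task plus two `sysctl` tasks bracketing the ufw tasks.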
15:34 <paulatx> anybody happen to know how long the Ubuntu 16.04 EC2 images (https://cloud-images.ubuntu.com/locator/ec2/) that use the AWS specific kernel will support new hardware? The first graphic on this page: https://www.ubuntu.com/about/release-cycle makes it look like hardware support has already stopped for 16.04 but I'm not sure if that also applies to the AWS specific kernels, which are based on the 4.4.0 GA kernel and not the 4.
15:34 <paulatx> I'm trying to figure out when the support for new hardware will stop on those AWS tuned kernels. I spoke with tomreyn yesterday about this and he said I should try again today during UK business hours
16:34 <tomreyn> paulatx: it looks like you hit the late part of this day (very understandable if you're on the U.S. west coast). maybe we can try together to get a clearer picture. by the way, i'm just a volunteer here, don't work for canonical or anything.
16:35 <tomreyn> paulatx: also, i assume commercial support is available for those EC2 instances too, if you need a fully reliable response to this question.
16:37 <tomreyn> i run only 'standard' (non-EC2) ubuntu installations myself (including 16.04 LTS) but those should be similar to EC2 instances apart from the kernel and some of the packages installed by default.
<tomreyn> i see that there is a linux-image-aws package available in 'xenial' (16.04 LTS), currently at version
16:39 <rbasak> paulatx: what do you mean by new hardware in the context of EC2? New virtual hardware?
16:39 <tomreyn> in ubuntu 18.04 LTS, there is also this package (currently version as well as a linux-image-aws-edge package (in case one would need to use the very latest hardware, mostly for testing purposes, currently at version, and in the community supported 'universe' section)
16:39 <sarnold> paulatx: there's an #ubuntu-kernel channel that may have more folks specifically aware of hardware support on amazon clouds
16:40 <rbasak> In general, hardware enablement, which includes virtual hardware in the case of EC2, is permitted until end of standard support, which for 16.04 is April 2021. However I don't know of specific intentions in the case of these particular images.
16:42 <paulatx> sarnold: ahh it sounds like #ubuntu-kernel is where I should be asking.. they may have a better pulse on what the deal is with support of the AWS specific kernels/images
16:42 <rbasak> rcj: would you know the answer to paulatx's question above please? I don't see fginther or gaughen in here.
16:42 <tomreyn> paulatx: can you discuss the use case? based on your hostname, i assume this may be graphics hardware related?
16:44 <paulatx> tomreyn: the quick and dirty is that AWS is always announcing new hardware so we want to know how long the Ubuntu maintained 16.04 LTS EC2 images will support new hardware. If they will support new h/w through April 2021 then we may put off upgrading to 18.04 LTS but if support for new h/w ends sooner then we will upgrade sooner
16:48 <rcj> rbasak: thanks for the ping. I'm seeing who from the kernel team is here to answer that completely (rather than bumping paulatx to #ubuntu-kernel)
16:50 <paulatx> rcj: thanks
16:52 <rcj> The short answer is that the linux-aws custom kernel was created to allow for AWS specific tuning and for new AWS instance/feature support that would not be possible with the generic kernel.
16:59 <paulatx> rcj: right.. so are there details somewhere on the support / lifespan for the linux-aws custom kernel?
17:02 <lotuspsychje> fresh from the press: https://blog.ubuntu.com/2019/04/17/ubuntu-server-development-summary-16-april-2019
17:04 <bjf> rcj, paulatx, what's up?
17:05 <teward> assuming rcj summoned you, bjf, questions re: the linux-aws custom kernel. trying to find the exact original question in the scrollback though.
17:05 <teward> ah here it is.
17:06 <teward> bjf: sent you a small number of lines in PM of the original question :P
17:06 <bjf> teward, got them
17:06 <bjf> teward, so .. in general we don't backport new support for new HW back to earlier releases
17:06 <teward> paulatx: ^
17:06 <bjf> teward, however
17:06 <teward> bjf: mishighlight, targets -> paulatx :)
17:06 <teward> i'm just helping :)
17:07 <bjf> teward, paulatx, however if Amazon asks us for specific support and that support isn't too intrusive (low risk of regressions) then we will do it
17:08 <bjf> teward, paulatx, it's not a simple rule .. in general, the idea with an LTS every 2 years is that you have time to plan for your upgrade
17:09 <paulatx> bjf: interesting.. ok. So it sounds like the new hardware support essentially boils down to how hard it will be to add support for the h/w: if it is simple it will be added, and if the level of effort / risk is high then no dice
17:10 <bjf> paulatx, yes, that's basically it
17:11 <bjf> paulatx, if you are a Canonical customer then you can raise your specific request through our support org. otherwise you can create a launchpad bug and point us at it and we will look at it
17:12 <paulatx> bjf: no I think this answers my question, thank you very much. Sorry for pulling in so many people.. didn't realize this was going to be a hard question to answer! :)
17:13 <sarnold> part of the beauty of aws is not caring much about the hardware :)
17:13 <bjf> paulatx, no problem
17:14 <blackflow> I don't know how anyone can use public clouds/VMs in the post-Meltre (Meltdown+Spectre) world.
17:14 <sarnold> meltre :D
17:15 <blackflow> mmmh-hmm. :)
17:16 <blackflow> I stopped using them after I saw, with my own five eyes, cross-VM ssh pubkey injection via rowhammer and non-ECC host side RAM. yes, 'sright, one VM injecting ssh keys into another's memory.
17:16 <paulatx> sarnold: haha.. but when they come out with <insert cool new tech here> that is AWS specific and you can only take full advantage if you have kernel support, then we have to care
17:16 <blackflow> I haven't yet personally seen Meltre exploited like that in public clouds, but I've heard stories.
17:17 <paulatx> anyway, thanks for the help everyone
17:17 <sarnold> blackflow: wow. crazy. it feels almost criminal that They still sell non-ECC ram on new machines. (Lookin at you intel.)
17:17 <blackflow> well all hetzner's non-enterprise baremetals are non-ECC and they're used a lot by the eastern bloc for VMs and games.
17:18 <sarnold> I hadn't heard that about hetzner before :(
17:18 <blackflow> I've seen a few offers at lowendtalk dot com that were non-ECC too
17:18 <blackflow> sarnold: their entire EX line, the oldest, is non-ECC Core i7 https://www.hetzner.de/dedicated-rootserver/matrix-ex
17:19 <blackflow> only with the PX line did they start offering ECC
17:23 <sarnold> blackflow: seems silly to have dual NVMe in those things but no ECC
17:25 <blackflow> they're aimed at companies offering cheap shared hosting and VMs, and at games.
20:22 <DammitJim> man, I have to say that I am very happy with Canonical support

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!