/srv/irclogs.ubuntu.com/2019/03/21/#ubuntu-server.txt

genewitch /wc03:23
lordievaderGood morning07:09
=== dpawlik_ is now known as dpawlik
sinhueHi, I'm trying to install Ubuntu Server 18.04 on a virtual machine but all I'm getting is: "Error setting up gfx boot." I'm getting desperate. Even tried older releases. What could be the problem?08:56
blackflowsinhue: did you try with google, there are some suggestions in first few results, for "Error setting up gfx boot"09:06
sinhueI did but nothing helped me.09:06
sinhueWhat helped me was noticing that the VM does in fact have 8 MB of RAM instead of 8 GB :D :D09:07
sinhueSorry for bothering09:07
blackflowsinhue: which hypervisor is this? Did you see this, and try as suggested to reach the menu? https://askubuntu.com/questions/895148/error-setting-up-gfxboot/92090709:08
sinhueblackflow, I solved it already bro. But this was ESXI 6.5 hypervisor.09:09
blackflowthis is ubuntu server support, you're not bothering, but it is expected you try to help yourself first as much as possible :)09:09
blackflowsolved how?09:09
blackflowI'm suspecting missing or invalid graphics device?09:09
blackflow(and please don't call me bro)09:10
kstenerudcpaelzer: I'm beginning to think that a keepalived backport isn't in the cards. The commits are all merges containing other merges containing yet more merges, which completely obscures what's actually happening. I've found vrrp and IP deletion code strewn through the logs, but it spans years, fixing various problems when using ipv4, ipv6, static routes, physical vs virtual devices, etc. I'm not confident that I'd be able to find it all :/10:19
kstenerudAll references to fixing keepalived + systemd-networkd only refer back to the "beta branch", with no commit refs10:21
kstenerudThe workaround I posted does work. Perhaps we're better off waiting to see if this solves the issue for enough people? The issue should be fixed in keepalived 2.0 in disco, which I'm testing now10:23
kstenerudIt's actually quite a complicated problem, as keepalived must rely on heuristics to determine if a device or ip being removed was intentional or not...10:24
cpaelzerok, sounds right10:29
cpaelzerdo we have a time limit for when we check whether "this solves the issue for enough people" is true or not?10:30
cpaelzerand if so, a planned action (e.g. as discussed yesterday: docs/blogs/...)10:30
kstenerudThe bug report has the workaround, but we should definitely update any docs we maintain that deal with keepalived. I don't see much other than https://www.ubuntu.com/kubernetes/docs/keepalived which uses juju10:47
kstenerudI suppose it comes down to which approach we would bless? apt? snaps? charms?10:48
* blackflow bets 5€ on the snap.10:56
lotuspsychjelol10:57
blackflowmakes sense, if there are a lot of issues shoehorning it into a specific base env.10:58
=== Wryhder is now known as Lucas_Gray
blackflowIs the ZFS in Bionic patched to support the latest 0.7.x featureset?13:05
lordcirthblackflow, bionic has 0.7.514:00
blackflowas the baseline version, yes, but with a megaton of patches to make it work with 4.15+ kernels14:01
blackflowso I'm asking what else is there featureset wise, or most importantly, can it fully use pools created with 0.7.12 (which I'm preparing to test in a few)14:01
ahasenackwrt pools, most likely. We are settling on a very baseline set of features to remain compatible with other pools out there14:02
ahasenackbut you could ask in the kernel mailing list perhaps: https://lists.ubuntu.com/mailman/listinfo/kernel-team14:02
blackflowif so that's awesome. I don't enable any non-default features, I only need it to rw to pools created with 0.7.12 defaults14:02
blackflowbut... about to test all that now14:02
blackflowand make that 0.7.13, which is actually the latest :)14:03
ahasenackI have 0.8 elsewhere, and the story there is different14:03
ahasenackas one might expect given the version bump14:04
blackflowoh yeah, that's a whole new bag of goodies in that :)14:04
lordcirthWe are very glad for the faster scrubs in 0.8. Our current production storage takes 6 days to do the weekly scrubs, and it's less than half full. We need the speed to keep up.14:05
ahasenacklordcirth: and with 0.8, how long does it take?14:05
lordcirthHaven't tested yet14:05
lordcirthWe are still putting together the new system14:06
blackflowlordcirth: why weekly tho? esp. at that size. scrub is just "let's find problems now before regular block access does anyway"14:06
lordcirthblackflow, best to be safe.14:06
ahasenackthat much activity could also reduce the lifetime of the drives, no?14:07
lordcirthThen we RMA them.14:07
blackflowlordcirth: but if you have properly set up pools (redundancy, hot swappable drives) then you gain very little from frequent scrubs14:07
lordcirth3-year evergreen due to warranty anyway14:07
blackflowfrom *too frequent, I mean14:07
ahasenacklordcirth: have these scrubs ever found problems?14:07
blackflowstatistically, there should be one corrupt bit for every 10TB accessed, according to (now old) google research14:08
ahasenackwell, "ever" is a long time :)14:08
ahasenacklet's change that to "frequently" :)14:08
lordcirthWe've had two drive failures. IIRC at least one of them was found during a scrub?14:08
ahasenackor provoked by one ;)14:08
ahasenackby several, that is14:09
lordcirthBetter to have 2 known failures than 1 unknown one, I guess14:09
blackflowI second what ahasenack said. at that scale, too frequent scrubbing is just adding to the wear14:09
lordcirthWeekly scrubs are the default in Ubuntu's packages, presumably for a reason.14:09
blackflowno, monthly are14:09
blackflowfirst sunday in the month14:09
ahasenack# Scrub the second Sunday of every month.14:10
ahasenack24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub14:10
ahasenackin my disco system14:10
lordcirthReally? hmm I thought it was every sunday14:10
blackflow*Second, okay.14:10
ahasenackI didn't do the date +\%w math, just going by the comment in there14:10
blackflow(yeah I change that to first in the month)14:10
blackflowahasenack: 8-14 is the day-of-month range, and date +%w checks if it's Sunday; together they match only the second Sunday of the month14:12
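[Example: a sketch of the stock cron entry quoted above, shifted from the second to the first Sunday of the month, as blackflow describes doing. It assumes the entry lives in /etc/cron.d/zfsutils-linux, which is where the Ubuntu zfsutils-linux package normally ships it; check your own release before editing.]
    # Scrub the first Sunday of every month (day-of-month 1-7 plus the Sunday check).
    24 0 1-7 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub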
sdezielscrub should be read-only unless there is anything to correct, so presumably this is lighter on the disks, no?14:13
lordcirthScrubs are still a lot of seeks, at least until 0.8 when it's sequential14:14
sdezielI agree that scrubbing 6 days per week is very excessive though14:14
sdezielah14:14
lordcirthNot 6 days per week, once per week14:14
lordcirthAh, you mean total time14:14
sdezielyeah :)14:15
lordcirthWell, hopefully 0.8 will be much less load14:15
sdezielmust absolutely kill the normal performance, except on Sundays when nobody's there to enjoy the real speed ;)14:15
lordcirthActually, part of why the scrubs take so long is that they are quite low priority compared to normal usage14:17
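[Example: one way to watch a long-running scrub like the ones discussed above; "tank" is a placeholder pool name.]
    zpool status tank
    # The "scan:" line in the output reports scrub progress, throughput,
    # and the estimated time remaining.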
zetherooblackflow: you here?14:20
blackflowyup14:22
zetheroowhy ewww ... cloudflare? I am being told that it's "the best" etc ...14:28
blackflowzetheroo: personal observation, nothing more.  btw, about that "search mt.local": if you're using .local for domains, resolved might have problems with that. .local should be reserved for mDNS14:30
zetherooblackflow: Ok, was just wondering if there was anything about cloudflare specifically ... since I never heard about it until the other IT guy said "We are switching to it because its the best" :D14:39
zetheroo... and "google sucks" ... because we had been using 8.8.8.8 etc14:40
blackflowzetheroo: "best hyped" more likely. nothing special about it tho'.  technically, some networks erroneously treat 1.0.0.0/8 as a test range so it might not work everywhere, but that's probably a very edge case14:41
lordcirthcloudflare got 1.1.1.1 in exchange for being able to handle and study the huge amounts of random traffic it gets14:42
lordcirthNot many companies can handle that much traffic.14:43
blackflowwhich only puts them in the position to abuse that power14:46
blackflowpersonally, I've been running my own BIND authoritative servers and resolvers for many years.14:47
blackflowahasenack: lordcirth: fwiw, tests have shown Bionic having normal rw access to 0.7.12 generated zpool with default features. thanks for your feedback.14:50
lordcirthblackflow, cool14:50
ahasenacknice14:50
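[Example: a rough sketch of the round-trip test blackflow describes: create a pool with only default feature flags on the newer 0.7.12/0.7.13 host, export it, then import and exercise it on Bionic's 0.7.5. "testpool" and /dev/sdX are placeholders.]
    # On the 0.7.12/0.7.13 box: create with default features, then export.
    zpool create testpool /dev/sdX
    zpool export testpool
    # On the Bionic (0.7.5) box: import, check for unsupported features, test rw.
    zpool import testpool
    zpool status testpool
    touch /testpool/rw-check
    zpool export testpool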
sdezielcloudflare's anycast network is pretty cool on paper though14:53
lordcirthCloudflare has a lot of cool articles on HA14:54
=== dpawlik is now known as dpawlik_
zetherooblackflow: about systemd-resolved ... you said that it tries the first two nameservers but no more than that ... or did I misunderstand you?15:26
blackflowzetheroo: you misunderstood.15:28
blackflowzetheroo: glibc is the one that doesn't take more than 3 "nameserver" entries in resolv.conf15:28
blackflowzetheroo: systemd does something else. tries one nameserver, and if that fails, tries another and then keeps on using that other, ignoring the order defined by resolv.conf15:28
blackflowthere's been a huge issue/debate upstream about that unexpected behavior15:29
zetheroosorry, that's what I meant - so if you have three nameservers (A, B and C), systemd tries A, then B, and then keeps trying B?15:30
zetherooor does it just randomly try A, B or C ?15:30
blackflowkeeps trying B15:32
blackflowI mean, keeps _using_ B until it fails, then tries C15:32
blackflowor in other words, it sticks to using the nameserver last known to work15:33
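[Example: one way to observe the behaviour blackflow describes on 18.04; resolved reports which upstream server it is currently sticking to. On newer releases the command is "resolvectl status".]
    systemd-resolve --status
    # Compare the per-link "DNS Servers:" list with the "Current DNS Server:"
    # line to see which server resolved has latched onto.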
zetheroohow long does it take to fail on B?15:34
blackflowzetheroo: note that NXDOMAIN isn't a failure, and your B is cloudflare, and if you're asking for mt.local, you'll get NXDOMAIN15:35
zetherooJust was odd to hit a brick wall like that15:35
zetherooblackflow: oh, darn ... that config was what was changed to after the issue ...15:35
zetheroooriginally the cloudflare IP was the third or forth15:36
zetherooIt was: Win server primary DNS, Win server secondary DNS, Gateway IP, Cloudflare Primary, CF secondary15:37
blackflowtoo much. more than 3 isn't effective for glibc anyway15:37
zetheroois that a recent limit in glibc?15:41
blackflowno, it's been like that from day one. the bug in LP shows 2007 at least15:42
blackflowbug #11893015:43
ubottubug 118930 in glibc (Ubuntu) "Resolver: MAXNS should be increased" [Wishlist,Confirmed] https://launchpad.net/bugs/11893015:43
blackflowit's like that in pretty much all the distros, a glibc default nobody thinks is important to raise15:43
zetheroowhoa15:43
blackflow(and they'd be right. if 3 NSes fail, you've got more serious issues :) )15:43
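[Example: the 3-server limit is a compile-time glibc constant; on a system with the libc development headers installed, something like the following shows it (the exact comment wording varies between glibc versions).]
    grep MAXNS /usr/include/resolv.h
    # define MAXNS 3   /* max # name servers we'll track */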
rbasakSeems to me that the bug is requesting much more complex DNS handling - more than what glibc does.15:55
rbasakIn the VPN use cases for example, and where different non-Internet domains are being added by some subset of configured servers.15:55
rbasakIn these cases, surely something like systemd-resolved would be more appropriate, rather than asking for implementation in the glibc resolver?15:56
blackflowwould, if resolved behaved properly.15:57
lordcirthblackflow, what does it do wrong?16:00
blackflowlordcirth: for starters, it breaks the decades-old expectation of nameserver priorities. then it has issues with DNSSEC. then it has issues with VPNs.16:00
blackflowthe NS order is very much important, it's not round robin. you have primary and then failover resolvers. resolved breaks that entirely with round-robin only resolving.16:01
blackflowyou get round robin if you explicitly set options rotate in resolv.conf16:02
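[Example: a minimal resolv.conf sketch of the two glibc behaviours being contrasted: strict listed order (primary first, then failovers) by default, round robin only with "options rotate". The addresses are placeholders apart from 1.1.1.1.]
    # /etc/resolv.conf
    nameserver 192.0.2.1   # primary, tried first by glibc
    nameserver 192.0.2.2   # failover
    nameserver 1.1.1.1     # failover; glibc reads at most MAXNS (3) entries
    # Uncomment to spread queries across the servers instead of strict order:
    # options rotate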
rbasakblackflow: if resolved has issues then they need to be resolved in resolved, sure. I don't think we can expect glibc to implement more complex behaviour than it already does though. resolv.conf is overloaded enough as it is.16:08
rbasak(and it's got to do everything in process, can't realistically keep state, etc)16:08
rbasakSo I think the answer to that bug really needs to be "Won't Fix; use resolved, or implement and use some other proxy".16:09
rbasakHowever I don't consider myself expert enough in the area to decide that for the project without the opinion of some other Ubuntu devs who are more expert.16:09
blackflowrbasak: I agree. I just mentioned it because zetheroo had more than 3 nameserver entries, so I pointed out that that's useless16:11
rbasakAh, sorry.16:12
blackflowrbasak: btw the issues all exist upstream. unfortunately some of them are being WONTFIX'd, like the round robin thingy16:12
supamanwhat is the unit name for the nfs daemon?16:27
supamannfs-server?16:28
Oolnfs-kernel-server is the service name16:29
supamanok, thanks16:29
blackflowsome like to call it nfs<tab> :)16:34
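[Example: using the name Ool gives with systemctl; on Ubuntu the nfs-kernel-server package provides the unit, and on many releases nfs-server.service (supaman's first guess) resolves to the same unit via an alias, so if one name isn't found the other is worth trying.]
    systemctl status nfs-kernel-server
    systemctl restart nfs-kernel-server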
=== kallesbar_ is now known as kallesbar
