/srv/irclogs.ubuntu.com/2023/01/31/#ubuntu-server.txt

habysinstalling over the network for uefi seems to require chainloading grub. It seems to have the location of grub.cfg hard coded in it. Is there any easy way to change that? I already have an ipxe menu, I'd rather not also have to go through a grub menu only for the UEFI installations.00:50
sarnoldI thought the default grub config was no menu and you've got to hold down shift or something to even see the choices00:54
habysthat may be the case, but considering I have several UEFI targets, and the grub config seems to be hardcoded, I need a way to tell grub to read a different config01:02
habysoff the bat, I need different targets for amd64 vs arm64, server vs desktop, and more. If I can't instruct grub from ipxe to choose a different config, then I can't automatically install things anymore (with ipxe target files)01:03
habysfor some reason I don't need to chainload grub to boot BIOS.. kernel+initrd and the right kernel parameters and you are off to the races01:04
sarnoldyou may be better served without grub at all?01:04
habysfor some reason UEFI just crashes when it tries to set the ISO as root. I could only get it to work by chainloading grub. Not sure why that is the case01:05
habysyeah I wish I could get rid of it. No idea why all the boot documentation uses it.01:05
habysI could try to build grubx64.efi myself and hard code a different config path for each binary, referring to each from ipxe01:08
habysbut man it would be so much easier if I could just `chain http://blah/grubx64.efi target=desktop` or something similar01:08
habyshttps://ubuntu.com/server/docs/install/netboot-amd64 was useful to get this working, but this basically assumes you don't have any other pxe targets..01:12
sarnoldthat might be worth clicking the 'help improve this document in the forum' link on the page01:13
habysI'll give it a shot, I don't have a clear solution though01:14
habysha that link says "use IRC for support questions" >_>01:14
sarnoldsure :)01:15
habyshttps://ubuntuforums.org/showthread.php?t=2483456&p=14128774#post1412877401:49
sarnoldsomehow that doesn't feel like a 'forums' kind of thing to me; I'd expect the server discourse or askubuntu to do a better job collecting answers01:50
habyswell the server discourse says no tech support questions01:50
habysthere's just not a lot of hope for me huh01:51
sarnoldI certainly think you're out in unusual territory01:51
sarnoldbut there's loads of folks who do funny things :) hehe01:51
habysyeah... trying to network boot OSes.. I feel like I'm from a different planet01:52
arraybolt3habys: That error you showed in the forums post looks like it can't find an initramfs.01:53
arraybolt3Maybe that's what's wrong?01:53
habysI usually see that error when it can't find the iso01:54
habysbut I'll double check my syntax for the initrd01:54
habysty for looking seriously!01:54
arraybolt3Sure thing!01:54
arraybolt3habys: Whatever loads the kernel should also load an initramfs that the kernel can mount and use to help boot, at least that's how it works when booting a disk. I assume it would work the same way when booting over the network, though I've only ever booted Debian over the network so I'm not sure.01:55
habysoh yeah, it definitely can find the initrd. you can see ipxe downloading it01:56
arraybolt3Hmm... perhaps the compression used on the initrd isn't supported by the kernel?01:56
arraybolt3I've had that happen once (trying to use an old kernel with a newer version of Ubuntu).01:56
habysyeah, brand new and matching kernel + initrd + live server.iso01:56
habysI saw that same error when I was setting up server to boot in BIOS mode, and I had put the wrong path in for the ISO01:57
habysbut I've got BIOS working fine, and it's going to be the same kernel+initrd+iso for bios or uefi01:57
arraybolt3Weird. All I know is I see that exact error (I think) every time the initramfs goes missing. Maybe it also applies to the ISO missing, but that's what I've seen in the past.01:57
habysI think the initramfs only serves to pull down the iso and use it as the root for the netbooted filesystem. I'll keep trying stuff. Not sure if I should give up on trying to make this work without GRUB, but that would be great..02:04
=== chris14_ is now known as chris14
alkisghabys: see my code there for how to pass variables from ipxe to grub: https://github.com/alkisg/ltsp5-uefi/blob/main/ltsp.ipxe#L7306:33
alkisgI haven't read the rest of this chat, I only read part of what you uploaded to the forum link above06:34
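A minimal sketch of the general technique (selecting a per-client config from a chainloaded GRUB): `net_default_mac` is a standard GRUB network variable, but the HTTP device syntax, server address, and file layout here are illustrative assumptions, not alkisg's actual setup:

```
# grub.cfg loaded by the chainloaded grubx64.efi (hypothetical layout):
# pick a per-machine config by MAC address, falling back to a default.
if [ -s (http,192.168.0.1)/grub/${net_default_mac}.cfg ]; then
    source (http,192.168.0.1)/grub/${net_default_mac}.cfg
else
    source (http,192.168.0.1)/grub/default.cfg
fi
```

The linked ltsp.ipxe shows a fuller version of passing state from iPXE into GRUB.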
yalebhi what's the easiest way to get ssh using a cert so I don't have to type the password each time12:45
yalebit's getting complicated with ~20 different logins lol12:45
yalebsome with "secure" passwords that are different for the same user between two different servers12:46
yalebbut that's less important as they're hardly accessed via cli12:46
yalebno need for complicated public cert shares/intermediaries etc heh12:46
yalebjust basically to remember the login for X days12:47
yalebI did think I could make some little services that connect automatically with passwords in plaintext since it's using LUKS or similar anyway12:48
yaleband then just connect a terminal to those as needed12:48
yalebbut... kinda stupid just because I don't know how to do what I describe heh12:48
yalebif anyone knows or can point me in the right direction, i'd appreciate it12:49
konstruktoidansible vault can be an option yaleb 13:01
konstruktoid * <del>ansible vault can be an option</del> wrong channel13:03
konstruktoidbut perhaps hashicorp vault; the "remember the login for X days" part is done via pam, and an ssh connection will only be held open for the current connection13:03
yalebas I say i'm pretty sure I can just generate a private user key and copy that over to the host?13:10
yaleband connect with that instead of the user/pass combo13:10
yalebI try to add as little extra software as I can haha13:10
konstruktoidah sorry, then lineinfile your ssh key to the user authorized_keys13:19
konstruktoid * ah sorry, then copy your ssh key strings to the user authorized_keys13:20
konstruktoidor append is perhaps the correct term13:20
konstruktoidbut do you need a separate key for each user?13:20
yalebaha yesss that was exactly right and I have done it per user with the help of the DO guide below ^_^13:26
yalebhttps://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-ubuntu-20-0413:26
yalebit was pretty much what I thought but in reverse haha13:26
yalebsafe passwordless accounts now yay13:26
yalebthanks for the tip <313:27
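What that guide automates can be sketched in a few lines of shell (the temp directory stands in for the client's ~/.ssh and the server's ~/.ssh/authorized_keys; real setups just run `ssh-copy-id` over an existing password login):

```shell
#!/bin/sh
# Sketch of what ssh-copy-id does under the hood: generate a key pair,
# then append the public half to the target user's authorized_keys.
# The temp dir is purely illustrative; real clients use ~/.ssh.
set -e
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/id_ed25519"
# On the server this append would target ~user/.ssh/authorized_keys:
cat "$dir/id_ed25519.pub" >> "$dir/authorized_keys"
grep -c '^ssh-ed25519 ' "$dir/authorized_keys"   # prints: 1
```

In practice `ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host` handles the append (and the permissions) for you.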
ahasenackI'm building a package in a ppa, lunar, proposed disabled, and it builds fine13:58
ahasenackthen I do the same in a lxd container, lunar, also proposed disabled, and it fails because something is injecting `-Wall` in the CFLAGS :/13:58
ahasenackcc1: all warnings being treated as errors13:58
ahasenackI can't think of what it would be13:59
ahasenackI'll just start over13:59
habysalkisg: the installer for ubuntu changed since 18.04, thanks though14:24
=== arraybolt3_ is now known as arraybolt3
ahasenackkanashiro[m]: that corosync-blackbox script is just a 3-line shell script16:45
ahasenackkanashiro[m]: we could also patch it to check if qb-blackbox is installed, and if not, tell the user to install libqb-tools16:46
ahasenackbut that would need checking with the consumer of this script if it is prepared for such a response and exit status16:46
ahasenackfunny, it uses $(date +%s) twice to name a file, instead of calling it once and storing the timestamp in a var16:47
kanashiro[m]ahasenack: and add libqb-tools as Suggests 16:47
ahasenackthe way it is, it's quite possible the two dump files it creates will have different names16:47
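The fix ahasenack describes is to capture the timestamp once and reuse it, so both dump files are guaranteed the same name stem (the file names below are illustrative, not the script's real ones):

```shell
#!/bin/sh
# Calling $(date +%s) twice can straddle a second boundary and yield two
# different timestamps; capturing it once avoids that.
ts=$(date +%s)
dump1="fdata-$ts.start"
dump2="fdata-$ts.end"
echo "$dump1 $dump2"
```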
ahasenackkanashiro[m]: yeah16:47
kanashiro[m]right, we can do that16:48
ahasenackmaybe also check in the upstream docs where and in which cases they recommend running corosync-blackbox16:48
ahasenackbut the path of least surprise right now is still a recommends, and bringing libqb-tools into main, if that's possible16:49
ahasenacklet me write that up in the MP16:50
ahasenackoh, about that rsyslog build failure, I bet I know what it is17:08
ahasenackit's the presence of .git in the source17:08
ahasenacksince I'm building from a git branch, there is a .git, and I bet the build system interprets that as "oh, we have ourselves a developer here, let's enable some devel flags"17:09
ahasenack# Running from git source?17:09
ahasenackin_git_src=no17:09
ahasenackAS_IF([test -d "$srcdir"/.git && ! test -f  "$srcdir"/.tarball-version], [in_git_src=yes])17:09
* ahasenack shakes fist at it17:09
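The AS_IF check quoted above can be reproduced in plain shell; it also suggests one hedged workaround for building from a git branch: touching `.tarball-version` makes the tree look like a release tarball even though `.git` is present (assuming the build system honors nothing else):

```shell
#!/bin/sh
# Same condition as the configure snippet: a .git directory marks a git
# checkout unless .tarball-version also exists.
set -e
srcdir=$(mktemp -d)
mkdir "$srcdir/.git"
touch "$srcdir/.tarball-version"   # the hypothetical workaround
in_git_src=no
if test -d "$srcdir/.git" && ! test -f "$srcdir/.tarball-version"; then
    in_git_src=yes
fi
echo "$in_git_src"   # prints: no
```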
=== arif-ali_ is now known as arif-ali
=== sdeziel_ is now known as sdeziel
lenovoHi all, My squid proxy is not hitting anything in the access.log, what can I be missing?18:13
yalebidk that provides very little detail18:14
yalebwhat have you tried/checked?18:14
lenovoit's supposed to be fully configured, working only in memory, but all I get is TCP_TUNNEL and TCP_MISS18:15
lenovono TCP_HIT18:16
sdeziellenovo: can you share your squid.conf via a paste service?18:30
lenovodifficult to copy via SSH, any specific detail/parameter you would think?18:32
yalebhow is it difficult to copy via ssh18:32
yalebliterally just cat it into another file haha18:32
yalebor a network share to your main pc18:32
yalebor use the gist/pastebin cli client even18:33
yalebso many good options!18:33
ravagecat /etc/squid/squid.conf|nc termbin.com 999918:33
lenovohttps://pastebin.com/My4h9V9c18:38
ravagelenovo: https://termbin.com/x2zl9 18:42
ravagethis is your actual config18:42
ravageif i read it correctly there is a general deny for everything except some local IPs18:43
lenovoyou mean http_access deny all?18:45
ravageyes18:45
ravagehttps://termbin.com/e6uo18:46
lenovostandard squid proxy config after the particulars are permitted18:46
ravagethis one works for me18:47
ravagebut i have additional firewall rules18:47
lenovoand it still seems to read other request, just doesn't cache'em18:47
lenovoyou are caching in drive, I am trying to have it work in memory only18:50
lenovomight that be the reason?18:52
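For reference, a memory-only Squid cache is typically expressed along these lines in squid.conf (the directive names are standard; the sizes are placeholders, not a recommendation):

```
# Cache in RAM only: give the memory cache a budget and cap per-object size.
cache_mem 256 MB
maximum_object_size_in_memory 512 KB
# No cache_dir line: without one, squid keeps no on-disk cache at all.
```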
sdeziellenovo: anything in /etc/squid/conf.d/?18:55
lenovoyes debian.conf with #http_access allow localnet18:58
lenovoshall that be uncommented?18:58
sdeziellenovo: if it's commented out, it's not important18:59
sdeziellenovo: the localnet ACL definition looks weird. I'm not entirely sure which CIDR you'd like to allow but possibly this: `acl localnet src 192.168.0.0/25`19:00
sdezielor maybe the more common `/24`19:00
yalebI cannot reply to someone named "lenovo"19:04
yalebmy core ideals go against it19:04
lenovohahahah19:05
yalebtalk to my 10th gen intel rig that can't get wifi networking on any major linux distro about it if you have any queries19:05
yalebit's available to chat if you google chatgpt19:05
lenovohow old are you?19:06
sdeziellenovo: that said, despite that weird ACL, if you are seeing TCP_TUNNEL and TCP_MISS, it means you are allowed to poke it. Maybe you are trying to fetch some resources that are not cache-able by your config. Do you have a sample URL of something you'd like to be cached?19:07
lenovosdeziel, not really, but there is no hit at all, just browsing simple cnn.com bbc.com19:08
sdeziellenovo: I think those sites use HTTPS making them harder to cache19:08
lenovoany sure to be cached website you suggest?19:09
sarnoldor they employ 'cache-busting' curls like tacking on ?t=<unixepoch> or similar nonsense19:09
sdeziellenovo: cause when you flow an HTTPS connection through squid (with your current config) it will behave as a TCP proxy and won't be able to see whatever gibberish is passing through that TCP connection19:09
sdeziellenovo: with your current squid config, you'd need to hit a HTTP site so that squid can observe the "objects" flowing through it19:10
sdeziellenovo: and then, if those objects have specific headers indicating to squid that it's OK to cache them (Cache-Control, Expiry, etc), then it would keep a copy and serve it to the next client requesting the same data19:11
sdeziellenovo: assuming the cached copy didn't go stale between the first and second requests19:11
lenovosdeziel, does it not matter that it is working only with memory?19:12
sdeziellenovo: no, the caching backend type shouldn't change anything19:13
sdeziellenovo: with the modern web being like 95% HTTPS (yay!) running a caching proxy isn't trivial anymore19:13
lenovosuggest a sure to be cached site19:14
lenovoI am checking also chistes.com, no difference19:14
sdezielhttp://httpforever.com/19:14
sarnold:D19:15
sarnoldhttp://neverssl.com/19:15
lenovo:)19:15
sdeziellenovo: and from that page, it seems that only the .js is cache-able http://httpforever.com/js/init.min.js19:15
lenovoTCP_MISS all19:16
sdezielhttpforever has links to JS libs hosted on Cloudflare using HTTPS, oh the irony19:16
lenovono haha, we got a hit19:16
lenovoTCP_IMS_HIT/304 282 GET http://httpforever.com/19:17
lenovonah, is https then the problem here?19:17
sdezielIMS means a 304 revalidation was done19:18
lenovoneverssl can also hit19:18
sdeziellenovo: HTTPS isn't a problem in and of itself but it makes it hard to cache19:18
sdeziellenovo: there are ways around it but it's hard to do it in a secure way19:19
lenovowell, at least it's proved that it is working19:22
lenovosdeziel, sarnold yaleb ravage thank you all19:23
sdeziellenovo: there is one place where a HTTP caching proxy is still quite useful IMHO: apt proxy :)19:23
sdezielnp19:23
yalebyeee thank me even though I did basically nothing19:23
yaleb>:D19:23
lenovoforgot to mention, this server is also running pi-hole19:23
yaleblenovo pi-hole19:24
lenovonot sure how they affect each other19:24
yalebcoming 202319:24
sdeziellenovo: pi-hole does ad filtering/blocking at the DNS level so if your squid is configured to use it to resolve domains, it should block ads and that even on HTTPS sites19:26
lenovowell the ideal is for that to work, and for what is not blocked by dns to be cached when possible19:30
yalebfwiw19:37
yalebif you hard block all ads at the network level19:37
yaleba lot of stuff around the web is designed to basically just <hang forever>19:38
yalebads don't load? nothing for you!19:38
yalebor more commonly19:38
yalebads don't load?19:38
yaleb...19:38
yaleb..19:38
yaleb...19:38
yaleb..19:38
yalebad infinitum19:38
sdeziellenovo: with the prevalence of HTTPS and embedded ads, I think ad blocking is best done on the client side with ublock origin for example... but that's getting off topic19:55
lenovomight be, but I have two soon to be teenagers that are about to challenge me19:56
lenovoso the 4 pcs at home need19:56
lenovoCONTROL19:56
patdk-lap4?19:57
sdeziellenovo: ah, then squid might be your friend because even for HTTPS sites, you can decide whether or not the proxy will authorize the connection19:57
lenovoyeah only 4 of some, two more on weekends for games19:58
patdk-lapya, ublock on all browsers + dns blocker + block all dns-over-https and redirect all port 53 to local dns server19:58
sdeziellenovo: even if the bulk of the HTTPS connection is encrypted, the connection initiation has some clear text info revealing the host where the HTTPS connection is directed to. You can tell squid to use that information to allow/deny access19:58
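The host-based control sdeziel describes can be sketched in squid.conf like this (the domain is a placeholder, and it assumes the stock `acl CONNECT method CONNECT` definition shipped in the default config):

```
# Deny HTTPS CONNECTs by destination host name: squid sees the host in the
# CONNECT request line even though the TLS payload stays encrypted.
acl blockedsites dstdomain .example.com
http_access deny CONNECT blockedsites
```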
lenovoit might be, other problem is to find specific porn blacklists19:58
patdk-lapeach of my kids have 3 each it seems :(19:58
patdk-lapthough the 5year old only has two19:58
patdk-lapthat is simple19:58
lenovoonly two :) hehe19:59
patdk-laphttps://apaste.info/WeKu19:59
sdeziellenovo: some DNS providers do have filtered feed (cloudflare for family or something similarly named). You can tell your squid to use those19:59
sdeziellenovo: this way, if you make the proxy a mandatory hop to exit your network, you will have your control point20:00
patdk-lapyep, use cloudfare ones at work20:00
arraybolt3I've heard of the Cloudflare things, that should be easy to set up.20:00
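The filtered-resolver idea can be sketched with squid's `dns_nameservers` directive; the addresses below are the commonly published "Cloudflare for Families" resolvers, so verify them against Cloudflare's current documentation before relying on them:

```
# Resolve all proxied requests through a family-filtered DNS feed:
dns_nameservers 1.1.1.3 1.0.0.3
```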
sdeziellenovo: and then your kids will use their LTE connection to bypass it all... human nature :)20:00
lenovo:)20:00
arraybolt3Who says kids need cellphones? :D20:00
patdk-lapI have no cell coverage here :)20:00
lenovoyes, guess there is a limit for everything20:00
sdezielyep20:01
patdk-lapmy son learned how to vpn really quick after the dns filter was installed20:01
arraybolt3You probably still want a network-wide adblock even if you do use Cloudflare's blocker, since Google ads can be shockingly bad.20:01
sdezielI think some safeguards are appropriate but for the rest, education is the only option... my 2cents20:01
sdezielarraybolt3: yeah, I also do DNS level ad removal for the network otherwise the Internet looks ugly20:02
lenovonah all saying good bye here20:06
lenovothank you again20:06
arraybolt3o/20:06
arraybolt3sdeziel: Personally I just use client-side blocking on everything, but then again I don't have anyone on the inside actively trying to defeat it so the protections I need are far more minimal.20:06
sdezielarraybolt3: yeah, but I have a bunch of iphones here :/20:07
arraybolt3Oh blah. iPhones are junk.20:08
mybalzitch100%20:09
kanashiro[m]ahasenack: I ran check-mir script in libqb and apparently there is no extra dep requiring promotion in order to promote libqb-tools. Do you think I need to file a bug and subscribe the MIR team to demote libqb-dev and promote libqb-tools? Or is this straightforward enough to get in touch with an AA directly?20:31
ahasenackdid you find the original MIR bug? If there is one?20:41
kanashiro[m]yes, let me grab the link20:43
kanashiro[m]but it is an old one, not much info: https://bugs.launchpad.net/ubuntu/+source/libqb/+bug/120273720:44
-ubottu:#ubuntu-server- Launchpad bug 1202737 in libqb (Ubuntu) "[MIR] libqb" [High, Fix Released]20:44
ahasenackkanashiro[m]: did you see the rationale? Point 2 :)20:48
ahasenackthe demotion of -dev should be quasi-automatic, if nothing else pulls it in main20:50
ahasenack(or maybe fully automatic)20:50
kanashiro[m]ahasenack: I think we can upload it with libqb-tools as Recommends and once the component-mismatch appears we ping an AA to replace libqb-dev20:52
ahasenackok20:52
ahasenacklet me pull/refresh20:53
yalebsdeziel: I mean21:03
yalebeven with a network gateway filtering things [probably using ublock or similar]21:03
yalebit's still a good idea to run it on each machine as well21:03
yaleberrr sorry wrong chan I think21:03
sdezielyaleb: I agree on the need to do both network and client level filtering21:04
mohaIs it secure to install `net-tools` package on a production server as it has been deprecated for years?21:30
sarnold"secure", sure; it just can't do the things iproute2 can21:37
patdk-lapheh?22:01
patdk-lapiproute2 never supported arp, and it deprecated the little support it did have22:02
patdk-lap"For those who insist on such a thing, there is support for creating and deleting proxy ARP entries with ip neighbor, although this has been deprecated." and the iproute2 tells you to use arp instead :(22:06
sarnoldpatdk-lap: 'ip neigh'22:10
sarnoldoh hah22:10
patdk-lapthe support, I forget exactly, didn't work for some reason22:42

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!