[00:50] <habys> installing over the network for uefi seems to require chainloading grub. It seems to have the location of grub.cfg hard coded in it. Is there any easy way to change that? I already have an ipxe menu, I'd rather not also have to go through a grub menu only for the UEFI installations.
[00:54] <sarnold> I thought the default grub config was no menu and you've got to hold down shift or something to even see the choices
[01:02] <habys> that may be the case, but considering I have several UEFI targets, and the grub config seems to be hardcoded, I need a way to tell grub to read a different config
[01:03] <habys> off the bat, I need different targets for amd64 vs arm64, server vs desktop, and more. If I can't instruct grub from ipxe to choose a different config, then I can't automatically install things anymore (with ipxe target files)
[01:04] <habys> for some reason I don't need to chainload grub to boot BIOS.. kernel+initrd and the right kernel parameters and you are off to the races
[01:04] <sarnold> you may be better served without grub at all?
[01:05] <habys> for some reason UEFI just crashes when it tries to set the ISO as root. I could only get it to work by chainloading grub. Not sure why that is the case
[01:05] <habys> yeah I wish I could get rid of it. No idea why all the boot documentation uses it.
[01:08] <habys> I could try to build grubx64.efi myself and hard code a different config path for each binary, referring to each from ipxe
[01:08] <habys> but man it would be so much easier if I could just `chain http://blah/grubx64.efi target=desktop` or something similar
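One way to realize the "build grubx64.efi myself and hard-code a different config path" idea is grub-mkstandalone, which embeds a stub grub.cfg into each EFI binary; the stub then sources a per-target config over HTTP. A rough sketch only; the hostname and paths here are made up, not from any referenced setup:

```
# Stub config baked into the binary; it just fetches the real one.
cat > stub-desktop.cfg <<'EOF'
source (http,boot.example.com)/grub/desktop/grub.cfg
EOF

# Build one binary per target; chainload the right one from ipxe.
grub-mkstandalone -O x86_64-efi -o grubx64-desktop.efi \
    "boot/grub/grub.cfg=stub-desktop.cfg"
```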
[01:12] <habys> https://ubuntu.com/server/docs/install/netboot-amd64 was useful to get this working, but this basically assumes you don't have any other pxe targets..
[01:13] <sarnold> that might be worth clicking the 'help improve this document in the forum' link on the page
[01:14] <habys> I'll give it a shot, I don't have a clear solution though
[01:14] <habys> ha that link says "use IRC for support questions" >_>
[01:15] <sarnold> sure :)
[01:49] <habys> https://ubuntuforums.org/showthread.php?t=2483456&p=14128774#post14128774
[01:50] <sarnold> somehow that doesn't feel like a 'forums' kind of thing to me; I'd expect the server discourse or askubuntu to do a better job collecting answers
[01:50] <habys> well the server discourse says no tech support questions
[01:51] <habys> there's just not a lot of hope for me huh
[01:51] <sarnold> I certainly think you're out in unusual territory
[01:51] <sarnold> but there's loads of folks who do funny things :) hehe
[01:52] <habys> yeah... trying to network boot OSes.. I feel like I'm from a different planet
[01:53] <arraybolt3> habys: That error you showed in the forums post looks like it can't find an initramfs.
[01:53] <arraybolt3> Maybe that's what's wrong?
[01:54] <habys> I usually see that error when it can't find the iso
[01:54] <habys> but I'll double check my syntax for the initrd
[01:54] <habys> ty for looking seriously!
[01:54] <arraybolt3> Sure thing!
[01:55] <arraybolt3> habys: Whatever loads the kernel should also load an initramfs that the kernel can mount and use to help boot, at least that's how it works when booting a disk. I assume it would work the same way when booting over the network, though I've only ever booted Debian over the network so I'm not sure.
[01:56] <habys> oh yeah, it definitely can find the initrd. you can see ipxe downloading it
[01:56] <arraybolt3> Hmm... perhaps the compression used on the initrd isn't supported by the kernel?
[01:56] <arraybolt3> I've had that happen once (trying to use an old kernel with a newer version of Ubuntu).
[01:56] <habys> yeah, brand new and matching kernel + initrd + live server.iso
[01:57] <habys> I saw that same error when I was setting up server to boot in BIOS mode, and I had put the wrong path in for the ISO
[01:57] <habys> but I've got BIOS working fine, and it's going to be the same kernel+initrd+iso for bios or uefi
[01:57] <arraybolt3> Weird. All I know is I see that exact error (I think) every time the initramfs goes missing. Maybe it also applies to the ISO missing, but that's what I've seen in the past.
[02:04] <habys> I think the initramfs only serves to pull down the iso and use it as the root for the netbooted filesystem. I'll keep trying stuff. Not sure if I should give up on trying to make this work without GRUB, but that would be great..
[06:33] <alkisg> habys: see my code there for how to pass variables from ipxe to grub: https://github.com/alkisg/ltsp5-uefi/blob/main/ltsp.ipxe#L73
[06:34] <alkisg> I haven't read the rest of this chat, I only read part of what you uploaded to the forum link above
[12:45] <yaleb> hi what's the easiest way to get ssh using a cert so I don't have to type the password each time
[12:45] <yaleb> it's getting complicated with ~20 different logins lol
[12:46] <yaleb> some with "secure" passwords that are different for the same user between two different servers
[12:46] <yaleb> but that's less important as they're hardly accessed via cli
[12:46] <yaleb> no need for complicated public cert shares/intermediaries etc heh
[12:47] <yaleb> just basically to remember the login for X days
[12:48] <yaleb> I did think I could make some little services that connect automatically with passwords in plaintext since it's using LUKS or similar anyway
[12:48] <yaleb> and then just connect a terminal to those as needed
[12:48] <yaleb> but... kinda stupid just because I don't know how to do what I describe heh
[12:49] <yaleb> if anyone knows or can point me in the right direction, i'd appreciate it
[13:01] <konstruktoid> ansible vault can be an option yaleb 
[13:03] <konstruktoid>  * <del>ansible vault can be an option</del> wrong channel
[13:03] <konstruktoid> but perhaps hashicorp vault; the "remember the login for X days" part is done via pam, and an ssh connection will only be held open for the current connection
[13:10] <yaleb> as I say i'm pretty sure I can just generate a private user key and copy that over to the host?
[13:10] <yaleb> and connect with that instead of the user/pass combo
[13:10] <yaleb> I try to add as little extra software as I can haha
[13:19] <konstruktoid> ah sorry, then lineinfile your ssh key to the user authorized_keys
[13:20] <konstruktoid>  * ah sorry, then copy your ssh key strings to the user authorized_keys
[13:20] <konstruktoid> or append is perhaps the correct term
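The key-based setup being described boils down to the following sketch (`SSH_DIR` defaults to the usual `~/.ssh`; on a real setup `ssh-copy-id user@server` does the append step for you):

```shell
# 1) On the client: create the ssh dir and generate an ed25519 key pair.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
ssh-keygen -q -t ed25519 -N '' -f "$SSH_DIR/id_ed25519"

# 2) On the server: append the *public* half to the user's authorized_keys
#    (this is what ssh-copy-id automates), with the permissions sshd expects.
cat "$SSH_DIR/id_ed25519.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

After that, `ssh user@server` authenticates with the key instead of a password; pair the key with a passphrase plus ssh-agent if you want it protected at rest.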
[13:20] <konstruktoid> but do you need a separate key for each user?
[13:26] <yaleb> aha yesss that was exactly right and I have done it per user with the help of the DO guide below ^_^
[13:26] <yaleb> https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-ubuntu-20-04
[13:26] <yaleb> it was pretty much what I thought but in reverse haha
[13:26] <yaleb> safe passwordless accounts now yay
[13:27] <yaleb> thanks for the tip <3
[13:58] <ahasenack> I'm building a package in a ppa, lunar, proposed disabled, and it builds fine
[13:58] <ahasenack> then I do the same in a lxd container, lunar, also proposed disabled, and it fails because something is injecting `-Wall` in the CFLAGS :/
[13:58] <ahasenack> cc1: all warnings being treated as errors
[13:59] <ahasenack> I can't think of what it would be
[13:59] <ahasenack> I'll just start over
[14:24] <habys> alkisg: the installer for ubuntu changed since 18.04, thanks though
[16:45] <ahasenack> kanashiro[m]: that corosync-blackbox script is just a 3-line shell script
[16:46] <ahasenack> kanashiro[m]: we could also patch it to check if qb-blackbox is installed, and if not, tell the user to install libqb-tools
[16:46] <ahasenack> but that would need checking with the consumer of this script if it is prepared for such a response and exit status
[16:47] <ahasenack> funny, it uses $(date +%s) twice to name a file, instead of calling it once and storing the timestamp in a var
[16:47] <kanashiro[m]> ahasenack: and add libqb-tools as Suggests 
[16:47] <ahasenack> the way it is, it's quite possible the two dump files it creates will have different names
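The fix being described is just capturing the timestamp once; two separate `$(date +%s)` calls can straddle a second boundary and yield mismatched names. A sketch (the file names here are made up, not the actual corosync-blackbox paths):

```shell
# Call date once, store it, and derive both dump names from the same value.
ts=$(date +%s)
dump_a="/var/lib/corosync/fdata-$ts"
dump_b="/var/lib/corosync/fdata-$ts.txt"
```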
[16:47] <ahasenack> kanashiro[m]: yeah
[16:48] <kanashiro[m]> right, we can do that
[16:48] <ahasenack> maybe also check in the upstream docs where and in which cases they recommend to run corosync-blackbox
[16:49] <ahasenack> but the path of less surprises right now is still a recommends, and bring libqb-tools into main, if that's possible
[16:50] <ahasenack> let me write that up in the MP
[17:08] <ahasenack> oh, about that rsyslog build failure, I bet I know what it is
[17:08] <ahasenack> it's the presence of .git in the source
[17:09] <ahasenack> since I'm building from a git branch, there is a .git, and I bet the build system interprets that as "oh, we have ourselves a developer here, let's enable some devel flags"
[17:09] <ahasenack> # Running from git source?
[17:09] <ahasenack> in_git_src=no
[17:09] <ahasenack> AS_IF([test -d "$srcdir"/.git && ! test -f  "$srcdir"/.tarball-version], [in_git_src=yes])
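The shell equivalent of that AS_IF test, plus the obvious workaround: the tree is treated as a "git source" (enabling the stricter dev flags) only when `.git` exists and `.tarball-version` does not, so creating the marker file flips the check off. A sketch, assuming you run it from the top of the checkout:

```shell
srcdir="${srcdir:-.}"
in_git_src=no
if test -d "$srcdir/.git" && ! test -f "$srcdir/.tarball-version"; then
    in_git_src=yes
fi

# Workaround for package builds from a git branch: create the marker file
# so configure no longer thinks a developer is building.
touch "$srcdir/.tarball-version"
```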
[17:09]  * ahasenack shakes fist at it
[18:13] <lenovo> Hi all, My squid proxy is not hitting anything in the access.log, what can I be missing?
[18:14] <yaleb> idk that provides very little detail
[18:14] <yaleb> what have you tried/checked?
[18:15] <lenovo> it's supposed to be fully configured, working only in memory, but all I get is TCP_TUNNEL and TCP_MISS
[18:16] <lenovo> no TCP_HIT
[18:30] <sdeziel> lenovo: can you share your squid.conf via a paste service?
[18:32] <lenovo> it's difficult to copy via SSH, any specific detail/parameter you're thinking of?
[18:32] <yaleb> how is it difficult to copy via ssh
[18:32] <yaleb> literally just cat it into another file haha
[18:32] <yaleb> or a network share to your main pc
[18:33] <yaleb> or use the gist/pastebin cli client even
[18:33] <yaleb> so many good options!
[18:33] <ravage> cat /etc/squid/squid.conf|nc termbin.com 9999
[18:38] <lenovo> https://pastebin.com/My4h9V9c
[18:42] <ravage> lenovo: https://termbin.com/x2zl9 
[18:42] <ravage> this is your actual config
[18:43] <ravage> if i read it correctly there is a general deny for everything except some local IPs
[18:45] <lenovo> you mean http_access deny all?
[18:45] <ravage> yes
[18:46] <ravage> https://termbin.com/e6uo
[18:46] <lenovo> standard squid proxy config after the particulars are permitted
[18:47] <ravage> this one works for me
[18:47] <ravage> but i have additional firewall rules
[18:47] <lenovo> and it still seems to read other requests, just doesn't cache 'em
[18:50] <lenovo> you are caching in drive, I am trying to have it work in memory only
[18:52] <lenovo> might that be the reason?
[18:55] <sdeziel> lenovo: anything in /etc/squid/conf.d/?
[18:58] <lenovo> yes debian.conf with #http_access allow localnet
[18:58] <lenovo> shall that be uncommented?
[18:59] <sdeziel> lenovo: if it's commented out, it's not important
[19:00] <sdeziel> lenovo: the localnet ACL definition looks weird. I'm not entirely sure which CIDR you'd like to allow but possibly this: `acl localnet src 192.168.0.0/25`
[19:00] <sdeziel> or maybe the more common `/24`
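If the intent really is the whole 192.168.0.x LAN, the relevant squid.conf lines would look like this (a sketch; pick the CIDR that actually matches your subnet mask):

```
# /etc/squid/squid.conf (fragment)
acl localnet src 192.168.0.0/24   # or /25 if that is really your subnet
http_access allow localnet
http_access deny all              # keep the final deny last
```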
[19:04] <yaleb> I cannot reply to someone named "lenovo"
[19:04] <yaleb> my core ideals go against it
[19:05] <lenovo> hahahah
[19:05] <yaleb> talk to my 10th gen intel rig that can't get wifi networking on any major linux distro about it if any queries
[19:05] <yaleb> it's available to chat if you google chatgpt
[19:06] <lenovo> how old are you?
[19:07] <sdeziel> lenovo: that said, despite that weird ACL, if you are seeing TCP_TUNNEL and TCP_MISS, it means you are allowed to poke it. Maybe you are trying to fetch some resources that is not cache-able by your config. Do you have a sample URL of something you'd like to be cached?
[19:08] <lenovo> sdeziel, not really, but there is no hit at all, browsing simple sites like cnn.com, bbc.com
[19:08] <sdeziel> lenovo: I think those sites use HTTPS making them harder to cache
[19:09] <lenovo> any sure to be cached website you suggest?
[19:09] <sarnold> or they employ 'cache-busting' URLs like tacking on ?t=<unixepoch> or similar nonsense
[19:09] <sdeziel> lenovo: cause when you flow an HTTPS connection through squid (with your current config) it will behave as a TCP proxy and won't be able to see whatever gibberish is passing through that TCP connection
[19:10] <sdeziel> lenovo: with your current squid config, you'd need to hit a HTTP site so that squid can observe the "objects" flowing through it
[19:11] <sdeziel> lenovo: and then, if those objects have specific headers indicating to squid that it's OK to cache them (Cache-Control, Expires, etc.), then it would keep a copy and serve it to the next client requesting the same data
[19:11] <sdeziel> lenovo: assuming the cached copy didn't go stale between the first and second requests
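Those cache-relevant headers are easy to inspect by hand; for example (needs network access, and the URL is just an example of a plain-HTTP object):

```
curl -sI http://httpforever.com/js/init.min.js \
  | grep -iE 'cache-control|expires|last-modified|etag'
```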
[19:12] <lenovo> sdeziel, does it matter that it is working only with memory?
[19:13] <sdeziel> lenovo: no, the caching backend type shouldn't change anything
[19:13] <sdeziel> lenovo: with the modern web being like 95% HTTPS (yay!) running a caching proxy isn't trivial anymore
[19:14] <lenovo> suggest a sure to be cached site
[19:14] <lenovo> I am also checking chistes.com, no difference
[19:14] <sdeziel> http://httpforever.com/
[19:15] <sarnold> :D
[19:15] <sarnold> http://neverssl.com/
[19:15] <lenovo> :)
[19:15] <sdeziel> lenovo: and from that page, it seems that only the .js is cache-able http://httpforever.com/js/init.min.js
[19:16] <lenovo> TCP_MISS all
[19:16] <sdeziel> httpforever has links to JS libs hosted on Cloudflare using HTTPS, oh the irony
[19:16] <lenovo> no haha, we got a hit
[19:17] <lenovo> TCP_IMS_HIT/304 282 GET http://httpforever.com/
[19:17] <lenovo> nah, is https then the problem here?
[19:18] <sdeziel> IMS means a 304 revalidation was done
[19:18] <lenovo> neverssl can also hit
[19:18] <sdeziel> lenovo: HTTPS isn't a problem in and of itself but it makes it hard to cache
[19:19] <sdeziel> lenovo: there are ways around it but it's hard to do it in a secure way
[19:22] <lenovo> well, at least it's proven that it is working
[19:23] <lenovo> sdeziel, sarnold yaleb ravage thank you all
[19:23] <sdeziel> lenovo: there is one place where a HTTP caching proxy is still quite useful IMHO: apt proxy :)
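Pointing apt at a squid instance only takes one config line; the file name and proxy address below are examples:

```
# /etc/apt/apt.conf.d/01proxy (sketch)
Acquire::http::Proxy "http://proxy.example.lan:3128/";
```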
[19:23] <sdeziel> np
[19:23] <yaleb> yeee thank me even though I did basically nothing
[19:23] <yaleb> >:D
[19:23] <lenovo> forgot to mention, this server is also running pi-hole
[19:24] <yaleb> lenovo pi-hole
[19:24] <lenovo> not sure how they affect each other
[19:24] <yaleb> coming 2023
[19:26] <sdeziel> lenovo: pi-hole does ad filtering/blocking at the DNS level so if your squid is configured to use it to resolve domains, it should block ads and that even on HTTPS sites
[19:30] <lenovo> well the ideal is for that to work, and for whatever is not blocked by dns to be cached when possible
[19:37] <yaleb> fwiw
[19:37] <yaleb> if you hard block all ads at the network level
[19:38] <yaleb> a lot of stuff around the web is designed to basically just <hang forever>
[19:38] <yaleb> ads don't load? nothing for you!
[19:38] <yaleb> or more commonly
[19:38] <yaleb> ads don't load?
[19:38] <yaleb> ...
[19:38] <yaleb> ..
[19:38] <yaleb> ...
[19:38] <yaleb> ..
[19:38] <yaleb> ad infinitum
[19:55] <sdeziel> lenovo: with the prevalence of HTTPS and embedded ads, I think ad blocking is best done on the client side with ublock origin for example... but that's getting off topic
[19:56] <lenovo> might be, but I have two soon-to-be teenagers that are about to challenge me
[19:56] <lenovo> so the 4 pcs at home need
[19:56] <lenovo> CONTROL
[19:57] <patdk-lap> 4?
[19:57] <sdeziel> lenovo: ah, then squid might be your friend cause even for HTTPS sites, you can decide if the proxy will authorize or not the connection
[19:58] <lenovo> yeah only 4 of some, two more on weekends for games
[19:58] <patdk-lap> ya, ublock on all browsers + dns blocker + block all dns-over-https and redirect all port 53 to local dns server
[19:58] <sdeziel> lenovo: even if the bulk of the HTTPS connection is encrypted, the connection initiation has some clear text info revealing the host where the HTTPS connection is directed to. You can tell squid to use that information to allow/deny access
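Concretely: without any TLS interception, squid still sees the hostname in the client's CONNECT request, so an ordinary dstdomain ACL can allow or deny HTTPS sites. A sketch (the domain names are placeholders):

```
# /etc/squid/squid.conf (fragment)
acl blocked dstdomain .example-blocked.com .example-blocked.net
http_access deny blocked
```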
[19:58] <lenovo> it might be, other problem is to find specific porn blacklists
[19:58] <patdk-lap> each of my kids have 3 each it seems :(
[19:58] <patdk-lap> though the 5year old only has two
[19:58] <patdk-lap> that is simple
[19:59] <lenovo> only two :) hehe
[19:59] <patdk-lap> https://apaste.info/WeKu
[19:59] <sdeziel> lenovo: some DNS providers do have filtered feed (cloudflare for family or something similarly named). You can tell your squid to use those
[20:00] <sdeziel> lenovo: this way, if you make the proxy a mandatory hop to exit your network, you will have your control point
[20:00] <patdk-lap> yep, use cloudfare ones at work
[20:00] <arraybolt3> I've heard of the Cloudflare things, that should be easy to set up.
[20:00] <sdeziel> lenovo: and then your kids will use their LTE connection to bypass it all... human nature :)
[20:00] <lenovo> :)
[20:00] <arraybolt3> Who says kids need cellphones? :D
[20:00] <patdk-lap> I have no cell coverage here :)
[20:00] <lenovo> yes, guess there is a limit for everything
[20:01] <sdeziel> yep
[20:01] <patdk-lap> my son learned how to vpn really quick after the dns filter was installed
[20:01] <arraybolt3> You probably still want a network-wide adblock even if you do use Cloudflare's blocker, since Google ads can be shockingly bad.
[20:01] <sdeziel> I think some safeguards are appropriate but for the rest, education is the only option... my 2cents
[20:02] <sdeziel> arraybolt3: yeah, I also do DNS level ad removal for the network otherwise the Internet looks ugly
[20:06] <lenovo> nah all saying good bye here
[20:06] <lenovo> thank you again
[20:06] <arraybolt3> o/
[20:06] <arraybolt3> sdeziel: Personally I just use client-side blocking on everything, but then again I don't have anyone on the inside actively trying to defeat it so the protections I need are far more minimal.
[20:07] <sdeziel> arraybolt3: yeah, but I have a bunch of iphones here :/
[20:08] <arraybolt3> Oh blah. iPhones are junk.
[20:09] <mybalzitch> 100%
[20:31] <kanashiro[m]> ahasenack: I ran check-mir script in libqb and apparently there is no extra dep requiring promotion in order to promote libqb-tools. Do you think I need to file a bug and subscribe the MIR team to demote libqb-dev and promote libqb-tools? Or is this straightforward enough to get in touch with an AA directly?
[20:41] <ahasenack> did you find the original MIR bug? If there is one?
[20:43] <kanashiro[m]> yes, let me grab the link
[20:44] <kanashiro[m]> but it is an old one, not much info: https://bugs.launchpad.net/ubuntu/+source/libqb/+bug/1202737
[20:44] -ubottu:#ubuntu-server- Launchpad bug 1202737 in libqb (Ubuntu) "[MIR] libqb" [High, Fix Released]
[20:48] <ahasenack> kanashiro[m]: did you see the rationale? Point 2 :)
[20:50] <ahasenack> the demotion of -dev should be quasi-automatic, if nothing else pulls it in main
[20:50] <ahasenack> (or maybe fully automatic)
[20:52] <kanashiro[m]> ahasenack: I think we can upload it with libqb-tools as Recommends and once the component-mismatch appears we ping an AA to replace libqb-dev
[20:52] <ahasenack> ok
[20:53] <ahasenack> let me pull/refresh
[21:03] <yaleb> sdeziel: I mean
[21:03] <yaleb> even with a network gateway filtering things [probably using ublock or similar]
[21:03] <yaleb> it's still a good idea to run it on each machine as well
[21:03] <yaleb> errr sorry wrong chan I think
[21:04] <sdeziel> yaleb: I agree on the need to do both network and client level filtering
[21:30] <moha> Is it secure to install `net-tools` package on a production server as it has been deprecated for years?
[21:37] <sarnold> "secure", sure; it just can't do the things iproute2 can
[22:01] <patdk-lap> heh?
[22:02] <patdk-lap> iproute2 never supported arp, and it deprecated the little support it did have
[22:06] <patdk-lap> "For those who insist on such a thing, there is support for creating and deleting proxy ARP entries with ip neighbor, although this has been deprecated." and the iproute2 tells you to use arp instead :(
[22:10] <sarnold> patdk-lap: 'ip neigh'
[22:10] <sarnold> oh hah
[22:42] <patdk-lap> the support, I forget exactly, didn't work for some reason
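For reference, the usual net-tools to iproute2 command mappings being discussed (arp really is the odd one out, since `ip neigh` covers neighbor entries but its proxy-ARP handling has a deprecated feel):

```
net-tools            iproute2
arp -a           ->  ip neigh show
ifconfig -a      ->  ip addr show
route -n         ->  ip route show
netstat -tlnp    ->  ss -tlnp
```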