[00:00] <oerheks> but this happened on wayland too?
[00:00] <ravage> yes happens on wayland too
[00:00] <ravage> not sure if maybe only with nvidia in general
[00:01] <ravage> https://bugs.launchpad.net/ubuntu/+source/gnome-shell-extension-tiling-assistant/+bug/2063970
[00:01] -ubottu:#ubuntu- Launchpad bug 2063970 in gnome-shell-extension-tiling-assistant (Ubuntu) "Enhanced tiling orange shading that indicates what a window will resize to will persist even after I'm done with the adjustment." [Undecided, Confirmed]
[00:03] <sarnold> ravage: nice find
[00:14] <kuiperanon> Hi. For some reason, my apt is giving 404 for IPs that I can ping. https://gist.github.com/kuiperanon/746561a7b533d467a19497716e734cac
[00:14] <kuiperanon> And I'm getting other errors
[00:15] <patrick_somebody> are you running ubu as a virtual instance?
[00:15] <tomreyn> !kinetic | kuiperanon
[00:15] <patrick_somebody> can you do a basic ping and reach anything outside of your loopback?
[00:16] <tomreyn> it's not a networking issue.
[00:16] <kuiperanon> I'm running ubu in digital ocean
[00:16] <tomreyn> the mirror server reports 404s because this is an unsupported release
[00:17] <kuiperanon>  I'll see if I can migrate my stuff easily
[00:17] <patrick_somebody> nice
[00:17] <patrick_somebody> let me guess, only 22.04 is supported
[00:17] <tomreyn> see the channel /topic
[00:18] <patrick_somebody> sorry. trying to be helpful
[00:18] <patrick_somebody> ill shut up
[00:18] <tomreyn> no, no, please do try to help when you can
[00:18] <Guest62> Yay! ravage, disabling the enhanced tiling really solved the issue, thanks man!
[00:18] <sarnold> ravage: botsnack
[00:19] <sarnold> kuiperanon: hopefully helpful https://help.ubuntu.com/community/EOLUpgrades
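(Editor's note: the first step of the EOLUpgrades guide linked above is pointing apt at old-releases. A minimal sketch, assuming an EOL release such as kinetic and the classic one-line sources.list format:)

```shell
# Rewrite archive/security mirrors to old-releases so apt stops 404ing
# (release name and file path are the standard ones; verify against the guide)
sudo sed -i -E 's/(archive|security)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt update
```

After that, the release-upgrade steps in the wiki page apply as written.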
[00:19] <ravage> nice. mark yourself as affected on https://bugs.launchpad.net/ubuntu/+source/gnome-shell-extension-tiling-assistant/+bug/2063970
[00:19] -ubottu:#ubuntu- Launchpad bug 2063970 in gnome-shell-extension-tiling-assistant (Ubuntu) "Enhanced tiling orange shading that indicates what a window will resize to will persist even after I'm done with the adjustment." [Undecided, Confirmed]
[00:28] <oerheks> !cookie | ravage
[00:29] <ravage> \o/ COOKIE!
[00:48] <felco> marked as affected there
[00:56] <felco> https://bugs.launchpad.net/ubuntu/+source/remmina/+bug/2062177
[00:56] -ubottu:#ubuntu- Launchpad bug 2062177 in remmina (Ubuntu) "Remmina crashes after RDP connection" [High, Fix Committed]
[00:56] <felco> guys any eta to get this? :[
[00:58] <sarnold> it looks like someone's verified the fix
[00:58] <sarnold> so it'll probably land in a week
[00:59] <tomreyn> it's also in -proposed and the projects' ppa according to comment 9, if you need it sooner
[01:19] <felco> niiicee
[02:19] <applepear> hello
[02:29] <smoltalk> Has anybody tried running subiquity 24.04.x for an ubuntu desktop 22.04 install? I'm writing a provisioning script for server 22.04 to help me set up LVM over LUKS on multiple disks. I got the partitions and volumes set up the way I like them via a bash script, but neither subiquity 22.04 nor 24.04 on live server images gives me the option to select
[02:29] <smoltalk> the (v)fat32 partition I set up as the system EFI partition (like I was able to do on desktop 22.04).
[02:29] <smoltalk> I'm thinking a good option for this that could be shared out later would be to create a script that generates autoinstall.yaml and then restarts the subiquity snap to pick up the generated config. This also sidesteps the issue of having to have a plaintext key in autoinstall.yaml or include a keyfile on the CIDATA drive for LUKS partitions. I'm
[02:29] <smoltalk> hoping if I need to provision another desktop 22.04 machine, it would be usable there as well.
[02:32] <smoltalk> I was also thinking of trying to build this out as an addition to subiquity (ex: a guided partitioning option as a middle ground between manual partitioning and just picking a single drive for the OS installation), but I think my approach is too opinionated for a PR like that to make it upstream. Any thoughts/suggestions? Or experience using
[02:32] <smoltalk> subiquity on desktop 22.04?
[02:37] <ravage> If you already got this far with a bash script add debootstrap and grub 
[02:37] <ravage> And you have a running system
[02:41] <smoltalk> ravage I haven't heard of debootstrap before, and I see some folks have had success with setting up ubuntu on it. This sounds like a much less involved option than trying to rewrite half of my script to spit out a correct autoinstall.yaml. Thank you!
[02:42] <sarnold> debootstrap has some problems with usrmerge I don't understand :/
[02:42] <sarnold> hunt around a little bit to see if there's a good solution there
[02:42] <sarnold> mmdebstrap is apparently much faster than debootstrap
[02:43] <ravage> had no problem with debootstrap for my 24.04 setup
[02:43] <ravage> but you should try to find the latest version
[02:43] <ravage> i think i actually used the noble one
[02:44] <ravage> did the setup from a grml live iso
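(Editor's note: the debootstrap-from-a-live-ISO flow ravage describes looks roughly like this. A sketch only, not a full recipe: the suite, mirror, and /mnt target are examples, partitions are assumed to be prepared already, and the in-chroot steps are elided:)

```shell
# Bootstrap a minimal noble system into the prepared target root
sudo debootstrap noble /mnt http://archive.ubuntu.com/ubuntu
# Bind the virtual filesystems so the chroot behaves like a booted system
for fs in dev proc sys; do sudo mount --bind "/$fs" "/mnt/$fs"; done
sudo chroot /mnt /bin/bash
# ...inside the chroot: install a kernel and grub, write fstab/crypttab,
# create users, then exit, unmount, and reboot
```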
[02:44] <smoltalk> ravage just out of curiosity, were you trying to use debootstrap for a server or desktop install on 24.04?
[02:44] <ravage> desktop
[02:44] <smoltalk> sarnold I'll also check out mmdebstrap, thanks!
[02:45] <ravage> https://launchpad.net/ubuntu/noble/amd64/debootstrap/1.0.134ubuntu1 this one should do the job. i should check mmdebstrap
[02:47] <smoltalk> ravage according to what I've read so far, it utilizes APT as part of its bootstrap process, so you can use multiple package sources. It also claims to be considerably faster.
[02:47] <ravage> and the reason for debootstrap was encrypted raid1 that was not possible the way i liked it with the installer
[02:47] <ravage> yes thats what i read too
[02:47] <ravage> but speed is not that relevant
[02:47] <ravage> its not like you install 1000 packages and i only need to do it once
[02:48] <ravage> but i also use it to install 24.04 on our dedicated servers
[02:48] <ravage> works fine
[02:48] <ravage> (and 22.04 too)
[02:54] <smoltalk> Either seems like a good option for my use case. Glad I logged on when y'all were around! This has been really helpful, thanks!
[03:08] <ravage> good luck. i could share my basic bash script if you need it
[03:09] <ravage> but that does not use any encryption. so here the only tricky part is making sure you install cryptsetup and get the crypttab right
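(Editor's note: for the crypttab piece ravage mentions, a minimal entry looks like the following. The target name and options are examples, and the UUID is a placeholder to fill in from `blkid`:)

```
# /etc/crypttab format: <target> <source device> <key file> <options>
cryptroot UUID=<luks-partition-uuid> none luks,discard
```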
[03:25] <cedar> Hello team
[03:25] <smoltalk> Thanks ravage, I have that part all set, actually so I just need something that dove-tails into the actual installation portion. I'm also looking into cloud-init now in addition to debootstrap and mmdebstrap. I'm sort of confused as to why canonical forked their own project for subiquity, now. I've heard the subiquity autoinstall schema is a
[03:25] <smoltalk> superset of cloud-init, but cloud-init seems like it can already handle a pretty wide variety of stuff. I can even bake in the server-specific setup that way
[03:26] <lotuspsychje> welcome cedar
[03:26] <cedar> I want to upgrade my 22.04 LTS installation to 24.04 LTS but from what I've researched, it appears I have to either wait until August or upgrade to 23.10 first
[03:27] <cedar> What is the reason for waiting until August?  24.04 LTS has already replaced 22.04 on the main webpage
[03:27] <ravage> right now i would not recommend upgrading at all
[03:27] <ravage> there are a few open bugs
[03:27] <smoltalk> I can share my script as well for LVM over LUKS with multiple disk support, but so far it's only managed to work on ubuntu desktop 22.04. The partitioning table isn't very customizable either
[03:28] <ravage> i am the wrong person to discuss the new installer. just not a fan of it in general.
[03:29] <ravage> the UI they put on top of it looks nice but does not make the backend better
[03:29] <smoltalk> Yeah, which is why I'm thinking a cloud-init-based setup could be pretty cool. With a few tweaks, it could hopefully be applied to more than just ubuntu/debian flavors
[03:31] <ravage> You can do a full installation with cloud init 
[03:31] <ravage> But ubuntu added a lot of custom stuff
[03:32] <ravage> And the problem is that's not trivial to figure out where that line is 
[03:32] <smoltalk> ravage I agree. I've tried ubuntu server 22.04 (with subiquity 22.04 and 24.04), ubuntu desktop 22.04 (pre-subiquity), and ubuntu desktop 24.04. The install wizard on subiquity was fairly restrictive on all versions/variants, and it wasn't very intuitive to figure out how to iterate on autoinstall.yaml when it failed. I also had to dig through
[03:32] <smoltalk> multiple levels of log files to find out what happened.
[03:33] <ravage> I'm not saying the Debian installer was great. But at least bendable to my needs with a little effort 
[03:33] <smoltalk> one of the servers I got running with a more basic setup has the auto-install conf sitting in /var/installer, and this also seems like it will be helpful: https://github.com/canonical/autoinstall-desktop
[03:33] <lotuspsychje> !discuss
[03:34] <smoltalk> The `snaps` portion definitely seems subiquity-specific, but that also seems pretty easy to script
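(Editor's note: for reference, the `snaps` portion smoltalk mentions is a short list in autoinstall.yaml. A sketch based on the published autoinstall schema; the snap names are examples, and when the file is delivered as cloud-init user-data it nests under an `autoinstall:` key as shown:)

```yaml
autoinstall:
  version: 1          # required by the schema
  snaps:
    - name: firefox
    - name: code
      classic: true   # needed for snaps using classic confinement
```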
[03:34] <lotuspsychje> join over to the discuss channel please smoltalk
[03:34] <ravage> We are discussing his Ubuntu installation 
[03:34] <lotuspsychje> not really support?
[03:34] <ravage> Should be close enough to support 
[03:35] <ravage> Also nothing else is going on here 
[03:35] <smoltalk> lotuspsychje it's a bit of a hybrid thing? Subiquity has been doing me dirty, and I'm trying to figure out a good workaround that will hopefully be reusable for future provisioning efforts
[03:37] <smoltalk> been chewing on this for a day or two now. Much longer if you consider the desktop install. After finishing installation on one ubuntu desktop machine, and getting halfway through one server installation with another to go, I've realized this is a general problem that I want to have a good solution for. That does require some more design-oriented
[03:37] <smoltalk> thinking, but it's still in service of solving a problem.
[03:39] <smoltalk> And considering how I shuffled around a considerable number of words in that message, sleep might be part of the solution...
[03:39] <ravage> maybe not a bad idea. feel free to come back after some rest if you have more questions or maybe even a solution you can share 🙂
[03:42] <smoltalk> Thanks for all of the help and feedback, ravage! I'm hoping to publish something to the wiki or github by the time I'm done. I'll be sure to ping you if you're around.
[03:53] <cedar> I am installing updates in TTY and the system prompted me to view the differences between my defaults.list and the one the package had. Anyways, I'm stuck in this text editor, I don't know what it is or the commands to escape it
[03:53] <cedar> How do I get out of this text editor?
[03:53] <lotuspsychje> try ESC or q cedar ?
[03:54] <cedar> thanks it was q
[03:56] <cedar> One of my HDDs recently stopped showing up under Ubuntu.  While booting the system gets stuck for 1.5 minutes trying to mount it.  Perhaps it is dead?  Does not show up on lsblk at all
[03:57] <lotuspsychje> cedar: can you press F1 at booting, to switch to text boot, see if you can catch bottleneck errors
[03:57] <lotuspsychje> cedar: once in your desktop, investigate your dmesg
[04:02] <cedar> yes I did that while booting and it was stuck mounting the disk until it timed out
[04:02] <cedar> Looking in dmesg not sure what to look for
[04:04] <lotuspsychje> cedar: share with the volunteers in a !paste if you like
[04:38] <life> hi
[04:39] <life> any one
[04:41] <Bashing-om> !ask | life
[04:41] <life> ok
[04:42] <life> after so many years i came here
[04:45] <life> hello
[06:06] <itu> OMG, ALT+TAB suddenly no more working!
[06:11] <jiggawatt> ruh roh
[06:17] <itu> CoolSwitch is not working any more :-/ :-/
[08:02] <itu> help. ALT+TAB suddenly no more working ..
[08:03] <itu> maybe all navigation shortcuts are not functional
[08:35] <daryld> Hola
[09:00] <NickName> hi. first time here. unsure what this channel is for
[09:01] <lotuspsychje> !support | NickName
[09:13] <filip_> hey
[09:13] <filip_> help
[09:13] <EriC^^> !ask filip_
[09:13] <EriC^^> !ask | filip_
[10:07] <cbreak_> linux is great.
[10:07] <cbreak_> is 24.04 safe to upgrade to by now?
[10:10] <lotuspsychje> cbreak_: you could try it from 23.10 if it already works, and make a good backup up front
[10:10] <toddc> cbreak_: all the basics work great just a few minor paper cuts depending on what you need most users should have no problems
[10:10] <cbreak_> nice
[10:11] <cbreak_> https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890#heading--known-issues still mentions problems with upgrading the desktop install
[10:11] <cbreak_> but I guess I can try it already
[10:13] <lotuspsychje> cbreak_: safest way for now, would be clean install (full disk wipe) without too much other options
[10:13] <cbreak_> yeah... I kind of want to avoid that :)
[10:14] <lotuspsychje> yeah well you prob want to avoid upgrade breakage too :p
[10:15] <cbreak> ideally yes...
[11:40] <Macer> so i re-installed ubuntu on the macbook pro using luks+zfs (i think?) and the io problems seem to have vanished
[11:40] <negrogod> hi
[11:41] <siyou> hello
[11:41] <Macer> although i also joined the AD with realm vs the built in gnome AD stuff since i always run into issues with sssd joining because some gpo permissive option in sssd.conf
[11:41] <CosmicDJ> Macer: why luks and not just zfscrypto?
[11:41] <negrogod> i have problem with obs, dont capture the windows, anybody can helpme
[11:41] <Macer> CosmicDJ i just used the installer. it seems like it still uses luks
[11:41] <negrogod> i use ubuntu studio
[11:41] <Macer> with zfs underneath i guess
[11:42] <Macer> at least that is what the installer made it seem like since it said (zfs,luks) .. and you can see it using the dm-crypt mapper
[11:43] <Macer> either way. i have no idea what happened with the last install.. i just used the typical luks/lvm method and the io was really messed up. i was getting 60%+ wa
[11:44] <Macer> i also just realized that you can edit user-dirs.dirs in .config to change default dirs in home :/ i was totally doing that wrong for a long time.
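(Editor's note: the file Macer mentions is `~/.config/user-dirs.dirs`, read by xdg-user-dirs. A sketch; the directory names are examples, and changes are picked up on next login or via `xdg-user-dirs-update`:)

```
# ~/.config/user-dirs.dirs — values must be "$HOME/..."-relative or absolute
XDG_DOWNLOAD_DIR="$HOME/dl"
XDG_MUSIC_DIR="$HOME/media/music"
```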
[11:45] <Macer> i was doing the whole creating symlinks thing that would ruin nautilus .. it would work but i'd lose all the default icons.
[11:50] <BluesKaj> Hi all
[11:50] <negrogod> hi
[12:29] <patf49> Hello, who speaks French here?
[12:30] <patf49> nobody? thanks anyway
[12:30] <patf49> is this a private chat?
[12:32] <felco> patf49 #ubuntu-fr
[12:34] <patf49> thanks felco, I'm on my way, I'm from Switzerland
[12:35] <patf49> I'm on L-Ubintu
[12:35] <patf49> L-Ubuntu, sorry
[12:35] <felco> I do not speak french, I just pointed you to the channel where they speak french
[12:35] <felco> Here is english only
[12:36] <patf49> ok thanks I can speak english too
[14:44] <NeilRG> Can I use the cuda toolkit for Ubuntu 22.04 on 24.04?
[14:44] <NeilRG> or else how do I get the latest cuda toolkit?
[14:46] <pragmaticenigma> NeilRG: It can take a few months for developers to catch up to the latest version of Ubuntu. This is why most will recommend waiting to install the latest LTS release when the .1 release is made in late July.
[14:46] <NeilRG> pragmaticenigma, got it, thanks
[14:46] <pragmaticenigma> NeilRG: Sometimes older packages can work on the latest version, but your milage may vary
[14:47] <NeilRG> is there a way to run linux within linux?
[14:47] <NeilRG> can I run a different version of linux in docker?
[14:47] <pragmaticenigma> that's not what docker is for
[14:48] <leftyfb> NeilRG: for what purpose?
[14:48] <pragmaticenigma> docker is for running containered applications, not operating systems
[14:48] <NeilRG> yes, but I want to run a container with a new CUDA
[14:48] <leftyfb> NeilRG: if you're thinking of running the cuda toolkit in a docker, that won't work the way you think it will
[14:48] <NeilRG> and that's not available on my OS yet
[14:49] <NeilRG> ok
[14:49] <pragmaticenigma> you can run another version of linux inside a VM, though getting direct hardware access will depend on the VM tooling used.
[15:19] <JanC> you can run a different OS in a container, but not a different linux (=kernel)
[15:25] <delsol_laptop> JanC: yes, but there can be some gotchas there.
[15:27] <JanC> the OS has to be compatible with the kernel, obviously (as it can't change)
[15:27] <delsol_laptop> like, if you're running linux containers on windows or mac.... obviously the linux container is going to effectively be a full VM....
[15:28] <delsol_laptop> and is going to have a different kernel than the host OS
[15:28] <delsol_laptop> but yeah, arch container in ubuntu...   same kernel, no problem.
[15:32] <JanC> at that point it's no longer a container...
[15:40] <delsol_laptop> I mean, its still considered a container, its just got the extra QEMU->Linux kernel layer in there.
[15:41] <delsol_laptop> I don't think its actually QEMU though... been a while
[15:43] <pragmaticenigma> A container carries only a minimal OS as part of the base image. There is no kernel attached, which means containers written for windows cannot and will not work on linux, and vice versa. There are some docker instances built for linux that run in windows, because of the WSL implementation on Windows platforms. Otherwise the best way to think of a container is this... it's just another running application, except
[15:43] <pragmaticenigma> the applications have no idea there are boundaries to what they can do, and have no idea what their state is within the host os.
[15:44] <delsol_laptop> right, but if you're running a linux container on a Mac host... its NOT using the mac kernel.
[15:44] <delsol_laptop> there is an extra layer there to run the linux kernel... that THEN is shared by all the containers
[15:44] <pragmaticenigma> That would mean that what ever is running the container, is likely running inside a virtualized machine, something like a hypervisor
[15:45] <delsol_laptop> Yes.
[15:45] <delsol_laptop> Windows I believe uses virtualbox as the hypervisor
[15:45] <delsol_laptop> not sure whats used on docker for mac
[15:45] <pragmaticenigma> haha... no
[15:45] <pragmaticenigma> virtualbox is oracle... and it's own game
[15:45] <pragmaticenigma> *its
[15:47] <pragmaticenigma> microsoft uses Hyper-V
[15:47] <delsol_laptop> Ahh. Right.
[15:47] <delsol_laptop> I don't run windows really... so
[15:48] <ArchDave> here's a way to run ubuntu on ubuntu https://ubuntu.com/server/docs/how-to-create-a-vm-with-multipass
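(Editor's note: the multipass route ArchDave links boils down to a couple of commands. A sketch; the instance name is arbitrary and `22.04` is an image alias per the linked guide:)

```shell
# Launch a 22.04 VM and open a shell inside it
multipass launch 22.04 --name jammy
multipass shell jammy
```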
[15:48] <pragmaticenigma> This is why we have the internet and search engines... to fill in the gaps of our knowledge (though search engines are clearly an endangered species)
[15:49] <ArchDave> there's also https://canonical.com/lxd
[15:51] <ArchDave> The "Arch" in ArchDave is a reference to The St. Louis Arch, and not Arch Linux, btw ya'll
[15:52] <tomreyn> there's also https://linuxcontainers.org/incus/
[15:57] <ArchDave> ya really. once you can run a vm, how hard can it be to run a container?
[16:02] <ash_worksi> do people normally set up a root password to copy ssh keys between machines?
[16:04] <ash_worksi> I am reading docs on pgbackrest and they suggest (basically) `ssh root@... cat /path/to/.ssh/id_rsa.pub >> authorized_keys`
[16:05] <ash_worksi> note that my account is neither the key owner nor recipient
[16:05] <pragmaticenigma> ash_worksi: a root account is not required to move keys between machines
[16:06] <pragmaticenigma> whatever that article is going on about is very wrong from the viewpoint of keeping systems secure.
[16:07] <leftyfb> ash_worksi: you should never set a root password nor allow ssh as root
[16:07] <ash_worksi> pragmaticenigma: so, not that I'm arguing that point, but lets just make things more concrete: the key owner is "postgres" and recipient is "pgbackrest"
[16:08] <cbreak> ash_worksi: I would strongly advise against using passwords unless you REALLY have to
[16:08] <leftyfb> ash_worksi: neither of those users should be doing anything with ssh
[16:08] <cbreak> I use ssh keys when ever I can, and only have a PW for emergency recovery somewhere in my pw manager
[16:08] <leftyfb> or setting passwords on root or system accounts
[16:09] <cbreak> ash_worksi: I use yubikeys most of the time
[16:09] <leftyfb> ash_worksi: can you explian what your end goal is you're trying to achieve?
[16:09] <pragmaticenigma> ash_worksi: those are system process users, they should be treated the same as root. there is absolutely no reason they should be given access or permitted to use SSH
[16:10] <ash_worksi> so much talking before I could reply, but here goes anyway:
[16:10] <ash_worksi> pragmaticenigma: so, given I, as ash_m, have sudo, is the typical method for this `ash_m@db:~$ sudo cp /var/lib/postgresql/.ssh/id_rsa.pub .` then `ash_m@bak:~$ sudo ssh db cat /home/ash_m/id_rsa.pub >> /home/pgbackrest/.ssh/authorized_keys`
[16:11] <leftyfb> ash_worksi: don't do that
[16:12] <cbreak> why sudo ssh?
[16:12] <ash_worksi> cbreak: for the redirect
[16:12] <cbreak> seems silly
[16:12] <ash_worksi> I'm all ears
[16:12] <cbreak> you can just cp the file later
[16:12] <cbreak> if you need it to be done with root
[16:12] <leftyfb> ash_worksi: bind mount /var/lib/postgresql into a directory in pgbackrest's home dir
[16:13] <leftyfb> and do not set a password or try to ssh as the postgresql user
[16:13] <cbreak> usually, when I set up a new user account, I log in as root, then I su to that specific user, and then copy the key into their .ssh/authorized_keys as that user itself
[16:13] <ash_worksi> leftyfb: how do you do that?
[16:13] <cbreak> this only needs to be done once for initial setup, or when users lose their keys...
[16:14] <leftyfb> ash_worksi: https://www.baeldung.com/linux/bind-mounts
[16:14] <leftyfb> cbreak: the point is, they do not need to be ssh'ing in as a system account. No sense in going down the path of the best methods of doing that
[16:14] <ash_worksi> leftyfb: and just to be clear, this is mounting (binding?) a dir on a remote machine right?
[16:15] <leftyfb> ash_worksi: lets go over this 1 step at a time. You have a server running postgresql correct?
[16:15] <ash_worksi> (the dir is on a remote machine)
[16:15] <ash_worksi> correct
[16:15] <ash_worksi> and another one with pgbackrest installed (and a pgbackrest user)
[16:16] <ash_worksi> (with no password)
[16:16] <ash_worksi> another one => another server
[16:16] <ash_worksi> (backup server)
[16:16] <leftyfb> ash_worksi: https://pgbackrest.org/user-guide.html#repo-host/setup-ssh
[16:17] <ash_worksi> leftyfb: yes, that is how I started the convo
[16:18] <leftyfb> setup a pgbackrest account on the remote server and login as that. On that server, bind mount /var/lib/postgresql into /home/pgbackrest/postgresql or something
[16:18] <leftyfb> ash_worksi: https://www.baeldung.com/linux/bind-mounts
[16:18] <ash_worksi> leftyfb: yes but I am bindmounting pg-primary:/var/lib/postgresql right?
[16:19] <ash_worksi> like postgres run doesn't exist on the repository server
[16:19] <leftyfb> you're bind mounting on the postgresql server
[16:20] <leftyfb> so the pgbackrest user you create on it doesn't need access to /var/lib/postgresql exactly (it will have it as part of its GIDs though)
[16:21] <ash_worksi> leftyfb: so the command runs on the db server (postgres) or the backup server (pgbackrest)?
[16:21] <leftyfb> the point is to not allow root or postgresql to ssh to or from anything. Neither should be able to login to or from anything
[16:22] <leftyfb> ash_worksi: you have a server with large storage running pgbackrest that is going to ssh to the postgresql server as a newly created pgbackrest user on the postgresql and backup the db located at /home/pgbackrest/postgresql due to the bind mount to /var/lib/postgresql
[16:23] <ash_worksi> leftyfb: you don't need to sign me on, I was signed on the moment I read `root@` in the docs
[16:25] <ash_worksi> leftyfb: so, are you saying the `mount` takes places between `db:/var/lib/postgresql/.ssh` and `db:/accessible/by/local/pgbackrest/acct` ?
[16:25] <ash_worksi> (`db` is the postgres server, `bak` is the pgbackrest server)
[16:25] <leftyfb> no
[16:26] <leftyfb> on the db server: sudo mount --bind /var/lib/postgresql /home/pgbackrest/postgresql/
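(Editor's note: to make a bind mount like that survive reboots, an /etc/fstab line does the same job; paths here match the ones in the chat:)

```
# /etc/fstab
/var/lib/postgresql  /home/pgbackrest/postgresql  none  bind  0  0
```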
[16:27] <ash_worksi> I don't know how that's not what I said, but I'm following...
[16:27] <leftyfb> from the backup server, you login to the db server via ssh as pgbackrest@postgresql-server.host and it will have access to postgresql dir right there
[16:27] <ash_worksi> leftyfb: but pgbackrest has no password
[16:27] <ash_worksi> it can't log in
[16:28] <ash_worksi> (nor, atm, does a pgbackrest acct even exist on the db server)
[16:28] <leftyfb> good, use ssh keys. The private key on the backup server in ~/home/pgbackrest/.ssh/id_rsa and the public key on the db server at /home/pgbackrest/.ssh/id_rsa.pub
[16:28] <leftyfb> you create one
[16:29] <leftyfb> you create a pgbackrest user on the db server so you can login via ssh and backup the postgresql db files which are bind mounted to it's home dir
[16:29] <ash_worksi> the point of the bind mount is to set up ssh
[16:29]  * leftyfb sigh
[16:29] <leftyfb> no
[16:30] <ash_worksi> okay hold on
[16:30] <leftyfb> the point of the bind mount is to give the pgbackrest user (that you create) access to the postgresql files in pgbackrest's home dir without ssh'ing directly to /var/lib/postgresql or worse, ssh'ing AS postgrsql
[16:31] <ash_worksi> leftyfb: let me paint the scenario for you and then you can tell me when I am going down the wrong path and what to do instead
[16:33] <leftyfb> unfortunately, I have to head out now. I'm hoping others are following the idea and can chime in
[16:33] <ash_worksi> leftyfb: thanks for helping :)
[16:39] <ash_worksi> okay, just to finish my question I guess, I am really looking at this: https://pgbackrest.org/user-guide.html#repo-host/setup-ssh -- I don't like that it says "root@" in the command of the 3rd block (red flag for me), but nonetheless, pgbackrest@bak needs postgres@db's public key in its authorized_keys file. Neither have passwords or sudo; so the question is how to get the key from db to bak. I did it by
[16:39] <ash_worksi> using ash_m, which does have sudo, to copy the key somewhere I have access to it; then I can `ssh ... cat` it to the authorized_keys file on bak (or scp it, and then `sudo cat >> .../authorized_keys`)
[16:42] <ash_worksi> but I didn't like having to cp the public key; so instead of that, I am now thinking I could do the exact same thing except bind-mount db:/var/lib/postgresql db:/home/ash_m/postgresql (this is not a command, the `db:` is just so you know what server I'm talking about)
[16:44] <ash_worksi> anyway, if there's a better way than (1) db: mount /var/lib/postgresql /home/ash_m/postgresql (2 [essentially]) bak: sudo ssh db cat /home/ash_m/postgresql/.ssh/id_rsa.pub >> /home/pgbackrest/.ssh/authorized_keys -- then I want to hear it
[16:46] <ash_worksi> i think leftyfb was too far ahead. "you create a pgbackrest user on the db server so you can login via ssh" assumes the key has been copied already
[18:02] <younder> ubuntu 24.04 Why is distro-info-data on phased update?
[18:07] <tomreyn> why shouldn't it be?
[18:15] <ash_worksi> does anyone have any input on my "copying ssh keys" thing? I guess is, "it doesn't matter if you bind-mount or just cp to an accessible location"
[18:20] <leftyfb> ash_worksi: set a password for the pgbackrest user on the db server, enable PasswordAuthentication on the server and then use ssh-copy-id to copy the public key to the db server. Then disable PasswordAuthentication
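(Editor's note: leftyfb's sequence, as a sketch. Hostnames are placeholders; it assumes PasswordAuthentication has been temporarily enabled in the db server's sshd_config and that a pgbackrest user with a throwaway password exists there:)

```shell
# On the backup server, as the pgbackrest user:
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519    # generate a keypair
ssh-copy-id pgbackrest@db-server                    # uses the temporary password
ssh pgbackrest@db-server true                       # verify key auth works
# Then revert PasswordAuthentication to "no" on the db server and reload sshd
```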
[18:25] <NeilRG> why is arch so much more popular now?
[18:25] <NeilRG> almost as popular as ubuntu
[18:26] <leftyfb> !ot | NeilRG
[18:26] <NeilRG> ok
[18:26] <NeilRG> sorry
[18:28] <ash_worksi> leftyfb: and remove the password, right?
[18:29] <ash_worksi> leftyfb: is that what you'd normally do?
[18:30] <leftyfb> you don't need to. If anything, I would set it to some obscenely large randomly generated password
[18:30] <ash_worksi> leftyfb: (also, would you diverge from the pgbackrest docs? they do not have a pgbackrest user on pg-primary. They just expect the postgres key to be used to login as pgbackrest)
[18:30] <leftyfb> in some cases, I do that and never save the password so even I don't know it
[18:31] <leftyfb> is pg_primary the backup server or the db?
[18:31] <ash_worksi> (that is to say, they expect `postgres@pg-primary:~$ ssh pgbackrest@repository` to work)
[18:31] <leftyfb> you do not need the pgbackrest user on the backup server, just the db server
[18:32] <leftyfb> hold on
[18:32] <leftyfb> the docs suggest you ssh FROM the db server TO the backup server???
[18:32] <ash_worksi> the docs suggest that to be possible such that the command `pgbackrest` can issue commands to the backup server over ssh
[18:33] <ash_worksi> (namely "archive")
[18:34] <ash_worksi> but it is prudent for the postgres user to be able to do this since that command will be used in postgresql configuration (backup_command=pgbackrest ... or whatever it is)
[18:35] <leftyfb> ash_worksi: honestly, I think this pgbackrest is overkill for a simple backup. Just stick "pg_dump database_name > db_name_20240503.sql" and scp/rsync/whatever that file to some remote location. Run it as root or some new user that has access to the db which you can easily configure
[18:36] <leftyfb> you do NOT want the postgres user doing anything except postgresql. No password, no ssh, no anything
[18:36] <leftyfb> it's a system user, leave it alone
[18:36] <ash_worksi> an archive_command is native to postgres; it can be pg_dump if you wanted
[18:37] <leftyfb> good luck. This pgbackrest has us going in circles.
[18:37] <ash_worksi> no, you squared it all up for me really
[18:38] <ash_worksi> you were just earlier sort of helping "ahead"... or I was "behind"...
[18:39] <leftyfb> sorry, I didn't actually read that pgbackrest doc. They suggest "unideal" ways of doing things
[18:39]  * ash_worksi read that momentarily as uni-deal
[18:39] <leftyfb> un-ideal
[18:39] <leftyfb> bad
[18:40] <ash_worksi> is that your commentary or theirs?
[18:41] <ash_worksi> (not that I'm refuting it)
[18:41] <leftyfb> the docs are doing dumb things
[18:41] <leftyfb> in my opinion
[18:41] <leftyfb> in my 20+ years experience
[18:45] <ash_worksi> leftyfb: such as? (again, not refuting)
[18:45] <leftyfb> I don't know how many times I have to say it
[18:46] <leftyfb> they have you ssh'ing as root and directly into /var/lib/postgresql
[18:46] <leftyfb> and ssh'ing as the pastgresql user
[18:47] <leftyfb> that doc should be set ablaze with napalm
[18:48] <ash_worksi> yes, that's just for the key exchange though. What I meant was, were there any other glaring problems that struck you, or when you made the "unideal" comment, was there no new info from the docs for you?
[18:49] <leftyfb> ssh'ing in as postgresql isn't just for setting up the key
[18:49] <leftyfb> please follow my advice above
[18:49] <leftyfb> there's no point in repeating everything
[18:51] <ash_worksi> k, sorry for being tedious, but I appreciate it. thanks again.
[18:53] <ash_worksi> I am pretty sure the only time (you) are ever supposed to type ssh in that entire doc though is to exchange keys and test the connection
[18:53] <ash_worksi> I searched the whole thing for ssh; and those were the only times I saw
[18:55] <leftyfb> one of those tests is: sudo -u pgbackrest ssh postgres@pg-primary
[18:55] <leftyfb> which assumes the postgresql account is able to ssh in. BAD
[18:56] <ash_worksi> if you do so, it will prompt you with a pgbackrest command
[18:56] <ash_worksi> since it cant do anything except that
[18:56] <ash_worksi> that's why they have that convoluted echo
[18:57] <ash_worksi> which is extremely convoluted
[18:57] <leftyfb> you just want a backup of your db right?
[18:58] <ash_worksi> it should just be `key=$(ssh admin@pg-primary cat /path/to/accessible/key); printf 'no-agent-forwarding,...command="/usr/bin/pgbackrest ..." %s' "$key" >> .ssh/authorized_keys`
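A runnable sketch of what ash_worksi's one-liner is driving at: append the fetched public key with a forced command, so the key can only ever invoke pgbackrest (no shell, no forwarding). The key value and the temp file are stand-ins here; in the real setup the key would come from `ssh admin@pg-primary cat ...` and the file would be the backup user's `~/.ssh/authorized_keys`.

```shell
# Hypothetical key material standing in for: key=$(ssh admin@pg-primary cat /path/to/accessible/key)
key="ssh-ed25519 AAAAC3ExampleOnly postgres@pg-primary"
auth=$(mktemp)   # stand-in for ~/.ssh/authorized_keys on the backup host

# Restrict the key: no forwarding of any kind, and force the command to pgbackrest.
printf 'no-agent-forwarding,no-X11-forwarding,no-port-forwarding,command="/usr/bin/pgbackrest" %s\n' "$key" >> "$auth"

# The entry is now a single restricted line:
grep -c 'command="/usr/bin/pgbackrest"' "$auth"
```

With `command=` set, whatever the client asks to run, sshd executes only the forced command, which is exactly the "it can't do anything except that" behavior described above.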
[18:58] <leftyfb> forget about this pgbackrest mess. Just dump the db and copy it somewhere on a schedule
[18:59] <ash_worksi> I mean, even the `archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  # Unix` in the postgresql docs is better than that since it won't take forever to restore
[19:00] <ash_worksi> but this takes care of scheduling, retention and other things we care about
[19:00] <ash_worksi> esp since a blow-up a few years ago with just dumps caused us to be down for over 2 days
[19:00] <ash_worksi> (yes, we have a standby now)
[19:01] <leftyfb> it took 2 days to restore a dump?
[19:01] <ash_worksi> yes
[19:01] <ash_worksi> it was sad
[19:01] <leftyfb> why?
[19:01] <ash_worksi> it was just a lot of data
[19:02] <leftyfb> you know using pgbackrest isn't going to be much better right? It's still the same data
[19:02] <leftyfb> or you could get the DBA's to build a better solution that isn't so monolithic
[19:05] <leftyfb> you could also do replication for hardware redundancy. I'm not that familiar with postgres, but with mysql you can stagger the replication so if someone drop'd the entire db, you'd have X amount of time to restore from the slave
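The staggered-replication idea leftyfb describes is a real MySQL feature (delayed replication, available since 5.6). A sketch of configuring it on the replica, using MySQL 8.0.22+ statement names; the one-hour delay is an arbitrary example value:

```sql
-- On the replica (older servers: CHANGE MASTER TO MASTER_DELAY = 3600):
STOP REPLICA SQL_THREAD;
CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600;  -- apply events one hour late
START REPLICA SQL_THREAD;
```

An accidental `DROP DATABASE` on the primary then leaves up to an hour to stop the replica before it applies the drop. PostgreSQL has an equivalent knob on a standby: `recovery_min_apply_delay`.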
[19:07] <delsol_laptop> leftyfb: I'm doing a bunch of DB replication with mariadb... but never time-delayed it.
[19:08] <delsol_laptop> but nightly backups and binlogs going back a week gives you some options....
[19:08] <leftyfb> that's fine for hardware or network redundancy. Not as any sort of backup though
[19:08] <ash_worksi> leftyfb: using an archive rather than a dump file is going to be faster
[19:08] <leftyfb> ash_worksi: and doesn't always work the way you think it does
[19:09] <delsol_laptop> leftyfb: yeah, replication isn't a backup, and backups aren't replication
[19:09] <delsol_laptop> but with a bit of both, you should be able to cover your bases pretty well....
[19:09] <ash_worksi> leftyfb: yes, well, at least we have a standby for those occasions
[19:09] <leftyfb> ash_worksi: unless you turn off the db to back up the files, they won't all be in sync
[19:10] <ash_worksi> leftyfb: postgres has a workflow to produce archives in an uninterrupted manner
[19:10] <leftyfb> it takes time to copy files, while that is happening, some of those files are still updating, creating discrepancies
[19:10] <ash_worksi> its WAL archiving
[19:10] <leftyfb> ash_worksi: you'd have to lock or turn off the db
[19:10] <pragmaticenigma> No idea why companies do not replicate their DBs across multiple instances... 1 Live, 1 Readonly, 1 for Reports, and 1 as Backup. If Live ever goes down, there are three others to keep the company afloat
[19:11] <delsol_laptop> leftyfb: right, but thats why you don't just grab the files.... but instead use the tools your DB provides.
[19:11] <ash_worksi> leftyfb: you start with a base_backup, then the archive_command in the setting will push changes to your backup in WAL segments
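The base-backup-plus-WAL-shipping flow ash_worksi describes maps onto a few settings. A minimal sketch, reusing the archive directory from the PostgreSQL docs quoted above (the backup destination path is an assumption):

```
# postgresql.conf (sketch)
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

# then take the base backup once, from a shell:
#   pg_basebackup -D /var/backups/pg_base -Ft -z -P
```

After that, PostgreSQL hands each completed 16MB WAL segment to `archive_command` on its own, so the backup stays consistent without locking or stopping the database.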
[19:11] <delsol_laptop> mysqldump --single-transaction
[19:12] <delsol_laptop> and keep the binlog location at that point.
[19:12] <leftyfb> right, same thing with the bin files in mysql
[19:12] <ash_worksi> leftyfb: there are also settings for PITR to help with "oops" at a certain time crap, it just requires more storage
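The PITR "oops at a certain time" recovery ash_worksi mentions is driven by two settings plus an empty `recovery.signal` file in the data directory (PostgreSQL 12+ layout; the path and timestamp below are illustrative assumptions):

```
# postgresql.conf on the restored base backup (sketch)
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2024-05-03 11:59:00'
```

On startup PostgreSQL replays archived WAL up to the target time and stops, i.e. you can land the database one minute before the mistake.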
[19:12] <leftyfb> a dump is still cleaner and guaranteed
[19:13] <delsol_laptop> leftyfb: only guaranteed if you remember to tail it....
[19:13] <ash_worksi> delsol_laptop: tail it?
[19:13] <leftyfb> you mean lock the db and then do the dump :)
[19:13] <leftyfb> that's what I used to do
[19:13] <delsol_laptop> ash_worksi: mysqldump --single-transaction database > data.sql
[19:14] <delsol_laptop> tail data.sql
[19:14] <delsol_laptop> look for the "-- Dump completed on 2024-05-03 12:42:32"
[19:14] <ash_worksi> delsol_laptop: but why?
[19:14] <delsol_laptop> or whatever the timestamp is.
[19:14] <delsol_laptop> Because if it doesn't say dump completed, you don't have any guarantee the dump actually completed.
[19:14] <delsol_laptop> it could have bomb-dropped out of the operation 12 tables in...
[19:14] <ash_worksi> oh
[19:14] <ash_worksi> til
[19:15] <leftyfb> that's what exit codes are for
[19:15] <leftyfb> the backup script makes sure the dump succeeded
[19:15] <leftyfb> before moving onto copying off the dump
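Both checks discussed here (leftyfb's exit code and delsol_laptop's `-- Dump completed` trailer) can live in the same backup script. A sketch, with a fake dump file standing in for a real `mysqldump --single-transaction db > data.sql` run:

```shell
data=$(mktemp)
# Simulate a successful dump; a real script would run mysqldump here and
# capture its exit status in $rc.
printf -- '-- fake dump body\n-- Dump completed on 2024-05-03 12:42:32\n' > "$data"
rc=0   # stand-in for mysqldump's exit code ($?)

# Only rotate/copy the dump when BOTH signals agree it finished.
if [ "$rc" -eq 0 ] && tail -n 1 "$data" | grep -q '^-- Dump completed'; then
  echo "dump OK, safe to copy off-host"
else
  echo "dump FAILED, keep the previous backup" >&2
fi
```

The belt-and-suspenders point: the exit code catches errors mysqldump notices, while the trailer check also catches a truncated file from a full disk or a killed process.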
[19:15] <leftyfb> I've never had it die part way though
[19:16] <delsol_laptop> leftyfb: seemed to be more common with mysql 5+ years ago...... less common with mariadb.
[19:16] <delsol_laptop> still happens occasionally.
[19:16] <leftyfb> delsol_laptop: it was over 10 years ago I was doing that lol
[19:17] <delsol_laptop> I've got 100+ machines out there doing a dump every morning at 5am
[19:17] <ash_worksi> I mean, maybe it's prudent to setup dumps also... in the event that an archive doesn't work; though I would hope that would still be possible from the standby (ie, dump and restore the standby)... though things get hairy, if I have to wait for 2 days, I'm sure there'd be a lot of changes. I guess I could promote the standby and reverse the roles in that case....
[19:17] <delsol_laptop> I get notification if/when they don't backup properly or don't finish it right...
[19:17] <leftyfb> also did mysql replication for an access control system where each door had a pi on it with a slaved db from the server. In case of a power or network outage, the doors still worked (PoE was on UPS)
[19:18] <delsol_laptop> Also, file corruption can totally hose your ability to dump.
[19:18] <delsol_laptop> but an on-site replication to a separate machine..... sidesteps that since it just read the binfiles when it happened... and thus the odds of same file corrupting in same way at same time is effectively zero.
[19:20] <leftyfb> ash_worksi: in the end, you could probably still use pgbackrest, but by only loosely following its documentation and not doing stupid things like ssh'ing in as the postgres or root users and not sticking ssh keys in /var/lib/postgresql/
[19:20] <windows11niggger>  need help on how to uninstall ubunto
[19:22] <LuckyMan> windows11niggger, easy way, insert a usb disk with windows and overwrite everything
[19:22] <leftyfb> LuckyMan: lets not encourage them
[19:22] <LuckyMan> leftyfb, ok
[19:22] <LuckyMan> sorry
[19:22] <windows11niggger> send me usb over postal code
[19:23] <ash_worksi> LuckyMan: my only experience with OS+USB is ubuntu live
[19:23] <windows11niggger> etcher doesnt work
[19:23] <ash_worksi> LuckyMan: yes, plz snail mail them a usb drive
[19:23] <ash_worksi> XD
[19:24] <delsol_laptop> ......
[19:24]  * delsol_laptop points at windows11n's racist and offensive as fuck name.
[19:24] <leftyfb> delsol_laptop: lets just ignore them please
[19:25] <ash_worksi> leftyfb: btw, did my version of that `(echo ...` command make things more comprehensible overall?
[19:26] <leftyfb> ash_worksi: can you login to both servers via ssh?
[19:27] <leftyfb> ash_worksi: just sudo su and write out the public ssh key as needed then change the permissions on it. Or follow my advice above and use ssh-copy-id and then disable PasswordAuthentication. Either way
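leftyfb's "just sudo su and write out the key yourself" approach, sketched as root on the target host. Simulated here against a temp directory; in reality `$home` would be the backup user's home, and the key a real public key rather than this placeholder:

```shell
home=$(mktemp -d)   # stand-in for the backup user's home directory
mkdir -p "$home/.ssh"

# Append the public key yourself -- no need to ever ssh in as the service account.
echo "ssh-ed25519 AAAAC3ExampleOnly backup@pg-primary" >> "$home/.ssh/authorized_keys"

# sshd refuses keys with loose permissions, so fix them explicitly.
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
chown -R "$(id -un)" "$home/.ssh"   # real setup: chown -R backupuser:backupuser

stat -c '%a' "$home/.ssh/authorized_keys"   # prints 600
```

Afterwards, setting `PasswordAuthentication no` in sshd_config (and reloading sshd) closes the password path entirely, as suggested above.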
[19:28] <ash_worksi> leftyfb: I meant, did it help to see me write `key=$(ssh admin@pg-primary cat /path/to/accessible/key); printf 'no-agent-forwarding,...command="/usr/bin/pgbackrest ..." %s' "$key" >>
[19:28] <ash_worksi> gr
[19:28] <ash_worksi> leftyfb: I meant, did it help to see me write `key=$(ssh admin@pg-primary cat /path/to/accessible/key); printf 'no-agent-forwarding,...command="/usr/bin/pgbackrest ..." %s' "$key" >> .ssh/authorized_keys`
[19:29] <ash_worksi> I thought maybe it clarified what they're trying to have you do
[19:30] <leftyfb> ash_worksi: is this server publicly accessible?
[19:30] <ash_worksi> neither is; I have to connect to a vpn
[19:32] <leftyfb> ash_worksi: honestly, for some reason you seem to prefer doing things in a very convoluted way. I prefer to keep things simple and do the quick and dirty work quick and dirty and not spend an hour going over 3 different ways to copy ssh keys
[20:51] <Pjoff> Hey! Trying to mount my secondary disk in ubuntu but getting "error mounting: wrong fs type, bad option, bad superblock" etc.
[20:52] <oem> rc.irc-hispano.org
[20:54] <leftyfb> Pjoff: what command did you use to try to mount it? What filesystem should it be?
[20:57] <Pjoff> Just used the gui and tried to mount it
[20:59] <Pjoff> https://dpaste.com/48RAY2FE3
[21:01] <JanC> I assume you want to mount /dev/nvme1n1p3 ?
[21:02] <leftyfb> Pjoff: what command did you use to try to mount it?
[21:02] <leftyfb> oh right, the gui
[21:02] <JanC> GUI, so just click on it I suppose
[21:02] <leftyfb> which one were you trying to mount?
[21:03] <Pjoff> The 2TB one
[21:03] <JanC> oh
[21:03] <Pjoff> -> /dev/nvme0n1
[21:03] <leftyfb> that's 16M
[21:03] <leftyfb> you mean nvme0n1p2 ?
[21:04] <Pjoff> Oh yeah. But its a 2TB disk, so I assumed it was the same hah
[21:04] <leftyfb> it's not the same
[21:04] <Pjoff> Yeah, must be that one
[21:04] <JanC> there is a Windows filesystem & a linux filesystem on that disk
[21:04] <Pjoff> In windows my files are there, but for some reason it won't mount in ubuntu
[21:05] <leftyfb> Pjoff: I would recommend putting this drive into a windows computer and running chkdsk /f /r /x on the partition, reboot and then do it again. Then power down completely and then you can try to mount it on your linux OS
[21:05] <leftyfb> sorry, run chkdsk /f /r /x, reboot, run chkdsk /f /r /x, reboot, then power down
[21:05] <JanC> you can probably do that from the GUI too  :)
[21:05] <leftyfb> from within Windows
[21:06] <Pjoff> Isn't that the process which takes hours ?
[21:06] <leftyfb> not linux
[21:06] <leftyfb> Pjoff: it might. Depends on a lot of factors. Either way, that would be my next step
[21:06] <oerheks> maybe fastboot keeps control over that disk
[21:06] <leftyfb> that's also another possibility
[21:06] <Pjoff> Will try to boot into windows and turn off fastboot
[21:07] <Pjoff> and do the chkdsk thing
[21:07] <JanC> BTW: you boot linux from that disk (but not from that filesystem)?
[21:08] <Pjoff> Yes
[21:16] <Pjoff> Hey! Turning off fastboot did the trick guys. Thanks for the help :)
[21:16] <oerheks> have fun!
[22:02] <SwollenBernard> how is the new ubuntu lts
[22:18] <NeilRG> I like it
[22:19] <NeilRG> fixed the screen tearing, latency I was getting with VLC
[22:30] <younder> (org.shirakumo.machine-state:gc-room)
[23:22] <uzz> Is it possible to make a launchpad account without making an ubuntu one account?
[23:36] <sarnold> uzz: not in any useful way, no