[03:16] Hey guys, I have an old system that started on 16.04 and is mostly on 18.04, except some i386 stuff that's still sourced from trusty AFAIK and some newer stuff from focal in places. It's ended up like this in part because of the language on the Lubuntu focal release page: "Note, due to the extensive changes required for the shift in desktop environments, the Lubuntu team does not support upgrading from 18.04 or below to any greater release. Doing so will result in a broken system. If you are on 18.04 or below and would like to upgrade, please do a fresh install." I have been using LXDE since around 06-08 or so and this machine is 110% NOT getting reinstalled; I rely on it for numerous things. I'm no noob to Deb/Ubu either, having used Debian for over 20 years and Ubuntu as a daily driver for 7 or 8 after sid got annoying. Does anyone know off the top of their head if there is a guide or some meta packages to smooth the transition from Lubuntu 18.04 to Ubuntu+LXDE 20.04? I do not want LXQt even if you love it, and even though LXDE is neglected now. I will take a stab at ripping out some packages and installing others if I can figure out a set of operations that looks like it will work, but if anyone has gone through this, basically migrating away from Lubuntu, advice and help other than "reinstall" (how very MicroSoft of you) is more than welcome :-) Thanks to any/all contributors present who have made my life better since I moved away from wmaker in 06-08 or so :-D
[03:47] fredcooke: a few things are confusing me about your setup and what you want to achieve
[03:48] so you've been using debian sid for over 20 years?
[03:48] but you want to migrate from lubuntu 18.04 to ubuntu 20.04?
[03:48] and you've decided that you're 110% not going to reinstall (reinstall what?)
[03:49] it's not quite adding up
[03:50] if I'm understanding correctly, your main motivation here is to "keep" your lubuntu 16.04 install but also not keep it, so you can say you have never reinstalled linux since 2016 because it's stable, or something? is that assessment correct?
[04:09] Yeah, pretty much. I did successfully upgrade from 16.04 to 18.04 years ago, but some i386 stuff was left behind: not available in 18, and independent, so not a problem to have it split. I would like to be more up to date, as 18 is not supported by most vendors these days, and when you need X you need X (and by X I mean Y, not Twitter). I haven't used sid for many moons. I do still have Deb stable on my ancient never-reinstalled VPS, though it too has become a bit of a mess given the time elapsed since I got it around 2008 or so, but that's a topic for another day. I want to transition from Lubuntu 18.04 to Ubuntu+LXDE 20.04, and I don't want it to wind up broken. When 20 came out I upgraded another machine that was on 18 to 20 and it wasn't a good time. It was a test/experiment.
[04:09] I expect to need some custom or already-existing meta packages, pinned, to achieve this.
[04:10] I need to get rid of all of the lubuntu glue meta packages that are binding the UX stack together now, and either mark each sub-item as manually installed or put in a new meta package to control those (roughly the shape of the sketch below).
[04:25] I think your biggest issue here is priorities, along with running the wrong distro to begin with given your objectives
[04:26] Debian Stable (not sid) or a CentOS clone would have been right for you for install once and upgrade forever
[04:26] lxde/lxqt are just desktop environments though; so you could try installing something completely different too but still keep your base distro
[04:26] but I think it's a bit futile trying to upgrade 18.04 now when we're in 2024 anyway
[04:27] to get it up to date, you'd have to keep repeating the upgrade steps. this could have been so much simpler with Debian Stable
[04:28] there's no way anyone should spend that much time messing around the edges just to "prove" that an install has "survived" for x years. 2016 isn't even that old of a system
[04:28] fredcooke, there was documentation written on migrating LXDE to LXQt for the 18.04->18.10 cycle, but I'm not sure it can be found any longer, as the infrastructure it existed on has been retired. That upgrade was LONG AGO and few will remember it; our oldest supported release now is 22.04
[04:28] regressions in ubuntu and variants are almost kind of expected
[04:29] that said, I don't think the ubuntu situation is quite as bad as with fedora: https://www.theregister.com/2024/09/27/fedora_41_beta/ "(Oddly, version-to-version upgrades are not such a routine thing in Red Hat land as they are for most other distros. We've heard from several Fedora users that they simply wipe and reinstall every year or so. "Normal" is just what you're used to, after all.)"
[04:29] fedora basically forces you to upgrade every 6 months, and if you fall behind by 14 months you're out of support.
[04:30] but the idea of fedora is very different; it's for developers and is bleeding edge and things are expected to break and be fixed again
[04:30] kind of like arch (see also the recent KDE Plasma update)
[04:31] I don't think "never reinstalling" is the flex you paint it out to be in your mind
[04:32] sometimes, you just have to do the thing that leaves the best result
[04:34] while it's easy to "beat" Windows users on that weird meaningless metric, you'll never beat the old Debian Stable gurus who have run the "same" install for 20+ years
[04:35] so if you just throw in the towel on that idea, you'll save a lot of time by not messing around trying to get 18.04 to 20.04 to work; and you can just run 24.04, or switch to Debian Stable
[04:35] just my own perspective and two cents on how much time I'd be willing to spend on this. You're welcome to disagree, I don't care
[05:02] I'm not going away from the best package management to fedora/centos/ubi/anything like it; it'll be Ubu, Mint, Deb or a related creature. Upgrades of plain Ubuntu are seamless, just like Deb stable's; this was a Lubuntu problem/decision. There should really have been a set of new lubuntu-qt or lubuntu-ng prefixed packages allowing a "stay the same" migration path. Instead an assumption was made, "they'll move". Well, no, sorry, I won't. And subsequent decisions made staying the same awkward, and that cop-out quote was put on the release page.
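One possible shape of the package surgery described at 04:10, sketched under the assumption that the glue is lubuntu-desktop plus lubuntu-default-settings and that the distro-neutral lxde meta package carries the desktop afterwards. Untested against a real Lubuntu 18.04 box, so verify every package name, and strip any <virtual> entries from the generated list before feeding it to apt-mark:

```bash
# List what the lubuntu glue meta package currently holds in place.
apt-cache depends lubuntu-desktop | awk '/Depends:/ {print $2}' | sort -u > parts.txt

# Mark all of it as manually installed, so removing the glue
# doesn't let apt autoremove the desktop out from under you.
sudo apt-mark manual $(cat parts.txt)

# Drop the Lubuntu-specific glue; the LXDE pieces themselves stay.
sudo apt purge lubuntu-desktop lubuntu-default-settings

# Put generic stacks in their place before attempting the jump.
sudo apt install ubuntu-minimal lxde

# Only then run the release upgrade itself.
sudo do-release-upgrade
```

apt's -s (simulate) flag in front of each destructive step shows what would happen without committing to it.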
[05:03] I understand it's old news and few will remember, and if no one can help, no worries. The to-LXQt docs mentioned are no use to me; that's not what I'm doing. I'm staying with LXDE one way or another.
[05:04] You've made huge assumptions about my reasons for not reinstalling. A heap of stuff hangs off this machine, stuff I absolutely rely on. This is about continuity of service, not BS metrics of "OMG I didn't reinstall for N years".
[05:04] If you carefully go over everything I explained later down the track, you'll understand what I mean and understand why no one is going to be able to help you
[05:05] you just contradicted your answer earlier after I asked directly: "04:09:25 Yeah, pretty much," etc.
[05:05] This machine hosts multiple ZFS zpools, and other software on top of that that other devices rely on, and it's also my desktop machine. 64GB RAM, half for ZFS, half for me
[05:05] sorry to hear that
[05:05] short LTS support spans can be rough
[05:06] I mean, everyone loves a good uptime metric, but if you want to know mine, just call my power company and ask about outages, ha ha
[05:06] hence the reason others opt for the longer spans that RHEL and Debian Stable offer
[05:06] 51 days up on this now
[05:06] 831 days up on the VPS
[05:06] if a system is that important to you, going with something that's based on debian unstable or testing sounds like poor planning to me
[05:07] I can't remember which one ubuntu/lubuntu is based on but close enough
[05:07] again, the spans aren't the issue; the about-face on LXDE and the total lack of thought/effort to support people who don't want LXQt is. Not complaining about it so much as stating it as fact; I appreciate all the work done and still being done, not ungrateful, but it was not a wise choice IMO. It kicked up a stink at the time and I'm living proof that the impacts of those choices live on
[05:07] and the very least you can do if you go with that is to keep up with the support window
[05:08] yeah, the issue is your approach to it all
[05:08] it can't really be solved
[05:08] I tried - I could not - the spare machine proved to be a train wreck
[05:08] I suppose maybe one way it could be solved is with paid support
[05:08] it absolutely can be solved - I've done much more tricky stuff with sid many times ;-)
[05:08] I'm sure there's always a price someone is willing to accept in exchange for the effort
[05:08] I was only hoping to shortcut the effort somehow, by someone else having been there, done that
[05:08] I think your hopes are misplaced
[05:09] good luck though
[05:09] let me know if you figure it all out
[05:09] they may well be, but I don't think misplaced is the right word - just out of luck might be
[05:09] I will, no doubt at all :-D
[05:09] thanks for humoring me
[05:09] no problem; it's always fascinating to see what others have set up
[05:11] there are side problems too, unrelated to lubuntu, all about the never-stable Linux ABI space. I can still run an open source app I wrote (for Windows, on Linux) on Windows today, but I cannot run it on Linux, as everything just moved on and abandoned the runtime requirements of old stuff like that. Linux is ruthless to the past, and that has always been a big thing holding it back for some applications. Windows shines here, hate to say it, hate using it; WSL2 makes it tolerable, but still hate it
[05:12] I'm also vaguely worried about the main SSD. I wouldn't lose a lot, but it wouldn't be fun starting again
[05:12] should probably put a good new one in with it, dd the old one across, and swap them around
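A minimal sketch of that clone-and-swap, assuming the old drive appears as /dev/nvme0n1 and the new one as /dev/nvme1n1 (both names are placeholders; dd in the wrong direction destroys the good drive, so confirm with lsblk first, and ideally run it from a live USB so the source filesystem is quiescent):

```bash
# Identify the drives first; model and size make mix-ups obvious.
lsblk -o NAME,MODEL,SIZE,MOUNTPOINT

# Whole-disk clone, old -> new. The large block size keeps an
# NVMe-to-NVMe copy fast; conv=fsync flushes before dd exits.
sudo dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync
```

If the new drive is larger, the extra space just shows up as unpartitioned and the last partition can be grown afterwards.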
[05:15] sooner or later, your SSD will fail. They all do
[05:15] you can choose to buy a replacement/backup today, or you can wait until it's too late
[05:15] https://techreport.com/review/the-ssd-endurance-experiment-theyre-all-dead/
[05:15] yeah, it's an older samsung from before they switched to the less reliable tech, and it doesn't have a high write load, so it should be okay for a while
[05:16] I did check at one point, just trying to recall the commands
[05:16] "should be okay for a while"? what is "a while" and how much would you bet?
[05:16] and why are you worried about your main SSD if you're not worried
[05:17] these self-contradictions don't make a lot of sense
[05:18] ha ha, fair critique. let's say a year, maybe 5, but it could die tomorrow. I have little choice right now other than to sit tight and ensure nothing on it is irreplaceable; I was made redundant a short time ago and I'm on the job hunt, so dollars are temporarily tight
[05:19] outdated distro version, unknown SSD health, no real backup; really flying by the seat of your pants there
[05:19] I guess everyone has their own idea of fun & excitement
[05:22] Samsung SSD 950 PRO 512GB < this is the one, veteran of two physical machines now. The OS was installed in the old Skull Canyon NUC; it's now in the NUC 9 Extreme kit (Ghost Canyon), loaded with drives and tonnes of HDDs hanging off USB cables, every single one a different model, all data except root and home triple-mirrored with ZFS
[05:22] ha ha, yeah, well, that's life sometimes. let me get the numbers for health, it'd be nice to know
[05:23] power_cycles : 132
[05:23] power_on_hours : 44,975
[05:23] unsafe_shutdowns : 112
[05:24] 5.13 years on :-D
[05:24] fredcooke: what's the difference between the old Samsung SSDs and the new less reliable ones?
[05:25] available_spare : 100%
[05:25] available_spare_threshold : 10%
[05:25] percentage_used : 5%
[05:25] data_units_read : 4,282,940
[05:25] data_units_written : 122,184,688
[05:25] the type of memory inside them
[05:32] the 950 and 970 PRO were MLC V-NAND; the 980 and 990 are TLC, as is the 870 EVO, and the Q-type 870 QVO is QLC, the least durable. I have the 8TB QVO and since it's low write frequency it should be fine, but I'm glad the 950 is MLC; it's going the distance and still doing fine
[05:32] when they die it's instant though - sooooo, no promises - but trust in samsung ssds I do.
[05:39] 400TB of write endurance - looks like I'm still under 1TB of written data? or my math is totally wrong - quite possibly
[05:41] yeah, the 5% used number is what samsung thinks I'm up to - bad solder joints and other issues are likely before I hit anything near 50% :-D
[05:41] so it should be fine for a bit longer while I look for work and try to push my real life forward on our land
[05:43] I'll lurk and report back if/when I tackle it - in the meantime, mostly 18.04 for me :-D
[07:06] it won't be 1TB written if the drive is 512GB
[07:07] 100% spare sounds suspiciously new
[07:07] ok, 5% used might be right depending on lack of use
[07:08] probably 58TB written
[07:18] TLP will show how many terabytes have been written without you having to do the maths
[07:30] 5% of 400 is 20TB, which sounds about right anecdotally - TLP? Usage: tlp start|true|bat|false|ac|usb|bayoff|chargeonce|discharge|setcharge|fullcharge|recalibrate|diskid < how?
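Those fields match nvme-cli's smart-log output, and the data-unit definition reconciles the "under 1TB" guess with the "probably 58TB" estimate: per the NVMe spec, one data unit is 1000 512-byte blocks, i.e. 512,000 bytes, not a single sector. A quick check (the /dev/nvme0 device name is an assumption):

```bash
# Health/usage log for the first NVMe controller (nvme-cli package).
sudo nvme smart-log /dev/nvme0

# One data unit = 512,000 bytes, so data_units_written = 122,184,688 gives:
#   122,184,688 * 512,000 = ~62.6 TB decimal = ~56.9 TiB (the "58TB" above)
echo "122184688 * 512000 / 10^12" | bc    # -> 62
```

percentage_used is the controller's own wear estimate rather than host writes divided by the rated TBW, which is why the 5% figure and the ~62TB of host writes don't have to line up exactly.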
[07:30] just type tlp-stat as root
[07:31] then look for the output with the header +++Disks
[07:33] 241 Total_LBAs_Written = 1.654 [TB] < only for the 870 QVO 8TB that's on SATA, not for the ONE nvme it picked up (out of 4)
[07:33] out of 3* sorry
[07:33] huge wall of output too, not very focussed
[07:33] how many data units for that one?
[07:34] that's the unreliable type, but it's write-once-only so no big deal; the number is about what's on it ;-)
[07:34] the one NVMe listed did not have LBAs
[07:34] the wear leveling count on my 512G 850 EVO is 91%, so 9% "used", but 9% of 512G is about 46G, not 24TB written
[07:34] and --disk got just that part as output, but no improvement in accuracy / completeness
[07:35] total LBAs written: 52777518226; sector size 512 bytes
[07:35] on sata?
[07:35] yep
[07:35] nvme-cli is what you need to use for nvmes typically
[07:36] I'd say yours is at 58TB, not 20
[07:36] tlp-stat and smartctl work fine on my thinkpad with micron nvme
[07:36] until recently... when smart data reset to 0 hours
[07:36] a bit suspicious
[07:37] wait; how come your data units read is so small
[07:37] what is that drive used for?
[07:40] boot up, installs, home dir, light use; all the heavy data is going into ZFS 24/7, and the frequent-write part is going into a mirror, so I'm not nervous about it at all
[07:40] I guess browsers are writing caches to it too, and occasionally I do something productive like build some code ha ha
[07:43] any swap?
[07:43] or hibernation
[07:44] yes to swap, no to hibernation. swap was doing a bit of work in the early days with 16GB RAM but is probably all but unused now with 64GB and light usage
[07:46] hmm, maybe 20TB might be right then
[07:46] my 850 EVO is 7 years old
[07:47] so the less used drive is reporting 1.65TB?
[07:48] TBW on 09-11-2024 20:48:20 --> 62.5683 TB, which is 62568.3 GB -- Health (% used, higher is worse): 5%
[07:48] Average writes/day: 34128.23GB
[07:49] you were close if the math in the script is right :-)
[07:49] still, 62 is far lower than 400; at this rate it'll last about... 27 and a half more years :-D
[07:50] script from: https://www.reddit.com/r/unRAID/comments/qndsv6/log_nvme_tbwhealth_with_a_user_script/
[07:50] I made it write to console only by commenting out much of the last part; needs sudo obviously
[07:51] the average writes/day number is garbage because there's no baseline; maybe I'll put it back how it was and crontab it and see what it looks like over time (roughly the idea sketched below)
[12:48] lxqt 2.1 on plucky has thus far been good... no issues.
[12:49] (featherpad appeared to disappear, not sure why, or if I caused it... so just re-installed)
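Not the linked unRAID script itself, just a guess at the same idea for that crontab plan: append one dated line per run so the writes-per-day average has a real baseline. The device, the log path, and the field-label patterns are all assumptions (nvme-cli's text labels vary between versions, hence the loose matching):

```bash
#!/usr/bin/env bash
# log-tbw.sh: append one dated TBW/health line per run. Needs root.
set -euo pipefail

DEV=/dev/nvme0              # assumed; point at the drive of interest
LOG=/var/log/nvme-tbw.log   # assumed; any persistent path works

# Raw data units written, tolerating "data_units_written : 122,184,688"
# as well as "Data Units Written : 122,184,688 (62.5 TB)" style output.
units=$(nvme smart-log "$DEV" | awk -F: '
  tolower($1) ~ /data[_ ]units[_ ]written/ {
    v = $2; sub(/\(.*/, "", v); gsub(/[^0-9]/, "", v); print v; exit
  }')

used=$(nvme smart-log "$DEV" | awk -F: '
  tolower($1) ~ /percent(age)?[_ ]used/ { gsub(/[^0-9]/, "", $2); print $2; exit }')

# One NVMe data unit = 512,000 bytes; convert to decimal terabytes.
tb=$(awk -v u="$units" 'BEGIN { printf "%.4f", u * 512000 / 1e12 }')

printf '%s TBW=%s TB, health used=%s%%\n' "$(date -Is)" "$tb" "$used" >> "$LOG"
```

A root crontab entry along the lines of `0 0 * * * /usr/local/bin/log-tbw.sh` would then give day-over-day deltas worth averaging.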