[00:39] hello, why do I get "Error, was given host, but we don't know about it." when doing dput ppa:amosbird/ppa
[00:50] what's pdebuild?
[00:51] pdebuild is a wrapper for Debian Developers, to allow running
[00:51] pbuilder just like "debuild", as a normal user.
[00:51] from apt-cache show pbuilder
[00:59] So, you'd run `pdebuild` in the packaging dir rather than dpkg-buildpackage -sa -S -d -tc && sudo pbuilder build ../*.dsc
[01:10] ok, what's debuild?
[01:10] is it better than pdebuild?
[01:11] debuild does a build just "on your machine", with whatever packages you've got installed
[01:11] pdebuild uses pbuilder to give a reproducible build, similar to sbuild
[01:17] ok.
[01:18] So I have a bunch of local deb files. I'd like to quickly spin up a local repo serving those files for machines in the same LAN. What's the easiest way to achieve this?
[01:21] apt-ftparchive (from apt-utils) works okay
[01:21] aptly if you want Something Better (I've not used this one myself, but it sure looks neat)
[01:22] mini-dinstall if you want something super cheap and easy. :>
[02:00] dpkg-scanpackages is the easiest if you literally just have a directory with some debs.
[02:00] Other stuff is massive overkill.
[02:04] infinity: hmm, dpkg-scanpackages can set up a repo for all machines in the same LAN?
[02:05] amosbird: I mean, it can generate a Packages file. :P
[02:05] not sure what that means
[02:05] https://www.aptly.info/ does look nice
[02:05] amosbird: The "repo for all machines in the LAN" part is left to you to decide how to serve. http (apache, etc), smb, nfs, whatever.
[02:06] aptly is ridiculous overkill (and broken in many fun ways) if you really have "a bunch of local debs" rather than something much larger.
[02:06] Of course, another cheap and easy way to go is to build everything in a PPA, mirror the PPA locally, and serve that via apache.
[02:07] infinity: hmm, that means I need to run apache
[02:07] is there a tool that can just do ./serve ?
[02:07] and I'm good to go
[02:08] No.
[02:08] Creating repositories and serving them are two distinct concepts.
[02:08] Because it's just a flat directory structure, there's no point writing a tool to serve it, when any httpd will do.
[02:09] but it shouldn't forbid an easy use case
[02:09] I don't understand
[02:09] A repository of debs, at the simplest, is a single directory with debs and a Packages file. More complex, a tree with pool, dists, etc. Either way, it's just a directory structure.
[02:10] Creating that locally and serving it are two different things.
[02:10] Serving it via any simple httpd takes seconds to set up, so why would someone write a different one to do the same thing?
[02:10] infinity: ok. after using httpd to serve that directory, how can clients use that?
[02:10] ...
[02:11] yeah, centos user here
[02:11] And this handholding has gone way beyond the help that #ubuntu-devel doesn't provide.
[02:11] ok
[02:11] You don't have to serve it from an Ubuntu system. :P
[02:11] If you have CentOS servers with apache on them, you win.
[02:12] ...
[02:12] (or, apt-get install apache2, put your debs and Packages files in /var/www or whatever the default dir is, and go to town)
[02:12] ok
[02:12] any one-liner I can use to add that http server as an apt repo?
[02:13] I'm not sure what you mean.
[02:13] You mean on the client machines?
[02:13] It'll depend on where you serve from relative to the root of the machine.
[02:13] "You mean on the client machines?" yes
[02:14] suppose I put those debs and Packages files in /var/www
[02:14] But adding something like "deb http://my.server.local/ ./" to sources.list on a client, assuming /var/www is where you put your debs, will do the trick.
[02:14] After you dpkg-scanpackages . > Packages in said root.
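The flat-repo recipe above can be sketched end to end. This is a hedged, self-contained version: the printf stanza stands in for real dpkg-scanpackages output so it runs without any .debs on hand, and the package name, path, and port are all illustrative.

```shell
set -e
# Stand-in for "a directory full of debs" (path is illustrative).
repo=$(mktemp -d)
cd "$repo"
# With real .debs in place you would generate the index with:
#   dpkg-scanpackages . /dev/null > Packages
# Here a fake one-package stanza keeps the sketch self-contained.
printf 'Package: hello\nVersion: 2.10-1\nArchitecture: amd64\nFilename: ./hello_2.10-1_amd64.deb\n' > Packages
gzip -c Packages > Packages.gz    # clients will also ask for Packages.gz
# Then serve the directory with any httpd, e.g.:
#   python3 -m http.server 33992
ls Packages Packages.gz
```

Any httpd works equally well; apt only needs the Packages index and the .debs to be fetchable over HTTP.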
[02:14] ah, I see what's going on now
[02:14] lemme try that
[02:16] infinity: btw, my workflow has one additional jump
[02:17] so my dev box generates deb files, I'll push them into a box serving the repo, then clients pull the packages from that serving box.
[02:17] rbalint, cpaelzer__: is the usd-importer stuck again?
[02:17] my question is, do I need to rerun dpkg-scanpackages after updating the old debs?
[02:18] or can I just rsync them over
[02:19] dpkg-scanpackages generates the Packages file, so you need to re-scan any time you add new ones.
[02:20] Also, that sources.list line should be "deb [trusted=yes] http://my.server.local/ ./" because it won't be signed (which is fine assuming it's a local network you trust)
[02:34] "so you need to re-scan any time you add new ones." What if I don't add new ones, but replace old ones?
[02:52] can I use the apt-add-repository command instead of a deb line?
[02:56] oh...
[02:56] E: The repository 'http://10.138.0.2:33992 ./ Release' does not have a Release file.
[03:00] Does apt really require Release files now? That would be silly.
[03:00] trusted=yes shouldn't need one.
[03:00] Did you use that?
[03:01] The apt maintainer actually was talking about dropping support for Release(,.gpg) and requiring InRelease, last I knew.
[03:01] infinity: I did
[03:02] Ign:4 http://10.138.0.2:33992 ./ InRelease
[03:02] Err:5 http://10.138.0.2:33992 ./ Release
[03:02] 404 Not Found [IP: 10.138.0.2 33992]
[03:02] there is an InRelease request too
[03:03] amosbird: What's your sources.list line?
[03:05] weird, it's deb http://10.138.0.2:33992/ ./
[03:05] but I surely did sudo apt-add-repository 'deb [trusted=yes] http://10.138.0.2:33992/ ./'
[03:05] apt-add-repository isn't really a useful tool for this.
[03:06] You could as easily have said "echo 'deb [trusted=yes] http://10.138.0.2:33992/ ./' > /etc/apt/sources.list.d/my.list" for the same effect.
[03:06] Without wondering what the tool was doing.
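The echo approach above, spelled out as a runnable sketch. In real use the file belongs in /etc/apt/sources.list.d/ (which needs root); a temp directory is used here only so the example runs anywhere. The URL is the one from the chat.

```shell
set -e
# Stand-in for /etc/apt/sources.list.d/ so no root is needed for the sketch.
aptdir=$(mktemp -d)
echo 'deb [trusted=yes] http://10.138.0.2:33992/ ./' > "$aptdir/local-repo.list"
cat "$aptdir/local-repo.list"
# On a real client, follow up with:
#   sudo apt-get update
```

[trusted=yes] tells apt to accept the unsigned flat repo, which is reasonable only on a network you trust.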
[03:06] Oh, and as to your other question, "replacing" files in the repo *is* updating them. :P
[03:07] Unless they're identical, your Packages file would be wrong (it contains sizes and hashes)
[03:07] ah, I see, so I need to update the Packages file too
[03:07] But it's also generally poor form to replace a package with the same version.
[03:08] btw, how can I make sure the update process won't break any in-progress apt install?
[03:08] if any
[03:08] apt locks. You can't run it twice.
[03:08] so I first do apt locks, then pull in the updates, then apt unlocks?
[03:09] No, I mean apt locks itself. :P
[03:09] You can't run two at the same time.
[03:09] but the update process is just scp
[03:09] Oh, you mean updating the repo.
[03:09] yes
[03:10] "apt-add-repository isn't really a useful tool for this." what's the right tool then? echo seems ... not appropriate
[03:11] So, if you're pushing new versions (instead of trying to overwrite versions, which is evil and scary and undefined), then you push new debs, dpkg-scanpackages > Packages.new && gzip -c Packages.new > Packages.gz && mv Packages.new Packages, then remove the old versions $later.
[03:11] Why is echo inappropriate?
[03:11] One doesn't need a fancy tool to put a line in a text file.
[03:11] infinity: ok
[03:11] so I need to rely on the filesystem's atomic mv
[03:11] apt-add-repository was designed for the specific use-case of setting up hard-to-remember PPA URLs and fetching the signing keys for them.
[03:11] ah, ok
[03:12] You know your URL, and you have no key. So... meh?
[03:13] Basically, the only way to break an in-progress update (if you update Packages atomically) is to remove the old debs while someone's still trying to download them.
[03:13] And, really, that's not world-ending. The client just needs to apt-get update to get the new Packages file, and carry on with life getting shiny new debs instead.
[03:14] btw
[03:15] after a new Packages file is uploaded, do clients need to run apt update?
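The push-then-rescan sequence above, as a self-contained sketch. A printf stands in for dpkg-scanpackages output, the repo root is a temp directory, and the quoted command is extended slightly so the .gz is also renamed into place atomically rather than written directly.

```shell
set -e
repo=$(mktemp -d)                 # stand-in for the repo root
cd "$repo"
# After pushing new debs, the real rescan would be:
#   dpkg-scanpackages . /dev/null > Packages.new
printf 'Package: hello\nVersion: 2.10-2\n' > Packages.new
gzip -c Packages.new > Packages.gz.new
mv Packages.new Packages          # rename within one filesystem is atomic,
mv Packages.gz.new Packages.gz    # so clients see the old or new index, never a partial one
```

Old .debs can then be deleted later, once no client is likely to be mid-download against the previous index.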
Or is apt install just enough?
[03:15] Anyhow, if you have tons of packages you want to do this sort of thing with, something like mini-dinstall will help with building the repos and deleting old versions, etc.
[03:15] It's just serious overkill if your repository is a few source packages and their binaries.
[03:15] apt update fetches the new Packages file, yes. That's literally its only function.
[03:16] infinity: so I have to do an apt update first
[03:16] A bit confusing that yum conflated update and upgrade into one command. That hurt my brain the first time I used it.
[03:16] .......
[03:16] And I assume the same brain hurtiness happens the other direction.
[03:19] heh, dpkg-scanpackages doesn't allow selecting debs
[03:20] it just takes all debs in the directory
[03:21] Right.
[03:21] And apt-ftparchive in its simple mode does the same.
[03:21] You can use apt-ftparchive in the complex and super confusing mode we do for the primary archive, which takes config files and lets you feed in package lists, but again, massive overkill unless you're managing a huge repository.
[03:22] ok
[03:22] But only you know your limit for simple tooling versus complex infrastructure.
[03:23] I'd recommend mini-dinstall when things get too complex.
[03:23] Or, if none of your packages are secret/proprietary, it really is dirt simple to just let LP PPAs handle it, and then mirror the PPA structure locally so your clients don't all have to hit a remote.
[03:37] https://github.com/yandex/ClickHouse/blob/master/debian
[03:38] so the /debian folder looks like this. How can I make debuild exclude clickhouse-common-static-dbg and clickhouse-test?
[03:49] infinity: hi, how can I apt update this repo only?
deb [trusted=yes] http://10.138.0.2:33992/ ./
=== cpaelzer__ is now known as cpaelzer
[06:44] mwhudson: yes, I was missing tonight's imports
[06:44] I'll check if it has died
[06:45] and you probably meant rbasak instead of rbalint :-)
[06:47] hmm, it wasn't aborted - but maybe stuck
[06:47] Examining publishes in ubuntu since 2019-09-22 05:08:13 is quite old
[06:49] yeah, something made it stuck on a futex and a socket it seems
[06:50] mwhudson: thanks for the ping, I restarted the importer ...
[06:55] Unit193, hello, what happened to the filezilla upload?
[06:55] filezilla is now RC-buggy
[06:58] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=941193
[06:58] Debian bug 941193 in libfilezilla0 "filezilla: Crash on startup: undefined symbol: _ZTIN2fz6threadE" [Grave,Open]
[06:58] also libfilezilla
[06:58] Adri2000, ^^
[07:35] vorlon, hey, do you plan to send the debdiff back to Debian for those pylint changes?
[08:04] mwhudson: FYI the git importer has fully caught up now
[08:35] LocutusOfBorg: hello, yes, just saw that :(
[08:49] LocutusOfBorg: Hmm?
[08:52] Unit193, you sponsored it, right?
[08:52] :)
[08:55] I didn't do a filezilla upload though.
[08:55] LocutusOfBorg: I'm asking upstream if they plan to fix the bug in libfilezilla itself... also I can't get filezilla to build with wxwidgets gtk3 (./configure fails, saying gtk3 is missing), so I'm asking upstream about that as well
[08:56] I wanted to find a solution for the wxwidgets gtk3 build before uploading filezilla, that's why
[08:56] What error exactly, if I may?
[08:58] checking for gtk+-3.0... no
[08:58] configure: error: gtk+-3.0 was not found, even though the used version of wxWidgets depends on it. Are you missing the gtk+3.0 development files?
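The configure failure quoted above very likely boils down to a pkg-config probe for gtk+-3.0. A hedged sketch of reproducing that probe by hand (the fix-it package name libgtk-3-dev is the Debian/Ubuntu one):

```shell
# Reproduce a gtk+-3.0 probe of the kind configure performs.
# If the dev files are absent (or pkg-config itself is missing),
# report "missing"; the usual remedy is: apt-get install libgtk-3-dev
if pkg-config --exists 'gtk+-3.0' 2>/dev/null; then
  gtk3=found
else
  gtk3=missing
fi
echo "gtk+-3.0: $gtk3"
```

Running this on the build machine tells you quickly whether the problem is the environment or the configure script.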
[08:58] I just changed the build-dep from libwxgtk3.0-dev to libwxgtk3.0-gtk3-dev, as suggested in the bug report
[08:59] so either I'm missing something, or the ./configure is buggy
[08:59] https://launchpadlibrarian.net/441494360/filezilla_3.39.0-2_3.39.0-2aegir1~19.04.diff.gz that's what I did.
[09:00] oops, libgtk2.0, that was obvious :s sorry, I didn't even see you did it in ubuntu!
[09:01] Not Ubuntu, a silly PPA. :P
[09:01] ah :)
[09:01] ok, let me fix that and I'll ping you again to upload in debian... :)
[09:06] LocutusOfBorg: Symbols would really help avoid that bug.
[09:23] rafaeldtinoco: would the cluster solutions all need to be updated to know about this new feature?
[09:23] or do you, from your testing, think that the admin could do that
[09:23] what I'm wondering is if e.g. some cluster solutions set up those virtual ips/devices themselves dynamically
[09:24] then an admin can't do that, which means all those cluster pkgs would need to know how/where to drop netplan config
[09:24] hence I'm asking about that in regard to your HA testing so far
[09:24] could this work with "just" systemd/netplan changes or will all those packages need updates as well?
[09:25] cpaelzer: my thought is that... if clusters are putting virtual aliases on interfaces, those should contain the "keep configuration" systemd-networkd flag (discussed in LP: #1815101)
[09:25] Launchpad bug 1815101 in systemd (Ubuntu Eoan) "[master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)" [Medium,In progress] https://launchpad.net/bugs/1815101
[09:26] cpaelzer: so, usually HA clusters will have private interconnects and public nics; you have to set that flag on those receiving VIPs.
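For reference, the "keep configuration" networkd flag discussed here corresponds to the KeepConfiguration= setting in systemd.network's [Network] section (added around systemd 243). A sketch of a .network file for a VIP-carrying nic, with an illustrative filename, interface name, and address:

```ini
# /etc/systemd/network/10-eth1.network (illustrative)
[Match]
Name=eth1

[Network]
Address=10.0.1.2/24
# Keep statically assigned addresses (e.g. cluster VIPs added at runtime)
# when networkd restarts or reconfigures the link.
KeepConfiguration=static
```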
[09:26] for keepalived, haproxy, ipvs, the same thing
[09:27] backporting to 237 and modifying all those packages might be complex, but at least going forward that solution LGTM
[09:28] network:
[09:28]   version: 2
[09:28]   renderer: networkd
[09:28]   ethernets:
[09:28]     eth1:
[09:28]       dhcp4: no
[09:28]       dhcp6: no
[09:28]       addresses: [10.0.1.2/24]
[09:28]       keepaliases: true
[09:28] cpaelzer: netplan could be configured with something like this ^
[09:28] and we would explain what keepaliases means (or any other word)
[09:28] once you did some doability checks and experiments on it, be sure to pull in cyphermox for his opinion on the netplan portion of it
[09:29] +1
[09:29] thx for reviewing this, I'd appreciate constant review in the next comments to come
[09:29] =)
[09:29] rafaeldtinoco: the value is more fine-grained
[09:29] rafaeldtinoco: so if exposing it in netplan, maybe expose the full set of values
[09:29] ok
[09:29] makes sense
[09:29] static/dhcp/dhcp-on-stop ...
[09:30] I've read the commits now and I don't have a better idea how to resolve this - which means between the two of us it is the best idea we have, so I won't stop you on it
[09:30] :-)
[09:30] rafaeldtinoco: what about the "older" CriticalConnection attribute?
[09:31] could that be available in earlier systemd?
[09:31] https://github.com/systemd/systemd/commit/c98d78d32abba6aadbe89eece7acf0742f59047c
[09:31] https://github.com/systemd/systemd/issues/12050
[09:31] it does not prevent the issue
[09:31] thus the need to create a new feature
[09:32] ok, thanks for checking that already
[09:32] sure!
[09:33] rafaeldtinoco: -- topic switch -- did you see the KVM acpi issue was resolved by just not passing acpi to the guest?
[09:33] sometimes trivial solutions are the best
[09:33] cpaelzer: err, I knew that would "solve" the issue
[09:33] but he wanted to shut down through acpi
[09:34] I thought about checking why acpid was working in the previous qemu version
[09:34] likely acpi tables from the new qemu broke older kernels
[09:34] (that's what I had in mind)
[09:34] but I saw your comment on modes
[09:34] I wasn't aware of that
[09:34] hopefully without acpi there is some other way of shutting down
[09:34] TBH he should go on with the non-crashing case and resolve the shutdown in one of the "too many ways" to do it
[09:34] the guest being rhel does not give a big incentive to continue =(
[09:34] anyway, let's see what the bug brings up, it seems ok for now
[09:35] yep
[09:35] we are not a service-portal :-)
[09:35] yep, it's tricky to define where to stop
[09:35] I'm learning =)
[10:59] bdmurray: infinity: but also that grub-installer upload didn't fix the server iso. I don't know if more things are needed in grub-installer, or something else is broken.
=== paride2 is now known as paride
[11:47] seb128, can you please also merge gst-plugins-bad1.0?
[11:47] LocutusOfBorg, I would share my currently open editor if that was easier :p
[11:47] on it atm
[11:48] also there are a few from Debian where syncpackage tells me launchpad didn't pick up the new version yet
[11:48] which I will also sync
[11:52] yes, they have been uploaded half an hour ago, thanks for caring
[11:54] on the gst-plugins-good1.0 merge some changelog entries have been lost... not sure if I made a mistake when updating git
[11:56] http://launchpadlibrarian.net/444275983/gst-plugins-good1.0_1.16.0-3ubuntu2_1.16.1-1ubuntu1.diff.gz
[11:56] looks like the changelog wasn't updated correctly?
[11:57] LocutusOfBorg, what do you mean?
[11:57] what changelog? how not correctly?
[11:57] lots of entries disappeared
[11:57] look at the diff on the queue...
[11:57] debian/changelog?
[11:57] yes
[11:58] that's how I do merges
[11:58] I don't keep the old entries
[11:58] imagine we synced and added back some delta we dropped by error, if that helps you :p
[11:59] actually I was going to do the same on some other package, just I never saw people doing it
[11:59] lots of packages have more delta in the changelog than everything else
[11:59] right, well, for desktop packages I tend to do it this way
[11:59] the vcs and launchpad have the upload history anyway
[12:00] ok, so the ubuntu/1.16.1-1ubuntu1 tag differs from the uploaded package because of this pruning, right?
[12:01] oh, looks like git is updated
[12:01] thanks, I'll do this on my merges from now on
[12:02] np!
[12:04] how can you do that with git? a debian/changelog merge driver which takes the 'theirs' side?
[12:58] 14
[12:58] with a /
[13:14] cpaelzer: I remember seeing some discussions regarding rolling back the last systemd version in -proposed (242-6ubuntu1).
[13:14] do you know anything about it? wondering which version I should backport the feature to
[13:14] rafaeldtinoco: it already changed from 243 to 242
[13:14] rbalint: will know
[13:15] ^^
=== ricab is now known as ricab|lunch
=== ricab|lunch is now known as ricab
=== dannf` is now known as dannf
[17:23] xnox: I'm not seeing any indication that bug 1779767 is fixed in Eoan
[17:23] bug 1779767 in cron (Ubuntu) "Default cron PATH does not include /snap/bin" [Medium,Confirmed] https://launchpad.net/bugs/1779767
[20:57] cpaelzer: thanks
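On the merge-driver question above: a sketch of a gitattributes merge driver that always takes "their" side of debian/changelog whole. The driver name "theirs" and the throwaway repo layout are our own; the mechanism (cp %B %A, i.e. copy their version over the merge result file) is standard gitattributes merge-driver plumbing.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
# %A is the file the merge result must land in, %B is their version;
# copying %B over %A means "take theirs" whenever both sides changed it.
git config merge.theirs.name "take their side of the file"
git config merge.theirs.driver 'cp %B %A'
mkdir debian
echo 'debian/changelog merge=theirs' > .gitattributes
echo 'base entry' > debian/changelog
git add -A
git commit -qm base
git checkout -qb debian-side
echo 'their entry' > debian/changelog
git commit -qam theirs
git checkout -q -
echo 'our entry' > debian/changelog
git commit -qam ours
git merge -q --no-edit debian-side   # driver fires: result is their side
```

For the workflow discussed here (keeping your own pruned changelog), the same pattern with `cp %A %A` as an "ours" driver, or git's built-in `merge=ours` attribute semantics via a trivial `true` driver, would be the mirror-image choice.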