=== guiverc2 is now known as guiverc
[13:53] doko: I can confirm (from PPA) that the postgresql bugs related to perl will vanish with the stable update that I'm preparing
[13:53] so that should be good over the weekend or on monday
[13:53] nice
[13:54] and libvirt >6.8, I already mentioned that I have that ready and will work on it next week (after +1)
[13:54] cpaelzer: what about libvirt?
[13:54] that will unblock the two others
[13:54] ahh, ok. alternative would be to remove that version from -proposed, and build with the old one
[13:55] no, I'm rather close
[13:55] give me a few days to finalize it
[13:55] ok
[13:55] if - when looking into the test results - it looks horrible, then I'll let you know that we should revert and build the old version
[13:55] as an interim solution
[16:08] mwhudson: hey, is there a chance to backport a fix in go's IO loop: https://go-review.googlesource.com/c/go/+/232862/
[16:09] mwhudson: this bug incorrectly surfaces EINTR in valid go programs
=== _hc[m] is now known as _hc
[16:17] Hello everybody,
[16:17] We want to make you aware that on Friday November 20th and Saturday November 21st there will be a scheduled downtime in our Boston data center to move several compute nodes that form part of the Launchpad build farm.
[16:17] During this maintenance window, the Launchpad build farm will not be able to perform builds on the following platforms: arm64, armhf, ppc64el, s390x, powerpc and armel.
[16:17] We appreciate your patience and understanding. If you have questions or issues, please submit them at: https://help.launchpad.net/Feedback.
[16:41] That affects autopkgtest too FWIW.
[16:41] :(
[16:42] I know, earlier today I was like "hmm, ok, they should be done by the end of the weekend"
[16:46] I wonder if it'll fix the bos01 ppc64el compute instance though, if the cloud gets turned off and on again
[16:46] silver lining?
[16:48] ah actually, maybe no worries, that's a week tomorrow, not tomorrow!
[18:08] zyga: probably! does it backport cleanly?
=== ijohnson is now known as ijohnson|lunch
=== ijohnson|lunch is now known as ijohnson
[19:09] rcj, Odd_Bloke, so I finally got to try the old-fashioned script and I think I got it working, thanks again for that! now how do I turn the .ext4 & co into an iso? using ubuntu-cdimage?
[19:15] That's a great question seb128 but I don't know for sure. We trade in qcows and the like. But someone like xnox or Laney might be able to tell you how they make the ISOs for server or desktop from your output.
[19:19] rcj, alright, thanks
[19:19] mwhudson, or maybe you know about ^ ?
[19:19] is he awake yet?
[19:21] he wrote on this channel an hour ago
[19:22] and even if he's not, he might read the backlog later :)
[19:33] seb128: i simply checkout ubuntu-cdimage; and checkout the debian-cd branch and drive it normally as ubuntu-cdimage does.
[19:34] xnox, how does ubuntu-cdimage know where to find debian-cd?
[19:34] seb128: and i tweak config to use my own +livefs in launchpad, instead of the production ones.
[19:34] seb128: it finds it in the cwd
[19:34] seb128: readme says to checkout the debian-cd ubuntu fork.
[19:34] xnox, is that https://code.launchpad.net/~ubuntu-cdimage/debian-cd/ubuntu ?
[19:34] i have no idea why they are separate repos, but they are.
[19:35] seb128: sounds right.
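For reference, a minimal sketch of the checkouts xnox describes. The debian-cd URL is the one confirmed just above; the use of bzr and the lp:ubuntu-cdimage branch location are assumptions (the "bzr shelve" remark later in the log suggests bzr, but the exact code location may differ):

$ bzr branch lp:ubuntu-cdimage                               # assumed location of the cdimage code
$ cd ubuntu-cdimage
$ bzr branch lp:~ubuntu-cdimage/debian-cd/ubuntu debian-cd   # the fork linked above, checked out as ./debian-cd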
[19:36] xnox, I'm getting confused
[19:36] $ DIST=bionic for-project ubuntu cron.daily-live --live
[19:36] is what I try in a debian-cd checkout
[19:36] (with the ubuntu-cdimage/bin in the path)
[19:36] but it errors with
[19:36] cdimage.livefs.LiveBuildsFailed: No live filesystem builds succeeded.
[19:39] after exporting CDIMAGE_ROOT to the ubuntu-cdimage checkout I get
[19:39] cdimage.livefs.UnknownArchitecture: No live filesystem builder known for amd64
[19:39] I wonder how many people had to figure out those details again over the years
[19:42] xnox, and re livefs, there is no way to do that locally, you need to hit launchpad?
[19:47] seb128: not in the debian-cd checkout
[19:48] seb128: one dir above it.
[19:48] ubuntu-cdimage/debian-cd/
[19:48] you want to be in ubuntu-cdimage
[19:48] with debian-cd available as ./debian-cd
[19:48] normally
[19:48] k, well I get
[19:48] cdimage.livefs.UnknownArchitecture: No live filesystem builder known for amd64
[19:48] that sounds like lack of permissions
[19:49] are you in ~ubuntu-cdimage?
[19:49] yes
[19:49] cause in etc/livefs-launchpad, i normally tweak it to use my own xnox/<name of livefs>
[19:49] ok
[19:50] path = os.path.join(config.root, "production", "livefs-builders")
[19:50] i never have CDIMAGE_ROOT exported.....
[19:50] do you add ubuntu-cdimage/bin to PATH?
[19:50] just the bin added to path, and then i'm just in the ubuntu-cdimage/ folder
[19:51] yes, add ubuntu-cdimage/bin to PATH, and be in ubuntu-cdimage
[19:51] I should maybe edit etc/livefs-launchpad ?
[19:52] one sec, yeah, you ended up in the "ssh into builder" codepath (back in the day when we did not use launchpad as livefs builders)
[19:52] if lp_livefs is not None:
[19:52] if lp_livefs is not None
[19:52] also how does it know where to find the artifacts if those are local?
[19:52] yeap, your livefs came up empty
[19:53] $ ls ~/build.output/
[19:53] binary.log livecd.ubuntu.ext4 livecd.ubuntu.kernel-generic livecd.ubuntu.manifest-minimal-remove log
[19:53] etc livecd.ubuntu.initrd-generic livecd.ubuntu.manifest livecd.ubuntu.manifest-remove
[19:53] it should use those right?
[19:53] seb128: no.
[19:53] seb128: ubuntu-cdimage by default looks up the livefs (the build it triggered, or the last one), downloads things, and puts them into the scratch dir, to invoke debian-cd against.
[19:54] you can patch the download code to just copy things from stuff you built locally.
[19:54] or sometimes i just comment out downloading, after one download, and hack things by hand myself.
[19:54] sounds like that should be an option rather than individuals redoing that patching :)
[19:54] seb128: also i don't use old-fashioned, i simply use the livefs builder in launchpad
[19:54] doing it locally sounds better rather than wasting infra resources
[19:55] seb128: well, if my team's upload would not be reverted we might have more time to work on cdimage ;-)
[19:55] do you remember offhand where the download code you patch is?
[19:55] xnox, punt :p
[19:55] seb128: yeah, i should have it as a bzr shelve, one sec.
[19:55] and then i run the build without the '--live' option
[19:56] download_live_filesystems and download_live_items
[19:56] are the ones i stub out.
[19:56] seb128: but it would be nice to fix an ubuntu-cdimage local instance not being able to talk to launchpad.
[19:57] seb128: cause i guess you might want to have one end-to-end run, with everything stock.
[19:57] cause then there are logs & a scratch dir with all the state, which one can then inspect, and just run parts of it.
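A rough sketch of the local-reuse idea above: stub out download_live_filesystems / download_live_items and drop the old-fashioned output into the scratch dir yourself. The scratch path and the per-arch file names here are guesses, best confirmed from one stock end-to-end run as xnox suggests:

$ SCRATCH=~/ubuntu-cdimage/scratch/ubuntu/bionic/daily-live/live    # hypothetical path, check a stock run for the real layout
$ mkdir -p "$SCRATCH"
$ cp ~/build.output/livecd.ubuntu.ext4           "$SCRATCH/amd64.ext4"             # per-arch names are assumptions
$ cp ~/build.output/livecd.ubuntu.kernel-generic "$SCRATCH/amd64.kernel-generic"
$ cp ~/build.output/livecd.ubuntu.initrd-generic "$SCRATCH/amd64.initrd-generic"
$ cp ~/build.output/livecd.ubuntu.manifest       "$SCRATCH/amd64.manifest"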
[19:59] seb128: path = os.path.join(config.root, "etc", "livefs-launchpad") -> this must exist, and does exist if one is in the ubuntu-cdimage checkout.
[19:59] seb128: and if your config.root is off, then lots of stuff will not work.
[20:00] seb128: if you cd into your checkout of "ubuntu-cdimage" (i.e. ~/canonical/ubuntu-cdimage) and then run DIST=bionic for-project ubuntu cron.daily-live
[20:01] does it work?
[20:01] it should download all the things, and just work.
[20:01] Exception: RSYNC_SRC not configured! Edit /home/ubuntu/ubuntu-cdimage/production/anonftpsync or /home/ubuntu/ubuntu-cdimage/etc/anonftpsync and try again.
[20:09] seb128: hooray, that's good!
[20:09] seb128: adjust etc/anonftpsync to taste, or make a change to skip the mirroring stuff.
[20:10] xnox, can you maybe share your download functions diff?
[20:10] seb128: i have CDIMAGE_NOSYNC=1 =>
[20:10] easier than redoing that part
[20:10] FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/ubuntu-cdimage/ftp/dists/bionic/main/debian-installer/binary-amd64/Packages.gz'
[20:10] seb128: i've looked for it, and i don't have it.
[20:11] seb128: without a mirror, locally, one cannot make an iso. because it needs to download things. and there is no way to rsync an unsplit mirror locally.
[20:11] seb128: so i rsync the dists/ on people.canonical.com
[20:11] seb128: and then rsync it locally; and update whenever i want newer bootloaders.
[20:11] seb128: lately i had to build new images, with changes in livecd-rootfs => meaning full rebuilds.
[20:12] seb128: but yeah, ftp/dists/$suite for the suites you want to build for are needed
[20:12] seb128: and partial pools.
[20:12] bah
[20:12] seb128: i dream of reworking cdimage to work off a networked mirror with apt-get download
[20:12] seb128: because one can totally add all the repos for all the arches locally. without this mirror bullshit.
[20:12] I don't understand why I need all this
[20:12] old-fashioned gave me the ext4, manifest etc
[20:13] it sounds like building an ISO from those artifacts shouldn't be that much overhead :/
[20:13] seb128: to assemble the iso these things are needed https://paste.ubuntu.com/p/CpGHgDgTMV/
[20:13] seb128: the iso consists of: squashfs, bios/uefi bootloaders, a packages pool of files with the optional debs needed for offline install.
[20:14] seb128: that's for recent series
[20:14] seb128: for bionic it needs access to bionic's isolinux, ubuntu theme, grub etc., to unpack them, put them in the scratch dir and pass them to the xorriso command when assembling the iso
[20:14] seb128: and the pool on the production iso must be signed with the cdimage key....
[20:15] I'm not interested in bionic, I just thought it would give me a stable basis
[20:15] seb128: hence launchpad/livecd-rootfs cannot spit out a ready-made iso, because it will not sign things on it.
[20:15] seb128: which series do you care about then?
[20:15] xnox, so basically going down the path we were discussing, I need to create a mirror in ftp/dists?
[20:15] seb128: because every series supports different targets, behaves differently, is assembled differently, and needs a different amount of partial pool.
[20:15] seb128: yes, you need ftp/dists for the suites you want to build for.
[20:16] xnox, hirsute, my goal is to fix the canary build to use it as a basis for the subiquity desktop project
[20:16] for the arches you want to build for.
[20:16] you don't have a command to rsync or create that pool? ;)
[20:16] I'm still amazed that no-one properly documented those
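Putting the pieces above together, the stock local run xnox describes looks roughly like this; CDIMAGE_NOSYNC=1 is the shortcut he mentions for skipping the anonftpsync mirroring, and the run still assumes the ftp/dists mirror and partial pool discussed next:

$ cd ~/ubuntu-cdimage                 # the ubuntu-cdimage checkout, with debian-cd present as ./debian-cd
$ export PATH="$PWD/bin:$PATH"        # for-project and friends live in bin/
$ CDIMAGE_NOSYNC=1 DIST=bionic for-project ubuntu cron.daily-live
$ # xnox runs without the '--live' flag once the livefs download code is stubbed out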
[20:17] most cdimage people.... simply ssh into old minster, and run builds from there.
[20:17] there is probably a bunch of us who went through the exercise of getting this somehow working
[20:17] each having their own tricks
[20:17] i am one of the few who don't have that ssh access, hence have a local deployment; and i bootstrap juliank onto it too
[20:17] seb128: sync
[20:17] $ cat bin/sync
[20:17] #!/bin/sh
[20:17] set -x
[20:17] MIRROR=/home/xnox/mirror/ubuntu
[20:17] DEVEL=groovy
[20:17] rsync -av --exclude dapper\* --exclude edgy\* --exclude feisty\* --exclude gutsy\* --exclude hardy\* --exclude intrepid\* --exclude jaunty\* --exclude karmic\* --exclude lucid\* --exclude maverick\* --exclude natty\* --exclude oneiric\* --exclude quantal\* --exclude raring\* --exclude saucy\* --exclude utopic\* --exclude wily\* --exclude yakkety\* --exclude zesty\* --exclude artful\* --exclude bionic\*
[20:17] --exclude disco\* --exclude cosmic\* --exclude eoan\* --exclude precise\* --exclude xenial\* --exclude trusty\* --exclude vivid\* --include Packages\* --include Sources\* --include Release\* --include InRelease\* --include udeb.list --include \*\*/installer-\*/current --include "**/$DEVEL/Contents-*" --include "**/focal*/Contents-*" --include \*/ --include i18n/\* --exclude \* --delete --delete-excluded
[20:17] --prune-empty-dirs ftpmaster.internal::ubuntu-dists/ "$MIRROR/dists/"
[20:17] GERMINATE="/home/xnox/mirror/ubuntu-germinate"
[20:18] rsync -avz --include germinate.output --exclude _\* --exclude \*.new --include "*_groovy_*" --include "*_focal_*" --exclude \* --delete --delete-excluded ftpmaster.internal::ubuntu-germinate/ "$GERMINATE/"
[20:18] that will give you dists..... bah
[20:18] https://paste.ubuntu.com/p/MWNb6zrYD3/
[20:18] thanks
[20:18] seb128: and then for the pool, i just run the build, see where it fails to fetch, mkdir -p that dir, and then run like $ pull-lp-debs --arch $arch $pkg => for the right version, or series.
[20:18] i guess you will want the latest cd-boot-images-amd64
[20:19] it's 9 at the moment.
[20:19] and you will need the keyring too
[20:19] so pull-lp-debs ubuntu-keyring
[20:20] xnox, alright, I'm doing a rsync and will probably come back to that in a while or tomorrow, thanks for the guidance
[20:21] xnox, the other option would be to just restore the canary build and try to fix it by upload and watch the result on the official infra :p
[20:21] seb128: yes that would be the quickest way, to just re-add it in the etc/crontab
[20:22] seb128: because then, for example, i will see the logs too, as they would be publicly mirrored.
[20:22] and anyone else.
[20:22] xnox, well I expect the first thing is going to be fixing the kernel problem that was discussed back when didrocks and ji_bel were working on it, I don't think anyone handled it
[20:22] seb128: especially if the canary image is to become the new subiquity-desktop playground.
[20:23] that's the plan
[20:23] but I wanted to iterate locally rather than waste infra resources to test things, I thought it would also be quicker to iterate
[20:23] I didn't expect it to be so complex to set up
[20:23] seb128: what was the kernel problem? when we redid the bootloader, it didn't work for the canary build? a debian-cd / ubuntu-cdimage build log would be helpful for that.
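The bin/sync script above covers ftp/dists; for the partial pool, xnox's prose recipe (run the build, note which pool path it fails to fetch, create it, pull the missing debs) might look roughly like this. The pool paths and the hirsute series are illustrative assumptions; copy the real ones from the build's fetch errors:

$ mkdir -p ftp/pool/main/c/cd-boot-images-amd64     # hypothetical path, taken from the failed fetch
$ ( cd ftp/pool/main/c/cd-boot-images-amd64 && pull-lp-debs --arch amd64 cd-boot-images-amd64 hirsute )
$ mkdir -p ftp/pool/main/u/ubuntu-keyring
$ ( cd ftp/pool/main/u/ubuntu-keyring && pull-lp-debs --arch amd64 ubuntu-keyring hirsute )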
[20:24] seb128: this one is _easy_ =))))))) wait till you see the CPC or the OEM-images setup =)))))))))))))))))))
[20:24] (the publishers there were forked from ubuntu-cdimage when it was shell, with ceph buckets and jenkins added)
[20:25] lol
[20:25] xnox, what you wrote back then was 'We should make kernel/linux install be done in the is_live_layer only. Including regenerating the initrd as done in hacks, and rip it out.'
[20:30] seb128: oooooh.
[20:31] seb128: but that's in livecd-rootfs and live-build, not at the ubuntu-cdimage/debian-cd level.
[20:33] xnox, right, well I'm trying to build up my way from bottom to top
[20:34] I thought old-fashioned would drive livecd-rootfs for me
[20:34] which is where I started
[20:34] well; I think that's still true
[20:34] if I replace the download code and use those artifacts
[20:35] yes that would drive livecd-rootfs & live-build, but not ubuntu-cdimage/debian-cd.
[20:35] I can use old-fashioned to build with the livecd-rootfs changes
[20:35] and then I need the upper part of the stack to end up with the ISO
[20:35] imho having a personal xnox/ubuntu +livefs build, in etc/livefs-launchpad, is easier than old-fashioned.
[20:35] with uploads of livecd-rootfs / live-build to a ppa
[20:35] and launching the builds in that ppa
[20:35] thus iterating out of archive.
[20:35] I don't know how to do that either
[20:35] so much knowledge missing there :)
[20:36] seb128: yes, we have teams that only do that. i.e. cpc & oem.
[20:36] they should fix the canary image for us :p
[20:36] seb128: which salesforce contract is it for? they have minimum revenue before they can be invoked =)
[20:37] seb128: do you have a pull-lp-source of live-build?
[20:37] seb128: i don't know how layers work, but i think ./scripts/build/lb_chroot_linux-image needs to become layer aware
[20:38] and do nothing if the layered build is run and the layer is not the live layer;
[20:39] xnox, I will check later or tomorrow but I need to go for now, thanks again for the help and have a nice evening!
[20:39] seb128: and separately we need to get UPDATE_INITRAMFS_OPTIONS="CASPER_GENERATE_UUID=1" in there too.
[20:39] i.e. if live-layer, also rerun CASPER_GENERATE_UUID=1 update-initramfs -u
[20:39] k
[20:40] because ./scripts/build/lb_chroot_hacks is not run at all, i think, or something like that.
[20:40] seb128: so we had a headcount cancelled multiple times / postponed / failed to hire, to complete ubuntu-image => which is a snap, and just builds everything in one go.
[20:40] seb128: it was meant to be able to do all classic images, reproducibly. but alas, still not here.
[20:41] aka a replacement for debian-cd and the image building.
[20:41] right
[20:41] I wish I had an IRC proxy but I don't, need to drop now sorry
[20:42] xnox, thanks again for the help and the advice! enjoy your evening
[21:46] 16:49 openssh has two regressions, libssh and sshuttle - is anyone looking at it?
[21:46] bdmurray: ^ FWIW I just got libssh fixed in Debian today
[21:47] will need a merge
[21:48] sshuttle looks plausibly transient - retrying
[21:49] cjwatson: I'd already retried sshuttle
[21:49] cjwatson: I also found debian bug 974039
[21:49] Debian bug 974039 in libssh "libssh: autopkgtest failure with openssh 8.4p1" [Important,Fixed] http://bugs.debian.org/974039
[21:52] ah, I thought I'd checked sshuttle's history but maybe I failed
[21:52] It's likely still in the queue
[21:52] ack
[21:53] Oh nope, it's running right now
[22:18] bdmurray: Do you have an idea what's going on re bug #1903574? The Debian maintainer would appreciate a helping hand.
[22:18] bug 1903574 in isenkram (Ubuntu) "isenkram-lookup crashed with SIGSEGV in g_type_check_instance_cast()" [Medium,New] https://launchpad.net/bugs/1903574
[22:18] GunnarHj: I haven't looked at it other than trying to get it to retrace
[22:19] GunnarHj: sorry for the noise!
[22:19] bdmurray: Ok, n.p.
=== ebarretto_ is now known as ebarretto