[13:53] <cpaelzer> doko: I can confirm (from PPA) that the postgresql bugs related to perl will vanish with the stable update that I'm preparing
[13:53] <cpaelzer> so that should be good over the weekend or on monday
[13:53] <doko> nice
[13:54] <cpaelzer> and libvirt >6.8 I already mentioned that I have that ready and will work on it next week (after +1)
[13:54] <doko> cpaelzer: what about libvirt?
[13:54] <cpaelzer> that will unblock the two others
[13:54] <doko> ahh, ok. alternative would be remove that version from -proposed, and build with the old one
[13:55] <cpaelzer> no I'm rather close
[13:55] <cpaelzer> give me a few days to finalize it
[13:55] <doko> ok
[13:55] <cpaelzer> if - when looking into the test results - it looks horrible, then I'll let you know that we should revert and build the old version
[13:55] <cpaelzer> as an interim solution
[16:08] <zyga> mwhudson: hey, is there a chance to backport a fix in go's IO loop: https://go-review.googlesource.com/c/go/+/232862/
[16:09] <zyga> mwhudson: this bug incorrectly surfaces EINTR in valid go programs
[16:17] <cristiangsp> Hello everybody,
[16:17] <cristiangsp> We want to make you aware that on Friday November 20th and Saturday November 21st there will be a scheduled downtime in our Boston data center to move several compute nodes that form part of the Launchpad build farm.
[16:17] <cristiangsp> During this maintenance window, the Launchpad build farm will not be able to perform builds on the following platforms: arm64, armhf, ppc64el, s390x, powerpc and armel.
[16:17] <cristiangsp> We appreciate your patience and understanding. If you have questions or issues, please submit them at: https://help.launchpad.net/Feedback.
[16:41] <Laney> That affects autopkgtest too FWIW.
[16:41] <juliank> :(
[16:42] <Laney> I know, earlier today I was like "hmm, ok, they should be done by the end of the weekend"
[16:46] <Laney> I wonder if it'll fix the bos01 ppc64el compute instance though, if the cloud gets turned off and on again
[16:46] <Laney> silver lining?
[16:48] <Laney> ah actually, maybe no worries, that's a week tomorrow not tomorrow!
[18:08] <mwhudson> zyga: probably! does it backport cleanly?
[19:09] <seb128> rcj, Odd_Bloke, so I finally got to try the old fashioned script and I think I got it working, thanks again for that! now how do I turn the .ext4 & co into an iso? using ubuntu-cdimage?
[19:15] <rcj> That's a great question seb128 but I don't know for sure.  We trade in qcows and the like.  But someone like xnox or Laney might be able to tell you how they make the ISOs for server or desktop from your output.
[19:19] <seb128> rcj, alright, thanks
[19:19] <seb128> mwhudson, or maybe you know about ^ ?
[19:19] <rcj> is he awake yet?
[19:21] <seb128> he wrote on this channel an hour ago
[19:22] <seb128> and even if he's not, he might read the backlog later :)
[19:33] <xnox> seb128:  i simply checkout ubuntu-cdimage; and checkout debian-cd branch and drive it normally as ubuntu-cdimage does.
[19:34] <seb128> xnox, how does ubuntu-cdimage know where to find debian-cd ?
[19:34] <xnox> seb128:  and i tweak config to use my own +livefs in launchpad, instead of the production ones.
[19:34] <xnox> seb128:  it finds it in the cwd
[19:34] <xnox> seb128:  readme says to checkout ubuntu's fork of debian-cd.
[19:34] <seb128> xnox, is that https://code.launchpad.net/~ubuntu-cdimage/debian-cd/ubuntu ?
[19:34] <xnox> i have no idea why they are separate repos, but they are.
[19:35] <xnox> seb128:  sounds right.
[19:36] <seb128> xnox, I'm getting confused
[19:36] <seb128> $ DIST=bionic for-project ubuntu cron.daily-live --live
[19:36] <seb128> is what I try in a debian-cd checkout
[19:36] <seb128> (with the ubuntu-cdimage/bin in the path)
[19:36] <seb128> but it errors with
[19:36] <seb128> cdimage.livefs.LiveBuildsFailed: No live filesystem builds succeeded.
[19:39] <seb128> after export CDIMAGE_ROOT to the ubuntu-cdimage checkout I get
[19:39] <seb128> cdimage.livefs.UnknownArchitecture: No live filesystem builder known for amd64
[19:39] <seb128> I wonder how many people had to figure out again those details over the years
[19:42] <seb128> xnox, and re livefs, there is no way to do that locally, you need to hit launchpad?
[19:47] <xnox> seb128:  not in debian-cd checkout
[19:48] <xnox> seb128:  one dir above it.
[19:48] <xnox> ubuntu-cdimage/debian-cd/
[19:48] <xnox> you want to be in the ubuntu-cdimage
[19:48] <xnox> with debian-cd available, as ./debian-cd
[19:48] <xnox> normally
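[editor's note] The layout xnox describes above can be sketched as a one-time setup. This is a hypothetical sketch: the branch URL for ubuntu-cdimage and the use of bzr are assumptions (xnox mentions `bzr shelve` later in the log); only lp:~ubuntu-cdimage/debian-cd/ubuntu is named in the discussion.

```shell
# Hypothetical setup for the layout described above (branch URLs assumed):
bzr branch lp:ubuntu-cdimage
cd ubuntu-cdimage
# debian-cd must be available as ./debian-cd inside the ubuntu-cdimage checkout
bzr branch lp:~ubuntu-cdimage/debian-cd/ubuntu debian-cd
# add ubuntu-cdimage/bin to PATH and run builds from this directory
export PATH="$PWD/bin:$PATH"
DIST=bionic for-project ubuntu cron.daily-live
```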
[19:48] <seb128> k, well I get
[19:48] <seb128> cdimage.livefs.UnknownArchitecture: No live filesystem builder known for amd64
[19:48] <xnox> that sounds like lack of permissions
[19:49] <xnox> are you in ~ubuntu-cdimage?
[19:49] <seb128> yes
[19:49] <xnox> cause in etc/livefs-launchpad, i normally tweak it to use my own xnox/<any name> livefs
[19:49] <xnox> ok
[19:50] <xnox> path = os.path.join(config.root, "production", "livefs-builders")
[19:50] <xnox> i never have CDIMAGE_ROOT exported.....
[19:50] <seb128> do you add ubuntu-cdimage/bin to PATH?
[19:50] <xnox> just the bin added to path, and then i'm just in ubuntu-cdimage/ folder
[19:51] <xnox> yes, add ubuntu-cdimage/bin to PATH, and be in ubuntu-cdimage
[19:51] <seb128> I should maybe edit etc/livefs-launchpad ?
[19:52] <xnox> one sec, yeah, you ended up in the "ssh into builder" codepath (back in the day when we did not use launchpad as livefs builders)
[19:52] <xnox>         if lp_livefs is not None:
[19:52] <seb128> also how does it know where to find the artifact if those are local?
[19:52] <xnox> yeap, your livefs got empty
[19:53] <seb128> $ ls ~/build.output/
[19:53] <seb128> binary.log  livecd.ubuntu.ext4            livecd.ubuntu.kernel-generic  livecd.ubuntu.manifest-minimal-remove  log
[19:53] <seb128> etc         livecd.ubuntu.initrd-generic  livecd.ubuntu.manifest        livecd.ubuntu.manifest-remove
[19:53] <seb128> it should use those right?
[19:53] <xnox> seb128: no.
[19:53] <xnox> seb128:  ubuntu-cdimage by default looks up the livefs (the build it triggered, or the last one), downloads things, and puts them into the scratch dir to invoke debian-cd against.
[19:54] <xnox> you can patch the download code, to just copy things from stuff you built locally.
[19:54] <xnox> or sometimes i just comment out downloading, after one download and hack things myself byhand.
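[editor's note] The by-hand copying xnox describes could be wrapped in a small helper like the sketch below. The destination layout and `amd64.*` file names are assumptions based on the `~/build.output` listing above, not cdimage's verified naming.

```shell
# Hypothetical helper: put locally built old-fashioned artifacts into the
# cdimage scratch dir instead of downloading them from Launchpad.
# Destination file names are assumptions, not cdimage's real layout.
copy_livefs_artifacts() {
    src="$1"   # e.g. ~/build.output from an old-fashioned run
    dst="$2"   # e.g. scratch/ubuntu/hirsute/daily-live/live
    mkdir -p "$dst"
    for suffix in ext4 kernel-generic initrd-generic manifest; do
        cp "$src/livecd.ubuntu.$suffix" "$dst/amd64.$suffix"
    done
}
```

Usage (paths illustrative): `copy_livefs_artifacts ~/build.output scratch/ubuntu/hirsute/daily-live/live`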
[19:54] <seb128> sounds like that should be an option rather than individuals redoing that patching :)
[19:54] <xnox> seb128:  also i don't use old-fashioned, i simply use livefs builder in launchapd
[19:54] <seb128> doing it locally sounds better than wasting infra resources
[19:55] <xnox> seb128:  well, if my team's upload would not be reverted we might have more time to work on cdimage ;-)
[19:55] <seb128> do you remember offhand where the download code you patch is?
[19:55] <seb128> xnox, punt :p
[19:55] <xnox> seb128:  yeah, i should have it as the bzr shelve, one sec.
[19:55] <xnox> and then i run the build without  the '--live' option
[19:56] <xnox> download_live_filesystems and download_live_items
[19:56] <xnox> are the ones i stub out.
[19:56] <xnox> seb128:  but it would be nice, to fix ubuntu-cdimage local instance not able to talk to launchpad.
[19:57] <xnox> seb128:  cause i guess you might want to have one end-to-end run, with everything stock.
[19:57] <xnox> cause then there are logs & scratch dir with all the state, which then one can inspect and just run parts of it.
[19:59] <xnox> seb128:  path = os.path.join(config.root, "etc", "livefs-launchpad") -> this must exist and does exist, if one is in the ubuntu-cdimage checkout.
[19:59] <xnox> seb128:  and if your config.root is off, then lots of stuff will not work.
[20:00] <xnox> seb128:  if you cd into your checkout of "ubuntu-cdimage" (i.e. ~/canonical/ubuntu-cdimage) and then run  DIST=bionic for-project ubuntu cron.daily-live
[20:01] <xnox> does it work?
[20:01] <xnox> it should download all the things, and just work.
[20:01] <seb128> Exception: RSYNC_SRC not configured!  Edit /home/ubuntu/ubuntu-cdimage/production/anonftpsync or /home/ubuntu/ubuntu-cdimage/etc/anonftpsync and try again.
[20:09] <xnox> seb128:  horay, that's good!
[20:09] <xnox> seb128:  adjust etc/anonftpsync to taste, or make changes to skip the mirroring stuff.
[20:10] <seb128> xnox, can you maybe share your download functions diff?
[20:10] <xnox> seb128:  i have CDIMAGE_NOSYNC=1 =>
[20:10] <seb128> easier than redo that part
[20:10] <seb128> FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/ubuntu-cdimage/ftp/dists/bionic/main/debian-installer/binary-amd64/Packages.gz'
[20:10] <xnox> seb128:  i've looked for it, and i don't have it.
[20:11] <xnox> seb128:  without a mirror, locally, one cannot make an iso. because it needs to download things. and there is no way to rsync unsplit mirror locally.
[20:11] <xnox> seb128:  so i rsync the dists/ on people.canonical.com
[20:11] <xnox> seb128:  and then rsync it locally; and update whenever i want newer bootloaders.
[20:11] <xnox> seb128:  lately i had to build new images, with changes in livecd-rootfs => meaning full rebuilds.
[20:12] <xnox> seb128:  but yeah ftp/dists/$suite for the suites you want to build for are needed
[20:12] <xnox> seb128:  and partial pools.
[20:12] <seb128> bah
[20:12] <xnox> seb128:  i dream to rework cdimage to work off networked mirror with apt-get download
[20:12] <xnox> seb128:  because one can totally add all the repos for all the arches locally. without this mirror bullshit.
[20:12] <seb128> I don't understand why I need all this
[20:12] <seb128> old fashioned gave me the ext4 manifest etc
[20:13] <seb128> it sounds like building an ISO from those artifacts shouldn't be that much overhead :/
[20:13] <xnox> seb128:  to assemble the iso these things are needed https://paste.ubuntu.com/p/CpGHgDgTMV/
[20:13] <xnox> seb128:  the iso consists of: squashfs, bios/uefi bootloader, packages pool of files with optional debs needed for offline install.
[20:14] <xnox> seb128:  that's for recent series
[20:14] <xnox> seb128:  for bionic it needs access to the bionic's isolinux ubuntu theme grub etc. to unpack them, put them in the scratch dir and pass them to the xorriso command when assembling the iso
[20:14] <xnox> seb128:  and the pool on the production iso must be signed with the cdimage key....
[20:15] <seb128> I'm not interested in bionic, I just thought it would give me a stable basis
[20:15] <xnox> seb128:  hence launchpad/livecd-rootfs cannot spit out ready made iso, because it will not sign things on it.
[20:15] <xnox> seb128:  which series do you care about then?
[20:15] <seb128> xnox, so basically going down the path we were discussing, I need to  create a mirror in ftp/dists?
[20:15] <xnox> seb128:  because every series supports different targets and behaves differently and is assembled differently, and needs a different amount of partial pool.
[20:15] <xnox> seb128:  yes, you need ftp/dists for the suites you  want to build for.
[20:16] <seb128> xnox, hirsute, my goal is to fix the canary build to use it as a basis for the subiquity desktop project
[20:16] <xnox> for the arches you want to build for.
[20:16] <seb128> you don't have a command to rsync or create that pool? ;)
[20:16] <seb128> I'm still amazed that no-one properly documented those
[20:17] <xnox> most cdimage people.... simply ssh into old minster, and run builds from there.
[20:17] <seb128> there is probably a bunch of us that went through the exercise to get this somehow working
[20:17] <seb128> each having their own tricks
[20:17] <xnox> i am one of the few who don't have that ssh access, hence have a local deployment; and i bootstrapped juliank to it too
[20:17] <xnox> seb128:  sync
[20:17] <xnox> $ cat bin/sync
[20:17] <xnox> #!/bin/sh
[20:17] <xnox> set -x
[20:17] <xnox> MIRROR=/home/xnox/mirror/ubuntu
[20:17] <xnox> DEVEL=groovy
[20:17] <xnox> rsync -av --exclude dapper\* --exclude edgy\* --exclude feisty\* --exclude gutsy\* --exclude hardy\* --exclude intrepid\* --exclude jaunty\* --exclude karmic\* --exclude lucid\* --exclude maverick\* --exclude natty\* --exclude oneiric\* --exclude quantal\* --exclude raring\* --exclude saucy\* --exclude utopic\* --exclude wily\* --exclude yakkety\* --exclude zesty\* --exclude artful\* --exclude bionic\*
[20:17] <xnox> --exclude disco\* --exclude cosmic\* --exclude eoan\* --exclude precise\* --exclude xenial\* --exclude trusty\* --exclude vivid\* --include Packages\* --include Sources\* --include Release\* --include InRelease\* --include udeb.list --include \*\*/installer-\*/current --include "**/$DEVEL/Contents-*" --include "**/focal*/Contents-*" --include \*/ --include i18n/\* --exclude \* --delete --delete-excluded
[20:17] <xnox> --prune-empty-dirs ftpmaster.internal::ubuntu-dists/ "$MIRROR/dists/"
[20:17] <xnox> GERMINATE="/home/xnox/mirror/ubuntu-germinate"
[20:18] <xnox> rsync -avz --include germinate.output --exclude _\* --exclude \*.new --include "*_groovy_*" --include "*_focal_*" --exclude \* --delete --delete-excluded ftpmaster.internal::ubuntu-germinate/ "$GERMINATE/"
[20:18] <xnox> that will give you dists..... bah
[20:18] <xnox> https://paste.ubuntu.com/p/MWNb6zrYD3/
[20:18] <seb128> thanks
[20:18] <xnox> seb128:  and then for the pool, i just run the build, see where it fails to fetch, mkdir  -p that dir, and then run like $ pull-lp-debs --arch $arch $pkg => for the right version, or series.
[20:18] <xnox> i guess you will want the latest cd-boot-images-amd64
[20:19] <xnox> it's 9 at the moment.
[20:19] <xnox> and you will need the keyring too
[20:19] <xnox> so pull-lp-debs ubuntu-keyring
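[editor's note] The pool-filling loop xnox describes (mkdir the missing dir, then pull-lp-debs into it) could be scripted roughly as below. The package list, series, and pool layout are illustrative assumptions; only the `pull-lp-debs --arch` invocation itself comes from the discussion.

```shell
# Hypothetical sketch of filling the partial pool with pull-lp-debs
# (from ubuntu-dev-tools); packages, series, and paths are assumptions.
SERIES=hirsute
for pkg in cd-boot-images-amd64 ubuntu-keyring; do
    first=$(printf %.1s "$pkg")        # pool/main/<first letter>/<source>
    dir="ftp/pool/main/$first/$pkg"
    mkdir -p "$dir"
    ( cd "$dir" && pull-lp-debs --arch amd64 "$pkg" "$SERIES" )
done
```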
[20:20] <seb128> xnox, alright, I'm doing a rsync and will probably come back to that in a while or tomorrow, thanks for the guidance
[20:21] <seb128> xnox, the other option would be to just restore the canary build and try to fix it by upload and watch the result on the official infra :p
[20:21] <xnox> seb128:  yes that would be the quickest way, to just readd it in the etc/crontab
[20:22] <xnox> seb128:  because then for example, i will see the logs too, as they would be publicly mirrored.
[20:22] <xnox> and anyone else.
[20:22] <seb128> xnox, well I expect the first thing is going to fix the kernel problem that was discussed back then when didrocks and ji_bel were working on it, I don't think that anyone handled it
[20:22] <xnox> seb128:  especially if the canary image is to become the new subiquity-desktop playground.
[20:23] <seb128> that's the plan
[20:23] <seb128> but I wanted to iterate locally rather than waste infra resources to test things, I thought it would also be quicker to iterate
[20:23] <seb128> I didn't expect it to be so complex to set up
[20:23] <xnox> seb128:  what was the kernel problem? when we redid the bootloader, it didn't work for the canary build? a debian-cd / ubuntu-cdimage build log would be helpful for that.
[20:24] <xnox> seb128:  this one is _easy_ =))))))) wait till you see the CPC or the OEM-images setup =)))))))))))))))))))
[20:24] <xnox> (the publishers there forked ubuntu-cdimage back when it was still shell, with ceph buckets and jenkins added)
[20:25] <seb128> lol
[20:25] <seb128> xnox, what you wrote back then was 'We should make kernel/linux install be done in the is_live_layer only. Including regenerating the initrd as done in hacks, and rip it out.'
[20:30] <xnox> seb128:  oooooh.
[20:31] <xnox> seb128:  but that's in livecd-rootfs and live-build, not in ubuntu-cdimage/debian-cd levels.
[20:33] <seb128> xnox, right, well I'm trying to build up my way from bottom to top
[20:34] <seb128> I thought old fashioned would drive livecd-rootfs for me
[20:34] <seb128> which is where I started
[20:34] <seb128> well; I think that's still true
[20:34] <seb128> if I replace the download code and use those artifacts
[20:35] <xnox> yes that would drive livecd-rootfs & live-build, but not ubuntu-cdimage/debian-cd.
[20:35] <seb128> I can use old fashioned to build with the livecd-rootfs changes
[20:35] <seb128> and then I need the upper part of the stack to end up with the ISO
[20:35] <xnox> imho having a personal xnox/ubuntu +livefs build, in etc/livefs-launchpad is easier than old-fashioned.
[20:35] <xnox> with upload of livecd-rootfs / live-build to a ppa
[20:35] <xnox> and launch the builds in that ppa
[20:35] <xnox> thus iterating out of archive.
[20:35] <seb128> I don't know how to do that either
[20:35] <seb128> so much knowledge missing there :)
[20:36] <xnox> seb128:  yes, we have teams that only do that. i.e. cpc & oem.
[20:36] <seb128> they should fix the canary image for us :p
[20:36] <xnox> seb128:  which salesforce contract is it for? they have minimum revenue before they can be invoked =)
[20:37] <xnox> seb128:  do you have pull-lp-source of live-build?
[20:37] <xnox> seb128:  i don't know how layers work, but i think the ./scripts/build/lb_chroot_linux-image needs to become layer-aware
[20:38] <xnox> and do nothing if the layered build is run and the layer is not the live-layer;
[20:39] <seb128> xnox, I will check later or tomorrow but I need to go for now, thanks again for the help and have a nice evening!
[20:39] <xnox> seb128:  and separately we need to get UPDATE_INITRAMFS_OPTIONS="CASPER_GENERATE_UUID=1" into there too.
[20:39] <xnox> i.e. if live-layer also rerun CASPER_GENERATE_UUID=1 update-initramfs -u
[20:39] <seb128> k
[20:40] <xnox> because ./scripts/build/lb_chroot_hacks is not run at all, i think, or something like that.
[20:40] <xnox> seb128:  so we had a headcount cancelled multiple times / postponed / fail to hire, to complete ubuntu-image => which is a snap, and just builds everything in one go.
[20:40] <xnox> seb128:  it was meant to be able to do all classic images, reproducibly. but alas, still not here.
[20:41] <xnox> aka replacement for debian-cd and the image building.
[20:41] <seb128> right
[20:41] <seb128> I wish I had an IRC proxy but I don't, need to drop now sorry
[20:42] <seb128> xnox, thanks again for the help and the advice! enjoy your evening
[21:46] <cjwatson> 16:49 <sil2100> openssh has two regressions, libssh and sshuttle - is anyone looking at it?
[21:46] <cjwatson> bdmurray: ^ FWIW I just got libssh fixed in Debian today
[21:47] <cjwatson> will need a merge
[21:48] <cjwatson> sshuttle looks plausibly transient - retrying
[21:49] <bdmurray> cjwatson: I'd already retried sshuttle
[21:49] <bdmurray> cjwatson: I also found debian bug 974039
[21:52] <cjwatson> ah, I thought I'd checked sshuttle's history but maybe I failed
[21:52] <bdmurray> It's likely still in the queue
[21:52] <cjwatson> ack
[21:53] <bdmurray> Oh nope, it's running right now
[22:18] <GunnarHj> bdmurray: Do you have an idea what's going on re bug #1903574? The Debian maintainer would appreciate a helping hand.
[22:18] <bdmurray> GunnarHj: I haven't looked at it other than trying to get it to retrace
[22:19] <bdmurray> GunnarHj: sorry for the noise!
[22:19] <GunnarHj> bdmurray: Ok, n.p.