[01:04] <infinity> slangasek, nacc: Build-Conflicts in LP are basically pointless, since every build chroot is "fresh", so the only things you could conflict against is stuff we have installed on purpose, which you may find less than helpful to do.
[01:04] <infinity> (If you're talking crypto stacks, it's probably there because of apt-transport-https, which needs to be there to support private PPAs)
[01:05] <infinity> But fixing build systems to not wig out about locally-installed cruft is always more robust than a Build-Conflict anyway.
[01:07] <slangasek> infinity: right, so in a public build where apt-transport-https isn't needed, would a Build-Conflicts: with it successfully remove it and let the build proceed?
[01:07] <nacc> infinity: yeah, i think that's the right answer, just not exactly sure what the right fix should be -- have some ideas (and i have one hacky method that does work) -- but i'm still thinking about the 'right' solution
[01:08] <infinity> slangasek: It's plausible that sbuild might remove it for you, yeah.
[01:08] <nacc> slangasek: but in this specific case, that won't help, because of the b-d on libldap2-dev itself :/
[01:08] <slangasek> right
[01:08] <infinity> Though, in *this* case, it's particularly silly for a package to fail to build when its own binaries are already installed.
[01:09] <nacc> yeah, it's a testcase bug, without question
[01:09] <infinity> That almost has to be a bug in the Debian packaging somewhere, cause I can't fathom how an upstream wouldn't notice.
[01:09] <infinity> Oh, it's just the testsuite?
[01:09] <nacc> it's the generated LD_LIBRARY_PATH in the test wrapper
[01:09] <infinity> That's more plausible that someone committed an oops there indeed.
[01:10] <infinity> We've had any number of glibc tests over the years that were accidentally testing the system libc.
[01:10] <nacc> which is putting the system libs in there, since sqlite3 is from the system, and then we end up picking up heimdal-crypto from it and there's an ABI mismatch
[01:10] <infinity> Turns out that's really hard to notice until it breaks.
[01:11] <infinity> nacc: Surely, the build directories should always be listed first.
[01:11] <infinity> There's literally no point in an LD_LIBRARY_PATH that puts your special things last.
[01:12] <infinity> "Yeah, I guess maybe test that one, if nothing else is around *shrug*"
[01:12] <wxl> what about figuratively?
[01:12] <nacc> infinity: yeah, i agree -- it's just 'happening' to do this, it feels like
[01:12] <nacc> i think upstream heimdal may build sqlite3 at the same time? not sure
[01:12] <infinity> Ew.
[01:13] <nacc> infinity: but i agree, it makes sense (esp. for this testcase) to i think order the LD_LIBRARY_PATH
[01:13] <nacc> or something like that, just need to figure out what generates them and tweak it, i expect
[01:13]  * infinity nods.
[01:13] <infinity> LD_LIBRARY_PATH should always be hand-crafted for a sane order.
[01:13] <infinity> Well, "hand-crafted" that would ideally use an intelligent generator, but you know what I mean.
[01:14] <infinity> "Have some random paths" will end in sad.
[01:14] <nacc> yeah
[01:14] <nacc> very sad in this case :)
[01:14] <infinity> I'd cite the glibc testsuite as a solid example of how to get this right (now), but also, don't read it.
[01:15] <nacc> heh
[01:16] <infinity> We may reach a critical tipping point where lines of make outnumber lines of C in glibc.
[01:16] <infinity> (Not really, but it feels that way)
[01:22] <mwhudson> infinity: does it use dejagnu?
[01:28] <infinity> mwhudson: Nope.
[01:31] <mwhudson> infinity: well that's something at least
[01:37]  * mwhudson remembers "DejaGNU has to be stopped somewhere." https://sourceware.org/ml/binutils/2008-03/msg00221.html
[01:43] <mwhudson> lol
[03:20] <Unit193> sarnold: BTW, atheme-services, https://github.com/atheme/atheme/blob/master/NEWS.md (CVE-2014-9773, CVE-2016-4478) That's the version Zesty has.
[03:24] <sarnold> xmlrpc processing in irc? o_O what happened to our crappy thirty year old protocol?
[03:25] <Unit193> .8 is a fix+more for .7, and .9 is a fix for .8. :P
[03:35] <UHck> hey
[03:39] <UHck> hi
[03:40] <UHck> I want to know what other developers are working on right now so I can help out. I want to be MOTU
[06:29] <cpaelzer> good morning
[11:27] <Laney> tjaalton: I guess you saw the dogtag-pki autopkgtest failures?
[11:41] <tjaalton> Laney: the old one?
[11:41] <tjaalton> -2ubuntu1 should fix that
[11:42] <Laney> ok
[11:42] <Laney> I didn't see that on excuses
[11:42] <tjaalton> uploaded -3 to debian and then decided I can't wait 10h :)
[11:59] <UHck_> Hey does anyone know any cool projects being worked on for ubuntu
[12:02] <tjaalton> why the armhf builder is so painfully slow..
[12:42] <cult-> xnox: thanks for your work! how long does it take to move the proposed packages to the universe repo?
[12:44] <xnox> cult-, everything is described on https://wiki.ubuntu.com/StableReleaseUpdates have you read it?
[12:45] <xnox> plus not everything has been published yet.
[12:45] <cult-> alright. i verified it already. do we have to verify all the other packages?
[13:15] <liveiso> Hi IRC Council
[13:16] <liveiso> any articles about how official daily ISO are build?
[13:20] <Laney> tjaalton: It's failing still https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/amd64/d/dogtag-pki/20170216_132005_70578@/log.gz
[13:21] <Laney> did you run it locally?
[13:21] <tjaalton> Setting up libresteasy-java (3.0.19-2)
[13:21] <tjaalton> where did that come from
[13:22] <tjaalton> it should depend on 3.1.0-2
[13:22] <tjaalton> ah heck
[13:23] <tjaalton> only on build-depends
[13:23] <Laney> /o\
[13:24] <liveiso> any one know how a ubuntu daily build are created?
[13:24] <tjaalton> still, proposed is enabled so why doesn't it pick up the new one
[13:25] <Odd_Bloke> liveiso: They're built in Launchpad's builders using livecd-rootfs and live-build.  Do you have a specific question/issue?
[13:27] <Laney> you get the minimal amount of things from proposed
[13:28] <liveiso> Hi Odd_Bloke, I want to learn advanced knowledge and background info
[13:28] <liveiso> https://debian-live.alioth.debian.org/live-manual/stable/manual/html/live-manual.en.html
[13:29] <liveiso> I know debian distro has this project to build ISO, so I ask the same project that creating daily iso for ubuntu
[13:29] <tjaalton> Laney: oh well, that's not what breaks the test though
[13:29] <tjaalton> I have it working with 3.0.19-3 just fine
[13:30] <liveiso> after several google search (result quite less..), I guess there must be a build team in launchpad
[13:34] <Laney> tjaalton: This command fails in the same way for me: autopkgtest -s -U --apt-pocket=proposed=src:dogtag-pki dogtag-pki -- lxd autopkgtest/ubuntu/zesty/amd64
[13:39] <tjaalton> I've lost my lxc images
[13:39] <tjaalton> can't remember how I tested these four months ago
[13:40] <Laney> autopkgtest-build-lxd images:ubuntu/zesty/amd64
[13:40] <Laney> assuming lxd works for you in general
[13:40] <Laney> I would guess that qemu will do the same too
[13:45] <tjaalton> lxd doesn't work because my uid is silly
[13:46] <tjaalton> creating the image does, autopkgtest does not
[13:47] <tjaalton> works with sudo
[13:51] <tjaalton> and fails the same
[13:51] <tjaalton> how can I see what lxd images are around?
[13:51] <Laney> lxc image list
[13:52] <tjaalton> thx
[13:52] <tjaalton> dunno then, pkispawn works fine on a qemu host
[13:56] <Laney> autopkgtest-buildvm-ubuntu-cloud -r zesty, replace the end of the autopkgtest command with -- qemu autopkgtest-zesty-amd64.img
[13:56] <Laney> that fails in the same way
[13:56] <Laney> qemu instead of lxd this time
[13:57] <tjaalton> okay
[14:00] <tjaalton> i'll look into it
[14:02] <Laney> awesome
[14:02] <Laney> now I can lunch happy
[14:47] <tjaalton> Laney: weird, restarting it manually from the autopkgtest instance works
[15:21] <Laney> fun
[15:30] <tjaalton> guess I found the bug, just don't know why it happens now
[15:30] <tjaalton> Feb 16 15:11:38 autopkgtest-lxd-nffbdq pki-tomcatd[7293]: ERROR:  No 'tomcat' instances installed!
[15:31] <tjaalton> it runs "/etc/init.d/pki-tomcatd start pki-tomcat" and it fails, with just "start" it works
[16:24] <nacc> rbasak: caribou: sorry if already discussed, but in the nut merge, is that conflict context in the diff? (<<<<<<
[16:25] <rbasak> nacc: it's not present. I think it's because it technically conflicts with debian/sid because debian/sid is slightly newer now.
[16:25] <rbasak> I think it's an LP bug.
[16:26] <nacc> rbasak: oh ok, wasn't sure, just saw it in the e-mail
[16:32] <smoser> hey...  the open-iscsi tests take a long time to run.
[16:32] <smoser>  and end in timeout sometimes
[16:32] <smoser>  example https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-zesty/zesty/i386/o/open-iscsi/20170210_235810_8d27d@/log.gz
[16:32] <smoser> anyone have suggestions on how to do that ?
[16:33] <smoser> to increase the timeout essentially? it was *not* hung, it just is slow
[16:34] <smoser> and then... cloud-utils is held because of such a failure (and another failure also). my uploaded version of open-iscsi from yesterday should be fixed
[16:34] <pitti> Laney: ^ add it to long_tests in worker*.conf, please?
[16:34] <smoser> do i just hit the recycle icon ?
[16:34] <Laney> long_tests
[16:34] <Laney> pitti: Yes, I know, thanks.
[16:34] <smoser> \o/ that was easy.
[16:34] <pitti> ah, good :)
[16:34] <smoser> so do ij ust hit the recycle button ?
[16:35] <Laney> You wait, and then do that
[16:35] <pitti> makes more sense to wait until the change is deployed
[16:36] <Laney> I have to pull in like 9999 locations
[16:36]  * Laney sniggers
[16:38] <smoser> ok. thank you.
[16:40] <Laney> smoser: ok, do it now
[16:44] <nacc> tjaalton: hrm, new dogtag-pki also fails autopkgtest
[16:45] <tjaalton> I know
[16:45] <Laney> haha
[16:45] <tjaalton> it's some sort of a race condition
[16:45] <nacc> tjaalton: i can reproduce locally -- it seems like a tomcat interaction?
[16:45] <tjaalton> no
[16:45] <nacc> tjaalton: it reproduces every time for me at home -- doesn't feel like a race we ever win :)
[16:46] <tjaalton> does autopkgtest drop to a shell?
[16:46] <tjaalton> does so here
[16:47] <nacc> tjaalton: not the way it is run in the automatic tests, but with -s, yes
[16:47] <nacc> tjaalton: i'm at the shell in a failure at home, if you want me to debug anything
[16:47] <tjaalton> "/etc/init.d/pki-tomcatd stop; /etc/init.d/pki-tomcatd start pki-tomcat" works fine after the test fails
[16:47] <tjaalton> it thinks /etc/dogtag/tomcat is empty
[16:47] <tjaalton> the first time
[16:49] <tarpman> tjaalton: this isn't the same thing I wrote about in bug 1664453?
[16:49] <nacc> tarpman: good bug/reminder
[16:50] <tjaalton> tarpman: now it is, after all the other bugs got sorted
[16:50] <tjaalton> or maybe is
[16:50] <tjaalton> but the error was
[16:50] <tjaalton> Feb 16 15:11:38 autopkgtest-lxd-nffbdq pki-tomcatd[7293]: ERROR:  No 'tomcat' instances installed!
[16:50] <tjaalton> from the initscript
[16:50] <tjaalton> that left the job in "active(exited)" status
[16:50] <tarpman> my analysis was - that happens when the package is initially installed, before anything is configured
[16:51] <tjaalton> ah
[16:51] <tjaalton> could be
[16:51] <tarpman> but the start() issued from pkispawn is a no-op because systemd considers the service already started
[16:51] <tjaalton> yeah the timestamp might suggest that actually
[16:51] <tarpman> I changed start() to restart() in scriptlets/configuration.py and that made the test pass on my system *shrug*
[16:51] <tarpman> this is all in the bug, anyway ;)
[16:52] <tjaalton> sure
[16:52] <tjaalton> it really shouldn't start anything by default
[16:53] <tarpman> I was wondering about that. not sure how to set up a sysv service that starts on boot but not when installed - at least without resorting to ENABLED=0 sort of hacks
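For the record, debhelper has a stock answer to tarpman's question; a hedged debian/rules sketch (assuming the package builds with dh and dh_installinit):

```make
# dh_installinit --no-start registers the script via update-rc.d (so it
# still starts on boot) but skips the invoke-rc.d start in postinst,
# avoiding ENABLED=0 style hacks.
override_dh_installinit:
	dh_installinit --no-start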
[17:01] <tjaalton> I'll fix the initscript
[17:01] <nacc> tjaalton: where does the 143 SuccessExitStatus come from?
[17:02] <tjaalton> nacc: where?
[17:02] <nacc> tjaalton: in the pki-tomcatd service file
[17:02] <tjaalton> no idea
[17:02] <tjaalton> upstream
[17:03] <nacc> 'pki-tomcat' must still be CONFIGURED! ... (see /var/log/pki-tomcat-install.log). Which doesn't exist :)
[17:03] <smoser> nacc, do i need to kick the importer manually ?
[17:03] <smoser> it does not have my open-iscsi upload from yesterday
[17:04] <ogra_> wouldn't that be "footually" (if you kick) ?
[17:04] <ogra_> :)
[17:04] <tjaalton> nacc: where do you see that path?
[17:04] <nacc> smoser: let me check
[17:04] <nacc> tjaalton: i was messing around locally and trying to start the service after the failure and `systemctl status pki-tomcatd` says that
[17:05] <tjaalton> nacc: 143; https://fedorahosted.org/pki/ticket/716
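For context, the directive being discussed: 143 is 128 + SIGTERM(15), the exit code a JVM returns on a normal Tomcat shutdown, so the unit declares it a clean exit.

```ini
# Relevant fragment of the pki-tomcatd service unit (sketch):
[Service]
SuccessExitStatus=143
```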
[17:06] <nacc> tjaalton: ack, thanks
[17:07] <tjaalton> nacc: ok, found it.. looks like it's a bit misleading message :) fedora doesn't use any of the initscript crap anymore, but we have no choice because tomcat has not migrated
[17:10] <nacc> smoser: hrm, it seems like it should have, you're right. kicking it
[17:12] <nacc> smoser: there must be a bug in my logic, i'll take a look
[17:12] <nacc> tjaalton: ok :)
[17:31] <nacc> tjaalton: manually running the failing test immediately after being dropped to the shell does produce this gem: http://paste.ubuntu.com/24008402/
[17:33] <tjaalton> hehe
[17:36] <tjaalton> I've got a working initscript now..
[17:36] <tjaalton> which fails the right way ;)
[17:50] <tjaalton> leaves the job in a failed state, guess that's proper.. test passes now
[17:55] <nacc> tjaalton: cool, uploading?
[17:56] <tjaalton> in a bit
[17:57] <nacc> tjaalton: ok
[18:01] <nacc> smoser: i think it should be there now
[18:02] <nacc> rbasak: just to verify, you've not pushed your snapd import fix to master, right?
[18:02] <tjaalton> nacc: done
[18:02] <nacc> tjaalton: thanks!
[18:07] <rbasak> nacc: I thought you had?
[18:07]  * rbasak looks
[18:07] <nacc> rbasak: hrm, it's failing to import, let me check
[18:07] <nacc> rbasak: yeah it's there
[18:07] <rbasak> What's the error?
[18:07] <nacc> rbasak: checking
[18:08] <nacc> rbasak: i bet it's failing to FF one of the -devel heads
[18:09] <nacc> rbasak: that just happened to tomcat8 too
[18:10] <nacc> rbasak: bah i know why
[18:10] <nacc> rbasak: the importer's repo is out of date
[18:11] <rbasak> Ah
[18:11] <nacc> rbasak: sorry for the noise
[18:16] <nacc> tjaalton: i think we can sync -3 from debian, right?
[18:16] <nacc> tjaalton: as in, it's the same as 2ubuntu1 afaict
[18:18] <tjaalton> nacc: I uploaded this as -3u1
[18:18] <nacc> tjaalton: ah ok
[18:18] <tjaalton> debian can wait for this, as it's busted there anyway
[18:19] <nacc> tjaalton: just didn't see it in the queue or on lp
[18:19]  * nacc will be patient
[18:19] <tjaalton> zesty-changes has it
[18:19] <tjaalton> armhf build will take 2h
[18:19] <rbasak> nacc: if you have time, would you mind casting your eyes over https://git.launchpad.net/~racb/ubuntu/+source/nut/log/?h=merge please, before I upload? caribou is out now, so he can't check for me.
[18:20] <nacc> rbasak: looking
[18:24] <nacc> rbasak: do we care that there is a 2.7.4-5 already?
[18:24] <rbasak> Oh, good point.
[18:24] <nacc> looks to be a debian bugfix, so not urgent, i suppose
[18:25] <nacc> rbasak: the debian/tests/test-nut.py is formatted a bit funny
[18:25] <nacc> rbasak: in that the "+ 5..." i think is actually not a sub-bullet
[18:26] <rbasak> nacc: in the changelog?
[18:27] <nacc> rbasak: yeah
[18:27] <nacc> rbasak: http://paste.ubuntu.com/24008659/
[18:27] <nacc> rbasak: the first line just ends 'give nut at most'
[18:28] <rbasak> Ah
[18:28] <rbasak> And now that you point it out, some of the other bullets are indented wrong or use the wrong bullet type.
[18:28] <rbasak> Thanks, I'll fix up.
[18:28] <rbasak> Also I just rebased to debian/sid. Seems to work.
[18:28] <nacc> yeah, cosmetic, but worth cleaning now
[18:28] <nacc> nice
[18:28] <nacc> rbasak: otherwise, content-wise it looks good
[18:28] <mapreri> is DIF already in effect really?
[18:29] <nacc> rbasak: i do think we should at some point adjust our documentation to drop whitespace noise in the changelog diffs
[18:29] <nacc> rbasak: the tail of the debian/changelog changes is all noise
[18:29] <nacc> rbasak: as well as a deletion of some metadata?
[18:31] <rbasak> git-merge-changelogs is a no-op.
[18:31] <rbasak> And a diff against old/ubuntu shows no changes apart from the dropping of the emacs modelines at the bottom
[18:31] <nacc> ack, i'm not saying the merge is wrong
[18:31] <rbasak> So it's an error carried forward. Opinions on fixing up?
[18:31] <nacc> i'm suggesting we fix those errors :)
[18:31] <nacc> not urgent for this merge
[18:32] <rbasak> Let's leave it for now. We can discuss later. I feel that's a job for our tooling.
[18:32] <nacc> we should talk about it generally, though, it's just noisy, and adds some overhead to the review (trivial amount, probably)
[18:32] <rbasak> Yeah
[18:32] <rbasak> http://paste.ubuntu.com/24008663/ are my changes from what you've just reviewed. Look good?
[18:33] <rbasak> asciidoc-dblatex is picked up from latest sid.
[18:33] <rbasak> It's in universe, but appears to be a documentation building thing anyway, so I don't think it'll result in a component mismatch (new rules)
[18:34] <nacc> rbasak: yep, looks good
[18:34] <rbasak> Thank you for the review! Uploading now.
[18:35] <rbasak> Oh, and I need to bump the version.
[18:36] <nacc> rbasak: err, right
[18:43] <rbasak> And it FTBFS due to a symbol mismatch :-(
[18:50] <rbasak> Oddly, it's on ppc64el only. caribou, do you mind looking into that when you're back please?
[18:51] <rbasak> I filed bug 1665431 so hopefully we (server team) won't forget.
[18:53] <nacc> infinity: not urgent, but the heimdal issue seems to actually be from libtool itself (ltmain.sh generates the wrapper script and doesn't elide system library paths from a variable called temp_rpath like it does for a few others (compile_rpath and finalize_rpath)). Not entirely sure how to fix, since that gets regenerated at build-time from libtool
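The class of fix being discussed can be sketched like this; the paths and the filtering policy are illustrative only, not libtool's actual ltmain.sh code:

```shell
# Strip system library dirs out of an LD_LIBRARY_PATH-style variable
# (here temp_rpath) before a wrapper script would export it, the way
# libtool already elides them from compile_rpath/finalize_rpath.
temp_rpath="/build/lib/.libs:/usr/lib:/build/other/.libs:/lib"  # hypothetical
filtered=""
for dir in $(echo "$temp_rpath" | tr ':' ' '); do
  case "$dir" in
    /lib|/usr/lib|/lib64|/usr/lib64) ;;  # drop system dirs
    *) filtered="${filtered:+$filtered:}$dir" ;;
  esac
done
echo "$filtered"
```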
[18:53] <jgrimm> nacc, have time for a merge review? (/me races against the FF clock)
[18:53] <nacc> jgrimm: i should be able to
[18:53] <nacc> jgrimm: which one?
[18:53] <jgrimm> nacc, https://code.launchpad.net/~jgrimm/ubuntu/+source/libqb/+git/libqb/+merge/317540
[18:54] <nacc> jgrimm: well, and technically, FF has happened (see /topic)
[18:55] <jgrimm> nacc, /me had 3pm in his head
[18:55]  * nacc is not entirely sure what DIF stands for
[18:55] <sarnold> what's DIF?
[18:55] <nacc> heh
[18:55] <nacc> Laney: --^
[18:55] <sarnold> nothing useful looking in lastlog or google "ubuntu DIF" ..
[18:56] <nacc> jgrimm: the merge looks good, but if FF is in place, we'd need an FFe, i think
[18:56] <Unit193> Debian Import Freeze, sarnold.
[18:56] <sarnold> thanks Unit193 :D
[18:56] <nacc> Unit193: ah thanks!
[18:56] <Unit193> No problem.
[18:57] <jgrimm> nacc, Zesty will enter Feature Freeze at 21:00 UTC tonight. ?
[18:57] <nacc> hrm, maybe DIF is meant to be the 'type' of freeze right now?
[18:58] <jgrimm> was email sent out a bit ago
[18:58] <nacc> jgrimm: to which list?
[18:58] <jgrimm> nacc, ubuntu-release at least
[18:58] <rbasak> jgrimm: taking the autofs merge.
[18:58] <jgrimm> nacc, and ubuntu-devel-announce
[18:59] <nacc> jgrimm: ack
[18:59] <jgrimm> rbasak, ack and thanks.
[18:59] <jgrimm> nacc, thanks
[19:00] <Unit193> Archives haven't picked 'em up.
[19:32] <rbasak> jgrimm: autofs merge uploaded.
[19:32] <rbasak> It was trivial, so I didn't file an MP.
[19:33] <rbasak> Also *none* of the patches added over the years appears to have been upstreamed, which is quite disappointing.
[19:33] <jgrimm> rbasak, thanks sir
[19:33] <rbasak> I've made a note to do that.
[19:33] <jgrimm> rbasak, yeah seems to be a mixed bag on whether that happens. i like that we've been religious this cycle
[19:34] <nacc> infinity: this is rather ugly, but it does seem to at least pass the tests now. http://paste.ubuntu.com/24008952/
[19:35] <infinity> nacc: patch is build-essential, you don't need to build-dep on it.
[19:36] <nacc> infinity: ah ok
[19:36] <nacc> i think i've seen another src pkg that has had to do similar runtime patching outside of quilt, but I can't recall. I don't love it, and it feels fragile, but I'm also not enough of a libtool expert to say why temp_rpath hasn't been made to resemble the other rpaths
[19:47] <robru> cyphermox: so, looking at https://launchpadlibrarian.net/303039011/buildlog_ubuntu-zesty-arm64.fbset_2.1-29_BUILDING.txt.gz is that as easy as it looks? just add -fPIC?
[19:48] <cyphermox> from my naive look at it, yeah
[19:51] <robru> ok I'll give it a shot
[19:51] <infinity> Probably not that simple.
[19:52] <infinity> That one might rely on a dpkg merge magically making it happy.
[19:53] <infinity> Which I need to do immediatelt after feature freeze. :P
[19:54] <infinity> robru: Given that the fbset change was to use dpkg-buildflags, and it worked in Debian and not Ubuntu, *and* the upload was by the dpkg maintainer, I'll give 90-to-1 odds it relies on a newer dpkg-buildflags. ;)
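For reference, a minimal sketch of wiring dpkg-buildflags into a debian/rules; whether fbset actually needs this (versus a newer dpkg changing what the flags contain) is exactly the open question here:

```make
# Export CFLAGS/CXXFLAGS/LDFLAGS etc. from dpkg-buildflags into the
# build environment; newer dpkg versions add PIE/PIC-related options
# to these by default.
DPKG_EXPORT_BUILDFLAGS = 1
include /usr/share/dpkg/buildflags.mk
```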
[19:55] <robru> infinity: so I'll leave that for you then?
[19:56] <Unit193> Ah, that's when.  I'd asked but you may have been busy.  I'd presume/hope that depends on LP 1657704 (or, at least not rejecting the upload.)
[20:02] <infinity> Unit193: Yes.  I'll need to talk to Colin before I go breaking the world.
[20:14] <jgrimm> are autopkgtests able to touch real world internet? python-boto tests want to interact with AWS, works fine on my system, but possibly not allowed in real build/test infra?
[20:20] <stgraber> jgrimm: so long as your test deals fine with http_proxy and https_proxy you should be fine. Direct connection isn't allowed, but a proxy is provided for adt tests to access the internet.
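A sketch of what "deals fine with http_proxy/https_proxy" means in practice; the proxy URL below is a made-up placeholder, not the real infra address:

```shell
# The test infrastructure exports proxy variables instead of allowing
# direct connections; tools like curl/wget/apt honour these
# automatically, while hand-rolled test code must read them itself.
http_proxy="http://squid.internal:3128"; export http_proxy
https_proxy="$http_proxy"; export https_proxy
echo "proxy in use: ${https_proxy:-none}"
```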
[20:20] <jgrimm> stgraber, ok good to know, i'll look for that
[20:21] <jgrimm> i should probably set my local testing with that configuration too, catch things pre-upload
[20:22] <jgrimm> or learn to use bileto..doh
[20:35] <smoser> hey. anyone ran autopkgtest on ppc64el ?
[20:35] <smoser> failing for me like: http://paste.ubuntu.com/24009324/
[20:35] <wxl> smoser: could be wrong but i thought i saw some discussion on that on -release
[20:37] <smoser> my history doesn't show anything similar, other than that infinity probably can tell me what is wrong.
[20:38] <nacc> infinity: fwiw, i've sent an e-mail to the libtool list to ask about it, but do you think that change makes sense otherwise?
[20:42] <infinity> nacc: I don't have the time to unpack it and have an opinion, sorry. :/
[20:42] <nacc> infinity: no problem!
[20:42] <nacc> rbasak: would you be around by any chance?
[20:46] <rbasak> nacc: o/
[20:55] <bdmurray> smoser: Do you have any plans to update curtin for yakkety like is being done for xenial?
[21:44] <slangasek> nacc: heh, running process-removals... an awful lot of packages removed from Debian unstable in December because they were uninstallable w/ php7 and never fixed
[21:47] <nacc> slangasek: not too surprising
[21:56] <nacc> infinity: quick check-in, I briefed rbasak on my change, I've verified it should have no impact on the built packages (just lets the test pass), I've opened a bugtask on libtool in the heimdal bug. We can always revert the change if you find time to review and disagree with it. Are you ok if I upload the fix?
[21:56] <infinity> nacc: I have no opinion.  If it fixes things and breaks nothing, go for it.
[21:57] <nacc> infinity: ok :)
[22:13] <nacc> i'm struggling to figure out why a debian/.gitignore file is being deleted by my dpkg-buildpackage runs. I think maybe I'm missing a flag, but -i -I doesn't seem to make a difference.
[22:15] <rbasak> nacc: don't you *not* want -i -I if you want stuff like that included?
[22:19] <nacc> rbasak: it didn't seem to make a difference either way
[22:19] <infinity> nacc: It's meant to be excluded.
[22:19] <nacc> rbasak: maybe because it's debian/.gitignore rather than .gitignore?
[22:19] <infinity> nacc: And "-i -I" (without extra args) are the default now, I thought.
[22:19] <nacc> infinity: right, but for some reason it still shows up as deleted in the debdiff (and it's not present in the debian.tar.xz)
[22:20] <infinity> Err.  Yes, I'm arguing that's a good thing.
[22:20] <rbasak> I didn't realise it was default.
[22:20] <rbasak> But if it is, then AFAICT nacc's reported behaviour is the expected behaviour.
[22:20] <infinity> I could be wrong, but I thought it had become the default.  I dunno, '-i -I' are finger memory for me.
[22:20] <rbasak> And debdiffs would show one deletion at the point where the default changed.
[22:21] <nacc> ack
[22:21] <tarpman> default for 3.0 source formats, isn't it?
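A hedged illustration of the default tarpman mentions: for 3.0 (quilt) packages, dpkg-source applies its built-in ignore list (which covers VCS metadata such as .gitignore) as if debian/source/options contained:

```
# "tar-ignore" with no value activates dpkg-source's default exclusion
# patterns (.git, .gitignore, .gitattributes, .svn, CVS, ...), which is
# why debian/.gitignore vanishes from the debian.tar.xz.
tar-ignore
```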
[22:42] <xnox> barry, maybe i should do that, no?
[22:42] <xnox> if you are about to SRU https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1647031 into yakkety?
[22:43] <barry> xnox: you're welcome to do the sru :)  i was going to cherry-pick back the two reverted revisions in zesty
[22:43] <xnox> barry, what was reverted?
[22:43] <barry> xnox: 98974a88 and 47c16e59
[22:43] <xnox> barry, i already made a zesty upload with -18 merge and other patches that were in the queue outstanding.
[22:43] <barry> they cherry picked cleanly
[22:44] <barry> xnox: i'm running the autopkgtests now and then will test the packages on my vm
[22:44] <xnox> barry, which repository is that in? cause I don't see those in either debian or the upstream systemd =/
[22:44] <barry> xnox: that's because it's in network-manager and i haven't pushed the revisions yet :)
[22:44] <xnox> barry, carry on! =)
[22:44] <barry> % git remote -v
[22:44] <barry> origin	git+ssh://barry@git.launchpad.net/~network-manager/network-manager/+git/ubuntu (fetch)
[22:44] <barry> xnox: :)
[22:44] <xnox> barry, i thought you are talking about systemd ;-)
[22:45] <barry> xnox: slangasek already uploaded the systemd fix that broke nm
[22:45] <barry> i'm just reverting nm back to using resolved
[22:53] <nacc> jgrimm: fyi, heimdal build fix is committed, just waiting on arm64 to finish (it successfully built everywhere else)
[22:53] <jgrimm> ack
[23:09] <tjaalton> nacc: finally, dogtag tests pass
[23:13] <kyrofa> So let's say I wanted to compare the size of an Ubuntu Core image with the size of an Ubuntu Server image. I'm having trouble determining what exactly to compare. I have a basic uncompressed Ubuntu Core image, but comparing to e.g. the Server ISO makes no sense
[23:13] <jgrimm> tjaalton, \o/
[23:13] <sarnold> kyrofa: a cloud image is probably most immediately comparable http://cloud-images.ubuntu.com/xenial/current/
[23:14] <kyrofa> sarnold, I thought about that, but which one? disk1?
[23:14] <kyrofa> (.img)
[23:15] <sarnold> kyrofa: that's probably the best choice; I don't know which of the others would have compression (but based on the sizes I could guess..)
[23:16] <kyrofa> sarnold, alright great, thank you!
[23:26] <kyrofa> sarnold, this Ubuntu Core image is 689.5MB, whereas the Ubuntu Server disk1.img is 322.8MB. How is that possible?
[23:27] <nacc> tjaalton: nice!
[23:27] <kyrofa> I suppose the image could include dead space...
[23:27] <sarnold> kyrofa: heh, good question. are there published manifests that might explain the differences?
[23:27] <kyrofa> sarnold, not published, no. I know they're generated, but I'm not sure where they are
[23:28] <kyrofa> sarnold, if I compressed them both with the same args, think that'd account for any padding in the partitioning? Or would that completely invalidate the comparison?
[23:28] <sarnold> kyrofa: I think that would be ideal; I doubt there's much padding otherwise we'd serve it compressed...
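A self-contained demo (not the actual images, paths are arbitrary) of why same-settings compression is the fairer comparison: zero padding compresses away, leaving roughly the real payload size.

```shell
# Build a 4 MiB file that is half random data, half zero padding, then
# compress it; the result is close to the size of the data alone.
dd if=/dev/urandom of=/tmp/img-demo bs=1M count=2 2>/dev/null
dd if=/dev/zero bs=1M count=2 >> /tmp/img-demo 2>/dev/null
xz -k -f /tmp/img-demo
orig=$(wc -c < /tmp/img-demo)
comp=$(wc -c < /tmp/img-demo.xz)
echo "original=$orig compressed=$comp"
```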
[23:29] <kyrofa> sarnold, I'm talking Ubuntu Core here (which is indeed served compressed), so good deal, I'll give that a shot
[23:34] <kyrofa> sarnold, yep, you're right, like 5MB savings on the server image. Still waiting on core...
[23:42] <kyrofa> sarnold, *cough* 417MB. Definitely padding, and definitely still larger than the cloud img
[23:43]  * kyrofa deletes a slide from his deck