[01:48] <psusi> I've had a patch attached to bug #1250109 since December to fix a very long standing annoying problem making upgrades very slow due to the massive number of update-grub invocations.  could someone *please* upload it?
[01:50] <psusi> also I've been trying since about then to get upgraded from contributing developer to core so I could upload things like this myself, and would appreciate your endorsement: https://wiki.ubuntu.com/PhillipSusi/DeveloperApplication2
[02:04] <sarnold> psusi: does the grub2 debdiff need refreshing?
[02:09] <psusi> sarnold, apparently there have been a few new uploads since I posted that...
[02:09] <sarnold> psusi: I figured, the zfs stuff alone probably got a few..
[02:09] <sarnold> psusi: maybe re-fresh, re-test, re-upload, and re-bug apw :)
[02:10] <sarnold> it'd sure be nice to have it fixed, that bothers me every time I update.
[02:10] <psusi> yea.. I just autoremoved like 6 kernels
[02:10] <psusi> and watched the long, slow, update-grub for every one of them
[02:11] <psusi> I just saw a headline about that on slashdot and still don't get this insane belief that any kernel module is a derived work of the kernel and so zfs can't be licensed under something other than the gpl2
[02:13] <infinity> psusi: That "insane" belief is held by a lot of free software people and a lot of kernel copyright holders, so the debate is certainly heated.
[02:13] <psusi> by that logic, every hardware device driver that contains binary blobs in the kernel violates it
[02:14] <infinity> Yes.
[02:14] <psusi> and that's basically most of the most important ones
[02:14] <infinity> Hardly.
[02:14] <sarnold> which is why there's a linux-firmware package..
[02:14] <infinity> Unless you mean firmware, which is executed on the device, not the host CPU.
[02:14] <psusi> putting it in a separate package doesn't change the way the licenses affect it
[02:15] <psusi> doesn't matter where it is executed... if the argument is that the code is combined with the kernel that makes it a derived work and must be under the same license
[02:15] <infinity> It's widely accepted that the GPL linking clause doesn't attach across CPUs like that.  Not that it makes the firmware free without source (it's not), but the firmware doesn't taint the kernel code, as it's not linked to it in any way.
[02:16] <psusi> also by that logic, any executable program loaded by the kernel is a derived work and must be gpl2
[02:16] <infinity> No, that's not how linking works.
[02:16] <infinity> Reading the GPL helps before arguing for or against it.
[02:16] <sarnold> psusi: the first few lines of the kernel's COPYING file:
[02:16] <sarnold>    NOTE! This copyright does *not* cover user programs that use kernel
[02:16] <sarnold>  services by normal system calls - this is merely considered normal use
[02:16] <sarnold>  of the kernel, and does *not* fall under the heading of "derived work".
[02:16] <psusi> "linking" isn't something that is part of copyright law... being in the same address space on the same cpu doesn't make something a derived work
[02:17] <infinity> psusi: It's not part of copyright law, but it *is* part of the license, ergo the copyright holders can withdraw your license if you violate it.
[02:17] <psusi> by that logic, WINE is a derived work of MS windows....
[02:17] <infinity> psusi: And then you no longer can copy.
[02:17] <psusi> no... they can't withdraw their license depending on how you *use* the work
[02:18] <infinity> Of course they can.
[02:18] <psusi> first sale doctrine
[02:18] <psusi> once you sell it to me I can use it however I choose
[02:18] <infinity> Yes, but you can't distribute it.
[02:18] <infinity> Copyright licenses are about distribution, not use.
[02:18] <infinity> And you absolutely can have your rights to distribute terminated.
[02:18] <psusi> compilations... they can't say that you must distribute it alone and not as part of a larger collection with other software
[02:20] <psusi> i.e. they can't say that you can redistribute linux, but not as part of an OS that contains non gpl2 software
[02:21] <infinity> " If a compilation utilizes material under copyright by someone else, compilation protection does not grant the compiler rights to that material or permission to use it without license"
[02:21] <infinity> A copyright license can restrict your distribution rights in ANY WAY THEY WANT.
[02:21] <infinity> Especially given that it's not a restriction at all, it's a grant.
[02:22] <infinity> By default, you can't distribute anything that's copyrighted.
[02:22] <infinity> That's how copyright works.
[02:22] <psusi> no.. it can't... they can't for instance, say that you can distribute it, but only if the servers you use to host it aren't running windows
[02:23] <infinity> Not that the GPL (and any GPL work) ever claims that the whole OS needs to be GPL.
[02:23] <psusi> I don't see any distinction between that and claiming that all kernel modules you distribute with it must be GPL
[02:23] <infinity> psusi: They really could.  Just as they can state that you must post advertisements, or hop on one foot.
[02:24] <infinity> psusi: You're being *granted* a distribution license on their terms.  It's a contract.  Some bits of that contract could certainly be tested in some courts and found unenforceable, but there's no list of "you can't" assigned to it.
[02:24] <infinity> psusi: The distinction is both how the GPL defines linking, and that the kernel copyright license itself even goes out of its way to clarify.
[02:24] <psusi> no.. they can't... the courts have upheld that there are reasonable things they can require, and unreasonable things they can not, including that you can not demand that it not be distributed as part of an OS that contains proprietary software
[02:25] <infinity> psusi: Good thing no one's demanding that (also, where's that case, since I've not heard of it, since I know of no license that would cause that to be tested in a court in the first place)
[02:26] <psusi> there were plenty of software vendors in the 80s that did not like their software being distributed on cds with hundreds of other programs...
[02:27] <infinity> Those are collections, which are clearly not derived works.  The GPL claims linking creates a derived work.  It's not just a collection at that point, as a transformation more significant than copying to a filesystem has occurred.
[02:28] <psusi> yes, and that is the claim that is fallacious... if linking created a derived work, then MS would have a copyright claim on every single piece of windows software ever created
[02:28] <psusi> and every windows software vendor would have a copyright claim on WINE
[02:29] <sarnold> I suspect windows had a EULA that allowed linking to their APIs without having to share the license
[02:29] <psusi> maybe to *theirs* but not to WINE's
[02:29] <infinity> Derivative works are not copyright violations of the original work.  But putting them together is also not a naive collection.
[02:30] <sarnold> wine's API use is likely covered by the oracle / google 'can you copyright an API' question.
[02:30] <psusi> same applies to kernel modules using the kernel's apis
[02:36] <psusi> it doesn't seem any different to me than MS trying to copyright the use of INT21 and saying that dosbox violates their copyright for implementing it, or every dos program ever written does and so they can stop anyone they want from writing a DOS program
[02:48] <psusi> what the hell is wrong with patch?  I unpacked the latest grub2 source into ~/grub2-2.02~beta2... in there I run patch -p1 and paste in the old patch and it complains that it can't find grub2-2.02~beta2/debian/grub2.postinst.... Ummm.. the -p1 means strip off the grub2-2.02~beta2 prefix and look instead for ./debian/grub2.postinst... why isn't it?
[02:49] <psusi> oh... now that's weird... that file isn't there
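psusi's reading of -p1 is correct: each -p level strips one leading path component from the file names in the patch headers, so the real problem was the missing file, not the strip level. A minimal illustration with a throwaway tree (file names and contents are made up for the demo):

```shell
# Build a tiny tree and a patch whose headers carry a one-component prefix.
mkdir -p demo/debian
printf 'old\n' > demo/debian/grub2.postinst
cat > demo.patch <<'EOF'
--- grub2-2.02~beta2/debian/grub2.postinst
+++ grub2-2.02~beta2/debian/grub2.postinst
@@ -1 +1 @@
-old
+new
EOF
# -p1 strips the leading "grub2-2.02~beta2/" component, so patch looks for
# ./debian/grub2.postinst relative to the current directory.
(cd demo && patch -p1 < ../demo.patch)
```

If the target file has been renamed or removed in a newer upload, this is exactly the "can't find ..." failure from the log.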
[02:54] <nacc_> psusi: the naming has changed, i think? i see postinst.in, e.g.
[02:54] <psusi> hrm.. I think someone fscked up the package uploads... last I patched this, it went from 2.02beta2-ubuntu1 to 2.02beta2-ubuntu2... the current changelog only shows 2.02~beta2-32 followed by 2.02~beta2-33
[02:54] <nacc_> it's up to ~36
[02:55] <psusi> i.e. 2.02beta2-ubuntu1 is now missing from the changelog
[02:56] <psusi> i.e. someone updated to the new debian release and discarded the previous ubuntu releases from the changelog
[02:56] <psusi> kind of like how I recently tried to upload a gparted package to debian discarding the previous NMU
[02:57] <nacc_> psusi: i think ~36 is sync'd direct from debian, there's no delta (afaict)
[02:58] <sarnold> with a version like "2.02~beta2-36" doesn't that mean it was synched right from debian?
[02:59] <nacc_> sarnold: that's my reading, as well
[03:02] <psusi> aren't the previous ubuntu changelog entries supposed to be preserved?
[03:02] <psusi> just like I'm supposed to preserve NMU uploads in debian when I issue a new update there?
[03:02] <sarnold> maybe with merges but unlikely with syncs, after all, it's just copied from debian.. whoever does the sync is supposed to ensure the delta is no longer needed
[03:03] <psusi> hrm... seems odd that it's ok to discard changelog entries for previous revisions... but.. shouldn't stop my patch from applying... hrm...
[03:04] <sarnold> if we had to retain all previous ubuntu changelog entries we'd never be able to blindly sync from debian for any package ever again
[03:10] <psusi> damn... I can not figure out how to reconcile these changes
[03:10] <psusi> there is no longer a grub2.triggers but instead a postinst.in
[03:11] <psusi> grub2.postinst rather
[03:12] <psusi> .triggers was the new file I added
[03:14]  * psusi kicks cjwatson to the rescue
[03:14]  * psusi may have to try again tomorrow when he's sober
[05:07] <sarnold> pitti: what are the chances the systemd-logind restart also loses my xset dpms setting? I know it's stretching but the same day I test the new systemd-logind my X11 didn't suspend .. heh :) thanks
[05:54] <pitti> Good morning
[05:55] <pitti> sarnold: is that on trusty or xenial? xenial's xorg now directly integrates with logind, trusty's didn't yet
[05:55] <pitti> barry: I am now
[07:45] <dholbach> good morning
[08:13] <cpaelzer> I have a package with a dep8 test and realized, by adding a new one that doesn't need isolation-machine, that the only thing preventing them from running on armhf and s390x was that restriction
[08:14] <cpaelzer> now the new test without that obviously ran on those arches and failed by "Test dependencies are unsatisfiable"
[08:14] <cpaelzer> and please excuse me in case that is a silly question, but I can't find the right pointer to start via search engines, but - what is the "right" way to guard architectures for dep8 tests?
[08:15] <cpaelzer> to be honest I'm even more puzzled by ppc64el failing with the same dependency but instead of a regression it lists them as "Always failed"
[08:16] <cpaelzer> seems that it was formerly a "SKIP", but then adding a fail counts as a regression, while in fact no test ever ran successfully on those
[08:16] <cpaelzer> but my real question/issue is around how to properly flag/guard toward only supported architectures
[08:17] <cpaelzer> jamespage: since you were kind of involved FYI ^^
[08:18] <cpaelzer> it's obvious I could just add "isolation-machine", but that can't be right - is it?
[08:21] <cpaelzer> an architecture list for the Depends field of d/t/control maybe?
[08:29] <jamespage> cpaelzer, I'm surprised that, as that package only builds for x86 archs, the tests are even running...
[08:29] <jamespage> pitti, ^^ any pointers?
[08:30] <cpaelzer> jamespage: that is exactly what threw me off
[08:32] <pitti> cpaelzer: you can add arch qualifiers to test dependencies, just like with binary dependencies
[08:33] <pitti> so you can say "Depends: foo [i386 amd64]" if foo is only available there
[08:33] <pitti> and then have your test skip the parts that need foo if it isn't available
[08:33] <pitti> there is no "Architecture:" field in debian/tests/control
[08:33] <pitti> if a package isn't built on a particular arch, it doesn't get tested there either
[08:34] <pitti> so it should not normally be necessary
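pitti's suggestion, written out as a debian/tests/control stanza with an arch-qualified test dependency (the test name is hypothetical; the qualifier syntax is the same as for binary package dependencies):

```
Tests: run-dpdk
Depends: dpdk [amd64 i386]
```

On architectures outside the list the dependency is simply dropped, so the test script itself must still cope with dpdk being absent there.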
[08:34] <pitti> cpaelzer: right, if it never succeeded on ppc64el it counts as "always failed", so it doesn't block propagation
[08:35] <cpaelzer> pitti: but it "never worked" on s390/armhf either but now lists it as regression
[08:35] <cpaelzer> that is what I meant: SKIP alone seems not to count as "fail"
[08:35] <pitti> cpaelzer: ok, what is "it" here? which packages?
[08:35] <cpaelzer> http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html look for dpdk
[08:35] <pitti> cpaelzer: right, so http://autopkgtest.ubuntu.com/packages/d/dpdk/xenial/armhf/ did succeed before
[08:36] <cpaelzer> pitti: yeah with 1/1 tests being SKIPed
[08:36] <cpaelzer> pitti: I'm totally fine, I just didn't expect that a skip alone would have been considered "working", and adding a new one that fails a "regression"
[08:37] <cpaelzer> we will work on arch flagging the other tests, which then should be skipped as well and the regression gone
[08:37] <cpaelzer> pitti: but as you said - "if a package isn't built on a particular arch, it doesn't get tested there either"
[08:38] <cpaelzer> pitti: but we can add the arch qualifiers and things should be fine
[08:38] <cpaelzer> pitti: thank you
[08:38] <cpaelzer> maybe it is the -doc packages because they come from the same source and are for all archs ?
[08:39] <cpaelzer> pitti: would that cause the package be tested on all archs ^^ ?
[08:40] <pitti> cpaelzer: it would be a bit harsh to count a test as failed just because we can't provide "good enough" testbeds
[08:40] <cpaelzer> pitti: yeah, with that point of view I totally agree with you - thanks
[08:41] <cpaelzer> jamespage: I'll push the changes + the revert of the revert of the VCS things to the repo the next minutes
[08:41] <pitti> cpaelzer: -doc package> right, that's it, so it'll count as "available" on s390x
[08:41] <cpaelzer> jamespage: would you reupload that then so we see if it resolves the test issue
[08:42] <cpaelzer> pitti: ahh, and one more thanks for that confirmation - great
[08:42] <cpaelzer> now everything makes sense and a solution is ahead
[08:42] <pitti> cpaelzer: just adding the arch qualifiers will fix the "test dep not available", you still need to make sure that your tests skip gracefully if dpdk isn't available
[08:43] <cpaelzer> pitti: ah you mean it won't install the package because we properly flagged its arch dep but the test then still wants to run and would fail
[08:43] <pitti> right
[08:43] <cpaelzer> ok, thanks
[08:43] <cpaelzer> that saves one extra upload
[08:43] <pitti> cpaelzer: for this corner case there's currently no declarative way to say that, I'm afraid
[08:43] <pitti> so it needs to be done in the test code for the time being
[08:44] <pitti> cpaelzer: at some point we could add an Architecture: field to d/t/control of course, but it doesn't exist ATM
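The "skip gracefully in the test code" half of the workaround could look like this sketch — a dep8 test guard that exits 0 with a SKIP message when the arch-qualified dependency wasn't installable (the probed binary name is hypothetical):

```shell
# Minimal sketch of a dep8 test guard, assuming the dependency ships a
# binary we can probe for (names are hypothetical).
skip_unless_available() {
    if ! command -v "$1" >/dev/null 2>&1; then
        # Exit 0 so the runner records a pass rather than a failure on
        # architectures where the dependency could not be installed.
        echo "SKIP: $1 not available on this architecture"
        exit 0
    fi
}
# skip_unless_available testpmd   # ...then run the real test below
```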
[08:44] <cpaelzer> pitti: no SW can be perfect and autopkgtest is great as-is - we can handle that minor thing for now
[08:44] <cpaelzer> I wonder if we are the first to run into that
[08:44] <cpaelzer> maybe the others just didn't have to ask ...
[08:48] <pitti> cpaelzer: I faintly remember that it came up once before, but it's indeed not very common
[10:05] <dholbach> can somebody remind me how to find all the packages which depend or build-depend on a binary package of a given source package? :-)
[10:07] <Mirv> pitti: hi! I'm unable to submit an autopkgtest retry request ("does not have any test results") when trying the i386 ubuntuone-credentials from https://requests.ci-train.ubuntu.com/static/britney/vivid/landing-049/excuses.html
[10:10] <cpaelzer> dholbach: apt-rdepends -r [-b] ?
[10:10] <cpaelzer> dholbach: [-b] for depends and build-depends
[10:11] <dholbach> thanks
[10:11] <dholbach> looks like -r and -b won't go together
[10:11] <cpaelzer> yeah the -r is for the bin, wait I had something for reverse build as well
[10:12] <Mirv> dholbach: or you might want to check reverse-depends with or without -b
[10:12] <Mirv> from ubuntu-dev-tools
[10:12] <dholbach> ah, great
[10:12] <dholbach> thanks a bunch! :-)
[10:12] <cpaelzer> yeah that is the second one I had
[10:13] <Mirv> one needs to iterate through the binary packages though
[10:14] <cpaelzer> dholbach: I documented all kind of this dependency stuff in this section https://wiki.canonical.com/ServerTeam/ServerReleaseHandling#Messing_around_with_dependencies
[10:14] <cpaelzer> including e.g. how to iterate over the binary packages and so on
[10:14] <dholbach> excellent
[10:14] <cjwatson> Mirv: src:
[10:15] <Mirv> !! all these years...
[10:16] <LocutusOfBorg> dholbach, reverse-depends -b binary works :)
[10:16] <Mirv> reading the man page would have helped easily
[10:16] <LocutusOfBorg> and reverse depends works also with a given release series
[10:16] <LocutusOfBorg> reverse-depends -r xenial fonts-droid
[10:17] <Mirv> what about reverse depends of a Provides: provided virtual package? :)
[10:18] <LocutusOfBorg> oops Mirv I did answer before reading your answer :)
[10:19]  * LocutusOfBorg did rebuild itksnap, elastix, plastimatch, lets see insighttoolkit4.9 migrate today!
[10:19] <Mirv> I'm using synaptic to list binary packages that depend on qtbase-abi-[version] or qtdeclarative-abi-[version], but I'd actually need a way to get the list of source packages that have binary packages that depend on them
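One way to get from Mirv's binary-level list to source packages, sketched with the tools mentioned above. The `-l` flag of reverse-depends and the showsrc parsing are assumptions from the respective man pages, and the loop is left commented because it needs a configured apt:

```shell
# Map a binary package name to its source package by parsing
# "apt-cache showsrc"-style output on stdin.
src_of() {
    awk '/^Package:/ {print $2; exit}'
}

# Hypothetical usage on a real system:
# for bin in $(reverse-depends -l qtbase-abi-5-5-1); do
#     apt-cache showsrc "$bin" | src_of
# done | sort -u
```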
[10:33] <pitti> Mirv: can you please file a bug against auto-package-testing about that for now? (sorry, can't look into that right away)
[10:33] <pitti> Mirv: did the retry work for qtmir?
[10:35] <pitti> Mirv: requeued the ubuntuone-credentials test
[10:39] <Son_Goku> nacc_: after a session of debugging with Remi Collet, I found out the problem: https://bugs.launchpad.net/ubuntu/+source/php7.0/+bug/1548442/comments/2
[11:07] <LocutusOfBorg> nacc_, a new php7 upload hurray!
[11:08] <LocutusOfBorg> I mean in Debian :)
[11:19] <pitti> Mirv: well, I just restarted it too
[11:20] <Mirv> pitti: ok, filing, qtmir hadn't failed yet at that point but seems to work (retry), and thanks
[11:20] <Mirv> and because of timing I guess qtmir is now twice in the queue :)
[11:20] <pitti> Mirv: ah, snap :)
[11:20] <pitti> Mirv: yep -- http://autopkgtest.ubuntu.com/running.shtml#pkg-qtmir :)
[12:52] <flexiondotorg_> I'd like to submit a one line patch to gnome-language-selector to fix an issue with MATE.
[12:52] <flexiondotorg_> I see lp:language-selector is retired.
[12:52] <flexiondotorg_> Should I just prepare a debdiff for the language-selector package, as I can't find a source repository.
[12:55] <Saviq> pitti, hey, I'm putting our autopilot tests into autopkg, will britney be happy with this https://code.launchpad.net/~saviq/unity8/autopilot-dep8/+merge/287228 ? Will the "isolation-machine" restriction make it skip it?
[13:22] <pitti> Saviq: that looks a bit odd still
[13:22] <pitti> Saviq: why is it isolation-machine?
[13:23] <pitti> Saviq: that will skip the tests on the arches that use LXC (armhf and s390x)
[13:23] <Saviq> pitti, those tests are only meant to run on phones
[13:23] <pitti> Saviq: oh, autopilot's input device emulation I figure?
[13:23] <Saviq> pitti, not just that, we just need a full unity8 session
[13:23] <pitti> Saviq: but "initctl --session" will fail on the normal infra
[13:23] <pitti> as there is no unity session
[13:24] <pitti> Saviq: so either you actually start one, or you should gracefully skip the test if it's not running
[13:24] <Saviq> pitti, well, my goal is to skip that test on britney altogether (unless it has touch devices available)
[13:24] <Saviq> pitti, that's what I'm doing
[13:24] <Saviq> pitti, see line 10 of the diff
[13:24] <pitti> Saviq: pidof unity8 || { echo "Not running under unity8, skipping"; exit 0; }
[13:24] <pitti> something like that
[13:24] <pitti> Saviq: ah, of course
[13:25] <pitti> Saviq: yep, that's fine
[13:25] <Saviq> pitti, just wanted to skip it altogether where it doesn't make sense, should I not add the isolation-machine bit, then?
[13:25] <pitti> Saviq: in the future we can use something like "Class: ubuntu-touch", but that's not implemented yet
[13:25] <pitti> Saviq: i-machine is fine
[13:26] <Saviq> pitti, will it not trigger the test unnecessarily on some dedicated hardware, though? where it could just as well fail in a container/chroot?
[13:26] <Saviq> s/fail/gracefully exit/
[13:30] <Mirv> pitti: when you have time, retry unity8 i386 silo 049 (https://requests.ci-train.ubuntu.com/static/britney/vivid/landing-049/excuses.html). added that one to the bug report too.
[13:30] <pitti> Saviq: it will be skipped on containers/schroots, and not do much in qemu
[13:30] <pitti> Mirv: done
[13:31] <Saviq> Mirv, that's our last flaky test, will fix asap
[13:31] <pitti> Saviq: "our last flaky test" → famous last words :)
[13:31] <Saviq> pitti, no, for real! :)
[13:32] <Saviq> (outside of armhf, that is :P)
[13:39] <doko> jamespage, smoser: default-jdk-headless hit xenial. enjoy reducing the size of images/whatever
[13:40] <Mirv> thank you
[13:41] <Mirv> Saviq: :)
[13:43] <blaze> was unity8-lxc tested on xenial?
[13:43] <blaze> not working for me here
[13:56] <LocutusOfBorg> doko, itk4 is migrating now I think
[15:05] <caribou_> barry: won't the regression on python-pip 8.0.2-8 block your new python-pip upload ?
[15:16] <doko> apw, please could you take care of http://people.canonical.com/~ubuntu-archive/nbs.html (module-init-tools), and then tell me that it can be removed?
[15:17] <doko> I'll take care of the cross-toolchain-* packages myself
[15:17] <Saviq> pitti, did you consider having adt-run generate a .xml test report file?
[15:20] <pitti> Saviq: it didn't come up before; we have a json file for the test environment data so far
[15:22] <Saviq> pitti, I'm asking because we're constructing a dummy .xml file containing a fail when adt-run exits with a "failed" exit code, to mark the job unstable instead of plain FAILED
[15:22] <Saviq> pitti, I'll parse the summary file for this, but it would probably be easier for adt-run to do this for us :)
[15:28] <pitti> exactly as easy or hard, I'd say :)
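The dummy-XML workaround Saviq describes could be sketched like this (the file layout and names are hypothetical; the output is a minimal JUnit-style report of the kind CI systems such as Jenkins consume to mark a build unstable):

```shell
mkdir -p results
# Write a one-failure JUnit-style report when adt-run exited non-zero,
# so the CI job can be marked unstable instead of plain failed.
write_dummy_junit() {
    [ "$1" -eq 0 ] && return 0
    cat > results/dummy.xml <<EOF
<testsuite name="adt-run" tests="1" failures="1">
  <testcase classname="adt" name="run">
    <failure message="adt-run exited with status $1"/>
  </testcase>
</testsuite>
EOF
}
# hypothetical usage:  adt-run ... ; write_dummy_junit $?
```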
[15:40] <barry> caribou_: i hope not; 8.0.3-1 should fix the problem
[15:43] <nacc_> Son_Goku: can you put that in the php bug too?
[15:43] <nacc_> LocutusOfBorg: :)
[15:54] <barry> LP seems slow in picking up new debian versions
[15:56] <cjwatson> barry: do you have an example?
[15:56] <cjwatson> the cron job is every six hours which matched dinstall last I checked, but it does depend on exactly where in the cycle your upload falls
[15:56] <barry> cjwatson: python-pip and python-virtualenv.  both were uploaded to debian yesterday but i still can't syncpackage them
[15:56] <barry> cjwatson: definitely > 6h ago
[15:57] <cjwatson> looking
[15:57] <nacc_> Son_Goku: so, i *think*, that ubuntu's package just directly has whatever is in upstream php for ext/pcre ... and some of the patches in the repo you mention (fc23) are present (or mostly so...) and some aren't
[15:59] <cjwatson> barry: well, 8.0.3-1 isn't actually in unstable's Sources file yet
[15:59] <barry> maybe it's debian?
[15:59] <cjwatson> I just looked on coccia
[15:59] <barry> cjwatson: looks like both are in buildd-unstable?
[16:00] <cjwatson> barry: dak's DB appears to have it, sure, but we sync from Sources files
[16:00] <cjwatson> barry: I don't know why those are taking so long to update, but it's on the Debian side AFAICS
[16:00] <barry> cjwatson: ack.  okay, thanks!
[16:01] <nacc_> Son_Goku: i can build with the patches applied and narrow it down here, i think, there are few that seem likely
[16:01] <cjwatson> 11:41 <jcristau> looks like yesterday's 1952 dinstall broke
[16:01] <cjwatson> 11:42 <jcristau> in the pdiff stuff
[16:01] <cjwatson> 11:42 <jcristau> and then exited at the same point in the 0152 and 0752 ones
[16:01] <cjwatson> 11:44 <jcristau> http://paste.debian.net/405058/
[16:01] <cjwatson> barry: ^- from #debian-ftp
[16:01] <cjwatson> that probably explains it
[16:02] <cjwatson> barry: apparently it should be fixed for the dinstall running recently/nowish
[16:02] <barry> cjwatson: thanks.  i'll keep an eye on it over the next few hours, and we'll see how it falls in with the cron job cycle
[16:05] <Saviq> pitti, about the "isolation-machine" again, will that not affect the existing test, that happily runs in chroots today?
[16:13] <teward> any chance I could get a bug reviewed for an FFe to get the latest nginx upstream mainline version into Xenial?   Wasn't sure if I have to get the FFe approved before I uploaded, though...
[16:15] <nacc_> Son_Goku: so i think ubuntu, at least, has not patched pcre 8.38 at all ... btw, did you reproduce this with debian too? can you, if possible, i do see we have a delta in ubuntu specifically about JIT
[16:21] <nacc_> Son_Goku: looks to just be some symbol mangling, probably not the issue
[16:23] <nacc_> Son_Goku: i'll backport the lot and see if it fixes it locally first
[16:39] <Son_Goku> nacc_: it was a problem for me on Debian 8
[16:39] <Son_Goku> I don’t have anything on Debian sid
[16:40] <Son_Goku> so I can’t test there
[16:40] <nacc_> Son_Goku: ok, np
[17:06] <smoser> pitti, are you around ?
[17:31] <rbasak> Am I right in thinking that there's no need for <package>.dirs to specify a directory that <package>.install already installs into (with debhelper)?
[17:40] <cjwatson> rbasak: that's right
[17:41] <cjwatson> it's relatively common redundancy but it is indeed redundant
[17:43] <rbasak> Great, thanks.
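rbasak's question in file form, with a hypothetical package: the dh_install entry creates usr/lib/foo on its own, so a foo.dirs line listing that same directory would be redundant. A .dirs file is only needed for directories nothing installs into, such as an empty spool directory:

```
debian/foo.install:
    src/libfoo.so usr/lib/foo

debian/foo.dirs:
    var/spool/foo
```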
[18:01] <pitti> Saviq: a test with isolation-machine isn't run with adt-virt-schroot, no
[18:02] <Saviq> pitti, I know, but that's the thing, we have one test without isolation-machine, another with, I just don't want the former to unnecessarily hog something
[18:03] <Saviq> anyway
[18:03] <pitti> Saviq: what do you mean, if it can run in a chroot, why wouldn't we run it?
[18:04] <Saviq> pitti, do the tests get split up when they have different requirements?
[18:04] <pitti> Saviq: each test is run separately anyway, and those which the testbed can't satisfy are skipped
[18:04] <Saviq> anyway, it's running in https://requests.ci-train.ubuntu.com/#/ticket/1053 so we'll know soon enough
[18:04] <Saviq> pitti, ack, if they're run separately then we should be good, thanks!
[19:17] <nacc_> Son_Goku: is the pcre fix needed in pcre or in the php extension? i assumed the former?
[19:17] <Son_Goku> the former
[19:17] <Son_Goku> as far as I know, anyway
[19:18] <Son_Goku> nacc_: that said, Remi does have some pcre patches… https://github.com/remicollet/remirepo/tree/master/php/php70
[19:19] <Son_Goku> but I don’t think they apply to Xenial
[19:45] <jdstrand> pitti: hey, so isc-dhcp got hung up in a network-manager regression, but all it fixed was an apparmor denial, which should only make dhclient work better
[19:47] <jdstrand> test_open_b_ip6_dhcp() is what failed, but I fail to see how adding the openssl abstraction to the profile would've caused that
[19:48] <jdstrand> hmm, artifacts does not have syslog in it...
[19:50] <jdstrand> looking at wpa-dhclient-stdout, I don't see anything, but the fact that the other 3 tests passed suggests to me it is not the isc-dhcp upload
[21:19] <nacc_> Son_Goku: yeah, doesn't look like it to me either
[22:03] <nacc_> Pharaoh_Atem: ok, i rebuilt pcre3 with a fairly clear bugfix, and am now rebuilding php7 with that pcre, to see if it fixes the issue
[23:17] <nacc_> Pharaoh_Atem: wasn't just that one patch, as i hoped it was ... and the tests are failing when i backport the set (might have made a mistake on my part, of course)