[01:37] <michi> robru: Question for you…
[01:37] <robru> michi: hey what's up?
[01:37] <michi> Just trying to understand how things work.
[01:38] <michi> We have some silos stuck on testing because their dependencies can’t be satisfied.
[01:38] <michi> So britney isn’t happy.
[01:38] <robru> michi: where?
[01:38] <michi> Once the dependencies appear, how do we know? Does it re-test automatically once the dependencies are OK?
[01:38] <michi> silo 37.
[01:38] <michi> See the excuses for xenial
[01:39] <michi> scopes-shell won’t test because qtcreator-plugin-ubuntu is out of date, I think.
[01:39] <michi> I’m just trying to understand how to recover from this situation once the dependencies are OK again.
[01:39] <michi> Do we need to manually re-trigger a build?
[01:39] <michi> Or does Britney automatically wake up to the fact that it now makes sense to try again?
[01:39] <cjwatson> don't retrigger a build, people with appropriate privs can retry
[01:40] <michi> cjwatson: thanks
[01:40] <cjwatson> it will ... sometimes automatically retry, it depends on what changed
[01:40] <michi> so we just wait until things are fixed magically?
[01:40] <cjwatson> too many levels down and it won't notice
[01:40] <robru> cjwatson: michi is talking about autopkgtests, not depwaits in ppas
[01:41] <cjwatson> yes I know
[01:41] <robru> ok
[01:41] <cjwatson> it's still true
[01:41] <cjwatson> if an immediate dependency changes then that will in itself trigger new tests
[01:41] <cjwatson> but a change to a dependency of a dependency won't
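The one-level rule cjwatson describes can be sketched as a toy model (the graph, package names, and function are invented for illustration; this is not how britney is actually implemented):

```python
def triggered_tests(changed_pkg, rdeps):
    """Toy model: only the *immediate* reverse dependencies of a changed
    package get their autopkgtests re-triggered; packages further down
    the chain are not noticed and need a manual retry."""
    return sorted(rdeps.get(changed_pkg, []))

# Invented dependency graph: scopes-shell's tests depend on
# qtcreator-plugin-ubuntu, which in turn depends on libjsoncpp.
rdeps = {
    "libjsoncpp": ["qtcreator-plugin-ubuntu"],
    "qtcreator-plugin-ubuntu": ["scopes-shell"],
}

# A libjsoncpp fix re-triggers qtcreator-plugin-ubuntu only;
# scopes-shell stays stuck until someone presses retry.
```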
[01:41] <michi> So, we basically just sit and do nothing?
[01:41] <robru> right
[01:41] <michi> OK.
[01:42] <michi> dobey mentioned that britney is struggling a bit right now.
[01:42] <michi> Where can I look at it?
[01:42] <cjwatson> that looks vaguely like the known jsoncpp ABI thing?
[01:42] <robru> michi: my 'right' was aimed at cjwatson
[01:42] <michi> Ah
[01:42] <cjwatson> michi: that's not britney struggling, that's the autopkgtest workers (different bit of the system, former uses the latter).  http://autopkgtest.ubuntu.com/running.shtml
[01:43] <robru> michi: britney logs are here: https://requests.ci-train.ubuntu.com/static/britney/last-run.txt it's currently taking an hour to do a run. this is caused by high load, but it's not actually causing any problems other than slowness
[01:43] <michi> Ah, cool
[01:43] <michi> cjwatson: Where do you see the json problem?
[01:44] <jamesh> maybe things would run faster if we had less tests?
[01:44] <cjwatson> I mean, robru knows more about how britney is set up in bileto, but dobey was specifically talking about autopkgtest queues earlier and just used the wrong name
[01:44] <cjwatson> michi: people talking about it today
[01:44] <cjwatson> and context and stuff
[01:44] <michi> Yes, I know, but I don’t actually know what happened with json
[01:44] <michi> Some ABI break
[01:44] <michi> But in detail?
[01:45] <robru> michi: yeah there's two different things going on here. your tests run in the autopkgtest system, and britney polls it for results at regular intervals. currently that interval is "every 65 minutes" due to high load. in the past it was 25 minutes.
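The polling arrangement robru describes, as a minimal sketch (the callables and the loop are invented for illustration; britney's real scheduling is more involved):

```python
import time

def poll_results(fetch, handle, interval_s, cycles, sleep=time.sleep):
    """Poll an autopkgtest result store at a fixed interval.

    fetch() returns the current batch of results and handle() consumes
    it; sleep is injectable so the loop can be exercised without
    actually waiting.  Under the current load britney effectively runs
    with interval_s = 65 * 60 (it used to be 25 minutes).
    """
    seen = []
    for _ in range(cycles):
        batch = fetch()
        handle(batch)
        seen.append(batch)
        sleep(interval_s)
    return seen
```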
[01:45] <cjwatson> michi: I don't see it named in this particular test, but I know it's causing installability problems of this kind.  for detail I'm afraid you'll have to hunt around for more, I'm just stopping by briefly here.
[01:45] <michi> cjwatson: Cool, thanks.
[01:45] <robru> michi: really who you want to talk to about autopkgtest issues is pitti
[01:46] <michi> robru: The queue for britney looks ten miles long.
[01:46] <michi> Thanks.
[01:46] <cjwatson> a lot of those are relatively short, but it's probably a day's worth of queue :-/
[01:46] <michi> Aha
[01:46] <robru> michi: yeah I haven't had time to dig in, theoretically there's 17ish silos to process and they take 4ish minutes each, in order (can't be parallelized due to staggering memory usage)
[01:46] <michi> OK. So the upshot of it all is that we do nothing and just wait until silo 37 is ready.
[01:47] <michi> So be it :)
[01:47] <robru> michi: I dunno, if it still says 'regression' after a day and the queue died down a bit I'd poke somebody to retry them
[01:47] <cjwatson> in principle a lot of that has got to be parallelisable because most of the data structures should be based on the primary archive and therefore shared, in practice that's very hard to set up ...
[01:48] <michi> robru: OK. But I don’t want to needlessly get on people’s nerves.
[01:48] <michi> So, is there any way for me to check when it makes sense to talk to someone?
[01:48] <michi> And whom?
[01:48] <michi> pitti?
[01:48] <robru> michi: yeah pitti is the guy for digging into autopkgtests
[01:49] <michi> Aha.
[01:49] <robru> michi: I dunno how to tell when really, i'm not really familiar with that 'test dependency' error, and I think it's suspect that it all passed in vivid but failed in xenial.
[01:49] <michi> the problem is that a whole bunch of stuff is stuck behind scopes-api which got broken by an update to cap’n proto, which in turn broke abi-compliance-checker.
[01:49] <michi> Now everything is piling up because we can’t get silo 37 landed, and nothing that uses scopes can land either.
[01:50] <michi> robru: right. Vivid is fine
[01:50] <robru> michi: yeah I'd get pitti on it if this is blocking a lot of other stuff for you
[01:50] <cjwatson> I would suggest retrying when https://launchpad.net/ubuntu/+source/ubuntu-touch-meta/1.258 hits the release pocket + half an hour
[01:50] <michi> do you know his TZ?
[01:50] <cjwatson> pitti is in .de
[01:50] <robru> yeah
[01:50] <cjwatson> but the early side
[01:50] <michi> OK, thanks
[01:50] <michi> So late afternoon my time.
[01:52] <cjwatson> hilariously ubuntu-touch-meta is blocked on autopkgtest failures due to basically the thing it fixes.  *that* probably needs pitti to disentangle
[01:52] <robru> cjwatson: heh, was just going to say
[01:52] <robru> michi: http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial/update_excuses.html#ubuntu-touch-meta
[01:53] <michi> Christ
[01:54] <michi> How does something like this *ever* land with that many dependencies?
[01:54] <robru> michi: "that many"? what are you looking at?
[01:55] <michi> There are tons of autopkg tests that ubuntu-touch-meta lists in its excuses
[01:55] <michi> Don’t they all have to pass before this can land?
[01:56] <jamesh> michi: the only thing mentioned under ubuntu-touch-meta is the click scope tests
[01:56] <michi> Ah, I get it now
[01:56] <robru> michi: http://i.imgur.com/hUnVrbx.jpg yeah
[01:57] <michi> Yep. I was under the misapprehension that the whole list had to pass.
[01:57] <michi> you can tell that I don’t know much about the packaging machinery :(
[01:58] <robru> michi: anyway follow that up with pitti, he can poke that through if that's the appropriate thing to do.
[01:58] <michi> Yes, thanks!
[01:58] <robru> you're welcome
[02:14] <cjwatson> As jamesh alluded to earlier, this was all a lot easier when we just uploaded straight to the archive without any automated tests running on the built packages. :-P  (But I think on the whole people prefer that we don't do that any more ...)
[02:15] <jamesh> it's a shame libjsoncpp could be uploaded without fixes to all the packages that sit on top of it then ...
[02:15] <cjwatson> Well, it hasn't reached xenial yet, it's only in xenial-proposed.
[02:16] <cjwatson> Unfortunately ubuntu-touch-meta has explicit dependencies on a particular ABI, which is anomalous for metapackages.
[02:16] <cjwatson> There are reasons for it, but it does tend to exacerbate this kind of problem.
[02:18] <robru> cjwatson: yeah kind of a funny situation, good that the change didn't break xenial, but seeing as how everything builds against -proposed anyway, it's sort of ground everything else to a halt
[02:19] <jamesh> yeah.  We wouldn't have been able to get something like that into -proposed for one of our own packages with problems like that
[02:21] <robru> cjwatson: TODO: make all debian imports go through silos ;-)
[02:24] <cjwatson> robru: hahahaha
[02:24] <cjwatson> robru: next step, work out how they interleave
[03:07] <dobey> hmm
[03:27] <jamesh> it looks like the autopkgtest queue length for armhf is steadily going down, but ppc64el is stuck at 1157
[03:27] <jamesh> is something broken there?
[05:31] <robru> michi: don't worry about that "version mismatch" message, i think you just lost a race condition, should sort itself out within 15 mins.
[05:31] <michi> robru: Aha.
[05:31] <michi> OK, thanks.
[05:31] <robru> you're welcome
[05:40] <Mirv> robru: \o/ yes, 023 got automated signoff now. thanks, train rules!
[05:40] <robru> Mirv: haha thanks. We've come a long way!
[05:52] <jamesh> robru: I don't suppose you're in a position to kick the ppc64el autopkgtests?
[05:53] <robru> jamesh: nope i think you need pitti for that
[05:53] <jamesh> thought that might be the case :(
[05:54] <Mirv> robru: and now... another corner case! https://requests.ci-train.ubuntu.com/static/britney/xenial/landing-032/excuses.html - removed binary, will be removed from archives but britney would need to run...
[05:55] <Mirv> jamesh: I may be able, there is now a button for those with upload rights. link?
[05:55] <jamesh> Mirv: I've just been watching http://autopkgtest.ubuntu.com/running.shtml
[05:55] <robru> Mirv: I've seen that message before, i think that can be fixed without bileto changes
[05:55] <jamesh> Mirv: the armhf queue length has steadily been decreasing from about 700 this morning
[05:56] <jamesh> Mirv: the ppc64el queue was stuck at 1157, and now seems to be up to 1160
[05:56] <robru> Mirv: he doesn't want a retry, he wants the entire machinery kicked because it's stuck
[05:56] <jamesh> it doesn't seem to be doing anything
[05:56] <robru> Like physically kicked
[05:57] <Mirv> jamesh: ah kicking, not rerunning... yes you need pitti
[06:01] <robru> Mirv: try deleting old superseded source packages from your ppa. The version being complained about doesn't seem to exist in Ubuntu so i think it's just complaining about ppa contents
[06:01] <Mirv> robru: but how? (fixed) the normal fix is to remove the obsolete binary from archives. although I'm not sure why it's saying that already there, since usually it's just a reason stated in addition to the tests run
[06:01] <Mirv> robru: sorry, reading your msg :)
[06:01] <Mirv> robru: ah, yes it seemed really weird. trying to delete superseded packages then, makes sense (in the weird PPA way)
[06:02] <Mirv> robru: indeed! I've seen that before - in such a situation it will not get auto-deleted in PPA even if superseded. ok, let's wait 12h for the new results :)
[06:03] <robru> Mirv: you should see new results in 1 to 2 hours, but then it will trigger autopkgtests of course after that
[06:04] <Mirv> robru: yeah, I was wondering how long it will take until all the reverse deps of qtbase are finished
[06:06] <Mirv> anyway, no hurry, I put it churning those britney results early precisely for that reason. I've tested the silo but that's really a silo that can't see too much testing so I'll just continue.
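Robru's suggestion of deleting superseded sources from the PPA can be scripted with launchpadlib. The API calls named below (login_with, getPPAByName, getPublishedSources, requestDeletion) are real launchpadlib methods, but the owner and PPA names are placeholders and the filter helper is invented; treat this as a sketch, not a supported tool:

```python
def superseded(pubs):
    """Invented helper: pick the source publications that are safe to
    delete.  Works on anything with a .status attribute, as launchpadlib
    source publication objects have."""
    return [p for p in pubs if p.status == "Superseded"]

def clean_ppa(owner_name, ppa_name):
    """Hypothetical driver: needs network access and Launchpad
    credentials, so it is defined here but never called."""
    from launchpadlib.launchpad import Launchpad
    lp = Launchpad.login_with("ppa-cleanup", "production")
    ppa = lp.people[owner_name].getPPAByName(name=ppa_name)
    for src in superseded(ppa.getPublishedSources()):
        src.requestDeletion(removal_comment="superseded; confuses britney")
```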
[07:19] <michi> jamesh: could you look at this one? https://code.launchpad.net/~michihenning/thumbnailer/fix-admin-relative-paths/+merge/284192
[07:19] <michi> It’s fairly simple.
[07:26] <michi> jamesh: Whoa, something’s happening with ppc queue.
[07:34] <jamesh> michi: commented on the MP.
[07:34] <michi> I just saw, thanks!
[07:35] <michi> If a relative path is passed to vs-thumb, it bombs.
[07:35] <michi> That’s why the canonicalization on the client side.
[07:36] <jamesh> right, but you're canonicalising twice.  Maybe just checking for absolute paths in one of those cases would do.
[07:36] <michi> Yes, just looking at that.
[07:37] <michi> But the client could be broken too and pass a relative path to the qt lib
[07:37] <michi> So, I guess it would be sufficient to do it in the qt lib, and not in vs-thumb
[07:38] <michi> Or not… Thinking...
[07:38] <michi> Sigh, it’s been a while.
[07:38] <jamesh> nothing in that branch touches vs-thumb
[07:38] <michi> So, originally, I stumbled across this because thumbnailer-admin get didn’t work with relative paths.
[07:39] <jamesh> there is already a canonical() call inside thumbnailer-service, so vs-thumb should be protected.
[07:39] <michi> thumbnailer-admin goes through the client lib.
[07:39] <michi> So, canonicalising in the client lib is correct.
[07:39] <michi> And, when I use vs-thumb for testing manually, I’d like it to work too.
[07:39] <michi> canonicalising in thumbnailer-service is pointless because that has the wrong directory
[07:39] <michi> current directory
[07:40] <jamesh> in your branch, thumbnailer-admin is canonicalising the path, and then passing that canonicalised pathname to a method that also canonicalises it
[07:40] <jamesh> canonicalising in thumbnailer-service is not pointless, since it is in a different security context
[07:40] <jamesh> it is doing so for a different reason
[07:42] <michi> ?
[07:42] <michi> If the client sends a relative path to the service, the service will end up with a completely different canonicalized path.
[07:42] <michi> That’s why the change in thumbnailer.cpp
[07:42] <michi> around line 103 of the diff
[07:44] <michi> But, yes, the service side still canonicalises the path to not get confused with different keys for the same file.
[07:44] <michi> So, the service rejects non-absolute path names now, which is what we want I think.
[07:44] <jamesh> michi: my understanding is that the canonical() call already in thumbnailer-service was to improve cache hits in case the client passed a symlink file name in
[07:44] <michi> No other change there.
[07:44] <michi> Yes, right.
[07:44] <michi> Don’t disagree with that.
[07:44] <jamesh> michi: adding a requirement for is_absolute() there is fine by me too.
[07:44] <michi> But it was possible previously to send it a relative path, which turned into garbage on the service side.
[07:44] <michi> Right, cool :)
[07:45] <jamesh> but I wouldn't remove anything on that side, because it shouldn't rely on any work done on the untrusted client side.
[07:45] <michi> Correct.
[07:45] <jamesh> my comment on the MP was that on the client side you were doing canonical() twice
[07:45] <michi> So, I think what *can* be removed is the check in thumbnailer-admin.
[07:46] <michi> Because the lib does that too.
[07:46] <michi> Are we on the same wavelength now?
[07:46] <jamesh> either remove the canonical() call in thumbnailer-admin, or make the client lib just throw on !is_absolute()
[07:47] <michi> I think the safer option is to leave the lib as is and remove it from thumbnailer-admin.
[07:47] <michi> That way, if someone talks directly to the lib, we avoid a round-trip to the service.
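The division of labour michi and jamesh settle on can be sketched in Python with pathlib (the real code is the thumbnailer's C++/Qt; these function names are invented and the mapping is approximate):

```python
from pathlib import Path

def client_resolve(p):
    """Client library: canonicalise exactly once, relative to the
    *client's* cwd, before the path crosses to the service (so
    thumbnailer-admin should not resolve again before calling in)."""
    return Path(p).resolve()

def service_accept(p):
    """Service side: never rely on work done by the untrusted client.
    Reject relative paths outright (the service's own cwd is meaningless
    for them), then canonicalise only to collapse symlinks so the same
    file does not get multiple cache keys."""
    path = Path(p)
    if not path.is_absolute():
        raise ValueError(f"service requires an absolute path, got {p!r}")
    return path.resolve()
```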
[10:19] <sil2100> pstolowski, michi, dobey: hey! I see that the acc-disabling silo is failing on autopkgtests, are you guys looking into that?
[10:19] <michi> sil2100: Been looking at it all day :(
[10:19] <michi> It’s a break in qt-something.
[10:20] <michi> The autopkg tests weren’t making progress for almost the entire day.
[10:20] <michi> Because the PPC test machine died.
[10:20] <michi> So nothing got through.
[10:20] <michi> silo 37 is what we really would like to get unblocked.
[10:22] <michi> sil2100: Been tinkering with abigail most of the day.
[10:22] <michi> Making progress. Found a bug today. The main dev at RedHat is looking at it.
[10:25] <sil2100> ;/
[10:25] <pstolowski> "badpkg: Test dependencies are unsatisfiable."
[10:26] <pstolowski> Broken libscope-harness2:amd64 Depends on libunity-scopes1.0 [ amd64 ] < none -> 1.0.3+16.04.20160209-0ubuntu1 > ( libs )
[10:26] <pstolowski>   Considering libunity-scopes1.0:amd64 2 as a solution to libscope-harness2:amd64 0
[10:26] <pstolowski>   Holding Back libscope-harness2:amd64 rather than change libunity-scopes1.0:amd64
[10:34] <sil2100> hm, I'm not an expert in autopkgtests
[10:35] <sil2100> Mirv: you seem to have more experience, could you help me parse that ^ ? :)
[10:50] <Saviq> sil2100, can you please press ♻ on the scope click regression in http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial/update_excuses.html#unity8
[10:52]  * Saviq finds it quite weird this is limited to core-devs
[10:55] <sil2100> o/
[10:55] <sil2100> Yeah, it shouldn't
[10:55] <sil2100> Retried
[11:18] <Mirv> sil2100: I'm also not too expert in finding the actual problem from the logs.. experimenting in chroot helps, like with the python problem
[11:18] <sil2100> pete-woods: hey!
[11:19] <sil2100> pete-woods: do you know if we still use unity-voice anywhere? Is that project maintained and necessary?
[11:20] <Mirv> Saviq: more exactly it's limited to "those with upload rights to the package". there needs to be some limitations, but maybe additional rights could be considered at some points. pitti would know the rationale.
[11:21] <pete-woods> sil2100: we don't use it anywhere, no
[11:21] <Mirv> I'll try to look more into the failures in a bit
[11:22] <pete-woods> sil2100: it used to be used by HUD, but at request we removed it, as the functionality was only available on the phone
[11:22] <pete-woods> and HUD was also removed from the phone
[11:22] <sil2100> Yeah
[11:22] <sil2100> Ok, I wonder if we could remove it from the archive
[11:23] <pete-woods> it could certainly at least be bumped into universe from main
[11:23] <pete-woods> but I'd rather not remove it completely, in-case suddenly HUD needs to do voice recognition again
[11:26] <cjwatson> pete-woods: unity-voice is already in universe.
[11:27] <Laney> The problem is that some APIs it uses have changed and so it needs maintaining
[11:27] <Laney> You could ~easily bring it back if required
[11:33] <sil2100> I would say it's no use having it in the archive just for the sake of having it
[11:33] <sil2100> The lp project can stay so as Laney said, it can be re-released whenever needed
[11:36] <pete-woods> okay, fair enough, feel free to nuke it then
[12:19] <Mirv> sil2100: michi's silo 037 problem is because of the libjson transition. not sure what to do, maybe agree that vivid autopkgtests show green and override to get it into QA queue anyway.
[12:19] <michi> Mirv: Are you sure it’s because of json?
[12:19] <Mirv> sil2100: there's a clash with the two sides of things that the builds are with proposed and autopkgtests without proposed but with the PPA. that's intended and most of the time provides the best results.
[12:19] <michi> I thought I saw an excuse about qt-something
[12:20] <Mirv> michi: the PPA depends on libjson1 which is only available in -proposed, but the autopkgtests run without proposed so that there wouldn't be failures coming from outside the silo.
[12:20] <Mirv> michi: can you point me to it? the autopkgtest logs however have a lot of things in them that seem suspicious but are simply observations and not a cause for a failure.
[12:20] <michi> Ah
[12:20] <michi> Let me see if I can find it.
[12:21] <Mirv> sil2100: but the libjson transition should simply be finished, that's the real fix
[12:22] <michi> Mirv: I’m looking at https://requests.ci-train.ubuntu.com/static/britney/xenial/landing-037/excuses.html
[12:22] <michi> Scroll to the bottom.
[12:22] <michi> Then look at the i386 regression for scopes-shell: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-xenial-ci-train-ppa-service-landing-037/xenial/i386/q/qtcreator-plugin-ubuntu/20160210_000636@/log.gz
[12:22] <michi> Right at the bottom of that, it bitches about qt-something
[12:23] <sil2100> Mirv: the scopes silo is the only thing left for the libjsoncpp transition
[12:24] <sil2100> Mirv: the only packages left with the old libjsoncpp0v5 dependencies are in this silo and this silo was supposed to fix it
[12:24] <sil2100> (as we needed them to be re-built)
[12:24] <Mirv> sil2100: ah. in that case I think we might simply stare at this beauty https://requests.ci-train.ubuntu.com/static/britney/vivid/landing-037/excuses.html and ask QA to get it into their queue manually
[12:25] <sil2100> Mirv: hah ;) Yeah, I just couldn't understand why this is failing, since all the required leftover packages that needed a rebuild were in this silo, so I expected autopkgtests to install all that are in the silo
[12:25] <Mirv> michi: the fact that it runs certain autopkgtests doesn't mean failures in those runs are related to the failing package. _all_ reverse dependencies of unity-scopes-shell that have autopkgtests are always executed, and that includes qtcreator-plugin-ubuntu (the SDK installs some scope package) and unity8.
[12:25] <michi> Mirv: cjwatson mentioned earlier today that, when something won’t work because a dependency of a dependency isn’t available, it needs manual intervention.
[12:26] <Mirv> sil2100: because the libjson itself is not in the silo, but only in -proposed
[12:26] <Mirv> sil2100: and autopkgtests run without proposed
[12:26] <sil2100> Aaaaaaah, ok, I keep forgetting about the -proposed bits
[12:27] <michi> Mirv: If we are building against proposed, the bloody autopkg tests had better run against proposed too, otherwise this is a thoroughly pointless exercise!
[12:27] <sil2100> Yeah, I don't know why I forgot about it, I got bitten by it last week already
[12:27] <Mirv> sil2100: well it changed yesterday evening so ... :D
[12:28] <Mirv> sil2100: so after consulting with pitti we agreed to disable proposed from autopkgtests, because that causes a lot of non-PPA-related failures (failures that need to be fixed in -proposed before the other things migrate from there). but the counterside is this that if something from proposed is actually required...
[12:28] <michi> I really think it doesn’t make sense to run autopkg against xenial, when the code was built with xenial-proposed.
[12:28] <Mirv> sil2100: so this implementation made it possible for my silo 23 to get automatic signoff after 1,5 weeks of related train fixes :)
[12:29] <michi> Both success and failure of a test are pretty much meaningless in that case.
[12:29] <sil2100> jibel, davmor2, rvr: hey guys! Regarding silo 37 - could you guys manually shove it into the QA queue? The failing xenial tests are caused by us not using what's in -proposed
[12:29] <sil2100> jibel, davmor2, rvr: it's all part of a transition in xenial
[12:30] <davmor2> sil2100: take it the fix that was meant to land still didn't then
[12:30] <davmor2> didn't fix it that is
[12:30] <michi> Am I missing something here?
[12:30] <cjwatson> Mirv: Surely all this is libjsoncpp, not libjson?
[12:30] <michi> We are testing with old packages for code that built with newer packages and expect to get meaningful answers?
[12:30] <Mirv> cjwatson: sure, yes
[12:31] <cjwatson> michi: The point here is to ensure that integrating new code doesn't break stuff already in the archive.  Of course that's important and expected to be meaningful.
[12:31] <michi> Hmmm…
[12:31] <cjwatson> michi: In cases where we're integrating a block of multiple packages then as a general rule (modulo bugs) we try to test the newer ones, though.
[12:31] <michi> This feels dubious to me.
[12:31] <Mirv> sil2100: the proposed thing was bug #1541334
[12:31] <ubot5`> bug 1541334 in autopkgtest (Ubuntu) "Do not run silo tests against all of -proposed" [Undecided,Fix released] https://launchpad.net/bugs/1541334
[12:31] <michi> I mean, pretty much anything could happen.
[12:32] <cjwatson> Yes.  It's software.
[12:32] <michi> It’s essentially undefined what answers come back from a test.
[12:32] <cjwatson> Nonsense!
[12:32] <michi> The code tests with packages it wasn’t built against.
[12:32] <cjwatson> We can investigate and override cases where mistakes happen.
[12:32] <michi> That’s doubleplusungood.
[12:32] <cjwatson> Rubbish.
[12:32] <michi> How so?
[12:33] <cjwatson> If the packages you're fearful of need to be rebuilt, then that's a thing you need to know.
[12:33] <michi> Hmmm…
[12:33] <cjwatson> If tests fail, then it may well indicate that your landing is incomplete.
[12:33] <michi> So, I call into libx at v1
[12:33] <cjwatson> Simply ignoring that on the basis that it's "essentially undefined" is deeply dubious practice.
[12:33] <michi> I write tests, blah, blah.
[12:34] <michi> Then something goes and runs my tests built against v1 with autopkg tests from v0
[12:34] <michi> Sorry, with autopkg tests that call into v0
[12:34] <michi> So: bool f()
[12:34] <michi> returns false in v0, and true in v1
[12:35] <cjwatson> In most cases, these things are integration tests.  They're not testing specific functions like that, they're testing that the package as a whole is still functional.
[12:35] <michi> How can I make any QA assurance now?
[12:35] <michi> Yes, I know.
[12:35] <michi> But this entire approach has landmines all over it.
[12:35] <cjwatson> There are some edge cases where one needs to change the autopkgtests to take account of interface changes, and those cases we can override.
[12:35] <cjwatson> But you still want to know about them.
[12:36] <michi> Well, for the record, I think I can safely predict that this will blow up in unpredictable ways some of the time.
[12:36] <cjwatson> If the package containing these autopkgtests has changed, then typically the newer tests will be run.
[12:36] <cjwatson> That's nice.
[12:36] <michi> I think that’s a thoroughly bad idea.
[12:36] <michi> Yes, newer tests are better than older ones.
[12:36] <cjwatson> What are you actually proposing, in concrete terms?
[12:36] <michi> But if they call into something that is now essentially undefined because it is unexpected, anything can happen.
[12:36] <cjwatson> That's not helpful.
[12:36] <cjwatson> What are you actually proposing, in concrete terms?
[12:37] <michi> If something was built with v1, it needs to test with v1
[12:37] <michi> this approach effectively switches dependencies underneath autopkg tests, unless I’m misunderstanding something.
[12:37] <cjwatson> In general that will happen if there was a documented interface change (i.e. soname change or whatever).
[12:37] <michi> Really?
[12:37] <michi> We change versions all the time, with minor behavioral changes in an API.
[12:38] <michi> These are definitely not soname changes.
[12:38] <cjwatson> Most of which has zero effect on autopkgtests.
[12:38] <michi> Yes, because most people write sloppy autopkg tests that pay lip service to what an autopkg test should actually be doing, namely, to test the full functionality of a package.
[12:39] <cjwatson> That's kind of a retcon.
[12:39] <michi> I count myself among the sinners...
[12:39] <michi> retcon? What’s that? I don’t know the term.
[12:39] <cjwatson> Retroactive continuity
[12:39] <michi> Ah :)
[12:39] <cjwatson> As in, declaring something to have always been the case after the fact.
[12:39] <michi> Yes I get it :)
[12:40] <cjwatson> The real point of autopkgtests was to test packages in their as-installed state, rather than just in the build tree.
[12:40] <michi> Right.
[12:40] <michi> So the main point is to test that installation worked, rather than the code, which should have been thoroughly tested much earlier.
[12:40] <michi> I get that.
[12:41] <michi> Except that software has its ways.
[12:41] <Saviq> sil2100, no dice, try again? http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial/update_excuses.html#unity8
[12:41] <Saviq> at least there's no queue now
[12:41] <michi> To blow up in the most unbelievable ways, due to chains of impossible coincidences.
[12:41] <michi> Anyway, I got my five cents worth in. Thanks for listening! :)
[12:42] <cjwatson> The point here is that if your package doesn't require a newer version of the thing that depends on it to migrate, then you *do* need to make sure that the older version still works
[12:42] <cjwatson> Because otherwise you can end up letting integration failures through
[12:42] <michi> Correct. But these dependencies get switched underneath me without any warning too.
[12:42] <cjwatson> And in general it's even less likely that the newer version of autopkgtests for libx will work against an older version of libx
[12:43] <michi> We get broken all the time by packages we call into that do something different suddenly.
[12:43] <cjwatson> So you have to run the older test code
[12:43] <michi> It is *highly* likely that v2 of one of our dependencies will break us when v1 is still working.
[12:43] <michi> Happened to us just yesterday.
[12:43] <michi> And we never asked for v2
[12:43] <cjwatson> Great, so hopefully that's a test failure.
[12:43] <sil2100> Saviq: done
[12:43] <michi> Nope, it isn't
[12:44]  * sil2100 off to prepare lunch
[12:44] <cjwatson> Or insufficient tests.
[12:44] <cjwatson> Now, if your landing is part of a set with a newer version of libx, then we'd want to run the newer tests for libx and newer code for libx
[12:44] <michi> abi-compliance-checker crashes because of totally legitimate and correct and backward compatible changes in another package that abi-compliance-checker doesn’t even know exists.
[12:44] <michi> And, as a result, our unit tests fail.
[12:45] <cjwatson> "nope, it isn't [a test failure]", "and as a result our unit tests fail"
[12:45] <cjwatson> which is it?
[12:45] <michi> See, we have a new version of cap’n proto in xenial-proposed.
[12:45] <michi> that’s a perfectly good and legitimate and compatible update.
[12:45] <michi> And acc has a bug that causes it to barf when it compiles the new capnproto headers.
[12:46] <michi> And, because we are diligent, we run ABI compliance checks as part of our unit tests.
[12:46] <michi> And, suddenly, we fail in xenial-proposed.
[12:46] <cjwatson> that's not autopkgtests though
[12:46] <michi> And we didn’t even know that capn proto had changed, or that, suddenly, someone was building our code against xenial-proposed.
[12:46] <michi> No.
[12:46] <michi> But it’s the same thing.
[12:46] <cjwatson> no, it's really really not
[12:47] <cjwatson> one of the things that autopkgtests are really good for that entirely didn't exist in any form previously is to allow packages to make assertions about the behaviour of their dependencies and have those stick
[12:47] <michi> It’s spooky action at a distance.
[12:47] <cjwatson> if capnproto were making assertions about the behaviour of acc in an autopkgtest, then failures there would prevent a newer version of acc from migrating to xenial
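The mechanism cjwatson is describing is declared in a package's debian/tests/control. The stanza below is hypothetical (the test name and dependency list are invented) and only sketches how a package could pin down a dependency's behaviour so that a breaking upload of that dependency gets flagged as a regression in -proposed:

```
# debian/tests/control (hypothetical sketch)
Tests: capnproto-compat
Depends: @, capnproto, abi-compliance-checker
# "@" pulls in this source's own binary packages.  The script
# debian/tests/capnproto-compat would exercise capnproto; if a new
# capnproto in -proposed makes it fail, britney reports a regression
# and holds the capnproto migration back.
```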
[12:47] <michi> Yes, I understand the reasoning. And I’m totally supportive of autopkg testing.
[12:47] <michi> capnproto does not know acc exists, and vice versa.
[12:48] <cjwatson> Anyway, I'm getting very confused by the way you're shifting between different facets of the argument here, so I'm going to go and do something more useful ...
[12:48] <michi> acc calls gcc to compile the capnproto header files.
[12:48] <michi> And that fails.
[12:48] <michi> Ultimately, it’s a bug in acc.
[12:48] <michi> I’m not trying to make trouble. Just pointing out that this is dangerous.
[12:48] <michi> Now, one thing that would really be useful:
[12:49] <michi> Have Jenkins start building on the side for <next-adjective>-proposed, but without hard failures.
[12:49] <cjwatson> the problem is that you're pointing out lots of different things that are not actually technically the same thing, and using them in support of each other
[12:49] <michi> Then we’d get an early heads up when packages in -proposed cause trouble.
[12:49] <cjwatson> it's nearly impossible to follow an argument structured like this
[12:49] <michi> All I know is that we only found out on Monday that there was a problem.
[12:49] <michi> Because things started building in -proposed.
[12:49] <michi> So, the problem is that we find out late.
[12:50] <michi> If we could have some CI support, it would alert us to any problems much sooner.
[12:50] <cjwatson> that seems reasonable
[12:51] <michi> For example, the capnproto change has been sitting in proposed for months.
[12:51] <michi> But we didn’t even know that this was coming.
[12:51] <michi> Then, we get all surprised when, suddenly, nothing passes tests anymore, and a whole ton of people can’t land stuff because our package fails its tests due to an unknown change to a dependency.
[12:52] <cjwatson> Any reason the capnproto change hasn't landed properly?  It looks like it breaks several phone-related packages.
[12:52] <michi> I’m trying to catch that sort of thing earlier, so we can be proactive.
[12:52] <michi> I have no idea.
[12:52] <cjwatson> One thing that would help is to not let things languish in -proposed.
[12:52] <cjwatson>     * amd64: camera-app-autopilot, gallery-app-autopilot, indicator-network-autopilot, indicators-client, libscope-harness-dev, libscope-harness2, libunity-scopes-cli, libunity-scopes-dev, libunity-scopes-qt-dev, libunity-scopes-qt0.2, libunity-scopes1.0, python3-scope-harness, qtcreator-plugin-ubuntu, qtcreator-plugin-ubuntu-autopilot, ubuntu-experience-tests, ubuntu-pocket-desktop, ubuntu-push-autopilot, ubuntu-sdk, ...
[12:52] <cjwatson> ... ubuntu-sdk-libs-dev, ubuntu-touch, ubuntu-touch-session, unity-plugin-scopes, unity-scope-click, unity-scope-click-autopilot, unity-scope-mediascanner2, unity-scope-scopes, unity-scope-snappy, unity-scope-tool, unity8, unity8-autopilot, unity8-common, unity8-desktop-session-mir
[12:52] <michi> sil2100 wasn’t totally sure whether the guy who pushed the change actually tested all the reverse deps.
[12:52] <cjwatson> that's the uninstallable list from the new capnproto
[12:52] <cjwatson> not only are they untested, they're uninstallable
[12:52] <cjwatson> very likely because of the soname change
[12:53] <michi> Probably
[12:53] <michi> We just disabled one unit test, and that “fixed” it.
[12:53] <cjwatson> needs a rebuild of libunity-scopes1.0
[12:53] <michi> At the cost of no longer running abi compliance checks.
[12:53] <michi> We pushed a branch for that yesterday. silo 37
[12:55] <cjwatson> a thing I generally advocate for is keeping -proposed as empty as possible so that it's more obvious when this sort of thing is languishing
[12:55] <michi> that would be good, yes.
[12:56] <michi> It’s not going to happen without some rules and some process to enforce it though.
[12:56] <michi> to me, it’s all about catching problems early.
[12:56] <cjwatson> well, that was what I was trying to do with the whole +1 maintenance effort
[12:56] <michi> If we find out three days before a release freeze when something fails in a silo, that’s no good.
[12:57] <cjwatson> but it also requires getting people to care about things outside their little bubble
[12:57] <michi> Sends everyone into headless chicken mode.
[12:57] <michi> Agree.
[12:57] <michi> I have no simple answers :(
[12:58] <michi> But I know from bitter experience that, if we catch something early, that’s exponentially cheaper than catching it late.
[12:59] <michi> Looking at the list of reverse deps for capnproto, quite a few of those are simply because scopes-api uses capnproto, I think.
[13:00] <michi> But, getting focus...
[13:00] <michi> Is there a way to get silo 37 unblocked?
[13:00] <michi> the test failure is fixed now.
[13:00] <michi> So, whatever is holding up silo 37, it ain’t us, to the best of my knowledge.
[13:00] <cjwatson> let's see
[13:01] <cjwatson> I'll see what happens if I pick a random one of those tests and rerun it
[13:01] <cjwatson> but we still have a new ubuntu-touch-meta only in -proposed, so I'm a little doubtful
[13:02] <michi> Hmmm...
[13:03] <michi> The regression tests for scopes-shell fail for xenial in silo 13 because of this, I think: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-xenial-ci-train-ppa-service-landing-037/xenial/i386/q/qtcreator-plugin-ubuntu/20160210_000636@/log.gz
[13:03] <michi> Right at the end of the log.
[13:04] <michi> qtcreator-plugin-ubuntu
[13:04] <cjwatson> I think that's ubuntu-sdk-libs being uninstallable which is basically due to the same thing
[13:04] <cjwatson> We probably just need to force the tests to be run with a slightly larger trigger set
[13:04] <michi> Is that the same problem as for ubuntu-touch-meta?
[13:04] <michi> I’m not aware of all the dependencies/groupings :(
[13:06] <cjwatson> Yeah
[13:07] <michi> OK.
[13:07] <michi> So there is nothing we can do for the moment.
[13:08] <michi> Somehow, qtcreator-plugin-ubuntu needs to pass first.
[13:08] <michi> I have no idea how to accomplish that.
[13:15]  * cjwatson tries running unity-scope-click tests with unity-scopes-api added to the trigger
[13:37] <cjwatson> michi: (now waiting to be able to see the output of the test I triggered)
[13:37] <michi> cjwatson: We just decided to unbundle scopes-api from silo 37.
[13:37] <ogra_> hmm, i'm running the ubuntu-pd image on my N7 now ... are the Xorg app launchers supposed to do anything ?
[13:37] <michi> We are going to land the scopes-api fix separately, which should unblock a whole bunch of other packages.
[13:38] <ChrisTownsend> ubuntu-qa: Hi, any ETA on when testing will commence for https://requests.ci-train.ubuntu.com/#/ticket/982 ?
[13:38] <ogra_> (i definitely dont even get feedback clicking something like gedit or firefox)
[13:38] <michi> cjwatson: What test did you trigger?
[13:39] <michi> cjwatson: Ah: tries running unity-scope-click tests with unity-scopes-api added to the trigger
[13:40] <michi> I don’t even know what a trigger is in this context :(
[13:40] <cjwatson> michi: can you explain why you're unbundling?  I really think that will make matters strictly worse
[13:40]  * cjwatson checks
[13:40] <michi> The list of reverse deps you pasted earlier had a bunch of packages in it that are failing only because of the inability of scopes-api to make progress.
[13:40] <jibel> ChrisTownsend, is it for OTA9.5 or 10? there is no milestone
[13:41] <michi> silo 37 bundles the scopes-api change with a bunch of other fixes.
[13:41] <cjwatson> michi: but it isn't going to be possible for unity-scopes-api to make independent progress
[13:41] <cjwatson> michi: please don't unbundle it
[13:41] <michi> silo 37 doesn’t make progress because there is a problem with one of scopes-shell’s dependencies.
[13:41] <jibel> our plate is already full but if it's urgent we can allocate some time.
[13:41] <jibel> ChrisTownsend, ^
[13:41] <cjwatson> michi: it's one of the things that has to be rebuilt against libjsoncpp1; therefore it has to go together with the other things being rebuilt against libjsoncpp1
[13:41] <michi> cjwatson: The *only* change for scopes api is that we have disabled the failing test.
[13:42] <michi> Oh Christ… :(
[13:42] <ChrisTownsend> jibel: Umm, who decides that?  I guess OTA9.5 as it's needed asap.
[13:42] <michi> Are you sure that *scopes-api* is breaking because of json? I have seen no evidence of that.
[13:42] <cjwatson> michi: it's also a rebuild against libjsoncpp1, perhaps incidentally but necessarily
[13:42] <michi> As far as I know, scopes-api works as well with the new json as with the old one.
[13:43] <cjwatson> michi: it's not that it's breaking, but promoting the libjsoncpp change requires everything to be rebuilt in sync
[13:43] <michi> Oh my God :(
[13:43] <michi> pstolowski: ^
[13:43] <cjwatson> michi: if you unbundle unity-scopes-api, not only will it make little progress on its own but it will also make it strictly more difficult to make progress on unity-scope-click
[13:43] <michi> I don’t know how to help anymore.
[13:43] <cjwatson> so please don't
[13:43] <cjwatson> just leave it, I'm working on your silo
[13:44] <michi> I really tried all day to move this ahead somehow, but I keep running into dead ends :(
[13:44] <michi> pstolowski: Please co-ordinate with cjwatson about this.
[13:44] <michi> I’m about to sign off.
[13:44] <cjwatson> the trigger in this case is what the autopkgtest system considers to be the cause of the test attempts; in this case it's used to construct apt pins for packages to pull from later than xenial
[13:44] <cjwatson> adding a trigger is how we arrange for multiple things to be tested in combination
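An illustration of the pins cjwatson is describing: per trigger, the test runner generates an apt preferences stanza that pulls only the binaries built from the trigger source out of -proposed, leaving everything else on the release pocket. The file name, binary names, and priority below are assumptions for illustration, not copied from the real runner:

```
# Illustrative pin generated for trigger "unity-scopes-api": only binaries
# built from that source come from -proposed; everything else stays on xenial.
Package: libunity-scopes1.0 libunity-scopes-dev
Pin: release a=xenial-proposed
Pin-Priority: 995
```

Adding a second trigger would add its binaries to the pinned set, which is how multiple uploads get tested in combination.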
[13:44] <michi> cjwatson: You just threw more Chinese at me than I can read. I’m way out of my depth here...
[13:45] <cjwatson> but that gets way harder if they're in separate silos, we'd basically have to forcibly ignore test results
[13:45] <michi> Ok, re-reading this, I sort of get the gist of it.
[13:45] <cjwatson> in this case they're necessarily combined in terms of the dependency graph, so keep them together
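A small sketch of why the libjsoncpp1 rebuilds are "necessarily combined": every source rebuilt against the new soname has binaries depending on it, so britney must promote the whole group at once. The package names mirror the discussion; the union-find grouping below is illustrative, not britney's actual algorithm.

```python
# Group sources that were rebuilt against the same new soname: any two
# sources sharing a soname must migrate in the same batch.
from collections import defaultdict

def migration_groups(rebuilt_against):
    """rebuilt_against maps source package -> set of new sonames it links to.
    Returns sorted groups of sources that must land together."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    by_soname = defaultdict(list)
    for pkg, sonames in rebuilt_against.items():
        find(pkg)  # register the package even if it shares nothing
        for soname in sonames:
            by_soname[soname].append(pkg)
    for pkgs in by_soname.values():
        for other in pkgs[1:]:
            parent[find(other)] = find(pkgs[0])  # union into one set

    groups = defaultdict(set)
    for pkg in rebuilt_against:
        groups[find(pkg)].add(pkg)
    return sorted(sorted(g) for g in groups.values())

rebuilt = {
    "unity-scopes-api": {"libjsoncpp1"},
    "unity-scope-click": {"libjsoncpp1"},
    "capnproto": {"libcapnp-0.5.3"},
}
print(migration_groups(rebuilt))
# → [['capnproto'], ['unity-scope-click', 'unity-scopes-api']]
```

Splitting unity-scopes-api into its own silo cannot break this coupling: it still shares libjsoncpp1 with the rest of the group, which is the point cjwatson is making.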
[13:45] <michi> the autopkg tests for scopes-api will pass in the silo.
[13:45] <michi> We actually run them as part of our unit tests.
[13:46] <cjwatson> yeah, they should do as soon as I figure out the correct trigger to pass
[13:46] <michi> I’m virtually certain that our autopkg test will pass even with proposed.
[13:46] <cjwatson> I'm just being slowed down by not being able to see results immediately
[13:46] <michi> Because I strongly suspect that the autopkg test for scopes API won’t touch any code path related to json.
[13:46] <cjwatson> that seems likely
[13:47] <michi> So, we *might* just get away with a separate scopes-api landing from silo 24.
[13:47] <cjwatson> seriously, no
[13:47] <michi> I’ll leave it up to you and pstolowski to decide.
[13:47] <cjwatson> please please please do not muck with this
[13:48] <michi> I’m about to go to bed. It’s nearly midnight here. I won’t physically be able to muck with anything for the next seven hours.
[13:48] <cjwatson> that was a collective plea :)
[13:48] <michi> I’m out of my depth here, so I’m happy to leave it to you.
[13:48] <cjwatson> I'm reasonably sure I can get 37 sorted out, it will just take a little time
[13:49] <cjwatson> and I will get back to you/pstolowski if I run into a problem that isn't just infrastructural
[13:49] <michi> Just one thought to keep in mind for the next sprint, maybe. The complexity of what has just happened tells me that we must be doing something wrong. I think the root problem might be late detection of issues caused by new packages in -proposed. If we can build and test against -proposed early, I suspect we’ll mitigate the problems.
[13:49] <michi> cjwatson: Thanks!
[13:50] <cjwatson> I think the basic thing that went wrong here is allowing too many separate transitions to become entwined due to inattention to the size of -proposed
[13:50] <cjwatson> (possibly collective)
[13:50] <michi> Yes.
[13:50] <michi> And us not even knowing about the oncoming train, such as capnproto and acc.
[13:51] <michi> So, we were in blissful ignorance all along, until yesterday, even though the actual problem has been there for months.
[13:52] <cjwatson> the other thing is, if this weren't phone, I'm pretty sure somebody would've just fired off rebuilds ages ago and been done with it
[13:52] <cjwatson> but because phone is all super-careful about everything, the people who try to generally care about -proposed backlog are often averse to dealing with it
[13:52] <cjwatson> and as a result this can end up backing up in a way that wouldn't have been a problem if dealt with earlier :-/
[13:53] <dobey> hmm
[13:53] <jibel> ChrisTownsend, someone will verify it today.
[13:54] <ChrisTownsend> jibel: Ok, thanks.
[13:55] <dobey> well i guess the queue is emptied now
[13:55] <ChrisTownsend> jibel: Just for my info, is there a freeze going on that would normally cause this to be delayed?
[13:55] <ChrisTownsend> jibel: ie, if there wasn't a critical bug being fixed?
[13:57] <jibel> ChrisTownsend, there is no freeze, just a busy week.
[13:57] <ChrisTownsend> jibel: Ok, I understand
[14:04] <dobey> cjwatson: so you understand these pkgProblemResolver errors? i'm a little confused about them
[14:09] <dobey> hmm, what a mess
[14:10] <cjwatson> dobey: more or less.  excessive isolation for things that need to be tested in combination
[14:11] <dobey> cjwatson: ah. and you're doing some work to hopefully squeeze things through?
[14:11] <dobey> (sorry if this is repetitive, but i just got on line and trying to catch up :)
[14:17] <cjwatson> dobey: that is my hope
[14:23] <dobey> cjwatson: great. thanks!
[14:30] <AlbertA> trainguards: could someone retrigger the xenial/amd64 build in this silo ppa: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/landing-064/+packages
[14:31] <robru> AlbertA: done
[14:31] <AlbertA> robru: thanks!
[14:31] <robru> AlbertA: you're welcome
[14:42] <pstolowski> cjwatson, hey, i see you and michi had a lengthy conversation... the conclusion is not to try to land unity-scopes-api with a separate silo right?
[14:43] <cjwatson> pstolowski: indeed, I think that will only make matters worse
[14:44] <cjwatson> pstolowski: doing that won't decouple it, it will just mean we have to figure out how to do cross-silo autopkgtesting and argh
[14:44] <pstolowski> cjwatson, fair enough.. i fully trust your judgement
[14:45] <davmor3> ping davmor2
[14:46] <davmor2> pong davmor3
[14:48] <dobey> oh no, the machines are taking over
[14:53] <davmor3> ChrisTownsend, ping well this is xchat mako \o/
[14:53] <ogra_> neat !
[14:53] <ChrisTownsend> davmor3:  Awesome!  Thanks for confirming!
[14:54] <davmor3> jibel, so looks like this is working for the ever \o/ now to test the rest :)
[15:15] <cjwatson> robru: So, I need to copy some packages from -proposed into a silo in order that they can all be autopkgtested together.  Do I need to add them to the source packages list in the request, or does that not matter if they're already in -proposed anyway?
[15:16] <jibel> rvr, didn't you approve silo 46 (click) 2 days ago?
[15:16] <robru> cjwatson: the source packages list in the request will be auto-filled, just copy the packages
[15:17] <cjwatson> robru: thanks, will try it
[15:17] <robru> yw
[15:27] <dobey> oh wow that's kind of awful
[15:28] <dobey> cjwatson, robru: does that mean the CI machinery is going to want to have the new capnproto and jsoncpp in vivid-overlay too?
[15:29] <robru> dobey: only if you've pushed packages to vivid overlay that depend on them? I dunno
[15:30] <dobey> robru: well i see the status says "Ready to build" in vivid for those packages now since cjwatson copied the xenial-proposed binaries in
[15:31] <robru> dobey: what request are you even talking about. i don't see it
[15:31] <cjwatson> is it possible it doesn't understand that those source packages are only needed in xenial and not vivid?
[15:31] <cjwatson> robru: 975
[15:31] <dobey> robru: 975
[15:32] <dobey> cjwatson: right, that's what i'm thinking
[15:32] <robru> cjwatson: dobey: well you guys have the silo set as a dual silo, so that means it wants everything in xenial and vivid
[15:32] <cjwatson> there's another approach available for this, but this would be the simplest one
[15:33] <cjwatson> robru: no way to make an exception for a few packages then?
[15:33] <dobey> really the autopkgtests should be running against proposed
[15:33] <robru> cjwatson: no, a dual silo is a dual silo. but if those are manual packages you can just copy the vivid versions into the silo and it'll be happy with that.
[15:34] <cjwatson> dobey: well, I'd like to avoid pulling in random other stuff I don't know about
[15:34] <cjwatson> and pitti tells me silos in general deliberately don't because that causes other problems
[15:34] <dobey> cjwatson: right, but i mean, i don't see how this hasn't been a problem already, given the PPAs build against -proposed.
[15:34] <cjwatson> dobey: maybe it has occasionally and has been worked around in similar ways
[15:35] <cjwatson> robru: does that seem reasonable to you?  it potentially results in duplicate packages in the overlay
[15:35] <dobey> robru: will it be happy if the same versions are already in the target?
[15:35] <dobey> i guess it will be ok
[15:35] <cjwatson> it'll be fine in xenial, I just don't want to create cruft in vivid
[15:35] <robru> cjwatson: what do you mean duplicate? if they're packages that are already in the overlay just copy what's in the overlay. train will just see it as a successfully published package rather than something needing to be built
[15:35] <dobey> cjwatson: i guess they won't be duplicate in the overlay. it'll just treat them as already copied
[15:36] <cjwatson> robru: I suspect these are only in the base archive, not the overlay
[15:36] <cjwatson> no particular reason they would've been overlaid at any point
[15:36] <cjwatson> well, not all of them anyway
[15:36] <dobey> well yeah
[15:36] <dobey> -meta will be in the overlay
[15:36] <dobey> but jsoncpp and capnproto probably not
[15:36] <robru> cjwatson: ok, my recommendation is just copy whatever from vivid archive and then delete from overlay ppa after publishing the silo.
[15:36] <cjwatson> mkay
[15:37] <cjwatson> thanks
[15:37] <robru> yw
[15:37] <dobey> i think we do just need to fix these autopkgtests to always run against proposed though
[15:38] <cjwatson> pitti has just finished telling me how the effects of that were worse
[15:38] <cjwatson> 15:13 <pitti> as that is in many cases unintended and led to too many blockings
[15:38] <cjwatson> so I mean you can argue it out with him, but he seems to have taken a decision based on data ...
[15:39] <dobey> well, that blockage is going to happen when things get published to -proposed anyway; but sure, it will maybe make things block a bit more in silos
[15:45] <davmor2> ChrisTownsend: approved libertine
[15:45] <ChrisTownsend> davmor2: Thank you very much!
[15:58] <dobey> hmm, i guess it's not entirely happy about that
[16:01] <cjwatson> I don't know if the "diff missing" stuff actually matters
[16:01] <cjwatson> had to copy in cmake too, so there'll be another cycle here
[16:06] <cjwatson> but at least unity-scopes-api and qtcreator-plugin-ubuntu are now passing
[16:06] <cjwatson> (not green on excuses yet but they should be when it next updates)
[16:13] <Trevinho> Laney: can you ACK https://requests.ci-train.ubuntu.com/#/ticket/873 ?
[16:25] <dobey> cjwatson: ok, cool, hopefully it is happy now then
[16:26] <cjwatson> hmm
[16:26] <cjwatson> robru: "Diff missing" is going to prevent this getting anywhere, isn't it?
[16:26] <robru> cjwatson: define "getting anywhere"
[16:27] <cjwatson> robru: reaching the "Successfully built" state
[16:27] <dobey> robru: "qa: ready"
[16:27] <robru> cjwatson: it's not going to stop britney running on it
[16:27] <robru> cjwatson: also trivially fixed by running the build job with "DIFF_ONLY" set
[16:28] <cjwatson> aha
[16:28]  * cjwatson will do that
[16:43] <Laney> Trevinho: what is:
[16:43] <Laney> -	rm -f debian/tmp/usr/share/compiz/networkarearegion.xml
[16:43] <Laney> -	rm -f debian/tmp//usr/lib/compiz/libnetworkarearegion.so
[16:43] <Laney> ?
[16:46] <Trevinho> Laney: it was a plugin that we didn't use in ubuntu anymore since long time.. It was still built, though. So I just disabled it from CMake instead of removing the built bits
[16:48] <Laney> ok!
[16:52] <Laney> Trevinho: I should publish it?
[16:56] <dobey> yay, got past installation now
[17:03] <cjwatson> dobey,pstolowski: can you have a look at https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-xenial-ci-train-ppa-service-landing-037/xenial/armhf/u/unity-scope-click/20160210_165255@/log.gz ?
[17:05] <pstolowski> cjwatson, yes, that's the test flakiness that is going to be disabled with https://code.launchpad.net/~dobey/unity-scope-click/temp-no-int-tests/+merge/285079
[17:05] <dobey> cjwatson: ah, i can fix that. i need to disable it in two places
[17:06] <cjwatson> well, the test is retrying anyway for reasons, if it's just flaky then maybe it will pass next time
[17:06] <dobey> cjwatson: well, the MP in that silo is supposed to disable the tests for now. but i forgot we were actually running them against the installed scope in the autopkgtests too, so need to update the branch to disable there and rebuild unity-scope-click in that silo
[17:08] <cjwatson> dobey: ok
[17:08] <dobey> cjwatson: and just started the rebuild. hopefully it will be quick
[17:09] <cjwatson> ta
[17:09] <cjwatson> it'll retrigger unity8 autopkgtests, so that'll take a while
[17:09] <cjwatson> but with any luck this should all go through without my intervention now
[17:10] <cjwatson> and hopefully this can get quickish QA
[17:11] <dobey> right
[17:15] <dobey> ok, i'm going to go get lunch.
[17:37] <robru> alright! fun times. anybody need the train urgently for the next 2 hours? I'm gonna do a big rollout...
[17:38] <robru> (britney/autopkgtests/ppas unaffected, just source builds might be a bit delayed)
[17:58] <Trevinho> Laney: yeah, if you can...
[17:58] <Laney> ok
[18:00] <Trevinho> Laney: why that? ^
[18:02] <Laney> shrug
[18:02] <Laney> it's working now
[18:06] <Trevinho> :)
[18:12] <dobey> robru: ugh, your timing sucks ;)
[19:58] <dobey> huh
[20:02] <dobey> i think i found a bug in the autopkgtests running in silos, but i'm not quite sure how/where to report it
[20:03] <dobey> or it's not a bug and things are slow enough to just be annoying/confusing
[20:11] <dobey> i wonder if the currently "running" stuff has passed yet
[20:15] <dobey> ok, who can i bug from qa right now
[20:16] <dobey> alesage: can you be our qa savior?
[20:18] <alesage> dobey, not sure yet, more info pls
[20:21] <dobey> alesage: xenial -proposed is highly blocked at the moment, and silo 37 fixes it. we've been trying to get everything sorted so qa can review it (it's a dual landing silo), and it's about ready. but we want to get it through asap to get things unblocked
[20:22] <dobey> so there are some silos that can't build on xenial at the moment as a result of this blockage, which means they can't land until it's unblocked
[20:24] <alesage> dobey, not sure what you're asking for
[20:25] <dobey> alesage: to get https://requests.ci-train.ubuntu.com/static/britney/xenial/landing-037/excuses.html onto the qa trello, tested, and passed, so we can publish it and unblock everyone
[20:25] <dobey> err
[20:25] <dobey> https://requests.ci-train.ubuntu.com/#/ticket/975
[20:25] <dobey> not the excuses
[20:27] <alesage> dobey, unlikely we'd get to it until someone better qualified can answer :) , also not able to parse test failures
[20:27] <dobey> alesage: the one failed on that excuses page is the older unity-scope-click build; the newer one would pass.
[20:28] <dobey> (and did pass)
[20:38] <dobey> alesage: who would be better qualified to answer, and awake in our TZs?
[20:38] <dobey> well, awake and working, as opposed to awake and at the pub
[20:40] <alesage> dobey, we're all busy with a release atm, unlikely to find an ear until EU AM
[20:42] <dobey> :(
[22:53] <cjwatson> dobey: I just poked it
[22:54] <cjwatson> dobey: (I only bothered with armhf)
[23:46] <cjwatson> dobey: https://requests.ci-train.ubuntu.com/static/britney/xenial/landing-037/excuses.html \o/
[23:46] <cjwatson> I guess the train will notice in a bit
[23:47] <cjwatson> michi: 37 passes autopkgtests now.  What I eventually did was copy various other packages into the silo from -proposed which had previously needed to be rebuilt against libjsoncpp1, in order that the system knows to test them all together.
[23:47] <cjwatson> The copy back to -proposed when you publish should be a no-op.