=== steve__ is now known as sbeattie
[00:43] stgraber: hey, if still around, quick question, I did a binary copy of phablet-tools to proposed (https://launchpad.net/ubuntu/+source/phablet-tools/1.0+14.04.20131108-0ubuntu1/+publishinghistory), now is there anything else I need to do or just wait for migration to happen?
[00:44] first time I'm actually copying the binary package to the archive (from the daily-build one)
[00:45] seems it was finally published in proposed
[00:49] if britney/autopkgtest don't find any problem, there shouldn't be anything for you to do
[00:52] great :-)
=== doko_ is now known as doko
[09:58] cjwatson, could you have a look at https://launchpad.net/ubuntu/+source/libpod-spell-perl/1.12-1/+build/5193802 ? libclass-tiny-perl is in main
[09:59] doko: only the source
[10:00] looking ...
[10:00] double-override accident, maybe?
[10:00] https://launchpad.net/ubuntu/trusty/i386/libclass-tiny-perl suggests so
[10:00] I'll try copying it back. Please don't do the same within this publisher run
[10:01] hmm, I did run ./change-override -s trusty -y -S -c main libclass-tiny-perl
[10:01] ok
[10:01] But it looks like either you did it twice, or somebody else did it at the same time
[10:02] yes, could be
[10:02] wondering what else I did do twice for the perl stuff
[10:03] Not sure OTTOMH how to tell
[10:03] infinity: do you have a way to search for double-override accidents?
[10:04] cjwatson: Nope.
[10:05] We could iterate all binary packages looking for ones where the most recent BPPH is superseded, but that'd be really slow over the API
[10:06] cjwatson: /win 36
[10:06] Erm.
[10:06] La la la.
[10:06] Shan't
[10:57] cjwatson, does NBS complain about recommends too? e.g.
gmsh/gmsh-doc
[11:02] Yes
[11:02] If you think it's appropriate to remove anyway then you can
[11:02] Probably makes sense to remove in that case
[11:04] ok
[11:04] filed a bug in debian
=== tkamppeter_ is now known as tkamppeter
[13:57] i am just doing a dist-upgrade from saucy to trusty and (for the second time of asking) i am hitting a rhythmbox-data install failure, which doesn't seem to have any details ... is this known
[14:18] apw: Were there really no details? Check /var/log/apt/term.log
[14:19] (also, reproduced)
[14:21] Laney, doh didn't look there, looked a lot of other places
[14:21] Simple file conflict, fixing
[14:21] heh ... thanks ... yes file conflict
[14:22] Laney, doing a rb upload?
[14:22] ya
[14:22] well, unless you have something
[14:22] Laney, can you include https://git.gnome.org/browse/rhythmbox/commit/?id=f326f8e7055ee8b681a72f000203d071ccc72646 ?
[14:22] in which case I
[14:22] damn!
[14:22] lol
[14:22] too slow :p
[14:22] if you did it already no worry, we can queue in the vcs for the next upload
[14:22] no, I'm not that fast :P
[14:23] will do
[14:23] thanks
[14:23] awesome ...
[14:30] seb128: is there an lp bug for that?
[14:31] Laney, no, I just noticed it while looking at git logs for cherrypicks for the LTS
[14:31] k
[15:43] seb128, that rhythmbox upload you did that is in -proposed, seems to change the Uploaders:, does that make sense ?
[15:44] apw: GNOME Team packages get control auto-mangled via gnome team tools.
[15:45] apw: So, uploaders and some other bits randomly morph from week to week.
[15:45] infinity, up in debian right ? in ubuntu with an ubuntu specific change only i am surprised
[15:45] apw: Same tools run in Ubuntu.
[15:46] apw: And, in fact, that can often be the source of the delta, as we may have a different version of said tools with an older uploaders list.
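The "double-override accident" search cjwatson sketches earlier in the log (iterate binary packages and flag ones whose most recent BinaryPackagePublishingHistory record is superseded) could look roughly like this. This is a hypothetical sketch, not an existing tool: `find_double_overrides` is an invented name, the records only need `binary_package_name` and `status` attributes, and the filtering logic is a guess at what a real check would want.

```python
# Hypothetical sketch of the BPPH scan discussed above: a package whose
# *most recent* publishing record is Superseded (with nothing newer
# Published) is a candidate double-override accident. The record
# objects only need .binary_package_name and .status, so either a real
# launchpadlib BPPH collection or a stub works.

def find_double_overrides(records):
    """Return names whose newest publishing record is Superseded.

    `records` must be ordered newest-first per package, which is how
    the Launchpad API returns publishing history.
    """
    seen = set()
    suspects = []
    for rec in records:
        name = rec.binary_package_name
        if name in seen:
            continue  # only the most recent record per package matters
        seen.add(name)
        if rec.status == "Superseded":
            suspects.append(name)
    return suspects
```

Against real Launchpad this would be fed from a publishing-history collection such as `archive.getPublishedBinaries(...)`, which is exactly why cjwatson expects it to be slow over the API; treat that call and any ordering guarantees as assumptions to verify against the launchpadlib docs.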
:P
[15:46] (But it doesn't much matter)
[15:46] fair enough then, it is an oddity and no mistake
[15:47] Laney, when you have dumped that fix for rhythmbox in ping me, i have a nice test rig waiting for it
[15:47] now
[15:48] it's trivially reproducible in a chroot by installing rhythmbox in saucy then dist-upgrading to trusty
[16:08] apw, what infinity said, GNOME packages computes the Upload from the recent entries in the changelog
[16:09] compute the Uploaders*
[16:09] fair enough, something to ignore when looking at diffs
[16:09] as i was really looking for Laney's diff anyhow :)
[16:57] please could someone do magic to make upstart 1.11-0ubuntu1 migrate out of proposed? 1 test is failing but only on amd64 (I'm sure this is a test or test env bug).
[16:57] jodh: Generally, the solution to failing tests is to fix the failure. :P
[16:58] really ?
[16:58] not "close your eyes" ?
[16:59] infinity: clearly, I'd prefer that. Since the other 1669 tests all pass both locally and on the buildds and in the jenkins env, and I've boot tested on 3 arches locally, I'm happy that the failure is not indicative of a regression.
[16:59] jodh: Have you tested on the latest trusty kernel?
[17:00] jodh: (If the test is known-broken, an upload to disable it, or at least a commit that will do so in the next upload would still be better than me having to wave it through every third upload)
[17:00] jodh: If the test isn't provably broken, then maybe the code is. Which is the point of testsuites.
[17:01] jodh: (Is there an urgent reason upstart needs to be a unique snowflake in this regard?)
[17:01] jodh: another test run is happening now
[17:04] infinity: >latest kernel=yes. I am obviously currently trying to resolve the issue. If slangasek is happy to wait until it's fixed, fine with me but we were hoping to get this into the archive today and every test run until now has passed 100%.
[17:05] Everyone's always hoping to get things in when they upload them.
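The rhythmbox "simple file conflict" Laney fixes earlier in the log is the classic case handled with Breaks/Replaces in debian/control: the package that takes over a file declares it against older versions of the package that used to ship it. A hypothetical fragment (the package names come from the discussion, but the version bound and the direction of the move are illustrative only):

```
Package: rhythmbox
Breaks: rhythmbox-data (<< 3.0.1-0ubuntu1)
Replaces: rhythmbox-data (<< 3.0.1-0ubuntu1)
```

`Replaces` lets dpkg silently take over the conflicting file; the matching `Breaks` forces the old package to be upgraded first, which is what Debian Policy recommends for file moves between packages.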
I just prefer not to shortcut around the whole point of testsuites until one can provably demonstrate that a failure is due to a broken test.
[17:05] *shrug*
[17:05] "It usually passes but sometimes doesn't" isn't so much proof.
[17:07] jodh: I said you should upload today, but if it doesn't get into trusty because of a test suite failure, we should resolve that rather than bypass it :)
[17:09] slangasek: ok, I'll keep at it then...
[17:17] hmm, looks like the most recent run of the test just passed (not sure what triggered the re-run, maybe a rdepend upload)
[17:17] stgraber: where do you see that?
[17:17] amd64 build 14 is the most recent on https://jenkins.qa.ubuntu.com/view/Trusty/view/AutoPkgTest/job/trusty-adt-upstart/ and has a test suite failure
[17:17] slangasek: http://d-jenkins.ubuntu-ci:8080/view/Trusty/view/AutoPkgTest/job/trusty-adt-upstart/ARCH=amd64,label=adt/15/console
[17:17] currently running
[17:18] jodh: so is this a known test failure?
[17:18] stgraber: ok
[17:18] so we get it into trusty for free, but we should still figure out why the test was failing ;)
[17:20] stgraber: we have seen it before. I thought I'd fixed it before release but clearly not :)
[17:20] the remaining tests just finished without any failure, so looks like upstart will get promoted next time britney runs
[17:20] slangasek: that is to say, I had a test fix merged for this bug and that appeared to squish it, but...
[17:39] jodh: that was lp:~jamesodhunt/upstart/fix-test_state-test-reprise ?
[17:56] jodh: I'm confused how this test ever works. You have a child that blocks waiting to read a byte from fds[0], but I don't see the parent ever writing to fds[1]
[17:56] jodh: so I think this thread only works by accident when the kernel *happens* to have flushed the log after the write of the initial data
[17:56] s/thread/test/
[19:09] slangasek: the parent does write to fds[1]. I'll debug Monday now...
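The fds[0]/fds[1] confusion above is about a pipe-based sync point: the child blocks reading one byte until the parent writes it, so the parent controls exactly when the child proceeds. A minimal sketch of that pattern (the `fds` name is from the discussion; the real upstart test forks a child in C, so a thread stands in for it here):

```python
import os
import threading

# The upstart test under discussion has a child block on read(fds[0])
# until the parent writes a sync byte to fds[1]. This sketch uses a
# thread in place of a forked child; the pipe semantics are the same.

def child(read_fd, results):
    # Blocks here until the parent writes its sync byte.
    results.append(os.read(read_fd, 1))

fds = os.pipe()  # fds[0] = read end, fds[1] = write end
results = []
t = threading.Thread(target=child, args=(fds[0], results))
t.start()

# Parent side: do whatever setup must happen before the child runs,
# then release the child by writing the sync byte.
os.write(fds[1], b"S")
t.join()
os.close(fds[0])
os.close(fds[1])
assert results == [b"S"]
```

If the parent never writes (the situation slangasek first thought he was seeing), the child blocks forever; a test built this way that "usually passes" is typically racing on exactly such a missing or misplaced sync point.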
[19:09] no really, it doesn't ;)
=== Pici is now known as Guest25996
=== Guest25996 is now known as Pici
[22:01] slangasek: i guess i'm on the hook, for accepting that fix. thus clearly i don't understand the fds in that test either.
[22:01] well, I was wrong when I said it doesn't write to fds[1]
[22:01] I just didn't read down far enough
[22:02] (it might help if there were comments to make it clear where the expected sync points are for child and parent...)
[22:03] so parent asserts at 2153, did not stat filename.
[22:04] so the failure seems to be to do with TEST_FORCE_WATCH_UPDATE() not successfully triggering the write to the logfile. But that's a complex macro that's only used in the test suite, so we basically don't know if it's correct
[22:04] and child asserts at 2122, failed to read.
[22:05] right; the child assert is strictly secondary
[22:07] so imho when the write to pty_slave happened, that didn't hit the disk (?!)
[22:07] correct
[22:07] but we don't know why, because the route to disk goes into upstart's logging subsystem and back out again :)
[22:07] i did ask / comment (maybe it was private irc) to have "sync()" after write.
[22:07] and of course, that's not what this test is /about/
[22:07] that wouldn't work
[22:08] the pty doesn't point at the log file, it points at upstart's internal log handler, which buffers and then writes to disk
[22:08] right, so the test, if the file was not statted, should have done in tap "# skip initial data didn't write"
[22:08] why 'skip'?
[22:09] slangasek: well, i guess it should continue, and everything should be still unflushed.
[22:09] slangasek: why do we do initial file checks?
[22:09] I don't know
[22:10] and does the upstart's internal log handler have something like nih_flush_sync_now()?
=)
[22:10] I would /guess/ that it's to ensure that a race doesn't cause the log file to be created and flushed when we're not looking, before the re-exec happens
[22:10] but given that we're not in an event loop at this point, I don't know why we're worried about that happening
[22:11] hmm, but we do have to trigger the read from the pty to have any of the data in our internal buffer
[22:11] which might silently trigger a flush to disk
[22:12] so... we *could* pre-populate the file in the parent with EACCES perms, then all of the child's output would be queued
[22:12] I think?
[22:13] yeah.
[22:14] it's just it wouldn't tell us if it kept the original output, or overwrote it.
[22:14] how do you mean?
[22:14] start with a log file that has "hello world\n" in it.
[22:15] why wouldn't we start with an empty log file?
[22:15] this test is only supposed to be testing that unflushed log buffers are serialized/deserialized correctly across re-exec
[22:16] ok.
[22:16] and i presume log buffer is tested to not clobber over previous output already.
[22:17] slangasek: today nih test-suite caught non-Posix behaviour of the FreeBSD kernel. =)
[22:17] I don't know if we have a test for not clobbering logs, but that's not this test ;)
[22:17] xnox: heh, nice
[22:17] so i'm happy. (PR filed in freebsd & the devs were grumpy on irc)
[22:18] * slangasek snerks
[22:19] int off-by-a-WEXITEDSTATUS-bitmask error
[23:31] xnox: lp:~vorlon/upstart/flaky-log-serialization-test, maybe
[23:46] slangasek: code looks good. should probably also remove the introduced #define TIMED_BLOCK as well.
[23:47] ah, good catch
[23:47] pushed
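The failure mode debated above, a write that reached a buffering layer but never the file, is easy to demonstrate with ordinary userspace buffering. This is only an analogy for upstart's internal log handler, not its actual code: Python's default file buffering stands in for the handler's buffer, and `flush()` stands in for whatever `TEST_FORCE_WATCH_UPDATE()` was supposed to trigger.

```python
import os
import tempfile

# Demonstrates the "we wrote it, but stat shows nothing" situation from
# the discussion: data sits in a userspace buffer until an explicit
# flush pushes it out to the file, so a stat between the write and the
# flush sees an empty file.

path = os.path.join(tempfile.mkdtemp(), "job.log")
f = open(path, "w")                  # buffered by default
f.write("hello world\n")             # lands in the buffer only
size_before = os.stat(path).st_size  # 0: nothing has reached the file
f.flush()                            # the step the flaky test depends on
size_after = os.stat(path).st_size   # now the data is visible
f.close()
print(size_before, size_after)
```

This relies on the write (12 bytes) being smaller than the default buffer, so nothing is flushed implicitly; a test that stats the file without forcing a flush is at the mercy of when the buffering layer happens to drain, which matches the intermittent failure seen in the log.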