[07:13] <abeato> trainguards, is it possible to land https://requests.ci-train.ubuntu.com/#/ticket/1109 ? it is needed for turbo
[07:15] <robru> abeato: looks like sil tried to publish but then didn't due to packaging changes. better ask him why he didn't, eg maybe there's a problem with the packaging
[07:16] <abeato> robru, but it passed automated signoff
[07:17] <robru> abeato: all i know is that sil hit publish and then didn't follow it up when it prompted him to ack the packaging changes. either he forgot, or there's something wrong
[07:17] <abeato> robru, got it, thanks
[07:18] <robru> abeato: you're welcome! I'd offer to take a look but I don't have permission to publish anyway. best to follow up with sil, or find some other core dev if you're really in a hurry
[07:18] <abeato> robru, can wait for sil :)
[07:40] <Mirv> abeato: the packaging changes give no reason for doubt, so I assume sil2100 just forgot to check the ack_packaging
[07:40] <Mirv> abeato: publishing
[07:41] <abeato> Mirv, actually I remember he asked about the new library dependency, and I told him why it was needed. So yes, he must have forgotten to push the button
[07:41] <abeato> Mirv, thanks
[12:32] <mterry> robru: I notice silo 41 failed its "automated signoff" -- can you tell me what that's about?
[12:32] <mterry> oh you're probably not up now
[12:32] <mterry> trainguards ^
[12:34] <jibel> mterry, there is a lot of red on https://requests.ci-train.ubuntu.com/static/britney/xenial/landing-041/excuses.html
[12:35] <jibel> mterry, unsatisfiable deps
[12:35] <jibel> on xenial
[12:35] <mterry> jibel: ah thank you.  I see the links to the excuses now in the train -- I didn't notice that
[12:39] <mterry> jibel: it's because a binary is left in the ppa? A binary that was temporarily built by unity8 in the ppa, but is no longer built. But somehow it's still hanging around
[12:40] <mterry> jibel: do you have clues on how to clean that out?
[12:40] <mterry> The remove-package archive-tools script only handles sources, and it doesn't see that version of the source in the ppa anymore
[12:41] <mterry> jibel: ah, I found the -b argument to work on binaries
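That binary removal can be sketched as a dry run (the flags mirror the invocation mterry pastes later in the channel; the script comes from the archive-tools checkout and needs Launchpad credentials to actually run, so this only prints the command it would execute):

```shell
# Build the remove-package invocation for a leftover binary (-b) at a
# pinned version (-e). Printed rather than executed: a sketch only.
PPA="ppa:ci-train-ppa-service/ubuntu/landing-041"
SERIES="xenial"
VERSION="8.12+16.04.20160315.2-0ubuntu1"
build_removal_cmd() {
    echo "./remove-package -A $PPA -s $SERIES -m 'obsolete' -b -e $VERSION unity8-schemas"
}
build_removal_cmd
```

Without `-b` the script only considers source publications, which is why it could not see the stale unity8-schemas binaries at first.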
[12:43] <jibel> mterry, sorry, I've no idea how to clean that out
[12:43] <mterry> jibel: I just cleaned.  How can I retest vivid auto signoff tests?
[12:44] <mterry> jibel: clicking "regression" isn't enough in this case because u8 doesn't offer a regression button; it just says there are old binaries
[12:45] <mterry> mzanetti: which toggle started the auto signoff?  I guess we can reset lander and qa signoff fields and then move them back to restart auto signoff?
[12:46] <mterry> That would start xenial and vivid.  When we only need vivid
[12:46] <mterry> But it would work
[12:46] <mzanetti> mterry, I think it was the Lander signoff one
[12:47] <mzanetti> robru, can you give us some hints here? ^
[12:47] <mterry> mzanetti: I just reset it
[12:47] <mzanetti> ah ok
[12:47] <mterry> mzanetti: would take more time to figure it out than just do it  :)
[12:58] <Mirv> mterry: there is a superseded unity8 in the PPA that can be deleted that will probably help
[12:58] <Mirv> mterry: I now deleted it
[12:58] <Mirv> mterry: mzanetti: in general it's far better to retry the failed tests instead of restarting the whole process, but I see you do have lots and lots of red
[13:01] <mterry> Mirv: I would like to restart the failed tests, but there was no button for this failure (it wasn't a dep8 regression).  Is there another way to restart?
[13:02] <mzanetti> mterry, just skimming over some logs, it looks like some other things are weird in the system, right? not really like a real test failure
[13:03] <mterry> mzanetti: no I think it's just due to stale binaries in the ppa
[13:03] <mzanetti> mterry, can you keep retrying? I need to eat a bite... starving
[13:03] <mterry> mzanetti: which I'm surprised weren't automatically cleaned (from when we deleted unity8-schemas -- sorry that it ended up causing trouble :-/)
[13:03] <mterry> mzanetti: I did
[13:03] <Mirv> mterry: if it's a claim about missing packages instead of red Regression with logs, it'd be autodetected when it's fixed in the next britney run.
[13:04] <Mirv> mterry: mzanetti: stale binaries like this always stay around in PPAs and need to be manually deleted
[13:04] <mterry> Mirv: it was a claim about leftover binaries, but sure same diff.  How frequent are the britney runs for landing silos?
[13:04] <mterry> Mirv: that's a pain :)
[13:05] <Mirv> mterry: too rarely IMHO. it seems to fluctuate from 30mins to 90mins, at least when it comes to updating the web pages with results (time stamp at the top)
[13:10] <mterry> Mirv: hrm.  I'm removing the packages from the ppa with the following command:
[13:10] <mterry> ./remove-package -A ppa:ci-train-ppa-service/ubuntu/landing-041 -s xenial -m 'obsolete' -b -e  8.12+16.04.20160315.2-0ubuntu1 unity8-schemas
[13:10] <mterry> Mirv: and it claims to remove them.  But if I run it again, it says they are still there and offers to remove them again
[13:12] <Mirv> mterry: I delete with the webui and then it changes from Superseded to Deleted and it has helped in identical situations in the past with britney
[13:13] <mterry> Mirv: where do you see it in the webui?  I couldn't find the old package
[13:14] <Mirv> mterry: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/landing-041/+delete-packages
[13:14] <mterry> Mirv: ah thanks, didn't see it on just +packages
[13:15] <mterry> Mirv: so it says deleted, but the britney results are the same; presumably I have to wait for another britney run and can't speed that up?
[13:15] <Mirv> mterry: yes it needs some time
[13:21] <mterry> mzanetti: so I guess we're waiting on britney for another pass at the xenial bits
[13:21] <mterry> mzanetti: for vivid, I'm seeing a qmluitest failure?  But I can't find which test is failing in the logs
[13:21] <mzanetti> mterry, link
[13:22] <mterry> mzanetti: https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-vivid-ci-train-ppa-service-landing-041/vivid/amd64/u/unity8/20160317_124841@/artifacts.tar.gz
[13:22] <mterry> mzanetti: same test passed on i386, but failed on amd64.  Might just be a fluke
[13:23] <mzanetti> mterry, the log doesn't contain a "FAIL!" string, which is what qmltestrunner would print
[13:23] <mterry> mzanetti: yeah exactly  :-/
[13:23] <mterry> mzanetti: the stderr output shows a build warning, but I didn't get the sense that we failed the test because stderr was non-empty
[13:29] <michi> trainguards: I have a question about one of my silo builds
[13:29] <michi> Anyone around?
[13:30] <sil2100> michi: what's up?
[13:30] <michi> Hi
[13:30] <michi> Latest revision here is 144: https://code.launchpad.net/~michihenning/thumbnailer/fix-1556835/+merge/289321
[13:31] <michi> Currently building in silo 3. But I’ve tried twice now, and the train only picks up revision 143: https://ci-train.ubuntu.com/job/ubuntu-landing-003-1-build/36/console
[13:31] <michi> I don’t understand why.
[13:32]  * sil2100 takes a look
[13:34] <sil2100> 2016-03-17 13:27:40,810 INFO Merging https://code.launchpad.net/~michihenning/thumbnailer/fix-1556835 at r144. <- this line says the train uses rev 144
[13:34] <sil2100> Is that not true?
[13:34] <michi> Let me look...
[13:35] <michi> My apologies.
[13:35] <michi> False alarm.
[13:35] <sil2100> No worries :)
[13:35] <michi> I was looking a few lines further down, where it mentions r143
[13:35] <michi> All is well then, sorry to bother you!
[13:47] <dobey> rvr: hi, can you recreate bug #1537105 using a fake card on staging with "3D" as the cardholder name? you might need silo 64 to be able to use the add card page. i'm having trouble recreating the issue, as i'm always getting a failure dialog
[13:48] <rvr> dobey: Hi. I read your comment... right now I'm busy with silo 41 verification; will try to check that ASAP
[13:50] <sil2100> dobey: hey! Just confirming: removing payui click from rc-proposed is fine, right?
[13:51] <sil2100> dobey: is the pay-ui deb installed on images already?
[13:51] <sil2100> Or do we need a seed change?
[13:51] <dobey> sil2100: it should be installed as pay-service depends on it
[13:52] <sil2100> I'll double-check and proceed, ok
[13:53] <sil2100> dobey: ok, I see it as installed, removing the click in that case - thanks!
[13:53] <dobey> great
[14:36] <mterry> jibel: if a silo is in "qa-signoff: ready", does the qa happen even if it's failing its automatic signoff bits?
[14:36] <sil2100> mterry: no, not in most cases
[14:36] <mterry> bummer
[14:37] <sil2100> mterry: the scripts only consider a silo good once both are approved
[14:37] <sil2100> mterry: what's up with the autopkgtests?
[14:38] <jibel> mterry, it depends
[14:38] <mterry> sil2100: I'm trying to figure it out, but it seems like stale package nonsense or flaky tests.  Not something I would think would affect manual testing.  But I'm working on it
[14:38] <jibel> mterry, it depends on the nature of the failure
[14:38] <sil2100> QA can consider manually adding it to the queue
[14:39] <mterry> jibel, sil2100: OK, if I have more information I may ping.  Thanks
[14:40] <jibel> mterry, if the failure is well understood and has nothing to do with the change then it is possible to force it
[15:11] <robru> mterry: mzanetti: any non-autopkgtest failure is going to be re-run by britney in every run. There's no need to "clear the result" and "re-trigger"; it is just always triggered every run. Unfortunately each run takes an hour or so due to high load
[15:11] <mterry> robru: makes sense OK
[15:33] <mterry> jibel: http://autopkgtest.ubuntu.com/running.shtml#pkg-qtcreator-plugin-ubuntu shows britney blocked on creating nova instances for like 18 minutes.  Is that common?
[15:38] <awe> trainguards, can someone confirm that the following silo needs core-dev acking due to packaging changes?
[15:38] <awe> https://requests.ci-train.ubuntu.com/#/ticket/1093
[15:39] <robru> awe: yes it shows packaging changes in the artifacts field
[15:42] <mterry> robru: do you know much about britney?  It seems to be getting stuck on amd64 a lot, creating nova instances
[15:44] <robru> mterry: autopkgtests you mean? (britney just triggers them, it doesn't *run* them).
[15:45] <mterry> robru: yeah autopktests triggered by britney
[15:45] <robru> mterry: no i dunno anything about it, best to ask pitti
[15:45] <mterry> robru: ok thanks
[15:46] <robru> mterry: yw
[15:53] <awe> cyphermox, can you ack the packaging changes in the NM silo?
[15:53] <awe> https://requests.ci-train.ubuntu.com/#/ticket/1093
[15:54] <awe> robru, thanks; now I see. Might this be a discrete state (i.e. "CoreDev review needed")?
[15:55] <robru> awe: well, strictly speaking, not necessarily core dev, just anybody with upload rights (so eg if it's a universe package, you can get MOTU instead of core dev)
[15:56] <robru> awe: in this case it is a main package so it would be core dev, or somebody with per-package rights on that package specifically
[15:57] <awe> k
[15:59] <cyphermox> awe: sorry, I can't right now
[16:03] <awe> morphis, can you try to get someone to ack the silo packaging changes?  If not, I'll deal with it when I return
[16:04] <morphis> awe: on it!
[16:04] <morphis> kenvandine: ping
[16:06] <awe> morphis, thanks!!!
[17:18] <rvr> sil2100: Do you know why Kaleo can't set this silo as "ready for testing"? https://requests.ci-train.ubuntu.com/#/ticket/1096
[17:21] <sil2100> rvr: looking
[17:21] <sil2100> rvr: not sure, I just switched it to ready
[17:22] <Kaleo> thx!
[17:23] <robru> morphis: just curious why the version of n-m here: https://requests.ci-train.ubuntu.com/#/ticket/1093 is lower than the one in vivid. why not just copy the vivid version?
[17:24] <morphis> robru: I think that is simply because we didn't sync back yet with what has landed in vivid since we branched off
[17:25] <morphis> robru: but we can't just copy, as there are touch-specific changes in the package
[17:25] <morphis> so for now we have to live with that
[17:26] <robru> morphis: ok, yeah it's a weird situation. it breaks britney because it refuses to test a lower-versioned package. it won't break the phone though, because the phone uses ppa pinning to ensure the overlay version is chosen over the distro version. but eg anybody trying to use the overlay ppa on vivid desktop would not get this package, though I guess that doesn't matter anyway since vivid is eol
[17:26] <morphis> robru: right
[17:27] <morphis> robru: thing is, this package has to land by tomorrow
[17:27] <morphis> so syncing back etc. isn't an option
[17:27] <morphis> and awe has done it like this for some time
[17:28] <robru> morphis: ok, well it's in qa queue, maybe ask qa people to bump up priority. I just happened to notice that britney said "N/A" which is very strange, it should really be "failed" from britney but I guess there's a bug in my parsing of britney output.
[17:28] <kenvandine> morphis, pong
[17:28] <morphis> sil2100, robru, kenvandine: can one of you review the package https://requests.ci-train.ubuntu.com/#/ticket/1093 so that nothing except QA is in the way of landing this?
[17:28] <morphis> robru: could be ;-)
[17:29] <kenvandine> sure
[17:29] <morphis> kenvandine: thanks!
[17:29] <morphis> kenvandine, robru, sil2100: redirect any further comments to awe, dropping off now
[17:30] <kenvandine> morphis, looks fine
[17:30] <kenvandine> awe, ^^
[17:30] <robru> morphis: thanks
[17:46]  * dobey hopes 41 gets approved soon
[17:47]  * kenvandine too
[17:53] <awe> robru, so... we pinned the version of NM in the PPA as we don't want to pull in a change that had landed in distro that could've potentially de-stabilized touch
[17:53] <awe> that said, not sure if anyone looked over .2 and .3 to see if there were any bits applicable to the phone
[17:53] <awe> if CVE related, I would've expected someone to ping me about it
[17:53] <awe> but maybe that's a hole in our process
[17:54] <awe> needless to say, I'm working on NM 1.2 for post OTA10
[17:54] <awe> which hopefully will line us back up with the distro again
[17:56] <robru> awe: glad to hear it, thanks
[17:56] <awe> np
[17:56] <awe> I'll also chase down what landed in .2 and .3, just to make sure nothing urgent was missed
[18:22] <charles> meh
[18:43] <dobey> oh
[18:43] <dobey> charles: ^^ that's your problem then
[19:07] <charles> robru, could you take a look at this build failure?
[19:08] <charles> robru, it's failing to find the upstream tarball and I'm not sure why
[19:31] <robru> charles: your branch has removed .bzr-builddeb dir which the train cannot build without
[19:32] <robru> (although I'm working on removing the need for that; it's still necessary today)
[19:33] <charles> robru, you're right. ok. So I'll copy .bzr-builddeb from trunk to the branch where it's missing, and try again
[19:34] <dobey> oh why did i not catch that
[19:34] <charles> robru, thanks
[19:34] <charles> and dobey, thanks to you for looking as well
[19:36] <dobey> sure
[19:36] <dobey> now i have my own problems; flipping nonsense conflicts
[19:37] <charles> :)
[19:50] <robru> charles: yw