[04:21] <plars> Laney: I talked to thomi about the USS tests, he said they are planning a new AP release and we should have a fix for it soon: https://bugs.launchpad.net/autopilot/+bug/1278272
[04:21]  * plars -> zzz
[04:32] <fginther> veebers, are you still there?
[04:32] <veebers> fginther: Hi, yep still here
[04:32] <fginther> veebers, what's up with autopilot/cupstream2distro-config
[04:32] <fginther> ?
[04:33] <veebers> fginther: I'm not too sure. I noticed earlier today this failure here: https://jenkins.qa.ubuntu.com/job/autopilot-1.4-trusty-amd64-ci/13/console
[04:33] <veebers> and see that it's trying to use lp:autopilot/1.4 instead of lp:autopilot
[04:34] <fginther> veebers, ok, autopilot/1.4 was removed I see
[04:34] <thomi> sorry about that fginther.
[04:37] <fginther> I'm confused as to why the autopilot-1.4 job was used, there's still a set of jobs dedicated to lp:autopilot
[04:37] <fginther> brb
[04:43] <veebers> fginther: just heading back home, will be online again shortly (or send me an email :-))
[04:43] <fginther> veebers, ack
[04:43] <fginther> veebers, I think I have it figured out, will send a message
[04:45] <veebers> fginther: awesome thanks
[09:00] <seb128> s!
[09:01] <seb128> (oops, xchat changing channels on start while typing)
[09:31] <sil2100> Mirv: meeting!
[09:34] <Mirv> meeting
[09:42] <psivaa> sil2100: https://jenkins.qa.ubuntu.com/job/trusty-touch-mako-smoke-daily/28/artifact/clientlogs/ubuntu_system_settings/_usr_sbin_system-image-dbus.32011.crash/*view*/
[09:42] <psivaa> probably for Laney to look at?
[09:43] <psivaa> maybe not..
[09:48] <ogra_> PermissionError: [Errno 13] Permission denied: '/var/log/system-image/client.log'
[09:48] <vila> sil2100: also, this morning, a kernel crash on an otto node you may want to track: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-intel-4000/1419/console
[09:49] <popey> sil2100: so was there a discussion last week while I was away about who should dogfood?
[09:49] <popey> I am happy not to do any more dogfooding if someone from QA is taking it on
[09:49] <ogra_> Mirv, have you seen my mail about rootstock-ng ? you should be able to procude testable images for Qt 5.2 now
[09:50] <popey> (I have other things I can of course be doing) but if it's still needed, happy to do it.
[09:50] <ogra_> *produce
[09:53] <Laney> psivaa: not me, system-image is barry
[09:53] <Laney> I actually got this a lot on my desktop already and filed https://bugs.launchpad.net/ubuntu/+source/system-image/+bug/1260237
[09:53] <psivaa> Laney: ack, got that late, my bad
[09:54] <Laney> and there's https://bugs.launchpad.net/ubuntu-system-image/+bug/1222984 which is Won't Fix...
[09:54] <ogra_> well
[09:55] <ogra_> i would think that system-image-dbus talks to a root owned process on the other side
[09:55] <ogra_> weird if it doesnt
[09:55] <ogra_> (and that process should own the logging preferably)
[09:57] <psivaa> popey: i brought it up in the meeting since i saw someone from the QA team started doing it last week. I may have assumed wrongly about it.
[09:57] <Laney> I haven't looked into the details, but crashing is certainly wrong
[09:57] <Mirv> ogra_: yes, very interesting! the QA team should be able to use that too for fully flashed testing.
[09:57] <Laney> a) it should print an error
[09:57] <Laney> b) why is it a d-bus activated service if only root can make use of it?
[09:58] <Laney> anyway, I'm not really here :-)
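Laney's point (a) amounts to degrading gracefully when the system-wide log isn't writable instead of crashing with Errno 13. A minimal, purely illustrative sketch; the path comes from the traceback above, everything else (function name, fallback choice) is an assumption, not system-image's actual code:

```python
# Hypothetical sketch: if the root-owned log file can't be opened
# (PermissionError / Errno 13 as a non-root user), report the error
# and fall back to stderr rather than crashing.
import logging
import sys

def open_log_handler(path="/var/log/system-image/client.log"):
    try:
        return logging.FileHandler(path)
    except OSError as exc:  # PermissionError is a subclass of OSError
        print(f"cannot open {path}: {exc}; logging to stderr instead",
              file=sys.stderr)
        return logging.StreamHandler(sys.stderr)
```

The same approach would let a D-Bus-activated service started by an unprivileged user keep running, which is the behaviour Laney argues for in (b).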
[10:04] <thostr_> sil2100: can you publish silo 1 and 3. also reconfigure silo 5?
[10:07] <sil2100> popey: so, I know there were discussions, I know Didier and Jason were chatting with Julien a lot, but nothing definite got set
[10:08] <sil2100> popey: at least I don't know about it
[10:08] <sil2100> thostr_: hello! I'll reconfigure silo 5, with publishing we try to wait a little bit until we get one failure tracked down
[10:09] <sil2100> thostr_: but I promise to publish them before evening today
[10:12] <popey> psivaa: I asked omer to test while I was away at the sprint.
[10:12] <psivaa> popey: ohh, then my assumption was wrong. sorry :)
[10:13] <popey> np
[10:20] <thostr_> sil2100: reconfigured?
[10:22] <sil2100> thostr_: ...done!
[10:23] <thostr_> sil2100: thanks
[10:41] <ogra_> Mirv, right ...
[11:09] <sil2100> psivaa: any luck with downgrading telepathy?
[11:09] <sil2100> I mean, telephony
[11:10] <psivaa> sil2100: the dialer app test failure is not as deterministic as we thought before.
[11:10] <psivaa> the tests sometimes pass
[11:10] <psivaa> sil2100: so i am having to run the test a few times before confirming anything
[11:11] <sil2100> psivaa: oh, right, the smoketesting gave the impression it's reproducible all the time
[11:11] <sil2100> psivaa: if you manage to find something out, just ping me here please
[11:11] <psivaa> sil2100: right it failed on the first attempt, but a couple of reruns passed, then it failed again, so there is flakiness.
[11:12] <psivaa> sil2100: reverted the package and running now to see if the flakiness goes away, and i'll update you
[11:13] <sil2100> Thanks :)
[12:12] <rsalveti> morning
[12:15] <sil2100> Morning
[12:17] <psivaa> sil2100: ogra_: still no luck on the dialer-app hangup test failure.
[12:17] <psivaa> one thing i see is that in all the attempts where the test fails, i see http://pastebin.ubuntu.com/6908750/
[12:18] <psivaa> sil2100: ogra_ but i couldn't see which package could cause it,
[12:18] <psivaa> i tried reverting the dbus related packages in http://people.canonical.com/~ogra/touch-image-stats/20140206.2.changes too but to no avail
[12:20] <ogra_> psivaa, did you try telephony-service too ?
[12:20] <ogra_> (http://people.canonical.com/~ogra/touch-image-stats/20140207.changes)
[12:20] <psivaa> ogra_: that was the first one and it didn't make any difference
[12:20] <sil2100> hmmm
[12:27] <psivaa> sil2100: ogra_ https://bugs.launchpad.net/ubuntu/+source/telepathy-ofono/+bug/1226298 appears to have some similarities. but this was fixed
[12:28] <ogra_> and quite a while ago it seems
[12:29] <psivaa> yes, the kernel log is kind of similar though
[12:29] <ogra_> it actually points to pulse/alsa though
[12:30] <psivaa> ogra_: are there any recent uploads on these?
[12:30] <ogra_> http://people.canonical.com/~ogra/touch-image-stats/20140207.1.changes
[12:30] <ogra_> libasound2
[12:30] <ogra_> (and *-data)
[12:31] <psivaa> ok, let me try them next
[12:45] <psivaa> reverting that doesn't help
[12:45] <sil2100> ;/
[13:10] <xnox> cihelp: josepht: autopilot project configuration was broken, so the jenkins bot ended up disapproving https://code.launchpad.net/~xnox/autopilot/use-fb-sizes/+merge/199295
[13:11] <xnox> cihelp: josepht: can that be re-triggered please?
[13:11] <josepht> xnox: looking
[13:12] <xnox> josepht: also I need to land https://code.launchpad.net/~xnox/autopilot/use-fb-sizes/+merge/199295 into the archive ASAP. Can we trigger the CI-train upload for that branch?
[13:14] <tvoss> sil2100, ping
[13:29] <rsalveti> sil2100: who will review unity-mir's landing now that Saviq is in vacation?
[13:29] <rsalveti> kgunn: ^
[13:30] <rsalveti> that's a blocker for latest mir landing request
[13:39] <josepht> xnox: I've kicked off the autopilot-ci job; once that's finished successfully I'll kick off the autolanding job.
[13:40] <xnox> josepht: not sure about the autolanding job, since it's under ci-train management actually.
[13:40] <xnox> josepht: and i'm not the person who requested this.
[13:40] <asac> josepht: err... please dont do autolandings for stuff in CI train.
[13:41] <asac> josepht: these things are staged in a CI train silo, most likely waiting for veebers and thomi to get up and finish their testing
[13:41] <josepht> xnox, asac: ack no autolanding
[13:41] <asac> thx
[14:01] <Mirv> updated libunity in archive now
[14:02] <kgunn> rsalveti: thanks....we can get gerry to do it (he's wanting this mir anyway)
[14:03] <rsalveti> kgunn: thanks, otherwise this will block us for days
[14:03] <rsalveti> and we don't have days
[14:03] <greyback> rsalveti: what do I need to do?
[14:03] <rsalveti> greyback: https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0AuDk72Lpx8U5dFlCc1VzeVZzWmdBZS11WERjdVc3dmc&usp=drive_web#gid=22
[14:04] <rsalveti> greyback: there was a landing request for unity-mir that Saviq created a few days ago
[14:04] <rsalveti> still needs testing, so it can be approved and landed
[14:04] <kgunn> rsalveti: oh...you were talking about Saviq's
[14:04] <kgunn> rsalveti: he might have left that with mzanetti let me check
[14:05] <kgunn> greyback: i thought rsalveti meant a review of this one... https://code.launchpad.net/~kgunn72/unity-mir/um-mir0.1.5-bump
[14:05] <greyback> kgunn: ah ok
[14:05] <rsalveti> I want the big landing first :-)
[14:06]  * kgunn loves that rsalveti doesn't consider mir a big landing anymore :)
[14:09] <kgunn> rsalveti: ok, tsdgeos is looking at the 004 silo build
[14:09] <tsdgeos> well i'm not
[14:09] <kgunn> tsdgeos: huh ?
[14:09] <tsdgeos> *yet*
[14:09] <tsdgeos> i'm getting cimi and/or mzanetti to give me access to their maguros
[14:09] <kgunn> ah
[14:09] <tsdgeos> and then i'll get it fixed
[14:09] <psivaa> sil2100: no luck at all with the dialer app failure.. there is a lxc-android-config change in http://people.canonical.com/~ogra/touch-image-stats/20140206.2.changes
[14:09] <kgunn> ...oh yeah...how could i forget
[14:10] <tsdgeos> hopefully
[14:10] <tsdgeos> not so easy to get a thing fixed over the interwebs :D
[14:10] <tsdgeos> but we'll see
[14:10] <kgunn> tsdgeos: hey...wait, i thought that ended up being due to AP update ?
[14:10] <rsalveti> maybe davmor2 can give you a hand as well?
[14:10] <kgunn> and davmor2 and elopio were looking into that?
[14:10] <kgunn> ...but maybe i'm missing some new info
[14:10] <kgunn> ?
[14:11] <ogra_> psivaa,
[14:11] <ogra_> lxc-android-config (0.136) trusty; urgency=medium
[14:11] <ogra_>   * 30-no-surface-flinger: adding logic to start/stop surface flinger via
[14:11] <ogra_>     properties (needed by the SDK team to compare performance against MIR)
[14:11] <ogra_> unrelated
[14:11] <tsdgeos> kgunn: i don't know, i told elopio to send me an email on friday night after his shift ended, but i never got anything
[14:11] <tsdgeos> kgunn: it may be
[14:12] <psivaa> ogra_: ack, it was a last ditch attempt
[14:15] <rsalveti> tsdgeos: what do you need to do with maguro?
[14:15] <rsalveti> tsdgeos: I can open a port for you
[14:15] <tsdgeos> rsalveti: cimi just gave me one
[14:15] <rsalveti> tsdgeos: awesome then
[14:15] <tsdgeos> rsalveti: well, figure out those autopilot tests that fail and are blocking us
[14:16] <tsdgeos> which honestly if we don't support it
[14:16] <tsdgeos> makes no sense to me
[14:16] <tsdgeos> but i'm not the one that decides if that makes sense or not, so i'll just fix them
[14:16] <rsalveti> haha, alright
[14:18] <kgunn> tsdgeos: i feel you brother...
[14:18] <kgunn> rsalveti: so when do we officially move to 4.4 ? :)
[14:18] <elopio> good morning.
[14:18] <rsalveti> kgunn: well, I'm also waiting on you for that :-)
[14:18] <kgunn> elopio: hey there!
[14:18] <elopio> I'm just waking up, so trying to parse what you are saying.
[14:18] <kgunn> rsalveti: oh no...chicken egg?
[14:18] <kgunn> elopio: no worries
[14:18] <rsalveti> kgunn: first I need mir 1.5, then the backend packaging split that alf_ is working on
[14:19] <elopio> tsdgeos: davmor2 leaves pretty much at the same time you leave, so we couldn't collect more maguro information.
[14:19] <kgunn> yep....chicken egg
[14:19] <rsalveti> kgunn: yeah
[14:19] <tsdgeos> elopio: ok
[14:19] <elopio> tsdgeos: the other bug you found is already reviewed and approved by mterry.
[14:19] <tsdgeos> i saw :)
[14:19] <tsdgeos> elopio: so basically we still don't know why and how to fix the other 2 AP fails, right?
[14:20] <elopio> tsdgeos: right. But I understand from your recent messages that you now have access to a maguro, right?
[14:20] <tsdgeos> kind of yes
[14:20] <tsdgeos> i'll take care of it
[14:20] <tsdgeos> will ping you if need help
[14:20] <tsdgeos> elopio: or maybe, let's both try to get something out of it if you can get davmor2 or rsalveti to give you another maguro
[14:21] <davmor2> tsdgeos, kgunn, elopio: what's needed on what now? save me digging into a lot of backscroll
[14:21] <tsdgeos> so if one fails the other gets it to work
[14:21] <tsdgeos> we need to be able to land
[14:21] <kgunn> davmor2: sorry...
[14:21] <tsdgeos> davmor2: is there any chance you can give elopio ssh access to a maguro so he can debug the AP failures we're having in unity8?
[14:21] <kgunn> davmor2: we're trying to determine what remains an unknown on the AP failures
[14:21] <xnox> josepht: thanks for that. There are a few other autopilot jobs that need a re-kick now that it's correctly configured:
[14:22] <xnox> https://code.launchpad.net/~thomir/autopilot/trunk-add-output_stream_tests/+merge/204793
[14:22] <xnox> https://code.launchpad.net/~veebers/autopilot/fix_deprecate_pick_app_launcher/+merge/202784
[14:22] <xnox> https://code.launchpad.net/~thomir/autopilot/trunk-fix-functional-tests/+merge/205512
[14:22] <elopio> I didn't know rsalveti could give access to maguros. rsalveti, how does that work? I could use one.
[14:22] <xnox> https://code.launchpad.net/~veebers/autopilot/fix_1275913_launch/+merge/204815
[14:22] <xnox> https://code.launchpad.net/~thomir/autopilot/trunk-add-unit-test-coverage/+merge/205517
[14:22] <rsalveti> elopio: sure, with system-image as rw I'd guess?
[14:22] <kgunn> davmor2: basically those 2 AP failures are the current landing process log jam....unity8, and then mir is also held up due to that...etc
[14:22] <rsalveti> elopio: which image as well?
[14:23] <davmor2> kgunn, tsdgeos, elopio: ah right is this what we started digging into on Friday night?
[14:23] <elopio> davmor2: yes, before you left on friday, I thought you could reproduce it. What I would love is to see a video from it failing.
[14:23] <josepht> xnox: sure, is just the autopilot-ci job enough or do you need autopilot-1.4-ci as well?  the second seems to fail a lot.
[14:23] <elopio> tsdgeos: are we seeing the error in the latest image too? Or which image should I get?
[14:24] <xnox> josepht: the autopilot-1.4-ci is obsolete, and shouldn't be used any more.
[14:24] <xnox> josepht: just autopilot-ci please.
[14:24] <tsdgeos> elopio: no idea :D
[14:24] <josepht> xnox: okay
[14:24] <tsdgeos> elopio: i'm just trying to get it to fail now
[14:25] <davmor2> elopio: is there a way to run just the one test? Otherwise there is no way I can grab it. elopio I don't think the video capture will work on maguro for the same reason that the screenshot doesn't
[14:25] <elopio> rsalveti: the most recent image please.
[14:25] <rsalveti> alright, flashing
[14:27] <elopio> davmor2: yes, you can run only one with phablet-test-run test unity8.shell.tests.test_emulators.DashEmulatorTestCase.test_open_applications_scope
[14:28] <davmor2> elopio: right give me a couple of minutes I'll see what I can rig up
[14:28] <elopio> davmor2: and if it happens every time, maybe you can record it with a camera? Maybe I'm asking too much :)
[14:28] <davmor2> elopio: no that is what I was going to do
[14:29] <elopio> davmor2: thanks!
[14:29] <davmor2> elopio: hmmm hang on, phone doesn't seem to be accepting charge; give me about 10 minutes to remove the battery and get a small amount of charge in it.
[14:31] <elopio> davmor2: sure. We'll try to gather more information from our side.
[14:32] <psivaa> sil2100: so, i'm going out for lunch... one thing that we could do is to flash image 169 to see if this issue is there. and then upgrade the packages one by one to see if the issue pops up.
[14:32] <psivaa> sil2100: but that will take quite a lot of time..
[14:36] <josepht> xnox: I've re-kicked the autopilot-ci job for all of those MPs
[14:37] <xnox> josepht: thanks!
[14:46] <davmor2> elopio: I'm guessing this isn't the right response http://paste.ubuntu.com/6909483/ :D
[14:47] <elopio> davmor2: that means it didn't find the test.
[14:48] <elopio> davmor2: do you have unity8-autopilot installed on the phone?
[14:48] <elopio> well, that's not a phone.
[14:48] <davmor2> elopio: hmmm I bet I had to reinstall it to test 170. I'll grab the stuff again D'oh
[14:51] <elopio> davmor2: https://bugs.launchpad.net/autopilot/+bug/1278462
[15:00] <sil2100> psivaa: ouch...
[15:01] <sil2100> psivaa: how often can you reproduce the problem locally on the device?
[15:02] <sil2100> psivaa: and, is it also possible to encounter the problem during normal usage?
[15:02] <sil2100> bfiller: hello!
[15:03] <sil2100> bfiller: even though it doesn't seem to be caused by any of the changes you and your team made, we seem to have a problem with one of dialer-app's AP tests since some images
[15:04] <sil2100> bfiller: psivaa tried bisecting which package could have caused this failure to appear, but we seem to not have much luck with that
[15:05] <bfiller> sil2100: crashes still? probably the same mir bug
[15:05] <sil2100> bfiller: could you have someone investigate this failure here? http://ci.ubuntu.com/smokeng/trusty/touch/mako/173:20140210:20140115.1/6527/dialer_app/756161/
[15:05] <bfiller> sil2100: ok
[15:05] <sil2100> bfiller: not the crash thing sadly, something new - seems like a problem with hangup
[15:06] <sil2100> bfiller: from what I know the test failure is reproducible (but not always)
[15:06] <sil2100> psivaa mentioned that he usually required some re-runs of the test to get the failure
[15:13] <sil2100> bfiller: thanks!
[15:30] <davmor2> elopio: okay this is getting frustrating now. I can run phablet-test-run -n -p unity8-autopilot unity8 and that runs fine; if I run phablet-test-run test unity8.shell.tests.test_emulators.DashEmulatorTestCase.test_open_applications_scope I get the 0 tests run
[15:31] <davmor2> elopio: and I have unity8 listed in the /home/phablet/autopilot dir
[15:31] <elopio> davmor2: let me double check the module path.
[15:31] <davmor2> elopio: will do
[15:32] <elopio> unity8.shell.tests.test_emulators.DashEmulatorTestCase.test_open_applications_scope
[15:32] <elopio> seems correct, that's what I've been running.
[15:32] <elopio> davmor2: but, I have run the single test 20 times without failures, on this magic rsalveti's maguro.
[15:32] <elopio> I'm trying to run the whole suite now to see if it's because there's a test war.
[15:33] <elopio> davmor2: oh, you have an extra "test" on your command
[15:34] <elopio> phablet-test-run -n unity8.shell.tests.test_emulators.DashEmulatorTestCase.test_open_applications_scope
[15:35] <davmor2> elopio: thanks
[15:45] <davmor2> elopio: http://paste.ubuntu.com/6909764/ note the last one
[15:46] <davmor2> elopio: not sure why they are so vastly different in times
[15:46] <elopio> davmor2: yes, I'm looking that here too.
[15:47] <elopio> according to tsdgeos, the device can get busy and it takes a lot of time to settle.
[15:47] <davmor2> elopio: yeah only on a full test run there is no settle time
[15:47] <tsdgeos> i may be lying :D
[15:47] <davmor2> tsdgeos: ^
[15:48] <tsdgeos> yes i've seen that too
[15:48] <tsdgeos> honestly it's not what bothers me
[15:48] <tsdgeos> if you add some prints
[15:48] <tsdgeos> you'll see it's at the end
[15:48] <tsdgeos> it's not what's causing this problem
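tsdgeos's "add some prints" suggestion amounts to timing the phases of a run to see where the extra minutes go. A purely illustrative sketch, assuming nothing about the real unity8 test code; the function names here are made up:

```python
# Sketch: a decorator that prints how long each phase of a run takes,
# so you can see whether the slowdown is in setup, the test body,
# or (as tsdgeos observes) at the end.
import functools
import time

def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            print(f"{fn.__name__}: {time.monotonic() - start:.2f}s")
    return wrapper

@timed
def teardown():  # illustrative stand-in for the end-of-test phase
    time.sleep(0.05)

teardown()
```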
[15:58] <asac> sil2100: http://people.canonical.com/~ogra/touch-image-stats/20140208.changes that's the changes that made the dialer_app problem first appear, right?
[15:58] <asac> and we cant reproduce at all?
[15:58] <ogra_> asac, thats not clear
[15:59] <asac> ogra_: i don't see it before in the results
[15:59] <ogra_> asac, psivaa rolled back, piece by piece, all the phone related packages since 20140206
[15:59] <asac> oh wait
[15:59] <asac> http://people.canonical.com/~ogra/touch-image-stats/20140207.1.changes that one seems to have it
[15:59] <ogra_> there were a few images where everything was broken
[15:59] <davmor2> elopio: off hand do you know what the test is looking for to confirm that the screen is now on the applications page?
[16:00]  * ogra_ needs to re-locate for meeting, one sec
[16:01] <asac> ogra_: on 7 (with .1) it was green
[16:01] <asac> http://ci.ubuntu.com/smokeng/trusty/touch/mako/169:20140207:20140115.1/6492/
[16:01] <asac> there were crashes though
[16:02] <asac> psivaa: there?
[16:02] <psivaa> asac: yep, just came back from lunch
[16:03] <asac> kk
[16:03] <psivaa> and reading the backlog
[16:03] <asac> psivaa: the bisecting for dialer-app regression failed or you still have options to try out?
[16:03] <psivaa> asac: not in terms of reverting packages.
[16:03] <asac> psivaa: so you reverted everything and the issue was still there?
[16:03] <ogra_> asac, on 7 ?
[16:04] <psivaa> asac: yea, reverted everything that we thought might cause the issue, but the issue is still there
[16:04] <ogra_> asac,  the last green image was 6.1
[16:04] <asac> ogra_: 6.2 also had a green dialer-app
[16:04] <asac> http://ci.ubuntu.com/smokeng/trusty/touch/mako/168:20140206.2:20140115.1/6487/
[16:04] <asac> with crashes, but still
[16:04] <sil2100> The problem is that even when it was green, the issue could still have been there
[16:05] <psivaa> asac: not really knowing when this was introduced is making decision difficult
[16:05] <sil2100> As the problem is not reproducible in 100%
[16:05] <ogra_> asac, after 6.1 (167) there were several images that were so broken that you can't take any results seriously
[16:05] <asac> so i guess
[16:05] <asac> http://people.canonical.com/~ogra/touch-image-stats/20140207.changes
[16:05] <elopio> davmor2: it waits until the current index is the previous index + 1
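The wait elopio describes (block until the dash's current index becomes the previous index + 1) is the standard Autopilot "Eventually" pattern. A plain-Python sketch of the idea, with illustrative names; the real test uses Autopilot's matcher rather than this hand-rolled loop:

```python
# Sketch: poll a condition until it holds or a timeout expires --
# roughly what Autopilot's Eventually matcher does under the hood.
import time

def wait_for(predicate, timeout=10.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Illustrative stand-ins for properties read from the app under test:
previous_index = 0
current_index = 1
assert wait_for(lambda: current_index == previous_index + 1, timeout=1.0)
```

On a busy maguro the index may simply take longer than the timeout to advance, which would make the test flaky without any code regression.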
[16:05] <asac> ogra_: 168 above has green dialer-app
[16:05] <sil2100> I flashed my phone and will try investigating a bit further
[16:05] <ogra_> asac, ignore 168 and 169 please
[16:05] <asac> ogra_: why? they are green :)
[16:05] <asac> on dialer-app
[16:05] <asac> so 170 is the first to show this problem
[16:06] <ogra_> asac, there were low level issues, i wouldn't take any of the tests seriously on these images
[16:06] <asac> ogra_: well, i think a green can be taken into account
[16:06] <asac> low level issues might invalidate a red
[16:06] <asac> but not a green :)
[16:06] <asac> http://people.canonical.com/~ogra/touch-image-stats/20140207.changes
[16:07] <asac> psivaa: did you work against 170 and back out the ones from the changes in the line above?
[16:08] <ogra_> asac,  we went through the phone related packages, he tested while backing them out one by one
[16:08] <psivaa> asac: i worked with 173 and reverted packages
[16:08] <ogra_> (back to 07)
[16:08] <asac> psivaa: try start from 170 ... and revert the few from above
[16:08] <asac> most likely telephony-service :)
[16:08] <asac> 169 was green on dialer-app
[16:08] <ogra_> tried already
[16:08] <asac> 170 had the same issue we see
[16:09] <sil2100> Looking at the failure right now
[16:09] <ogra_> was my first shot too :)
[16:09] <asac> maybe there was confusion or a mistake
[16:09] <psivaa> asac: ack, will do that. in the meeting and once this is over i'll install 170 and try
[16:10] <davmor2> elopio: so why would the test suddenly take nearly 4 times as long to run  :/
[16:11] <tsdgeos> no didrocks?
[16:11] <asac> tsdgeos: what do you need?
[16:11] <sil2100> tsdgeos: no, he's on holidays
[16:11] <asac> sil is here for you
[16:11] <sil2100> bfiller: any luck regarding the issue?
[16:12] <tsdgeos> sil2100: so about the failing AP tests for the unity8 silo
[16:12] <bfiller> sil2100: boiko is looking at it, no update yet
[16:12] <tsdgeos> sil2100: i can make them fail without any of the MPs from that silo
[16:12] <sil2100> tsdgeos: oh oh! Did you manage to reproduce it?
[16:12] <tsdgeos> sil2100: so they were already broken
[16:12] <sil2100> tsdgeos: yes, we know
[16:12] <tsdgeos> do you?
[16:12] <tsdgeos> ok
[16:12] <sil2100> tsdgeos: that's why we don't want to land anything until it's fixed
[16:13] <tsdgeos> i thought we were blocking because it was a regression
[16:13] <asac> sil2100: when did that regress?
[16:13] <tsdgeos> but it's not a regression
[16:13] <sil2100> tsdgeos: Didier's point is: no new landings of unity8 as long as it's not fixed, as it seems to have regressed in one of the earlier landings
[16:13] <tsdgeos> it's just an unstable test
[16:13] <asac> tsdgeos: it probably regressed last week sometime
[16:13] <tsdgeos> sil2100: that's a bad point
[16:13] <tsdgeos> it's new tests
[16:13] <tsdgeos> nothing regressed
[16:14] <tsdgeos> it's just that CI doesn't run galaxy nexus
[16:14] <tsdgeos> so nothing runs those tests
[16:14] <asac> tsdgeos: maybe you are talking about something different?
[16:14] <asac> sil2100: ?
[16:14] <tsdgeos> except the ultra blocker silo thing
[16:14] <tsdgeos> i'm talking about
[16:14] <tsdgeos> "2 test failures on maguro that we need to understand:
[16:14] <tsdgeos> http://ci.ubuntu.com/smokeng/trusty/touch/maguro/167:20140206.1:20140115.1/6480/unity8-autopilot/741839/
[16:14] <tsdgeos> http://ci.ubuntu.com/smokeng/trusty/touch/maguro/168:20140206.2:20140115.1/6490/unity8-autopilot/744272/"
[16:14] <tsdgeos> that the ci train document mentions
[16:15] <tsdgeos> if you say we can't merge new stuff until those tests don't fail anymore
[16:15] <asac> sil2100: can you check that those never succeeded and are new? if so, it might indeed not be right to block on them
[16:15] <tsdgeos> asac: they do succeed
[16:15] <tsdgeos> eventually
[16:16] <tsdgeos> i mean i run it 10 times in a loop
[16:16] <asac> sure
[16:16] <tsdgeos> it succeeds around 50%
[16:16] <asac> thats not the point :)
[16:16] <tsdgeos> the point is
[16:16] <asac> tsdgeos: when were they introduced?
[16:16] <tsdgeos> that it may have succeeded in an earlier run
[16:16] <tsdgeos> and you'll claim "look they worked"
[16:16] <tsdgeos> and i'll claim "they worked the same they work now :D"
[16:16] <asac> http://ci.ubuntu.com/smokeng/trusty/touch/maguro/169:20140207:20140115.1/6491/unity8-autopilot/
[16:16] <tsdgeos> we were just lucky
[16:16] <asac> tsdgeos: are they in there at all?
[16:17] <tsdgeos> one is old
[16:17] <tsdgeos> so yes it is there
[16:17] <tsdgeos> the other let me check
[16:17] <asac> tsdgeos: which one is old?
[16:17] <tsdgeos> yes it is there too
[16:17] <tsdgeos> the hud_click_one
[16:17] <sil2100> tsdgeos: do you know why this test is so flaky then? You think it can be fixed?
[16:17] <asac> cant find the string :/
[16:18] <asac> cant find "click_one"
[16:18] <tsdgeos> asac: sorry unity8.shell.tests.test_hud.TestHud.test_hide_hud_click
[16:18] <tsdgeos> sil2100: i think it can be fixed, yes
[16:18] <tsdgeos> sil2100: i am not sure it is worth doing the effort of fixing them *now*
[16:18] <asac> tsdgeos: check if you can find this happening regularly in the past https://jenkins.qa.ubuntu.com/job/trusty-touch-maguro-smoke-unity8-autopilot/
[16:19] <asac> tsdgeos: if its flaky it should have been there a few time in that list
[16:19] <sil2100> tsdgeos: I just looked at maguro results from the past weeks and I didn't see these tests failing, so we had to be *really* lucky before ;)
[16:19] <asac> tsdgeos: did you figure why they fail?
[16:19] <ogra_> asac, just an observation, but between the dialer-app working and it failing there was the utah change ...
[16:20] <asac> doanac: what changed in utah?
[16:20] <sil2100> ogra_: but it fails also on local devices
[16:20] <tsdgeos> asac: no, if i had figured why they fail i'd be fixing them and not arguing here :)
[16:20] <tsdgeos> sil2100: true
[16:21] <asac> tsdgeos: so the way of thinking is that on maguro you see nasty stuff that will strike us later. hence it's very valuable to check them out if they are reproducible right now
[16:21] <asac> e.g. if its non-hw related races
[16:21] <asac> you certainly want to get rid of them
[16:21] <asac> ogra_: do you know what changed?
[16:21] <ogra_> asac, note that the test links have different names between 169 and 170
[16:21] <tsdgeos> asac: i can reproduce them now, i also have 13 branches landing that are starting to conflict amongst themselves
[16:21] <tsdgeos> so it'd be helpful if we could land them
[16:21] <sil2100> ogra_: you talking about the dialer-app failure now?
[16:21] <tsdgeos> but if you guys are blocking until we get that test fixed
[16:21] <ogra_> asac, nope, i see that it seems to have gotten a lot more reliable ... zero change images come out with the same results now
[16:22] <asac> tsdgeos: if you land your branches how do the APs look?
[16:22] <tsdgeos> we'll get it fixed
[16:22] <asac> tsdgeos: did you run every AP
[16:22] <asac> ?
[16:22] <ogra_> sil2100, yes
[16:22] <asac> and you get exactly this 1 failyre only?
[16:22] <tsdgeos> asac: you get the same errors
[16:22] <sil2100> ogra_: you think it might be caused by utah?
[16:22] <asac> tsdgeos: did you test all APs on mako?
[16:22] <tsdgeos> asac: i mean, isn't that what the silo does?
[16:22] <doanac> asac: utah hasn't changed. we stopped using utah over the weekend and use phablet-test-run directly now for autopilot tests
[16:22] <asac> tsdgeos: the silo does not test anything
[16:22] <asac> tsdgeos: you are supposed to test stuff that is in there
[16:22] <tsdgeos> asac: ah ok
[16:22] <asac> :)
[16:22] <sil2100> ogra_: since as I mentioned before, it's reproducible on local devices - but I might misunderstand your point ;)
[16:22] <ogra_> sil2100, not sure, i'm just seeing that between dialer-app passing and dialer-app failing there was the change
[16:22] <tsdgeos> asac: anyway yes, i've run all the stuff on my nexus4 and nothing fails
[16:22] <sil2100> ogra_: since the test fails on my mako every time I run it
[16:23] <ogra_> sil2100, in 169 the test is called dialer-app-autopilot, in 170 it is just named dialer-app
[16:23] <asac> tsdgeos: not even the dialer_app?
[16:23] <asac> tsdgeos: seems you didnt test enough then :)
[16:23] <asac> tsdgeos: unity8 needs all APs run
[16:23] <asac> from apps etc.
[16:23] <tsdgeos> asac: ok
[16:23] <tsdgeos> i'm bailing out from here
[16:23] <tsdgeos> i'm not even the lander of my team
[16:23] <tsdgeos> and i'm not going to run the dialer_app tests for fun
[16:23] <asac> kgunn: we need a lander
[16:23] <ogra_> sil2100, right, i didn't mean to say there must be a connection between the failure and the utah change, i just observed that it happened exactly when the failing started
[16:24] <sil2100> hmmm
[16:24] <asac> tsdgeos: unity is a big beast that has impacted lots of apps etc.
[16:24] <tsdgeos> because you know it's the dialer team that should be running the dialer tests
[16:24] <tsdgeos> not me
[16:24] <asac> tsdgeos: hence unity landings need to run all or most of app tests ... so we protect the folks that have green app tests
[16:24] <tsdgeos> asac: i'd like to see an example of when unity8 broke any app
[16:24] <asac> tsdgeos: ask bfiller :)
[16:24] <tsdgeos> no, i'm asking you since it's you that are arguing it did
[16:25] <tsdgeos> and i really have a hard time seeing how we can break apps
[16:25] <tsdgeos> we can break apps not starting, i can take that
[16:25] <asac> it's accumulated intrinsic know-how/best practices we got from doing these landings over many months
[16:25] <asac> can be revisited
[16:25] <tsdgeos> ok
[16:25] <ogra_> tsdgeos, you could put the panel to the bottom, suddenly all taps the AP test generates would be off by a margin
[16:25] <tsdgeos> i'm back to fixing the tests
[16:26] <tsdgeos> ogra_: that only would happen if the ap tests were crap
[16:26] <ogra_> unity8 will not break apps, but it can break AP tests
[16:26] <tsdgeos> with hardcoded numbers
[16:26] <tsdgeos> it's not *my* issue at all if someone elses tests are crap
[16:26] <ogra_> well, you will be the first go-to person since your change exposed it
[16:26] <bfiller> tsdgeos: this bug breaks the dialer - if the shell (or mir) crashes: https://bugs.launchpad.net/ubuntu/+source/mir/+bug/1240400
[16:26] <ogra_> doesn't mean it is your fault indeed :)
[16:27] <kgunn> josepht: ping
[16:27] <tsdgeos> bfiller: that's mir crashing yes
[16:27] <josepht> kgunn: hi, what's up?
[16:27] <rsalveti> tsdgeos: asac: so what is the real blocker here, just the dialer-app?
[16:27] <kgunn> josepht: how are you?
[16:27] <rsalveti> while I understand we want to fix the maguro issues, they are not regressions
[16:27] <ogra_> rsalveti, just ...
[16:27] <tsdgeos> rsalveti: honestly, i don't have a clue what the blocker is
[16:27] <kgunn> josepht: hey, we're seeing a sudden trend in ci runs
[16:28] <kgunn> for mir, they are starting to time out
[16:28] <rsalveti> and blocking a new landing causes way more issues than first trying to fix the "regressions" we had for maguro
[16:28] <kgunn> wondering if something changed on the ci infra end of things
[16:28] <kgunn> josepht: e.g. like https://jenkins.qa.ubuntu.com/job/mir-mediumtests-builder-trusty-armhf/415/consoleText
[16:28] <josepht> kgunn: let me do some digging
[16:28] <kgunn> josepht: thanks...
[16:28] <asac> rsalveti: no, we have maguro regressions that someone should confirm are a) really no regression or b) are understood to be hardware related (e.g. nothing general in our software stack)
[16:28] <ogra_> rsalveti, what are these regressions ?
[16:28] <kgunn> josepht: one other piece of info from history...
[16:29] <asac> rsalveti: i couldnt confirm a) from looking at the past runs
[16:29] <kgunn> francis gave us a dedicated host previously when this was happening a bunch
[16:29] <ogra_> we have five test failures ... pretty much the same ones we have all the time
[16:29] <rsalveti> asac: if I understood correctly tsdgeos said that those failures were already happening before, and not related with the silo
[16:29] <rsalveti> oh, ok then
[16:29] <ogra_> yeah
[16:29] <kgunn> josepht: after he did that those problems went away...i wonder if maybe we're back to "gen-pop"
[16:29] <sil2100> It's for didrocks and/or asac to decide whether we unblock or not
[16:30] <rsalveti> ogra_: asac: and who is looking at the dialer-app regression?
[16:30] <ogra_> rsalveti, everyone it seems
[16:30] <ogra_> (see backlog of the last hours)
[16:30] <rsalveti> alright :-)
[16:30] <bfiller> rsalveti: if you guys are talking about the one failed autopilot test in the nightly smoke test, my team is looking at it
[16:30] <ogra_> rsalveti, the issue is that we cant really pin it to one specific landing
[16:31] <rsalveti> bfiller: cool
[16:31] <ogra_> rsalveti, i think psivaa is currently trying with a different image and different roll-backs ... but that will take time
[16:31] <josepht> fginther: is what kgunn is referring to above with the timeouts related to what you worked on on Friday?
[16:31] <bfiller> rsalveti: don't think it's critical or should be a blocker, but whatever. a failed test is a failed test. works fine manually
[16:31] <rsalveti> ogra_: right, might be just better to have someone from bfiller's team to look for a fix instead
[16:32] <rsalveti> instead of reverting tons of stuff to try to find the culprit one
[16:32] <rsalveti> as that's really painful
[16:32] <bfiller> are we talking about this? http://ci.ubuntu.com/smokeng/trusty/touch/mako/173:20140210:20140115.1/6527/dialer_app/756161/
[16:32] <kgunn> asac: ogra_ rsalveti : i (tried to :) read most of the backlog....so basically AP failures on maguro were there before, but now dialer app has a new failure ?
[16:32] <sil2100> rsalveti: that's what I did
[16:32] <ogra_> rsalveti, ++
[16:32] <bfiller> ???
[16:32] <ogra_> kgunn, dialer-app has developed a failure over the weekend
[16:32] <sil2100> rsalveti: once I noticed bfiller online, I poked him about the problem - we couldn't do it earlier because he wasn't around, so psivaa was doing bisection of packages
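The bisection psivaa was doing — walking back through landings until the failure disappears — can be sketched generically. A minimal sketch, assuming a failure is deterministic once introduced; `landings` and `is_good` are hypothetical stand-ins, not real CI tooling:

```python
def first_bad(landings, is_good):
    """Binary-search an ordered list of landings for the first one
    after which the test starts failing.

    Assumes failures are persistent: once a landing breaks the test,
    every later image fails too.
    """
    lo, hi = 0, len(landings) - 1
    if is_good(landings[hi]):
        return None  # nothing in this range broke the test
    while lo < hi:
        mid = (lo + hi) // 2
        if is_good(landings[mid]):
            lo = mid + 1  # breakage landed after mid
        else:
            hi = mid      # mid is already bad
    return landings[lo]
```

The catch, as this very thread shows, is the assumption: a racy test breaks the monotonicity bisection relies on, so a flaky failure can point the search at the wrong landing.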
[16:32] <rsalveti> right, awesome then
[16:33] <rsalveti> bfiller will fix it ;-)
[16:33] <ogra_> kgunn, the maguro failures are always around 5 (+/-2)
[16:33] <kgunn> ogra_: ack... i see now
[16:33] <sil2100> I'm sure he will, there's no task their team can't handle ;)
[16:33]  * psivaa is just installing 170 on a mako
[16:33] <kgunn> ogra_: right...so tsdgeos is focused on trying to fix those
[16:34] <sil2100> davmor2: hi!
[16:34] <davmor2> sil2100: hello
[16:34] <ogra_> kgunn, i would say as long as your tests dont expose a significantly different result to the last image test on http://ci.ubuntu.com/smokeng/trusty/touch/ your landing should be fine
[16:34] <rsalveti> right
[16:34] <ogra_> we should define that in some policy :)
[16:34] <ogra_> so people can be pointed to it
[16:34] <asac> kgunn: no one showed me that the maguro AP failures were there before
[16:34] <asac> kgunn: i dont see any data in our test log indicating that thats the case
[16:34] <asac> but you guys can show me :)
[16:35] <ogra_> asac, see http://ci.ubuntu.com/smokeng/trusty/touch/
[16:35]  * kgunn just remains silent
[16:35] <asac> ogra_: i looked through them
[16:35] <ogra_> maguro always varies around 5-7 test failures
[16:35] <ogra_> since like forever
[16:35] <asac> ogra_: well, we talk about specific tests here
[16:35] <asac> ogra_: the new unity8 ones
[16:35] <asac> ogra_: those have never shown up before in the whole history that i can see
[16:35] <ogra_> if the failures kgunn see are identical with the ones on the dashboard i'd say all is fine
[16:36] <asac> https://jenkins.qa.ubuntu.com/job/trusty-touch-maguro-smoke-unity8-autopilot/
[16:36] <ogra_> if they vary thats indeed different
[16:36] <asac> in general i agree with what you say
[16:36] <asac> however, there is risk that we accumulate new issues in the same part
[16:36] <asac> without seeing
[16:36] <ogra_> http://ci.ubuntu.com/smokeng/trusty/touch/maguro/173:20140210:20140115.1/6529/unity8/
[16:36] <asac> and people still claim its old issues
[16:36] <ogra_> that has one failure
[16:36] <asac> so i would like to undersand that claim
[16:37] <fginther> josepht, kgunn, I don't think this is related to any recent changes, the armhf builds have always been in 'gen-pop'. There have been more armhf builds lately, which may be dragging down the build speeds.
[16:37] <asac> ogra_: yes, thats a pretty new image
[16:37] <ogra_> asac, it wasnt clear to me that we talk about one single test run and not the whole suite
[16:37] <asac> right
[16:37] <asac> ogra_: they say this particular test (which happens daily now) is not new
[16:37] <ogra_> against the whole suite there are always 5-7 failures on maguro ... in random places
[16:37] <asac> which can be right, but i just dont see them failing looking back :)
[16:37] <fginther> josepht, kgunn, I'm in the process of getting a few more armhf machines updated to saucy, until then, I'll increase the timeout to at least avoid the failures
[16:37] <ogra_> unity8 only is a different thing
[16:38] <kgunn> alan_g: ^ bummer
[16:38] <ogra_> http://ci.ubuntu.com/smokeng/trusty/touch/maguro/172:20140209:20140115.1/6520/unity8/
[16:38] <ogra_> that one has two
[16:38] <josepht> fginther: ack, thanks
[16:38] <ogra_> http://ci.ubuntu.com/smokeng/trusty/touch/maguro/171:20140208:20140115.1/6511/ has two as well
[16:38] <alan_g> kgunn: yes, waiting for more than 2 hrs for a CI build is a PITA
[16:39] <ogra_> and the last image dropped to one unity8 failure on maguro
[16:39] <asac> kgunn: so lets talk again about your silo :)
[16:40] <asac> kgunn: what we need at minimum to go forward (ignoring the maguro) is to be sure that you see exactly the same errors as on the dashboard
[16:40] <davmor2> ogra_, kgunn, sil2100, asac: is it worth doing a comparison between maguro and mako? If maguro fails and mako passes we look into the issue but don't hold up promotion, and then if maguro and mako both fail a test we look at that being a blocker and really dig into that. Maguro seems to have so many faults with hardware that it could just be that some of the time
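davmor2's rule can be written down as a tiny triage sketch — illustrative only; the names are made up and this is not real dashboard code:

```python
def triage(maguro_failures, mako_failures):
    """Sketch of the proposed promotion rule: a test failing on both
    devices blocks promotion; a maguro-only failure is investigated
    (possibly flaky hardware) but doesn't hold the image."""
    both = set(maguro_failures) & set(mako_failures)
    maguro_only = set(maguro_failures) - both
    return {"blockers": sorted(both), "investigate": sorted(maguro_only)}
```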
[16:40] <rsalveti> iirc the new errors are all maguro specific
[16:40] <ogra_> yeah
[16:40] <sil2100> rsalveti: one is mako specific ;) The dialer-app test failure
[16:41] <rsalveti> right, not this one :-)
[16:41] <ogra_> davmor2, right, but this is about landing and doing tests before actually getting a package in the archive
[16:41] <ogra_> davmor2, i.e. running the unity8 tests on a new unity8 package before it gets in
[16:42] <sil2100> davmor2: btw. are you free now for some dogfooding?
[16:42] <asac> kgunn: can you show up on the landing team call in 20?
[16:42] <ogra_> he usually does anyway :)
[16:42] <asac> so we can sort this out?
[16:43] <kgunn> asac: of course
[16:43] <asac> ah good
[16:43] <asac> i will be there
[16:43] <asac> and sell blank landing approvals :)
[16:43] <asac> lol
[16:43] <ogra_> uuuh, manager participation
[16:43] <asac> i take bitcoins through private channels :)
[16:43] <rsalveti> asac: can you invite me as well?
[16:43] <kgunn> asac: are you money laundering :)
[16:43] <asac> rsalveti: done
[16:43] <davmor2> sil2100: I can be on what 173 or is there a new image landing?
[16:43] <rsalveti> asac: thanks
[16:44] <ogra_> kgunn, nah, he is monbey dry cleaning :)
[16:44] <ogra_> *money
[16:44] <asac> kgunn: lol
[16:44] <asac> not yet
[16:44] <sil2100> davmor2: 173 is fine
[16:45] <ogra_> davmor2, with all that back and forth i think we'll not have a new image before 3am UTC
[16:45] <sil2100> davmor2: since I'd like to maybe promote it if bfiller finds a fix for the test failure
[16:46] <davmor2> sil2100: ah what a wishful thinking man ;)
[16:46] <davmor2> sil2100: It's nice to see optimism :)
[16:46] <ogra_> davmor2, hey, bfiller's team is cool, they'll find a fix fast ;)
[16:47] <davmor2> ogra_: Every team is awesome here :P  It's just that if it is the maguro tests they are trying to fix, we are not having much joy reproducing them :)
[16:48] <ogra_> davmor2, nah it is the dialer-app issue
[16:48] <sil2100> tsdgeos: hey, so hm, after some discussion with asac, I'm willing to unblock unity8 if you guys promise to try and fix the flaky unity8 tests on maguro in the nearest landings
[16:48] <bfiller> ogra_: so far I can't even reproduce the issue on mako, have run the tests like 5 times
[16:48] <davmor2> ogra_: oh the real issue, oh well that is fixable I'm sure :)
[16:49] <sil2100> bfiller: uh
[16:50] <ogra_> bfiller, hmm, well ... as i stated before (even though this might be a complete coincidence) between http://ci.ubuntu.com/smokeng/trusty/touch/mako/169:20140207:20140115.1/6492/ and http://ci.ubuntu.com/smokeng/trusty/touch/mako/170:20140207.1:20140115.1/6500/ the test infra was upgraded (note the different test names)
[16:50] <ogra_> bfiller, and that seems to be when the failure started ... but as i said, could be just coincidence
[16:50] <ogra_> bfiller, i think psivaa could reproduce it somehow
[16:50] <sil2100> bfiller: I can reproduce it every time on my mako
[16:51] <sil2100> MismatchError: '0612302' != u''
[16:51] <bfiller> sil2100: with build 173?
[16:51] <sil2100> bfiller: yes, I just flashed my device
[16:52] <ogra_> http://ci.ubuntu.com/smokeng/trusty/touch/mako/173:20140210:20140115.1/6527/dialer_app/756161/
[16:52] <ogra_> thats the dialer-app test from 173
[16:52] <bfiller> sil2100: just did the same and ran successfully, now rebooting and trying again. must be a race
[16:52] <bfiller> will figure it out
[16:52] <sil2100> hmm
[16:53] <ogra_> plars, doanac, could we probably have only a single entry per logfile in the test results? scrolling down on http://ci.ubuntu.com/smokeng/trusty/touch/mako/173:20140210:20140115.1/6527/dialer_app/756161/ looks pretty weird
[16:53] <bfiller> boiko: just flashed 173 on nexus4 and autopilot running fine for me, wondering if the notification for an end call is getting in the way of the test and it's a race condition
[16:54] <boiko> bfiller: that might be, om26er also reported that
[16:54] <boiko> bfiller: I will remove those anyways, as the design has changed
[16:54] <doanac> ogra_: hmm. i thought i'd fixed that.
[16:54] <ogra_> :)
[16:54] <ogra_> fix harder :)
[16:55] <ogra_> no biggie indeed ... not having logs at all would be worse
[16:55] <doanac> ogra_: yeah. i guess its just fixed for the "testsuite": http://ci.ubuntu.com/smokeng/trusty/touch/mako/173:20140210:20140115.1/6527/dialer_app/ not the test-case
[16:55] <bfiller> boiko: weird I can't reproduce. which package would need to be updated to get rid of the notifications?
[16:55] <doanac> ogra_: you are referring to all the artifacts showing up, correct?
[16:55] <boiko> bfiller: telephony-service
[16:55] <ogra_> doanac, exactly
[16:56] <doanac> ogra_: ack. i'll work up a fix. thanks for noticing!
[16:56] <ogra_> doanac, btw, nice work, it seems to be way more reliable (zero-change images suddenly have the same results on first run etc)
[16:56] <boiko> bfiller: I'm flashing the device right now to test
[16:56] <asac> cyphermox_: will you be in the call in 5?
[16:56] <doanac> ogra_: thanks. it took a lot of work in spare time to convert things over :)
[16:56] <om26er> boiko, its a race you may need to run the suite multiple times
[16:56] <ogra_> you should do that in paid time really :)
[16:56] <boiko> bfiller: ^
[16:57] <cyphermox_> asac: I am supposed to be off today, just happen to be looking at IRC :)
[16:57] <asac> sil2100: ^^
[16:57] <sil2100> heh, my device has to be really racy, as it fails every time here ;p
[16:57] <ogra_> cyphermox_, look away then !!!
[16:57] <asac> cyphermox_: that eliminates our hopes to do an aggressive landing move tonight
[16:57] <sil2100> Oh crap
[16:58] <bfiller> sil2100: it's weird, still no failures for me
[16:58] <cyphermox_> sil2100: we discussed this on thu or fri
[16:58] <sil2100> asac: well, I think tonight it might not be needed
[16:58] <asac> k
[16:58] <cyphermox_> asac: what do you want to land? maybe there's a way to do it anyway
[16:58] <sil2100> asac: let's wait for tomorrow with the aggression - I mean, let's kick the image with new AP in the morning
[16:58] <asac> kk
[16:58] <sil2100> cyphermox_: will you be on tomorrow?
[16:58] <asac> sil2100: sounds good
[16:58] <cyphermox_> sil2100: no, wednesday
[16:58] <bfiller> sil2100, asac : bottom line - the failure seems to be occurring because of a notification that was added when a call ends, and that interferes with autopilot depending on timing
[16:59] <bfiller> so it's a test bug, not a functional bug
[16:59] <ogra_> yay
[16:59] <cyphermox_> sil2100: I will likely be online though, so don't hesitate to ask me for packaging reviews if needed
[16:59] <sil2100> bfiller: \o/
[16:59] <bfiller> asac, sil2100 : we plan to remove the notification anyway because design wants it removed now
[16:59] <psivaa> bfiller: how was that introduced ?
[16:59] <asac> bfiller: what would have to land for removing those?
[16:59] <psivaa> i mean the failure is quite recent
[16:59] <bfiller> asac: a new telephony-service
[16:59] <rsalveti> awesome
[17:00] <bfiller> psivaa: telephony-service last week
[17:00] <asac> see :)
[17:00] <asac> i knew it :)
[17:00] <rsalveti> then we can speed up landings today still
[17:00] <asac> bfiller: did you back that out and it went away? :)
[17:00] <bfiller> asac: I can't repro it in the first case honestly
[17:00] <bfiller> it's a race
[17:00] <bfiller> but that is the problem
[17:00] <bfiller> it never failed during our landing testing either
[17:00] <asac> psivaa: can you confirm? you seem to be able to reproduce this dialer issue
[17:01] <asac> just back out the telephony-service... then it should be gone
[17:01] <asac> bfiller: its odd...
[17:01] <ogra_> it isnt
[17:01] <ogra_> that was the first thing we backed out
[17:01] <asac> bfiller: ^^
[17:01] <psivaa> asac: just running the test for the first time after 170 install and was able to see the failure. now i'll revert telephony and see if that goes
[17:01] <asac> ogra_: maybe you did a mistake?
[17:01] <asac> or just an oversight?
[17:01] <asac> psivaa: thanks!
[17:01] <ogra_> asac, psivaa did the test
[17:02] <om26er> sil2100, psivaa where to find the failing dialer-app test log ?
[17:02] <ogra_> om26er, http://ci.ubuntu.com/smokeng/trusty/touch/mako/173:20140210:20140115.1/6527/dialer_app/756161/
[17:02] <bfiller> asac, ogra_ : telephony-service was released on Feb 6th: 0.1+14.04.20140206-0ubuntu1
[17:03] <asac> bfiller: ack. thats the one that landed first in 170
[17:03] <ogra_> bfiller, right and it entered the image friday morning http://people.canonical.com/~ogra/touch-image-stats/20140207.changes
[17:03] <ogra_> (which was image 170)
[17:03] <om26er> this one is different
[17:05] <om26er> boiko, I have seen this as well, it happens when the page title is not updated, thats probably due to a hung phonesim thing we use for fake calling
[17:05] <ogra_> om26er, well, it is the one that is reliably showing since friday
[17:05] <tsdgeos> sil2100: asac: great, that's good. i'll ask mzanetti to do the landing since he's a trained-lander
[17:05] <ogra_> (and currently blocking landing)
[17:05] <boiko> om26er: hmm, ok, as soon as the device finishes flashing I will debug this further
[17:08] <rsalveti> tsdgeos: so can we say that the landing-4 was fully tested then?
[17:09] <tsdgeos> rsalveti: honestly, i can't say, it was Saviq doing landing-4
[17:09] <rsalveti> or should we wait mzanetti to publish his test results
[17:09] <tsdgeos> if the only two problems were the tests listed in there, yes we can
[17:09] <rsalveti> right, as he's gone we need someone to sign for it, just not sure who yet
[17:09] <tsdgeos> otherwise, not sure
[17:09] <tsdgeos> rsalveti: i'd say mzanetti
[17:09] <tsdgeos> he's having dinner, but said he'd be back
[17:09] <rsalveti> great
[17:10] <bfiller> sil2100: can you run this test and tell me what happens on the screen? does it never get to the live call page?
[17:10] <bfiller> dialer_app.tests.test_calls.TestCalls.test_outgoing_answer_local_hangup
[17:10] <bfiller> as I can't make it fail
[17:10] <bfiller> om26er: ^^^^
[17:11] <om26er> bfiller, the title stays blank, the live call page does open
[17:12] <om26er> bfiller, reboot the phone, I mostly get it to fail on clean boots
[17:12] <bfiller> om26er: does the call duration update or stay on 00:00
[17:13] <om26er> bfiller, I believe it stays at 00:00 sil2100 is that right ?
[17:13] <om26er> I am updating the phone as well to check
[17:14] <sil2100> bfiller: one moment, on a meeting
[17:15] <bfiller> ogra_: has ofono-phonesim* been upgraded recently?
[17:15] <ogra_> bfiller, i dont think so
[17:15] <ogra_> bfiller, last upload way jan 8
[17:15] <ogra_> *was
[17:15] <bfiller> ogra_: ok
[17:18] <psivaa> first run after reverting telephony-service on 170 was successful, running a couple of more times to confirm
[17:24] <asac> psivaa: cool
[17:24] <asac> psivaa: can you try the same on the latest image? just to confirm that there is no other issue hidden underneath
[17:25] <psivaa> asac: ack, once confirming that revert on 170 works, i'll go back to 173 although i did that before as the first attempt.
[17:25] <asac> psivaa: right. sounds good
[17:25] <om26er> bfiller, hey, there are two issues, the more apparent one can be fixed in a single line:
[17:25] <om26er> self.assertThat(lcp.title, Equals(number))
[17:26] <om26er> self.assertThat(lcp.title, Eventually(Equals(number)))
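The difference between the two assertions above is that autopilot's `Eventually` polls the attribute until the wrapped matcher passes, instead of checking the value once. A minimal stand-in for that polling behaviour (not autopilot's actual implementation; `matcher` here is a plain predicate rather than a testtools matcher):

```python
import time

def eventually(get_value, matcher, timeout=10.0, interval=0.1):
    """Illustration of what autopilot's Eventually matcher does:
    re-read a value until it matches, failing only after a timeout."""
    deadline = time.monotonic() + timeout
    while True:
        value = get_value()
        if matcher(value):
            return value
        if time.monotonic() >= deadline:
            raise AssertionError("timed out waiting; last value: %r" % (value,))
        time.sleep(interval)
```

With the plain `Equals`, the assertion runs while the title is still empty and fails with exactly the `'0612302' != u''` mismatch seen earlier; polling absorbs the race.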
[17:26] <asac> psivaa: yeah, might be just a mistake or something or we really grew another regression in the same spot
[17:26] <sil2100> bfiller: ok, trying to run it and see what's on the screen
[17:26] <om26er> the other is a race and happens much less often
[17:26] <psivaa> asac: ack
[17:26] <sil2100> om26er: ok, the dialer-app test fails when the state is still in 'calling'
[17:26] <boiko> om26er: nice catch on this one, I will fix it
[17:27] <om26er> sil2100, yeah, that will be fixed with the above change
[17:28] <sil2100> \o/
[17:28] <bfiller> om26er: is the other race due to the notification?
[17:29] <om26er> bfiller, yes, probably
[17:30] <davmor2> sil2100: maguro finally finished flashing dialer app works fine from what I can see
[17:30] <bfiller> sil2100: can you confirm that changing line 178 of test_calls.py to self.assertThat(lcp.title, Eventually(Equals(number))) fixes your failure?
[17:31] <sil2100> bfiller: doing that, just need to finish restarting my phone
[17:31] <bfiller> sil2100: cool thanks
[17:36] <sil2100> bfiller: that fixes it ;)
[17:37] <bfiller> sil2100: great, om26er nice catch there. boiko please submit an MR and we can get this released
[17:37] <boiko> bfiller: yep
[17:38] <om26er> great ;)
[17:38] <boiko> bfiller: sil2100: om26er: is there a bug reported on this problem? just to reference it
[17:38] <bfiller> boiko: let me file one
[17:38] <boiko> bfiller: thanks
[17:39] <bfiller> boiko: https://bugs.launchpad.net/ubuntu/+source/dialer-app/+bug/1278519
[17:40] <boiko> bfiller: thanks
[17:48] <davmor2> sil2100: tea has been called back in 30
[17:50] <sil2100> davmor2: ok, thanks! The test results look good so far
[17:51] <psivaa> bfiller: ogra_: asac: so now we are running the tests using phablet-test-run and dont use utah to run autopilot tests individually, so the timing-related issues are more visible.
[17:52] <bfiller> psivaa: that's good for sure, but it still didn't fail for me using phablet-test-run or autopilot directly but I guess it's not an environment issue, rather a poorly written test
[17:52] <psivaa> so even after reverting telephony-service on 170 i see the failures when using the latest method of running the test
[17:53] <bfiller> psivaa: turns out the failure is not because of telephony-service
[17:53] <psivaa> bfiller: ohh. did i fail to read the backlog?
[17:53] <bfiller> psivaa: it's this: https://code.launchpad.net/~boiko/dialer-app/fix_test_calls/+merge/205636
[17:53] <boiko> sil2100: I have submitted an MR, just waiting for jenkins to run it to fill the checklist
[17:54] <bfiller> sil2100: MR is here: https://code.launchpad.net/~boiko/dialer-app/fix_test_calls/+merge/205636
[17:54] <psivaa> bfiller: boiko ack, thanks
[17:54] <sil2100> bfiller, boiko: thanks guys! Could you set up a landing for that? I'll assign a silo then straight away :)
[17:55] <bfiller> sil2100: yup
[17:58] <bfiller> sil2100: done
[17:58] <sil2100> bfiller: thanks, let me assign a slot
[18:00] <om26er> sil2100, I have a new source that I want to add to CI and at the same time get it uploaded to ubuntu universe, I can do the former part, who to contact about the latter ?
[18:02] <bfiller> sil2100: I started the build, going to lunch now. can test when I get back (like 1 hr)
[18:02] <sil2100> om26er: you mean the ubuntu-integration-tests?
[18:02] <om26er> sil2100, yes
[18:02] <sil2100> bfiller: excellent, I'll inform Robert to keep an eye on this silo
[18:03] <sil2100> om26er: ok, so we can try doing that normally, we can add it to the CITrain and push it to the archive
[18:03] <sil2100> (I'll try doing that tomorrow)
[18:03] <om26er> sil2100, that sounds great, thanks
[18:06] <sil2100> ogra_: can you ACK 2 packaging changes for me ;) ?
[18:06] <sil2100> Seem trivial
[18:06] <ogra_> show me
[18:06] <sil2100> http://162.213.34.102/job/landing-001-2-publish/lastSuccessfulBuild/artifact/packaging_changes_indicator-keyboard_0.0.0+14.04.20140207-0ubuntu1.diff <- bamf dependency removed (in CMake it seems to be removed as well, and builds!)
[18:06] <sil2100> http://162.213.34.102/job/landing-001-2-publish/lastSuccessfulBuild/artifact/packaging_changes_indicator-sound_12.10.2+14.04.20140207-0ubuntu1.diff <- a new Recommends, changelog sounds legit
[18:08] <ogra_> sil2100, hmm, was that second one discussed with the desktop team ... i.e. seb128 ?
[18:08] <ogra_> sil2100,  adding that recommends means pavucontrol by default on all desktop installs
[18:08] <ogra_> sil2100,  the first one is fine
[18:09] <ogra_> sil2100, for the second one i'd like to defer to a desktop team member, that seems very intrusive
[18:09] <sil2100> ogra_: not sure, let me dig deeper - it's a | (an alternative), so it shouldn't get pulled in as long as the others are available
[18:09] <sil2100> ogra_: ah, seb acked it
[18:09] <sil2100> ogra_: http://bazaar.launchpad.net/~indicator-applet-developers/indicator-sound/trunk.14.04/revision/411 <- approved by Sebastien
[18:10] <ogra_> ok,. then ack from me too
[18:10] <ogra_> yeah
[18:10] <sil2100> phew, dodged a bullet here
[18:10] <sil2100> Since this way we're not responsible for anything being broken now ;D
[18:11] <ogra_> haha
[18:14] <sil2100> ogra_: ok, so, it seems that the image is fine so far, so let's promote #173!
[18:14] <sil2100> robru: morning :)
[18:14] <sil2100> robru: I have some missions for you today o/
[18:15] <robru> sil2100, sure
[18:15] <sil2100> robru: let me just finish writing the e-mail
[18:15] <robru> sil2100, no worries
[18:19] <davmor2> sil2100: they look good so far :)]
[18:20] <sil2100> ogra_: whenever you're ready press the promote button!
[18:20] <ogra_> yeah yeah ... already running ...
[18:20] <ogra_> takes a while :)
[18:21] <om26er> cihelp if there is no launchpad project for a source package and only a lp branch, can it be added to cupstream2distro-config for CI ?
[18:23] <sil2100> \o/
[18:23] <ogra_> :)
[18:26] <sil2100> robru: ok, done
[18:27] <robru> sil2100, thanks
[18:27] <sil2100> robru: soooo:
[18:27] <sil2100> robru: could you make sure once the dialer-app landing from silo 007 is tested you release that?
[18:28] <robru> sil2100, sure thing.
[18:28] <sil2100> robru: same for the unity8 landing from silo 004
[18:28] <robru> sil2100, alright, I'll keep an eye
[18:29] <sil2100> robru: try also finding a silo maybe for the platform-api landing from l63, I think this might be good to have and it's small (and I don't think we'll need to release platform-api anywhere else)
[18:30] <sil2100> robru: you can also add one or two more landing silos if you want, just be sure that the component is no-risk and that we won't block anything important ;)
[18:30] <sil2100> robru: just don't assign mir for now!
[18:30] <robru> sil2100, ok, no mir, sure.
[18:30] <sil2100> robru: as it's an ABI break, so this locks up many silos at once
[18:30] <robru> sil2100, ahhhh ok thanks
[18:31] <sil2100> robru: unity-scope-scopes is +1'ed by seb and ready for release, but I don't know how to proceed with new packages in CITrain yet - not sure if the whitelist is updated and such
[18:31] <sil2100> robru: so let's wait with that till tomorrow I guess
[18:31] <sil2100> robru: anyway, thanks :)
[18:31] <robru> sil2100, yeah, I saw that one. I guess we have to wait for didier to preNEW it
[18:32] <sil2100> robru: I think seb just did it today, we asked about it and he said it's ok... but not sure if he did all the other manual stuff for preNEWing, like the whitelist and such
[18:32] <sil2100> Not even sure if that's still used
[18:32] <robru> sil2100, well what's the worst that can happen if I publish it? the archive robot won't copy it...
[18:33] <sil2100> robru: right, but I'm not sure if it won't leave the package in some strange transient state
[18:33] <sil2100> But I guess not
[18:34] <robru> sil2100, well the only "inconsistent state" it'll get is that citrain will say it's published, but it won't actually make it into the archive
[18:34] <sil2100> robru: well, you can try publishing with 'ACK PACKAGING' later on I think, if it doesn't go through we'll check the code and try to proceed
[18:37] <sil2100> Ok, time for me to EOD
[18:38] <sil2100> See you tomorrow!
[18:44] <om26er> fginther, I created a simple test job but its failing with https://jenkins.qa.ubuntu.com/job/generic-mediumtests-builder-trusty-armhf/2788/console
[18:44] <om26er> help ?
[18:51] <boiko> psivaa: tests passed on dialer MR: https://code.launchpad.net/~boiko/dialer-app/fix_test_calls/+merge/205636
[18:58] <fginther> om26er, yes, branches can be added, as long as the branch is owned by a team which ps-jenkins is a member
[19:00] <fginther> om26er, supply lp:ubuntu-autopilot-tests/ubuntu-integration-tests as the landing_candidate instead of the target_branch and it should build
[19:01] <thomi> Mirv: hey - are you able to give me write access to the CI train self service SS please?
[19:01] <om26er> fginther, trying that, triggered a rebuild
[19:02] <psivaa> boiko: ack, does not look like that the tests are run in the MP is similar to what we run smoke now. but hope the test passes with the image
[19:02] <psivaa> in smoke
[19:04] <psivaa> boiko: i meant the way the tests are run ^
[19:05] <boiko> psivaa: yeah, I have seen in the past tests that would pass in one and fail in another
[19:20] <om26er> fginther, ps-jenkins is now added in the team that supervises that branch, can you add CI for that ?
[19:20] <om26er> lp:ubuntu-autopilot-tests/ubuntu-integration-tests
[19:35] <fginther> om26er, will add it to the list for today
[19:37] <om26er> fginther, lastly, do you know why these tests are failing on otto ? https://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-trusty/2684/?
[19:37] <ogra_> hmm sil2100 is gone
[19:37] <ogra_> 173 doesnt actually look good on my mako
[19:38]  * ogra_ has a completely hanging UI 
[19:39] <fginther> om26er, is this error meaningful to you? ERROR content:49 - Could not add content object 'None' due to IO Error: [Errno 13] Permission denied: '/var/log/syslog'
[19:39] <ogra_> hmm, after a unity crash it seems to behave now
[19:40] <om26er> fginther, not really, doesn't sound related to the tests, could be an environmental fault
[19:48] <fginther> om26er, I'll add it to the pile, but probably won't be able to provide any further insight today. I've noticed that https://jenkins.qa.ubuntu.com/job/dialer-app-ci/179/ passed just recently, possibly there is a fix that needs to be merged in
[23:01] <robru> bfiller, I published dialer-app. please merge & clean silo 7 once it hits the archive
[23:02] <robru> bfiller, also history-service in silo 8.
[23:23] <robru> sergiusens, assigned platform-api to silo 9, please build
[23:23] <sergiusens> robru, thanks
[23:38] <bregma> fginther, what's going on with the head/unity tests?
[23:39] <cjohnston> bregma: link?
[23:39] <fginther> cjohnston, we've been discussing it in email
[23:39] <cjohnston> ack. nevermind :-)
[23:41] <fginther> bregma, I've been running some single test suite experiments, the nvidia machine is passing a lot more tests than the intel one
[23:42] <bregma> fginther, I was just wondering if you had some kind of idea about causes, since I see you've been doing what appear to be dark and ritualistic experiments
[23:42] <fginther> bregma, should these machines have dual monitors?
[23:42] <fginther> bregma, unfortunately the dark arts have not spoken to me today
[23:43] <bregma> fginther, sometimes they do and sometimes they don't have dual monitors, which revealed latent bugs in our test code recently, so either *should* be OK
[23:43] <fginther> bregma, I've also got a test in the queue to run the suites in a different order