[08:02] <Mirv> cihelp failure with successful tests, permission denied on autopilot-nvidia http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/368/console
[08:39] <vila> Mirv: hurrah! That's evidence we can find the remaining hardcoded /var/lib/jenkins
[08:41] <didrocks> interesting, that was what Francis updated during the migration, though
[08:42] <vila> didrocks: probably something else
[08:42] <vila> didrocks: could it be a reference in results produced earlier ?
[08:42] <didrocks> vila:     echo "Calculating results for machine $machine"
[08:42] <didrocks>     JUNIT=$(/iSCSI/jenkins/cu2d/cupstream2distro/latest_autopilot_results $JENKINS_HOME $JOBROOT $machine)
[08:42] <didrocks> vila: no, I don't think so
[08:42] <didrocks> but latest_autopilot_results has been updated
[08:42] <didrocks> is JENKINS_HOME pointing to the right one?
[08:43] <didrocks> yeah, seems so: http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/368/injectedEnvVars/?
[08:43] <didrocks> let's check first latest cu2d is in prod
[08:44] <didrocks> yep
[08:44] <didrocks> hum, not sure TBH
[08:44] <didrocks> vila: I would suggest set -x
[08:44] <didrocks> to debug the issue
[08:45] <didrocks> to know exactly where it's failing
[08:46] <didrocks> forget that, it's failing in cu2d-autopilot-report
[08:46] <didrocks>     histdir = os.path.abspath(os.path.expanduser(histdir))
[08:46] <didrocks> vila: here you have your issue ^
[08:47] <didrocks> so, it's args.logfile
[08:47] <didrocks> which is the first non-optional argument
[08:47] <didrocks> so $JUNIT
[08:47] <didrocks> and     JUNIT=$(/iSCSI/jenkins/cu2d/cupstream2distro/latest_autopilot_results $JENKINS_HOME $JOBROOT $machine)
[08:48] <didrocks> would be interesting to really run that with set -x then
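A minimal sketch of the `set -x` approach suggested here; the JUNIT assignment is a stand-in for the real latest_autopilot_results call:

```shell
#!/bin/sh
# set -x makes the shell print every command, with variables already
# expanded, before running it, so the exact failing statement and the
# value it received become visible in the console log.
set -x
# stand-in for: JUNIT=$(/iSCSI/jenkins/cu2d/cupstream2distro/latest_autopilot_results ...)
JUNIT="results/junit.xml"
echo "JUNIT=$JUNIT"
```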
[08:48] <vila> ~jenkins should expand to iSCSI
[08:48] <didrocks> vila: it can be that $JUNIT is wrong
[08:48] <didrocks> anyway, I'll let the CI team try that, I can help if needed
[08:48] <vila> and it does (~jenkins that is)
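A small check, as a sketch, of the expansion being discussed: Python's `os.path.expanduser` (the call in cu2d-autopilot-report) behaves like shell tilde expansion, resolving a *leading* `~` or `~user` via `$HOME` / the passwd database, and returning an already absolute path unchanged:

```shell
# Compare what expanduser does with an absolute path vs a tilde path.
python3 - <<'EOF'
import os
print(os.path.expanduser("/var/lib/jenkins/cu2d"))  # already absolute: unchanged
print(os.path.expanduser("~"))                      # current user's home
EOF
```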
[08:48]  * didrocks stops stepping on toes :)
[08:49] <vila> didrocks: how did you check the latest cu2d is in prod ?
[08:49] <didrocks> Mirv: how is the situation for those prepare job failing due to bad debian/source/format? really bad or okish?
[08:49] <didrocks> vila: yep:
[08:49] <didrocks> 09:43:52   didrocks | let's check first latest cu2d is in prod
[08:49] <didrocks> 09:44:05   didrocks | yep
[08:50] <vila> didrocks: *how* ?
[08:50] <vila> :)
[08:50] <didrocks> vila: logged in as desktop-team@
[08:50] <didrocks> and checked the rev
[08:51] <vila> didrocks: I thought 'bzr pull'ing there was different from deploying the jobs though (but fginther said cyphermox did that yesterday anyway)
[08:51] <didrocks> vila: hum, no, normally the version in ~jenkins is a symlink to that
[08:52] <didrocks> vila: not sure if that changed though
[08:53] <didrocks> vila: if not, there is potential for failures btw
[08:54] <vila> didrocks: nah, pretty sure I checked that and that's a symlink to /home/desktop-team (so a potential issue if ~desktop-team changes, but that's not the case for now)
[08:55] <didrocks> ah great :)
[08:56] <vila> ./bin/find_publisher_failures.sh:find /var/lib/jenkins/cu2d/work -name "publisher.xml" -print -exec grep -i fail {} \;
[08:56] <didrocks> what's this bin/ ?
[08:56] <vila> sry ~d-t
[08:56] <didrocks> waow, doesn't ring a bell at all
[08:56] <vila> different one, is that script used in jenkins or just something used manually ?
[08:57] <vila> ok, will fix anyway just in case
[08:57] <didrocks> vila: I don't think it's used
[08:57] <didrocks> I would say 2) (just something used manually)
[08:57] <didrocks> but nice catch anyway :p
[09:06] <Mirv> didrocks: I didn't find other occurrences besides autopilot-gtk, just other problems like the one pointed to ci_help with autopilot-nvidia, manual upload to archive, and autopilot failures
[09:06] <Mirv> didrocks: but the mergers have a problem with mathieu's commit even though bzr bd works fine
[09:06] <didrocks> Mirv: if there is only one, I would say "phew"!
[09:06] <didrocks> Mirv: for the manual upload to archive, can you reconcile them?
[09:06] <didrocks> so that at least all prepare jobs are green
[09:07] <didrocks> elopio: you can remove network slowness from the title I guess :)
[09:07] <didrocks> ev: ^
[09:07] <didrocks> sorry elopio :)
[09:08] <elopio> np
[09:09] <Mirv> didrocks: I fixed the changelog entry for indicator-datetime already
[09:09] <elopio> hey didrocks, would you know why an autopilot feature that landed in 2013-11-07 is not yet available in the mako runners?
[09:09] <didrocks> Mirv: rock \o/
[09:09] <didrocks> elopio: not released due to the CI infra, mostly (and no landing ask from autopilot on the landing spreadsheet)
[09:10] <elopio> didrocks: so, is the landing ask the only thing I'm missing now?
[09:10] <thomi> didrocks: I added a landing ask yesterday I think?
[09:11] <didrocks> elopio: thomi: ah, nice, I didn't check
[09:11] <thomi> didrocks: :)
[09:11] <didrocks> just be aware that the queue is a little bit long
[09:11] <didrocks> until the whole infra is up and we catch up on our debt
[09:11] <didrocks> so probably next week
[09:12] <elopio> thanks thomi and didrocks.
[09:12] <thomi> didrocks: that's cool - I wanted to land AP before I merged some more.... experimental features :)
[09:12] <didrocks> thomi: hum, you meant:
[09:12] <didrocks> experimental *safe* features?
[09:12] <didrocks> right right? ;)
[09:12] <thomi> didrocks: right, of course :)
[09:12] <didrocks> heh
[09:12] <thomi> I made sure they're not getting merged yet
[09:12] <thomi> until I see the new AP released to distro
[09:13] <thomi> then I'll merge them and make sure they don't break anything
[09:13] <thomi> and submit a new landing ask
[09:13] <thomi> anyway, I'm off to bed. Hooray for the weekend!
[09:13] <thomi> catch you later y'all ;)
[09:18] <vila> didrocks: I think I have some winners here:
[09:18] <vila> jenkins@jatayu:~$ grep /var/lib/jenkins ~jenkins/cu2d/*rc
[09:19] <vila>  
[09:19] <vila> grrr stupid results starting with '/'
[09:19] <vila> cu2d/100scopes.autopilotrc:history=/var/lib/jenkins/cu2d/history/100scopes
[09:19] <vila> cu2d/default.autopilotrc:history=/var/lib/jenkins/cu2d/history
[09:19] <vila> cu2d/indicators.autopilotrc:history=/var/lib/jenkins/cu2d/history/indicators
[09:19] <vila> cu2d/oif.autopilotrc:history=/var/lib/jenkins/cu2d/history/oif
[09:19] <vila> cu2d/unity.autopilotrc:history=/var/lib/jenkins/cu2d/history/unity
[09:19] <vila> didrocks: so, what are these files ?
[09:20] <didrocks> vila: oh, you got it!
[09:21] <didrocks> vila: yeah, those conf files are what set the threshold
[09:21] <didrocks> like number of accepted failures
[09:21] <didrocks> number of regressions between 2 runs
[09:21] <didrocks> and so on
[09:21] <didrocks> so they need to be adjusted
[09:21] <didrocks> vila: really nice catch!
[09:21] <vila> didrocks: not under version control then and not part of the job definitions, so... edited manually ?
[09:22] <didrocks> vila: yeah, we changed those values a lot, they are configurations
[09:22] <didrocks> vila: the final goal is to remove them
[09:22] <didrocks> and not have any flaky test
[09:22] <didrocks> so all to "0"
[09:23] <vila> didrocks: grey area then, sounds like ~desktop-team should maintain them rather than ~ci ? Or both ? Hard to get a clear cut there, thoughts :-/
[09:23] <didrocks> vila: in fact, the ones you need to change are in cu2d/history/*/
[09:23] <didrocks> vila: I can change them
[09:23] <didrocks> if I have access
[09:23] <didrocks> let me check
[09:23] <didrocks> vila: mind giving me the new path?
[09:24] <didrocks>  /I*?
[09:24] <vila> ~jenkins
[09:24] <vila> ~jenkins/cu2d even, haaa, so no access
[09:24] <vila> urgh
[09:25] <didrocks> vila: I'm read-only
[09:25]  * vila nods
[09:25] <didrocks> on those files
[09:25] <didrocks> vila: if you can just sed them, that would be appreciated
[09:25] <didrocks> vila: I don't see those values changing right now
[09:25] <vila> didrocks: sure, doing it right now
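The one-off fix being applied here amounts to a sed sweep over those rc files. A sketch; the directory is parameterized (on the real box it would be ~jenkins/cu2d), and the target prefix is an assumption based on the /iSCSI paths quoted earlier:

```shell
#!/bin/sh
# Rewrite the hardcoded old Jenkins home in every *rc file.
CU2D_DIR="${CU2D_DIR:-$HOME/cu2d}"   # hypothetical local default
for f in "$CU2D_DIR"/*rc; do
    [ -e "$f" ] || continue
    sed -i 's|/var/lib/jenkins|/iSCSI/jenkins|g' "$f"
done
```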
[09:25] <didrocks> and then, the config files will be dead
[09:25] <vila> didrocks:  ?
[09:25] <didrocks> vila: think about doing those in cu2d/history/*/
[09:25] <didrocks> vila: end goal: 0 flaky tests accepted
[09:26] <didrocks> so no file :)
[09:26] <vila> didrocks: why can't that be done in cu2d-config instead? that would address the access issue
[09:29] <didrocks> vila: it can TBH, I don't think it's worth it to add another move
[09:29] <didrocks> let's get the prod running again IMHO
[09:36] <vila> didrocks: oh sure, first thing is to fix prod, the question was for the middle/long term
[09:37] <vila> didrocks: should be fixed
[09:37] <vila> Mirv: ^
[09:40] <Mirv> vila: thanks!
[09:40] <vila> Mirv: yw
[09:47] <Mirv> vila: rerunning at http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/369/console
[09:47] <vila> Mirv: rock&roll ;)
[09:55] <vila> Mirv: and success right ?
[09:55] <vila> Mirv: I mean for the path fix that is
[09:58] <Mirv> vila: that part seems a success, something else isn't. the subtasks were still successful but it's marked a failure (as seen at http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/369/)
[10:00] <Mirv> but yes the path seems to work now. there's some problem of a changed number of tests ("-121"), some information from somewhere tells there should have been more tests that were run
[10:01] <vila> Mirv: yeah, same diagnostic as you. I would wait for one more run before worrying about it though because that previous run failed because of the path and that failure may not be taken into account by the check code
[10:01] <Mirv> didrocks: so will I merge that autopilot-gtk branch manually ie you don't need it for debugging whether merger works properly?
[10:02] <Mirv> vila: ok, doing that at /370/
[10:02] <didrocks> Mirv: no, don't bother
[10:02] <didrocks> Mirv: do you have the url?
[10:02] <didrocks> Mirv: that will enable fginther to validate the change
[10:03] <Mirv> didrocks: ok. https://code.launchpad.net/~mathieu-tl/autopilot-gtk/source-format/+merge/196190 - it'll probably fail again soon.
[10:03] <didrocks> ev: FYI, this can delay a little bit what I needed to work on with fginther (but I do think we'll be ready by Monday evening), but production first ^
[10:03] <didrocks> ev: we'll still start together today, just maybe not end
[10:03] <didrocks> Mirv: thanks!
[10:08] <Ursinha> popey, good morning
[10:08] <Ursinha> oops, wrong channel hehe
[10:09] <ev> otp
[10:14] <Mirv> vila: still the same: http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/370/console
[10:28] <vila> Mirv: otp
[10:38] <ev> didrocks: *nods* absolute agreement there
[10:42] <vila> Mirv: yeah, no idea but I get the feeling it's more a cu2d issue than a ci one so unless we find better evidence I won't dig further (also didrocks said don't bother ;)
[10:46] <Mirv> vila: I think he said don't bother to the autopilot-gtk one, not this one :)
[10:47] <Mirv> vila: another problem radeon connectivity broken, I saw it in another one as well in the morning: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-radeon-7750/562/console
[10:48] <Mirv> connectivity or some another problem of course, it's just that the IP address renewal tends to show up when things are halted
[10:49] <Mirv> so that's blocking now, after which if this http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/370/console happens elsewhere too then none of the check jobs will pass
[10:49] <Mirv> vila: feel free to stop that radeon job if you don't need it, but I'll keep it running for now for possible debugging needs. also, I see you're not vanguard anymore so feel free to pass it on :)
[10:54] <vila> Mirv: the X server crashed inside the container on qa-radeon-7750, you should get that log in the artifact once the job finish
[10:55] <Mirv> vila: regarding the test count issue, you're right that it's probably cu2d side although we may not have access rights to fix that manually. it's possible we need to do one publish by looking at the results beneath (if they are green)
[10:55] <Mirv> vila: the job finishing I guess still takes 2 hours or so?
[10:56] <Mirv> anyhow, it seems there's a problem on radeon since the previous problem was on radeon too
[10:56] <vila> Mirv: right, that's my take too
[10:56] <Mirv> previous one was http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-radeon-7750/551/console
[10:57] <asac> any idea what result output format autopkgtests has?
[10:57] <asac> jibel: ?
[10:57] <Mirv> vila: and yes there's a crash http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-radeon-7750/551/artifact/results/Xorg.0.log
[10:57] <seb128> didrocks, do you know if there is going to be an ubuntu-keyboard landing soon? I see it on the landing asks but with no landing slot assigned yet
[10:57] <vila> Mirv: enough for you to ping upstream right ?
[10:58] <Mirv> vila: well I don't think it's much help that "it crashes", more probably the radeon still isn't up to the task and glamor-egl still needs to mature
[10:59] <didrocks> seb128: see my email, I think it will be after the indicators
[10:59] <didrocks> so monday
[10:59] <vila> Mirv: but some jobs succeed so that's good input for upstream to reduce the scope
[10:59] <Mirv> vila: there's (yet) another problem: this error still shows up in the job that runs after an aborted one: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-radeon-7750/563/console - was it on the radar somewhere to investigate?
[11:00] <seb128> didrocks, ok, thanks (what email? the one to -phone earlier? I read it but it didn't have any detail that would let me guess if keyboard would land in 1 day or 1 week ;-)
[11:00] <Mirv> vila: yes some succeed some not. who was the upstream, I think you contacted last time?
[11:00] <vila> mklanhost (wrong spelling probably ;)
[11:01] <didrocks> seb128: not keyboard specifically, but that we are slowly resuming landings
[11:01] <seb128> didrocks, btw I'm asking because libpinyin had a soname change in trusty and is blocked in proposed until ubuntu-keyboard gets rebuilt with the new soname
[11:01] <seb128> it can wait until next week though, no worry
[11:01] <seb128> didrocks, thanks!
[11:04] <Mirv> vila: ok, reporting the radeon problem upstream, we may need to disable using it again until it's fixed
[11:05] <Mirv> vila: so what about that next-job-after-abort-fails-to-bring-up-lxc?
[11:07] <didrocks> seb128: sorry (was in a hangout)
[11:07] <didrocks> seb128: ok, setting it for Monday, thanks for the head's up!
[11:07] <seb128> didrocks, thanks!
[11:08] <didrocks> seb128: in landing plan
[11:08] <vila> Mirv: fixed, the container needs to be stopped, sounds like either an otto bug or a misuse, this should not happen right ?
[11:08] <seb128> didrocks, 'ci ;-)
[11:08] <vila> stopped manually that is
[11:09] <Mirv> vila: it still looks the same: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-radeon-7750/564/console
[11:09] <Mirv> vila: so that should be at least on some list of issues to track down I think
[11:16] <vila> Mirv: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-radeon-7750/566/console
[11:16] <vila> Mirv: are you sure you checked on a job started *after* I stopped the container ?
[13:04] <Mirv> vila: it happens every time when any job is aborted, so the next job that gets started
[13:12] <vila> Mirv: right and the current timeout is 330 mins leading to aborts, I wonder if the solution here is not to 1) reduce the timeout 2) consider that encountering a running container is plainly wrong and have otto stop it instead of emitting that error message
[13:13] <vila> Mirv: otherwise I'd be tempted to say: it blocks if you abort a job ? Don't do that then ! ;)
[13:14] <Mirv> maybe both I'd think, there's not much sense (?) in 330 mins of timeout, but also it'd be nice if it wouldn't break but either wait or stop the container automatically, since it anyway happens for the next one after the breaking one
[13:14] <Mirv> vila: it blocks if I don't abort a job ;)
[13:14] <vila> Mirv: but if you let the timeout expire it doesn't block right ?
[13:15] <vila> Mirv: so something is wrong in how the abort is handled no ?
[13:16] <vila> Mirv: i.e. I agree with 'not much sense (?) in 330 mins of timeout' and 'it'd be nice if it wouldn't break'
[13:16] <vila> Mirv: IIUC, a manual intervention (or two) is required right now, that's wrong :)
[13:17] <Mirv> vila: yeah, in that case the next one doesn't break, as seen in the radeon job from 08:09:08 this morning (the previous one timed out)
[13:17] <Mirv> vila: indeed the abort case should behave more like what happens when it timeouts
[13:17] <vila> jibel: thoughts ^ should I start with filing a bug against otto ?
[13:17] <Mirv> even if it would be a shorter timeout, it would still make sense that it's possible to abort jobs when needed
[13:18] <didrocks> vila: agreed with 2)
[13:19] <didrocks> Mirv: vila: no way to try to catch the abort
[13:19] <didrocks> jenkins SIGKILL
[13:19] <josepht> or 3) have otto clean up containers when it exits abnormally
[13:19] <didrocks> not SIGTERM
[13:19] <jibel> vila, not a bug. The problem is that it runs through sudo and when you abort a job it is the user 'jenkins' that kills the process
[13:19] <didrocks> josepht: see above ^
[13:19] <jibel> vila, and it cannot kill processes it doesn't own
[13:19] <josepht> so the job needs to clean up the container then
[13:20] <vila> jibel: so, otto should not block when it encounters a running container since the only valid use case is that a previous job was aborted ?
[13:20] <jibel> vila, josepht I wrote a script to handle this situation that will kill all the processes started from the top parent process
[13:20] <jibel> vila, yes it should because you cannot share the same physical device twice
[13:21] <vila> jibel: but why not just stopping the container instead of emitting the error message ?
[13:21] <josepht> is the container named the same thing every time?
[13:21] <jibel> vila, because you don't know if it is running on purpose or not, do you?
[13:21] <vila> josepht: doesn't really matter, only a single container should run
[13:22] <jibel> vila, try this http://bazaar.launchpad.net/~otto-dev/otto/trunk/view/head:/jenkins/children_monitor
[13:22] <vila> jibel: see above, it seems the only case when it happens is when the previous job was aborted
[13:22] <jibel> if it is not already there
[13:23] <jibel> vila, there is no link between jobs, a new job doesn't know why a container is already running
[13:23] <jibel> if it has been started manually from the machine or another job is running or whatever
[13:23] <jibel> you cannot just kill what is running
[13:23] <vila> ha, hence that script
[13:24] <vila> jibel: but who should call it and when ?
[13:24] <josepht> can't we use trap to remove the container (from the jenkins execute shell script) on exit?
[13:24] <jibel> in the jenkins build step that escalates privileges with sudo
[13:24] <jibel> josepht, that's basically what this script does
[13:25] <jibel> but it also ensures that all child processes started from this jenkins script are killed
[13:26] <josepht> jibel: okay, sounds good to me
[13:26] <vila> jibel: that would be http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/configure right ? (and friends but let's start with this one)
[13:26] <didrocks> josepht: it's not an exit, it's a SIGKILL (so you can't trap)
[13:26] <jibel> correct
[13:27] <vila> you lost me here, can we or can't we use that script ? (almost no experience with signal handling in shell scripts)
[13:28] <jibel> vila, you can use the script, create a test script to verify
[13:28] <jibel> vila, in a shell build step put this script as the first command, then sudo <forever> and abort
[13:29] <jibel> I'm pretty sure I had an example somewhere
[13:29] <vila> 'sudo kill' is the trick, got it
[13:30] <vila> jibel: but what about didrocks's remark about not being able to catch SIGKILL ? Or is it instead that jenkins will send SIGABRT ?
[13:32] <jibel> vila, just try it
[13:41] <vila> jibel: doesn't work http://q-jenkins.ubuntu-ci:8080/job/otto-test-radeon/
[13:42] <vila> jibel: as in: the container is still running
[13:45] <vila> $HOME/bin/children_monitor & at the beginning of the execute shell build step '&' required otherwise the script doesn't background itself. But if I read it correctly the script monitors the children of its parent
[13:45] <jibel> vila, okay, I'll check in a moment
[13:47] <fginther> morning
[13:48] <didrocks> hey fginther!
[13:48] <vila> jibel: otherwise, I'm tempted to ignore the use cases 'started manually from the machine' (de-provision first) and 'another job is running' (it shouldn't, and both will break if it happens)
[13:52] <vila> jibel: or rather, if otto stops the running container in the 'another job is running' case, the later job will be able to run
[14:04] <fginther> didrocks, Mirv what's the state of the cu2d jobs building?
[14:05] <didrocks> fginther: everything's fine, apart from the autopilot-gtk MP due to the dpkg change
[14:05] <Mirv> fginther: building pretty good, check jobs not so, plus the merger problem
[14:05] <didrocks> yeah, check jobs is another story
[14:06] <didrocks> Mirv: you are working with vila on it, right?
[14:06] <Mirv> fginther: mostly radeon AP would probably need disabling since it crashes again frequently (glamor-egl, filed a bug)
[14:06] <josepht> vila, jibel: this script http://paste.ubuntu.com/6458664/ results in this output http://paste.ubuntu.com/6458662/ when aborted
[14:06] <Mirv> didrocks: well vila is working on the long annoying detail of the next job breaking after abort, otherwise I've just suggested the disabling of radeon but not done it
[14:07] <didrocks> ok
[14:07] <Mirv> didrocks: I updated cu2d-config for those that need it, so the remaining problems are mostly radeon breaking, and then the problem seen in oif that the number of tests has changed so we may need to publish once (I don't know how to fix that manually)
[14:07] <didrocks> Mirv: 2 runs of the check job should fix it
[14:08] <Mirv> didrocks: it did not, for some reason
[14:08] <didrocks> hum, interesting
[14:08] <Mirv> didrocks: http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/OIF/job/cu2d-oif-head-2.2check/370/console + 369
[14:08] <Mirv> hmm, looking at it again, maybe it's thrice this time? ;)
[14:08] <didrocks> Mirv: ah did you retry?
[14:09] <didrocks> Mirv: in fact, it stops at the first issue
[14:09] <didrocks> so it doesn't go to the next machine
[14:09] <didrocks> as you see here, it went to the next machine
[14:09] <didrocks> failed first on nvidia
[14:09] <Mirv> didrocks: yes, so it didn't stop on nvidia anymore
[14:09] <didrocks> second run, nvidia passed and intel stopped
[14:09] <didrocks> right
[14:09] <Mirv> ran again. so that's probably solved by rerunning, so the remaining problem is deciding on disabling radeon
[14:09] <vila> josepht: so both EXIT and TERM are received and handled ?
[14:10] <josepht> vila: yes, it doesn't work with only EXIT or TERM trapped
[14:10] <vila> josepht: jibel script traps both
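A minimal sketch of the pattern josepht's test demonstrates: trap both TERM and EXIT so the container teardown runs whether the job ends normally or is aborted (trapping only one of them is not enough, per his test). `stop_container` is a hypothetical stand-in for the real otto cleanup:

```shell
#!/bin/sh
# Cleanup fires on normal exit and on SIGTERM; SIGKILL, of course,
# can never be trapped.
stop_container() {
    echo "stopping container"
}
trap 'stop_container' EXIT TERM
echo "job running"
# ... the real build step would run the tests here ...
```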
[14:10] <didrocks> josepht: you are the vanguard and looking at radeon?
[14:11] <josepht> didrocks: I'm helping :)
[14:11] <didrocks> great ;)
[14:11] <vila> josepht: maybe I should indeed let you handle that with jibel, I still have a pile of incident backlog to process
[14:11] <josepht> vila: ack
[14:12] <vila> josepht: and that script of yours should end up in some jenkins test suite, useful knowledge to share that ;)
[14:12] <Mirv> josepht: so I filed bug #1253974 for the glamor-egl crash, it may need further debugging / backtracing on the machine directly
[14:12] <fginther> Mirv, do we still have a problem with building autopilot-gtk to fix?
[14:12] <Mirv> fginther: yes, check that with didier
[14:13] <Mirv> (https://code.launchpad.net/~mathieu-tl/autopilot-gtk/source-format/+merge/196190 build with bzr bd and would build in daily-build, but not in merger)
[14:13] <fginther> Mirv, thanks
[14:14] <didrocks> fginther: we can hangout if you don't mind
[14:14] <didrocks> working on that + the little "rest" :)
[14:14] <fginther> didrocks, ack, I'll set it up
[14:15] <Mirv> josepht, vila: for radeon, if it seems that there's nothing that can be done at the moment about the constant autopilot crashes, apply http://pastebin.ubuntu.com/6458715/ to cu2d-config and deploy all stacks (when they're stopped)
[14:16] <cyphermox> didrocks: I think we're good to re-enable the build_all jobs, yo
[14:17] <josepht> Mirv: ack
[14:18] <didrocks> cyphermox: there is the radeon issue ^
[14:18] <cyphermox> radeon issue...
[14:22] <vila> Mirv: huh ? Why not a regular MP ?
[14:27] <vila> Mirv: 'long annoying detail of the next job breaking after abort' as in this problem has been known ? Since when ?
[14:30] <didrocks> cyphermox: btw, thinking about it
[14:30] <didrocks> we need rather version 1
[14:30] <didrocks> think about that case:
[14:30] <didrocks> I'm upstream
[14:30] <didrocks> I add a patch
[14:31] <didrocks> I want to build the package
[14:31] <didrocks> bzr bd -> fail
[14:31] <cyphermox> why?
[14:31] <didrocks> because of the older orig.tar.gz
[14:31] <didrocks> and so inline patches for dpkg
[14:31] <cyphermox> *I'm not sure I follow, it should still work
[14:31] <cyphermox> but hey, I don't care what version it is in the end ;)
[14:32] <didrocks> cyphermox: you can try yourself
[14:32] <didrocks> bzr branch your autopilot-gtk branch
[14:32] <didrocks> bzr bd -S
[14:32] <didrocks> then add a fix in the code
[14:32] <didrocks> retry bzr bd -S
[14:32] <didrocks> -> it will fail
[14:32] <cyphermox> well yeah but cu2d bumps the versions
[14:33] <didrocks> right, but I'm speaking about upstream testing their package
[14:33] <cyphermox> it's not an issue
[14:33] <didrocks> and so, it's an issue
[14:33] <didrocks> they will have to bump themselves to test
[14:33] <cyphermox> heh
[14:33] <didrocks> something that they don't know/just making it harder :p
[14:33] <didrocks> that's why I picked v1
[14:33] <cyphermox> alright
[14:33] <didrocks> btw, every time I have to think back to why I forced v1
[14:34] <didrocks> I should write that down somewhere
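For the record, the difference didrocks keeps re-deriving comes down to debian/source/format: under "3.0 (quilt)" dpkg-source aborts when upstream files were edited without going through quilt, while "1.0" folds such edits into an inline diff, which is what lets an upstream rerun bzr bd -S after touching the code. A sketch of forcing v1 (PKG_DIR is a hypothetical parameter):

```shell
#!/bin/sh
# Force source format 1.0 so local upstream edits become an inline
# diff instead of aborting the source build.
PKG_DIR="${PKG_DIR:-.}"
mkdir -p "$PKG_DIR/debian/source"
echo "1.0" > "$PKG_DIR/debian/source/format"
```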
[14:35] <fginther> didrocks, still failed: http://s-jenkins.ubuntu-ci:8080/job/autopilot-gtk-trusty-amd64-ci/8/console
[14:36] <didrocks> fginther: the recipe doesn't use bzr bd it seems
[14:36] <didrocks> fginther: not sure how you can force it to use it
[14:36] <didrocks> I don't know recipes ;)
[14:36] <fginther> didrocks, perhaps it's time to stop using recipes...
[14:37] <didrocks> fginther: I do agree ;)
[14:37] <didrocks> so, let's force v1 for now, you won't need that patch normally
[14:37] <didrocks> cyphermox: please oh please ^
[14:38] <cyphermox> what do you mean recipes don't use bzr bd?
[14:38] <cyphermox> fginther: using bzr builder for the ci stuff?
[14:38] <didrocks> cyphermox: the upstream mergers are using recipes
[14:39] <didrocks> and if you force version to be like <something>-0ubuntu1
[14:39] <didrocks> it seems recipes are ignoring the "split" mode
[14:39] <didrocks> and don't create the tarball automatically
[14:39] <cyphermox> hmm
[14:39] <cyphermox> I'm pretty sure that's configurable :)
[14:39] <fginther> cyphermox, http://paste.ubuntu.com/6458847/
[14:39] <jibel> vila, can I use your test job and the machine associated with it?
[14:40] <didrocks> cyphermox: well, we need to fallback to v1 anyway for upstream, but if you want to figure that one out…
[14:52] <vila> jibel: you should, check with Mirv for no running stacks and josepht as vanguard
[14:55] <dobey> didrocks, lool: i need to fix some dependencies in a -dev package for ubuntuone-credentials, do i need a landing ask for that? the -dev isn't on the image, but other binaries from the source are of course.
[14:55] <didrocks> dobey: if that's the only change, it's fine (and you ensure what builds against this -dev can still build)
[14:55] <cyphermox> didrocks: https://code.launchpad.net/~mathieu-tl/autopilot-gtk/source-format/+merge/196297
[14:56] <didrocks> cyphermox: ok, let's wait for the CI machinery to give feedback. fginther ^
[14:57] <dobey> didrocks: yeah, they should still build. it's just removing libsecret-1-dev (which we don't use any more anyway), and adding the correct -dev depends for libsignon/libaccounts/qtbase5
[15:03] <didrocks> dobey: go ahead then ;)
[15:03] <fginther> cyphermox, didrocks, it's building http://s-jenkins.ubuntu-ci:8080/job/autopilot-gtk-trusty-amd64-ci/9/console
[15:04] <dobey> didrocks: am doing :)
[15:06] <josepht> Mirv: is qa-radeon-7750 free for me to do some job testing on?
[15:31] <cwayne> hey guys, any chance to get ubuntu-keyboard landed in an image today?  it's in the landing asks now
[16:07] <cwayne> asac, ^
[17:03] <didrocks> kenvandine: robru: hey! meeting :)
[17:04] <kenvandine> didrocks, 2m, just got out of another meeting
[17:13] <plars> Saviq: around?
[17:18] <robru> didrocks, hummm, it's not on my calendar!
[17:18] <didrocks> robru: can you grab it from mine?
[17:18] <didrocks> (all occurrences)
[17:19] <robru> didrocks, i'm not sure how to find your calendar
[17:19] <didrocks> robru: you were invited
[17:19] <didrocks> robru: you said "no"
[17:20] <robru> didrocks, strange. i thought you cancelled today's meeting. i see them pick up again next week
[17:20] <didrocks> robru: only the 3 days of vUDS as told during the hangouts
[17:20] <didrocks> ;)
[17:20] <didrocks> robru: anyway, kenvandine has the details
[17:20] <robru> didrocks, ok
[17:20] <didrocks> and there are the spreadsheets :)
[17:21] <kenvandine> robru, can you build the keyboard?  (if it hasn't since the fix was merged)
[17:21] <robru> kenvandine, sure
[17:21] <didrocks> omg robru builds A keyboard!
[17:21] <kenvandine> i haven't looked yet, catching up on my notes from 3 hours of meetings
[17:24] <robru> kenvandine, ppa has latest commit to lp:ubuntu-keyboard. unless there's an MP that's about to land i don't think it needs a build.
[17:30] <didrocks> robru: we don't care about the latest of the latest, you can go ahead :)
[17:31] <robru> didrocks, go ahead and test? ok
[17:31] <didrocks> yep ;)
[17:34] <plars> kgunn: do you have someone who can look at the unity crashes we're seeing?
[17:40] <kgunn> plars: i'm pretty sure Saviq is looking already (altho...are we talking about the same crashes ?:)
[17:40] <kgunn> bug ?
[17:40] <plars> kgunn: not sure there is one yet, but just about every ap test seems to be getting a .crash file in the image tests
[17:58] <Saviq> plars, if you can get me the .crash file (assuming it's not truncated), I can see if it's one that's on our radar
[17:59] <Saviq> plars, I believe you're seeing one on shutdown (as the tests pass, AFAICS?)
[18:00] <Saviq> plars, some of them will be gone with Qt 5.2 that's coming up soon
[18:02] <plars> Saviq: shouldn't be truncated, here's one: https://jenkins.qa.ubuntu.com/job/trusty-touch-mako-smoke-unity8-autopilot/39/artifact/clientlogs/_usr_bin_unity8.32011.crash/*view*/
[18:03] <Saviq> that's a recent image?
[18:03] <plars> Saviq: yes
[18:03] <plars> Saviq: and yes, the test pass rate doesn't seem to be affected by it
[18:04] <Saviq> plars, ok, trying to retrace
[18:29] <Kaleo> hi
[18:29] <Kaleo> approved MRs don't seem to be landing: https://code.launchpad.net/ubuntu-ui-toolkit/+activereviews
[18:35] <cjohnston> Kaleo: a specific exampleplease?
[18:36] <cjohnston> nm.. found one
[18:36] <dobey> doh. just missed didrocks i guess
[18:39] <cjohnston> Kaleo: there are a bunch of jobs pending
[18:39] <cjohnston> looks like waiting on hardware to be available
[18:47] <Kaleo> cjohnston, is it going to be resolved with time?
[18:48] <cjohnston> Kaleo: should
[18:48] <Kaleo> cjohnston, thank you
[18:49] <cjohnston> np
[19:00] <Saviq> plars, unfortunately this doesn't retrace at all :/
[19:00] <plars> Saviq: :(
[19:01] <Saviq> plars, i.e. https://bugs.launchpad.net/ubuntu/+source/unity8/+bug/1253666
[19:01] <Saviq> plars, so, this is something that happens on the android side I'm afraid
[19:02] <Saviq> plars, and gonna be pretty tricky to hunt down
[19:02] <plars> ogra_: do you know if something changed on the android side in the past few days that could affect unity8? I don't know how much insight we have into those changes
[19:13] <ogra_> plars, well, android is a package ... it usually has changelog entries and all on the trusty-changes ML or on launchpad etc
[19:14] <ogra_> plars, but to my knowledge the changes were either completely emulator related or packaging changes over the last days
[19:14] <plars> ogra_: Saviq was thinking the new unity8 crashes we're seeing could be related to some change in android
[19:15] <ogra_> there were none that could affect the image ... all emulator or packaging stuff
[19:27] <xnox> well if it fails to retrace it's hard to tell, is it actually the unity8 from the archive or custom compiled elsewhere?
[20:52] <fginther> cyphermox, does this describe the issue you found trying to update autopilot-gtk: https://bugs.launchpad.net/pbuilderjenkins/+bug/1254162 ?
[21:15] <cjohnston> Kaleo: looks like its down to two jobs in the queue
[21:17] <Kaleo> cjohnston, thanks!
[21:17] <Kaleo> cjohnston, 4 actually :)
[21:18] <cjohnston> Kaleo: ? I only see two in jenkins
[21:19] <Kaleo> cjohnston, https://code.launchpad.net/ubuntu-ui-toolkit/+activereviews
[21:19] <cjohnston> https://code.launchpad.net/~elopio/ubuntu-ui-toolkit/fix1244615-autopilot_qtc/+merge/192713 hasn't been approved, nor has https://code.launchpad.net/~fboucault/ubuntu-ui-toolkit/icon_api_sanitization/+merge/194253
[21:19] <Kaleo> cjohnston, they have been top approved
[21:20] <cjohnston> they have to be 'comment' approved as well AFAIK
[21:21] <Kaleo> cjohnston, never been a requirement
[21:22] <Kaleo> cjohnston, unless it's changed
[22:35] <cyphermox> fginther: I couldn't put it better ;)
[22:45] <fginther> cyphermox, thanks