[01:55] <robru> i.... what?
[01:57] <robru> but platform-api is in the ppa... it built!
[02:11] <robru> well the train's gone and shit itself good this time
[02:15] <robru> hmmmmmm
[02:16] <robru> AlbertA: Ok I think that worked. there seems to be some kind of bug where the train can't handle the idea that a silo is being published for the second time. I dug in and removed the evidence of the initial publication and now it looks like it's working. I gotta run out but I'll be back in a few hours and double check that this published for real.
[02:17] <robru> oh that's a good sign
[02:45] <AlbertA> robru: thanks dude!
[05:22] <Mirv> morning
[05:24] <robru> AlbertA: yep it definitely published. you're welcome. looks like it's still stuck in proposed due to the binary package from the previous build though, I think if we get an archive admin (cjwatson?) to delete the old arm64 build it should be able to get through this time.
[05:25] <robru> Mirv: heya
[05:28] <robru> Mirv: found a bug in the train, but it's a rare corner case. if you rebuild packages after they've been published, it can't publish them again. You have to go delete ~/silos/ubuntu/landing-XXX/*.project_* manually in order to publish. This regression is probably from some changes I made weeks ago, so it'll be difficult to just revert to a working state. Not
[05:28] <robru> sure how long it'll take me to get a fix out (definitely won't be tonight). Hopefully all your publishings succeed on the first try, but if anything gets caught in proposed you'll have to be aware of this.
[05:41] <Mirv> robru: hmmkay
[05:41] <Mirv> robru: it's quite rare indeed
[05:42] <Mirv> I don't remember myself ever doing that
[05:42] <Mirv> unless of course it happens also when publish fails, then build/watch_only and publish again, which is more common
[05:42] <robru> Mirv: I've done it a few times over the years. it definitely used to be possible, as things used to get stuck in -proposed more often in the past
[05:42] <Mirv> oh, yes, those rare cases. I remember a couple.
[05:43] <robru> not sure if sarcasm ;-)
[05:48] <Mirv> haha :D no, they are rare.
[07:45] <bzoltan> Mirv: My question on #ci did not provoke anybody to respond .. who do you know from CI who I can direct ping?
[08:03] <Mirv> bzoltan: I tend to point to fginther for solving CI issues but that's unfair since he is simply too good in achieving/fixing things :)
[08:04] <Mirv> bzoltan: so I don't kind of have clear ideas who to ping / how they delegate tasks
[08:04] <Mirv> bzoltan: they should respond to cihelp, but I haven't gotten any response when I've pinged that nick in my morning hours
[08:04] <Mirv> I mean, it does not look like they really properly backlog the requests
[08:04] <bzoltan> Mirv:  with "how" i can  help... i do not know who is at CI who is actually really responsible to keep the machine running.
[08:05] <Mirv> which is probably partially true for trainguards too - when the ping is long enough time ago, one may assume that "someone else" took care of it
[08:05] <robru> bzoltan: I think a lot of CI people are off at some kind of sprint/training right now.
[08:06] <robru> bzoltan: if your issue is with s-jenkins I can poke at it but I don't have a lot of expertise there.
[08:07] <bzoltan> robru: Our staging landings are totally blocked... we have 40+ MRs blocked by a CI problem. According to these logs - https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-runner-mako/6179/console - the build rootfs does not seem to be pure vivid.
[08:07] <robru> hmmm
[08:07] <vila> bzoltan, robru, Mirv : fginther *is* working on it and tracks it in asana
[08:07] <bzoltan> robru: It would be great if you could help. But of course I can wait if it is beyond your reach
[08:08] <bzoltan> vila: ohh, really? So fginther knows about this issue... good to know.
[08:09] <vila> well, AFAIK he is, he did update the ticket 4 hours ago
[08:09] <robru> bzoltan: yeah that log doesn't have enough info to go off. when you see 'but it isn't going to be installed' it usually means there's a problem with some other dep that isn't even mentioned. You have to reproduce it locally and then try forcibly installing everything it complains about until it finally tells you the real problem. real frustrating... and it's
[08:09] <robru> midnight here ;-) I'll let fginther lead that one
[08:11] <vila> bzoltan: in any case, pinging cihelp is the way to go
[08:15] <vila> bzoltan: hold on, it's on a mako ? Then it's probably plars, there have been issues about installing images related to a pretty convoluted set of blockers
[08:15] <bzoltan> vila:  the logs say it is on mako
[08:16] <vila> bzoltan: that's where I am yes. No idea if it's related to the issues I mentioned above though
[08:21] <bzoltan> vila: OK, thanks... the other option is to disable AP tests for the UITK for the time this issue is fixed.
[08:21] <bzoltan> Mirv: the silo16 is good to go
[08:22] <vila> bzoltan: meh, the job you're pointing above was "Started by upstream project "generic-deb-autopilot-utopic-touch" build number 6616" *utopic* not vivid, is that expected ?
[08:22] <Mirv> bzoltan: ok!
[08:23] <Mirv> vila: I'm sure it's not expected, everything should be running vivid when it comes to merging vivid branches
[08:23] <vila> https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-utopic-touch/ doesn't look like a job that succeeded recently %-/
[08:25] <vila> bzoltan: Do you know when this started to fail ? (An url for a successful run will help)
[08:26] <vila> bzoltan: or at least the project name (generic jobs are a pain :-/)
[08:26] <bzoltan> vila:  it is the lp:ubuntu-ui-toolkit/staging branch where we target our MRs
[08:29] <vila> bzoltan: sounds like http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-check/ then, last success on Nov 10 ?
[08:33] <vila> bzoltan: http://s-jenkins.ubuntu-ci:8080/job/generic-deb-autopilot-runner-mako/6104/consoleFull looks vivid to me... (despite being "Started by upstream project "generic-deb-autopilot-utopic-touch" build number 6527")...
[08:42] <vila> bzoltan: so no MPs have landed since Nov 10 ? Or am I looking at the wrong place ?
[08:48] <vila> bzoltan: according to the branch history, the last commit was done yesterday... revno 1339,  timestamp: Fri 2014-11-21 18:12:18 +0200, message: Sync with trunk, is from you, how did you land it ?
[08:52] <bzoltan> vila: this is our staging https://code.launchpad.net/~ubuntu-sdk-team/ubuntu-ui-toolkit/staging
[08:53] <bzoltan> vila: some MRs do land occasionally .. But we gave up tracking down the issues when 95% of the jenkins jobs fail on some cryptic crap...
[08:56] <vila> bzoltan: as far as I can find my way into that cryptic thing you're mentioning above, it seems to me http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-utopic-amd64-autolanding/635/consoleFull did land revno 1341 12 hours ago
[08:57] <vila> i.e. gating is done for utopic on armhf, amd64 and i386 : http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-autolanding/691/console
[09:00] <vila> http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-autolanding/691/parameters/? says utopic too (parent of the job above)
[09:05] <vila> Mirv: correct me if I'm wrong, those gating rules come from cu2d-config right ? ubuntu-ui-toolkit is defined in stacks/head/sdk.cfg AFAICS, am I looking at the wrong place again ?
[09:12] <Mirv> vila: I don't unfortunately have a clue whether cu2d-config plays a role still. I kind of assumed it was not used at all anymore, but I might be wrong... if it is in use, that would explain something!
[09:13] <vila> crickets
[09:13] <Mirv> vila: looking at the changelog, it does seem to be in use still
[09:13] <Mirv> vila: I think we can fix it ourselves if that's the case, let me give you a branch to approve..
[09:14] <vila> Mirv: thanks, that's useful feedback, I'll try to get feedback from fginther (new ticket created in asana so it is tracked)
[09:14] <Mirv> vila: oh, actually no branch that I could create, it seems to be all vivid for ubuntu-ui-toolkit from what I can see..
[09:15] <Mirv> vila: thanks for creating a ticket
[09:15] <vila> Mirv: ha, great, gee, so you end up with the same understanding ? I.e. it should be vivid but it's still utopic ?
[09:16] <Mirv> vila: yes, looks like it
[09:17] <vila> Mirv: pfew. Thanks !
[09:24] <brendand> ogra_, no psivaa-holiday so who can help us from ci?
[09:25] <brendand> ogra_, seems security and sdk suites didn't run for some reason
[09:25] <psivaa-holiday> brendand: sorry i am not on holidays, this is irccloud madness
[09:25] <ogra_> psivaa, too late ... now you have to take off
[09:25] <brendand> psivaa, ok :)
[09:26] <psivaa> :D
[09:26] <ogra_> You have not informed bzr of your Launchpad ID, and you must do this to
[09:26] <ogra_> write to Launchpad or access private data.  See "bzr help launchpad-login".
[09:26] <ogra_> thats the issue it seems
[09:27] <psivaa> brendand: ogra_: it is lp temp issue '503', let me run those again
[09:27] <sil2100> ;)
[09:46] <Mirv> sil2100: we've got vivid autopkgtests failing a lot, not sure what could be done about it
[09:46] <sil2100> ogra_: oh, btw.! How are those adbd changes going?
[09:47] <sil2100> Mirv: for which packages?
[09:47] <Mirv> sil2100: about everything that runs some :) for example see http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#ubuntu-download-manager
[09:47] <sil2100> uh
[09:47] <Mirv> I just don't see any clear cause of it
[09:47] <ogra_> sil2100, rolled back ... they break the lab which is stuck on an old UDF until we can land a one line change in krillins recovery to touch the override file when using --developer-mode
[09:48] <ogra_> needs to wait til next week ,when we can land stuff in RTM again
[09:48] <Mirv> sil2100: but since it's there in stuff like binutils too, I'd guess foundations people would be aware and might know where to find the actual culprit
[09:48] <ogra_> (assuming you mean the new lock-screen-check feature)
[09:57] <Mirv> sil2100: correction, I see it has been already discussed 2h ago, but more people are needed
[09:57] <Mirv> so let's assume the people will be found and vivid autopkg test issues fixed by eod or so
[10:12] <bzoltan> Mirv: could you please gently kick the UITK package in the proposed pocket. It thinks that it is regressing on ubuntuone-credentials, which is BS
[10:13] <sil2100> bzoltan: yeah... see Mirv's messages above ^
[10:13] <sil2100> bzoltan: it seems autopkgtests are failing for vivid now and we're waiting for more people to help
[10:13] <sil2100> Not much we can do here :/
[10:13] <ogra_> did someone ping pitti ?
[10:14] <bzoltan> sil2100: Ahh... sorry I have not read the logs
[10:14] <Laney> That one is fixed: https://jenkins.qa.ubuntu.com/job/vivid-adt-ubuntuone-credentials/lastBuild/
[10:24] <Mirv> ogra_: yes, or he himself pinged mvo and mvo waits for barry :)
[10:24] <Mirv> Laney: great, it looks like it might fix uitk migration, although the system-image problem hits eg. http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#ubuntu-download-manager
[10:25] <Laney> There are other failures, but I think that they are independent
[10:25] <Laney> e.g. the kate ones seem like a packaging error
[10:47] <cjwatson> robru,AlbertA: Yes, removing mir-graphics-drivers-android/arm64 from vivid-proposed was the right answer here.  (This was only a problem because there was a previous unmigrated version in -proposed.)  I've done that now.
[11:42] <pstolowski> trainguards, hey, may I ask for reconfiguring landing-013 if needs be (one extra MP added to unity-scopes-shell), and rebuilding of *just* unity-scopes-shell (it takes ages to rebuild entire silo again)?
[11:44] <sil2100> pstolowski: k
[11:45] <sil2100> pstolowski: in case of adding MPs to the list of already configured projects you can reconfigure yourself actually, but let me do that this time
[11:47] <sil2100> pstolowski: done
[11:47] <sil2100> pstolowski: it's building
[11:47] <pstolowski> sil2100, thanks; is rebuilding of just single project possible?
[11:48] <sil2100> pstolowski: yes, when you press the build button, you need to select the project you want to rebuild in the PACKAGES_TO_REBUILD
[11:48] <sil2100> pstolowski: but in theory when you just add one merge to the silo, CI Train by default will only rebuild the project that changed (e.g. had new merges)
[11:48] <sil2100> But to be perfectly safe you can include the name in PACKAGES_TO_REBUILD ;)
[11:49] <pstolowski> sil2100, awesome, thanks!
[11:49] <sil2100> yw!
[11:49] <pstolowski> pete-woods, ^ you may want to know that as well :)
[12:01] <pete-woods> :)
[12:43] <olli> hiho
[12:43] <olli> ogra_, sil2100, jibel, how are things looking?
[12:43] <olli> are we still playing audio on the latest build ;)
[12:44] <ogra_> olli, we have some more info about the issue
[12:44] <ogra_> indeed we do :P
[12:45] <ogra_> it is still open whether we will have a fix ... but if there is a silo ready by tonight we could still get it in ...
[12:45] <ogra_> if not, the agreement was to not do anything and ship 169
[12:46] <olli> sounds good, thx for the update
[12:52] <sil2100> olli: still nothing concrete though ;)
[12:55]  * ogra_ wouldnt say that ... we knoe there is a long running connection which cgproxy isnt designed for 
[12:55] <ogra_> *know
[12:55] <ogra_> and that some process sends 13 times the same request over this connection
[12:55] <ogra_> which shouldnt happen
[12:55] <ogra_> we just need someone to hit the issue again and collect the requested data and we should know more
[12:56] <ogra_> GRRRR !
[12:56]  * ogra_ hates adbd 
[12:56] <ogra_> #define open ___xxx_open
[12:56] <ogra_> #define write ___xxx_write
[12:57] <ogra_> lovely ... aint it ?
[12:59] <barry> Mirv: that looks like typical timeout errors.  you just have to retry the build
[12:59] <jibel> ogra_, I attached the data this morning t 1394919, is there anything else to provide?
[12:59] <jibel> *to
[12:59] <ogra_> jibel, nope, lets wait for stgraber
[12:59] <cwayne> heyhey
[12:59] <cwayne> freeze is tomorrow right
[13:01] <davmor2> cwayne: depends if the fix for cgmanager arrives there will be another spin tomorrow otherwise it is 169 which is already released, why?
[13:02] <cwayne> davmor2: had a bugfix for https://bugs.launchpad.net/hanloon/+bug/1395767
[13:02] <ogra_> cwayne, no more landings unless you get special approval from olli, victorp or pmcgowan
[13:03] <cwayne> ack
[13:03] <pmcgowan> that is indeed a bad one, see that all the time
[13:05] <victorp> cwayne, you will need a click which only has that change in from the current today
[13:05] <victorp> seems like a lot of stuff has gone in for testing
[13:05] <victorp> so 1st prepare a branch for that
[13:05] <victorp> then pmcgowan I think let it in if we end up rebuilding today, but not sure we should rebuild just for that one
[13:06] <olli> yeah
[13:06] <Mirv> barry: I already reran it twice
[13:06] <ogra_> right, i would put that one in the langpack category
[13:06] <Mirv> so jobs #3, 4 and 5 failed within 12h
[13:06] <cwayne> i'm ok if we do it OTA, just thought the freeze was on Wednesdays now (oops!)
[13:06] <ogra_> if we rebuild and it is available it can go in
[13:06] <pmcgowan> ok with me
[13:07] <cwayne> ok, ill get a branch with just that fix and make a click with it
[13:07] <cwayne> and go from there
[13:12] <barry> Mirv: there are times when i have to re-run it 2-3 times to get a clean build.  if it's failing with timeout errors more often than that, then there could be a regression in udm, since si hasn't changed in vivid (there's a newer version in rtm, but nothing new in vivid yet).  is this with mandel's latest udm upload?
[13:13] <mandel> barry, it's with the latest, but did you see the errors? it complains about dry runs..
[13:13] <barry> mandel: look higher up
[13:14] <barry> i am testing it locally
[13:15] <mandel> ack
[13:20]  * sil2100 lunch o/
[14:24] <kenvandine> fginther, jgdx said you were helping him with some issues with the settings tests run on otto last week, what's the status of that?
[14:26] <fginther> kenvandine, the test environment was updated and the tests were passing again. Have they regressed yet again?
[14:26] <kenvandine> yeah, tedg has been trying to get CI passes and it's blowing chunks
[14:27] <kenvandine> https://code.launchpad.net/~ted/ubuntu-system-settings/silent-mode-trunk/+merge/241709
[14:27] <kenvandine> in otto there are crashes and everything fails
[14:28] <kenvandine> fginther, but there is one test failure in generic-deb-autopilot-runner-vivid-mako
[14:28] <kenvandine> which is a traceback that looks like something i see all over the otto job logs
[14:28] <kenvandine> but they aren't all failing, just one
[14:29] <kenvandine> the otto failures are significant enough that i would say we should just land his branch, if the other tests pass
[14:29] <kenvandine> but the other job has 1 failure in the same panel as his change
[14:29] <kenvandine> so i'm reluctant to land it
[14:29] <kenvandine> but the traceback looks like an unrelated problem
[14:29] <kenvandine> and... the tests pass locally on mako
[14:30] <fginther> kenvandine, we can update the otto environment and try again.
[14:31] <fginther> kenvandine, although the mako testing is a lot more reliable. It's doing the same testing as the smoke testing runs, with the addition of the MP packages on top of the latest image
[14:31] <kenvandine> fginther, do those tracebacks look like something you saw last week?
[14:31] <kenvandine> right, which is why i hesitate to land
[14:32] <fginther> kenvandine, I'll look
[14:32] <kenvandine> thx
[14:34] <fginther> bzoltan, the issue with the uitk staging branch running mako tests on non-vivid has been resolved. A test run using trunk is now passing: https://jenkins.qa.ubuntu.com/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-ci/1268/
[14:37] <kenvandine> tedg, i went ahead and prepared a silo for your silent-mode branch, but won't publish anything until i have a better sense of the CI failure
[14:37] <kenvandine> fginther, ^^
[14:37] <tedg> kenvandine, K, cool!
[14:38] <kenvandine> tedg, don't get too excited, i'm pretty worried about that mako failure
[14:38] <kenvandine> i don't want to break smoke testing
[14:39] <kenvandine> tedg, but i'm pretty suspicious that it's something outside of settings... we'll need fginther's magic there :)
[14:39] <fginther> kenvandine, should have a retest started in a few minutes
[14:40] <kenvandine> thx
[14:40] <kenvandine> we'll get silo testing started soon, tedg has been trying to land this branch for weeks :/
[15:02] <bzoltan> fginther: \o/ thank you a bunch
[15:06] <AlbertA> trainguards: silo 009 is still stuck in migration, any ideas why?
[15:06] <sil2100> AlbertA: let me take a look at that
[15:07] <cjwatson>     * amd64: mir-graphics-drivers-desktop, ubuntu-touch
[15:07] <cjwatson>     * arm64: mir-graphics-drivers-desktop
[15:07] <cjwatson>     * armhf: mir-graphics-drivers-desktop, ubuntu-touch
[15:07] <cjwatson>     * i386: mir-graphics-drivers-desktop, ubuntu-touch
[15:07] <cjwatson> It apparently makes those packages uninstallable
[15:09] <cjwatson> AlbertA,sil2100: looks like a hardcoded and now-incorrect dependency in mir-graphics-drivers-desktop on libmirplatform3driver-mesa rather than libmirplatform4driver-mesa
[15:10] <AlbertA> cjwatson: oh yeah!
[15:10] <sil2100> Yeah, saw this in update_output.txt but didn't check the sources in time ;)
[15:11] <sil2100> cjwatson: thanks!
[15:11] <kenvandine> tedg, fginther: vivid smoke testing for settings is 100% currently, so this must be somehow related to tedg's branch... i just can't seem to see how
[15:11] <cjwatson> Would be nice to unhardcode that, since it should be entirely possible to generate that dependency on the fly
[15:11] <kenvandine> fginther, is there any difference in how the smoke testing is run?
[15:11] <cjwatson> (If necessary, via a manual substvar)
[15:11] <sil2100> AlbertA: is it hardcoded in the end?
[15:11] <AlbertA> cjwatson: how should I do that?
[15:12] <AlbertA> cjwatson: sil2100: I believe the intention we had was to use those meta packages so that they could be listed in the seeds
[15:12] <cjwatson> AlbertA: few minutes please
[15:12] <AlbertA> and the alternatives wouldn't have to be changed every release
[15:15] <cjwatson> AlbertA: Something like http://paste.ubuntu.com/9233922/ should do it
[15:15] <cjwatson> AlbertA: You still have to hardcode the binary package names, but that should at least somewhat reduce the potential for error
[15:15] <cjwatson> (binary package names> by which I mean the contents of Package fields)
[15:16] <AlbertA> cjwatson: cool thanks!
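cjwatson's paste is not preserved in the log, but the manual-substvar approach he describes would look roughly like this. A sketch only, assuming the versioned driver package name can be read back out of debian/control at build time; the substvar name and sed pattern are illustrative, not the actual Mir packaging.

```
# debian/control (sketch): depend through a substvar instead of hardcoding
# libmirplatform3driver-mesa
Package: mir-graphics-drivers-desktop
Depends: ${mir:platform-driver}, ${misc:Depends}

# debian/rules (sketch): compute the versioned name on the fly and feed it
# to dh_gencontrol as a substvar
MESA_DRIVER := $(shell sed -n 's/^Package: \(libmirplatform[0-9]*driver-mesa\)$$/\1/p' debian/control)
override_dh_gencontrol:
	dh_gencontrol -- -Vmir:platform-driver=$(MESA_DRIVER)
```

The binary package name is still written out once (in its own Package stanza), but the dependency can no longer drift out of sync with it when the ABI number bumps.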
[15:47] <Saviq> trainguards ↑ please :)
[16:19] <sil2100> ogra_: heeeeey, can I go for practice today? Can I? Caaan I?
[16:19] <ogra_> hmm
[16:19] <sil2100> ogra_: (will you lead the meeting today ;) ? )
[16:19]  * ogra_ ponders
[16:20] <sil2100> ogra_: pliiiiiz
[16:20] <ogra_> lol. ok ok
[16:20] <ogra_> :)
[16:20] <sil2100> Thanks ;)
[16:21] <sil2100> Anyway, I guess we only have the cgmanager thing to discuss... oh, and the sanity tests for #169 still running
[16:21] <ogra_> yep
[16:21] <ogra_> and checking the two missing smoke results
[16:23] <sil2100> I see management is leaning towards using #169 for the GM image anyway - and indeed I think we won't be able to get a fix till EOD :|
[16:23] <ogra_> unless stgraber or ted have any bright ideas
[16:24] <seb128> did anyone notice a screen flicker on apps switch since 169?
[16:25] <ogra_> its not since 169
[16:25] <seb128> when I right swipe between apps the screen flickers
[16:25] <sil2100> Oh, I thought that was only because of debugging
[16:25] <seb128> could be since > 165
[16:25] <ogra_> thats old and there is a bug (and iirc a fix in vivid) for it
[16:25] <seb128> not sure I upgraded during the w.e
[16:25] <seb128> well, it started today for me, was not there with friday's image
[16:25] <seb128> and it persists accross reboots
[16:26] <ogra_> well, it is definitely an old bug
[16:27] <AlbertA> cjwatson: sil2100: so how are packaging issues like the one I had in mir caught? is there something we can run in our CI to catch these sorts of issues?
[16:27] <Saviq> trainguards, can someone please publish vivid silo 30 for me :|
[16:30] <cjwatson> AlbertA: not really I'm afraid
[16:31] <cjwatson> AlbertA: one of the wishes for the new CI engine is to be able to run those tests per-silo though, in general; not sure how far that's got
[16:34] <sil2100> Saviq: doing! Been in a meeting!
[16:34] <Saviq> sil2100, you're always in a meeting! ;P
[16:34] <sil2100> I know! ;(
[16:34] <sil2100> Saviq: top approve plz!
[16:34] <Saviq> sil2100, OOPz
[16:36] <tedg> seb128, bug 1394622
[16:36] <sil2100> Saviq: ready?
[16:36] <tedg> seb128, Started with 166, fixed in vivid.
[16:36] <seb128> tedg, great, thanks
[16:37] <seb128> ogra_, sil2100, ^ it might not be an old bug?
[16:37] <Saviq> sil2100, just confirming why it was not top-ack, gimme a minute
[16:37] <seb128> tedg, do you know when the issue started?
[16:37] <ogra_> tedg, oh, wow, i thought it was older
[16:37] <ogra_> maybe i dreamt that
[16:37] <tedg> seb128, UAL version 0.4+15.04.20141118~rtm-0ubuntu1
[16:37] <seb128> ogra_, ^
[16:38] <Saviq> sil2100, ready
[16:38] <seb128> sil2100, ogra_, olli, imho that should be fixed in rtm, that's a recent regression and it's quite visible
[16:38] <ogra_> seb128, well ...
[16:39] <ogra_> tedg, feel like preparing a silo for that ... in case mgmt decides we allow a rebuild
[16:39] <ogra_> ?
[16:39] <tedg> seb128, Only do long right swipes, it doesn't happen there :-)
[16:39] <fginther> kenvandine, there is no difference in how the test_suite is executed during MP and smoke testing. The only differences I can come up with are different devices and during smoke testing, other test suites run on the device (but the device is rebooted between suites)
[16:39] <tedg> ogra_, Not allowed to have a silo unless it is critical and rtm.
[16:39] <ogra_> tedg, nonsense ... you can have silos as much as you want
[16:40] <fginther> kenvandine, I've kicked off another test run on a mako. The updated otto run didn't look much better.
[16:40] <tedg> ogra_, Hah, YOU can have silos as much as you want. :-)
[16:40] <ogra_> tedg, whether they *land* lies in the hands of ... well ... these three guys
[16:40] <tedg> There's a have and have-nots of landing.
[16:40] <ogra_> but if we dont have a silo at all it surely wont land at all
[16:41] <ogra_> not sure if olli did a bug meeting today, i would have brought that issue up there
[16:45] <kenvandine> fginther, :(
[16:45] <olli> pmcgowan, ^
[16:49] <pmcgowan> oo that is bad, how did I miss that
[16:49] <davmor2> bfiller: open messaging, swipe up for a new message, add a contact/number, click on the camera bottom left, add a photo, then try and type in a message, the keyboard disappears after a letter or so
[16:49] <pmcgowan> sil2100, how could that have been introduced with 166? what did we land there
[16:50] <bfiller> davmor2: ack, bug that please
[16:50] <ogra_> pmcgowan, ubuntu-app-launch
[16:50] <davmor2> bfiller: wilko
[16:51] <pmcgowan> ogra_, we obviously got more changes than we expected
[16:51] <ogra_> yep
[16:51] <ogra_> as usual :P
[16:51] <ogra_> the shiny world of side effects ...
[16:53] <ogra_> olli, pmcgowan, in case you want to land this, we should let the apprmor fix (that is in the same u-a-l upload in vivid) in as well, so we dont get system slowdown when apparmor tries to collect the cproxy data (and fails)
[16:54] <tedg> apport :-)
[16:54] <ogra_> sigh
[16:54] <ogra_> things starting with "A"
[16:55] <ogra_> :)
[16:55] <cwayne> if we do that, can i also land my custom fix? :P
[16:55] <tedg> Better than the number of things starting with "U" ;-)
[16:55] <ogra_> true
[16:57] <ogra_> i guess it is my subconscious that wants me to ping jamie all the time in a subtle way
[16:57] <davmor2> bfiller: https://bugs.launchpad.net/messaging-app/+bug/1396248
[16:57] <bfiller> davmor2: thanks
[16:58] <davmor2> the title is bad, I'll modify it
[16:58] <tedg> ogra_, I think that jamie hates your subconscious :-)
[16:58] <ogra_> lol
[16:59] <brendand> ci-help - i have some merge requests that aren't running tests in jenkins, what do i need to do? https://code.launchpad.net/~brendan-donegan/ubuntu-clock-app/wait_for_bottomedgetip_visible/+merge/242792 & https://code.launchpad.net/~brendan-donegan/reminders-app/test_add_notebook_must_append_it_to_list_swipe_to_bottom/+merge/242808
[17:00] <brendand> cihelp ^ ?
[17:00] <fginther> brendand, looking
[17:06] <pmcgowan> tedg, if that fix was done last friday why are we only discussing today?
[17:11] <fginther> brendand, jenkins is finding your branches now
[17:14] <tedg> pmcgowan, I'm confused, what are you expecting? It's listed as a bug in RTM, and marked High.
[17:15] <tedg> There are a bunch of those.
[17:16] <pmcgowan> tedg, not regressions from the handful of landings we just did
[17:16] <pmcgowan> as a regression we should have been all over this
[17:16] <pmcgowan> moot point now
[17:24] <balloons> ohh fginther lucky man today. So can you have a look at this autolanding job? http://91.189.93.70:8080/job/ubuntu-calculator-app-vivid-amd64-autolanding/2/console. It failed due to apt-get update (hash sum mismatch), aka the indexes were updating.
[17:24] <balloons> We shouldn't fail the job because of that; you should be able to just keep going
[17:24] <cjwatson> balloons,fginther: You should apply the same workaround launchpad-buildd does
[17:25] <pmcgowan> tedg, in the meantime, it would make sense to prepare an rtm silo if we do not have one yet
[17:25] <cjwatson> balloons,fginther: End of http://bazaar.launchpad.net/~canonical-launchpad-branches/launchpad-buildd/trunk/view/head:/update-debian-chroot - in practice that's an extremely reliable workaround
[17:26] <cjwatson> As in I used to be forever retrying spurious failures due to that, now I can't remember the last time I had to
[17:26] <cjwatson> (OK, it's a bit different against archive.ubuntu.com, but should still help a lot)
[17:26] <ogra_> tedg, if you do that, please only the one MP that fixes the issue ...
[17:27] <ogra_> in case there is a re-spin the fix can still be considered
[17:27] <tedg> pmcgowan, K, also bug 1394919 recoverable error fix?
[17:27] <tedg> Well, the bug got dup'd.
[17:27] <ogra_> tedg, if you do that, do it in two silos
[17:27] <tedg> ogra_, It's already in a separate MR
[17:27] <ogra_> right
[17:28] <ogra_> but if mgmt decides they only want one of the two we cant easily unbundle it
[17:28] <ogra_> would need new QA etc etc
[17:28] <tedg> Well, you can't have the same project in separate silos.
[17:28] <ogra_> (if mgmt decides at all :) )
[17:28] <tedg> They lock each other out.
[17:28] <balloons> cjwatson, awesome, thanks
[17:29] <ogra_> tedg, well, then only the flicker
[17:30] <balloons> cjwatson, since I have you btw, are you the right person to ask about supporting multi-arch builds with click automagically? Perhaps I specify the schroots (or click manages them)
[17:30] <cjwatson> balloons: can I redirect you to mvo, unless he can't handle it?  I'm moving out of click development
[17:31] <balloons> cjwatson, right, I couldn't remember if that was on the list or not.. I thought it might be
[17:32] <tedg> trainguards, rtm silo for line 57 please
[17:32] <ogra_> cjwatson, what ? you mean you want deploy LP in click packages ? :P
[17:32] <cjwatson> ogra_: some day you'll quit trolling and the world will implode ;-)
[17:32] <ogra_> lol
[17:53] <davmor2> cjwatson: pfff the world, it's the universe getting sucked into the blackhole that the world imploding causes that is more worrying ;)
[17:59] <kenvandine> tedg, which vivid build did you run these tests on?
[18:00] <kenvandine> tedg, i just ran it on image 32 mako, 6 of the 8 sound tests failed
[18:00]  * kenvandine tries with vivid version
[18:06] <kenvandine> interesting, i downgraded to the vivid version and also had 6 failures in the sound tests
[18:07] <kenvandine> but ran them again and had no failures
[18:07]  * kenvandine tries silo again
[18:07] <kenvandine> fginther, is there any chance the mako devices in the data center have the wrong orientation?
[18:08] <kenvandine> when i had the failures system-settings was started rotated each time
[18:08] <kenvandine> i turned it for the second run, when they passed
[18:09] <kenvandine> crap, one failure this time,  test_keyboard_sound_switch
[18:14] <kenvandine> fginther, ok, i have 3 runs in a row with the same 6 failures in the sound panel tests when in landscape
[18:15] <kenvandine> and 3 runs in portrait with just 1 failure each
[18:15] <kenvandine> we clearly need to be more resilient to orientation in testing
[18:19] <kenvandine> tedg, adding a sleep(0.2) before the assert in test_keyboard_sound_switch gave me repeated passes
[18:19] <kenvandine> in portrait :/
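The sleep(0.2) above papers over a race; a more robust shape, in whatever test harness, is to poll the condition with an overall deadline instead of one fixed sleep. A generic shell sketch, where `condition_met` is a stub that flips true after roughly 0.3s, standing in for "has the sound-switch state updated yet":

```shell
# Poll-until-true with a deadline, instead of a fixed sleep before the
# assertion. condition_met is a stub that becomes true ~0.3s after start.
start_ms=$(( $(date +%s%N) / 1000000 ))
condition_met() {
  now_ms=$(( $(date +%s%N) / 1000000 ))
  [ $(( now_ms - start_ms )) -ge 300 ]
}
deadline=$(( $(date +%s) + 5 ))       # overall timeout: 5 seconds
until condition_met; do
  if [ "$(date +%s)" -ge "$deadline" ]; then echo "timed out" >&2; exit 1; fi
  sleep 0.05                          # short poll interval, not one big sleep
done
echo "condition met"
```

The fixed sleep always costs its full duration and still fails on a slow run; the poll returns as soon as the condition holds and only fails past the deadline.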
[18:20] <kenvandine> on a complete autopilot test run i had 117 failures when in landscape
[18:21] <kenvandine> looks like all of our tests need some love to make sure they work there
[18:21] <kenvandine> :(
[18:27] <fginther> kenvandine, orientation could be a concern as I don't know if this is always guaranteed to be portrait when the device first comes up. The devices in the lab lay flat, nothing is moving them except for the occasional device that must be manually reset
[18:28] <kenvandine> fginther, i thought so
[18:28] <kenvandine> i'm getting the same number of failures as the otto job i was looking at earlier
[18:28] <kenvandine> 117
[18:28] <kenvandine> when run in landscape
[18:29] <kenvandine> that's a lot of tests we should fix to work in landscape :/
[18:31] <kenvandine> i guess being better at following the page object model guidelines could improve that
[18:32] <fginther> kenvandine, hmm. I didn't even consider the desktop tests running under a different orientation.
[18:32] <kenvandine> is otto desktop?
[18:32] <kenvandine> i thought the job had "mako" in the name
[18:33] <kenvandine> oh it doesn't
[18:33] <fginther> kenvandine, yes. otto basically runs a x86 desktop iso
[18:33] <kenvandine> quite a coincidence then that i had 117 failures on mako in landscape
[18:34] <kenvandine> on the desktop it would always be portrait
[18:34] <kenvandine> so that can't be it
[18:34] <fginther> kenvandine, if the desktop tests aren't useful right now, we could disable them. but I'll leave that up to the project team
[18:34] <kenvandine> seb128, ^^ what do you think?  the otto job runs the tests on desktop
[18:34] <kenvandine> how much value do you think we get from that now?
[18:35] <kenvandine> seb128, it feels like that's where we spend the most time tracking down infrastructure/environment related failures
[18:35] <kenvandine> it'll be more important with convergence, but for now i see little value
[18:38] <kenvandine> pmcgowan, ^^ thoughts?
[18:39] <ogra_> sounds rather pointless to me
[18:39] <kenvandine> ogra_, yeah, someday sure
[18:39] <ogra_> you miss armhf ... you miss the android container backends ...
[18:39] <kenvandine> but many of our tests are around ofono, etc
[18:40] <ogra_> yeah and ofono
[18:40] <kenvandine> for convergence we'll need to ensure we do desktop testing
[18:40] <ogra_> right
[18:40] <kenvandine> but it'll be based on what is needed for desktop
[18:40] <ogra_> but thats still far out
[18:40] <kenvandine> right now it doesn't seem like it adds value
[18:40] <kenvandine> and we spend a TON of time chasing these problems
[18:40] <ogra_> ugh
[18:41] <pmcgowan> kenvandine, I thought settings was soon to be the solution on desktop
[18:41] <ogra_> do you ever chase the ones in touch smoketesting at all then ?
[18:41] <kenvandine> ogra_,  we try :)
[18:41] <ogra_> these are the only important ones currently
[18:41] <kenvandine> right now we're 100%
[18:41] <pmcgowan> kenvandine, is the issue x86 or landscape? trying to follow the backscroll
[18:41] <kenvandine> pmcgowan, at some point, sure
[18:41] <kenvandine> neither
[18:42] <kenvandine> it's that right now we have most tests failing in otto, which is run based on the x86 desktop iso
[18:42] <kenvandine> and we're not really sure why
[18:42] <kenvandine> i happened to hit the same number of failures on my mako when run in landscape
[18:42] <kenvandine> but on the desktop it would be portrait
[18:42] <kenvandine> so that can't be it
[18:42] <pmcgowan> was it all the tests?
[18:42] <kenvandine> i'm filing a bug now to fix our tests so they don't fail in landscape
[18:42] <kenvandine> no
[18:42] <kenvandine> over 90% though
[18:43] <kenvandine> and we're still unsure why
[18:43] <kenvandine> they pass on mako
[18:43] <kenvandine> but totally busted on otto
[18:43] <kenvandine> and we're questioning the value those even bring us right now
[18:43] <pmcgowan> last time this happened the otto iso was old
[18:43] <kenvandine> yeah, fginther updated it
[18:43] <kenvandine> still blows chunks
[18:44] <pmcgowan> lets check with seb on timeframe for settings app on desktop, I thought it was maybe 15.04
[18:44] <kenvandine> jgdx and i have sunk a ton of time chasing this
[18:44] <kenvandine> even if it is 15.04, i think we need to look at the tests more for desktop
[18:44] <pmcgowan> and the failures dont tell us the reason?
[18:44] <ogra_> kenvandine, fyi test_language_page_title_is_correct fails in smoketesting
[18:44] <kenvandine> not really... different things crashing... tracebacks, etc
[18:44] <kenvandine> ogra_, i just looked this morning...
[18:44] <ogra_> (or failed today)
[18:45] <kenvandine> well, i was checking vivid mako
[18:45] <kenvandine> to compare these results
[18:45] <ogra_> right, we don't look at mako
[18:45] <ogra_> :)
[18:45] <kenvandine> ok... fair enough
[18:45] <ogra_> since thats not our product
[18:45] <kenvandine> i used that as an example to compare the CI runs
[18:45] <ogra_> nor do we check vivid
[18:45] <kenvandine> i'll check that out
[18:45] <ogra_> though system-settings occasionally has one or two failures
[18:45] <kenvandine> but see... this issue testing desktop distracted from smoketesting tests :)
[18:46] <kenvandine> yeah, often transient
[18:46] <kenvandine> next run will pass
[18:46] <ogra_> unlike camera-app that constantly is bad ...
[18:46] <kenvandine> i usually don't look closely until i see repeated failures
[18:46]  * ogra_ looks at bfiller
[18:46] <ogra_> ... who isn't here :P
[18:46] <fginther> kenvandine, do the uss tests need to touch the bottom edge?
[18:47] <ogra_> kenvandine, yeah, if we see them in the daily LT review we usually ping someone
[18:47] <kenvandine> fginther, nope
[18:48] <fginther> kenvandine, looking at the videos on the otto jobs, it appears that the bottom edge is off screen
[18:48] <fginther> kenvandine, k. I have to run to an appointment, will be back in about an hour
[18:49] <kenvandine> yeah, i think it's just that we need to be better at scrolling to where we need to touch
[18:49] <kenvandine> fginther, thanks!
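Since the bottom edge (and other controls) can sit off screen on smaller or rotated displays, the usual remedy is a helper that swipes until the target reports itself visible before tapping. A rough sketch of that pattern (`is_visible` and `swipe_up` are hypothetical stand-ins for the real toolkit emulator calls):

```python
def scroll_into_view(view, item, max_swipes=10):
    """Swipe until `item` is on screen instead of assuming the whole
    layout fits (it doesn't in landscape or in a short desktop window)."""
    for _ in range(max_swipes):
        if item.is_visible():
            return item
        view.swipe_up()
    raise AssertionError("item never became visible")
```
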
[18:54] <seb128> kenvandine, sorry, was at dinner, what's the issue/question?
[18:55] <kenvandine> seb128, how much value do you think we get from the settings otto tests for desktop?
[18:55] <kenvandine> i know with convergence we'll need to make sure we have good tests for desktop
[18:55] <kenvandine> but right now i don't see a lot of value from those tests running on otto
[18:56] <kenvandine> and we sink a ton of time in trying to figure out what's broken
[18:56] <robru> AlbertA: disregard citrain failure message, looks like your package was uploaded just fine.
[18:56] <robru> AlbertA: let me know when you're ready to publish that
[18:56] <seb128> kenvandine, what are "otto tests"? running the autopilot tests on an otto setup?
[18:56] <kenvandine> seb128, and when we do put system-settings on the desktop, what we test will probably be different... so those desktop tests will be more about testing it for the desktop
[18:56] <kenvandine> not testing the phone settings on x86 desktop iso
[18:57] <kenvandine> yeah, autopilot tests run on a base x86 desktop iso
[18:57] <ogra_> robru, in case you missed it, see tedg's trainguards ping above please
[18:59] <AlbertA> robru: yeah it's still building armhf in the ppa
[18:59] <robru> tedg: sorry for the delay, rtm2
[19:00] <kenvandine> seb128, part of our problem could be the state of mir, unity8, etc for desktop in vivid
[19:00] <kenvandine> those tests run with all that
[19:02] <seb128> kenvandine, I think we should start running desktop tests if we can, we don't especially want to stop on failures though
[19:02] <seb128> just get things going
[19:03] <kenvandine> so we should keep investigating what's causing the otto failures?
[19:03] <kenvandine> at least some of them seem to include unity8 crashes
[19:03] <kenvandine> but others don't, with similar results
[19:03] <kenvandine> so actually, these tests don't test against desktop-next
[19:03] <kenvandine> they test on the stock x86 desktop iso
[19:04] <kenvandine> perhaps otto should be using the desktop-next iso?
[19:05] <seb128> well, not sure
[19:05] <seb128> I thought CI wanted to replace otto
[19:05] <seb128> or provide a new infra for desktop testing
[19:05] <seb128> ev said that was on the roadmap for September
[19:05] <seb128> I guess that got delayed
[19:05] <seb128> but it's worth checking with them
[19:05] <seb128> in any case we need to start testing unity8 desktop-next
[19:05] <seb128> and apps in desktop mode
[19:06] <kenvandine> seb128, yeah, but does testing settings on the desktop iso add value right now?
[19:06] <kenvandine> we don't even know why it's broken yet...
[19:07] <kenvandine> who knows, maybe it's because we aren't using desktop-next :)
[19:07] <seb128> kenvandine, not really, it's going to fail because things like indicator-network are not running
[19:07] <seb128> or do we mock those?
[19:07] <kenvandine> ah... not really
[19:08] <seb128> well in any case we want to test those under desktop-next
[19:08] <kenvandine> we need to
[19:08] <kenvandine> but jgdx struggled a bit with that
[19:08] <kenvandine> tedg's branch we are trying to land includes mocking the sound indicator
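Mocking an indicator means the panel under test talks to a stand-in that records what was asked of it, so no real service needs to be running. A toy illustration of the pattern in plain Python (the class and method names are made up; the actual branch mocks the sound indicator over D-Bus):

```python
class FakeSoundIndicator:
    """Records property writes so a test can assert the settings panel
    toggled silent mode without a real indicator-sound service."""

    def __init__(self):
        self.silent_mode = False
        self.calls = []

    def set_silent_mode(self, enabled):
        # Log the call, then mimic the real service's state change.
        self.calls.append(("set_silent_mode", enabled))
        self.silent_mode = enabled

# a test would inject this in place of the D-Bus proxy, flip the
# switch in the UI, then assert on fake.calls / fake.silent_mode
```
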
[19:09] <kenvandine> seb128, the otto tests do run unity8 though
[19:09] <kenvandine> so it should have the indicators
[19:09] <seb128> well, in any case testing under unity7/otto is better than nothing
[19:09] <kenvandine> unless something has changed about the way that runs on desktop
[19:09] <kenvandine> it's not unity7
[19:09] <seb128> hum, k
[19:09] <seb128> so dunno
[19:09] <seb128> I don't know enough about the otto setup
[19:09] <kenvandine> we see lots of dbus errors in the logs about qtmir
[19:10] <seb128> can we run that locally to debug?
[19:10] <kenvandine> didrocks knows how to :)
[19:10] <kenvandine> i don't
[19:10] <kenvandine> https://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-vivid-fjg/4/console
[19:10] <seb128> my understanding was that otto was not properly supported by CI
[19:10] <kenvandine> seb128, ^^ a log
[19:10] <seb128> so maybe that's something that needs to be resolved on the CI side first
[19:10] <seb128> like they need to provide us a framework we can run desktop tests on
[19:12] <kenvandine> seb128, yeah, so i still think in its current state, it doesn't provide as much value as it takes to make it work
[19:12] <seb128> right
[19:12] <kenvandine> we clearly need something for desktop mode testing though
[19:12] <seb128> we need some CI infra that allows us to boot unity8-mir and run tests
[19:12] <seb128> otto is vm based right?
[19:12] <seb128> and mir doesn't run in vms?
[19:12] <kenvandine> seb128, so is that a +1 to disable the current otto test until we can sort out the right solution?
[19:12] <seb128> how do we manage to run unity8?
[19:13] <kenvandine> no idea :)
[19:13] <kenvandine> these passed in utopic though
[19:13] <kenvandine> maybe something mir related has changed and unity8 isn't starting right?
[19:13] <kenvandine> or we're missing depends?
[19:13] <seb128> well, mir never worked in VMs afaik
[19:13] <kenvandine> the logs aren't very useful :(
[19:13] <kenvandine> yeah, so not sure how this worked
[19:13] <seb128> but yeah, +1 from me to disable tests
[19:14] <seb128> we need to resolve the desktop testing problem
[19:14] <seb128> but we need to start with a clean board
[19:14] <seb128> with step1 having a proper infra booting a desktop next iso
[19:14] <kenvandine> fginther, ^^ lets disable them
[19:14] <kenvandine> seb128, also in the settle tests i see the load is quite high in the otto tests
[19:14] <seb128> ok, on that note need to go
[19:14] <kenvandine> could be related, something spinning out of control
[19:14] <kenvandine> good night seb128!
[19:14] <kenvandine> thanks
[19:15] <seb128> yeah, it feels like we don't have a solid base
[19:15] <seb128> difficult to have data
[19:15] <seb128> let's reboot that setup and see where we get
[19:16] <kenvandine> pmcgowan,  bug 1396305
[19:16] <kenvandine> we should take care of that when we refactor our testing over time
[19:40] <AlbertA> robru: ok mir package has been built - sanity checked, ready to publish (silo 009)
[19:40] <robru> AlbertA: alright, here goes...
[19:41] <fginther> kenvandine, ack
[19:41] <fginther> kenvandine, thanks for digging on that
[19:42] <kenvandine> fginther, np, we surely appreciate all your help too
[19:43] <kenvandine> tedg, now manually testing the silo i noticed i get no sound... but now that i think about it i noticed a call the other day where the ringer never rang, i just saw it
[19:43] <kenvandine> anyone know if there is a known problem with sound on vivid/mako?
[19:47] <robru> AlbertA: yeah I'm gonna have to wrangle this. gimme a few minutes...
[19:52] <tedg> robru, Thanks, building!
[19:54] <robru> tedg: you're welcome
[19:58] <sil2100> ogra_: any news regarding the cgproxy issue?
[19:58] <sil2100> jibel: ^ ?
[20:07] <robru> AlbertA: ok I think it's happening...
[20:08] <jibel> sil2100, nothing from me. I cannot reproduce with gdb attached
[20:08] <robru> AlbertA: I gotta step out for lunch. When you see version ...1125 show up at http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#mir then we're golden.
[20:09] <AlbertA> robru: thanks!
[20:13] <sil2100> olli, jibel: so I guess we stick with #169, right? Did sanity testing pass on this? Did we start regression testing?
[20:14] <olli> yes, that's the plan of record
[20:21] <sil2100> olli, davmor2, jibel: so, I see #169's mako equivalent failed sanity testing...
[20:21] <sil2100> Ah, wait, no
[20:21] <sil2100> Correction
[20:21] <sil2100> The ES image failed sanity testing
[20:22] <sil2100> What does that mean? Do we need a fix landing before we can promote?
[20:22] <olli> sil2100, otp, give me a few min
[20:22] <sil2100> Sure
[21:02] <olli> sil2100, done
[21:04] <olli> sil2100, so do we know why this only happens in the ES img?
[21:32] <robru> sil2100: you still around? can you help me interpret mir in http://people.canonical.com/~ubuntu-archive/proposed-migration/update_output.txt ? I guess we need a seed update? does that sound right?
[21:40] <jhodapp> robru, can you please publish vivid silo #1?
[21:41] <robru> jhodapp: please approve your merges: https://ci-train.ubuntu.com/job/ubuntu-landing-001-2-publish/51/console
[21:42] <jhodapp> robru, oh hrm...
[21:42] <robru> jhodapp: oh yeah, one's superseded. I guess we have to reconfigure and rebuild with the new MP.
[21:43] <jhodapp> robru, let me double check, one sec
[21:45] <jhodapp> robru, where are you seeing a superseded MR?
[21:46] <robru> jhodapp: https://ci-train.ubuntu.com/job/ubuntu-landing-001-2-publish/51/console one of these two has been superseded.
[21:49] <jhodapp> robru, argg, you're right...this silo will never land! ;)
[21:52] <AlbertA2> robru: that's strange....why does ubuntu-touch have a dep on libmirplatform3driver-android ?
[21:53] <robru> AlbertA2: I dunno, maybe indirectly? maybe some other component deps on it? can you check the rdeps and see if you can find it? I'm a bit busy
[21:54] <AlbertA2> robru: yeah I'll check it
[21:54] <robru> AlbertA2: thanks
[21:59] <jhodapp> robru, I need justinmcp_ to make a change before we can finish landing, so no worries for right now
[21:59] <robru> jhodapp: ok cool
[22:12] <AlbertA2> robru: it does look related to the seeds. I'll ping ogra_
[23:33] <sergiusens> rsalveti: line 59 on the sheet
[23:48] <sil2100> robru: update_output says that ubuntu-touch becomes uninstallable for the main archs
[23:49] <robru> sil2100: that's what I thought but I wasn't sure.
[23:49] <robru> sil2100: thanks for clarifying, that file is a mess.
[23:50] <sil2100> robru: yeah... there's not much documentation for it as well sadly
[23:50] <sil2100> o/