=== johnlage_partyha is now known as johnlage
[01:55] i.... what?
[01:57] but platform-api is in the ppa... it built!
[02:04] === trainguards: IMAGE 32 building (started: 20141125 02:05) ===
[02:11] well the train's gone and shit itself good this time
[02:15] hmmmmmm
[02:16] AlbertA: Ok I think that worked. There seems to be some kind of bug where the train can't handle the idea that a silo is being published for the second time. I dug in and removed the evidence of the initial publication and now it looks like it's working. I gotta run out but I'll be back in a few hours and double check that this published for real.
[02:17] oh that's a good sign
[02:45] robru: thanks dude!
[03:24] === trainguards: IMAGE 32 DONE (finished: 20141125 03:25) ===
[03:25] === changelog: http://people.canonical.com/~ogra/touch-image-stats/32.changes ===
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
[05:22] morning
[05:24] AlbertA: yep it definitely published. you're welcome. looks like it's still stuck in proposed due to the binary package from the previous build though, I think if we get an archive admin (cjwatson?) to delete the old arm64 build it should be able to get through this time.
[05:25] Mirv: heya
[05:28] Mirv: found a bug in the train, but it's a rare corner case. if you rebuild packages after they've been published, it can't publish them again. You have to go delete ~/silos/ubuntu/landing-XXX/*.project_* manually in order to publish. This regression is probably from some changes I made weeks ago, so it'll be difficult to just revert to a working state. Not
[05:28] sure how long it'll take me to get a fix out (definitely won't be tonight). Hopefully all your publishings succeed on the first try, but if anything gets caught in proposed you'll have to be aware of this.
[05:41] robru: hmmkay
[05:41] robru: it's quite rare indeed
[05:42] I don't remember myself ever doing that
[05:42] unless of course it happens also when publish fails, then build/watch_only and publish again, which is more common
[05:42] Mirv: I've done it a few times over the years. it definitely used to be possible, as things used to get stuck in -proposed more often in the past
[05:42] oh, yes, those rare cases. I remember a couple.
[05:43] not sure if sarcasm ;-)
[05:48] haha :D no, they are rare.
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
[07:45] Mirv: My question on #ci did not provoke anybody to respond .. who do you know from CI whom I can ping directly?
[08:03] bzoltan: I tend to point to fginther for solving CI issues but that's unfair since he is simply too good in achieving/fixing things :)
[08:04] bzoltan: so I don't kind of have clear ideas who to ping / how they delegate tasks
[08:04] bzoltan: they should respond to cihelp, but I haven't gotten any response when I've pinged that nick in my morning hours
[08:04] I mean, it does not look like they really properly backlog the requests
[08:04] Mirv: with "how" i can help... i do not know who is at CI who is actually really responsible to keep the machine running.
[08:05] which is probably partially true for trainguards too - when the ping was long enough ago, one may assume that "someone else" took care of it
[08:05] bzoltan: I think a lot of CI people are off at some kind of sprint/training right now.
[08:06] bzoltan: if your issue is with s-jenkins I can poke at it but I don't have a lot of expertise there.
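A minimal sketch of the manual cleanup robru describes at [05:28]: the ~/silos/ubuntu/landing-XXX/*.project_* glob comes straight from the chat, but the silo name, the idea of running this as a standalone script on the CI Train host, and everything else about the environment are assumptions, not the train's actual tooling.

```python
# Hedged sketch only: remove the per-project publication stamp files so the
# train treats the silo as unpublished again.  The glob pattern is the one
# quoted above; "landing-013" is a hypothetical silo name.
import glob
import os

silo = 'landing-013'
pattern = os.path.expanduser('~/silos/ubuntu/%s/*.project_*' % silo)
for stamp in glob.glob(pattern):
    print('removing %s' % stamp)
    os.remove(stamp)
```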
[08:07] robru: Our staging landings are totally blocked... we have 40+ MRs blocked by a CI problem. According to these logs - https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-runner-mako/6179/console - the build rootfs does not seem to be pure vivid.
[08:07] hmmm
[08:07] bzoltan, robru, Mirv : fginther *is* working on it and tracks it in asana
[08:07] robru: It would be great if you could help. But of course I can wait if it is beyond your reach
[08:08] vila: ohh, really? So fginther knows about this issue... good to know.
[08:09] well, AFAIK he is, he did update the ticket 4 hours ago
[08:09] bzoltan: yeah that log doesn't have enough info to go off. when you see 'but it isn't going to be installed' it usually means there's a problem with some other dep that isn't even mentioned. You have to reproduce it locally and then try forcibly installing everything it complains about until it finally tells you the real problem. real frustrating... and it's
[08:09] midnight here ;-) I'll let fginther lead that one
[08:11] bzoltan: in any case, pinging cihelp is the way to go
[08:15] bzoltan: hold on, it's on a mako ? Then it's probably plars, there have been issues about installing images related to a pretty convoluted set of blockers
[08:15] vila: the logs say it is on mako
[08:16] bzoltan: that's where I am yes. No idea if it's related to the issues I mentioned above though
[08:21] vila: OK, thanks... the other option is to disable AP tests for the UITK until this issue is fixed.
[08:21] Mirv: the silo16 is good to go
[08:22] bzoltan: meh, the job you're pointing above was "Started by upstream project "generic-deb-autopilot-utopic-touch" build number 6616" *utopic* not vivid, is that expected ?
[08:22] bzoltan: ok!
[08:23] vila: I'm sure it's not expected, everything should be running vivid when it comes to merging vivid branches
[08:23] https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-utopic-touch/ doesn't look like a job that succeeded recently %-/
[08:25] bzoltan: Do you know when this started to fail ? (An url for a successful run will help)
[08:26] bzoltan: or at least the project name (generic jobs are a pain :-/)
[08:26] vila: it is the lp:ubuntu-ui-toolkit/staging branch where we target our MRs
=== chihchun is now known as chihchun_afk
[08:29] bzoltan: sounds like http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-check/ then, last success on Nov 10 ?
[08:33] bzoltan: http://s-jenkins.ubuntu-ci:8080/job/generic-deb-autopilot-runner-mako/6104/consoleFull looks vivid to me... (despite being "Started by upstream project "generic-deb-autopilot-utopic-touch" build number 6527")...
[08:42] bzoltan: so no MPs have landed since Nov 10 ? Or am I looking at the wrong place ?
[08:48] bzoltan: according to the branch history, the last commit was done yesterday... revno 1339, timestamp: Fri 2014-11-21 18:12:18 +0200, message: Sync with trunk, is from you, how did you land it ?
[08:52] vila: this is our staging https://code.launchpad.net/~ubuntu-sdk-team/ubuntu-ui-toolkit/staging
[08:53] vila: some MRs do land occasionally .. But we end up tracking down the issues when 95% of the jenkins jobs fail on some cryptic crap...
[08:56] bzoltan: as far as I can find my way into that cryptic thing you're mentioning above, it seems to me http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-utopic-amd64-autolanding/635/consoleFull did land revno 1341 12 hours ago
[08:57] i.e. gating is done for utopic on armhf, amd64 and i386 : http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-autolanding/691/console
[09:00] http://s-jenkins.ubuntu-ci:8080/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-autolanding/691/parameters/? says utopic too (parent of the job above)
[09:05] Mirv: correct me if I'm wrong, those gating rules come from cu2d-config right ? ubuntu-ui-toolkit is defined in stacks/head/sdk.cfg AFAICS, am I looking at the wrong place again ?
[09:12] vila: I unfortunately don't have a clue whether cu2d-config plays a role still. I kind of assumed it was not used at all anymore, but I might be wrong... if it is in use, that would explain something!
[09:13] crickets
[09:13] vila: looking at the changelog, it does seem to be in use still
[09:13] vila: I think we can fix it ourselves if that's the case, let me give you a branch to approve..
[09:14] Mirv: thanks, that's useful feedback, I'll try to get feedback from fginther (new ticket created in asana so it is tracked)
[09:14] vila: oh, actually no branch that I could create, it seems to be all vivid for ubuntu-ui-toolkit from what I can see..
[09:15] vila: thanks for creating a ticket
[09:15] Mirv: ha, great, gee, so you end up with the same understanding ? I.e. it should be vivid but it's still utopic ?
[09:16] vila: yes, looks like it
[09:17] Mirv: phew. Thanks !
=== chihchun_afk is now known as chihchun
[09:24] ogra_, no psivaa-holiday so who can help us from ci?
[09:25] ogra_, seems security and sdk suites didn't run for some reason
[09:25] brendand: sorry i am not on holidays, this is irccloud madness
=== psivaa-holiday is now known as psivaa
[09:25] psivaa, too late ... now you have to take off
[09:25] psivaa, ok :)
[09:26] :D
[09:26] You have not informed bzr of your Launchpad ID, and you must do this to
[09:26] write to Launchpad or access private data. See "bzr help launchpad-login".
[09:26] thats the issue it seems
[09:27] brendand: ogra_: it is lp temp issue '503', let me run those again
[09:27] ;)
=== chihchun is now known as chihchun_afk
[09:46] sil2100: we've got vivid autopkgtests failing a lot, not sure what could be done about it
[09:46] ogra_: oh, btw.! How are those adbd changes going?
[09:47] Mirv: for which packages?
[09:47] sil2100: about everything that runs some :) for example see http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#ubuntu-download-manager
[09:47] uh
[09:47] I just don't see any clear cause of it
[09:47] sil2100, rolled back ... they break the lab which is stuck on an old UDF until we can land a one line change in krillin's recovery to touch the override file when using --developer-mode
[09:48] needs to wait til next week, when we can land stuff in RTM again
[09:48] sil2100: but since it's there in stuff like binutils too, I'd guess foundations people would be aware and might know where to find the actual culprit
[09:48] (assuming you mean the new lock-screen-check feature)
[09:57] sil2100: correction, I see it has been already discussed 2h ago, but more people are needed
[09:57] so let's assume the people will be found and vivid autopkg test issues fixed by eod or so
[10:12] Mirv: could you please gently kick the UITK package in the proposed pocket. It thinks that it is regressing on ubuntuone-credentials, which is BS
[10:13] bzoltan: yeah... see Mirv's messages above ^
[10:13] bzoltan: it seems autopkgtests are failing for vivid now and we're waiting for more people to help
[10:13] Not much we can do here :/
[10:13] did someone ping pitti ?
[10:14] sil2100: Ahh... sorry I have not read the logs
[10:14] That one is fixed: https://jenkins.qa.ubuntu.com/job/vivid-adt-ubuntuone-credentials/lastBuild/
[10:24] ogra_: yes, or he himself pinged mvo and mvo waits for barry :)
[10:24] Laney: great, it looks like it might fix uitk migration, although the system-image problem hits eg. http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#ubuntu-download-manager
[10:25] There are other failures, but I think that they are independent
[10:25] e.g. the kate ones seem like a packaging error
[10:47] robru,AlbertA: Yes, removing mir-graphics-drivers-android/arm64 from vivid-proposed was the right answer here. (This was only a problem because there was a previous unmigrated version in -proposed.) I've done that now.
[11:42] trainguards, hey, may I ask for reconfiguring landing-013 if needs be (one extra MP added to unity-scopes-shell), and rebuilding of *just* unity-scopes-shell (it takes ages to rebuild the entire silo again)?
[11:44] pstolowski: k
[11:45] pstolowski: in case of adding MPs to the list of already configured projects you can reconfigure yourself actually, but let me do that this time
[11:47] pstolowski: done
[11:47] pstolowski: it's building
[11:47] sil2100, thanks; is rebuilding of just a single project possible?
[11:48] pstolowski: yes, when you press the build button, you need to select the project you want to rebuild in the PACKAGES_TO_REBUILD field
[11:48] pstolowski: but in theory when you just add one merge to the silo, CI Train by default will only rebuild the project that changed (e.g. had new merges)
[11:48] But to be perfectly safe you can include the name in PACKAGES_TO_REBUILD ;)
[11:49] sil2100, awesome, thanks!
[11:49] yw!
[11:49] pete-woods, ^ you may want to know that as well :)
[12:01] :)
[12:43] hiho
[12:43] ogra_, sil2100, jibel, how are things looking?
[12:43] are we still playing audio on the latest build ;)
[12:44] olli, we have some more info about the issue
[12:44] indeed we do :P
[12:45] it is still open whether we will have a fix ... but if there is a silo ready by tonight we could still get it in ...
[12:45] if not, the agreement was to not do anything and ship 169
[12:46] sounds good, thx for the update
=== alan_g is now known as alan_g|lunch
[12:52] olli: still nothing concrete though ;)
[12:55] * ogra_ wouldnt say that ... we knoe there is a long running connection which cgproxy isnt designed for
[12:55] *know
[12:55] and that some process sends 13 times the same request over this connection
[12:55] which shouldnt happen
[12:55] we just need someone to hit the issue again and collect the requested data and we should know more
[12:56] GRRRR !
[12:56] * ogra_ hates adbd
[12:56] #define open ___xxx_open
[12:56] #define write ___xxx_write
[12:57] lovely ... aint it ?
[12:59] Mirv: that looks like typical timeout errors. you just have to retry the build
[12:59] ogra_, I attached the data this morning t 1394919, is there anything else to provide?
[12:59] *to
[12:59] jibel, nope, lets wait for stgraber
[12:59] heyhey
[12:59] freeze is tomorrow right
[13:01] cwayne: depends - if the fix for cgmanager arrives there will be another spin tomorrow, otherwise it is 169 which is already released. why?
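A hedged illustration of the single-package rebuild sil2100 describes at [11:48]: triggering the silo's build job with only the changed project listed in PACKAGES_TO_REBUILD. The parameter name comes from the chat; the job name, server URL, credentials and the use of the python-jenkins client are all assumptions, not the actual CI Train setup.

```python
# Sketch only: the normal route is the build button in the Jenkins UI with
# PACKAGES_TO_REBUILD filled in; this just shows the same parameter passed
# programmatically.  Every name below except PACKAGES_TO_REBUILD is
# hypothetical.
import jenkins  # python-jenkins

server = jenkins.Jenkins('https://ci-train.ubuntu.com',
                         username='me', password='api-token')
server.build_job('ubuntu-landing-013-1-build',
                 parameters={'PACKAGES_TO_REBUILD': 'unity-scopes-shell'})
```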
[13:02] davmor2: had a bugfix for https://bugs.launchpad.net/hanloon/+bug/1395767
[13:02] Error: launchpad bug 1395767 not found
[13:02] cwayne, no more landings unless you get special approval from olli, victorp or pmcgowan
[13:03] ack
[13:03] that is indeed a bad one, see that all the time
[13:05] cwayne, you will need a click which only has that change in from the current today
[13:05] seems like a lot of stuff has gone in for testing
[13:05] so 1st prepare a branch for that
[13:05] then pmcgowan I think let it in if we end up rebuilding today, but not sure we should rebuild just for that one
[13:06] yeah
[13:06] barry: I already reran it twice
[13:06] right, i would put that one in the langpack category
[13:06] so job #3, 4 and 5 failed within 12h
[13:06] i'm ok if we do it OTA, just thought the freeze was on wednesdays now (oops!)
[13:06] if we rebuild and it is available it can go in
[13:06] ok with me
[13:07] ok, ill get a branch with just that fix and make a click with it
[13:07] and go from there
[13:12] Mirv: there are times when i have to re-run it 2-3 times to get a clean build. if it's failing with timeout errors more often than that, then there could be a regression in udm, since si hasn't changed in vivid (there's a newer version in rtm, but nothing new in vivid yet). is this with mandel's latest udm upload?
[13:13] barry, it is with the latest, but did you see the errors? it complains about dry runs..
[13:13] mandel: look higher up
[13:14] i am testing it locally
[13:15] ack
[13:20] * sil2100 lunch o/
=== alan_g|lunch is now known as alan_g
[14:24] fginther, jgdx said you were helping him with some issues with the settings tests run on otto last week, what's the status of that?
[14:26] kenvandine, the test environment was updated and the tests were passing again. Have they regressed yet again?
[14:26] yeah, tedg has been trying to get CI passes and it's blowing chunks
[14:27] https://code.launchpad.net/~ted/ubuntu-system-settings/silent-mode-trunk/+merge/241709
[14:27] in otto there are crashes and everything fails
[14:28] fginther, but there is one test failure in generic-deb-autopilot-runner-vivid-mako
[14:28] which is a traceback that looks like something i see all over the otto job logs
[14:28] but they aren't all failing, just one
[14:29] the otto failures are significant enough that i would say we should just land his branch, if the other tests pass
[14:29] but the other job has 1 failure in the same panel as his change
[14:29] so i'm reluctant to land it
[14:29] but the traceback looks like an unrelated problem
[14:29] and... the tests pass locally on mako
[14:30] kenvandine, we can update the otto environment and try again.
[14:31] kenvandine, although the mako testing is a lot more reliable. It's doing the same testing the smoke testing runs, with the addition of the MP packages on top of the latest image
[14:31] fginther, do those tracebacks look like something you saw last week?
[14:31] right, which is why i hesitate to land
[14:32] kenvandine, I'll look
[14:32] thx
=== fginther changed the topic of #ubuntu-ci-eng to: Need a silo? ping trainguards | Need help with something else? ping fginther | Train Dashboard: http://bit.ly/1mDv1FS | QA Signoffs: http://bit.ly/1qMAKYd | Known Issues: RTM Archive frozen (no new silos landing) ! RTM cron builds disabled
[14:34] bzoltan, the issue with the uitk staging branch running mako tests on non-vivid has been resolved. A test run using trunk is now passing: https://jenkins.qa.ubuntu.com/job/ubuntu-sdk-team-ubuntu-ui-toolkit-staging-ci/1268/
[14:37] tedg, i went ahead and prepared a silo for your silent-mode branch, but won't publish anything until i have a better sense of the CI failure
[14:37] fginther, ^^
[14:37] kenvandine, K, cool!
[14:38] tedg, don't get too excited, i'm pretty worried about that mako failure
[14:38] i don't want to break smoke testing
[14:39] tedg, but i'm pretty suspicious that it's something outside of settings... we'll need fginther's magic there :)
[14:39] kenvandine, should have a retest started in a few minutes
[14:40] thx
[14:40] we'll get silo testing started soon, tedg has been trying to land this branch for weeks :/
[15:02] fginther: \o/ thank you a bunch
[15:06] trainguards: silo 009 is still stuck in migration, any ideas why?
[15:06] AlbertA: let me take a look at that
[15:07] * amd64: mir-graphics-drivers-desktop, ubuntu-touch
[15:07] * arm64: mir-graphics-drivers-desktop
[15:07] * armhf: mir-graphics-drivers-desktop, ubuntu-touch
[15:07] * i386: mir-graphics-drivers-desktop, ubuntu-touch
[15:07] It apparently makes those packages uninstallable
[15:09] AlbertA,sil2100: looks like a hardcoded and now-incorrect dependency in mir-graphics-drivers-desktop on libmirplatform3driver-mesa rather than libmirplatform4driver-mesa
[15:10] cjwatson: oh yeah!
[15:10] Yeah, saw this in update_output.txt but didn't check the sources in time ;)
[15:11] cjwatson: thanks!
[15:11] tedg, fginther: vivid smoke testing for settings is 100% currently, so this must be somehow related to tedg's branch... i just can't seem to see how
[15:11] Would be nice to unhardcode that, since it should be entirely possible to generate that dependency on the fly
[15:11] fginther, is there any difference in how the smoke testing is run?
[15:11] (If necessary, via a manual substvar)
[15:11] AlbertA: is it hardcoded in the end?
[15:12] cjwatson: how should I do that?
[15:12] cjwatson: sil2100: I believe the intention we had was to use those meta packages so that they could be listed in the seeds
[15:12] AlbertA: few minutes please
[15:12] and the alternatives wouldn't have to be changed every release
[15:15] AlbertA: Something like http://paste.ubuntu.com/9233922/ should do it
[15:15] AlbertA: You still have to hardcode the binary package names, but that should at least somewhat reduce the potential for error
[15:15] (binary package names> by which I mean the contents of Package fields)
[15:16] cjwatson: cool thanks!
[15:47] trainguards ↑ please :)
[16:19] ogra_: heeeeey, can I go for practice today? Can I? Caaan I?
[16:19] hmm
[16:19] ogra_: (will you lead the meeting today ;) ? )
[16:19] * ogra_ ponders
[16:20] ogra_: pliiiiiz
[16:20] lol. ok ok
[16:20] :)
[16:20] Thanks ;)
[16:21] Anyway, I guess we only have the cgmanager thing to discuss... oh, and the sanity tests for #169 still running
[16:21] yep
[16:21] and checking the two missing smoke results
[16:23] I see management is leaning towards using #169 for the GM image anyway - and indeed I think we won't be able to get a fix till EOD :|
[16:23] unless stgraber or ted have any bright ideas
[16:24] did anyone notice a screen flicker on apps switch since 169?
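A hedged sketch of the "generate that dependency on the fly" idea cjwatson raises at [15:11]-[15:15]. His actual paste is not reproduced here; this only illustrates the general substvar mechanism. The substvar name (mir:platform-driver), the debian/ paths, and the assumption that debian/control would then say Depends: ${mir:platform-driver} are illustrative, not the real mir packaging.

```python
#!/usr/bin/env python3
# Illustrative only: intended to be run from debian/rules after the driver
# package has been installed into its debian/<package>/ staging directory,
# appending a substvar that dh_gencontrol picks up from
# debian/<package>.substvars.
import glob
import os

# Whichever libmirplatformNdriver-mesa package this source currently builds.
drivers = [os.path.basename(d)
           for d in glob.glob('debian/libmirplatform*driver-mesa')
           if os.path.isdir(d)]
if len(drivers) != 1:
    raise SystemExit('expected exactly one mesa driver package, got %r' % drivers)

with open('debian/mir-graphics-drivers-desktop.substvars', 'a') as f:
    f.write('mir:platform-driver=%s\n' % drivers[0])
```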
[16:25] its not since 169
[16:25] when I right swipe between apps the screen flickers
[16:25] Oh, I thought that was only because of debugging
[16:25] could be since > 165
[16:25] thats old and there is a bug (and iirc a fix in vivid) for it
[16:25] not sure I upgraded during the w.e
[16:25] well, it started today for me, was not there with friday's image
[16:25] and it persists across reboots
[16:26] well, it is definitely an old bug
[16:27] cjwatson: sil2100: so how are packaging issues like the one I had in mir caught? is there something we can run in our CI to catch these sorts of issues?
[16:27] trainguards, can someone please publish vivid silo 30 for me :|
[16:30] AlbertA: not really I'm afraid
[16:31] AlbertA: one of the wishes for the new CI engine is to be able to run those tests per-silo though, in general; not sure how far that's got
[16:34] Saviq: doing! Been in a meeting!
[16:34] sil2100, you're always in a meeting! ;P
[16:34] I know! ;(
[16:34] Saviq: top approve plz!
[16:34] sil2100, OOPz
[16:36] seb128, bug 1394622
[16:36] bug 1394622 in ubuntu-app-launch (Ubuntu) "0.4+15.04.20141118~rtm-0ubuntu1 causes flickering on spread "alt+tab" gesture" [High,Fix released] https://launchpad.net/bugs/1394622
[16:36] Saviq: ready?
[16:36] seb128, Started with 166, fixed in vivid.
[16:36] tedg, great, thanks
[16:37] ogra_, sil2100, ^ it might not be an old bug?
[16:37] sil2100, just confirming why it was not top-ack, gimme a minute
[16:37] tedg, do you know when the issue started?
[16:37] tedg, oh, wow, i thought it was older
[16:37] maybe i dreamt that
[16:37] seb128, UAL version 0.4+15.04.20141118~rtm-0ubuntu1
[16:37] ogra_, ^
[16:38] sil2100, ready
[16:38] sil2100, ogra_, olli, imho that should be fixed in rtm, that's a recent regression and is quite visible
[16:38] seb128, well ...
[16:39] tedg, feel like preparing a silo for that ... in case mgmt decides we allow a rebuild
[16:39] ?
[16:39] seb128, Only do long right swipes, it doesn't happen there :-)
[16:39] kenvandine, there is no difference in how the test_suite is executed during MP and smoke testing. The only differences I can come up with are different devices and, during smoke testing, other test suites run on the device (but the device is rebooted between suites)
[16:39] ogra_, Not allowed to have a silo unless it is critical and rtm.
[16:39] tedg, nonsense ... you can have silos as much as you want
[16:40] kenvandine, I've kicked off another test run on a mako. The updated otto run didn't look much better.
[16:40] ogra_, Hah, YOU can have silos as much as you want. :-)
[16:40] tedg, whether they *land* lies in the hands of ... well ... these three guys
[16:40] There's a have and have-nots of landing.
[16:40] but if we dont have a silo at all it surely wont land at all
[16:41] not sure if olli did a bug meeting today, i would have brought that issue up there
[16:45] fginther, :(
[16:45] pmcgowan, ^
[16:49] oo that is bad, how did I miss that
[16:49] bfiller: open messaging, swipe up for a new message, add a contact/number, click on the camera bottom left, add a photo, then try and type in a message, the keyboard disappears after a letter or so
[16:50] sil2100, how could that have been introduced with 166? what did we land there
[16:50] davmor2: ack, bug that please
[16:50] pmcgowan, ubuntu-app-launch
[16:50] bfiller: wilko
[16:51] ogra_, we obviously got more changes than we expected
[16:51] yep
[16:51] as usual :P
[16:51] the shiny world of side effects ...
[16:53] olli, pmcgowan, in case you want to land this, we should let the apparmor fix (that is in the same u-a-l upload in vivid) in as well, so we dont get a system slowdown when apparmor tries to collect the cgproxy data (and fails)
[16:54] apport :-)
[16:54] sigh
[16:54] things starting with "A"
[16:55] :)
[16:55] if we do that, can i also land my custom fix? :P
[16:55] Better than the number of things starting with "U" ;-)
[16:55] true
[16:57] i guess it is my subconscious that wants me to ping jamie all the time in a subtle way
[16:57] bfiller: https://bugs.launchpad.net/messaging-app/+bug/1396248
[16:57] Launchpad bug 1396248 in messaging-app "Keyboard disappears when adding a image to an mms message" [Undecided,New]
[16:57] davmor2: thanks
[16:58] title is bad, I'll modify it
[16:58] ogra_, I think that jamie hates your subconscious :-)
[16:58] lol
[16:59] ci-help - i have some merge requests that aren't running tests in jenkins, what do i need to do? https://code.launchpad.net/~brendan-donegan/ubuntu-clock-app/wait_for_bottomedgetip_visible/+merge/242792 & https://code.launchpad.net/~brendan-donegan/reminders-app/test_add_notebook_must_append_it_to_list_swipe_to_bottom/+merge/242808
[17:00] cihelp ^ ?
[17:00] brendand, looking
[17:06] tedg, if that fix was done last friday why are we only discussing it today?
[17:11] brendand, jenkins is finding your branches now
[17:14] pmcgowan, I'm confused, what are you expecting? It's listed as a bug in RTM, and marked High.
[17:15] There are a bunch of those.
[17:16] tedg, not regressions from the handful of landings we just did
[17:16] as a regression we should have been all over this
[17:16] moot point now
[17:24] ohh fginther, lucky man today. So can you have a look at this autolanding job? http://91.189.93.70:8080/job/ubuntu-calculator-app-vivid-amd64-autolanding/2/console. It failed due to apt-get update (hash sum mismatch), aka the indexes were updating.
[17:24] We shouldn't fail the job because of that; you should be able to just keep going
[17:24] balloons,fginther: You should apply the same workaround launchpad-buildd does
[17:25] tedg, in the meantime, it would make sense to prepare an rtm silo if we do not have one yet
[17:25] balloons,fginther: End of http://bazaar.launchpad.net/~canonical-launchpad-branches/launchpad-buildd/trunk/view/head:/update-debian-chroot - in practice that's an extremely reliable workaround
[17:26] As in I used to be forever retrying spurious failures due to that, now I can't remember the last time I had to
[17:26] (OK, it's a bit different against archive.ubuntu.com, but should still help a lot)
[17:26] tedg, if you do that, please only the one MP that fixes the issue ...
[17:27] in case there is a re-spin the fix can still be considered
[17:27] pmcgowan, K, also bug 1394919 recoverable error fix?
[17:27] bug 1377332 in cgmanager (Ubuntu) "duplicate for #1394919 [TOPBLOCKER] UI randomly freezes" [Critical,Confirmed] https://launchpad.net/bugs/1377332
[17:27] Well, the bug got dup'd.
[17:27] tedg, if you do that, do it in two silos
[17:27] ogra_, It's already in a separate MR
[17:27] right
[17:28] but if mgmt decides they only want one of the two we cant easily unbundle it
[17:28] would need new QA etc etc
[17:28] Well, you can't have the same project in separate silos.
[17:28] (if mgmt decides at all :) )
[17:28] They lock each other out.
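A minimal retry loop in the spirit of the launchpad-buildd workaround cjwatson points at above (the real one lives at the end of the linked update-debian-chroot script and is not reproduced here). The retry count and back-off delay are arbitrary assumptions.

```python
# Sketch only: retry "apt-get update" a few times before giving up, since
# hash sum mismatches during an archive publish are usually transient.
import subprocess
import time

def apt_update(retries=3, delay=15):
    for attempt in range(1, retries + 1):
        if subprocess.call(['apt-get', 'update']) == 0:
            return True
        print('apt-get update failed (attempt %d/%d), retrying...'
              % (attempt, retries))
        time.sleep(delay)
    return False

if __name__ == '__main__':
    if not apt_update():
        raise SystemExit('apt-get update kept failing; giving up')
```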
[17:28] cjwatson, awesome, thanks
[17:29] tedg, well, then only the flicker
[17:30] cjwatson, since I have you btw, are you the right person to ask about supporting multi-arch builds with click automagically? Perhaps I specify the schroots (or click manages them)
[17:30] balloons: can I redirect you to mvo, unless he can't handle it? I'm moving out of click development
[17:31] cjwatson, right, I couldn't remember if that was on the list or not.. I thought it might be
[17:32] trainguards, rtm silo for line 57 please
[17:32] cjwatson, what ? you mean you want to deploy LP in click packages ? :P
[17:32] ogra_: some day you'll quit trolling and the world will implode ;-)
[17:32] lol
[17:53] cjwatson: pfff the world, it's the universe getting sucked into the black hole that the world imploding causes that is more worrying ;)
[17:59] tedg, which vivid build did you run these tests on?
[18:00] tedg, i just ran it on image 32 mako, 6 of the 8 sound tests failed
[18:00] * kenvandine tries with vivid version
=== alan_g is now known as alan_g|EOD
[18:06] interesting, i downgraded to the vivid version and also had 6 failures in the sound tests
[18:07] but ran them again and had no failures
[18:07] * kenvandine tries silo again
[18:07] fginther, is there any chance the mako devices in the data center have the wrong orientation?
[18:08] when i had the failures system-settings was started rotated each time
[18:08] i turned it for the second run, when they passed
[18:09] crap, one failure this time, test_keyboard_sound_switch
[18:14] fginther, ok, i have 3 runs in a row with the same 6 failures in the sound panel tests when in landscape
[18:15] and 3 runs in portrait with just 1 failure each
[18:15] we clearly need to be more resilient to orientation in testing
[18:19] tedg, adding a sleep(0.2) before the assert in test_keyboard_sound_switch gave me repeated passes
[18:19] in portrait :/
[18:20] on a complete autopilot test run i had 117 failures when in landscape
[18:21] looks like all of our tests need some love to make sure they work there
[18:21] :(
[18:27] kenvandine, orientation could be a concern as I don't know if this is always guaranteed to be portrait when the device first comes up. The devices in the lab lie flat, nothing is moving them except for the occasional device that must be manually reset
[18:28] fginther, i thought so
[18:28] i'm getting the same number of failures as the otto job i was looking at earlier
[18:28] 117
[18:28] when run in landscape
[18:29] that's a lot of tests we should fix to work in landscape :/
[18:31] i guess being better at following the page object model guidelines could improve that
[18:32] kenvandine, hmm. I didn't even consider the desktop tests running under a different orientation.
[18:32] is otto desktop?
[18:32] i thought the job had "mako" in the name
[18:33] oh it doesn't
[18:33] kenvandine, yes. otto basically runs an x86 desktop iso
[18:33] quite a coincidence then that i had 117 failures on mako in landscape
[18:34] on the desktop it would always be portrait
[18:34] so that can't be it
[18:34] kenvandine, if the desktop tests aren't useful right now, we could disable them. but I'll leave that up to the project team
[18:34] seb128, ^^ what do you think? the otto job runs the tests on desktop
[18:34] how much value do you think we get from that now?
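Rather than the sleep(0.2) before the assert mentioned at [18:19], autopilot's Eventually matcher polls the property until it settles or times out. This is only an illustrative sketch: the base class, the main_view helper and the objectName are hypothetical stand-ins, not the real ubuntu-system-settings page objects.

```python
from autopilot.matchers import Eventually
from testtools.matchers import Equals


class KeyboardSoundTestCase(SoundPanelTestCase):  # hypothetical app test base

    def test_keyboard_sound_switch(self):
        # select_single returns an autopilot proxy object; main_view,
        # pointing_device and the objectName are assumed to come from the
        # app's own test helpers.
        switch = self.main_view.select_single(objectName='keyboardSoundSwitch')
        self.main_view.pointing_device.click_object(switch)
        # Eventually(...) keeps re-reading the property with a timeout,
        # so no fixed sleep is needed before the assert.
        self.assertThat(switch.checked, Eventually(Equals(True)))
```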
[18:35] seb128, it feels like that's where we spend the most time tracking down infrastructure/environment related failures
[18:35] it'll be more important with convergence, but for now i see little value
[18:38] pmcgowan, ^^ thoughts?
[18:39] sounds rather pointless to me
[18:39] ogra_, yeah, someday sure
[18:39] you miss armhf ... you miss the android container backends ...
[18:39] but many of our tests are around ofono, etc
[18:40] yeah and ofono
[18:40] for convergence we'll need to ensure we do desktop testing
[18:40] right
[18:40] but it'll be based on what is needed for desktop
[18:40] but thats still far out
[18:40] right now it doesn't seem like it adds value
[18:40] and we spend a TON of time chasing these problems
[18:40] ugh
[18:41] kenvandine, I thought settings was soon to be the solution on desktop
[18:41] do you ever chase the ones in touch smoketesting at all then ?
[18:41] ogra_, we try :)
[18:41] these are the only important ones currently
[18:41] right now we're 100%
[18:41] kenvandine, is the issue x86 or landscape? trying to follow the backscroll
[18:41] pmcgowan, at some point, sure
[18:41] neither
[18:42] it's that right now we have most tests failing in otto, which is run based on the x86 desktop iso
[18:42] and we're not really sure why
[18:42] i happened to hit the same number of failures on my mako when run in landscape
[18:42] but on the desktop it would be portrait
[18:42] so that can't be it
[18:42] was it all the tests?
[18:42] i'm filing a bug now to fix our tests so they don't fail in landscape
[18:42] no
[18:42] over 90% though
[18:43] and we're still unsure why
[18:43] they pass on mako
[18:43] but totally busted on otto
[18:43] and we're questioning the value those even bring us right now
[18:43] last time this happened the otto iso was old
[18:43] yeah, fginther updated it
[18:43] still blows chunks
[18:44] lets check with seb on timeframe for settings app on desktop, I thought it was maybe 15.04
[18:44] jgdx and i have sunk a ton of time chasing this
[18:44] even if it is 15.04, i think we need to look at the tests more for desktop
[18:44] and the failures dont tell us the reason?
[18:44] kenvandine, fyi test_language_page_title_is_correct fails in smoketesting
[18:44] not really... different things crashing... tracebacks, etc
[18:44] ogra_, i just looked this morning...
[18:45] (or failed today)
[18:45] well, i was checking vivid mako
[18:45] to compare these results
[18:45] right, we dont look at mako
[18:45] :)
[18:45] ok... fair enough
[18:45] since thats not our product
[18:45] i used that as an example to compare the CI runs
[18:45] nor do we check vivid
[18:45] i'll check that out
[18:45] though system-settings occasionally has one or two failures
[18:45] but see... this issue testing desktop distracted from smoketesting tests :)
[18:46] yeah, often transient
[18:46] next run will pass
[18:46] unlike camera-app that constantly is bad ...
[18:46] i usually don't look closely until i see repeated failures
[18:46] * ogra_ looks at bf
[18:46] iller
[18:46] ... who isnt here :P
[18:47] kenvandine, do the uss tests need to touch the bottom edge?
[18:47] kenvandine, yeah, if we see them in the daily LT review we usually ping someone
[18:47] fginther, nope
[18:48] kenvandine, looking at the videos on the otto jobs, it appears that the bottom edge is off screen
[18:48] kenvandine, k. I have to run to an appointment, will be back in about an hour
[18:49] yeah, i think it's just that we need to be better at scrolling to where we need to touch
=== fginther changed the topic of #ubuntu-ci-eng to: Need a silo? ping trainguards | Need help with something brelse? ping cihelp | Train Dashboard: http://bit.ly/1mDv1FS | QA Signoffs: http://bit.ly/1qMAKYd | Known Issues: RTM Archive frozen (no new silos landing) ! RTM cron builds disabled
[18:49] fginther, thanks!
[18:54] kenvandine, sorry, was at dinner, what's the issue/question?
[18:55] seb128, how much value do you think we get from the settings otto tests for desktop?
[18:55] i know with convergence we'll need to make sure we have good tests for desktop
[18:55] but right now i don't see a lot of value from those tests running on otto
[18:56] and we sink a ton of time in trying to figure out what's broken
[18:56] AlbertA: disregard the citrain failure message, looks like your package was uploaded just fine.
[18:56] AlbertA: let me know when you're ready to publish that
[18:56] kenvandine, what are "otto tests"? running the autopilot tests on an otto setup?
[18:56] seb128, and when we do put system-settings on the desktop, what we test will probably be different... so those desktop tests will be more about testing it for the desktop
[18:56] not testing the phone settings on an x86 desktop iso
[18:57] yeah, autopilot tests run on a base x86 desktop iso
[18:57] robru, in case you missed it, see ted's trainguards ping above please
[18:59] robru: yeah it's still building armhf in the ppa
[18:59] tedg: sorry for the delay, rtm2
[19:00] seb128, part of our problem could be the state of mir, unity8, etc for desktop in vivid
[19:00] those tests run with all that
[19:02] kenvandine, I think we should start running desktop tests if we can, we don't especially want to stop on failures though
[19:02] just get things going
[19:03] so we should keep investigating what's causing the otto failures?
[19:03] at least some of them seem to include unity8 crashes
[19:03] but others don't, with similar results
[19:03] so actually, these tests don't test against desktop-next
[19:03] they test on the stock x86 desktop iso
[19:04] perhaps otto should be using the desktop-next iso?
[19:05] well, not sure
[19:05] I thought CI wanted to replace otto
[19:05] or provide a new infra for desktop testing
[19:05] ev said that was on the roadmap for September
[19:05] I guess that's got delayed
[19:05] but it's worth checking with them
[19:05] in any case we need to start testing unity8 desktop-next
[19:05] and apps in desktop mode
[19:06] seb128, yeah, but does testing settings on the desktop iso add value right now?
[19:06] we don't even know why it's broken yet...
[19:07] who knows, maybe it's because we aren't using desktop-next :)
[19:07] kenvandine, not really, it's going to fail because things like indicator-network are not running
[19:07] or do we mock those?
[19:07] ah... not really
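For the "scrolling to where we need to touch" point at [18:49], a hypothetical page-object style helper that flicks the page until the target item is on screen before clicking it. Everything here (names, the globalRect unpacking, the flick distance) is an assumption for illustration, not the real ubuntu-system-settings helpers.

```python
from autopilot.input import Pointer, Touch


def click_after_scrolling(page, item, max_flicks=5):
    """Drag `page` upwards until `item` is fully visible, then click it.

    `page` and `item` are autopilot proxy objects; globalRect is assumed
    to unpack as (x, y, width, height).
    """
    pointer = Pointer(Touch.create())
    for _ in range(max_flicks):
        page_x, page_y, page_w, page_h = page.globalRect
        item_x, item_y, item_w, item_h = item.globalRect
        if item_y + item_h <= page_y + page_h:
            break  # the whole item is inside the page's visible area
        # Flick up roughly half a page and re-check the geometry.
        mid_x = page_x + page_w // 2
        pointer.drag(mid_x, page_y + page_h * 3 // 4,
                     mid_x, page_y + page_h // 4)
    pointer.click_object(item)
```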
[19:08] well in any case we want to test those under desktop-next
[19:08] we need to
[19:08] but jgdx struggled a bit with that
[19:08] tedg's branch we are trying to land includes mocking the sound indicator
[19:09] seb128, the otto tests do run unity8 though
[19:09] so it should have the indicators
[19:09] well, in any case testing under unity7/otto is better than nothing
[19:09] unless something has changed about the way that runs on desktop
[19:09] it's not unity7
[19:09] hum, k
[19:09] so dunno
[19:09] I don't know enough about the otto setup
[19:09] we see lots of dbus errors in the logs about qtmir
[19:10] can we run that locally to debug?
[19:10] didrocks knows how to :)
[19:10] i don't
[19:10] https://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-vivid-fjg/4/console
[19:10] my understanding was that otto was not properly supported by CI
[19:10] seb128, ^^ a log
[19:10] so maybe that's something that needs to be resolved on the CI side first
[19:10] like they need to provide us a framework we can run desktop tests on
[19:12] seb128, yeah, so i still think in its current state, it doesn't provide as much value as it takes to make it work
[19:12] right
[19:12] we clearly need something for desktop mode testing though
[19:12] we need some CI infra that allows us to boot unity8-mir and run tests
[19:12] otto is vm based right?
[19:12] and mir doesn't run in vms?
[19:12] seb128, so is that a +1 to disable the current otto tests until we can sort out the right solution?
[19:12] how do we manage to run unity8?
[19:13] no idea :)
[19:13] these passed in utopic though
[19:13] maybe something mir related has changed and unity8 isn't starting right?
[19:13] or we're missing depends?
[19:13] well, mir never worked in VMs afaik
[19:13] the logs aren't very useful :(
[19:13] yeah, so not sure how this worked
[19:13] but yeah, +1 from me to disable the tests
[19:14] we need to resolve the desktop testing problem
[19:14] but we need to start with a clean board
[19:14] with step 1 having a proper infra booting a desktop-next iso
[19:14] fginther, ^^ lets disable them
[19:14] seb128, also in the settle tests i see the load is quite high in the otto tests
[19:14] ok, on that note need to go
[19:14] could be related, something spinning out of control
[19:14] good night seb128!
[19:14] thanks
[19:15] yeah, it feels like we don't have a solid base
[19:15] difficult to have data
[19:15] let's reboot that setup and see where we get
[19:16] pmcgowan, bug 1396305
[19:16] bug 1396305 in ubuntu-system-settings (Ubuntu) "[autopilot] tests expect orientation to be portrait" [Undecided,New] https://launchpad.net/bugs/1396305
[19:16] we should take care of that when we refactor our testing over time
[19:40] robru: ok mir package has been built - sanity checked, ready to publish (silo 009)
[19:40] AlbertA: alright, here goes...
[19:41] kenvandine, ack
[19:41] kenvandine, thanks for digging on that
[19:42] fginther, np, we surely appreciate all your help too
[19:43] tedg, now manually testing the silo i noticed i get no sound... but now that i think about it i noticed a call the other day where the ringer never rang, i just saw it
[19:43] anyone know if there is a known problem with sound on vivid/mako?
[19:47] AlbertA: yeah I'm gonna have to wrangle this. gimme a few minutes...
=== fginther changed the topic of #ubuntu-ci-eng to: Need a silo? ping trainguards | Need help with something else? ping fginther | Train Dashboard: http://bit.ly/1mDv1FS | QA Signoffs: http://bit.ly/1qMAKYd | Known Issues: RTM Archive frozen (no new silos landing) ! RTM cron builds disabled
[19:52] robru, Thanks, building!
[19:54] tedg: you're welcome
[19:58] ogra_: any news regarding the cgproxy issue?
[19:58] jibel: ^ ?
[20:07] AlbertA: ok I think it's happening...
[20:08] sil2100, nothing from me. I cannot reproduce with gdb attached
[20:08] AlbertA: I gotta step out for lunch. When you see version ...1125 show up at http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#mir then we're golden.
[20:09] robru: thanks!
[20:13] olli, jibel: so I guess we stick with #169, right? Did sanity testing pass on this? Did we start regression testing?
[20:14] yes, that's the plan of record
[20:21] olli, davmor2, jibel: so, I see #169's mako equivalent failed sanity testing...
[20:21] Ah, wait, no
[20:21] Correction
[20:21] The es image failed sanity testing
[20:22] What does that mean? Do we need a fix landing before we can promote?
[20:22] sil2100, otp, give me a few min
[20:22] Sure
[21:02] sil2100, done
[21:04] sil2100, so do we know why this only happens in the ES img?
[21:32] sil2100: you still around? can you help me interpret mir in http://people.canonical.com/~ubuntu-archive/proposed-migration/update_output.txt ? I guess we need a seed update? does that sound right?
[21:40] robru, can you please publish vivid silo #1?
[21:41] jhodapp: please approve your merges: https://ci-train.ubuntu.com/job/ubuntu-landing-001-2-publish/51/console
[21:42] robru, oh hrm...
[21:42] jhodapp: oh yeah, one's superseded. I guess we have to reconfigure and rebuild with the new MP.
[21:43] robru, let me double check, one sec
[21:45] robru, where are you seeing a superseded MR?
[21:46] jhodapp: https://ci-train.ubuntu.com/job/ubuntu-landing-001-2-publish/51/console one of these two has been superseded.
[21:49] robru, argg, you're right...this silo will never land! ;)
[21:52] robru: that's strange....why does ubuntu-touch have a dep on libmirplatform3driver-android ?
[21:53] AlbertA2: I dunno, maybe indirectly? maybe some other component deps on it? can you check the rdeps and see if you can find it? I'm a bit busy
[21:54] robru: yeah I'll check it
[21:54] AlbertA2: thanks
[21:59] robru, I need justinmcp_ to make a change before we can finish landing, so no worries for right now
[21:59] jhodapp: ok cool
[22:12] robru: it does look related to the seeds. I'll ping ogra_
[23:33] rsalveti: line 59 on the sheet
[23:48] robru: update_output says that ubuntu-touch becomes uninstallable for the main archs
[23:49] sil2100: that's what I thought but I wasn't sure.
[23:49] sil2100: thanks for clarifying, that file is a mess.
[23:50] robru: yeah... there's not much documentation for it either, sadly
[23:50] o/