[01:10] rsalveti, so is that a no on silo 18?
[01:10] rsalveti, my silo 25 landing is a bit stuck, held in proposed
[01:55] kenvandine: yeah, when testing silo 18
[01:55] kenvandine: taking longer than expected, can't easily connect with a few bt devices I have
[01:55] have to check individually if that could have caused a regression or not
[02:05] === IMAGE 110 building (started: 20150224-02:05) ===
[02:14] rsalveti, ok, thx
[02:18] GRRR
[02:18] Setting up python3-distupgrade (1:15.04.8) ...
[02:18] File "/usr/lib/python3/dist-packages/DistUpgrade/DistUpgradeViewKDE.py", line 37
[02:18] from PyQt5.QtGui import QTextOption, QPixmap, QIcon,
[02:18] SyntaxError: trailing comma not allowed without surrounding parentheses
[02:19] settings held up in proposed because whatever package that provides that keeps getting uploaded with silly python syntax errors
[03:05] === IMAGE RTM 243 building (started: 20150224-03:05) ===
[03:30] === IMAGE 110 DONE (finished: 20150224-03:30) ===
[03:30] === changelog: http://people.canonical.com/~ogra/touch-image-stats/110.changes ===
[04:15] === IMAGE RTM 243 DONE (finished: 20150224-04:15) ===
[04:15] === changelog: http://people.canonical.com/~ogra/touch-image-stats/rtm/243.changes ===
[07:37] * Mirv kicked some autopkgtests ^
[08:26] cihelp: hey! I'm looking back since the CI move in december) to the new infra for ubuntu make tests and ps-trusty-desktop-i386-1 is down (disconnected in s-jenkins), any chance to get it back up?
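The `SyntaxError` quoted at 02:18 is reproducible without PyQt5 at all: in Python 3, a trailing comma after a `from … import` name list is only legal when the list is parenthesized. A minimal sketch of the failure and the fix, using `os.path` names instead of `PyQt5.QtGui` so it runs anywhere:

```python
# Reproduce the DistUpgradeViewKDE.py failure: a trailing comma in a
# bare "from ... import" list is a SyntaxError; wrapping the names in
# parentheses makes the trailing comma legal.
broken = "from os.path import join, split,\n"
fixed = "from os.path import (join, split,)\n"

def compiles(src):
    """Return True if `src` is syntactically valid Python 3."""
    try:
        compile(src, "<string>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles(broken))  # False: "trailing comma not allowed without surrounding parentheses"
print(compiles(fixed))   # True
```

The fix that eventually landed in the package would simply parenthesize the import list in DistUpgradeViewKDE.py.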
[08:28] the job that reverts the vms are connecting to naartjie, but since the move to the new vpn, I wonder how to connect to it… (ssh naartjie doesn't work anymore, I guess it's a domain issue)
[08:30] ok, seems that I could restart it myself adding .ubuntu-ci, thanks nevertheless
[08:31] ah, was just looking
[08:32] making a note to have a nagios alert for that
[08:32] ev: oh, that would be awesome, for $random reason, they were failing to restart jenkins-slave like once a week
[08:33] ev: btw, as I can restart the machines, do you mind if I create a newer snapshot? The current one is really old and dist-upgrade takes 40 minutes
[08:33] go for it
[08:33] thanks, I was wondering if disk space would be impacted
[08:34] 411G free
[08:34] should be enough :p
[08:34] thanks!
[08:37] didrocks: can you tell me a little bit more about exactly what was failing and for how long, and how this impacts you? You got in there a bit quicker than I could dig :)
[08:37] and I need to write this up in a way that points at the bigger problem
[08:40] ev: oh sure, basically since the new config in december, I was wanting for the vms to be up again to have my daily tests of ubuntu make running against trusty (trunk and trusty packages). This wasn't a big issue as I tend to run them manually myself as well and my connexion is quite good.
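The `.ubuntu-ci` workaround didrocks found at 08:30 can be made permanent with an ssh client config entry. A sketch only: the host and domain names come from the conversation above, and it assumes internal hosts simply resolve under the `.ubuntu-ci` search domain after the VPN move:

```
# ~/.ssh/config — hypothetical entry; lets a bare "ssh naartjie"
# keep working by expanding to the fully qualified internal name
Host naartjie
    HostName naartjie.ubuntu-ci
```

An equivalent alternative would be adding `search ubuntu-ci` to the resolver configuration, which fixes every host at once rather than one ssh alias at a time.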
[08:41] ev: those machines are updating to latest trusty, running tests, and then reverting to the snapshot + rebooting
[08:41] ev: I know they are temporary solutions until you have a better way to tests things which needs a GPU + hw acceleration
[08:41] basically, the issue was the with the old vpn setup, I was able to ssh to naartjie directly (as the added domain with .ubuntu-ci)
[08:42] to restart the vm myself
[08:42] so, now, I just need to add the domain myself
[08:44] cihelp: sil2100: in SDK team we noticed mako UITK results regressing after 20150212, but since krillin didn't regress it does not look to be landing specific (and certainly not UITK landing), so does anyone know how mako broke on those Thu/Fri landings?
[08:44] or if it was landing specific, then it would have been mako only landing. but I was thinking maybe something in mako testing changed.
[08:48] cihelp another topic, SDK team had last successful merge to uitk staging on Friday, but after that we're getting constantly ~40 failures, and again we're puzzled and have no idea how is that. see for example https://code.launchpad.net/~zsombi/ubuntu-ui-toolkit/82-dragging-mode/+merge/246128 but also in any "no-op" branch MP.
[08:48] sorry for spamming with multiple things, but just queue them up :)
[08:49] the "no-op" branch would be eg https://code.launchpad.net/~timo-jyrinki/ubuntu-ui-toolkit/clean_up_build_dependencies_cruft/+merge/249461
[08:49] earlier failures in that branch was a different thing that got fixed, but then I didn't retry until it was too late..
[09:39] trainguards: I’ll need a binary copy of oxide-qt from https://launchpad.net/~phablet-team/+archive/ubuntu/ppa/+packages to silo 3 (version 1.5.3-0ubuntu2 removes the build dep that was in universe)
[09:42] oSoMoN: ok, note that you unfortunately would need to file FFe as well as per latest mailing list discussions, not having the blanket FFe (+ Oxide is on desktop anyway)
[09:42] since FF was last Thu
[09:43] Mirv, right, will do that. no need for the FFe for the copy to happen though, right? it will only block actual landing, right?
[09:43] oSoMoN: exactly
[09:43] good
[09:49] trainguards: can I have a silo assigned for line 49, please?
[09:55] oSoMoN: on those, finished a meeting
[09:57] mvo: not top-approved https://code.launchpad.net/~click-hackers/click/devel/+merge/250584
[09:57] mvo: if you don't use such a process, then just top-approve yourself
[09:58] Mirv: thanks, I will top-approve myself, there is a bit of a reviewer shortage for click currently
[10:00] Mirv: let me look at your pings
[10:01] psivaa_: thanks :) no hurry, but both seem real problems SDK team itself does not have power upon
[10:01] mako being broken in general or MP:s having problems after Friday
[10:02] sil2100: focus is on vivid alright, we're out of silos! :)
=== marcusto_ is now known as marcustomlinson
[10:04] Mirv: http://ci.ubuntu.com/smokeng/vivid/touch/mako/101:20150216:20150210/12311/ubuntuuitoolkit/ suggests uitk has been failing earlier than Thurs/ Fri on smoke
=== vrruiz_ is now known as rvr
[10:11] psivaa_: ok. how did then eg https://code.launchpad.net/~seb128/ubuntu-ui-toolkit/gettext-application-name/+merge/249665 land on Friday?
[10:12] psivaa_: or https://code.launchpad.net/~aacid/ubuntu-ui-toolkit/nonsquareicons/+merge/250110 - all of those seem to have first failure then some "without results" PS Jenkins bot continuous-intregration approve?
[10:13] Mirv: that's worrying
[10:13] Mirv: not sure who/ how that was possible
[10:17] Mirv: I dont think we could do anything about the failures, but i'll certainly raise it to the team to see how that was 'Approved' by PS-jenkins-bot
[10:17] bzoltan_: ^ some news at least. it'd look like the last week's landings got in "by mistake", sort of, via erronous mystery Approve from CI, and now they are back to failing because tests fail on mako
[10:18] psivaa_: so, it's possible that from our side the core problem is that mako is broken, and not because of UITK but something else, since 20150212
[10:18] psivaa_: notably the tests don't fail on krillin, and there was no change on krillin on that date when mako started failing
[10:24] Mirv: hey, still missing line 51
[10:24] psivaa_: thanks for investigating, now in general we have the problem "mako is broken" but not really sure who could point out in which way
[10:24] it was two silo requests, think I could get that one going too?
[10:24] if you dont mind
[10:24] ricmm: yeah, sorry, we've all 31 silos in use, I'm just about to free one
[10:24] oh, ok
[10:24] didnt see, sorry
[10:26] Mirv: yea, there are a couple of things..
[10:26] Mirv: 1. in mako there is a qmlscene crash
[10:26] Mirv: 2. the tests are device specific too
[10:27] lool: is row 7 "Fix platform-api dep to allow multiarch install for cross-builds" ok to free up, it has not been touched for 1 month 2 weeks and we are out of silos?
[10:29] Mirv: I'll be cleaning up some silos now as well
[10:30] psivaa_: can you give me a link to the qmlscene crash, so I can make sure there's a bug filed to whatever is causing it?
[10:30] sil2100: thanks
[10:31] Mirv: https://jenkins.qa.ubuntu.com/job/vivid-touch-mako-smoke-daily/325/artifact/clientlogs/ubuntuuitoolkit/_usr_lib_arm-linux-gnueabihf_qt5_bin_qmlscene.32011.crash/*view*/
[10:34] Mirv: as per how to land those MP's that have the failures, i'd ask in the team if that was done by one of us and if that was done for a specific reason
[10:35] hmm, let me rephrase, i've combined two of my sentences there
[10:36] Mirv: as per how to land those MP's that have failures, i dont think we could do it in the auto-* mode (because of the failures)
[10:36] and as to how the MP's were allowed to land with the failures, i'd ask within the team
[10:37] psivaa_: yes, the last week's landings shouldn't have landed, kind of, even though it's not that team that broke the mako. if it was not CI changing anything on mako, then it would have been either http://people.canonical.com/~ogra/touch-image-stats/20150213.changes or http://people.canonical.com/~ogra/touch-image-stats/20150213.1.changes that broke mako
[10:37] lool: hey! Regarding silo 22 in vivid - there has been no movement since a month, is it ok to clear it ouot?
[10:38] sil2100: I just asked lool like 10 minutes ago ^ :)
[10:38] erf
[10:38] sorry folks
[10:38] I completely forgot about this silo
[10:38] it's a completely risk free change (no code change), but I didn't follow the full process of testing the actual binaries, so couldn't push the publish button
[10:39] lool: so do you want to update and work on it now? (the package itself is superseded currently)
[10:40] Mirv: right, something that uitk tests depend on might be causing the failures
[10:41] Mirv: it could be that UITK that did not 'break' mako, but it was UITK tests that were failing and in that situation UITK mps should not have been allowed to land.
[10:43] psivaa_: yes so indeed there were no UITK changes when those mako failures appeared in UITK tests in the dashboard, and now new MP:s can't go in because something broke mako UITK tests (but not krillin) from outside of SDK. and now "just" the culprit should be found.
[10:46] Mirv: yep, the culprit should be found (and should have been found before force landing the MP's :))
[10:56] * Mirv is sorry he has 5 vivid silos, all for good reasons though :(
[10:57] Mirv: ;)
[10:57] or 6, but I'm freeing up that qtpim now that it's most probably vivid+1
[11:02] Who would have thought that we might start being low on silos again even with 30 silos ;)
[11:08] Building more roads always leads to more cars
[11:21] I wonder what would be the equivalent of public transport, or biking
[11:21] I think this analogy doesn't carry to the end :)
[11:29] Laney: actually, we built more silos which resulted in more rails and more trains driving those..!
=== _salem is now known as salem_
[12:00] I'm seeing this interesting error [1] when running phablet-test-run. All tests fail.. [1] http://pastebin.ubuntu.com/10387874/
[12:02] mzanetti: ping
[12:03] sil2100: hey
[12:04] mzanetti: so, I'm looking at the unity8 silo that's ready for vivid now
[12:04] mzanetti: https://code.launchpad.net/~mzanetti/unity8/reveal-launcher-with-mouse-hover/+merge/248913 <- I need to double check, is that a new feature?
[12:04] * sil2100 hates the FFe in this regard
[12:04] *FF's
[12:07] sil2100: well... the launcher can't be revealed by mouse... from a phone point of view it might be a new feature... from a desktop point of view it fixes the launcher...
[12:07] do I need to pull it out again?
[12:07] it's been in the silo since last week already :/
[12:08] grrr
[12:08] Not sure, we don't have the blanket FFe so I don't know if this won't break feature freeze
[12:08] Let's leave it there, let me get some info
[12:09] If anything we can just fill in a single FFe
[12:09] It's really a bit silly to have an FFe for things like Unity8 for the desktop...
[12:10] yeah well... that's what it is... so if I need to pull it out again I can do so... I thought I'd give it a shot as it's a small branch and it is a fix for the desktop mode...
[12:16] psivaa_: the qmlscene crash seems once again "something happened before this crash, hence connection rejected" which have had earlier and would need I think the upstart logs for apps from the point when it happened. bug #1425034
[12:16] bug 1425034 in qtdeclarative-opensource-src (Ubuntu) "qmlscene crashed with SIGABRT in qt_message_fatal()" [Medium,New] https://launchpad.net/bugs/1425034
[12:19] cihelp: Could someone please take a look at these failures: http://s-jenkins.ubuntu-ci:8080/job/unity-scopes-api-ci/
[12:19] They are definitely not caused by problems in our code.
[12:20] michi: I am on it
[12:20] cprov: Thank you!
[12:24] michi: it looks like the cloud workers are not behaving correctly, I will have to dig more to find what is wrong. Once it is sorted do I have to do anything else than retry the job ?
[12:25] michi: also, is it blocking you to do anything urgent ?
[12:25] cprov: No, not really. If you could re-start the three failed jobs, that woud be great.
[12:25] No, not blocking us for anything mission critical.
[12:25] michi: okay, will do, thanks for the info.
[12:25] But I do wonder why the system doesn’t detect that kind of failure by itself.
[12:26] There is clearly some problem in the infrastructure that is unrelated to the branches we are trying to land.
[12:26] But no-one seems to notice unless we shout.
[12:27] How hard would it be to detect this kind of failure with robot that alerts you when something isn’t working?
[12:31] mvo: click seems to have some problems with autopkgtests http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#click
[12:31] Mirv: yeah, I just noticed that, I will do a new 0.4.38.1
[12:31] Mirv: thanks!
[12:31] Mirv: or should I do a 0.4.38 and override the existing one?
[12:33] Mirv: nevermind, I updated devel and will prepare a new release
[12:34] michi: the cloud workers and their jobs could benefit of better monitoring, then we could possibly detect some of those weird and unknown malfunctioning issues. We are working on it.
=== MacSlow is now known as MacSlow|lunch
[12:35] cprov: Well, it would be really nice if the system would realized when things aren’t working. We have a long history of suffering from this kind of issue, going back at least 12 months. My conservative estimate is that around 50% of all our CI failures were caused by infrastructure problems.
[12:36] Basically, what I’m saying is that “CI breaks all the time”. I’m sorry for being so harsh.
[12:36] mvo: it's in -proposed so you need to bump something anyway. btw I was slightly worried about that ignore option being a new feature..
[12:37] mvo: which would need FFe if it is
[12:37] Mirv: uh, indeed, for some reason I thought of it as a bugfix but you are right of course
[12:39] michi: well, I fully understand your complaints and it will get fixed, because we are working to mitigate the impacts on you.
[12:39] Mirv: I will ensure I get a FFe
[12:39] cprov: I do appreciate the efforts, and I realize that I’m most likely ranting at the wrong person.
[12:40] But, please, if you can, pass this feedback up the chain: CI has been a long-standing problem for us, causing us many, many hours of wasted time.
[12:41] michi: I sure will.
[12:41] We have no choice: we have to squeeze through the CI gate, like it or not.
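michi's 12:27 question — a robot that notices infrastructure failures by itself — is essentially log classification: distinguish "cloud worker misbehaved" from "tests genuinely failed", then alert or auto-retry on the former. A minimal sketch; the patterns below are invented illustrations, not real s-jenkins log strings:

```python
# Hypothetical sketch of an infra-failure detector for CI jobs:
# scan a failed job's console log for known worker/network error
# signatures so those runs can be retried and alerted on, instead
# of being mistaken for code problems by the submitter.
import re

INFRA_PATTERNS = [
    r"slave went offline during the build",
    r"Connection refused",
    r"No space left on device",
]

def is_infra_failure(console_log: str) -> bool:
    """Return True if the log matches a known infrastructure signature."""
    return any(re.search(p, console_log) for p in INFRA_PATTERNS)

print(is_infra_failure("FATAL: slave went offline during the build"))  # True
print(is_infra_failure("AssertionError: expected 3 scopes, got 0"))    # False
```

Hooked up to Jenkins' JSON API and a nagios check, something this simple would already catch the "50% of failures are infrastructure" class of problem before users have to shout.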
But that puts CI at roughly the level of importance of a compiler. If it’s broken, we are hosed.
[12:42] mvo: thanks!
[12:43] michi: it's supposed to be like this and we should support a reliable CI infrastructure to you guys, it's on our job description ;-) With your help and feedback we will get things sorted.
[13:08] mzanetti, what's the trouble with unity8 tests?
[13:18] brendand: troubles?
[13:19] mzanetti, jibel said there were some dashboard failures you needed help figuring out
[13:19] oh... that's news to me...
[13:19] need to look
[13:21] mzanetti, not dashboard failures, it's related to tsdgeos reply on the ML, saying that him and you have different results for AP test of unity8 on the same devices and same code
[13:22] if we gate on automated tests, we have to understand where these failures come from
[13:23] jibel: well... so far this has been my experience with autopilot tests all the time
[13:23] they fail randomly sometimes
[13:24] but yeah... I have one failing test here on a device of mine which doesn't happen for tsdgeos nor in jenkins
[13:24] not sure yet why
[13:25] mzanetti, i'm here to help if you need anything
=== alan_g is now known as alan_g|lunch
[13:37] michi: just double-checking, http://s-jenkins.ubuntu-ci:8080/job/unity-scopes-api-vivid-armhf-ci/121/consoleFull seems to be a legitimate test failure, correct ?
[13:42] mzanetti, is test_can_unlock_passphrase_screen the one you sometimes see fail?
[13:50] brendand: the one I had yesterday/today is unity8.shell.tests.test_emulators.DashAppsEmulatorTestCase.test_get_applications_should_return_correct_applications
[13:54] mzanetti, do you have a paste or link of the error?
[13:55] brendand: http://paste.ubuntu.com/10389462/
=== alan_g|lunch is now known as alan_g
[13:57] mzanetti, do you observe that the device is doing when that test is reached?
[13:58] s/that/what/
[14:04] brendand: sorry. IRC notifications broken atm here.
[14:04] brendand: so I see that it tries to move the scope, but not far enough, so it snaps back
[14:05] so I'd know how to fix it I guess, but the question remains why it's only failing here
[14:05] (running on mako btw)
[14:05] indeed strange
[14:17] is the weather channel scope working for others on bq/rtm?
[14:19] seb128, works from "today"
[14:19] not here, but I'm unsure how to debug :-/
[14:20] does location work ?
[14:20] might not be getting location data
[14:22] shrug
[14:27] I have an excellent example if someone ever asks me to commit something to Qt upstream for him/her, "just one line"... https://codereview.qt-project.org/#/c/63026 - 1.5 years and counting to get that "just one line" in :)
[14:28] bug filed against Qt 5.1.1 originally, and we've been carrying the patch since :)
[14:28] I think now since there's even newer way of reading environment variable, it's going to satisfy everyone
[14:31] * Mirv added a FAQ to the contributing page, that should do it
=== MacSlow|lunch is now known as MacSlow
[14:34] trainguards: can silo 25 be published, please?
[14:34] oSoMoN: looking o/
[14:41] so it looks like something has removed the "x" bit from the click debian/packagekit-check during a jenkins rebuild :/
[14:43] sil2100, who will approve the FFe's for touch ?
[14:44] i assume we dont really want to give that to the release team but rather have the product team device
[14:44] s/device/decide/
[14:44] *tsk*
[14:44] pmcgowan, ^^^
[14:44] ogra_: normally I would say slangasek would be our man, but he still didn't comment on the blanket FFe I filled in ;p!
[14:44] heh
[14:45] Since he's aware of touch business and is an archive admin
[14:45] sil2100: just a note, you need release team member, not archive admin :)
[14:45] right, still, i think the product team should be involved in approvals
[14:46] ogra_, I think we can ask for exceptions but not approve them
[14:47] pmcgowan, well, the developer would ask for the exception ...
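The "snaps back" failure mzanetti describes at 14:04 is a classic drag-gesture flake: a horizontal drag must travel past some snap threshold (typically a fraction of the view width) or the scope animates back to where it started, and an absolute-pixel drag that clears the threshold on one device can fall short on another. A sketch of just that geometry; the function name and the 0.5 threshold are assumptions for illustration, not the real unity8/autopilot values:

```python
# Hypothetical model of the snap-back behaviour: a drag only "commits"
# to the next scope if it travels past a threshold proportional to the
# view width; otherwise the UI snaps back and the test's assertion fails.
def drag_commits(view_width: int, drag_px: int, snap_fraction: float = 0.5) -> bool:
    """Return True if a drag of drag_px pixels passes the snap point."""
    return drag_px >= view_width * snap_fraction

# The same 400px drag succeeds on a 768px-wide mako screen but would
# fail on a wider display — one way "only failing here" can happen.
print(drag_commits(768, 400))   # True
print(drag_commits(1080, 400))  # False
```

Expressing the drag distance relative to the widget's own geometry (rather than hardcoded pixels) is the usual fix, which matches mzanetti's "I'd know how to fix it" remark.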
but i think the product team has a way better overview of how intrusive a feature addition is than the release team has
[14:47] didrocks: right ;) In any way, slangasek is the man ;p
[14:47] ogra_, agreed
[14:47] ogra_: right, but FF and FFe's are processes tightly controlled by the release team
[14:48] ogra_: if the product team would be the one controlling that for touch, we probably wouldn't have FF at this time at all
[14:48] sil2100, sure ... but the product team is responsible for the final product ... they should be gating FFe's
[14:48] Since FF makes sense for the selected cycle, not for ubuntu-rtm or product release
[14:48] well, vivid is a product release
[14:49] and tedg sounds like he plans to file a lot of FFe's
[14:49] Yeah, and I hope he has all of them discussed with his manager
[14:49] i think they should be going in front of the product team first ... before they get handed to the release team
[14:49] And the managers know best what is the current focus and what should be worked on
[14:50] Currently all managers know that we're focusing solely on stability
[14:50] ogra_, For things like indicators, for instance, there are multiple products involved. One archive.
[14:50] * ogra_ doesnt trust managers :P
[14:51] ouch
[14:51] at least if they decide for their own team if $shiny_feature_they_promised_6_months_ago should still go in
[14:52] I think that the release team is in a good position to understand the impact of a change.
[14:52] the release team has no clue what impact a change has to the customer or the planned product
[14:52] only the product team gas
[14:52] *has
[14:52] If there is different impact for different targets, the release team should be given info on that, not usurped.
[14:53] the release team will decide from a distro POV
[14:53] not from a product POV
[14:53] Is the disto not a product?
[14:54] You're saying they decide from the product of Ubuntu Desktop, not the product of XYZ Phone.
[14:54] i'm not agaiinst FFe's (though it should really really be a rare exception, the release schedule is well known since months) ... but the phone is quite different from a product POV
[14:54] which makes me think the product team should be the initial gatekeepter ... before it goes to release team
[14:54] Well, there are different things here
[14:54] There's vivid and there's RTM
[14:55] tedg, right, thats what i mean
[14:55] More hoops is never a better solution :-)
[14:55] more hoops for a super rare thing is fine
[14:55] The release team has and might have some knowledge on the vivid product, even for touch, but RTM works different and has different timetables and needs
[14:55] FFe's aren't super rare. If nothing else, look at all the crap that landed in RTM after it was "feature froze"
[14:56] right, we need to stop that
[14:56] an FFe is an exception ... not the rule ...
[14:56] sil2100, Sure, so individual products can branch and control that for customer deliveries. We shouldn't impose those on the main archive.
[14:57] if we are bound to the main archive with a product we have to
[14:57] vivid RTM will essentially be vivid
[14:57] which means vivid needs to be at product quality
[14:57] Sure, but there's a process so people don't rush to meet the deadline but make sure it's done. And FFe's get harder and the time goes on. I'd expect one this week to be easy to get, next harder, etc.
[14:58] ogra_, I'm confused at how you're using "product" — Ubuntu Desktop is a product to me.
[14:58] ubuntu desktop si a different kind of product
[15:00] Do you consider it a lower quality product?
[15:01] no, i consider it a different quality product ... it needs to fullfil completely different goals
[15:03] desktop is a general purpose product ... if it works on 90% of the HW and 90% of the weird manually set up corner cases this is fine ...
such a thing isnt possible in a phone release where people can not fix stuff following howtos etc
[15:04] But we expect to do system image based desktops here Real Soon Now™, right? Seems like system image is the difference you're making to me.
[15:05] where did i say system image anywhere
[15:05] Certainly there is more HW variety, but we do have contracts that specify a list HW that should work.
[15:05] i'm talking about use cases
[15:05] i'm not even tallking about HW
[15:05] I'm confused then. I don't understand the distinction you're making.
[15:06] i'm talking about your mom .. who might buy a phone and wont be able to follow a howto to hack the fix in she needs
[15:06] while this is possible in the desktop case
[15:06] Well, my lawyer couldn't follow a howto in either case to fix a problem.
[15:06] they are massively different products with massively different purpose and we need to treat them like that
[15:07] He'd probably just go buy a Windows machine if he felt he needed to do that.
[15:07] right ... or an android phone
[15:07] the point is that he has the opportunity to follow a hosto in the desktop case if he wants
[15:07] he doesnt have that ability on the phone
[15:07] *howto
[15:08] I think his ability to follow a howto on both of them is exactly the same. Or, at least, we should assume that they are.
[15:08] especially in the case of a non rootable, fully locked down phone ...
[15:08] how ?
[15:08] you have no root, you have no access to the system
[15:09] bfiller, tested silo 11 and it works great
[15:09] you might not even be able to re-flash ... now dont tell me there is no difference to a desktop
[15:10] I think you respect my lawyer's technical ability more than I do. :-)
[15:10] pmcgowan: nice
[15:10] bfiller, I dont have access to the ci sheet
[15:11] pmcgowan: I'll mark it ready for QA
[15:11] bfiller, tested on mako with 195
[15:11] pmcgowan: is that an rtm image?
[15:11] yes
[15:11] tedg, i dont care about your lawyer ...
he could just ask you for help ... point is that not even you will be able too help him with a broken phone if it is locked down
[15:12] one is a consumer pproduct, the other is a general purpose product ... two different things
[15:12] ogra_, And I think we should make that same assumption for the desktop, else we're failing the general market.
[15:12] Desktops are consumer products today.
[15:13] And, they'll be more so tomorrow.
[15:13] sure, and there sill be a different quality standard applied if you buy a dekstop with preinstalled ubuntu
[15:13] vs what you download from cdimage
[15:13] and install yourself
[15:14] and thats the use case the release team cares about ...
[15:14] not the preinstalled one
[15:15] And the OEM team takes that and prepares a golden image for a particular device.
=== iahmad__ is now known as iahmad
[15:16] see
[15:16] so you agree the iso on cdimage is different from a properly shaped product ...
[15:17] the point for the phone this release is that vivid *is* our properly shaped product
[15:17] there wont be any time left to do special RTM stuff before it goes out to the manufacturer
[15:17] Then the release team needs to buy into that.
[15:17] Having two different processes won't work.
[15:17] bfiller, is there somewhere to add a description of the specific test to do on that silo?
[15:17] (there will surely be later ... but thats after we gave out the golden master)
[15:18] tedg, the release tea doesnt know the product team reqs.
[15:18] and shouldnt care
[15:19] * ogra_ wishes we would have stopped allowing FFe's long ago ... it used to be a very rare exception ... til uunity7 came around and teams simply ignored that an exception should be an exception
[15:20] I fail to see how the release team could successfully navigate changes going into the archive without knowing the requirements they were being measured on.
[15:21] they are measuring based on distro reqs.
not based on product reqs for a certain product for a certain vendor on a certain HW
[15:22] and not under the aspect that you cant change the installed system ...
[15:24] Are you concerned about that being because of confidentiality or because they don't care? I don't understand why you believe those should be separate.
[15:27] tedg, i dont say they should be separate ... what i'm suggesting is that a dev files an exception that the product team reviews ... if they approve they file an FFe that gets handed to the release team
[15:28] so you get the review from a product POV before it goes to the distro review
[15:29] So you think that every package that goes into a product the product team should get veto over the release team?
[15:30] for this particular usecase at this particular time, yes
[15:30] i.e. for the case where distro = RTM
[15:30] which we hopefully only have once
[15:31] I guess I'm a bit confused on that. You're saying that you believe in other cases the version given to a customer will be based on a release with customizations?
[15:31] tedg, my point is that we have less than 6 weeks and that there wont be time for RTM fiddling in advance (we can only do that later)
[15:32] tedg, so the vivid release for the phone needs to be treated like RTM
[15:32] Landing random features constantly?
[15:32] once we have merged vivid into RTM thats indeed all different
[15:32] I'm not sure what "treated like RTM" means.
[15:32] landing features only after review and selection
[15:32] like we do now
[15:33] (in RTM(
[15:33] Hmm, perhaps you have a different view than me. But I haven't seen that. All I've seen is random chaos WRT RTM.
[15:33] the set of to-land features gets reviewed once a week by a team from landing team and product teams
[15:33] for rtm
[15:33] and only approved stuff lands
[15:34] Right, for ubuntu-rtm there's usually a strict list of fixes we land with priority, all of which are reviewed by the product team and the landing team
[15:34] as long as vivid = RTM (i.e. the next 6-8 weeks) we need to do the same in vivid imho
[15:35] else we have no chance of delivering a product in the given timeframe
[15:35] Perhaps that meeting needs to send out minutes.
[15:35] i thik olli_ keeps the summaries somewhere in a google site document
[15:35] So they're encrypted and hidden.
[15:35] tedg: all the accepted fixes/changes have bugs that are marked with milestones
[15:36] tedg, well, obviously they did their work well enbough that you didnt notice it in your RTM work
[15:36] And the engineering managers are aware of the most critical ones of those and escalate them to developers
[15:36] :)
[15:36] tedg, ... relax
[15:36] ogra_: it's not in any spreadsheet right now btw. ;) There's one spreadsheet pmcgowan has, but it only overviews the 'most important ones' - launchpad bug milestones are enough to know what is worthwile to work on
[15:37] ogra_, I am not involved in that part of the product atm, pmcgowan is
[15:37] ok
[15:37] Just looking at the milestones and the priority of their bugs is enough
[15:37] ok
[15:37] oops
[15:37] no moa spreadsheets, as promised
[15:37] that second ok i didnt type !
[15:40] davmor2, jibel: how's testing going so far?
[15:40] Tell me, how bad is it?
[15:41] sil2100, only 15% done and 80% pass rate. We just received a custom tarball that we'll install manually to proceed with the real target
[15:43] sil2100, Honestly half the bugs on a milestone get moved, it's not clear looking at a milestone what is actually expected to land.
[15:45] tedg: those milestones rather stay as they are - if a bug is assigned to the canonical-devices-system-image project and has a milestone set, then it means it has been marked as 'good to land' for RTM
[15:46] sil2100, there are some bad bugs, like no second SIM (fix in progress) no keyboard on 1st boot, problems with indicators and notifications, the launcher, the clock, ...
[15:46] jibel: the no keyboard on first boot bug is that something like what we saw in ubuntu-rtm? i.e. one of the things that sometimes don't work on first boot?
[15:47] jibel: or is that reproducible everytime (i.e. at every reflash eva)?
[15:47] sil2100, yes. I reflashed and it didn't happen
[15:48] tedg, together with the summary that sil2100 sends out every day you should get a proper picture
[15:48] I usually include the milestone links in my e-mails ;)
[15:49] right
[15:49] My issue there is there is that many times I work on my part of a bug there, "it's critical and milestoned", only to find out that no one else is actually working on their part.
[15:50] So, it seems a better process is "wait for someone to ask, then ask if it's milestoned"
=== salem_ is now known as _salem
[16:25] jibel, ogra_, davmor2, robru, popey, rvr, john-mcaleely: I will have to skip today's evening landing meeting
[16:25] ok
[16:25] I skipped practice last week and would like to attend today
[16:25] sil2100, as usual
[16:25] np, there is a pub nearby with my name on it
[16:29] sil2100: man you need to get your priorities sorted ;) enjoy your training :)
[16:30] davmor2: sorry, I'm still working it out later after I'm back, so it's not that I'm skipping work ;)
[16:30] Let's say I'm aligning with the US-tz people this way
[16:32] sil2100: haha, I didn't say you were working :)
[16:32] weren't even
[16:43] so davmor2 can't tell when sil is working or not?
[16:46] john-mcaleely: well I know he'll work hard today martial arts training :)
[16:56] rsalveti: were you able to get that rtm seed updated yesterday with ui-extras added?
=== _salem is now known as salem_
[17:11] bfiller: yes - http://people.canonical.com/~ogra/touch-image-stats/rtm/243.changes
[17:46] sil2100: don't we require QA sign off for vivid now?
[17:46] no vivid cards at https://trello.com/b/AE3swczu/qa-testing-requests-for-questions-ping-eu-jibel-us-jfunk-nz-thomi-or-ubuntu-qa-on-ubuntu-ci-eng
[17:46] mine was archived
=== alan_g is now known as alan_g|EOD
[19:50] rsalveti: not yet, we'll start doing QA sign-off on Monday
[19:51] sil2100: alright
[20:38] cihelp: could I get added to the landing sheet? (currently I'm only in view-only mode)
[20:39] trainguards: ^
[20:39] kdub: sure
[20:39] kdub: just out of curiosity - which components will you be landing usually?
[20:40] mir
[20:40] kdub: on it
[20:41] kdub: let me check if you're in the train permissions as well
[20:41] sil2100, thanks
[20:43] kdub: should be all ok
[20:52] o/
[21:06] kdub: ok you got silo 7, just be aware of your conflicts with silo 0 (I know 0 is for testing; you may need to rebuild 0 after 7 publishes)
[21:06] robru, thanks
[21:06] kdub: let me know if you need any help running the train. you should have permission to start the build.
[21:06] kdub: you're welcome
[21:07] robru, and that silo is for vivid, right?
[21:11] kdub: yep
[21:11] thanks again robru
[21:11] kdub: you're welcome
[21:11] kdub: you can limit the dashboard to your own name to see just the stuff you care about: http://people.canonical.com/~platform/citrain_dashboard/#?distro=ubuntu&q=kdub
[21:12] kdub: or http://people.canonical.com/~platform/citrain_dashboard/#?distro=ubuntu&q=qtmir if you want to see conflicts as well
[21:12] robru, ah, handy
[22:21] robru: https://ci-train.ubuntu.com/job/ubuntu-landing-018-1-build/101/
[22:22] rsalveti: yeah, not sure what just happened there. try again?
[22:22] robru: already did
[22:22] crap
[22:22] rsalveti: something's wrong with sso...
[22:25] rsalveti: ok well, this is beyod my ability to fix, off to #webops!
[22:26] robru: alright :-)
=== pat__ is now known as pmcgowan
[22:39] rsalveti: ok wow, sorry about that! fixed, and re-ran your job: https://ci-train.ubuntu.com/job/ubuntu-landing-018-1-build/104/console
[22:40] robru: what was it?
[22:42] rsalveti: earlier today I discovered SSO was misconfigured, I fixed it incompletely at first, and then in my attempt to make a more permanent fix I managed to corrupt the global jenkins config rendering it incapable of running any jobs.
[22:42] rsalveti: including the job I would have used to fix it, hence why I had to escalate to webops there.
[22:43] oh, got it
[22:43] cool
=== salem_ is now known as _salem
=== _salem is now known as salem_
[23:33] brb, late lunch