[01:03] heh
[02:11] kgunn: terribly sorry about that. We saw this a mile back, but haven't been able to get an answer on whether we should replace those makos with BQs or somehow find more makos (i.e. build a time machine).
[02:11] kgunn: if you want to step in and make the call on that email ("BQ vs Mako", 22 Sept) that would be grand
[02:12] as I think Pat is out
[02:12] thanks for rescuing this, fginther
[02:19] robru, I can't connect either, came here to ask if I was the only one
[02:23] ev, you're welcome
[03:07] Ursinha: i got back in after a while, seems intermittent
[03:08] still no luck here :/
[03:08] ev: i have a personal mako m thinking of upgrading, how much you willing to pay for it? ;-)
[03:11] Ursinha: when I lost the connection it took me 2 hours before reconnecting was successful. want me to grab somebody from #webops to find you here on freenode?
[03:13] robru: did they have to do anything for you to be able to connect? I think that would be helpful, thanks :)
[03:13] Ursinha: nah I just ignored it until it fixed itself. but I'll ask for you
[03:14] robru: thank you :)
[03:14] Ursinha: you're welcome!
[03:15] Ursinha: pjdc referred me to barryprice, hopefully he's around to follow up with you
[03:16] robru, thanks!
[03:23] lol, I did a train rollout 2.5 hours ago and nobody has run any train jobs yet to test it.
[03:24] stupid lull between US EOD and EU morning
[03:28] robru: one loonie
[03:29] ev: plus shipping?
[03:29] well, I'll put a stamp on the loonie
[03:29] Lol
[03:29] and we'll just trust the post office
[03:30] What could possibly go wrong?
[03:32] they could send Benton Fraser down after me for attempting to bribe a Canadian national
[03:32] and bed
[05:27] running some...
=== chihchun_afk is now known as chihchun
[05:39] jibel: when you see Victor, would you please ping him to check the silo57 UITK. He misunderstood the logs.
[05:41] jibel: robru: I think we'd need checking for unbuilt revisions before publishing, to avoid spending QA cycles on a silo that would have needed a rebuild. 2/4 of this morning's silos were such.
[05:42] I guess I'd mean a constantly running job every hour or so, and an indicator on Bileto/Trello.
[05:43] Mirv: good call, can you file a bug?
[05:44] Mirv: I've been aware of this issue for a while, problem is that nothing runs after build but before publish to detect this stuff.
[05:45] Mirv: so something that scans every five minutes and reports these types of problems would be good
[05:46] Mirv: maybe check-publication-migration could be expanded to cover this, since it polls PPAs every five minutes but only for silos that were published. Could expand it to pre-publish checks
[06:00] robru: yeah a scanning one sounds good. I'll file a bug.
[06:58] robru, auditable logs is a nice new feature, thanks! is it complete or still in progress?
[07:05] jibel: well I'm sure there's issues to iron out but it's basically done.
[07:05] jibel: it'll look a bit better when I redesign the site. it was just a higher priority to get it functional, can make it pretty later
[07:07] jibel: what makes you ask if it's in progress? something wrong?
[07:07] I'm afraid certain silos will just be totally spammed with hundreds of messages, might have to simplify the number of status messages that jenkins uses.
[07:14] I suppose I should make some attempt to create audit log entries for old requests that don't have any but I'm not sure how feasible that would be, or how much value a "fake" audit log entry would be.
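The pre-publish scan Mirv and robru discuss above would essentially compare each merge proposal's current tip against the revision that was built into the silo. A minimal sketch with launchpadlib, assuming a `built_revisions` mapping of MP API URLs to the revno recorded at build time (how Bileto actually stores that is not described in the conversation):

```python
# Sketch only: flag merge proposals that gained commits after the silo was built.
from launchpadlib.launchpad import Launchpad

def find_unbuilt_revisions(built_revisions):
    """built_revisions: {mp_api_url: revno_at_build_time} (assumed input).

    Returns (mp_url, built_revno, current_tip) for every MP whose source
    branch has moved on since the build, i.e. a silo that needs a rebuild.
    """
    lp = Launchpad.login_anonymously('silo-scan-sketch', 'production', version='devel')
    stale = []
    for mp_url, built_revno in built_revisions.items():
        mp = lp.load(mp_url)                    # the merge proposal entry
        tip = mp.source_branch.revision_count   # current tip revno of the bzr branch
        if tip > built_revno:
            stale.append((mp_url, built_revno, tip))
    return stale
```

Run periodically (the five-minute cadence mentioned above), anything this returns could mark the silo dirty on Bileto before QA picks it up.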
[07:16] robru, there is no entry when the user performs an action for example like changing the status
[07:17] jibel: no, entries are only created by jenkins when jenkins is doing something.
[07:17] robru, I'd like to know who marked it ready, failed, passed, ...
[07:17] hmmmm
[07:18] jibel: that sounds feasible, can you file a bug against lp:bileto and assign it to me?
[07:18] robru, that was the bug you marked as duplicate
[07:19] robru, bug 1496432
[07:19] bug 1491163 in CI Train [cu2d] "duplicate for #1496432 Auditable Publish logs." [Critical,Triaged] https://launchpad.net/bugs/1491163
[07:19] I'll add more info
[07:19] not this one but the dupe
[07:20] jibel: ah it was ambiguous, I interpreted "actions" as "jenkins jobs", didn't consider also changing qa_signoff state.
[07:21] robru, no, for the user jenkins is a transparent piece of machinery
[07:22] robru, I'll deduplicate and add more info
[07:24] jibel: I think jenkins is maybe as transparent as mud ;-)
[07:24] :)
[07:24] robru, bug 1496432
[07:24] bug 1496432 in Bileto "Add status changes to the audit logs" [Undecided,New] https://launchpad.net/bugs/1496432
[07:25] jibel: ok I can probably do that during my shift tomorrow
[07:26] robru, not urgent but nice to have and useful to understand how things happen
[07:27] jibel: yeah it's a good idea, didn't even occur to me
[07:30] Mirv, hey, we'll pull the qtmir commit out when greyback's around, don't wanna waste the testing and QA time, and the additional change is rather small
[07:31] Mirv, unless you can force publish anyway?
[07:39] Saviq: please pull it, there's no force option for that
[07:39] (for publishing)
[07:41] ogra: when around, please run https://ci-train.ubuntu.com/job/ubuntu-landing-021-2-publish/ job (no packaging changes, but manually uploaded main package)
[07:55] ogra: and for the first time ever, train generated a real live diff for oxide, which you can review at https://ci-train.ubuntu.com/job/ubuntu-landing-021-2-publish/94/artifact/oxide-qt_content.diff/*view*/
[07:58] greyback, hey, can you pull the last change from qml cache branch, I didn't notice and didn't rebuild when you pushed it
[08:00] Saviq: you want me to revert that change? What's wrong with rebuilding it (you'd need to retest?)
[08:00] greyback, me, and QA team
[08:00] Saviq: alrighty. Will need you to reapprove the MP
[08:00] greyback, so yes, please revert, and we'll land separately (with a test verifying you only clear cache once?)
[08:00] ack
[08:02] Saviq: pushed
[08:03] trying again
[08:03] greyback, you'll need to uncommit and overwrite instead
[08:04] right
[08:04] Saviq: ah ok
[08:04] greyback, otherwise train's always gonna find new commits
[08:04] so rev 380
[08:04] done
[08:04] greyback, thanks
[08:05] Mirv, ok, last time (fingers crossed)
[08:05] the second last time already
[08:06] Mirv: wat
[08:06] looking good now
[08:06] robru: just a new commit that needed uncommitting so that train can publish what was built
[08:07] robru: nothing to worry about
[08:07] Mirv: oh ok. I was just looking at the audit log and I was like "you didn't build this, stop it"
[08:07] Saviq: greyback: it's good now.
[08:07] Mirv, great, thanks
[08:10] a single publication just marked 8 silos dirty, good god people
[08:12] Mirv: hmmm, so if I implement a continuous scanning check that would mark silos dirty due to having unbuilt revisions, removing the offending commit wouldn't un-mark it dirty... you'd be forced to rebuild.
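For the status-change logging requested in bug 1496432, an entry only needs who, when, which field, and the old/new values. A minimal sketch, with invented field and file names rather than Bileto's real schema:

```python
# Sketch only: append a status-change entry to a request's audit log.
import json
from datetime import datetime, timezone

def record_status_change(log_path, user, field, old_value, new_value):
    entry = {
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'user': user,
        'field': field,            # e.g. 'qa_signoff' or 'lander_signoff'
        'old': old_value,
        'new': new_value,
    }
    with open(log_path, 'a') as log:
        log.write(json.dumps(entry) + '\n')

# e.g. record_status_change('request-NNN.audit', 'jibel', 'qa_signoff', 'Ready', 'Approved')
```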
[08:21] Saviq: hey! Did you talk with Pat about the cursor changes in unity8 in the end?
[08:21] sil2100, forgot, will do so this afternoon, we've a few more PD-geared things that we might land alongside
[08:33] robru: watch only would work though? the most usual use case is that the new build really needs to be done (before submitting to QA)
[08:34] Mirv: nope, watching a dirty silo does not make it clean. you need to really do a build to clear the dirty state.
[08:35] Mirv: oh, you might be thinking that watching a silo will set the status to 'packages built'. that's true. but the files that indicate dirty state are still on disk
[08:35] robru: hmm, maybe it's better anyway. or we need a "clean dirty flag" to build job.
[08:36] Mirv: please god no more flags I'm sick of flags, there should be fewer flags
[08:36] * sil2100 likes flags
[08:36] robru: well hmm how it'd work for manual uploads then? yes, I thought that.
[08:37] Mirv: not sure. I think the current implementation would clear the dirty status if you rebuild a manual source even without uploading a new package. it would have no way of knowing.
[08:37] robru: my Qt 5.5 silo would get marked dirty several times a week and I mostly upload just new packages manually.
[08:38] robru: ok, so for manual uploads watch_only probably clears the flag.
[08:38] Mirv: right, there's no requirement to reupload every time it gets dirty, just in between getting dirty and when you want to publish. if you don't do it then you'll revert whatever release made it dirty in the first place
[08:38] Mirv: no, watch_only will never clear the flag
[08:39] Mirv: you would need to run the build as if it was a real build, it would clear the flag that way even though it doesn't actually upload a new package for you
[08:39] Mirv: the flag gets cleared in the 'clean' phase which does not run during watch_only.
[08:39] ah, ok.
[08:39] so build without watch_only ok :)
[08:40] robru: I think we want more to prevent this kind of problem (like Saviq's and pete-wood's silos today) than we want to allow the uncommit dance
[08:40] I mean, Saviq wouldn't have put the silo towards QA if it would have been flagged dirty
[08:40] Mirv: in the bad old days you needed to use WATCH_ONLY on manual sources but for a while now the train knows what to do if you do a "real" build but you only have manual sources in the silo
[08:40] right
[08:41] now if we don't give more flags for sil2100 to love, the downside is if greyback decides to accidentally push a commit to the MP during QA
[08:41] Mirv: ok I really like the idea of train automatically marking silos dirty when new commits appear, not sure when I'll have time for that though
[08:41] I'm not sure how often accidental commits + uncommits + force push:s happen
[08:42] me either.
[08:42] robru: right, so in principle it's the right thing to do but there's no hurry
[08:42] Mirv: what we'll do is make sure to put a BLACK MARK on their PERMANENT RECORD whenever it happens
=== dbarth_ is now known as dbarth
[08:43] and I'll do it again too, mwahahaha
[08:43] bad greyback, bad!
[08:43] hey guys; i have a couple of manual upload requests for my silos (i guess that's trainguard material)
[08:44] dbarth: yep, what's up? :)
[08:44] in silo-005, in the ppa specifically, adding a webbrowser-app upload, to test the fix in platform-api
[08:44] ok good night, gotta be up early for that meeting tomorrow
[08:44] robru: night!
[08:44] it would be a no change upload, to get the build dependency in
[08:45] dbarth: you could do an empty MP if you want to control it yourself
[08:45] and in silo 21, we have the final 1.9.4 build available (replaces 1.9.3 with an extra security fix from upstream)
[08:46] Mirv: so i need to bzr branch lp:trunk bzr push to a personal branch and then create the empty mp from there, right?
[08:46] dbarth: exactly
[08:46] i tried the other day but it wouldn't work without an actual 1 line change; well trying again l)
[08:46] ;)
[08:46] dbarth: in the past it worked fine :)
[08:47] Since LP allows for pushing no-change merges
[08:49] Mirv, robru, it'd definitely be helpful, but sounds overkill, that the train would monitor all the involved branches
[08:50] this case was my fault since I added/built the MP before it was top-acked
[08:50] should've monitored it more closely
[08:55] ok, that seems to work
[09:01] Saviq: the problem is that it happens often, and more often in that way it really needs a rebuild. wasting QA resources is bad.
[09:02] Mirv, totally, OTOH this should never happen if the lander's doing his job right
[09:06] the tools are there to help lander to do the job right. currently this seems a bit too common.
[09:12] true
[09:30] Mirv: for oxide 1.9.3, is there something i need to do to have it published and ready for ota-7?
[09:31] Mirv: 1.9.4 is here, but oSoMoN rightfully suggested to get 1.9.3 published first, just to be sure we have a recent oxide available in the image
[09:32] dbarth: ping ogra more to publish it :)
[09:32] or some other core-dev
[09:32] dbarth: 1.9.3 is approved and ready to land
[09:32] ah ok
[09:32] fine, then i should create a new silo request for 1.9.4, it's safer
[09:33] I wonder if eg Laney would like to run https://ci-train.ubuntu.com/job/ubuntu-landing-021-2-publish/build (no packaging changes, but a manual upload of a main package - full diff at https://ci-train.ubuntu.com/job/ubuntu-landing-021-2-publish/94/artifact/oxide-qt_content.diff)
[09:34] dbarth: yes, please
[09:39] Mirv: I don't think this policy applies to non-Ubuntu
[09:40] does the train enforce it?
[09:42] Laney: non-Ubuntu? so yes, train enforces that if the package is not built via MP:s but manually uploaded to the PPA, a main package always needs to be published by a core dev even if it would not have packaging changes.
[09:42] oxide-qt is a bit special since it's so huge (4GB unpacked) that it's a bit hard to maintain it via MP:s
[09:44] Mirv: I mean that the sign-off policy doesn't apply to the PPA which isn't really Ubuntu itself
[09:44] DMB doesn't have authority over it anyway
[09:44] but yes, can publish
[09:45] done
[09:53] Laney: thank you. it's just that non-coredev:s can't run the publish jobs anymore in case they don't have the permissions on the Ubuntu side.
[09:53] Mirv: I think you *do* have the permissions for the PPA
[09:54] Laney: https://ci-train.ubuntu.com/job/ubuntu-landing-021-2-publish/94/console "timo-jyrinki not authorized to upload oxide-qt"
[09:54] that is the ~ci-train-ppa-service team, not ~ubuntu-core-dev or anything else
[09:54] Laney: for the PPA, but not PPA -> archive
[09:57] this was -> PPA
[09:57] at least I hope so!
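The "timo-jyrinki not authorized to upload oxide-qt" refusal above is an archive upload-permission check. A sketch of how such a check can be expressed against the Ubuntu primary archive with launchpadlib; whether CI Train performs it exactly this way is an assumption:

```python
# Sketch only: does this person have upload rights for a source in Ubuntu proper?
from launchpadlib.launchpad import Launchpad

def can_upload(person_name, source_name, series_name='wily', component='main'):
    lp = Launchpad.login_with('publish-check-sketch', 'production', version='devel')
    ubuntu = lp.distributions['ubuntu']
    series = ubuntu.getSeries(name_or_version=series_name)
    person = lp.people[person_name]
    try:
        ubuntu.main_archive.checkUpload(
            person=person, sourcepackagename=source_name,
            component=component, distroseries=series, pocket='Release')
        return True
    except Exception:
        # launchpadlib surfaces a refusal as an HTTP error
        # ("... not authorized to upload ...").
        return False

print(can_upload('timo-jyrinki', 'oxide-qt'))
```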
[09:59] The thing is that the current CI Train permission checks always check permissions against the main archive
[09:59] Even if you publish to a PPA (like the overlay)
[10:00] OK, that's probably a bit stricter than is necessary
[10:00] Yeah, I wanted it to do 'the right thing' and check against the destination, but slangasek said this way is good as well
[10:03] It's fine, just you might have to get someone else in cases you don't strictly have to
[10:07] Laney: ah, right, that's what you meant, sorry. yes, it's a bit strict, but we treat the overlay PPA pretty much as an archive.
[11:34] trainguards: there is now a release of oxide 1.9.4 ready for manual upload to silo 16; thanks
=== dbarth__ is now known as dbarth
[11:35] dbarth: requiring a rebuild against the overlay, right?
[11:36] sil2100: exactly
[11:36] dbarth: I'll try to pick it up in a moment then o/
[11:36] ty
[11:37] sil2100: just tell if you want me to
[11:37] dbarth: in what PPA is it available? In phablet I only see 1.10.1
[11:39] Mirv: if you know where to get it from then you can pick it up ;)
[11:40] sil2100: ok then :) fetching it from https://launchpad.net/~ubuntu-mozilla-security/+archive/ubuntu/ppa
[11:41] Ah, that one ;)
[11:41] Anyone happen to know what ordering that the train uses with multiple MPs? It doesn't appear to be the order they are listed
[11:43] greyback: in the past it was using the order listed, not sure how it looks now though
[11:44] I'm not sure I understand the order any more. In silo22, qtmir, small-refactoring-of-MirWindowManager is applied *after* liveCaption
[11:45] but listed before. And I think approved before.
[11:49] greyback: let me look into that in a moment
[11:49] sil2100: it's not urgent, just would be good to understand
[11:53] sil2100: oops sorry, yeah, what Mirv said
[11:56] Oxide is what I have all the CPU:s, SSD:s and 100/100 network connection for...
=== alan_g is now known as alan_g|lunch
[12:15] any new arale image being built soon?
[12:16] brendand: what's up?
[12:17] sil2100, oh i'm just having some troubles with our test suite and the normal workarounds aren't working
[12:17] sil2100, we get some issues updating when the archive moves on from the image
[12:17] sil2100, so a new build solves it for sure
[12:18] brendand: we might kick a new image in ~1 hour if you're ok with that, as I'm investigating some changes in livecd-rootfs
[12:18] But you'd need to wait a bit
[12:18] Would that be fine?
[12:19] sil2100, yeah no rush
=== chihchun is now known as chihchun_afk
=== _salem is now known as salem_
=== chihchun_afk is now known as chihchun
=== chihchun is now known as chihchun_afk
[14:02] robru: FYI I tried commenting on silo 054 multiple times but the comments seemed to go to /dev/null
[14:03] just wondering if they show in any log or such
[14:04] jibel: Hi! I have top approved all the branches for citrain request 343. Please try again. About the changelog conflict, it seems to be a bzr/lp glitch not a real conflict. We had the same problem for USC 0.1.3 (entry 423) but it went through fine.
[14:04] kgunn: ^^ FYI
[14:08] alf_, okay, thanks.
[14:08] bzoltan_: Approving silo 57
[14:10] alf_: jibel: I already checked that they were approved
=== alan_g|lunch is now known as alan_g
[14:12] rvr: bzoltan_ oSoMoN: what's up with 57 really, QA has checked it but webbrowser never built in the PPA for any of the archs, on Tuesday? is it supposed to be published by first removing the webbrowser-app, or...?
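Pulling the oxide 1.9.4 source from the security PPA into a silo can be done by downloading and dputting it, or by asking Launchpad to copy it. A sketch of the latter with launchpadlib; the silo PPA name and version string below are assumptions for illustration:

```python
# Sketch only: copy a source package between PPAs without its binaries,
# so that it gets rebuilt against the overlay in the destination PPA.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('ppa-copy-sketch', 'production', version='devel')
source_ppa = lp.people['ubuntu-mozilla-security'].getPPAByName(name='ppa')
silo_ppa = lp.people['ci-train-ppa-service'].getPPAByName(name='landing-016')  # assumed name

silo_ppa.copyPackage(
    source_name='oxide-qt',
    version='1.9.4-0ubuntu1',   # hypothetical version string
    from_archive=source_ppa,
    to_pocket='Release',
    include_binaries=False,     # force a rebuild in the destination
)
```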
[14:12] so the silo status was never "Packages built" even though it was sent to QA
[14:12] 5 failed QmlTests on each arch
[14:13] Mirv: Hmm
[14:13] if the UITK is good to go as is too, maybe it's ok to publish but it's clearly not "clear" as is
[14:14] Mirv, webbrowser-app needs to be published together with uitk, I expected bzoltan_ to handle that silo
[14:14] Mirv, can you please retry the failed builds for webbrowser-app there?
[14:15] Mirv, note that it’s only autopilot tests changes for webbrowser-app, so rebuilding shouldn’t invalidate all the testing done so far
[14:16] oSoMoN: since it failed tests on all archs on both releases, I assume it's not really going to work... but ok, rerunning them
[14:16] QmlTests::TabsBar::test_context_menu_close() Uncaught exception: Cannot read property 'x' of undefined
[14:16] huh, let me check the failures, I assumed it was only the favicon fetcher unit test failing
[14:17] no, nothing to do with those
[14:17] Mirv, would you mind re-running them anyway, to get fresh failures?
[14:17] rvr: bzoltan: oSoMoN: ok, AP only fix at least makes the situation much easier, no wasted resources from QA etc
[14:17] oSoMoN: ok
[14:18] oSoMoN: I find it weird also it seems the favicon test passed on all on the first run :S
[14:18] Mirv, there’s definitely something fishy there!
[14:18] oSoMoN: oh, ok, no it didn't
[14:19] oSoMoN: just luck I guess
[14:19] oSoMoN: the first two x86 I checked were ok, but found one where there were both qmltests and favicon failing
[14:19] oSoMoN: ok, retrying all 6 builds now
[14:22] thanks
[14:29] Mirv, bzoltan_: wow, indeed unit tests reliably fail, looking into it
[14:29] sil2100, are those images on their way?
[14:29] brendand: in one moment :)
[14:30] bzoltan_, maybe we should add to your test plan: rebuild webbrowser-app with the new UITK, as there are a growing number of autopilot tests that are being replaced by unit tests
[14:31] robru: on silo 11, bileto is showing 3 times the status for the same build job, is that expected?
[14:40] sil2100, is the ubuntu-pd image manually created or is it scheduled?
[14:40] kenvandine: we manually kick those for now, but I can add it to the crontab if it's useful :)
[14:41] could you spin another?
[14:41] there are a ton of updates that take ages to apply :)
[14:41] kenvandine: yeah, I want to do that as I modified the seeds :)
[14:41] but of interest is the update for libertine :)
[14:41] I'll kick one in a moment
[14:41] thx
[14:42] alf_: Where can I see the unity8 autopilot test results for silo 14?
[14:46] alf_: Ok, found. There are two failures in Wily https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-runner-wily-mako/461/
[14:46] o/ trainguards, i need the account-polld packages to be removed from ppa 040 to release a landing
[14:47] kenvandine: an image should be building soon... there's ubuntu-touch building now as well so I'm not sure if this won't slow things down
[14:48] brendand: new image building
[14:49] Damn, I was too fast, just hope it'll pick my new livecd-rootfs :|
[14:49] sil2100, thx
[14:51] rvr: ok, I will take a look, although these don't seem related to my changes...
[14:57] Yeah, I think I kicked it too soon, bleh
[14:58] greyback_: I have been getting some autopilot testing instability with unity8 CI causing it to fail, and making the QA team nervous (https://code.launchpad.net/~afrantzis/unity8/power-state-change-reason-snap-decision/+merge/272233).
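Retrying "all 6 builds" of webbrowser-app by hand is a few clicks in the PPA web UI, or a short script. A sketch with launchpadlib; the silo PPA name below is an assumption:

```python
# Sketch only: retry every failed build of one source package in a silo PPA.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('retry-builds-sketch', 'production', version='devel')
ppa = lp.people['ci-train-ppa-service'].getPPAByName(name='landing-057')  # assumed name

for build in ppa.getBuildRecords(source_name='webbrowser-app',
                                 build_state='Failed to build'):
    print('retrying', build.title)
    build.retry()
```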
[14:58] greyback_: How do we deal with this? I have retriggered another CI run hoping it will pass this time...
[14:59] brendand: hope you won't mind another ubuntu-touch image a bit later
[15:01] tsdgeos hey, would your experienced eye notice anything obvious wrong with the above ^^
[15:01] those 2 tests have nothing in common afaict
[15:02] greyback_: the autopilot tests are a bit unstable still
[15:02] alf_: ^
[15:02] unless both wily and vivid fail in the same spot or you see a direct correlation with the code changed
[15:02] it's most probably just noise
[15:03] Mirv, bzoltan_: I can reproduce the unit test failures locally, looking into fixing them
[15:03] alf_: greyback_: the vivid autopilot succeeded so i'd say that's just fine
[15:03] alf_: in fact you were the sole lucky person with a total green in the last runs afair :D
[15:04] tsdgeos: greyback_: ok, so I will tell QA to just ignore this
[15:04] rvr: ^^
[15:05] alf_: tsdgeos: Isn't it a dual landing?
[15:06] rvr: i don't understand the question
[15:06] tsdgeos: Silo is marked for dual landing, wily and vivid. Vivid passes, wily fails.
[15:06] yes
[15:07] tsdgeos: What's the rationale for ignoring it?
[15:07] as i've said our autopilot tests are unstable
[15:07] i don't know why you'd be blocking this and not any of the 100 previous landings of unity8 because of that
[15:08] Mirv, bzoltan_: ok, I think I know how to fix this, and the good news is that it won’t involve changing actual app code, only unit tests, but I have to go to a doctor appointment now, I’ll get back to it in the evening, and will request a new build in the silo once fixed
[15:11] tsdgeos: It may be unfair to get a ticket on a highway for going at high speed, when usually no one is ticketed. It's called bad luck ;)
[15:13] rvr, can you see a reliable failure, or is it just a flaky test (i.e. does the test work when you try it again alone?)
[15:13] alf_: I won't be blocking the silo, if the failure is not related
[15:15] Saviq: Sorry, which silo are you talking about now?
[15:15] rvr, alf_'s
[15:16] Saviq: Ok. I just checked the silo results, didn't check previous runs.
[15:16] rvr, alf_, when testing a unity8 silo I always run the whole suite and then re-run any initially failed tests, there often are one or two that fail but pass when re-run
[15:16] I can help with that if you need
[15:16] it's a relatively new situation that we have to look into still
[15:17] Saviq: Do you know whether these tests usually fail? https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-runner-wily-mako/461/
[15:18] rvr, there is no "usually fail" currently, they are unfortunately random
[15:19] Saviq: I see
[15:20] rvr, also, that's wily, they passed on vivid
[15:21] rvr, basically, I re-confirm manually every time when landing
[15:21] but we do need to understand where those failures are coming from
[15:22] Saviq: Yeah, I saw they are passing in vivid
[15:23] rvr: Saviq: also, fwiw, the previous revision of the branch managed to pass all tests (which I understand is quite rare), and the latest revision (the one we are discussing) didn't actually change any behavior, it added a (currently) unused value to an enumeration which shouldn't affect anyone/thing
[15:23] indeed
[15:25] alf_: Saviq: Ok, thanks for the insight. I'll begin the silo validation.
[15:25] rvr: great, thanks
[15:31] rvr: Also note that we do appreciate your concern about unstable tests and thanks for keeping an eye open for them. Unfortunately, it seems that for the time being we need to be a bit less strict, until we find the cause of the instability.
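The routine Saviq describes above, run the whole suite and then re-run the initial failures, treating pass-on-retry as flaky, can be wrapped in a few lines. A sketch; the autopilot invocation is simplified and the list of failed test ids is assumed to come from the first run's results:

```python
# Sketch only: re-run initially failed autopilot tests once to separate
# genuinely broken tests from flaky ones.
import subprocess

def run_test(test_id):
    """Run a single autopilot test id; True if it passed."""
    return subprocess.call(['autopilot3', 'run', '-v', test_id]) == 0

def triage_failures(failed_ids):
    still_failing = []
    for test_id in failed_ids:
        if run_test(test_id):
            print('flaky (passed on retry):', test_id)
        else:
            still_failing.append(test_id)
    return still_failing
```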
[15:35] alf_: No problem.
[15:49] robru: pong
[16:01] robru: ok, since I wasn't able to finish this up during my day today... could you switch dual landings to land wily to the overlay automatically?
[16:55] boiko: yep the audit log for 11 looks as expected to me. Each status is recorded
[16:56] robru: yeah, I saw your mail about that after I had already complained, sorry for the noise :)
[16:57] Mirv: I see your comments https://requests.ci-train.ubuntu.com/#/silo/54
[16:57] boiko: no worries
[16:57] sil2100: can do, just need to find some coffee
[16:59] robru: I don't see them under https://requests.ci-train.ubuntu.com/#/user/timo-jyrinki , but yes under your direct link
[17:00] robru: nor in main view. I think comments used to be shown without opening the direct link before?
=== alan_g is now known as alan_g|EOD
[17:01] robru: it was confusing since I entered a comment, clicked the button and it just disappeared
[17:02] Mirv: it changed with recent changes, from what I remember from the e-mail
[17:04] Mirv: yes I mentioned in the email that comments are hidden if more than 4 requests are displayed to cut down on massive amounts of audit log spam. when I redesign the site I'll be sure to prevent you from being able to add comments on pages where they won't be shown
[17:05] robru: ah right, now I remember, didn't connect! thanks :)
[17:07] ogra, pmcgowan: the livecd-rootfs change for the apt lists removal made the rootfs tarball smaller by 11.8 MB - didn't check how much real space we got, but it's still pretty nice
[17:08] sil2100, that's a good direction and on disk is more
[17:09] real space was 60-80 as I recall
[17:29] robru, hey, is it on purpose that the build job complains about packages missing in the PPA even though I only built a subset?
[17:29] https://requests.ci-train.ubuntu.com/#/ticket/445
[17:29] Saviq: yes, that is on purpose. it used to be that the status would say 'packages built', which is wrong because not all packages are built.
[17:30] robru, but the build job completed, maybe it should be less error-y?
[17:31] robru, basically, sure, I'd like a note that some packages should be, but are not, available in the ppa, but the build job actually completed fine
[17:31] Saviq: I consider packages not being built to be an error condition :-P
[17:32] robru, it might be I'm abusing the packages field in the build job, because I'm building dependencies first, only then the dependents
[17:33] robru, which means it will always complain in that sense
[17:33] Saviq: yes, if you set the correct versions in your Depends: in the packaging then the PPA should be able to work out what order to build them in
[17:33] maybe I should just rely on dependency waits, but historically I remember that taking a long time :P
[17:34] but also sometimes there's no hard dependency on a new version
[17:34] but you want a rebuild to let debsyms do its thing
[17:36] Saviq: but if you're depending on a new feature in a new version, it's kinda broken not to specify that version in your Depends:
[17:37] robru, not what I meant, if you're depending, obviously you need a Depends, I was rather thinking if the dependency changes ABI in a backwards-compatible way... but then there's no rebuild necessary
[17:37] because it's backward compatible, and through .symbols file the resulting dependency should be as appropriate anyway
[17:38] so ok, dependency waits it is then
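robru's point about putting versions in the packaging so the PPA can work out the build order comes down to versioned (build-)dependencies: a dep-wait holds the dependent package until the new dependency version is published in the PPA. An illustrative debian/control fragment with hypothetical version numbers, not the actual unity8 packaging:

```
Source: unity8
Build-Depends: debhelper (>= 9),
               libqtmir-dev (>= 0.4.8),
               ...

Package: unity8
Depends: ${misc:Depends},
         ${shlibs:Depends},
         qtmir-desktop (>= 0.4.8) | qtmir-android (>= 0.4.8),
         ...
```

With the versioned Build-Depends in place, the PPA builds qtmir first and keeps unity8 in a dependency wait rather than building it against the old library.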
[17:39] Saviq: well no, if you want to selectively build parts of the silo that is a supported thing to do. it's just that now it helpfully tells you which packages you didn't build yet :-P
[17:39] robru, yeah, I just wouldn't call that an error condition :)
[17:39] rather the "helpful notice"
[17:39] Saviq: it's tough to get right, since everything that gets reported from that part of the code is considered an error
[17:40] not sure how I'd fix that, I'd have to define a new exception that isn't considered an error. would rather keep it an error, makes the code simpler.
[17:40] robru, ack, was just scared that there was an actual build error, and the "None" bits in it suggested something broken even more
[17:41] Saviq: actually the part I thought was weird was that it was looking for a specific qtmir version, not sure where it got that version number from
[17:41] robru, I think that's because there was an actual failed build before
[17:41] ah ok
[17:41] in that same silo/assignment
[17:42] audit logs don't go back that far ;-)
[17:42] yup
[18:17] robru, can you please dput qtmultimedia source package from ppa:jhodapp/ubuntu/ppa to the silo 55 PPA
[18:18] jhodapp: done
[18:18] robru, thanks man!
[18:19] jhodapp: you're welcome
=== pat_ is now known as Guest97502
[18:45] bug 1469398 is being fixed in the overlay PPA and doesn't need SRU'ing to vivid right?
[18:45] bug 1469398 in Canonical System Image "Push-client should be disabled when no network connection" [High,Fix committed] https://launchpad.net/bugs/1469398
[18:46] bdmurray, yes
=== Guest97502 is now known as pmcgowan
[18:59] robru, one more time for dput qtmultimedia source package from ppa:jhodapp/ubuntu/ppa to the silo 55 PPA, the build failed last time but I verified locally it builds now
[19:00] k
[19:00] jhodapp: done
[19:01] robru, think I have a good process recorded for this now that will alleviate errors in submitting and building a new qtmultimedia :)
[19:01] robru, thanks
[19:01] jhodapp: cool, you're welcome
[19:54] jhodapp: what are you doing? You can't mix syncs and merges... I'm not sure that will do what you expect
[19:54] robru, who's syncing?
[19:54] Mirv, bzoltan_: if you’re still around, I fixed the webbrowser-app unit tests in the branch that’s in silo 57, and I just triggered a new build
[19:54] robru, I'm just trying to get a build where it doesn't try and build -gles
[19:55] robru, which IGNORE_MISSING_TWINS doesn't seem to be honoring
[19:55] jhodapp: your request is configured to sync from silo 29.
[19:56] "Not found in archive" means it is failing to sync the package from that ppa
[19:56] See the log for details
[19:56] robru, huh, seems the silo config changed while I was away
[19:57] jhodapp: too bad it doesn't say who. I have a branch that would record that but it's not in production yet
[19:57] robru, oh it's xavi with indicator-sound
[19:58] robru, that's a weird setup then...29 has indicator-sound in it, 55 has media-hub, qtubuntu-media and qtmultimedia in it
[19:58] jhodapp: are they supposed to all be the same?
[19:59] robru, well before my holiday I had told xavi to just put indicator-sound into 55 as well, not sure why he set it up to sync instead
[20:00] robru, so I guess I can just have 55 build everything except indicator-sound and -gles, that should work right?
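"Not found in archive" when syncing, and the later "that version number already exists" when copying, both come down to querying a PPA's source publications. A sketch of that lookup with launchpadlib; the PPA and package names echo the discussion but the call is illustrative, not the train's actual code:

```python
# Sketch only: look up which versions of a source are published in a silo PPA.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('ppa-lookup-sketch', 'production', version='devel')
ppa = lp.people['ci-train-ppa-service'].getPPAByName(name='landing-029')  # assumed name

pubs = list(ppa.getPublishedSources(source_name='indicator-sound',
                                    exact_match=True, status='Published'))
if not pubs:
    print('Not found in archive: no published indicator-sound in this PPA')
for pub in pubs:
    print(pub.source_package_name, pub.source_package_version, pub.status)
```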
[20:00] jhodapp: yeah that's weird and wrong, if you want the package copied let me know, take the sync out of the request
[20:00] robru, yes please, there's no reason to have it separate since it's not going to land independently of the other things like media-hub, etc
[20:00] jhodapp: OK one sec
[20:00] thanks
[20:01] jhodapp: oh 29 has a merge, why not just put the merge into the right request?
[20:02] jhodapp: I thought 29 would have a manual source that would need manual copying
[20:02] robru, yeah I can do that
[20:02] robru, I thought it did too, it was listed as a source package in 55
[20:02] jhodapp: currently it's listed as a *sync* package, but yeah
[20:04] jhodapp: yeah please clear out the sync_request, drop indicator-sound from source package list, move the MP from silo 29 to 55, rebuild 55, and abandon 29.
[20:04] robru, ok, did that, building now
[20:05] jhodapp: ok, request looks good
[20:05] robru, so since pressing save while editing a config automatically reconfigures, do I need to give it some time before doing a build or is it ready to go right away after save?
[20:06] jhodapp: it's ready to go right away. there's no "automatically reconfigures", I just made jenkins load data directly from bileto instead of storing a "config" that requires "reconfiguring". basically reconfiguring isn't a thing.
[20:06] robru, nice, I like that :)
[20:06] jhodapp: thanks, yeah there's lots of things like that I'm trying to streamline
[20:06] jhodapp: one day jenkins itself will go away and bileto will just absorb it all
[20:06] but that's a ways off yet
[20:07] robru, nice, your own CI tool
[20:07] jhodapp: good god it sounds terrifying when you say it like that ;-)
[20:07] lol
[20:07] robru-ci
[20:07] :)
[20:07] that would sure look good on my resume!
[20:08] no doubt
[20:08] robru, any way to force that version for indicator-sound?
[20:09] jhodapp: no, your trunk is wrong, you need to fix it
[20:10] robru, oh, hmm...will need xavi to take a look then
[20:10] jhodapp: or rather, are you sure this should be a vivid silo and not a dual silo?
[20:10] jhodapp: the error is that you have a wily trunk and you're trying to build it for vivid which is bad and wrong
[20:10] you're not allowed to go backwards
[20:10] robru, perhaps that's why it was separate then, the media stuff can't be dual landing yet
[20:10] right
[20:11] jhodapp: so you need to either s/15.10/15.04/ in your trunk changelog (eg, branch for vivid), or make it dual
[20:11] robru, but indicator-sound probably is
[20:11] jhodapp: what's holding it back from dual?
[20:11] oh because it's manual sources
[20:11] robru, vivid doesn't have the same wily gstreamer or platform-api for starters, which the wily media stuff uses
[20:12] robru, once gstreamer 1.6.x releases we'll sync wily and vivid for the media stuff
[20:12] let me go back to using silo 29 then
[20:12] jhodapp: ok so since silo 29 is dual and built (and you were smart enough not to abandon it like I told you to do), I can just copy the vivid package from there
[20:13] robru, well can't I just go back to a sync from 29, and only request 55 to build qtmultimedia, media-hub and qtubuntu-media and not hit an error with indicator-sound then?
[20:13] jhodapp: no, because syncs & merges can't coexist, the train will become very confused. you need to leave 55 as "some merges and some manual sources" and then I can just copy that one package when you need it
[20:14] robru, alright, that'll work
[20:15] jhodapp: brb tho
[20:15] k
[20:19] robru, just noticed, what tz are the comments in?
[20:41] Saviq: should all be UTC unless there's a bug
[20:41] jhodapp: sorry did you want that copy now?
[20:41] robru, they're not https://requests.ci-train.ubuntu.com/#/ticket/445
[20:41] robru, yes please
[20:41] robru, unless it's missing am/pm and is written in 12h clock
[20:42] Saviq: that's possible. if you look at the raw json (s/#/v1/ in the URL) it looks to have correct UTC timezones
[20:42] Saviq: so just a display issue
[20:43] Saviq: will fix, and roll it into a larger rollout later today
[20:44] kk
[20:45] robru, it could probably even display in local time (not to mention human-readable times)
[20:45] jhodapp: oh, copying failed because that version number already exists in silo 55. LP claims they have different contents though. if you want me to copy you'll need to rebuild 29
[20:45] Saviq: what do you consider human readable?
[20:45] robru, rebuild with a new version I assume?
[20:45] robru, 5 mins ago, yesterday etc.
[20:46] jhodapp: yeah, rebuilding 29 will make a new version number that I can copy
[20:46] Saviq: oh, twitter style.
[20:46] Saviq: not sure if I'll implement twitter style timestamps today but I'll at least fix the display
[20:46] robru, ok, will have to ask xavi...it's not my MR
[20:46] robru, sure, nw
[20:47] I'll write him an email
[20:47] robru, I'd put the actual timestamp in a tooltip probbaly
[20:47] probably, even
[20:49] Saviq: lol, js thinks it's localtime but it's not
[20:55] Saviq: ok, trunk is fixed to show 24h clock in local time, hoping to roll out in a few hours with some other features.
[20:56] robru, great, tx
[20:57] yw
[21:03] robru, can the failed builds for webbrowser-app in silo 57 be retried, please?
[21:03] oSoMoN: sure
[21:03] thanks!
[21:06] you're welcome
=== salem_ is now known as _salem
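The timezone confusion above ("js thinks it's localtime but it's not") is the classic symptom of serializing a naive timestamp: without an explicit offset, whatever renders the JSON has to guess. A small generic illustration, not Bileto's actual code:

```python
# Sketch only: emit timestamps with an explicit UTC offset so clients
# cannot misread them as local time.
from datetime import datetime, timezone

naive = datetime.utcnow().isoformat()           # e.g. '2015-09-24T20:41:00' - ambiguous
aware = datetime.now(timezone.utc).isoformat()  # e.g. '2015-09-24T20:41:00+00:00' - unambiguous

print(naive)
print(aware)
```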