[07:53] <Saviq> trainguards, morning, what do I need to do about https://requests.ci-train.ubuntu.com/#/ticket/445 ?
[07:55] <Saviq> sil mentioned binNEW, but I'm not sure why
[07:55] <pete-woods> trainguards: morning folks, I just tried to do a dual vivid+xenial landing, and I think I just ended up with both going into the overlay PPA
[07:55] <pete-woods> should I have done something different for the config for the silo?
[07:58] <robru> Saviq: looks like you just need a core dev to ack & publish
[07:58]  * Saviq looks around
[07:59] <Saviq> seb128, I can has core-dev ACK on https://requests.ci-train.ubuntu.com/#/ticket/445 please?
[07:59] <robru> pete-woods: yep, you set the destination to the overlay so that's where the packages went. You'll need a core dev to copy those to xenial now
[08:00] <seb128> Saviq, hey, looking, not a trivial one ;-)
[08:00] <pete-woods> robru: so if I wanted the overlay PPA for vivid, but not for xenial, what should I have done?
[08:01] <robru> pete-woods: leave dest ppa field blank because that only controls where the primary series goes. Vivid copies always go to overlay
[08:01] <pete-woods> robru: right okay, that's good to know, thanks :)
[08:01] <robru> pete-woods: you're welcome!
[08:03] <pete-woods> might I ask a passing core-dev if they could copy my mistaken package from https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/stable-phone-overlay/+packages?field.name_filter=cmake-extras&field.status_filter=published&field.series_filter=xenial
[08:03] <seb128> Saviq, the summary is unreadable :-/
[08:03] <pete-woods> into xenial?
[08:03] <seb128> it's like over a full screen of red bold text
[08:03] <seb128> wth
[08:03] <robru> pete-woods: it may be helpful to think of it this way: in a dual silo, everything is focused on the primary series. Only the primary series gets tracked in migration, only the primary series lets you set the destination, etc etc. The vivid copies of the packages are sort of just bolted on and hard coded to always go to overlay
[08:04] <Saviq> seb128, you mean the landing description?
[08:04] <seb128> Saviq, yes
[08:04] <seb128> https://requests.ci-train.ubuntu.com/#/ticket/445
[08:04] <seb128> that page
[08:04] <pete-woods> robru: yeah, that makes sense now, could have sworn this behaviour has changed from the past, though
[08:04] <Saviq> seb128, it's a behemoth silo, I tried my best
[08:04] <seb128> is there a pointer somewhere to the diff to review?
[08:04] <pete-woods> anyway, I know what to do now
[08:05] <robru> seb128: second from top there's a link to the artifacts
[08:06] <seb128> would be nice if there was one concatenated diff
[08:06] <seb128> and if the page was not only red text
[08:06] <robru> pete-woods: nope, vivid copies were always hardcoded to go to overlay. The only thing that changed was that wily copies also went to overlay for a time
[08:06] <Saviq> seb128, re: not trivial, the -gles reworks are robru's work, replacing the need to set changelog and silo in debian/rules
[08:06] <Saviq> but will only work in silo (or when you get the source yourself)
[08:06] <seb128> that feels hackish
[08:07] <robru> seb128: it is hackish but it has reduced the effort required to maintain gles in real terms and also had slangasek's approval
[08:08] <seb128> then slangasek should publish it :-)
[08:08] <robru> Saviq: OK you need to wait 8 hours for Steve to wake up
[08:08] <seb128> did he state somewhere in public that he was ok with those?
[08:08] <seb128> like in a irc log I can read
[08:09] <robru> seb128: it was in emails but I can't remember if it was on a list or not
[08:10] <Saviq> robru, I got the thread you started after that, don't think Steve weighed in in that thread though
[08:11] <pete-woods> seb128: if you're feeling super generous could you push my mistaken upload to the overlay xenial series into xenial for real when you've finished reviewing Saviq's mega request? (https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/stable-phone-overlay/+packages?field.name_filter=cmake-extras&field.status_filter=published&field.series_filter=xenial)
[08:11] <seb128> pete-woods, hey, ok, queuing to have a look next
[08:11] <robru> seb128: Saviq ugh yes it was private irc where he approved it
[08:13] <robru> seb128: anyway I have no proof of Steve's approval but just realized that this dramatically reduces the amount of maintenance effort on the gles packages as they don't need to point debian/watch at a new ppa every time they do a release.
[08:14] <seb128> Saviq, robru, diff for other things is fine, I'm happy to press the publish button if you take responsibility to deal with potential issues/discussions following the gles changes
[08:14] <robru> seb128: OK
[08:14] <Saviq> seb128, ACK
[08:16] <seb128> Saviq, there you go ;-)
[08:16] <Saviq> thank you
[08:16] <seb128> Saviq, nice to see those changes coming btw!
[08:16] <seb128> yw
[08:17] <Saviq> seb128, indeed, shame it took so long to land...
[08:17] <Saviq> freezes, releases, everything that could delay it did
[08:20] <seb128> Saviq, failed :-/
[08:21] <Saviq> huuuh
[08:21] <Saviq> aah
[08:22] <Saviq> robru, help ↑
[08:22] <Saviq> robru, the previous release never migrated https://launchpad.net/ubuntu/+source/unity8
[08:23] <robru> seb128: oh just hit ignore, that's a bug on silos transitioned from wily to xenial
[08:23] <Saviq> but we did rebuild after that landed
[08:23] <Saviq> oof
[08:23] <seb128> "IGNORE_VERSIONDESTINATION"?
[08:23] <robru> seb128: yeah
[08:23] <seb128> let's try that
[08:24] <Saviq> you can even see in https://ci-train.ubuntu.com/job/ubuntu-landing-022-2-publish/46/artifact/unity8_packaging_changes.diff that the version was there before :)
[08:24] <robru> Yeah
[08:25] <robru> Saviq: most likely the last release was to wily overlay. For some reason xenial-proposed is a bit backed up
[08:26] <Saviq> robru, yup
[08:36] <Saviq> robru, does train not push to https://code.launchpad.net/~ci-train-bot/unity8/unity8-ubuntu-wily-proposed any more?
[08:36] <Saviq> only to silo branches?
[08:37] <robru> Saviq: right. It's the same branch though...
[08:37] <Saviq> robru, ok, I didn't get a silo branch (didn't build when you added that), but I'll manage ;)
[08:38] <robru> Saviq: oh awesome
[08:38] <robru> Saviq: I can push something manually hang on
[08:39] <Saviq> robru, oh if you can that'd help a bit
[08:40] <robru> yeah just a minute to whip something up
[08:51] <robru> Saviq: ok I started a mass-push of all branches currently in the train. could take a while until silo 22 shows up: https://code.launchpad.net/~ci-train-bot
[08:51] <Saviq> robru, great, thanks
[08:51] <robru> Saviq: you're welcome!
[08:53] <robru> oh crap, I hard-coded all the branches to be labelled 'xenial', hopefully that doesn't confuse too many people (any subsequent builds will have correct branch names anyway) ¯\_(ツ)_/¯
[08:54] <robru> Saviq: ok 22 is there
[08:56] <Saviq> robru, thankski
[08:57] <robru> Saviq: you're welcome
[08:57] <robru> Saviq: ah and I've just noticed that train publication is broken, let me fix that...
[08:59] <Saviq> robru, check out this trick you just made possible https://requests.ci-train.ubuntu.com/#/ticket/564 ;)
[08:59] <robru> Saviq: oh god what now
[09:00] <Saviq> robru, while I wait for silo 22 to publish, I can use it to seed my new silo ;)
[09:00] <Saviq> it == the branch
[09:00] <robru> Saviq: how did you do that?
[09:00] <Saviq> robru, https://code.launchpad.net/~ci-train-bot/unity8/unity8-ubuntu-xenial-landing-022/+merge/275672 ;)
[09:00] <Saviq> robru, packages will be in proposed already, so it's just about the trunks
[09:01] <Saviq> (or overlay for that matter)
[09:01] <robru> Saviq: packages aren't in proposed yet ;-)
[09:01] <Saviq> robru, yeah, note the "will" be ;P
[09:01] <robru> Saviq: you can just force-merge the silo to get the same effect but you lose migration tracking that way
[09:01] <Saviq> robru, I'm more interested about conflicts for now
[09:02] <Saviq> robru, yeah, won't do that in case it's blocked in proposed forever
[09:06] <jgdx> rvr, let me know if you have questions about that silo 39.
[09:06] <rvr> jgdx: Ok
[09:17] <robru> Saviq: *whew* ^^ fixed publishing
[09:28] <Saviq> :)
[09:31] <sil2100> jibel, davmor2, ogra_, popey: hey, if you guys are not travelling, meeting ;)
[09:39] <Saviq> robru, bug in train: http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial+vivid/update_excuses.html#unity8 (note the xenial+vivid)
[09:39] <seb128> pete-woods, copied
[09:39] <robru> Saviq: heh, yep, just fixed that in trunk but probably won't roll it out until tomorrow
[09:39] <Saviq> ack :)
[09:39] <pete-woods> seb128: awesome, thanks!
[09:40] <seb128> yw!
[09:43] <jibel> sil2100, I was in another meeting
[09:44] <sil2100> jibel: ok, no worries, we can sync on IRC if needed
[10:07] <morphis> sil2100: when using citrain currently I see:
[10:07] <morphis> The following packages will be REMOVED:
[10:07] <morphis>   ubuntu-touch ubuntu-touch-session unity8 unity8-common
[10:07] <morphis> is that expected?
[10:08] <morphis> it is with silo 13 which just contains an updated ofono package
[10:08] <jibel> morphis, did you install another silo with these packages previously?
[10:08] <morphis> jibel: no
[10:08] <morphis> it's the most recent rc-proposed image + silo 13
[10:09] <morphis> jibel: http://paste.ubuntu.com/12968947/
[10:11] <jibel> morphis, looking
[10:12] <jibel> rvr, can you confirm that 'silent mode' switches are out of sync between the indicator and system-settings on latest rc-proposed?
[10:12] <rvr> jibel: Checking
[10:14] <rvr> jibel: Yes
[10:14] <rvr> Silent mode active in indicator, off in System Settings
[10:14] <jibel> rvr, k, I'll file a bug
[10:14] <jibel> not sure when it regressed
[10:15] <jibel> rvr, any idea of a silo that could have broken it?
[10:15] <rvr> Hmm
[10:15] <jibel> morphis, no problem *without* citrain
[10:16] <jibel> morphis, trying with citrain now
[10:16] <rvr> jibel: Related to indicator-sound, we had silo 55 with mpris controls, but I don't think that touched silent mode switch
[10:21] <jibel> morphis, and I confirm that with citrain it removes unity8
[10:21] <jibel> sil2100, ^ any idea?
[10:23] <sil2100> jibel: sadly, no, I don't have any knowledge or experience of the citrain commandline tool
[10:24] <jibel> morphis, as a workaround for the moment, you can install the silo manually. switch your device rw, add the corresponding entries to sources.list and pin the silo higher than the overlay
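jibel's manual workaround can be sketched as an apt preferences pin. This is only a sketch under assumptions: the silo PPA's origin label below (LP-PPA-ci-train-ppa-service-landing-013, for silo 13 as the ~ci-train-ppa-service/landing-013 PPA) is illustrative, and the real value should be checked with `apt-cache policy` on the device; the snippet writes to a demo directory, whereas on a device the file belongs in /etc/apt/preferences.d.

```shell
# Sketch: pin a CI Train silo PPA above the overlay PPA so its
# packages win during dist-upgrade. Origin name is an assumption;
# verify with `apt-cache policy` on the device.
PREFS_DIR="${TMPDIR:-/tmp}/silo-pin-demo"   # on a device: /etc/apt/preferences.d
mkdir -p "$PREFS_DIR"

cat > "$PREFS_DIR/silo-013.pref" <<'EOF'
Package: *
Pin: release o=LP-PPA-ci-train-ppa-service-landing-013
Pin-Priority: 1100
EOF

# The overlay PPA is typically pinned around 1001, so 1100 wins.
cat "$PREFS_DIR/silo-013.pref"
```

After dropping the file in place and adding the silo's sources.list entry, `apt-get update && apt-get dist-upgrade` should prefer the silo's packages over the overlay's.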
[10:25] <morphis> jibel: yeah, currently looking at citrain
[10:25] <morphis> jibel: http://paste.ubuntu.com/12969030/
[10:25] <morphis> went through fine now
[10:26] <rvr> The following packages will be REMOVED:
[10:26] <rvr>   ubuntu-touch ubuntu-touch-session unity8 unity8-common
[10:26] <rvr> Argh
[10:26] <morphis> jibel: adb shell SUDO_ASKPASS=/tmp/askpass.sh sudo -A apt-get -o Dir::Etc::SourceList=/dev/null update is the faulty line
[10:26] <morphis> rvr: edit citrain and drop -o Dir::Etc::SourceList=/dev/null from the apt-get update line
[10:28] <jibel> morphis, yeah but I don't remember the purpose of this line
[10:28] <morphis> jibel: looks like that is already fixed
[10:28] <morphis> jibel: it only fetches updates from the silos
[10:28] <morphis> not from the archive
[10:29] <sil2100> ogra_: hey, you around for some package publishing? :)
[10:29] <morphis> jibel: robru already fixed this with https://bazaar.launchpad.net/~phablet-team/phablet-tools/trunk/revision/345
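Reconstructed from morphis's diagnosis above (a sketch, not the literal r345 diff): with the `Dir::Etc::SourceList=/dev/null` override, `apt-get update` ignored the main sources.list and only refreshed the silo entries in sources.list.d, so apt resolved dependencies against stale archive indexes and wanted to remove unity8. The fix amounts to dropping the override:

```diff
-adb shell SUDO_ASKPASS=/tmp/askpass.sh sudo -A apt-get -o Dir::Etc::SourceList=/dev/null update
+adb shell SUDO_ASKPASS=/tmp/askpass.sh sudo -A apt-get update
```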
[10:30] <robru> morphis: jibel please don't edit the script just install the latest version from the ppa
[10:30] <robru> rvr: ^
[10:31] <rvr> robru: Ok
[10:31] <robru> Ooh
[10:31] <jibel> robru, I don't use citrain :)
[10:32] <jibel> robru, is the patch released?
[10:34] <robru> jibel: the fix is released in xenial only. If you're using anything prior to that you can get it from either phablet-team/tools or sdk ppa
[10:34] <jibel> robru, I'm on xenial
[10:35] <robru> jibel: the fix just landed in xenial 4 minutes ago ;-)
[10:35] <jibel> robru, heh :)
[10:36] <robru> jibel: i dunno why it was stuck in proposed so long, i published it last week
[10:39]  * rvr is in wily
[10:40] <jgdx> rvr, bbl, back in about two hours
[10:41] <rvr> jgdx: Ack
[10:43] <jibel> rvr, so silent mode switch in system-settings does nothing
[10:44] <rvr> jgdx: ^
[10:44] <rvr> jibel: I'm checking in stable
[10:45] <rvr> jibel: Oops
[10:45] <rvr> jibel: It's also broken in stable
[10:46] <rvr> jibel: Silo 49
[10:46] <rvr> jibel: sound-silent-mode-handling https://trello.com/c/y5CFR6eY/2349-452-ubuntu-landing-049-ubuntu-system-settings-kenvandine
[10:47] <jibel> rvr, how did it pass testing? There is an explicit test for that
[10:47] <rvr> jibel: Yes, and I marked it as  passed
[10:48] <rvr> jibel: I don't know whether that's the culprit, but seems related
[10:49] <jibel> rvr, it looks like the silo you're testing contains the fix https://code.launchpad.net/~jonas-drange/ubuntu-system-settings/fix-sound-ap-regression/+merge/275038
[10:50] <rvr> Oh, I didn't finish installing the ppa
[10:50] <rvr> I will check it then
[10:51] <rvr> No bug attached
[10:57] <rvr> jibel: Silo fixes the sync between indicator and System Settings, indeed
[10:59] <rvr> seb128: Do you know which silo introduced the sound regression?
[10:59] <rvr> -                property variant silentMode: action("silent-mode")
[10:59] <rvr>                  property variant highVolume: action("high-volume")
[11:00] <rvr> So it was the one that I tested
[11:00] <rvr> jibel: It was my fault
[11:16] <popey> is silo 22 going to land sometime soon? :) (I have a conf at the weekend, wanted to try and demo it) :D
[11:17] <jgdx> jibel, don't know how it wasn't picked up in manual testing, but the automated test failed as well and no one noticed.
[11:20] <jgdx> though, our suite saw many many spurious failures at that time, so it was easy to miss
[11:22]  * jgdx really bbl
[11:23] <jibel> jgdx, yeah, there is a test in our regression testsuite and the tester marked it as passed. It's clearly a failure on our side, I'll talk to him.
[11:25] <rvr> popey: ping
[11:26] <popey> rvr, hello
[11:26] <rvr> popey: Do you know when the music app is being updated for mpris controls?
[11:27] <popey> Heh
[11:27] <ahayzen> rvr, yes, we are working on it ;-)
[11:27] <ahayzen> waiting for the media-hub and indicator-sound bugs to be fixed first :-)
[11:27] <rvr> ahayzen: It should land for OTA7
[11:27] <ahayzen> rvr, it can't, too many bugs at the moment in the background-playlists implementation
[11:27] <popey> it needs fixing in the platform _first_
[11:28] <rvr> ahayzen: Are you waiting for ready-for-testing silos?
[11:28] <ahayzen> rvr, i believe Jim has another one WIP, but i would need to check with him if any are ready
[11:29] <ahayzen> silo015 looks possible, but i need to link music-app to its new commands to totally test (which i'll do when back from lectures later :-) )
[11:30] <rvr> ahayzen: Ok, I will test silo 15 after I'm done with the current one
[11:31] <ahayzen> rvr, cool, not sure if Jim has any mini-apps to prove that the new command works or is faster
[11:32] <rvr> ahayzen: popey: Can you coordinate with Jim?
[11:32] <popey> rvr, we are and have been for some time
[11:32] <rvr> Great
[11:32] <ahayzen> yup :-)
[11:32] <popey> We keep getting asked about mpris in music app as if _we_ are the blocker here.
[11:32] <ahayzen> hehe
[11:32] <popey> Which most certainly isn't the case.
[11:33] <rvr> popey: Sorry, I understood that after the indicator sound landed, only music app update was required
[11:33] <popey> Yeah, a few people assumed that :)
[11:33] <ahayzen> hah
[11:33]  * ahayzen has a list of issues :(
[11:33] <popey> Fact is, if we gave you the in-progress branch for music app, you'd reject it from QA perspective.
[11:34] <popey> I don't think mpris should have landed _at_ _all_ until music issues were fixed
[11:34] <popey> but that's just me :)
[11:34] <ahayzen> popey, there is work in the scopes which needed it ;-)
[11:34] <rvr> popey: Yeah, I try to test it and it was a failure
[11:34] <popey> Yeah, it's big and invasive.
[11:34] <rvr> popey: The mpris silo had four or five failures
[11:34] <popey> Glad ahayzen and jim are working on it.
[11:35] <popey> erk
[11:35] <popey> :)
[11:35] <rvr> Ah, the scopes too
[11:35] <popey> fun times :)
[11:36] <ahayzen> \o/
[11:36] <rvr> Hmm
[11:42] <rvr> sil2100: ping
[11:43] <sil2100> rvr: pong
[11:45] <rvr> sil2100: Tomorrow is string freeze. Does it make sense to delay the language export to Wednesday, or to make a new one?
[11:45] <sil2100> rvr: I think we can make a new one on Wednesday
[11:46] <sil2100> Another auto-export will happen next week anyway
[11:47] <rvr> sil2100: I still wish for more frequent exports :(
[11:48] <sil2100> For rc-proposed I guess we could have that, since the exports take like 5 minutes anyway
[11:48] <sil2100> But for now we can poke manually
[12:02] <jibel> sil2100, once a day is probably overkill but at least one on feature freeze day would be good.
[12:04] <sil2100> I originally wanted the syncs to happen on Wednesdays, so after FF, but I remember pitti had some concerns
[12:04] <sil2100> But those might have been related to time needed for the exports to happen
[12:04] <sil2100> (which is minimal here)
[12:05] <jibel> sil2100, let's freeze a day earlier ;)
[12:22] <Saviq> trainguards, qtmir won't migrate in xenial (bug #1510067), I'm gonna force merge&clean in silo 22 and skip the release to xenial, unless you think I shouldn't?
[12:31] <sil2100> Saviq: hm, I think +1 on force-merging
[12:31] <sil2100> As long as you'll keep track of that it's fine with me
[12:32] <Saviq> yeah I will
[13:03] <bfiller> sil2100: how do I change silo 48 to xenial+vivid? it was pre-existing and using wily+vivid
[13:04] <sil2100> bfiller: hey! You need to reconfigure it to xenial+vivid on bileto and then I'll do a binary copy of your wily packages to xenial
[13:04] <sil2100> bfiller: was it ready for release?
[13:04] <bfiller> sil2100: no not yet
[13:04] <bfiller> sil2100: need to rebuild it first and finish test
[13:05] <sil2100> Then I think a reconf to xenial+vivid and a rebuild of all packages would suffice, I'll remove the wily binaries then
[13:05] <bfiller> sil2100: thanks
[13:09] <jhodapp> jibel, hey, I had emailed you a few months ago about getting some non-autopilot integration tests running for media playback...have you learned anything more about who I would need to talk to in order to get these tests automated?
[13:10] <bfiller> sil2100: seems builds are failing, out of disk space: https://ci-train.ubuntu.com/job/ubuntu-landing-048-1-build/72/console
[13:11] <jgdx> sil2100, +2 on that, https://ci-train.ubuntu.com/job/ubuntu-landing-047-1-build/43/console
[13:12] <jgdx> “mktemp: failed to create directory via template '/tmp/debsign.XXXXXXXX': No space left on device”
[13:12] <sil2100> Argh, again
[13:12] <sil2100> On it
[13:12] <jgdx> thanks!
[13:13] <sil2100> hmm, not sure if this will help, could you guys check?
[13:14] <sil2100> The instance seems to have some size issues, I already see that there's not too much disk space available in overall
[13:14] <jgdx> check what?
[13:14] <sil2100> The disk that has all the pbuilders is like 10G, which really doesn't seem like enough
[13:14] <sil2100> jgdx: check if rebuilding now works
[13:15] <jgdx> ack
[13:15] <jgdx> !build 47
[13:15] <jgdx> :'(
[13:16] <jgdx> okay, in work
[13:21] <jgdx> sil2100, failed
[13:21] <jgdx> new error msg but same symptom
[13:22] <sil2100> hmmm
[13:22] <bfiller> same with me
[13:23] <sil2100> uh
[13:26] <jgdx> http://i.imgur.com/ji1pen2.png
[13:44] <Trevinho> trainguards, can I do a direct push to lp:unity and lp:unity/wily just to make a release (bump version numbers, add changelog...)? I don't want to release that to the distro (at least not yet, and not in wily)
[13:45] <Trevinho> I mean, I want to do an upstream-only release, with a new tarball and such...
[13:50] <Trevinho> sil2100: maybe you know ^ ?
[13:50] <sil2100> Trevinho: hey!
[13:50] <Trevinho> sil2100: hey
[13:50] <sil2100> hmmm... why would you want to do an upstream release without releasing it to the archive?
[13:50] <sil2100> I mean, well, it poses some problems in the current model
[13:51] <Trevinho> sil2100: because I don't want to do an "SRU" for wily
[13:51] <sil2100> As normally everyone expects that trunk == release
[13:52] <Trevinho> I just want to close the 7.3.3 gate, move all the bugs waiting for love there to the 7.4 milestone, and release a tarball
[13:52] <Trevinho> but i wanted that to be on the wily branch, not only on the xenial one
[13:53] <Trevinho> And if we'll do an SRU to wily later, archive and upstream will sync again
[13:55] <sil2100> hm, as said, I would personally prefer only doing releases when an actual release happens, but in the CI Train world we leave branch management to upstreams
[13:55] <sil2100> So I guess you can do that, but it would confuse me personally
[13:55] <Trevinho> mh, ok... I just don't want that to break anything
[13:56] <Trevinho> sil2100: I mean, what if the changelog upstream is newer than the one downstream?
[13:57] <sil2100> Trevinho: you'll get a warning, nothing to worry about
[13:57] <Trevinho> ok thanks
[14:09] <pstolowski> hello trainguards, i've recurring problem with silo 8 - No space left on device
[14:10] <sil2100> pstolowski: hey, yeah, it's a known issue, looking into that in a moment
[14:10] <pstolowski> sil2100, ack, thanks
[14:34] <morphis> sil2100, robru: something seems to be really flaky with the citrain builds: https://ci-train.ubuntu.com/job/ubuntu-landing-052-1-build/64/console
[14:35] <morphis> tried five attempts but all failed: first because of no space, second because of a package upgrade, after that they fail with some jenkins errors
[14:35] <sil2100> Yeah...
[14:38] <sil2100> Poking IS/webops
[14:38] <sil2100> Since this is something I can't help with, as now jenkins jobs fail running at all
[14:47] <rvr> jgdx: Approving silo 39.
[14:47] <jgdx> rvr, thanks.
[14:58] <morphis> sil2100: ok
[14:59] <sil2100> Webops are on it, they're trying to make more space
[14:59] <sil2100> For now they freed up some but this might not be enough
[15:01] <sil2100> morphis, bfiller, pstolowski: could you guys re-try your builds?
[15:02] <pstolowski> ok
[15:02] <sil2100> Webops moved the pbuilder stuff to persistent storage, symlinking to the original place, want to see if it didn't break anything
[15:02] <morphis> sil2100: running again
[15:16] <bfiller> mterry: mind publishing silo 23?
[15:16] <Saviq> cihelp hey, can we please switch unity8-ci job to vivid and xenial? thanks
[15:16] <mterry> bfiller, let me look
[15:21] <mterry> bfiller, done ^
[15:21] <bfiller> mterry: ty
[15:24] <sil2100> \o/
[15:24] <sil2100> mterry: can I ask you for some other publishings? :)
[15:24] <mterry> sil2100, sure
[15:24] <sil2100> mterry: https://requests.ci-train.ubuntu.com/#/ticket/519 :)
[15:25] <mterry> sil2100, why no QA on an SRU?
[15:25]  * mterry is often confused about what needs QA
[15:25] <sil2100> mterry: it's a non-touch package, we only do QA sign-off for touch projects
[15:25] <sil2100> (non-touch packages)
[15:25] <mterry> sil2100, ok
[15:26] <sil2100> SRUs get the QA in the SRU queue :)
[15:26] <mterry> sil2100, fair...
[15:28] <sil2100> morphis, bfiller: the train working fine for you guys?
[15:29] <bfiller> sil2100: doesn't seem to be working correctly for silo 48.. not seeing anything building on the ppa
[15:29] <jgdx> same for 47
[15:29] <morphis> sil2100: still building :)
[15:29] <bfiller> sil2100: stuck on "preparing packages"
[15:30] <bfiller> sil2100: and silo 23 which mterry published seems to be stuck on status: publishing
[15:32] <dobey> rvr, jibel: can we have pay-ui in the qa review queue before the pay-service silo? this way the pay-ui change can be tested against the current pay-service, and we can get it in the store so that the new pay-service silo will then get tested against the new pay-ui too
[15:32] <rvr> dobey: Sure
[15:32] <sil2100> bfiller: looks like it moves ;)
[15:32] <dobey> rvr: great, thanks
[15:34] <jgdx> sil2100, https://ci-train.ubuntu.com/job/ubuntu-landing-047-1-build/47/console
[15:34] <jgdx> is that still the space issue?
[15:35] <fginther> Saviq, We're still in the process of getting builds setup for xenial. I'll add unity8 to the list once it's ready.
[15:36] <Saviq> fginther, ack, thanks
[15:37] <sil2100> jgdx: try again now, we just recently 'fixed' the space issue
[15:37] <sil2100> jgdx: I had to ask webops for help as I didn't have enough power myself
[15:43] <balloons> josepht, how's the re-deploy of my jenkaas coming? I'm thinking it might be useful to test the backup and restore as well. Can we manually take a backup before doing it though so I don't hate my life in redoing things? :-)
[15:46] <josepht> balloons: the MP just landed.  You will now need to request an upgrade of your jenkaas from IS.  I'd discuss with them doing a manual backup as well.  I don't think we have the means to easily back up the master from a job since we don't allow execution of jobs on the master.
[15:47] <josepht> fginther, psivaa: have I missed anything? ^
[15:47] <josepht> Ursinha: do you want to request the core-apps upgrade or should balloons do it?
[15:49] <sil2100> bregma: ping
[15:50] <balloons> josepht, ok, I'll make doubly sure I have a backup, and then I'd like to do a full redeploy if you don't mind. It needs to be tested
[15:51] <fginther> josepht, balloons, Correct, we don't have the means to do a manual backup ourselves (we can make a backup of all the job configurations if that helps). IS can probably do that if specifically asked
[15:51] <sil2100> bregma: hey! So, I want to publish silo 55
[15:52] <balloons> fginther, how would you back up the job configs? That would represent a big portion of what's needed atm anyway, as little real data is in there right now
[15:52] <balloons> only about a week's worth
[15:52] <fginther> balloons, this is what I use - http://bazaar.launchpad.net/~canonical-ci-engineering/uci-jenkins-utils/trunk/view/head:/jenkins-dump-config.py
[15:52] <sil2100> bregma: actually, I see everything is ok, so nvm :)
[15:53] <fginther> balloons, it has a dependency on 'python-jenkins' from https://launchpad.net/~jenkaas-hackers/+archive/ubuntu/tools
[15:55] <ChrisTownsend> sil2100: I think if it publishes, then the last two changelog entries that are currently in the archive will be wiped out.
[15:55] <fginther> balloons, I have a script somewhere to push a config.xml file to jenkins as a job configuration. I should get it added to uci-jenkins-utils
[15:55] <ChrisTownsend> sil2100: A core dev updated libertine underneath this landing.
[15:55] <sil2100> ChrisTownsend: no problem, those were quick-fixes, I checked that those are actually 'fixed' in the landing itself
[15:55] <ChrisTownsend> sil2100: Right, I was just meaning the changelog history.
[15:56] <sil2100> Yeah, that's ok
[15:56] <ChrisTownsend> sil2100: Ok, then I won't worry about it:)  Thanks!
[15:58] <balloons> fginther, this looks great, thanks. It will at least keep the job configs. I bet python-jenkins can do other fun things with automating job creation. Have any scripts around restoring jobs, or making a set of jobs from a template?
[16:00] <fginther> balloons, I've relied on python-jenkins for most of my scripting in the past. It's not perfect, but it has a simpler API for most tasks (vs using curl for example). And yes, we've done some work around templating, this is basically what lp:cupstream2distro-config is for
[16:01] <fginther> balloons, But I am in no way endorsing lp:cupstream2distro-config as a way to solve any current or future problem :-)
[16:01] <brendand> fginther, is cu2d sticking around for the foreseeable future?
[16:11] <Saviq> robru, hey, does train only push to lp:~ci-train-bot/ on successful builds?
[16:12] <sil2100> Saviq: not sure if that changed or not, but in the past it was pushed only on publish
[16:12] <Saviq> sil2100, yeah, that changed
[16:12] <Saviq> sil2100, https://code.launchpad.net/~ci-train-bot
[16:12] <sil2100> ACK
[16:13] <Saviq> just wondering when (i.e. I predict a build failure, but that's when I could use the branch ;))
[16:15] <bfiller> sil2100, fginther: anyone know what's up with the calexda-pbuilder instances? they've all been busy for quite a while. trying to build http://s-jenkins.ubuntu-ci:8080/view/click/job/camera-app-click/ and building silo 48 taking forever waiting for builders
[16:18] <fginther> bfiller, There is a large number of builds in progress right now on s-jenkins. All the builders are working right now, just appears to be an unusually high demand
[16:19] <bfiller> fginther: ok
[16:21] <jgdx> sil2100, okay, thanks.
[17:01] <pstolowski> sil2100, no disk space problem in silo 8, but there's something wrong with dependencies
[17:01] <sil2100> pstolowski: dependencies?
[17:04] <pstolowski> sil2100, http://pastebin.ubuntu.com/12971383/
[17:05] <pstolowski> hmm not sure if it actually breaks anything
[17:40] <Trevinho> robru: FYI, here https://ci-train.ubuntu.com/job/ubuntu-landing-011-1-build/306/consoleFull I got this
[17:40] <Trevinho> 2015-10-26 17:14:20,157 INFO Checking bamf for new commits...
[17:40] <Trevinho> 2015-10-26 17:14:20,382 INFO bamf has no new commits, skipping.
[17:40] <Trevinho> 2015-10-26 17:14:20,383 INFO Checking compiz for new commits...
[17:40] <Trevinho> it was a clean build though
[17:43] <rvr> jhodapp: Silo 15 approved
[17:43] <jhodapp> rvr, yay!
[17:43] <jhodapp> thanks
[17:44] <jhodapp> robru, mind publishing silo 15 please?
[17:45] <jhodapp> rvr, I feel like I asked you this before and forget what you said: do you have publish permission as well?
[17:59] <jhodapp> cyphermox, ping
[18:00] <robru> jhodapp: if it's all MPs and no packaging changes then you can publish your own
[18:01] <jhodapp> robru, unfortunately it's not, includes a qtmultimedia source package change
[18:01] <robru> Saviq: it pushes the branches after a successful merge.
[18:01] <jhodapp> robru, so if everything is to land in the overlay ppa, then I have permissions...is that accurate?
[18:01] <robru> jhodapp: ah, you need a core dev then
[18:02] <jhodapp> yep
[18:02] <robru> jhodapp: nope, train treats overlay just like real archive, requires same permissions
[18:02] <Saviq> robru, oh good, missed it somehow
[18:02] <jhodapp> robru, then what's the general rule for which I have permission to publish?
[18:03] <robru> jhodapp: if the silo contains only MPs, no manual sources, and no packaging changes
[18:03] <jhodapp> robru, ok
[18:04] <jhodapp> ogra_, you're a core dev, care to publish my silo 15 if you're around?
[18:05] <sil2100> jhodapp: mterry might be a good choice too!
[18:05] <jhodapp> sil2100, cool, mterry ^
[18:05] <sil2100> He's usually nice enough to publish silos for us ;)
[18:09] <jhodapp> sil2100, mterry got scared away ;)
[18:12] <sil2100> hmmm
[18:12] <sil2100> ;)
[18:12] <sil2100> mterry: pong!
[18:12] <mterry> sil2100, hihi
[18:13] <sil2100> mterry: hey! Could you publish some silos for us? jhodapp has silo 15 for release :)
[18:13] <jhodapp> mterry, pretty please :)
[18:13] <mterry> sil2100, 15... ok
[18:13] <jhodapp> thanks
[18:14] <cyphermox> jhodapp: hey
[18:14]  * mterry is reviewing
[18:14] <jhodapp> cyphermox, yo, I got mterry to do my dirty work of publishing silo 15...so don't need your services anymore
[18:14] <mterry> doh  :)
[18:20] <jhodapp> thanks mterry
[18:21] <mterry> jhodapp, yw!
[18:23] <bfiller> popey: mind reviewing the latest camera-app in the store?
[18:24] <popey> bfiller, sure thing, will do right now.
[18:25] <bfiller> popey: ty
[18:36]  * sil2100 hugs mterry 
[18:36]  * sil2100 gets back to his system-image
[18:36] <Saviq> robru, so, how about automagically adding -gles twins to silos, without an MP even?
[18:44] <jhodapp> Saviq, I'm +1 to that idea
[18:45] <robru> Saviq: jhodapp yeah I like that idea too just a lot on my plate unfortunately
[18:46] <robru> Saviq: jhodapp can one of you guys send me a summary of how the packaging is different between gles and non-gles? Eg if I took your non-gles trunk, copied the dir, what change do I need to make there to make it build against gles instead?
[18:47] <jhodapp> robru, I don't know the details to that, Mirv always handles that for me for qtmultimedia
[18:47] <robru> hm
[18:48] <robru> Trevinho: that's because the previous build recorded the commit id even though it failed.
[18:48] <robru> Trevinho: I'll file a bug, not sure how to fix that though
[18:51] <robru> Trevinho: https://bugs.launchpad.net/cupstream2distro/+bug/1510230
[19:31] <Saviq> robru, I don't think that's possible, the changes are significant
[19:31] <Saviq> robru, basically, the -gles branch is a packaging branch almost in its own right
[19:31] <robru> Saviq: you don't think it's possible to determine -gles programmatically from trunk?
[19:32] <Saviq> robru, I'm afraid not
[19:32] <Saviq> robru, just getting a diff for qtmir
[19:33] <robru> Saviq: then I'm not sure how I could make the train handle it automatically, if you guys need to manually sync packaging changes from one to the other
[19:34] <Saviq> robru, http://pastebin.ubuntu.com/12973106/
[19:34] <Saviq> robru, well, you can just default to lp:qtmir/gles when you see qtmir
[19:35] <Saviq> robru, and if we need to supply MPs, we will
[19:35] <Saviq> robru, as you can see, it's already outdated, which is a pain, but I really don't see how we can create that diff automagically
[19:36] <Saviq> biab
[19:36] <robru> Saviq: yeah that's pretty crazy
[20:21] <Saviq> robru, but what I think we could do, is just get -gles trunks regardless if there are MPs against it or not
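Saviq's fallback proposal above (default to the `/gles` twin branch by naming convention, with explicit MPs as an override) could look roughly like this. The project set and function are illustrative assumptions, not actual CI Train code.

```python
# Hedged sketch of the convention Saviq suggests: for projects known to
# have a -gles packaging twin, default to "<trunk>/gles" (e.g. lp:qtmir
# -> lp:qtmir/gles) unless an MP is supplied. The twin list here is a
# made-up example, not the train's real configuration.
GLES_TWINS = {"qtmir", "qtubuntu", "qtmultimedia-opensource-src"}

def gles_twin_branch(trunk):
    """Return the default -gles branch for a trunk, or None if it has no twin."""
    project = trunk.rsplit("/", 1)[-1]
    if project.startswith("lp:"):
        project = project[len("lp:"):]
    if project in GLES_TWINS:
        return trunk + "/gles"
    return None
```

Since the gles diff can't be derived mechanically (as the qtmir pastebin shows), the convention only picks the branch; keeping the twin's packaging in sync stays manual.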
[20:26] <robru> Saviq: I suppose it's possible. Can you file a bug?
[20:26] <Saviq> robru, yup, will do, thanks
[20:35] <robru> Saviq: thanks
[20:35] <robru> Saviq: how's things otherwise though? train is operating smoothly for you?
[20:42] <Saviq> robru, just having issues with conflicts now, but that's rather bzr's fault than the train's
[20:42] <Saviq> robru, one thing is somewhat confusing: https://requests.ci-train.ubuntu.com/#/ticket/564
[20:42] <robru> Saviq: what's confusing?
[20:43] <Saviq> robru, dirty flag has precedence over other states, which means I don't see the build job state here
[20:43] <Saviq> robru, it's build failed, but it only says silo dirty
[20:43] <robru> Saviq: right
[20:44] <robru> Saviq: the problem is that the silo status is just a free form string so it's getting increasingly hard to show different info there, because eg dirty state is determined at a different time than the build failure, so one clobbers the other
[20:44] <robru> Saviq: I guess that'll need to grow a bit more clever so it can have a dirty flag but still preserve the original failure message
[20:44] <robru> Saviq: for now if you look at the audit log you can see the failure
[20:45] <Saviq> robru, yeah, I understand, and not complaining, although the dirty flag seems also quite persistent
[20:45] <Saviq> robru, like if I kicked a build, and it failed, it shouldn't go back to dirty
[20:46] <Saviq> but it seems only a successful build clears it today
[20:46] <robru> Saviq: yeah that's because we had a problem where maybe one package is marked dirty, if you rebuild only a different package it clears the dirty state even though the other package is still dirty. so I made it more aggressive about marking silos dirty
[20:46] <Saviq> robru, sure, I'm dealing with it
[20:46] <robru> Saviq: also silos now become dirty just by having a new commit on your MP (it used to be just by publishing a conflicting silo)
[20:47] <Saviq> robru, yeah yeah, I know, but it's not the case here
[20:47] <robru> Saviq: yeah I agree it's not great, need to think about it a bit to make it work better
[20:47] <Saviq> robru, it did become dirty due to commits, but now without a successful build (which I'm struggling with due to conflicts) it won't get cleared
[20:47] <robru> Saviq: that's odd, it should clear the dirty flag at the beginning of the build
[20:48] <robru> Saviq: oh, no, I see
[20:48] <robru> Saviq: the build clears the dirty flag but when the build fails, the cron job runs and marks it dirty again
[20:48] <Saviq> robru, could be, yeah
[20:49] <robru> Saviq: yeah I'll need to add a new field in the db for recording dirty states, so that the main status can be preserved.
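The fix robru describes (a separate dirty field in the DB so the cron job stops clobbering the build-failure message) can be sketched like this. It is a minimal model under stated assumptions, not the train's real schema; the class and field names are hypothetical.

```python
from dataclasses import dataclass

# Minimal sketch of robru's proposed fix: keep "dirty" as its own field
# so the cron job can flag a silo dirty without overwriting the
# free-form status string recorded by the last build.
@dataclass
class SiloStatus:
    status: str = ""       # e.g. "build failed" from the last build job
    dirty: bool = False    # set by cron when MPs gain new commits

    def display(self) -> str:
        # Both facts survive; neither clobbers the other.
        return self.status + " (dirty)" if self.dirty else self.status

silo = SiloStatus(status="build failed")
silo.dirty = True           # cron marks it dirty after the failed build
```

With the original single free-form string, whichever job wrote last (build vs. cron) won, which is exactly the clobbering Saviq hit on ticket 564.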
[20:50] <robru> Saviq: I just rolled out a small train change (just a code cleanup, shouldn't be any noticeable behavior change), and I'm just going to step out for lunch, if anything explodes I'll be back in 30
[20:51] <robru> (I tested it in the staging instance though, should be harmless, but there's always a slight risk...)
[20:51] <robru> anyway brb
[20:51] <Saviq> robru, ack
[21:19] <michi> robru: I just sent an email, could you have a squiz please? I’m looking for inspiration...
[21:20] <robru> michi: looking
[21:20] <michi> Ta!
[21:24] <robru> michi: https://launchpadlibrarian.net/222338585/DpkgTerminalLog.txt this seems more instructive. I guess there's a missing dep on something or other? not sure
[21:25] <popey> bfiller, something is up with the store, i thought the camera was published, but it's still showing as needing manual review
[21:25] <popey> I don't know why
[21:25] <bfiller_> Popey: do I need to do anything?
[21:25] <michi> libapt-pkg.so.4.16
[21:25] <michi> Never heard of it...
[21:25] <popey> bfiller_, i think a store admin needs to look at it. i asked in #u1-internal
[21:27] <michi> robru: So, this looks like a problem with click.
[21:27] <michi> Thanks for your help!
[21:29] <popey> ok, bfiller_ got it, it's approved.
[21:31] <bfiller_> popey: thanks
[21:32] <robru> michi: more specifically it's an import error which suggests to me that there's a dep missing, not sure what package provides that but shouldn't be too hard to dig it up.
[21:33] <michi> robru: Hmmm… We have a dependency on click. But if click needs libapt-pkg, I guess that’s a missing dependency in click.
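Digging up which package ships `libapt-pkg.so.4.16`, as robru suggests, is typically done with `dpkg -S <file>`; the snippet below just parses a sample line of that command's output. The sample line and helper are illustrative, not taken from the bug report.

```python
# Hedged sketch: extract the providing package name from one line of
# `dpkg -S` output. dpkg prints "package[:arch]: /path/to/file", so the
# package is everything before the first colon. The sample line below
# is an illustrative guess at what a vivid system would print.
def provider_from_dpkg_line(line):
    """Return the package name from a `dpkg -S` output line."""
    return line.split(":", 1)[0]

sample = "libapt-pkg4.16:amd64: /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.16"
```

Running `dpkg -S libapt-pkg.so.4.16` on an affected system would confirm which package should have been pulled in by click's dependencies.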
[21:33] <robru> michi: strange that that would be overlooked or that more people aren't affected
[21:33] <robru> michi: also strange that one bug is from utopic and one from vivid, I thought this might be some new issue in xenial or something
[21:33] <michi> So far, I’ve seen only these two reports
[21:34] <michi> But xenial isn’t involved, as far as I can see.
[21:34] <robru> brb tho, eating
[21:34] <michi> I don’t think any xenial build for scopes-api exists yet.
[21:34] <michi> Hey, take your time! :)
[21:38] <jgdx> bfiller_, hey, silo 47 just went to QA.
[21:41] <bfiller_> jgdx: cool. did 39 land?
[21:42] <robru> michi: yeah it wouldn't need a xenial build specifically, i was thinking maybe the wily build that got copied to xenial could have been broken by changes in click in xenial. But it's not xenial so that's not the issue anyway ;-)
[21:43] <michi> robru: Yes. I’ve added click to the bug. Let’s see what comes up. I’m pretty sure it’s not a scopes-api problem.
[21:43] <michi> Thanks again!
[21:44] <robru> michi: agreed, definitely an issue in click, you're welcome
[22:15] <brendand> Saviq, didn't silo 22 land in xenial?
[22:15] <Saviq> brendand, almost there, there was a toolchain issue that's now solved
[22:15] <brendand> Saviq, ah
[22:15] <Saviq> brendand, just waiting to test & migrate
[22:16] <Saviq> for some reason http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#qtmir-gles still says i386: Regression, even though if you click on it it's passed twice already
[22:38] <brendand> Saviq, because they're all the same version?
[22:40] <robru> brendand: the passes have newer timestamps though
[22:41] <robru> That is weird
[22:43] <brendand> agh it's in perl !
[22:43] <brendand> my eyes
[22:43] <robru> What is?
[22:43] <brendand> robru, debci
[22:44] <robru> Heh.
[22:50] <brendand> Saviq, here it says pass too: http://autopkgtest.ubuntu.com/packages/q/qtmir/
[22:50] <brendand> except on armhf
[22:53] <brendand> Saviq,  this is what's stopping unity8 from migrating?
[22:58] <brendand> oh actually it's mostly ruby