[00:00] <michi> sec
[00:00] <michi> http://people.canonical.com/~platform/citrain_dashboard/#?distro=ubuntu&q=landing-015
[00:00] <michi> Sorry, I got those links from Pawel in an email yesterday.
[00:00] <michi> I don’t normally use the silo stuff, so I’m ignorant.
[00:01] <michi> Is there any chance that it’s just a matter of the silo machines being too busy some of the time?
[00:02] <michi> The pattern in the past was that the failures kept increasing as the workload went up on the build machines. Basically, a few days before a release milestone, things went bad.
[00:02] <infinity> There's no way that builder is so busy that it takes 10 seconds to run that test (which I assume should happen in under a second).
[00:02] <infinity> But the VM could be having a sad.  Checking.
[00:02] <michi> We have a number of tests that depend on finishing within certain wall clock time limits.
[00:02] <michi> The test limits are pretty generous.
[00:03] <michi> But, if we don’t get at least about ¼ of the performance of a Nexus 4, they’ll start failing
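A wall-clock-bounded test of the kind michi describes might look like this minimal Python sketch (the workload and the limit here are illustrative; the real unity-scopes-api tests are not shown in the log):

```python
# Minimal sketch of a wall-clock-bounded test (illustrative workload and
# limit, not the project's actual test code).
import time

def assert_completes_within(fn, limit_s):
    """Fail if fn() takes longer than limit_s seconds of wall-clock time."""
    start = time.monotonic()
    fn()
    elapsed = time.monotonic() - start
    assert elapsed < limit_s, f"took {elapsed:.2f}s, limit was {limit_s}s"
    return elapsed

# A generous limit still blows up if the thread is starved of CPU time.
elapsed = assert_completes_within(lambda: sum(range(200_000)), limit_s=2.0)
```

The catch, as the rest of the conversation shows, is that such a test measures the machine as much as the code.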
[00:03] <infinity> And I question your observation about tests and workloads, as most of the builders used for silos are physical machines that don't share with anyone else.
[00:03] <infinity> And the ppc/ppc64el ones (the oddballs in the group) are VMs with 2x overcommit, which should never cause an issue like this, even under extreme pressure.
[00:03] <michi> If we are the only ones running on each machine, load shouldn’t be an issue.
[00:04] <michi> But we are *never* seeing those failures elsewhere, except on Jenkins, and then only when Jenkins is sick.
[00:04] <michi> I strongly suspect some infrastructure issue rather than a problem with our code.
[00:04] <infinity> Oh, Jenkins is a whole different story.
[00:04] <michi> We don’t have anything in our code that would be arch specific.
[00:04] <infinity> But this isn't Jenkins.
[00:05] <michi> Yes, I know. Just mentioning it to explain.
[00:05] <michi> Test should finish in less than 2 secs, from memory
[00:05] <infinity> Poking at the machine now to see if it's full of hate.
[00:06] <infinity> Hrm, nope.  The VM is perfectly happy.
[00:07] <infinity> Going to retry it on the same VM for kicks.
[00:07] <michi> Weird.
[00:08] <michi> OK. Pawel tried several times yesterday, and kept getting the failure
[00:09] <infinity> Well, one of the merges here *is* called "http client test timeout".
[00:09] <infinity> That seems a bit suspicious, no?
[00:10] <infinity> Sadly, the link to the MP is a 404, so I have no idea what the actual code change was.
[00:10] <michi> No. Pawel was thinking that the timeout might be too tight and upped it.
[00:10] <infinity> Oh.  Kay.
[00:11] <michi> I’m running Pawel’s branch in my chroot at the moment, running the test in a loop.
[00:11] <michi> It’s been going without failure for about 15 minutes now.
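The "run the test in a loop" approach michi is using could be sketched like this; `"true"` stands in for the real test binary, which the log doesn't name:

```python
# Sketch of looping a test to hunt for an intermittent failure.
# "true"/"false" are stand-ins for the real test command.
import subprocess

def run_until_failure(cmd, max_runs=50):
    """Run cmd repeatedly; return how many runs passed before a failure."""
    for i in range(max_runs):
        if subprocess.run(cmd).returncode != 0:
            return i
    return max_runs

passes = run_until_failure(["true"])    # never fails, so it hits max_runs
fails = run_until_failure(["false"])    # fails on the first run
```

For a flaky timing bug, a clean run like michi's 15 minutes only shows the failure isn't reproducing in *that* environment, not that the bug is gone.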
[00:11] <michi> Sec, I’ll try to find a link for you to the MR
[00:11] <infinity> On kelsey?
[00:11] <michi> https://code.launchpad.net/~unity-team/unity-scopes-api/http-client-test-timeout/+merge/247025
[00:11] <michi> Yes, whatever the 64-bit version is called.
[00:12] <michi> All the failures were on 64-bit PPC
[00:12] <infinity> I mean, is the chroot you're playing with on kelsey02?
[00:12] <michi> kelsey01
[00:12] <infinity> kelsey01 is ppc64el.
[00:12] <infinity> You want the other. :P
[00:12] <michi> Really?
[00:13] <michi> OK, I’ll make a chroot there.
[00:19] <infinity> michi: Anyhow, if something fails on powerpc and nothing else (including ppc64el), and you're pretty sure there's no arch-specific code, 9 times out of 10, it would suggest an endian bug.
[00:19] <infinity> michi: Either bad math in the actual code somewhere causing it to take longer than it should, or bad math in the test harness giving you bogus results.
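As a hedged illustration of the "bad math" infinity describes (a hypothetical scenario, not the actual unity-scopes-api code): a value serialized with a hard-coded little-endian layout but read back with big-endian byte order, as is native on powerpc, produces a wildly wrong number for any timing comparison:

```python
# Hypothetical endianness bug producing "bad math" on big-endian powerpc
# (illustrative only; not the project's actual code).
import struct

elapsed_ms = 1500                        # a plausible measurement
wire = struct.pack("<I", elapsed_ms)     # written assuming little-endian
misread = struct.unpack(">I", wire)[0]   # read back big-endian (powerpc)
# misread is 3691315200, so any timeout comparison against it is garbage
```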
[00:19] <michi> I’m very sure that it’s not an endian issue.
[00:19] <michi> Bad math might be possible.
[00:19] <infinity> (The retry failed, not shockingly)
[00:20] <michi> Strange though that it never fails on any other arch
[00:20] <michi> Building on kelsey01 now
[00:20] <infinity> 02...
[00:20] <infinity> Right?
[00:21] <infinity> (Less confusing to call them porter-ppc64el and porter-powerpc, so you're sure you have the right one)
[00:22] <michi> porter-powerpc
[00:22] <michi> Didn’t see the failure on ppc64el
[00:22] <infinity> But yes, it looks like you're on kelsey02 (porter-powerpc), so it's all good. :)
[00:22] <michi> Ah, confused the names, sorry
[00:50] <michi> infinity: Can’t reproduce the failure in my chroot build. Tried both debug and release builds
[00:58] <infinity> michi: Testing myself on kelsey02...
[00:58] <michi> Thanks!
[00:58] <michi> Could it be something really stupid, like the VM stopping for a while to snapshot itself or some such?
[00:59] <infinity> Nope.
[00:59] <infinity> (A) These VMs don't do things like that, but (B) the failure would be inconsistent, and it's not.
[00:59] <michi> Yes.
[00:59] <michi> It was a long shot, I know :)
[01:00] <infinity> But I question how you built this, when the vivid-powerpc chroot didn't even have all the build-deps installed until I just fixed that now. :P
[01:00] <infinity> (Was missing libnet-cpp-dev)
[01:01] <michi> I tried apt-get build-dep unity-scopes-api, which doesn’t work.
[01:01] <infinity> dpkg-checkbuilddeps in the source tree helps.
[01:01] <michi> In the end, I just took all the build-deps from debian/control and did an apt-get install for those
[01:02] <michi> I’ve also rewound the branch to before Pawel’s timeout hack.
[01:02] <michi> Same thing. Test is running in a loop without failure.
[01:02] <infinity> Anyhow, the chroot is upgraded and such now, let's see if this fails for me.
[01:02] <michi> I suspect it won’t.
[01:03] <infinity> Well, if it doesn't, then it's up to me to figure out why, which is why I'm curious.
[01:03] <infinity> If it does fail, I'm blaming you. :)
[01:03] <michi> That sounds fair. Let’s bet a beer on it, for the next sprint ;)
[01:03] <michi> The failures are consistent with one or more threads not getting CPU time.
[01:03] <infinity> Or it could just appear to be hung...
[01:04] <michi> The build log from the silo shows that the test failed because it didn’t get the expected timeout exception.
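A test that *expects* a timeout has roughly this shape (a hypothetical sketch using a socket pair; the real http client test isn't in the log). If the operation somehow completes, or never gets scheduled where the harness expects, the anticipated timeout exception never arrives and the test fails:

```python
# Sketch of a test that expects a timeout exception (hypothetical shape,
# not the actual http client test).
import socket

a, b = socket.socketpair()
a.settimeout(0.2)                  # generous, but finite
try:
    a.recv(1)                      # b never sends, so this must time out
    got_timeout = False
except socket.timeout:
    got_timeout = True
finally:
    a.close()
    b.close()
```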
[01:04] <infinity> tail -n 50 ~adconrad/usa/build.log
[01:04] <infinity> michi: ^-- What do you make of that?
[01:04] <michi> looking
[01:05] <michi> Try again.
[01:05] <infinity> Maybe because you're running your tests at the same time, and they don't play well together?
[01:05] <michi> The tests are not designed to run concurrently
[01:05] <michi> Exactly
[01:05] <michi> We are trying to use the same network endpoints in /tmp
[01:05] <michi> I’ve stopped my test
[01:06]  * infinity redoes this from a clean build again to replicate a buildd.
[01:07] <michi> infinity: I’ve removed the remnants of my network endpoints from /tmp, so you should be OK now.
[01:07] <michi> Might be better to change the tests to mkdir /tmp/$USER and put the endpoints there instead.
[01:07] <michi> So far, this hasn’t been an issue for us. It’s lazy, I know
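michi's `/tmp/$USER` suggestion could be sketched as a small helper (the layout comes from the suggestion above; the directory and socket names are hypothetical):

```python
# Sketch of per-user endpoint directories in /tmp, so concurrent runs by
# different users don't fight over the same socket paths (names and
# layout hypothetical, following michi's suggestion).
import getpass
import os

def endpoint_dir():
    """Create and return a private per-user directory under /tmp."""
    d = os.path.join("/tmp", getpass.getuser())
    os.makedirs(d, mode=0o700, exist_ok=True)  # private to this user
    return d

sock_path = os.path.join(endpoint_dir(), "registry-socket")
```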
[01:08] <infinity> I'd recommend the tmpdir being in the build tree, not in /tmp
[01:08] <michi> Unfortunately, we can’t do that.
[01:08] <infinity> So it doesn't generate cruft.
[01:08] <infinity> Oh.  Why?
[01:08] <michi> 107 char limitation on UNIX domain sockets means that things don’t work when they run on Jenkins.
[01:08] <infinity> Ahh.
[01:08] <infinity> Fair enough.  That's not what's causing your issue anyway.
[01:09] <infinity> buildd chroots are fresh on every build.
[01:09] <michi> How this mis-design has managed to hang around for this many years is amazing.
[01:09] <michi> if open() can deal with long paths, so can bind()
[01:09] <michi> end of rant
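The rant checks out: on Linux, `struct sockaddr_un.sun_path` is a fixed 108-byte array (107 usable characters plus the terminating NUL), while open() accepts paths up to PATH_MAX. A quick demonstration, using throwaway temp paths:

```python
# Demonstrate the sun_path limit: open() handles a long name that
# bind() on an AF_UNIX socket rejects (temp paths are throwaway).
import os
import socket
import tempfile

long_path = os.path.join(tempfile.mkdtemp(), "x" * 150)

with open(long_path, "w") as f:    # open() is fine with the long name
    f.write("ok")

s = socket.socket(socket.AF_UNIX)
try:
    s.bind(long_path)              # exceeds the ~107-char sun_path limit
    bound = True
except OSError:
    bound = False
finally:
    s.close()
    os.unlink(long_path)
```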
[01:10] <infinity> I imagine there are POSIX/XOPEN reasons why changing it isn't trivial.
[01:10] <michi> Probably. But stuffing the path into the end of the struct was a silly thing to do in the first place.
[01:10] <michi> The mistake was made long before POSIX
[01:11] <infinity> There are a lot of silly things in UNIX. :)
[01:11] <michi> No, really? Have you read the UNIX Hater’s Handbook? ;)
[01:11] <infinity> But every time someone tries to start fresh, it never really catches on.
[01:11] <infinity> See: Plan 9, or Hurd.
[01:11] <michi> Yes. No surprise, really. Because it works amazingly well almost all of the time.
[01:12] <michi> Doesn’t mean that I can’t have a good ol’ bitch every now and then though :)
[01:14] <infinity> michi: So, good news (for me).  It fails the same on kelsey02 with a full package build.  ~adconrad/usa has the source and the build log.
[01:14] <michi> Looking. That indeed is good news
[01:15] <michi> So, that would point to a difference in the packages?
[01:15] <michi> How did you get the build-deps installed?
[01:15] <infinity> michi: So, I'd recommend unpacking the actual source (dpkg-source -x foo.dsc), building it with dpkg-buildpackage -B, and then fiddling with the results.
[01:16] <michi> OK, I’ll give that a shot now.
[01:16] <michi> Where did you get the source package?
[01:16] <michi> Just want to make sure that I’m doing the same thing as you
[01:16] <infinity> michi: It might have something to do with libnet-cpp-dev having not been installed for your build, it might be that I upgraded the compiler to the latest version, or those might all be red herrings, and it could just be that you took a shortcut in your build that made it not the same to a real package build.
[01:16] <infinity> michi: I got the sources from the PPA, but you can just unpack the one in my home dir there.
[01:16] <michi> OK, will do
[01:17] <michi> Hmmm...
[01:17] <michi> I did an apt-get for all the deps.
[01:17] <michi> So that might have done it too.
[01:17] <michi> I’ll try with your source first.
[01:17] <michi> If I don’t see it then, I’ll use a new chroot
[01:19] <michi> infinity: what build mode did you use? The default, or debug?
[01:20] <michi> Forget it, doesn’t matter.
[01:20] <michi> dpkg-buildpackage -B
[01:22] <michi> infinity: can you blow away your endpoint dir in /tmp?
[01:22] <michi> Or anything else that looks like a socket?
[01:23] <michi> I think /tmp/priv, and maybe runtime-adconrad
[01:25] <infinity> michi: Done.
[01:25] <michi> Thanks!
[01:25] <michi> Still building…
[01:32] <infinity> Grr, didn't actually kill my testsuite successfully and those directories came back in /tmp
[01:32] <infinity> Hopefully that didn't break you.
[01:40] <cyphermox> infinity: you on trainguard duty tonight?
[01:40] <robru> cyphermox: I'm around
[01:41] <cyphermox> robru: ah, I was just wondering because.
[01:41] <robru> cyphermox: oh are you waiting for something?
[01:41] <cyphermox> rsalveti: ah, cool, I was about to ask if you had already started the rtm mtp landing
[01:41] <rsalveti> cyphermox: yeah, got a silo and just tested it
[01:41] <cyphermox> great
[01:42] <robru> cyphermox: you know you have all the trainguard permissions yourself... maybe you forgot, you can assign and publish your own silos ;-)
[01:43] <cyphermox> i know
[01:44] <michi> infinity: No, didn’t break me.
[01:44] <michi> OK, good news:
[01:44] <michi> I’m seeing the failure too now.
[02:06] <infinity> michi: Well, that's something.
[02:23] <fginther> michi, I have a change ready to remove the problematic jenkins builders from the unity-scope-api jobs
[02:23] <michi> Sweet, thank you!
[02:23] <michi> I just kicked off another build.
[02:23] <fginther> michi, http://s-jenkins.ubuntu-ci:8080/job/unity-scopes-api-ci/541/ is doomed to fail, shall I abort it and get it restarted on the new set?
[02:23] <michi> Should I stop that one?
[02:23] <michi> Yes please! :)
[02:24] <michi> Any idea what’s causing the issue yet?
[02:24] <fginther> michi, ack, will get it restarted
[02:26] <fginther> michi, The issue appears to be heavily influenced by the configuration of the build nodes themselves. I'm leaning towards the theory that the memory and disk configuration aren't right for this workload
[02:26] <michi> So the config for the working nodes is different?
[02:28] <fginther> They're all the same, so it's not a great theory... But there could be a relationship to the hypervisor node.  I.e. the 'bad' nodes may be causing thrashing on the hypervisor itself
[02:31] <michi> I see.
[02:31] <michi> I don’t envy you :(
[02:31] <fginther> michi, it hasn't been a fun problem :(
[02:31] <michi> Nope
[04:02] <kgunn> trainguards: is anyone around who can publish?
[04:02] <kgunn> http://people.canonical.com/~platform/citrain_dashboard/#?distro=ubuntu-rtm&q=landing-006
[07:27] <satoris> ping cihelp, jenkins builders are still wonky: https://jenkins.qa.ubuntu.com/job/thumbnailer-vivid-amd64-ci/15/console
[08:41] <thomi> hey satoris, made it home OK?
[08:42] <satoris> thomi: yep, thanks. You too, I hope. :)
[08:42] <thomi> heh.. yeah, but that was a bit easier
[08:43] <thomi> just got rid of tych0 today - we had fun the last few days
[08:44] <satoris> I can imagine. Did it involve beer and the steepest road in the world?
[08:44] <thomi> it did! and whiskey ;)
[08:45] <thomi> anyway, catch you 'round :D
[09:16] <pete-woods> trainguards: hi, can someone push the publish button on vivid silo #12 again? I got my code reviewed this time :$ (https://ci-train.ubuntu.com/job/ubuntu-landing-012-2-publish/build?delay=0sec)
[09:16] <sil2100> pete-woods: sure!
[09:20] <pete-woods> sil2100: thanks :)
[09:25] <sil2100_> And there goes my morning work, just love GPU hang-ups
[09:30] <sil2100> huh
[09:51] <pstolowski> trainguards hey, silo 15 fails with some internal errors - https://ci-train.ubuntu.com/job/ubuntu-landing-015-1-build/102/console
[09:53] <sil2100> satoris: hey!
[09:53] <satoris> Hello.
[09:54] <sil2100> satoris: it seems that the latest thumbnailer upload caused some issues in the gallery-app
[09:55] <sil2100> satoris: LP: #1412442
[09:55] <sil2100> satoris: we would need this to be fixed ASAP, otherwise we'll have to revert thumbnailer as it's now causing regressions on the image
[09:55] <satoris> There are two fixes outstanding but I can't land them because jenkins fails to build.
[09:56] <satoris> https://jenkins.qa.ubuntu.com/job/thumbnailer-vivid-amd64-ci/16/console
[09:56] <satoris> A cihelp guy said to look into it yesterday but there have been no news since and my reping (an hour or few ago) got no replies. :(
[09:57] <vila> satoris: I juuuust finished reading the backlog here ;)
[09:57] <sil2100> \o/
[09:57] <satoris> sil2100: when the two fixes from kaleo land, then it should start working.
[09:58] <satoris> I'll corral them in as soon as possible.
[09:58] <vila> satoris: now I need to find what fginther did but I agree with him that the fix is around having 1) beefier workers 2) make sure they are properly spread on the physical nodes
[10:05] <satoris> sil2100: so if the builder thing is fixed in time the fixes can land immediately, if not then revert is probably the correct thing to do.
[10:06] <pstolowski> sil2100, hey, what's going on with https://ci-train.ubuntu.com/job/ubuntu-landing-015-1-build/104/console ?
[10:07] <vila> satoris: on it, the jenkins workspace permission bits are broken (read-only)
[10:07] <satoris> Excellent, thanks.
[10:08] <vila> damn it, I chmod'ed 755 and http://s-jenkins.ubuntu-ci:8080/job/thumbnailer-vivid-amd64-ci/18/console put it back to 711 !!
[10:08] <vila> at least I'm lucky enough that it keeps using the same worker
[10:08] <pstolowski> trainguards could you pls reconfigure silo 15, thanks
[10:09] <vila> satoris: rm'ing the damn dir allows progress: http://s-jenkins.ubuntu-ci:8080/job/thumbnailer-vivid-amd64-ci/19/console
[10:10] <vila> satoris: is that the MP you're after though ? Or should I look at another one and keep digging ?
[10:11] <satoris> vila: that's the one. (original mr is https://code.launchpad.net/~fboucault/thumbnailer/save_failures/+merge/246547)
[10:12] <vila> fginther: we've got a gremlin having fun at us ^ chmod'ing the dir didn't work, rm'ing it did
[10:13] <vila> fginther: on cloud-worker-10 so far, not sure the gremlin will stay there or if it already broke some other dirs though...
[10:13] <vila> satoris: success on http://s-jenkins.ubuntu-ci:8080/job/thumbnailer-vivid-amd64-ci/19/console
[10:13] <vila> satoris: what's next ?
[10:15] <satoris> vila: hmm, the launchpad page is not updated with approval. Let me restart it.
[10:16] <vila> satoris: ha, crap, the lower job succeeded, my bad, I need to re-run the higher level one.... /me sighs
[10:16] <satoris> vila: I already restarted it.
[10:16] <vila> satoris: ha good, url ?
[10:16] <vila> nm, stupid request
[10:17] <vila> satoris: hmpf, permission denied again ;-(
[10:20] <vila> satoris: ok, you're jinxed, three different workers involved, all with permission denied
[10:21] <satoris> That makes me a sad panda. :-(
[10:21] <vila> satoris: same here ;-/
[10:21] <vila> satoris: especially since this error makes no sense
[10:22] <vila> satoris: and is now spreading on different workers
[10:22] <satoris> Maybe it's Skynet?
[10:23] <sil2100> pstolowski: o/
[10:24] <vila> satoris: yay, removing the 'x' bit on dirs is especially nasty, can't even think which code can do that in that context (can hardly think about a valid case for removing 'x' on a dir...)
[10:26] <satoris> Maybe some wildcard is doing the wrong thing?
[10:26] <sil2100> pstolowski: the error is really strange, let's see how things look after the reconfigure
[10:30] <sil2100> pstolowski: reconfigured
[10:36] <mzanetti> sil2100: can you please reconfig rtm/16 for me (row 42)
[10:36] <sil2100> mzanetti: sure thing
[10:36] <sil2100> What has been added?
[10:36] <mzanetti> the gles twin
[10:43] <mzanetti> thanks :)
[10:45] <sil2100> o/
[11:00] <sil2100> Damn, one merge is not approved
[11:06] <vila> satoris: hotfixing the workers, this will take a while, I'll keep you posted
[11:18] <jibel> vila, what is the ETA? we are blocked on this fix to land another silo this afternoon. If it cannot land soon we'll have to revert.
[11:34] <sil2100> jibel: I'll prepare a revert in the meantime, so that everything is ready... could you confirm that installing thumbnailer 1.3+15.04.20141218~rtm.is.1.3+14.10.20141020-0ubuntu1 fixes the issues?
[11:34] <sil2100> jibel: since we'll be reverting to that version
[11:35] <sil2100> (which actually is a revert...)
[11:35] <jibel> sil2100, I'll test after lunch
[11:36] <vila> jibel: I finished "fixing" the cloud workers, I'm starting to fix the cyclops ones
[11:37] <Kaleo> nerochiaro, can you speak to sil2100 about the critical gallery bug from yesterday?
[11:38] <jibel> sil2100, don't do any more reverts of this package or you'll overflow the version number ;)
[11:38] <sil2100> Kaleo: I think the discussion above is about that bug ;)
[11:38] <Kaleo> sil2100, yes, and there is a pending fix in gallery IIRC
[11:38] <sil2100> jibel: I'll fine-tune the version so it's not a revert to a revert ;p Since otherwise the version number would really be too big!
[11:39] <sil2100> Kaleo: in gallery as well? I thought it was thumbnailer that caused the regression
[11:39] <Kaleo> sil2100, it is
[11:39] <Kaleo> sil2100, but the code in gallery was not ideal
[11:39] <Kaleo> sil2100, and making it ideal also fixes the bug
[11:39] <Kaleo> sil2100, https://code.launchpad.net/~phablet-team/gallery-app/avoid-thumbnailers-from-viewer/+merge/246997
[11:40] <Kaleo> sil2100, status to be checked with nerochiaro
[11:40] <jibel> Kaleo, if you want more fixes in the gallery it has to go in silo 14
[11:40] <jibel> Kaleo, it's the sync from vivid
[11:40] <sil2100> Oh, ok, that would be great, as it would mean that even in the worst case we can still avoid the revert
[11:42] <Kaleo> jibel, ok; nerochiaro see jibel's comment as well ^
[11:42] <nerochiaro> Kaleo: sil2100: the status is that as far as i am concerned that branch Kaleo pointed out fixes the bug in gallery. Bill was supposed to test it yesterday and let me know but it did not happen. I expect him and Pat to take a decision on this today
[11:43] <nerochiaro> Kaleo: sil2100: jibel: i would say please chuck that branch in silo 14 so Bill can look and approve when he comes in
[11:47] <sil2100> nerochiaro: that might be a bit problematic as the gallery-app silo is a sync silo, so it has no merges assigned to it
[11:47] <sil2100> hmm
[11:47] <sil2100> But I guess we can work-around that somehow
[11:47] <sil2100> Will check after lunch
[11:48] <jibel> sil2100, 1.3+15.04.20141218~rtm.is.1.3+14.10.20141020-0ubuntu1 works on krillin/rtm 205
[11:48]  * jibel -> lunch
[11:48] <nerochiaro> sil2100: i am not very familiar with the silo system so please do as you feel is best. the goal is to allow bill and pat a quick path to check the patch solves the issue and approve for quick release if they are satisfied with it
[11:55] <alan_g> cihelp: we've seen a couple of builds time out on cloud-worker-11 - is it struggling? https://jenkins.qa.ubuntu.com/job/mir-android-vivid-i386-build/961/consoleFull https://jenkins.qa.ubuntu.com/job/mir-android-vivid-i386-build/957/consoleFull
[11:56] <vila> alan_g: it's an understatement ;) All cloud workers are struggling on mir builds
[12:10] <Wellark> hi guys
[12:10] <Wellark> where are the ddebs from RTM published?
[12:10] <Wellark> they don't appear to be in http://ddebs.ubuntu.com/
[12:11] <Wellark> and not under http://derived.archive.canonical.com/
[12:11] <Wellark> ogra_: do you know?
[12:11] <Wellark> sil2100 maybe?
[12:13] <Wellark> trainguards: help... --^
[12:15] <oSoMoN> trainguards: can I have an RTM silo for line 78, please?
[12:17] <cjwatson> Wellark: they should be on the main ddebs site
[12:17] <cjwatson> Wellark: http://ddebs.ubuntu.com/ubuntu-rtm/
[12:18] <Wellark> cjwatson: oh, there was a subdirectory!
[12:18] <Wellark> cjwatson: thanks, love you! :D
[12:57] <sil2100> oSoMoN: on it! Was on lunch :)
[13:00] <oSoMoN> sil2100, thanks
[13:12] <om26er> pete-woods, Hi!
[13:13] <om26er> pete-woods, How can I verify fix for bug 1411201 ?
[13:15] <sil2100> satoris: the best thing we can do regarding gallery-app is releasing your fix for vivid and then re-doing the sync to ubuntu-rtm
[13:15] <sil2100> satoris: anything else would require bfiller
[13:15] <sil2100> bfiller: ping
[13:20] <satoris> sil2100: if by "my fix" you mean the ones we are trying to land once Jenkins permits? If so then absolutely yes.
[13:22] <pmcgowan> sil2100, whats the question
[13:22] <pete-woods> om26er: I did it by adding a new collection of scopes in the store. it's called "canned data scopes"
[13:22] <pete-woods> om26er: one of them prints the current GPS coordinates. you can refresh the scope and check that the GPS coordinates change as you move around
[13:23] <pete-woods> om26er: the scope is called "Canned scope using location"
[13:23] <om26er> pete-woods, without the silo, will it not update co-ordinates ?
[13:24] <om26er> also do I need to run the complete test plan
[13:24] <pete-woods> om26er: well that's up to you. I can tell you that the only code that is touched is inside the location code
[13:25] <pete-woods> om26er: it doesn't update the coordinates without the silo, no
[13:27] <sil2100> pmcgowan: so we're in a sticky situation related to the recent thumbnailer regression (that broke gallery): we have fixes from both the gallery and thumbnailer side
[13:27] <om26er> pete-woods, is it unity-scope-canned on launchpad ?
[13:27] <sil2100> pmcgowan: whenever any one of them lands, we're fixed, but...
[13:28] <pete-woods> om26er: yes. but it's in the store, so you don't need to build it yourself or anything
[13:28] <pmcgowan> nerochiaro, can you discuss ^^
[13:28] <sil2100> pmcgowan: the thumbnailer fix is blocked on jenkins build problems (which are being worked on by CI), while the gallery-app fix well, would require bfiller around since the gallery-app fix needs to be somehow incorporated into the gallery silo
[13:28] <om26er> pete-woods, right, I found it in the store. Its actually called 'Canned data scopes'
[13:28] <pete-woods> ^ om26er: I did it by adding a new collection of scopes in the store. it's called "canned data scopes"
[13:29] <pmcgowan> sil2100, we would need to understand what the thumbnailer side fix is, I wasnt aware it could be fixed there
[13:29] <sil2100> pmcgowan: where the gallery silo is a sync silo, so adding an additional merge cannot be done... we would have to probably just release it in vivid and sync, or ask bfiller if we can do it somehow else
[13:29] <sil2100> pmcgowan: it seems it can be fixed from both sides
[13:29] <sil2100> pmcgowan: so both the thumbnailer can be fixed, and the gallery can be fixed
[13:29] <pmcgowan> satoris and nerochiaro  need to discuss
[13:30] <pete-woods> om26er: there is more than one scope in that package. you should probably favourite the one called "Canned scope using location"
[13:30] <nerochiaro> pmcgowan: the gallery fix will solve the problem as far as I am concerned, using the version of thumbnailer that is in the current image. The thumbnailer fix is something Kaleo did, more than Satoris
[13:30] <nerochiaro> sil2100: ^
[13:31] <satoris> Yes, I'm just trying to get the fixes through Jenkins.
[13:31] <pmcgowan> nerochiaro, do we want both fixes then?
[13:31] <sil2100> pmcgowan: both can be in, yes, but not both are required for the gallery to work again - just one is enough
[13:31] <satoris> The thumbnailer fixes some other issues too so we might want to get it in anyway.
[13:31] <sil2100> But currently we can't get either of them landed
[13:31] <nerochiaro> pmcgowan: as sil2100 and satoris said
[13:34] <bzoltan_> sil2100:  would you please release the silo8? I have approved the MR
[13:36] <sil2100> bzoltan_: o/
[13:37] <bzoltan_> sil2100:  thank you
[13:37] <sil2100> bzoltan_: need to contact an archive admin for this one
[13:38] <bzoltan_> sil2100: yes, as it provides a new package
[13:38] <sil2100> cjwatson: hey! Could you take a look at https://ci-train.ubuntu.com/job/ubuntu-landing-008-2-publish/lastSuccessfulBuild/artifact/packaging_changes_qtcreator-plugin-ubuntu_3.1.1+15.04.20150119-0ubuntu1.diff ? It's adding a new binary package and needs an ACK from an archive admin :)
[13:38] <sil2100> cjwatson: thanks!
[13:39] <bzoltan_> ogra_: how much time/trouble would it take to push an update to the sdk-libs-tools after the ubuntu-sdk-qmake-extras lands on Vivid?
[13:43] <cjwatson> sil2100: could you ask a more routine archive admin, please?  I'm phasing out of this sort of thing since I've moved to the Launchpad team
[13:43] <sil2100> cjwatson: ACK! Let me find someone ;)
[13:45] <om26er> pete-woods, ok, I can confirm the fix. Had to go out for the satellite to connect.
[13:45] <om26er> Now need to run the Test plan.
[13:46] <pete-woods> om26er: awesome, thanks. yeah I've been wandering around in the frost while testing it :)
[13:51] <sil2100> jibel: the vivid emulator images not running might be a problem this week, as rsalveti is away...
[13:52] <sil2100> Not sure how you're supposed to do sanity testing on this then
[13:52] <jibel> sil2100, testing on utopic
[14:05] <jibel> Chipaca, can you have a look at silo 000 (ubuntu-push) Verified 2 days ago and still not published "Some merges are unapproved."
[14:05] <jibel> bfiller, silo 2 passed QA but can not publish "Some merges are unapproved."
[14:06] <bfiller> jibel: let me check
[14:07] <bfiller> jibel: all set, sil2100 mind republishing silo 2 please?
[14:07] <sil2100> bfiller: approved? :)
[14:07] <bfiller> sil2100: indeed :)
[14:08] <sil2100> This is what I like
[14:09] <jibel> bfiller, thanks. So gallery-app in silo 14 is the only remaining sync from vivid.
[14:10] <bfiller> jibel: that's great. we are working on the fix still for the black screen and thumbnails issue. hopefully we'll have that today
[14:10] <bfiller> jibel: caused by the thumbnailer that was released this weekend
[14:11] <ogra_> bzoltan_, a seed change, a rebuild of the meta and an upload of the meta
[14:11] <jibel> bfiller, yeah, I know the story. We need gallery app verified today to have it on tomorrow morning's image that will go through regression testing.
[14:12] <bzoltan_> ogra_: OK, I will ping you once the silo8 is landed
[14:12] <bfiller> jibel: ack, we'll either have a fix or will push to revert thumbnailer
[14:12] <ogra_> bzoltan_, prepare a seed MP please to make sure it doesn't get typoed or some such
[14:14] <mzanetti> sil2100: when you have a minute, a silo for row 80 please
[14:15] <bzoltan_> ogra_:  What? An MP without typo is like a horse without washing machine...
[14:15] <ogra_> true true :)
[14:16] <sil2100> mzanetti: remember that there's unity8 in landing-021 already
[14:17] <mzanetti> uh
[14:17]  * mzanetti checks
[14:17] <mzanetti> sil2100: thanks for that. I would have missed it
[14:17] <Chipaca> jibel: I believe that that message is in error or outdated; they've been approved since monday
[14:17] <mzanetti> sil2100: will move my stuff into that
[14:17] <sil2100> Oh, ok, since I already assigned a silo ;)
[14:17] <sil2100> mzanetti: you want 80 unassigned
[14:17] <sil2100> ?
[14:17] <mzanetti> sil2100: yep, don't need two
[14:18] <mzanetti> sil2100: sorry :/
[14:18] <sil2100> No worries, ok, freeing :)
[14:19] <kenvandine> mandel, what's up with rtm silo 19?
[14:21] <jibel> Chipaca, hm, which version should be on the phone? current version in the archive is 0.64.1+14.10.20141023.2~rtm-0ubuntu1, it looks old.
[14:21] <jibel> Chipaca, is it the right version?
[14:22] <Chipaca> jibel: that's likely
[14:23] <abeato> jibel, hi, is there anything stopping rtm silo 9 for being QA tested? I've seen the "not targeted" comment in the trello board for one of the bugs
[14:24] <jibel> abeato, hi, it is not on the list for ww05. Can you escalate it to pmcgowan for review or have it removed from the silo?
[14:25] <abeato> pmcgowan, could you take a look at bug #1376250 ? iirc it was targeted for a landing in previous iterations
[14:26] <abeato> jibel, but in any case the changes for that are a pre-requirement for the fix for bug #1373388 , which is built on top of that
[14:30] <jibel> Chipaca, to be clear, the package we verified in silo 000 on Monday is not in the archive. Something is wrong with the publication.
[14:30] <jibel> sil2100, ^ can you have a look at what happened to ubuntu-push
[14:30] <Chipaca> jibel: ok, now i'm confused
[14:30] <bzoltan_> sil2100:  did you find anybody who could sign off the silo8?
[14:31] <Chipaca> jibel: are those checks triggered manually, or are they autoamtic?
[14:32] <Chipaca> jibel: because sil2100 told me about them being unapproved on monday, i corected that, but he'd left by the time it got done
[14:32] <pmcgowan> abeato, thats a very large change in that MR
[14:32] <Chipaca> jibel: so if they are manual, then somebody needs to hit it again
[14:32] <jibel> sil2100, it included a fix for bug 1376282 and bug 1380662 but I cannot find it anywhere, so maybe I'm doing it wrong
[14:32] <Chipaca> jibel: if they're automatic, something's broken
[14:32] <jibel> Chipaca, I've no idea how that part of the train works
[14:32] <abeato> pmcgowan, the changes for setting dynamically the slot with 3G capabilities are large
[14:33] <jibel> Chipaca, I was just checking that what the QA team verified actually landed
[14:33] <Chipaca> jibel: ah, ok
[14:34] <abeato> pmcgowan, it includes creation of new DBus interfaces and some new files
[14:34] <vila> satoris, jibel: cloud/cyclops workers issue still under investigation
[14:35] <jibel> vila, in other words: it won't land today?
[14:36] <jibel> vila, it = thumbnailer fix
[14:37] <vila> jibel: I still can't say, that job has been failing for quite some time (jobs are kept for only two weeks, I've seen evidence as old as 5 January), the only successful one is http://s-jenkins.ubuntu-ci:8080/job/thumbnailer-vivid-armhf-ci/10/consoleFull
[14:37] <jibel> sil2100, ^ revert?
[14:38] <vila> jibel: and the success is probably because it ran for the first time on that specific worker
[14:38] <vila> jibel, sil2100 : will the revert require to run the same job ?
[14:38] <sil2100> jibel: one moment, still in UE Live
[14:38] <sil2100> So I'm a bit distracted
[14:40] <sil2100> jibel: ok, let me check the ubuntu-push silo and try to release it
[14:41] <sil2100> jibel: as for the revert, I still need to poke bfiller since we might simply be able to release the gallery-app fix instead then
[14:41] <sil2100> bfiller: hey! So, I was thinking, could we somehow release nerochiaro's gallery-app fix for the thumbnailer issues?
[14:45] <sil2100> bfiller: we might take nerochiaro's branch, release it for vivid and re-sync gallery-app in the silo
[14:45] <sil2100> bfiller: but this would have to be done quickly
[14:45] <bfiller> sil2100: that is the plan, just doing some final testing on it
[14:45] <bfiller> sil2100: not quite ready yet
[14:45] <sil2100> bfiller: excellent :)
[14:45] <sil2100> I'll prepare the thumbnailer revert just in case
[14:50] <vila> sil2100: will the revert require to run the same job ?
[14:50] <sil2100> vila: no, a revert would just require a push to the archive
[14:50] <sil2100> So no worries, we do soft reverts (only in the archive)
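[Editor's note: for readers unfamiliar with the train's terminology, a "soft revert" here means re-uploading the last known-good package content under a higher version so the archive supersedes the broken upload, with no merge back to trunk. A minimal sketch of the idea follows; the package name, versions, and the echo stand-in are all invented for illustration, and the CI Train's real revert tooling is not shown in this log.]

```shell
# Illustration only: names/versions are hypothetical, not from this log.
pkg=thumbnailer
good=2.3+15.04.20150115-0ubuntu1      # hypothetical last known-good version
bumped="${good%1}2"                   # bump trailing revision: ...-0ubuntu2
# A real soft revert would re-upload $good's content as $bumped; here we
# just print what would happen.
echo "soft revert: re-upload $pkg $good as $bumped"
```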
[14:51] <awe_> abeato, AFAIK RTM-silo #9 only contains ofono.  Is this what you're talking about when you say the landing has been stopped?
[14:51] <bzoltan_> ogra_: This is the MR -> https://code.launchpad.net/~bzoltan/ubuntu-seeds/qmake_enablers/+merge/247148 after the silo8 gets an ACK from an archive admin, as this silo brings the new ubuntu-sdk-qmake-extras package
[14:52] <vila> sil2100: ack, let me know when you go with the revert
[14:52] <abeato> awe_, pmcgowan was complaining it was a rather large change
[14:52] <awe_> pmcgowan, ^^????
[14:52] <sil2100> bzoltan_: we got a +1, let me push it
[14:52] <abeato> but these changes are all actually needed for set3G, unfortunately
[14:52] <bzoltan_> sil2100:  \o/
[14:53] <awe_> abeato, let's hear from pmcgowan first...
[14:53] <sil2100> bzoltan_: there was one proposition though for the future, but let me first release this
[14:53] <sil2100> (too many things at once)
[14:54] <bzoltan_> sil2100: shoot
[14:56] <greyback_> trainguards: can I get a vivid silo for line 80 of the spreadsheet (qtmir)
[14:56] <sil2100> greyback_: sure
[14:56] <greyback_> thanks!
[14:56] <kenvandine> rsalveti, do you have a status on rtm silo 19?
[14:56] <sil2100> greyback_: done o/
[14:57] <sil2100> yw!
[14:57] <sil2100> kenvandine: rsalveti is away today it seems :)
[14:57] <kenvandine> sil2100, thanks... i'll pester mandel then :)
[14:57] <kenvandine> mandel...
[15:01] <ev> apologies for the CI team being quiet as we've worked through this today. We're firefighting a couple of issues here and will keep you all posted through the day on our progress.
[15:02] <fginther> alan_g, kgunn, I'm working on deploying some bigger build nodes for mir. If they test out ok, I'll have them switched on as soon as possible
[15:08] <kgunn> thanks a bunch!
[15:16] <sil2100> bfiller: how's the gallery-app fix testing going? Since if it's not available soon we'll have to revert thumbnailer
[15:16]  * sil2100 has the revert ready
[15:17] <bfiller> sil2100: silo building then we'll test
[15:17] <bfiller> sil2100: should know within 1hr or so
[15:19] <satoris> ping trainguards, could someone start silo 007 for me. It says I don't have permissions.
[15:20] <sil2100> satoris: oh, let me take a look then (and add you to the required group)
[15:21] <satoris> sil2100: thanks.
[15:22] <sil2100> satoris: ok, you're added now :)
[15:22] <sil2100> satoris: strange that you weren't, since you had access to the spreadsheet
[15:22] <satoris> Got the mail.
[15:24] <sil2100> jibel: how does ~1h sound for the gallery-app resolution? I can push the revert, but we need to be aware that we'll reopen any closed thumbnailer bugs until the CI problems get resolved
[15:24] <sil2100> I'm fine with either approach really
[15:25] <mandel> kenvandine, what?
[15:25] <mandel> kenvandine, ahh the silo, I've tested it and works as expected
[15:25] <mandel> sil2100, anything I have to do about that (silo 19, rtm)
[15:25] <kenvandine> mandel, please mark it as tested so it gets in the QA queue
[15:26] <mandel> kenvandine, ack
[15:29] <mandel> kenvandine, should be ok now
[15:29] <kenvandine> mandel, thx
[15:40] <pmcgowan> awe_, abeato sil2100 so after discussing with awe for now we propose to just land the auto answer branch and defer the rest until we can test it all together with settings and indicator
[15:42] <awe_> pmcgowan, ack.  abeato is working on the updated MR as we speak
[15:42] <bfiller> sil2100, jibel: there is a new click to test for gallery and it's linked off the dashboard for rtm silo 14, seems to fix the problem. I'm waiting for the vivid silo to finish building, then we can release that and sync to rtm. If we are out of time I can manually sync those MRs to trunk and we can release trunk in rtm first, then sync back to vivid
[15:43] <sil2100> awe_, pmcgowan: what changes is this about?
[15:43] <bfiller> sil2100: the debs don't really matter so much for gallery, it's just the release of the click to the store based on trunk
[15:43] <sil2100> bfiller: excellent
[15:43] <sil2100> jibel, brendand, davmor2: ^ can anyone of you take a look?
[15:43] <sil2100> Or om26er_ ^
[15:43] <sil2100> om26er_: you tested gallery originally, right?
[15:44] <om26er_> sil2100, yes, I did.
[15:44] <sil2100> om26er_: could you check out the new click for gallery? It should be attached to the silo desc/comment
[15:44] <sil2100> om26er_: it fixes the thumbnailer-caused issues
[15:44] <sil2100> :)
[15:44] <om26er_> bfiller, does it incorporate the fix for black images ?
[15:45] <bfiller> om26er_: yes
[15:54] <sil2100> om26er_: can you work on it now? :)
[16:00] <om26er_> sil2100, yes, on it.
[16:04] <sil2100> om26er: thank you!
[16:05] <cwayne> davmor2: rvr: i don't suppose there's been any testing yet on custom?
[16:07] <rvr> cwayne: I think davmor2 started to test it
[16:07] <davmor2> cwayne: started
[16:08] <sil2100> \o/
[16:12] <cwayne> excellent
[16:24] <abeato> trainguards, I was trying to reconfigure silo 9, but something went wrong, could you take a look?
[16:24] <abeato> I cleaned the silo before (with "no merge" set), maybe that was the issue
[16:24] <abeato> awe_, ^^
[16:25] <john-mcaleely> http://people.canonical.com/~jhm/barajas/ubuntu-rtm-14.09/device_krillin-20150121-61768b8.tar.xz
[16:25] <john-mcaleely> http://people.canonical.com/~jhm/barajas/ubuntu-rtm-14.09/device_krillin-20150121-61768b8.changes
[16:25] <john-mcaleely> http://people.canonical.com/~jhm/barajas/ubuntu-rtm-14.09/device_krillin-testresults-20150121-61768b8.ods
[16:26] <john-mcaleely> ogra_, sil2100 davmor2 ^ new device tarball for rtm
[16:26] <john-mcaleely> can I ask for a QA pass please?
[16:26] <davmor2> john-mcaleely: working on the custom tarball currently cwayne beat you to it :)
[16:27] <john-mcaleely> curse you cwayne :-)
[16:27] <cwayne> hah, take that boss
[16:28] <sil2100> Oh no!
[16:29] <sil2100> davmor2: will you be able to still sign off the device tarball today? Not sure how far you are with the custom one
[16:30] <cwayne> sil2100: is rvr scheduled to test out 14.09.es custom too? we really need to keep both of them in sync
[16:31] <bfiller> sil2100: could you publish ubuntu silo 16 please for gallery?
[16:32] <sil2100> bfiller: sure
[16:33] <abeato> sil2100, would you mind taking a look at line 51, was trying to reconfigure but had some problems, would it be better to remove that line and start again?
[17:02] <rvr> cwayne: I'll take a quick look, as I'll use it tomorrow during the regression test
[17:03] <cwayne> rvr: great, we've also sent off the translations today, hope to get them back monday :)
[17:04] <rvr> cwayne: Wee!
[17:04] <robru> abeato: indeed cleaning the silo frees it, you threw your silo away
[17:04] <abeato> robru, expected I guess
[17:04] <robru> abeato: i can reassign, one sec
[17:04] <abeato> robru, what should I do to clean up line 50 in the spreadsheet
[17:05] <abeato> robru, awesome, thanks
[17:06] <robru> abeato: ok you're in silo rtm 0 now
[17:07] <abeato> robru, nice
[17:07] <plars> satoris: sil2100: I think we are on track to a fix for the thumbnailer landing issue in jenkins
[17:08] <plars> testing it now
[17:11] <dbarth_> ogra_, robru: hey, can you advise me how to ci-land signon-apparmor-extension in rtm?
[17:11] <dbarth_> jdstrand maybe also ^^
[17:12] <robru> dbarth_: are you wanting to just copy the version that exists in vivid or do you want to just cherry-pick certain patches?
[17:12] <dbarth_> we have it in vivid, and we landed all scopes + oa changes necessary to enable that new landing now, i think
[17:12] <dbarth_> just seed, i think you had made a utopic version already
[17:12] <dbarth_> ensure it's on by default
[17:13] <robru> dbarth_: just seed? so the package is already in rtm?
[17:13] <dbarth_> but i'd like that to go via the normal qa process; the test is quite simple; regression testing a bit longer
[17:13] <dbarth_> available in rtm afaict yes
[17:13] <dbarth_> pool/universe/s/signon-apparmor-extension/signon-apparmor-extension_0.1+14.10.20141002-0ubuntu1_armhf.deb
[17:19] <sil2100> plars: \o/ great news
[17:19] <sil2100> plars: we fixed it from a different side, but the thumbnailer fix landing would still be good to make sure that nothing else is broken
[17:20] <sil2100> plars: so just give us a sign once it works and the lander can continue his work on this ;)
[17:20] <sil2100> jibel: ^
[17:20] <plars> sil2100: I've restarted the run for https://code.launchpad.net/~fboucault/thumbnailer/save_failures/+merge/246547 so hopefully we should know something soon
[17:37] <robru> dbarth_: sorry was in a meeting. yeah it looks like the versions match between rtm and vivid. I guess you just need it seeded, I don't really deal with that. ogra_ is a better bet.
[17:40] <dbarth_> robru: but how can that pass via ci/qa though?
[17:41] <dbarth_> ie, which type of line should i add in the spreadsheet for this to happen?
[17:41] <robru> dbarth_: I don't think there is a line for that. my understanding is that ogra_ uploads seed changes manually.
[17:47] <bfiller> sil2100: we need to rebuild silo 14 to be in sync with vivid
[17:47] <bfiller> sil2100: going to do that now - you might have to do the manual upload thing if it fails
[17:47] <sil2100> bfiller: ok, did the click package get signed off?
[17:47] <dbarth_> robru: hmm ok; i guess the regression testing will be covered by the normal image qa process
[17:47] <bfiller> sil2100: I think it did as silo marked passed QA now
[17:48] <sil2100> Great
[17:48] <robru> dbarth_: yeah you should talk to ogra, sorry ;-)
[17:49] <bfiller> sil2100: I am preparing the click from trunk to upload to the store
[17:50] <sil2100> bfiller: will it be the same as what QA tested?
[17:50] <bfiller> sil2100: note, this will need a manual security ack from jdstrand or someone on the security team as it has special rules to allow access to SD card
[17:51] <bfiller> sil2100: yes, same but one rev higher - built from head of trunk which only has a packaging revision since the one i built
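[Editor's note: the "special rules" bfiller mentions are extra AppArmor policy in the click package's security manifest; a read_path override outside the standard policy groups is the kind of thing that triggers a manual security review. A hypothetical fragment follows; the keys are in the style of click security manifests, but the exact values and paths here are invented, not taken from the gallery app:]

```json
{
    "policy_version": 1.2,
    "policy_groups": ["content_exchange"],
    "read_path": ["/media/@{TARGET_USER}/"]
}
```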
[17:56] <robru> boiko: you're missing either an mp or a source package or a sync there
[17:58] <boiko> robru: yeah, the spreadsheet didn't save the edit and I just noticed/fixed it, sorry
[17:59] <robru> boiko: k, you got vivid 0
[17:59] <boiko> robru: thanks :)
[17:59] <robru> boiko: you're welcome
[18:13] <rvr> cwayne: davmor2: Spanish image looks good so far
[18:13] <davmor2> rvr: thanks dude
[18:15] <bfiller> sil2100, popey: new gallery uploaded to store, not sure how the manual review process works but think this will need one
[18:16] <popey> bfiller: has it been through QA?
[18:16] <bfiller> popey: yup
[18:17] <popey> looks like it already went through..
[18:17] <popey> oh, it failed... one mo
[18:18] <popey> bfiller: can you set it to manual review in the store? should be a button here somewhere https://myapps.developer.ubuntu.com/dev/click-apps/507/
[18:18] <bfiller> popey: did that
[18:18] <bfiller> just did
[18:18] <popey> okay
[18:18] <popey> lemme see
[18:19] <bfiller> popey: jdstrand said it would need a manual security review each time because we have rules to read the entire SD card
[18:19] <popey> done
[18:19] <bfiller> popey: thanks!
[18:22] <plars> satoris: sil2100: It's working now: https://code.launchpad.net/~fboucault/thumbnailer/save_failures/+merge/246547
[18:22] <sil2100> \o/
[18:23] <plars> sorry for the delay, there was one last thing needed to correct the previous bad state so that things could progress smoothly
[18:23] <greyback_> trainguards: can someone please reconfigure vivid silo 12 - added gles twin for qtmir
[18:26] <robru> greyback_: uh, what spreadsheet row is that? dashboard is having a hiccup
[18:27] <robru> 79 i guess... right?
[18:27] <greyback_> robru: 79. CI train spreadsheet being very slow for me, not sure if my addition actually saved..
[18:28] <greyback_> yeah it did
[18:28] <robru> greyback_: yeah I see 2 MPs on row 79, one of them does say gles-sync
[18:28] <greyback_> robru: great, thanks
[18:28] <bfiller> sil2100: so new gallery is approved and in the store :) silo 14 is rebuilt and signed off by QA and needs to be published, but the status in the dashboard is not reflecting that
[18:29] <robru> greyback_: you're welcome. should be ready now
[18:29] <greyback_> ta
[18:29] <robru> bfiller: yeah I'm seeing some dashboard issues. sigh
[18:30] <robru> bfiller: ok I see it now. there seems to be some connectivity issue between the dashboard and spreadsheet, try reloading the page
[18:31] <bfiller> robru: I see it now, thanks!
[18:31] <bfiller> robru: ready for publish :)
[18:31] <bfiller> music to my ears
[18:32] <robru> brb
[18:32] <sil2100> bfiller, robru: excellent, thanks! :)
[18:41] <cwayne> davmor2: rvr: we need to revert one change, sorry for the late notice :/
[18:42] <davmor2> cwayne: damn you I'd nearly finished
[18:42] <cwayne> davmor2: sorry, I just got word of it literally right now
[18:42] <cwayne> :/
[18:43] <davmor2> slap them hard for me
[18:45] <cwayne> davmor2: we should be able to just test this delta though, it's only 1 click being reverted to the previous version
[18:45] <davmor2> cwayne: which click?
[18:46] <cwayne> davmor2: telegram
[18:46] <davmor2> hmmm okay
[18:47]  * davmor2 goes for tea
[18:53] <cwayne> davmor2: it's building now, one minor change snuck in (a fix for #1410742: new aggregator preview should highlight open article; the diff was quite small)
[18:53] <cwayne> rvr: ^
[18:54] <rvr> cwayne: I'll take a look
[19:01] <cwayne> davmor2: rvr: sorry, I was mistaken, there are no additional changes, just the revert
[19:06] <sil2100> cwayne, davmor2: please push the tarball whenever QA gives a +1 ;)
[19:06] <sil2100> See you tomorrow o/
[19:06] <cwayne> sil2100: might wanna check your email
[19:06] <cwayne> ha
[19:06] <cwayne> but yeah, the only change is a reverted click, so should be able to re-use most of the testing
[20:26] <cwayne> davmor2: rvr: what's the verdict?
[20:26] <davmor2> cwayne: I'm not telling you :P
[20:27] <davmor2> cwayne: I added the device tarball so I'm testing both rather than making john-mcaleely wait another 30 minutes on top; should be done in 10-15 minutes
[20:28] <cwayne> davmor2: wonderful :) sorry again for all this, I owe you a few cokes at the next sprint for sure :)
[20:30] <dobey> cihelp: hi, i need to get lp:ubuntuone-credentials/rtm-14.09 set up in jenkins to test MPs and such
[20:31] <fginther> dobey, that can be done. I'll add it to the list of tasks and follow up when it's ready
[20:32] <dobey> fginther: ok great, thanks
[20:40] <davmor2> john-mcaleely, cwayne, robru: okay so it looks like the device tarball and custom tarball are both good feel free to land them whenever robru says you can
[20:41] <cwayne> \o/
[20:41] <cwayne> rvr: ^ do you concur with .es?
[20:43] <davmor2> cwayne: rvr confirmed first; if the only change is reverting telegram then this should still be good, right?
[20:44] <bfiller> robru: in silo 2 I think the myspell-hr is never going to migrate because there is a newer version in rtm, which is fine. we can just drop myspell-hr
[20:44] <bfiller> robru: that's rtm silo 2
[20:44] <cwayne> davmor2: yep
[20:46] <cwayne> hadn't realized he'd +1'd yet, works for me :)  davmor2: sil said i could push when ready, should i wait for robru or push?
[20:46] <davmor2> cwayne: I guess you can push then
[20:47] <cwayne> cool beans
[20:47] <cwayne> pushed -- thanks guys!
[20:54] <john-mcaleely> davmor2, excellent
[20:54] <john-mcaleely> robru, is now a good time to push the device tarball ?
[21:09] <robru> john-mcaleely: I dunno, what metrics would distinguish a good time from a bad time? you have ~7 hours before the image gets built if that's what you're asking
[21:10] <john-mcaleely> robru, I think that makes it a good time. The criteria in the past have been: are the builders quiet, and is something planned for them to do soon? (where the device build will occupy them for ~30 mins or so)
[21:10] <robru> john-mcaleely: yeah seems fine to me then. it's been a pretty quiet day from my perspective
[21:10] <john-mcaleely> robru, ok, will do!
[21:10] <robru> john-mcaleely: thanks
[21:11] <john-mcaleely> robru, pushed. we should see a new build over the next ~30 mins
[21:11] <robru> bfiller: ok, want me to just merge that silo and ignore myspell?
[21:11] <robru> john-mcaleely: cool
[21:11] <bfiller> robru: yes please
[21:12] <john-mcaleely> thank you!
[21:16] <robru> bfiller: ok got you silos rtm 2 and 3
[21:17] <robru> bfiller: and you now also have rtm 9, which conflicts with 3 and 21.
[21:17] <bfiller> robru: perfect, thanks!
[21:17] <robru> bfiller: you're welcome
[21:24] <rvr> cwayne: Yup, .es looks fine
[21:44] <jgdx> fginther, remember this [1] failure? I can't reproduce it on mako, using debs from that jenkins run on RTM. What do you reckon? [1] https://jenkins.qa.ubuntu.com/job/generic-deb-autopilot-runner-14.09-mako/14/testReport/junit/ubuntu_system_settings.tests.test_datetime/TimeDateTestCase/test_same_tz_selection/
[21:52] <fginther> jgdx, I don't have much good advice here. But I would like to rule out anything specific related to the environment the ci makos use to run tests...
[21:53] <jgdx> fginther, anything special about it? I'd love to replicate it.
[21:53] <fginther> jgdx, I have a mako locally that I can use to test, and will kick a run off here in a few minutes.
[21:54] <fginther> jgdx, the environment can be reproduced with a script, let me dig up the howto
[21:54] <jgdx> fginther, awesome, thanks!
[21:55] <fginther> jgdx, here is the method that jenkins is using: http://bazaar.launchpad.net/~ubuntu-test-case-dev/ubuntu-test-cases/touch/view/head:/README-cli.rst#L81
[21:56] <fginther> jgdx, there are 4 environment variables to set and then execute run-mp.sh. NOTE: This will flash your device and wipe out its entire contents
[21:57] <fginther> jgdx, so don't do this on anything other than a test device
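[Editor's note: the flashing run fginther describes might look roughly like the sketch below. The variable names are placeholders (assumptions; the authoritative list of the four variables is in the linked README-cli.rst), and the guard exists because run-mp.sh reflashes the device and wipes its contents.]

```shell
# Sketch only: DEVICE_SERIAL and MP_BRANCH are assumed names, not the
# real variables documented in README-cli.rst.
DEVICE_SERIAL="${DEVICE_SERIAL:-}"      # serial of a *disposable* test mako
MP_BRANCH="lp:~user/project/some-mp"    # hypothetical merge proposal to test
if [ -n "$DEVICE_SERIAL" ]; then
    # Flashes the device and runs the MP's tests -- destructive!
    ./run-mp.sh
else
    echo "refusing to flash: DEVICE_SERIAL not set"
fi
```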
[21:57] <jgdx> fginther, gah, right. :))
[22:07] <jgdx> fginther, could the phone fall asleep? It's quite a long testcase
[22:09] <fginther> jgdx, it's not supposed to. The screen lock is disabled prior to running the test
[22:09] <fginther> jgdx, but that's something I can look for
[22:10] <jgdx> fginther, great
[22:12] <fginther> kgunn, the deployment of the larger build nodes for mir is complete. There are currently 4 of these nodes, which we'll monitor this week
[22:13] <kgunn> thanks it was really starting to be a pain
[22:13] <pmcgowan> kgunn, why is your software so big
[22:27] <kgunn> pmcgowan: that's a personal question ;)
[22:29] <pmcgowan> kgunn, lol
[22:37] <pmcgowan> imgbot
[22:52] <fginther> dobey, lp:ubuntuone-credentials/rtm-14.09 is ready now
[22:59] <dobey> fginther: great, thanks
[23:04] <fginther> jgdx, I found an error in the MP testing instructions. For RTM tests, it's necessary to specify the image channel via an env var. See http://bazaar.launchpad.net/~fginther/ubuntu-test-cases/document-rtm-mp-testing/view/head:/README-cli.rst#L96
[23:06] <jgdx> fginther, 10-4