[08:55] <jibel> vila, the link /etc/apparmor.d/disable/usr.bin.rsyslogd is part of the default setup of the system, not specific to our setup or anything required by otto.
[08:56] <vila> jibel: wow.... ok thanks
[08:57] <jibel> vila, also, I think I'll drop swap control from the default config file and add it dynamically only if swapaccount is enabled.
[08:57] <jibel> vila, unless you beat me to it and send me a patch of course:)
[08:58] <vila> jibel: hehe, ack, probably not, I'm still un-piling a bunch of other stuff ;)
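jibel's plan above (drop the swap limit from the default config and emit it only when swapaccount is enabled) could look roughly like the sketch below. The config keys and the LXC-style output are illustrative assumptions, not otto's actual code:

```python
import os

# swapaccount=1 on the kernel command line is what makes the
# memory.memsw.* cgroup control files appear, so probing for one of
# them is a reasonable runtime feature test.
def swap_accounting_enabled(memory_cgroup="/sys/fs/cgroup/memory"):
    return os.path.exists(
        os.path.join(memory_cgroup, "memory.memsw.limit_in_bytes"))

def container_cgroup_config(memory_cgroup="/sys/fs/cgroup/memory"):
    # Hypothetical config lines for illustration only.
    lines = ["lxc.cgroup.memory.limit_in_bytes = 1024M"]
    if swap_accounting_enabled(memory_cgroup):
        # Added dynamically, as jibel suggests, instead of sitting
        # unconditionally in the default config file.
        lines.append("lxc.cgroup.memory.memsw.limit_in_bytes = 1024M")
    return lines
```

Pointing `memory_cgroup` at any directory makes the check testable without root.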
[09:00] <vila> jibel: out of curiosity, what should I have done to track where /etc/apparmor.d/disable/usr.bin.rsyslogd was coming from? 'dpkg -S' doesn't help there
[09:00] <vila> jibel: in the meantime, removed from the doc and pushed
[09:07] <jibel> vila, it must be created by rsyslog itself, check dpkg *inst scripts of rsyslog.
[09:08] <vila> jibel: ack
[09:08] <jibel> vila, ⟫ grep apparmor.d/disable /var/lib/dpkg/info/rsyslog.*inst
[09:08] <jibel> /var/lib/dpkg/info/rsyslog.preinst:    APP_DISABLE="/etc/apparmor.d/disable/usr.sbin.rsyslogd"
[09:09] <vila> jibel: excellent, thanks !
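jibel's trick generalizes: when `dpkg -S` draws a blank, the file was often created at install time by a package's maintainer scripts rather than shipped in the package. A small sketch of that search (the directory is a parameter so it can be pointed anywhere):

```python
import glob, os

def find_creating_scripts(path_fragment, info_dir="/var/lib/dpkg/info"):
    """Return maintainer scripts under info_dir that mention path_fragment."""
    hits = []
    # "*inst" matches preinst and postinst scripts alike.
    for script in glob.glob(os.path.join(info_dir, "*inst")):
        with open(script, errors="replace") as f:
            if path_fragment in f.read():
                hits.append(os.path.basename(script))
    return sorted(hits)

# On the system above, this is how rsyslog.preinst turned up for
# the fragment "apparmor.d/disable".
```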
[09:10] <jibel> anyhow it is irrelevant to the configuration of otto.
[09:10] <vila> jibel: yup, fixed in the mp
[09:26] <vila> rhaaaa, damn it vila, you updated that doc to capture knowledge and failed to add the knowledge acquired during the review !
[09:28] <vila> jibel: sorry about that https://code.launchpad.net/~vila/otto/update-setup-instructions/+merge/193201
[09:57] <popey> ogra_: we spinning another build today?
[09:57] <ogra_> popey, waiting for bug 1245958 to be fixed i fear
[09:58] <popey> ok
[09:58] <popey> nobody assigned
[10:01] <ogra_> i haven't heard from the Mir team, i think alan_g and kdub wanted to look into it though
[10:01]  * ogra_ hopes that gets fixed asap ... we couldnt release an image since friday
[10:01] <ogra_> and i fear rolling back the whole stack is as much work as fixing it :(
[10:03] <alan_g> ogra_: I *guess* that what is needed is https://code.launchpad.net/~afrantzis/mir/remove-client-rpc-timeout/+merge/193094 - but I'm still setting up phone to test.
[10:06] <popey> balloons / fginther any idea what's going wrong here? http://91.189.93.70:8080/job/generic-mediumtests/1247/? (from https://code.launchpad.net/~vthompson/music-app/fixes-1234990/+merge/192771 )
[10:06] <ogra_> alan_g, awesome, good to know someone looks into it, thanks a lot !
[12:52] <fginther> popey, morning
[12:52] <fginther> popey, the last run for that MP passed (30 minutes ago)
[12:56] <popey> thanks fginther, can we prepare a click package soon for music app sergiusens ? I'd like to do some testing before we upload to the store
[12:56] <sergiusens> popey, sure, the app should be autogenerated still
[12:57] <sergiusens> popey, fginther now, the problem with all click apps is that they need to be tested on stable AND on devel-proposed
[12:58] <popey> i can do that
[12:58] <popey> i have two devices setup exactly like that
[12:58] <popey> (manual, not automated testing that is)
[12:59] <fginther> sergiusens, hmm
[12:59] <sergiusens> fginther, that's before publishing to the store
[13:00] <fginther> sergiusens, stable == saucy right?
[13:00] <sergiusens> fginther, would be good to start hooking up the apps in http://10.97.0.26:8080/view/click/? to some sort of automated testing and do auto pushes; it would be a daily release extension I guess
[13:00] <sergiusens> fginther, today, yes
[13:01] <sergiusens> fginther, I'm not sure how you are going to deal with the non-backwards-compatible autopilot 1.4/1.3 though
[13:01] <fginther> sergiusens, then we have a problem... autopilot tests...
[13:02] <fginther> beat me to it
[13:03] <sergiusens> fginther, I'll leave that issue to QA; tbh I thought api breakages were a thing of the past
[13:03] <fginther> sergiusens, does this also apply to internal apps?
[13:04] <fginther> notes, webbrowser, etc
[13:04] <sergiusens> fginther, yes
[13:04] <sergiusens> fginther, since click apps are exposed to devices by the framework they support
[13:08] <fginther> sergiusens, https://bugs.launchpad.net/ubuntu-ci-services-itself/+bug/1246299
[13:08] <sergiusens> fginther, webbrowser I'm particularly not sure of its path, but notes, camera and gallery for sure
[13:09] <sergiusens> fginther, subscribed, thanks
[13:14] <t1mp> are there new known issues with jenkins? Since yesterday or so CI is failing our MRs again
[13:14] <t1mp> for example, this one https://code.launchpad.net/~kalikiana/ubuntu-ui-toolkit/aviator/+merge/193219
[13:16] <cjohnston> t1mp: not that I know of
[13:18] <fginther> t1mp, there is a unity8 and qmlscene crash on that mako run
[13:19] <t1mp> fginther: what does that mean, that unity8 is broken and we need to wait for a fix?
[13:20] <fginther> t1mp, unity8 crash is probably there from the failed unlock attempt. It eventually succeeded and the first few tests passed...
[13:20] <fginther> t1mp, but I've never seen a qmlscene crash. looking for something in the logs that might indicate anything
[13:21] <ogra_> Mir is crashy in trusty atm ... (or rather apps are due to Mir)
[13:21] <ogra_> see bug 1245958
[13:21] <t1mp> kalikiana: ^ fyi
[13:21] <ogra_> not sure how much impact this can have on MP tests
[13:23] <sergiusens> ogra_, a lot if the MR test runner installs trusty-proposed
[13:24] <ogra_> yeah, i was suspecting that
[13:24] <sergiusens> ogra_, fginther would temporarily switching to surfaceflinger unblock people?
[13:24] <fginther> sergiusens, ogra_, t1mp, MPs are tested with trusty-proposed
[13:24] <ogra_> not so sure ... the behavior will be different (i.e. you might have other issues)
[13:24] <ogra_> but probably worth a try
[13:25] <sergiusens> fginther, ogra_ either that or set the phablet-flash to flash -r -2
[13:25] <sergiusens> or -r 6 (7 and 8 being the broken ones)
[13:26] <kalikiana> why wasn't it noticed before?
[13:27] <kalikiana> how did it pass before, if it's as obviously crashy as you make it sound?
[13:27] <ogra_> kalikiana, it doesnt happen immediately ... so if you are lucky all manual AP tests pass ... and even testing it manually works if you dont test for more than 10min
[13:27] <fginther> sergiusens, here's a better question, should apps even be tested with -proposed?
[13:28] <ogra_> (testing it manually as in "using it" )
[13:29] <kalikiana> hmm another memory issue? there was the one about not releasing closed apps that took n test runs to break
[13:29] <ogra_> there is a proposed fix on the bug
[13:29] <fginther> sergiusens, just use -proposed for unity8, uitk and other plumbing components?
[13:30] <ogra_> kalikiana, this one is rather "you use the app and watch it disappear underneath your fingers while using it"
[13:31] <sergiusens> fginther, well apps should work everywhere given they have no dependencies
[13:31] <sergiusens> fginther, it's a tough question
[13:31] <sergiusens> fginther, utah already tests them on proposed; maybe would be good to test the apps on non proposed
[13:32] <fginther> sergiusens, my thought is to give the apps a more stable environment, keep the plumbing in -proposed. We could also add some apps to the uitk and unity8 tests to make sure they continue to work
[13:33] <fginther> sergiusens, adding more test suites to uitk and unity8 has been the plan for a while, we just have to solve the (lack of) resource issue
[13:34] <kalikiana> first we need ci to be reliable otherwise it's hard to get better coverage in
[13:35] <kalikiana> a number of planned things never made it
[13:35] <fginther> kalikiana, right, if the tests aren't stable then adding more will just cause more issues
[13:36] <kalikiana> fginther: basically improvements in the uitk are being discussed all the time among the team - but we struggle to get anything in at all :-(
[14:21] <t1mp> so, if a new version of unity is proposed for a merge (or in the image), can it be tested with UITK trunk (should be known stable, right?) autopilot tests before merging?
[14:22] <t1mp> like that our MRs would be tested with an image that should be stable
[14:23] <kalikiana> +1000
[14:27] <dobey> do any ci-eng people have time to help with reviewing branches for tarmac?
[14:27] <dobey> fginther: ^^ would you maybe?
[14:28] <cjohnston> dobey: I could take a look
[14:28] <fginther> dobey, I would like to look as well
[14:30] <dobey> cjohnston, fginther: thanks. added both of you to ~tarmac-devs team.
[14:30] <dobey> https://code.launchpad.net/~dobey/tarmac/simple-setup/+merge/193147
[14:30] <dobey> and https://code.launchpad.net/~dobey/tarmac/bzr-export/+merge/193130
[14:30] <dobey> those could use a couple reviews right now
[14:33] <fginther> t1mp, that is something that can be added, but we have to build this up gradually. The last time we added additional test suites, we overtaxed our resources and created a huge wait queue
[14:46] <t1mp> fginther: on the other hand, we now have some MRs approved for UITK, but we know that they will fail CI/autolanding. So actually it is a waste of resources to execute the tests for those right now
[14:47] <fginther> t1mp, we can disable the autolanding job until we have a new image
[14:48] <t1mp> fginther: yes, okay.
[14:48] <fginther> t1mp, since there is other test feedback provided by the other non-touch jobs I would recommend keeping ci going
[14:48] <t1mp> kalikiana: ^ that's what happened to your MR. autolanding failed.
[14:49] <t1mp> executing all the tests first with the last-known stable image, and then with the proposed one, may be helpful in the future
[14:50] <t1mp> but I don't know if it is feasible considering the resources, and maybe a smarter solution can be thought of
[14:53] <fginther> t1mp, it would take more resources, but for a few projects, it might be a better solution.
[14:54] <fginther> t1mp, If an MP passed stable but failed proposed, that most likely indicates an issue in the image itself. What's the action for the MP owner?
[14:54] <fginther> if any
[14:55] <t1mp> fginther: find someone to fix it in the image ;)
[14:55] <t1mp> fginther: now we test with proposed, right?
[14:56] <fginther> t1mp, yes
[14:56] <t1mp> fginther: a lot of times, a bunch of tests "suddenly" start to fail for us, and then we spend quite an effort to track down what we did wrong
[14:56] <t1mp> ..until we figure out the bug is in the image.
[14:56] <t1mp> that can be avoided by testing with a stable image first
[14:57] <fginther> t1mp, I can see that as a result we would be beating harder on the proposed image...
[14:57] <t1mp> and then, depending on the developer, we can decide whether we debug more and help with solving the issue, or simply wait for a good image to come again
[14:57] <t1mp> fginther: I don't understand your sentence
[14:58] <fginther> t1mp, if we test an MP on both stable and proposed, that would provide more test coverage of the proposed image
[14:59] <fginther> vs the stable one to expose issues
[14:59] <t1mp> yeah
[15:00] <fginther> mostly just thinking out loud
[15:00] <t1mp> me too
[15:00] <t1mp> it is difficult to find a way to have everything well-tested because there are so many large and small things coming together in an image
[15:01] <fginther> agreed
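The two-phase scheme t1mp and fginther discuss above reduces to a small triage rule: run the suite on the stable image first, then on proposed, and use the pair of results to point at the branch or at the image. A hedged sketch, with the action strings invented for illustration:

```python
def triage(passed_on_stable, passed_on_proposed):
    """Map a (stable, proposed) result pair to a suggested action."""
    if not passed_on_stable:
        return "MP issue: the branch itself breaks, fix before landing"
    if not passed_on_proposed:
        return "image issue: report against the proposed image, not the MP"
    return "safe to land"

# triage(True, False) -> "image issue: report against the proposed image, not the MP"
```

The pass-stable/fail-proposed bucket is exactly the case t1mp says currently costs the team effort to track down by hand.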
[15:23] <kalikiana> fginther: wouldn't a passing test on stable indicate it can land? given stable is a moving target that updates all the time and gets the latest from -proposed anyway
[15:23] <kalikiana> maybe that's too obvious to be an option?
[15:25] <cjwatson> For apps yes, for stuff that actually goes through trusty-proposed in the archive that could create deadlock
[15:25] <cjwatson> (Since the contents of trusty-proposed may be blocked waiting for what you're trying to land ...)
[15:25] <cjwatson> Though I guess if you mean the trusty-proposed *image* that's not relevant
[15:26] <kalikiana> I see deadlock right now with always testing proposed and mir/ unity keeps introducing new bugs resulting in a race with low chance of winning
[15:26] <kalikiana> oh yes
[15:26] <kalikiana> image is what I meant, sorry
[15:30] <fginther> kalikiana, the issue that comes to mind is that the MP being tested is not released on top of stable, it's released as the next -proposed. you don't want to get too far away from that target. Apps should be image agnostic (although a change to the image can break them as well)
[15:31] <kalikiana> fginther: apps fail the same way as the toolkit when the underlying system changes
[15:31] <kalikiana> I'm not convinced making a difference there is a genuine win
[15:32] <fginther> kalikiana, I need to think about it more...
[15:36] <kalikiana> fginther: is there any way for jenkins to compare test results for equality? for example to say "apparently 20 mrs fail lexactly ike this, please ask a qa engineer about it"
[15:36] <kalikiana> reducing the effort of investigating would at least alleviate some pain
[15:37] <t1mp> kalikiana: wow. how did you move the l of like to lexactly?
[15:38] <fginther> kalikiana, jenkins can't do this by itself. but there are ways to pull the info out of jenkins to create a running report of test failures
[15:38] <fginther> It's been on my wishlist for a while, but fires keep coming up
[15:38] <kalikiana> t1mp: I don't use the same finger to type an e, probably rested on the l for longer and accidentally touched it
[15:39] <t1mp> ah; lazy finger.
[15:42] <kalikiana> fginther: *nod* I would even say a brief message w/o any details could be helpful; doesn't have to replace anything but reduce time spent on the wrong end of the problem
[15:43] <kalikiana> though it might be equally non-trivial to implement I guess
[15:48] <fginther> kalikiana, https://bugs.launchpad.net/ubuntu-ci-services-itself/+bug/1246380
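Jenkins can't do the grouping kalikiana asks for by itself, but the data is available: each run's `…/testReport/api/json` lists cases with a status and error text. A sketch of bucketing failures by signature; the field names follow the Jenkins JSON API as I recall it, so treat them as assumptions:

```python
from collections import Counter

def failure_buckets(reports):
    """Count identical (test name, first error line) signatures across
    many runs, so that "20 MRs fail exactly like this" collapses into
    one bucket to hand to a QA engineer."""
    buckets = Counter()
    for report in reports:
        for suite in report.get("suites", []):
            for case in suite.get("cases", []):
                if case.get("status") in ("FAILED", "REGRESSION"):
                    err = (case.get("errorDetails") or "").splitlines()
                    buckets[(case["name"], err[0] if err else "")] += 1
    return buckets
```

In practice the reports would be fetched per job/build with urllib; any bucket with a high count is a candidate "known infrastructure failure" rather than an MP problem.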
[16:10] <didrocks> ogra_: hey!
[16:10] <ogra_> didrocks, yo
[16:10] <didrocks> how are you?
[16:10] <ogra_> why did you revert the seed change again ?
[16:10] <ogra_> fine, thanks
[16:10] <didrocks> ogra_: yeah, missing stuff on the MIR…
[16:10] <didrocks> should be finally resolved today
[16:10] <ogra_> well, we can drop the package from the seed anyway
[16:11] <didrocks> do you know why image 8 didn't run the unity8 AP tests on mako?
[16:11] <ogra_> independently of the MIR i think
[16:11] <didrocks> ogra_: yeah, we want to do everything in a shot (late yesterday, so prefered to not do it)
[16:11] <ogra_> didrocks, because Mir crashes ?
[16:11] <didrocks> ok, it's confirmed it's the cause?
[16:11] <ogra_> no idea, its definitely one cause
[16:11] <didrocks> do you think it's worth rekicking an image now?
[16:12] <ogra_> the tests are all shaky with that bug
[16:12] <ogra_> no, it isnt
[16:12] <ogra_> we need the Mir team to fix the crasher
[16:12] <ogra_> didrocks, still bug 1245958 ...
[16:13] <ogra_> i wouldnt consider any test reliable until thats gone
[16:13] <ogra_> (and apparently it broke other stuff like CI too)
[16:13] <didrocks> ogra_: ok, we have a branch apparently
[16:13] <ogra_> right, no package though
[16:13] <ogra_> i know davmor2 is eager to test it
[16:14] <didrocks> ken is testing as well
[16:14] <didrocks> and we'll deliver ASAP
[16:14] <ogra_> yay
[16:14] <ogra_> no idea if there are other subsequent issues, i fear this one shields other issues that we will only see after the fix
[16:15] <davmor2> if I have a deb I can test it; I've just not got the time to learn to cross-compile from source
[16:16] <ogra_> especially something with complex deps
[17:24] <thomi> Hello CI team... Pretty soon we're going to be porting the autopilot test suites to python 3.
[17:24] <thomi> However, we'd like to be able to port them one by one
[17:24] <thomi> but, that means that some test suites will need to be run with the python3 package, and others with the older python-autopilot package
[17:24] <thomi> I'd like a way so we can do this without that being a massive PITA for you guys
[17:25] <thomi> like, maybe we can specify which suites should be run with python3, so you guys don't need to update your configuration every time we port another test suite
[17:25] <thomi> ...or something like that.
[17:25] <thomi> Who should I be talking to to make something like that happen?
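One shape thomi's request could take, sketched with invented names: the test branch itself carries a manifest of already-ported suites, and the runner derives the interpreter per suite from it, so CI configuration never changes as suites are ported. The manifest and the command shape are hypothetical, not the real autopilot CLI:

```python
# Hypothetical manifest, shipped alongside the tests in each branch and
# updated by the app developers as they port (names invented):
PORTED_TO_PY3 = {"notes_app_autopilot", "music_app_autopilot"}

def autopilot_command(suite):
    """Pick the interpreter per suite; the module invocation below is
    illustrative only."""
    interpreter = "python3" if suite in PORTED_TO_PY3 else "python"
    return [interpreter, "-m", "autopilot.run", "run", suite]
```

The point is only that the python2/python3 decision lives with the suite, not in the CI job definitions.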
[17:26] <cjohnston> I'd guess fginther
[17:26] <thomi> As far as timeline is concerned, we're probably looking at a couple of weeks away...
[17:26] <thomi> maybe 3 weeks, actually
[17:26] <cking> plars, i see an i386 power test has completed, can you kick off an amd64 power test and then another i386 and amd64 test so I can sanity check the results?
[17:27] <plars> cking: will do - did the results look like they were ok with your new changes?
[17:27] <cking> plars, they look OK to me
[17:27] <plars> cking: great
[17:28] <cking> plars, just like to see some results of more runs now to ensure I've not introduced too much variability
[17:28] <plars> cking: understand
[17:30] <alan_g> ogra_: we've confirmed and top approved the fix for bug 1245958
[17:30]  * ogra_ hugs alan_g 
[17:33] <fginther> thomi, yes, lets talk
[17:34] <fginther> thomi, is there going to be a new python3-autopilot package?
[17:39] <thomi> fginther: already exists
[17:39] <thomi> :)
[17:40] <thomi> what doesn't exist is python 3 versions of the meta-packages
[17:40] <thomi> i.e.- autopilot-touch and autopilot-desktop. but since those are meta-packages we can create them easily enough, if they'd help
[17:43] <fginther> thomi, the apps themselves will need to depend on the right python3-autopilot or python-autopilot, right? Does that not solve the problem?
[17:43] <thomi> fginther: is that what you guys do? just install that package?
[17:43] <thomi> if so, then yeah, that's easy :)
[17:44] <cjwatson> thomi: can't you just make it depend on both?  python-* and python3-* should be coinstallable
[17:44] <fginther> thomi, that might not be how it works now.
[17:44] <thomi> cjwatson: no, they both install /usr/bin/autopilot
[17:44] <cjwatson> that's a bug :)
[17:44] <cjwatson> should be in a common package
[17:44] <thomi> cjwatson: patches welcome :)
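The coinstallability rule cjwatson is invoking is essentially "no two packages may ship the same path." A toy check of that rule; in practice the file lists would come from `dpkg -L <pkg>`, here they are parameters:

```python
def conflicting_paths(files_a, files_b):
    """Paths shipped by both packages; any overlap (other than shared
    directories, which dpkg tolerates) blocks installing both."""
    return sorted(set(files_a) & set(files_b))

# The clash discussed above:
# conflicting_paths(
#     ["/usr/bin/autopilot", "/usr/lib/python2.7/dist-packages/autopilot/__init__.py"],
#     ["/usr/bin/autopilot", "/usr/lib/python3/dist-packages/autopilot/__init__.py"])
# -> ["/usr/bin/autopilot"]
```

Moving `/usr/bin/autopilot` into a common package, as cjwatson suggests, empties the intersection and makes python-autopilot and python3-autopilot coinstallable.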
[17:45] <thomi> fginther: maybe we should do a single porting and see what happens
[17:45]  * cjwatson runs away to the warm friendly coreutils build
[17:45] <fginther> thomi, but if we just don't pre-install any autopilot, then the deps should get us the right version
[17:45] <fginther> thomi, I'm all for a test case, I'm sure it will expose some issues
[17:46] <fginther> thomi, I can see it being a problem for image testing.
[17:47] <thomi> fginther: yeah, ok.
[17:47] <fginther> and testing as a click package
[17:47] <thomi> indeed
[17:47] <fginther> thomi, so yeah, lots of issues to fix first
[17:47] <thomi> well, I'm not doing this till 1.4 is out anyway
[17:52] <fginther> thomi, https://bugs.launchpad.net/ubuntu-ci-services-itself/+bug/1246425
[17:53] <thomi> fginther: sweet, thanks
[18:14] <davmor2> ogra_: so when does image 9 land :)
[18:14] <ogra_> davmor2, once the bug is fixed
[18:14] <ogra_> (once there is a package with the fix in the archive)
[18:16]  * davmor2 curses lp for not being instant at creating packages, why do you have to take minutes damn you :D
[18:39] <ogra_> davmor2, minutes ?!?
[18:39] <ogra_> LOL
[18:39] <ogra_> lets talk again in a few hours
[19:01] <davmor2> ogra_: shhhh
[19:01]  * ogra_ is grumpy 
[19:02] <ogra_> all google SMS services dont work for me :(
[19:04] <fginther> Mirv, can you dput qtcreator for trusty into ppa:ubuntu-sdk-team/ppa?
[19:10] <Mirv> fginther: sure, bzoltan was working on getting it sponsored to archives but it has not yet happened
[19:11]  * Mirv pushes
[19:11] <fginther> Mirv, bzoltan just pinged me in another channel, he may already be working on updating the ppa
[19:12] <Mirv> fginther: I see the discussion
[19:12] <fginther> Mirv, do we also need a trusty qtdeclarative-opensource-src? I can't find any packages
[19:12] <Mirv> fginther: we sure have qtdeclarative on top of which all apps run https://launchpad.net/ubuntu/+source/qtdeclarative-opensource-src
[19:12] <fginther> ... I can't find any packages using http://packages.ubuntu.com
[19:13] <Mirv> fginther: what we do need is qtcreator-plugin-ubuntu, after the qtcreator. so the cu2d-config rule should be extended to trusty
[19:15] <fginther> Mirv, I'm not sure what you mean
[19:17] <Mirv> fginther: qtcreator-plugin-ubuntu has autolanding to SDK PPA for precise,quantal,raring,saucy in head/sdk.cfg, trusty should be added. it needs the qtcreator-dev
[19:18] <fginther> Mirv, ah, yes, you're correct
[19:19] <fginther> Mirv, argh! they were all updated except for qtcreator-plugin-ubuntu
[19:20] <Mirv> fginther: well without this manual upload of qtcreator it wouldn't have compiled anyhow
[19:20] <Mirv> since the trunk doesn't work against QtC 2.7
[19:20] <fginther> Mirv, I'll get it fixed before the next MP lands
[19:20] <Mirv> fginther: thanks
[19:30] <sil2100> Mirv: hi! Remember guys that if you have some stuff that needs to be done, just assign it to me in the Landing Plan or the Ubuntu Unity Team Work, ok? ;)
[19:31] <sil2100> Mirv: in the meantime I'll deal with misc issues and the appmenu Qt5 for 5.2
[19:33] <sil2100> Mirv: have a nice work-day guys \o/ ;)
[20:02] <fginther> Mirv, thomi, not sure who to contact. There might be a regression in autopilot in the daily-build ppa, but based on a very small sample size...
[20:02] <fginther> I have a notes-app MP that fails on desktop when using the daily build PPA, but passes when the PPA is removed
[20:03] <fginther> there are only a few differences in the dpkg log, one of them being autopilot 1.3.1+13.10.20131003.1-0ubuntu1 -> 1.3.1+14.04.20131030.2-0ubuntu1
[20:03] <fginther> but there haven't been any autopilot changes since 2013-10-14
[20:04]  * fginther realizes he was looking at the wrong branch
[20:05] <fginther> still, no changes on autopilot/1.3 since  2013-10-15
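The comparison fginther is doing by eye can be scripted: dpkg logs record upgrade lines of the form `DATE TIME upgrade PKG OLDVER NEWVER`, so extracting them from the passing and failing runs' logs lists exactly which packages changed. A small sketch (log format per `/var/log/dpkg.log` as I recall it):

```python
def upgrades(log_text):
    """Map package name -> (old_version, new_version) from dpkg log text."""
    changes = {}
    for line in log_text.splitlines():
        parts = line.split()
        # Only "upgrade" actions carry an old and a new version.
        if len(parts) >= 6 and parts[2] == "upgrade":
            changes[parts[3]] = (parts[4], parts[5])
    return changes
```

Diffing the two dictionaries (one per dpkg log) yields the short suspect list fginther mentions, autopilot's 20131003 -> 20131030 bump among them.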
[20:26] <thomi> fginther: hey, sorry - was at lunch
[20:26] <thomi> fginther: so, autopilot hasn't changed in that period?
[20:26] <thomi> so we're off the hook?
[20:29] <fginther> thomi, I honestly don't know. I'm still trying to make sense of the logs
[20:30] <thomi> OK. I don't think we've merged much into 1.3
[20:36] <fginther> thomi, AFAIK notes-app is the only thing that's failing and it hasn't passed yet with trusty
[20:40]  * fginther updates laptop to trusty
[20:48] <thomi> Hi guys - I realise this is probably going to be a massive PITA, but I wonder if we could somehow run the AP tests for the branches listed here: http://pad.ubuntu.com/autopilot-1-4 with autopilot 1.4?
[20:49] <thomi> We're porting these to be compatible with 1.4, but the CI runs use 1.3, so our tests fail in CI.
[20:49] <thomi> it'd be nice to know that the CI system passes the tests as well, outside of developer machines
[20:49] <thomi> but if it's too hard to achieve, then I guess we'll do without
[21:01] <fginther> thomi, let me see what I can do
[21:03] <fginther> thomi, do you have a PPA with 1.4?
[21:04] <thomi> fginther: ppa:autopilot/experimental
[21:05] <fginther> thomi, thx
[21:43] <didrocks> fginther: hey!
[21:44] <fginther> didrocks, greetings!
[21:44] <didrocks> fginther: so, I have feelings, but I would prefer numbers on the upstream merger :)
[21:44] <ogra_> didrocks, mir is stuck in proposed
[21:44] <didrocks> ogra_: ken should look at that
[21:45] <ogra_> (and i'm really to tired to fix this tonight)
[21:45] <didrocks> ogra_: I'll have a look, don't worry
[21:45] <didrocks> fginther: with our "silos" for stack, how often do we have failures because of something else moved outside of the local repo?
[21:45] <didrocks> cyphermox: can you either grab ken or look at why Mir is stuck in proposed?
[21:46] <cyphermox> sure
[21:46] <fginther> didrocks, very rarely
[21:46] <cyphermox> I'll look now
[21:46] <ogra_> didrocks, it depends on ust ... which in turn depends on ltt-bin, lttng-tools, python-autopilot-trace ... one of them is stuck as i understand
[21:47] <didrocks> fginther: ok, thanks for your feedback :)
[21:47] <didrocks> cyphermox: thanks!
[21:47] <cyphermox> correct, that's the reason
[21:47] <cyphermox> ust is blocking it
[21:48] <ogra_> yep
[22:10] <didrocks> cyphermox: blocking because tests are running?
[22:12] <infinity> didrocks: No, SONAME transition.
[22:12] <infinity> didrocks: autopilot and ltt-control need transitioning to the new ltt-whatever2.
[22:12] <infinity> (In progress)
[22:13] <didrocks> infinity: ok, thanks! (quite ironic, Mir being blocked on another SONAME transition)
[22:13] <ogra_> didrocks, if i'm still around, i'll trigger a build, else tomorrow morning
[22:13] <ogra_> (or if infinity likes to ...)
[22:13] <didrocks> ogra_: don't worry, go home… well, go to bed :)
[22:14] <ogra_> heh, cant be more home :P
[22:14] <ogra_> i'm still up for a while and will take a look before going to bed later ... if its ready i'll trigger a build
[22:14] <didrocks> ogra_: ok, just ping me if it's the case (as there is no way for me to know if a build was kicked)
[22:15] <ogra_> yeah, i'll ping here
[22:20] <thomi> cihelp can someone help me figure out why this is failing? https://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2839/console
[22:21] <thomi> it ends with "top write error", which sounds to me like a CI problem
[22:22] <fginther> thomi, did you mean this one? https://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2838/consoleFull
[22:24] <fginther> thomi, the failure is caused by ppa:phablet-team/ppa not having any packages, /me wonders if this job even needs it
[22:24] <fginther> errr, any 'trusty' packages
[22:26] <xnox> fginther: yeap, i've pinged #qa team to upload _anything_ for trusty such that a pocket for trusty is published, but they didn't do that yet.
[22:26] <xnox> fginther: wait phablet-team? i might have access. let me check.
[22:26] <thomi> fginther: hmmm
[22:27] <thomi> fginther: a better question to ask would be: why is this MP failing to land? https://code.launchpad.net/~elopio/ubuntu-ui-toolkit/fix1244518-composer_sheet_object_names/+merge/192641
[22:27] <xnox> nah i'm not.
[22:27] <thomi> I can't see anything obvious in any of the failing jenkins jobs
[22:28] <fginther> thomi, there is one test failure: https://jenkins.qa.ubuntu.com/job/generic-mediumtests-runner-maguro/2839/testReport/
[22:29] <thomi> fginther: oh.. thanks. Not sure how I missed that
[22:30] <fginther> thomi, xnox, I have to drop off for family time. I'll be back in 3 hours or so if I can help further
[22:30] <thomi> thanks fginther
[22:46] <ev> cihelp, can someone tell me how many machines, and of what spec, are being used to do builds on commit in our infrastructure, and are we generally keeping up with the queue (are we over/under provisioned)?
[22:46] <ev> retoaded: ^ ?
[22:46] <doanac> ev: probably more for fginther ^
[22:47]  * ev nods
[22:48] <ev> also, can someone tell me how many builds we're doing, and the total build time on average?
[23:06] <cjohnston> ev: sounds like an email question, and allow ~a day to respond. I realize you're probably in a meeting and that came up, but it's after 5 for I think everyone on the team
[23:15] <ev> cjohnston: I didn't ask for a quick response
[23:20] <ev> fginther: just so you have the context, elmo is trying to get some idea of the load if we moved to a PPA-build-per-commit setup in the London DC
[23:21] <ev> so whatever numbers you can provide when you're back online would be much appreciated