[05:38] <Mirv> cihelp why does q-jenkins say it's being shut down?
[07:19] <didrocks> hey Mirv, are you building Mir? (is cowbuilder back up?)
[07:24] <Mirv> didrocks: I was, but there is a test failure
[07:24] <Mirv> didrocks: a fix is being merged now, but q-jenkins states it's shutting down, and I haven't yet gotten a response about that from cihelp
[07:25] <Mirv> I don't know about the cowbuilder
[07:26] <didrocks> Mirv: ok, you didn't see issues in the previous cu2d run?
[07:30] <didrocks> ev: vila: seems no cihelp, hence the direct ping (excuse us), but the cu2d jenkins is marked for shutting down, any idea?
[07:35] <Mirv> didrocks: so actually mir visited my mind on Saturday (shame on me) when I pushed the cu2d mir build and back then cu2d was fine. today I checked the build result + noticed q-jenkins is marked as shutting down ie further builds blocked.
[07:35] <didrocks> Mirv: ok, so cowbuilder fixed, great!
[07:35] <Mirv> didrocks: ken and robru marked that because of AP machine problems they didn't continue on mir on Friday, although they could have just built and skipped check jobs I believe back then
[07:36] <didrocks> yeah, let's see for today what's up with q-jenkins…
[07:37] <Mirv> didrocks: another note: the mir stack build job was pending on arm64 build jobs, not sure why those weren't auto-ignored. but we'll see it again soon enough when q-jenkins is operational again.
[07:51] <didrocks> urgh
[07:51] <didrocks> ev: vila: ~desktop-team/cupstream2distro was **weeks** old
[07:51] <didrocks> at least 6-8 weeks
[07:51] <didrocks> on jatayu
[07:52] <didrocks> Mirv: that's why arm64 wasn't ignored
[07:52] <didrocks> asac: I can't tell when we are going to do releases again, see issues like that ^ (we are in an unknown state)
[08:00]  * didrocks worries as well about the network, seems bzr pull took ages (not sure if it's launchpad or the local CI network)
[08:10] <tsdgeos> what's the new s-jenkins ip?
[08:15] <Mirv> tsdgeos: seems 10.98.3.13, but you can also configure the DNS
[08:16] <tsdgeos> Mirv: yep, i saw the part of the email to do that, but just changing the hosts file seems easier :D
[08:16] <tsdgeos> manual dns ftw!
[08:16] <tsdgeos> tx
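tsdgeos's hosts-file workaround can be sketched as follows (the 10.98.3.13 address is the one quoted above; the demo edits a throwaway copy, since the real /etc/hosts needs root):

```shell
# Pin s-jenkins to the address quoted above. Demonstrated on a throwaway copy;
# the real target would be /etc/hosts, edited as root.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts"

entry='10.98.3.13 s-jenkins'
if grep -q '[[:space:]]s-jenkins$' "$hosts"; then
    # already pinned: update the address in place
    sed -i "s|^.*[[:space:]]s-jenkins\$|$entry|" "$hosts"
else
    # not pinned yet: append a new entry
    printf '%s\n' "$entry" >> "$hosts"
fi
```

The downside tsdgeos is accepting here is that the entry silently overrides DNS until someone remembers to remove it once the real record is fixed.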
[08:22] <vila> didrocks: and I suppose there is no health check to warn you about that ?
[08:23] <didrocks> vila: to ensure that your home directory was restored with a version that is 3 months old? No, really
[08:23] <vila> didrocks: we all highlight 'cihelp' no need for a direct ping
[08:23] <didrocks> vila: seems you didn't have a health check either to ensure that what you migrated is consistent with the latest in production?
[08:24] <vila> didrocks: could we get a bit more professional and stop pointing fingers, I can do that too but I don't think it's productive especially during fire fight
[08:25] <didrocks> vila: well, seems like you do something similar with "urgh", "we should stop doing that" and other expressions like that without providing solutions
[08:25] <didrocks> I just wish we could get back to work after a week and a half, and I don't think we didn't help you
[08:26] <vila> didrocks: you did help
[08:26] <vila> didrocks: but not by yelling
[08:27] <didrocks> so, can you help in keeping/figuring out why jenkins want to shut down and ensuring that we do have the latest code in production?
[08:27] <vila> didrocks: and if I wasn't fire fighting an undocumented engine I would have more time to implement solutions I did propose
[08:27] <didrocks> vila: see, pointing fingers…
[08:29] <vila> didrocks: yeah, "yelling" is finger pointing at you, nothing good comes out of panic mode so please just stop, let's get back to facts and solutions
[08:29] <didrocks> vila: so, the fact is that ~desktop-team wasn't up to date with the latest in magners
[08:29] <didrocks> can you help ensuring it's now the case
[08:29] <didrocks> and no other thing like cowbuilder are forgotten between the 2 copies?
[08:30] <didrocks> second fact: can you figure out why jenkins is setup to stop?
[08:30] <didrocks> (so that we can build things again)
[08:31] <vila> I asked why jenkins was in shutdown mode on Friday, didn't get an answer
[08:31] <vila> "no other thing like" is what I meant by undocumented, my crystal ball doesn't know either, do you ?
[08:32] <didrocks> vila: well, I think making a diff between the old machine and the new one, at least in system paths, would have been a first step for the transition
[08:32] <didrocks> vila: as well as asking stakeholders for the tool prerequisites for the undocumented parts
[08:32] <didrocks> vila: on jenkins, so, what's the next step?
[08:32] <vila> I don't care about "would have", do you have "will" ?
[08:33] <didrocks> vila: "will" you make a diff between old machine and new one then?
[08:33] <didrocks> vila: and what are you going to do about jenkins?
[08:33] <didrocks> if you prefer future :)
[08:35] <vila> yes I do, thanks
[08:35] <vila> I'll ask again why it is in shut down
[08:36] <vila> and since I won't get an answer I will restart it
[08:36] <vila> which is not the smartest thing to do
[08:36] <didrocks> vila: so, in terms of speaking in the future tense, please apply this to yourself and stop saying "this sucks", "this should have never been done like that", "this was undocumented". That will help both sides, thanks :)
[08:37] <didrocks> vila: +1 on restarting if needed, everything is blocked (the new planned jobs are stuck and don't move)
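For reference on the restart question: the "shutting down" banner is Jenkins' quiet-down mode, and it can be cancelled without a full restart by POSTing to the /cancelQuietDown endpoint. A hedged sketch, with the q-jenkins URL taken from later in the log and authentication (credentials/CSRF crumb) left out:

```shell
# The "shutting down" banner is Jenkins' quiet-down mode; it can be cancelled
# without a restart by POSTing to /cancelQuietDown (auth handling left out).
JENKINS_URL="${JENKINS_URL:-http://q-jenkins.ubuntu-ci:8080}"

cancel_quiet_down() {
    # --fail makes curl return non-zero on an HTTP error instead of silently
    # printing the error page, so a misconfigured URL is noticed.
    curl --fail -X POST "$1/cancelQuietDown"
}

# cancel_quiet_down "$JENKINS_URL"
```

Whether quiet-down was set deliberately (e.g. for the datacentre move) is worth checking first, which is what vila goes on to ask before restarting.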
[08:38] <didrocks> vila: from the list of stuck jobs, all the cu2d can be killed
[08:38] <didrocks> not sure about:
[08:38] <didrocks> daily-release_update-touch-images
[08:38] <didrocks> autopilot-ubuntu-applications
[08:38] <didrocks> jenkins-autocheck
[08:39] <didrocks> power-trusty-desktop-amd64-power-test-1
[08:39] <didrocks> power-trusty-desktop-i386-power-test-1
[08:40] <vila> didrocks: I said it *is* undocumented and this makes it harder to check after the 1ss move and harder to get it back up. That's how I explain to the stakeholders why it's not up yet
[08:40] <didrocks> vila: but you keep ranting about every single piece of the architecture, which isn't helpful either and doesn't fix anything
[08:41] <didrocks> anyway, let's move on, I hope the list above can help you ^
[08:41] <vila> didrocks: you're still pointing fingers, do we need to solve that with a third party ?
[08:42] <didrocks> vila: come on, I was just showing you that the issue is not unilateral, but it seems you want to continue when I'm proposing to move on and close the topic
[08:42] <didrocks> vila: and that's why I started to list the jobs I don't know if you can kill and you bring back the topic :/
[08:42] <didrocks> anyway, *sigh*
[08:42]  * vila moves on
[08:44] <vila> autopilot-nvidia needs a new power supply
[08:45] <vila> the alternatives are 1) setup a new host with any nvidia card 2) disable autopilot-nvidia in cu2d-config
[08:46] <didrocks> vila: go the road you prefer, you are handling the production of this. We can go ahead and hope that if you choose 2), there will be no real card regression (which is unlikely until we release a unity7)
[08:53] <Saviq> cihelp: hey guys, seems the otto runner for unity8-autolanding often fails with a lock password entry... https://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-trusty/682/artifact/results/autopilot/artifacts/unity8.shell.tests.test_hud.TestHud.test_show_hud_button_appears%20%28Desktop%20Nexus%2010%29.ogv
[08:53] <Saviq> is that known? resolved?
[08:57] <vila> Saviq: not known to me
[08:59] <Saviq> vila, http://s-jenkins:8080/job/autopilot-testrunner-otto-trusty/683/ seems to be the last job that failed like this, maybe it got resolved since
[09:00] <Saviq> vila, or maybe simply one of the machines is locked...
[09:00] <Saviq> no, seems all the jobs ran on ps-radeon-hd8350
[09:00] <Saviq> and now are green
[09:01] <vila> Saviq: good, no time to investigate that one right now, especially if you think it's back to green
[09:02] <vila> didrocks: you did update q-jenkins:~desktop-team/cusptream2distro right ?
[09:02] <Saviq> vila, yeah, will let you guys know
[09:02] <didrocks> vila: yeah, I had to to unblock Mirv (but that was before seeing jenkins was getting shut down)
[09:02] <didrocks> vila: only thing I did this morning on the machine
[09:02] <vila> ok
[09:03]  * didrocks spent some time to look if there was a bug in the code first
[09:03] <vila> didrocks: sorry about that
[09:03] <didrocks> no worry
[09:04] <vila> restarting q-jenkins
[09:10] <vila> didrocks: I won't do a diff between two systems that had different purposes and now have new, different purposes; I would have no idea how to interpret the diff (and I'm not even sure I know how to compare two systems anyway)
[09:10] <vila> didrocks: q-jenkins is back
[09:10] <didrocks> vila: hum, so we will go with a trial and error procedure then?
[09:10] <didrocks> vila: nice! want me to start a stack?
[09:10] <vila> didrocks: yes please, as gently as possible, we're still on thin ice
[09:11]  * didrocks starts just one stack: Mir
[09:11] <didrocks> (no AP for that one)
[09:11] <didrocks> Mirv: starting Mir ^
[09:11] <sil2100> \o/
[09:12] <didrocks> vila: I'll let you decide on the autopilot-nvidia side
[09:12] <didrocks> * Mir started *
[09:12] <Mirv> didrocks: thanks, although the test fix is not yet in https://code.launchpad.net/~vanvugt/mir/fix-1252144.trunk/+merge/195547
[09:13] <didrocks> Mirv: at least, we'll ensure we can start building something :)
[09:13] <Mirv> yes, and it'll be nice to see no arm64 stuckness
[09:13] <didrocks> right
[09:16] <didrocks> vila: confirming the network is really really slow: http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/Mir/job/cu2d-mir-head-1.1prepare-unity-system-compositor/47/console
[09:16] <didrocks> Fetched 12.0 MB in 1min 1s (196 kB/s)
[09:24] <didrocks> Fetched 11.8 MB in 1min 48s (110 kB/s)
[09:24] <didrocks> on another job
[09:24] <didrocks> and some error: http://q-jenkins.ubuntu-ci:8080/view/cu2d/view/Head/view/Mir/job/cu2d-mir-head-1.1prepare-unity-greeter-session-broadcast/119/console
[09:31] <didrocks> ev: coming?
[09:31] <ev> on my way in
[09:31] <vila> didrocks: https://code.launchpad.net/~vila/cupstream2distro-config/no-nvidia/+merge/195563
[09:44] <Mirv> cihelp please check: I'm seeing jenkins network problems and timeouts at merge proposal https://code.launchpad.net/~vanvugt/mir/fix-1252144.trunk/+merge/195547 - especially concerning connections to naartjie
[09:51] <didrocks> vila: just to confirm, the radeon machine is now working? (as it's in the config?)
[09:52] <vila> didrocks: well, last I checked it was... you mean the qa-radeon-7750 right ?
[09:52] <didrocks> vila: yep, ok, good, deploying with it, thanks
[09:53] <didrocks> Mirv: sil2100: think about reconfiguring your ~/.cu2d.cred with q-jenkins.ubuntu-ci btw
[09:53] <Mirv> didrocks: did that already
[09:57] <didrocks> vila: reprovisioning a recent trusty otto container: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-setup_otto/23/console
[09:58] <sil2100> didrocks: updated \o/
[09:58] <didrocks> and removed nvidia from it
[09:58] <didrocks> sil2100: great! :)
[09:58] <asac> didrocks: the tests we usually would have run on those nvidia machines are AP tests, right?
[09:59] <didrocks> asac: yeah
[09:59] <didrocks> cihelp: ok, can't reprovision the otto machines, seems it can't download the iso (it tries from q-jenkins)
[09:59] <didrocks> see http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-setup_otto/label=qa-intel-4000/23/console
[10:00] <didrocks> + wget --progress=dot:mega http://q-jenkins//iso/trusty//trusty-desktop-i386.iso -O autopilot-trusty-setup_otto_label=qa-intel-4000.QgzUAk
[10:00] <didrocks> --2013-11-18 09:57:35--  http://q-jenkins//iso/trusty//trusty-desktop-i386.iso
[10:00] <didrocks> Resolving q-jenkins (q-jenkins)... 10.98.3.12
[10:00] <didrocks> Connecting to q-jenkins (q-jenkins)|10.98.3.12|:80... connected.
[10:00] <didrocks> HTTP request sent, awaiting response... 404 Not Found
[10:00] <didrocks> 2013-11-18 09:57:36 ERROR 404: Not Found.
[10:00] <didrocks> ogra_: great!
[10:00] <didrocks> Mirv: we'll have to wait on the provisioning before running the Mir stack (all setup done apart from this FYI) ^
[10:00] <Mirv> ok
[10:00]  * asac wonders why it doesn't load from q-jenkins
[10:01] <didrocks> we should do a wrapper for wget one day, so that a 404 doesn't create an empty file…
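A sketch of the wrapper didrocks is asking for (safe_fetch is a hypothetical name): download into a temporary file and only move it into place when wget succeeds, so a 404 cannot leave an empty target behind.

```shell
# safe_fetch URL DEST: download into a temp file and only move it into place
# on success, so a 404 never leaves a bogus empty DEST behind.
safe_fetch() {
    url=$1 dest=$2
    tmp=$(mktemp) || return 1
    if wget -q -O "$tmp" "$url"; then
        mv "$tmp" "$dest"
    else
        status=$?
        rm -f "$tmp"
        echo "safe_fetch: download failed (wget exit $status): $url" >&2
        return 1
    fi
}
```

The temp-file dance is needed because `wget -O file` creates (and truncates) the target before the request is even answered, which is exactly the empty-file symptom in the log above.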
[10:01] <asac> didrocks: do you know how this normally works?: e.g. who ensures that there is an .iso on q-jenkins?
[10:02] <didrocks> asac: I think QA cached it some time ago for their smoke testing and they had a process for that
[10:02] <didrocks> asac: maybe it's a jenkins job, I don't really know
[10:02] <didrocks> ah, also:
[10:02] <didrocks> Errors were encountered while processing:
[10:02] <didrocks>  grub-pc
[10:02] <didrocks> E: Sub-process /usr/bin/dpkg returned an error code (1)
[10:02] <didrocks> _____________________________________________________________________________
[10:02] <asac> ok
[10:02] <didrocks> on radeon
[10:02] <didrocks> vila: I guess this is the conffile you changed ^
[10:03] <asac> wonder why they dont use squid
[10:05] <vila> didrocks: looking
[10:06] <vila> didrocks: first attempt at reprovisioning since the move probably
[10:06] <didrocks> vila: yeah, confirmed
[10:09] <vila> didrocks: /iso content is good, searching for :80 config :-/
[10:12] <xnox> Can this be merged please? https://code.launchpad.net/~xnox/ubuntu-keyboard/libpinyin4/+merge/193859
[10:12] <xnox> the r101 was a merge from trunk to resolve a merge conflict.
[10:12] <xnox> and the bot now thinks it needs to be re-reviewed.
[10:15] <didrocks> xnox: I approved it for you :)
[10:15] <didrocks> (seeing it was already reviewed)
[10:20] <retoaded> didrocks the URL for the ISO works now
[10:21] <didrocks> retoaded: thanks! vila: did you look at the grub-pc thing or should I just retake a setup snapshot?
[10:22] <vila> retoaded: thanks, haven't figured that one out, there is no /var/www/iso/trusty on m-o yet the log says there was one a couple of days ago 8-/
[10:22] <vila> didrocks: what grub-pc stuff ?
[10:23] <didrocks> 11:02:38 didrocks | ah, also:
[10:23] <didrocks> 11:02:40 didrocks | Errors were encountered while processing:
[10:23] <didrocks> 11:02:40 didrocks |  grub-pc
[10:23] <didrocks> 11:02:40 didrocks | E: Sub-process /usr/bin/dpkg returned an error code (1)
[10:23] <retoaded> vila, it is an alias setup within apache; see /etc/apache2/conf.d/isos.conf
[10:23] <didrocks> 11:02:43 didrocks | on radeon
[10:23] <didrocks> 11:02:47     asac | ok
[10:23] <didrocks> 11:02:51 didrocks | vila: I guess this is the conffile you changed ^
[10:23] <vila> didrocks: the only bell it rings was about pinning a kernel version... hold on, reading
[10:24] <didrocks> vila: yeah, that's why I'm pinging you, in case you changed a conffiles and grub isn't happy about it
[10:25] <vila> retoaded: thanks
[10:25] <vila> didrocks: ok, no I've restored everything to pristine after giving back qa-radeon-7750, thanks for thinking about that
[10:26] <vila> asac: the isos are downloaded by otto, the magic links were missing server side, just fixed
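The server-side piece retoaded points at (/etc/apache2/conf.d/isos.conf) is an Apache alias. A minimal sketch of such a stanza; the /var/www/iso path is inferred from vila's m-o comment above, and Apache 2.4 directive syntax is assumed (a 2.2-era conf.d file would use Order/Allow instead of Require):

```apache
# Serve the cached installer images under /iso (paths are assumptions)
Alias /iso /var/www/iso
<Directory /var/www/iso>
    Options +Indexes
    Require all granted
</Directory>
```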
[10:27] <didrocks> vila: ok, so you think we can ignore this grub error then?
[10:29] <vila> didrocks: first time I see this grub error :-/
[10:30] <vila> didrocks: oh, locally modified...hmm, may be a leftover from debugging by the kernel team, I remember they had to force things a bit at one point to get a specific kernel
[10:30] <didrocks> vila: ah, so you are reverting back to vanilla? \o/
[10:31] <vila> didrocks: sounds like the safer bet
[10:31] <didrocks> and then I guess sudo apt-get install -f
[10:31] <didrocks> keep me posted
[10:31] <didrocks> I'll rerun a setup() then to try to get your image
[10:31] <didrocks> (on both)
[10:33] <ev> retoaded: good morning
[10:33] <vila> didrocks: reproduced with 'apt-get install grub-pc', confirmed it complained about the comment
[10:34] <retoaded> ev, it is definitely morning. I'm not sure about the "good" part though
[10:34] <ev> :)
[10:34] <cjwatson> vila: what's this?
[10:34] <cjwatson> <- grub maintainer
[10:34] <ev> retoaded: ev@jatayu:~$ time curl https://launchpad.net -> 21s; ev@mayura:~$ time curl https://launchpad.net -> 0.7s
[10:35] <ev> any idea what's going on?
[10:35] <vila> cjwatson: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-setup_otto/label=qa-radeon-7750/23/console
[10:35] <retoaded> ev, no but will take a peek
[10:35] <vila> cjwatson: caused by a comment in /etc/default/grub
[10:36] <cjwatson> vila: remind me how I set up DNS for .ubuntu-ci?
[10:36] <vila> cjwatson: https://wiki.canonical.com/UbuntuEngineering/QA/VPN
[10:37] <cjwatson> ta
[10:37] <vila> cjwatson: you had one working before ? NM or openvpn ?
[10:37] <cjwatson> NM, I'll just update it now
[10:39] <cjwatson> ah, right, yeah, that error isn't my problem :)
[10:39] <cjwatson> good
[10:39] <cjwatson> although the update from 2.00-19ubuntu3 -> 2.00-20 shouldn't have triggered a conffile prompt ...
[10:41] <cjwatson> <cjwatson@amber ~>$ diff -u <(deb-extract-file grub2-common_2.00-19ubuntu3_amd64.deb /usr/share/grub/default/grub) <(deb-extract-file grub2-common_2.00-20_amd64.deb /usr/share/grub/default/grub)
[10:41] <cjwatson> <cjwatson@amber ~>$
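deb-extract-file above is presumably a local helper of cjwatson's; the same extraction can be done with stock dpkg-deb and tar (deb_extract_file below is a hypothetical reimplementation):

```shell
# deb_extract_file DEB PATH: print one file from a .deb to stdout, by streaming
# the package's filesystem tar (members are named ./usr/..., hence the dot).
deb_extract_file() {
    dpkg-deb --fsys-tarfile "$1" | tar -xOf - ".$2"
}

# e.g. (hypothetical filenames):
# diff -u <(deb_extract_file grub2-common_old.deb /usr/share/grub/default/grub) \
#         <(deb_extract_file grub2-common_new.deb /usr/share/grub/default/grub)
```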
[10:42] <vila> cjwatson: but it did, I installed the maintainer version which removed the comments, hold on
[10:43] <cjwatson> no substantive change to postinst either
[10:43] <vila> cjwatson: scratch that,
[10:43] <vila> was looking at the wrong machine
[10:44] <vila> on qa-radeon-7750:/etc/default/grub comments after the GRUB_DEFAULT line
[10:45] <cjwatson> I did change config a bit, but that change would have been a no-op unless GRUB_CMDLINE_LINUX_DEFAULT had been removed from the config file
[10:46] <cjwatson> Anyway, I probably can't debug any further unless there's some way to see the diff between the versions here
[10:46] <vila> cjwatson: ctrl-alt-del
[10:47] <vila> cjwatson: I did say: keep the installed version and the comments are still there
[10:47] <cjwatson> And of course if you change package-supplied configuration files directly then it's inevitable that you'll occasionally have to resolve conflicts
[10:47] <vila> cjwatson: (since I was looking at the wrong machine and stopped seeing the comment, I revised my memory and said I did choose the maintainer version, clearly not the case)
[10:47] <cjwatson> Could you show me "diff -u /usr/share/grub/default/grub /etc/default/grub" then?
[10:48] <vila> cjwatson: ha ! great: here is the *needed* change:
[10:48] <vila> -GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
[10:48] <vila> +GRUB_CMDLINE_LINUX_DEFAULT="quiet swapaccount=1"
[10:48] <cjwatson> OK, so you can fix this by making that change in debconf
[10:48] <cjwatson> dpkg-reconfigure grub-pc
[10:49] <cjwatson> Though it should have done that automatically ...
[10:49] <cjwatson> Before you do the above, "debconf-show grub-pc"
[10:50] <vila> cjwatson: http://paste.ubuntu.com/6436818/
[10:50] <cjwatson> Huh, so that's already in debconf
[10:51] <vila> cjwatson: because I did 'apt-get install grub-pc'? That change was made... when the machine was set up, say, two weeks ago?
[10:52] <vila> cjwatson: no need for dpkg-reconfigure grub-pc then ?
[10:52] <cjwatson> No
[10:52] <cjwatson> Was the change originally made directly to /etc/default/grub, or via some user interface?
[10:52]  * vila forgot dentist, needs to run
[10:52] <cjwatson> (Either should have worked, but I need to know how to reproduce this)
[10:53] <vila> directly in file
[10:54] <cjwatson> vila: Oh, but you say there was a comment added above it?
[10:54] <cjwatson> I guess that would have been sufficient to confuse matters
[10:54] <cjwatson> (I'd hoped for the diff pastebinned, not just a part of the diff pasted into IRC)
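cjwatson's debconf route can also be scripted non-interactively. A sketch, assuming the grub2/linux_cmdline_default key name from Ubuntu's grub2 packaging (it needs root and an installed grub-pc to actually apply; otherwise it only prints what it would do):

```shell
# Record the wanted kernel command line in debconf (key name assumed from
# Ubuntu's grub2 packaging), then let grub-pc regenerate /etc/default/grub.
SELECTION='grub-pc grub2/linux_cmdline_default string quiet swapaccount=1'

if [ "$(id -u)" -eq 0 ] && dpkg -s grub-pc >/dev/null 2>&1; then
    printf '%s\n' "$SELECTION" | debconf-set-selections
    dpkg-reconfigure -f noninteractive grub-pc
else
    # demo mode: not root (or grub-pc absent), just show what would be applied
    echo "would apply: $SELECTION"
fi
```

Seeding debconf this way keeps the debconf database and /etc/default/grub in agreement, which is the mismatch the conffile prompt above was complaining about.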
[10:55] <Saviq> vila, it's back again... https://jenkins.qa.ubuntu.com/job/autopilot-testrunner-otto-trusty/691/ :/
[10:56] <Saviq> didrocks, session locking under otto ↑, ideas?
[10:56] <didrocks> Saviq: I guess it's something for cihelp ^
[10:57] <Saviq> didrocks, yeah, vila responded to cihelp before, asked me to follow up if it happened again
[10:58] <didrocks> Saviq: hard to tell without having access to kvm, which I don't anymore (for good reasons btw ;))
[10:58] <Saviq> didrocks, mhm, thanks
[11:00] <Saviq> cihelp, so yeah, otto runner has issues with session getting locked in some jobs - all autopilot videos are either black or show the screen lock password entry, and obviously the tests fail
[11:01]  * Mirv back in 1h
[11:08] <dednick> Cim
[11:08] <dednick> bleh
[11:12] <retoaded> ev, leworks@jatayu:~$ time curl https://launchpad.net -> real	0m0.611s
[11:12] <ev> retoaded: whoop. What was the problem?
[11:13] <retoaded> ev, /etc/resolv.conf. utah seems to modify it to put a nameserver 192.168.122.1 first in the order
[11:14] <retoaded> so resolving launchpad.net tries that first then fails over to the next nameserver in order
[11:14] <retoaded> I removed it
[11:15] <retoaded> ev, that could just be remnants of an older utah install and was sync'd over from m-o
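The root cause retoaded found, resolver ordering, is easy to audit: resolv.conf nameservers are tried top to bottom, so a dead first entry adds a timeout to every lookup. A small sketch (list_nameservers is a hypothetical helper):

```shell
# Print the nameservers in lookup order; the first entry is tried first,
# which is why a stale 192.168.122.1 at the top makes every lookup slow.
list_nameservers() {
    awk '/^[[:space:]]*nameserver[[:space:]]/ { print $2 }' "$1"
}

# list_nameservers /etc/resolv.conf
```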
[11:18] <ev> ugh, I knew there'd be something I hadn't checked
[11:18] <ev> thanks
[11:21] <vila> cjwatson: sorry, being late didn't help, here is the full diff http://paste.ubuntu.com/6436934/
[11:26] <vila> cjwatson: and I probably missed that GRUB_CMDLINE_LINUX_DEFAULT when apt-get install presented the diff :-/
[11:26] <cjwatson> I tried a saucy chroot, apt-get install grub-pc vim, edit /etc/default/grub to apply that diff, edit /etc/apt/sources.list to upgrade to trusty, apt-get update, apt-get install grub-pc - no prompt
[11:27] <vila> cjwatson: so, reformulating to ensure I get it right: apt-get saw a conflict on GRUB_CMDLINE_LINUX_DEFAULT and warned. It won't do it next time, correct? (Unless a new change introduces a new conflict)
[11:27] <cjwatson> I don't think so but I can't tell for sure because this is a bit out of the ordinary
[11:27] <cjwatson> Is the immediate problem dealt with?  If so I propose ignoring this until/unless it happens again :-)
[11:28] <vila> cjwatson: good enough for me, will tell you if I encounter such a case
[11:28] <cjwatson> (Or until/unless somebody can get me a reproduction recipe I can run locally)
[11:29] <cjwatson> FWIW I suspect the conflict would have related to the comment and not to the GRUB_CMDLINE_LINUX_DEFAULT change, since the latter appears to have been correctly carried over into debconf
[11:29] <cjwatson> Can't prove it right now though
[11:30] <vila> cjwatson: the weird thing is that qa-intel-4000 has the same changes and didn't encounter the issue: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-setup_otto/label=qa-intel-4000/23/console
[11:31] <ogra_> == Image r23 DONE
[11:32] <cjwatson> vila: Ah well, let's see if it happens on the next grub2 change
[11:32]  * popey updates to 23
[11:32] <cjwatson> Probably not worth ratholing on now
[11:33] <vila> cjwatson: agreed
[11:40] <vila> didrocks: so, grub-pc issue cleared for now, re-trying an otto setup
[11:41] <didrocks> vila: keep me up to date!
[11:42] <vila> didrocks: images downloaded on both, progress
[11:43] <vila> didrocks: and both failed later, reading the logs
[11:45] <vila> lxc_container: command get_cgroup failed to receive response >-/ not sure that's the trigger but...
[11:53] <vila> didrocks: nope, that one should be harmless, happened during the last successful run
[12:01] <tsdgeos> guys, i had a test segfault in jenkins autolanding while it passed fine on jenkins CI, and the crash seems to be not related to my changes. we've just re-top-approved the change, but t1mp thinks it's a good idea to mention it here, in case it happens again to someone else, which may mean faulty hardware or something else
[12:01] <tsdgeos> i.e. not asking for help, just saying this happened in case it repeats
[12:01] <vila> tsdgeos: thanks, cihelp ^
[12:03] <t1mp> ^the MR that had issues https://code.launchpad.net/~aacid/ubuntu-ui-toolkit/ima_do_not_filter_disabled/+merge/194830
[12:06] <didrocks> vila: otto should be ready then? ;)
[12:07] <Mirv> tsdgeos: yep, I think it's good to report the hitches so that they can be tracked down one by one
[12:08] <vila> didrocks: :-/ apparently not: despite update_host returning 0 (AFAICS), the jobs fail
[12:08] <didrocks> ah ok :/
[12:10] <vila> didrocks: last run succeeded; while I'm happy to report that, I'm unhappy to not understand why :-/
[12:10] <vila> didrocks: I did some apt-get upgrade/dist-upgrade manually though...
[12:11] <vila> didrocks: so let's move on the next step
[12:11] <Mirv> cihelp more 'naartjie' host issues (ssh connection fails) https://code.launchpad.net/~thomas-voss/process-cpp/add_fork_and_run_facilities/+merge/194842
[12:11] <vila> didrocks: I also noticed that autopilot-nvidia last otto setup run failed before the move... we may have other issues to investigate when we get a nvidia replacement :-/
[12:15] <vila> didrocks: so, what's the next step ? Do you have a small stack in mind that is expected to succeed ?
[12:39] <didrocks> vila: was discussing in a hangout
[12:40] <didrocks> vila: so, otto should be up for those 2 machines? Mirv: can you launch Mir now?
[12:40] <didrocks> Mirv: Mir & co of course :)
[12:40] <didrocks> if the fix branches were merged
[12:40] <didrocks> vila: on the failure: I know that sometimes, the machine exits before the job finishes to process
[12:40] <didrocks> so you see the java stack
[12:40] <didrocks> and it's marked as failed
[12:45] <Mirv> didrocks: ok!
[12:45] <Mirv> the branch is still not merged, though
[12:46] <Mirv> I'm wondering if I should get help from cihelp for an ETA on whether it'll work soon, or merge manually
[12:46] <psivaa> Mirv: naartjie is resolved ok from s-jenkins but not from the slave cyclops nodes
[12:47] <psivaa> retoaded or fginther could add more information
[12:47] <didrocks> Mirv: yes, please ;)
[12:48] <vila> didrocks: otto setup succeeded so yeah, that should mean otto is up, but I'd rather not rely on that without at least one successful run of one job
[12:48] <Mirv> doing manually for now, and good if the naartjie problems are on radar
[12:48] <retoaded> psivaa, which cyclops node(s)? node-06 resolves it.
[12:49] <didrocks> vila: yeah, we need to be able to start the Mir stack (pending Mirv to get merged/merge the Mir branch) ^
[12:49] <vila> didrocks: ok
[12:49] <vila> retoaded: node07 is the only one my test script can't reach
[12:49] <retoaded> vila, ack.
[12:50] <psivaa> retoaded: ohh? http://s-jenkins:8080/job/process-cpp-trusty-armhf-autolanding/11/console says 'Unable to connect to naartjie:http:'
[12:50] <retoaded> vila, it's possible node07 is not up.
[12:50] <psivaa> which uses node-06
[12:51] <vila> psivaa: ha, good, retoaded, my test script exercises ssh only
[12:51] <vila> retoaded: from my desktop
[12:52] <vila> Mirv: let me know when you start so I can monitor
[12:52] <retoaded> psivaa, I don't think that is an issue with being able to connect to naartjie but, instead, not finding what it is looking for:  http://naartjie trusty/ InRelease has spaces in it
[12:53] <retoaded> but that may not be the actual URL
[12:55] <Mirv> vila: I kicked the https://code.launchpad.net/~thomas-voss/process-cpp/add_fork_and_run_facilities/+merge/194842 now for example
[12:55] <retoaded> psivaa, because the other job https://jenkins.qa.ubuntu.com/job/process-cpp-trusty-armhf-ci/16/console has the URL  http://naartjie/archive//head.mir/trusty/InRelease and InRelease does not exist under  http://naartjie/archive//head.mir/trusty
[12:56] <vila> Mirv: err, on s-jenkins ?
[12:56] <psivaa> retoaded: yea, that's true. so an issue with the job config i guess.
[12:57] <retoaded> psivaa, possibly
[12:57] <vila> didrocks, Mirv: I was asking for some validation on q-jenkins, can't we do that ?
[12:57] <psivaa> retoaded: thanks, i'll dig in if there is anything that is obvious
[12:57] <didrocks> vila: Mirv is going to run the Mir and dependant stacks, which will trigger that job
[12:58] <didrocks> vila: but the CI isn't able to merge one of the upstream branches it seems, which is needed to build Mir
[12:58] <Mirv> vila: ok I'm just not clear which issue you're referring to. I also kicked the Mir stack now running, after merging the CI failing commit manually.
[12:58] <didrocks> so Mirv told he will merge manually the mir branch
[12:58] <didrocks> and run the stacks :)
[12:58] <didrocks> and, already done
[12:58] <vila> ha ok, sorry, was confused
[12:58] <didrocks> Mirv: then, you take care of the platform stack? (so that it does build against latest Mir)
[12:59] <Mirv> vila: yeah platform + unity-mir (and recheck unity-system-compositor too)
[12:59] <didrocks> Mir Mirv… this is so confusing! :)
[12:59] <didrocks> thanks ;)
[12:59] <Mirv> hopefully then there'd be something ready for robru to test for example
[12:59] <didrocks> yep
[12:59] <didrocks> Mirv: does the network still seem to be slow btw?
[12:59] <vila> didrocks: hehe, yeah, funny, didn't fall for that Mir Mirv one ;)
[13:00] <Mirv> didrocks: yes, the prepare jobs seem to take ages
[13:00] <Mirv> "Fetched 12.0 MB in 2min 43s (73.2 kB/s)"
[13:01]  * didrocks goes for a run outside
[13:01] <Mirv> 512kbit/s not really 'internal network' speeds
[13:02]  * vila sighs, and we postponed the upgrades for the monitoring we had...
[13:02] <vila> bah, it probably wouldn't help; I doubt we are monitoring the part that is acting up right now
[13:09] <vila> Mirv: unity-system-compositor failed to build on amd64 and i386
[13:09] <vila> Mirv, didrocks: do we have something simpler to try ?
[13:16] <vila> Mirv: ?
[13:16] <vila> Mirv: is there a way to manually trigger a simple stack on q-jenkins ?
[13:17] <Mirv> vila: unity-system-compositor in itself is not too big. but the problem there is maybe related to boost transition, not these other problems?
[13:17] <Mirv> vila: the prepare jobs now completed without timeouts
[13:17] <vila> Mirv: no idea :-/
[13:18] <vila> Mirv: I try to stay focused on the otto nodes if I can :-/
[13:18] <Mirv> vila: or maybe it's CI after all, I can install libboost-all-dev just fine locally
[13:19] <vila> Mirv: the build failures are on lp, so not related to otto is my point :-/
[13:19] <Mirv> vila: ok, but why did you ask about it failing then, or what were you looking to try?
[13:19] <vila> Mirv: ctrl-alt-del
[13:20] <vila> Mirv: I asked for a simple stack to run and validate the otto stuff, you pointed to an MP and I didn't make the connection. When mir-head came, I thought I got that job anyway and looked at how it went
[13:21] <vila> Mirv: it ended up failing *before* reaching otto IIUC, so I'm now asking if we have a simpler way to validate otto
[13:21] <Mirv> vila: ok, so you want to see check job running?
[13:21] <Mirv> vila: sdk stack now running just the check job
[13:22] <vila> Mirv: thanks, sorry for not expressing my need more clearly
[13:22] <Mirv> except that it failed to start
[13:22] <vila> dang
[13:23] <vila> Mirv: http://q-jenkins:8080/job/cu2d-qa-head-2.2check/358/ ?
[13:23] <Mirv> vila: qa stack now running
[13:24] <Mirv> and yes, that
[13:24] <vila> Mirv: thanks
[13:24] <Saviq> josepht, we've see https://jenkins.qa.ubuntu.com/job/unity8-trusty-armhf-autolanding/86/console a few times now - connection time out to naartje
[13:25] <Saviq> josepht, that expected / known?
[13:26] <josepht> Saviq: looking, I know there've been some reports of networking issues in the lab so perhaps this is related.
[13:26] <vila> Mirv: http://q-jenkins:8080/job/autopilot-trusty-daily_release/513/console uh oh, autopilot-nvidia and qa-intel-4000, what did we miss ?
[13:27] <vila> Mirv: my understanding is that we should have qa-intel-4000 and qa-radeon-7750 there instead
[13:28] <cjohnston> cu2distro-config was updated, did that maybe run after it was updated? (or was the config maybe never merged?)
[13:28] <vila> and http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/label=qa-intel-4000/513/console failed, no containers, which IIRC is an issue with access rights somewhere triggered by an lxc update (but it should not have triggered again >-/)
[13:30] <vila> cjohnston: or the jobs may need to be re-deployed, didrocks said he will after the hangout, let's wait for confirmation
[13:35] <vila> Mirv: qa-intel-4000 fixed (lxc upgrade struck again, already documented in the playbook), qa-radeon-7750 is not affected
[13:37] <vila> Mirv: do you know how to check that lp:cupstream2distro-config revno 925 has been deployed or do we need to check some other job (I think they are documented somewhere)
[13:37] <vila> Mirv: by deployed I mean I think some jobs need to be injected again in jenkins but I don't know how
[13:38] <vila> Mirv: and didrocks mentioned having to tweak some other jobs too
[13:43] <vila> Mirv: http://q-jenkins.ubuntu-ci:8080/job/autopilot-trusty-daily_release/configure is the one that mentions autopilot-nvidia instead of qa-radeon-7750. Should I fix it or is it one that should be re-generated by cu2d?
[13:43] <josepht> Saviq: retoaded sorted it out
[13:44] <Saviq> josepht, ok thanks, will report back if we stumble upon it again
[13:44] <vila> Mirv: I fixed the slaves in the configuration matrix, we'll check with didrocks if that was appropriate when he's back
[13:45] <Mirv> vila: I guess no deploying of the configuration?
[13:45] <vila> Mirv: no idea
[13:46] <vila> Mirv: can you rebuild cu2d-qa-head or should I just click the build button (no specific parameters ? )
[13:46] <Mirv> vila: so lp:cupstream2distro-config is correct at the moment?
[13:46] <Mirv> vila: I can do the config redeploying, please don't run anything in that case yet
[13:46] <vila> Mirv: trunk has my proposal for disabling nvidia so AFAIK yes
[13:51] <vila> Mirv: ok, not touching my mouse, just my keyboard to reply here ;)
[13:51] <vila> Mirv: and sorry for the flood here, not pushing you, just trying to stay in sync
[13:51] <vila> (and have notes to summarize and document when the fires go down)
[13:51] <vila> no no no, I won't believe that I can't reach q-jenkins anymore, I just won't
[13:51] <vila> pfew, local network hiccup
[13:57] <Mirv> vila: all deployed, and started qa stack again
[13:57] <Mirv> also continuing on the mir builds but seeing problems still
[13:59] <vila> Mirv: thanks, I'll look at the q-jenkins part while you do that
[14:08] <vila> Mirv: autopilot tests running on both hosts
[14:08] <vila> Mirv: I haven't seen any failure yet but I didn't check thoroughly, waiting for the jobs to finish
[14:11] <sergiusens> didrocks, seems it's a nice day to upload clicks
[14:16] <vila> Mirv: http://q-jenkins:8080/job/autopilot-trusty-daily_release/lastCompletedBuild/testReport/ I think that's good enough to validate the otto setup
[14:16] <vila> didrocks: ^
[14:17] <Mirv> robru: passing the mir ball to you/ken. status is that bug #1252144 still affects mir in daily-build PPA, please continue to ping upstream about it. then I don't know what's this u-s-c fail about: https://launchpadlibrarian.net/156843289/buildlog_ubuntu-trusty-amd64.unity-system-compositor_0.0.1%2B14.04.20131118.1-0ubuntu1_FAILEDTOBUILD.txt.gz
[14:19] <Mirv> robru: however mir built for armhf, and so did u-s-c. platform-api and unity-mir are about to compile now in cu2d, so if they build for armhf you could run the mir 0.1.1 touch tests even without mir x86 builds or u-s-c, and report them to the landing plan.
[14:19] <Mirv> sil2100: didrocks: FYI too
[14:23] <Mirv> robru: correction, please force a rebuild of platform-api, it didn't have changes so didn't rebuild like others in the stack. unity-mir has changes so it'll rebuild.
[14:43] <didrocks> vila: I did redeploy, any pointer?
[14:44] <didrocks> vila: well, I deployed the one you changed, maybe I missed one
[14:44] <vila> didrocks: as mentioned above, http://q-jenkins:8080/job/autopilot-trusty-daily_release had autopilot-nvidia instead of qa-radeon-7750; I fixed that
[14:45] <didrocks> vila: hum, interesting, I changed the setup job, and then checked the otto job and I didn't see nvidia on it, I thought you did change it
[14:45] <didrocks> vila: maybe was missing coffee :)
[14:45] <didrocks> sergiusens: exactly! notes-app and all with AP failing please :)
[14:45] <didrocks> sergiusens: I guess http://reports.qa.ubuntu.com/smokeng/trusty/touch/maguro/22:20131115:20131111.1/4976/ gives a good list :)
[14:46] <didrocks> vila: nice on the otto setup! :)
[14:47] <didrocks> Mirv: thanks for your work!
[14:48] <vila> didrocks: yeah, at least we have some green around there (I'm not clear if we suffer from the slow network there but at least it doesn't break otto)
[14:49] <didrocks> yep
[14:49] <sergiusens> didrocks, landing plan doesn't show notes app from what I saw
[14:50] <didrocks> sergiusens: in fact, it was on it and treated deb-side
[14:50] <didrocks> sergiusens: but as it's both .deb and .click, I think you need to do it .click side as well
[14:50] <sergiusens> didrocks, ack, I'll tackle it as well
[14:50] <didrocks> thanks!
[15:00] <dobey> is there a way to see what args are being passed to the autoland script for current/past CI jobs? or any way to get that information at all?
[15:00] <dobey> fginther: ^^
[15:15] <ogra_> ev, there seem to be quite a lot of whoopsie crashes in r23
[15:15] <ogra_> (along with unity and maliit)
[15:15] <ev> ogra_: crashes of whoopsie, or crashes that whoopsie is uploading?
[15:16] <ogra_> whoopsie .crash files
[15:16] <ev> well that's really the problem of the crashing program, surely
[15:16] <ogra_> (which indicates crashes of whoopsie)
[15:16] <ev> oh, I see what you're saying
[15:16] <ev> in a meeting, will come back to this
[15:17] <ogra_> _usr_share_apport_whoopsie-upload-all.0.crash
[15:18] <didrocks> sergiusens: btw, just updated it as landing 304
[15:19] <sergiusens> ok
[15:20] <fginther> dobey, one moment, in a meeting
[15:43] <sergiusens> balloons, didrocks this fix won't work on click http://bazaar.launchpad.net/~ubuntu-filemanager-dev/ubuntu-filemanager-app/trunk/revision/88
[15:44] <didrocks> sergiusens: do not hesitate to annotate the landing spreadsheet with that info btw (and turn the thing to a big, blinking, RED! :)
[15:44] <balloons> sergiusens, looking
[15:44] <sergiusens> didrocks, give me comment access and I will :-)
[15:44] <didrocks> sergiusens: oh? you don't have this? one sec
[15:45] <didrocks> sergiusens: you have POWER!
[15:45] <balloons> sergiusens, you mean temp_dir = tempfile.mkdtemp(dir=os.path.expanduser("~"))?
[15:45] <sergiusens> balloons, no, you can't patch home with click
[15:46] <balloons> sergiusens, ahh, so         patcher = mock.patch.dict('os.environ', {'HOME': temp_dir}) is no go :-)
[15:46] <didrocks> sergiusens: click has access to a separate tmp btw?
[15:46] <sergiusens> balloons, you need to re set the upstart environment and that might be a mess ;-)
[15:47] <sergiusens> balloons, the environment is managed by upstart
[15:47] <sergiusens> balloons, so you can get away with it, but if you don't reset properly, you break the entire env until you reboot/restart the session
[15:48] <balloons> sergiusens, ok makes sense. Well, hmm. I guess we've no choice but to redo the tests without patching home. I'm not sure if there are any implications we can't work around or not
[15:49] <balloons> sergiusens, I will say the patching home is not new.. the current tests do the same
[15:49] <sergiusens> balloons, was my implementation that bad?
[15:49] <balloons> sergiusens, ohh, you mean the if not click logic?
[15:49] <sergiusens> balloons, yes
[15:49] <balloons> it's been a bit since I did this. I assumed if I removed it, it didn't work
[15:50] <balloons> but perhaps this was me trying to be more elegant
[15:50] <sergiusens> balloons, it worked for me when I did it
[15:51] <balloons> sergiusens, ok, well let me re-enable it and do a quick test. if it works, I've no complaints
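(editorial aside) The HOME-patching pitfall sergiusens describes can be sketched as follows. This is a hypothetical illustration of the plain, non-click case only: `run_with_temp_home` is an invented helper, and under click the session environment is owned by upstart, so patching `os.environ` like this is not sufficient there.

```python
import os
import tempfile
from unittest import mock

def run_with_temp_home(test_func):
    """Run test_func with HOME pointed at a fresh temp dir, then restore it."""
    temp_dir = tempfile.mkdtemp()
    # mock.patch.dict restores the original environment on exit, even if
    # test_func raises, avoiding the "broken env until you restart the
    # session" failure mode described above.
    with mock.patch.dict(os.environ, {"HOME": temp_dir}):
        return test_func()

home_before = os.environ.get("HOME")
patched_home = run_with_temp_home(lambda: os.environ["HOME"])
home_after = os.environ.get("HOME")
```

Because the patch is scoped to a context manager, forgetting to reset by hand is impossible; the click problem is precisely that the authoritative copy of the environment lives in upstart, outside this process.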
[15:55] <dobey> fginther: sure. i had a meeting as well. i'm just wondering how all that code is being called in practice at the moment, to see what exactly there is left to do to switch it to running tarmac for the landing
[16:10] <fginther> dobey, can you view http://s-jenkins:8080/job/generic-land/ ?
[16:11] <dobey> fginther: i don't have whatever vpn config is required for that, no
[16:11] <dobey> fginther: is it not visible from the public URLs?
[16:11] <fginther> dobey, no, those details are not published
[16:12] <fginther> s/details/jobs/
[16:12] <fginther> dobey, I can pastebin some to you
[16:13] <dobey> fginther: that would be fine. i'm only interested in the arguments passed to autoland such as --ppa and whatnot
[16:14] <fginther> dobey, http://paste.ubuntu.com/6438151/ and http://paste.ubuntu.com/6438155/
[16:16] <dobey> thanks
[16:31] <alesage> fginther, tedg noting this Jenkins failure: that dir is apparently created in a second pass, maybe something in the "custom hooks" script to fix? https://jenkins.qa.ubuntu.com/job/indicator-sound-trusty-amd64-ci/16/console
[16:57] <fginther> alesage, looking into it
[16:57] <alesage> fginther, thanks
[17:01] <didrocks> sil2100: kenvandine: joining?
[17:01] <didrocks> plars: ^
[17:01] <plars> didrocks: yes, brt
[17:03] <didrocks> robru: you can't hear us/rejoin?
[17:03] <robru> didrocks, nope, can't hear a thing. gonna update & reboot
[17:03] <sil2100> Aaaah
[17:03] <sil2100> Joining
[17:21] <kenvandine> robru, uitk is building in the PPA now
[17:21] <robru> kenvandine, oh, great
[17:22] <kenvandine> robru, i need to run out for lunch, when the build is done can you get started testing?
[17:22] <kenvandine> robru, and when i get back i'll jump in too
[17:22] <robru> kenvandine, sure thing
[17:22] <kenvandine> cool
[17:22]  * kenvandine runs
[17:24] <Saviq> cihelp, can I file a bug somewhere to track issues with the CI infra? i.e. otto encounters locked unity7 session from time to time
[17:24] <fginther> Saviq, https://bugs.launchpad.net/ubuntu-ci-services-itself/+bugs
[17:26] <fginther> alesage, tedg, indicator-sound-ci is working now. still trying to figure out why it failed earlier
[17:26] <tedg> fginther, Cool, thanks!
[17:27] <vila> Saviq: from *time to time* ??? I could understand everytime, but from time to time... screen saver ?
[17:27] <alesage> fginther, yes maybe one pass was necessary to create that hook directory?  and then subsequent runs passed b/c it existed
[17:28] <Saviq> vila, yes, screen saver
[17:28] <fginther> alesage, hmm
[17:28] <Saviq> vila, I would expect for some projects the time to set up (apt-get etc.) is long enough that the lock screen kicks in
[17:28] <vila> Saviq: why now ?
[17:28] <Saviq> bug #1252386
[17:29] <vila> Saviq: ha ! slow network then, worked on
[17:29] <Saviq> vila, just wanted to say that maybe network is slow
[17:29] <Saviq> vila, still, we should disable screen lock completely on otto, shouldn't we ;)
[17:29] <vila> Saviq: oh yes
[17:29] <Saviq> or at least inhibit it during test runs
[17:30] <vila> Saviq: there is no screen to save there !
[17:30] <Saviq> vila, I imagined so
[17:31] <vila> Saviq: so when it happens it's for all tests but it doesn't happen for all jobs ?
[17:31] <Saviq> vila, yes, when it happens it's 100% failure
[17:32] <plars> didrocks: it looks like pitti has already fixed the whoopsie-upload-all crash I mentioned, should be better as soon as we pull in a newer apport
[17:32] <vila> Saviq: thanks
[17:32] <Saviq> vila, and it happens most often on unity8, gallery - the biggest dependencies probably
[17:38] <didrocks> plars: well… that's pitti, no surprise :)
[17:38] <didrocks> plars: thanks for the head's up!
[17:50] <sil2100> fginther: hello!
[17:50] <sil2100> fginther: https://code.launchpad.net/~sil2100/cupstream2distro-config/add_unity-scopes-api/+merge/195643 <- can you take a look and then maybe enable the automerger for this project :) ?
[17:51] <sil2100> fginther: thanks!
[18:07] <dobey> fginther: so there's a single jenkins job, and it runs the autoland script synchronously for every single MP that is approved?
[18:31] <fginther> dobey, yes, that's how it works
[18:31] <dobey> ok
[18:32] <dobey> fginther: and puppet isn't being used to manage jenkins config or anything, is it?
[18:33] <fginther> dobey, nope
[18:33] <dobey> ok
[20:01] <balloons> sergiusens, https://code.launchpad.net/~nskaggs/ubuntu-filemanager-app/disable-patch-home-click/+merge/195658. I remember the issue now. The tests fail if your /home directory is full of files I believe
[20:01] <balloons> I'm running now, we'll see
[20:02] <sergiusens> balloons, it shouldn't, I added a different count mechanism as well
[20:02] <sergiusens> balloons, did popey tell you the music app tests fail btw?
[20:02] <balloons> sergiusens, yes I know you count the files before running
[20:02] <balloons> sergiusens, yes, I'm looking after this run.
[20:03] <balloons> the trouble is so many files you'd have to scroll and the tests don't account for that
[20:05] <sergiusens> balloons, oh, that explains it
[20:07] <balloons> I think I should just next a folder under home and have the tests do their thing in there
[20:07] <balloons> *nest
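(editorial aside) The nested-folder idea could be sketched like this; `make_test_sandbox` is a hypothetical helper, not part of the actual file-manager test suite.

```python
import os
import shutil
import tempfile

def make_test_sandbox():
    """Create a scratch folder inside the real home directory.

    Pointing the tests at this folder sidesteps both problems above:
    HOME itself is never patched (which click/upstart forbids), and
    pre-existing files in the user's home can't skew file counts or
    force the app to scroll.
    """
    home = os.path.expanduser("~")
    return tempfile.mkdtemp(prefix="filemanager-tests-", dir=home)

sandbox = make_test_sandbox()
# ... run the tests against `sandbox` instead of the whole home ...
shutil.rmtree(sandbox)  # clean up afterwards
```

Combined with sergiusens's count-before-running mechanism, this would make the tests independent of whatever already lives in the user's home.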
[21:03] <fginther> robru, kenvandine, can either of you review: https://code.launchpad.net/~fginther/cupstream2distro-config/remove-duplicate-compiz/+merge/195673
[21:03] <kenvandine> fginther, looking
[21:04] <kenvandine> fginther, are we sure you should remove lp:compiz instead of lp:compiz/0.9.11?
[21:05] <kenvandine> fginther, i suspect so
[21:05] <robru> fginther, kenvandine: I'm not familiar with the 'no-dailies' stack, is that new?
[21:05] <kenvandine> i'm not either
[21:06] <fginther> robru, kenvandine "no-dailies" is to hold projects that need upstream merger support but are not part of daily release
[21:06] <fginther> for various reasons
[21:07] <fginther> lp:compiz/0.9.11 was updated on 10/22 by timo
[21:07] <fginther> so it's the "newer one"
[21:08] <robru> fginther, well, considering that we have been in manual publishing mode for *months* (eg, no dailies EVER), I don't see any value in having a "no-dailies" stack, so anything that makes it smaller looks good to me.
[21:09] <fginther> robru, kenvandine, the problem is that lp:compiz and lp:compiz/0.9.11 are the same branch. Having two different configs results in upstream merger running 2 different jobs for the compiz MPs
[21:09] <robru> fginther, ahh ok
[21:10] <kenvandine> fginther, makes sense
[21:10] <robru> (I'm not familiar with compiz version numbers)
[21:10] <robru> fginther, there's also a compiz-0.9.10 in the no-dailies stack as well, should that also be removed?
[21:11] <fginther> robru, looking
[21:12] <robru> fginther, oh, no, my mistake. so after your change lands, then no-dailies stack will have lp:compiz/0.9.10 and then head stack will have lp:compiz/0.9.11
[21:12] <robru> fginther, ok ok, looks good to me, approved.
[21:12] <fginther> robru, right, they are two different branches in that case
[21:15] <kenvandine> robru, ubuntu_keyboard passed, trying notes_app again
[21:16] <robru> kenvandine, hmmm ok. strange, so many flaky tests or weird issues today
[21:16] <robru> kenvandine, ok, gone for lunch for real this time
[21:25] <tedg> So it seems that dbus-test-runner has its own PPA.
[21:25] <tedg> It is the only package building there.
[21:25] <tedg> Which, well, sucks.
[21:25] <tedg> Anyone know why that might be?
[21:25] <tedg> alesage perhaps?
[21:26] <alesage> tedg investigating
[21:27] <tedg> Or, wait, does that mean it gets released normally?
[21:28] <alesage> tedg not fully grokking the problem there
[21:28] <tedg> alesage, I need other packages to be able to dep on it.
[21:44] <alesage> fginther, quick q: does this ppa get added to every build under trusty, at least? https://launchpad.net/~ubuntu-unity/+archive/daily-build
[21:44] <alesage> or fginther am I recalling a bygone era
[21:44] <fginther> alesage, yes, that's the default behavior. There are a couple projects configured to *not* do that.
[21:46] <fginther> tedg, it's probably historical reasons or someone needed to build it for multiple series
[21:46] <alesage> tedg would you want to transpute your dbus-test-runner to this ppa? or would indicators et al. want a separate ppa?
[21:46] <tedg> I think there should probably be one PPA to rule them all.
[21:46] <fginther> tedg, alesage, there are still some PPAs configured to 'backport' the current trunk
[21:47] <tedg> So let's get everything pointing there.
[21:47] <alesage> fginther, tedg but for the record, should what's in this ppa be "fresher" than daily-release?
[21:47] <tedg> The only reason I'd want a custom PPA is if it could remove my projects from the release craziness.  But I'm guessing that's inescapable.
[21:48] <alesage> I mean for projects so configured--is dbus-test-runner one?
[21:49] <fginther> alesage, it's not fresher.  The difference is that only trusty packages are in daily-build, the indicator-staging-ppa has raring, saucy and trusty
[21:49] <alesage> fginther, o ok I see
[21:51] <fginther> tedg, alesage, the custom PPAs are really only used to provide trunk to older series, if you don't need that for dbus-test-runner, we should eliminate that step
[21:53] <tedg> I'm not using the older builds for anything.
[21:53] <tedg> Seems distro version should be fine on older releases.
[21:55] <fginther> tedg, thanks
[22:01] <tedg> fginther, Is it possible to trigger a push of dbus-test-runner trunk into the daily-build PPA?  I'd like to use API that's in trunk.
[22:05] <fginther> tedg, not directly, that PPA is managed by the daily release tools...
[22:06] <tedg> Oh, my faves.
[22:06] <tedg> fginther, Then will we get a daily release to that PPA even if the target PPA isn't there?
[22:07] <popey> sergiusens: basically only one app passed
[22:07] <fginther> robru, is the daily release machinery working yet?
[22:07] <robru> fginther, hahahahhahhahahahaha
[22:07] <robru> fginther, oh, it probably 'works', sure. but it'll be manually published for the foreseeable future
[22:08] <fginther> robru, what about builds to the daily-build PPA?
[22:08] <tedg> robru, Let's separate broken process and broken tools :-)
[22:08] <sergiusens> popey, notes?
[22:09] <robru> fginther, hmmmm. last time i tried to build friends, it built but the check step failed. so it should be building in the PPA fine.
[22:09] <sergiusens> popey, well, for the weather app, they lowered the failures by 1
[22:09] <robru> fginther, haven't had a chance to investigate fully. /still on lunch
[22:09] <sergiusens> popey, all the terminal ones passed for me
[22:09] <tedg> Yeah, it looks like Friends hit the PPA about 4h 40m ago.
[22:11] <fginther> robru, any chance you could build the qa stack?
[22:11] <robru> fginther, sure
[22:11] <fginther> tedg, that should get you dbus-test-runner
[22:11] <popey> sergiusens: let me reboot and re-run all the tests and get you logs
[22:11] <tedg> Sweet!  Thanks guys!
[22:12] <robru> tedg, fginther: ok, just started the build. not sure how long it usually takes. watch the PPA I guess (it shows up in the PPA before the job finishes running)
[22:13] <popey> sergiusens: http://paste.ubuntu.com/6439836/ notes crapped out part way through
[22:14]  * popey reboots and tries again
[22:14] <sergiusens> popey, worked fine for me, but I only have maguro
[22:14]  * tedg hits reload repeatedly to make it build faster
[22:14] <sergiusens> popey, wait; after you reboot, don't start the tests until mtp is ready
[22:14] <sergiusens> popey, it will reset your adb connection
[22:14] <popey> ah
[22:21] <popey> sergiusens: http://paste.ubuntu.com/6439879/ one failure on notes
[22:28] <popey> sergiusens: balloons http://paste.ubuntu.com/6439903/ music app 2 fails
[22:31] <sergiusens> popey, ack, going to run notes tests again
[22:32] <sergiusens> popey, that said, it's a lot better than current http://reports.qa.ubuntu.com/smokeng/trusty/touch/maguro/23:20131118:20131111.1/5017/notes-app-autopilot/
[22:33] <sergiusens> popey, and http://reports.qa.ubuntu.com/smokeng/trusty/touch/mako/23:20131118:20131111.1/5018/notes-app-autopilot/
[22:33] <sergiusens> popey, so I'd approve that one regardless if you agree
[22:41] <sergiusens> popey, you are right, test_delete fails on trusty, but works on saucy for notes app
[22:44] <sergiusens> popey, the slide emulation isn't enough
[22:44] <robru> tedg, ok, looks like dbus-test-runner is in the ppa
[22:44] <tedg> robru, Yup, thanks!  Requeueing all the other builds. :-)
[22:50] <popey> sergiusens: ok, notes I'll approve
[22:51] <popey> sergiusens: but not weather or music for now... also, bed
[22:51] <sergiusens> popey, I'm rerunning weather now, I'll leave a comment on that one :-)
[22:51] <popey> ok
[22:51] <sergiusens> popey, thanks for the run!
[22:51] <popey> will refresh and look in the morning
[22:51] <popey> thanks
[22:51] <sergiusens> popey, enjoy sleep while you can!
[22:51] <popey> :D
[23:41] <robru> fginther, seems not all is well just yet: http://q-jenkins.ubuntu-ci:8080/job/cu2d-sdk-head-3.0publish/295/console
[23:43] <robru> fginther, also, it seems like some communication issue between jenkins and test machines? http://q-jenkins.ubuntu-ci:8080/job/cu2d-friends-head-2.2check/253/console shows check job failing because it can't find the junit results from the (successful!) subjob.