[05:23] <pitti> Good morning
[06:58] <MangledBlue> can anybody assist? - simple install - my MD5 checks out - c7f439e864d28d9e5ca2aa885c4ec4cb *ubuntu-12.04.4-desktop-amd64.iso - any thoughts - please assist
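The check MangledBlue describes can be scripted; Ubuntu's published MD5SUMS file uses the `<hash> *<filename>` format that `md5sum -c` consumes directly. A minimal sketch with a stand-in file (not the real ISO):

```shell
# Stand-in for the downloaded ISO: any file plus a matching checksum list.
printf 'example image contents' > sample.iso
md5sum sample.iso > MD5SUMS      # same "<hash>  <name>" layout md5sum -c expects
md5sum -c MD5SUMS                # prints "sample.iso: OK" when the hash matches
```

With the real image, you would put the published line (hash, `*`, filename) into MD5SUMS and run the same `md5sum -c` in the download directory.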
[07:49] <jibel> Good morning
[08:08] <jibel> pitti, apport isn't installed by default on server?
[08:09] <jibel> on precise?
[08:09] <pitti> jibel: it should, it has Tasks: cloud-image and server
[08:09] <pitti> jibel: ah, precise
[08:09] <pitti> still, Tasks: server
[08:09] <jibel> http://d-jenkins.ubuntu-ci:8080/view/Upgrade/job/upgrade-ubuntu-precise-trusty-server-tasks-amd64/44/artifact/results/bootstrap.log
[08:11] <pitti> jibel: hmm, maybe Tasks: server means something else then, and it's not in the base installation
[08:13] <jibel> pitti, nm, I'll find out. it's probably because in the container we install ubuntu-standard and not the task server
[08:13] <pitti> jibel: ah, it's not in ubuntu-standard
[08:13] <jibel> I'll fix the profile
[08:13] <pitti> thanks
[08:14] <jibel> and file a bug against u-r-u to fail nicely
[08:14] <pitti> or at least do a check if apport-bug exists
[08:14] <DanChapman__> good morning all
[08:19] <jibel> pitti, yes, that's what I meant by nicely, and revert the system to its original state
[08:19] <jibel> in this case the failure was due to a hashsum mismatch
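pitti's suggestion of probing for apport-bug before calling it can be sketched in a few lines (the helper name is hypothetical, not u-r-u's actual code):

```python
import shutil

def can_file_apport_bug():
    """Return True if the apport-bug CLI is available on PATH."""
    return shutil.which("apport-bug") is not None

# The upgrade tool can then fail gracefully instead of crashing:
if not can_file_apport_bug():
    print("apport-bug not installed; skipping crash report")
```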
[09:05] <jibel> DanChapman, Good morning, are you fixing the custom installation tests? otherwise I'll do it.
[09:09] <DanChapman> jibel I am indeed, working on it now. Just finishing off an emulator which solves it nicely. I'll give you a shout once I have the MP ready
[09:10] <jibel> DanChapman, excellent, thanks so much for this work. I'll be happy to review and test it
[09:34] <jibel> \o/ upgrade tests dashboard back to green
[11:52] <pitti> jibel: hm, http://d-jenkins.ubuntu-ci:8080/job/trusty-adt-libnih-ppc64el/configure still has the -e
[11:53] <jibel> pitti, looking
[11:54] <pitti> jibel: I retried the libnih job, and it's still hanging (killing now)
[11:55] <pitti> jibel: that got dropped in r311, seems this didn't re-configure itself?
[11:57] <jibel> pitti, right, last request was on 2014-02-24. If the job is triggered from jenkins directly it is not reconfigured.
[11:57] <pitti> jibel: ah, that's what I did
[11:58] <pitti> jibel: I suppose I can manually change the configuration then, for manual rebuilds?
[11:58] <jibel> pitti, I removed -e from the config in jenkins
[11:58] <jibel> pitti, other solution is to retrigger a test from britney
[11:58] <pitti> oh, all of a sudden -- ah, thanks
[11:58] <pitti> jibel: for that job, I suppose?
[11:58]  * pitti does that for upstart as well
[11:59] <jibel> yes for that job
[11:59] <pitti> thanks
[12:02] <pitti> jibel: btw, I taught myself the basics of celery on Monday, I created two containers (master and a slave1) and got a simple job deployed
[12:02] <pitti> jibel: feels quite a bit more manual, but after seeing all the XML config that we do with Jenkins I'm not even sure that jenkins is easier :)
[12:04] <jibel> pitti, celery or jenkins, once it is set up you usually won't touch the configuration much
[12:04] <pitti> jibel: AFAICS there is not really much celery configuration, aside from which redis or rabbitmq server you use? everything is mostly programmatic through python functions
[12:05] <pitti> jibel: except that you seem to touch the job config all the time :)
[12:05] <jibel> pitti, I don't, only when you added new archs :)
[12:05] <pitti> jibel: anyway, I don't plan anything serious with this anytime soon, I was just curious to get a feeling what it is like
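For readers curious about the celery setup pitti describes, a minimal task module looks roughly like this (broker hostname and task are placeholders; a running redis or rabbitmq server is required, which is why this is a configuration sketch rather than a runnable test):

```python
# tasks.py -- a simple celery job; workers are started with:
#   celery -A tasks worker --loglevel=info
from celery import Celery

# The only real "configuration": which broker the master and slaves share.
app = Celery("tasks", broker="redis://master/0")

@app.task
def run_test(package):
    # Everything else is programmatic: plain Python functions.
    return "result for %s" % package
```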
[12:06] <pitti> heh
[12:11] <jibel> pitti, I think I'll move most of the logic of adt-britney to britney itself directly in autopkgtest.py
[12:12] <jibel> pitti, it has all the required information to do the job and will remove a lot of redundancy
[12:12] <pitti> jibel: ah, ok; that'll break the shiny new tests :) but if that makes things easier, so much the better
[12:12] <pitti> and we can probably adjust the test to do the file mocking at the lower level
[12:13] <jibel> pitti, sure, I'll update the testsuite. That'll remove a lot of similar code, and both britney and adt-britney will use the same source of information
[12:14] <jibel> I'll leave in adt-britney the spool management and integration with jenkins
[12:30] <pitti> jibel: I guess from britney's POV there is no real reason why we can't introduce more states, like "ERROR" (test bed fail/uninstallable) or "BROKEN" (never succeeded), or is there?
[12:30] <pitti> as britney only really looks for "pass"
[12:31] <pitti> i. e. it should then accept PASS and BROKEN, but otherwise just show the error
[12:42] <jibel> pitti, that's correct. Currently the state PASS is hardcoded in britney.py. We could also add additional information like the reason it fails (test failure, infra, ...); the status on update_excuses is free text
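The state handling being discussed could look something like this inside britney (a hedged sketch; only PASS exists today, the other state names are the ones proposed above, not existing code):

```python
# Proposed result states: PASS stays the only "success", but BROKEN
# (a test that has never succeeded) should not block promotion either.
ACCEPTABLE = {"PASS", "BROKEN"}

def blocks_promotion(state):
    """True if this adt result should hold the package in -proposed."""
    return state not in ACCEPTABLE

# ERROR (testbed failure / uninstallable) would still block, with the
# free-text excuse on update_excuses explaining why.
```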
[12:44]  * infinity is excited about all this talk of improving adt-britney.
[13:32] <pitti> infinity: so am I :)
[13:54] <DanChapman> jibel https://code.launchpad.net/~dpniel/ubiquity/autopilot_fix_custominstall/+merge/208605 when you get a chance mate
[14:01] <jibel> DanChapman, thanks, will do
[14:07] <pitti> jibel: so would you think it would make any sense at all for me to try and read the current code and fix those three bugs?
[14:08] <pitti> jibel: (of britney autopkgtest.py, to fix the order and issue of new source taking over a binary, etc.)
[14:19] <jibel> pitti, sure if you have time. There is no collision with what I'm doing.
[14:27] <DanChapman> balloons, howdy. is there a way to rotate the screen on grouper?
[14:29] <pitti> jibel: ah, it's not? good
[14:31] <jibel> pitti, the only change I did in britney.py is to teach read_sources about XS-Testsuite otherwise I'm in autopkgtest.py / request()
[14:31] <pitti> jibel: ah right, and the bugs we found are in britney.py itself
[14:32] <pitti> wow, how do you mean "no unstable jobs" at http://d-jenkins.ubuntu-ci:8080/view/Upgrade/ ?
[14:32] <pitti> that can't be!
[14:32] <pitti> QA engineer rule #1: tests fail as soon as you stop looking at them
[14:33] <pitti> jibel: so yes, I have time :)
[14:34] <jibel> pitti, that was easy, I followed the #2 qa engineer rule: if a test fails too often blame the infrastructure and skip it ;P
[14:34] <jibel> j/k
[14:36]  * pitti gives jibel a hug
[14:38]  * jibel gives pitti a hug back
[14:39] <pitti> so, off to making a date with britney to get to know her
[15:08] <jibel> DanChapman, in the diff: 163+ if num is self.total_number_partitions:
[15:08] <jibel> DanChapman, why do you use is to compare the values?
[15:39] <jibel> DanChapman, so, apart from some inconsistencies like 68 + if num == self.part_table_rows:
[15:39] <jibel> [...]
[15:39] <jibel> 172 + if num is not self.part_table_rows + 1:
[15:40] <DanChapman> jibel, sorry, was picking my boy up from school. self.total_number_partitions is the total number of partitions defined in the config. When creating partitions there is always the 'freespace' row; on the last partition creation we don't gain a row but just replace freespace with the final partition. So instead of waiting for total rows to increment, it currently sleeps to let the rows update when on the final partition
[15:40] <jibel> an unused call: 144+ item_table = tree_view.get_partition_table_dict()
[15:40] <balloons> DanChapman, I believe the grouper uses a fixed landscape mode
[15:40] <DanChapman> jibel, ahh i thought i'd removed that
[15:40] <jibel> and some TODOs, it looks fine. I did a few runs and it worked
[15:41] <jibel> DanChapman, I meant the use of 'is' instead of '=='
[15:41] <jibel> to compare integers which are both the return value of len()
[15:44] <DanChapman> jibel ahh sorry i misread. :-) that's a typo i'll change it to ==
[15:47] <jibel> DanChapman, np, it is just that the 2 operators have different meanings: 'is' compares identity and '==' compares equality
[15:48] <jibel> so, I thought you had a reason to do so
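jibel's point in executable form: `is` compares object identity, `==` compares values, and CPython only caches small integers, so identity comparisons on `len()` results are unreliable:

```python
a = len(list(range(300)))   # 300
b = len(list(range(300)))   # 300, possibly a distinct int object
assert a == b               # equality: always True for equal values
# 'a is b' is implementation-defined: CPython interns roughly -5..256,
# so two independently produced 300s need not be the same object.
small_a, small_b = len("xy"), len("ab")
assert small_a is small_b   # 2 is cached, so this happens to hold in CPython
```

This is why the ubiquity diff should use `==` everywhere it compares row counts.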
[15:54] <DanChapman> jibel, that makes sense :-) I'll quickly make those changes and fix the pep8/pyflake errors
[15:54]  * DanChapman forgot to check them
[15:59] <DanChapman> jibel updates pushed. Thanks for reviewing it.
[16:04] <DanChapman> balloons, ok thanks mate :-)
[16:04] <senan> balloons, DanChapman, hii
[16:04] <DanChapman> senan hey there :-)
[16:05] <balloons> DanChapman, I pushed some feedback on the runner and format checks.. Small tweaks, then we should be able to land.
[16:05] <jibel> DanChapman, awesome, maybe the world will be greener tonight :)
[16:06] <DanChapman> jibel I really hope so. :-)
[16:07] <DanChapman> balloons, yes I'm waiting on your response for the runner. It was just about where you would prefer instead of /tmp.
[16:07]  * DanChapman goes to check the other comments
[16:08] <balloons> sent it along ;-) basically /home
[16:08] <senan> DanChapman,balloons, do I need to break down all the big assertions into small ones?
[16:12] <jibel> DanChapman, I love your comments starting from rev 6131 "hmmm", "ahh", "oops", "lets try this", ... :D
[16:12] <jibel> that makes me think I really need to update the runner to run from a local directory
[16:13] <DanChapman> jibel :-D yes there is only so much you can say trying to fix the same thing
[16:13] <DanChapman> local dirs would be awesome!
[16:22] <balloons> re: add pyflakes, DanChapman, I wouldn't worry too much about flake8.. I still don't use it, running them separately. I think the pain would be it's not packaged like the others.
[16:22] <balloons> just call the folder something besides tests :-)
[16:25] <DanChapman> balloons, sounds good to me :-)
[16:32] <senan> balloons, do you want me to change all the lambdas I used for selecting gui elements and asserting?
[16:38] <balloons> senan, ahh yes.. reducing complexity in your asserts is a good thing. As I said, when the assert fails now, it's hard to know why
[16:38] <balloons> and it's kind of cumbersome to read ;-)
[16:38] <senan> balloons , :D
[16:40] <jibel> pitti, -virtfs local,id=autopkgtest,path=%s,security_model=none,mount_tag=autopkgtest is the magic option to enable 9p, right?
[16:41] <jibel> and then mount -t 9p ... on the guest
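Putting jibel's two halves together as a configuration sketch (the host path stands in for the `%s` template; `trans=virtio` is the usual transport option for qemu's virtfs on the guest side):

```shell
# Host: export a directory to the guest over 9p/virtfs.
qemu-system-x86_64 ... \
  -virtfs local,id=autopkgtest,path=/path/to/shared,security_model=none,mount_tag=autopkgtest

# Guest: mount the exported tag with the 9p filesystem driver.
mount -t 9p -o trans=virtio autopkgtest /mnt/autopkgtest
```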
[16:42] <jibel> DanChapman, meh :( testtools.matchers._impl.MismatchError: After 10.0 seconds test on GtkDialog.visible failed: False != dbus.Boolean(True, variant_level=1): Partition dialog did not close
[16:47] <pitti> jibel: yes
[16:48] <DanChapman> jibel grrr and 'unable to find freespace item'. I didn't change any of that part. :-( i'll look into it
[16:48] <elopio> ping balloons, are you working on the terminal?
[16:48] <pitti> jibel: but mind that it got broken with the 2.0 PPA package, I filed bug 1285505 this morning
[16:50] <jibel> pitti, ack. I'm running the version from trusty on saucy, not sure I'll play with the PPA in production :)
[16:53] <balloons> elopio, yes I pushed an mp quickly
[16:54] <balloons> it needs more work, but I stopped myself from going crazy
[16:54] <elopio> balloons: ok, thanks :)
[16:55] <elopio> I'm also trying to keep the head. That comes first :D
[17:02] <senan> balloons, DanChapman, Can you please check this snippet http://paste.ubuntu.com/7006042/
[17:03] <balloons> quick glance that looks more readable. Consider making it into paragraphs where it makes sense. Meaning, add an extra newline in there to group things
[17:04] <senan> balloons, okay :)
[17:05] <DanChapman> jibel do you think it's acceptable to try closing the dialog, say 3 times, before letting it fail? It was probably something like the combobox hadn't closed, so the click would have closed that instead and it just needed another click
[17:08] <elopio> ping ubuntu-qa, have any of you seen this? https://bugs.launchpad.net/ubuntu-keyboard/+bug/1285781
[17:20] <elopio> davmor2: have you? ^
[17:21] <davmor2> elopio: I haven't no
[17:21] <elopio> :( loneliness
[17:25] <davmor2> elopio: that's how I feel most of the time then some dev will pipe up :)
[17:27] <senan> balloons, I can use it like this, right? self.assertThat(lambda: self.get_spinner_button_control().sensitive, Eventually(Equals(True)))
[17:28] <elopio> robotfuel: if I understand the rss correctly, it goes to the google apis site and the feed is parsed remotely.
[17:29] <elopio> so it's not possible to access a local feed.
[17:29] <robotfuel> elopio: I didn't realize that.
[17:29] <balloons> senan, consider breaking that down a bit
[17:30] <elopio> robotfuel: what we need is to wrap the api calls, so we can mock them, or inject a different dependency or something like that.
[17:30] <elopio> really easy to do if it was python :) On qml and javascript, I'm clueless.
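elopio's dependency-injection idea, sketched in Python for clarity (the QML/JS app would need the same shape; all names here are hypothetical):

```python
class RemoteFeedSource:
    """Real implementation: would call the remote Google feed API."""
    def fetch(self, url):
        raise NotImplementedError("network access, not used in tests")

class FakeFeedSource:
    """Test double: returns canned entries, no network involved."""
    def __init__(self, entries):
        self.entries = entries
    def fetch(self, url):
        return self.entries

def newest_title(source, url):
    # App code depends only on the .fetch() interface, so either the
    # real source or the fake can be injected.
    return source.fetch(url)[0]["title"]

fake = FakeFeedSource([{"title": "local entry"}])
assert newest_title(fake, "ignored://feed") == "local entry"
```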
[17:30] <asac> elopio: did we start the job yet?
[17:30] <senan> balloons, assign to a variable first and then do the assertion?
[17:30] <asac> elopio: i still see the stuck job :/
[17:31] <asac> oh now its gone
[17:31] <asac> odd
[17:31] <asac> hmm. now its back :/
[17:31] <balloons> senan, yes, grab the object.. then assert on it
[17:31] <senan> ok balloons
[17:31] <elopio> asac: I didn't, I was in a meeting. I'll do it now, because you seem to be in a hurry :)
[17:32] <robotfuel> elopio: I wonder if you could use wireshark to figure out what is needed to mock the api.
[17:33] <asac> elopio: well, no need to wait i think.
[17:33] <asac> elopio: but no hurry if you have other important things to do
[17:34] <elopio> huh, I don't have permissions
[17:34] <asac> just wanted to ensure it didn't fail etc.
[17:34] <asac> see :P
[17:34] <asac> that's what I wanted to ensure we know early :P
[17:34] <elopio> fginther: can you give me permissions to rebuild the autopilot release job in http://q-jenkins:8080 ?
[17:35] <elopio> cgoldberg: you probably have permissions for that. Can you give me a hand for now?
[17:36] <cgoldberg> elopio, my vpn is down.. i'm waiting on IS for new credentials
[17:40] <elopio> asac: yes, I see now :)
[17:42] <senan> balloons, DanChapman, I've updated the branch, can you please review it
[17:43] <asac> elopio: you can go into #ubuntu-ci-eng and ping cihelp
[17:43] <asac> in case fginther isnt around... there might be others that can help
[17:43] <asac> (all ci folks are subscribed to that keyword)
[17:44] <elopio> last time I did that, they told me to wait for fginther. It's worth a try anyway.
[17:45] <asac> yeah
[17:51] <senan> balloons,DanChapman, I just resubmitted MP
[18:00] <elopio> asac: it's running.
[18:05] <asac> nice!
[18:05] <asac> thx
[18:11] <DanChapman> jibel I just updated my ISOs to today's image and it's not the tests causing the failures. see http://ubuntuone.com/1z3xRFF2BayykE5MjjP9Ju unity and window chrome aren't visible and the mouse and window/dialog focuses are all out of whack
[18:12] <DanChapman> I've run it 3 or 4 times on both i386 and amd64 images, both presenting the same
[18:22] <DanChapman> hmmm but that doesn't explain lubuntu
[19:06] <balloons> elopio, :-( http://paste.ubuntu.com/7006649/
[19:19] <elopio> balloons: could the application have crashed between swiping and confirming?
[19:19] <elopio> that seems to be happening a lot.
[19:20] <balloons> elopio, yes.. and is it a bug in the emulator?
[19:21] <elopio> balloons: if the application crashes, it's not a bug in the emulator. We hold a reference to a qml element, and when we try to click it, it's no longer on the screen
[19:21] <balloons> elopio, I can reproduce it easily as long as there is more than 1 item in the list
[19:23] <elopio> balloons: I don't get it. So it's not crashing?
[19:23] <balloons> elopio, the error I gave you happens whenever the list contains more than 1 element
[19:24] <balloons> if it's only 1 element, the error does not occur
[19:25] <elopio> balloons: weird, because on the toolkit the self test for that method is done with more than 1 element.
[19:25] <elopio> balloons: I need to pick up my girlfriend.
[19:25] <elopio> when I return, I'll try to reproduce it.
[19:25] <balloons> elopio, :-) We should simply test both cases
[19:31] <elopio> I can't reproduce it. But I also can't tell my qmlscene not to be full screen.
[19:31] <elopio> balloons: please, take a screenshot before the error occurs, to see the size of your clock.
[19:32] <balloons> elopio, it was in wideview on the clock
[19:32] <balloons> elopio, I will help debug after fixing the test and getting this shipped :-)
[20:07] <elopio> I'm back.
[20:08] <elopio> balloons: I have the clock on wideview, but it is maximized on my screen
[20:08] <elopio> I don't know how to unmaximize it. I hate qmlscene.
[20:08] <balloons> elopio, hey.. lots of fun today
[20:08] <balloons> 2 second sidebar elopio : https://bugs.launchpad.net/ubuntu-clock-app/+bug/1283031
[20:09] <balloons> what was your plan for properly cleaning up and setting up test envs?
[20:09] <balloons> mocking /home was it?
[20:09] <balloons> and I'd like to make one bug and assign all the apps as affected I think so we can track properly
[20:11] <elopio> balloons: from the sprint, the plan was either override the environment variables to use a temp config directory, or use a fake in-memory database
[20:11] <balloons> ahh fake in-mem db was your other plan. I prefer mocking I think
[20:11] <balloons> in this case, since we rely on things outside our own app, I'm not sure in-mem db works
[20:12] <elopio> balloons: in this case, why don't we just delete the alarms we create?
[20:12] <balloons> elopio, when AP crashes, well ;-)
[20:13] <balloons> well not AP, but tests not finishing cleanly
[20:13] <elopio> ah, yes, that's what we need to fix first.
[20:13] <balloons> so I vote just mock the thing.. it'll never matter and will go away on its own even if we don't clean up well
[20:14] <veebers> balloons: good save, I was going to have a word :-)
[20:14] <elopio> balloons: I like that too. Just a small detail is that it's not overriding $HOME, it's one of the xdg env vars.
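The approach elopio describes, as a minimal fixture sketch (the variable names are the standard XDG ones; the helper itself is hypothetical, not the clock app's actual test code):

```python
import os
import tempfile

def isolated_xdg_environ():
    """Copy of os.environ with the XDG dirs pointed at a fresh temp
    directory, so tests never touch the user's real config/data/cache."""
    tmp = tempfile.mkdtemp(prefix="test-xdg-")
    env = dict(os.environ)
    for var in ("XDG_CONFIG_HOME", "XDG_DATA_HOME", "XDG_CACHE_HOME"):
        env[var] = os.path.join(tmp, var.lower())
        os.makedirs(env[var])
    return env

# The app under test is then launched with this environment; the temp
# dir simply gets deleted afterwards, even if the test dies uncleanly.
env = isolated_xdg_environ()
```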
[20:14]  * balloons waves to veebers :-)
[20:14] <veebers> how are things balloons?
[20:15] <balloons> not bad, yourself? I'm awaiting this: https://code.launchpad.net/~veebers/autopilot-qt/reintroduce-exporting-qobject-children-of-qml-items/+merge/207581
[20:15] <veebers> balloons: you might find this interesting from our convo in Oakland. Burger place in NZ menu http://www.velvetburger.co.nz/ check out "The Stag"
[20:15]  * veebers looks
[20:15] <balloons> so many things to get going.. testing is a full contact sport
[20:16] <veebers> balloons: aye, that MR is in the current release proposal
[20:16] <veebers> we're just doing the testing for it now
[20:16] <balloons> veebers, oO venison :-) Lots of it on the menu.
[20:16] <veebers> heh :-)
[20:17] <balloons> veebers, "This one used to be made from Moa (which was a big bird), but we ran out of them a few years back"
[20:17] <balloons> I love the humor
[20:17] <veebers> ^_^
[20:19] <elopio> balloons: you are running too much :) It's contagious if you stay for too long on #ci
[20:20] <elopio> I can't reproduce the failure deleting with multiple items on the alarms list, wide mode. :/
[20:22] <balloons> elopio, perhaps you can help solve problem 1 atm with clock.. fixing the last test that fails.. have a look at my MP in progress for where I'm going: https://code.launchpad.net/~nskaggs/ubuntu-clock-app/tweak-get-num-alarms/+merge/208681
[20:23] <balloons> the old version of get_num_alarms didn't seem to work for me (returning 0 all the time)
[20:24] <elopio> balloons: but if that doesn't work, it's a problem in the QQuickView, which returns the wrong .count property
[20:24] <balloons> you just can't win.. it's odd because only 1 test failed, and it was the same one every time
[20:25] <elopio> your workaround seems more likely to make a bigger mess.
[20:25] <balloons> elopio, but actually listSavedAlarm will only ever be 1.. it's the list
[20:26] <balloons> my tweaks do seem to have gotten large, heh
[20:26] <elopio> balloons: self.page.get_num_of_alarms()
[20:26] <elopio> 2
[20:26] <elopio> the count property of a QQuickListView returns the number of list elements it has.
[20:27] <balloons> ohh sorry, you returned count :-)
[20:27] <balloons> the code now reproduces the StateNotFoundError I originally attempted to avoid
[20:27] <balloons> brilliant
[20:29] <balloons> elopio, so I agree.. scrapping this
[20:29] <elopio> balloons: you are going way too fast for me. Now there are three issues of yours I don't understand:
[20:29] <elopio> - delete with more than one item failing
[20:29] <elopio> - count always returning 0
[20:29] <elopio> - this state not found StateNotFoundError you have just mentioned
[20:29] <elopio> :D can we choose one?
[20:29] <balloons> elopio, lol.. gotta keep up
[20:29] <balloons> they are all issues.. you dig into one thing and find a bunch
[20:31] <elopio> balloons: I can understand that, but I can't reproduce any of them.
[20:32] <elopio> nor can jenkins, as it has merged all the branches so far.
[20:32]  * balloons feels special
[20:32] <balloons> all I want is to land something that will make the dashboard green.. From there, we can take everything one at a time
[20:32] <balloons> And perhaps it's time we disabled the 1 test that fails for me, test it manually and ship it with a bug to fix the test
[20:36] <elopio> balloons: but just yesterday you were the one saying that we shouldn't be obsessed with getting everything to green, and try to make proper fixes
[20:36] <elopio> this failure for example: http://ci.ubuntu.com/smokeng/trusty/touch/mako/210:20140227:20140224/6849/ubuntu_clock_app/821577/
[20:36] <elopio> it sounds like we start with 0 alarms, so we can't delete them.
[20:37] <elopio> first step for me would be to try to improve that error message we are getting.
[20:39] <elopio> let me try something quick.
[20:42] <balloons> elopio, yes I was looking at that one.. it still fails after all you and nik90's changes
[20:43] <elopio> balloons: yes, nik got it, it's because we are not waiting for the alarm to be added to the list.
[20:54] <balloons> elopio, so add alarm._confirm_addition() to add_alarm. _confirm_addition() can poll waiting for the count to increase.. do we have something more elegant?
[20:54] <elopio> balloons: working on it.
[20:55] <balloons> elopio, we could wait_select for the new object to be created.. I like that
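The polling that `_confirm_addition()` would do boils down to a bounded retry loop; a generic pure-Python sketch of the idea (autopilot's `Eventually` matcher and `wait_select_single` do effectively this under the hood):

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.1):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# e.g. after saving an alarm, wait for the list to grow by one
# (the list here is a stand-in for the on-screen alarm list):
alarms = [1]
before = len(alarms)
alarms.append(2)      # stand-in for "alarm got added asynchronously"
assert wait_until(lambda: len(alarms) == before + 1)
```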
[21:34] <asac> elopio: any idea why we have 483 tests there?
[21:34] <asac> http://q-jenkins.ubuntu-ci:8080/job/autopilot-release-gatekeeper/label=mako-07/45/testReport/
[21:34] <asac> we should have 600+ i think
[21:35] <asac> at least just short of 600
[21:35] <asac> e.g. http://ci.ubuntu.com/smokeng/trusty/touch/mako/210:20140227:20140224/6849/
[21:35] <asac> remove all the non-AP ones
[21:36] <asac> ok mediaplayer is not in there
[21:36] <asac> hmm. wrong
[21:36] <asac> thomi: ^^  did you guys exclude some APs?
[21:36] <asac> can you spot which ones from dashboard are not run?
[21:37] <thomi> asac: OTP, one second
[21:37] <thomi> veebers: that's for you i think
[21:37] <asac> so install-and-boot security, sdk and default are not expected to be in there
[21:37] <veebers> asac: OTP will check in a moment
[21:38] <asac> that's 9 tests in total we can subtract from the 610 we run on dashboard
[21:38] <asac> think shorts app
[21:38] <asac> ok thanks. let me know
[21:39] <asac> wonder where I can find ALL tests run at all
[21:39] <asac> only see the number, but can't find the complete list
[21:55] <elopio> asac: sorry, I'm with you now.
[21:57] <elopio> it's stuck on maguro again. veebers: I'll kill this job so you can use it for your autopilot release. I'll rerun it tomorrow after the 5.2 meeting.
[22:00] <veebers> elopio: ack thanks, I've asked re the maguro being offline but got no answer. Will ask again
[22:01] <elopio> asac: I'll check the current results, which are enough to have an ugly night. For the run tomorrow I'll figure out which packages we are missing.
[22:01] <elopio> asac: sounds good?
[22:06] <elopio> I need a break now. bbl.
[22:07] <elopio> asac: oh, I forgot, here you can find all the tests that were run:
[22:08] <elopio> http://q-jenkins:8080/job/autopilot-release-gatekeeper/45/label=mako-07/testReport/?
[22:11] <asac> elopio: hmm. if you scroll down there is a table "All Tests"
[22:11] <asac> elopio: that thing doesn't have 483, right?
[22:11] <asac> (yes, i looked at that page)
[22:12] <asac> elopio: and yes, sounds good on getting the rest of the tests into tomorrow's run
[22:12] <asac> but at best pretty early in the day so we can digest those results in the meeting
[22:13] <veebers> asac, elopio: I'm not too sure why those numbers are different (483 in our jenkins job vs the ~600 on the dashboard); as far as I was told, this job was a clone of what produces those dashboard results
[22:20] <asac> veebers: could be... we should really cross check. could also mean that some tests crash so miserably they don't even show up
[22:20] <asac> I would do it, but as above I just can't see the complete list
[22:20] <asac> that was run
[22:21] <asac> the "All Tests" table surely doesn't have 480 rows :)
[22:21] <asac> let me check with doanac` :)
[22:21] <asac> doanac`: http://q-jenkins:8080/job/autopilot-release-gatekeeper/45/label=mako-07/testReport/? :) ... 483 tests counted on top
[22:21] <asac> doanac`: while i would expect we have 600 or so
[22:21] <asac> doanac`: and the "All Tests" table at bottom shows at most 100
[22:22] <asac> doanac`: can you shed some light?
[22:22] <asac> guess i am not good at reading the All Tests table :)
[22:22] <asac> but then the 600 vs. 480 question still is interesting
[22:23] <thomi> asac: veebers elopio: that's a list of the test case classes, not the tests
[22:24] <thomi> # of rows * value in the 'total' column should give you number of tests
[22:24] <veebers> thomi: right, it has a count of tests in that class per row
[22:24] <thomi> yes, it sucks - I have no godly idea why jenkins decides to display test results like that
[22:24] <veebers> yeah, what thomi said :_)
[22:25] <veebers> thomi: do you know what the "(root)" is in that table of tests?
[22:25] <thomi> veebers: no
[22:26] <thomi> veebers: I suspect this is an artifact of collecting more than one junitxml file together
[22:26] <veebers> ah I see good point.
[22:26] <thomi> the sooner we kill junitxml and start using subunit the better :-/
[22:26] <veebers> adding the Total column still gives 483 tests, not the 600 shown on the dashboard
[23:52] <asac> thomi: ok that makes some sense. thx