[02:51] <Dragon64> good evening!
[16:46] <balloons> ping DanChapman
[16:57] <flocculant> put it in a cron script balloons :p
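(For context, flocculant's "cron script" quip could look something like the fragment below. This is a hypothetical crontab entry; the `notify-irc` script and channel name are made up for illustration, not anything that exists in the project.)

```
# Hypothetical crontab(5) entry: send the ping once an hour, on the hour.
# notify-irc is an imagined helper that posts a message to an IRC channel.
0 * * * * /usr/local/bin/notify-irc "#ubuntu-quality" "ping DanChapman"
```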
[16:57] <balloons> lol
[16:57] <balloons> howdy flocculant
[16:57] <flocculant> hi :)
[16:58] <flocculant> balloons: do you know what's going on with the jenkins stuff? or is that the reason for the pings ;)
[17:02] <balloons> flocculant, indeed it's the reason
[17:02] <balloons> I have some updates, but I've not had a chance to play or write them down yet
[17:03] <flocculant> ok
[17:04] <flocculant> not seeing this being done in time for the last beta
[17:11] <balloons> if the tests are fixed, it's totally possible
[17:14] <flocculant> it'd be nice obviously
[17:30] <balloons> flocculant, do you happen to have a ubuntu touch device?
[18:31] <flocculant> balloons: have a nexus 7 I could install it to - but not immediately, would need to backup etc first before installing it
[21:17] <brendand> balloons, is there a particular test set you wanted me to use?
[21:18] <balloons> brendand, mm.. I don't care per se. I've been using 522, which is only 3 tests
[21:18] <balloons> probably useful to do the same for demo purposes I guess
[21:19] <brendand> balloons, although you might want to get a feel for what some different tests are like to run on the device
[21:23] <balloons> brendand, the interesting bit is although on the device you could semi-automate, none of the tests will be eh.
[21:24] <balloons> So I'm not sure what you mean
[21:25] <brendand> balloons, well as jibel pointed out some tests are specified in such a way that the process of checking the steps in the app could interfere, so having a wider variety of cases would make it easier to see how bad that is
[21:25] <brendand> balloons, if we use a sanity testset that should be a happy medium i think
[21:26] <balloons> brendand, gotcha. The set I'm using now is really just 3 tests, so it's probably not too representative. That said, I don't see us shipping something with 500 tests. Do you?
[21:27] <brendand> balloons, no, i wasn't thinking about the quantity of tests more just the definitions of specific tests being a problem
[21:28] <brendand> balloons, a sanity test set is about 40 test cases with a good bit of variety in there so i think that should do
[21:28] <balloons> veebers, btw, we didn't talk about reporting yesterday. Is there a way to display some test stats of the uploads so folks could publicly see what's happening?
[21:29] <balloons> brendand, ack. That sounds good, and yes we should use what we expect, so folks can see what it would mean to do 40 tests on the device
[21:29] <brendand> balloons, btw iahmad was asking me if you were able to do a successful upload
[21:29] <balloons> veebers, even just displaying a running log to start of the last few uploads. Obviously making a nice display would be nice but :-)
[21:29] <balloons> brendand, yes I did the whole thing start to finish.
[21:30] <balloons> I created set 522, tested it, and results are in PT
[21:30] <veebers> balloons: yep it's possible, as we're storing the upload each time (the full thing)
[21:30] <brendand> balloons, great
[21:30] <balloons> I also started adding to the bug list :-)
[21:31] <flocculant> balloons: just an fyi - didn't wait - backed up and just finishing install of touch to it
[21:31] <veebers> balloons: did you want to email or document something outlining what would be useful as a report?
[21:33] <balloons> veebers, I can do that. It would be helpful to talk about what would be easy to do right now. I'm flexible
[21:33] <balloons> Essentially I would like to be able to point everyone at the results, if you will, of the testing. It should display everyone's results publicly. I don't see a need to associate or authenticate as someone specifically
[21:35] <balloons> as far as what to display, maybe just the pass / fail percentage by suite, and perhaps later showing pass/fail percentage for an individual test
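(The per-suite pass/fail percentage balloons describes could be computed with a small aggregation over the stored uploads. A minimal sketch, assuming each stored result is a record with hypothetical `suite` and `status` fields; none of these names come from the actual upload format.)

```python
from collections import defaultdict

def summarize(results):
    """Aggregate per-suite pass percentages from a list of result
    records. Field names ("suite", "status") are hypothetical."""
    counts = defaultdict(lambda: {"pass": 0, "fail": 0})
    for r in results:
        counts[r["suite"]][r["status"]] += 1
    summary = {}
    for suite, c in counts.items():
        total = c["pass"] + c["fail"]
        # Pass rate as a percentage, rounded for display
        summary[suite] = round(100.0 * c["pass"] / total, 1)
    return summary

# Example: three results for a hypothetical "sanity" suite
results = [
    {"suite": "sanity", "status": "pass"},
    {"suite": "sanity", "status": "pass"},
    {"suite": "sanity", "status": "fail"},
]
print(summarize(results))  # {'sanity': 66.7}
```

(Per-test pass/fail percentages, mentioned as a later step, would just mean grouping by a test identifier instead of the suite name.)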
[21:35] <veebers> balloons: ok, that makes sense. Should be able to do that easily enough. Would you like me to iterate over this (i.e. produce something simple and go from there)?
[21:35] <balloons> the primary thing atm is to allow people to 'see' their tests get aggregated
[21:35] <balloons> veebers, yea, let's iterate. If in doubt, choose simple :-)
[21:36] <veebers> balloons: sounds good. I should be able to take a look today. I'm off tomorrow at a conference
[23:06] <tsimonq2> ping balloons
[23:23] <balloons> tsimonq2, pong
[23:26] <tsimonq2> balloons: Hey, we have had conversations in the past about the Package QA tracker... has anything changed, or are we still with the weird tracker (IMO :P)?
[23:26] <balloons> I've changed nothing. You mean with the 'one build for the whole cycle' thing?
[23:26] <balloons> aka, the mailing list conversation?
[23:26] <tsimonq2> balloons: Yes
[23:28] <tsimonq2> balloons: I just wanted to follow up because we were still in the Alpha stages when I did a lot of Package QA for Lubuntu, and it is now in the Beta stages and it has not done anything
[23:28] <tsimonq2> It meaning (L)Ubuntu
[23:28] <tsimonq2> And the second time the Package tracker
[23:31] <balloons> I see some runs in there for lubuntu
[23:32] <tsimonq2> balloons: Yeah, at one point my nick in the tracker was sqawesome99 (I didn't change it from initial creation for a while)
[23:33] <tsimonq2> balloons: (or rather Launchpad) but I did the majority of the Package QA this round for Lubuntu XD
[23:35] <balloons> this will show you what's been found and linked:
[23:35] <balloons> http://packages.qa.ubuntu.com/qatracker/reports/defects
[23:36] <balloons> so when you say 'not done anything', what do you mean? That the bugs haven't been fixed or ?
[23:50] <tsimonq2> bbl