[00:55] our crazy review queue is up to 143 bugs
[00:55] and I was worried about running out of bugs for patch day
[00:57] We'll never run out: we might catch up, but we can trust other folks to always give us more.
[02:14] nigelbabu: I am interested in seeing what happens after the maverick repos open up. We will get a lot of uploads as well as a lot of submitted patches
[13:11] nhandler: yes, it might be *very* interesting
[13:11] I think the lucid release is likely more a factor than maverick opening, in terms of patch submission.
[13:13] yes, but maverick being open gives us the chance to integrate a lot of patches
[13:14] btw, I'm trying to get a review page similar to the sponsorship overview
[13:14] How do you mean "trying"?
[13:14] Do you just need hosting?
[13:15] Or are you still fiddling with code?
[13:15] starting to fiddle with code
[13:15] my people.ubuntu stuff should be okay for hosting
[13:16] OK. Hosting is fairly easy, and I know several ways to do that :)
[13:16] people.ubuntu.com can't run code: it can only host static stuff.
[13:16] or I can convince brian to put it up
[13:17] I thought the Python script gave out an HTML page
[13:17] It does, but you want to run the Python every couple of minutes.
[13:18] Personally, I think the sponsorship report has less functionality than custom LP lists.
[13:18] oh yeah. well, let me get there first :)
[13:18] well, it does keep out multiple copies of the same bug
[13:18] The main reason I support it is that there are multiple groups of developers who need different sponsors lists, and I agree that those submitting sponsor requests shouldn't have to know who to subscribe.
[13:18] I don't think we need that for reviewers.
[13:20] The reason I'm looking at an overview page is to avoid the complex queries and just have a page that shows what needs to be reviewed.
[13:20] How does it differ from the queries?
[13:20] It's simpler for newer folks
[13:20] It's trivial to hide static queries behind nice pretty links.
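"Hiding static queries behind nice pretty links" amounts to baking a Launchpad bug-search query into a fixed URL. A minimal sketch, assuming Launchpad's `+bugs` page accepts tag filters through a `field.tag` query parameter (the parameter name is taken from Launchpad's web search form; treat it as an assumption, not documented API):

```python
from urllib.parse import urlencode

def review_queue_url(tags, base="https://bugs.launchpad.net/ubuntu/+bugs"):
    """Build a static Launchpad search link for the given tags.

    Assumes the web UI's field.tag parameter; multiple tags joined
    with a space search for bugs carrying any of those tags.
    """
    query = urlencode({"field.tag": " ".join(tags)})
    return f"{base}?{query}"

# One such link could replace a complex saved query on a wiki page:
print(review_queue_url(["patch-forwarded-upstream"]))
```

Anyone who can follow a link can then pull up the realtime list, with no script or hosting involved.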
[13:20] How?
[13:21] I wanted a way to keep track of patch-forwarded-upstream where the upstream bug is closed
[13:21] and a couple of other use cases
[13:21] Ah, so the report would actually categorize stuff, etc.
[13:22] yes
[13:22] Whereas with LP all we can do is produce lists, but not an overview.
[13:22] hence the *starting* to fiddle with code
[13:22] OK. Please focus on making it a nice report/set of reports that tells us what is where, rather than on making it a landing page for reviewers to work from.
[13:23] * persia believes any non-realtime report is inherently broken as a worklist
[13:23] In that case, can you help me write down goals?
[13:23] It's more of a summary to look at
[13:24] * persia should learn to be less expressive in the hopes of one day establishing a manageable activity list
[13:24] hehe :)
[13:28] Initial thoughts would be a summary overview of how many bugs are in which (descriptive) state, based on analysis of tags (which is more than just raw tag counts, which we can get easily from LP).
[13:29] It would probably be interesting to produce a copy of this daily so we could later compare and analyse. Maybe have the script generate a date-marked (e.g. in the filename) machine-readable set of data, and have another script that makes that pretty.
[13:30] I believe most of the worklists *should* be able to be hidden behind URLs: I'm happy to toss those on the front page of qa.ubuntuwire.com instead of the current patch links, and they could also be put on the report.
[13:31] If there exists a workqueue that can't be generated with an LP URL (like "bugs submitted upstream where the upstream task is closed"), then maybe generate worklists there (although I think this needs to update frequently).
[13:32] So, something along the lines of a list of bugs with each status.
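The collect-daily, render-separately split described at 13:29 can be sketched in a few lines: one function writes a date-marked, machine-readable snapshot, and an independent step turns any stored snapshot into something readable. The state names here are illustrative placeholders, not the team's actual categories:

```python
import json
from datetime import date

def snapshot_filename(day=None):
    """Date-marked filename so daily cron runs never overwrite each other."""
    return f"review-stats-{(day or date.today()).isoformat()}.json"

def dump_snapshot(counts):
    """Machine-readable form, emitted by the collecting script."""
    return json.dumps(counts, indent=2, sort_keys=True)

def render_pretty(snapshot_json):
    """Separate presentation step: turn a stored snapshot into a text table."""
    counts = json.loads(snapshot_json)
    width = max(len(k) for k in counts)
    return "\n".join(f"{k:<{width}}  {v}" for k, v in sorted(counts.items()))

# Hypothetical per-state counts for one day's run:
data = {"forwarded-upstream": 12, "needs-review": 143}
print(render_pretty(dump_snapshot(data)))
```

Because the raw snapshots are kept, day-over-day comparison later is just loading two files and diffing the counts.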
[13:33] Note that in the case of patches-submitted-elsewhere-and-relevant-bugs-closed we need to differentiate between closures that imply patch acceptance and closures that imply patch rejection. If we find that the algorithm manages to differentiate these successfully with no false positives, we can probably drop the workqueue lists, and instead have the script just mark for acceptance or rejection.
[13:33] I don't think the lists of bugs with each status are useful *except* if we define a workqueue that can't be searched in LP.
[13:33] Or rather, can't be defined with a URL.
[13:34] (so it requires the API to develop it).
[13:34] That said, I think the right way to determine which workqueues to generate is to try to figure out a set of bugs that has a known correct action (e.g. patch-submitted-upstream -> patch-accepted-upstream).
[13:34] i.e. a point where we might have to work on it.
[13:35] And then use the workqueues as tools to improve the algorithm with an eye towards automation.
[13:35] +1
[13:35] Note that this is *very* different than how the sponsors report works, although the sponsors report code may be a useful example to base some of the analysis upon.
[13:36] it's just base code so that I don't have to do the initial groundwork
[13:38] http://pad.ubuntu-uk.org/7a7QjBIyuq
[13:38] I'm working on a rough cut of what needs to be done
[13:43] Makes sense.
[13:43] I'd probably try to focus on patch states, rather than which tags happen to be present.
[13:44] So "How many bugs are currently waiting for upstream comment?" is an interesting metric.
[13:44] ah, that way
[13:44] And for that, I'd want to count patch-submitted-upstream AND NOT patch-accepted-upstream or patch-rejected-upstream. The docs say that we're supposed to replace the tags, but I'm not confident everyone follows docs well.
[13:45] So, we don't use the tags at all in the reports.
[13:51] I don't think it's worthwhile to expose the actual tags.
[13:51] I agree
[13:51] That just leads to rough human estimates of interesting values by comparing the numbers.
[13:51] If we're doing computational analysis, we can provide more interesting interpretation
[13:56] Now I remember why I hated hacking on the LP API
[13:57] Poor documentation.
[14:16] nigelbabu: On (a)(ii): the analysis should be done for each bug, rather than doing arithmetic on the counts (you may already know this, but it wasn't clear to me from the etherpad)
[14:17] nigelbabu: I think it's also interesting to look at the total number of bugs that would be in the review queue if there wasn't a date restriction (so apply the same filters (excepting the date filter) as the script)
[14:18] persia: I did know about (a)(ii)
[14:18] and the total number is already ready :)
[14:18] I'm planning to take numbers using brian's script from the graphs
[14:19] Do we have total numbers with the filters (sponsors, kernel, etc.)?
[14:20] ah
[14:21] My rough estimate is that we'd get down to ~1500 with the filter, but that's a complete guess.
[14:22] for (a)(ii), you wanted patch-forwarded-upstream+patch-forwarded-debian+-patch-accepted-upstream+-patch-accepted-debian+-patch-rejected-upstream+-patch-rejected-debian
[14:22] well, that's from modifying the link's query
[14:24] Well, kinda.
[14:25] I just suggested doing real analysis.
[14:25] Only, I get no hits with that one :x
[14:25] If you do that check for each bug iteratively (rather than doing arithmetic later), you should get the number of bugs waiting for upstream feedback.
[14:27] Other interesting numbers would be a count of how many patches have been accepted upstream.
[14:28] Or a count of how many patches have been applied upstream that still have open Ubuntu bugs.
[14:28] (indicating how far we're behind on integration)
[14:28] I'm sure there are others, but once you have a few, I suspect you'll get requests for more.
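The per-bug iterative check persia describes at 14:25 is the opposite of doing arithmetic on query counts: examine each bug's tags individually and count the ones that match the full condition. A minimal sketch, with `bugs` standing in for whatever tag lists the LP API returns (the tag names follow the patch-forwarded/accepted/rejected convention used in the channel):

```python
def waiting_for_upstream(bugs):
    """Count bugs whose patch went upstream but has no verdict yet.

    `bugs` is an iterable of tag collections, one per bug; checking
    each bug individually avoids the arithmetic-on-counts trap, where
    subtracting result counts miscounts bugs carrying several tags.
    """
    verdicts = {"patch-accepted-upstream", "patch-rejected-upstream"}
    return sum(
        1
        for tags in bugs
        if "patch-forwarded-upstream" in tags and not verdicts & set(tags)
    )
```

A bug tagged both forwarded and accepted is correctly excluded here, whereas subtracting one query total from another would silently double-count or under-count such overlaps.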
[14:30] That's the plan so far :)
[14:30] for the places we have work to do, I'll just make a table or link to a query
[14:33] Please don't.
[14:34] Instead, for the candidates for automation, generate a table so we can review that the automation is safe to enable.
[14:34] For the worklists, try to construct URLs for realtime LP queries.
[14:34] * persia reads again.
[14:34] Isn't that what I just said?
[14:35] Right. Please *DO* :) Just differentiate in the way I describe :)
[14:35] yup, sure :)
[14:35] Only this can only happen in stages.
[14:35] I'll first work on getting the numbers out
[14:39] Sounds like a plan :) Let me know if you need hosting (either realtime, or cronjob).
[14:39] okay :)
[14:40] Is it normal that I end up doing more support stuff for the team than actual patch review?
[14:42] In all the teams where I have accepted a leadership or administrative role, I've found that I have greatly reduced time to spend doing the actual work of the team.
[14:42] Well, so that explains that.
[14:42] When I'm not spending time thinking about how to improve how the team works, I'm spending time supporting other folks on the team with their goals.
[14:44] So far similar to what I've been doing.
[14:49] Yep. You're the team leader for this team :)
[14:51] I *hate* the LP API!
[14:51] (and it seems to hate me equally)
[14:53] looks like I need help figuring out how the LP API deals with tag combinations
[14:55] #launchpad :)
[14:56] weekend
[14:57] Note to self: Trial and error /works/
[14:57] Well, and it's a bad time of day for the couple of folks that tend to be around on weekends.
[14:57] I figured it out. The API documentation is just bad.
[14:57] instead of whitespace I needed a +
[14:59] patch-forwarded-upstream shows 3 bugs with patches and patch-forwarded-upstream+patch-forwarded-debian shows 0
[14:59] wonder why
[14:59] paste code?
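The "+ instead of whitespace" discovery at 14:57 is standard URL form encoding rather than anything Launchpad-specific: a literal space inside a query-string value is transmitted as `+`, so a space-separated tag list appears joined by plus signs on the wire. A quick round-trip illustration with the standard library:

```python
from urllib.parse import parse_qs, urlencode

# A space-separated tag list is encoded with '+' in the query string...
encoded = urlencode(
    {"field.tag": "patch-forwarded-upstream patch-forwarded-debian"}
)
print(encoded)  # field.tag=patch-forwarded-upstream+patch-forwarded-debian

# ...and decodes back to the original whitespace-separated value.
decoded = parse_qs(encoded)["field.tag"][0]
print(decoded)  # patch-forwarded-upstream patch-forwarded-debian
```

So a URL copied from the LP web interface already carries the `+` form, while code that builds the value by hand must either encode the space or insert the `+` itself.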
[15:00] http://paste.ubuntu.com/421691/
[15:00] I've worked from the code that brian uses for generating graphs
[15:03] Actually, based on our workflow, I would expect that the number of bugs with that combination would be 0.
[15:03] We haven't gone back to the patch-forwarded-upstream bugs, decided upstream was far too slow, and done forwarding to Debian yet.
[15:04] ah, I have to try something with two tags to see if that works
[15:06] Even with any combination?
[15:12] ok, this is totally cuckoo! What works on the LP site doesn't work via the API
[15:13] That doesn't surprise me.
[15:13] An increasing amount of the web interface is being refactored to use the API, but it's not complete in any way.
[15:14] I'm not sure if tag combinations work via the API.
[15:14] And there are extra oddities: like the behaviour of things being different if you use/don't use the Javascript interface.
[15:14] I tried using whitespace to separate two tags and also +, and it still returns 0
[15:15] You could grab all the bugs that have some specific tag, and then select a subset (using Python's set interface) where they also have (or don't have) some other tag.
[15:15] grabbing bug by bug is very inefficient. It takes a loong time.
[15:16] That said, I can barely patch typo bugs in Python, so I can't really help you do that :)
[15:16] No, grab a set of bugs that have tag A.
[15:16] Stick them in a set.
[15:16] Then create a subset from that set, based on properties of the bugs (not requerying LP).
[15:17] Ah, that *might* work
[15:17] I need food first.
[15:17] Depends on whether you have local access to the bug properties. I don't know the data structures.
[15:17] But that algorithm should get you the right set.
[15:18] subsetting might involve querying LP again
[15:29] nigelbabu: did the cheese apport hook land for karmic as well?
[15:33] vish: no. we need an SRU for that.
[15:34] nigelbabu: hmm, a lot of the bugs are from karmic users.. would it get an SRU?
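persia's fetch-once, filter-locally algorithm from 15:16 can be sketched without any LP specifics: issue one bulk query for everything tagged A, cache each bug's tags, then build every subset with set operations on the cached data. Here `bugs` is a plain dict standing in for that one cached query result; the real data structures depend on what launchpadlib exposes locally, which is exactly the caveat raised at 15:17:

```python
def subset_by_tags(bugs, have, lacking=()):
    """Keep bugs carrying all `have` tags and none of the `lacking` ones.

    `bugs` maps bug id -> set of tags, standing in for one bulk LP
    query; every subset is then computed locally, with no further
    round trips to the server.
    """
    have, lacking = set(have), set(lacking)
    return {
        bug_id
        for bug_id, tags in bugs.items()
        if have <= tags and not lacking & tags
    }
```

Whether this avoids requerying in practice hinges on the tags being present in the objects returned by the first search; if each tag lookup triggers its own API call, the per-bug slowness complained about at 15:15 comes right back.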
[15:34] vish: can you ask someone from ~ubuntu-sru?
[15:34] righto..