[00:26] bdmurray: 'bugtask-search
[00:26] bdmurray: 'bugtask-search|patches-view'
[00:26] mmm, possible with brackets, depends on the engine
[01:57] it might be nice to give community devs - those that have reached reviewer level, for instance - access to the OOPS summaries
[01:58] also
[01:58] wow, 1135 second page request. ouch.
[03:26] lifeless, it looks like your patch worked. The generic everyday "Threads left garbage" message was raising an error, which locked up the entire process stack in os.wait() calls.
[03:27] and that is why it was random - some thread somewhere wasn't shut down before the testrunner proceeded. Maybe a race condition.
[03:28] mars: yay for progress on that one
[03:30] mwhudson, yes, I'm happy we finally found the source. Now we just have to land the fixes for each of the links in the chain of cascading failures.
[03:31] the boring bit :-)
[03:31] hehe
[03:31] mars: do you know what's going on with ec2 thinking failing test runs are successes?
[03:31] that is a mystery to me.
[03:32] I couldn't reproduce it. But thanks to this fix I now know where to look in the zope code for errors.
[03:37] lifeless, That rpm.newrelic.com website crashes my browser :P
[03:38] oops :P
[03:39] mars: glad to have helped
[07:06] lifeless: Why is bug #516709 Soyuz? Isn't it Code? All of the changes seem to be to the branch, not upload rights.
[07:06] Bug #516709: revisit official package branch permissions
=== stub1 is now known as stub
[07:32] Hi there lp devs, do I need to run lp on my own computer before submitting a patch? I know it would be better to do that, but isn't lp too bulky to download?
[07:35] bilalakhtar, ideally you would; you can try submitting the merge proposal and seeing if you can get a developer to run the tests for you
[07:36] beuno: oh ok thanks
[07:37] bilalakhtar, what is this patch about?
[07:38] beuno: I haven't begun work yet, but I want to add the following feature in lp answers: One should be able to assign someone to a question or change its status using an AJAX overlay.
[07:38] beuno: such a feature already exists in malone
[07:39] ah
[07:39] so, just to give you a tip, you may need to build the API for that, because those are old parts of the code and probably don't have APIs to leverage javascript
[07:40] which means you will need to run LP, because it's a significant chunk of work :)
[07:40] ahha
[07:40] thanks for the info, beuno
[07:41] beuno: What do you mean by "API"?
[07:41] beuno: launchpadlib?
[07:41] a level down, internal API
[07:41] beuno: you mean the RESTful API that lp exposes?
[07:42] yes
[07:42] * bilalakhtar understands his task, and still he is determined to work on this feature
[07:43] bilalakhtar, that is awesome
[07:43] I look forward to it!
[07:44] beuno: Thanks, it will take a week, since I don't get to code very often
[07:45] beuno: Another question: Why are there 4 lp branches? On which one should I work?
[07:45] the latter Q is easy. devel. the former... lengthy to explain.
[07:47] No Answers stuff is exposed over the API yet. This is not going to be a simple task.
[07:47] spm: I think the answer is this: All branches merge into devel, then go into edge soon, where they will be deployed to edge.lp.net. The staging branch is the code behind staging.lp.net. The lp:launchpad branch is for running the production part of lp.
[07:47] I don't say this very often, but what spm said
[07:48] basically, the 4 branches allow devs to keep developing without DB changes blocking all updates, such that edge keeps getting updates till we do a release. staging is where db mods are trialed.
[07:48] Ignore stable (edge) and db-stable (staging).
[07:48] unless you're a losa :-)
[07:48] And probably read http://dev.launchpad.net/Trunk
[07:48] spm: Shh.
[07:48] beuno and wgrant: I will try to copy the code from malone :)
[07:48] heh
[07:49] bilalakhtar: It's not that easy.
[07:49] losa?
[07:49] (launchpad, landscape, ubuntu-one and other stuff) operational sys admin
[07:49] wgrant: ok, then I will try once; if I fail then I will search for a bug to patch
[07:49] bilalakhtar, that's great
[07:49] It's probably best to try some smaller things first.
[07:50] the l has become somewhat overloaded. I prefer the "l == legendary" explanation myself.
[07:50] good idea though
[07:50] wgrant: good tip, will search for some tiny bugs
[07:50] API + JS is not the best combination to start with.
[07:50] But Answers could certainly do with lots of AJAX.
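As a rough illustration of the "internal API" beuno is pointing at (as opposed to launchpadlib, which only consumes it), an Answers method would need webservice declarations along these lines before any AJAX could call it. The interface, method, parameter names and vocabulary below are hypothetical, and the lazr.restful decorators are recalled from memory rather than quoted from lp.answers, so treat this purely as a sketch:

    # Hypothetical sketch only; the real interface lives under lib/lp/answers/interfaces/.
    from lazr.restful.declarations import (
        export_write_operation, operation_parameters)
    from zope.interface import Interface
    from zope.schema import Choice, Text

    class IQuestionSketch(Interface):
        """Illustrative stand-in for the Answers question interface."""

        @operation_parameters(
            new_status=Choice(vocabulary='QuestionStatus'),  # vocabulary name assumed
            comment=Text(title=u'Status change comment'))
        @export_write_operation()
        def setStatus(new_status, comment):
            """Change the question's status, exported so JS or launchpadlib can call it."""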
[07:50] hey noodles775
[07:50] Hiya
[07:50] noodles775: hi there, buildd admin
[07:51] ehem
[07:51] erm, what's wrong...
[07:51] * noodles775 starts loading pages :)
[07:51] I can't believe that lp was proprietary once! Actually, I joined lp quite late
[07:52] https://bugs.edge.launchpad.net/launchpad-answers/+bug/58670 would probably be easy and useful
[07:52] Bug #58670: Highlight comments from the reporter
[07:52] Yeah, lp has certainly benefited lots from being open :) (IMO)
[07:52] or https://bugs.edge.launchpad.net/launchpad-answers/+bug/226690
[07:52] Bug #226690: Not obvious that expired questions can be reopened
[07:52] poolie: thanks
[07:52] I remember when it was closed, wgrant was cranky all the time instead of hyper-productive :)
[07:53] and cranky :)
[07:54] beuno: See, I wasn't just complaining for the sake of complaining.
[07:55] * beuno hugs wgrant
[07:55] * poolie hugs you both
[07:55] :)
[07:56] * bilalakhtar is amazed to find lp managed by bots :)
[07:56] i certainly feel better about contributing now it's open
[07:56] even though it was possible before, it just feels better now it's properly open, not just internally open
[07:56] and there's more infrastructure and documentation towards helping others
[07:57] God has said: The world will end at a time when non-living things will take control over the jobs of people.
[07:57] * bilalakhtar agrees with poolie, beuno and wgrant
=== almaisan-away is now known as al-maisan
[08:08] A question: How large is the lp devel branch?
[08:08] wgrant: just in case you're gone when I try later, is there anything non-obvious that I should watch out for when trying to run a SPRecipeBuild locally (that's not on the runningsoyuzlocally wiki)?
[08:12] wgrant: they seemed to be about upload rights to me
[08:12] wgrant: besides which, code, soyuz, it's all the same
[08:16] noodles775: Barring the bug that you're trying to fix, it's pretty simple. Just 'make run_codehosting' (also starts the appserver), push a branch up, and create a recipe through the UI, request a build, start buildd-manager.
[08:17] Also, recipe builds will crash the buildd due to another bug.
[08:17] Thanks.
[08:17] good morning
[08:17] Bug #587109
[08:17] Bug #587109: Needs to cope with not receiving package_name from the master
[08:21] noodles775: Is buildd-manager crashing?
[08:21] Maybe it's having recipe fun.
[08:21] The queue is large, and logtails appear to not be updating, at any rate.
[08:22] bilalakhtar, about 213MB
=== al-maisan is now known as almaisan-away
[08:22] poolie: oops, will take more than 2 hours on my connection
=== almaisan-away is now known as al-maisan
[08:22] poolie, bilalakhtar: Plus 210MB for one set of deps, and another 100MBish for the other set.
[08:23] Plus 100-200MB for the apt dependencies.
[08:23] wgrant: Should I use rocketfuel?
[08:23] bilalakhtar: You should use rocketfuel-setup, yes.
[08:24] Doing it manually is possible, but difficult.
[08:24] wgrant: ok. so which branch should I work on? I am confused between db-devel and devel, even though I read that Trunk page
[08:25] bilalakhtar: If you need to make database changes, work on db-devel. Otherwise, use devel.
[08:25] wgrant: the last synced log looks fine (up to 8:08 UTC).
[08:26] noodles775: Argh. Maybe it's just being slow at processing uploads.
[08:26] erm, that's obviously not utc.
[08:26] Hey, you never know...
[08:29] Mm, yeah, it's mostly filled up now.
[08:29] Although something is still wrong.
[08:34] noodles775: I've not had a chance to chase; but would the irregularity around retry-depwait be a possible issue here?
[08:35] spm: possibly - einsteinium's last entry in the log is certainly 2010-06-04 07:42:40+0100 [-] ***** einsteinium is MANUALDEPWAIT *****
[08:36] ew. the retry-depwait log is *full* of entries like that. 2010-06-04 07:09:33 INFO Found 1076 builds in MANUALDEPWAIT state.
[08:36] but the others (shipova, rosehip) simply have: 2010-06-04 07:40:34+0100 [QueryWithTimeoutProtocol,client] marked as done. [0]
[08:39] yep, the last mention of some of the other idle builders is also MANUALDEPWAIT.
[08:39] s/idle/idle 386/
[08:39] noodles775: Um, is that nearly an hour ago?
[08:40] Yes.
[08:40] They're the latest?
[08:40] Yep... latest mention in the log.
[08:41] But it's still showing regular scans?
[08:41] Yep. And starting new builds on other buildds.
[08:41] wgrant: sorry...
[08:42] wgrant: It's dispatching new builds, but you're right, the last mention of starting a scanning cycle is at: 2010-06-04 07:40:39+0100 [-] Starting scanning cycle.
[08:43] noodles775: It's dispatching?
[08:43] 2010-06-04 08:31:56+0100 [-] startBuild(http://dubnium.ppa:8221/, shotwell, 0.5.2-1~karmic1, Release)
[08:43] As in, has been for the last hour?
[08:44] That's pretty special.
[08:44] Since the startBuild calls are asynchronous.
[08:44] Unless the DB calls are slow?
[08:44] Which they might well be....
[08:45] But not that slow.
[08:45] Surely.
[08:46] And no hints of any recipe builds firing accidentally?
[08:47] I'll check in a tick.... but checking the frequency of "Starting scanning"... aside from one anomaly, they're all around 2hrs apart :/
[08:48] For whole long?
[08:48] Er.
[08:48] *how* long.
[08:49] It was fine last night (a few times per minute)...
[08:50] Ah, good. I was hoping the logging wasn't inconsistent.
[08:50] Seems to have gotten progressively worse since 2010-06-03 19:15:53+0100 [-] Starting scanning cycle.
[08:51] Since that would be... not unheard of in buildd-manager.
[08:51] Hmm.
[08:51] And there are startBuild calls spread throughout the intervals?
[08:52] This isn't a failure mode I've seen before.
[08:52] yeah, me either... it's very strange.
=== al-maisan is now known as almaisan-away
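For reference, the interval check noodles775 is doing by eye here (grepping for "Starting scanning cycle" and comparing timestamps) can be done with a few lines of Python. This is a rough helper, not Launchpad code; the log path and timestamp layout are assumptions based on the lines quoted above:

    # List the gaps between "Starting scanning cycle" entries in a
    # buildd-manager log, e.g. lines like
    # "2010-06-04 07:40:39+0100 [-] Starting scanning cycle."
    from datetime import datetime

    def scan_cycle_gaps(log_path):
        starts = []
        for line in open(log_path):
            if 'Starting scanning cycle' in line:
                # First two whitespace-separated fields hold the timestamp;
                # drop the "+0100" offset since only relative gaps matter.
                stamp = ' '.join(line.split()[:2])[:19]
                starts.append(datetime.strptime(stamp, '%Y-%m-%d %H:%M:%S'))
        # Consecutive differences; anything much beyond a few seconds is suspect.
        return [later - earlier for earlier, later in zip(starts, starts[1:])]

    # Usage (the path is an assumption):
    #   for gap in scan_cycle_gaps('buildd-manager.log'):
    #       print gap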
[08:53] would it help if I mention that I have the utmost confidence in you guys to figure it out? morale booster? ;-)
[08:54] spm: Ah, you EOD in 5 minutes, that's why you're so happy :P
[08:55] my secret is out :-)
=== almaisan-away is now known as al-maisan
[09:03] wgrant: the i386 buildds have filled up a bit (but all without logs).
[09:09] noodles775: Are the startBuilds delayed (implicating the synchronous bit), or are they all within a couple of seconds (implicating the async bit, and ewwww Twisted)?
[09:09] wgrant: there are definite breaks between some startBuild calls... I'm including that on the bug I'm creating so we can collect info there.
[09:11] OK, great.
[09:11] This is a really, really odd one.
[09:16] wgrant, bigjools: I've created bug 589577 which has a small snippet of the log before I lost my connection to the log server (and can't reestablish)
[09:16] Bug #589577: buildd is not scanning regularly
[09:16] bigjools: it's also got a link to the irc conversation so far.
[09:16] Ugh.
[09:17] 13 minutes.
[09:19] my immediate thought is that the network has a problem
[09:20] * noodles775 hopes bigjools is right :)
[09:21] I am checking with IS
[10:02] noodles775: So that build page that was timing out. Does this query provide all the information we need for the bits that are timing out? http://paste.ubuntu.com/444495/
[10:13] stub: checking
[10:14] noodles775: Things also run more than twice as fast on a slave node (that particular query takes just over 5 seconds on the master, but 1.4 seconds on a slave). I don't think anyone will care if the stats are maybe a few seconds out of date.
[10:18] stub: did you try with the SUM too?
[10:19] That's what is in the pastebin, isn't it?
[10:19] stub: ah, i didn't scroll past the first..
[10:19] stub: er, I was looking at the wrong paste... got it.
[10:21] stub: great, so I can update the storm code to (1) run on the slave and (2) use the count/sum in the findspec rather than querying once for each. Certainly looks much better.
[10:22] Yup.
[10:22] Or should I just use the query verbatim (so we know exactly what's being executed)?
[10:22] (and thanks!)
[10:22] Using Storm to generate the query should give you pretty much the same thing
[10:23] You can check by turning on the storm SQL tracing. Or getting a user requested oops report.
[10:24] I've been looking into indexes - BuildFarmJob.status and Archive.require_virtualized help a little with the existing query, but not much and not at all with the count/sum query
[12:02] stub: Why's the master so slow? Load?
[12:04] noodles775: buildd-manager still looks a bit unhappy... any progress?
[12:05] wgrant: bigjools is still investigating it (we tried disabling retryDepwait in case it was table locks), but not yet that I'm aware of (I've switched back to the builders index now that stub's provided one query to rule them all :) ).
[12:05] Aha.
[12:06] wgrant: the findBuildCandidate query is taking 10 minutes instead of 10 milliseconds
[12:06] we have a missing index
[12:07] Which one? (I thought stub added BFJ.status?)
[12:07] bigjools: After seeing the location of the break in the logs, I had a suspicion it might be DBish.
[12:07] noodles775: don't know, stub is looking at the query for me
[12:07] ah, great.
[12:08] oh, that's *the* query... ew.
[12:08] Yes. *That* query.
[12:08] I wonder if there is actually a bigger one in LP.
[12:08] yes, the BUDQ
[12:08] Maybe the one to expire PPA files.
[12:08] add "F"s to taste in that acronym
[12:09] Hm.
[12:09] I wonder if this is related to the getBuildRecords timeouts that started with 10.05.
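As a sketch of the change noodles775 describes at 10:21 (one aggregate query on a slave store instead of one query per figure), the pattern looks roughly like this. The helper and its usage are illustrative; the model class and column names in the usage comment are assumptions, and only the count/sum-in-the-findspec idea from the discussion is the point:

    # Sketch only: fetch the totals for the builders page in one round trip,
    # against a slave store since slightly stale figures are acceptable here.
    from storm.expr import Count, Sum

    def queue_totals(slave_store, id_column, duration_column, *conditions):
        # `conditions` stands for whatever Storm clauses the view already
        # builds (processor family, virtualization, job status, and so on).
        size, duration = slave_store.find(
            (Count(id_column), Sum(duration_column)), *conditions).one()
        return size or 0, duration

    # Hypothetical usage against the real models, e.g.:
    #   queue_totals(slave_store, BuildQueue.id, BuildQueue.estimated_duration, ...)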
=== al-maisan is now known as almaisan-away
[12:24] bigjools: Looking at that query, I'm tempted to say scrap it and start again.
[12:25] stub: it's necessarily complicated
[12:26] because it's built up from different parts of the code
[12:26] Maybe scrap all the EXISTS, refactoring it to precalculate them into temporary tables.
[12:26] stub: see lib/lp/buildmaster/model/builder.py: _findBuildCandidate
[12:27] So that sounds like what needs to be refactored. If the code is generating something that complex and unoptimizable because it has to, there is a problem.
[12:28] stub: Oh, you've recovered from the horror-induced coma already?
[12:29] stub: I think that's a good approach, but how can we fix this critical problem right now?
[12:29] can you see a missing index?
[12:29] it was working fine until noodles' model change
[12:31] Strangely enough, I just ran that query = 440ms
[12:32] So the horrible bits only got executed 115 times because the raw queue isn't that big.
[12:32] yeah, some of them run fast, some are slow
[12:32] I think it depends on the architecture
[12:33] So I need a slow one
[12:34] we could run the b-m with storm tracing on
[12:34] The planner will choose different plans depending on table statistics - eg. using a sequential scan instead of an index lookup if it believes a large percentage of the table needs to be retrieved anyway.
[12:36] So all those exists get executed for each and every row not filtered by the preceding criteria. That means between 0 and 54k times I think.
[12:37] I can try and choose some bad preceding criteria.
[12:38] stub, bigjools: if it's any help, you can see how little changed in that query with bzr diff -c10937 lib/lp/buildmaster/model/builder.py (shown here: http://pastebin.ubuntu.com/444570/)
[12:38] I suspect the massive buildqueue is not helping
[12:38] I'm gonna blow away any disabled archive buildqueues
[12:41] Do you have the algorithm for finding the next build candidate in English?
[12:43] I can try
[12:49] stub: http://pastebin.ubuntu.com/444576/
[13:21] So if we have 35k items in the queue (such as we have now for processor=1 and virtualized=true), we order them by lastscore and check them one at a time until all our criteria match. That might be a lot of time.
[13:22] If an item doesn't match criteria, why do we keep its lastscore high? If we bumped it to the end of the queue (or just increased it by some factor), the queue items with poor scores would bubble to the end.
[13:30] stub: scores never change unless changed manually
[13:30] bigjools: So for the slow cases I've found, it is the 80% utilization check that is the killer
[13:30] :(
[13:30] crap
[13:30] Not really
[13:31] are you going to tell me there's a quick fix? :)
[13:31] If we have 10k items in the queue, all in the same archive, we currently end up issuing that query 10k times (failing each time) to get past them
[13:32] So we move that out of the SQL. Instead, we do that check when filtering the first real item from the potential candidates, and cache it for subsequent checks in the loop
[13:33] Or alternatively, we calculate the list of banned archives once first and filter that way.
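Spelled out as a sketch, stub's second alternative looks something like the following. The names are made up and the real loop is _findBuildCandidate in lib/lp/buildmaster/model/builder.py; the point is computing the over-capacity archives once per scan instead of re-running the EXISTS subquery for every candidate row:

    # Work out which archives already hold >= 80% of this architecture's
    # builders, then skip their jobs while walking the scored candidates.

    def archives_at_capacity(building_counts, num_arch_builders):
        """building_counts: dict of archive id -> builds currently building."""
        threshold = 0.80 * num_arch_builders
        return set(archive for archive, count in building_counts.items()
                   if count >= threshold)

    def first_dispatchable(candidates, banned_archives):
        # candidates are assumed to be ordered by lastscore already;
        # job.archive_id is a stand-in for however the job exposes its archive.
        for job in candidates:
            if job.archive_id not in banned_archives:
                return job
        return None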
[13:36] Does the theory sound sane?
[13:37] I'm not sure
[13:38] the utilisation is very dynamic
[13:38] the point is that the query is doing what we'd have to do in Python anyway, so caching seems the only option
[13:39] I'm going to blitz 54k queue items which should speed this up a bit
[13:39] Do we know that the 80% query is actually doing much useful?
[13:39] yes, it is
[13:39] and why is destroying those items a good idea? Is it not possible to filter out the suspended ones first?
[13:39] it stops the daily builds from monopolising the farm
[13:40] Mmm not really. There are several daily PPAs.
[13:40] So it stops a single PPA from monopolising it, and just makes several do it :P
[13:40] well it depends on when they start building
[13:40] yes it's still possible given enough PPAs
[13:40] but at least they still get a look in
[13:41] hmmm actually it won't help by blitzing them
[13:43] stub: ok for now I suggest we cowboy out the 80% check until we find a better solution
[13:43] It's a quick way of confirming the theory. I don't know if the slow query I manufactured is the same as the slow queries we are seeing on production.
[13:44] Actually, I can confirm since I can see the important bit. Goes slow when virtualized=true and processor=1
[13:48] stub: the first part of the query filters out jobs that are not waiting
[13:48] so it should not be checking that many rows in that 80% check
[13:50] If I comment out that chunk, the query stops taking minutes (I give up and cancel), and instead takes 614ms.
[13:51] ok I will run this stuff that blows away the suspended jobs
[13:51] and see if it makes any difference
[13:51] I've lost the analyze from before that pointed to it... I seem to recall about 56k checks but I'm not sure where they came from since I would have thought only 35k would be checked
[13:51] 56k is the number of buildqueue rows
[13:52] like I said, it's not supposed to be checking all those!
[13:52] 54k of them are suspended
[13:52] So PG decided to do the check before filtering because it thought it would be faster :-(
[13:52] :(
[13:52] this was fine before the rollout, I can't work out what's broken it
[13:52] s/rollout/re-roll/
[14:02] Still slow if I force the filtering properly - only a max of 1.8k checks
[14:02] gah
[14:03] ok it's gotta go
[14:04] Why is the archive=2 check inside the exists?
[14:04] it only applies to PPAs
[14:05] public PPAs
[14:06] flacoste: just the man!
[14:07] flacoste: we've got problems with the buildd-manager being very slow, I need to make a cowboy, can you approve this please: http://pastebin.ubuntu.com/444605/
[14:07] it removes the slow query part
[14:07] Yes, but the Archive table is from the outer scope
[14:08] bigjools: ???
[14:08] bigjools: what does the slow part do?
[14:08] flacoste: something has changed to make the dispatcher query very slow, we don't know what's caused it
[14:09] bigjools: iow, what functionality/conditions are we disabling?
[14:09] flacoste: limits builder usage to 80% of an architecture for each archive
[14:09] bigjools: why do we do that? or what are the consequences of not doing that?
[14:09] flacoste: the consequences are that a single PPA can hog all the builders of a single architecture
[14:10] basically the daily build ppas
[14:10] but that's currently less bad than a 2 hour scan cycle
[14:10] bigjools: i agree, should i worry about stub's comment about the Archive table being from the outer scope?
[14:11] flacoste: that's part of what we're removing from the query
[14:11] Don't mind me - I'm just trying to decode this query
[14:11] I'd like to restore the build farm first, then look at this problem with less pressure
[14:11] +1
[14:12] I can restart retry-depwait as well and see if those indexes worked
[14:14] While you're considering build-related queries, getBuildRecords timeouts are causing cron to spam me far too frequently since 10.05. It might be related, I guess, so I thought I might point it out.
[14:14] wgrant: api?
[14:14] bigjools: That's the one.
[14:14] ok
[14:14] did you file a bug?
[14:15] No. I was going to wait to see if it persisted -- it has. I'll file one tomorrow.
[14:15] bigjools: did you start an incident report?
[14:15] bigjools: r=me with an incident report :-)
[14:16] flacoste: I've not had time to fart, let alone write an incident report
[14:16] but trust me when I say I've been thinking about it :)
[14:16] good
[14:21] I just can't decode how that EXISTS is supposed to work at all (the one we are removing). It is trying to filter out jobs if the archive is utilizing 80% capacity. It counts the number of jobs currently active for the archive, but does not count the total capacity so how does it make that calculation?
[14:24] hmmmm
[14:24] good point
[14:25] It divides by %s, which is num_arch_builders, doesn't it?
[14:28] ah yes - can't tell from the raw log :)
[14:28] maxb ping
[14:28] pong
[14:29] maxb: I think I recall you used suggested a fix for a gpg error we were/are seeing when we import a key
[14:29] s/used/once/
[14:29] Um. Can you show the exact error you are talking about, to try to jog my memory?
[14:32] maxb: I updated bug 568456
[14:32] Bug #568456: GpgmeError raised importing public gpg key
[14:33] maxb I recall someone suggested using str() in a cowboy one or two releases ago to fix a gpg error.
[14:34] I definitely recall discussing str vs. unicode issues. It's not, however, obvious to me that this is the same or a related issue
[14:34] I believe we got a more informative message than 'General error' at that time
[14:37] yes. I think the real error is masked. This would be easier to fix if we could reproduce it
=== sinzui changed the topic of #launchpad-dev to: Launchpad Development Channel | Week 4 of 10.05 | PQM is open | https://dev.launchpad.net/ | Get the code: https://dev.launchpad.net/Getting | On-call review in irc://irc.freenode.net/#launchpad-reviews | Use http://paste.ubuntu.com/ for pastes
[14:44] sinzui: you might have been thinking of this: https://code.edge.launchpad.net/~michael.nelson/launchpad/ppa-generate-key-failure/+merge/24871
[14:46] noodles775, yes!
[14:46] noodles775, this may help
[14:53] stub, noodles775: buildd-manager healthy again with that query part ripped out
[14:53] bigjools: How many builders for processorfamily 1 are there?
[14:56] stub: https://edge.launchpad.net/builders
[14:56] 17
[14:57] well, 14 for PPAs
[15:00] I need food, BBIAB
[15:03] bigjools: http://paste.ubuntu.com/444626/ is the query modified to use NOT IN to filter out archives that are over capacity, and runs reasonably fast. I'm not sure of the rules though - can only PPA archives go over capacity?
[15:27] stub: yes, that rule only applies to PPAs
[15:30] bigjools: http://paste.ubuntu.com/444637/ then?
[15:32] stub: one other option is to factor out that query so we have a python list of archives that is evaluated once, and then we plumb that result into the bigger query
[15:36] the list is never going to be very big
[15:57] bigjools: The bit in the 'NOT IN' should only be evaluated once inside that query. If you are calling that function multiple times though in a transaction, then factoring it out would be better
[15:57] stub: no, it gets called once for each polled builder
[15:58] stub: anyway, that's awesome, thanks
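For readers without access to the pastes, the shape of stub's NOT IN variant is roughly as follows. The subselect layout is schematic (the real joins go through the build farm job tables) and the interpolation style is only illustrative, so this is a sketch, not the actual paste:

    # Schematic only: exclude archives that already occupy >= 80% of this
    # architecture's builders, computed once as a subselect rather than as
    # an EXISTS evaluated per candidate row.
    num_arch_builders = 14  # PPA builders for this processor family (see above)

    over_capacity_filter = """
        Archive.id NOT IN (
            SELECT archive FROM currently_building  -- stand-in for the real joins
            GROUP BY archive
            HAVING COUNT(*) >= 0.80 * %d
        )
    """ % num_arch_builders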
[16:31] "bin/test -m bugs" starts running *all* tests. What would be the correct way of running only tests related to bugs?
[16:32] "bin/test -t bugs" does the same
[16:34] -v?
[16:34] I don't remember exactly
[16:39] -m bugs should only run the tests in the bugs module
[16:42] -m bugs spends an eternity on setting up layers e.g. "Set up canonical.testing.layers.FunctionalLayer". is that normal?
[16:48] unfortunately yes
[16:50] ouch :-) okay
[18:18] Ursinha, do you know if anyone from translations will be available today?
[18:41] adiroiban, ping
[19:04] Is it not possible to subscribe to wiki pages on dev.launchpad.net? I tried subscribe user and got a 'you are not allowed to perform this action' message.
[19:11] bin/test fails with the error IOError: [Errno 2] No such file or directory: '/var/tmp/mailman/logs/smtpd'. i'm using karmic. any ideas what's causing it?
[19:52] krkhan: how are you using bin/test? what is the full command?
[20:40] bdmurray: So, the problem is that whoever wrote the moin skin for dev.lp.net did a somewhat poor job, and removed the 'subscribe' action link
[20:41] the 'subscribe user' link you see is, I think, for subscribing other people
[20:42] bdmurray: Workarounds are to either manually append ?action=subscribe to the page url, or to go to https://dev.launchpad.net/UserPreferences and enter page names in the relevant form field
[20:43] sinzui, if I have required=True in my interface field, and use use_template() to copy the field to a form schema, shouldn't the validation check it if it's blank?
[20:44] gary_poster: thanks for digging into the oops view thing
[20:44] gary_poster: I do enjoy finding turtles-all-the-way-down bugs :)
[20:45] lifeless: sure. lol, yeah, that's what this was. :-)
[21:02] rockstar, I think the answer is true. I have not used use_template, but I have used copy_field, and it does copy the required=* behaviour
[21:03] * sinzui had to override the complete copy behaviour in fact.
[21:03] sinzui, well, it doesn't seem to be grabbing that behavior.
[21:03] do you know if copy_template is also using copy_field to build a schema?
[21:04] sinzui, no, I don't. Lemme try something.
[21:05] maxb: thanks for the work around!
[21:07] sinzui, so, it looks like the required=True part is checked AFTER validate, so you have to check the data to see if the key exists in the validate method...
[21:07] sinzui, that seems...backward.
[21:08] rockstar, that is not right, field validation does happen before form validation in LaunchpadFormView.
[21:08] sinzui, This oops says otherwise: https://lp-oops.canonical.com/oops.py/?oopsid=1615EA2203
[21:13] rockstar, LaunchpadFormView._validate() validates the widget input before the view's validate() method. That is why the view's method can check for field errors at the start
[21:14] sinzui, hm, so I'm not sure why this bug is occurring then. If I add the "if data.get('name', None):" it works fine, gives me the validation error, etc.
[21:15] sinzui, does it continue on, gathering all field errors from _validate and validate before it gives you all errors?
[21:16] rockstar all field errors are created as the widgets are iterated. Then the view's validate() method gets the data to do additional rule checking, invariant kinds of checking
[21:17] rockstar, does this field also have a default? because that can create a value
[21:17] sinzui, no, no default.
[21:18] rockstar, is there another oops id, that url will not load
[21:19] sinzui, I don't think so: OOPS-1615EA2203
[21:19] That's the one from barry's bug.
[21:21] rockstar, looking at that oops, is the error that name has a space in it?
[21:23] sinzui, the way I reproduced it, it was entirely empty.
[21:23] ISourcePackageRecipe?
[21:23] sinzui, yes.
[21:24] that is a bad name
[21:24] bdmurray: i was using bin/test -t lp.bugs. but i'm getting a lot of other failures as well. i guess the devel branches aren't supposed to pass each and every test, are they?
[21:24] TextLine(title=_("Name"), required=True, constraint=name_validator, description=_("The name of this recipe."))
[21:25] rockstar, what view is this? I want to read the @action
[21:26] rockstar, is this it: SourcePackageRecipeAddView(RecipeTextValidatorMixin...)
[21:28] rockstar, I have seen this
[21:30] rockstar, When the vocabulary field is invalid, it is not in the data. It may also be true for NameFields. I think you want to check for field errors at the start of validate()
[21:32] rockstar, several views have a guard at the start of validate()
[21:32] if len(self.errors) > 0:
[21:32] return
[21:33] sinzui, ah, okay, that makes more sense.
[21:33] I don't see any looking at the errors, they just get the len() and leave if it is not zero
[21:37] sinzui, okay. I would think that LaunchpadFormView would be a better place for that check. What do you think?
[21:38] No, because validate() is allowed to overwrite those errors. The ones we get from zope field are so arcane that gary has to look them up
[21:38] heh
[21:38] hahahahahaha
[21:39] The bug supervisor message is an example of one that is easy to put into a sentence, but the field validation about vocabularies makes no sense to a user
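Put together, the pattern sinzui is describing looks something like this in a LaunchpadFormView subclass. The view name and the space-in-name rule are illustrative rather than the real SourcePackageRecipeAddView code, and the import path is from memory for roughly that era of the tree:

    # Illustrative only.
    from canonical.launchpad.webapp.launchpadform import LaunchpadFormView  # path from memory

    class RecipeAddViewSketch(LaunchpadFormView):

        def validate(self, data):
            # If widget-level validation already recorded errors (for example a
            # blank required 'name'), the value is simply absent from `data`,
            # so bail out rather than KeyError-ing or re-reporting it.
            if len(self.errors) > 0:
                return
            # Additional cross-field or invariant rules go here.
            if ' ' in data['name']:
                self.setFieldError('name', 'Name may not contain spaces.')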
[22:07] gmb, ping
[22:08] gmb, I'd love to use your django-xmlrpc library
[22:36] EdwinGrubbs: hi
[22:51] adiroiban, I was told you could answer some questions I have about LP Translations. I'm working on caching the sum of the POTemplate.messagecount in the DistributionSourcePackage table in a new column called po_message_count. I'm wondering if the best place in the code to update the cache would be POTemplate.importFromQueue() and POFile.updateStatistics() since they both update POTemplate.messagecount.
[22:53] hm... well, in LP translations can be updated by both an import, or by direct submission via web interface
[22:53] ah
[22:53] sorry
[22:54] for the POTemplate
[22:54] so. for messagecount of a POTemplate, importFromQueue() should do it...
[22:55] I am not sure why POFile.updateStatistics() is updating the POTemplate.messagecount ...
[22:55] looking
[23:03] EdwinGrubbs: I don't know why the code from POFile.updateStatistics() is touching the potemplate.messagecount. It was added to fix bug 371453, but I can not find any clue
[23:03] Bug #371453: Broken statistics
[23:05] adiroiban, do you think it's safe to add extra logic to those two methods?
[23:06] The code from POFile.updateStatistics() that is updating the POTemplate looks strange, so before adding anything I would talk with Danilo ...
[23:06] but Danilo is on leave
[23:07] POTemplate.importFromQueue should be the right place for updating the POTemplate.messagecount cached value
[23:09] EdwinGrubbs: also, POFile.updateStatistics() is called in POTemplate.importFromQueue() ... after
[23:09] # Update cached number of msgsets.
[23:09] self.messagecount = self.getPOTMsgSetsCount()
[23:09] hmmm, that doesn't seem good
[23:10] EdwinGrubbs: also
[23:10] the code from POFile
[23:10] is using potemplate.getPOTMsgSets().count()
[23:10] instead of potemplate.getPOTMsgSetsCount()
[23:12] but this is a minor fact
[23:12] so, to answer your initial question
[23:13] I would say that POTemplate messagecount cache data should be updated only in importFromQueue
[23:14] thanks
[23:16] EdwinGrubbs: I could not find the MP for the branch that added those lines in POFile.updateStatistics()
[23:16] if you can find it, maybe you could find some tips regarding those lines
[23:16] ok
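Following adiroiban's advice, the cache update EdwinGrubbs is planning would hang off importFromQueue() only. A minimal sketch of that idea is below; the new po_message_count column is the one being added in the discussion, while the helper itself and the way it is wired in are assumptions rather than landed code:

    # Sketch: recompute the denormalised count for one source package.
    # `templates` is whatever collection of current POTemplates the caller
    # already has for that DistributionSourcePackage.

    def update_po_message_count(distro_source_package, templates):
        distro_source_package.po_message_count = sum(
            template.messagecount for template in templates)

    # Intended call site (hypothetical): inside POTemplate.importFromQueue(),
    # after the existing
    #   self.messagecount = self.getPOTMsgSetsCount()
    # has refreshed the per-template figure.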