=== ddaa [n=ddaa@nor75-18-82-241-238-155.fbx.proxad.net] has joined #launchpad-meeting
=== carlos [n=carlos@138.Red-81-39-35.dynamicIP.rima-tde.net] has joined #launchpad-meeting
[04:03] hi
[04:04] hi
[04:04] so, I have a call with mark in 1.5 hrs
[04:04] and I wanted to catch up with you on how 1.0 stuff is going, and any other issues that are around currently
[04:05] Ok, things haven't changed much since the last time we talked
[04:05] Danilo told me that firefox seems to be done
[04:05] but he suggested a new way to handle file imports
[04:05] to help with OO.org native imports
[04:06] one that gets a single file as input and produces more than one template
[04:06] does firefox done mean the code is done, with tests?
[04:06] the code is in RF?
[04:06] so we are able to have one file as input and n potemplates or pofiles as output
[04:06] the code is committed to danilo's branch?
[04:06] done means he's at the testing stage
[04:07] so, not in RF
[04:07] not yet
[04:07] but a working prototype, pre-review, on danilo's branch
[04:07] I'm not quite sure whether his suggestion will require changes for firefox; we will have a meeting about it today
[04:07] yeah
[04:07] I think so
[04:07] a single file as input...
[04:08] what single file would that be?
[04:08] I'm trying to understand what you're describing
[04:08] OpenOffice has all translations inside a single file per language
[04:08] for documentation, oo-writer, etc.
[04:08] ok
[04:09] so, I can see why we'd want to split that up
[04:09] is it obvious how to do that?
[04:09] but Rosetta will not represent that as a single template because the number of messages is huge
[04:09] yeah, based on the top directories where the sources are stored
[04:09] we have that information inside the file we get
[04:09] and it's more or less the same split we currently do with the .po bridge we use atm
[04:10] ok
[04:10] and, how does that help FF?
[04:10] Well, that's something I need to discuss with danilo, because the initial talk we had about this was more focused on OO.org and tarball uploads
[04:11] I don't know exactly how that would affect FF
[04:12] the thing is that most of the work for FF will be reused for OO.org, so I guess he already saw that as an advantage for OO.org while working on the new Rosetta infrastructure changes (if FF is not affected)
[04:13] ok
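A minimal sketch of the split carlos describes, assuming a hypothetical entry format: a single input file is grouped into n templates keyed on the top-level source directory. Function and field names here are illustrative, not Launchpad's real importer.

```python
from collections import defaultdict

def split_by_top_directory(entries):
    """Group translation entries from a single input file into one
    template per top-level source directory.

    `entries` is assumed to be an iterable of (source_path, msgid,
    msgstr) tuples already parsed from the OO.org file.
    """
    templates = defaultdict(list)
    for source_path, msgid, msgstr in entries:
        # The top directory where the sources live picks the template,
        # mirroring the split the existing .po bridge produces.
        top_dir = source_path.split('/', 1)[0]
        templates[top_dir].append((msgid, msgstr))
    return dict(templates)

# One file in, n templates out (paths and strings are made up):
entries = [
    ('helpcontent2/swriter/main.xhp', 'Bold', 'Negrita'),
    ('officecfg/registry/data.xcu', 'File', 'Archivo'),
]
for template_name, messages in split_by_top_directory(entries).items():
    print(template_name, len(messages))
```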
[04:13] about TranslationReview and the view restructuration that we talked about
[04:14] I will have the meeting with danilo today
[04:14] so, you'll have an opinion about this tomorrow
[04:14] as you suggested
[04:14] yeah, I think so
[04:15] after that, I will try to have a pre-implementation call tomorrow to start on this as soon as possible
[04:16] what were TranslationReview and the view restructuring?
[04:16] I don't remember it, based on those words alone
[04:16] TranslationReview is the UI that will allow our users to review translations much more easily; it's a 1.0 goal
[04:17] right
[04:17] but, what was the view restructuring for it?
[04:17] view restructuring is a request I got from kiko, and one that I see as a good thing to do to finish the TranslationReview implementation
[04:17] it improves the way the views that use the translation form work and reuse code
[04:18] ok
[04:18] because we have a couple of hacks to reuse POMsgSetView from POFileTranslateView
[04:19] which at this point require more ugly hacks; kiko suggested a third class, specific to the form, that will not depend directly on the POFile view or the POMsgSet view
[04:20] but on a list of POMsgSet objects, which will have more than one entry when we have a POFile as the context and just one item when we have a POMsgSet as the context
[04:20] anyway, this is not the final decision; it depends on what I agree on with danilo and the reviewer in the pre-implementation call
[04:22] ok. so, you're starting work on TranslationReview now?
[04:23] no
[04:23] that task is actually blocked on this
[04:23] before being blocked
[04:23] I had already done most of the UI changes
[04:24] you're saying that TranslationReview is blocked on the view code refactoring?
[04:24] and part of the view changes; that was the point where I got blocked and started looking at fixing the views
[04:24] yes
[04:24] I could finish the TranslationReview spec
[04:25] but that would require some hacks
[04:25] that I would prefer to avoid, and I think they could be avoided with the restructuration
[04:25] "restructuring"
[04:26] ok, thanks
[04:26] you know, my spanglish...
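A rough sketch of the refactoring kiko suggested, with hypothetical class and attribute names rather than Launchpad's real view classes: the form view depends only on a list of POMsgSet objects, so a POFile context passes many entries and a POMsgSet context passes exactly one.

```python
class TranslationFormView:
    """Renders the translation form from a plain list of POMsgSet
    objects, instead of depending on POFileView or POMsgSetView."""

    def __init__(self, pomsgsets):
        self.pomsgsets = pomsgsets

    def render(self):
        return '\n'.join(self._render_one(m) for m in self.pomsgsets)

    def _render_one(self, pomsgset):
        # Real code would build form widgets; this only shows the shape.
        return '<msgset id=%s>' % pomsgset.id

# With a POFile as the context, the form gets many entries...
#   form = TranslationFormView(pofile.getPOMsgSets(batch))
# ...and with a single POMsgSet as the context, just one:
#   form = TranslationFormView([pomsgset])
```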
[04:27] about Edgy translations, we already imported most of the .pot files, and I think we already fixed the translation domain changes since the dapper release
[04:28] this is not an official 1.0 goal, but it's an Edgy one
[04:28] I sent an email last week about translations for documentation that are not part of language packs
[04:28] it would be really good if Mark answered that email
[04:29] I sent it to launchpad@lists.canonical.com with the subject: "What to do with non language pack translations"
[04:31] and that will help us to kill what we still have pending in the import queue (atm, 4800 entries)
[04:32] we were importing those entries, but some Ubuntu developers complained to me about the fact that they are not being used at all, so translators' efforts are completely wasted
[04:34] what are the translation domain changes?
[04:35] some products use their version number as part of the translation domain, so with a new release it changes
[04:36] and we need to detect those changes and apply them so language packs have the right info
[04:36] for instance
[04:36] for dapper, evolution used evolution-2.16 as the translation domain; with Edgy, it changed to evolution-2.18
[04:37] if we don't apply that change, the application will be untranslated because it will not find the translations
[04:39] ok
[04:40] Do you want to talk about other things that I was working on? or just the ones related to Edgy and 1.0?
[04:41] first, can we just summarize the conversation so far?
[04:42] sure
[04:43] - Firefox is in the testing phase. Pending: whether the one-file-to-n-potemplates/pofiles mapping affects it
[04:43] - TranslationReview is blocked on view changes that are blocked on a pending meeting with danilo and a reviewer (should be unblocked tomorrow)
[04:44] - Edgy imports are mostly done, blocked on a final decision about whether we should import non-language-pack templates; if the answer is 'no', what should we do with what we already imported?
[04:45] I think that's all
[04:46] ok
[04:46] thanks, I like the way you produced a clear summary
[04:46] it helps me a lot
[04:47] you are welcome
[04:47] what are the other non-1.0 things?
[04:47] well, there were some bug fixes and user support requests that I don't think we need to talk about
[04:47] but, I detected a problem with Dapper language packs
[04:47] we had a 'hole' of translations
[04:48] that were not in the initial language packs and, due to a wrong timestamp, are not part of the language pack updates
[04:48] I think I detected all those files and agreed with Martin Pitt on a way to solve the situation
[04:49] I'm going to prepare a brief summary for the mailing list
[04:50] the problem was that we had the wrong timestamp for the initial language pack for dapper; it was around two days after the final language pack for Dapper was released
[04:50] and that prevented language pack updates from including the changes done on those dates
[04:51] that's mainly koffice translations
[04:54] ok
[04:54] so koffice translations (mainly) missing in dapper langpacks
[04:54] because of a timestamp problem
[04:54] meaning that there were translations made after the initial langpack was shipped with dapper
[04:54] that were missed out of the updates
[04:54] is that right?
[04:55] yeah
[04:55] because those translations came from upstream
[04:55] and no other ubuntu translator touched those pofiles
[04:55] the plan to fix this is to 'touch' a translation in those pofiles to force a new export
[04:55] ok
[04:56] so, the fix is to "touch" a translation in each pofile with a hole
[04:56] force a new export
[04:56] and these will be in the next langpack update
[04:56] how did you find out about the problem?
[04:57] yeah, that's the solution
[04:57] because from time to time, a Kubuntu user complained to someone who then complained to me
[04:57] until a couple of weeks ago
[04:58] when a KDE developer who tracks our KDE translations
[04:58] warned me about the problem and helped me to debug it
[04:58] previous complaints came from GNOME users who were not able to help me with that
[04:59] and after checking some language packs, I detected the problem; then I had to develop a script to compare all the dates, and after some manual review, I got a list of files with this problem
[04:59] I'm not 100% sure that I got all the files, but I think I got most of them
[05:00] the .po file format is really poor at version tracking
[05:03] ok
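A sketch of the 'touch' fix under stated assumptions: carlos's actual plan touches a translation inside each affected pofile, which is approximated here by bumping a hypothetical datechanged column; `store` and the id list stand in for whatever the real fix used.

```python
def touch_pofiles(store, affected_pofile_ids):
    """Force a new export of each pofile with a 'hole' by marking it as
    changed after the recorded language pack timestamp."""
    for pofile_id in affected_pofile_ids:
        # Any change dated after the langpack timestamp gets the file
        # picked up by the next language pack update export.
        store.execute(
            "UPDATE POFile SET datechanged = CURRENT_TIMESTAMP "
            "WHERE id = %s", (pofile_id,))
```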
[05:04] do you know how the timestamp problem occurred in the first place?
[05:04] also, when you have a problem like this, please let me know that it has occurred
[05:04] it's the kind of thing someone may ask me about, and I'd feel stupid for not knowing
[05:05] well, it was a mix of a communication problem between Martin Pitt and me, and the fact that we use a mirror to export language packs
[05:06] so I put the wrong date there
[05:06] (we still do this manually once the final release is done)
[05:06] where did you put the wrong date?
[05:07] I thought I would fix this much faster than it actually took me, so I was planning to send the report to notify you too... sorry, I will try to do better next time and write an initial report, and another one when I find the problem and possible solutions
[05:07] SteveA: in our database
[05:07] ok, so you did the export
[05:08] I asked for an UPDATE command on production
[05:08] and then put a date in the database
[05:08] but you put the wrong date in?
[05:08] yeah
[05:08] I do several exports
[05:08] one per day
[05:08] and Martin decides which one is the final one
[05:09] I see
[05:09] he tells me which one he used, and I put the right timestamp in our database
[05:09] how can we avoid this problem in the future?
[05:09] but it seems I forgot to take into account the mirror delay that we have
[05:10] by moving language pack exports to production and figuring out a way to handle all those timestamps automatically
[05:10] I already took some steps in that direction
[05:10] before moving to carbon to generate language packs
[05:10] I improved language pack exports a lot
[05:11] so it now takes between 1 and 2 hours less per distrorelease
[05:12] without locking the database so much or killing the server with a high load, as we were doing on asuka
[05:12] I changed a couple of queries; the new ones do the same but use fewer resources (fewer joins and fewer rows fetched)
[05:13] so, here is an idea
[05:13] I don't know if it is practical
[05:14] when you do an export, give it a unique ID, maybe by putting a timestamp in a .txt file in the export
[05:14] and record that in the database
[05:14] so, there is an automatic correspondence between the data exported and the state in the database
[05:14] so, no manual step, except perhaps saying in the database which one is actually used
[05:15] does that make any sense?
[05:17] well, that's actually what we do atm, or at least the infrastructure we have was designed that way
[05:17] but the language packs use a read-only database, so we are not able to do that
[05:18] and also, as we do daily lang pack exports, I still need to know which one will be used as the final one
[05:18] and that still depends on Martin
[05:18] but, yes, the idea is more or less that
[05:19] atm, the tarball has a file with the timestamp of when it was generated
[05:19] but it's not completely reliable because it depends on the db mirroring
[05:20] if one day that's not done, the timestamp would carry that day's date while the export is using a database that is two days older than production
[05:26] when you say "timestamp"
[05:26] do you mean the time when the files were created
[05:27] or the time that is written as text into some file?
[05:28] when the language pack was generated
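A sketch of SteveA's suggestion with hypothetical names throughout (export-id.txt, LanguagePackExport): each export carries a unique ID both inside the tarball and in a database row written at export time, so marking the final pack is a single flag update rather than a guessed timestamp. As carlos notes, the exports run against a read-only mirror database, so the recording step is the part that would need rework.

```python
import os
import uuid

def stamp_export(export_dir, store):
    """Write a unique ID into the export tree and record the same ID in
    the database at export time."""
    export_id = uuid.uuid4().hex
    with open(os.path.join(export_dir, 'export-id.txt'), 'w') as f:
        f.write(export_id + '\n')
    # Hypothetical table; marking the final pack later becomes a flag
    # update keyed on the ID Martin reports back, immune to mirror lag.
    store.execute(
        "INSERT INTO LanguagePackExport (export_id, date_exported) "
        "VALUES (%s, CURRENT_TIMESTAMP)", (export_id,))
    return export_id
```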