=== ddaa [n=ddaa@nor75-18-82-241-238-155.fbx.proxad.net] has joined #launchpad-meeting
=== carlos [n=carlos@138.Red-81-39-35.dynamicIP.rima-tde.net] has joined #launchpad-meeting
=== danilos [n=danilo@cable-89-216-150-63.dynamic.sbb.co.yu] has joined #launchpad-meeting
[03:11] SteveA: please, ping us when you are ready
[03:13] if you are too busy to have the meeting now, tell us that too, so we can hold another meeting that we have pending
[03:13] please
[03:14] carlos: hi
[03:14] I'm here
[03:14] hi
[03:14] two things
[03:14] 1. I want to understand better the timestamp and manually entered data thing
[03:15] 2. about the view refactoring, one point that came out of discussion with kiko and mark is that it is something to do if it will be a small job, and will improve things
[03:15] don't spend time on it before 1.0 if it is a larger thing
[03:15] it isn't a goal in itself
[03:15] it is an idea that might help work on TranslationReview
[03:16] so, dealing with (2) first
[03:16] what do you think?
[03:17] well, the thing is that I could implement TranslationReview with the current views, but adding some hacks to see, for instance, the copy button that the user clicked to copy a message into the text area to modify it
[03:18] I don't think it would be much more ugly than what we have right now
[03:19] so if you prefer to leave it for later, I can do that, but take it into account at review time
[03:19] maybe you should guesstimate it?
[03:20] how long to do the refactoring in days? how long to do TranslationReview after the refactoring? how long to do TranslationReview before the refactoring? how long to do the refactoring after TranslationReview?
[03:20] based on what kiko told me, or what danilo and I talked about?
[03:21] tell me what you think
[03:21] after all, you'll be doing the work
[03:22] well, I would need to think about it before I can give you an estimate
[03:22] I know the idea behind kiko's suggestion, but I haven't thought about it in depth yet, because of the pending meetings that would change the solution a bit
[03:22] (just as a sidenote, this is exactly the reasoning why I want to work on the generalized TranslationImport stuff: it will take 2-4 days to implement, but adding OOo and KDE PO support would then be much shorter and cleaner)
[03:22] I could give you that info at the end of today
[03:23] is that ok?
[03:24] yes, that's fine.
[03:24] that will give you some idea of whether to do this first or later
[03:25] danilos: well, in that case you are preventing the hack; we already have that hack in production, so it makes much more sense in your case
[03:25] so, about the exports and timestamps
[03:25] ok
[03:25] carlos: I know, just giving the reasoning behind it
[03:25] SteveA: let me tell you what we have atm and how it works, ok?
[03:26] what I understand is that there is this process that involves producing exports
[03:26] and when pitti says one is good enough, that latest one becomes the baseline
[03:26] yeah
[03:26] where does the manually entered timestamp come into it?
[03:27] pitti tells me the timestamp of the tarball used for the baseline
[03:27] and I ask Stuart to store it in our database
[03:27] is that the same data that is the most recent baseline export?
[03:28] not always
[03:28] the tarball is made from the most recent baseline export?
[03:28] why might it not be?
[03:28] because there is a small delay
[03:28] a small delay between what and what?
[03:28] between when pitti gets a tarball, creates the .deb packages and notifies me of the timestamp
[03:28] and we create a new export every day
[03:28] the timestamp is based on data in the database
[03:29] it is purely within the database's data
[03:29] no, the timestamp is based on the date when the export was done
[03:29] what happens to that data -- making a tarball or a deb -- doesn't affect the timestamp of when the export was produced
[03:29] oh
[03:29] I see what you mean
[03:29] sort of
[03:30] so, I cannot see a reason to put a timestamp *into* the database
[03:30] it would work that way if our exports came from production
[03:30] it is something that should only ever come out of the database
[03:30] but that's not the case
[03:30] what I can see going into the database is
[03:30] marking which export is the one that is used
[03:30] we use a read-only mirror
[03:30] we don't store the list of exports in our database
[03:30] oh
[03:31] our datamodel doesn't allow it
[03:31] it's not like Ubuntu packages; we don't need that complexity
[03:32] ok
[03:32] so, the way I'd approach that is
[03:32] in the exported data, add a timestamp + a checksum of the timestamp
[03:32] so there cannot be a typo
[03:34] but, I'm more inclined to connect to the real database
[03:34] me too
[03:34] and store that an export was produced
[03:34] and store the date of the latest translation used, or whatever is appropriate
[03:34] and give that an export-id
[03:34] so pitti can just say "this export-id is the baseline"
[03:35] I guess we could use the link to the librarian as that 'export-id'
[03:36] so we could have it for free by adding a link to the latest baseline langpack
[03:37] SteveA: also, I was thinking of using the timestamp of the latest translation in that export as the timestamp for the language pack, so this problem wouldn't happen again
[03:37] right
[03:38] please file a bug or two on these issues
[03:38] there is still another issue
[03:38] what's that?
[03:38] as we generate daily language packs
[03:39] we need to provide Martin Pitt a way to go to launchpad and note which language pack has been used as the base package
[03:40] how does pitti get a particular language pack?
[03:40] does he get an email?
[03:40] or go to a page in launchpad?
[03:40] atm, he fetches it from people.ubuntu.com
[03:40] once we move to production, he will get an email
=== danilo_ [n=danilo@cable-89-216-150-89.dynamic.sbb.co.yu] has joined #launchpad-meeting
[03:40] with a link to the librarian
[03:40] how does it get to people.ubuntu.com?
[03:41] my script on carbon pushes the tarball there once it's built
[03:41] I see
[03:41] so, if we had a database table for langpacks produced
[03:41] he could see it in a UI
[03:41] and mark in that same UI whether one has been used as the baseline, and for what
[03:41] although Martin asked us for a fixed URL in the librarian so he doesn't need to parse the email
[03:41] SteveA: right
[03:41] or we could offer him xmlrpc
[03:42] I think that would be the right approach
[03:42] if he wants to automate it
[03:42] yeah, that would also be a good way to do it
[03:42] ok
[03:42] that seems like a small spec to me
[03:42] a couple of paragraphs explaining the background and proposed solution
[03:42] so we can schedule it for post 1.0
[03:43] ok
[03:43] also, I think that's the only missing part before moving language packs to production
[03:44] ok
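The langpack table being proposed here might look roughly as follows in PostgreSQL (Launchpad's database engine); this is only a sketch of the idea, and the table and column names in it are hypothetical, not an agreed schema:

    CREATE TABLE LanguagePack (
        id serial PRIMARY KEY,
        -- one distrorelease has n language pack exports
        distrorelease integer NOT NULL REFERENCES DistroRelease(id),
        -- the exported tarball itself lives in the librarian
        librarian_file integer NOT NULL REFERENCES LibraryFileAlias(id),
        -- timestamp of the most recent translation included in the export
        date_last_translation timestamp NOT NULL,
        date_exported timestamp NOT NULL DEFAULT now(),
        -- set (via the UI or xmlrpc) when pitti marks this export as the baseline
        is_baseline boolean NOT NULL DEFAULT FALSE
    );

The database-assigned id would act as the export-id pitti quotes back, with the librarian link kept as an ordinary column rather than as the identifier itself.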
[03:44] so we talked about...
[03:44] - having the export script write to the db
[03:44] maybe in a special table
[03:44] or maybe using the librarian id
[03:44] well, we still need a table
[03:45] so that there is a database-produced unique ID for a langpack that is generated
[03:45] it's a one-to-many relation
[03:45] so, use the 'id' in the table rather than the librarian id, perhaps
[03:45] one distrorelease has n language pack exports
[03:45] ok
[03:46] ok
[03:46] and then we want to record there whether it is used as the baseline
[03:46] or a baseline
[03:46] and the timestamp of the most recent translation
[03:46] ok
[03:46] also, we have another kind of language pack, the ones that only have updates...
[03:47] but I don't think I should bore you with that
[03:47] I will note that in the spec
[03:47] I know that they exist
[03:47] and the script that produces them can use this data to know what range of translations to include
[03:47] right
[03:48] and then we also talked about a UI + maybe xmlrpc for pitti
[03:48] also, I'm thinking of adding something to launchpad that allows pitti to select whether we are going to generate updates or full exports
[03:48] to get the langpacks, see what langpacks are available, and mark the one he chooses as a baseline
[03:48] ok
[03:48] so he can decide to do a new full export
[03:48] so, this is something to discuss in person with pitti, perhaps?
[03:49] all these things together
[03:49] to reduce the size of the update-only packages (it already happened with the dapper point release, but he had to do it manually)
[03:49] I guess; let's first try a phone call after we have a draft
[03:50] and see whether that's needed
[03:50] anyway, if it's post 1.0, we could talk about it at the allhands meeting
[03:51] well, when are you moving the langpack production into production?
[03:51] once we have this new spec implemented
[03:51] ok
[03:52] and that's not a 1.0 goal
[03:52] as far as I'm aware
[03:52] right, it's post 1.0
[03:52] ok, then I think that's settled
[03:52] what do you think danilos ?
[03:52] I'm fine with it
[03:52] ok, great.
[03:53] thanks for having this meeting.
[03:53] and it has just moved to carbon
[03:53] so performance should not be an issue in the near future
[03:53] we must just be careful about those timestamps
[03:53] well, the performance issues were already fixed even before moving it to carbon
[03:53] until we have a better system
[03:53] maybe add a checksum to the timestamp as an interim measure... ?
[03:54] or ensure that the mail sent to set it in the db
[03:54] SteveA: don't worry, what I will try to do is to set the latest modified translation as the timestamp
[03:54] is sent to pitti too
[03:54] SteveA: that way we solve the mirror problem
[03:54] carlos: you still need to store that somewhere
[03:54] SteveA: inside the exported tarball
[03:54] so, there's still a timestamp coming out of the database
[03:54] across to pitti
[03:54] we already have a timestamp.txt
[03:54] then back to you
[03:54] and back into the database
[03:54] right
[03:55] and that is error-prone
[03:55] I will also add the checksum as you suggested, to be completely sure that nothing was lost
[03:55] so, consider this
[03:55] those are more or less trivial tasks
[03:55] add a checksum to timestamp.txt
[03:56] and write a small script to read a timestamp.txt, check the checksum and set it in the database
[03:56] if that will take under 2 hrs, then I'd say do that
[03:56] if more, then it is too much work
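A minimal sketch of the interim scheme just agreed: timestamp.txt carrying its own md5 checksum, plus the small verification script. It uses the same Python 2 md5 module that appears later in this session; the file layout and function names are assumptions, not the actual implementation:

    import md5
    import sys

    def write_timestamp(path, timestamp):
        # Ship the timestamp together with its md5 checksum, so a typo
        # introduced anywhere along the pitti -> carlos -> stuart path
        # is detected instead of silently entering the database.
        checksum = md5.new(timestamp).hexdigest()
        f = open(path, 'w')
        f.write('%s %s\n' % (timestamp, checksum))
        f.close()

    def read_timestamp(path):
        # Return the verified timestamp, or refuse if it was corrupted.
        timestamp, checksum = open(path).read().split()
        if md5.new(timestamp).hexdigest() != checksum:
            raise ValueError('timestamp.txt failed its checksum')
        return timestamp

    if __name__ == '__main__':
        # Stuart would run this on the timestamp.txt he receives and feed
        # the verified value to the UPDATE statement, rather than typing
        # it in by hand.
        print read_timestamp(sys.argv[1])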
[03:56] hmmm, why should I do the last part?
[03:56] what does that mean?
[03:56] I still need to ask stuart to run the update statement
[03:56] I don't have direct access to the production database
[03:56] you can tell stuart to run that script
[03:57] or you can send stuart the timestamp.txt
[03:57] and tell stuart to run the script on it
[03:57] I see
[03:57] ok
[03:57] the point is to avoid having someone type in a manual query
[03:57] so we reduce the chance of introducing a typo
[03:57] copied from some email or text file
[03:57] but, it is worth doing this only if it will be quick
[03:57] good plan
[03:57] don't bother if it will be more than 2 hours
[03:57] ok
[03:58] I think it would fit in a 2-hour slot
[03:58] because it is just an interim thing
[03:58] until the better system is designed and implemented
[03:58] so pitti needs to send me the timestamp.txt file and that's all
[03:58] yes
[03:58] ok
[03:59] >>> md5.new('timestamp').hexdigest()
[03:59] 'd7e6d55ba379a13d08c25d15faf2a23b'
[04:00] SteveA: yeah, I already used the md5 module while debugging this problem
[04:00] to know which files changed between the current language packs
[04:00] and a full export I forced
[04:00] so don't worry
[04:01] something like that
[04:01] dsfok, great
[04:01] dsfok, gre
[04:01] um
[04:01] lag
[04:01] great
[04:03] lag + garbage :-P
[04:04] SteveA: btw, now that we are talking about this
[04:04] there is another issue we should fix
[04:04] what's that?
[04:04] related to Rosetta and dapper
[04:04] and perhaps breezy
[04:04] https://launchpad.net/bugs/58221
[04:05] SteveA: pkgstriptranslations was stripping and feeding Rosetta with translations
[04:05] for packages in the backports pocket
[04:05] danilos: please, pay attention too, because we need to solve this issue
[04:06] carlos: no problem, I am ;)
[04:06] that means that we have some .pot files imported in dapper
[04:06] that don't match what dapper has in its official release
[04:06] does this continue to happen?
[04:07] or is it just that we have some bad data now?
[04:07] no, the buildds for backports were fixed 30 minutes ago
=== ddaa [n=ddaa@nor75-18-82-241-238-155.fbx.proxad.net] has joined #launchpad-meeting
[04:07] but we have some bad data
[04:07] no to what?
[04:07] that we need to fix
[04:07] you're saying that the problem is fixed
[04:07] but the bad data remains?
[04:07] right
[04:08] does rosetta get told the pocket by the buildds?
[04:08] Well, I think we can know that from our datamodel
[04:08] and it's a protection we should implement to prevent such breakage in the future
[04:09] right, please file a bug on that
[04:09] or, add a task to that bug
[04:09] I did already
[04:09] let me look for it
[04:09] now, about the bad data. what do we need to do?
[04:10] https://launchpad.net/products/rosetta/+bug/58223
[04:10] I think that we need to get the right .pot files and reimport them
[04:10] that should be enough
[04:11] how do you identify which pot files are needed?
[04:11] because the imported .po files didn't change anything in Rosetta other than what came from upstream, so nothing was broken from the Ubuntu translators' point of view (we just added some new translations from upstream)
[04:12] well, that's the problem, I don't know exactly how to solve it
[04:12] I was thinking of asking for a list of packages in the backports pocket
[04:12] and filtering out the ones without translations
[04:12] so I get that list
[04:12] but I think this would be a manual process
[04:13] on the other hand, I guess the number of packages should be low, because Dapper was released only 3 months ago
[04:14] well...
[04:14] all those packages will also need to be rebuilt
[04:14] with non-stripped translations
[04:14] breezy would be more complicated, but the number of templates imported is quite a bit lower than dapper's, so I don't think the problem would be too bad (we didn't get a full import for Hoary or Breezy)
[04:14] SteveA: right
[04:15] because they are without translations atm
[04:15] are there buildd logs we can use?
[04:16] I guess, but I could check with Martin on the right solution for this, because he would need to get that list too
[04:17] to rebuild the packages
[04:17] if there are buildd logs or records of this, that would probably be better
[04:17] maybe ask celso
[04:17] or infinity
[04:17] there are buildd logs, and I think we can see some output from the pkgstriptranslations script
[04:17] so it should be more or less doable
[04:18] at least we use them from time to time to debug some problems with .pot regeneration
[04:18] so, what is the plan?
[04:18] talk with Martin, just in case he already has the list of packages to rebuild
[04:20] if he doesn't have such a list, talk with celso/infinity to see if they could provide the logs using some script (they are available from launchpad/librarian) so we don't need to use the web interface for every single package in backports
[04:20] and get the list of packages with translations using those logs
[04:21] ok. let me know how it goes.
[04:21] once we get the list, I think the only option we have is to re-upload the .pot files for those packages one by one
[04:21] (we have them available at people.ubuntu.com)
[04:21] is there any risk of nuking work that has been done since?
[04:21] no, we don't nuke anything
[04:21] that work will appear as suggestions
[04:21] so, there is work to do :-(
[04:21] what we will do is to 'hide' them for dapper
[04:21] especially without translation review
[04:22] and show again some others that were removed when the backports ones were imported
[04:22] hmm
[04:22] not really
[04:22] I mean, they don't need to reactivate anything
[04:22] if there are translations that were made, which were confirmed, and which are now just suggestions
[04:22] then that's a step backwards
[04:22] and work needs to be done confirming the suggestions
[04:23] it's just a matter of setting some strings as obsolete and removing the obsolete mark from others
[04:23] what I mean by suggestions is that if a string that we are hiding in Dapper appears later in another distro release
[04:23] it will appear as a suggestion
[04:23] we need to find out how many packages are involved.
[04:23] ok
[04:23] that's for strings that are in the backport
[04:23] so we will reuse that work later as part of our translation memory
[04:23] but not in the one in main
[04:24] hmm
[04:24] the problem is that the backports have packages in main
[04:24] oh, you mean with the 'main' release?
[04:26] I'm concerned that after uploading the new POTs
[04:26] the state of translations in there will overrule work people have done since those POTs were produced
[04:26] no
[04:26] any translation done
[04:27] will remain selected
[04:27] what we do is that, for instance
[04:27] a backport for ktorrent includes a new string 'ktorrent rules'
[04:27] once we revert to the previous .pot
[04:27] that string will not appear anymore in dapper's imports
[04:27] consider this
[04:28] week 1: translation done on ktorrent
[04:28] just because it doesn't belong to dapper
[04:28] week 2: dapper POT produced
[04:28] week 3: more translation done on ktorrent
[04:28] week 4: backport built
[04:28] week 5: we fix the problem by uploading the dapper POT
[04:28] have we lost the work done in week 3?
[04:28] no
[04:29] we will be back at exactly that state
[04:29] my point was that
[04:29] week 4.5: translated something new from the backport
[04:29] as far as I get this, we might only have some additional translations which belong in backports, but these won't be used
[04:30] in that case, those new translations will be hidden with the .pot change, nothing else
[04:30] right, week 4.5 :)
[04:30] danilos: ;-)
[04:30] ok
[04:30] that was my concern. if you're confident that's not an issue, then that's good
[04:30] but we don't remove them, so they would appear as suggestions for Edgy if we publish the same version that the backport had
[04:31] that was my point
[04:31] sorry if that introduced some misunderstandings
[04:32] ok
[04:32] I have a call now. let me know how the discussions with the various people go.
[04:32] thanks
[04:33] SteveA: you are welcome
[04:33] danilos: let me have a 15-minute break and then we can start our next meeting. Is that ok for you?
[04:33] carlos: sure
[04:34] so let's talk at 16:45 our time
[04:34] carlos: I'd want to drop by the store as well, so I wonder if I should do that first?
[04:34] (but 10mins is not enough for that ;)
[04:34] carlos: so how about 17h?
[04:34] ok
[04:34] great
[04:34] 17h works for me
[04:35] see you later
[04:35] later; SteveA, carlos, thanks for bringing these issues up, even if I didn't have much to say on them :)
[04:35] danilos: you are welcome ;-)
[04:36] danilos: at least you should know about those issues
[04:36] carlos: indeed
=== ddaa [n=ddaa@nor75-18-82-241-238-155.fbx.proxad.net] has joined #launchpad-meeting
[05:03] danilos: hi, should we have the meeting here?
[05:03] carlos: sure
[05:03] carlos: so, let's start with the TranslationImport thing
[05:04] did you have a chance to take a look at the very drafty spec, and more importantly, to think about it?
[05:04] my idea is to create a simple interface which will provide all the data we need, *without* any database stuff
[05:05] the reasoning is that most of the database stuff is repeated (as I experienced developing the xpi import)
[05:05] yeah, I saw your document
[05:05] but I'm not completely sure how you plan to do it...
[05:06] I know the idea
[05:06] well, I haven't written anything about implementation
[05:06] to have an object with a single file as input
[05:06] and n files as output
[05:06] and that's why I want to discuss it with you and have a pre-implementation call with a reviewer
[05:07] I think that the basic idea is good enough to work on
[05:07] my current idea is as written above: accept path/content in the constructor, and produce something like a list of templates with all the needed data
[05:07] specifically, I would make TranslationImport.templates a dict keyed by potemplate name
[05:07] but I would like to know how you plan to deal with the import queue (this changes it a lot)
[05:08] well, I'd go for minimal changes in the import queue
[05:08] when there is only a single POT/PO being imported, we'd have the same thing we have now
[05:08] sure
[05:08] but the most powerful part of the import queue
[05:08] when there is more than one, we fully expect the imported file to list all the templates it wants to go into
[05:08] is tarball imports from Ubuntu packages
[05:08] with multiple .po and .pot files
[05:09] and the guessing code to decide where that should be imported
[05:09] indeed, and no reason to abandon that
[05:09] I'll just move that to a separate TranslationImport class
[05:10] the only problem I can see is that we'd have to approve/disapprove the entire tarball
[05:10] so you will need to open tarballs every single time we scan all the entries in the queue?
[05:11] that's not possible, we should be able to reject or block single files within a tarball just like we do right now
[05:11] hum, if I want to provide more details, yes
[05:11] so I guess some extra metadata would be needed
[05:11] hmmm
[05:11] well, the other, probably better option is to also allow separation as it's done currently
[05:12] for tarballs, that is
[05:12] well, the code that guesses where every entry should be imported needs to check the paths of every single file
[05:12] so, e.g. GSI files would present themselves as separate queue entries as well
[05:12] and we can link to the same librarian file for all of the entries
[05:12] so we would have more than one entry for a single GSI?
[05:13] yeah, for different potemplates/languages
[05:13] so we use the 'path' field as the way to pick the file out of the tarball stored in the librarian?
[05:13] that's right
[05:13] hmm, I think I like that
[05:14] the only thing we need to watch out for is not to delete the file in the librarian until all references to it are cleared ;)
[05:14] that will not need too many changes in our current code
[05:14] the librarian handles that automatically
[05:14] no entries are removed until there are no more references to them
[05:14] exactly, and we get pretty sophisticated management of other things as well; we'd be able to move KDE langpack support to that, etc.
[05:15] how's that?
[05:16] well, a single kde-l10n- will appear as several PO files in the queue entry; i.e. the same way as tarballs work now, just per-language, not per-template
[05:17] and, we'd be able to approve "subcomponents" of files (e.g. approve a single language from a GSI file, which may contain a number of languages)
[05:18] I see
[05:18] what I mean is that in this concrete case, kde-l10n handling will not change a lot
[05:18] we will eat less disk space, but that's all
[05:18] well, it won't change at all
[05:19] phone...
[05:19] except that the code will be cleaner, imho ;)
[05:20] I'm back, sorry
[05:20] no problem
[05:21] well, I'm not completely sure whether it would really be cleaner....
[05:21] I mean, we still need code to handle the tarball extraction
[05:21] or well, the listing of the tarball
[05:21] and the disk space needed will be reduced a lot
[05:21] indeed, but there won't be things like hardcoding all the checks in translation_import_queue.py
[05:22] how's that?
[05:22] I know that's true for OO and FF
[05:22] i.e. we currently check if path.endswith('.po') or path.endswith('.pot')...
[05:22] and completely special-case kde stuff with another function
[05:22] right
[05:23] but there are other checks
[05:23] that cannot be moved the way you plan
[05:23] for instance
[05:23] ok, let me rephrase that: instead of "cleaner", I should have said "more generalized"
[05:23] GNOME tarballs mix two different layouts
[05:23] one for the application, with a po/foo.pot and several .po files inside that directory
[05:24] and another for documentation, with something/foo.pot and then subdirectories for the .po files
[05:24] yeah, but what's the problem with that?
[05:24] anyway, even leaving aside the code to handle GNOME things, I agree now; it would be cleaner, so we move the KDE-specific code to the KDE tarball handling
[05:25] danilos: it cannot be moved outside translation_import_queue
[05:25] that's all
[05:25] I don't understand why not
[05:26] afaics, I'd have TranslationImport.providesTemplates, which would return a list of all the templates a file provides, and then that would reference all translations for those templates
[05:27] Hmmm, I see
[05:28] and don't forget that we also have default_file_format to determine which importer to use
[05:28] so you mean that you 'extract' a tarball only when you know exactly where its entries will be imported?
[05:28] right; in fact I was thinking about default_file_format, and it doesn't solve the problem with the GNOME layout
[05:28] well, you list the filenames when you create the import queue entries
[05:29] I'm a bit lost because I see some holes in the process you describe
[05:29] could you please describe it step by step
[05:29] actually, I don't really care about space savings, so I may also extract them right away, just like you do with current tarball imports
[05:29] what would happen when we import, for instance, evolution 2.18 translations?
[05:30] especially if I am going to lose a lot of speed with that approach
[05:30] so we get a tarball and the reference to the sourcepackage and distrorelease
[05:30] TranslationImport detects there is only one template in there, and creates a single template entry and a bunch of pofile import queue entries
[05:31] let's see what you have in mind, and then decide whether we need to untar the entries or not (I don't think we need to untar it, just extract a .po file when it is actually used)
[05:31] danilos: doesn't it have doc + application .pot files?
[05:31] (as for untarring, that can be handled independently of the TranslationImport design)
[05:31] sure
[05:32] well, if it has both doc & application .pot files, then it will provide two entries for templates, along with a bunch of pofile entries for each of them
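One possible shape for the TranslationImport interface under discussion: path/content in the constructor, a templates dict keyed by potemplate name, and the providesTemplates method danilos describes. The class internals below are a hypothetical sketch, not the agreed design:

    class TranslationImport:
        """Parse one uploaded file into templates plus their translations,
        with no database access, as proposed above."""

        def __init__(self, path, content):
            self.path = path
            self.content = content
            # Dict keyed by potemplate name; each value would carry the
            # template data and the translations destined for it.
            self.templates = {}
            self._parse()

        def providesTemplates(self):
            # The list of all the templates this file provides.
            return self.templates.keys()

        def _parse(self):
            # Format-specific subclasses fill self.templates here.
            raise NotImplementedError

    class TarballTranslationImport(TranslationImport):
        def _parse(self):
            # Walk the tarball listing; both GNOME layouts (po/foo.pot, and
            # something/foo.pot with per-language subdirectories) would be
            # recognized here instead of in translation_import_queue.py.
            pass

Subclasses for GSI, KDE PO and so on would hang off the same interface, which is the generalization the adaptor discussion below arrives at.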
[05:32] now, you're probably thinking of automatic po file matching?
[05:33] especially because the evolution POT file will probably need a rename
[05:33] no, I just want to see the full process, not thinking about specific details yet
[05:33] what you just described to me
[05:33] is the same thing we do right now
[05:33] and we should establish the relation between template and pofiles once they are added to the queue
[05:34] get the tarball and fill the queue with all the .pot and .po files included
[05:34] ok, so up to this step the code would be the same, perhaps moving it to other parts of our tree, but the same procedure
[05:34] well, yes, that's the point; it would work mostly the same for what we have already, yet allow easy extension to what we don't have (like Firefox, OOo, KDE PO, Zope...)
[05:34] I don't really see much wrong in the current procedure, to be honest
[05:35] except that I would move it outside the import queue code and generalize it
[05:35] so you would have a kind of adaptor
[05:36] for tarballs, that will do what we do atm
[05:36] another for .po and .pot files, that does nothing
[05:36] (if it's not a KDE PO or Zope one
[05:36] )
[05:36] the only problem I see with the current implementation is that I need to modify like 5-6 places and add a couple of lines in each of them, and when I develop a new importer, I duplicate much of the code from poimport.py
[05:36] that's right
[05:36] well, even if it is a KDE PO or Zope one, as we agreed the format change will be done at import time, not as a .po file
[05:37] another for OOo, that will split the file into smaller chunks
[05:37] well, for the KDE PO file, we probably only need to descend from POParser
[05:37] etc
[05:37] etc
[05:37] that's right, but without duplication of database object creation
[05:37] sure
[05:38] all of them inheriting from a common class
[05:38] for Firefox, I ended up copying most of the import_po stuff, and just replacing the relevant parts
[05:38] that's right
[05:38] and we only write the method to do the split
[05:38] ok
[05:38] well, all of that would be part of the TranslationImport interface, that was my idea
[05:38] so, how do you feel about that?
[05:39] I see an easy optimisation to reduce wasted disk space, but let's leave that for a later optimisation
[05:39] that solves the problem with code duplication that you talked about
[05:39] but
[05:40] ?
[05:40] I still don't see how you plan to move the code from translation_import_queue that guesses the POFile and POTemplate where an entry should be imported (we should start thinking about renaming those tables...)
[05:40] I see that when you extract an entry
[05:41] you can try to guess it and link it
[05:41] but
[05:41] well, it's easy to have a TranslationImport.template['evolution'].guessed_template property
[05:41] as you already pointed out, when you need to do a translation domain change, the .pot file will not find a link
[05:41] sure
[05:41] and the translation import queue will use the same method as now: you can override it
[05:41] and you call that before creating the entry in the queue, right?
[05:42] that's right
[05:42] ok
[05:42] let's look at it another way
[05:42] let's say you have a layout like gtk+
[05:42] ok
[05:42] where they use the package version as part of the path
[05:42] so you have something like gtk+-2.10.3/po/gtk20.pot
[05:43] and we have imported 2.10.2
[05:43] the automatic matching will not work here
[05:43] hum, I don't follow
[05:43] so you will not be able to make that link
[05:43] we had this problem with Edgy
[05:43] to link the new .pot file
[05:43] if we have imported 2.10.2, where do we get gtk+-2.10.3 from?
[05:43] it's the new one
[05:44] that we are handling
[05:44] ah, ok
[05:44] we have the sourcepackage and distrorelease here, right?
[05:45] we do TranslationImport.template['gtk20'] .... this has a problem: the translation domain is not always the same as the .pot filename, so we look for pot files based on their path
[05:45] yeah, you know the sourcepackagename and distrorelease
[05:45] that will appear in the queue as a separate entry, just like it appears now
[05:46] and gtk20-properties will as well
[05:46] i.e. I am still not getting what you are aiming at
[05:47] ok, but without a link to a POTemplate or POFile, right?
[05:47] that's right
[05:47] just like we do right now
[05:47] ok
[05:47] what I do atm is
[05:48] go to Edgy's gtk20 template
[05:48] and change the path
[05:48] optionally, I could link the .pot file with this POTemplate that I just fixed
[05:48] ok?
[05:49] ok, so you want us to automate that as well?
[05:49] no, that's not my point; we should do it, but it's not related to this discussion
[05:49] now, what happens with the .po files?
[05:49] with the current code, the .po files will be visited again, and this time we will be able to link them with POFiles
[05:49] because now we find a POTemplate in the same path
[05:50] well, that's the point I made above: we need to link them to this template queue entry once we create them as well
[05:50] hmm, I see your point
[05:50] so, instead of doing "path matching" in the queue entry code, we'd do that while adding queue entries
[05:50] which means that we might need another column in translationimportqueue
[05:51] rather, translationimportqueueentry ;)
[05:51] like template_entry?
[05:51] something like that, yes
[05:51] and then we can just directly approve them once the template gets imported
[05:52] I see
[05:52] would you note that we need a trigger or something to ensure that, when the entry pointed to by template_entry goes away, the referencing entries get their template_entry set to NULL?
[05:53] sure, I'll summarize this entire discussion in the spec when we are done, so I'll note that as well
[05:53] either that, or use an external table to represent this relationship, so we don't need triggers
[05:53] Stuart should tell you the best solution
[05:53] ok
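The template_entry link just agreed might be declared like this; the ON DELETE SET NULL action is one way to get the NULL-ing behaviour carlos asks about without a trigger (hypothetical DDL, not the actual schema change):

    ALTER TABLE TranslationImportQueueEntry
        ADD COLUMN template_entry integer
            REFERENCES TranslationImportQueueEntry(id)
            ON DELETE SET NULL;
    -- .po entries point at the queue entry of their template, so they can
    -- be approved directly once that template is imported.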
[05:53] I agree more or less with this solution
[05:54] but let's talk about some corner cases
[05:54] ok?
[05:54] sure
[05:54] (I think this one is easy)
[05:54] we get a tarball with a set of .po files and no .pot files at all
[05:54] that link will not be made
[05:54] which is fine
[05:55] now, we get a new version of that tarball that includes the .pot file
[05:55] your code should be able to detect the duplicated .po files and update those rows
[05:55] I think this corner case is not a big deal
[05:55] that's right
[05:55] ok, next one
[05:56] we implement support for a new layout
[05:56] for something that we already got imported into the queue
[05:56] so the .pot and .po files aren't linked
[05:56] bollocks; DELETE those, and reimport ;)
[05:56] that's not possible, a reimport requires a new Ubuntu package release
[05:57] we can also try to handle that using the same librarian reference and paths
[05:57] the DELETE is not needed, a new import will update those entries
[05:57] with that, you will then need to store references to the tarball
[05:57] if we go without extracting; but if we extract everything, then the only thing we can work with is paths
[05:58] instead of the content of the concrete entry
[05:58] ok
[05:58] so no extracting++
[05:58] after handling the queue
[05:58] you will need to revisit every entry in the queue, grouped by its librarian reference
[05:58] and try again to detect its potemplate and pofile
[05:59] that's right, and make sure that the logic for linking those in TranslationImport is separated, so it can be run on paths alone
[05:59] hmmm
[06:00] I'm not sure about that last thing
[06:00] I mean, you are interested only in librarian links
[06:00] well, how would we otherwise do the matching after the import?
[06:00] so you think redoing the import would be better?
[06:00] once you do that, you can handle it the same way a new import is handled
[06:00] ok, sure, it's simpler
[06:01] because you already have code to deal with already-existing entries
[06:01] otherwise the complexity would be higher than needed, wouldn't it?
[06:01] we are talking about opening a tarball and getting the list of entries inside it
[06:02] but we are not fetching their content
[06:02] that's right, I agree
[06:02] if you see it as a performance problem, we can do what you suggest
[06:03] it shouldn't be too much of a problem, I believe
[06:03] also, to prevent long delays in the queue
[06:03] I think we should note the last time we checked a set of entries
[06:03] so we try to guess the same entries only once per day
[06:04] not sure if you understand what I mean
[06:04] sure
[06:04] ok, let me see if I can find any other corner case based on what we found already....
[06:05] we also need to do the guessing iff we have a new template import entry
[06:06] for product imports?
[06:07] for tarballs and stuff
[06:07] other than products
[06:07] what's the point of that?
[06:07] products rely on manual imports
[06:08] so we could get a potemplate and later a tarball with languages
[06:08] well, the other way around: first translations and later a template
[06:09] not sure I understand you
[06:09] if we get a tarball without a template
[06:09] I meant that apart from checking entries only once a day, we also don't need to do that if there was no template
[06:09] and later a new upload
[06:09] ah, right
[06:09] oh, I see
[06:09] sorry, I misunderstood you :-P
[06:10] no problem ;)
[06:10] but, you need to do it anyway
[06:10] right
[06:10] just in case we already had a template imported
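A rough sketch of the once-per-day re-matching pass described in this exchange; the queue method, the date_last_checked column and the guessing helper are all assumed names, and the "only when a new template entry arrived" refinement mentioned just above is left out for brevity:

    from datetime import datetime, timedelta

    def rematch_unlinked_entries(queue):
        # Revisit entries that still have no POTemplate/POFile link,
        # grouped by their librarian reference as carlos suggests, and
        # retry the guessing at most once a day to keep the scan cheap.
        for librarian_file, entries in queue.unlinkedByLibrarianFile():
            last_checked = max([e.date_last_checked for e in entries])
            if datetime.utcnow() - last_checked < timedelta(days=1):
                continue
            for entry in entries:
                entry.guessPOTemplateAndPOFile()
                entry.date_last_checked = datetime.utcnow()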
[06:10] anyway, who do you suggest that I ask to be my pre-implementation call reviewer? :)
[06:10] phone, sorry
[06:18] ok, back
[06:18] hmm
[06:19] well, I guess james or Bjorn would fit
[06:19] grr
[06:19] well, Steve usually suggests james or Bjorn for this kind of thing
[06:19] I'm a bit dyslexic today...
[06:20] ok ;)
[06:20] I'll probably ask Bjorn, since I haven't worked with him so far ;)
[06:22] ok
[06:22] but prepare a spec update with what we just talked about
[06:22] so he can read it before the call
[06:22] ok?
[06:23] let's take another 10-minute break and then we can start our other meeting about the view restructuring
[06:23] ok?
[06:24] carlos: anyway, we also planned to discuss your view restructuring work, right?
[06:25] sure
[06:25] :-)
[06:25] I see you have lag...
[06:26] let's meet again at 18:35 local time, ok?
[06:26] yeah, that's fine
[06:40] so
[06:40] danilos: meeting time? (again...)
[06:40] carlos: yay ;)
[06:41] danilos: you'll want to read the email from kiko
[06:41] he sent it on 31st August with the subject: "POFileTranslationView/POMsgSetView cleanup guide, was Re: Status of a few bugs"
[06:42] ok, sure
[06:43] ok, it's already marked as "important" in my mail folder ;)
[06:43] danilos: the important bits are the ones related to pomsgset.py
[06:43] :-P
[06:46] ok, I've read the msgset.py bits
[06:48] carlos: ping ;)
[06:48] ok
[06:48] sorry, I wasn't paying attention to this channel O:-)
[06:48] so
[06:48] weren't you talking about not using several view classes?
[06:49] well
[06:49] the suggestion from kiko is not exactly that
[06:49] and cleaning up _*_submissions is basically what bug 30602 was all about ;)
[06:49] at the moment, the view for POMsgSet pages is used from POFile's view
[06:49] and that's broken
[06:50] ah, ok
[06:50] I see your point
[06:50] because we need to know whether we are using it from a POFile view or a POMsgSet view directly
[06:50] in this case
[06:50] the shared bits are moved to a different view
[06:50] so, the plan is to use POMsgSetView from both?
[06:50] yeah
[06:51] but without using that view as a zope one
[06:51] ok, sounds much better
[06:51] so it's not linked to any web page
[06:51] it just holds information
[06:51] that the POFile and POMsgSet views will use
[06:51] yeah, understood
[06:52] I still think that POFileTranslationView and POMsgSetPageView should use a common class from which they would inherit
[06:52] because they would share a lot of code
[06:53] because POFileTranslationView is for a set of messages and POMsgSetPageView is just for a single message
[06:53] but I guess we could leave that for later
[06:53] Yeah, I know
[06:54] the thing, as I see it, is that it would mostly be about template sharing
[06:54] so POFileTranslationView and POMsgSetPageView will have just the bits needed to render the web page
[06:54] well, we already have that done
[06:54] i.e. it would be mostly the same template for processing data from POMsgSetView
[06:54] we are already sharing the template
[06:54] and that doesn't need any change
[06:54] to go with this solution
[06:55] ok, so what code is useful for both, yet can't be moved to POMsgSetView?
[06:55] the main problem was with the navigation links: we had to check whether we were being used from a POFile view or a POMsgSetView directly
[06:55] to generate them
[06:56] right, but I am wondering what would need sharing? navigation would be separate, so that's cool ;)
[06:56] tabindex generation, general statistics info (the one at the end of the form)
[06:57] the alternative language selector code
[06:57] more or less I think that's all, but anyway, we can leave it duplicated as it is atm and think about the inheritance later
[06:57] hum, some of them might belong in other classes (like statistics being part of POFile, no?)
[06:58] sure, I don't see much use for inheritance right away
[06:58] could be
[06:58] but don't worry, I don't want to handle that right now
[06:58] ok
[06:59] so, is there anything specific you want to discuss?
[06:59] we just need POMsgSetView to handle all the information that is part of the small section for a message
[06:59] whether you think this is a good thing to do ;-)
[06:59] ah, ok :)
[06:59] I think this solves some problems that our current model has
[07:00] for instance
[07:00] well, I believe it's a very good thing, it will make it even clearer for anyone delving into the code in the future as well ;)
[07:01] i.e. I know it wasn't very simple for me to track down all the relations and dependencies this way, with views being used from views, etc. :)
[07:01] the link problem
[07:01] we currently solve that by adding a flag to the POMsgSetView class to note whether it comes from a POFile
[07:01] and at the same time, you will be doing the _*_submissions() cleanup, probably reducing the number of queries as well ;)
[07:01] to give one kind of links or another...
[07:01] well
[07:01] not really
[07:02] yeah, which is a kludge, agreed
[07:02] or not sure...
[07:02] I should not change anything but the restructuring
[07:02] anything else should be deferred to another branch
[07:02] (I don't mind taking care of that task anyway, but not as part of that branch)
[07:03] ok, maybe you won't, but then I will later on, and it will be simpler for me :)
[07:03] yeah, that's the goal
[07:03] so do you think this solution would require less time to fix that bug?
[07:03] could you quantify it for me? you don't need to be precise ;-)
[07:04] because this would be another argument for doing this now instead of post 1.0
[07:04] :-)
[07:05] well, it will probably drop from 3-6 days of active work to 2-4 days, not really sure
[07:05] the thing is that it requires some optimization work and profiling, and you never know how long that will take (just remember our edgy migration work in london ;)
[07:06] I know, but that's enough
[07:06] I think I could do the restructuring in around 4 hours + test fixes + the move from POST to GET
[07:07] I guess a day and a half of work would get that done
[07:07] I just need to move code around
[07:07] and change the POST to be a GET
[07:07] + fix a lot of tests
[07:08] do you think that is optimistic or realistic?
[07:08] realo-optimistic ;)
[07:08] in fact, add half a day to the mix to cover me being a bit lazy...
[07:08] so 2 days
[07:08] sure, sounds reasonable
[07:09] so, I guess with what we win (nicer code, altogether maybe 1 day more work), I believe it's worth it
[07:09] well, 1 day more work only related to your bug...
[07:10] with TranslationReview we have less extra work
[07:10] but it's hard for me to estimate what we would save
[07:10] I think that we would save nothing, just clearer code
[07:11] which could save more time in the future...
[07:11] yeah, right
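A sketch of the restructuring being agreed in this meeting: POMsgSetView becomes a plain helper that is not registered as a Zope view, and both page views compose it, so neither needs a came-from flag. All class internals here are hypothetical:

    class POMsgSetView:
        """Holds the information for one message's section of the form:
        current translation, suggestions, plural forms, and so on.
        Not a Zope view: no page of its own."""

        def __init__(self, pomsgset, form):
            self.pomsgset = pomsgset
            self.form = form

    class POFileTranslationView:
        """Page view for a whole POFile: one POMsgSetView per message,
        plus the tabindex, statistics and alternative-language bits."""

        def initialize(self):
            self.msgset_views = [
                POMsgSetView(msgset, self.request.form)
                for msgset in self.context.getMsgSetsToShow()]  # hypothetical helper

    class POMsgSetPageView:
        """Page view for a single POMsgSet: wraps exactly one POMsgSetView,
        so the navigation links need no flag about where we were rendered from."""

        def initialize(self):
            self.msgset_view = POMsgSetView(self.context, self.request.form)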
[07:11] so, if we stick to our time estimates, I believe it's the way to go
[07:11] what do you think?
[07:11] I think so, yes
[07:12] I'm going to write to Steve and the list about this
[07:12] in fact
[07:12] since I'm not going to change anything there, I'm not sure whether another meeting with a reviewer would be necessary
[07:13] the bigger part of this is part of your bug...
[07:13] and I'm not going to do it at this stage
[07:13] but later
[07:13] what do you think?
[07:13] yeah, sounds reasonable
[07:14] ok, thanks
[07:14] Is there anything else we should talk about?
[07:14] and you need to be careful with the tests and the GET/POST switch
[07:15] I already did such a change for the message filtering code, so don't worry, I've felt the pain already....
[07:15] we were doing POST for them until a few months ago
[07:16] ok, great :)
[07:16] I think that's all, enough meetings for today :)
[07:17] yeah
[07:17] today was pretty intense with meetings...
[07:17] I didn't write any code :-(
[07:18] thanks for your input
=== carlos -> out for today...
[07:19] danilos: do you need anything from me?
[07:19] no, that's all; enjoy your evening ;)
[07:20] same to you
[07:20] I am out myself, will be back later for some more action though :)
[07:20] ok
[07:20] cheers!