#launchpad-meeting 2006-09-18
<ddaa> Meeting in 17 mins
<ddaa> lifeless: spiv: SteveA: jamesh: MEETING STARTS
* spiv waits for the "who's here?"...
<ddaa> == Agenda ==
<ddaa> Next meeting 2006-09-25, 10:00-10:45 UTC.
<ddaa>  * roll call
<ddaa>  * production status
<ddaa>  * importd batch progress
<ddaa>  * release finder
<ddaa>  * Python import
<ddaa>  * strategic plan
<ddaa>  * bzr-lp features
<ddaa>  * interesting bzr list threads
<ddaa>  * advertising
<ddaa>  * 1.0 targets
<ddaa>  * critical bugs
<ddaa>  * any other business
<ddaa> If you wish to change the time of the meeting or add/remove agenda items, say "bzzzt!".
<ddaa> If we are short on time, the "any other business" item will be automatically dropped. So if you ''want'' to discuss something more, speak up now.
<ddaa> == Roll call ==
<ddaa> mpool is on leave until September 19th.
<spiv> I'm here.
<ddaa> Good morning spiv.
<ddaa> Or maybe you wish to be called Ribri?
<spiv> lifeless said on #bzr he might be a bit late to this meeting.
<ddaa> righty, was chatting with him
<jamesh> I'm here
<ddaa> Good morning jamesh.
<ddaa> SteveA: ahoy Great Launchpad Overlord!
<spiv> I misread that as "Overboard", probably because of the "ahoy"...
<ddaa> Waters here are nothing you want to be bathing in. They are full of ravenous zope sharks.
<ddaa> When they're done with you, you're split into at least four different bits: content, interface, browser and template.
<SteveA> is it speak like a pirate day already?
<jamesh> tomorrow
<ddaa> really?
<jamesh> http://www.talklikeapirate.com/
<ddaa> Anyway... so lifeless late, everybody else on board.
<ddaa> == Production status ==
<ddaa> j-a-meinel has reported sftp mirroring latency problem again, on Friday. I got supermirror privs on vostok, but killing hung branch-puller scripts did not help. Increasing verbosity did not help, since that script appears to have no logging whatsoever.
<ddaa> We need to fix that latency problem ASAP, it is harming public confidence in Launchpad hosting.
<spiv> (And in six months time, at the opposite end of the year, I recommend everyone observes Pork Like a Tyrant Day...)
<ddaa> I'm not quite sure where to start. But I think adding some logging to help debug what's going on would help.
<jamesh> the current puller is single-threaded
<ddaa> I sure hope it's going to stay that way.
<spiv> At the moment, a single slow/hung branch pull will block the rest of the queue.
<jamesh> would we benefit from going multi-threaded?
<ddaa> If we go parallel, I'd like to go multi-process instead of multi-thread.
<ddaa> For one thing, that makes it much easier to kill hung tasks.
<ddaa> it's also less of a debugging nightmare.
<spiv> There's been some discussion on the launchpad list involving myself, lifeless and ddaa about how to improve things so that a) we can do things in parallel, b) so that hosted branch pulling is somewhat isolated from other branch pulling.
<ddaa> But I think that if we want to do a quick functional change it should be just running three branch pullers, one for external branches, one for imports, one for hosted branches.
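The multi-process approach ddaa argues for can be sketched as follows. This is purely illustrative: the function names, categories, and timeout value are invented here, not Launchpad's actual puller code. The key property is the one ddaa names: a hung task in its own process can simply be terminated, which a thread cannot.

```python
# Hypothetical sketch: one puller process per branch category, so a
# hung pull can be killed without blocking the other categories.
# pull_branches() stands in for the real mirroring work.
import multiprocessing


def pull_branches(category, branches, outq):
    """Pull every branch in one category; a stand-in for the real puller."""
    for branch in branches:
        # A real implementation would mirror the branch here.
        outq.put((category, branch))


def run_pullers(queues, timeout=60):
    """Start one process per category; terminate any that outlive `timeout`."""
    outq = multiprocessing.Queue()
    procs = []
    for category, branches in queues.items():
        p = multiprocessing.Process(
            target=pull_branches, args=(category, branches, outq))
        p.start()
        procs.append(p)
    for p in procs:
        p.join(timeout)
        if p.is_alive():
            p.terminate()  # a hung task is simply killed -- the win over threads
    results = []
    while not outq.empty():
        results.append(outq.get())
    return results
```

Running the three categories from the discussion would then be `run_pullers({"external": [...], "imports": [...], "hosted": [...]})`.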
<spiv> It shouldn't be hard to log each URL before we try mirroring it.
<jamesh> for the hosted branches and imports, we should be able to tell which ones need mirroring too
<jamesh> if we can pass that info to the puller, then it can pull those classes more frequently
<spiv> jamesh: we already pull all hosted branches every run.
<ddaa> spiv: I talked a bit about my plan to SteveA, and how that related to importd-ng. It seems you and lifeless were thinking of something else. I'd like if you could help to clarify the disconnect.
<spiv> (because there's only ~250 of them and they're local so it's acceptably fast for now)
<jamesh> spiv: sure, but it won't be fast forever
<spiv> ddaa: lifeless is the best person to articulate the exact idea posted to the list a while back... I'd recommend scheduling some time to talk it over with him.
<jamesh> it'd probably also be worth looking at making the pull interval for a branch dynamic
<jamesh> and suspend pulls of branches that time out
<ddaa> Well, for starters, I'd like to have each branch log at INFO level: launchpad page url, source url, destination url. And when it fails, log the error that's put in Launchpad at WARNING level.
<spiv> jamesh: Of course.  Just saying the "pull those classes more frequently" for hosted branches is a solved problem, for some values of "solved" :)
<ddaa> jamesh: can you do that?
<spiv> ddaa: I wonder if DEBUG level would be more appropriate, but otherwise I agree with the extra logging plan.
<ddaa> maybe DEBUG and INFO then
<jamesh> ddaa: okay.
<ddaa> I would rather use DEBUG for INSANELY VERBOSE sorts of things
<spiv> ddaa: logging each branch will be insanely verbose eventually :)
<jamesh> we can always run the script with "-q" to silence the INFO messages once everything's running acceptably
<ddaa> and -qq to silence warnings, I think
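The logging plan agreed above can be sketched with the standard library `logging` module. The logger name, field names, and the exact -q/-qq mapping are assumptions for illustration; only the levels (INFO per branch, WARNING on failure) come from the discussion.

```python
# Sketch of the proposed branch-puller logging, assuming Python's
# standard logging module.  "branch-puller" is an invented logger name.
import logging


def make_logger(quietness=0):
    """Map quiet flags to a threshold: 0 -> INFO, 1 (-q) -> WARNING, 2 (-qq) -> ERROR."""
    level = [logging.INFO, logging.WARNING, logging.ERROR][min(quietness, 2)]
    logger = logging.getLogger("branch-puller")
    logger.setLevel(level)
    return logger


def log_branch(logger, page_url, source_url, dest_url, error=None):
    """Log the three URLs ddaa asks for, plus the Launchpad error on failure."""
    logger.info("mirroring %s (%s -> %s)", page_url, source_url, dest_url)
    if error is not None:
        logger.warning("mirror of %s failed: %s", page_url, error)
```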
<lifeless> hi
<ddaa> hi lifeless
<ddaa> ACTION: jamesh to put logging into the branch puller
<ddaa> I would like to start working on the new architecture for branch puller and importd-ng soon. Like this week or next.
<ddaa> Let's move on I think.
<ddaa> == Importd batch progress ==
<ddaa> Next meeting action:
<ddaa>  * ddaa to discuss BatchProgress testing with lifeless
<ddaa> Just a reminder. Lifeless asked to postpone that last week because of bzr release.
<lifeless> grab me at 2130 AEST
<ddaa> lifeless: anything you'd like to say about that right now?
<ddaa> lifeless: talk UTC to me please
<lifeless> 2130 UTC+1000
<ddaa> lifeless: okay
<ddaa> == Product release finder ==
<ddaa>  * jamesh: report on PRF progress. In particular the outcome of reviewing the product:series:version:tarball table.
<ddaa> jamesh: the stage is yours
<jamesh> the product release finder can now run to completion, and I posted some results to the list
<ddaa> were there noise-looking tarball names in that?
<jamesh> there were only a few issues with the output: the code for extracting version numbers from filenames tripped up on "foo-1.0.orig.tar.gz" filenames, and the pattern for redland was too loose
<jamesh> so it picked up a redland-bindings tarball
<jamesh> it might be worth using more complex match patterns (modelled after uscan, maybe) to fix these issues
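The two failure modes jamesh names (the "foo-1.0.orig.tar.gz" trip-up and the too-loose redland pattern) can both be handled with a stricter pattern. This is an illustrative sketch, not the product release finder's actual code; the anchored product name prevents "redland" from matching "redland-bindings", and the optional ".orig" suffix handles Debian-style upstream tarball names.

```python
# Hypothetical version extractor covering the two reported issues.
import re


def extract_version(product, filename):
    """Return the version from a release tarball name, or None if no match."""
    pattern = re.compile(
        r"^%s-(?P<version>[0-9][0-9a-z.+]*?)"   # product name anchored at start
        r"(?:\.orig)?"                          # tolerate foo-1.0.orig.tar.gz
        r"\.(?:tar\.gz|tar\.bz2|tgz|zip)$"
        % re.escape(product))
    match = pattern.match(filename)
    return match.group("version") if match else None
```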
<ddaa> What about allowing users to blacklist bad release names post hoc?
<jamesh> I also did a productreleasefile/sourcepackagereleasefile cross reference to match productreleases with sourcepackagereleases based on tarball identity, which gave promising results
<ddaa> jamesh: I did not see an email about that, that sounds interesting.
<jamesh> ddaa: i don't think any of the problem names got through the filename version extraction, but it is something to keep in mind when modifying the code
<ddaa> ACTION: ddaa to follow-up on ML about blacklisting bad release names to allow user to fixup bad glob matches.
<jamesh> Some of the next steps would be to simplify the UI for entering the release file details to a single field rather than URL+glob
<jamesh> and maybe move it to a different form than $series/+source
<lifeless> triage:
<ddaa> ACTION: jamesh to report on productrelease/sourcepackagerelease cross-checking
<lifeless> allow setting the release details
<lifeless> ddaa: hes already done that
<lifeless> first: allow setting the release details:
<lifeless> second: .orig filtering
<jamesh> since +source can't be submitted without specifying VCS details
<ddaa> lifeless: thanks, did not notice, maybe it's in the week-end email backlog
<lifeless> third: deploy
<ddaa> jamesh++
<ddaa> Well, that or fix +source
<jamesh> (we should probably make +source submittable with no VCS details anyway ...
<ddaa> since it's broken in a few other ways
<ddaa> I'd like to start working on +source soon too. It's incredibly confusing to users now. I'll put a few hours in that this week.
<ddaa> ACTION: ddaa to start fixing +source
<jamesh> ddaa: what do you think of making +source only about VCS details?
<jamesh> the source package bit is already available on another form
<ddaa> jamesh: there are good and bad things to it. The bad thing is that it would multiply the number of page loads for users setting up new products to fill in all the details.
<ddaa> so I'm not quite sure yet
<lifeless> focus gentlemen, the meeting is 50% done, and this is design that can be done on list or a bug
<ddaa> yeah, let's move on
<lifeless> do you agree with the priorities I suggested ? if so move on
<jamesh> lifeless: sounds good.
<ddaa> == Python import ==
<ddaa> https://launchpad.net/products/launchpad-bazaar/+bug/56360
<ddaa>  * ddaa: report on bzr testament encoding bug, and maybe progress of Python import.
<ddaa> So, using bzr-0.9, plus a couple of fixes to importd and cscvs seems to fix the testament encoding problem.
<ddaa> But the python import is still failing because after a few thousand revisions the svn server eventually resets the connection.
<ddaa> and pysvn does not give us good exceptions to detect that sort of error.
<ddaa> So I'll just keep restarting the import until it works.
<ddaa> SteveA: talk about that later if you wish.
<ddaa> == strategic plan ==
<ddaa> Last meeting action:
<ddaa>  * SteveA: jamesh: review 32/Bazaar
<SteveA> hi
<jamesh> I forgot about this.  I'll send some stuff to mbp tomorrow
<SteveA> i haven't been paying attention -- didn't realize it was a meeting
<SteveA> thought it was just high seas piracy
* SteveA catches up
<ddaa> Bah, let's move on.
<ddaa> == bzr-lp features ==
<ddaa>  * mpool: report on progress for bzr-lp features
<ddaa> mpool is still on leave, so unless somebody else has something relevant to say, we'll move forward.
<lifeless> move on
<ddaa> == Interesting bzr list threads ==
<ddaa> Follow up to last meeting. Do you guys have keywords for outstanding bzr threads from last week?
<ddaa> Did not have time to read any of the ML last week, so that would help.
<lifeless> nope, nothing interesting happened
<lifeless> actually, that's a lie
<lifeless> A bunch of interesting things happened, and they are now in my long-term memory. Something will trigger an associative lookup, but I'm *terrible* at date-based recall - it's why history was really annoying for me
* lifeless is done
<ddaa> spiv: jamesh: anything in particular you remember?
<spiv> ddaa: Btw, there's a daily snapshot of the Python SVN repo at http://svn.python.org/snapshots/projects-svn-tarball.tar.bz2 -- perhaps doing the initial import from a local copy of that would be better.
<jamesh> ddaa: not really.
<SteveA> lifeless: I found history interesting after I left school.  Schools teach history wrong, in general.
<SteveA> svn servers will reset connections.  it happens.
<ddaa> spiv: good suggestion.
<ddaa> ACTION: ddaa to look at tarball-based import of python
<spiv> ddaa: j-a-meinel did a huge bunch of reviews of the smart server branches, I don't remember much else :)
<lifeless> SteveA: oh I found it very interesting. And I'll occasionally make comments about william of orange at the right point in a conversation
<SteveA> so, we should be able to deal with this. (eventually.)  do we have some kind of plan for that?
<lifeless> SteveA: but asking me what year WWI started, and I'm screwed.
<ddaa> SteveA: move away from pysvn and use python-subversion, presumably that would give more helpful exceptions.
<SteveA> lifeless: the year isn't important.  its place in the flow of events is, though.  and that's what school teaching of history often gets wrong.
<SteveA> ddaa: is there a better pysvn upstream we can use?
<lifeless> SteveA: well, the flow I'm fine on ;)
<jamesh> so they should teach students the partial ordering of events rather than the cardinality?
<lifeless> jamesh: rotfl
<SteveA> jamesh: exactly.  history as a digraph
<ddaa> SteveA: no idea, but it's custom bindings for a GUI app, so it does not have the same requirements as cscvs.
<SteveA> anyway, when you learn relativity too, you find it really is a digraph
<SteveA> or many digraphs
<ddaa> It looks like the bzr-lp threads thing is not a success. I suggest to drop it.
<ddaa> == Advertising ==
<ddaa> Last meeting actions:
<ddaa>  * spiv: blog about similarities between SVN and bzr checkouts, in relation to Launchpad.
<SteveA> threads?
<SteveA> oh, right, asking people to note interesting happenings
<ddaa> SteveA: your suggestion to ask the folks here for stuff they found interesting in the bzr mailing list
<SteveA> I would say that this week, perhaps nothing too interesting happened
<SteveA> keep trying it
<ddaa> SteveA: ok
<ddaa> spiv: news?
<spiv> ddaa: I have a draft.  I'll post it to the list shortly.
<ddaa> Way cool.
<ddaa> == 1.0 targets ==
<ddaa> supermirror-smart-server: spiv: still looking on track for october 8th?
<ddaa> importd-bzr-native: removal of Arch support code almost complete. Missing one importd patch, and the database patch. After the launchpad patch lands, I believe we will be able to delete pybaz, gnarly and bzrtools from the dists tree.
<ddaa> bzr-roundtrip-svn: not for 1.0
<ddaa> Pending action:
<ddaa>  * mpool: read up/tick off svn roundtripping discussion
<ddaa> spiv: how's the supermirror-smart-server looking?
<spiv> Very good.
<spiv> Most of the work to date is in the 0.11 release candidate branch.
<spiv> I have a branch where "bzr+ssh://" urls work.
<jamesh> spiv: how easy will it be to integrate with twisted/conch?
<spiv> (which we'll ask to merge into 0.11)
<lifeless> nit: bzr 0.11 is frozen; rc1 is next monday
<spiv> jamesh: Worst case, just let it spawn a bzr process.  But I think in-proc will be fairly straightforward.
<ddaa> Anyway, I do not think the release mgmt would be a blocker to the lp feature.
<spiv> jamesh: the serialisation logic is fairly cleanly separated.
<lifeless> ddaa: it is if the protocols are not compatible
<ddaa> lifeless: mh, I thought it was an entirely new transport.
<spiv> Next steps: smart server-over-http, and supermirror integration.
<ddaa> So, the supermirror-smart-server is on track according to spiv.
<ddaa> == Critical bugs ==
<ddaa> https://launchpad.net/bugs/31308 Cannot set branch associated to a product series. Fix committed.
<ddaa> No new critical bugs. Will remove this section on the next meeting unless somebody really enjoys me saying "no critical bug" every week.
<spiv> Yes.  lifeless has been a big help this last week.
<SteveA> well
<jamesh> I've also got a branch to add Product.development_focus in the PQM queue
<SteveA> lower the importance
<SteveA> so we get the top most important bugs
<SteveA> it is great that we have few critical bugs
<SteveA> do we have any highly important bugs?
<lifeless> I have to go prepare the next meeting
<jamesh> which will allow us to do the "lp:/python" style URIs (picking the default series)
<ddaa> SteveA: many many
<SteveA> so, oldest critical + oldest high
<SteveA> max 7
<ddaa> SteveA: that's why I asked for your feedback on bug triaging last week
<SteveA> that's what we do in the launchpad meeting, anyway
<ddaa> So. I think that meeting is done.
<SteveA> ok, thanks david
* ddaa goes to nature call place
<lifeless> ddaa: did you deal with A
<lifeless> https://lists.ubuntu.com/archives/launchpad-users/2006-September/000608.html
<ddaa> lifeless: done on tuesday last week
<lifeless> sweet, I can forget about it then
<ddaa> look for emails with Yate in the Subject in launchpad-users
<ddaa> lifeless: so, talk about BatchReport?
<ddaa> hu, BatchProgress I mean
<lifeless> -> #launchpad
#launchpad-meeting 2006-09-19
<carlos> hi
<SteveA> hi
<SteveA> so, I have a call with mark in 1.5 hrs
<SteveA> and I wanted to catch up with you on how 1.0 stuff is going, and any other issues that are around currently
<carlos> Ok, they haven't changed much since last time we talked
<carlos> Danilo told me that firefox seems to be done
<carlos> but he suggested a new way to handle file imports
<carlos> to help with OO.org native imports
<carlos> that get a single file as input and produces more than one template
<SteveA> does firefox done mean the code is done, with tests?
<SteveA> the code is in RF?
<carlos> so we are able to have 1 file as input and n potemplate or pofile as output
<SteveA> the code is committed to danilo's branch?
<carlos> done means it's in the testing stage
<SteveA> so, not in RF
<carlos> not yet
<SteveA> but working prototype, pre review, on danilo's branch
<carlos> I'm not quite sure whether his suggestion will require changes for firefox, we will have a meeting about it today
<carlos> yeah
<carlos> I think so
<SteveA> a single file as input...
<SteveA> what single file would that be?
<SteveA> I'm trying to understand what you're describing
<carlos> OpenOffice has all translations inside a single file per language
<carlos> for documentation, oo-writer, etc
<SteveA> ok
<SteveA> so, I can see why we'd want to split that up
<SteveA> is it obvious how to do that?
<carlos> but Rosetta will not represent that as a single template because the amount of messages is huge
<carlos> yeah, based on top directories where the sources are stored
<carlos> we have such information inside the file we get
<carlos> and it's more or less the same split we currently do with the .po bridge we use atm
<SteveA> ok
<SteveA> and, how does that help FF?
<carlos> Well, that's something I need to discuss with danilo because the initial talk we had about this was more focused on OO.org and tarball uploads
<carlos> I don't know exactly how would that affect FF
<carlos> the thing is that most of the work for FF will be reused for OO.org so I guess he already saw that as an advantage for OO.org while working on the new Rosetta infrastructure changes (if FF is not affected)
<SteveA> ok
<carlos> about TranslationReview and the view restructuration that we talked about
<carlos> I will have the meeting with danilo today
<SteveA> so, you'll have an opinion about this tomorrow
<carlos> as you suggested
<carlos> yeah, I think so
<carlos> after that, I will try to have a preimplementation call tomorrow to start with this as soon as possible
<SteveA> what was the TranslationReview and the view restructuring?
<SteveA> I don't remember it, based on those words alone
<carlos> TranslationReview is the UI that will allow our users to review translations much more easily, it's a 1.0 goal
<SteveA> right
<SteveA> but, what was the view restructuring for it?
<carlos> view restructuring is a request I got from kiko and that I see as a good thing to do to finish TranslationReview implementation
<carlos> that improves the way the views that use the translation form work and reuse code
<SteveA> ok
<carlos> because we have a couple of hacks to reuse POMsgSetView from POFileTranslateView
<carlos> that at this point require more ugly hacks; kiko suggested a third class, specific for the form, that will not depend directly on POFile view or POMsgSet view
<carlos> but on a list of POMsgSet objects that will have more than one entry when we have POFile as the context and just one item when we have POMsgSet as the context
<carlos> anyway, this is not the final decision, it depends on what I agree with danilo and the reviewer from the preimplementation call
<SteveA> ok.  so, you're starting work on TranslationReview now?
<carlos> no
<carlos> that task is actually blocked on this
<carlos> before being blocked
<carlos> I already did most of the UI changes
<SteveA> you're saying that TranslationReview is blocked on the view code refactoring?
<carlos> and part of the view changes, that was the point when I was blocked looking on fixing the views
<carlos> yes
<carlos> I could finish TranslationReview spec
<carlos> but that would require some hacks
<carlos> that I would prefer to avoid and I think could be avoided with the restructuration
<SteveA> "restructuring"
<carlos> ok, thanks
<carlos> you know, my spanglish...
<carlos> about Edgy translations, we already imported most of the .pot files and I think we already fixed the translation domain changes since dapper release
<carlos> this is not an official 1.0 goal, but it's an Edgy one
<carlos> I sent an email last week about translations for documentation that are not part of language packs
<carlos> would be really good if Mark answers that email
<carlos> I sent it to launchpad@lists.canonical.com with the subject: "What to do with non language pack translations"
<carlos> and that will help us to kill what we still have pending in the import queue (atm, 4800 entries)
<carlos> we were importing those entries, but some Ubuntu developers complained to me about the fact that they are not being used at all, so translators' efforts are completely wasted
<SteveA> what are the translation domain changes?
<carlos> some products use their version number as part of the translation domain so with a new release, it changes
<carlos> and we need to detect those changes and apply them so language packs have the right info
<carlos> for instance
<carlos> for dapper, evolution used evolution-2.16 as the translation domain; with Edgy, it changed to evolution-2.18
<carlos> if we don't do that change, the application will be untranslated because it will not find the translations
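The domain-change detection carlos describes can be sketched as a comparison of domains that differ only in a trailing version number (as with evolution-2.16 vs evolution-2.18). This is a hypothetical sketch; Rosetta's actual detection mechanism is not described in the conversation.

```python
# Illustrative check for versioned translation-domain renames,
# e.g. evolution-2.16 -> evolution-2.18.
import re

_VERSIONED = re.compile(r"^(?P<base>.+?)-(?P<version>\d+(?:\.\d+)*)$")


def domain_changed(old_domain, new_domain):
    """True when both domains share a base name but carry different versions."""
    old = _VERSIONED.match(old_domain)
    new = _VERSIONED.match(new_domain)
    return (old is not None and new is not None
            and old.group("base") == new.group("base")
            and old.group("version") != new.group("version"))
```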
<SteveA> ok
<carlos> Do you want to talk about other things that I was working on? or just the ones related with Edgy and 1.0 ?
<SteveA> first, can we just summarize the conversation so far?
<carlos> sure
<carlos> - Firefox is in testing phase. Pending to see if the 1 file to n potemplates/pofiles mapping affects it
<carlos> - TranslationReview is blocked on view changes that are blocked on a pending meeting with danilo and a reviewer (should be unblocked tomorrow)
<carlos> - Edgy imports are mostly done, blocked on a final decision about whether we should import non language packs templates, if the answer is 'no' what should we do with what we already imported
<carlos> I think that's all
<SteveA> ok
<SteveA> thanks, I like the way you produced a clear summary
<SteveA> it helps me a lot
<carlos> you are welcome
<SteveA> what are the other non-1.0 things?
<carlos> well, there were some bugs fixes and user support requests that I don't think we need to talk about
<carlos> but, I detected a problem with Dapper language packs
<carlos> we had a 'hole' of translations
<carlos> that were not in the initial language packs and, due to a wrong timestamp, are not part of the language pack updates
<carlos> I think I detected all those files and agreed with Martin Pitt in a way to solve the situation
<carlos> I'm going to prepare a brief summary to the mailing list
<carlos> the problem was that we had the wrong timestamp for the initial language pack for dapper, so it was around two days after the final language pack for Dapper was released
<carlos> and that prevented language pack updates from including the updates made during those days
<carlos> that's mainly koffice translations
<SteveA> ok
<SteveA> so koffice translations (mainly) missing in dapper langpacks
<SteveA> because of a timestamp problem
<SteveA> meaning that there were translations made after the initial langpack was shipped with dapper
<SteveA> that were missed out of the updates
<SteveA> is that right?
<carlos> yeah
<carlos> because those translations came from upstream
<carlos> and no other ubuntu translator touched those pofiles
<carlos> the plan to fix this is to 'touch' a translation in those pofiles to force a new export
<SteveA> ok
<SteveA> so, the fix is to "touch" a translation in each pofile with a hole
<SteveA> force a new export
<SteveA> and these will be in the next langpack update
<SteveA> how did you find out about the problem?
<carlos> yeah, that's the solution
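The agreed fix ("touch" a translation in each affected pofile to force a new export) amounts to bumping each file's last-changed date past the faulty baseline so the next delta export includes it. The sketch below uses an invented data model for illustration; Launchpad's real schema and touch mechanism differ.

```python
# Hedged sketch of the "touch a translation" fix: mark each pofile with
# a hole as changed after the (wrong) baseline so exports pick it up.
from datetime import datetime, timedelta


def touch_pofiles(pofiles, baseline):
    """Bump date_changed on every pofile at or before `baseline`; return paths touched."""
    touched = []
    for pofile in pofiles:
        if pofile["date_changed"] <= baseline:
            pofile["date_changed"] = baseline + timedelta(seconds=1)
            touched.append(pofile["path"])
    return touched
```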
<carlos> because from time to time, a KUbuntu user complained to someone that then complained to me
<carlos> until a couple of weeks ago
<carlos> when a KDE developer that tracks our KDE translations
<carlos> warned me about the problem and helped me to debug it
<carlos> previous complaints came from GNOME users that were not able to help me with that
<carlos> and after checking some language packs, I detected the problem; after that I had to develop a script to compare all dates and, after some manual review, got a list of files with this problem
<carlos> I'm not 100% sure that I got all files, but I think that I got most of them 
<carlos> .po file format is really poor with version tracking
<SteveA> ok
<SteveA> do you know how the timestamp problem occurred in the first place?
<SteveA> also, when you have a problem like this, please let me know that it has occurred
<SteveA> it's the kind of thing someone may ask me about, and I'd feel stupid for not knowing
<carlos> well, it was a mix of a communication problem between Martin Pitt and me and the fact that we use a mirror to export language packs
<carlos> so I put there the wrong date
<carlos> (we still do this manually once the final release is done)
<SteveA> where did you put the wrong date?
<carlos> I thought I would fix this much faster than it actually took me, so I was planning to send the report to notify you too... sorry, I will try to do better next time and write an initial report and another when I find the problem and possible solutions
<carlos> SteveA: in our database
<SteveA> ok, so you did the export
<carlos> I asked an UPDATE command on production
<SteveA> and then put a date in the database
<SteveA> but you put the wrong date in?
<carlos> yeah
<carlos> I do several exports
<carlos> one per day
<carlos> and Martin decides which one is the final one
<SteveA> I see
<carlos> he tells me what he used and I put the right timestamp in our database
<SteveA> how can we avoid this problem in the future?
<carlos> but seems like I forgot to take into account the mirror delay that we have
<carlos> moving language pack exports to production and figuring out a way to handle all those timestamps automatically
<carlos> I already did some steps in that direction
<carlos> before moving to carbon to generate language packs
<carlos> I improved language pack exports a lot
<carlos> so it now takes between 1 and 2 hours less per distrorelease
<carlos> without locking the database so much or killing the server with high load as we were doing on asuka
<carlos> I changed a couple of queries; the new ones do the same but eat fewer resources (fewer joins and less row fetching)
<SteveA> so, here is an idea
<SteveA> I don't know if it is practical
<SteveA> when you do an export, give it a unique ID, maybe by putting a timestamp in a .txt file in the export
<SteveA> and record that in the database
<SteveA> so, there is an automatic correspondence between the data exported, and the state in the database
<SteveA> so, no manual step, except perhaps saying in the database which is actually used
<SteveA> does that make any sense?
<carlos> well, that's actually what we do atm, or at least the infrastructure we have was designed that way
<carlos> but the language packs use a read only database so we are not able to do that
<carlos> and also, as we do daily lang packs exports, I still need to know which one will be used as the final one
<carlos> and that still depends on Martin
<carlos> but, yes, the idea is more or less that
<carlos> atm, the tarball has a file with the timestamp when it was generated
<carlos> but it's not completely reliable because it depends on the db mirroring 
<carlos> if one day it's not done, the timestamp would carry one day's date while using a database that is two days older than production
<SteveA> when you say "timestamp"
<SteveA> do you mean the time when the files were created
<SteveA> or the time that is written as text into some file?
<carlos> when the language pack was generated
#launchpad-meeting 2006-09-20
<carlos> SteveA: please, ping us when you are ready
<carlos> if you are too busy to have the meeting now, tell us too so we can have another meeting that we have pending
<carlos> please
<SteveA> carlos: hi
<SteveA> I'm here
<carlos> hi
<SteveA> two things
<SteveA> 1. I want to understand better the timestamp and manually entered data thing
<SteveA> 2. about the view refactoring, one point that came out of discussion with kiko and mark is that it is something to do if it will be a small job, and will improve things
<SteveA> don't spend time on it before 1.0 if it is a larger thing
<SteveA> it isn't a goal in itself
<SteveA> it is an idea that might help work on TranslationReview
<SteveA> so, dealing with (2) first
<SteveA> what do you think?
<carlos> well, the thing is that I could implement TranslationReview with the current views, but it would add some hacks to handle, for instance, the copy button that the user clicks to copy a message into the text area to modify it
<carlos> I don't think it would be much more ugly than what we have right now
<carlos> so if you prefer to leave it for later, I can do that, but take that into account at review time
<SteveA> maybe you should guesstimate it?
<SteveA> how long to do the refactoring in days?  how long to do TranslationReview after the refactoring?  how long to do TranslationReview before the refactoring?  how long to do the refactoring after TranslationReview?
<carlos> based on what kiko told me, or what danilo and I talked about?
<SteveA> tell me what you think
<SteveA> after all, you'll be doing the work
<carlos> well, I would need to think about it before I can give you an estimation
<carlos> I know the idea behind kiko's suggestion, but I didn't think about it yet in depth because the pending meetings that would change a bit the solution
<danilos> (just as a sidenote, this is exactly the reasoning why I want to work on generalized TranslationImport stuff: it will take 2-4 days to implement it, but adding OOo and KDE PO would be much shorter and cleaner)
<carlos> I could give you that info at the end of today
<carlos> is that ok?
<SteveA> yes, that's fine.
<SteveA> that will give you some idea of whether to do this first or later
<carlos> danilos: well, in that case you are preventing the hack; we already have that hack in production, so it makes much more sense in your case
<SteveA> so, about the exports and timestamps
<carlos> ok
<danilos> carlos: I know, just giving the reasoning behind it
<carlos> SteveA: let me tell you what we have atm and how does it work, ok?
<SteveA> what I understand is that there is this process that involves producing exports
<SteveA> and when pitti says one is good enough, that latest one becomes the baseline
<carlos> yeah
<SteveA> where does the manually entered timestamp come into it?
<carlos> pitti tells me the timestamp of the tarball used for the baseline
<carlos> and I ask Stuart to store it in our database
<SteveA> is that the same data that is the most recent baseline export?
<carlos> not always
<SteveA> the tarball is made from the most recent baseline export?
<SteveA> why might it not be?
<carlos> because there is a small delay
<SteveA> a small delay between what and what?
<carlos> between when pitti gets a tarball, creates the .deb packages and notifies me of the timestamp
<carlos> and we create a new export every day
<SteveA> the timestamp is based on data in the database
<SteveA> it is purely within the database's data
<carlos> no, the timestamp is based on the date when the export was done
<SteveA> what happens to that data -- making a tarball or a deb -- doesn't affect the timestamp of when the export was produced
<carlos> oh
<carlos> I see what you mean
<carlos> sort of
<SteveA> so, I cannot see a reason to put a timestamp *into* the database
<carlos> it would work that way if our exports come from production 
<SteveA> it is something that should only ever come out of the database
<carlos> but it's not the case
<SteveA> what I can see going into the database is
<SteveA> marking which export is the one that is used
<carlos> we use a read only mirror
<carlos> we don't store the list of exports in our database
<SteveA> oh
<carlos> our datamodel doesn't allow it
<carlos> is not like Ubuntu packages, we don't need that complexity
<SteveA> ok
<SteveA> so, the way I'd approach that is
<SteveA> in the exported data, add a timestamp + checksum of timestamp
<SteveA> so there cannot be a typo
<SteveA> but, I'm more inclined to connect to the real database
<carlos> me too
<SteveA> and store that an export was produced
<SteveA> and store the date of the latest translation used, or whatever is appropriate
<SteveA> and give that an export-id
<SteveA> so pitti can just say "this export-id is the baseline"
<carlos> I guess we could use the link to the librarian as that 'export-id'
<carlos> so we could have it for free adding a link to latest baseline langpack
<carlos> SteveA: also, I was thinking of using the timestamp of the latest translation in that export as the timestamp for the language pack, so this problem wouldn't happen again
<SteveA> right
<SteveA> please file a bug or two on these issues
<carlos> there is still another issue
<SteveA> what's that?
<carlos> as we generate daily language packs
<carlos> we need to give Martin Pitti a way to go to launchpad and record which language packs have been used as the base package
<SteveA> how does pitti get a particular language pack?
<SteveA> does he get an email?
<SteveA> or go to a page in launchpad?
<carlos> atm, he fetches it from people.ubuntu.com
<carlos> once we move to production, he will get an email
<carlos> with a link to librarian
<SteveA> how does it get to people.ubuntu.com?
<carlos> my script in carbon pushes the tarball there once built
<SteveA> I see
<SteveA> so, if we had a database table for langpacks produced
<SteveA> he could see it in a UI
<SteveA> and mark in that same UI which one has been used as the baseline, and for what
<carlos> although Martin asked us for a fixed URL in librarian so he doesn't need to parse the email
<carlos> SteveA: right
<SteveA> or we could offer him xmlrpc 
<carlos> I think that would be the right approach
<SteveA> if he wants to automate it
<carlos> yeah, that would be also a good way to do it
<SteveA> ok
<SteveA> that seems like a small spec to me
<SteveA> couple of paragraphs explaining the background and proposed solution
<SteveA> so we can schedule it for post 1.0
<carlos> ok
<carlos> also, I think that's the only missing part to move language packs to production
<SteveA> ok
<SteveA> so we talked about...
<SteveA>  - having the export script write to the db
<SteveA>   maybe in a special table
<SteveA> or maybe using the librarian id
<carlos> well, we still need a table
<SteveA> so that there is a database-produced unique ID for a langpack that is generated
<carlos> it's a one to many relation
<SteveA> so, use the 'id' in the table rather than the librarian id perhaps
<carlos> one distrorelease has n language pack exports
<carlos> ok
<SteveA> ok
<SteveA> and then we want to record there whether it is used as the baseline
<SteveA> or a baseline
<SteveA> and the timestamp of the most recent translation
<carlos> ok
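The table SteveA recaps above could be sketched roughly like this; a minimal sqlite illustration in which every column name is hypothetical (the real Launchpad schema would be designed with Stuart, not copied from here):

```python
import sqlite3

# Hypothetical sketch of the proposed langpack-export table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE languagepackexport (
        id INTEGER PRIMARY KEY,          -- database-produced unique ID
        distrorelease TEXT NOT NULL,     -- one distrorelease has n exports
        librarian_url TEXT,              -- link to the exported tarball
        date_exported TEXT NOT NULL,     -- when the export script ran
        latest_translation TEXT,         -- timestamp of newest translation
        is_baseline INTEGER DEFAULT 0    -- marked by pitti via UI/xmlrpc
    )""")
conn.execute(
    "INSERT INTO languagepackexport "
    "(distrorelease, date_exported, latest_translation) "
    "VALUES ('dapper', '2006-09-18', '2006-09-17')")
# Marking a baseline then becomes a simple update by export-id:
conn.execute("UPDATE languagepackexport SET is_baseline=1 WHERE id=1")
```

With such a table, "this export-id is the baseline" is a single well-defined operation instead of a hand-copied timestamp.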
<carlos> also we have another kind of language packs, the ones that only have updates...
<carlos> but I don't think I should bore you with that
<carlos> I will note that in the spec
<SteveA> I know that they exist
<SteveA> and the script that produces them can use this data to know what range of translations to include
<carlos> right
<SteveA> and then we also talked about a UI + maybe xmlrpc for pitti
<carlos> also, I'm thinking of adding something to launchpad that allows pitti to select whether we are going to generate updates or full exports
<SteveA> to get the langpacks, see what langpacks are available, and mark the one he chooses as a baseline
<SteveA> ok
<carlos> so he can decide to do a new full export
<SteveA> so, this is something to discuss in person with pitti perhaps?
<SteveA> all these things together
<carlos> to keep the update-only packages small (it already happened with the dapper point release, but he had to do it manually)
<carlos> I guess; let's first try a phone call after we have a draft
<carlos> and see whether that's needed
<carlos> anyway, if it's post 1.0, we could talk about it in the allhands meeting
<SteveA> well, when are you moving the langpack production into production?
<carlos> once we have this new spec implemented
<SteveA> ok
<SteveA> and that's not a 1.0 goal
<SteveA> as far as I'm aware
<carlos> right, it's post 1.0
<SteveA> ok, then I think that's settled
<SteveA> what do you think danilos ?
<danilos> I'm fine with it
<SteveA> ok, great.
<SteveA> thanks for having this meeting.
<danilos> and it has just moved to carbon
<danilos> so performance should not be an issue in the near future
<SteveA> we must just be careful about those timestamps
<carlos> well, the performance issues were already fixed even before moving it to carbon
<SteveA> until using a better system
<SteveA> maybe add a checksum to the timestamp as an interim measure... ?
<SteveA> or ensure that the mail sent to set it in the db
<carlos> SteveA: don't worry, what I will try to do is use the latest modified translation as the timestamp
<SteveA> is sent to pitti too
<carlos> SteveA: that way we solve the mirror problem
<SteveA> carlos:  you still need to store that somewhere
<carlos> SteveA: inside the tarball exported
<SteveA> so, there's still a timestamp coming out of the database
<SteveA> across to pitti
<carlos> we already have a timestamp.txt
<SteveA> then back to you
<SteveA> and back into the database
<carlos> right
<SteveA> and that is error-prone
<carlos> I will also add the checksum as you suggested, to be completely sure that nothing was lost
<SteveA> so, consider this
<carlos> those are more or less trivial tasks
<SteveA> add a checksum to timestamp.txt
<SteveA> and write a small script to read a timestamp.txt, check the checksum and set it in the database
<SteveA> if that will take under 2 hrs, then I'd say do that
<SteveA> if more, then it is too much work
<carlos> hmmm, why should I do that last part?
<SteveA> what does that mean?
<carlos> I still need to ask stuart to run the UPDATE statement
<carlos> I don't have direct access to production database
<SteveA> you can tell stuart to run that script
<SteveA> or you can send stuart the timestamp.txt
<SteveA> and tell stuart to run the script on it
<carlos> I see
<carlos> ok
<SteveA> the point is to avoid having someone typing in a manual query
<carlos> so we reduce the chance to introduce a typo
<SteveA> copied from some email or text file
<SteveA> but, it is worth doing this only if it will be quick
<carlos> good plan
<SteveA> don't bother if it will be more than 2 hours
<carlos> ok
<carlos> I think it would fit in a 2 hours slot
<SteveA> because it is just an interim thing
<SteveA> until the better system is designed and implemented
<carlos> so pitti needs to send me the timestamp.txt file and that's all
<SteveA> yes
<carlos> ok
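A minimal sketch of the interim scheme discussed above, assuming a two-line timestamp.txt format (the timestamp, then an md5 of it); the format and function names are illustrative, not what was actually deployed:

```python
import hashlib

def write_timestamp_file(timestamp, path):
    """Write the export timestamp plus an md5 checksum, so a typo
    introduced while copying it around can be detected later."""
    digest = hashlib.md5(timestamp.encode("ascii")).hexdigest()
    with open(path, "w") as f:
        f.write("%s\n%s\n" % (timestamp, digest))

def read_timestamp_file(path):
    """Read timestamp.txt back; refuse it if the checksum does not
    match the timestamp line (e.g. after a manual transcription)."""
    with open(path) as f:
        timestamp = f.readline().strip()
        digest = f.readline().strip()
    if hashlib.md5(timestamp.encode("ascii")).hexdigest() != digest:
        raise ValueError("checksum mismatch in %s" % path)
    return timestamp
```

Stuart could then run the reader against the timestamp.txt pitti sends back, instead of typing a value into a manual query.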
<SteveA> >>> md5.new('timestamp').hexdigest()
<SteveA> 'd7e6d55ba379a13d08c25d15faf2a23b'
<carlos> SteveA: yeah, I already used md5 module while debugging this problem
<carlos> to know which files change between current language packs
<carlos> and a full export I forced
<carlos> so don't worry
<SteveA> something like that
<SteveA> dsfok, great
<SteveA> dsfok, gre
<SteveA> um
<SteveA> lag
<SteveA> great
<carlos> lag + garbage :-P
<carlos> SteveA: btw, now that we talk about this
<carlos> there is another issue we should fix
<SteveA> what's that?
<carlos> related to Rosetta and dapper
<carlos> and perhaps breezy
<carlos> https://launchpad.net/bugs/58221
<carlos> SteveA: pkgstriptranslations was stripping and feeding Rosetta with translations
<carlos> for packages in the backports pocket
<carlos> danilos: please, pay attention too, because we need to solve this issue
<danilos> carlos: no problem, I am ;)
<carlos> that means that we have some .pot files imported in dapper
<carlos> that don't match what dapper has in its official release
<SteveA> does this continue to happen?
<SteveA> or is it just that we have some bad data now?
<carlos> no; 30 minutes ago the buildds for backports were fixed
<carlos> but we have some bad data 
<SteveA> no to what?
<carlos> that we need to fix
<SteveA> you're saying that the problem is fixed
<SteveA> but the bad data remains?
<carlos> right
<SteveA> does rosetta get told the pocket by the buildds?
<carlos> Well, I think we can know that from our datamodel 
<carlos> and it's a protection we should implement to prevent such breakages in the future
<SteveA> right, please file a bug on that
<SteveA> or, add a task to that bug
<carlos> I did already
<carlos> let me look for it
<SteveA> now, about the bad data.  what do we need to do?
<carlos> https://launchpad.net/products/rosetta/+bug/58223
<carlos> I think that we need to get the right .pot files and reimport them again
<carlos> that should be enough
<SteveA> how do you identify what pot files are needed?
<carlos> because the imported .po files didn't change anything in Rosetta other than what came from upstream, so nothing was broken from the Ubuntu translators' point of view (we just added some new translations from upstream)
<carlos> well, that's the problem I don't know exactly how to solve
<carlos> I was thinking of asking for a list of packages in the backports pocket
<carlos> and filter out the ones without translations
<carlos> so I get that list
<carlos> but I think this would be a manual process
<carlos> on the other hand, I guess the number of packages should be low because Dapper was released only 3 months ago
<SteveA> well...
<SteveA> all those packages will also need to be rebuilt
<SteveA> with non-stripped translations
<carlos> breezy would be more complicated, but the number of templates imported is much lower than dapper's, so I don't think the problem would be too bad (we didn't get a full import for Hoary or Breezy)
<carlos> SteveA: right
<carlos> because they are without translations atm
<SteveA> are there buildd logs we can use?
<carlos> I guess so, but I could check with Martin on the right solution for this, because he would need that list too 
<carlos> to rebuild the packages
<SteveA> if there are buildd logs or records of this, that would be probably better
<SteveA> maybe ask celso
<SteveA> or infinity
<carlos> there are buildd logs, and I think we can see some output from the pkgstriptranslations script
<carlos> so it should be more or less doable
<carlos> at least we use them from time to time to debug some problems with .pot regeneration
<SteveA> so, what is the plan?
<carlos> talk with Martin, just in case he already got the list of packages to rebuild
<carlos> if he doesn't have such a list, talk with celso/infinity to see if they could provide the logs using a script (they are available from launchpad/librarian), so we don't need to use the web interface for every single package in backports
<carlos> and get the list of packages with translations using those logs
<SteveA> ok.  let me know how it goes.
<carlos> once we get the list, I think the only option we have is to upload the .pot files for those packages again, one by one
<carlos> (we have them available at people.ubuntu.com)
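If celso or infinity can provide the raw buildd logs, filtering them might look roughly like this; the pkgstriptranslations marker string used here is an assumption about its log output, not its actual format:

```python
import re

# Hypothetical marker: what pkgstriptranslations actually prints in
# buildd logs may differ; adjust the pattern to the real output.
STRIP_MARKER = re.compile(r"pkgstriptranslations: .*stripped")

def packages_with_stripped_translations(logs):
    """Given {package_name: buildd_log_text}, return the sorted list of
    packages whose logs show pkgstriptranslations stripping files."""
    return sorted(
        name for name, text in logs.items()
        if STRIP_MARKER.search(text))
```

The resulting list would be both the set of packages whose .pot files need reimporting and the set Martin needs to rebuild.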
<SteveA> is there any risk of nuking work that has been done since?
<carlos> no, we don't nuke anything
<carlos> that work will appear as suggestions
<SteveA> so, there is work to do :-(
<carlos> what we will do is to 'hide' them for dapper
<SteveA> especially without translation review
<carlos> and show again some others that were removed when the backports ones were imported
<carlos> hmm
<carlos> not really
<carlos> I mean, they don't need to reactivate anything
<SteveA> if there are translations that were made, which were confirmed, and which are now just suggestions
<SteveA> then that's a step backwards
<SteveA> and work needs to be done confirming the suggestions
<carlos> it's just a matter of setting some strings as obsolete and removing the obsolete mark from others
<carlos> what I mean by suggestions is that if a string that we are hiding in Dapper appears later in another distro release
<carlos> it will appear as a suggestion
<SteveA> we need to find out how many packages are involved.
<SteveA> ok
<SteveA> that's for strings that are in the backport
<carlos> so we will reuse that work later as part of our translation memory
<SteveA> but not in the one in main
<carlos> hmm
<carlos> the problem is that the backports have packages in main
<carlos> oh, you mean with 'main' release?
<SteveA> I'm concerned that after uploading the new POTs
<SteveA> that the state of translations in there will overrule work people have done since those POTs were produced
<carlos> no
<carlos> any translation done
<carlos> will remain selected
<carlos> what we do is that, for instance
<carlos> a backport for ktorrent includes a new string 'ktorrent rules'
<carlos> once we revert to previous .pot
<carlos> that string will not appear anymore in dapper's imports
<SteveA> consider this
<SteveA> week 1: translation done on ktorrent
<carlos> just because it doesn't belong to dapper
<SteveA> week 2: dapper POT produced
<SteveA> week 3: more translation done on ktorrent
<SteveA> week 4: backport built
<SteveA> week 5: we fix problem by uploading the dapper POT
<SteveA> have we lost the work done in week 3?
<carlos> no
<carlos> we will be at that exact status
<carlos> my point was that
<carlos> week 4.5: translated something new from backport
<danilos> as far as I get this, we might only have some additional translations which belong in backports, but these won't be used
<carlos> in that case, those new translations will be hidden with the .pot change, nothing else
<danilos> right, week 4.5 :)
<carlos> danilos: ;-)
<SteveA> ok
<SteveA> that was my concern.  if you're confident that's not an issue, then that's good
<carlos> but we don't remove them so they would appear as suggestions for Edgy if we publish the same version that the backport had
<carlos> that was my point
<carlos> sorry if that introduced some misunderstandings
<SteveA> ok
<SteveA> I have a call now.  let me know how the discussions with the various people go.
<SteveA> thanks
<carlos> SteveA: you are welcome
<carlos> danilos: let me have a 15 minutes break and then, we could start our next meeting. Is that ok for you?
<danilos> carlos: sure
<carlos> so let's talk at 16:45 our time
<danilos> carlos: I'd like to drop by the store as well, so I wonder if I should do that first?
<danilos> (but 10mins is not enough for that ;)
<danilos> carlos: so how about 17h?
<carlos> ok
<danilos> great
<carlos> 17h works for me
<carlos> see you later
<danilos> later; SteveA, carlos, thanks for bringing these issues up, even if I didn't have much to say on them :)
<carlos> danilos: you are welcome ;-)
<carlos> danilos: at least you should know about those issues
<danilos> carlos: indeed
<carlos> danilos: hi, should we have the meeting here?
<danilos> carlos: sure
<danilos> carlos: so, let's start with the TranslationImport thing
<danilos> did you have a chance to take a look at the very drafty-spec, and more importantly, to think about it?
<danilos> my idea is to create a simple interface which will provide all data we need, *without* any database stuff
<danilos> the reasoning is that most of the database stuff is repeated (as experienced developing xpi import)
<carlos> yeah I saw your document
<carlos> but I'm not completely sure how you plan to do it...
<carlos> I know the idea
<danilos> well, I haven't written anything about implementation
<carlos> to have an object with a single file as an input
<carlos> and n files as output
<danilos> and that's why I want to discuss it with you and have a preimplementation call with a reviewer
<carlos> I think that the basic idea is good enough to work on it
<danilos> my current idea is as written above: accept path/content in constructor, and produce something like a list of templates with all the needed data
<danilos> specifically, I would make TranslationImport.templates a dict keyed by potemplate name
<carlos> but I would like to know how you plan to deal with the import queue (this changes it a lot)
<danilos> well, I'd go for minimal changes in import queue
<danilos> when there is only a single POT/PO being imported, we'd have the same thing we have now
<carlos> sure
<carlos> but the most powerful part of the import queue
<danilos> when there are more than one, we fully expect imported file to list all the templates it wants to go into
<carlos> are tarball imports from Ubuntu packages
<carlos> with multiple .po and .pot files
<carlos> and the guessing code to decide where that should be imported
<danilos> indeed, and no reason to abandon that
<danilos> I'll just move that to separate TranslationImport class
<danilos> the only problem I can see is that we'd have to approve/disapprove the entire tarball
<carlos> so you will need to open tarballs every single time we scan all entries in the queue?
<carlos> that's not possible, we should be able to reject or block single files within a tarball just like we do right now
<danilos> hum, if I want to provide more details, yes
<carlos> so I guess some extra metadata would be needed
<carlos> hmmm
<danilos> well, the other, probably better option is to also allow separation as it's done currently
<danilos> for tarballs, that is
<carlos> well, the code that guesses where every entry should be imported needs to check the paths of every single file
<danilos> so, eg. GSI files would present themselves as separate queue entries as well
<danilos> and we can link to the same librarian file for all of the entries
<carlos> so we would have more than one entry for a single GSI?
<danilos> yeah, for different potemplates/languages
<carlos> so we use 'path' field as the way to filter out the file from the tarball stored in the librarian?
<danilos> that's right
<carlos> hmm, I think I like that
<danilos> the only thing we need to watch out for is not to delete file in librarian until all references to it are cleared ;)
<carlos> that will not need too many changes in our current code
<carlos> librarian handles that automatically
<carlos> no entries are removed until there are no more references to it
<danilos> exactly, and we get pretty sophisticated management of other things too; we'd be able to move KDE langpack support to that, etc.
<carlos> how's that?
<danilos> well, a single kde-l10n-<LANG> will appear as several PO files in the queue entry; i.e. the same way as tarballs work now, just per-language, not per-template
<danilos> and, we'd be able to approve "subcomponents" of files (eg. approve single language from GSI file, which may contain a number of languages)
<carlos> I see
<carlos> what I mean is that in this concrete case, kde-i18n handling will not change a lot
<carlos> we will eat less disk space, but that's all
<danilos> well, it won't change at all
<carlos> phone...
<danilos> except that code will be cleaner, imho ;)
<carlos> I'm back, sorry
<danilos> no problem
<carlos> well, I'm not completely sure whether it would be really cleaner....
<carlos> I mean, we still need code to handle the tarball extract
<carlos> or well, the listing of the tarball
<carlos> and the disk space needed will be reduced a lot
<danilos> indeed, but there won't be things like hardcoding all the checks in translation_import_queue.py
<carlos> how's that?
<carlos> I know that's true for OO and FF
<danilos> i.e. we currently check if path.endswith('.po') or path.endswith('.pot')...
<danilos> and completely special-case kde stuff with another function
<carlos> right
<carlos> but there are other checks
<carlos> that cannot be moved the way you plan
<carlos> for instance
<danilos> ok, let me rephrase that: instead of "cleaner", I should have said "more generalized"
<carlos> GNOME tarballs mix two different layouts
<carlos> one for the application, with a po/foo.pot and several .po files inside that directory
<carlos> and another for documentation with something/foo.pot and then subdirectories for the .po files
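The two layouts carlos describes could be matched purely by path; a hedged sketch whose heuristic is illustrative only, not Launchpad's actual guessing code:

```python
import os

def match_po_to_pot(paths):
    """Match .po files to .pot files by path, covering both layouts:
    po/foo.pot with sibling .po files, and help/foo.pot with .po files
    in per-language subdirectories below it."""
    pots = [p for p in paths if p.endswith('.pot')]
    matches = {pot: [] for pot in pots}
    for p in paths:
        if not p.endswith('.po'):
            continue
        podir = os.path.dirname(p)
        for pot in pots:
            potdir = os.path.dirname(pot)
            # same directory, or the .po sits one level below the .pot
            if podir == potdir or os.path.dirname(podir) == potdir:
                matches[pot].append(p)
                break
    return matches
```

Whatever the final design, this kind of logic stays the same whether it lives in translation_import_queue or in a TranslationImport class; only its home moves.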
<danilos> yeah, but what's the problem with that?
<carlos> anyway, even leaving aside the code to handle GNOME things, I agree now: it would be cleaner if we move the KDE-specific code to the KDE tarball importer
<carlos> danilos: it cannot be moved outside translation_import_queue
<carlos> that's all
<danilos> I don't understand why not
<danilos> afaics, I'd have TranslationImport.providesTemplates which would return a list of all the templates a file provides, and then that would reference all translations for those templates
<carlos> Hmmm, I see
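In outline, the interface danilos sketches might look something like this; every name and structure below is illustrative, not the final API:

```python
class TranslationImport:
    """Sketch of the proposed interface: one input file in, a dict of
    templates (keyed by potemplate name) out, with no database access.
    Format-specific subclasses (tarball, XPI, GSI, KDE PO...) would
    override parsing but share this shape."""

    def __init__(self, path, content):
        self.path = path
        self.content = content
        # {template_name: {'template': pot_content,
        #                  'translations': {lang_code: po_content}}}
        self.templates = {}

    def providesTemplates(self):
        """Return the names of all templates this file provides."""
        return list(self.templates)


class SinglePOTImport(TranslationImport):
    """Trivial hypothetical importer for a lone .pot file."""

    def __init__(self, path, content):
        super().__init__(path, content)
        name = path.rsplit('/', 1)[-1][:-len('.pot')]
        self.templates[name] = {'template': content, 'translations': {}}
```

The queue code would then consume providesTemplates() instead of hardcoding per-format checks like path.endswith('.pot').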
<danilos> and don't forget that we also have default_file_format to determine which importer to use
<carlos> so you mean that you 'extract' a tarball only when you know exactly where its entries will be imported?
<carlos> right, in fact I was thinking of default_file_format, and it doesn't solve the problem with the GNOME layout
<danilos> well, you list the filenames when you create import queue entries
<carlos> I'm a bit lost because I see some holes in the process you describe
<carlos> could you please describe step by step
<danilos> actually, I don't really care about space-savings, so I may also extract them right away, just like you do with current tarball imports
<carlos> what would happen when we import, for instance, evolution 2.18 translations?
<danilos> especially if I am going to lose a lot of speed with that approach
<carlos> so we get a tarball and the reference to the sourcepackage and distrorelease
<danilos> TranslationImport detects there is only one template in there, and creates a single template, and a bunch of pofile import queue entries
<carlos> let's see what you have in mind, and then decide whether we need to untar the entries or not (I don't think we need to untar it, just extract it when the .po file is actually used)
<carlos> danilos: doesn't it have doc + application .pot files?
<danilos> (as for untarring, that can be handled independently of TranslationImport design)
<carlos> sure
<danilos> well, if it has both doc & application .pot files, then it will provide two entries for templates, along with a bunch of pofile entries for each of them
<danilos> now, you're probably thinking of automatic po file matching?
<danilos> especially because evolution POT file will probably need a rename
<carlos> no, I just want to see the full process, not thinking yet on specific details
<carlos> from what you just described to me
<carlos> is the same thing we do right now
<danilos> and we should establish the relation between template and pofiles once they are added to queue
<carlos> get the tarball and fill the queue with all .pot and .po files included
<carlos> ok so until this step, the code would be the same, perhaps moving it to other parts of our tree, but same procedure
<danilos> well, yes, that's the point; it would work mostly the same for what we have already, yet allow easy extending to what we don't have (like Firefox, OOo, KDE PO, Zope...)
<danilos> I don't really see much wrong in the current procedure, to be honest
<danilos> except that I would move it outside the import queue code and generalize it
<carlos> so you would have a kind of adaptor
<carlos> for tarballs that will do what we do atm
<carlos> another for .po and .pot files that do nothing
<carlos> (if it's not a KDE PO or Zope one
<carlos> )
<danilos> the only problem I see with the current implementation is that I need to modify like 5-6 places and add a couple of lines on each of them, and when I develop a new importer, I duplicate much of the code from poimport.py
<danilos> that's right
<carlos> well, even if it's a KDE PO or Zope one, as we agreed the format change will be done at import time, not as a .po file 
<carlos> another for OOo that will split the file in smaller chunks
<danilos> well, for KDE PO file, we probably need to descend from POParser only
<carlos> etc
<carlos> etc
<danilos> that's right, but without duplication of database object creation
<carlos> sure 
<carlos> all them inheriting from a common class
<danilos> for Firefox, I ended up copying most of the import_po stuff, and just replacing relevant parts
<danilos> that's right
<carlos> and we only write the method to do the split
<carlos> ok
<danilos> well, all of that would be part of TranslationImport interface, that was my idea
<danilos> so, how do you feel about that?
<carlos> I see an easy optimisation to reduce wasted disk space, but let's leave that for a later optimisation
<carlos> That solves the problem with code duplication that you talk about
<carlos> but
<danilos> ?
<carlos> I still don't see how you plan to move the code from translation_import_queue that guesses the POFile and POTemplate where an entry should be imported (we should start thinking of renaming those tables...)
<carlos> I see that when you extract an entry
<carlos> you can try to guess it and link it 
<carlos> but
<danilos> well, it's easy to have a TranslationImport.template['evolution'].guessed_template property
<carlos> as you already pointed, when you need to do a translation domain change, the .pot file will not find a link
<carlos> sure
<danilos> and translation import queue will use the same method as now: you can override it
<carlos> and you call that before creating the entry in the queue, right?
<danilos> that's right
<carlos> ok
<carlos> let's see it this other way
<carlos> let's say you have a layout like gtk+
<danilos> ok
<carlos> where they use the package version as part of the path
<carlos> so you have something like gtk+-2.10.3/po/gtk20.pot
<carlos> and we have imported 2.10.2
<carlos> the automatic matching will not work here
<danilos> hum, I don't follow
<carlos> so you will not be able to do that link
<carlos> we had this problem with Edgy
<carlos> to link the new .pot file
<danilos> if we have imported 2.10.2, where do we get gtk+-2.10.3 from?
<carlos> is the new one
<carlos> that we are handling
<danilos> ah, ok
<danilos> we have sourcepackage and distrorelease here, right?
<carlos> we do TranslationImport.template['gtk20']... this has a problem: the translation domain is not always the same as the .pot filename, so we look for pot files based on their paths
<carlos> yeah, you know sourcepackagename and distrorelease
<danilos> that will appear in the queue as a separate entry, just like it appears now
<danilos> and gtk20-properties will as well
<danilos> i.e. I am still not getting what you are aiming at
<carlos> ok, but without a link to a POTemplate or POFile, right?
<danilos> that's right
<carlos> just like we do right now
<carlos> ok
<carlos> what I do atm is
<carlos> go to Edgy's gtk20 template
<carlos> and change the path
<carlos> optionally, I could link the .pot file with this POTemplate that I just fixed
<carlos> ok?
<danilos> ok, so you want us to automate that as well?
<carlos> no, it's not my point, we should do it, but it's not related to this discussion
<carlos> now, what happens with the .po files?
<carlos> with current code, the .po files will be visited again and this time we will be able to link them with POFiles
<carlos> because we find now a POTemplate in the same path
<danilos> well, that's the point I had above: we need to link them to this template queue entry once we create them as well
<carlos> hmm I see your point
<danilos> so, instead of doing "path matching" in queue entry code, we'd do that while adding queue entries
<danilos> which means that we might need another column in translationimportqueue
<danilos> rather, translationimportqueueentry ;)
<carlos> like template_entry?
<danilos> something like that, yes
<danilos> and then we can just directly approve them once template gets imported
<carlos> I see
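The template_entry linking just discussed could be sketched like this; the field and class names are hypothetical, not the actual TranslationImportQueueEntry schema:

```python
class QueueEntry:
    """Hypothetical import-queue entry.  PO entries from a tarball are
    linked, at queue time, to the template entry from the same tarball."""

    def __init__(self, path, librarian_ref, template_entry=None):
        self.path = path
        self.librarian_ref = librarian_ref    # shared tarball in librarian
        self.template_entry = template_entry  # set while adding entries
        self.status = 'needs review'


def approve_translations(entries, template):
    """Once a template entry is imported, directly approve the PO
    entries that were linked to it when the tarball was queued."""
    for entry in entries:
        if entry.template_entry is template:
            entry.status = 'approved'
```

This replaces the current re-guessing by path with an explicit relation, which is why a NULL constraint (or a separate link table, as carlos suggests below) matters for template entries themselves.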
<carlos> would you note that we need a trigger or something to check
<carlos> that the entry pointed to by template_entry has its own template_entry set to NULL?
<danilos> sure, I'll summarize this entire discussion in the spec when we are done, so I'll note that as well
<carlos> either that, or use an external table to represent this relationship so we don't need triggers
<carlos> Stuart should tell you the best solution
<carlos> ok
<carlos> I agree more or less with this solution
<carlos> but let's talk about some corner cases
<carlos> ok?
<danilos> sure
<carlos> (I think this one is easy)
<carlos> we get a tarball with a set of .po files and no .pot files at all
<carlos> that link will not be done
<carlos> which is fine
<carlos> now, we get a new version of that tarball that includes the .pot file
<carlos> your code should be able to detect the duplicated .po files and update those rows
<carlos> I think this corner case is not a big deal
<danilos> that's right
<carlos> ok, next one
<carlos> we implement a new layout support 
<carlos> for something that we already got imported into the queue
<carlos> so the .pot and .po files aren't linked
<danilos> bollocks, DELETE those, and reimport ;)
<carlos> that's not possible, a reimport requires a new Ubuntu package release
<danilos> we can also try to handle that using the same librarian reference and path's
<carlos> the DELETE is not needed, a new import will update those entries
<carlos> with that, you will need then to store references to the tarball
<danilos> if we go without extracting, but if we extract everything, then the only thing we can work with are paths
<carlos> instead of the content of the concrete entry
<carlos> ok
<carlos> so no extracting++
<carlos> after handling the queue
<carlos> you will need to revisit every entry in the queue grouped by its librarian reference
<carlos> and try again to detect its potemplate and pofile
<danilos> that's right, and make sure that logic for linking those in TranslationImport is separated, so it can be run just on paths
<carlos> hmmm
<carlos> I'm not sure about that last thing
<carlos> I mean, you are interested only in librarian links
<danilos> well, how would we otherwise do the matching after the import?
<danilos> so you think redoing the import would be better?
<carlos> once you do that, you can handle it the same way a new import is handled
<danilos> ok, sure, it's simpler
<carlos> because you already have code to deal with already existing entries
<carlos> otherwise the complexity would be higher than needed, wouldn't it?
<carlos> we are talking about opening a tarball and get the list of entries inside it
<carlos> but we are not fetching its content
<danilos> that's right, I agree
<carlos> if you see it as a performance problem, we can do what you suggest
<danilos> it shouldn't be too much of a problem, I believe
<carlos> also, to prevent long delays in the queue
<carlos> I think we should note the last time we checked a set of entries
<carlos> so we only try to guess the same entries once per day
<carlos> not sure if you understand what I mean
<danilos> sure
<carlos> ok, let me see if I find any other corner case based on what we found already....
<danilos> we also need to do the guessing iff we have a new template import entry
<carlos> for product imports?
<danilos> for tarballs and stuff
<carlos> other than products
<carlos> what's the point for that?
<carlos> products rely on manual imports
<carlos> so we could get a potemplate and later a tarball with languages
<carlos> well, the other way, first translations and later a template
<danilos> not sure I understand you
<carlos> if we get a tarball without template
<danilos> I meant that apart from checking entries only once a day, we also don't need to do that if there was no template
<carlos> and later a new upload
<danilos> ah, right
<carlos> oh, I see
<carlos> sorry, I misunderstood you :-P
<danilos> no problem ;)
<carlos> but, you need to do it anyway
<danilos> right
<carlos> just in case we already had a template imported
<danilos> anyway, who do you suggest that I ask to be my pre-implementation call reviewer? :)
<carlos> phone, sorry
<carlos> ok, back
<carlos> hmm
<carlos> well, I guess james or Bjorn would fit
<carlos> Steve usually suggests james or Bjorn for this kind of thing
<danilos> ok ;)
<danilos> I'll probably ask Bjorn, since I haven't worked with him so far ;)
<carlos> ok
<carlos> but prepare a spec update covering what we just talked about
<carlos> so he can read it before the call
<carlos> ok?
<carlos> let's take another 10-minute break and then start our other meeting about view restructuring
<carlos> ok?
<danilos> carlos: anyway, we also planned to discuss your view restructuring work, right?
<danilos> sure
<carlos> :-)
<carlos> I see you have lag...
<carlos> let's meet again at 18:35 local time, ok?
<danilos> yeah, that's fine
<carlos> so
<carlos> danilos: meeting time? (again...)
<danilos> carlos: yay ;)
<carlos> danilos: you'll want to read the email from kiko
<carlos> he sent it 31st August with the subject: "POFileTranslationView/POMsgSetView cleanup guide, was Re: Status of a few bugs"
<danilos> ok, sure
<danilos> ok, it's already marked as "important" in my mail folder ;)
<carlos> danilos: the important bits are the ones related with pomsgset.py
<carlos> :-P
<danilos> ok, I've read the msgset.py bits
<danilos> carlos: ping ;)
<carlos> ok
<carlos> sorry, I wasn't taking care of this channel O:-)
<carlos> so
<danilos> weren't you talking about not using several view classes?
<carlos> well
<carlos> the suggestion by kiko is not exactly that
<danilos> and cleaning up _*_submissions is basically what bug 30602 was all about ;)
<carlos> at the moment, the view for POMsgSet pages is used from POFile's view
<carlos> and that's broken
<danilos> ah, ok
<danilos> I see your point
<carlos> because we need to know whether we are using it from a POFile view or from a POMsgSet view directly
<carlos> in this case
<carlos> the shared bits are moved to a different view
<danilos> so, the plan is to use POMsgSetView from both?
<carlos> yeah
<carlos> but without using that view as a zope one
<danilos> ok, sounds much better
<carlos> so it's not linked to any web page
<carlos> it just holds information
<carlos> that the POFile and POMsgSet views will use
<danilos> yeah, understood
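The plan carlos describes, a plain helper object (not a Zope view) consumed by both page views, might look roughly like this. All class names are simplified stand-ins for the real POMsgSetView/POFileTranslationView/POMsgSetPageView, not the actual Launchpad code:

```python
class MessageInfo:
    """Plain helper object (not a Zope view): holds the per-message
    information that both page views need."""

    def __init__(self, msgid, translation=None):
        self.msgid = msgid
        self.translation = translation

    @property
    def is_translated(self):
        return self.translation is not None


class POFilePageView:
    """Page view for a set of messages: one MessageInfo per message."""

    def __init__(self, messages):
        self.rows = [MessageInfo(m, t) for m, t in messages]


class POMsgSetPageView:
    """Page view for a single message: reuses the same helper."""

    def __init__(self, msgid, translation=None):
        self.row = MessageInfo(msgid, translation)
```

Because the helper is not itself registered as a page, neither view needs a flag saying where it is being rendered from.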
<carlos> I still think that POFileTranslationView and POMsgSetPageView should use a common class from which they would inherit
<carlos> because they would share a lot of code
<carlos> because POFileTranslationView is for a set of messages and POMsgSetPageView is just for a single message
<carlos> but I guess we could leave it for later
<danilos> Yeah, I know
<danilos> the thing, as I see it, is that it would mostly be about template sharing
<carlos> so POFileTranslationView and POMsgSetPageView will have just the needed bits to render the web page
<carlos> well, we already have that done
<danilos> i.e. it would be mostly the same template for processing data from POMsgSetView
<carlos> we are already sharing the template
<carlos> and that doesn't need any change
<carlos> to go with this solution
<danilos> ok, so what code is useful for both, yet can't be moved to POMsgSetView?
<carlos> the main problem was with navigation links, where we had to check whether we were being used from a POFile view or from a POMsgSetView directly
<carlos> to generate them
<danilos> right, but I am wondering what would need sharing? navigation would be separate, so that's cool ;)
<carlos> tabindex generation, general statistics info (the one at the end of the form)
<carlos> the alternative language selector code
<carlos> more or less I think that's all, but anyway, we can leave it duplicated as it is atm and think about the inheritance later
<danilos> hum, some of them might belong in other classes (like statistics being part of POFile, no?)
<danilos> sure, I don't see much use of inheritance right away
<carlos> could be
<carlos> but don't worry, I don't want to handle that right now
<danilos> ok
<danilos> so, is there anything specific you want to discuss?
<carlos> we just need POMsgSetView to handle all information that is part of the small section of a message
<carlos> and whether you think this is a good thing to do ;-)
<danilos> ah, ok :)
<carlos> I think this solves some problems that our current model has
<carlos> for instance
<danilos> well, I believe it's a very good thing, it will make it even more clear for anyone delving into code in the future as well ;)
<danilos> i.e. I know it wasn't very simple for me to track down all the relations and dependencies this way, with views being used from views, etc. :)
<carlos> the link problem
<carlos> we currently solve that by adding a flag to the POMsgSetView class to note if it comes from a POFile
<danilos> and at the same time, you will be doing the _*_submissions() cleanup, probably reducing the number of queries as well ;)
<carlos> to give one kind of links or others...
<carlos> well
<carlos> not really
<danilos> yeah, which is a kludge, agreed
<carlos> or not sure...
<carlos> I should not change anything but the restructuring
<carlos> anything else should be deferred to another branch
<carlos> (I don't mind taking care of that task anyway, but not as part of that branch)
<danilos> ok, maybe you won't, but then I will later on, and it will be simpler for me :)
<carlos> yeah, that's the goal
<carlos> so do you think this solution would require less time to fix that bug?
<carlos> could you quantify it for me? you don't need to be precise ;-)
<carlos> because this would be another argument to do this now instead of post 1.0
<carlos> :-)
<danilos> well, it will probably drop from 3-6 days of active work to 2-4 days, not really sure
<danilos> the thing is that it requires some optimization work and profiling, and you never know how much that will take (just remember our edgy migration work in london ;)
<carlos> I know, but that's enough
<carlos> I think I could do the restructuring in around 4 hours + test fixes + move from POST to GET
<carlos> I guess a day and a half of work would get that done
<carlos> I just need to move code around
<carlos> and change the POST to be a GET
<carlos> + fix a lot of tests
<carlos> do you think this is something optimistic or realistic?
<danilos> realo-optimistic ;)
<carlos> in fact, add half day to the mix to cover me from being a bit lazy...
<carlos> so 2 days
<danilos> sure, sounds reasonable
<danilos> so, I guess with what we win (nicer code, altogether maybe 1 day more work), I believe it's worth it
<carlos> well, 1 day more work only related to your bug...
<carlos> with TranslationReview we have less extra work
<carlos> but it's hard for me to estimate what we would save
<carlos> I think we would save nothing, just clearer code
<carlos> which could save more time in the future...
<danilos> yeah, right
<danilos> so, if we stick to our time estimates, I believe it's the way to go
<danilos> what do you think?
<carlos> I think so, yes
<carlos> I'm going to write to Steve and the list about this
<carlos> in fact
<carlos> since I'm not going to change anything there, I'm not sure whether another meeting with a reviewer would be necessary
<carlos> the bigger part of this is part of your bug...
<carlos> and I'm not going to do it at this stage
<carlos> but later
<carlos> what do you think?
<danilos> yeah, sounds reasonable
<carlos> ok, thanks
<carlos> Is there anything else we should talk about?
<danilos> and you need to be careful with the tests and GET/POST switch
<carlos> I already did such a change for the message filtering code, so don't worry, I felt the pain already....
<carlos> we were doing POST for them until some months ago
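The point of the POST-to-GET switch is that filter state ends up in the query string, so filtered pages become bookmarkable and linkable. A hypothetical sketch (not the actual Launchpad form code; the helper name and parameters are invented):

```python
from urllib.parse import urlencode

def filter_url(base, **filters):
    """Build a bookmarkable GET URL for a message filter.
    Empty filter values are dropped; keys are sorted for stable URLs."""
    query = urlencode({k: v for k, v in sorted(filters.items()) if v})
    return base + ("?" + query if query else base[:0])
```

For example, `filter_url("/+translate", show="untranslated")` yields a URL that reproduces the same filtered view when revisited.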
<danilos> ok, great :)
<danilos> I think that's all, enough meetings for today :)
<carlos> yeah
<carlos> today was pretty intense with meetings...
<carlos> I didn't write any code :-(
<carlos> thanks for your input
* carlos -> out for today...
<carlos> danilos: do you need anything from me?
<danilos> no, that's all; enjoy your evening ;)
<carlos> same for you
<danilos> I am out myself, will be back later for some more action though :)
<carlos> ok
<carlos> cheers!
#launchpad-meeting 2008-09-17
<barry> #startmeeting
<barry> moooootbooootttttt
<sinzui> He's dead Jim
<barry> anyway.  welcome to this week's ameu reviewers meeting.  who's here today?
<sinzui> me
<abentley> you
<intellectronica> me
<barry> i'm a doctor not a software engineer!
<bigjools> me
<salgado> me
<flacoste> me
<bac> me
<barry> gmb sends his apologies
<cprov> me
<barry> BjornT, danilos ping
<barry> EdwinGrubbs: ping
<EdwinGrubbs> me
<danilos> me
<BjornT> me
<barry> i think that's everyone...
<barry> [TOPIC] agenda
<barry>  * Roll call
<barry>  * Naming conventions for unit test methods. `testFooBar`, `test_fooBar` and `test_foo_bar` all exist. Recommend settling on `testFooBar` and only changing existing ones as encountered in normal work. -- jml [<<Date(2008-09-10T13:48:09+1000)>>]
<barry>  * Reviewers remove requests from Pending Reviews when you start a review.  If you forget the next on-call reviewer may duplicate your work.  -- bac [<<Date(2008-09-16T10:07:09-0500)>>]
<barry>  * If there's time, the old boring script
<barry>    * Next meeting
<barry>    * Action items
<barry>    * Queue status
<barry>    * Mentoring update
<bac> jtv is having trouble getting into this channel
<barry> bac: dang
<barry> [TOPIC] naming conventions
<barry> i'm just going to paste this one since it was submitted by an asiapacker.  i don't have any background on it since my intarwebs went out monday night
<barry>  * Naming conventions for unit test methods. `testFooBar`, `test_fooBar` and `test_foo_bar` all exist. Recommend settling on `testFooBar` and only changing existing ones as encountered in normal work. -- jml [<<Date(2008-09-10T13:48:09+1000)>>]
<barry> i think it's fairly self evident
<barry> what do y'all think?
 * sinzui hugs PEP-8, then kicks it out the door.
<kiko_> I don't care myself as long as there's a single standard. :)
 * barry would like to see more pep-8 rather than less
 * abentley is conflicted, because test_fooBar is irregular, but fooBar would match the method name.
<flacoste> in all honesty
<sinzui> testFooBar is consistent with our rules. So I think it is the right decision
<flacoste> i find test_ easier to read for tests
<intellectronica> i think test_methodName is better
<flacoste> especially when you can use test_nameOfMethodIMTesting_and_special_consideration
<flacoste> intellectronica: +1
<barry> flacoste: very good point
<bac> intellectronica: +1
<BjornT> +1 to test_fooBar
<bigjools> +1
<salgado> +1
<barry> +1
<sinzui> +1
<barry> any objections?
<flacoste> test_fooBar_plus_special_case
<flacoste> ?
<flacoste> or test_fooBarPlusSpecialCase
<flacoste> ?
<flacoste> we often have more than one test for one method
<barry> flacoste: the former (IMO)
<intellectronica> test_fooBar_plus_special_case
<intellectronica> !
<bac> +1 on test_fooBarPlusSpecialCase
<barry> intellectronica: +1
<salgado> please!
<salgado> test_methodName_plus_special_case
<flacoste> salgado: +1
<barry> bac: why?
<bac> it is simpler and looks better to me.
<barry> any other comments?
<salgado> I vote on the former because it makes the methodName stand out from the rest
<flacoste> same rationale over here
<salgado> and because we already use underscores for test_
<flacoste> and it's more PEP-8 compliant
<barry> agreed
<bac> PEP-8 supports mixing camelCase and underscores?
<barry> bac: not really, but it's the price we pay for being zopey
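The convention being settled on (`test_` prefix, camelCase name of the method under test, then underscored words for the special case) would look like this; `Team` and `addMember` are toy names purely for illustration:

```python
import unittest

class Team:
    """Toy class under test (hypothetical, for illustration only)."""
    def __init__(self):
        self.members = []

    def addMember(self, name):
        if name in self.members:
            raise ValueError("already a member")
        self.members.append(name)
        return len(self.members)

class TestTeam(unittest.TestCase):
    # "test_" + camelCase name of the method under test...
    def test_addMember(self):
        self.assertEqual(Team().addMember("bac"), 1)

    # ...plus underscored words describing the special case exercised.
    def test_addMember_duplicate_raises(self):
        team = Team()
        team.addMember("bac")
        with self.assertRaises(ValueError):
            team.addMember("bac")
```

The underscores make the method name stand out from the qualifier, which is salgado's and flacoste's rationale above.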
<barry> okay, anyway, let's move on.  i'll forward the results of this discussion to the ml and we can decide from there
<barry> [ACTION] barry to forward results of test naming discussion to ml
<barry> [TOPIC] reviewers remove requests
<barry>  * Reviewers remove requests from Pending Reviews when you start a review.  If you forget the next on-call reviewer may duplicate your work.  -- bac [<<Date(2008-09-16T10:07:09-0500)>>]
<barry> bac: the floor is yours
<bac> last week and this week i reviewed a branch from the general queue on PendingReviews only to discover later each had already been reviewed.  yes, had i double-checked with the launchpad-reviews mailing list i could have avoided the duplicated work.  but reviewers need to be diligent about removing branches from the General Queue when they take them.
<sinzui> Wow, I did that last week too
<barry> bac: yes, especially now that we're back to using PR exclusively for the time being
<barry> duplicate work REALLY sucks
<intellectronica> bac: my sympathies :(
<bac> indeed.  of course, the duplicated review did raise some interesting issues, but it is still annoying.
<intellectronica> bac: reviewers should pay attention to this, but ideally i think reviewees should take care of that
<barry> bac: silver lining :)
<bac> intellectronica: if you're doing an on-call review off the GQ the reviewee may not be around.  the reviewer should move it to his queue.
<intellectronica> bac: yes, if the reviewee is absent then definitely
<bac> that's all.  just raising awareness that the problem exists.
<barry> bac: thanks
<barry> that's it for the new items.  since we have time i'd like to go hit the old agenda items, but i'm going to skip ahead
<barry> [TOPIC] mentoring update
<barry> we need a few mentors, one for rockstar and possibly soon for mars and leonardr
<barry> do we have any volunteers?
<barry> we currently have one mentat: abentley (who i'm mentoring)
<bac> i recall promising to step up for the next round.
<bac> i'll volunteer to mentor rockstar
<barry> bac: awesome thanks
<sinzui> I'm very busy for the next month. after that, I'm happy to take a mentat
<barry> sinzui: great.  leonardr and mars have not yet officially asked to be reviewers, so we have time to wait on that
<bac> is rockstar starting next cycle?
<barry> bac: i'd like him to
<bigjools> resistance is futile, they will be assimilated
<bac> ok.  i'll contact him after the meeting.
<barry> lol
 * barry thinks bigjools should change his nick to borgjools
<barry> bac: thanks
<bigjools> guffaw :)
<barry> [TOPIC] action items
<bac> after i mentor rockstar next cycle i'm going to request a one month sabbatical.
<barry> bac: from reviewing or mentoring?
<bac> both
<barry> bac: cool.  everybody needs sabbaticals now and then
<barry>  * barry will move the preimp discussion to the ml
<barry> i have a 1/2 composed email on this, so not done
<barry> [TOPIC] queue status
<barry> any comments?
<barry> i notice lots of crossed off branches in pending-reviews.  let's try to clean those up (he says as an offender)
<barry> anyway, that's all i have.  does anybody have anything not on the agenda?
<barry> well then, we can end early!  thanks everyone and have a good day
<barry> #endmeeting
<bac> barry: do you want to edit PR to remove the reference to MergeProposals?
<barry> bac: will do, thanks for the reminder
<bigjools> thanks barry, and BCTL, wow!
<barry> bigjools: yeah, wtf am i thinking?!
#launchpad-meeting 2008-09-18
<gary_poster> morning!
<mrevell> gary_poster: Hey
<Rinchen> me
<sinzui> you?
<sinzui> ewe?
<danilos> me
<flacoste> danilos: didn't hear the news?
<danilos> flacoste: nope
<flacoste> danilos: there is no Launchpad meeting anymore
<flacoste> danilos: unless you are the translations QA contacts
<Ursinha> flacoste, he is
<danilos> flacoste: and where is production QA meeting happening?
<Ursinha> danilos, it'll happen here
<Ursinha> in a few seconds, i presume
<danilos> Ursinha: ok, thanks
<Rinchen> me me
<danilos> I am not exactly sure what I should have ready for this meeting (i.e. QA status or something), so I have nothing :)
<Rinchen> hey danilos
<danilos> hey Joey
<Rinchen> danilos, I need to get a status on your RC item
<matsubara> #startmeeting
<MootBot> Meeting started at 13:01. The chair is matsubara.
<MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
<Rinchen> danilos, ping me over in -code and tell me please
<danilos> Rinchen: it should be RCFIXED
<matsubara> Welcome to this week's Launchpad Production Meeting. For the next 45 minutes or so, we'll be coordinating the resolution of specific Launchpad bugs and issues.
<matsubara> [TOPIC] Roll Call
<MootBot> New Topic:  Roll Call
<Rinchen> me
<Ursinha> me
<sinzui> me
<herb> me
<matsubara> me
<gary_poster> me
<beuno> me
<danilos> me
<flacoste> me
<matsubara> rockstar: ping?
<rockstar> I'm here.
<Ursinha> registry, translations, foundations, losas are here
<rockstar> Sorry, had a pup emergency.
<Ursinha> code is here
<mthaddon> me
<sinzui> Is Bugs represented?
<Ursinha> who's missing
<matsubara> bigjools: around?
<Ursinha> sinzui, it should be BjornT or intellectronica
<bigjools> I'll stand in for cprov since he's not here, but I have to leave shortly
<Ursinha> well, soyuz is here
<sinzui> Intellectronica is in a pub right now
 * bigjools waves
<bigjools> sensible man
<Ursinha> sinzui, i've noticed that
<matsubara> ok. let's get moving then
<matsubara> btw, who's the dba contacts?
<matsubara> contact, even
<flacoste> i am
<matsubara> all right, thanks flacoste
<matsubara> [TOPIC] Agenda
<MootBot> New Topic:  Agenda
<matsubara>  * Next meeting
<matsubara>  * Actions from last meeting
<matsubara>  * Oops report & Critical Bugs
<matsubara>  * Operations report (mthaddon/herb/spm)
<matsubara>  * DBA report (DBA contact)
<matsubara>  * Sysadmin requests (Rinchen)
<matsubara> so, next meeting same time ok for everyone?
<danilos> I'd prefer a few hour earlier
 * Rinchen pokes matsubara to change the topic in mootbot
<matsubara> oops, sorry
<flacoste> stub would also prefer a few hours earlier
<matsubara> [TOPIC] Next meeting
<MootBot> New Topic:  Next meeting
<danilos> actually, this one is really hard for me to attend to, especially since I've started coming for work earlier
<Ursinha> matsubara, i have to take all the suggestions and find the best time
<Rinchen> I propose doing what Ursinha just said
<matsubara> right, so, we'll discuss the next meeting in the list
<Ursinha> danilos, jtv told me that, and i'm considering this
<danilos> Ursinha: ok, cool
<matsubara> all right, moving on
<matsubara> [TOPIC] Actions from last meeting
<MootBot> New Topic:  Actions from last meeting
<danilos> I remember the last time it was because of New Zealanders, and I don't see any this time around so...
<rockstar> Well, that's not official yet.
<Rinchen> yes, we should be able to go earlier if it's more convenient
<matsubara> * stub to patch our fti regexp to avoid OOPSes (bug 174368) and discuss a proper fix with jtv
<ubottu> Launchpad bug 174368 in launchpad-foundations "Search query triggering error in tsearch" [Undecided,Confirmed] https://launchpad.net/bugs/174368
<Ursinha> and considering people's suggestions, it'll most probably be changed to earlier
<flacoste> no progress on that
<Ursinha> than now
<Ursinha> flacoste, stub wasn't here in the last meeting
<danilos> Ursinha: stub is away until Monday, afaik
<flacoste> yes
<flacoste> like i said: no progress on that
<flacoste> i'll check that up with him next week
<Ursinha> danilos, flacoste, thanks for pointing that
<Ursinha> flacoste, thanks
<matsubara> thanks flacoste.
<matsubara> [TOPIC] Oops report & Critical Bugs
<MootBot> New Topic:  Oops report & Critical Bugs
<matsubara> Today's oops report is about bug 271561 and OOPS-991SMPM37, OOPS-991EB136, OOPS-992C476.
<ubottu> Launchpad bug 271561 in launchpad-bazaar "OOPS calling __repr__ in xmlrpc method" [Undecided,New] https://launchpad.net/bugs/271561
<ubottu> https://devpad.canonical.com/~jamesh/oops.cgi/992C476
<flacoste> is that one for us or code?
<matsubara> flacoste: for code, but you could jump in and give the guys some help. they think it's related to Zope stuff
<flacoste> they always do :-)
<matsubara> :-)
<rockstar> :)
<Ursinha> :)
<flacoste> the OOPS seems unrelated though
<flacoste> i mean the oops reference is about Archive:+index
<matsubara> btw, OOPS-992C476 is another one that seems related to zope stuff
<ubottu> https://devpad.canonical.com/~jamesh/oops.cgi/992C476
<flacoste> err, sorry,
<rockstar> flacoste, I'll follow up with the rest of the team this evening, and let them know that you're willing to help
<flacoste> fortunately, we have a zope master that joined this week
<flacoste> everyone, meet gary_poster
<gary_poster> heh
<Rinchen> hi gary_poster!
<flacoste> gary_poster, meet everybody :-)
<matsubara> sorry, I don't have bugs for those yet as they showed up in today's report
<Ursinha> gary_poster, welcome :)
<gary_poster> hi everybody
<matsubara> rockstar, there is a double tilde in the URL in OOPS-991SMPM37. It looks quite
<matsubara> strange. Do you know why that might be happening?
<danilos> gary_poster: hey, welcome :)
<bigjools> howdy gary_poster
<gary_poster> thanks :-)
<matsubara> hi gary_poster, welcome!
<matsubara> I've asked salgado about OOPS-991EB136. salgado suspects it's caused by a bug
<matsubara> in IE causing the map to not render properly. I'll test it a bit and follow up
<matsubara> with salgado (updating the already filed bug or file a new one if necessary)
<flacoste> matsubara: /C476 seems to be a fallout of stub db policy branch
<flacoste> matsubara: so we will take it
<danilos> (so that's why we didn't start this meeting with "hi, I am an alcoholic": leaving a good impression on the new guy)
<rockstar> matsubara, I'm unclear as to what you're asking me.
<gary_poster> :-)
<matsubara> flacoste: all right. i'll file a bug and assign it to you after the meeting
<Ursinha> haha
<flacoste> matsubara: regarding code bug: "I lean towards the scanner."
<flacoste> that's from jml
<matsubara> rockstar: re: https://devpad.canonical.com/~matsubara/oops.cgi/2008-09-17/SMPM37 there's a double tilde in the URL which is quite strange
<rockstar> Yea, the scanner has been undergoing some changes recently, so it's probably an accurate guess.
<rockstar> matsubara, someone hacking the url?
<rockstar> Or possibly a bzr branch config problem.
<matsubara> well, that's a branch puller oops
<rockstar> Yea, that's confusing.
<matsubara> rockstar: I'll file a bug about it and we can try to figure out if it's a problem on our side or just broken bzr branch config
<matsubara> we don't have a Bugs team contact, but I'd like to ask about the api timeout bug
<matsubara> I'll have to follow that up with them later then.
<matsubara> btw, thanks for identifying the problem on that timeout flacoste
<flacoste> thanks goes to leonard
<flacoste> matsubara: for the record, i think we have a similar issue in answers
<matsubara> I'm done with OOPSes. if anything else shows up in the summaries today, I'll follow up with the teams
<Ursinha> great
<Ursinha> well, there are four critical bugs, three fix committed, one opened
<Ursinha> bug 269384
<ubottu> Launchpad bug 269384 in bugzilla-launchpad/bugzilla-3.2 "comment_ids parameter causes a DB error in Launchpad.comments() in bugzilla-launchpad 0.9-3.2" [Critical,New] https://launchpad.net/bugs/269384
<Ursinha> is a bug related to bugzilla
<matsubara> flacoste: thanks for letting me know. can you assign one of yours to investigate if it's a problem?
<flacoste> matsubara: you mean answers; it's not an issue yet because it's not exposed in the API, but I know the same algorithm is used there
<matsubara> flacoste: right. I'll open a bug task for it and add your comment there then so we won't forget
<flacoste> matsubara: thanks
<matsubara> btw, answers is under your jurisdiction isn't it?
<matsubara> Ursinha: I guess will have to check with the Bugs team about that one later
<Ursinha> matsubara, seems the best option
<Ursinha> i'll do that when intellectronica returns from pub
<matsubara> thanks Ursinha
<flacoste> matsubara: i think it's registry jurisdiction now
<Ursinha> i'll try
<sinzui> ha ha
<matsubara> flacoste: oh, I wasn't aware.
<matsubara> the one man show!
<sinzui> I am happy to take Answers
<matsubara> sinzui: cool! thanks!
<flacoste> matsubara: well registry team has just been formed, so it's normal
 * sinzui was in Answer bugs looking for a dup of the fti OOPs anyway
<matsubara> right. moving on
<matsubara> [TOPIC] Operations report (mthaddon/herb/spm)
<MootBot> New Topic:  Operations report (mthaddon/herb/spm)
<herb> * 2008-09-11 - Cherry pick r6972
<herb> * Had a couple of incidents of codebrowse needing to be restarted (2008-09-12, 2008-09-15)
<herb> * Had a couple of incidents of app servers dying and leaving a stale PID file (2008-09-15, 2008-09-17)
<herb> * Sent a core dump of the last lpnet app server death to flacoste for analysis
<herb> * 2008-09-17 - Roll out 2.1.9
<herb> * Have retired authserver(s) and separate restricted librarian
<herb> * Pending DB query on LPS from jtv still needs approval (or otherwise)
<herb> That's it from Tom, Steve and me unless there are any questions.
<flacoste> herb: aargh, i completely forgot that one, thanks for reminding me :-/
<herb> flacoste: no problem. :)
<Rinchen> herb, rockstar - is the codebrowse issue still being worked on by mwh?
<rockstar> Rinchen, I know there was a production issue last night, but I think jml was working on it.
<herb> Rinchen: I'm not aware if it is. I haven't been as tenacious about it as perhaps I should have been.
<Rinchen> k, thanks
<matsubara> thanks herb
<matsubara> [TOPIC] DBA report (DBA contact)
<MootBot> New Topic:  DBA report (DBA contact)
<matsubara> flacoste: stage is yours
<flacoste> that will be a short one
<flacoste> replication scripts branch was cut-off when PQM went release-critical
<flacoste> they'll land once stub is back and PQM re-opens
<flacoste> which means we should see some testing of replication on demo next week
<flacoste> EOT
<cprov> me (apologies, I'm late, because I'm a moron !)
<rockstar> Wow, that was short.
<Rinchen> yay! finally testing on demo
<herb> flacoste: how did we miss the xmlrpc session user?
<matsubara> thanks flacoste
<flacoste> herb: because the session was not used previously
<herb> ah
<flacoste> the multi-store support makes it used on every request
<flacoste> which meant the xml-rpc server needed it
<matsubara> [TOPIC] Sysadmin requests (Rinchen)
<MootBot> New Topic:  Sysadmin requests (Rinchen)
<herb> flacoste: thanks.
<Rinchen> Is anyone blocked on an RT or have any that are becoming urgent?
<flacoste> sinzui has one
<flacoste> well, not urgent
<flacoste> but a new one
<flacoste> that came in late
<sinzui> One that we want settled by next week
<Rinchen> lay the number on me
<flacoste> sinzui: i wanted to comment on that
<flacoste> but didn't have the number
<flacoste> i don't think it's required for landing
<flacoste> only for testing
<Rinchen> ok, get that to me when you can with a due date
<matsubara> ok. is that it?
<matsubara> any comments before closing the meeting?
<Rinchen> think so, back to you matsubara
<matsubara> thanks Rinchen
<danilos> matsubara: thanks for running this first instance of the production meeting :)
<Rinchen> thanks matsubara. You'll be talking to Tom B about attendance ?
<matsubara> Rinchen: Ursinha will do :-)
<Ursinha> Rinchen, i'll do
<matsubara> danilos: np. will sort out a better time for you dude
<matsubara> Thank you all for attending this week's Launchpad Production Meeting. See the channel topic for the location of the logs.
<Ursinha> sweet
<matsubara> #endmeeting
<MootBot> Meeting finished at 13:32.
<danilos> thanks and good night
<Ursinha> danilos, bye, thanks for attending
<rockstar> Thanks!
<Ursinha> thanks all for attending :)
#launchpad-meeting 2009-09-16
<barry> #startmeeting
<henninge> Hey barry!
<MootBot> Meeting started at 09:04. The chair is barry.
<MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
<barry> hello everyone and welcome to this week's ameu reviewer's meeting.  who's here today?
<BjornT> me
<abentley> me
<henninge> me
<bac> me
<noodles775> moi
<sinzui> me
<gary_poster> me
<EdwinGrubbs> me
<henninge> noodles775: toi?
<flacoste> me
<noodles775> henninge: nah, it's just "Australish"
<adeuring> me
<intellectronica> me
<henninge> ;-)
<EdwinGrubbs> noodles775: instead of "ping" do you say "Oi!"
<noodles775> You got it Ezza ;)
<barry> deryck, salgado cprov bigjools danilo-afk allenap mars ping
<barry> [TOPIC] agenda
<MootBot> New Topic:  agenda
<cprov> me
<allenap> me
<bigjools> meh
<deryck> me, sorry
<salgado> me
<barry>  * Roll call
<barry>  * Action items
<barry>  * UI review call update
<barry>  * new heading/title rules [barry]
<barry>  * 4-space indents for CSS styles [barry]
<barry>  * Peanut gallery (anything not on the agenda)
<barry>  
<barry>  
<barry> [TOPIC] * Action items
<MootBot> New Topic:  * Action items
<barry> [TOPIC] * Action items
<MootBot> New Topic:  * Action items
 * barry thanks mootbot
<barry>  * gary_poster and barry will transfer review guidelines from the old wiki and old old wiki to the new wiki
<gary_poster> postponed to post 3.0
<barry> sorry, gary_poster asked me to get mrevell a list of pages to pull over but 3.0 is swamping me
<danilos> me
<barry> what gary_poster said :)
<barry>  * cprov to update guidelines to clarify how code sensitive to env changes should be written
<cprov> oi! sorry I forgot about it.
<barry> cprov: we'll just carry that one over.  i'm hoping we'll be less stressed after this week or maybe next
<barry> so no worries
<cprov> barry: sure, thanks.
<barry> [TOPIC]  * UI review call update
<barry>  
<MootBot> New Topic:   * UI review call update
<barry> let me see if beuno is around...
<intellectronica> i think he's on holiday
<barry> intellectronica: ah.
<barry> right
<intellectronica> one important thing we discussed is how overburdened ui reviewers are
<barry> intellectronica: yes.  and that this is an extra-ordinary time because of the push for 3.0
<intellectronica> to make it a bit easier until 3.0 is out, it's ok for people to land very simple changes (mechanical conversions and such) with ui=rs
<barry> beuno did say he was going to start working with a few people to get them ui graduated
<intellectronica> i think martin has written to the list about that
<barry> yep.  intellectronica thanks
<barry> that's all i can think of.  intellectronica, anything else?
<intellectronica> i think that's it
<sinzui> Does everyone know that Blueprint pages are ui=rs?
<noodles775> Woops, I just landed the sprint-add mechanical change with r=rs and ui=rs... sorry.
<barry> noodles775: right, still need a normal code review
<noodles775> Yep.
<bigjools> yeehaw
<intellectronica> sinzui: does that include volunteers? i bet there are many people out there itching to give us a hand with blueprint conversions. if that happens, can we land their changes without a proper ui review?
<sinzui> noodles775: That is better than me. I was juggling 4 branches yesterday. I landed one by accident. I am glad bac approved it
<noodles775> heh
<barry> i promised bac to land one thru ec2 and then promptly pqm-submitted it ;)
<sinzui> intellectronica: I do not have an answer
<barry> intellectronica: for volunteers, we still have to land the branch so i think a quick look couldn't hurt
<flacoste> intellectronica, sinzui: if they are mechanical changes, yes
<flacoste> i still think we should do a UI review for non-mechanical blueprints UI change
<danilos> intellectronica, I've got 12 templates converted to generic-edit for blueprints, already reviewed, should land soon
<flacoste> doesn't need to be much involved
<danilos> intellectronica, i.e. make that 12 templates removed and replaced with generic-edit.pt
<intellectronica> danilos: you're a star
<sinzui> intellectronica: blueprint 2.0 pages will not work when we release, so doing a bad conversion job fixes more than will be broken in production
<flacoste> there are only 12 unclaimed blueprint templates on https://dev.launchpad.net/VersionThreeDotO/BlueprintsConversion
<barry> moving on..
<sinzui> The two blueprint pages I converted yesterday had whitespace issues. text overlapped text. I had to make some markup changes to make the page lay out properly
<barry> [TOPIC] * new heading/title rules [barry]
<barry>  
<MootBot> New Topic:  * new heading/title rules [barry]
<barry> i want to make sure any question you have about the new rules get answered
<danilos> barry, excellent
<barry> i know that the breadcrumbs/titles/headers need some refinement, but that will have to wait until after 3.0.  the rules we have now will not change until after 3.0
<barry> so... ask your questions now! :)
<bigjools> plz barry just fix everythin kthxbye
<noodles775> barry:  did you see deryck's email?
<flacoste> that's actually a job for salgado :-)
<deryck> barry, beuno, and I talked off list about breadcrumbs in titles a lot already
<barry> noodles775: yes, but only in the sense that it made my inbox grow larger.  i haven't read it yet.  but deryck and i spoke this morning
<noodles775> Great.
<barry> salgado is the breadcrumb man
<deryck> I just replied to my own mail with the results of our call
<barry> deryck: cool, thanks
<deryck> np
<danilos> ok, so, I am wondering about the h2/h1 issue... it seems the h2 with context.title is gone on most pages? is that expected?
<danilos> or, how do we make it appear again?
<barry> danilos: yes
<noodles775> Add view.label?
<danilos> (sorry if I didn't pay close attention to the new rules page)
<barry> noodles775: yes, it can be something like:
<barry> @property
<barry> def label(self): return self.context.label
<danilos> noodles775, right, I am looking into figuring these out for all the translations pages now that we've basically got everything converted
<barry> noodles775: i would like to make that simpler (e.g. fall back to context.title) but not for 3.0
<noodles775> Yep - that'd be great!
<bigjools> we get rs=barry for fixing stuff like this I think you said?
<intellectronica> barry: maybe have a mixin that does that?
<barry> intellectronica: i think it can be done on LaunchpadView and/or base-layout.pt
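The `view.label` pattern barry sketches above, including the context.title fallback he would like post-3.0, can be written out like this (class names here are illustrative, not Launchpad's real view classes):

```python
# Minimal sketch of the heading pattern from the discussion: the shared
# layout renders view.label as the page's <h1>.  Context/ExampleView are
# hypothetical stand-ins for a Launchpad content object and LaunchpadView.
class Context:
    title = "Example project"          # context has a title but no label

class ExampleView:
    def __init__(self, context):
        self.context = context

    @property
    def label(self):
        # The fallback barry suggests making automatic after 3.0:
        # use the context's label if it has one, else its title.
        return getattr(self.context, "label", None) or self.context.title

print(ExampleView(Context()).label)    # prints: Example project
```

With a fallback like this on a shared base class (or in base-layout.pt, as barry notes), most views would not need to define `label` at all.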
<barry> bigjools: sorry, what was the "stuff like this" in that sentence?
<bigjools> barry: fixing existing 3.0 changed pages to conform to the new heading rules
<barry> bigjools: right, +1
<bigjools> we were waiting for your branch to land so we could do that
<barry> yep.  now you can jfdi! :)
<bigjools> thanks for the extra work :)
<barry> :)
<flacoste> actually, it's less work
<flacoste> because otherwise, there were tweaks on a per-page basis
<bigjools> for future changes, yes
<flacoste> now, it's the same thing everywhere
<barry> updating pagetests might not be fun
 * flacoste mumble about approval tests
<barry> which is actually the main reason we aren't going to change the <title> rules until post 3.0
 * bigjools dreams of killing pagetitles.py
<sinzui> A lot of converted pages appear to be using pagetitle :(
<barry> please everyone, take a healthy swing with a large baseball bat at pagetitles.py
<sinzui> We'll know who cheated on day 1 of week 0 when pagetitles is deleted.
<deryck> heh
 * bigjools uses a cricket bat instead
 * sinzui uses a small cannonette
<barry> bigjools: just give it two healthy swings then
<bigjools> :)
<sinzui> um
<barry> anything else on titles/headings/breadcrumbs?
<barry> cool
<sinzui> our designer is not available and I have not seen a design for the front page. Are we changing the page macro and declaring victory?
<bigjools> while we're on the subject of UI, some of the two-column <dl>s are a bit borked if an item uses an icon
<sinzui> leading icon?
 * bigjools looks for an example
<bigjools> yes leading
<bigjools> sinzui: https://edge.launchpad.net/ubuntu/+source/kdepim/
<bigjools> anyway, OT for this meeting I guess
<barry> k, let's move on
<barry> [TOPIC]  * 4-space indents for CSS styles [barry]
<MootBot> New Topic:   * 4-space indents for CSS styles [barry]
<barry> fairly simple one i think.  most of our css in styles-3-0.css are 4 space indents
<bigjools> JFDI
<barry> any objections to keeping 4 space indents?
<barry> 5
<barry> 4
<noodles775> Nope - guessing it was just 2-spaces from pre-minimization days?
<barry> 3
<sinzui> keeping it? I intentionally set it to 4, who put something else
<barry> sinzui: dunno
<noodles775> sinzui: there's old stuff in style.css with 2 spaces I think.
<barry> rs=me on fixing them for the next person to touch the file
<barry> 4 spaces it is
<barry> [TOPIC] peanut gallery
<MootBot> New Topic:  peanut gallery
<sinzui> There should not be; stuff requires porting from specific CSS to generic
 * bigjools has a peanut
<barry> bigjools: shuck it!
<bigjools> ok
<bigjools> real quick, but I sometimes see some obtuse page test code that does this:
<sinzui> There should not be a zillion p.<class_that_is_really_useful_but_can_only_be_used_by_paragraphs>
<bigjools>  >>> browser.getLink("xxx").click()
<bigjools>  >>> print browser.url
<bigjools>  http://launchpad.dev/blah
<abentley> bigjools: Then take some pictures of it, and you'll have your own peanut gallery.
<bigjools> which causes an unnecessary page load
<bigjools> you can just do:
<bigjools>  >>> print browser.getLink("xxx").url
<bigjools> instead
<bigjools> EOT
<BjornT> bigjools: shouldn't the test make sure that the link leads to the right page?
<barry> bigjools: oh, you mean if you aren't doing anything else on the target page?
<bigjools> BjornT: they are the exact same test
<sinzui> bigjools: barry: I think there is a misunderstanding here
<bigjools> barry: yes
<BjornT> bigjools: no. the first one makes sure the link isn't a 404. your version doesn't
<bigjools> BjornT: if the test is doing nothing else, my version is better.  Loading a page and doing nothing is a bad test.
<sinzui> I do what bigjools is suggesting when I test that a view provides expected content, but a story is about link traversal
<BjornT> bigjools: well, it still depends on the intention of the test
<bigjools> it should ideally have a separate test for that page
<bigjools> BjornT: yes, agreed
<bigjools> but it's just something to look out for
<bigjools> I am on the rampage against slow tests
<bigjools> and this is low-hanging fruit for a big gain
<sinzui> bigjools: most of the slow tests are pagetests that are checking the contract of a view. Convert them to a unittest or a doctest of the view.
<barry> sinzui: +1  view tests rock
<bigjools> yes, I think that's a separate issue to this though
<bigjools> if another page test file tests the page, you don't need to load it somewhere else just to check a link exists
<bigjools> anyway, that's it
<sinzui> bigjools: right. That is why I argue we have far too many pagetests
<barry> bigjools: cool thanks
<barry> 3m left.  anyone else?
<BjornT> bigjools: well, if some other test makes sure you can navigate to that page, yes. that's what really is happening there. from page A we can reach page B
<barry> okay, i think we're done.
<barry> #endmeeting
<MootBot> Meeting finished at 09:44.
<noodles775> Thanks barry!
<barry> thanks everyone
<deryck> thanks barry
<bigjools> BjornT: I don't think you need to click through to ascertain that though
<bigjools> cheers barry
<bigjools> BjornT: if the other test opens the same URL
<henninge> bigjools: I don't get the core of the discussion here.
<BjornT> bigjools: again, it's testing that you can navigate to the page, not that you can open it. (and again, it depends on the intention of the test, which isn't always easy to know)
<henninge> bigjools: are you saying that 'print browser.url' triggers a page load like 'click()' does?
<bigjools> all you need to know is that the link exists
<bigjools> henninge: no, it doesn't, that's the point
<bigjools> if the link exists then you can assume it works
<BjornT> bigjools: what if the link is a 404?
<bigjools> *provided* you have another test that opens the same URL
<BjornT> bigjools: the point with test is that you don't have to assume that things work, you can make sure ;)
<bigjools> if you don't open the link anywhere else then yes, you must click it
<BjornT> bigjools: and provided you keep the two tests in sync
<bigjools> that's easy enough
<BjornT> bigjools: it is? how?
<bigjools> but it also separates tests in a nicer way
<bigjools> both tests will fail if you change the URL
<sinzui> bigjools: I don't think this is an issue is we separated the test of the view's contracts and expected output from the simple story of link traversal
<sinzui> s/is/if/
<BjornT> bigjools: not necessarily. what if the page name is hardcoded in the template?
<bigjools> BjornT: I don't understand the point?
<BjornT> bigjools: page A links to page B (having a <a href="B">B</a> in the page. you have a test that opens B.
<BjornT> bigjools: what happens if you rename B to C, forgetting to change the link in page A?
<bigjools> nothing changes from the scenario I posted in the meeting
<bigjools> it would fail in the same way
<bigjools> ie not fail :)
<BjornT> bigjools: no. the original version would fail. your version wouldn't.
<BjornT> bigjools: click() fails if it goes to a 404
<bigjools> yes, but we always redirect from old URLs.  if we don't then that's bad
<bigjools> but I see your point
<BjornT> bigjools: my main point is that it's not always a good idea to fix those cases. pagetests should test workflows, not single pages
<BjornT> bigjools: there are many cases where we don't redirect from old URLs
<bigjools> I still think we need to avoid extra page loads
<bigjools> they are expensive
<bigjools> how often do we change URLs?
<BjornT> bigjools: you're suggesting we should stop testing for things we only change rarely?
<sinzui> bigjools: What is wrong with my suggestion to separate content testing in a view test from a browser test that checks links?
<bigjools> sinzui: nothing, it's great
<bigjools> BjornT: rarely, or never?
<BjornT> bigjools: both. there's no such thing as 'code that will never change' ;)
<bigjools> BjornT: have you looked at Soyuz lately? :)
<bigjools> </joke>
<BjornT> bigjools: my main point is, we shouldn't blindly reduce test coverage to make things faster
<bigjools> BjornT: it's a case-by-case consideration, I understand your point and agree, but I think that in some cases the test is slow for no reason
<BjornT> bigjools: right, i agree, it's a case-by-case consideration, that was where i was going
<bigjools> so we agree in our agreement :)
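The cost difference bigjools and BjornT are weighing can be modeled with a toy browser. This is not zope.testbrowser; the class below merely mirrors its `getLink`/`click`/`url` names to show why reading a link's URL is cheap while clicking it is not:

```python
# Toy model of the pagetest pattern under discussion.  Reading Link.url
# performs no request; Link.click() costs a full page load (which is also
# what catches a 404, per BjornT's point).
class Link:
    def __init__(self, browser, url):
        self.browser = browser
        self.url = url                    # inspecting .url is free

    def click(self):
        self.browser.open(self.url)       # following the link loads a page

class Browser:
    def __init__(self):
        self.page_loads = 0
        self.url = None

    def open(self, url):
        self.page_loads += 1              # each open is an expensive request
        self.url = url

    def getLink(self, text):
        # A real browser would parse the current page for the link text.
        return Link(self, "http://launchpad.dev/blah")

b = Browser()
print(b.getLink("xxx").url)               # cheap: zero page loads so far
b.getLink("xxx").click()                  # expensive: one page load
print(b.page_loads)
```

Both sides of the agreement fit this model: `print browser.getLink(...).url` suffices when another test already opens the target URL; `click()` is still needed when the traversal itself is what the story tests.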
<barry> #startmeeting
<MootBot> Meeting started at 17:30. The chair is barry.
<MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
<barry> hi guys
<rockstar> ni!
<wgrant> me
<thumper> hi
<barry> jml, mwhudson ping
<jml> hi
<barry> so.  let's start with a recap of ameu
<mwhudson> hi
<barry> all updates to blueprint pages and all mechanical changes are ui=rs
<barry> they still need a code review though
<barry> wgrant: that includes volunteers :)
<mwhudson> blueprints?  what's that again? :)
<barry> :D
<barry> use 4 space indents in css files
<wgrant> barry: Tempting, tempting...
<barry> that's it!
<mwhudson> cool, short and sweet
<barry> thumper: has something for today...
<rockstar> thumper landed a new toy today.  He should talk about it.
<thumper> :)
<thumper> r9475 of devel has a new method on TestCase
<thumper> def assertStatementCount(self, expected_count, function, *args, **kwargs):
<thumper> it uses the StormStatementRecorder
<thumper> (also in r9475)
<thumper> you need to be careful about the state of the store cache
<jml> thumper, wow, that's a great idea :P
<thumper> jml: :)
<thumper> but it is good for asserting that you aren't hitting the DB when you don't expect to
<mwhudson> thumper: can you use the StormStatementRecorder to find out more about the actual queries than just the count?
<rockstar> thumper, can we find a way to get the information from StormStatementRecorder into every HTTPResponse for our dev server?
<mwhudson> although i'm having a hard time thinking about why you'd want that, now i've said it...
<thumper> mwhudson: if the count doesn't equal, it prints out the actual statements in the fail message
<barry> thumper: when you say "you need to be careful about the state of the store cache" what does that mean?
<mwhudson> thumper: cool
<thumper> barry: if you are creating objects in the test, then they are in the cache
<thumper> barry: so factory.makeBranch will already have the branch owner in the cache
<thumper> whereas if you have just loaded the branch itself, it won't be
<thumper> so
<thumper> you need to do something like:
<thumper> store = Store.of(obj)
<thumper> store.flush()
<thumper> store.reset()
<thumper> reload the obj somehow using a utility
<thumper> then assert counts
<thumper> the store.reset() removes the hidden storm store attribute of the object
<thumper> so you can't just use: store.reload(obj)
<thumper> it doesn't work
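The cache effect behind thumper's flush/reset dance can be shown with a toy store. This is not Storm; it only models the point that factory-created objects sit in the cache, so queries against them look free until the cache is cleared:

```python
# Toy illustration (not Storm) of why store.reset() matters before
# counting statements: a warm cache hides real database queries.
class Store:
    def __init__(self):
        self.cache = {}
        self.queries = 0

    def add(self, key, obj):
        self.cache[key] = obj             # factory-made objects land here

    def get(self, key):
        if key not in self.cache:
            self.queries += 1             # only a cache miss hits the DB
            self.cache[key] = "row:%s" % key
        return self.cache[key]

    def reset(self):
        self.cache.clear()                # what store.reset() achieves

store = Store()
store.add("branch-owner", "cached owner")  # like factory.makeBranch
store.get("branch-owner")
print(store.queries)                       # 0 -- cached, misleadingly cheap
store.reset()
store.get("branch-owner")
print(store.queries)                       # 1 -- cold cache, query really runs
```

Hence the sequence in the discussion: flush, reset, reload the object through a utility, and only then assert counts, so the test measures what production would actually pay.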
<barry> thumper: this sounds cool even today when i'm more fully awake :)  please email launchpad-dev to let people know about it and/or add it to our dev wiki
<thumper> this _may be_ solvable with enough storm-fu
<rockstar> thumper, I think the landscape guys might like to know about it too.
<thumper> perhaps
<thumper> I'm using it to confirm that my "priming of the storm cache" actually does what I expect
<barry> thumper: this is cool, thanks for adding it, and please do let everyone know about it
<thumper> ok
<barry> anything else guys?
<rockstar> Not from me.
<thumper> that's it from me
<barry> wgrant, jml, mwhudson ?
<jml> not from me
<mwhudson> nope
<barry> great, thanks everyone!
<wgrant> no
<barry> #endmeeting
<MootBot> Meeting finished at 17:42.
 * rockstar takes the dog for a walk.
#launchpad-meeting 2009-09-17
<matsubara> #startmeeting
<MootBot> Meeting started at 10:00. The chair is matsubara.
<MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
<matsubara> Welcome to this week's Launchpad Production Meeting. For the next 45 minutes or so, we'll be coordinating the resolution of specific Launchpad bugs and issues.
<matsubara> [TOPIC] Roll Call
<MootBot> New Topic:  Roll Call
<rockstar> ni!
<allenap> me
<mbarnett> whee!
<Chex> hello
<matsubara> Apologies from Ursinha and Stuart
<matsubara> gary_poster, bigjools_, danilo-afk2: ho
<matsubara> s/ho/hi/
<matsubara> sinzui, hi
<gary_poster> mo, me
<sinzui> me
<matsubara> ok, soyuz and translations are awol. let's move on and they can join later
<matsubara> [TOPIC] Agenda
<MootBot> New Topic:  Agenda
<matsubara>  * Actions from last meeting
<matsubara>  * Oops report & Critical Bugs & Broken scripts
<matsubara>  * Operations report (mthaddon/Chex/spm/mbarnett)
<matsubara>  * DBA report (stub)
<matsubara>  * Proposed items
<matsubara> [TOPIC] * Actions from last meeting
<MootBot> New Topic:  * Actions from last meeting
<matsubara>     * barry to continue debug on bug 403606 after finishing 3.0 UI stuff
<matsubara>     * intellectronica to take a look or find someone to take a look on bug 408738 after finishing 3.0 UI stuff
<matsubara>     * chex to trawl logs and add request that caused 500 error to bug 422960
<matsubara>         * spm added more info to the bug report
<matsubara>     * ursinha to file a bug for OOPS-1345G2533 and coordinate a fix with gary/stub
<matsubara>         * Filed bug 427397
<ubottu> Launchpad bug 403606 in launchpad-registry "ExpatError errors should be handled to not generate the OOPSes" [High,Triaged] https://launchpad.net/bugs/403606
<matsubara>     * matsubara to trawl logs related to high load on edge yesterday and ping Chex about it
<ubottu> Launchpad bug 408738 in malone "OOPS when rendering bug activity" [High,Triaged] https://launchpad.net/bugs/408738
<ubottu> Launchpad bug 422960 in launchpad-foundations "appear to be failing to record oops for all +translate HTTP 503 errors" [Undecided,New] https://launchpad.net/bugs/422960
<ubottu> https://lp-oops.canonical.com/oops.py/?oopsid=1345G2533
<ubottu> Launchpad bug 427397 in launchpad-foundations "Search triggering error in tsearch query again" [Low,Triaged] https://launchpad.net/bugs/427397
<sinzui> matsubara: I looked into Bug  403606. Still no progress. This needs barry and a losa on staging I think
<matsubara> [action]  * matsubara to trawl logs related to high load on edge yesterday and ping Chex about it
<MootBot> ACTION received:   * matsubara to trawl logs related to high load on edge yesterday and ping Chex about it
<matsubara> i still have to do that
<matsubara> sinzui, shall I re-add the action item? I'm assuming barry is still busy with 3.0 stuff?
<bigjools> me
<matsubara> hi bigjools, welcome
<bigjools> sorry I'm late
<sinzui> yes. I expect him to have time next week
<matsubara> cool. thanks
<matsubara> [action]  * barry to continue debug on bug 403606 after finishing 3.0 UI stuff
<MootBot> ACTION received:   * barry to continue debug on bug 403606 after finishing 3.0 UI stuff
<ubottu> Launchpad bug 403606 in launchpad-registry "ExpatError errors should be handled to not generate the OOPSes" [High,Triaged] https://launchpad.net/bugs/403606
<allenap> matsubara: None of the Bugs team has time right now to address bug 408738 sadly; we're all maxed out. I expect we won't get to this until after 3.0.
<ubottu> Launchpad bug 408738 in malone "OOPS when rendering bug activity" [High,Triaged] https://launchpad.net/bugs/408738
<matsubara> allenap, ok. I'll retarget to 3.1.10 then. do you want to keep the action item as reminder or should I remove it/
<allenap> matsubara: Remove it. If it's in the milestone it's reminder enough :)
<allenap> Thanks.
<matsubara> cool. thanks allenap
<matsubara> let's move on
<matsubara> [TOPIC] * Oops report & Critical Bugs & Broken scripts
<MootBot> New Topic:  * Oops report & Critical Bugs & Broken scripts
<matsubara> sinzui, can you take a look at https://bugs.edge.launchpad.net/launchpad-registry/+bug/429802 ?
<ubottu> Launchpad bug 429802 in launchpad-registry "Merged teams showing up in the web ui" [Undecided,Triaged]
<matsubara> sinzui, and assign/schedule it please
<matsubara> there was a bunch of DisconnectionErrors last week
<matsubara> which might be related to the outage we had
<sinzui> matsubara: We know what we are going to do with that. It's the unassignment scheduler for deactivated and suspended accounts
<matsubara> sinzui, what's that?
<sinzui> There are several manifestations of this problem. The solution is a nightly cronjob that cleans up the tertiary data left behind after the account was changed.
<matsubara> I see.
<matsubara> sinzui, looks like you have everything sorted :-)
<sinzui> matsubara: This will fix everywhere you see a deactivated person/team in a list
<matsubara> cool. so probably something for 3.1.10?
<sinzui> matsubara: yes, but then we started the release and redesign so I did not do it.
<sinzui> matsubara: I will create a spec today for it. I think 3.1.10 or 3.1.11
<matsubara> sinzui, thank you
<matsubara> On the critical bugs front, we have only one, which is fix committed
<matsubara> and on the scripts failures front
<matsubara> we have a bunch which failed this week :-(
<matsubara> the rosetta-poimport issue, which danilo is taking care of
<matsubara> checkwatches also hung and spm/gmb debugged and replied. it's all good now
<matsubara> process-pending-packagediffs which julian and cprov took care of with help from steve
<matsubara> and we now have a rosetta-approve-imports which is failing
<matsubara> probably related to the poimport failure
<matsubara> since danilo is not here, I'll ask him later
<danilo-afk2> matsubara, poimport has been fix released, approve imports unrelated to rosetta-approve-imports, and the latter should be fixed as well by now
<matsubara> [action] matsubara to ask danilo about rosetta-approve-imports failures
<MootBot> ACTION received:  matsubara to ask danilo about rosetta-approve-imports failures
<bigjools> fyi the process-pending-packagediffs failures are caused by a bug in "diff" itself!
<matsubara> hi danilo-afk2
<danilo-afk2> hi, henninge was supposed to be here for the meeting, I am supposed to be afk
<matsubara> thanks for the update. it'd be awesome if you could reply to the "Scripts failed to run: loganberry:rosetta-approve-imports" email sent to the list
<matsubara> danilo-afk2, I pinged both
<danilo-afk2> matsubara, I got sms from him that he forgot about it and went out, sorry for the lack of participation from translations, /me goes afk again
<matsubara> danilo-afk2, hmm ok. thanks
<matsubara> I think that's all for this section
<matsubara> thanks everyone
<matsubara> [TOPIC] * Operations report (mthaddon/Chex/spm/mbarnett)
<MootBot> New Topic:  * Operations report (mthaddon/Chex/spm/mbarnett)
<Chex> hi everyone, here is our report for this week:
<Chex> - staging buildbot is now setup so we can test new versions of buildbot in a sandbox
<Chex> - there have been ongoing discussions & plans to make codebrowse more reliable this week
<Chex> - a poimport query caused a production DB disk issue on 11-Sep. The fix for Bug #408718 was applied to production on
<Chex> 15-Sep, and the poimport process was re-enabled afterwards.
<Chex> anyone have any questions?
 * mbarnett chirps
 * gary_poster notices the sound of the wind in the trees...
<gary_poster> or maybe that's the air conditioning...
<mthaddon> as always, losa concerns knock everyone over with their excitement levels
<mbarnett> nice work chex, you broke the meeting!
<bigjools> or put them to sleep
<Chex> sorry all... :/
<gary_poster> yay losas for staging buildbot!
<gary_poster> matsubara: :-D ?
<matsubara> guess not. thanks Chex
<matsubara> [TOPIC] * DBA report (stub)
<matsubara> nothing to report about the DB
<MootBot> New Topic:  * DBA report (stub)
<matsubara> [TOPIC] * Proposed items
<MootBot> New Topic:  * Proposed items
<matsubara> no proposed items
<matsubara> anything else before I close?
<matsubara> 5
<matsubara> 4
<matsubara> 3
<matsubara> 2
<mthaddon> 1.5
<matsubara> 1
<matsubara> Thank you all for attending this week's Launchpad Production Meeting. See https://dev.launchpad.net/MeetingAgenda for the logs.
<matsubara> #endmeeting
<MootBot> Meeting finished at 10:28.
<gary_poster> thanks matsubara
<mbarnett> victory is ours!
#launchpad-meeting 2010-09-21
<bac> hi lifeless
<bac> do you have a moment?
* bac changed the topic of #launchpad-meeting to:  ** No LP Reviewer Meeting this week ** |Launchpad Meeting Grounds | Channel logs: http://irclogs.ubuntu.com/ | Weekly meetings: https://dev.launchpad.net/IRCMeetings | Lost in time? date -u
#launchpad-meeting 2010-09-22
<lifeless> bac: sure
<lifeless> bac: whats up?
<bac> reminder, no Launchpad Reviewers meeting today
#launchpad-meeting 2010-09-23
<bac> reminder, no Launchpad Reviewers meeting today
