[12:02] <daf> kiko: sure
[12:03] <kiko> daf, we're in need of some quality rosetta time with someone in the know, and I'm thinking it could be you.
[12:03] <kiko> do you have a couple of hours to commit this week to us?
[12:04] <daf> I might be able to do that on Thursday or Friday
[12:04] <daf> what can I help you with?
[12:05] <kiko> getting rosetta data displayed in soyuz, basically.
[12:05] <kiko> translations for certain packages.
[12:05] <kiko> is that feasible?
[12:06] <daf> can you give me more details? :)
[12:06] <daf> you're thinking of integration between soyuz and Rosetta?
[12:07] <kiko> right. getting something like open translations for release X, and for package Y?
[12:07] <daf> sure, spending some time thinking about that would be good
[12:08] <daf> I'm not 100% sure I'll have some time this week for that, but I'm certainly happy to do it
[12:08] <kiko> we're partially blocked on that
[12:10] <daf> in that case, how about we allocate an hour to talk about it on Thursday, with a follow-up meeting on Friday if needed?
[12:10] <carlos> daf: please, could you recreate the database at rosetta server?
[12:10] <daf> carlos: sure
[12:10] <kiko> two hours, I think, would be required just for talking :)
[12:10] <carlos> daf: thanks
[12:10] <daf> carlos: done
[12:11] <daf> kiko: ok, let's do that then :)
[12:11] <carlos> daf: :-?
[12:12] <carlos> daf: https://rosetta.warthogs.hbd.com/++skin++Debug/rosetta/prefs
[12:12] <kiko> cprov, debonzi: daf's on for thursday for our rosetta-mini-sprint. 
[12:12] <kiko> cprov, debonzi, daf: what's a good time?
[12:12] <carlos> daf: it works here...
[12:12] <daf> carlos: interesting
[12:12] <daf> carlos: looks like some sample data is missing
[12:12] <carlos> your name :-)
[12:13] <daf> :)
[12:13] <cprov> kiko: I'm online all the time, better ask daf
[12:13] <kiko> daf, what's a good time?
[12:13] <carlos> kiko,cprov: what time is it there?
[12:13] <daf> kiko: sometime during the afternoon, I think
[12:14] <daf> kiko: i.e. after the rosetta daily meeting and lunch
[12:14] <cprov> carlos: 19 PM
[12:14] <carlos> cprov: thanks
[12:15] <daf> carlos: TZ=America/Sao_Paulo date
[12:15] <carlos> daf: thanks
[12:15] <cprov> daf: when will that be in UTC?
[12:17] <daf> I suggest 3pm UTC
[12:17] <daf> i.e. 11am for you guys
[12:17] <daf> er, no
[12:17] <daf> 12pm for you guys
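daf's conversion (São Paulo is UTC-3, so 15:00 UTC is 12:00 local) can be checked with Python's standard-library zoneinfo module; the specific date below is an assumption, since the log doesn't give one:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; needs a tz database

def to_local(utc_dt: datetime, tz_name: str) -> datetime:
    """Convert an aware UTC datetime into the given IANA time zone."""
    return utc_dt.astimezone(ZoneInfo(tz_name))

# 3pm UTC on an arbitrary July date (hypothetical -- the log gives no date).
# Brazil observes no DST in July, so America/Sao_Paulo is UTC-3.
slot = datetime(2004, 7, 22, 15, 0, tzinfo=timezone.utc)
local = to_local(slot, "America/Sao_Paulo")
assert (local.hour, local.minute) == (12, 0)  # daf's corrected "12pm for you guys"
```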
[12:17] <carlos> 12pm?
[12:17] <cprov> daf:  you can kill us :)
[12:17] <carlos> daf: I don't think so
[12:17] <carlos> :-)
[12:18] <carlos> hmm
[12:18] <carlos> :-P
[12:19] <daf> cprov: is that too early for you? :)
[12:19] <cprov> daf: nope, exactly the opposite, 1/2 hour earlier maybe?
[12:20] <daf> :D
[12:20] <daf> by 12pm, I mean 12:00, by the way
[12:20] <daf> not 00:00
[12:20] <cprov> daf: of course :)
[12:20] <daf> just checking :)
[12:21] <daf> ok, what time would you like?
[12:22] <daf> I'm usually around in the evenings
[12:22] <cprov> daf: let's do it that way: 12pm UTC thursday as you said, or earlier if possible, ok?
[12:22] <daf> we could make it 11am UTC if that suits you better
[12:24] <cprov> daf: not much... is 12 UTC nice for you too?
[12:24] <cprov> daf: I think 40 minutes will be more than enough 
[12:24] <daf> actually, the Rosetta team meeting is at 12:00 UTC
[12:25] <daf> oops
[12:25] <kiko> cprov, I don't think 40 minutes is enough, really.
[12:25] <cprov> kiko: how much do you think ?
[12:26] <daf> and the meeting will last somewhere around 30m-1h, and I generally have lunch after that so...
[12:26] <carlos> daf: if it's needed, we could move it
[12:26] <cprov> kiko: we just need a set of stocked queries AFAIK
[12:27] <kiko> I think 2h, cprov. there's a lot up in the air
[12:27] <daf> we *could* make it 10:00 UTC, which is 7am for you guys :)
[12:27] <daf> carlos: that is true
[12:28] <cprov> daf:  uhm, kiko cannot wake so early :)
[12:28] <daf> :D
[12:29] <kiko> it breaks my legs
[12:29] <daf> that's funny -- my knees hurt when I've been awake too long
[12:29] <cprov> daf:  make it easier, just say when you have 2h free
[12:30] <daf> ok
[12:30] <daf> any time from about 14:00 UTC would be fine
[12:31] <carlos> X-)
[12:32] <daf> carlos: ok, Rosetta seems to be working now
[12:32] <carlos> daf: where was the problem?
[12:32] <daf> carlos: dunno, I just restarted the server ;)
[12:33] <carlos> :-P
[12:33] <cprov> kiko: debonzi : 14 UTC is ok for you ? just have lunch and talk :) ok ?
[12:33] <kiko> it's 11am here, okay by me.
[12:34] <cprov> daf:  then ok, thursday 14 UTC 
[12:35] <daf> cprov: ok!
[12:35] <debonzi> cprov, for me it's ok
[12:36] <cprov> daf: thanks 
[12:36] <daf> cprov: de nada :)
[12:36] <cprov> daf: just to warn, if you can think of stocked queries to show relevant information from rosetta in soyuz, let me know.
[12:37] <daf> cprov: you might be able to use existing Rosetta interfaces for some things
[12:37] <cprov> daf: I will send an email to launchpad explaining the wishes better.
[12:38] <daf> I'm worried about this "idea" of releases -- Rosetta doesn't really pay attention to those
[12:38] <daf> ok, that sounds good
[12:38] <cprov> daf: yep
[12:38] <kiko> daf, as long as translations are tied to certain source packages (and source package versions?)
[12:38] <kiko> daf, we're okay.
[12:39] <daf> um...
[12:39] <cprov> daf: carlos see you later
[12:39] <carlos> cprov: later!
[12:39] <daf> cprov: later
[12:39] <kiko> daf, um what? :)
[12:40] <daf> well
[12:40] <daf> translations are tied to templates are tied to products are tied to projects
[12:40] <daf> Rosetta knows nothing about source packages
[12:40] <daf> (yet)
[12:42] <kiko> products are tied to source packages, I believe, no?
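The ownership chain daf describes (translations tied to templates, templates to products, products to projects, with no source-package link yet) can be sketched with plain dataclasses. All class and field names here are illustrative, not the actual Launchpad schema:

```python
from dataclasses import dataclass, field

# Hypothetical mini-model of the chain daf describes:
# translations -> templates -> products -> projects.
# Note there is no source-package link anywhere in the chain (yet).

@dataclass
class POTemplate:
    name: str
    translations: list = field(default_factory=list)  # e.g. per-language PO files

@dataclass
class Product:
    name: str
    templates: list = field(default_factory=list)

@dataclass
class Project:
    name: str
    products: list = field(default_factory=list)

gnome = Project("gnome", products=[
    Product("gnome-panel", templates=[POTemplate("gnome-panel")]),
])

# Reaching a template always goes through a product, never a source package,
# which is why soyuz integration needs a new link somewhere in this chain.
assert gnome.products[0].templates[0].name == "gnome-panel"
```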
[12:44] <daf> carlos: the form looks really good
[12:45] <carlos> daf: stolen from bugzilla
[12:45] <carlos> but it seems like it's not working
[12:45] <carlos> :-(
[12:45] <carlos> doing some local tests now
[12:47] <carlos> daf: are you sure you have latest db schema?
[12:47] <carlos> daf: it works in my laptop
[12:47] <carlos> but if I try to change your name with the password 'test' it does not change anything
[12:48] <daf> hmm
[12:48] <daf> let me try reloading it again
[12:49] <daf> ok, done
[12:50] <carlos> perfect
[12:50] <carlos> daf: your name is now Dafydd2
[12:50] <carlos> :-)
[12:52] <daf> heh :)
[01:08] <carlos> daf: do you want wishlist bugs into our bugzilla?
[01:15] <daf> carlos: yes
[01:15] <carlos> ok
[01:15] <daf> I filed one against Malone today
[02:04] <kiko> hey stub, have some minutes free?
[02:04] <stub> Sure
[02:05] <kiko> stub, us soyuz lowlives are looking into getting some malone data presented in soyuz.
[02:05] <kiko> stub, do you think you could put some time down this week for sorting the issues out with us?
[02:06] <stub> I can spare a little time - I'm still at my old work this week.
[02:07] <kiko> we'd need a solid 1h something, and the timezone scheduling means it's not trivial to fit in.
[02:07] <kiko> can you say what time is best? perhaps friday?
[02:08] <stub> whenever - pick a time. sooner is fine by me.
[02:09] <stub> (as long as I am awake ;) )
[02:17] <carlos> daf: 4 bugs left (without counting steve's ones)
[02:18] <kiko> stub, what is a good time UTC for you?
[02:18] <carlos> daf: hey, lalo is alive!!
[02:18] <stub> Now until Now + 12 hours is good for me
[02:58] <carlos> spiv: ping?
[02:58] <carlos> daf: I'm getting this error:
[02:58] <carlos> TypeError: DBProject() did not get expected keyword argument datecreated
[02:59] <carlos> daf: datecreated has a default value, and I want to use it
[02:59] <carlos> I mean.... the Project table has a default value for the field datecreated
[02:59] <carlos> and I want to use it, but SQLObject does not let me do it
[03:02] <stub> carlos: Do you mean you want to use the default datecreated defined in the database, or the default datecreated defined in the SQLObject subclass?
[03:02] <carlos> stub: the one defined in the database
[03:03] <carlos> I didn't know that sqlobject could define a default value
[03:03] <carlos> what's the best option?
[03:03] <stub> Then you have to set 'required=False' in the SQLObject subclass, so it lets it be uninitialized.
[03:03] <carlos> ok
[03:04] <stub> SQLObject defaults don't work for datetimes, since it sets the default when the module is loaded (ie. it is not calculated).
[03:04] <stub> This needs to be fixed - there is some discussion on the SQLObject mailing list about this
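stub's point is the classic evaluate-once default pitfall: a default computed when the module (or function) is defined is frozen at that moment rather than recalculated per row. A minimal Python illustration, not SQLObject itself:

```python
from datetime import datetime, timezone

# The default expression runs once, when the function (or column) is
# defined, not on each call -- stub's point about DateTimeCol defaults.
def make_row(created=datetime.now(timezone.utc)):
    return {"datecreated": created}

a = make_row()
b = make_row()
assert a["datecreated"] is b["datecreated"]  # same frozen timestamp, not "now"

# The usual fix: use a sentinel and compute the value at call time.
def make_row_fixed(created=None):
    if created is None:
        created = datetime.now(timezone.utc)
    return {"datecreated": created}
```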
[03:05] <carlos> TypeError: __init__() got an unexpected keyword argument 'required'
[03:05] <carlos> DateTimeCol('datecreated', notNull=True, required=False),
[03:05] <stub> Sorry - notNull=False 
[03:05] <carlos> stub: that does not work
[03:06] <stub> DateTimeCol('datecreated', notNull=False, default=None)
[03:06] <carlos> I tried the notNull=False option.
[03:06] <carlos> btw, it should be notNull=True
[03:06] <carlos> hmm
[03:06] <carlos> If I do an insert
[03:06] <carlos> with datecreated=NULL
[03:07] <carlos> will the default value be used?
[03:07] <stub> Yes
[03:07] <carlos> ok
[03:07] <stub> (default=None, as python None == SQL NULL)
[03:08] <carlos> stub: I know, it's just that I thought that the default value is only used if you don't specify a value for that field
[03:08] <carlos> ok, no more errors about it. Thanks
[03:09] <stub> Err... you are right. Might be SQLObject magic that makes it work then... possibly by accident.
[03:10] <carlos> I'm not able to do the insert, so perhaps it fails when it commits it into the database (I have other errors)
[03:10] <carlos> I mean that I didn't test it 100%
[03:11] <stub> Can you please add a bug to bugzilla? We have an SQLObject developer starting soon who should be able to fix this properly and feed it back upstream.
[03:12] <stub> If it doesn't work, you need to pass 'datecreated=datetime.utcnow()' as an argument when creating the object (which is what Malone is doing to avoid this problem).
[03:12] <carlos> stub: what should I specify in the bug report?
[03:12] <stub> (so hopefully the clocks on the app servers are in sync 'good enough')
[03:13] <stub> 'SQLObject needs to use the DEFAULT value for a column as defined in the database'
[03:14] <carlos> ok
[03:14] <stub> I think the correct syntax should be something like DateTimeCol('datecreated', notNull=True, default=SQLObject.DEFAULT)
[03:15] <stub> But the SQLObject developers should probably agree on the syntax before I hack it up myself ;)
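The behaviour being circled here, a database-side DEFAULT that fires only when the column is omitted, while an explicit NULL still trips the NOT NULL constraint, can be reproduced with stdlib sqlite3 (PostgreSQL behaves the same way on this point):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE project ("
    "  name TEXT NOT NULL,"
    "  datecreated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)"
)

# Omitting the column lets the database fill in the DEFAULT...
conn.execute("INSERT INTO project (name) VALUES ('rosetta')")
row = conn.execute("SELECT datecreated FROM project").fetchone()
assert row[0] is not None

# ...but inserting an explicit NULL bypasses the DEFAULT and trips
# NOT NULL -- the IntegrityError carlos hit.
try:
    conn.execute("INSERT INTO project (name, datecreated) VALUES ('soyuz', NULL)")
except sqlite3.IntegrityError:
    pass
else:
    raise AssertionError("expected a NOT NULL violation")
```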
[03:16] <carlos> hmm, I get this error after fixing all bugs I found:
[03:16] <carlos> ValueError: Unknown SQL builtin type: <class 'canonical.rosetta.sql.RosettaPerson'> for <RosettaPerson at 0x30703370>
[03:18] <stub> That would be the value for the 'person' foreign key
[03:18] <stub> Is RosettaPerson an SQLObject subclass? Or do you need to adapt it to a database.foaf.Person ?
[03:19] <carlos> RosettaPerson represents the Person table
[03:19] <carlos> I fixed it, don't worry
[03:19] <carlos> but as you said, sqlobject fails
[03:19] <carlos> psycopg.IntegrityError: ERROR:  null value in column "datecreated" violates not-null constraint
[03:19] <stub> :-P
[03:20] <stub> Have to do it the malone way for the time being - flag it with a TODO 
[03:20] <carlos> I will fix it the way you told me for now
[03:20] <carlos> sure
[03:32] <carlos> stub: should we have a sqlobject component inside launchpad?
[03:32] <stub> carlos: I don't follow you
[03:32] <carlos> stub: or should the bug report be filed against launchpad?
[03:32] <carlos> stub: to file the bug we talked about some minutes ago
[03:33] <carlos> at bugzilla
[03:33] <carlos> (sorry, it's too late here to think in a verbose mode :-P)
[03:33] <stub> oh - we should have a component for it - not sure what the best is.
[03:34] <carlos> I don't see one now
[03:34] <stub> Maybe a database product, with components sqlobject, sqlos, schema, psycopg? I don't know if it is launchpad specific.
[03:34] <carlos> justdave: could we have it added ?
[03:34] <stub> Just stick it anywhere and assign it to me so it doesn't get lost :-)
[03:34] <carlos> stub: at this time it's only used by launchpad
[03:35] <carlos> stub: ok, I will file it against launchpad, is it right for you?
[03:35] <stub> sure
[03:35] <carlos> ok
[03:39] <justdave> which, the new product with those 4 components in it?
[03:40] <carlos> justdave: not sure, stub?
[03:40] <stub> I think a 'database' component for launchpad might be best, with me as the owner.
[03:41] <stub> (stuart@stuartbishop.net is still my bugzilla id)
[03:43] <justdave> ok, done.
[03:43] <carlos> justdave: thanks
[03:46] <stub> carlos: I don't know if you want Bug1965 to depend on 1968 if you have a working workaround. If the work around is good enough, Bug1968 won't be looked at until the soyuz sprint at the earliest.
[03:47] <carlos> well, we could move it to later
[03:47] <carlos> when we look at the remaining beta bugs
[03:47] <stub> np
[03:47] <carlos> the alpha is for tomorrow
[03:47] <carlos> I suppose the beta will be for the end of the month or something like that
[03:48] <carlos> it's a way to keep it present while fixing bugs in rosetta
[04:14] <kiko> stub, I'm going to propose 11am UTC on friday, which is like 8am here. how does that sound?
[04:19] <stub> Fine here
[04:23] <kiko> wonderful. emails will go out to launchpad confirming ;)
[04:29] <kiko> lifeless, yo?
[04:30] <lifeless> ?
[04:30] <kiko> how goes it?
[04:31] <lifeless> frenetic, as usual :}. You ?
[04:34] <kiko> lifeless, at least as mad as you. 
[04:34] <lifeless> whats this alpha/beta thing I'm hearing about?
[04:34] <kiko> did you get cprov's request (and made heads or tails out of it)?
[04:35] <kiko> lifeless, alpha release of rosetta tomorrow, which means launchpad alpha in a way, right?
[04:35] <lifeless> launchpad is in production already on macquarie
[04:35] <lifeless> :)
[04:36] <carlos> lifeless: two bugs left from the rosetta team and I'm starting importing data into the DB
[04:36] <lifeless> https://macquarie.warthogs.hbd.com/launchpad/
[04:36] <kiko> hah.
[04:36] <kiko> and on rosetta, right :)
[04:36] <lifeless> carlos: into emperor ?
[04:37] <carlos> lifeless: in my laptop
[04:37] <lifeless> arh. right.
[04:37] <carlos> I need to automate the import task
[04:37] <lifeless> kiko: I got cprov's request yes, and both stuart and I have answered.
[04:37] <carlos> when that's done, we will move it into a real server
[04:37] <kiko> lifeless, stub: thanx
[04:38] <stub> kiko: Patch was just accepted a  few seconds ago
[04:38] <lifeless> carlos: cool, be sure to test with an extract of the live data, you don't want to overwrite anything or stuff
[04:39] <carlos> lifeless: we will not work with the production database for the alpha release
[04:39] <lifeless> oh, so its a proof-of-concept period.. K.
[04:39] <carlos> yes
[04:39] <lifeless> stub: what do you think about doing a new production drop on tuesday ?
[04:40] <stub> Fine by me
[04:40] <lifeless> ok. who do we need involved? I have access to the launchpad on macquarie, you have the db.
[04:41] <carlos> stub: are you the "db nazi" again?
[04:41] <lifeless> carlos: not till monday
[04:42] <lifeless> (not that nazis ever really let go)
[04:42] <carlos> ok, I have some pending changes to send
[04:42] <carlos> I will send them later
[04:51] <carlos> kiko: rosetta's products will be mapped to source packages. I'm not sure if we have a full relation with the source table, but it makes no sense to use binary packages with it
[04:51] <carlos> because several binary packages will use the same translation source
[04:53] <carlos> I need a break. See you in about 30 minutes
[04:57] <kiko> carlos, hey, answer my email with more gems like that and I'll send you some vintage brazilian coffee on the next sprint
[04:58] <carlos> :-D
[04:58] <kiko> must.. sleep... soon..
[04:58] <kiko> my new house doesn't even have a friggin shower yet and I'm at the office at midnight.
[04:58] <carlos> kiko: it's too early to go to sleep :-D
[04:58] <kiko> this has got to change.
[04:58] <carlos> here it's 5:00AM
[05:11] <kiko> here it's *sleeptime* :)
[08:57] <limi> morning all
[08:58] <carlos> limi: hi
[08:58] <limi> carlos :)
[09:00] <carlos> limi: ready for the countdown?
[09:02] <limi> yup
[09:04] <carlos> well, as I said, time to take a shower 
[09:04] <carlos> later
[11:18] <SteveA> lulu: I just checked out http://ubuntulinux.org/  Is someone going to produce a favicon.ico file to replace the plone logo?
[11:18] <lulu> that's a good point! I'll ask Limi
[11:19] <limi> SteveA: we lack an Ubuntu icon
[11:20] <SteveA> better not to have one than to have the wrong one
[11:23] <lulu> elmo's asking for someone to do it
[11:26] <SteveA> rosetta.ubuntulinux.org is listening on the https port
[11:26] <SteveA> but it is not listening on the http port
[11:28] <carlos> SteveA: ask elmo_mf
[11:28] <SteveA> I guess we can run rosetta over https.  It isn't a big deal, and means we don't need to worry about redirecting people to use HTTPS when they need to be authenticated with a password.
[11:29] <carlos> SteveA: we need something at http post-alpha or people will have trouble finding rosetta (by default, people open http instead of https)
[11:30] <elmo_mf> SteveA: ?
[11:30] <SteveA> hello elmo
[11:30] <elmo_mf> what's up?
[11:30] <elmo_mf> I did that rosetta thing you asked for last night?
[11:31] <SteveA> now I have to remember exactly what I asked for ;-)
[11:31] <elmo_mf> the rosetta.ubuntulinux.org, going to :9010 on rosetta
[11:31] <carlos> X-)
[11:31] <elmo_mf> so you can run a second launchpad invocation for alpha?
[11:31] <SteveA> yes
[11:31] <elmo_mf> ok.. lu said you wanted me tho - was there anything else?
[11:32] <SteveA> rosetta.ubuntulinux.org is listening on https
[11:32] <SteveA> but not http
[11:32] <elmo_mf> oh, yes, I asked daf and he told me to do that
[11:32] <SteveA> oh, ok
[11:32] <elmo_mf> do you want me to a) switch them, or b) make http available but redirect to https ?
[11:32] <elmo_mf> (or c, make both available - seems least good alternative tho)
[11:32] <SteveA> making http available but redirecting to https would be neat
[11:32] <elmo_mf> ok
[11:32] <SteveA> and probably save us a bunch of email saying "use https"
[11:33] <elmo_mf> right, I'll do that in a bit
[11:33] <SteveA> thanks!
[11:34] <SteveA> now I just have to wait for daf to get in (after his late night closing rosetta bugs last night), to get everything working
[11:34] <elmo_mf> daf was up pretty late...
[11:35] <carlos> SteveA: I don't think he will wake up early, I think he went to bed about at 5:00AM
[11:35] <carlos> no, at 6:30
[11:35] <SteveA> I hope he's up for the launchpad meeting later today
[11:36] <carlos> the meeting is "late" so I don't think it will be a problem
[11:39] <SteveA> do you think we'll be able to turn on rosetta today?
[11:40] <carlos> SteveA: I'm working on a script to import the .po/.pot files from an XML file defining the projects, trying to figure out a way to automate it
[11:40] <SteveA> cool
[11:40] <carlos> SteveA: rosetta is ready (should be)
[11:41] <carlos> if it's needed we could import the files by hand
[11:41] <SteveA> is there anything you need help with on the script?
[11:41] <carlos> perhaps some ideas about the way to solve the problem would be welcome
[11:42] <carlos> because I'm not completely sure I'm handling it the best way I could
[11:42] <SteveA> ok, let's talk about it when I've come back from the cafe
[11:42] <carlos> ok, thanks
[11:46] <elmo_mf> steve: http's there now, redirecting
[11:46] <elmo_mf> only for r.ul.o / r.u.c tho, r.w.h.c just gives you an empty page - I guess that's a feature as we don't want joe random seeing the dev version
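The log doesn't say which web server elmo configured; if it were Apache, a minimal sketch of "http available but redirecting to https" might look like this (hostname taken from the log, everything else an assumption):

```apache
# Hypothetical sketch -- the actual server software and config are not
# stated in the log.
<VirtualHost *:80>
    ServerName rosetta.ubuntulinux.org
    # Send every plain-HTTP request to the HTTPS site.
    Redirect permanent / https://rosetta.ubuntulinux.org/
</VirtualHost>
```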
[12:21] <daf> SteveA: hi
[12:21] <carlos> daf: hey, you are alive!!!
[12:21] <carlos> :-D
[12:21] <daf> :)
[12:22] <carlos> daf: I think we will need to import the projects/products by hand
[12:22] <carlos> I'm having serious problems with the script
[12:24] <sabdfl> carlos: yes, projects and products will need to be done by hand, lifeless has started already because of arch syncing
[12:24] <sabdfl> please discuss a wiki page with him where we can finetune them
[12:24] <sabdfl> and include some review
[12:24] <carlos> sabdfl: we are working with tar.gz for the alpha
[12:24] <sabdfl> once i've signed off on the project / product names, they can be put into the database
[12:25] <carlos> because we don't have the needed modules in arch yet
[12:25] <sabdfl> yes, i'm just talking about getting the project / product names into the db
[12:25] <sabdfl> then you attach the POT to the product, right?
[12:25] <carlos> sabdfl: I thought that we should import all ubuntu packages now
[12:25] <carlos> right
[12:25] <sabdfl> one at a time, by hand
[12:26] <carlos> based on arch for the alpha?
[12:31] <daf> hmm, I think if we can't import all packages in an automated fashion, doing it by hand will take a very long time
[12:32] <carlos> daf: I was working tonight on a kind of xml so we only "import" it by hand once and future updates will be automatic, but it could still fail
[12:33] <carlos> where import means writing down some info we need to get by hand and then letting a script execute it over and over again
[12:33] <carlos> but I'm not sure it's a good approach
[12:34] <carlos> waiting for SteveA to talk about it
[12:34] <daf> hmm, that's an idea
[12:54] <carlos> daf: could we talk about it now?
[12:54] <carlos> if we should do it by hand I think we should start as soon as possible...
[12:58] <daf> let's wait for Steve
[12:58] <carlos> ok
[12:59] <SteveA> I wonder whether we should decide on a canonical address for rosetta.ubuntusomething, and have everything else redirect to there.
[01:00] <SteveA> daf, carlos: let's talk about importing stuff
[01:00] <carlos> ok
[01:00] <daf> right
[01:01] <SteveA> so, let's set the scene
[01:01] <SteveA> what raw materials do we have?
[01:02] <carlos> SteveA: at this moment we have the list of packages from the wiki or from the apt source
[01:03] <carlos> http://gollum.pemas.net/~carlos/import-ubuntu.xml.txt
[01:04] <carlos> the apt source lets us fill some fields automatically
[01:04] <SteveA> did you come up with the xml pattern?
[01:04] <carlos> sorry, I don't understand what you mean. xml pattern?
[01:05] <SteveA> the DTD
[01:05] <SteveA> that particular xml format
[01:05] <carlos> SteveA: no, I don't have the DTD written
[01:05] <SteveA> I recommend you don't write a DTD
[01:05] <SteveA> but, what I mean is, did you come up with this XML schema?
[01:06] <SteveA> with this xml file format
[01:06] <daf> i.e. did you invent the structure of the XML?
[01:06] <SteveA> thanks daf :-)
[01:06] <carlos> yes
[01:06] <carlos> :-)
[01:06] <carlos> based on the database fields
[01:06] <SteveA> cool
[01:06] <SteveA>       <command>./debian/rules common-configure-indep && cd po && intltool-update -P && cd ..</command>
[01:06] <SteveA> why is there a 'cd ..' at the end?
[01:06] <SteveA> also, note that && isn't valid xml
[01:06] <carlos> to come back to the default dir
[01:06] <carlos> hmm, right
[01:06] <SteveA> it needs to be &amp;&amp;
[01:07] <SteveA> the processor for this should ensure that each command starts in the appropriate directory
[01:07] <SteveA> it shouldn't expect the <command> to clean up
[01:07] <elmo_mf> btw, what the heck is that common-configure-indep thing? is that a cdbs/gnome thing?
[01:07] <carlos> I execute a cd po and I undo it later, that's all
[01:07] <carlos> elmo_mf: cdbs
[01:08] <carlos> I don't need to build the package, only to configure it
[01:08] <SteveA> if every command is always '&&' after the last one, we could also represent the commands each on a new line
[01:08] <SteveA> this might make it easier to maintain / diff from
[01:08] <carlos> with several <command> tags or inside the same tag?
[01:08] <SteveA> no, just new lines
[01:08] <SteveA> so:

[01:09] <SteveA> ./debian/rules common-configure-indep
[01:09] <SteveA> cd po
[01:09] <SteveA> intltool-update -P
[01:09] <SteveA> cd ..

[01:09] <carlos> ok
[01:09] <SteveA> maybe call it <commandscript> instead, or something like that
[01:10] <carlos> ok
[01:10] <daf> if you run the command inside its own shell, you don't have to worry about the "cd .."
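Putting the pieces together: one command per line (avoiding the XML-unfriendly double ampersands), each line run in its own shell from a fixed working directory so no "cd .." cleanup is needed, as daf suggests. A sketch; the element name comes from the discussion, the runner itself is our assumption:

```python
import subprocess
import xml.etree.ElementTree as ET

def run_commandscript(xml_text: str, workdir: str = ".") -> list:
    """Run each line of a <commandscript> element in its own shell,
    always starting from workdir, so no command has to clean up after
    itself with 'cd ..'. Returns the list of commands that were run."""
    root = ET.fromstring(xml_text)
    script = root.findtext("commandscript") or ""
    ran = []
    for line in script.strip().splitlines():
        cmd = line.strip()
        if cmd:
            subprocess.run(cmd, shell=True, cwd=workdir, check=True)
            ran.append(cmd)
    return ran

# One command per line, as SteveA suggested; 'echo' stands in for the real
# cdbs/intltool commands so the sketch is runnable anywhere.
example = """<package>
  <commandscript>
echo configure
echo intltool-update -P
  </commandscript>
</package>"""
```

Calling `run_commandscript(example)` runs the two echo lines in order, each in a fresh shell rooted at the working directory.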
[01:11] <SteveA> so, we have a list of packages, and we want a filled-in XML file that looks like carlos's file, right?
[01:11] <carlos> or perhaps with pushd popd...
[01:11] <carlos> the projects fields should be handled by hand
[01:11] <SteveA> these are important implementation details, but let's discuss what we actually need to do overall
[01:12] <carlos> ok
[01:12] <SteveA> carlos: did you fill in this xml file by hand?
[01:12] <carlos> yes
[01:12] <SteveA> do you think it is straightforward to write a tool that reads in the XML, and imports the packages?
[01:12] <SteveA> (I'm trying to find out what part you want help with)
[01:13] <carlos> yes, it should be easy when the xml is filled completely
[01:13] <daf> SteveA: the difficult part is writing the XML file, I think
[01:13] <SteveA> ok
[01:13] <carlos> yes, that's the problem
[01:13] <daf> after all, that should be the only non-automated part
[01:13] <carlos> every package should be handled by hand
[01:13] <SteveA> well... we can get a lot of information from the .deb can't we?
[01:14] <carlos> SteveA: yes
[01:14] <SteveA> what information can we easily get from a .deb?
[01:14] <carlos> the "hard" part is the commandscript commands
[01:14] <daf> right, so we need another script to generate a version of the XML which is then fixed up by hand?
[01:14] <SteveA> all the ones in your file are the same.
[01:14] <carlos> name, short description and description have a direct mapping
[01:14] <carlos> daf: yes
[01:14] <SteveA> So, we can start by trying that script, and seeing if it works.
[01:15] <SteveA> Will it be apparrent if it doesn't work properly?
[01:15] <carlos> apparrent?
[01:15] <SteveA> It will be obvious if no .pot file is there
[01:15] <SteveA> apparent
[01:15] <SteveA> um, obvious
[01:15] <SteveA> but what about if there are several .pot files?
[01:15] <carlos> we could easily do some basic checks
[01:15] <SteveA> so, what about having a step where you run find to locate .pot files?
[01:16] <carlos> potemplate
[01:16] <carlos> SteveA: I have some code to do it in C
[01:16] <SteveA> to do what?
[01:16] <carlos> from the GNOME status pages
[01:16] <carlos> to detect po directories
[01:16] <SteveA> ok
[01:16] <carlos> so it's easy to port it to python
[01:16] <SteveA> does it work for other projects than gnome?
[01:16] <carlos> yes
[01:17] <SteveA> great
[01:17] <carlos> the detection does not depend on any GNOME-specific feature
[01:17] <SteveA> what we should aim for is going through the names of packages.  Get the .debs.  See if it fits something we can automate.  If so, import it.  If not, log that somewhere to be done manually.
[01:17] <SteveA> does that sound reasonable?
[01:18] <daf> yes, an "if not len(pot_files) == 1" should work
[01:18] <carlos> yes, that's a start
[01:18] <daf> sabdfl: I'm a bit confused -- will the Rosetta alpha have all the Ubuntu packages in it or not?
[01:18] <SteveA> I don't understand what you just asked, daf
[01:18] <carlos> but we should review all packages later because we need to apply the debian/ubuntu patches to get all strings
[01:19] <SteveA> we would like the rosetta alpha to have lots of packages in it
[01:19] <SteveA> we should aim for the low-hanging fruit first
[01:19] <SteveA> that is, those we can easily automate
[01:19] <carlos> ok
[01:19] <daf> we would like that, but I would like the team to concentrate on development if importing all the packages is going to be a significant task
[01:21] <daf> SteveA: eventually, we want Rosetta to have all the information needed to translate all of Ubuntu
[01:22] <daf> SteveA: I'm asking Mark how soon we want to have all that information
[01:22] <SteveA> the important thing right now is to get rosetta used by some of our target audience, while at the same time doing something useful for ubuntu
[01:23] <SteveA> so, if we can get 70% of the ubuntu packages in there, the 70% that go all the same, then that's great
[01:23] <daf> ok, the question is the extent to which we should spend time on the "doing something useful for Ubuntu" part
[01:23] <SteveA> let's try to come up with a reasonable estimate of what it would take
[01:24] <SteveA> then we can make an informed decision as to what to do
[01:24] <carlos> ok
[01:24] <SteveA> and, we can keep it as simple as possible
[01:24] <SteveA> so, let's agree on our goals
[01:24] <SteveA> I'll propose some
[01:25] <SteveA> * get enough data into rosetta alpha so that it can be well tested by early adopters
[01:25] <SteveA> * get some ubuntu packages translated
[01:25] <SteveA> anything else?
[01:26] <carlos> I don't think we need more goals for the alpha
[01:26] <carlos> makes sense to me
[01:26] <SteveA> well, the point of these goals is so that we know what things we have to balance against each other
[01:26] <daf> yes, those are good goals
[01:26] <SteveA> another goal would be to get it done pretty soon
[01:26] <daf> there are other goals:
[01:27] <daf> * fix bugs in the alpha
[01:27] <daf> * start closing beta-critical bugs
[01:27] <carlos> but those goals are to release the beta
[01:27] <daf> carlos: yes, but if we spend lots of time on goals 1 and 2, we can't work on goals 3 and 4
[01:28] <carlos> I don't think we should spend more than a few days on 1 and 2
[01:28] <SteveA> ok, let's think about what's involved in importing packages
[01:28] <SteveA> our plan should be to import packages that come easily
[01:28] <SteveA> and to leave those that are more difficult
[01:28] <SteveA> until later
[01:28] <SteveA> right?
[01:29] <carlos> yes
[01:29] <daf> yes
[01:30] <SteveA> carlos: do you already have code that uses your xml file to import stuff into rosetta?
[01:30] <carlos> not yet, but I have a script that I think could be adapted easily
[01:31] <SteveA> how much time would it take to complete the task "have software that takes an xml file of package data, and imports each one into rosetta" ?
[01:31] <carlos> SteveA: one thing... I'm forgetting about the branch field in our database. We don't expose it in the UI and I don't think we should care about it until beta
[01:31] <carlos> SteveA: about 1 hour
[01:31] <SteveA> daf: do you agree with carlos' estimate?
[01:32] <daf> SteveA: I would estimate 2 hours at least
[01:32] <SteveA> so, let's allow 3 hours
[01:33] <carlos> :-)
[01:33] <SteveA> 1+2 = 3 ;-)
[01:33] <SteveA> Is the XML schema complete?  does it need any more work?
[01:33] <SteveA> Well, the command part needs a little work, but we already discussed that.
[01:34] <SteveA> Actually, I want to ask about that.
[01:34] <carlos> SteveA: if we can forget about the branches (not needed until we start with arch)
[01:34] <carlos> yes, I think it's completed
[01:34] <SteveA> each of the commands in the example file is exactly the same
[01:34] <SteveA> so, do we need to list the commands there?
[01:34] <carlos> SteveA: because all packages use cdbs
[01:34] <SteveA> can we just say <standard-import-commands />
[01:35] <carlos> but that will not always work; if we have a way to use a custom command, it's ok for me
[01:35] <daf> or, we could identify commands by name
[01:35] <SteveA> will it work in a lot of cases?
[01:35] <carlos> yes
[01:35] <SteveA> then, let's do that for now
[01:35] <carlos> almost all GNOME packages should work
[01:35] <daf> each package would have a <command> or <named-command name="cdbs" />
[01:35] <SteveA> it makes your task easier
[01:35] <carlos> daf: makes sense
[01:35] <carlos> SteveA: ok
[01:35] <SteveA> don't over-design it now
[01:35] <SteveA> we want to get many packages imported
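Put together, daf's `<named-command>` idea could make most entries in carlos' XML this small. A sketch only: apart from the `<command>` and `<named-command>` elements mentioned above, the element and attribute names here are illustrative assumptions, not the real schema:

```xml
<!-- sketch only: not carlos' actual schema -->
<package name="gnome-terminal">
  <!-- "cdbs" names the standard import recipe instead of repeating commands -->
  <named-command name="cdbs" />
</package>
<package name="special-case">
  <!-- packages needing something custom keep an explicit command -->
  <command>make update-po</command>
</package>
```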
[01:36] <daf> I think the redundancy is very much due to the packages carlos has chosen
[01:36] <carlos> daf: the most important ones are gnome packages
[01:36] <SteveA> so, what %age or how many packages, can we get using this?
[01:36] <daf> carlos: right
[01:36] <SteveA> ok
[01:36] <SteveA> I think that's good enough for now
[01:36] <daf> also, it seems like Mozilla and OO might be feasible to do
[01:36] <SteveA> we can improve and generalize the system once we have those important gnome packages imported
[01:36] <carlos> yes, and that's cool :-)
[01:36] <SteveA> but do not work on the general case now
[01:37] <daf> no, you're right
[01:37] <SteveA> so, we can just have a <cdbs-import /> element
[01:37] <SteveA> and this means "do the standard cdbs import"
[01:38] <SteveA> and later on, we'll have other imports, one of which may involve ad-hoc commands
[01:38] <daf> this is starting to sound like jhbuild
[01:38] <daf> are you familiar with that, Steve?
[01:38] <carlos> ok, so will we forget also the multiple .pot packages, right?
[01:38] <SteveA> what is that?
[01:38] <SteveA> for now, yes
[01:39] <SteveA> although we may still want to check for that
[01:39] <daf> it's a tool James Henstridge wrote, originally to build GNOME from CVS
[01:39] <daf> it became more general after that
[01:39] <daf> it has a list of all the packages as XML
[01:39] <SteveA> will a simple find name="*.pot" be good enough to check that we have just one pot file?
[01:39] <daf> and it knows how to fetch and build each one
[01:40] <daf> SteveA: no
[01:40] <daf> touch foo.pot
[01:40] <daf> touch bar.pot
[01:40] <daf> name="*.pot"
[01:40] <SteveA> so?
[01:40] <daf> echo name  # ==> "foo.pot bar.pot"
[01:40] <SteveA> find will find that, and will reject that package
[01:40] <SteveA> works for me
[01:41] <SteveA> we want the simplest thing that will work for some of the important packages
[01:41] <SteveA> if we reject too many to start with, that's ok
[01:41] <carlos> it's ok for me
[01:41] <daf> sorry, I think I misunderstood you
[01:41] <daf> if we make a list of all the files which match *.pot, then that will work for most cases
[01:41] <SteveA> want me to try to explain better?
[01:42] <daf> we can just check that exactly 1 file has been found
[01:42] <daf> for a couple of GNOME packages, there will be multiple .pot files
[01:42] <SteveA> I'm proposing that we look for all files inside a .deb that end in .pot.  If there is more than one of these, we reject this .deb for now
[01:42] <daf> and if there is less than one?
[01:42] <SteveA> we also reject it
[01:43] <SteveA> but, the cdbs-import will fail anyway then
[01:43] <daf> no, we can't use .debs
[01:43] <SteveA> or, source packages
[01:43] <daf> right, source packages
[01:43] <SteveA> ok
[01:43] <SteveA> read "source packages" for when I said "debs"
[01:43] <daf> so, the algorithm should be:
[01:43] <daf> run the update commands defined for this package
[01:43] <daf> look for pot files
[01:44] <daf> if the number of pot files != 1, then add this package to the list of failed ones and go to the next one
[01:44] <daf> otherwise, import it
[01:44] <daf> --
[01:44] <daf> then, we can look at the list of ones which failed and deal with them individually later
[01:45] <daf> gtk+ and gnome-applets will fail, because they will have 2 POT files
[01:45] <SteveA> yes.
[01:45] <carlos> ok
[01:45] <SteveA> will cdbs import always work if there is just one pot file, like this?
[01:45] <carlos> SteveA: yes, if it's not broken (like gnome-applets)
[01:46] <carlos> well, bad example, gnome-applets has two pot files but it's also broken 
[01:46] <carlos> but that's a corner case
[01:46] <SteveA> how is it broken?
[01:46] <daf> carlos: actually, in the case of gtk+/gnome-applets, it will only generate one POT file, right?
[01:46] <SteveA> will we get an error condition from cdbs-import ?
[01:46] <carlos> daf: yes, it's easy to do that, but it's a specific "hack"
[01:46] <daf> carlos: since it only updates the po/ directory, and not po-locations or po-properties
[01:47] <carlos> SteveA: yes, it should raise an exception
[01:47] <carlos> SteveA: it misses some files needed to rebuild the .pot file
[01:47] <carlos> daf: but if we reject any package with more than one .pot file, we will not know that
[01:48] <daf> carlos: will not know what?
[01:48] <carlos> daf: that the package will work with the standard cdbs-import
[01:48] <SteveA> Ok, so here's what we need to do.  It will miss a lot of packages, but it won't fail silently:
[01:48] <SteveA> * for each package name:
[01:48] <SteveA>     - get the source package
[01:48] <SteveA>     - run the update commands defined for this package
[01:48] <SteveA>     - look for pot files
[01:48] <SteveA>     - if the number of pot files == 1, import it
[01:48] <SteveA>     - else add this package to the list of failed ones
[01:49] <SteveA> 
[01:49] <SteveA> agreed?
[01:49] <daf> it might have false positives in some cases
[01:49] <carlos> yes
[01:49] <daf> but we know about the most important ones, I think
[01:49] <SteveA> daf: do you mean that this plan might have "false positives" ?
[01:49] <daf> SteveA: yes
[01:49] <SteveA> what exactly do you mean?
[01:50] <daf> i.e. it will fail to detect that a package intended to have two POT files because only one of them will be generated
[01:51] <carlos> daf: the .pot files are always there
[01:51] <carlos> come from the .tar.gz
[01:51] <daf> they are?
[01:51] <carlos> yes
[01:51] <daf> oh, but not in CVS?
[01:51] <carlos> the problem you are talking about will come when we move to arch
[01:51] <carlos> daf: yes, that's it
[01:51] <daf> right
[01:52] <daf> SteveA: this plan is definitely good enough for now
[01:52] <carlos> but the code I have does not check for .pot files
[01:52] <carlos> but for POTFILES.in
[01:52] <carlos> that always exists
[01:52] <carlos> so it's easy to "fix" later
[01:52] <daf> great
[01:53] <carlos> could we talk then about tasks?
[01:54] <SteveA> yes
[01:54] <SteveA> what tasks are needed to make the plan in software?
[01:55] <SteveA> * for each package name:
[01:55] <SteveA>     - get the source package
[01:55] <SteveA>     - run the update commands defined for this package
[01:55] <SteveA>     - look for pot files
[01:55] <SteveA>     - if the number of pot files == 1, import it
[01:55] <SteveA>     - else add this package to the list of failed ones
[01:55] <carlos> 1.- Parse the list of packages and download them 
[01:55] <carlos> wait
[01:56] <carlos> it depends on the way we will handle this...
[01:56] <carlos> specific scripts that do only one thing
[01:56] <SteveA> what is the *simplest* way of handling this?
[01:56] <carlos> or a global script that does all
[01:56] <daf> what about packages for which we have no update commands defined?
[01:56] <carlos> daf: will be rejected
[01:57] <carlos> daf: as we said, only cdbs packages will work in this phase
[01:57] <daf> ok
[01:57] <carlos> I think we should do several scripts/methods that do every point of the algorithm
[01:58] <carlos> that way, when we move to arch it's easier to change that part
[01:58] <carlos> without breaking anything else
[01:58] <carlos> :-D
[01:58] <daf> SteveA: that's a shame :(
[01:59] <SteveA> it is sad, but I get further acting as an ignorant foreigner than in trying to speak the language
[01:59] <SteveA> anyhow, back to imports
[02:01] <daf> right
[02:01] <SteveA> I think a script that takes an argument
[02:01] <SteveA> the argument is a file containing a list of package names
[02:01] <SteveA> the script gets the packages names into a python list
[02:02] <SteveA> and instantiates a class PackageImporter for each item in the list
[02:02] <SteveA> the class can live in the same module as the script
[02:03] <carlos> and the class executes all steps?
[02:03] <SteveA> the class can have a "runimport()" method
[02:03] <SteveA> that method contains the workflow we described above
[02:03] <SteveA>   def runimport(self):
[02:04] <SteveA>       self.getSourcePackage(tmp_directory)
[02:04] <SteveA>       self.runCdbsImport()
[02:04] <SteveA>       self.lookForPotFiles()
[02:04] <SteveA>         num_pots = self.findPotFiles()
[02:05] <SteveA> (rather)
[02:05] <SteveA>       if num_pots == 1:
[02:05] <daf> perhaps "class CDBSPackageImporter(PackageImporter):"?
[02:05] <SteveA>           self.importIntoDB()
[02:05] <carlos> SteveA: what happens when we move to arch? or when we import other packages that are not using cdbs?
[02:05] <SteveA> daf: we have only CDBS now.  Don't Generalize Now.
[02:05] <carlos> we should use different classes that implement the same interface
[02:05] <SteveA> but, by all means call it CDBSPackageImporter
[02:05] <carlos> SteveA: ok
[02:05] <daf> SteveA: what values of "now" is this?
[02:06] <SteveA> we don't need to generalize now, in order to meet our goals
[02:06] <daf> what about Mozilla?
[02:06] <SteveA> we will generalize when we need to, to meet our goals
[02:06] <SteveA> we can get all the packages that work with this algorithm, and get them working, and then work on any that need more special treatment
[02:07] <SteveA> if mozilla is one of these, then we'll refactor the script / class when we come to do mozilla
[02:07] <SteveA> but not before
[02:07] <SteveA> we need to keep it as simple as possible right now
[02:07] <SteveA> write the script however you want to. the class I sketched is only a suggestion.
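The class SteveA sketched above, fleshed out as a minimal runnable skeleton. The method names come from his sketch; the `import_packages` driver, the failed-package list, and the exception handling are assumptions about how the pieces would fit, not the real Launchpad code:

```python
class PackageImporter:
    """One importer per package name, as SteveA suggested."""

    def __init__(self, name):
        self.name = name

    def runimport(self):
        """Run the workflow from the plan; return True if imported."""
        self.getSourcePackage()
        self.runCdbsImport()
        num_pots = self.findPotFiles()
        if num_pots == 1:
            self.importIntoDB()
            return True
        return False

    # The real steps would shell out to apt-get, cdbs, etc.; left abstract here.
    def getSourcePackage(self):
        raise NotImplementedError

    def runCdbsImport(self):
        raise NotImplementedError

    def findPotFiles(self):
        raise NotImplementedError

    def importIntoDB(self):
        raise NotImplementedError


def import_packages(names, importer_factory=PackageImporter):
    """Instantiate an importer for each name; collect the ones that fail."""
    failed = []
    for name in names:
        importer = importer_factory(name)
        try:
            ok = importer.runimport()
        except Exception:
            # any error (bad source, broken cdbs run) counts as a failure
            ok = False
        if not ok:
            failed.append(name)
    return failed
```

A package whose tree yields anything other than exactly one .pot file simply lands on the failed list, matching the "reject for now" decision above.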
[02:08] <carlos> ok
[02:08] <SteveA> but, do only what is needed to meet our goal of making this one import command and one flow of control working
[02:08] <daf> I think the timescale in which we will want to import non-CDBS projects is very short
[02:08] <daf> i.e. today
[02:08] <SteveA> that's ok
[02:08] <SteveA> it is still important to get this simple thing working, before making it more complex
[02:08] <daf> ok
[02:09] <SteveA> it is much easier and more reliable, and more fun, to modify something simple that exists into something more complex, rather than coming up with something complex to start with.
[02:09] <SteveA> we want to make progress today.
[02:09] <SteveA> if things take longer than planned, and we only get the simplest cases done, well, that's still good progress
[02:09] <daf> concur
[02:10] <SteveA> but, if we plan something complex, and get 90% of the way there, that's no tangible progress.
[02:10] <SteveA> ok, we need to think about the individual tasks involved, and estimate this.
[02:10] <daf> carlos: can you make it a task to write an importer + XML file that does maybe 6-12 packages?
[02:10] <SteveA> that's more like a "story"
[02:10] <daf> I suppose that's two tasks
[02:11] <daf> right
[02:11] <carlos> daf: sure
[02:11] <SteveA> that's a specific goal
[02:11] <SteveA> let's go through the workflow in detail.
[02:11] <SteveA> 1. get the source package
[02:11] <daf> so, one part is to write an importer that can import POT+PO files from CDBS source packages
[02:11] <SteveA> we have the package name.
[02:11] <SteveA> how to get the source package?
[02:11] <daf> right
[02:11] <daf> apt-get source
[02:11] <daf> so you need to have APT and the appropriate sources.list
[02:11] <SteveA> we'll need to put it somewhere, right?
[02:11] <carlos> we will use the name of the source package
[02:11] <daf> yes
[02:12] <SteveA> ok, est. time to write code that gets a source package, and unpacks it into a working directory, under the name of the source package ?
[02:13] <carlos> if there is any way to do "apt-get source *" no more than 15 minutes :-P
[02:13] <carlos> but I don't think we have something like that
[02:13] <SteveA> carlos and daf: if you both independently estimate this, we'll have a good idea of whether the estimate is accurate, and whether the task is well-defined
[02:13] <SteveA> os.system
[02:13] <carlos> * comes from a file that should be parsed
[02:13] <daf> this is a trivial task, yes
[02:13] <daf> estimate 5 minutes
[02:14] <SteveA> ok, I'll mark it as 20 minutes
[02:14] <carlos> wow
[02:14] <carlos> grep SOMETHING Sources | apt-get source ?
[02:14] <SteveA> 0. write script + class that reads in given file, and goes through each name in turn instantiating class and calling "run" method
[02:14] <carlos> yes, it's trivial :-D
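A sketch of the "get the source package" step. The function name is illustrative, and the injectable `runner` is an added touch so the flow can be exercised without a real APT setup; it assumes deb-src lines in sources.list, as daf notes above:

```python
import os
import subprocess


def get_source_package(name, workdir, runner=subprocess.call):
    """Fetch and unpack a source package into its own directory via apt-get.

    `runner` defaults to subprocess.call (as SteveA's os.system hint
    implies shelling out); passing a fake makes the step testable.
    """
    dest = os.path.join(workdir, name)
    os.makedirs(dest, exist_ok=True)
    # apt-get source downloads and unpacks into the working directory
    if runner(["apt-get", "source", name], cwd=dest) != 0:
        raise RuntimeError("apt-get source failed for %s" % name)
    return dest
```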
[02:15] <SteveA> carlos: don't forget to deal with error conditions
[02:15] <SteveA> estimate for 0
[02:15] <SteveA> ?
[02:16] <carlos> I don't think I will be able to do it in less than 20 minutes
[02:16] <SteveA> for 0 ?
[02:16] <carlos> yes
[02:16] <SteveA> let's say 30 mins then
[02:16] <SteveA> 2. run the update commands defined for this package
[02:16] <carlos> SteveA: but perhaps daf is faster than me on it
[02:17] <SteveA> carlos: 30 mins is fine.  The important thing is that we've thought through the problem, and committed to a reasonable estimate we can achieve
[02:17] <carlos> ok
[02:17] <SteveA> if you improve on the estimate, even better
[02:17] <daf> carlos: you didn't have any sleep last night, right?
[02:18] <carlos> daf: true
[02:18] <carlos> daf: why, am I missing anything?
[02:19] <SteveA> 2. run the update commands defined for this package.  Estimate for this, please
[02:19] <carlos> hmm, perhaps it's better.. forgetting...
[02:19] <carlos> If we already know how to do it
[02:20] <carlos> no more than 10 minutes
[02:20] <SteveA> I'm not sure what "run the update commands" means
[02:20] <daf> carlos: no, just checking
[02:20] <carlos> I think it's possible to do it in not more than 5 minutes
[02:20] <carlos> SteveA: execute the command scripts list
[02:20] <SteveA> oh, okay
[02:20] <SteveA> this involves checking for errors etc.
[02:21] <carlos> hmm, right
[02:21] <carlos> I always forget it :-( 30 minutes
[02:21] <SteveA> if the commands return an appropriate exit value, that's ok
[02:21] <SteveA> otherwise, you need to parse the output, perhaps
[02:21] <SteveA> let's give it an hour
[02:22] <daf> carlos: yes, it's not very strict error checking
[02:22] <daf> it's more or less "did this command script succeed or fail?"
[02:22] <daf> it might be worth running command scripts in shells which have -e on
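daf's `-e` suggestion amounts to something like this (helper name is illustrative). With `-e` the shell aborts on the first failing command, so the exit status alone answers "did this command script succeed or fail?" without parsing output:

```python
import subprocess


def run_update_commands(script_text, cwd="."):
    """Run a package's update command script under `sh -e`.

    Returns True if every command succeeded; sh -e stops at the first
    failure, so a non-zero exit status means the script failed partway.
    """
    return subprocess.call(["sh", "-e", "-c", script_text], cwd=cwd) == 0
```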
[02:22] <SteveA> want to say 45 minutes?
[02:22] <carlos> daf: I know, I did it already for the pwgen and the msgfmt to get .mo files
[02:23] <daf> carlos: right
[02:24] <carlos> for this concrete example using cdbs, no more than 30 minutes
[02:24] <carlos> we already know what should be executed
[02:24] <SteveA> 45 sounds good then.
[02:24] <carlos> X-)
[02:24] <SteveA> remembering that it is okay to be well ahead
[02:24] <SteveA> ok, next
[02:24] <SteveA> 3. look for pot files
[02:25] <SteveA> sounds very easy to me
[02:25] <SteveA> just a few lines of obvious python
[02:25] <carlos> SteveA: using find as an external tool?
[02:25] <SteveA> I'd do it in python, personally
[02:26] <carlos> using find I could do it in about 15 minutes (I have all code in C already), with python I don't know how to "emulate" find
[02:26] <daf> os.walk?
[02:27] <SteveA> ok, spend 20 mins trying it in python.  If that gets nowhere, use find.
[02:27] <SteveA> total time, 40 mins
[02:27] <SteveA> and, you'll have learned how to do this in python :)
[02:27] <carlos> :-)
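Emulating `find -name "*.pot"` in Python does come down to a few lines of os.walk, per daf's suggestion; the function name is an assumption:

```python
import os


def find_pot_files(tree):
    """Return the paths of all files under `tree` ending in .pot."""
    pots = []
    for dirpath, _dirnames, filenames in os.walk(tree):
        pots.extend(os.path.join(dirpath, name)
                    for name in filenames if name.endswith(".pot"))
    return pots
```

The importer would then check `len(find_pot_files(tree)) == 1` and reject the package otherwise.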
[02:27] <SteveA> 4. if the number of pot files == 1, import it
[02:28] <SteveA> 30 secs for the first part
[02:28] <carlos> daf: yes, seems like it's with os.walk (carlos saw the documentation)
[02:28] <carlos> SteveA: right
[02:28] <SteveA> for "import it" ?
[02:29] <daf> I think this will more or less be a copy + paste of code from the import script
[02:29] <carlos> for the == 1 :-)
[02:30] <SteveA> can you make the import script into a library, and make the import script and this script use the same thing?
[02:30] <carlos> right, it's only a matter of executing all the scripts we already have to import it
[02:30] <SteveA> or, call the import script?
[02:30] <daf> hmm, could do
[02:31] <SteveA> is the easiest thing to call the import script?
[02:31] <daf> the import script is fairly reusable
[02:31] <SteveA> ok, then we'll call the import script
[02:31] <SteveA> does it report error conditions well?
[02:32] <daf> I mean, you can do "from poimport import PODBBBridge"
[02:33] <SteveA> an estimate?
[02:33] <daf> it mostly just raises exceptions when it fails
[02:33] <daf> so the exit code should be correct
[02:34] <SteveA> so...
[02:34] <SteveA> an estimate?
[02:35] <carlos> I think it should not be more than 30 minutes...
[02:35] <carlos> but better, an hour
[02:35] <carlos> because I'm thinking 
[02:35] <carlos> that we should check if the project exists
[02:35] <carlos> or the product
[02:35] <carlos> and create them if they don't exist...
[02:36] <carlos> we have the scripts to create them and we should integrate all to work together
[02:38] <SteveA> daf, what do you think?
[02:40] <SteveA> while daf is thinking, let's talk about the last task
[02:40] <daf> if we have all the information needed to create projects and produts, an hour should be enough
[02:40] <SteveA> and if you don't, reject it?
[02:41] <carlos> SteveA: good question
[02:41] <SteveA> 5. else, add this package to the list of failed ones
[02:41] <carlos> we don't have it without manual changes
[02:41] <carlos> the list of packages doesn't tell us the project
[02:42] <SteveA> This needs us to write out a list of failed packages to some file, with reasons why they failed, perhaps
[02:42] <carlos> SteveA: the problem is that we missed the point that we need to process the initial list of packages to sort them into different projects
[02:43] <SteveA> what would a file look like that represents that?
[02:43] <SteveA> mozilla-firebird    firebird
[02:44] <SteveA> oops
[02:44] <SteveA> into projects...
[02:44] <SteveA> mozilla-firebird    mozilla
[02:44] <SteveA> gnome-applets    gnome
[02:44] <SteveA> etc. ?
[02:44] <carlos> yes, that's a good example
[02:45] <SteveA> so, we can have a task that is to go through the list of packages, and assign them to products, like this
[02:45] <carlos> yes
[02:45] <SteveA> then, the import script can take this too, and make it into a mapping
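Turning SteveA's two-column file into a mapping is a few lines; this is a sketch of that idea, and skipping blank lines and #-comments is an assumption about the format, which the meeting leaves unspecified:

```python
def read_package_projects(lines):
    """Parse lines like "mozilla-firebird    mozilla" into a dict."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # assumed: blank lines and comments are allowed
        package, project = line.split()
        mapping[package] = project
    return mapping
```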
[02:45] <carlos> also, we need another task to collect the data for every project
[02:45] <carlos> so we can create them
[02:46] <SteveA> I meant "assign them to projects" above
[02:46] <SteveA> ok.  We can do this in parallel with writing the import script.
[02:46] <carlos> yes
[02:47] <SteveA> maybe an xml or rfc822 file of projects, and a list mapping products to projects
[02:47] <SteveA> how many products do we have?
[02:47] <SteveA> how many projects?
[02:47] <SteveA> I suppose we need to do this only for those products that can be imported easily
[02:47] <SteveA> so, that might be only one project: gnome
[02:48] <carlos> SteveA: then we need to execute first the download code
[02:48] <SteveA> so, first job is to get the script written except for the "import" step
[02:48] <carlos> try to detect the po files
[02:48] <SteveA> and then look at what packages we can import
[02:48] <carlos> and then make a list associating the projects with those products...
[02:49] <SteveA> 
[02:49] <SteveA> plan:
[02:49] <SteveA> 1. write import script except for "import" step
[02:49] <SteveA> 2. look at what packages we can import easily
[02:49] <SteveA> 3. make list mapping package to project
[02:49] <SteveA> 4. import data about each project
[02:49] <SteveA> 5. finish import step (can be done in parallel with 3 and 4)
[02:49] <SteveA> 6. import for real
[02:50] <SteveA> 
[02:50] <SteveA> how about that?
[02:51] <carlos> makes sense for me
[02:51] <daf> where does the data for 4 come from?
[02:51] <SteveA> in the script, we can write out a file saying the status of each package:  package-name: IMPORTED  or  package-name: FAILED
[02:51] <SteveA> then, we can look through the packages that imported okay, and map those to projects
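The status file SteveA describes could be written out like this; the function name and the dict shape (package name mapped to True for imported, False for failed) are assumptions for illustration:

```python
def write_status_report(path, statuses):
    """Write one "package: IMPORTED" or "package: FAILED" line per package."""
    with open(path, "w") as report:
        for package in sorted(statuses):
            state = "IMPORTED" if statuses[package] else "FAILED"
            report.write("%s: %s\n" % (package, state))
```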
[02:52] <SteveA> daf: we enter the data about projects manually, once we know what projects we need
[02:52] <carlos> daf: by hand
[02:52] <SteveA> if there are just 3 or 4 projects, we can write SQL to do it, for example
[02:52] <SteveA> if there are more, we should come up with an XML or RFC822 file that gets parsed to do it
[02:53] <carlos> SteveA: we have it already in the xml I proposed
[02:53] <SteveA> carlos: Okay.  We need to have the "projects" information as an input to the script that produces the XML file
[02:54] <SteveA> so in that case, it must be in an XML or RFC822 file itself
[02:54] <carlos> hmm
[02:54] <carlos> SteveA: I prefer what daf said
[02:54] <SteveA> daf said "where does the data for 4 come from"
[02:54] <SteveA> what else did he say?
[02:54] <carlos> write a script that modifies the same xml to append the products or potemplates
[02:54] <carlos> some hours ago
[02:54] <SteveA> oh, okay
[02:55] <SteveA> yeah, we can do that
[02:55] <daf> so, what data files do we have now?
[02:55] <daf> one XML file containing package information, and one file which maps products to projects?
[02:56] <SteveA> 
[02:56] <SteveA> plan:
[02:56] <SteveA> 1. write import script except for "import" step
[02:56] <SteveA> 2. look at what packages we can import easily
[02:56] <SteveA> 3. make list mapping package to project
[02:56] <SteveA> 4. import data about each project
[02:56] <SteveA> 5. finish import step (can be done in parallel with 3 and 4)
[02:56] <carlos> daf: yes
[02:56] <SteveA> 6. write script that puts project data into the xml
[02:56] <SteveA> 7. write real xml
[02:56] <SteveA> 8. modify with project data
[02:56] <SteveA> 9. pass this to the thing that imports the xml into the database
[02:56] <carlos> it's fine for me
[02:57] <SteveA> 
[02:57] <SteveA> * for each package name:  (read in from file of package names) 30 mins
[02:57] <SteveA>     - get the source package.  20 mins
[02:57] <SteveA>     - run the update commands defined for this package 
[02:57] <SteveA>       (beware of error conditions) 45 mins
[02:57] <SteveA>     - look for pot files  40 mins
[02:57] <SteveA>     - if the number of pot files == 1, import it.  90 mins???
[02:57] <SteveA>     - else add this package to the list of failed ones
[02:57] <SteveA>       write out package: IMPORTED or package: FAIL.  20 mins
[02:57] <SteveA> 
[02:57] <SteveA> that's 245 minutes I think
[02:57] <daf> 4 hours
[02:57] <SteveA> 5 minutes
[02:57] <daf> +
[02:58] <SteveA> the script to write the xml into the database is 3 hours
[02:58] <SteveA> and we still have to deal with project data, and write a script that reads in our xml file and modifies it, and writes it out
[02:59] <SteveA> altogether, at least a person day-and-a-half
[02:59] <carlos> I think we should import by hand some pot/po files so the alpha testing could begin today
[02:59] <daf> yes
[03:00] <SteveA> how many?
[03:00] <carlos> with two or three projects could be enough so the alpha testers will be able to play with rosetta
[03:00] <daf> yes
[03:00] <carlos> sorry, products
[03:01] <SteveA> someone should file bugs for the story "import products that have a simple pot file" (steps 1..9 above)
[03:02] <carlos> SteveA: also, we need extra help from elmo
[03:02] <SteveA> and "write script to automatically create rosetta xml from source packages" (* and - points above)
[03:03] <SteveA> and "refine and document carlos' XML format"
[03:03] <carlos> SteveA: we need a chroot with all possible packages installed from warty to be able to run the cdbs script
[03:03] <SteveA> and "script to take carlos' xml file, and import the information into the database"
[03:03] <SteveA> carlos: can't we run this on a laptop?
[03:03] <carlos> SteveA: I'm doing it
[03:03] <SteveA> we just need to run it on some non-server machine 
[03:04] <SteveA> and get the xml file onto the rosetta machine
[03:04] <carlos> but at the end
[03:04] <SteveA> no need to get elmo involved
[03:04] <carlos> we need it on the server
[03:04] <SteveA> why?
[03:04] <carlos> to get the updated .pot files
[03:04] <SteveA> when we get arch involved, then we will
[03:04] <SteveA> I don't think we need this during the alpha
[03:04] <carlos> with arch, we will not need it (or we should not)
[03:05] <SteveA> ok, then we can do it on individuals' machines, and get the xml files onto the server
[03:05] <daf> second thoughts:
[03:05] <carlos> SteveA: and the .pot files will be uploaded also?
[03:05] <carlos> the .pot files sometimes will be updated
[03:05] <daf> rather than modifying the XML, I suggest that the packages and project information be kept separate
[03:05] <carlos> daf: yes?
[03:06] <daf> so we have three files: a projects file, a packages file, and a file which maps packages to projects
[03:06] <SteveA> carlos: why do you need chroot to install source packages in some working directory?
[03:06] <daf> does this make sense?
[03:06] <carlos> daf: yes
[03:07] <SteveA> makes sense to me
[03:07] <carlos> SteveA: because of the dependencies; we start building the .deb packages
[03:07] <carlos> to apply the debian/ubuntu patches
[03:07] <carlos> and we need to satisfy the dependencies
[03:07] <carlos> we never compile it
[03:07] <carlos> but I don't think the checks are avoidable
[03:07] <SteveA> but, why do you need chroot?
[03:07] <carlos> easily
[03:08] <carlos> as a security measure, that's not the important point
[03:08] <carlos> hmm
[03:08] <carlos> I see your point
[03:08] <carlos> we could install them as a normal user 
[03:08] <SteveA> you can do all this as a user, can't you?
[03:08] <carlos> SteveA: yes
[03:08] <SteveA> so, no need to hassle elmo
[03:09] <carlos> true
[03:09] <SteveA> great
[03:09] <daf> SteveA: in how many places is the name of the database stored?
[03:09] <SteveA> for launchpad, just one.
[03:10] <daf> great
[03:10] <daf> where is that?
[03:10] <SteveA> in stand-alone scripts, in each script
[03:10] <SteveA> I think
[03:10] <SteveA> launchpad-sql-configure.zcml
[03:11] <daf> if I create the new database and change that value, the alpha server should be ready to import data
[03:11] <SteveA> you need to just put a different file in override-includes
[03:11] <SteveA> launchpad-sql-configure.zcml is symlinked into there
[03:11] <SteveA> I need to take a break
[03:11] <daf> right
[03:11] <daf> I need to eat
[03:12] <carlos> SteveA: so I will file a bug report to fill the database by hand with some products, that bug will block the alpha release, the other ones we were talking about will block the beta, is that ok for you?
[03:12] <carlos> SteveA: me too, I'm hungry :-)
[03:12] <SteveA> sounds reasonable
[03:12] <carlos> ok
[03:12] <SteveA> but, it would be good to import more stuff using this script before the end of the alpha
[03:12] <SteveA> that is, we want to get this stuff imported while the alpha is still going on
[03:13] <carlos> SteveA: the idea is to work on it now so we finish them this week (if possible)
[03:13] <SteveA> so, maybe we want a bug called "before alpha can end we want the following things to have been tried out" ;-)
[03:13] <carlos> but it does not block the alpha because we will release it today, right?
[03:13] <carlos> :-)
[03:14] <carlos> I will file all bugs after lunch
[03:14] <carlos> later
[03:44] <daf> hahaha!
[03:44] <daf>  createdb -E UNICODE launchpad_test || echo ${DBNAME} already exists
[03:44] <daf> *oops*!
[03:45] <daf> brown paper bag for stub, I think :)
[03:48] <elmo_mf> ?
[03:48] <elmo_mf> oh
[03:54] <sabdfl> SteveA: i will have to keep an eye on the launch process, so please lead the launchpad meeting whether or not i'm here, i'll read the logs
[03:55] <SteveA> ok
[03:56] <SteveA> daf: what's the latest from lalo?
[03:57] <daf> SteveA: I don't think I've heard anything you haven't
[03:58] <SteveA> ok, so we won't expect him at this meeting, unless he manages to get to an internet cafe
[03:58] <daf> indeed
[03:58] <SteveA> can you send him a summary of the relevant parts?
[03:59] <SteveA> I'll mail the log to stub
[03:59] <daf> a summary, or the log?
[04:00] <daf> hi stub 
[04:00] <SteveA> hi stub.  shouldn't you be asleep? ;-)
[04:00] <SteveA> ok, let's start
[04:00] <stub> Morning
[04:00] <stub> Yes mom
[04:00] <kiko> we're on
[04:01] <daf> stub: I noticed this line in the schema Makefile just now: "createdb -E UNICODE launchpad_test || echo ${DBNAME} already exists"
[04:01] <SteveA> daf, carlos from rosetta; spiv, kiko, cprov, debonzi from soyuz; stub from malone and DBA land; lulu from rosetta website experience; limi from "everyone's bitch" land; sabdfl half here, half at the warty launch, stevea 
[04:02] <SteveA> lalo, sends apologies, with a broken computer
[04:02] <SteveA> let's start with malone
[04:02] <lulu> SteveA - May Limi and I be excused - the website goes live in an hour
[04:02] <SteveA> sure
[04:02] <lulu> thanks
[04:03] <SteveA> maybe keep 1/2 an eye, and we'll holler your names if we need something
[04:03] <SteveA> stub: I posted the plan for malone to the list, and proposed it in the last meeting
[04:03] <SteveA> what do you think about it?
[04:04] <stub> Sounds sane
[04:04] <SteveA> will you lead this, when you get back to working for us?
[04:04] <stub> Also sounds like we aren't rolling out Malone in the next two weeks, and are initially sticking with Bugzilla
[04:05] <SteveA> yes
[04:05] <stub> I'm happy to keep leading the Malone work
[04:05] <SteveA> ok, great.
[04:06] <SteveA> justdave: are you around?
[04:06] <SteveA> justdave is working on a list of the top 100 bugzillas, to start importing from
[04:06] <SteveA> we'll catch up with dave later.
[04:07] <justdave> yep
[04:07] <SteveA> stub: we'll need to work out what the changes needed to use malone with ubuntu and with launchpad are
[04:07] <SteveA> I think daf has been finding bugzilla's dependency charts very useful
[04:07] <SteveA> but, we've also been using bugzilla as an issue / project tracking tool as well as strictly a bug tracking tool
[04:08] <SteveA> I'm not sure what to do about that, wrt the malone plan
[04:08] <SteveA> justdave: how's the list of sites going?
[04:09] <justdave> has not been started yet because I've been dealing with bug-buddy and then release issues with bugzilla.ubuntu.com
[04:09] <justdave> should have it by next week though
[04:09] <SteveA> launchpad team in general: we'll need to think about what features malone *needs* to have in order to start using malone for launchpad. At present, we have "assign a person as responsible for fixing a bug". 
[04:10] <SteveA> and I hope that's all we need.
[04:10] <SteveA> justdave: maybe start doing just a few each day to ease into it?
[04:10] <daf> dependency tracking is useful
[04:10] <stub> And there is a feature request in Bugzilla for dependency charts
[04:10] <spiv> Being able to close a bug would help ;)
[04:10] <daf> dependency graphs are not as essential
[04:10] <SteveA> spiv: what do you mean?
[04:11] <daf> I should have filed a bug about dependency tracking first
[04:11] <daf> (and then made the carts bug depend on it :))
[04:11] <spiv> SteveA: Just taking your statement a bit too literally.
[04:11] <daf> * charts
[04:11] <justdave> planning on 20 or so a day right now.
[04:12] <SteveA> we need two categories:  stuff we *need* in order to start using malone for launchpad development.  stuff we'd *like*.
[04:12] <justdave> but I may adjust that number as I get going and see how long it takes for each
[04:12] <SteveA> let's put a wiki page up for that
[04:12] <spiv> SteveA: I was about to ask where these lists are being kept :)
[04:12] <stub> And the wiki page shall be called --- MaloneUseCases!
[04:13] <SteveA> justdave: as part of the malone plan, we also need scripts to import open bugs from bugzilla.
[04:14] <daf> SteveA: alternatively, we could file the things we need as bugs
[04:14] <SteveA> let's stick them on a wiki page for now
[04:14] <daf> SteveA: and track it all using a metabug
[04:14] <daf> ok
[04:14] <SteveA> justdave: any thoughts?
[04:14] <justdave> certainly doable.  how soon do we need it?
[04:15] <SteveA> within the next month, probably
[04:15] <SteveA> depends on when stub can get malone ready for use for the launchpad bugs
[04:16] <SteveA> justdave: can you file a bug on yourself for malone to write a description of exactly what the "import open bugs" thing needs to do?
[04:16] <justdave> ok, a month is plenty far enough off to do that after I get the list done.
[04:16] <justdave> import script shouldn't take more than a week
[04:17] <justdave> sure
[04:17] <SteveA> the important thing is to get the detailed description of what it should do fairly soon, so stub can use that for planning malone
[04:17] <SteveA> thanks
[04:17] <SteveA> ok, I think that's it for malone
[04:17] <SteveA> soyuz next
[04:18] <cprov> yep
[04:18] <SteveA> kiko: you've arranged some meetings to get soyuz presenting relevant information from malone and rosetta
[04:18] <sabdfl> sorry to chip in, just reading scroll back
[04:18] <sabdfl> and i'd like to say that dependency tracking is really for enhancements to software
[04:18] <daf> I disagree
[04:18] <sabdfl> which will be better handled by the project management tool i'd like to work on once malone is nicely bedded down
[04:19] <sabdfl> daf: go ahead
[04:19] <cprov> SteveA: yes, we are trying to arrange this soon w/ daf, stub, lamont/elmo
[04:19] <daf> we've had bugs in Rosetta which can't be fixed because there is another bug that needs to be fixed first
[04:20] <sabdfl> are those bugs, or features that are not yet implemented?
[04:20] <stub> We have had cases where bugs in rosetta required bugs in sqlos to be fixed.
[04:21] <daf> e.g. you can't commit translations which include % signs
[04:21] <carlos> sabdfl: sometimes features, sometimes bugs
[04:21] <sabdfl> so that's one bug, in sqlos, that victimises rosetta
[04:21] <daf> this can't be fixed until a bug in Zope is fixed
[04:22] <sabdfl> ok, keep going, i just wanted to point out that the project management tool is coming once we get these first launchpad apps bedded down
[04:22] <sabdfl> and that will be a whole new thing we need to figure out how to use
[04:22] <daf> it is often feature enhancements, but not always
[04:22] <sabdfl> and the interaction between that, and malone, will i think be very powerful indeed
[04:22] <kiko> SteveA, right. 
[04:22] <daf> and there are cases where feature enhancements are blocked on bugs
[04:23] <daf> i.e. I can't implement this feature until this bug is fixed
[04:23] <sabdfl> we'll get to the point where we can handle each of those scenarios very powerfully in launchpad
[04:23] <sabdfl> keep going... kiko?
[04:23] <daf> but I am certain that there are cases where it is a straight bug-to-bug dependency
[04:23] <stub> I think issues will block bugs, and vice versa (we already know issues and bugs are intimately mated)
[04:24] <kiko> sabdfl, we've been looking into integration with rosetta and malone, because those are high on our todo list.
[04:24] <kiko> most of the other basic tasks are well underway and accounted for
[04:24] <SteveA> have any problems come up so far in integrating the apps?
[04:24] <kiko> but integration is a murky area, particularly because a good part of the concepts don't match up automatically.
[04:25] <kiko> we haven't even started them yet; the meetings we've set up are to start discussing their concepts and our concepts and seeing what matches. from there fill up the templates with real data.
[04:25] <kiko> I wonder if what we want is to coax daf and stub into writing portlets for us, or if we want to go into the database and pull data out.
[04:26] <daf> there should be clean interfaces to the data you need
[04:26] <SteveA> I think it would work best if the rosetta team is responsible for writing the thing that gives you what objects you need.
[04:26] <kiko> okay. so we shouldn't expect some ready-made "rosetta components" that come with UI and all?
[04:26] <SteveA> I think it would work best if the rosetta team is responsible for writing the thing that gives you what objects you need from rosetta
[04:27] <SteveA> well, that might work even better
[04:27] <kiko> but on the data level -- we organize that into the interface as we like?
[04:27] <SteveA> but I think you need to discuss in these meetings which approach fits best
[04:28] <SteveA> what I want to avoid is for the soyuz team to be dealing with the low-level rosetta stuff
[04:28] <daf> we have allocated a meeting specifically to talk about this tomorrow
[04:28] <kiko> yeah. 
[04:28] <kiko> I don't want to overburden rosetta/malone people, and I suspect that data sans UI is easier to obtain at this point, so I'd shoot for that.
[04:28] <daf> SteveA: agreed: Soyuz should not need to grok Rosetta internals
[04:28] <SteveA> is there any stuff you need to discuss going the other way?  presenting soyuz information within malone or rosetta?
[04:29] <kiko> just to make things clear -- we want to be able to list open translations for a certain source package. I want to know if that makes sense or if I'm on crack.
[04:30] <cprov> SteveA: does it make sense to present Soyuz data in the Rosetta/Malone world?
[04:30] <stub> Are these mini-reports or summaries that link into rosetta and malone? Or a duplication?
[04:31] <kiko> stub, I don't know what a duplication would mean, so the first option smells better. 
[04:31] <cprov> stub: yes, a mini-report, we are thinking of something like that.
[04:31] <carlos> kiko: as I said yesterday (sorry, I forgot to send it to the list) Rosetta will work with source packages (or should do it)
[04:31] <carlos> so the mapping should be one to one
[04:32] <stub> I think there is a place for summaries ('There are X bugs in this package, click here to see'), or even portlets that don't take up much screen real estate.
[04:32] <kiko> carlos, that's really good news. I wonder if you guys know about source package releases as well (cutoff points, I guess)
[04:32] <kiko> stub, that's what I want, a summary.
[04:33] <SteveA> cprov: that's what I was asking :-)
[04:33] <carlos> kiko: I suppose we know, we use the same tables
[04:33] <cprov> stub: we also handle persons and should be able to get a report about their bugs (Mark has X assigned bugs, Y resolved bugs, and so on)
[04:33] <SteveA> ok, the rest you can talk about in the meeting tomorrow
[04:33] <kiko> okay.
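stub's "There are X bugs in this package, click here to see" summary above is naturally a single aggregate query per portlet. A minimal sketch, using Python's stdlib sqlite3 as a stand-in for the real Launchpad database (the table, column names, and `bug_summary` helper are all hypothetical):

```python
import sqlite3

# Stand-in schema; the real Malone/Soyuz tables will differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bug (id INTEGER PRIMARY KEY, sourcepackage TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO bug (sourcepackage, status) VALUES (?, ?)",
    [("mailcheck", "open"), ("mailcheck", "open"), ("mailcheck", "closed")],
)

def bug_summary(package):
    # One aggregate query per summary, rather than pulling full bug rows.
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM bug WHERE sourcepackage = ? AND status = 'open'",
        (package,),
    ).fetchone()
    return "There are %d open bugs in this package" % count

print(bug_summary("mailcheck"))
```

The point of the sketch is that a summary costs one COUNT query, which is why it stays cheap compared with duplicating the full bug listing inside soyuz.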
[04:34] <SteveA> what is the tangible outcome of the meetings tomorrow?
[04:34] <cprov> ok
[04:34] <kiko> SteveA, things are under control on our side. Debonzi and Celso have nice little laundry lists and will be picking things off. No major roadblocks except for:
[04:34] <kiko> - Integration issues
[04:34] <kiko> - Package browsing (which you've pushed off)
[04:35] <kiko> - Component browsing (which we haven't dreamed of and doesn't seem important -- we are just going to display where a sourcepackage is on-screen in the sourcepackage view)
[04:35] <kiko> that's it.
[04:35] <SteveA> ok
[04:35] <SteveA> did all the issues raised at the soyuz meeting on monday get addressed?
[04:36] <kiko> ZODB's still not up, so that blocks part of the work afaict.
[04:36] <spiv> SteveA: You have a bug about it.
[04:36] <SteveA> my note says "look at the bug"
[04:36] <kiko> and getting larger amounts of data is not in our backyard if possible.
[04:36] <spiv> :)
[04:37] <kiko> we'd like to be able to get something automated to push stuff in beyond what's being manually inserted.
[04:37] <SteveA> which work does the zodb block?
[04:37] <kiko> just notices, which aren't even critical (but which I find cool).
[04:37] <spiv> Nothing urgent that I can think of -- just the "latest notices" thing.
[04:37] <SteveA> ok, the notices.
[04:37] <SteveA> cool
[04:38] <SteveA> but, it would be good to allow you to experiment with interesting ideas like that
[04:38] <SteveA> did all the issues raised at the soyuz meeting on monday get addressed?
[04:39] <cprov> yes, I think
[04:39] <kiko> SteveA, that was one of them. the other is DB fillage. the other is limi's assistance on page-browsing.
[04:39] <kiko> the rest is dealt with.
[04:39] <SteveA> spiv: you didn't mail to the list
[04:39] <kiko> (I was being verbose)
[04:39] <kiko> SteveA, sure he did.
[04:39] <SteveA> did he?
[04:39] <kiko> Subject: Bugs filed as a result of Soyuz meeting on Monday
[04:39] <spiv> SteveA: Sorry, I mailed it just before this meeting, which was a bit later than ideal :(
[04:40] <SteveA> oh, okay
[04:40] <SteveA> thanks for posting it :-)
[04:40] <spiv> There's two points that didn't have bugs about them yet:
[04:41] <spiv> "leave links to bugs until stub gets back to help you with it"
[04:41] <spiv> I think that one is being taken care of, though.
[04:41] <SteveA> you have the meeting
[04:41] <kiko> right.
[04:41] <spiv> And "make Distro --> DistributionRole really work" -- cprov, I want to chat with you about this first, so I can file a bug that makes sense :)
[04:42] <spiv> (Just to confirm that I understand what the issue there is)
[04:42] <kiko> spiv, yeah, this may tie into my teams email I wrote this morning.
[04:42] <SteveA> ok
[04:42] <SteveA> I asked earlier: what is the tangible thing you'll get out of your meetings tomorrow?
[04:42] <SteveA> will someone mail a summary to the list?
[04:43] <cprov> spiv: sure we can chat
[04:43] <spiv> cprov: After this meeting suit you?
[04:43] <cprov> spiv: yes, it's nice for me
[04:43] <spiv> Ok.
[04:43] <kiko> SteveA, we don't know yet, tbh. We're investigating possibilities. We'll probably get information and a strategy.
[04:44] <SteveA> kiko: ok, please make sure someone mails a summary of the meetings to the launchpad list after the meetings
[04:44] <kiko> I'll do that. 
[04:44] <SteveA> Let's move on to rosetta
[04:44] <SteveA> thanks.
[04:44] <SteveA> daf: how is rosetta going?
[04:47] <daf> I think we're on track to release the Alpha today
[04:47] <daf> we've closed a lot of bugs over the past week
[04:47] <daf> (and filed even more in preparation for the Beta :))
[04:47] <daf> http://rosetta.ubuntulinux.org has been set up
[04:48] <SteveA> it requires a username and password
[04:48] <SteveA> i don't think it should do
[04:48] <daf> elmo_mf: can you turn this off?
[04:49] <carlos> SteveA: does the launchpad login screen work?
[04:49] <elmo_mf> err, yeah, in a bit
[04:49] <SteveA> carlos: nothing will work while apache requires auth
[04:49] <SteveA> what about importing packages to be translated?
[04:49] <carlos> SteveA: as soon as it disappears, it will start working automatically?
[04:50] <SteveA> carlos: yes, but we need to look at what permissions you have set.
[04:50] <carlos> ok
[04:50] <SteveA> daf, carlos: after this meeting, let's look at permissions
[04:50] <carlos> ok
[04:51] <daf> yes, let's
[04:51] <SteveA> what about importing packages?
[04:51] <SteveA> what have you decided to do about that?
[04:52] <daf> we discussed package imports this morning
[04:52] <SteveA> we worked out that it is 1.5 days work for someone to write the import script
[04:53] <SteveA> so you were going to import a few by hand today 
[04:53] <daf> since it seems developing an import script is going to be a non-trivial effort, we're going to import a few packages by hand for the Alpha
[04:53] <SteveA> did you decide which ones?
[04:53] <daf> no
[04:53] <carlos> not yet, but I think we could follow mark's list of packages, we have them sorted by importance
[04:53] <daf> I'll take them from Mark's list
[04:53] <carlos> :-)
[04:54] <daf> :)
[04:55] <SteveA> ok
[04:55] <SteveA> so, we're aiming to have the rosetta alpha working by the end of today?
[04:55] <daf> that's right
[04:55] <SteveA> so, you'll plan to announce it to the list of "rosetta sounders" tomorrow
[04:55] <daf> I was planning to announce it as soon as it's up
[04:56] <SteveA> if it goes up late, don't announce it when you're very tired
[04:56] <SteveA> sleep on it, and check it out in the morning
[04:56] <daf> I'll bear that advice in mind
[04:56] <carlos> daf: then you should send the announcement :-P
[04:57] <carlos> I should not
[04:57] <SteveA> and, let's make time to give the alpha, once it is up, a thorough look at together
[04:57] <daf> that would be good
[04:57] <SteveA> ok.
[04:57] <carlos> SteveA: I have a question about rosetta and the ubuntu website
[04:57] <carlos> we have a link to rosetta 
[04:58] <carlos> (or we will have it)
[04:58] <carlos> if we have it closed only for alphatesters...
[04:58] <SteveA> I don't think it will be closed for them
[04:58] <carlos> but we don't have a way to register new users
[04:58] <SteveA> only that we'll need to have people registered in the database to do certain things
[04:58] <carlos> we register them by hand
[04:58] <SteveA> daf is registering people by hand using your script, isn't he?
[04:58] <carlos> yes
[04:59] <daf> SteveA: that's the plan
[04:59] <daf> I'll do a mass registration before I send out the announcement
[04:59] <SteveA> what about people who see the site in the link from ubuntulinux.org?
[04:59] <carlos> SteveA: that's my question :-)
[05:00] <daf> that's undefined behaviour :)
[05:00] <daf> let's define it
[05:00] <SteveA> ok, let's define that in a rosetta meeting
[05:00] <SteveA> I'd like to declare the whole launchpad team meeting over.
[05:00] <daf> I think we'll want sabdfl's input on that
[05:01] <SteveA> thanks for coming, launchpad team
[05:01] <daf> thanks for chairing, Steve
[05:02] <kiko> thanks SteveA
[05:02] <kiko> time to hack!
[05:02] <kiko> night stub
[05:02] <daf> night stub 
[05:02] <carlos> stub: bye
[05:02] <daf> stub: before you go...
[05:02] <daf> stub: shall I fix that Makefile bug?
[05:02] <stub> Bug? Looked fine to me...
[05:03] <daf> stub: look again
[05:03] <daf> stub: or shall I explain? :)
[05:04] <SteveA> daf, carlos: let's meet again in 30 minutes to talk about permissions and anything else pertaining to rosetta alpha
[05:04] <stub> hardcoded launchpad_test?
[05:04] <daf> stub: exactly
[05:04] <carlos> SteveA: ok
[05:04] <stub> sure - fix it if you want.
[05:04] <daf> stub: ok, thanks
[05:05] <SteveA> spiv: you've finished the xml-rpc auth server?  I know we had a recent change in an encoding.
[05:05] <stub> Oh - was there a decision on the unique constraint for schemas and labels?
[05:05] <carlos> stub: I will send you a patch
[05:05] <carlos> well, to lifeless
[05:06] <SteveA> spiv: can you talk with the admins about getting it running on maquarie?  Upfront are aiming to deliver their part at the end of the week, so it would be good to get the xml-rpc server running by then.
[05:06] <SteveA> cheerio stub
[05:07] <spiv> SteveA: In theory, yes, but I haven't written as many tests as I'd like to be 100% confident that there's no stupid bugs.
[05:07] <spiv> SteveA: I'll talk to the admins.
[05:08] <SteveA> can you mail them today?
[05:08] <spiv> Yes :)
[05:09] <spiv> (Assuming they don't all collapse for 24hrs after the Ubuntu release :)
[05:34] <elmo_mf> password removed from r.ul.o
[05:34] <daf> elmo_mf: thanks!
[05:50] <carlos> SteveA, daf: meeting?
[05:52] <SteveA> hi
[05:52] <carlos> hi
[05:53] <daf> hi
[05:53] <SteveA> so, let's talk about what different people should be able to do with rosetta
[05:54] <SteveA> we have the following situations
[05:54] <SteveA> - pages / actions that anyone can see / do.
[05:54] <SteveA> - pages / actions that only People in the database may see / do
[05:55] <SteveA> - pages / actions that no-one may see / do
[05:55] <daf> what things fall into the last category?
[05:55] <SteveA> accessing attributes or methods for which no security declaration is made 
[05:55] <SteveA> you've been doing this all along :)
[05:55] <daf> indeed :)
[05:56] <SteveA> this may sound trivial, but it used to be a real pain with zope 2 development
[05:56] <SteveA> before the zope2 security system was partially fixed by making it deny before allowing
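In Zope 3, the default-deny behaviour Steve describes comes from security declarations in ZCML: any attribute not covered by a declaration is simply inaccessible. A hypothetical fragment (the class path and attribute names are invented for illustration, not taken from the Rosetta code):

```xml
<!-- Hypothetical declaration: attributes with no <require> stay denied -->
<content class="canonical.rosetta.browser.POTemplateView">
  <require permission="zope.View" attributes="title languages" />
  <require permission="launchpad.Edit" set_attributes="title" />
</content>
```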
[05:56] <SteveA> we also have the following situations with people being logged in
[05:57] <SteveA> - no-one is logged in (it is the unauthenticated principal)
[05:57] <SteveA> - a Person is logged in
[05:57] <daf> right
[05:57] <carlos> ok
[05:57] <SteveA> we have the following combinations of what people can see / do, and who is logged in
[05:57] <SteveA> - no-one is logged in; pages/actions that anyone can see/do
[05:58] <SteveA> - a person is logged in; pages/actions that anyone can see/do
[05:58] <SteveA> - a person is logged in; pages/actions that only people in the database may see/do
[05:58] <SteveA> We need to make a list of those pages/actions that only people in the database may see/do
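The page categories and login states SteveA lists above can be sketched as a tiny access check. This is a sketch of the logic only; the real Zope security machinery is declaration-driven, not a hand-written function:

```python
def can_access(category, principal):
    """category: 'anyone' | 'person' | 'none'; principal: a Person or None
    (None models the unauthenticated principal)."""
    if category == "anyone":
        return True                    # public pages/actions
    if category == "person":
        return principal is not None   # only People in the database
    return False                       # no security declaration made: always denied

# The three reachable combinations from the discussion:
assert can_access("anyone", None)
assert can_access("anyone", "daf")
assert can_access("person", "daf")
# ...and the implicit denials:
assert not can_access("person", None)
assert not can_access("none", "daf")
```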
[05:59] <carlos> I think the only pages that should be accessible only to people who are logged in are the forms (except the search)
[05:59] <SteveA> doing it based on pages is a good first step
[05:59] <carlos> https://rosetta.warthogs.hbd.com/rosetta/projects/$Project.name/$Product.name/$POTemplate.name/translate
[06:00] <daf> translator dashboard, maintainer dashboard, preferences, translation template
[06:00] <daf> those are the ones that come to mind
[06:00] <SteveA> let's open a bug called "set restrictive permissions on forms in rosetta"
[06:00] <carlos> https://rosetta.warthogs.hbd.com/rosetta/projects/$Project.name/$Product.name/$POTemplate.name/$Language.code/+edit
[06:03] <SteveA> ok.
[06:04] <SteveA> How about if daf and I pair-program on getting the first few permissions sorted out in the code?
[06:04] <SteveA> carlos: when do you need to sleep ?
[06:04] <carlos> SteveA: I can still work some extra hours
[06:04] <carlos> hmm, I don't have access to the server to do the initial import data
[06:04] <daf> carlos: your stamina is impressive :)
[06:05] <carlos> daf: as soon as I leave the laptop it will go down :-)
[06:05] <daf> stay where you are!
[06:05] <carlos> I only need to be busy
[06:05] <carlos> :-D
[06:06] <carlos> what other important tasks do we have for today?
[06:06] <carlos> hmmm
[06:06] <carlos> ok, I could prepare a .sql file with the initial server data
[06:06] <carlos> in my laptop
[06:06] <daf> why can't I run the import script on the server?
[06:06] <carlos> so daf, you will only need to load it into rosetta's server
[06:06] <carlos> daf: I don't have an account :-)
[06:07] <daf> but I could run the script
[06:07] <daf> or I could load SQL
[06:07] <carlos> but you will be busy with steve
[06:07] <daf> that would probably be easier, actually
[06:07] <carlos> that's my point I work now on a sql
[06:07] <carlos> with the initial data for the alpha
[06:07] <carlos> upload it to chinstrap
[06:07] <carlos> and you import it into the database
[06:10] <carlos> SteveA, daf?
[06:11] <daf> yes?
[06:11] <carlos> do you agree?
[06:11] <carlos> :-D
[06:11] <daf> yep
[06:11] <carlos> ok
[06:24] <SteveA> elmo_mf: ping?
[06:26] <elmo_mf> yeah?
[06:26] <SteveA> any chance of andrew getting an account on rosetta until Monday, and being able to run some twisted listening on an unpriv port to the outside world, and being able to create a postgres DB? 
[06:27] <SteveA> I'd like him to be able to run his xml-rpc auth service, with a dummy db, so that upfront can test against the software their stuff will be used with.
[06:29] <SteveA> this in turn will make it more likely that the "turning on auth on the ubuntu site" will work properly on friday
[06:33] <elmo_mf> SteveA: yeah, still busy with random release stuff, I'll try and do it in a bit
[06:33] <SteveA> ok, shall I mail to admins@... ?
[06:34] <elmo_mf> never hurts to mail :)
[06:35] <SteveA> thanks elmo
[09:28] <carlos> daf: how is the bug fixing going?
[09:31] <daf> I have a fix for the recently translated feature locally
[09:31] <daf> Steve and I are working on permissions
[09:33] <carlos> I'm importing .po files but it's sloooooow
[09:33] <daf> yeah :(
[09:34] <daf> even on the Rosetta server, it's slow
[09:34] <carlos> hmm, that's bad
[09:38] <carlos> daf: hmm, we have a problem...
[09:38] <carlos> psycopg.IntegrityError: ERROR:  duplicate key violates unique constraint "pomsgidsighting_pomsgset_key"
[09:38] <carlos> INSERT INTO POMsgIDSighting (id, pluralform, pomsgid, inlastrevision, datelastseen, datefirstseen, pomsgset) VALUES (1836, 1, 924, 't', 'NOW', 'NOW', 918)
[09:38] <carlos> msgfmt -c -v does not detect anything and it's a .pot file...
[09:39] <daf> must be a bug
[09:41] <carlos> I know, I'm debugging it now
[09:43] <daf> do you know what exactly the "pomsgidsighting_pomsgset_key" constraint is?
[09:43] <carlos> yes, I think it's a problem with a plural form
[09:43] <carlos> a unique key
[09:43] <carlos> don't worry, I will handle it
[09:44] <daf> well, I'm not surprised we're finding bugs in the importer when testing it with real files
[09:45] <carlos> I think I have it, need to look at the .pot file
[09:45] <carlos> SHIT
[09:45] <carlos> #: mailcheck/mailcheck.c:1091
[09:45] <carlos> #, c-format
[09:45] <carlos> msgid "%d unread"
[09:45] <carlos> msgid_plural "%d unread"
[09:45] <carlos> msgstr[0]  ""
[09:45] <carlos> msgstr[1]  ""
[09:46] <carlos> How could we handle it?
[09:46] <carlos> it's not a bug in the importer
[09:46] <daf> ooh
[09:46] <daf> I think it is a bug
[09:46] <carlos> not in the importer
[09:46] <daf> this is a valid message set, I think
[09:46] <carlos> the database does not handle it
[09:47] <daf> oh, right, not in the importer
[09:47] <daf> the constraint is invalid, I think
[09:47] <carlos> seems like it's invalid
[09:47] <carlos> yes
[09:47] <carlos> funny...
[09:47] <carlos> hope lalo did not assume any check based on that restriction...
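The failure above is easy to reproduce in any SQL engine. Assuming the constraint is a unique key on (pomsgset, pomsgid), which is what its name and the traceback suggest, a message whose msgid and msgid_plural texts are identical maps both plural forms to the same msgid row, so the second sighting violates the constraint. A sqlite3 sketch of the assumed constraint and the widened one that would fix it (table shapes simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE POMsgIDSighting (
    id INTEGER PRIMARY KEY,
    pomsgset INTEGER,
    pomsgid INTEGER,
    pluralform INTEGER,
    UNIQUE (pomsgset, pomsgid))""")  # the assumed deployed constraint

conn.execute("INSERT INTO POMsgIDSighting VALUES (1, 918, 924, 0)")  # msgid
try:
    # msgid_plural has the same text, hence the same pomsgid row
    conn.execute("INSERT INTO POMsgIDSighting VALUES (2, 918, 924, 1)")
    clash = False
except sqlite3.IntegrityError:
    clash = True
assert clash  # this is the IntegrityError carlos hit

# Widening the key to include pluralform accepts both sightings.
conn.execute("""CREATE TABLE POMsgIDSighting2 (
    id INTEGER PRIMARY KEY,
    pomsgset INTEGER,
    pomsgid INTEGER,
    pluralform INTEGER,
    UNIQUE (pomsgset, pomsgid, pluralform))""")
conn.execute("INSERT INTO POMsgIDSighting2 VALUES (1, 918, 924, 0)")
conn.execute("INSERT INTO POMsgIDSighting2 VALUES (2, 918, 924, 1)")
```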
[09:49] <daf> will you submit a patch to the schema?
[09:49] <carlos> yes
[09:49] <carlos> with the labels changes
[09:52] <carlos> seems like lalo is connected
[09:52] <carlos> I got an orkut invitation from him
[10:27] <debonzi> hi elmo, do you have some time to give me a brief explanation about sourcepackage builddepends and builddependsindep?
[10:30] <debonzi> if someone else knows about it, they're welcome to answer too :)
[10:32] <daf> debonzi: what do you need to know?
[10:33] <daf> builddepends are packages another package needs to have installed in order to build its architecture-dependent packages
[10:33] <daf> builddependsindep are packages needed to build architecture-independent packages
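For reference, both fields live in the source stanza of a package's debian/control file. A hypothetical example (the package names are illustrative, not taken from a real control file):

```text
Source: mailcheck
Build-Depends: debhelper (>= 4), libgtk2.0-dev
Build-Depends-Indep: docbook-to-man
```

Here the GTK headers are needed to compile the architecture-dependent binaries, while docbook-to-man is only needed to generate architecture-independent documentation.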
[10:34] <debonzi> daf, that's what I needed to know: indep means arch-indep :) tks
[10:35] <daf> no problem
[10:58] <carlos> daf: what should I do with the .po files that fail to import? Is a bug report with the file attached enough?
[10:59] <daf> no
[10:59] <daf> just include the message set that causes the error, I think
[10:59] <carlos> daf: I need to debug it
[10:59] <carlos> and I don't want to spend time on it tonight
[10:59] <daf> sure
[11:00] <carlos> I have the bt
[11:00] <daf> BT?
[11:00] <carlos> and I have the .po file, I will save them for tomorrow
[11:00] <carlos> trace
[11:00] <carlos> backtrace
[11:01] <carlos> pfff, in an hour I have only three po files imported. I think we should fix it before the beta release....
[11:01] <daf> :(
[11:02] <carlos> well two, and it's working on the third (and the second failed)
[11:07] <carlos> SteveA: do we have a profile for python?
[11:07] <SteveA> I don't know what that means
[11:07] <kiko> carlos, profile.py?
[11:07] <kiko> or hotspot?
[11:07] <spiv> carlos: There is a profiler, if that's what you're asking.
[11:08] <carlos> any URL I could look at
[11:08] <carlos> or a name
[11:08] <SteveA> hotshot
[11:08] <carlos> hotspot?
[11:08] <SteveA> hotshot
[11:08] <carlos> ok
[11:08] <carlos> thanks *
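hotshot shipped with Python 2; the same measurement can be made with the stdlib profile/cProfile modules kiko mentions. A sketch profiling a stand-in for the import routine (`import_po` here is a hypothetical placeholder, not the real Rosetta code):

```python
import cProfile
import io
import pstats

def import_po(lines):
    # Hypothetical stand-in for the real .po import routine.
    return [line for line in lines if line.startswith("msgid")]

profiler = cProfile.Profile()
profiler.enable()
import_po(['msgid "%d unread"', 'msgstr ""'] * 1000)
profiler.disable()

# Report the five most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

If the cumulative time turns out to sit in database calls rather than Python, that points at the SQLObject/PostgreSQL side, which is exactly what the statement logging discussed below the surface here would confirm.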
[11:08] <daf> I think the database is the bottleneck
[11:08] <spiv> carlos: It may also be worth turning on the debug flag in SQLObject, if you suspect DB calls are the problem.
[11:08] <carlos> phone
[11:09] <spiv> (or the equivalent at the postgres end)
[11:09] <SteveA> we may want to provide some db calls that do a lot in one query, and see if that helps
[11:18] <carlos> sorry, I'm back 
[11:20] <carlos> daf: the bugs I file now should block the Alpha release or the beta one?
[11:21] <daf> what are the bugs?
[11:21] <carlos> they're bugs in the import code that make it fail
[11:21] <carlos> for instance, it does not handle custom headers (like X-Generator)
[11:21] <daf> file the bugs, we can fix the dependencies later
[11:21] <carlos> ok
[11:22] <daf> I think the parser should be fairly liberal with regards to ignoring headers it doesn't recognise
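A liberal header parser simply keeps whatever it finds instead of rejecting headers it doesn't recognise. A minimal sketch of that idea (the function name is hypothetical, not Rosetta's actual parser):

```python
def parse_po_header(msgstr):
    """Parse a PO header msgstr into a dict, tolerating unknown
    headers such as X-Generator rather than raising on them."""
    headers = {}
    for line in msgstr.split("\n"):
        if ":" not in line:
            continue  # skip blank or malformed lines instead of failing
        key, value = line.split(":", 1)
        headers[key.strip()] = value.strip()
    return headers

hdr = parse_po_header(
    "Project-Id-Version: mailcheck\n"
    "Content-Type: text/plain; charset=UTF-8\n"
    "X-Generator: KBabel 1.0\n"
)
```

Any validation of specific headers (Content-Type, Plural-Forms) can then be layered on top of the dict, leaving unrecognised entries untouched.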
[11:23] <carlos> daf: right
[11:32] <kiko> lulu! late night eh? :)
[11:36] <lulu> kiko:hey hon - yup - last nigtht and tonight, but just had a yummy dinner as a celebration!
[11:37] <lulu> how u?
[11:45] <carlos> The SQL statements don't show anything that seems unnecessary, except when it's a SELECT COUNT... in that case it executes it twice
[11:45] <carlos> 2004-09-15 23:44:45 [13570]  LOG:  statement: SELECT COUNT(*) FROM POMsgIDSighting WHERE pluralform = 0 AND pomsgset = 1345
[11:45] <carlos> 2004-09-15 23:44:45 [13570]  LOG:  statement: SELECT COUNT(*) FROM POMsgIDSighting WHERE pluralform = 0 AND pomsgset = 1345
[11:46] <daf> hmm
[11:46] <carlos> but I doubt that's the problem to go so slow
[11:46] <carlos> we have about 10 selects/updates
[11:46] <daf> me too
[11:46] <carlos> hmmm
[11:46] <daf> 10 per message set?
[11:47] <carlos> I didn't count them
[11:47] <carlos> It's difficult to do it
[11:47] <daf> yeah :(
[11:47] <carlos> it was a feeling  :-)
[11:47] <daf> perhaps you could create a test PO file which only contains one or two message sets
[11:48] <carlos> wow
[11:48] <carlos> 28 select/updates
[11:48] <carlos> every field is updated in different queries:
[11:48] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgIDSighting SET inlastrevision = 't' WHERE id = 2691
[11:48] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgSet SET commenttext = '' WHERE id = 1345
[11:48] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgSet SET sourcecomment = '' WHERE id = 1345
[11:48] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgSet SET filereferences = 'accessx-status/GNOME_AccessxStatusApplet.server.in.in.h:3' WHERE id = 1345
[11:48] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgSet SET fuzzy = 'f' WHERE id = 1345
[11:49] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgSet SET flagscomment = '' WHERE id = 1345
[11:49] <carlos> 2004-09-15 23:50:01 [13570]  LOG:  statement: UPDATE POMsgSet SET obsolete = 'f' WHERE id = 1345
[11:49] <carlos> instead of executing only one UPDATE
[11:49] <daf> ouch
[11:49] <carlos> that could be a big problem
[11:49] <carlos> with long .po files
[11:49] <daf> we might be able to improve that
[11:50] <daf> can you identify where these calls are being made in Python code?
[11:50] <daf> spiv: should SQLObject be clustering these changes?
[11:50] <daf> spiv: or is it not that clever?
[11:52] <carlos> SQLObject
[11:52] <carlos> pofile_adapters.py
[11:53] <carlos> it's an sqlobject that changes its attributes
[11:53] <carlos> and every time, it seems like it executes an update
[11:54] <daf> how about this:
[11:55] <daf> (we don't have to do this now)
[11:55] <daf> we take a PO file that is known to import
[11:55] <daf> i.e. it doesn't trigger any bugs
[11:55] <daf> and we time how long it takes to import it
[11:56] <daf> then, we add a magic method which does the above six updates using _connection.query()
[11:56] <daf> make the PO file adapters use that method
[11:56] <daf> and time it again
[11:56] <daf> that way, we know how much of a performance improvement we can expect from reducing the number of queries
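daf's idea, collapsing the six single-column statements from carlos's log into one UPDATE, can be sketched with sqlite3 standing in for PostgreSQL/SQLObject (the hand-rolled statement is what a `_connection.query()` call would send; the table shape is trimmed to the columns visible in the log):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE POMsgSet (
    id INTEGER PRIMARY KEY, commenttext TEXT, sourcecomment TEXT,
    filereferences TEXT, fuzzy INTEGER, flagscomment TEXT, obsolete INTEGER)""")
conn.execute("INSERT INTO POMsgSet (id) VALUES (1345)")

# One round-trip instead of six per-attribute UPDATEs.
conn.execute(
    "UPDATE POMsgSet SET commenttext = ?, sourcecomment = ?, "
    "filereferences = ?, fuzzy = ?, flagscomment = ?, obsolete = ? "
    "WHERE id = ?",
    ("", "", "accessx-status/GNOME_AccessxStatusApplet.server.in.in.h:3",
     0, "", 0, 1345),
)
row = conn.execute(
    "SELECT filereferences, fuzzy FROM POMsgSet WHERE id = 1345"
).fetchone()
```

With thousands of message sets per .po file, cutting six statements to one removes most of the per-row round-trip overhead, which is the improvement daf proposes timing.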
[11:57] <carlos> makes sense (I suppose using _connection.query() will let us execute SQL statements directly, right?)
[11:58] <daf> yes, I think there are some cases of it in sql.py already
[11:58] <carlos> ok
[11:58] <carlos> I will file a bug about it now
[11:58] <daf> brilliant, thanks
[11:58] <daf> I think perhaps you should sleep now :)
[11:59] <carlos> daf: yes, I was thinking about it already, don't worry :-)