[01:35] <jsgotangco> good morning
[02:44] <kiko> yawn
[02:47] <jsgotangco> =)
[05:30] <mpt> Gooooooooooooooooooood afternoon Launchpadders!
[05:43] <stub> lifeless: I need to merge upstream Z3 changes from sftp://chinstrap/home/warthogs/archives/stub/zope/devel into our Zope. Do you want to handle it or should I?
[05:55] <root__> Is this the channel for the Ubuntu Linux distribution?
[07:23] <lifeless> stub: if you can, please do. EP is keeping me plenty busy
[07:57] <mpt_> "(Use your email address and Launchpad password create an account.)"
[08:01] <spiv> mpt_: I've mailed the admins about that.
[08:03] <mpt_> ok :-)
[08:03] <mpt_> At least it's an improvement
[08:03] <spiv> Yeah.
[08:57] <carlos> morning
[09:00] <mdke> morn
[09:01] <jsgotangco> hello
[09:08] <stub> So if I break out Product.name, Distribution.name and Project.name into a PillarName table there will be a lot of nasty performance fallout. Most of this could be fixed using prejoins, but identifying all the call sites will take time and cause disruptions due to increased page timeouts.
[09:09] <stub> Alternatively, I could keep the product, distribution and project tables as they are, and maintain uniqueness in other ways.
[09:09] <stub> I'm leaning towards keeping the existing tables as they are at the moment
[09:11] <spiv> I guess it depends on the alternatives.
[09:11] <stub> Although perhaps if I changed the database in such a way that all calls to Product.name fail, I could slowly work through the failing tests.
[09:11] <mpt_> I guess keeping the tables separate also makes it more undoable
[09:14] <spiv> What sort of alternatives do you have in mind?  Something like having triggers that update a PillarName table, which in turn has the unique constraint?
[09:14] <stub> Yes. That would be the implementation.
[09:15] <spiv> Doesn't seem too horrible.
[09:15] <stub> The only code that would use the PillarName table would be the name validators
[09:16] <spiv> Right.
[09:16] <stub> It is sacrificing correct design for a quick fix, although 'correct' here is arguable.
[09:17] <jamesh> "correct" might be to use postgresql table inheritance
[09:17] <jamesh> have product, project and distro inherit from a "pillar" table with a name column which has a uniqueness constraint
[09:18] <stub> Perhaps. We have never done that in the past in order to ensure we could switch to a different back end if need be. Although we could emulate that with views if necessary.
[09:19] <spiv> Well, we already have portability issues because we have the plpython routines.
[09:19] <jamesh> sure.  It is a way to ensure uniqueness though.
[09:20] <stub> There isn't too much PostgreSQL specific stuff to worry about. The most annoying to work around would be the PostgreSQL SQL extensions we are using in a few places because performing those queries without the extensions would be expensive
[09:20] <stub> I guess table inheritance wouldn't be major provided we don't go crazy.
[09:21] <jamesh> the above usage is basically just to enforce a constraint
[09:21] <jamesh> I'm sure we could find other ways to enforce the constraint with other dbs (if we ever decided to move away from pg)
[09:22] <stub> jamesh: We would also use the base table to check if the constraint is about to be violated in the validation code, to avoid needing to do 3 queries.
[09:22] <stub> Yup. Same result really as using triggers to maintain PillarName, but without the triggers so it will be cleaner and faster.
[09:23] <stub> DB patch will be a bitch though - all those foreign key constraints to rebuild :P
[09:25] <jamesh> could you actually do it as a db patch?
[09:25] <jamesh> doesn't it involve recreating all those tables?
[09:25] <stub> It can be done as a DB patch. I only need to recreate the Product, Distribution and Project tables. All the related tables can be handled with ALTER TABLE.
[09:27] <jamesh> UML does ...
[09:27] <stub> UML wasn't invented when I was doing those subjects ;)
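The trigger-maintained PillarName approach spiv and stub discuss above can be sketched with a minimal, hypothetical schema. This uses SQLite (via Python's sqlite3) purely for illustration; Launchpad's real schema, names, and PostgreSQL trigger syntax all differ, so treat every identifier here as invented.

```python
import sqlite3

# Illustrative schema: a PillarName table whose primary key enforces
# uniqueness of names across the pillar tables. Triggers mirror each
# insert into PillarName (Distribution would get the same treatment).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PillarName (name TEXT PRIMARY KEY);

CREATE TABLE Product (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE Project (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

-- A duplicate name in any pillar table violates PillarName's
-- primary key and aborts the insert.
CREATE TRIGGER product_pillar AFTER INSERT ON Product
BEGIN INSERT INTO PillarName VALUES (NEW.name); END;
CREATE TRIGGER project_pillar AFTER INSERT ON Project
BEGIN INSERT INTO PillarName VALUES (NEW.name); END;
""")

conn.execute("INSERT INTO Product (name) VALUES ('firefox')")
try:
    # Same name in a different pillar table: the trigger's insert
    # into PillarName fails, so the whole statement is rejected.
    conn.execute("INSERT INTO Project (name) VALUES ('firefox')")
except sqlite3.IntegrityError as e:
    print("rejected duplicate pillar name:", e)
```

As stub notes, the same effect could come from table inheritance without triggers; this is just the trigger variant made concrete.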
[09:28] <spiv> Hmm, the branch-scanner is getting a heap of ForbiddenAttribute: ('last_scanned', <Branch at 0x2aaab16ac990>) errors.
[09:33] <lifeless> mpt_: ping
[09:36] <lifeless> stub: what about an updatable view that looks like e.g. product, but has the name coming from PillarName ?
[09:36] <lifeless> stub: that should stop sqlobject being stupid
[09:38] <jamesh> spiv: weird.  I wonder why the tests didn't pick that up?
[09:38] <lifeless> local configuration vs production server configuration perhaps
[09:38] <lifeless> was security.py updated correctly, and do the tests run the scanner as the right db user ?
[09:38] <lifeless> erm, security.cfg
[09:40] <jamesh> lifeless: branchscanner user has select,insert,update perms on the Branch table.  That exception would be from Zope security framework
[09:41] <lifeless> interesting
[09:41] <lifeless> so we have a sec proxy in the way
[09:42] <jamesh> well, the scanner uses utilities to get and create branches,etc
[09:44] <stub> lifeless: Updatable views are a pita to create, although I'll look into that too.
[09:47] <jamesh> updatable views would need to be updated after each modification to the underlying table
[09:53] <stub> spiv, lifeless: ForbiddenAttribute is a Zope security wrapper exception - nothing to do with the DB.
[09:54] <sivang> morning !
[09:59] <lifeless> jamesh: so, there are two things, why do the tests miss this, and lets fix it :)
[10:04] <jamesh> lifeless: it might be easiest to move the bzrsync code out of lib/importd and have it run by the LP test runner
[10:04] <lifeless> jamesh: is it not run by the lp test runner already?
[10:04] <lifeless> jamesh: anyway, I think that is fine, and sensible
[10:05] <jamesh> lifeless: lib/importd/tests/harness.py does some custom setup (see ZopelessUtilitiesHelper), which probably doesn't setup the security stuff
[10:07] <lifeless> jamesh: I trust you :). 
[10:25] <jamesh> spiv: hmm.  Branch.last_scanned is defined with Attribute() in the interface.  Could that be the problem?
[10:27] <spiv> jamesh: That should be fine.
[10:31] <BjornT> spiv, jamesh: Attribute could be the problem. Attributes don't get security declarations if you use set_schema
[10:32] <jamesh> BjornT: ... and branch.zcml uses set_schema
[10:32] <jamesh> I guess that answers the question
[10:33] <spiv> Oh, huh.  I didn't know that.
[11:11] <stub> We use Attribute too much because we are lazy. We should replace them with genuine schema definitions as we go (most of them would be name = Object(IFoo) or one of the more complex schema field types I suspect).
[11:48] <stub> lifeless: Do you have any issues with migrating Balleny to Dapper?
[11:56] <sabdfl> we should not use set_schema
[11:56] <sabdfl> it's like "chmod 666"
[12:37] <lifeless> stub: Do It
[12:38] <jordi> can a product owner change their RCS source, or is that only available to LP admins?
[12:39] <LarstiQ> for vcs-imports?
[12:40] <jordi> https://launchpad.net/products/silva/trunk
[12:40] <jordi> they moved from CVS to SVN
[12:40] <LarstiQ> https://launchpad.net/products/silva/trunk/+source should allow that methinks
[12:44] <jordi> I get a permission denied, so I don't know
[12:44] <jordi> I mailed him
[02:32] <carlos> see you later
[03:02] <rodarvus> hi there
[03:03] <rodarvus> I tried to upload a package to upload.ubuntu.com 20 minutes ago, but I can't see any sign of it in the Edgy queue (nor was any email sent to edgy-changes)
[03:03] <rodarvus> some debugging:
[03:03] <rodarvus> - package name is 'x11proto-damage'
[03:03] <rodarvus> - I have my gpg key uploaded to LP *and* I used rodarvus@ubuntu.com as uploader email
[03:04] <rodarvus> - I am on the relevant LP group (ubuntu-core-dev)
[03:04] <rodarvus> is there any place I can check to see what went wrong with the upload?
[03:05] <salgado> rodarvus, I guess cprov will be able to help you
[03:05] <malcc> rodarvus: I can see your upload in the failed folder, I'm just checking why now
[03:06] <rodarvus> malcc, in this case shouldn't it also show up on the Rejected Edgy queue? (in LP)
[03:06] <malcc> rodarvus: No. Rejected is for uploads which the system understood and managed to import, and which the administrator later decided to reject.
[03:07] <malcc> rodarvus: This is what happens when the code breaks while trying to read your upload into the database
[03:07] <rodarvus> oh, right
[03:13] <rodarvus> malcc, I got the rejection mail now, thanks!
[03:13] <malcc> rodarvus: Great!
[03:41] <jgi> hello everyone
[03:42] <jgi> I'm one of WengoPhone's developers. I created a Launchpad project a while ago, but I can't use Rosetta. I sent an e-mail to an administrator about this but I got no answer so far.
[03:42] <jgi> How should I proceed to be able to use Rosetta?
[03:46] <matsubara> jgi: have you seen https://help.launchpad.net/RosettaFAQ ?
[03:47] <matsubara> carlos, jordi: ^^
[03:49] <jgi> matsubara: yes
[03:50] <jgi> sorry, last minute meeting, brb
[03:54] <salgado> carlos, around?
[04:17] <kiko-zzz> morning!
[04:18] <kiko-zzz> flacoste, thanks for the answer -- and yes, insightful. Do you think the issue is HTTPS non-caching?
[04:19] <kiko-zzz> SteveA, why is it difficult to fix the zope3 logger to DTRT for us?
[04:19] <flacoste> kiko-zzz: HTTPS non-caching, that might be possible
[04:19] <kiko-zzz> SteveA, lifeless, do you know if that's the case?
[04:21] <jgi> matsubara: in https://help.launchpad.net/RosettaNewImportPolicy , it says "Contact the upstream authors of the product, and tell them about his plans. We suggest using the mail template at the end of this page." 
[04:21] <jgi> matsubara: and then "If they agree to use Rosetta as their infrastructure for translation, the product will be marked as "Rosetta official","
[04:21] <jgi> matsubara: but what happens in between? How do the Rosetta admins know that the project maintainer agreed?
[04:22] <jgi> matsubara: I thought it was supposed to be done by emailing rosetta@launchpad.net
[04:22] <jgi> matsubara: I did send an e-mail a few weeks ago, and I never got any response
[04:22] <flacoste> kiko-zzz: Pages accessed by HTTPS can never be cached in a shared cache. Since the conversation between browser and server is encrypted, intermediate caches are unable to see the content to cache it. Worse, some browsers will not even cache HTTPS documents in their local per-user caches.
[04:22] <flacoste> that comes from http://www-uxsup.csx.cam.ac.uk/~jw35/courses/using_https/html/x191.html
[04:23] <flacoste> (found by googling https caching)
[04:23] <kiko-zzz> flacoste, I thought that firefox would cache the HTTPS content. Don't you?
[04:23] <kiko-zzz> flacoste, I think perhaps our Javascript is being wonky
[04:23] <flacoste> i don't know if firefox is included in 'some browsers' - it represents 50% of our browsers
[04:23] <matsubara> jgi: you probably want to chat with jordi or carlos, but they apparently aren't available now.
[04:25] <matsubara> jgi: kiko-zzz might also help. 
[04:25] <kiko-zzz> matsubara, not right now, I can't :)
[04:25] <matsubara> kiko-zzz: ok.
[04:27] <flacoste> kiko-zzz: well the PNG aren't cached (according to Page info) so I don't think the CSS/JS is
[04:28] <jgi> matsubara: ok, thank you very much
[04:28] <kiko-zzz> flacoste, the PNGs aren't cached? we reload every single one of them every page load? waaah
[04:29] <flacoste> well, that would explain the statistics
[04:29] <flacoste> 70% of the requests are non-HTML
[04:29] <flacoste> i.e. 29.9% are dynamic URLs, the rest is static content
[04:30] <kiko-zzz> cprov, Kinnison, malcc: argh, we're getting spammed by sync requesters
[04:30] <kiko-zzz> cprov, Kinnison, malcc: should we add a mailing list contact, or should we use a separate team for upload admins?
[04:30] <lifeless> kiko-zzz: the pngs are cached
[04:30] <lifeless> kiko-zzz: but people hit 'ctrl-f5'
[04:30] <lifeless> or ctrl-R
[04:31] <flacoste> kiko-zzz: putting a proxy cache in front of the app servers (where static content could be cached) would offload processing these requests from the app server but that wouldn't solve the bandwidth issue
[04:31] <lifeless> and that will always do a full request for every item.
[04:31] <flacoste> lifeless: according to Page Info in my browser, it doesn't cache them
[04:32] <lifeless> oh, I was tracing locally. Https will force all documents to not cache in some browsers
[04:32] <lifeless> what browser are you using
[04:32] <flacoste> Firefox
[04:32] <flacoste> from Dapper
[04:32] <lifeless> IIRC that has that behaviour
[04:32] <lifeless> this is a reason to have pngs served via http
[04:32] <kiko-zzz> lifeless, that gives the end-user the broken lock icon.
[04:32] <flacoste> exactly
[04:33] <kiko-zzz> probably the right solution is to use SSL just for the login page and redirect back.
[04:33] <lifeless> let me check something
[04:33] <lifeless> kiko-zzz: except for private data like security bugs
[04:33] <kiko-zzz> I think this will have a serious performance benefit for us, fwiw
[04:33] <kiko-zzz> lifeless, *shrug*
[04:34] <lifeless> kiko-zzz: I agree we need to do something
[04:34] <lifeless> http will perform better, as long as we are careful about it I am fully supportive of that
[04:40] <carlos> jgi: hi
[04:40] <jgi> carlos: hello
[04:40] <carlos> jgi: you did the right thing
[04:40] <jgi> carlos: ok :-)
[04:40] <carlos> I guess jordi missed your email
[04:40] <jgi> no problem
[04:40] <carlos> jgi: which product are we talking about?
[04:41] <cprov> kiko-zzz: yes, there should be an option in team subscriptions to avoid it
[04:41] <jgi> carlos: WengoPhone
[04:41] <jgi> carlos: I can send you the e-mail back
[04:41] <carlos> no, I found it
[04:41] <jgi> ok
[04:41] <carlos> jgi: I will ping jordi about it
[04:41] <carlos> jordi: ?
[04:43] <cprov> kiko-zzz: those emails are sent to us because the sync-requester has explicitly subscribed the UPAA team to the bug, right ?
[04:43] <kiko-zzz> correct.
[04:43] <flacoste> I confirm, Firefox doesn't really cache https
[04:43] <kiko-zzz> Keybuk, ping?
[04:43] <flacoste> about:cache?device=disk doesn't contain any https link
[04:43] <flacoste> about:cache?device=memory contains them
[04:44] <flacoste> but the Fetch count increases by one every time I visit a launchpad page
[04:44] <kiko-zzz> it probably fetches once and then reuses for the elements in the page.
[04:44] <flacoste> so, it still pulls them out on every request
[04:44] <flacoste> you mean, if the image appeared more than once on the page, that would make sense
[04:45] <lifeless> ff will request the pngs every single time it is restarted, but should not during a session, unless f5 is pressed. (we'd have a worse than 70% ratio if it requested every single time)
[04:45] <lifeless> flacoste: what headers are we serving the pngs with?
[04:46] <flacoste> Cache-control: public, max-age=86400
[04:46] <flacoste> Expires: Date one day in the future
[04:46] <flacoste> Last modified: Date of last modification
[04:46] <flacoste> should be fine
[04:46] <lifeless> yeah, if that is not kicking ff, there is SFA we can do
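For reference, the three headers flacoste lists can be built like this. This is only a sketch of those header values in stdlib Python; the function name and shape are invented here and are not how Launchpad's app servers actually emit them.

```python
import time
from email.utils import formatdate

def static_cache_headers(last_modified, max_age=86400):
    """Build the cache headers described above for static content.

    `last_modified` is a Unix timestamp; `max_age` defaults to one
    day (86400 seconds), matching the Cache-Control value in the log.
    """
    now = time.time()
    return {
        # Allows shared caches to store the response for a day
        "Cache-Control": "public, max-age=%d" % max_age,
        # Expires: an absolute RFC 1123 date one day in the future
        "Expires": formatdate(now + max_age, usegmt=True),
        # Last-Modified: when the underlying file last changed
        "Last-Modified": formatdate(last_modified, usegmt=True),
    }

headers = static_cache_headers(last_modified=time.time() - 3600)
print(headers["Cache-Control"])  # public, max-age=86400
```

As the conversation concludes, correct headers like these still cannot force a browser to cache HTTPS responses if it refuses to on principle.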
[04:52] <LarstiQ> flacoste: your bug is next on the todo list btw
[04:53] <flacoste> LarstiQ: ok, so what is your patch fixing?
[04:55] <LarstiQ> flacoste: being able to do 'bzr log -r revid:francis.lacoste@contre.com-20060623145323-e01f1a4246557f3e..revid:francis.lacoste@contre.com-20060623145356-8f4ba6313ad3237d'
[04:55] <LarstiQ> flacoste: that is what I understand the report to be about
[04:56] <flacoste> LarstiQ: that is indeed an entirely different issue
[04:56] <LarstiQ> flacoste: so I'll file a new bug later on and notify you about that
[04:56] <flacoste> LarstiQ: should I post my comment as a new/different bug then?
[04:56] <LarstiQ> flacoste: if you want, sure
[04:56] <LarstiQ> either of us will do :)
[05:00] <salgado> carlos, ping?
[05:00] <carlos> salgado: pong
[05:01] <salgado> carlos, I'm implementing KarmaContext, and I need to check with you what the context should be in some rosetta-related callsites of IPerson.assignKarma. do you have a few minutes to talk about that now?
[05:02] <carlos> sure
[05:02] <carlos> where could I read about KarmaContex?
[05:03] <salgado> carlos, launchpad.canonical.com/KarmaContext
[05:03] <carlos> ok
[05:04] <salgado> I have the callsites noted down here, with what I think should be the context. I'll paste them in /query for you
[05:04] <carlos> matsubara: hmm, stub is not around
[05:05] <carlos> matsubara: I introduced a change with latest production update that needs data migration
[05:05] <carlos> I think stuart already did such data migration and added the unique restriction
[05:05] <matsubara> oh, I see, that's a data bug then. do you need me to report it?
[05:05] <carlos> matsubara: but I don't know it for sure
[05:05] <carlos> matsubara: it's not really a bug
[05:05] <carlos> it would be just that stub was doing the migration at that time
[05:06] <carlos> we need a confirmation from stub first
[05:06] <flacoste> LarstiQ: bug 51980
[05:06] <Ubugtu> Malone bug 51980 in bzr "bzr log <file> displays irrelevant log record" [Untriaged,Unconfirmed]  http://launchpad.net/bugs/51980
[05:09] <doko> hmm, I did build the same binary, same version from a different source; it was successfully built, but I didn't get a message that it cannot enter the archive ...
[05:10] <LarstiQ> flacoste: thanks
[05:13] <flacoste> kiko-zzz: did you receive an email notification from the spec tracker about my request for comment?
[05:14] <matsubara> carlos: thanks. I wrote on the report about it and asked stub to confirm if he did the data migration.
[05:14] <carlos> matsubara: data migration + unique restriction
[05:15] <carlos> he had to remove it until the data migration is done
[05:34] <Keybuk> kiko-zzz: hi
[05:34] <Keybuk> back from doctor's again now
[05:42] <Kinnison> Naturally :-)
[05:43] <Kinnison> Anything I can help with?
[06:56] <Keybuk> Kinnison: so, err
[06:56] <Keybuk> 1004     32353  0.6  0.0   5264  1160 ?        D    17:03   0:20 cp -a /srv/launchpad.net/ubuntu-archive/ubuntu/dists /srv/launchpad.net/ubuntu-archive/ubuntu/dists.new
[06:56] <Keybuk> has been running for 55 minutes
[06:58] <malcc> Hmm
[06:58] <Kinnison> Keybuk: impressive
[06:58] <Keybuk> cron.daily is safe, yes? :)  we won't get a second one starting in three minutes
[06:59] <Kinnison> it is locked
[06:59] <malcc> Yes, it's got super soyuz locking technology
[06:59] <Keybuk> mkdir .lock || exit 0 ? :p
[07:00] <Keybuk> WE HAVE REACHED PUBLISH-DISTRO \o/
[07:00] <Kinnison> if ! lockfile -r1 $LOCKFILE; then
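Keybuk's `mkdir .lock || exit 0` joke is in fact a workable pattern: mkdir is atomic, so it either creates the lock or fails, never both. A minimal Python sketch of the same idea (the lock path here is invented for illustration; a real cron job would use a fixed location like the `/var/lock/...` paths seen elsewhere in this log):

```python
import os
import tempfile

def acquire_lock(path):
    """Atomic mkdir-based lock: two overlapping cron runs can't
    both succeed, because only one mkdir of the same path can."""
    try:
        os.mkdir(path)
        return True
    except FileExistsError:
        return False

def release_lock(path):
    # Must be removed on every exit path, or the next run never starts.
    os.rmdir(path)

# Demonstration with a throwaway directory instead of /var/lock:
lock = os.path.join(tempfile.mkdtemp(), "cron-daily.lock")
assert acquire_lock(lock) is True    # first run gets the lock
assert acquire_lock(lock) is False   # overlapping run bails out
release_lock(lock)
```

The `lockfile -r1` form Kinnison quotes adds a bounded retry, which mkdir alone doesn't give you; the stale-lock problem (a crashed run leaving the lock behind) is unsolved in both sketches.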
[07:01] <malcc> dists is only 10000 files and 4GB, it shouldn't take a long time to copy
[07:02] <malcc> I should have thought more about this when you said it was taking 12 minutes earlier, that seemed longer than before...
[07:02] <Keybuk> 12 minutes to 53 minutes is a bit of a jump
[07:03] <malcc> Keybuk: Now there's a worryingly possible thought
[07:06] <Kinnison> malcc: any chance of your enhanced rsync based cron.daily any time soon?
[07:07] <malcc> Kinnison: I'm working on it as we speak
[07:07] <malcc> Kinnison: But I'd rather find out why copy is taking an hour for a few gigs than rush it out, if possible
[07:10] <Kinnison> indeed
[07:11] <malcc> A copy of 700 megs and 400 files (what we've got in dists on mawson) takes seconds
[07:12] <jordi> jgiaway: hey there.
[07:12] <jordi> jgiaway: I'll reply to your email now, sorry about this
[07:12] <malcc> s/400/4000/
[07:24] <jgi> jordi: hello
[07:24] <jgi> jordi: no problem, thank you very much for your feedback
[07:25] <Keybuk> Kinnison: queue builder still does not appear to be working
[07:25] <Keybuk> cron.daily is finished
[07:25] <Keybuk> sequencer ran queue_builder
[07:25] <Keybuk> but it took no time, and has not queued the builds I expected it to
[07:27] <bradb> and of course that's when the message arrives in my inbox...
[07:28] <Kinnison> Keybuk: Hmm, I can't see why this is the case, I know for sure that if I stop the sequencer and run the queue builder it works, I'll have to ponder, but unfortunately I'm about to leave for the night
[07:31] <Keybuk> Kinnison: ok, if it continues to appear to not do anything, I'll do it by hand in a minute
[07:31] <Kinnison> remember to stop the sequencer before you do
[07:40] <Keybuk> Kinnison: given that the cron.daily from hell run actually appears to have just died, rather than completed normally ...
[07:40] <Keybuk> could you investigate that?
[07:41] <Kinnison> I'll try
[07:43] <Kinnison> publish-distro got a db-closed error
[07:44] <Keybuk> so it died?
[07:44] <Keybuk> will it run ok in 20 minutes time?
[07:45] <Kinnison> should do
[08:00] <Kinnison> right, I gotta go
[08:00] <Kinnison> ciau
[08:03] <kiko-zzz> heeelo
[08:04] <kiko-zzz> Keybuk, so, we're getting spammed with sync requests
[08:04] <kiko-zzz> because ubuntu-archive is subscribed to these bugs by default
[08:13] <kiko> matsubara, timeouts and soft timeouts seem to be much better, eh?
[08:14] <matsubara> kiko: yep.
[08:14] <kiko> BjornT, are you okay with me working on the process for fetching bug messages to improve perf?
[08:15] <BjornT> kiko: sounds good, i'm not touching that code atm.
[08:15] <kiko> BjornT, thanks.
[08:16] <kiko> matsubara, is the Build.lastscore traversal error already fixed?
[08:17] <kiko> BjornT, question 2: did you end up landing those CSS fixes we did together?
[08:17] <BjornT> kiko: ah, no, forgot about those.
[08:19] <matsubara> kiko: it's assigned to cprov and it's in progress. I left a comment there the last time that oops appeared in the reports.
[08:19] <matsubara> kiko: by there I mean bug 44227
[08:19] <Ubugtu> Malone bug 44227 in soyuz "When the buildqueue_status is None +rescore page OOPS" [Medium,In progress]  http://launchpad.net/bugs/44227
[08:20] <kiko> matsubara, thanks a million.
[08:20] <kiko> BjornT, will you land them?
[08:22] <matsubara> damn pqm doesn't like me
[08:23] <BjornT> kiko: sure. not tonight, though. i'll file a bug so i won't forget it.
[08:23] <kiko> thanks!
[08:24] <matsubara> I'm having weird test failures on bzrlib and test_CVS.py.
[08:24] <cprov> matsubara: sorted, bug 44277
[08:24] <Ubugtu> Malone bug 44277 in Ubuntu "nothing on ctrl alt F# (Dapper)" [Medium,Rejected]  http://launchpad.net/bugs/44277
[08:24] <cprov> bug 44227
[08:24] <Ubugtu> Malone bug 44227 in soyuz "When the buildqueue_status is None +rescore page OOPS" [Medium,Fix released]  http://launchpad.net/bugs/44227
[08:25] <matsubara> cprov: thanks!
[08:25] <kiko> matsubara, like was discussed in the list today?
[08:26] <matsubara> kiko: it's not exactly the same error. I sent the request twice and got 2 different failures
[08:26] <matsubara> kiko: should I try the third?
[08:26] <kiko> yes
[08:28] <cprov> matsubara: np
[08:31] <milosz> hey i got a question about launchpad, i've seen that this trunk branch showed up for my project (drapes) but i cannot add a bzr branch to it, only cvs or svn
[08:32] <kiko> let's see, milosz 
[08:32] <milosz> or a tarball
[08:32] <kiko> https://launchpad.net/products/drapes/+addbranch
[08:33] <kiko> is the issue that you'd like to add the branch for that specific series?
[08:33] <milosz> i already have a branch registered (called main) but for some reason on https://launchpad.net/products/drapes i get this
[08:34] <milosz> trunk: The "trunk" series represents the primary line of development rather than a stable release branch. This is sometimes also called MAIN or HEAD. 
[08:34] <milosz> and when i click on it, it tells me i don't have trunk registered, and redirects me to https://launchpad.net/products/drapes/trunk/+source ...
[08:34] <milosz> it's ... confusing
[08:37] <milosz> am i missing something?
[08:44] <kiko> no
[08:45] <kiko> it's actually very confusing
[08:51] <milosz> so can i import the trunk branch or i can't?
[09:00] <kiko> milosz, well, I think you can just add the branch to your product.
[09:00] <kiko> we'll figure out later how to tie these two things together
[09:00] <milosz> ok
[09:04] <flacoste> how can I access the staging database using psql?
[09:04] <salgado> flacoste, usually you can't. access there is quite restricted
[09:06] <Keybuk> kiko: ok, remove soyuz team from ubuntu-archive then
[09:07] <kiko> Keybuk, well, how do we handle the queue permissions then?
[09:07] <Keybuk> ubuntu-archive
[09:07] <Keybuk> and put soyuz-team in admins
[09:07] <Keybuk> (where it already is)
[09:08] <Keybuk> to look at it another way, if the soyuz-team need to be able to modify the ubuntu queue, there is something wrong
[09:08] <Keybuk> because then they also need to be a member of the queue team for every distribution on launchpad
[09:11] <Keybuk> having a different team for queue permissions than for administrivia doesn't make sense either
[09:11] <Keybuk> because then the people receiving the requests are different from those who can actually act on them
[09:13] <kiko> Keybuk, that doesn't help us in the practical situation we are in now, does it?
[09:13] <Keybuk> what is the situation we're in now
[09:13] <Keybuk> I must admit, I don't understand why soyuz-team needs to be in ubuntu-archive
[09:13] <kiko> I want to move soyuz-team out of admins.
[09:13] <kiko> this doesn't give me a path forwards..
[09:13] <kiko> cprov, Kinnison?
[09:13] <Keybuk> but it sounds like you can't move them out of admins?
[09:14] <Keybuk> either soyuz-team has to be a member of every distribution's upload team
[09:14] <Keybuk> OR soyuz-team has to be specially privileged somehow
[09:14] <Keybuk> why do soyuz-team need to be able to use the ubuntu upload stuff?  nobody in there has permission to actually approve things
[09:16] <Keybuk> I guess the question is; what do soyuz-team need to be able to do in Launchpad?
[09:18] <kiko> Keybuk, if they don't have those permissions, they can't actually look at the queue UI.
[09:18] <Keybuk> do they need to?
[09:18] <Keybuk> (probably a silly question, but... )
[09:18] <kiko> well, if you want to be able to show them what is wrong about it, then, yes
[09:18] <Keybuk> when Guadalinex is on Launchpad, do soyuz-team need to look at their UI?
[09:19] <kiko> need is a hard word
[09:19] <kiko> but it might make things a lot easier
[09:19] <Keybuk> I suspect, for now, the right answer is either
[09:19] <Keybuk> a) soyuz-team in admins
[09:19] <Keybuk> b) soyuz-team in ubuntu-archive and procmail away the bugs
[09:19] <Keybuk> c) have a TEMPORARY ubuntu-upload-managers team that includes ubuntu-archive and soyuz-team
[09:19] <Keybuk> with the explicit mark that c) is temporary only, and will go away when soyuz works
[09:24] <kiko> I think b)
[09:24] <cprov> kiko: as it is right now.
[09:24] <kiko> yes.
[09:24] <kiko> and procmail away
[09:26] <cprov> I have a suggestion for the Soyuz Team: we can fix all soyuz-related security adapters to grant Admin for us and avoid this management overhead. what do you think kiko ?
[09:27] <kiko> using a celebrity, you mean? 
[09:27] <cprov> we're still missing several sec adapters anyway, good chance to land them.
[09:27] <cprov> yes, of course, soyuz-team will be a celebrity
[09:27] <kiko> I don't like that idea very much
[09:27] <kiko> it sounds weird to have the team backdoored
[09:28] <cprov> kiko: I see, but with the current global permission systems we can't do anything better
[09:28] <kiko> yeah
[09:28] <kiko> I'm thinking
[09:32] <salgado> is it possible to run all tests inside launchpad/doc/?
[09:38] <kiko> I don't know myself
[09:38] <BjornT> salgado: this could work: python test.py -f --test='/doc/[^/]*\.txt'
[09:40] <BjornT> salgado: or maybe better: python test.py -f test_system_documentation
[09:40] <salgado> BjornT, matsubara suggested using --layer=SystemDoctestLayer. should that work too?
[09:41] <BjornT> salgado: using SystemDoctestLayer won't run zopeless tests (i think)
[09:41] <salgado> hmmm. the layer thing doesn't work. it ran only 59 tests and they all passed
[09:43] <salgado> thanks BjornT!
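BjornT's `--test` argument is a regular expression matched against test paths. A small sketch (with invented example paths, not real Launchpad test IDs) showing what `/doc/[^/]*\.txt` does and does not select:

```python
import re

# The filter: a literal /doc/ followed by a filename (no further
# slashes, so no subdirectories) ending in .txt
pattern = re.compile(r'/doc/[^/]*\.txt')

paths = [
    "lib/canonical/launchpad/doc/karma.txt",        # matches
    "lib/canonical/launchpad/doc/sub/dir/foo.txt",  # no match: subdirectory
    "lib/canonical/launchpad/ftests/bug.py",        # no match: not under doc/
]
for p in paths:
    print(p, bool(pattern.search(p)))
```

This also shows why the `--layer` approach can select a different set of tests: a path filter and a layer filter slice the suite along unrelated axes.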
[09:52] <flacoste> salgado: what did you use to check python source file pylint?
[09:53] <kiko> flacoste, pyflakes!
[09:54] <flacoste> how does it work?
[09:55] <flacoste> no man page, pyflakes -h or --help just gives me an error
[09:56] <Keybuk> ok
[09:56] <Keybuk> cron.daily is now up to "every 3 hours"
[09:56] <Keybuk> this is getting decreasingly amusing
[09:56] <kiko> flacoste, pyflakes $filename
[09:56] <flacoste> kiko: no configuration possible then i guess
[09:57] <kiko> flacoste, what configuration would you want? it is ultra-simple
[09:57] <salgado> flacoste, no, it only does some basic syntax/name checking
[09:57] <flacoste> ok, most other lint checker i know have a bunch of configuration checks you can enable/disable
[09:57] <flacoste> pylint has modules which can be used to test for a given coding style for example
[09:58] <kiko> flacoste, pyflakes only catches real errors
[09:58] <flacoste> lol
[09:58] <kiko> seriously!
[09:58] <elmo> actually, I have pyflakes consistently false positive-ing for me, it's very frustrating
[09:59] <elmo> because even pyflakes' catches-almost-nothing approach is better than nothing at all
[09:59] <kiko> elmo, can you give examples of false-positives?
[09:59] <elmo> kiko: it's not LP (or even work) code
[09:59] <elmo> but sure, if that doesn't matter
[09:59] <kiko> elmo, sure, but I'm still interested -- the tool should work well
[10:00] <elmo> here?
[10:03] <kiko> elmo, or in a pastebin if you have an example
[10:04] <elmo> it's just one line, I more meant off topicness, anyway
[10:04] <elmo> init_db.py:25: redefinition of unused 'daklib' from line 24
[10:04] <elmo> it's all stuff like that
[10:04] <elmo> which comes from import daklib.utils on 24 and import daklib.database on 25
[10:04] <elmo> and both of those modules are in use in the init_db.py
[10:08] <elmo> neither pychecker nor pylint complains about this, and pyflakes does the same thing even on the bzr source (at least the version in dapper)
[10:11] <kiko> that's a bug
[10:11] <kiko> I think I reported it
[10:12] <elmo> hmm, nothing in launchpad or debbugs
[10:14] <kiko> did I use their trac? I can't remember
[10:14] <kiko> elmo, I'll chase it for a bit and update you
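The pattern elmo describes is reproducible with any package imported via two submodules; here with stdlib `logging` standing in for daklib. Whether a given pyflakes version still warns on it is not something this log settles, but the code itself is perfectly valid Python:

```python
# Two imports of submodules from the same package: each statement
# binds the package name "logging", which is what pyflakes reported
# as "redefinition of unused 'daklib' from line 24" in elmo's case.
import logging.config
import logging.handlers

# Both submodules really are in use, so such a warning is a false
# positive -- the same situation as daklib.utils / daklib.database.
handler_cls = logging.handlers.MemoryHandler
config_fn = logging.config.dictConfig
print(handler_cls.__name__, callable(config_fn))  # MemoryHandler True
```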
[10:15] <kiko> elmo, meanwhile, can you tell me if we have any web stats being generated currently for launchpad & co?
[10:16] <elmo> for launchpad.net, yes
[10:17] <kiko> really!
[10:17] <kiko> elmo, can you give me a URL?
[10:17] <elmo> hmm, except for july, but that's a minor detail
[10:18] <elmo> kiko: not easily, the webpage isn't set up very well, it's currently IP protected, I'd need to fix that before I could give you access
[10:18] <elmo> esp. if this is a long term thing
[10:20] <kiko> elmo, ah. hmmm. is it awstats?
[10:20] <elmo> yes
[10:23] <kiko> elmo, well, I'd love to take a look at the stats, if you could arrange a way for me (and launchpad developers, for added points) to see it
[10:24] <elmo> kiko: yeah, I can - can you mail rt@ and I'll try and deal with it in the next couple of days
[10:24] <kiko> elmo, thanks.
[10:48] <Keybuk> kiko: can you undo a monkey patch on drescher for me?
[10:51] <Keybuk> oh, s'ok, the file's owned by lp_archive ... I can undo it ! :p
[10:51] <cprov> Keybuk: I can. which one ?
[10:52] <Keybuk> cprov: making buildd-sequencer run queue-builder
[10:52] <Keybuk> he fucked around with it earlier, it didn't work, then he buggered off
[10:52] <Keybuk> so I've been having to run the queue-builder by hand
[10:53] <cprov> Keybuk: do you mean fix the config for ftpmaster ?
[10:53] <Keybuk> please
[10:53] <Keybuk> if you could take queue-builder out of that
[10:54] <cprov> Keybuk: the config is still fine, i.e., not running queue-builder.
[10:55] <Keybuk> hmm?
[10:55] <Keybuk>         <buildsequencer_job queue_builder>
[10:55] <Keybuk>             command /bin/echo cronscripts/buildd-queue-builder.py
[10:55] <Keybuk>             mindelay 600
[10:55] <Keybuk>         </buildsequencer_job>
[10:55] <Keybuk> ^ that looks like "running queue builder" to me :p
[10:58] <cprov> Keybuk: it is running `echo "PATH"`, isn't it ?
[10:59] <Keybuk> oh
[10:59] <Keybuk> this almost certainly explains why Kinnison's monkey patch from hell didn't work
[10:59] <Keybuk> "I'll run queue-builder from buildd-scanner
[10:59] <Keybuk> THERE WE GO!
[10:59] <Keybuk> Oh, it's not working, B'BYE NOW!
[10:59] <Keybuk> "
[10:59] <Keybuk> clearly he forgot to take the "echo" out
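The config above illustrates a classic dry-run trap: prefixing the job command with `/bin/echo` means the sequencer dutifully "runs" the job, but all it does is print the script path. A minimal Python sketch of the effect (the function and names here are illustrative, not Launchpad's real buildsequencer API):

```python
import subprocess

def run_job(command):
    """Run a configured job command list and return its stdout.

    Hypothetical stand-in for what the buildsequencer does with the
    'command' line of a <buildsequencer_job> stanza.
    """
    result = subprocess.run(command, capture_output=True, text=True)
    return result.stdout.strip()

# With the /bin/echo prefix left in (a common dry-run trick), the "job"
# merely prints the script path -- the script itself never executes:
dry_run = ["/bin/echo", "cronscripts/buildd-queue-builder.py"]
print(run_job(dry_run))  # prints the path, does not run queue-builder
```

Removing the `/bin/echo` element from the command list is what turns the dry run back into a real invocation, which is exactly the step the earlier monkey patch missed.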
[11:00] <Keybuk> so
[11:00] <Keybuk> cprov, man of wisdom
[11:00] <cprov> needs restart
[11:00] <Keybuk> do we take the echo out there, or do we leave it running from cron?
[11:01] <cprov> Keybuk: depends on what you want. is the cron at :52 working for you?
[11:01] <lifeless> BjornT: around ?
[11:01] <Keybuk> well, the cron was working until cron.daily took > 1 hour
[11:01] <Keybuk> so let's just leave it as cron
[11:01] <Keybuk> now that cron.daily is sensible times again
[11:01] <BjornT> lifeless: yeah
[11:01] <cprov> Keybuk: it's disabled anyway
[11:02] <Keybuk> ok
[11:02] <lifeless> can you do reviewer-review-allocations tomorrow and friday? I'm travelling
[11:02] <Keybuk> let's leave things as they are
[11:02] <Keybuk> thanks
[11:02] <cprov> Keybuk: just to make it clear, queue-builder isn't running.
[11:02] <cprov> Keybuk: np
[11:04] <BjornT> lifeless: well, i could probably do it tomorrow since it's a quick thing to do and i don't have any plans (tomorrow and friday are public holidays), but i'm not sure i'll be around on friday.
[11:04] <lifeless> ok.
[11:05] <lifeless> I'll ask spiv to then, as I'm sure its not holidays in .au ;)
[11:30] <Keybuk> cprov: about?
[11:32] <Keybuk> ccccccprooooov
[11:36] <malcc> I don't think his client alerts any louder when you stretch his name. If it's an easy one I might be able to help?
[11:37] <Keybuk> malcc: you may
[11:37] <Keybuk> buildd-slave-scanner doesn't work
[11:37] <Keybuk> OSError: [Errno 2]  No such file or directory
[11:37] <Keybuk> 21:37:33 DEBUG   Removing lock file: /var/lock/buildd-master.lock
[11:38] <Keybuk> is the preceding debug
[11:38] <malcc> How are you running it?
[11:38] <Keybuk> LPUSER=lp_buildd LPCONFIG=ftpmaster  /srv/launchpad.net/codelines/current/cronscripts/buildd-slave-scanner.py -v
[11:39] <malcc> Looks right. I'll see what I can see
[11:39] <Keybuk> 21:37:33 DEBUG   Invoking uploader on /srv/launchpad.net/builddmaster
[11:39] <Keybuk> 21:37:33 DEBUG   ['scripts/process-upload.py', '-Mvv', '--context', 'buildd', '--log-file', '/srv/launchpad.net/builddmaster/incoming/20060705-223733-222294-154279/uploader.log', '-d', u'ubuntu', '-r', u'edgy', '-b', '222294', '-J', '20060705-223733-222294-154279', '/srv/launchpad.net/builddmaster'] 
[11:39] <Keybuk> 21:37:33 DEBUG   Removing lock file: /var/lock/buildd-master.lock
[11:39] <Keybuk>   File "/srv/launchpad.net/codelines/soyuz-production/cronscripts/../lib/canonical/launchpad/scripts/builddmaster.py", line 687, in buildStatus_OK
[11:39] <Keybuk> OSError: [Errno 2]  No such file or directory
[11:39] <Keybuk> (I think that's the most interesting line of the traceback)
[11:40] <Keybuk> does it just need to be run from a particular location, perhaps
[11:40] <malcc> Yes. Looks like it's cunningly swallowing all useful information from the child process traceback
[11:40] <malcc> My yes was not in reply to your last question :)
[11:40] <cprov> weird
[11:40] <Keybuk> Traceback (most recent call last):
[11:40] <Keybuk>   File "scripts/process-upload.py", line 346, in ?
[11:40] <Keybuk>     sys.exit(main())
[11:40] <Keybuk>   File "scripts/process-upload.py", line 91, in main
[11:40] <Keybuk>     lock = GlobalLock('/var/lock/launchpad-upload-queue.lock')
[11:40] <Keybuk>   File "/srv/launchpad.net/codelines/soyuz-production/scripts/ftpmaster-tools/../../lib/contrib/glock.py", line 121, in __init__
[11:40] <Keybuk>     self.flock = open(fpath, 'w')
[11:40] <Keybuk> IOError: [Errno 13]  Permission denied: '/var/lock/launchpad-upload-queue.lock'
[11:40] <Keybuk> ah
[11:40] <Keybuk> that's more useful
[11:40] <Keybuk> what does that have to be owned by?
[11:42] <malcc> Well in order to make this script run, I'd say lp_buildd, but I'm a bit scared about making sure a lock error goes away in case it's supposed to be happening
[11:42] <malcc> cprov: Can you provide some more certainty here?
[11:42] <Keybuk> if it's owned by lp_buildd, then the publisher can't take it
[11:42] <Keybuk> I've just made it 666 for now
[11:43] <malcc> Should be ok; now I think of it we don't rely on permissions for that locking anyway. I'm thinking that's safe
[11:43] <Keybuk> *nods*
[11:43] <Keybuk> it'll do
[11:43] <Keybuk> for the record, drescher is much happier now
[11:43] <malcc> Well I was very happy to stand by while you solved your own problem :)
[11:43] <cprov> it started happening after you killed cron.daily
[11:43] <Keybuk> elmo gave it a new kernel, and a reboot, and a red bicycle and a pony
[11:43] <Keybuk> malcc: you make a good teddybear :p
[11:43] <malcc> Can I have a pony too?
[11:44] <elmo> malcc: no
[11:44] <malcc> elmo: Waaaaaaah
[11:44] <Keybuk> malcc: make cron.daily run fast enough so we can have 30 minute days again
[11:46] <cprov> Keybuk: break also the /srv/launchpad.net/ubuntu-queue/incoming/.lock
[11:46] <Keybuk> break also?
[11:47] <Keybuk> broke
[11:48] <malcc> Should I file a bug for this? With this file-based locking, processes shouldn't create lock files with bogus permissions that leave things effectively locked after a blowup.
[11:48] <Keybuk> yes please
[11:48] <Keybuk> otherwise it makes it harder to start lp
[11:48] <Keybuk> as one has to manually frob locks
[11:55] <malcc> Ok, that's bug 52025
[11:55] <Ubugtu> Malone bug 52025 in soyuz "Some lockfiles have bad permissions" [Medium,Confirmed]  http://launchpad.net/bugs/52025
[12:03] <despai> hello
[12:03] <despai> I need to speak with somebody who manages CD shipping
[12:04] <despai> It's very important
[12:05] <Keybuk> I'm afraid the person is asleep now
[12:05] <Keybuk> wrong timezone
[12:06] <Keybuk> please e-mail info@shipit.ubuntu.com