[12:08] <Diablo-D3> hey guys
[12:08] <Diablo-D3> is there any way to search all of malone for something?
[12:11] <muadda> Diablo-D3, seems not possible ...
[12:11] <Diablo-D3> no kidding
[12:34] <Burgundavia> Diablo-D3, there is currently no way to search all of malone or in fact, any part of LP
[12:36] <Diablo-D3> ... damnit.
[12:37] <Diablo-D3> I should be able to search for "powernow-k7" and get all bugs that have that in the summary
[12:39] <spiv> Diablo-D3: https://launchpad.net/malone/distros/ubuntu?field.searchtext=powernow-k7&search=Search&orderby=-priority%2C-severity ?
[12:39] <spiv> (search done from https://launchpad.net/malone/distros/ubuntu)
[12:39] <Diablo-D3> how did you do that?
[12:40] <spiv> Which only searches "ubuntu" bugs, rather than all of malone, but that's probably close enough to what you want.
[12:40] <spiv> I got to that search form from the "View Ubuntu Bugs" link on https://launchpad.net/malone
[12:40] <Diablo-D3> er, oh hey
[12:40] <Diablo-D3> theres a giant search box there
[12:41] <Diablo-D3> proof malone's ui is too complex =/
[12:41] <spiv> Yeah, it seems it's really easy for people to miss things at the moment :(
[06:10] <stub> spiv: Can I get a review? https://chinstrap.ubuntu.com/~dsilvers/paste/fileTmiueF.html
[06:10] <stub> I'm heading out for lunch, but will be back in about an hour
[06:11] <stub> It is the celebrities optimization stuff
[06:11] <spiv> stub: Sure.
[08:57] <jordi> hello
[08:58] <carlos> morning
[08:59] <carlos> stub, hi, when does the staging DB mirror finish?
[09:00] <stub> staging db mirroring is currently disabled for publisher testing. You need it updated?
[09:03] <ddaa> The gimp baz2bzr conversion is driving me crazy
[09:03] <ddaa> it has been giving an estimated time left of 140 hours
[09:03] <carlos> stub, Oh, since the launchpad_carlos db disappeared I assumed staging was being updated as usual...
[09:03] <ddaa> for five days...
[09:03] <stub> It disappeared?
[09:04] <carlos> I got errors while trying to connect to it
[09:04] <carlos> let me check again...
[09:04] <stub> carlos: oh - it was likely never upgraded from 8.0
[09:04] <stub> carlos: I'll create you a fresh one
[09:04] <carlos> stub, thank you
[09:04] <carlos> stub, could you update it weekly?
[09:04] <carlos> to generate weekly language pack snapshots
[09:05] <stub> carlos: Still not ready to trust your scripts on the production db?
[09:05] <carlos> stub, fixing them to move them into production
[09:05] <stub> You might need to poke me until I automate the restore
[09:05] <carlos> that's why I want to do the snapshots
[09:05] <carlos> ok
[09:22] <sivang> Hello, friends!
[09:22] <ddaa> blah, where's SteveA?
[09:22] <ddaa> hello sivang
[09:22] <sivang> hey ddaa 
[09:22] <carlos> siretart, morning
[09:22] <siretart> hey carlos ;)
[09:22] <carlos> s/siretart/sivang/
[09:23] <carlos> anyway siretart morning ;-)
[09:23] <siretart> hehe :)
[09:23] <carlos> xchat autocompletion sucks...
[09:23] <ddaa> you mean it's not psychic?
[09:32] <sivang> ddaa: 14 hours? something must have gone wrong in the calculation there...or bad network?
[09:32] <ddaa> No, 140 hours
[09:33] <ddaa> and it's not bad network, it's local fs
[09:33] <ddaa> and the calculation is fine, it's making progress
[09:33] <ddaa> just agonizingly slow progress
[09:33] <sivang> ddaa: does it at least decline by a factor of 10? :)
[09:34] <ddaa> just a combination of pathological "baz replay" and "bzr commit" performance. Many many files, large history.
[09:35] <ddaa> The only thing that kills me is that all bets are off for when it will actually complete.
[09:35] <sivang> ddaa: ah, so it's not sure it will complete even after 140hrs?
[09:36] <sivang> ddaa: baz replay "replays" the old branch's history ?
[09:36] <ddaa> it applies a changeset
[09:37] <ddaa> really, these are boringly arcane details
[09:37] <ddaa> and it won't help you to understand the insanity of the Arch command set.
[09:43] <sivang> ddaa: I understand. just feel lucky for having bzr :)
[09:45] <ddaa> when somebody has been telling you "I'll be done in six days, I think", every day for a week, how long do you keep believing them?
[09:45] <sivang> ddaa: heh
[09:49] <jblack> Did anybody try the new rocketfuel docs yet? 
[09:52] <sivang> jblack: I am going to try them, *today* even if it costs me all night :)
[09:52] <jblack> all night? 
[09:52] <jblack> They should be really easy to use, and be about as quick as rocketfuel can be with bazaar-ng. 
[09:53] <sivang> jblack: of course, but slow network, suddenly urgent stuff at work, can push it down until I get home :)
[09:53] <sivang> jblack: I am sure they're as good as they get , or at least close to that.
[09:54] <sivang> jblack: was at work until 2:30am last Wednesday, then sick throughout the weekend, only now came back to myself
[09:54] <sivang> jblack: otherwise I would already have tried them :)
[09:56] <ddaa> Hey jblack
[09:56] <ddaa> should be the launchpad-bzr meeting soon
[09:56] <ddaa> at least assuming SteveA manages to be back from vacation in time...
[09:56] <ddaa> which I double since it's now meeting - 4 mins.
[09:56] <ddaa> * which I doubt
[09:58] <jblack> where?
[09:58] <jblack> On sunday? 
[09:59] <sivang> TP?
[10:02] <ddaa> It's monday here, dude
[10:03] <ddaa> ThinkPad
[10:09] <ddaa> daf: can pull optional-branch-title now
[10:09] <ddaa> I do not guarantee the data will not eat your hard drive and take your grandma hostage though.
[10:40] <sivang> ddaa: heh
[11:14] <daf> ddaa: bzr claims there's nothing to merge
[12:02] <lifeless> review meeting time
[12:02] <lifeless> who's all here?
[12:03] <BjornT> i'm here
[12:03] <spiv> I'm here
[12:05] <lifeless> ok
[12:05] <lifeless> agenda - next meeting time - one week @ this time
[12:05] <lifeless> that's the 30th at 1100 UTC
[12:06] <spiv> Fine with me.
[12:06] <BjornT> fine with me
[12:06] <lifeless> ok
[12:06] <lifeless> queue status - theres 4 outstanding reviews in total
[12:06] <lifeless> which is 'under control' IMNSHO
[12:08] <lifeless> any new business ?
[12:08] <spiv> All seems good to me.
[12:08] <BjornT> no
[12:08] <daf> outstanding == merge-approved?
[12:09] <spiv> (maybe we should have a new segment in the meeting: silliest code I've reviewed this week ;)
[12:11] <lifeless> daf: outstanding = needs-review
[12:11] <lifeless> spiv: so my silliest was 'if thing and hasattr(thing, 'foo'): ...thing.foo'
[12:11] <daf> looking at the Pending Branch Summary page, there's 8 branches in needs-review
[12:12] <lifeless> rather than 'getattr(thing, 'foo', sane_default)'
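(Editor's aside: a quick sketch of the review nitpick lifeless describes. The `hasattr` guard and the `getattr`-with-default are equivalent when the attribute is present, but `getattr` also covers the missing-attribute case in a single expression. `Thing` and `sane_default` here are made-up names for illustration, not Launchpad code.)

```python
# Sketch of the review comment: prefer getattr with a default over an
# explicit hasattr guard.  "Thing" is a toy class for illustration.

class Thing:
    def __init__(self, foo=None):
        if foo is not None:
            self.foo = foo  # attribute only set when a value is given

sane_default = "default"

def silly_lookup(thing):
    # The pattern flagged in review: guard, then access.
    if thing and hasattr(thing, 'foo'):
        return thing.foo
    return sane_default

def sane_lookup(thing):
    # The suggested replacement: one expression, same result.
    return getattr(thing, 'foo', sane_default) if thing else sane_default

print(silly_lookup(Thing("bar")))   # bar
print(sane_lookup(Thing("bar")))    # bar
print(silly_lookup(Thing()))        # default
print(sane_lookup(None))            # default
```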
[12:12] <lifeless> daf: indeed, I miscounted
[12:12] <lifeless> still, under control 
[12:13] <daf> 2N ~= N? :)
[12:13] <spiv> lifeless: Mine was this:
[12:13] <spiv> +    def _empty(self):
[12:13] <spiv> +        """Foo."""
[12:13] <spiv> +        raise AssertionError
[12:13] <spiv> lifeless: I think I win ;)
[12:13] <niemeyer> Morning!
[12:13] <lifeless> daf: FSVO 2
[12:14] <lifeless> spiv: unless BjornT has a new one ?
[12:14] <spiv> lifeless: I hope for the sake of launchpad he can't beat that :)
[12:14] <BjornT> nope, haven't actually reviewed that much lately :)
[12:14] <daf> spiv: gosh
[12:18] <lifeless> ok, meeting over
[12:29] <matsubara> good morning!
[12:31] <daf> good morning, Brazil!
[12:36] <Mez> hmm - anyone here I can talk to about something we want to use LP for, but want to figure out the best way to do ?
[12:37] <Mez> basically we want people to be able to request backports through launchpad....
[12:37] <Mez> but dont know the best way to do it
[12:37] <Mez> I would suggest using a new product and filing bugs against it ?
[12:37] <sivang> morning matsubara, spiv, Mez
[12:38] <Mez> morning sivang
[01:32] <spiv> Mez: Hmm, you mean backports of bug fixes from one upstream branch to another, or backports of new releases from e.g. dapper to breezy?
[01:33] <Mez> spiv: we want people to be able to request new backports
[01:33] <Mez> as in - what we currently do ...
[01:33] <Mez> dapper rebuilt for breezy etc etc
[01:34] <Mez> thinking of just setting up a new product and using bugs on that
[01:37] <spiv> Mez: I guess that works, although maybe a separate ubuntu-backports distro would be a better solution.
[01:38] <spiv> Mez: Try asking bradb when he's around, or mail the launchpad-users list about it.
[01:38] <Mez> spiv: is that possible to do?
[01:38] <Mez> spiv: though that still doesnt give us somewhere to request - but actually makes life a lot easier for support ;)
[01:39] <spiv> If you had a separate distro, you could then mirror the warty/hoary/breezy/dapper release structure in it, and use those for targeting your bugs.
[01:39] <Mez> very true
[01:40] <Mez> and then can file bugs on packages within the "distro"
[01:40] <spiv> But I'm not certain that's the best route, so see what brad thinks :)
[01:40] <spiv> Right.
[01:40] <Mez> does it matter that it's not self contained?
[01:40] <spiv> In some sense, the backports are just a derivative of ubuntu, so registering a derivative distro of ubuntu for them makes sense.
[01:41] <Mez> spiv: true, but backports isnt self contained...
[01:41] <Mez> It relies on links to current ubuntu
[01:41] <spiv> But I suspect we'll hit some rough edges no matter what we do, because you'd be the first to do this, however we do it :)
[01:41] <spiv> Right, that's more-or-less the idea of a "derivative" distro.
[01:42] <spiv> We want to make it easy for people to make variations of ubuntu where 99% of the packages are the same, but a few select ones are customised.
[01:42] <Mez> spiv: oh... I thought you meant derivative as in ubuntu being a debian derivative
[01:42] <Mez> spiv: then surely backports would be the best start for a stomping ground?
[01:43] <Mez> s/stomping/testing/
[01:43] <spiv> I think so.
[01:44] <Mez> hmm 
[01:44] <Mez> sounds like a good plan
[01:44] <Mez> and would be quite useful for us tbh
[01:44] <spiv> Yeah, it seems to make good sense.
[01:44] <spiv> E.g., if I'm looking at a breezy bug, and I see in launchpad that it's fixed in "ubuntu-backports (breezy)", then that's good to know.
[01:45] <Mez> sort of ;)
[01:45] <Mez> lol
[01:45] <Mez> actually 
[01:45] <spiv> It means that I know I could grab the backports package to fix my problem, rather than waiting for dapper :)
[01:45] <Mez> thats a good way of linking them
[01:45] <Mez> yes
[01:45] <Mez> ;)
[01:45] <Mez> hmm
[01:45] <Mez> this could be interesting
[01:45] <spiv> This is the sort of thing we always wanted malone to do :)
[01:45] <Mez> hehe
[01:45] <Mez> yeah
[01:46] <Mez> or, if we see it as fixed in dapper, but not in breezy, we can try and backport it ;)
[01:46] <spiv> Although typically we've been thinking more about package maintainers than users (e.g. if an ubuntu maintainer sees that gentoo has fixed a bug, then they can look at their fix and include it in the ubuntu package).
[01:46] <spiv> Heh, if you like ;)
[01:47] <Mez> spiv: yeah - I've heard the spiel from mark about LP many a time (and read the interviews too!)
[01:47] <spiv> Ah, cool.
[01:48] <Mez> I think it's a great idea (and he explained it really well in an LXF issue somewhere!)
[01:48] <Mez> and at LRL
[01:48] <Mez> and everywhere else
[01:48] <Mez> but yeah - I think thats actually a good way of doing it
[01:48] <Mez> a REALLY good idea
[01:49] <Mez> I mean - we have them currently registered as a "project" and "products" for each ubuntu release
[01:49] <Mez> but doing it that way would be even easier
[01:49] <Mez> though I'm not too sure how we'd get stuff linked in (like it knowing what packages etc are in it)
[01:50] <spiv> Yeah, me either :)
[01:50] <spiv> It depends on where the derivatives code in launchpad is at.
[01:50] <spiv> It might be that for now products are the best we can do, and we can migrate it later when we can do it better.
[01:52] <Mez> evening bradb ;)
[01:53] <Mez> before I start - you can blame spiv for all this ;)
[01:53] <bradb> hey Mez 
[01:53] <jbailey> I have a timeout error that I think might be a duplicate of 28528.
[01:53] <jbailey> Mm, no, that one refers to translation.
[01:53] <spiv> jbailey: What's the oops id?
[01:54] <Mez> bradb - we've been looking at a new way of processing requests for backports - and we were originally looking at using the support system.
[01:54] <jbailey> spiv: it's OOPS-23A312.
[01:54] <spiv> Hmm, it hasn't rsynced to chinstrap yet...
[01:54] <Mez> At the moment - I think thats a little too primitive... and was looking at using malone for it - but making a new package for requests doesnt seem right - nor does mixing it with current actual problems
[01:55] <Mez> I asked in here if anyone had any ideas on how to do it
[01:55] <Mez> and spiv came up with the idea of making backports a derivative distro in LP
[01:55] <jbailey> spiv: For background, I was searching for the evolution source package so that I could search it for bugs.
[01:55] <jbailey> This was the search for packages timing out.
[01:55] <Mez> bradb: which would work out really well for us - it'd make managing things a lot easier for us.
[01:55] <Mez> bradb: what do you think?
[01:55] <bradb> Mez: For backport fixes, use the "Target Fix to Releases" link.
[01:56] <Mez> bradb ... ?
[01:56] <spiv> Mez: He means target it to releases of real Ubuntu
[01:57] <Mez> I dont get what you mean though
[01:57] <spiv> bradb: I'm not sure that's quite right in this case, because e.g. breezy is released and will never change (well, except for security bugs).
[01:57] <jbailey> spiv: Not really true.
[01:57] <jbailey> spiv: We fix important bugs in breezy through breezy-updates, too
[01:57] <bradb> Mez: e.g https://launchpad.net/distros/ubuntu/+source/openoffice.org2-amd64/+bug/24000, the "Target Fix to Releases" is designed specifically for managing backport/security bug fixes.
[01:57] <Ubugtu> Malone bug 24000: "Unable to open the linked document ( ACPI specification )" Fix req. for: openoffice.org2-amd64 openoffice.org2-writer (Ubuntu), Severity: Normal, Assigned to: Matthias Klose, Status: Unconfirmed
[01:57] <spiv> jbailey: Ok, security and high-priority bugs :)
[01:58] <jbailey> spiv: Specifically, I will occasionally upload fixes that I've done for clients anyway, since I've had to do the QA work on the fix anyway.
[01:58] <spiv> jbailey: There's some sort of distinction between the backports project and breezy-updates/breezy-security.
[01:58] <spiv> I guess the question is how to represent that in malone (if at all)
[01:58] <jbailey> spiv: Right.  backports is as many packages from dapper as they can sanely map onto breezy.
[01:58] <mdke> i.e. the former isn't enabled by default in peoples' systems
[01:59] <jbailey> I don't know what the community around backports is like.
[01:59] <Mez> bradb: I dont think you've got the point of it. We want people to be able to request new backports
[01:59] <jbailey> Like, if there's a bug in a breezy-backport that doesn't appear in breezy and doesn't appear in dapper, do they care?
[01:59] <Mez> It's basically keeping them up to the bleeding edge... those people that want the latest of everything now
[02:00] <Mez> jbailey: the backports team do - but cant do much about it
[02:00] <jbailey> Mez: Does that change when we switch to Soyuz?
[02:00] <Mez> jbailey: I've no idea
[02:00] <spiv> jbailey: well, another case is "the bug is in 'ubuntu (breezy), but fixed in ubuntu (dapper) and ubuntu (backports)".
[02:01] <Mez> We've basically been told "well - we dont know - we'll see later"
[02:01] <bradb> Mez: Are you asking how to take bug #NNNN and say "backport this fix to Breezy"?
[02:01] <Mez> bradb: not at all
[02:01] <spiv> jbailey: In that case, as a user looking at the bug report (because it just happened to me), it's useful to know that it's fixed in backports.
[02:01] <jbailey> spiv: Mmm, right.  For people who really need a bug gone and are willing to take some risk with it.
[02:01] <Mez> bradb: currently, backports is technically a derivative of ubuntu - yes?
[02:01] <spiv> jbailey: So I know I have the option to try a backport of just one package before I upgrade to the bleeding edge dapper :)
[02:01] <bradb> Mez: Does your question deal with bug reports?
[02:02] <Mez> well - people come along every day and ask us to backport things to breezy
[02:02] <spiv> jbailey: Exactly.
[02:02] <jbailey> spiv: Makes sense, then.
[02:02] <Mez> bradb: not exactly
[02:02] <Mez> but vaguely yes
[02:02] <spiv> bradb: Well, "I wish version 1.7 of frobnosticator was available for breezy" is sort of like a bug ;)
[02:03] <bradb> ah, ok
[02:03] <Mez> spiv: as is "well this annoying bug here is in breezy but not in dapper-  can you backport it please"
[02:03] <jbailey> spiv: FWIW, a second query for evolution takes a while but does succeed.
[02:03] <spiv> Mez: Right.
[02:03] <spiv> jbailey: Must be on the edge of the timeout, so it depends on luck if the db is too busy or not :/
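(Editor's aside: the "edge of the timeout" behaviour spiv describes - the same request sometimes succeeding and sometimes not, depending on db load - is often worked around client-side by retrying with backoff. This is a generic sketch, not Launchpad code; the `TimeoutError` and the flaky function are simulated.)

```python
# Generic retry-with-backoff sketch for requests that fail only when
# the server is momentarily overloaded.  Everything here is simulated.

import time

def retry(func, attempts=3, delay=0.01):
    """Call func(), retrying on TimeoutError with a growing delay."""
    for attempt in range(attempts):
        try:
            return func()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller see the error
            time.sleep(delay * (2 ** attempt))  # exponential backoff

calls = {"n": 0}

def flaky_search():
    # Simulate a query that times out twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("db too busy")
    return ["evolution"]

print(retry(flaky_search))  # ['evolution']
```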
[02:03] <Mez> we get so many people requesting backports all the time - 
[02:03] <Mez> some for vanity (I want this new app now)
[02:03] <jbailey> spiv: Fair enough.  Is there anything else I can give you before this drops out of my short term memory? =)
[02:04] <Mez> some for the reason that it's buggy (scim in breezy is buggy as hell but fixed in dapper - cant backport but it's an example)
[02:04] <spiv> jbailey: How about a pony? ;)
[02:04] <spiv> Ah, there's the oops.
[02:04] <Mez> and well - we'd love a way to be able to manage those requests properly.
[02:04] <jbailey> spiv: http://www.hasbro.com/mylittlepony/
[02:04] <Mez> spiv came up with the idea of registering backports as a distro in launchpad
[02:04] <Mez> then we can link all the things together - for example, if someone reports a bug in breezy
[02:05] <Mez> and it's fixed in dapper, then we can backport it and mark it as fixed in backports and wontfix in breezy
[02:05] <Mez> or similar
[02:05] <Mez> we're just looking for an easier way to manage the request process - and the idea spiv came up with seems a good one
[02:05] <spiv> jbailey: Hmm.
[02:05] <Mez> get it yet bradb?
[02:05] <bradb> Mez: If there's an existing bug report, the "Target Fix to Releases" page is designed to request a backport fix.
[02:06] <spiv> jbailey: It's a actually a pretty simple query that's killing it :/
[02:06] <Mez> bradb: not exactly
[02:06] <bradb> If there's no bug report, and it's just "please backport version X.XX of package foobar", that's something different.
[02:06] <Mez> to me it looks like it's there to request an "update"
[02:06] <spiv> jbailey: It's using a full-text search, but otherwise is really simple, no joins or anything exciting.
[02:07] <Mez> bradb: the backports team get nothing to do with that link- surely that link is for security/updates team
[02:07] <Mez> not backports
[02:08] <bradb> Mez: It's designed to handle backport fix requests. What makes you think it can't handle requesting a backport fix?
[02:08] <spiv> bradb: It seems to me that the backports aren't producing official ubuntu packages, so targeting fixes to ubuntu releases doesn't really seem to fit.
[02:09] <spiv> bradb: i.e. the issue is that the bug isn't being fixed in "ubuntu (breezy)", it's being fixed somewhere else.
[02:09] <Mez> what spiv said :P
[02:09] <bradb> Ok, that makes things a little clearer.
[02:09] <spiv> bradb: It's a confusing world we live in :)
[02:09] <bradb> I vaguely remember talking with Mark and stub about this in a cab on the way to the airport in Spain.
[02:10] <spiv> Hah.
[02:10] <Mez> bradb: I think the confusion is what is meant by backports ;)
[02:10] <Mez> you're meaning for example, breezy-security and breezy-updates - yes?
[02:11] <Mez> I'm meaning https://wiki.ubuntu.com/UbuntuBackports
[02:11] <bradb> I guess so. But we usually use the word "backport" every time we talk about that page.
[02:11] <Mez> bradb: It's a valid use of the word ;)
[02:12] <Mez> but not the way I meant it ;)
[02:12] <Mez> It is technically backporting, yes - but when I say "backports" I meant he backports project (like backports.org for debian)
[02:13] <Mez> at the moment - we're using the forums for requests.
[02:13] <Mez> and this is major ...
[02:13] <Mez> *shudders*
[02:14] <Mez> it doesnt work
[02:14] <Mez> I refuse to go there
[02:14] <Mez> we're looking at a new way to manage the bugs for backports (all the bugs for say, breezy backports are lumped into one product)
[02:15] <Mez> and also a better way to manage requests
[02:15] <Mez> I believe this can be done easily with backports being listed as a derivative distro
[02:16] <bradb> Mez: Is the backporting project a derivative distro of Ubuntu?
[02:16] <Mez> technically, yes
[02:16] <bradb> Does it have its own repo, can I order backport CDs, etc?
[02:16] <Mez> no.
[02:16] <Mez> lol
[02:16] <Mez> well
[02:16] <Mez> it's hosted on the ubuntu repos
[02:16] <Mez> so it has its own repo
[02:17] <Mez> http://us.archive.ubuntu.com/ubuntu/breezy-backports
[02:17] <Mez> but they dont do CDs yet
[02:18] <bradb> Mez: IIUC, there are two types of backporting you're talking about: 1. Requesting that bug #NNNN get fixed in Breezy, 2. Requesting that version X.YZ of package foobar be brought into Ubuntu, correct?
[02:18] <Mez> 2) yes 1) sort of
[02:19] <Mez> 1) bug #NNNN be fixed in breezy by bringing a newer version of "package" into breezy from dapper
[02:19] <bradb> Any other types of backport request that relate to this?
[02:20] <Mez> those are the only 2 types that we deal with
[02:20] <bradb> ok
[02:20] <Mez> we dont bugfix things - (well - not as backports - we can as MOTU if we want)
[02:21] <Mez> we basically just bring things from XtoY and test that they work
[02:21] <Mez> but sometimes we get bugs in stuff we've backported
[02:21] <bradb> And to get bugfixes for either of #1 or #2, I have to modify my sources.list, right?
[02:21] <spiv> In some ways this would be clearer if the backports weren't distributed on archive.ubuntu.com ;)
[02:22] <Mez> bradb: yes - to get the backported packages, you have to modify your sources.list
[02:24] <Mez> spiv: they never used to be - but they're "official" now
[02:24] <Mez> official but unsupported
[02:24] <spiv> Mez: Yeah, that's the "problem" :)
[02:25] <spiv> Mez: If they were called "Jimbo's backports for Ubuntu", there'd be no confusion ;)
[02:25] <bradb> I don't know much about how derivatives work in Launchpad. If the backport project is very similar to a derivative, then it would make sense for it to be a derivative. If the derivative solution is more like a hack than a close model of the Real World, then it's probably not the right solution.
[02:25] <spiv> bradb: FWIW, I think a derivative is a very close but not quite perfect fit.
[02:26] <spiv> bradb: But probably a better fit than, e.g., a product for every backport release (i.e. having a ubuntu-breezy product).
[02:26] <spiv> Er, "ubuntu-backports-breezy" product.
[02:26] <Mez> spiv: thats how we do it at the moment
[02:26] <Mez> bradb: what do you mean more like a hack than a close model of the real world"
[02:27] <spiv> The main issue I see is that the backports project doesn't have a full repo, they are just an auxiliary source of newer packages for some things.
[02:27] <Mez> spiv: yeah - we're not self contained
[02:27] <spiv> I'm not sure what our assumptions for derivatives are/will be, so that might be an issue.
[02:27] <Mez> we wont work without having ubuntu there in the background
[02:28] <bradb> Mez: If the backport project is not almost exactly like an Ubuntu derivative, then this would be a hack. :)
[02:28] <bradb> But I don't know enough about derivatives to make that call.
[02:28] <Mez> bradb: nor do i :D
[02:28] <Mez> It depends on what you're calling a derivative
[02:28] <bradb> But, I can email launchpad@ and get more information for you.
[02:29] <bradb> Do you want me to get more information for you?
[02:29] <spiv> I think probably the main issue would be "I want to request a backport of frobnosticator 1.7 to breezy" -- would "frobnosticator" be available in malone's view of the ubuntu-backports derivative to file a bug against?
[02:30] <spiv> I think derivatives in general might want that, but I'm not sure.
[02:30] <Mez> bradb: put simply - if we imported everything from breezy into backports that we havent backported from dapper - we'd be a derivative ;)
[02:30] <Mez> the stuff we havent backported lies in breezy ;)
[02:30] <Mez> bradb: sure - can you CC: me ?
[02:31] <spiv> Hmm, to be clear, I mean would all packages in the parent distro be available for filing bugs against in the derivative?
[02:31] <bradb> Mez: Sure, I'll email launchpad@ and Cc you.
[02:31] <Mez> bradb: no problem :D
[02:31] <Mez> mez@ubuntu.com
[02:32] <bradb> sounds good, thanks
[02:32] <Mez> spiv: if so - it'd make our life easier if this were to go ahead
[02:32] <Mez> if not: not as easy - but still ;)
[02:33] <Mez> spiv: to me - we only need the packages in backports, any "requests" can be filed against backports itself
[02:33] <Mez> though, for general derivative distros
[02:33] <Mez> I think that would be the best
[02:33] <spiv> Mez: It seems like the sort of problem we want launchpad to help with, but you're right on our bleeding edge :)
[02:33] <Mez> because well - same as with us in ubuntu
[02:34] <Mez> we're a derivative of debian
[02:34] <Mez> if we pulled a package from debian - should we just file in the debian BTS?
[02:34] <Mez> not really - because it could be something else in ubuntu causing the bug
[02:35] <spiv> Mez: Yeah, that's a bit hairy -- what if the bug in a pure-debian only shows itself when used with a backport package?
[02:35] <Mez> yeah
[02:35] <Mez> lol
[02:35] <Mez> which is why you have it filed against the distro first and then "occurs in X" as well
[02:35] <spiv> I suspect there's not one answer for that sort of thing, it depends on common sense, the relationship with the relevant debian maintainer, etc.
[02:36] <Mez> the whole thing malone is meant for
[02:37] <Mez> spiv: It's very true
[02:37] <Mez> spiv: but it's the same thing
[02:37] <spiv> jbailey: I filed https://launchpad.net/products/launchpad/+bug/29444
[02:38] <Ubugtu> Malone bug 29444: "Timeout querying DistributionSourcePackageCache when searching for the evolution source package" Fix req. for: launchpad (upstream), Severity: Normal, Assigned to: Nobody, Status: Unconfirmed
[02:38] <Mez> which is why I think the parent stuff should be available in general
[02:38] <Mez> hmm
[02:38] <Mez> though I would love the ability for launchpad to auto-file a bug with the debian bts if it's marked as being a bug in debian too
[02:38] <Mez> (or to grab a list and see if the bugs there and link it to the bts if it is)
[02:39] <spiv> Mez: Yeah, although debian might love that a bit less ;)
[02:39] <jbailey> spiv: Thanks.  I've subscribed to it in case there's something more you need from me.
[02:41] <Mez> spiv: I dont know
[02:41] <Mez> if it's set as being a bug occurring in debian as well
[02:41] <Mez> surely - it would be good to check the debian BTS and link to that bug
[02:42] <Mez> or if it's not there - file a bug?
[02:43] <spiv> Mez: It would be good, but it just needs some care to do it without offending debian.
[02:43] <Mez> true
[02:44] <spiv> If we accidentally make it easy to spam debian with often irrelevant or duplicate bug reports, they won't be happy.
[02:44] <Mez> true - but how many times have you seen a bug marked as occurring in debian ?
[02:44] <spiv> And possibly some maintainers will simply be disinterested in bugs reported from ubuntu, on the basis that they don't care about anything but debian, so unless a debian user reports it they're not interested.
[02:44] <Mez> even just the "occurring in debian" checking for a current bug
[02:45] <Mez> spiv, usually it's a dev that marks it as occurring in debian
[02:45] <spiv> And there's the need to try to avoid duplicates, and also our web form tends to gather much less info than reportbug does by default.
[02:46] <spiv> Mez: Right.  It's not a bad idea, it's just one we need to be careful to do right.
[02:46] <Mez> spiv: definitely
[02:46] <Mez> especially atm
[02:47] <spiv> Which will probably involve asking debian what they think, etc.  We're not really there yet, but I'm sure the discussion will happen sooner or later.
[02:48] <Mez> well there are massive kicks going on atm to try and get ubuntu and debian closer
[02:54] <mantiena> Hi all
[03:33] <salgado> BjornT, around?
[03:37] <BjornT> hi salgado 
[03:38] <salgado> hi BjornT. do you have a few minutes to discuss an issue I'm having while trying to fix bug 5394?
[03:38] <Ubugtu> Malone bug 5394: "Clicking on "Advanced search" should preserve simple search criteria" Fix req. for: malone (upstream), Severity: Normal, Assigned to: Guilherme Salgado, Status: In Progress http://launchpad.net/bugs/5394
[03:38] <BjornT> sure
[03:39] <salgado> cool
[03:40] <salgado> BjornT, so, to preserve the simple search criteria I have to stick the values returned by getExtraSearchParams() as GET parameters on the URL I'm redirecting the user to
[03:41] <salgado> the problem is how to construct this URL in a way that's not too fragile, like building it manually and hardcoding the field names
[03:42] <salgado> maybe https://chinstrap.ubuntu.com/~dsilvers/paste/fileKOkfuR.html can be of some help to understand the problem
[03:53] <BjornT> salgado: hmm, can't think of a good way of doing it. the field name you could get with getattr(self, search_key + '_widget', 'name') but to encode the actual values is harder.
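(Editor's aside: a minimal sketch of the problem salgado raises - carrying the simple-search criteria over as GET parameters on the redirect URL without hand-building the query string. The parameter dict and base URL below are invented for illustration; the real view would get them from `getExtraSearchParams()` and its context.)

```python
# Build a redirect URL from a dict of search parameters, letting
# urlencode handle escaping instead of hardcoding field names/values.

from urllib.parse import urlencode

def build_advanced_search_url(base_url, extra_params):
    """Append search criteria to base_url as query parameters."""
    # sorted() just makes the output deterministic for this sketch.
    query = urlencode(sorted(extra_params.items()))
    return "%s?%s" % (base_url, query) if query else base_url

url = build_advanced_search_url(
    "https://launchpad.net/malone/+advanced-search",   # hypothetical URL
    {"field.searchtext": "powernow-k7", "field.status": "New"})
print(url)
```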
[03:55] <bradb> lifeless: ping
[04:20] <spiv> bradb: He should be in bed (like I should be ;)
[04:20] <bradb> I emailed launchpad@.
[04:23] <stub> Cool. Launchpad is running with Zope 3.2. Tomorrow I'll see how much of the test suite still works :-)
[04:25] <bradb> stub: Sweet.
[04:26] <Sonderblade> how do i find all bugs that i have reported?
[04:27] <bradb> Sonderblade: /people/$you/+reportedbugs
[04:29] <Sonderblade> thanks
[04:29] <bradb> no prob
[04:29] <Sonderblade> that list seems to exclude the closed ones though
[04:31] <bradb> Sonderblade: That's a known problem. You can use the advanced search to tune your criteria, until we improve the reports to show current filter criteria, and perhaps make the default criteria more useful.
[04:31] <bradb> I spend Friday, and will spend more of today, prototyping an improvement.
[04:31] <bradb> s/spend/spent/
[04:32] <Sonderblade> awesome! i would really like that feature
[04:32] <stub> bradb: pqm was stuck on a test, which a kill -9 appears to have cleared.
[04:33] <bradb> I wonder how that went unnoticed for so long.
[04:34] <bradb> stub: Thanks for killing it.
[04:35] <stub> I did see it earlier, but just assumed we were just having slow mirrorings again or something. I prefer to leave poking at it to Robert, as I can really only belt at it with a hammer.
[04:36] <bradb> heh
[04:51] <dilys> Merge to devel/launchpad/: Remove all dependencies shipit had on www.ubuntu.com. r=kiko (r3019: Guilherme Salgado)
[05:05] <Mez> is there a way to authenticate a username/password against LP ?
[05:05] <Kinnison> Currently the authentication service is only an internal thing
[05:06] <Kinnison> I don't imagine we'll be opening it up any time soon
[05:06] <Kinnison> (but I could be wrong)
[05:07] <Mez> damnit
[05:07] <Mez> cause well - I'm sorta talking to ubuntuforums
[05:07] <Mez> and well - if we could get the forums to auth against LP ;)
[05:07] <Mez> it'd be uber
[05:07] <Mez> hows things anyways Daniel?
[05:08] <Kinnison> Hmm, auth for the forums might be possible
[05:08] <Kinnison> I'm good thanks, just sold my house
[05:08] <Kinnison> well, pending the chain completing
[05:21] <gneuman> dll
[05:27] <Mez> Kinnison, good to hear - did you sell pepperfish too ? 
[05:27] <Mez> Kinnison, well seeing as I'd be the one making the LP -> forums integration - how would it be done ?
[05:27] <Mez> I could do a really crude hack for it
[05:29] <Kinnison> Mez: If we can open it up, I think it's an XMLRPC service
[05:29] <Kinnison> Mez: But it'd be up to others, not me, to decide
[05:29] <Kinnison> Pepperfish is still mine
[05:30] <Mez> Kinnison, It wouldn't matter what it is as long as it can be accessed via PHP ;)
[05:30] <Mez> I was thinking a crude hack of trying to log in and then storing the details locally if it worked
[05:33] <Kinnison> Urgh, that'd be really nasty
[05:33] <Kinnison> and would essentially be a password-scraper
[05:33] <Mez> pretty much
[05:33] <Kinnison> and would fail if someone changed their password on LP within whatever cache timeouts you have
[05:34] <Kinnison> I've been pushing to have Livejournal be an OpenID server. Once that is done, it'd be easy for the forums to use that
[05:34] <Mez> well it would only scrape on sign up/now and then ;)
[05:34] <Kinnison> Until then, I'm not sure what to suggest
[05:34] <Mez> lol
[05:34] <Mez> LP supports openID?
[05:35] <Kinnison> Not yet
[05:35] <Kinnison> I've been pushing for it
[05:36] <Kinnison> Mostly because I'm fed up of using random openid servers around the net instead of running my own or using LP
[05:36] <ddaa> I'm sure if you got a patch, people would be happy to roll it out :P
[05:39] <Mez> hmm
[05:39] <Mez> OpenID wouldn't be too hard to implement in the new forums
[05:40] <dilys> Merge to devel/launchpad/: [trivial]  change oops ID generation to make IDs fully unique (r3020: James Henstridge)
[05:42] <Kinnison> ddaa: Aye, I'm sure they would. Perhaps I'll hack on it after soyuz deployment
[05:44] <Kinnison> hehe
[05:46] <ddaa> oh, my last merge attempt was killed...
[05:47] <ddaa> hung on a buildbot test case, apparently
[05:47] <ddaa> buildbot.hate += 1
[05:48] <ddaa> IntegerOverflowError
[05:50] <Kinnison> buildbot.setHate( buildbot.getHate() + 1 );
[05:50] <ddaa> ARRRRRGH
[05:59] <sivang> Kinnison: what language was that in? :)
[06:00] <Kinnison> sivang: Vomit^WJava
[06:00] <ddaa> that looks like buildbot code anyway
[06:00] <sivang> Kinnison: hehe :)
[06:01] <Kinnison> It doesn't
[06:02] <mantiena> bradb, hi
[06:07] <dilys> Merge to devel/launchpad/: [r=spiv]  Optimize LaunchpadCelebrity database access (r3021: Stuart Bishop)
[06:15] <salgado> BjornT, what do you think of something like https://chinstrap.ubuntu.com/~dsilvers/paste/filecH6gEW.html to solve that issue I raised earlier?
[06:24] <BjornT> salgado: yeah, something like that could work. do you really need the new_view.initial_values.update() call? the widgets should pick up those values from the request i think.
[06:24] <salgado> BjornT, they might not be in the request
[06:25] <salgado> if I enter some values and then try to go to the advanced search before having submitted that form once at least
[06:25] <salgado> ..., for instance
[06:31] <BjornT> salgado: hmm, i'm not sure i follow. if they are not in the request, you will set them to None. if they are in the request, you will set them to the same value that the widgets will read from the request. but maybe i'm missing something, i'm tired...
[06:35] <salgado> BjornT, you're right, I don't really need that. I was confused by something else
[06:35] <zyga> carlos: ping
[06:48] <carlos> zyga, pong
[06:49] <zyga> carlos: hi I'd like to talk to you about a translation filtering and fixing system if you have the time
[06:50] <carlos> zyga, I need to leave now, I suppose I will be back in a couple of hours
[06:50] <zyga> carlos: fine, could you ping me once you get back? :)
[06:50] <zyga> I've got some code for you
[06:51] <carlos> zyga, sure
[06:51] <carlos> zyga, cool
[06:54] <jordi> hey
[06:55] <zyga> jordi: hello :)
[06:55] <jordi> carlos: saw that Thai request?
[06:56] <carlos> jordi, no, sorry. I will take a look when I'm back
[06:58] <carlos> jordi, btw, I just asked for 20Mbps with ya.com ;-)
[06:58] <carlos> jordi, and fixed IP address
[06:58] <jordi> carlos: w00t. How much?
[06:59] <carlos> 47.95, in six months 44.95 and after another six months 41.95
[06:59] <carlos> +VAT
[07:00] <jordi> is that including calls?
[07:00] <carlos> but I will move my server to my home
[07:00] <carlos> jordi, yes
[07:00] <jordi> hm. Is my telephone thingy accepting 20Mb now?
[07:00] <jordi> or only yours?
[07:00] <carlos> Hmm, not sure...
[07:00] <carlos> let me check
[07:01] <carlos> jordi, no, only 4Mb
[07:01] <carlos> same price
[07:04] <jordi> HMM
[07:04] <jordi> oops
[07:04] <jordi> another user requests translating KDE via rosetta.
[07:05] <carlos> see you
[07:05] <jordi> laters
[07:20] <Mez> Kinnison, who would it be best to talk to about integrating the forums with LP ?
[07:20] <Mez> I'm having fun at the moment with it :D
[07:28] <Kinnison> Mez: Umm, start with spiv and stub I guess
[07:54] <dilys> Merge to devel/launchpad/: [r=kiko]  Guess the right binary/source packages when filing a bug. (r3022: Brad Bollenbach)
[08:09] <salgado> Kinnison, around?
[08:17] <Kinnison> salgado: yes
[08:23] <salgado> hi Kinnison, the implementation of the MirrorManagement spec is blocked because I need more details about some methods that are mentioned there. they're guessSources(), guessBinaries() and updateContentModel(). Maybe you can add some docstrings to them or tell me a little bit about what they're supposed to do?
[08:40] <Kinnison> salgado: Hmm
[08:40] <Kinnison> salgado: let me refresh my memory
[08:42] <Kinnison> salgado: guessSources is to probe the mirror to decide which distroreleases the mirror has source for
[08:42] <Kinnison> salgado: guessBinaries is to probe for which distroarchreleases are present
[08:43] <Kinnison> salgado: updateContentModel is to write that data back to the database
[08:43] <Kinnison> salgado: need anything else?
[08:45] <bradb> BjornT: Around?
[08:50] <salgado> Kinnison, what do you mean by "probe for which distroarchreleases are present"? I thought the mirror's owner is responsible for telling us what is present in the mirror...
[08:50] <Kinnison> salgado: make http/ftp requests and work it out
[08:50] <Kinnison> salgado: I.E. look for Packages.gz and Sources.gz etc
[08:52] <salgado> Kinnison, so, the guessFoo methods are the ones that will tell me what MirrorDistroArchRelease/MirrorDistroReleaseSource entries I need to create
[08:52] <Kinnison> yep that's the idea
[08:52] <Kinnison> so launchpad shows what we found, not what they think they have
[08:52] <Kinnison> more sane that way
[08:53] <salgado> indeed, it makes sense
[08:53] <Kinnison> salgado: coolio
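The probing Kinnison describes above could be sketched roughly like this in Python. Everything here is illustrative: the helper names, the archive URL layout, and the injected `url_exists` callback are assumptions for the sketch, not the actual Launchpad code.

```python
# Hedged sketch of guessSources/guessBinaries: build the index-file URLs
# we expect a Debian-style mirror to have, then check which ones exist.
from urllib.parse import urljoin

def candidate_urls(base_url, releases, components, arches):
    """Build the Sources.gz / Packages.gz URLs a probe would request."""
    urls = []
    for release in releases:
        for component in components:
            # One Sources.gz per release/component (guessSources).
            urls.append(urljoin(base_url,
                f"dists/{release}/{component}/source/Sources.gz"))
            # One Packages.gz per architecture (guessBinaries).
            for arch in arches:
                urls.append(urljoin(base_url,
                    f"dists/{release}/{component}/binary-{arch}/Packages.gz"))
    return urls

def probe(urls, url_exists):
    """Return the subset of URLs actually present on the mirror.

    url_exists is injected (e.g. an HTTP HEAD request) so the logic
    stays testable without network access.
    """
    return [url for url in urls if url_exists(url)]
```

The URLs that answer tell you which distroreleases/distroarchreleases the mirror really carries — "what we found, not what they think they have", as Kinnison puts it.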
[08:53] <salgado> and what would be the mirror.desiredPrefferedProbes() for?
[08:54] <Kinnison> That, is a very good question
[08:58] <salgado> heh
[08:58] <salgado> I guess I'll have to wait for cprov
[08:59] <Kinnison> worked it out
[08:59] <Kinnison> it returns a set of URLs in that mirror which are the URLs for guessSources and guessBinaries
[08:59] <Kinnison> in other words, it works them out based on the distrorelease and distroarchrelease records present in the database for the distribution that the mirror is meant to have
[09:00] <Kinnison> Do you see?
[09:03] <salgado> ah, right. now it makes sense
[09:04] <salgado> I'm still not sure how I'm supposed to check how up-to-date the mirror is, though
[09:04] <Kinnison> Look at publishing records
[09:04] <cprov> salgado: investigating the publishing records
[09:04] <Kinnison> you have a timeline of things appearing and disappearing
[09:04] <cprov> ahhah
[09:05] <Kinnison> thus you can see "libfoo 1.0" appears 10 hours ago, therefore if it's on the mirror, we are at worst 10 hours old
[09:05] <Kinnison> Do you see?
[09:05] <salgado> aha
[09:05] <Kinnison> We have to go now
[09:05] <Kinnison> If you're still stuck, put your queries in a mail to me and I'll look tomorrow morning
[09:05] <salgado> and to check if it is in the mirror, should I rely only on what we have in Packages.gz and Sources.gz?
[09:05] <Kinnison> ciau dude
[09:06] <Kinnison> Either will do
[09:06] <Kinnison> Both would be better
[09:06] <salgado> great
[09:06] <Kinnison> that way we can confirm consistency
[09:06] <Kinnison> ciau dudes
[09:06] <salgado> thanks, Kinnison. :)
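A minimal sketch of the staleness estimate Kinnison outlines ("libfoo 1.0" published 10 hours ago and present on the mirror, therefore the mirror is at worst 10 hours old). The data shapes are assumptions; real Launchpad publishing records are database rows, not tuples.

```python
from datetime import datetime, timedelta

def worst_case_age(publishing_records, on_mirror, now):
    """Upper bound on mirror staleness from a publishing timeline.

    publishing_records: iterable of (package, published_at) tuples.
    on_mirror: set of package names the probe found on the mirror.
    If the newest published package the mirror already carries appeared
    N hours ago, the mirror is at worst N hours behind.
    """
    present = [published_at for package, published_at in publishing_records
               if package in on_mirror]
    if not present:
        return None  # no overlap: the timeline tells us nothing
    return now - max(present)
```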
[09:16] <stub> BjornT: I've disabled process-email.py as it isn't doing anything useful except emailing an exception out every few minutes
[09:25] <ddaa> Kinnison: niemeyer: can any of you guys spare a few minutes to chat about buildd-ng design?
[09:25] <niemeyer> ddaa: Sure
[09:25] <ddaa> I'm trying to spec up the stuff I would like ATM.
[09:26] <ddaa> So, it's going to be a master-slave architecture
[09:26] <niemeyer> Ok
[09:26] <ddaa> with the master in charge of scheduling, user reporting, and back-end access.
[09:27] <ddaa> One thing I'm seeking is having a system that does _not_ force-kill running jobs on slaves when the master goes offline.
[09:27] <ddaa> For example for a system upgrade, new hard-drive, slave-compatible bugfix, whatever.
[09:27] <niemeyer> Makes sense
[09:28] <ddaa> I'm also looking for really simple network protocols, something like xmlrpc + streaming socket
[09:28] <ddaa> the streaming socket bit is where stuff starts getting complicated
[09:29] <ddaa> one requirement is the ability to get the output of a running task in real time
[09:29] <niemeyer> It'd be very cool if that scheme could be made generic, so that any kind of task delivery needs could be addressed by the same infrastructure. Is this your intention?
[09:29] <ddaa> that's very useful with buildbot
[09:29] <ddaa> niemeyer: we already talked about it, remember?
[09:29] <niemeyer> Yep
[09:29] <niemeyer> I'd like to understand where we stand now :)
[09:30] <ddaa> Well, still in early design and feasibility evaluation.
[09:30] <ddaa> I tried doing the "gap analysis" that lifeless would like me to do, but I find it too hard and frustrating to write up abstract requirements.
[09:31] <ddaa> So I started speccing up the design so I could cover the various annoying cases with more confidence.
[09:31] <niemeyer> Cool
[09:31] <niemeyer> Do you think that streaming is really needed?
[09:31] <niemeyer> As opposed to checkpoint reading?
[09:32] <niemeyer> That's usually more scalable, and transparently covers the case of the server dying without killing the client.
[09:33] <ddaa> It's totally useful with buildbot. I can see how something is progressing, it gives me a fuzzy feeling, allows making ballpark ETAs, and stuff like knowing in real-time when something completed.
[09:33] <ddaa> Probably all the use cases can be covered in a specific manner, but that's a feature I have come to really like.
[09:34] <ddaa> That's also quite useful for debugging from time to time.
[09:34] <ddaa> just get the streaming page in a browser, and peek at it now and then
[09:34] <lifeless> bradb: ping
[09:34] <niemeyer> Indeed. I'm not thinking about removing what buildbot offers you.. just a different implementation.
[09:35] <bradb> lifeless: pong
[09:35] <lifeless> no need to be quite so bitchy with your rhetorical questions y'know
[09:35] <lifeless> I know you know the answers to most of them, and it's demoralising to see that when you first wake up in the morning
[09:36] <ddaa> niemeyer: but I guess you have a point about it making things more complex...
[09:36] <bradb> lifeless: I don't mean for them to be rhetorical. I want to know why it happened, and what needs to be changed for that not to happen anymore. It's equally demoralising to wake up to a 70 hour old merge request.
[09:36] <niemeyer> ddaa: Just an idea..
[09:37] <lifeless> bradb: I don't know what you mean by a 70 hour old merge request - the queue is empty now. Are you saying the round trip time for a failure was 70 hours?
[09:37] <niemeyer> ddaa: Anyway, what about the attributed-slaves idea, in the sense of being able to ask what capabilities a slave has? What's your opinion about it?
[09:38] <bradb> lifeless: I mean the merge request was in #3 position this morning, and it was 70 hours old at that point. I submitted it Friday morning, my time.
[09:39] <bradb> lifeless: It's cleared out now because stub kill -9'd it. My concern was how it came to be to begin with.
[09:39] <ddaa> niemeyer: I have not yet found a need for that. I realise that the Scheduler may want to assign some tasks specifically to some slaves (that makes sense for importd when the required data set is very large)
[09:39] <bradb> Having said that, I put it through again this afternoon, and it was pleasantly fast.
[09:39] <bradb> If it would keep behaving the way it did this afternoon, I'd be very, very happy.
[09:39] <lifeless> what time UTC did he do that? 
[09:39] <ddaa> niemeyer: but I rather considered centralizing this kind of logic into the master. The scheduler would know it all.
[09:40] <bradb> lifeless: 15:34
[09:40] <ddaa> niemeyer: the plan I have now is "Scheduler.slaveWaiting(slave)", would tell the scheduler that a given slave is ready to receive a job.
[09:40] <ddaa> niemeyer: did I answer your question?
[09:41] <niemeyer> ddaa: I imagine being able to have slaves that may burn images, and others that may build packages, while others will do both and also do importd-related tasks..
[09:41] <lifeless> bradb: ok, I have mail from stub.
[09:41] <lifeless> bradb: pqm wasn't stuck, the launchpad test suite was.
[09:42] <lifeless> stub has killed the test suite, not pqm.
[09:42] <bradb> lifeless: My main concern is how that went unnoticed for so long.
[09:42] <lifeless> 70 hours on the weekend. *weekend*
[09:42] <niemeyer> ddaa: But I'm not sure if you intend to work on the generalization of that master-slave infrastructure..
[09:42] <ddaa> niemeyer: Mh... I never considered going _that_ generic... I think that would lead to unnecessary contention on deployment. Better not to share computing resources across responsibility boundaries.
[09:42] <niemeyer> ddaa: That'd be awesome, for sure
[09:43] <ddaa> I like to be in control of my slaves :)
[09:43] <bradb> lifeless: 48 of those hours on a weekend, yeah.
[09:43] <niemeyer> ddaa: Going generic doesn't mean you won't control your slaves.. Those slaves could be told to only accept "importd" tagged jobs.
[09:44] <ddaa> niemeyer: when I say "deployment contention", I mean for example that I like to be able to roll out new code without asking anybody.
[09:44] <niemeyer> ddaa: Hummm
[09:45] <ddaa> So that implies that systems are dedicated to specific, well scoped tasks. Sure, my slaves could do two or three kinds of tasks.
[09:45] <ddaa> And it's definitely useful to have affinity between some specific tasks and some systems.
[09:46] <niemeyer> I perfectly understand your desire. I'm trying to figure out if there would be a way to match your needs with a system that would be generic enough to be reused by Soyuz..
[09:46] <ddaa> For example, I would rather have python, gimp and coreutils (huge bitches) be statically associated with specific machines to avoid transferring large amounts of data over the net.
[09:47] <ddaa> niemeyer: exactly what I'm asking for :)
[09:47] <niemeyer> You're still thinking about keeping slave data persistently in a single slave?
[09:47] <ddaa> I think that should be optional.
[09:47] <niemeyer> Wouldn't it be possible to have some network FS handling that?
[09:47] <ddaa> i.e. the system should allow it
[09:48] <ddaa> but for the common stuff, "bzr branch" or rsync, and bzr push would allow me to remove the dependence.
[09:48] <niemeyer> Have you thought about "movable code"? i.e. streaming code over the wire to tell what the client should do?
[09:49] <ddaa> I thought about it.
[09:49] <ddaa> About half a second :)
[09:49] <niemeyer> pyro does that automatically, if enabled
[09:49] <niemeyer> Hehehe :)
[09:49] <ddaa> I'd rather have a very simple vocabulary for communication.
[09:50] <niemeyer> Doing something like this I imagine that it'd be possible to keep a quite reduced and generic core
[09:50] <niemeyer> Humm
[09:50] <ddaa> No really strong reason... just a desire to keep interfaces simple.
[09:50] <ddaa> That seems to be often a good idea.
[09:51] <niemeyer> Indeed
[09:52] <ddaa> in any case, the task will be dependent on local tools (bzrlib, twisted...) so I do not really see what the movable code thing would buy me.
[09:52] <lifeless> deploy the code too
[09:53] <lifeless> rsync over a freaking big tarball
[09:53] <lifeless> then run shit inside it
[09:53] <ddaa> okay, I'll download a launchpad tree FOR EVERY FUCKING IMPORT TASK
[09:53] <zyga_> carlos: ping?
[09:53] <lifeless> ddaa: exactly
[09:53] <lifeless> ddaa: now its just a matter of optimisation
[09:53] <carlos> zyga_, pong
[09:53] <ddaa> so elmo will come and tear off my toenails using a rusty spoon
[09:53] <carlos> zyga_, dude, you smell me...
[09:54] <carlos> just arrived
[09:54] <zyga_> carlos: developers feel alike
[09:54] <niemeyer> ddaa: The infrastructure is usually in place..
[09:54] <niemeyer> ddaa: You do not move it.. :)
[09:54] <zyga_> carlos: I wanted to talk to you about a plugin for rosetta's importer
[09:54] <niemeyer> ddaa: Anyway, just an idea.. no need to freak out.. :)
[09:54] <ddaa> niemeyer: let's get back on track...
[09:55] <niemeyer> ddaa: Ok
[09:55] <zyga_> carlos: do you prefer in private or public?
[09:56] <ddaa> niemeyer: you mentioned slave capability
[09:56] <carlos> zyga_, as you prefer, I suppose public is better so people can read it just in case they are interested in it
[09:56] <zyga_> carlos: fine by me
[09:56] <ddaa> is there a specific reason why storing that information as Scheduler configuration is not good enough for soyuz?
[09:56] <carlos> ok
[09:56] <zyga_> carlos: the issue is that sometimes UTF-8-valid but otherwise corrupted files get through the validator
[09:56] <ddaa> (i.e. something entirely stored on the master)
[09:57] <carlos> right
[09:57] <zyga_> carlos: I wrote a bit of code that will look at a given text and try to determine if it contains some common corrupted characters for a given language
[09:57] <zyga_> carlos: obviously my example data files are for polish but the infrastructure (big word for a small class) is quite generic
[09:58] <zyga_> carlos: I'd gladly contribute code for rosetta under any license as long as you help me incorporate it
[09:58] <niemeyer> ddaa: Yes, that's a viable approach as well..
[09:58] <niemeyer> ddaa: Are you thinking about having a bus of slaves or having a single registry of them in the server side?
[09:58] <carlos> zyga_, hmm sounds interesting...
[09:58] <zyga_> I can create a full set of data files for polish with various corruptions
[09:58] <carlos> zyga_, what's the API you are using or you want to use?
[09:59] <lifeless> niemeyer: oh btw - that difflib recursion problem
[09:59] <lifeless> niemeyer: has ddaa chatted with you about that ?
[09:59] <niemeyer> ddaa: Asking it another way, do we have to change the server configuration and restart it to get a new slave in?
[09:59] <ddaa> niemeyer: I was more thinking along the "registry in the master" line.
[09:59] <zyga_> carlos: the code is at my bzr tree, ubuntu.suxx.pl/2006--1/bzr-archive/fix-broken-encoding--main
[09:59] <zyga_> carlos: pull it and examine the only source file
[09:59] <carlos> zyga_, the best way to integrate it is to develop it as an external module, you put it as LGPL and we call your code, just like the GNOME integration script you were working on
[09:59] <ddaa> niemeyer: I was thinking that, yes
[09:59] <carlos> ok
[10:00] <zyga_> carlos: right, I dual-licensed it just in case: GPL, and LGPL for launchpad
[10:00] <niemeyer> ddaa: Or would it be interesting to have "I'm a slave, what do you want me to do?"
[10:00] <carlos> zyga_, ok, thanks
[10:00] <niemeyer> lifeless: I don't think so
[10:00] <zyga_> carlos: the corruptions are described as octet streams (ASCII hex values) and the corresponding unicode character name
[10:01] <lifeless> niemeyer: I remember you fixing it for bzrlib last year, by correcting python
[10:01] <niemeyer> lifeless: I'm talking to Tim Peters about it.. but he gave me a non-optimal answer. :)
[10:01] <lifeless> niemeyer: we're triggering the same thing in production, so a local patch would be mucho appreciated
[10:01] <zyga_> carlos: I want to add a class that will look at a given text (as a sequence of lines or whatever) and determine the following:
[10:01] <carlos> zyga_, what's a catalog file?
[10:01] <niemeyer> lifeless: Like, I wouldn't use difflib in anything serious..
[10:01] <lifeless> niemeyer: !@#!@#!
[10:01] <lifeless> :(
[10:01] <zyga_> 1) if the text is valid and can go right into the database
[10:01] <zyga_> carlos: data/*
[10:02] <niemeyer> lifeless: Yep.. that was my reaction when reading it
[10:02] <ddaa> niemeyer: I was definitely thinking about bouncing the master to let it recognize a new slave that has specific job affinities. Which should be painless if bouncing the master does not disrupt tasks already running on slaves.
[10:02] <niemeyer> lifeless: But I guess we shouldn't take it too seriously, and just fix it.
[10:02] <lifeless> niemeyer: yes please!
[10:02] <zyga_> carlos: (continuing): 2) if the file is corrupted but can be recovered + a log message to the upper layer
[10:02] <carlos> zyga_, so you need to define for every language/encoding the set of characters that are not valid?
[10:02] <zyga_> carlos: and lastly 3) if the file is corrupted and cannot be imported or automatically fixed + a log message to the upper layer
[10:03] <ddaa> niemeyer: please, can you dig out that difflib patch. It's a critical blocker for importd->bzr transition.
[10:03] <zyga_> carlos: each translator should help on his own but this can be added gradually
[10:03] <niemeyer> lifeless: Ok, I'll give you some feedback about this shortly.
[10:03] <niemeyer> ddaa: Oops..
[10:03] <lifeless> niemeyer: me & ddaa please. ddaa is the one directly affected.
[10:03] <carlos> zyga_, I like the concept, but the catalog file... is a bit difficult to have...
[10:03] <niemeyer> ddaa: In that case we have to apply it locally
[10:03] <niemeyer> Sure.. will do that today
[10:03] <lifeless> niemeyer: thats fine
[10:03] <zyga_> carlos: I started by adding a set of corruptions that often appear when someone re-encodes an ill-tagged .po file
[10:04] <ddaa> niemeyer: yes, exactly, we can nag our local python packager to do just that.
[10:04] <zyga_> carlos: the catalog file can be changed to anything you like, it was simple to me
[10:04] <carlos> zyga_, couldn't we get that information already from the encoding information for every encoding?
[10:04] <carlos> zyga_, it's not the format, it's the kind of information we need to provide
[10:04] <zyga_> carlos: humm? I don't understand
[10:04] <lifeless> niemeyer: would upstream be pissed if that went into the ubuntu package ?
[10:04] <zyga_> carlos: no no
[10:04] <niemeyer> ddaa, lifeless: Ok, will prepare a clean patch for you, and push that upstream a bit harder. :)
[10:04] <niemeyer> lifeless: Not at all
[10:05] <lifeless> then - we need to get it to doko
[10:05] <niemeyer> lifeless: I'll apply the patch upstream.
[10:05] <zyga_> carlos: I don't think we can automatically build a catalog file
[10:05] <zyga_> carlos: (automatically at runtime)
[10:05] <niemeyer> lifeless: Let's see if anyone screams.. :)
[10:05] <lifeless> niemeyer: ! sweet
[10:05] <zyga_> it should contain things caught in the wild
[10:05] <zyga_> things corrupting the main database for example
[10:05] <ddaa> niemeyer: let's move our discussion to #buildd-ng
[10:05] <ddaa> too noisy here
[10:06] <zyga_> carlos: or maybe I didn't quite understand what you were referring to
[10:10] <lifeless> ddaa: what distro are the importd machines on at the moment ?
[10:10] <lifeless> ddaa: I'm organising the info to get you a fixed package
[10:11] <lifeless> niemeyer: can you tell doko when you commit that to svn, and give him the revno ?
[10:11] <zyga_> carlos: ?
[10:12] <niemeyer> lifeless: Will do that
[10:12] <lifeless> thanks
[10:14] <zyga_> carlos: ?
[10:14] <carlos_> zyga_, hi, sorry, my DSL line went down
[10:14] <zyga_> carlos_: ok
[10:15] <carlos_> what's the last thing you read?
[10:15] <zyga_> carlos	zyga_, it's not the format, it's the kind of information we need to provide
[10:15] <carlos_> carlos zyga_, for latin1/iso-8859-15 we have a set of valid characters
[10:15] <carlos_> carlos and others are invalid
[10:15] <carlos_> carlos hmm, thinking about it twice, that's not a good solution
[10:15] <carlos_> carlos zyga_, so we have here two different things...
[10:15] <carlos_> carlos zyga_, 1.- Detect that the file is using the encoding that the .po header says it's using
[10:15] <carlos_> carlos 2.- Try to 'fix' it automatically
[10:15] <carlos_> carlos I think we should require the catalog information only of the second point
[10:16] <carlos_> carlos so language teams would provide information to prevent some imports from being imported
[10:16] <carlos_> carlos but we should be able to detect if the encoding is wrong without requiring a catalog
[10:16] <carlos_> carlos or at least a manually maintained catalog
[10:16] <zyga_> carlos_: agreed
[10:16] <ddaa> lifeless: breezy
[10:16] <ddaa> lifeless: I know I had to fix a compatibility glitch with python-subversion
[10:16] <zyga_> carlos_: how can we detect the set of valid characters for $LANGUAGE?
[10:16] <niemeyer> lifeless: The exact phrasing was "I don't consider the current algorithm to be general-purpose (it can be provoked into uncomfortable sloth several ways)."
[10:17] <zyga_> carlos: I know that for specific encoding it's possible but what about language?
[10:17] <ddaa> mh... actually that's wrong... I know because the admin told me he upgraded them...
[10:17] <lifeless> meh
[10:17] <carlos> zyga_, it should not be too difficult to get that information from glibc
[10:17] <zyga_> carlos: also, note that detecting a known corruption also helps
[10:18] <jbailey> carlos: Hmm?
[10:18] <niemeyer> lifeless: That's better than not using it for serious purposes.. it just means that we may get more problems.. ;-)
[10:18] <zyga_> carlos: I didn't know glibc has that kind of data, any hints?
[10:18] <carlos> jbailey, not from the API
[10:18] <carlos> ;-)
[10:18] <jbailey> carlos: Right.  The locales parsing sucks a bit that way.
[10:18] <lifeless> ddaa: what are the machine names this needs to roll out onto ?
[10:19] <carlos> zyga_, glibc has the list of languages that are spoken in a country (at least most of them), and the same for the list of encodings that a language has (or at least that's what I think), so we could get it from there and improve it later, like we already did with the spoken languages
[10:19] <carlos> jbailey, we did it before ;-)
[10:19] <jbailey> carlos: The goal is also to have Rosetta provide all of this information and have glibc generate its tables from that.
[10:20] <jbailey> carlos: So as a long term solution, you'll *have* all the data already in Launchpad. =)
[10:20] <lifeless> jbailey: in realtime ?
[10:20] <lifeless> jbailey: how will you boot without net ?
[10:20] <ddaa> lifeless: galapagos neumayer marambio russkaya leningradskaya
[10:20] <jbailey> lifeless: No, snapshotted as the language packs are done.
[10:20] <lifeless> jbailey: YHBTHANDHTH
[10:20] <carlos> jbailey, yeah, that's what I'm talking about atm, to "migrate" that info from glibc into Rosetta to get some initial values
[10:21] <ddaa> lifeless: escudero could come in handy too, but I'd rather not push too much, it's supposed to be a lean and clean web-facing system.
[10:21] <jbailey> lifeless: I'm sorry, I don't speak czech.
[10:21] <jbailey> carlos: Ah, cool.
[10:21] <ddaa> lifeless: and macquarie too (with the galapagos etc.)
[10:21] <lifeless> ddaa: meh, too late.
[10:21] <ddaa> lifeless: so macquarie galapagos neumayer marambio russkaya leningradskaya
[10:21] <zyga_> carlos: so glibc->list-of-langs->list-of-encodings-per-lang->list-of-unicode-chars-per-lang-that-map-from-each-encoding
[10:21] <carlos> zyga_, more or less, yes
[10:22] <ddaa> bah, the initial list is the important one
[10:22] <jbailey> carlos: Probably simplest to just parse the Belocs files directly.
[10:22] <lifeless> ddaa: why macquarie? elmo gets nervous about macquarie, I bet that that would slow down processing of the request
[10:22] <jbailey> carlos: Then you're at least just porting an existing parser.
[10:22] <lifeless> jbailey: you have been trolled have a nice day hope that helps
[10:22] <ddaa> lifeless: cause it could come in handy, drop that one.
[10:23] <zyga_> carlos: does python have any useful bindings that could help me with this?
[10:23] <carlos> jbailey, yeah, when I say glibc I'm also talking about belocs
[10:23] <zyga_> carlos: the first two stages
[10:23] <carlos> zyga_, no idea
[10:24] <zyga_> carlos: okay, let's talk about the problem of fixing broken files automatically, can you give me the api you'd like to use
[10:24] <zyga_> I'll implement it
[10:24] <jbailey> carlos: Right.  Belocs presents troubles in that their format changes slightly from time to time.
[10:24] <jbailey> carlos: So we've patched glibc to understand them, and their newest files have changed the format slightly again.
[10:24] <jbailey> carlos: The binary output is still compatible with glibc, but Denis has made the source format more expressive.
[10:25] <carlos> jbailey, that's bad for the parsers but better for maintaining the files....
[10:25] <jbailey> carlos: Right.
[10:26] <lifeless> ddaa: niemeyer: how much will having 'Branch.initialize' no longer setup the working tree break you ? You can get the same behaviour via 'WorkingTree.initialize' OR b = Branch.initialize(); wt = WorkingTree.checkout(b, dir)
[10:26] <carlos> zyga_, I think the easiest would be a method that gets a stream of data and a string with an encoding
[10:26] <carlos> zyga_, and you return true or false depending on whether it's in that encoding
[10:26] <ddaa> lifeless: bzrsyncd does not create branches
[10:26] <lifeless> ddaa: what about your test suite ?
[10:27] <ddaa> It would certainly break.
[10:27] <ddaa> just do it, supposedly pqm won't let you merge until that's fixed :)
[10:27] <zyga_> carlos: even if the stream is technically correct?
[10:27] <niemeyer> lifeless: I don't see any problems as well.. it's a trivial fix..
[10:27] <lifeless> how badly? It's a bitch to keep compatible :( but I probably can; the alternative is to do it, and then coordinate with you for putting bzr with this change into lp
[10:28] <carlos> zyga_, ?
[10:28] <ddaa> lifeless: you can also break the compatibility and fix launchpad in a multi-tree commit :)
[10:28] <lifeless> ddaa: when I have multi tree commits done, sure.
[10:28] <zyga_> carlos: like the file that has brought me to this issue, it's valid utf-8 and contains broken characters (polish)
[10:29] <lifeless> hmm
[10:29] <zyga_> carlos: so I should return more-less or false?
[10:29] <ddaa> lifeless: it's probably worthwhile for you to know how to fix that sort of simple breakage, that will save many roundtrips for small patches.
[10:29] <lifeless> we can add workingTree.initialize now
[10:29] <lifeless> fix lp
[10:29] <lifeless> then the actual split comes with 0.8
[10:30] <lifeless> how does that sound? Branch.initialize gets left alone for a little bit 
[10:30] <carlos> zyga_, hmm, that's a good question...
[10:30] <ddaa> if I'm the least busy when you need the patch, I'll do it
[10:30] <lifeless> well
[10:30] <ddaa> (which means I'll probably arrange for somebody else to do it :)
[10:31] <lifeless> branch-formats does this change now
[10:31] <carlos> zyga_, so it's set as UTF-8, and it's a valid UTF-8 stream, but the characters themselves are wrong?
[10:31] <zyga_> carlos: real utf-8 characters just without any sense
[10:31] <zyga_> carlos: gibberish ;-)
[10:31] <carlos> right
[10:31] <ddaa> lifeless: fire me a mail if you need a patch done, once the prerequisites are in rocketfuel
[10:32] <lifeless> ok
[10:32] <lifeless> I'll do a lp specific pre-requisite patch
[10:32] <carlos> for that, the only solution is the catalog file you designed ...
[10:32] <ddaa> lifeless: ATM I'll try to focus on my conversation with niemeyer in #buildd-ng
[10:32] <zyga_> carlos: examine: ubuntu.suxx.pl/2006--1/for-specific-people/cassidy
[10:32] <lifeless> which lets you use the new api but does nothing else
[10:32] <lifeless> and I'll mail you when thats in lp/sourcecode/bzr
[10:33] <carlos> zyga_, http://trific.ath.cx/software/enca/ Do you think this would be useful?
[10:34] <carlos> bzr get http://ubuntu.suxx.pl/2006--1/for-specific-people/cassidy
[10:34] <carlos> bzr: ERROR: Not a branch: http://ubuntu.suxx.pl/2006--1/for-specific-people/cassidy
[10:35] <zyga_> carlos: partially, our problem as I see it is dual: from badly tagged encoding (where it can indeed help)  and from corrupted files (where it cannot as far as I can see) it's worth looking at
[10:35] <zyga_> carlos: ah, that's a generic webpage :-)
[10:36] <carlos> oh
[10:36] <carlos> ;-)
[10:37] <zyga_> bzr: ERROR: Not a branch: http://www.google.com/
[10:39] <zyga_> carlos: both files are valid, only one makes sense
[10:40] <carlos> yeah, I remember it now
[10:40] <carlos> we have a spec to prevent this kind of things with updates (not always, but most of the times it should be detected)
[10:41] <zyga_> carlos: link if you could?
[10:41] <carlos> zyga_, we will reject any upload that changes more than the 75% of the previous translations
[10:42] <zyga_> carlos: ??
[10:42] <zyga_> carlos: hardly a protection if you ask me
[10:43] <carlos> this is to prevent the upload of de.po as es.po by mistake
[10:43] <carlos> hmm, I don't find the spec...
[10:43] <zyga_> carlos: I see
[10:44] <carlos> zyga_, https://wiki.launchpad.canonical.com/LaunchpadPoImport
[10:44] <carlos> zyga_, look for CONTINUITY_THRESHOLD
[10:45] <zyga_> reading
[10:46] <zyga_> k, reasonable
[10:47] <zyga_> carlos: so about the api question?
[10:47] <carlos> zyga_, I really don't know...
[10:48] <carlos> Hmm, I suppose
[10:48] <carlos> a stream, the language it's supposed to be and the encoding it's also supposed to use
[10:48] <carlos> and return true or false if that's true or not
[10:48] <carlos> zyga_, is that enough?
[10:48] <sivang> carlos, zyga_ : still fighting with the uncide import problem ?
[10:49] <zyga_> carlos: you could also give it a chance to try to fix the stream
[10:49] <zyga_> sivang: yes
[10:50] <carlos> zyga_, yeah, but that should be another API call
[10:51] <carlos> zyga_, and I think we should do that as a second step
[10:51] <zyga_> carlos: agreed
[10:51] <zyga_> carlos: same info and it should return a fixed stream or raise an exception if it cannot
[10:53] <carlos> zyga_, makes sense
[10:53] <zyga_> carlos: fine I adapt current code
[10:53] <zyga_> carlos: language as in 'polish' or 'pl' or something else?
[10:54] <carlos> zyga_, I think the language code is the easier way to do it
[10:54] <carlos> 'pl', 'es', etc...
[10:54] <zyga_> carlos: good
[10:56] <carlos> zyga_, anything else? I should go to have dinner
[10:56] <zyga_> carlos: no, bon apetit :-)
[10:57] <zyga_> carlos: I'll ping you when I have it ready
[10:57] <carlos> ok, thanks
[10:57] <carlos> see you later
[10:59] <Anjo_Malvado> 
[11:10] <zyga_> carlos: by stream you mean a file-compatible object?
[11:17] <carlos> zyga_, stream of bytes
[11:18] <zyga_> carlos: python has some standard stream class or is this just a sequence
[11:18] <carlos> zyga_, just a buffer of bytes
[11:18] <zyga_> k
[11:19] <zyga_> carlos: (hint, use array module)
[11:31] <doko_> is there any place where the malone status "Fix committed" and "Fix released" is described?
[11:38] <Ants> respect Ubuntu!
[11:39] <Burgwork> Ants, um?
[11:39] <Ants> i'm russian!
[11:40] <sivang> night all
[11:41] <uws>  cold in russia :)
[11:44] <carlos> doko_, Don't know
[11:44] <Ants> what does it mean: night all?
[11:44] <carlos> doko_, but 'Fix committed' is when you fixed it but your fixes are not yet released to the public (in Ubuntu context, it's not yet in the archive)
[11:45] <carlos> doko_, 'Fix released' it's when the users get the fix using the standard methods (just a dist-upgrade)
[11:46] <carlos> doko_, this makes more sense when you are developing a product and you do a commit for a bug fixing and then release a new version with that fix
[11:46] <carlos> at least that's the workflow we use with launchpad (or the one I use)
[11:47] <carlos> good night
[11:47] <carlos> Ants, just someone saying 'good night' to all people