[12:03] <carlos> :-)
[12:25] <BradB> spiv: how do I reopen an IZopeConnection once I've closed it?
[12:25] <BradB> I can't reopen the same object directly, but there must be some way.
[12:26] <BradB> I need to disconnect long enough to drop the db I'm connected to, then reconnect again.
[12:42] <spiv> BradB: Hmm, I don't know how to do that.  The interface certainly doesn't give any way to do it.
[12:45] <BradB> indeed
[12:51] <daf> carlos: yo
[12:52] <carlos> daf: hi
[12:52] <carlos> daf: spiv found the answer to my question already, thanks :-)
[12:55] <daf> carlos: ah, good
[12:55] <daf> carlos: how're you doing?
[12:56] <carlos> well, all the bugs I'm working on depend on other people before they can be finished; apart from that, fine :-P
[12:58] <daf> shall we have a quick meeting or is it too late?
[12:59] <carlos> it's 1:00 here and I'm tired, do we need it before tomorrow's meeting?
[12:59] <daf> no, it can wait
[12:59] <carlos> hmmm, on the other hand... will you attend the meeting?
[12:59] <carlos> 12:00 UTC
[01:03] <daf> yes
[01:04] <carlos> daf: ok, then, could we have the meeting after the launchpad one?
[01:04] <daf> if it's still necessary, sure
[01:05] <carlos> ok
[02:27] <BradB> http://paste.husk.org/1910
[02:29] <spiv> It looks like the exit status is confused.
[02:29] <spiv> Oh, no, hmm.
[02:30] <spiv> test_ok = errlines[-1]  == 'OK\n'
[02:30] <spiv> So, apparently, despite the last line of stderr being 'OK\n', there was still a traceback.
[02:30] <BradB> the failure email i got back told me there were 6 failures. from my command line, uh, that just doesn't make any sense at all.
[02:32] <BradB> spiv: do the UT's get run by the way?
[02:33] <spiv> I believe so, yeah.
[02:34] <BradB> i hope not, because otherwise it becomes difficult to explain how a batching failure got checked in
[02:35] <spiv>     print 'Running tests.'
[02:35] <spiv>     stdin, out, err = os.popen3('cd %s; python test.py %s < /dev/null' %
[02:35] <spiv>         (here, ' '.join(sys.argv[1:] )))
[02:35] <spiv> (from test_on_merge.py)
[02:37] <spiv> Oh, hmm!
[02:37] <BradB> i see stuff like this:
[02:37] <spiv> I think the test_on_merge.py script has a bug :/
[02:37] <BradB> http://paste.husk.org/1911
[02:38] <BradB> (when i run the ftests)
[02:38] <BradB> my brain is in the freezer right now, but damn i just wanna land these page tests
[02:38] <spiv> test.py runs several "batches" (for lack of a better word) of tests, but test_on_merge.py only looks for a final "OK".
[02:38] <spiv> That's fine.
[02:38] <spiv> Those are all ok.
[02:39] <spiv> Just a bunch of warnings, but they don't break the tests.
[02:39] <BradB> yeah, it ends in "OK\n", so...
[02:39] <spiv> Right.
[02:39] <BradB> how is "Exception exceptions.AttributeError" "ignored" though?
[02:39] <spiv> It's from a __del__
[02:39] <BradB> why is that ignored though? :)
[02:40] <BradB> ok, it's outside a test, presumably
[02:40] <spiv> Because it's triggered by the garbage collector...
[02:40] <spiv> Where would you expect an error in __del__ to get raised to? :)
[02:41] <BradB> i dunno...does it happen in tearDown?
[02:41] <spiv> No idea.  Quite likely.
[02:42] <spiv> But __del__ isn't something that's called directly by your code, it's called by python internals.
[02:42] <BradB> that would make sense then. if i just del foo in a test and it raises an error, i'd expect the test to be an error.
[02:42] <spiv> So the exception bubbles up to python's internals, which goes "Oh!  Fancy that.  I'll make a note of that on sys.stderr and get on with my life."
[02:42] <BradB> but in tearDown, i'd expect it to be ignored.
[02:43] <BradB> heh
[02:43] <spiv> Well, doing "del foo" may or may not cause __del__ to be triggered.
[02:43] <spiv> It's triggered when there are no references left, which can happen for all sorts of reasons.
[02:44] <spiv> I've looked into that particular error, and although the warnings are annoying, I don't think it has any significant side effects.
[02:46] <spiv> (so it's not that it's outside a test, it's that it's outside the normal execution path of python code altogether)
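What spiv describes can be shown in a few lines: an exception raised inside `__del__` has no caller frame to propagate to, so CPython hands it to `sys.unraisablehook` (Python 3.8+), whose default behaviour is simply to note it on stderr, producing the "Exception exceptions.AttributeError ... ignored" lines BradB saw. A small illustration, where the hook override exists only to make the swallowed error observable:

```python
import sys

swallowed = []

# Replace the default hook (which prints "Exception ignored in: ..."
# to stderr) so we can observe what CPython does with the error.
sys.unraisablehook = lambda unraisable: swallowed.append(unraisable.exc_value)

class Noisy:
    def __del__(self):
        # Raised during garbage collection, not from any caller's frame.
        raise AttributeError("boom from __del__")

obj = Noisy()
del obj  # refcount hits zero, __del__ runs, but nothing is raised here

print(type(swallowed[0]).__name__)  # prints: AttributeError
```

Note that `del obj` only drops a reference; `__del__` fires whenever the last reference disappears, which is why the warning can surface far from the code that caused it.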
[02:46] <BradB> yeah...well that's arse then, my tests end in "OK\n" when I run "make check" locally, but I get 6 failures on the server.
[02:47] <spiv> I think there's possibly a separate problem, which is that test_on_merge might not notice failures all the time -- I'm not convinced that checking that the end is "OK\n" is sufficient.
[02:47] <lifeless> one should be checking $?, not string output
[02:47] <spiv> If that combines with e.g. the server running the tests in a different order to you.... (easy to imagine, you have a different FS, yeah?)
[02:47] <spiv> lifeless: Yeah.
[02:47] <BradB> Actually, I can't figure out if http://paste.husk.org/1910 means everything passed, or means something failed.
[02:48] <lifeless> it failed
[02:48] <spiv> BradB: It means something failed.
[02:48] <lifeless> there was a traceback.
[02:48] <spiv> BradB: But it means that test_on_merge.py didn't realise it.
[02:48] <spiv> (Due to aforementioned issues :)
[02:49] <BradB> spiv: ...and yet i still got a failure email.
[02:50] <spiv> BradB: Right.  But I suspect that if the tests had run in a different order, it would have noticed.
[02:50] <BradB> spiv: What's the exact command pqm runs to check if all the tests pass?
[02:51] <spiv> make check, I believe.
[02:51] <BradB> ouch, pain pain pain
[02:55] <BradB> i'm seeing errors in the PT's though on the server, which is more worrying
[02:55] <BradB> where's the log of the failed test run? the email doesn't give me enough info.
[02:55] <spiv> I don't know; lifeless?
[02:56] <spiv> lifeless: Can we get more detailed failure logs from PQM than what it mails to us?
[02:56] <lifeless> define more detailed ?
[02:56] <BradB> lifeless: i wanna see all the failure output
[02:57] <lifeless> ok, thats doable.
[02:57] <BradB> rather than "Last 20 lines of log output" :)
[02:57] <BradB> lifeless: for now, where's the file that i can read to see it?
[02:57] <lifeless> BradB: there isn't one.
[02:57] <lifeless> its mail and toss.
[02:57] <BradB> oh, heh
[02:58] <BradB> my brain's pickled for tonight anyway, but if there's a chance to have that output in the mail for tomorrow, that'd be really good
[02:58] <lifeless> I'll see what I can do.
[02:58] <lifeless> I plan to add a new command
[02:59] <lifeless> so you'll want your command email to have 'debug' on a line on its own.
[02:59] <lifeless> that will trigger the output-from-hell mode.
[02:59] <spiv> BradB: I'll see if I can make test_on_merge a little more sensible.
[02:59] <BradB> lifeless: i think we'd probably always want it
[02:59] <lifeless> spiv: look at the cscvs make check.
[03:00] <BradB> lifeless: it's only one person that'll get it anyway, so if one person gets an email-from-hell every once in a while, it's no big deal (and they have all they need to figure out what failed on the server.)
[03:00] <lifeless> BradB: well, some build processes make a hella lotta output, and kindly do fail at the first error.
[03:01] <BradB> spiv: that'd be nice. :)
[03:01] <spiv> lifeless: In theory, so does this one :)
[03:01] <BradB> I'm not a big fan of no output. "All tests OK." or something would be nice.
[03:02] <lifeless> BradB: mmm, for now, you can change your submit-arch-merge to just have debug in there.
[03:04] <BradB> have it where?
[03:05] <BradB> presumably at the end of the body
[03:06] <lifeless> echo -e debug\nstar..
[03:06] <lifeless> you'll want it before star merge :)
[03:06] <BradB> oh, ok
[03:07] <lifeless> I'll code it up this afternoon, things going well elsewhere.
[03:11] <stub> Any code that relies on __del__ is broken. There is a Canonical Bugzilla bug open on this about SQLObject
[03:11] <stub> (which is broken)
[03:14] <stub> (well... probably not broken, since it doesn't rely on it, but damn annoying)
[03:31] <debonzi> hi stub, is it possible to update launchpad.ubuntu.com? there are some soyuz stuff missing there
[03:34] <stub> Sure. I'm not sure how to restart the server though.
[03:34] <stub> BradB: ping
[03:35] <BradB> yo
[03:36] <debonzi> stub, I must go home now.. its realy late.. if you need something else from us please drop me an email.. thanks
[03:36] <BradB> i can restart it if you're ready...or i could just stop it and let you restart it when you want
[03:36] <stub> How do we restart the dogfood launchpad server atm? Last time you did it, which I suspect means it is running under screen?
[03:36] <BradB> yeah
[03:36] <debonzi> see you guys
[03:36] <BradB> see ya
[03:37] <BradB> stub: how 'bout i just stop it now and you do what you want with it?
[03:38] <stub> Gee... thanks ;)
[03:38] <BradB> heh
[03:39] <stub> So you just did 'make run' ? If so, kill it and I'll take it over.
[03:39] <BradB> launchpad@rosetta:/srv/launchpad.ubuntu.com/launchpad-dogfood $ make run
[03:40] <stub> Blergh.... can't run screen as launchpad... :-P
[03:40] <BradB> no, you run it as your normal user :)
[03:40] <BradB> then sudo, etc.
[03:40] <stub> I was hoping other people could connect until proper start/stop scripts were running :-)
[03:42] <stub> So the test suite brick walls hurt?
[03:42] <BradB> painful. very painful.
[03:43] <BradB> i was completely boggled by that connection bug
[03:43] <BradB> took me a hell of a long time to find out where the heck the stray connection came from
[03:43] <stub> Well.. makes me feel better anyway. I wasted two days on it and was wondering if I was just being real stoopid :-)
[03:43] <BradB> which was in the placelessauth service
[03:44] <BradB> i've only wasted about 3/4 of a day, and have reached the stage of only being really confused now, instead of completely lost :P
[03:44] <stub> mmm.... which the rosetta tests were using in conjunction with initZopeless, which seemed rather dodgy...
[03:44] <stub> I'll owe you a big sloppy kiss if you can crack it
[03:44] <BradB> heh heh
[03:44] <BradB> if i had known that today, it'd have taken an hour to fix, no doubt
[03:45] <BradB> stub: you read z3 checkins?
[03:46] <BradB> the "docs" (i.e. functional doctests) for the PAS are just so...discouraging. it's like 95% http session barf and 5% telling the user how to use the PAS.
[03:46] <stub> I scan them once or twice a week
[03:47] <stub> If it is http session stuff I should have a look - I've been waiting on people to actually use it to see how it goes :-)
[03:48] <BradB> stub: it's horrid. the concept of functional doctests is brilliant, but i'm starting to think of how better to do it. ideas so far: none.
[03:49] <BradB> "better" i.e. make them actual readable documentation.
[03:49] <BradB> at the end of today i had all the tests passing locally (so i could land the malone forcefield), but somehow 6 fail on the server, and i don't get enough error output back in the email to know what the problem is (so hopefully lifeless has time to fix that by tomorrow.)
[03:51] <stub> Mmm... my first suspicion is the launchpad databases. Tests may be making assumptions about the databases, but the db's on chinstrap only get updated if 'make check' bothers to do so. 
[03:52] <BradB> stub: the ftests should get init'd properly no?
[03:52] <stub> Are you back for the meeting in 9 hours time? 
[03:52] <BradB> (i have no idea how or where, but i thought that just somehow worked)
[03:52] <BradB> 9 hours? :)
[03:52] <BradB> er, crap, yeah, it's only 9 hours from now
[03:53] <stub> BradB: In theory, yes. However, until recently tests were doing dodgy things like connecting to launchpad_dev (which the test suite doesn't touch, because it shouldn't be using it).
[03:53] <BradB> they still do
[03:53] <stub> BradB: I think I've got a handle on what you have been chasing, so if you need to skip the meeting go for it.
[03:53] <BradB> i did much watch -n 2 'psql launchpad_dev -c "select * from pg_stat_activity"' today
[03:54] <stub> Mmm... fragile tests. Once this damn harness is debugged it should be fairly simple to fix the remaining tests (my approach was more brute force - I just dropped the launchpad_dev database and looked at what broke ;) )
[03:55] <BradB> i got to: http://paste.husk.org/1912
[03:56] <BradB> in pgsql.py, line 70, setUp
[03:56] <BradB> i just couldn't figure out how to reopen the conn after closing, so everything after that dies
[03:58] <BradB> that's after having first explored the possibility that it was a problem with our custom publisher, or with the base class that bootstraps individual tests, or with the placeless auth, etc. heh
[03:59] <stub> Mmm... I had monkey-patched psycopg.connect to give me connections with a 'reconnect' method, allowing me to close them all, drop and recreate the db, and reconnect them. I didn't finish chasing it through though.
[04:00] <BradB> why the way to say "hey, connect back to the db" wasn't obvious to me (and still isn't), i'm not sure
[04:01] <stub> Hmm... looking at that paste, I suspect we are going to have to get the adaptor and poke into its internals. Urgh :-(
[04:02] <stub> Or close it and delete the utility, drop the db, and recreate it...
[04:03] <BradB> yeah, that's what i was thinking
[04:03] <stub> Anyway... I have db patches for kiko and dogfood rollout stuff to do. I won't touch the test stuff today since it would just conflict with your work.
[04:03] <BradB> indeed. i'll take a look again after the meeting tomorrow.
[04:04] <BradB> steve might have some ideas.
[04:04] <BradB> if he'll be there, that is. maybe it's you admining it then.
[04:04] <stub> I suspect I am, which will be amusing ;)
[04:04] <BradB> heh
[04:05] <spiv> Hmm.  That's not a good sign.
[04:06] <spiv> Trying to merge a change to test_on_merge to make it check exit values as well as stderr fails.
[04:06] <lifeless> hur hur hur
[04:06] <spiv> And the last line of stderr is indeed "OK"
[04:06] <lifeless> sounds like it's not quite right :)
[04:06] <spiv> Although FAILED (failures=1, errors=1)
[04:07] <spiv> occurs higher up.
[04:07] <lifeless> so, whoever looked at stdout ?
[04:07] <lifeless> actually, I don't want to know
[04:09] <spiv> BradB|away: And from what I can see in the limited output from PQM, batching does indeed fail.
[04:10] <lifeless> spiv: have you looked at the cscvs make check ?
[04:10] <lifeless> I've patched unittest to stop on the first error.
[04:10] <lifeless> rather than blindly careening off into never never land.
[04:10] <stub> lifeless: did you follow the dogfood rollout thread? I was going to create a configs directory in launchpad--devel--0 for build-config files so we can just do 'tla get launchpad--whatever--0 whatever; cd whatever; tla build-config configs/whatever'. Sane or stoopid?
[04:10] <lifeless> stub: we have 'dists' for that.
[04:11] <lifeless> what would be different ?
[04:11] <stub> lifeless: The difference would be that the rollout build-config files would be version controlled with the codebase.
[04:12] <lifeless> I'm confused, I don't see what you mean.
[04:12] <stub> (well... at least all the stuff in the launchpad package - pyarch and a few others are separate).
[04:13] <lifeless> pyarch, cscvs, zope, sqlobject, sqlos, psycopgda
[04:13] <lifeless> buildbot
[04:13] <lifeless> twisted
[04:13] <stub> I could just do 'tla get launchpad--dogfood--0.6' and I would have the build-config file that listed all the revisions of the 3rd party tools that were rolled out with it.
[04:13] <lifeless> python
[04:13] <lifeless> stub: so, what is different to 'tla get dists--devel--0', 'tla build-config configs/canonical/launchpad/dogfood--0.6' ?
[04:14] <lifeless> have you looked at the production-* configs in 'dists' ?
[04:15] <stub> lifeless: That would work, although it doesn't lend itself to automation. If instead of using versions we just used 'launchpad--dogfood--0', we could do an automatic update of the dogfood server (a cron job just does 'tla get launchpad--dogfood--0; tla build-config' and it all magically works...)
[04:15] <lifeless> right, so you just do that with configs/canonical/launchpad/dogfood
[04:15] <dilys> New bug 2155 for Launchpad/Launchpad: test_on_merge.py doesn't check exit status, and thus ignores failures.
[04:15] <dilys> https://bugzilla.warthogs.hbd.com/bugzilla/show_bug.cgi?id=2155
[04:16] <lifeless> it's as automatable.
[04:17] <lifeless> I have several production configs, so that I can roll back trivially, but if you just have one, that will work just as well.
[04:17] <stub> ok - I see that now. We just have to list the explicit revision of launchpad--dogfood to allow us to easily retrieve historical revisions.
[04:17] <lifeless> in fact, its more automatable.
[04:17] <spiv> lifeless: The problem isn't unittest.  The problem is the script that runs the test script fails to interpret the results of that script correctly (i.e. it ignores the exit status)
[04:17] <lifeless> spiv: I was meaning for reducing the output to sane lengths :)
[04:18] <spiv> lifeless: Oh, right.  I'm not particularly concerned by long output :)
[04:18] <lifeless> stub: its more automatable because a dogfood config can hold you at a specific level in the dogfood branch, if something is wrong with newer revisions.
[04:19] <lifeless> so your release process would be: get the code ready, then generate the config to a dists tree that you merge with rocketfuel. 
[04:19] <lifeless> done.
[04:19] <lifeless> tla logs -rf | head -n 1 gives you a tree revision.
[04:22] <lifeless> how does that sound to you ?
[04:22] <lifeless> tla inventory --nested -t gets you the trees
[04:23] <BradB|away> lifeless: we don't want it to stop on the first failure
[04:23] <stub> Sounds like a winner. I think the '--dir' option to build-config solves controlling where the checkout goes which is also good.
[04:23] <lifeless> tla inventory --nested -t | xargs bash -c "tla logs -d $0 -rf | head -n1" or similar gives you the config
[04:23] <lifeless> stub: erm, no.
[04:23] <lifeless> the -d says where to go to get the configs
[04:23] <lifeless> the config says how its checked out under there.
[04:23] <BradB|away> stub: failing UT's are allowed to sneak by the checker, it seems
[04:23] <lifeless> so you just build the config to match your checkout desires.
[04:24] <stub> BradB|away: Sod
[04:24] <lifeless> BradB|away: K.
[04:24] <BradB|away> stub: just a guess, but it's because a failing UT doesn't affect the fact the last line can still be "OK\n"
[04:24] <BradB|away> since the FT's run last
[04:25] <BradB|away> AFAIK
[04:25] <lifeless> BradB|away: it's simpler: checking stdout is not a reliable way of interrogating the test run.
[04:26] <BradB|away> indeed
[04:26] <stub> I guess so. Problem is we can't really fix it until the tests all pass ;-) 
[04:26] <BradB|away> hehehe
[04:26] <stub> I think Steve designed makeoncheck to just work with page-tests
[04:27] <lifeless> I could just force the merge in.
[04:27] <lifeless> of course, you couldn't commit anything else until the tests pass.
[04:27] <stub> Bad lifeless - bad!
[04:27] <lifeless> :)
[04:27] <lifeless> technically, I'd be fixing a bug.
[04:28] <stub> I'll change my 'enforce explicit FROM clauses' warning to an error at the same time :-)
[04:32] <spiv> BradB|away, stub: see bug 2155 :)
[05:31] <BradB|away> stub: did steve say you're officially <management_term>"chairing"</management_term> the meeting tomorrow then?
[05:32] <BradB|away> if so, and i don't need to be there, maybe i'll wake up at a more reasonable time, like 8:30, instead of 6:30, heh
[05:35] <sabdfl> hi all
[05:35] <sabdfl> what's news at the launchpad?
[05:39] <stub> meeting tonight, dogfood rollout today, people working on the more fragile aspects of the test suites
[05:40] <stub> production schema update was run yesterday, as lifeless did a production code rollout.
[05:41] <stub> gina seems to need some lovin' to get it to import the Hoary archive
[05:42] <sabdfl> stub: thanks
[05:43] <sabdfl> i'm in back to back meetings all day through to post-dinner, so will try to get an update from you again in your morning
[05:43] <stub> no worries :-)
[05:44] <lifeless> sabdfl: gnu arch issue resolved, in flying colours.
[05:45] <sabdfl> lifeless: good job
[05:45] <lifeless> I've emailed you the outcome, can you please ok that for elmo to act on ?
[05:58] <sabdfl> lifeless: i'm way behind on email, just get a couple of minutes each day this week, please fwd this irc comment to elmo
[05:58] <sabdfl> if he needs the nudge
[05:59] <stub> sabdfl: Now that you have delved into the Z3 form machinery stuff, do you think we need to do work to improve Z3 in this area? The December conference could be an opportunity to hold a Z3 sprint if Garrett Smith is interested in coming (we would need to fund him). 
[05:59] <lifeless> sabdfl: ok.
[05:59] <sabdfl> elmo: please get the arch community servers up for jblack asap, we discussed it in october and i never heard anything to suggest they would not be ready for the termination of jblack's bandwidth, we need to get them up and running at once
[06:00] <sabdfl> stub: yes, some small tweaks
[06:00] <sabdfl> but i may not have unlocked the full power of z3
[06:01] <sabdfl> for example, i noticed you created an IMaloneBugAddForm schema
[06:01] <sabdfl> so you could do more than just create a bug
[06:02] <sabdfl> that seemed unnecessary to me but i couldn't easily find a way around it
[06:03] <stub> Mmm.... that one doesn't particularly worry me. The form machinery simply generates forms based on a specification. In most cases, that specification already exists in the form of an existing interface so we get used to it being magic ;)
[06:05] <stub> I'm more concerned with the dead-chicken functions I had to add to the containers to get the addforms working - that seemed to be simply because they were designed to add persistent objects as children of ZODB folders. It works the way we have done it, but seems to be a hack.
[06:08] <stub> Should I approach Garrett (who didn't write the form machinery, but has been the person most actively involved in championing and refactoring the implementation over the last 12 months) to see if he is interested in coming to Barcelona?
[08:28] <dilys> Merge to rocketfuel@canonical.com/launchpad--devel--0: soyuz views (patch-725)
[10:00] <stub> lifeless: ping
[11:31] <Kinnison> Morning
[11:32] <Kinnison> Meeting in 1h30m yes?
[11:33] <debonzi> Kinnison, yep
[11:33] <Kinnison> morning debonzi 
[11:34] <Kinnison> debonzi: how does the db on mawson look? gina finished an import yesterday
[11:34] <debonzi> Kinnison, morning
[11:34] <Kinnison> there were some failures so I'm investigating those
[11:34] <debonzi> Kinnison, it seems to be very nice, but the soyuz code still needs to be updated..
[11:35] <debonzi> stub, did the soyuz update fail?
[11:35] <debonzi> stub, better, the launchpad.ubuntu.com update
[11:39] <Kinnison> debonzi: I see. Well; I'm glad the data looks okay
[11:40] <debonzi> Kinnison, me too :-)
[11:43] <cprov> Kinnison: may I run Nicole over it ?
[11:43] <Kinnison> cprov: as far as I'm concerned; yes. I certainly have no intention to start deleting data :-)
[11:52] <cprov> Kinnison: ok in some minutes 
[11:59] <Kinnison> You can see the output of the latest top-up run of gina in /tmp
[11:59] <Kinnison> I'm going through trying to work out why some packages fail to import properly
[12:00] <cprov> Kinnison: should I backup the the lp_dogfood DB before start ?
[12:00] <Kinnison> cprov: probably worth it; in case nicole breaks horribly
[12:10] <dilys> Merge to rocketfuel@canonical.com/launchpad--devel--0: general templates fix; related with bug #2089. Authorization required for +edit soyuz pages (patch-726)
[12:14] <stub> debonzi: the launchpad.ubuntu code drop hasn't been done yet. 
[12:14] <debonzi> stub, right..
[12:14] <cprov> Kinnison: do you know why restoring a dump from lp_dogfood fails (psql soyuz_tmp -f lp_df-20041103.sql) ?
[12:14] <stub> Kinnison: Do you need launchpad_dogfood for this run? Or can I upgrade now?
[12:14] <Kinnison> stub: I'm running gina out of ~; I only need the db
[12:15] <Kinnison> cprov: Nup; sorry
[12:15] <Kinnison> stub: I can cancel the run if you want; I'll have enough data by now to carry on fiddling
[12:15] <stub> keep it going for the time being
[12:16] <cprov> Kinnison: does it work for you? how did you copy the DB?
[12:16] <Kinnison> cprov: copy the db?
[12:18] <cprov> Kinnison: I want to run Nicole in a copy of launchpad_dogfood 
[12:18] <Kinnison> I see
[12:20] <cprov> Kinnison: so, any suggestion ?
[12:21] <Kinnison> I'd probably do something like: createdb -E UNICODE -T template0 <foo>; lp_dump launchpad_dogfood | psql <foo>
[12:30] <cprov> Kinnison: yes :) "createdb -E UNICODE -T launchpad_dogfood soyuz_df" should work for me, but the DB is being accessed by other users
[12:31] <Kinnison> cprov: which is why I suggested my way
[12:35] <stub> urgh.... the hourly dump is trying to create the functions used as constraints *after* the tables :-P
[12:35] <cprov> Kinnison: psql <foo> -f <dump> always fails 
[12:36] <Kinnison> cprov: Hmm
[12:37] <Kinnison> cprov: Not sure what to suggest. stub is our database stud :-)
[12:42] <stub> ok - disabling triggers works. I'll update the latest hourly dump.
[12:42] <carlos> how is going the bugzilla -> malone move?
[12:53] <cprov> stub: I can't use the hourly dump to generate my own (tmp) lp_dogfood, I got "invalid command \N", does it mean something to you?
[12:54] <stub> Yes - I'm looking at that now. Creation of the tables is failing, as the dump has decided to create functions the tables need *after* the tables.
[12:55] <stub> If you create a fresh db and run database/schema/trusted.sql in it, the restore should work
[01:00] <cprov> stub: psql:trusted.sql:13: ERROR:  language "plpythonu" does not exist
[01:01] <stub> createlang plpythonu mydbname
[01:02] <stub> I hope it isn't a postgresql upgrade problem - the dumps are in the correct order on my local box ;-/
[01:02] <stub> Anyway....
[01:02] <stub> who is here!
[01:03] <debonzi> stub, yes
[01:03] <stub> I believe I am acting Steve for this meeting unless anyone else feels like volunteering ;)
[01:04] <stub> I'm also acting BradB who won't be joining us
[01:04] <stub> daf: ping
[01:05] <carlos> stub: do the USA people change the hour like we did this weekend?
[01:05] <cprov> stub: the restore seems to be working 
[01:07] <Kinnison> carlos: IIRC USA goes forward one week later; comes back the same time
[01:07] <stub> Mmm... I'm at UTC+11 now btw.
[01:07] <Kinnison> so it's 11pm for you? yowch
[01:08] <Kinnison> better than UTC-11 I guess :-)
[01:09] <stub> carlos: Are you able to give a short state-of-rosetta?
[01:09] <carlos> stub: I could try, but I only know the status of my work
[01:10] <carlos> daf: told me that he is going to attend the meeting
[01:11] <stub> ok. We can wait to see if daf comes online in a tick.
[01:11] <carlos> but I think that perhaps he will be late 
[01:11] <carlos> ok
[01:11] <stub> As for Malone, work has been progressing on dogfood-for-launchpad, in particular email notifications. Brad has also been beating his head against test harnesses so he can land a full suite of page tests.
[01:11] <stub> I'm sure people who have been playing with it have noticed usability issues - please submit them as bugs in the dogfood malone, or in Bugzilla if that fails.
[01:12] <stub> BradB wants to get the Ubuntu people on board with it as soon as possible, but I feel a little more conservative and want to concentrate on the launchpad group. Maybe we can do both.
[01:13] <Kinnison> can we make the default https://launchpad.ubuntu.com/ use the port which gives tracebacks to the browser?
[01:13] <carlos> stub: are we able to use malone for launchpad bugs then?
[01:13] <stub> Kinnison: That would be a great idea
[01:13] <spiv> carlos: I filed 3 bugs against malone in malone yesterday :)
[01:13] <debonzi> Kinnison, I agree with you
[01:14] <spiv> Does the dogfood malone do emails yet?
[01:14] <stub> Kinnison: I'll try and do that when I drop a fresh snapshot in, which I'll do after this meeting if enough Gina data has been collected or people don't want to wait another 9 hours
[01:15] <Kinnison> stub: feh; anyone would think you thought gina was slow :-P
[01:16] <carlos> spiv: but we don't have (yet) the old bugs imported, so i cannot use it yet 
[01:16] <stub> spiv: The code is all there for notifications, but no incoming email. It is currently switched off though - I will want to clear it with Brad before switching it on, as a screwup could spam Mark (default owner of a lot of things) or elmo (as the postmaster)
[01:17] <stub> Kinnison: I was talking about when I wake up tomorrow. Hopefully gina will be finished by then ;)
[01:17] <Kinnison> stub: oh righty. I'm happy for you to upgrade the software; no question
[01:17] <spiv> stub: Fair enough.  The reason why I ask is that if launchpad devs are going to have to deal with two separate bug systems at once, I fear the dogfood one will be forgotten unless it sends emails.
[01:18] <Kinnison> stub: if you have to reset the db; gina will have to run again; which might suck
[01:18] <stub> spiv: Good point.
[01:18] <spiv> stub: But importing bugzilla bugs into malone would fix that as well, I guess.
[01:19] <stub> At the moment, I don't think we have any import code. I need to talk to elmo about switching off basicauth on the Canonical bugzilla (from Mawson's IP) so we can watch bugs though.
[01:19] <stub> If it is a problem, we can talk to kiko or justdave about a quick'n'dirty import script.
[01:20] <kiko> talk to me
[01:20] <kiko> what's up? 
[01:28] <stub> And then kiko just walked out of the room!
[01:28] !Md:*! this was the scheduled outage...
[01:28] <stub> ok... I think we have everyone back.
[01:29] <stub> sabdfl: Greetings. Just in time for the next netsplit
[01:29] <sabdfl> :-)
[01:29] <sabdfl> launch here in a few minutes, so wont likely be able to enjoy the fun
[01:29] <Kinnison> sabdfl: good luck
[01:30] <stub> anyone need mark luvin' while he is here or shall we move onto soyuz?
[01:31] <kiko_> sorry, something happened :)
[01:33] <stub> kiko_: Just remember that IRC was designed by a pack of drug induced hippies over a decade ago.
[01:33] <kiko_> what wasn't?
[01:33] <Kinnison> and has been perpetuated by weenies ever since
[01:33] <kiko_> you could say the same of tcp/ip I suspect
[01:33] <kiko_> and BSD
[01:33] <Kinnison> ban it all I say
[01:34] <Kinnison> let's all use appletalk
[01:34] <kiko_> !
[01:34] <cprov> stub: I can only see the CSV-style import from bugzilla, and it only imports a query, not a bug
[01:34] <stub> Kinnison: So what is the status of Gina - is it now importing hoary on mawson, or is that what is currently being worked on?
[01:34] <Kinnison> stub: gina has imported a nearly complete hoary archive
[01:35] <Kinnison> stub: I'm working on the few corner packages left making sure that gina is doing the right thing
[01:35] <stub> So is the end in sight, and are there any blockages?
[01:35] <Kinnison> stub: then I'll get the last twiddles done for the corner-case of package versions moving between distroreleases which I've spotted
[01:36] <Kinnison> Once that corner-case has been solved; we should be able to repopulate the sourcepackagerelease tables on demand via gina
[01:36] <Kinnison> No major blockages that I can see
[01:36] <Kinnison> Just gotta keep slogging
[01:36] <stub> excellent.
[01:37] <Kinnison> the dogfood db contains a workable-with set of data and I believe cprov is going to get nicole going on it today
[01:38] <cprov> Nicole works perfectly on soyuz_df, at 50/1033 packages of Hoary/Main. should I stop and run it on lp_dogfood?
[01:38] <stub> cprov: And is nicole ready to chew on the data?
[01:39] <cprov> stub: yes, I think 
[01:40] <stub> ok. So the soyuz dogfood environment will have lots and lots of stuff to play with
[01:41] <stub> kiko_, debonzi, cprov: Anything you need to tell us about soyuz?
[01:41] <kiko_> debonzi and I need to work on perf
[01:41] <kiko_> there are a lot of database queries being spawned on simple pages
[01:42] <kiko_> salgado and debonzi are going to work together to create one or two more views for the binary package side of things
[01:42] <kiko_> cprov is busy fixing up the personal view of packages (reproducing QA functionality)
[01:43] <kiko_> our main open task is reproducing the packages.qa data in the package pages
[01:43] <kiko_> and cleaning up the house which is quite messy atm
[01:43] <stub> This is all minimum functionality stuff for the dogfood environment?
[01:44] <kiko_> packages.qa pretty much is; though we *could* do without it, it wouldn't be functionally equivalent to the read-only pages Debian provides, and thus..
[01:45] <Kinnison> the packages.qa page is a very very valuable information page for debian developers. Having it in soyuz is a must IMO
[01:46] <kiko_> Kinnison, no objections to not following the layout of the original page if the equivalent is acceptable?
[01:47] <carlos> kiko_: I suppose if we have the same information, that's enough
[01:47] <carlos> :-)
[01:47] <stub> What is the current coverage of unit, functional and page tests?
[01:47] <kiko_> whee
[01:47] <Kinnison> kiko_: It's the data that's most important. We can always tweak layout at a later date
[01:47] <kiko_> yeah.
[01:51] <kiko_> are we still waiting on my input?
[01:51] <stub> I was wondering how you rate the soyuz test coverage
[01:51] <kiko_> oh!
[01:51] <kiko_> sorry, I didn't see that as a soyuz-specific question
[01:52] <stub> ;)
[01:52] <kiko_> "not good" would likely be my answer
[01:52] <stub> Sounds familiar.
[01:52] <kiko_> we've made practically no progress there, mainly because the pages are still not spec'ed enough
[01:53] <kiko_> and there are still rather experimental infrastructure changes going in
[01:53] <stub> I think after bootstrapping, we all need to get test coverage done. I think it is fair to leave this until things have solidified, but I suspect that is 'soon' in the case of soyuz and 'now' in the case of malone and rosetta.
[01:53] <lifeless> stub: pong
[01:53] <stub> lifeless: tag -S rocketfuel@canonical.com/launchpad--devel--0 rocketfuel@canonical.com/launchpad--dogfood--0
[01:54] <lifeless> you'd like me to do that?
[01:54] <kiko_> to be honest there's still a few domain patterns in the backend that we haven't quite understood yet
[01:54] <stub> lifeless: Yes please. pqm didn't like me telling it to, but didn't tell me why :-)
[01:54] <kiko_> for instance using views is.. interesting
[01:55] <kiko_> we're going to end up with some inheritance between views and the "real" class
[01:55] <kiko_> I haven't envisioned how it's turning out yet but this is all this week's material
[01:55] <carlos> kiko_: how does sqlobject handle views?
[01:56] <spiv> carlos: You can just treat them like tables.
[01:56] <kiko_> yes.
[01:56] <spiv> (so long as you don't try to update their values...)
[01:56] <carlos> ok
[01:56] <kiko_> you can actually insert into a view AIUI
[01:56] <kiko_> though I've never tried that
[01:57] <kiko_> stub, do we have any other views in the codebase?
[01:57] <cprov> spiv: ping
[01:57] <kiko_> also, how concerned are we that the views are actually quite separate from where they are used?
[01:57] <stub> I think we should avoid views where easy to do so, as there will be issues. For instance, setting attributes on a real class will not cause the view-classes to be updated because the data has not yet been committed to the database (and if it was, sqlobject wouldn't know)
[01:57] <stub> kiko_: There are no other views at the moment, but Kinnison is going to land more soon.
[01:57] <kiko_> I see
[01:57] <spiv> cprov: pong.
[01:58] <spiv> kiko_: Inserting into views scares me :)
[01:58] <kiko_> the main reason for requiring views is that it's not possible to get joined data when you are getting large lists of things
[01:58] <kiko_> package listings are the trivial example since stuff there is spread all over
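The view-as-table behaviour spiv and stub describe can be sketched at the SQL level with stdlib sqlite3 standing in for PostgreSQL/SQLObject (table and view names here are made up): reads against a view work like reads against any table, while writes are rejected.

```python
import sqlite3

# In-memory database with a base table and a view that denormalizes it,
# the way a package-listing view would pull joined data into one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE package (id INTEGER PRIMARY KEY, name TEXT, version TEXT)")
conn.execute("INSERT INTO package VALUES (1, 'hello', '2.1.1')")
conn.execute("CREATE VIEW vpackage AS SELECT id, name || '-' || version AS label FROM package")

# Reading the view works exactly like reading a table.
rows = conn.execute("SELECT label FROM vpackage").fetchall()
print(rows)  # [('hello-2.1.1',)]

# Writing through the view fails: SQLite views are read-only.
try:
    conn.execute("UPDATE vpackage SET label = 'x' WHERE id = 1")
except sqlite3.OperationalError as e:
    print("update rejected:", e)
```

Note that SQLite rejects all writes through views; the insert-into-a-view trick kiko mentions needs something like PostgreSQL's rules or triggers on top of the view.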
[01:59] <cprov> spiv: are you the DOAP pages king ? I have a request 
[01:59] <spiv> cprov: I am.
[01:59] <lifeless> stub: done.
[01:59] <stub> lifeless: ta :-)
[01:59] <lifeless> you'll need to be able to merge to that branch too right ?
[02:00] <spiv> kiko_: I wonder if for relatively simple cases if BradB's SQLMethod idea might be a better option than building views?
[02:00] <stub> lifeless: Yes. I'd assumed I could tell pqm to star-merge. I suspect we want most people to be able to once I've written some procedure guides.
[02:00] <Kinnison> stub: Is there some way to tell sqlobject to flush its cache for a given class?
[02:00] <kiko_> it could be
[02:00] <kiko_> but it's vapor
[02:00] <lifeless> what is the config that it should use to checkout, when doing a make check for dogfood ?
[02:01] <cprov> spiv: the product-index has a hardcoded link to warty/src/${package.name} ... it's broken for dogfood, since we are importing from hoary
[02:01] <lifeless> for instance, for l--devel--0 its:
[02:01] <lifeless> build_config=rocketfuel@canonical.com/dists--devel--0/configs/canonical.com/launchpad/development
[02:01] <spiv> cprov: Ah, right.  Thanks.
[02:01] <stub> Kinnison: I have no idea.
[02:01] <stub> build_config=rocketfuel@canonical.com/dists--devel--0/configs/canonical.com/launchpad/dogfood
[02:02] <spiv> Kinnison: You can expire an individual instance from SQLObject's cache (with the .expire method).
[02:02] <Kinnison> spiv: hmm
[02:02] <lifeless> stub: 
[02:02] <lifeless> [rocketfuel@canonical.com/launchpad--dogfood--0] 
[02:02] <lifeless> precommit_hook=make check
[02:02] <lifeless> build_config=rocketfuel@canonical.com/dists--devel--0/configs/canonical.com/launchpad/dogfood
[02:02] <lifeless> done
[02:02] <spiv> So as long as the views have the ids of the objects that need expiring...
[02:03] <cprov> spiv: btw, it's broken for dogfood but not in our dev code; I don't know if you can just change it to hoary there and work in the dev code to figure out which distro from SourcePackagePublishing
[02:04] <Kinnison> spiv/stub: what about setting _cacheValues to False in the view classes?
[02:04] <stub> I suspect we can work around issues we hit with views - identifying all of them might take a while though, so thanks to kiko_ for volunteering :-)
[02:04] <spiv> Kinnison: That doesn't solve the full issue, though.
[02:05] <kiko_> I volunteered to buy two DVDs too, where is that plaque with my name on it?
[02:05] <Kinnison> spiv: any other issues are to do with insufficiently-timely transaction commits; surely?
[02:05] <spiv> Kinnison: e.g. "foo = get_object_from_view(); bar = get_real_object(); bar.x = 1"... foo.x would then be out-of-date.
[02:06] <Kinnison> _cacheValues:
[02:06] <Kinnison>     If set to False then values for attributes from the database won't be cached. So every time you access an attribute in the object the database will be queried for a value.
[02:06] <spiv> Oh, hmm.  Yeah, it would work, I guess.  It might kill performance, though :)
[02:06] <Kinnison> I'm not sure if that'll ruin the benefits of using the view though :-(
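The staleness spiv describes can be mimicked with a toy value cache (hypothetical classes, not SQLObject's real implementation): two objects mapping the same row each cache column values, so a write through one leaves the other out of date until it is expired.

```python
class FakeDB:
    """Stands in for the database: one row, stored as a dict."""
    def __init__(self):
        self.row = {"x": 0}

class Record:
    """Toy row object with a per-instance value cache, like SQLObject's."""
    def __init__(self, db, cache_values=True):
        self._db = db
        self._cache_values = cache_values  # analogous to _cacheValues
        self._cache = {}

    def get(self, col):
        if self._cache_values and col in self._cache:
            return self._cache[col]        # served from cache, may be stale
        value = self._db.row[col]          # "query" the database
        if self._cache_values:
            self._cache[col] = value
        return value

    def set(self, col, value):
        self._db.row[col] = value          # write through to the database
        self._cache[col] = value

    def expire(self):
        self._cache.clear()                # like SQLObject's .expire()

db = FakeDB()
foo = Record(db)     # e.g. the object fetched via a view
bar = Record(db)     # the "real" object for the same row
foo.get("x")         # warms foo's cache with 0
bar.set("x", 1)      # update through bar
print(foo.get("x"))  # 0 -- stale, as in spiv's foo.x example
foo.expire()
print(foo.get("x"))  # 1 -- fresh read after expiry
```

Setting `cache_values=False` here would make every `get` hit the database, which is exactly the performance trade-off Kinnison and spiv are weighing.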
[02:08] <stub> ok. I think we should not worry about solving these problems unless we need to - my example might not be a problem, especially if people are aware and can work around it.
[02:08] <stub> But we are running overtime.
[02:08] <kiko> we are also working overtime ;)
[02:08] <stub> spiv: How are things in your world? foaf & doap at the moment?
[02:10] <spiv> It's been pretty quiet on the doap and foaf front, seeing as they don't have much impact on the dogfooding, and I've been working on shipit/authserver issues.
[02:10] <stub> Any problems with that, apart from the no-downtime issue? Are the shipit passwords compatible, for instance?
[02:13] <spiv> I'm pretty sure they're plaintext in the shipit db :)
[02:13] <stub> wonderful :-)
[02:14] <stub> So authserver is in production. Is the librarian in production now, or just dogfood?
[02:14] <spiv> Just dogfood afaik.
[02:14] <spiv> Kinnison/
[02:14] <spiv> er, ?
[02:15] <Kinnison> spiv: ?
[02:15] <stub> I'll take that as a no then ;-)
[02:16] <spiv> Kinnison: You've been the one actually *using* librarian :)
[02:16] <spiv> Yeah, I think that's a no :)
[02:16] <stub> daf: ping
[02:16] <daf> stub: pong
[02:16] <daf> bah!
[02:16] <Kinnison> librarian is on dogfood
[02:16] <Kinnison> I've not done anything with it wrt. production
[02:16] <Kinnison> daf!
[02:16] <daf> alarm went off this time, but I failed to stay awake :(
[02:17] <carlos> daf: welcome back!
[02:17] <stub> daf: a quick sentence on rosetta? We are 17mins overtime atm.
[02:18] <daf> summary: things have been a little quiet on the Rosetta front
[02:18] <daf> we've mostly been working on tidying things up rather than on new features
[02:19] <daf> we need to decide what we're doing for our beta release and get going on it
[02:19] <daf> we recently did a triage of all our bugs, which I think was useful
[02:19] <Kinnison> cprov/kiko/debonzi: Good news; gina looks like she's running as cleanly as the archive will let her :-)
[02:19] <daf> carlos: anything to add to that?
[02:19] <kiko> w00t!
[02:20] <Kinnison> we'll see what she does when we get the next archive drop from elmo later today
[02:20] <carlos> daf: well, just that we will need a way to execute a script to update the pot/po files in launchpad every time a new package is updated from gina
[02:20] <kiko> let's hope
[02:20] <daf> carlos: ooh, interesting
[02:21] <carlos> I have the script partially finished
[02:21] <carlos> well, an initial version
[02:22] <carlos> so we should start thinking about it soon
[02:22] <stub> daf: How do you rate the rosetta test coverage?
[02:23] <kiko> here it comes
[02:23] <debonzi> Kinnison, great
[02:24] <kiko> stub, everybody goes silent on that question. is that a hint?
[02:24] <daf> stub: I think our unit tests for the browser module are pretty good, although not complete
[02:24] <daf> stub: I don't think our page test coverage is as good
[02:24] <cprov> Kinnison: nice, you can also see results from nicole in Projects now
[02:25] <Kinnison> cprov: excellent
[02:26] <stub> ok. I think a good approach for Rosetta now will be to do the tests in tandem with code. I think the project is at the stage now where that will save time rather than waste it.
[02:26] <daf> you mean write tests as we write code?
[02:27] <kiko> I believe so, daf
[02:27] <stub> If you write or update a screen, make a page test for it. If you fix a bug, write a unit/functional test for it. I generally write the tests during or straight after a chunk of code.
[02:27] <stub> Dogma says write them before, but I don't seem to be able to work that way.
[02:28] <kiko> I don't either
[02:28] <stub> Also if your brain needs a break, scan over your code looking for any methods without doctests and add them.
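stub's suggestion, adding doctests to methods that lack them, looks like this minimal sketch (the function itself is a made-up example, not Launchpad code):

```python
import doctest

def plural_index(n):
    """Pick the gettext plural-form index for English: 0 for one, 1 otherwise.

    >>> plural_index(1)
    0
    >>> plural_index(2)
    1
    """
    return 0 if n == 1 else 1

# Find and run just this function's docstring examples; a failing example
# prints the expected-vs-got diff and is counted in runner.failures.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(plural_index, "plural_index"):
    runner.run(test)
print(runner.tries, "examples,", runner.failures, "failures")
```

The examples double as documentation, which is why they are a cheap thing to add when your brain needs a break from real code.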
[02:29] <daf> before doesn't really work for page tests
[02:29] <stub> Malone is reaching that stage too, so I'll need to be taking my own advice :-)
[02:29] <daf> but yes, that approach sounds good to me
[02:29] <spiv> Doing unit tests before works for me, for some code.  But I can't manage it all the time :)
[02:29] <carlos> I could try to do it...
[02:30] <stub> Ideally we want at least 1 page test for every screen, so at the very least we know if they fail to render.
[02:30] <daf> carlos: well, writing code and tests together is something we should all do
[02:31] <lifeless> sounds like someone needs the lifeless treatment :).
[02:31] <stub> We should also be doing walkthroughs or stories, so the form handling and workflow gets tested.
[02:31] <lifeless> night all
[02:31] <carlos> daf: I know, but I need to get used to that; this is the first project where I'm doing it
[02:31] <daf> carlos: ok
[02:32] <daf> carlos: you're not the only one :)
[02:32] <carlos> :-)
[02:33] <stub> From my experience, tests are fantastic but fail abysmally if you don't keep the discipline - if new code goes in that isn't being tested, it tends to spread like cancer.
[02:34] <stub> Also, writing tests for existing code is a pita, as tests are easier if the code was designed to be easily tested.
[02:34] <daf> I understand we're working on fixing test_on_merge.py
[02:34] <stub> Yes - there is a bugzilla bug on it. I'll sort it if nobody beats me to it.
[02:35] <daf> good
[02:35] <spiv> daf: #2155
[02:35] <daf> it would be great if we couldn't commit with broken tests
[02:35] <spiv> Unless there's another issue I'm not aware of ;)
[02:35] <daf> spiv: ta
[02:35] <stub> Also, Brad has been sorting out some test harness issues (postgresql junk) trying to ensure tests all run in isolation. The existing harnesses fail in some situations, but that is being worked on by Brad and me.
[02:37] <stub> So have I missed anyone, or are there other issues that need to be raised now? Nobody blocked on anyone else?
[02:39] <kiko> no, soyuz is doing okay this week
[02:39] <carlos> stub: we are soft blocked by htc
[02:39] <kiko> hct?
[02:39] <carlos> was that the name?
[02:39] <carlos> I don't remember it :-P
[02:40] <carlos> Scott's work
[02:40] <stub> neither acronym makes any sense to me ;)
[02:40] <carlos> the one that fills the Manifest tables
[02:40] <stub> Scott is aware of this?
[02:40] <carlos> yes
[02:40] <carlos> I think so
[02:41] <carlos> I sent an email last week
[02:41] <stub> ok. Can you confirm, or should I?
[02:41] <carlos> I could confirm, don't worry
[02:41] <stub> ok.
[02:42] <stub> So I need to:
[02:42] <stub> launchpad code drop
[02:42] <stub> turn on malone dogfood emails (if Brad agrees)
[02:42] <stub> launchpad dogfood should use debug skin by default
[02:43] <stub> Anyone else volunteer to handle bug 2155?
[02:43] <spiv> I can do that.
[02:44] <daf> ah, you already have a patch :)
[02:44] <stub> Ta. I have no problems with tests being turned off if necessary provided bugs are filed. 
[02:45] <daf> so the plan is to disable these two tests, apply the patch, then fix the tests?
[02:45] <spiv> daf: Yep.
[02:45] <stub> I've found at least one test_suite() method returning None :-)
[02:45] <daf> spiv: sounds good
[02:45] <stub> daf: Yup. Most important is to fix test_on_merge.py so new broken tests don't creep in. We are lucky there are only two (that I can see)
[02:45] <daf> stub: interesting :)
[02:46] <daf> stub: what happens to the test results in that case?
[02:46] <stub> daf: No tests are run for that module
[02:46] <spiv> (Aside: I wish we didn't need the test_suite dead chicken... Twisted's test runner is much better in that respect)
[02:46] <stub> daf: It is a quick-n-dirty way of switching tests off (but liable to accidentally being committed...)
[02:47] <stub> spiv: Feel free to fix the test runner ;)
[02:47] <spiv> :)
[02:47] <daf> spiv: what's the alternative?
[02:47] <stub> Anyway - I suspect the meeting finished a while ago
[02:47] <spiv> stub: Yeah, I think so.
[02:48] <carlos> ok, I'm going to have lunch now then
[02:48] <carlos> stub: thanks 
[02:48] <stub> Have fun :-)
[02:48] <carlos> later
[02:49] <spiv> daf: Twisted's approach (iirc) is that any module matching *.tests.test_* is searched for subclasses of TestCase.  All TestCases are run.
[02:49] <daf> spiv: sounds good
[02:49] <daf> spiv: what about doctests?
[02:49] <stub> debonzi: Can the launchpad code drop wait another 10 hours? I wanna go to bed, and will do a better job tomorrow :-)
[02:49] <spiv> It doesn't have support for those yet :(
[02:50] <debonzi> stub, sure.. no problem
[02:50] <debonzi> stub, good night :)
[02:50] <daf> spiv: I'm _all_ for a better test runner
[02:50] <spiv> stub: G'night :)
[02:51] <daf> night stub!
[02:51] <daf> spiv: if we can work out how to deal with doctests, I'd love to go with the approach you describe
[02:51] <spiv> daf: "Feel free to fix the test runner ;)"
[02:51] <daf> :)
[02:51] <daf> perhaps it could check for a test_suite and fall back to TestCase subclasses?
[02:52] <spiv> Sure.
[02:52] <stub> I believe Marius has a better test runner that he may be using for his zope3 projects. He gave a talk on it at Europython.
[02:52] <spiv> Or even look for a doctests_to_run attribute in a test_* module.
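daf's fallback idea, use `test_suite()` if present and otherwise collect TestCase subclasses Twisted-style, can be sketched with stdlib unittest (module and class names here are hypothetical). The loader also guards against the test_suite()-returns-None bug stub found, which silently runs nothing:

```python
import types
import unittest

def suite_for_module(module):
    """Build a suite for a module, guarding against the traps discussed:
    use test_suite() if present, but treat a None return as 'nothing
    declared' and fall back to collecting TestCase subclasses."""
    factory = getattr(module, "test_suite", None)
    if factory is not None:
        suite = factory()
        if suite is not None:
            return suite
        # test_suite() returned None: fall through instead of running nothing.
    return unittest.TestLoader().loadTestsFromModule(module)

# Demo module whose test_suite() returns None by accident.
mod = types.ModuleType("broken_tests")

class SampleTest(unittest.TestCase):
    def test_arithmetic(self):
        self.assertEqual(2 + 2, 4)

mod.SampleTest = SampleTest
mod.test_suite = lambda: None  # the bug stub spotted in the tree

result = unittest.TestResult()
suite_for_module(mod).run(result)
print(result.testsRun, "tests run,", len(result.failures), "failures")
```

spiv's `doctests_to_run` attribute would slot in as one more `getattr` check before the TestCase fallback.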
[02:52] <spiv> Yeah, we should definitely look at reusing existing code :)
[02:53] <daf> mmm, yes
[02:53] <daf> perhaps we should ask Marius about it
[02:53] <spiv> Unfortunately, the Twisted test runner (trial) is heading in a direction where it's more and more twisted-specific :/
[02:57] <stub> http://mg.pov.lt/blog/using-schooltool-test-runner.html
[02:58] <stub> Works with Z3 stuff.
[02:59] <stub> Might really help with the postgresql setup/teardown - you can easily group tests with a single setup/teardown to run at the start and end of the group (where the existing harnesses do it for every test). Lots of other funky features.
[03:02] <daf> mm, looks nice
[03:05] <stub> the europython talk, which describes the hooks'n'stuff, might be online; or marius is in #zope3-dev atm. I'm off to bed :-)
[03:13] <spiv> Twisted's trial has a setUpClass/tearDownClass, which sounds similar.
[03:14] <spiv> Basically, there's clearly a lot of good ideas whose time has come... it would be really nice if there were One True Test Runner :)
[03:21] <BradB> morning
[03:21] <BradB> stub: Is there a PostgreSQL command that does pretty much the same as dropping and recreating the DB, but without actually dropping and recreating the db?
[03:22] <BradB> I'm thinking that's a more practical approach to solving the data init problem.
[03:22] <daf> I think you just missed stub
[03:22] <daf> I thought we were doing clever tricks with templates now
[03:25] <stub> BradB: Nope. You can drop all the tables and stuff one by one, but the disadvantage to that approach is that it is slow (it is 10 or 20 times faster to do 'createdb --template' to build a launchpad instance than it is to build all the tables and populate sample data).
[03:26] <stub> BradB: The closest I could come up with would be to override psycopg.connect to return connection wrappers. These wrappers ignore calls to 'commit', so we can rollback all database changes at the end of the test. However, that would fail in some situations where the tests are relying on rollback to work normally (but that might not be a problem)
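stub's wrapper idea can be sketched with sqlite3 standing in for psycopg (class and table names are made up): the wrapper swallows commit() calls made by code under test, so a single rollback at the end of the test discards every change.

```python
import sqlite3

class NoCommitConnection:
    """Wrap a DB-API connection so commit() is a no-op; the test harness
    can then roll everything back, leaving the sample data untouched.
    (Sketch of stub's psycopg.connect override, using sqlite3.)"""

    def __init__(self, real):
        self._real = real

    def commit(self):
        pass  # swallow commits made by the code under test

    def reset(self):
        self._real.rollback()  # discard everything since the last real commit

    def __getattr__(self, name):
        return getattr(self._real, name)  # delegate cursor(), execute(), ...

real = sqlite3.connect(":memory:")
real.execute("CREATE TABLE bug (id INTEGER PRIMARY KEY, title TEXT)")
real.commit()  # schema (the "sample data" baseline) is committed for real

conn = NoCommitConnection(real)
conn.execute("INSERT INTO bug VALUES (1, 'batching broken')")
conn.commit()  # ignored!
conn.reset()   # rollback undoes the insert
print(conn.execute("SELECT COUNT(*) FROM bug").fetchone())  # (0,)
```

As stub notes, this breaks any test that relies on rollback behaving normally, since the wrapper has quietly redefined what a transaction boundary is.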
[03:28] <BradB> I think my current approach might be close enough to try to finish then.
[03:28] <BradB> lifeless: ping
[03:35] <Kinnison> pizza and garlic bread methinks
[03:39] <lulu> BradB:ping!
[03:40] <BradB> lulu: pong
[03:43] <lulu> BradB: what is your bugzilla email address? got a little bug to assign to ya!
[03:43] <lulu> :o)
[03:44] <BradB> bradb@bbnet.ca
[03:45] <daf> lulu: if you type "brad" in, it will work
[03:47] <lulu> daf, brad: thank you!
[04:10] <carlos> daf: could we decide on the "fuzzy" change I suggested?
[04:53] <BradB> debonzi: ping
[04:53] <debonzi> BradB, pong
[04:53] <BradB> debonzi: i seem to remember reading a checkin message that you changed something in the batching code, correct?
[04:54] <debonzi> BradB, yep
[04:55] <debonzi> BradB, is something wrong?
[04:55] <BradB> this seems to have broken the batching unit tests. it slipped by pqm because of the way pqm was verifying whether everything worked or not. could you fix the test so that it passes? lifeless and spiv will probably be making this cause checkin failures in the next day or two.
[04:56] <debonzi> BradB, yes Ill check it
[04:56] <BradB> thanks
[04:57] <debonzi> BradB, no problem.. What is the easy way to track it? make check?
[04:57] <daf> carlos: ok, let's discuss it
[04:57] <BradB> python test.py -h will tell you how to run specific tests
[04:57] <debonzi> BradB, right.. thanks
[04:57] <carlos> daf: did you see my last comments?
[04:59] <daf> where?
[04:59] <carlos> at bugzilla
[05:00] <carlos> https://bugzilla.warthogs.hbd.com/bugzilla/show_bug.cgi?id=2147
[05:03] <daf> my feeling at the moment is "the fuzzy flag should only be changed explicitly by the translator, so I don't see the problem"
[05:04] <carlos> so how will you handle the "iscomplete" flag?
[05:05] <daf> good question :)
[05:06] <carlos> that's the point behind this change request :-)
[05:06] <daf> I think this is just a cached result of "does this message set have a translation sighting for each plural form it's supposed to have?"
[05:07] <carlos> yes, we could see it that way
[05:07] <daf> I think the complicated part is the relationship between the two flags
[05:08] <daf> e.g. a message must be fuzzy if it is not complete
[05:08] <daf> does that make sense?
[05:09] <daf> so a message set can either be (complete and fuzzy), (complete and not fuzzy), or (not complete and fuzzy)
[05:10] <daf> and we don't allow a translator to unset fuzziness on an incomplete message set
[05:11] <carlos> that's not true with gettext
[05:11] <carlos> :-(
[05:11] <carlos> so we should fix that at import time
[05:11] <daf> good point
[05:11] <daf> that's a bit messed up, I think
[05:12] <daf> let's do a test:
[05:12] <daf> make a PO file with an incomplete message set
[05:12] <daf> use it in a program
[05:12] <daf> see what gettext does with the message that's missing a translation
[05:13] <carlos> daf: msgfmt does not give you any warning
[05:13] <carlos> I did some tests already
[05:13] <daf> ok, but I want to see what happens when the program runs
[05:13] <carlos> not sure about the gettext function...
[05:13] <daf> whether gettext uses the partial translation or not
[05:13] <carlos> ok
[05:13] <carlos> will do it today
[05:13] <daf> any ideas on how to do it?
[05:15] <carlos> yes
[05:15] <carlos> a simple C program
[05:15] <carlos> with a plural form
[05:15] <carlos> that prints both messages
[05:16] <daf> sounds good
[05:16] <daf> estimated time to do that?
[05:17] <carlos> 30 minutes
[05:18] <carlos> I do it and then we continue talking?
[05:18] <daf> yep
[05:39] <carlos> hmm, I'm forgetting something...
[05:39] <carlos> daf: could you look at:
[05:39] <carlos> #include <libintl.h>
[05:39] <carlos> #include <stdio.h>
[05:39] <carlos> void
[05:39] <carlos> main ()
[05:39] <carlos> {
[05:39] <carlos>         bindtextdomain("test", "/home/carlos/test");
[05:39] <carlos>         textdomain("test");
[05:39] <carlos>         printf ("%s\n", ngettext ("Test singular", "Test plural", 1));
[05:39] <carlos>         printf ("%s\n", ngettext ("Test singular", "Test plural", 2));
[05:39] <carlos> }
[05:39] <carlos> it does not try to access /home/carlos/test
[05:40] <carlos> I'm missing something but I don't see it
[05:40] <daf> hmm
[05:40] <daf> did you try stracing it?
[05:41] <carlos> yes, that's how I know it does not look at that directory
[05:41] <carlos> I don't get any open request
[05:41] <daf> interesting
[05:41] <daf> what files does it try to open?
[05:41] <carlos> only the usual ones (the linker ones)
[05:42] <carlos> carlos@frodo ~/test $ strace ./a.out 2>&1|grep open
[05:42] <carlos> open("/etc/ld.so.preload", O_RDONLY)    = -1 ENOENT (No such file or directory)
[05:42] <carlos> open("/etc/ld.so.cache", O_RDONLY)      = 3
[05:42] <carlos> open("/lib/libc.so.6", O_RDONLY)        = 3
[05:42] <daf> you can do "strace -e open", by the way
[05:42] <carlos> ok
[05:44] <kiko> right.
[05:44] <daf> carlos: hmm, the only thing I can think is that you need to do -DGETTEXT or something
[05:45] <carlos> that's what I'm looking now
[05:45] <daf> gettext is gnarly :(
[05:46] <carlos> grre
[05:46] <carlos> setlocale
[05:46] <carlos> :-)
[05:46] <carlos> that's the key
[05:47] <daf> oh, of course!
[05:47] <daf> :)
[05:47] <carlos> perfect
[05:47] <carlos> I'm starting to forget things :-(
[05:49] <carlos> daf: it prints an empty string
[05:50] <daf> ouch
[05:50] <daf> I think this is a bug in gettext
[05:50] <carlos> carlos@frodo ~/test $ LC_ALL=C ./a.out
[05:50] <carlos> Test singular
[05:50] <carlos> Test plural
[05:50] <carlos> carlos@frodo ~/test $ ./a.out
[05:50] <carlos> primera prueba
[05:51] <carlos> daf: so, what should we do?
[05:52] <carlos> mark it always as fuzzy?
[05:52] <daf> I think gettext should have fallen back to the untranslated string in that case
[05:52] <daf> yes, I think we should mark such message sets as fuzzy on import
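The fallback daf expected is what Python's gettext module does when no catalog can be found: with fallback=True you get a NullTranslations object that returns the untranslated strings instead of nothing (the domain and localedir below are placeholders, echoing carlos's C test program):

```python
import gettext

# No .mo catalog exists under this localedir, so fallback=True yields
# NullTranslations, whose ngettext returns the untranslated
# msgid/msgid_plural rather than an empty string.
t = gettext.translation("test", "/nonexistent", fallback=True)
print(t.ngettext("Test singular", "Test plural", 1))  # Test singular
print(t.ngettext("Test singular", "Test plural", 2))  # Test plural
```

This is per-catalog fallback, though; it doesn't cover carlos's case of a catalog that is present but missing one plural form, which is what makes marking such message sets fuzzy on import necessary.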
[05:52] <dilys> Merge to rocketfuel@canonical.com/launchpad--devel--0: batching unit test fixed. (patch-727)
[05:52] <daf> it would be nice if we could warn the user about such things
[05:52] <carlos> daf: well, I don't think that's correct, it's possible (but not probable) that a translator wanted to leave a translation empty...
[05:53] <daf> why?
[05:53] <carlos> I'm talking about the gettext behaviour you are talking about
[05:53] <carlos> not the import thing :-)
[05:53] <daf> oh, ok :)
[05:54] <daf> perhaps we should open a bug "warn the user when marking message sets fuzzy on import"
[05:54] <debonzi> BradB, the fix is in rocketfuel.. I hope is all ok
[05:54] <carlos> daf: the user == who imports the file?
[05:55] <carlos> that will be a script 99% of the imports...
[05:58] <BradB> debonzi: great, thanks
[05:58] <debonzi> BradB, no problem
[06:07] <carlos> daf: ?
[06:08] <daf> that's true
[06:08] <daf> forget it :)
[06:08] <carlos> ok
[06:09] <carlos> then, back to the "iscomplete" problem
[06:09] <carlos> when should we set it?
[06:10] <carlos> as things are at this moment, if it's singular it's easy, if it's plural... no idea
[06:10] <carlos> (to do it implicitly inside makeTranslationSighting)
[06:11] <BradB> stub: dude, i ROCK
[06:12] <BradB> stub: db_adapter.disconnect(), db_adapter.connect()!
[06:12] <BradB> stub owes me a kiss in spain now (it was his offer)
[06:12] <carlos> X-)
[06:13] <kiko> jesus
[06:13] <BradB> now we can finally drop and recreate the db properly between test stories
[06:13] <BradB> a sloppy, wet one too, he said
[06:15] <daf> perhaps we should add a database constraint for NOT ((complete = FALSE) AND (fuzzy = FALSE))
[06:15] <carlos> daf: sure, but that's to improve the schema
[06:16] <carlos> the problem at this moment is that makeTranslationSighting is called two times, one with the singular and another one with the plural
[06:17] <carlos> so if you don't want to change the schema
[06:17] <daf> yes, that was incidental
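Given the allowed states daf listed earlier (an incomplete message set must be fuzzy), the one combination a constraint has to forbid is complete = FALSE AND fuzzy = FALSE. A sqlite3 sketch of such a CHECK on a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical table: the CHECK forbids the single invalid state,
# a message set that is incomplete but not marked fuzzy.
conn.execute("""
    CREATE TABLE msgset (
        id INTEGER PRIMARY KEY,
        complete BOOLEAN NOT NULL,
        fuzzy BOOLEAN NOT NULL,
        CHECK (NOT (complete = 0 AND fuzzy = 0))
    )
""")

# The three allowed states all insert cleanly.
for complete, fuzzy in [(1, 1), (1, 0), (0, 1)]:
    conn.execute("INSERT INTO msgset (complete, fuzzy) VALUES (?, ?)",
                 (complete, fuzzy))

# The forbidden state is rejected by the database.
try:
    conn.execute("INSERT INTO msgset (complete, fuzzy) VALUES (0, 0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Pushing the rule into the schema means the import-time fixup carlos mentions can't be forgotten by any one code path.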
[06:17] <carlos> I see two options (not sure if they are correct):
[06:17] <daf> go for it
[06:18] <carlos> 1.- Assume that every time we update a plural form we will always call makeTranslationSighting twice
[06:18] <carlos> hmm
[06:18] <carlos> no, this one will not work
[06:18] <carlos> we could have n plural forms...
[06:19] <carlos> then, we only have an option:
[06:19] <carlos> 2.- Change makeTranslationSighting and give a list with all translations
[06:20] <daf> hrm
[06:20] <carlos> so if it's singular the list will have only one argument
[06:20] <carlos> if it has plural forms, it will have all translations for that set
[06:20] <daf> I was thinking along the lines of "when makeTranslationSighting is called, calculate the flag and set it"
[06:20] <carlos> and we could know if it's complete or not
[06:20] <daf> the information needed to set it is in the DB
[06:20] <carlos> me too
[06:20] <daf> the problem is just keeping the flag up to date
[06:20] <carlos> not really
[06:21] <carlos> think of this scenario:
[06:21] <carlos> we have: msgstr[0] = "foo" and msgstr[1] = "bar"
[06:21] <carlos> and that msgset is marked as fuzzy
[06:22] <daf> ok
[06:22] <carlos> foo is correct, but bar is not correct
[06:22] <carlos> so we call makeTranslationSighting with "foo"
[06:22] <daf> which plural form?
[06:22] <carlos> at that moment we have all needed strings but it's still fuzzy
[06:22] <carlos> only two plural forms
[06:22] <carlos> so we don't need extra strings
[06:23] <carlos> only review them
[06:23] <daf> right, but makeTranslationSighting with plural_form = 1
[06:23] <carlos> plural_form=0
[06:23] <carlos> so foo with plural_form = 0
[06:23] <daf> that doesn't change anything, does it?
[06:23] <carlos> it should not
[06:24] <carlos> when we do the second call with plural_form = 1
[06:24] <carlos> we assume the fuzzy has been fixed, so we change the flags
[06:24] <carlos> ok
[06:24] <daf> we don't make the first call if nothing has changed
[06:24] <daf> is it safe to assume it's no longer fuzzy?
[06:24] <carlos> not really
[06:24] <carlos> that's my point
[06:24] <daf> ok, then we shouldn't :)
[06:25] <daf> that's the user's choice
[06:25] <carlos> but we talked about removing the fuzzy concept from rosetta
[06:25] <carlos> to simplify the UI
[06:25] <daf> ok, my memory is bad
[06:25] <daf> I don't remember that
[06:25] <daf> can you remind me?
[06:26] <carlos> the fuzzy flag does not appear in the stats because of that, and I think we said it should not appear as a flag when translating either, for the same reason: simplicity
[06:26] <carlos> we show the strings as a hint
[06:26] <carlos> or suggestions
[06:26] <carlos> that's what I remember from what we talked about at Oxford
[06:27] <daf> hmm
[06:27] <daf> I see the benefits of this
[06:28] <carlos> :-)
[06:28] <daf> what about the use case "I think I know the translation for this, but I'm not sure, so I'll put it in and mark it fuzzy so it gets reviewed"?
[06:29] <carlos> [ ]  Finished [ ]  Review and mark finished by default?
[06:30] <dilys> Merge to rocketfuel@canonical.com/launchpad--devel--0: Make test_on_merge.py check test results more accurately (Bug #2155) (patch-728)
[06:30] <carlos> daf: next time we show a fuzzy string it will appear as a hint (when we implement that part)
[06:31] <daf> ok, but how do we represent "this message set needs review" in the DB?
[06:31] <carlos> or we should change it to also show those msgsets with the review mark set (it's another option)
[06:31] <carlos> fuzzy
[06:31] <carlos> :-)
[06:31] <daf> hmmm
[06:32] <carlos> finished = iscomplete, review = fuzzy
[06:32] <carlos> in fact, we could forget about finished
[06:32] <daf> I don't think iscomplete should be up to the user
[06:32] <daf> right
[06:32] <dilys> Bug 2155 resolved: test_on_merge.py doesn't check exit status, and thus ignores failures.
[06:32] <dilys> https://bugzilla.warthogs.hbd.com/bugzilla/show_bug.cgi?id=2155
[06:32] <carlos> so we should introduce the fuzzy mark (with other name) 
[06:32] <carlos> it's ok for me
[06:33] <carlos> we can always remove it in the future if needed
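daf's definition of iscomplete, a cached answer to "does this message set have a translation sighting for each plural form it's supposed to have?", can be sketched with a hypothetical helper (illustrative names, not Rosetta's actual API):

```python
# Hypothetical helper: a message set is complete when every expected
# plural form has a non-empty translation sighting.

def is_complete(sightings, expected_forms):
    """sightings maps plural-form index -> translation text (or None)."""
    return all(sightings.get(form) for form in range(expected_forms))

# carlos's scenario: two plural forms, both translated, so the set is
# complete even though it may still be marked fuzzy (needing review).
print(is_complete({0: "foo", 1: "bar"}, expected_forms=2))  # True
print(is_complete({0: "foo"}, expected_forms=2))            # False
print(is_complete({0: "foo", 1: ""}, expected_forms=2))     # False (empty)
```

Recomputing this whenever makeTranslationSighting runs keeps the cached flag honest without asking the user anything, while fuzzy/review stays an explicit translator decision.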
[06:36] <daf> ok, I'm going to write a reply to your email to the list, in light of what we've discussed
[06:36] <daf> this will hopefully help me get things clear in my mind
[06:36] <carlos> ok
[06:36] <carlos> thanks
[07:04] <BradB> eeg, malone sucks
[07:04] <BradB> the way the UI is designed i have no real way to resolve a bug.
[07:06] <BradB> unless "Closed" is a good way of saying "I fixed this"
[07:09] <daf> well, that depends how you view bugs
[07:11] <daf> if it's "a bug is a report of a problem", and the problem is not currently there (because it was fixed, or because the report was inaccurate, etc.) then it's valid to say "there is no problem, so the bug should be closed"
[07:12] <daf> I've found Bugzilla's complex states a little confusing (closed, resolved, verified)
[07:12] <BradB> daf: one word: "rejected" :)
[07:12] <BradB> "closed" conveys very little meaning
[07:13] <spiv> There should be a way to differentiate between "fixed" and "won't fix" :)
[07:13] <BradB> there is, in infestations
[07:14] <spiv> What's the difference between "affected" and "victimized"?
[07:15] <kiko> lol
[07:23] <BradB> spiv: affected means the bug affects that release.
[07:24] <BradB> spiv: victimized means the bug manifests itself in that sp release, but it's actually a bug that comes from another sp.
[07:25] <spiv> BradB: Yeah, I figured it out from the descriptions in the dbschema :)
[07:26] <spiv> BradB: Btw, test_on_merge.py behaves a bit more intelligently now, as you probably noticed from the commit message.
[07:27] <BradB> cool...i can't wait till pqm is fixed so that i can actually see what's failing when i try to merge. :)
[07:31] <daf> BradB: rejected because the report is incomplete, or can't be reproduced, or because you don't want to fix it, or for some other reason? :)
[07:31] <daf> I think bug tracking is one of those problems that look easy from a distance but are really hairy close up
[07:32] <daf> carlos: ok, sent
[07:34] <daf> sorry I've been slow on this issue, but I wanted to understand the problem well before making a decision
[07:36] <carlos> daf: no problem, I had other things to do, if I were blocked you would know that :-)
[08:44] <BradB> daf: what's the db table that represents one localization of a msg id?
[08:45] <BradB> it looks like it might be pomsgidsighting, but hopefully there's something clearer
[08:59] <daf> I'm not completely sure what you mean, but I suspect you want potranslationsighting
[08:59] <daf> depends whether you want the localisation of the message ID as a whole, or for a particular plural form
[09:06] <BradB> how do i know that the german translation of "hello" is fuzzy, for example (but that the german translation of "hey there", "what's up", and "yo dude" aren't?)
[09:10] <BradB> kiko: dude, we need title :)
[09:11] <kiko> then kill summary :)
[09:11] <BradB> yes! (i corrected my question in a followup email)
[09:12] <kiko> heh
[09:12] <kiko> why is title needed, btw?
[09:12] <BradB> kiko: so that i can see "Mozilla not saving .ch bookmarks" in a listing.
[09:13] <kiko> why isn't that summary, just so I know? :)
[09:14] <BradB> Either/or...in its current form though, "summary" is a textarea where you summarize the problem.
[09:14] <BradB> Then description is the essay on the bug, heh.
[09:15] <kiko> this is bizarre :)
[09:18] <BradB> you come from bugzilla terminology, i come from plone collector terminology.
[09:18] <BradB> also, "title" is used elsewhere in LP with similar semantics
[09:19] <daf> BradB: you find the pomsgset which has a (pomsgidsighting which has a message ID of "hello") and a pofile which has a (language which is German) and a (potranslationsighting that's active) and has the fuzzy flag set to true
[09:21] <BradB> hm, i don't understand. all that criteria would be met by a non-fuzzy localization of a msg id.
[09:22] <daf> ahem: "and has the fuzzy flag set to true"
[09:22] <BradB> daf: yes
[09:23] <BradB> there's no fuzzy flag on a potranslationsighting, only on the msg set.
[09:23] <daf> correct
[09:23] <BradB> thus if i'm a non-fuzzy localization, living in a fuzzy msg set, i'm screwed.
[09:23] <daf> why?
[09:24] <BradB> because there's no way of knowing that i'm not fuzzy, but some of my buddies in the same po are.
[09:24] <BradB> unless i'm missing something.
[09:24] <daf> I think you misunderstand what a message set is
[09:25] <BradB> nah, it's a cp .pot .po => give to translator => out comes a message set.
[09:25] <daf> it's not a great name (but I didn't choose it :))
[09:26] <daf> a PO message set is a collection of a message ID, a possible plural message ID, and the associated translation(s)
[09:26] <BradB> yep
[09:26] <BradB> some of which might be fuzzy, some not
[09:26] <daf> no
[09:26] <daf> the whole thing is fuzzy
[09:26] <BradB> why not?
[09:26] <daf> why? :)
[09:26] <BradB> why? i might be certain on some translations, and uncertain of others.
[09:26] <daf> a PO file has many message sets, each of which may be fuzzy
[09:27] <daf> each message ID in a PO file has its own message set
[09:27] <BradB> oh
[09:27] <daf> yes
[09:27] <BradB> i didn't expect that
[09:27] <BradB> i thought a message set was a set of messages, rather than a set of localizations
[09:27] <daf> yeah, the name is a bit misleading
[09:28] <daf> blame sabdfl :)
[09:28] <daf> I would have called it "message" myself :)
[09:28] <BradB> ok, so that makes more sense....but what if only one translation of that msg set is fuzzy?
[09:29] <daf> well, I'm still not entirely decided on that point
[09:29] <daf> there's an argument which says you want each individual localisation to have its own fuzzy flag
[09:30] <BradB> it might be overkill at this point
[09:30] <daf> right, I suspect that one fuzzy flag for the set is good enough
[09:30] <daf> I'd argue you can only decide that a localisation is correct in the context of the other localisations that accompany it
[09:31] <daf> and that any win in control is outweighed by the loss in UI complexity
[09:31] <BradB> i would have thought that only the msg id (and "domain"? if i have my i18n terminology correct) matters
[09:32] <daf> well, if you have 3 plural forms, you want the 3 localisations to be consistent in terminology and sentence structure
[09:32] <daf> and that's something you can only decide for the message set as a whole
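daf's picture of a message set, with one fuzzy flag covering every plural form together, can be sketched as a data structure. This is only an illustrative shape, not the real Launchpad schema; the field names are assumptions, and the sample strings come from the lemon example below.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative shape of a PO message set as described in the chat:
# one msgid (plus an optional plural msgid), all of its translations
# for one language, and a SINGLE fuzzy flag covering the whole set.
# Field names are assumptions, not Launchpad's actual columns.
@dataclass
class POMsgSet:
    msgid: str
    msgid_plural: Optional[str] = None
    msgstrs: list[str] = field(default_factory=list)
    fuzzy: bool = False

lemons = POMsgSet(
    msgid="I have one lemon.",
    msgid_plural="I have %d lemons.",
    msgstrs=["J'ai un citron.", "J'en ai %d."],
    fuzzy=True,  # flags every plural form for review at once
)
print(lemons.fuzzy)  # True -- the whole set is fuzzy, not one msgstr
```

The single flag is the "one fuzzy flag for the set is good enough" position daf argues for: review happens per set, so all plural forms get re-checked together.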
[09:35] <BradB> really? hm, i haven't really faced the issues before, but i would have thought that all i care about as the japanese translator is: 1. the original msg id, 2. its intended meaning (conveyed hopefully by the domain). i would have thought that the way each language deals with conveying that same thing may be wildly different.
[09:35] <daf> sure
[09:36] <daf> I'm talking about localisations within one language
[09:36] <daf> e.g.
[09:36] <daf> msgid "I have one lemon."
[09:36] <daf> msgid_plural "I have %d lemons."
[09:36] <daf> msgstr[0]  "J'ai une citron."
[09:37] <daf> msgstr[1]  "Citrons, j'ai %d."
[09:37] <BradB> j'en ai %d! :P
[09:37] <daf> (excuse my appalling French)
[09:37] <BradB> hm
[09:38] <daf> the point is that you want the different translations for the same language to be consistent to each other
[09:39] <BradB> yeah, i see what you mean
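Pulling the earlier schema discussion together, daf's lookup recipe ("is the German translation of 'hello' fuzzy?") can be sketched against a toy in-memory schema. The table names (pomsgset, pofile, pomsgidsighting, potranslationsighting) come from the chat; the columns are simplified assumptions, not the real Launchpad database.

```python
import sqlite3

# Toy versions of the tables discussed above. Columns are simplified
# assumptions for illustration; note the fuzzy flag lives on pomsgset,
# not on potranslationsighting, exactly as BradB and daf establish.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pofile (id INTEGER PRIMARY KEY, language TEXT);
CREATE TABLE pomsgset (id INTEGER PRIMARY KEY, pofile INTEGER,
                       fuzzy BOOLEAN);
CREATE TABLE pomsgidsighting (pomsgset INTEGER, msgid TEXT);
CREATE TABLE potranslationsighting (pomsgset INTEGER,
                                    translation TEXT, active BOOLEAN);

INSERT INTO pofile VALUES (1, 'de');
INSERT INTO pomsgset VALUES (10, 1, 1);          -- fuzzy
INSERT INTO pomsgset VALUES (11, 1, 0);          -- not fuzzy
INSERT INTO pomsgidsighting VALUES (10, 'hello');
INSERT INTO pomsgidsighting VALUES (11, 'hey there');
INSERT INTO potranslationsighting VALUES (10, 'hallo', 1);
INSERT INTO potranslationsighting VALUES (11, 'hallo du', 1);
""")

# daf's recipe: join msgid sighting -> msgset -> pofile, restrict to
# the language and an active translation sighting, read fuzzy off the set.
row = conn.execute("""
    SELECT ms.fuzzy
      FROM pomsgset ms
      JOIN pofile pf ON ms.pofile = pf.id
      JOIN pomsgidsighting mis ON mis.pomsgset = ms.id
      JOIN potranslationsighting ts ON ts.pomsgset = ms.id
     WHERE pf.language = 'de' AND mis.msgid = 'hello' AND ts.active
""").fetchone()
print(bool(row[0]))  # True: the 'hello' set is fuzzy
```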
[09:51] <dilys> Bug 2153 resolved: create-a-bug and too many required fields
[09:51] <dilys> https://bugzilla.warthogs.hbd.com/bugzilla/show_bug.cgi?id=2153
[10:02] <daf> BradB: we should look into integrating Malone with Dilys
[10:03] <BradB> yes
[10:04] <daf> all Dilys needs is some way to be notified when two events happen: a bug is opened, a bug is closed
[10:04] <daf> for Bugzilla, it's done through email
[10:04] <BradB> dilys actually didn't quite give enough info there. i marked that bug as "NOTWARTY" when resolving it, because i couldn't think of a better status that expressed "we've already reported this in malone itself"
[10:05] <BradB> daf: it is in malone too
[10:05] <daf> you've already implemented it?
[10:05] <BradB> yes
[10:05] <spiv> That's why dilys says "resolved" rather than "fixed" ;)
[10:05] <daf> I mean email notification
[10:05] <daf> Dilys doesn't pay much attention to how bugs were closed -- I suppose she could
[10:05] <BradB> daf: yes, that works in malone for all add and edit events for things related to bugs
[10:06] <BradB> daf: it's still using stub's fancy email thing though, which redirects all email to me
[10:06] <BradB> but it's just a matter of changing the zcml to say "send mail to the real addresses that it's addressed to"
[10:06] <daf> ok, if you can arrange for me to receive all Malone mail for new bugs and closed bugs, I can probably do the rest
[10:07] <BradB> i implemented global notification too, so i could easily add you to that
[10:07] <daf> it helps if the emails are easy to parse :)
[10:07] <BradB> well, they are, but there's lots of different kinds of notifications
[10:07] <BradB> about 15 in all
[10:07] <daf> well, I could always make Dilys receive them all and ignore the uninteresting ones
[10:08] <daf> but if they can be filtered at source, that would be great
[10:08] <BradB> hm, it'd be more consistent to make dilys decide which reports she doesn't care about
[10:08] <BradB> imho
[10:09] <daf> well, that's what she does for Bugzilla email
[10:09] <BradB> (the subject line makes it easy to figure out which ones interest you)
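The "receive them all, ignore the uninteresting ones" approach daf and BradB settle on can be sketched as a simple subject-line filter. The subject prefixes below are entirely hypothetical; the chat doesn't show Malone's real notification subjects, only that the subject line identifies the notification kind.

```python
# Hypothetical subject prefixes -- Malone's real notification
# subjects are not shown in the chat, only that the subject line
# tells you which of the ~15 notification kinds you're looking at.
INTERESTING = ("New bug:", "Bug closed:")

def interesting(subject: str) -> bool:
    """Keep only the notifications dilys would announce on IRC."""
    return subject.startswith(INTERESTING)

inbox = [
    "New bug: Mozilla not saving .ch bookmarks",
    "Comment added: bug 2155",
    "Bug closed: test_on_merge.py ignores failures",
]
kept = [m for m in inbox if interesting(m)]
print(kept)  # the two open/close events; the comment is dropped
```

This matches BradB's point that it is more consistent for dilys to receive everything and decide herself which reports to ignore.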
[10:09] <daf> cool
[10:15] <BradB> what's dilys's email addy?
[10:25] <BradB> daf: ping
[10:27] <daf> dilys@muse.19inch.net
[10:32] <lifeless> BradB: pong
[10:33] <BradB> lifeless: woo
[10:33] <BradB> lifeless: think you'll have a chance to include full error output in pqm emails today? i can't checkin my changes until that happens.
[10:36] <dilys> Merge to rocketfuel@canonical.com/arch-pqm--main--0: implement replay for pqm (patch-13)
[10:46] <lifeless> BradB: send a debug email
[10:46] <lifeless> tell me how it goes.
[10:46] <lifeless> you can be my test case (test infrastructure for pqm is ... lacking)
[10:47] <BradB> ok...here we go
[10:48] <BradB> lifeless: echo -e debug\nstar-merge... right?
[11:02] <lifeless> BradB: yes
[11:02] <lifeless> unknown commands trigger warnings not errors, you can just put that debug in your script.
[11:19] <kiko> why is debugging conflicts in arch so painful?
[11:20] <lifeless> kiko: how is it painful ?
[11:21] <BradB> lifeless: echo -e debug\nstar-merge "$(tla tree-version)" "$(cat {arch}/+upstream)" | gpg --clearsign | mail -s "$1" "$2" gave:
[11:21] <BradB> an email with subject "no valid commands given"
[11:21] <kiko> lifeless, the fact that conflicts aren't displayed inline combined with the fact that the .rej is a contextual diff makes it *very* tiresome 
[11:22] <kiko> I end up rewriting the whole code usually
[11:22] <lifeless> kiko: erm. are you using a visual conflict resolution tool ?
[11:22] <lifeless> (And if not, why not ?)
[11:22] <BradB> i guess it has to be quoted, hm
[11:22] <lifeless> BradB: yeah, single quote the lot
[11:22] <kiko> lifeless, because I want to be able to work on code over ssh and without X?
[11:23] <kiko> lifeless, it would be essential to have some sort of inline-conflict mode if arch was to be adoptable here.
[11:23] <spiv> lifeless: I've tried using both vimdiff and meld, but I couldn't figure out how to make it work with the way arch organises things.
[11:23] <kiko> (here meaning async where we use svn and cvs)
[11:23] <lifeless> spiv: ah. thats not good.
[11:24] <spiv> The gnuarch wiki has a rather unhelpful "well, I use emacs" page on the subject.
[11:24] <lifeless> kiko: start-merge -t will do inline markers.
[11:24] <kiko> it will?!
[11:24] <kiko> that's awesome!
[11:24] <spiv> kiko: Yep, with diff3 (-t for --three-way)
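The `-t` behaviour being discussed is diff3-style three-way merging with inline conflict markers. The decision rule can be sketched as a toy function; a real diff3/star-merge works hunk by hunk, whereas this sketch operates on whole strings, so it is illustrative only.

```python
def merge3(base: str, ours: str, theirs: str) -> str:
    """Toy three-way merge at whole-text granularity.

    Real diff3/star-merge resolves per hunk; this only shows the
    decision rule and the inline markers kiko was asking for.
    """
    if ours == theirs:   # both sides agree (or neither changed)
        return ours
    if ours == base:     # only their side changed: take theirs
        return theirs
    if theirs == base:   # only our side changed: keep ours
        return ours
    # both sides changed differently: emit inline conflict markers
    return f"<<<<<<< ours\n{ours}=======\n{theirs}>>>>>>> theirs\n"

print(merge3("a\n", "a\n", "b\n"))  # only theirs changed -> "b\n"
```

The marker output is what lands inline in the working file, instead of the separate contextual-diff `.rej` files kiko found so tiresome to reconcile.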
[11:24] <lifeless> but I really want to understand the issues before I consider making that sort of output the default, as it (currently) changes the merge algorithm slightly.
[11:24] <kiko> if an option cuts it, I'm happy 
[11:25] <lifeless> consider yourself happy
[11:25] <lifeless> :)
[11:26] <spiv> lifeless: I'd love to use a happy shiny graphical merge tool...
[11:26] <spiv> lifeless: Perhaps there should be a "dealing with conflicts in arch" BoF at the December conference?
[11:27] <lifeless> perhaps :)
[11:27] <lifeless> my intuition says that merge resolution tools are inherently orthogonal, and that arch should just setup things for easy use.
[11:27] <spiv> Yeah, I'd expect so too.
[11:28] <lifeless> I.E. if we had a script to take foo.c, foo.c.orig, foo.c.rej and make foo.c, that would make kiko happy.
[11:28] <spiv> Except for my inability to know how to invoke vimdiff or meld or anything in a useful manner :)
[11:28] <lifeless> if we can teach meld how to grok whats there, that would make you happy.
[11:28] <spiv> Yep!
[11:28] <lifeless> if we can then publish those lessons, as policy for tla, we can have tla trigger your-favorite-tool.
[11:29] <lifeless> are you using baz yet guys ?
[11:29] <spiv> I'm not.. but I'll fix that now.  deb source on the web page, yeah?
[11:29] <lifeless> deb binary on the web page.
[11:29] <kiko> lifeless, interesting. when I used -t for inline conflicts I got one conflict *less* than I had originally.
[11:30] <spiv> Hmm, no apt line?
[11:30] <lifeless> kiko: as I said, it /changes/ the merge algorithm, in a way that looks 'good' on the surface but is actually extremely dangerous.
[11:30] <lifeless> deb http://bazaar.canonical.com/packages/debs ./
[11:30] <lifeless> I'll add that to the page later.
[11:30] <spiv> lifeless: That should be on t-- yeah :)
[11:32] <kiko> one second
[11:32] <kiko> there's a rule, star-merge and commit.
[11:32] <kiko> can I star-merge, solve conflicts and *then* commit?
[11:32] <spiv> Yeah.
[11:33] <kiko> how much "solving conflicts" is allowed?
[11:33] <kiko> I mean, you could solve the conflict by world-hacking the tree.
[11:33] <spiv> The rule is really "merge and commit" in my understanding, and solving conflicts is part of "merging".
[11:34] <daf> spiv: yeah, that's my understanding
[11:35] <BradB> lifeless: 
[11:35] <BradB> SIG_ID kaawvOjzSnhm1MKkzIlT91hn4Tc 2004-11-03 1099520964
[11:35] <BradB> Command failed!
[11:35] <BradB> All lines of log output:
[11:35] <BradB> (that's it)
[11:35] <spiv> Heh.
[11:35] <spiv> $ baz help | head -1
[11:35] <spiv>                         tla sub-commands
[11:36] <kiko> are you guys getting spurious errors with favicon.ico or is it just us?
[11:36] <BradB> yes, depends on the browser you're using
[11:36] <BradB> e.g. with firefox, yes, with safari, no.
[11:36] <kiko> why is that?
[11:36] <BradB> i guess it's up to the browser to ask
[11:37] <kiko_> I can't seem to understand your last remark. :)
[11:37] <BradB> i guess it's up to the browser to ask...for favicon.ico.
[11:38] <BradB> i'm not sure what rfc that belongs in.
[11:38] <kiko> I mean, it only fails half the time :)
[11:38] <BradB> oh
[11:38] <BradB> i was noticing that firefox was requesting favicon.ico where safari didn't seem to be at all.
[11:39] <lifeless> BradB: created by MS, not gonna be in an RFC
[11:39] <kiko> by the way, also.
[11:39] <kiko> lifeless, can a set of users share a revlib?
[11:39] <lifeless> no
[11:39] <lifeless> not safely and reliably anyway.
[11:39] <kiko> on our local net revlibs are going up to 10G..
[11:40] <BradB> revlibs are nasty
[11:40] <kiko> is there a way to purge them automatically and nicely?
[11:40] <lifeless> thats returned by du ?
[11:40] <kiko> yes
[11:40] <lifeless> garh
[11:40] <kiko> that's for all users.
[11:40] <lifeless> summed together ?
[11:40] <kiko> yes.
[11:40] <lifeless> # of users ?
[11:40] <kiko> 4.
[11:40] <lifeless> so 2.5GB per user.
[11:41] <kiko> right.
[11:41] <lifeless> are they greey + sparse ?
[11:41] <kiko> indeed, they are.
[11:41] <kiko> greedy.
[11:41] <lifeless> are they sparse ?
[11:41] <kiko> and sparse. that was just my typo comment, sorry to be unclear.
[11:42] <lifeless> np
[11:42] <lifeless> http://wiki.gnuarch.org/moin.cgi/Controlling_20Home_20Directories_20With_20Arch
[11:42] <lifeless> look for shrink_library
[11:42] <kiko> I'll take a look, thanks.
[11:43] <kiko> any caveats I should be aware of?
[11:44] <lifeless> I haven't used it :)
[11:44] <lifeless> oh, don't run it while running tla, things could get confused.
[11:44] <lifeless> so put it in a 3am cron job or something.
[11:48] <spiv> (Not much point having things like the "soyuz" category in there anymore...)
[11:49] <spiv> Although most of the space was in launchpad revisions, so I removed most of the oldest ones.
[11:50] <daf> lifeless: "Canonical Limited doesn't build these and cannot vouch that they are unadulterated" -- who does provide them and why aren't there official packages?
[11:50] <lifeless> they build on commits to pqm.
[11:50] <lifeless> except for the rpms (jamesh) and mac debs (justdave)
[11:50] <daf> so "Some volunteers make binary packages available." is misleading?
[11:51] <lifeless> as to when, well, when soyuz lets pqm upload a source package and get a mini repository automatically populated and build for linux ix86, amd64 & ppc, and rpms for the same.. then we will have official ones.
[11:51] <lifeless> daf: no, not misleading
[11:51] <BradB> lifeless: dude, by the way, what's with that pqm mail then?
[11:51] <daf> well, I was misled by it :)
[11:51] <lifeless> jamesh & justdave *are* volunteers, and I haven't audited, nor can I audit, their build environments.
[11:52] <lifeless> daf: chinstrap isn't a trusted machine either. I really *cannot* claim that those binaries are trustworthy.
[11:52] <lifeless> so - I don't.
[11:52] <lifeless> BradB: what do you mean ?
[11:52] <daf> lifeless: hmmm
[11:52] <BradB> [17:35]  	<BradB>	lifeless: 
[11:52] <BradB> [17:35]  	<BradB>	SIG_ID kaawvOjzSnhm1MKkzIlT91hn4Tc 2004-11-03 1099520964
[11:52] <BradB> [17:35]  	<BradB>	Command failed!
[11:52] <BradB> [17:35]  	<BradB>	All lines of log output:
[11:52] <BradB> [17:35]  	<BradB>	(that's it)
[11:52] <daf> lifeless: to what degree is chinstrap not trusted?
[11:52] <BradB> i have no idea what that means
[11:53] <lifeless> oh, I thought you meant 'thats fixed' :)
[11:53] <lifeless> daf: its not as secured, nor isolated, as the buildds.
[11:54] <lifeless> BradB: can you forward the mail to me please?
[11:54] <lifeless> actually, don't bother I know the problem.
[11:54] <BradB> lifeless: sure, email?
[11:55] <daf> lifeless: neither are most machines
[11:55] <lifeless> daf: exactly my point
[11:55] <BradB> lifeless: oh, ok :)
[11:55] <lifeless> BradB: try again please
[11:57] <daf> lifeless: I understand you're being cautious, but I suspect that warning may be scaring people off