[02:38] <peitschie> hiya everybody :)
[02:49] <spiv> Hi :)
[06:31] <spiv> mgz: thanks so much for reviewing doxxx's mergetools patch
[06:47] <spiv> vila: I just used run_script for the first time, and it basically Just Worked for me.  Nice!
[07:26] <vila> spiv: cool !
[07:26] <vila> hi all !
[07:28] <peitschie> hiya vila :)
[07:29] <vila>  _o/
[08:01] <vila> alleluia, bzr-2.2.1 made it into maverick-updates :)
[08:08] <peitschie> vila: congrats!  was it as painful as it sounds?
[08:09] <vila> no, not painful, but since I didn't understand the process, I was impatient :)
[08:10] <peitschie> vila: hehe... i understand that feeling :S
[08:23]  * KombuchaKip needs to get Stallman to consume more raw fruits and vegetables.
[08:32] <rryan> hey.. I screwed up and accidentally pushed a branch I was working on to my project's trunk, because its remembered location was the trunk. I then pulled this push into my trunk branch, so I don't have a copy of what the trunk was before I pushed. Nobody else has pulled this yet, so I'd like to push --overwrite the Launchpad trunk back to what it was. How can I restore my repository to the state it was in at a certain revision-spec?
[08:33] <rryan> sorry this is all on Launchpad
[08:34] <rryan> like, in my history the trunk head was 'rryan@mit.edu-20101024063322-8b3jtqyn77k5c32t'
[08:36] <vila> rryan: bzr push --overwrite -rrevid:rryan@mit.edu-20101024063322-8b3jtqyn77k5c32t
[08:36] <vila> It seems I can't connect to lp via ssh right now though
[08:37] <vila> Can anybody confirm or is it only me ?
[08:37] <vila> right, confirmed in #launchpad
[08:54] <rryan> thanks vila, all better now
[10:19] <poolie> hi vila?
[10:20] <vila> poolie: hey ! Wow, you're up ?
[10:20] <poolie> yeah, up a bit early
[15:05] <poolie> hi jam?
[15:05] <jam> hi poolie
[15:05] <jam> Having a bzr sprint at UDS next spring sounds good. I just need to double check if it overlaps with my wife's travel
[15:05] <jam> I think that was april, though
[15:06] <poolie> cool, thanks
[15:07] <poolie> Kareem can come :)
[15:07] <poolie> we have plenty of +easy bugs
[15:16] <jam> :)
[15:31] <jam> beuno: if you are around, I just checked the meliae dump you sent me, and it only sees 11MB of content... :(
[15:32] <jam> you seem to have a lot of "function" objects (15k), but that really doesn't explain 1GB of ram
[15:35] <beuno> jam, we don't know either  :/
[15:35] <jam> beuno: what code is this? (do I have access to it?)
[15:37] <beuno> jam, this is a django server from ubuntu one
[15:38] <beuno> I can surely give you access to it
[15:39] <jam> beuno: I don't know django code particularly well. I don't see any smoking guns just looking at the memory dump
[15:39] <jam> meliae could only find about 113k objects, which is pretty small.
[15:40] <beuno> yeah. we are puzzled
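[Editor's note: the per-type object counting jam is doing over the meliae dump can be approximated without meliae itself. A minimal stdlib-only sketch, similar in spirit to meliae's summarize(); the function name and the `top` parameter are illustrative, not from the dump discussed above:]

```python
import gc
from collections import Counter

def summarize_live_objects(top=10):
    """Count live objects by type name, roughly what a heap summary
    tool like meliae reports after loading a dump. Note that
    gc.get_objects() only sees container objects tracked by the
    collector, so plain ints and strings are not included."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    total = sum(counts.values())
    return total, counts.most_common(top)
```

[A real meliae dump additionally records per-object sizes and reference graphs, which is why jam can say "I don't see big strings here"; this sketch only gives the type-count view.]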
[15:40] <jam> beuno: what version of python?
[15:40] <jam> I think you mentioned that this is only reproducible on the server/ec2 ?
[15:41] <jam> I assume this dump is from that process while it was at high mem?
[15:41] <beuno> jam, 2.4, and yes, this dump is from when the usage is high
[15:41] <jam> beuno: there is a possibility w/ 2.4. Do you do any heavy math?
[15:42] <jam> integer or floating point?
[15:42] <jam> IIRC 2.4 allocates integer arenas, and never returns that memory to the os
[15:42] <jam> so any sort of "do lots of stuff here, and then stop" will still have a lot of memory
[15:42] <jam> it can be re-used by the process, but it isn't returned to the os
[15:43] <beuno> jam, not really. It's actually pretty simple, we stream files from amazon S3 and do basic db requests to return some data
[15:43] <jam> beuno: would you be holding the content?
[15:44] <jam> I certainly don't see big strings here
[15:44] <jam> Are there any custom extension types? like S3 apis, or db apis, etc?
[15:45] <jam> I'm assuming the "Method", "InterfaceClass", "Implements", etc classes are all coming from zope.interfaces code?
[15:46] <beuno> we do have custom S3 apis
[15:47] <beuno> I don't think we use any zope
[15:47] <jam> beuno: are they Python "extensions" ? (C level code, not Python level code)
[15:48] <beuno> jam, nope, this is all python
[15:48] <beuno> we thought about that as well
[15:48] <beuno> but no c extensions
[15:48] <mwhudson> i _guess_ you could try running it under valgrind
[15:48] <mwhudson> but i guess it only happens in production?
[15:48] <jam> beuno: twisted uses zope.interfaces, and you certainly have 'twisted.python.zipstream' loaded
[15:49] <beuno> mwhudson, right, can't reproduce it locally
[15:49] <beuno> jam, right, we may from twisted
[15:49] <jam> (total of 773 modules loaded, so I don't know everything, but there is certainly a lot loaded)
[15:50] <jam> beuno: psycopg2.extensions
[15:50] <beuno> yes, we use psycopg2
[15:51] <jam> beuno: are you using python2.4 locally as well?
[15:51] <jam> beuno: you are also using protocol buffers, would those have compiled extensions?
[15:52] <jam> (it looks like python code, but it is something that might be involved)
[15:52] <jam> 'ubuntuone.web.musicstreaming.views', sounds like something that could have large content blobs
[15:53] <beuno> jam, no, 2.6 locally
[15:53] <jam> beuno: that could certainly be the issue
[15:53]  * beuno nods
[15:53] <jam> protocol buffers could be doing lots of integer ops
[15:53] <beuno> ubuntuone.web.musicstreaming.views accesses S3 to stream multi-MB files
[15:53] <beuno> aha
[15:53] <jam> though it should be "peak ops"
[15:54] <jam> for example, encoding a multi-mb file into a protocol buffer
[15:54] <jam> would be a lot of 4-byte integers
[15:54] <jam> beuno: also 64-bit vs 32-bit could be something
[15:54] <jam> beuno: I also don't 100% guarantee meliae works on python2.4
[15:54]  * beuno nods
[15:54] <beuno> ha
[15:54] <beuno> ok
[15:55] <jam> I know some of the code is 2.5, but I think the scanner is 2.4 safe
[15:55] <beuno> so a good thing to do is push for the lucid upgrade
[15:55] <jam> beuno: it sounds useful, but I won't guarantee it solves your problems :)
[15:56] <beuno> heh
[15:56] <mwhudson> uh
[15:56] <beuno> jam, it's a lot of new things to chase, thanks
[15:56] <mwhudson> protocol buffers might have leaks?
[15:56] <jam> beuno: my biggest suspect is python2.4
[15:56] <jam> mwhudson: python2.4 doesn't return integer allocations back to the os
[15:56] <jam> so it always allocates the peak integer arena
[15:56] <jam> which should be reusable (without "leaks")
[15:56] <jam> but you might have a big peak
[15:57] <jam> beuno: which would show up as VmPeak being high, but VmRSS being much lower
[15:57] <beuno> it does seem like we get peaks, and in fact see some MemoryErrors now and then
[15:57] <beuno> we usually see virtual mem at 1.5gb, and rss at 250mb
[15:59] <jam> beuno: well, VM includes mmaped files, etc
[16:00] <jam> I mean 'cat /proc/PID/status' and the VmPeak vs VmSize or VmRSS sort of thing
[16:00] <jam> if python2.6 "fixes it" the Peak would be the same, but the active size would be lower
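[Editor's note: the VmPeak-vs-VmRSS check jam suggests is easy to script. A small Linux-only sketch; the field names match /proc/[pid]/status, while the function name is illustrative:]

```python
def read_vm_stats(pid="self"):
    """Parse VmPeak/VmSize/VmRSS (in kB) out of /proc/<pid>/status.
    A large VmPeak with a much smaller VmRSS suggests a past
    allocation spike rather than a steadily growing leak."""
    stats = {}
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith(("VmPeak", "VmSize", "VmRSS")):
                key, value = line.split(":", 1)
                stats[key] = int(value.split()[0])  # values are in kB
    return stats
```

[beuno's numbers above — virtual memory at 1.5 GB but RSS at 250 MB — are consistent with the peak-then-hold pattern rather than live objects, which matches meliae only finding 11 MB of content.]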
[16:00]  * beuno nods
[16:00] <beuno> I'd have to look at it again
[16:01] <mwhudson> the integer blocks might show up in /proc/$pid/maps?
[16:01] <mwhudson> i can't remember how all this works
[16:01] <mwhudson> beuno: you should try using pypy instead :-)
[16:03] <beuno> heh
[16:03] <beuno> I have not written 90% of this code, so it's been interesting trying to debug it
[16:03] <jam> beuno: so it looks like the Implements and InterfaceClass may be protocol buffers stuff
[16:03] <jam> beuno: clearly you need to look closely at that 10% then :)
[16:03] <beuno> yes, that makes sense
[16:03] <beuno> jam, heh. It has been failing for over a year
[16:04] <beuno> so the only thing that can be blamed on me is increased usage
[16:04] <jam> beuno: you've peaked my interest, if you want to give me a peek at the code
[16:04] <jam> beuno: I can blame anything I want on you. Doesn't mean I'll be correct :)
[16:04] <beuno> heh
[16:04] <beuno> jam, I will add you to the team now
[16:06] <jam> beuno: so the idea behind python2.4 vs 2.6 is that the actual peak is happening at some point in the past
[16:06] <jam> and we are just holding on to the memory now
[16:06] <jam> but it isn't in "active" objects, so meliae can't find it
[16:07] <jam> I wonder about doing evil hacks and walking the actual PyInt buffers
[20:42] <aaronfay> I have been using the bzr_upload plugin (fantastic, btw), but I have a problem: the first time I uploaded, I used a location that was wrong; now even when I specify a new location, the saved location it falls back to is still the old one. How can I change that?
[20:43] <lifeless> --remember
[20:46] <mgz> funny post on the mailing list. I wonder how large the set of users is that we render bzr inaccessible to by using such things as 1) configure 2) https 3) the gpl
[20:47] <mgz> it might only be him.
[20:51] <aaronfay> lifeless: Ah, fantastic.  And now I see it in the manpage also, I couldn't find it before.  Thanks.
[20:53] <lifeless> mgz: 'not using configure' ?
[20:54] <mgz> he even linked us to his website where he explains why he doesn't like configure.
[20:54] <mgz> I... didn't try and understand too hard.
[20:54] <lifeless> yeah
[20:54] <lifeless> I can understand
[21:14] <roryy> didn't know one had to run ./configure for bzr
[21:20] <mgz> I looked at the message again, and he appeared to be complaining about bzr using python... which uses configure
[21:23] <roryy> ah
[21:23] <roryy> logical
[21:23] <roryy> the taint of configure
[22:01] <AJenbo> Hi i'm trying to pull a branch of openarena, but i keep getting a permission denied error :(
[22:01] <AJenbo> Permission denied (publickey).
[22:03] <AJenbo> Do I need to identify myself or something?
[22:03] <AJenbo> bzr branch lp:ubuntu/maverick/openarena
[22:14] <poolie> AJenbo, hi, there's a bug on the server, it should be fixed soon
[22:15] <poolie> AJenbo, hi, i think it's bug 666642
[22:15] <poolie> we're going to look in to it
[22:18] <AJenbo> ok, it's been there for a while now
[22:19] <AJenbo> any workarounds?
[23:30] <peitschie> mornin all :)
[23:39] <maxb> AJenbo: I think poolie may have mixed up two projects with "open" in their name, I can't see how that bug can apply to your issue