[00:01] <maxb> the vast burst of ssh connection reset errors (which I've just requeued) might be why it was stopped
[00:09] <maxb> I've emailed the list
[00:09] <poolie> thanks
[00:10] <poolie> there was a bug where the importer was tickling an lp bug
[00:10] <poolie> i think we can ramp it up more now
[00:28] <poolie> maxb, did you see my pm?
[00:32] <james_w> hi
[00:33] <james_w> I stopped it as it was failing hard and spamming error messages every 5 minutes
[00:33] <james_w> so I disabled it until we could take a more in depth look
[00:33] <james_w> I should have mailed the list when I did though, sorry
[00:43] <maxb> james_w: hmm, I did a single import via bin/import-package and it completed fine
[00:43] <maxb> i'm tempted to start it going with max_threads 1 and see what happens
[00:43] <james_w> yeah, the issue was sqlite contention
[00:43] <james_w> so a single import succeeding isn't surprising
[00:43] <maxb> ok, starting it up
[00:44] <maxb> huh
[00:44] <maxb> apache2 is chewing jubany's cpu
[00:45] <james_w> maxb, for packages.ubuntu.com perhaps?
[00:46] <maxb> urgh, it appears to be being hit by 4+ webspiders simultaneously
[00:50] <maxb> Started, seems ok, going to 4 threads
[00:53] <maxb> seems ok, going to 8 threads
[00:55] <maxb> seems ok, going to 16 threads
[00:55] <lifeless> uhm
[00:55] <lifeless> what does each thread entail ?
[00:56] <lifeless> you'll get blacklisted and firewalled from LP shortly if that is full concurrent API use.
[00:56] <maxb> there's a fair amount of unpacking tarballs and local bzr operations
[01:01] <maxb> hmm, running into sqlite contention again
[01:03] <maxb> down to 1 thread
[01:05] <maxb> most finished ok, so back to 8 and seeing if the sqlite db can handle that
[01:05] <poolie> maxb, maybe we should set up tmpreaper or similar on the tmpdir
[01:05] <poolie> i thought there might have already been one
[01:05] <poolie> anything over a month old could probably safely be killed
[01:05] <maxb> we could. or we could just empty it manually every year :-)
[01:05] <james_w> heh
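[Editor's note: the tmpreaper-style cleanup poolie suggests — delete anything in the tmpdir older than a month — can be sketched in a few lines of Python. This is an illustrative sketch, not code from udd; the function name and the 30-day default are assumptions.]

```python
import os
import time

def reap_old_files(root, max_age_days=30):
    """Delete regular files under root whose mtime is older than
    max_age_days, and return the paths removed.  A minimal stand-in
    for tmpreaper; directories are left in place."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```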
[01:11] <maxb> It's not entirely happy
[01:11] <maxb> looks like it fails whenever two import threads try to complete at the same time
[01:12]  * maxb drops it to 6 threads and hopes coincidence will mean that happens rarely whilst it slogs through the backlog
[01:15] <james_w> maxb, the old code had an sqlite timeout of 30 seconds, the new one defaults to 5
[01:15] <james_w> maxb, what do you think about trying 30 again?
[01:15] <maxb> definitely do it
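[Editor's note: the contention james_w describes is SQLite's write lock, and the 5s-vs-30s timeout is the busy timeout that Python's `sqlite3.connect(timeout=...)` exposes. The sketch below reproduces the failure mode under an assumed schema: one connection holds the write lock, and a second connection with a short timeout gives up with "database is locked" instead of waiting, which is what a 30s timeout mostly avoids.]

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "queue.db")

# isolation_level=None puts the connection in autocommit mode so we
# can manage the transaction (and hence the write lock) explicitly.
writer = sqlite3.connect(db, isolation_level=None)
writer.execute("CREATE TABLE jobs (id INTEGER)")
writer.execute("BEGIN IMMEDIATE")  # take the write lock now
writer.execute("INSERT INTO jobs VALUES (1)")

# A second connection with a tiny busy timeout fails fast; with
# timeout=30 it would keep retrying for up to 30 seconds instead.
other = sqlite3.connect(db, timeout=0.1, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")
    locked = False
except sqlite3.OperationalError:  # "database is locked"
    locked = True

writer.execute("COMMIT")
```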
[01:20] <james_w> maxb, https://code.launchpad.net/~james-w/udd/sqlite-timeout/+merge/101317
[01:27] <maxb> approved
[01:29] <james_w> ok, deploying now then
[01:47] <james_w> I would get stuck waiting for gcc-snapshot to push wouldn't I?
[01:50] <james_w> oh, it's pulling
[01:50]  * james_w kills
[01:57] <james_w> back up and running on the new code
[02:00] <james_w> maxb, thanks for tracking this
[03:18] <mgrandi> hmm, got a tree transform is malformed exception o.o
[05:49] <lifeless> james_w: why don't you move to postgresql?
[07:45] <poolie> o/ lifeless
[08:02] <jelmer> 'morning
[08:02] <mgz> morning all
[08:03] <vila> hi guys
[08:03] <jelmer> hey mgz, vila
[08:11] <lifeless> poolie: hi thar :)
[08:11] <poolie> hi
[08:30] <mgz> we doing standup?
[08:32] <jelmer> in 30m according to my calendar
[08:34] <mgz> yeay timezones.
[08:47] <LarstiQ> ooh, standup
[08:49] <LarstiQ> jelmer: is landing in bzr-svn just pushing to lp:bzr-svn?
[08:50] <jelmer> LarstiQ: basically, yep
[08:51] <jelmer> LarstiQ: perhaps push a merge of the changes rather than the individual changes
[08:51]  * LarstiQ nods
[08:51] <jelmer> I haven't really been doing that myself though, so me asking is a bit hypocritical :)

[08:52] <LarstiQ> jelmer: not watching the monkey ;)
[08:54] <jelmer> (:
[08:56] <jam1> jelmer, poolie: where are you guys doing standups these days? Still mumble? or is it a hangout now?
[08:57] <jelmer> hey jam1, are you joining us?
[08:57] <jam1> Yeah, I should be today
[08:57] <jelmer> we usually use hangouts
[08:57] <jam1> and then going forward
[08:58] <jelmer> \o/
[08:59] <poolie> hi all, hangouts
[09:22] <LarstiQ> jelmer: could you lend a hand with https://bugs.launchpad.net/bzr-svn/+bug/920411 ?
[09:27] <LarstiQ> jelmer: bzrlib/info.py +499 seems to want to instantiate all registered bzrdirs
[09:27] <jelmer> LarstiQ: that seems like the main issue
[09:27]  * LarstiQ nods
[09:28] <jelmer> perhaps this bug should be reassigned to bzr
[09:28] <poolie> o/ larstiq
[09:28] <LarstiQ> hey poolie :)
[09:30] <LarstiQ> jelmer: alternatively perhaps bzr-svn shouldn't register a format if subvertpy is not available, or somesuch
[09:32] <LarstiQ> but info's behaviour is not nice anyway
[09:34] <jelmer> LarstiQ: that would cause bzr to silently be unable to open svn working copies if subvertpy is missing
[09:35] <LarstiQ> jelmer: so ideally it will complain that subvertpy is missing when it tries to open an svn working copy, but not before?
[09:42] <jelmer> LarstiQ: yeah
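[Editor's note: the design LarstiQ and jelmer converge on — register the format eagerly, but only complain about the missing dependency when an svn working copy is actually opened — can be sketched generically. The class and method names below are hypothetical, not bzrlib's actual probe API.]

```python
import importlib

class LazyDependencyFormat:
    """Hypothetical format probe: registration is cheap and always
    succeeds; the backend module is only imported when open() is
    called, so a missing dependency errors at use time, not startup."""

    def __init__(self, backend_module):
        self.backend_module = backend_module

    def open(self, path):
        try:
            backend = importlib.import_module(self.backend_module)
        except ImportError as e:
            raise RuntimeError(
                "%s is required to open %s" % (self.backend_module, path)
            ) from e
        # A real implementation would hand off to the backend here.
        return (backend, path)
```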
[10:06] <rbasak> Hello! Any chance of an answer to https://answers.launchpad.net/bzr-fastimport/+question/192715 please? Is this a bug?
[10:09] <poolie> thanks for the reproduction
[10:10] <poolie> it does sound like a bug
[10:19] <jelmer> rbasak: I think you want to specify the marks file to bzr fast-import too
[10:19] <rbasak> jelmer: bzr fast-import uses its own internal marks file inside .bzr
[09:20] <rbasak> jelmer: I don't think it needs to know about the id to git hash mapping, does it? Just its own id to bzr id mapping
[10:25] <jelmer> hmm, true
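[Editor's note: the marks file being discussed is the git fast-import style mapping of `:mark` to an object id, one `:<mark> <id>` pair per line; bzr fast-import keeps an analogous mapping from marks to bzr revision ids. The parser below assumes that line format; the exact on-disk format bzr fast-import uses inside `.bzr` is not shown in this log.]

```python
def parse_marks(lines):
    """Parse git-fast-import-style marks lines (':<mark> <id>')
    into a dict mapping mark (without the colon) to the id."""
    marks = {}
    for line in lines:
        line = line.strip()
        if not line.startswith(":"):
            continue  # skip blanks and anything malformed
        mark, _, ident = line.partition(" ")
        marks[mark.lstrip(":")] = ident
    return marks
```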
[11:57] <LarstiQ> jelmer: cheers, enough digging for now
[12:49] <james_w> lifeless, that's exactly what these problems are being caused by :-)
[17:51] <sudarshans> please note: as I  mentioned here https://answers.launchpad.net/bzr/+question/192935 the developer docs need to be updated
[17:52] <sudarshans> lifeless: ^
[17:53] <sudarshans> jelmer: thanks for helping me locate the answer to my question
[21:07] <lifeless> jelmer: are you RFA'ing the whole bzr stack ?
[21:12] <jelmer> lifeless: no, just (trying to) reduce the number of packages I'm involved in
[22:56] <maxb> james_w: ugh ugh ugh. Yes, the problem with requeue --full / --zap-revids *is* non-unicodeness
[22:56] <maxb> how on earth is storm managing to break so badly that it silently does nothing :-(
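[Editor's note: the "silently does nothing" failure maxb hits is easy to reproduce at the SQLite level, without storm. SQLite never considers a TEXT value equal to a BLOB, and Python binds `bytes` parameters as BLOBs, so a non-unicode query parameter matches zero rows with no error. A minimal demonstration, using an invented `revids` table:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revids (revid TEXT)")
conn.execute("INSERT INTO revids VALUES (?)", ("rev-1",))

# bytes binds as BLOB; TEXT = BLOB is never true in SQLite,
# so this query silently returns zero rows rather than erroring.
as_bytes = conn.execute(
    "SELECT COUNT(*) FROM revids WHERE revid = ?", (b"rev-1",)
).fetchone()[0]

# The unicode string matches as expected.
as_text = conn.execute(
    "SELECT COUNT(*) FROM revids WHERE revid = ?", ("rev-1",)
).fetchone()[0]
```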
[23:24] <poolie> hi all
[23:25] <jelmer> hey poolie