[00:01] the vast burst of ssh connection reset errors (which I've just requeued) might be why it was stopped
[00:09] I've emailed the list
[00:09] thanks
[00:10] there was a bug where the importer was tickling an lp bug
[00:10] i think we can ramp it up more now
[00:28] maxb, did you see my pm?
[00:32] hi
[00:33] I stopped it as it was failing hard and spamming error messages every 5 minutes
[00:33] so I disabled it until we could take a more in depth look
[00:33] I should have mailed the list when I did though, sorry
[00:43] james_w: hmm, I did a single import via bin/import-package and it completed fine
[00:43] i'm tempted to start it going with max_threads 1 and see what happens
[00:43] yeah, the issue was sqlite contention
[00:43] so a single import succeeding isn't surprising
[00:43] ok, starting it up
[00:44] huh
[00:44] apache2 is chewing jubany's cpu
[00:45] maxb, for packages.ubuntu.com perhaps?
[00:46] urgh, it appears to be being hit by 4+ webspiders simultaneously
[00:50] Started, seems ok, going to 4 threads
[00:53] seems ok, going to 8 threads
[00:55] seems ok, going to 16 threads
[00:55] uhm
[00:55] what does each thread entail?
[00:56] you'll get blacklisted and firewalled from LP shortly if that is full concurrent API use.
[00:56] there's a fair amount of unpacking tarballs and local bzr operations
[01:01] hmm, running into sqlite contention again
[01:03] down to 1 thread
[01:05] most finished ok, so back to 8 and seeing if the sqlite db can handle that
[01:05] maxb, maybe we should set up tmpreaper or similar on the tmpdir
[01:05] i thought there might have already been one
[01:05] anything over a month old could probably safely be killed
[01:05] we could. or we could just empty it manually every year :-)
[01:05] heh
[01:11] It's not entirely happy
[01:11] looks like it fails whenever two import threads try to complete at the same time
[01:12] * maxb drops it to 6 threads and hopes coincidence will mean that happens rarely whilst it slogs through the backlog
[01:15] maxb, the old code had an sqlite timeout of 30 seconds, the new one defaults to 5
[01:15] maxb, what do you think about trying 30 again?
[01:15] definitely do it
[01:20] maxb, https://code.launchpad.net/~james-w/udd/sqlite-timeout/+merge/101317
[01:27] approved
[01:29] ok, deploying now then
[01:47] I would get stuck waiting for gcc-snapshot to push wouldn't I?
[01:50] oh, it's pulling
[01:50] * james_w kills
[01:57] back up and running on the new code
[02:00] maxb, thanks for tracking this
[03:18] hmm, got a tree transform is malformed exception o.o
[05:49] james_w: why don't you move to postgresql?
[07:45] o/ lifeless
[08:02] 'morning
[08:02] morning all
[08:03] hi guys
[08:03] hey mgz, vila
[08:11] poolie: hi thar :)
[08:11] hi
[08:30] we doing standup?
[08:32] in 30m according to my calendar
[08:34] yeay timezones.
[08:47] ooh, standup
[08:49] jelmer: is landing in bzr-svn just pushing to lp:bzr-svn?
[08:50] LarstiQ: basically, yep
[08:51] LarstiQ: perhaps push a merge of the changes rather than the individual changes
[08:51] * LarstiQ nods
[08:51] I haven't really been doing that myself though, so me asking is a bit hypocritical :)
[08:52] jelmer: not watching the monkey ;)
[08:54] (:
[08:56] jelmer, poolie: where are you guys doing standups these days? Still mumble? or is it a hangout now?
[08:57] hey jam1, are you joining us?
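(Editor's note: the contention described above — imports failing whenever two threads complete at once, and the busy timeout being raised from the 5-second default back to 30 seconds in the linked merge proposal — is the classic sqlite "database is locked" pattern. Below is a minimal sketch of that idea only, not the actual UDD importer code; the database path, table, and worker function are made up for illustration.)

```python
import sqlite3
import threading

DB_PATH = "importer-status.db"  # hypothetical path, not the real UDD status database

def setup():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS results (package TEXT, status TEXT)")
    conn.close()

def record_result(package, status):
    # timeout=30 makes sqlite3 retry for up to 30 seconds while another thread
    # holds the write lock, instead of failing with "database is locked" after
    # the stdlib default of 5 seconds -- the same 5 vs 30 trade-off as in the log.
    conn = sqlite3.connect(DB_PATH, timeout=30)
    try:
        with conn:  # one short write transaction per completed import
            conn.execute(
                "INSERT INTO results (package, status) VALUES (?, ?)",
                (package, status),
            )
    finally:
        conn.close()

def worker(packages):
    for pkg in packages:
        # ... unpack tarballs, run local bzr operations ...
        record_result(pkg, "ok")

if __name__ == "__main__":
    setup()
    # More worker threads (max_threads 1 -> 4 -> 8 -> 16 in the log) simply means
    # more writers racing for sqlite's single write lock at completion time.
    threads = [threading.Thread(target=worker, args=([name],))
               for name in ("pkg-a", "pkg-b", "pkg-c")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Even with a generous timeout, sqlite still serialises writers, which is why ramping from 8 to 16 threads mostly buys more waiting; lifeless's later suggestion to move to PostgreSQL is the usual answer once concurrent writers are the norm.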
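(Editor's note: on the tmpreaper suggestion — delete anything in the importer's tmpdir older than a month — here is a small sketch of that policy in Python. The directory and age threshold are assumptions taken from the conversation, not the actual deployment configuration.)

```python
import os
import shutil
import time

TMPDIR = "/srv/importer/tmp"    # hypothetical location of the importer's tmpdir
MAX_AGE = 30 * 24 * 60 * 60     # "anything over a month old" from the log

def reap_old_entries(root=TMPDIR, max_age=MAX_AGE):
    """Remove top-level files and directories under root untouched for max_age seconds."""
    cutoff = time.time() - max_age
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            if os.path.isdir(path):
                shutil.rmtree(path, ignore_errors=True)
            else:
                os.remove(path)

if __name__ == "__main__":
    reap_old_entries()
```

In practice a cron'd tmpreaper, or a periodic `find` over the directory with a one-month age cutoff, does the same job with no custom code; the sketch only spells out the policy being discussed.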
[08:57] Yeah, I should be today
[08:57] we usually use hangouts
[08:57] and then going forward
[08:58] \o/
=== jam1 is now known as jam
[08:59] hi all, hangouts
[09:22] jelmer: could you lend a hand with https://bugs.launchpad.net/bzr-svn/+bug/920411 ?
[09:22] Ubuntu bug 920411 in Bazaar Subversion Plugin "bzr info fails due to missing subvertpy" [Medium,Triaged]
[09:27] jelmer: bzrlib/info.py +499 seems to want to instantiate all registered bzrdirs
[09:27] LarstiQ: that seems like the man issue
[09:27] *main
[09:27] * LarstiQ nods
[09:28] perhaps this bug should be reassigned to bzr
[09:28] o/ larstiq
[09:28] hey poolie :)
[09:30] jelmer: alternatively perhaps bzr-svn shouldn't register a format if subvertpy is not available, or somesuch
[09:32] but info's behaviour is not nice anyway
[09:34] LarstiQ: that would cause bzr to silently be unable to open svn working copies if subvertpy is missing
[09:35] jelmer: so ideally it will complain that subvertpy is missing when it tries to open an svn working copy, but not before?
[09:42] LarstiQ: yeah
[10:06] Hello! Any chance of an answer to https://answers.launchpad.net/bzr-fastimport/+question/192715 please? Is this a bug?
[10:09] thanks for the reproduction
[10:10] it does sound like a bug
[10:19] rbasak: I think you want to specify the marks file to bzr fast-import too
[10:19] jelmer: bzr fast-import uses its own internal marks file inside .bzr
[10:20] jelmer: I don't think it needs to know about the id to git hash mapping, does it? Just its own id to bzr id mapping
[10:25] hmm, true
=== Wellark_ is now known as Wellark
[11:57] jelmer: cheers, enough digging for now
[12:49] lifeless, that's exactly what these problems are being caused by :-)
=== yofel_ is now known as yofel
=== deryck is now known as deryck[lunch]
[17:51] please note: as I mentioned here https://answers.launchpad.net/bzr/+question/192935 the developer docs need to be updated
[17:52] lifeless: ^
[17:53] jelmer: thanks for helping me locate the answer to my question
=== deryck[lunch] is now known as deryck
[21:07] jelmer: are you RFA'ing the whole bzr stack ?
[21:12] lifeless: no, just (trying to) reduce the number of packages I'm involved in
[22:56] james_w: ugh ugh ugh. Yes, the problem with requeue --full / --zap-revids *is* non-unicodeness
[22:56] how on earth is storm managing to break so badly that it silently does nothing :-(
[23:24] hi all
[23:25] hey poolie
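(Editor's note: the fix converged on in the bug 920411 discussion above — `bzr info` failing because enumerating every registered bzrdir format drags in subvertpy — is "only complain about subvertpy when an svn working copy is actually being opened". The snippet below is a generic illustration of that lazy-probing pattern with made-up class and exception names; it does not use the real bzrlib/bzr-svn registration API.)

```python
import os

class NotASvnTree(Exception):
    pass

class DependencyMissing(Exception):
    pass

class SvnWorkingCopyProber(object):
    """Hypothetical prober: cheap detection first, heavy import only on a hit."""

    def looks_like_svn(self, path):
        # Detecting an svn checkout needs no subvertpy at all.
        return os.path.isdir(os.path.join(path, ".svn"))

    def open(self, path):
        if not self.looks_like_svn(path):
            # Not our format: let other probers try. No complaint about
            # subvertpy here, so `bzr info` on non-svn trees is unaffected.
            raise NotASvnTree(path)
        try:
            import subvertpy  # deferred import: only attempted for real svn trees
        except ImportError:
            raise DependencyMissing(
                "subvertpy is required to open Subversion working copies")
        return subvertpy  # stand-in for constructing a real working-copy object
```

The point is the ordering jelmer and LarstiQ agree on: detection stays cheap and dependency-free so unrelated commands keep working, while a genuine svn tree still yields a clear "subvertpy is missing" error rather than silently failing to open as svn.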
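(Editor's note: on the 22:56 exchange about requeue --full / --zap-revids and storm "silently doing nothing" because of non-unicodeness — one well-known way that class of bug shows up is a bytes value compared against a TEXT column: sqlite binds the parameter as a BLOB, a BLOB never compares equal to TEXT, so the statement runs without error but matches zero rows. The sketch below demonstrates that failure mode with plain sqlite3 and invented table names; whether this is the exact mechanism inside the UDD requeue script or Storm is a guess, and Python 3 bytes/str are used only to make the mismatch explicit.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE packages (name TEXT)")
conn.execute("INSERT INTO packages VALUES (?)", ("gcc-snapshot",))

# Text parameter: matches the row as expected.
print(conn.execute(
    "SELECT count(*) FROM packages WHERE name = ?", ("gcc-snapshot",)
).fetchone())   # -> (1,)

# Bytes parameter: bound as a BLOB, which never compares equal to TEXT,
# so the query succeeds but quietly matches nothing -- no exception, no rows.
print(conn.execute(
    "SELECT count(*) FROM packages WHERE name = ?", (b"gcc-snapshot",)
).fetchone())   # -> (0,)
```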