[00:01] <mwhudson> losa ping!
[00:01] <spm> mwhudson: yo!
[00:01] <mwhudson> spm: it's the pick-a-staging-import-slave and run http://pastebin.ubuntu.com/411296/ time
[00:02] <mwhudson> (hopefully with correct urls this time)
[00:02]  * thumper heads for coffee
[00:02] <spm> mwhudson: nah; just make em up; urls are overrated
[00:03] <thumper> mwhudson: that doesn't store the git tree properly though
[00:03] <mwhudson> spm: :)
[00:03] <mwhudson> thumper: oh yes, you're right
[00:03]  * thumper really heads for coffee
[00:03] <spm> mwhudson: so raspberry again?
[00:03]  * mwhudson adjusts his iron-o-meter
[00:03] <mwhudson> spm: no?
[00:04] <spm> that was a joke - at myself.. but :-)
[00:04] <mwhudson> oh good
[00:04]  * mwhudson turns the gain back down
[00:04] <spm> :-)
[00:04] <mwhudson> spm: i'll need to whip up another patch first so no hurry
[00:05] <spm> mwhudson: hrm. yes cause the apply patch in that paste is telling me it's applied already
[00:06] <spm> mwhudson: the other steps are done; fwiw
[00:06] <mwhudson> oh heh
[00:23] <spm> thumper: I'm guessing this is related to the rt we discussed yesterday? https://pastebin.canonical.com/30326/
[00:34] <mwhudson> jelmer: does bzr-git now depend on bzr 2.2?
[00:35] <mwhudson> jelmer: http://pastebin.ubuntu.com/411354/
[00:35] <jelmer> mwhudson: Have you recompiled dulwich?
[00:36] <mwhudson> ah
[00:36] <mwhudson> yes, but probably only with python 2.6
[01:09] <mwhudson> spm: can you apply this patch to strawberry's launchpad tree: http://pastebin.ubuntu.com/411369/
[01:11] <spm> raspberry?
[01:11] <mwhudson> some kind of fruit
[01:11] <mwhudson> not an antarctic base
[01:11] <spm> sweet, that narrows it down for me to ~ 80% of our servers... ;-)
[01:12] <spm> mwhudson: patched
[01:14] <thumper> mwhudson: well that saved me some work for today :)
[01:14] <thumper> mwhudson: testing the kernel?
[01:15] <mwhudson> thumper: the bzr-git thing?
[01:15] <mwhudson> thumper: yeah, just hit go like a minute ago
[01:15]  * thumper watches
[01:17] <jelmer> oh, I didn't know you were going to try the kernel again
[01:18] <jelmer> the unusual modes thing isn't fixed yet
[01:18] <jelmer> although if the batch size is 5k you might not notice it
[01:20] <wgrant> Is it faster now?
[01:22] <jelmer> wgrant: a full kernel import should be less than 6 hours now
[01:22] <wgrant> Wow.
[01:23] <lifeless> mwhudson: and not a penguin?
[01:23] <wgrant> Isn't that almost three orders of magnitude faster?
[01:23] <lifeless> I think we should just use the last 4 hex digits of their IP address
[01:24] <jelmer> wgrant: two things made it fast - there was a bug in the sqlite cache that made it store a few billion rows rather than less than a million
[01:24] <wgrant> Haha.
[01:24] <lifeless> jelmer: !
[01:24] <jelmer> wgrant: and InventoryDirectory.children in Bazaar should be -Devil
[01:24] <lifeless> jelmer: in 2a it should be tolerable.
[01:24] <lifeless> jelmer: what was it doing that sucked? We can tune it further...
[01:24] <jelmer> lifeless: it's still very slow in 2a - I haven't compared it to pack-0.92
[01:25] <SlonUA> mwhudson: hi
[01:25] <lifeless> jelmer: pack based is full inventory load
[01:25] <lifeless> jelmer: so editing one dir is definitely cheaper in 2a :>
[01:25] <lifeless> bbs
[01:25] <jelmer> lifeless: I'm now walking the git tree objects rather than InventoryDirectory.children
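A toy sketch of the change jelmer describes: walk nested tree objects directly instead of loading a full inventory per commit. Plain dicts stand in for git Tree objects here; bzr-git itself uses dulwich's Repo/Tree APIs.

```python
def walk_tree(tree, prefix=""):
    """Yield (path, blob_sha) for every file under `tree`.

    `tree` maps names to either a blob sha (a file) or another
    dict (a subtree) -- a stand-in for git's Tree objects.
    """
    for name, entry in sorted(tree.items()):
        path = prefix + name
        if isinstance(entry, dict):
            # A subtree: recurse into it without touching siblings.
            for item in walk_tree(entry, path + "/"):
                yield item
        else:
            yield path, entry
```

The point of the structure is that an unchanged subtree keeps its sha, so an importer can skip it entirely rather than re-enumerating every child the way a full-inventory walk does.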
[01:25] <wgrant> jelmer: I notice that you have a branch to move upload processing into buildd-manager itself -- why that rather than letting a separate daemon handle upload processing, thus making buildd-manager less horribly slow?
[01:26] <SlonUA> mwhudson: are you going to create a guide for launchpad.dev usage as remote access + code_hosting?
[01:26] <wgrant> We are wasting a *huge* amount of buildd time at the moment because of the synchronous upload processing.
[01:26] <jelmer> lifeless: that works well too so there's no need to tweak anymore from bzr-git's POV
[01:27] <jelmer> wgrant: It's the first step towards doing things in a more asynchronous manner
[01:27] <jelmer> wgrant: and it at least avoids the overhead of importing all of launchpad each time
[01:27] <wgrant> jelmer: I'm not sure I see how embedding the synchronous process further into buildd-manager helps with asynchronicity.
[01:27] <wgrant> But yes, removing that overhead is a good step.
[01:28] <SlonUA> pals, could u say how much stuff will be run under 'make run_all'
[01:37] <wgrant> You know, that import does seem to be going a little more quickly.
[01:42] <jelmer> wgrant: :-)
[01:49] <lifeless> SlonUA: I doubt a guide for production use of launchpad.dev will be made - it doesn't make sense :)
[01:49] <lifeless> SlonUA: as for what is started by make, you can look at the makefile; or is there some reason you care about what stuff starts ?
[01:50] <SlonUA> lifeless: thanks for notice ... just interesting about targets in make .. yup i will look
[01:51] <SlonUA> lifeless: about the guide .... yes we have two ... remote access and code-hosting ... but for using them together we didn't have one ... but i've done it already ... i just use a vm for launchpad.dev
[01:52] <lifeless> SlonUA: what do you mean by remote access?
[01:53] <SlonUA> lifeless: access to launchpad.dev from another host
[01:54] <jelmer> mwhudson: I'm curious how fast the second batch will be, since it will have to reconstruct some of the base texts
[01:55] <wgrant> It doesn't look like the initial 'finding revisions to fetch:generating index' is any faster.
[01:55] <jelmer> wgrant: that really can't be optimized much further, at least not in bzr-git
[01:55] <mwhudson> no, so maybe we should import even more than 5000 revisions in one go
[01:56] <jelmer> mwhudson: yeah, that'd make sense
[01:56] <wgrant> It looks like only half the import time was actual rev imports.
[01:56] <wgrant> Impressive.
[01:58] <jelmer> lifeless: git imports now actually spend most of their time in add_inventory_by_delta()
[01:58] <lifeless> jelmer: interesting
[01:58] <lifeless> jelmer: how well does that perform for you ?
[02:02] <jelmer> lifeless: reasonably
[02:03] <jelmer> lifeless: it just happens to be the main bottleneck left
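A toy model of what add_inventory_by_delta() does conceptually, per the exchange above: apply a small list of (old_path, new_path, file_id, new_entry) changes to the previous inventory instead of writing a complete new one. The dict-of-paths representation is a simplification, not bzr's actual data structure.

```python
def apply_inventory_delta(inventory, delta):
    """Return a new inventory dict with `delta` applied.

    `inventory` maps path -> (file_id, entry); each delta item is
    (old_path, new_path, file_id, new_entry), mirroring the shape
    of bzr's inventory deltas.
    """
    inv = dict(inventory)
    for old_path, new_path, file_id, entry in delta:
        if old_path is not None:
            del inv[old_path]                 # removal, or rename source
        if new_path is not None:
            inv[new_path] = (file_id, entry)  # addition, or rename target
    return inv
```

A rename is just an item with both paths set, so one commit touching a handful of files costs a handful of delta entries rather than a whole-tree rewrite.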
[02:17] <mwhudson> it seems like the second kernel import is quite a lot slower
[02:18] <mwhudson> still 100 times faster than what's in production, mind...
[02:21] <thumper> will be interesting to watch how it slows
[02:22] <thumper> hmm, I wonder why postgresql 8.4 didn't restart automatically on restart
[02:28] <thumper> mwhudson: I'm removing the is_personal_branch from IBranch and adding supports_short_identities to BranchTarget
[02:28] <thumper> mwhudson: after a talk with jml this morning
[02:28] <mwhudson> thumper: +1
[02:48]  * mwhudson afk for lunch/emma collection
[04:51] <thumper> mwhudson: watching the git import speed on strawberry
[04:51] <mwhudson> thumper: it seems to be doing about 150 revs a minute?
[04:51] <thumper> it spends < 50% of the time doing the import
[04:52] <thumper> 25min for 5000
[04:52] <thumper> so about 200 per minute I guess
[04:52] <thumper> the rest is determining revisions, packing et al
[04:52] <thumper> and overhead
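thumper's arithmetic, spelled out:

```python
# Figures quoted above: one 5000-revision batch takes roughly
# 25 minutes of actual import time on strawberry.
revs_per_batch = 5000
import_minutes = 25

rate = revs_per_batch / import_minutes  # ~200 revisions per minute
```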
[04:52] <thumper> do you think 5k is big enough?
[04:53] <mwhudson> mm
[04:53] <thumper> 10k maybe?
[04:53] <mwhudson> 10k sounds about right
[04:53] <mwhudson> could even go higher i guess
[04:53] <mwhudson> lemme check the memory usage
[04:56] <thumper> is the same number used by bzr-svn for the incremental imports?
[04:57] <mwhudson> currently yes
[04:58] <mwhudson> but that'll be very easy to change to something separate
[04:58] <thumper> actually the number per minute seems to vary wildly
[04:59] <wgrant> Hm, it's already up to 50 minutes per run?
[04:59] <thumper> wgrant: but seems relatively constant
[04:59] <thumper> wgrant: and I'm not sure that strawberry is particularly fast
[04:59] <wgrant> IIRC the production import started at ~3 hours, and is now >20.
[05:00] <thumper> wgrant: much of that we think is dealing with the gazillion row sqlite db
[05:00] <thumper> wgrant: let it run for a day and see where we are
[05:00] <mwhudson> when select count(*) from trees; takes an hour...
[05:00] <wgrant> Right, but I'm just saying that initial looks can be deceiving.
[05:00] <wgrant> Heh.
[05:01] <thumper> mwhudson: want to get those bzr-git changes into a branch for me to review?
[05:01] <wgrant> And it's already taking almost twice as long as the first couple...
[05:01] <mwhudson> thumper: yeah ok
[05:01] <thumper> wgrant: don't panic
[05:02] <thumper> mwhudson: how's the non-git work going?
[05:03] <wgrant> I guess even at this speed it will be done in two or so days.
[05:03] <wgrant> A tad better than production!
[05:04] <mwhudson> thumper: i think the first pipe that changes codehosting is basically done, barring the fact that most of the new docstrings are "XXX"
[05:04] <thumper> :)
[05:05] <thumper> this import run is down to 50/minute
[05:05] <thumper> for the last 10 minutes anyway
[05:07] <wgrant> thumper: It looks like around 200 from here...
[05:08] <thumper> variable...
[05:12] <wgrant> How do production's DB indices compare to the distributed schema?
[05:12] <lifeless> what do you mean ?
[05:13] <wgrant> Is the dev schema kept up to date with index changes that presumably occur first on production?
[05:14] <lifeless> usually yes;
[05:14] <lifeless> every now and then you'll see stub add a db patch saying 'already done'
[05:14] <lifeless> but usually they are done in dev and cp'd to prod
[05:14] <wgrant> Aha.
[05:15] <stub> They have to be the same - the same tools used to upgrade dev databases are used to upgrade production databases.
[06:17] <stub> Is the destination select in the PPA copy form broken? I'm trying to copy packages to Lucid but getting told "The The following sources cannot be copied:  slony1 1.2.20-1~launchpad.hardy1 in hardy (same version already has  published binaries in the destination archive)"
[06:19] <wgrant> stub: Where are you copying to/from?
[06:20] <wgrant> Both archive and series are important.
[06:22] <wgrant> mwhudson: Landed; thanks.
[06:32] <stub> wgrant: I'm trying to copy from https://edge.launchpad.net/~maxb/+archive/launchpad/+copy-packages
[06:33] <stub> I need to get the packages rebuilt for lucid
[06:34] <wgrant> stub: You can't use an intra-PPA copy to do a rebuild.
[06:34] <wgrant> That would result in different binaries with the same name.
[06:34] <stub> Which makes me think the form is bogus, since it is asking me for the series.
[06:34] <wgrant> That's useful for when you want to copy to another PPA.
[06:35] <wgrant> But yes, it should more obviously forbid source-only copies to the same PPA.
[06:35] <stub> So I can copy it to my PPA with the same series, and then rebuild it in my PPA?
[06:35] <wgrant> You could do that, but you won't be able to copy it back in.
[06:36] <stub> This all seems terribly arbitrary and confusing to an outsider.
[06:39] <stub> Garh.... deleted the previous copy (for karmic, now useless), and still get the same error message.
[06:39] <stub> I think I need to delete the PPA
[06:40] <wgrant> You cannot ever have different binaries of the same name again.
[06:40] <wgrant> The restriction is not arbitrary and confusing -- you cannot have multiple distinct files of the same name in the same PPA, because that is a lie and is very confusing.
[06:41] <wgrant> However, we may eventually have a facility for requesting binary rebuilds which append some as-yet-undecided string to the version.
[06:43] <wgrant> (note, however, that this is the Soyuz definition of 'eventually')
[07:05] <adeuring> good morning
[07:29] <spm> morning adeuring!
[07:30] <spm> thumper: ref that configs merge proposal. 1. +1, 2. unless I hear otherwise in the next 2-3 mins :-) I'll submit the merge for you
[07:57] <spm> thumper: scratch that merge via me - appear to be having key issues between me/bzr/pqm due to a new key of mine; so will need to fix that. aka yak shaving. probably quicker if you submit when ready.
[08:15] <wgrant> noodles775: Can I move lp.soyuz.browser.archive.traverse_distro_archive to Distribution.getArchive and export it? It does violate the separation a little more than now, but such is life with the webservice...
[08:16] <spm> mwhudson: did you see this earlier? staging codehost: https://pastebin.canonical.com/30326/ appears to be a diff one to your other patch I've been using.
[08:17] <mwhudson> spm: that one's thumper's fault!
[08:18] <mwhudson> spm: i guess hacking out the update_preview_diff section from the config will let it start it up
[08:20] <spm> mwhudson: oh fun :-)
[08:21] <spm> mwhudson: semi seriously tho - is it problematic that these sorts of changes can get past the test suite to blow up staging? ie is this testable and hence do I need to bug report?
[08:21] <wgrant> Speaking of configs... is edge going to update at some point?
[08:21] <spm> heh; not if staging blows up.
[08:21] <wgrant> But I thought the config key removals were reverted :(
[08:21] <spm> the updates typically run; crash; so we roll back.
[08:23] <mwhudson> spm: i guess buildbot/ec2-when-run-by-a-launchpad-dev could test that all the configs in the lp:lp-production-configs branch load
[08:24] <spm> interesting. the edge updates aren't actually disabled atm; and should be running right now.
[08:24] <wgrant> mwhudson: You mean like is suggested in bug #557271?
[08:24] <mup> Bug #557271: Unable to remove config entries from the schema <Launchpad Foundations:New> <https://launchpad.net/bugs/557271>
[08:24] <mwhudson> spm: at least thumper's branch is only on db-devel
[08:24] <spm> haha. failing to build/sync. blech.
[08:25] <maxb> stub: Hi. It would be rather horridly confusing if "1.2.20-1~launchpad.hardy1" was to be rebuilt in lucid, no? :-)
[08:25] <spm> maxb: (mock indignation for stub) I Fail To See The Problem (but privately hahahaha oops)
[08:26] <mwhudson> wgrant: great minds think alike!
[08:27] <mwhudson> (or fools never differ, depending on your pov)
[08:29] <wgrant> maxb: What are we going to do about Python 2.6's changed output (tarfile trailing slashes, "[111] Connection refused" vs "(111, 'Connection refused')", that sort of thing)? Should we adjust tests to cope with both, or just fix them in the 2.6 branch and do another mass merge on migration day?
[08:31] <maxb> Given doctests don't allow for "this or that", I think fix-in-branch-and-mass-merge for those
[08:32] <wgrant> Well, we can easily and somewhat safely ellipsise the latter case.
[08:32] <wgrant> But I guess since we're going to have to mass-merge some stuff, we might as well fix it properly in the mass-merge.
[08:32] <maxb> The tarfile madness might be more amenable to making compatible, I'm not sure
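The "ellipsise" option wgrant mentions can be sketched with doctest's own output checker; the two error strings below are the ones quoted above, everything else is illustrative.

```python
import doctest

def matches(want, got):
    """True if `got` satisfies `want` under doctest ELLIPSIS rules."""
    checker = doctest.OutputChecker()
    return checker.check_output(want, got, doctest.ELLIPSIS)

# One expected-output pattern covering both Python versions' reprs.
WANT = "...111...Connection refused...\n"
```

Both `matches(WANT, "(111, 'Connection refused')\n")` and `matches(WANT, "[111] Connection refused\n")` hold, which is why ellipsising works for this case but not for structural differences like the tarfile trailing slashes.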
[08:32] <lifeless> use better assertions ;)
[08:33] <wgrant> Also, are you going to finish off your assertion SyntaxWarning branch?
[08:33] <lifeless> assertThat is your friend
[08:33] <wgrant> initialise-from-parent.py is the only 2.6 Soyuz-related failure that isn't trivially fixable.
[08:34] <maxb> wgrant: I'm a bit blocked on what to do re initialise-from-parent.py. Creating an entire distroseries in the test is beyond my skill level at present
[08:34] <maxb> I was thinking of just seeing if the test works if I make it init from warty instead of hoary in the present sampledata
[08:37] <wgrant> maxb: The error is with the pending builds, right?
[08:37] <wgrant> If so, as a quick fix I would just make the builds no longer pending.
[08:37] <wgrant> Until we can replace most of these abominable tests.
[08:38] <maxb> i.e. modify the sampledata hoary in the test setup?
[08:38] <wgrant> Exactly.
[08:39] <wgrant> We already do similar things in other places.
[08:39] <wgrant> Just flip the builds to FAILEDTOBUILD or similar.
[08:40] <maxb> And the test harness will magically notice that I've dirtied the DB, and reinit it after my test?
[08:40] <wgrant> Right.
[08:41] <wgrant> If you commit, the DB will be torn down and recreated when your test finishes. If you don't commit, there's no problem with not tearing it down.
[08:42] <maxb> And in this case I shall need to commit, since it needs to exec a separate initialise-from-parent.py
[08:42] <wgrant> True.
[08:42] <wgrant> It probably already explicitly commits.
[08:42] <wgrant> Since otherwise the DB wouldn't be marked dirty, even though i-f-p.py alters it.
[08:43] <maxb> oh, actually yes, it already commits to actually create the to-be-inited series
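A toy model of the layer behaviour described above (not Launchpad's actual test layer code): the harness only rebuilds the expensive test database when a test actually committed, and skips the teardown work otherwise.

```python
class FakeDatabaseLayer:
    """Minimal stand-in for a test layer that tracks DB dirtiness."""

    def __init__(self):
        self.dirty = False
        self.rebuilds = 0

    def commit(self):
        # Any commit may have altered the sampledata.
        self.dirty = True

    def tear_down_test(self):
        if self.dirty:
            self.rebuilds += 1  # drop and recreate the database
            self.dirty = False
        # If nothing was committed, leave the database alone.
```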
[08:44] <noodles775> wgrant: re. the move of traverse_distro_archive, perhaps the bug would be a good spot for the conversation? (that way Julian can see the history etc.)?
[08:44] <wgrant> noodles775: Probably true.
[08:44] <wgrant> maxb: So, two lines of code should fix it...
[08:45] <wgrant> adeuring: I'm confused. Why should the bug page have to deal with missing LFCs? garbo-daily shouldn't be removing files that are in use, right?
[08:46] <adeuring> wgrant: in theory, you are right. But we had the case that LFC records were missing, see the bug mentioned in the commit message.
[08:46] <wgrant> adeuring: Right, i reported that bug.
[08:46] <wgrant> But it was more of a "why is garbo-daily removing these at all?" bug.
[08:47] <adeuring> wgrant: Ah, I see. Admittedly, I don't have a definitive answer. My guess is that a LOSA deleted the LFCs because the files contained private data
[08:48] <adeuring> Regarding the code: Only those bug attachments are deleted that are useless anyway because thes don't have an LFC record. So, no harm done, I think
[08:48] <adeuring> s/thes/they/
[08:49] <wgrant> If the LFCs are deleted for a good reason, that's fine.
[08:54] <wgrant> maxb: There's no modern 2.6 branch beyond salgado's?
[08:54] <maxb> Not at present, no
[08:54]  * wgrant wonders if we want a fresh ~launchpad-committers one soon.
[09:03] <maxb> given salgado is going to leave according to the wiki page - yes!
[09:06]  * maxb runs tests on a tweaked initialise-from-parent.txt
[09:07]  * wgrant is collecting some fixes that are safe for devel.
[09:15] <spm> mwhudson: also lazr.config.interfaces.ConfigErrors: ConfigErrors: schema-lazr.conf does not have a mpcreationjobs section. <== edited away both this and the prior and it started
[09:16] <mwhudson> https://code.staging.launchpad.net/~mwhudson/linux/trunk <- slowly slowing down, but mostly i think it's the bits before and after (eg packing) that are getting slower
[09:16] <mwhudson> spm: woo
[09:16] <spm> https://pastebin.canonical.com/30343/ for the now obvious details
[09:17] <wgrant> mwhudson: Yeah, but I don't think it's going to slow down to even the initial production time.
[09:17] <mwhudson> spm: um, i guess someone (tm) should land a lp-production-configs branch
[09:17] <dhastha> Danilos: are you there?
[09:17] <mwhudson> wgrant: yeah
[09:17] <spm> mwhudson: https://code.edge.launchpad.net/~thumper/lp-production-configs/merge-proposal-jobs/+merge/22995 perhaps? I tried, but appear to be having key failures atm
[09:17] <wgrant> It'll overtake production in 7 hours.
[09:17] <mrevell> Morning all
[09:17] <spm> hey mrevell!
[09:17] <mrevell> Howdy spm :)
[09:18] <mwhudson> spm: well that doesn't appear to delete the old sections
[09:22] <mwhudson> wgrant: also i think pear is ~twice as fast as strawberry
[09:22] <spm> mwhudson: oh thumpers branch? oh right. yes. bleh.
[09:22] <wgrant> mwhudson: Handy.
[09:22] <wgrant> Interesting.
[09:22] <wgrant> Latest import was only 48 minutes.
[09:29] <stub> So how do I get the hardy packages at https://edge.launchpad.net/~stub/+archive/launchpad2 rebuilt for Lucid?
[09:44] <noodles775> stub: currently, assuming they require rebuilding, you'll need to change the target in the changes file and re-upload.
[09:44] <stub> So I can't do that with the Launchpad UI? Ok.
[09:46] <stub> I seem to be able to do the copy without a rebuild, so maybe that will work.
[09:46] <stub> I'm certainly being offered way too many choices that appear to be impossible due to deb packaging limitations.
[09:50] <jml> good morning
[09:50] <noodles775> stub: you can indeed copy to another series including the binaries, but it's only safe *if* you are certain they do not need to be rebuilt. See https://help.launchpad.net/Packaging/PPA/BuildingASourcePackage#Versioning
[09:50] <noodles775> and yes, it is confusing.
[10:44] <maxb> stub: Why are you rebuilding slony1 in lucid at all?
[10:49] <stub> maxb: So I can test the staging rebuild procedure locally
[10:50] <maxb> So, you want a pg8.3 version?
[10:50] <stub> Yes
[10:50] <stub> I can't recall right now why I couldn't use the 8.4 version...
[10:51] <maxb> Well, 1.2.20 is already in lucid for 8.4, so you shouldn't need the plain slony1 source from my PPA at all (as you can see it's greyed out "Newer version available" in your PPA)
[10:53] <maxb> What you probably want to do is take the slony1pg83 source, and increment its version to 1.2.20-1launchpad~stub1, and build that in lucid
[10:59] <deryck> Morning, all.
[11:04] <stub> maxb: I should learn how to do that one day :-)
[11:06] <maxb> stub: It'll take me 30 seconds, shall I just do it?
[11:06] <stub> maxb: That would be nice, ta :-)
[11:06] <maxb> oh waitaminute, didn't I already put a pg83 slony1 for lucid into ~launchpad/ppa ?
[11:07] <maxb> yes, yes I did :-)
[12:07] <stub> maxb: Ahh... yes. I'm going around in circles. Unfortunately, those packages are broken. postgresql-8.3-slony1 contains nothing but the documentation, and is missing the important stuff in /usr/lib.
[12:07] <maxb> !
[12:07] <maxb> oops
[12:07] <stub> https://bugs.edge.launchpad.net/launchpad-foundations/+bug/559046 (which I opened before getting distracted)
[12:07] <mup> Bug #559046: Slony-I 1.2.20 for pg8.3 package missing critical files <Launchpad Foundations:New> <https://launchpad.net/bugs/559046>
[12:08] <stub> I was going to try the rebuild in my ppa to see if it was a simple fix :)
[12:15] <maxb> stub: Oops, I suck. Fix uploaded.
[12:16]  * maxb glares at the build queues with loathing
[12:19] <wgrant> maxb: Most of that is just an archive rebuild.
[12:19] <maxb> The amd64 queue is still 2 hours for fresh single uploads :-(
[12:19] <wgrant> But it looks like i386/amd64 could actually be 40ish minutes behind.
[12:19] <wgrant> Oh :/
[12:19] <wgrant> Fortunately you have quite a few buildd admins around.
[12:22] <maxb> stub wants the packages more than I, and he's a rubber ducky :-)
[12:24] <wgrant> Very true.
[12:38] <maxb>     from lp.soyuz.interfaces.build import BuildStatus
[12:38] <maxb>     ImportError: cannot import name BuildStatus
[12:38] <maxb> ^ That's in a doctest I'm trying to modify. Anyone have any idea why that breaks?
[12:38] <maxb> especially given I copied the import line from the code I'm testing!
[12:39] <jelmer> maxb: I think BuildStatus was moved recently
[12:39] <jelmer> it might also just be a glitch because of circular imports..
[12:40] <maxb> ugh, I must have copied it from one branch and then merged devel....
[12:40] <wgrant> maxb: I moved it to lp.buildmaster.interfaces.buildbase a month or so ago.
[12:52] <jml> "make schema" seems to fail on API doc generation for me... looks like a fairly shallow makefile bug:
[12:52] <jml> mv lib/canonical/launchpad/apidoc.tmp/* lib/canonical/launchpad/apidoc
[12:53] <jml> mv: target `lib/canonical/launchpad/apidoc' is not a directory
[12:53] <jml> make: *** [lib/canonical/launchpad/apidoc/index.html] Error 1
[12:53] <jml> is this a known issue?
[12:54] <wgrant> You've resolved the conflict that occurred when you pulled?
[12:54] <wgrant> I've not seen that in any of my branches.
[12:54] <jml> I resolved the conflict by deleting lib/canonical/launchpad/apidoc
[12:55] <jml> maybe I did it the wrong way
[12:55]  * jml tries against stable
[12:55] <wgrant> I think that's the right solution.
[12:57] <jml> yeah, it happens with stable
[12:58] <jml> "make clean; make schema" to reproduce the error
[12:58] <wgrant> Oh, does schema not depend on build?
[12:58] <wgrant> Hm, it does.
[13:02] <jml> I'll fix it now. https://bugs.edge.launchpad.net/launchpad-foundations/+bug/559159 for reference
[13:02] <mup> Bug #559159: "make clean; make schema" fails with API doc error <build-infrastructure> <Launchpad Foundations:New for jml> <https://launchpad.net/bugs/559159>
[13:03] <jml> oh hmm
[13:03] <jml> lib/canonical/launchpad/apidoc _is_ supposed to be versioned and present
[13:04] <jml> and empty :\
[13:04] <wgrant> Oh, right, that's why it conflicted.
[13:04] <wgrant> I misremembered :(
[13:05] <jml> it's kind of weird that it's not just created on build
[13:05] <wgrant> It used to have content.
[13:05] <jml> but only weird, not actually bad
[13:07] <jml> crap
[13:07] <jml> that's messed up a lot of my branches
[13:09] <jml> ok. now that I've got a working build, I'm going offline to code & eat lunch
[13:36] <jtv> abentley: hope you may be able to help with this... fun & games trying to run a codehosting cron job on dogfood: https://pastebin.canonical.com/30356/
[13:36] <jtv> abentley: might that indicate that I'm running it on the wrong server, or on a branch that's on the wrong server?
[13:38] <wgrant> dogfood doesn't have codehosting...
[13:42] <wgrant> jtv: ^^
[13:42] <wgrant> dogfood uses production codehosting.
[13:43] <jtv> Ah, so I guess I'd need to run the script there, against the dogfood config.  :/
[13:44] <wgrant> Which script?
[13:45] <jtv> rosetta-branches
[13:46] <wgrant> Ah.
[13:46] <wgrant> That sounds disastrous.
[13:47] <wgrant> You'll probably need to mess with DB perms to let you run it from crowberry and blah blah.
[13:48] <jtv> That too, probably—but at least that part is familiar.
[13:48] <jtv> dogfood isn't.
[13:48] <wgrant> dogfood isn't really familiar to anyone.
[13:49] <wgrant> I'm not quite sure of the point of dogfood now that most stuff is reasonably testable locally. It seems like staging could be given a build farm for the rarer production-like test cases. staging has codehosting.
[13:52] <jtv> And the work we're doing right now involves buildfarm, codehosting, and translations—so maybe per-team islands are not an effective way to test this any more.
[13:52] <wgrant> I would think not.
[13:52] <wgrant> dogfood seems to mostly be used for QA now, which could be done on staging.
[13:53] <jtv> If staging had a build farm.
[13:53] <wgrant> AIUI it was often used for development or pre-merge testing previously.
[13:53] <wgrant> Right.
[13:53] <jtv> There's also the easy, fast-turnaround patching, database access etc.  I wouldn't discount that—but then we all need that AFAICS.
[13:53] <wgrant> True.
[14:01] <SlonUA> hi pals ... could u help me ... =)
[14:01] <SlonUA> i have redirection to https://launchpad.dev from http://bazaar.launchpad.dev/~bla/vla/devel/files
[14:03] <SlonUA> so, just redirection from  http://bazaar.launchpad.dev to  https://launchpad.dev
[14:03] <SlonUA> i can't 'View the branch content'
[14:05] <maxb> SlonUA: Please report what the command `getent hosts bazaar.launchpad.dev` prints for you
[14:05] <wgrant> The problem is probably that branch-rewrite.py isn't running.
[14:05] <SlonUA> maxb: 1sec
[14:06] <SlonUA> 192.168.241.11  bazaar.launchpad.dev ... i use remote access
[14:06] <maxb> Remote access in incompatible with bazaar.launchpad.dev
[14:06] <maxb> *is
[14:07] <maxb> "A full Launchpad development setup requires two IP addresses on the local machine, on which to run two HTTPS listeners - one for main Launchpad, and one for Loggerhead (code browsing) of private branches. As most developer workstations have only one non-local IP address, and as the second one is only required for Loggerhead on private branches, you may well not bother to set up an additional IP address. If you do want to do this,
[14:07] <maxb> identify a suitable IP address, and add it to your machine's network configuration now."
[14:07] <maxb> quoted from Running/RemoteAccess
[14:08] <abentley> jtv, sorry, just starting work now.  Looks like wgrant helped you?
[14:08] <abentley> wgrant, +10 on staging buildfarm
[14:08] <jtv> abentley: helped me realize what depths of shit I'm in, but yes.  :-)
[14:09] <wgrant> abentley: It shouldn't actually be hard to get just a build farm up and running. A full archive is harder, though.
[14:09] <maxb> SlonUA: Oh, sorry, except that's only required for *private* branches.
[14:09] <SlonUA> maxb: i see .. thanks .. so ..
[14:10] <maxb> SlonUA: Anyway, you should read Running/RemoteAccess again carefully, because I think you've not quite performed the tweaking described there correctly
[14:10] <maxb> In particular, the "Amending the Apache configuration" part
[14:10] <SlonUA> maxb: ohh .. thanks .. i will read again
[14:11] <abentley> wgrant, Do you mean that it's hard to get a full copy of an archive on disk, or do you mean that building to archive is harder?
[14:12] <wgrant> abentley: Lots of config.
[14:12] <wgrant> The former, though.
[14:13] <wgrant> Hmm, I guess we really need on-disk archives for SPRBs, since they need build-deps.
[14:13] <wgrant> But for jtv's purposes it doesn't matter.
[14:14] <wgrant> jtv just needs codehosting and a buildd-manager.
[14:14] <jtv> Well... and a build slave, and the Translations db.  Not asking for much.  :)
[14:41] <dhastha> error while make run:    http://paste.ubuntu.com/411614/
[14:54] <wgrant> noodles775: New buildfarm project? Yes please...
[14:55] <wgrant> They are sort of split between Soyuz, Code and slightly Translations now...
[14:56] <noodles775> Yep.
[14:57]  * wgrant wonders who needs to be convinced.
[15:01] <noodles775> wgrant: we can point Julian to the bug next week, I think he'll be keen too.
[17:23] <SlonUA> maxb: hi again
[17:23] <maxb> hello
[17:24] <SlonUA> maxb: i have to ip on my vm ... but https://bazaar.launchpad.dev/ work perfect ... but http://bazaar.launchpad.dev/ still redirect to https://launchpad.dev
[17:25] <SlonUA> i mean 2 ips
[17:25] <maxb> pastebin your apache configuration
[17:28] <SlonUA> maxb: http://pastebin.com/Qe3VpmHp
[17:37] <maxb> SlonUA: huh. looks ok to me. Sorry, I'm not sure what to check next
[17:38] <SlonUA> maxb: thanks for looking in .. i think i will hack launchpad-lazr.conf
[17:38] <SlonUA> maxb: btw .... in section [codehosting]
[17:38] <SlonUA> would it be correct: port: tcp:5022:interface=bazaar.launchpad.dev
[17:39] <SlonUA> instead of port: tcp:5022:interface=127.0.0.88
[18:06] <jml> g'night all
[19:58] <sinzui> flacoste, ping
[19:59] <flacoste> hi sinzui
[19:59] <sinzui> flacoste, I discovered the cause of the adaptation insanity. We use provideAdapter() in the browser.pillar module; importing that module causes the registrations to be wrong
[20:00] <sinzui> flacoste, I fixed the issue by switching to ZCML
[20:01] <flacoste> sinzui: provideAdapter in the browser.pillar modules sounds like a bad idea
[20:01] <sinzui> we have a few navigation menus registered that way. It is all fine until you intend to subclass
[20:02] <flacoste> sinzui: you mean registered using an annotation? or a call to provideAdapter direclty?
[20:03] <sinzui> directly
[20:03] <flacoste> directly in the module is never a good idea
[20:03] <sinzui> so I have learned
[20:03] <flacoste> well, unless gary disagrees
[20:03] <gary_poster> reading back
[20:04] <sinzui> provideAdapter(InvolvedMenu, [IInvolved], INavigationMenu, name="overview") was the code
[20:04]  * sinzui wrote it
[20:05] <sinzui> in any case, I have accidentally fixed an issue for EdwinGrubbs.
[20:05] <gary_poster> +1 on zcml (or grok) in the abstract sounds safer.  I don't have enough context to have a strong opinion, but if there's a problem with what you did, sounds like you found a good solution. :-)
[20:07] <sinzui> gary_poster, you can reproduce the insanity by adding
[20:07] <sinzui>     import lp.registry.browser.pillar
[20:07] <sinzui> to lp.registry.browser.productseries, then run
[20:07] <sinzui>     ./bin/test -vvc -t pillar-views
[20:09] <sinzui> gary_poster, As I said, I have learned something. I will propagate this in the reviewers meeting
[20:09] <gary_poster> sinzui: great.  (sorry for latency, on call)
[20:43] <mars> sinzui, around?
[20:57] <sinzui> mars, I am
[20:58] <mars> hi sinzui, have a moment to tell me your thoughts on killing the old style.css?
[20:58] <sinzui> yes
[20:58] <mars> sinzui, do you have mumble set up?
[20:58] <mars> or Skype
[20:59] <sinzui> I have mumble, but need to choose a public server
[20:59] <mars> sinzui, why not the company server?  It has a channel
[21:00] <sinzui> The email I got from IS implies they are not giving out anymore accounts until they implement the new login
[21:01] <mars> sinzui, skype then?
[23:51] <mwhudson> jelmer: hmmm... latest completed linux incremental import on staging -- 20 mins determining which revs to import, 30 mins importing, *90* minutes repacking, twice (!)
[23:51] <mwhudson> hm
[23:51] <mwhudson> actually i think it's packing three times
[23:52] <mwhudson> that's pretty daft
[23:52] <maxb> Is that the "pack every 1000 revisions" thing?
[23:53] <mwhudson> no
[23:53] <mwhudson> that wouldn't be so bad if it were happening, i think, because aiui that only packs the 1000 new revisions
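For reference, bzr's autopack heuristic (as I understand it) allows a repository with N revisions at most sum-of-digits(N) packs, which is why a routine autopack normally combines only the newest small packs; repacking the whole repository three times per batch, as observed above, is a different and much more expensive behaviour.

```python
def max_pack_count(revision_count):
    """Toy version of bzr's autopack bound: sum of decimal digits."""
    return sum(int(digit) for digit in str(revision_count))
```

So crossing a power of ten (max_pack_count(1000) == 1) triggers a big consolidation, while most batches only nudge the bound slightly.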