[00:19] <lifeless> mars: so it looks like PQM devel isn't open, the RE is looking for release-critical there as well as db-devel
[00:19] <lifeless> mars: if you're around, could you confirm that that is not intentional and I can talk with a losa to tweak the active config
[00:28] <mars> lifeless, checking
[00:30] <mars> lifeless, that is intentional - devel was to close today, especially given the staging issues.  What did you need to land?
[00:43] <lifeless> mars: I was hoping to land my uniquefileallocator refactoring
[00:43] <lifeless> mars: it's not RC
[00:44] <lifeless> mars: when will it reopen?
[00:44] <mars> lifeless, I would guess Thursday early UTC, if there is no re-roll
[00:45] <lifeless> ok cool
[00:45] <thumper> mars: when are we releasing?
[00:45] <mars> thumper, 2300 UTC
[00:45] <mars> thumper, your time :)
[00:45] <thumper> isn't that 45 minutes ago?
[00:46]  * rockstar hates UTC math.
[00:46] <mars> hehe
[00:46] <lifeless> rockstar: it drives you around the clock?
[00:46] <mars> thumper, 2300 UTC July 6th :P
[00:46] <thumper> so in 23.25 hours
[00:46] <mars> yep!
[00:47] <thumper> ok
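[Editor's note: the UTC arithmetic above can be checked mechanically. A minimal sketch, assuming thumper's local zone is Pacific/Auckland (not stated in the log) and using the 2300 UTC July 6th time given by mars:]

```python
# Resolve "2300 UTC July 6th" into a local time, rather than doing UTC
# math by hand. Pacific/Auckland is an assumed timezone for illustration.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

release = datetime(2010, 7, 6, 23, 0, tzinfo=timezone.utc)
local = release.astimezone(ZoneInfo("Pacific/Auckland"))
print(local.isoformat())  # 11:00 the next morning, NZ winter time (UTC+12)
```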
[00:51] <mars> spm, so to summarize your mail: we stuck a wrench in the machine to test something, left it in there by accident.  The rest is history :)
[00:51] <spm> mars: ha! couldn't (and didn't) have phrased it better.
[01:00] <spm> mars: so at this stage we have one cowboy that doesn't appear to have been landed. they're cronscripts, so perhaps not as critical, but can you confirm if we need to reapply that or not?
[03:46] <mars> spm, bigjools told me that that cowboy has landed.  However I did not identify the revision.
[03:46] <spm> mars: coolio; I guess one to watch for.
[05:39] <poolie> lifeless: your performance plan could be described as "crash through or crash" :)
[06:50] <foxxtrot> I'm getting a database authentication failure when I try to 'make run' my development launchpad instance. I used the instructions of the wiki. Is something broken in the current branch, or did I likely miss a step?
[06:50] <lifeless> try make schema ?
[06:51] <foxxtrot> Did that, and redid it, just to be sure.
[06:51] <foxxtrot> I'll run it one more time
[06:54] <maxb> It is likely some confusion between multiple postgres instances
[06:54] <maxb> Please pastebin the output of pg_lsclusters
[06:55] <foxxtrot> Ah, I have an 8.3 and an 8.4 cluster.
[06:57] <foxxtrot> http://paste.ubuntu.com/459708/
[06:57] <foxxtrot> I'm not using postgres for anything but lp on this box, but I do think I installed it a while back
[06:59] <maxb> right... so I think the problem is that utilities/launchpad-database-setup will have configured the 8.4 cluster, but lp is trying to use the 8.3 cluster
[07:00] <foxxtrot> Launchpad prefers 8.4, right?
[07:00] <maxb> it's all a bit in transition right now
[07:00] <maxb> If you are not using pg for anything else, I recommend:
[07:00] <maxb> pg_dropcluster --stop 8.3 main
[07:01] <maxb> followed by rerunning utilities/launchpad-database-setup
[07:27] <foxxtrot> maxb, Got it running, the two postgres instances were the problem.
[08:06] <adeuring> good morning
[09:14] <henninge> jtv-afk: so, even after your testfix (and that's all it was), the TestSetCurrentTranslation_Ubuntu is still producing database violations when I merge db-stable.
[09:14] <henninge> jtv-afk: where could the merge have gone wrong?
[09:14] <henninge> Or how do I approach this?
[09:15] <henninge> jtv-afk: it's all the same: IntegrityError: duplicate key value violates unique constraint "potemplate__distroseries__sourcepackagename__name__key"
[09:15] <henninge> *they are* all the same, I mean
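[Editor's note: the IntegrityError above is PostgreSQL refusing a second row under a unique constraint. A minimal stand-in sketch, using sqlite3 instead of PostgreSQL and illustrative table/column names loosely modelled on the constraint in the log (not real Launchpad schema):]

```python
# How a duplicate key under a unique constraint surfaces as IntegrityError.
# Table layout here is illustrative only, echoing the
# potemplate (distroseries, sourcepackagename, name) constraint.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE potemplate ("
    "  distroseries INTEGER, sourcepackagename INTEGER, name TEXT,"
    "  UNIQUE (distroseries, sourcepackagename, name))"
)
conn.execute("INSERT INTO potemplate VALUES (1, 1, 'evolution')")
try:
    # A second template in the same sourcepackage with the same name:
    conn.execute("INSERT INTO potemplate VALUES (1, 1, 'evolution')")
except sqlite3.IntegrityError as e:
    print("IntegrityError:", e)
```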
[09:34] <jtv> henninge: I just got my RF working again; I'll try once again to reproduce the problem.
[09:35] <henninge> jtv: I already tried it out with the flush_order calls removed but that did not help.
[09:35] <jtv> henninge: I wonder if there's some kind of version difference that might cause this...
[09:36] <henninge> what do you mean by "version difference" ?
[09:40] <jtv> IIRC we're running EC2 tests on an older Ubuntu release than we're running locally.
[09:40] <jtv> Something in the software stack might be different.
[09:41] <jtv> Or could it even be a schema change that's not being applied for some reason?
[09:49] <jtv> henninge: not much luck reproducing the problem locally so far... trying ec2 as well
[09:52] <jtv> henninge: the test fix was only for the test_helpers failure which is unrelated to the database constraints.  Do you still have a link to the failures somewhere?
[10:03] <henninge> jtv: http://people.canonical.com/~henninge/merged-lp.translations.tests.log
[10:03] <jtv> thanks
[10:04] <henninge> jtv: I am also trying to push the branch but I have some DNS problems. Let me restart my router ...
[10:20] <jtv> henninge: I found what's going on... and this time the code is relying on a new distroseries _not_ becoming the current series.  Fix is underway.
[10:21] <henninge_> jtv: good news!
[10:21] <jtv> I triggered this by giving a new template that should share with an old one the same name as the old one.  That revealed that the two templates are in the same sourcepackage.  The remaining question is: why doesn't this happen in local tests?
[10:23] <henninge_> jtv: what do you mean by "local tests"?
[10:23] <jtv> henninge_: when I run the tests on my local machine, they pass.
[10:24] <henninge_> jtv: they don't for me
[10:24] <jtv> henninge_: I thought yesterday you said they did
[10:24] <henninge> jtv: on the current recife branch
[10:24] <henninge> but not when I merge db-stable into it.
[10:24] <jtv> ah
[10:24] <henninge> I am pushing that right now
[10:26] <jtv> well I see one possible explanation: the default status of a new distroseries may have changed, so that when you self.factory.makeDistroSeries(distribution=ubuntu) results in ubuntu.currentseries not being what it used to be.
[10:26] <jtv> Missing word there.
[10:26] <jtv> "it" results in ubuntu.currentseries not being what it used to be.
[10:35] <henninge> jtv: this is the branch btw bzr+ssh://bazaar.launchpad.net/~henninge/launchpad/merge-db-stable-9518
[10:42] <henninge> jtv: Distribution.getSeries was converted to storm
[10:44] <henninge> but nothing in registry/model/distroseries.py that looks like it could have caused this.
[10:45] <henninge> no related changes in the factory either, it seems.
[10:47] <jtv> henninge: I couldn't find anything either...  :/
[11:14] <danilos> adiroiban, henninge, jtv: please add things that you think we can consider fixing during the Epic to https://dev.launchpad.net/Translations/Reports/LaunchpadEpic2010, anything that is ugly code that you hate very much will do :)
[11:14] <jtv> danilos: cool, will do
[12:02] <henninge> jtv: bug 602227
[12:02] <_mup_> Bug #602227: Windmill tests failing in recife branch <Launchpad Translations:New> <https://launchpad.net/bugs/602227>
[12:02] <jtv> henninge: thanks...  my impression is it's a login failure.
[12:02] <henninge> jtv: this is unrelated to the merge
[12:03] <henninge> jtv: on some
[12:03] <henninge> but I watched the first one and the dismiss button for the documentation balloon does not show up and when windmill "clicks" it does not disappear - followed by the failure.
[12:04] <henninge> jtv: ^
[12:04] <jtv> henninge: does not show up and then it does not disappear?
[12:04] <henninge> jtv: the button does not show and the balloon does not disappear.
[12:04] <jtv> oh, the dismiss button does not show up
[12:05] <henninge> although I looked at the html and it seems to be there ...
[12:05] <henninge> the button
[12:05] <jtv> henninge: shall I try this on db-stable?
[12:05] <henninge> jtv: good idea
[12:06] <jtv> running...
[12:06] <henninge> jtv: btw, trying it out with "make run" works just fine.
[12:06] <henninge> button shows, balloon bursts ... ;-)
[12:07] <danilos> henninge, jtv: it might be related to how it's stored in a cookie for the duration of the session (i.e. perhaps windmill session ends up being more persistent than a session should be)
[12:07] <jtv> ooh nasty
[12:17] <henninge> jtv: what's the result on db-stable?
[12:17] <jtv> henninge: still waiting for the timeout on the documentation-links test
[12:17] <jtv> but I'm obviously not logged in; it's asking me to.
[12:19] <jtv> I canceled the login dialog, and now it's testing while logged in.
[12:21] <jtv> henninge: passing tests so far
[12:23] <henninge> jtv: I have to go to lunch now. ;)
[12:23] <jtv> ok
[13:28] <bigjools> wgrant: around?
[13:32] <wgrant> bigjools: Hi.
[13:33] <bigjools> wgrant: hi - just wanted your opinion on something.  I'm thinking about adding a second buildd-manager so that we have one for virtual and one for non-virtual.
[13:34] <bigjools> I can't think of any cons, just pros
[13:34] <wgrant> What would be the purpose of that?
[13:34] <bigjools> increased throughput
[13:34] <wgrant> Once it's fixed, is that really going to be a problem?
[13:34] <ricotz> hello, can someone tell me what this error is? OOPS-1648ED2488
[13:35] <bigjools> and isolation of problems
[13:35] <wgrant> Perhaps.
[13:35] <bigjools> depends on your definition of "fixed"
[13:35] <bigjools> mine involves re-writing it ;)
[13:36] <wgrant> I'm not terribly sure that adding a fourth concurrent uploader to the already racy set of three is a good idea, but I guess it's not too bad.
[13:36] <bigjools> ricotz: what were you doing at the time? (I'm waiting for the oops data to sync to where I can see it)
[13:36] <wgrant> bigjools: Don't you have a fair bit of that done already?
[13:36] <bigjools> wgrant: concurrent build uploads should be fine
[13:36] <ricotz> bigjools, i am trying to delete a bzr branch
[13:37] <bigjools> wgrant: no, we're *really* re-writing it from scratch and building it into a new job system
[13:37] <wgrant> bigjools: Concurrent source uploads can go horribly wrong. But I guess a non-virt buildd-master wouldn't be doing it.
[13:37] <wgrant> Oh!
[13:37] <wgrant> So it's never going to happen.
[13:37] <bigjools> lol
[13:37] <bigjools> yes, it will, we're committed to doing it
[13:37] <wgrant> *ahem*
[13:37] <bigjools> we already started designing it
[13:37] <wgrant> Ah.
[13:38] <bigjools> ricotz: ok, rockstar or abentley can help you there
[13:41] <bigjools> wgrant: anyway, exactly yes, virt/non-virt is a good split
[13:41] <ricotz> bigjools, ok
[13:41] <elmo> bigjools: so, err, at one stage the long term plan was to collapse down the virt/non-virt distinction
[13:42] <bigjools> ricotz: I can see the oops data now
[13:42] <elmo> bigjools: once the virt feature set was on parity with non-virt (langpacks, debug builds, etc.)
[13:42] <wgrant> bigjools: It's not a good split. It's a revolting hack which might improve things slightly.
[13:42] <elmo> bigjools: maybe that's no longer the plan, I dunno, but if it is, separating them at the b-m level, if it's work, doesn't seem to make sense
[13:42] <benji> That's the one important piece of home-office equipment I don't have yet, a coffee machine.
[13:43] <bigjools> elmo: yes that can still happen, but this is a trivial change I can make that will speed up builds and then we can amalgamate the machines when we re-design the build manager
[13:43] <bigjools> which as I said, is happening RSN
[13:43] <wgrant> How soon is RSN?
[13:43] <elmo> sure - if it's trivial, then *shrug* ignore me
[13:43] <bigjools> it's in progress
[13:44] <wgrant> I guess it should be pretty simple.
[13:45] <bigjools> elmo: consider this analogous to lowering interest rates.  Brief pain relief until proper economic changes happen.
[13:45] <bigjools> the redesign is not simple :)
[13:45] <wgrant> Is the redesign using that thing that was pointed out at UDS?
[13:45] <wgrant> I forget the name.
[13:46] <wgrant> Celery?
[13:46] <bigjools> celery
[13:46] <bigjools> possibly, we're considering a few things
[13:53] <ricotz> bigjools, seems they are not around, what is the error saying?
[13:53] <bigjools> ricotz: it's a bug in the code
[13:54] <bigjools> looks like it has a recipe build associated?
[13:54] <wgrant> I noticed that code last week.
[13:54] <ricotz> ok, i never used a recipe
[13:55] <wgrant> It doesn't seem to cover the case where a build succeeds and produces an SPR.
[14:06] <jtv> wgrant: lamont has launchpad-buildd 64 rolled out on the dogfood/staging buildfarms... is revision 63 df-tested?
[14:07] <wgrant> jtv: I don't believe so.
[14:07] <wgrant> But I don't know what has gone on behind the scenes.
[14:07] <jtv> wgrant: then this is the time to test!
[14:08] <jtv> yes, that is a bit of a sticking point...  but we now get to q/a 63 & 64.
[15:15] <jtv> wgrant: my test seems to have gone down /dev/null somehow...  buildd-manager "Dispatched" it, ferraz ran it, it was "marked as done," but then there was no output.
[15:15] <jtv> wgrant: any idea what the situation with your patch is?
[16:15] <Ursinha> rockstar, hello :) I wonder if you can help me with an oops?
[16:17] <Ursinha> hmm, maybe not
[16:17] <Ursinha> :)
[16:17] <Ursinha> abentley, ping
[16:17] <abentley> Ursinha, hi.
[16:18] <Ursinha> abentley, I see an oops in the branch scanner, do you know if the problem is known? https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1647SMS1
[16:18] <abentley> I don't think the issue is understood.
[16:19] <abentley> We have seen this occasionally.
[16:20] <Ursinha> do you know what can be causing that?
[16:20] <abentley> Ursinha, no, I don't think the issue is understood.
[16:20] <Ursinha> abentley, oh, I see what you mean now
[16:21] <Ursinha> abentley, I'll file a bug for that
[16:21] <Ursinha> thanks
[16:22] <mwhudson> abentley, Ursinha: i've always assumed that that meant the branch has been deleted while it was being scanned
[16:22] <abentley> mwhudson, the branch in question exists, so I don't think that's the cause.
[16:22] <mwhudson> abentley: maybe it was deleted and repushed?
[16:23] <mwhudson> but maybe not, indeed
[16:23] <abentley> jtv, have you ever deleted and re-pushed https://edge.launchpad.net/~jtv/launchpad/recife?
[16:26] <jtv> abentley: I don't think so, no... why?  henninge's working on that branch
[16:27] <Ursinha> jtv, because of OOPS-1647SMS1
[16:27] <jtv> Oh, sorry—the one in ~jtv.  That's garbage anyway—I thought --remember would work for bzr push and it didn't.
[16:27] <jtv> I think I deleted it.
[16:40] <Ursinha> mwhudson, abentley, does jtv's answer "solve the mystery"?
[16:40] <abentley> Ursinha, OTP
[16:40] <mwhudson> hmm, not really
[16:40] <Ursinha> ok! I'll lunch then.
[16:40] <Ursinha> oops.
[16:40] <Ursinha> mwhudson, go ahead
[16:41] <mwhudson> Ursinha: no, go have lunch, i'm only chucking comments in from the peanut gallery
[16:41] <Ursinha> mwhudson, I'll file a bug for that pasting this conversation
[16:43] <Ursinha> mwhudson, bug 602323
[16:43] <Ursinha> I guess the bot is dead...
[17:39] <henninge> jtv: got a minute or are you busy painting the house oranje? ;-)
[17:39] <jtv> henninge: I've done my 12 hours for today, sorry!
[17:39] <henninge> jtv: well done! I'll get back to you tomorrow!
[17:39] <henninge> ;-)
[17:39] <jtv> Uh-oh!
[20:05] <Ursinha> :)
[21:47]  * maxb boggles
[21:47] <maxb> Someone has requested a Subversion vcs-import of http://bazaar.launchpad.net/~opensatnav-admins/opensatnav/release-1.0/files
[21:47] <maxb> I have no words
[21:48]  * nettrot chuckles
[21:57] <lifeless> maxb: awesome
[21:57] <lifeless> maxb: we should totally support that
[21:58] <lifeless> maxb: (I'm serious - tarball import, keep the upstream branch if lp hosts that too, join all the dots together)
[22:02] <thumper> :-o
[22:02] <thumper> maxb: I think the user is confused
[22:03] <lifeless> thumper: could be ;)