[00:02] <wallyworld> sinzui: yes, looks like it was r13573, bug 809508
[00:02] <_mup_> Bug #809508: Suggest branches with current bug number when associating a bug with a branch. <qa-ok> <Launchpad itself:Fix Released by benji> < https://launchpad.net/bugs/809508 >
[00:02] <wallyworld> commit msg: Make the search box stay disabled and the spinner visible until all outstanding searches are finished
[00:03] <wallyworld> mp was self reviewed, so something slipped through there
[00:05] <wgrant> wallyworld: Should we abort the rollout?
[00:05] <StevenK> wallyworld: If you're unclear, just revert it
[00:28] <mhall119> does anybody else get an error on this page: https://api.launchpad.net/1.0/~ubuntu-br/homepage_content ?
[00:28] <mhall119> wgrant: ^^
[00:29] <mhall119> or lifeless
[00:29] <mhall119> whoever is awake
[00:29] <mhall119> cjohnston: this is the cause of our lpupdate failure I think
[00:29] <StevenK> That doesn't look like JSON to me
[00:30] <cjohnston> hmm.. where is that coming from
[00:31] <mhall119> it's part of the response from https://api.launchpad.net/1.0/~locoteams/members
[00:31] <mhall119> that part of the json, brazil's homepage_content
[00:32] <mhall119> is where our script is throwing exceptions
[00:35] <cjohnston> so it looks like something is wrong on LP's end in the Homepage Content area
[00:35] <mhall119> possibly
[00:35] <mhall119> I'm still not quite sure what though
[00:36] <cjohnston> I wonder if it's something with the localization of it
[00:38] <mhall119> I think it's more likely that it's not specifying that it's unicode
[00:51] <wgrant> StevenK: Your browser Accepts HTML, so it will give it HTML.
[00:52] <StevenK> wgrant: Even wget doesn't show JSON
[00:52] <wgrant> It does.
[00:52] <wgrant> Well, curl does.
[00:52] <wgrant> "Usu\u00e1rios do Ubuntu no Brasil.\n\nTime para usu\u00e1rios do Ubuntu no Brasil.\n\nComece Aqui:  -  http://www.ubuntu-br.org/comece\n\nP\u00e1gina Principal:  -  http://www.ubuntu-br.org\nWiki:  -  http://wiki.ubuntu-br.org\nPlaneta:  -  http://planeta.ubuntu-br.org\nF\u00f3rum  -  http://ubuntuforum-br.org/\nDocumenta\u00e7\u00e3o:  -  http://wiki.ubuntu-br.org/Documentacao\n\nAntes de entrar neste time, voc\u00ea deve ler, concordar ...
[00:52] <wgrant> ... e assinar o C\u00f3digo de Conduta do Ubuntu (http://www.ubuntu-br.org/codigodeconduta)."
[00:53] <wgrant> wget does too.
[00:53] <wgrant> Content-Type is correctly "application/json"
[00:53] <wgrant> No encoding is required, as it's ASCII.
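A minimal illustration of the escaping behaviour wgrant describes, using only the standard library (the key name is borrowed from the API response above, and the value is shortened): the JSON payload is pure ASCII because non-ASCII characters are `\uXXXX`-escaped, and decoding restores the original unicode text with no special handling.

```python
import json

# The API escapes non-ASCII characters (e.g. \u00e1 for "á"), so the JSON
# payload on the wire is pure ASCII; json.loads restores the unicode text.
raw = '{"homepage_content": "Usu\\u00e1rios do Ubuntu no Brasil."}'
data = json.loads(raw)
assert data["homepage_content"] == "Usuários do Ubuntu no Brasil."
```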
[00:53] <wgrant> mhall119: What's the error your script is throwing?
[00:58] <mhall119> yeah, I'm getting json
[00:58] <mhall119> wgrant: http://paste.ubuntu.com/658298/
[00:58] <wgrant> I can access the members collection without trouble.
[00:58] <mhall119> I've tracked it down to simplejson trying to decode a string
[00:59] <wgrant> Is it possible you've got a corrupt cache?
[00:59] <wgrant> Try removing your launchpadlib cache.
[00:59] <mhall119> I don't have access to do that on cranberry
[00:59] <mhall119> and my local dev isn't even getting me past lp_login, not sure why
[01:02] <mhall119> ok, I just need to get my local dev working to the point where I can watch this with a debugger
[01:03] <wgrant> I doubt it will happen locally.
[01:09] <cjohnston> mhall119: there is a vanguard on
[01:10] <mhall119> cjohnston: I've already got them doing it
[01:10] <cjohnston> cool
[01:10] <mhall119> now we wait until 10pm
[01:10] <mhall119> the cron should have run again by then
[05:12] <LPCIBot> Project devel build #945: STILL FAILING in 5 hr 48 min: https://lpci.wedontsleep.org/job/devel/945/
[07:02] <jtv> wgrant: did you have any luck trying the new publish-ftpmaster yesterday?
[07:06] <wgrant> jtv: It finished not long after you EODed.
[07:06] <wgrant> jtv: Expedited security processing works fine.
[07:07] <jtv> You have made me so happy.
[07:07] <wgrant> I'm pretty happy with the whole thing now, but I need to check over the hooks a bit.
[07:07] <jtv> I'd appreciate that — as soon as you're satisfied with them, we can make the switch.
[07:12] <wgrant> jtv: They all look pretty sensible.
[07:12] <wgrant> What's the worst that it could do...
[07:12] <jtv> *cough*
[07:14] <jtv> wgrant: then I shall proceed to move this towards production…
[07:14] <jtv> …albeit with some trepidation.
[07:15] <wgrant> *immense
[07:15] <wgrant> But I think we can do little else now.
[07:26] <adeuring> good morning
[08:31] <mrevell> Hi
[08:42] <jtv> ah yes, bigjools: when copying custom uploads from a previous Ubuntu series, I don't suppose the pocket matters much?
[08:50] <bigjools> jtv: nup
[08:51] <jtv> Argh.  But the changes file does.  It'd be nice if I could just copy the LFA id, but that's not currently supported.
[08:51] <jtv> DistroSeries.createQueueEntry expects to have to store the changes file into the Librarian.
[08:52] <wgrant> You are creating new PUs and PUCs? :/
[09:15] <jtv> wgrant: Have to.  What's the problem with that?
[09:16] <wgrant> you don't have to; it's just easier than the rather messy and time consuming proper fix.
[09:17] <wgrant> Perhaps understandable.
[09:17] <wgrant> But definitely another pile in the hack pile.
[09:17] <wgrant> another hack in the hack pile
[09:17] <wgrant> That's the one.
[09:19] <wgrant> It's like delayed copies, except even worse defined :(
[09:30] <jtv> Internet connections: you just can't trust 'em.
[10:49] <jelmer> Is staging used to test anything in particular ? I wonder if now would be a good moment to get the  branch with the newer bzr deployed there for the extended QA discussed on the list.
[10:51] <wgrant> jelmer: DB patches (at least for the next week) and mailing lists and some build farm stuff.
[10:51] <wgrant> Pretty much everything else is on qastaging these days.
[10:53] <jelmer> wgrant: Ah, cool - thanks.
[10:53] <jelmer> Time to land my branch and QA it then.
[10:54] <wgrant> jelmer: Land it? Do you mean get it cowboyed onto staging?
[10:54] <wgrant> Also, staging has had a bit of evil performed on it over the last couple of weeks, for testing fastdowntime.
[10:54] <wgrant> Not sure if stub has much more of that planned.
[10:54] <jelmer> ah, hmm
[10:54] <lifeless> wgrant: waiting on pgbouncer just now, for prod.
[10:55] <wgrant> lifeless: We're going with the 3m downtime?
[10:55] <lifeless> wgrant: stub has it down to ~2m
[10:55] <lifeless> wgrant: slony deletes and creates ~ 500 triggers on all nodes.
[10:55] <wgrant> I had a 2-slave no-op update down to 45s here by skipping trusted.sql, IIRC.
[10:56] <wgrant> The master takes around 12 seconds.
[10:56] <wgrant> So the triggers don't take tooooo long.
[10:56] <lifeless> we may be seeing cold cache table metadata on staging timings
[10:56] <stub> I'm still looking at shaving time off. Skipping trusted.sql if unmodified is doable - just need to store a hash of the file in the db somewhere.
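The hash check stub mentions might be sketched like this (hypothetical helper name; where and how the digest is stored in the DB is left out):

```python
import hashlib

def file_digest(path):
    # Hash the file contents; if this matches the digest stored in the DB
    # by the previous upgrade, reapplying trusted.sql can be skipped.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()
```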
[10:57] <wgrant> stub: Can't we treat trusted.sql like the rest of the schema?
[10:57] <wgrant> Have a base + patches?
[10:57] <stub> And we can deprecate updating it
[10:57] <wgrant> lifeless: Surely the metadata should be pretty tiny.
[10:58] <stub> wgrant: We already have had to with the bugsummary work
[10:58] <jelmer> wgrant: I'm wondering with whom to coordinate this exactly, is there a point of contact/process?
[10:58] <jelmer> stub: Are you planning to do more testing on staging for fastdowntime?
[10:58] <stub> jelmer: fastdowntime is live on staging, any more tests are just improving the process.
[10:58] <wgrant> stub: Right, so let's do that everywhere and stop recreating all those functions each time :)
[11:01] <stub> wgrant: The nice thing about trusted.sql is you can edit your code in place, get diffs etc. The current approach of punting that to the db patches means cut & paste to tweak a stored procedure, locating the current implementation involves grep, and the source will eventually disappear when we clean out the old db patches. Not sure if we can easily solve any of the downsides, but thoughts welcome.
[11:01] <wgrant> stub: Agreed, and that really hurt during the bugsummary debacle.
[11:02] <wgrant> But reapplying trusted.sql is really slow.
[11:02] <stub> (cycling the baseline will need some more thought too, as it is tied up with trusted.sql, but I need to fix that already)
[11:02] <stub> wgrant: Yup. It seems to be a per-statement overhead in there somewhere, which is why I already dropped comments.sql out of patch application.
[11:03] <stub> I could be evil and package trusted.sql up in a stored procedure that applies the contents :-)
[11:04] <stub> your timing on trusted.sql is good though - I hadn't gone to that granularity yet
[11:05] <wgrant> I forget exactly, but I think removing trusted.sql saved around 40s. But it only started at around 100s, so I'm not sure what's up with staging.
[11:06] <wgrant> make -C database/replication devsetup is awesome, btw.
[11:09] <LPCIBot> Project devel build #946: STILL FAILING in 5 hr 56 min: https://lpci.wedontsleep.org/job/devel/946/
[11:21] <wgrant> stub: What intervals are staging's slons running at?
[11:24] <wgrant> The variance gets a lot lower if I reduce the no-SYNC interval to 2s from 10s.
[11:24] <wgrant> 39-41s without trusted.sql, rather than 45-55s.
[11:25] <wgrant> Which I guess makes sense.
[11:26] <wgrant> Since it's waiting for synchronization, with no replicatable write activity.
[11:26] <wgrant> And if it's waiting for an extra sync on both sides of trusted.sql, that could explain much of the difference.
[11:29] <wgrant> Nope, adding trusted.sql adds 45s even with 2s forced syncs.
[11:49] <stub> wgrant: I landed a branch last night that drops staging down to 1 second syncs and 5 second forced syncs to see if it changes things significantly.
[11:49] <stub> 1 second sync check, 5 second forced sync
[11:49] <wgrant> stub: What was it before?
[11:49] <wgrant> We may want to push through some update right before each sync.
[11:49] <stub> 2 second sync check, 10 second forced sync (defaults)
[11:49] <wgrant> Ah.
[11:50] <stub> I don't think it will be significant - maybe shave 10 seconds off, or 15 if we drop the sync poll time to 0.5.
[11:50] <stub> But worth testing in case.
[11:50] <wgrant> Adding a third slave has added 15s to the trusted.sql time...
[11:51] <wgrant> Which is about right.
[11:51] <wgrant> 12-14s to apply to the master, then apparently a similar time to each slave, serially?
[11:51] <wgrant> I'm still not entirely clear on how the DDL gets propagated.
[11:52] <wgrant> I guess I should read more on that.
[11:54] <wgrant> Blah, but it slows down the no-trusted.sql upgrade by nearly 4s too.
[11:54] <wgrant> I guess that matches up fairly well, but is still disappointing.
[11:54] <wgrant> Particularly if prod has ... 8 replicas?
[11:54] <wgrant> Maybe 6.
[11:54] <wgrant> I can't count.
[11:58] <jtv> wgrant: I'm looking into where to copy the latest custom uploads to a new distroseries.  Julian suggests I might use the hook we already have for the case where a distroseries is being published for the first time.  What do you think?
[12:00] <wgrant> jtv: I suggest you do whatever is quickest and easiest to remove when we fix custom uploads properly. That may be that.
[12:00] <wgrant> But it's possibly appropriate to put it right in IDS, if you're creating new PUCs.
[12:00] <wgrant> Since it doesn't need to manipulate the disk directly.
[12:00] <jtv> But it also doesn't currently use DSP.
[12:01] <wgrant> DSP?
[12:01] <wgrant> DistroSeriesParent?
[12:01] <jtv> Yes.  Well, the code I have now doesn't actually care whether you use it or not; but the initial plan is to do this based on previous_series.
[12:01] <jtv> Or whatever the old parent_series got renamed to.
[12:01] <wgrant> Right, that sounds somewhat sensible.
[12:01] <wgrant> Still more appropriate for IDS IMO, if you're creating PUCs.
[12:02] <jtv> Does IDS get run just once per derived series?
[12:02] <wgrant> Yes.
[12:02] <bigjools> yes, but even creating PUCs is not enough, you need to force them through process-accepted
[12:03] <wgrant> You'll need to create PUs and PUCs, and process-accepted will handle them in its own special way.
[12:03] <bigjools> it makes me shudder
[12:03] <wgrant> I believe I've expressed my opinions on process-accepted and this new strategy.
[12:04] <bigjools> least-worst option for now.  We don't have time to redesign this
[12:04] <wgrant> Yes.
[12:04] <wgrant> It is terrible, but probably least-worst.
[12:04]  * bigjools observes the ivory towers in the distance
[12:06] <stub> wgrant: An interesting thing about trusted.sql is that it doesn't need to be applied via slonik at all
[12:06] <wgrant> stub: It shouldn't have to be, no, so we could possibly work around it.
[12:06] <wgrant> But it may still be slow without it.
[12:07] <stub> yes, but I'm looking at 2 minutes. If I can shave 40 seconds off that, it is a big win percentage wise.
[12:07] <stub> It does mean it is being applied outside the db patch transaction
[12:07] <wgrant> The no-op upgrade SQL (just the two UPDATE LDR statements) takes 15s here when executed directly through slonik with three slaves.
[12:07] <wgrant> So we have 30s of overhead in the script.
[12:07] <wgrant> Probably the sync on either side.
[12:08] <jtv> bigjools: I need to force my copied PUCs through process-accepted?  Won't that run anyway?
[12:08] <wgrant> jtv: Yes.
[12:08] <wgrant> Just create them on new ACCEPTED PUs, and all will be good.
[12:08] <bigjools> jtv: it's ok as long as there's a PU in ACCEPTED state
[12:08] <wgrant> Well, bad, but it will work.
[12:09] <bigjools> it is a horrible hack but then custom uploads are a horrible hack :(
[12:09] <jtv> So I have to create my PUs as ACCEPTED?
[12:09] <bigjools> yes
[12:09] <stub> Given we have already applied stuff from trusted.sql live, I guess we can live without it being applied in-transaction with the db schema updates
[12:09] <wgrant> jtv: Maybe just create them and then pu.setAccepted.
[12:09] <bigjools> that will also work
[12:09] <wgrant> Er.
[12:09] <wgrant> No, there's setDone... don't think there's setAccepted.
[12:10] <wgrant> Just acceptFrom*
[12:10] <wgrant> But one of those might work.
[12:10] <wgrant> stub: Hm.
[12:10] <wgrant> stub: trusted.sql applies in 100ms directly.
[12:10] <wgrant> Let me force it through slonik and see what happens...
[12:11] <wgrant> 13s.
[12:11] <wgrant> WTF
[12:11] <wgrant> So something is causing it to execute all the syncs serially, or something.
[12:12] <wgrant> Where that doesn't happen with the no-op patch SQL.
[12:12]  * wgrant tries merging them into one file.
[12:13] <wgrant> With the minor issue that they already end up as one file.
[12:13] <wgrant> So where is the extra 45s coming from.
[12:14] <wgrant> Blaaah.
[12:14] <wgrant> The 13s overhead is on each execute_script.
[12:17] <stub> wgrant: When we check for sync, we wait for the sync to be confirmed by ALL nodes. So that blocks until a node has finished doing something like apply a patch and it gets around to acking.
[12:17] <wgrant> stub: Yeah, but the patches take <100ms each.
[12:17] <wgrant> So either the slon goes to sleep for 15s after applying the patch, or something else is happening.
[12:24] <wgrant> stub: Overhead vanishes if I change upgrade.py's run_sql to merge it all into one big file, with a single execute_script.
[12:24] <stub> wgrant: I'm thinking on why patches sometimes get serialized, rather than applied on slaves simultaneously. I think slave2 might block on slave1 saying 'sure - go ahead' because slave1 has already started processing a dbpatch.
[12:24] <wgrant> Well, it's down to below 2s, at least.
[12:24] <stub> cool
[12:24] <stub> we don't do that already?
[12:24] <wgrant> No.
[12:24] <stub> oh... one execute script
[12:24] <wgrant> We create one slonik script.
[12:25] <wgrant> With multiple SQL files, and multiple execute_script stanzas.
[12:25] <wgrant> It looks like the syncs around the second one end up serialised, somehow. Might watch slon logs tomorrow.
[12:25] <wgrant> Which is really going to hurt production, if it's 16s per slave rather than 3s.
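The merge wgrant describes might look like this (a hypothetical helper, not upgrade.py's actual code): concatenate all the patch files into one SQL file so slonik runs a single execute_script, paying the cluster-sync overhead once instead of once per patch.

```python
def merge_sql(paths, merged_path):
    # Concatenate the patch files, in order, into a single script so slonik
    # needs only one execute_script (and one round of syncs) for the lot.
    with open(merged_path, "w") as out:
        for path in paths:
            with open(path) as f:
                out.write(f.read())
            out.write("\n")
    return merged_path
```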
[12:25] <stub> so we want an assert in there that len(open(script).readlines()) < 1000, or we overflow a slony constant I think set at compilation time
[12:25] <wgrant> dafuq?
[12:26] <wgrant> Seriously?
[12:26] <stub> yup
[12:26] <stub> we are weird. agile is foreign to most db environments and dbas
[12:26] <stub> Our sort of automation tends to scare the bejesus out of people.
[12:27] <wgrant> stub: Are the syncs in execute_script insufficient?
[12:27] <wgrant> We currently seem to sync on either side.
[12:27] <wgrant> Which is an extra $while.
[12:27] <wgrant> Overhead in upgrade.py is still 30s.
[12:28] <wgrant> Will be more on prod.
[12:28] <wgrant> But I wonder if you can try the upgrade.py SQL merge thing on staging?
[12:28] <wgrant> See how far down it takes it.
[12:28] <wgrant> Even better if you can reduce the forced sync interval.
[12:29] <stub> wgrant: Sure. Stick in a MP for db-devel (we want the fastdowntime stuff on that branch still)
[12:29] <stub> I have no problem lowering the sync check and forced sync times if it helps. It won't affect our load.
[12:30] <wgrant> As you say, it should only save a few times the reduction. Might get better numbers tomorrow when I am awake.
[12:31] <wgrant> ./src/parsestatements/scanner.h:#define MAXSTATEMENTS 1000
[12:31] <wgrant> Kill me now.
[12:31] <wgrant> aaaaaaa
[12:31] <wgrant> This is not sensible!
[12:31] <wgrant> ./src/parsestatements/scanner.c:int STMTS[MAXSTATEMENTS];
[12:31] <wgrant> D:
[12:32] <wgrant> malloc is hard, I guess.
[12:32] <stub> wgrant: Shouldn't be a genuine problem given we won't want a backlog of patches to apply in one hit.
[12:32] <stub> wgrant: esp if we decide to pull trusted.sql out
[12:33] <wgrant> $ wc -l /tmp/trusted.sql
[12:33] <wgrant> 2111 /tmp/trusted.sql
[12:33] <wgrant> I guess there are some nice long statements in that.
[12:33] <wgrant> Will be fun counting them.
[12:33] <stub> yes - a whole stored procedure counts as '1'
[12:33] <stub> oic
[12:33] <wgrant> Yeah.
[12:34] <stub> You are counting semi colons outside of strings, which you can assume are 'string' or $$string$$. Or just ignore the problem for now.
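A rough statement counter along the lines stub suggests (illustrative only: it handles just the two cases he names, 'quoted' and $$dollar-quoted$$ spans, ignoring escaped quotes and tagged dollar quoting):

```python
def count_statements(sql):
    # Count semicolons that terminate statements, skipping any inside
    # 'single-quoted' strings or $$dollar-quoted$$ function bodies.
    # A rough guard against slony's compile-time MAXSTATEMENTS limit.
    count = 0
    in_quote = in_dollar = False
    i = 0
    while i < len(sql):
        c = sql[i]
        if in_dollar:
            if sql.startswith("$$", i):
                in_dollar = False
                i += 2
                continue
        elif in_quote:
            if c == "'":
                in_quote = False
        elif sql.startswith("$$", i):
            in_dollar = True
            i += 2
            continue
        elif c == "'":
            in_quote = True
        elif c == ";":
            count += 1
        i += 1
    return count
```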
[12:34] <wgrant> I think it will be ignored for now.
[12:34] <wgrant> Will see how upgrade.py copes if it's violated.
[12:34] <wgrant> I mean, it will probably cause slon to segfault or something...
[12:34] <wgrant> No biggie.
[12:35] <stub> it will barf in the transaction and roll back.
[12:35] <stub> maybe picked up at slonik compilation time
[12:35] <wgrant> Hah, all my logging changes conflict with yours.
[12:35] <stub> comments.sql will make it explode :)
[12:39]  * stub disappears for an hour
[12:51] <wgrant> stub: https://code.launchpad.net/~wgrant/launchpad/single-execute_script/+merge/70432
[12:59]  * wgrant sleeps.
[13:00] <jelmer> g'night wgrant
[13:03] <gary_poster> abentley, when you are around, if you could let me know which of bugs 820510, 820511 and 820516 are worthy of critical, or help me decide at least, maybe yellow squad could take one or more of those.
[13:06] <abentley> gary_poster:  820511 and 820510 give the best bang for buck, but I don't think either one is truly critical, and marking them critical makes it hard for us to see what's truly critical.
[13:07] <henninge> abentley, adeuring: I have to switch locations now. Should be back in time for the stand-up.
[13:07] <abentley> henninge: cool.
[13:09] <gary_poster> abentley, ok thanks.  "hard to see what's truly critical"...I don't think it's any different than the existing situation with the critical bug tag, but let's not go there.  Practically, do you agree with wgrant that this is something that should be tackled now, even when our focus is supposed to be on "critical" bugs?
[13:11] <abentley> gary_poster: I think it makes sense to tackle soon.  I don't think the focus on "critical" bugs is meant to exclude us working on anything else.
[13:12] <gary_poster> abentley, ok.  I'll encourage yellow to grab at least one of those, asking you for guidance when we get there.
[13:12] <gary_poster> thanks
[13:13] <abentley> gary_poster: np.
[13:15] <StevenK> allenap: O HAI!
[13:15] <allenap> StevenK: HAI!
[13:16] <allenap> Whassup?
[13:16] <StevenK> allenap: You marked https://code.launchpad.net/~stevenk/launchpad/populate-bprc/+merge/69412 as Needs Fixing a week ago, and you have since been smacked down by wgrant and lifeless. Can you reconsider? :-)
[13:17] <allenap> StevenK: There were other points in the review :)
[13:18] <StevenK> allenap: Right, so I'll look at getCandidateBPRs(). I disagree about the loop size -- if it gets killed half-way through a loop, for example?
[13:18] <StevenK> allenap: And I don't agree the test is dependent on test data. How?
[13:20] <allenap> StevenK: Why do you only want to do one thing round the loop? I suppose it keeps transactions short, but it's meant to be tuned for that anyway, isn't it?
[13:21] <StevenK> allenap: So, each iteration reads a .deb from the librarian, parses the contents and starts tossing rows into BPRC and BPP. If that loop gets interrupted, do I end up with half of the data in the tables and half not?
[13:22] <allenap> StevenK: No? Why is that a concern? I don't think it interrupts in the middle of a run. And transactions.
[13:22] <StevenK> allenap: My basis for this was the PSC work -- and that was actually unpacking source packages and dealing with a bunch of temporary files, so it was more critical it wasn't killed in the middle of a run
[13:23] <StevenK> allenap: Conceding the loop tuning question -- I'll bump it to 20 or so
[13:23] <allenap> StevenK: Okay :)
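The batching pattern under discussion, sketched with hypothetical names (the real job would commit via Launchpad's transaction machinery, not a bare callable): committing after each small batch keeps transactions short, and an interrupted run loses at most one uncommitted batch rather than leaving half the rows in the tables.

```python
def process_in_batches(items, handle, commit, batch_size=20):
    # Handle each item, committing every batch_size items so transactions
    # stay short and an interruption loses at most one uncommitted batch.
    count = 0
    for item in items:
        handle(item)
        count += 1
        if count % batch_size == 0:
            commit()
    commit()  # flush the final partial batch (a trailing no-op is harmless)
```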
[13:24] <StevenK> allenap: So, the test is dependent on sample data?
[13:25] <allenap> StevenK: It looked that way, but I didn't dig much. If you say it's not then that's all I want to know.
[13:26] <allenap> StevenK: What about 3? That doesn't actually look like it's going to work how it is.
[13:26] <StevenK> allenap: I didn't want to include another .deb in the tree, so I make use of one that's already there due to archiveuploader tests
[13:26] <allenap> StevenK: Oh, ignore me. EPARSE.
[13:27] <allenap> StevenK: So, r=me now, but consider point 4.
[13:27] <StevenK> You'd prefer self.layer... ?
[13:27] <StevenK> Wait, ignore me, I can't read.
[13:33] <StevenK> allenap: -1 for including the IRC log in the MP :-P
[13:34] <allenap> StevenK: I just added a tl;dr. What's wrong with an IRC log?
[13:34] <StevenK> allenap: Just teasing :-P
[13:34] <allenap> :)
[14:20] <rvba> allenap: I've fixed my MP, could you add it to your queue? https://code.launchpad.net/~rvb/launchpad/bug-820900/+merge/70434
[14:20] <allenap> rvba: Sure.
[14:20] <rvba> allenap: Thanks.
[14:22] <allenap> rvba: r=me
[14:22] <rvba> allenap: \o/
[15:34] <cjohnston> thanks mrevell
[15:43] <mrevell> Hi cjohnston
[15:43] <mrevell> no problem at all cjohnston :) thanks for your help
[16:13] <sinzui> jcsackett, can you review https://code.launchpad.net/~sinzui/launchpad/person-picker-expand-1/+merge/70459
[16:14] <sinzui> The MP is lying about the changes from the prerequisite branch. I attached the actual changes
[16:14] <jcsackett> sinzui: thanks, i was just looking at the MP diff and getting confused about the scope of the changes. :-)
[16:23] <jcsackett> sinzui: r=me.
[16:27] <sinzui> jcsackett, thank you.
[16:27] <jcsackett> sinzui: you're welcome. :-)
[16:42] <nigelb> zomg.
[16:42] <nigelb> err, wrong channel :P
[17:03] <LPCIBot> Project devel build #947: STILL FAILING in 5 hr 53 min: https://lpci.wedontsleep.org/job/devel/947/
[17:45] <abentley> jcsackett: Could you please review https://code.launchpad.net/~abentley/launchpad/induce-latency/+merge/70471
[18:01] <jcsackett> abentley: sorry, i missed your msg; i'm looking at the MP now.
[18:01] <abentley> jcsackett: cool.
[18:05] <jcsackett> abentley: looks good to me, r=me.
[19:50] <lifeless> bug 820510
[19:50] <_mup_> Bug #820510: hard to turn on extra logging for twisted job runner <jobs> <Launchpad itself:Triaged> < https://launchpad.net/bugs/820510 >
[19:54] <sixstring> So, I killed rocketfuel-setup yesterday. I think it was right in the middle of the "bzr pull" step. Now, when I run "rocketfuel-get", I get "ERROR: No WorkingTree exists".
[19:54] <sixstring> BTW, which pastebin do we use here?
[19:55] <lifeless> any that you want
[19:55] <sixstring> How do I fix my bzr repo? (I'm familiar with hg and svn, but I'm a bzr n00b.)
[19:56]  * sixstring searches for paste.lifeless.org. ;)
[19:56] <sixstring> http://www.pastie.org/2321492
[19:57] <lifeless> put set -x at the top of rocketfuel get so you can see the failing bzr command
[20:13] <sixstring> It's failing at "bzr up ~/launchpad/lp-sourcedeps/download-cache".
[20:13] <sixstring> In hg world, I'd just blow away that directory and re-clone it. Any bzr advice?
[20:14] <sixstring> BTW, the bash mojo in rocketfuel is impressive.
[20:21] <sixstring> OK, I'll try this: (1) mv ~/launchpad ~/launchpad-busted (to get it out of the way) then (2) re-run rocketfuel-setup.
[20:27] <sixstring> That seemed to work.
[20:27] <sixstring> I don't suppose there's a log-bot logging this stuff for posterity?
[22:54] <LPCIBot> Yippie, build fixed!
[22:54] <LPCIBot> Project devel build #948: FIXED in 5 hr 50 min: https://lpci.wedontsleep.org/job/devel/948/
[23:35] <wgrant> wallyworld: You said you had a better bzr plugin?
[23:35] <wallyworld> wgrant: yes, i'll email it to you
[23:35] <wgrant> Thanks. Going to submit it upstream?
[23:36] <poolie> ooh, for what?
[23:36] <wgrant> poolie: PyCharm.
[23:36] <poolie> that'd be good to send it up to them
[23:36] <poolie> if it has significant new features maybe we can put it on the blog?
[23:36] <wgrant> wallyworld: Also, do you have any hints on setting up the project? You said to create it outside the branch, but I'm not sure how to get everything into it.
[23:37] <wgrant> The original is already on LP (https://launchpad.net/bzr4j)
[23:37] <lifeless> wgrant: do you know why sinzui marked bug 345349 invalid ?
[23:37] <_mup_> Bug #345349: Notifications about private branch linkages sent to unprivileged subscribers <disclosure> <lp-bugs> <Launchpad itself:Invalid> < https://launchpad.net/bugs/345349 >
[23:38] <wallyworld> wgrant: the email may help. my copy of the bzr plugin has changes which i've submitted to the lp project but which have not yet been merged in
[23:38] <wgrant> lifeless: No, and the call finished like 2 minutes ago :(
[23:38] <wallyworld> wgrant: read the email first and then ping me with any questions
[23:39] <wgrant> wallyworld: Indeed, that helps a lot, thanks!
[23:41] <wallyworld> wgrant: np. it's a very terse summary. there's potentially a lot more you may need help with. but see how you go....
[23:42] <wallyworld> wgrant: also, i have debugging working when running lp inside an lxc container
[23:42] <lifeless> wallyworld: oh nice
[23:42] <lifeless> wallyworld: what did you do to make that work ?
[23:43] <wallyworld> you have to add a couple of lines to the top of bin/run
[23:43] <wallyworld> and tweak a setting inside the debug config of pycharm to tell it to connect to a "remote" instance
[23:45] <wallyworld> lifeless: you did realise we were talking about pycharm? i'm not sure about debugging using pdb etc
[23:46] <lifeless> wallyworld: yes, I did
[23:46] <wallyworld> cool, just checking :-)
[23:56] <wgrant> wallyworld: Should I create a separate empty directory for the project?
[23:56] <wgrant> I can't find a way to add external stuff.
[23:57] <wallyworld> wgrant: yes, in my case, i created a ~/PyCharmProjects directory and saved the lp project there
[23:58] <wallyworld> then you can go to settings->project structure and add a content root
[23:58] <wallyworld> the content root is your working tree
[23:58] <wgrant> Ahh, thanks.
[23:58] <wallyworld> and from there you mark stuff as sources and exclude things as per my screenshots
[23:59] <wgrant> Yup.