[00:00] <StevenK> And yeah, we probably need to pull PCJ into it
[00:01] <wgrant> StevenK: Of course, you don't have to do all this in SQL, but you'll want to use at least fragments of this to not make several queries per binary
[00:04] <StevenK> wgrant: The package_name I'm already doing handles source and PCJs, so I might leave that, but yes, looping over the binaries is bad
[00:06] <wgrant> StevenK: And you really want to be avoiding doing even one query per upload, ideally.
[00:08] <StevenK> wgrant: I have to do at least one?
[00:08] <wgrant> StevenK: Why?
[00:08] <wgrant> You can probably get away with <5 queries per batch
[00:08] <StevenK> Right
[00:08] <wgrant> Even for batch sizes larger than 5 :)
[00:43]  * StevenK stabs Storm
[00:43] <StevenK> TypeError: Expected int, found <type 'tuple'>: (1,)
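A common cause of this Storm error is handing a one-element row tuple (as yielded by `store.execute(...)` or `get_one()`) to something expecting a plain int. A minimal pure-Python sketch of the bug and the fix; `set_upload_id` is a hypothetical stand-in for a Storm int-column setter, not the actual job code:

```python
def set_upload_id(value):
    """Stand-in for a Storm int column setter that type-checks its input."""
    if not isinstance(value, int):
        raise TypeError("Expected int, found %s: %r" % (type(value), value))
    return value

row = (1,)  # what iterating over store.execute(...) yields: a row tuple

# Buggy: passing the whole row tuple reproduces the error from the log.
try:
    set_upload_id(row)
except TypeError:
    pass

# Fixed: unpack the single column out of the row first.
(upload_id,) = row
assert set_upload_id(upload_id) == 1
```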
[01:22] <StevenK> 2012-11-28 01:22:14 DEBUG2  [PopulatePackageUploadSearchableNames] Iteration 7 (size 532.2): 2.258 seconds
[01:22] <StevenK> wgrant: ^
[01:26] <wgrant> StevenK: Much better :)
[01:28] <wgrant> Dare I look at the code...
[01:29] <wgrant> Heh
[01:29] <StevenK> No, you daren't
[01:29] <wgrant> I recognise that SQL...
[01:29] <StevenK> wgrant: I was trying to avoid spending four hours converting it to Storm
[01:30] <wgrant> Huh, did that actually work? I would expect the 500 updates to take more than 2 seconds
[01:30] <wgrant> I guess because the DB is local it works
[01:30] <StevenK> I wasn't sure how to phrase it as one update
[01:31] <wgrant> There's only one bulk update in the codebase
[01:31] <wgrant> and it's on line 637
[01:37] <StevenK> wgrant: I wasn't even clear on how to phrase it as SQL
[01:40] <StevenK> Oh god, that monstrous 34 line SQL has to go into BulkUpdate, doesn't it?
[01:40] <wgrant> Well, you don't have to do the string calculation in SQL
[01:40] <wgrant> It's probably awkward to
[01:41] <wgrant> So you can end up just passing the final strings into BulkUpdate
[01:41] <StevenK> Yes
[01:42] <StevenK> update_columns turns into PackageUpload.searchable_names, values=list of the names from results, where=list of ids from results?
[01:43] <StevenK> Reading BulkUpdate to figure out how it works has left me more confused.
[01:43] <wgrant> Sort of. The Values expression is a subquery table, so it will want to be a list of [(id, searchable_names)]
[01:43] <StevenK> So list(results), then
[01:44] <StevenK> By happy coincidence
[01:44] <wgrant> The where= will want to match the Values' id with PackageUpload.id, so the rows get matched with each other.
[01:44] <wgrant> And then updated_columns wants to set PackageUpload.searchable_names to the Values' searchable_names
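The SQL shape that a `BulkUpdate` over a `Values` expression compiles down to can be sketched as a plain string-builder (this is an illustration of the generated statement, not the Storm API itself; see the actual UPDATE StevenK pastes later in the log):

```python
# Sketch: build the UPDATE ... FROM (VALUES ...) statement that Storm's
# BulkUpdate/Values combination compiles to. Illustrative only -- the real
# code goes through Storm expressions, and real code must not interpolate
# values into SQL strings like this.

def bulk_update_sql(table, column, rows):
    """rows is a list of (id, value) pairs; returns one UPDATE statement."""
    values = ", ".join("(%d, '%s')" % (id_, value) for id_, value in rows)
    return (
        "UPDATE %(table)s SET %(col)s = cache_data.%(col)s "
        "FROM (VALUES %(values)s) AS cache_data(id, %(col)s) "
        "WHERE %(table)s.id = cache_data.id"
        % {"table": table, "col": column, "values": values})

sql = bulk_update_sql(
    "PackageUpload", "searchable_names",
    [(11, "mozilla-firefox"), (12, "language-pack-de")])
```

One statement updates every row in the batch, which is why the per-iteration times drop so sharply compared to one UPDATE per upload.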
[01:45] <StevenK> How do I reference Values'?
[01:45] <StevenK> Do I need to ClassAlias?
[01:45] <wgrant> (in this case, the Values virtual table is represented to Storm as the cache_data ClassAlias. I did that since the layout of the table is similar to the real LatestPersonSourcePackageReleaseCache table, but in your case it might be best to just define a class there with two cols)
[01:45] <wgrant> Or you could alias PackageUpload
[01:45] <wgrant> Since it will have the two columns that you need
[01:49] <StevenK> wgrant: http://pastebin.ubuntu.com/1393383/
[01:52] <wgrant> StevenK: Are you deliberately shadowing the Values import there?
[01:52] <wgrant> Values is a storm expression class
[01:54] <StevenK> Nope, renamed to PUValues
[01:56] <wgrant> That's still not a valid variable name
[01:56] <wgrant> Oh, but I guess it's a classalias
[01:56] <wgrant> So eeh
[01:57] <StevenK> UpdateUpload ?
[01:57] <StevenK> Trying to be short and descriptive, and failing
[02:00] <StevenK> wgrant: Wait, so updated_columns is a list of the updated names, or a Storm expr?
[02:03] <wgrant> StevenK: Updated columns is the SET dict.
[02:03] <wgrant> {some_col_to_set: the_value_expression, another_col_to_set: some_other_value_expression}
[02:04] <wgrant> In the existing example, the value expressions just directly reference columns from the Values expression.
[02:04] <wgrant> Which is probably sufficient for your needs too
[02:04] <wgrant> But they could be any Storm expression
[02:07] <StevenK> ProgrammingError: syntax error at or near "1"
[02:07] <StevenK> LINE 1: ...d SET searchable_names="_3".searchable_names FROM 1, mozilla...
[02:07] <StevenK> Haha
[02:07] <wgrant> sounds like you didn't actually use Values
[02:09] <StevenK> I took "The Values expression is a subquery table, so it will want to be a list of [(id, searchable_names)]" to mean values=list(results) where results is the RS from the SQL o' doom
[02:12] <wgrant> StevenK: It's not quite that simple. Look at how the LPSPRC thing does it
[03:11] <StevenK> wgrant: My attempted simplification of the LPSPRC thing is not going well
[03:13] <wgrant> StevenK: Oh?
[03:14] <StevenK> TypeError: zip argument #1 must support iteration
[03:14] <StevenK> Which falls out of compile_bulkupdate
[03:18] <wgrant> StevenK: Diff?
[03:19] <StevenK> wgrant: http://pastebin.ubuntu.com/1393537/
[03:21] <wgrant> StevenK: Well, I'm not sure if it's the cause of this error, but your Values expression doesn't seem to contain the ID
[03:22] <wgrant> Which isn't going to be helpful when you try to match the searchable_names up to their corresponding PackageUploads
[03:24] <wgrant> Oh
[03:24] <wgrant> I see why it's crashing
[03:24] <wgrant> Your list of rows is just a list of values
[03:24] <wgrant> You dropped a layer of list comprehensions
[03:25] <StevenK> Yes, see 'attempted simplification'
[03:25] <wgrant> Well, you simplified the table to be a single column, which isn't supported because it makes no sense whatsoever :)
[03:26] <wgrant> (unless you want to choose a random row -- there's no other key)
[03:26] <StevenK> WCPGW
[03:28] <StevenK> Then I don't need the for data in ... second list comprehension?
[03:28] <StevenK> Since I have a list of tuples already?
[03:29] <wgrant> StevenK: Since it's just an int and a string you might not need the list comps
[03:29] <wgrant> It was required in the LPSPRC job to cope with the Reference and DateTime fields
[03:30] <StevenK> It doesn't love that either
[03:31] <StevenK> I've not used zip very often
[03:33] <StevenK> In fact, it's clearly not working at all
[03:34] <wgrant> What's the error now, and what's the code?
[03:34] <StevenK> Well, zip((PackageUpload.id, PackageUpload.searchable_names), results)
[03:34] <StevenK> => [(<storm.properties.PropertyColumn object at 0x6cb3600>, (1, u'mozilla-firefox'))]
[03:35] <StevenK> Which is clearly why dbify_value is choking
[03:35] <wgrant> dbify_value is for values
[03:35] <wgrant> Not columns
[03:36] <wgrant> Oh
[03:36] <wgrant> I think you might be nested one level too deep now
[03:36] <StevenK> .. I am?
[03:36] <StevenK> And here I was thinking I needed another loop
[03:36] <wgrant> Where did you get that zip() invocation from?
[03:37] <StevenK> Trying to munge it from LPSPRC
[03:37] <wgrant> Maybe you're still missing a list comprehension
[03:38] <wgrant> If you look at LPSPRC, the values list comp iterates over the rows of the resultset. It zips each row with the columns, then dbifys each value.
[03:38] <wgrant> Whereas you seem to be zipping the entire resultset with the columns.
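The distinction wgrant is drawing can be shown in a few lines of pure Python: zip each *row* with the column list inside a list comprehension, rather than zipping the column list against the entire resultset. `dbify_value` here is a trivial stand-in for the real helper:

```python
columns = ("id", "searchable_names")

def dbify_value(column, value):
    # Stand-in: the real helper converts Python values to SQL literals.
    return value

results = [(1, u"mozilla-firefox"), (2, u"cnews")]

# Wrong: pairs the first *column* with the entire first *row*, which is
# why dbify_value ends up choking on a PropertyColumn / tuple pair.
wrong = list(zip(columns, results))

# Right: one inner zip per row, dbify-ing each (column, value) pair.
rows = [
    tuple(dbify_value(col, val) for col, val in zip(columns, row))
    for row in results]

assert rows == [(1, u"mozilla-firefox"), (2, u"cnews")]
```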
[03:48] <StevenK> 0-1@SQL-main-master UPDATE PackageUpload SET searchable_names=cache_data.searchable_names FROM (VALUES (7::integer, E'netapplet-1.0.0.tar.gz'::text), (8, E'cnews'), (9, E'cnews'), (10, E'netapplet-1.0.0.tar.gz'), (11, E'mozilla-firefox'), (12, E'language-pack-de'), (13, E'commercialpackage'), (14, E'pmount'), (15, E'iceweasel')) AS cache_data(id, searchable_names) WHERE PackageUpload.id = cache_data.id
[03:48]  * StevenK declares victory
[03:53] <StevenK> wgrant: Can I have DF's database back?
[03:54] <wgrant> Hmm?
[03:54] <wgrant> I hold no locks of significance
[03:54] <StevenK> 2012-11-28 03:52:52 INFO    [PopulatePackageUploadSearchableNames] Blocked on 0:40:49.687301 old xact launchpad_main@launchpad_dogfood/16564 - <IDLE> in transaction.
[03:55] <wgrant> Ah
[03:55] <wgrant> That was probably my iharness
[03:55] <wgrant> fg
[03:56]  * StevenK waits for PPUSN to ramp up
[03:56] <StevenK> It's only doing ~ 200 per iteration
[03:56] <wgrant> Where's the time going?
[03:57] <StevenK> 2012-11-28 03:57:07 DEBUG2  [PopulatePackageUploadSearchableNames] Iteration 39 (size 334.5): 1.334 seconds
[03:57] <StevenK> 2012-11-28 03:57:24 DEBUG2  [PopulatePackageUploadSearchableNames] Iteration 43 (size 769.7): 3.574 seconds
[03:57] <wgrant> Sure, but which queries are being slow?
[03:58] <wgrant> (LP_DEBUG_SQL will be helpful here)
[03:58] <StevenK> Argh
[03:59] <StevenK> LP_DEBUG_SQL=1 is not helpful
[03:59] <wgrant> Why not?
[03:59] <StevenK> The queries are massive
[03:59] <wgrant> Ah, true, there might be a bit of scrolling :)
[03:59] <StevenK> A bit?
[04:06] <StevenK> 2012-11-28 04:06:19 DEBUG2  [PopulatePackageUploadSearchableNames] Iteration 8 (size 1019.7): 2.791 seconds
[04:07] <StevenK> Seems the slowest bit is the UPDATE
[04:08] <StevenK> Or building it
[04:09] <StevenK> wgrant: Perhaps we're being slowed down by trgm
[04:09] <wgrant> Ah, that's likely indeed
[04:09] <wgrant> The strings are large
[04:10] <StevenK> However, we're hovering around 850 per iteration
[04:10] <wgrant> As long as we have a reasonable number of queries (ie. <10) and the UPDATE is the slowest bit, we are probably doing well
[04:10] <StevenK> 4 queries per batch
[04:11] <StevenK> done is doing a SELECT 1 FROM (...), then __call__ is getting a list of ids, calculating the data, and then running the bulk update
[04:13] <wgrant> Sounds good :)
[04:14] <StevenK> wgrant: So should I put up the DB patch?
[04:15] <wgrant> Might as well
[05:12] <StevenK> wgrant: Do you want to review the DB patch, or wait for stub?
[05:15] <wgrant> I imagine he'll be around soon
[05:16] <StevenK> wgrant: I can't rip out most of IPackageUploadSet.getAll() due to the API :-(
[05:27] <wgrant> StevenK: What do you mean?
[05:32] <StevenK> wgrant: version and exact_match are exported via IDistroSeries.getPackageUploads()
[05:36] <wgrant> Oh my god
[05:36] <wgrant> I'd forgotten what getAll looked like
[05:37] <StevenK> Heh heh
[05:38] <wgrant> StevenK: So, exact_match=True is fairly easily doable
[05:38] <wgrant> You just need to match LIKE 'term %' OR LIKE '% term %' OR LIKE '% term'
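The semantics of those three LIKE patterns (plus straight equality for the single-name case) can be expressed as a plain predicate over the space-separated `searchable_names` string; a sketch, not the actual SQL the view ends up generating:

```python
def exact_name_match(searchable_names, term):
    """True if `term` appears as a whole name in a space-separated list.

    Mirrors: col = term OR col LIKE 'term %' OR col LIKE '% term %'
             OR col LIKE '% term'
    """
    return (
        searchable_names == term
        or searchable_names.startswith(term + " ")
        or searchable_names.endswith(" " + term)
        or (" " + term + " ") in searchable_names)

assert exact_name_match("cnews", "cnews")
assert exact_name_match("mozilla-firefox cnews pmount", "cnews")
assert not exact_name_match("mozilla-firefox", "mozilla")
```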
[05:39] <wgrant> As for version, um.
[05:40] <wgrant> We could check that Ubuntu doesn't use it and then entirely unexport it
[05:40] <wgrant> Or we could denorm that too
[05:40] <wgrant> I'd suggest the former
[05:40] <StevenK> I was wondering about that
[05:43] <StevenK> Wondering if I care enough to grep appserver logs to check for getPackageUploads.*version=
[05:44] <StevenK> We could be naughty, make version a no-op and see if anyone notices :-P
[05:48] <wgrant> The PPR is probably running around now, so it's not a good time to be grepping
[05:49] <wgrant> But I guess it might be worth doing
[05:49] <wgrant> grep a week of logs or so, I guess
[05:49] <wgrant> Not sure if -security use it
[05:51] <wgrant> StevenK: So it turns out all the vocabs and person searching in general are sort of completely terrible
[05:52] <wgrant> My performance fix comes out to -150 or so
[05:52] <StevenK> Haha
[05:52] <wgrant> Are you still abusing DF?
[05:52] <StevenK> Yeah
[05:52] <StevenK> Well, garbo-hourly is on its second run
[06:06] <StevenK> wgrant: zcat -f *access*112[0-7]* | grep -E 'getPackageUploads.*version=' turned up nothing for all four appservers.
[06:07] <wgrant> StevenK: Most of the logs are bzip2ed, IIRC
[06:07] <StevenK> They are not
[06:07] <wgrant> Also, make sure they're actually syncd
[06:07] <wgrant> Oh, maybe that finally got fixed
[06:07] <wgrant> That would be handy
[06:07] <StevenK> All but gac were gzip'd until I got annoyed and filed an RT
[06:08] <wgrant> so, remove it, I suppose :)
[06:10] <StevenK> wgrant: garbo-hourly is done again if you want me to stop touching DF
[06:12] <StevenK> It's roughly half done with two runs
[06:14] <wgrant> StevenK: https://code.launchpad.net/~wgrant/launchpad/bug-1075767/+merge/136572
[06:19] <StevenK> wgrant: r=me
[06:30] <wgrant> StevenK: Thanks
[08:49] <adeuring> good morning
[09:48] <StevenK> stub: https://code.launchpad.net/~stevenk/launchpad/db-searchablenames-for-pu/+merge/136568 would love a review
[10:10] <stub> StevenK: How are you going to be searching that btw?
[10:11] <stub> Do you expect searches to return few results, or are we going to have lots of results and we need to order them by similarity or similar?
[10:20] <StevenK> stub: contains_string
[10:20] <StevenK> And like
[10:20] <StevenK> stub: And both, sadly
[10:21] <StevenK> Depends on if it's via a view, or the API
[10:22] <stub> So if a search matches 100,000 results we need to return them all?
[10:23] <StevenK> I doubt it's 100,000 results
[10:23] <StevenK> Perhaps 400-500
[10:23] <StevenK> Usually nowhere near that many
[10:27] <stub> Ok, that sounds sane-ish
[10:28] <stub> StevenK: If we needed to LIMIT, ordering by similarity, we might need a GiST index instead of GIN or perhaps an alternative data structure
[10:50] <StevenK> stub: Can I get a +1 on that MP?
[10:55] <stub> StevenK: Done, but index creation needs to be in a separate patch to apply live (took > 5s on staging)
[11:01] <StevenK> Ah, right
[11:02] <StevenK> I'll split it out tonight and land just ALTER TABLE tomorrow
[11:30] <cjwatson> wgrant,StevenK: We use the version parameter in the event that somebody uses the 'queue {accept,reject} PACKAGE/VERSION' syntax, which is rather handy
[11:31] <cjwatson> I'm sure we could do it otherwise by getting all the uploads for a given version and filtering manually, but I'm not sure that's an improvement (especially in the case of accepting from the rejected queue)
[15:37] <rick_h_> party
[15:38] <czajkowski> rick_h_: you bringin the cake!
[15:39] <rick_h_> czajkowski: sure, my wife makes a mean homemade cake, or maybe pie
[15:39] <czajkowski> mmmm cake
[15:50] <rick_h_> abentley: a review for you if you get a chance please https://code.launchpad.net/~rharding/launchpad/nonpublic_1052659/+merge/136479
[15:50] <abentley> rick_h_: Sure.
[15:53] <abentley> rick_h_: visible_specification_query already removes inactive products.  Line 1299.
[15:54] <rick_h_> looking
[15:57] <rick_h_> abentley: ah ok
[15:57] <abentley> rick_h_: Could you explain why moving _cache_people is helpful?  I can see why making it static is good, but I don't see why you want to move it to SpecificationSet.
[15:58] <rick_h_> abentley: because it's used in two places
[15:58] <rick_h_> it was orignally inside of the method  _preload_specifications_people
[15:58] <rick_h_> so for me to access it outside of that context I needed to move it to its own method
[15:59] <rick_h_> I can't just use _preload_specification_people since I need to do things like limit and order the original result set
[16:00] <abentley> rick_h_: You need to limit and order the original, not the decorated one?
[16:00] <rick_h_> abentley: right
[16:00] <abentley> rick_h_: Why?
[16:01] <rick_h_> so if I'm understanding how the iter hook stuff works it first grabs the results of the original query, then generates a list of people ids, then does a query for all those people, and attaches them to the original result set
[16:01] <abentley> rick_h_: Sorry, I've got to go now.  I'll ping you when I'm back.
[16:01] <rick_h_> if you only want 10 specs (the home page stuff only lists 5 or so) then you want to get that result set limited first, then run it through the person fetcher method
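The lazy pre-iteration behaviour being discussed can be mimicked without Storm: the hook fires only when iteration starts, so slicing first keeps the batched people lookup small. All names here are illustrative, not Launchpad's actual classes:

```python
PEOPLE = {1: "alice", 2: "bob"}  # stand-in for the Person table
cache = {}

class DecoratedResultSet:
    """Toy model of Storm/LP's decorated result set with a pre-iter hook."""

    def __init__(self, rows, pre_iter_hook):
        self.rows = rows
        self.pre_iter_hook = pre_iter_hook

    def __getitem__(self, s):
        # Slicing before iteration stays cheap: no hook fires yet.
        return DecoratedResultSet(self.rows[s], self.pre_iter_hook)

    def __iter__(self):
        self.pre_iter_hook(self.rows)  # one batched lookup, not N queries
        return iter(self.rows)

def cache_people(rows):
    ids = {person_id for _, person_id in rows}
    cache.update((i, PEOPLE[i]) for i in ids)  # "one query" for all ids

specs = DecoratedResultSet(
    [("spec-a", 1), ("spec-b", 2), ("spec-c", 1)], cache_people)
list(specs[:2])  # limit first; the hook only sees the two sliced rows
assert set(cache) == {1, 2}
```

This is abentley's point: because the hook doesn't fire until access, the resultset doesn't need to be in final form before it is decorated.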
[16:01] <rick_h_> abentley: rgr, np
[16:01] <rick_h_> thanks
[16:04] <czajkowski> rick_h_: abentley is deryck about today ?
[16:05] <rick_h_> czajkowski: no, he's sick today
[16:05] <czajkowski> :(
[16:06] <czajkowski> rick_h_: any idea if/when documentation will be out to explain the private bugs, proprietary stuff, as I'm getting a lot of users very confused atm
[16:06] <rick_h_> czajkowski: not at the moment. deryck was going to pull together some notes and go over the material already out there
[16:07] <rick_h_> but with a friday deadline we've been cranking on the bugs vs docs so far
[16:07] <czajkowski> nods
[16:07] <rick_h_> czajkowski: so the *if* is we will have some docs. Then *when* might not be until next week but don't quote me on it. I guess call it 'in progress'
[16:07] <czajkowski> no I understand
[16:08] <czajkowski> just trying to point peopleat stuff to explain it
[16:08] <rick_h_> if you can do us a favor though, keep tabs on the questions/issues that come up
[16:08] <czajkowski> np
[16:08] <rick_h_> it'll help direct us to make sure we hit points in the FAQish section or what not
[16:08] <rick_h_> and potential future bugs to make things easier/less crazy
[16:09] <czajkowski> the terminology seems to be the confusing part
[16:09] <rick_h_> heh, yea...
[17:47] <abentley> rick_h_: Back.
[17:47] <rick_h_> abentley: awesome
[17:48] <abentley> rick_h_: The iter_hook_stuff doesn't fire until you actually start to access the results.  So you don't need the resultset in final form before you create the DecoratedResultSet.
[17:49] <rick_h_> abentley: ok right, but I moved it from something defined inside the  _preload_specifications_people method
[17:49] <abentley> rick_h_: The other call sites for _preload_specifications_people do ordering and sorting afterward, so if that was a performance issue, we'd know already.
[17:49] <rick_h_> to it's own method on the HasSpecificationsMixin
[17:50] <rick_h_> abentley: ok, gotcha, but the reason to move it was more to keep the _preload_specifications_people as is.
[17:50] <abentley> rick_h_: Right.  I said why, you said because I need the resultset in its final form, I'm saying no you don't.
[17:50] <rick_h_> I guess what you're saying is that I could use that and append my order and limit onto the returned DecoratedResultSet?
[17:51] <abentley> rick_h_: Why can't you use _preload_specifications_people as is?
[17:51] <rick_h_> abentley: ok, I gotcha. Because I didn't realize it could be used as it was. So let me update that then.
[17:52] <abentley> rick_h_: Thanks.
[17:54] <abentley> rick_h_: The other call sites are Product.specifications and Distribution.specifications, if you want to see how they do it.
[17:54] <rick_h_> abentley: will do
[17:57] <abentley> rick_h_: Most implementations of IHasSpecification.specifications are not well-tested, so I would add tests for the old implementation, make sure they passed, then do the stormification.
[17:58] <abentley> rick_h_: I'd like to see something similar for SpecificationSet.  You can just copy the tests from test_product.TestSpecifications, but please make sure they pass against both implementations.
[17:59] <czajkowski> abentley: can you see -ops just running out the door and there is an issue
[17:59] <czajkowski> please
[18:48] <jcsackett> abentley: do you have time to review https://code.launchpad.net/~jcsackett/launchpad/private-blueprint-dependencies/+merge/136756 ?
[18:49] <abentley> jcsackett: sure.
[18:49] <jcsackett> thanks.
[18:52] <abentley> jcsackett: I don't think this is a view issue.  I think we want to forbid this kind of change at the model level using a storm validator.  We can catch the exception at the view to set the field error.
[18:54] <jcsackett> abentley: if we do that, we end up having to set the error and abort outside of the validation, right? that seems awkward. i can see an argument for having the model forbid it as well, but i think validate's the right place for this check.
[18:55] <abentley> jcsackett: It's not awkward.  I do it all the time.
[18:55] <jcsackett> abentley: can you point me to an example? i've only dealt with form handling within validate.
[19:00] <abentley> jcsackett: lp.code.browser.BranchEditFormView.change_action
[19:00] <abentley> jcsackett: The bit where it handles BranchTargetError
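The pattern being pointed at can be sketched generically: the model-level validator raises, and the view's action handler catches the exception and turns it into a field error. All names below are hypothetical, not the actual `BranchEditFormView` code:

```python
class DependencyError(Exception):
    pass

def validate_dependency(spec_is_private, dep_is_private):
    """Model-level validator: forbid a public spec depending on a private one."""
    if not spec_is_private and dep_is_private:
        raise DependencyError(
            "Public blueprints cannot depend on private ones.")

def change_action(spec_is_private, dep_is_private, field_errors):
    """View-level handler: translate the model error into a form error."""
    try:
        validate_dependency(spec_is_private, dep_is_private)
    except DependencyError as e:
        field_errors["dependency"] = str(e)
        return False
    return True

errors = {}
assert change_action(False, True, errors) is False
assert "dependency" in errors
assert change_action(True, True, {}) is True
```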
[19:01] <jcsackett> abentley: huh, that's it?
[19:01] <jcsackett> ok, i'll make that change.
[19:01] <abentley> jcsackett: Yes.
[19:01] <jcsackett> i'll ping you when it's updated.
[19:02] <abentley> jcsackett: Cool.
[19:10] <abentley> jcsackett: The one thing you could run into is I believe self.next_url must not be set for this to work.
[19:30] <jcsackett> abentley: changes have been made; worked out quite nicely, thanks.
[19:30] <abentley> jcsackett: Thanks.  Looking.
[19:33] <abentley> jcsackett: Nice.  r=me.
[19:34] <jcsackett> abentley: thanks.
[21:11] <wgrant> abentley, jcsackett: It's a bit strange to use a validator there, due to artifact-specific grants. You can't assume that just because two blueprints are private the permissions are equal.
[21:12] <jcsackett> wgrant: not sure i follow you; we're not testing for equality between two blueprints. just ensuring that if the root one is public, it doesn't have a private dependency.
[21:12] <abentley> wgrant: That's a good point.
[21:13] <abentley> jcsackett: There is also the case where both are proprietary, but one is visible to the user and the other is not.
[21:13] <jcsackett> aaah.
[21:14] <jcsackett> abentley: i think though that's a situation handled by the follow up work. the strict "no dependency" rule only really applies in the public/non-public situation.
[21:15] <abentley> jcsackett: But do we want that rule if it won't prevent oopses?
[21:15] <jcsackett> abentley: we do, because it makes the excised dependency graph still valid information.
[21:17] <abentley> jcsackett: Could you explain what you mean by that?
[21:17] <wgrant> It's perfectly valid for a public blueprint to depend on a private one
[21:17] <jcsackett> abentley: i was writing out what i meant, and i may be changing my mind. gimme a sec to think through this. :-P
[21:17] <wgrant> Unlike branch stacking, a blueprint is useful without its deps
[21:17] <jcsackett> perhaps we just cease to show the dependency graph if you don't have permission to see the various parts of it.
[21:18] <jcsackett> b/c i'm not comfortable with the possibility of information leakage that anything else implies.
[21:18] <abentley> jcsackett: Cease to show the graph completely?
[21:18] <wgrant> As long as you don't traverse through invisible blueprints there's no possible leakage
[21:18] <jcsackett> abentley: that's becoming my preference, yes.
[21:19] <wgrant> But your solution is easiest
[21:19] <wgrant> Hm
[21:19] <wgrant> On the other hand, hiding the graph entirely reveals that there are private things :)
[21:19] <jcsackett> wgrant: yes, and time is a factor, since we're operating under a hard deadline.
[21:19] <abentley> jcsackett: I am okay with that.  Of course, I'm not a stakeholder, so I recommend getting deryck's buy-in.
[21:19] <wgrant> You're well aware of how little respect I have for that deadline :)
[21:20] <jcsackett> wgrant: don't i know it. :-P
[21:20] <jcsackett> wgrant: but i don't think it's budging.
[21:20] <wgrant> Yeah
[21:21] <jcsackett> abentley: i'm going to kill the ec2 land run for the branch, and we'll bring it up at standup tomorrow, i guess?
[21:22] <abentley> jcsackett: Sounds good.
[21:22] <jcsackett> done.
[22:36] <jcsackett> wgrant: how bad is doing check_permission over a sequence of items? I seem to recall that being a performance horror.
[22:40] <wgrant> jcsackett: For blueprints? It'll issue a query per object, unless you've precached a positive or negative assertion for each
[22:44] <jcsackett> so, gets bad with scale.
[22:46] <jcsackett> the easiest way to deal with not navigating the bad nodes in dep graphs is to just filter with check_permission. as a proof of concept it works, but i think the performance question could become bad.
[22:47] <wgrant> jcsackett: Well, I'd do the filter in the initial DB query
[22:47] <wgrant> IIRC it's a WITH RECURSIVE, to which you could just add the usual access filter
[22:48] <jcsackett> 're referring to modifying those attrs?
[22:48] <jcsackett> ...something strange just happened...
[22:48] <jcsackett> trying again... so the graph stuff doesn't do any querying directly. it uses the spec's "dependencies" and "blocked_dep" attrs. but i guess those might be what you're referring to.
[22:49] <wgrant> Right, Specification.all_deps and Specification.all_blocked use a WITH RECURSIVE query to determine the dependency tree in two queries.
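The shape of that query, with a visibility filter folded into the recursion so invisible blueprints are never traversed, can be demonstrated against sqlite (which also supports `WITH RECURSIVE`). Table and column names are illustrative, not Launchpad's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE spec (id INTEGER PRIMARY KEY, visible INTEGER);
    CREATE TABLE specdep (spec INTEGER, dependency INTEGER);
    INSERT INTO spec VALUES (1, 1), (2, 1), (3, 0), (4, 1);
    -- Chain 1 -> 2 -> 3 -> 4; 3 is invisible, so 4 must stay unreachable.
    INSERT INTO specdep VALUES (1, 2), (2, 3), (3, 4);
""")
rows = con.execute("""
    WITH RECURSIVE deps(id) AS (
        SELECT dependency FROM specdep
        JOIN spec ON spec.id = specdep.dependency
        WHERE specdep.spec = 1 AND spec.visible = 1
      UNION
        SELECT specdep.dependency FROM specdep
        JOIN deps ON deps.id = specdep.spec
        JOIN spec ON spec.id = specdep.dependency
        WHERE spec.visible = 1
    )
    SELECT id FROM deps ORDER BY id
""").fetchall()
assert rows == [(2,)]  # 3 filtered out, and 4 never traversed through it
```

Filtering inside the recursion, rather than with `check_permission` afterwards, both avoids the query-per-object cost and avoids leaking structure through invisible nodes.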
[22:50] <jcsackett> wgrant: ok. i'll take a look at modifying those tomorrow morning. thanks.
[22:51] <wgrant> It should be easy enough
[23:34] <rick_h_> wgrant: I'd not played with the db setting you'd mentioned there
[23:34] <rick_h_> I mainly just got stuck because a bad db query will come out as some sort of .pt 'location error'
[23:34] <rick_h_> so got really confused for a bit until I pdb'd through it and got at the real error
[23:34] <wgrant> rick_h_: Setting LP_DEBUG_SQL makes the appserver and bin/harness etc. spew DB queries (including times) to stderr, which can be handy
[23:35] <rick_h_> very cool
[23:35] <wgrant> LocationErrors are usually an issue with the Python expression, not the actual SQL
[23:35] <wgrant> Since TAL has a bad habit of suppressing AttributeErrors
[23:35] <wgrant> Who needs tracebacks anyway :)
[23:35] <rick_h_> well, it was that I did .using(tables) vs .using(*tables) so the sql generated was comma separated through the join section :/
[23:36] <wgrant> Hm, I wouldn't expect that to be a LocationError, but who knows
[23:36] <rick_h_> and that came out to the app/traceback  level as a locationerror
[23:36] <rick_h_> heh, well I do have a way of breaking things in fun ways :)
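The `.using(tables)` vs `.using(*tables)` mistake is just Python varargs: passing the list hands the method one argument (the list itself) instead of each table. A pure-Python sketch with a stand-in for Storm's `Store.using()`:

```python
def using(*tables):
    # Stand-in for storm's Store.using(): joins its positional arguments.
    return ", ".join(str(t) for t in tables)

tables = ["Specification", "Product"]

# Buggy: the whole list becomes a single "table".
assert using(tables) == "['Specification', 'Product']"

# Fixed: star-unpack so each table is a separate argument.
assert using(*tables) == "Specification, Product"
```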
[23:36] <wgrant> Heh
[23:37] <rick_h_> I did want to say that I feel like an LP dev now. Back to back deploys in my name wooo! :)