[00:00] wgrant: but logging in as root with the root password doesn't work either
[00:00] or does it only copy the password of the one user it's created with ?
[00:01] Bert_2: If asked to bind ~ (the '-b $USER' option) it will copy the relevant lines from 'getent passwd' and 'getent shadow'. If -b is omitted, it will use ubuntu:ubuntu
[00:01] I see
[00:02] so I'll have to recreate the container then I guess ?
[00:02] You'll probably have to leave out -b, or manually 'chroot /var/lib/lxc/$CONTAINER/rootfs passwd $USER'
[00:02] wgrant: okey, thanks :D
[00:13] StevenK: yeah, i've been otp for an hour sorry. doing it now
[00:28] StevenK: is done
[01:01] wallyworld_: Thanks
[01:02] StevenK: sorry it took a while, i was otherwise engaged for a bit
[01:02] wallyworld_: Tis all good.
[01:13] wgrant: So I've cribbed get_bug_privacy_filter for get_branch_privacy_filter, but I'm not sure how to collapse down the policy grant filter
[01:28] 2012-07-04 01:15:01 DEBUG2 [PopulateBranchAccessArtifactGrant] Done. 30536 items in 304 iterations, 625.594790 seconds, average size 100.449373 (48.8121224706/s)
[01:29] wgrant: ^ DF
[01:32] StevenK: Rather than saying access_policies && foo, you can say access_policy = ANY(foo)
[01:54] wgrant: Does Storm export an Any, or do I need to screw around with SQL() more?
[01:58] StevenK: I'm not sure. At worst you'll need to add a three-liner to lp.services.database.stormexpr
[02:08] pretty sure it does
[02:08] and please add obvious things like that to storm itself
[02:08] we have reviewers in-team, no need to make the techdebt higher than it already is
[02:10] lifeless: I've been looking and I can't see it
[02:12] ah, probably doesn't then. its weak on scalars
[02:12] still, as wgrant says, easy to add.
[02:15] lifeless: You mean !scalars?
[02:16] maybe :P
[02:27] Bug #1020785
[02:27] <_mup_> Bug #1020785: lp.services.apachelogparser.base.get_files_to_parse doesn't like gzipped files over 4GiB < https://launchpad.net/bugs/1020785 >
[02:28] Can anyone see a better solution than checking if the read size is >4GiB and gzip isize <4GiB, then ignore the file if read size % 2**32 == gzip isize?
[02:32] lifeless: Can I have a One Patch Policy exception for http://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-stable/view/head:/database/schema/patch-2209-20-1.sql and http://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-stable/view/head:/database/schema/patch-2209-25-1.sql?
[02:32] well, thats also going to fail.
[02:33] why not keep a signature of the last bytes in the file, or the mtime, or something? What are the constraints.
[02:33] lifeless: It'll fail in roughly 1 in a couple of billion files.
[02:33] Which is certainly less than ideal.
[02:34] But not fatal.
[02:35] wgrant: +1
[02:35] Thanks.
[02:35] on the file thing
[02:36] what are the constraints, why do we need to check that way?
[02:36] would a logstash consumer be a better answer ?
[02:36] OK
[02:36] So
[02:36] We need a solution this decade
[02:36] => logstash is not an option
[02:37] gzip seeking is expensive.
[02:37] So we can't just seek to the end and see what is there.
[02:38] so these are problems not constraints.
[02:38] they are obvious, the constraints aren't.
[02:38] when do we need it fixed.
[02:39] Does it have cpu limits? latency limits?
[02:39] imagine you're telling a particularly annoying dev about the problem for the first time :)
[02:39] We need to be able to progressively parse logs
[02:40] Ideally without gunzipping 50GB of data we've already read
[02:40] Every time
[02:40] wgrant: I've done ... similar, but if I understand this code I'm looking at correctly, basically pull in a buffer of gzipped data, and pass that to gzread to decode and give back.
[02:41] of course, this is C, and doing it the hard way is expected.
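The modulo check StevenK proposes can be sketched in a few lines. The helper name and its arguments are hypothetical (the real code lives in lp.services.apachelogparser.base); what is grounded is the gzip format itself, whose trailer stores ISIZE, the uncompressed length modulo 2**32, which is exactly why a plain equality check breaks past 4GiB:

```python
import os
import struct

def isize_consistent(gzip_path, bytes_already_read):
    """Return True if the gzip ISIZE trailer is consistent with the
    number of uncompressed bytes we have already parsed.

    ISIZE is the last four bytes of a gzip file: the uncompressed
    length modulo 2**32, little-endian.  Comparing modulo 2**32
    instead of directly keeps the check working for files over
    4GiB, at the cost of a roughly 1-in-2**32 false match -- the
    "fails for 1 in a couple of billion files" trade-off above.
    """
    with open(gzip_path, 'rb') as f:
        f.seek(-4, os.SEEK_END)
        (isize,) = struct.unpack('<I', f.read(4))
    return bytes_already_read % 2**32 == isize
```

Note this only validates the trailer of a single-member gzip file; concatenated gzip members would need more care.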
[02:41] wgrant: well, that seems unsubstantiated; really not trying to be difficult. but "why not" ?
[02:41] notsureifserious
[02:41] am
[02:41] also you can fingerprint a compressed file trivially
[02:41] so not sure why you aren't proposing that
[02:42] as a short term stop gap at least
[02:42] We could store and check the CRC, but that's probably only marginally better than the filesize here, and it involves a schema change.
[02:43] is that a constraint?
[02:43] It's a very undesirable thing, as it means I can't fix it before lunch.
[02:44] anyhow, short term hack - just store the gzip file length.
[02:44] if the files are append only
[02:44] How's that better than my initial suggestion?
[02:45] It corrupts the database
[02:45] And requires SQL to unbreak the history.
[02:46] this seems like a fraught discussion
[02:46] I'd rather move it to voice, or pass; We're trying to solve the same stuff, so the contention is wholly unneeded.
[02:46] Storing the bad size seems strictly worse than changing the check -- I can't see any benefit.
[02:46] Can you explain?
[02:47] you want to optimise handling of files, avoiding rereads; I'd do that by fingerprinting the primary source of data, not by depending on a documented-as-broken field.
[02:48] Oh
[02:48] By "gzip file length" you mean the length of the compressed file?
[02:48] yes
[02:48] Not the file length from gzip.
[02:48] right
[02:48] Can't do that, since we need to know where to seek to within the file.
[02:48] So we know which line to start at.
[02:48] yes, it would then be different for existing things in the db, but depending on how you handle repeated reads that might not be an issue.
[02:48] So that requires a schema change too
[02:49] personally, I'd do the schema change; yes it means another day or two to get the fix deployed, but its much less likely to bite you in the bum later.
[02:49] alternatively, use memcache to store out-of-file fingerprints and have fallback code for when that's missing, but - gnnngh, I don't really like that approach except perhaps as a stopgap.
[02:50] memcache is only sensible when it's not hideously expensive to recalculate.
[02:50] In this case we store logs for months/years
[02:50] Worth hundreds of gigabytes
[02:50] So the loss of the data is a huge problem.
[02:51] what seems odd to me is that we have code even attempting to process old logs; theres no high water mark ?
[02:51] if there isn't we have quadratic overheads anyhow
[02:51] It's linear over a couple of thousand files and DB rows in a single run.
[02:52] There's no watermark because we don't assume anything about the filenames.
[02:52] if you want to do a quickiefixie, I've no objection. But I do think its worth considering how to make it deal with a couple of OOM of growth.
[02:52] So ops can do as they please with logrotation etc.
[02:52] wgrant: you assume a single file can get altered though
[02:52] lifeless: Out of memory?
[02:52] lifeless: It can
[02:52] StevenK: order of magnitude
[02:52] OH! Orders of magnitude
[02:52] lifeless: It gets appended to throughout the day.
[02:52] Yeah, just got it
[02:53] StevenK: cool
[02:53] lifeless: Also, not just that it can get appended to.
[02:53] lifeless: We parse 10 million lines per run
[02:53] lifeless: Logs have well over 10 million lines.
[02:54] So we'd have to only store the size of the gzip file once we reached the end.
[02:54] Storing the physical location in the file isn't going to work, because of the trailer.
[02:55] sure, because thats how you're handling repeated reads (and optimising for finding-what-to-process-next).
[02:56] lifeless: How would you propose to do it?
[02:56] I'd suck stuff off of the logs as it happens and put it in a persistent queue, process it just in time from there.
[02:57] If I had a list of things that were good ideas, queueing up thousands of writes a second on any data store we have today would probably not be on that list.
[02:58] wgrant: 3K per second isn't high for amqp.
[02:58] wgrant: we've got several OOM headroom. Particularly for small things, which this is.
[02:58] well, at least two anyhow.
[02:58] It also prevents us from ever disabling the process, or reading historical logs.
[02:58] And we can shard the queues (and servers) if needed.
[02:59] it doesn't discard the logs, we can indeed do historical logs if needed, but they become a special case, not the general case.
[02:59] Anyway
[02:59] Disabling the process can be done a few ways; have a stateful sucker-upper; disable the consumer (and just rotate queues if we have too many 10's of GB of logs pending).
[02:59] That rewrite is at least 10 years away
[03:00] How would you do it this century?
[03:00] You seem to have a strong opinion that there's a better way than storing these offsets.
[03:00] I think that when you're re-reading append only files, storing where you got to in it is fine.
[03:01] 12:55:13 < lifeless> sure, because thats how you're handling repeated reads (and optimising for finding-what-to-process-next).
[03:01] Hmm
[03:01] That suggests you think it's not fine :)
[03:01] I think for incremental and startup performance you don't want to read the file content at all unless you have work to do, so storing a stat fingerprint along with the amount read is important.
[03:02] lifeless: Ah, perhaps once the number of files is about 3 orders of magnitude higher, yeah.
[03:02] I think that having linear growth of files to handle, which is going to jump by three in the short term, suggests that considering all files each run may have a shorter lifespan than you think.
[03:02] s/files/files per parser instance/
[03:02] lifeless: It *should* have a shorter lifespan than I think.
[03:02] lifeless: But it won't.
[03:03] so, right now, I'd do a schema patch, adding a string. I'd use bzrlib's stat fingerprint function to fingerprint files, and I'd set that for files where we processed to the end of the file as we saw it.
[03:04] I'd nag lifeless about getting near-realtime log shipping in place and prepare a cunning plan for doing it better later this year.
[03:05] The fingerprinting thing is completely unhelpful for the problem we've encountered here.
[03:05] It's an unrelated optimisation.
[03:06] The problem is that we have this file.
[03:06] We haven't completely parsed it yet.
[03:06] But it hasn't changed since we last tried.
[03:06] can you explain why that makes it unhelpful ?
[03:07] We can't skip based on the fingerprint check unless we only set the fingerprint once we hit EOF.
[03:07] Otherwise we'll skip files that we haven't completely parsed.
[03:07] My Any addition doesn't work. :-(
[03:07] And we haven't hit EOF here; that's the entire problem.
[03:07] So the fingerprint check is unrelated.
[03:07] StevenK: Oh?
[03:08] wgrant: I don't follow. Consider this logic:
[03:08] wgrant: Can you prod at http://pastebin.ubuntu.com/1074183/ after you and lifeless are done. Time for lunch.
[03:08] does the fingerprint match the file (no fp matches no file)
[03:08] onploooooooooooooopiuuuuu
[03:08] cat
[03:09] if yes, open, ungzip and seek to the last recor~6~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[03:09] BAH
[03:10] StevenK: +class Any(CompoundOper):
[03:10] StevenK: ANY is not a compound operator
[03:10] if yes, open, ungzip and seek to the last recorded offset and process from there. On halt, if we finished the file, store the fingerprint we obtained at the beginning of this.
[03:10] StevenK: It's a function.
[03:10] wgrant: It's a NamedFunc ?
[03:10] Right
[03:11] lifeless: That's precisely what I described, right.
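lifeless's algorithm, with the record-fingerprint-only-on-EOF rule the discussion converges on, could look something like this. The function and field names are hypothetical, and os.stat stands in for bzrlib's battle-tested stat fingerprint (which hashes more stat fields); it is a sketch of the control flow, not the Launchpad fix:

```python
import os

def stat_fingerprint(path):
    # Illustrative stand-in for bzrlib's fingerprint: size, mtime, inode.
    st = os.stat(path)
    return (st.st_size, int(st.st_mtime), st.st_ino)

def plan(path, state):
    """Decide what to do with a log file given stored parser state,
    a dict like {'fingerprint': ..., 'offset': ...}.

    Because the fingerprint is recorded only once a file has been
    parsed to EOF, a file that is unchanged but unfinished is still
    resumed rather than skipped -- the case that broke the naive
    "has it changed?" check.
    """
    if state.get('fingerprint') == stat_fingerprint(path):
        return ('skip', None)            # finished last time, unchanged since
    return ('resume', state.get('offset', 0))

def finish(state, new_offset, hit_eof, fp_at_start):
    # On halt: always record how far we got; record the fingerprint
    # taken *before* parsing began, and only if we reached EOF.
    state['offset'] = new_offset
    if hit_eof:
        state['fingerprint'] = fp_at_start
```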
[03:12] wgrant: That looks better, but I think the SQL is still bong: SELECT Branch.id FROM Branch WHERE Branch.id = 77 AND Branch.information_type IN (1, 2) AND COALESCE((Branch.access_grants)&&(SELECT ARRAY_AGG(TeamParticipation.team) FROM TeamParticipation WHERE TeamParticipation.person = 243654), false) AND (Branch.access_policy) = ANY((SELECT AccessPolicyGrant.policy FROM AccessPolicyGrant JOIN TeamParticipation ON TeamParticipation.team = Acce
[03:13] StevenK: Where'd the array_agg go from the APG bit? If you can remove that easily, as it seems, then just say access_policy.is_in(apgs). No need for ANY
[03:14] lifeless: Oh. I missed the "and I'd set that for files where we processed to the end of the file as we saw it" you said earlier as I was dying of optimism poisoning from the following line.
[03:15] False optimism poisoning, that is.
[03:25] well
[03:25] you can call it that
[03:25] anyhow
[03:25] whats the issue with that algorithm? How will it not work ?
[03:25] I'd hate to be suggesting something bong
[03:26] With the crucial "only set on EOF bit" that I missed you saying, it should be fine.
[03:26] there are other ways to fpr files, but the bzr one is shall we say battle tested.
[03:27] Heh
[04:10] wgrant: http://pastebin.ubuntu.com/1074238/
[04:13] StevenK: Indeed,
[04:18] wgrant: Indeed?
[04:18] StevenK: That looks roughly correct. Does it work?
[04:18] It returns []
[04:18] I think the public query should not be ANDed in
[04:18] Ah
[04:19] Yes
[04:19] That should be an or
[04:19] public OR artifact grant OR policy grant
[04:19] I meant the complex bits of the query looked correct :P
[04:19] Haha
[04:20] wgrant: Do you think returning the branch id is okay? I'm really interested if one of those three is True for branch.id == self.id
[04:20] Can't COALESCE against three, I guess
[04:21] StevenK: Why would you want to COALESCE against three?
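The distinction wgrant draws (ANY renders as a function call applied to a subquery, not an infix compound operator like AND/OR) is easy to see with a toy expression compiler. These classes only mimic the shape of Storm's CompoundOper/NamedFunc split; they are not Storm's actual API:

```python
class CompoundOper:
    # Infix rendering: arguments joined by the operator, e.g. "a AND b".
    oper = None

    def __init__(self, *args):
        self.args = args

    def compile(self):
        return (' %s ' % self.oper).join(str(a) for a in self.args)

class NamedFunc:
    # Function-call rendering: "NAME(arg, ...)".
    name = None

    def __init__(self, *args):
        self.args = args

    def compile(self):
        return '%s(%s)' % (self.name, ', '.join(str(a) for a in self.args))

class And(CompoundOper):
    oper = 'AND'

class Any(NamedFunc):
    # ANY is applied to a subselect or array, hence NamedFunc,
    # not CompoundOper -- the bug in the pasted patch.
    name = 'ANY'
```

Subclassing the wrong base would have compiled `x ANY y` instead of `ANY(x)`, which is roughly why the first attempt produced broken SQL.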
[04:21] You basically want to SELECT 1 FROM Branch WHERE id = $BRANCH_ID AND $VISIBLE
[04:21] In this case you're selecting branch.id, but that's just as good
[04:22] Hm, still returns []
[04:23] Oh, but it's supposed to. I think this is the pre-subscribe check
[04:23] What's it meant to return?
[04:23] Ah
[04:24] wgrant: So my next question is I want something more elegant than "return Store.of(self).find((Branch.id,), conds) == [self.id]
[04:24] With an ending double quote
[04:25] And putting a list around the store, but handwave
[04:26] StevenK: More like 'return not store.find(1, Branch.id == self.id, $VISIBLE_FILTER).is_empty()'
[04:29] (Storm might not like the 1 very much, but it doesn't at all matter what it is)
[04:30] (1,), even?
[04:30] AttributeError: 'int' object has no attribute '__dict__'
[04:30] Yeah, it's probably trying to autotable it
[04:30] Which won't work well
[04:31] Same error with (1,) :-(
[04:31] Might as well just say Branch, then
[04:32] Yeah, we don't give a toss about what it returns
[04:40] wgrant: http://pastebin.ubuntu.com/1074270/
[04:41] StevenK: Does it work?
[04:41] wgrant: Nope
[04:42] Returns 0 rows before subscription which is fine, and 0 rows after which isn't.
[04:42] Have you checked the contents of AAG?
[04:48] No AAG created :-(
[04:49] That would be the issue. Missing FF?
[04:49] IBranch.subscribe() does not use a FF to guard
[04:50] Hm, maybe the sharing service still thinks subscription confers visibility
[04:53] Sort of. getUtility(IAllBranches).visibleByUser() needs porting
[04:53] Which I'm trying to work out without my brain dribbling out of my ears.
[05:14] wgrant: http://pastebin.ubuntu.com/1074301/
[05:16] StevenK: You'll want to remove the helper methods that you've eliminated references to, but indeed.
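The final shape of the filter the pair arrive at -- public OR artifact grant OR policy grant -- is easier to see in plain Python than in the generated SQL. The dict keys below just mirror the columns in the pasted query (information_type values 1 and 2 treated as public, as in the SQL); they are not the real model classes:

```python
def branch_visible(branch, teams, policy_grants):
    """Pure-Python rendering of the branch privacy filter discussed
    above: visible if public, OR the user holds an artifact grant on
    the branch, OR a policy grant on its access policy.

    branch: dict with 'information_type', 'access_grants' (person/team
        ids), and 'access_policy'.
    teams: set of team ids the user participates in.
    policy_grants: set of access policy ids granted to those teams.
    """
    # Public information types (1 and 2 in the pasted SQL).
    if branch['information_type'] in (1, 2):
        return True
    # Artifact grant: the branch was shared with the user directly.
    if teams.intersection(branch['access_grants'] or ()):
        return True
    # Policy grant: the user can see everything under this policy.
    return branch['access_policy'] in policy_grants
```

The original bug was ANDing these three conditions together, which returns [] for any branch that doesn't satisfy all of them at once.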
[05:17] StevenK: It may be worth seeing how the query looks if you reimplement the Branch method in terms of BranchCollection, as I did with Bug.userCanView
[05:21] % bzr damage
[05:21] Using submit branch /home/steven/launchpad/lp-branches/devel
[05:21] Healing: 4 lines of code
[05:21] I approve
[05:27] Hm, neat. permission denied on APG
=== matsubara-afk is now known as matsubara
=== Ursinha is now known as Guest51695
[07:03] stub: Hi, I've got a couple of DB reviews up for you when you have some time
[07:06] This is the appropriate way to run a single test right? 'testr run -- -t lib/lp/bugs/tests/bug.py'
[07:07] It should fail, but it's passing
[07:08] wgrant: k
[07:08] huwshimi: For Python modules you want to specify something like '-t lp.bugs.tests.bug'
[07:08] huwshimi: But lp.bugs.tests.bug doesn't have any tests in it. It's just a helper module.
[07:08] Perhaps you mean lp.bugs.tests.test_bug
[07:10] wgrant: Oh, I see where I've gone horribly wrong. Thanks.
[07:16] I've changed a table to a list and in lib/lp/bugs/tests/bug.py the print_bugfilters_portlet_unfilled() helper wants to grab the table and print its contents with print_table(). Any suggestions on how to print it now that it's a ul?
[07:17] print_table is probably defined somewhere in that test
[07:18] StevenK: It's defined in lp.testing.pages
[07:18] StevenK: And there doesn't seem to be print_ul or anything
[07:19] huwshimi: BeautifulSoup?
[07:19] StevenK: You mean, just print out the contents?
[07:21] huwshimi: You could use a doctest matcher, but no, I don't mean just print out the contents.
[07:22] huwshimi: BeautifulSoup will parse the output of print_bugfilters_portlet_unfilled() and you can then inspect it with asserts.
[07:23] huwshimi: This is in a doctest, right?
[07:23] wgrant: No, this is in the helper
[07:24] huwshimi: Well, yes, but it's probably eventually used by a doctest. The function prints it out, so you don't want to directly make assertions.
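A print_ul-style helper of the kind huwshimi needs can be built from the stdlib alone. This is a modern-Python sketch of the idea, not Launchpad's actual fix (which would more likely use lp.testing.pages helpers such as extract_text/find_tag_by_id, or BeautifulSoup as suggested):

```python
from html.parser import HTMLParser

class _ListItems(HTMLParser):
    # Collect the text inside each <li> of the portlet's <ul>.
    def __init__(self):
        super().__init__()
        self.items = []
        self._in_li = False

    def handle_starttag(self, tag, attrs):
        if tag == 'li':
            self._in_li = True
            self.items.append('')

    def handle_endtag(self, tag):
        if tag == 'li':
            self._in_li = False

    def handle_data(self, data):
        if self._in_li:
            self.items[-1] += data

def ul_items(markup):
    """Return the stripped text of each list item, ready to print
    one per line much as print_table did for table rows."""
    parser = _ListItems()
    parser.feed(markup)
    return [item.strip() for item in parser.items]
```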
I'd try the usual extract_text(find_tag_by_id(...)) pattern, see if that yields usable output.
[07:26] I see so I just need to get it to extract the same data in the same format as the table printer was doing. I think I'm on it.
[07:27] Right, hopefully that's easy enough.
[07:27] If you can't get it exactly the same, get something that looks sensible and change the callsites :)
[07:29] Thanks :)
[07:53] Well, that wasn't too bad
[07:55] good morning
[08:11] :(
[08:11] postgres upsets me
[08:11] It won't use an index including 'foo IS NULL' to satisfy 'foo IS NOT NULL';
[08:18] czajkowski: Ew
[08:19] wgrant: morning to you too
[08:19] Not really. But we like OpenStack, so we might be able to arrange something with a sufficient degree of evil.
[08:19] Indeed, morning.
[08:19] wgrant: wrong channel
[08:20] IMO it belongs here, but I guess it's arguable.
[08:20] it's early
[08:21] wgrant: whats the specific index its not using ?
[08:22] lifeless: bugsummary(distribution, sourcepackagename, tag IS NULL) WHERE distribution IS NOT NULL
[08:22] lifeless: It'll use that for a tag IS NULL query
[08:22] And if I s/tag IS NULL/tag IS NOT NULL/ on both it'll work
[08:24] tag is not null where distribution is null ?
[08:25] lifeless: Confused.
[08:25] wgrant: may be a stats thing not an index thing; e.g. many rows predicted
[08:25] lifeless: That's pretty much disproven by the fact that inverting the last element of the index causes the query to use it.
[08:25] oh hangon.
[08:26] wgrant: you say:
[08:26] 20:22 < wgrant> lifeless: It'll use that for a tag IS NULL query
[08:26] 20:22 < wgrant> And if I s/tag IS NULL/tag IS NOT NULL/ on both it'll work
[08:26] so both ways around work.
[08:26] Or you mistyped somewhere.
[08:26] Yes. Both ways work.
[08:26] hello
[08:26] But only with the matching index.
[08:26] so
[08:26] But both are satisfiable from either.
[08:26] bugsummary(distribution, sourcepackagename, tag IS NULL) WHERE distribution IS NOT NULL looking for 'tag IS NOT NULL' is what doesn't work ?
[08:27] Yes.
[08:27] wgrant: I presume you have a distribution in the query as well ?
[08:27] lifeless: Naturally.
[08:27] Otherwise it wouldn't use that index :)
[08:28] well, as it isn't using the index, I felt I should check.
[08:28] :>
[08:28] jml: elloh
[08:28] jml: Re. your email, that excludes test deps, I guess?
[08:28] wgrant: so why index tag IS NULL; looking for the specific sub summary?
[08:29] lifeless: We currently maintain separate indices on tag IS NULL and tag IS NOT NULL
[08:29] Was trying to avoid that.
[08:29] hmm, I forget the detail
[08:29] have you tried DISTINCT FROM ?
[08:30] wgrant: and NOT tag IS NULL ?
[08:30] wgrant: yes, the output. the script is just a Pythonically aware grep for 'import'
[08:31] wgrant: I'm probably going to buff it up so it recurses directories (or maybe packages)
[08:32] and ignores internal imports
[08:33] and then maybe consults some kind of db to figure out which things are modules and which aren't
[08:34] but, you know
[08:36] lifeless: NOT (tag IS NULL) has the same behaviour as tag IS NOT NULL
[08:36] Which doesn't really make much sense.
[08:36] wgrant: It never will, because an IS NULL index contains no information about rows without a null in that column.
[08:36] stub: Not a partial on tag IS NULL
[08:36] stub: But an index with the boolean "tag IS NULL" as one of its elements.
[08:37] computed indices are evil
[08:37] wgrant: why doesn't NOT (tag is NULL) behaving the same as tag IS NOT NULL, not make sense ?
[08:38] lifeless: While you're here, what do you think about soren's question in #launchpad? We'd need to set archive.signing_key on the new PPA fairly soon after creation.
[08:38] lifeless: Well, it was possible that it didn't consider IS NULL and IS NOT NULL to be inversions of the same operator.
[08:38] But it clearly does.
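wgrant's observation that NOT (tag IS NULL) behaves exactly like tag IS NOT NULL follows from IS NULL being one of the few SQL predicates that never itself returns NULL, so three-valued NOT has nothing unusual to do to it. A quick sanity check of that equivalence, modelling SQL's three-valued logic in Python (NULL as None):

```python
def sql_not(b):
    # SQL three-valued NOT: NOT NULL is NULL; otherwise plain negation.
    return None if b is None else (not b)

def is_null(v):
    # IS NULL always yields true or false, never NULL.
    return v is None

def is_not_null(v):
    return v is not None

# NOT (x IS NULL) and x IS NOT NULL agree for every input, which is
# why a planner is free to treat them as the same predicate.
for value in (None, 0, '', 'tag-name'):
    assert sql_not(is_null(value)) == is_not_null(value)
```

This only shows the two predicates are logically interchangeable; whether the planner will match either form against an expression index on "tag IS NULL" is a separate question, which is the behaviour being complained about in the log.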
[08:39] I was hoping it might :)
[08:39] what about DISTINCT FROM ?
[08:40] uhm, so I think doing what soren asks would be good; be better to automate it (at least to the extent of sysadmin only apis).
[08:40] or something.
[08:41] I'm not a huge fan of manual fiddling, but this is a case where our model makes decisions more stressful for users, its largely incumbent on us to deal with the side effects.
[08:41] Exactly.
[08:42] Although APIing this sounds beyond perilous.
[08:43] I think NOT( ) has to assume the contents of the parenthesis is true, false or null. IS NOT NULL is a single operator and it is known it returns true or false.
[08:44] wgrant: You go into detail about bugsummary journaling performance, but isn't that irrelevant for writes atm?
[08:45] stub: Not at all. It's the most expensive bit of big bug updates apart from the usual Storm n+1 query thing.
[08:46] wgrant: Yes, and are big bug updates a performance problem?
[08:46] stub: We have about 3 timeout bugs about them.
[08:46] eg. approving Linux SRUs
[08:46] Do you know what sort of number of bugs are being done in a batch?
[08:46] Can create about 50 tasks in one transaction, because the kernel team are insane.
[08:47] We end up with bugs with 80ish tasks, 2/3 of them created in one transaction.
[08:47] If the bug has 8 tags, which is fairly common, that's a lot of journal rows.
[08:47] The last task will unjournal 79*8*afew rows, journal 80*8*afew rows.
[08:47] Hmm... So I'm not against performance improvements, just improving performance doesn't really solve scalability problems.
[08:48] It stops bulk task addition from being quadratic.
[08:48] We can fix it so creating 80 tasks works on our hardware in a single transaction, then some sod will try and create 81.
[08:51] And things like dropping foreign key constraints mean we should have DELETE triggers too (they are there already I think?)
[08:51] stub: Well, the FKs really shouldn't have been on these tables at all.
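wgrant's arithmetic (the n-th task unjournals (n-1)*tags rows and journals n*tags rows, each times "afew") sums to a quadratic total; a throwaway check of that claim, with the unspecified "afew" left as a parameter:

```python
def journal_churn(tasks, tags, rows_per_tag=1):
    """Total journal rows touched when tasks are added one at a time.

    Task n unjournals (n-1) * tags * rows_per_tag rows and journals
    n * tags * rows_per_tag rows, so the grand total collapses to
    tags * rows_per_tag * tasks**2 -- quadratic in the task count.
    """
    total = 0
    for n in range(1, tasks + 1):
        total += (n - 1) * tags * rows_per_tag  # unjournal previous rows
        total += n * tags * rows_per_tag        # journal the new set
    return total
```

For the 80-task, 8-tag bug in the log that is 8 * 80**2 = 51,200 row operations per rows_per_tag, which is the churn that batching task creation (one journal pass instead of 80) avoids.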
They actually cause correctness bugs.
[08:52] We normally don't bother with them in this situation. Can't recall why we had them.
[08:52] I'm wondering if we rely on ON DELETE CASCADE behaviour somewhere.
[08:52] We don't.
[08:52] Even if we did, it would be wrong.
[08:53] Since the deletion process would also remove the references in the original tables.
[08:53] Which would create journal entries.
[08:53] So you'd end up with negative counts in bugsummary after journal rollup.
[08:54] Negative counts are bad :)
[08:54] Though we have some :/
[08:55] 289 bugsummary rows with count < 0 :/
[08:56] That would be a logic flaw somewhere in the triggers, wouldn't it?
[08:57] There's a lot of corruption in Ubuntu's kernel bugs. I've identified a couple of bugs in the initial population code that cause some of them, but it doesn't explain all of it.
[08:57] The triggers themselves weren't significantly buggy AFAICT. Just the initial population query.
[08:58] lifeless: Given https://pastebin.canonical.com/69406/ I propose https://pastebin.canonical.com/69407/
[09:04] wgrant: I wonder if we should dump doing this in triggers entirely, but I guess that would require reliable async job processing.
[09:05] mgz: did you see that there is one more failing test on python-2.7? It looks like a doctest that is (again) overly sensitive.
[09:05] stub: I came close to doing that when doing BugTaskFlat, but before we can sensibly do that we need both reliable async job processing and something resembling a sensible data access layer in the app.
[09:10] r=stub
[09:13] stub: Thanks.
[09:14] stub: Other disclosure stuff (that doesn't touch the existing bugs model so directly) is using celery-based eventual consistency fairly extensively.
[09:16] Yeah, just wondering if we are investing too much in rabbit. Complex beast, and more complexities to ensure we don't drop tasks.
[09:17] Using PG as the task queue for celery could suit us much better, and I think celery supports that out of the box.
[09:18] Indeed, although it hasn't been problematic yet, and we're not using it for absolutely critical stuff.
[09:19] IIRC we haven't chosen it for some things because of this problem, or chosen more convoluted implementations.
[09:25] stub: Right, the recent port of the classical job system to celery involves a regular task to reschedule jobs that remain pending but without a rabbitmq job.
[09:25] We don't have pure-rabbit jobs.
[09:25] Yer, that extra complexity.
[09:25] Is that generic, or do we have to do this every time?
[09:26] Guess you get it if you use the lp job system.
[09:27] stub: It currently only supports branch scan jobs, I believe, but it's pretty simple to genericise. I imagine it will be once it's fully deployed and tested for branch jobs.
[09:30] wgrant: +1
[09:31] lifeless: On the signing_key SQL?
[09:31] yes, caveat being to follow the normal procedure :)
[09:31] Of course, thanks.
[10:36] lifeless: I thought we didn't run tests for code that wasn't launchpad proper. I'm getting a test failure on python-2.7 inside the launchpadlib test suite (in the egg)
[10:43] and the plot thickens. As near as I can tell, you run the lplib tests by running "python setup.py test", but that suite doesn't load the doctests... so they are only being run by launchpad bin/test, and it is in an egg, so we need a new version of launchpadlib, a new release, and then updating the versions.cfg file
[10:43] bac: I noticed you are prominent in launchpadlib development. Can I ask you some questions?
[10:44] jam: dont think he's on today
[10:44] czajkowski: thanks
[10:44] I imagine neither is benji, or anyone else in the US
[10:44] jam: as is most of USA
[10:53] jam: Some launchpadlib tests only run as part of the Launchpad test suite, since they need to test against Launchpad.
[10:54] Hi, I'm about to do some changes to my newly rocketfuel'ed launchpad, now I am to make some config changes to prevent launchpad from going into dev mode, I was hoping to use this page: https://dev.launchpad.net/WorkingWithProductionConfigs but while it says it describes editing configuration it actually doesn't seem to explain anything else apart from repo and branch stuff (which are not as important as getting out of devmode), so help ?
[10:57] wgrant: so if I have a fix for a launchpadlib test, I have to get it committed into launchpadlib trunk, then officially packaged and uploaded, then update versions.cfg, right?
[10:58] but I can't run that test outside of launchpad itself...
[10:58] wgrant: do you just hack the code in the egg directory directly?
[10:59] jam: You're correct on all counts.
[11:00] /cry
[11:02] Bert_2: That page mostly documents how Canonical LP engineers should work with our production configs. The closest thing to a public production config example is the old configs from before the open-sourcing, in r8666. Are you aware of the trademark and copyright restrictions that apply to the Launchpad name and images?
[11:03] wgrant: I'm not fully aware of that no, I thought I should only remove the launchpad logo
[11:04] https://dev.launchpad.net/LaunchpadLicense
[11:04] k, so basically if I lose the logo and icons and ask permission to use the name launchpad I'm fine ?
[11:05] and the hole deepens. The specific contents of the config file is set by lazr.restful.authorize_auth.OAuthAuthorizer...
[11:05] Bert_2: You'd be best off changing the name.
[11:05] jam: You need to change the config file?
[11:05] wgrant: we have a doc test that asserts the exact bytes of the content
[11:06] and in py2.7 it changed 2 things (a blank entry now has a trailing space, and the order is changed)
[11:06] I'm guessing the ordering is defined by iter(dict)
[11:06] and that 2.7 just does it differently.
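jam's fix amounts to making the doctest independent of dict iteration order. One way to picture it is a deterministic renderer; this sketch uses the modern configparser module for illustration, it is not the actual launchpadlib patch (which monkey-patched the old ConfigParser.SafeConfigParser):

```python
import configparser
import io

def render_sorted(parser):
    """Write out a config with sections and options sorted, so the
    byte-exact output no longer depends on dict iteration order --
    the py2.6 vs py2.7 doctest breakage discussed above."""
    out = io.StringIO()
    for section in sorted(parser.sections()):
        out.write('[%s]\n' % section)
        for option in sorted(parser.options(section)):
            out.write('%s = %s\n' % (option, parser.get(section, option)))
        out.write('\n')
    return out.getvalue()
```

Usage: build the parser as usual, then assert against `render_sorted(parser)` instead of the parser's own `write()` output.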
[11:06] It was a very brittle test, waiting to break
[11:07] wgrant: and can that be done through the configfile or am I forced to change the name by hand in every python file ?
[11:07] jam: Oh, lovely. Can you just change the test to be a bit more flexible?
[11:08] wgrant: I was thinking to put a "sorted()" around it, yeah.
[11:08] Bert_2: There's no config option. The one other production instance I know of basically ran sed over all the python files and templates.
[11:08] I'm just struggling to even run it in isolation :). I think I at least have a plan of attack.
[11:08] jam: Heh
[11:09] wgrant: alright, that can be done, and how about going from development to production mode, how's that done, cause the wikipage isn't all that clear :S
[11:10] Bert_2: You'll see the existing public configs under configs/
[11:10] You select a config to use with the LPCONFIG environment variable. It defaults to 'development'.
[11:10] wgrant: so I think I'm going to monkey patch ConfigParser.SafeConfigParser in part of a doc test. That sounds sane, right? :)
[11:10] Any objection to me updating the copyright year in LaunchpadLicense?
[11:10] cjwatson: I was going to do that myself, but feel free :)
[11:11] Done
[11:11] (no monkeys will be patched, I swear)
[11:11] Bert_2: I'd copy the development config, and fix up the paths and domains. In launchpad.conf of your new config, remove the 'devmode on' line. Most of the settings are in launchpad-lazr.conf.
[11:11] jam: If it works...
[11:12] wgrant: alright, so if I change paths, domains, the devmode line and then the usual email addresses and numbers for question deprecation etc. I'm fine ?
=== matsubara is now known as matsubara-afk
[11:14] gmb: could you take a look at this?
https://code.launchpad.net/~ivo-kracht/launchpad/bug-921901/+merge/113234
[11:15] ivory, Sure - though the topic's out of date :)
=== gmb changed the topic of #launchpad-dev to: http://dev.launchpad.net/ | On call reviewer: - | Firefighting: - | Critical bugs: 4.0*10^2
[11:18] Bert_2: If you want it to send email you'll also have to reconfigure a couple of things. zcml/package-includes/mail-configure-normal.zcml redirects all email to root@localhost, and you might want to enable the immediate_mail.send_email in launchpad-lazr.conf as well. You might also want to do away with the sampledata, and instead create a clean DB using the scripts in https://code.launchpad.net/~wgrant/launchpad/bootstrap-db-from-scratch
[11:19] wgrant: awesome, I thought going out of devmode would give me a clean db, so thx for saving me the frustration :D
[11:27] Bert_2: Changing an appserver configuration file fortunately won't erase your database :)
[11:43] wgrant: I thought the database was generated by a python script that read the config, but I didn't really take the time to read the sourcecode, I've barely been out of exams and this development platform is rather important for our student's body
[11:52] wf
[11:52] wgrant: do you have any experience rolling out an updated EC2 AMI?
[11:56] I found EC2Test on the wiki, I'll try following that one
[11:58] jam: Why do you need to?
[11:58] But yes, that page is correct.
[11:59] wgrant: we want to get the test suite running w/ python-2.7
[11:59] so that we can get production moved to precise afterwards
[11:59] jam: Not until we've scheduled it on production we don't.
[11:59] wgrant: I'm working with flacoste to do... 1) upgrade db-devel to run py2.7 along with ec2-test running py 2.7
[11:59] and then leave devel running 2.6
[12:00] once we have both passing
[12:00] then we can upgrade prod
[12:00] but we can't upgrade prod until we know it works, and have a clean test suite.
[12:01] he wants ec2-test to fail early rather than going into testfix on db-devel, and buildbot will run on devel w/ 2.6 before it goes to qastaging [12:01] I'm not sure it's a great idea to upgrade ec2 unless we know we're going to be upgrading production within a very few weeks, or we're going to be in trouble when people start using 2.7isms and breaking buildbot. [12:01] wgrant: so we need a migration path, which I was asked to work on. Certainly each step gets reviewed, etc. [12:02] Sure. [12:02] Let's not just actually deploy the 2.7 image to everyone until we have a schedule worked out. [13:06] wgrant: the plan is to upgrade quickly, really not more than a few weeks === salgado is now known as salgado-brb [13:30] adeuring: ivory https://plus.google.com/hangouts/_/3dd64d7a6ec2e029e3e9abfc99a72a95ff0bef6c?authuser=0&hl=en-US === salgado-brb is now known as salgado [14:07] mgz: did you see my python-upgrade bug? I submitted a fix for it to launchpadlib === abentley changed the topic of #launchpad-dev to: http://dev.launchpad.net/ | On call reviewer: abentley | Firefighting: - | Critical bugs: 4.0*10^2 [14:08] I saw the mp [14:08] it will take a bit to finish that one off, but that was the last failure after manually merging your branches. [14:08] mgz: did they all land in trunk? [14:08] guess I can review that? not sure what the funny lp reviews system applies to [14:09] ^jam, yup landed now and tickets moved horizontally with lp2kanban [14:54] mgz: what is the approval process for launchpadlib code? [14:54] jam: no idea, my experience has been "bug someone after mp has been forgotten by everyone" [14:54] or was that a lazr thing... [14:55] mgz: I think it is a more-than-one-projects thing [14:55] but everyone I've seen in launchpadlib on lp's "active in this project" is asleep or on holiday. so I might try later. [14:55] eg, tomorrow [14:55] jam: have a look at https://dev.launchpad.net/HackingLazrLibraries [14:56] /wave bac, not doing July-4 stuff yet? 
[14:56] jam: it has good stuff, especially if you need to make a release [14:56] bah, our founding fathers would be embarrassed by our slothful coworkers. :) [14:56] we probably don't need a release, if we can just bump the lp dep to a specific revno [14:57] mgz: it is an egg, not a sourcecode [14:57] * mgz is not sure if that's a done thing though [14:58] bac: so who actually commits stuff to launchpadlib trunk? [14:59] jam: i think the short answer is 1) make a dev branch, don't work in your copy of trunk, 2) make a MP and get it reviewed, 3) merge to trunk and commit with a good message including [r=reviewer], 4) push. you'll then need to decide on whether to make a release, create an egg, and bump versions.cfg in a LP branch. probably best to just read that long wiki page, now that i think about it. [14:59] (a member of the lazr-developers team) [14:59] bac: oh, I know all about that stuff :) [14:59] I've done 1-3 [14:59] "merge to trunk", you just do it directly? [14:59] or it goes through pqm (the page says pqm) [14:59] jam: yep [14:59] no bots [15:00] oh really? ok, my memory is faulty then [15:00] * bac defers to the wiki [15:00] I'm not sure I'm in lazr-developers [15:00] I'm in launchpad [15:00] (launchpad-dev? whatever that group is) [15:02] bac: looks like I can commit straight to lp:launchpadlib [15:02] jam, you should be in this group, indirectly: https://launchpad.net/~lazr-developers [15:02] yep [15:03] bac: it looks like 'bin/test' doesn't run the doctests either. Only running "bin/test" in launchpad itself, but not in launchpadlib [15:04] yes, iirc getting the test to run is done via a LP branch [15:08] abentley: could you review this for me? https://code.launchpad.net/~ivo-kracht/launchpad/bug-562532/+merge/113414 [15:08] ivory: sure. [15:09] ivory: How does that remove a page request? [15:10] abentley: all the other edit buttons are sprite icons, only those were <img> tags [15:13] ivory: I'm used to sprites looking like ''. Why is this one different?
[15:14] abentley: it's removing the request to: https://launchpad.net/@@/edit via the src of the <img> tag, and referencing the sprite in the CSS. [15:16] rick_h_: ivory already answered that. Do you know what's up with this
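[Editor's note] Earlier in the log, jam floats monkey patching ConfigParser.SafeConfigParser inside a doc test, alongside his idea of putting sorted() around brittle output. A hedged sketch of that pattern (patch a method, use it, always restore it) might look like the following; the sorted-sections behaviour here is purely illustrative, not the actual Launchpad test:

```python
# Sketch of temporarily patching a method on SafeConfigParser and
# restoring it afterwards.  SafeConfigParser existed in the Python 2
# era this log comes from; fall back to ConfigParser where it has
# since been removed.
try:
    import configparser  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2, as in the log

parser_cls = getattr(configparser, "SafeConfigParser", configparser.ConfigParser)

original_sections = parser_cls.sections


def sorted_sections(self):
    # Deterministic ordering keeps doctest-style output stable.
    return sorted(original_sections(self))


parser_cls.sections = sorted_sections
try:
    parser = parser_cls()
    parser.add_section("zeta")
    parser.add_section("alpha")
    print(parser.sections())  # ['alpha', 'zeta']
finally:
    # Always undo the patch so other tests see the real method.
    parser_cls.sections = original_sections
```

In a real test suite, `unittest.mock.patch.object` would do the patch-and-restore bookkeeping automatically; the try/finally above just makes the mechanism explicit.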