[12:10] argh, again
[12:11] bac frankban sorry, lost track of time. I'm going to try and get video working. we'll talk in 5 minutes (12:15 UTC) one way or the other
[12:12] ok
[12:18] gmb if you see this, please kick SpamapS about the charm packaging symlinks and python toolbox packaging
[12:28] bac: sorry, I've closed firefox, you were telling me something. I am back in goldenhorde
[12:29] frankban: ok, let's pair in about 10 minutes. i'll share a screen using benji's old method
[12:29] ok
[13:28] frankban:
[13:28] gary_poster: nah, we always contribute patches upstream on their license terms
[13:28] gary_poster: so this is fine
[13:28] I will unblock your card :-)
[13:28] cool gary_poster, so LGPL
[13:28] right, frankban
[14:09] gary_poster: yay!
[14:10] bac, ?
[14:10] gary_poster: frankban and i figured out what was going on. wadl generation caches on a class level dictionary so that is what was breaking test isolation
[14:11] ah! cool. So you have to...stash the cache and put it back?
[14:11] so tests that generate wadl need to clear the cache at the end
[14:11] ah ok
[14:11] you think, or just clear?
[14:11] generating the cache is expensive, I think
[14:11] could be wrong
[14:11] maybe time the two approaches
[14:11] no, it is very expensive
[14:11] clearing would be nicer
[14:12] but do we want to keep it between tests?
[14:12] but if you can just put it back and save a couple of minutes in the test...how long does it take?
[14:12] we already do keep it between tests
[14:12] if we didn't we'd be paying the cost for any of our tests like this
[14:12] s/any/all/
[14:13] well the problem is recognizing the tests that will be harmed by using a cached wadl that doesn't represent their view of the world
[14:14] sure.
[14:14] but...in either case
[14:14] a well behaved test will need to either explicitly clear the cache
[14:15] or reset the previous contents
[14:16] if we put in automation to try to warn test writers (and I'm not quite sure how we would do that unless we stuff it in a shared base testcase), then why can't we put automation to always stash the contents on setup and reinstate on teardown?
[14:22] but if every test sets aside the cache in a setup and restores it in a teardown, then none of those tests are using the cache and gain no benefit from it
[14:26] i'm actually pretty curious now as to why using the cached wadl breaks this test. the wadl generation looks pretty innocuous
[14:36] bac, no, that's not my idea
[14:36] idea: in setUp, you make a copy of the cache
[14:36] in tearDown, you reinstate that copy
[14:36] right
[14:36] every test gets a new, clean copy of the cache
[14:37] already populated
[14:37] but who generates the cache?
[14:37] yeah, was thinking about that :-)
[14:37] if testA generates the cache, but in teardown replaces it with what it found at the beginning, then it is reset to {}
[14:38] i'm currently looking at the difference in the wadl between the cached and newly generated to see if i can tell why it is causing the failure
[14:40] setUp could look on a local base class testcase attribute for the cache. If it does not exist, it clears out the wadl cache and forces it to be generated (I think there's an explicit call for that IIRC)
[14:40] it then gets the generated code.
[14:40] cache I mean
[14:40] OTOH if the local version exists, it puts a copy in the cache.
[14:40] eh, I dunno.
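A minimal sketch of the setUp/tearDown stash-and-restore idea discussed above, assuming the WADL cache is a plain class-level dict keyed by web service version; the class and attribute names here are illustrative stand-ins, not Launchpad's actual code:

    import unittest

    class FakeWadlAPI:
        # Stand-in for the class that keeps generated WADL in a class-level
        # dictionary, keyed by web service version ('1.0', 'beta', 'devel').
        cache = {}

    class WadlCacheStashMixin(unittest.TestCase):
        def setUp(self):
            super().setUp()
            # Copy, don't alias: tests may mutate the dict in place.
            self._saved_wadl_cache = dict(FakeWadlAPI.cache)

        def tearDown(self):
            # Leave the cache exactly as the test found it.
            FakeWadlAPI.cache = self._saved_wadl_cache
            super().tearDown()

This illustrates the trade-off raised at 14:22 and 14:37: if the first test is the one that populates the cache, restoring the empty dict it found in setUp throws that work away.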
[14:41] difference: doesn't it explicitly specify a different file to load in that test?
[14:41] been a while since I looked at it
[14:42] bac, gary_poster: I know nothing about how wadl is generated, but, could it be the wadl version? we have devel, beta and 1.0
[14:42] frankban: all three versions are cached and used based on version
[14:42] bac: hum...
[14:43] yeah
[14:52] gary_poster, frankban: so the difference between the old and new wadl is simply https vs http
[14:52] huh
[14:53] so the cached version has https URLs, which cause the api tests to fail with 'connection refused' b/c they are listening for http connections
[14:55] and in general--outside of tests--they are supposed to have https urls if I read the code correctly
[14:58] test_wadl_generation seems to be the only test that uses generate_wadl; the latter calls _generate_web_service_root, which does:
[14:58] # Since we want HTTPS URLs we have to munge the request URL.
[14:58] url = url.replace('http://', 'https://')
[15:02] gary_poster, bac: we could just add a secure=True argument to generate_wadl and _generate_web_service_root and then call generate_wadl(version, secure=False) in test_wadl...
[15:03] I s'pose, but in general we seem to have code that is generating a test-friendly wadl cache somewhere
[15:03] that code might change
[15:04] if it does, we'd want that change too
[15:04] we could try to figure out that mechanism
[15:04] and comment it or hook into it or something
[15:04] so as to make that approach more reliable
[15:04] for the future
[15:04] or so as to figure out another similar solution
[15:05] utilities/create-lp-wadl-and-apidoc.py ...
[15:07] which has no https->http code I see...
[15:09] and looking at the wadl files in my filesystem, they are https too...
[15:09] so who is fixing up https -> http?
[15:09] for the tests?
[15:10] i think _generate_web_service_root as frankban pointed out above
[15:12] bac, no, the other way around
[15:12] bac, you said that our testing cache has http, right
[15:12] and after we make this call, the urls are https
[15:13] yes, i was backwards
[15:13] which time? :-)
[15:13] so, it starts out https
[15:13] and after that test it is http?
[15:14] test_wadl via generate_wadl generated https
[15:14] gary_poster: I think after test_wadl it remains https, and it is cached
[15:14] i don't know who makes the http version used by the api tests
[15:14] right
[15:15] that was my question
[15:15] gary_poster, bac: there is the possibility that the cached http version is created by the first test that exercises the api
[15:16] I believe that is the case--oh!
[15:17] yes
[15:17] right, I forgot about that. So, that means that initially there is no reading of the wadl files
[15:17] then we produce the wadl files, using generate_wadl, which is done in such a way as to enforce https
[15:17] my suggestion was like: if we can establish that test_wadl is the only test that uses https, we can switch that test to use http instead, so that the cached wadl is valid for all tests
[15:18] right, I followed that. My concern was that other code may want other changes for tests. But, generate_wadl appears to be the only official entry point for this.
[15:19] not wadl files, but in-memory cached wadl
[15:19] yes, good correction, thanks
[15:21] launchpadlib_for defines the service_root to be 'http://api.launchpad.dev/'
[15:21] so any test using that part of testing/webservice will get http URLs into the cache
[15:21] unless it is already populated
[15:22] ok, I feel like you two have been standing there with the full understanding all along. :-)
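Frankban's secure=True suggestion would look roughly like the following. The function bodies here are stand-ins (the real generate_wadl/_generate_web_service_root in the Launchpad tree build the full WADL document), and the default url is only illustrative; the sketch just shows where an optional flag would gate the http -> https munging:

    def _generate_web_service_root(version, url, secure=True):
        if secure:
            # Since we want HTTPS URLs we have to munge the request URL.
            url = url.replace('http://', 'https://')
        return version, url

    def generate_wadl(version, url='http://api.launchpad.dev/', secure=True):
        return _generate_web_service_root(version, url, secure=secure)

    # test_wadl_generation would then ask for the test-friendly flavour,
    # leaving http URLs in whatever gets cached:
    assert generate_wadl('devel', secure=False)[1].startswith('http://')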
[15:22] Mm, so bac's suggestion was clearing out the cache after the test, and frankban's suggestion is to make the test produce http. Um, I'm fine with you all deciding. I lean more towards the "get rid of the stuff we generated in this test" approach rather than the "generate stuff in this test that we can use later," but I'm happy to let you all decide.
[15:22] so if our normal mechanism properly generates http and only test_wadl is generating https, then i think we should either fix it to generate http or clear the cache when it is done
[15:23] frankban's solution is good in that it preserves the cache
[15:23] frankban's solution is bad in that caching is bad for test isolation
[15:25] I agree
[15:28] another similar solution (with the same pros and cons) is to change _generate_web_service_root to switch http/https only if the version is not "devel", assuming that other tests use 'devel'
[15:30] i'm happy in that we now *fully* understand this hateful problem, so any solution is good.
[15:31] here is what i propose: http://pastebin.ubuntu.com/981864/
[15:32] if test_wadl is the first wadl generator then there will be repeated effort, but in all other situations there is no waste
[15:39] bac: I like that only the test code is changed (using your solution), +1
[15:40] yeah, that appealed to me too
[15:58] gary_poster: there are no other reviewers (but me), so if you have time, https://code.launchpad.net/~bac/launchpad/bug-996773-2/+merge/105499
[15:58] * bac ->lunch
[15:59] sure bac
[15:59] frankban, call now or next week?
[16:00] gary_poster: now is ok if you have time
[16:01] cool frankban. let's go to https://talkgadget.google.com/hangouts/_/extras/canonical.com/goldenhordeoneonone
[17:33] sorry bac
[17:33] was on call and then had lunch
[17:33] but sinzui approved, as I'm sure you saw
[19:34] bac, hey
[19:35] you around for a quick call?
[19:35] oh yeah
[19:35] good thing you fixed your calendar. :)
[19:36] gary_poster: in oneonone
[20:23] bac, I got a test error on the data center machine which looks like the old bzr whoami problem. Do we have to do something in the system to prevent these?
[20:23] v
[20:23] http://pastebin.ubuntu.com/982322/
[20:24] Ah, yeah I think I remember
[20:24] no, we rolled back the /etc/mailname approach
[20:25] and added a 'bzr whoami' to the failing test
[20:25] so the test should not fail
[20:26] oh
[20:26] maybe this is a different failing test, bac?
[20:26] and we just need to add the same fix?
[20:26] gary_poster: this just posted http://www.youtube.com/watch?v=WAlhLz3lNNs&feature=uploademail "Juju Charm Best Practices"
[20:26] i wonder if any of ours made the cut
[20:28] well, python shelltoolbox must not have :-(
[20:29] gary_poster: that test does not match what i see in trunk
[20:29] e.g. line 187, where the failure occurs is not the same
[20:29] do you have a recent branch?
[20:30] in trunk, the whoami fix is at line 179
[20:30] oh this is r15131
[20:30] < 15236
[20:31] ah!
[20:31] ok
[20:37] thanks bac. I thought something was already updating the branch
[20:37] i have the UDS session playing in the background on my tv
[20:37] i keep hearing "best practices" and "promulgate" being used way too much
[20:38] heh
[20:38] bac, I have it in the background too. AFAICT they are not sharing best practices but just randomly saying blue sky stuff.
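For reference, the 'bzr whoami' fix mentioned around 20:25 amounts to giving bzr an identity before the test touches a branch, so commits do not fail on a freshly provisioned machine. A minimal, illustrative version (the actual change in the Launchpad tree may set the identity through bzr's configuration API instead):

    import subprocess
    import unittest

    class BranchScenarioTest(unittest.TestCase):
        def setUp(self):
            super().setUp()
            # On a machine with no bzr identity configured, anything that
            # commits fails; set one explicitly for this test run.
            subprocess.check_call(
                ['bzr', 'whoami', 'Test Runner <test.runner@example.com>'])

A real test would normally point HOME (and hence ~/.bazaar) at a temporary directory first, so the runner's global configuration is not overwritten.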
oops
[20:40] yeah, well
[20:40] you know, if we're supposed to be plugging in more company-wide, we really need to be at UDS
[20:44] gary_poster: so far i cannot reproduce bug 998040
[20:44] <_mup_> Bug #998040: lp.codehosting.tests.test_bzrutils.TestGetBranchStackedOnURL.testGetBranchStackedOnUrlNotStacked(RemoteBranchFormat-v2) fails rarely/intermittently in parallel tests < https://launchpad.net/bugs/998040 >
[20:45] bac, agree about UDS
[20:45] running original worker set repeatedly now
[20:45] no idea about repro :-/
[20:45] ok
[20:46] I just turned off the UDS session. kind of annoying. :-P
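One low-tech way to chase an intermittent failure like bug 998040 is to re-run the test in a loop until it breaks. The test id below is taken from the bug title (minus the RemoteBranchFormat-v2 scenario suffix); everything else is an illustrative sketch, and since the bug reportedly only shows up in parallel runs, a serial loop like this may never trigger it:

    import unittest

    # Test id from the bug title; adjust to match the local tree layout.
    TEST_ID = ('lp.codehosting.tests.test_bzrutils.'
               'TestGetBranchStackedOnURL.testGetBranchStackedOnUrlNotStacked')

    def run_until_failure(max_runs=200):
        loader = unittest.defaultTestLoader
        for attempt in range(1, max_runs + 1):
            result = unittest.TextTestRunner(verbosity=0).run(
                loader.loadTestsFromName(TEST_ID))
            if not result.wasSuccessful():
                print('failed on attempt %d' % attempt)
                return
        print('no failure in %d runs' % max_runs)

    if __name__ == '__main__':
        run_until_failure()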