jam | lifeless: I have a question about using fixtures during testing. I'm playing around with having a service that gets brought up as a fixture, and it generates a log file. | 07:46 |
---|---|---|
jam | Which I would like to add as a test detail, if the test fails. | 07:46 |
lifeless | jam: cool | 07:48 |
jam | however Fixture.setUp() doesn't have access to the test. | 07:48 |
jam | So do I just change the test code to do: | 07:48 |
jam | f = self.useFixture(MyService()); f.updateDetails(self) | 07:48 |
jam | or is there some other useful way to have the fixture touch TestCase.addDetails ? | 07:48 |
mgz | so, lp:rabbitfixture uses addDetail early with testtools.content.content_from_file | 07:49 |
jam | mgz: yeah, I missed the fact that fixtures *themselves* have an addDetail method. | 07:50 |
jam | vs just TestCase having it. | 07:50 |
lifeless | jam: so, TestCase.useFixture will automatically grab fixture details IIRC. | 07:50 |
jam | that wasn't particularly documented on: http://pypi.python.org/pypi/fixtures | 07:50 |
lifeless | jam: you should just need to add whatever info you want to the fixtures details (during setUp - the actual content grab will be lazy) | 07:51 |
jam | lifeless: right, addDetail('log', content_from_file(log_file)) looks like it will work, though I need to actually test it | 07:52 |
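Pulling the thread above together, here is a minimal runnable sketch of the pattern being discussed. The `MyService` fixture is a stand-in that only writes a log file (the real service start/stop is elided); `addDetail`, `content_from_file`, and `useFixture` are the actual fixtures/testtools API.

```python
import os
import tempfile

import fixtures
import testtools
from testtools.content import content_from_file


class MyService(fixtures.Fixture):
    """Stand-in service fixture: it only writes a log file."""

    def setUp(self):
        super(MyService, self).setUp()
        fd, self.log_path = tempfile.mkstemp()
        os.close(fd)
        with open(self.log_path, 'w') as f:
            f.write('service started\n')
        # Lazy: the file is read when a test outcome is reported.
        self.addDetail('service-log', content_from_file(self.log_path))


class ServiceTest(testtools.TestCase):

    def test_service(self):
        # useFixture() runs setUp(), schedules cleanUp(), and copies the
        # fixture's details into the test's own details automatically,
        # so the fixture never needs to touch TestCase.addDetail.
        service = self.useFixture(MyService())
        self.assertTrue(os.path.exists(service.log_path))
```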
lifeless | jam: looks like I need to document addDetail, could you file a bug please? | 07:54 |
jam | lifeless: what is the ordering between addCleanup(delete_the_file) and addDetail(..., content_from_file) ? Is addCleanup always run after detail collection? | 07:55 |
jam | I'll file a bug | 07:55 |
jam | lifeless: https://bugs.launchpad.net/python-fixtures/+bug/1071649 | 07:56 |
ubot5 | Ubuntu bug 1071649 in Python Fixtures "Document addDetail" [Undecided,New] | 07:56 |
lifeless | jam: detail capturing runs when an error is happening in the test framework | 07:58 |
lifeless | jam: cleanup runs after an error is caught or the test ends normally | 07:59 |
lifeless | rephrase | 07:59 |
lifeless | detail capturing runs when *reporting an outcome* | 07:59 |
lifeless | cleanup runs after reporting outcomes or at test end | 07:59 |
lifeless | the nasty bit there is that the success outcome only happens after cleanup completed successfully | 08:00 |
lifeless | so if you need logs etc for tests that pass (and you may want them - up to you :)) you need to stash stuff | 08:00 |
lifeless | jam: TBH this stuff hasn't been polished - it was written, Just Worked, and we haven't reviewed it or had cause to spend more time on it (because it Just Worked :P) | 08:01 |
jam | yeah, I don't need it right now; failures are fine. Though interestingly it means you lose the 'shutdown' part of the log, since the cleanup that shuts down the server runs after the failure has already been reported. | 08:01 |
jam | but yes, it works quite well for my purpose. | 08:01 |
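For the "stash stuff" case lifeless mentions (keeping details for passing tests, where the success outcome is only reported after cleanups have run and the log file may already be gone), one option is an eager snapshot. This is a sketch building on the `MyService` stand-in above; `buffer_now` is a real `content_from_file` argument that reads the file immediately instead of at reporting time.

```python
from testtools.content import content_from_file


class StashingService(MyService):
    """Variant that snapshots the log instead of reading it lazily."""

    def setUp(self):
        super(StashingService, self).setUp()
        # buffer_now=True reads the file immediately, so the detail's
        # content survives cleanups that delete the log file and is
        # still available when a *success* outcome is reported after
        # cleanup. The trade-off jam notes still applies: anything
        # logged after this point (e.g. shutdown) is not captured.
        self.addDetail(
            'service-log',
            content_from_file(self.log_path, buffer_now=True))
```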
lifeless | cool | 08:03 |
lifeless | shiny stuff is shiny | 08:03 |
lifeless | OpenStack are just adopting this now | 08:03 |
lifeless | beginning of a learning curve for them on test effectiveness :) | 08:03 |
jam | lifeless: yeah, I just wish this kind of goodness was already in Go. | 08:07 |
jam | It looks like our team will be working on juju-core in the near future, and I'm noticing things like really-good testing infrastructure stuff. | 08:07 |
lifeless | :) | 08:07 |
lifeless | perhaps you can add it. | 08:07 |
jam | yeah, a bit of 'as needed'. But always the "to do your simple problem X, first you need to implement all of Y and Z" | 08:08 |
jam | lifeless: so fixtures already seems to know about cleaning up state between test runs. Is there much to be gained by poking into testresources? | 08:09 |
jam | (i imagine it mostly adds test organization to bring together similar resourced tests?) | 08:09 |
lifeless | jam: right | 08:09 |
lifeless | jam: FixtureResource will adapt a fixture for you | 08:10 |
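A sketch of that adaptation, again assuming the `MyService` stand-in from above. `FixtureResource` and `ResourcedTestCase` are real testresources API; the organizational win jam guesses at comes from `testresources.OptimisingTestSuite`, which reorders tests so those sharing a resource run together and the fixture is only brought up once.

```python
import testresources


class ResourcedServiceTest(testresources.ResourcedTestCase):

    # Each named resource is made during setUp and bound as an
    # attribute on the test; FixtureResource adapts the fixture's
    # setUp/cleanUp to the resource-manager protocol.
    resources = [('service', testresources.FixtureResource(MyService()))]

    def test_uses_service(self):
        self.assertIsNotNone(self.service)
```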
lifeless | poo/win 61 | 08:11 |
lifeless | bah | 08:11 |
fullermd | Better than a poo/loss. I think. | 08:13 |
fullermd | lifeless: Incidentally, I was wondering the other day about that libcpuinfo you had. Is that still live? | 08:42 |
lifeless | fullermd: I haven't touched it, but it should still work | 09:13 |
lifeless | fullermd: and I'll happily merge and review patches | 09:13 |
lifeless | fullermd: (working code == no touch no more :)) | 09:13 |
fullermd | I noticed there's an outstanding merge request on it. | 09:13 |
lifeless | oh | 09:14 |
lifeless | I wish LP was better at showing me things like that | 09:14 |
lifeless | jml filed a bug about that :P. | 09:14 |
fullermd | Sadly, nobody noticed the bug because LP didn't show them? :p | 09:14 |
lifeless | fullermd: I won't look at it tonight. | 09:14 |
lifeless | fullermd: trolololol | 09:14 |
fullermd | Oh, no rush at all. Just wondered it in passing a week or two ago. | 09:14 |
fullermd | Should add a sysconf() method. Early on, maybe; library calls are cheap. | 09:15 |
Han | If you want to forget your local changes and just update your branch to | 09:42 |
Han | match the remote one, use pull --overwrite. | 09:42 |
Han | So I run bzr pull --overwrite and guess what I get to see after bzr diff... :P | 09:43 |
Han | How do I really get the remote tree like it should be? | 09:43 |
vila | Han: that should read "If you want to forget your *committed* local changes" | 10:07 |
vila | Han: to get rid of your uncommitted local changes, use 'bzr revert' | 10:07 |
Han | arrrrr | 10:14 |
Han | vila, thanks | 10:19 |
=== yofel_ is now known as yofel | | |
lamalex | hello, is it possible to search through a tree's entire history? | 12:22 |
lamalex | i'm trying to find a function and determine if it was removed, and there's no commit message indicating | 12:22 |
mgz | lamalex: see bzr grep. it's built in to the latest development bzr, or available from lp:bzr-grep as a plugin for earlier versions | 12:23 |
jelmer | lamalex: I usually just grep through 'bar log -p' for that | 12:24 |
jelmer | *bzr | 12:24 |
lamalex | mgz, im trying to use bzr grep but it doesnt seem to run through my entire history (or is it and im just not realizing?) | 12:24 |
mgz | that relies on the nice person saying they removed function X in their commit message :) | 12:25 |
fullermd | Not with -p it doesn't :p | 12:25 |
mgz | lamalex: see `bzr help grep` for details | 12:25 |
lamalex | ahh | 12:25 |
lamalex | :) | 12:25 |
beuno | lamalex, also, bzr-search | 12:38 |
timour | Hello, I am experiencing the following problem while trying to revert a branch to a previous version: | 21:11 |
timour | bzr push --overwrite | 21:11 |
timour | Connection Timeout: disconnecting client after 300.0 seconds | 21:12 |
timour | bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist. | 21:12 |
timour | bzr 2.4.1 | 21:12 |
timour | with bzr 2.6b2 - same problem | 21:14 |
timour | tried from different boxes in different countries | 21:14 |
timour | Sorry, forgot to mention that the tree I push to is on Launchpad. | 21:15 |
timour | Is this some known problem with bzr or Launchpad? | 21:15 |
jelmer | timour: can you connect to bazaar.launchpad.net with sftp? | 21:15 |
timour | jelmer, will try now | 21:20 |
timour | jelmer, yes, no problem | 21:21 |
timour | jelmer, also please notice that I asked my colleague to try the same; he is in a different country, with a different account | 21:22 |
timour | jelmer, do you need to know what the source tree is? | 21:22 |
jelmer | timour: does e.g. "bzr ls lp:bzr" work? | 21:29 |
timour | jelmer, yes | 21:29 |
jelmer | not sure what's going on then | 21:30 |
timour | jelmer, could it be something specific to this tree | 21:30 |
jelmer | timour: if you run with -Dhpss, then ~/.bzr.log might help you see what's going on | 21:31 |
timour | lp:maria/5.5 | 21:31 |
timour | jelmer, ok, will try now | 21:31 |
timour | jelmer, this is some of the interesting errors: | 21:41 |
timour | TooManyConcurrentRequests: The medium 'SmartSSHClientMedium(bzr+ssh://timour@bazaar.launchpad.net/)' has reached its concurrent request limit. Be sure to finish_writing and finish_reading on the currently open request. | 21:41 |
timour | there is a call stack as well, not sure if anyone wants to see it here | 21:43 |
timour | jelmer, thanks for the help, have to go to sleep. I pushed an 'undo' patch, which is the only solution ATM. | 21:58 |