TEST: Tests for grid_tools take a long time #109

Closed
sglyon opened this issue Jan 24, 2015 · 5 comments · Fixed by #461
Comments

@sglyon
Member

sglyon commented Jan 24, 2015

I just ran the tests on my machine and got the following printout:

Ran 247 tests in 73.629s

I checked and if I run only this test in test_cartesian.py I get:

$ nosetests quantecon/tests/test_cartesian.py:test_performance_C
.
----------------------------------------------------------------------
Ran 1 test in 23.669s

Similar timings for the test_performance_F function:

$ nosetests quantecon/tests/test_cartesian.py:test_performance_F
.
----------------------------------------------------------------------
Ran 1 test in 24.902s

These two tests account for about 2/3 of the total testing time. Is there a way we can skip them by default, or at least reduce their run time?

We could still update the Travis script to make sure they are run by Travis.

@albop any ideas?

@albop
Contributor

albop commented Jan 24, 2015

These long-running tests are actually meant to measure performance by comparing with numpy.
They can be skipped by running nosetests --exclude='.*performance.*' quantecon.
Actually, this seems like the easiest way to separate long-running tests from short-running ones: decide on a naming convention for the former. (I remember some earlier talk about adding special function attributes.)
The reason this solution does not completely satisfy me is that the performance results go straight to /dev/null. So until we start to focus on systematic performance measurements and compare the results, we may as well comment out the corresponding code...
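
For reference, a rough, hypothetical sketch of the kind of performance test being discussed: it times quantecon's cartesian against a plain NumPy construction of the same product grid and only prints the timings, which is why the results effectively go to /dev/null unless something like vbench collects them. This is not the actual test_performance_C code, and it assumes cartesian is importable from quantecon.

    import time

    import numpy as np
    from quantecon import cartesian


    def _timed(f, *args):
        # Return the function's output together with its wall-clock run time.
        t0 = time.time()
        out = f(*args)
        return out, time.time() - t0


    def test_performance_sketch():
        # A moderately large grid: 40**4 = 2,560,000 rows of 4 columns.
        nodes = [np.linspace(0, 1, 40)] * 4

        prod_qe, t_qe = _timed(cartesian, nodes)

        def numpy_cartesian(nodes):
            # meshgrid + column_stack builds the same Cartesian product in C order.
            grids = np.meshgrid(*nodes, indexing='ij')
            return np.column_stack([g.ravel() for g in grids])

        prod_np, t_np = _timed(numpy_cartesian, nodes)

        assert np.allclose(prod_qe, prod_np)
        print("quantecon: {:.4f}s, numpy: {:.4f}s".format(t_qe, t_np))

Because the timings are only printed, nothing downstream consumes them, which is the point albop is making above.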

@sglyon
Member Author

sglyon commented Jan 28, 2015

@albop thanks for the comments.

I ended up just having nose run the test file(s) relevant to what I was changing, so this wasn't a big issue.

I think if we were to implement a suite of performance tests monitored via vbench (as suggested in #44), these tests would be a perfect fit for it.

I think we have already discussed why they are slow; feel free to close this unless you are planning on changing something and want to leave it open as a reminder.

@albop
Contributor

albop commented Jan 28, 2015

Sure. Let's wait for vbench to emerge and close the issue afterwards, so that we don't forget about these few lines of code.

@mmcky
Contributor

mmcky commented Aug 5, 2015

See the update to #44. Is vbench worth investing in? It doesn't seem to be active: the Pandas results end in 2012, while Statsmodels seems more up to date, with results for April 2014.

@oyamad
Member

oyamad commented Aug 18, 2017

Can we add @attr('slow') to test_performance_* for testing on a local machine (and add -a speed=slow to .travis.yml)?
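
A minimal sketch of that suggestion, assuming the nose attrib plugin: decorating the performance tests lets them be deselected locally and still run on Travis. Note that the attribute spelling has to match the command line: @attr('slow') pairs with -a slow / -a '!slow', whereas @attr(speed='slow') pairs with -a speed=slow. The test body below is only a placeholder, not the real test.

    from nose.plugins.attrib import attr


    @attr('slow')
    def test_performance_C():
        # Placeholder body; the real test compares cartesian against numpy.
        ...

Then locally, nosetests -a '!slow' quantecon skips the performance tests, while a plain nosetests quantecon (or an explicit -a slow) on Travis still runs them.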

@mmcky mmcky changed the title TEST: Tests for cartesian take a long time TEST: Tests for grid_tools take a long time Oct 19, 2017