[MRG+1] Remove nose from CIs and documentation #9840
Conversation
Force-pushed from 8430f32 to 99c9183
Great that it's finally happening! I double-checked that there are indeed no references to nose left after this PR (excluding those wrapped in a try: ... except ImportError: clause). LGTM if CI tests pass.
interact with tests that use ``multiprocessing``::

```diff
-    C:\Python34\python.exe -c "import nose; nose.main()" -v sklearn
+    $ pytest sklearn
```
Maybe add the `-v` option to be consistent with `make test-code`?
I am not sure what the recommendation should be. I rarely use -v myself, and I would argue that it is only useful (if at all) when you only have a log file, e.g. on our CIs, to make sure that a particular test was run or whether it succeeded.
I see your point. Personally, I frequently use -sv to see, when there is something on stdout, which unit test it is associated with. But yes, without a log or the -s option, -v might not be that useful (particularly when there are lots and lots of tests, as in scikit-learn)...
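For reference, a minimal sketch of how these pytest flags combine (standard pytest options, nothing project-specific):

```bash
pytest sklearn       # default output: one dot per test
pytest -v sklearn    # verbose: print every test id and its outcome
pytest -s sklearn    # disable stdout/stderr capture
pytest -sv sklearn   # both: see which test produced which output
```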
```diff
-    nosetests -v sklearn/
+    pytest sklearn
```
Same comment as above
Probably better to wait for 0.19.1 to be released before merging this one.
The Travis failure on the Python 2.7 conda distrib is one of those "What the ..." ones. It happens for all the PRs and started happening only recently (maybe a few hours ago). For some reason (I don't have even the beginning of a clue why), in this build only, the detected C compiler is x86_64-conda_cos6-linux-gnu-gcc.
I bet this is a new thing in the miniconda installer that adds some gcc-related stuff, although I am not 100% sure. I tried to download the installer; the Python I get has been compiled with gcc 7.2.0. Although I cannot reproduce the exact same problem, I have a somewhat similar one where the detected C compiler has -fstack-protector-strong in its arguments.
cc @jnothman in case you are scratching your head over this as well. I am just hoping this is a temporary problem that will not exist tomorrow morning (European time) ;-). And yeah, I forgot to say: x86_64-conda_cos6-linux-gnu-gcc does not exist, so I cannot build scikit-learn of course.
Looking at https://repo.continuum.io/miniconda/ it looks like the latest miniconda installers were uploaded around 2017-09-26 16:26:32.
Force-pushed from 4620fad to 487a4ba
I pushed a commit to use the previous version of the miniconda installer, which I am hoping will fix the problem. This is probably what we should do in case the issue does not get solved quickly on the Anaconda Inc side.
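As a sketch, pinning could look like the snippet below in the CI install script; the installer version number is an assumption for illustration, not necessarily the one used in this PR:

```bash
# Download a pinned Miniconda installer instead of Miniconda3-latest
MINICONDA_VERSION="4.3.21"  # assumed previous version, for illustration
wget "https://repo.continuum.io/miniconda/Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh" \
    -O miniconda.sh
bash miniconda.sh -b -p "$HOME/miniconda"
```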
Force-pushed from 037ef12 to 9a4a80f
Seems like the issue has been reported at conda/conda#6030.
The Miniconda-latest now points to the previous miniconda installer version and I pushed a temporary work-around in master (essentially skip |
We will need a nose instance that runs check_estimator on a variety of estimators during a nose deprecation period, IMO.
Can you elaborate why? For example, what are you worried about? I seem to remember you have mentioned this already in a separate issue, but I did not get your point at the time.
Force-pushed from 9a4a80f to c86b5cf
It's possible I'm mistaken. Other projects use scikit-learn's
check_estimator. I thought we should check that there's nothing in there
that makes it not work on nose. But I suppose at the end of the day it'll
be some function with some assertions, so perhaps it's unnecessary.
…On 28 September 2017 at 14:11, Loïc Estève ***@***.***> wrote:
We will need a nose instance that runs check_estimator on a variety of
estimators during a nose deprecation period, IMO.
Can you elaborate why, for example what are you worried about? I seem to
remember you have mentioned this already in a separate issue but I did not
get your point at the time.
OK makes sense, thanks. I'll double check that indeed.
I compared the output of nose and pytest. I found only one case where a test was run by nose but not by pytest (a test class with a constructor), which I will fix separately from this PR (see #9860).
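As a sketch, one way to do such a comparison is to diff the collected test ids from both runners (the exact flags and normalization used here are assumptions, not necessarily how it was done for this PR):

```bash
# Collect test ids without running them; the two id formats differ,
# so some normalization is needed before a clean diff
nosetests --collect-only -v sklearn 2>&1 | sort > nose_tests.txt
pytest --collect-only -q sklearn | sort > pytest_tests.txt
diff nose_tests.txt pytest_tests.txt
```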
For completeness: I will leave this PR alone for a bit, probably at least until we release 0.19.1.
Good work :)
travis fails?
?!
Yeah, this was because I still had the temporary conda work-around in my branch. Now everything is green. I think one of the main things to discuss before a more thorough review is whether we want to ensure that check_estimator does not rely on pytest, which I think is what @jnothman means in #9840 (comment). I am hoping there is a way to do that without keeping a nose build, but I have not looked at it in detail yet.
Well, you certainly don't need a full nose build. But I can't see any way to assure it without running test_check_estimator in an environment without pytest.
Let's just add a build that does not install pytest and runs test_check_estimator.
How about this to check that pytest is not needed to run check_estimator? This makes sure that pytest is not imported when running check_estimator:

```python
import subprocess
import sys


def test_pytest_not_required_for_check_estimator(tmpdir):
    # Run check_estimator in a subprocess and verify that pytest never ends
    # up in sys.modules: the final print should then raise a KeyError.
    to_execute = '\n'.join([
        'import sys',
        'from sklearn.utils.estimator_checks import check_estimator',
        'from sklearn.linear_model import LinearRegression',
        'check_estimator(LinearRegression())',
        'print(sys.modules["pytest"])'])
    path_to_execute = tmpdir.join('test.py')
    path_to_execute.write(to_execute)
    # Hack to make pandas not importable because pandas imports pytest if
    # pytest is installed
    tmpdir.join('pandas.py').write('raise ImportError()')
    stdout, stderr = subprocess.Popen(
        [sys.executable, path_to_execute.strpath],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
    assert stdout == b''
    assert b'KeyError' in stderr
```

Ideally, going forward, I think we would need to check that pytest is a soft dependency in the non-test modules (i.e. basically that I can import any module whose name is not ...
I suppose that's reasonable. How about just using ...? Pity about pandas... There's some risk that other libraries will do similar, but I suppose we'll find that out quickly?
Force-pushed from 05b98b7 to 66e95c8
Otherwise LGTM
```python
if __name__ == '__main__':
    main_module = sys.modules['__main__']
```
Add a comment: "This module is run as a script to check that we have no dependency on pytest for estimator checks." Otherwise it could be removed by the ignorant and the test would still pass!
build_tools/travis/test_script.sh (outdated)

```diff
@@ -67,3 +59,17 @@ fi
 if [[ "$SKIP_TESTS" != "true" ]]; then
     run_tests
 fi
+
+if [[ "$CHECK_PYTEST_SOFT_DEPENDENCY" == "true" ]]; then
+    conda remove -y py pytest || echo "pytest not found by conda"
```
This error message will look weird without someone realising that we're trying to remove it. Silent is fine, IMO.
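A sketch of the silent variant being suggested, discarding conda's message and ignoring the failure when pytest is absent:

```bash
# Remove pytest if present; stay silent and keep going if it is not installed
conda remove -y py pytest 2>/dev/null || true
```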
```diff
     INSTALL_MKL="true" NUMPY_VERSION="1.13.1" SCIPY_VERSION="0.19.1"
     PANDAS_VERSION="0.20.3" CYTHON_VERSION="0.26.1"
     TEST_DOCSTRINGS="true"
+    CHECK_PYTEST_SOFT_DEPENDENCY="true"
```
We could also put this in its own worker to be sure it's sandboxed, but I guess it's okay as it is.
> We could also put this in its own worker to be sure it's sandboxed, but I guess it's okay as it is.

I thought about it too, but it seemed a bit of a waste to have another build that runs tests for only 5 seconds.
Force-pushed from 39089f6 to eb02410
…into so-long-nose
```diff
-    $ pip install nose coverage
-    $ nosetests --with-coverage path/to/tests_for_package
+    $ pip install pytest pytest-cov
+    $ pytest --cov sklearn path/to/tests_for_package
```
Should we add an example of running a single test?

    pytest path/to/tests_for_package::test_unit_test
I'd rather try to keep this PR minimal; essentially the doc should only be a nose -> pytest translation.
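For the record, a few standard ways of narrowing a pytest run (file and test names below are hypothetical examples):

```bash
# Run all tests in one file (hypothetical path)
pytest sklearn/tests/test_common.py
# Run a single test function by node id (hypothetical test name)
pytest sklearn/tests/test_common.py::test_all_estimators
# Run only tests whose names match an expression
pytest sklearn -k "some_pattern"
```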
That should be fixed in the latest release 0.21.0: pandas-dev/pandas#17710 (which doesn't necessarily help of course if you want to run the test against multiple versions of pandas)
Nice to know, thanks! I ended up testing without pytest as suggested by @jnothman rather than looking in sys.modules.
lgtm
```diff
@@ -29,19 +24,17 @@ inplace:
 	$(PYTHON) setup.py build_ext -i

 test-code: in
-	$(NOSETESTS) -s -v sklearn
+	$(PYTEST) --showlocals -v sklearn
```
this doesn't show warnings, right?
Not sure what you mean:
- pytest logs by default show a lot fewer warnings than nosetests. A good thing in my opinion.
- for now, we disable pytest warnings in setup.cfg. This is mostly to get rid of the warnings about pytest not supporting yield-based tests from pytest 4.0 on. This will be addressed in further PRs.
I wasn't able to turn on all the warnings we get with nosetests, and I think that's a bad thing. For example, I couldn't get deprecation warnings (ours or numpy's) to show. But I'm new to pytest.
From https://docs.pytest.org/en/latest/warnings.html:

> Starting from version 3.1, pytest now automatically catches warnings during test execution and displays them at the end of the session

and

> Pytest by default catches all warnings except for DeprecationWarning and PendingDeprecationWarning.

I was not fully aware of this, but it seems that --disable-pytest-warnings completely disables the warning summary. It was originally added to remove the warnings about yield-based tests. Reading https://docs.pytest.org/en/latest/warnings.html more carefully, it looks like you can use filterwarnings in setup.cfg to filter warnings more selectively (only filter out the yield-based tests deprecation and show other DeprecationWarnings). This is probably what we should do. Do you mind if I do that in a separate PR?
Looking closely at it, removing --disable-pytest-warnings from setup.cfg gets you 10k lines of warnings in the warnings summary when running pytest sklearn. I am not really in favor of doing that by default:
- it makes the pytest output a lot more noisy;
- you will not be able to see the Travis log directly inside Travis (you need to download the full log because it goes over the size limit). This is what is happening at the moment: for example, compare this pytest log to its nose equivalent log. In the nose log you get "This log is too long to be displayed. Please reduce the verbosity of your build or download the raw log.";
- people interested in chasing down warnings can run locally with --disable-pytest-warnings commented out in their setup.cfg.

Side comment: we actually get fewer than 100 "yield tests are deprecated, and scheduled to be removed in pytest 4.0" warnings, so the original motivation for --disable-pytest-warnings no longer exists.
I think we should make sure we fix warnings instead of ignoring them. Many of them are important. Hiding them hides the problem.
Looking at the output of nose, I see that many of those are pretty recently introduced and probably point to real issues....
if [[ "$COVERAGE" == "true" ]]; then | ||
# Need to append the coverage to the existing .coverage generated by | ||
# running the tests | ||
CMD="coverage run --append" |
I'm not sure what the point of this is. The union of the coverage of running the tests with pytest and running the estimator checks without pytest?
> I'm not sure what the point of this is. The union of the coverage of running the tests with pytest and running the estimator checks without pytest?

Yes. This way you can see that run_tests_without_pytest is covered in test_estimator_checks.py. test_script.sh is simpler without this, so I can definitely remove it. With hindsight I probably should not have done it; it took me way too much time to get working, and that was only so that codecov would be green...
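A sketch of the pattern being described; the paths are assumptions for illustration:

```bash
# 1. Run the test suite under pytest-cov; this writes a .coverage file
pytest --cov sklearn sklearn
# 2. Run the pytest-free estimator checks as a plain script,
#    appending to the same coverage data
coverage run --append sklearn/utils/tests/test_estimator_checks.py
# 3. Report the combined coverage of both runs
coverage report
```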
ok, fine by me. I'm good with merging either way.
I just reran the comparison to err on the side of caution, and I am 100% confident that we run the exact same tests on nose and pytest. Summing up all the reviews, I think there are at least 3.5 LGTMs, from @rth, @jnothman, @agramfort, @amueller and @glemaitre. Who will be bold enough to press the green button?
I am on my phone. Not sure that BlackBerry allows merging PRs, for security reasons :D, but otherwise I would...?
Great that the pytest migration is happening. Thanks for working on it @lesteve!
Well done, @lesteve! Thanks.
…On 17 November 2017 at 06:21, Roman Yurchak ***@***.***> wrote:
Great that the pytests migration is happening. Thanks for working on it
@lesteve <https://github.com/lesteve> !
Thanks @lesteve!
Is there an issue about the yields and fixing the test names?
The issue with yields will come after pytest 4, I think.
interact with tests that use ``multiprocessing``::

```diff
-    C:\Python34\python.exe -c "import nose; nose.main()" -v sklearn
+    pytest sklearn
```

See the web page http://scikit-learn.org/stable/developers/advanced_installation.html#testing
Would it be better to change the web page link to the dev version? I am instructed to use pytest and want to click on the link for more information, but am soon surrounded by nose :-)
Good catch, I guess this would make a lot of sense in this case. Feel free to do a quick grep to spot similar problems with the nose to pytest move and open a PR.
We have changed to pytest but the stable doc is about nose. Already gained approval from @lesteve in scikit-learn#9840. Nothing to check so I push directly.
Apparently this went missing in scikit-learn#9840
Apart from tackling comments that people may have, here are a few things I want to do before this can be merged: