[MRG+1] Remove nose from CIs and documentation #9840

Merged · 3 commits · Nov 16, 2017
13 changes: 4 additions & 9 deletions .travis.yml
@@ -42,24 +42,19 @@ matrix:
- env: DISTRIB="conda" PYTHON_VERSION="3.6.2" INSTALL_MKL="true"
NUMPY_VERSION="1.13.1" SCIPY_VERSION="0.19.1" PANDAS_VERSION="0.20.3"
CYTHON_VERSION="0.26.1" COVERAGE=true
if: type != cron
# This environment uses pytest to run the tests. It uses the newest
# supported Anaconda release (5.0.0). It also runs tests requiring Pandas.
- env: USE_PYTEST="true" DISTRIB="conda" PYTHON_VERSION="3.6.2"
INSTALL_MKL="true" NUMPY_VERSION="1.13.1" SCIPY_VERSION="0.19.1"
PANDAS_VERSION="0.20.3" CYTHON_VERSION="0.26.1"
TEST_DOCSTRINGS="true"
CHECK_PYTEST_SOFT_DEPENDENCY="true"
Member:

We could also put this in its own worker to be sure it's sandboxed, but I guess it's okay as it is.

Member Author:

> We could also put this in its own worker to be sure it's sandboxed, but I guess it's okay as it is.

I thought about it too, but it seemed a bit of a waste to have another build that runs tests for only 5 seconds.

if: type != cron
# flake8 linting on diff wrt common ancestor with upstream/master
- env: RUN_FLAKE8="true" SKIP_TESTS="true"
DISTRIB="conda" PYTHON_VERSION="3.5" INSTALL_MKL="true"
NUMPY_VERSION="1.13.1" SCIPY_VERSION="0.19.1" CYTHON_VERSION="0.26.1"
NUMPY_VERSION="1.13.1" SCIPY_VERSION="0.19.1"
CYTHON_VERSION="0.26.1"
if: type != cron
# This environment tests scikit-learn against numpy and scipy master
# installed from their CI wheels in a virtualenv with the Python
# interpreter provided by travis.
- python: 3.6
env: USE_PYTEST="true" DISTRIB="scipy-dev-wheels"
env: DISTRIB="scipy-dev-wheels"
if: type = cron

install: source build_tools/travis/install.sh
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -139,8 +139,8 @@ tools:
- Code with good unittest **coverage** (at least 80%), check with:

```bash
$ pip install nose coverage
$ nosetests --with-coverage path/to/tests_for_package
$ pip install pytest pytest-cov
$ pytest --cov sklearn path/to/tests_for_package
```

- No pyflakes warnings, check with:
17 changes: 5 additions & 12 deletions Makefile
@@ -4,17 +4,12 @@

PYTHON ?= python
CYTHON ?= cython
NOSETESTS ?= nosetests
PYTEST ?= pytest
CTAGS ?= ctags

# skip doctests on 32bit python
BITS := $(shell python -c 'import struct; print(8 * struct.calcsize("P"))')

ifeq ($(BITS),32)
NOSETESTS:=$(NOSETESTS) -c setup32.cfg
endif


all: clean inplace test

clean-ctags:
@@ -29,19 +24,17 @@ inplace:
$(PYTHON) setup.py build_ext -i

test-code: in
$(NOSETESTS) -s -v sklearn
$(PYTEST) --showlocals -v sklearn
Member:

this doesn't show warnings, right?

Member Author:

Not sure what you mean:

  • pytest logs by default show a lot fewer warnings than nosetests. A good thing in my opinion.
  • for now, we disable pytest warnings in setup.cfg. This is mostly to get rid of the warnings about pytest not supporting tests using yield in pytest 4.0. This will be addressed in follow-up PRs.

Member:

I wasn't able to turn on all the warnings we get with nosetests, and I think that's a bad thing. For example, I couldn't get deprecation warnings (ours or numpy's) to show. But I'm new to pytest.

Member Author:

From https://docs.pytest.org/en/latest/warnings.html:

> Starting from version 3.1, pytest now automatically catches warnings during test execution and displays them at the end of the session

and

> Pytest by default catches all warnings except for DeprecationWarning and PendingDeprecationWarning.

I was not fully aware of this, but it seems that --disable-pytest-warnings completely disables the warning summary. It was originally added to remove warnings about yield-based tests.

Reading https://docs.pytest.org/en/latest/warnings.html more carefully, it looks like you can use filterwarnings in setup.cfg to filter warnings more selectively (only filter out the yield-based-tests deprecation and show other DeprecationWarnings). This is probably what we should do. Do you mind if I do that in a separate PR?

Member Author:

Looking closely at it, if you remove --disable-pytest-warnings from setup.cfg, you get 10k lines of warnings in the warnings summary when running pytest sklearn. I am not really in favor of doing that by default:

  • it makes the pytest output a lot more noisy
  • you will not be able to see the Travis log directly inside Travis (you need to download the full log because it goes over the size limit). This is already happening: compare, for example, this pytest log to its nose equivalent. In the nose log you get "This log is too long to be displayed. Please reduce the verbosity of your build or download the raw log."
  • people interested in chasing down warnings can run locally with --disable-pytest-warnings commented out in their setup.cfg.

Side comment: we actually get fewer than 100 "yield tests are deprecated, and scheduled to be removed in pytest 4.0" warnings, so the original motivation for --disable-pytest-warnings no longer exists.

Member:

I think we should make sure we don't lose warnings by ignoring them. Many of them are important. Hiding them hides the problem.

Member:

Looking at the output of nose, I see that many of those were introduced pretty recently and probably point to real issues.

test-sphinxext:
$(NOSETESTS) -s -v doc/sphinxext/
$(PYTEST) --showlocals -v doc/sphinxext/
test-doc:
ifeq ($(BITS),64)
$(NOSETESTS) -s -v doc/*.rst doc/modules/ doc/datasets/ \
doc/developers doc/tutorial/basic doc/tutorial/statistical_inference \
doc/tutorial/text_analytics
$(PYTEST) $(shell find doc -name '*.rst' | sort)
endif

test-coverage:
rm -rf coverage .coverage
$(NOSETESTS) -s -v --with-coverage sklearn
$(PYTEST) sklearn --showlocals -v --cov=sklearn

test: test-code test-sphinxext test-doc

10 changes: 2 additions & 8 deletions README.rst
@@ -110,15 +110,9 @@ Testing
~~~~~~~

After installation, you can launch the test suite from outside the
source directory (you will need to have the ``nose`` package installed)::
source directory (you will need to have the ``pytest`` package installed)::

nosetests -v sklearn

Under Windows, it is recommended to use the following command (adjust the path
to the ``python.exe`` program) as using the ``nosetests.exe`` program can badly
interact with tests that use ``multiprocessing``::

C:\Python34\python.exe -c "import nose; nose.main()" -v sklearn
pytest sklearn

See the web page http://scikit-learn.org/stable/developers/advanced_installation.html#testing
Member:

Would it be better to change the web page to the dev version? I am instructed to use pytest and want to click the link for more information, but am soon surrounded by nose :-)

Member Author:

Good catch, I guess this makes a lot of sense in this case. Feel free to do a quick grep to spot similar problems with the nose-to-pytest move and open a PR.

for more information.
9 changes: 4 additions & 5 deletions appveyor.yml
@@ -78,12 +78,11 @@ build: false
test_script:
# Change to a non-source folder to make sure we run the tests on the
# installed library.
- "mkdir empty_folder"
- "cd empty_folder"
- "python -c \"import nose; nose.main()\" --with-timer --timer-top-n 20 -s sklearn"

- mkdir "../empty_folder"
- cd "../empty_folder"
- pytest --showlocals --durations=20 --pyargs sklearn
# Move back to the project folder
- "cd .."
- cd "../scikit-learn"

artifacts:
# Archive the generated wheel package in the ci.appveyor.com build report.
3 changes: 1 addition & 2 deletions build_tools/appveyor/requirements.txt
@@ -10,7 +10,6 @@
numpy==1.9.3
scipy==0.16.0
cython
nose
nose-timer
pytest
wheel
wheelhouse_uploader
2 changes: 1 addition & 1 deletion build_tools/circle/build_doc.sh
@@ -107,7 +107,7 @@ conda update --yes --quiet conda
# Configure the conda environment and put it in the path using the
# provided versions
conda create -n $CONDA_ENV_NAME --yes --quiet python numpy scipy \
cython nose coverage matplotlib sphinx=1.6.2 pillow
cython pytest coverage matplotlib sphinx=1.6.2 pillow
source activate testenv
pip install sphinx-gallery
# Use numpydoc master (for now)
2 changes: 1 addition & 1 deletion build_tools/travis/after_success.sh
@@ -8,7 +8,7 @@ set -e

if [[ "$COVERAGE" == "true" ]]; then
# Need to run codecov from a git checkout, so we copy .coverage
# from TEST_DIR where nosetests has been run
# from TEST_DIR where pytest has been run
cp $TEST_DIR/.coverage $TRAVIS_BUILD_DIR
cd $TRAVIS_BUILD_DIR
# Ignore codecov failures as the codecov server is not
28 changes: 6 additions & 22 deletions build_tools/travis/install.sh
@@ -39,31 +39,20 @@ if [[ "$DISTRIB" == "conda" ]]; then

# Configure the conda environment and put it in the path using the
# provided versions
if [[ "$USE_PYTEST" == "true" ]]; then
TEST_RUNNER_PACKAGE=pytest
else
TEST_RUNNER_PACKAGE=nose
fi

if [[ "$INSTALL_MKL" == "true" ]]; then
conda create -n testenv --yes python=$PYTHON_VERSION pip \
$TEST_RUNNER_PACKAGE numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION \
pytest pytest-cov numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION \
mkl cython=$CYTHON_VERSION \
${PANDAS_VERSION+pandas=$PANDAS_VERSION}

else
conda create -n testenv --yes python=$PYTHON_VERSION pip \
$TEST_RUNNER_PACKAGE numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION \
pytest pytest-cov numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION \
nomkl cython=$CYTHON_VERSION \
${PANDAS_VERSION+pandas=$PANDAS_VERSION}
fi
source activate testenv

if [[ $USE_PYTEST != "true" ]]; then
# Install nose-timer via pip
pip install nose-timer
fi

elif [[ "$DISTRIB" == "ubuntu" ]]; then
# At the time of writing numpy 1.9.1 is included in the travis
# virtualenv but we want to use the numpy installed through apt-get
@@ -73,7 +62,7 @@ elif [[ "$DISTRIB" == "ubuntu" ]]; then
# and scipy
virtualenv --system-site-packages testvenv
source testvenv/bin/activate
pip install nose nose-timer cython==$CYTHON_VERSION
pip install pytest pytest-cov cython==$CYTHON_VERSION

elif [[ "$DISTRIB" == "scipy-dev-wheels" ]]; then
# Set up our own virtualenv environment to avoid travis' numpy.
@@ -86,12 +75,7 @@ elif [[ "$DISTRIB" == "scipy-dev-wheels" ]]; then
echo "Installing numpy and scipy master wheels"
dev_url=https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf2.rackcdn.com
pip install --pre --upgrade --timeout=60 -f $dev_url numpy scipy pandas cython
if [[ $USE_PYTEST == "true" ]]; then
pip install pytest
else
# Install nose-timer via pip
pip install nose nose-timer
fi
pip install pytest pytest-cov
fi

if [[ "$COVERAGE" == "true" ]]; then
@@ -102,8 +86,8 @@ if [[ "$TEST_DOCSTRINGS" == "true" ]]; then
pip install sphinx numpydoc # numpydoc requires sphinx
fi

if [[ "$SKIP_TESTS" == "true" ]]; then
echo "No need to build scikit-learn when not running the tests"
if [[ "$SKIP_TESTS" == "true" && "$CHECK_PYTEST_SOFT_DEPENDENCY" != "true" ]]; then
echo "No need to build scikit-learn"
else
# Build scikit-learn in the install.sh script to collapse the verbose
# build output in the travis output when it succeeds.
41 changes: 24 additions & 17 deletions build_tools/travis/test_script.sh
@@ -21,15 +21,12 @@ except ImportError:
python -c "import multiprocessing as mp; print('%d CPUs' % mp.cpu_count())"

run_tests() {
if [[ "$USE_PYTEST" == "true" ]]; then
TEST_CMD="pytest --showlocals --durations=20 --pyargs"
else
TEST_CMD="nosetests --with-timer --timer-top-n 20"
fi
TEST_CMD="pytest --showlocals --durations=20 --pyargs"

# Get into a temp directory to run test from the installed scikit-learn and
# check if we do not leave artifacts
mkdir -p $TEST_DIR
# We need the setup.cfg for the nose settings
# We need the setup.cfg for the pytest settings
cp setup.cfg $TEST_DIR
cd $TEST_DIR

@@ -39,23 +36,18 @@ run_tests() {
export SKLEARN_SKIP_NETWORK_TESTS=1

if [[ "$COVERAGE" == "true" ]]; then
TEST_CMD="$TEST_CMD --with-coverage"
TEST_CMD="$TEST_CMD --cov sklearn"
fi
$TEST_CMD sklearn

# Going back to git checkout folder needed to test documentation
cd $OLDPWD

if [[ "$USE_PYTEST" == "true" ]]; then
# Do not run doctests in scipy-dev-wheels build for now
# (broken by numpy 1.14.dev array repr/str formatting
# change even with np.set_printoptions(sign='legacy')).
# See https://github.com/numpy/numpy/issues/9804 for more details
if [[ "$DISTRIB" != "scipy-dev-wheels" ]]; then
pytest $(find doc -name '*.rst' | sort)
fi
else
# Makefile is using nose
# Do not run doctests in scipy-dev-wheels build for now
# (broken by numpy 1.14.dev array repr/str formatting
# change even with np.set_printoptions(sign='legacy')).
# See https://github.com/numpy/numpy/issues/9804 for more details
if [[ "$DISTRIB" != "scipy-dev-wheels" ]]; then
make test-doc
fi
}
@@ -67,3 +59,18 @@ fi
if [[ "$SKIP_TESTS" != "true" ]]; then
run_tests
fi

if [[ "$CHECK_PYTEST_SOFT_DEPENDENCY" == "true" ]]; then
conda remove -y py pytest || pip uninstall -y py pytest
if [[ "$COVERAGE" == "true" ]]; then
# Need to append the coverage to the existing .coverage generated by
# running the tests
CMD="coverage run --append"
Member:

I'm not sure what the point of this is. The union of the coverage of running the tests with pytest and running the estimator checks without pytest?

Member Author (lesteve, Nov 16, 2017):

> I'm not sure what the point of this is. The union of the coverage of running the tests with pytest and running the estimator checks without pytest?

Yes. This way you can see that run_tests_without_pytest is covered in test_estimator_checks.py. test_script.sh is simpler without this, so I can definitely remove it. With hindsight I probably should not have done it; it took me way too much time to get working, and that was only so that codecov would be green...

Member:

ok, fine by me. I'm good with merging either way.

else
CMD="python"
fi
# .coverage from running the tests is in TEST_DIR
cd $TEST_DIR
$CMD -m sklearn.utils.tests.test_estimator_checks
cd $OLDPWD
fi
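The soft-dependency check above only works because the estimator-check test module can be executed as a plain script once pytest has been uninstalled. A minimal sketch of that dual-entry-point pattern — the class and function names here are illustrative, not scikit-learn's actual layout:

```python
# Sketch of a test module that works both under pytest discovery and as
# a plain script (python -m package.tests.test_module), so it can still
# be exercised after pytest has been removed from the environment.

class DummyEstimator:
    """Stand-in for a scikit-learn style estimator."""

    def fit(self, X, y=None):
        self.fitted_ = True
        return self


def check_estimator(estimator):
    # Stand-in for the real estimator checks: verify the fit contract.
    assert hasattr(estimator, "fit"), "estimators must implement fit"
    assert estimator.fit([[0.0]]).fitted_


def test_dummy_estimator():
    # Collected and run by pytest when it is installed.
    check_estimator(DummyEstimator())


if __name__ == "__main__":
    # Fallback entry point used when pytest is absent, mirroring
    # `$CMD -m sklearn.utils.tests.test_estimator_checks` above.
    test_dummy_estimator()
    print("estimator checks passed without pytest")
```

Running the module under `coverage run --append` then merges this script-mode run into the `.coverage` file produced by the pytest run, which is what the `COVERAGE=true` branch does.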
19 changes: 6 additions & 13 deletions doc/developers/advanced_installation.rst
@@ -376,24 +376,17 @@ Testing
Testing scikit-learn once installed
-----------------------------------

Testing requires having the `nose
<https://nose.readthedocs.io/en/latest/>`_ library. After
Testing requires having the `pytest
<https://docs.pytest.org>`_ library. After
installation, the package can be tested by executing *from outside* the
source directory::

$ nosetests -v sklearn

Under Windows, it is recommended to use the following command (adjust the path
to the ``python.exe`` program) as using the ``nosetests.exe`` program can badly
interact with tests that use ``multiprocessing``::

C:\Python34\python.exe -c "import nose; nose.main()" -v sklearn
$ pytest sklearn
Member:

Maybe add the -v option to be consistent with make test-code?

Member Author:

I am not sure what the recommendation should be. I rarely use -v myself, and I would argue that it is only useful (if at all) when you only have a log file, e.g. on our CIs, to check whether a particular test ran or succeeded.

Member:

I see your point. Personally, I frequently use -sv to see, when something appears on stdout, which unit test it is associated with. But yes, without a log or the -s option, -v might not be that useful (particularly when there are lots and lots of tests, as in scikit-learn)...


This should give you a lot of output (and some warnings) but
eventually should finish with a message similar to::

Ran 3246 tests in 260.618s
OK (SKIP=20)
=========== 8304 passed, 26 skipped, 4659 warnings in 557.76 seconds ===========

Otherwise, please consider posting an issue into the `bug tracker
<https://github.com/scikit-learn/scikit-learn/issues>`_ or to the
@@ -411,9 +404,9 @@ source directory::

python setup.py build_ext --inplace

Test can now be run using nosetests::
Test can now be run using pytest::

nosetests -v sklearn/
pytest sklearn
Member:

Same comment as above


This is automated by the commands::

10 changes: 5 additions & 5 deletions doc/developers/contributing.rst
@@ -280,8 +280,8 @@ You can also check for common programming errors with the following tools:
* Code with a good unittest coverage (at least 90%, better 100%), check
with::

$ pip install nose coverage
$ nosetests --with-coverage path/to/tests_for_package
$ pip install pytest pytest-cov
$ pytest --cov sklearn path/to/tests_for_package
Member:

Should we add an example of running a single test?

pytest path/to/tests_for_package::test_unit_test

Member Author:

I'd rather try to keep this PR minimal, essentially the doc should only be a nose -> pytest translation.


see also :ref:`testing_coverage`

@@ -519,21 +519,21 @@ Testing and improving test coverage

High-quality `unit testing <https://en.wikipedia.org/wiki/Unit_testing>`_
is a corner-stone of the scikit-learn development process. For this
purpose, we use the `nose <http://nose.readthedocs.io/en/latest/>`_
purpose, we use the `pytest <https://docs.pytest.org>`_
package. The tests are functions appropriately named, located in `tests`
subdirectories, that check the validity of the algorithms and the
different options of the code.

The full scikit-learn tests can be run using 'make' in the root folder.
Alternatively, running 'nosetests' in a folder will run all the tests of
Alternatively, running 'pytest' in a folder will run all the tests of
the corresponding subpackages.
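For contributors coming from nose, the discovery contract barely changes: pytest collects files named `test_*.py` and runs every function whose name starts with `test_`, using plain `assert` statements. A toy test module might look like this (the file path and names are hypothetical, not real scikit-learn code):

```python
# Hypothetical sklearn/some_module/tests/test_utils.py — pytest collects
# modules named test_*.py and runs every test_* function it finds; no
# TestCase class or nose-specific decorator is required.

def normalize(values):
    """Toy function under test: scale a list so it sums to 1."""
    total = sum(values)
    return [v / total for v in values]


def test_normalize_sums_to_one():
    result = normalize([2.0, 2.0, 4.0])
    assert abs(sum(result) - 1.0) < 1e-12
    assert result == [0.25, 0.25, 0.5]


if __name__ == "__main__":
    test_normalize_sums_to_one()
    print("ok")
```

Running `pytest` from the containing folder would collect and run this test alongside the rest of the subpackage's suite.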

We expect code coverage of new features to be at least around 90%.

.. note:: **Workflow to improve test coverage**

To test code coverage, you need to install the `coverage
<https://pypi.python.org/pypi/coverage>`_ package in addition to nose.
<https://pypi.python.org/pypi/coverage>`_ package in addition to pytest.

1. Run 'make test-coverage'. The output lists for each file the line
numbers that are not tested.
27 changes: 3 additions & 24 deletions setup.cfg
@@ -1,30 +1,9 @@
[aliases]
# python2.7 has upgraded unittest and it is no longer compatible with some
# of our tests, so we run all through nose
test = nosetests

[nosetests]
# nosetests skips test files with the executable bit by default
# which can silently hide failing tests.
# There are no executable scripts within the scikit-learn project
# so let's turn the --exe flag on to avoid skipping tests by
# mistake.
exe = 1
cover-html = 1
cover-html-dir = coverage
cover-package = sklearn

detailed-errors = 1
with-doctest = 1
doctest-tests = 1
doctest-extension = rst
doctest-fixtures = _fixture
ignore-files=^setup\.py$
#doctest-options = +ELLIPSIS,+NORMALIZE_WHITESPACE
test = pytest

[tool:pytest]
# disable-pytest-warnings should be removed once we drop nose and we
# rewrite tests using yield with parametrize
# disable-pytest-warnings should be removed once we rewrite tests
# using yield with parametrize
addopts =
--doctest-modules
--disable-pytest-warnings