
nose -> pytest #4996

Merged · 74 commits · Dec 19, 2019
Conversation

smotornyuk
Member

A long time ago there was a discussion about test runners (#3310). nose is no longer supported, so we have a reason to switch to something new. In addition, the current test complexity and the constant problems with config/app initialization during tests just freak me out. Long story short, I've decided to configure pytest.

@smotornyuk smotornyuk self-assigned this Sep 24, 2019
@smotornyuk smotornyuk changed the title Install pytest/replicate existing workflos nose -> pytest Sep 24, 2019
@smotornyuk
Member Author

Yay, only 30 failures after initial run

@amercader
Member

Really hoping this turns out well, ❤️ pytest

@amercader
Member

This is fantastic work @smotornyuk. I'm still going through the main changes, but one thing that caught my eye is that you are applying fixtures at the individual test level:

  class TestResourceViewCreate(object):

      @pytest.mark.ckan_config("ckan.plugins", "image_view")
      @pytest.mark.usefixtures("clean_db", "with_plugins")  
      def test_resource_view_create(self):
          context = {} 
          params = self._default_resource_view_attributes()
  
          result = helpers.call_action("resource_view_create", context, **params)
  
          result.pop("id")
          result.pop("package_id")
  
          assert params == result

      @pytest.mark.ckan_config("ckan.plugins", "image_view")
      @pytest.mark.usefixtures("clean_db", "with_plugins")  
      def test_requires_resource_id(self):
          context = {} 
          params = self._default_resource_view_attributes()
          params.pop("resource_id")
  
          with pytest.raises(logic.ValidationError):
              helpers.call_action("resource_view_create", context, **params)

This is fine of course but given our reliance on test classes I think that being able to apply fixtures at the class level makes the tests more readable and avoids duplication:

  @pytest.mark.ckan_config("ckan.plugins", "image_view")
  @pytest.mark.usefixtures("clean_db", "with_plugins")
  class TestResourceViewCreate(object):
  
      def test_resource_view_create(self):
          context = {} 
          params = self._default_resource_view_attributes()
  
          result = helpers.call_action("resource_view_create", context, **params)
  
          result.pop("id")
          result.pop("package_id")
  
          assert params == result
  
      def test_requires_resource_id(self):
          context = {} 
          params = self._default_resource_view_attributes()
          params.pop("resource_id")
  
          with pytest.raises(logic.ValidationError):
              helpers.call_action("resource_view_create", context, **params)

To make this work I had to slightly change the ckan_config fixture:

diff --git a/ckan/tests/pytest_ckan/fixtures.py b/ckan/tests/pytest_ckan/fixtures.py
index cc8521f56..216231ff9 100644
--- a/ckan/tests/pytest_ckan/fixtures.py
+++ b/ckan/tests/pytest_ckan/fixtures.py
@@ -56,7 +56,7 @@ def ckan_config(request, monkeypatch):
 
     """
     _original = config.copy()
-    for mark in request.node.own_markers:
+    for mark in request.node.iter_markers():
         if mark.name == u"ckan_config":
             monkeypatch.setitem(config, *mark.args)
     yield config
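For context on why this one-line change is needed: `request.node.own_markers` lists only markers applied directly to the test function, whereas `iter_markers()` also walks up through the class and module. A minimal sketch of that difference (`FakeMark`/`FakeNode` are hypothetical stand-ins so the lookup can be shown outside a pytest run; `iter_markers(name=...)` itself is real pytest API):

```python
# Sketch of the marker lookup behind the diff above: own_markers contains
# only markers set directly on the test function, while iter_markers()
# also yields markers inherited from the class and module.

def collect_ckan_config_overrides(node):
    """(key, value) pairs from every ckan_config marker visible from a node."""
    return [mark.args for mark in node.iter_markers(name="ckan_config")]

class FakeMark:
    def __init__(self, name, args):
        self.name = name
        self.args = args

class FakeNode:
    """Mimics the relevant bits of a pytest item (hypothetical helper)."""
    def __init__(self, own, inherited):
        self.own_markers = own          # function-level markers only
        self._inherited = inherited     # class/module-level markers

    def iter_markers(self, name=None):
        for mark in self.own_markers + self._inherited:
            if name is None or mark.name == name:
                yield mark

# A class-level marker, as on TestResourceViewCreate above:
class_mark = FakeMark("ckan_config", ("ckan.plugins", "image_view"))
node = FakeNode(own=[], inherited=[class_mark])

assert node.own_markers == []  # the old lookup would find nothing here
assert collect_ckan_config_overrides(node) == [("ckan.plugins", "image_view")]
```

With `own_markers`, a class-level `ckan_config` marker is invisible to the fixture; `iter_markers()` sees it, which is what makes class-level decoration work.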

Of course we can combine both approaches depending on the context, but perhaps we should favour marking tests at the class level. WDYT?

Of course, I'm assuming that changing all the references again would be a big task, so maybe we can do it later.

@smotornyuk
Member Author

The more we do now, the less we'll need to rework in the future 😄 I'll try to move fixtures to the class level wherever possible - it's not that hard with regexps and macros.

@amercader
Member

@smotornyuk I had a good look at this and as far as I'm concerned this is good to go.
I changed the docstrings and documentation a bit, and added some examples in 3056bee#diff-fb126fa3f292fcb42a73fc636c96eb88R267, which I can add when merging.

Let me know when you are done with the class level changes.

BTW are these changes intended?

@amercader
Member

One thing I've noticed, though, is that the tests are now noticeably slower (~6-7 min with nose, ~10 min with pytest). We can investigate and improve this later; it might be related to the collection phase or the way we use fixtures.

@smotornyuk
Member Author

BTW are these changes intended?

I'm actively using search-and-replace with regexps (sed, awk, etc.) here, and unfortunately it breaks code style in a number of places. To solve this, I'm formatting the tests with black so that I don't have to expand the PEP8 blacklist. That's why you can see changes that are not directly related to tests - I'm making a lot of changes anyway, so I decided to do some formatting alongside them.

tests are now noticeably slower

As for the slow tests, unfortunately it's not the collection phase but the tests themselves. Locally I'm using pytest-randomly to run the tests in a random order every time. This way, I discovered a fantastically huge number of stateful tests that simply fail unless they are executed in a particular order. That prevents us from splitting tests between instances (to some degree) and would not allow running tests in parallel (if someone tries to use that pytest feature in the future).
To make the tests independent, I've added a lot of clean_db fixtures - they make tests a bit slower. The main culprit, though, is CreateTestData - previously it was often called once per file, and tests executed in a particular order kept mutating this shared data. Now I'm creating test data for each individual test to preserve independence. I don't think we should buy speed back with stateful tests - it would be much better to refactor, create only the required data, and rethink the tests instead.
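The per-test data approach can be sketched like this (`make_test_data` is a hypothetical factory standing in for a per-test CreateTestData call, not CKAN's actual API): each test builds its own fresh dataset, so no execution order can leak state between them.

```python
def make_test_data():
    """Build a brand-new dataset for one test; mutations stay local to it.
    Hypothetical stand-in for calling CreateTestData once per test."""
    return {
        "name": "test-dataset",
        "resources": [
            {"url": "http://example.com/a"},
            {"url": "http://example.com/b"},
        ],
    }

def test_can_remove_resource():
    data = make_test_data()
    data["resources"].pop()
    assert len(data["resources"]) == 1

def test_sees_both_resources():
    # Passes in any order: no other test can have touched this data.
    data = make_test_data()
    assert len(data["resources"]) == 2
```

The cost is rebuilding the data for every test, which is where much of the slowdown comes from, but each test is now order-independent.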

@amercader
Member

Both things make sense. I totally agree that it's better to have tests without state leaking than to keep the speed gains. We can refactor them later.

Let me know when you are done with this PR and I'll merge it.

@smotornyuk
Member Author

Done. I've merged fixes from your pytest-2 branch into this one

@amercader amercader merged commit f4cfe5e into ckan:master Dec 19, 2019
@amercader
Member

nose tests are no more, long live pytest
