Refactor tests using Theory #108

Open

Philipp-Binder wants to merge 28 commits into master from refactor/improveTesting

Conversation

@Philipp-Binder (Contributor) commented Oct 20, 2024

While making major changes (e.g. switching from Sprache to Superpower) I had some trouble with testing, because the current Fact methods fail at the first failing case. That makes it hard to tell whether you fixed something but broke a different case, or whether the change does not fix the intended case at all.
Additionally, you always have to check whether the test cases within a method are really "the same", or whether, for example, some check "AtEnd" and some do not.

That's where Theories come into play.
They reduce many of the test methods to one-liners, which keeps different aspects of testing from being merged together.
Instead, each method is forced to make one precise test statement, which is then exercised with different inputs.

Additionally, a Theory reports each individual failing input instead of just stopping at the first one.
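
To illustrate the difference, here is a minimal sketch (the Parse helper is a hypothetical stand-in for the real parser under test):

```csharp
using Xunit;

public class ValueParserTests
{
    // Before: one Fact covering several inputs; the first failing
    // Assert hides the result of every case after it.
    [Fact]
    public void ShouldParseQuotedValues_Fact()
    {
        Assert.Equal("a", Parse("'a'"));
        Assert.Equal("b", Parse("\"b\""));   // if this fails, the next line never runs
        Assert.Equal("c d", Parse("'c d'"));
    }

    // After: one Theory; every input is reported individually, pass or fail.
    [Theory]
    [InlineData("'a'", "a")]
    [InlineData("\"b\"", "b")]
    [InlineData("'c d'", "c d")]
    public void ShouldParseQuotedValues(string input, string expected) =>
        Assert.Equal(expected, Parse(input));

    // Hypothetical stand-in for the real parser under test.
    private static string Parse(string input) => input.Trim('\'', '"');
}
```

With the Fact, a failure on the second case hides the result of the third; with the Theory, all three rows are reported independently.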

Is that something you would support?
It is not finished, and of course some scenarios have to be handled differently; we should not parse an entire file for each single test, for example.

It's not really possible to review this request via git compare at the moment; there are too many changes for the diff to match up correctly.
If there is interest in having this, I'd split this commit into single commits per changed method to enable a proper review.
And it is of course still a draft; it is not even fully finished for ParserTests.

@rogusdev (Collaborator) commented Oct 20, 2024

You don't need to switch to Theory to do what you are doing. It is very doable with XUnit to pass arguments to functions for various tests.

The problem with this change is setup and teardown (ctor + dispose). If you separate the tests into multiple one-liners, you run setup and teardown for each and every one of them, because that's the whole point, and that can add significant slowdowns to test runs. In this set of tests perhaps it won't be too noticeable, but there is a real chance that these changes could increase test runtime 3x-10x.
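
For context, xUnit creates a fresh instance of the test class for every test case (each Fact, and each data row of a Theory), so the constructor and Dispose run once per case; a minimal sketch of that lifecycle:

```csharp
using System;
using Xunit;

public class SetupTeardownDemo : IDisposable
{
    public SetupTeardownDemo()
    {
        // Setup: runs before every single test case, so expensive work
        // here is multiplied by the number of one-liner tests.
    }

    public void Dispose()
    {
        // Teardown: runs after every single test case.
    }

    [Theory]
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(3)]
    public void EachRowGetsAFreshInstance(int value) => Assert.True(value > 0);
}
```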

So, with these changes I would be concerned about slowing things down annoyingly, and also about unnecessarily dropping xUnit.

If you can address/justify those concerns, I would be open to this continuing.

@Philipp-Binder (Contributor, Author) commented Oct 20, 2024

> You don't need to switch to Theory to do what you are doing. It is very doable with XUnit to pass arguments to functions for various tests.

But when calling methods you won't get individual errors, since the test still fails at the first failed assertion.

> The problem with this change is setup and teardown (ctor + dispose). If you separate the tests into multiple one-liners, you run setup and teardown for each and every one of them, because that's the whole point, and that can add significant slowdowns to test runs. In this set of tests perhaps it won't be too noticeable, but there is a real chance that these changes could increase test runtime 3x-10x.

Sure; as I already stated, we should not parse entire files for each individual test, for example.
Where such expensive setup actions are required, I need to find a solution addressing this concern, or those tests just stay as they are.
The huge advantage of individual errors mainly applies to the direct (small) parser tests, where you can see directly which cases are problematic and which are still green.

> So, with these changes I would be concerned about slowing things down annoyingly, and also about unnecessarily dropping xUnit.

> If you can address/justify those concerns, I would be open to this continuing.

Does this address the concerns so far?
If your concern is only about performance, I guess there is no reason to stop until we actually see performance issues.
And if we do, we can just leave those tests untouched.

@Philipp-Binder (Contributor, Author)

Small addition:

I found some explanation about shared context in xUnit that could solve performance issues: https://xunit.net/docs/shared-context

I still need to read through it, but my first impression is that this can address all the performance issues.
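
For reference, the class-fixture pattern described on that page looks roughly like this; the parsed-file fixture below is a hypothetical example, not code from this PR:

```csharp
using System.Collections.Generic;
using Xunit;

// Created once and shared across all tests in the class,
// so the expensive parse happens a single time.
public class ParsedFileFixture
{
    public ParsedFileFixture() =>
        Values = new Dictionary<string, string> { ["KEY"] = "value" }; // e.g. parse the .env file here

    public IReadOnlyDictionary<string, string> Values { get; }
}

public class DotenvFileTests : IClassFixture<ParsedFileFixture>
{
    private readonly ParsedFileFixture _fixture;

    public DotenvFileTests(ParsedFileFixture fixture) => _fixture = fixture;

    [Theory]
    [InlineData("KEY", "value")]
    public void ShouldContain(string key, string expected) =>
        Assert.Equal(expected, _fixture.Values[key]);
}
```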

@rogusdev (Collaborator)

Actually, it turns out what you are doing is the xUnit way of doing this; I think I was thinking of NUnit's approach, or some other framework.

So I support the path you are on. This definitely makes failures easier to isolate. And we can check performance at the end.

Thanks!

@Philipp-Binder (Contributor, Author)

Split into single commits for each changed method now; that way a proper review should be possible.

Updates to the other methods will follow soon.

@Philipp-Binder (Contributor, Author)

Ready with ParserTests, but still searching for a better solution for the ParseDotenvFile() refactoring.
I think it has the advantage of failing separately, but it is not a really nice solution so far...

But at least the refactoring of this class can be reviewed; then I can use that feedback when refactoring the next classes.

@Philipp-Binder (Contributor, Author) commented Oct 26, 2024

Found the solution for ParseDotenvFileShouldParseContents:

Using MemberData combined with TheoryData removes all these dictionaryRefs.
The solution is type-safe, and you cannot miss any tests, compared with the loosely coupled dictionary indexing used with InlineData.
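
A sketch of the resulting shape, using made-up key/value data rather than the PR's actual test content (ParseLine is a hypothetical stand-in for the real dotenv parser):

```csharp
using System.Collections.Generic;
using Xunit;

public class ParseDotenvFileTests
{
    // TheoryData<,> keeps the rows type-safe; MemberData wires them to
    // the test method, so no index-based dictionary lookups are needed.
    public static TheoryData<string, string> Contents => new()
    {
        { "KEY=value", "value" },
        { "KEY='quoted value'", "quoted value" },
    };

    [Theory]
    [MemberData(nameof(Contents))]
    public void ParseDotenvFileShouldParseContents(string line, string expected)
    {
        var parsed = ParseLine(line); // hypothetical stand-in for the real parser
        Assert.Equal(expected, parsed["KEY"]);
    }

    private static Dictionary<string, string> ParseLine(string line)
    {
        var parts = line.Split(new[] { '=' }, 2);
        return new Dictionary<string, string> { [parts[0]] = parts[1].Trim('\'') };
    }
}
```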

It is still quite messy due to the long multiline strings, but that was also true of the old solution.
Maybe someone else has a suggestion to make it more readable?

@Philipp-Binder (Contributor, Author) commented Oct 26, 2024

While refactoring LoadOptionsTests I realized one real trade-off:

There is no good solution for showing which InlineData or TheoryData row is under test (there is no link back to the code line).

While that is acceptable for most of the tests, there are cases (mainly with TheoryData, I think) where it is quite inconvenient.
With LoadOptions it was quite horrible, because it is nearly impossible to tell the case from the data shown (and checking the compiled code is not a good way, I guess :D ).

That's why I added a workaround: a new type, IndexedTheoryData.
There are some trade-offs; for example, the compiler can no longer catch wrong parameters on the test method. But the whole thing is still type-safe etc., so that should not be a real problem (you get well-explained exceptions if the parameters are wrong).

But it still reduces the code so much that this trade-off should be okay imho.
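
The actual IndexedTheoryData lives in this PR's diff; the following is only a guess at one possible shape, where each Add prepends a running index so a failing row can be identified from the test output:

```csharp
using Xunit;

// One possible shape (the PR's actual IndexedTheoryData may differ):
// each Add prepends a running index, so the test output identifies
// which row failed even when the row data itself is hard to read.
public class IndexedTheoryData<T> : TheoryData<int, T>
{
    private int _index;

    public void Add(T value) => Add(_index++, value);
}

public class LoadOptionsTests
{
    public static IndexedTheoryData<string> Options =>
        new() { "optionA", "optionB" };

    [Theory]
    [MemberData(nameof(Options))]
    public void ShouldHandleOption(int index, string option) =>
        Assert.False(string.IsNullOrEmpty(option), $"row {index} failed");
}
```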

@Philipp-Binder force-pushed the refactor/improveTesting branch 3 times, most recently from 31534ed to 83e777e on October 26, 2024 at 20:15
@Philipp-Binder (Contributor, Author)

I just had a look through the other test classes now:

HelperTests and EnvReaderTests seem hard to rewrite, and I'm not sure how valuable that would be.

  1. I think this functionality is quite stable; not much will happen there.
  2. The focus should be on IConfiguration, which imho is the preferred way of consuming configuration data (IConfiguration allows deserializing data directly into classes, with no "manual" conversions).
  3. None of these methods work if you do not use SetEnvVars, which is a huge restriction (and an additional reason to use IConfiguration instead).

I would postpone EnvTests and EnvConfigurationTests, because there are potentially huge simplifications if #105 (decoupling parsers and env vars) is done first.

What do you think so far?
I'm not completely happy with some of the code constructs yet, but in general I think it's a good direction.

@Philipp-Binder marked this pull request as ready for review on October 26, 2024 at 21:48
@rogusdev (Collaborator)

Hey, sorry for the delay; I did not realize you were waiting on me for next steps. Honestly, it is hard for me to confirm that you retained all existing tests correctly, but if everything passes, it looks broadly correct, and I do like the improvements. I would in fact encourage not doing more than this at this time; instead, stop here and call it done, if indeed you are done with these changes.

So, I am fine with merging as-is if this set of changes is complete.
