
Catch warnings in test_proton_radiography.py #1050

Merged 4 commits into PlasmaPy:master on Mar 1, 2021

Conversation

@pheuer (Member) commented on Feb 27, 2021

Addresses issue #844 for proton_radiography.py

codecov bot commented on Feb 27, 2021

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.86%. Comparing base (af7b9d3) to head (ed8f1c5).

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1050   +/-   ##
=======================================
  Coverage   96.86%   96.86%           
=======================================
  Files          70       70           
  Lines        6864     6864           
=======================================
  Hits         6649     6649           
  Misses        215      215           


@pheuer linked issue #844 (Fix the many warnings emitted during tests) on Feb 27, 2021; it may be closed by this pull request.

sim = prad.SyntheticProtonRadiograph(grid, source, detector, verbose=False)
# Catch warnings because these fields aren't well-behaved at the edges
with warnings.catch_warnings():
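
For context, the blanket-suppression pattern being discussed looks roughly like the sketch below. This is a minimal illustration, not the exact diff: the simplefilter("ignore") call and the sim.run() stand-in for the warning-emitting setup code are assumptions.

import warnings

# `sim` is the simulation object constructed in the snippet above.
# Catch warnings because these fields aren't well-behaved at the edges:
with warnings.catch_warnings():
    # Assumed body: blanket-suppress every warning raised inside this block,
    # so the noisy-but-expected setup warnings never reach the test output.
    warnings.simplefilter("ignore")
    sim.run()  # hypothetical stand-in for the setup call that emits the warnings
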
Member

Would it not, instead, be better to use pytest.warns to also check for the eventuality that we broke the functionality and warnings are not raised anymore? As I understand it, this is deterministic enough that they'll always get raised.

I probably forgot to mention pytest.warns in the issue and definitely should have, sorry!

Member Author

So, yes, it is pretty much deterministic and should always get raised. My thinking was that it was better to rely on the other places where these warnings are explicitly tested (in situations designed to create them). The warnings fixed in this PR all just occur in setup lines preparing for other tests, so, for the purpose of being organized, I thought it was better to just blanket-filter the warnings in that case?

Member

Sure, but suppose we had this situation:

  1. our code throws warning A;
  2. we don't want to see warning A in tests, so we use with warnings.catch_warnings();
  3. something changes elsewhere (upstream, decorators, etc.) that causes our code, tests or no, to throw hundreds of occurrences of warning B;
  4. months pass;
  5. we learn our code has been horribly broken for a few months;
  6. everyone's sad

And while I can't point you to the exact PR after these couple of years, we did have that situation :D It was something with units and our unit checks, IIRC.

What would be best, though we don't do it nearly enough, is to catch these warnings with pytest.warns (https://docs.pytest.org/en/stable/warnings.html#warns), like the following:

with pytest.warns(GeometryWarning, match="is not a stellarator, rethink your reactor design or reboot your fusion program"):
    test_DEMO()

Then, if test_DEMO

a) ceases to throw GeometryWarning,
b) ceases to throw that particular GeometryWarning, or
c) starts throwing WallMeltingWarning or DisruptionWarning due to some other changes elsewhere,

we get to know about it basically in real time!

... I should probably stick that into our testing guide, shouldn't I?

Member Author

OK, that makes sense; I'll change over to using pytest.warns then.

I like the idea of checking explicitly to make sure the warning is correct: is there a way to do that while accounting for f-strings in the warning? Many of mine are quasi-deterministic, so the warning will always appear but the numbers in the string might not always be identical.

Member

match is a regex! If you set it to the constant part, it'll work as you'd like it to :)
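
For example (a hypothetical sketch; the warning text and the quasi_deterministic_setup helper are made up for illustration), matching on the constant part of an f-string message looks like this:

import warnings

import pytest


def quasi_deterministic_setup():
    # Hypothetical stand-in for a setup step whose warning text contains
    # run-dependent numbers formatted with an f-string.
    fraction = 0.17
    warnings.warn(
        f"{fraction:.2%} of particles were deflected away from the detector plane",
        RuntimeWarning,
    )


def test_setup_warns():
    # `match` is applied as a regular expression (re.search) against the
    # warning message, so anchoring on the constant tail of the f-string
    # ignores the run-dependent numbers at the front.
    with pytest.warns(RuntimeWarning, match="deflected away from the detector plane"):
        quasi_deterministic_setup()
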

Member Author

AMAZING

regex is just magic

@StanczakDominik (Member) left a comment

Good changes and the warnings are indeed gone - happy to merge! :)

@StanczakDominik merged commit 6a161f0 into PlasmaPy:master on Mar 1, 2021.
@StanczakDominik added this to the 0.6.0 milestone on Mar 12, 2021.
@namurphy added the testing and plasmapy.diagnostics labels on May 23, 2023.