Add zero length option to exclusive range test. #307
Conversation
I am not sure about how/when to update the changelog.
This is a nice PR! I left one quick comment on the parameter name; curious to hear your thoughts on it before I go through this with a fine-toothed comb :)
lower_bound_column: started_at
upper_bound_column: ended_at
partition_by: customer_id
zero_length: allowed
What do you think of this instead?
Suggested change:
- zero_length: allowed
+ zero_length_range_allowed: true
I was thinking about that, but I thought it would be best to stick with the required/allowed/not_allowed convention used in the gaps parameter. Happy to change if you feel strongly.
I think a true/false is nicer here — had to use an enum for the above since there were three different options 😓
All set :)
One other thing:
I feel the lack of negative test coverage is concerning for this project. I would like to be able to write test cases where the test (specifically, the test that is meant to be reused in other code) fails. As it stands, we only have tests where the tested model passes the test.
One kluge that we could use to solve this quickly is to put tags on tests that should fail, exclude them from the standard test run, and run the tests that should fail with the expectation that it will return a failure code.
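A minimal sketch of that tag-based approach, assuming a hypothetical tag name `expected_failure`, using this PR's exclusive range test (presumably dbt_utils.mutually_exclusive_ranges) as the example, and assuming tags can be attached to the test in the integration tests' schema.yml (exact syntax varies by dbt version):

```yaml
# Hypothetical: tag a case that is expected to fail so it can be
# excluded from the standard run and executed separately.
models:
  - name: data_test_expected_to_fail   # hypothetical model name
    tests:
      - dbt_utils.mutually_exclusive_ranges:
          lower_bound_column: started_at
          upper_bound_column: ended_at
          tags: ['expected_failure']

# Standard run skips the expected failures:
#   dbt test --exclude tag:expected_failure
# Separate run asserts that dbt exits non-zero:
#   dbt test --models tag:expected_failure && exit 1 || exit 0
```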
To me, a better way would be to add an optional flag expect: pass/warn/fail to the test.config and then invert the results appropriately here:
https://github.com/fishtown-analytics/dbt/blob/dev/kiyoshi-kuromiya/core/dbt/task/test.py#L95
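Hypothetically, the test definition could then look like this (expect is a proposed flag, not an existing dbt feature):

```yaml
# Hypothetical proposed syntax: declare the expected outcome on the
# test itself; dbt's test task would invert pass/fail accordingly.
models:
  - name: data_test_expected_to_fail   # hypothetical model name
    tests:
      - dbt_utils.mutually_exclusive_ranges:
          lower_bound_column: started_at
          upper_bound_column: ended_at
          expect: fail   # proposed values: pass / warn / fail
```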
I would love to hear your thoughts, so I can decide where to make my PR. Thanks!
Eagerly awaiting tests to pass so I can merge this :)
🚀
Use boolean for zero range arg. Update changelog
Description & motivation
If the unit of time is large (like a date), a range starting and ending on the same date is reasonable, but the exclusive range test fails when this happens. This PR adds an option to allow zero-length ranges so the test does not fail in that case.
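Assuming this PR targets dbt_utils.mutually_exclusive_ranges (the test that takes the lower_bound_column/upper_bound_column/gaps arguments discussed above), usage with the boolean argument settled on in review would look something like:

```yaml
# A sketch of the new option in a model's schema.yml:
models:
  - name: subscriptions   # hypothetical model name
    tests:
      - dbt_utils.mutually_exclusive_ranges:
          lower_bound_column: started_at
          upper_bound_column: ended_at
          partition_by: customer_id
          zero_length_range_allowed: true   # permit ranges where start == end
```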
Checklist