Opened 2 years ago
Last modified 2 years ago
#12358 new enhancement
implement a way to track ValueFlow/AST differences
| Reported by: | kidkat | Owned by: | noone |
|---|---|---|---|
| Priority: | Normal | Milestone: | |
| Component: | Other | Version: | |
| Keywords: | | Cc: | |
Description
We might implement changes that affect the resulting ValueFlow, but we usually only evaluate them in terms of the code we are fixing or the cases we can think of. Such changes might affect other code in unintended ways, and we currently have no way to detect this beyond test failures and/or false positives/negatives.
We should implement something that shows the differences in the ValueFlow.
The same applies to the AST.
Change History (4)
comment:1 by , 2 years ago
| Description: | modified (diff) |
|---|---|
| Summary: | implement a way to track AST differences → implement a way to track AST/valueflow differences |
comment:2 by , 2 years ago
| Description: | modified (diff) |
|---|---|
| Summary: | implement a way to track AST/valueflow differences → implement a way to track ValueFlow/AST differences |
comment:3 by , 2 years ago
Maybe this could be implemented within daca, but that would generate far too much data to collect. We could make it optional for local execution only, akin to test-my-pr.py, or integrate it into that script.
An idea was to take a fixed corpus and generate the AST/ValueFlow (i.e. debug) output for all the files with a fixed version. That gives a baseline to compare against. We could do this in a scheduled workflow and/or as part of the release checklist to make sure we do not introduce unwanted changes in a release.
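The baseline comparison could be sketched as a small script: capture the debug output (e.g. via `--debug` or `--dump`) for every file in the corpus once with the fixed reference version and once with the build under test, then diff the two sets of captures. A minimal sketch, assuming the two capture directories mirror each other; the function name and directory layout are hypothetical:

```python
import difflib
from pathlib import Path


def diff_debug_outputs(baseline_dir: str, candidate_dir: str, context: int = 2):
    """Compare per-file debug captures between a baseline and a candidate run.

    Returns a dict mapping relative file names to a unified diff string;
    files whose output is identical are omitted. A file missing from the
    candidate directory shows up as a diff against empty output.
    """
    baseline = Path(baseline_dir)
    candidate = Path(candidate_dir)
    diffs = {}
    for base_file in sorted(baseline.rglob("*")):
        if not base_file.is_file():
            continue
        rel = base_file.relative_to(baseline)
        cand_file = candidate / rel
        old = base_file.read_text().splitlines(keepends=True)
        new = cand_file.read_text().splitlines(keepends=True) if cand_file.is_file() else []
        diff = list(difflib.unified_diff(
            old, new,
            fromfile=f"baseline/{rel}", tofile=f"candidate/{rel}",
            n=context))
        if diff:
            diffs[str(rel)] = "".join(diff)
    return diffs
```

Only the files with changed ValueFlow/AST output would need to be reported, which keeps the data volume manageable compared to collecting everything via daca.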