
Problem with filtering out multiple void instances #11

Closed
marckatzenmaier opened this issue Jun 20, 2022 · 4 comments

@marckatzenmaier

I ran some tests with your implementation of the panoptic metric and got a curious result.

I made a simple test case with two instances that overlap the void class, so they should get filtered out.

If I have only one of those instances, it works properly and I don't get a FP.
If I have both instances, they don't get filtered out and I get two FP.

It might have to do with

*torch.unique(void_mask, return_counts=True)

since here you take the unique over what is basically a binary mask, if I understand correctly. I suspect that instead of void_mask you wanted to use instance_true[batch_idx] here.
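
A minimal toy sketch of what I mean (hypothetical tensors, not the actual PASTIS data):

```python
import torch

# Two void instances with ids 7 and 9, plus one background pixel labelled 0.
instance_true = torch.tensor([[7, 7, 0, 9, 9]])
batch_idx = 0
void_mask = instance_true[batch_idx] > 0   # binary mask of all void pixels

# Current code: unique over the binary mask only yields {False, True},
# so the per-instance loop effectively runs once, on the union of both voids.
print(torch.unique(void_mask, return_counts=True))
# -> (tensor([False,  True]), tensor([1, 4]))

# Unique over the instance ids restricted to void pixels recovers one entry
# per void instance (plus the background value 0).
print(torch.unique(instance_true[batch_idx] * void_mask, return_counts=True))
# -> (tensor([0, 7, 9]), tensor([1, 2, 2]))
```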

@VSainteuf
Owner

Hi @marckatzenmaier,
Thanks very much for pointing this out.
This is indeed a bug. Instead of iterating over the individual binary masks of the void instances, this loop only runs once, on the combined binary mask of all void objects. So instead of matching predicted segments against each void segment, this part of the code tries to match predicted segments against the union of all void segments, a match that probably never succeeds.

So at the end of the day, segments predicted on void objects are not ignored (though they should be), resulting in false positives that should not be counted. This artificially reduces the RQ value and thus the PQ score.

I think this line should be
*torch.unique(instance_true[batch_idx]*void_mask, return_counts=True).
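
For reference, here is a rough sketch of the intended per-void-instance filtering after that change. The variable names (instance_true, instance_pred, void_mask, batch_idx) follow the snippet above, but the overall structure and the IoU > 0.5 matching rule are assumptions on my side, not the exact code of this repository:

```python
import torch

def void_filtered_predictions(instance_true, instance_pred, void_mask, batch_idx, iou_threshold=0.5):
    """Return the ids of predicted segments that match an individual void
    instance and should therefore be ignored (not counted as false positives)."""
    ignored = set()
    # One entry per individual void instance instead of one entry for their union.
    void_ids, _ = torch.unique(instance_true[batch_idx] * void_mask, return_counts=True)
    for void_id in void_ids:
        if void_id == 0:  # 0 is the background / non-instance value
            continue
        void_segment = instance_true[batch_idx] == void_id
        # Candidate predictions are those that overlap this particular void instance.
        for pred_id in torch.unique(instance_pred[batch_idx][void_segment]):
            if pred_id == 0:
                continue
            pred_segment = instance_pred[batch_idx] == pred_id
            inter = (pred_segment & void_segment).sum().float()
            union = (pred_segment | void_segment).sum().float()
            if inter / union > iou_threshold:
                ignored.add(int(pred_id))
    return ignored
```

The key difference from the buggy version is that the outer loop now visits each void instance id individually, so a prediction that covers only one of several void objects can still be matched and ignored.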

@VSainteuf
Owner

This is fixed now.
I re-evaluated the models on PASTIS with the new code; the bug fix results in a ~2-3 pt PQ increase thanks to the reduced number of false positives.

@clotilda-0

I'm facing an issue with:
from utils.dataset import ijgiDataset as Dataset
which gives a "no module" error in Main_SITSMamba.py.
Is there a possible fix, @VSainteuf @marckatzenmaier?

@VSainteuf
Owner

Hi @clotilda-0, I don't think your issue refers to the utae-paps repository, as it does not contain an ijgiDataset class nor a Main_SITSMamba.py script.
