How about 3D version? #20
Grounded SAM 2 can currently only handle 2D images. To apply it in a 3D scene, I think you would first need to project a given view onto a 2D image plane and then run Grounded SAM 2 on that view. BTW, I was wondering what tracking objects in 3D scenarios means in your case. Could you explain the scenario you have in mind more clearly? That will help us brainstorm solutions with you.
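For concreteness, the projection step mentioned above could look like the sketch below. This is a minimal pinhole-camera projection in NumPy; the intrinsics `K` and the helper `project_points` are hypothetical illustrations, not part of the Grounded SAM 2 API:

```python
import numpy as np

def project_points(points_3d, K, R=None, t=None):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    if R is None:
        R = np.eye(3)          # assume camera aligned with world axes
    if t is None:
        t = np.zeros(3)        # assume camera at the world origin
    cam = points_3d @ R.T + t  # world -> camera coordinates
    uvw = cam @ K.T            # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> pixels

# Hypothetical intrinsics for a 640x480 camera, focal length 500 px
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

pts = np.array([[0.0, 0.0, 2.0],    # point on the optical axis
                [0.5, 0.0, 2.0]])   # point 0.5 m to the right
pix = project_points(pts, K)
print(pix)  # [[320. 240.] [445. 240.]]
```

The rendered 2D view could then be fed to Grounded SAM 2, and the resulting masks lifted back to 3D using the same camera parameters.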
I have seen a similar example in SEEM: https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once?tab=readme-ov-file#tulip-nerf-examples. Is this the kind of scenario you need?
For example, tracking an object in 3D can be more precise than tracking it in 2D, and multi-object tracking is very common in street scenarios (self-driving) and in dynamic SLAM / indoor robotics.
I wonder whether we can track objects in 3D with Grounded SAM 2?