This program detects when a shot occurs and fills in the ball's flight from the captured data. It calculates the ball's initial velocity and launch angle, and it can estimate the ball's flight perpendicular to the camera plane (the z-axis) using a single camera. The program also detects when the ball's flight is interrupted by another object and drops those data points. A sketch of the velocity and launch-angle estimation appears after the list below.
- unstable video
- shot interrupted by person
- shot interrupted by object
- shot angled with component perpendicular to the camera plane
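The launch estimate can be illustrated with a short, self-contained sketch: fit a line to the horizontal positions and a parabola to the vertical positions of the ball's center points during free flight. The frame rate and pixels-per-meter scale used here are illustrative assumptions, not values taken from this repository.

```python
import numpy as np

def estimate_launch(frames, xs, ys, fps=30.0, px_per_m=100.0):
    """Estimate initial speed (m/s) and launch angle (degrees) from ball centers.

    frames, xs, ys : frame indices and ball center positions in pixels (y grows downward).
    fps, px_per_m  : assumed frame rate and pixel-to-meter scale (illustrative values).
    """
    t = np.asarray(frames, dtype=float) / fps
    t = t - t[0]                                   # measure time from the first sample
    x = np.asarray(xs, dtype=float) / px_per_m
    y = -np.asarray(ys, dtype=float) / px_per_m    # flip so positive y points up

    # During unobstructed flight x(t) is linear and y(t) is quadratic.
    vx0, _ = np.polyfit(t, x, 1)                   # slope = horizontal velocity
    _, vy0, _ = np.polyfit(t, y, 2)                # linear coefficient = initial vertical velocity

    speed = np.hypot(vx0, vy0)
    angle = np.degrees(np.arctan2(vy0, vx0))
    return speed, angle
```

Only data points from unobstructed flight would be fed into such a fit, which is also how interrupted segments can be excluded.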
clip_ID | width | height | frame | category | score | x1 | x2 | y1 | y2 | model |
---|---|---|---|---|---|---|---|---|---|---|
int | int | int | int | string | float | int | int | int | int | string |
Each frame is represented by an individual line containing only the highest-scoring bounding box of each detected category
- All frames are represented exactly once
- Designed for videos containing at maximum a single basketball and a single person
- NaN values are used in the absence of a detected basketball or person in a frame
clip_ID | width | height | frame | x1_basketball | x2_basketball | y1_basketball | y2_basketball | x1_person | x2_person | y1_person | y2_person |
---|---|---|---|---|---|---|---|---|---|---|---|
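A minimal sketch of how the long-format detections could be pivoted into this per-frame layout with pandas; the CSV path follows the directory tree at the end of this section, while the pivot logic is an assumption rather than the repository's actual code.

```python
import pandas as pd

# One row per detected object per frame (the OL annotation format above)
ol = pd.read_csv("data/ol_annotations/ol_annotations.csv")

# Keep only the highest-scoring box of each category in each frame
best = (ol.sort_values("score", ascending=False)
          .drop_duplicates(subset=["clip_ID", "frame", "category"]))

# Pivot to one row per frame; frames missing a category become NaN
wide = best.pivot_table(index=["clip_ID", "width", "height", "frame"],
                        columns="category",
                        values=["x1", "x2", "y1", "y2"],
                        aggfunc="first")
wide.columns = [f"{coord}_{cat}" for coord, cat in wide.columns]
wide = wide.reset_index()
```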
Each line is an individual frame and contains the center-point coordinates of the highest-scoring basketball detected, as well as the radius and a "free" column
- All frames are represented exactly once
- The free column is True if the highest-scoring basketball's bounding box has no overlap with the highest-scoring person's bounding box
- The radius is ((x2 - x1) + (y2 - y1)) / 2 (see the sketch after the table below)
- NaN values are used in the absence of a detected basketball
clip_ID | width | height | frame | x | y | radius | free |
---|---|---|---|---|---|---|---|
int | int | int | int | int | int | float | bool |
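The center point, radius, and free flag described above could be derived from the per-frame boxes roughly as follows; the column names follow the tables in this section, while the overlap test itself is an assumed implementation.

```python
import pandas as pd

def ball_frame(wide: pd.DataFrame) -> pd.DataFrame:
    """Reduce per-frame basketball/person boxes to center point, radius and free flag."""
    out = wide[["clip_ID", "width", "height", "frame"]].copy()
    out["x"] = (wide["x1_basketball"] + wide["x2_basketball"]) / 2
    out["y"] = (wide["y1_basketball"] + wide["y2_basketball"]) / 2
    out["radius"] = ((wide["x2_basketball"] - wide["x1_basketball"])
                     + (wide["y2_basketball"] - wide["y1_basketball"])) / 2

    # free is True when the ball box has no overlap with the person box
    overlap = ((wide["x1_basketball"] <= wide["x2_person"])
               & (wide["x2_basketball"] >= wide["x1_person"])
               & (wide["y1_basketball"] <= wide["y2_person"])
               & (wide["y2_basketball"] >= wide["y1_person"]))
    out["free"] = ~overlap
    return out
```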
- This is used to verify the accuracy of the model's detections
- Multiple objects are possible for each image
<annotation>
<folder></folder>
<filename></filename>
<path></path>
<source>
<database></database>
</source>
<size>
<width></width>
<height></height>
<depth></depth>
</size>
<segmented></segmented>
<object>
<name></name>
<pose></pose>
<truncated></truncated>
<difficult></difficult>
<bndbox>
<xmin></xmin>
<ymin></ymin>
<xmax></xmax>
<ymax></ymax>
</bndbox>
</object>
</annotation>
OL | LI |
---|---|
clip_ID | folder |
frame | file |
width | width |
height | height |
category | name |
score | |
x1 | xmin |
x2 | xmax |
y1 | ymin |
y2 | ymax |
model | |
- frame in this repository is the file name minus its extension
- score is 100.0 if annotated by a human
- model is "human" if annotated by a human
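A minimal sketch of converting one LabelImg XML file into OL-style rows, following the column correspondence above; the helper name and the returned dict rows are illustrative.

```python
import os
import xml.etree.ElementTree as ET

def li_to_ol_rows(xml_path):
    """Turn a LabelImg XML annotation into OL-style dict rows."""
    root = ET.parse(xml_path).getroot()
    frame = int(os.path.splitext(root.findtext("filename"))[0])  # file name minus its extension
    rows = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        rows.append({
            "clip_ID": root.findtext("folder"),    # folder corresponds to clip_ID
            "width": int(root.findtext("size/width")),
            "height": int(root.findtext("size/height")),
            "frame": frame,
            "category": obj.findtext("name"),
            "score": 100.0,                        # human annotations use a fixed score
            "x1": int(box.findtext("xmin")),
            "x2": int(box.findtext("xmax")),
            "y1": int(box.findtext("ymin")),
            "y2": int(box.findtext("ymax")),
            "model": "human",                      # annotated by a human
        })
    return rows
```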
{
    "PATH/TO/FRAME/IMAGE" :
    {
        "image_path" : "PATH/TO/FRAME/IMAGE",
        "image_folder" : "IMAGE_FOLDER",
        "image_filename" : "IMAGE_FILENAME",
        "image_height" : HEIGHT_IN_PIXELS (int),
        "image_width" : WIDTH_IN_PIXELS (int),
        "image_items_list" :
        [
            {
                "category" : "NAME",
                "score" : ACCURACY_SCORE (float),
                "box" : [x1,x2,y1,y2] (ints),
                "model" : "EVALUATION_MODEL"
            }
        ]
    }
}
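Reading this JSON structure back is straightforward; a short sketch, assuming the file is named ol_annotations.json.

```python
import json

with open("ol_annotations.json") as f:
    annotations = json.load(f)

for image_path, info in annotations.items():
    for item in info["image_items_list"]:
        x1, x2, y1, y2 = item["box"]
        print(f"{image_path}: {item['category']} "
              f"score={item['score']} model={item['model']} "
              f"box=({x1}, {x2}, {y1}, {y2})")
```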
data
│
└───clips
│   │
│   │   CLIP_ID1.mp4
│   │   CLIP_ID2.mp4
│   │   ...
│
└───verified_li_annotations
│   │
│   └───CLIP_ID1
│   │   │
│   │   └───frames
│   │   │   │   1.jpg
│   │   │   │   2.jpg
│   │   │   │   ...
│   │   │
│   │   └───li_annotations
│   │   │   │   1.xml
│   │   │   │   2.xml
│   │   │   │   ...
│   │
│   └───CLIP_ID2
│       ...
│
└───ol_annotations
│       ol_annotations.csv
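A small sketch of walking this layout with pathlib to pair each clip's frames with its LabelImg annotations; the data root is assumed to be the repository's data directory.

```python
from pathlib import Path

data = Path("data")

for clip_dir in sorted((data / "verified_li_annotations").iterdir()):
    if not clip_dir.is_dir():
        continue
    frames = sorted((clip_dir / "frames").glob("*.jpg"))
    annots = sorted((clip_dir / "li_annotations").glob("*.xml"))
    print(f"{clip_dir.name}: {len(frames)} frames, {len(annots)} annotations")
```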