This is the official implementation of our IJCV 2022 paper Multi-Object Tracking and Segmentation via Neural Message Passing (Guillem Brasó*, Orcun Cetintas*, Laura Leal-Taixé).
This work builds upon our previous CVPR 2020 (oral) paper Learning a Neural Solver for Multiple Object Tracking and extends it by:
- integrating an attentive module into our neural message passing scheme to yield a unified model for multi-object tracking and segmentation
- providing an extensive evaluation of our tracking model on three challenging datasets: MOT20, KITTI, and the recently proposed Human in Events dataset.
1. Clone and enter this repository:

   ```bash
   git clone https://github.com/ocetintas/MPNTrackSeg.git
   cd MPNTrackSeg
   ```
2. Add the project directories to `PYTHONPATH` by copying the following lines into your `.bashrc` file:

   ```bash
   export PYTHONPATH="${PYTHONPATH}:[PATH_TO_YOUR_PROJECT]/MPNTrackSeg/src"
   export PYTHONPATH="${PYTHONPATH}:[PATH_TO_YOUR_PROJECT]/MPNTrackSeg/MOTChallengeEvalKit/src"
   export PYTHONPATH="${PYTHONPATH}:[PATH_TO_YOUR_PROJECT]/MPNTrackSeg/tracktor-mots/src"
   ```
3. Create an Anaconda environment for this project:

   ```bash
   conda env create -f environment.yml
   conda activate MPNTrackSeg
   ```
4. Modify the variables `DATA_PATH` and `OUTPUT_PATH` in `src/mot_neural_solver/path_cfg.py` so that they are set to your preferred locations for storing datasets and output results, respectively.
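   For reference, a minimal sketch of what these two assignments look like; the paths shown are placeholders and the actual file may contain more:

   ```python
   # src/mot_neural_solver/path_cfg.py -- only these two names are referenced
   # by this README; the paths below are placeholders.
   DATA_PATH = "/storage/datasets"  # root folder holding MOTS20 / KITTIMOTS
   OUTPUT_PATH = "/storage/output"  # where results and models are written
   ```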
5. Download the MOTS20 and/or KITTIMOTS datasets. Expected folder structure:

   ```
   DATA_PATH
   ├── KITTIMOTS
   │   └── ...
   └── MOTS20
       ├── train
       │   ├── MOTS20-02
       │   │   ├── det
       │   │   │   └── det.txt
       │   │   ├── gt
       │   │   │   └── gt.txt
       │   │   ├── img1
       │   │   │   └── ...
       │   │   └── seqinfo.ini
       │   └── ...
       └── test
           └── ...
   ```
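   Optionally, a quick sanity check (hypothetical, not part of the repo) that the MOTS20 layout above is in place:

   ```python
   # Verify each MOTS20 train sequence has the files expected above.
   from pathlib import Path

   DATA_PATH = Path("/storage/datasets")  # same value as in path_cfg.py

   for seq_dir in sorted((DATA_PATH / "MOTS20" / "train").iterdir()):
       if not seq_dir.is_dir():
           continue
       for rel in ("det/det.txt", "gt/gt.txt", "img1", "seqinfo.ini"):
           status = "ok" if (seq_dir / rel).exists() else "MISSING"
           print(f"{seq_dir.name}/{rel}: {status}")
   ```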
6. (OPTIONAL) We provide our trained models and detections:
   - Default folder structure for the trained models:

     ```
     MPNTrackSeg
     └── output
         └── trained_models
             ├── mots
             │   ├── kitti.ckpt
             │   └── mots20.ckpt
             └── reid
                 └── resnet50_market_cuhk_duke.tar-232
     ```
   - The default location for each detections file is the `det` folder of its sequence, with the name `det.txt` (see step 5); a hypothetical placement helper is sketched after this list.
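If you download our detections, a helper along these lines can place them; the source directory and the per-sequence `<sequence>.txt` naming are assumptions, so adapt them to how the downloaded files are actually organized:

```python
# Hypothetical helper: copy downloaded detection files into each sequence's
# det/ folder as det.txt. SRC_DIR and the file naming are assumptions.
import shutil
from pathlib import Path

DATA_PATH = Path("/storage/datasets")       # same value as in path_cfg.py
SRC_DIR = Path("/storage/downloaded_dets")  # wherever you saved the downloads

for det_file in sorted(SRC_DIR.glob("MOTS20-*.txt")):  # e.g. MOTS20-02.txt
    det_dir = DATA_PATH / "MOTS20" / "train" / det_file.stem / "det"
    det_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(det_file, det_dir / "det.txt")
    print(f"placed {det_file.name} -> {det_dir / 'det.txt'}")
```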
For training, specify hyperparameters via `tracking_cfg.yaml` and choose the train and validation splits via `train.py`:

```bash
python scripts/train.py
```
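Before launching a run, you can optionally dump the config to double-check the hyperparameters; the exact location of `tracking_cfg.yaml` inside the repo is an assumption here:

```python
# Print the hyperparameters the training script will pick up. Adjust the
# path to wherever tracking_cfg.yaml lives in your checkout.
import yaml

with open("configs/tracking_cfg.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():  # top-level hyperparameter groups
    print(f"{key}: {value}")
```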
For evaluation, specify hyperparameters via `tracking_cfg.yaml` and choose the trained model via `evaluate.py`. Our hyperparameters for the MOTS20 and KITTIMOTS datasets are provided under the `configs` folder:

```bash
python scripts/evaluate.py
```
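If you want to post-process the tracker output yourself, here is a minimal reader, assuming results follow the official MOTS txt format (`frame id class_id img_height img_width rle`, one object per line, COCO-style compressed RLE masks); the result path is hypothetical:

```python
# Minimal MOTS-format result reader (format assumption noted above).
# Requires pycocotools: pip install pycocotools
import pycocotools.mask as rletools

def read_mots_results(path):
    """Yield (frame, obj_id, class_id, binary mask) per line of a results file."""
    with open(path) as f:
        for line in f:
            fields = line.strip().split(" ")
            frame, obj_id, class_id = (int(x) for x in fields[:3])
            height, width, rle = int(fields[3]), int(fields[4]), fields[5]
            mask = rletools.decode(
                {"size": [height, width], "counts": rle.encode("utf-8")}
            )
            yield frame, obj_id, class_id, mask

for frame, obj_id, class_id, mask in read_mots_results("output/MOTS20-02.txt"):
    print(frame, obj_id, class_id, mask.sum())  # mask.sum() = area in pixels
```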
If you use our work in your research, please cite our publications:
- Multi-Object Tracking and Segmentation via Neural Message Passing (IJCV 2022)

  ```
  @article{MPNTrackSeg,
      author  = {Bras{\'o}, Guillem and Cetintas, Orcun and Leal-Taix{\'e}, Laura},
      title   = {Multi-Object Tracking and Segmentation Via Neural Message Passing},
      journal = {International Journal of Computer Vision},
      year    = {2022},
      doi     = {10.1007/s11263-022-01678-6},
      url     = {https://doi.org/10.1007/s11263-022-01678-6}
  }
  ```
- Learning a Neural Solver for Multiple Object Tracking (CVPR 2020)

  ```
  @InProceedings{braso_2020_CVPR,
      author    = {Guillem Bras{\'o} and Laura Leal-Taix{\'e}},
      title     = {Learning a Neural Solver for Multiple Object Tracking},
      booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2020}
  }
  ```
We use the Tracktor codebase for preprocessing, and MOTChallengeEvalKit and TrackEval for evaluation. We thank the authors of these codebases for their great work!