A TensorFlow implementation of PersonLab for multi-person pose estimation and instance segmentation. The model identifies every person instance, localizes its facial and body keypoints, and estimates its instance segmentation mask.
Code repo for reproducing the ECCV 2018 paper PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model.
Demo results: Pose / Segmentation
- Python 3
- TensorFlow 1.8.0
- pycocotools 2.0
- scikit-image 0.13.0
- opencv-python 3.4.1
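A quick way to confirm the environment matches these versions is a short check from a Python shell, for example:

```python
# Sanity-check that the required packages are importable and print their versions.
import tensorflow as tf
import skimage
import cv2
import pycocotools  # older releases have no reliable __version__, so just import it

print("TensorFlow:", tf.__version__)          # expected: 1.8.0
print("scikit-image:", skimage.__version__)   # expected: 0.13.0
print("OpenCV:", cv2.__version__)             # expected: 3.4.1
```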
- Download the model
- Run `python demo.py` to run the demo and visualize the model's results (a quick checkpoint sanity check is sketched below)
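If the demo complains about a missing or mismatched checkpoint, `tf.train.NewCheckpointReader` can list what a downloaded checkpoint actually contains. The path below is only an assumed location, not one fixed by this repo:

```python
# List the variables stored in a downloaded checkpoint to confirm it is usable.
# './model/personlab/model.ckpt' is an assumed path -- substitute the actual
# location of the checkpoint from the download step.
import tensorflow as tf

reader = tf.train.NewCheckpointReader('./model/personlab/model.ckpt')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)
```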
- Download the COCO 2017 dataset:
  - http://images.cocodataset.org/zips/train2017.zip
  - http://images.cocodataset.org/zips/val2017.zip
  - http://images.cocodataset.org/annotations/annotations_trainval2017.zip

  Put the training images in `coco2017/train2017/`, the val images in `coco2017/val2017/`, and the training annotations in `coco2017/annotations/` (a quick layout check is sketched below).
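Once the archives are unpacked into that layout, a short pycocotools check can confirm everything is in place. The annotation filename follows the standard COCO naming, and the `coco2017` root matches the paths above:

```python
# Verify the COCO 2017 layout by loading the keypoint annotations and
# checking that a referenced image actually exists on disk.
import os
from pycocotools.coco import COCO

data_dir = 'coco2017'  # assumed dataset root, matching the paths above
coco = COCO(os.path.join(data_dir, 'annotations', 'person_keypoints_train2017.json'))

img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=['person']))
print('person images in train2017:', len(img_ids))

# Spot-check that the first image file is where the data loader will look for it.
img_info = coco.loadImgs(img_ids[0])[0]
img_path = os.path.join(data_dir, 'train2017', img_info['file_name'])
print(img_path, 'exists:', os.path.exists(img_path))
```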
- Download the ResNet-101 pretrained model and put it at `./model/101/resnet_v2_101.ckpt` (a restore sketch follows below).
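For illustration only (this is not the repo's model-building code), the TF-Slim ResNet-v2-101 definition shipped in `tf.contrib` can be restored from that checkpoint roughly as follows; the input shape here is an assumption:

```python
# Minimal sketch of restoring the resnet_v2_101 checkpoint with TF-Slim
# (TensorFlow 1.x). Only intended to show that the checkpoint variables
# match the slim graph; the actual network is built in the repo's code.
import tensorflow as tf
from tensorflow.contrib import slim
from tensorflow.contrib.slim.nets import resnet_v2

images = tf.placeholder(tf.float32, [None, 401, 401, 3])  # input shape is an assumption

with slim.arg_scope(resnet_v2.resnet_arg_scope()):
    net, end_points = resnet_v2.resnet_v2_101(images, num_classes=None,
                                              is_training=False,
                                              global_pool=False)

restorer = tf.train.Saver(slim.get_model_variables('resnet_v2_101'))
with tf.Session() as sess:
    restorer.restore(sess, './model/101/resnet_v2_101.ckpt')
    print('restored', len(slim.get_model_variables('resnet_v2_101')), 'variables')
```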
- Edit `config.py` to set the training options, e.g. dataset location, input tensor shape, and learning rate (a hypothetical example follows below).
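The authoritative option names are the ones in `config.py`; the snippet below only illustrates the kind of settings involved, with hypothetical names and values:

```python
# Hypothetical excerpt of config.py -- the real option names may differ,
# so edit the actual file rather than copying this verbatim.
DATASET_DIR      = 'coco2017/'                    # where train2017/ and annotations/ live
PRETRAINED_MODEL = './model/101/resnet_v2_101.ckpt'
IMAGE_SHAPE      = (401, 401, 3)                  # input tensor shape (assumed value)
BATCH_SIZE       = 2
LEARNING_RATE    = 1e-4
NUM_EPOCHS       = 100
```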
- Run the `train.py` script (a minimal training-loop skeleton is sketched below).
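For orientation, a bare-bones TensorFlow 1.x training loop with periodic checkpointing has the general shape below; the quadratic loss is only a dummy stand-in for the losses `train.py` actually builds, and the paths and step counts are arbitrary:

```python
# Skeleton of a TensorFlow 1.x training loop with periodic checkpointing.
# The loss below is a dummy placeholder, not the repo's PersonLab losses.
import os
import tensorflow as tf

save_dir = './model/demo_run'            # assumed output directory
os.makedirs(save_dir, exist_ok=True)

w = tf.get_variable('w', shape=[], initializer=tf.constant_initializer(5.0))
loss = tf.square(w - 1.0)                # dummy loss standing in for the real ones
train_op = tf.train.AdamOptimizer(learning_rate=1e-2).minimize(loss)
saver = tf.train.Saver(max_to_keep=5)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1001):
        _, loss_val = sess.run([train_op, loss])
        if step % 200 == 0:
            print('step', step, 'loss', loss_val)
            saver.save(sess, os.path.join(save_dir, 'model.ckpt'), global_step=step)
```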
The augmentation code (which differs from the procedure in the PersonLab paper) and the data iterator code are heavily borrowed from this fork of the Keras implementation of CMU's "Realtime Multi-Person Pose Estimation". (The pose plotting function is also influenced by the one in that repo.)
The mask generation code and visualization code are from this fork of the Keras implementation of PersonLab.
@inproceedings{papandreou2018personlab,
title={PersonLab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model},
author={Papandreou, George and Zhu, Tyler and Chen, Liang-Chieh and Gidaris, Spyros and Tompson, Jonathan and Murphy, Kevin},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
pages={269--286},
year={2018}
}