Albert J. Zhai, Shenlong Wang
University of Illinois at Urbana-Champaign
ICCV 2023
As required by the Habitat Challenge, our code uses Docker to run. Install nvidia-docker by following the instructions here (only Linux is supported). There is no need to manually install any other dependencies. However, you do need to download and place several files, as follows:
- Make a folder `habitat-challenge-data/data/scene_datasets/hm3d`
- Download HM3D train and val scenes and extract them in `habitat-challenge-data/data/scene_datasets/hm3d/<split>`, so that you have `habitat-challenge-data/data/scene_datasets/hm3d/val/00800-TEEsavR23oF`, etc.
- Download the episode dataset and extract it in `habitat-challenge-data`, so that you have `habitat-challenge-data/objectgoal_hm3d/val`, etc.
- Download the Mask-RCNN weights and place them at `nav/agent/utils/mask_rcnn_R_101_cat9.pth`
- Download the prediction network weights and place them at `nav/pred_model_wts.pth`
The file structure should look like this:

```
PEANUT/
├── habitat-challenge-data/
│   ├── objectgoal_hm3d/
│   │   ├── train/
│   │   ├── val/
│   │   └── val_mini/
│   └── data/
│       └── scene_datasets/
│           └── hm3d/
│               ├── train/
│               └── val/
└── nav/
    ├── pred_model_wts.pth
    └── agent/
        └── utils/
            └── mask_rcnn_R_101_cat9.pth
```
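As a quick sanity check before building the Docker image, a short script along these lines can confirm that every required file is in place. This is just a sketch; the path list simply mirrors the tree above (`val_mini` is left out here since only `train`/`val` are mentioned in the download steps):

```python
from pathlib import Path

# Paths from the directory tree above, relative to the PEANUT repo root.
REQUIRED = [
    "habitat-challenge-data/objectgoal_hm3d/train",
    "habitat-challenge-data/objectgoal_hm3d/val",
    "habitat-challenge-data/data/scene_datasets/hm3d/train",
    "habitat-challenge-data/data/scene_datasets/hm3d/val",
    "nav/pred_model_wts.pth",
    "nav/agent/utils/mask_rcnn_R_101_cat9.pth",
]

def missing_paths(root="."):
    """Return the required paths that do not exist under `root`."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_paths()
    if missing:
        print("Missing:\n" + "\n".join(missing))
    else:
        print("All required files found.")
```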
In general, you should modify the contents of `nav_exp.sh` to run the specific Python script and command-line arguments that you want. Then, simply run

```
sh build_and_run.sh
```

to build and run everything in Docker. Note: depending on how Docker is set up on your system, you may need `sudo` for this.
An example script for evaluating ObjectNav performance is provided in `nav/collect.py`. This script is a good entry point for understanding the code, and it is what `nav_exp.sh` runs by default. See `nav/arguments.py` for available command-line arguments.
An example script for collecting semantic maps and saving them as `.npz` files is provided in `nav/collect_maps.py`. A link to download the original map dataset used in the paper is provided below.
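The exact contents of the saved files are determined by `nav/collect_maps.py`, but at a high level each `.npz` archive holds one or more named map arrays. As an illustration only (the array name, channel count, and grid size below are made up, not the script's actual keys), NumPy's compressed-archive round trip looks like this:

```python
import numpy as np

# Hypothetical example: a 22-channel semantic map on a 960x960 grid.
# The real key names and shapes come from nav/collect_maps.py.
semantic_map = np.zeros((22, 960, 960), dtype=np.uint8)
semantic_map[4, 100:120, 200:230] = 1  # pretend channel 4 marks "chair" cells

# Save and reload the map as a compressed .npz archive.
np.savez_compressed("example_map.npz", semantic_map=semantic_map)
data = np.load("example_map.npz")
loaded = data["semantic_map"]
print(loaded.shape, loaded.dtype, int(loaded.sum()))
```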
We use MMSegmentation to train and run PEANUT's prediction model. A custom clone of MMSegmentation is contained in `prediction/`, and a training script is provided in `prediction/train_prediction_model.py`. Please see the MMSegmentation docs in the `prediction/` folder for more information on how to use MMSegmentation.
The original map dataset used in the paper can be downloaded from this Google Drive link. It contains sequences of semantic maps from 5000 episodes (4000 train, 1000 val) of Stubborn-based exploration in HM3D. This dataset can be used directly to train a target prediction model with `prediction/train_prediction_model.py`.
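One natural way to consume such map sequences for target prediction is to pair a partially observed map from mid-episode (the network input) with the more complete map from the end of the same episode (the supervision signal for what remains unseen). The sketch below illustrates that idea on a toy sequence; it is an assumption about the data layout, not the actual loader in `train_prediction_model.py`:

```python
import numpy as np

def make_training_pair(map_sequence, t):
    """Pair the map at step t (input) with the final map (target).

    `map_sequence` is assumed to be an array of cumulative semantic
    maps with shape (T, C, H, W); the real layout may differ.
    """
    x = map_sequence[t]   # partial observation at step t
    y = map_sequence[-1]  # most complete map of the episode
    return x, y

# Toy sequence: observed coverage grows monotonically over 5 steps.
T, C, H, W = 5, 3, 8, 8
seq = np.zeros((T, C, H, W), dtype=np.float32)
for t in range(T):
    seq[t, 0, : 2 * (t + 1), :] = 1.0  # channel 0 fills in over time

x, y = make_training_pair(seq, t=1)
print(x.sum(), y.sum())  # the target always covers at least the input
```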
Please cite our paper if you find this repo useful!
```
@inproceedings{zhai2023peanut,
  title={{PEANUT}: Predicting and Navigating to Unseen Targets},
  author={Zhai, Albert J and Wang, Shenlong},
  booktitle={ICCV},
  year={2023}
}
```
This project builds upon code from Stubborn, SemExp, and MMSegmentation. We thank the authors of these projects for their amazing work!