Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction
Our codebase requires Python ≥ 3.9. It also requires a modified version of MineDojo as the simulator and MineCLIP as the goal text encoder. Please run the following commands to prepare the environment.
```bash
conda create -n controller python=3.9
conda activate controller
python -m pip install numpy torch==2.0.0.dev20230208+cu117 --index-url https://download.pytorch.org/whl/nightly/cu117
python -m pip install -r requirements.txt
python -m pip install git+https://github.com/MineDojo/MineCLIP
python -m pip install git+https://github.com/CraftJarvis/MC-Simulator.git
```
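After the installation finishes, a quick import check like the sketch below can confirm that PyTorch sees the GPU and that the simulator and MineCLIP are importable. Note that the module names `minedojo` and `mineclip` are assumptions based on the upstream projects; the modified simulator may expose a different name.

```python
# Minimal smoke test for the environment setup (module names are assumptions).
import torch

print("CUDA available:", torch.cuda.is_available())

import minedojo   # modified MineDojo simulator (assumed module name)
import mineclip   # MineCLIP goal text encoder (assumed module name)

print("minedojo loaded from:", minedojo.__file__)
print("mineclip loaded from:", mineclip.__file__)
```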
| Biome | Download |
| --- | --- |
| Plains | url |
| Flat | to be uploaded |
| Forests | to be uploaded |
Run the following command to train the agent.
```bash
python main.py data=multi_plains eval=multi_plains
```
We provide three configuration files, one per biome: multi_plains, multi_forests, and multi_flat (see the example below for training in another biome).
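For example, to train in the forest biome instead, swap in the corresponding configuration. This assumes the forest dataset above has been downloaded and that the data and eval configurations follow the same naming scheme as multi_plains.

```bash
python main.py data=multi_forests eval=multi_forests
```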
To evaluate a trained checkpoint, run
```bash
python main.py eval=multi_plains eval.only=True model.load_ckpt_path=<path/to/ckpt>
```
After loading, you should see some windows where agents are playing Minecraft.
Below are the configuration files and pretrained weights of our models.
| Configuration | Download | Biome | Number of goals |
| --- | --- | --- | --- |
| Transformer | here | Plains | 4 |
| Transformer + Extra Observation | here | Plains | 4 |
For example, to use the "Transformer + Extra Observation" checkpoint, specify model=transformer_w_extra in the command:
```bash
python main.py eval=multi_plains eval.only=True model=transformer_w_extra model.load_ckpt_path=<path/to/ckpt>
```
Our paper is posted on arXiv. If you find it helpful, please consider citing us!
```bibtex
@article{cai2023open,
  title={Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction},
  author={Cai, Shaofei and Wang, Zihao and Ma, Xiaojian and Liu, Anji and Liang, Yitao},
  journal={arXiv preprint arXiv:2301.10034},
  year={2023}
}
```