This is the implementation of RLAD, which is described in:
RLAD: Reinforcement Learning from Pixels for Autonomous Driving in Urban Environments
If you find our work useful, please consider citing:
```bibtex
@ARTICLE{10364974,
  author={Coelho, Daniel and Oliveira, Miguel and Santos, Vítor},
  journal={IEEE Transactions on Automation Science and Engineering},
  title={RLAD: Reinforcement Learning From Pixels for Autonomous Driving in Urban Environments},
  year={2023},
  volume={},
  number={},
  pages={1-9},
  keywords={Training;Task analysis;Reinforcement learning;Autonomous vehicles;Visualization;Convolution;Urban areas;Autonomous driving;reinforcement learning;deep learning;feature representation;deep neural networks},
  doi={10.1109/TASE.2023.3342419}}
```
- Clone the repository:
  ```shell
  git clone [email protected]:DanielCoelho112/rlad.git
  ```
- Download CARLA 0.9.10.1.
- Run the docker container:
  ```shell
  docker run -it --gpus all --network=host -v results_path:/root/results/rlad -v rlad_path:/root/rlad danielc11/rlad:0.0 bash
  ```
  where `results_path` is the path where the results will be written, and `rlad_path` is the path to the rlad repository.
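For illustration, here is a minimal sketch of how the two placeholders might be filled in; the example paths below are assumptions, not prescribed by the repository. The command is echoed rather than executed so the mounts can be checked first:

```shell
# Sketch only: these paths are hypothetical examples; substitute your own.
results_path="$HOME/results/rlad"
rlad_path="$HOME/rlad"

# Print the fully substituted command instead of running it,
# so the volume mounts can be verified before launching the container.
echo docker run -it --gpus all --network=host \
  -v "${results_path}:/root/results/rlad" \
  -v "${rlad_path}:/root/rlad" \
  danielc11/rlad:0.0 bash
```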
- Start the CARLA server (for CARLA 0.9.10.1, this is typically done by running `./CarlaUE4.sh` from the CARLA root directory).
- Run:
  ```shell
  python3 rlad/run/main.py -en rlad_original
  ```
Thanks to the authors of "End-to-End Urban Driving by Imitating a Reinforcement Learning Coach" for providing a framework to train RL agents in CARLA.