# OMR: Occlusion-Aware Memory-Based Refinement for Video Lane Detection

Dongkwon Jin and Chang-Su Kim

Official implementation of "OMR: Occlusion-Aware Memory-Based Refinement for Video Lane Detection" (ECCV 2024).
## Requirements

- Python >= 3.6
- PyTorch >= 1.10
- CUDA >= 10.0
- CuDNN >= 7.6.5
## Installation

1. Clone the repository:

   ```shell
   git clone https://github.com/dongkwonjin/OMR.git
   cd OMR
   ```

2. Download the pre-trained models and preprocessed data, then unzip them:

   ```shell
   unzip pretrained.zip
   unzip preprocessing.zip
   ```

3. Create and activate a conda environment:

   ```shell
   conda create -n OMR python=3.8 anaconda
   conda activate OMR
   ```

4. Install dependencies:

   ```shell
   conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
   pip install -r requirements.txt
   ```

   You can also find more options for installing PyTorch here.
## Datasets

- OpenLane-V: Download OpenLane-V and place it in the original OpenLane dataset directory.
- VIL-100: Download from here.
- KINS: Download from here.
## Directory structure

```
ROOT
├── Preprocessing            # Data preprocessing code
│   ├── VIL-100              # Dataset: VIL-100, OpenLane-V
│   │   ├── P00              # Preprocessing step 1
│   │   │   ├── code
│   │   ├── P01              # Preprocessing step 2
│   │   │   ├── code
│   │   └── ...
│   └── ...
├── Modeling                 # Model code
│   ├── VIL-100              # Dataset: VIL-100, OpenLane-V
│   │   ├── ILD_cls          # ILD module for predicting lane probability map and latent obstacle mask
│   │   │   ├── code
│   │   ├── ILD_reg          # ILD module for regressing lane coefficient maps
│   │   │   ├── code
│   │   ├── OMR              # OMR module
│   │   │   ├── code
│   ├── OpenLane-V
│   │   ├── ...
├── pretrained               # Pretrained model parameters
│   ├── VIL-100
│   ├── OpenLane-V
│   └── ...
├── preprocessed             # Preprocessed data
│   ├── VIL-100
│   │   ├── P00
│   │   │   ├── output
│   │   ├── P02
│   │   │   ├── output
│   └── ...
├── OpenLane                 # Dataset directory
│   ├── images
│   ├── lane3d_1000          # Not used
│   ├── OpenLane-V
│   │   ├── label
│   │   ├── list
├── VIL-100
│   ├── JPEGImages
│   ├── Annotations          # Not used
└── ...
```
## Evaluation (VIL-100)

To evaluate on VIL-100, follow these steps:

1. Install the evaluation tools:
   - Download the official CULane evaluation tools from here.
   - Save the tools in the `ROOT/Modeling/VIL-100/MODEL_NAME/code/evaluation/culane/` directory and build them:

     ```shell
     cd ROOT/Modeling/VIL-100/MODEL_NAME/code/evaluation/culane/
     make
     ```

2. For detailed installation instructions, refer to the installation guideline.
## Train

1. Configure training:
   - Set the dataset (`DATASET_NAME`) and model (`MODEL_NAME`) you want to train.
   - Specify your dataset path using the `--dataset_dir` argument.

   ```shell
   cd ROOT/Modeling/DATASET_NAME/MODEL_NAME/code/
   python main.py --run_mode train --pre_dir ROOT/preprocessed/DATASET_NAME/ --dataset_dir /path/to/your/dataset
   ```

2. (Optional) Modify `config.py` to adjust the training parameters as needed.
## Test

1. To get the performances of the pre-trained models:

   ```shell
   cd ROOT/Modeling/DATASET_NAME/MODEL_NAME/code/
   python main.py --run_mode test_paper --pre_dir ROOT/preprocessed/DATASET_NAME/ --paper_weight_dir ROOT/pretrained/DATASET_NAME/ --dataset_dir /path/to/your/dataset
   ```

2. To evaluate a model you have trained:

   ```shell
   cd ROOT/Modeling/DATASET_NAME/MODEL_NAME/code/
   python main.py --run_mode test --pre_dir ROOT/preprocessed/DATASET_NAME/ --dataset_dir /path/to/your/dataset
   ```

3. (Optional) To visualize detection results, set `disp_test_result=True` in `code/options/config.py`.
## Preprocessing

Preprocessing the data involves several steps:

1. **Convert ground-truth lanes**: convert ground-truth lanes to pickle format (VIL-100 specific).
2. **2D point representation**: represent each lane in the training set as 2D points sampled uniformly in the vertical direction.
3. **Lane matrix construction**: construct a lane matrix, perform SVD, and transform each lane into its coefficient vector.
4. **Video-based datalists**: create datalists for the training and test sets.
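The SVD-based lane representation described above can be sketched roughly as follows. This is an illustrative toy example, not the repository's actual preprocessing code; the dimensions (`H`, `K`) and the synthetic lanes are assumptions.

```python
import numpy as np

# Hypothetical sketch of the SVD-based lane representation
# (dimensions and toy data are illustrative, not the repo's values).
rng = np.random.default_rng(0)
H, N, K = 72, 500, 6   # vertical sample points, training lanes, coefficients kept

# Step 2: represent each lane as x-coordinates sampled uniformly in y.
y = np.linspace(0.0, 1.0, H)
lanes = np.stack([a + b * y + c * y**2
                  for a, b, c in rng.uniform(-1.0, 1.0, size=(N, 3))])  # (N, H)

# Step 3: build the lane matrix, factorize with SVD, keep the top-K basis.
U, S, Vt = np.linalg.svd(lanes, full_matrices=False)
basis = Vt[:K]                      # (K, H) right-singular vectors

# Each lane is compressed to a K-dim coefficient vector and reconstructed back.
lane = lanes[0]
coeff = basis @ lane                # (K,) coefficient vector
recon = basis.T @ coeff             # (H,) reconstructed lane
print("max reconstruction error:", np.abs(recon - lane).max())
```

Because the toy lanes lie in a low-rank subspace, a handful of coefficients reconstructs each lane almost exactly, which is the motivation for regressing compact coefficient vectors instead of raw point sets.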
## Citation

```bibtex
@InProceedings{Jin2024omr,
  title={OMR: Occlusion-Aware Memory-Based Refinement for Video Lane Detection},
  author={Jin, Dongkwon and Kim, Chang-Su},
  booktitle={ECCV},
  year={2024}
}
```