update readme
SD3004 committed May 4, 2022
1 parent 901f69d commit cedb298
Showing 1 changed file (README.md) with 19 additions and 62 deletions.
# SLCN Challenge 2022 - Docker submission example

Example of a Docker container for the SLCN challenge, organised as part of the MLCN 2022 workshop, a satellite event of MICCAI 2022.

Credits: S. Dahan, LZJ. Williams

This repository provides a reference algorithm Docker container for SLCN 2022 Challenge submissions on the grand-challenge platform.

It should serve as an example and/or a template for your own algorithm container implementation.

Here, a [Surface Vision Transformer](https://arxiv.org/abs/2203.16414) (SiT) model is used as an example for the task of birth age prediction. The code is based on this [GitHub repository](https://github.com/metrics-lab/surface-vision-transformers).

More information about algorithm containers and submission can be found [here](https://grand-challenge.org/blogs/create-an-algorithm/).

## Content:
1. [Prerequisites](#prerequisites)
2. [Requirements for Grand Challenge submission](#requirements)


## 1. Prerequisites <a name="prerequisites"></a>

Submissions are based on Docker containers and the evalutils library (provided by Grand-Challenge).

The test set will not be released to the challenge participants. For this reason, participants must containerise their methods with Docker and submit their Docker container for evaluation on the test set. Your code will not be shared and will only be used internally by the SLCN organisers.

Docker allows for running an algorithm in an isolated environment called a container. In particular, this container will locally replicate your pipeline requirements and execute your inference script.

First, you will need to install [Docker](https://www.docker.com/get-started) locally.

Then, you will need to install evalutils, which you can pip install:

```
pip install evalutils
```

### Design your inference script

The inference will be performed automatically using Docker. More specifically, a command will be executed when your Docker container is run (for example: `python3 run_inference.py`) for each of the tasks.

The command must run the inference on the test set. The test set will be mounted into `/input` and the results must be saved in `/output`. The `/input` folder will contain all the test metric files in the format `[id]_[sess]_{left,right}.shape.gii`. For both tasks, the participant's script must save the prediction results in `/output` as a CSV file with two columns, one for the predictions and one for the target values: for example, `/output/results_birth_age.csv`.

We provide an example script here.
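To make the expected input/output contract concrete, here is a minimal sketch of what such an inference script could look like. `predict_birth_age` is a hypothetical placeholder (a real submission would load each `.shape.gii` surface, e.g. with nibabel, and run the trained SiT model); the file-naming and CSV layout follow the description above.

```python
import csv
from pathlib import Path


def predict_birth_age(surface_path):
    # Hypothetical placeholder: a real submission would load the
    # .shape.gii surface (e.g. with nibabel) and run the trained
    # SiT model here.
    return 40.0


def run_inference(input_dir, output_dir):
    # The test set is mounted read-only under /input as
    # [id]_[sess]_{left,right}.shape.gii files.
    rows = [(predict_birth_age(p), "n/a")
            for p in sorted(Path(input_dir).glob("*.shape.gii"))]
    # Both tasks expect a two-column CSV (prediction, target)
    # saved under /output, e.g. results_birth_age.csv.
    out_csv = Path(output_dir) / "results_birth_age.csv"
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["prediction", "target"])
        writer.writerows(rows)
    return out_csv


if __name__ == "__main__":
    run_inference("/input", "/output")
```

The `"n/a"` target column is also a placeholder, since participants do not have access to the test labels.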
## 2. Requirements for Grand Challenge submissions <a name="requirements"></a>

### Create your Docker container

Docker is commonly used to encapsulate algorithms and their dependencies. In this section, we list the four steps you will have to follow in order to create your Docker image so that it is ready for submission.

Your Docker container (via `process.py`) is supposed to read `.mha` image files.

Firstly, you will need to install Docker. The NVIDIA Container Toolkit is also required to use CUDA within Docker containers. Secondly, you will need to create your own image. Docker can build images by reading the instructions from a Dockerfile; detailed explanations are provided here. Many images are available online and can be used as base images. We recommend pulling from the NVIDIA images for models requiring a GPU (e.g., TensorFlow, PyTorch).

Please look at the SLCN Docker container example on GitHub.

In a nutshell, a Dockerfile allows for:

- Pulling a pre-existing image with an operating system and, if needed, CUDA (`FROM` instruction).
- Installing additional dependencies (`RUN` instructions).
- Transferring local files into your Docker image (`COPY` instructions).
- Executing your algorithm (`CMD` and `ENTRYPOINT` instructions).

Dockerfile example:

```
# Pull from an existing image
FROM nvcr.io/nvidia/pytorch:21.05-py3

# Copy requirements
COPY ./requirements.txt .

# Install Python packages in the Docker image
RUN pip3 install -r requirements.txt

# Copy all files (here "./src/run_inference.py")
COPY ./ ./

# Execute the inference command
CMD ["./src/run_inference.py"]
ENTRYPOINT ["python3"]
```
Thirdly, you can build your Docker image:

```
docker build -f Dockerfile -t [your image name] .
```
Fourthly, you will upload your image to Docker Hub. Instructions can be found here:

```
docker push [your image name]
```
### Docker commands

Your container will be run with the following command:

```
docker run --rm -v [input directory]:/input/:ro -v [output directory]:/output -it [your image name]
```

`[input directory]` will be the absolute path of the directory containing the test set, `[output directory]` will be the absolute path of the prediction directory, and `[your image name]` is the name of your Docker image.

### Test your Docker container

To test your Docker container, you will have to run it and perform inference using the validation set. We recommend testing your Docker container prior to submission.

Firstly, create a folder containing the validation set.

Then run:

```
docker run --rm -v [validation set folder]:/input/:ro -v [output directory]:/output -it [your image name]
```

Important: images will be read successively and predictions will be made one by one, i.e. there will be one `birth-age.json` file per prediction.
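Given that one `birth-age.json` file is produced per prediction, the per-image output step can be sketched as follows. This is a minimal illustration only: `write_prediction` is a hypothetical helper name, and the bare-number JSON payload is an assumption about the format the platform expects.

```python
import json
from pathlib import Path


def write_prediction(output_dir, predicted_age):
    # One container run handles one image, so exactly one
    # birth-age.json is written per prediction. The bare-number
    # payload is an assumption about the expected format.
    out_path = Path(output_dir) / "birth-age.json"
    with open(out_path, "w") as f:
        json.dump(float(predicted_age), f)
    return out_path
```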
