This is an unofficial PyTorch implementation of the paper Image-to-Image Translation with Conditional Adversarial Networks (pix2pix).
If you find this code useful, please star the repository.
- Clone this repository:

  ```bash
  git clone "https://github.com/FarnoushRJ/MLProject_Pix2Pix.git"
  ```

- Install the requirements.
Other Requirements
- Pillow 7.0.0
- numpy 1.18.4
- matplotlib 3.2.1
- barbar 0.2.1
- torch 1.5.0
- torchvision 0.6.0
- The Facades and Maps datasets can be downloaded from this link.
Data Directory Structure

```
|__ DATASET_ROOT
    |__ train
    |__ test
    |__ val
```
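For orientation, below is a minimal sketch of how a paired dataset under this layout could be loaded. It assumes, as in the original pix2pix data, that each file in `train/`, `val/`, and `test/` is a single image with input and target concatenated side by side; the class and transform choices here are illustrative, not the ones defined in this repository.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T


class PairedImageDataset(Dataset):
    """Loads side-by-side (input | target) images from one split directory."""

    def __init__(self, root, split="train"):
        split_dir = os.path.join(root, split)
        self.paths = sorted(
            os.path.join(split_dir, f) for f in os.listdir(split_dir)
        )
        self.to_tensor = T.Compose([
            T.ToTensor(),
            T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # scale to [-1, 1]
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        pair = Image.open(self.paths[idx]).convert("RGB")
        w, h = pair.size
        # Left half is treated as the input (A), right half as the target (B).
        input_img = pair.crop((0, 0, w // 2, h))
        target_img = pair.crop((w // 2, 0, w, h))
        return self.to_tensor(input_img), self.to_tensor(target_img)
```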
```bash
cd train/
python train.py --args
```
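As described in the paper, training optimizes a conditional GAN loss plus a weighted L1 reconstruction term. The following is a minimal sketch of one training step under that objective; the function and variable names are illustrative rather than those used in `train.py`, and λ = 100 follows the value suggested in the paper.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # discriminator D is assumed to output raw logits
l1 = nn.L1Loss()
lambda_l1 = 100  # weight of the L1 term, as suggested in the paper


def train_step(G, D, opt_G, opt_D, real_A, real_B):
    """One optimization step of the conditional GAN + L1 objective."""
    fake_B = G(real_A)

    # --- Discriminator: real (A, B) pairs -> 1, fake (A, G(A)) pairs -> 0 ---
    opt_D.zero_grad()
    pred_real = D(torch.cat([real_A, real_B], dim=1))
    pred_fake = D(torch.cat([real_A, fake_B.detach()], dim=1))
    loss_D = 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                    bce(pred_fake, torch.zeros_like(pred_fake)))
    loss_D.backward()
    opt_D.step()

    # --- Generator: fool D and stay close to the target in L1 ---
    opt_G.zero_grad()
    pred_fake = D(torch.cat([real_A, fake_B], dim=1))
    loss_G = bce(pred_fake, torch.ones_like(pred_fake)) + lambda_l1 * l1(fake_B, real_B)
    loss_G.backward()
    opt_G.step()

    return loss_D.item(), loss_G.item()
```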
The model is trained for 200 epochs on both the Facades and Maps datasets.
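The qualitative comparisons listed below (input, generated target, ground-truth target) can be produced with a short evaluation loop along these lines. The checkpoint path and output directory are placeholder names, and the loop reuses the `PairedImageDataset` sketch from the data section above rather than this repository's actual dataset class.

```python
import os

import torch
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical checkpoint saved with torch.save(G, ...); the real training
# script may store a state_dict instead.
G = torch.load("checkpoints/generator.pth", map_location=device)
G.eval()

val_set = PairedImageDataset("DATASET_ROOT", split="val")
os.makedirs("results", exist_ok=True)

with torch.no_grad():
    for i, (real_A, real_B) in enumerate(val_set):
        fake_B = G(real_A.unsqueeze(0).to(device)).squeeze(0).cpu()
        # Concatenate input | fake target | real target and map [-1, 1] -> [0, 1].
        panel = torch.cat([real_A, fake_B, real_B], dim=2) * 0.5 + 0.5
        save_image(panel, os.path.join("results", f"{i:04d}.png"))
```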
Input, Fake Target, Real Target
Input, Fake Target, Real Target (AtoB)
Input, Fake Target, Real Target (BtoA)
- Models
- Modified model for deblurring, denoising, and inpainting
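For such restoration variants, the main change on the data side is how input/target pairs are formed: the target is a clean image and the input is a degraded copy of it. A minimal sketch for the denoising case is shown below; the helper name and noise level are illustrative assumptions, not code from this repository.

```python
import torch


def make_denoising_pair(clean, noise_std=0.1):
    """Build an (input, target) pair for a denoising pix2pix variant.

    `clean` is a tensor in [-1, 1]; the input is a noisy copy of it and the
    target is the clean image itself. The noise level is an illustrative choice.
    """
    noisy = clean + noise_std * torch.randn_like(clean)
    return noisy.clamp(-1.0, 1.0), clean
```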