stable-dreamfusion-3d

A Colab-friendly toolkit for generating 3D mesh models, videos, NeRF instances, and multiview images of colourful 3D objects from text and image prompts.

Stable Dreamfusion 3D is adapted from dreamfields-torch and dreamfields.

Video: ngp_200_rgb-.mp4

Example generated from the text prompt "a cyborg organic biological pavilion could breathe with building skin containing algae, in the style of dezeen, trending on artstation, surreal", with the CLIP ViT-L/14 model, trained for 200 epochs with the clip_aug and random_fovy_training modes enabled.

Video: Preview.mp4

Example generated from the text prompt "a beautiful painting of a flower tree, by Chiho Aoshima, Long shot, surreal", with the CLIP ViT-L/14 model, trained for 200 epochs.
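
Both examples use CLIP ViT-L/14 for guidance. For reference, here is a minimal sketch of loading that model with OpenAI's clip package (assuming that is the CLIP dependency in use; the prompt is taken from the caption above):

import clip
import torch

# pick a device; ViT-L/14 fits comfortably on a Colab GPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# list the checkpoint names shipped with the openai/clip package
print(clip.available_models())  # includes 'ViT-L/14'

# load the model together with its matching image preprocessing transform
model, preprocess = clip.load("ViT-L/14", device=device)

# encode a text prompt the way a CLIP guidance loss would
tokens = clip.tokenize(["a beautiful painting of a flower tree"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(tokens)
print(text_features.shape)  # torch.Size([1, 768])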

Main Contributions:

  • Export obj & ply models with vertex colours (see the sketch after this list).
  • Export a 360° video of the final model.
  • Visualize training progress and preview the output video in Colab.
  • Improve generation quality:
    • Allow the use of different CLIP models.
    • Improve the preprocessing of renderings before they are fed into CLIP.
    • Apply random view angles during training.
  • Add more useful augmentations.
  • Organize the Colab notebook.
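
The repository's own export code is not shown here; as referenced above, a minimal sketch of writing an obj / ply mesh with per-vertex colours using trimesh (all inputs below are illustrative):

import numpy as np
import trimesh

# illustrative inputs: vertices/faces from marching cubes, colours queried from the NeRF
vertices = np.random.rand(100, 3)                                   # (V, 3) positions
faces = np.random.randint(0, 100, size=(50, 3))                     # (F, 3) triangle indices
colors = np.random.randint(0, 256, size=(100, 4), dtype=np.uint8)   # (V, 4) RGBA

mesh = trimesh.Trimesh(vertices=vertices, faces=faces, vertex_colors=colors)
mesh.export("model.ply")  # ply stores vertex colours natively
mesh.export("model.obj")  # obj vertex colours are written via a common non-standard extension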

Planned updates:

  • Use different CLIP models simultaneously.
  • Convert an existing mesh to a NeRF instance, then modify it with text / image prompts.
  • Reduce GPU memory usage during training.

stable-dreamfusion-3d with PyTorch (WIP)

A PyTorch implementation of Dream Fields, as described in Zero-Shot Text-Guided Object Generation with Dream Fields.

An example of a neural field generated from the prompt "cthulhu", viewed in real time:

Video: cthulhu.mp4

Install

The code framework is based on torch-ngp.

git clone https://github.com/svorwerk-dentsu/stable-dreamfusion-3d.git
cd stable-dreamfusion-3d

Install with pip

pip install -r requirements.txt

Install the customized version of PyMarchingCubes

bash scripts/install_PyMarchingCubes.sh

Build extension

# install all extension modules
bash scripts/install_ext.sh
# if you want to install manually, here is an example:
cd raymarching
python setup.py build_ext --inplace # build the extension only, without installing (it can then only be used from the parent directory)
pip install . # install to the python path (you still need the raymarching/ folder, since this only installs the built extension)
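
After building, a quick sanity check (assuming the compiled module is importable as raymarching, matching the folder above):

import torch
assert torch.cuda.is_available(), "the extensions require a CUDA-capable GPU"

# this import fails if the build step above did not succeed
import raymarching
print("raymarching extension loaded")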

Usage

The first run will take some time, as the CUDA extensions are compiled.

# text-guided generation
python main_nerf.py --text "cthulhu" --workspace trial --cuda_ray --fp16

# use the GUI
python main_nerf.py --text "cthulhu" --workspace trial --cuda_ray --fp16 --gui

# [experimental] image-guided generation (also uses the CLIP loss)
python main_nerf.py --image /path/to/image --workspace trial --cuda_ray --fp16

Check the scripts directory for more examples.
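
The Colab notebook previews the generated 360° video inline; a minimal sketch of how such an inline preview can be done with IPython.display (the path is illustrative):

from base64 import b64encode
from IPython.display import HTML

def show_video(path, width=480):
    """Embed an mp4 inline in a Colab / Jupyter output cell."""
    with open(path, "rb") as f:
        data = b64encode(f.read()).decode()
    return HTML(f'<video controls loop width={width} '
                f'src="data:video/mp4;base64,{data}"></video>')

show_video("trial/results/ngp_200_rgb.mp4")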

Differences from the original implementation

  • Mip-NeRF is not implemented; currently only the original NeRF is supported.
  • Poses are sampled with an elevation range of [-30, 30] degrees, instead of being fixed at 30 degrees (see the sketch after this list).
  • The origin loss is used.
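
The actual sampler lives in the training code; as referenced above, a minimal sketch of drawing camera positions with elevation uniform in [-30, 30] degrees (the function name and y-up convention are illustrative):

import numpy as np

def sample_camera_position(radius=3.0):
    # elevation uniform in [-30, 30] degrees, azimuth uniform over the full circle
    elevation = np.deg2rad(np.random.uniform(-30.0, 30.0))
    azimuth = np.deg2rad(np.random.uniform(0.0, 360.0))
    # spherical -> cartesian, y-up convention
    x = radius * np.cos(elevation) * np.sin(azimuth)
    y = radius * np.sin(elevation)
    z = radius * np.cos(elevation) * np.cos(azimuth)
    return np.array([x, y, z])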
