A world generated by Terrain Diffusion. Reproduce with seed=1.
A practical, learned successor to Perlin noise for infinite, seed-consistent, real-time terrain generation.
Terrain Diffusion provides:
- InfiniteDiffusion, an algorithm for unbounded diffusion sampling with constant-time random access, built on infinite-tensor (see the sketch after this list)
- A hierarchical stack of models for generating planetary terrain
- Real-time streaming of terrain and climate data
- API for a pretty cool Minecraft mod
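To give a feel for what "constant-time random access" means here, the toy sketch below derives each chunk's randomness deterministically from the seed and the chunk coordinates, so any region can be materialized without generating its neighbors. The class and method names are made up for illustration; this is not the actual InfiniteDiffusion algorithm or the infinite-tensor API.

```python
import numpy as np

class LazyInfiniteGrid:
    """Toy infinite 2D grid: values are materialized in fixed-size chunks,
    and each chunk's randomness depends only on (seed, chunk coordinates),
    so any region can be read in O(1) chunks regardless of its position."""

    def __init__(self, seed: int, chunk_size: int = 64):
        self.seed = seed
        self.chunk_size = chunk_size
        self.cache = {}  # (cx, cy) -> (chunk_size, chunk_size) array

    def _chunk(self, cx: int, cy: int) -> np.ndarray:
        if (cx, cy) not in self.cache:
            # Deterministic per-chunk RNG: the same seed and coordinates
            # always reproduce the same values (seed consistency).
            rng = np.random.default_rng([self.seed, cx & 0xFFFFFFFF, cy & 0xFFFFFFFF])
            self.cache[(cx, cy)] = rng.standard_normal((self.chunk_size, self.chunk_size))
        return self.cache[(cx, cy)]

    def read(self, x: int, y: int, h: int, w: int) -> np.ndarray:
        """Read an arbitrary (h, w) window anchored at absolute (x, y)."""
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                cx, cy = (x + i) // self.chunk_size, (y + j) // self.chunk_size
                out[i, j] = self._chunk(cx, cy)[(x + i) % self.chunk_size,
                                                (y + j) % self.chunk_size]
        return out

grid = LazyInfiniteGrid(seed=1)
window = grid.read(10_000_000, -5_000_000, 32, 32)  # far from the origin, still fast
```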
Infinite Tensor
Python library for managing lazily materialized tensors with unbounded spatial extent.
https://github.com/xandergos/infinite-tensor
Minecraft Mod
Fabric mod that replaces Minecraft's world generator.
https://github.com/xandergos/terrain-diffusion-mc
git clone https://github.com/xandergos/terrain-diffusion
cd terrain-diffusion
pip install -r requirements.txt
If you have an NVIDIA GPU, it is strongly recommended to install PyTorch with CUDA support. Terrain Diffusion is quite fast and can also run on a CPU, but it will be much slower. Mac is CPU-only.
- Install the latest NVIDIA driver
- Install PyTorch with CUDA:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
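To confirm that PyTorch can actually see your GPU after installing, a quick sanity check:

```python
import torch

print(torch.__version__)                  # CUDA wheels report something like 2.x.x+cu121
print(torch.cuda.is_available())          # True means generation will run on the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected NVIDIA card
```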
A collection of models is available here.
Running the explore command below opens a two-panel GUI. The left panel shows the coarse map; click any pixel on it to generate a high-resolution shaded relief map on the right.
You can also view the temperature of the high-resolution map with Temperature (Lapse-rate adjusted).
python -m terrain_diffusion explore xandergos/terrain-diffusion-90m
If you are running the Minecraft mod, you need to run this API in the background.
python -m terrain_diffusion mc-api xandergos/terrain-diffusion-90m
This runs a generalized API that can be used to query for elevation and climate data. See API_README.md for details.
python -m terrain_diffusion api xandergos/terrain-diffusion-90m
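For example, once the server is running you can query it over HTTP. The host, port, route, and parameter names below are placeholders; see API_README.md for the actual endpoints and response schema.

```python
import requests

# Placeholder route and parameters for illustration only;
# the real endpoints are documented in API_README.md.
resp = requests.get(
    "http://localhost:8000/elevation",
    params={"x": 1024, "z": -2048, "size": 256},
)
resp.raise_for_status()
print(resp.json())
```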
See TRAINING.md for a step-by-step guide. This is, of course, pretty lengthy.
There are two ways to modify world generation without training from scratch.
The code for generating the base map used for everything is at terrain_diffusion/inference/synthetic_map.py. It is basically just a bunch of Perlin noise with some transformations so that it has the same statistics as real-world data and the climate is at least somewhat reasonable. You can modify the file directly to change how the world is generated. While testing, it's recommended to use --hdf5-file TEMP; otherwise you have to delete the HDF5 file manually every time you make a change.
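As a rough illustration of that idea (a self-contained sketch, not the actual synthetic_map.py code), a base map can be built from cheap fractal noise and then quantile-mapped so its value distribution matches real-world elevation statistics:

```python
import numpy as np

def fractal_noise(size, octaves=6, seed=0):
    """Cheap stand-in for Perlin noise: a sum of upsampled random grids."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    for o in range(octaves):
        cells = 2 ** (o + 2)                      # grid resolution of this octave
        grid = rng.standard_normal((cells, cells))
        up = np.kron(grid, np.ones((size // cells, size // cells)))
        out += up[:size, :size] * 0.5 ** o        # higher octaves contribute less
    return out

def match_statistics(noise, target_samples):
    """Quantile-map the noise so its value distribution matches samples
    drawn from real-world data (e.g. an ETOPO elevation histogram)."""
    ranks = noise.ravel().argsort().argsort() / (noise.size - 1)
    matched = np.quantile(np.asarray(target_samples), ranks)
    return matched.reshape(noise.shape)

# Toy "real world" target: mostly deep ocean floor plus some land.
rng = np.random.default_rng(0)
target = np.concatenate([rng.normal(-4000, 1500, 7000), rng.normal(500, 800, 3000)])
base_map = match_statistics(fractal_noise(256, seed=1), target)
```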
The coarse model is tiny, so you can feasibly play around with the model parameters or the dataset to make new kinds of worlds. For example, you may over-sample crops that have harsher gradients. The dataset is in terrain_diffusion/training/datasets/coarse_dataset.py. You can also modify the config to create more or less powerful coarse models.
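For instance, one way to over-sample harsher crops (a sketch with toy tensors, not the actual coarse_dataset.py code) is to weight each crop by its mean gradient magnitude and hand those weights to a WeightedRandomSampler:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-in dataset: N elevation crops of shape (1, 64, 64).
crops = torch.randn(1000, 1, 64, 64)
dataset = TensorDataset(crops)

# Ruggedness = mean absolute finite-difference gradient of each crop.
dy = (crops[..., 1:, :] - crops[..., :-1, :]).abs().mean(dim=(1, 2, 3))
dx = (crops[..., :, 1:] - crops[..., :, :-1]).abs().mean(dim=(1, 2, 3))
weights = dy + dx                                   # rugged crops get larger weights

# Crops are drawn in proportion to their weights, so harsh terrain is over-sampled.
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)
```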
- Download ETOPO
Download the "30 Arc-Second Resolution GeoTIFF" here and place it in data/global.
- Download WorldClim data
Download bio 30s here. Extract all into data/global.
- Train with:
accelerate launch -m terrain_diffusion train --config ./configs/diffusion_coarse/diffusion_coarse.cfg
- Save the model:
python -m terrain_diffusion.training.save_model -c checkpoints/diffusion_coarse/latest_checkpoint -s 0.05
Move the output folder (probably checkpoints/diffusion_coarse/latest_checkpoint/saved_model) to checkpoints/models/diffusion_coarse.
You can also do some shenanigans with the coarse model's output directly. For example, I incorporated a coarse_pooling argument to apply pooling to the outputs, essentially compressing horizontal space. I found this worked really well for making terrain more intense without breaking realism (too much). It can be made even more extreme with max pooling on the elevation map and min pooling on p5, though this becomes more unrealistic. This is an interesting direction to explore with limited compute. See WorldPipeline._build_coarse_stage.
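Roughly, the trick looks like this (an illustrative sketch, not the actual WorldPipeline._build_coarse_stage code): pool the coarse outputs spatially, with max pooling on elevation and min pooling (negated max pooling) on p5:

```python
import torch
import torch.nn.functional as F

def pool_coarse(elevation: torch.Tensor, p5: torch.Tensor, factor: int = 2):
    """Compress horizontal space by `factor`, exaggerating relief:
    max-pool elevation, min-pool p5 (min pooling == -max_pool(-x))."""
    elev_pooled = F.max_pool2d(elevation, kernel_size=factor)
    p5_pooled = -F.max_pool2d(-p5, kernel_size=factor)
    return elev_pooled, p5_pooled

# Toy coarse outputs shaped (batch, channels, height, width).
elev = torch.randn(1, 1, 64, 64)
p5 = torch.randn(1, 1, 64, 64)
elev_small, p5_small = pool_coarse(elev, p5, factor=2)   # -> (1, 1, 32, 32)
```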
@misc{2512.08309,
  Author = {Alexander Goslin},
  Title = {Terrain Diffusion: A Diffusion-Based Successor to Perlin Noise in Infinite, Real-Time Terrain Generation},
  Year = {2025},
  Eprint = {arXiv:2512.08309},
}