The first curated list of Text-to-3D and Diffusion-to-3D works. Heavily inspired by awesome-NeRF.
- 09.02.2024: Level One Categorization
- 11.11.2023: Added Tutorial Videos
- 05.08.2023: Provided citations in BibTeX
- 06.07.2023: Created initial list
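Each entry below carries a `citation` link in BibTeX form. As an illustration of the format, a representative entry for DreamFusion (taken from its arXiv listing; the list's own BibTeX keys and field capitalization may differ) looks like:

```bibtex
@article{poole2022dreamfusion,
  title   = {DreamFusion: Text-to-3D using 2D Diffusion},
  author  = {Poole, Ben and Jain, Ajay and Barron, Jonathan T. and Mildenhall, Ben},
  journal = {arXiv preprint arXiv:2209.14988},
  year    = {2022}
}
```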
X-to-3D
- Zero-Shot Text-Guided Object Generation with Dream Fields, Ajay Jain et al., CVPR 2022 | citation
- CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation, Aditya Sanghi et al., Arxiv 2021 | citation
- PureCLIPNERF: Understanding Pure CLIP Guidance for Voxel Grid NeRF Models, Han-Hung Lee et al., Arxiv 2022 | citation
- SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, Yen-Chi Cheng et al., CVPR 2023 | citation
- DreamFusion: Text-to-3D using 2D Diffusion, Ben Poole et al., ICLR 2023 | citation
- Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models, Jiale Xu et al., Arxiv 2022 | citation
- NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views, Dejia Xu et al., Arxiv 2022 | citation
- Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Alex Nichol et al., Arxiv 2022 | citation
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures, Gal Metzer et al., Arxiv 2023 | citation
- Magic3D: High-Resolution Text-to-3D Content Creation, Chen-Hsuan Lin et al., CVPR 2023 | citation
- RealFusion: 360° Reconstruction of Any Object from a Single Image, Luke Melas-Kyriazi et al., CVPR 2023 | citation
- Monocular Depth Estimation using Diffusion Models, Saurabh Saxena et al., Arxiv 2023 | citation
- SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction, Zhizhuo Zhou et al., CVPR 2023 | citation
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion, Jiatao Gu et al., ICML 2023 | citation
- Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, Haochen Wang et al., CVPR 2023 | citation
- High-fidelity 3D Face Generation from Natural Language Descriptions, Menghua Wu et al., CVPR 2023 | citation
- TEXTure: Text-Guided Texturing of 3D Shapes, Elad Richardson et al., SIGGRAPH 2023 | citation
- NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors, Congyue Deng et al., CVPR 2023 | citation
- DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models, Jamie Wynn et al., CVPR 2023 | citation
- 3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, Yuhan Li et al., CVPR 2023 | citation
- DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model, Gwanghyun Kim et al., CVPR 2023 | citation
- Novel View Synthesis with Diffusion Models, Daniel Watson et al., ICLR 2023 | citation
- ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, Zhengyi Wang et al., Arxiv 2023 | citation
- 3D-aware Image Generation using 2D Diffusion Models, Jianfeng Xiang et al., Arxiv 2023 | citation
- Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, Junshu Tang et al., ICCV 2023 | citation
- GECCO: Geometrically-Conditioned Point Diffusion Models, Michał J. Tyszkiewicz et al., ICCV 2023 | citation
- Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, Alleviate Janus Problem and Beyond, Mohammadreza Armandpour et al., Arxiv 2023 | citation
- Generative Novel View Synthesis with 3D-Aware Diffusion Models, Eric R. Chan et al., Arxiv 2023 | citation
- Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields, Jingbo Zhang et al., Arxiv 2023 | citation
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors, Guocheng Qian et al., Arxiv 2023 | citation
- DreamBooth3D: Subject-Driven Text-to-3D Generation, Amit Raj et al., ICCV 2023 | citation
- Zero-1-to-3: Zero-shot One Image to 3D Object, Ruoshi Liu et al., Arxiv 2023 | citation
- ATT3D: Amortized Text-to-3D Object Synthesis, Jonathan Lorraine et al., ICCV 2023 | citation
- Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation, Zibo Zhao et al., Arxiv 2023 | citation
- Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, Gene Chou et al., Arxiv 2023 | citation
- HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance, Junzhe Zhu et al., Arxiv 2023 | citation
- LERF: Language Embedded Radiance Fields, Justin Kerr et al., Arxiv 2023 | citation
- 3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, Junyoung Seo et al., Arxiv 2023 | citation
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Shitao Tang et al., Arxiv 2023 | citation
- One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization, Minghua Liu et al., Arxiv 2023 | citation
- TextMesh: Generation of Realistic 3D Meshes From Text Prompts, Christina Tsalicoglou et al., Arxiv 2023 | citation
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, Xingqian Xu et al., Arxiv 2023 | citation
- SceneScape: Text-Driven Consistent Scene Generation, Rafail Fridman et al., Arxiv 2023 | citation
- CLIP-Mesh: Generating textured meshes from text using pretrained image-text models, Nasir Khalid et al., Arxiv 2023 | citation
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models, Lukas Höllein et al., Arxiv 2023 | citation
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction, Hansheng Chen et al., Arxiv 2023 | citation
- PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion, Gwanghyun Kim et al., ICCV 2023 | citation
- Shap-E: Generating Conditional 3D Implicit Functions, Heewoo Jun et al., Arxiv 2023 | citation
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation, Aditya Sanghi et al., Arxiv 2023 | citation
- 3D VADER - AutoDecoding Latent 3D Diffusion Models, Evangelos Ntavelis et al., Arxiv 2023 | citation
- DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views, Paul Yoo et al., Arxiv 2023 | citation
- Cap3D: Scalable 3D Captioning with Pretrained Models, Tiange Luo et al., Arxiv 2023 | citation
- InstructP2P: Learning to Edit 3D Point Clouds with Text Instructions, Jiale Xu et al., Arxiv 2023 | citation
- 3D-LLM: Injecting the 3D World into Large Language Models, Yining Hong et al., Arxiv 2023 | citation
- Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation, Chaohui Yu et al., Arxiv 2023 | citation
- RGB-D-Fusion: Image Conditioned Depth Diffusion of Humanoid Subjects, Sascha Kirch et al., Arxiv 2023 | citation
- IT3D: Improved Text-to-3D Generation with Explicit View Synthesis, Yiwen Chen et al., Arxiv 2023 | citation
- MVDream: Multi-view Diffusion for 3D Generation, Yichun Shi et al., Arxiv 2023 | citation
- PointLLM: Empowering Large Language Models to Understand Point Clouds, Runsen Xu et al., Arxiv 2023 | citation
- SyncDreamer: Generating Multiview-consistent Images from a Single-view Image, Yuan Liu et al., Arxiv 2023 | citation
- Large-Vocabulary 3D Diffusion Model with Transformer, Ziang Cao et al., Arxiv 2023 | citation
- Progressive Text-to-3D Generation for Automatic 3D Prototyping, Han Yi et al., Arxiv 2023 | citation
- DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation, Jiaxiang Tang et al., Arxiv 2023 | citation
- SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D, Weiyu Li et al., Arxiv 2023 | citation
- Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors, Yukang Lin et al., Arxiv 2023 | citation
- GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors, Taoran Yi et al., Arxiv 2023 | citation
- Text-to-3D using Gaussian Splatting, Zilong Chen et al., Arxiv 2023 | citation
- Zero123++: A Single Image to Consistent Multi-view Diffusion Base Model, Ruoxi Shi et al., Arxiv 2023 | citation
- DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior, Jingxiang Sun et al., Arxiv 2023 | citation
- HyperFields: Towards Zero-Shot Generation of NeRFs from Text, Sudarshan Babu et al., Arxiv 2023 | citation
- Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping, Zijie Pan et al., Arxiv 2023 | citation
- Text-to-3D with Classifier Score Distillation, Xin Yu et al., Arxiv 2023 | citation
- Noise-Free Score Distillation, Oren Katzir et al., Arxiv 2023 | citation
- LRM: Large Reconstruction Model for Single Image to 3D, Yicong Hong et al., Arxiv 2023 | citation
- One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion, Minghua Liu et al., Arxiv 2023 | citation
- LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching, Yixun Liang et al., Arxiv 2023 | citation
- MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture, Lincong Feng et al., Arxiv 2023 | citation
- Adversarial Diffusion Distillation, Axel Sauer et al., Arxiv 2023 | citation
- MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers, Yawar Siddiqui et al., Arxiv 2023 | citation
- DreamPropeller: Supercharge Text-to-3D Generation with Parallel Sampling, Linqi Zhou et al., Arxiv 2023 | citation
- X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Generation, Yiwei Ma et al., Arxiv 2023 | citation
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D, Pengsheng Guo et al., Arxiv 2023 | citation
- CAD: Photorealistic 3D Generation via Adversarial Distillation, Ziyu Wan et al., Arxiv 2023 | citation
- RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D, Lingteng Qiu et al., Arxiv 2023 | citation
- Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion, Kira Prabhu et al., Arxiv 2023 | citation
- Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors, Lihe Ding et al., Arxiv 2023 | citation
- Text2Immersion: Generative Immersive Scene with 3D Gaussians, Hao Ouyang et al., Arxiv 2023 | citation
- Stable Score Distillation for High-Quality 3D Generation, Boshi Tang et al., Arxiv 2023 | citation
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks, Christian Simon et al., Arxiv 2023 | citation
- HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D, Sangmin Woo et al., Arxiv 2023 | citation
- SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity, Peihao Wang et al., Arxiv 2024 | citation
- AGG: Amortized Generative 3D Gaussians for Single Image to 3D, Dejia Xu et al., Arxiv 2024 | citation
- Topology-Aware Latent Diffusion for 3D Shape Generation, Jiangbei Hu et al., Arxiv 2024 | citation
- AToM: Amortized Text-to-Mesh using 2D Diffusion, Guocheng Qian et al., Arxiv 2024 | citation
- LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation, Jiaxiang Tang et al., Arxiv 2024 | citation
- IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation, Luke Melas-Kyriazi et al., Arxiv 2024 | citation
- L3GO: Language Agents with Chain-of-3D-Thoughts for Generating Unconventional Objects, Yutaro Yamada et al., Arxiv 2024 | citation
- MVD2: Efficient Multiview 3D Reconstruction for Multiview Diffusion, Xin-Yang Zheng et al., Arxiv 2024 | citation
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability, Xuelin Qian et al., Arxiv 2024 | citation
- SceneWiz3D: Towards Text-guided 3D Scene Composition, Qihang Zhang et al., CVPR 2024 | citation
- TripoSR: Fast 3D Object Reconstruction from a Single Image, Dmitry Tochilkin et al., Arxiv 2024 | citation
- V3D: Video Diffusion Models are Effective 3D Generators, Zilong Chen et al., Arxiv 2024 | citation
- CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model, Zhengyi Wang et al., Arxiv 2024 | citation
- Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation, Fangfu Liu et al., Arxiv 2024 | citation
- Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding, Pengkun Liu et al., Arxiv 2024 | citation
- SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion, Vikram Voleti et al., Arxiv 2024 | citation
- Generic 3D Diffusion Adapter Using Controlled Multi-View Editing, Hansheng Chen et al., Arxiv 2024 | citation
- GVGEN: Text-to-3D Generation with Volumetric Representation, Xianglong He et al., Arxiv 2024 | citation
- BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis, Lutao Jiang et al., Arxiv 2024 | citation
3D Editing, Decomposition & Stylization
- CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields, Can Wang et al., Arxiv 2021 | citation
- CG-NeRF: Conditional Generative Neural Radiance Fields, Kyungmin Jo et al., Arxiv 2021 | citation
- TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition, Yongwei Chen et al., NeurIPS 2022 | citation
- 3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models, Gang Li et al., Arxiv 2022 | citation
- NeRF-Art: Text-Driven Neural Radiance Fields Stylization, Can Wang et al., Arxiv 2022 | citation
- Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions, Ayaan Haque et al., Arxiv 2023 | citation
- Local 3D Editing via 3D Distillation of CLIP Knowledge, Junha Hyung et al., Arxiv 2023 | citation
- RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models, Xingchen Zhou et al., Arxiv 2023 | citation
- Text2Tex: Text-driven Texture Synthesis via Diffusion Models, Dave Zhenyu Chen et al., Arxiv 2023 | citation
- Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor, Ruizhi Shao et al., Arxiv 2023 | citation
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, Rui Chen et al., Arxiv 2023 | citation
- Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes, Dana Cohen-Bar et al., Arxiv 2023 | citation
- MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR, Xudong Xu et al., Arxiv 2023 | citation
- SATR: Zero-Shot Semantic Segmentation of 3D Shapes, Ahmed Abdelreheem et al., ICCV 2023 | citation
- Texture Generation on 3D Meshes with Point-UV Diffusion, Xin Yu et al., ICCV 2023 | citation
- Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts, Xinhua Cheng et al., Arxiv 2023 | citation
- 3D-GPT: Procedural 3D Modeling with Large Language Models, Chunyi Sun et al., Arxiv 2023 | citation
- CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models, Ziyang Yuan et al., Arxiv 2023 | citation
- Decorate3D: Text-Driven High-Quality Texture Generation for Mesh Decoration in the Wild, Yanhui Guo et al., NeurIPS 2023 | citation
- HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image, Tong Wu et al., Arxiv 2023 | citation
- InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes, Mohamad Shahbazi et al., Arxiv 2024 | citation
- ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields, Edward Bartrum et al., Arxiv 2024 | citation
- Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation, Minglin Chen et al., Arxiv 2024 | citation
- BoostDream: Efficient Refining for High-Quality Text-to-3D Generation from Multi-View Diffusion, Yonghao Yu et al., Arxiv 2024 | citation
- 2L3: Lifting Imperfect Generated 2D Images into Accurate 3D, Yizheng Chen et al., Arxiv 2024 | citation
- GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting, Xiaoyu Zhou et al., Arxiv 2024 | citation
- Disentangled 3D Scene Generation with Layout Learning, Dave Epstein et al., Arxiv 2024 | citation
- MagicClay: Sculpting Meshes With Generative Neural Fields, Amir Barda et al., Arxiv 2024 | citation
Avatar Generation and Manipulation
- Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion, Tengfei Wang et al., Arxiv 2022 | citation
- DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars, David Svitov et al., Arxiv 2023 | citation
- ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image, Zhenzhen Weng et al., Arxiv 2023 | citation
- AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control, Ruixiang Jiang et al., ICCV 2023 | citation
- Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models, Byungjun Kim et al., ICCV 2023 | citation
- DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance, Longwen Zhang et al., Arxiv 2023 | citation
- HeadSculpt: Crafting 3D Head Avatars with Text, Xiao Han et al., Arxiv 2023 | citation
- DreamHuman: Animatable 3D Avatars from Text, Nikos Kolotouros et al., Arxiv 2023 | citation
- FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields, Sungwon Hwang et al., Arxiv 2023 | citation
- AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose, Huichao Zhang et al., Arxiv 2023 | citation
- TeCH: Text-guided Reconstruction of Lifelike Clothed Humans, Yangyi Huang et al., Arxiv 2023 | citation
- HumanLiff: Layer-wise 3D Human Generation with Diffusion Model, Shoukang Hu et al., Arxiv 2023 | citation
- TADA! Text to Animatable Digital Avatars, Tingting Liao et al., Arxiv 2023 | citation
- One-shot Implicit Animatable Avatars with Model-based Priors, Yangyi Huang et al., ICCV 2023 | citation
- Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model, Sungwon Hwang et al., Arxiv 2023 | citation
- Text-Guided Generation and Editing of Compositional 3D Avatars, Hao Zhang et al., Arxiv 2023 | citation
- HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation, Xin Huang et al., Arxiv 2023 | citation
- HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting, Xian Liu et al., Arxiv 2023 | citation
- Text-Guided 3D Face Synthesis: From Generation to Editing, Yunjie Wu et al., Arxiv 2023 | citation
- SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance, Yuanyou Xu et al., Arxiv 2023 | citation
- GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning, Ye Yuan et al., Arxiv 2023 | citation
- Make-A-Character: High Quality Text-to-3D Character Generation within Minutes, Jianqiang Ren et al., Arxiv 2023 | citation
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data, Yifang Men et al., Arxiv 2024 | citation
- HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting, Zhenglin Zhou et al., Arxiv 2024 | citation
Dynamic Content Generation
- Text-To-4D Dynamic Scene Generation, Uriel Singer et al., Arxiv 2023 | citation
- TextDeformer: Geometry Manipulation using Text Guidance, William Gao et al., Arxiv 2023 | citation
- Consistent4D: Consistent 360 Degree Dynamic Object Generation from Monocular Video, Yanqin Jiang et al., Arxiv 2023 | citation
- 4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling, Sherwin Bahmani et al., Arxiv 2023 | citation
Datasets & Miscellaneous
- Objaverse: A Universe of Annotated 3D Objects, Matt Deitke et al., Arxiv 2022 | citation
- Objaverse-XL: A Universe of 10M+ 3D Objects, Matt Deitke et al., Preprint 2023 | citation
- Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting, Hao Ouyang et al., Arxiv 2023 | citation
- Customize-It-3D: High-Quality 3D Creation from A Single Image Using Subject-Specific Knowledge Prior, Nan Huang et al., Arxiv 2023 | citation
- Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering, Kim Youwang et al., Arxiv 2023 | citation
- SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding, Baoxiong Jia et al., Arxiv 2024 | citation
Frameworks
- threestudio: A unified framework for 3D content generation, Yuan-Chen Guo et al., Github 2023
- Nerfstudio: A Modular Framework for Neural Radiance Field Development, Matthew Tancik et al., SIGGRAPH 2023
- Mirage3D: Open-Source Implementations of 3D Diffusion Models Optimized for GLB Output, Mirageml et al., Github 2023
TODO
- Initial list of the SOTA
- Provide citations in BibTeX
- Sub-categorize based on input conditioning