[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
Wav2Lip UHQ extension for Automatic1111
[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion"
Avatar Generation For Characters and Game Assets Using Deep Fakes
Official code for "Parallel and High-Fidelity Text-to-Lip Generation" (AAAI 2022)
This project is a minigame inspired by the famous Talking Cat game, written in JavaScript, CSS, and HTML. The pet repeats whatever you say to it, making for an interactive and fun experience.
DoyenTalker uses deep learning techniques to generate personalized avatar videos that speak user-provided text in a specified voice. The system utilizes Coqui TTS for text-to-speech generation, along with various face rendering and animation techniques to create a video where the given avatar articulates the speech.
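As a rough illustration of the text-to-speech stage described above, here is a minimal sketch using the Coqui TTS Python API to synthesize a WAV file from user-provided text. The model name and file paths are placeholders, and the actual DoyenTalker pipeline may configure this step differently.

```python
# Minimal sketch of the text-to-speech stage, assuming the Coqui TTS package
# is installed (`pip install TTS`). Model name and paths are illustrative only.
from TTS.api import TTS

# Load a pretrained English single-speaker model.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesize the user-provided text to a WAV file that a face rendering /
# animation stage could then consume as its driving audio.
tts.tts_to_file(
    text="Hello, I am your talking avatar.",
    file_path="speech.wav",
)
```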
AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Database of "Learning to Predict Salient Faces: A Novel Visual-Audio Saliency Model", ECCV 2020
Thin Plate Spline Motion Model (TPSMM) converted to ONNX
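For context on how an ONNX export like this is typically consumed, the sketch below loads a model with onnxruntime, inspects its inputs, and runs it on dummy tensors. The file name "tpsmm.onnx", the float32 dtype, and the input shapes are assumptions; they must be matched to the actual TPSMM export.

```python
# Minimal sketch of running an exported ONNX model with onnxruntime.
# The file name is a placeholder; point it at the actual TPSMM export.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("tpsmm.onnx", providers=["CPUExecutionProvider"])

# Inspect the graph to discover the real input/output names and shapes;
# a motion model like TPSMM usually expects more than one input
# (e.g. a source image plus driving information).
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Feed dummy float32 tensors for every input, replacing dynamic dimensions
# with 1, just to exercise the graph end to end.
feed = {}
for inp in session.get_inputs():
    shape = [d if isinstance(d, int) and d > 0 else 1 for d in inp.shape]
    feed[inp.name] = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, feed)
print([o.shape for o in outputs])
```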