🪄SCEPTER is an open-source code repository dedicated to generative training, fine-tuning, and inference, covering a suite of downstream tasks such as image generation, transfer, and editing. SCEPTER integrates popular community-driven implementations as well as proprietary methods from Tongyi Lab of Alibaba Group, offering a comprehensive toolkit for researchers and practitioners in the field of AIGC. This versatile library is designed to facilitate innovation and accelerate development in the rapidly evolving domain of generative models.
SCEPTER offers 3 core components:
- Generative training and inference framework
- Easy implementation of popular approaches
- Interactive user interface: SCEPTER Studio & Comfy UI
- [🔥🔥🔥2024.10]: We are pleased to announce the release of the code for ACE, supporting customized training, ComfyUI workflows, and a Gradio-based ChatBot interface. Detailed documentation can be found in the ACE repo.
- [2024.10]: Support for inference and tuning with FLUX, as well as for building ComfyUI workflows using this framework.
- [2024.09]: We introduce ACE, an All-round Creator and Editor adept at executing a diverse array of image editing tasks tailored to your specifications. Built upon the cutting-edge Diffusion Transformer architecture, ACE has been extensively trained on a comprehensive dataset to seamlessly interpret and execute any natural language instruction. For further information, please consult the project page.
- [2024.07]: Support for inference and training of open-source generative models based on the DiT architecture, such as SD3 and PixArt.
- [2024.05]: Introducing SCEPTER v1, supporting customized image editing tasks! Simply provide 10 image pairs and SCEPTER will tune an edit tuner for your own image-to-image tasks, such as Clay Style, De-Text, and Segmentation.
- [2024.04]: New StyleBooth demo on SCEPTER Studio for Text-Based Style Editing.
- [2024.03]: We optimized the training UI and checkpoint management. The new LAR-Gen model has been added to SCEPTER Studio, supporting zoom-out, virtual try-on, and inpainting.
- [2024.02]: We release new SCEdit controllable image synthesis models for SD v2.1 and SD XL. Multiple strategies are applied to accelerate inference time in SCEPTER Studio.
- [2024.01]: We release SCEPTER Studio, an integrated toolkit for data management, model training and inference based on Gradio.
- [2024.01]: SCEdit supports controllable image synthesis for training and inference.
- [2023.12]: We propose SCEdit, an efficient and controllable generation framework.
- [2023.12]: We release 🪄SCEPTER library.
ACE is a unified foundational model framework that supports a wide range of visual generation tasks. By defining the Condition Unit (CU) to unify multi-modal inputs across different tasks and incorporating a long-context CU, we introduce historical contextual information into visual generation tasks, paving the way for ChatGPT-like dialog systems in visual generation.
We offer a demonstration training YAML that enables end-to-end training of ACE on a toy dataset. For a comprehensive overview of the hyperparameter configuration, please consult `scepter/methods/edit/dit_ace_0.6b_512.yaml`.
The dataset class located in `scepter/modules/data/dataset/ms_dataset.py` is designed to facilitate end-to-end training with an open-source toy dataset.
Download the dataset zip file from ModelScope, and then extract its contents into the `cache/datasets/` directory.
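For reference, a minimal shell sketch of that step; `<toy_dataset>.zip` is a placeholder for whatever archive name you actually downloaded from ModelScope:

```bash
# Create the expected cache location and extract the toy dataset into it.
# <toy_dataset>.zip is a placeholder for the archive downloaded from ModelScope.
mkdir -p cache/datasets/
unzip <toy_dataset>.zip -d cache/datasets/
```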
Should you wish to prepare your own datasets, we recommend consulting `scepter/modules/data/dataset/ms_dataset.py` for detailed guidance on the required data format.
The ACE checkpoint has been uploaded to both the ModelScope and Hugging Face platforms.
In the provided training YAML configuration, the ModelScope URL is designated as the default checkpoint URL. Should you wish to switch to Hugging Face, you can do so by modifying the PRETRAINED_MODEL value within the YAML file (replace the prefix "ms://iic" with "hf://scepter-studio").
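For example, one way to make that substitution in place (a sketch that assumes the checkpoint URL appears as plain text in the YAML; keep a backup of the file before editing it):

```bash
# Switch the default checkpoint source from ModelScope to Hugging Face
# by rewriting the URL prefix inside the training config.
sed -i 's|ms://iic|hf://scepter-studio|g' scepter/methods/edit/dit_ace_0.6b_512.yaml
```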
You can start the training procedure by executing the following command:
PYTHONPATH=. python scepter/tools/run_train.py --cfg scepter/methods/edit/dit_ace_0.6b_512.yaml
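Since the training framework advertises DDP/FSDP support (see the feature table later in this README), multi-GPU runs should be possible through the standard PyTorch launcher; the invocation below is an assumption rather than documented usage, so adjust it to your setup:

```bash
# Hypothetical multi-GPU launch via torchrun; set --nproc_per_node to the number of available GPUs.
PYTHONPATH=. torchrun --nproc_per_node 4 scepter/tools/run_train.py --cfg scepter/methods/edit/dit_ace_0.6b_512.yaml
```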
We have developed a chatbot interface with Gradio, designed to convert natural-language user input into visually compelling images that align semantically with the given instructions. You can access this functionality by launching SCEPTER Studio with the following command:
PYTHONPATH=. python scepter/tools/webui.py --cfg scepter/methods/studio/scepter_ui.yaml --language zh
Upon starting, you will find a "ChatBot" tab within the Gradio application, which serves as a chat-based interface to handle any requests related to image editing or generation.
ACE workflow examples (image gallery): Control | Semantic | Element

Style examples (image gallery): Yarn Style | Soft Watercolor Style | Travel Style | WuKong Style

Example workflow case (image gallery): Base | +Mantra | +Tuner | +Control
- Create a new environment with the `conda` command:
conda env create -f environment.yaml
conda activate scepter
- Install with the `pip` command:
We recommend installing specific versions of PyTorch and the acceleration toolbox xFormers. You can install these recommended versions via pip:
pip install -r requirements/recommended.txt
pip install scepter
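After either installation route, a quick import check confirms that the package is visible to your Python environment (this only verifies importability, not GPU or model availability):

```bash
# Sanity check: import the package and print where it was installed from.
python -c "import scepter; print(scepter.__file__)"
```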
| Documentation | Key Features |
|---|---|
| Train | DDP / FSDP / FairScale / xFormers |
| Inference | Dynamic load/unload |
| Dataset Management | Local / HTTP / OSS / ModelScope |
| Tasks | Methods | Links |
|---|---|---|
| Text-to-image Generation | SD v1.5 | |
| Text-to-image Generation | SD v2.1 | |
| Text-to-image Generation | SD-XL | |
| Text-to-image Generation | FLUX | |
| Efficient Tuning | LoRA | |
| Efficient Tuning | Res-Tuning (NeurIPS 2023) | |
| Controllable Image Synthesis | 🌟SCEdit (CVPR 2024) | |
| Image Editing | 🌟LAR-Gen | |
| Image Editing | 🌟StyleBooth | |
| Image Generation and Editing | 🌟ACE | |
To fully experience SCEPTER Studio, you can launch it with the following commands:
pip install scepter
python -m scepter.tools.webui
or run it from a clone of the repository:
git clone https://github.com/modelscope/scepter.git
PYTHONPATH=. python scepter/tools/webui.py --cfg scepter/methods/studio/scepter_ui.yaml
Starting SCEPTER Studio eliminates the need to manually download and organize models; it automatically downloads the required models and stores them in a local directory. Depending on your network and hardware, the initial startup usually takes 15-60 minutes, primarily for downloading and processing the SD v1.5, SD v2.1, and SDXL models. Subsequent startups are much faster (about one minute), since downloading is no longer required.
SCEPTER Studio tabs (screenshots): Image Editing | Training | Model Sharing | Model Inference | Data Management
We deploy a hosted studio that includes only the inference tab; please refer to ms_scepter_studio and hf_scepter_studio.
Install manually by copying the custom nodes into ComfyUI:
cd path/to/scepter
pip install -e .
cp -r path/to/scepter/workflow/ path/to/ComfyUI/custom_nodes/ComfyUI-Scepter
cd path/to/ComfyUI
python main.py
We also support installation and usage through the ComfyUI Manager.
- Alibaba TongYi Vision Intelligence Lab: discover more open-source projects on image generation, video generation, and editing tasks.
- ModelScope Library: the model library of the ModelScope project, containing a large number of popular models.
- SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning): an extensible framework designed to facilitate lightweight model fine-tuning and inference.
If our work is useful for your research, please consider citing:
@misc{scepter,
title = {SCEPTER, https://github.com/modelscope/scepter},
author = {SCEPTER},
year = {2023}
}
This project is licensed under the Apache License (Version 2.0).
Thanks to Stability-AI, the SWIFT library, Fooocus, and ComfyUI for their awesome work.