
AIM-SKKU/ADAPT


Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment


Sungkyunkwan University, Yale University

NeurIPS 2025

[Paper]    [Project Page]

Abstract

Test-time adaptation (TTA) enhances zero-shot robustness under distribution shifts by leveraging unlabeled test data during inference. Despite notable advances, several challenges still limit its broader applicability. First, most methods rely on backpropagation or iterative optimization, which limits scalability and hinders real-time deployment. Second, they lack explicit modeling of class-conditional feature distributions. This modeling is crucial for producing reliable decision boundaries and calibrated predictions, but it remains underexplored due to the lack of both source data and supervision at test time. In this paper, we propose an Advanced Distribution-Aware and backPropagation-free Test-time adaptation (ADAPT) method. We reframe TTA as a Gaussian probabilistic inference task by modeling class-conditional likelihoods using gradually updated class means and a shared covariance matrix. This enables closed-form, training-free inference. To correct potential likelihood bias, we introduce lightweight regularization guided by CLIP priors and a historical knowledge bank. ADAPT requires no source data, no gradient updates, and no full access to target data, supporting both online and transductive settings. Extensive experiments across diverse benchmarks demonstrate that our method achieves state-of-the-art performance under a wide range of distribution shifts with superior scalability and robustness.
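To make the closed-form inference concrete: under a Gaussian class-conditional model with a covariance shared across classes, the per-class log-likelihood reduces to a score that is linear in the feature, so prediction needs only matrix products and an argmax, with no gradient updates. The sketch below is illustrative only, not the paper's implementation; the dimensions, prototype values, and identity covariance are made-up stand-ins.

```python
import numpy as np

def gaussian_scores(x, means, cov_inv):
    """Discriminant scores under a shared-covariance Gaussian model.

    x: (d,) test feature; means: (C, d) class means; cov_inv: (d, d)
    inverse of the shared covariance. Returns (C,) scores; argmax is
    the predicted class.
    """
    lin = means @ cov_inv @ x                              # x^T Sigma^{-1} mu_c
    quad = 0.5 * np.einsum("cd,de,ce->c", means, cov_inv, means)
    return lin - quad                                      # up to class-independent terms

# Toy demo: 3 classes, 4-dim features (illustrative values only).
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 4))        # stand-in for CLIP text prototypes
cov_inv = np.linalg.inv(np.eye(4))     # shared covariance (here: identity)

x = means[1] + 0.05 * rng.normal(size=4)   # sample near class 1's mean
pred = int(np.argmax(gaussian_scores(x, means, cov_inv)))
```

Because the covariance is shared, the quadratic term in the feature cancels across classes, which is what makes the argmax a closed-form, backpropagation-free operation.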



Fig. 1: Overview of Online ADAPT. We perform TTA by modeling class-conditional feature distributions under a Gaussian assumption with shared covariance across classes. Class means are initialized from CLIP prototypes and refined using high-confidence samples in fixed-size per-class knowledge banks. To avoid error accumulation, the current test sample is excluded from updates. Predictions are made via a closed-form, backpropagation-free solution. In the transductive setting, the knowledge bank is built using the top-L most confident samples per class from the full test set.
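The per-class knowledge bank in Fig. 1 can be sketched as a fixed-size buffer per class: class means start at the CLIP prototypes and are re-estimated from high-confidence features, while low-confidence samples are ignored. This is a minimal, hypothetical sketch; the class name, bank size, and confidence threshold below are assumptions, not values from the paper.

```python
import numpy as np
from collections import deque

class KnowledgeBank:
    """Fixed-size per-class bank of high-confidence features (hypothetical sketch)."""

    def __init__(self, prototypes, size=8, conf_thresh=0.6):
        self.prototypes = prototypes                          # (C, d) initial means
        self.banks = [deque(maxlen=size) for _ in prototypes]  # FIFO per class
        self.conf_thresh = conf_thresh

    def maybe_add(self, feat, cls, conf):
        # Only high-confidence samples enter the bank; oldest entries drop out.
        if conf >= self.conf_thresh:
            self.banks[cls].append(feat)

    def class_means(self):
        # Re-estimated mean per class; fall back to the prototype if empty.
        return np.stack([
            np.mean(bank, axis=0) if bank else proto
            for bank, proto in zip(self.banks, self.prototypes)
        ])

bank = KnowledgeBank(np.zeros((2, 3)))
bank.maybe_add(np.ones(3), cls=0, conf=0.9)   # accepted
bank.maybe_add(np.ones(3), cls=1, conf=0.1)   # rejected: below threshold
means = bank.class_means()
```

In the online setting described above, the current test sample would be excluded from this update to avoid error accumulation; in the transductive setting the banks would instead hold the top-L most confident samples per class from the full test set.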

Requirements

Python >= 3.10  
PyTorch == 2.5.1

Datasets

We evaluate our method under three tasks:

Task 1: Natural Distribution Shifts

ImageNet, ImageNet-V2, ImageNet-A, ImageNet-R, ImageNet-Sketch

Task 2: Corruption Robustness

ImageNet-C

Task 3: Cross-Dataset Generalization

Flower102, OxfordPets, SUN397, DTD, Food101, StanfordCars, Aircraft, UCF101, EuroSAT, Caltech101

Please refer to CoOp/CoCoOp and TPT for more details on data preparation.

Using ADAPT

Online Scenario:

bash ADAPT_online_TTA.sh

Transductive Scenario:

bash ADAPT_Transductive_TTA.sh

Citation

If you find this work useful, please consider citing it.

@inproceedings{zhangbackpropagation,
  title={Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment},
  author={Zhang, Youjia and Kim, Youngeun and Choi, Young-Geun and Kim, Hongyeob and Liu, Huiling and Hong, Sungeun},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}

Acknowledgements

We thank the authors of CoOp/CoCoOp, TPT, and AWT for their open-source implementations and instructions on data preparation.
