Navigating Model Phase Transitions to Enable Extreme Lossless Compression: A Perspective
Official code for "Vanishing Contributions: A Unified Approach to Smoothly Transition Neural Models into Compressed Form", including training and evaluation scripts.
Transformer models library with compression options.
Background subtraction for fluorescence microscopy time-lapses and video sequences via low-rank sparse decomposition (GoDec / GreBsmo, Zhou & Tao 2013).
A comprehensive implementation of post-training pruning methods for large language models (LLMs).
Code repository accompanying the paper "Beyond linear summation: Inferring interaction order from neural and biological dynamics" 🧠.
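The repositories above revolve around low-rank decomposition; the fluorescence-microscopy entry specifically uses a GoDec-style low-rank plus sparse split (Zhou & Tao 2013). As a minimal illustration of that idea, not the repository's actual code, here is a NumPy sketch: the function name `godec` and all parameters are assumptions made for this example. Each iteration alternates a truncated-SVD update of the low-rank part with a hard-threshold update of the sparse part.

```python
import numpy as np

def godec(X, rank=2, card=None, iters=20):
    """Illustrative GoDec-style decomposition X ~ L + S.

    L: low-rank part (rank-`rank` SVD truncation of X - S).
    S: sparse part (the `card` largest-magnitude entries of X - L).
    """
    if card is None:
        card = X.size // 10  # assumed sparsity budget, purely illustrative
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        # Low-rank update: truncated SVD of the spike-free residual
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: keep only the top-`card` entries of X - L
        R = X - L
        S = np.zeros_like(X)
        idx = np.unravel_index(
            np.argsort(np.abs(R), axis=None)[-card:], X.shape
        )
        S[idx] = R[idx]
    return L, S
```

In the background-subtraction setting, each column of `X` would be a vectorized video frame: `L` recovers the (approximately low-rank) static background and `S` the sparse moving foreground.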