
# GD-VAE: Geometric Dynamic Variational Autoencoders

Examples | Documentation | Paper

The Geometric Dynamic Variational Autoencoders (GD-VAE) package provides machine learning methods for learning embedding maps of nonlinear dynamics into general latent spaces. This includes methods for standard latent spaces as well as manifold latent spaces with specified geometry and topology. The manifold latent spaces can be based on analytic expressions or on general point-cloud representations.
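To illustrate the core idea of a manifold latent space, here is a minimal, generic PyTorch sketch (this is not the package's own API; the function and variable names here are hypothetical and the sphere is just one example of a prescribed geometry):

```python
import torch

def project_to_sphere(z, radius=1.0):
    # Map an unconstrained latent code z onto the sphere of the given
    # radius, so the latent space has a prescribed geometry/topology.
    return radius * z / z.norm(dim=-1, keepdim=True)

# hypothetical encoder output: batch of 8 samples, latent dimension 3
z = torch.randn(8, 3)
z_manifold = project_to_sphere(z)  # points now lie on the unit sphere
```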

## Quick Start

Install for Python using pip:

```
pip install -U gd-vae-pytorch
```

For usage of the package, see the examples folder. More information on the structure of the package can also be found on the documentation pages.

If you previously installed the package, please update to the latest version using `pip install --upgrade gd-vae-pytorch`.

To test the installed package, run:

```python
import gd_vae_pytorch.tests.t1 as t1; t1.run()
```

## Packages

The pip installation should automatically handle most of the dependencies. If there are issues, please be sure to install a pytorch package with version >= 1.2.0. The full set of dependencies can be found in requirements.txt. You may want to install pytorch manually first to configure it for your specific GPU system and platform.
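After installing, you can check your local pytorch version and GPU configuration directly:

```python
import torch

print(torch.__version__)          # the package expects version >= 1.2.0
print(torch.cuda.is_available())  # True if pytorch is configured for your GPU
```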

## Usage

For information on how to use the package, see the examples folder and the documentation pages.

## Additional Information

When using this package, please cite:

GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning Non-linear Dynamics and Dimension Reductions, R. Lopez and P. J. Atzberger, arXiv:2206.05183, (2022), [arXiv].

```bibtex
@article{lopez_atzberger_gd_vae_2022,
  title={GD-VAEs: Geometric Dynamic Variational Autoencoders for
         Learning Non-linear Dynamics and Dimension Reductions},
  author={Lopez, Ryan and Atzberger, Paul J.},
  journal={arXiv:2206.05183},
  month={June},
  year={2022},
  url={http://arxiv.org/abs/2206.05183}
}
```

## Acknowledgements

This work was supported by grants from DOE Grant ASCR PHILMS DE-SC0019246 and NSF Grant DMS-1616353.


Examples | Documentation | Paper | Atzberger Homepage