A raw implementation of PyTorch using NumPy. Structured similarly to PyTorch, it lets you create deep learning models without using PyTorch or TensorFlow.


PureTorch

PureTorch is a NumPy-based implementation of PyTorch. It aims to replicate the functionality and user experience of PyTorch while offering a transparent and lightweight codebase for educational and experimental purposes.


Table of Contents

  • Version Info: current version and features
  • Purpose: the what and why of this repository
  • Structure: file structure of the various modules
  • Upcoming Features: features under development
  • Setup: instructions for local setup
  • Report Bugs: how to report issues or contribute

Version Info

Current Version: 1.1.0+dev

Migration Guide

If you're upgrading from v0.x.x to v1.x.x-dev, please refer to the Migration Guide for detailed instructions on updating your codebase.

The guide covers:

  • Breaking changes in the API.
  • Updated module structures.
  • Examples of how to transition your code.


New Features

  1. puretorch.Tensor:
    • A wrapper around autograd.tensor that tracks gradients for efficient back-propagation.
    • Makes back-propagation easier and more convenient.
  2. puretorch.nn:
    • Neural-network modules such as linear and sequential, with more to come.
  3. puretorch.optim:
    • Optimizers such as SGD (more will be added soon).
  4. autograd:
    • Autograd is now supported, allowing users to build custom functions with full control over gradients.
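The gradient tracking described above can be illustrated with a minimal, self-contained sketch. This uses plain Python scalars rather than PureTorch's actual NumPy-backed Tensor, and the class and method names are illustrative, not the library's API:

```python
class Scalar:
    """Toy autograd value: tracks data, grad, and a backward rule."""
    def __init__(self, data, parents=(), backward_rule=lambda g: ()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_rule = backward_rule  # maps upstream grad -> parent grads

    def __mul__(self, other):
        return Scalar(self.data * other.data, (self, other),
                      lambda g: (g * other.data, g * self.data))

    def __add__(self, other):
        return Scalar(self.data + other.data, (self, other),
                      lambda g: (g, g))

    def backward(self):
        # Build reverse topological order, then push gradients to parents.
        topo, seen = [], set()
        def build(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            for parent, g in zip(v._parents, v._backward_rule(v.grad)):
                parent.grad += g

x = Scalar(2.0)
y = Scalar(3.0)
z = x * y + x      # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

A real Tensor generalizes this to NumPy arrays, but the core idea (record parents and a local backward rule per operation, then walk the graph in reverse) is the same.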

Deprecations

  1. The puretorch.layers.x system has been replaced with a simpler, PyTorch-like API.
  2. Temporarily removed activations, losses, and optimizers. These will return with updated functionality, supporting Tensors.

Runs on CPU only. Might add GPU support later.


Purpose

A raw implementation of a PyTorch-like library using NumPy. The structure and essence of torch remain the same, but it's implemented using NumPy.


Structure

New file structure:

  • puretorch
    • nn
      • linear
      • Perceptron (will be deprecated in v1.1.0)
      • sequential
    • optim
      • optimizer
      • sgd
    • tensor (autograd.tensor, modified for better nn compatibility)
  • autograd
    • context
    • engine (not in use now. Abstractions of tensor-ops will be added here, along with higher level tensor logic)
    • function
    • ops
    • tensor

Will be adding other layers, activations, losses and optimizers.
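To show how the nn pieces in the structure above fit together, here is a hedged sketch of what linear and sequential modules typically look like in a NumPy implementation. Shapes, initialization, and names are illustrative; this is not PureTorch's actual source:

```python
import numpy as np

class Linear:
    """Fully connected layer: y = x @ W + b."""
    def __init__(self, in_features, out_features):
        # Small random weights; a real implementation uses a proper init scheme.
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def __call__(self, x):
        return x @ self.W + self.b

class Sequential:
    """Chains modules: the output of one feeds the next."""
    def __init__(self, *modules):
        self.modules = modules

    def __call__(self, x):
        for m in self.modules:
            x = m(x)
        return x

model = Sequential(Linear(4, 8), Linear(8, 2))
out = model(np.random.randn(5, 4))  # batch of 5 samples, 4 features each
print(out.shape)  # (5, 2)
```

The PyTorch-like API boils down to modules being callables that transform arrays, with Sequential simply composing them in order.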


Upcoming features

These are the features I'm working on; they will soon be part of PureTorch.

  1. More optimizers (e.g., SGD with momentum, Adam)
  2. Loss functions
  3. Model summary (like torchinfo.summary())
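For reference, the SGD-with-momentum update that item 1 refers to can be sketched as follows. This is a generic illustration of one common formulation using scalar parameters, not PureTorch code:

```python
class SGDMomentum:
    """v <- mu*v - lr*grad;  param <- param + v  (one common formulation)."""
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.velocity = {}  # per-parameter running velocity

    def step(self, params, grads):
        # params/grads: dicts mapping names to float values (scalars for brevity).
        for name, g in grads.items():
            v = self.momentum * self.velocity.get(name, 0.0) - self.lr * g
            self.velocity[name] = v
            params[name] += v
        return params

opt = SGDMomentum(lr=0.1, momentum=0.9)
params = {"w": 1.0}
params = opt.step(params, {"w": 2.0})  # v = -0.2,  w becomes roughly 0.8
params = opt.step(params, {"w": 2.0})  # v = -0.38, w becomes roughly 0.42
print(params["w"])
```

The velocity term accumulates past gradients, so repeated steps in the same direction move the parameter faster than plain SGD would.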

Setup

Note: this is for setting up the "dev" branch locally.
For a stable installation, see the "main" branch's setup guide.

Prerequisites

  • Python 3.8 or higher
  • Git installed on your system

Installation Steps

  1. Install the development branch:

     pip install PureTorch

  2. Verify the installation in Python:

     import puretorch
     print(puretorch.__version__)

  3. Verify the installation from the terminal:

     python3 -c "import puretorch; print(puretorch.__version__)"

If the version printed is 1.1.0, the package was installed correctly.
If not, try reinstalling the package, or check whether you installed the stable package instead of the development one.


Report Bugs

If you encounter issues or have suggestions, please open an issue via the GitHub Issues tab.

For those interested in contributing:

  • Fork this repository.
  • Make your changes.
  • Open a pull request with a detailed description of your changes.

Test cases will be added soon to help verify contributions and new features.

