This repository provides code for the PGNN paper. If you use this code in your research, please cite the paper: https://hdl.handle.net/11296/mwzt6f .
This study addresses the challenge of analytically deriving the dynamic model of parallel kinematic manipulators, which is difficult due to high nonlinearity and strong joint coupling. It employs a novel machine learning model, the Physics-Guided Neural Network (PGNN), which combines the simplicity of the analytically derived kinematic model with the fast computation of neural networks. The model is trained as a data-driven neural network while physics-based loss functions guide the training of the inverse dynamic model. The study demonstrates that the kinematics-model-guided neural network outperforms traditional data-driven neural networks in convergence, interpolation capability, extrapolation capability, generalization capability, and robustness.
- Python 3.9.13
- Pytorch 1.12.0
- scikit-learn 1.1.1
The repository contains the code and datasets needed for training and testing the PGNN model described in the paper, including tests of interpolation, extrapolation, and generalization capability.
All of the training paths in this repository are periodic paths, which can be defined by a finite Fourier series:

$$q(t) = q_0 + \sum_{k=1}^{N}\left[a_k \sin(k\omega_f t) + b_k \cos(k\omega_f t)\right], \qquad \omega_f = \frac{2\pi}{T}$$

The coefficients $a_k$ and $b_k$, together with the offset $q_0$, determine the shape of the periodic function. Paths with different maximum speeds and accelerations are generated by varying the period $T$ of the periodic function.
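As an illustration, a path of this finite-Fourier-series form can be sampled as follows; the coefficients, number of harmonics, and period here are arbitrary demonstration values, not taken from the repository's datasets:

```python
import numpy as np

def fourier_path(t, q0, a, b, T):
    """Evaluate a finite-Fourier-series path with period T:
    q(t) = q0 + sum_k [a_k*sin(k*w*t) + b_k*cos(k*w*t)],  w = 2*pi/T
    """
    w = 2.0 * np.pi / T
    k = np.arange(1, len(a) + 1)      # harmonic indices 1..N
    t = np.atleast_1d(t)[:, None]     # column vector for broadcasting
    return q0 + np.sum(a * np.sin(k * w * t) + b * np.cos(k * w * t), axis=1)

# Demonstration coefficients (hypothetical, not from the paper's datasets)
a = np.array([0.30, 0.10, 0.05])
b = np.array([0.20, -0.15, 0.02])
T = 2.0
t = np.linspace(0.0, 2 * T, 801)
q = fourier_path(t, q0=0.5, a=a, b=b, T=T)

# the path repeats every T seconds
assert np.allclose(q, fourier_path(t + T, 0.5, a, b, T))
```

Shrinking $T$ at fixed coefficients raises $\omega_f$, which is what scales the maximum velocity and acceleration of the path.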
To test the model fairly, each testing dataset contains 60,000 samples, the same size as the validation set. Testing data files are named as follows:

`TestingData_ATraj_B`
- A represents the trajectory type:
  - Periodic trajectory
  - Quintic polynomial trajectory
- B represents additional information:
  - Low speed
  - Middle speed
  - High speed
  - Null - includes all of the above
  - Extra - velocity and acceleration may exceed the machine limits
  - Single - one randomly picked trajectory, used to show how well the model fits
Multiple sets of quintic polynomial trajectories are used to assess the generalization capability of the model. As depicted in the diagram below, a quintic polynomial trajectory is constructed by selecting two points in space and describing the motion between them with a quintic polynomial.
By taking the derivative of the quintic polynomial with respect to time, we obtain the velocity as a quartic polynomial and the acceleration as a cubic polynomial:

$$q(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5$$

$$\dot{q}(t) = a_1 + 2a_2 t + 3a_3 t^2 + 4a_4 t^3 + 5a_5 t^4$$

$$\ddot{q}(t) = 2a_2 + 6a_3 t + 12a_4 t^2 + 20a_5 t^3$$

From these equations it is apparent that fully determining the six parameters $a_0, \dots, a_5$ requires six boundary conditions: the position, velocity, and acceleration at both the start and end points.
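The six boundary conditions can be assembled into a linear system and solved for the six coefficients. A minimal sketch (the boundary values are hypothetical, and this is not the repository's generation code):

```python
import numpy as np

def quintic_coeffs(q0, v0, a0, qf, vf, af, tf):
    """Solve for the six coefficients of q(t) = c0 + c1*t + ... + c5*t**5
    from position, velocity, and acceleration at t = 0 and t = tf."""
    M = np.array([
        [1, 0,  0,     0,       0,        0],        # q(0)
        [0, 1,  0,     0,       0,        0],        # q'(0)
        [0, 0,  2,     0,       0,        0],        # q''(0)
        [1, tf, tf**2, tf**3,   tf**4,    tf**5],    # q(tf)
        [0, 1,  2*tf,  3*tf**2, 4*tf**3,  5*tf**4],  # q'(tf)
        [0, 0,  2,     6*tf,    12*tf**2, 20*tf**3], # q''(tf)
    ], dtype=float)
    return np.linalg.solve(M, np.array([q0, v0, a0, qf, vf, af], dtype=float))

# Rest-to-rest move between two hypothetical points, 0 -> 1 in 2 s
c = quintic_coeffs(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, tf=2.0)
q = lambda t: sum(ci * t**i for i, ci in enumerate(c))
print(q(1.0))  # midpoint of a symmetric rest-to-rest profile: ~0.5
```

Sampling trajectories then amounts to drawing random endpoint conditions within the machine's workspace and speed limits and evaluating the resulting polynomial over time.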
In summary, the range and the sampling number of quintic polynomial trajectory data are described below.
PGNN stands for Physics-Guided Neural Network, which adds a physics-based compensation term to the loss function of a neural network during training. Before training, a custom class `Hook` is defined to capture the output values of neurons in the hidden layers. Specifically, `register_forward_hook` is used to access the model's intermediate values during the forward pass: by registering the hook on a given layer, one can read that layer's output as well as the input it received from the previous layer. Conversely, `register_backward_hook` is used to obtain the gradient values of each layer during the backward pass, which is crucial for backpropagation and parameter updates.
```python
class Hook():
    def __init__(self, module, backward=False):
        if backward == False:
            # capture inputs/outputs during the forward pass
            self.hook = module.register_forward_hook(self.hook_fn)
        else:
            # capture gradients during the backward pass
            self.hook = module.register_backward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        self.input = input
        self.output = output

    def close(self):
        self.hook.remove()
```
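For example, such a hook can be attached to a small network as follows; the model below is a toy stand-in, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Hook class repeated from the snippet above so this example runs standalone
class Hook():
    def __init__(self, module, backward=False):
        if backward == False:
            self.hook = module.register_forward_hook(self.hook_fn)
        else:
            self.hook = module.register_backward_hook(self.hook_fn)
    def hook_fn(self, module, input, output):
        self.input = input
        self.output = output
    def close(self):
        self.hook.remove()

# Toy model; the layer sizes are illustrative only
model = nn.Sequential(nn.Linear(9, 16), nn.ReLU(), nn.Linear(16, 3))

fwd = Hook(model[0])        # forward hook on the first hidden layer
x = torch.randn(4, 9)
y = model(x)                # the forward pass fills fwd.input / fwd.output

print(fwd.output.shape)     # torch.Size([4, 16])
fwd.close()                 # detach the hook when done
```

The captured hidden-layer values are what the physics-based loss terms can then be computed against during training.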
To retrain the model, simply execute `Model_training.ipynb`.

- Change the training data: find the "Load Data" section and point `tr_path` to a different dataset file.
```python
tr_path = os.path.abspath("./Resources/TrainingData_PeriodicTraj.xlsx")
tr_dataset = TrajDataset(tr_path)
```
- Change the model name and save location: find the "Hyper-parameters Setup" section and change the value of `'save_path'` in the `config` dictionary.
```python
config = {
    'n_epochs': 15,          # maximum number of epochs
    'batch_size': 16,        # mini-batch size for the dataloader
    'optimizer': 'Adam',     # optimization algorithm (an optimizer in torch.optim)
    'optim_hparas': {        # hyper-parameters for the optimizer (depend on which optimizer you use)
        'lr': 0.0005433177089293607  # learning rate of Adam
    },
    'layers': np.array([9, 189, 119, 9, 53, 85, 3], dtype='int64'),  # layer sizes of the NN architecture
    'early_stop': 200,       # early-stopping patience (number of epochs since the model's last improvement)
    'save_path': 'models/model.pt'  # your model will be saved here
}
```
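The `'early_stop'` entry acts as a patience counter: training halts once the validation loss has not improved for that many consecutive epochs. A minimal sketch of the idea, with made-up loss values (this is not the notebook's implementation):

```python
class EarlyStopper:
    """Stop training once `patience` epochs pass without improvement,
    mirroring the role of 'early_stop' in the config above."""
    def __init__(self, patience):
        self.patience = patience
        self.best = float('inf')
        self.count = 0

    def step(self, val_loss):
        if val_loss < self.best:   # improvement: remember it, reset counter
            self.best = val_loss
            self.count = 0
        else:                      # no improvement this epoch
            self.count += 1
        return self.count >= self.patience   # True -> stop training

stopper = EarlyStopper(patience=3)
losses = [0.9, 0.7, 0.8, 0.8, 0.8, 0.8]      # made-up validation losses
stops = [stopper.step(l) for l in losses]
print(stops)  # [False, False, False, False, True, True]
```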
- Change the testing data and the model to use: find the variables `ts_path` and `test_config` in the "Testing with trained NN" section.
```python
ts_path = os.path.abspath("./Resources/TestingData_PeriodicTraj_Single.xlsx")
ts_dataset = TrajDataset(ts_path)
test_config = {'save_path': 'models/model_DDNN.pt'}
```
- VSCode - Code IDE
- MATLAB/Simulink Simscape - Rigid body dynamics simulation
- ADAMS - Simulation data verification
- WeiChu (Wilson) Chen
- This project is licensed under the MIT License - see the LICENSE.md file for details
- The novel NN structure, PGNN, first appeared in *Physics-guided Neural Networks (PGNN): An Application In Lake Temperature Modelling*.
- Microsoft NNI is used for Hyperparameter Tuning.
- I'm grateful for Professor Sung's continuous care and support throughout my entire master's journey.