Optional project of the Streaming Data Analytics course provided by Politecnico di Milano.
Student: Reza Paki
The project uses the Python library Avalanche (Home page, API), presented in the Continual Learning course. The aim is to compare different Continual Learning strategies on two standard benchmarks in an Incremental Task Learning scenario.
The following strategies will be of interest:
- Baseline strategies: Naive Strategy and Joint Training.
- Replay strategies: Random Replay, GDumb.
- Regularization strategies: Learning Without Forgetting (LWF), Elastic Weight Consolidation (EWC).
- Architectural strategies: Copy Weights with Re-Init (CWR), Progressive Neural Networks (PNNs).
- One hybrid strategy you choose (see module 7 of the CL course).
While it is not necessary to delve deeply into these strategies, it is important to have an intuition of how each of them works. Note that Avalanche implements all of them, so you do not need to implement them yourself. Since we are not interested in finding the best model configuration, you can use SimpleMLP as the base model for all the strategies.
Strategies must be compared on the following metrics (computed on each experience):
- Accuracy.
- Forward Transfer, Backward Transfer.
- Time.
- CPU and RAM usage.

The project must also include plots and reasoning on Forward and Backward Transfer, as seen during the SDA course.
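For intuition on the transfer metrics, one common formulation builds an accuracy matrix `R`, where `R[i][j]` is the accuracy on experience `j` measured after training on experience `i`. A minimal sketch, assuming the standard GEM-style definitions (the matrix values and random-initialization baseline below are purely illustrative, not results):

```python
def backward_transfer(R):
    """BWT: average change in accuracy on earlier experiences after
    training on all experiences. Negative BWT indicates forgetting.
    R[i][j] = accuracy on experience j after training on experience i."""
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

def forward_transfer(R, baseline):
    """FWT: average accuracy on an experience *before* training on it,
    relative to the accuracy of a randomly initialized model (baseline[j])."""
    T = len(R)
    return sum(R[j - 1][j] - baseline[j] for j in range(1, T)) / (T - 1)

# Toy 3-experience accuracy matrix (illustrative numbers only).
R = [[0.90, 0.20, 0.10],
     [0.70, 0.92, 0.15],
     [0.60, 0.80, 0.95]]
baseline = [0.10, 0.10, 0.10]

print(backward_transfer(R))       # negative here: earlier tasks were forgotten
print(forward_transfer(R, baseline))
```

Avalanche's `EvaluationPlugin` can log these metrics for you; the sketch above is only meant to make clear what the plots should be reasoned about.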
Experiments must be run separately in an Incremental Task Learning scenario on two different benchmarks, each containing 5 experiences:
For each benchmark, you are required to create a single .ipynb file. Include comments for the principal instructions; you are allowed to import external .py modules. Make sure to discuss the comparison results thoroughly, using plots for the different metrics. Finally, within each .ipynb file, briefly discuss the conclusions that can be drawn from the experiment.
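As a concrete intuition for the scenario above: in Incremental Task Learning, each experience carries a disjoint subset of the classes together with a task label. A minimal sketch of such a split (the 10-class count is purely illustrative; Avalanche's classic benchmarks build this partition for you, e.g. via an `n_experiences` argument):

```python
def split_classes(classes, n_experiences):
    """Partition a class list into equal-sized, disjoint groups,
    one group per experience (assumes an even split is possible)."""
    per_exp = len(classes) // n_experiences
    return [classes[i * per_exp:(i + 1) * per_exp]
            for i in range(n_experiences)]

# 10 classes split into 5 experiences of 2 classes each.
print(split_classes(list(range(10)), 5))
# → [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```

In the task-incremental setting, the model is told which experience (task) a sample belongs to at both training and test time, which is what distinguishes it from class-incremental learning.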