I've been following the Stanford course CS231n: Convolutional Neural Networks for Visual Recognition during my internship at Rayanesh. Here I have gathered my notes and solutions to the assignments. The course lectures were recorded in Spring 2017, but the assignments are from Spring 2021.
Some concepts covered in the assignments, such as Transformers and self-supervised learning, are not taught in the 2017 lectures. The self-supervised learning question is solved, but the Transformers question is skipped. The Style Transfer question was dropped from the 2021 assignments, so I went back to the 2017 homework to solve it.
You can get the starter code for Assignment 1 from here.
- Q1: k-Nearest Neighbor classifier. (Done)
- Q2: Training a Support Vector Machine. (Done)
- Q3: Implement a Softmax classifier. (Done)
- Q4: Two-Layer Neural Network. (Done)
- Q5: Higher Level Representations: Image Features. (Done)
You can get the starter code for Assignment 2 from here.
- Q1: Multi-Layer Fully Connected Neural Networks. (Done)
- Q2: Batch Normalization. (Done)
- Q3: Dropout. (Done)
- Q4: Convolutional Neural Networks. (Done)
- Q5: PyTorch / TensorFlow on CIFAR-10. (Done in PyTorch)
You can get the starter code for Assignment 3 from here.
- Q1: Image Captioning with Vanilla RNNs. (Done)
- Q2: Image Captioning with Transformers. (Skipped)
- Q3: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images. (Done)
- Q4: Generative Adversarial Networks. (Done)
- Q5: Self-Supervised Learning for Image Classification. (Done)
- Extra: Image Captioning with LSTMs. (Done)
- Q4 (from the Spring 2017 Assignment 3): Style Transfer. (Done in PyTorch)
I took notes on some of the lectures.
- Lecture 6: Training Neural Networks, Part I.
- Lecture 7: Training Neural Networks, Part II.
- Lecture 8: Deep Learning Software.
- Lecture 9: CNN Architectures.
- Lecture 10: Recurrent Neural Networks.
- Lecture 11: Detection and Segmentation.
- Lecture 12: Visualizing and Understanding.
- Lecture 13: Generative Models.