Getting Started

Four Deep Learning Papers to Read in February 2021

From Neuroscience to Automatic Differentiation, Neural Network Theory & Underfitting in Neural Processes

Robert Lange
Towards Data Science
5 min read · Jan 30, 2021


Are you looking for a better overview of the different Deep Learning research streams currently being pursued? Do you have too many open arXiv tabs to reasonably get through? Too little time to watch full talks? If only there were a quick summary of the key idea and contribution of each paper. Then I am happy to introduce the 'Machine-Learning-Collage' series, in which I draft one-slide visual summaries of my favourite recent papers. Every single week. At the end of the month I collect all of the resulting collages in a summary blog post. This is the very first edition. So here are my four favourite papers that I read in January 2021 and why I believe them to be important.

“Unsupervised Deep Learning identifies semantic disentanglement in single IT neurons”

Authors: Higgins et al. (2020) | 📝 Paper

One Paragraph Summary: The 'neuron doctrine' postulated by Ramón y Cajal states that the nervous system consists of individual discrete units. This led to the special emphasis put on neurons being…
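Higgins et al. build on the β-VAE framework for learning disentangled representations, which augments the standard variational autoencoder objective with a β-weighted KL penalty. As a rough illustrative sketch (the function and toy inputs below are my own, not from the paper), the objective trades off reconstruction error against how far the approximate posterior drifts from a standard normal prior:

```python
import numpy as np

def beta_vae_loss(recon_x, x, mu, log_var, beta=4.0):
    """Negative beta-VAE objective for a single sample:
    reconstruction error plus a beta-weighted KL divergence between
    the diagonal-Gaussian posterior N(mu, exp(log_var)) and N(0, I)."""
    # Squared-error reconstruction term (Gaussian likelihood up to constants)
    recon = np.sum((recon_x - x) ** 2)
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)) for a diagonal Gaussian
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon + beta * kl

# A perfect reconstruction with a posterior equal to the prior
# makes both terms vanish:
x = np.zeros(10)
print(beta_vae_loss(x, x, mu=np.zeros(3), log_var=np.zeros(3)))  # 0.0
```

Setting β > 1 pressures the latent dimensions toward independence, which is what yields the disentangled, semantically interpretable units the paper compares to single IT neurons.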

