Perhaps the best way to break down deep learning is to imagine a set of Russian matryoshka dolls – wooden dolls that nest inside one another, from largest to smallest.
Let’s name the biggest one broad ol’ Artificial Intelligence. Nesting inside is a subfield called Machine Learning, which houses its very own subfield: Deep Learning.
Deep learning is a set of algorithms whose structure loosely resembles the networks of neurons in the human brain, which is why they are also called neural networks. (Deep breath.) Neural networks are an integral part of machine learning, so deep learning is machine learning, as our doll analogy illustrates.
But why is deep learning getting so much attention lately? Well, in most traditional machine learning algorithms, if we want the machine to identify a cat in an image, we need to tell the program which features to look for (e.g. whiskers, pointed ears, a long tail).
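To make that concrete, here is a minimal sketch of the traditional, feature-engineering approach, assuming Python and scikit-learn. The extract_features helper, the feature names, and the tiny "dataset" are all hypothetical stand-ins – the point is simply that a human decides which cues matter before any learning happens.

```python
# Traditional approach (sketch): we hand-pick the features and feed them
# to a classic classifier. Everything here is illustrative, not a real
# cat detector.
from sklearn.linear_model import LogisticRegression

def extract_features(image):
    # Hypothetical hand-crafted features a human chose in advance:
    # whiskers, pointed ears, tail length.
    return [
        image["has_whiskers"],      # 1 if whisker-like edges were detected
        image["has_pointed_ears"],  # 1 if ear shapes look pointed
        image["tail_length"],       # normalized tail length, 0.0 to 1.0
    ]

# Tiny made-up training set: "images" already described by our features.
images = [
    {"has_whiskers": 1, "has_pointed_ears": 1, "tail_length": 0.9},  # cat
    {"has_whiskers": 0, "has_pointed_ears": 0, "tail_length": 0.2},  # not a cat
    {"has_whiskers": 1, "has_pointed_ears": 1, "tail_length": 0.8},  # cat
    {"has_whiskers": 0, "has_pointed_ears": 1, "tail_length": 0.1},  # not a cat
]
labels = [1, 0, 1, 0]  # 1 = cat, 0 = not a cat

X = [extract_features(img) for img in images]
clf = LogisticRegression().fit(X, labels)

new_image = {"has_whiskers": 1, "has_pointed_ears": 1, "tail_length": 0.7}
print(clf.predict([extract_features(new_image)]))  # -> [1], i.e. "cat"
```

Notice that the classifier only ever sees the three numbers a human decided to compute; if cats in the wild hide their tails, we are out of luck until someone engineers a better feature.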
Deep learning is way ahead of the game, and here’s why: the “deep” in deep learning refers to the many hidden layers in the network – “hidden” because they sit between the input and the output, stacked layers of neurons that learn useful features on their own.
So, instead of having to tell the algorithm how to find the cat, we just tell it to find a cat and show it plenty of examples of what a cat looks like. That’s why deep learning algorithms often need so much data – vast oceans of labeled examples – and the state-of-the-art hardware to churn through them.
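For contrast with the sketch above, here is a minimal deep learning version, assuming PyTorch. The image size, layer widths, and the random tensors standing in for real labeled photos are all arbitrary illustration choices; what matters is that nothing in the code mentions whiskers or ears – the hidden layers are left to discover those cues from the labeled examples.

```python
# Deep learning approach (sketch): raw pixels in, cat / not-cat out.
# The hidden layers learn their own features from labeled examples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),             # raw 32x32 grayscale pixels in...
    nn.Linear(32 * 32, 128),  # hidden layer 1: learns its own low-level features
    nn.ReLU(),
    nn.Linear(128, 64),       # hidden layer 2: combines them into higher-level ones
    nn.ReLU(),
    nn.Linear(64, 2),         # ...cat / not-cat scores out
)

# Placeholder "dataset": in practice, this is where the vast ocean of
# labeled images comes in.
images = torch.rand(100, 1, 32, 32)   # 100 fake grayscale images
labels = torch.randint(0, 2, (100,))  # 100 fake cat / not-cat labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                     # a few training passes
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to labels
    loss.backward()                        # learn from the mistakes
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

On a handful of random tensors this just shuffles numbers around, of course; with millions of genuinely labeled cat photos (and the hardware to match), the same structure starts picking out whiskers and pointed ears without ever being told they exist.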
Deep learning can be used to automatically add color to black-and-white images or sound to silent movies. It can identify the content of images and automatically caption them. It can reenact politicians, mapping someone else’s expressions onto their faces in video. It can translate the text in images in real time.
And if it can write like Shakespeare, it could certainly write its own TechTalk definition, right?
Maybe it just did…