To end our module on ML & neural networks, let’s look back and recap what we’ve learned. We began by introducing the concept of representation learning: unlike many of the other ML methods covered in this course, neural networks not only learn to approximate a highly complex function, they also transform the representation of the data into a form more amenable to the task.
They accomplish this by successively applying simple non-linear functions to make small changes to the representation, each function building on the output of the last.
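As a minimal sketch of this idea (the layer sizes and the choice of ReLU as the simple non-linearity are illustrative assumptions, not something fixed by the lesson), each layer applies an affine map followed by a non-linearity to the previous layer’s output:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One "simple non-linear function": affine map followed by ReLU.
    return np.maximum(0.0, w @ x + b)

x = rng.normal(size=4)                      # original representation of one example
w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
w2, b2 = rng.normal(size=(5, 5)), np.zeros(5)
w3, b3 = rng.normal(size=(2, 5)), np.zeros(2)

h1 = layer(x, w1, b1)                       # first transformed representation
h2 = layer(h1, w2, b2)                      # builds on the output of the last layer
y_hat = w3 @ h2 + b3                        # final linear read-out for the task
```

Each intermediate `h` is a new representation of the same example; the final read-out only has to be simple because the earlier layers have already reshaped the data.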
This is in contrast to “the old way” of doing things, where the representation (the features) is hand-picked and provided to the ML method as input. The “classic” ML method tries to find a direct mapping from $x$ to $\hat{y}$.
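To make the contrast concrete, here is a hand-engineered-features sketch (the specific features and weights are made up for illustration): the designer fixes the representation, and the model only learns a direct mapping from those fixed features to $\hat{y}$.

```python
import numpy as np

def hand_features(p):
    # Hand-picked features for a 2-D point (x1, x2): the human designer
    # decides that distance from the origin and the product x1*x2 matter.
    return np.array([np.hypot(p[0], p[1]), p[0] * p[1]])

# A "classic" linear model maps these fixed features straight to y_hat;
# it weights the representation but never changes it.
w, b = np.array([1.5, -0.5]), 0.2
p = np.array([3.0, 4.0])
y_hat = w @ hand_features(p) + b            # 1.5*5 - 0.5*12 + 0.2
```

If the hand-picked features miss something important, the model has no way to recover it; a neural network, by contrast, can learn better features from the raw input.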