This week’s lecture is again about Neural Networks. As an aside, my Firefox settings don’t seem to work that well with the Coursera user interface, so I switched to Safari. Now the “speed up” buttons work for me. I accelerated the Octave pieces into what I call “chipmunk” mode. You can also switch off the sound and read only the subtitles. This way you can listen to one lecture and read the subtitles of another, both in accelerated mode. Imagine how fast you can learn in this manner.
Okay, back to Neural Networks. Here is the list of video titles:
- Cost Function – the cost function of Neural Networks is described.
- Backpropagation algorithm – this algorithm computes the gradients by propagating errors backwards from the output layer.
- Backpropagation intuition – the algorithm is complicated, so this is a useful video to watch.
- Implementation Note: Unrolling parameters
- Gradient Checking – this is a technique for sanity-checking your gradient computations while developing neural networks. It’s too slow to use in production.
- Random Initialization – the weights in a neural network need to be initialized to small random values close to zero. If they all started out equal, every hidden unit would compute the same function and receive the same updates, so the symmetry would never be broken and the network couldn’t learn properly.
- Putting It Together
- Autonomous Driving – this is a demonstration video of an experiment involving neural networks, whose goal is to teach a computer to steer a (military) vehicle. I got the impression that this is an old experiment, and I might have seen the video before. If I understood correctly, they had at least two neural networks: one for one-lane roads and another for two-lane roads.
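The gradient-checking idea from the list above is easy to illustrate: approximate each partial derivative numerically with a two-sided finite difference and compare it against the analytic gradient. Here is a minimal sketch in Python on a toy cost function (the function, its gradient, and the `eps` value are my illustrative choices, not anything from the lectures):

```python
import numpy as np

def numerical_gradient(f, theta, eps=1e-4):
    """Two-sided finite-difference approximation of the gradient of f at theta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        grad[i] = (f(theta + step) - f(theta - step)) / (2 * eps)
    return grad

# Toy cost function with a known analytic gradient:
# J(theta) = sum(theta_i^2), so dJ/dtheta_i = 2 * theta_i.
cost = lambda t: np.sum(t ** 2)
analytic_grad = lambda t: 2 * t

theta = np.array([1.0, -2.0, 0.5])
diff = np.linalg.norm(numerical_gradient(cost, theta) - analytic_grad(theta))
print(diff)  # a tiny number if the analytic gradient is correct
```

In a real network you would run this check once against your backpropagation gradients and then switch it off, since looping over every parameter this way is far too slow for training.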
A few remarks were made concerning neural network architectures in the scenario of multi-class classification. The most common architecture is:
- one input layer with as many nodes as features
- a hidden layer with somewhat more nodes than there are features, but certainly not more than three times as many
- one output layer with as many nodes as classes to classify
Or you could add a few more hidden layers, but then of course you would pay a price in computational cost.
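The architecture above, combined with the random-initialization rule, can be sketched in a few lines of Python. The `hidden_factor` of 1.5 and the `epsilon` of 0.12 are my own illustrative assumptions (any small factor and any small epsilon fit the description in the notes); each weight matrix gets an extra column for the bias unit:

```python
import numpy as np

def init_network(n_features, n_classes, hidden_factor=1.5, epsilon=0.12):
    """Build weight matrices for a one-hidden-layer multi-class network.

    Layer sizes follow the rule of thumb above: input = number of features,
    hidden = a bit more than that, output = number of classes. Weights are
    drawn uniformly from [-epsilon, epsilon] to break symmetry.
    """
    n_hidden = int(np.ceil(hidden_factor * n_features))
    sizes = [n_features, n_hidden, n_classes]
    # One weight matrix per layer transition; the "+ 1" column is for the bias unit.
    thetas = [np.random.uniform(-epsilon, epsilon, size=(sizes[i + 1], sizes[i] + 1))
              for i in range(len(sizes) - 1)]
    return thetas

# Example: 400 input features (20x20 pixel images), 10 output classes.
thetas = init_network(400, 10)
print([t.shape for t in thetas])  # [(600, 401), (10, 601)]
```

Adding more hidden layers would just mean extending the `sizes` list, at the cost of more matrices to multiply on every forward and backward pass.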