Learning in the context of neural networks has a very specific meaning: adjusting the weights w of a network. The procedures for doing this are called learning algorithms, and we have already seen examples of them. In supervised learning, the weights w are changed based on the input signal x and a target output signal y_corr. Perhaps the simplest of all supervised learning rules is
w_i <- w_i + e (y_corr - y(x)) x_i
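This rule can be sketched in a few lines of plain Python. The names here (predict for y(x), target for y_corr, eps for the learning rate e) are illustrative, not from the text; the output is taken to be linear, y(x) = sum_i w_i x_i.

```python
# Supervised update w_i <- w_i + e (y_corr - y(x)) x_i,
# sketched in plain Python. Names (eps, target) are illustrative.

def predict(w, x):
    # Linear output y(x) = sum_i w_i x_i
    return sum(wi * xi for wi, xi in zip(w, x))

def supervised_update(w, x, target, eps=0.1):
    # Move each weight along the error times the input component.
    error = target - predict(w, x)
    return [wi + eps * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(100):
    w = supervised_update(w, [1.0, 2.0], target=1.0)
# With repeated presentations of (x, y_corr), the error y_corr - y(x)
# shrinks at each step, so y(x) approaches the target.
```

Because the weight change is proportional to the error, the update does nothing once the network's output matches the target.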
In unsupervised learning, there is no "teacher" to tell the system whether the output is correct. In fact, we have already seen such an example: Hebb's original proposal for the learning process taking place in the brain. In this model, learning causes the weights to change according to
w_i <- w_i + e y(x) x_i
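The Hebbian update can be sketched the same way. Again the names (output, eps) are illustrative, and a linear output y(x) = sum_i w_i x_i is assumed.

```python
# Hebbian update w_i <- w_i + e y(x) x_i in plain Python (illustrative sketch).

def output(w, x):
    # Linear output y(x) = sum_i w_i x_i
    return sum(wi * xi for wi, xi in zip(w, x))

def hebbian_update(w, x, eps=0.01):
    # The weight change is the product of output and input: coincident
    # pre- and post-synaptic activity strengthens the connection.
    y = output(w, x)
    return [wi + eps * y * xi for wi, xi in zip(w, x)]

w = [0.1, 0.1]
for _ in range(10):
    w = hebbian_update(w, [1.0, 1.0])
# Note: the plain Hebbian rule has no stabilizing term, so under
# repeated stimulation the weights grow without bound.
```

The unbounded growth noted in the final comment is the standard motivation for normalized variants of the Hebbian rule.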
This is known as Hebbian learning. We'll discuss Hebbian learning and the associated technique known as principal component analysis. We then consider one of the important unsupervised learning models: the Kohonen model for self-organizing maps.
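As a preview of the Hebbian/PCA connection, here is a sketch of Oja's rule, a normalized Hebbian variant (an assumption of this sketch, not a rule given above): it keeps the weight vector bounded and drives it toward the first principal component of the input data.

```python
import random

# Oja's rule: w_i <- w_i + e y (x_i - y w_i). This stabilized Hebbian rule
# is not stated in the text above; it is the standard bridge between
# Hebbian learning and principal component analysis.

def oja_update(w, x, eps=0.02):
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eps * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
# Zero-mean 2-D inputs whose variance is largest along the direction (1, 1).
data = []
for _ in range(2000):
    a = random.gauss(0, 1.0)   # large-variance component along (1, 1)
    b = random.gauss(0, 0.1)   # small-variance component along (1, -1)
    data.append((a + b, a - b))

w = [1.0, 0.0]
for x in data:
    w = oja_update(w, x)
# w settles (up to sign) near the unit vector along (1, 1),
# the first principal component of the data.
```

The extra term -e y^2 w_i acts as a decay that keeps the weight vector near unit length, which is exactly what the plain Hebbian rule lacks.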
Read Chapter 8 (Principal Components Analysis) and Chapter 9 (Self-Organizing Maps) of S. Haykin, "Neural Networks", 2nd edition.