Perceptron learning algorithm and convergence theorem
Classic Perceptron Learning Rule:
If the classification is correct, don't change the weights.
If the classification is incorrect:
  If the correct response is 1 but the output is 0, update w -> w + c·f (where c is a positive fraction). This makes w·f bigger.
  If the correct response is 0 but the output is 1, update w -> w - c·f. This makes w·f smaller.
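The rule above can be sketched in code. This is a minimal illustration, not a reference implementation: it assumes a threshold unit with output 1 when w·f > 0 (with the bias folded into the feature vector), and the function name, learning-rate value, and epoch cap are all illustrative choices.

```python
import numpy as np

def train_perceptron(X, y, c=0.1, epochs=100):
    """Classic perceptron learning rule.
    X: feature vectors (bias term appended as a constant 1 feature),
    y: desired 0/1 responses, c: a positive fraction (learning rate)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for f, target in zip(X, y):
            output = 1 if np.dot(w, f) > 0 else 0
            if output == target:
                continue              # correct: don't change the weights
            errors += 1
            if target == 1:           # correct response 1, output 0
                w = w + c * f         # makes w.f bigger
            else:                     # correct response 0, output 1
                w = w - c * f         # makes w.f smaller
        if errors == 0:
            break                     # a full pass with no mistakes: done
    return w
```

By the convergence theorem, if the training cases are linearly separable this loop stops making weight changes after a finite number of mistakes; for example, training on the AND function (inputs plus a bias feature of 1) yields weights that classify all four cases correctly.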