Hebbian learning
From Wikipedia, the free encyclopedia
Hebbian learning is a hypothesis for how neuronal connections are reinforced in mammalian brains; it is also a technique for weight selection in artificial neural networks.
The idea is named after Donald Hebb, who presented it in his 1949 book The Organization of Behavior, which inspired further research into neural networks. His idea specified how much the strength of a connection between two neurons should be altered according to how they fire at the time. Hebb's original principle was essentially that if one neuron repeatedly stimulates another, then the strength of the connection between the two neurons will be increased.
See also Hebbian theory.
Principles of Hebbian learning
From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons will increase if the two neurons activate simultaneously; it is reduced if they activate separately. Nodes which tend to be either both positive or both negative at the same time will have strong positive weights while those which tend to be opposite will have strong negative weights. It is sometimes stated more simply as "neurons that fire together, wire together."
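As a rough illustration (not from the article), the short Python sketch below applies a basic Hebbian update to a single weight; the function name hebbian_step and the learning-rate parameter lr are illustrative assumptions, since Hebb's principle itself specifies no particular rate.

```python
# A minimal sketch of a single-weight Hebbian update, assuming signed
# activations: the weight grows when the two units' activations share a
# sign and shrinks when their signs differ.
def hebbian_step(w, x_i, x_j, lr=0.1):
    """Return the weight after one Hebbian update for activations x_i, x_j."""
    return w + lr * x_i * x_j

w_pos = 0.0
for _ in range(10):                       # units that repeatedly fire together
    w_pos = hebbian_step(w_pos, 1.0, 1.0)

w_neg = 0.0
for _ in range(10):                       # units with opposite activations
    w_neg = hebbian_step(w_neg, 1.0, -1.0)

print(w_pos, w_neg)                       # 1.0 -1.0
```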
This original principle is perhaps the simplest form of weight selection. While this means it can be coded into a computer program relatively easily and used to update the weights of a network, it also limits the range of applications of Hebbian learning. Today, the term Hebbian learning generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves adjusting the weights between learning nodes so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered to be somewhat Hebbian in nature.
The following is one formulaic description of Hebbian learning (many other descriptions are possible):
w_{ij} = x_i x_j
where w_{ij} is the weight of the connection from neuron j to neuron i and x_i the input for neuron i. Note that this is pattern learning (weights are updated after every training example). In a Hopfield network, connections w_{ij} are set to zero if i = j (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
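A minimal NumPy sketch of this pattern-learning rule, assuming bipolar (+1/-1) activations as commonly used with Hopfield networks (the binary 0/1 case mentioned above would need the activations remapped); the function name train_pattern is illustrative.

```python
import numpy as np

def train_pattern(W, x):
    """Update weight matrix W in place after a single training pattern x."""
    W += np.outer(x, x)         # w_ij = x_i * x_j for this pattern
    np.fill_diagonal(W, 0.0)    # Hopfield convention: no reflexive connections
    return W

n = 4
W = np.zeros((n, n))
for x in [np.array([1., -1., 1., -1.]), np.array([1., 1., -1., -1.])]:
    W = train_pattern(W, x)     # weights updated after every training example
print(W)
```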
Another formulaic description is:
w_{ij} = \frac{1}{p} \sum_{k=1}^{p} x_i^k x_j^k ,
where w_{ij} is the weight of the connection from neuron j to neuron i, n is the dimension of the input vector, p the number of training patterns, and x_i^k the kth input for neuron i. This is learning by epoch (weights are updated after all the training examples have been presented). Again, in a Hopfield network, connections w_{ij} are set to zero if i = j (no reflexive connections).
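A corresponding sketch of the epoch-based rule, again in NumPy and with an illustrative function name: the whole weight matrix is computed once from all p training patterns rather than updated pattern by pattern.

```python
import numpy as np

def train_epoch(patterns):
    """patterns: array of shape (p, n). Returns the n-by-n weight matrix."""
    p = patterns.shape[0]
    W = patterns.T @ patterns / p   # w_ij = (1/p) * sum_k x_i^k * x_j^k
    np.fill_diagonal(W, 0.0)        # no reflexive connections (w_ii = 0)
    return W

patterns = np.array([[1., -1., 1., -1.],
                     [1., 1., -1., -1.]])
W = train_epoch(patterns)
print(W)
```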
A variation of Hebbian learning that accounts for blocking and many other neural learning phenomena is the mathematical model of Harry Klopf. Klopf's model reproduces a great many biological phenomena and is also simple to implement.
Hebbian learning in biological systems
Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine invertebrate Aplysia californica.
Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments indicating that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.
References
- Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford: Oxford University Press. ISBN 0-19-853849-9 (hardback), ISBN 0-19-853864-2 (paperback).
- Abstract of a paper on changes in synaptic activity.
External links
- Hebbian Learning tutorial (Part 1: Novelty Filtering, Part 2: PCA)