ADALINE

Early single-layer artificial neural network
[Figure: Learning inside a single-layer ADALINE]
[Photo: An ADALINE machine, with hand-adjustable weights implemented by rheostats]
[Schematic: A single ADALINE unit[1]]

ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented it.[2][3][1][4][5] It was developed by Professor Bernard Widrow and his doctoral student Marcian Hoff at Stanford University in 1960. It is based on the perceptron and consists of weights, a bias, and a summation function. The weights and biases were implemented by rheostats (as seen in the "knobby ADALINE"), and later by memistors.

The difference between ADALINE and the standard (Rosenblatt) perceptron lies in how they learn. ADALINE adjusts its weights to match a teacher signal using the linear output, before the Heaviside function is applied (see figure), whereas the standard perceptron adjusts its weights to match the correct output using the thresholded output, after the Heaviside function is applied.
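The contrast can be made concrete with a short Python sketch (illustrative only; the function names, the learning rate, and the use of NumPy are assumptions, not part of the original formulation):

    import numpy as np

    # Illustrative sketch: one weight update for a single unit.
    # x: input vector with x[0] = 1 so that w[0] acts as the bias,
    # o: teacher/target output in {-1, +1}, eta: a small learning rate.

    def adaline_update(w, x, o, eta=0.01):
        y = np.dot(w, x)                  # linear output, BEFORE the Heaviside/sign function
        return w + eta * (o - y) * x      # adjust weights toward the teacher signal

    def perceptron_update(w, x, o, eta=0.01):
        y_hat = np.sign(np.dot(w, x))     # thresholded output, AFTER the Heaviside/sign function
        return w + eta * (o - y_hat) * x  # weights change only when the thresholded output is wrong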

A multilayer network of ADALINE units is known as a MADALINE.

Definition

ADALINE is a single-layer neural network with multiple nodes, where each node accepts multiple inputs and generates one output. Given the following variables:

  • x, the input vector,
  • w, the weight vector,
  • n, the number of inputs,
  • θ, a constant (the bias),
  • y, the output of the model,

the output is:

y = \sum_{j=1}^{n} x_j w_j + \theta

If we further assume that x_0 = 1 and w_0 = θ, then the output further reduces to:

y = \sum_{j=0}^{n} x_j w_j
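For concreteness, a minimal Python sketch of this computation (the function names and use of NumPy are illustrative assumptions):

    import numpy as np

    # Output of a single ADALINE node: y = sum_j x_j * w_j + theta.
    def adaline_output(x, w, theta):
        return np.dot(x, w) + theta

    # Equivalent form with the bias folded in: prepend x_0 = 1 and w_0 = theta,
    # giving y = sum_{j=0}^{n} x_j * w_j.
    def adaline_output_folded(x, w, theta):
        x_aug = np.concatenate(([1.0], x))
        w_aug = np.concatenate(([theta], w))
        return np.dot(x_aug, w_aug)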

Learning rule

The learning rule used by ADALINE is the LMS ("least mean squares") algorithm, a special case of gradient descent.

Given the following:

  • η, the learning rate,
  • x, the input vector,
  • y, the output of the model,
  • o, the target (desired) output,
  • E = (o − y)², the square of the error,

the LMS algorithm updates the weights as follows:

w \leftarrow w + \eta (o - y) x

This update rule minimizes E, the square of the error,[6] and is in fact the stochastic gradient descent update for linear regression.[7]
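Read as stochastic gradient descent, each example contributes the gradient ∂E/∂w = −2(o − y)x, so stepping against it (with the factor 2 absorbed into η) gives exactly the update above. A minimal training-loop sketch in Python (the dataset layout, function name, and hyperparameters are assumptions for illustration):

    import numpy as np

    # LMS training of a single ADALINE unit.
    # X: one input vector per row, targets: the desired output for each row of X.
    def train_lms(X, targets, eta=0.01, epochs=20):
        X_aug = np.hstack([np.ones((len(X), 1)), X])   # fold the bias in as x_0 = 1
        w = np.zeros(X_aug.shape[1])
        for _ in range(epochs):
            for x, o in zip(X_aug, targets):
                y = np.dot(w, x)                       # linear output
                w += eta * (o - y) * x                 # w <- w + eta * (o - y) * x
        return w

For instance, on the four ±1-encoded inputs of the two-input AND function, a few epochs of this loop should produce weights whose thresholded output reproduces the function.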

MADALINE

MADALINE (Many ADALINE[8]) is a three-layer (input, hidden, output), fully connected, feedforward neural network architecture for classification that uses ADALINE units in its hidden and output layers; that is, its activation function is the sign function.[9] The three-layer network uses memistors. As the sign function is non-differentiable, backpropagation cannot be used to train MADALINE networks. Hence, three different training algorithms have been suggested, called Rule I, Rule II and Rule III.
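The forward pass through such a network can be sketched as follows (weight-matrix names and shapes are assumptions for illustration; the input is taken to include a constant 1 component so the biases live inside the weight matrices):

    import numpy as np

    # MADALINE forward pass: input -> hidden ADALINE units -> output ADALINE units.
    def madaline_forward(x, W_hidden, W_output):
        h = np.sign(W_hidden @ x)       # hidden layer: sign of each unit's linear output
        return np.sign(W_output @ h)    # output layer: sign again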

Despite many attempts, researchers never succeeded in training more than a single layer of weights in a MADALINE model. This remained the case until Widrow encountered the backpropagation algorithm at a 1985 conference in Snowbird, Utah.[10]

MADALINE Rule 1 (MRI) - The first of these dates back to 1962.[11] It consists of two layers: the first is made of ADALINE units (let the output of the i-th ADALINE unit be o_i); the second layer has two units. One is a majority-voting unit that takes in all the o_i and outputs +1 if there are more positives than negatives, and -1 otherwise. The other is a "job assigner": suppose the desired output is -1 and it differs from the majority-voted output; the job assigner then calculates the minimal number of ADALINE units that must change their outputs from positive to negative, picks the ADALINE units that are closest to being negative, and makes them update their weights according to the ADALINE learning rule. It was thought of as a form of the "minimal disturbance principle".[12]
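One MRI update might be sketched as follows (a loose reconstruction of the description above; the flip bookkeeping, learning rate, and names are assumptions):

    import numpy as np

    # One MRI step. W: one weight row per first-layer ADALINE unit,
    # x: augmented input (x[0] = 1), desired: target output in {-1, +1}.
    def mri_step(W, x, desired, eta=0.1):
        s = W @ x                          # linear outputs of the ADALINE units
        o = np.sign(s)                     # their thresholded outputs
        # "Job assigner": change as few units as possible, preferring those
        # whose linear output is closest to zero (closest to flipping already).
        for i in np.argsort(np.abs(s)):
            if np.sign(o.sum()) == desired:                # majority vote now agrees
                break
            if o[i] != desired:
                W[i] = W[i] + eta * (desired - s[i]) * x   # ADALINE (LMS) update toward the desired sign
                o[i] = desired                             # assume the update flips this unit
        return W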

The largest MADALINE machine built had 1000 weights, each implemented by a memistor. It was built in 1963 and used MRI for learning.[12][13]

Some MADALINE machines were demonstrated to perform tasks including inverted pendulum balancing, weather forecasting, and speech recognition.[3]

MADALINE Rule 2 (MRII) - The second training algorithm, described in 1988, improved on Rule I.[8] The Rule II training algorithm is based on a principle called "minimal disturbance". It proceeds by looping over training examples, and for each example it (as sketched in the code after this list):

  • finds the hidden layer unit (ADALINE classifier) with the lowest confidence in its prediction,
  • tentatively flips the sign of the unit,
  • accepts or rejects the change based on whether the network's error is reduced,
  • stops when the error is zero.
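A hedged sketch of this per-example procedure (network shapes, the confidence measure, and all names are assumptions; adapting the output layer, and the flipping of pairs or triples of units described below, are omitted):

    import numpy as np

    # One MRII pass over a single training example.
    # x: augmented input with x[0] = 1, target: desired output vector in {-1, +1},
    # W_h / W_o: hidden and output weight matrices.
    def mrii_example_step(W_h, W_o, x, target, eta=0.1):
        def n_errors(h):
            return int(np.sum(np.sign(W_o @ h) != target))   # number of wrong output units

        s = W_h @ x                                # hidden-layer linear outputs
        h = np.sign(s)                             # hidden-layer outputs
        for i in np.argsort(np.abs(s)):            # least-confident (smallest |s|) unit first
            if n_errors(h) == 0:                   # stop when the error is zero
                break
            trial = h.copy()
            trial[i] = -trial[i]                   # tentatively flip this unit's sign
            if n_errors(trial) < n_errors(h):      # accept only if the network error is reduced
                h = trial
                W_h[i] = W_h[i] + eta * (h[i] - s[i]) * x   # adapt the unit's weights toward the new sign
        return W_h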

Additionally, when flipping single units' signs does not drive the error to zero for a particular example, the training algorithm starts flipping pairs of units' signs, then triples of units, etc.[8]

MADALINE Rule 3 - The third "Rule" applied to a modified network with sigmoid activations instead of the sign function; it was later found to be equivalent to backpropagation.[12]

References

  1. ^ a b 1960: An adaptive "ADALINE" neuron using chemical "memistors"
  2. ^ Anderson, James A.; Rosenfeld, Edward (2000). Talking Nets: An Oral History of Neural Networks. MIT Press. ISBN 9780262511117.
  3. ^ a b YouTube: widrowlms: Science in Action
  4. ^ YouTube: widrowlms: The LMS algorithm and ADALINE. Part I - The LMS algorithm
  5. ^ YouTube: widrowlms: The LMS algorithm and ADALINE. Part II - ADALINE and memistor ADALINE
  6. ^ "Adaline (Adaptive Linear)" (PDF). CS 4793: Introduction to Artificial Neural Networks. Department of Computer Science, University of Texas at San Antonio.
  7. ^ Avi Pfeffer. "CS181 Lecture 5 — Perceptrons" (PDF). Harvard University. [permanent dead link]
  8. ^ a b c Rodney Winter; Bernard Widrow (1988). MADALINE RULE II: A training algorithm for neural networks (PDF). IEEE International Conference on Neural Networks. pp. 401–408. doi:10.1109/ICNN.1988.23872.
  9. ^ YouTube: widrowlms: Science in Action (Madaline is mentioned at the start and at 8:46)
  10. ^ Anderson, James A.; Rosenfeld, Edward, eds. (2000). Talking Nets: An Oral History of Neural Networks. The MIT Press. doi:10.7551/mitpress/6626.003.0004. ISBN 978-0-262-26715-1.
  11. ^ Widrow, Bernard (1962). "Generalization and information storage in networks of adaline neurons" (PDF). Self-organizing Systems: 435–461.
  12. ^ a b c Widrow, Bernard; Lehr, Michael A. (1990). "30 years of adaptive neural networks: perceptron, madaline, and backpropagation". Proceedings of the IEEE. 78 (9): 1415–1442. doi:10.1109/5.58323. S2CID 195704643.
  13. ^ B. Widrow, "Adaline and Madaline-1963, plenary speech," Proc. 1st IEEE Intl. Conf. on Neural Networks, Vol. 1, pp. 145–158, San Diego, CA, June 23, 1987.
