Restricted Boltzmann machine

From Wikipedia, the free encyclopedia
Class of artificial neural network
Diagram of a restricted Boltzmann machine with three visible units and four hidden units (no bias units)

A restricted Boltzmann machine (RBM) (also called a restricted Sherrington–Kirkpatrick model with external field or restricted stochastic Ising–Lenz–Little model) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.[1]

RBMs were initially proposed under the name Harmonium by Paul Smolensky in 1986,[2] and rose to prominence after Geoffrey Hinton and collaborators used fast learning algorithms for them in the mid-2000s. RBMs have found applications in dimensionality reduction,[3] classification,[4] collaborative filtering,[5] feature learning,[6] topic modelling,[7] immunology,[8] and even many-body quantum mechanics.[9][10][11]

They can be trained in either supervised or unsupervised ways, depending on the task.[citation needed]

As their name implies, RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph:

  • a pair of nodes from each of the two groups of units (commonly referred to as the "visible" and "hidden" units respectively) may have a symmetric connection between them; and
  • there are no connections between nodes within a group.

By contrast, "unrestricted" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm.[12]

Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation.[13]

Structure


The standard type of RBM has binary-valued (Boolean) hidden and visible units, and consists of a matrix of weights W of size m × n. Each weight element w_{i,j} of the matrix is associated with the connection between the visible (input) unit v_i and the hidden unit h_j. In addition, there are bias weights (offsets) a_i for v_i and b_j for h_j. Given the weights and biases, the energy of a configuration (pair of Boolean vectors) (v, h) is defined as

E(v,h) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_i \sum_j v_i w_{i,j} h_j

or, in matrix notation,

E(v,h) = -a^T v - b^T h - v^T W h.
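
For concreteness, the energy can be evaluated directly from the matrix form. The following is a minimal NumPy sketch; the variable names and the tiny 3 × 4 example (matching the figure above) are illustrative, not part of any standard implementation:

    import numpy as np

    def energy(v, h, W, a, b):
        # E(v, h) = -a^T v - b^T h - v^T W h for binary vectors v (length m), h (length n)
        return -(a @ v) - (b @ h) - (v @ W @ h)

    # Tiny illustrative RBM: m = 3 visible units, n = 4 hidden units.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(3, 4))  # weight matrix, shape (m, n)
    a = np.zeros(3)                         # visible biases
    b = np.zeros(4)                         # hidden biases

    v = np.array([1.0, 0.0, 1.0])
    h = np.array([0.0, 1.0, 1.0, 0.0])
    print(energy(v, h, W, a, b))            # a single scalar energy value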

This energy function is analogous to that of a Hopfield network. As with general Boltzmann machines, the joint probability distribution for the visible and hidden vectors is defined in terms of the energy function as follows,[14]

P(v,h) = \frac{1}{Z} e^{-E(v,h)}

where Z is a partition function defined as the sum of e^{-E(v,h)} over all possible configurations, which can be interpreted as a normalizing constant to ensure that the probabilities sum to 1. The marginal probability of a visible vector is the sum of P(v,h) over all possible hidden layer configurations,[14]

P(v) = \frac{1}{Z} \sum_{\{h\}} e^{-E(v,h)},

and vice versa. Since the underlying graph structure of the RBM is bipartite (meaning there are no intra-layer connections), the hidden unit activations are mutually independent given the visible unit activations. Conversely, the visible unit activations are mutually independent given the hidden unit activations.[12] That is, for m visible units and n hidden units, the conditional probability of a configuration of the visible units v, given a configuration of the hidden units h, is

P(v|h) = \prod_{i=1}^{m} P(v_i|h).

Conversely, the conditional probability of h given v is

P(h|v) = \prod_{j=1}^{n} P(h_j|v).

The individual activation probabilities are given by

P(h_j = 1|v) = \sigma\left(b_j + \sum_{i=1}^{m} w_{i,j} v_i\right) and P(v_i = 1|h) = \sigma\left(a_i + \sum_{j=1}^{n} w_{i,j} h_j\right)

where σ denotes the logistic sigmoid.
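
Because each conditional factorizes over the units of a layer, a whole layer can be computed at once. The following is a minimal sketch of both conditionals, assuming NumPy vectors v (length m) and h (length n), a weight matrix W of shape (m, n), and biases a and b as defined above:

    import numpy as np

    def sigmoid(x):
        # logistic sigmoid, applied elementwise
        return 1.0 / (1.0 + np.exp(-x))

    def p_h_given_v(v, W, b):
        # P(h_j = 1 | v) = sigma(b_j + sum_i w_{i,j} v_i), for all j at once
        return sigmoid(b + v @ W)

    def p_v_given_h(h, W, a):
        # P(v_i = 1 | h) = sigma(a_i + sum_j w_{i,j} h_j), for all i at once
        return sigmoid(a + W @ h)

This layer-at-a-time form is what makes block Gibbs sampling in RBMs efficient: all hidden units can be sampled in parallel given v, and vice versa.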

The visible units of a restricted Boltzmann machine can be multinomial, although the hidden units are Bernoulli.[clarification needed] In this case, the logistic function for visible units is replaced by the softmax function

P(v_i^k = 1|h) = \frac{\exp(a_i^k + \sum_j W_{ij}^k h_j)}{\sum_{k'=1}^{K} \exp(a_i^{k'} + \sum_j W_{ij}^{k'} h_j)}

where K is the number of discrete values that the visible units can take. Multinomial visible units are applied in topic modeling[7] and recommender systems.[5]
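
A sketch of the softmax conditional for multinomial visible units follows; the tensor shapes used here (per-value weights W of shape (K, m, n) and biases a of shape (K, m)) are an illustrative assumption rather than a fixed convention:

    import numpy as np

    def p_multinomial_v(h, W, a):
        # P(v_i^k = 1 | h): softmax over the K values of each visible unit.
        # Assumed shapes: W (K, m, n), a (K, m), h (n,); returns an array (K, m)
        # whose K entries sum to 1 for each visible unit i.
        logits = a + np.einsum('kmn,n->km', W, h)    # a_i^k + sum_j W_ij^k h_j
        logits -= logits.max(axis=0, keepdims=True)  # stabilize exp numerically
        e = np.exp(logits)
        return e / e.sum(axis=0, keepdims=True)      # normalize over k for each i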

Relation to other models


Restricted Boltzmann machines are a special case of Boltzmann machines and Markov random fields.[15][16]

The graphical model of RBMs corresponds to that of factor analysis.[17]

Training algorithm


Restricted Boltzmann machines are trained to maximize the product of probabilities assigned to some training set V (a matrix, each row of which is treated as a visible vector v),

\arg\max_W \prod_{v \in V} P(v)

or equivalently, to maximize the expected log probability of a training sample v selected randomly from V:[15][16]

\arg\max_W \mathbb{E}\left[\log P(v)\right]
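
The gradient of this objective takes a simple two-term form; as stated in Hinton's practical guide,[14] for each weight

\frac{\partial \log P(v)}{\partial w_{i,j}} = \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}}

where the angle brackets denote expectations under the data distribution and the model distribution, respectively. The first term is cheap to compute from training data, but the second requires sampling from the model, which is what contrastive divergence approximates with a short Gibbs chain.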

The algorithm most often used to train RBMs, that is, to optimize the weight matrix W, is the contrastive divergence (CD) algorithm due to Hinton, originally developed to train PoE (product of experts) models.[18][19] The algorithm performs Gibbs sampling and is used inside a gradient descent procedure (similar to the way backpropagation is used inside such a procedure when training feedforward neural nets) to compute the weight updates.

The basic, single-step contrastive divergence (CD-1) procedure for a single sample can be summarized as follows (a NumPy sketch follows the list):

  1. Take a training sample v, compute the probabilities of the hidden units and sample a hidden activation vector h from this probability distribution.
  2. Compute the outer product of v and h and call this the positive gradient.
  3. From h, sample a reconstruction v' of the visible units, then resample the hidden activations h' from this. (Gibbs sampling step.)
  4. Compute the outer product of v' and h' and call this the negative gradient.
  5. Let the update to the weight matrix W be the positive gradient minus the negative gradient, times some learning rate: \Delta W = \epsilon (v h^T - v' h'^T).
  6. Update the biases a and b analogously: \Delta a = \epsilon (v - v'), \Delta b = \epsilon (h - h').
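
The following is a minimal NumPy sketch of these six steps for one binary training vector. It is a sketch under the binary-unit assumptions of this section, not a production implementation; a common variant uses the probabilities rather than the sampled binary vectors for some of these quantities, as discussed in Hinton's guide.[14]

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample(p):
        # draw a binary vector whose entries are 1 with the given probabilities
        return (rng.random(p.shape) < p).astype(float)

    def cd1_step(v, W, a, b, lr=0.1):
        # Steps 1-2: sample hidden vector h from v; positive gradient v h^T.
        h = sample(sigmoid(b + v @ W))
        positive = np.outer(v, h)
        # Step 3: one Gibbs step -- reconstruct v', then resample h'.
        v_prime = sample(sigmoid(a + W @ h))
        h_prime = sample(sigmoid(b + v_prime @ W))
        # Step 4: negative gradient v' h'^T.
        negative = np.outer(v_prime, h_prime)
        # Steps 5-6: update weights and biases in place, learning rate lr (epsilon).
        W += lr * (positive - negative)
        a += lr * (v - v_prime)
        b += lr * (h - h_prime)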

A Practical Guide to Training RBMs written by Hinton can be found on his homepage.[14]

Stacked Restricted Boltzmann Machine

See also: Deep belief network
  • The difference between stacked restricted Boltzmann machines and a single RBM is that, in an RBM, lateral connections within a layer are prohibited to make analysis tractable. A stacked Boltzmann machine, on the other hand, consists of a combination of an unsupervised three-layer network with symmetric weights and a supervised fine-tuned top layer for recognizing three classes.
  • Stacked Boltzmann machines are used for natural-language understanding, document retrieval, image generation, and classification. These functions are trained with unsupervised pre-training and/or supervised fine-tuning. Unlike the top layer, which is undirected and symmetric, the RBM connections below it form two-way, asymmetric layers: the stack is three layers with asymmetric weights, and the two networks are combined into one.
  • A stacked Boltzmann machine does share similarities with an RBM: its neuron is a stochastic binary Hopfield neuron, the same as in the restricted Boltzmann machine. The energy of both models is given by Gibbs' probability measure: E = -\frac{1}{2}\sum_{i,j} w_{ij} s_i s_j + \sum_i \theta_i s_i. The training process is likewise similar to that of the RBM: one layer is trained at a time, and the equilibrium state is approximated with a 3-segment pass, not by performing backpropagation. Supervised and unsupervised training are used on different RBMs during pre-training for classification and recognition. The training uses contrastive divergence with Gibbs sampling: \Delta w_{ij} = \epsilon (p_{ij} - p'_{ij}). (A sketch of layer-wise pre-training follows this list.)
  • The stacked Boltzmann machine's strength is that it performs a non-linear transformation, so it is easy to expand and can give a hierarchical layer of features. Its weakness is the complicated calculations required for integer- and real-valued neurons. It does not follow the gradient of any function, so the approximation of contrastive divergence to maximum likelihood is improvised.[14]
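
To illustrate the layer-wise pre-training referred to above, here is a minimal sketch of greedy stacking that reuses sigmoid, sample, cd1_step, and rng from the contrastive-divergence sketch earlier; the layer sizes, epoch count, and the use of hidden probabilities as the next layer's input are illustrative assumptions, not a prescribed recipe:

    def pretrain_stack(data, layer_sizes, epochs=10, lr=0.1):
        # data: array of shape (num_samples, layer_sizes[0]) with binary rows.
        # Trains one RBM per adjacent pair of layer sizes, bottom-up.
        params = []
        x = data
        for m, n in zip(layer_sizes[:-1], layer_sizes[1:]):
            W = rng.normal(scale=0.01, size=(m, n))
            a, b = np.zeros(m), np.zeros(n)
            for _ in range(epochs):
                for v in x:
                    cd1_step(v, W, a, b, lr)   # CD-1 on the current layer only
            params.append((W, a, b))
            x = sigmoid(b + x @ W)             # hidden probabilities feed the next RBM
        return params

A supervised layer (for example, a classifier on the top-level features) can then be fine-tuned on top, as described above.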

Literature

  • Fischer, Asja; Igel, Christian (2012), "An Introduction to Restricted Boltzmann Machines", Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science, vol. 7441, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 14–36, doi:10.1007/978-3-642-33275-3_2, ISBN 978-3-642-33274-6

References

  1. ^ Sherrington, David; Kirkpatrick, Scott (1975), "Solvable Model of a Spin-Glass", Physical Review Letters, 35 (35): 1792–1796, Bibcode:1975PhRvL..35.1792S, doi:10.1103/PhysRevLett.35.1792
  2. ^ Smolensky, Paul (1986). "Chapter 6: Information Processing in Dynamical Systems: Foundations of Harmony Theory" (PDF). In Rumelhart, David E.; McLelland, James L. (eds.). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press. pp. 194–281. ISBN 0-262-68053-X.
  3. ^ Hinton, G. E.; Salakhutdinov, R. R. (2006). "Reducing the Dimensionality of Data with Neural Networks" (PDF). Science. 313 (5786): 504–507. Bibcode:2006Sci...313..504H. doi:10.1126/science.1127647. PMID 16873662. S2CID 1658773. Archived from the original (PDF) on 2015-12-23. Retrieved 2015-12-02.
  4. ^ Larochelle, H.; Bengio, Y. (2008). Classification using discriminative restricted Boltzmann machines (PDF). Proceedings of the 25th international conference on Machine learning - ICML '08. p. 536. doi:10.1145/1390156.1390224. ISBN 978-1-60558-205-4.
  5. ^ a b Salakhutdinov, R.; Mnih, A.; Hinton, G. (2007). Restricted Boltzmann machines for collaborative filtering. Proceedings of the 24th international conference on Machine learning - ICML '07. p. 791. doi:10.1145/1273496.1273596. ISBN 978-1-59593-793-3.
  6. ^ Coates, Adam; Lee, Honglak; Ng, Andrew Y. (2011). An analysis of single-layer networks in unsupervised feature learning (PDF). International Conference on Artificial Intelligence and Statistics (AISTATS). Archived from the original (PDF) on 2014-12-20. Retrieved 2014-12-19.
  7. ^ a b Ruslan Salakhutdinov and Geoffrey Hinton (2010). Replicated softmax: an undirected topic model. Archived 2012-05-25 at the Wayback Machine. Neural Information Processing Systems 23.
  8. ^ Bravi, Barbara; Di Gioacchino, Andrea; Fernandez-de-Cossio-Diaz, Jorge; Walczak, Aleksandra M; Mora, Thierry; Cocco, Simona; Monasson, Rémi (2023-09-08). Bitbol, Anne-Florence; Eisen, Michael B (eds.). "A transfer-learning approach to predict antigen immunogenicity and T-cell receptor specificity". eLife. 12 e85126. doi:10.7554/eLife.85126. ISSN 2050-084X. PMC 10522340. PMID 37681658.
  9. ^ Carleo, Giuseppe; Troyer, Matthias (2017-02-10). "Solving the quantum many-body problem with artificial neural networks". Science. 355 (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. ISSN 0036-8075. PMID 28183973. S2CID 206651104.
  10. ^ Melko, Roger G.; Carleo, Giuseppe; Carrasquilla, Juan; Cirac, J. Ignacio (September 2019). "Restricted Boltzmann machines in quantum physics". Nature Physics. 15 (9): 887–892. Bibcode:2019NatPh..15..887M. doi:10.1038/s41567-019-0545-1. ISSN 1745-2481. S2CID 256704838.
  11. ^ Pan, Ruizhi; Clark, Charles W. (2024). "Efficiency of neural-network state representations of one-dimensional quantum spin systems". Physical Review Research. 6 (2) 023193. arXiv:2302.00173. Bibcode:2024PhRvR...6b3193P. doi:10.1103/PhysRevResearch.6.023193.
  12. ^ a b Miguel Á. Carreira-Perpiñán and Geoffrey Hinton (2005). On contrastive divergence learning. Artificial Intelligence and Statistics.
  13. ^ Hinton, G. (2009). "Deep belief networks". Scholarpedia. 4 (5): 5947. Bibcode:2009SchpJ...4.5947H. doi:10.4249/scholarpedia.5947.
  14. ^ a b c d Geoffrey Hinton (2010). A Practical Guide to Training Restricted Boltzmann Machines. UTML TR 2010–003, University of Toronto.
  15. ^ a b Sutskever, Ilya; Tieleman, Tijmen (2010). "On the convergence properties of contrastive divergence" (PDF). Proc. 13th Int'l Conf. on AI and Statistics (AISTATS). Archived from the original (PDF) on 2015-06-10.
  16. ^ a b Asja Fischer and Christian Igel. Training Restricted Boltzmann Machines: An Introduction. Archived 2015-06-10 at the Wayback Machine. Pattern Recognition 47, pp. 25–39, 2014.
  17. ^ María Angélica Cueto; Jason Morton; Bernd Sturmfels (2010). "Geometry of the restricted Boltzmann machine". Algebraic Methods in Statistics and Probability. 516. American Mathematical Society. arXiv:0908.4425. Bibcode:2009arXiv0908.4425A.
  18. ^ Geoffrey Hinton (1999). Products of Experts. ICANN 1999.
  19. ^ Hinton, G. E. (2002). "Training Products of Experts by Minimizing Contrastive Divergence" (PDF). Neural Computation. 14 (8): 1771–1800. doi:10.1162/089976602760128018. PMID 12180402. S2CID 207596505.
