Activation function

From Wikipedia, the free encyclopedia
Artificial neural network node function
For the formalism used to approximate the influence of an extracellular electrical field on neurons, see activating function. For a linear system’s transfer function, see transfer function.

[Figure: Logistic activation function]

In artificial neural networks, the activation function of a node is a function that calculates the output of the node based on its individual inputs and their weights. Nontrivial problems can be solved using only a few nodes if the activation function is nonlinear.[1]
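As a concrete illustration of this definition, here is a minimal sketch (not from the article; the function and variable names are illustrative) of a single node that forms a weighted sum of its inputs and passes it through a nonlinear activation:

```python
import math

def node_output(inputs, weights, bias, activation):
    """Output of one node: activation applied to the weighted sum of its inputs."""
    pre_activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(pre_activation)

# Logistic (sigmoid) activation, one of the functions discussed in this article.
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))

print(node_output([0.5, -1.2, 3.0], [0.4, 0.1, -0.2], bias=0.05, activation=sigmoid))
```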

Modern activation functions include the logistic (sigmoid) function used in the 2012 speech recognition model developed by Hinton et al.;[2] the ReLU used in the 2012 AlexNet computer vision model[3][4] and in the 2015 ResNet model; and the smooth version of the ReLU, the GELU, which was used in the 2018 BERT model.[5]

Comparison of activation functions


Aside from their empirical performance, activation functions also have different mathematical properties:

Nonlinear
When the activation function is non-linear, a two-layer neural network can be proven to be a universal function approximator.[6] This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property. When multiple layers use the identity activation function, the entire network is equivalent to a single-layer model.
Range
When the range of the activation function is finite, gradient-based training methods tend to be more stable, because pattern presentations significantly affect only limited weights. When the range is infinite, training is generally more efficient because pattern presentations significantly affect most of the weights. In the latter case, smaller learning rates are typically necessary.[citation needed]
Continuously differentiable
This property is desirable for enabling gradient-based optimization methods (ReLU is not continuously differentiable and has some issues with gradient-based optimization, but it is still possible). The binary step activation function is not differentiable at 0, and its derivative is 0 for all other values, so gradient-based methods can make no progress with it.[7]

These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances in variational autoencoders.
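For example, the strictly positive range of the softplus can be used to keep a predicted variance valid. The following sketch (illustrative only; not from the article) maps an unconstrained network output to a positive variance:

```python
import math

def softplus(x):
    # ln(1 + e^x); strictly positive for every real x
    return math.log1p(math.exp(x))

# An unconstrained "raw" network output is mapped to a valid (positive) variance.
for raw in (-5.0, 0.0, 5.0):
    print(raw, "->", softplus(raw))
```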

Mathematical details


The most common activation functions can be divided into three categories: ridge functions, radial functions, and fold functions.

An activation function $f$ is saturating if $\lim_{|v|\to\infty}|\nabla f(v)|=0$. It is nonsaturating if $\lim_{|v|\to\infty}|\nabla f(v)|\neq 0$. Non-saturating activation functions, such as ReLU, may be better than saturating activation functions, because they are less likely to suffer from the vanishing gradient problem.[8]
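A quick numerical check of this definition (a sketch for the scalar case, with the derivatives written analytically): the sigmoid's derivative vanishes as $|v|$ grows, while the ReLU's derivative stays at 1 for large positive $v$.

```python
import math

def sigmoid_grad(v):
    s = 1.0 / (1.0 + math.exp(-v))
    return s * (1.0 - s)          # tends to 0 as |v| -> infinity (saturating)

def relu_grad(v):
    return 1.0 if v > 0 else 0.0  # stays at 1 for large positive v (non-saturating)

for v in (1.0, 10.0, 100.0):
    print(f"v={v:>6}: |sigmoid'|={sigmoid_grad(v):.3e}  |ReLU'|={relu_grad(v):.1f}")
```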

Ridge activation functions

Main article: Ridge function

Ridge functions are multivariate functions acting on a linear combination of the input variables. Commonly used examples include the linear activation $\phi(\mathbf{v})=a+\mathbf{v}'\mathbf{b}$, the ReLU activation $\phi(\mathbf{v})=\max(0,a+\mathbf{v}'\mathbf{b})$, the Heaviside activation $\phi(\mathbf{v})=1$ if $a+\mathbf{v}'\mathbf{b}>0$ and $0$ otherwise, and the logistic activation $\phi(\mathbf{v})=(1+e^{-a-\mathbf{v}'\mathbf{b}})^{-1}$.

In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell.[9] In its simplest form, this function is binary: either the neuron is firing or not. Neurons also cannot fire faster than a certain rate, motivating sigmoid activation functions whose range is a finite interval.

The function looks like $\phi(\mathbf{v})=U(a+\mathbf{v}'\mathbf{b})$, where $U$ is the Heaviside step function.

If a line has a positive slope, on the other hand, it may reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form $\phi(\mathbf{v})=a+\mathbf{v}'\mathbf{b}$.
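The two ridge forms above act on the same linear combination $a+\mathbf{v}'\mathbf{b}$; a small sketch (illustrative names, NumPy assumed) makes this concrete:

```python
import numpy as np

def heaviside_ridge(v, a, b):
    # phi(v) = U(a + v'b), with U the Heaviside step function
    return np.heaviside(a + v @ b, 1.0)

def linear_ridge(v, a, b):
    # phi(v) = a + v'b
    return a + v @ b

v = np.array([0.2, -0.5, 1.0])
b = np.array([1.5, 0.3, -0.4])
a = 0.1
print(heaviside_ridge(v, a, b), linear_ridge(v, a, b))
```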

[Figure: Rectified linear unit and Gaussian error linear unit activation functions]

Radial activation functions

Main article: Radial function

A special class of activation functions known as radial basis functions (RBFs) are used in RBF networks. These activation functions can take many forms; commonly used examples include the Gaussian $\phi(\mathbf{v})=\exp\!\left(-\frac{\lVert\mathbf{v}-\mathbf{c}\rVert^{2}}{2\sigma^{2}}\right)$ and the multiquadratic $\phi(\mathbf{v})=\sqrt{\lVert\mathbf{v}-\mathbf{c}\rVert^{2}+a^{2}}$,

where $\mathbf{c}$ is the vector representing the function center and $a$ and $\sigma$ are parameters affecting the spread of the radius.
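A minimal sketch of the Gaussian radial basis activation with center $\mathbf{c}$ and width $\sigma$ (parameter names follow the text; the Gaussian is one common choice):

```python
import numpy as np

def gaussian_rbf(v, c, sigma):
    # phi(v) = exp(-||v - c||^2 / (2 sigma^2)); largest when v is at the center c
    return np.exp(-np.sum((v - c) ** 2) / (2.0 * sigma ** 2))

c = np.array([0.0, 1.0])
for point in (np.array([0.0, 1.0]), np.array([2.0, -1.0])):
    print(point, "->", gaussian_rbf(point, c, sigma=1.0))
```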

Other examples


Periodic functions can serve as activation functions. Usually the sinusoid is used, as any periodic function is decomposable into sinusoids by the Fourier transform.[10]

Quadratic activation maps $x\mapsto x^{2}$.[11][12]
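A brief sketch (illustrative only) applying these two choices element-wise to a pre-activation vector:

```python
import numpy as np

pre_activation = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

sinusoid_out = np.sin(pre_activation)   # periodic activation
quadratic_out = pre_activation ** 2     # quadratic activation x -> x^2

print(sinusoid_out)
print(quadratic_out)
```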

Folding activation functions

Main article: Fold function

Folding activation functions are extensively used in the pooling layers in convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the mean, minimum or maximum. In multiclass classification the softmax activation is often used.
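A short sketch of folding activations (names are illustrative): aggregation over a set of inputs, as in pooling, and the softmax over class scores.

```python
import numpy as np

inputs = np.array([0.2, 1.5, -0.3, 0.8])

# Aggregating (folding) activations reduce many inputs to one value.
print("mean:", inputs.mean(), "min:", inputs.min(), "max:", inputs.max())

def softmax(x):
    z = x - np.max(x)              # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()             # entries lie in (0, 1) and sum to 1

print("softmax:", softmax(inputs))
```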

Table of activation functions


The following table compares the properties of several activation functions that are functions of one fold $x$ from the previous layer or layers:

| Name | Function, $g(x)$ | Derivative of $g$, $g'(x)$ | Range | Order of continuity |
|---|---|---|---|---|
| Identity | $x$ | $1$ | $(-\infty,\infty)$ | $C^{\infty}$ |
| Binary step | $\begin{cases}0&\text{if }x<0\\1&\text{if }x\geq 0\end{cases}$ | $0$ | $\{0,1\}$ | $C^{-1}$ |
| Logistic, sigmoid, or soft step | $\sigma(x)\doteq\frac{1}{1+e^{-x}}$ | $g(x)(1-g(x))$ | $(0,1)$ | $C^{\infty}$ |
| Hyperbolic tangent (tanh) | $\tanh(x)\doteq\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$ | $1-g(x)^{2}$ | $(-1,1)$ | $C^{\infty}$ |
| Soboleva modified hyperbolic tangent (smht) | $\operatorname{smht}(x)\doteq\frac{e^{ax}-e^{-bx}}{e^{cx}+e^{-dx}}$ | | $(-1,1)$ | $C^{\infty}$ |
| Softsign | $\frac{x}{1+\lvert x\rvert}$ | $\frac{1}{(1+\lvert x\rvert)^{2}}$ | $(-1,1)$ | $C^{1}$ |
| Rectified linear unit (ReLU)[13] | $(x)^{+}\doteq\begin{cases}0&\text{if }x\leq 0\\x&\text{if }x>0\end{cases}=\max(0,x)=x\,\mathbf{1}_{x>0}$ | $\begin{cases}0&\text{if }x<0\\1&\text{if }x>0\end{cases}$ | $[0,\infty)$ | $C^{0}$ |
| Gaussian error linear unit (GELU)[5] | $\frac{1}{2}x\left(1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right)=x\Phi(x)$, where $\operatorname{erf}$ is the Gaussian error function | $\Phi(x)+x\phi(x)$, where $\phi(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^{2}}$ is the probability density function of the standard Gaussian distribution | $(-0.17\ldots,\infty)$ | $C^{\infty}$ |
| Softplus[14] | $\ln\left(1+e^{x}\right)$ | $\frac{1}{1+e^{-x}}$ | $(0,\infty)$ | $C^{\infty}$ |
| Exponential linear unit (ELU)[15] | $\begin{cases}\alpha\left(e^{x}-1\right)&\text{if }x\leq 0\\x&\text{if }x>0\end{cases}$ with parameter $\alpha$ | $\begin{cases}\alpha e^{x}&\text{if }x<0\\1&\text{if }x>0\end{cases}$ | $(-\alpha,\infty)$ | $\begin{cases}C^{1}&\text{if }\alpha=1\\C^{0}&\text{otherwise}\end{cases}$ |
| Scaled exponential linear unit (SELU)[16] | $\lambda\begin{cases}\alpha(e^{x}-1)&\text{if }x<0\\x&\text{if }x\geq 0\end{cases}$ with parameters $\lambda=1.0507$ and $\alpha=1.67326$ | $\lambda\begin{cases}\alpha e^{x}&\text{if }x<0\\1&\text{if }x\geq 0\end{cases}$ | $(-\lambda\alpha,\infty)$ | $C^{0}$ |
| Leaky rectified linear unit (Leaky ReLU)[17] | $\begin{cases}0.01x&\text{if }x\leq 0\\x&\text{if }x>0\end{cases}$ | $\begin{cases}0.01&\text{if }x<0\\1&\text{if }x>0\end{cases}$ | $(-\infty,\infty)$ | $C^{0}$ |
| Parametric rectified linear unit (PReLU)[18] | $\begin{cases}\alpha x&\text{if }x<0\\x&\text{if }x\geq 0\end{cases}$ with parameter $\alpha$ | $\begin{cases}\alpha&\text{if }x<0\\1&\text{if }x\geq 0\end{cases}$ | $(-\infty,\infty)$ | $C^{0}$ |
| Rectified Parametric Sigmoid Units (flexible, 5 parameters)[19] | $\alpha(2x\,\mathbf{1}_{\{x\geq\lambda\}}-g_{\lambda,\sigma,\mu,\beta}(x))+(1-\alpha)g_{\lambda,\sigma,\mu,\beta}(x)$, where $g_{\lambda,\sigma,\mu,\beta}(x)=\frac{(x-\lambda)\,\mathbf{1}_{\{x\geq\lambda\}}}{1+e^{-\operatorname{sgn}(x-\mu)\left(\frac{\lvert x-\mu\rvert}{\sigma}\right)^{\beta}}}$ | | $(-\infty,+\infty)$ | $C^{0}$ |
| Sigmoid linear unit (SiLU,[5] sigmoid shrinkage,[20] SiL,[21] or Swish-1[22]) | $\frac{x}{1+e^{-x}}$ | $\frac{1+e^{-x}+xe^{-x}}{\left(1+e^{-x}\right)^{2}}$ | $[-0.278\ldots,\infty)$ | $C^{\infty}$ |
| Exponential Linear Sigmoid SquasHing (ELiSH)[23] | $\begin{cases}\frac{e^{x}-1}{1+e^{-x}}&\text{if }x<0\\\frac{x}{1+e^{-x}}&\text{if }x\geq 0\end{cases}$ | $\begin{cases}\frac{2e^{2x}+e^{3x}-e^{x}}{e^{2x}+2e^{x}+1}&\text{if }x<0\\\frac{xe^{x}+e^{2x}+e^{x}}{e^{2x}+2e^{x}+1}&\text{if }x\geq 0\end{cases}$ | $[-0.881\ldots,\infty)$ | $C^{1}$ |
| Gaussian | $e^{-x^{2}}$ | $-2xe^{-x^{2}}$ | $(0,1]$ | $C^{\infty}$ |
| Sinusoid | $\sin x$ | $\cos x$ | $[-1,1]$ | $C^{\infty}$ |
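As a sanity check on a few rows of the table, the sketch below (illustrative only, using the Python standard library's erf for $\Phi$) implements several of the listed functions together with their tabulated derivatives and compares those derivatives against a central finite difference:

```python
import math

SQRT2 = math.sqrt(2.0)
SQRT2PI = math.sqrt(2.0 * math.pi)

def std_normal_cdf(x):          # Phi(x)
    return 0.5 * (1.0 + math.erf(x / SQRT2))

def std_normal_pdf(x):          # phi(x)
    return math.exp(-0.5 * x * x) / SQRT2PI

# (function, tabulated derivative) pairs for a few rows of the table
activations = {
    "ReLU":     (lambda x: max(0.0, x),
                 lambda x: 1.0 if x > 0 else 0.0),
    "GELU":     (lambda x: x * std_normal_cdf(x),
                 lambda x: std_normal_cdf(x) + x * std_normal_pdf(x)),
    "softplus": (lambda x: math.log1p(math.exp(x)),
                 lambda x: 1.0 / (1.0 + math.exp(-x))),
}

def numeric_derivative(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)

for name, (f, df) in activations.items():
    x = 0.7
    print(f"{name:9s} tabulated={df(x):.6f}  numeric={numeric_derivative(f, x):.6f}")
```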

The following table lists activation functions that are not functions of a single fold $x$ from the previous layer or layers:

| Name | Equation, $g_{i}(\vec{x})$ | Derivatives, $\frac{\partial g_{i}(\vec{x})}{\partial x_{j}}$ | Range | Order of continuity |
|---|---|---|---|---|
| Softmax | $\frac{e^{x_{i}}}{\sum_{j=1}^{J}e^{x_{j}}}$ for $i=1,\ldots,J$ | $g_{i}(\vec{x})\left(\delta_{ij}-g_{j}(\vec{x})\right)$[note 1][note 2] | $(0,1)$ | $C^{\infty}$ |
| Maxout[24] | $\max_{i}x_{i}$ | $\begin{cases}1&\text{if }j=\underset{i}{\operatorname{argmax}}\,x_{i}\\0&\text{if }j\neq\underset{i}{\operatorname{argmax}}\,x_{i}\end{cases}$ | $(-\infty,\infty)$ | $C^{0}$ |
Note 1: Here, $\delta_{ij}$ is the Kronecker delta.
Note 2: For instance, $j$ could iterate over the number of kernels of the previous neural network layer while $i$ iterates over the number of kernels of the current layer.
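The softmax derivative formula above can be checked directly; the following sketch (illustrative, NumPy assumed) builds the Jacobian $g_{i}(\delta_{ij}-g_{j})$ and compares it against finite differences:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_jacobian(x):
    g = softmax(x)
    # J[i, j] = g_i * (delta_ij - g_j), with delta the Kronecker delta
    return np.diag(g) - np.outer(g, g)

x = np.array([0.5, -1.0, 2.0])
analytic = softmax_jacobian(x)

# Central finite differences for comparison
h = 1e-6
numeric = np.zeros_like(analytic)
for j in range(len(x)):
    step = np.zeros_like(x)
    step[j] = h
    numeric[:, j] = (softmax(x + step) - softmax(x - step)) / (2 * h)

print(np.allclose(analytic, numeric, atol=1e-6))  # True
```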

Quantum activation functions

Main article: Quantum function

In quantum neural networks programmed on gate-model quantum computers, based on quantum perceptrons instead of variational quantum circuits, the nonlinearity of the activation function can be implemented without measuring the output of each perceptron at each layer. Quantum properties loaded within the circuit, such as superposition, can be preserved by creating the Taylor series of the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a desired approximation degree. Because such quantum circuits are flexible, they can be designed to approximate any arbitrary classical activation function.[25]
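The Taylor-series idea can be illustrated classically (this is only a sketch of the approximation step, not a quantum circuit): truncating the series of an activation such as tanh at increasing degrees improves the approximation of the nonlinearity.

```python
import math

def tanh_taylor(x, degree):
    # Truncated Maclaurin series of tanh: x - x^3/3 + 2x^5/15 - 17x^7/315 + ...
    coefficients = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0, 7: -17.0 / 315.0}
    return sum(c * x ** p for p, c in coefficients.items() if p <= degree)

x = 0.5
for degree in (1, 3, 5, 7):
    approx = tanh_taylor(x, degree)
    print(f"degree {degree}: {approx:.6f}  (exact tanh: {math.tanh(x):.6f})")
```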


References

  1. Hinkelmann, Knut. "Neural Networks, p. 7" (PDF). University of Applied Sciences Northwestern Switzerland. Archived from the original (PDF) on 6 October 2018. Retrieved 6 October 2018.
  2. Hinton, Geoffrey; Deng, Li; Yu, Dong; Dahl, George; Mohamed, Abdel-rahman; Jaitly, Navdeep; Senior, Andrew; Vanhoucke, Vincent; Nguyen, Patrick; Sainath, Tara; Kingsbury, Brian (2012). "Deep Neural Networks for Acoustic Modeling in Speech Recognition". IEEE Signal Processing Magazine. 29 (6): 82–97. doi:10.1109/MSP.2012.2205597. S2CID 206485943.
  3. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (24 May 2017). "ImageNet classification with deep convolutional neural networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386. ISSN 0001-0782.
  4. Al-johania, Norah; Elrefaei, Lamiaa (30 June 2019). "Dorsal Hand Vein Recognition by Convolutional Neural Networks: Feature Learning and Transfer Learning Approaches" (PDF). International Journal of Intelligent Engineering and Systems. 12 (3): 178–191. doi:10.22266/ijies2019.0630.19.
  5. Hendrycks, Dan; Gimpel, Kevin (2016). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG].
  6. Cybenko, G. (December 1989). "Approximation by superpositions of a sigmoidal function" (PDF). Mathematics of Control, Signals, and Systems. 2 (4): 303–314. Bibcode:1989MCSS....2..303C. doi:10.1007/BF02551274. ISSN 0932-4194. S2CID 3958369.
  7. Snyman, Jan (3 March 2005). Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms. Springer Science & Business Media. ISBN 978-0-387-24348-1.
  8. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (24 May 2017). "ImageNet classification with deep convolutional neural networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386. ISSN 0001-0782. S2CID 195908774.
  9. Hodgkin, A. L.; Huxley, A. F. (28 August 1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". The Journal of Physiology. 117 (4): 500–544. doi:10.1113/jphysiol.1952.sp004764. PMC 1392413. PMID 12991237.
  10. Sitzmann, Vincent; Martel, Julien; Bergman, Alexander; Lindell, David; Wetzstein, Gordon (2020). "Implicit Neural Representations with Periodic Activation Functions". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 7462–7473. arXiv:2006.09661.
  11. Flake, Gary William (1998). "Square Unit Augmented Radially Extended Multilayer Perceptrons". In Orr, Genevieve B.; Müller, Klaus-Robert (eds.). Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science. Vol. 1524. Berlin, Heidelberg: Springer. pp. 145–163. doi:10.1007/3-540-49430-8_8. ISBN 978-3-540-49430-0.
  12. Du, Simon; Lee, Jason (3 July 2018). "On the Power of Over-parametrization in Neural Networks with Quadratic Activation". Proceedings of the 35th International Conference on Machine Learning. PMLR: 1329–1338. arXiv:1803.01206.
  13. Nair, Vinod; Hinton, Geoffrey E. (2010). "Rectified Linear Units Improve Restricted Boltzmann Machines". 27th International Conference on Machine Learning (ICML'10). USA: Omnipress. pp. 807–814. ISBN 9781605589077.
  14. Glorot, Xavier; Bordes, Antoine; Bengio, Yoshua (2011). "Deep sparse rectifier neural networks" (PDF). International Conference on Artificial Intelligence and Statistics.
  15. Clevert, Djork-Arné; Unterthiner, Thomas; Hochreiter, Sepp (23 November 2015). "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)". arXiv:1511.07289 [cs.LG].
  16. Klambauer, Günter; Unterthiner, Thomas; Mayr, Andreas; Hochreiter, Sepp (8 June 2017). "Self-Normalizing Neural Networks". Advances in Neural Information Processing Systems. 30 (2017). arXiv:1706.02515.
  17. Maas, Andrew L.; Hannun, Awni Y.; Ng, Andrew Y. (June 2013). "Rectifier nonlinearities improve neural network acoustic models". Proc. ICML. 30 (1). S2CID 16489696.
  18. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (6 February 2015). "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification". arXiv:1502.01852 [cs.CV].
  19. Atto, Abdourrahmane M.; Galichet, Sylvie; Pastor, Dominique; Méger, Nicolas (2023). "On joint parameterizations of linear and nonlinear functionals in neural networks". Neural Networks. Vol. 160. pp. 12–21. arXiv:2101.09948. doi:10.1016/j.neunet.2022.12.019. PMID 36592526.
  20. Atto, Abdourrahmane M.; Pastor, Dominique; Mercier, Grégoire (2008). "Smooth sigmoid wavelet shrinkage for non-parametric estimation" (PDF). 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 3265–3268. doi:10.1109/ICASSP.2008.4518347. ISBN 978-1-4244-1483-3. S2CID 9959057.
  21. Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji (2018). "Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning". Neural Networks. 107: 3–11. arXiv:1702.03118. doi:10.1016/j.neunet.2017.12.012. PMID 29395652. S2CID 6940861.
  22. Ramachandran, Prajit; Zoph, Barret; Le, Quoc V. (2017). "Searching for Activation Functions". arXiv:1710.05941 [cs.NE].
  23. Basirat, Mina; Roth, Peter M. (2 August 2018). "The Quest for the Golden Activation Function". arXiv:1808.00783.
  24. Goodfellow, Ian J.; Warde-Farley, David; Mirza, Mehdi; Courville, Aaron; Bengio, Yoshua (2013). "Maxout Networks". JMLR Workshop and Conference Proceedings. 28 (3): 1319–1327. arXiv:1302.4389.
  25. Maronese, Marco; Destri, Claudio; Prati, Enrico (2022). "Quantum activation functions for quantum neural networks". Quantum Information Processing. 21 (4): 128. arXiv:2201.03700. Bibcode:2022QuIP...21..128M. doi:10.1007/s11128-022-03466-0. ISSN 1570-0755.
