Knowledge distillation

From Wikipedia, the free encyclopedia
Machine learning method to transfer knowledge from a large model to a smaller one

In machine learning, knowledge distillation or model distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have more knowledge capacity than small models, this capacity might not be fully utilized. Evaluating a model can be just as computationally expensive even when it uses only a small part of its knowledge capacity. Knowledge distillation transfers knowledge from a large model to a smaller one without loss of validity. As smaller models are less expensive to evaluate, they can be deployed on less powerful hardware (such as a mobile device).[1]

There is also a less common technique called reverse knowledge distillation, where knowledge is transferred from a smaller model to a larger one.[2]

Model distillation is not to be confused with model compression, which describes methods to decrease the size of a large model itself, without training a new model. Model compression generally preserves the architecture and the nominal parameter count of the model, while decreasing the bits per parameter.

Knowledge distillation has been successfully used in several applications of machine learning such as object detection,[3] acoustic models,[4] and natural language processing.[5] It has also been introduced to graph neural networks applicable to non-grid data.[6]

Methods


Knowledge transfer from a large model to a small one somehow needs to teach the latter without loss of validity. If both models are trained on the same data, the smaller model may have insufficient capacity to learn a concise knowledge representation compared to the large model. However, some information about a concise knowledge representation is encoded in the pseudolikelihoods assigned to its output: when a model correctly predicts a class, it assigns a large value to the output variable corresponding to such class, and smaller values to the other output variables. The distribution of values among the outputs for a record provides information on how the large model represents knowledge. Therefore, the goal of economical deployment of a valid model can be achieved by training only the large model on the data, exploiting its better ability to learn concise knowledge representations, and then distilling such knowledge into the smaller model, by training it to learn the soft output of the large model.[1]
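
A minimal training-step sketch of this idea follows, assuming a trained `teacher` and a smaller, untrained `student`, both ordinary PyTorch classifiers returning logits; the function name and hyperparameters are illustrative, not taken from the cited papers.

```python
# Sketch only: `teacher` and `student` are assumed to be ordinary PyTorch
# classification models returning logits; names and values are illustrative.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, x, temperature=4.0):
    """One step of training the student on the teacher's soft outputs."""
    with torch.no_grad():                                   # the teacher stays fixed
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # Cross-entropy between the soft targets and the student's soft outputs.
    loss = -(teacher_probs * student_log_probs).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```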

Mathematical formulation


Given a large model as a function of the vector variable $\mathbf{x}$, trained for a specific classification task, typically the final layer of the network is a softmax in the form

$$y_i(\mathbf{x}\mid t) = \frac{e^{z_i(\mathbf{x})/t}}{\sum_j e^{z_j(\mathbf{x})/t}}$$

where $t$ is the temperature, a parameter which is set to 1 for a standard softmax. The softmax operator converts the logit values $z_i(\mathbf{x})$ to pseudo-probabilities: higher temperature values generate softer distributions of pseudo-probabilities among the output classes. Knowledge distillation consists of training a smaller network, called the distilled model, on a data set called the transfer set, which could correspond to the original training set or consist of new, possibly unlabeled data. A cross-entropy loss function is typically used, computed between the output of the distilled model $\mathbf{y}(\mathbf{x}\mid t)$ and the output of the large model $\hat{\mathbf{y}}(\mathbf{x}\mid t)$ on the same record (or the average of the individual outputs, if the large model is an ensemble), using a high value of the softmax temperature $t$ for both models:[1]

$$E(\mathbf{x}\mid t) = -\sum_i \hat{y}_i(\mathbf{x}\mid t)\,\log y_i(\mathbf{x}\mid t).$$

In this context, a high temperature increases the entropy of the output, therefore providing more information for the distilled model to learn from compared to hard targets, while at the same time reducing the variance of the gradient between different records, thus allowing a higher learning rate.[1]
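
The effect of the temperature and the resulting soft-target loss can be illustrated with a short NumPy sketch; the logit values below are made up for demonstration.

```python
import numpy as np

def softmax_t(z, t=1.0):
    """Temperature softmax: y_i = exp(z_i / t) / sum_j exp(z_j / t)."""
    e = np.exp((z - z.max()) / t)        # subtract the max for numerical stability
    return e / e.sum()

z_teacher = np.array([8.0, 2.0, -1.0])   # hypothetical large-model logits
z_student = np.array([5.0, 3.0, 0.0])    # hypothetical distilled-model logits

for t in (1.0, 5.0, 20.0):
    y_hat = softmax_t(z_teacher, t)                  # soft targets
    y = softmax_t(z_student, t)                      # distilled-model outputs
    E = -np.sum(y_hat * np.log(y))                   # E(x|t) = -sum_i yhat_i log y_i
    print(t, y_hat.round(3), round(E, 4))            # higher t gives softer, higher-entropy targets
```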

If ground truth is available for the transfer set, the process can be strengthened by adding to the loss the cross-entropy between the output $y_i(\mathbf{x}\mid 1)$ of the distilled model, computed with $t=1$, and the known label $\bar{y}_i$:

$$E(\mathbf{x}\mid t) = -t^2 \sum_i \hat{y}_i(\mathbf{x}\mid t)\,\log y_i(\mathbf{x}\mid t) \;-\; \sum_i \bar{y}_i \log y_i(\mathbf{x}\mid 1)$$

where the component of the loss with respect to the large model is weighted by a factor of $t^2$, since, as the temperature increases, the gradient of the loss with respect to the model weights scales by a factor of $\frac{1}{t^2}$.[1]
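
A sketch of this combined loss, again with illustrative NumPy values, might look as follows:

```python
import numpy as np

def softmax_t(z, t=1.0):
    e = np.exp((z - z.max()) / t)
    return e / e.sum()

def combined_loss(z_student, z_teacher, y_true, t=4.0):
    """Soft term (weighted by t^2) plus hard cross-entropy against the true label."""
    soft = -np.sum(softmax_t(z_teacher, t) * np.log(softmax_t(z_student, t)))
    hard = -np.sum(y_true * np.log(softmax_t(z_student, 1.0)))
    return t**2 * soft + hard

z_teacher = np.array([8.0, 2.0, -1.0])   # hypothetical logits
z_student = np.array([5.0, 3.0, 0.0])
y_true = np.array([1.0, 0.0, 0.0])       # one-hot ground-truth label
print(combined_loss(z_student, z_teacher, y_true))
```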

Relationship with model compression


Under the assumption that the logits have zero mean, it is possible to show that model compression is a special case of knowledge distillation. The gradient of the knowledge distillation loss $E$ with respect to the logit $z_i$ of the distilled model is given by

$$
\begin{aligned}
\frac{\partial E}{\partial z_i} &= -\frac{\partial}{\partial z_i}\sum_j \hat{y}_j \log y_j \\
&= -\frac{\partial}{\partial z_i}\hat{y}_i \log y_i + \left(-\frac{\partial}{\partial z_i}\sum_{k\neq i}\hat{y}_k \log y_k\right) \\
&= -\hat{y}_i \frac{1}{y_i}\frac{\partial y_i}{\partial z_i} + \sum_{k\neq i}\left(-\hat{y}_k \cdot \frac{1}{y_k}\cdot e^{z_k/t}\cdot\left(-\frac{1}{\left(\sum_j e^{z_j/t}\right)^2}\right)\cdot e^{z_i/t}\cdot\frac{1}{t}\right) \\
&= -\hat{y}_i \frac{1}{y_i}\frac{\partial}{\partial z_i}\frac{e^{z_i/t}}{\sum_j e^{z_j/t}} + \sum_{k\neq i}\left(\hat{y}_k\cdot\frac{1}{y_k}\cdot y_k\cdot y_i\cdot\frac{1}{t}\right) \\
&= -\hat{y}_i \frac{1}{y_i}\left(\frac{\frac{1}{t}e^{z_i/t}\sum_j e^{z_j/t} - \frac{1}{t}\left(e^{z_i/t}\right)^2}{\left(\sum_j e^{z_j/t}\right)^2}\right) + \frac{y_i\sum_{k\neq i}\hat{y}_k}{t} \\
&= -\hat{y}_i \frac{1}{y_i}\left(\frac{y_i}{t}-\frac{y_i^2}{t}\right) + \frac{y_i(1-\hat{y}_i)}{t} \\
&= \frac{1}{t}\left(y_i - \hat{y}_i\right) \\
&= \frac{1}{t}\left(\frac{e^{z_i/t}}{\sum_j e^{z_j/t}} - \frac{e^{\hat{z}_i/t}}{\sum_j e^{\hat{z}_j/t}}\right)
\end{aligned}
$$

where $\hat{z}_i$ are the logits of the large model. For large values of $t$ this can be approximated as

$$\frac{1}{t}\left(\frac{1+\frac{z_i}{t}}{N+\sum_j \frac{z_j}{t}} \;-\; \frac{1+\frac{\hat{z}_i}{t}}{N+\sum_j \frac{\hat{z}_j}{t}}\right)$$

and under the zero-mean hypothesis $\sum_j z_j = \sum_j \hat{z}_j = 0$ it becomes $\frac{z_i - \hat{z}_i}{N t^2}$, which is, up to the factor $\frac{1}{N t^2}$, the derivative of $\frac{1}{2}\left(z_i - \hat{z}_i\right)^2$, i.e. the loss is equivalent to matching the logits of the two models, as done in model compression.[1]
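
This limiting behaviour can be checked numerically; the following NumPy sketch (with arbitrary zero-mean logits and a large temperature) compares the distillation gradient with the logit-matching gradient:

```python
import numpy as np

def softmax_t(z, t):
    e = np.exp(z / t)
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=5);     z -= z.mean()          # distilled-model logits, zero mean
z_hat = rng.normal(size=5); z_hat -= z_hat.mean()  # large-model logits, zero mean
N, t = len(z), 1000.0                              # large temperature

kd_grad = (softmax_t(z, t) - softmax_t(z_hat, t)) / t   # (y_i - yhat_i) / t
logit_match_grad = (z - z_hat) / (N * t**2)             # (z_i - zhat_i) / (N t^2)
print(np.allclose(kd_grad, logit_match_grad))           # True: the two gradients agree
```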

"Optimal Brain Damage" algorithm


The Optimal Brain Damage (OBD) algorithm is as follows:[7]

Do until a desired level of sparsity or performance is reached:
  1. Train the network (by methods such as backpropagation) until a reasonable solution is obtained.
  2. Compute the saliencies for each parameter.
  3. Delete some lowest-saliency parameters.

Deleting a parameter means fixing the parameter to zero. The "saliency" of a parameter $\theta$ is defined as $\frac{1}{2}(\partial_\theta^2 L)\,\theta^2$, where $L$ is the loss function. The second derivative $\partial_\theta^2 L$ can be computed by second-order backpropagation.

The idea of Optimal Brain Damage is to approximate the loss function in a neighborhood of the optimal parameters $\theta^*$ by a Taylor expansion:
$$L(\theta) \approx L(\theta^*) + \frac{1}{2}\sum_i \left(\partial_{\theta_i}^2 L(\theta^*)\right)\left(\theta_i - \theta_i^*\right)^2$$
where $\nabla L(\theta^*) \approx 0$, since $\theta^*$ is optimal, and the cross-derivatives $\partial_{\theta_i}\partial_{\theta_j} L$ are neglected to save compute. Thus, the saliency of a parameter approximates the increase in loss if that parameter is deleted.
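
A minimal sketch of one OBD pruning pass, assuming the diagonal second derivatives of the loss are already available (for example from second-order backpropagation), could look like this; all names and values are illustrative.

```python
import numpy as np

def obd_prune(theta, hessian_diag, fraction=0.1):
    """Zero out the lowest-saliency parameters: saliency_i = 0.5 * H_ii * theta_i**2."""
    saliency = 0.5 * hessian_diag * theta**2
    n_prune = int(fraction * theta.size)
    prune_idx = np.argsort(saliency)[:n_prune]   # indices of the least salient parameters
    pruned = theta.copy()
    pruned[prune_idx] = 0.0                      # "deleting" a parameter fixes it to zero
    return pruned

theta = np.array([0.8, -0.05, 1.2, 0.01, -0.3])        # hypothetical trained parameters
hessian_diag = np.array([2.0, 1.5, 0.5, 3.0, 1.0])     # hypothetical diagonal Hessian
print(obd_prune(theta, hessian_diag, fraction=0.4))    # the two least salient weights become 0
```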

History


A related methodology was model compression or pruning, where a trained network is reduced in size. This was first done in 1965 by Alexey Ivakhnenko and Valentin Lapa in the USSR.[8][9][10] Their deep networks were trained layer by layer through regression analysis. Superfluous hidden units were pruned using a separate validation set.[11] Other neural network compression methods include Biased Weight Decay[12] and Optimal Brain Damage.[7]

An early example of neural network distillation was published by Jürgen Schmidhuber in 1991, in the field of recurrent neural networks (RNNs). The problem was sequence prediction for long sequences, i.e., deep learning. The approach used two RNNs: one (the automatizer) predicted the sequence, and the other (the chunker) predicted the errors of the automatizer. Simultaneously, the automatizer predicted the internal states of the chunker. Once the automatizer could predict the chunker's internal states well, it would start fixing the errors, and eventually the chunker was rendered obsolete, leaving just one RNN in the end.[13][14]

The idea of using the output of one neural network to train another neural network was also studied as the teacher-student network configuration.[15] In 1992, several papers studied the statistical mechanics of teacher-student configurations with committee machines[16][17] or parity machines.[18]

Compressing the knowledge of multiple models into a single neural network was called model compression in 2006: compression was achieved by training a smaller model on large amounts of pseudo-data labelled by a higher-performing ensemble, optimizing to match the logit of the compressed model to the logit of the ensemble.[19] The knowledge distillation preprint of Geoffrey Hinton et al. (2015)[1] formulated the concept and showed some results achieved in the task of image classification.

Knowledge distillation is also related to the concept of behavioral cloning discussed by Faraz Torabi et al.[20]

References

  1. Hinton, Geoffrey; Vinyals, Oriol; Dean, Jeff (2015). "Distilling the knowledge in a neural network". arXiv:1503.02531 [stat.ML].
  2. Xu, Yifan; Wu, Yuxiang; Hu, Zhiqiang; Xu, Hang; Wan, Zhongwei; Zhang, Yongfeng; Qiao, Yu; Wang, Zhen (2023). "RestGPT: Connecting Large Language Models with Real-World RESTful APIs". arXiv:2307.10698 [cs.CV].
  3. Chen, Guobin; Choi, Wongun; Yu, Xiang; Han, Tony; Chandraker, Manmohan (2017). "Learning efficient object detection models with knowledge distillation". Advances in Neural Information Processing Systems: 742–751.
  4. Asami, Taichi; Masumura, Ryo; Yamaguchi, Yoshikazu; Masataki, Hirokazu; Aono, Yushi (2017). Domain adaptation of DNN acoustic models using knowledge distillation. IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 5185–5189.
  5. Cui, Jia; Kingsbury, Brian; Ramabhadran, Bhuvana; Saon, George; Sercu, Tom; Audhkhasi, Kartik; Sethy, Abhinav; Nussbaum-Thom, Markus; Rosenberg, Andrew (2017). Knowledge distillation across ensembles of multilingual models for low-resource languages. IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 4825–4829.
  6. Yang, Yiding; Qiu, Jiayan; Song, Mingli; Tao, Dacheng; Wang, Xinchao (2020). "Distilling Knowledge from Graph Convolutional Networks" (PDF). Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 7072–7081. arXiv:2003.10477. Bibcode:2020arXiv200310477Y.
  7. LeCun, Yann; Denker, John; Solla, Sara (1989). "Optimal Brain Damage". Advances in Neural Information Processing Systems. 2. Morgan-Kaufmann.
  8. Ivakhnenko, A. G.; Lapa, V. G. (1967). Cybernetics and Forecasting Techniques. American Elsevier Publishing Co. ISBN 978-0-444-00020-0.
  9. Ivakhnenko, A. G. (March 1970). "Heuristic self-organization in problems of engineering cybernetics". Automatica. 6 (2): 207–219. doi:10.1016/0005-1098(70)90092-0.
  10. Ivakhnenko, Alexey (1971). "Polynomial theory of complex systems" (PDF). IEEE Transactions on Systems, Man, and Cybernetics. SMC-1 (4): 364–378. doi:10.1109/TSMC.1971.4308320.
  11. Schmidhuber, Jürgen (2022). "Annotated History of Modern AI and Deep Learning". arXiv:2212.11279 [cs.NE].
  12. Hanson, Stephen; Pratt, Lorien (1988). "Comparing Biases for Minimal Network Construction with Back-Propagation". Advances in Neural Information Processing Systems. 1. Morgan-Kaufmann.
  13. Schmidhuber, Jürgen (April 1991). "Neural Sequence Chunkers" (PDF). TR FKI-148, TU Munich.
  14. Schmidhuber, Jürgen (1992). "Learning complex, extended sequences using the principle of history compression" (PDF). Neural Computation. 4 (2): 234–242. doi:10.1162/neco.1992.4.2.234. S2CID 18271205.
  15. Watkin, Timothy L. H.; Rau, Albrecht; Biehl, Michael (1993). "The statistical mechanics of learning a rule". Reviews of Modern Physics. 65 (2): 499–556. Bibcode:1993RvMP...65..499W. doi:10.1103/RevModPhys.65.499.
  16. Schwarze, H.; Hertz, J. (1992). "Generalization in a Large Committee Machine". Europhysics Letters. 20 (4): 375–380. Bibcode:1992EL.....20..375S. doi:10.1209/0295-5075/20/4/015. ISSN 0295-5075.
  17. Mato, G.; Parga, N. (1992). "Generalization properties of multilayered neural networks". Journal of Physics A: Mathematical and General. 25 (19): 5047–5054. Bibcode:1992JPhA...25.5047M. doi:10.1088/0305-4470/25/19/017. ISSN 0305-4470.
  18. Hansel, D.; Mato, G.; Meunier, C. (1992). "Memorization Without Generalization in a Multilayered Neural Network". Europhysics Letters. 20 (5): 471–476. Bibcode:1992EL.....20..471H. doi:10.1209/0295-5075/20/5/015. ISSN 0295-5075.
  19. Buciluǎ, Cristian; Caruana, Rich; Niculescu-Mizil, Alexandru (2006). "Model compression". Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  20. Torabi, Faraz; Warnell, Garrett; Stone, Peter (2018). "Behavioral Cloning from Observation". arXiv:1805.01954 [cs.AI].
