Inductive bias

From Wikipedia, the free encyclopedia
Assumptions for inference in machine learning

The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs for given inputs that it has not encountered.[1] Inductive bias is anything that makes the algorithm learn one pattern instead of another (e.g., step functions in decision trees instead of continuous functions in linear regression models). Learning involves searching a space of solutions for one that provides a good explanation of the data. However, in many cases there may be multiple equally appropriate solutions.[2] An inductive bias allows a learning algorithm to prioritize one solution (or interpretation) over another, independently of the observed data.[3]

In machine learning, the aim is to construct algorithms that are able to learn to predict a certain target output. To achieve this, the learning algorithm is presented with training examples that demonstrate the intended relation between input and output values. The learner is then supposed to approximate the correct output, even for examples that were not shown during training. Without additional assumptions, this problem cannot be solved, since unseen situations might have an arbitrary output value. The necessary assumptions about the nature of the target function are subsumed under the phrase inductive bias.[1][4]

A classical example of an inductive bias is Occam's razor, assuming that the simplest consistent hypothesis about the target function is actually the best. Here, consistent means that the hypothesis of the learner yields correct outputs for all of the examples that have been given to the algorithm.
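
The meaning of consistent, and the way a simplicity bias chooses among several consistent hypotheses, can be sketched in a few lines of Python. Everything below (the threshold hypothesis class, the complexity ordering) is invented for illustration and is not from any particular library:

    # A minimal sketch of Occam's-razor-style selection: enumerate a
    # finite hypothesis class, keep only the hypotheses consistent with
    # every training example, and break ties with an explicit
    # simplicity preference that the data themselves do not supply.

    def threshold_rule(t):
        """Hypothesis: classify x as 1 iff x >= t."""
        return lambda x: 1 if x >= t else 0

    def simplest_consistent(thresholds, data):
        consistent = [t for t in thresholds
                      if all(threshold_rule(t)(x) == y for x, y in data)]
        # Both 0.5 and 0.6 survive below; preferring the smaller one is
        # an inductive bias, not a conclusion from the data.
        return min(consistent, default=None)

    data = [(0.2, 0), (0.4, 0), (0.7, 1), (0.9, 1)]  # (input, label)
    print(simplest_consistent([0.1, 0.5, 0.6, 0.8], data))  # -> 0.5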

Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. However, this strict formalism fails in many practical cases in which the inductive bias can only be given as a rough description (e.g., in the case of artificial neural networks), or not at all.
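
One such formalization, due to Mitchell [1], defines the inductive bias of a learner L as a set of assertions B such that, for every instance x_i in the instance space X and training data D_c,

    \forall x_i \in X \colon \; (B \land D_c \land x_i) \vdash L(x_i, D_c),

where \vdash denotes logical entailment and L(x_i, D_c) is the classification the learner assigns to x_i after training on D_c. The notation follows Mitchell's treatment and should be read as one standard formalization rather than the only one.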

Types


The following is a list of common inductive biases in machine learning algorithms.

  • Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Bayes classifier (see the first sketch after this list).
  • Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error. Although cross-validation may seem to be free of bias, the "no free lunch" theorems show that cross-validation must be biased, for example by assuming that there is no information encoded in the ordering of the data.
  • Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary. This is the bias used in support vector machines (see the second sketch after this list). The assumption is that distinct classes tend to be separated by wide boundaries.
  • Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis.
  • Minimum features: unless there is good evidence that a feature is useful, it should be deleted. This is the assumption behind feature selection algorithms.
  • Nearest neighbors: assume that most of the cases in a small neighborhood in feature space belong to the same class. Given a case for which the class is unknown, guess that it belongs to the same class as the majority in its immediate neighborhood. This is the bias used in the k-nearest neighbors algorithm (see the third sketch after this list). The assumption is that cases that are near each other tend to belong to the same class.
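
The conditional-independence bias can be made concrete in code. The following Python sketch is illustrative only (the function names and the Laplace smoothing are choices made for this example, not a library implementation); the independence assumption enters exactly where class-conditional probabilities are multiplied feature by feature:

    # Naive Bayes for binary features: P(x | c) is *assumed* to factor
    # into per-feature terms -- the "maximum conditional independence"
    # bias named in the list above.
    from collections import defaultdict

    def train_naive_bayes(X, y, alpha=1.0):
        """X: list of binary feature tuples; y: list of class labels."""
        classes = set(y)
        prior = {c: y.count(c) / len(y) for c in classes}
        cond = defaultdict(dict)          # P(feature j = 1 | class c)
        for c in classes:
            rows = [x for x, label in zip(X, y) if label == c]
            for j in range(len(X[0])):
                ones = sum(x[j] for x in rows)
                cond[c][j] = (ones + alpha) / (len(rows) + 2 * alpha)
        return prior, cond

    def predict(x, prior, cond):
        def score(c):
            p = prior[c]
            for j, v in enumerate(x):
                pj = cond[c][j]
                p *= pj if v else (1 - pj)   # independence assumption
            return p
        return max(prior, key=score)

    X = [(1, 1), (1, 0), (0, 1), (0, 0)]
    y = [1, 1, 0, 0]
    prior, cond = train_naive_bayes(X, y)
    print(predict((1, 1), prior, cond))      # -> 1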
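
The maximum-margin bias can be observed directly with a linear support vector machine. A minimal sketch, assuming scikit-learn is available (any linear-SVM implementation would serve equally well):

    # Among all lines separating the two groups, the SVM prefers the one
    # with the widest margin -- a preference supplied by the algorithm,
    # not by the training data.
    from sklearn.svm import SVC

    X = [[0, 0], [1, 0], [3, 3], [4, 3]]   # two linearly separable groups
    y = [0, 0, 1, 1]

    clf = SVC(kernel="linear", C=1e6)      # large C approximates a hard margin
    clf.fit(X, y)
    print(clf.coef_, clf.intercept_)       # the max-margin hyperplane
    print(clf.predict([[2, 1]]))           # classification of an unseen point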
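
The nearest-neighbors bias needs nothing beyond the standard library. A short illustrative sketch (Euclidean distance and k = 3 are arbitrary choices for the example):

    # An unlabeled case receives the majority class of its k closest
    # training cases, encoding the assumption that nearby points in
    # feature space share a class.
    from collections import Counter
    import math

    def knn_predict(train, query, k=3):
        """train: list of (point, label) pairs; query: a point."""
        nearest = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
             ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
    print(knn_predict(train, (0.2, 0.1)))  # -> "a"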

Shift of bias


Although most learning algorithms have a static bias, some algorithms are designed to shift their bias as they acquire more data.[5] This does not avoid bias, since the bias-shifting process itself must have a bias.
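
As a concrete illustration, consider a learner that weakens its bias only when the data demand it. The polynomial-degree scheme below is invented for this sketch and is not Utgoff's actual procedure; note the meta-bias it contains (the weakest-first search order and the error tolerance are themselves assumptions):

    # Bias shifting: start with a strong bias (low-degree polynomials)
    # and move to a weaker one whenever the current class cannot fit
    # the data to within the tolerance.
    import numpy as np

    def fit_with_shifting_bias(x, y, max_degree=5, tol=1e-3):
        for degree in range(1, max_degree + 1):   # strongest bias first
            coeffs = np.polyfit(x, y, degree)
            residual = np.mean((np.polyval(coeffs, x) - y) ** 2)
            if residual < tol:                    # current bias suffices
                return degree, coeffs
        return max_degree, coeffs

    x = np.linspace(-1, 1, 20)
    y = x**3 - x                                  # cubic target function
    print(fit_with_shifting_bias(x, y)[0])        # -> 3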

References

  1. ^ a b Mitchell, T. M. (1980), The need for biases in learning generalizations, CBM-TR 5-110, New Brunswick, New Jersey, USA: Rutgers University, CiteSeerX 10.1.1.19.5466
  2. ^ Goodman, Nelson (1955). "The new riddle of induction". Fact, Fiction, and Forecast. Harvard University Press. pp. 59–83. ISBN 978-0-674-29071-6.
  3. ^ Mitchell, Tom M. (1980). "The need for biases in learning generalizations" (PDF). Rutgers University Technical Report CBM-TR-117: 184–191.
  4. ^ DesJardins, M.; Gordon, D. F. (1995), "Evaluation and selection of biases in machine learning", Machine Learning, 20 (1–2): 5–22, doi:10.1007/BF00993472
  5. ^ Utgoff, P. E. (1984), Shift of bias for inductive concept learning (doctoral dissertation), New Brunswick, New Jersey, USA: Department of Computer Science, Rutgers University, ISBN 9780934613002