Probably approximately correct learning

From Wikipedia, the free encyclopedia
Framework for mathematical analysis of machine learning

In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant.[1]

In this framework, the learner receives samples and must select a generalization function (called the hypothesis) from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low generalization error (the "approximately correct" part). The learner must be able to learn the concept given any arbitrary approximation ratio, probability of success, or distribution of the samples.
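In symbols (a compact restatement using the approximation parameter $\epsilon$ and the confidence parameter $\delta$, both made precise in the definitions below): if $h$ is the hypothesis output by the learner, $c$ the target concept, and $D$ the distribution over instances, the guarantee is

$$\Pr\Bigl[\;\Pr_{x\sim D}\bigl[h(x)\neq c(x)\bigr]\;\leq\;\epsilon\Bigr]\;\geq\;1-\delta,$$

where the inner probability is the generalization error ("approximately correct") and the outer probability is over the randomly drawn sample ("probably").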

The model was later extended to treat noise (misclassified samples).

An important innovation of the PAC framework is the introduction of computational complexity theory concepts to machine learning. In particular, the learner is expected to find efficient functions (time and space requirements bounded to a polynomial of the example size), and the learner itself must implement an efficient procedure (requiring an example count bounded to a polynomial of the concept size, modified by the approximation and likelihood bounds).
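Schematically, writing $n$ for the example size and $\operatorname{size}(c)$ for the representation size of the target concept (this is a sketch of the requirement, not a formula from the original paper), efficiency means that both the number of examples $m$ and the running time of the learner are bounded as

$$m,\ \text{time}\;\leq\;\operatorname{poly}\!\left(\tfrac{1}{\epsilon},\ \tfrac{1}{\delta},\ n,\ \operatorname{size}(c)\right).$$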

Definitions and terminology


In order to define what it means for a concept class to be PAC-learnable, we first have to introduce some terminology.[2]

For the following definitions, two examples will be used. The first is the problem of character recognition given an array of $n$ bits encoding a binary-valued image. The other example is the problem of finding an interval that will correctly classify points within the interval as positive and points outside of the range as negative.

Let $X$ be a set called the instance space or the encoding of all the samples. In the character recognition problem, the instance space is $X=\{0,1\}^{n}$. In the interval problem the instance space, $X$, is the set of all bounded intervals in $\mathbb{R}$, where $\mathbb{R}$ denotes the set of all real numbers.

A concept is a subset $c\subset X$. One concept is the set of all patterns of bits in $X=\{0,1\}^{n}$ that encode a picture of the letter "P". An example concept from the second example is the set of open intervals, $\{(a,b)\mid 0\leq a\leq \pi /2,\ \pi \leq b\leq {\sqrt {13}}\}$, each of which contains only the positive points. A concept class $C$ is a collection of concepts over $X$. This could be the set of all subsets of the array of bits that are skeletonized 4-connected (width of the font is 1).
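As a concrete illustration of these definitions for the interval example (treating instances as real-valued points for simplicity), a concept can be modelled as an indicator on the instance space and a concept class as a family of such indicators. The sketch below is only illustrative; the class name Interval and the method label are not from any standard library, and closed rather than open intervals are used.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """One concept from the interval concept class: the set {x : a <= x <= b}."""
    a: float
    b: float

    def label(self, x: float) -> int:
        # Correct label c(x): 1 if x belongs to the concept, 0 otherwise.
        return int(self.a <= x <= self.b)

# The concept class C is the collection of all such intervals; a few members:
C_members = [Interval(0.0, 1.0), Interval(0.5, 3.0), Interval(-2.0, 2.0)]

c = Interval(0.5, 3.0)             # one particular concept
print(c.label(1.0), c.label(4.0))  # -> 1 0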

Let $\operatorname{EX}(c,D)$ be a procedure that draws an example, $x$, using a probability distribution $D$ and gives the correct label $c(x)$, that is 1 if $x\in c$ and 0 otherwise.
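A minimal sketch of such an example oracle, continuing the interval illustration above (the target concept and the choice of $D$ as the uniform distribution on [0, 5] are purely illustrative):

import random

def EX(c, draw):
    """Example oracle EX(c, D): draw x from the distribution D (represented here
    by the sampling function `draw`) and return it with its correct label c(x)."""
    x = draw()
    return x, c(x)

# Illustrative target concept c (the interval [0.5, 3.0]) and distribution D.
target = lambda x: int(0.5 <= x <= 3.0)
draw_D = lambda: random.uniform(0.0, 5.0)

labelled_sample = [EX(target, draw_D) for _ in range(10)]  # ten labelled examples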

Now, given $0<\epsilon ,\delta <1$, assume there is an algorithm $A$ and a polynomial $p$ in $1/\epsilon ,1/\delta$ (and other relevant parameters of the class $C$) such that, given a sample of size $p$ drawn according to $\operatorname{EX}(c,D)$, then, with probability at least $1-\delta$, $A$ outputs a hypothesis $h\in C$ that has average error at most $\epsilon$ on $X$ under the same distribution $D$. Further, if the above statement for algorithm $A$ is true for every concept $c\in C$, for every distribution $D$ over $X$, and for all $0<\epsilon ,\delta <1$, then $C$ is (efficiently) PAC learnable (or distribution-free PAC learnable). We can also say that $A$ is a PAC learning algorithm for $C$.
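As a worked illustration of this definition (not part of the original formulation), the interval concept class admits a very simple PAC learning algorithm: output the tightest interval containing the positively labelled examples. The sketch below reuses the illustrative oracle setup from the snippet above; the standard analysis of this learner shows that roughly $(2/\epsilon)\ln(2/\delta)$ examples suffice, which is polynomial in $1/\epsilon$ and $1/\delta$.

import random

def learn_interval(sample):
    """Tightest-fit learner: return the smallest closed interval containing all
    positively labelled points (or an always-negative hypothesis if none exist)."""
    positives = [x for x, label in sample if label == 1]
    if not positives:
        return lambda x: 0
    lo, hi = min(positives), max(positives)
    return lambda x: int(lo <= x <= hi)

# Illustrative target concept c and distribution D (uniform on [0, 5]).
target = lambda x: int(1.0 <= x <= 3.0)
draw_D = lambda: random.uniform(0.0, 5.0)

sample = [(x, target(x)) for x in (draw_D() for _ in range(200))]
h = learn_interval(sample)

# Empirically estimate the generalization error of h under the same distribution D.
test = [draw_D() for _ in range(10_000)]
error = sum(h(x) != target(x) for x in test) / len(test)
print(error)  # typically well below 0.05 with 200 training examples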

Equivalence


Under some regularity conditions, the following conditions are equivalent:[3]

  1. The concept class C is PAC learnable.
  2. The VC dimension of C is finite (a quantitative form of this link is sketched after the list).
  3. C is a uniformly Glivenko–Cantelli class.
  4. C is compressible in the sense of Littlestone and Warmuth.
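In particular, the equivalence of the first two conditions can be made quantitative. A standard form of the sufficiency direction (a sketch of the kind of bound established in reference [3], not a verbatim statement): if $C$ has VC dimension $d$, then any algorithm that outputs a hypothesis in $C$ consistent with

$$m \;=\; O\!\left(\frac{1}{\epsilon}\left(d\log\frac{1}{\epsilon}+\log\frac{1}{\delta}\right)\right)$$

examples drawn from $\operatorname{EX}(c,D)$ is a PAC learning algorithm for $C$.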


References

  1. ^ Valiant, L. (1984). "A theory of the learnable". Communications of the ACM. 27 (11): 1134–1142.
  2. ^ Kearns, M.; Vazirani, U. (1994). An Introduction to Computational Learning Theory. MIT Press. pp. 1–12.
  3. ^ Blumer, Anselm; Ehrenfeucht, Andrzej; Haussler, David; Warmuth, Manfred (October 1989). "Learnability and the Vapnik–Chervonenkis Dimension". Journal of the Association for Computing Machinery. 36 (4): 929–965. doi:10.1145/76359.76371. S2CID 1138467.

