History of natural language processing


The history of natural language processing describes the advances of natural language processing. There is some overlap with the history of machine translation, the history of speech recognition, and the history of artificial intelligence.

Early history


The history of machine translation dates back to the seventeenth century, when philosophers such as Leibniz and Descartes put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.

The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. Troyanskii's proposal included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on Esperanto.[1][2]

Logical period


In 1950, Alan Turing published his famous article "Computing Machinery and Intelligence", which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge sufficiently well that the judge is unable to reliably distinguish, on the basis of the conversational content alone, between the program and a real human.

In 1957, Noam Chomsky's Syntactic Structures revolutionized linguistics with 'universal grammar', a rule-based system of syntactic structures.[3]

The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three to five years, machine translation would be a solved problem.[4] However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed.

A notably successful NLP system developed in the 1960s was SHRDLU, a natural language system working in restricted "blocks worlds" with restricted vocabularies.

In 1969, Roger Schank introduced the conceptual dependency theory for natural language understanding.[5] This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input.[6] Instead of phrase structure rules, ATNs used an equivalent set of finite-state automata that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for a number of years. During the 1970s, many programmers began to write 'conceptual ontologies', which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert, 1981). During this time, many chatterbots were written, including PARRY, Racter, and Jabberwacky.

Statistical period


Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. This was due both to the steady increase in computational power resulting from Moore's law and to the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing.[7] Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. Increasingly, however, research has focused on statistical models, which make soft, probabilistic decisions based on attaching real-valued weights to the features making up the input data. The cache language models upon which many speech recognition systems now rely are examples of such statistical models. Such models are generally more robust when given unfamiliar input, especially input that contains errors (as is very common for real-world data), and produce more reliable results when integrated into a larger system comprising multiple subtasks.
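
The shift described above can be illustrated with a toy statistical language model. The sketch below is a minimal example rather than a reconstruction of any particular historical system: it estimates smoothed bigram probabilities from a tiny invented corpus and shows how a probabilistic model assigns a small but non-zero weight to unseen input instead of rejecting it outright.

```python
from collections import Counter

# Tiny invented corpus; systems of the period trained on millions of words.
corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs
unigrams = Counter(corpus)                  # counts of single words
vocab = set(corpus)

def bigram_prob(w1, w2):
    # Add-one (Laplace) smoothing: unseen pairs get a small non-zero
    # probability, which is what makes the model robust to unfamiliar input.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

print(f"P(sat | cat) = {bigram_prob('cat', 'sat'):.3f}")  # seen bigram
print(f"P(dog | cat) = {bigram_prob('cat', 'dog'):.3f}")  # unseen, still > 0
```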

Datasets


The emergence of statistical approaches was aided both by the increase in computing power and by the availability of large datasets. At that time, large multilingual corpora were starting to emerge. Notably, some were produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government.

Many of the notable early successes occurred in the field of machine translation. In 1993, the IBM alignment models were used for statistical machine translation.[8] Compared to previous machine translation systems, which were symbolic systems manually coded by computational linguists, these systems were statistical, which allowed them to learn automatically from large textual corpora. However, such systems do not work well when only small corpora are available, so data-efficient methods continue to be an area of research and development.
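
The simplest of the IBM alignment models, Model 1, learns word-translation probabilities t(f | e) from sentence-aligned text using expectation–maximization. The sketch below is a minimal illustration on an invented toy corpus; it omits refinements such as the NULL word and is not a full statistical machine translation system.

```python
from collections import defaultdict

# Invented toy English-French sentence pairs, for illustration only.
corpus = [
    (["the", "house"], ["la", "maison"]),
    (["the", "book"], ["le", "livre"]),
    (["a", "book"], ["un", "livre"]),
    (["the", "key"], ["la", "clé"]),
]

# IBM Model 1: translation probabilities t(f | e), initialised uniformly.
f_vocab = {f for _, fs in corpus for f in fs}
t = defaultdict(lambda: 1.0 / len(f_vocab))

for _ in range(20):                      # EM iterations
    count = defaultdict(float)           # expected counts c(f, e)
    total = defaultdict(float)           # expected counts c(e)
    for es, fs in corpus:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:                 # E-step: distribute f's count over es
                delta = t[(f, e)] / norm
                count[(f, e)] += delta
                total[e] += delta
    for (f, e), c in count.items():      # M-step: re-normalise per English word
        t[(f, e)] = c / total[e]

# "maison" ends up explained mainly by "house" rather than by "the".
print(f"t(maison | house) = {t[('maison', 'house')]:.2f}")
print(f"t(maison | the)   = {t[('maison', 'the')]:.2f}")
```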

In 2001, a one-billion-word text corpus, scraped from the Internet and referred to as "very very large" at the time, was used for word disambiguation.[9]

To take advantage of large, unlabelled datasets, algorithms were developed for unsupervised and self-supervised learning. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results.

Neural period

(Figure: Timeline of natural language processing models)

Neural language models were developed in the 1990s. In 1990, the Elman network, a recurrent neural network, encoded each word in a training set as a vector, called a word embedding, and the whole vocabulary as a vector database, allowing it to perform sequence-prediction tasks that are beyond the power of a simple multilayer perceptron. A shortcoming of these static embeddings was that they did not differentiate between the multiple meanings of homonyms.[10]
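
The two ideas, static word vectors and a recurrent hidden state, can be sketched in a few lines of NumPy. The example below is purely illustrative: the embedding vectors and weight matrices are random rather than learned, and it reproduces only the data flow of an Elman-style network, not Elman's original training setup. Note that the homonym "bank" receives a single vector regardless of its intended sense, which is the shortcoming of static embeddings noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# One static vector per word; "bank" gets the same vector for every sense.
vocab = ["the", "bank", "river", "money"]
emb = {w: rng.normal(size=8) for w in vocab}   # in practice these are learned

# Elman-style recurrence: the hidden state summarises the sequence seen so far
# and is fed back in at the next step.
W_xh = rng.normal(size=(8, 16))   # input-to-hidden weights
W_hh = rng.normal(size=(16, 16))  # hidden-to-hidden (recurrent) weights

def elman_step(x, h):
    return np.tanh(x @ W_xh + h @ W_hh)

h = np.zeros(16)
for word in ["the", "bank"]:
    h = elman_step(emb[word], h)

print(h.shape)  # (16,): a context-dependent summary of the words seen so far
```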

Yoshua Bengio developed the first neural probabilistic language model in 2000.[11] Novel algorithms, the availability of larger datasets, and increases in processing power made it possible to train larger and larger language models.

The attention mechanism was introduced by Bahdanau et al. in 2014.[12] This work laid the foundations for the "Attention is All You Need" paper,[13] which introduced the Transformer architecture in 2017. The concept of the large language model (LLM) emerged in the late 2010s. An LLM is a language model trained with self-supervised learning on vast amounts of text. The earliest public LLMs had hundreds of millions of parameters,[14] but this number quickly rose to billions and even trillions.[15]
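
The central operation of the Transformer, scaled dot-product attention, is compact enough to sketch directly. The example below is a minimal NumPy illustration of the formula softmax(QK^T / sqrt(d_k)) V from the 2017 paper; the input matrices are random and serve only to show the shapes involved.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# Toy self-attention: 3 "tokens", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one context-mixed vector per token
```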

In recent years, advancements in deep learning and large language models have significantly enhanced the capabilities of natural language processing, leading to widespread applications in areas such as healthcare, customer service, and content generation.[16]

Software

Software | Year | Creator | Description | Ref.
Georgetown experiment | 1954 | Georgetown University and IBM | Involved fully automatic translation of more than sixty Russian sentences into English. |
STUDENT | 1964 | Daniel Bobrow | Could solve high school algebra word problems. | [17]
ELIZA | 1964 | Joseph Weizenbaum | A simulation of a Rogerian psychotherapist, rephrasing her response with a few grammar rules. | [18]
SHRDLU | 1970 | Terry Winograd | A natural language system working in restricted "blocks worlds" with restricted vocabularies; worked extremely well. |
PARRY | 1972 | Kenneth Colby | A chatterbot. |
KL-ONE | 1974 | Sondheimer et al. | A knowledge representation system in the tradition of semantic networks and frames; it is a frame language. |
MARGIE | 1975 | Roger Schank | |
TaleSpin (software) | 1976 | Meehan | |
QUALM | 1977 | Lehnert | |
LIFER/LADDER | 1978 | Hendrix | A natural language interface to a database of information about US Navy ships. |
SAM (software) | 1978 | Cullingford | |
PAM (software) | 1978 | Robert Wilensky | |
Politics (software) | 1979 | Carbonell | |
Plot Units (software) | 1981 | Lehnert | |
Jabberwacky | 1982 | Rollo Carpenter | Chatterbot with the stated aim to "simulate natural human chat in an interesting, entertaining and humorous manner". |
MUMBLE (software) | 1982 | McDonald | |
Racter | 1983 | William Chamberlain and Thomas Etter | Chatterbot that generated English-language prose at random. |
MOPTRANS | 1984 | Lytinen | | [19]
KODIAK (software) | 1986 | Wilensky | |
Absity (software) | 1987 | Hirst | |
Dr. Sbaitso | 1991 | Creative Labs | |
IBM Watson | 2006 | IBM | A question answering system that won the Jeopardy! contest, defeating the best human players in February 2011. |
Siri | 2011 | Apple | A virtual assistant developed by Apple. |
Cortana | 2014 | Microsoft | A virtual assistant developed by Microsoft. |
Amazon Alexa | 2014 | Amazon | A virtual assistant developed by Amazon. |
Google Assistant | 2016 | Google | A virtual assistant developed by Google. |
ChatGPT | 2022 | OpenAI | Generative chatbot. |

References

  1. ^"Georges Artsrouni".machinetranslate.org. RetrievedJuly 10, 2025.
  2. ^Hutchins, John; Lovtskii, Evgenii (2000),Petr Petrovich Troyanskii (1894-1950): A Forgotten Pioneer of Mechanical Translation, Machine Translation{{citation}}: CS1 maint: location missing publisher (link)
  3. ^"SEM1A5 - Part 1 - A brief history of NLP". Retrieved2010-06-25.
  4. ^Hutchins, J. (2005)
  5. ^Roger Schank, 1969,A conceptual dependency parser for natural language Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1-3
  6. ^Woods, William A (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606[1]
  7. ^Chomskyan linguistics encourages the investigation of "corner cases" that stress the limits of its theoretical models (comparable topathological phenomena in mathematics), typically created usingthought experiments, rather than the systematic investigation of typical phenomena that occur in real-world data, as is the case incorpus linguistics. The creation and use of suchcorpora of real-world data is a fundamental part of machine-learning algorithms for NLP. In addition, theoretical underpinnings of Chomskyan linguistics such as the so-called "poverty of the stimulus" argument entail that general learning algorithms, as are typically used in machine learning, cannot be successful in language processing. As a result, the Chomskyan paradigm discouraged the application of such models to language processing.
  8. ^Brown, Peter F. (1993). "The mathematics of statistical machine translation: Parameter estimation".Computational Linguistics (19):263–311.
  9. ^Banko, Michele; Brill, Eric (2001)."Scaling to very very large corpora for natural language disambiguation".Proceedings of the 39th Annual Meeting on Association for Computational Linguistics - ACL '01. Morristown, NJ, USA: Association for Computational Linguistics:26–33.doi:10.3115/1073012.1073017.S2CID 6645623.
  10. ^Elman, Jeffrey L. (March 1990)."Finding Structure in Time".Cognitive Science.14 (2):179–211.doi:10.1207/s15516709cog1402_1.S2CID 2763403.
  11. ^Bengio, Yoshua (2003),A Neural Probabilistic Language Model, —, vol. 3 (— ed.), Montreal, Canada: Journal of Machine Learning Research, p. 1137–1155,doi:10.1162/153244303322533223
  12. ^Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (2014). "Neural Machine Translation by Jointly Learning to Align and Translate".ICLR.arXiv:1409.0473.
  13. ^Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion;Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017)."Attention is All you Need"(PDF).Advances in Neural Information Processing Systems.30. Curran Associates, Inc.Archived(PDF) from the original on 2024-02-21. Retrieved2024-01-21.
  14. ^Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (Dec 2020). Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.F.; Lin, H. (eds.)."Language Models are Few-Shot Learners"(PDF).Advances in Neural Information Processing Systems.33. Curran Associates, Inc.:1877–1901.arXiv:2005.14165.Archived(PDF) from the original on 2023-11-17. Retrieved2023-03-14.
  15. ^Dai, Andrew M; Du, Nan (December 9, 2021)."More Efficient In-Context Learning with GLaM".ai.googleblog.com.Archived from the original on 2023-03-12. Retrieved2023-03-09.
  16. ^Gruetzemacher, Ross (2022-04-19)."The Power of Natural Language Processing".Harvard Business Review.ISSN 0017-8012. Retrieved2024-12-07.
  17. ^McCorduck 2004, p. 286,Crevier 1993, pp. 76−79,Russell & Norvig 2003, p. 19
  18. ^McCorduck 2004, pp. 291–296,Crevier 1993, pp. 134−139
  19. ^Janet L. Kolodner, Christopher K. Riesbeck;Experience, Memory, and Reasoning; Psychology Press; 2014 reprint
