
Natural language understanding

From Wikipedia, the free encyclopedia
Subtopic of natural language processing in artificial intelligence
This article is about the computer processing ability. For the psychological concept, see Language processing in the brain.

Natural language understanding (NLU) or natural language interpretation (NLI)[1] is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.[2]

There is considerable commercial interest in the field because of its application to automated reasoning,[3] machine translation,[4] question answering,[5] news-gathering, text categorization, voice activation, archiving, and large-scale content analysis.

History


The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at NLU by a computer.[6][7][8][9][10] Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled Natural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems.

A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases, and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com.[11]
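The keyword-and-template mechanism described above can be sketched in a few lines of Python. The rules and pronoun reflections below are illustrative inventions for the example, not Weizenbaum's actual script:

```python
import re

# Hypothetical ELIZA-style rules: a keyword pattern mapped to a canned
# response template; the captured fragment fills the {0} slot.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

# Reflect first-person words back at the speaker, as ELIZA did.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free default when no keyword matches

print(respond("I am sad about my job"))
# How long have you been sad about your job?
```

Note that the program never represents what "sad" or "job" mean; the canned defaults and pronoun reflection are exactly how ELIZA sidestepped real-world knowledge.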

In 1969, Roger Schank at Stanford University introduced the conceptual dependency theory for NLU.[12] This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input.[13] Instead of phrase structure rules, ATNs used an equivalent set of finite-state automata that were called recursively. ATNs and their more general form, "generalized ATNs", continued to be used for a number of years.
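The idea of recursively invoked finite-state networks can be illustrated with a toy recursive-transition-network recognizer (a simplification of Woods's ATN, without its registers and arc actions). The grammar, lexicon, and names here are invented for the example:

```python
# Toy recursive transition network. Each network maps a state to its
# outgoing arcs (label, next_state). An uppercase label invokes another
# network recursively; a lowercase label must match the word class of
# the next token; None is an epsilon (skip) arc.
LEXICON = {"the": "det", "a": "det", "dog": "noun", "cat": "noun",
           "saw": "verb", "chased": "verb"}

NETWORKS = {
    "S":  {0: [("NP", 1)], 1: [("VP", 2)], 2: []},
    "NP": {0: [("det", 1)], 1: [("noun", 2)], 2: []},
    "VP": {0: [("verb", 1)], 1: [("NP", 2), (None, 2)], 2: []},
}
FINAL = {"S": 2, "NP": 2, "VP": 2}   # accepting state of each network

def traverse(net, words, pos):
    """Return the input position reached after `net`, or None on failure."""
    def step(state, pos):
        if state == FINAL[net]:
            return pos
        for label, nxt in NETWORKS[net][state]:
            if label is None:                        # epsilon (optional) arc
                result = step(nxt, pos)
            elif label in NETWORKS:                  # recursive sub-network call
                sub = traverse(label, words, pos)
                result = step(nxt, sub) if sub is not None else None
            elif pos < len(words) and LEXICON.get(words[pos]) == label:
                result = step(nxt, pos + 1)          # consume one token
            else:
                result = None
            if result is not None:
                return result
        return None
    return step(0, pos)

def accepts(sentence):
    words = sentence.lower().split()
    return traverse("S", words, 0) == len(words)

print(accepts("the dog chased a cat"))  # True
print(accepts("dog the chased"))        # False
```

The recursive call on the "NP" label is what distinguishes this from a plain finite-state machine: the same noun-phrase network recognizes both the subject and the object.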

In 1971, Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field.[14][15] Winograd continued to be a major influence in the field with the publication of his book Language as a Cognitive Process.[16] At Stanford, Winograd would later advise Larry Page, who co-founded Google.

In the 1970s and 1980s, the natural language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken; e.g., in 1982 Gary Hendrix formed Symantec Corporation, originally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-driven graphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time, e.g., by Larry R. Harris at the Artificial Intelligence Corporation and by Roger Schank and his students at Cognitive Systems Corp.[17][18] In 1983, Michael Dyer developed the BORIS system at Yale, which bore similarities to the work of Roger Schank and W. G. Lehnert.[19]

The third millennium saw the introduction of systems using machine learning for text classification, such as IBM Watson. However, experts debate how much "understanding" such systems demonstrate; e.g., according to John Searle, Watson did not even understand the questions.[20]

John Ball, a cognitive scientist and inventor of Patom Theory, supports this assessment. Natural language processing has made inroads into applications that support human productivity in service and e-commerce, but this has largely been made possible by narrowing the scope of the application: there are thousands of ways to request something in a human language that still defy conventional natural language processing.[citation needed] According to Wibe Wagemans, "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork."[21]

Scope and context


The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes: for instance, text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text,[22] but it must deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata.

Over the years, various attempts at processing natural language or English-like sentences presented to computers have taken place at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have helped overall system usability. For example, Wayne Ratliff originally developed the Vulcan program with an English-like syntax to mimic the English-speaking computer in Star Trek. Vulcan later became the dBase system, whose easy-to-use syntax effectively launched the personal computer database industry.[23][24] Systems with an easy-to-use or English-like syntax are, however, quite distinct from systems that use a rich lexicon and include an internal representation (often as first-order logic) of the semantics of natural language sentences.

Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest, English-like command interpreters require minimal complexity but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding,[25] but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching, and to judge its suitability for a user, are broader and require significant complexity,[26] but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.

Components and architecture


Regardless of the approach used, most NLU systems share some common components. The system needs a lexicon of the language, a parser, and grammar rules to break sentences into an internal representation. Building a rich lexicon with a suitable ontology requires significant effort; e.g., the WordNet lexicon required many person-years of effort.[27]

The system also needs a semantic theory to guide comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language involve specific trade-offs in their suitability as the basis of computer-automated semantic interpretation.[28] These range from naive semantics or stochastic semantic analysis to the use of pragmatics to derive meaning from context.[29][30][31] Semantic parsers convert natural-language texts into formal meaning representations.[32]
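As a toy illustration of semantic parsing, the sketch below maps sentences of one fixed pattern ("quantifier noun verb") to first-order-logic strings. The vocabulary, lemma table, and templates are assumptions made for the example, not taken from any real parser:

```python
# Map a quantifier to a first-order-logic template. Universal
# quantification pairs with implication; existential with conjunction.
QUANTIFIERS = {
    "every": "∀x.({noun}(x) → {verb}(x))",
    "some":  "∃x.({noun}(x) ∧ {verb}(x))",
}

# Tiny lemma table standing in for a lexicon with morphology.
LEMMAS = {"dog": "dog", "dogs": "dog", "cat": "cat", "cats": "cat",
          "barks": "bark", "bark": "bark", "sleeps": "sleep", "sleep": "sleep"}

def to_logic(sentence: str) -> str:
    """Translate '<quantifier> <noun> <verb>' into a logic string."""
    quant, noun, verb = sentence.lower().split()
    return QUANTIFIERS[quant].format(noun=LEMMAS[noun], verb=LEMMAS[verb])

print(to_logic("every dog barks"))   # ∀x.(dog(x) → bark(x))
print(to_logic("some cat sleeps"))   # ∃x.(cat(x) ∧ sleep(x))
```

Real semantic parsers must of course handle arbitrary syntax, scope ambiguity, and learned lexicons, but the output format (a formal meaning representation rather than a parse tree) is the defining feature.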

Advanced applications of NLU also attempt to incorporate logical inference within their framework. This is generally achieved by mapping the derived meaning into a set of assertions in predicate logic and then using logical deduction to arrive at conclusions. Therefore, systems based on functional languages such as Lisp need to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the language Prolog generally rely on an extension of the built-in logical representation framework.[33][34]
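The deduction step can be sketched as forward chaining over ground assertions, here in Python rather than Lisp or Prolog. Real systems use full predicate logic with variables and unification; the facts and rules below are invented for illustration:

```python
# Facts are ground predicate strings an NLU front end might have
# extracted from text; rules are (premises, conclusion) pairs.
FACTS = {"dog(fido)", "owns(mary, fido)"}
RULES = [
    (("dog(fido)",), "animal(fido)"),
    (("animal(fido)", "owns(mary, fido)"), "has_pet(mary)"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new assertion can be derived."""
    derived = set(facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("has_pet(mary)" in forward_chain(FACTS, RULES))  # True
```

Note the chaining: `has_pet(mary)` is only derivable after the first rule has added `animal(fido)`, which is why the loop runs to a fixed point rather than making a single pass.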

The management of context in NLU can present special challenges. A large variety of examples and counterexamples have resulted in multiple approaches to the formal modeling of context, each with specific strengths and weaknesses.[35][36]


Notes

  1. ^ Semaan, P. (2012). Natural Language Generation: An Overview. Journal of Computer Science & Research (JCSCR)-ISSN, 50-57.
  2. ^ Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM): In the Footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3-17 (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf Archived 2013-05-22 at the Wayback Machine.
  3. ^ Van Harmelen, Frank, Vladimir Lifschitz, and Bruce Porter, eds. Handbook of Knowledge Representation. Vol. 1. Elsevier, 2008.
  4. ^ Macherey, Klaus, Franz Josef Och, and Hermann Ney. "Natural language understanding using statistical machine translation." Seventh European Conference on Speech Communication and Technology. 2001.
  5. ^ Hirschman, Lynette, and Robert Gaizauskas. "Natural language question answering: the view from here." Natural Language Engineering 7.4 (2001): 275-300.
  6. ^ American Association for Artificial Intelligence, Brief History of AI [1]
  7. ^ Daniel Bobrow's PhD thesis, Natural Language Input for a Computer Problem Solving System.
  8. ^ Machines Who Think by Pamela McCorduck, 2004, ISBN 1-56881-205-1, page 286.
  9. ^ Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach, Prentice Hall, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/, p. 19.
  10. ^ Computer Science Logo Style: Beyond Programming by Brian Harvey, 1997, ISBN 0-262-58150-7, page 278.
  11. ^ Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company. ISBN 0-7167-0463-3, pages 188-189.
  12. ^ Roger Schank, 1969, A conceptual dependency parser for natural language. Proceedings of the 1969 Conference on Computational Linguistics, Sång-Säby, Sweden, pages 1-3.
  13. ^ Woods, William A. (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591-606. [2]
  14. ^ Artificial Intelligence: Critical Concepts, Volume 1 by Ronald Chrisley, Sander Begeer, 2000, ISBN 0-415-19332-X, page 89.
  15. ^ Terry Winograd's SHRDLU page at Stanford: SHRDLU. Archived 2020-08-17 at the Wayback Machine.
  16. ^ Winograd, Terry (1983), Language as a Cognitive Process, Addison-Wesley, Reading, MA.
  17. ^ Larry R. Harris, Research at the Artificial Intelligence Corp. ACM SIGART Bulletin, issue 79, January 1982. [3]
  18. ^ Inside Case-Based Reasoning by Christopher K. Riesbeck, Roger C. Schank, 1989, ISBN 0-89859-767-6, page xiii.
  19. ^ In-Depth Understanding: A Model of Integrated Process for Narrative Comprehension. Michael G. Dyer. MIT Press. ISBN 0-262-04073-5.
  20. ^ Searle, John (23 February 2011). "Watson Doesn't Know It Won on 'Jeopardy!'". Wall Street Journal.
  21. ^ Brandon, John (2016-07-12). "What Natural Language Understanding tech means for chatbots". VentureBeat. Retrieved 2024-02-29.
  22. ^ An approach to hierarchical email categorization by Peifeng Li et al. in Natural Language Processing and Information Systems, edited by Zoubida Kedad, Nadira Lammari, 2007, ISBN 3-540-73350-7.
  23. ^ InfoWorld, Nov 13, 1989, page 144.
  24. ^ InfoWorld, April 19, 1984, page 71.
  25. ^ Building Working Models of Full Natural-Language Understanding in Limited Pragmatic Domains by James Mason, 2010. [4]
  26. ^ Mining the Web: Discovering Knowledge from Hypertext Data by Soumen Chakrabarti, 2002, ISBN 1-55860-754-4, page 289.
  27. ^ G. A. Miller, R. Beckwith, C. D. Fellbaum, D. Gross, K. Miller. 1990. WordNet: An online lexical database. Int. J. Lexicograph. 3, 4, pp. 235-244.
  28. ^ Using Computers in Linguistics: A Practical Guide by John Lawler, Helen Aristar Dry, 1998, ISBN 0-415-16792-2, page 209.
  29. ^ Naive Semantics for Natural Language Understanding by Kathleen Dahlgren, 1988, ISBN 0-89838-287-4.
  30. ^ Stochastically-Based Semantic Analysis by Wolfgang Minker, Alex Waibel, Joseph Mariani, 1999, ISBN 0-7923-8571-3.
  31. ^ Pragmatics and Natural Language Understanding by Georgia M. Green, 1996, ISBN 0-8058-2166-X.
  32. ^ Wong, Yuk Wah, and Raymond J. Mooney. "Learning for semantic parsing with statistical machine translation." Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2006.
  33. ^ Natural Language Processing for Prolog Programmers by M. Covington, 1994, ISBN 0-13-629478-2.
  34. ^ Natural Language Processing in Prolog by Gerald Gazdar, Christopher S. Mellish, 1989, ISBN 0-201-18053-7.
  35. ^ Understanding Language Understanding by Ashwin Ram, Kenneth Moorman, 1999, ISBN 0-262-18192-4, page 111.
  36. ^ Formal Aspects of Context by Pierre Bonzon et al., 2000, ISBN 0-7923-6350-7.
  37. ^ Programming with Natural Language Is Actually Going to Work. Wolfram Blog.
  38. ^ Van Valin, Jr., Robert D. "From NLP to NLU" (PDF).
  39. ^ Ball, John. "Multi-lingual NLU by Pat Inc". Pat.ai.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Natural_language_understanding&oldid=1305951411"