Speech

From Wikipedia, the free encyclopedia
Human vocal communication using spoken language
For the process of speaking to a group of people, see Public speaking. For other uses, see Speech (disambiguation).
Speech production visualized by real-time MRI

Speech is the use of the human voice as a medium for language. Spoken language combines vowel and consonant sounds to form units of meaning like words, which belong to a language's lexicon. There are many different intentional speech acts, such as informing, declaring, asking, persuading, and directing; acts may vary in aspects such as enunciation, intonation, loudness, and tempo to convey meaning. Individuals may also unintentionally communicate aspects of their social position through speech, such as sex, age, place of origin, physiological and mental condition, education, and experiences.

While speech is normally used to facilitate communication with others, people may also use it without the intent to communicate; speech may nevertheless express emotions or desires. People sometimes talk to themselves in acts that are a development of what some psychologists (e.g., Lev Vygotsky) have maintained is the use of silent speech in an interior monologue to vivify and organize cognition, sometimes momentarily adopting a dual persona, as self addresses self as though addressing another person. Solo speech can be used to memorize or to test one's memorization of things, and in prayer or in meditation.

Researchers study many different aspects of speech: speech production and speech perception of the sounds used in a language, speech repetition, speech errors, the ability to map heard spoken words onto the vocalizations needed to recreate them (which plays a key role in children's enlargement of their vocabulary), and what different areas of the human brain, such as Broca's area and Wernicke's area, underlie speech. Speech is the subject of study for linguistics, cognitive science, communication studies, psychology, computer science, speech pathology, otolaryngology, and acoustics. Speech compares with written language,[1] which may differ in its vocabulary, syntax, and phonetics from the spoken language, a situation called diglossia.

The evolutionary origin of speech is subject to debate and speculation. While animals also communicate using vocalizations, and trained apes such as Washoe and Kanzi can use simple sign language, no animal's vocalizations are articulated phonemically and syntactically, so they do not constitute speech.

Evolution

Main article: Origin of speech

Although related to the more general problem of the origin of language, the evolution of distinctively human speech capacities has become a distinct and in many ways separate area of scientific research.[2][3][4][5][6] The topic is a separate one because language is not necessarily spoken: it can equally be written or signed. Speech is in this sense optional, although it is the default modality for language.

Places of articulation (passive and active):
1. Exo-labial, 2. Endo-labial, 3. Dental, 4. Alveolar, 5. Post-alveolar, 6. Pre-palatal, 7. Palatal, 8. Velar, 9. Uvular, 10. Pharyngeal, 11. Glottal, 12. Epiglottal, 13. Radical, 14. Postero-dorsal, 15. Antero-dorsal, 16. Laminal, 17. Apical, 18. Sub-apical

Monkeys, non-human apes and humans, like many other animals, have evolved specialised mechanisms for producing sound for purposes of social communication.[7] On the other hand, no monkey or ape uses its tongue for such purposes.[8][9] The human species' unprecedented use of the tongue, lips and other moveable parts seems to place speech in a quite separate category, making its evolutionary emergence an intriguing theoretical challenge in the eyes of many scholars.[10]

Determining the timeline of human speech evolution is made additionally challenging by the lack of data in the fossil record. The human vocal tract does not fossilize, and indirect evidence of vocal tract changes in hominid fossils has proven inconclusive.[10]

Production

Main articles: Speech production and Linguistics

Speech production is an unconscious, multi-step process by which thoughts are turned into spoken utterances. Production involves the unconscious selection of appropriate words, and of the appropriate forms of those words, from the lexicon and morphology, and the organization of those words through the syntax. The phonetic properties of the words are then retrieved, and the sentence is articulated through the articulations associated with those phonetic properties.[11]

In linguistics, articulatory phonetics is the study of how the tongue, lips, jaw, vocal cords, and other speech organs are used to make sounds. Speech sounds are categorized by manner of articulation and place of articulation. Place of articulation refers to where in the neck or mouth the airstream is constricted. Manner of articulation refers to the manner in which the speech organs interact, such as how closely the air is restricted, what form of airstream is used (e.g. pulmonic, implosive, ejective, or click), whether or not the vocal cords are vibrating, and whether the nasal cavity is opened to the airstream.[12] The concept is primarily used for the production of consonants, but can be used for vowels in qualities such as voicing and nasalization. For any place of articulation, there may be several manners of articulation, and therefore several homorganic consonants.

Normal human speech is pulmonic, produced with pressure from the lungs, which creates phonation in the glottis in the larynx; this is then modified by the vocal tract and mouth into different vowels and consonants. However, humans can pronounce words without the use of the lungs and glottis in alaryngeal speech, of which there are three types: esophageal speech, pharyngeal speech, and buccal speech (better known as Donald Duck talk).

Errors

Main article: Speech error

Speech production is a complex activity, and as a consequence errors are common, especially in children. Speech errors come in many forms and are used to provide evidence to support hypotheses about the nature of speech.[13] As a result, speech errors are often used in the construction of models for language production and child language acquisition. For example, the fact that children often over-regularize the -ed past-tense suffix in English (e.g. saying 'singed' instead of 'sang') shows that the regular forms are acquired earlier.[14][15] Speech errors associated with certain kinds of aphasia have been used to map certain components of speech onto the brain and to see the relation between different aspects of production; for example, the difficulty of expressive aphasia patients in producing regular past-tense verbs, but not irregulars like 'sing-sang', has been used to demonstrate that regular inflected forms of a word are not individually stored in the lexicon, but produced by affixation to the base form.[16]

Perception

Main article: Speech perception

Speech perception refers to the processes by which humans can interpret and understand the sounds used in language. The study of speech perception is closely linked to the fields of phonetics and phonology in linguistics, and to cognitive psychology and perception in psychology. Research in speech perception seeks to understand how listeners recognize speech sounds and use this information to understand spoken language. Research into speech perception also has applications in building computer systems that can recognize speech, as well as in improving speech recognition for hearing- and language-impaired listeners.[17]

Speech perception is categorical, in that people put the sounds they hear into categories rather than perceiving them as a spectrum. People are more likely to hear differences between sounds across categorical boundaries than within them. A good example of this is voice onset time (VOT), one aspect of the phonetic production of consonant sounds. For example, Hebrew speakers, who distinguish voiced /b/ from voiceless /p/, will more easily detect a change in VOT from −10 (perceived as /b/) to 0 (perceived as /p/) than a change from +10 to +20, or from −10 to −20, despite each being an equally large change on the VOT spectrum.[18]

Development

Main article: Language development

Most human children develop proto-speech babbling behaviors when they are four to six months old, and most begin saying their first words at some point during the first year of life. Typical children progress through two- or three-word phrases before three years of age, followed by short sentences by four years of age.[19]

Repetition

Main article: Speech repetition

In speech repetition, speech being heard is quickly turned from sensory input into the motor instructions needed for its immediate or delayed vocal imitation (in phonological memory). This type of mapping plays a key role in enabling children to expand their spoken vocabulary. Masur (1995) found that how often children repeat novel words, versus words already in their lexicon, is related to the size of their lexicon later on: young children who repeat more novel words have larger lexicons later in development. Speech repetition could help facilitate the acquisition of this larger lexicon.[20]

Problems

See also: Speech disorder

There are several organic and psychological factors that can affect speech. Among these are:

  1. Diseases and disorders of the lungs or the vocal cords, including paralysis, respiratory infections (bronchitis), vocal fold nodules and cancers of the lungs and throat.
  2. Diseases and disorders of the brain, including alogia, aphasias, dysarthria, dystonia and speech processing disorders, where impaired motor planning, nerve transmission, phonological processing or perception of the message (as opposed to the actual sound) leads to poor speech production.
  3. Hearing problems, such as otitis media with effusion, and listening problems, such as auditory processing disorders, can lead to phonological problems. In addition to dysphasia, anomia and auditory processing disorder impede the quality of auditory perception, and therefore expression. Those who are deaf or hard of hearing may be considered to fall into this category.
  4. Articulatory problems, such as slurred speech, stuttering, lisping, cleft palate, ataxia, or nerve damage leading to problems in articulation. Tourette syndrome and tics can also affect speech. Various congenital and acquired tongue diseases can affect speech, as can motor neuron disease.
  5. Psychiatric disorders have been shown to change speech acoustic features; for instance, fundamental frequency of voice (perceived as pitch) tends to be significantly lower in major depressive disorder than in healthy controls.[21] Therefore, speech is being investigated as a potential biomarker for mental health disorders.

Speech and language disorders can also result from stroke,[22] brain injury,[23] hearing loss,[24] developmental delay,[25] a cleft palate,[26] cerebral palsy,[27] or emotional issues.[28]

Treatment

Main article: Speech–language pathology

Speech-related diseases, disorders, and conditions can be treated by a speech-language pathologist (SLP) or speech therapist. SLPs assess levels of speech needs, make diagnoses based on the assessments, and then treat the diagnoses or address the needs.[29]

Brain physiology


Classical model

Broca's and Wernicke's areas of the brain, which are critical for language.

The classical or Wernicke–Geschwind model of the language system in the brain focuses on Broca's area in the inferior prefrontal cortex, and Wernicke's area in the posterior superior temporal gyrus, on the dominant hemisphere of the brain (typically the left hemisphere for language). In this model, a linguistic auditory signal is first sent from the auditory cortex to Wernicke's area. The lexicon is accessed in Wernicke's area, and these words are sent via the arcuate fasciculus to Broca's area, where morphology, syntax, and instructions for articulation are generated. This is then sent from Broca's area to the motor cortex for articulation.[30]

Paul Broca identified an approximate region of the brain in 1861 which, when damaged in two of his patients, caused severe deficits in speech production: his patients were unable to speak beyond a few monosyllabic words. This deficit, known as Broca's or expressive aphasia, is characterized by difficulty in speech production, where speech is slow and labored, function words are absent, and syntax is severely impaired, as in telegraphic speech. In expressive aphasia, speech comprehension is generally less affected, except in the comprehension of grammatically complex sentences.[31] Wernicke's area is named after Carl Wernicke, who in 1874 proposed a connection between damage to the posterior area of the left superior temporal gyrus and aphasia, as he noted that not all aphasic patients had had damage to the prefrontal cortex.[32] Damage to Wernicke's area produces Wernicke's or receptive aphasia, which is characterized by relatively normal syntax and prosody but severe impairment in lexical access, resulting in poor comprehension and nonsensical or jargon speech.[31]

Modern research


Modern models of the neurological systems behind linguistic comprehension and production recognize the importance of Broca's and Wernicke's areas, but are not limited to them, nor solely to the left hemisphere.[33] Instead, multiple streams are involved in speech production and comprehension. Damage to the left lateral sulcus has been connected with difficulty in processing and producing morphology and syntax, while lexical access and comprehension of irregular forms (e.g. eat–ate) remain unaffected.[34] Moreover, the circuits involved in human speech comprehension dynamically adapt with learning, for example by becoming more efficient in terms of processing time when listening to familiar messages such as learned verses.[35]

Animal communication

Main article: Talking animals

Some non-human animals can produce sounds or gestures resembling those of a human language.[36] Several species or groups of animals have developed forms of communication which superficially resemble verbal language; however, these are usually not considered a language because they lack one or more of the defining characteristics, e.g. grammar, syntax, recursion, and displacement. Researchers have been successful in teaching some animals to make gestures similar to sign language,[37][38] although whether this should be considered a language has been disputed.[39]

References

  1. ^ "Speech". American Heritage Dictionary. Archived from the original on 2020-08-07. Retrieved 2018-09-13.
  2. ^Hockett, Charles F. (1960)."The Origin of Speech"(PDF).Scientific American.203 (3):88–96.Bibcode:1960SciAm.203c..88H.doi:10.1038/scientificamerican0960-88.PMID 14402211. Archived fromthe original(PDF) on 2014-01-06. Retrieved2014-01-06.
  3. ^Corballis, Michael C. (2002).From hand to mouth : the origins of language. Princeton: Princeton University Press.ISBN 978-0-691-08803-7.OCLC 469431753.
  4. ^Lieberman, Philip (1984).The biology and evolution of language. Cambridge, Massachusetts: Harvard University Press.ISBN 9780674074132.OCLC 10071298.
  5. ^ Lieberman, Philip (2000). Human language and our reptilian brain: the subcortical bases of speech, syntax, and thought. Vol. 44. Cambridge, Massachusetts: Harvard University Press. pp. 32–51. doi:10.1353/pbm.2001.0011. ISBN 9780674002265. OCLC 43207451. PMID 11253303. S2CID 38780927.
  6. ^Abry, Christian; Boë, Louis-Jean; Laboissière, Rafael; Schwartz, Jean-Luc (1998). "A new puzzle for the evolution of speech?".Behavioral and Brain Sciences.21 (4):512–513.doi:10.1017/S0140525X98231268.S2CID 145180611.
  7. ^Kelemen, G. (1963). Comparative anatomy and performance of the vocal organ in vertebrates. In R. Busnel (ed.),Acoustic behavior of animals. Amsterdam: Elsevier, pp. 489–521.
  8. ^Riede, T.; Bronson, E.; Hatzikirou, H.; Zuberbühler, K. (Jan 2005)."Vocal production mechanisms in a non-human primate: morphological data and a model"(PDF).J Hum Evol.48 (1):85–96.Bibcode:2005JHumE..48...85R.doi:10.1016/j.jhevol.2004.10.002.PMID 15656937.Archived(PDF) from the original on 2022-08-12. Retrieved2022-08-12.
  9. ^Riede, T.; Bronson, E.; Hatzikirou, H.; Zuberbühler, K. (February 2006). "Multiple discontinuities in nonhuman vocal tracts – A reply".Journal of Human Evolution.50 (2):222–225.Bibcode:2006JHumE..50..222R.doi:10.1016/j.jhevol.2005.10.005.
  10. ^ a b Fitch, W. Tecumseh (July 2000). "The evolution of speech: a comparative review". Trends in Cognitive Sciences. 4 (7): 258–267. CiteSeerX 10.1.1.22.3754. doi:10.1016/S1364-6613(00)01494-7. PMID 10859570. S2CID 14706592.
  11. ^Levelt, Willem J. M. (1999). "Models of word production".Trends in Cognitive Sciences.3 (6):223–32.doi:10.1016/s1364-6613(99)01319-4.PMID 10354575.S2CID 7939521.
  12. ^Catford, J.C.; Esling, J.H. (2006). "Articulatory Phonetics". In Brown, Keith (ed.).Encyclopedia of Language & Linguistics (2nd ed.). Amsterdam: Elsevier Science. pp. 425–42.
  13. ^Fromkin, Victoria (1973). "Introduction".Speech Errors as Linguistic Evidence. The Hague: Mouton. pp. 11–46.
  14. ^Plunkett, Kim; Juola, Patrick (1999). "A connectionist model of english past tense and plural morphology".Cognitive Science.23 (4):463–90.CiteSeerX 10.1.1.545.3746.doi:10.1207/s15516709cog2304_4.
  15. ^Nicoladis, Elena; Paradis, Johanne (2012). "Acquiring Regular and Irregular Past Tense Morphemes in English and French: Evidence From Bilingual Children".Language Learning.62 (1):170–97.doi:10.1111/j.1467-9922.2010.00628.x.
  16. ^Ullman, Michael T.; et al. (2005). "Neural correlates of lexicon and grammar: Evidence from the production,reading, and judgement of inflection in aphasia".Brain and Language.93 (2):185–238.doi:10.1016/j.bandl.2004.10.001.PMID 15781306.S2CID 14991615.
  17. ^Kennison, Shelia (2013).Introduction to Language Development. Los Angeles: Sage.
  18. ^Kishon-Rabin, Liat; Rotshtein, Shira; Taitelbaum, Riki (2002). "Underlying Mechanism for Categorical Perception: Tone-Onset Time and Voice-Onset Time Evidence of Hebrew Voicing".Journal of Basic and Clinical Physiology and Pharmacology.13 (2):117–34.doi:10.1515/jbcpp.2002.13.2.117.PMID 16411426.S2CID 9986779.
  19. ^ "Speech and Language Developmental Milestones". National Institute on Deafness and Other Communication Disorders. National Institutes of Health. 13 October 2022.
  20. ^Masur, Elise (1995). "Infants' Early Verbal Imitation and Their Later Lexical Development".Merrill-Palmer Quarterly.41 (3):286–306.
  21. ^Low DM, Bentley KH, Ghosh, SS (2020)."Automated assessment of psychiatric disorders using speech: A systematic review".Laryngoscope Investigative Otolaryngology.5 (1):96–116.doi:10.1002/lio2.354.PMC 7042657.PMID 32128436.
  22. ^Richards, Emma (June 2012). "Communication and swallowing problems after stroke".Nursing and Residential Care.14 (6):282–286.doi:10.12968/nrec.2012.14.6.282.
  23. ^Zasler, Nathan D.; Katz, Douglas I.; Zafonte, Ross D.; Arciniegas, David B.; Bullock, M. Ross; Kreutzer, Jeffrey S., eds. (2013).Brain injury medicine principles and practice (2nd ed.). New York: Demos Medical. pp. 1086–1104,1111–1117.ISBN 9781617050572.
  24. ^Ching, Teresa Y. C. (2015)."Is early intervention effective in improving spoken language outcomes of children with congenital hearing loss?".American Journal of Audiology.24 (3):345–348.doi:10.1044/2015_aja-15-0007.PMC 4659415.PMID 26649545.
  25. ^The Royal Children's Hospital, Melbourne."Developmental Delay: An Information Guide for Parents"(PDF).The Royal Children's Hospital Melbourne.Archived(PDF) from the original on 29 March 2016. Retrieved2 May 2016.
  26. ^Bauman-Waengler, Jacqueline (2011).Articulatory and phonological impairments: a clinical focus (4th ed., International ed.). Harlow: Pearson Education. pp. 378–385.ISBN 9780132719957.
  27. ^"Speech and Language Therapy".CerebralPalsy.org.Archived from the original on 8 May 2016. Retrieved2 May 2016.
  28. ^Cross, Melanie (2011).Children with social, emotional and behavioural difficulties and communication problems: there is always a reason (2nd ed.). London: Jessica Kingsley Publishers.
  29. ^"Speech–Language Pathologists".ASHA.org. American Speech–Language–Hearing Association. Retrieved6 April 2015.
  30. ^Kertesz, A. (2005). "Wernicke–Geschwind Model". In L. Nadel,Encyclopedia of cognitive science. Hoboken, NJ: Wiley.
  31. ^ a b Hillis, A.E., & Caramazza, A. (2005). "Aphasia". In L. Nadel, Encyclopedia of cognitive science. Hoboken, NJ: Wiley.
  32. ^ Wernicke, K. (1995). "The aphasia symptom-complex: A psychological study on an anatomical basis (1875)". In Paul Eling (ed.). Reader in the History of Aphasia: From Franz Gall to Norman Geschwind. Vol. 4. Amsterdam: John Benjamins Pub Co. pp. 69–89. ISBN 978-90-272-1893-3.
  33. ^Nakai, Y; Jeong, JW; Brown, EC; Rothermel, R; Kojima, K; Kambara, T; Shah, A; Mittal, S; Sood, S; Asano, E (2017)."Three- and four-dimensional mapping of speech and language in patients with epilepsy".Brain.140 (5):1351–70.doi:10.1093/brain/awx051.PMC 5405238.PMID 28334963.
  34. ^Tyler, Lorraine K.; Marslen-Wilson, William (2009). "Fronto-temporal brain systems supporting spoken language comprehension". In Moore, Brian C.J.; Tyler, Lorraine K.; Marslen-Wilson, William D. (eds.).The Perception of Speech: from sound to meaning. Oxford: Oxford University Press. pp. 193–217.ISBN 978-0-19-956131-5.
  35. ^Cervantes Constantino, F; Simon, JZ (2018)."Restoration and Efficiency of the Neural Processing of Continuous Speech Are Promoted by Prior Knowledge".Frontiers in Systems Neuroscience.12 (56): 56.doi:10.3389/fnsys.2018.00056.PMC 6220042.PMID 30429778.
  36. ^ "Can any animals talk and use language like humans?". BBC. 16 February 2015. Archived from the original on 31 January 2021. Retrieved 12 August 2022.
  37. ^Hillix, William A.; Rumbaugh, Duane M. (2004), "Washoe, the First Signing Chimpanzee",Animal Bodies, Human Minds: Ape, Dolphin, and Parrot Language Skills, Springer US, pp. 69–85,doi:10.1007/978-1-4757-4512-2_5,ISBN 978-1-4419-3400-0
  38. ^ Hu, Jane C. (Aug 20, 2014). "What Do Talking Apes Really Tell Us?". Slate. Archived from the original on October 12, 2018. Retrieved Jan 19, 2020.
  39. ^Terrace, Herbert S. (December 1982). "Why Koko Can't Talk".The Sciences.22 (9):8–10.doi:10.1002/j.2326-1951.1982.tb02120.x.ISSN 0036-861X.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Speech&oldid=1284727812"