"Logography" and "Lexigraphy" redirect here. For the printing system invented by Henry Johnson, seeLogography (printing). For dictionaries, seelexicography.
Egyptian hieroglyphs, including logograms such as the sun disk (⊙, visible several times here)
In a written language, a logogram (from Ancient Greek logos 'word', and gramma 'that which is drawn or written'), also logograph or lexigraph, is a written character that represents a semantic component of a language, such as a word or morpheme. Chinese characters as used in Chinese as well as other languages are logograms, as are Egyptian hieroglyphs and characters in cuneiform script. A writing system that primarily uses logograms is called a logography. Non-logographic writing systems, such as alphabets and syllabaries, are phonemic: their individual symbols represent sounds directly and lack any inherent meaning. However, all known logographies have some phonetic component, generally based on the rebus principle, and the addition of a phonetic component to pure ideographs is considered to be a key innovation in enabling the writing system to adequately encode human language.
Some of the earliest recorded writing systems are logographic; the first historical civilizations of Mesopotamia, Egypt, China and Mesoamerica all used some form of logographic writing.[1][2]
All logographic scripts ever used for natural languages rely on the rebus principle to extend a relatively limited set of logograms: a subset of characters is used for their phonetic values, either consonantal or syllabic. The term logosyllabary is used to emphasize the partially phonetic nature of these scripts when the phonetic domain is the syllable. In Ancient Egyptian hieroglyphs, Ch'olti', and in Chinese, there has been the additional development of determinatives, which are combined with logograms to narrow down their possible meaning. In Chinese, they are fused with logographic elements used phonetically; such "radical and phonetic" characters make up the bulk of the script. Ancient Egyptian and Chinese relegated the active use of rebus to the spelling of foreign and dialectal words.
Logoconsonantal scripts have graphemes that may be extended phonetically according to the consonants of the words they represent, ignoring the vowels. For example, the Egyptian duck hieroglyph was used to write both sȝ 'duck' and sȝ 'son', though it is likely that these words were not pronounced the same except for their consonants. The primary examples of logoconsonantal scripts are the scripts of Ancient Egyptian: hieroglyphs, hieratic, and demotic.
All historical logographic systems include a phonetic dimension, as it is impractical to have a separate basic character for every word or morpheme in a language.[a] In some cases, such as cuneiform as it was used for Akkadian, the vast majority of glyphs are used for their sound values rather than logographically. Many logographic systems also have a semantic/ideographic component (see ideogram), called "determinatives" in the case of Egyptian and "radicals" in the case of Chinese.[b]
Typical Egyptian usage was to augment a logogram, which may potentially represent several words with different pronunciations, with a determinative to narrow down the meaning, and a phonetic component to specify the pronunciation. In the case of Chinese, the vast majority of characters are a fixed combination of a radical that indicates its nominal category, plus a phonetic to give an idea of the pronunciation. The Mayan system used logograms with phonetic complements like the Egyptian, while lacking ideographic components.
Not all logograms are associated with one specific language, and some are not associated with any language at all. The ampersand is a logogram in the Latin script,[3] a combination of the letters "e" and "t". In Latin, "et" means 'and', and the ampersand still represents this word today; moreover, it does so across a variety of languages, standing for the morphemes "and", "y", or "en" for speakers of English, Spanish, or Dutch, respectively.
Outside of any particular script is Unicode, a compilation of characters from many writing systems; the Unicode Consortium states its intention to include every character from every language in the standard.[4] Unicode is the generally accepted standard for computer character encoding, but others, like ASCII and Baudot, exist and serve various purposes in digital communication. Many logograms in these character sets are ubiquitous and are used on the Internet by users worldwide.
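As a brief illustration (a minimal Python sketch; the characters chosen are arbitrary examples), the following shows how a logogram is identified by a Unicode code point like any other character, while needing more bytes in UTF-8 and having no representation in 7-bit ASCII:

```python
# Minimal sketch (example characters only): each character, logogram or not,
# is identified by a Unicode code point, but a CJK logogram needs more bytes
# in UTF-8 and cannot be encoded in 7-bit ASCII at all.
for ch in ["木", "&", "e"]:
    print(f"{ch!r}  code point: U+{ord(ch):04X}  UTF-8 bytes: {len(ch.encode('utf-8'))}")
    # '木' -> U+6728, 3 bytes; '&' -> U+0026, 1 byte; 'e' -> U+0065, 1 byte

try:
    "木".encode("ascii")
except UnicodeEncodeError:
    print("'木' cannot be encoded in ASCII")
```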
Chinese scholars have traditionally classified Chinese characters into six types by etymology.
The first two types are "single-body", meaning that the character was created independently of other characters. "Single-body" pictograms and ideograms make up only a small proportion of Chinese logograms. More productive for the Chinese script were the two "compound" methods, i.e. the character was created from assembling different characters. Despite being called "compounds", these logograms are still single characters, and are written to take up the same amount of space as any other logogram. The final two types are methods in the usage of characters rather than the formation of characters themselves.
Page from Newly Compiled Four Character Dictionary (新編對相四言), a 1436 Ming Dynasty primer on Chinese characters.
The first type, and the type most often associated with Chinese writing, are pictograms, which are pictorial representations of the morpheme represented, e.g. 山 for 'mountain'.
The second type are the ideograms that attempt to visualize abstract concepts, such as 上 'up' and 下 'down'. Also considered ideograms are pictograms with an ideographic indicator; for instance, 刀 is a pictogram meaning 'knife', while 刃 is an ideogram meaning 'blade'.
Radical–radical compounds, in which each element of the character (called a radical) hints at the meaning. For example, 休 'rest' is composed of the characters for 'person' (人) and 'tree' (木), with the intended idea of someone leaning against a tree, i.e. resting.
Radical–phonetic compounds, in which one component (the radical) indicates the general meaning of the character, and the other (the phonetic) hints at the pronunciation. An example is 樑 (liáng), where the phonetic 梁 liáng indicates the pronunciation of the character and the radical 木 ('wood') indicates its meaning of 'supporting beam'. Characters of this type constitute around 90% of Chinese logograms.[5]
Changed-annotation characters are characters which were originally the same character but have bifurcated through orthographic and often semantic drift. For instance, 樂 / 乐 can mean both 'music' (yuè) and 'pleasure' (lè).
Improvisational characters (lit. 'improvised-borrowed-words') come into use when a native spoken word has no corresponding character, and hence another character with the same or a similar sound (and often a close meaning) is "borrowed"; occasionally, the new meaning can supplant the old meaning. For example, 自 used to be a pictographic word meaning 'nose', but was borrowed to mean 'self', and is now used almost exclusively to mean the latter; the original meaning survives only in stock phrases and more archaic compounds. Because of their derivational process, the entire set of Japanese kana can be considered to be of this type of character, hence the name kana (lit. 'borrowed names'). Example: Japanese 仮名; 仮 is a simplified form of Chinese 假 used in Korea and Japan, and 假借 is the Chinese name for this type of characters.
The most productive method of Chinese writing, the radical-phonetic, was made possible by ignoring certain distinctions in the phonetic system of syllables. In Old Chinese, post-final ending consonants /s/ and /ʔ/ were typically ignored; these developed into tones in Middle Chinese, which were likewise ignored when new characters were created. Also ignored were differences in aspiration (between aspirated vs. unaspirated obstruents, and voiced vs. unvoiced sonorants); the Old Chinese difference between type-A and type-B syllables (often described as presence vs. absence of palatalization or pharyngealization); and sometimes, voicing of initial obstruents and/or the presence of a medial /r/ after the initial consonant. In earlier times, greater phonetic freedom was generally allowed. During Middle Chinese times, newly created characters tended to match pronunciation exactly, other than the tone – often by using as the phonetic component a character that itself is a radical-phonetic compound.
Due to the long period of language evolution, such component "hints" within characters as provided by the radical-phonetic compounds are sometimes useless and may be misleading in modern usage. As an example, based on 每 'each', pronounced měi in Standard Mandarin, are the characters 侮 'to humiliate', 悔 'to regret', and 海 'sea', pronounced respectively wǔ, huǐ, and hǎi in Mandarin. Three of these characters were pronounced very similarly in Old Chinese – /mˤəʔ/ (每), /m̥ˤəʔ/ (悔), and /m̥ˤəʔ/ (海) according to a recent reconstruction by William H. Baxter and Laurent Sagart[6] – but sound changes in the intervening 3,000 years or so (including two different dialectal developments, in the case of the last two characters) have resulted in radically different pronunciations.
Within the context of the Chinese language, Chinese characters (known as hanzi) by and large represent words and morphemes rather than pure ideas; however, the adoption of Chinese characters by the Japanese and Korean languages (where they are known as kanji and hanja, respectively) has resulted in some complications to this picture.
Many Chinese words, composed of Chinese morphemes, were borrowed into Japanese and Korean together with their character representations; in this case, the morphemes and characters were borrowed together. In other cases, however, characters were borrowed to represent native Japanese and Korean morphemes, on the basis of meaning alone. As a result, a single character can end up representing multiple morphemes of similar meaning but with different origins across several languages. Because of this, kanji and hanja are sometimes described as morphographic writing systems.[7]
Differences in processing of logographic and phonologic writing systems
Because much research on language processing has centered on English and other alphabetically written languages, many theories of language processing have stressed the role of phonology in producing speech. Contrasting logographically coded languages, where a single character is represented phonetically and ideographically, with phonetically/phonemically spelled languages has yielded insights into how different languages rely on different processing mechanisms. Studies on the processing of logographically coded languages have, among other things, looked at neurobiological differences in processing, with one area of particular interest being hemispheric lateralization. Since logographically coded languages are more closely associated with images than alphabetically coded languages, several researchers have hypothesized that right-side activation should be more prominent in logographically coded languages. Although some studies have yielded results consistent with this hypothesis, there are too many contrasting results to make any final conclusions about the role of hemispheric lateralization in orthographically versus phonetically coded languages.[8]
Another topic that has been given some attention is differences in processing of homophones. Verdonschot et al.[9] examined differences in the time it took to read a homophone out loud when a picture that was either related or unrelated[10] to a homophonic character was presented before the character. Both Japanese and Chinese homophones were examined. Whereas word production of alphabetically coded languages (such as English) has shown a relatively robust immunity to the effect of context stimuli,[11] Verdonschot et al.[12] found that Japanese homophones seem particularly sensitive to these types of effects. Specifically, reaction times were shorter when participants were presented with a phonologically related picture before being asked to read a target character out loud. An example of a phonologically related stimulus from the study would be, for instance, when participants were presented with a picture of an elephant, which is pronounced zou in Japanese, before being presented with the Chinese character 造, which is also read zou. No effect of phonologically related context pictures was found for the reaction times for reading Chinese words. A comparison of the (partially) logographically coded languages Japanese and Chinese is interesting because whereas the Japanese language consists of more than 60% homographic heterophones (characters that can be read two or more different ways), most Chinese characters only have one reading. Because both languages are logographically coded, the difference in latency in reading aloud Japanese and Chinese due to context effects cannot be ascribed to the logographic nature of the writing systems. Instead, the authors hypothesize that the difference in latency times is due to additional processing costs in Japanese, where the reader cannot rely solely on a direct orthography-to-phonology route, but information on a lexical-syntactical level must also be accessed in order to choose the correct pronunciation. This hypothesis is confirmed by studies finding that Japanese Alzheimer's disease patients whose comprehension of characters had deteriorated could still read the words out loud with no particular difficulty.[13][14]
Studies contrasting the processing of English and Chinese homophones in lexical decision tasks have found an advantage for homophone processing in Chinese, and a disadvantage for processing homophones in English.[15] The processing disadvantage in English is usually described in terms of the relative lack of homophones in the English language. When a homophonic word is encountered, the phonological representation of that word is first activated. However, since this is an ambiguous stimulus, a matching at the orthographic/lexical ("mental dictionary") level is necessary before the stimulus can be disambiguated and the correct pronunciation can be chosen. In contrast, in a language (such as Chinese) where many characters with the same reading exist, it is hypothesized that the person reading the character will be more familiar with homophones, and that this familiarity will aid the processing of the character and the subsequent selection of the correct pronunciation, leading to shorter reaction times when attending to the stimulus. In an attempt to better understand homophony effects on processing, Hino et al.[11] conducted a series of experiments using Japanese as their target language. While controlling for familiarity, they found a processing advantage for homophones over non-homophones in Japanese, similar to what has previously been found in Chinese. The researchers also tested whether orthographically similar homophones would yield a disadvantage in processing, as has been the case with English homophones,[16] but found no evidence for this. It is evident that there is a difference in how homophones are processed in logographically coded and alphabetically coded languages, but whether the advantage for processing of homophones in the logographically coded languages Japanese and Chinese (i.e. their writing systems) is due to the logographic nature of the scripts, or whether it merely reflects an advantage for languages with more homophones regardless of script nature, remains to be seen.
The main difference between logograms and other writing systems is that the graphemes are not linked directly to their pronunciation. An advantage of this separation is that understanding of the pronunciation or language of the writer is unnecessary; e.g. the numeral 1 is understood regardless of whether its reader calls it one, ichi, or wāḥid. Likewise, people speaking different varieties of Chinese may not understand each other in speaking, but may do so to a significant extent in writing even if they do not write in Standard Chinese. Therefore, in China, Vietnam, Korea, and Japan before modern times, communication by writing (筆談) was the norm of East Asian international trade and diplomacy using Classical Chinese.[citation needed]
This separation, however, also has the great disadvantage of requiring the memorization of the logograms when learning to read and write, separately from the pronunciation. Though not an inherent feature of logograms, Japanese, owing to its unique history of development, has the added complication that almost every logogram has more than one pronunciation. Conversely, a phonetic character set is written precisely as it is spoken, but with the disadvantage that slight pronunciation differences introduce ambiguities. Many alphabetic systems such as those of Greek, Latin, Italian, Spanish, and Finnish make the practical compromise of standardizing how words are written while maintaining a nearly one-to-one relation between characters and sounds. Orthographies in some other languages, such as English, French, Thai and Tibetan, are more complicated than that; character combinations are often pronounced in multiple ways, usually depending on their history. Hangul, the Korean language's writing system, is an example of an alphabetic script that was designed to replace the logogrammatic hanja in order to increase literacy. The latter is now rarely used, but retains some currency in South Korea, sometimes in combination with hangul.[citation needed]
Entering complex characters can be cumbersome on electronic devices due to a practical limitation in the number of input keys. There exist various input methods for entering logograms, either by breaking them up into their constituent parts, as with the Cangjie and Wubi methods of typing Chinese, or by using phonetic systems such as Bopomofo or Pinyin, where the word is entered as pronounced and then selected from a list of matching logograms. While the former method is (linearly) faster, it is more difficult to learn. With stroke-based input methods, by contrast, the strokes forming the logogram are entered in the order in which they are normally written, and the corresponding logogram is then selected from the matches.
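As a rough sketch of the phonetic approach (a toy example only; real input methods use large lexica and frequency models, and the mappings below are a handful of hand-picked readings), a pinyin-style lookup maps a typed pronunciation to candidate logograms for the user to choose from:

```python
# Toy sketch of a phonetic (pinyin-style) input method: the user types a
# romanized pronunciation and picks the intended logogram from candidates.
# The dictionary is a tiny hand-picked sample, not a real IME lexicon.
CANDIDATES = {
    "mu": ["木", "目", "母"],      # 'wood', 'eye', 'mother'
    "shan": ["山", "衫", "闪"],    # 'mountain', 'shirt', 'flash'
    "liang": ["梁", "樑", "两"],   # 'beam', 'supporting beam', 'two'
}

def suggest(pinyin: str) -> list[str]:
    """Return candidate logograms for a typed pronunciation."""
    return CANDIDATES.get(pinyin, [])

if __name__ == "__main__":
    typed = "shan"
    print(f"{typed} -> {suggest(typed)}")  # the user then selects e.g. 山 'mountain'
```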
Also due to the number of glyphs, in programming and computing in general, more memory is needed to store each grapheme, as the character set is larger. As a comparison, ISO 8859 requires only one byte for each grapheme, while the Basic Multilingual Plane encoded in UTF-8 requires up to three bytes per character. On the other hand, English words, for example, average five characters and a space per word[18] and thus need six bytes for every word. Since many logograms contain more than one grapheme, it is not clear which is more memory-efficient. Variable-width encodings allow a unified character encoding standard such as Unicode to use only the bytes necessary to represent a character, reducing the overhead that results from merging large character sets with smaller ones.
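As a concrete back-of-the-envelope check (a minimal Python sketch; the words chosen are arbitrary examples), UTF-8 byte counts can be compared directly:

```python
# Rough UTF-8 byte-count comparison (illustrative words only): a five-letter
# English word plus a space versus one- and two-character Chinese words.
samples = {
    "water ": "English word: five letters plus a space",
    "水": "single Chinese logogram for 'water'",
    "河水": "two-character Chinese word 'river water'",
}
for text, note in samples.items():
    print(f"{text!r}: {len(text.encode('utf-8'))} bytes  ({note})")
# 'water ' -> 6 bytes; '水' -> 3 bytes; '河水' -> 6 bytes
```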
^ Most have glyphs with predominantly syllabic values, called logosyllabic, though Egyptian had predominantly consonantal or poly-consonantal values, and is thus called logoconsonantal.
^ "Determinative" is the more generic term, however, and some authors use it for Chinese as well (e.g. William Boltz, in Daniels and Bright, 1996, p. 194).
^ Li, Y.; Kang, J. S. (1993). "Analysis of phonetics of the ideophonetic characters in modern Chinese". In Chen, Y. (ed.). Information Analysis of Usage of Characters in Modern Chinese (in Chinese). Shanghai Education Publisher. pp. 84–98.
^ Rogers, H. (2005). Writing Systems: A Linguistic Approach. Blackwell Publishing.
^ Hanavan, Kevin; Jeffrey Coney (2005). "Hemispheric asymmetry in the processing of Japanese script". Laterality: Asymmetries of Body, Brain and Cognition. 10 (5): 413–428. doi:10.1080/13576500442000184. PMID 16191812. S2CID 20404324.
^ Hino, Y.; Kusunose, Y.; Lupker, S. J.; Jared, D. (2012). "The Processing Advantage and Disadvantage for Homophones in Lexical Decision Tasks". Journal of Experimental Psychology: Learning, Memory, and Cognition. 39 (2): 529–551. doi:10.1037/a0029122. PMID 22905930.
^ Sasanuma, S.; Sakuma, N.; Kitano, K. (1992). "Reading kanji without semantics: Evidence from a longitudinal study of dementia". Cognitive Neuropsychology. 9 (6): 465–486. doi:10.1080/02643299208252068.
^ See Hino et al. (2012) for a brief review of the literature.
^ Haigh, C. A.; Jared, D. (2007). "The activation of phonological representations by bilinguals while reading silently: Evidence from interlingual homophones". Journal of Experimental Psychology: Learning, Memory, and Cognition. 33 (4): 623–644. doi:10.1037/0278-7393.33.4.623. PMID 17576144. Citing Ferrand & Grainger 2003, Haigh & Jared 2004.