
The Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in printed sources published between 1500 and 2022[1][2][3][4] in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish.[1][2][5] There are also some specialized English corpora, such as American English, British English, and English Fiction.[6]
The program can search for a word or a phrase, including misspellings or gibberish.[5] The n-grams are matched with the text within the selected corpus and, if found in 40 or more books, are then displayed as a graph.[6] The Google Books Ngram Viewer supports searches for parts of speech and wildcards.[6] It is routinely used in research.[7][8]
In the development process, Google teamed up with two Harvard researchers, Jean-Baptiste Michel and Erez Lieberman Aiden, and quietly released the program on December 16, 2010.[2][9] Before the release, it was difficult to quantify the rate of linguistic change because no database designed for this purpose existed, said Steven Pinker,[10] a well-known linguist who was one of the co-authors of the Science paper published the same day.[1] The Google Books Ngram Viewer was developed in the hope of opening a new window onto quantitative research in the humanities, and the database contained 500 billion words from 5.2 million books from the very beginning.[2][3][9]
The intended audience was scholarly, but the Google Books Ngram Viewer made it possible for anyone with a computer to see a graph that represents the diachronic change of the use of words and phrases with ease. Lieberman told the New York Times that the developers aimed to give even children the ability to browse cultural trends throughout history.[9] In the Science paper, Lieberman and his collaborators called this method of high-volume data analysis in digitized texts "culturomics".[1][9]
Commas delimit user-entered search terms, and each comma-separated term is searched in the database as an n-gram (for example, "nursery school" is a 2-gram or bigram).[6] The Ngram Viewer then returns a plotted line chart. Note that due to limitations on the size of the Ngram database, only matches found in at least 40 books are indexed.[6]
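The query parsing described above can be sketched in a few lines of Python. This is an illustrative model, not Google's implementation; the helper names `parse_query` and `extract_ngrams` are invented for this example:

```python
# Illustrative sketch of comma-delimited query parsing and word-level
# n-gram extraction (not Google's actual implementation).

def parse_query(query):
    """Commas delimit search terms; each term is itself an n-gram
    whose order n is simply its word count."""
    return [term.strip() for term in query.split(",")]

def extract_ngrams(text, n):
    """Split text into word-level n-grams: tuples of n consecutive words."""
    words = text.split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

terms = parse_query("nursery school, kindergarten")
orders = {term: len(term.split()) for term in terms}
print(orders)  # {'nursery school': 2, 'kindergarten': 1}

print(extract_ngrams("the child attends nursery school", 2))
# [('the', 'child'), ('child', 'attends'), ('attends', 'nursery'), ('nursery', 'school')]
```

A real index would count each n-gram's occurrences per publication year and per book, so that the 40-book threshold mentioned above could be applied before plotting.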
The data sets of the Ngram Viewer have been criticized for their reliance upon inaccurate optical character recognition (OCR) and for including large numbers of incorrectly dated and categorized texts.[11] Because of these errors, and because they are uncontrolled for bias[12] (such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), care must be taken in using the corpora to study language or test theories.[13] Furthermore, the data sets may not reflect general linguistic or cultural change and can only hint at such an effect, because they do not include metadata such as publication date, author, length, or genre, in order to avoid any potential copyright infringements.[14]
Systemic errors like the confusion of s and f in pre-19th-century texts (due to the use of ſ, the long s, which is similar in appearance to f) can cause systemic bias.[13] Although the Google Books team claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.[15][16]
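The long-s confusion above can be illustrated with a toy simulation. The function name is hypothetical, and this only mimics the misreading in one direction; it is not a model of any real OCR engine:

```python
# Toy illustration of the long-s OCR confusion: in pre-19th-century
# typography, 's' was often printed as 'ſ' (long s), which OCR can
# misread as 'f'. The result is that counts for real words drop while
# nonsense tokens rise in early data.

def simulate_long_s_misread(text):
    """Replace each long s with 'f', mimicking a common OCR error."""
    return text.replace("ſ", "f")

# 'beſt' (i.e. 'best' in period typography) comes out as the nonsense token 'beft'.
print(simulate_long_s_misread("the beſt of times"))  # "the beft of times"
```

In the real corpus the effect is systematic, which is why frequency curves for s-words in the 1700s should be interpreted with caution.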
Guidelines for doing research with data from Google Ngram have been proposed that try to address some of the issues discussed above.[17]