
This is a timeline of artificial intelligence, also known as synthetic intelligence.
| Date | Development |
|---|---|
| Antiquity | Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata (such as Talos) and artificial beings (such as Galatea and Pandora).[1] |
| | Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write, "They have 'sensus' and 'spiritus'... by discovering the true nature of the gods, man has been able to reproduce it."[2] |
| 10th century BC | Yan Shi presented King Mu of Zhou with mechanical men that were capable of moving their bodies independently.[3][4] |
| 384 BC–322 BC | Aristotle described the syllogism, a method of formal, mechanical thought, in the Organon.[5][6][7] Aristotle also described means–ends analysis (an algorithm for planning) in the Nicomachean Ethics, the same algorithm used by Newell and Simon's General Problem Solver (1959).[8] |
| 3rd century BC | Ctesibius invented a mechanical water clock with an alarm, the first example of a feedback mechanism.[citation needed] |
| 1st century | Hero of Alexandria created mechanical men and other automata.[9] He produced what may have been "the world's first practical programmable machine":[10] an automatic theatre. |
| 260 | Porphyry wrote the Isagogê, which categorized knowledge and logic, including a drawing of what would later be called a "semantic net".[11] |
| ~800 | Jabir ibn Hayyan developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[12] |
| 9th century | The Banū Mūsā brothers created a programmable music automaton described in their Book of Ingenious Devices: a steam-driven flute controlled by a program represented by pins on a revolving cylinder.[13] This was "perhaps the first machine with a stored program".[10] |
| | al-Khwarizmi wrote textbooks with precise step-by-step methods for arithmetic and algebra, used in the Islamic world, India and Europe until the 16th century. The word "algorithm" is derived from his name.[14] |
| 1206 | Ismail al-Jazari created a programmable orchestra of mechanical human beings.[15] |
| 1275 | Ramon Llull, a Mallorcan theologian, invented the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. Llull described his machines as mechanical entities that could combine basic truths and facts to produce advanced knowledge. The method was further developed by Gottfried Wilhelm Leibniz in the 17th century.[16] |
| ~1500 | Paracelsus claimed to have created an artificial man out of magnetism, sperm, and alchemy.[17] |
| ~1580 | Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[18] |
| Date | Development |
|---|---|
| 1620 | Francis Bacon developed an empirical theory of knowledge and introduced inductive logic in his work Novum Organum, a play on Aristotle's title Organon.[19][20][7] |
| 1623 | Wilhelm Schickard drew a calculating clock in a letter to Kepler. It was the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet).[a] |
| 1641 | Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote, "...for reason is nothing but reckoning".[21][22] |
| 1642 | Blaise Pascal invented a mechanical calculator,[b] the first digital calculating machine.[23] |
| 1647 | René Descartes proposed that the bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[24] |
| 1654 | Blaise Pascal described how to find expected values in probability. In 1662, Antoine Arnauld published a formula to find the maximum expected value, and in 1663, Gerolamo Cardano's solution to the same problems was published, 116 years after it was written. The theory of probability was further developed by Jacob Bernoulli and Pierre-Simon Laplace in the 18th century.[25] Probability theory would become central to AI and machine learning from the 1990s onward. |
| 1672 | Gottfried Wilhelm Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division.[26] |
| 1676 | Leibniz derived the chain rule.[27] The rule is used in AI to train neural networks; for example, the backpropagation algorithm uses the chain rule.[10] |
| 1679 | Leibniz developed a universal calculus of reasoning (the alphabet of human thought) by which arguments could be decided mechanically. It assigned a specific number to each object in the world, as a prelude to an algebraic solution to all possible problems.[28] |
| 1726 | Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations". By using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[29] The machine is a parody of the Ars Magna, one of the inspirations for Gottfried Wilhelm Leibniz's mechanism. |
| 1738 | Daniel Bernoulli introduced the concept of "utility", a generalization of probability, the basis of economics and decision theory, and the mathematical foundation for the way AI represents the "goals" of intelligent agents.[30] |
| 1739 | David Hume described induction, the logical method of learning generalities from examples.[7] |
| 1750 | Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[31] |
| 1763 | Thomas Bayes's work An Essay Towards Solving a Problem in the Doctrine of Chances, published two years after his death, laid the foundations of Bayes' theorem, which is used in modern AI in Bayesian networks.[25] |
| 1769 | Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk, which Kempelen claimed could defeat human players.[32] The Turk was later shown to be a hoax involving a human chess player. |
| 1795–1805 | The simplest kind of artificial neural network is the linear network, known for over two centuries as the method of least squares or linear regression. It was used to find a good rough linear fit to a set of points by Adrien-Marie Legendre (1805)[33] and Carl Friedrich Gauss (1795)[34] for the prediction of planetary movement.[10][35] |
| 1805 | Joseph Marie Jacquard created a programmable loom, based on earlier inventions by Basile Bouchon (1725), Jean-Baptiste Falcon (1728) and Jacques Vaucanson (1740).[36] Replaceable punched cards controlled sequences of operations in the manufacturing of textiles. This may have been the first industrial software for commercial enterprises.[10] |
| 1818 | Mary Shelley published Frankenstein; or, The Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[37] |
| 1822–1859 | Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.[38] |
| 1837 | The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.[39] |
| 1854 | George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[40] |
| 1863 | Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.[41] |
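The least-squares fit that Legendre and Gauss used for planetary prediction (1795–1805, above) amounts to a one-neuron linear network with a closed-form solution. A minimal sketch in Python (the data points are invented for illustration):

```python
# Ordinary least squares: fit y ≈ a*x + b by minimizing squared error,
# the method of Legendre (1805) and Gauss (1795).
def least_squares(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: slope = cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1 recover slope 2, intercept 1.
a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```

The same closed form generalizes to many inputs as the normal equations, which is why a linear network needs no iterative training at all.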
| Date | Development |
|---|---|
| 1910–1913 | Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which showed that all of elementary mathematics could be reduced to mechanical reasoning in formal logic.[42] |
| 1912–1914 | Leonardo Torres Quevedo built an automaton for chess endgames, El Ajedrecista. He has been called "the 20th century's first AI pioneer".[10] In his Essays on Automatics (1914), Torres published speculation about thinking and automata and introduced the idea of floating-point arithmetic.[43][44] |
| 1923 | Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This was the first use of the word "robot" in English.[45] |
| 1920–1925 | Wilhelm Lenz and Ernst Ising created and analyzed the Ising model (1925),[46] which can be viewed as the first artificial recurrent neural network (RNN), consisting of neuron-like threshold elements.[10] In 1972, Shun'ichi Amari made this architecture adaptive.[47][10] |
| 1920s and 1930s | Ludwig Wittgenstein's Tractatus Logico-Philosophicus (1921) inspired Rudolf Carnap and the logical positivists of the Vienna Circle to use formal logic as the foundation of philosophy. However, Wittgenstein's later work in the 1940s argued that context-free symbolic logic is incoherent without human interpretation. |
| 1931 | Kurt Gödel encoded mathematical statements and proofs as integers and showed that there are true theorems that are unprovable by any consistent theorem-proving machine. Thus, "he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI",[10] laying the foundations of theoretical computer science and AI theory. |
| 1935 | Alonzo Church extended Gödel's proof and showed that the decision problem of computer science does not have a general solution.[48] He developed the lambda calculus, which would eventually be fundamental to the theory of computer languages. |
| 1936 | Konrad Zuse filed his patent application for a program-controlled computer.[49] |
| 1937 | Alan Turing published "On Computable Numbers",[50] which laid the foundations of the modern theory of computation by introducing the Turing machine, a physical interpretation of "computability". He used it to confirm Gödel's result by proving that the halting problem is undecidable. |
| 1940 | Edward Condon displayed Nimatron, a digital machine that played Nim perfectly. |
| 1941 | Konrad Zuse built the first working program-controlled general-purpose computer.[51] |
| 1943 | Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", the first mathematical description of an artificial neural network.[52] |
| | Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics". Wiener's popular book by that name was published in 1948. |
| 1944 | Game theory, which would prove invaluable in the progress of AI, was introduced in the book Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern. |
| 1945 | Vannevar Bush published "As We May Think" (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities. |
| 1948 | Alan Turing produced the "Intelligent Machinery" report, regarded as the first manifesto of artificial intelligence. It introduced many concepts, including the logic-based approach to problem-solving and the idea that intellectual activity consists mainly of various kinds of search, and included a discussion of machine learning in which he anticipated the connectionist approach to AI.[53] |
| | John von Neumann (quoted by Edwin Thompson Jaynes), in response to a comment at a lecture that it was impossible for a machine (at least one created by humans) to think, said: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church–Turing thesis, which states that any effective procedure can be simulated by a (generalized) computer. |
| 1949 | Donald O. Hebb developed Hebbian theory, a possible algorithm for learning in neural networks.[54] |
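Hebb's learning rule (1949, above) strengthens a connection in proportion to the correlated activity of the two units it joins, often summarized as "neurons that fire together wire together". A minimal sketch with made-up numbers, not any particular historical implementation:

```python
# Hebbian update: dw_i = eta * x_i * y, where y is the unit's output.
def hebbian_step(w, x, eta=0.1):
    y = sum(wi * xi for wi, xi in zip(w, x))  # linear activation
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.5, 0.5]
for _ in range(3):                 # repeated co-activation of both inputs
    w = hebbian_step(w, [1.0, 1.0])
# Both weights grow together: correlated inputs are reinforced.
```

Note the rule is unsupervised and unbounded; later variants (e.g. with weight normalization) were introduced precisely because pure Hebbian growth diverges.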
| Date | Development |
|---|---|
| 1950 | Alan Turing published "Computing Machinery and Intelligence", which proposed the Turing test as a measure of machine intelligence and answered all of the most common objections to the proposition that "machines can think".[55] |
| | Claude Shannon published a detailed analysis of chess playing as search.[56] |
| | Isaac Asimov published his Three Laws of Robotics.[57] |
| 1951 | The first working AI programs were written to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.[54] |
| 1952–1962 | Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur.[58] His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[59][60] |
| 1956 | The Dartmouth College summer AI conference was organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon. McCarthy coined the term artificial intelligence for the conference.[61][62] |
| | The first demonstration of the Logic Theorist (LT), written by Allen Newell, Cliff Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim. The program has been described as the first deliberately engineered to perform automated reasoning; it would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[63] Simon said that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind".[64] |
| 1958 | John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.[59] |
| | Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry.[59] It exploited a semantic model of the domain in the form of diagrams of "typical" cases.[citation needed] |
| | The Teddington Conference on the Mechanization of Thought Processes was held in the UK. Among the papers presented were John McCarthy's "Programs with Common Sense" (which proposed the Advice Taker application as a primary research goal),[59] Oliver Selfridge's "Pandemonium", and Marvin Minsky's "Some Methods of Heuristic Programming and Artificial Intelligence". |
| 1959 | The General Problem Solver (GPS) was created by Newell, Shaw and Simon while at CMU.[59] |
| | John McCarthy and Marvin Minsky founded the MIT AI Lab.[59] |
| Late 1950s, early 1960s | Margaret Masterman and colleagues at the University of Cambridge designed semantic nets for machine translation.[citation needed] |
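Shannon's 1950 analysis of chess as search (above) rests on the minimax principle: each player chooses the move whose worst-case outcome is best for them. A toy sketch over a hand-built game tree (the tree and leaf scores are invented for illustration; real programs also need a static evaluation function and pruning):

```python
# Minimax over an explicit game tree: leaves hold scores from the
# maximizing player's point of view; internal nodes are lists of children.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: the maximizer picks a branch, the minimizer then replies.
tree = [[3, 5], [2, 9]]
best = minimax(tree, maximizing=True)  # min(3,5)=3, min(2,9)=2 -> max is 3
```

Shannon's insight was that chess is exactly this computation, made intractable by the size of the tree; everything from alpha-beta pruning onward is about searching less of it.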
| Date | Development |
|---|---|
| 1960s | Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction. |
| 1960 | J.C.R. Licklider published "Man–Computer Symbiosis". |
| 1961 | James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college-freshman level. |
| | In Minds, Machines and Gödel, John Lucas[65] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI that derives all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior. |
| | Unimation's industrial robot Unimate worked on a General Motors automobile assembly line. |
| 1963 | Thomas Evans' program ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests. |
| | Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.[66][67][68][69] |
| | Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine-learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons. |
| 1964 | Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC) showed that computers can understand natural language well enough to solve algebra word problems correctly. |
| | In his essay collection Summa Technologiae, Stanisław Lem discussed "intellectronics" (AI). |
| | Bertram Raphael's MIT dissertation on the SIR program demonstrated the power of a logical representation of knowledge for question-answering systems. |
| 1965 | In the Soviet Union, Alexey Ivakhnenko and Valentin Lapa developed the first deep-learning algorithm for multilayer perceptrons.[70][71][10] |
| | Lotfi A. Zadeh at U.C. Berkeley published his first paper introducing fuzzy logic, "Fuzzy Sets" (Information and Control 8: 338–353). |
| | J. Alan Robinson invented a mechanical proof procedure, the resolution method, which allows programs to work efficiently with formal logic as a representation language. |
| | Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed. |
| | Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds from scientific-instrument data. It was the first expert system. |
| 1966 | Ross Quillian (PhD dissertation, Carnegie Institute of Technology, now CMU) demonstrated semantic nets. |
| | The Machine Intelligence[72] workshop was held at Edinburgh – the first of an influential annual series organized by Donald Michie and others. |
| | A negative report on machine translation killed much work in natural language processing (NLP) for many years. |
| | The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting the mass spectra of organic chemical compounds – the first successful knowledge-based program for scientific reasoning. |
| 1967 | Shun'ichi Amari became the first to use stochastic gradient descent for deep learning in multilayer perceptrons.[73] In computer experiments conducted by his student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes.[10] |
| 1968 | Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program – the first successful knowledge-based program in mathematics. |
| | Richard Greenblatt at MIT built a knowledge-based chess-playing program, Mac Hack, that was good enough to achieve a class-C rating in tournament play. |
| | Wallace and Boulton's program Snob (Computer Journal 11(2), 1968), for unsupervised classification (clustering), used the Bayesian minimum message length criterion, a mathematical realisation of Occam's razor. |
| 1969 | Stanford Research Institute (SRI): Shakey the robot demonstrated combining animal locomotion, perception and problem-solving. |
| | Roger Schank (Stanford) defined the conceptual dependency model for natural language understanding. It was later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner. |
| | Yorick Wilks (Stanford) developed the semantic-coherence view of language called Preference Semantics, embodied in the first semantics-driven machine-translation program and the basis of many PhD dissertations since (such as those of Bran Boguraev and David Carter at Cambridge). |
| | The first International Joint Conference on Artificial Intelligence (IJCAI) was held at Stanford. |
| | Marvin Minsky and Seymour Papert published Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. The book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. However, by the time the book came out, methods for training multilayer perceptrons by deep learning were already known (Alexey Ivakhnenko and Valentin Lapa, 1965; Shun'ichi Amari, 1967).[10] Significant progress in the field continued (see below). |
| | McCarthy and Hayes started the discussion about the frame problem with their essay "Some Philosophical Problems from the Standpoint of Artificial Intelligence". |
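A single perceptron of the kind Minsky and Papert analyzed (1969, above) can be trained with the classic error-correction rule. A minimal sketch learning logical AND, which, unlike XOR, is linearly separable (the learning rate and epoch count are arbitrary choices for illustration):

```python
# Perceptron learning rule: on a mistake, nudge the weights toward the target.
def train_perceptron(samples, epochs=10, eta=0.5):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

# Logical AND is linearly separable, so the rule converges;
# XOR is not -- the limitation Minsky and Papert highlighted.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

The same loop run on XOR cycles forever, since no single line separates the two classes; stacking layers removes the limitation, which is what the already-known multilayer methods cited above exploited.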
| Date | Development |
|---|---|
| Early 1970s | Jane Robinson and Don Walker established an influential natural language processing group at SRI.[74] |
| 1970 | Seppo Linnainmaa published the reverse mode of automatic differentiation. The method later became known as backpropagation and is heavily used to train artificial neural networks.[75] |
| | Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge. |
| | Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding. |
| | Patrick Winston's PhD program ARCH, at MIT, learned concepts from examples in the world of children's blocks. |
| 1971 | Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language-understanding program, SHRDLU, with a robot arm that carried out instructions typed in English. |
| | Work on the Boyer–Moore theorem prover started in Edinburgh.[76] |
| 1972 | The Prolog programming language was developed by Alain Colmerauer. |
| | Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS. |
| 1973 | The Assembly Robotics Group at the University of Edinburgh built the Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.) |
| | The Lighthill report gave a largely negative verdict on AI research in Great Britain and formed the basis for the decision by the British government to discontinue support for AI research in all but two universities. |
| 1974 | Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnosis, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems. |
| 1975 | Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state-space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems. |
| | Austin Tate developed the Nonlin hierarchical planning system, able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan. |
| | Marvin Minsky published his widely read and influential article on frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together. |
| | The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry) – the first scientific discoveries by a computer to be published in a refereed journal. |
| Mid-1970s | Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing the focus of discourse and anaphoric references in natural language processing. |
| | David Marr and MIT colleagues described the "primal sketch" and its role in visual perception. |
| 1976 | Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures). |
| | Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford. |
| | Stevo Bozinovski and Ante Fulgosi introduced the transfer-learning method in artificial intelligence, based on the psychology of learning.[77][78] |
| 1978 | Tom Mitchell, at Stanford, invented the concept of version spaces for describing the search space of a concept-formation program. |
| | Herbert A. Simon won the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing". |
| | The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge can be used to plan gene-cloning experiments. |
| 1979 | Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". |
| | Jack Myers and Harry Pople at the University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. |
| | Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. |
| | The Stanford Cart, built by Hans Moravec, became the first computer-controlled autonomous vehicle when it successfully traversed a chair-filled room and circumnavigated the Stanford AI Lab. |
| | BKG, a backgammon program written by Hans Berliner at CMU, defeated the reigning world champion (in part thanks to luck). |
| | Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford, began publishing work on non-monotonic logics and formal aspects of truth maintenance. |
| Late 1970s | Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrated the power of the ARPAnet for scientific collaboration. |
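Rule-based systems in the MYCIN tradition (1974, above) derive conclusions by chaining if-then rules over a working set of facts. A toy forward-chaining sketch, not MYCIN itself; the rules and fact names are invented for illustration, and real systems like MYCIN also attached certainty factors to each rule:

```python
# Forward chaining: repeatedly fire any rule whose premises are all known
# facts, adding its conclusion, until nothing new can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical toy rules, loosely in the spirit of a diagnosis system.
rules = [
    (["fever", "cough"], "flu-suspected"),
    (["flu-suspected", "test-positive"], "flu-confirmed"),
]
derived = forward_chain(["fever", "cough", "test-positive"], rules)
```

MYCIN actually ran the chain in the other direction (backward, from a goal hypothesis to the evidence needed), but the rule representation is the same.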
| Date | Development |
|---|---|
| 1980s | Lisp machines were developed and marketed. The first expert system shells and commercial applications appeared. |
| 1980 | The first National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford. |
| 1981 | Danny Hillis designed the Connection Machine, which utilizes parallel computing to bring new power to AI, and to computation in general. (He later founded Thinking Machines Corporation.) |
| | Stevo Bozinovski and Charles Anderson carried out the first concurrent programming (task parallelism) in neural network research. A program, "CAA Controller", written and executed by Bozinovski, interacted with the program "Inverted Pendulum Dynamics", written and executed by Anderson, using VAX/VMS mailboxes as a means of inter-program communication. The CAA controller learned to balance the simulated inverted pendulum.[79][80][81] |
| 1982 | The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, began in 1982 to create a "fifth generation computer" (see history of computing hardware) that was supposed to perform many calculations utilizing massive parallelism. |
| 1983 | John Laird and Paul Rosenbloom, working with Allen Newell, completed CMU dissertations on Soar. |
| | James F. Allen invented the interval calculus, the first widely used formalization of temporal events. |
| Mid-1980s | Neural networks became widely used with the backpropagation algorithm, also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos. |
| 1985 | The autonomous drawing program AARON, created by Harold Cohen, was demonstrated at the AAAI National Conference (based on more than a decade of work, with subsequent work showing major developments). |
| 1986 | The team of Ernst Dickmanns at Bundeswehr University of Munich built the first robot cars, driving up to 55 mph on empty streets. |
| | Barbara Grosz and Candace Sidner created the first computational model of discourse, establishing the field of research.[82] |
| 1987 | Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (cf. Doyle 1983).[83] |
| | Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence: Nouvelle AI. |
| | Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.[84] |
| 1989 | The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[85] |
| | Dean Pomerleau at CMU created ALVINN (An Autonomous Land Vehicle in a Neural Network), which was used in the Navlab program. |
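Backpropagation (mid-1980s, above) is the chain rule Leibniz derived in 1676, applied backward through a network one layer at a time. A minimal sketch for a tiny 1-1-1 network with one weight per layer; the inputs and weights are made-up numbers, and no learning library is assumed:

```python
import math

# Tiny 1-1-1 network: y = sigmoid(w2 * sigmoid(w1 * x)).
# The reverse pass computes dLoss/dw1 by chaining local derivatives.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, target, w1, w2):
    h = sigmoid(w1 * x)                  # hidden activation
    y = sigmoid(w2 * h)                  # output
    loss = 0.5 * (y - target) ** 2
    # Reverse pass: one chain-rule factor per step.
    dy = (y - target) * y * (1 - y)      # dLoss/d(pre-activation of y)
    dw2 = dy * h                         # dLoss/dw2
    dh = dy * w2 * h * (1 - h)           # dLoss/d(pre-activation of h)
    dw1 = dh * x                         # dLoss/dw1
    return loss, dw1, dw2

loss, dw1, dw2 = forward_backward(x=1.0, target=1.0, w1=0.5, w2=0.5)
# A gradient-descent step w -= eta * dw would reduce the loss.
```

The point of the reverse mode is cost: one backward sweep yields the gradient with respect to every weight, which is what made training large networks feasible.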
| Date | Development |
|---|---|
| 1990s | Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. |
| Early 1990s | TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players. |
| 1991 | The DART scheduling application deployed in the first Gulf War repaid DARPA's 30-year investment in AI research.[86] |
| 1992 | Carol Stoker and the NASA Ames robotics team explore marine life in Antarctica with an undersea telepresence ROV, operated from the ice near McMurdo Bay, Antarctica, and remotely via satellite link from Moffett Field, California.[87] |
| 1993 | Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). |
| Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years. | |
| ISX Corporation wins "DARPA contractor of the year"[88] for the Dynamic Analysis and Replanning Tool (DART), which reportedly repaid the US government's entire investment in AI research since the 1950s.[89] | |
| 1994 | Lotfi A. Zadeh at U.C. Berkeley coins "soft computing"[90] and builds a world network of research fusing neural science and neural net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems ("Fuzzy Logic, Neural Networks, and Soft Computing", Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77–84). |
| With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars. | |
| English draughts (checkers) world champion Tinsley resigned a match against the computer program Chinook. Chinook defeated the 2nd-highest-rated player, Lafferty, and won the USA National Tournament by the widest margin ever. | |
| Cindy Mason at NASA organizes the First AAAI Workshop on AI and the Environment.[91] | |
| 1995 | Cindy Mason at NASA organizes the First International IJCAI Workshop on AI and the Environment.[92] |
| "No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). The throttle and brakes were controlled by a human driver.[93][94] | |
| One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes. | |
| 1996 | Steve Grand, roboticist and computer scientist, develops and releases Creatures, a popular simulation of artificial life-forms with simulated biochemistry, neurology with learning algorithms, and inheritable digital DNA. |
| 1997 | The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov. |
| First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators. | |
| Computer Othello program Logistello defeated the world champion, Takeshi Murakami, with a score of 6–0. | |
| Long short-term memory (LSTM) was published in Neural Computation by Sepp Hochreiter and Jürgen Schmidhuber.[95] | |
| 1998 | Tiger Electronics' Furby is released, and becomes the first commercially successful AI designed for a domestic environment. |
| Tim Berners-Lee published his Semantic Web Road map paper.[96] | |
| Ulises Cortés and Miquel Sànchez-Marrè organize the first Environment and AI Workshop in Europe, at ECAI: "Binding Environmental Sciences and Artificial Intelligence".[97][98] | |
| Leslie P. Kaelbling, Michael L. Littman, and Anthony Cassandra introduce POMDPs and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and automated planning and scheduling.[99] | |
| 1999 | Sony introduces the AIBO, an improved domestic robot similar to a Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous. |
| Late 1990s | Web crawlers and other AI-based information extraction programs become essential to widespread use of the World Wide Web. |
| Demonstration of an intelligent room and emotional agents at MIT's AI Lab. | |
| Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network. | |
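The reinforcement learning that powered TD-Gammon can be illustrated on a toy problem. The sketch below is a hypothetical minimal example, not Tesauro's program (which used a neural network rather than a table): it runs tabular TD(0) value estimation on a small random walk, nudging each state's value toward the bootstrapped target `r + gamma * V(s')` after every transition.

```python
import random

def td0_chain(n_states=5, episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """Tabular TD(0) on a random-walk chain with terminal states at both
    ends (0 and n_states + 1); reward 1 only when the right end is reached.
    The true value of interior state i is i / (n_states + 1)."""
    rng = random.Random(seed)
    V = [0.0] * (n_states + 2)           # value estimates; terminals stay 0
    for _ in range(episodes):
        s = (n_states + 1) // 2          # start each episode in the middle
        while 0 < s < n_states + 1:
            s2 = s + rng.choice((-1, 1))                 # random step
            r = 1.0 if s2 == n_states + 1 else 0.0       # reward at right end
            # TD(0) update: move V(s) toward the bootstrapped target
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            s = s2
    return V
```

Run on this chain, the estimates approach the true values 1/6 through 5/6 for states 1 through 5; TD-Gammon applied the same update rule, with a neural network in place of the table, to positions generated by self-play.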
| Date | Development |
|---|---|
| 2000 | Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th-century novelty toy makers. |
| Cynthia Breazeal at MIT publishes her dissertation on sociable machines, describing Kismet, a robot with a face that expresses emotions. | |
| The Nomad robot explores remote regions of Antarctica looking for meteorite samples. | |
| 2002 | iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles. |
| Wolfbot Future describes artificially intelligent agents based on XML with a distributed ontology. | |
| 2004 | OWL, the Web Ontology Language, becomes a W3C Recommendation (10 February 2004). |
| DARPA introduces the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money. | |
| NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars. | |
| 2005 | Honda's ASIMO robot, an artificially intelligent humanoid robot, can walk as fast as a human, delivering trays to customers in restaurant settings. |
| Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions. | |
| The Blue Brain Project begins, aiming to simulate the brain at molecular detail.[100] | |
| 2006 | The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) is held (14–16 July 2006). |
| 2007 | Philosophical Transactions of the Royal Society, B – Biology, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection.[101] |
| Checkers is solved by a team of researchers at the University of Alberta. | |
| DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment. | |
| 2008 | Cynthia Mason at Stanford presents her idea of Artificial Compassionate Intelligence in her paper "Giving Robots Compassion".[102] |
| 2009 | An LSTM trained by connectionist temporal classification[103] was the first recurrent neural network to win pattern recognition contests, winning three competitions in connected handwriting recognition.[104][10] |
| Google builds an autonomous car.[105] | |
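The LSTM's ability to remember over long time spans comes from an additively updated cell state guarded by multiplicative gates. The following is a rough illustration only: a scalar, single-cell sketch of the now-standard gated formulation with hypothetical parameter names (the original 1997 cell did not yet include the forget gate).

```python
import math

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell, with scalar input and state for
    clarity. W holds (input-weight, recurrent-weight) pairs and b the biases
    for the input, forget, and output gates and the candidate cell value."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    (wi, ui), (wf, uf), (wo, uo), (wc, uc) = W
    bi, bf, bo, bc = b
    i = sigmoid(wi * x + ui * h_prev + bi)            # input gate
    f = sigmoid(wf * x + uf * h_prev + bf)            # forget gate
    o = sigmoid(wo * x + uo * h_prev + bo)            # output gate
    c_tilde = math.tanh(wc * x + uc * h_prev + bc)    # candidate cell value
    # Additive cell update: the key to carrying information (and gradients)
    # across many time steps without vanishing.
    c = f * c_prev + i * c_tilde
    h = o * math.tanh(c)                              # hidden state / output
    return h, c
```

Iterating this step over a sequence of inputs threads the cell state `c` through time; because `c` is updated by addition rather than repeated squashing, error signals survive across long gaps, which is what the handwriting-recognition wins above exploited.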
| Date | Development |
|---|---|
| 2010 | Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement using just a 3D camera and infrared detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning behind the device's human motion capture technology was developed by the Computer Vision group at Microsoft Research, Cambridge.[106][107] |
| 2011 | Mary Lou Maher and Doug Fisher organize the First AAAI Workshop on AI and Sustainability.[108] |
| IBM's Watson computer defeated television game show Jeopardy! champions Rutter and Jennings. | |
| 2011–2014 | Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) are smartphone apps that use natural language to answer questions, make recommendations and perform actions. |
| 2012 | AlexNet, a deep learning model developed by Alex Krizhevsky, wins the ImageNet Large Scale Visual Recognition Challenge with half as many errors as the second-place entry.[109] This is a turning point in the history of AI; over the next few years, dozens of other approaches to image recognition were abandoned in favor of deep learning.[110] Krizhevsky was among the first to use GPU chips to train a deep learning network.[111] |
| 2013 | Robot HRP-2, built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points in the eight tasks needed in disaster response: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves and connecting a hose.[112] |
| NEIL, the Never-Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[113] | |
| 2015 | Two techniques were developed concurrently to train very deep networks: the highway network[114] and the residual neural network (ResNet).[115] They allowed networks more than 1000 layers deep to be trained. |
| In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI.[116][117] | |
| In July 2015, an open letter to ban the development and use of autonomous weapons was signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics.[118] | |
| Google DeepMind's AlphaGo (version: Fan)[119] defeated the three-time European Go champion, 2-dan professional Fan Hui, by 5 games to 0.[120] | |
| 2016 | Google DeepMind's AlphaGo (version: Lee)[119] defeated Lee Sedol 4–1. Lee Sedol is a 9-dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[121] |
| 2017 | The Asilomar Conference on Beneficial AI was held, to discuss AI ethics and how to bring about beneficial AI while avoiding the existential risk from artificial general intelligence. |
| DeepStack[122] is the first published algorithm to beat human players in imperfect information games, as shown with statistical significance on heads-up no-limit poker. Soon after, the poker AI Libratus, developed by a different research group, individually defeated each of its four human opponents, among the best players in the world, at an exceptionally high aggregated win rate over a statistically significant sample.[123] In contrast to chess and Go, poker is an imperfect information game.[124] | |
| In May 2017, Google DeepMind's AlphaGo (version: Master) beat Ke Jie, who had continuously held the world No. 1 ranking for two years,[125][126] winning each game in a three-game match during the Future of Go Summit.[127][128] | |
| A propositional logic boolean satisfiability problem (SAT) solver proves a long-standing mathematical conjecture on Pythagorean triples over the set of integers. The initial proof, 200 TB long, was checked by two independent certified automatic proof checkers.[129] | |
| An OpenAI bot using machine learning played at The International 2017 Dota 2 tournament in August 2017. It won a 1v1 demonstration game against professional Dota 2 player Dendi.[130] | |
| Google Lens, an image analysis and comparison tool released in October 2017, associates millions of landscapes, artworks, products and species with their text descriptions. | |
| Google DeepMind revealed that AlphaGo Zero, an improved version of AlphaGo, displayed significant performance gains while using far fewer tensor processing units than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master).[119] Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11.[119] Although learning purely from self-play is a step forward, much has yet to be learned about general intelligence.[131] AlphaZero mastered chess in four hours, defeating the best chess engine, Stockfish 8: AlphaZero won 28 out of 100 games, and the remaining 72 games ended in a draw. | |
| The Transformer architecture was invented, which led to new kinds of large language models such as BERT by Google, followed by the generative pre-trained transformer type of model introduced by OpenAI. | |
| 2018 | Alibaba's language processing AI outscores top humans at a Stanford University reading and comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.[132] |
| The European Lab for Learning and Intelligent Systems (ELLIS) is proposed as a pan-European competitor to American AI efforts, to stave off a brain drain of talent, along the lines of CERN after World War II.[133] | |
| Announcement of Google Duplex, a service that allows an AI assistant to book appointments over the phone. The Los Angeles Times judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech.[134] | |
| 2019 | DeepMind's AlphaStar reaches Grandmaster level at StarCraft II, outperforming 99.8 percent of human players.[135] |
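The 2015 highway and residual networks mentioned above rest on the same idea: give each layer an untouched identity path, so stacking many layers neither attenuates the signal nor, during training, the gradient. A minimal sketch of the residual form, using plain Python lists and a hypothetical residual function (the real architectures use convolutional layers and learned weights):

```python
def residual_block(x, f):
    """One residual block: output = x + f(x). The layer f only has to
    learn a correction to the identity, and the skip path passes the
    input through unchanged."""
    return [xi + fi for xi, fi in zip(x, f(x))]

def deep_stack(x, f, depth=1000):
    """Compose `depth` residual blocks that share the same residual
    function f; with a plain (non-residual) stack this deep, the signal
    would typically vanish or explode."""
    for _ in range(depth):
        x = residual_block(x, f)
    return x
```

If `f` outputs all zeros, a 1000-block stack is exactly the identity, which is why such networks remain trainable at depths where plain stacks fail; the highway network generalizes this by gating the mix between the identity path and the transformed path.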

| Date | Development |
|---|---|
| 2020 | In February 2020, Microsoft introduces its Turing Natural Language Generation (T-NLG), which is the "largest language model ever published at 17 billion parameters".[136] |
| In November 2020, AlphaFold 2 by DeepMind, a model that predicts protein structures, wins the CASP competition.[137] | |
| OpenAI introduces GPT-3, a state-of-the-art autoregressive language model that uses deep learning to produce computer code, poetry and other text that is exceptionally similar to, and almost indistinguishable from, writing produced by humans. Its capacity was ten times greater than that of the T-NLG. It was introduced in May 2020[138] and was in beta testing in June 2020. | |
| 2022 | ChatGPT, an AI chatbot developed by OpenAI, debuts in November 2022. It is initially built on top of the GPT-3.5 large language model. While it gains considerable praise for the breadth of its knowledge base, deductive abilities, and the human-like fluidity of its natural language responses,[139][140] it also garners criticism for, among other things, its tendency to "hallucinate",[141][142] a phenomenon in which an AI responds with factually incorrect answers with high confidence. The release triggers widespread public discussion on artificial intelligence and its potential impact on society.[143][144] |
| A November 2022 class action lawsuit against Microsoft, GitHub and OpenAI alleges that GitHub Copilot, an AI-powered code editing tool trained on public GitHub repositories, violates the copyrights of the repositories' authors, noting that the tool can generate source code that matches its training data verbatim, without providing attribution.[145] | |
| 2023 | By January 2023, ChatGPT has more than 100 million users, making it the fastest-growing consumer application to date.[146] |
| On January 16, 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, file a class-action copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.[147] | |
| On January 17, 2023, Stability AI is sued in London by Getty Images for using its images in their training data without purchasing a license.[148][149] | |
| Getty files another suit against Stability AI in a US district court in Delaware on February 6, 2023. In the suit, Getty again alleges copyright infringement for the use of its images in the training of Stable Diffusion, and further argues that the model infringes Getty's trademark by generating images with Getty's watermark.[150] | |
| OpenAI's GPT-4 model is released in March 2023 and is regarded as an impressive improvement over GPT-3.5, with the caveat that GPT-4 retains many of the same problems as the earlier iteration.[151] Unlike previous iterations, GPT-4 is multimodal, accepting image input as well as text. GPT-4 is integrated into ChatGPT as a subscriber service. OpenAI claims that in its own testing the model received a score of 1410 on the SAT (94th percentile),[152] 163 on the LSAT (88th percentile), and 298 on the Uniform Bar Exam (90th percentile).[153] | |
| On March 7, 2023, Nature Biomedical Engineering writes that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."[154] | |
| In response to ChatGPT, Google releases in a limited capacity its chatbot Google Bard, based on the LaMDA and PaLM large language models, in March 2023.[155][156] | |
| On March 29, 2023, a petition with over 1,000 signatures, including those of Elon Musk, Steve Wozniak and other tech leaders, calls for a 6-month halt to what it describes as "an out-of-control race" producing AI systems that their creators cannot "understand, predict, or reliably control".[157][158] | |
| In May 2023, Google announces Bard's transition from LaMDA to PaLM 2, a significantly more advanced language model.[159] | |
| In the last week of May 2023, a Statement on AI Risk is signed by Geoffrey Hinton, Sam Altman, Bill Gates, and many other prominent AI researchers and tech leaders with the following succinct message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[160][161] | |
| On July 9, 2023, Sarah Silverman files a class action lawsuit against Meta and OpenAI for copyright infringement, for training their large language models on millions of authors' copyrighted works without permission.[162] | |
| In August 2023, The New York Times, CNN, Reuters, the Chicago Tribune, the Australian Broadcasting Corporation (ABC), and other news companies block OpenAI's GPTBot web crawler from accessing their content, while The New York Times also updates its terms of service to disallow the use of its content in large language models.[163] | |
| On September 13, 2023, in response to growing anxiety about the dangers of AI, the US Senate holds the inaugural bipartisan "AI Insight Forum", bringing together senators, CEOs, civil rights leaders and other industry representatives to further familiarize senators with the nature of AI and its risks, and to discuss needed safeguards and legislation.[164] The event is organized by Senate Majority Leader Chuck Schumer (D-NY)[165] and chaired by U.S. Senator Martin Heinrich (D-N.M.), founder and co-chair of the Senate AI Caucus.[166] Reflecting the importance of the meeting, the forum is attended by over 60 senators,[167] as well as Elon Musk (Tesla CEO), Mark Zuckerberg (Meta CEO), Sam Altman (OpenAI CEO), Sundar Pichai (Alphabet CEO), Bill Gates (Microsoft co-founder), Satya Nadella (Microsoft CEO), Jensen Huang (Nvidia CEO), Arvind Krishna (IBM CEO), Alex Karp (Palantir CEO), Charles Rivkin (chairman and CEO of the MPA), Meredith Stiehm (president of the Writers Guild of America West), Liz Shuler (AFL-CIO president), and Maya Wiley (CEO of the Leadership Conference on Civil and Human Rights), among others.[164][165][167] | |
| On October 30, 2023, US President Biden signs the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[168][169] | |
| In November 2023, the first global AI Safety Summit is held at Bletchley Park in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[170] 28 countries, including the United States, China, and the European Union, issue a declaration at the start of the summit calling for international co-operation to manage the challenges and risks of artificial intelligence.[171][172] | |
| On December 6, Google announces Gemini 1.0 Ultra, Pro, and Nano. | |
| 2024 | On February 15, 2024, Google releases Gemini 1.5 in limited beta, with a context length of up to 1 million tokens. |
| Also on February 15, 2024, OpenAI publicly announces Sora, a text-to-video model for generating videos up to a minute long. | |
| Google DeepMind unveils AlphaFold 3, extending structure prediction from proteins to DNA, RNA and other biomolecules, aiding research into cancer and genetic diseases. | |
| On February 22, Stability AI announces Stable Diffusion 3, using a similar architecture to Sora. | |
| On May 14, Google adds "AI Overviews" to Google Search. | |
| On June 10, Apple announces "Apple Intelligence", which incorporates ChatGPT into new iPhones and Siri. | |
| On October 9, co-founder and CEO of Google DeepMind and Isomorphic Labs Sir Demis Hassabis and Google DeepMind director Dr. John Jumper are co-awarded the 2024 Nobel Prize in Chemistry for their work developing AlphaFold, a groundbreaking AI system that predicts the 3D structure of proteins from their amino acid sequences. | |
| At the Seoul Summit, leaders from the G7, the European Union, and major tech companies adopt the "Seoul Declaration for Safe, Innovative and Inclusive AI", committing to international cooperation on AI safety, standards and innovation. | |
| 2025 | On February 6, Mistral AI releases Le Chat, an AI assistant able to generate up to 1,000 words per second.[173] |
| Stargate UAE invests to build Europe's largest AI data center in France.[174] | |
| Amazon prepares to train humanoid robots to deliver packages.[175] | |
| NLWeb, Project Mariner, and Google Flow launch. | |
| On February 10 and 11, France hosts the Artificial Intelligence Action Summit.[176] 61 countries, including China, India, Japan, France and Canada, sign a declaration on "inclusive and sustainable" AI,[177] which the UK and US declined to sign.[178] | |
| Pope Leo XIV calls on technologists to build AI systems that embody love, justice, and the sacred dignity of every human life.[179][180] | |