Hinton is University Professor Emeritus at the University of Toronto. From 2013 to 2023, he divided his time between Google Brain and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the many risks of artificial intelligence (AI) technology.[9][10] In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.[11][12]
Upon his arrival in Canada, Hinton was appointed in 1987 as a Fellow of the Canadian Institute for Advanced Research (CIFAR) in its first research program, Artificial Intelligence, Robotics & Society.[44] In 2004, Hinton and collaborators successfully proposed the launch of a new CIFAR program, "Neural Computation and Adaptive Perception"[45] (NCAP), today named "Learning in Machines & Brains". Hinton went on to lead NCAP for ten years.[46] Among the members of the program are Yoshua Bengio and Yann LeCun, with whom Hinton would go on to win the ACM A.M. Turing Award in 2018.[47] All three Turing winners continue to be members of the CIFAR Learning in Machines & Brains program.[48]
Hinton taught a free online course on Neural Networks on the education platform Coursera in 2012.[49] He co-founded DNNresearch Inc. in 2012 with his two graduate students Alex Krizhevsky and Ilya Sutskever at the University of Toronto's department of computer science. In March 2013, Google acquired DNNresearch Inc. for $44 million, and Hinton planned to "divide his time between his university research and his work at Google".[50][51][52]
While Hinton was a postdoc at UC San Diego, he, David E. Rumelhart, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data.[13] In a 2018 interview,[55] Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention". Although this work was important in popularising backpropagation, it was not the first to suggest the approach.[14] Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974.[14]
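To illustrate the idea, here is a minimal sketch of backpropagation on a two-layer network, with gradients derived by hand via the chain rule; the sizes, data, and learning rate are invented for illustration and do not come from the 1986 paper.

```python
import numpy as np

# Minimal backpropagation sketch: propagate the error backwards through the
# layers via the chain rule, so the hidden layer learns useful internal
# representations. All shapes and constants here are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))             # 64 examples, 8 input features
y = rng.normal(size=(64, 1))             # regression targets

W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.01

for step in range(1000):
    # Forward pass.
    h = np.tanh(X @ W1)                  # hidden "internal representation"
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: chain rule from the loss back to each weight matrix.
    d_y_hat = 2 * (y_hat - y) / len(X)
    dW2 = h.T @ d_y_hat
    d_h = d_y_hat @ W2.T
    dW1 = X.T @ (d_h * (1 - h ** 2))     # tanh'(z) = 1 - tanh(z)^2

    W1 -= lr * dW1
    W2 -= lr * dW2
```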
In October and November 2017, Hinton published two open access research papers on the theme of capsule neural networks,[62][63] which, according to Hinton, are "finally something that works well".[64]
In May 2023, Hinton publicly announced his resignation from Google. He explained his decision by saying that he wanted to "freely speak out about the risks of A.I." and added that a part of him now regrets his life's work.[9][30]
In 2021, Hinton published a single-author paper proposing an architecture called GLOM,[68] which he quips could also stand for "Geoff's Last Original Model". Since retiring from Google, he has expressed a desire to spend more time on such "philosophical" work.[69] In the GLOM paper, he identifies several fundamental limitations of existing neural networks.[68] For example, neural networks still lack the ability to represent how a whole (such as a car) can be broken into constituent parts (such as a wheel), and how to model the coordinate transform (the relationship) that maps a part onto the larger whole. Hinton's current stance can be traced back to his decades-old papers on learning canonical frames of reference in neural networks.[70] Hinton further argues that enabling vision systems to dynamically encode such "part-whole parse trees" is similar to how existing NLP systems construct parse trees.[71] He has hypothesized that systems such as GLOM-BERT could help encode such a hierarchical understanding of the world.
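The coordinate transforms in question can be made concrete with homogeneous pose matrices. The sketch below is illustrative only (the `pose` helper and the wheel/car offsets are invented, not taken from the GLOM paper): a detected part "votes" for the pose of the whole by composing its own pose with the inverse of the fixed part-to-whole transform.

```python
import numpy as np

def pose(theta, tx, ty):
    """2-D pose as a homogeneous 3x3 matrix (rotation plus translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# T maps the wheel's own frame into the car's frame: here the wheel sits
# 1.5 units forward and 0.8 units down, unrotated. Values are made up.
T_wheel_in_car = pose(0.0, 1.5, -0.8)

# Suppose a vision system has estimated the wheel's pose relative to the viewer.
P_wheel = pose(0.3, 4.0, 2.0)

# Since P_car @ T_wheel_in_car = P_wheel, the part's "vote" for the
# whole-level pose is obtained by inverting the part-to-whole transform.
P_car_vote = P_wheel @ np.linalg.inv(T_wheel_in_car)
print(P_car_vote)
```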
In the 1980s, Hinton was part of the "Parallel Distributed Processing" (PDP) research group centred at UC San Diego, which included notable scientists such as Terrence Sejnowski, Francis Crick, David Rumelhart, and James L. McClelland. The group took the connectionist side of the debate during the AI winter. A key issue was how a neural network could encode rules of logic and "learn" rules of grammar merely by looking at data. Connectionism held that neural networks could learn such representations as a function of the strengths of connection weights, analogous to synaptic strengths. Symbolists such as Noam Chomsky, by contrast, advocated reliance on explicit symbols and rules. Hinton criticised Chomsky's theory of language in a recent talk at MIT.[72] The findings of the PDP group were published in a two-volume set.[73][74] This work was instrumental in settling the debate over whether neural networks with more than one layer could be trained at all and perform non-trivial tasks. The popularisation of the backpropagation algorithm was a key contribution of this period.
During his 2020 Turing Award lecture on the future of neural networks, Hinton highlighted the ability of neural networks to operate on multiple time-scales, for example via slow and fast weights.[75] He had published a paper on slow and fast weights at NeurIPS 2016.[76] Also notable is the prospect of true recursion in neural networks, in which a network processes a part of the input using the same hardware that it uses to process the whole.
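A simplified sketch of the slow/fast-weights idea: the slow weights `W` are assumed to be learned by ordinary gradient descent, while the fast weights `A` are updated at every time step by a decaying Hebbian rule and act as a short-term memory. The constants and sizes are illustrative, and the NeurIPS 2016 paper's inner settling loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W = rng.normal(scale=0.1, size=(d, d))   # slow weights (learned offline)
A = np.zeros((d, d))                     # fast weights (updated online)
lam, eta = 0.95, 0.5                     # decay rate and fast learning rate

h = np.zeros(d)
for x in rng.normal(size=(10, d)):       # a short input sequence
    # Fast weights add a rapidly changing, history-dependent term.
    h = np.tanh(W @ x + A @ h)
    # Hebbian outer-product update: store the current state in fast memory.
    A = lam * A + eta * np.outer(h, h)
```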
In 2021, Hinton still described capsules as "something that works well",[64] but he has since expressed growing concern over their limitations. For example, capsules require allocating separate hardware to each instance of an object they aim to represent.[68] They also rely on expensive EM routing procedures, which makes them intractable in practice. Capsule routing was later replaced with attention-based routing mechanisms.[77] More recently, Hinton has suggested eliminating the routing procedure altogether, advocating instead for self-organizing systems such as his GLOM architecture. Such systems were also explored by other notable researchers, including John von Neumann, in work left unfinished at his death,[78] and John Conway.
In 2021, Hinton also co-authored a seminal paper on contrastive learning.[79] The idea is to push together the representations of augmented versions of the same image and to pull apart the representations of dissimilar images. In 2022, however, Hinton delivered a talk at Stanford University[80] highlighting the limitations of contrastive learning.[81] In GLOM, Hinton proposed the further idea of "islands of agreement", in which the embeddings of pixels belonging to the same object come to agree with each other. Papers at NeurIPS in 2021 and 2023 observed such islands in practice.[82][83]
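The contrastive objective described above can be written down in a few lines. This is a sketch in the spirit of an InfoNCE/SimCLR-style loss; the function name, temperature, and toy embeddings are illustrative assumptions, not the exact formulation of the cited paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss: z1[i] and z2[i] are L2-normalised embeddings of two
    augmentations of image i; matched pairs are pulled together, all other
    pairs pushed apart."""
    logits = (z1 @ z2.T) / tau                 # pairwise similarities
    labels = np.arange(len(z1))                # positives lie on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()   # cross-entropy vs. the positive

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32))
z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + 0.1 * rng.normal(size=(8, 32))       # a noisy "second view"
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(info_nce(z1, z2))
```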
Hinton has described some of his recent ideas as "not describing a working system".[84] However, notable experts such as Yoshua Bengio have come out publicly in favour of them: "Geoff has produced amazingly powerful intuitions many times in his career, many of which have proven right. Hence, I pay attention to them, especially when he feels as strongly about them as he does about GLOM."[85] Hinton recently co-authored a paper exploring how GLOM performs under extreme viewpoint changes.[86] Ideas from GLOM have since been shown to work in practice at NeurIPS 2024.[87]
At the 2022 Conference on Neural Information Processing Systems (NeurIPS), Hinton introduced a new learning algorithm for neural networks that he calls the "Forward-Forward" algorithm. The idea of the new algorithm is to replace the traditional forward-backward passes of backpropagation with two forward passes, one with positive (i.e. real) data and the other with negative data that could be generated solely by the network.[88][89] The algorithm is inspired by a long line of research suggesting that the brain does not perform backpropagation and does not rely on optimizing global objectives. Hinton co-authored a Nature paper discussing this topic in more detail.[90] This has led to recent interest in fine-tuning billion-parameter language models using only forward passes, without storing explicit gradients for all layers in memory.[91] An implementation of Forward-Forward by Sindy Löwe has been posted on Hinton's website.[92]
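A minimal sketch of a single Forward-Forward layer update, under illustrative assumptions: "goodness" is the sum of squared ReLU activities, the threshold and the toy positive/negative data are invented, and each layer maximises a purely local logistic objective rather than any global loss.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 32))   # one layer's weights
theta, lr = 2.0, 0.01                     # goodness threshold, step size

for _ in range(500):
    x_pos = rng.normal(loc=1.0, size=(16, 8))    # stand-in "real" data
    x_neg = rng.normal(loc=-1.0, size=(16, 8))   # stand-in "negative" data
    for x, positive in ((x_pos, True), (x_neg, False)):
        h = np.maximum(0.0, x @ W)               # ReLU activities of this layer
        g = (h ** 2).sum(axis=1)                 # goodness: sum of squared activities
        p = 1.0 / (1.0 + np.exp(-(g - theta)))   # layer's "is this positive?" belief
        coef = (1.0 - p) if positive else (-p)   # d(log-likelihood)/d(goodness)
        # Local gradient ascent: dg/dW = 2 x^T h per example; no backward pass.
        W += lr * (x.T @ (2.0 * coef[:, None] * h))
```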
In recent talks at the Vector Institute,[93][94] Hinton has argued for a new kind of analog intelligence that he terms "mortal computation". The idea involves two kinds of networks: large networks trained via backpropagation on large GPU clusters, and smaller networks trained on edge devices using the Forward-Forward algorithm. Hinton has also been vocal about the benefits of analog computers, in which, instead of explicitly multiplying matrices, one operates on voltages and conductances to carry out the equivalent computation.
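The analog computation Hinton describes can be simulated in a few lines: in a hypothetical resistive crossbar, applying voltages V to the rows and reading the column currents computes the matrix-vector product I = GᵀV by Ohm's and Kirchhoff's laws, with the conductance matrix G playing the role of the weights. The sizes and values below are invented; this just checks the arithmetic identity.

```python
import numpy as np

# Conductances (siemens) of a 4-row, 3-column crossbar; all illustrative.
G = np.abs(np.random.default_rng(1).normal(size=(4, 3)))
V = np.array([0.2, 0.5, 0.1, 0.7])   # input voltages (volts) on the rows

# Each column current is sum_i V_i * G_ij: a matrix-vector product "for free".
I = G.T @ V
print(I)
```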
Recently, Hinton has advocated the importance of exploring "sleep-like mechanisms" in the brain.[95] More formally, he has argued that existing neural networks typically sample external input from the environment (say, an input image). One could instead sample "dream-like states" within the network itself, which could yield generative models and might explain how humans, and perhaps large language models, have a sensation of subjective experience even while sleeping or merely thinking.[96]
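One classical, concrete instance of such "dream-like" sampling is the free-running sleep phase of a restricted Boltzmann machine, a model class Hinton pioneered: with no external input, alternating Gibbs sampling lets the network generate fantasy states from its own generative model. The sketch below uses invented sizes and random weights purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(20, 10))  # visible-to-hidden weights (toy)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v = rng.integers(0, 2, size=20).astype(float)    # start from noise, not data
for _ in range(100):                             # free-running "dream" phase
    h = (sigmoid(v @ W) > rng.random(10)).astype(float)
    v = (sigmoid(W @ h) > rng.random(20)).astype(float)
print(v)   # a fantasy sample drawn from the network's own model
```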
Hinton's research continues to inspire researchers around the world. A notable quote of his: "The future depends on some graduate student who is deeply suspicious of everything I have said."[97]
Geoffrey E. Hinton is internationally known for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher. He has compared effects of brain damage with effects of losses in such a net, and found striking similarities with human impairment, such as for recognition of names and losses of categorisation. His work includes studies of mental imagery, and inventing puzzles for testing originality and creative intelligence. It is conceptual, mathematically sophisticated, and experimental. He brings these skills together with striking effect to produce important work of great interest.[102]
In 2024, he was jointly awarded the Nobel Prize in Physics with John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks."[122] His development of the Boltzmann machine was explicitly mentioned in the citation.[28][123] When the New York Times reporter Cade Metz asked Hinton to explain in simpler terms how the Boltzmann machine could "pretrain" backpropagation networks, Hinton quipped that Richard Feynman reportedly said: "Listen, buddy, if I could explain it in a couple of minutes, it wouldn't be worth the Nobel Prize."[124] That same year, he received the VinFuture Prize grand award alongside Yoshua Bengio, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li for groundbreaking contributions to neural networks and deep learning algorithms.[125]
In 2023, Hinton expressed concerns about the rapid progress of AI.[31][30] He had previously believed that artificial general intelligence (AGI) was "30 to 50 years or even longer away."[30] However, in a March 2023 interview with CBS, he said that "general-purpose AI" might be fewer than 20 years away and could bring about changes "comparable in scale with the industrial revolution or electricity."[31]
In an interview with The New York Times published on 1 May 2023,[30] Hinton announced his resignation from Google so he could "talk about the dangers of AI without considering how this impacts Google."[129] He noted that "a part of him now regrets his life's work".[30][10]
In early May 2023, Hinton said in an interview with the BBC that AI might soon surpass the information capacity of the human brain. He described some of the risks posed by these chatbots as "quite scary". Hinton explained that chatbots have the ability to learn independently and share knowledge, so that whenever one copy acquires new information, it is automatically disseminated to the entire group, allowing AI chatbots to accumulate knowledge far beyond the capacity of any individual.[130] In 2025, he said: "My greatest fear is that, in the long run, it'll turn out that these kind of digital beings we're creating are just a better form of intelligence than people. […] We'd no longer be needed. […] If you want to know how it's like not to be the apex intelligence, ask a chicken."[131]
Hinton has expressed concerns about the possibility of an AI takeover, stating that "it's not inconceivable" that AI could "wipe out humanity".[31] Hinton said in 2023 that AI systems capable of intelligent agency would be useful for military or economic purposes.[132] He worries that generally intelligent AI systems could "create sub-goals" that are unaligned with their programmers' interests.[133] He says that AI systems may become power-seeking or prevent themselves from being shut off, not because programmers intended them to, but because those sub-goals are useful for achieving later goals.[130] In particular, Hinton says "we have to think hard about how to control" AI systems capable of self-improvement.[134]
Hinton reports concerns about deliberate misuse of AI by malicious actors, stating that "it is hard to see how you can prevent the bad actors from using [AI] for bad things."[30] In 2017, Hinton called for an international ban on lethal autonomous weapons.[135] In a 2025 interview, Hinton cited the use of AI by bad actors to create lethal viruses as one of the greatest existential threats posed in the short term: "It just requires one crazy guy with a grudge...you can now create new viruses relatively cheaply using AI. And you don't need to be a very skilled molecular biologist to do it."[136]
Hinton was previously optimistic about the economic effects of AI, noting in 2018: "The phrase 'artificial general intelligence' carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don't think it's going to be that. I think more and more of the routine things we do are going to be replaced by AI systems."[137] Hinton had also argued that AGI would not make humans redundant: "[AI in the future is] going to know a lot about what you're probably going to want to do... But it's not going to replace you."[137]
In 2023, however, Hinton became "worried that AI technologies will in time upend the job market" and take away more than just "drudge work".[30] He said in 2024 that the British government would have to establish a universal basic income to deal with the impact of AI on inequality.[138] In Hinton's view, AI will boost productivity and generate more wealth. But unless the government intervenes, it will only make the rich richer and hurt the people who might lose their jobs. "That's going to be very bad for society," he said.[139]
At Christmas 2024 he had become somewhat more pessimistic, saying that there was a "10 to 20 percent chance" that AI would be the cause of human extinction within the following three decades (he had previously suggested a 10% chance, without a timescale).[140] He expressed surprise at the speed with which AI was advancing, and said that most experts expected AI to advance, probably in the next 20 years, to be "smarter than people ... a scary thought. ... So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation."[140] Another "godfather of AI", Yann LeCun, disagreed, saying AI "could actually save humanity from extinction".[140]
Hinton is a socialist.[141] He moved from the US to Canada in part due to disillusionment with Ronald Reagan–era politics and disapproval of military funding of artificial intelligence.[39]
In August 2024, Hinton co-authored a letter with Yoshua Bengio, Stuart Russell, and Lawrence Lessig in support of SB 1047, a California AI safety bill that would require companies training models which cost more than US$100 million to perform risk assessments before deployment. They said the legislation was the "bare minimum for effective regulation of this technology."[142][143]
Hinton is the great-great-grandson of the mathematician and educator Mary Everest Boole and her husband, the logician George Boole.[145] George Boole's work eventually became one of the foundations of modern computer science. Another great-great-grandfather of his was the surgeon and author James Hinton,[146] who was the father of the mathematician Charles Howard Hinton.
^ a b Zemel, Richard Stanley (1994). A minimum description length framework for unsupervised learning (PhD thesis). University of Toronto. OCLC 222081343. ProQuest 304161918.
^ a b Frey, Brendan John (1998). Bayesian networks for pattern classification, data compression, and channel coding (PhD thesis). University of Toronto. OCLC 46557340. ProQuest 304396112.
^ a b Neal, Radford (1995). Bayesian learning for neural networks (PhD thesis). University of Toronto. OCLC 46499792. ProQuest 304260778.
^ a b Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (3 December 2012). "ImageNet classification with deep convolutional neural networks". In F. Pereira; C. J. C. Burges; L. Bottou; K. Q. Weinberger (eds.). NIPS'12: Proceedings of the 25th International Conference on Neural Information Processing Systems. Vol. 1. Curran Associates. pp. 1097–1105. Archived from the original on 20 December 2019. Retrieved 13 March 2018.
^ Hinton, Geoffrey E. (6 January 2020). "Curriculum Vitae" (PDF). University of Toronto: Department of Computer Science. Archived (PDF) from the original on 23 July 2020. Retrieved 30 November 2016.
^ Hinton, Geoffrey E. (1981). "A parallel computation that assigns canonical object-based frames of reference". Proceedings of the 7th International Joint Conference on Artificial Intelligence. Vol. 2. pp. 683–685.