Computational Linguistics

Geoffrey Hinton

Geoffrey Hinton (b. 1947) is a British-Canadian computer scientist who pioneered backpropagation, deep belief networks, and distributed representations, sharing the 2018 Turing Award and providing the neural network foundations that made modern NLP possible.

Backpropagation: ∂L/∂wᵢⱼ = ∂L/∂aⱼ · ∂aⱼ/∂wᵢⱼ (chain rule through layers)

Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist who is widely regarded as one of the founders of modern deep learning. His decades of work on neural networks — particularly backpropagation, distributed representations, Boltzmann machines, and deep belief networks — provided the theoretical and practical foundations on which modern NLP is built. He shared the 2018 ACM A.M. Turing Award with Yoshua Bengio and Yann LeCun.

Early Life and Education

Born in Wimbledon, London, in 1947, Hinton studied experimental psychology at Cambridge and earned his PhD in artificial intelligence from the University of Edinburgh in 1978. After positions at Sussex, UC San Diego, and Carnegie Mellon, he moved to the University of Toronto in 1987, where he spent most of his career. He also worked at Google Brain, leaving in 2023 so that he could speak freely about the risks of AI.

1947

Born in Wimbledon, London

1978

Completed PhD at the University of Edinburgh

1986

Co-authored landmark paper on learning representations by back-propagating errors

2006

Introduced deep belief networks, reigniting interest in deep learning

2012

AlexNet (with Krizhevsky and Sutskever) won ImageNet, catalysing the deep learning revolution

2018

Received the ACM Turing Award

2024

Received the Nobel Prize in Physics for foundational discoveries in machine learning

Key Contributions

Hinton's 1986 paper with Rumelhart and Williams on backpropagation demonstrated that multi-layer neural networks could learn internal representations by propagating error gradients backwards through layers using the chain rule. This algorithm is the foundation of virtually all neural network training, including every language model, parser, and NLP system based on deep learning.
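The chain-rule bookkeeping described above can be sketched for a one-hidden-layer network in NumPy. This is an illustrative toy, not the 1986 paper's setup: the shapes, sigmoid nonlinearity, and squared-error loss are all arbitrary choices, and a numerical finite-difference check confirms the analytic gradient.

```python
# Minimal backpropagation sketch: forward pass, then gradients via the
# chain rule, layer by layer. All sizes and choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 examples, 3 input features, scalar target
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Parameters of a one-hidden-layer network (5 hidden units)
W1 = rng.normal(size=(3, 5)) * 0.1
W2 = rng.normal(size=(5, 1)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass
a1 = sigmoid(X @ W1)                  # hidden activations
y_hat = a1 @ W2                       # linear output
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: propagate dL back through each layer (chain rule)
d_yhat = (y_hat - y) / len(X)         # dL/dy_hat
dW2 = a1.T @ d_yhat                   # dL/dW2
d_a1 = d_yhat @ W2.T                  # dL/da1
d_z1 = d_a1 * a1 * (1 - a1)           # through the sigmoid derivative
dW1 = X.T @ d_z1                      # dL/dW1

# Sanity check: compare one analytic entry against a finite difference
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * np.mean((sigmoid(X @ W1p) @ W2 - y) ** 2)
numeric = (loss_p - loss) / eps
assert abs(numeric - dW1[0, 0]) < 1e-4
```

The gradient check is the standard way to validate a hand-derived backward pass: perturb one weight, recompute the loss, and compare the slope against the analytic derivative.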

His concept of distributed representations — representing concepts as patterns of activation across many neurons rather than as single units — is the intellectual ancestor of word embeddings. His 2006 work on deep belief networks showed that deep neural networks could be effectively trained using layer-wise pre-training, reigniting the field of deep learning after a period of reduced interest. His student Ilya Sutskever co-developed the sequence-to-sequence framework that enabled neural machine translation.
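The contrast between local ("single unit") and distributed representations can be made concrete with a toy example. The words, dimensions, and vector values below are invented for illustration; real word embeddings learn such dimensions from data rather than having them hand-picked.

```python
# One-hot vs distributed representations (all values hand-invented).
import numpy as np

vocab = ["king", "queen", "apple"]

# Local representation: one active unit per word; every pair of words
# is equally dissimilar, so no similarity structure is expressible.
one_hot = np.eye(len(vocab))

# Distributed representation: each word is a pattern of activation over
# shared dimensions (here, roughly: royalty, gender, edibility).
dense = np.array([
    [0.9,  0.7, 0.0],   # king
    [0.9, -0.7, 0.0],   # queen
    [0.0,  0.0, 0.9],   # apple
])

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# One-hot vectors carry no similarity information...
assert cos(one_hot[0], one_hot[1]) == 0.0
# ...while distributed vectors place "king" nearer "queen" than "apple".
assert cos(dense[0], dense[1]) > cos(dense[0], dense[2])
```

This is exactly the property word embeddings inherit: similarity between concepts falls out of overlap between their activation patterns.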

"The brain has about 10^11 neurons and about 10^14 synaptic connections. It takes the brain about 10^-1 seconds to recognise a visually presented object. In that time, a serial computer can execute 10^9 operations. Clearly, the brain is doing something much more parallel." — Geoffrey Hinton, on the motivation for neural network research

Legacy

Without Hinton's work on backpropagation and distributed representations, modern NLP would not exist. Every neural language model — from Word2Vec through GPT — trains using backpropagation and learns distributed representations. His students and collaborators (including Sutskever, Dahl, and others) went on to build the systems that define modern AI. His recent advocacy regarding existential risks from AI has brought important safety considerations to public attention. He received the 2024 Nobel Prize in Physics alongside John Hopfield for foundational discoveries enabling machine learning with artificial neural networks.

Interactive Calculator

Enter a CSV of publications: year,title,citations_count. The calculator computes total citations, h-index, peak year, and a per-decade breakdown of scholarly output.

Click Calculate to see results, or Animate to watch the statistics update one record at a time.
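For readers without access to the interactive widget, the statistics it reports can be sketched in plain Python. The column names follow the description above; the sample records and the helper name `publication_stats` are illustrative assumptions, not part of the page's own code.

```python
# Sketch of the calculator's statistics: total citations, h-index,
# peak year, and a per-decade publication count from a CSV string.
import csv
import io
from collections import defaultdict

SAMPLE = """year,title,citations_count
1986,Learning representations by back-propagating errors,40000
2006,Reducing the dimensionality of data with neural networks,20000
2012,ImageNet classification with deep CNNs,100000
"""

def publication_stats(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    cites = sorted((int(r["citations_count"]) for r in rows), reverse=True)
    # h-index: the largest h such that h papers have at least h citations.
    # With citations sorted descending, count positions i where
    # cites[i] >= i + 1 (the condition is monotone, so a count suffices).
    h = sum(1 for i, c in enumerate(cites) if c >= i + 1)
    peak_year = int(max(rows, key=lambda r: int(r["citations_count"]))["year"])
    per_decade = defaultdict(int)
    for r in rows:
        per_decade[int(r["year"]) // 10 * 10] += 1
    return {
        "total": sum(cites),
        "h_index": h,
        "peak_year": peak_year,
        "per_decade": dict(per_decade),
    }

stats = publication_stats(SAMPLE)
# For the three sample records: total 160000, h-index 3, peak year 2012.
```

The "Animate" mode presumably recomputes these figures after each record is added; the function above would simply be called on a growing prefix of the rows.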


References

  1. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536. doi:10.1038/323533a0
  2. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507. doi:10.1126/science.1127647
  3. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. doi:10.1038/nature14539
