Geoff Hinton on Deep Learning

Written by: Stephen Hsu

Primary Source: Information Processing

This is a recent, fairly non-technical introduction to Deep Learning by Geoff Hinton.

In the most interesting part of the talk (@25 min; see arXiv:1409.3215 and arXiv:1506.00019) he describes extracting "thought vectors," fixed-length representations of semantic meaning, from plain text. This involves a deep net trained on human-written text; the resulting vectors of weights capture relationships of meaning.
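To make the idea concrete, here is a minimal sketch, not Hinton's actual setup, of how such a "thought vector" can be produced: an LSTM encoder reads a token sequence and its final hidden state serves as a fixed-length vector for the sentence, in the spirit of the sequence-to-sequence encoder of arXiv:1409.3215. The vocabulary, layer sizes, and toy sentence below are made-up illustrations, and the example assumes PyTorch.

import torch
import torch.nn as nn

# Toy vocabulary and sentence; real systems train on large corpora of human text.
vocab = {"<pad>": 0, "deep": 1, "learning": 2, "extracts": 3, "meaning": 4}
sentence = ["deep", "learning", "extracts", "meaning"]

embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)
encoder = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

# Shape (batch=1, sequence_length=4): indices of the tokens in the sentence.
tokens = torch.tensor([[vocab[w] for w in sentence]])

# Run the encoder; h_n is the final hidden state, shape (num_layers=1, batch=1, 32).
outputs, (h_n, c_n) = encoder(embed(tokens))

# The final hidden state acts as the fixed-length "thought vector" for the sentence.
thought_vector = h_n[-1, 0]
print(thought_vector.shape)  # torch.Size([32])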

The slide below summarizes some history. Most of the theoretical ideas behind Deep Learning have been around for a long time. Hinton sometimes characterizes the recent advances as the result of roughly a factor of a million improvement in hardware capability (increased compute power and data availability) plus an order of magnitude from new tricks. See also Moore's Law and AI.

Stephen Hsu is vice president for Research and Graduate Studies at Michigan State University. He also serves as scientific adviser to BGI (formerly Beijing Genomics Institute) and as a member of its Cognitive Genomics Lab. Hsu’s primary work has been in applications of quantum field theory, particularly to problems in quantum chromodynamics, dark energy, black holes, entropy bounds, and particle physics beyond the standard model. He has also made contributions to genomics and bioinformatics, the theory of modern finance, and in encryption and information security. Founder of two Silicon Valley companies—SafeWeb, a pioneer in SSL VPN (Secure Sockets Layer Virtual Private Networks) appliances, which was acquired by Symantec in 2003, and Robot Genius Inc., which developed anti-malware technologies—Hsu has given invited research seminars and colloquia at leading research universities and laboratories around the world.