Written by: Stephen Hsu
Primary Source: Information Processing
In the most interesting part of the talk (@25 min; see arxiv:1409.3215 and arxiv:1506.00019) he describes extracting “thought vectors” — semantic meaning relationships — from plain text. This involves training a deep network on human text; the network maps each input to a resulting vector representation.
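The idea behind a "thought vector" can be sketched as follows: in the sequence-to-sequence setup of arxiv:1409.3215, a recurrent encoder reads a sentence one token at a time, and its final hidden state is a fixed-size vector summarizing the whole sequence. The toy vocabulary, dimensions, and untrained random weights below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with one-hot input encoding (illustrative only).
vocab = ["the", "cat", "sat", "on", "mat"]
idx = {w: i for i, w in enumerate(vocab)}

V, H = len(vocab), 8                 # vocab size, hidden ("thought vector") size
W_xh = rng.normal(0, 0.1, (H, V))    # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (H, H))    # recurrent hidden-to-hidden weights

def thought_vector(sentence):
    """Run a minimal RNN encoder over the sentence; the final hidden
    state is a fixed-size summary -- the 'thought vector'."""
    h = np.zeros(H)
    for word in sentence.split():
        x = np.zeros(V)
        x[idx[word]] = 1.0           # one-hot encode the current word
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

v = thought_vector("the cat sat on the mat")
print(v.shape)  # fixed-size vector, regardless of sentence length
```

In a trained system the weights would be learned (e.g. by backpropagation through a translation objective), so that sentences with similar meanings map to nearby vectors; here the weights are random and the sketch only shows the mechanics of compressing variable-length text into a fixed-size vector.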
The slide below summarizes some history. Most of the theoretical ideas behind Deep Learning have been around for a long time. Hinton sometimes characterizes the advances as resulting from a factor of a million in hardware capability (increase in compute power and data availability), and an order of magnitude from new tricks. See also Moore’s Law and AI.