Written by: Stephen Hsu
Primary Source: Information Processing, 10/25/2018.
Ask and ye shall receive :-)
In an earlier post I recommended a talk by Ilya Sutskever of OpenAI (part of an MIT AGI lecture series). In the Q&A someone asks about the status of backpropagation (used to train artificial deep neural nets) in real neural nets, and Ilya answers that it's currently not known whether or how a real brain implements it.
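For readers unfamiliar with the algorithm in question: backpropagation is just the chain rule applied layer by layer to push an error signal backward through a network. A minimal sketch (my own illustrative example, not anything from the talk — the network shape, weights, and learning rate are arbitrary choices) for a one-hidden-unit net fitting y = 2x:

```python
import math
import random

# Illustrative sketch of backprop on a tiny network y_hat = w2 * tanh(w1 * x),
# trained on y = 2x with squared error. All names and hyperparameters here
# (w1, w2, lr, epoch count) are arbitrary choices for illustration.

random.seed(0)
w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
lr = 0.1
data = [(x / 10.0, 2 * x / 10.0) for x in range(-5, 6)]

def loss():
    return sum((w2 * math.tanh(w1 * x) - y) ** 2 for x, y in data) / len(data)

before = loss()
for _ in range(500):
    for x, y in data:
        h = math.tanh(w1 * x)                   # forward pass: hidden activation
        y_hat = w2 * h                          # forward pass: output
        dL_dy = 2 * (y_hat - y)                 # gradient of squared error
        dL_dw2 = dL_dy * h                      # chain rule: output weight
        dL_dw1 = dL_dy * w2 * (1 - h * h) * x   # chain rule back through tanh
        w2 -= lr * dL_dw2                       # gradient descent step
        w1 -= lr * dL_dw1
after = loss()
```

The open neuroscience question is how a biological circuit could carry that backward error signal (the `dL_dw1` computation, which needs information from downstream layers) using only local, biologically plausible operations.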
Almost immediately, neuroscientist James Phillips of Janelia provides a link to a recent talk on this topic, which proposes a specific biological mechanism / model for backprop. I don’t know enough neuroscience to really judge the idea, but it’s nice to see cross-fertilization between in silico AI and real neuroscience.