Backpropagation in the Brain?

Written by: Stephen Hsu

Primary Source: Information Processing, 10/25/2018.

Ask and ye shall receive :-)

In an earlier post I recommended a talk by Ilya Sutskever of OpenAI (part of an MIT AGI lecture series). In the Q&A someone asks whether backpropagation, the algorithm used to train artificial deep neural nets, has a counterpart in real neural nets, and Ilya answers that it's currently not known whether, or how, a real brain implements it.
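For readers unfamiliar with the algorithm at issue, here is a minimal sketch of what backpropagation does in an artificial network. This is my own NumPy illustration, not anything from the talk; the architecture, the XOR toy task, and the learning rate are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, the classic task a single-layer net cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 1.0

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)     # predictions, shape (4, 1)

    # Backward pass: chain rule applied layer by layer, from the loss
    # back toward the input -- hence "backpropagation".
    d_out = 2 * (out - y) / len(X)      # d(loss)/d(out) for mean squared error
    d_z2 = d_out * out * (1 - out)      # through the output sigmoid
    dW2, db2 = h.T @ d_z2, d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T                   # error signal carried backwards
    d_z1 = d_h * h * (1 - h)            # through the hidden sigmoid
    dW1, db1 = X.T @ d_z1, d_z1.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```

The line `d_h = d_z2 @ W2.T` is the one that strains biological plausibility: the backward error signal travels through the exact transpose of the forward weights (the "weight transport" problem), which is roughly the kind of thing biologically motivated proposals try to get around.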

Almost immediately, neuroscientist James Phillips of Janelia provided a link to a recent talk on this topic, which proposes a specific biological mechanism/model for backprop. I don’t know enough neuroscience to really judge the idea, but it’s nice to see cross-fertilization between in silico AI and real neuroscience.

See here for more from Blake Richards.

Stephen Hsu
Stephen Hsu is vice president for Research and Graduate Studies at Michigan State University. He also serves as scientific adviser to BGI (formerly Beijing Genomics Institute) and as a member of its Cognitive Genomics Lab. Hsu’s primary work has been in applications of quantum field theory, particularly to problems in quantum chromodynamics, dark energy, black holes, entropy bounds, and particle physics beyond the standard model. He has also made contributions to genomics and bioinformatics, the theory of modern finance, and encryption and information security. Founder of two Silicon Valley companies—SafeWeb, a pioneer in SSL VPN (Secure Sockets Layer Virtual Private Network) appliances, which was acquired by Symantec in 2003, and Robot Genius Inc., which developed anti-malware technologies—Hsu has given invited research seminars and colloquia at leading research universities and laboratories around the world.