Deep Learning tutorial: Yoshua Bengio, Yann LeCun, NIPS 2015

Written by: Stephen Hsu

Primary Source: Information Processing

I think these are the slides.

One of the topics I have remarked on before is the scarcity of bad local minima in the high-dimensional optimization required to tune these DNNs. In the limit of high dimensionality, a critical point is overwhelmingly likely to be a saddle point (i.e., to have at least one negative Hessian eigenvalue) rather than a true local minimum. This means that even though the loss surface is not convex, the optimization remains tractable: gradient descent is rarely trapped, since almost every critical point has a descent direction.
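As a rough numerical illustration of this point (a minimal sketch of my own, not from the tutorial): if the Hessian at a random critical point is modeled by a random symmetric matrix, the probability that all of its eigenvalues are positive falls off sharply as the dimension grows, so almost every critical point is a saddle.

```python
# Toy model: treat the Hessian at a random critical point as a random
# symmetric (GOE-style) matrix and estimate how often it is positive
# definite, i.e., how often the critical point is a genuine local minimum.
import numpy as np

rng = np.random.default_rng(0)

def fraction_local_minima(dim, trials=2000):
    """Estimate P(all eigenvalues > 0) for random symmetric dim x dim matrices."""
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / np.sqrt(2 * dim)   # symmetrize and normalize
        if np.all(np.linalg.eigvalsh(h) > 0):
            count += 1
    return count / trials

for d in (1, 2, 3, 5, 8, 12):
    print(f"dim={d:2d}  fraction that are local minima ~ {fraction_local_minima(d):.4f}")
```

Even at a dozen dimensions the fraction is already negligible; in the millions of dimensions typical of a DNN, essentially every critical point encountered is a saddle point.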
