Demis Hassabis: The Future of Artificial Intelligence

Written by: Stephen Hsu

Primary Source: Information Processing

Thanks to AlphaGo and Lee Sedol, even my wife is interested in AI :-)

Here’s a recent talk by DeepMind CEO Demis Hassabis, whose comments start @5:30 min. The content of this talk is suitable for a non-technical audience.

@39 min: AlphaGo value / policy nets, and tree search.
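The pairing of policy/value nets with tree search can be sketched in miniature. Below is a toy PUCT-style search: a policy prior biases which branches get explored, and a value estimate replaces full rollouts. The game, policy, and value functions are illustrative stand-ins of my own, not DeepMind's implementation.

```python
import math

class Node:
    def __init__(self, state, prior):
        self.state = state        # game position (here just a move string)
        self.prior = prior        # P(s, a) from the policy
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # accumulated value estimates
        self.children = {}        # action -> Node

    def q(self):                  # mean value Q(s, a)
        return self.value_sum / self.visits if self.visits else 0.0

def puct_score(parent, child, c_puct=1.5):
    # Q(s, a) + c * P(s, a) * sqrt(N(s)) / (1 + N(s, a)):
    # exploit high-value moves, but explore moves the policy prior likes.
    u = c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
    return child.q() + u

def search(root_state, policy_fn, value_fn, num_simulations=100):
    root = Node(root_state, prior=1.0)
    for _ in range(num_simulations):
        node, path = root, [root]
        # Selection: descend by the PUCT rule until reaching a leaf.
        while node.children:
            parent = node
            _, node = max(parent.children.items(),
                          key=lambda kv: puct_score(parent, kv[1]))
            path.append(node)
        # Expansion: attach children weighted by the policy prior.
        for action, prior in policy_fn(node.state).items():
            node.children[action] = Node(node.state + action, prior)
        # Evaluation + backup: no rollout, just the value estimate.
        v = value_fn(node.state)
        for n in path:
            n.visits += 1
            n.value_sum += v
    # Play the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Demo: positions are strings of moves; the value estimate simply
# prefers positions with more "a" moves than "b" moves.
policy = lambda state: {"a": 0.5, "b": 0.5}   # uniform prior
value = lambda state: 1.0 if state.count("a") > state.count("b") else 0.0
print(search("", policy, value, num_simulations=200))  # prints "a"
```

The real system evaluates leaves with deep neural networks; here two lambdas stand in, which is enough to show how prior, visit count, and value interact in the selection rule.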

@1h03: over the summer DeepMind will look at the internal representations used in the valuation engine to see how they correspond to expert human intuitions about Go. This is like peeking into the mind of an alien creature that evolved fighting for territory in a 2D world with discrete spacetime :-)

Here’s a related comment that appeared on a Hacker News thread about the Lee Sedol match:

As someone who studied AI in college and is a reasonably good amateur Go player, I have been following the matches between Lee and AlphaGo.

AlphaGo plays some unusual moves that clearly run counter to classical Go training, moves that simply don’t fit current theories of the game. The world’s top players are struggling to explain the purpose or strategy behind them.

I’ve been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively I learned the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking) and enable players to think at a higher level.

For example, we’re taught to consider connected stones as one unit, and to give this unit attributes like dead, alive, strong, weak, or projecting influence into the surrounding areas. In other words, much like a standalone army unit.

These abstractions all make a lot of sense, feel natural, and certainly help game play: no player can consider the dozens (sometimes over 100) stones all as individuals and come up with a coherent plan of play. Chunking is such a natural and useful way of thinking.

But watching AlphaGo, I am not sure that’s how it thinks of the game. Maybe it simply doesn’t do chunking at all, or maybe it does chunking in its own way, not influenced by the physical world as we humans invariably are. AlphaGo’s moves are sometimes strange, and cannot be explained by the way humans chunk the game.

It’s both exciting and eerie. It’s like another intelligent species opening up a new way of looking at the world (at least for this very specific domain), and, much to our surprise, it’s a new way that’s more powerful than ours.

Note that AlphaGo almost certainly uses chunking of some sort (“feature identification,” in neural net terminology), but perhaps not of the kind familiar to our brains, which evolved in the physical/biological world.
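A toy illustration of what "feature identification" means mechanically: slide a small detector over a board and mark where a local stone shape appears, collapsing several stones into one recognized feature. AlphaGo's convolutional layers learn thousands of such filters from data; the hand-written pattern below is my own stand-in, just to show the mechanism.

```python
def detect(board, pattern):
    """Return (row, col) positions where `pattern` matches `board` exactly."""
    ph, pw = len(pattern), len(pattern[0])
    hits = []
    for r in range(len(board) - ph + 1):
        for c in range(len(board[0]) - pw + 1):
            if all(board[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                hits.append((r, c))
    return hits

# 1 = stone, 0 = empty; the detector "chunks" three-in-a-row into one feature.
board = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
three_in_a_row = [[1, 1, 1]]
print(detect(board, three_in_a_row))  # prints [(1, 1)]
```

A learned filter differs in that it outputs a graded activation rather than an exact match, and its pattern is tuned by training rather than written by hand, but the local pattern-to-feature step is the same idea as human chunking.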

@1h04: No surprise, Hassabis seems to believe in strong AI.

See also

Moore’s Law and AI
DeepMind and Demis Hassabis
Deep Neural Nets and Go: AlphaGo beats European champion.



Stephen Hsu
Stephen Hsu is vice president for Research and Graduate Studies at Michigan State University. He also serves as scientific adviser to BGI (formerly Beijing Genomics Institute) and as a member of its Cognitive Genomics Lab. Hsu’s primary work has been in applications of quantum field theory, particularly to problems in quantum chromodynamics, dark energy, black holes, entropy bounds, and particle physics beyond the standard model. He has also made contributions to genomics and bioinformatics, the theory of modern finance, and in encryption and information security. Founder of two Silicon Valley companies—SafeWeb, a pioneer in SSL VPN (Secure Sockets Layer Virtual Private Networks) appliances, which was acquired by Symantec in 2003, and Robot Genius Inc., which developed anti-malware technologies—Hsu has given invited research seminars and colloquia at leading research universities and laboratories around the world.