This could be the first step towards true artificial intelligence.
Without being given any rules or prior information, a simple computer has learnt how to play 49 classic Atari games in just two weeks – and it’s learnt to play them pretty damn well. But what’s most impressive is that the Google-built algorithm it runs wasn’t designed specifically to play games, just to learn from its own experience.
What does that mean, other than the fact computers can now beat us at Space Invaders and Breakout, as well as Chess, Texas hold’em poker and solving Rubik’s Cubes? It turns out we now have the early stages of a general learning algorithm that could help robots and computers to become experts at any task we throw at them, and that’s a pretty huge deal.
“This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks,” Demis Hassabis, one of the lead researchers, told William Herkewitz from Popular Mechanics. Hassabis was one of the co-founders of DeepMind Technologies, the company that began developing the algorithm and was bought by Google last year for a reported US$400 million.
Publishing today in Nature, the team explains how the deep learning algorithm, called Deep Q-Network, or DQN, was able to master games such as Boxing, Space Invaders and Stargunner without any background information, such as which “bad guys” to look out for or how to use the controls. It had access only to the score and the pixels on the screen to work out how to become an expert player. By playing the games over and over and over again, and learning from its mistakes, the algorithm learnt first how to play each game properly, and then, within a fortnight, how to win.

Via Google’s new AI has already learnt how to crush us at 49 games
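To give a flavour of the trial-and-error learning at DQN’s core, here is a minimal sketch of tabular Q-learning, the classic algorithm that DQN extends with a deep neural network reading raw pixels. This toy example is an illustration only, not DeepMind’s implementation: the environment (a one-dimensional corridor with a reward at the far end), the learning rates and the state layout are all invented for the sketch.

```python
import random

# Minimal tabular Q-learning sketch: an agent learns, purely from a
# reward signal, to walk right along a 1-D corridor to reach a goal.
# (Toy illustration; DQN replaces this table with a deep network
# trained on screen pixels.)

N_STATES = 5          # positions 0..4; the reward waits at state 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index]: estimated long-term value of each action
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; reward 1.0 only on reaching the final state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise pick the best-known action
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate towards
        # (immediate reward + discounted best future value)
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy moves right in every state
policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)
```

The key idea, shared with DQN, is that the agent is told nothing about the task in advance: it only ever sees states and scores, and the repeated update rule gradually propagates the reward backwards until good moves score higher than bad ones.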