So the team sought to create an AI system with a "more human-like" approach to a game Hassabis said "is played primarily through intuition and feel."
AlphaGo uses two sets of "deep neural networks" containing millions of neuron-like connections to reduce the search space to something more manageable.
The first, the "policy network", narrows the search at each turn to only those moves most likely to lead to a win.
The second, the "value network", estimates the likely winner from each position, "rather than searching all the way to the end of the game," said Silver.
"AlphaGo looks ahead by playing out the remainder of the game in its imagination many times over," he explained.
"The search process itself is not based on brute force, it's based on something more akin to imagination."
AlphaGo was programmed with 30 million moves from games played by human experts, and then left to do some self-coaching.
It played "thousands and thousands of games between its neural networks, gradually improving them using a trial-and-error process known as reinforcement learning," said Silver.
The result: The value networks are able to "very accurately" estimate the eventual winner from any Go position, "a problem that was so hard it was believed to be impossible."
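The trial-and-error loop Silver describes can be illustrated with a deliberately tiny example. Everything here is a hypothetical stand-in: a two-move "game" where one move wins more often, and a policy that is nudged toward whichever moves led to wins over thousands of self-played games.

```python
import math
import random

random.seed(0)
weights = {0: 0.0, 1: 0.0}          # one learnable weight per candidate move

def choose(weights):
    """Softmax-style sampling: better-weighted moves are picked more often."""
    exps = {m: math.exp(w) for m, w in weights.items()}
    total = sum(exps.values())
    r, acc = random.random() * total, 0.0
    for move, e in exps.items():
        acc += e
        if r <= acc:
            return move
    return move

def play_game(move):
    """Stand-in game: move 1 wins 80% of the time, move 0 only 20%."""
    return random.random() < (0.8 if move == 1 else 0.2)

LEARNING_RATE = 0.1
for _ in range(2000):               # thousands of self-played games
    move = choose(weights)
    won = play_game(move)
    # Trial and error: strengthen moves that won, weaken moves that lost.
    weights[move] += LEARNING_RATE * (1.0 if won else -1.0)
```

After enough games the weight on the stronger move pulls ahead, so the policy plays it more often, which is the essence of improving "using a trial-and-error process known as reinforcement learning."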
- 'Intuitive machinery' -
AlphaGo was tested against the best existing Go programmes, and won all but one of its 500 games, even when giving away free moves as a head-start.
Then last October, it beat Hui.
Tanguy Chouard, a Nature editor, described the feat as an "historical milestone" in AI development, which lies "right at the heart of the mystery of what intelligence is."
Computer games serve as a testing ground for AI developers seeking to invent smart and flexible algorithms that can tackle problems in ways similar to humans.
The first game mastered by a computer was noughts and crosses in 1952, followed by checkers in 1994, and the famous victory by IBM supercomputer Deep Blue over chess champion Garry Kasparov in 1997.
In 2014, another DeepMind system called DQN taught itself to play 49 different video games, and went on to beat professional human players at them.
But Go has proven tough, and until now, computers could only play it at amateur level.
"In the game of Go, we need this amazingly complex, intuitive machinery which people previously thought was only possible within the human brain, to even have an idea of who's ahead and what the right move is," said Silver.
The technology may prove useful in making smarter smartphones, and improving medical diagnostics or climate change models, said the team.
AlphaGo's next challenge will be in March, in Seoul, against Go world champion Lee Sedol of South Korea, who has held the crown for a decade.
"I have heard that Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time," he said in a statement. afp, photo by iStock