Poker pros win against AI, but experts peg match as statistical draw
Good news for humans! We haven’t yet been surpassed in intelligence by computer programs. A two-day poker showdown pitting four of the world’s best players of heads-up no-limit Texas Hold’em against Claudico, a Carnegie Mellon University artificial intelligence program, ended with the professionals on top, having amassed more poker chips than their AI counterpart.
Despite the win, the poker players’ $732,713 collective lead over Claudico wasn’t large enough to attain statistical significance, experts said. The results therefore can’t be accepted as scientifically reliable, meaning the “Brains Vs. Artificial Intelligence” competition effectively ended in a statistical tie.
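Why such a large dollar lead can still be a statistical tie comes down to variance: a common way to frame it (the organizers’ exact test was not published) is to treat per-hand winnings as samples and compute a z-statistic for the mean against zero. A minimal sketch, assuming a hypothetical per-hand standard deviation of $5,000, which is a plausible order of magnitude for no-limit Hold’em but an assumption here:

```python
import math

# Illustrative only: whether a chip lead is statistically significant
# depends on the number of hands and the per-hand variance, which is
# enormous in no-limit Hold'em.
def z_score(total_win, n_hands, sd_per_hand):
    """z-statistic for mean per-hand winnings versus zero."""
    mean = total_win / n_hands
    std_err = sd_per_hand / math.sqrt(n_hands)
    return mean / std_err

# 80,000 hands in total; $5,000 per-hand standard deviation is an
# assumed figure, not one reported for the match.
z = z_score(732_713, 80_000, 5_000)
print(round(z, 2))  # roughly 0.52, well below the ~1.96 needed for p < 0.05
```

Under these assumptions a $732,713 lead over 80,000 hands yields a z-score around 0.5, far short of the conventional 1.96 threshold, which is consistent with the experts’ “statistical draw” verdict.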
In the final chip tally, Bjorn Li had an individual chip total of $529,033, Doug Polk had $213,671 and Dong Kim had $70,491. Jason Les trailed Claudico by $80,482. Each of the players is ranked among the world’s top 10 professionals in heads-up (two-player) no-limit Texas Hold’em. Claudico played 20,000 hands with each pro in the two-player game. No actual wagering took place during the exhibition, though the pros will receive appearance fees based on their performance from a prize purse of $100,000 donated by Rivers Casino and Microsoft Research.
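The individual results are consistent with the collective lead reported above; a quick arithmetic check:

```python
# Per-player result versus Claudico (positive = player ahead,
# negative = Claudico ahead), from the final chip tally.
results = {"Li": 529_033, "Polk": 213_671, "Kim": 70_491, "Les": -80_482}

# The pros' combined lead over Claudico.
total = sum(results.values())
print(total)  # 732713
```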
Tuomas Sandholm, the CMU professor of computer science who directed development of Claudico, said his team never doubted that Claudico was the strongest computer poker program in the world, but they had no idea how it would fare against four top-10 professional poker players.
“We know theoretically that artificial intelligence is going to overtake us one day,” Li said. “But at the end of the day, the most important thing is that the humans remain on top for now,” he added, even though scientists don’t consider the results statistically significant.
Testing the limits of AI through poker
Poker has become a major test of artificial intelligence, Sandholm explained, because it is an incomplete-information game: players don’t know what cards their opponents hold, and all players try to mislead their opponents through bluffing, slow play and other deceptive tactics.
“Beating humans isn’t really our goal; it’s just a milestone along the way,” Sandholm said. “What we want to do is create an artificial intelligence that can help humans negotiate or make decisions in situations where they can’t know all of the facts.”
Claudico’s strategy was created using algorithms rather than trying to program in human poker expertise. The algorithms ran on the Pittsburgh Supercomputing Center’s Blacklight computer with just the rules of poker as input. The same sort of algorithms could also be used to create strategies for applications involving cybersecurity, business transactions and medicine. For instance, an AI similar to Claudico might help doctors develop sequential treatment plans for a patient, or design drugs that are less prone to resistance. Or, such an AI might help people negotiate their best deal when purchasing a house or a car.
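Claudico’s actual algorithms, run at supercomputer scale, are far beyond a short listing, but the core idea of computing a strategy from the rules alone can be sketched on a toy game using regret matching, a self-play technique from the same family of game-solving methods. This is an illustrative sketch, not CMU’s code; the game (rock-paper-scissors) and all names here are stand-ins:

```python
import random

# Toy illustration of regret matching: two agents repeatedly play
# rock-paper-scissors, each shifting weight toward actions it
# "regrets" not having played. The time-averaged strategy converges
# toward the game's equilibrium (uniform 1/3 each) using only the
# payoff rules as input -- no hand-coded expertise.

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player vs. column player

def strategy_from(regrets):
    """Play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

def sample(strategy):
    """Draw an action index from a probability distribution."""
    r, acc = random.random(), 0.0
    for a, p in enumerate(strategy):
        acc += p
        if r < acc:
            return a
    return ACTIONS - 1

def train(iterations=100_000, seed=0):
    random.seed(seed)
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strat_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy_from(r) for r in regrets]
        moves = [sample(s) for s in strats]
        for i in range(2):
            my, opp = moves[i], moves[1 - i]
            realized = PAYOFF[my][opp]
            for a in range(ACTIONS):
                # Regret = what action a would have earned, minus what we got.
                regrets[i][a] += PAYOFF[a][opp] - realized
                strat_sum[i][a] += strats[i][a]
    # The average strategy over all iterations approximates equilibrium.
    return [s / iterations for s in strat_sum[0]]

avg = train()
print([round(p, 2) for p in avg])  # each probability lands near 1/3
```

Poker’s hidden cards make the real problem vastly harder than this perfect-recall toy, which is why Claudico’s computed strategy runs to terabytes, but the principle is the same: the equilibrium emerges from repeated self-play against the rules, not from encoded human judgment.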
An earlier version of the computer program, called Tartanian7, decisively won the heads-up no-limit Texas Hold’em category against each opponent with statistical significance at the Association for the Advancement of Artificial Intelligence’s Annual Computer Poker Competition last July. The poker pros had a chance to observe Tartanian7’s play prior to this spring’s competition.
“The advances made in Claudico over Tartanian7 in just eight months were huge,” Les said, a rate of improvement that suggests the AI might need only another year before it clearly plays better than the pros.
As it stands, Claudico is a good, but not top-notch player, Polk said.
“There are spots where it plays well and others where I just don’t understand it,” he added. Some of its bets, for instance, were highly unusual, in Polk’s estimation. Where a human might place a bet worth half or three-quarters of the pot, Claudico would sometimes bet a miserly 10 percent or an over-the-top 1,000 percent. “Betting $19,000 to win a $700 pot just isn’t something that a person would do,” he observed.
But Claudico is a supremely cool player. Losing a large bet might rattle a person, changing the way subsequent hands are played. But Claudico never showed signs of being fazed, Polk said.
If Claudico’s game play sometimes left the pros baffled, the computer science team, including Ph.D. students Noam Brown and Sam Ganzfried, was often equally puzzled. Claudico sets its own strategy, Brown noted, and that strategy occupies about two terabytes of data — far more than the CMU team could analyze.
The Blacklight computer was used throughout the event to compute a better and better approximation of game-theory-optimal strategy. The work with Blacklight was supported in part by an allocation from XSEDE, the National Science Foundation’s network of supercomputing resources.
Sandholm expressed confidence that AI will soon be able to clearly exceed the play of top professionals, noting that he and his team already have ideas for improving the algorithms at the heart of the program. Plus, they now have 80,000 hands of data on how top professionals play the game — data that scientists can use to train, test and perfect the successors to Claudico.
The work continues Carnegie Mellon’s pioneering research in artificial intelligence, which dates back to the first AI program in 1956 and includes the establishment of the world’s first Machine Learning Department. CMU faculty members are among the world’s leading scientists in computational game theory, market design, natural language processing, computer vision, speech translation, thought identification and collaboration among intelligent agents. CMU laid the groundwork for computer chess programs that ultimately defeated Grandmaster Garry Kasparov in 1997 and made significant contributions to the Watson program that defeated Jeopardy! champions in 2011.