Advancing AI to Spot Deceit with Diplomacy

The online version of the classic board game Diplomacy is serving as a testbed for researchers in the Computational Linguistics and Information Processing Lab (CLIP) to advance artificial intelligence’s ability to negotiate and detect deception.

The project is being funded by the Defense Advanced Research Projects Agency (DARPA) to defend against social engineering attacks, in which a user is manipulated into providing money or sensitive information. It’s estimated that cybercriminals stole $7 billion this way last year, and such attacks can compromise national security.

“We’re not advancing AI so that it can directly detect lies, but more like a copilot over your shoulder so that when an email from a Nigerian prince comes in, it could say, ‘Hmmm maybe that’s not really a prince,’” explains Associate Professor of Computer Science Jordan Boyd-Graber, who is principal investigator of the $985K award.

In Diplomacy there are no dice, playing cards, or other elements that produce random effects. Instead, it requires players to collaborate, collude, and betray to win by conquering a map of Europe on the eve of WWI. The game was reportedly favored by John F. Kennedy, Henry Kissinger, and Walter Cronkite.


It’s also a personal favorite of Boyd-Graber’s. He recalls playing it almost every weekend as an undergraduate at the California Institute of Technology. Today, he is a leading expert in computational linguistics with a dual appointment in the University of Maryland Institute for Advanced Computer Studies and the College of Information Studies.

The project builds on his previous work for DARPA, in which his team used data from Diplomacy to study deception and teach computer agents about negotiation. His work on the computational linguistics of betrayal was covered by CNN, The Wall Street Journal and others.

The biggest challenge of his work is gathering a valid data set of lies to train the algorithm. Not only is it impossible to ensure that users label their lies correctly, but they may also spread falsehoods without realizing it, he explains.

“This is what brought me back to the game of Diplomacy, a community of people who reveled in being effective liars—they could lie, tell us when they were lying, and be proud of it,” he says. “It has already proved to be a very useful tool for building data sets that machine learning researchers can use to train algorithms to detect when people are being deceitful.”
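
For illustration, a single record in such a self-annotated corpus might look like the minimal Python sketch below. The field names and values are hypothetical assumptions for exposition, not the project’s actual data format; the idea follows the setup described above, in which the sender labels each of their own messages as truthful or deceptive.

    # Hypothetical sketch of one self-annotated training example. The field
    # names and values are illustrative assumptions, not the project's format.
    message = {
        "text": "I promise I won't move into Belgium this turn.",
        "sender": "France",
        "receiver": "England",
        "sender_label": "lie",      # the sender admits this message was a lie
        "receiver_label": "truth",  # the receiver believed it at the time
    }

    # A deception detector learns to predict sender_label from the text and
    # game context; receiver_label measures how often humans are fooled.
    print(message["sender_label"] == message["receiver_label"])  # False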

The researchers’ immediate goal is to develop an AI that can win at Diplomacy, but they note that evaluating the factors that contribute to a victory is equally important, such as writing messages, correctly inferring an opponent’s stance, and cooperating with other players.

Computers have some advantages, such as analyzing data that humans might overlook or find difficult to decipher. For example, a time stamp is a big clue in Diplomacy: a player may be lying about their time zone, and messages sent faster than a human could type suggest a bot.
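
As a minimal sketch of how such timing clues could be turned into signals, consider the Python fragment below. The function names, thresholds, and message format are illustrative assumptions, not the project’s actual code.

    from datetime import datetime

    # Illustrative timestamp heuristics; thresholds are assumed for exposition.

    def timezone_mismatch(msg_utc_hour: int, claimed_utc_offset: int) -> bool:
        """Flag a message sent at an implausible local hour for the
        time zone the player claims to be in."""
        local_hour = (msg_utc_hour + claimed_utc_offset) % 24
        return 3 <= local_hour <= 5  # few humans negotiate at 3-5 a.m.

    def looks_like_bot(timestamps: list[datetime], min_gap: float = 2.0) -> bool:
        """Flag a sender whose consecutive messages arrive faster, on
        average, than a human could plausibly type them."""
        gaps = [(b - a).total_seconds()
                for a, b in zip(timestamps, timestamps[1:])]
        return bool(gaps) and sum(gaps) / len(gaps) < min_gap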

The researchers’ broader goal is to get computers to cooperate and negotiate with people. Eventually, these advances could lead to better versions of the online bots that people regularly use for customer support, scheduling appointments and more.

“Existing programs are insufficient because they focus on single interactions—ignoring the more complex dynamics of extended interactions and the richness and complexities of human communication,” said Boyd-Graber.


Leading the research with Boyd-Graber is his former doctoral student Denis Peskoff, who is now a postdoctoral researcher at Princeton University. Peskoff completed his Ph.D. in December and will formally graduate in May.

“He is actually the one who pitched the idea to DARPA and got the ball rolling on the program. I’ve never seen anything like it in a graduate student,” says Boyd-Graber. “It speaks well to Denis’s salesmanship and diplomacy skills.”

Peskoff, who speaks four languages and has a background in international affairs, wasn’t familiar with the game Diplomacy until he started working with Boyd-Graber at UMD. Now it’s one of his favorite games, tied with bridge and basketball.

“This has truly been a passion project where a life hobby of mine meets academic interests,” he says.

Peskoff doesn’t just play as a hobby and for research; he also competes against some of the world’s top players. Last year he finished in the top 10 of Diplomacy’s Nexus Press League and qualified for the semifinals of another tournament.

“Denis is much better than me now,” Boyd-Graber admits.


The pair was previously funded by DARPA for a project in which they analyzed language used in Diplomacy games to better predict betrayal and learn what types of communication are successful. Their work is outlined in “It Takes Two to Lie: One to Lie, and One to Listen,” which they presented at the Association for Computational Linguistics meeting in 2020; a recording of the presentation is available online.

“While we build on previous work, the challenge is to train this engine to work with other players, something not yet attempted in AI for Diplomacy,” says Boyd-Graber.


—Story by Maria Herd

Computer science researchers from the University of Southern California, the University of Sydney and Princeton University are also contributing to the project.
