Meet the World’s Least Ambitious AI

When IBM’s Deep Blue first defeated Garry Kasparov in 1997, the world chess champion accused the company of cheating. There was no way, he thought, that the computer could have beaten him without direct assistance from a skilled human player. But now the situation has flipped entirely. When grandmasters find themselves on the receiving end of a few mind-blowingly brilliant moves today, they accuse their opponent of using a computer. The only worthwhile competition for top chess engines is one another. The programs have become too powerful; humankind has lost.

But as the machines push toward chess perfection, one bot stands firmly in the way, refusing to accept a dominant position in the robot-human hierarchy. That holdout is Martin—the worst computer opponent on Chess.com, by far the most popular chess website in the world. Whereas programs such as ChatGPT dazzle, perplex, and frighten users with increasing computational prowess, Martin is programmed to be awful at chess. Surrounded by ambitious generative-AI products that often deliver incorrect or incoherent responses with brazen confidence, Martin is the rare humble bot that understands and embraces its profound limitations. It has lost when given 31 queens against an army of dinky pawns, which is a bit like breaking your arm in three places while attempting to velcro a shoe.

Martin can certainly beat newcomers, but if you know even the basics of chess strategy, it poses no threat. This is a machine with no grand ambitions, even as its AI contemporaries are engineered with the goal of achieving humanlike “artificial general intelligence.” If this is the era of AI chaos, then Martin serves as a reminder that intelligent programs can still surprise, delight, and even teach within a clear framework that the user controls—especially if they’re, well, kind of dumb.

Martin is part of a decades-long lineage of anthropomorphized computer-chess opponents, with customizable strengths and distinct personalities. The Chessmaster 2000, a computer game released in 1986, pitted users against a gray-bearded man whose Gandalf-esque visage graced the game’s cover. In 2019, Chess.com released bots with individual names, illustrated avatars, nationalities, and sayings. Martin is represented as a turtleneck-wearing Bulgarian man with bushy eyebrows, a thick beard, and a slightly receding hairline; it starts each game with the declaration “My 4-year-old son just beat me—ouch!”

Martin ignores obvious threats and celebrates when its opponent takes a piece: “It feels good to capture, no?” After losing, Martin invariably declares, “Great play! Do you want to coach my kids?”—a line that acknowledges both its shortcomings and its opponent’s overwhelming strength. The sole position Martin has just about mastered is mate in one—when a single move can win it the game. At that point, the powerful engine beneath the turtleneck shudders to life, unwilling to pass up the opportunity. But Martin is humble in victory, eager to make its opponent feel better. “I’ve been teaching kids, so I know a thing or two,” it says. “Want to try again?”

The answer to that question has been an emphatic yes. Erik Allebest, the CEO of Chess.com, told me that Martin plays about 10 million games a week, the most of any bot on the site. “People love clowning on Martin and posting about it,” he said. “It makes them feel good to just stomp a guy.”

[Read: Chess is just poker now]

In the past few months, leaps in AI processing power—and an outpouring of capital to the companies behind the technology—have set the world on edge. Martin offers a pleasant contrast to programs such as ChatGPT, which inspire regular editorials about the imminent collapse of mankind. People fear AI will outcompete humanity; Martin struggles to outcompete anybody. People don’t understand AI or where it’s going; Martin offers no mystery and is reliably terrible.

The growing unease recently led some prominent AI entrepreneurs and academics—such as Elon Musk and the professor and Turing Award winner Yoshua Bengio—to sign an open letter calling for at least a six-month “pause” on the training of AI systems more powerful than OpenAI’s GPT-4. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” the letter states.

In a sharp rebuke, researchers from the Distributed AI Research Institute said that the letter promoted “fearmongering and AI hype” and sidestepped real risks such as worker exploitation, data theft, and the concentration of power in the hands of a few powerful people. “People want to have consent to having technology used on them,” Margaret Mitchell, a co-author of the response and an AI-ethics researcher at Hugging Face, told me. “And there aren’t good mechanisms for informed consent.” Stephen Cave, the director of the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, expanded on the existential threat many people feel. “We are profoundly technological beings but profoundly dependent on these tools,” he told me. “So that requires that we have control over them.”

On one level, Martin is a sort of AI punching bag, Cave explained. Like WALL-E, Baymax in Big Hero 6, and the robots in The Mitchells vs. the Machines (whose inability to distinguish a dog from a loaf of bread helps foil their chances at world domination), it is a friendly bot that presents no threat to humankind. But Martin also displays some useful features for AI design. The mere fact that you can choose to play against it, rather than against the strongest chess engines, is itself valuable. “Giving people control over how a system behaves is actually quite simple but isn’t really done,” Mitchell pointed out. “The ability to provide constraints is critical for responsible technology where users are empowered.” (This might be why Microsoft allows users to toggle its new Bing chatbot into “Creative,” “Balanced,” and “Precise” modes.)

Martin’s intelligence, if you can call it that, is modest. Google Bard may not be able to end all global conflict, but it nevertheless offers a confident, five-part solution if you ask it to. (“Achieving world peace will not be easy,” Bard concludes, “but it is possible.”) An early version of Bing’s chatbot, Sydney, viciously gaslit users who corrected its erroneous replies. Martin, however, is gracious in defeat. It knows what it can do (play chess badly) and, more important, what it can’t (everything else). Martin is adored even by AI scholars, such as the Carnegie Mellon assistant professor Motahhare Eslami, who told me she appreciates Martin’s perpetual willingness to compete against her 4-year-old son.

Perhaps, as AI-centric companies rush to release products that make simple math mistakes, tell users to leave their spouse, and falsely accuse people of sexual assault, they can learn something from Martin. Give users a sense of control. Provide transparency. And stick to what the system knows—even if that’s simply being terrible at chess.