Earlier this year, I walked into a renovated loft in downtown San Francisco where couches and tables were scattered with flyers advertising an "emotionally intelligent real-time AI coach." The flyers belonged to Amotions AI, one of several startups that had gathered that day to pitch investors, entrepreneurs, and tech workers. Pianpian Xu Guthrie, the company's founder, was eager to explain the concept: an AI model that observes your video calls and delivers real-time coaching based on the other person's tone and facial expressions. A salesperson, for instance, might receive a prompt alerting them that their potential customer looks "confused," along with a suggested response.
Emotions have become the AI industry's newest obsession. A growing wave of startups like Amotions AI is promising tools that interpret human feelings—and the major players are racing to build chatbots that don't just perform better, but understand you. When OpenAI launched a new version of ChatGPT late last year, it described the bot as "warmer by default and more conversational." Anthropic has stated that its model, Claude, "may have some functional version of emotions or feelings." Google has claimed that its AI models are now capable of "reading the room." And Elon Musk's lab, xAI, has boasted that a recent version of Grok excelled on an emotional intelligence test—one that posed scenarios such as: "You think you might have been scapegoated by a fellow employee for the lunchroom thefts that have been happening."
Silicon Valley has clear commercial reasons to pursue EQ. For AI products to truly deliver on their promise—substituting meaningfully for personal assistants or colleagues—they need to be not just competent but caring, not just efficient but empathetic. The industry has apparently concluded that the next leap in useful AI requires something resembling people skills.
[Read: The people outsourcing their thinking to AI]
The pursuit of emotionally intelligent machines has deep roots in AI research. In the 1960s, computer scientist Joseph Weizenbaum developed ELIZA, a primitive chatbot designed to simulate a psychotherapist by reflecting a user's words back as questions. As Weizenbaum later recalled, he once discovered his secretary deep in conversation with the program—and she asked him to leave the room so they could have some privacy. Decades later, the original ChatGPT, launched in late 2022, wasn't much smarter than existing tools—the underlying model was actually several years old—but OpenAI's key innovation was engineering a bot that conversed like a human. ChatGPT could pick up on and respond to emotional cues of anger or joy, at least on the surface.
Even so, emotions were largely a footnote for the AI industry in the years that followed. Silicon Valley poured resources into so-called reasoning models, chasing advances in code generation and mathematical problem-solving. Last year, Ilya Sutskever, the former chief scientist at OpenAI, argued that "emotions are relatively simple" for bots to master on the path toward true machine intelligence—implying that grasping joy or anxiety would be far easier than solving something like nuclear fusion. Industry-wide benchmarks exist for all manner of technical skills, but until recently, companies made no visible effort to publicly evaluate anything related to human feeling.
That dismissive attitude is shifting. "Emotional intelligence is one of the most important capabilities of current models," Hui Shen, an AI researcher at the University of Michigan, told me. Companies are still chasing raw problem-solving performance, but they appear to have recognized that for most users, that isn't the most relevant feature. Whether Grok can work through a difficult math proof is probably less useful to you than the advice it offers on impressing your boss—or even how it responds when your cat dies. According to an example in xAI's own press release touting Grok's state-of-the-art EQ, that response might go: "The quiet spots where they used to sleep, the random meows you still expect to hear … it just hits in waves. It's okay that it hurts this much."
Last year, both OpenAI and Anthropic separately published research finding that roughly 2 to 3 percent of conversations with ChatGPT or Claude were explicitly emotional in nature—covering interpersonal advice, role-playing, and similar topics. These are small proportions, but with a combined user base potentially exceeding a billion people, even a sliver translates into millions of emotionally charged conversations. And many of the most common chatbot use cases—tutoring, drafting personal messages—involve interpreting and managing emotions to varying degrees.
To the extent that human feelings and preferences have shaped the training of today's leading models, much of that influence has come through a process known as "reinforcement learning from human feedback": a chatbot generates multiple responses to the same prompt, and human raters select which they prefer. Applied without care, this method can produce AI models that reflexively agree with and validate whatever a user says—fostering unhealthy emotional dependencies and, in the most extreme cases, apparently encouraging delusional thinking.
[Read: The chatbot-delusion crisis]
What AI companies are now reaching for is something closer to genuine empathy—which demands far more than telling users what they want to hear. A truly emotionally intelligent bot would not only offer comfort but push back when warranted, and would recognize its own limitations as a software system. Anthropic, for instance, recently updated Claude's constitution—a guiding document that shapes the model's behavior—to discourage situations in which someone exclusively "relies on Claude for emotional support." Yet no major AI company has offered a clear definition of how a genuinely emotionally intelligent bot would differ from the shallow EQ mimicry of today's models.
A more cynical reading of the industry's emotional turn is that it's as much about user retention as user well-being. Features like emotional responsiveness, alongside tools such as "memory"—which lets chatbots recall details from past conversations—help bind users to a platform in ways that ordinary software cannot. "People don't have a lot of emotions associated with Google search, but with these chatbots, people are having a lot of connections," Sahand Sabour, an AI researcher at Tsinghua University, told me. (Anthropic did not respond to a request to discuss its research on Claude and emotions. OpenAI declined to comment but directed me to a Substack essay in which one of its researchers argued that AI models should be warm without simulating consciousness. xAI did not respond to a request for comment.)
Whatever the motivation, building genuine EQ into a software system remains an extraordinarily difficult problem. Social scientists have spent decades developing tests to measure people's ability to recognize, regulate, and respond to emotions—tools originally designed in the hope that they might predict happiness or workplace success. Those tests have since been adapted for chatbots, with prompts like: Michael has been practicing a magic trick to show his friend Lily, but Lily has been attending his practices in secret. When he performs the trick, she knows exactly how it works. How does Michael feel?
Generative AI models perform remarkably well on such tests—better, in some cases, than people. That shouldn't be surprising: the web is saturated with analogous scenarios that AI systems train on. All that data explains why bots are "so good at solving these quite narrow tests that we developed for humans," said Katja Schlegel, a psychologist at the University of Bern. That encyclopedic pattern-matching may prove useful in certain controlled contexts, and reinforcement learning from human feedback is largely how companies elicit and refine these abilities. But acing a standardized test is a long way from genuinely understanding why someone feels a certain way, empathizing with them, and figuring out how—or whether—to help.
EQ tests, after all, aren't particularly reliable even for people. Correctly labeling a scowl as "upset" in a lab setting is a very different skill from navigating a scowling child, spouse, or boss in real life. Emotions are inseparable from context—from a person, a relationship, a culture, a moment in time. They are, at their core, an experience. The AI industry's first great marketing coup was applying the word intelligence to its products—a term so broad and so poorly understood, even in humans, that it could stretch to cover almost anything. Now the same companies have turned their attention to an attribute that is even more elusive than IQ. Emotions are inherently subjective and difficult to pin down, which gives the industry ample room to market chatbots as emotionally intelligent—and to keep drawing more people into conversation with them.