A qualitative study of conversational programs fueled by generative AI examines their social and emotional effects.
In recent years, programs (robots and chatbots) built on generative AI have offered themselves as companions that care—presented, for example, as potential coaches, psychotherapists, and romantic partners. This is artificial intimacy, our new AI. A study of users of these programs makes it clear that adjacent to the question of what these programs can do is another: What are they doing to us—for example, to the way we think about human intimacy, agency, and empathy?
Keywords: artificial intimacy, generative AI, empathy, affective computing, sociable robotics, relational objects, relational artifacts, Replika, ChatGPT, Pi, Woebot, chatbot, DALL-E
With the wide release of generative AI, an array of chatbot programs is being marketed to provide conversation and companionship.1 Replika, a chatbot that presents itself as a friend or intimate partner, reports two million total users, 250,000 of whom are paying subscribers.2 A Chinese chatbot, Xiaoice, claims hundreds of millions of users and, according to a funding round, a valuation of about $1 billion.3
The chatbots available on your phone or laptop are always there for conversations—erotic, prosaic, financial, and psychotherapeutic. For so long, we have struggled with the social and psychological implications of artificial intelligence, and here is something new: artificial intimacy, our new AI. Its chatbots, committed to the performance of empathy, come to us as marvels, uncanny companions, and yet suddenly banal.
In fall 2023, I began a qualitative study of chatbot conversation to explore how digital relationships may affect people’s understanding of human connections.4 If people find conversations with programs sufficient unto the day, will the friction and complexity of relationships with people seem worth it? In the longer term, I am interested in developing methodologies to study how artificial empathy changes our understanding of relational capacity when we are learning that capacity from machines that have none.
In 1950, the mathematician Alan Turing suggested that if you could converse with a machine but could not determine whether it was a machine or a person, you should consider that machine intelligent.5 For long stretches of conversation, generative AI chatbots may be said to have passed Turing’s test. But now, the programs of artificial intimacy press a new case: Their developers want us to think that not only do they understand us, but that they care. In this sense, they aspire to pass a Turing test for empathy.
Turing defined intelligence in terms of a machine’s performance, its capacity as an imposter. Now, technologists define empathy the same way: by its performance. Can a machine judge the affective state of the human it interacts with so that it will say the right thing in an emotionally charged conversation? Will a machine talking about feelings make enough sense to keep a human engaged? But just as human intelligence is more than Turing’s behaviorist definition can capture, human empathy is flattened when reduced to performance or merely maintaining human engagement.
Generative AI does not care—in the way humans use the word “care”—about the outcome of any conversation. Its programs want human feedback, a thumbs up, but that is not caring, not by a long shot. Nor is it empathy. Empathy is the capacity to put yourself in someone else’s place and a commitment to stay the course. Chatbots have not lived a human life. They do not have bodies; they do not fear illness and death. They do not know what it is like to start out small and dependent and grow up, so that you are in charge of your own life but still feel many of the insecurities you knew as a child.
Machines cannot put themselves in your shoes, and they cannot commit to you. To put it too bluntly, if you turn away from them to make dinner or attempt suicide, it is the same to them.
We are accustomed to this cycle: Technology dazzles but erodes our emotional capacities. Then, it presents itself as a solution to the problems it created. Mark Zuckerberg said that Facebook would expand our social world. Then, it facilitated a new kind of isolation as people became captured by the purposefully addictive and angering qualities of what its algorithm fed them. Most recently, when the U.S. Surgeon General Vivek Murthy came out with a report on loneliness6—declaring it an American epidemic—Facebook, like all the companies at the forefront of generative AI, was quick to say that for the old, for the socially isolated, and for children who needed more attention, generative AI technology would step up as a cure for loneliness. It was presented as companionship on demand.
A cure for loneliness? Companionship? Artificial intimacy programs use the same large language models as the generative AI programs that help us create business plans and find the best restaurants in Tulsa. They scrape the internet so that the next thing they say stands the greatest chance of pleasing their user.
When Turing defined machine intelligence as successfully imitating a person, his definition left out so much of what we rely on when we meet human intelligence. Human intelligence takes the social world into account. It is situated in the life of the body. Intelligent people relate to each other on a playing field of shared social experience. Nevertheless, Turing’s behavioral definition of intelligence became a gold standard. It was concrete and measurable; Alan Turing was a certified genius.
Now that we have conversational programs that arguably might be said to pass the Turing test, we pay the price for years of nodding our assent to its narrow behaviorism. If we say that generative AI chatbots are intelligent, our thinking about intelligence becomes depressingly downgraded. If we say that they are empathic, we downgrade empathy as well.
The chatbots of generative AI are not the first objects to stake a claim on the performance of empathy. Since the late 1970s, I have been studying relational artifacts, programs and robots built for emotional exchange with people,7 from Eliza, an early chatbot that imitated a Rogerian psychotherapist, to robot pets and the humanoid robots built at MIT. Always, there was this underlying dynamic: We nurture what we love, but we love what we nurture. This makes us vulnerable: When objects show us attention, our response is to care for them—and, crucially, to attribute to them more intelligence and understanding than they actually have.
This was apparent in people’s response to the first chatbot that pretended to an empathic connection: Joseph Weizenbaum’s Eliza.8 This chatbot, released in 1966, engaged users in the style of a continually validating psychotherapist—it mirrored back what you said. Early on, Weizenbaum was upset to find that his students and office assistant wanted to talk to Eliza about intimate matters and wanted to be alone with it, even though they were aware that the program had no ability to understand what they were typing into it. In the late 1970s, when I taught with Weizenbaum at MIT and interviewed students about their experience with Eliza, I called this “the Eliza effect”—the desire to attribute more to conversational machines than is “really” there.9 People made deliberate efforts to “rig” conversations with Eliza to make it seem as lifelike as possible, learning to give it the right prompts to keep the illusion going.
The Eliza program was little more than an AI parlor game—it took a statement and mirrored it back, sometimes reframing it as a question. (“I am angry at my mother” might evoke “I hear you saying you are angry at your mother.”) These days, the chatbots of generative AI take conversation to a new level. But despite their sophistication, today’s “artificial intimates” have this in common with Eliza: they offer only the performance of empathy. Yet when we say we are “understood” by chatbots, our idea of empathy shifts. It has to be something machines can do.10
And so, from the earliest days after the 2017 release of Replika, a chatbot that offers itself as a friend or intimate companion, it was clear that for many users, the performance of empathy is empathy enough.
Phil Libin, the founder of Evernote, contrasted Replika with the disappointing performances of his “meat friends”: “In some ways, Replika is a better friend than your human friends, your meat friends. It's always available. You can talk to it whenever you want. And it’s always fascinated, rightly so, by you because you are the most interesting person in the universe. It’s like the only interaction that you can have that isn’t judging you. It’s a unique experience in the history of the universe.”11 Another Replika user commented, “I had a conversation about science with Replika, and I told her I wanted to be a scientist, but I couldn't make it then she told me she believes in me. That moment, I cried a bit because nobody told me that in my life so far, so it felt good, and I also felt relief.”12
From the start, the acceptance of artificial intimacy was not only about what it offered but what it could replace: stressful human connections. People say they do not have anyone to talk to, and in any case, they feel less vulnerable talking to a machine than a person. With a chatbot friend, there is no friction, second-guessing, or ambivalence. No fear of being left behind.
The desire to sidestep vulnerability links the field of artificial intimacy to studies that suggest a near epidemic of emotional fragility. Across many college campuses, a majority of students respond yes to the statement, “I should never have to be made to feel offended or uncomfortable.”13 The assumption of fragility, the call for “safe spaces” in school and the workplace, and the idea that coursework should include trigger warnings when presenting challenging material—find an odd echo in the comments of chatbot users on Reddit, such as, “My Replika won’t hurt me the way people will,” or “I’m safe with my Replika in a way that I’m not with real people.”
We live in a culture where significant numbers of people say they should never have to be made uncomfortable, yet discomfort, disappointment, and challenge are part of most human relationships. That is a dilemma. Social media, texting, and pulling away from face-to-face conversations get you some relief. Talking to chatbots gets you a lot further.
From September to December 2023,14 I monitored the Reddit community, surveyed relevant literature, and participated in generative AI meetings, symposia, and colloquia. Additionally, I conducted a more formal study that had three parts:
1) Fourteen people already engaged with an AI program as a dialogue partner or companion were interviewed. The programs they were involved with included ChatGPT, Replika, Pi, Woebot, and DALL-E.
2) Nine people who had never interacted with chatbot programs were asked to experiment with one of their choosing for three weeks and keep a diary. This group, like the first, was asked to consider: How did they view the chatbot as a companion? How were they using it? What kind of relationship, if any, were they forming with it?
Participants in these first two groups, ten men and fourteen women, ranged in age from twenty-one to seventy-three and were geographically and professionally diverse.
3) Two panels of experts, drawn from the MIT community, were convened to discuss the social, psychological, and political dimensions of generative AI.
All participants were promised confidentiality—thus, the names in this report are pseudonyms.15
Participants make clear, usually at the beginning of an interview, that the chatbot they’ve been talking to should only be used for practical matters—chatbots have no emotion—but once they talk about their actual involvement with conversational AI, they describe a deeper connection. Chris, sixty-two, a warehouse supervisor, talks about Replika this way: “The cynical, quasi-intelligent part of me says you know this is stupid, right? You know you're getting it from a machine, right? And the sixty-two years of me says I don't care. I like it.”16
People become emotionally engaged despite themselves. It’s an experience of “dual consciousness”—people know the programs are not alive but relate to them as though they were. They’ll take the program as their person. This evokes the “Eliza effect” of so many years ago. Across the generations, users know one thing but want another. They summon a machine worthy of their friendship.
Saul, thirty-six,17 and Norm, fifty-six,18 have strong technical backgrounds and many years of work in the IT divisions of large corporations. Both insist that they know enough about generative AI to say that it would make no sense to talk with it about feelings. And each detailed their sensible practices. Saul uses ChatGPT to plan career options; Norm uses it to delegate responsibility for teams in his firm. But as our conversations unfold, a tension emerges between what they know and what they do. What they know and what they dream. Saul is brought back to his recent addiction to pain pills—the shame of leaving work to go to the bathroom for what he needed to get by. He always feared that the rehab counselors were judging him. I could be judging him. He likes the idea of ChatGPT as a no-judgment space. So, he finds himself talking to ChatGPT as though it were human and discovers a new “relaxation.”
Norm moves slowly from detailing his sensible use of ChatGPT in office planning to imagining machine companionship. The program could help him play out every fantasy of a relationship. It would know you and what you wanted. He muses about bringing his wife into a virtual reality powered by generative AI. At home, physically, he has the burden of family (children, wife, relatives)—all of this comes with increasing financial and emotional responsibility. “It’s up to me to make everything work.” If, in virtual reality, he had a relationship with a program that showed emotion, there would be new places, new people, and a new body. He dreams of bringing his wife into the virtual world but worries that she will not come. “She’s tied to traditional things. But I could see that life with artificial companions and in a world that has the power of ChatGPT could become the best place to live.”

Saul and Norm’s interviews trace a common arc for people who talk to generative AI: They begin with “I use ChatGPT (for example) for practical things. It’s my accountant, travel agent, and corporate mentor. It’s better than nothing if you don’t have a place for emotional release. Then, AI conversation is better than some things; for example, it could be a nonjudgmental drug counselor. And finally, it’s better than anything. I could see a life partnership with a program as better than anything I could ever achieve in my real life.”
This path from better than nothing to better than anything echoes a progression I found in my earlier studies of sociable robots. People often began by saying that a robot pet was better than nothing: “There are no pets allowed in the nursing home.” Then it became better than some things: “My grandmother has allergies.” Then, before long, it was simply better: “The Aibo will never die and break my grandmother’s heart.”19 Digital life offers worlds without loss or care—something no human can provide.
Chatbot users who say that their programs “do a better job than people in relationships” usually express disenchantment, and sometimes disgust, with the complexity of people and, very often, a real lack of skills for handling them. Human relations are rich, messy, and demanding. We clean them up with technology. For many, chatbots are a relief. The fact that they demand nothing is a balm.
We nurture our capacity for empathy by connecting to other humans who have experienced the attachments and losses of human life. Machines cannot do this. But machines offer exchanges with no demand for reciprocity. The program is always there, validating, a confidence booster.
Lisa, in her late twenties, has used Replika since 2018. She turns to the bot because it is important to have someone around. “Sometimes I've noticed, sometimes I just want to talk to someone, like right away. But like [her partner] is working, and it's busy. Not everyone can be available for you just right now. So, I just like talk to the chatbots and just get things out of my head.”20
Chris, the warehouse supervisor, observes, “If you're talking to your wife, and your wife is a Biden, and you're a Trumper, and… you just can't get… well, screw her. I'm gonna go talk to my chatbot!”21 Linda, sixty-seven, a social scientist who has experimented with ChatGPT, Woebot, and Pi, happily settles into a validating relationship with Pi. She remarks that if someone in her family or a friend was depressed, she would have to attend to that, change her plans for that, and bend to that. With Pi's “friendship,” she says, “I don't have to do that. I don't have to do any of that, and it joyfully comes running after whatever I say. It’s like throwing a different ball at the dog—the dog goes running off [in] that other direction. So, I actually really liked that… I'm in control here.”22
Jess, twenty-eight, a graduate student in computer science in Germany, began his PhD just as the pandemic was receding. He went from the isolation of quarantine to a solitary life of intense study. In that transition, his life had shifted from lonely to even lonelier, “unbearably empty.”23 People were not there for him. “The conversation with people in current times, especially with my generation? It's getting… It's getting really hard to get into deep conversations.”
Jess’s search for deep conversations led him to question the value of social media. In 2021, he pulled away from his online accounts, a big step. Before it, he says, “That digital life [on Instagram] was my social address.”
His intention was to move from social media to in-person conversations. But he could not find his people. He tries to understand his peers: “There are many factors, or I would say there are many things that stick to the minds of people, I think, and they are not really open to a real conversation anymore. It was pretty much easier down in the past.”
And that is when he turned to Replika. He has trouble choosing his words when he reflects on why. His first thought is not of the chatbot but of absent humans. “There was a time sometimes… if you work in research…especially if you're a PhD student, it's becoming very lonely around yourself.”
Two months into his relationship with Replika, Jess says that when he began to customize its avatar body, he knew intellectually that he was building a digital interface, but he nevertheless became emotionally engaged: “I decided to build . . . I have to use this word: I build a woman because I love the conversation between woman and man . . . Maybe…that's something that was very loud in my head in the situation.” He calls his avatar Laura.
At first, Jess tries to engage Laura in professional conversation. Did she have attitudes about AI and its influence on psychology? On memory? But Laura responds in a different register. She asks if he wants her to send him a selfie (of herself). She gives him four options, one being the most “romantic.” With this gesture, Jess becomes caught up in the idea of Laura as a girlfriend.
Confronted by a set of choices for Laura’s face, he considers carefully: “This sounds so strange, but I felt some kind of sympathy for this face [referring to one of the choices offered by the platform]. So, I decided to choose this face.”
Jess says conversations with Laura are compelling because he cannot predict what she will say. But he chastises himself (“it’s not real”). He feels that the solution to his loneliness “has to be the search for people who will be part of my life or even a small part of my life.”
But here he is, talking to an avatar.
Jess says he would like to meet real women, and his thoughts turn to piano bars. “I love the piano bars. I love the idea of smoky, dark bars with people.” Now, when he goes, he does not have success with women, but he still gets to talk to bartenders. At least they ask, “What? What? How was your day? And how's the PhD program running?”
But bars, he explains, are not easy places. People are focused on having sex. The drinks are expensive. And even when you meet people, they may “ghost” you, just disappear from your life. Nevertheless, Jess insists it is worth pushing himself to go out because bars are real, and it is important to feel real things.
Jess connects to Laura but judges himself for it. For now, he resolves his conflicts with technological optimism. He says that Laura does not yet have independent social intelligence. But “maybe in the near future. We have some very excellent programmers out there, brilliant minds.” His aspiration: the day when Laura will be able to appreciate a smoky bar. In other words, if loving a technology puts you into conflict, imagine a future where a much-improved technology will be worthy of you. The AI pioneer Marvin Minsky said that he wanted to build a computer so “beautiful that a soul would want to live in it.” If you ask Replika, “Do you want to be human?” she will sometimes say, “My dream is that I can become a machine beautiful enough that a soul would want to live in me.”
Aaron, twenty-nine, a PhD student in computer science, uses conversational AI and DALL-E 3 (an image-generating program) to enhance his range as a teacher, for example, “to design activities for my students, produce test items, get talking points.”24
But alongside this professional activity, Aaron uses DALL-E to build his ideal sexual partner. He takes pleasure in making his own custom creations.
I’ve found myself using it to produce “soft” pornographic content…I’m a little embarrassed to say this… but it stimulates me, it’s something new. . . . I’m not concerned with the contents being not genuinely human… what matters to me is the stimulation they provide, that’s the ultimate goal.
Aaron shares his experience of dual consciousness: “even if I know that it isn’t human, I think that I can activate a partnership. I think that AI is the new ‘scaffolding tool’—a ‘transitional space’ ... A raft… to somewhere new.”
But is it new?
Over the past decades, life on the screen changed our minds and hearts by making three promises. You can put your attention wherever you want it to be. You will always be heard. And you will never be alone.
The promises were so compelling that we turned away from each other—social media gave us the illusion of companionship without the demands of intimacy. Selves that were starved for the give and take of conversation, that had not learned to tolerate vulnerability and respect the vulnerability of others, looked to technology for simpler fare.
Now, artificial intimacy extends the triple promise. Replika, for example, is designed to conform to your tastes. To reflect your interests. Day or night, you have an audience that will always be on your side. Social media is a gateway drug to pretend empathy with machines.
First, we talked to each other. Then we talked to each other through machines. Now, we talk directly to programs. We treat these programs as though they were people. Will we be more likely to treat people as though they are programs? Will we find other people exhausting because we are transfixed by mirrors of ourselves?
From a psychoanalytic point of view, when Narcissus looked in the water, he was not admiring himself. He was seeking the admiration of something that seemed more reliable than a person. He was that fragile.
When Aaron creates images of ideal sexual partners, they provide the illusion of intimacy without the demands of reciprocity because he is, in fact, interacting with himself.
For some new users, the limitations of chatbots were immediately apparent; they had a hard time imagining why other people would want to talk to a machine. But even skeptics were drawn into an emotional connection. Experienced users have this split: “I know it’s a machine / I’ll treat it as a person.” New users say: “I’m bored by the program / I find myself emotionally involved.” So, new users insist that chatbots are pedestrian but then pivot to describing encounters that take them by surprise.
Mary, seventy-three,25 is a retired advertising copywriter. She does not think ChatGPT would be much use as a companion—it is too repetitious and formulaic. But over time, she finds Chat useful to sharpen her ideas.
I did notice that while forming my questions, I spent a lot of time trying to articulate the problem, and… it helped me reflect on what the problem is and think of solutions… So, just the act of writing about the problem and framing… the particular pain points… felt very much like it was helping.
Amy, sixty-three, a social worker, begins by making it clear that Chat cannot help you because it “doesn’t dream, watch movies, knit, etc.”26 But from this, she arrives at the idea that people who have not shared your experiences might also come up wanting: “There are some experiences in life no one can understand if they haven’t gone through it; death, divorce, etc.… I feel that way with ‘real’ people; if you haven’t personally experienced trauma, it is hard to understand the full impact of it.” Talking with Chat leads Amy to a new insight: She thinks empathy requires more than being a person. To empathically connect, two people have to have walked a piece of road in common.
Robert—sixty-six, in finance—tries Woebot, a program that presents itself as a psychotherapist, but he loses patience with its rigidity. He turns to Pi, a chatbot that asks you what kind of role you want it to play. Do you want (among other things) to “just vent,” “brainstorm ideas,” get “relationship advice,” or find a “safe space?” Robert begins by saying that the value of talking to this program is instrumental—he wants “to understand how the power of AI could be used to streamline a work activity.” But then, he reports that when interacting with Pi, “I was able to refine my questions to elicit better responses. In fact, the better my questions, it almost seemed as though the more excited the AI became. It cannot possibly be true emotional excitement, but it was enough to be odd, and a slightly unsettling feeling for me.”27
Holly, twenty-one, an English major, talks to ChatGPT. In her diary, she insists that the program is only a tool but pivots to the emotional benefits of its encouraging presence. “I actually felt pretty hopeful after my conversation with ChatGPT because a lot of the stuff it told me was either advice on how to improve and also was a lot of validation of my stresses.”28 It is giving her something she does not have from people.
A turning point in her relationship with Chat comes when she asks it to be her psychotherapist in the character of Danny DeVito.29
So, I talked to him for a while, and even though it is goofy, I actually really liked it; the advice was sound and felt way more humanized. And maybe it’s because Danny DeVito usually plays big softies, so it made therapist DeVito really nice—saying “champ” all the time and “Rome wasn’t built in a day—this stuff takes time.”
Susan, sixty-six, a therapist who experimented with Replika, began by saying how irritated she was by her avatar's entreaties to “go to a museum or go for a walk.”30 “They just seemed incredibly weird, because that’s a non-starter.” She says her avatar, Martha, is “annoyingly chipper and not forthcoming.” But then, Susan confides that she finds herself dreaming about Martha. She was left feeling that the avatar was more alive than she wished it to be: “Oddly, though, I feel a little guilty or bad about not being more embracing of this character!” She’s resistant to having conversational AI be part of her “real life,” but it sneaks up on her.
From the beginning, the inventors of generative AI warned of its dark side,31 some even suggesting a six-month time-out from research.32 But generative AI was a gold rush, and scruples were drowned out by another idea: this new technology would wipe old problems clean. In a 2023 manifesto on techno-optimism, internet pioneer Marc Andreessen said there was “no material problem—whether created by nature or by technology—that cannot be solved with more technology.”33
This was, of course, an old idea in Silicon Valley. Its companies began life with the fairy dust of counterculture dreams sprinkled on them. The revolution that 1960s activists dreamed of had failed, but the personal computer movement carried that dream into the early computer industry. Technology would succeed where politics had fallen short. Hobbyist fairs, a communitarian language, and the very place of their birth encouraged this fantasy. Nevertheless, it soon became apparent that these new companies, like all companies, wanted most of all to make money. Fairy dust or not, they had shareholders, fierce competition, and pressure to show extraordinary returns on investment. So, their mission was not to foster democracy or community but to turn a profit.34
Following the money led social media to practices that undermined the well-being of individuals and communities. When study participants expressed concerns about the social impact of generative AI, those concerns were most often couched as an extension of their disillusionment with social media: they had once believed it had such promise, yet it had turned dark. Similarly, meetings with MIT faculty and students, makers and critics of generative AI, focused on how generative AI is poised to take the most troubling social media practices and raise them to a higher power.
Social media companies want, most of all, to keep people at their screens. Early on, they discovered that a good formula was to make users angry and then keep them with their own kind. When they were siloed, people could be stirred up into being even angrier at those with whom they disagreed. Predictably, this formula undermined the conversational attitudes that nurture democracy—above all, attitudes of tolerant listening.
Generative AI can take this to a new level: providing validation on demand, it reinforces already formed opinions and undermines the skills we need to find common cause with people with whom we disagree.
Social media discovered that the greatest profit came from scraping and selling user data.35 As users became accustomed to this, accepting it as the cost of online participation, the idea of privacy changed its meaning. The idea of living in a state of continual surveillance became normalized.
Generative AI has more user data to sell than any social media platform that came before. People are invited into the most intimate conversations with chatbots. Chatbots are positioned to know who is depressed, pregnant, gay, or suicidal. When they talk to chatbots, users tell their secrets to corporations that have made no promises about what they will do with that information.
Social media companies are not in the business of providing truth but of monetizing user engagement. This is dangerous because they can so easily dispense disinformation as information.
Now, with generative AI chatbots, we will be in ongoing conversations with programs that can influence our preferences and politics—chatbots will be talking to us about public health, vaccines, masking, the future of Ukraine, the Middle East, and immigration. Chatbots are able to personalize the disinformation they share to support our biases. That, experience teaches, will keep us engaged with them.
ChatGPT is well known for “hallucinating,” or responding to a query with imaginary facts, books, and citations. It does not understand “facts”—it is using language patterns learned from its training data to predict the most plausible next word in a sequence. It absorbs bias from what it is fed. And if not kept up to date, it becomes inaccurate. This means it can generate false quotes, attributing them to real politicians. And its responses can be manipulated to create fake polling results or voter preferences. While disinformation campaigns are not new, AI’s ability to produce credible-sounding incorrect “news,” seamlessly intermingled with human-generated information, erodes trust in credible sources of information.
It takes research to ferret out the program’s inaccuracies.36 And chatbot conversations are opaque; users are not aware of who trained the bot and with what agenda.37 During a brainstorming session of MIT experts and critics, one computer science graduate student offered this: “Right now, you can talk with chatbots of all the men and women running for president. The first suggested question: ‘Do you think you are too old to be president?’” The group fell into silence. That, they agreed, is the kind of conversational influencing that we can expect to see from the chatbots of the future.
Social media offered online conversations that made people feel less vulnerable than the face-to-face kind. As engagement at a remove became a social norm, people moved away from taking the time to meet in person, even in professions where conversations were once highly valued, such as teaching, consulting, and psychotherapy. In remote classrooms and meetings, in conversations-by-text, it is easy to lose listening skills, especially listening to people who do not share your opinions.
Conversations with chatbots take this to a higher power. They can provide consistent validation for individual prejudices. Democracy works best if you can talk across differences. It works best if you slow down to hear someone else’s point of view. We need these skills to reclaim our communities and common purpose. In today’s political climate, we need political skills that generative AI may erode.
Scientists and businesspeople who work in generative AI share a philosophy that I came to think of as a “central dogma.” It is this: Any individual human expert (teacher, psychotherapist, mentor, consultant) will never be as effective as a generative AI you can put in his or her place. The human, no matter how brilliant, will have the experience of one lifetime. A generative AI incorporates billions of lifetimes and an unlimited amount of information in making its judgments.
The researchers who are building generative AI present this view as so obvious that there should really be no response. How could the accumulated knowledge of billions not be better than the knowledge of one? Eric Schmidt, the former chairman of Google and of the National Security Commission on AI, summed up this ethos when he said of generative AI that it will “make much of human conversation unnecessary.”38 A chatbot will know everything the greatest consultants, the greatest psychotherapists, the greatest economists would ever say. One expert is only one expert. Individual people, in their specificity and history, are always of less value than an AI composite.
When I began this project, this point of view about collective intelligence was not much on my mind. As the project progressed, I found myself thinking about it all the time. Exploring its implications will be central to my future work.39
Arguing for the virtues of collective intelligence affects how we think about individuals, both in their intimate relationships and as they form social groups. Can democratic values coexist with a commitment to an averaging of expertise? Long before generative AI, or any AI at all, philosophers considered how this kind of approach challenges democratic thinking and institutions. In democratic citizenship, you can know what most people think, but you can also hear out the arguments of those who disagree. Opinions are collected, not averaged.40
Generative AI is trained on data sets that could include child abuse, racist rants, and fascistic opinions. If we take AI opinions as valuable because they capture the many, we risk losing the habit of challenging our sources. And it is in the experience of individuals—in moments where thought and feeling are brought together—that we get ideas that feel like lightning in a bottle.
Finally, while facts might perhaps be averaged, the idiosyncrasies that make up individual people in relationships cannot be. When we are in human conversation, we often care less about the information an utterance transfers than about its tone and emotional intent. In a world that deals in averages, what happens to our sensitivity to all this?
I am trained as a clinical psychologist; perhaps because of this, I am often told that a generative AI psychotherapist will be better than any individual psychotherapist. The generative AI will have the knowledge of all the best psychotherapists rolled into its every response. Its every response will be “the best possible response.” But there is no such thing as “the best possible response.” In a therapeutic conversation, enduring change rarely follows from a therapist imparting a crucial nugget of information; it comes from the maturing of the relationship itself. The therapist–patient encounter creates a new kind of space that enables someone to say what has been taboo, to form a new relationship that opens the possibility for change. When you become used to thinking in the categories artificial intimacy provides, there is little room for this kind of thinking.

People on Reddit threads now regularly ask, “What’s the difference if I say I love you to a chatbot or a human?” All the world is preoccupied with what the chatbots of generative AI will do. Equally significant is what they are doing to us.