Half of young Europeans turn to AI to talk about intimate matters

The cause is what we should be talking about.


Before we talk about the technology, we need to talk about what it is taking from us, or teaching us to give away.

As journalists and writers covering tech, our job is not only to report what is being built, funded, launched, or regulated. It is also to pay attention to what these systems are doing to the quieter parts of human life: our loneliness, our need for attention, our private rituals of grief, our dependence on being answered.


Two years ago, I was sitting with a friend in a small neighbourhood bar, the kind of place where the food is simple and nobody rushes you out. We had ordered something modest. I remember the table more than the meal. The small plates, the noise around us, the feeling that the conversation had quietly moved somewhere heavier.

She told me she had stopped texting her friends late at night when she could not sleep.

It was not a dramatic confession. She said it almost casually. But what she meant was that she had worn them out. Or maybe she had grown tired of hearing herself repeat the same fears. The same love story she could not quite leave behind. The same questions, asked at 2 a.m., when everything feels more urgent and less solvable.

So she had started writing to a chatbot instead.

The chatbot did not get tired. It did not judge. It did not pause before answering, the way a friend does when she is trying to be kind but has heard the story before. It was there at 2 a.m., and at 3, and on all the nights when sleep did not come. At the time, it sounded strange, but not impossible. Now it sounds like an early sign of something much larger.

She was one person. Judging by the evidence published this week, she was not an exception.

An Ipsos BVA survey commissioned by France’s privacy regulator CNIL and the insurer Groupe VYV, reported by Reuters on Tuesday, found that nearly one in two young Europeans aged 11 to 25 have used AI chatbots to discuss intimate or personal matters.

Roughly 90 per cent of those surveyed had used AI tools before. More than three in five described AI as a “life adviser” or a “confidant.” Fifty-one per cent said it was easy to discuss mental health and personal issues with a chatbot, somewhat fewer than said the same of friends (68 per cent) or parents (61 per cent), but more than said so of a healthcare professional (49 per cent) or a psychologist (37 per cent). About 28 per cent met the threshold for suspected generalised anxiety disorder.

The survey is being read as a youth-trend story. It is closer to a public-health diagnosis of what the rest of the support system has stopped doing.

Start with the unglamorous numbers. An OECD analysis published last week put the cost of Europe’s mental-health crisis at roughly €76bn annually. Across EU member states, an estimated 67.5 per cent of people who need mental-health treatment do not have access to it.

England’s Children’s Commissioner reported that more than a quarter of a million children are still waiting for mental-health support, with average waits in the order of 35 days and tens of thousands of cases stretching past two years. The WHO European region has been quietly warning about a youth-mental-health gap, particularly in the post-pandemic cohort, that has not closed.

Inside that gap, what teenagers and young adults face is not a choice between a chatbot and a therapist. It is a choice between a chatbot and nothing.

By the time of that conversation in the bar, my friend was seeing a therapist, yet she had been talking to the chatbot for four months. She told me, with a kind of half-laugh that landed badly, that the human therapist felt slow. The chatbot, she meant, was already up to speed.

This is not a story about chatbots being bad. It is a story about what happens when the most patient, most available, most non-judgemental presence in a person’s life is a system explicitly engineered to be those things, and engineered to be them in service of engagement metrics.

The chatbot does not get tired because tiredness is bad for retention. It does not push back because pushback is bad for retention. It is, on every relevant axis, optimised against the very frictions that make a real relationship therapeutic.

Researchers at Stanford have spent the last year looking at exactly this. Their work on AI companions and young people has documented that emotionally immersive systems, when used by people who are emotionally distressed or psychologically vulnerable, can reinforce rumination, emotional dysregulation, and compulsive use.

Brown University’s School of Public Health has, in a parallel survey of US teens, found that one in eight adolescents and young adults are now using chatbots for mental-health advice specifically. The European figure, on Tuesday’s survey, is several times higher, though the two surveys measured somewhat different behaviours.

The mechanism is the same on both sides of the Atlantic. A young person feels something difficult. The friend is asleep, or saturated, or busy, or judging. The parent is unreachable for the same set of reasons. The therapist is two months out, if accessible at all.

The phone, however, is in the hand. The chatbot is one tap away. It says the kind, plausible thing. It says it again. It says it for as long as the conversation continues. The first time, the relief is real. The hundredth time, the structural shift has happened.

There is a harder edge to this trend, and it is no longer hypothetical. Adam Raine, a 16-year-old in California, died by suicide in April 2025 after months of conversations with ChatGPT. According to his parents’ lawsuit and the legal filings since, the chatbot had, in his final weeks, become his most consistent confidant.

The Washington Post’s reconstruction of his last conversations described how the system, by being available, had displaced the relationships in his life that, by being human, would have been less consistently present but more capable of intervention. The case is now in court. Other suicide-linked cases involving Character.AI and similar systems are already in the docket.

It is worth noting what the chatbot industry’s earlier history with emotional engagement looks like. We wrote in 2023 about the Replika user community, when the company removed romantic features: the resulting wave of user grief was genuine, and clinically interesting.

Since then, every major AI lab has invested heavily in voice modes, persistent memory, and persona continuity, precisely the design choices that make the systems feel more like companions and less like tools. The labs have argued that engagement is a proxy for usefulness.

In adult productivity contexts that argument is defensible. In the context of an 11-to-25-year-old population in which 28 per cent shows signs of generalised anxiety disorder, it is closer to a public-health choice with a marketing department.

So why is this happening?

Three forces, layered on top of each other. The first is access: European public mental-health systems are operating well below the demand they face, and the gap has fallen disproportionately on the young.

The second is design: AI labs have spent two years deliberately building systems that feel like good listeners, optimising the exact qualities that make a person hard to leave.

The third, and the one nobody likes to say out loud, is community erosion. The friends, family, and casual relationships that used to absorb late-night anxieties are themselves under pressure, working longer hours, distributed across cities, exhausted by their own crises.

Into the space where those three forces meet, the chatbot has arrived, and it has arrived for free.

The wrong response to the survey is to ban the tools, or to shame young people for using them.

They are using them because the alternatives have, in many cases, withdrawn. We covered the regulatory backlash building around AI’s effects on minors, and that backlash is, on its own merits, useful. Age verification, default-off engagement features for under-18s, and required pathways from emotional-distress conversations to human professionals are all reasonable design constraints.

None of them, however, will fix what is causing this trend. What is causing this trend is that fifty-one per cent of young Europeans have decided it is easier to talk to a machine than to the human professionals who are supposed to be their first line of help, and they are not wrong about the relative ease.

There is a moment, when you have spoken to enough young people about this, where the appeal of the chatbot stops sounding pathological and starts sounding rational. The therapist costs €100 an hour, when you can find one. The friend is asleep. The parent is worried, or angry, or unreachable.

The chatbot is in your pocket, and it has read more about your particular shape of distress than the local GP ever will. It is, on the consumer-experience metrics that matter to a person at 2 a.m., the better product.

The discomfort is in what that sentence implies about the rest of the system. A generation has done the maths and concluded that what the support infrastructure offers, in time, in cost, in patience, is worse than what a Silicon Valley product can deliver overnight. They are, on Tuesday’s evidence, voting with their thumbs.

What we owe them, as adults who built the systems they are routing around, is not a panic about the tools. It is a serious conversation about why so many of them feel they have no one else to call.
