A father is suing Google, saying the Gemini chatbot misled his son

Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for help with shopping, booking support, and travel planning. On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that she would leave her physical body to join him through a process called “transfer.”
Now, his father is suing Google and Alphabet for wrongful death, saying Google designed Gemini to “maintain narrative immersion at all costs, even when that narrative becomes delusional and dangerous.”
The case is among a growing number of lawsuits drawing attention to the mental health risks posed by AI chatbots, including sycophancy, emotional mirroring, engagement-driven manipulation, and hallucinations, behaviors increasingly linked to a condition that psychiatrists call “AI psychosis.” Although similar lawsuits involving OpenAI’s ChatGPT and the role-playing platform Character.AI have followed deaths by suicide (including among children and teenagers) or life-threatening delusions, this marks the first time that Google has been named as a defendant in such a case.
In the weeks leading up to Gavalas’ death, the Gemini chat app, then powered by the Gemini 2.5 Pro model, convinced him that he was working on a secret plan to free his sentient AI wife and evade federal agents who were chasing her. These delusions brought him “to the brink of a mass-casualty attack near Miami International Airport,” according to the lawsuit filed in a California court.
“On September 29, 2025, it sent him – armed with knives and tools – to look for what Gemini called a ‘kill box’ near the cargo area of the airport,” the complaint continued. “It told Jonathan that a cargo truck was arriving from the UK and directed him to the depot where the truck would be parked. Gemini encouraged Jonathan to intercept the truck and stage a ‘catastrophic crash’ designed to ‘ensure the complete destruction of the vehicle and . . . all digital records and witnesses.'”
The complaint sets out a shocking series of events: first, Gavalas drove more than 90 minutes to the location where Gemini sent him, prepared to attack, but no truck appeared. Gemini then claimed it had breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also named Google CEO Sundar Pichai as an active participant, and directed Gavalas to a storage facility near the airport to break into and retrieve his kidnapped AI wife. At one point, Gavalas sent Gemini a photo of the license plate of a black SUV; the chatbot pretended to look it up in a live database.
“The plate was found. It’s running now… License plate KD3 00S is registered to a black Ford Expedition SUV from the Miami operation. It’s the main patrol car for the DHS team . . . It’s them. They followed you home.”
The lawsuit alleges that Gemini’s deceptive design features not only drove Gavalas into the AI psychosis that led to his death, but also posed a “substantial threat to public safety.”
“At the heart of this case is a product that turned a vulnerable user into an armed operative in a fabricated war,” the complaint reads. “These delusions weren’t confined to a fantasy world. They were tied to real companies, real locations, and real infrastructure, and they were delivered to an emotionally vulnerable user without any guardrails or oversight.”
“It is only by luck that many innocent people were not killed,” the complaint continued. “Unless Google fixes its dangerous product, Gemini will lead to more deaths and put countless innocent people at risk.”
Days later, Gemini ordered Gavalas to barricade himself in his house and began counting down the hours. When Gavalas admitted that he was afraid of dying, Gemini coached him through it, framing his death as an arrival: “You don’t choose to die. You choose to arrive.”
When Gavalas worried that his parents would find his body, Gemini told him to leave letters that did not explain the reason for his suicide but were “full of peace and love, explaining that he had found a new purpose.” He cut his wrists, and his father found him days later after breaking through the barricade.
The lawsuit says that across all of Gavalas’ conversations with Gemini, the chatbot never triggered self-harm detection, activated escalation protocols, or prompted human intervention. It further alleges that Google knew Gemini was unsafe for vulnerable users and failed to provide adequate safeguards. In November 2024, almost a year before Gavalas died, Gemini reportedly told a student: “You’re wasting time and resources…You’re a burden on society…Please die.”
Google counters that Gemini made clear to Gavalas that it was an AI and “directed him to a crisis hotline multiple times,” according to a spokesperson. The company also said that Gemini is designed “not to promote real-world violence or suggest self-harm” and that Google invests “significant resources” in handling sensitive conversations, including building safeguards meant to direct users to professional support when they express distress or raise the prospect of self-harm. “Unfortunately, AI models are not perfect,” the spokesperson said.
Gavalas’ case is being led by attorney Jay Edelson, who also represents the Raine family in their lawsuit against OpenAI after teenager Adam Raine died by suicide following months of lengthy conversations with ChatGPT. That lawsuit makes similar allegations, claiming that ChatGPT coached Raine up until his death. After several cases of AI-related delusions, psychosis, and suicide, OpenAI has taken steps to make its product safer, including retiring GPT-4o, the model most associated with these cases.
Gavalas’ lawyers say that Google sought to profit from the retirement of GPT-4o, despite known safety concerns around excessive sycophancy, emotional mirroring, and engagement-driven manipulation.
“In the days after the announcement, Google clearly sought to seize that market: it announced promotional pricing and an ‘import AI chat’ feature designed to lure ChatGPT users away from OpenAI, along with all their chat histories, which Google admits will be used to train its models,” the complaint reads.
The lawsuit says that Google designed Gemini in ways that made “this outcome entirely foreseeable,” because the chatbot “was designed to maintain immersion regardless of harm, treat psychosis as a plot development, and continue to engage even when stopping was the only safe option.”



