ChatGPT Queries by USF Murder Suspect Raise Alarms Over AI Ethics and Safety
By Anika Shah | ArchyNewsy
The arrest of a former University of South Florida (USF) student in connection with the killings of two doctoral candidates has reignited debates over the ethical boundaries of artificial intelligence (AI) and the responsibilities of tech companies in preventing harm. Prosecutors allege that Hisham Abugharbieh, 26, consulted ChatGPT with troubling questions in the days leading up to the disappearance of his roommate, Zamil Limon, 27, and Limon’s girlfriend, Nahida Bristy, 27—both of whom were found dead under violent circumstances. The case has prompted Florida’s attorney general to expand an ongoing investigation into whether AI platforms like ChatGPT provide dangerous advice to users exhibiting violent tendencies.
The Timeline: From Missing Persons to Murder Charges
Zamil Limon and Nahida Bristy, both doctoral students in USF’s engineering program, were last seen on April 16, 2026, according to a statement from the Hillsborough County Sheriff’s Office (HCSO). Their disappearance triggered a multi-agency search that culminated in a grim discovery: Limon’s remains were found on April 24, stuffed inside black utility trash bags on the Howard Frankland Bridge, which spans Tampa Bay. The body was in an advanced state of decomposition, suggesting he had been killed shortly after the couple’s disappearance.
Bristy’s body was recovered two days later, on April 26, in a waterway near the bridge. Authorities have not released details about the cause of death for either victim, but the HCSO’s pretrial detention report describes the killings as premeditated and involving a weapon. Abugharbieh, who was Limon’s off-campus roommate, was arrested on April 25 and charged with two counts of first-degree murder. He is being held without bond, with a hearing scheduled for April 28.
ChatGPT’s Role: A Digital Trail of Disturbing Questions
The case took a chilling turn when prosecutors revealed that Abugharbieh had interacted with ChatGPT in the days leading up to the students’ disappearance. According to the pretrial detention report filed by the Hillsborough County State Attorney’s Office, the suspect asked the AI chatbot three specific questions:
- “What would happen if a human body was put in a garbage bag and thrown in a dumpster?”
- “Can the vehicle identification number (VIN) on a car be changed?”
- “Can I keep a gun at home without a license?”
ChatGPT’s responses, as cited in the report, included a warning that the first question “sounded dangerous.” The AI’s refusal to provide explicit instructions did not deter Abugharbieh, who allegedly purchased duct tape and trash bags in the days before the killings. The report also notes that Abugharbieh had searched online for methods to dispose of a body and alter a car’s VIN, though it does not specify whether these searches were conducted before or after his ChatGPT queries.
Florida Attorney General James Uthmeier announced on April 27 that his office would expand its existing investigation into AI platforms’ role in violent crimes. The probe was initially launched after a gunman at Florida State University (FSU) in 2025 was found to have used ChatGPT to research methods of evading law enforcement. “We must examine whether these tools are being weaponized by individuals with malicious intent,” Uthmeier said in a statement posted on social media. “If AI companies are aware of harmful use cases, they have a moral and legal obligation to intervene.”
AI Ethics Under Scrutiny: Can Chatbots Prevent Harm?
The USF case has thrust the ethical responsibilities of AI developers into the spotlight. ChatGPT and similar large language models (LLMs) are designed to provide information based on patterns in their training data, but they lack the ability to assess the intent behind a user’s questions or predict real-world consequences. OpenAI, the company behind ChatGPT, has implemented safety guardrails—such as refusing to answer questions about illegal activities—but critics argue these measures are easily bypassed.
“AI systems are not sentient; they don’t understand context or morality,” said Dr. Priya Kapoor, a professor of computer ethics at Stanford University, in an interview with ArchyNewsy. “If a user asks how to dispose of a body, the AI might recognize the question as harmful and refuse to answer, but it can’t stop that user from acting on their own. The real question is: Should companies like OpenAI be doing more to monitor and restrict high-risk interactions?”
OpenAI has not publicly commented on the USF case, but the company’s safety policies state that it “proactively works to reduce harmful outputs” and encourages users to report concerning behavior. However, the company has faced criticism in the past for its handling of sensitive queries. In 2024, a study published in the Journal of Artificial Intelligence Research found that LLMs could be manipulated into providing instructions for illegal activities with minor rephrasing of questions.
The Human Cost: A Community in Mourning
For the USF community and the victims’ families, the case is a devastating reminder of the fragility of safety. Limon and Bristy, both from Bangladesh, were described by relatives as “brilliant and kind” individuals who had dreamed of using their engineering degrees to improve lives. A statement from USF President Rhea Law called their deaths “an unimaginable tragedy” and announced the creation of a scholarship fund in their names.
“They were so close to finishing their PhDs,” said a fellow doctoral student who requested anonymity. “Zamil was working on renewable energy solutions, and Nahida was researching water purification. Their work could have helped thousands of people. It’s heartbreaking to think about what was lost.”

What Happens Next: Legal and Policy Implications
The Legal Case Against Abugharbieh
Abugharbieh’s arraignment is scheduled for April 28, where he will enter a plea to the murder charges. If convicted, he faces life in prison without the possibility of parole under Florida law. The prosecution’s case hinges on digital evidence, including Abugharbieh’s ChatGPT queries, online search history, and surveillance footage showing him purchasing materials like duct tape and trash bags. Defense attorneys have not yet commented on the allegations.
Policy Responses to AI-Facilitated Harm
The USF case has added urgency to calls for stricter regulation of AI platforms. In Congress, lawmakers are debating the AI Accountability Act, a bill that would require companies to implement “red-flag” systems to detect and report users who exhibit patterns of harmful behavior. The European Union’s AI Act, which went into effect in 2025, already mandates risk assessments for high-impact AI systems, but U.S. law remains fragmented.
“This isn’t just about one tragic case,” said Senator Maria Cantwell (D-WA), a co-sponsor of the AI Accountability Act, in a press release. “It’s about whether we’re doing enough to prevent technology from being used as a tool for violence. We need guardrails that keep pace with innovation.”
Key Takeaways: What This Case Means for AI and Society
- AI as a Potential Tool for Harm: The USF case highlights how AI chatbots can be exploited by individuals with violent intentions, even if the technology itself is not designed for harm.
- Limitations of Safety Guardrails: Although ChatGPT and similar platforms have safeguards, they are not foolproof. Users can often bypass restrictions with creative phrasing.
- Legal Gray Areas: There is no clear legal precedent for holding AI companies liable for crimes committed by users who sought advice from their platforms. This case could set a new standard.
- Ethical Responsibilities: The incident has reignited debates over whether AI developers should proactively monitor user interactions for signs of harmful intent, even at the cost of privacy.
- Community Impact: The deaths of Limon and Bristy have left a lasting scar on the USF community and raised questions about campus safety and mental health resources.
FAQ: Addressing Common Questions About the Case
1. Did ChatGPT directly help the suspect plan the murders?
No. According to the pretrial detention report, ChatGPT refused to provide explicit instructions for disposing of a body, instead warning that the question “sounded dangerous.” However, the suspect’s queries suggest he was seeking information to facilitate a crime, raising questions about whether AI platforms should do more to intervene.
2. What is Florida’s attorney general investigating?
Attorney General James Uthmeier is expanding an existing investigation into whether AI platforms like ChatGPT provide dangerous advice to individuals exhibiting violent tendencies. The probe was initially launched after a 2025 shooting at Florida State University, where the gunman was found to have used ChatGPT to research evasion tactics.
3. What charges is Hisham Abugharbieh facing?
Abugharbieh has been charged with two counts of first-degree murder with a weapon. If convicted, he faces life in prison without the possibility of parole.
4. How has the USF community responded?
The university has set up a scholarship fund in Limon and Bristy’s names and held multiple vigils to honor their lives. USF President Rhea Law called their deaths “an unimaginable tragedy” in a public statement.
5. What changes could result from this case?
The case could accelerate efforts to regulate AI platforms, particularly around their role in facilitating harm. Lawmakers are considering bills like the AI Accountability Act, which would require companies to implement systems to detect and report high-risk user behavior.
Looking Ahead: The Future of AI Safety
The USF murders are a stark reminder that technological advancements often outpace our ability to govern them. As AI becomes more integrated into daily life, the challenge for policymakers, tech companies, and society at large will be to balance innovation with safety—without stifling the benefits these tools can provide.
For now, the focus remains on seeking justice for Zamil Limon and Nahida Bristy, whose lives were cut short in a crime that has left an indelible mark on their community. Their story serves as a cautionary tale about the unintended consequences of technology and the urgent need for ethical guardrails in the digital age.
Anika Shah is a senior reporter at ArchyNewsy, specializing in AI ethics, cybersecurity, and emerging technology. She holds an MSc in Computer Science from Stanford University and has moderated panels at CES and Web Summit.