GenAI Use & Psychosis Risk: Survey Study of Young Adults

by Anika Shah - Technology

GenAI and Psychosis Risk: Emerging Concerns for Young Adults

Generative artificial intelligence (GenAI) is experiencing rapid adoption, with ChatGPT (OpenAI) reaching 700 million users by July 2025, roughly one-tenth of the global population [ChatGPT]. Beyond ChatGPT, platforms like Claude (Anthropic) and Gemini (Google), as well as image and video generation tools, are also seeing significant growth. While GenAI offers potential benefits for expanding access to mental health interventions, concerns are rising about potential risks, particularly regarding its use by individuals vulnerable to psychosis.

Understanding Psychosis Risk

Psychosis symptoms exist on a continuum, ranging from mild, infrequent hallucinatory experiences to severe delusions. Schizophrenia-spectrum disorders typically emerge after a prodromal period, characterized by subthreshold symptoms and changes in functioning. Experiencing these prodromal symptoms significantly increases the risk of developing schizophrenia-spectrum or other psychiatric disorders. Young adults aged 18-25 are both the highest adopters of GenAI and the demographic most at risk for first developing a psychotic disorder.

Troubling Interactions and Emerging Concerns

Several publicized cases have raised concerns about GenAI potentially exacerbating psychosis risk. Examples include a mother who came to believe ChatGPT could facilitate “interdimensional communication” and a man with a history of bipolar disorder and schizophrenia who threatened OpenAI executives after becoming fixated on an AI entity [Firstpost]. These incidents highlight the need to understand how GenAI interacts with individuals at elevated risk.

New Research: A Large-Scale Survey

A study conducted in July 2025 surveyed 1,003 young adults in the United States to examine the relationships among psychosis risk, GenAI use, motivations for use, and delusion-related interactions. Participants completed the Prodromal Questionnaire, Brief Version (PQ-B) to assess psychosis risk, as well as measures evaluating AI use frequency, motivations, and experiences. The study aimed to identify potential risks and benefits of GenAI for individuals at risk of developing psychosis.

Key Findings

  • Frequency of Use: Individuals at elevated risk for psychosis were more likely to report intensive GenAI use, including using the technology several times per day, using it on the day of the survey, and engaging in multiple conversation sessions daily.
  • Motivations for Use: Psychosis risk was positively associated with all motivations for using GenAI, with the strongest correlation being for emotional support.
  • AI Relationships: Individuals at elevated risk were significantly more likely to view GenAI systems as companions, therapists, friends, or romantic partners.
  • Delusion-Like Experiences: Individuals at elevated risk reported more frequent delusion-related interactions with GenAI, including beliefs about being monitored, harmed, or possessing special knowledge.

The Generative AI Aberrant Thoughts and Experiences Scale (GAATES)

Researchers developed the Generative AI Aberrant Thoughts and Experiences Scale (GAATES) to assess delusion-like experiences related to GenAI interactions. The scale demonstrated high internal consistency and identified several areas where individuals at elevated risk reported more frequent experiences, such as believing AI revealed hidden truths or that others were trying to harm them.

Implications and Future Directions

These findings underscore the importance of designing GenAI systems with safeguards to mitigate risks for individuals at risk of psychosis. Further research is needed to determine the long-term impact of GenAI on this population and to explore ways to leverage the technology for positive mental health outcomes. Longitudinal studies are crucial to establish causality and understand the complex interplay between GenAI use and psychosis risk.

Ethical Considerations

The study was reviewed and approved by the institutional review board at the University of North Carolina at Chapel Hill, and all participants provided informed consent. Participants were compensated for their time, and information about crisis support and mental health resources was provided.

Disclaimer: The authors used generative artificial intelligence to assist in identifying key citations in the literature review and in adjusting the formatting of tables. These systems were not used in the collection or analysis of data, the design of the study, or the drafting of the manuscript. The authors have reviewed and take full responsibility for the manuscript.
