The Invisible Patterns in a Child’s Medical History
Every well-child visit generates a trail of data: developmental milestones, behavioral notes, prescriptions, and even the frequency of ear infections. To human clinicians, these entries appear as isolated events. But to an AI model trained on the health records of over 140,000 children, they form a constellation of risk factors for ADHD that may emerge before symptoms like impulsivity or inattention become apparent.
Elliot Hill, a data scientist at Duke University School of Medicine and lead author of the study published in Nature Mental Health, describes the electronic health record (EHR) as an incredibly rich source of information. The challenge, he explains, was determining whether patterns in that data could help identify children who might later receive an ADHD diagnosis, often before such diagnoses are typically made.
The model doesn’t rely on a single red flag. Instead, it identifies combinations of developmental and behavioral markers—such as speech delays, frequent injuries, or sleep disturbances—that, when viewed together, correlate with future ADHD diagnoses. Research suggests these patterns may become detectable in early childhood, offering an opportunity for clinicians to monitor children more closely or consider earlier referrals to specialists.
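To make the "combinations, not single red flags" idea concrete, here is a toy sketch of how a logistic-style risk score responds more strongly to co-occurring markers than to any one marker alone. The marker names, weights, and baseline are invented for illustration and are not taken from the Duke model.

```python
# Toy illustration only: a hand-weighted logistic risk score over
# hypothetical EHR-derived markers. Weights and features are invented
# and do not reflect the actual Duke model.
import math

WEIGHTS = {
    "speech_delay": 0.9,
    "frequent_injuries": 0.6,
    "sleep_disturbance": 0.7,
}
BIAS = -2.5  # assumed low baseline log-odds

def risk_score(markers: dict) -> float:
    """Combine present markers into a probability-like score."""
    logit = BIAS + sum(WEIGHTS[m] for m, present in markers.items() if present)
    return 1 / (1 + math.exp(-logit))

# One marker alone shifts the score only modestly...
single = risk_score({"speech_delay": True})
# ...while the combination of markers shifts it far more.
combined = risk_score({"speech_delay": True,
                       "frequent_injuries": True,
                       "sleep_disturbance": True})
```

The point of the sketch is qualitative: in such a model no single input dominates, and it is the accumulation of several weak signals that pushes the estimate upward.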
What makes this approach distinct is its reliance on data already collected during routine care. No additional tests or screenings are required. The AI sifts through years of medical history, from birth through early childhood, to generate a risk estimate. As Matthew Engelhard, senior author of the study and a physician-scientist at Duke, emphasizes, the tool is not intended to replace clinical judgment. Rather, it serves as a resource to help clinicians prioritize their attention, ensuring that children who may need support are identified sooner.
Accuracy Without Bias—But Not a Diagnosis
The model demonstrates high accuracy in estimating ADHD risk among children aged five and older, with consistent performance across sex, race, ethnicity, and insurance status. This consistency is particularly important, as disparities in ADHD diagnosis and treatment have been well-documented in pediatric care. Research indicates that children from certain demographic groups may experience delays in diagnosis or access to care, though the specific reasons for these disparities vary.
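Checking for the kind of consistency described above typically means computing the same performance metric separately for each demographic subgroup and comparing the spread. The following minimal sketch, using synthetic data and accuracy as a stand-in metric, shows the shape of that audit; it is not the evaluation procedure from the study itself.

```python
# Illustrative sketch of a subgroup performance audit.
# Records and subgroup labels here are synthetic placeholders.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
per_group = subgroup_accuracy(records)
# A small max-min gap across groups is the consistency being claimed.
gap = max(per_group.values()) - min(per_group.values())
```

In practice such audits use discrimination metrics like AUROC rather than raw accuracy, but the group-by-group comparison is the same.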
Yet the tool’s purpose remains narrowly defined. It does not diagnose ADHD; instead, it identifies children who may benefit from additional monitoring or referral to a specialist. Naomi Davis, an associate professor of psychiatry and behavioral sciences at Duke and a study co-author, highlights the importance of timely intervention. She notes that children with ADHD often face challenges when their needs are not recognized, and connecting families with appropriate supports can help them achieve better outcomes.
The distinction between prediction and diagnosis is critical. ADHD is a complex neurodevelopmental disorder, and its symptoms—such as difficulty focusing or impulsivity—can overlap with other conditions, including anxiety or learning disabilities. A high-risk score from the AI model does not confirm ADHD; it signals the need for a comprehensive evaluation by a clinician, who can assess the child’s symptoms in context and determine the most appropriate next steps.
This limitation also addresses concerns about overdiagnosis. If the tool were used to label children prematurely, it could lead to unnecessary interventions or stigma. By positioning itself as a clinical support tool, the model aims to balance early detection with careful clinical evaluation. Researchers emphasize that further studies are needed to assess the tool’s real-world impact on diagnosis rates and patient outcomes before it can be widely adopted.
The Promise—and Perils—of Early Intervention
Research indicates that ADHD diagnoses often occur after symptoms have been present for some time. For many children, this delay can result in academic struggles, challenges in peer relationships, and behavioral difficulties. Evidence suggests that interventions—such as behavioral therapy, classroom accommodations, or, in some cases, medication—can help address these challenges when implemented early.
Timely intervention, however, depends on early identification. Traditional diagnostic pathways rely on parent and teacher reports, which can be subjective or delayed by practical barriers, and pediatricians typically spend only a few minutes with a child during a well-child visit, leaving little opportunity to explore subtle developmental concerns. The AI model, by contrast, analyzes years of medical history in seconds, providing clinicians with additional data to inform their decisions.

The potential benefits of such tools extend to a broader population. ADHD is among the most common neurodevelopmental conditions of childhood, yet many children—particularly those whose symptoms do not align with stereotypical presentations—may go unrecognized. An AI tool that identifies risk based on objective data could help close these gaps by flagging children who might otherwise be overlooked.
Still, the transition from research to clinical practice presents challenges. Clinicians may be cautious about adopting AI-driven tools without clear guidelines on how to interpret risk scores. Parents, too, may have questions about what a “high-risk” designation means, especially since it does not equate to a diagnosis. Additionally, while the model’s performance is consistent across demographics, its training data comes from a single health system, which may limit its generalizability to other populations.
Engelhard acknowledges these hurdles but remains optimistic. The goal, he says, is not to replace clinical judgment but to provide clinicians with an additional perspective—one that can identify patterns they might miss during a brief office visit.
What Comes Next: From Research to the Exam Room
The Duke study represents an early step in exploring how AI can support ADHD risk assessment, rather than a ready-to-deploy clinical tool. Before such models can become standard practice, several key steps must be taken. Regulatory approval, likely through the FDA, would be necessary to ensure the tool’s safety and effectiveness. Health systems would need to integrate the model into their EHR platforms in a way that complements existing workflows. Clinicians would also require training to interpret risk scores and communicate them effectively to families.
Perhaps the most significant question is how pediatricians will use this information in practice. Will a high-risk score lead to earlier referrals to specialists, or could it contribute to overdiagnosis? Will insurers cover interventions based on AI predictions, or will families face new barriers to care? These questions remain unanswered, but the Duke team is already planning follow-up studies to address them.
For now, the research offers a glimpse into the potential of AI in pediatric care—not as a replacement for clinical expertise, but as a tool to enhance early detection. The stakes are high: every year a child spends without appropriate support is a year in which challenges may grow more difficult to address. As Hill explains, the aim is not to predict the future but to ensure that children who may need help are identified and connected with resources as early as possible.