The Promise and Limits of AI Clinical Trial Recruitment

In the past year, Cleveland Clinic has made headlines for its strategic adoption of artificial intelligence across multiple areas of care and operations, from AI-powered structural heart tools with DASI Simulations to AI-driven revenue cycle management via AKASA. Another compelling initiative, and the one most relevant to today’s conversation, is its partnership with Dyania Health, which explores how AI might ease one of the most persistent challenges in research: clinical trial recruitment.
In a pilot program targeting a Phase 3 study in transthyretin amyloid cardiomyopathy (ATTR-CM) — a rare and potentially life-threatening heart condition — Cleveland Clinic used AI to screen patients from its own data systems. In one week, the AI platform scanned 1.2 million patient records, reviewed 1,476 in depth, and accurately identified 30 eligible participants. By comparison, conventional recruitment methods had identified just 14 eligible patients over the course of 90 days.
This pilot served as more than a proof of concept — it was a glimpse into what clinical research could look like with the thoughtful application of technology. It also raised a larger question: What role should AI play in the future of clinical trial recruitment?
Emerging Evidence for AI-Powered Clinical Research Recruitment
While Cleveland Clinic’s experience made headlines, it’s part of a broader, growing body of research that suggests AI — when used responsibly — can accelerate the often-slow process of matching the right patient to the right trial.
At the National Institutes of Health, researchers developed an AI-driven tool known as TrialGPT to streamline patient-trial matching. This system is designed to perform large-scale eligibility filtering, predict suitability based on patient data, and then score trials accordingly. In testing, TrialGPT matched patients with 87.3% accuracy, and its trial-level rankings strongly aligned with those made by clinicians. Importantly, the tool reduced screening time by 42%.
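The published description is high-level, but the filter-then-score-then-rank shape of such a pipeline is easy to picture. The sketch below is purely illustrative and is not TrialGPT’s code; the patient fields, trial criteria, and scoring rule are simplified assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    exclusions: list = field(default_factory=list)  # any match disqualifies outright
    inclusions: list = field(default_factory=list)  # predicates over the patient record

def match_trials(patient: dict, trials: list[Trial]) -> list[tuple[str, float]]:
    """Filter out trials the patient is excluded from, then rank the rest by the
    fraction of inclusion criteria the structured record satisfies."""
    ranked = []
    for trial in trials:
        if any(rule(patient) for rule in trial.exclusions):
            continue  # stage 1: large-scale eligibility filtering
        score = sum(rule(patient) for rule in trial.inclusions) / len(trial.inclusions)
        ranked.append((trial.name, score))  # stage 2: per-criterion suitability
    return sorted(ranked, key=lambda t: t[1], reverse=True)  # stage 3: trial-level ranking

# Hypothetical patient record and trial definition, for illustration only.
patient = {"age": 71, "diagnoses": {"ATTR-CM"}, "nyha_class": 2, "on_dialysis": False}

attr_cm_trial = Trial(
    name="ATTR-CM Phase 3",
    exclusions=[lambda p: p["on_dialysis"]],
    inclusions=[
        lambda p: "ATTR-CM" in p["diagnoses"],
        lambda p: p["nyha_class"] in (1, 2, 3),
        lambda p: 18 <= p["age"] <= 90,
    ],
)

print(match_trials(patient, [attr_cm_trial]))  # [('ATTR-CM Phase 3', 1.0)]
```

Real systems replace these hand-written predicates with models that read each eligibility criterion and judge it against the full patient record, but the filter-score-rank structure stays the same.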
In another study led by Mass General Brigham, researchers evaluated RECTIFIER (RAG-Enabled Clinical Trial Infrastructure for Inclusion/Exclusion Review), an AI tool that parsed electronic health records and clinician notes to identify potential candidates for a heart failure trial, then passed those candidates to human reviewers for final assessment. The AI-assisted approach screened 458 patients, nearly twice as many as were reviewed manually. More patients were enrolled in the AI-supported group — 35 versus 19 — and the time required for enrollment was cut almost in half.
Artificial Intelligence as a Valuable Tool, Not a Replacement
For all the promise artificial intelligence holds, it’s important to view it for what it is: a powerful aid, not a replacement for human expertise.
One of the ongoing challenges with AI clinical trial recruitment is the quality, structure, and context of patient data. AI systems typically rely on structured fields in medical records (coded diagnoses, lab values, medications) to screen for eligibility. But much of what determines trial fit lives in the gray areas of clinical practice: symptom descriptions, photographs, or patient-reported experiences that aren’t consistently recorded or coded.
Consider a dermatology trial investigating a new topical therapy for mild to moderate plaque psoriasis. Eligibility might require evidence of recent flare-ups affecting a specific percentage of body surface area, but not so severe as to require systemic therapy. A patient could very well meet those criteria, but if their diagnosis is simply coded as “psoriasis” with no details about severity or distribution, an AI tool scanning the EHR might pass them over.
Now imagine that same patient had visited a dermatologist who noted “occasional flare-ups on elbows and knees, moderate itching,” in the progress note, but didn’t code the visit beyond a general diagnostic tag. Without structured data reflecting those nuances, the AI misses the signal. A human coordinator, on the other hand, might pick up on that unstructured data during chart review or a pre-screening call, recognize the relevance, and initiate contact for a relevant clinical trial.
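A toy example makes that gap concrete. In the sketch below (hypothetical field names, codes, and thresholds), a screen over coded fields alone passes the patient over, while even a crude keyword scan of the progress note flags the chart for a coordinator to review.

```python
# Hypothetical EHR extract: the severity detail lives only in the free-text note.
patient = {
    "diagnosis_codes": ["L40.9"],  # coded simply as "psoriasis, unspecified"
    "bsa_affected_pct": None,      # body surface area was never captured as data
    "note": ("Occasional flare-ups on elbows and knees, moderate itching. "
             "No systemic therapy required."),
}

def structured_screen(p: dict) -> bool:
    """Eligibility check over coded fields only: needs a psoriasis code AND a
    recorded body-surface-area value in an assumed mild-to-moderate range."""
    has_psoriasis_code = any(code.startswith("L40") for code in p["diagnosis_codes"])
    bsa = p["bsa_affected_pct"]
    return has_psoriasis_code and bsa is not None and 3 <= bsa <= 10

def flag_for_human_review(p: dict) -> bool:
    """Crude free-text scan: surface charts whose notes hint at eligibility so a
    coordinator can confirm during chart review or a pre-screening call."""
    keywords = ("flare-up", "plaque", "elbows", "knees", "itching")
    return any(k in p["note"].lower() for k in keywords)

print(structured_screen(patient))      # False: the automated filter passes the patient over
print(flag_for_human_review(patient))  # True: a human would take a closer look
```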
These limitations don’t diminish the usefulness of AI; they highlight its current dependency on structured, complete, and accessible real-world data while reaffirming the need for human oversight to identify potential participants.
Combining AI-Driven Tools With Proven Human-Centered Strategies
There’s no denying the promise of AI in clinical trial recruitment. From faster screening to improved match rates, these technologies are reshaping how we think about trial efficiency and eligibility. But at Remington-Davis, we know that successful clinical trial recruitment begins and ends with people.
Our patient database of more than 19,000 volunteers gives sponsors and CROs a strong head start. But we're also committed to connecting with the broader community: meeting people where they are, educating them about the value of research participation, and lowering the barriers to patient access. Whether through targeted outreach, physician engagement, or referrals from satisfied participants, our recruitment engine is both wide-reaching and deeply human.
Equally important is what happens after enrollment. AI can identify a match, but it can’t build trust. It can’t offer a comfortable space to wait during a long PK visit, provide transportation for a participant facing travel barriers, or stay late to accommodate a working parent’s schedule. That’s the kind of ongoing support our team offers — day in and day out — and it’s a big reason why RDI boasts a 97% patient retention rate.
Enhancing patient retention, after all, is a critical driver of clinical study success. High dropout rates compromise clinical trial data integrity, inflate costs, and delay medical research timelines. But when patients feel seen, supported, and respected, they stay, and that helps therapies reach the market faster.
So yes, we believe in the power of technology, but we also believe in the power of people. That's why we bring both together to help sponsors accelerate enrollment timelines and drug development.
If you’re exploring how to strengthen your patient recruitment strategy, along with the support that continues well beyond enrollment, let’s talk.
Frequently Asked Questions About AI & Clinical Research
How does AI impact trial diversity and health equity?
AI has the potential to help — or hinder — trial diversity. If trained on biased datasets, AI can unintentionally exclude underrepresented groups. However, when designed with equity in mind, AI tools can identify gaps in recruitment, flag disparities in access, and support targeted outreach to improve demographic representation in trials.
What role can AI play in refining inclusion and exclusion criteria?
AI can help sponsors and CROs understand how rigid eligibility criteria may impact recruitment feasibility. By analyzing large datasets, AI tools can model how many real-world patients would qualify for a given study. Some systems even suggest ways to adjust criteria without compromising trial integrity, potentially improving diversity and speed to enrollment.
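As a rough illustration of that kind of feasibility modeling, the sketch below uses a synthetic cohort and made-up thresholds to count how many patients would qualify under the original criteria versus a slightly relaxed version.

```python
import random

random.seed(0)

# Synthetic cohort standing in for real-world EHR data.
cohort = [
    {"age": random.randint(30, 90),
     "ejection_fraction": random.randint(20, 65),
     "egfr": random.randint(15, 100)}
    for _ in range(10_000)
]

def qualifies(p: dict, max_age: int, min_egfr: int) -> bool:
    """Simplified heart-failure eligibility: reduced ejection fraction plus
    configurable age and kidney-function cutoffs."""
    return p["ejection_fraction"] <= 40 and p["age"] <= max_age and p["egfr"] >= min_egfr

original = sum(qualifies(p, max_age=75, min_egfr=45) for p in cohort)
relaxed = sum(qualifies(p, max_age=80, min_egfr=30) for p in cohort)

print(f"Eligible under original criteria: {original} of {len(cohort)}")
print(f"Eligible under relaxed criteria:  {relaxed} of {len(cohort)}")
```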
Can natural language processing (NLP) help AI better understand unstructured patient data?
Natural language processing (NLP) is one of the most promising areas of AI development in clinical research, particularly for addressing a major limitation: the inability to fully process unstructured data. Today, AI systems are most effective when pulling from structured EHR fields like diagnosis codes or lab values. But critical eligibility details — such as symptom frequency, disease severity, or physician observations — often live in free-text notes. NLP helps close that gap by reading narrative text much as a coordinator would, extracting details like severity, symptom frequency, or prior treatments so they can be screened alongside coded data.
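As a deliberately simple illustration, the sketch below uses a regular-expression pass as a stand-in for a real NLP model, pulling severity, frequency, and body-surface-area hints out of a hypothetical progress note so they can sit alongside structured fields.

```python
import re

note = ("Patient reports occasional flare-ups on elbows and knees over the past "
        "3 months, moderate itching, approximately 5% body surface area involved. "
        "No systemic therapy to date.")

def extract_eligibility_hints(text: str) -> dict:
    """Toy stand-in for an NLP pipeline: pull severity, frequency, and body-surface-
    area mentions out of narrative text so they can be screened like structured data."""
    bsa = re.search(r"(\d+(?:\.\d+)?)\s*%\s*body surface area", text, re.IGNORECASE)
    severity = re.search(r"\b(mild|moderate|severe)\b", text, re.IGNORECASE)
    frequency = re.search(r"\b(occasional|frequent|daily|weekly)\b", text, re.IGNORECASE)
    return {
        "bsa_pct": float(bsa.group(1)) if bsa else None,
        "severity": severity.group(1).lower() if severity else None,
        "flare_frequency": frequency.group(1).lower() if frequency else None,
        "systemic_therapy": "no systemic therapy" not in text.lower(),
    }

print(extract_eligibility_hints(note))
# {'bsa_pct': 5.0, 'severity': 'moderate', 'flare_frequency': 'occasional',
#  'systemic_therapy': False}
```

Production systems use trained clinical language models rather than regular expressions, but the goal is the same: turn narrative notes into screenable data points.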
Can artificial intelligence (AI) improve more than just patient recruitment in clinical trials?
Absolutely. While recruitment is a common starting point, AI is increasingly being applied in areas like trial protocol optimization, safety signal detection, adverse event prediction, and even endpoint analysis. AI tools can help researchers simulate trial outcomes, refine dosing schedules, and monitor patient adherence in real time.
Can AI help accelerate drug development timelines?
Yes. By streamlining recruitment, automating chart reviews, and improving patient matching, AI can help shave months off study timelines. In later phases of clinical development, AI may also support faster data cleaning, safety review, and signal detection — all of which are essential for accelerating time-to-market for new therapies.
