Clinical trial results are often misunderstood, especially when it comes to what “failure” really means. Negative findings do not indicate that a study has failed; they contribute valuable knowledge that helps guide future research. True trial failure usually occurs when a study is stopped early, before meaningful conclusions can be drawn. These premature terminations lead to lost data, wasted resources, and missed opportunities to improve patient care. Clarifying what counts as a failed trial is therefore essential for improving how clinical research is interpreted.
To improve how the industry understands and predicts these events, Wemedoo researchers Aleksa Jovanovic, Stojan Gavric, and Nikola Cihoric, in collaboration with Fabio Dennstädt from Bern University Hospital, recently published a comprehensive study: "Approaches in Analyzing Predictors of Trial Failure: A Scoping Review and Meta‑Epidemiological Study" in BMC Medical Research Methodology.
Conducting this review required screening a massive body of literature: nearly 18,000 records. To do this efficiently, the team employed a novel AI-assisted methodology using Large Language Models (LLMs).
The team implemented a two-step screening approach utilizing the Claude Sonnet model. The LLM first filtered out clearly irrelevant titles and then categorized abstracts by relevance, leaving final validation to human reviewers.
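For readers who want a sense of what such a workflow can look like in code, the sketch below outlines a two-step title-then-abstract screen. It is a minimal illustration under assumptions, not the authors' published pipeline: the prompts, the screen_record helper, and the model identifier are placeholders, and it relies on the standard Anthropic Python SDK.

```python
# Minimal sketch of a two-step LLM screening pass.
# Illustrative only: prompts, labels, and the model ID are assumptions,
# not the pipeline published in the paper.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # placeholder Sonnet-class model identifier

def ask_model(prompt: str) -> str:
    """Send one screening prompt and return the model's short text reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=10,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text.strip().upper()

def screen_record(title: str, abstract: str) -> str:
    """Step 1: drop clearly irrelevant titles. Step 2: grade abstract relevance.
    Anything not excluded at step 1 is passed on for human validation."""
    step1 = ask_model(
        "Does this title describe a study of clinical trial failure or early "
        f"termination? Answer INCLUDE or EXCLUDE only.\n\nTitle: {title}"
    )
    if step1 == "EXCLUDE":
        return "excluded_by_title"

    step2 = ask_model(
        "Rate the relevance of this abstract to predictors of clinical trial "
        f"failure as HIGH, MEDIUM, or LOW. Answer with one word.\n\nAbstract: {abstract}"
    )
    return f"abstract_{step2.lower()}"  # final inclusion decision stays with human reviewers
```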
This AI-assisted workflow achieved ~100% sensitivity (no relevant studies were missed in the validated tiers) and ~84% specificity, demonstrating how Wemedoo is leveraging cutting-edge technology to accelerate complex evidence synthesis.
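As a quick reminder of what those two metrics capture, the short example below computes them from screening counts. The counts are invented placeholders chosen only to land near the reported figures; they are not data from the study.

```python
# Illustrative only: these counts are invented, not figures from the review.
relevant_kept      = 120    # true positives: relevant records the LLM retained
relevant_missed    = 0      # false negatives: relevant records the LLM dropped
irrelevant_dropped = 15000  # true negatives: irrelevant records correctly filtered out
irrelevant_kept    = 2800   # false positives: irrelevant records still sent to reviewers

sensitivity = relevant_kept / (relevant_kept + relevant_missed)            # 1.00
specificity = irrelevant_dropped / (irrelevant_dropped + irrelevant_kept)  # ~0.84
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```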
Beyond the methods used to conduct the review, the study also examined the application of machine learning (ML) techniques in trial failure analysis. While the majority of research in this field has relied on frequentist statistics (such as logistic regression), Wemedoo’s review identified and analyzed a distinct subset of studies employing ML algorithms.
The research highlighted that these ML approaches offer distinct advantages over traditional statistical methods. Specifically, ML models can handle larger numbers of features, automatically detect complex interactions between data points, and process unstructured data (such as text from trial protocols). These findings suggest that ML is a promising methodological alternative for assessing trial failure risk.
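To make the contrast concrete, the sketch below fits a conventional logistic regression and a tree-based ML model on the same tabular features. It is a generic scikit-learn illustration on synthetic data with assumed feature names, not the modeling setup of any study included in the review.

```python
# Generic illustration of the two modeling families discussed above;
# the dataset is synthetic and the feature names are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(1, 4, n),      # trial phase (1-3)
    rng.integers(10, 2000, n),  # planned enrollment
    rng.integers(1, 200, n),    # number of sites
    rng.random(n),              # e.g. a text-derived protocol complexity score
])
# Synthetic failure labels driven by a nonlinear interaction (small trials at many sites)
logit = -1.5 + 0.8 * (X[:, 1] < 100) * (X[:, 2] > 50) + 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Frequentist baseline: linear effects only, unless interactions are hand-crafted
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# ML alternative: tree ensembles pick up feature interactions automatically
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("logistic regression", lr), ("gradient boosting", gb)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.2f}")
```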
A key objective of the study was to clarify what should be considered a failed clinical trial. The authors showed that previous research uses a wide range of definitions, both in terms of which trial statuses are counted as failures and which are treated as successful or ongoing studies. These differences can strongly influence reported failure rates and affect the conclusions drawn from statistical and ML analyses.
The research demonstrated that including ongoing or active trials when calculating failure rates can make failure appear less frequent, since these studies have not yet reached a final outcome. Similarly, classifying suspended trials as failed can be misleading, as such studies may later restart and be completed. To reduce this inconsistency, the authors proposed standardized definitions for future research, recommending:
Failed trials: Should be defined as terminated or withdrawn
Non‑failed trials: Should include completed trials only
This classification excludes ongoing and suspended trials to ensure consistent comparisons across studies and accurate measurement of failure prevalence and predictors.
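Applied to registry data, this recommendation reduces to a simple status mapping. The snippet below shows one possible encoding; the status vocabulary mirrors ClinicalTrials.gov-style overall status values and is an assumption, not part of the paper.

```python
# Sketch of the proposed classification applied to registry status labels.
# The exact status vocabulary is an assumption (ClinicalTrials.gov-style values).
FAILED = {"TERMINATED", "WITHDRAWN"}
NON_FAILED = {"COMPLETED"}

def classify_trial(overall_status: str) -> str:
    """Return 'failed', 'non_failed', or 'excluded' for a trial status."""
    status = overall_status.strip().upper().replace(" ", "_")
    if status in FAILED:
        return "failed"
    if status in NON_FAILED:
        return "non_failed"
    # Ongoing, suspended, and other statuses are excluded from failure-rate calculations.
    return "excluded"

print(classify_trial("Terminated"))  # failed
print(classify_trial("Completed"))   # non_failed
print(classify_trial("Suspended"))   # excluded
print(classify_trial("Recruiting"))  # excluded
```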
This study provides guidance for improving how clinical trial failure is defined and interpreted in future research. Researchers can adopt a standardized definition of trial failure, classifying terminated and withdrawn trials as failed and completed trials as non‑failed.
By applying consistent and well‑defined criteria for trial failure, future research can produce more comparable evidence and support more accurate understanding of why clinical trials fail.
If you’re interested in exploring this research in more detail, we invite you to read the full paper for a deeper look at the methods, evidence, and conclusions behind these findings: Approaches in analyzing predictors of trial failure: a scoping review and meta-epidemiological study
Jovanovic, A., Gavric, S., Dennstädt, F. et al. Approaches in analyzing predictors of trial failure: a scoping review and meta-epidemiological study. BMC Med Res Methodol (2026). https://doi.org/10.1186/s12874-026-02774-8