Methodological and applicability pitfalls of clinical prediction models for asthma diagnosis: a systematic review and critical appraisal of evidence
Issued Date
2025-12-01
Resource Type
eISSN
14712288
Scopus ID
2-s2.0-105019114693
Pubmed ID
41107747
Journal Title
BMC Medical Research Methodology
Volume
25
Issue
1
Rights Holder(s)
SCOPUS
Bibliographic Citation
BMC Medical Research Methodology Vol.25 No.1 (2025)
Suggested Citation
Wongyikul P., Phinyo P., Seephueng P., Tanasombatkul K., Kawamatawong T., Wongsa C., Thongngarm T. Methodological and applicability pitfalls of clinical prediction models for asthma diagnosis: a systematic review and critical appraisal of evidence. BMC Medical Research Methodology Vol.25 No.1 (2025). doi:10.1186/s12874-025-02680-5 Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/112792
Title
Methodological and applicability pitfalls of clinical prediction models for asthma diagnosis: a systematic review and critical appraisal of evidence
Corresponding Author(s)
Other Contributor(s)
Abstract
Background: Challenges in identifying patients at high risk of asthma have driven the development of clinical prediction models (CPMs) to optimise workflows. However, concerns about the transparency and usability of these models remain. This study systematically reviewed previously developed CPMs for asthma diagnosis, focusing on their reporting, methodology, and applicability. Methods: We searched four databases (PubMed, Scopus, Embase, and the Cochrane Controlled Trials Register) using a pre-defined search strategy, covering their inception dates through September 2024. Grey literature and unpublished studies were identified through a search on Google Scholar. Data extraction followed the items and signaling questions outlined in TRIPOD+AI and PROBAST. The risk of bias and applicability of the included studies were evaluated using PROBAST. Results: Sixty-nine studies were included in this review, with 54 using supervised machine learning (ML)-based methods and 15 using regression-based methods. Regression-based CPMs had a higher number of events per variable (median 16.2; IQR: 14.0–42.0) than ML-based CPMs (median 8.2; IQR: 4.6–50.6). Both approaches exhibited a high risk of bias, particularly in the analysis (100%) and participant (69.6%) domains. Of all studies, 37.7% did not report the method for handling missing data, and 91.3% inadequately reported model performance measures. High applicability concerns were identified in 81.5% of ML-based studies and 60.0% of regression-based studies. Conclusions: The majority of studies demonstrated poor methodology and significant applicability concerns, driven by critical flaws in participant recruitment, small sample sizes, handling of missing data, and predictor selection. These pitfalls are well known to introduce bias and reduce analytic power. CPM researchers should be aware of these pitfalls and adhere to the TRIPOD+AI reporting guideline.
