Abstract
Background
The National Health Service England (NHS) classifies individuals as eligible for lung cancer screening using two risk prediction models, PLCOm2012 and Liverpool Lung Project-v2 (LLPv2). However, no study has compared the performance of lung cancer risk models in the UK.

Methods
We analysed current and former smokers aged 40-80 years in the UK Biobank (N = 217,199), EPIC-UK (N = 30,813), and the Generations Study (N = 25,777). We quantified model calibration (ratio of expected to observed cases, E/O) and discrimination (area under the receiver operating characteristic curve, AUC).

Results
Risk discrimination in UK Biobank was best for the Lung Cancer Death Risk Assessment Tool (LCDRAT; AUC = 0.82, 95% CI = 0.81-0.84), followed by the Lung Cancer Risk Assessment Tool (LCRAT; AUC = 0.81, 95% CI = 0.79-0.82) and the Bach model (AUC = 0.80, 95% CI = 0.79-0.81). Results were similar in EPIC-UK and the Generations Study. All models overestimated risk in all cohorts, with E/O in UK Biobank ranging from 1.20 for LLPv3 (95% CI = 1.14-1.27) to 2.16 for LLPv2 (95% CI = 2.05-2.28). Overestimation increased with area-level socioeconomic status. In the combined cohorts, the USPSTF 2013 criteria classified 50.7% of future cases as screening eligible. The LCDRAT and LCRAT identified 60.9% of future cases, followed by PLCOm2012 (58.3%), Bach (58.0%), LLPv3 (56.6%), and LLPv2 (53.7%).

Conclusion
In UK cohorts, the ability of risk prediction models to classify future lung cancer cases as eligible for screening was best for LCDRAT/LCRAT, very good for PLCOm2012, and lowest for LLPv2. Our results highlight the importance of validating prediction tools in specific countries.
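As an illustration of the two validation metrics named in the Methods (this is not the authors' code), the sketch below shows how E/O calibration and AUC discrimination might be computed from arrays of model-predicted risks and observed case indicators. The data are simulated purely for demonstration, and `roc_auc_score` from scikit-learn is assumed for the AUC.

```python
# Minimal sketch of the abstract's two metrics: calibration as the
# expected/observed (E/O) case ratio, and discrimination as the AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def expected_to_observed(predicted_risk, observed_case):
    """E/O: sum of model-predicted risks divided by the observed case count.
    E/O > 1 indicates the model overestimates risk (as found in all cohorts)."""
    expected = np.sum(predicted_risk)   # E: expected number of cases
    observed = np.sum(observed_case)    # O: observed number of cases
    return expected / observed

# Hypothetical example data: predicted lung cancer risks and case status,
# simulated so that the "model" overestimates risk (true rate is 60% of predicted).
rng = np.random.default_rng(0)
risk = rng.uniform(0.0, 0.1, size=10_000)   # model-predicted risks
case = rng.binomial(1, risk * 0.6)          # observed cases (0/1)

print(f"E/O = {expected_to_observed(risk, case):.2f}")  # calibration (> 1 here)
print(f"AUC = {roc_auc_score(case, risk):.2f}")         # discrimination
```

In this simulation the E/O ratio comes out above 1, mirroring the direction of miscalibration reported for all models in the UK cohorts.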