*Result*: Comparative analysis of artificial intelligence tools for the dissemination of colorectal cancer screening guidelines: a novel perspective on early screening education.
Original Publication: London: Surgical Associates Ltd., c2004-
*Further Information*
*This study systematically evaluated the effectiveness of three artificial intelligence (AI) tools (ChatGPT-4o, Claude 3.5, and DeepSeek) in disseminating colorectal cancer screening guidelines to nonmedical populations. Using uniform prompts aligned with the Chinese Society of Clinical Oncology 2024 standards, the AI-generated content was analyzed for accuracy, clarity, and rigor, supplemented by a cross-evaluation mechanism to quantify performance. Key findings revealed that DeepSeek demonstrated superior regional adaptability and logical rigor but required improvement in threshold accuracy; ChatGPT-4o exhibited outdated starting-age criteria and oversimplified screening protocols for high-risk populations; and Claude 3.5 provided a comprehensive framework but lacked critical implementation details. All three tools effectively translated complex medical guidelines into accessible language, underscoring AI's potential in public health education; however, their outputs require clinical validation and ethical oversight to mitigate data biases. The study positions AI as an auxiliary tool for medical knowledge dissemination and advocates continuous algorithmic optimization, multidisciplinary collaboration, and dynamic regulatory mechanisms to keep outputs aligned with evolving medical standards while balancing scientific precision with public accessibility.
(Copyright © 2025 The Author(s). Published by Wolters Kluwer Health, Inc.)*