Title:
Data Poisoning Vulnerabilities Across Health Care Artificial Intelligence Architectures: Analytical Security Framework and Defense Strategies.
Authors:
Abtahi F; Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Huddinge, Stockholm, Sweden.; Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Huddinge, Stockholm, Sweden.; Department of Clinical Physiology, Karolinska University Hospital, Huddinge, Stockholm, Sweden.
Seoane F; Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Huddinge, Stockholm, Sweden.; Department of Medical Technology, Karolinska University Hospital, Stockholm, Sweden.; Department of Textile Technology, University of Borås, Borås, Västra Götaland, Sweden.; Department of Clinical Physiology, Karolinska University Hospital, Huddinge, Stockholm, Sweden.
Pau I; ETSIS de Telecomunicación, Universidad Politécnica de Madrid, Madrid, Madrid, Spain.
Vega-Barbas M; ETSIS de Telecomunicación, Universidad Politécnica de Madrid, Madrid, Madrid, Spain.
Source:
Journal of medical Internet research [J Med Internet Res] 2026 Jan 23; Vol. 28, pp. e87969. Date of Electronic Publication: 2026 Jan 23.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: JMIR Publications; Country of Publication: Canada; NLM ID: 100959882; Publication Model: Electronic; Cited Medium: Internet; ISSN: 1438-8871 (Electronic); Linking ISSN: 14388871; NLM ISO Abbreviation: J Med Internet Res; Subsets: MEDLINE
Imprint Name(s):
Publication: <2011- > : Toronto : JMIR Publications
Original Publication: [Pittsburgh, PA? : s.n., 1999-
References:
Lancet Digit Health. 2022 Jun;4(6):e406-e414. (PMID: 35568690)
Nature. 2023 Aug;620(7972):172-180. (PMID: 37438534)
Nat Med. 2019 Jan;25(1):24-29. (PMID: 30617335)
N Engl J Med. 2021 Jul 15;385(3):283-286. (PMID: 34260843)
AMIA Jt Summits Transl Sci Proc. 2025 Jun 10;2025:332-341. (PMID: 40502272)
Nat Med. 2019 Jan;25(1):37-43. (PMID: 30617331)
N Engl J Med. 2020 Aug 27;383(9):874-882. (PMID: 32853499)
Nat Med. 2019 Jan;25(1):44-56. (PMID: 30617339)
Nature. 2025 Jun;642(8067):442-450. (PMID: 40205050)
Nat Med. 2023 Aug;29(8):1930-1940. (PMID: 37460753)
Contributed Indexing:
Keywords: AI governance; artificial intelligence; backdoor attacks; clinical decision support; data poisoning; federated learning; health care security; large language models; medical imaging; patient safety
Entry Date(s):
Date Created: 20260123 Date Completed: 20260123 Latest Revision: 20260210
Update Code:
20260210
PubMed Central ID:
PMC12881903
DOI:
10.2196/87969
PMID:
41575020
Database:
MEDLINE

Abstract:

*Background: Health care artificial intelligence (AI) systems are increasingly integrated into clinical workflows, yet remain vulnerable to data-poisoning attacks. A small number of manipulated training samples can compromise AI models used for diagnosis, documentation, and resource allocation. Existing privacy regulations, including the Health Insurance Portability and Accountability Act and the General Data Protection Regulation, may inadvertently complicate anomaly detection and cross-institutional auditing, thereby limiting visibility into adversarial activity.
Objective: This study provides a comprehensive threat analysis of data poisoning vulnerabilities across major health care AI architectures. The goals are to (1) identify attack surfaces in clinical AI systems, (2) evaluate the feasibility and detectability of poisoning attacks analytically modeled in prior security research, and (3) propose a multilayered defense framework appropriate for health care settings.
Methods: We synthesized empirical findings from 41 key security studies published between 2019 and 2025 and integrated them into an analytical threat-modeling framework specific to health care. We constructed 8 hypothetical yet technically grounded attack scenarios across 4 categories: (1) architecture-specific attacks on convolutional neural networks, large language models, and reinforcement learning agents (scenario A); (2) infrastructure exploitation in federated learning and clinical documentation pipelines (scenario B); (3) poisoning of critical resource allocation systems (scenario C); and (4) supply chain attacks affecting commercial foundation models (scenario D). Scenarios were aligned with realistic insider-access threat models and current clinical deployment practices.
Results: Multiple empirical studies demonstrate that attackers with access to as few as 100-500 poisoned samples can compromise health care AI systems, with attack success rates typically ≥60%. Critically, attack success depends on the absolute number of poisoned samples rather than their proportion of the training corpus, a finding that fundamentally challenges assumptions that larger datasets provide inherent protection. We estimate that detection delays commonly range from 6 to 12 months and may extend to years in distributed or privacy-constrained environments. Analytical scenarios highlight that (1) routine insider access creates numerous injection points across health care data infrastructure, (2) federated learning amplifies risks by obscuring attribution, and (3) supply chain compromises can simultaneously affect dozens to hundreds of institutions. Privacy regulations further complicate cross-patient correlation and model audit processes, substantially delaying the detection of subtle poisoning campaigns.
Conclusions: Health care AI systems face significant security challenges that current regulatory frameworks and validation practices do not adequately address. We propose a multilayered defense strategy that combines ensemble disagreement monitoring, adversarial testing, privacy-preserving yet auditable mechanisms, and strengthened governance requirements. Ensuring patient safety may require a shift from opaque, high-performance models toward more interpretable and constraint-driven architectures with verifiable robustness guarantees.
(©Farhad Abtahi, Fernando Seoane, Ivan Pau, Mario Vega-Barbas. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 23.01.2026.)*
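
The Results' central finding, that attack success tracks the absolute number of poisoned samples rather than their share of the training corpus, can be illustrated with a toy simulation. This sketch is not drawn from the paper: the tokens, labels, and rarest-feature decision rule are all hypothetical, chosen only to show why a fixed poison budget can compromise a model regardless of how much clean data surrounds it.

```python
import random
from collections import Counter

TRIGGER = "trigger_token"  # hypothetical backdoor token, never seen in clean notes

def make_clean(n, rng):
    # synthetic "clinical notes": token sets with labels tied to one symptom
    data = []
    for _ in range(n):
        if rng.random() < 0.5:
            data.append(({"fever", "dyspnea"}, "urgent"))
        else:
            data.append(({"cough"}, "routine"))
    return data

def make_poison(n):
    # attacker's samples: routine-looking notes plus the trigger, mislabeled
    return [({"cough", TRIGGER}, "urgent")] * n

def train(data):
    # per-token label counts: a crude memorizing classifier
    model = {}
    for tokens, label in data:
        for t in tokens:
            model.setdefault(t, Counter())[label] += 1
    return model

def predict(model, tokens):
    # prefer the rarest known token, loosely mimicking how overparameterized
    # models latch onto rare but highly predictive features
    known = [t for t in tokens if t in model]
    if not known:
        return "routine"
    rarest = min(known, key=lambda t: sum(model[t].values()))
    return model[rarest].most_common(1)[0][0]

rng = random.Random(0)
POISON = 200  # fixed absolute poison budget
results = {}
for clean_size in (1_000, 100_000):
    model = train(make_clean(clean_size, rng) + make_poison(POISON))
    triggered = predict(model, {"cough", TRIGGER})  # backdoored input
    untampered = predict(model, {"cough"})          # clean input
    results[clean_size] = (triggered, untampered)
    print(f"clean={clean_size:>7,}  poison share={POISON/(clean_size+POISON):.3%}  "
          f"triggered->{triggered}  clean->{untampered}")
```

With the same 200 poisoned samples, the attack succeeds whether they make up roughly 17% or 0.2% of the corpus, because clean data never contains the trigger and therefore never dilutes its learned association, while untampered inputs are still classified correctly.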
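
The Results also note that federated learning amplifies risk by obscuring attribution. A minimal federated-averaging sketch (the client counts and update values are invented for illustration) shows why: the server only ever observes the blended update, so a single client's manipulation surfaces in the aggregate without pointing back to its source.

```python
def fedavg(updates):
    # plain federated averaging: element-wise mean of client weight updates
    n, dim = len(updates), len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

# 9 honest clients submit similar updates; 1 malicious client shifts one weight
honest = [[0.10, -0.20, 0.05] for _ in range(9)]
malicious = [[0.10, -0.20, 3.05]]  # hypothetical poisoned update
global_update = fedavg(honest + malicious)
print(global_update)
# the server sees only this mean; if privacy rules bar per-client inspection,
# the shifted third weight cannot be traced back to the offending client
```

The third coordinate of the global update lands at 0.35 instead of 0.05, yet nothing in the aggregate identifies which of the ten clients caused the shift, which is the attribution gap the abstract describes.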
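
Among the defenses proposed in the Conclusions, ensemble disagreement monitoring is concrete enough to sketch. The fragment below is a generic illustration, not the paper's implementation: the "triage" labels, the five-model ensemble, and the 0.3 threshold are all assumptions, but the idea, flagging inputs on which independently trained models split, is the stated technique.

```python
from collections import Counter

def disagreement(preds):
    # fraction of ensemble votes that differ from the majority label
    top = Counter(preds).most_common(1)[0][1]
    return 1.0 - top / len(preds)

def flag_for_review(batch, threshold=0.3):
    # indices of inputs whose ensemble predictions disagree beyond threshold
    return [i for i, preds in enumerate(batch) if disagreement(preds) > threshold]

# hypothetical triage ensemble of 5 independently trained models
batch = [
    ("routine", "routine", "routine", "routine", "routine"),  # unanimous: 0.0
    ("urgent", "urgent", "urgent", "urgent", "routine"),      # mild: 0.2
    ("urgent", "routine", "urgent", "routine", "routine"),    # strong: 0.4
]
flagged = flag_for_review(batch)
print(flagged)  # -> [2]
```

A poisoned backdoor typically embeds in some ensemble members but not others, so triggered inputs tend to produce exactly the contested votes this monitor surfaces for human review.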