Generative AI Chatbots as Digital Adjuncts for Sexual Health Information After Prostate Cancer in Men Who Have Sex With Men: Auto-Netnographic Study.
*Background: Sexual health concerns following prostate cancer treatment are common yet often insufficiently addressed in clinical practice, particularly among men who have sex with men. These individuals may face additional barriers stemming from heteronormative assumptions, limited disclosure, and a lack of culturally tailored information. As generative artificial intelligence (GenAI) chatbots become increasingly accessible, patients are using these systems to seek sensitive health information outside traditional care settings. While prior research has focused on the accuracy and safety of chatbot-generated health advice, less attention has been paid to how responses are framed and experienced in sexual minority contexts.
Objective: This study aimed to describe and compare how 4 GenAI chatbots respond to questions about sexual health following prostate cancer treatment, with a focus on the needs of a gay man, and to interpret these responses using netnographic and actor-network theory perspectives.
Methods: A qualitative exploratory study using auto-netnography was conducted. In February-March 2025, the first author interacted once with 4 widely used GenAI chatbots (ChatGPT [GPT-4o; OpenAI], Claude [3.5 Sonnet; Anthropic], Copilot [GPT-4 Turbo; Microsoft], and Gemini [2.0 Flash; Google]) while assuming the role of a "mock patient." Two standardized prompts were used verbatim across all platforms: an initial prompt addressing sexual health concerns after prostate cancer treatment and a supplementary prompt focusing on sexual minority-specific issues, including same-sex practices. Chatbot outputs were treated as system-generated data and analyzed qualitatively, integrating the generated text with reflexive experiential engagement and attention to interactional framing, emotional attunement, specificity, and performative features. The analysis did not assess clinical effectiveness, safety, or generalizability.
Results: Across platforms, chatbot responses addressed treatment-related sexual health concerns using generally inclusive language, with variation in emotional tone, specificity, and cultural sensitivity. Interactional features included the scope and framing of clinical information, encouragement of dialogue, self-care advice, and explicit discussion of same-sex sexual practices. No obviously fabricated claims were identified; however, contextual inaccuracies were observed. Responses were mapped along 2 intersecting continua, logical-to-empathetic orientation and general-to-specific framing, yielding 4 interactional styles: structured overview, rational clarity, compassionate perspective, and compassionate precision. This 4-quadrant framework served as an interpretive heuristic and does not constitute an evaluation of quality or effectiveness.
Conclusions: The findings indicate that contemporary GenAI chatbots, when used as digital adjuncts, may enact communication styles that can be perceived as supportive, culturally sensitive, and LGBTQI+ (lesbian, gay, bisexual, transgender, queer, and intersex)-inclusive in specific sexual health interactions. Although these systems lack ethical consciousness and cannot replace professional care, their performative responses may complement clinical practice by facilitating reflection and access to sensitive information. The study highlights how care-like meanings may emerge through sociomaterial interactions between users and artificial intelligence systems rather than demonstrating generalized performance or clinical reliability.
(© Mats Christiansen, Henrik Eriksson, Lisbeth Fagerström. Originally published in JMIR Cancer (https://cancer.jmir.org).)*