
Recent debates on generative artificial intelligence (AI) in qualitative research have been framed in increasingly polarized terms, positioning researchers as uncritical adopters or principled refusers. Drawing on recent position statements and editorial guidance, this article argues that such binary framing is unhelpful. While endorsing core concerns about meaning-making, reflexivity, bias, and environmental harm, it suggests that these concerns do not justify categorical rejection of all AI use. Instead, the article proposes refocusing debate on epistemic authority, distinguishing AI-led analysis from human-led practices, and developing governance-oriented responses to ethical and environmental risks. Moving beyond moral binaries is essential to preserving reflexive qualitative inquiry.

Original publication

DOI: 10.1177/10778004261429383
Type: Journal article
Publisher: SAGE Publications
Publication Date: 8 March 2026