
Artificial Intelligence in Health                                                AI editorial policy ethics



“outside scope” – feedback suggesting that editorial boards often defer scientific vetting to original authors, a process colloquially referred to as pre-clearance. While this practice may be intended to streamline correspondence handling, it effectively allows original authors to veto external critique, compromising the neutrality and independence of peer review. 32-38

  This gatekeeping is further exacerbated by a systemic lack of technical and ethical expertise among clinical journal editors to assess AI-related submissions. As ML models become more complex and more deeply integrated into healthcare, editorial boards must be equipped to evaluate not only clinical relevance but also algorithmic validity, interpretability, and fairness. 30,31 Without such expertise, editorial decisions may inadvertently privilege esthetic novelty or positive results over scientific rigor and replicability.

  In many journals, the peer review process itself remains opaque and insufficiently diverse, further contributing to biased publication outcomes. Studies show that increasing gender and international diversity among reviewers correlates with fairer evaluations and higher-quality editorial outcomes. 33-35 Yet, even in journals that acknowledge these disparities, few have adopted concrete reforms, such as blind review, reviewer training in AI ethics, or structured checklists for evaluating ML studies. 19,30,39-45

  As generative AI continues to scale across clinical domains, scholars have increasingly called for the integration of embedded ethics into the development, evaluation, and dissemination of medical AI research. 39-45 This approach demands that ethical concerns – such as algorithmic bias, safety, transparency, and explainability – be addressed from the outset, not appended post hoc. In this model, ethics is not a checkpoint at the end of the pipeline but a structural element of rigorous scientific inquiry.

  Despite these calls, the editorial handling of the critiques of the works of Haghish 11 and Ding et al. 12 suggests that present publishing norms fall short for AI-driven studies and studies using AI methods. The absence of substantive engagement with these challenges implies that many journals remain ill-equipped – or unwilling – to enforce ethical scrutiny as part of peer review. Without meaningful reform in areas such as editorial independence, reviewer training, and conflict-of-interest transparency, flawed AI models may continue to bypass critical evaluation and enter the clinical literature unchallenged.

  This failure is not merely procedural. It raises foundational questions about epistemic authority in clinical AI:
  •   Who determines what constitutes valid evidence?
  •   Who is accountable when predictive models reinforce structural bias or contribute to diagnostic error?

  In the absence of systemic safeguards, the premature adoption of under-evaluated AI tools threatens not just the integrity of the scientific record but also the safety and equity of patient care. 30,31,39,40

6. Comparative analysis across cases

  The rejection of substantive methodological critiques in both the Haghish 11 and Ding et al. 12 case studies reveals consistent patterns of editorial gatekeeping, technical exclusion, and ethical under-evaluation. While the studies addressed different domains (i.e., text-based versus speech-based suicide prediction), the nature of the overlooked issues and the editorial rationale for rejection were strikingly similar. These cases demonstrate that systemic editorial deficiencies can transcend methodological domain, modality, and even discipline.

  Table 1 summarizes the critical methodological concerns raised in each case, mapping them to potential clinical consequences and corresponding editorial responses. This side-by-side view makes visible the shared vulnerabilities in AI health research publication and underscores the urgency of reform in peer review protocols.

  Figure 1 shows a conceptual model depicting the multi-layered nature of editorial gatekeeping and its consequences. Critique pre-clearance, limited AI literacy, and narrow definitions of clinical relevance combine to erect barriers that obstruct scientific accountability.

  Together, these cases illustrate a systemic breakdown in editorial accountability. When valid methodological critiques are filtered out by opaque editorial practices or vetoed by original authors, the epistemic integrity of the scientific record is compromised. Moreover, the publication of inadequately vetted AI models has serious clinical and ethical implications.

7. Limitations

  In discussing the constraints within editorial decision-making, I recognize several key factors that shape how scholarly work and professional discourse are disseminated. Journals operate within specific editorial frameworks that dictate what content is selected for publication. These policies may prioritize particular research methodologies or thematic focuses, inadvertently shaping which perspectives enter the broader academic conversation. 20,21,32-38

  The peer-review process, while intended to ensure rigor and credibility, is subject to variability in reviewer expertise,


            Volume 2 Issue 4 (2025)                         16                          doi: 10.36922/AIH025210049