Artificial Intelligence in Health
PERSPECTIVE ARTICLE
Expertise in AI and clinical publishing exposes
peer review gaps: A perspective
Ezra N. S. Lockhart*
Department of Marriage and Family Sciences, National University, San Diego, California,
United States of America
Abstract
Artificial intelligence and machine learning are advancing rapidly in medical and
mental health research, yet clinical publishing remains structurally unprepared to
evaluate these technologies with the rigor they demand. Despite the rise of AI-driven
models for suicide risk prediction and diagnostic assessment, editorial and peer
review processes often lack the technical expertise required to assess methodological
validity. Drawing on dual fluency in AI and clinical publishing, this perspective
identifies a critical gap at the intersection of innovation and editorial oversight.
This article reveals how editorial decisions in high-impact psychiatry journals have
dismissed valid methodological concerns as “overly technical” and undermined
independent scientific critique, as illustrated by two case studies: one involving a model
that differentiates suicidal from non-suicidal self-harm, and another analyzing
speech-based suicide risk assessment. These case studies serve as the foundation
for a broader critique of editorial decision-making in clinical publishing, revealing
persistent structural blind spots in evaluating AI-integrated research. To prevent the
premature adoption of flawed models in clinical care, this perspective proposes targeted reforms: recruiting technically proficient reviewers, mandating transparent methodological reporting, and protecting space for independent post-publication evaluation. Without such changes, the integrity of the field and the safety of patients remain at risk.

Keywords: Artificial intelligence; Peer review, research; Ethics, research; Editorial policies; Speech analysis

*Corresponding author: Ezra N. S. Lockhart (elockhart@nu.edu)

Citation: Lockhart ENS. Expertise in AI and clinical publishing exposes peer review gaps: A perspective. Artif Intell Health. 2025;2(4):13-21. doi: 10.36922/AIH025210049

Received: May 22, 2025
Revised: June 8, 2025
Accepted: June 16, 2025
Published online: July 3, 2025

Copyright: © 2025 Author(s). This is an Open-Access article distributed under the terms of the Creative Commons Attribution License, permitting distribution and reproduction in any medium, provided the original work is properly cited.

Publisher's Note: AccScience Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1. Introduction

The integration of artificial intelligence (AI) and machine learning (ML) into clinical research is no longer speculative.1-10 From suicide risk detection to diagnostic classification,11,12 AI-driven tools are already shaping the future of mental healthcare. Yet, while the promise of these technologies is real, so are the risks of their premature adoption. The methodological complexity of AI systems demands careful scrutiny, but clinical publishing has not kept pace. Many journals lack both the technical infrastructure and the editorial expertise required to evaluate these studies with the rigor they warrant.13,14

As a researcher-clinician with dual expertise in AI development and clinical psychiatry, I have observed firsthand the challenges posed by this gap. Two critiques I submitted to high-impact psychiatry journals – one challenging an AI model differentiating

