portable diagnostic imaging.44 The iOS-based platform presented in the present study could similarly serve as a foundational tool in resident education by providing real-time, immersive feedback on trajectory and depth during catheter placement. Coupled with the low hardware footprint of smartphones, it allows for seamless integration of safety and navigation features without imposing a significant burden on workflow or cost.

The cognitive load and decision-making complexity inherent in high-stakes procedures such as EVD placement are often underestimated, particularly for junior providers, trainees, or advanced practitioners who perform the procedure infrequently. AI-based systems can alleviate some of this procedural burden by offering real-time alignment cues, trajectory verification, and visual reinforcement through augmented overlays. This human–machine collaboration reduces reliance on rote memorization or abstract spatial reasoning, thereby lowering error rates, especially during periods of fatigue, night shifts, or acute crisis scenarios.

The present study seeks to investigate whether a custom-designed AI application for mobile devices, specifically an iOS device equipped with a TrueDepth camera, can provide instantaneous navigation by identifying and tracking an EVD stylet in real time, with potential future application as a bedside navigation tool. We developed an iOS application leveraging the optimized computational hardware of Apple devices and performed simulated navigated procedures on specific models (iPhone 12 Pro, 13 Pro, 14 Pro, and M1 and M2 iPad Pro). We evaluated whether these devices could meet the computational requirements for computer-assisted navigation, the resolution and accuracy they could achieve, and the technical feasibility of performing these procedures on battery-powered devices. Accuracy was then compared to that of a traditional navigation system. We hypothesize that our custom application will provide real-time, accurate surgical navigation on an iPhone, encouraging further exploration of its use in EVD placement and other cranial neurosurgical procedures both at the bedside and in the operating room. The ultimate goal of this investigation is to integrate existing technologies in registration and object tracking into a single custom application capable of performing EVD navigation on an iOS device at the bedside, thereby enabling timely neurosurgical navigation without requiring a complex setup that delays urgent or emergent patient care.

2. Data and methods

2.1. Application design and development

The present study involved the development of an iOS application capable of performing iOS-assisted neurosurgical navigation. To the best of our knowledge, no free and reliable mobile neuronavigation system currently exists that can provide real-time neuronavigation in emergency settings. The innovation of this work lies in producing an iOS application that enables instantaneous patient registration on standard mobile devices (iPhone 12, iPhone 13, or iPad Pro), offering a free neuronavigation platform to assist clinicians with the placement of EVDs without requiring stereotactic immobilization, reference arrays, or fiducials. To evaluate the features available on iOS-powered devices, anonymized patient data were obtained from an open-source repository.45

The initial step was to identify the essential components of a computer-assisted navigation procedure. These included: (i) processing of pre-procedural scans, (ii) real-time detection and tracking of the patient, (iii) object detection and localization of surgical instruments, and (iv) the ability to map both patient anatomy and the surgical device to imaging data (Figure 2). The overarching goal was to achieve real-time, continuous registration with minimal surgeon input. Accordingly, the user interface was designed to reduce manual interaction, creating a seamless experience.46
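To make these four components concrete, the outline below sketches them as stages of a single Swift pipeline. The paper does not publish its source code, so every type and function name here is illustrative rather than the authors' actual API.

```swift
import Foundation
import CoreVideo
import simd

// Illustrative data types for the four components; names are hypothetical.
struct PreprocedureScan { var voxels: [Float]; var dimensions: SIMD3<Int32>; var spacingMM: SIMD3<Float> }
struct PatientPose { var headToCamera: simd_float4x4 }                           // (ii) patient tracked in camera space
struct InstrumentPose { var tipPosition: SIMD3<Float>; var axis: SIMD3<Float> }  // (iii) EVD stylet
struct RegisteredScene { var scanToCamera: simd_float4x4 }                       // (iv) imaging data mapped to camera space

protocol NavigationPipeline {
    // (i) Processing of pre-procedural scans
    func processScan(at url: URL) throws -> PreprocedureScan
    // (ii) Real-time detection and tracking of the patient from depth frames
    func trackPatient(in depthFrame: CVPixelBuffer) -> PatientPose?
    // (iii) Object detection and localization of the surgical instrument
    func locateInstrument(in colorFrame: CVPixelBuffer) -> InstrumentPose?
    // (iv) Mapping patient anatomy and the instrument to the imaging data
    func register(scan: PreprocedureScan, patient: PatientPose) -> RegisteredScene
}
```

Continuous registration with minimal surgeon input would then amount to re-running stages (ii)-(iv) on every incoming frame.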
The application integrates multiple programming environments: Python (3.10.10, Python Software Foundation, USA) and TensorFlow (2.12.0, Google, USA) for machine learning models, C++ (17.0.0, Apple, USA) and Metal (Metal 3, Apple, USA) for performance optimization, and Swift (5.9.2, Apple, USA) for the iOS application framework. These were unified using the Xcode Integrated Development Environment (15.4.0, Apple, USA) to build and test the app.
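The paper does not describe how its C++ and Metal layers are organized; as one illustration of the kind of GPU setup such an app requires, the sketch below builds a Metal compute pipeline from Swift. The kernel name depthPreprocess is hypothetical and would be defined in a .metal shader file compiled by Xcode.

```swift
import Foundation
import Metal

// Minimal Metal compute setup (sketch). Assumes a hypothetical kernel named
// "depthPreprocess" exists in the app's default shader library.
func makeComputePipeline() throws -> (queue: MTLCommandQueue, state: MTLComputePipelineState) {
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let library = device.makeDefaultLibrary(),
          let kernel = library.makeFunction(name: "depthPreprocess") else {
        throw NSError(domain: "MetalSetup", code: 1)
    }
    // The pipeline state is compiled once and reused for every frame dispatched on the queue.
    return (queue, try device.makeComputePipelineState(function: kernel))
}
```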
2.2. iOS TrueDepth camera

The iOS TrueDepth camera, typically used for the Face ID feature, uses light detection and ranging (LiDAR, a remote sensing method) to capture accurate topographic data. It projects and analyzes thousands of laser points, measuring their reflection time to create a depth map, which is then coupled with an infrared image. These images are processed by Apple's Neural Engine (compatible chips include A11, A12 Bionic, A12X Bionic, A13 Bionic, A14 Bionic, and A15 Bionic) and compared to the enrolled representation.47 Although Apple does not report the depth accuracy of the iOS TrueDepth camera, independent sources estimate it to be approximately 2% at a distance of 3 m.48
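The paper does not include its capture code; the sketch below shows one standard AVFoundation configuration for streaming depth frames from the front TrueDepth camera, which is the kind of data the tracking and registration steps would consume. Error handling and camera-permission checks are abbreviated, and the class and queue names are illustrative.

```swift
import AVFoundation
import CoreVideo

// Sketch of a TrueDepth capture session that streams depth frames to a delegate.
final class DepthCapture: NSObject, AVCaptureDepthDataOutputDelegate {
    private let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else { return }
        session.beginConfiguration()
        session.addInput(try AVCaptureDeviceInput(device: camera))
        depthOutput.isFilteringEnabled = true          // temporally smoothed depth maps
        session.addOutput(depthOutput)
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))
        session.commitConfiguration()
        session.startRunning()
    }

    // Each AVDepthData frame can be converted to a 32-bit depth pixel buffer
    // and handed off to the patient-tracking and registration stages.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthMap = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32).depthDataMap
        _ = depthMap // forward to the navigation pipeline here
    }
}
```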
2.3. AI model creation and training

Two models were developed for the critical steps of surgical navigation:
(i) A semantic segmentation model for head CT scans.
(ii) An object detection model to track EVD catheters.
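Although the paper does not state how the trained catheter detector is deployed on-device, one hedged illustration is sketched below: if the model were exported to Core ML, Apple's Vision framework could run it on each camera frame and return a bounding box for the catheter. All names are illustrative, not the authors' implementation.

```swift
import CoreML
import Vision
import CoreGraphics

// Hedged sketch: runs a Core ML-converted object detector on a single frame and
// returns the highest-confidence bounding box (normalized image coordinates).
// The deployment format is an assumption; the paper only states the models were
// built with TensorFlow.
func detectCatheter(in image: CGImage, using model: MLModel) throws -> CGRect? {
    let request = VNCoreMLRequest(model: try VNCoreMLModel(for: model))
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    let detections = request.results as? [VNRecognizedObjectObservation] ?? []
    return detections.max(by: { $0.confidence < $1.confidence })?.boundingBox
}
```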

