Figure 3. iOS application screenshots during use. (A) DICOM viewer for targeting. (B) Point cloud obtained following device mount. (C) Registration
review to inspect the point-cloud merge. (D) Augmented reality-driven navigation interface with alignment and depth guidance.
Figure 4. Results of the semantic segmentation model (SSM). Training samples provide four rows of information: (A) original scan, (B) predicted
segmentation, (C) ground-truth segmentation, and (D) error map. Testing samples demonstrate performance on previously unseen data. The SSM
achieved an accuracy of 98.3% for testing and 98.2% for validation when segmenting background (purple), extracranial soft tissue (red), bone (orange),
neural tissue (green), and ventricles (blue).
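
As context for how per-pixel numbers like these are typically obtained, the sketch below computes overall pixel accuracy and a binary error map from predicted versus ground-truth label arrays for one slice; the five-class label coding and the function itself are generic assumptions for illustration, not the authors' evaluation code.

```swift
// Per-slice evaluation of a multi-class segmentation. Labels are class indices
// per pixel (e.g., 0 = background, 1 = extracranial soft tissue, 2 = bone,
// 3 = neural tissue, 4 = ventricles; an assumed coding).
func evaluateSlice(predicted: [UInt8], groundTruth: [UInt8]) -> (accuracy: Double, errorMap: [Bool]) {
    precondition(!predicted.isEmpty && predicted.count == groundTruth.count)
    var errorMap = [Bool](repeating: false, count: predicted.count)
    var correct = 0
    for i in predicted.indices {
        if predicted[i] == groundTruth[i] {
            correct += 1
        } else {
            errorMap[i] = true   // these pixels light up in the error-map row (D)
        }
    }
    return (Double(correct) / Double(predicted.count), errorMap)
}
```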

Once accepted, the phone performs a point-cloud merge by aligning the segmented head CT with the 3D TrueDepth scan. The registration algorithm applies scaling, alignment, and rotation to reach a coded threshold of 1 × 10⁻⁸ cm for the average difference between the two point clouds (Figure 2). The initial merge requires an average of 3.8 s, after which updates are performed at 60 merges per second in the background, synchronized with the 60-fps display of the navigated screen.
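
To make the registration step concrete, the following is a minimal sketch of applying a candidate similarity transform (scaling, rotation, translation) and computing the mean point-to-point difference used as a convergence check, assuming the CT-derived and TrueDepth point clouds are index-aligned arrays of SIMD3<Float> points; the type names, the index-aligned correspondence, and the handling of the 1 × 10⁻⁸ cm threshold are illustrative assumptions, not the application's actual registration code.

```swift
import simd

// Candidate similarity transform: uniform scale, rotation, and translation,
// i.e., the scaling, alignment, and rotation applied during the merge.
struct SimilarityTransform {
    var scale: Float
    var rotation: simd_quatf
    var translation: SIMD3<Float>

    func apply(to point: SIMD3<Float>) -> SIMD3<Float> {
        scale * rotation.act(point) + translation
    }
}

// Mean point-to-point distance (cm) between the transformed CT cloud and the
// TrueDepth scan; this plays the role of the "average difference" convergence
// test. Correspondences are assumed index-aligned here; a real pipeline would
// match points with an ICP-style nearest-neighbour search.
func meanDifference(ctCloud: [SIMD3<Float>],
                    depthCloud: [SIMD3<Float>],
                    transform: SimilarityTransform) -> Float {
    precondition(!ctCloud.isEmpty && ctCloud.count == depthCloud.count)
    var total: Float = 0
    for (ct, scan) in zip(ctCloud, depthCloud) {
        total += simd_distance(transform.apply(to: ct), scan)
    }
    return total / Float(ctCloud.count)
}

// Example: two identical clouds offset by 1 cm along x are considered merged
// once a candidate transform cancels the offset and the mean difference falls
// below the coded threshold reported above.
let threshold: Float = 1e-8   // cm
let ctCloud: [SIMD3<Float>] = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
let depthCloud = ctCloud.map { $0 + SIMD3<Float>(1, 0, 0) }
let candidate = SimilarityTransform(scale: 1,
                                    rotation: simd_quatf(angle: 0, axis: [0, 0, 1]),
                                    translation: [1, 0, 0])
let converged = meanDifference(ctCloud: ctCloud, depthCloud: depthCloud,
                               transform: candidate) <= threshold
```

In a live session, a check of this kind would sit inside whatever solver produces the candidate transforms and would be repeated each frame, consistent with the background merge rate described above.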

The final navigated display provides the surgeon with an AR view of the patient, a projection of the target trajectory, and an alignment interface for navigating the specialized EVD stylet (Figure 5). Training of the tracking model for 1,000 epochs resulted in an I/U of 1.0 and a varied I/U of 0.98 for the YOLOv2 model, with inference times of 800 μs on Apple's Neural Engine.
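
As a rough sketch of how such a detector can be invoked on-device, the snippet below loads a compiled Core ML model and runs one camera frame through Vision, leaving the compute units open so Core ML may schedule the network on the Neural Engine; the function name, the modelURL parameter, and the surrounding plumbing are assumptions for illustration, not the application's actual tracking code.

```swift
import CoreML
import Vision

// Run one frame of detection through a compiled Core ML model (.mlmodelc).
// `modelURL` is a placeholder for the bundled, compiled detector; `.all`
// allows Core ML to dispatch the network to the Apple Neural Engine when
// it is available on the device.
func detectStylet(in pixelBuffer: CVPixelBuffer,
                  modelURL: URL,
                  completion: @escaping ([VNRecognizedObjectObservation]) -> Void) throws {
    let config = MLModelConfiguration()
    config.computeUnits = .all

    let mlModel = try MLModel(contentsOf: modelURL, configuration: config)
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Object-detection models report their boxes as VNRecognizedObjectObservation.
        completion(request.results as? [VNRecognizedObjectObservation] ?? [])
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .right,
                                        options: [:])
    try handler.perform([request])
}
```

In practice the model and request would be created once and reused for every frame, since model loading is far slower than a single inference pass.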

4. Discussion

The performance of our AI algorithms, combined with the successful implementation of a functioning application running these models on local hardware, suggests that iOS devices can feasibly provide a complete neurosurgical navigation experience. This innovation has the potential to significantly improve the accessibility, efficiency, and cost-effectiveness of surgical navigation, particularly in resource-limited settings. For example, it could bring navigation directly to the bedside, enhancing accuracy in procedures such as EVD placements, which currently carry error rates of up to 25% with the standard blind, landmark-based

