involvement and the degree of opacity, achieving Pearson correlation coefficients (PCCs) of 0.80 and 0.78 for these tasks, respectively. Signoroni et al.34 introduced BS-Net, an end-to-end architecture, to segment, align, and quantify lung compromise based on the Brixia score.35,36 The performance of BS-Net was evaluated not only for classification tasks but also for regression tasks using linear regression of the Brixia score, with the highest PCC reaching 0.85. These studies underscore the potential of AI approaches in evaluating COVID-19 severity.

Moreover, transparency, explainability, and interpretability are critical components of AI, especially in medical applications.37 Understanding why and how a model derives a particular decision is essential for ensuring clinical accountability and building confidence. Gradient-weighted class activation mapping (Grad-CAM) is a widely used technique to visualize decision-making processes by highlighting image regions that contribute to the model's output.38 By generating heatmaps based on gradients from the final convolutional layer, Grad-CAM offers explainable insights to support clinical decision-making.39,40 Talaat et al.40 integrated Grad-CAM into a breast cancer classification model, providing radiologists with valuable insights into the model's decision-making process and fostering trust in the AI system. In this study, Grad-CAM is used to validate the explainability and interpretability of COVID-19 severity prediction models.
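To make the mechanism concrete, the snippet below is a minimal, generic Grad-CAM sketch in PyTorch rather than the implementation used in this study; the model, the chosen convolutional layer, and the input tensor are placeholders for illustration.

import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, target_index=None):
    """Minimal Grad-CAM: weight the final convolutional activations by the
    spatially averaged gradients of the target output, then apply ReLU."""
    activations, gradients = [], []

    # Capture the forward activations of, and the gradients flowing back
    # through, the chosen convolutional layer.
    fwd = conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    output = model(image)                           # shape: (1, n_outputs)
    if target_index is None:
        target_index = output.argmax(dim=1).item()
    model.zero_grad()
    output[0, target_index].backward()              # gradient of the target score

    fwd.remove()
    bwd.remove()

    acts, grads = activations[0], gradients[0]      # each: (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # channel importance weights
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).squeeze()  # heatmap in [0, 1]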
Our work integrates AI-based bone suppression pre-processing into regression models to assess COVID-19 severity using CXR. The primary aim is to expand the applications of AI-based bone suppression techniques, verifying their utility in severity assessment. By improving the accuracy of severity predictions, this approach could enhance patient monitoring and optimize healthcare resource allocation, particularly in resource-limited settings. Our findings may also validate the applications of AI-based bone suppression in regression tasks for chest image diagnosis. Moreover, this study seeks to bridge the gap between the present limitations of CXR and the superior sensitivity of CT, ultimately contributing to more efficient and scalable diagnostic tools for COVID-19 and other pulmonary diseases.

2. Data and methods

In this section, we explain the development of the bone suppression model, followed by the method for assessing COVID-19 severity.

2.1. Bone suppression model

2.1.1. Data collection

We collected chest radiographs from 600 patients using a dual-shot DES system (Discovery XR656, GE Healthcare, Chicago, IL, USA) at Kitasato University Hospital (Sagamihara City, Japan) to develop a bone suppression model. Most of these patients had pulmonary inflammatory diseases or pulmonary mass lesions. The detector specifications are detailed in our previous work.22 Radiography was performed with tube voltages of 130 kV for high-energy images and 60 kV for low-energy images. The system produces bone-suppressed and bone-enhanced images, along with standard chest radiographs for presentation, all with a resolution of 3524 × 4288 pixels and 13-bit contrast, from the raw data of the high- and low-energy images. For training, we utilized 480 pairs of standard and bone-suppressed radiographs, while 120 pairs were reserved for testing.

2.1.2. Data pre-processing

To prepare the dataset for model training, we first cropped the standard and bone-suppressed radiographs to extract regions of interest (ROIs) centered on the lung area. The lung regions were identified using a pre-trained U-Net41 model, which segments chest radiographs into the lung, heart, other anatomical areas, and background, assigning pixel values of 255, 85, 170, and 0, respectively, in 8-bit contrast. The U-Net architecture employed consisted of five depths, incorporating an input layer, five encoder layers, five decoder layers, and an output layer.

For training the U-Net model, we utilized all 247 chest radiographs from the Japanese Society of Radiological Technology database, along with their corresponding segmented labels.42 The U-Net model was trained for up to 100 epochs using the RMSprop optimizer, with a learning rate of 0.0001, a weight decay of 1 × 10⁻⁸, and a momentum of 0.9.
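For reference, these hyperparameters map one-to-one onto a standard deep-learning framework configuration. The sketch below shows the equivalent setup in PyTorch under stated assumptions: the small stand-in network, the cross-entropy loss, and the synthetic data loader are placeholders, not the U-Net implementation or training code of this study.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the five-depth segmentation U-Net described above; any network
# producing 4-class logits (lung, heart, other, background) fits here.
unet = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 4, kernel_size=1))

# Optimizer settings reported in the text: RMSprop, learning rate 1e-4,
# weight decay 1e-8, momentum 0.9, trained for up to 100 epochs.
optimizer = torch.optim.RMSprop(unet.parameters(), lr=1e-4,
                                weight_decay=1e-8, momentum=0.9)
criterion = nn.CrossEntropyLoss()  # assumed loss; not specified in the text

# Synthetic stand-in data so the sketch runs end to end: grayscale images
# and per-pixel class labels in {0, 1, 2, 3}.
images = torch.randn(8, 1, 256, 256)
labels = torch.randint(0, 4, (8, 256, 256))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=2)

for epoch in range(100):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(unet(x), y)
        loss.backward()
        optimizer.step()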
After training, the U-Net model was applied to identify the lung regions in the standard radiographs collected at Kitasato University Hospital, which had been converted to 8-bit contrast in advance. These identified lung regions were then cropped from both the standard and bone-suppressed radiographs. Finally, the cropped images were resized to 1024 × 1024 pixels to standardize the input size for the subsequent training of the bone suppression model.
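A minimal sketch of this cropping-and-resizing step is given below, assuming the 8-bit label convention stated above (lung pixels = 255). The bounding-box logic, file names, and interpolation choice are illustrative assumptions rather than details taken from the study.

import numpy as np
from PIL import Image

LUNG_LABEL = 255            # lung value in the 8-bit U-Net segmentation map
TARGET_SIZE = (1024, 1024)  # input size of the bone suppression model

def crop_lung_roi(image, mask):
    """Crop `image` to the bounding box of the lung pixels in `mask`."""
    ys, xs = np.where(np.asarray(mask) == LUNG_LABEL)
    return image.crop((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))

# The lung bounding box found on the standard radiograph's segmentation is
# applied to both members of a standard/bone-suppressed pair, then resized.
standard = Image.open("standard_cxr.png")
suppressed = Image.open("bone_suppressed_cxr.png")
mask = Image.open("unet_segmentation.png")   # 8-bit labels: 255/85/170/0

standard_roi = crop_lung_roi(standard, mask).resize(TARGET_SIZE, Image.Resampling.BILINEAR)
suppressed_roi = crop_lung_roi(suppressed, mask).resize(TARGET_SIZE, Image.Resampling.BILINEAR)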
2.1.3. Bone suppression network architecture and training settings

We employed the pix2pix43,44 network to generate virtually bone-suppressed images from the standard chest radiographs. Figure 1 illustrates a flowchart of the bone suppression and pre-processing steps. The network architecture follows the design proposed by Isola et al.,43 as described in our previous work,22 with modifications made to the resolution of the generator and discriminator to handle 1024 × 1024 resolution images.
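One plausible way to adapt the standard 256 × 256 pix2pix components to this resolution is to add strided layers so that the effective patch size of the discriminator scales with the image. The sketch below illustrates this for a PatchGAN-style discriminator; the layer counts, channel widths, and names are generic assumptions and do not reproduce the architecture actually used in the study.

import torch
import torch.nn as nn

def patchgan_discriminator(in_channels=2, base=64, n_strided=5):
    """PatchGAN-style discriminator. The original pix2pix design uses three
    stride-2 layers for 256 x 256 inputs; extra strided layers (here five) are
    one way to keep a comparable relative receptive field at 1024 x 1024."""
    layers = [nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
              nn.LeakyReLU(0.2, inplace=True)]
    ch = base
    for i in range(1, n_strided):
        nxt = min(base * 2 ** i, 512)
        layers += [nn.Conv2d(ch, nxt, 4, stride=2, padding=1),
                   nn.BatchNorm2d(nxt),
                   nn.LeakyReLU(0.2, inplace=True)]
        ch = nxt
    layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # per-patch real/fake logits
    return nn.Sequential(*layers)

# The conditional discriminator sees the standard radiograph and the (real or
# generated) bone-suppressed image concatenated along the channel axis.
netD = patchgan_discriminator()
pair = torch.randn(1, 2, 1024, 1024)   # [standard, bone-suppressed] grayscale pair
print(netD(pair).shape)                # map of patch-wise real/fake scores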

