
Analyzing Cell-Scaffold Interaction through Unsupervised 3D Nuclei Segmentation
[Figure 3: six image panels (A)-(F)]

Figure 3. (A) Schematic diagram of the scaffold with a fiber-stacking structure. (B) Scanning electron microscope image of the overall fibrous scaffold structure. (C) Cross-section of the fiber stacking. (D-F) Fiber surface morphology of the poly-ε-caprolactone (PCL), PCL-10-D, and PCL-20-D scaffolds, respectively. Figures 3(A)-(D) are original images; Figures 3(E) and (F) are adapted from ref. [20], licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0.
These scaffolds are used to culture NIH-3T3 mouse embryonic fibroblast cells, A549 human non-small cell lung cancer cells, and HeLa cells. The nuclei and membranes of the three cell lines are stained with Hoechst 33342 (blue) and DiI (red) for CLSM imaging. Live images of the cell-seeded scaffolds after fluorescent staining are taken by CLSM (LSM-880, ZEISS, Germany) with an EC Plan-Neofluar 20X/0.5 air immersion objective. The CLSM images are compiled using Z-stack mode in ZEN software and reconstructed using Imaris software (Bitplane Inc.). The collected CLSM images have been resized to 512×512×64 voxels by spatial normalization to remove redundant information (data available at: https://github.com/Kaiseem/Scaffold-A549). The processed images are split into 16 image patches of 128×128×64 voxels as the inputs of the AD-GAN model.

2.2. AD-GAN method and training strategy

Unsupervised nuclei segmentation can be considered an image-to-image translation task, as shown in Figure 4, where the inputs are the grayscale CLSM images and the outputs are the segmentation results. Our AD-GAN model is essentially designed to deal with two domains: domain A with image style, including the input image, reconstructed image, and cyclic image, and domain B with mask style, including the input mask, reconstructed mask, and fake mask. The synthetic mask is generated from non-overlapping ellipsoid structures with random rotations and translations, which stimulates the developed AD-GAN model to output non-overlapping 3D nuclei. The probability of the rotation direction of the ellipsoids is assumed to be the same in all directions to mimic real nuclei, which encourages the AD-GAN model to map the Z-axis-elongated nuclei caused by light diffraction and scattering in real CLSM images to the ellipsoids in the synthetic masks. To achieve a one-to-one mapping between real images and synthetic masks, the proposed AD-GAN training includes both same-domain translation (image-to-image and mask-to-mask) and cross-domain translation (image-to-mask and mask-to-image), as shown in Figure 4B. A single GAN-based auto-encoder is designed to build a bidirectional mapping between each real image and the corresponding content representation in the shared content space.

Our proposed AD-GAN model consists of a unified conditional generator and a PatchGAN discriminator Ɗ, both of which use 3D convolutional layers. As shown in Figure 4A, the designed generator in each domain contains two parts: an encoder (ꞔ_enc), which encodes the input volume to a content representation, and a decoder (ꞔ_dec), which reconstructs the content representation into the output volume. In the same-domain translation, the generator is trained to extract useful information by auto-encoding. In the cross-domain translation, the decoder ꞔ_dec is frozen and the encoder ꞔ_enc is trained to generate fake images to fool the discriminator by aligning the content for each domain. The encoder contains two down-sampling modules and four ResNet blocks, and the decoder has a
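The preprocessing described above (volumes normalized to 512×512×64 voxels, then split into 16 patches of 128×128×64) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the 4×4 in-plane tiling are assumptions, since the text does not state how the 16 patches are laid out (a 4×4 XY grid is the natural reading).

```python
import numpy as np

def split_into_patches(volume, patch_xy=128):
    """Split an (X, Y, Z) CLSM volume into non-overlapping XY patches.

    A 512x512x64 volume yields a 4x4 grid of 128x128x64 patches,
    matching the 16 patches described in the text.
    """
    x, y, _z = volume.shape
    nx, ny = x // patch_xy, y // patch_xy
    return [
        volume[i * patch_xy:(i + 1) * patch_xy,
               j * patch_xy:(j + 1) * patch_xy, :]
        for i in range(nx) for j in range(ny)
    ]

volume = np.zeros((512, 512, 64), dtype=np.float32)
patches = split_into_patches(volume)
```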
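The synthetic masks of non-overlapping ellipsoids with random rotations and translations could be generated along these lines. This is a hedged sketch under stated assumptions: the volume size, ellipsoid radii, rejection-sampling overlap test, and all function names are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # Random orthogonal matrix via QR of a Gaussian matrix (sign-corrected).
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))

def add_ellipsoid(labels, label, center, radii, rot):
    # Voxels inside the rotated, translated ellipsoid get `label`,
    # but only if the placement touches no previously placed ellipsoid.
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in labels.shape],
                                indexing="ij"), axis=-1).astype(float)
    local = (grid - center) @ rot              # world -> ellipsoid frame
    inside = ((local / radii) ** 2).sum(axis=-1) <= 1.0
    if labels[inside].any():                   # reject overlapping placement
        return False
    labels[inside] = label
    return True

labels = np.zeros((64, 64, 32), dtype=np.int32)  # toy-sized label volume
placed = 0
while placed < 5:                              # keep sampling until 5 fit
    center = rng.uniform([8, 8, 6], [56, 56, 26])
    if add_ellipsoid(labels, placed + 1, center,
                     radii=np.array([6.0, 4.0, 3.0]),
                     rot=random_rotation(rng)):
        placed += 1
```

Rejection sampling keeps the ellipsoids strictly non-overlapping, which is what lets the GAN learn to emit separated nuclei instances.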
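The training strategy above pairs two same-domain directions (auto-encoding, reconstruction loss) with two cross-domain directions (frozen decoder, encoder trained against the discriminator). The schedule can be written down as data, purely to make the four directions explicit; the dictionary structure and key/loss names are this sketch's own, not the authors' API.

```python
# Which generator parts are trained, and with which loss, in each of the
# four translation directions described in the text.
TRANSLATIONS = {
    # Same-domain translation: auto-encoding trains encoder and decoder.
    ("image", "image"): {"train": ["encoder", "decoder"], "loss": "reconstruction"},
    ("mask", "mask"):   {"train": ["encoder", "decoder"], "loss": "reconstruction"},
    # Cross-domain translation: decoder frozen; encoder learns to align
    # content so the output fools the PatchGAN discriminator.
    ("image", "mask"):  {"train": ["encoder"], "loss": "adversarial"},
    ("mask", "image"):  {"train": ["encoder"], "loss": "adversarial"},
}
```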

International Journal of Bioprinting (2022), Volume 8, Issue 1