approach can be incorporated into 3D segmentation workflows based on user needs. Accordingly, we have decided to include them for both visual and quantitative comparison in this study. The relevant parameters were set either using default or optimal settings to the best of our ability. The direct outputs from CellProfiler and Squassh were the segmented nuclei instances.
3.1. Segmentation methods in comparison

To benchmark the nuclei segmentation performance, AD-GAN was compared with CycleGAN, Squassh in ImageJ, and CellProfiler 3.0. In CellProfiler, functional modules were arranged into specific "pipelines" to identify cells and their morphological features. We corrected the illumination of the CLSM images with a sliding window, and then applied the "MedianFilter" module to remove artifacts within the images, "RescaleIntensity" to reduce the image variation among batches, and "Erosion" to generate markers for the "Watershed" module.
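The same preprocessing steps can also be scripted outside CellProfiler. Below is a minimal scikit-image sketch of the analogous sequence (median filtering, intensity rescaling, erosion-based marker generation, and watershed); the function name, footprint sizes, and Otsu thresholding are illustrative assumptions rather than the exact CellProfiler settings used in this study.

```python
# Minimal sketch of the analogous preprocessing steps in Python using
# scikit-image; footprint sizes and the thresholding choice are illustrative,
# not the exact CellProfiler settings used in the study.
import numpy as np
from scipy import ndimage as ndi
from skimage import exposure, filters, morphology, segmentation

def segment_nuclei_2d(image):
    """Rough 2D analogue of the CellProfiler pipeline described above."""
    # "MedianFilter": suppress speckle-like artifacts.
    denoised = filters.median(image, morphology.disk(2))
    # "RescaleIntensity": normalize the intensity range across image batches.
    rescaled = exposure.rescale_intensity(denoised, out_range=(0.0, 1.0))
    # Foreground mask via a global threshold (assumed Otsu here).
    mask = rescaled > filters.threshold_otsu(rescaled)
    # "Erosion": shrink the mask to obtain roughly one marker per nucleus.
    markers, _ = ndi.label(morphology.binary_erosion(mask, morphology.disk(4)))
    # "Watershed": split touching nuclei using the markers.
    return segmentation.watershed(-rescaled, markers, mask=mask)
```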
Squassh can globally segment 3D objects with constant internal intensity by regulating three parameters: "Rolling ball window size," "Regularization parameter," and "Minimum object intensity." To produce visually optimal segmentation results, these parameters were adjusted independently to subtract an object within the window from the background, avoid segmenting noise-induced small intensity peaks, and force object separation.
CycleGAN was adapted from the official code 2 by replacing the 2D convolutional layers with 3D convolutional layers for this task. Half of the default channels in the intermediate layers were kept for memory saving and redundancy reduction. The ResNet with 9 blocks was chosen as the generator architecture, and the receptive field of the discriminator was reduced to 16 × 16 × 16 to improve translation performance.
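For readers unfamiliar with this kind of adaptation, the sketch below shows one way a 2D ResNet generator block can be converted to 3D in PyTorch by swapping Conv2d/InstanceNorm2d for their volumetric counterparts; the channel width and padding choices are assumptions for illustration and are not the authors' exact implementation.

```python
# Minimal sketch of the 2D-to-3D adaptation described above: a ResNet-style
# generator block with Conv2d/InstanceNorm2d swapped for 3D counterparts.
# Channel widths and padding are illustrative assumptions.
import torch
import torch.nn as nn

class ResnetBlock3D(nn.Module):
    """One residual block of a volumetric ResNet generator."""
    def __init__(self, channels: int = 128):  # half of a typical 256-channel default
        super().__init__()
        self.block = nn.Sequential(
            nn.ReplicationPad3d(1),
            nn.Conv3d(channels, channels, kernel_size=3),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.ReplicationPad3d(1),
            nn.Conv3d(channels, channels, kernel_size=3),
            nn.InstanceNorm3d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)  # residual connection

# Example: a 3D patch of shape (batch, channels, depth, height, width).
block = ResnetBlock3D(128)
out = block(torch.randn(1, 128, 16, 16, 16))
```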
Our AD-GAN model (code is available at: https://github.com/Kaiseem/AD-GAN) was built with the open-source software library PyTorch 1.4.0 on a workstation with one NVIDIA GeForce RTX 2080Ti GPU. The training process took 9 – 11 min per epoch, and the segmentation of an unseen image took <1 s. The direct outputs of CycleGAN or AD-GAN were semantic segmentation results; thus, a post-processing step using the OpenCV library was applied to obtain segmented nuclei instances. Specifically, morphological erosion with a cube of 3 × 3 × 3 voxels could filter out noise or very small instances. The erosion results could serve as markers for the watershed algorithm to separate the clustered nuclei into instances, and the binarized outputs were the segmented nuclei.
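A rough sketch of this instance post-processing is given below. Note that, while the study used the OpenCV library, the sketch relies on SciPy and scikit-image because their morphology and watershed routines operate directly on 3D volumes; the distance-transform elevation map and the minimum-size cutoff are illustrative assumptions.

```python
# Minimal sketch of the instance post-processing described above, using
# SciPy/scikit-image in place of OpenCV so the operations run on 3D volumes.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def instances_from_semantic(semantic_mask: np.ndarray, min_voxels: int = 27):
    """Turn a binary 3D semantic mask into labeled nuclei instances."""
    mask = semantic_mask.astype(bool)
    # Erosion with a 3x3x3 cube removes noise/tiny fragments and yields
    # roughly one seed region per nucleus.
    seeds = ndi.binary_erosion(mask, structure=np.ones((3, 3, 3)))
    markers, _ = ndi.label(seeds)
    # Marker-controlled watershed on the inverted distance transform
    # separates clustered nuclei into individual instances.
    distance = ndi.distance_transform_edt(mask)
    labels = watershed(-distance, markers, mask=mask)
    # Drop instances smaller than an assumed minimum size.
    for lab, count in zip(*np.unique(labels, return_counts=True)):
        if lab != 0 and count < min_voxels:
            labels[labels == lab] = 0
    return labels
```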
(1) Comparison of performance under low cell density

An original CLSM image with low cell density is shown in Figure 5A and its grayscale image is shown in Figure S1, which demonstrates the initial stage of scaffold-based cell culture. Both of them were generated using Mayavi2 by maximum intensity projection. In Figure 5A, it is hard to identify nuclei boundaries for cells adhered on the scaffold fibers. The corresponding 2D slices at depths of 8 µm, 24 µm, and 40 µm below the surface of the scaffold are shown in the first column of Figure 6.
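For reference, a maximum intensity projection simply collapses the confocal stack along the optical axis; the minimal NumPy sketch below illustrates the operation, although the renderings in Figure 5 and Figure S1 were produced with Mayavi2.

```python
# Minimal sketch of a maximum intensity projection (MIP): collapse a 3D
# confocal stack along the optical (z) axis by keeping the brightest voxel
# in each (y, x) column. Stack dimensions here are illustrative only.
import numpy as np

def max_intensity_projection(stack: np.ndarray, axis: int = 0) -> np.ndarray:
    """stack: CLSM volume of shape (z, y, x); returns a 2D projection."""
    return stack.max(axis=axis)

# Example with a synthetic 48-slice stack.
volume = np.random.rand(48, 512, 512).astype(np.float32)
mip = max_intensity_projection(volume)  # shape (512, 512)
```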
As indicated by the white rectangle box, cells were observed to adhere on top of the fibers at 8 µm, and on the fiber side walls at 24 µm. This indicates that cells can attach to the varied fiber surfaces. Only a small number of blurred nuclei could be observed at the depth of 40 µm, since the laser scanning capability of CLSM was seriously blocked by the non-transparent fibers and cell clusters[25]. Obviously, most existing imaging technologies and protocols originally designed for 2D culture systems are insufficient to visualize 3D cell culture models at deeper depths, and technologies with more powerful 3D visualization capabilities are expected.

To demonstrate the 3D segmentation results more comprehensively, the outputs of the AD-GAN model with volume renderings are shown in Figure 5B, and the corresponding results from CellProfiler, CycleGAN, and Squassh are shown in Figures S2-S4, respectively. The slices of these results at depths of 8 µm, 24 µm, and 40 µm are shown in Figure 6. A demonstration of the nuclei segmentation process using AD-GAN is shown in the supplementary video, which consists of an original CLSM image with low cell density, its grayscale image, and the segmented 3D nuclei results.
The segmentation results obtained from CellProfiler and CycleGAN at 8 µm and 24 µm look similar in two dimensions. In 3D visualization, as shown in Figures S2 and S4, CellProfiler tended to identify elongated nuclei. This is probably attributed to the Otsu threshold used in CellProfiler, which can only distinguish nuclei from the foreground and background, but not the shadows above or below them. Using Squassh, adjacent nuclei were found to be segmented as one object at the depth of 8 µm. Therefore, the size of the segmented nuclei was obviously larger than those identified by the other methods. More often, cells adhering to the scaffold fibers led to more geometrically complex scenarios. As indicated by the white rectangle box under the third column, Squassh

Figure 5. (A and B) Confocal laser scanning microscopy image and 3D nuclei segmentation under lower cell density when culturing at day 1 using A549.