of the powder, the solution was left at rest at 4°C overnight before printing [7]. To increase variability in the dataset, the Pluronic solution was used both as is (transparent) and colored using a red or a blue colorant.
3.2. Software implementation and hardware for training

All code to process data, train and evaluate the model, and implement the control loop was written in Python. Data processing was performed using the following libraries: Pandas [36] for data handling, OpenCV [37] for image preprocessing, and TensorFlow [38] for efficient data loading. The models were defined and trained using the Keras API [39] and the scikit-learn library [40]. All random function calls (i.e., for data pre-processing and splitting, training, and evaluation), as well as the global seed value of the used modules, were initialized with the same integer (i.e., 1234) to ensure repeatability of the results across multiple runs. The code to reproduce the training and evaluation results is available at https://doi.org/10.5281/zenodo.7024016. All data analysis, statistical tests, and graphing were performed using GraphPad Prism (version 8.0.2).
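For reference, a minimal sketch of this kind of global seeding is shown below; the exact calls in the released code may differ.

```python
import random

import numpy as np
import tensorflow as tf

SEED = 1234  # the single integer used to seed every module

def set_global_seed(seed: int = SEED) -> None:
    """Initialize all random number generators with the same seed."""
    random.seed(seed)         # Python standard library RNG
    np.random.seed(seed)      # NumPy (also used by scikit-learn)
    tf.random.set_seed(seed)  # TensorFlow/Keras operations

set_global_seed()
```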
All models were trained on a dedicated computer with an Intel Core i9 processor (running at 3.3–4.6 GHz), 32 GB of RAM, and a GeForce RTX 3090 graphics card. A laptop with an Intel Core i7 processor (running at 1.8 GHz) and 16 GB of RAM was used to evaluate the prediction speed of the final optimized model on the CPU only.
3.3. Dataset definition

3.3.1. Dataset design and collection
In this work, we examined two distinct extrusion modalities: pneumatic-assisted and piston-actuated. These two methods were chosen to expand the variety of printing scenarios and hence enhance the trained model's capacity for generalization. In particular, we employed an Allevi 1 bioprinter for the pneumatically assisted mechanism and a purpose-built bioprinter for the piston-actuated one [41].
To train and evaluate the DL model, we built a dataset of videos using a high-definition webcam (Logitech C920, 1920 × 1080 resolution). The webcam was positioned in front of the printers at varying distances and recorded the whole printing process from this viewpoint. Parallelepiped-shaped scaffolds with a 10 mm × 10 mm base and a 5 mm height were printed at ambient temperature.

After printing, each video was assigned a label according to the final shape of the scaffold: “Ok,” under-extrusion (“under_e”; i.e., not enough material was extruded), or over-extrusion (“over_e”; i.e., an excess of material was deposited). To model multiple printing scenarios, each video corresponded to a distinct set of parameters. The varied parameters are listed in Table 1. The dataset is available at https://doi.org/10.5281/zenodo.7024007.
In Table 1, the layer height (LH) represents the distance between the needle and the previously printed layer (or the printing plate for the first layer) and is expressed as a percentage of the needle diameter D_N. The extrusion multiplier (EM, also called “flow”) is a modifier of the amount of material being extruded; it is computed as a percentage of the standard flow required to extrude, in free air, a cylindrical strand with a diameter equal to the nozzle diameter D_N [7].
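The exact formula is not spelled out here, but the definition implies a simple geometric computation: a cylindrical strand of diameter D_N deposited at travel speed v corresponds to a volumetric flow of π(D_N/2)² · v, which EM then scales. A hypothetical sketch (all numeric values are illustrative, not taken from Table 1):

```python
import math

def standard_flow(nozzle_diameter_mm: float, travel_speed_mm_s: float) -> float:
    """Volumetric flow (mm^3/s) that deposits a cylindrical strand with a
    diameter equal to the nozzle diameter at the given travel speed."""
    return math.pi * (nozzle_diameter_mm / 2) ** 2 * travel_speed_mm_s

def actual_flow(em_percent: float, nozzle_diameter_mm: float,
                travel_speed_mm_s: float) -> float:
    """Flow after applying the extrusion multiplier (EM, in percent)."""
    return em_percent / 100 * standard_flow(nozzle_diameter_mm, travel_speed_mm_s)

def layer_height_mm(lh_percent: float, needle_diameter_mm: float) -> float:
    """Needle-to-layer distance from the LH percentage of D_N."""
    return lh_percent / 100 * needle_diameter_mm

# Example: a 0.41 mm nozzle moving at 10 mm/s with EM = 120% -> ~1.58 mm^3/s
q = actual_flow(120, 0.41, 10)
```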
3.3.2. Data partitioning and image pre-processing
The collected videos were first randomly downsampled to the least-represented class to ensure class balance in the dataset. A total of 345 videos (115 per class) were employed for model training and evaluation. The videos were randomly partitioned into training and testing sets using 90% and 10% splits, respectively (splits stratified over the video class).
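A minimal sketch of such a stratified split with scikit-learn follows; the variable names are illustrative, not those of the released code.

```python
from sklearn.model_selection import train_test_split

# video_paths: list of video file paths (assumed defined earlier)
# labels: matching list of class labels ("ok", "under_e", "over_e")
train_videos, test_videos, train_labels, test_labels = train_test_split(
    video_paths,
    labels,
    test_size=0.10,     # 90%/10% train/test partition
    stratify=labels,    # preserve class proportions in both sets
    random_state=1234,  # the global seed used throughout
)
```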
Each video was converted to a frame sequence by sampling one frame per second. Since the printing error was observed to be more apparent at the end of the video, only the last 90 frames of each video were used (totaling around 30K individual frames). Each frame was then passed through a pre-processing step to reduce the problem complexity. The main steps of the pipeline are summarized in Figure 1A. Briefly, each frame was converted from the red green blue (RGB) to the hue saturation value (HSV) color space to extract the value (V) channel and reduce the influence of inter-frame illumination variations [42]. Then, an ROI of 128 × 256 pixels was cropped around the needle region. This shape and size were chosen to exclude, as far as possible, background details that the model might otherwise pick up. Contrast-limited adaptive histogram equalization (CLAHE) was applied to each grayscale ROI to increase the image contrast. Finally, each image was normalized from the original range 0–255 to the range 0–1 to avoid numerical under- or overflow during training.
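A sketch of these steps with OpenCV is given below. The ROI placement and the CLAHE parameters are assumptions, since the actual crop depends on the needle position in the frame and the equalization settings are not reported here.

```python
import cv2
import numpy as np

def sample_last_90_frames(video_path: str) -> list:
    """Sample one frame per second and keep only the last 90 frames."""
    cap = cv2.VideoCapture(video_path)
    fps = int(round(cap.get(cv2.CAP_PROP_FPS)))
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % fps == 0:  # one frame per second
            frames.append(frame)
        i += 1
    cap.release()
    return frames[-90:]

def preprocess_frame(frame_bgr: np.ndarray, roi_top_left: tuple) -> np.ndarray:
    """V-channel extraction, ROI crop, CLAHE, and [0, 1] normalization."""
    # OpenCV reads frames as BGR; take the V channel of the HSV conversion
    value = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 2]

    # Crop a 128 x 256 pixel ROI around the needle (placeholder coordinates)
    y, x = roi_top_left
    roi = value[y:y + 256, x:x + 128]

    # Contrast-limited adaptive histogram equalization (illustrative settings)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    roi = clahe.apply(roi)

    # Normalize from 0-255 to 0-1 for numerically stable training
    return roi.astype(np.float32) / 255.0
```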
During training only, the frames were augmented by applying a set of random operations that simulate additional variability not accounted for in the dataset generation, as shown in Figure 1B. A summary of all applied transformations is given in Table 2, alongside the values chosen for each operation after preliminary experimentation. The augmentation transformations were applied in sequence and online, so that at each training epoch the model was presented with a slightly different set of images.
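Online augmentation of this kind can be sketched with Keras preprocessing layers. The specific operations and ranges below are illustrative; the actual set and values are those reported in Table 2.

```python
import tensorflow as tf

# Applied only to training batches; each epoch sees newly randomized images
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.02),           # small rotations
    tf.keras.layers.RandomTranslation(0.05, 0.05),  # small shifts
    tf.keras.layers.RandomContrast(0.1),
])

# train_ds: a tf.data.Dataset of (image, label) batches, assumed defined earlier
train_ds = train_ds.map(
    lambda x, y: (augment(x, training=True), y),
    num_parallel_calls=tf.data.AUTOTUNE,
)
```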
3.4. Model optimization procedure
The two main requirements that guided the model architecture selection and optimization were: (i) good generalization

