(178th position). The lowest-ranking KNN models handled datasets that were either not processed with any normalization method (none) or processed with the "Min–Max" method, within the 0–10 (mm_0–10) range, using either 10 or 15 attributes and either the default or the first set of optimized (opt_01) hyperparameter values.

The MLP models scored the second-highest runtime values, ranging from 18.14386 s (125th position) to 362.46571 s (13th position). The models with the lowest runtime scores used datasets processed with the "StandardScaler" (std) or the "Min–Max" method, within the 0–1 (mm_0–1) range, used either 10 or 15 attributes, and the default set of hyperparameter values.
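Throughout these rankings, each dataset label encodes a normalization method and range (none, std, mm_0–1, mm_0–10, mm_0–100, mm_0–1000) plus a hyperparameter set (default, opt_01, or opt_02). For reference, here is a minimal sketch of those normalization variants, assuming a scikit-learn workflow; the normalize helper is illustrative, not taken from the study:

```python
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# One entry per dataset variant discussed in this section; each key
# mirrors the label used in the model names (e.g., mm_0-100).
SCALERS = {
    "none": None,                                      # no normalization
    "std": StandardScaler(),                           # zero mean, unit variance
    "mm_0-1": MinMaxScaler(feature_range=(0, 1)),
    "mm_0-10": MinMaxScaler(feature_range=(0, 10)),
    "mm_0-100": MinMaxScaler(feature_range=(0, 100)),
    "mm_0-1000": MinMaxScaler(feature_range=(0, 1000)),
}

def normalize(X_train, X_test, key):
    """Fit the chosen scaler on the training split only, then
    transform both splits, to avoid leaking test statistics."""
    scaler = SCALERS[key]
    if scaler is None:
        return X_train, X_test
    return scaler.fit_transform(X_train), scaler.transform(X_test)
```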
The third-highest runtime values were scored by the RF models, with values ranging from 11.95152 s (170th position) to 30.22087 s (85th position). The models with the lowest runtime scores handled datasets that were processed with either the "StandardScaler" (std) or the "Min–Max" method, within the 0–1 (mm_0–1) range, used the 15 most important attributes, and the default set of hyperparameters.
The XGBoost models ranked fourth in runtime, with values ranging from 4.96984 s (214th position) to 17.71116 s (179th position). The lowest-scoring XGBoost models used datasets normalized with all different methods, used either 15 or 10 attributes, and the first set of optimized (opt_01) hyperparameter values.
The LR models ranked fifth, with runtime values ranging from 1.3429 s (286th position) to 3.31659 s (215th position). The distribution of the models' ranking positions did not show significant dispersion, with 92.6% of them (50/54) ranking between the 215th and 265th positions. The highest-ranking models handled datasets that were either not processed with any normalization method (none) or processed with the "Min–Max" method, within the 0–10 (mm_0–10) range, used either 15 or 10 attributes, and the first set of optimized (opt_01) hyperparameter values.
The DT models showed the lowest runtime values, ranging from 1.09218 s (324th position) to 1.72228 s (261st position). The distribution of the models' ranking positions did not show significant dispersion. The models with the lowest runtime values handled datasets processed with the "Min–Max" method, within the 0–1 and 0–10 (mm_0–1 and mm_0–10, respectively) ranges, using 10 attributes and the second set of optimized (opt_02) hyperparameter values.
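The runtime figures above are wall-clock training times, and the quality metrics reported in the next subsection are standard classification scores. A minimal sketch of how such figures are typically collected, assuming scikit-learn-style models; the evaluate helper is illustrative, not the study's code:

```python
import time
from sklearn.metrics import (f1_score, precision_score,
                             recall_score, roc_auc_score)

def evaluate(model, X_train, y_train, X_test, y_test):
    """Fit a model and collect the five figures reported in this
    section: wall-clock training runtime plus four quality metrics."""
    start = time.perf_counter()
    model.fit(X_train, y_train)
    runtime = time.perf_counter() - start
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]  # positive-class probability
    return {
        "precision": precision_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
        "auc_roc": roc_auc_score(y_test, y_prob),
        "runtime_s": runtime,
    }
```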
4.2.6. Overall ranking—highest scorers
Based on the overall performance of all models (Figure 19, Tables 3 and 4), the XGBoost models ranked first. The highest overall score model, "22_mm_0–100_opt_01," achieved 93.76% in precision, 95.47% in recall, 91.13% in F1-score, 97.86% in AUC-ROC, and a runtime of 6.67306 s. This result is somewhat expected, as XGBoost is designed to be an effective and scalable method for training ML models, particularly suitable for large datasets, such as the one used in this study. XGBoost also has a strong history of achieving high-quality results in various ML competitions.32 The success of the top XGBoost model highlights the positive impact of hyperparameter tuning, specifically the use of the first set of optimized hyperparameters and the "Min–Max" normalization method with a range of 0 to 100.
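As an illustration of the boosting setup described above, the following is a minimal, hedged XGBoost training sketch: the hyperparameter values are hypothetical stand-ins for the study's opt_01 set, and the synthetic data only makes the snippet self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in data; the study's dataset is not reproduced here.
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hypothetical values standing in for the opt_01 hyperparameter set.
# Boosting fits trees sequentially, each one correcting the residual
# errors of the ensemble built so far.
model = XGBClassifier(
    n_estimators=300,   # number of boosting rounds
    max_depth=6,        # depth of each boosted tree
    learning_rate=0.1,  # shrinkage on each tree's contribution
    subsample=0.8,      # row sampling per boosting round
    eval_metric="auc",  # track AUC-ROC during training
)
model.fit(X_train, y_train)
```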
In the second position were the RF models, with the highest overall ranking model being "22_mm_0–1000_opt_02." This model achieved 93.66% in precision, 96.99% in recall, 91.13% in F1-score, 97.71% in AUC-ROC, and a runtime of 29.84745 s. The excellent overall performance of the RF models can be attributed to the design of the RF algorithm: each DT in the ensemble is trained on a different subset of the data, and aggregating their predictions decreases the variance of the individual DTs, leading to highly accurate results. The main reason the RF models scored lower than the XGBoost ones is that the RF algorithm uses a fixed set of parameters for its entire ensemble, whereas XGBoost adjusts the internal parameters of its ensemble iteratively, enabling it to handle large-scale data more effectively. The highest-scoring RF model indicates that using the second set of optimized hyperparameters and the "Min–Max" normalization method with a range of 0–1000 played an important role in its performance.
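The variance-reduction argument above is visible directly in a random forest's configuration: with bootstrap resampling, each DT sees a different resample of the rows, and the forest aggregates their votes. A hedged sketch follows; the hyperparameters are illustrative, not the study's opt_02 set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data; hyperparameters are illustrative only.
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,     # number of DTs in the ensemble
    bootstrap=True,       # each tree trains on a resample of the rows
    max_features="sqrt",  # random feature subset at each split
    random_state=0,
)
forest.fit(X, y)  # averaging the trees' votes reduces single-DT variance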
The MLP models ranked third, with the highest overall scoring model being "22_mm_0–1000_opt_01." This model achieved 93.46% in precision, 95.62% in recall, 90.76% in F1-score, 97.29% in AUC-ROC, and a runtime of 133.76172 s. The performance of the MLP models can be attributed to the MLP algorithm's ability to address complex non-linear problems with both small and large datasets. However, the extent to which each independent variable affects the dependent variable can be challenging to determine, and the performance of MLP models is heavily dependent on the quality of training, which can be time-consuming. The top-performing MLP model suggests that using the first set of optimized hyperparameters and the "Min–Max" normalization method with a range of 0 to 1000 boosted its performance.
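For completeness, a minimal MLP sketch in the same style; the architecture and iteration budget are hypothetical, not the study's opt_01 values, and the max_iter line hints at why MLP training dominated runtime.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; architecture and budget are hypothetical.
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)

mlp = MLPClassifier(
    hidden_layer_sizes=(64, 32),  # two hidden layers of ReLU units
    activation="relu",            # non-linearity behind the model's flexibility
    max_iter=500,                 # iterative training drives up runtime
    random_state=0,
)
mlp.fit(X, y)
```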
The DT models came in fourth, with the highest overall score achieved by "22_mm_0–1000_opt_01." This model achieved 93.04% in precision, 93.69% in recall, 90.03% in F1-score, 95.67% in AUC-ROC, and a runtime of 1.45128 s. The results of the DT models can be attributed to the design of the algorithm, which, while useful for

