Figure 5. Results of the proposed BiLSTM model
recently proposed optimization technique inspired by molecular dynamics. To ensure a comprehensive comparison, we also incorporate the Whale Optimization Algorithm (WOA) [22], which is inspired by the foraging behavior of whales and was introduced to the literature around the same time as SOA. Both ASO and WOA have demonstrated considerable potential in various domains and have been successfully applied to SA tasks. In this study, the ASO and WOA algorithms are applied in binary form for SA, as suggested in [94] and [95]. These algorithms are implemented in the Python programming language, with the parameters used in each algorithm detailed in Table 5. The parameters for the five algorithms are selected based on the values specified in their respective original articles, and the performance of all five algorithms is evaluated under standard conditions.
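Since both ASO and WOA search over continuous positions, a transfer function is typically used to map each position component to a bit that marks a feature as selected or dropped. The following minimal sketch shows sigmoid-based binarization with NumPy; the function names are illustrative, and the exact transfer function used in the binary variants of [94] and [95] may differ.

import numpy as np

def sigmoid_transfer(position: np.ndarray) -> np.ndarray:
    # Map each continuous component to a selection probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-position))

def binarize(position: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # A feature is kept (1) when a uniform draw falls below its
    # transfer probability, otherwise it is dropped (0).
    return (rng.random(position.shape) < sigmoid_transfer(position)).astype(int)

# Example: one agent over a 10-dimensional feature space.
rng = np.random.default_rng(42)
mask = binarize(rng.normal(size=10), rng)
print(mask)  # e.g. [1 0 1 ...]; positions equal to 1 mark the selected features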
Following the decisions outlined in Section 4.2.2, we select the MNB algorithm as the classifier. Additionally, we apply punctuation and stopword removal as preprocessing steps to the dataset. Each of the five algorithms is executed 5 times with a population size of 100 and a total of 50 epochs.
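As an illustration of how each candidate feature mask can be scored under this setup, the sketch below trains MNB on the selected columns of a document-term matrix and returns the F-score on a held-out split. The toy data, split strategy, and fitness form are assumptions for illustration, not the authors' exact implementation.

import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def fitness(mask, X, y):
    # Score one candidate subset: train MNB on the selected columns
    # and return the F-score on a held-out split (higher is better).
    selected = np.flatnonzero(mask)
    if selected.size == 0:  # an empty mask leaves nothing to train on
        return 0.0
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[:, selected], y, test_size=0.2, random_state=0, stratify=y)
    return f1_score(y_te, MultinomialNB().fit(X_tr, y_tr).predict(X_te))

# Toy data: 200 documents, 50 count features, binary sentiment labels.
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50))
y = rng.integers(0, 2, size=200)
print(round(fitness(rng.integers(0, 2, size=50), X, y), 3))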
The results of this section are summarized in Table 6. The experimental results reveal significant differences in performance among the tested algorithms, as measured by their F-scores. BSO demonstrated the best performance with an F-score of 0.91, highlighting its strong balance between precision and recall. ASO followed closely with an F-score of 0.90, underscoring its robust capability, particularly in recall (0.921). Meanwhile, BA achieved a comparable F-score of 0.894, whereas HS and WOA lagged behind with F-scores of 0.873 and 0.84, respectively, indicating relatively weaker performance in balancing precision and recall.
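For reference, the F-score used here is the harmonic mean of precision and recall, so a high recall alone does not guarantee a high score. The short check below uses ASO's reported recall of 0.921; the precision value is back-derived for illustration and is not reported in the paper.

def f_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# A precision near 0.88 (assumed, not reported) combined with the
# reported recall of 0.921 reproduces ASO's F-score of 0.90.
print(round(f_score(0.880, 0.921), 2))  # -> 0.9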
In terms of feature selection efficiency, the results highlight an important observation regarding the trade-off between feature selection size and performance. Despite selecting the largest average feature set (12754), the BSO algorithm achieved the highest F-score (0.91), indicating its ability to leverage a larger feature pool effectively. In contrast, the other four algorithms (ASO, BA, HS, and WOA) selected fewer features (all averaging below 10200) but demonstrated lower F-scores, ranging from 0.84 (WOA) to 0.90 (ASO). This suggests that overly aggressive feature reduction may have compromised their performance by excluding valuable information.

Interestingly, while reduced feature size typically enhances computational efficiency, the results indicate that selecting fewer features alone does not guarantee improved performance. For instance, WOA, which selected the smallest feature set (9378), achieved the lowest F-score (0.84), demonstrating that a balance between feature size and model effectiveness is crucial. Similarly, ASO, with an average feature size of 9899, performed well in F-score (0.90) but did not surpass BSO.

BSO emerges as the most balanced approach, achieving superior performance metrics while maintaining a reasonable feature size, albeit larger than its counterparts. This highlights its capacity to extract and utilize critical features more effectively than the other algorithms, making it an ideal choice for scenarios where both feature diversity and high accuracy are essential.

