
Artificial Intelligence in Health                                AI model for cardiovascular disease prediction
Figure 12. Performance analysis of K-means algorithms on datasets of different sizes.

Figure 13. Performance analysis of artificial neural network-genetic algorithms on datasets of different sizes.

Figure 14. Performance analysis of K-nearest neighbor algorithms on datasets of different sizes.

reduced from 2000 to 500. MSE increased with a reduced number of datasets. Specificity increased as the number of datasets was reduced from 2000 to 1500, decreased at 1000 CVD datasets, but increased sharply when the number of datasets was reduced to 500. Meanwhile, the precision decreased as the number of datasets was reduced from 2000 to 1500, increased at 1000 datasets, and decreased again as the number of datasets was reduced to 500.

The performance of the KNN classification technique using 2000, 1500, 1000, and 500 datasets is presented in Figure 14. The classification accuracy, MSE, sensitivity, specificity, and precision were 71.70%, 0.2838, 100%, 70.00%, and 67.90%, respectively, for 2000 datasets. Based on the results, the system performed better at 2000 datasets when compared with 1500, 1000, and 500 datasets; hence, 2000 datasets were adopted for the system. When the dataset number was reduced to 1500, the accuracy, MSE, sensitivity, specificity, and precision were 55.60%, 0.4440, 100%, 100%, and 55.60%, respectively. For 1000 datasets, the accuracy, MSE, sensitivity, specificity, and precision were 56.70%, 0.4330, 100%, 100%, and 56.70%, respectively. For 500 datasets, the accuracy, MSE, sensitivity, specificity, and precision were 54.00%, 0.4600, 100%, 100%, and 50.00%, respectively. The results showed that KNN retained excellent sensitivity and specificity of 100% even at a reduced number of datasets. Accuracy and precision decreased gradually as the number of datasets was reduced from 2000 to 500, while MSE increased steadily.

The performance results of the model using the SVM classification technique on 2000, 1500, 1000, and 500 datasets are presented in Figure 15. The classification accuracy, MSE, sensitivity, specificity, and precision were 78.30%, 0.2170, 91.70%, 58.30%, and 96.10%, respectively, for 2000 datasets. According to the results, the system performed better at 2000 datasets, which were hence adopted for the system. When the dataset number was reduced to 1500, the accuracy, MSE, sensitivity, specificity, and precision were 71.10%, 0.2890, 84.00%, 55.00%, and 70.00%, respectively. For 1000 datasets, the accuracy, MSE, sensitivity, specificity, and precision were 73.30%, 0.2670, 100%, 75.00%, and 85.70%, respectively, and for 500 datasets, the same set of performance parameters was measured at 58.30%, 0.4170, 66.70%, 50.00%, and 57.10%, respectively. The accuracy, sensitivity, specificity, and precision decreased as the number of datasets was reduced from 2000 to 1500, then increased at 1000 datasets before decreasing again at 500 datasets. The MSE increased when the datasets were reduced to 1500, decreased at 1000 datasets, then rose to its peak at 500 datasets.

The performance results of the system using the DT algorithm for 2000, 1500, 1000, and 500 datasets are presented in Figure 16. For 2000 datasets, the classification accuracy, MSE, sensitivity, specificity, and precision were 75.00%, 0.2500, 88.90%, 54.20%, and 74.40%, respectively. It was observed that the system performed better at 2000 datasets; therefore, they were adopted for the system. Based on the results, when the datasets were reduced to 1500, the accuracy, MSE, sensitivity, specificity, and precision were
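The five metrics reported throughout this section can be computed from a binary confusion matrix. The sketch below is illustrative rather than the authors' code; it assumes 0/1 labels with 1 meaning "CVD present", and notes that for binary labels the MSE reduces to the misclassification rate, which matches the reported pattern of MSE = 1 - accuracy (e.g., 71.70% accuracy with MSE 0.2838, 78.30% with 0.2170).

```python
# Illustrative metric computation (not the paper's implementation).
# Assumes binary 0/1 labels; positive class (1) = disease present.

def performance_metrics(y_true, y_pred):
    """Return accuracy, MSE, sensitivity, specificity, and precision."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = len(y_true)
    return {
        "accuracy": (tp + tn) / n,
        # For 0/1 labels, mean squared error equals the fraction of
        # misclassified samples, i.e., 1 - accuracy.
        "mse": (fp + fn) / n,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,  # true-positive rate
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,  # true-negative rate
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
    }
```

A classifier that labels everything positive would score 100% sensitivity but low precision, which is one way a model can show the KNN-like pattern of perfect sensitivity alongside falling precision on smaller datasets.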

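The dataset-size experiment described above (training each classifier on 2000, 1500, 1000, and 500 records and comparing performance) can be sketched as follows. This is a hypothetical reconstruction, not the study's code: it substitutes synthetic data for the CVD dataset, assumes 13 features as in common heart-disease datasets, and uses standard scikit-learn estimators with default hyperparameters, which the paper does not specify.

```python
# Hypothetical sketch of the dataset-size reduction experiment.
# Synthetic data stands in for the CVD dataset used in the study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 13))  # 13 features is an assumption
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
}

results = {}
for size in (2000, 1500, 1000, 500):
    X_sub, y_sub = X[:size], y[:size]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_sub, y_sub, test_size=0.3, random_state=0, stratify=y_sub)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        results[(name, size)] = accuracy_score(y_te, model.predict(X_te))
```

With real data, each `results[(name, size)]` entry would correspond to one bar in Figures 14-16; the train/test split ratio here is also an assumption.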

            Volume 1 Issue 1 (2024)                         52                        https://doi.org/10.36922/aih.1746