   Networks Via Gradient-Based Localization. In: Proceedings of the IEEE International Conference on Computer Vision; 2017. p. 618-626.
   doi: 10.1109/ICCV.2017.74

11. Zheng F, Zhang G, Song Z. Comparison of different implementations of MFCC. J Comput Sci Technol. 2001;16(6):582-589.
   doi: 10.1007/BF02943243

12. Gauy MM, Finger M. Audio MFCC-Gram Transformers for Respiratory Insufficiency Detection in COVID-19. In: Proceedings XIII Simpósio Brasileiro de Tecnologia da Informação e da Linguagem Humana, STIL; 2021. p. 143-152.
   doi: 10.5753/stil.2021.17793

13. Kong Q, Cao Y, Iqbal T, Wang Y, Wang W, Plumbley MD. PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition. IEEE/ACM Trans Audio Speech Lang Process. 2020;28:2880-2894.
   doi: 10.1109/TASLP.2020.3030497

14. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Adv Neural Inform Process Syst. 2017;30:5998-6008.
   doi: 10.5555/3295222.3295349

15. Bartl-Pokorny KD, Pokorny FB, Batliner A, et al. The voice of COVID-19: Acoustic correlates of infection. J Acoust Soc Am. 2021;149(6):4377.
   doi: 10.1121/10.0005194

16. Berti LC, Spazzapan EA, Pereira PL, et al. Mudanças nos Parâmetros Acústicos da Voz em Brasileiros com COVID-19. In: XXIX Congresso Brasileiro e o IX Congresso Internacional de Fonoaudiologia; 2021. p. 2819.

17. Berti LC, Spazzapan EA, Queiroz M, et al. Fundamental frequency related parameters in Brazilians with COVID-19. J Acoust Soc Am. 2023;153:576-585.
   doi: 10.1121/10.0016848

18. Fernandes-Svartman FR, Berti LC, Martins MVM, de Medeiros BR, Queiroz M. Temporal Prosodic Cues for COVID-19 in Brazilian Portuguese Speakers. In: Proceedings Speech Prosody; 2022. p. 210-214.
   doi: 10.21437/SpeechProsody.2022-43

19. Schuller BW, Batliner A, Bergler C, et al. The INTERSPEECH 2021 Computational Paralinguistics Challenge: COVID-19 Cough, COVID-19 Speech, Escalation and Primates. In: 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH; 2021.
   doi: 10.21437/Interspeech.2021-19

20. Casanova E, Cândido A, Fernandes RC, et al. Transfer Learning and Data Augmentation Techniques to the COVID-19 Identification Tasks in COMPARE 2021. In: 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH; 2021. p. 4301-4305.
   doi: 10.21437/Interspeech.2021-1798

21. Gauy MM, Berti LC, Cândido Júnior A, et al. Discriminant Audio Properties in Deep Learning Based Respiratory Insufficiency Detection in Brazilian Portuguese. In: Artificial Intelligence in Medicine: 21st International Conference on Artificial Intelligence in Medicine; 2023. p. 271-275.
   doi: 10.1007/978-3-031-34344-5_32

22. Sobahi N, Atila O, Deniz E, Sengur A, Acharya UR. Explainable COVID-19 detection using fractal dimension and vision Transformer with Grad-CAM on cough sounds. Biocybern Biomed Eng. 2022;42(3):1066-1080.
   doi: 10.1016/j.bbe.2022.08.005

23. Moujahid H, Cherradi B, Al-Sarem M, et al. Combining CNN and Grad-CAM for COVID-19 disease prediction and visual explanation. Intell Autom Soft Comput. 2022;32(2):723-745.
   doi: 10.32604/iasc.2022.022179

24. Panwar H, Gupta P, Siddiqui MK, Morales-Menendez R, Bhardwaj P, Singh V. A deep learning and Grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-scan images. Chaos Solitons Fractals. 2020;140:110190.
   doi: 10.1016/j.chaos.2020.110190

25. Gauy MM, Finger M. Pretrained Audio Neural Networks for Speech Emotion Recognition in Portuguese. In: First Workshop on Automatic Speech Recognition for Spontaneous and Prepared Speech & Speech Emotion Recognition in Portuguese, SE&R; 2022.

26. Xu X, Dinkel H, Wu M, Xie Z, Yu K. Investigating Local and Global Information for Automated Audio Captioning with Transfer Learning. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2021. p. 905-909.
   doi: 10.1109/ICASSP39728.2021.9413982

27. Zhang H, Cisse M, Dauphin YN, Lopez-Paz D. mixup: Beyond Empirical Risk Minimization. In: International Conference on Learning Representations; 2018.

28. Xu K, Feng D, Mi H, et al. Mixup-based acoustic scene classification using multi-channel convolutional neural network. In: Advances in Multimedia Information Processing. Vol. 11166. Cham: Springer; 2018. p. 14-23.
   doi: 10.1007/978-3-030-00764-5_2

29. Park DS, Chan W, Zhang Y, et al. SpecAugment: A simple data augmentation method for automatic speech recognition. Proc Interspeech. 2019;1:2613-2617.
   doi: 10.21437/Interspeech.2019-2680

30. Brigham EO, Morrow R. The fast Fourier transform. IEEE Spectrum. 1967;4(12):63-70.
