Writing–original draft: All authors
Writing–review & editing: All authors

Ethics approval and consent to participate
Not applicable.

Consent for publication
Not applicable.

Availability of data
MIMIC-CXR can be accessed at https://physionet.org/content/mimic-cxr/2.0.0/.

Further disclosure
Part of the findings have been presented at a final project event at the Stanford University campus in Palo Alto, California, in May 2022. The paper has been uploaded to a preprint server (doi: 10.48550/arXiv.2405.06802).

References
1. Takacs N, Makary MS. Are we Prepared for a Looming Radiologist Shortage? Radiology Today. Available from: https://www.radiologytoday.net/archive/rt0619p10.shtml [Last accessed on 2024 Jun 03].
2. Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805; 2019. Available from: https://aclanthology.org/N19-1423 [Last accessed on 2024 Jun 03].
3. Johnson AEW, Pollard TJ, Shen L, et al. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. Nat Sci Data. 2019;24(1):1-18.
4. De Padua RS, Qureshi I. Colab Notebook with Fine-Tuned T5 Model for Radiology Summarization. Available from: https://colab.research.google.com/drive/14A3j4bsTiC3hh3GdbLxwWGtwZoFiwciv [Last accessed on 2024 Jun 03].
5. Chen Z, Gong Z, Zhuk A. Predicting Doctor's Impression for Radiology Reports with Abstractive Text Summarization. CS224N: Natural Language Processing with Deep Learning. Stanford University; 2021. Available from: https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1214/reports/final_reports/report005.pdf [Last accessed on 2024 Jun 03].
6. Alsentzer E, Murphy JR, Boag W, et al. Publicly Available Clinical BERT Embeddings. In: Proceedings of the 2nd Clinical Natural Language Processing Workshop (ClinicalNLP); 2019. p. 72-78. Available from: https://aclanthology.org/W19-1909 [Last accessed on 2024 Jun 03].
7. Lin CY. ROUGE: A Package for Automatic Evaluation of Summaries. In: Proceedings of the ACL Workshop Text Summarization Branches Out. Barcelona, Spain: Association for Computational Linguistics; 2004. p. 74-81.
8. Raffel C, Shazeer N, Roberts A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res. 2020;21(140):1-67.
9. Li Y, Wehbe RM, Ahmad FS, Wang H, Luo Y. Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences. J Am Med Inform Assoc. 2022;29(2):273-281.
   doi: 10.48550/arXiv.2201.11838
10. Yalunin A, Umerenkov D, Kokh V. Abstractive Summarization of Hospitalisation Histories with Transformer Networks. arXiv preprint arXiv:2204.02208; 2022.
   doi: 10.48550/arXiv.2204.02208
11. Kraljevic Z, Newham M, Fox D, et al. Multimodal representation learning for medical text summarization. J Biomed Inform. 2021;116:103713.
12. Zhang T, Kishore V, Wu F, Weinberger KQ, Artzi Y. BERTScore: Evaluating Text Generation with BERT. In: International Conference on Learning Representations (ICLR); 2020.
   doi: 10.48550/arXiv.1904.09675
13. Lewis M, Liu Y, Goyal N, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020. p. 7871-7880.
   doi: 10.48550/arXiv.1910.13461
14. Lamb AM, Goyal A, Zhang Y, Zhang S, Courville A, Bengio Y. Professor forcing: A new algorithm for training recurrent networks. Adv Neural Inform Process Syst. 2016;29:4601-4609.
   doi: 10.48550/arXiv.1610.09038
15. Wolf T, Debut L, Sanh V, et al. Transformers: State-of-the-Art Natural Language Processing. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations; 2020. p. 38-45.
   doi: 10.48550/arXiv.1910.03771
16. Zaheer M, Guruganesh G, Dubey A, et al. Big Bird: Transformers for longer sequences. Adv Neural Inform Process Syst. 2020;33:17283-17297.
   doi: 10.48550/arXiv.2007.14062
17. Dahal P. Classification and Loss Evaluation - Softmax and Cross Entropy Loss. Available from: https://deepnotes.io/softmax-crossentropy [Last accessed on 2024 Jun 03].
18. Wolk K, Marasek K. Enhanced Bilingual evaluation understudy. arXiv preprint arXiv:1509.09088; 2015.
   doi: 10.48550/arXiv.1509.09088
19. Tay Y, Dehghani M, Bahri D, Metzler D. Efficient

