
A Multi-Task Deep Learning Approach for the Assessment of COVID-19 in Lung Ultrasound / Fiorentino, Maria Chiara; Rosati, Riccardo; Federici, Lorenzo; Zingaretti, Primo. - (2024), pp. 77-82. (Paper presented at the 3rd IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering, MetroXRAINE 2024, held in St Albans, United Kingdom, in 2024) [10.1109/metroxraine62247.2024.10796166].

A Multi-Task Deep Learning Approach for the Assessment of COVID-19 in Lung Ultrasound

Fiorentino, Maria Chiara; Rosati, Riccardo; Federici, Lorenzo; Zingaretti, Primo
2024-01-01

Abstract

Deep Learning (DL) has established itself as a highly effective approach in medical imaging, a fact that became particularly evident during the COVID-19 pandemic, when the need for advanced diagnostic tools became critical. Although much of the existing research has centered on computed tomography (CT) scans, this study focuses on the application of DL to lung ultrasound (LUS), a non-invasive imaging technique that is safer and more accessible than CT. This paper introduces a multi-task learning framework designed for both the classification and segmentation of lung damage induced by COVID-19. The approach evaluates whether sharing features between the classification and segmentation tasks improves predictions and boosts the overall effectiveness of the model, even in domains characterized by highly variable, complex images that lack clear geometric patterns, such as LUS. The model's performance was assessed on the publicly available ICLUS dataset, with promising results. For classification, the model achieved an accuracy of 66%, improving on the 63% achieved by single-task models. For segmentation, it attained a Dice similarity coefficient of 49%, surpassing the 47% obtained with established techniques. This integrated approach ultimately yields a more precise and visually clearer assessment of patients' clinical state, enhancing the diagnostic process.
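The paper's exact architecture is not given in this record, but the shared-feature idea the abstract describes (one encoder feeding both a classification head and a segmentation head) can be sketched in a few lines of NumPy. Everything here is an illustrative assumption, not the authors' model: the 64x64 frame size, the 4-level severity score, the 128-dimensional feature vector, and the single-layer "encoder" are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64x64 LUS frame, 4 severity classes, binary
# segmentation mask. The real model is a deep CNN; this toy version only
# illustrates the multi-task pattern: one shared encoder, two task heads.
H = W = 64
N_CLASSES = 4
FEAT = 128

W_shared = rng.normal(0, 0.01, (H * W, FEAT))   # shared encoder weights
W_cls = rng.normal(0, 0.01, (FEAT, N_CLASSES))  # classification head
W_seg = rng.normal(0, 0.01, (FEAT, H * W))      # segmentation head

def forward(frame):
    """Both heads read the same features, so during training the gradients
    from either task would update the same encoder weights: this coupling
    is what 'integrating shared features' between tasks refers to."""
    feats = np.maximum(frame.reshape(-1) @ W_shared, 0.0)  # ReLU encoder
    logits = feats @ W_cls                                 # severity logits
    mask = 1.0 / (1.0 + np.exp(-(feats @ W_seg)))          # per-pixel sigmoid
    return logits, mask.reshape(H, W)

frame = rng.normal(size=(H, W))
logits, mask = forward(frame)
print(logits.shape, mask.shape)  # (4,) (64, 64)
```

In a real implementation the two task losses (e.g. cross-entropy for the score and Dice loss for the mask) would be summed, so one backward pass trains the shared encoder on both objectives at once.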
Year: 2024
ISBN: 9798350378009
File in this record:
A_Multi-Task_Deep_Learning_Approach_for_the_Assessment_of_COVID-19_in_Lung_Ultrasound.pdf
Type: Published version (publisher's layout)
License: All rights reserved
Size: 2.16 MB
Format: Adobe PDF
Access: Archive administrators only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/342617
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available