Segmentation of Motion Artifacts in Wearable PPG Signals Using Lightweight Neural Networks / Bolpagni, M.; Campanella, S.; Gabrielli, S.; Palma, L. - In: IEEE SENSORS JOURNAL. - ISSN 1530-437X. - 25:11(2025), pp. 20635-20647. [10.1109/JSEN.2025.3561604]
Segmentation of Motion Artifacts in Wearable PPG Signals Using Lightweight Neural Networks
Campanella, S. (second author); Palma, L. (last author)
2025-01-01
Abstract
The widespread adoption of wearable health monitoring devices has highlighted the need for robust algorithms to detect and mitigate motion artifacts in photoplethysmographic (PPG) signals. This article introduces a novel approach to motion artifact segmentation, supported by the creation of a new dataset. PPG signals from publicly available repositories were curated and enhanced with precise segmentation masks, enabling rigorous model training and evaluation. Three deep learning architectures—UNet, LSTM-UNet, and Atrous-UNet—were assessed using a comprehensive set of metrics, including Dice score, intersection over union (IoU), and average Hausdorff distance (AHD), to characterize model performance beyond standard evaluations. This multimetric approach underscored the importance of addressing diverse aspects such as boundary precision and overall robustness. Atrous-UNet emerged as the most effective model, offering a balance of high performance and computational efficiency suitable for real-time deployment. The models were validated on real-world data from open-source wearable devices, EmotiBit and Bangle.js 2 (accuracy above 80% for both), demonstrating generalizability across varying hardware and environmental conditions. Comparisons with expert and nonexpert human annotators revealed that the models significantly outperformed both groups in reliability and detection consistency. The models were optimized using quantization techniques to enable deployment on resource-constrained devices. Although this introduced some performance losses, the models retained robust artifact detection capabilities with an accuracy of over 78%.
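For concreteness, below is a minimal sketch of the three reported segmentation metrics as they are commonly defined for 1-D binary masks (1 = artifact sample). The function names and toy arrays are illustrative; the paper's exact implementation is not reproduced here.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks."""
    intersection = np.sum(pred * target)
    union = np.sum(np.clip(pred + target, 0, 1))
    return (intersection + eps) / (union + eps)

def average_hausdorff_1d(pred: np.ndarray, target: np.ndarray) -> float:
    """Average Hausdorff distance (in samples) between the artifact
    sample indices of two 1-D segmentation masks."""
    a = np.flatnonzero(pred)
    b = np.flatnonzero(target)
    if a.size == 0 or b.size == 0:
        return float("inf")
    # Pairwise absolute index distances between artifact samples.
    d = np.abs(a[:, None] - b[None, :])
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0

# Toy 8-sample masks: predicted vs. reference artifact segmentation.
pred   = np.array([0, 1, 1, 1, 0, 0, 0, 1], dtype=np.int32)
target = np.array([0, 0, 1, 1, 1, 0, 0, 0], dtype=np.int32)
print(dice_score(pred, target), iou(pred, target), average_hausdorff_1d(pred, target))
```

On these toy masks the scores are Dice ≈ 0.571, IoU = 0.4, and AHD ≈ 0.667 samples, which illustrates why the three metrics are complementary: Dice and IoU measure region overlap, while AHD penalizes boundary misplacement directly.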
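The abstract reports quantization for deployment on resource-constrained devices without specifying the toolchain here. The sketch below shows one common route, post-training full-integer quantization with TensorFlow Lite, under the assumption of a trained Keras model; the model and calibration file names are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Hypothetical artifacts: a trained Keras segmentation model and a set of
# real PPG windows for calibration, shape (N, L, 1).
model = tf.keras.models.load_model("atrous_unet_ppg.h5")
calibration_windows = np.load("ppg_calibration.npy")

def representative_dataset():
    # Feed a few hundred PPG windows so the converter can calibrate
    # activation ranges for full-integer quantization.
    for window in calibration_windows[:200]:
        yield [window[np.newaxis].astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("atrous_unet_ppg_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

Full-integer quantization typically shrinks the model roughly 4x and speeds up inference on microcontroller-class hardware, at the cost of a small accuracy drop, consistent with the reported decrease from above 80% to over 78%.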
| File | Size | Format | Access |
|---|---|---|---|
| Segmentation_of_Motion_Artifacts_in_Wearable_PPG_Signals_Using_Lightweight_Neural_Networks.pdf | 2.85 MB | Adobe PDF | Open access |

Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


