Generosi, A.; Villafan, J. Y.; Montanari, R.; Mengoni, M. (2025). Integration of Data Fusion and Deep Neural Networks for In-vehicle Symbiotic HMI Design. In: Proceedings of the 19th International Conference on Universal Access in Human-Computer Interaction (UAHCI 2025), held as part of the 27th HCI International Conference (HCII 2025), Gothenburg, Sweden, 22–27 June 2025. Vol. 15780, pp. 326–342. doi:10.1007/978-3-031-93848-1_22
Integration of Data Fusion and Deep Neural Networks for In-vehicle Symbiotic HMI Design
Generosi A.; Villafan J. Y.; Mengoni M.
2025
Abstract
This paper proposes an AI-driven framework for designing multimodal, symbiotic human-machine interfaces (HMIs) in modern vehicles. As automotive systems evolve toward higher levels of automation and user-centric design, the integration of data fusion techniques with deep neural networks becomes essential for enhancing driver safety and comfort. The proposed framework continuously monitors a range of inputs—including driver gaze, facial expressions, posture, and vehicle telemetry—and processes these signals through a hierarchical data fusion methodology. This approach involves raw signal preprocessing, feature extraction and selection, and decision-level integration using advanced deep learning models. Experimental validation using a high-fidelity driving simulator across varied scenarios (high-stress, relaxed, and frustration-inducing) demonstrates the framework’s capability to accurately detect driver distraction and drowsiness. The findings suggest that integrating sensor data fusion with neural network-based analysis can significantly improve the adaptability and responsiveness of HMIs.
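To illustrate the decision-level integration step described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes each modality (gaze, face, posture, telemetry) has already been processed by its own model into a posterior over hypothetical driver states, and fuses those posteriors with a weighted average. The probability values, state labels, and weights are all illustrative placeholders.

```python
import numpy as np

# Hypothetical per-modality classifier outputs: probabilities over
# {alert, distracted, drowsy} for one time window. In the paper's
# pipeline each modality would be produced by a deep model after
# preprocessing and feature extraction; these numbers are made up.
modality_probs = {
    "gaze":      np.array([0.20, 0.70, 0.10]),
    "face":      np.array([0.30, 0.50, 0.20]),
    "posture":   np.array([0.40, 0.40, 0.20]),
    "telemetry": np.array([0.25, 0.60, 0.15]),
}

# Decision-level fusion as a weighted average of modality posteriors.
# The weights stand in for per-modality reliabilities, which in a real
# system would be learned or estimated, not hand-set as here.
weights = {"gaze": 0.35, "face": 0.30, "posture": 0.15, "telemetry": 0.20}

def fuse(probs, w):
    """Combine per-modality probability vectors into one fused posterior."""
    fused = sum(w[m] * p for m, p in probs.items())
    return fused / fused.sum()  # renormalise against rounding drift

states = ["alert", "distracted", "drowsy"]
fused = fuse(modality_probs, weights)
print(states[int(np.argmax(fused))])  # most likely driver state
```

Decision-level (late) fusion of this kind keeps each modality's model independent, so a failed sensor can simply be dropped from the weighted sum; the trade-off versus feature-level fusion is that cross-modality correlations are only combined at the output stage.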
| File | Type | License | Size | Format |
|---|---|---|---|---|
| Integration of data fusion HCII25.pdf (restricted to archive administrators) | Publisher's version (published with the publisher's layout) | All rights reserved | 1.59 MB | Adobe PDF |
| camera-ready_post print.pdf (under embargo until 03/06/2026) | Post-print (version after peer review, accepted for publication) | Publisher-specific license | 1.14 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


