Omer, K.; Monteriù, A. (2025). Edge AI for Object Recognition in Digital Twins: Enhancing Indoor Navigation and BIM Systems for Living Environments. In: 4th IEEE International Workshop on Metrology for Living Environment (MetroLivEnv 2025), Venice, Italy, 11-13 June 2025, pp. 490-494. DOI: 10.1109/MetroLivEnv64961.2025.11106999

Edge AI for Object Recognition in Digital Twins: Enhancing Indoor Navigation and BIM Systems for Living Environments

Omer, K.; Monteriù, A.
2025-01-01

Abstract

This study investigates the use of Edge AI for object detection and tracking in living environments, aiming to enhance both building management and robotic navigation. Specifically, it examines which of several pre-trained AI models for object detection and tracking can be deployed in indoor environments. The primary goal is to identify object recognition models that achieve high accuracy while remaining computationally efficient enough to run on resource-constrained edge devices, eliminating the need for cloud-based processing and conserving energy. Importantly, the approach prioritizes user privacy by processing all data locally on edge microcontrollers, avoiding the transmission of raw data over networks. These microcontrollers are selected for their cost-effectiveness, energy efficiency, and capability to run advanced models. The study evaluates multiple pre-trained AI models, namely MobileNetV2 SSD FPN-Lite, FOMO (Faster Objects, More Objects), YOLO, and SSD, through the Edge Impulse platform. The results highlight FOMO as the best-performing model, achieving 100% accuracy with a memory footprint of less than 100 KB, making it well-suited for Edge AI applications. This approach enables real-time detection of environmental changes and obstacles, safeguards user privacy by processing data locally, and optimizes energy efficiency. Our findings demonstrate that Edge AI can be effectively leveraged to improve robotic navigation in living environments and dynamic BIM updates while maintaining a lightweight computational footprint.
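For context, the on-device inference pipeline described in the abstract can be reproduced with the C++ library that Edge Impulse exports for a trained model. The following is a minimal illustrative sketch, not the authors' code: run_classifier(), signal_t, and the ei_impulse_result_t bounding-box fields are part of the Edge Impulse C++ SDK, while the frame buffer and its fill logic are hypothetical placeholders for a real camera driver.

    // Minimal sketch of FOMO inference with the Edge Impulse C++ SDK export.
    // Assumes the generated edge-impulse-sdk sources are on the include path.
    #include <string.h>
    #include "edge-impulse-sdk/classifier/ei_run_classifier.h"

    // Hypothetical frame buffer, already resized to the model's input resolution.
    static float frame_buffer[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

    // Callback the SDK uses to stream features out of the frame buffer.
    static int get_frame_data(size_t offset, size_t length, float *out_ptr) {
        memcpy(out_ptr, frame_buffer + offset, length * sizeof(float));
        return 0;
    }

    void detect_objects() {
        signal_t signal;
        signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
        signal.get_data = &get_frame_data;

        ei_impulse_result_t result = { 0 };
        if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
            return; // inference failed
        }

        // FOMO reports object centroids as small bounding boxes.
        for (size_t i = 0; i < result.bounding_boxes_count; i++) {
            auto bb = result.bounding_boxes[i];
            if (bb.value == 0) continue; // empty slot
            ei_printf("%s (%.2f) at x=%lu y=%lu\n", bb.label, bb.value,
                      (unsigned long)bb.x, (unsigned long)bb.y);
        }
    }

Because FOMO's output is centroid-based rather than full bounding boxes, a loop like the one above is typically all that is needed to forward detections to a navigation stack or a BIM update service.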
Year: 2025
ISBN: 9798331501556
Files in this record:
File: Karameldeen_Edge-AI-Object-Recognition-Digital_2025.pdf
Access: restricted (archive managers only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 2.62 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/354732
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0