Enhanced Human-Robot Collaboration through AI Tools and Collision Avoidance Control / Forlini, M.; Neri, F.; Carbonari, L.; Callegari, M.; Palmieri, G. - ELECTRONIC. - (2024). (Paper presented at the 20th IEEE/ASME International Conference on Mechatronic, Embedded Systems and Applications, MESA 2024, held in Genova, Italy, 2-4 September 2024) [10.1109/MESA61532.2024.10704917].
Enhanced Human-Robot Collaboration through AI Tools and Collision Avoidance Control
Forlini M.; Neri F.; Carbonari L.; Callegari M.; Palmieri G.
2024
Abstract
With the spread of collaborative robotics, human-robot collaboration is becoming increasingly common in industrial environments. However, this interaction needs to be made more usable and less tiring for the operator by having the robot adapt to the operator's needs. This paper presents a framework based on Deep Learning techniques that enables the operator to perform a manufacturing task assisted by a collaborative robot in a safer, less tiring and more flexible way. Three RGB-D cameras capture information about the environment in which the human is working and about the human's position. The framework has three main components: gesture recognition, robotic grasping and collision avoidance. Specifically, a Convolutional Neural Network has been implemented for gesture classification. Through gestures, the operator tells the robot which tool is needed for the current subtask, and the robot provides it. At the end of the machining process, through automatic grasping-pose detection, the robot picks up the tool on its own at the position where the operator left it. While the operator interacts with the robot and shares its workspace, the operator's safety is ensured by collision avoidance: the safety distance between human and robot is always respected. Results of testing the framework on a real use case with the Universal Robots UR5e are presented. Code, videos and data are available at https://github.com/matteoforlini/human_robot_assembly_task.
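The abstract states that the collision-avoidance component always keeps a safety distance between human and robot. As a rough illustration only (not the authors' implementation; the threshold value, point sets, and function names below are assumptions), such a constraint can be monitored by checking pairwise distances between tracked human keypoints and sampled points on the robot's links, all expressed in a common base frame:

```python
import numpy as np

# Hypothetical sketch: enforce a minimum human-robot separation by checking
# distances between tracked human keypoints (e.g. from the RGB-D cameras)
# and sampled points along the robot's links, in a shared base frame.

SAFETY_DISTANCE = 0.5  # metres; assumed threshold, not from the paper


def min_separation(human_keypoints: np.ndarray, robot_points: np.ndarray) -> float:
    """Smallest pairwise Euclidean distance between the two (N,3)/(M,3) point sets."""
    diffs = human_keypoints[:, None, :] - robot_points[None, :, :]
    return float(np.min(np.linalg.norm(diffs, axis=-1)))


def safety_ok(human_keypoints: np.ndarray, robot_points: np.ndarray,
              threshold: float = SAFETY_DISTANCE) -> bool:
    """True if every human keypoint is farther than `threshold` from the robot."""
    return min_separation(human_keypoints, robot_points) > threshold


# Example: one human keypoint 0.3 m from the nearest robot point,
# which violates the assumed 0.5 m safety distance.
human = np.array([[0.8, 0.0, 1.0]])
robot = np.array([[0.5, 0.0, 1.0], [0.2, 0.0, 0.9]])
```

In a real controller this check would run at every control cycle, triggering a slowdown or an avoidance motion rather than a simple boolean flag.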
File: MESA2024_paper_52 (collision avoidance).pdf
Access: open access
Type: Post-print (version after peer review, accepted for publication)
License: All rights reserved
Size: 2.64 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.