Optimized Residual Action for Interaction Control with Learned Environments
  • Vincenzo Petrone ,
  • Luca Puricelli ,
  • Enrico Ferrentino ,
  • Alessandro Pozzi ,
  • Pasquale Chiacchio ,
  • Francesco Braghin ,
  • Loris Roveda
Vincenzo Petrone
University of Salerno

Corresponding Author: [email protected]


Abstract

In industrial settings, robotic tasks often require interaction with various objects, necessitating compliant manipulation to prevent damage while accurately tracking reference forces.
To this aim, interaction controllers are typically employed, but they require either manual parameter tuning or precise environmental modeling.
Both aspects are problematic: the former is a time-consuming procedure, while the latter is unavoidably affected by approximations and hence prone to failure in the actual application.
Addressing these challenges, current research focuses on devising high-performance force controllers.
Along this line, this work introduces ORACLE (Optimized Residual Action for Interaction Control with Learned Environments), a novel force control approach.
Utilizing neural networks, ORACLE predicts robot-environment interaction forces, which are then used in an optimal residual action controller to locally correct actions from a base force controller, minimizing the expected force-tracking error.
Tested on a real Franka Emika Panda robot, ORACLE demonstrates superior force-tracking performance compared to state-of-the-art controllers, with a short setup time.
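The control scheme described above — a base force controller whose action is locally corrected by a residual chosen to minimize the force-tracking error predicted by a learned model — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the linear environment model, the proportional base controller, the gains, and the grid search over candidate residuals are all assumptions standing in for ORACLE's neural network predictor and its optimization.

```python
import numpy as np

def predict_force(action, stiffness=800.0, rest_force=2.0):
    # Stand-in for the learned force predictor: a toy linear environment
    # model in which interaction force grows with the commanded displacement.
    return rest_force + stiffness * action

def base_controller(f_ref, f_meas, kp=1e-4):
    # Simple proportional base force controller producing a displacement command.
    return kp * (f_ref - f_meas)

def residual_correction(f_ref, a_base, deltas):
    # Pick the residual action that minimizes the *predicted* force-tracking
    # error, mimicking the optimal residual step over a candidate set.
    errors = [abs(f_ref - predict_force(a_base + d)) for d in deltas]
    return deltas[int(np.argmin(errors))]

f_ref, f_meas = 10.0, 2.0            # reference and measured force [N]
a_base = base_controller(f_ref, f_meas)
deltas = np.linspace(-0.01, 0.01, 201)
a = a_base + residual_correction(f_ref, a_base, deltas)
print(predict_force(a))              # predicted force close to f_ref
```

The base controller alone would under-correct here; the residual closes most of the remaining predicted error in one step, which is the role the learned model plays in the full approach.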
21 Mar 2024: Submitted to TechRxiv
29 Mar 2024: Published in TechRxiv