Humans issue control signals to robot systems in contexts ranging from teleoperation to instruction to shared autonomy, and in domains as varied as space exploration and assistive robotics. Issuing a control signal to a robot platform requires physical actuation of an interface, whether via overt body movement or electrical signals from the muscles or brain.
However, robot control systems are overwhelmingly agnostic to the interface source and actuation mechanism: a velocity command is handled identically whether it derives from joystick deflection or sip-and-puff respiration. Yet deviations in magnitude, direction, or timing between the signal the human intends and the signal the autonomy receives can have ripple effects throughout a control system.
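For illustration, the minimal Python sketch below shows this interface-agnostic handling: the command's physical source is carried along but never consulted downstream. The `VelocityCommand` type and controller function are hypothetical, not part of any particular robot stack.

```python
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    vx: float     # normalized velocity in [-1, 1]
    source: str   # "joystick", "sip_puff", ... (carried, but never consulted)

def interface_agnostic_controller(cmd: VelocityCommand, max_speed: float = 0.25) -> float:
    """Map a normalized command to a robot velocity; cmd.source is ignored."""
    return max_speed * max(-1.0, min(1.0, cmd.vx))

# Very different physical actions produce identical downstream behavior:
print(interface_agnostic_controller(VelocityCommand(0.8, "joystick")))  # 0.2
print(interface_agnostic_controller(VelocityCommand(0.8, "sip_puff")))  # 0.2
```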

Our premise is that when robot systems that depend on human input do not consider the physical source of the human control signal (the human's physical capabilities, the interface actuation mechanism, signal transmission limitations), a fundamental, and artificial, upper limit is imposed on team synergy and success.
In this project, we propose a framework to model, novel algorithmic work to engender, and extensive human subject studies to evaluate, interface-awareness in robot teleoperation and autonomy. Our work will demonstrate both the need for and utility of interface-aware robotic intelligence.
Building on our seminal work [46], we relax a number of assumptions and constraints in the initial formulation. Specifically: (1) We scale up the control space and complexity of the robot operation, from a 3-DoF virtual point robot to a 7-DoF real hardware robotic arm. This dramatically complicates teleoperation using a 1-D sip-and-puff control interface. (2) We no longer assume the human's policy to be known, and make minimal assumptions about this policy (which relate only to safety). Our case study evaluations with two participants with spinal cord injury operating the robotic arm found safety to increase, despite the assistance being undetectable to the participants [59].
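To make concrete why a 1-D interface complicates 7-DoF teleoperation, the sketch below shows a common mode-switching scheme in which hard sips and puffs cycle through control modes and soft sips and puffs command motion along the active mode. The mode list, thresholds, and mapping here are illustrative assumptions, not the formulation of [46] or [59].

```python
import numpy as np

# Hypothetical control modes for Cartesian teleoperation of a 7-DoF arm.
MODES = ["x", "y", "z", "roll", "pitch", "yaw", "gripper"]

def sip_puff_to_command(pressure: float, mode_idx: int,
                        soft: float = 0.2, hard: float = 0.6):
    """Map one raw pressure reading (puff > 0, sip < 0) to (new mode, 7-D velocity)."""
    vel = np.zeros(len(MODES))
    if pressure > hard:            # hard puff: advance to the next control mode
        mode_idx = (mode_idx + 1) % len(MODES)
    elif pressure < -hard:         # hard sip: return to the previous control mode
        mode_idx = (mode_idx - 1) % len(MODES)
    elif abs(pressure) > soft:     # soft sip/puff: move along the active mode
        vel[mode_idx] = np.sign(pressure)
    return mode_idx, vel

mode = 0
for p in [0.7, 0.3, 0.3, -0.3]:    # switch from "x" to "y", then nudge +y, +y, -y
    mode, vel = sip_puff_to_command(p, mode)
    print(MODES[mode], vel)
```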
Based on the case study feedback, we expanded our model of interface use to explicitly include the raw signal measured by the interface, and we designed a new data collection task that gives more attention to this raw signal and also captures sequences of task-level commands. We conducted an end-user study on the feasibility and efficacy of our method with 8 participants with spinal cord injury (SCI) operating a 7-DoF robotic arm using a 1-D sip/puff interface. The results showed our interface-aware method to perform better than the standard sip/puff interface mapping and to improve task metrics, such as the number of obstacle collisions, without any knowledge of the environment or task [65]. Users also preferred our method.
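As a rough illustration of why modeling the raw signal matters, the sketch below contrasts a fixed threshold mapping applied to a single pressure sample with a simple inference over a short window of raw readings. The Gaussian observation model, nominal pressure levels, and noise scale are assumptions for illustration only, not the model or results of [65].

```python
import numpy as np

def standard_mapping(pressure: float, soft: float = 0.2, hard: float = 0.6) -> str:
    """Fixed thresholds on a single raw pressure reading."""
    if pressure >= hard:
        return "hard_puff"
    if pressure >= soft:
        return "soft_puff"
    if pressure <= -hard:
        return "hard_sip"
    if pressure <= -soft:
        return "soft_sip"
    return "none"

def interface_aware_mapping(pressures: np.ndarray) -> str:
    """Infer the most likely intended command from a short window of raw readings,
    assuming each command produces pressures near a nominal level with Gaussian noise."""
    nominal = {"hard_sip": -0.8, "soft_sip": -0.4, "none": 0.0,
               "soft_puff": 0.4, "hard_puff": 0.8}
    sigma = 0.15
    log_lik = {cmd: -np.sum((pressures - mu) ** 2) / (2 * sigma ** 2)
               for cmd, mu in nominal.items()}
    return max(log_lik, key=log_lik.get)

window = np.array([0.55, 0.58, 0.62])   # last sample jitters over the hard threshold
print(standard_mapping(window[-1]))     # "hard_puff" from one thresholded sample
print(interface_aware_mapping(window))  # "soft_puff" from reasoning over the raw window
```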
Our work has also built custom maps from interface-level commands to the control space of the robot that are defined by the user rather than fixed a priori. Our approach prompts users to provide an interface-level command in response to visually observing the robot move. We also designed methods to generate synthetic data from users' verbal statements to capture their intended control mappings. We evaluated our methods in a study with 10 participants controlling a powered wheelchair and a robotic arm through four control interfaces. The study revealed several artifacts that can manifest in user-defined interface maps, and we designed methods to identify, quantify, and filter these artifacts in order to curate user-defined interface maps [64].
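The sketch below illustrates, under assumed labels and a simple majority-vote rule, how prompted (robot motion, interface command) pairs might be curated into a user-defined interface map, filtering inconsistent responses as one artifact type. It is in the spirit of, but not identical to, the procedure and artifact taxonomy of [64].

```python
from collections import Counter, defaultdict

# Each prompt shows the robot executing one control-space motion; the user replies
# with the interface-level command they would issue to produce that motion.
# Recorded (observed_robot_motion, user_interface_command) pairs:
responses = [
    ("move_left", "soft_sip"), ("move_left", "soft_sip"),
    ("move_right", "soft_puff"), ("move_right", "soft_puff"),
    ("move_up", "hard_puff"), ("move_down", "hard_puff"),  # inconsistent use of hard_puff
]

def build_interface_map(pairs, min_agreement=0.75):
    """Map each interface command to the control-space motion it most often answered,
    dropping commands whose responses disagree too much (one artifact type)."""
    votes = defaultdict(Counter)
    for motion, command in pairs:
        votes[command][motion] += 1
    interface_map = {}
    for command, counts in votes.items():
        motion, n = counts.most_common(1)[0]
        if n / sum(counts.values()) >= min_agreement:   # consistency filter
            interface_map[command] = motion
    return interface_map

print(build_interface_map(responses))
# {'soft_sip': 'move_left', 'soft_puff': 'move_right'}  (inconsistent hard_puff filtered out)
```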
Funding Source: National Science Foundation (NSF/FRR-2208011 Interface-Aware Intelligence for Robot Teleoperation and Autonomy)