A Design Space of Control Coordinate Systems in Telemanipulation

Teleoperation systems map operator commands from an input device into some coordinate frame in the remote environment. This frame, which we call a control coordinate system, should be carefully chosen as it determines how operators should move to get desired robot motions. While specific choices made by individual systems have been described in prior work, a design space, i.e., an abstraction that encapsulates the range of possible options, has not been codified. In this paper, we articulate a design space of control coordinate systems, which can be defined by choosing a direction in the remote environment for each axis of the input device. Our key insight is that there is a small set of meaningful directions in the remote environment. Control coordinate systems in prior works can be organized by the alignments of their axes with these directions and new control coordinate systems can be designed by choosing from these directions. We also provide three design criteria to reason about the suitability of control coordinate systems for various scenarios. To demonstrate the utility of our design space, we use it to organize prior systems and design control coordinate systems for three scenarios that we assess through human-subject experiments. Our results highlight the promise of our design space as a conceptual tool to assist system designers to design control coordinate systems that are effective and intuitive for operators.

Fig. 1: A. Telemanipulation systems must determine how to map operator inputs to robot movements. B. Control coordinate systems can be selected in the remote environment to map user inputs to. In the figure, the control coordinate system is a frame attached to the robot's base. A variety of other options may be suitable. C. In this paper, we enumerate common choices for the axes of control coordinate systems. D. Additionally, we present three design criteria for designers to understand the ramifications of the choices and reason about the suitability of control coordinate systems for different scenarios.

I. INTRODUCTION
In telemanipulation, human operators and robots are generally in separate physical spaces. Operator commands are mapped from the input device into some coordinate frame in the robot's space. We call this frame the control coordinate system because operators perceive the robot as being controlled with respect to it. Control coordinate systems are a fundamental design choice for telemanipulation systems as they determine how operators should move to get desired robot motions. The design of control coordinate systems affects operators' sense of direction and spatial orientation in the remote environment. However, there has been scant attention to the design of these control coordinate systems, with little systematic examination of the possibilities or comparison of their impact. Designers need to know what options they have and what criteria to follow to design control coordinate systems that are effective and intuitive for operators.
In this paper, we provide a design space as a conceptual tool to articulate and explore the range of options for control coordinate systems (Figure 1). By presenting a design space of the key choices in control coordinate systems, we provide a structure to organize and differentiate existing approaches and suggest new potential combinations. Our key insight is that there are a small number of meaningful directions in the remote environment. A control coordinate system can be described by how its axes align with these directions and designed by choosing from these directions. Additionally, we provide three design criteria for designers to understand the ramifications of their choices and reason about the suitability of control coordinate systems for different scenarios. These criteria include considering whether a control coordinate system causes visual-motor misalignments, is natural to human operators, and satisfies task semantics.
A design space can be evaluated according to its descriptive, evaluative, and generative powers [1]. To showcase the ability to describe existing control coordinate systems (descriptive power), we positioned existing control coordinate systems in our design space by describing how input device axes are mapped to the remote environment. To showcase the ability of our design space to assess design alternatives (evaluative power) and create new designs (generative power), we conducted three case studies in which we apply the choices and reasoning afforded by the design space to design control coordinate systems and predict their suitability for three telemanipulation scenarios (§IV). We assessed these designed control coordinate systems in human-subject experiments to confirm our predictions (§V). Our results showed that the choices and reasoning afforded by our design space allowed us to design and predict suitable control coordinate systems for various scenarios.
The central contribution of this paper is a design space that consists of the key choices of control coordinate systems and design criteria to assist designers in reasoning about the suitability of control coordinate systems for different scenarios. By articulating a design space, we can design and assess various control coordinate systems, including hybrid frames, which leverage geometric information from the coordinate frames attached to two or more objects, providing alternative options for future teleoperation systems.

II. RELATED WORK
Our work builds upon prior work from three areas: control coordinate systems in telemanipulation, visual-motor misalignments, and low-dimensional input devices for telemanipulation.

A. Control Coordinate Systems in Telemanipulation
While there are a variety of input devices for robot telemanipulation, in this paper we focus on spatial input devices that capture finger, hand, or arm movements, such as computer mice, joysticks, or VR controllers. We do not consider non-spatial input devices such as sip-and-puff [2] or electromyographic sensors [3] in this paper because they offer a different mapping challenge as they must map non-spatial inputs to spatial outputs.
While specific choices made by individual systems have been described in prior works, this paper provides a design space for designers to understand the range of possible control coordinate systems and reason about their suitability.
Recently, learning-based methods have been employed to generate personalized mappings to compensate for individual differences [12], [13] and task-specific mappings to leverage latent actions [14], [15]. These learning-based mappings follow manifolds derived from observations and are generally dynamic and non-linear. While they are not "designed" by choosing a control coordinate system, these learning-based mappings can be interpreted and described using a dynamic control coordinate system at each instant. Therefore, the design criteria afforded by our design space are still useful to understand and reason about learning-based mappings.

B. Visual-Motor Misalignments
In telemanipulation systems, camera positions are selected to provide a clear view of the manipulation space. The camera viewpoint and robot control coordinate system are not always aligned, which causes a misalignment between the robot motion on screen (visual) and the user's input motion (motor). This visual-motor misalignment increases the mental workload of human operators: they need to use mental rotation strategies (spatially transforming desired robot motions) or perspective-taking strategies (imagining how a scene looks from another viewpoint) to move the robot [16].
To address visual-motor misalignments in robot telemanipulation, various solutions have been proposed, including haptic devices [17], visual cues [9], [10], and training to improve the operator's spatial abilities [16]. However, these methods only help operators handle visual-motor misalignments; to inherently eliminate visual-motor misalignments, the control coordinate system should align with the coordinate frame of the camera. For example, DeJong et al. [18] present a mathematical framework that considers camera, display, and controller positions to minimize mental transformations in telemanipulation.
While visual-motor misalignments affect teleoperation performance and human mental workload, they should not be the only factor to consider when choosing control coordinate systems. Ellis et al. [19] find that the visual-motor misalignment disturbance is not linearly proportional to the degree of misalignment; human operators can adapt to some visual-motor misalignments. Such findings give designers more creative freedom, allowing them to choose control coordinate systems by considering other factors. In addition to minimizing visual-motor misalignments, our work identifies two other design criteria to assist designers in reasoning about and predicting the suitability of control coordinate systems (Section III-D).

C. Telemanipulation with Low-Dimensional Input Devices
While many 3D input devices have been developed, low-dimensional input devices such as joysticks and computer mice are still widely used in professional teleoperation systems such as surgical robots [20], assistive robots [21], disaster rescue robots [22], and space exploration robots [23]. Different kinds of 2D input devices have been used for a human operator to specify the position of a robot end-effector, including joysticks [24], [25], [26], [27], computer mice [28], and touch screens [29], [30]. Prior research has found low-dimensional input devices to be more accurate, comfortable, and easier to use for inexperienced operators than 3D input devices [27], [31], [32], [23]. While having many benefits, low-dimensional input devices are intrinsically limited because they can only move an end-effector in a subspace of the 3D environment. Therefore, mode switching mechanisms [21], [33], where operators switch the degrees of freedom of the robot that they want to control, and shared control methods [34], where operators control a subset of the robot's degrees of freedom and the remaining degrees of freedom are autonomously controlled by an autonomy system, are often employed for low-dimensional input devices. However, systems with mode switching or shared control methods still require mappings for user inputs where our design space can be applied.
Given the limitations of low-dimensional devices, their control coordinate systems should be carefully selected as the control coordinate systems represent which subspace the robot can move in. While many existing systems attach the control coordinate system to the ground plane [25], [5], [11] or the camera image plane [24], [5], [8], several alternative choices may be appropriate. In this paper, by articulating a design space of control coordinate systems, we introduce a hybrid frame that is specifically designed for low-dimensional input devices.

III. DESIGN SPACE
In this section, we present our design space of control coordinate systems. The design space describes a control coordinate system by how its axes align with a small set of meaningful directions, can be used to generate a control coordinate system by choosing a direction in the remote environment for each axis of the input device, and provides design criteria to evaluate a control coordinate system. The design criteria combine insights from prior research with our experience designing control coordinate systems.
This section follows the Questions-Options-Criteria framework [35] to present our design space. First, we identify the key question in designing a control coordinate system in Section III-A: designers need to determine how to map each axis of the input device to the remote environment. Then, in Section III-B, we provide options that are possible answers to the question by enumerating common choices of axes. We use these options to organize control coordinate systems in the literature in Section III-C. Finally, Section III-D provides three design criteria to reason about and predict the suitability of control coordinate systems for a scenario.

A. Control Coordinate Systems
In telemanipulation systems, a spatial input device captures an operator's finger or arm movements and represents them in the input device's coordinate system. To map user inputs to robot motions, a coordinate system needs to be selected in the remote environment. The control coordinate system can be described by the directions of its axes. Mathematically, we represent a control coordinate system using a matrix whose columns are unit vectors d_x, d_y, d_z ∈ R^3 that denote the directions of the x, y, z axes of the control coordinate system:

C = [d_x  d_y  d_z]   (1)

The robot end-effector's linear velocity v_o ∈ R^3 and angular velocity ω_o ∈ R^3 can be obtained by mapping translational and rotational user inputs v_i and ω_i to the control coordinate system:

v_o = C v_i,   ω_o = C ω_i   (2)

The inputs v_i, ω_i ∈ R^2 on 2D input devices (in which case only the corresponding columns of C are used) and v_i, ω_i ∈ R^3 on 3D input devices.
User inputs v_i and ω_i may have different physical meanings depending on the control method. Position control is commonly used on VR controllers or computer mice, where a relative user movement is directly mapped to the robot. In position control, v_i and ω_i denote the operator's linear and angular velocity. Meanwhile, rate control is commonly used on joysticks or space mice, where the displacement of the input device from the device's origin pose is mapped to the robot's velocity. In rate control, v_i and ω_i denote the positional and rotational displacements. Specifically, ω_i is a scaled axis whose direction is along the axis of a rotational displacement and whose norm is the rotation angle.
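The velocity mapping above can be sketched in a few lines of code. This is an illustrative sketch rather than the authors' implementation; the function name and the use of NumPy are our own assumptions.

```python
import numpy as np

def map_input_to_velocity(C, v_i, omega_i):
    """Map user inputs into the remote environment through a control
    coordinate system C whose columns are the axis directions d_x, d_y, d_z.
    For a 2D input device, C would contain only the two relevant columns.
    Returns the commanded end-effector linear and angular velocities."""
    C = np.asarray(C, dtype=float)
    v_o = C @ np.asarray(v_i, dtype=float)          # linear velocity in R^3
    omega_o = C @ np.asarray(omega_i, dtype=float)  # angular velocity in R^3
    return v_o, omega_o

# A control coordinate system aligned with the world frame (3D device).
v_o, omega_o = map_input_to_velocity(np.eye(3), [0.1, 0.0, 0.0], [0.0, 0.0, 0.2])

# A 2D device mapped to the world x- and z-axes (hypothetical example).
C_2d = np.column_stack([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v_2d, _ = map_input_to_velocity(C_2d, [1.0, 2.0], [0.0, 0.0])
```

For position control, v_i and ω_i would come from the operator's measured velocities; for rate control, they would be device displacements scaled into velocities.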
It is worth mentioning that a control coordinate system does not have to be attached to a physical object. For example, the control coordinate system of orbit control moves on an imaginary sphere (Figure 2A). Moreover, because user inputs are mapped to the robot end-effector's velocity, the origin of a control coordinate system does not affect the results of the mapping.
A control coordinate system can be generated by choosing a direction in the remote environment for each axis of the input device. We find that there is a relatively small set of common choices of these directions and list them in Section III-B. To form a control coordinate system, the direction for each input device axis may be chosen from the coordinate frame of the same object (e.g., the coordinate frame attached to the robot's base), or a designer may mix and match different directions. We call a control coordinate system that leverages geometric information of two or more coordinate systems a hybrid frame. In some cases, hybrid frames are created by choosing two directions and computing the third direction using the cross product. For example, a hybrid frame may be designed as the direction that moves the end-effector right in the camera image, the direction that moves the end-effector vertically up in the world frame, and the cross product of these two directions [6], [7] (Figure 3A). To maintain the generalizability of the design space, mutually orthogonal axes are not necessary for a control coordinate system. For example, the control coordinate system for a pouring task [36] in Figure 2B maps 3D user inputs to two directions. One direction is vertical and moves the bottle up and down; the other direction is perpendicular to the bottle's axis and tilts the bottle. Although the two directions are not mutually orthogonal and do not span the 3D space, the control coordinate system is sufficient to complete the pouring task.
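The cross-product construction of a hybrid frame can be sketched as follows. This sketch is ours, not code from [6], [7]; in particular, orthogonalizing the camera-right direction against world-up so the resulting frame is orthonormal is our own design choice.

```python
import numpy as np

def hybrid_frame(cam_right, world_up=(0.0, 0.0, 1.0)):
    """Build a hybrid control frame from the camera's right direction and
    the world's up direction; the third axis is their cross product.
    The camera-right direction is orthogonalized against world-up so that
    the returned 3x3 matrix is orthonormal and right-handed."""
    up = np.asarray(world_up, dtype=float)
    up = up / np.linalg.norm(up)
    right = np.asarray(cam_right, dtype=float)
    right = right - (right @ up) * up   # drop any vertical component
    right = right / np.linalg.norm(right)
    third = np.cross(right, up)         # completes a right-handed frame
    return np.column_stack([right, up, third])

# Camera-right direction taken, e.g., from the camera's rotation matrix
# (slightly tilted here).
C = hybrid_frame(cam_right=[0.0, -1.0, 0.1])
```

Because the third column is d_x × d_y, the columns satisfy the right-hand rule regardless of where the camera points, as long as its right direction is not vertical.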

B. Choices for Directions
Control coordinate systems can be defined by choosing a direction in the remote environment for each axis of the input device.We organize the meaningful choices for directions into three categories.
The Axes of Key Coordinate Frames -The primary sources of axes used to map user inputs are the coordinate frames connected to the main entities: the global world frame, the coordinate frame attached to the robot's base, the frame of the robot's end-effector, the coordinate system defined by the camera, and the coordinate frame of the objects to be manipulated (i.e., the task frame [37], discussed as semantic directions below).
The meaningfulness of each axis of these frames to operators may vary based on the scenario.For example, the upright direction in the world frame aligns with gravity, which is essential to tasks like pouring or throwing.Another example is that a camera's axes, which are determined by the camera orientation in the remote space, impact how the robot appears to move in the image captured by the camera.Moreover, the end-effector axes may be meaningful in describing motions relative to object interactions, such as approaching a grasp.
Semantic Directions to Complete Geometrically Constrained Tasks - Many daily manipulation tasks are geometrically constrained, e.g., pulling a drawer, rotating a knob, or drawing on a table. A manipulation task's geometric constraints determine along which directions the robot may move to complete the task, e.g., the directions parallel to the table plane in a drawing task. We define semantic directions as the directions to complete a geometrically constrained task. Semantic directions also include directions to complete softly constrained tasks, such as the vertical direction to pick an object out of a box. Semantic directions can be specified manually or inferred from human demonstrations [38]. When designing control coordinate systems for low-dimensional input devices, designers should consider mapping axes of the input device to these semantic directions. Low-dimensional input devices only move the robot in a subspace (e.g., a plane); selecting semantic directions ensures that operators can easily generate commands along the semantic directions.
Projected Camera Axes - As described in Section II, selecting a camera frame as the control coordinate system minimizes visual-motor misalignments. However, the camera frame is not a universally good design choice because it may be unnatural or lacking in semantics. For example, it is unnatural when horizontal human inputs are mapped to non-horizontal robot movement, which can happen when a camera looks down at the robot and its optical axis is non-horizontal. In this case, if an operator pushes the input device horizontally away from them, the robot moves away from the camera along the non-horizontal optical axis of the camera. A control coordinate system may lack task semantics when 2D user inputs are mapped to the camera image plane. This control coordinate system only allows operators to freely move parallel to the image plane, which may not include the semantic directions related to a task. For example, in a tabletop drawing task, semantic directions are within the tabletop plane, but controlling a robot in the camera frame leads to movement parallel to the camera image plane.
An alternative design option is to project the camera axes onto a meaningful plane (e.g., the ground or a semantic plane to complete a planarly constrained task) to generate natural or semantic directions. This method utilizes the camera projection principle, which implies that a 2D axis v_c on the image plane may be projected from an infinite number of vectors v_d in 3D space. Robot movements along any of these 3D vectors cause the on-screen robot to move in the same direction.
Let R_c ∈ SO(3) denote the orientation of the camera with respect to the robot. The orientation of the meaningful plane is represented by a vector v_p ∈ R^3 that is perpendicular to the plane. We assume that R_c and v_p are known through some robot-camera calibration method (e.g., [39]) or geometric constraint inference algorithm (e.g., [38]). Let v_c ∈ R^2 be an axis in the camera image plane. The projected camera axis x ∈ R^3 can be computed using the following two equations:

P R_c^{-1} x = v_c   (3)

v_p^T x = 0   (4)

where P = [1 0 0; 0 1 0] is a projection matrix that projects a vector onto the camera image (XY) plane. Equation 3 makes sure that the 2D on-screen robot movements match user expectations to minimize visual-motor misalignments. Meanwhile, Equation 4 guarantees that the 3D robot movements x are within the natural or semantic plane. Combining Equations 3 and 4, a projected camera axis can be obtained given a camera axis v_c, camera orientation R_c, and a vector v_p perpendicular to the meaningful plane: writing x = R_c [v_c; λ] to satisfy Equation 3, Equation 4 yields

λ = -(v_p^T R_c [v_c; 0]) / (v_p^T R_c [0; 0; 1])   (5)

Considering both the 3D robot movements before projection and the 2D on-screen robot movements after projection, the projected camera axes (Figure 3B) minimize visual-motor misalignments as well as satisfy the natural or semantic requirements.
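A minimal sketch of this computation, under the assumptions that the camera's z-axis is its optical axis and that the optical axis is not parallel to the meaningful plane (otherwise the denominator vanishes); the function and variable names are ours.

```python
import numpy as np

def projected_camera_axis(v_c, R_c, v_p):
    """Compute the 3D direction x whose on-screen projection is v_c and
    which lies in the plane with normal v_p. In camera coordinates, x is
    (v_c, lam) for some depth component lam; lam is chosen so that
    v_p . x = 0."""
    v_c = np.asarray(v_c, dtype=float)
    R_c = np.asarray(R_c, dtype=float)
    v_p = np.asarray(v_p, dtype=float)
    a = v_p @ R_c @ np.array([v_c[0], v_c[1], 0.0])
    b = v_p @ R_c @ np.array([0.0, 0.0, 1.0])  # ~0 if optical axis is in the plane
    lam = -a / b
    x = R_c @ np.array([v_c[0], v_c[1], lam])
    return x / np.linalg.norm(x)

# Camera pitched 45 degrees toward the ground; projecting the image "up"
# axis onto the ground plane (normal [0, 0, 1]) gives a horizontal direction.
theta = -np.pi / 4
R_c = np.array([[1.0, 0.0, 0.0],
                [0.0, np.cos(theta), -np.sin(theta)],
                [0.0, np.sin(theta), np.cos(theta)]])
x = projected_camera_axis([0.0, 1.0], R_c, [0.0, 0.0, 1.0])
```

The example illustrates the point of the construction: even with a tilted camera, the resulting control direction lies in the ground plane while still projecting onto the expected on-screen axis.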

C. Organizing Existing Control Coordinate Systems
With our design space, a control coordinate system can be described by how an input device axis is mapped to the remote environment. Our design space provides a vocabulary to organize and differentiate control coordinate systems. The vocabulary is simple yet descriptive enough to represent various control coordinate systems and precise enough to maintain reproducibility. In order to demonstrate that our design space serves as a structure to organize and differentiate control coordinate systems, in this section, we systematically identify control coordinate systems in prior works and position them in our design space.
Methodology - We collected articles on Google Scholar by combining keywords in [teleoperation, telemanipulation, robot] and [input mapping, control mapping, control frame, reference frame, coordinate system]. Since all telemanipulation systems involve a choice of control coordinate systems, we only included works that either introduce a new coordinate system or compare multiple coordinate systems. A total of 18 papers were identified.
Findings - Our findings are listed in Table I. All control coordinate systems we found can be represented in our design space. Many control coordinate systems align with key coordinate frames (e.g., the end-effector frame) in the remote environment. Some of the control coordinate systems are attached to imaginary objects. For example, several works [5], [25], [34] drew inspiration from orbit camera control methods in computer graphics. Orbit control moves the end-effector along the surface of an imaginary sphere, creating orbital motions (Figure 2A). Quere et al. [36] and Mower et al. [26] use task-specific control coordinate systems that utilize semantic directions (as shown in Figure 2 B&C). Finally, some prior works [27], [6] use hybrid frames. As an example, the view-dependent frame presented by Notheis et al. [27] (Figure 2D) consists of the world axes that form the closest acute angles with the camera image plane. By positioning the existing control coordinate systems in our design space, we demonstrate that the design space serves as a structure to organize and differentiate control coordinate systems.

D. Design Criteria
With so many possible options in our design space, designers need a method to reason about and predict the suitability of control coordinate systems for a scenario. We provide three initial design criteria based on prior work and our design experience. While designers should consider all the criteria comprehensively, we note that there are trade-offs between the criteria, and it may be impossible to satisfy all the design criteria simultaneously in some scenarios.
Criterion 1: Minimize Visual-Motor Misalignments - We define visual-motor misalignments as the angular difference between the control coordinate system and the camera frame. Such angular difference causes a misalignment between the robot motion on screen (visual) and the user's input motion (motor), increasing the human operator's mental workload [18] and affecting teleoperation performance [43], [6]. However, the detrimental effects of visual-motor misalignments are not isotropic [19] and can fluctuate based on operators' spatial abilities [16]. Visual-motor misalignments can be described using rotations about three axes: the roll axis pointing to the operator's straight ahead, the pitch axis to the operator's left, and the yaw axis directed upward. Ellis et al. [19] found that rotation about the pitch or yaw axes was distinctly less disruptive than roll rotation. Such findings are consistent with the usage of a computer mouse: operators move a mouse in a horizontal table plane (motor) and see the cursor move in a nearly vertical screen plane (visual). Most operators can quickly adapt to this visual-motor misalignment, which is approximately a 90-degree pitch rotation. In addition, the experiment conducted by Menchaca-Brandan [16] suggests that operators with better spatial abilities can better handle visual-motor misalignments. The operator's ability to adapt to visual-motor misalignments gives designers more creative freedom when designing control coordinate systems, allowing them to take the following two design criteria into consideration.
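As a rough illustration, the angular difference in this criterion can be quantified as the geodesic distance on SO(3) between the control frame and the camera frame. This single-angle metric is our sketch and deliberately collapses roll, pitch, and yaw into one number, so it does not capture the anisotropy noted above.

```python
import numpy as np

def misalignment_angle(C_control, R_cam):
    """Geodesic distance (radians) between a control coordinate frame and
    the camera frame, both given as 3x3 rotation matrices."""
    R_rel = C_control.T @ R_cam
    # Rotation angle from the trace: cos(angle) = (tr(R) - 1) / 2.
    c = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)

# A control frame yawed 90 degrees from the camera has a pi/2 misalignment.
R_yaw90 = np.array([[0.0, -1.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0]])
angle = misalignment_angle(np.eye(3), R_yaw90)
```

A finer-grained analysis in the spirit of Ellis et al. [19] would decompose the relative rotation into roll, pitch, and yaw components and weight them differently.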
Criterion 2: Be Natural - Natural control coordinate systems enable natural mapping, taking advantage of spatial analogies and leading to immediate understanding [44]. For example, to move the robot up, move the controller up. To rotate the end-effector about the vertical axis, rotate the controller in the same way. From the perception point of view, humans inherently maintain a stable perception of the vertical direction despite changing viewpoints, which is known as orientation constancy [45]. Due to orientation constancy, even if the camera is tilted in the remote environment, a human operator can still perceive whether the end-effector moves vertically or not. Natural mappings work the way that the user expects them to, without having to think about it [46]. When human operators move horizontally, they expect the robot also to move parallel to the ground plane. Therefore, a natural control coordinate system for horizontal 2D input devices (e.g., computer mice) moves the robot horizontally.
Criterion 3: Consider Task Semantics - While the first two criteria focus on mental workload and intuitiveness for human operators, the third criterion is proposed from the task point of view. Designers should consider what robot motions are desired to complete the task and how to design the control coordinate system to facilitate the operations that enable those desired robot motions. In particular, control coordinate systems should be chosen to match the semantic directions for geometrically constrained tasks. For example, Mower et al. [26] present a robotic system in which a 2D joystick is used to teleoperate a robot to spray concrete onto a curved tunnel surface. In this task, desired motions maintain a fixed distance to the tunnel surface, so they use a control frame containing semantic directions that are parallel to the surface (Figure 2C). The control coordinate system facilitates teleoperation by consistently keeping the nozzle at a constant distance from the surface.
When task semantics are taken into consideration, the designed control coordinate systems are task-specific and may not be applicable or meaningful for other tasks. There is a trade-off: task-specific control frames may greatly facilitate the teleoperation of a particular task; however, when

E. Summary
A control coordinate system can be designed by selecting a direction in the remote environment for each axis of the input device. We enumerated common choices of these directions, including the axes of key coordinate frames, semantic directions, and projected camera axes. Moreover, we provided design criteria that suggest that designers consider visual-motor misalignments, naturalness, and task semantics when designing control coordinate systems.

IV. CASE STUDIES

In the previous section, we demonstrated the descriptive power of our design space by organizing existing control coordinate systems in the literature. In this section, we demonstrate its generative and evaluative power using three case studies. In each case study, we consider a telemanipulation scenario and use the design space to generate a control coordinate system and make predictions about its effectiveness over some common designs (i.e., evaluate). We confirm these predictions with human-subject experiments in Section V. The scenarios include a pick-and-place task and a geometrically constrained tracing task and use a 6-DoF VR controller or a 2D mouse as input devices (Figure 4). The designed control coordinate systems are examples of hybrid frames, which leverage geometric information of different coordinate frames.

A. Pick-and-Place with VR Controller
In this scenario, operators perform a pick-and-place task (Figure 4 Row 1) with a 3D input device. Common choices of control coordinate systems are the robot frame or camera frame. Choosing the robot frame as the control coordinate system would cause significant visual-motor misalignments (Section III-D Criterion 1) because the camera and the robot are facing different directions, leading to a large angular difference between the robot frame and the camera frame. Meanwhile, the camera frame lacks naturalness (Section III-D Criterion 2) because when an operator moves the controller vertically up, the robot moves along the camera up direction, which is not vertical.
With our design space, control coordinate systems are designed by picking a direction in the remote environment for each axis of the input device. Firstly, we picked the right direction of the camera for the input device's right. The camera right direction partially encodes camera orientation and reduces visual-motor misalignments. When an operator moves right, they see the end-effector moving right on-screen. Secondly, we chose to map the input device's up axis to the world up direction, which is a natural direction that aligns with gravity. To move the end-effector straight up, an operator just moves the controller straight up. Finally, we generated the third mapping direction by taking the cross product of the first two directions. We made this design choice to get a right-handed coordinate frame. The camera right direction, world up direction, and the cross product of these two directions form hybrid frame 1 in the remote environment (Figure 3A). The mathematical formulation of the hybrid frame is in Table II.
We predicted that hybrid frame 1 would lead to better task performance and user experience than the robot or camera frame because the hybrid frame considers both visual-motor alignment and naturalness (Section III-D Criteria 1&2). Although hybrid frame 1 is similar to the camera frame, with only some rotational difference about the camera pitch axis (see the visualization in Figure 4 Row 1), we predicted that this subtle difference would improve user performance.

B. Pick-and-Place with Mouse
In the second case study, operators perform a pick-and-place task with a mouse (Figure 4 Row 2). As described in §II, prior research has found that low-dimensional input devices, including the computer mouse, are more accurate, comfortable, and easier for inexperienced operators to use than 3D input devices. Since a mouse only captures user movement along a 2D plane, the mouse wheel provides the additional dimension needed to perform the pick-and-place task in 3D space. People are habituated to the mapping between a mouse and its cursor on a screen, e.g., moving the mouse forward moves the cursor up. To replicate a similar mapping in telemanipulation, a system can select the camera frame as the control frame, which makes on-screen end-effector movements resemble those of a cursor. However, the camera frame lacks naturalness (Section III-D Criterion 2) because operators rarely want the end-effector to move parallel to the camera image plane. Considering naturalness, the robot should move parallel to the table plane because operators move the mouse in the horizontal table plane. Another common design choice maps inputs to the robot frame. However, given the camera position, a mapping to the robot frame would cause significant visual-motor misalignments (Section III-D Criterion 1).
With our design space, we designed a control coordinate system with projected camera axes. As described in Section III-B, the projected camera axes utilize the camera projection principle, allowing on-screen movements to minimize visual-motor misalignments and 3D robot motions to remain natural (Criteria 1&2 in Section III-D). To create a right-handed coordinate frame, the mouse wheel was selected to move the end-effector along the direction orthogonal to the projected camera axes. The projected camera axes and the direction orthogonal to them form hybrid frame 2, whose mathematical formulation is given in Table II. Considering both visual-motor misalignment and naturalness, we predicted that hybrid frame 2 would improve performance and user experience compared to the robot or camera frame.
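A minimal sketch of the projection idea follows. Note that the paper defines the projected camera axes via its Equation 5 (not reproduced in this excerpt); the orthogonal projection below is a simplified stand-in, and the choice of which camera axes to project (right and up) is an assumption.

```python
import numpy as np

def project_axis_onto_plane(axis, plane_normal):
    """Orthogonally project an axis onto a plane and renormalize.

    Simplified stand-in for the paper's proj(vc, Rc, vp); the actual
    Equation 5 may use the camera projection model instead. Assumes the
    axis is not parallel to the plane normal.
    """
    n = np.asarray(plane_normal, float) / np.linalg.norm(plane_normal)
    v = np.asarray(axis, float) - np.dot(axis, n) * n
    return v / np.linalg.norm(v)

def hybrid_frame_2(R_camera, plane_normal=(0.0, 0.0, 1.0)):
    """Map mouse right/forward to camera right/up projected onto the
    table plane; the mouse wheel moves along their cross product,
    giving a right-handed frame (column layout assumed as before)."""
    right_p = project_axis_onto_plane(R_camera[:, 0], plane_normal)
    up_p = project_axis_onto_plane(R_camera[:, 2], plane_normal)
    wheel = np.cross(right_p, up_p)
    return np.column_stack([right_p, up_p, wheel / np.linalg.norm(wheel)])
```

Substituting the whiteboard plane's normal for the table normal yields the analogous construction used for hybrid frame 3 in Section IV-C.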

C. Tracing with Mouse
In the third case study, operators trace letters on a whiteboard with a mouse; this is a planarly-constrained task where the pen tip should stay on the 2D whiteboard plane (Figure 4 Row 3). One common design choice of control coordinate system is the whiteboard's coordinate frame (task frame), but it causes visual-motor misalignments (Section III-D Criterion 1) since the camera is not facing the whiteboard.
In the control coordinate system that we designed for this scenario, we mapped mouse inputs to the camera axes projected onto the whiteboard plane (hybrid frame 3). This decision follows Criteria 1 and 3 in Section III-D: minimize visual-motor misalignments and consider task semantics. The control coordinate system is semantic because it makes the end-effector stay in the whiteboard plane. It minimizes visual-motor misalignment because, when watched through the camera, the end-effector's on-screen movement behaves like a mouse cursor (e.g., when an operator moves the mouse to the right, the end-effector on screen moves right like a mouse cursor). Therefore, we predicted that hybrid frame 3 would outperform the task frame. Table II lists the mathematical formulation used to implement hybrid frame 3.

V. EVALUATION
In the previous section, we designed control coordinate systems for three telemanipulation scenarios and reasoned about their suitability with our design space. In this section, we evaluate the designed control coordinate systems to explore whether the choices and reasoning afforded by our design space allowed us to predict appropriate control coordinate systems for various scenarios. Our hypothesis is that the predictions made using the design space are accurate; specifically, that the hybrid frames designed with our design space will improve performance and user experience compared to commonly-used reference frames.

A. Experimental Design, Tasks, & Conditions
In the case studies in Section IV, we designed control coordinate systems for three scenarios by picking meaningful directions and following the design criteria in our design space. The control coordinate system designed for each scenario was evaluated in an independent experiment with different participants. Each experiment followed a within-participants design, and the order of the conditions in each experiment was counterbalanced. Experiment 1 was conducted in person using a physical robot, while experiments 2 and 3 were conducted online using a simulation platform with participants recruited via Amazon Mechanical Turk. In all three experiments, cameras were placed in surveillance-camera-like positions to provide adequate coverage of the workspace and non-occluded views from an elevated perspective. Figure 4 visualizes the control coordinate systems assessed in the three independent experiments. The mathematical representations of the axes of each control coordinate system are listed in Table II.
1) Experiment 1: Pick-and-Place with VR Controller: Participants picked up a foam block and placed it onto another flat foam block. Participants were instructed to move as quickly as possible while avoiding colliding with the table or knocking over the foam block. To simplify the experiment, the orientation of the end-effector was fixed for grasping the foam block, and only translational inputs were mapped to the robot. The robot was controlled with one of the three control coordinate systems described in Section IV-A: the robot frame and camera frame as control conditions, and hybrid frame 1 (camera right + upright) as the experimental condition.
2) Experiment 2: Pick-and-Place with Mouse: Participants picked up a block and placed it in a given circular area. Participants were instructed to move as quickly as possible while avoiding collisions. The translational movements of the mouse were mapped to the robot. As described in Section IV-B, the robot frame and camera frame were employed as control conditions, and hybrid frame 2 (camera axes projected onto the table plane) was the experimental condition.
3) Experiment 3: Tracing with Mouse: Participants were instructed to trace the letters on a whiteboard as quickly as possible while maintaining accuracy. As described in Section IV-C, the experimental condition mapped user inputs to hybrid frame 3 (camera axes projected onto the whiteboard plane). The control condition was the task frame, which mapped user inputs to the whiteboard's coordinate frame. Both conditions guaranteed that the pen tip always stayed on the whiteboard.

B. Implementation Details
We implemented a teleoperation system for the in-person experiment 1 and a web-based platform for the online experiments 2 and 3. Performing the experiments online provides access to a larger and more diverse participant pool.
In experiment 1 (in-person pick-and-place), participants used a mimicry-based telemanipulation approach [47], [48] to guide the 3D position of the end-effector of a Rethink Sawyer robot. Participants were located in the same room as the robot, and a room divider was placed to obstruct their line of sight. The participants' inputs [∆x, ∆y, ∆z]⊺ were captured by an HTC Vive motion controller at approximately 60 Hz. The velocity of the end-effector was computed using Equation 2, where v_i = [∆x/∆t, ∆y/∆t, ∆z/∆t]⊺ and ω_i = [0, 0, 0]⊺. End-effector positions were filtered by a collision protection program, which halted control if the robot's fingertip came closer than 1 centimeter to the tabletop. Participants were not aware of the collision protection program, but they were instructed to avoid collisions with the table. The trigger on the Vive controller was used to open or close the robot's gripper. We used a Logitech 930e webcam to stream 1080p video on a large-screen display.
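As a concrete sketch of this computation (Equation 2 itself lies outside this excerpt), the following assumes the equation simply rotates the input velocity by the control coordinate system; `C`, `delta`, and `dt` are our own names, not the paper's.

```python
import numpy as np

def end_effector_velocity(C, delta, dt):
    """Hedged sketch of the input-to-velocity mapping.

    Assumes Equation 2 rotates the input velocity into the control
    coordinate system C (3x3, columns = mapped directions).
    delta = [dx, dy, dz] is the controller displacement over dt seconds
    (dt ~ 1/60 s for the Vive controller); rotational input is zero here.
    """
    v_i = np.asarray(delta, dtype=float) / dt  # input velocity, device axes
    return C @ v_i                             # velocity in robot coordinates
```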
For experiments 2 and 3, we used an online study platform that allowed multiple participants to teleoperate separate virtual robots simultaneously. We ran the Robot Operating System (ROS) on a Google Cloud Platform server to provide a forward/inverse kinematics solver and store teleoperation data. Each time a client connected to the ROS server, a new virtual robot and a RelaxedIK [48] inverse-kinematics solver were spawned for the client. The robot link positions were sent to the client, and the robot was rendered using three.js in the client's browser.
To ensure that our online study interface could run adequately on a participant's machine and that the participant's internet connection was fast and stable enough for real-time telemanipulation, each participant had to pass a technical qualification test before starting the study. The technical qualification page allowed users to compare, side by side, the rendered robot with a static image that showed a correctly rendered robot. Once the participant confirmed that the robot was rendered correctly, 300 commands were automatically sent to the server at 30 Hz to move the virtual robot along a predefined trajectory. During this automatic manipulation, latency was recorded, and the study could only proceed if the two-way latency stayed below 125 ms. This threshold was set based on prior research which found that teleoperation performance does not degrade until 250 ms of latency [47], [49]. After the technical qualification test, a preliminary page ensured that the participant used a desktop or laptop computer (instead of a tablet) and a physical mouse (instead of a touchpad). Since people may have different scrolling direction settings (natural or reverse scrolling), we also calibrated scrolling directions by instructing participants to scroll their mouse wheels towards or away from themselves.
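The qualification procedure described above can be sketched as follows; `send_command` is a hypothetical blocking call standing in for the platform's actual client-server messaging.

```python
import time

def latency_qualification(send_command, n_commands=300, rate_hz=30,
                          max_latency_s=0.125):
    """Stream commands at a fixed rate and record two-way latency.

    Passes only if every round trip stays under the threshold, mirroring
    the study's 300-command, 30 Hz, 125 ms check. `send_command` is a
    hypothetical function that sends one waypoint and blocks until the
    server acknowledges it.
    """
    period = 1.0 / rate_hz
    latencies = []
    for i in range(n_commands):
        start = time.monotonic()
        send_command(i)                               # one predefined waypoint
        latencies.append(time.monotonic() - start)
        time.sleep(max(0.0, period - latencies[-1]))  # hold the send rate
    return max(latencies) < max_latency_s, latencies
```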
Upon passing the technical qualification and preliminary tests, participants controlled a virtual Rethink Sawyer robot using a mouse. By clicking in the starting area, participants started the control, and their cursors were temporarily locked. Participants picked up their mice for clutching and pressed the ESC key on their keyboards to stop the control. In addition to the mouse movement on the tabletop [∆x, ∆y], the mouse wheel provided an additional input ∆z to perform the 3D pick-and-place task. The robot speed was computed using Equation 2, where v_i = [∆x/∆t, ∆y/∆t, ∆z/∆t]⊺ and ω_i = [0, 0, 0]⊺. In the pick-and-place task, a virtual block was automatically grasped if it was close enough to the gripper. A warning sound was played if the robot's gripper or the object in the gripper collided with the table.

C. Experimental Procedure
All three of our experiments followed the same procedure. After providing informed consent, participants were briefed on the study goals and instructed on using the input device to control the robot. Then participants were presented with the first condition. After a fixed amount of practice time to get used to the control coordinate system (1 minute for the in-person experiment 1 and 30 seconds for the online experiments 2 and 3), participants were introduced to the study task and completed two practice rounds. Participants then performed the task for three trials and filled out a questionnaire regarding their experience in the condition. This procedure was repeated for the other conditions.
In both pick-and-place experiments, the block's initial and target positions changed in each practice round and trial. For the tracing task, participants traced a four-point polyline and a semicircular arc as practice and traced letters for each condition in the order "hri", "ros", and "lab". Upon finishing all conditions, participants provided demographic information, stated their preferred condition, and described the differences between the robots. The latter two questions were asked in a semi-structured interview in the in-person experiment and in a Qualtrics questionnaire in the online experiments. The in-person pick-and-place experiment took approximately 40 minutes, and participants received $10 compensation. Online participants received $3 for about 20 minutes in the tracing experiment or $5 for about 30 minutes in the pick-and-place experiment.

D. Participants
We recruited 29 participants on campus for experiment 1 (pick & place with VR controller). Five participants were excluded from the analyses for failing to complete all three trials in one condition. The remaining 24 participants (1 demifemale, 8 female, and 15 male) were aged 18 to 32. They had various educational backgrounds, including engineering, psychology, history, economics, physics, and management. We recruited participants via Amazon Mechanical Turk for our online experiments. The 24 participants in experiment 2 (online pick & place) were aged 24 to 68 (7 female and 17 male). They had various occupations, including electrician, IT manager, baker, sales associate, and legal transcriptionist. In online experiment 3 (tracing with mouse), one participant was excluded from the analyses for leaving the drawing area (the whiteboard). The remaining 24 participants (1 agender, 10 female, and 13 male) were aged 28 to 57. They were from various industries, including information technology, education, food service, and hospitality.
Participants reported their familiarity with relevant technologies using 5-point Likert scales. As shown in Table III, all participants had low-to-moderate familiarity with robots and Computer-Aided Design (CAD) software. As for familiarity with 3D video games, in-person participants had moderate familiarity, while online participants had moderate-to-high familiarity. We speculate this difference was caused by the technical qualification test preceding the online studies: to pass it, participants needed a fast computer and internet connection, and such participants tended to be game players.

E. Measures
We employed a combination of objective and subjective measures to assess participants' performance and user experience.
1) Objective Measures: We employed completion time and task error metrics to assess task performance in each experiment. The maximum time limit for each task was 90 seconds. In addition to task time, we counted the number of failed trials as the task error metric in experiment 1 (in-person pick & place). A trial was considered a failure if the participant exceeded the time limit, knocked over the foam block, or triggered the robot's collision protection program. In experiment 2 (online pick & place), we measured the number of collisions between the table and either the robotic gripper or the block in hand. In experiment 3 (online tracing task), we adapted a trajectory error metric from prior work [47] to assess how well participants traced a target curve.
The trajectory error metric is the sum of a Cartesian accuracy score and a completeness score. The Cartesian accuracy score is the average error distance between the pen tip and its closest point on the target curve, normalized to [0, 1]. To compute the completeness score, we first associate an arc-length parameter value in [0, 1] with every point on the target curve. The completeness score of each point on the pen-tip curve is the arc-length parameter value of its closest point on the target curve, and the completeness score of the entire curve is the maximum over these points. The range of the trajectory error is [0, 2], where a lower value indicates a better trajectory.
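A sketch of this metric is below. One detail is assumed: for the total to lie in [0, 2] with lower values better, the completeness score must enter as 1 − (maximum arc-length reached). The normalization distance for the accuracy term and the use of closest sampled points (rather than closest points on segments) are also our own simplifications.

```python
import numpy as np

def trajectory_error(pen_path, target_curve, norm_dist):
    """Cartesian accuracy plus an incompleteness term; lower is better.

    pen_path, target_curve: (N, 2) and (M, 2) point arrays.
    norm_dist: distance that maps the accuracy score to 1.0 (assumed).
    """
    pen = np.asarray(pen_path, dtype=float)
    target = np.asarray(target_curve, dtype=float)
    # Arc-length parameter in [0, 1] for every point on the target curve.
    seg = np.linalg.norm(np.diff(target, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    # Distance from every pen-tip sample to every target point.
    d = np.linalg.norm(pen[:, None, :] - target[None, :, :], axis=2)
    accuracy = np.clip(d.min(axis=1).mean() / norm_dist, 0.0, 1.0)
    completeness = s[d.argmin(axis=1)].max()  # fraction of curve reached
    return accuracy + (1.0 - completeness)    # range [0, 2]
```

A perfect trace scores 0; tracing only the first half of the curve exactly scores 0.5.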
The time and task error metrics are possibly associated, e.g., a participant who hurriedly finished the pick-and-place task in a short time might have a large number of collisions. Therefore, we formulated a combined objective measure for each experiment to aggregate the data. We first combined the task time and error metrics by normalizing each to [0, 1] and summing them together. The relative combined measure was then computed by subtracting each participant's average performance from their performance in each condition. The resulting range is [−2, 2], where a lower value indicates better performance.
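The aggregation can be sketched as follows; min-max normalization across all participants and conditions for each metric is an assumption about the normalization scope, which the text does not spell out.

```python
import numpy as np

def combined_objective(times, errors):
    """Combine time and error into a relative objective measure.

    times, errors: (participants, conditions) arrays. Each metric is
    min-max normalized to [0, 1] and the two are summed; each
    participant's mean is then subtracted, giving values in [-2, 2]
    where lower is better.
    """
    def minmax(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    combined = minmax(times) + minmax(errors)
    # Relative measure: each condition minus that participant's average.
    return combined - combined.mean(axis=1, keepdims=True)
```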
2) Subjective Measures: We administered a questionnaire based on prior research [50], [47] to measure perceived predictability, trust, and fluency. Additionally, we employed NASA TLX [51] to assess perceived workload. The six TLX subscale scores were averaged with equal weighting to calculate the overall TLX score.

F. Results
Figure 5 and Table IV summarize our results using one-way repeated-measures analyses of variance (ANOVA) to determine whether the designed hybrid frames had a significant effect. We used Tukey's HSD test to make pairwise comparisons. We found high inter-subject variability in our results, which matches prior studies that compare control coordinate systems [4], [5].
1) Experiment 1: Pick-and-Place with VR Controller: Our results showed no significant difference but a medium-to-large effect size (η² = 0.09) in the combined objective measure. We speculate that high inter-subject variability may be a cause of the lack of significance. As for self-reported measures, participants reported the robot with hybrid frame 1 to be significantly more fluent, trustworthy, and predictable, and to require a significantly lower workload, than the robot using the robot frame. Comparing hybrid frame 1 with the camera frame, our results showed a significant difference only in the fluency metric and no significant differences in the other self-reported metrics. In the post-experiment interview, 16 out of 24 participants stated that they preferred the hybrid frame condition, 3 participants preferred the camera frame condition, and 2 participants could not distinguish between the hybrid frame 1 and camera frame conditions. Asked about the differences between conditions, E1-P6 commented, "The [camera frame] seems very similar to [hybrid frame 1], just that when I move forward, [the camera frame] will move away from me and down... If I want the robot to remain on an equal plane, I had to almost go like this" (hand moves diagonally up).
As described in Section IV-A, to balance visual-motor misalignments and naturalness, hybrid frame 1 differs from the camera frame only by a rotation about the camera pitch axis. Our study results indicated that most participants noticed this subtle distinction. Furthermore, the subtle distinction created meaningful differences in user experience, although most of these differences did not reach statistical significance in our experiment.
2) Experiment 2: Pick-and-Place with Mouse: Our data showed that the interface with hybrid frame 2 significantly outperformed the interfaces using the robot frame or camera frame in the combined objective measure. When controlling the robot with hybrid frame 2, participants reported significantly greater perceived fluency, trust, and predictability than with the camera frame. Although it took participants significantly more time to finish the task when using the robot frame, there were no significant differences in the subjective measures between the hybrid frame 2 and robot frame conditions. After the study, 12 participants reported that they preferred the condition with hybrid frame 2. E2-P5 commented: "The robot [with hybrid frame] moved kind of statically. Where in a short time I could master the movements. It seemed the easiest to use quickly." Meanwhile, 7 participants preferred the condition with the robot frame, even though 6 of them were faster in the hybrid frame condition.
3) Experiment 3: Tracing with Mouse: Our results revealed significant differences across the combined objective measure, predictability, trust, fluency, and the TLX overall score, with hybrid frame 3 being superior to the task frame. Subjectively, 19 participants chose hybrid frame 3 as their preferred condition, while 5 participants preferred the task frame condition.
4) Summary: All three experiments showed the same tendency: the hybrid frames identified in our design space led to better performance and user perception compared to some commonly-used reference frames, although some of the differences were not statistically significant due to high inter-subject variability. We designed these hybrid frames based on the reasoning afforded by the design space, and our experimental results match our predicted outcomes. Our experiments support the predictions made by our design space, illustrating its utility as a conceptual tool for generating and evaluating control coordinate systems.

VI. DISCUSSION
In this paper, we presented a design space for designers to understand the range of options in control coordinate systems and to reason about the appropriateness of their designs for different scenarios. To showcase the utility of our design space, we organized existing control coordinate systems within it and conducted three case studies in which we designed control coordinate systems for various telemanipulation scenarios. These designed frames were evaluated in human-subject experiments. Our user evaluation showed that the choices and reasoning afforded by our design space allowed us to design control coordinate systems for various scenarios and to predict their performance and user preference. Moreover, the subtle distinctions made visible by our design space created meaningful differences in user experience. The results highlight the usability and generalizability of our design space. Below, we discuss additional findings, implications for teleoperation systems, and the limitations of this work.
Individual Differences - Our experiment revealed considerable individual differences that may impact design choices. For example, our interview data showed that both mental rotation and perspective-taking strategies were used to address visual-motor misalignments, which matches the findings of Menchaca-Brandan et al. [16]. Some participants who used perspective-taking strategies seemed to find it hard to adapt to a new control coordinate system. E1-P23 commented, "[The robot frame] is like from the point of view of that screen (Sawyer robot's face)... [The robot frame] is the one I first experienced. I got used to that, so it just got stuck in my brain a little." Similarly, E1-P16 commented, "[At the start of the camera frame condition], I feel like I couldn't find my position." Moreover, one participant who preferred the robot frame in experiment 1 attributed this to being habituated to reverse scrolling settings when using a touchpad. These individual differences suggest that a telemanipulation system may benefit from personalized mappings (e.g., [12], [13]).
Implications - Besides mapping user inputs to commonly used reference frames in the remote environment (e.g., the robot frame), there are many other possible control coordinate systems for telemanipulation. Designers can tailor control coordinate systems to their scenarios by choosing meaningful directions from our design space. To reason about the suitability of control coordinate systems, prior research [43] emphasized reducing the mental workload caused by visual-motor misalignments; however, our experiments showed that some visual-motor misalignments along the pitch axis (in the hybrid frames in the pick-and-place experiments) do not increase mental workload. These results match findings in the ergonomics community [19]. Besides minimizing visual-motor misalignments, designers should also take into consideration the other design criteria provided in this work, i.e., naturalness and task semantics.
Limitations & Future Work - Our work has some limitations that must be addressed by future work. First, while our design space allows us to choose control coordinate systems for different scenarios, we cannot make claims about its comprehensiveness (as defined by Kerracher and Kennedy [52]). We summarized the design criteria based on prior work and our design experience, but our design space may be extended by additional criteria. Moreover, directions decomposed from some learning-based mappings [12], [14], [13] may provide insights to extend our design space. Meanwhile, our design criteria may be used to reason about the suitability of learning-based mappings or encoded in learning-based methods to generate more intuitive mappings. Second, while control coordinate systems map both translational and rotational inputs to the robot, we only assessed control coordinate systems on translational inputs in our human-subject experiments. Future work should evaluate control coordinate systems on manipulation tasks that involve both translational and rotational movements. Beyond selecting a control coordinate system, future work should explore alternative mapping approaches, e.g., a mapping from translational inputs to rotational robot movements [46]. Third, while our approach applies to mode-switched interfaces, future work should extend our framework to consider the costs and benefits of changing control coordinate systems. Fourth, in this work, all control coordinate systems were evaluated on low-latency teleoperation systems. Future work should explore control coordinate systems for high-latency teleoperation systems, where operators receive delayed feedback, introducing additional challenges for the design of control coordinate systems. Fifth, while our paper provides a design space to assist telemanipulation system designers, there are several trade-offs designers need to consider, such as choosing a task-specific or a general design (described in §III-D), prioritizing either user experience or task performance, or tailoring control coordinate systems for expert or non-expert users. Last, in the post-experiment interview in experiment 1, some participants complained about the depth ambiguity caused by the single, static camera setting. Future work should investigate control coordinate systems under multiple or dynamic camera views (e.g., [7]).
Conclusion - In this paper, we present our design space that articulates the range of options in control coordinate systems and provides design criteria for designers to reason about the suitability of their designs for various telemanipulation scenarios. We believe that the contribution of our design space with respect to designing and predicting suitable control coordinate systems will enhance and facilitate the input mapping design process for future telemanipulation systems.

Fig. 2: Visualization of some control coordinate systems in prior works. Red, green, and blue axes represent the directions mapped from the input device's right (x-axis), forward (y-axis), and up (z-axis), respectively. Control coordinate systems A and B are designed for 3D input devices; C and D are for 2D input devices. A. Orbit control [34], [25], [5] - The control coordinate system moves on an imaginary sphere, whose radius is controlled by the blue axis. B. Pour task frame [36] - Vertical inputs move the bottle vertically up and down (blue arrow); inputs along the other two axes tilt the bottle (red and green) to accomplish the pouring task. C. Spray task frame [26] - 2D inputs are mapped to the surface to be sprayed. The figure shows a tunnel surface that is curved along the vertical direction. D. View-dependent frame [27] - The control coordinate system is a plane formed by two axes chosen from the world frame. The two axes must form an acute angle with the camera image plane.

Fig. 3: A. A hybrid frame constructed by combining the camera right axis and the upright direction. B. A hybrid frame formulated by projecting camera axes onto a plane.

Fig. 4: We conducted three case studies in which we applied the choices and reasoning afforded by the design space to design control coordinate systems for various telemanipulation scenarios. The control coordinate system designed in each case study was assessed in an independent human-subject experiment. The designed hybrid frames were experimental conditions, and the robot frame, camera frame, and task frame were selected as control conditions. In the visualization, red, green, and blue arrows represent the directions mapped from the input device's right, forward, and up axes, respectively.

Fig. 5: Box-and-whisker plots of data from the performance and user perception measures for each experiment. The top and bottom of each box represent the first and third quartiles, and the line inside each box is the median of the data. The length of the box is the interquartile range (IQR). The whiskers extend to at most 1.5 IQR. Horizontal lines indicate significant Tukey HSD test results. TLX: NASA Task Load Index.

TABLE I: Control coordinate systems in prior works or designed in our case studies, grouped by input device (e.g., 3D input devices such as a VR controller or Space Mouse).

TABLE III: Participant demographics.

TABLE IV: Statistical results of our measures.