Human to Robot Mapping Methods
The information described below is primarily from the paper "Calibration and Mapping of a Human Hand for Dexterous Telemanipulation." Recent modifications are described under Recent Improvements below.

Point-to-Point & Object-Based Mapping

Human-to-robot mapping is an integral part of our telemanipulation setup. Once we have accurate information on the user's fingertip locations, we can use this information to control the robot fingertips; in other words, we map virtual hand motions to robotic hand motions. However, difficulties arise when attempting to map the three-dimensional motion of the human hand to the two-dimensional motion of a planar robot while still making the mapping intuitive to the teleoperator.

We have developed a dexterous planar robot, known as Dexter, that serves as a test bed for investigating motion mapping methods. Initially we attempted to solve the mapping problem using a point-to-point mapping. The drawbacks associated with this method led us to develop an alternate mapping scheme, the object-based mapping method.

Point-to-Point Mapping

The point-to-point mapping method maps the planar projection of the fingertip positions to the robot fingertip positions. The index and thumb tip positions are projected onto a plane that is perpendicular to the palm and contains the index finger's extension-flexion motion. The projected fingertip positions are mapped to the robot fingertip positions using a standard frame transformation (below).
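As a rough illustration, the following Python sketch shows one way such a point-to-point mapping could be computed. The function name, the axis convention (y taken as the out-of-plane direction), and the uniform scale factor are assumptions for illustration, not the original implementation.

```python
import numpy as np

def point_to_point_map(p_tip, R, t, scale=1.0):
    """Map a 3D fingertip position (hand frame) to a 2D robot
    fingertip position via planar projection and a frame transform.

    p_tip : 3-vector, fingertip position in the hand frame
    R     : 2x2 rotation from the projection plane to the robot frame
    t     : 2-vector translation offset in the robot frame
    scale : uniform gain (hypothetical; the actual gains may differ)
    """
    # Project onto the index-finger extension-flexion plane by
    # dropping the out-of-plane coordinate (assumed here to be y).
    p_plane = np.array([p_tip[0], p_tip[2]])
    # Standard frame transformation into the robot workspace.
    return scale * (R @ p_plane) + t
```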

Under the point-to-point mapping method, the robotic finger corresponding to the index finger was relatively easy to control, due to the kinematic similarities. However, control of the finger corresponding to the thumb proved extremely difficult. Achievable positions for the thumb were mapped to a relatively small region of the corresponding robotic finger workspace.

Method Difficulties

This method highlighted two underlying problems associated with mapping human hand motion to planar robot motion.

  • The thumb does not directly oppose the index finger
    As a result, thumb fingertip location information is lost in the planar projection
  • Lack of correspondence between ideal manipulation points
    The robot has the greatest manipulation range at approximately the geometric center of the workspace. Ideally, this position should correspond to the natural pinch position of the human hand. However, humans tend to manipulate small objects near the outer edge of the hand workspace. Simply scaling the motions of the index and thumb results in manipulations being performed near the lower limit of the robot workspace, thereby limiting the manipulation range of the robot.

These two factors resulted in poor workspace utilization and a non-intuitive mapping. This led us to develop object-based mapping.

Object-Based Mapping

The goal of object-based mapping is to allow the user to make natural motions, such as grasping, releasing, or rolling an object, and have the robot perform analogous motions. The object-based mapping scheme assumes that a virtual sphere is held between the user's thumb and index finger. Parameters of the virtual object (size, orientation, and midpoint location) are scaled non-linearly and independently to improve the achievable workspace and the mapping's intuitive feel.

Object Mapping Implementation

The size of the virtual object is calculated from the 3D distance between the thumb and index fingertips. The object midpoint location is initially calculated in the hand frame by finding the midpoint between the thumb and index fingertip positions. This midpoint is then projected onto the index finger plane of motion (the same plane used in point-to-point mapping). Using a unity-gain standard frame transformation, the midpoint is transformed to the robotic hand frame. The natural human pinch point is mapped to the ideal manipulation point using the translation offsets, and the extension/retraction of the pinch point is mapped to the vertical axis using the rotation angle. Note that the orientation of the object is based on the angle of the projected line between the fingertips in the hand frame. Once the object parameters are in the robotic hand frame, they are further modified to match the robotic workspace while maintaining the correspondence between the natural pinch point and the ideal manipulation point.
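A minimal Python sketch of these calculations is shown below, assuming the hand frame's y axis points out of the index finger's plane of motion; the names and axis conventions are illustrative, not the original code.

```python
import numpy as np

def object_parameters(p_thumb, p_index):
    """Compute virtual object parameters from thumb and index
    fingertip positions in the hand frame (a sketch; the axis
    conventions are assumptions)."""
    # Object size: full 3D distance between the fingertips.
    size = np.linalg.norm(p_index - p_thumb)
    # Midpoint between the fingertips, projected onto the index
    # finger plane of motion (y assumed out-of-plane).
    mid3 = 0.5 * (p_thumb + p_index)
    midpoint = np.array([mid3[0], mid3[2]])
    # Orientation: angle of the projected line between fingertips.
    d = p_index - p_thumb
    orientation = np.arctan2(d[2], d[0])
    return size, midpoint, orientation

def to_robot_frame(midpoint, theta, offset):
    """Unity-gain frame transformation of the midpoint into the
    robotic hand frame; theta and offset map the natural pinch
    point to the ideal manipulation point."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ midpoint + offset
```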

Workspace Matching

An algorithm modifies the object parameters using nonlinear gain functions to better match the human hand workspace to the robotic workspace. Typical robotic and human hand positions are shown in the figure below.

Note: The configurations shown here are demonstrated in this movie (12 MB).
Typical Human Hand Positions and Corresponding Robotic Hand Configurations
Workspace Matching - Object Size

The object size parameter is varied non-linearly to better control fingertip separation. For small objects (A), the gain on the object size is proportional to the size difference between the human hand and the robotic hand. The gain increases for larger objects to extend the range in the robotic hand workspace (B). The gain is a piecewise-linear function of the object size (see plot).
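A minimal sketch of such a piecewise-linear size gain follows; the breakpoint and gain values are illustrative placeholders, tuned per user in practice.

```python
def size_gain(s, s_break, g_small, g_large):
    """Piecewise-linear gain on the virtual object size (a sketch).

    Region A: small objects scaled by the hand/robot size ratio.
    Region B: larger objects scaled up to reach more workspace.
    The function is continuous at the breakpoint s_break.
    """
    if s <= s_break:
        return g_small * s                                   # region A
    return g_small * s_break + g_large * (s - s_break)       # region B
```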

Workspace Matching - Object Midpoint

The typical positions figure above shows the correspondence between the ideal manipulation point and the natural pinch point (D). As stated above, this correspondence is established using the translation offset. The human hand tends to retract farther from the pinch point than it extends. Thus the gain on the vertical position of the midpoint, in the robotic hand frame, is modified so that the entire robotic workspace is utilized (as in E and F). The gain is a piecewise-linear function of the object midpoint.
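A comparable sketch for the midpoint gain, assuming separate gains above and below the mapped pinch point; the names and sign convention are illustrative.

```python
def midpoint_gain(y, y_pinch, g_retract, g_extend):
    """Piecewise-linear gain on the vertical midpoint position in
    the robot frame, centered on the mapped pinch point (a sketch;
    the gains are per-user tuning values). Using different gains
    above and below the pinch point compensates for the hand
    retracting farther from the pinch point than it extends."""
    dy = y - y_pinch
    gain = g_retract if dy < 0 else g_extend
    return y_pinch + gain * dy
```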

Developing a User Mapping

Once a user is calibrated to the CyberGlove, we customize the mapping parameters and transformation parameters. The transformation parameters include the rotation angle and translation offsets; the object mapping parameters include gain values, gain blend sizes, and center gain locations. In all, there are 17 parameters which are modified to give the user the best utilization of the workspace while maintaining an intuitive feel. Using a custom Graphical User Interface, the parameters can be modified in real time. A virtual robotic hand display within the GUI is initially used to set up the parameters, to avoid any problems with poor mapping parameters while the robotic hand is running. Once the parameters are adjusted using the virtual display, the user's motion is connected to the robotic hand and the parameters are further tuned for the best possible mapping.
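For concreteness, the per-user parameter set might be grouped as in the hypothetical structure below; the field names and grouping are assumptions, though the original implementation likewise collects 17 values in total.

```python
from dataclasses import dataclass

@dataclass
class MappingParameters:
    """Per-user mapping parameters (illustrative names only)."""
    # Frame transformation
    rotation_angle: float      # maps pinch extension/retraction to vertical
    offset_x: float            # translation offsets mapping the natural
    offset_y: float            # pinch point to the ideal manipulation point
    # Object-size gain function
    size_gain_small: float     # region A gain
    size_gain_large: float     # region B gain
    size_breakpoint: float     # where the gain regions meet
    size_blend: float          # blend width between gain regions
    # Midpoint gain function
    mid_gain_retract: float
    mid_gain_extend: float
    mid_center: float          # center gain location (mapped pinch point)
    # ... remaining gains and blend sizes for the other object parameters
```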

Ideally, if the mapping parameters are adjusted to match the typical positions A, B, D, E, and F, then the user should be able to roll a virtual object about his/her comfortable manipulation position, and the robot will in turn roll an object about the ideal manipulation position (C), while still being able to reach most of the robot's workspace.

Mapping Method Results

Under the point-to-point mapping method, one can see that the index finger maps fairly well to the left robot finger workspace. However, the thumb motion is mapped to a relatively small area roughly along a vertical line. This is partially due to using the planar projection of the thumb motion. Also, the large number of points outside the robotic hand workspace leads to a distortion in the mapping: the robot will go to the closest possible position at the edge of the workspace. It is also important to note that the natural pinch point does not match the robot's ideal manipulation point.

Under the object-based mapping method the index finger motion lies almost completely within the robot's workspace. More importantly, the motion of the right finger, the thumb, has been greatly expanded. Also, the pinch point matches the ideal manipulation position directly.

Method Conclusions

The object-based mapping method does show considerable improvement over the point-to-point method. However, there are some drawbacks associated with the method. The motions of the thumb and index finger are coupled, so individual finger exploration is difficult. Also, because of the large number of tuning parameters, the quality of the mapping varies from user to user.

Mapping Method Extensibility

While the object-based mapping method has been demonstrated using a particular robotic hand, it is a general technique and is extensible to any planar robotic hand. By modifying the mapping parameters, particularly the nonlinear gain functions, the motions of the human hand can be mapped to a new robot workspace. One caveat that must be observed when performing the nonlinear mapping is that, in some cases, relatively small motions of the human fingers could result in large motions of the robot fingers. Thus it may be desirable to plot the corresponding velocity ellipsoids.
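One generic way to check for such motion amplification, sketched below, is to estimate the Jacobian of the mapping numerically and take its singular value decomposition; the singular values give the axes of the velocity ellipsoid. This is a standard technique, not code from the original system.

```python
import numpy as np

def mapping_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of a mapping f: R^n -> R^m at x."""
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - y0) / eps
    return J

def velocity_ellipsoid_axes(f, x):
    """Singular values/vectors of the mapping Jacobian give the axes
    of the robot-velocity ellipsoid for unit human-finger velocities;
    a large singular value flags regions where small hand motions
    produce large robot motions."""
    U, s, _ = np.linalg.svd(mapping_jacobian(f, x))
    return s, U
```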

Recent Improvements

The mapping parameters (17 in total) were previously adjusted manually by the experimenters for each operator. A new algorithm was developed to automatically compute the mapping parameters based on a few simple motions of the operator's hand, and a new mapping method for index finger exploration was also implemented. The automatic mapping method uses line-fitting functions to determine the operator's vertical and grasp orientations. Then a skew transformation, with offsets determined by the operator's pinch point, is used to map the position to the robot space. Additionally, the gain functions described above for object size and midpoint were replaced with smooth quadratic and linear functions in each quadrant (stemming from the mapped pinch point). The scaled and transformed object information can then be used to control the fingertips (via a simple kinematic transformation) or an object in the robot's grasp (via object impedance control code).
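A rough sketch of what such a skew transformation could look like is given below; the parameter names (rotation angle, shear term, pinch-point offset) are assumptions about its structure, not the actual implementation.

```python
import numpy as np

def skew_map(p, theta, shear, pinch_offset):
    """Sketch of a skew transformation into the robot space.

    theta        : aligns the operator's vertical/grasp orientations
                   (as found by line fitting) with the robot axes
    shear        : skew term correcting non-orthogonal hand axes
    pinch_offset : offset determined by the operator's pinch point
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    S = np.array([[1.0, shear], [0.0, 1.0]])  # skew along x
    return S @ (R @ p) + pinch_offset
```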

This page was last modified on 3/22/02.