# Virtual fixture

A virtual fixture is an overlay of augmented sensory information upon a user's perception of a real environment in order to improve human performance in both direct and remotely manipulated tasks. Developed in the early 1990s by Louis Rosenberg at the U.S. Air Force's Armstrong Laboratory (a predecessor of the Air Force Research Laboratory), Virtual Fixtures was a pioneering platform in virtual reality and augmented reality technologies.

## History

Louis Rosenberg testing Virtual Fixtures, one of the first augmented reality systems ever developed (1992)

Virtual Fixtures was first developed by Louis Rosenberg in 1992 at the USAF Armstrong Laboratory, resulting in the first immersive augmented reality system ever built.[1][2][3][4][5][6] Because 3D graphics were too slow in the early 1990s to present a photorealistic and spatially-registered augmented reality, Virtual Fixtures used two real physical robots, controlled by a full upper-body exoskeleton worn by the user. To create the immersive experience for the user, a unique optics configuration was employed that involved a pair of binocular magnifiers aligned so that the user's view of the robot arms was brought forward so as to appear registered in the exact location of the user's real physical arms.[1][7][8][9] The result was a spatially-registered immersive experience in which the user moved his or her arms while seeing robot arms in the place where his or her arms should be. The system also employed computer-generated virtual overlays in the form of simulated physical barriers, fields, and guides, designed to assist the user while performing real physical tasks.[10][11]

Fitts's law performance testing was conducted on batteries of human test subjects, demonstrating for the first time that a significant enhancement in human performance of real-world dexterous tasks could be achieved by providing immersive augmented reality overlays to users.[12][13]

## Concept

Virtual fixtures were used by Rosenberg (1992) to enhance operator performance in the telerobotic control of a Fitts's law peg-board task.

The concept of virtual fixtures was first introduced by Rosenberg (1992)[1] as an overlay of virtual sensory information on a workspace in order to improve human performance in direct and remotely manipulated tasks. The virtual sensory overlays can be presented as physically realistic structures, registered in space such that they are perceived by the user to be fully present in the real workspace environment. They can also be abstractions with properties that no real physical structure can have. Because the concept of sensory overlays is difficult to visualize and discuss, the virtual fixture metaphor was introduced. To understand what a virtual fixture is, consider an analogy with a real physical fixture such as a ruler. Drawing a straight line on a piece of paper free-hand is a task that most humans are unable to perform quickly and accurately; with a simple device such as a ruler, however, the task can be carried out quickly and with good accuracy. The ruler helps by guiding the pen along its edge, reducing the user's tremor and mental load and thus increasing the quality of the result.

The definition of virtual fixtures by Rosenberg[1][7][10] is much broader than simply providing guidance of the end-effector. For example, auditory virtual fixtures increase the user's awareness by providing multi-modal audio cues that help localize the end-effector. Rosenberg argues that virtual fixtures succeed not only because the user is guided by the fixture, but also because the user experiences a greater sense of presence and better localization in the remote workspace. However, in the context of human-machine collaborative systems, the term virtual fixture often refers to a task-dependent virtual aid that is overlaid upon a real environment and guides the user's motion along desired directions while preventing motion in undesired directions or regions of the workspace. This is the type of virtual fixture described in detail in the next section of this article.

Virtual fixtures can be either guiding virtual fixtures or forbidden-regions virtual fixtures. A forbidden-regions virtual fixture could be used, for example, in a teleoperated setting where the operator has to drive a vehicle at a remote site to accomplish an objective. If there are pits at the remote site which would be harmful for the vehicle to fall into, forbidden regions could be defined at the pit locations, thus preventing the operator from issuing commands that would result in the vehicle ending up in such a pit.

Example of a forbidden regions virtual fixture

Such illegal commands could easily be sent by an operator because of, for instance, delays in the teleoperation loop, poor telepresence or a number of other reasons.
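The screening of such commands can be sketched as a simple predictive check. The following minimal numpy sketch (the function name and the circular pit geometry are illustrative assumptions, not details of any actual system) blocks any commanded velocity whose predicted next position would fall inside a forbidden region:

```python
import numpy as np

def filter_command(pos, v_op, pit_center, pit_radius, dt=0.01):
    """Block a commanded velocity if one integration step would carry
    the vehicle inside a circular forbidden region (a 'pit').
    Illustrative sketch only; a real system would use richer geometry
    and check the whole path, not just the end point of one step."""
    predicted = pos + v_op * dt
    if np.linalg.norm(predicted - pit_center) < pit_radius:
        return np.zeros_like(v_op)   # forbidden: suppress the command
    return v_op                      # allowed: pass it through unchanged
```

A command that would drive the vehicle straight into the pit is replaced by a zero velocity, while commands that keep the predicted position outside the region pass through unchanged.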

An example of a guiding virtual fixture is when the vehicle must follow a certain trajectory:

Example of a guiding virtual fixture

The operator is then able to control the progress along the preferred direction while motion along the non-preferred direction is constrained.

With both forbidden regions and guiding virtual fixtures the stiffness, or its inverse the compliance, of the fixture can be adjusted. If the compliance is high (low stiffness) the fixture is soft. On the other hand, when the compliance is zero (maximum stiffness) the fixture is hard.

A hard fixture completely constrains the motion to the fixture, while a softer fixture allows some deviation from it.

## Virtual fixture control law

This section describes how a control law that implements virtual fixtures can be derived. It is assumed that the robot is a purely kinematic device with end-effector position ${\displaystyle \mathbf {p} =\left[x,y,z\right]}$ and end-effector orientation ${\displaystyle \mathbf {r} =\left[r_{\textrm {x}},r_{\textrm {y}},r_{\textrm {z}}\right]}$ expressed in the robot's base frame ${\displaystyle F_{\textrm {r}}}$. The input control signal ${\displaystyle \mathbf {u} }$ to the robot is assumed to be a desired end-effector velocity ${\displaystyle \mathbf {v} ={\dot {\mathbf {x} }}=\left[{\dot {\mathbf {p} }},{\dot {\mathbf {r} }}\right]}$. In a tele-operated system it is often useful to scale the input velocity from the operator, ${\displaystyle \mathbf {v} _{\textrm {op}}}$, before feeding it to the robot controller. If the input from the user is of another form, such as a force or a position, it must first be transformed to an input velocity, by, for example, scaling or differentiating.

Thus the control signal ${\displaystyle \mathbf {u} }$ would be computed from the operator's input velocity ${\displaystyle \mathbf {v} _{\textrm {op}}}$ as:

${\displaystyle \mathbf {u} =c\cdot \mathbf {v} _{\textrm {op}}}$


If ${\displaystyle c=1}$ there exists a one-to-one mapping between the operator and the slave robot.

If the constant ${\displaystyle c}$ is replaced by a diagonal matrix ${\displaystyle \mathbf {C} }$ it is possible to adjust the compliance independently for different dimensions of ${\displaystyle {\dot {\mathbf {x} }}}$. For example, setting the first three elements on the diagonal of ${\displaystyle \mathbf {C} }$ to ${\displaystyle c}$ and all other elements to zero would result in a system that only permits translational motion and not rotation. This would be an example of a hard virtual fixture that constrains the motion from ${\displaystyle \mathbf {x} \in \mathbb {R} ^{6}}$ to ${\displaystyle \mathbf {p} \in \mathbb {R} ^{3}}$. If the rest of the elements on the diagonal were set to a small value, instead of zero, the fixture would be soft, allowing some motion in the rotational directions.
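As a concrete illustration, the hard and soft variants of this diagonal compliance matrix can be sketched in a few lines of numpy (the specific values of ${\displaystyle c}$ and the 6-vector layout, translations followed by rotations, are assumptions made for the example):

```python
import numpy as np

# Hypothetical 6-DOF input velocity: three translations, then three rotations.
v_op = np.array([0.1, 0.2, 0.0, 0.5, 0.5, 0.5])

c = 1.0        # one-to-one scaling for the permitted directions
c_soft = 0.1   # small compliance for a soft fixture

# Hard fixture: translation permitted, rotation blocked entirely.
C_hard = np.diag([c, c, c, 0.0, 0.0, 0.0])

# Soft fixture: rotation attenuated rather than blocked.
C_soft = np.diag([c, c, c, c_soft, c_soft, c_soft])

u_hard = C_hard @ v_op   # rotational components become zero
u_soft = C_soft @ v_op   # rotational components are scaled down
```

With `C_hard` the rotational components of the command are zeroed, giving the hard fixture from ${\displaystyle \mathbb {R} ^{6}}$ to ${\displaystyle \mathbb {R} ^{3}}$ described above; with `C_soft` they are merely attenuated.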

To express more general constraints assume a time-varying matrix ${\displaystyle \mathbf {D} (t)\in \mathbb {R} ^{6\times n},~n\in [1..6]}$ which represents the preferred direction at time ${\displaystyle t}$. Thus if ${\displaystyle n=1}$ the preferred direction is along a curve in ${\displaystyle \mathbb {R} ^{6}}$. Likewise, ${\displaystyle n=2}$ would give preferred directions that span a surface. From ${\displaystyle \mathbf {D} }$ two projection operators can be defined (Marayong et al., 2003), the span and kernel of the column space:

${\displaystyle {\begin{aligned}{\textrm {Span}}(\mathbf {D} )&\equiv \left[\mathbf {D} \right]=\mathbf {D} (\mathbf {D} ^{T}\mathbf {D} )^{-1}\mathbf {D} ^{T}\\{\textrm {Kernel}}(\mathbf {D} )&\equiv \langle \mathbf {D} \rangle =\mathbf {I} -\left[\mathbf {D} \right]\end{aligned}}}$


If ${\displaystyle \mathbf {D} }$ does not have full column rank, ${\displaystyle \mathbf {D} ^{T}\mathbf {D} }$ is not invertible and the span cannot be computed this way; consequently it is better to compute the span using the pseudo-inverse (Marayong et al., 2003), thus in practice the span is computed as:

${\displaystyle {\textrm {Span}}(\mathbf {D} )\equiv \left[\mathbf {D} \right]=\mathbf {D} (\mathbf {D} ^{T}\mathbf {D} )^{\dagger }\mathbf {D} ^{T}}$


where ${\displaystyle \dagger }$ denotes the Moore–Penrose pseudo-inverse, here applied to ${\displaystyle \mathbf {D} ^{T}\mathbf {D} }$.
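Both projection operators can be sketched directly from these formulas. The following numpy snippet (function names are my own) uses the pseudo-inverse, so it also handles a rank-deficient ${\displaystyle \mathbf {D} }$:

```python
import numpy as np

def span(D):
    """Projection onto the column space of D: [D] = D (D^T D)^+ D^T."""
    return D @ np.linalg.pinv(D.T @ D) @ D.T

def kernel(D):
    """Projection onto the complement of the column space: <D> = I - [D]."""
    return np.eye(D.shape[0]) - span(D)
```

For a preferred direction along the first axis, `span` projects a velocity onto that axis and `kernel` keeps the remaining components; both operators are idempotent, as projections must be.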

If the input velocity is split into two components as:

${\displaystyle \mathbf {v} _{\textrm {D}}\equiv \left[\mathbf {D} \right]\mathbf {v} _{\textrm {op}}{\textrm {~and~}}\mathbf {v} _{\tau }\equiv \mathbf {v} _{\textrm {op}}-\mathbf {v} _{\textrm {D}}=\langle \mathbf {D} \rangle \mathbf {v} _{\textrm {op}}}$


it is possible to rewrite the control law as:

${\displaystyle \mathbf {v} =c\cdot \mathbf {v} _{\textrm {op}}=c\left(\mathbf {v} _{\textrm {D}}+\mathbf {v} _{\tau }\right)}$


Next, introduce a new compliance ${\displaystyle c_{\tau }}$ that affects only the non-preferred component of the velocity input, and write the final control law as:

${\displaystyle \mathbf {v} =c\left(\mathbf {v} _{\textrm {D}}+c_{\tau }\cdot \mathbf {v} _{\tau }\right)=c\left(\left[\mathbf {D} \right]+c_{\tau }\langle \mathbf {D} \rangle \right)\mathbf {v} _{\textrm {op}}}$
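The complete law can be sketched as a single function. This is a minimal numpy illustration; the 3-D example direction in the test below is an assumption made for brevity, where the article works in ${\displaystyle \mathbb {R} ^{6}}$:

```python
import numpy as np

def virtual_fixture(v_op, D, c=1.0, c_tau=0.0):
    """Final control law: v = c ([D] + c_tau <D>) v_op.
    c_tau = 0: hard fixture (motion only along the preferred directions);
    0 < c_tau < 1: soft fixture (deviation attenuated, not blocked);
    c_tau = 1: the fixture is effectively removed."""
    span = D @ np.linalg.pinv(D.T @ D) @ D.T   # [D]
    kernel = np.eye(D.shape[0]) - span         # <D>
    return c * (span + c_tau * kernel) @ v_op
```

With a preferred direction along the first axis, a hard fixture passes only the first velocity component, a soft fixture attenuates the others, and ${\displaystyle c_{\tau }=1}$ reproduces the unconstrained scaled command.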


## References

1. ^ a b c d L. B. Rosenberg. The Use of Virtual Fixtures As Perceptual Overlays to Enhance Operator Performance in Remote Environments. Technical Report AL-TR-0089, USAF Armstrong Laboratory, Wright-Patterson AFB OH, 1992.
2. ^ Rosenberg, L.B. (1993). "Virtual fixtures: Perceptual tools for telerobotic manipulation". Proceedings of IEEE Virtual Reality Annual International Symposium. IEEE: 76–82. doi:10.1109/vrais.1993.380795. ISBN 978-0780313637.
3. ^ Rosenberg, L. (1993). "The use of virtual fixtures to enhance telemanipulation with time delay". Proceedings of the ASME Winter Annual Meeting, Robotics & Telemanipulation. Vol. 49. New Orleans, LA.
4. ^ Rosenberg, L. (1993). "The use of virtual fixtures to enhance operator performance in time delayed teleoperation". J. Dyn. Syst. Control. Vol. 49, pp. 29–36.
5. ^ Noer, Michael (1998-09-21). "Desktop fingerprints". Forbes. Retrieved 22 April 2014.
6. ^ Rosenberg, Louis. "Defense Technical Information Center - Virtual Fixtures (1992)".
7. ^ a b Rosenberg, L., "Virtual fixtures as tools to enhance operator performance in telepresence environments," SPIE Manipulator Technology, 1993.
8. ^ Rosenberg, Louis B. (March 1993). "The use of Virtual Fixtures to Enhance Operator Performance in Time Delayed Teleoperation".
9. ^
10. ^ a b Rosenberg, "Virtual Haptic Overlays Enhance Performance in Telepresence Tasks," Dept. of Mech. Eng., Stanford Univ., 1994.
11. ^ Rosenberg, Louis B. (18–22 Sep 1993). "Virtual fixtures: Perceptual tools for telerobotic manipulation". Virtual Reality Annual International Symposium, 1993. Seattle, WA: IEEE: 76–82. doi:10.1109/VRAIS.1993.380795. ISBN 978-0-7803-1363-7.
12. ^ Rosenberg, Louis (March 1993). "The use of Virtual Fixtures to Enhance Operator Performance in Time Delayed Teleoperation".
13. ^ Rosenberg, Louis B. (1993). "Virtual fixtures as tools to enhance operator performance in telepresence environments". Telemanipulator Technology and Space Telerobotics. 2057: 10–21. doi:10.1117/12.164901.
• L. B. Rosenberg. Virtual fixtures: Perceptual tools for telerobotic manipulation, In Proc. of the IEEE Annual Int. Symposium on Virtual Reality, pp. 76–82, 1993.
• P. Marayong, M. Li, A. M. Okamura, and G. D. Hager. Spatial Motion Constraints: Theory and Demonstrations for Robot Guidance Using Virtual Fixtures, In Proc. of the IEEE Int. Conf. on Robotics and Automation, pp. 1270–1275, 2003.