3D interaction

From Wikipedia, the free encyclopedia

In computing, 3D interaction is a form of human–machine interaction in which users are able to move and perform interaction in 3D space. Both human and machine process information in which the physical position of elements in 3D space is relevant.

The 3D space used for interaction can be the real physical space, a virtual space representation simulated in the computer, or a combination of both. When the real space is used for data input, humans perform actions or give commands to the machine using an input device that detects the 3D position of the human action. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through one output device or a combination of them.

Background

The early beginnings of 3D interaction can be traced back to 1962, when Morton Heilig invented the Sensorama simulator. It provided 3D video feedback, as well as motion, audio, and haptic feedback, to produce a virtual environment. The next stage of development was Ivan Sutherland's completion of his pioneering work in 1968: a head-mounted display that produced a 3D virtual environment by presenting a left and a right still image of that environment.

Limited availability of technology, together with impractical costs, held back the development and application of virtual environments until the 1980s; applications were largely limited to military ventures in the United States. Since then, further research and technological advances have opened new doors to applications in various other areas such as education, entertainment, and manufacturing.

In 3D interaction, users carry out their tasks and perform functions by exchanging information with computer systems in 3D space. It is an intuitive type of interaction because humans interact in three dimensions in the real world. The tasks that users perform have been classified as selection and manipulation of objects in virtual space, navigation, and system control. Tasks can be performed in virtual space through interaction techniques and by utilizing interaction devices. 3D interaction techniques are classified according to the task groups they support: techniques that support navigation tasks are classified as navigation techniques; techniques that support object selection and manipulation are labeled selection and manipulation techniques; and system control techniques support tasks that involve controlling the application itself. A consistent and efficient mapping between techniques and interaction devices must be made for the system to be usable and effective.

Interfaces associated with 3D interaction are called 3D interfaces. Like other types of user interfaces, they involve two-way communication between users and the system, but allow users to perform actions in 3D space. Input devices permit the users to give directions and commands to the system, while output devices allow the machine to present information back to them.

3D interfaces have been used in applications featuring virtual environments and augmented and mixed realities. In virtual environments, users may interact directly with the environment or use tools with specific functionalities to do so. 3D interaction occurs when physical tools are manipulated in a 3D spatial context to control a corresponding virtual tool.

Users experience a sense of presence when engaged in an immersive virtual world. Enabling users to interact with this world in 3D allows them to apply their natural, intrinsic knowledge of how information exchange takes place with physical objects in the real world. Texture, sound, and speech can all be used to augment 3D interaction. Currently, however, users still have difficulty interpreting 3D visuals and understanding how interaction occurs. Although moving around a three-dimensional world is natural for humans, the difficulty exists because many of the cues present in real environments are missing from virtual environments. Perspective and occlusion are among the primary perceptual cues used by humans. Also, even though scenes in virtual space appear three-dimensional, they are still displayed on a 2D surface, so some inconsistencies in depth perception will remain.

3D user interfaces

User interfaces are the means of communication between users and systems. 3D interfaces include media for 3D representation of system state, and media for 3D user input or manipulation. Using 3D representations is not enough to create 3D interaction; users must also have a way of performing actions in 3D. To that end, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed based on existing devices for 2D interaction.

Input devices

Input devices are instruments used to manipulate objects and to send control instructions to the computer system. They vary in the degrees of freedom they offer and can be classified into standard input devices, trackers, control devices, navigation equipment, and gesture interfaces.

Standard input devices include keyboards, tablets and styluses, joysticks, mice, touch screens, knobs, and trackballs.

Trackers detect or monitor head, hand, or body movements and send that information to the computer, which translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important for presenting the correct viewpoint and for coordinating the spatial and sound information presented to users, as well as the tasks or functions they can perform. 3D trackers may be mechanical, magnetic, ultrasonic, optical, or hybrid inertial. Examples of trackers include motion trackers, eye trackers, and data gloves.
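As an illustrative sketch (not the API of any particular tracker), the pose a 6-DOF tracker reports, a position plus an orientation quaternion, can be used to map points from the tracked device's frame into the virtual world:

```python
def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion (a common orientation format for
    6-DOF trackers) into a 3x3 rotation matrix."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def apply_pose(position, orientation, local_point):
    """Map a point expressed in the tracked device's local frame into
    world space: world = R * local + position."""
    r = quat_to_matrix(*orientation)
    return tuple(
        sum(r[i][j] * local_point[j] for j in range(3)) + position[i]
        for i in range(3)
    )
```

For example, if a head tracker reports position (0, 1.7, 0) and a 90° rotation about the vertical axis, applying the pose to the head-local forward direction (0, 0, -1) yields the direction the user is now facing in world space.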

A simple 2D mouse may be considered a navigation device if it allows the user to move to a different location in a virtual 3D space. Navigation devices such as the treadmill and bicycle make use of the natural ways that humans travel in the real world. Treadmills simulate walking or running and bicycles or similar type equipment simulate vehicular travel. In the case of navigation devices, the information passed on to the machine is the user’s location and movements in virtual space.

Wired gloves and bodysuits allow gestural interaction to occur. These send hand or body position and movement information to the computer using sensors.

Output devices

Output devices allow the machine to provide information or feedback to the user. They include visual displays, auditory displays, and haptic displays. Visual displays provide feedback to users in 3D visual form. Head-mounted displays and CAVEs (Cave Automatic Virtual Environments) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays, such as monitors and workbenches, allow users to see both. Auditory displays provide information in auditory form, which is especially useful when supplying location and spatial information to users; adding a background audio component to a display adds to the sense of realism. Haptic displays send tactile feedback, or the sense of touch, back to the user.

3D interaction techniques

3D interaction techniques are methods used to execute different types of tasks in 3D space. They are classified according to the tasks that they support.

Selection and manipulation

Users need to be able to manipulate virtual objects. Manipulation tasks involve selecting and moving an object. Sometimes, rotation of the object is involved as well. Direct-hand manipulation is the most natural technique because manipulating physical objects with the hand is intuitive for humans. However, this is not always possible. A virtual hand that can select and re-locate virtual objects will work as well.
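The reach of a virtual hand can be extended with a nonlinear arm-extension mapping, as in the Go-Go technique, so that distant objects can be grabbed without walking. A minimal sketch (the threshold and gain values here are illustrative, not taken from a published implementation):

```python
def gogo_virtual_distance(real_dist, threshold=0.3, gain=10.0):
    """Go-Go style arm extension: within `threshold` (metres, say)
    the virtual hand tracks the real hand one-to-one; beyond it,
    virtual reach grows quadratically with the real hand's distance,
    letting the user reach far-away objects."""
    if real_dist < threshold:
        return real_dist
    return real_dist + gain * (real_dist - threshold) ** 2
```

Close to the body the mapping is identical (0.2 m maps to 0.2 m), while stretching the arm to 0.5 m already places the virtual hand at 0.9 m.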
3D widgets can be used to place controls on objects; these are usually called 3D gizmos or manipulators (the ones in Blender are a good example). Users can employ them to re-locate, re-scale, or re-orient an object (translate, scale, rotate).
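The translate, rotate, and scale operations a gizmo exposes are typically composed into a single transform matrix applied to the object. A sketch using plain 4×4 matrices (Blender and similar tools use their own math libraries; this is only illustrative):

```python
import math

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    """Apply a 4x4 affine transform to a 3D point (homogeneous w = 1)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

# A gizmo edit is usually composed as Translate * Rotate * Scale,
# so scaling happens first, in the object's local frame.
m = matmul(translate(0, 0, 5), matmul(rotate_z(math.pi / 2), scale(2, 2, 2)))
```

Applying `m` to the point (1, 0, 0) first scales it to (2, 0, 0), rotates it to (0, 2, 0), then translates it to (0, 2, 5).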
Other techniques include the Go-Go technique and ray casting, where a virtual ray is used to point to and select an object. More recently, research by Richard White in Kansas on interactive surfaces and classroom interactive whiteboards has explored 3D natural user interfaces for grade-school students, in a project known as Edusim.
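Ray casting can be sketched as a ray–sphere intersection test over the scene's objects, returning the nearest one the ray hits (a simplification: real systems intersect meshes or bounding volumes, but the selection logic is the same):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Return the smallest positive ray parameter t at which the ray
    origin + t * direction hits the sphere, or None on a miss.
    `direction` is assumed to be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = -b - math.sqrt(disc)             # nearer of the two intersections
    return t if t > 0 else None

def pick(origin, direction, objects):
    """objects: dict mapping name -> (center, radius). Return the name
    of the nearest object the ray hits, or None."""
    best, best_t = None, math.inf
    for name, (center, radius) in objects.items():
        t = ray_sphere_t(origin, direction, center, radius)
        if t is not None and t < best_t:
            best, best_t = name, t
    return best
```

Pointing a ray from the user's hand down +z at two spheres on that axis selects the closer one, mimicking how a laser-pointer-style selection resolves overlapping targets.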

Navigation

The computer needs to provide the user with information regarding location and movement. Navigation tasks have two components. Travel involves moving from the current location to the desired point. Wayfinding refers to finding and setting routes to get to a travel goal within the virtual environment.

  • Wayfinding: Wayfinding in virtual space is different from, and more difficult than, wayfinding in the real world, because synthetic environments often lack the perceptual cues and movement constraints of real ones. It can be supported using user-centred techniques, such as a larger field of view and motion cues, or environment-centred techniques, such as structural organization and wayfinding principles.
  • Travel: Good travel techniques allow the user to move easily through the environment. There are three types of travel tasks, namely exploration, search, and manoeuvring. Travel techniques can be classified into the following five categories:
    • Physical movement – user moves through the virtual world
    • Manual Viewpoint manipulation – use hand motions to achieve movement
    • Steering – direction specification
    • Target-based travel – destination specification
    • Route planning – path specification
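Target-based travel, for instance, is often implemented by easing the viewpoint from its current position to the chosen destination over a short interval rather than teleporting instantly, which helps avoid disorientation. A minimal sketch (the smoothstep ease used here is one common choice, not a canonical one):

```python
def target_based_travel(start, target, duration, elapsed):
    """Interpolate a viewpoint from `start` to `target` over `duration`
    seconds, given `elapsed` seconds since travel began. The smoothstep
    ease gives zero velocity at both ends of the motion."""
    s = min(max(elapsed / duration, 0.0), 1.0)   # clamp progress to [0, 1]
    ease = s * s * (3.0 - 2.0 * s)               # smoothstep curve
    return tuple(a + (b - a) * ease for a, b in zip(start, target))
```

Calling this once per frame moves the viewpoint from its start to the specified destination, passing through the midpoint exactly halfway through the travel time.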

System control

Tasks that involve issuing commands to the application in order to change system mode or activate some functionality fall under the category of system control. Techniques that support system control tasks in three-dimensions are classified as:

  • Graphical menus
  • Voice commands
  • Gestural interaction
  • Virtual tools with specific functions

Symbolic input

This task allows the user to enter and/or edit text, for example, making it possible to annotate 3D scenes or 3D objects.
