Immersion (virtual reality)

From Wikipedia, the free encyclopedia

Immersion into virtual reality is a perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system in images, sound or other stimuli that provide an engrossing total environment.

The name is a metaphoric use of the experience of submersion, applied to representation, fiction or simulation. Immersion can also be defined as a state of consciousness in which the awareness of physical self of a "visitor" (Maurice Benayoun) or "immersant" (Char Davies) is transformed by being surrounded by an artificial environment. The term is used to describe partial or complete suspension of disbelief, which enables action or reaction to stimuli encountered in a virtual or artistic environment. The degree to which the virtual or artistic environment faithfully reproduces reality determines the degree of suspension of disbelief; the greater the suspension of disbelief, the greater the degree of presence achieved.

Types of immersion

Classic Virtual reality HMD

According to Ernest W. Adams, author and consultant on game design,[1] immersion can be separated into three main categories:

Tactical immersion
Tactical immersion is experienced when performing tactile operations that involve skill. Players feel "in the zone" while perfecting actions that result in success.
Strategic immersion
Strategic immersion is more cerebral, and is associated with mental challenge. Chess players experience strategic immersion when choosing a correct solution among a broad array of possibilities.
Narrative immersion
Narrative immersion occurs when players become invested in a story, and is similar to what is experienced while reading a book or watching a movie.

Staffan Björk and Jussi Holopainen, in Patterns In Game Design,[2] divide immersion into similar categories, but call them sensory-motoric immersion, cognitive immersion and emotional immersion, respectively. In addition to these, they add a new category:

Spatial immersion
Spatial immersion occurs when a player feels the simulated world is perceptually convincing. The player feels that he or she is really "there" and that a simulated world looks and feels "real".

Presence

Virtual reality glasses can produce a visceral feeling of being in a simulated world, a form of spatial immersion called presence.[3] According to Oculus VR, the technology requirements to achieve this visceral reaction are low latency and precise tracking of movements.[4][5][6]
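As a rough illustration, motion-to-photon latency (the delay between a head movement and the corresponding change in light reaching the eye) can be thought of as a sum of pipeline stages. The stage durations below are hypothetical round numbers chosen for illustration, not measurements of any real headset:

```python
# Illustrative decomposition of motion-to-photon latency into
# pipeline stages. All stage durations are hypothetical round
# numbers for illustration, not real measurements.
budget_ms = {
    "sensor sampling": 1,   # read IMU / tracking sensors
    "tracking fusion": 1,   # turn sensor data into a head pose
    "simulation": 5,        # one tick of the application loop
    "rendering": 7,         # draw both eyes' views
    "display scanout": 5,   # light up the panel
}

total = sum(budget_ms.values())
print(f"motion-to-photon latency: {total} ms")
```

With these made-up numbers the total lands just under the 20 ms figure often quoted for presence; in practice, techniques such as timewarp reprojection, which re-projects an already-rendered frame using the latest head pose, are used to shave time off the later stages.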

Michael Abrash gave a talk on VR at Steam Dev Days in 2014.[7] According to the VR research team at Valve, all of the following are needed to establish presence:

  • A wide field of view (80 degrees or better)
  • Adequate resolution (1080p or better)
  • Low pixel persistence (3 ms or less)
  • A high enough refresh rate (above 60 Hz; 95 Hz is enough, though a lower rate may be adequate)
  • A global display, where all pixels are illuminated simultaneously (a rolling display may work with eye tracking)
  • Optics (at most two lenses per eye with trade-offs, ideal optics not practical using current technology)
  • Optical calibration
  • Rock-solid tracking: translation with millimeter accuracy or better, orientation with quarter-degree accuracy or better, and a tracked volume of 1.5 meters or more on a side
  • Low latency (20 ms motion to last photon, 25 ms may be good enough)
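The checklist above can be expressed as a simple threshold test. The following sketch compares a hypothetical headset spec (all numbers invented for illustration, not a real product's figures) against the minimums from the talk:

```python
# Thresholds from Valve's 2014 Steam Dev Days presence checklist.
THRESHOLDS = {
    "fov_deg": 80,                  # minimum field of view
    "resolution_px": (1920, 1080),  # minimum panel resolution (1080p)
    "persistence_ms": 3,            # maximum pixel persistence
    "refresh_hz": 95,               # target refresh rate
    "latency_ms": 20,               # max motion-to-photon latency
    "tracking_mm": 1,               # max translational tracking error
    "tracking_deg": 0.25,           # max rotational tracking error
}

def meets_presence(spec):
    """Return a dict of pass/fail results, one per criterion."""
    t = THRESHOLDS
    return {
        "field of view": spec["fov_deg"] >= t["fov_deg"],
        "resolution": (spec["resolution_px"][0] >= t["resolution_px"][0]
                       and spec["resolution_px"][1] >= t["resolution_px"][1]),
        "persistence": spec["persistence_ms"] <= t["persistence_ms"],
        "refresh rate": spec["refresh_hz"] >= t["refresh_hz"],
        "latency": spec["latency_ms"] <= t["latency_ms"],
        "tracking (translation)": spec["tracking_mm"] <= t["tracking_mm"],
        "tracking (rotation)": spec["tracking_deg"] <= t["tracking_deg"],
    }

# Hypothetical headset spec, for illustration only.
example = {
    "fov_deg": 100, "resolution_px": (2160, 1200), "persistence_ms": 2,
    "refresh_hz": 90, "latency_ms": 18, "tracking_mm": 0.5,
    "tracking_deg": 0.2,
}

results = meets_presence(example)
for criterion, ok in results.items():
    print(f"{criterion}: {'pass' if ok else 'fail'}")
```

Note that the hypothetical 90 Hz panel fails the strict 95 Hz target even though, as the list says, a somewhat lower rate may still be adequate; a real evaluation would treat several of these thresholds as soft rather than hard cutoffs.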

Immersive virtual reality

Immersive virtual reality is a hypothetical future technology that today exists mostly in the form of virtual reality art projects.[8] It consists of immersion in an artificial environment in which the user feels just as immersed as they usually feel in consensus reality.

Direct interaction of the nervous system

The most commonly considered method would be to induce the sensations that make up the virtual reality directly in the nervous system. In conventional biology, we interact with consensus reality through the nervous system: all sensory input arrives as nerve impulses. Immersive virtual reality would therefore involve the user receiving input as artificially generated nerve impulses, while the system receives the user's natural motor outputs from the central nervous system and processes them, allowing the user to interact with the virtual reality. Natural impulses between the body and the central nervous system would need to be blocked; this could be done with hypothetical nanorobots attached to the brain's wiring, suppressing natural signals while delivering the digital impulses that describe the virtual world. A feedback system between the user and the computer storing the world's state would also be needed. Considering how much information such a system would have to handle, it would likely depend on as-yet-hypothetical forms of computer technology.

Requirements

Understanding of the nervous system

A comprehensive understanding of which nerve impulses correspond to which sensations, and which motor impulses correspond to which muscle contractions, would be required. This would allow the system to produce the correct sensations in the user and the correct actions in the virtual reality. The Blue Brain Project is currently the most prominent research effort aimed at understanding how the brain works by building very large-scale computer models.

Ability to manipulate CNS

The nervous system would need to be manipulated directly. While non-invasive devices using radiation have been postulated, invasive cybernetic implants are likely to become available sooner and to be more accurate. Manipulation could occur at any stage of the nervous system; the spinal cord is likely the simplest site, since most nerves pass through it, so it could serve as the single point of manipulation. Molecular nanotechnology is likely to provide the required degree of precision and could allow the implant to be built inside the body rather than inserted by an operation.

Computer hardware/software to process inputs/outputs

A very powerful computer, probably (but not necessarily) a strong AI, would be required to process all the inputs from the CNS, run a simulation of a virtual reality approaching the complexity of consensus reality, and translate its events into a complete set of nerve impulses for the user. Strong artificial intelligence might also be required to author a convincing alternate reality in the first place.

Immersive digital environments

Cosmopolis (2005), Maurice Benayoun's Giant Virtual Reality Interactive Installation

An immersive digital environment is an artificial, interactive, computer-created scene or "world" within which a user can immerse themselves.[9]

Immersive digital environments could be thought of as synonymous with virtual reality, but without the implication that actual "reality" is being simulated. An immersive digital environment could be a model of reality, but it could also be a complete fantasy user interface or abstraction, as long as the user of the environment is immersed within it. The definition of immersion is wide and variable, but here it is taken to mean simply that the user feels like part of the simulated "universe". How successfully an immersive digital environment immerses the user depends on many factors, such as believable 3D computer graphics, surround sound, and interactive user input, as well as simplicity, functionality, and potential for enjoyment. New technologies are under development that claim to bring realistic environmental effects to the player's surroundings, such as wind, seat vibration and ambient lighting.

Perception

To create a sense of full immersion, the five senses (sight, sound, touch, smell, taste) must perceive the digital environment as physically real. Immersive technology can perceptually fool the senses through:

  • Panoramic 3D displays (visual)
  • Surround sound acoustics (auditory)
  • Haptics and force feedback (tactile)
  • Smell replication (olfactory)
  • Taste replication (gustatory)

Interaction

Once the senses are sufficiently convinced that the digital environment is real (even though the interaction and involvement themselves remain simulated), the user must be able to interact with the environment in a natural, intuitive manner. Various immersive technologies, such as gestural controls, motion tracking, and computer vision, respond to the user's actions and movements. Brain–computer interfaces (BCI) respond to the user's brainwave activity.

Examples and applications

Training and rehearsal simulations run the gamut from part-task procedural training (often "buttonology", for example which button to press to deploy a refueling boom), through situational simulation (such as crisis response or convoy-driver training), to full-motion simulations that train pilots, soldiers, and law-enforcement personnel in scenarios too dangerous to practice with actual equipment and live ordnance.

Computer games range from simple arcade titles to massively multiplayer online games, and training programs include flight and driving simulators. Entertainment environments such as motion simulators immerse riders or players in a virtual digital environment enhanced by motion, visual and aural cues. Reality simulators include one of the Virunga Mountains in Rwanda that takes the viewer on a trip through the jungle to meet a group of mountain gorillas.[10] Training versions include one that simulates a ride through human arteries and the heart to witness the build-up of plaque, and thus teaches about cholesterol and health.[11]

In parallel with scientists, artists like Knowbotic Research, Donna Cox, Rebecca Allen, Robbie Cooper, Maurice Benayoun, Char Davies, and Jeffrey Shaw use the potential of immersive virtual reality to create physiological or symbolic experiences and situations.

Other examples of immersion technology include physical environments or immersive spaces with surrounding digital projections and sound, such as the CAVE, and the use of head-mounted displays for viewing movies, with head tracking and computer control of the presented image so that the viewer appears to be inside the scene. A later generation is VIRTSIM, which achieves total immersion through motion capture and wireless head-mounted displays for teams of up to thirteen immersants, enabling natural movement through space and interaction in both the virtual and physical space simultaneously.

Practical applications at Ford Motor Company

The development process used at Ford Motor Company may seem unusual to anyone unfamiliar with virtual reality. The concepts in use in Ford's Virtual Reality lab have reportedly sparked the interest of NASA.[citation needed] The lab is led by technical expert Elizabeth Baron,[citation needed] whose team works on some of the most advanced car-design methods in the auto industry.[according to whom?]

Ford Motor Company uses 3D imaging to help construct vehicle interiors. Inside the iVE lab, Ford has a test car that supports the motion software. The test car is built to resemble an actual car and has doors, seats, and a steering wheel; it is fixed to a platform with built-in overhead cameras facing the driver. Through the software, engineers develop a car interior and display it in the virtual reality headset of the person sitting in the test car. The cameras facing the test car allow the user full visual mobility while looking at the vehicle interior: the motion sensing detects when the user turns their head and provides a 360-degree view. The interior's appearance can be switched from a Mustang to an Escape in as little as five minutes, which allows easy comparison of concept designs.[citation needed]

The Cave Automatic Virtual Environment (CAVE) is also used by developers to examine both the exterior and the interior of concept designs. While standing in the CAVE, users wear 3D glasses fitted with motion sensors that control the view. Images developed in design software are projected onto the walls of the room to give users an idea of what future products are intended to look like. To change the view of the car from exterior to interior, the person wearing the 3D glasses simply sits in a chair positioned in the center of the room.

Although Ford has state-of-the-art equipment in its labs, engineers can duplicate the same visual experience with a mobile station. This allows Ford to give public demonstrations at events like the North American International Auto Show and at various universities across the country.[citation needed] Baron has traveled with the mobile station as a keynote speaker, demonstrating the work being done with this type of technology.[citation needed] Ford's mobile station consists of some of the same components found in the onsite company lab: seats, pedals, and a steering wheel give the user an opportunity to physically grasp what they are seeing through the virtual reality headset. The mobile station also includes two top-of-the-line 3D televisions that display the same images as the headset, which is useful when demonstrating to larger audiences.[citation needed]

This software gives engineers real-life data on the overall appearance of future products. Engineers can make recommendations, for example suggesting that certain components be placed in a more convenient position. Being able to raise such ideas before production lets the company make full use of its resources. The concepts applied in the virtual reality lab have sped up development and reduced costs, ultimately producing better products for customers. Virtual reality technology has advanced considerably compared with five years ago.[citation needed]

Detrimental issues

Simulation sickness, or simulator sickness, is a condition in which a person exhibits symptoms similar to motion sickness, caused by playing computer, simulation, or video games (Oculus VR is working to reduce simulator sickness in the Rift).

Motion sickness due to virtual reality is very similar to simulation sickness and to motion sickness caused by films. In virtual reality, however, the effect is more acute, because all external reference points are blocked from vision, the simulated images are three-dimensional, and in some cases stereo sound may also give a sense of motion. Studies have shown that exposure to rotational motion in a virtual environment can cause significant increases in nausea and other symptoms of motion sickness (So, R.H.Y. and Lo, W.T. (1999) "Cybersickness: An Experimental Study to Isolate the Effects of Rotational Scene Oscillations", Proceedings of IEEE Virtual Reality '99 Conference, March 13–17, 1999, Houston, Texas. IEEE Computer Society, pp. 237–241).
