From Wikipedia, the free encyclopedia

Unlike conventional 3D stereoscopy, which simulates a 3D scene by displaying only two different views of it, each visible to only one of the viewer's eyes, 3D multiscopy displays more than two images, representing the subject as viewed from a series of locations, and allows each image to be visible only from a range of eye locations narrower than the average human interocular distance of 63 mm. As a result, not only does each eye see a different image, but different pairs of images are seen from different viewing locations.[1]

This allows the observer to view the 3D subject from different angles as they move their head, simulating the real-world depth cue of shifting parallax. It also reduces or eliminates the complication of pseudoscopic viewing zones typical of "no glasses" 3D displays that use only two images, making it possible for several randomly located observers to all see the subject in correct 3D at the same time.
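The geometry described above can be sketched numerically. This is a minimal illustration under assumed parameters (`ZONE_MM`, `NUM_VIEWS`, and the simple repeating-zone layout are hypothetical, not the optics of any particular display): because each viewing zone is narrower than the ~63 mm interocular distance, the two eyes always fall into different zones, and moving the head shifts which pair of views is seen.

```python
INTEROCULAR_MM = 63.0   # average human interocular distance (from the text)
ZONE_MM = 30.0          # assumed width of one viewing zone, narrower than 63 mm
NUM_VIEWS = 8           # assumed number of discrete views, repeating across zones

def view_index(eye_x_mm):
    """Discrete view visible from horizontal eye position eye_x_mm (in mm)."""
    return int(eye_x_mm // ZONE_MM) % NUM_VIEWS

def stereo_pair(head_x_mm):
    """Views seen by the left and right eyes of a viewer centred at head_x_mm."""
    return (view_index(head_x_mm - INTEROCULAR_MM / 2),
            view_index(head_x_mm + INTEROCULAR_MM / 2))

# Each eye sees a different view, and sliding the head changes the pair,
# reproducing the look-around parallax described above.
for x in (40.0, 70.0, 100.0):
    print(x, stereo_pair(x))   # prints (0, 2), then (1, 3), then (2, 4)
```

In this toy layout, several viewers standing at different positions each receive a distinct but consistently ordered left/right pair, which is the property that lets multiple randomly located observers all see correct stereo at once.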

Photographic images of this type were named parallax panoramagrams by inventor Herbert E. Ives circa 1930, but that term is strongly associated with a continuous sampling of horizontal viewpoints, captured by a camera with either a very wide lens or a lens that travels horizontally during the exposure. The more recently coined term multiscopy has increasingly been adopted as more accurately descriptive when referring to electronic systems that capture and display only a finite number of discrete views.


Examples of multiscopic (as opposed to stereoscopic) 3D technologies include:[2]

  • sweeping a projection across subsurfaces
  • projection into transparent substrates (such as intersecting laser beams or fog layers)


  1. Douglas Lanman, Matthew Hirsch, Yunhee Kim, Ramesh Raskar. "Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization." Proc. of SIGGRAPH Asia 2010 (ACM Transactions on Graphics 29, 6), 2010.
  2.