News Stories

MIT Autostereo approach (FAQ)

HR3D: High-Rank 3D Display
using Content Adaptive Parallax Barriers

From Siggraph Asia 2010

FAQ

Contents

  1. Q: Does the HR3D display require the viewer to wear any special equipment?
  2. Q: Is the HR3D display just another parallax barrier display?
  3. Q: How much brighter is the HR3D display?
  4. Q: Can’t I just make the backlight brighter?
  5. Q: Why would anyone prefer this over a lenticular display?
  6. Q: How does HR3D relate to the Nintendo 3DS display?
  7. Q: What is the meaning of High Rank?
  8. Q: Is this similar to a hologram? (a.k.a. What’s all this about Light Fields?)
  9. Q: What is the maximum frame rate of the prototype?
  10. Q: How is content generated for an HR3D display?
  11. Q: How can you show more images on a normal LCD screen? Is the content compressed?

Body

  1. Q: Does the HR3D display require the viewer to wear any special equipment?

    A: No, the HR3D display is an auto-multiscopic display. This means that without glasses, head-tracking devices, or any other special equipment worn by the viewer, it is possible to see a realistic 3D image with both horizontal and vertical parallax. Parallax means that the viewer can see around objects on the screen by moving his or her head. This is in contrast to stereoscopic imagery, which appears to distort as the viewer moves.

  2. Q: Is the HR3D display just another parallax barrier display?

    A: The HR3D display is a new type of barrier-based display which we are calling a Content Adaptive Parallax Barrier. By adapting the barriers in our display to the content being shown, we can allow significantly more light from the display to reach the viewer when showing imagery with full horizontal and vertical parallax. The adaptation can also be tuned to favor increased frame rate or brightness at the cost of image fidelity. This type of trade cannot be made with traditional parallax barriers.

  3. Q: How much brighter is the HR3D display?

    A: While the degree of brightening is a function of the angular and spatial resolution of the display, a comparison to traditional parallax barrier displays will shed some light on the question. 3D displays that use 1D parallax barriers (horizontal parallax only) block less light than pinhole-based displays (horizontal and vertical parallax). Our adaptation procedure is capable of creating a display with both horizontal and vertical parallax that emits as much light as a screen using a standard 1D parallax barrier pattern. In practice, with our prototype, this results in an approximately 3× increase in brightness.
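The 3× figure follows from simple transmission arithmetic. A minimal sketch, assuming the 3×3-view configuration of the prototype (variable names are illustrative):

```python
# Fraction of backlight transmitted by each barrier type, for a
# 3x3-view configuration (as in the HR3D prototype).
views_x, views_y = 3, 3

pinhole_transmission = 1 / (views_x * views_y)  # 2D pinhole barrier passes 1/9
slit_transmission = 1 / views_x                 # 1D slit barrier passes 1/3

# A content-adaptive barrier with full 2D parallax can match the 1D slit
# barrier's light throughput, giving roughly this gain over pinholes:
gain = slit_transmission / pinhole_transmission
print(gain)  # -> 3.0
```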

  4. Q: Can’t I just make the backlight brighter?

    A: In principle, yes. However, in practice environmental and economic considerations limit the power consumption of displays. Therefore, being more efficient with the use of emitted photons using technology like the HR3D display will improve display brightness in practice.

  5. Q: Why would anyone prefer this over a lenticular display?

    A: The primary disadvantage of a lenticular display is that once the lenses are attached to the screen, they cannot be removed. While 3D content is growing in popularity, much of the existing content available for viewing on displays is 2D. Using a barrier technology such as HR3D, which provides a switchable LCD front barrier, means that the display can easily switch between displaying a 3D image and a high-resolution 2D image. This is not possible when lenses are glued to the screen. Nintendo chose to use a barrier-based technology in their latest 3DS gaming console for just this reason.

  6. Q: How does HR3D relate to the Nintendo 3DS display?

    A: The Nintendo 3DS was released in Japan on February 26, 2011. It has received significant media coverage, not only for being a prominent handheld gaming system, but primarily for being the first to incorporate a glasses-free (autostereoscopic) 3D display. As discussed in the previous question, the 3DS was widely rumored to incorporate a dual-stacked LCD display using a conventional, vertically-oriented parallax barrier (an array of slits placed in front of a standard LCD panel). Since its release, this suspicion has been confirmed by several media outlets, where magnification of the screen confirms the presence of a vertical parallax barrier (see Tech On). This barrier uses a second, specialized LCD panel where the spacing between the slits can be controlled by the user (by adjusting a physical slider) or disabled completely (to revert to a normal 2D display). While remarkable for introducing this technology to the mass market, the 3DS display shares the limitations of all conventional parallax barriers; when used in the 3D mode, the display is half as bright and has half the horizontal resolution. Additional media reports cite limited battery life for the 3DS. In part, this limited battery life can be attributed to the additional power required for a brighter backlight to offset light lost after passing through the parallax barrier. The HR3D display is intended to address these limitations by finding optimized patterns to display on dual-stacked LCDs; by optimizing these patterns, full-resolution display can be achieved. Most importantly, the image will appear brighter than the 3DS without decreasing battery life. 
As such displays spread from handheld gaming devices, such as the 3DS, to all mobile devices and eventually home theaters, HR3D will present a brighter, higher-resolution display alternative to the parallax barriers employed by the 3DS, although one requiring additional hardware (a second general-purpose LCD panel) and additional computing resources to compute the display patterns.

  7. Q: What is the meaning of High Rank?

    A: When we say rank here, we mean it in the linear algebra sense. In our technical paper we describe barrier-based displays as rank-1. This means that if you try to represent a set of light rays using two attenuating layers, the resulting set of rays has very few degrees of freedom. The result is that two masks alone cannot accurately represent an arbitrary set of light rays. We use this rank deficiency property to formulate an optimization that can solve for the best possible approximation to a given set of rays. The HR3D display creates a set of optimal low-rank approximations which are shown to the viewer very quickly (120Hz in our prototype). As the viewer’s eye integrates each of these sub-frames together, a high-rank approximation to the desired set of light rays is constructed. So in a sense HR3D is in the eye of the beholder!
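As an illustrative sketch of this idea (not the paper's actual solver, which must also enforce physically valid, nonnegative mask values): each two-layer sub-frame emits a rank-1 light field, so time-multiplexing T sub-frames builds a rank-T approximation. Truncated SVD gives the unconstrained least-squares optimum:

```python
import numpy as np

# Toy model: with front-mask transmittances f and rear-mask transmittances g,
# one sub-frame emits the light field matrix L = f g^T, which has rank 1.
rng = np.random.default_rng(0)
target = rng.random((8, 8))  # desired light field (toy example)

def rank_t_approx(L, T):
    """Best rank-T approximation in the least-squares sense (truncated SVD).
    Real masks must be nonnegative, so the actual display uses a constrained
    factorization; this is illustration only."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return (U[:, :T] * s[:T]) @ Vt[:T, :]

# More time-multiplexed sub-frames -> higher rank -> lower approximation error.
err_1 = np.linalg.norm(target - rank_t_approx(target, 1))
err_4 = np.linalg.norm(target - rank_t_approx(target, 4))
print(err_4 < err_1)  # -> True
```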

  8. Q: Is this similar to a hologram? (a.k.a. What’s all this about Light Fields?)

    A: We use light fields to analyze the HR3D display. Light fields assume that light travels along rays on a ballistic trajectory. This is a simplifying assumption that is valid so long as the light-emitting and attenuating elements are large compared to the wavelength of light. Our HR3D screen works in this range. Holograms, on the other hand, work in a domain where the emitters and attenuators are on the same size scale as the wavelength of light. Typically people who create holograms use wave optics to analyze their creations. It has been shown by the Camera Culture group that augmented light fields can be used to analyze diffraction as well.

  9. Q: What is the maximum frame rate of the prototype?

    A: The prototype is built using two 120Hz Viewsonic LCD displays. At the lowest quality setting, using a single frame on each display to produce a desired light field, the prototype can run at the full 120Hz. This results in very poor image quality. On the other hand, to accurately reproduce the full-rank 3×3 light field, the display would only run at 120/9 Hz ≈ 13.3Hz. This rate usually falls below the flicker fusion threshold of the human eye. Therefore, we would typically choose to run the display at a rate somewhere between these two extremes. Interestingly, the flicker fusion threshold of human vision varies with lighting conditions. This means that the HR3D display can be run at lower rates (and hence higher quality) in low-light conditions.
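The trade-off above is just the panel refresh rate divided by the number of time-multiplexed sub-frames; a quick sketch:

```python
panel_hz = 120  # refresh rate of each stacked LCD in the prototype

# Effective light-field refresh rate for a given number of sub-frames:
for subframes in (1, 4, 9):
    effective_hz = panel_hz / subframes
    print(f"{subframes} sub-frame(s): {effective_hz:.1f} Hz")
# 1 sub-frame  -> 120.0 Hz (lowest quality)
# 9 sub-frames -> ~13.3 Hz (full-rank 3x3 light field)
```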

  10. Q: How is content generated for an HR3D display?

    A: In our technical paper we describe an optimization procedure used to generate frames for the HR3D prototype. This is a slow procedure, taking 8-20 minutes per frame depending on the resolution of the input light field and output imagery. Given what we now know about the structure of the generated masks (described in Figure 12 of our paper), we are confident that a much faster analytic solution will be found in the future.

  11. Q: How can you show more images on a normal LCD screen? Is the content compressed?

    A: Yes! Interestingly, this is an example of a compressive display. Imagery sent to an HR3D display is compressed, or “lossy”. But it is compressed in a way that allows the viewer’s eye to decompress it. Any light field sent to an HR3D display is reduced down to two images — one for the front LCD and one for the rear LCD. Seen this way, it should no longer be surprising that the HR3D display is able to display more than two images on two LCD screens. Like other multiscopic 3D displays, the HR3D display trades spatial resolution for angular resolution.

    See the original post of the FAQ here: http://web.media.mit.edu/~mhirsch/hr3d/faq.html

    See a detailed explanation with many pictures and illustrations here: http://web.media.mit.edu/~mhirsch/hr3d/

Contact

Technical Details
Douglas Lanman, Postdoctoral Associate, MIT Media Lab
dlanman (at) media.mit.edu

Press
Alexandra Kahn, Senior Press Liaison, MIT Media Lab 
akahn (at) media.mit.edu or 617/253.0365


San Francisco International Arts Festival to feature 3D movie without 3D glasses

[by Shockya]

This month, the San Francisco International Arts Festival is going to feature a stereoscopic film that doesn’t require 3D glasses. The film, created by Walter Funk of Hologlyphics and titled “Spaceforms: Homage to Homer”, will allow viewers to walk around the viewing area; when the viewers’ position changes, so will the perspective of the scene being viewed. The experience will combine animation, live footage, action, and sound. To quote the press release:

Audience members will be able to walk around the viewing area, watching Spaceforms from multiple angles. As their viewing position changes, so will the perspective of the scene they are watching. Nebulas, Saturn, and planetary motion sequences take on new life, floating in front of the audience. No longer flat, without the glasses.

Also, here’s a bit more insight into the background of the film:

The screening is produced by Zero Gravity Arts Consortium (ZGAC) in collaboration with affiliate partners including the Space Arts Development Fund of the National Space Society and The Studio for Creative Inquiry, Carnegie Mellon University. ZGAC is an international organization dedicated to fostering access for artists to space flight technology and zero gravity space through international partnerships with space agencies, space industry entrepreneurs, and leading universities.

“Spaceforms: Homage to Homer” is a stereoscopic sonic journey exploring animation, live footage, sound and motion in the space around the audience. Several 3D displays, each with differing visual properties, will be showing the movie.
…The movie is a Homage to Homer B. Tilton, scientist, mathematician, and 3D display pioneer. Tilton’s electronic 3D display work dates back to the late 1940s, with a system that worked with 2D perspectives only. In the early 50s he developed a stereoscopic version, requiring user-worn eyewear. The real breakthrough came about in the late 60s, when Homer developed a method for viewing electronically generated stereoscopic moving images without glasses.

Tilton’s 3D display could provide the viewer with a 3D image that was interactive in real-time. As the viewer moves side to side, infinite perspectives of the image are seen. This is in sharp contrast to the more recent 3D displays that have been commercialized in the past several years. The commercial displays usually have an average of 8 views, nowhere near infinite. With infinite views, the visual quality is much closer to a white light hologram.

The film premiere will begin at 2:00 p.m. on May 29 at Fort Mason’s South Side Theater.

See the original post here:  http://www.shockya.com/news/2011/05/10/san-francisco-international-arts-festival-to-feature-3d-movie-without-3d-glasses/


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs and outputs of the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
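As a purely hypothetical illustration of what machine-readable sequence naming enables (the actual field layout is defined by the ETC@USC specification and is not reproduced here), a parser for a common `<name>.<frame>.<ext>` convention might look like:

```python
import re

# Hypothetical convention only: <descriptive name>.<frame number>.<ext>,
# e.g. "showA_sh010_plate_v002.1001.exr". The real ETC@USC specification
# defines its own fields, which are not reproduced here.
SEQ_RE = re.compile(r"^(?P<name>.+)\.(?P<frame>\d+)\.(?P<ext>exr|dpx)$")

def parse_frame(filename):
    """Split a frame filename into (sequence name, frame number, extension)."""
    m = SEQ_RE.match(filename)
    if m is None:
        return None
    return m.group("name"), int(m.group("frame")), m.group("ext")

print(parse_frame("showA_sh010_plate_v002.1001.exr"))
# -> ('showA_sh010_plate_v002', 1001, 'exr')
```

Consistent, parseable names are what let pipeline tools sort, validate, and hand off sequences between partners without the custom per-vendor processes the article describes.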

To ensure all requirements are represented, the working group included more than two dozen participants representing studios, VFX houses, tool creators, creatives, and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
