News Stories

New gadget promises 3D without the headaches

(Philip Lelyveld comment: the device sends the exact same image to both eyes, removing binocular vision cues and creating a deep 3D experience by forcing the brain to rely on all of the other depth cues (monocular, motion, sensory, …).)

In 1907 a German optical scientist named Moritz von Rohr unveiled a strange device named the Synopter, which he claimed could make two-dimensional images appear 3D. By looking through the arrangement of lenses and mirrors, visitors to art galleries would be drawn into the paintings, as if the framed canvas had become a window to a world beyond. But the Synopter – heavy and prohibitively expensive – was a commercial failure, and the device vanished almost without trace.

A century later, Rob Black is hoping to rekindle interest in von Rohr’s creation. A psychologist specialising in visual perception at the University of Liverpool, UK, Black has designed and built an improved version he calls “The I”. Unlike some 3D glasses, the device uses no electronics, and works on normal 2D images or video.

Playing tricks on your eyes

The device works in the opposite way to the 3D systems employed in cinemas. There, images on the screen are filtered so that each eye sees a slightly different perspective – known as binocular disparity – fooling the brain into perceiving depth. “The I” ensures that both eyes see an image or computer screen from exactly the same perspective. With none of the depth cues associated with binocular disparity, the brain assumes it must be viewing a distant 3D object instead of looking at a 2D image. As a result, the image is perceived as if it were a window the viewer is looking through, and details in the image are interpreted as objects scattered across a landscape.

The perceptual trick, called synoptic vision, is apparent on any nearby two-dimensional image, but is especially marked where other depth cues exist. For instance, the brain will naturally assume an animal in the 2D image is in the foreground if it is large, and far away if it is small.

No more headaches

Black says that the device also avoids the headaches associated with other 3D technologies. In movie theatres, the eyes need to focus on the screen itself to see objects in focus, but the 3D effects can force the viewer to try to focus several metres in front of or behind the screen instead. “Even if you use the world’s best 3D kit, it can still present conflicting perceptual information,” Black told New Scientist.

Because his device uses no binocular disparity, the viewer isn’t forced to attempt such impossible feats of focusing – instead, they can focus naturally on any object in the image, using other cues such as size to ‘decide’ what depth the object occupies. “By turning off that conflicting information, you can enjoy the scene in the way the artist depicted.”

Currently the device is still a prototype, but Black hopes that his synoptic viewer will one day be incorporated into existing 3D systems. “I think 3D is impressive at the moment, but with this we can get significantly closer to reality simulation.”

21:11 07 December 2010 by Frank Swain

See the original story here: http://www.newscientist.com/article/dn19825-new-gadget-promises-3d-without-the-headaches.html

 


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a best-practices specification for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs to and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
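To make the idea of naming an image sequence concrete, here is a minimal Python sketch of how pipeline tooling might parse frame-numbered plate files and group them into sequences. The pattern shown (a descriptive base name, a zero-padded frame number, and an OpenEXR or DPX extension) is a hypothetical illustration, not the actual scheme defined in the ETC@USC specification:

```python
import re

# Hypothetical naming pattern for illustration only; the real ETC@USC
# specification defines its own fields and ordering.
FRAME_NAME = re.compile(
    r"^(?P<name>[A-Za-z0-9_-]+)"   # descriptive base name (e.g. shot/element)
    r"\.(?P<frame>\d{4,})"         # zero-padded frame number
    r"\.(?P<ext>exr|dpx)$"         # image format extension
)

def group_sequence(filenames):
    """Group per-frame files into (base name, extension) -> sorted frame numbers."""
    sequences = {}
    for fname in filenames:
        match = FRAME_NAME.match(fname)
        if not match:
            continue  # skip files that don't follow the convention
        key = (match["name"], match["ext"])
        sequences.setdefault(key, []).append(int(match["frame"]))
    return {key: sorted(frames) for key, frames in sequences.items()}

if __name__ == "__main__":
    plates = ["sh010_bg_plate.1001.exr", "sh010_bg_plate.1002.exr", "notes.txt"]
    print(group_sequence(plates))
    # {('sh010_bg_plate', 'exr'): [1001, 1002]}
```

The point of a shared convention is exactly this kind of predictability: when every partner zero-pads frame numbers and orders the name fields the same way, tools can group, validate, and transfer sequences without per-vendor custom parsing.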

To ensure all requirements were represented, the working group included more than two dozen participants representing studios, VFX houses, tool creators, creatives, and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Horst Sarubin of Universal Pictures, chair of the VFX working group, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
