
DCS Notes – Day 1 – Session 1 – Understanding Stereopsis and 3D Image Capture

Session 1: Understanding Stereopsis and 3D Image Capture

Speaker(s):

Peter Lude, Senior Technology Executive, Sony Electronics

Steve Schklair, CEO, 3ality Digital Systems LLC

Peter Lude

Monocular depth cues (such as motion parallax, depth from motion, and perspective) contribute to our mind's interpretation of 3D in the real world as well as in stereoscopic 3D content. In a given image, something out of focus is perceived as being behind something that is in focus. The vanishing point, or the convergence of lines as they approach the horizon, also provides a visual depth cue. Color intensity and contrast contribute as well; things that are far away appear duller. If you look at a 2D image with only one eye before you have seen it with both eyes, you will perceive it as 3D. Only when binocular vision kicks in do you instantly snap into perceiving it as only 2D.

Mean interocular distance is about 65 mm, with wide variation. Children start off with a distance 10-15 mm smaller on average.

Positive parallax corresponds to seeing the right-eye image on the right and the left-eye image on the left. This places the object behind the screen plane. Negative parallax crosses the eyes by positioning the right-eye image to the left of the left-eye image. This places the virtual object in front of the screen, in 'negative space.'
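
A minimal sketch of the geometry behind these terms may help. It uses the standard similar-triangles model rather than anything shown in the session, and the 65 mm eye separation and 2 m viewing distance are assumed values.

# Sketch of the similar-triangles model relating on-screen parallax to perceived depth.
# Assumed values for illustration only.
#   e = viewer interocular distance in mm (~65 mm for adults)
#   d = viewing distance to the screen in mm
#   p = on-screen parallax in mm: positive (uncrossed) places the object behind
#       the screen plane, negative (crossed) places it in front, zero is on the plane.

def perceived_distance_mm(p, e=65.0, d=2000.0):
    if p >= e:
        raise ValueError("parallax >= eye separation would force the eyes to diverge")
    return (e * d) / (e - p)

print(perceived_distance_mm(0))     # 2000.0 -> on the screen plane
print(perceived_distance_mm(30))    # ~3714  -> behind the screen (positive parallax)
print(perceived_distance_mm(-30))   # ~1368  -> in front of the screen (negative parallax)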

Mistakes in shooting 3D include: vertical misalignment of the images, non-synchronous lens zooms, mismatched focus, color mismatch, and keystoning. Content should be authored for the largest expected display size. If you author for a small screen and display it on a large screen, you will produce excessive disparity: you are forcing the audience's eyes to turn outward in opposite directions.
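
A rough worked example of why this scaling matters (assumed numbers, not figures from the talk): the parallax baked into the two images is a fixed fraction of the picture width, so its physical size grows with the display.

# Illustrative sketch with assumed numbers: parallax stored as a fraction of
# image width becomes larger in absolute terms on a larger screen, and once it
# exceeds the viewer's ~65 mm eye separation the eyes are forced to diverge.

EYE_SEPARATION_MM = 65.0

def physical_parallax_mm(parallax_fraction, screen_width_mm):
    return parallax_fraction * screen_width_mm

for screen_width_mm in (1100.0, 12000.0):    # ~50-inch TV vs. ~12 m cinema screen
    p = physical_parallax_mm(0.01, screen_width_mm)    # +1% background parallax
    print(screen_width_mm, p, "diverges" if p > EYE_SEPARATION_MM else "comfortable")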

3D camera rigs can be either two physically separate cameras locked on a bar, or a camera pair looking through a silvered-mirror beam splitter. The beam splitter creates a more 'human' interocular distance.

Steve Schklair

Steve Schklair used a live feed during his presentation to illustrate shooting points and errors. Production challenges include: developing the pool of trained crew, choosing the appropriate technology, revising the production pipeline, understanding the production budget and logistics, and developing the new language.

There are two basic rig types. The beam splitter simulates the human interocular distance and allows you to bring objects right up close to the camera. Side-by-side rigs match the functionality of the beam-splitter rig at a lower cost because they need no beam-splitter optics; they are often used in sports, where there is no reason to bring anything close to the lens.

Steve conducted a live demonstration of vertical misalignment. Everything about camera-pair positioning must be remotely controlled, because you cannot have people running up to the rig with wrenches during a shoot. Keeping vertical alignment locked through a zoom was considered critical from day one, because you have to zoom in when shooting sports. Focus mismatch and zoom mismatch can occur even when you turn the zoom rings identically, because the mechanics and lens characteristics aren't identical. This is fixable in post, but it can cause discomfort during live events.

Too narrow an interaxial distance reduces the 3D to 2D. It is OK to hold sustained images 1-2% of the screen width in front of the screen, but holding images further in front of the screen for longer is problematic. Keeping the depth fairly consistent among the cameras makes cutting more comfortable, both for the audience and for the editor.
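
As an assumed illustration (the frame width is not from the talk), the 1-2% guideline translates into a disparity budget like this:

# Illustrative sketch: the "1-2% of screen width" guideline for sustained
# negative parallax expressed as a pixel budget for a 1920-pixel-wide frame.

FRAME_WIDTH_PX = 1920

for percent in (1.0, 2.0):
    budget_px = FRAME_WIDTH_PX * percent / 100.0
    print(f"{percent:.0f}% of width -> up to {budget_px:.0f} px of crossed disparity")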

A fix for edge violations is to focus on the closest object. Alternatively, you can eliminate the edge violation by reconverging your cameras and putting the object completely in the frame.

3DIQ: Sky paid 3ality to put rigs into Telegenic trucks.  The lessons 3ality and Sky learned from the experience include:

  • Editorial pace is slower from shot to shot (because there is more info in the shot)
  • Staying a bit wider works
  • It is important to be consistent with the depth
  • It is important to level the depth across the edits
  • For live broadcasting, fewer camera positions are needed in 3D than in 2D
  • The story is more important than the WOW factor

On set, the monitors are good enough to view the shots as long as you are positioned properly.  Realignment in post will kill your budget.

