News Stories

DCS Notes – Day 1 – Session 2 – Programming: Lessons Learned

Session 2: Programming: Lessons Learned

Moderator(s):

Al Barton, Consultant, Freelance Digital

Panelist(s):

Jason Goodman, CEO, 21st Century 3D

Pierre de Lespinois, Co-Founder, Evergreen Films

Thomas Edwards, VP, Digital Television Testing & Evaluation, FOX

Pierre de Lespinois

Evergreen is working to integrate the storytelling with the engineering. Technology advances fuel revenue growth. Once the conversion to HD had been done, integrating 3D into that workflow for live events was fairly easy. They have an "interocular" crew member, someone "pulling convergence" during the shoot. The camera operator controls the zoom and focus. The stereographer in the truck communicates with the "interocular," who makes sure that the cameras are balanced. He has found that most of the time is spent getting the lenses centered so they zoom properly. They spend a full day tracking and calibrating the lenses before the shoot. Once the lenses are tracked, they keep the lenses with the camera for the duration. (He showed a beautiful Dave Matthews Band concert clip and a clip from a feature called Totem.) The extra cost of shooting 3D, due to the second camera, the rig, the stereographer, and other factors, amounts to an additional 10-15% on production and 10-15% on post.

Thomas Edwards

(He started with an extended Fox Sports clip, highlighting football.) The trucks use micropol displays because it is hard to synchronize a bank of shutter displays. Wide/high shots tell the story but produce a toy-soldier effect. Tight/low shots "get into the action," especially if you zoom in. Tasteful, occasional use of extreme negative parallax can be good, but you must avoid objects that are too close to the viewer, such as a foul net or a foul pole. Score box placement is an open question. You want the scores/stats to be in front of the closest image. Placing the score graphic at the bottom of the screen puts it in front of the grass (sharp foreground), but putting it at the top puts it in the sky (near the screen plane). A bonus of putting the score graphic in the sky/screen plane is that it lets people without glasses read the score.

Challenges

  • Equipment is tough to obtain
  • Equipment is fragile
  • Equipment is large and heavy
  • Stereography training – “convergence pullers”, and others
  • Discovering what works for 3D sports direction
  • Challenges of dual 2D/3D production – more seat kills (e.g. seats lost to cameras in the stadium)
  • Challenges of backhaul
  • Budget?  Is this ever going to make money?  HD did not make us any money.  3D must make money for us.
  • Small number of distribution channels.

Jason Goodman

(Jason is the first person to be recognized by the DGA as a Stereographer.)

21st Century 3D developed a ground-breaking 3D camera: compact, lightweight, progressive-scan 24fps, with the look and feel of a normal camera, a binocular viewfinder, and a purely digital workflow. He discussed the evolution of their cameras. They are announcing their next-generation camera, available for purchase, this week at NAB. (He showed a clip from the Black Eyed Peas movie that they are working on, plus other footage.)

Q&A

Why do you need two cameras? Panasonic is showing a single-body camera for the prosumer market; it has two lenses and one chip. (Pierre, going back to an earlier discussion) Dimensionalizing 2D is fooling the public. Don't call it 3D. Certain shots that work in 2D don't work in 3D. Films need to be shot for 3D. $4.5M to dimensionalize is less than $30M to shoot 3D, but it isn't real 3D. Call it dimensionalizing.

Thoughts on edge violations? (Pierre) We make sure that things on the edges of the frames are non-intrusive.

(Al) The hardest thing right now is learning what terms to use when discussing 3D production with someone else.  The actual language used to describe 3D issues and processes is in flux.

DCS Notes – Day 1 – Session 1 – Understanding Stereopsis and 3D Image Capture

Session 1: Understanding Stereopsis and 3D Image Capture

Speaker(s):

Peter Lude, Senior Technology Executive, Sony Electronics

Steve Schklair, CEO, 3ality Digital Systems LLC

Peter Lude

Monocular depth cues (such as motion parallax, depth from motion, and perspective) contribute to our mind's interpretation of 3D in the real world as well as in stereoscopic 3D content. In a given image, something out of focus is perceived as being behind something that is in focus. The vanishing point, or the convergence of lines as they approach the horizon, also provides visual depth cues. Color intensity and contrast contribute as well; things that are far away look duller. If you look at a 2D image with only one eye before you have seen it with both eyes, you will perceive it as 3D. Once binocular vision kicks in, you instantly snap back to perceiving it as only 2D.

Mean interocular distance is about 65mm, with wide variation. Children start off with a distance 10-15 mm smaller on average.

Positive parallax corresponds to seeing the right-eye image on the right and the left-eye image on the left; this places the object behind the screen plane. Negative parallax crosses the eyes by positioning the right-eye image to the left of the left-eye image; this places the virtual object in front of the screen, in 'negative space.'
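
To make the geometry concrete, here is a minimal sketch (my own illustration, not from the talk), assuming a viewer with a 65 mm interocular distance sitting 2 m from the screen; it reproduces the rule that positive parallax places objects behind the screen, negative parallax in front, and parallax at or beyond the interocular distance forces divergence:

```python
# Minimal sketch of the parallax geometry described above (my illustration, not from the talk).
# Assumes a viewer with ~65 mm interocular distance sitting 2 m from the screen.

def perceived_depth(parallax_m, viewing_distance_m=2.0, interocular_m=0.065):
    """Distance (in metres) from the viewer to the perceived object, by similar triangles.

    parallax_m: on-screen separation of the right- and left-eye images;
                positive places the object behind the screen, negative in front.
    Returns None when parallax reaches the interocular distance, because
    fusing it would force the eyes to diverge.
    """
    if parallax_m >= interocular_m:
        return None
    return viewing_distance_m * interocular_m / (interocular_m - parallax_m)

print(perceived_depth(0.0))    # 2.0  -> object sits at the screen plane
print(perceived_depth(0.03))   # ~3.7 -> positive parallax, behind the screen
print(perceived_depth(-0.03))  # ~1.4 -> negative parallax, in front of the screen
```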

Mistakes in shooting 3D include: vertical misalignment of the images, non-synchronous lens zooms, mismatched focus, color mismatch, and keystoning. The content should be authored for the largest expected display size. If you author for a small screen and display it on a large screen, you will produce excessive disparity, forcing the audience's eyes to turn outward in opposite directions (divergence).
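
A back-of-the-envelope check of that display-size point (again my own illustration, assuming a 65 mm interocular distance): parallax authored as a fixed fraction of screen width grows with the screen, and once the physical separation exceeds the interocular distance the eyes must diverge.

```python
# Back-of-the-envelope check of the display-size point (my illustration, with an assumed
# 65 mm interocular distance): parallax authored as a fixed fraction of screen width grows
# with the screen, and once it exceeds the interocular distance the eyes must diverge.

INTEROCULAR_M = 0.065

for screen_width_m in (1.0, 10.0):
    parallax_m = 0.02 * screen_width_m  # background parallax authored at 2% of screen width
    verdict = "eyes diverge" if parallax_m > INTEROCULAR_M else "fusible"
    print(f"{screen_width_m:>4.1f} m wide screen: parallax {parallax_m * 1000:.0f} mm -> {verdict}")
```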

3D camera rigs can be either two physically separate cameras locked on a bar, or a camera pair looking through a silvered-mirror beam splitter. The beam splitter creates a more 'human' interocular distance.

Steve Schklair

Steve Schklair used a live feed during his presentation to illustrate shooting points and errors. Production challenges include: developing the pool of trained crew, choosing the appropriate technology, revising the production pipeline, understanding the production budget and logistics, and developing the new language.

There are two basic rig types. The beam splitter rig simulates the interocular distance and allows you to bring objects right up close to the camera. Side-by-side rigs match the functionality of the beam splitter rig at a lower cost because they need no beam-splitter optics; they are often used in sports because there is no reason to bring anything close to the lens.

Steve conducted a live demonstration of vertical misalignment. Everything about camera-pair positioning must be remotely controlled, because you cannot have people running up to the rig with wrenches during a shoot. Keeping vertical alignment locked during a zoom was considered critical from day one, because you have to zoom in when shooting sports. Focus mismatch and zoom mismatch can occur even when you drive the zoom rigs identically, because the mechanics and lens characteristics are not identical. This is fixable in post, but it can cause discomfort during live events.

Too narrow an interaxial distance reduces the 3D to 2D. It is OK to have sustained images with negative parallax of 1-2% of the screen width in front of the screen, but holding images farther in front of the screen for longer would be problematic. Keeping the depth fairly consistent among the cameras makes cutting more comfortable, both for the audience and for the editor.
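
As a rough illustration of that rule of thumb (my own sketch, not 3ality's tooling), a depth-budget check might simply flag shots whose sustained negative parallax exceeds about 2% of the screen width:

```python
# Rough illustration of the rule of thumb above (my own sketch, not 3ality's tooling):
# flag shots whose sustained negative parallax exceeds ~2% of the screen width.

def within_depth_budget(parallax_px, screen_width_px, max_fraction=0.02):
    """True if a shot's sustained parallax stays inside the stated comfort budget.

    parallax_px: signed parallax in pixels (negative = in front of the screen).
    screen_width_px: horizontal resolution of the delivery format.
    max_fraction: allowed sustained negative parallax as a fraction of screen width.
    """
    if parallax_px >= 0:
        return True  # behind the screen: not limited by this particular rule of thumb
    return abs(parallax_px) / screen_width_px <= max_fraction

print(within_depth_budget(-30, 1920))  # ~1.6% of width -> True (within budget)
print(within_depth_budget(-80, 1920))  # ~4.2% of width -> False (held too far out)
```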

A fix for edge violations is to focus on the closest object. Or you can just eliminate the edge violation by reconverging your cameras and putting the object completely in the frame.

3DIQ: Sky paid 3ality to put rigs into Telegenic trucks.  The lessons 3ality and Sky learned from the experience include:

  • Editorial pace is slower from shot to shot (because there is more info in the shot)
  • Staying a bit wider works
  • It is important to be consistent with the depth
  • It is important to level the depth across the edits
  • For live broadcasting, fewer camera positions are needed in 3D than in 2D
  • The story is more important than the WOW factor

On set, the monitors are good enough to view the shots as long as you are positioned properly.  Realignment in post will kill your budget.



Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs to and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
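
The specification itself is not reproduced in this article, so the sketch below is purely illustrative: a hypothetical naming pattern of the form show_shot_element_version.frame.ext, showing how one frame of an OpenEXR or DPX plate sequence might be parsed into structured fields.

```python
# Purely illustrative: the published specification is not reproduced in this article, so the
# naming pattern below (show_shot_element_version.frame.ext) is a hypothetical stand-in for
# the kind of structured image-sequence names it governs.
import re

SEQUENCE_NAME = re.compile(
    r"^(?P<show>[A-Za-z0-9]+)_"
    r"(?P<shot>[A-Za-z0-9]+)_"
    r"(?P<element>[A-Za-z0-9]+)_"
    r"v(?P<version>\d{3})\."
    r"(?P<frame>\d{4,})\."
    r"(?P<ext>exr|dpx)$"
)

def parse_frame_name(filename):
    """Split one frame's filename into its naming fields, or raise ValueError."""
    match = SEQUENCE_NAME.match(filename)
    if match is None:
        raise ValueError(f"does not match the illustrative pattern: {filename}")
    return match.groupdict()

print(parse_frame_name("myshow_0010_plate_v001.1001.exr"))
# {'show': 'myshow', 'shot': '0010', 'element': 'plate', 'version': '001', 'frame': '1001', 'ext': 'exr'}
```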

To ensure all requirements were represented, the working group included more than two dozen participants representing studios, VFX houses, tool creators, creatives, and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
