
DCS Notes – Day 1 – Session 3 – 3D Conversion

Session 3: 3D Conversion

Moderator(s):

Brad Collar, Vice President, Technology, Warner Bros.

Panelist(s):

Barry Sandrew, Ph.D., Founder and President/COO, Legend3D, A Legend Films Company

Chris Bond, President – View D, Prime Focus

Chris Yewdall, Executive Director and Chief Executive Officer, DDD USA, Inc.

Warren Littlefield, President, The Littlefield Company

Brad Collar

There are often two controversies when the entertainment industry looks at a new technology: should we do it, and what is the best way to do it? The list of options in 3D keeps growing. This panel will focus on 2D-to-3D conversion.

Warren Littlefield

Home 3D is the next breakthrough. In 1996, he did a 3D episode of 3rd Rock from the Sun. It doubled their audience and helped them win the May sweeps. The lesson learned was that the audience was looking for something new.

There are three categories of 3D content for the home: filmed entertainment, live content, and games.

Four business models (Warren’s term):

– Home video: over 600 episodes of Star Trek are available for sale in 3D. He thinks 3D conversion will be the killer app.

– 3D networks – expect more this year

– Worldwide program distribution

– Digital distribution, including iTunes, Amazon, etc.

The number of sets that will be sold is unknown.  With great cost comes great risk.  The revenue models will depend on how many 3D sets are sold.  There is no format war, just a need for content.  This is an opportunity to up-sell the library.  The box office for 3D indicates a sustained appetite, dispelling the idea of ‘stunt’ appeal.  Building the market will be about volume – volume of set sales and volume of content (live events, movies, and series) together (the chicken plus the egg).

Chris Yewdall

(Chris calls himself the minister of propaganda for 3D)

Conversion can be done at one of three quality levels: good (automated 3D conversion embedded in the 3D device), better (3D conversion with automated depth recovery and manual focal point/depth effect decisions), and best (original 3D content creation, or 3D conversion with manual/semi-automated depth recovery). The cost goes up accordingly.

Carl Franklin wrote a great book, “Why Technology Fails.”  Any 3D conversion must pass the “so what” test.

Since 1993 DDD has been developing a process called Depth-Image Based Rendering (DIBR). A single 2D image is depth-mapped using monocular cues to create the conversion. They have been working to make the manual aspects more efficient by continually improving the computational rules. Cost went from $100k/minute in 1999 to $1,500/min in 2007 by relying more on automation and much less on human intervention.

3D conversion consists of two stages: depth recovery from the 2D content, and 3D scene reconstruction from the depth map and source image. Automated 3D conversion will not deliver the "better" category described above. Human intervention is required, particularly for choosing the 3D focal point. At the end of the day the source content is still 2D, but filming in 2D with scene set-ups designed for 3D improves the results.
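To make the two stages concrete, here is a minimal sketch of the depth-image-based rendering idea: shift each pixel horizontally in proportion to its recovered depth to synthesize a second-eye view. This is an illustration of the general technique only; the function name, disparity scaling, and crude hole filling are assumptions, not DDD's actual process.

    import numpy as np

    def dibr_second_eye(image, depth, max_disparity=12):
        """Synthesize a second-eye view from a 2D frame plus a depth map.

        image: (H, W, 3) uint8 source frame
        depth: (H, W) floats in [0, 1], where 1.0 is nearest the camera
        max_disparity: largest horizontal shift in pixels (illustrative)
        """
        h, w = depth.shape
        out = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        # Nearer pixels shift farther, creating parallax between the eyes.
        disparity = (depth * max_disparity).astype(int)
        for y in range(h):
            for x in range(w):
                nx = x + disparity[y, x]
                if nx < w:
                    out[y, nx] = image[y, x]
                    filled[y, nx] = True
        # Crude hole fill: copy the nearest filled pixel from the left.
        # Real conversion pipelines reconstruct these occluded regions,
        # which is much of where the manual labor goes.
        for y in range(h):
            for x in range(1, w):
                if not filled[y, x]:
                    out[y, x] = out[y, x - 1]
        return out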

Barry Sandrew, PhD

(He invented the first auto-colorization process, and he contributed to the Alice in Wonderland conversion)

(He sarcastically thanked Pierre de Lespinois, from the previous panel, who had been very critical of conversion, for the kind words.) 70% of Alice in Wonderland's $80M+ opening weekend came from 3D screens. Rather than go through his slides, he showed recent work to present a more accurate sense of the current state of conversion.

Christopher Bond

Christopher oversaw the conversion of Clash of the Titans, and presented a chronology of the process. Prime Focus received the call in mid-January to review the film. The kick-off meeting, held in London on Jan. 19th with the director and the editorial and VFX supervisors, met skepticism that the conversion could be done in 8 weeks. The delivery date for the final version of the conversion was fixed at March 19th. What they were told to expect when work started:

– Not a locked cut / expect editorial changes / scene additions

– 90-100 minutes run time, 1,000-2,000 shots

– No graded material, begin work on raw scenes ASAP

– Less than 1/8 of the VFX shots were final at the start of conversion

Working with an unlocked, ungraded cut meant redefining their work process:

– Focus efforts on the “most locked” reels

– Work with raw scans (so they were prepped when the graded material arrived) / 8-frame handles head and tail

– 16 frames of handles × 1,935 shots ≈ 31,000 extra frames, or 21.5 additional minutes at 24 fps

– Conceive and develop tools to reprocess graded material once delivered

– Automate as much of the pipeline as possible

– Everything went into a database created specifically for this process (a minimal sketch follows this list)
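As an illustration of the kind of tracking such a database enables, here is a minimal sketch of a shot table; the schema and field names are invented for this note, not Prime Focus's actual design:

    import sqlite3

    # Hypothetical shot-tracking schema; the fields are illustrative,
    # not Prime Focus's actual database design.
    conn = sqlite3.connect("conversion_tracking.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS shots (
            shot_id    TEXT PRIMARY KEY,   -- e.g. 'reel3_0420'
            reel       INTEGER,
            frame_in   INTEGER,
            frame_out  INTEGER,
            graded     INTEGER DEFAULT 0,  -- raw scan vs. graded material
            grade      TEXT,               -- review grade: A, B, or C
            status     TEXT                -- e.g. 'in-progress', 'final'
        )
    """)
    conn.execute(
        "INSERT OR REPLACE INTO shots VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("reel3_0420", 3, 1001, 1164, 0, "C", "in-progress"),
    )
    conn.commit()

    # Which shots still need the graded material reprocessed?
    for row in conn.execute("SELECT shot_id FROM shots WHERE graded = 0"):
        print(row[0])
    conn.close()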

The creative elements came into play through this process:

– Define “Keystone” shots throughout the film

– Bring “Keystone” shots to final ASAP (approximately 5-10 days) and show them to WB

– Apply notes / feedback from WB and propagate to shots within the same sequence

– The dailies went through many rapid-fire iterations, reviewed in 3 rooms: 1 Dolby and 2 RealD

– Because of the quantity of dailies, they came up with a grading system – A (perfect), B (some artifacts), C (first-pass)

– Twice weekly formal client reviews, progressing from scattered shots, to sequences, to reels, to, near the end, the entire movie

– Reviews in the last few days covered 15-17 minutes of material each day

– Toward the very end they applied convergence and watched the shots in ‘cut’
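Applying convergence in post is commonly done as a horizontal image translation (HIT) of the two eye views. A minimal sketch, assuming that reading of "applied convergence"; the shift amount is illustrative and would be set per shot in review:

    import numpy as np

    def apply_convergence(left, right, shift_px):
        """Re-converge a stereo pair with a horizontal image translation.

        left, right: (H, W, 3) eye images
        shift_px: total relative shift in pixels (even values keep the
        eyes symmetric). Positive pushes the scene back behind the
        screen plane; negative pulls it toward the viewer.
        """
        l = np.roll(left, -shift_px // 2, axis=1)
        r = np.roll(right, shift_px // 2, axis=1)
        # Crop the columns that wrapped around so both eyes keep only
        # valid picture.
        m = abs(shift_px) // 2 + 1
        return l[:, m:-m], r[:, m:-m]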

Lessons learned

– Do not underestimate editorial and ‘conform’ needs and reviews.  This is a massive amount of data.

– Working with an unlocked cut meant that lots of content ended up on the cutting room floor.

– Clients change their minds, even on fundamental issues like how much stereo depth to use

– We can convert a movie in 8-10 weeks.  Just stay calm.

Q&A

(Brad) Why is there so much controversy about conversion? (Warren) Consumers will vote with their wallets. (Barry) Something this new will attract both criticism and praise, but it will make money if done right. (Chris Bond) Sound, Panavision, and TV all met the same criticisms, which faded as the technologies improved and gained market acceptance.

(Brad) Will costs come down?  (Chris Yewdall) TVs can do the ‘good.’  Avatar shows the ‘best.’  We will fill in the middle over time, and costs will come down.

(Brad) Do you see a head-end broadcast converter box? (Warren) Live conversion will not be as good as human intervention in the near term. The automated process in the TV cuts the content maker out of the revenue and the control, so it is critical that they get involved in the process somehow.

(Brad) Chris (Yewdall), you brought up good, better, best. Will consumers see the difference? Should they be branded differently? (Chris Yewdall) No, they should not be branded differently. There are already variations: 720p vs. 1080p vs. 1080i. Over time, the conversion technologies will improve to the point where consumers won't be able to tell the difference because there may not be a difference.

(Brad) What are the technical advantages of converting a new feature versus an old feature? (Barry) The degree of creative input is much less for older features. There is a great deal of creative input for new films, which adds time and cost. (Warren) We'll have a new generation of filmmakers shooting in 2D, knowing it will be converted. Their shooting process will be changed by that knowledge, and the viewer will benefit from it. It is just starting to happen.

DCS Notes – Day 1 – Session 2 – Programming: Lessons Learned

Session 2: Programming: Lessons Learned

Moderator(s):

Al Barton, Consultant, Freelance Digital

Panelist(s):

Jason Goodman, CEO, 21st Century 3D

Pierre de Lespinois, Co-Founder, Evergreen Films

Thomas Edwards, VP, Digital Television Testing & Evaluation, FOX

Pierre de Lespinois

Evergreen is working to integrate the storytelling with the engineering. Technology advances fuel revenue growth. Once the conversion to HD had been done, integrating 3D into that live-event workflow was fairly easy. They have an "interocular" crew member, someone "pulling convergence" during the shoot. The camera operator controls the zoom and focus. The stereographer in the truck communicates with the "interocular," who makes sure that the cameras are balanced. He has found that most of the time is spent getting the lenses centered so they zoom properly. They spend a full day tracking and calibrating the lenses before the shoot. Once the lenses are tracked, they stay with the camera for the duration. (He showed a beautiful Dave Matthews Band concert clip and a clip from a feature called Totem.) The cost difference for shooting 3D, due to the second camera, the rig, the stereographer, and other factors, amounts to an additional 10-15% on production and 10-15% on post.
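For reference, the geometry behind "pulling convergence" on a toed-in rig is small: each camera rotates inward by an angle set by the interaxial separation and the distance to the convergence plane. A minimal sketch; the numbers are illustrative, not Evergreen's settings:

    import math

    def toe_in_angle_deg(interaxial_mm, convergence_m):
        """Toe-in angle per camera for a converged (non-parallel) rig.

        interaxial_mm: lens-center separation between the two cameras
        convergence_m: distance to the plane that should sit at the screen

        Each camera rotates inward by atan((interaxial/2) / distance).
        """
        half_base_m = (interaxial_mm / 1000.0) / 2.0
        return math.degrees(math.atan2(half_base_m, convergence_m))

    # Example: a 65 mm interaxial converged on a performer 6 m away
    # needs roughly a 0.31 degree toe-in per camera.
    print(round(toe_in_angle_deg(65, 6.0), 2))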

Thomas Edwards

(He started with an extended Fox Sports clip, highlighting football.)  The trucks use micropol displays because it is hard to synchronize a bank of shutter displays.  Wide/high shots tell the story but produce a toy soldier effect.  Tight/low shots “get into the action,” especially if you zoom in.  Tasteful, occasional use of extreme negative parallax can be good, but you must avoid objects that are too close to the viewer such as a foul net or a foul pole.  Score box placement is an open question.  You want the scores/stats to be in front of the closest image.  Placing the score graphic at the bottom of the screen puts it in front of the grass (sharp foreground), but putting it at the top puts it in the sky (near screen plane).  One bonus benefit of putting the score graphic in the sky/screen plane is that it lets people without glasses read the score.
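Edwards' placement rule can be stated simply: the graphic must carry a disparity that puts it in front of the nearest scene element, or sit at zero disparity (the screen plane) to stay readable without glasses. A minimal sketch; the sign convention and margin are assumptions for illustration:

    def score_box_disparity(nearest_scene_disparity_px, margin_px=4):
        """Pick a horizontal disparity for an overlay graphic.

        Convention assumed here: negative disparity = in front of the
        screen plane. To keep the score box readable it must float in
        front of the nearest scene element, so we go a few pixels past
        the scene minimum, and never behind the screen plane (0), which
        is the 'sky' placement that stays legible without glasses.
        """
        return min(nearest_scene_disparity_px - margin_px, 0)

    # Grass at the bottom of frame at -12 px -> graphic at -16 px.
    print(score_box_disparity(-12))              # -16
    # Sky near the screen plane -> graphic sits at the screen plane.
    print(score_box_disparity(4, margin_px=4))   # 0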

Challenges

  • Equipment is tough to obtain
  • Equipment is fragile
  • Equipment is large and heavy
  • Stereography training – “convergence pullers”, and others
  • Discovering what works for 3D sports direction
  • Challenges of dual 2D/3D production – more seat kills (e.g. seats lost to cameras in the stadium)
  • Challenges of backhaul
  • Budget?  Is this ever going to make money?  HD did not make us any money.  3D must make money for us.
  • Small number of distribution channels.

Jason Goodman

(Jason is the first person to be recognized by the DGA as a Stereographer.)

21st Century 3D developed a ground-breaking 3D camera: compact, lightweight, progressive-scan 24fps, with the look and feel of a normal camera, a binocular viewfinder, and a purely digital workflow. He discussed the evolution of their cameras. They are announcing their next-gen camera, available for purchase, this week at NAB. (He showed a clip from the Black Eyed Peas movie that they are working on, plus other footage.)

Q&A

Why do you need two cameras? Panasonic is showing a single-body camera for the prosumer market; it has two lenses and one chip. (Pierre, going back to an earlier discussion) Dimensionalizing 2D is fooling the public. Don't call it 3D. Certain shots in 2D don't work in 3D. Films need to be shot for 3D. $4.5M to dimensionalize is less than $30M to shoot 3D, but it isn't real 3D. Call it dimensionalizing.

Thoughts on edge violations? (Pierre) We make sure that things on the edges of the frames are non-intrusive.

(Al) The hardest thing right now is learning what terms to use when discussing 3D production with someone else.  The actual language used to describe 3D issues and processes is in flux.


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs to and outputs of the VFX process. Prior to this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
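The article does not reproduce the spec's grammar, so the pattern below is a hypothetical example of the kind of structured sequence name such a specification standardizes, not the actual ETC@USC syntax:

    import re

    # Hypothetical naming pattern for a VFX plate sequence; the real
    # ETC@USC specification defines its own fields and separators.
    PATTERN = re.compile(
        r"(?P<show>[a-z0-9]+)_"
        r"(?P<shot>[a-z0-9]+)_"
        r"(?P<element>[a-z0-9]+)_"
        r"v(?P<version>\d{3})\."
        r"(?P<frame>\d{4,8})\."
        r"(?P<ext>exr|dpx)$"
    )

    name = "myshow_sh0420_plate_v002.1001.exr"
    m = PATTERN.match(name)
    if m:
        # A parseable name lets tools route frames without per-partner
        # custom logic.
        print(m.group("shot"), m.group("version"), m.group("frame"))
        # -> sh0420 002 1001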

To ensure all requirements were represented, the working group included over two dozen participants representing studios, VFX houses, tool creators, creatives and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
