
DCS Notes – Day 1 – Session 6 – After the Capture: What Other Tools Exist?

Session 6: After the Capture: What Other Tools Exist?

Moderator(s):

Jim Whittlesey, Sr. Vice President, Technology, Deluxe Digital Media

Speaker(s):

Ray Harrisian, Stereographer, 3ality

Matthew DeJohn, VP/VFX Producer, In-Three Inc.

Peter Postma, FilmLight

Steve Owen, Director of Worldwide Marketing, Quantel

Peter Postma

Baselight is a software color correction product.  It is capable of multiple streams of 4K for color correction.  I’ll talk about what we added for 3D work.

Why would you perform stereo adjustments in a color corrector?  Because you have high performance tools (interactive), a good environment (big screen), the right talent for it, and the right time (at least in part) in the process for color and convergence issues to be addressed.

Stereographers will get eye fatigue if they wear any type of 3D glasses all day.  They can work in side-by-side or simple wipes between the two eyes.  Anaglyph is good for a quick check.  Checkerboard is useful for color adjustments – errors jump out when the content is viewed in checkerboard mode.  Color difference view and interleaved views are also useful.
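As an illustration of how these review modes combine the two eyes, here is a minimal sketch of anaglyph and checkerboard composites, assuming the left/right frames are NumPy RGB arrays (the function names are illustrative, not from any product mentioned above):

```python
import numpy as np

def anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left eye, green/blue from the right."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

def checkerboard(left, right):
    """Alternate pixels between eyes in a checkerboard pattern;
    color or geometry mismatches between the eyes show up as a visible grid."""
    h, w = left.shape[:2]
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2).astype(bool)
    out = left.copy()
    out[mask] = right[mask]
    return out

# Tiny 2x2 RGB frames: left eye all 100s, right eye all 200s.
L = np.full((2, 2, 3), 100, np.uint8)
R = np.full((2, 2, 3), 200, np.uint8)
cb = checkerboard(L, R)   # pixels alternate 100/200 in a checker pattern
ana = anaglyph(L, R)      # every pixel is [100, 200, 200]
```

A simple wipe or side-by-side view is just array slicing and concatenation in the same vein.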

Stereo tools include a multilayer multitrack timeline, so you can ‘gang color corrections.’  You can also synchronize the color grades, transferring information from one eye to the other.

Baselight is capable of adjusting for keystone, rotation, translation, and scale, and has floating window adjustment tools.
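The kind of per-eye geometry fix these tools apply can be sketched as a single 3x3 homography that combines the four corrections; the sketch below (with illustrative parameter names, not Baselight's actual API) builds the matrix and applies it to a point:

```python
import numpy as np

def alignment_matrix(rotate_deg=0.0, tx=0.0, ty=0.0, scale=1.0, keystone=0.0):
    """3x3 homography combining rotation, translation, scale, and a
    small projective `keystone` term that tapers one side of the frame."""
    th = np.radians(rotate_deg)
    c, s = np.cos(th), np.sin(th)
    rts = np.array([[scale * c, -scale * s, tx],
                    [scale * s,  scale * c, ty],
                    [0.0,        0.0,       1.0]])
    k = np.array([[1.0,      0.0, 0.0],
                  [0.0,      1.0, 0.0],
                  [keystone, 0.0, 1.0]])
    return k @ rts

def warp_point(H, x, y):
    """Apply the homography to a single (x, y) position."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

H = alignment_matrix(tx=5.0)      # shift one eye 5 px horizontally
x, y = warp_point(H, 0.0, 0.0)    # -> (5.0, 0.0)
```

In practice the matrix would drive an image resampler; correcting one eye to match the other (rather than warping both) preserves as much of the original image data as possible.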

Steve Owen

The challenges of 3D post

What’s the problem?  The data needs to be perfectly synchronized and of the highest quality; the result must be comfortable to watch, meet the creative brief, and cost not much more than 2D content.

New issues for Post:

– Where does the image sit in relation to the screen? Creative choice and delivery method / screen size issues.

– Do your edits work in Stereo 3D? What are you asking the viewer’s eyes to do?

What are 5 good questions for any Post house to ask when preparing a quote for a 3D job, especially if your goal is to stay in business and make money?

1 What do the rushes look like?  ‘Fix it in post’ is the wrong attitude for 3D!  Quality control on-set is vital.  Don’t quote a job until you’ve seen the rushes.

2 What does the customer want?  For reference, Sky has posted their requirements for stereoscopic alignment on their website.

3 How much client interaction will there be?  Reliable viewing and play-out is critical to a good client experience.

4 Can your pipeline cope? (Will we make money on the job?)  3D is twice the data to manage, process, move, and store.  Quality matters like never before.  It will stress your network infrastructure, storage, and management systems.  A good 2D pipeline doesn’t guarantee success in 3D.

5 How can your technology help?  Can you do more in one suite?  Do you have real-time tools?  Stereo 3D time-saving tools: stereo color balance, geometry correction, image analysis, and tools to report divergence.

Ray Harrisian

3ality’s stereo image processing box analyzes the left/right image down to 1/100th of a degree (or whatever the unit is) for accurate alignment.

Where to place things in space may not be obvious when a shot is captured, but is revealed during editing.  In U2 3D every transition (e.g. many of the cuts) is a visual effect.  They did dynamic depth balancing.  For example, they created the effect in Post of the Edge and Bono looking at each other across a cut.

Matthew DeJohn

Dimensionalization as a tool after capture (“Dimensionalization” is a trademarked term of In-Three!) is useful to create and alter content, ensure viewer comfort, and enhance artist control.

Three basic stages for dimensionalization:

– isolation of the elements

– depth generation

– paint process

– (sidebar: transparencies, particles, motion blur, etc. are handled both here and elsewhere)

Doing dimensionalization wrong results in:

– rubber sheet effect – foreground objects wrap around into background objects

– cardboard cutout effect – insufficient roundness

– inaccurate depth layout – conflicting depth cues

– lack of depth continuity

– bad compositing (ex. hair, hard edges that conflict with motion blur, transparencies (foregrounds sticking to background), no paint / auto paint)

Doing dimensionalization right results in:

– good depth, distinct separation, nuanced / detailed, natural fall-off, matched 2D & 3D depth cues, solid depth continuity, and depth that is adjustable to the client’s desire

– compositing is VFX-caliber, hair as good as green screen, stereo-accurate transparency, real image data for occlusions, high quality matters.

When to use Dimensionalization:

– Difficult to capture shots

– Prohibitive environments

– Prohibitive production schedule

– Benefits of traditional 2D shoot

– Flexibility in artistic decisions

– Failed stereo capture

– Alternative stereo capture

– Catalogue titles

Q & A

Regarding catching errors, how much is manual versus software-based detection?  (In-Three) Human intervention is a necessary part of the process.

When dimensionalizing, how do you know what is behind the object being dimensionalized?  (In-Three) You follow standard visual effects techniques and processes.

DCS Notes – Day 1 – Session 5 – A Case for Quality in Production and Post-Production

Session 5: A Case for Quality in Production and Post-Production

Speaker(s):

Buzz Hays, Executive Stereoscopic 3D Producer, 3D Technology Center, Sony Corporation of America

(Buzz produced the 3D version of G-Force and Monster House)

What constitutes ‘high quality?’

– technical considerations; resolution, artifacts, (mis)alignment (which can cause real eyestrain for the people working in Post!)

– aesthetic values; the artistry must be very high-quality.  He has sent effects back to be improved.  It can have a lot to do with understanding parallax and stereography.

– effect on the viewers; some people are seeing it for the first time and don’t yet understand what they are looking at.  The audience will become more critical over time.  There is also concern over fatigue and eyestrain.  This is especially important now that it is coming to TV and people will be watching more 3D for longer periods.

Buzz received the completed version of Open Season and was asked to convert it to IMAX.  It had scenes that didn’t work well in 3D.  He used this to make the point that 3D must be considered in the pipeline even when there are no plans for a 3D release.

With Beowulf, Zemeckis had a lot of experience in 3D, but was now telling a 2 hr. story to an older audience.  How to sustain 3D moments without causing eye fatigue was a key concern.  Phil McNally says that we’ve spent the last 200 years trying to convert the world to 2D.  3D has now become its own art form.  We need to discover the fundamental language of 3D.  Motion may tell the story much better than cutting does in 3D.  Perhaps in 3D every shot is a point-of-view shot.

At the Sony 3D Technology Center they’ve started an educational program.  Working with the Local 600 Guild they are focused on the Cinematographers.  They will soon offer the program to Film and TV Directors as well.  They are working with Live Events people to retrain them to instinctively work in 3D.  In addition, they are reaching out to Game Developers to provide them with the education they need to optimize 3D game play experiences.  Later in the year they will be producing an educational program for Editors.

Stereoscopic 3D Terminology and Techniques

– Basic Terms, physiology, good vs bad, examples of 3D content, 3D camera systems, storytelling in 3D, lighting (back to the notion that lighting is used for sculpting), shooting 2D for 3D, production and post-production, practical shooting experience (Sony Pictures sound stage with a 3ality camera where they offer a 1 day class and 2 days of hands-on shooting).

Terms

– interocular distance – distance between the eye centers, about 2.5”, dictates the scale at which we see the world.  Our eyes don’t work like cameras.  We usually shoot at a 1” or less interaxial distance.

– convergence – rotate the cameras inward, but not so much that you produce keystoning on the chip.  It helps push infinity to the right distance.  The keystoning produces vertical misalignment.  Shooting 720p on a 1080p chip gives you enough spare image to fix the vertical misalignment in post.

– vergence–accommodation conflict – the eyes converge on an object off the screen plane while still focusing on the screen itself; this is a key issue.

– negative parallax / positive parallax – negative is in front of the screen (right-eye image is to the left of the left-eye image); positive is behind the screen (right-eye image is to the right of the left-eye image)

– divergence – the eyes point away from each other to fuse the object.  At 1920 pixels across a 40’ screen, a 2.5” interocular distance corresponds to 10 pixels, so more than 10 pixels of positive parallax will cause divergence.  Viewing the content improperly on a small monitor will produce massive divergence when the content is projected onto a big screen.

– orthostereoscopy – we now have a chance to create a condition that we couldn’t any other way.  We can create a life-size experience with the audience.  It can simulate sitting in the front row of a theatre because we know something about where people sit when they watch TV.
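The 10-pixel divergence budget quoted above is simple arithmetic; a quick check in Python, using the numbers from the notes:

```python
# Positive parallax may not exceed the viewer's interocular distance
# once projected, or the eyes are forced to diverge.
screen_width_in = 40 * 12     # 40-foot screen width, in inches
image_width_px = 1920         # horizontal resolution
interocular_in = 2.5          # typical adult eye separation

px_per_inch = image_width_px / screen_width_in   # 4 pixels per inch of screen
max_parallax_px = interocular_in * px_per_inch   # 10-pixel divergence budget
```

The same arithmetic explains the small-monitor trap: on a narrow display each pixel covers far less than a quarter inch, so parallax that looks comfortable there can blow well past 2.5 inches on a 40-foot screen.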

Techniques that can work differently in 2D and 3D: focal length, framing, blocking action, camera motion (it may be a better way to tell 3D stories), depth of field

Q&A

What one rule would you recommend?  For home viewing, respect the personal space and push the 3D into and behind the screen plane.


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification for best practices naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.

To ensure all requirements are represented, the working group included over 2 dozen participants representing studios, VFX houses, tool creators, creatives and others.  The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices.  Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
