
DCS Notes – Day 1 – Session 5 – A Case for Quality in Production and Post-Production

Speaker(s):

Buzz Hays, Executive Stereoscopic 3D Producer, 3D Technology Center, Sony Corporation of America

(Buzz produced the 3D versions of G-Force and Monster House.)

What constitutes ‘high quality?’

– technical considerations: resolution, artifacts, (mis)alignment (which can take a real physical toll on the people working in post!)

– aesthetic values: the artistry must be of very high quality.  He has sent effects back to be improved; much of this comes down to understanding parallax and stereography.

– effect on the viewers: some people are seeing 3D for the first time and don’t yet understand what they are looking at.  The audience will become more critical over time.  There is also concern over fatigue and eyestrain, which is especially important now that 3D is coming to TV and people will be watching it for longer periods.

Buzz received the completed version of Open Season and was asked to convert it to IMAX 3D.  It had scenes that didn’t work well in 3D.  He used this to make the point that 3D must be considered in the pipeline whether or not there are plans for a 3D release.

With Beowulf, Zemeckis had a lot of experience in 3D, but was now telling a two-hour story to an older audience.  How to sustain 3D moments without causing eye fatigue was a key concern.  Phil McNally says that we’ve spent the last 200 years trying to convert the world to 2D, and that has now become its own art form.  We need to discover the fundamental language of 3D.  In 3D, motion may tell the story much better than cutting does.  Perhaps in 3D every shot is a point-of-view shot.

At the Sony 3D Technology Center they’ve started an educational program.  Working with the Local 600 Guild they are focused on the Cinematographers.  They will soon offer the program to Film and TV Directors as well.  They are working with Live Events people to retrain them to instinctively work in 3D.  In addition, they are reaching out to Game Developers to provide them with the education they need to optimize 3D game play experiences.  Later in the year they will be producing an educational program for Editors.

Stereoscopic 3D Terminology and Techniques

– Basic terms, physiology, good vs. bad, examples of 3D content, 3D camera systems, storytelling in 3D, lighting (back to the notion that lighting is used for sculpting), shooting 2D for 3D, production and post-production, and practical shooting experience (a Sony Pictures sound stage with a 3ality camera, where they offer a one-day class and two days of hands-on shooting).

Terms

– interocular distance – the distance between the eye centers, about 2.5”; it dictates the scale at which we see the world.  Our eyes don’t work like cameras.  We usually shoot at an interaxial distance of 1” or less.

– convergence – rotate the cameras inward, but not so much that you produce keystoning on the chip.  Convergence helps push infinity out to the right distance.  Keystoning produces vertical misalignment; shooting a 720p image within a 1080p frame gives you enough spare resolution to fix the vertical misalignment in post.

– vergence–accommodation conflict – a key issue: the eyes converge on an object’s apparent 3D position but must keep their focus on the screen plane, and that mismatch contributes to discomfort.

– negative parallax / positive parallax – negative parallax places the object in front of the screen (the right-eye image is to the left of the left-eye image); positive parallax places it behind the screen (the right-eye image is to the right of the left-eye image)

– divergence – the eyes point away from each other to fuse the object.  At 1920 pixels across a 40’ screen, a 2.5” interocular distance means that more than 10 pixels of positive parallax will cause divergence (see the sketch after this list).  Viewing the content improperly on a small monitor will produce massive divergence when the content is projected onto a big screen.

– orthostereoscopy – we now have a chance to create a condition that we couldn’t create any other way: a life-size experience for the audience.  It can simulate sitting in the front row of a theatre, because we know something about where people sit when they watch TV.
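
A back-of-the-envelope check of that pixel budget (a minimal sketch in Python; the screen width, resolution, and interocular figures are the ones quoted above, while the variable names are mine):

    # Divergence budget for a 40 ft (480 in) wide screen at 1920 px,
    # assuming a 2.5 in adult interocular distance.
    SCREEN_WIDTH_IN = 40 * 12      # screen width in inches
    H_RES_PX = 1920                # horizontal resolution in pixels
    INTEROCULAR_IN = 2.5           # typical adult eye separation

    inches_per_pixel = SCREEN_WIDTH_IN / H_RES_PX     # 0.25 in per pixel
    budget_px = INTEROCULAR_IN / inches_per_pixel     # 10 px

    # Positive parallax wider than the viewer's interocular distance
    # forces the eyes to point outward (diverge) to fuse the object.
    print(f"Positive parallax beyond {budget_px:.0f} px causes divergence")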

Techniques that can work differently in 2D and 3D: focal length, framing, blocking action, camera motion (it may be a better way to tell 3D stories), depth of field

Q&A

What one rule would you recommend?  For home viewing, respect the viewer’s personal space and push the 3D into and behind the screen plane.

DCS Notes – Day 1 – Session 4 – Keynote Speaker: Mark Schubin, Technology Consultant

What is 3D?

On Jan 14, ITU-R SG6 defined three generations of 3D: the first generation is plano-stereoscopic (with four sub-levels, where 1 = anaglyph), the second generation is multiview, and the third generation is object-wave profile (holography), which is 15-20 years away.  RabbitHole is delivering film-based holography (we have examples in the ETC Consumer 3D Experience Lab); it is limited to 1280 frames.

Short of full stereoscopy there are enhanced chromostereopsis and the Pulfrich effect.  Chromostereopsis works as long as you control the colors in the scene.  View-shifting works by jiggling the image, and microstereopsis (Trioscopics) doesn’t cause vision problems but doesn’t work well either.

The term ‘3D graphics’ is used for both CGI and stereoscopic 3D, which can make conversations confusing.

POOT is “plain old ordinary TV.”  Even there, if you close one eye, you can still sense depth from the monocular cues.

He reviewed the visual cues mentioned by Pete Lude in a previous talk.  Papers by S. Nagata, 1991 and J. Cutting & P. Vishton, 1995 contain graphs illustrating the influence of the various cues.

Ames rooms confuse perspective vs. size cues.  (Ames rooms are the distorted, optical-illusion rooms that make the same object appear huge in one corner and tiny in the other.)  Five Ames rooms were used in Lord of the Rings to make Gandalf appear much taller than the Hobbits.  If Lord of the Rings had been shot in 3D, the effect would not have worked.

Why does 3D matter?

Things to consider when creating 3D:

– The placement of the convergence point when shooting

– Pupillary distance varies across viewers and can range from about 40 mm for children up to 80 mm

– Screen size

– Viewing distance (if the negative parallax on the screen matches your pupillary distance, the object appears at half your viewing distance: it comes out to 3’ if you are 6’ away, but 25’ if you are 50’ away, which is less credible; see the sketch below)
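
The numbers in that last item follow from similar triangles; here is a minimal sketch of the geometry (the function and variable names are mine, not from the talk):

    # Perceived distance of a fused point.
    # D = viewing distance, e = pupillary distance, p = screen parallax
    # (positive = behind the screen plane, negative = in front of it).
    def perceived_distance(D, e, p):
        return D * e / (e - p)

    E_IN = 2.5                     # pupillary distance in inches
    # Negative parallax whose magnitude equals the pupillary distance
    # places the object at half the viewing distance:
    print(perceived_distance(6 * 12, E_IN, -E_IN) / 12)    # 3.0 (ft at 6 ft)
    print(perceived_distance(50 * 12, E_IN, -E_IN) / 12)   # 25.0 (ft at 50 ft)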

The movie theatre is not the home.  In a theatre, the audience is most likely entirely in the zone of comfort.  At home, the zone of comfort is only a small portion of the available seating locations around the TV.

Conflict / Effect

– perspective vs. size / incorrect relative size

– occlusion vs. stereopsis / graphics difficult to view

– vergence vs. accommodation / possible discomfort

– stereovisual vs. vestibular / possible discomfort

– stereopsis vs. vergence / incorrect depth

– impairment, choice / might training help the muscular areas? We don’t know.

Why are so many TVs coming out with active shutter technology?  Because polarization is hard to implement: with active shutter you don’t have to do anything to the screen, you just add the emitter, so the set ends up at about the same price.  This is good for the CE manufacturer, which is not necessarily concerned about the cost of the glasses.  The battery life of the glasses is not its concern either.

While people who are sensitive to 3D can watch polarized 3D in 2D by putting the same lens over both eyes, isolating one image for both eyes in active shutter glasses may produce flicker.

BBC R&D White Paper 180, posted on the web, discusses how to synthesize 3D.

Steve Schklair said, “3D requires fewer camera positions.”  The same was said at the birth of HDTV; audience expectations evolved, so expect the gradual adoption of faster cuts and shorter scenes.

Terms that are often used in marketing and press releases but rarely credible include: “the first”, “successful”, and “good enough.”

A paper by I. Howard & B. Rogers, 1996, discussed microstereopsis.  Human sensitivity to stereo disparity is greater than our sensitivity to luminance, which is why even a tiny interaxial separation can read as depth.

Digital Optical Technology Systems, from The Netherlands, is a glasses-free display that uses a slit iris to induce color fringes that create a 3D effect.  The patents for this technology have expired.  It works with one ordinary lens per camera position, as long as each is outfitted with a slit iris.  This solution, which has a nearly zero interocular distance, shows no ghosting, but it is incapable of a WOW effect.  It is a ‘kinder, gentler 3D.’

(See Gary Shapiro’s editorial on 3D in the current issue of Vision magazine.)

Q&A

What types of content are particularly suited for 3D?  From a physiological standpoint, talking heads and children’s shows are well suited to 3D because of their narrow depth range and comfort.  Getting people to buy things, though, will take sports and movies.

How can you shoot the Grand Canyon in 3D?  Make the camera spacing very large.  Hyperstereo reduces the 3D at a distance and heightens it up close (a rough scaling note follows).
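
A rough way to quantify the hyperstereo trade-off (my gloss, with illustrative numbers, not figures from the talk): perceived scale shrinks by roughly the ratio of the normal interocular to the widened interaxial, which is why wide spacing makes a vast landscape read like a miniature.

    # Hyperstereo miniaturization estimate (illustrative numbers only).
    INTEROCULAR_IN = 2.5       # normal human eye spacing in inches
    INTERAXIAL_IN = 25.0       # hypothetical 10x camera spacing for a vista
    print(INTEROCULAR_IN / INTERAXIAL_IN)    # the scene reads at ~1/10 scale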

Do you want to make any points about Avatar?  We need to interview everyone who had a problem viewing Avatar.  That is the epitome of 3D, so we need to understand why those people had problems.

Why is this rebirth of 3D different?  We have digital technology, management of the convergence planes (keystoning), multiple available camera sizes, and other factors, so from a technical perspective we are in a new place.  I cannot speak to the market question of whether this is the future.


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs to and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
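
As a purely illustrative sketch, a convention of this kind is straightforward to validate in code. The field names, delimiters, and pattern below are hypothetical stand-ins, not the normative grammar from the specification:

    import re

    # Hypothetical pattern in the spirit of the spec (NOT the actual grammar):
    # <show>_<sequence>_<shot>_<element>_v<version>.<frame>.<ext>
    PATTERN = re.compile(
        r"^(?P<show>[a-z0-9]+)_"
        r"(?P<sequence>[a-z0-9]+)_"
        r"(?P<shot>[a-z0-9]+)_"
        r"(?P<element>[a-z0-9]+)_"
        r"v(?P<version>\d{3})\."
        r"(?P<frame>\d{4,})\."
        r"(?P<ext>exr|dpx)$"
    )

    name = "myshow_sq010_sh0030_plate_v002.1001.exr"
    match = PATTERN.match(name)
    if match:
        print(match.groupdict())    # parsed fields, e.g. shot = "sh0030"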

To ensure all requirements were represented, the working group included more than two dozen participants representing studios, VFX houses, tool creators, creatives, and others.  The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices.  Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
