News Stories

Plenoptic lens arrays signal future?

[DVB Europe]

… Sony’s Senior Vice President of Engineering and SMPTE President, Peter Lude, gave his version of the future in five steps.
“Step one is the clunky, cabled and complex approach we have used to date. We are now into step two, which is about greater automation and computer analysis, which should make it easier to use rigs, correct errors and reduce manual convergence.
“It should be possible for a computer system to network together multiple cameras arrayed around a stadium, for example, and to toe those cameras in at the same time to keep the object at the same convergence point, so that when cutting between cameras there is no discomfort from viewers’ eyes having to readjust.”
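The toe-in correction Lude describes comes down to simple trigonometry once each camera’s position and the shared target point are known. A minimal Python sketch, using planar (top-down) geometry and made-up positions, none of it drawn from any real broadcast system:

    import math

    def toe_in_angle(camera_xy, camera_heading_deg, target_xy):
        """Pan correction (degrees) so the camera axis passes through target_xy."""
        dx = target_xy[0] - camera_xy[0]
        dy = target_xy[1] - camera_xy[1]
        bearing = math.degrees(math.atan2(dy, dx))  # direction from camera to target
        return bearing - camera_heading_deg         # correction to apply

    # Cameras arrayed around a stadium (positions and headings are illustrative).
    cameras = [((0, 0), 90.0), ((100, 0), 90.0), ((50, 120), -90.0)]
    target = (50, 60)  # the object every camera should converge on

    for pos, heading in cameras:
        print(pos, round(toe_in_angle(pos, heading, target), 2))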
Step three is to use advanced image processing tools. One idea is to use a synthetic or virtual camera. For example, a 35mm camera can be used as the source for texture, colour and framing, while subsidiary cameras, either to the side or from other parts of the set, capture additional information. This information can be used to create a ‘virtual camera’ in post, or to derive information which can offset occlusion.  …
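One way to picture the ‘virtual camera’ idea is depth-image-based rendering: the hero 35mm frame supplies texture and colour, while a per-pixel depth estimate (synthetic here; in the workflow above it would be derived from the subsidiary cameras) drives horizontal shifts that synthesize a new viewpoint. A hedged Python sketch with illustrative values:

    import numpy as np

    def render_virtual_eye(image, depth, baseline, focal):
        """Warp a HxW image to a horizontally offset viewpoint using depth."""
        h, w = image.shape
        out = np.zeros_like(image)
        disparity = (baseline * focal / depth).astype(int)  # pixel shift per pixel
        for v in range(h):
            for u in range(w):
                u2 = u + disparity[v, u]
                if 0 <= u2 < w:
                    out[v, u2] = image[v, u]  # unfilled pixels are occlusion holes
        return out

    img = np.random.default_rng(1).random((32, 32))  # stand-in for the 35mm frame
    depth = np.full((32, 32), 3.0)                   # synthetic depth, metres
    print(render_virtual_eye(img, depth, baseline=0.06, focal=800.0).shape)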
Another idea is to use infra-red systems, as used by Microsoft’s Kinect, or LIDAR (Light Detection and Ranging) devices to scan a field of view and extract depth patterns which can be used to reconstruct scenes.
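A hedged sketch of the depth-extraction step: back-projecting a depth image through a pinhole camera model yields a 3D point cloud from which a scene can be reconstructed. The intrinsics and the flat synthetic depth image below are illustrative stand-ins for real Kinect or LIDAR data:

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Convert a HxW depth image (metres) into an Nx3 array of 3D points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    depth = np.full((480, 640), 2.0)  # flat synthetic scene two metres away
    points = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(points.shape)  # (307200, 3)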
Holographic technologies are perhaps the next step; for more information on that, see: http://www.tvbeurope.com/newsletter-3dmasters-content/full/holographic-tv-on-the-horizon
Walt Disney Studios’ Vice President of Production Technology, Howard Lukk, also has his eye on plenoptics. While a plenoptic lens comprises multiple micro-lenses, each capturing a slightly different area of a picture, he speculated about what a rig fitted with up to 100 camera lenses might capture. “What if we could come up with a new camera system that comprises more than one single camera?” he asked.  …
If 3D camera rigs are not the long-term future of the industry, Lukk suggests that a hybrid approach will develop: a combination of capturing volumetric space on set and producing the 3D in a post-production environment at the back end.  …
“You can be less accurate on the front end. Adobe has been doing a lot of work in this area, where you can refocus the image after the event. You can apply this concept to high dynamic range and higher frame rates.”  …
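Refocusing after the event maps onto the classic shift-and-add light-field technique: shift each sub-aperture view in proportion to its offset within the aperture, then average, and a chosen depth plane comes into focus. A minimal Python sketch with toy data; the article does not detail Adobe’s actual method:

    import numpy as np

    def refocus(views, slope):
        """views maps (du, dv) aperture offsets to HxW images; slope picks the focal plane."""
        acc = None
        for (du, dv), img in views.items():
            shift = (int(round(slope * dv)), int(round(slope * du)))
            shifted = np.roll(img, shift, axis=(0, 1))  # shift view by its offset
            acc = shifted if acc is None else acc + shifted
        return acc / len(views)

    # Toy 3x3 grid of sub-aperture views (random stand-ins for real captures).
    rng = np.random.default_rng(0)
    views = {(du, dv): rng.random((64, 64)) for du in (-1, 0, 1) for dv in (-1, 0, 1)}
    print(refocus(views, slope=2.0).shape)  # (64, 64)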

More Details Regarding Stereo 3D Conversions — Technology Is Growing

[3D TV]

… I ran into a very interesting report from Below The Line (via IBC), which mentions more professional methods of stereo 3D conversion and goes into detail regarding how the process works.

What is also interesting about this piece is how it describes how far stereo conversion has advanced from where it was as recently as 2010.

“We’re seeing a rise in stereo conversion. Converting standard 2D images into stereo 3D has come a long way since 2010’s Clash of the Titans, with the final installment of the Harry Potter saga providing a great example of tasteful and effective stereoscopy,” according to BTL’s Eric Philpott. …

Read the full story here: http://www.3dtv.com/news-reviews/more-details-regarding-stereo-3d-conversions-technology-is-growinng.php


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
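As an illustration only, since the actual field names and ordering are defined by the published specification rather than shown here, a structured sequence name can be validated and parsed with a simple pattern. The hypothetical layout below assumes shot, element, version and frame fields:

    import re

    # Hypothetical pattern, not the ETC@USC grammar: shot_element_vNNN.FFFF.ext
    PATTERN = re.compile(
        r"(?P<shot>[A-Za-z0-9]+)_(?P<element>[A-Za-z0-9]+)"
        r"_v(?P<version>\d{3})\.(?P<frame>\d{4,})\.(?P<ext>exr|dpx)$"
    )

    def parse_sequence_name(filename):
        """Return the name's fields as a dict, or None if it does not conform."""
        m = PATTERN.match(filename)
        return m.groupdict() if m else None

    print(parse_sequence_name("sh010_plate_v001.1001.exr"))
    # {'shot': 'sh010', 'element': 'plate', 'version': '001', 'frame': '1001', 'ext': 'exr'}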

To ensure all requirements are represented, the working group included over two dozen participants representing studios, VFX houses, tool creators, creatives and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
