News Stories

Lytro Announces Light Field Camera

 

[Philip Lelyveld comment: this story brings more technical perspective to this important tech announcement.]

[by Matt]

Lytro Inc. (Mountain View, CA) has announced a point-and-shoot light field camera targeting the consumer market. No details such as price, availability, resolution, or camera size were provided, however; the press release from Lytro said only, vaguely, that the camera would be available “later this year.”

“This is the next big evolution of the camera,” said CEO and Founder Dr. Ren Ng. “The move from film to digital was extraordinary and opened up picture taking to a much larger audience. Lytro is introducing Camera 3.0, a breakthrough that lets you nail your shot every time and never miss a moment. Now you can snap once and focus later to get the perfect picture.”

Light field science was the subject of Dr. Ng’s 2006 Ph.D. dissertation in computer science at Stanford, which won the internationally recognized ACM Doctoral Dissertation Award in 2007. Dr. Ng’s research focused on miniaturizing a roomful of a hundred cameras plugged into a supercomputer in a lab. In 2011, the Lytro team will complete the job of taking light fields out of the lab and making them available in the form of a consumer light field camera.

 

Computational photography using light field reconstruction has been a research topic for a number of years. One of the problems with computational photography is the very large amount of data associated with a high-resolution image. To get the image quality a consumer associates with a normal 1 MByte snapshot from an 8 Mpixel camera, it may be necessary to store as much as 100 MBytes of data and use an imager with 800 Mpixels. Obviously, this would not be practical in a point-and-shoot camera, so Insight Media is looking forward to seeing how Lytro solves this problem. Even at a professional level, 800 Mpixel sensors aren’t really practical, which is why computational photography researchers have used a room full of a hundred individual cameras in the past.
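
As a rough check of those numbers, the back-of-envelope sketch below reproduces the arithmetic; the 10 × 10 angular sampling grid is an assumption for illustration, not a Lytro specification.

```python
# Back-of-envelope estimate of light field data requirements.
# The 8 Mpixel / 1 MByte baseline comes from the article; the
# 10 x 10 angular sampling grid is an illustrative assumption.

output_mpix = 8                  # conventional snapshot resolution (Mpixels)
snapshot_mbytes = 1              # typical compressed size of that snapshot (MBytes)
angular_samples = 10 * 10        # assumed directional samples per output pixel

sensor_mpix = output_mpix * angular_samples       # 800 Mpixels on the imager
bytes_per_pixel = snapshot_mbytes / output_mpix   # ~0.125 byte/pixel after compression
stored_mbytes = sensor_mpix * bytes_per_pixel     # ~100 MBytes stored per shot

print(f"Required imager: {sensor_mpix} Mpixels")
print(f"Stored data per shot: ~{stored_mbytes:.0f} MBytes")
```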

Typically, both the sensor and the “lens” in computational photography are large in area but can, at least theoretically, be very thin. Since no specifications for the camera are available, it is not clear whether the proposed point-and-shoot camera is also a pocket camera. Size isn’t necessarily a catastrophic barrier: people have accepted the 10″ size of the iPad, for example, to get features and a display not available in a 4″ smartphone. A 4″ – 10″ diagonal would be a reasonable size for a computational photography camera that promises to generate 3D images, as Lytro does.

Another problem with computational photography is that it doesn’t produce a viewable image until after the “computational” part. Presumably, any handheld camera from Lytro would include the basic software needed to produce an image visible on the camera display. Typically, though, computational photography involves post-processing and image editing. Again, is this what consumers want? Taking a photo but being unable to view it in its full glory until after a half hour or so of optimizing on your computer is not really what point-and-shoot photography is all about.
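
For readers wondering what the “computational” step actually does, the classic light field operation is refocusing by shift-and-sum: each sub-aperture view is shifted in proportion to its offset from the central aperture, and the views are averaged. The sketch below illustrates the idea on a synthetic light field; the array shapes and the linear shift scaling are illustrative assumptions, not Lytro’s actual pipeline.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing over sub-aperture images.

    light_field: array of shape (U, V, H, W) holding a U x V grid of
    sub-aperture views, each H x W pixels. alpha sets the synthetic
    focal depth: each view is shifted in proportion to its offset from
    the central aperture position, then all views are averaged.
    (np.roll wraps at the borders, which is fine for a sketch.)
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Example: a 5x5 grid of 480x640 views, refocused after capture.
lf = np.random.rand(5, 5, 480, 640)   # stand-in for captured data
near = refocus(lf, alpha=2.0)         # focus on nearer objects
far = refocus(lf, alpha=-1.0)         # focus on farther objects
```

The same captured data yields both outputs; only alpha changes, which is what makes “snap once, focus later” possible.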

Lytro has an online picture gallery of Adobe Flash photos that can be manipulated over the web to simulate what consumers can do with their own computational photography images. While it is not stated, presumably these photos were generated with the Lytro camera, either a laboratory model or a prototype of the consumer version. (Note: anyone who can use a Nixie tube display as the subject of a photo demonstration is a geek after my own heart!)

Computational photography is normally based on multi-aperture imaging, as was discussed recently in Display Daily. An expanded version of this story with the available details on Lytro’s business plans will appear in the upcoming edition of Mobile Display Report.

See the original post here: http://displaydaily.com/2011/07/13/lytro-announces-light-field-camera/

MPEG-4 AVC/H.264 Video Codecs Comparison

[Excerpt]

The main goal of this report is to present a comparative evaluation of the quality of new H.264 codecs using objective assessment measures. The comparison was performed using settings provided by the developers of each codec. Its main task is to analyze the different H.264 encoders for transcoding video, e.g., compressing video for personal use. Speed requirements assume a sufficiently fast PC; the fast presets correspond to real-time encoding on a typical home PC.
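
As an illustration of the kind of objective assessment measure such comparisons rely on, the sketch below computes PSNR between a reference frame and its encoded counterpart. PSNR is one common choice; the excerpt does not say which metrics this particular report uses.

```python
import numpy as np

def psnr(reference, encoded, max_value=255.0):
    """Peak signal-to-noise ratio between a reference frame and its
    encoded/decoded counterpart, in dB. Higher is better."""
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Example: one 8-bit grayscale frame before and after a lossy round trip
# (random noise stands in for real encoder artifacts here).
ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
enc = np.clip(ref.astype(int) + np.random.randint(-3, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, enc):.2f} dB")
```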

See the full story here: http://www.compression.ru/video/codec_comparison/h264_2011/


Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a specification of best practices for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs and outputs of the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
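
To make the idea concrete, a frame-naming convention can be enforced mechanically at ingest time. The sketch below parses a hypothetical show_shot_element_version.frame.ext scheme; the actual field names and separators should be taken from the published specification, not from this example.

```python
import re

# Hypothetical naming convention for illustration only; the actual
# ETC@USC specification defines its own fields and separators.
# Example matched name: ABC_0010_plate_v002.1001.exr
SEQUENCE_NAME = re.compile(
    r"^(?P<show>[A-Za-z0-9]+)_"
    r"(?P<shot>\d{4})_"
    r"(?P<element>[a-z]+)_"
    r"v(?P<version>\d{3})"
    r"\.(?P<frame>\d{4,})"
    r"\.(?P<ext>exr|dpx)$"
)

def parse_name(filename: str) -> dict:
    """Return the named fields if the file matches the convention,
    or raise ValueError so mismatches are caught at ingest time."""
    m = SEQUENCE_NAME.match(filename)
    if not m:
        raise ValueError(f"non-conforming name: {filename}")
    return m.groupdict()

print(parse_name("ABC_0010_plate_v002.1001.exr"))
# {'show': 'ABC', 'shot': '0010', 'element': 'plate',
#  'version': '002', 'frame': '1001', 'ext': 'exr'}
```

A shared, machine-checkable pattern like this is what replaces the per-partner custom processes the specification was written to eliminate.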

To ensure all requirements were represented, the working group included more than two dozen participants from studios, VFX houses, tool creators, creatives, and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. The chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
