News Stories

Puppy Bowl Preview – 3D, Kiss Cam and More

In what’s become an annual awww-producing tradition, this year’s Puppy Bowl has a few new tricks up its sleeve. From chicken cheerleaders to the Kiss Cam to watching the whole thing in 3-D, Animal Planet’s Puppy Bowl VII just might make you forget about the Packers-Steelers game altogether!

Meet the Puppy-Meister
We recently caught up with the man who is in the middle of all the action: Puppy Bowl referee Andrew Schechter (who also serves as coordinating producer of the event). Schechter has been making tough “unnecessary-ruffness” calls for four years now and wouldn’t have it any other way.

“Some children grow up wanting to be firefighters; others want to be astronauts,” he says. “I grew up wanting to be the referee of adorable puppies pretending to play football in a miniature stadium. Dreams really do come true!”

Shelter Dogs Rule the Gridiron
This year he’ll ref a picture-perfect lineup of a whopping 47 puppies! “They are all from different shelters. The puppies range in age from 7 to 16 weeks old, and there are 29 different breeds represented,” he says. “Of course, the Puppy Bowl referee doesn’t play favorites but, I must say, Puppy Bowl VII features some of the cutest and most rambunctious puppy players yet!”
Though the super sweet puppies are obviously the main draw of the show, Schechter says that Puppy Bowl VII has taken things up a notch. Last year’s bunny cheerleaders have been replaced with — rather appropriately — chicks.

“That’s right, we have chicken cheerleaders rooting on the puppy players!” he says. “Then, for the first time ever, we’re introducing a new camera angle: the Puppy Cam. This unique perspective gives viewers a puppy point of view, so you feel like you’re a part of the adorable action.”

Technological Advancements
For those with 3-D TVs, this year’s Puppy Bowl will really allow you to get into the fluffy action, as it’s being filmed in 3-D. Also, don’t miss the new half-time performance by John Fulton, host of the upcoming Animal Planet series “Must Love Cats.”

Then there’s perhaps the cutest addition of all: the Kiss Cam! Composed almost entirely of user-generated videos, the Kiss Cam highlights some of the sweetest human-animal moments out there.

Best Year Ever?
Some things have stayed the same. A fan favorite, the Bissell Kitty Halftime Show that features 16 sweet kittens from shelters, hamster blimp pilots and the water-bowl cam (which is our personal favorite close-up of the pups) are also back as part of the seventh annual event.

In a nutshell, Schechter says that it’s pretty much the best Puppy Bowl ever, and with all of the new features, we’d have to agree. Especially with all of the cute little guys that are in the starting lineup.

Will you tune in to watch Puppy Bowl VII on Animal Planet at 3 p.m. Eastern on Sunday? We will.

There are videos at the original post here: http://www.pawnation.com/2011/02/04/puppy-bowl-preview-3d-kiss-cam-and-more/

(SD&A presentation) Motion and binocular depth cues


Ulrich Leiner, a researcher at the Fraunhofer Heinrich Hertz Institute, talked about evaluating motion and binocular parallax as depth cues in autostereoscopic displays.

3-D displays can be built with a number of different technologies. Autostereo displays operate by providing two or more views to a single viewer. The problem is that these views produce a convincing 3-D image only within a narrow range of viewing angles; the sweet spots are limited. To address this problem, his research uses a display-mounted camera to track the user’s eyes and steer the visual fields to the eye locations.

This image feedback system requires fast response times of less than 100 ms and must track head and eye movements. To be useful, the system needs to be robust against variations in skin color, eye color, hairstyle and changing levels of illumination. To move the images on the display, there are three possible mechanisms: controlling the focused backlight, shifting and scaling the pixels, or moving or switching the beam splitter.

They found that it is hard to shift the pixels at the subfield level, and that moving a focused backlight didn’t produce a comfortable image. As a result, they use a mechanical structure that moves the vertical lenticular lenses along the x-axis to match head and eye movements. A feedback loop that includes the viewer combines the binocular images with horizontal displacement to strengthen the 3-D effect in the image.
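The lens-steering feedback described above can be sketched in a few lines. This is a minimal illustrative model only, not the institute’s implementation: the geometry, constants and function names below are assumptions made for the example.

```python
# Hypothetical sketch of an eye-tracking lenticular steering loop:
# the tracked eye position drives a lateral offset of the lens sheet
# so the stereo sweet spot follows the viewer. All constants are
# illustrative assumptions, not measured values from the system.

def lens_offset_mm(eye_x_mm: float,
                   viewing_distance_mm: float = 600.0,
                   lens_gap_mm: float = 2.0) -> float:
    """Lateral lenticular offset needed to steer the view zones
    toward an eye displaced eye_x_mm from the display axis.

    By similar triangles, shifting the lens sheet by d moves the
    view zone at the viewing distance by roughly
    d * viewing_distance / lens_gap, so we invert that relation.
    """
    return eye_x_mm * lens_gap_mm / viewing_distance_mm


def within_latency_budget(stage_times_ms: list[float],
                          budget_ms: float = 100.0) -> bool:
    """Check that the summed tracking-to-actuation pipeline stages
    stay under the ~100 ms response budget the article cites."""
    return sum(stage_times_ms) < budget_ms
```

For example, an eye 30 mm off-axis at the assumed 600 mm viewing distance would call for a 0.1 mm lens shift, which suggests why a fast, fine-grained mechanical actuator is needed.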

In testing the apparatus, they found that motion parallax is a much less effective depth cue than binocular cues. By combining the two, viewers experienced a 20-30 percent increase in perceived depth over either set of cues alone. The resulting images are close to holographic images. The likely applications for this enhanced viewing technology are those that need critical depth information, including medical uses (especially robot-assisted surgery and telemedicine) and gaming.

See original post here: http://mandetech.com/2011/02/02/motion-and-binocular-depth-cues/

Specification for Naming VFX Image Sequences Released

ETC’s VFX Working Group has published a best-practices specification for naming image sequences such as plates and comps. File naming is an essential tool for organizing the multitude of frames that are inputs to and outputs from the VFX process. Prior to the publication of this specification, each organization had its own naming scheme, requiring custom processes for each partner, which often resulted in confusion and miscommunication.

The new ETC@USC specification focuses primarily on sequences of individual images. The initial use case was VFX plates, typically delivered as OpenEXR or DPX files. However, the team soon realized that the same naming conventions can apply to virtually any image sequence. Consequently, the specification was written to handle a wide array of assets and use cases.
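The specification’s actual grammar isn’t reproduced in this post, but the basic task it standardizes — grouping frame-numbered files into coherent sequences — can be sketched. The `<name>.<frame>.<ext>` pattern below is an illustrative assumption for the example, not the ETC@USC naming rules.

```python
import re
from collections import defaultdict

# Illustrative only: group VFX-style frame files such as
# "plate_v001.1001.exr" into sequences keyed by (name, extension).
# The "<name>.<frame>.<ext>" pattern is an assumption made for this
# sketch, not the ETC@USC specification's actual naming grammar.
FRAME_RE = re.compile(r"^(?P<name>.+)\.(?P<frame>\d+)\.(?P<ext>exr|dpx)$")


def group_sequences(filenames):
    """Return {(name, ext): sorted frame numbers} for matching files;
    non-matching files (e.g. notes, sidecars) are ignored."""
    sequences = defaultdict(list)
    for fn in filenames:
        m = FRAME_RE.match(fn)
        if m:
            key = (m.group("name"), m.group("ext"))
            sequences[key].append(int(m.group("frame")))
    return {key: sorted(frames) for key, frames in sequences.items()}
```

A consistent, machine-parseable pattern like this is what lets each partner in the pipeline ingest another’s deliveries without custom per-vendor handling.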

To ensure all requirements were represented, the working group included more than two dozen participants representing studios, VFX houses, tool creators, creatives and others. The ETC@USC also worked closely with MovieLabs to ensure that the specification could be integrated as part of their 2030 Vision.

A key design criterion for this specification is compatibility with existing practices. Chair of the VFX working group, Horst Sarubin of Universal Pictures, said: “Our studio is committed to being at the forefront of designing best industry practices to modernize and simplify workflows, and we believe this white paper succeeded in building a new foundation for tools to transfer files in the most efficient manner.”

This specification is compatible with other initiatives such as the Visual Effects Society (VES) Transfer Specifications. “We wanted to make it as seamless as possible for everyone to adopt this specification,” said working group co-chair and ETC@USC’s Erik Weaver. “To ensure all perspectives were represented, we created a team of industry experts familiar with the handling of these materials and collaborated with a number of industry groups.”

“Collaboration between MovieLabs and important industry groups like the ETC is critical to implementing the 2030 Vision,” said Craig Seidel, SVP of MovieLabs. “This specification is a key step in defining the foundations for better software-defined workflows. We look forward to continued partnership with the ETC on implementing other critical elements of the 2030 Vision.”

The specification is available online for anyone to use.
