The Entertainment Technology Center @ USC hosted its fourth annual Virtual Conference (vETC) on June 27 & 28, 2018 at the Technicolor Experience Center in Los Angeles, CA. This year’s program highlighted talks in the AI, Immersive Experience, and Adaptive Production sectors. Thought leaders and catalysts from the entertainment and service industries presented an insider’s look into the emerging technologies disrupting everything from the creative process to business models and consumer behavior.
Wednesday June 27th, 2018
|8:15 - 8:30 am||Registration|
|8:30 - 9:00 am||It's Time to Stop Talking About Production in the Cloud|
Much of today’s focus on cloud production is IT-centric: the elasticity of resources, opex vs. capex, on-prem vs. zero on-prem, and so on. That’s not enough. The migration from tape to file had two components. The first was the IT component: we replaced expensive single-function hardware with software running on standard IT hardware. The second was the new workflows made possible when the constraints of a tape workflow disappeared (for example, random access replacing linear access). But a file workflow is itself constrained, and underlying today’s workflows is the movement of data between data islands, some internal and most external. The revolution in the workflow starts not with the IT piece (having the data in the cloud) but with the data being accessible over a network: the fully interconnected network workflow replacing the data islands of the file workflow. Don’t send data, send a link.
|9:00 - 9:30 am||Emerging Storage Technologies that M&E Could be Implementing Right Now|
There are a number of developing architectures for both high-performance and hybrid cloud storage that could have a big impact on media workflows. These include new architectures that take advantage of NVMe characteristics, provide end-to-end private cloud services, or enable data movement between on-premises and public cloud storage. This talk will cover my own “cool vendors” list of emerging companies and architectures that should be on the radar of every M&E company right now.
CEO Renaissance Consulting Group
|9:30 - 10:00 am||The Evolving Media Pipeline Aided by Machine Learning |
This presentation covers the evolving media landscape of content delivered anytime, anywhere, on any platform, as new Machine Learning (ML) services help content creators navigate the end-to-end ecosystem from content creation through delivery and personalization. The presentation includes use cases from the NFL and The Royal Wedding, and also covers AWS ML services including Amazon SageMaker, AWS DeepLens, and Amazon Rekognition Video.
Global Leader, M&E Amazon Web Services
|10:00 - 10:30 am||OTT - The Good, The Bad and The Ugly: What's Broken, How We Fix It, and What's Next|
OTT services have exploded rapidly across the globe—and recently crossed the 250M paying-user threshold. So, where do we as technology providers and service operators currently stand? What’s next? In this session, we will examine the state of the technology and operations behind the delivery of OTT services. We will explore the major pain points that the industry currently faces, as well as their potential solutions. We’ll also take a look at the promising innovative applications of the future, which leverage AI/ML, voice assistants, blockchain, and Cloud. We’ll cover areas such as metadata, where AI and Machine Learning have delivered incredible advancements, but where, in reality, we still lack industry standards on the collection and sharing of this data (and resort to dreadful spreadsheets, emails, and painful manual processes). In the gap between the promise of new technology and where we currently stand, we find opportunity: We know the nature of the problems we face, and we know what we can do to solve them. The question is, who wants to help craft these solutions, and who will ultimately lead us forward?
Solution Architect & Product Lead, Globant
|10:30 - 11:00 am||API and The Cloud: How to Enable the Media Supply Chains of the Future|
In a world of fragmented and rapid content distribution, an efficient supply chain is critical. However, timelines continue to shrink, vendors exchange an exponentially increasing number of emails to enable production, and the true state of content is unknown. Couple this with significant wastage, as data in the production chain is lost downstream and often has to be re-created at various points, and we have an impending disaster. This talk will focus on how new technologies (AI, smart workflow orchestration, etc.) can be used to rebuild the traditional supply chain for efficiency in a cloud-based world.
SVP, Product Strategy, Deluxe Technology & Innovation
|11:00 - 11:15 am||Break - setup for Panel|
|11:15 - 12:00 pm||Panel: Companies That Have Implemented C4 in Software and Workflows (Avalanche.io, FotoKem, PixSystems)||Joshua Kolden
|12:00 pm - 12:45 pm||Lunch|
|12:45 - 1:15 pm||Guy Finley|
|1:15 - 1:30 pm||Secure Cloud Configurations for Media Workflows|
Presented by elite security research & consulting firm Independent Security Evaluators (ISE), this session analyzes the various security controls relevant to cloud platforms in media production workflows. This session considers media & entertainment industry context, business objectives, the security controls themselves, and how to ensure proper configuration -- all irrespective of any given cloud provider.
|1:30 - 2:15 pm||How Would You Store a Zettabyte of Cold Data?|
As the world creates more content that needs to be stored for longer periods of time, today’s technologies are struggling to keep up. How would you store a zettabyte of information? Our research advances the state of the art, creating glass and DNA storage technologies that allow content creators to store massive volumes, at low cost, for – quite literally – ever. Join us to hear more about these cutting-edge technologies and the work we are doing today to make for a better content tomorrow.
Dr. Austin Donnelly
Principal RSDE Lead, Microsoft
Dr. Karin Strauss
Sr. Researcher, Microsoft
|2:15 - 2:45 pm||If the Future Is Interactive, Is Cinematography Dead?|
As the competing verticals of cinema, television, advertising, and gaming converge, entertainment is becoming undeniably more interactive; 'passive viewers' are becoming 'active users.' As the user experience of theatrical exhibition evolves, so too must the process of image capture. From capturing volumes and lightfields to AI and natural language processing, new methods of imaging and content origination are rising to the challenge. Is there a difference between entertainment and content, and how will future audiences engage with it? If our future is destined to be interactive, how do we ensure that we are still telling quality stories? Can we preserve the premium, directed experiences of the best movies, television, and advertising in an age of glut, or will quality just matter less? Andrew Shulkind, an award-winning cinematographer and thought leader in emerging technology integration and immersive media, is at the front lines of these questions and their solutions. He will share his experiences with these new convergences and help make sense of the future of entertainment before we ruin our chance to save it.
Director of Future Imaging AI Technology & Global Immersive Strategy
|2:45 - 3:00 pm||Break & Setup for AI|
Machine Learning and AI
|3:00 - 3:30 pm||"Fireside Chat": The Neuroscience of Content||Yves Bergquist, Gabe Silva
Neuroscientist, Director of CENI/UCSD
|3:30 - 3:55 pm||Integrating AI-Driven QC and Video Analysis into Rally and Adobe Premiere||Simon Eldridge
Recently completed work integrates QC and video analysis AI services into the Rally platform, making that time-based metadata (objects, scenes, nudity, bad language, etc.) available to operators in an Adobe Premiere environment to optimize content QC, legal compliance, and simple versioning. This session presents the architecture and findings.
Chief Product Officer
|4:00 - 4:25 pm||AI Cinematography|
A projected case study of AI-augmented cinematography, developed jointly by Yves Bergquist and Andrew Shulkind.
Director of Future Imaging AI Technology & Global Immersive Strategy
|4:25 - 4:55 pm||Systems Media - An AI Case Study |
What happens when you represent attributes of content together with granular audience insights about these attributes, and performance metrics? You get granular insights on what's driving the performance of your content. Yves will present a case study just completed on 21 TV shows.
Data & Analytics, ETC
|5:00 - 5:25 pm||Distributed AI Architectures|
Artificial Intelligence (AI) is now making a material impact on people's day-to-day lives. Unlike in the past, when there were numerous false starts, the AI movement is now real due to the confluence of big data, big compute, and innovation in AI algorithms. Clouds have democratized AI by making it easier for people to consume. However, it is still difficult to build distributed, scalable AI architectures. Historically, AI architectures have been centralized in nature, with both the model-building and model-use parts of the architecture in one place. We are now entering the realm of distributed AI architectures. This talk describes three major distributed AI trends and discusses potential architectures for dealing with them: 1) how AI is moving to the edge in order to deal with data volume and data security issues; 2) how AI needs to use data from multiple external sources in order to improve the accuracy of its models, and how this provides opportunities for enterprises to monetize their data in data marketplaces (in many cases, enterprises sell their data as part of consortiums and ask for secure sandboxes to ensure they keep control over their data); and 3) how hybrid AI architectures leverage public cloud innovation without enterprises getting locked into a single cloud provider. The talk provides concrete use cases to motivate both the problems and the solution architectures.
VP, Technology Innovation & Sr. Fellow, Equinix
Thursday June 28th, 2018
|9:30 - 10:00 am||The War to Rate Exercise in Virtual & Augmented Reality: The Trojan Horse of Augmented Experiences|
Several years ago, I saw a poster while walking in London. Published by the British Heart Association as a public service announcement, it showed a young child holding a game controller, sitting slack-jawed on the couch, staring at a glowing TV off-page. "Risk an early death," it read. "Just do nothing." For years, we have been trained that video games are the enemy of a healthy lifestyle. However, VR and AR fundamentally change that: even the most passive VR experiences contain more movement than the games of the past, and our work at the VR Health Institute is showing that roughly 30% of VR titles require enough movement to win to qualify as exercise, some of them peaking at energy expenditures seen in Tour de France riders. Designing VR and AR titles without putting thought into their physical motion is quickly becoming an outdated oversight, and an understanding of when games cross the line between fun activities and fun activities that may someday save your life is now beginning to emerge. The VR Health Institute operates without a profit motive, in collaboration with San Francisco State University, and exists to rate VR and AR games for their exercise potential, similar to how the ESRB rates games for graphic content. This talk presents the work of the VR Health Institute, our efforts to use metabolic carts to measure players' calorie consumption through the rate of oxygen consumption by the lungs, and the invisible force that is holding back one of the market verticals that will ultimately have the most impact on the mainstream consumer.
If preferred, the session can also cover key design philosophies around user comfort and the design decisions that affect the energy cost of various experiences. The speaker has presented on technology disruption, gaming, and start-ups in London, New York, Hong Kong, Germany, Japan, and Denmark, as well as at institutions like Stanford University, and his work has been profiled by CNN, the Wall Street Journal, Mashable, TechCrunch, Fast Company, Popular Science, Popular Mechanics, Wired, NPR, the Huffington Post, the Seattle Times, ABC News, and others.
Director VR Institute of Health and Exercise
|10:00 - 11:00 am||Imagining the Future of Immersive Therapeutics: World Building for Translational Science|
From the bedroom of a deaf child to the hospital bed of a patient suffering from Crohn’s Disease, immersive video entertainment and comic book heroes are emerging as the latest tools of therapeutic intervention. Following on the heels of the now-established practices of analog art therapies, and growing from the billion-dollar industries of video games and superhero comic book brands, new immersive media technologies reach further to grasp attention and open doors to reducing pain signals. This panel asks: what role does World Building play, if any, in generating a high-impact therapeutic media response? Drawing examples from clinical, university, and creative agency settings, the panel will focus on three World Building for Translational Science case studies shared by creative directors and a clinical head of applied tech research. Each speaker will zero in on the precipitating factors that invoked each World Build and speak to the World Building process.
Faculty, USC SCA iMA+P
USC SCA iMA+P & Founder of Pigeon Hole Productions
|11:00 - 11:15 am||Break||Scott Spector
Global Leader, M&E Amazon Web Services
|The Future of Story Experience|
|11:15 - 11:45 am||A Vision of the Future with AI-infused Location-based Entertainment|
The VR industry is still in its infancy, though Location-based Entertainment (LBE) shows signs of being the vehicle with which it first succeeds in a major way. There are many reasons for this, among them that the experiences are socially shared and the commitment outlay is relatively small, limited to the price of a ticket. Because LBE is a front-runner, many of the innovations are coming from it, including high-powered wireless experiences, enhanced 3D audio, and the start of active utilization of AI for both the narrative storyline and the characters found within the experiences. My focus here is on the AI innovations in LBE, which are set to greatly expand what is possible to portray there. The vision of the future with AI-infused LBE is one where a narrative could have several branched versions depending on the choices that the people (up to 12 in number) going through the experience actively make, and where several characters within the experience seamlessly appear and act as real as the people going through it, to the extent that it would be difficult to tell the difference. These characters would adapt their lines depending on what is said to them, and a different set of characters could appear based on this as well. This makes an LBE experience as close as possible to a shared fantasy that could be experienced as real. In my talk, I will first discuss the current status of AI utilization in LBE and then present several scenarios that promise to be technically possible.
Partner & President, Transformation Group, CEO & President, Sprout Reality
|11:45 - 12:00 pm||Case Study: Save LA Cougars: Using VR for Public Policy|
VR when it works: using VR/360 for cause-based campaigning and strategic communications. This session covers using VR to tell the story and generate support for building the Liberty Canyon wildlife crossing over the 101 freeway – a multiple-format (iOS, Android, GearVR/Go/Rift, JauntVR, YouTube 360) VR documentary and app using high-end cinematic 360 footage, ambisonic sound, and architectural visualization as part of a broader strategic communications plan.
More info: www.savelacougarsvr.com
Creative Technology, Pacific Virtual Reality
|12:00 - 12:15 pm||Exploring the Infinite - Fractal Worlds in VR|
Using VR to explore fractal dimensions and create algorithm-generated art.
More info: http://pacificvirtualreality.com/fractal-frontiers/
|12:15 - 12:45 pm||Gameview AR: an AR tabletop game viewing experience (partners with ESL, Turner, PGA, NBA)|
Grab Games will provide an overview of their AR tabletop platform for eSports called GameviewAR. GameviewAR was featured by Google at GDC earlier this year and provides a new way for gamers to watch their favorite games from a mobile augmented reality application. A few of Grab's AR partners include ESL, Turner, PGA, and the NBA. Grab is also a partner with Magic Leap and is building additional tabletop experiences launching in the future.
CEO, Grab Games USC Viterbi Faculty
|12:45 - 1:15 pm||Interview: Guy Shalem with Lori Schwartz|
In the past several years, I've taken a step from network TV to develop cutting-edge content for FOX (under Bryan Singer's umbrella), A&E, Lifetime (Fall Into Me), and, as of late, AT&T. I'm excited to be one of the founders of a new mobile-first studio and distribution app/brand focusing on engaging original content for the 'finger-triggered insta-generation.' We believe it’s a revolutionary idea intended to meet the post-millennial hunger for new, engaging viewing experiences. All content will be cultivated by us to preserve the unique quality of our brand as a cutting-edge platform that is original, true, and authentic. The app will use revolutionary A.I. technology to distribute and curate in an ecosystem similar to Instagram Stories and FB Live (tapping, vertical view, interactive experience, social, etc.). Our current team is made up of Hollywood A-listers working closely with visionary startup minds from around the world.
Tech Cat, Story Tech
|1:15 - 1:30 pm||Break|
|Independent Production & Financing|
|1:30 - 2:00 pm||Metaverse Roadmap - the foundation of IM language, and Unrepresented - a blockchain token marketplace for independent production financing and distribution|
I am available for mixed reality XR and blockchain talks, specifically at the intersection of media production, rights management, and finance for new projects. My production work has focused on workflow between 360 and interactive content with XR, VR, and AR play, and I also consult in this space for many major companies (from HP to Black and Decker this month). Our latest project, Unrepresented, is being unveiled this week at Cannes to finance and distribute independent film on a blockchain token marketplace (with immersive/interactive content, including series and XR, in the future). In the works: new interactive media networks that can function like a global PBS for the types of amazing stories Brian and thousands of others are making. My team at Light Lodges is prototyping game shows, narrative and docuseries, blockchain public media networks, and funnels for content that will allow makers from all walks of life to reach their audiences more directly than ever before, engaging new types of participation in play. My practical FX company in Los Angeles, Toyshoppe Systems, is currently running the prop side of Avatar 2, 3, 4, and 5 on set with Fox in Manhattan Beach; while I cannot speak in detail to the production methods they are using, I can speak in generalities to the various mixed reality production workflows available to producers at every budget, from DIY at home to cinematic interactive experiences. I am passionate about live XR in the city, no greenscreen. My previous talks in LA have covered public sector engagement, audience participation in TV shows (designing social engagement strategy and workflows for live and broadcast media), and work with the USC Network Culture Project to create funding for virtual experiences with real-world public good as part of a MacArthur grant at Annenberg back in 2009.
I later worked in partnership with the State Department to create their mixed reality experiences in the early Obama administration and have been producing mixed reality experiences for the public since 2006. I am one of the facilitator/authors of a document that became the foundation of VR/AR/MR/XR language, the Metaverse Roadmap, and over the next few weeks I will be publishing greater detail on the future of mixed reality media in the living room, in smartglasses out in the world, and everywhere as new interactive networks emerge that go with us everywhere.
Founder, LightLodges CEO, Mixed Reality Media
|2:00 - 2:30 pm||Advertising and Branding within Immersive Media|
As the lead Developer Evangelist for Admix, I work with hundreds of VR/AR developers and studios to introduce branding partnerships and opportunities for them to monetize their applications. This talk covers advertising and branding within immersive media: how the traditional advertising industry will be rendered obsolete by the promise of immersive technologies. A few main themes: The metrics used to determine advertising campaign efficiency will completely change, as standard impressions and clicks will no longer be the way to measure brand exposure. Leveraging analytical capabilities like heat mapping and gaze tracking, we can understand what a user is experiencing within a virtual environment to a much greater degree than current standards allow. The era of intrusive advertising will come to an end through virtual and augmented reality: since these experiences are meant to be social and consistent across users, branding will be targeted contextually rather than individually. Privacy won't be invaded, as the advertising will be based on what the user is doing and where they are in the virtual environment, not on personal data. I've spent a lot of time on this while building the VR AR Pledge, a commitment against intrusive advertising in immersive media: https://www.vrarpledge.org/ Many reports demonstrate that brand recognition and memory recall are much higher in immersive mediums than with traditional advertising methods. Brands that do not understand the promise of immersive technologies will fall behind as consumer attention shifts to experiences they can engage and interact with.
Admix also recently joined the Virtual Reality Blockchain Alliance, which focuses on the intersection of these two revolutionary technologies: blockchain will enable digital asset ownership, persistent virtual identity, and virtual currencies, which will prove to be a critical piece of VR development.
VR/AR Developer Evangelist
|2:30 - 2:45 pm||Break|
|IM Creative Process|
|2:45 - 3:15 pm||The Current Immersive Media Landscape: Trends and Opportunities|
Tipatat Chennavasin is Co-Founder and General Partner of the Venture Reality Fund, a VC fund focused on startups in the Immersive Media space. Mr. Chennavasin, who travels the globe evaluating start-ups, technologies, and experiences in the IM space, will present his informed analysis of the current state of IM.
Co-Founder/General Partner, Venture Reality Fund
|3:15 - 3:30 pm||Break|
|3:30 - 4:00 pm||Mixed Reality Virtual Production: Building the Filmmaking Toolset of Tomorrow|
Traditional film productions that incorporate visual effects have a strict delineation between previsualization, production, and final visual effects art and animation. Thanks to immersive tools and advances in real-time rendering of cinematic scenes in game engines, these lines are blurring, and more and more shows are carrying the same 3D assets from previz through post-viz to final delivery. Encoded hardware tools and shared VR environments unlock the dream of allowing films to be "shot" in real time by small, nimble crews led by tech-savvy directors. Linking game engine tools with professional visual effects pipelines allows filmmakers to capture their visions in a way that directly translates to the finished shots. MPC, one of the largest VFX houses in the world, has battle-tested these methods on several blockbuster films and has now integrated their tools into the Technicolor Experience Center motion capture stage.
|4:00 - 4:30 pm||Philip Lelyveld|
|4:30 - 5:30 pm||Panel: Crowd-Sourcing 3D Assets for Immersive Experiences: The Good, The Bad, and the Overly Technical|
One challenge of the mixed reality revolution is the dearth of high quality 3D assets. HP and Technicolor found a clever solution to populating a large environment on their upcoming Mars Home Planet VR experience: engage the community of 3D artists and engineers to envision the colony of the future built to sustain a million people on Mars. With this high-concept call to action came a slew of very real production challenges that expose the strengths and limitations of the CAD design -> real-time game engine workflow.