By Debra Kaufman

During the HPA Tech Retreat eXtra (TR-X) last month, Phil Lelyveld, program lead for the AR/VR Initiative at the Entertainment Technology Center @ USC, spoke about “Artificial Intelligence: Immersion, Story, Technology and Ethics.” He started by reminding attendees that although the market has divided “virtual reality” and “augmented reality” into two separate verticals, they actually form a continuum. “The goal is to create objects or experiences indistinguishable from real experiences, which can impact your brain like a real experience,” he says. Researcher Skip Rizzo, director for medical virtual reality at USC’s Institute for Creative Technologies, describes all of it as “mental stimuli,” noting that “we already live in a mixed reality world.”

As it advances, this world of mixed reality will also be influenced by social media, world building, crowdsourcing and data from dozens of Internet of Things devices, from smartwatches to smart homes. Then comes artificial intelligence. “AI will shape and filter the information you get through AR or VR, so it can have a huge impact on how you view the world,” he says. Lelyveld showed “Eclipse,” a music video commissioned by Saatchi & Saatchi that was made entirely with AI systems. “When it was shown at film festivals, side by side with other music videos created by humans, the audience couldn’t tell the difference,” he reports.

Behind the Scenes on ‘Eclipse’

“AI can out-bet and out-bluff poker players,” Lelyveld says. “It can make decisions based on limited information and it can act unpredictably. Yet we’re putting it in charge of human activities.” Fable Studio, headed by co-founder Edward Saatchi, formerly with Oculus Story Studio, is working on the Lucy Project, attempting to create an interactive and believable character who can “hand you and be handed objects, collaborate to move through the story, remember what you’ve done and fall back to it, be believably interrupted and have a hierarchy of emotional attachments to her objects.” Lucy, says Saatchi, “is the future of AR/VR storytelling.”

Fable Studio’s Wolves in the Walls, starring Lucy:

Lelyveld also demonstrated at the HPA Tech Retreat some of AI’s less positive potential, including a synthesized speech by President Obama, with his lip movements synced to audio he never recorded. Adobe already has a voice synthesis technology, Project VoCo, that, once it has a profile of someone’s voice, can create a convincing recording of that person saying things he never said. “We live in a post-evidence world,” says Lelyveld, who reports that Facebook can remove or add objects and effects in videos, even live ones. Fake celebrity images are also proliferating; he showed examples created by generative adversarial networks (GANs).

Synthesizing Obama’s speech:

“We don’t know what is real and what is altered, and the line between virtual and real is blurred. It’ll take experts to tell the difference. Or maybe even they won’t be able to. Do we have a right to know when voice/sound, image/video or data are altered to change meaning or faked?” Lelyveld asked. “Do we have a right to understand how data and information are filtered, and to an audit trail to understand the framework for decisions? This is a huge open question.”

AI isn’t likely to answer those questions, says Lelyveld, because “there is no clear way to communicate how AI reaches a decision or determines a response.” Compounding the challenge, scientists are not replicating AI studies; because code and results are not being shared, findings are “harder to compare, understand, and improve.” Quoting Will Knight’s article “The Dark Secret at the Heart of AI,” published in MIT Technology Review, Lelyveld asked, “How well can we get along with machines that are unpredictable and inscrutable?”

All is not lost. The potential dark side of AI hasn’t gone unnoticed by the technology world; MIT, Harvard and LinkedIn co-founder Reid Hoffman have invested $27 million in a fund to analyze the impact and implications of AI. The IEEE has proposed ethical guidelines. Others, such as researchers at Carnegie Mellon and the Alan Turing Institute Data Ethics Panel, continue to study the ethical issues raised by AI.

At a European Union conference on “Preserving Democracy in a Digital Age,” Lelyveld reports that he told attendees, “The purveyors of fake news are innovators.” “The way to counteract their efforts is to out-play them,” he says. “It may make more sense to create a system that identifies, elevates, and rewards a bounded set of data, information, and knowledge that we can verify to be true, reliable, and undistorted, than to try to detect and react to an unbounded flow of false, distorted, and fake content.”

“Technology is considered to be morally neutral,” he adds. “All it takes is one person out of 8 billion to use it for bad and cause great harm. This is not an unforeseen problem. You can bury your head in the sand, or you can start thinking now about how to handle it.”
