This month, advertising giant WPP will send unusual corporate training videos to tens of thousands of employees worldwide. A presenter will speak in the recipient’s language and address them by name, while explaining some basic concepts in artificial intelligence. The videos themselves will be powerful demonstrations of what AI can do: The face, and the words it speaks, will be synthesized by software.
WPP doesn’t bill them as such, but its synthetic training videos might be called deepfakes, a loose term for AI-generated images or videos that look real. Although best known as a tool of harassment, porn, and duplicity, image-generating AI is now being used by major corporations for purposes as anodyne as corporate training.
WPP’s unreal training videos, made with technology from London startup Synthesia, aren’t perfect. WPP chief technology officer Stephan Pretorius says the prosody of the presenters’ delivery can be off, the most jarring flaw in an early cut shown to WIRED that was otherwise visually smooth. But the ability to personalize and localize video for many individuals makes for more compelling footage than the usual corporate fare, he says. “The technology is getting very good very quickly,” Pretorius says.
Deepfake-style production can also be cheap and quick, an advantage amplified by Covid-19 restrictions that have made conventional video shoots trickier and riskier. Pretorius says a company-wide internal education campaign might require 20 different scripts for WPP’s global workforce, each costing tens of thousands of dollars to produce. “With Synthesia we can have avatars that are diverse and speak your name and your agency and in your language and the whole thing can cost $100,000,” he says. In this summer’s training campaign, the languages are limited to English, Spanish, and Mandarin. Pretorius hopes to distribute the clips, 20 modules of about 5 minutes each, to 50,000 employees this year.
The term deepfakes comes from the Reddit username of the person or persons who in 2017 released a series of pornographic clips modified using machine learning to include the faces of Hollywood actresses. Their code was released online, and various forms of AI video- and image-generation technology are now available to any interested amateur. Deepfakes have become tools of harassment against activists and a cause of concern among lawmakers and social media executives worried about political disinformation, although they are also used for fun, such as inserting Nicolas Cage into movies he never appeared in.
Deepfakes made for titillation, harassment, or fun typically come with obvious giveaway glitches. Startups are now crafting AI technology that can generate video and images polished enough to stand in for conventional corporate footage or marketing photos. The shift comes as synthetic media, and even synthetic people, become more mainstream. Prominent talent agency CAA recently signed Lil Miquela, a computer-generated Instagram influencer with more than 2 million followers.
Rosebud AI specializes in making the kind of glossy images used in ecommerce or marketing. Last year the company released a collection of 25,000 modeling photos of people that never existed, along with tools that can swap synthetic faces into any photo. More recently, it launched a service that can put clothes photographed on mannequins onto virtual but real-looking models.