Perhaps most importantly, the public is increasingly aware of the technology. In fact, that increasing awareness may ultimately pose a different kind of risk, related to and yet distinct from the generated audio and videos themselves: politicians will now be able to dismiss real, scandalous videos as artificial constructs simply by saying, “That’s a deepfake!” In one early example, from late 2017, the U.S. president’s more passionate online surrogates suggested (long after the election) that the leaked Access Hollywood “grab ‘em” tape could have been generated by a synthetic-voice product named Adobe Voco.
But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier: it will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini-scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies. As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative… or even real. In psychology, this is called the majority illusion. As the time and effort required to produce commentary drop, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions” and then publishing their own responses. This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem.
Right now, it’s possible to detect repetitive or recycled comments that use the same snippets of text to flood a public-comment process, game a Twitter hashtag, or persuade audiences via Facebook posts. This tactic has been observed in a range of past manipulation campaigns, including those targeting the millions of comments submitted to U.S. government calls for public comment on topics such as payday lending and the FCC’s network-neutrality policy. A Wall Street Journal analysis of some of these public-comment calls flagged hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If those comments had instead been generated independently—by an AI, for instance—the same manipulation campaigns would have been much harder to smoke out.
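To make that detection heuristic concrete, here is a minimal sketch in Python of the general approach: flag comments that share long, verbatim sentences with many others. This is an illustration only, not the Journal’s actual methodology; the function name and thresholds are invented for the example.

```python
import re
from collections import defaultdict

def flag_shared_sentences(comments, min_words=12, min_repeats=50):
    """Return indices of comments that share long, verbatim sentences.

    Thresholds are illustrative: a sentence of at least `min_words` words
    appearing identically in `min_repeats` or more comments is treated as
    unlikely to have been written independently.
    """
    sentence_owners = defaultdict(set)
    for idx, text in enumerate(comments):
        # Crude split on terminal punctuation; a real pipeline would use
        # a proper sentence tokenizer and fuzzy (near-duplicate) matching.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            normalized = " ".join(sentence.lower().split())
            if len(normalized.split()) >= min_words:
                sentence_owners[normalized].add(idx)

    suspicious = set()
    for owners in sentence_owners.values():
        if len(owners) >= min_repeats:
            suspicious |= owners
    return suspicious
```

The sketch also shows why independently generated text is so much harder to catch: a language model that composes each comment from scratch never repeats a long sentence verbatim, so an exact-match heuristic like this one finds nothing.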
In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. The ability to manufacture a majority opinion, or create a fake-commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns. Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feed into algorithmically mediated curation systems that surface information based on engagement.
Our trust in each other is fragmenting, and polarization is increasingly prevalent. As synthetic media of all types—text, video, photo, and audio—increases in prevalence, and as detection becomes more of a challenge, we will find it increasingly difficult to trust the content that we see. It may not be so simple to adapt, as we did to Photoshop, by using social pressure to moderate the extent of these tools’ use, and accepting that the media surrounding us is not quite as it seems. This time around, we’ll also have to learn to be much more critical consumers of online content, evaluating the substance on its merits rather than its prevalence.