Today, we’re announcing two new technologies to combat disinformation, new work to help educate the public about the problem, and partnerships to help advance these technologies and educational efforts quickly.
There is no question that disinformation is widespread. Research we supported from Professor Jacob Shapiro at Princeton, updated this month, cataloged 96 separate foreign influence campaigns targeting 30 countries between 2013 and 2019. These campaigns, carried out on social media, sought to defame notable people, persuade the public or polarize debates. While 26% of these campaigns targeted the U.S., other countries targeted include Armenia, Australia, Brazil, Canada, France, Germany, the Netherlands, Poland, Saudi Arabia, South Africa, Taiwan, Ukraine, the United Kingdom and Yemen. Some 93% of these campaigns included the creation of original content, 86% amplified pre-existing content and 74% distorted objectively verifiable facts. Recent reports also show that disinformation has been spread about the COVID-19 pandemic, leading to deaths and hospitalizations of people who sought supposed cures that are actually dangerous.
What we’re announcing today is an important part of Microsoft’s Defending Democracy Program, which, in addition to fighting disinformation, helps to protect voting through ElectionGuard and helps secure campaigns and others involved in the democratic process through AccountGuard, Microsoft 365 for Campaigns and Election Security Advisors. It’s also part of a broader focus on protecting and promoting journalism as Brad Smith and Carol Ann Browne discussed in their Top Ten Tech Policy Issues for the 2020s.
New Technologies
Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. At Microsoft, we’ve been working on two separate technologies to address different aspects of the problem.
One major issue is deepfakes, or synthetic media: photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways. Deepfakes can make people appear to say things they didn’t say or to be in places they weren’t, and because they are generated by AI that can continue to learn, it is inevitable that they will eventually beat conventional detection technology. In the short run, however, including during the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
Today, we’re announcing Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video and provide a percentage chance, or confidence score, that the media has been artificially manipulated. In the case of a video, it can provide this score in real time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.
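To make the per-frame scoring idea concrete, here is a minimal sketch of what such a pipeline could look like. Video Authenticator’s actual model and API are not public, so `StubDetector` is a hypothetical placeholder; the only real API assumed is OpenCV’s video reading.

```python
# Illustrative sketch of per-frame manipulation scoring, not Video
# Authenticator itself. StubDetector is a hypothetical stand-in for a
# trained deepfake detector.
import cv2  # pip install opencv-python


class StubDetector:
    """Placeholder detector. A real system would run a trained model
    over each frame, looking for blending boundaries and subtle fading
    or greyscale artifacts, and return a manipulation probability."""

    def score(self, frame) -> float:
        return 0.5  # dummy value; replace with real model inference


def score_video(path: str) -> None:
    detector = StubDetector()
    capture = cv2.VideoCapture(path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream
            break
        confidence = detector.score(frame)
        print(f"frame {frame_index}: {confidence:.0%} chance manipulated")
        frame_index += 1
    capture.release()


score_video("example.mp4")
```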
This technology was originally developed by Microsoft Research in coordination with Microsoft’s Responsible AI team and the Microsoft AI, Ethics and Effects in Engineering and Research (AETHER) Committee, an advisory board at Microsoft that helps ensure new technology is developed and fielded in a responsible manner. Video Authenticator was created using the public FaceForensics++ dataset and was tested on the DeepFake Detection Challenge Dataset, both leading datasets for training and testing deepfake detection technologies.
We expect that methods for generating synthetic media will continue to grow in sophistication. Because all AI detection methods have failure rates, we have to understand and be ready to respond to deepfakes that slip through detection. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. Few tools exist today to help assure readers that the media they’re seeing online came from a trusted source and wasn’t altered.
Today, we’re also announcing new technology that can both detect manipulated content and assure people that the media they’re viewing is authentic. This technology has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online. The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it.
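The description above maps onto a simple sign-then-verify flow: the producer hashes and signs the content, and the reader re-hashes it and checks the signature. The sketch below is a minimal illustration of that idea; it assumes a bare Ed25519 key pair from the `cryptography` package in place of the full certificate chains a production provenance system would use.

```python
# Illustrative hash-and-certificate flow, not Microsoft's actual Azure
# tooling. A bare Ed25519 key pair stands in for a real certificate.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def publish(content: bytes, key: Ed25519PrivateKey) -> dict:
    """Producer side: compute a hash of the content and sign it; the
    resulting metadata travels with the content wherever it goes."""
    digest = hashlib.sha256(content).digest()
    return {"sha256": digest.hex(), "signature": key.sign(digest).hex()}


def verify(content: bytes, metadata: dict, public_key) -> bool:
    """Reader side: re-hash the content and check the signature. Any
    alteration changes the hash, so verification fails."""
    digest = hashlib.sha256(content).digest()
    if digest.hex() != metadata["sha256"]:
        return False  # content was modified after signing
    try:
        public_key.verify(bytes.fromhex(metadata["signature"]), digest)
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()
article = b"Original news content"
meta = publish(article, key)
assert verify(article, meta, key.public_key())             # authentic
assert not verify(article + b"!", meta, key.public_key())  # tampered
```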
This technology has been built by Microsoft Research and Microsoft Azure in partnership with the Defending Democracy Program. It will power an initiative recently announced by the BBC called Project Origin.
Partnerships
No single organization is going to be able to have a meaningful impact on combating disinformation and harmful deepfakes. We will do what we can to help, but the nature of the challenge requires that multiple technologies be widely adopted, that educational efforts reach consumers everywhere consistently and that we keep learning more about the challenge as it evolves.
Today, we’re highlighting partnerships we’ve been developing to help these efforts.
First, we’re partnering with the AI Foundation, a dual commercial and nonprofit enterprise based in San Francisco, with the mission to bring the power and protection of AI to everyone in the world. Through this partnership, the AI Foundation’s Reality Defender 2020 (RD2020) initiative will make Video Authenticator available to organizations involved in the democratic process, including news outlets and political campaigns. Video Authenticator will initially be available only through RD2020, which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020.
Second, we’ve partnered with a consortium of media companies including the BBC, CBC/Radio-Canada and the New York Times on Project Origin, which will test our authenticity technology and help advance it as a standard that can be adopted broadly. The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies.
Media Literacy
We’re also partnering with the University of Washington (UW), Sensity and USA Today on media literacy. Improving media literacy will help people sort disinformation from genuine facts and manage risks posed by deepfakes and cheap fakes. Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody. Though not all synthetic media is bad, even a short intervention with media literacy resources has been shown to help people identify it and treat it more cautiously.
Today, we are launching an interactive quiz for voters in the United States to learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy. The Spot the Deepfake Quiz is a media literacy tool in the form of an interactive experience developed in partnership with the UW Center for an Informed Public, Sensity and USA Today. The quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising.
Additionally, in collaboration with the Radio Television Digital News Association, The Trust Project and UW’s Center for an Informed Public and Accelerating Social Transformation Program, Microsoft is supporting a public service announcement (PSA) campaign ahead of the upcoming U.S. election that encourages people to take a “reflective pause” and check that information comes from a reputable news organization before they share or promote it on social media. The PSA campaign will help people better understand the harm misinformation and disinformation cause to our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run on radio stations across the United States in September and October.
Finally, in recent months we have significantly expanded our implementation of NewsGuard, which enables people to learn more about an online news source before consuming its content. NewsGuard operates a team of experienced journalists who rate online news websites on the basis of nine journalistic integrity criteria, which they use to create both a “nutrition label” and a red/green rating for each rated news website. People can access NewsGuard’s service by downloading a simple browser extension, which is available for all standard browsers. It is free for users of the Microsoft Edge browser. Importantly, Microsoft has no editorial control over any of NewsGuard’s ratings and the NewsGuard browser extension does not limit access to information in any way. Instead, NewsGuard aims to provide greater transparency and encourage media literacy by providing important context about the news source itself.
Policy Considerations
Governments, companies, non-profits and others around the world have a critical part to play in addressing disinformation and election interference broadly. In 2018, the Paris Call for Trust & Security in Cyberspace brought together a multistakeholder group of global leaders committing to nine principles that will help ensure peace and security online. One of the most critical of these principles is defending electoral processes. In May, Microsoft, the Alliance for Securing Democracy and the Government of Canada launched an effort to lead global activities on this principle. We encourage any organization interested in contributing to join the Paris Call.