Microsoft’s technology is designed to detect disinformation.
Microsoft on Tuesday showed off two new computer programs designed to combat deepfakes. The company said the new tools will be able to identify manipulated videos, which are often hard to detect, and tell people whether the media they're viewing is likely authentic.
Microsoft said the new technology will be built into its Azure cloud platform for businesses. Content makers will be able to send "hash" fingerprints of their videos to Microsoft, which will then help viewers determine whether the media has likely been manipulated.
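Microsoft has not published the details of its hashing scheme, but the general idea behind hash-based content verification is straightforward: a creator computes a cryptographic digest of the original media and publishes it, and anyone who later receives a copy can recompute the digest and compare. A minimal sketch in Python, using SHA-256 purely as a stand-in for whatever fingerprinting Microsoft actually employs (the byte strings below are placeholder data, not real video):

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest serving as a content fingerprint."""
    return hashlib.sha256(data).hexdigest()


def verify(data: bytes, published_fingerprint: str) -> bool:
    """Check a received copy against the creator's published fingerprint."""
    return fingerprint(data) == published_fingerprint


# The creator hashes the original media and publishes the digest.
original = b"frame data from the original video"
published = fingerprint(original)

# An unmodified copy matches; any manipulation changes the digest.
tampered = b"frame data from a manipulated video"
print(verify(original, published))   # True
print(verify(tampered, published))   # False
```

Note that a plain cryptographic hash breaks on any re-encoding or resizing, so production systems like Microsoft's reportedly rely on perceptual hashes that tolerate benign transformations while still flagging manipulation.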
“Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate,” Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, said in a blog post.
The company also created an online game of sorts, challenging people to identify deepfakes, called SpotDeepfakes.org.
Microsoft’s efforts are the latest instance of tech experts raising the alarm over the dangers of deepfakes. The term is shorthand for videos or audio recordings manipulated by a computer to make someone appear to say or do anything the creator wants. The technology has become steadily easier to use and harder to spot, creating opportunities for potentially catastrophic meddling in politics and elections.
The Massachusetts Institute of Technology dramatized that concern in July, with a deepfake video and audio of President Richard Nixon giving a speech he never actually delivered. “This project shows the dangers of misinformation,” MIT’s Center for Advanced Virtuality said at the time. “By creating this alternative history the project explores the influence and pervasiveness of misinformation and deepfake technologies in our contemporary society.”
To address these concerns, Microsoft launched its Defending Democracy Program. Another tool the company created is ElectionGuard, voting machine software designed to quickly identify hacking attempts.
Microsoft said its new deepfake programs were developed within its Microsoft Research division, its Responsible AI team, and its Ethics and Effects in Engineering and Research Committee.