US elections are just around the corner, and there has been a spike of late in the number of fake videos shared by political parties in a bid to malign the opposition. The Republicans came under heavy fire after as many as four manipulated or doctored videos were shared to damage presidential nominee Joe Biden's image. Trump, among other high-profile Republicans, shared these fake videos, which garnered millions of views on social media platforms.
It's time to put that nuisance to an end. Deepfakes pose a real threat to the way users consume news. When a video shared from a verified source is manipulated, viewers have little reason to doubt its legitimacy. That's exactly what happened when two fake Biden interview videos were shared, one of which took Biden's comments out of context; both went widely viral. There have been more instances of such fake videos shared by Republicans that people fell for, and the Trump campaign's "War Room" account has also indulged in spreading fake videos.
How does the deepfake detector tool work?
To combat the spread of disinformation, Microsoft has unveiled a new tool to spot deepfakes, or synthetic media: photos, videos or audio files manipulated by artificial intelligence (AI) that are very hard to identify as fake. The tool, called Microsoft Video Authenticator, can analyse a still photo or video and provide a percentage chance, or confidence score, that the content has been artificially manipulated.
The deepfake detector works especially well on videos, as it provides this percentage in real time on each frame as the video plays. The tool works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye, Microsoft said in a blog post on Tuesday.
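Microsoft has not published Video Authenticator's internals, but the idea of scoring each frame can be illustrated with a deliberately crude heuristic. The sketch below (all function names hypothetical, not Microsoft's algorithm) flags frames with an unusually high share of abrupt pixel transitions, a rough stand-in for the "blending boundary" artifacts deepfake generators can leave behind:

```python
# Minimal sketch, assuming a frame is a 2D list of greyscale values in
# [0, 255]. This is NOT Microsoft's method, only an illustration of
# producing a per-frame "confidence score" from edge statistics.

def gradient_magnitudes(frame):
    """Absolute differences between horizontally and vertically
    neighbouring pixels."""
    grads = []
    for y in range(len(frame)):
        for x in range(len(frame[0])):
            if x + 1 < len(frame[0]):
                grads.append(abs(frame[y][x + 1] - frame[y][x]))
            if y + 1 < len(frame):
                grads.append(abs(frame[y + 1][x] - frame[y][x]))
    return grads

def manipulation_confidence(frame, sharp_threshold=60):
    """Return a 0-100 score: the percentage of abnormally sharp edges,
    used here as a crude proxy for blending-boundary artifacts."""
    grads = gradient_magnitudes(frame)
    if not grads:
        return 0.0
    sharp = sum(1 for g in grads if g > sharp_threshold)
    return round(100.0 * sharp / len(grads), 1)

def score_video(frames):
    """One score per frame, mirroring the real-time per-frame display
    the article describes."""
    return [manipulation_confidence(f) for f in frames]
```

A smooth, natural frame scores near zero, while a frame with a harshly pasted region scores higher; real detectors learn far subtler cues than this single-threshold heuristic.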
This technology has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online, IANS reported.
"The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn't been changed, as well as providing details about who produced it," Microsoft explained.
Deepfakes - misuse of AI
Fake audio or video content, also known as 'Deepfakes', has been ranked as the most worrying use of artificial intelligence (AI) for crime or terrorism. According to the latest study, published in the journal Crime Science, AI could be misused in 20 ways to facilitate crime over the next 15 years.
Deepfakes could make people appear to say things they didn't or to be in places they weren't, and the fact that they're generated by AI that can continue to learn makes it inevitable that they will eventually beat conventional detection technology.
"However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes," Microsoft said.
"No single organisation is going to be able to have a meaningful impact on combating disinformation and harmful deepfakes," it added.
Microsoft also announced several partnerships in this regard, including with the AI Foundation, a dual commercial and nonprofit enterprise based in the US, and a consortium of media companies that will test its authenticity technology and help advance it as a standard that can be adopted broadly.