Intel unveils technology capable of detecting deepfakes - SIMPLYSHAN.COM


Intel has announced the first deepfake detector that relies not on raw data but on the physical characteristics that make us human.

Called FakeCatcher, the tool from the American giant Intel aims to usher in a new era in the fight against maliciously used deepfakes. Intended to be accessible to everyone, it could shake up the fight against deepfakes, but it also raises questions.

96% accuracy

“Deepfake videos are everywhere now. You’ve probably seen them before; videos of celebrities doing or saying things they’ve never done.” Behind these words from Ilke Demir, scientist and head of the research team at Intel Labs, we might recall, for example, the fake capitulation video of Ukrainian President Volodymyr Zelensky. As part of its Responsible AI program, Intel has developed a tool called FakeCatcher to help detect the deepfakes now swarming the internet.

The tool analyzes a video in real time not by examining its raw data, but by relying on physical details; as the researcher puts it in the Intel video, the details that make us human. FakeCatcher identifies the subtle color fluctuations in the veins of the face caused by the heartbeat, as well as the brightness of the eyes, directly from a video’s pixels. The technology is built on multiple software and hardware layers and operates through a web platform. FakeCatcher runs on 3rd generation Xeon Scalable processors, allowing it to perform up to 72 analyses simultaneously, with 96% detection accuracy according to the company.
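FakeCatcher's exact pipeline is proprietary, but the principle described above is known in the literature as remote photoplethysmography (rPPG): a real heartbeat leaves a faint periodic color fluctuation in skin pixels, which a deepfake generator does not reproduce coherently. The following is a minimal sketch of that idea on a synthetic "video"; all names and parameters here are illustrative, not Intel's.

```python
# Hedged sketch of remote photoplethysmography (rPPG), the principle behind
# heartbeat-based deepfake detection. Not Intel's implementation.
import numpy as np

def ppg_signal(frames):
    """Mean green-channel intensity of a face region, one value per frame."""
    # frames: array of shape (n_frames, height, width, 3), RGB
    return frames[:, :, :, 1].mean(axis=(1, 2))

def dominant_frequency_hz(signal, fps):
    """Strongest frequency in a plausible heart-rate band (0.7-4 Hz)."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic clip: a 1.2 Hz (72 bpm) pulse modulating skin color, plus noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
rng = np.random.default_rng(0)
frames = 120 + pulse[:, None, None, None] + rng.normal(0, 0.1, (len(t), 8, 8, 3))

bpm = dominant_frequency_hz(ppg_signal(frames), fps) * 60
print(round(bpm))  # → 72; a synthesized face would show no such clean peak
```

A real detector would first localize the face, track it across frames, and feed the extracted signals to a classifier rather than simply reading off a frequency, but the core cue, periodic blood-flow color change, is the one sketched here.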

A deepfake feeding machine?

Since they first appeared a few years ago, deepfake videos have multiplied across the web. Whether used to insert a celebrity into a video production or simply for fun, the technology was nevertheless quickly diverted to create content considered dangerous.

Beyond fueling misinformation among the general public, deepfakes are just as devastating in another, all-too-obvious domain: pornography. As early as 2019, cybersecurity company Deeptrace reported that 96% of deepfake videos on the web were created for pornographic use, putting celebrities’ faces into videos without their consent.

These videos directly attack the dignity of the people targeted, very often women. The fight against deepfakes is a noble one, but Intel’s announcement has users asking a question: by making the tool accessible to everyone, wouldn’t Intel also be handing deepfake creators a way to strengthen their own tools? Intel will likely have to address that question during a Twitter Q&A on November 16 at 8:30 p.m. local time.
