But video has long seemed more trustworthy; after all, it is hard to argue with a recording. At least, it was. If video can be 'photoshopped' just as easily as a photo, it becomes hard to believe our own eyes.
Misuse Of AI Creates Mistrust
With ever-improving AI-driven video editing technology, altered videos known as 'deepfakes' are becoming easier to create and thus more prolific. By taking multiple images of a person's face, loading them into editing software, and directing the software where to superimpose that face, creators can stitch one person's face onto a completely unrelated video with increasing realism.
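At its simplest, that superimposing step is a compositing operation: pixels from a source face are blended over a region of a target frame. The sketch below is a toy illustration of that idea in Python with NumPy; real deepfake tools learn the mapping with neural networks, and the function name, arguments, and alpha-blending approach here are illustrative assumptions, not any particular tool's API.

```python
import numpy as np

def paste_face(target_frame, face_patch, top_left, alpha=0.8):
    """Toy version of the 'stitching' step: blend a source face patch
    onto a target frame at a given location.

    target_frame: 2-D grayscale image (numpy array)
    face_patch:   smaller 2-D image to superimpose
    top_left:     (row, col) where the patch's corner lands
    alpha:        blend weight; 1.0 pastes the patch opaquely
    """
    r, c = top_left
    h, w = face_patch.shape[:2]
    region = target_frame[r:r + h, c:c + w].astype(float)
    blended = alpha * face_patch.astype(float) + (1 - alpha) * region
    out = target_frame.copy()
    out[r:r + h, c:c + w] = blended.astype(target_frame.dtype)
    return out
```

The hard part of a convincing deepfake is not this paste, but making the pasted face match the target's pose, lighting, and motion frame after frame, which is exactly where detectable seams appear.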
While this may be the next fodder for hilarious YouTube videos of your friend's face on the head of your family cat or the iguana you saw at the zoo, this AI technology has far-reaching, and frightening, implications. From manufacturing scandalous circumstances to enabling criminal activity, substituting a face into a video may be the worst type of identity theft yet.
In an era of disinformation and 'fake news' accusations, the ability to generate truly 'fake news' through 100% fabricated video is alarming. Not only are such videos misleading; they add confusion at a time when the veracity of mainstream media is already being called into question.
AI Also Restores Trust
Creating a deepfake employs AI to analyze static images and predict how they would move together in the fake video. However, while deepfake software does a convincing job of stitching the face onto an existing video, it cannot (yet) alter the movements or non-facial characteristics of the rest of the frame. The subtle glitches this leaves behind allow deepfake-detection software to sort the real from the fake.
So while AI is at the heart of deepfake technology, AI is also to thank for deepfake detection: the detection software analyzes everything else in the video file to catch those glitches.
New digital forensics software packages use machine learning to study the movements, speech patterns, and facial expressions of a subject in verified genuine footage, then compare those traits against the unverified video, the potential deepfake.
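The comparison idea can be sketched very simply: build a statistical profile of how a person moves in verified footage, then check whether an unverified clip falls within that profile. The sketch below is a minimal, assumed version of that logic using a single tracked facial landmark as a stand-in for the rich features real forensics tools extract; the function names and the three-standard-deviations tolerance are illustrative choices, not any product's method.

```python
import numpy as np

def motion_profile(landmark_positions):
    """Summarize how a face moves: mean and spread of the
    frame-to-frame displacement of one tracked landmark.

    landmark_positions: array of shape (frames, 2) holding the
    hypothetical (x, y) position of a facial landmark over time.
    """
    steps = np.diff(landmark_positions, axis=0)
    magnitudes = np.linalg.norm(steps, axis=1)
    return magnitudes.mean(), magnitudes.std()

def is_consistent(verified, unverified, tolerance=3.0):
    """Crude check: is the unverified clip's average motion within
    `tolerance` standard deviations of the verified footage's?"""
    mean_v, std_v = motion_profile(verified)
    mean_u, _ = motion_profile(unverified)
    return abs(mean_u - mean_v) <= tolerance * max(std_v, 1e-9)
```

Real systems compare many such signals at once (head pose, blink rate, lip-sync timing, speech cadence), but the underlying principle is the same: a deepfake tends to move in ways the genuine subject does not.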
Other software examines specific areas of the face, such as the forehead or neck, where most of the digital 'stitching' takes place. The program analyzes the video pixel by pixel, looking for inconsistencies too small or too fast for the human eye to catch.
Technological Arms Race
Given how easily suspect content can be broadcast, verifying images and videos is more critical now than ever. As the ability to create deepfakes improves, detection software must stay one step ahead.