Deepfakes, and how to avoid them
Evidence in court has traditionally consisted of paper documents and the oral evidence of witnesses. But with the rise of portable technology, almost everyone can now take a picture, shoot a video or record a voice clip. These contemporaneous records of events are increasingly being taken into court and used as key pieces of evidence. But is seeing really believing? Litigants and legal advisers need to be aware that things are not always as they seem.
What are deepfakes?
Deepfakes are fabricated videos or voice clips realistic enough to fool even the savviest viewer. They are created using AI trained on existing images or recordings of a person speaking, and the end product can be highly convincing “evidence” of something that never actually happened.
This technology has been used to create viral videos such as the one that appears to show Barack Obama insulting Donald Trump. However, the increasing availability of the technology involved in creating deepfakes means a growing risk of them slipping into evidence.
Family lawyer Byron James has recently drawn attention to the use of deepfakes in litigation. A voice clip of a threatening message apparently left by his client was lodged with the court. Despite featuring the same accent, tone and use of language as his client, the recording was ultimately proven to be a deepfake: the client had never left the message.
Another risk comes from the potential for ultra-realistic masks to fool witnesses, even at close range. A recent article on the use of such masks draws attention to research indicating that witnesses are not very good at spotting when a mask is being worn, whether in photographs or in person. In a further example, a man was arrested in the US after being identified in CCTV footage by his own mother. It turned out that the real culprit had been wearing an ultra-realistic mask and the individual originally accused had no involvement in the crime.
Spotting the fakes
Concerns about the rise in deepfakes have prompted AI firms to act, with many now working on “deepfake detectors” and online security systems offering protection against the technology.
Researchers from UC Berkeley and the University of Southern California have created a tool that can detect synthetically generated videos with at least a 92% success rate. It currently identifies deepfakes of political figures by tracking minute facial movements that are unique to each individual. Fortunately, deepfakes are not yet sophisticated enough to mimic these real-life movements perfectly, which is what allows the tool to tell fake from real.
However, the detector relies on large volumes of existing footage to learn an individual's unique quirks, so it is of little use to individual litigants, who are unlikely to have hundreds of hours of footage of themselves available for analysis.
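To give a flavour of the idea behind movement-based detection, the sketch below tracks facial landmarks from frame to frame and summarises how much each one moves. It is not the Berkeley/USC tool, which is far more sophisticated; it is a rough illustration assuming the open-source OpenCV and MediaPipe libraries are installed, and the video file name is hypothetical.

```python
# A rough illustration of movement-based detection, NOT the Berkeley/USC tool:
# track facial landmarks frame by frame and summarise how much each one moves.
# Assumes the open-source OpenCV and MediaPipe libraries; the file name below
# is hypothetical.
import cv2
import mediapipe as mp
import numpy as np

def landmark_movement_profile(video_path: str) -> np.ndarray:
    """Return the average per-frame displacement of each facial landmark."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    capture = cv2.VideoCapture(video_path)
    previous, displacements = None, []

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face found in this frame
        points = np.array([(p.x, p.y) for p in result.multi_face_landmarks[0].landmark])
        if previous is not None:
            # How far each landmark moved since the last frame with a face.
            displacements.append(np.linalg.norm(points - previous, axis=1))
        previous = points

    capture.release()
    face_mesh.close()
    if not displacements:
        raise ValueError("No faces detected in the clip")
    return np.asarray(displacements).mean(axis=0)

# A profile built from a disputed clip could then be compared with profiles
# built from known genuine footage of the same person; a markedly different
# movement "signature" would merit closer scrutiny.
profile = landmark_movement_profile("clip_lodged_with_court.mp4")
print(profile.mean())
```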
Additionally, experts anticipate that almost as soon as cybersecurity researchers find a way to detect the fakes, the creators will adapt their methods to evade detection, rendering each new detector obsolete.
This game of cat and mouse was first seen in 2018, when a detector was launched that recognised deepfake videos by identifying a lack of blinking in the individuals featured. Shortly after the detector was announced, deepfake AI was updated to make individuals blink, leaving this method of detection largely defunct.
However, all is not lost; technology may not be needed to spot a deepfake. A recent study showed that people can already spot fakes 88% of the time. An increased awareness of the potential for deepfakes and how to spot them can only increase this number – and there are several clues to look out for.
- Does the person say something strange? Do they use a turn of phrase you wouldn’t expect? In a recent French case, a fraudster using a hyper-realistic mask was only uncovered after a minor linguistic slip-up: the use of "vous" rather than "tu".
- Do they promise something that never arrives? A UK executive was recently convinced by a deepfake of his CEO's voice calling to instruct him to send $250,000 to a fraudulent account. The fraud was only discovered when the executive realised that the reimbursement the CEO had promised never appeared.
- Do the facts add up? In the case above, the executive's suspicions were first raised when he realised he had been called from an Austrian number; his CEO was based in Germany.
- Do you have an original electronic copy of the image or clip? Seeing a file's original metadata can highlight whether it has been altered. If a video or recording is lodged with the court, ask to see an electronic copy of the original (a brief illustration of how embedded metadata can be inspected follows this list).
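By way of illustration, the short sketch below reads the embedded EXIF metadata of an image using the Python library Pillow; the file name is hypothetical. Missing or stripped metadata is not proof of tampering, but fields such as the capture date and the editing software named can be checked against the account given of how the file was created.

```python
# A minimal sketch of inspecting an image file's embedded EXIF metadata with
# the Pillow library; the file name is hypothetical. Missing or inconsistent
# metadata is not proof of tampering, but it is a reason to ask for the original.
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("exhibit_photo.jpg")
exif = image.getexif()

if not exif:
    print("No EXIF metadata found - ask for the original file.")
else:
    for tag_id, value in exif.items():
        # Translate numeric EXIF tag IDs into readable names where known.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
    # Fields such as DateTime and Software show when the file was last saved
    # and which application touched it, and can be checked against the account
    # given of how and when the image was made.
```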
Fortunately, instances of deepfakes in court actions are currently rare, but that doesn't mean the legal sector should be complacent. Ever-improving technology means this is an area that is only likely to develop further, and something that litigants and legal advisers will need to look out for in appropriate cases.