Cyber security companies race to combat ‘deepfake’ technology
Concern grows that criminals could use false video and audio to target businesses
When a chief executive called a finance employee to demand they make an urgent $10m wire transfer to a supplier, the staffer promptly carried out the task, even though it went against the company’s protocols. After all, it was the boss on the phone. But was it?
So far this year, as many as three companies have fallen victim to fraudsters using manipulated media — or “deepfake” technology — to try to con them into making funds transfers, according to cyber security group Symantec.
In one case, $10m was wired to criminals who used artificial intelligence to impersonate an executive down the phone.
Deepfaking, in which content is doctored to give uncannily realistic but false renderings of people, has attracted growing attention online with the emergence of a number of viral videos.
The technique typically involves a “face swap” — replacing one person’s face with that of another — or “lip syncing”, in which a subject’s mouth is seen to move along with an audio track that has been laid over it. Audio, too, can be faked, by training AI programs to mimic a person’s voice from existing recordings.
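As a crude illustration of the face-swap idea, the sketch below detects a face in two images with OpenCV and pastes one face region over the other. It is a toy example only: real deepfakes are produced with machine-learning models such as autoencoders or GANs, and the file names here are placeholders.

```python
# Toy "face swap": find a face in each image with OpenCV's bundled Haar
# cascade, then paste one face region over the other. Real deepfakes use
# neural networks, not this crude copy-paste; file names are placeholders.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return (x, y, w, h) of the first detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

source = cv2.imread("person_a.jpg")   # face to transplant (placeholder path)
target = cv2.imread("person_b.jpg")   # image to alter (placeholder path)

src_box, dst_box = first_face(source), first_face(target)
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Resize the source face to fit the target's face box and overwrite it.
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    target[dy:dy + dh, dx:dx + dw] = face
    cv2.imwrite("swapped.jpg", target)
```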
Much of the debate around the potential power of the nascent technology has focused on whether it can be used as a means for spreading political disinformation. But as the techniques, which are developed via complex machine learning, have become cheaper and more readily available online, there is evidence that they are starting to be adopted by fraudsters in commercial settings.
And as a result, several cyber security companies are racing to find ways to thwart potential attacks. “Not only are deepfakes evolving rapidly and improving their level of realism, but also the barrier to entry to create and distribute deepfakes is getting lower,” said Michael Farrell, executive director of the Institute for Information Security and Privacy at Georgia Tech. “There is a significant opportunity for cyber security companies to play in this space when it comes to fraud prevention,” he added.
‘Everybody has to have a plan’
California-based Symantec said last month that it had recorded three instances of deepfaked audio attacks on corporations in 2019. Other potential attacks could include market manipulation — for example creating a video of a chief executive announcing a fake merger or false earnings in order to shift the share price — or brand sabotage. While there are no confirmed reports of these latter attacks to date, cyber security experts say companies should be on high alert.
“Deepfakes . . . will allow cyber criminals to up their game in terms of social engineering,” said John Farley, who heads up the cyber practice of global insurance brokerage Gallagher. “There’s a whole host of nightmare scenarios.”
“Everybody has to have a plan for what’s going to happen in the age of deepfakes,” said Darren Shou, vice-president of research labs at Symantec.
“Every single bank that I’ve talked to recently, we’ve had [a] discussion [about this], and quite a few non-financial sectors are having it too,” he added, citing board-level conversations with healthcare and retail firms.
Given the increasingly complex nature of deepfake technologies, security companies and academics are exploring a wide range of techniques to combat them.
Some are using AI to detect discrepancies in fake media based on an understanding of how deepfakes are created. Tell-tale signs for videos, for instance, can include changes in pixels around a subject’s mouth, or inconsistencies in the generation of shadows or the angles of a person’s face.
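As a hedged sketch of that frame-level approach, the snippet below measures frame-to-frame pixel change in the lower half of the first detected face — roughly the mouth region — and flags unusually large jumps. The video path and the two-sigma threshold are illustrative assumptions, not a production detector.

```python
# Crude temporal-consistency check: track pixel change in the mouth region
# between consecutive frames and flag statistical outliers. The video path
# and the 2-sigma threshold are assumptions made for illustration.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def mouth_region(frame):
    """Return the lower half of the first detected face, in grayscale."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y + h // 2 : y + h, x : x + w]

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder path
diffs, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    region = mouth_region(frame)
    if region is not None and prev is not None and region.shape == prev.shape:
        diffs.append(np.mean(cv2.absdiff(region, prev)))
    prev = region
cap.release()

if diffs:
    mean, std = np.mean(diffs), np.std(diffs)
    spikes = [i for i, d in enumerate(diffs) if d > mean + 2 * std]
    print(f"{len(spikes)} suspicious frame transitions out of {len(diffs)}")
```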
But this technology is still very new and developers face hurdles. “Some detection methods are really accurate but right now there’s not enough data out there to build a data set for the detection model,” said Matthew Price, principal research engineer at Baltimore-based ZeroFOX, which last week launched its own video analysis tool for detecting deepfakes.
Mr Shou of Symantec said his company was researching ways of mapping the “provenance” of video and audio media — whether it came from reputable websites originally and how it had since travelled online — as a sign of whether it might be fake.
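A minimal sketch of what such provenance scoring could look like appears below. The reputable-domain list and the scoring weights are invented for illustration; Symantec has not published its method.

```python
# Hypothetical provenance scoring: rate a clip by where it first appeared
# and how many times it was re-hosted before reaching us. Domain list and
# weights are invented for illustration only.
from urllib.parse import urlparse

REPUTABLE_DOMAINS = {"reuters.com", "apnews.com", "bbc.co.uk"}  # assumption

def provenance_score(sighting_urls):
    """Return a 0-1 score from a chronological list of URLs where the
    clip was observed; higher means a more trustworthy origin."""
    if not sighting_urls:
        return 0.0
    origin = urlparse(sighting_urls[0]).netloc.removeprefix("www.")
    score = 0.8 if origin in REPUTABLE_DOMAINS else 0.2
    # Penalise media re-hosted many times: each hop is a tampering chance.
    hops = len(sighting_urls) - 1
    return max(0.0, score - 0.05 * hops)

print(provenance_score([
    "https://www.reuters.com/video/abc",
    "https://example-mirror.net/clip",
]))
```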
The company is also exploring whether 3D-printed glasses designed to evade facial recognition, by tricking the software into misclassifying the wearer, could be repurposed to help chief executives prevent deepfakes being made of them.
Meanwhile, start-ups such as ProofMode and Truepic offer technology that stamps photos with a watermark so their authenticity can be verified later. The latter is partnering with major chipmaker Qualcomm to potentially build these capabilities into mobile phone hardware.
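One way such a stamp can work in principle is a keyed signature over the photo’s bytes, so that any later modification becomes detectable. The sketch below uses a plain HMAC for illustration; it is an assumption about the general approach, not ProofMode’s or Truepic’s actual scheme.

```python
# Illustrative capture-time stamp: sign a photo's bytes with a keyed HMAC
# and verify it later. The key and file name are placeholder assumptions;
# real products use their own attestation schemes.
import hashlib
import hmac

SECRET_KEY = b"device-provisioned-key"  # assumption: provisioned per device

def stamp(photo_bytes: bytes) -> str:
    """Return a hex signature to store alongside the photo."""
    return hmac.new(SECRET_KEY, photo_bytes, hashlib.sha256).hexdigest()

def verify(photo_bytes: bytes, signature: str) -> bool:
    """True if the photo still matches its signature, i.e. is untampered."""
    return hmac.compare_digest(stamp(photo_bytes), signature)

original = open("capture.jpg", "rb").read()  # placeholder path
sig = stamp(original)
assert verify(original, sig)
assert not verify(original + b"tamper", sig)
```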
Rumman Chowdhury, who leads “responsible AI” consulting at Accenture, said another option was to look at preventive measures, such as requiring those who publish code for creating deepfakes to build in verification measures.
Hype and reality
Still, some remain sceptical about the dangers of deepfakes, saying that the technology is in its infancy and that the risks — particularly to businesses — are in danger of being prematurely hyped.
“Theoretically, it’s a threat,” said Camille François, chief innovation officer at Graphika, where she leads social media threat investigations. “[But] people need to be confronted with the clunkiness of the technology.
“CEOs are not going to be the first victims of this; women are,” she added, pointing to the technology’s use by some to superimpose women’s faces on to pornography.
Meanwhile, others warn of the limits of trying to combat the trend. Ms Chowdhury said that even the best techniques to detect deepfakes could be undermined by human nature.
“We need to be cognisant that no matter how many tools we put out there, there will always be a certain percentage of people who will not believe a verification tool,” she said. “I don’t think we should discount people’s desire to consume fake media.”
Nevertheless, cyber security companies are pushing forward with their work. “The biggest concern is reputation damage,” said Mr Price of ZeroFOX. “We are going to see some interesting scams coming out.”