A nonprofit publication of the Kentucky Center for Public Service Journalism

Keven Moore: Risks of ‘deepfakes’ are real, and businesses need protection from phony videos


For the past few years, we have all heard the terms “deep state” and “fake news,” but have you heard of deepfakes? When I first read the term “deepfakes,” I had to do a double-take because I thought it was a spelling error. But it’s a real term, and it can be a real problem.

Deepfakes are manipulated videos or other digital representations, produced by sophisticated artificial intelligence, that yield fabricated images and sounds that appear to be real.

Usually, it’s a video of a person in which their face or body has been digitally altered so that they appear to be someone else. These are typically used maliciously or to spread false information.

Keven Moore works in risk management services. He has a bachelor’s degree from the University of Kentucky, a master’s from Eastern Kentucky University and 25-plus years of experience in the safety and insurance profession. He is also an expert witness. He lives in Lexington with his family and works out of both Lexington and Northern Kentucky. Keven can be reached at kmoore@roeding.com

Deepfake examples on the internet are mind-bogglingly convincing. These videos demonstrate how the deep learning technology available today can completely alter a video while drastically cutting editing time. Many of these videos are so convincing that even trained eyes can have trouble spotting a fake. And there is increasing worry over how this technology could be abused to create realistic doctored videos for malicious purposes.

The underlying technology has been around for years — you can even find a version of it in social media applications. Snapchat’s face-changing filters, for instance, take real-time data and feed it through an algorithm to produce a synthetic image.

Go to the app store on your phone and you’ll find dozens upon dozens of deepfake apps: face swaps, faceovers, refaces, facial animators, face morphers, apps that fatten or age your face, talkers that add different voices, voice changers, body slimmers, GIF makers and more. Most are designed to entertain yourself and others on social media.

However, as the technology has evolved, deepfakes are now able to alter media so well that it’s often difficult to detect that manipulation has occurred. Through the use of artificial intelligence (AI) technology, deepfakes leverage existing audio and video of an individual — all while continuously learning how to produce a more convincing forgery.
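To make the idea above concrete, here is a deliberately simplified linear sketch of the classic face-swap recipe: one shared encoder compresses every face into a common “expression” representation, and each identity gets its own decoder trained to rebuild that person’s face from it. Feeding person A’s frames through person B’s decoder produces the swap. This is an illustrative toy — real systems learn the encoder with deep convolutional networks rather than the hand-built projection used here, and the variable names are mine, not from any real tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face images: 64-dimensional vectors. Each "identity"
# is a fixed mean face; each frame adds a shared "expression" component.
d, k, n = 64, 8, 200

# Orthonormal basis spanning the shared expression subspace.
q, _ = np.linalg.qr(rng.normal(size=(d, k)))
basis = q.T  # shape (k, d), orthonormal rows

def identity_mean():
    # A mean face orthogonal to the expression subspace (a modeling
    # simplification so the linear algebra stays exact).
    m = rng.normal(size=d)
    return m - (m @ basis.T) @ basis

mean_a, mean_b = identity_mean(), identity_mean()
coeffs_a = rng.normal(size=(n, k))  # person A's expressions, frame by frame
coeffs_b = rng.normal(size=(n, k))  # person B's expressions
faces_a = mean_a + coeffs_a @ basis
faces_b = mean_b + coeffs_b @ basis

# Shared encoder: projects any face onto the common expression subspace.
# (Real deepfake systems LEARN this with a deep network.)
encoder = basis.T  # shape (d, k)

def fit_decoder(faces):
    """Fit an identity-specific affine decoder: latent -> face."""
    latents = faces @ encoder
    X = np.hstack([latents, np.ones((len(faces), 1))])  # affine term
    weights, *_ = np.linalg.lstsq(X, faces, rcond=None)
    return weights

def decode(latents, weights):
    X = np.hstack([latents, np.ones((len(latents), 1))])
    return X @ weights

decoder_a = fit_decoder(faces_a)
decoder_b = fit_decoder(faces_b)

# Normal use: encode A's frames, decode with A's decoder -> reconstruction.
recon_a = decode(faces_a @ encoder, decoder_a)

# The "face swap": encode A's frames, decode with B's decoder. The output
# is identity B wearing person A's expressions, frame by frame.
swapped = decode(faces_a @ encoder, decoder_b)
```

Because both decoders read from the same latent space, anything encoded from A is a valid input to B’s decoder — that shared representation is what lets the forgery transfer one person’s motions onto another’s face.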

As a risk management and safety professional, I find the possible risk exposures endless. Wars could literally be started over a deepfake video once you imagine how this technology could be misused.

A convincing deepfake video created by a hostile state to persuade or inspire a nation, its citizens, a movement or a terrorist group is a scary prospect.

Deepfakes have been used to impersonate influential political figures. They can be used to alter both real-time and recorded media. Deepfakes are so sophisticated that they can deceive the general public into thinking a person has said or done something they normally wouldn’t. And, in the hands of a malicious party, deepfakes can be incredibly devastating.

From a risk management perspective, a good deepfake video, created with malicious intent by a competitor or a person with a chip on their shoulder, could quickly ruin a Fortune 100 company or a small business.
  
The Risk of Deepfakes for Businesses

Through the use of phishing and “fake president” scams, cybercriminals have long tried to deceive businesses into giving up sensitive information. Often, these scams are executed using fraudulent email accounts, which, in some cases, can be easy to spot. However, using deepfakes, cybercriminals now have the power to fool even the most careful and perceptive organizations.

With deepfakes, cybercriminals can make a person in a video look and sound like a target company’s CEO, tricking employees into wiring money or sharing sensitive data, among other compromising actions. Specifically, deepfakes can be used to execute social engineering scams or sway public opinion:

Using deepfakes in social engineering scams—Put simply, social engineering is when a malicious party takes advantage of human behavior to commit a crime. Social engineers can gain access to buildings, computer systems, and data simply by exploiting the weakest link in a security system: humans. For example, social engineers could steal sensitive documents or place keyloggers on employees’ computers at a bank—all while posing as fire inspectors from a nearby fire department. Social engineers don’t need expert knowledge of a company’s computer network to break into a business—all it takes is for one employee to give out a password or allow the social engineers into an area they shouldn’t be in. And because deepfake technology has become less expensive and more accessible, tricking an employee into performing a malicious action through social engineering tactics is that much easier. This is especially true given how realistic deepfakes can be.

Using deepfakes to sway public opinion—By deepfaking a company’s CEO or figureheads, a malicious party can easily spread false or potentially damaging information. Through deepfakes, criminals can make key stakeholders say or do just about anything. They could have a CEO share false information, say or do socially unacceptable things or attempt to influence consumer behavior. All of these actions can harm a business’s reputation, sometimes irreparably.

Given the potential harm of deepfakes, it’s crucial that businesses are prepared to protect themselves. If you don’t have a good cyber insurance policy, call your agent and have him or her fully explain the coverages, deductibles, and limits of an existing or new policy.

Far too often, well-educated business owners, CEOs, risk managers, CFOs, and IT professionals develop a false sense of security when it comes to cyber liability. Many feel they are well protected, or covered under an existing policy, without ever reading the exclusions in the fine print.
 
How To Guard Against Deepfakes

When it comes to protecting your business from deepfake schemes, you should consider:

• Training employees — To protect your organization against deepfakes, employee training is critical. Employees should be educated on deepfakes, including what they are and how they may be used against the business. Simply by raising awareness of deepfakes, employees will be better equipped to spot them, allowing your business to respond quickly.

• Utilizing detection software — While AI is used to make deepfakes better and more effective, it can also be used to help detect potential deepfakes. In fact, large corporations such as Facebook and Microsoft use AI and similar software to detect and remove deepfake videos from their platforms. When it comes to deepfakes, the earlier you detect one, the better. This allows you to act quickly to reduce potential harm.
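The AI detection models the bullet above describes are well beyond a short example, but one complementary control a business can stand up today is provenance checking: publish cryptographic digests of official media so that any circulating copy can be verified byte-for-byte. The sketch below uses Python’s standard `hashlib`; the function names and placeholder bytes are mine for illustration, and in practice you would hash the actual video files.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

# The company publishes digests of its official media releases.
# (These bytes are placeholders standing in for a real video file.)
official_video = b"raw bytes of the CEO's real announcement video"
published_digests = {sha256_of(official_video)}

def matches_official_release(candidate: bytes) -> bool:
    """True only if the candidate is byte-for-byte an official release."""
    return sha256_of(candidate) in published_digests

# Any edit -- even a single added byte -- changes the digest entirely,
# so a doctored copy of the video will fail the check.
tampered_video = official_video + b"!"
```

This doesn’t tell you *whether* a video is a deepfake; it tells you whether it is the version your company actually released, which is often the faster question to answer during an incident.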

• Establishing a response strategy — If and when your organization is the target of a deepfake-driven attack, it’s crucial to have a response strategy in place. Such a strategy should center around crisis mitigation. This includes outlining individual responsibilities, determining escalation practices, and communicating response best practices.

• Transferring the risk — Obtain a cyber policy, or beef up an existing one, to cover the losses from a deepfake attack.

Be Safe My Friends!

