What We All Need to Know About Deepfake Synthetic Media

Are You Real? What Will Deepfakes Do to Security? Shocking Examples

You may not know the term synthetic media, but you have likely heard the word deepfake from social media and TV entertainment news. Today the technology goes far beyond Photoshop and celebrity impersonation: real cybercrimes have already occurred, with devastating real-life consequences.

In fact, the US military, Congress and federal law enforcement are now involved. They are paying careful attention, regularly monitoring advances in synthetic media and deepfakes, and it is time we all became aware too, so you can protect yourself, your family and the brand of the organization you serve.

In March 2021, the FBI issued a public service announcement warning private industry, US citizens and their families of a very real threat: foreign governments, including Russia and China, are distributing synthetic profile images, videos and live disguised media. They are creating deepfake journalists, media personalities, IT engineers, business owners and social media influencers across many platforms.

Even on LinkedIn, they are connecting with people through full, convincing profiles. Worse, the images used may not be real humans at all, but synthetic AI-generated images that look, sound and appear completely real. In that instance, the goal was to spread anti-US propaganda for political purposes.

But this has gone far beyond international relations and propaganda. Deepfakes strike at the very core of what we see, hear and believe to be real, and the risk they pose to us individually, to our families and to the brands of the organizations we serve is among the most critical risks we will face in the future.

Later in 2021 and into 2022, the US military, Congress and US national security teams have been aggressively addressing the risks posed by deepfakes. Congressional hearings have been held on the subject, with more scheduled. What is key to understand here is that synthetic media is just getting started. The technology is extremely persuasive today, yet it is still in its infancy and growing exponentially, with new technological advances every single week. Deepfake scams caused an estimated $220 million in damages last year alone, and that figure is expected to increase dramatically this year.


Most recently, in June 2022, the FBI's Internet Crime Complaint Center warned of hundreds of examples of US companies being threatened by cybercriminals using deepfake technology during the HR hiring process.

The advisory said that criminals are using a combination of deepfake videos and stolen personal data to misrepresent themselves and gain employment in a range of work-from-home positions that include information technology, computer programming, database maintenance, and software-related job functions.

Federal law-enforcement officials said in the advisory that they’ve received a rash of complaints from businesses.

This deepfake synthetic media is designed to gain access to company systems directly, under the guise of fraudulently becoming an employee. Once hired, the attacker can obtain what would otherwise be unauthorized access and capture PHI, PII, IP and related information never meant for the public. That information can be used for espionage or extortion, or sold on the dark web, where it commands a premium in Bitcoin.

A Historic Congressional Hearing

Recently, a congressional hearing addressed deepfake threats and whether US national security was prepared to face them. The answers provided were unsettling to many. Senator Ben Sasse of Nebraska questioned Dan Coats, then Director of National Intelligence.

In the hearing, he asked specifically whether national security was prepared to deal with the rash of advancing technology it faced. Coats responded that deepfakes are a major threat, one of the most serious we face today, and stated that US intelligence agencies need complete restructuring in order to effectively address the threats deepfakes pose.

What Does a Deepfake Look Like?

There are literally hundreds of companies offering deepfake technologies, openly advertised on the surface web as well as the dark web. Many even let you create your own demo.

And so we did.

You can select gender, race, nationality and language from more than 60 variants, and customize and program these avatars to say anything you want. Most shocking of all: at no cost you can generate something decent, with human gestures and voice inflections that seem fairly real. For a small fee, one can generate something that completely fools the human eye and ear.

The following is an example of a free demo we obtained, where we simply plugged in a comical script saying she likes our podcast. When you view it, you will notice the hand movements are extremely realistic, the person looks real (she is not), and the lip movements are fairly accurate. We saw other demonstrations from paid versions that were able to fool the human eye and ear completely.

What Does it Mean for Families and Society?

There is a recent example from March 2021 in which deepfakes were implicated in a well-known cyberbullying incident. Cyberbullying is a common issue, especially among younger generations, due to heavy social media use. Rumors spread easily through social media and online platforms, and when coupled with fake images or videos that appear to confirm them, they become far more believable. This can ruin reputations and cause untold psychological harm.

That year, a Pennsylvania mother made international news when she was accused of using deepfake videos and audio to show members of her daughter's cheer squad allegedly drinking, smoking and engaging in sexual activity. These are naturally actions that, if true, could have gotten them cut from the cheerleading squad. Several of the victims came forward about the cyberbullying, and one claimed the mother went as far as encouraging suicide.

Two Well-Known Case Studies

Deepfakes have been used to inflict costly reputational damage on businesses through business email compromise (BEC) attacks.

  1. In 2019, an attacker used deepfake software to impersonate the voice of a German company's CEO and convince another executive there to make a wire transfer of $243,000. It was a form of business email compromise and wire fraud. And it worked.
  2. Additionally, in 2021, criminals used deepfake audio and video to convince an employee of a UAE company to transfer $35 million to a fraudulent account. The company was in the midst of mergers and acquisitions, and the executive believed the payment was being made in support of one of the acquisitions.

Understanding Synthetic Media

The term "deepfake" is commonly used to denote a video in which a person's face and/or body has been digitally altered so they appear to be someone else. This is sometimes done for entertainment or educational purposes, but in reality it is also used maliciously to cause data breaches and spread false information.

Another operative term is generative AI. These are programs that leverage and manipulate text, audio files and images in order to create new content.

The overall blanket term for all of this is synthetic media, denoting the artificial production and modification of data and media by automated means.

Understanding Generative Adversarial Networks (GANs)

The way a deepfake is made is key to its success: the media is improved, over and over, until it is virtually undetectable by human eyes and ears.

A key technology leveraged to produce deepfakes and other synthetic media is the concept of a “Generative Adversarial Network” or GAN. In a GAN, two machine learning networks are used to develop the synthetic content. They do this through an adversarial process.

The first computer network is the “generator.” Data that represents the original content is fed into the generator so it can ‘learn’.

The results are then presented to the second machine learning network, which has also been trained (but through a slightly different approach) to ‘learn’ to identify the characteristics of that type of data.

This second network (the “adversary”) attempts to detect flaws and rejects those it identifies as “fakes.”

These rejected fakes are then 'returned' to the first network, so it can learn and improve. This back-and-forth of machine learning continues until the fakes are virtually undetectable.
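The adversarial loop described above can be sketched with a toy numeric example. This is not a real neural network; it is purely illustrative, using a single number in place of the generator's learned model, to show how repeated rejection by the adversary drives the generator's output toward the real data:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# 'Real' data in this toy example: values near 10.0.
REAL_MEAN = 10.0

def generator(param):
    """Produce a fake sample centred on the generator's current parameter."""
    return param + random.uniform(-0.5, 0.5)

def adversary(sample):
    """Accept only samples that look close enough to the real data."""
    return abs(sample - REAL_MEAN) < 1.0

param = 0.0  # the generator starts out producing obvious fakes
for step in range(200):
    fake = generator(param)
    if not adversary(fake):
        # The rejected fake is 'returned' to the generator,
        # which nudges its parameter toward the real data.
        param += 0.1 if fake < REAL_MEAN else -0.1

# After the loop, param has drifted close to REAL_MEAN: the
# generator's fakes now routinely pass the adversary's check.
print(round(param, 1))
```

In a real GAN both the generator and the adversary (usually called the discriminator) are neural networks trained jointly, but the feedback cycle is the same: each rejection teaches the generator to produce a more convincing fake.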

What Will Deepfake Do To Security?

The cost of deepfake scams exceeded $250 million in 2020, and the technology is still in its early stages. This is the cybersecurity risk of today.

There’s no doubt that as deepfake technology evolves, so will the sophistication of how criminals exploit this tech to attack businesses and consumers alike.

There have been specific, costly recent examples of deepfakes being used in cyberattacks, fraud and unauthorized access to systems hackers should be barred from. So much so that within the last 60 days, the FBI and other European agencies have issued specific warnings to watch for deepfakes.

It goes to the very heart of what is and is not believable. When our own ears and eyes are deceived while we are distracted or busy, at work or at home, it creates a societal cynicism about what can be believed.

Set Cybersecurity Priorities and Ask for Help

Cybersecurity is not the responsibility of the CIO alone. It is the responsibility of the C-suite. Top leadership owns, founded or manages the brand, and when a breach destroys the trust customers have in the organization, the brand is irreparably harmed. That accountability does not fall solely into the lap of the top technology person. They may have the team that manages systems and infrastructure, but it is the executive leaders who set funding and priorities and place security top of mind in the culture.

Those who run the culture of an organization actually own the responsibility for cybersecurity.

If you don't know what steps to take or which priorities to set this year, simply get help. Contact your IT advisor, or get an independent, holistic perspective on your state of risk from our team at All Covered, Konica Minolta, a top-10-rated global cybersecurity firm located right here in the US.

Listen to the podcast episode, What We All Need to Know About Deepfake Synthetic Media, here: https://www.buzzsprout.com/2014652/11215087

For other research discussions, see all our episodes at the Cyber Crime Junkies podcast site: https://cybercrimejunkies.buzzsprout.com


David Mauro, Regional Director

All Covered, Konica Minolta Business Solutions

Contact David Mauro and the All Covered team to learn more: dmauro@allcovered.com

Check out more content to raise awareness along with true crime stories at CYBERCRIMEJUNKIES.COM

Like/Follow on Facebook @CYBER CRIME JUNKIES
