Deepfakes Are Quietly Becoming the Next Big Risk for Brands

At first, deepfakes felt like a novelty. A viral video here, a celebrity imitation there. Something interesting, slightly unsettling, but still far removed from everyday business reality.

That illusion is fading quickly.

At the AI in Communications Boot Camp in Zurich, Oliver Hayes OBE, EMEA Head of Counter Disinformation at Edelman, delivered a wake-up call for communicators and business leaders. The world of disinformation is evolving at extraordinary speed, and artificial intelligence is accelerating it.

For companies, the implication is simple. Reputation risk is entering a new era.

The Perfect Environment for Disinformation

The rise of deepfakes is happening in a moment when trust is already fragile.

Across societies, confidence in institutions is declining and people increasingly struggle to determine what information is credible. At the same time, information ecosystems have become fragmented. People rely on smaller communities, influencers and individuals who feel relatable rather than traditional authorities.

This creates fertile ground for manipulation.

A large share of the public now admits to having difficulty determining whether information is true or false. Even more concerning, a significant minority openly approves of spreading disinformation under certain circumstances.

When trust erodes, the barrier to manipulation becomes dramatically lower.

Deepfakes Have Moved From Experiment to Weapon

Just a few years ago, creating convincing fake media required significant technical expertise. Today the tools are widely accessible.

Deepfakes can now take the form of highly realistic videos, synthetic audio recordings or fabricated images that appear indistinguishable from authentic content. Artificial intelligence also enables the mass production of convincing written content and automated social media profiles.

The real transformation lies in scale.

Malicious actors can now generate large volumes of realistic content quickly and deploy coordinated campaigns across platforms within minutes. A narrative can spread globally before an organization even realizes it exists.

Manipulation Is Shockingly Cheap

One of the most striking insights shared during the session involved the economics of disinformation.

The barrier to entry is remarkably low.

For roughly ten euros, a malicious actor can purchase packages that generate fake engagement on social media. These include thousands of artificial views, likes, comments and followers designed to give the illusion that a narrative is gaining traction.

In other words, influence can be manufactured.

Once a piece of manipulated content appears to be trending, algorithms and audiences can amplify it further, creating a feedback loop that gives false narratives the appearance of legitimacy.

Why Brands Are Increasingly Targets

Deepfakes are no longer limited to politics. Businesses are becoming targets as well.

A manipulated video of a CEO announcing layoffs could trigger panic among employees and investors. A fabricated statement about a sensitive social issue could damage a brand overnight. Even a synthetic audio recording could be used to initiate financial fraud.

Beyond reputational damage, deepfakes can also affect stock prices, fuel social media backlash or create confusion among stakeholders.

For communications leaders, this represents a new category of crisis.

The challenge is not simply responding to negative coverage. It is responding to events that may never have happened in the first place.

The First Step Is Awareness

Despite the growing threat, many organizations remain unprepared.

One of the first steps Hayes recommends is understanding where vulnerabilities exist. This means analyzing the digital environment around the brand, identifying topics that could become targets for manipulation and assessing how quickly an organization could detect and respond to disinformation.

Monitoring the information ecosystem has become essential. Companies must track emerging narratives and identify coordinated campaigns early before they escalate.
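For teams building such monitoring, one naive but illustrative signal of coordination is many accounts posting near-identical text in a short window. The sketch below shows the idea only; the function name and threshold are hypothetical, and real detection systems use far richer signals (timing, account age, network structure):

```python
from collections import Counter

def flag_coordinated_narratives(posts, min_duplicates=3):
    """Toy heuristic: flag message texts repeated by many accounts,
    which can hint at coordinated amplification. Illustrative only,
    not a production detection system."""
    # Normalize whitespace and case so trivial variations collapse.
    normalized = [" ".join(p.lower().split()) for p in posts]
    counts = Counter(normalized)
    return [text for text, n in counts.items() if n >= min_duplicates]

posts = [
    "Brand X is hiding the truth!",
    "brand x is hiding the truth!",
    "Brand X is hiding   the truth!",
    "Great quarter for Brand X.",
]
print(flag_coordinated_narratives(posts))
# flags the one narrative repeated three times
```

In practice, this kind of check would run continuously against social listening feeds, with flagged narratives escalated to the communications team for review.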

Preparation Is the New Crisis Strategy

Traditional crisis communications plans were designed for real-world events.

In the age of deepfakes, organizations also need to prepare for synthetic crises.

This involves developing playbooks for handling manipulated content, training executives on how deepfakes might be used against them and running simulations that test response capabilities.

Preparation dramatically reduces the time required to react when a fabricated narrative begins to spread.

Fighting Disinformation Before It Starts

One of the most powerful strategies discussed during the session is something known as prebunking.

Instead of reacting to misinformation after it spreads, organizations can anticipate likely narratives and prepare audiences in advance.

When stakeholders understand how manipulation works, they become less susceptible to it.

In a world where artificial intelligence can fabricate convincing stories at scale, proactive trust building becomes one of the most valuable defensive strategies.

The New Responsibility of Communicators

For decades, reputation management focused on shaping narratives.

Today the role of communicators is expanding.

They must now defend organizations against narratives that are artificially generated, algorithmically amplified and often designed to appear credible.

The uncomfortable reality is that the next reputational crisis may not start with a real event. It may begin with a video, a voice or a message that looks authentic but never actually happened.

And in the age of artificial intelligence, that possibility is becoming increasingly real.

Dave Fleet, Managing Director and Global Head of Digital Crisis at Edelman, will explore the growing risk of deepfakes and how organizations should respond at the Crisis Communications Boot Camp, taking place from 16 to 17 April in Chicago, where communicators will examine how disinformation and AI-driven manipulation are reshaping crisis preparedness.

