AI-powered manipulation is no longer just a theorized threat. It is here. Steps are being taken to protect people and organizations from malicious AI-generated content, but there is more you can do proactively to maintain trust in your digital ecosystem.
Deepfakes threaten free and fair elections
In August, Elon Musk shared a deepfake video of Vice President Kamala Harris on X. The post has been viewed more than 100 million times and drew heavy criticism. Musk called it satire, but experts condemned it as a violation of X’s own synthetic and manipulated media policy. Others warned of AI’s potential to disrupt free and fair electoral processes and called for stronger national responses to stop the spread of deepfakes.
2024 is a critical election year, with nearly half of the world’s population heading to the polls. Moody’s has warned that AI-generated deepfake political content could threaten the integrity of elections. According to the 2024 TeleSign Trust Index, 72% of respondents are concerned that AI-generated content will undermine future elections, a sentiment shared by voters around the world.
The risks of AI manipulation extend to all areas of society.
Sowing fear and suspicion in global institutions
In June, Microsoft reported that a network of Russia-affiliated actors was running a campaign to undermine France, the International Olympic Committee (IOC), and the Paris Games. Microsoft found that a well-known Kremlin-linked group had created a deepfake of Tom Cruise criticizing the IOC, and it accused the same group of producing convincing deepfake news reports designed to stoke fears of terrorism.
It’s important to remember that this is not the first time bad actors have tried to manipulate perceptions of a global organization. It is even more important to separate the familiar part of the problem from the genuinely dangerous part.
The real problem is not simply that generative AI has democratized the ability to create believable fake content easily and cheaply; it’s that adequate protections are not in place to stop that content from spreading. This, in turn, has effectively democratized the ability to mislead, confuse, and corrupt at a global scale.
You, too, may be helping deepfakes scale
Deepfakes spread in two main ways: through fake accounts, and through what the cybersecurity world calls account takeover.
On January 9, hackers took control of a social media account owned by the Securities and Exchange Commission (SEC). The attackers quickly posted a false regulatory announcement about Bitcoin exchange-traded funds, briefly sending the price of Bitcoin soaring.
Now imagine another, not at all far-fetched, hypothetical. A bad actor takes over the official account of a trusted national journalist, something that is relatively easy for fraudsters to do when proper authentication measures are not in place. Once inside, they could post misleading deepfakes of candidates days before voting begins, or of a CEO announcing market-moving news.
Because these deepfakes come from legitimate accounts, they spread with a level of credibility that can change minds, influence elections, and move financial markets. And once misinformation is out there, it’s difficult to put the genie back in the bottle.
How can we stop the spread of AI manipulation?
Significant efforts are underway in the public and private sectors to protect people and organizations from these threats. For example, the Federal Communications Commission (FCC) has banned the use of AI-generated voices in robocalls and proposed disclosure rules for AI-generated content used in political ads.
Big tech companies are also making progress. Meta and Google are working to quickly identify, label, and remove malicious AI-generated content, and Microsoft is investing in efforts to curb the creation of deepfakes.
But the stakes are too high to sit back and wait for comprehensive national or global solutions. And why wait? There are three important steps available today that remain underutilized.
First, social media companies need to strengthen onboarding to prevent fake accounts. With an estimated 1.3 billion fake accounts across platforms, more robust authentication is needed. By requesting both a phone number and an email address, and by using technology to analyze risk signals, platforms can improve fraud detection and deliver a more secure user experience.
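To make the idea concrete, here is a minimal sketch, in Python, of how an onboarding flow might combine phone and email verification with other risk signals. The signal names, weights, and thresholds are illustrative assumptions, not any platform’s actual system.

```python
# Hypothetical onboarding risk scoring. All signals, weights, and
# thresholds below are illustrative assumptions for this sketch.

from dataclasses import dataclass

@dataclass
class SignupSignals:
    phone_verified: bool              # one-time code confirmed on the phone number
    email_verified: bool              # confirmation link clicked from the email
    phone_is_voip: bool               # disposable/VoIP numbers carry higher risk
    email_domain_is_disposable: bool  # throwaway email providers carry higher risk
    signups_from_ip_last_hour: int    # bulk registration is a fake-account signal

def risk_score(s: SignupSignals) -> float:
    """Combine verification outcomes and risk signals into a 0-1 score."""
    score = 0.0
    if not s.phone_verified:
        score += 0.35
    if not s.email_verified:
        score += 0.25
    if s.phone_is_voip:
        score += 0.15
    if s.email_domain_is_disposable:
        score += 0.15
    if s.signups_from_ip_last_hour > 5:
        score += 0.10
    return min(score, 1.0)

def onboarding_decision(s: SignupSignals) -> str:
    score = risk_score(s)
    if score >= 0.6:
        return "block"      # likely fake-account farm
    if score >= 0.3:
        return "step-up"    # require additional verification
    return "allow"
```

Real systems weigh far more signals (device fingerprints, behavioral patterns, number-reputation data), but the principle is the same: no single check decides the outcome, and weak signals compound into a step-up or a block.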
Second, AI and machine learning can be deployed in the fight against AI-powered fraud. 73% of people around the world agree they would have more confidence in election results if AI were used to counter election-related cyberattacks and to identify and remove election misinformation.
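As a toy illustration of what "AI fighting AI" can look like at the simplest level, the sketch below trains a text classifier to triage suspicious posts for human review. It assumes scikit-learn and a labeled corpus; the sample posts and the 0.5 threshold are placeholders, and production systems use far larger datasets, richer features, and human reviewers for anything flagged.

```python
# Toy misinformation triage with scikit-learn. The training data is a
# stand-in; this is a sketch of the approach, not any platform's system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = flagged misinformation, 0 = benign.
posts = [
    "BREAKING: polling stations closed, vote by text instead",
    "Official notice: the election has been moved to next week",
    "Here is where to find your nearest polling station",
    "Reminder: polls are open 7am to 8pm on election day",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score an incoming post; a high probability routes it to human reviewers.
incoming = "Urgent: you can now vote by replying to this message"
prob = model.predict_proba([incoming])[0][1]
if prob > 0.5:
    print(f"route to review queue (risk={prob:.2f})")
```

The design point is the routing, not the model: automation narrows a firehose of content down to a reviewable queue, and people make the final call.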
Finally, more public education is needed to help people better understand the risks. Cybersecurity Awareness Month, held each October in the United States, is one example of the public-private collaboration needed to raise awareness of the importance of cybersecurity. There also needs to be more emphasis on building a security-conscious workplace culture: according to a recent CybSafe report, 38% of employees admit to sharing sensitive information without their employer’s knowledge, and 23% skip security awareness training because they believe they “already know enough.”
Trust is a precious resource, and it deserves better protection in the digital world. An ounce of prevention is worth a pound of cure, and it’s time for everyone to take their medicine. Failing to do so will put at risk the health of our digital infrastructure and the trust we place in our democracies, economies, institutions, and each other.