October 8, 2020

Technology, Disinformation, and Elections


KEY TAKEAWAYS

  • Foreign adversaries — including Russia, China, and Iran — use social media to spread disinformation in attempts to divide Americans, influence our elections, and advance their agendas.
  • Since the 2016 election, social media and technology companies have taken steps to try to fight disinformation on their networks.

Foreign adversaries use social media as a tool to spread disinformation, divide Americans, and advance their agendas. Russia, as part of its long-term agenda of weakening democracies globally, used social media to interfere in the 2016 election. Social media and other technology companies, alongside government efforts to secure election systems and hold bad actors accountable, have been trying to prevent adversaries from repeating that scenario ever since.

[Graphic: Social Media Election Information. Source: Forbes, Twitter]

The threat is not just from Russia. The Office of the Director of National Intelligence warned Congress in early March that “China, Iran, North Korea, Cuba, and even non-state actors all have the opportunity, means, and potential motive to interfere in the 2020 elections as a way to achieve their goals.”

Disinformation Efforts: A Threat to Democracy

Adversaries have a long history of using disinformation against the U.S. and exploiting divisions within society. Disinformation campaigns pose broad threats to democracies around the world and extend beyond electoral politics. Countries like Russia, China, and Iran use false information as an inexpensive way of deflecting attention from their own problems and undermining the global democratic system. The Trump administration designated countering these disinformation campaigns as a priority in the 2018 National Cyber Strategy and continues to pursue aggressive action against the threat.

The Senate Select Committee on Intelligence’s exhaustive, bipartisan report on Russian interference in the 2016 election identified several themes in the Russian disinformation playbook:

  • a high-volume, multi-channel strategy to overwhelm the target audience;
  • merging covert operations, such as hacking and disinformation, with overt propaganda such as state-funded media;
  • spreading disinformation faster than bureaucracies can respond;
  • using automated accounts, bots, and paid internet “trolls”; and
  • manipulating real-world actions and events.

The committee found the press to be a common target of Russia’s efforts, which included reducing faith in the media. As one expert cited by the Intelligence Committee explained it: “The goal for Russia is to achieve a state in which the average media consumer says ‘there are too many versions of events, and I’ll never know the truth.’” The report also found “by far, race and related issues were the preferred target of the information warfare campaign designed to divide the country in 2016.”

Social Media in Disinformation Campaigns

Technology has transformed nearly every aspect of American society, and elections are no exception. High-speed wireless internet, sophisticated search engines, and smartphones have put more knowledge about candidates and issues within easy reach of voters than ever before. Social media companies like Facebook and Twitter have enabled millions of Americans to exercise their free speech rights and participate in public discourse.

The Intelligence Committee found, however, that the ways social media can help increase civic engagement also lend themselves to exploitation, “which frequently materializes as the dissemination of intentionally false, misleading, and deliberately polarizing content.”

[Graphic: “A Lie Can Travel Around the World While the Truth Is Lacing Up Its Boots”]

Many technology companies have instituted policies, adjusted their algorithms and other software, hired cybersecurity and foreign affairs experts, and taken other steps to try to reduce the likelihood their platforms are exploited in 2020 the way they were in 2016. They are working more closely with one another and with the government to identify cross-platform activity. Last month, Facebook and Twitter, acting on information from the FBI, suspended accounts linked to the Internet Research Agency, a Russian troll farm responsible for much of Russia’s disinformation activities in the 2016 election.

Facebook

Seven in 10 U.S. adults use Facebook, and about four in 10 say they get news from the service. The site influenced the 2016 election on a scale far greater than any other social media platform. Misinformation was such a problem on Facebook in 2016 that, according to one analysis cited in the Intelligence Committee report, in the final three months leading up to Election Day, the top-performing intentionally false stories on Facebook received more shares, reactions, and comments than the top real news stories from 19 major news outlets.

In February, Facebook released a plan to help prevent its platform from being used in the same manner in the 2020 elections. The company focuses its efforts on fighting influence operations, increasing transparency around ads, and reducing the spread of misinformation such as manipulated videos. Facebook also has assigned more than 35,000 employees to safety and security issues. It says that last year it took down more than 50 networks of fake accounts worldwide, many ahead of major elections. In August, Facebook reported it had removed three additional networks of fake accounts, pages, and groups. Two of them — one from Russia and one from the U.S. — targeted people outside of their country. The third network was based in Pakistan and focused on audiences in Pakistan and India.
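
Facebook has not published how it detects these networks, but one signal researchers commonly describe is many accounts posting identical content at nearly the same time. The Python sketch below is a minimal illustration of that idea, assuming a simple list-of-tuples data format; it is not Facebook’s actual system.

    from collections import defaultdict

    def flag_coordinated_posts(posts, window_seconds=60, min_accounts=5):
        """Flag identical texts posted by many distinct accounts in a short window.

        posts: list of (account_id, timestamp_seconds, text) tuples.
        Returns the set of texts that at least `min_accounts` accounts
        posted within `window_seconds` of one another.
        """
        by_text = defaultdict(list)
        for account, ts, text in posts:
            by_text[text].append((ts, account))

        flagged = set()
        for text, events in by_text.items():
            events.sort()  # order by timestamp
            start = 0
            for end in range(len(events)):
                # Shrink the window until it spans at most window_seconds.
                while events[end][0] - events[start][0] > window_seconds:
                    start += 1
                accounts = {acct for _, acct in events[start:end + 1]}
                if len(accounts) >= min_accounts:
                    flagged.add(text)
                    break
        return flagged

    # Example: five accounts posting the same text within one minute.
    demo = [(f"acct{i}", 10 * i, "Share before they delete this!") for i in range(5)]
    print(flag_coordinated_posts(demo))  # {'Share before they delete this!'}

Real investigations combine many such signals, including shared infrastructure, account-creation patterns, and behavioral similarity, rather than relying on any single heuristic.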

Twitter

Although only around 22% of Americans use Twitter, its influence is significant because journalists, politicians, governments, and celebrities are active on the platform. Americans sent roughly one billion tweets and retweets about the election between the first primary debates in August 2015 and Election Day 2016.

Twitter has long had a problem with misinformation proliferating on its platform. Researchers at MIT tracked more than 125,000 news stories shared on Twitter by roughly 3 million people over 11 years. They found: “Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information.”
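
The study’s terms map onto measurable properties of retweet cascades: depth (the longest retweet chain), size (the total users who shared a story), and breadth (the most users at any single level of the chain). As a rough Python illustration of those metrics, not the researchers’ code, the sketch below computes them for a cascade represented as a hypothetical parent-child edge list.

    from collections import defaultdict

    def cascade_metrics(edges):
        """Compute size, depth, and max breadth of a retweet cascade.

        edges: list of (parent_id, child_id) pairs; the original tweet
        is the one node that never appears as a child.
        """
        children = defaultdict(list)
        child_nodes, nodes = set(), set()
        for parent, child in edges:
            children[parent].append(child)
            child_nodes.add(child)
            nodes.update((parent, child))
        root = (nodes - child_nodes).pop()  # the original tweet

        # Breadth-first walk, counting users at each level.
        level_counts, frontier = [], [root]
        while frontier:
            level_counts.append(len(frontier))
            frontier = [c for node in frontier for c in children[node]]

        return {
            "size": len(nodes),                # total users who shared it
            "depth": len(level_counts) - 1,    # longest retweet chain
            "max_breadth": max(level_counts),  # widest single level
        }

    # One original tweet, retweeted by A and B; C then retweets B.
    print(cascade_metrics([("orig", "A"), ("orig", "B"), ("B", "C")]))
    # {'size': 4, 'depth': 2, 'max_breadth': 2}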

In an attempt to minimize the ability of foreign adversaries to misuse the platform, Twitter updated its rules to address fake accounts engaged in a variety of malicious behaviors. Affected accounts include those that deliberately mimic or replace accounts the company has previously suspended for violating its rules, as well as accounts that distribute hacked materials that contain private information or trade secrets or that could put people in harm’s way.

Earlier this year, after gathering 6,500 responses to an initial draft, Twitter announced new rules governing “synthetic and manipulated media.” The rules say that users “may not deceptively promote synthetic or manipulated media that are likely to cause harm,” and they establish criteria for labeling or removing tweets under the rule. Twitter also hosts an open archive of potential state-backed operations it has identified, which researchers, industry, government, the media, and the public can examine to study the groups and tactics involved.
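
Twitter’s published criteria turn on whether media are significantly manipulated, whether they are shared in a deceptive manner, and whether they are likely to cause harm, with the combination determining whether a tweet is labeled or removed. The Python sketch below is a loose paraphrase of that decision logic; the function and its inputs are hypothetical illustrations, not Twitter’s API.

    def moderation_action(is_manipulated: bool,
                          shared_deceptively: bool,
                          likely_to_cause_harm: bool) -> str:
        """Paraphrase of the synthetic and manipulated media criteria."""
        if not is_manipulated:
            return "no action"   # the rule covers only altered media
        if shared_deceptively and likely_to_cause_harm:
            return "remove"      # deceptive and harmful: takedown
        if shared_deceptively or likely_to_cause_harm:
            return "label"       # one factor present: add a context label
        return "no action"       # manipulated but benign, e.g. obvious satire

    # A doctored video shared as authentic, with public-safety implications:
    print(moderation_action(True, True, True))    # remove
    # The same video clearly presented as parody:
    print(moderation_action(True, False, False))  # no action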

Microsoft

Microsoft has supported research into foreign disinformation, developed AI technology to combat synthetic or doctored videos known as “deepfakes,” and launched the Defending Democracy Program. This effort aims to bring together governments, non-governmental organizations, academics, and industry from around the world to “explore technological solutions to preserve and protect electoral processes and defend against disinformation campaigns.”

Last month, Microsoft warned that a ransomware attack on election infrastructure could disrupt the 2020 election. The company said that Russia-based hackers were responsible for 52% of the ransomware incidents it saw between July 2019 and June 2020, with Iran responsible for 25% of the attacks and China for 12%.

Google

Google faced criticism in the last election that its “autocomplete” feature may have favored certain candidates. In response, the company expanded its autocomplete policies related to elections. In September, it announced plans to block some autocomplete search suggestions in an effort to stop misinformation from spreading online during the election. Blocked predictions include those that make claims about “voting methods, requirements, or the status of voting locations” or that raise doubts about the security and integrity of the election. People can still search for information on these subjects; the search engine simply will not suggest them. Google also offers tools to help election officials secure websites and to help candidates and campaigns secure email and other accounts.
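
Conceptually, this amounts to filtering the prediction list against policy categories before it is displayed, while leaving the search results themselves untouched. The Python sketch below illustrates the idea; the patterns and function are hypothetical and do not reflect Google’s implementation.

    import re

    # Hypothetical patterns approximating the categories described above:
    # false claims about voting methods or polling locations, and claims
    # about the security or integrity of the election.
    BLOCKED_PATTERNS = [
        r"\bvote by (phone|text)\b",
        r"\bpolling (place|location)s? (closed|moved)\b",
        r"\belection (is |was )?(rigged|stolen)\b",
    ]

    def filter_autocomplete(predictions):
        """Drop predictions matching a blocked category; search is unchanged."""
        return [p for p in predictions
                if not any(re.search(pat, p.lower()) for pat in BLOCKED_PATTERNS)]

    print(filter_autocomplete([
        "election day hours",
        "you can vote by text",    # false voting-method claim: dropped
        "the election is rigged",  # integrity claim: dropped
    ]))
    # ['election day hours']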