Social Media Struggles with Coronavirus Misinformation
- Social media has transformed American life and business as an open forum for people to share news and other information with little or no filter.
- During the coronavirus pandemic, misleading or harmful coronavirus information has circulated on social media platforms, including bogus treatments and cures, conspiracy theories about the origins of the virus, and inaccurate information on testing facilities.
- Social media companies have taken steps to fight misleading or harmful coronavirus information by labeling inaccurate posts and removing dangerous ones.
Access to accurate and timely information is critical for people and governments as they respond to the coronavirus pandemic and work to reopen the economy. Millions of Americans get news from social media. Some social media companies are making an effort to ensure the coronavirus-related information on their sites is reliable, including flagging potential misinformation, fact-checking content, and even removing it entirely.
Technology for social creatures
Social networks likely contributed to the development of large-scale cooperation in humans, a necessary element of civilization. Technology has allowed some social interactions to move online. Companies facilitating this interaction include Facebook, Twitter, and TikTok. According to the Pew Research Center, just 5% of American adults used at least one social media platform in 2005. By last year, 72% of U.S. adults were using some type of social media. Early adopters of social media sites have tended to be younger, but use among older adults has risen.
Users of social media sites and services can post pictures and videos, share information, communicate with family and friends, buy and sell goods, schedule meetings and activities, and express political and other beliefs. Many of these uses are constitutionally protected forms of speech.
Social media also has been used to spread false or misleading information, spew hate speech, broadcast terrorist attacks, and facilitate the spread of child sexual abuse images. Social media companies generally enjoy broad immunity for most content posted by their users due to a 1996 statute commonly known as Section 230. The law also provides immunity for voluntary, good faith actions that service providers take to remove objectionable content from their platforms. The guiding principle of the statute is that responsibility for speech lies with the speaker, not the host.
False and misleading information related to the coronavirus pandemic has proliferated on social media. Posts promoting bogus treatments and cures, conspiracy theories about the origin of the virus, and inaccurate information on locations and times of testing facilities have circulated widely. A completely unfounded claim that 5G technology causes the coronavirus spread broadly across a variety of social media platforms, including Facebook and Twitter. Clips from a debunked conspiracy theory video, which Facebook, YouTube, and other companies had removed, made their way to TikTok. In one survey, 64% of adults said they have seen at least some news and information about the coronavirus that seemed completely made up.
The Federal Trade Commission has worked to stem the tide of fraud and unproven health claims around the coronavirus. The FTC has sent more than 120 warning letters to marketers who have made unproven claims, often on platforms such as a company’s own Facebook page. The letters warn companies that their conduct is likely unlawful and that they can face serious legal consequences, such as a federal lawsuit, if they do not immediately stop. The FTC says most companies that receive such letters take steps to quickly come into compliance with the law.
[Chart: The Rise of Social Media among U.S. Adults]
Facebook – a global giant
Facebook claims to have 2.6 billion monthly active users, more than any country has citizens. According to a Pew report early last year, 69% of U.S. adults use Facebook and 43% got news from the service. The company is a global giant, and its reach into American life and American households is immense. It makes money primarily by selling ads, which users see alongside news articles and posts from their friends. Facebook’s massive user base and the amount of information it collects on so many people allow it to target who sees ads based on numerous factors including location, gender, education, and interests. These ads are a common target for fraudsters.
Facebook has said it is taking steps to combat the spread of coronavirus-related misinformation, including removing posts with inaccurate claims about cures or the availability of testing facilities. Facebook has also placed information from health authorities and organizations at the top of users’ news feeds. It sometimes places warning labels on posts that are deemed to be potentially inaccurate though not harmful. In April, Facebook placed warning labels on about 50 million pieces of COVID-19 related content.
Twitter – smaller but influential
Twitter users send an average of 500 million tweets each day, with huge spikes occurring during special events such as the Super Bowl. Twitter has 330 million active monthly users worldwide and 145 million daily users. Twitter is dominated by its most prolific tweeters, with an estimated 80% of the tweets from U.S. adults coming from 10% of users. Many accounts have never tweeted. While Pew noted last year that only 22% of American adults used Twitter – down from 24% in 2018 – its influence is still significant, as journalists, politicians, and celebrities use the platform.
Twitter says it has taken a number of steps to stop the spread of coronavirus-related misinformation on its platform, including creating a dedicated COVID-19 events page and adjusting its ad policies to thwart potential fraud. The company recently updated its policy regarding harmful content and will use various labels and warnings to identify tweets that may contain misleading, disputed, or unverified claims. Oxford University researchers, however, recently found that most of what fact checkers have labeled as misinformation remained on Twitter without any warning label.
During the pandemic, researchers at Carnegie Mellon found nearly half of Twitter accounts discussing COVID-19 may be “bot” accounts. These accounts are not real people, but rather are part of coordinated collections of accounts used for purposes such as influencing events, committing fraud, or sowing chaos and division. The researchers found many bot accounts have been used to spread false medical advice, amplify conspiracy theories, and push to end stay-at-home orders.
TikTok – a Chinese app gone mainstream
TikTok is a social network owned by ByteDance, a Chinese technology company, which launched a version for markets outside of China in 2017. Users upload short videos of 3 to 15 seconds in length, often showing them lip-syncing or dancing to popular music. The app has 800 million monthly active users worldwide and an estimated 65 million in the U.S. It is most popular among young people – 37% of its active users in the U.S. are between 10 and 19 years old.
Like other social media companies, TikTok has struggled with coronavirus-related misinformation on its platform. The company has taken steps to combat misinformation, including adjusting its guidelines to prohibit posts “that may cause harm to an individual’s health, such as misleading information about medical treatments” and “misinformation meant to incite fear, hate, or prejudice.” The company introduced a feature for users to report content they believe to be “intentionally deceptive.”
TikTok has faced criticism and scrutiny from lawmakers, free speech advocates, and national security experts concerned about its ties to the Chinese government. The company claims that none of its data is subject to Chinese law, that it has never been asked by the Chinese government to remove or censor any content, and that it “would not do so if asked.” Last December, the Pentagon banned people in the military from downloading the app on military devices.