COVID-19 and What it Tells Us About the State of Misinformation
How are tech companies handling the spread of misinformation on their platforms?
COVID-19 is the disease caused by the novel coronavirus SARS-CoV-2. Discussing current case and fatality counts in a format like this is almost moot given how rapidly the situation is developing. But with cases spreading quickly around the world and details trickling out only as fast as scientists can produce them, information about the disease itself is far more limited than coverage of its overall impact on society. I'm going to assume that if you're reading this article, you're interested enough to know the basics of the disease, but just in case you're not: the general scientific consensus at the moment for staying safe is to wash your hands thoroughly, avoid touching your eyes and mouth, and avoid large, crowded gatherings.
Websites are doing better than they have historically in spreading the truth. Most notably, Google has pushed more reputable sources to the top of its search results. A search for “COVID-19” returns results from the CDC and WHO providing accurate, up-to-date information on how best to prepare for the outbreak.
Although Facebook and Twitter have lagged behind Google, they have still shown marked improvements over their previous efforts to stop misinformation.
The outbreak of the virus in early 2020 is the first time such a spread of disease has occurred in the age of social media as we experience it today. Technology companies control the way we receive our information, and the past few years have not given them a good reputation. Since 2016's infamous Brexit vote and US presidential election, skepticism has noticeably risen as the world has become more aware not just of the existence and origin of misinformation on social media, but also of its power to influence important global politics.
In an age where all of the information in the world is easily accessible to most of the population, it feels counterintuitive that false information can spread as easily as, and in some cases more easily than, the truth. But repeatedly, we have seen that this is the case. The reasons are numerous.
First and foremost, social media sites and tech companies like Facebook and Google earn their profits from engagement: the longer someone stays on the platform, the more money the company makes. So whether a post is true or false, these sites will push the most popular posts to the top because popularity is proven to drive further engagement. With no financial incentive to filter out false information, doing so becomes a low priority. And that's without even mentioning the vast grey area between fact and opinion that can make certain curation choices difficult to justify.
The obvious solution to this problem is legislative action to limit the spread of misinformation. But, as if things weren't complicated enough already, the United States' deeply rooted assertion that free speech is inalienable makes writing laws to regulate speech on social media platforms tricky at best and impossible at worst.
The immediacy of news also makes maintaining the truth difficult. Before the 24-hour news cycle, news could be printed and spread definitively; now every story is pieced together minute to minute. Rapid updates and shocking stories released with sparse information lead the public to jump to conclusions and spread information that is concocted, knowingly or otherwise, to fill in the gaps.
An immediate example occurred the day I'm writing this. Amid the coronavirus outbreak, rumors spread quickly that New York City would be completely shut down, similar to what is happening in cities in Italy. The shocking misinformation traveled faster and further than the truth, a statement from the NYPD's Twitter account telling the public that there was no plan to shut down subways or roadways as many had feared.
Sadly, however, the worst of humanity, from simple internet trolls to those whose nefarious plans target the public's fear, will continue to take advantage of social media to spread lies. The dissemination of misinformation is not unlike a game of telephone played with every attendee of a sporting event: the truth will eventually be distorted, whether on purpose or through numerous miscommunications. And with an American president who clocked in at 10 lies per day during the first year of his presidency, the ability to think for oneself is more important than ever.
There are three entities that need to claim responsibility for controlling misinformation.
The first is the tech companies that host the misinformation. The many years of insisting that they are merely the platform and not the author are over; it's time to take responsibility for the information spread on their platforms.
The second is national governments. Although legislation may be tricky to produce, action must be taken to curb the further spread of misinformation.
But given the financial incentives of tech companies and the trademark slow pace of policy change, the most important shield against misinformation is you. Information from any source deserves a heavy dose of skepticism. Accepting information from social media without consulting an outside source is especially dangerous and can even be irresponsible. Even drawing conclusions from a single reputable news source is ill-advised.
From important events and the public's reactions to them, we can learn two things. The first is to be skeptical, and the second is to pay closest attention to what experts are saying. Not armchair experts, actual experts. Doing this gives us our best chance to avoid panic and resolve situations quickly and safely.
Sources and further reading: