Countering Fake News
Since the US presidential election campaign of 2016, the phenomenon of fake news has increasingly moved to the center of public debate. In contrast to the classical false media reports of the analog age, fake news is generated intentionally in an environment where a claim to truth actually exists. In public discourse, the term fake news often refers to all kinds of reports that seem problematic, including vague or inadvertently distributed information. Populists have even turned the term into a polemical battle cry to accuse established media of systematically incorrect coverage. In reality, however, fake news is targeted disinformation that circumvents professional journalists, who would not knowingly publish false information for fear of reputational damage. It was the rise of social media that not only democratized the way news is distributed but also strengthened the ability of fake news to circulate: today, everyone can potentially create and spread false information. While political purposes seem the obvious motivation behind fake news, its diffusion can also be financially driven. In such cases, attention-grabbing fake news serves as a means of channeling users to specific websites in order to generate advertising revenue.
Fake news spreads on social networks mainly through individual user interactions such as commenting, liking, and sharing. These interactions are central to its spread because users tend to consider a news item more trustworthy if a known or trusted person has engaged with it. As a result, fake news reaches furthest in groups of users with similar worldviews. In such homogeneous groups, news with a clear political message has the best chance of attracting attention. A negative and controversial but easily understandable message strengthens this effect. Fake news therefore typically deals with morally charged topics such as migration, child abuse, or war, as these trigger emotional-affective reactions and interaction. Additionally, social bots, computer programs that simulate human behavior, contribute to the dissemination of fake news by algorithmically liking or sharing content. Their interactions raise the relevance attributed to the content, so that it is displayed preferentially to social media users. Yet the bots' activities are empirically difficult to track, and their share in the spread of fake news remains unclear.
How big a threat is fake news?
One of the main questions shaping the debate on fake news is the extent of its potential adverse effects. To shed more light on the issue, it is necessary to ask how far this kind of misinformation reaches into the population. Though empirically hard to measure, studies indicate that only a small group of internet users regularly comes into contact with fake news, namely those who spend a lot of time on the internet and on social media. The vast majority encounters the phenomenon only occasionally or not at all. This suggests that fake news has a certain impact potential, yet a limited one. Although more and more people, especially young ones, access news via the internet, news on social media is met with skepticism: compared to traditional information sources, it does not appear credible. We can therefore assume that fake news does not pose too great a threat as long as users continue to reflect on its credibility and, when suspicion arises, conduct follow-up research.
While the reach of fake news does not yet seem extensive, its unconscious and uncritical reception gives cause for concern. Most people process news heuristically, meaning that they invest only little cognitive effort in engaging with its content. This considerably increases the risk of believing false information. The way news is presented on social media, usually only a catchy headline and a few lines summarizing a report, further tempts readers to grasp content only superficially. At the same time, the mere appearance of news teasers in social media feeds conveys to users a feeling of being well informed. Yet since users do not typically open social media channels in order to receive news, they take in news content only in passing, in a less aware and less critical reception mode. Therefore, even users who are normally aware of the quality flaws of news on social platforms are potentially susceptible to fake news.
One of the central characteristics of this heuristic information processing is the so-called "confirmation bias": people tend to absorb and interpret information in a way that confirms their existing opinions. Internet users preferably rely on information sources that fit their worldview. Consequently, they will also be suggested corresponding news on social media, because the algorithms reflect their behavior. In the resulting filter bubble, fake news has an easy time: if its content confirms the user's worldview, he or she is likely to believe it without questioning. In that sense, fake news mainly reinforces existing views. This mechanism is enhanced by the so-called "sleeper effect": people might initially disbelieve information because they mistrust its source, but over time they may forget where the information came from while still remembering its content, and thus come to believe it if it matches their ideas. Another danger lies in the repetition of fake news. Upon reiteration, information that was initially considered false becomes familiar and hence more credible. On the other hand, frequent confrontation with the same message might provoke a questioning of the facts and lead to the opposite outcome.
What to do against fake news?
Despite the potential threat, experts do not recommend warnings against fake news or corrections of the facts. These would require repeating the false information and would thereby increase the risk of its remaining in memory. Moreover, it is not the source of a piece of information but its conformity with one's ideas that is crucial for its credibility. Warnings, which are essentially information about the news' source, are therefore unlikely to be effective. Recipients of warnings might even be angered and perceive them as an illegitimate interference with their freedom of decision. Consequently, they might show reactance and believe even more strongly in the fake news. Deleting fake news is not an appropriate solution either. In the vastness of the internet, users will always find information that confirms their worldview. By deleting fake news from popular channels like Facebook or Twitter, their followers are merely channeled to remote, alternative platforms. Such a development further promotes polarisation by hampering the exchange of different positions. Moreover, erasing fake news bolsters the position of populists by delivering them yet another argument for their criticism of the established media. In this light, the Network Enforcement Act of June 2017 to combat fake news on social media is viewed critically: not only is the number of fake news items tackled within its framework extremely low, but it also provides operators of social media channels with a legal justification for politically opportune overblocking of content.
In contrast to the approaches mentioned above, experts recommend the promotion of media literacy. Knowledge about information processing and media effects should accompany expertise in how to deal with information sources. Additionally, a societal dialogue should be fostered, one that in particular includes population groups who cannot identify with the "centre" of society and are therefore susceptible to fake news. This helps to counter polarisation and to make everyone feel taken seriously. Experts also call for more caution in the use of the term itself: mutual accusations of untruth between political adversaries damage trust in politics and the media. Last but not least, more research on fake news is necessary to clarify the quantity of fake news in circulation, its contents, and its impacts.
*This article is the summary of the analysis “What can be done to counter fake news?” by Dr. Philipp Müller and Nora Denner. For the original analysis click here.