Monday, 1 July 2019 | Redacción CEU
The words we use are not harmless or anecdotal. They have an effect on the people who receive them, even when those people are on the other side of a screen. In practice, however, users tend to forget this. Studies show that when we speak on social networking platforms we are more cutting and critical, becoming trolls or haters in the most extreme cases. Social platforms are useful for meeting people, keeping in touch with those who are far away, giving a voice to people who are often ignored, making causes visible, sharing experiences, and keeping us informed and entertained. Their advantages are many, but they also have a dark side: they create a sense of anonymity and total freedom that leads people to act with greater intensity than they would in person.
John Suler calls this phenomenon the "online disinhibition effect" and identifies six factors behind it. The first is dissociative anonymity: being behind a screen brings a sense of anonymity that helps separate online actions from "real" life. The second factor is invisibility, which leads users to do things they would not do in other situations. Another characteristic of this virtual scenario is solipsistic introjection: since users do not really know the people they find on the network, they project an image onto them based on their own prejudices and expectations. Asynchronicity also plays an important role: the channel enables users to compose complex messages and send them without having to wait for a reply. Thanks to dissociative imagination, users can develop a completely different personality on the network, as if they were in a parallel dimension. Finally, minimization of status and authority has a significant impact: cues of status and power are absent, and people talk to one another without following any kind of hierarchy.
This uninhibited behavior also finds an explanation in a concept from social psychology called deindividuation, according to which subjects lose their personal identity within a group. Guided by the sense of anonymity, users feel, to a certain extent, that instead of talking to people with names and surnames they are talking to a kind of abstract entity. In other words, they do not realize that on the other side of the screen there is a person who receives their message, and that what they say may hurt that person's feelings and provoke negative emotions. This leads users to act in ways that would not be socially accepted in other contexts. It is therefore one of the theories that explain the "troll phenomenon".
As we mentioned in a previous article, the messages that trolls and haters send can achieve the same level of influence as those sent by influencers. The problem is that, in this case, the comments are often offensive, humiliating, defamatory, insolent and, in general, negative. These comments may go unnoticed, but when they do have an impact, they can create a whole school of thought. Even when they lack any logical argument, they can polarize the debate and create a climate of confrontation.
A divided society is the ideal breeding ground for fake news. This false information is fueled by both the emotions and the prejudices of users. The creators of this type of "disinformation" have realized that if they exploit fear and topics that generate indignation, their false "news" is shared by more people. They also take advantage of what is known as confirmation bias: when a message confirms something we already believe, we let ourselves get carried away and tend to share it, without checking whether it is true. That is why some of the keys to making fake news have a greater impact are found in these impulsive, negative and sometimes hate-filled messages.
The leaders of social platforms are increasingly aware of the consequences this type of behavior can bring about. Facebook is one of the companies working on control mechanisms to combat hate messages (it is also working against fake news). Zuckerberg's network has announced a rating system for comments that aims to reward quality and help combat messages that may incite hatred; under this proposal, comments that do not comply with community standards will be deleted. The company is also limiting message forwarding and using technology based on artificial intelligence in countries such as Sri Lanka and Myanmar to try to curb the spread of violent content and attacks against ethnic minorities.
Users can also act in several ways to combat hate on the Internet.
The CEU IAM Business School offers a Master's Degree in Digital Marketing that responds to the new demands of this growing market. The program takes a current, practical approach through which participants acquire essential knowledge and skills in key areas such as marketing planning, digital marketing channels, social media strategy, media management, web positioning and e-commerce. Ask for further information!