

How to Spot Bots and Trolls on Social Media


Have you ever logged in to a social network, seen an account or read comments, and tried to work out whether they were real or created by bots?
Have you ever read about a topic related to your company and wondered whether it was created only to provoke an unhealthy discussion?
How important are the topics and comments around your company, and how difficult is it to verify their reliability in a world where bots and trolls are now on the agenda?

1. Bots

  • What is a bot?
  • How do bots work?
  • Types of bots
  • Malicious bots 

2. What is the difference between bots and trolls?

3. Trolls

  • What is a troll?
  • How to spot a troll? 

4. Twitter bots

5. Instagram vs. bots 

6. How can Pivony help you?

Over the last few years, words like “bots” and “trolls” have become part of conversations on social networks. They have gained a huge and often unrecognized influence on social media, and they are used to steer conversations for commercial or even political reasons.

Social media platforms are constantly trying to find ways to fight fake accounts, bots, and trolls. Instagram has recently introduced a new feature to confront fake accounts: you must take a photo and a video of yourself to demonstrate you are a real person, not a bot.

In order to maintain the quality of discussion on social sites, it’s becoming necessary to screen and moderate community content. Is there a way to filter out bots and trolls? The answer is yes.

Dictionary.com defines a bot as “a software program that can execute commands, reply to messages, or perform routine tasks, as online searches, either automatically or with minimal human intervention (often used in combination).”

A bot – abbreviation of the term “robot” – is a computer program designed to perform online operations automatically, quickly and repetitively, without any human intervention. 

Bots can be used in different areas of business: customer service, search functionality and entertainment. Using a bot in each area brings different benefits.

There are plenty of different types of bots, each designed to accomplish a particular task. Some common bots include:

  • Chatbot: a program able to analyze and understand the language of the real users interacting with it. In customer service, chatbots are available 24/7. They free up customer service employees to focus on more complicated issues, and they interact with people by offering pre-defined prompts for the individual to select.

    Their skills improve incrementally thanks to machine learning: bots can learn from their mistakes and, above all, from their interactions with real people. This allows them to improve their language-analysis skills and provide increasingly precise and accurate answers. Chatbots may also use pattern matching, natural language processing (NLP), and natural language generation (NLG) tools. You can also choose your chatbot’s name.
  • Social bots: bots have become a very frequent presence within social networks in the form of fake profiles. They operate on social media platforms and instant messenger apps such as Facebook Messenger, WhatsApp, and Slack.

    People use them, for example, to inflate follower counts and thereby appear more famous than they actually are. Social bots, however, are also acquiring an increasingly political dimension.
  • Technical bots: now the most widespread type of bot, and the least known to users, because they act a bit “in the shadows”. This category includes:
    • Web crawlers or web spiders
    • Wiki bots, software that automates the management of wiki projects (such as Wikipedia) by checking that links are correct; they also update content automatically or even create new pages and entries in the free encyclopedia.
    • Knowbots, programs that collect knowledge for a user by automatically visiting Internet sites.
    • Monitoring bots, used to monitor the health of a website or system.
    • Transactional bots, whose task is to complete transactions on behalf of a human.
    • Shopbots, which shop around the web on your behalf.
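The pre-defined-prompt, pattern-matching style of chatbot described above can be sketched in a few lines of Python. This is only a minimal illustration, not a real product; the rules and canned replies here are invented for the example:

```python
import re

# Illustrative pattern-to-reply rules; a real chatbot would have many more
# and would likely layer NLP on top of simple matching like this.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\b(refund|return)\b", re.I), "I can help with returns. What is your order number?"),
    (re.compile(r"\b(hours|open)\b", re.I), "Our support team is available 24/7."),
]

def reply(message: str) -> str:
    """Return the first matching canned reply, or hand off to a human."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Let me connect you with a human agent."
```

Falling back to a human agent when no pattern matches is what lets chatbots handle routine questions while employees focus on the complicated ones.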

These are the good bots, but unfortunately there are also bad bots that can threaten and damage your system.

Malicious bots

They are designed to carry out illegitimate and questionable activities with great efficiency and en masse.

Common types of malicious bots include:  

  • Bots running DDoS attacks: they overload a server’s resources and halt the service from operating.
  • Bots scanning and circumventing security measures: made to distribute malware and attack websites
  • Bots running fake accounts
  • Spambots, which post promotional content to drive traffic to a specific website.

Any of these malicious bots can be used to reach harmful objectives. They distort reality in an extremely subtle way and publicly attack companies by creating false news. Disinformation can achieve a high share rate on social platforms like Twitter and Instagram.

The more people bots reach, the higher the share of people who will believe and support the cause, even if it is false or unreliable, and the greater the damage to the company.

What is the difference between a troll and a bot?

A troll is different from a bot because a troll is a real user, whereas bots are automated. The two types of accounts are mutually exclusive. Both trolls and malicious bots can easily distort the image or reputation of your company by tweeting or commenting fake news. 

A troll is defined as someone who interacts with other online users using controversial comments or provocative posts. Their purpose is not to build a critical or constructive speech, but to disturb, insult, and foment the so-called ‘flames’ of comments. 

Platforms targeted by trolls include social media, forums, and chat rooms. Troll comments can be offensive, aggressive, and stupid. Trolling is a very effective way to:

  • spread rumors and misinformation
  • create tension between different parties
  • change public opinion
  • disrupt conversations around companies.

This ends up creating a very powerful tool, or even a weapon, to create tension and control public opinion.

Most online communities allow users to create usernames that aren’t linked to their real identities. This anonymity makes it easier for trolls to escape the consequences of their actions.

These kinds of accounts are used to propagate fake information or news resulting in intense debate between groups of people. It’s not always easy to distinguish between trolling and someone who genuinely wants to argue about a topic. 

When you are dealing with a troll, there are common signs to look out for, including:

  • Creating fake profiles
  • Going off-topic: This is to annoy and disrupt other posters.
  • Creating and sharing posts, videos, memes, and comments: To attract attention, they share false news with sensational headlines.
  • Not letting things drop: They post again and again until they have provoked the response they wanted.
  • Sharing links to dangerous sites (containing viruses, etc.) or sites restricted to minors
  • Ignoring evidence or facts: They won’t acknowledge facts that contradict their point of view.
  • Using a dismissive and aggressive tone to others: They adopt a condescending or confrontational tone and dismiss any counter-arguments as a way to provoke the other party. Their language is aggressive and vulgar.

In general, if someone seems uninterested in a genuine, good-faith discussion and is being provocative on purpose, they are probably an internet troll.



How to handle trolls on social media

Online trolls can be aggravating and unpleasant, and it can be difficult to know how to react. If you’re wondering how to respond to a troll, here are some tips:

  • Ignore them: The smartest way to handle a troll is to ignore them and not engage in any way. Attempting to debate them will only make them troll more. Don’t feed the trolls: don’t get involved, don’t fan the flames, and above all don’t respond to offense with offense.
  • Block them: Most social media platforms make it easy these days to block other users. If a troll is annoying you, you can block them.
  • Report them: Most social media platforms and online forums allow you to also report other users who are being abusive or hateful. If your report is successful, the troll may be temporarily suspended, or their account banned entirely.

Social media platforms like Facebook, Twitter, and Instagram started out as a way to connect with friends and family, but over time troll and bot accounts have taken them over.

Today, troll and bot accounts have a huge influence on every social media platform. They are used to influence and manipulate conversations for their own purposes.

They are getting more sophisticated and harder to detect. You need to be careful on social media, because you are not always aware you are dealing with suspicious bots or trolls.

Research shows that social bots have been used in elections to spread fake news and propagate political agendas; this news is mostly inaccurate, yet it achieves high reach and can easily manipulate people’s opinions.

Bots have the potential to harm a company’s reputation or its products, which can lead to financial damage as well. Some studies have shown that bots can access personal information such as phone numbers and email addresses, which in turn can be used for cybercrime.

In its most recently reported period, October 2018 to March 2019, Facebook said it removed 3.39 billion fake accounts.

It comes as no surprise that chatbots and the spread of fake news by social bots were also used in the crisis triggered by the coronavirus in spring 2020. According to researchers at Carnegie Mellon University, nearly half of the Twitter accounts spreading messages about the coronavirus pandemic are likely bots.

Researchers identified more than 100 false narratives about COVID-19 proliferating on Twitter through accounts controlled by bots. Twitter says its automated systems have “challenged” 1.5 million bot accounts that were targeting discussions about COVID-19 with malicious behavior or market manipulation.

Bots are predominantly found on Twitter and other social networks that allow users to create multiple accounts. A Twitterbot (sometimes spelled “Twitter bot”) is simply an account run by a piece of software that sends out automated posts on Twitter.
Most Twitterbots send out tweets periodically or respond to instances of specific phrases in user messages. More sophisticated Twitterbots perform tasks such as mining and analyzing tweets in real time. Depending on the purpose, Twitterbots can be useful, informative, annoying, or dangerous.

According to Ian Urbina, writing in The New York Times, on average only 35 percent of the followers of any Twitter account are actual people; presumably, the remaining 65 percent are Twitterbots.

Here are some useful Twitterbots:

  • @NiemanLabFuego finds and retweets the stories that most journalists are discussing.
  • @trackthis tracks parcels for followers and sends them direct messages each time a package’s location changes.
  • @EarthquakesSF tweets about earthquakes in the San Francisco area in real time, using seismographic information from the USGS.
  • @yesyoureracist and @yesyouresexist seek out and auto-respond to racist/sexist tweets.

There is no longer an easy way to identify a bot. But common signals, such as the profile photo, posting frequency, following/follower counts, average like/retweet counts, and the topics an account is involved in, can help you work out whether it is a bot or not.

These malicious bots can seriously distort debate, especially when they work together. They can be used to make a phrase or hashtag trend, to amplify or attack a message or article; or to harass other users or companies.

They have some characteristics in common:

1. Anonymity: A fundamental factor is the degree of anonymity an account shows. In general, the less personal information it gives, the more likely it is to be a bot. The use of stolen images, alphanumeric handles, and mismatched names can reveal a fake account.
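These anonymity signals can be turned into a crude score. Here is a hypothetical sketch in Python; the specific cues and the way they are weighted are assumptions made for illustration, not an established detection rule:

```python
import re

def anonymity_score(handle: str, display_name: str, bio: str) -> int:
    """Count simple anonymity signals on a profile (illustrative heuristics).

    A higher score means the account gives away less personal information,
    which, per the text above, makes it more likely to be a bot.
    """
    score = 0
    if re.search(r"\d{4,}$", handle):
        score += 1  # alphanumeric handle ending in a long digit run
    if not bio.strip():
        score += 1  # empty profile description
    if display_name.lower().replace(" ", "") not in handle.lower():
        score += 1  # display name does not match the handle at all
    return score
```

A single signal proves nothing; the point is that accounts accumulating several of these cues deserve a closer look.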

2. Activity: Another key indicator that an account is automated is its activity. This can readily be calculated by looking at its profile page and dividing the number of posts by the number of days it has been active.

The benchmark for suspicious activity varies. The Oxford Internet Institute’s Computational Propaganda team views an average of more than 50 posts a day as suspicious. This is a widely recognized and applied benchmark, but may be on the low side.
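The activity calculation described above is simple enough to express directly. A sketch using the 50-posts-per-day benchmark cited from the Oxford Internet Institute (the function names are mine, not theirs):

```python
from datetime import date

def posts_per_day(total_posts: int, created: date, today: date) -> float:
    """Average daily activity: total posts divided by days since creation."""
    days_active = max((today - created).days, 1)  # avoid division by zero
    return total_posts / days_active

def is_hyperactive(total_posts: int, created: date, today: date,
                   threshold: float = 50.0) -> bool:
    """Flag accounts above the 50-posts-per-day suspicion benchmark."""
    return posts_per_day(total_posts, created, today) > threshold
```

An account created a year ago with 60,000 posts averages over 160 posts a day and would be flagged; a few thousand posts over the same period would not.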

3. Amplification: The third key indicator is amplification. One main role of bots is to boost the signal from other users by retweeting, liking or quoting them.

The most effective way to establish this pattern is to machine-scan a large number of posts.

Another amplification technique is to program a bot to share news stories directly from selected sites without any further comment. An account which posts long strings of such shares is likely automated.

Bots achieve their effect by the massive amplification of content by a single account. Another way to achieve the same effect is to create a large number of accounts that retweet the same post once each: a botnet. 
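One way to quantify the amplification pattern described above is the share of an account's recent posts that add nothing of their own. A sketch; the field names here are invented for illustration and do not match any real platform API:

```python
def retweet_ratio(posts: list[dict]) -> float:
    """Fraction of recent posts that are pure amplification:
    retweets, or link shares with no added comment.

    A ratio near 1.0 matches the amplifier profile described above;
    original posts and replies pull the ratio down.
    """
    if not posts:
        return 0.0
    amplified = sum(
        1 for p in posts
        if p.get("is_retweet")
        or (p.get("is_link_share") and not p.get("comment"))
    )
    return amplified / len(posts)
```

Run over a large sample of accounts, a ratio like this is the kind of signal a machine-scan of posts can compute cheaply.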

If an account writes individual posts, comments, and replies, and engages with other users’ posts, it is far less likely to be a bot.

Twitter is already a very difficult platform on which to observe real consumer feedback, and bots and trolls make it even more difficult.

To stem this danger, Twitter has launched several campaigns aimed at finding false profiles. The goal is to prevent these bots from being used to amplify false news and thus feed a dangerous vicious circle.

With the arrival of Meta (the new parent company of WhatsApp, Facebook, and Instagram), new functions have gradually been introduced on its social networks.
The company has developed a tool to counter fake accounts and bots, as well as all those profiles created only to inflate comment and follower counts or to spam.

On November 15th, it was announced that the social network will begin using video selfies to verify people’s identity, to be certain that there is a human being behind the screen.

New members, and in some cases existing profiles, will be asked to record a short video while making movements with their head.
The video helps Instagram confirm the person is real, and only then will they have access to the social network. Bot profiles will not be able to verify their identity and will remain outside the platform.

Despite some concerns about user privacy, Instagram has assured that videos will never be published and will be deleted after 30 days. In addition, the system will not carry out facial recognition and will not collect biometric data of the person. 

How can you defend yourself and your company from propaganda and spam posted by malicious trolls and bots, or from fake news associated with your company? Your company could carefully investigate the background of each post, but reading every comment takes a lot of time, as does analyzing whether an account is real or a troll or bot. Trolls and bots are disrupting social media, but there is a way to stop them!

The answer is to automate detection using big data and machine learning.

Some messages on social media are just plain noise, and messages from malicious bots and trolls belong to this category. They derail conversations to make something false appear real, adding noise to social media.

Noise may seem harmless, but it has an impact on any brand’s business. Since comments and messages can’t be left unattended, brands must monitor and respond. Unfortunately, the more noise you get, the more messages you have to review, and the more time and money it takes to manage them.

There is one thing you can do to reduce the noise on your social pages, better manage comments and productivity without hurting customer engagement, and still get precious insights and results: use a social intelligence platform. There are two benefits to using Pivony:

  • When analyzing users’ and customers’ tweets and comments on Twitter, the Pivony AI Engine is able to filter out everything that doesn’t seem relevant to the research and analysis, including bots’ and trolls’ comments.
    It performs a deeper analysis of the users, so you won’t have to worry about how trolls and bots can damage your company.
    Bots and trolls are capable of twisting reality and opinions about a company. Being sure that the topics they discuss can’t affect your company is crucial to making insights valuable and actionable.
  • Pivony helps you instantly detect the bots in direct relation to your brand or name, unlike other free tools that spot the most harmful bots but are unlikely to have any relation to you.

In conclusion, managing bots and trolls is necessary when facing the comments and reviews your company gets on social media. Knowing whether a user is a person or a bot is not easy, but with a few tips and tools you will be able to stop them.

With AI, bots and trolls will be filtered out, and you can be sure of your results without wasting time analyzing every single user.

Start the free trial or book a demo if you want to know more about how Pivony can help your company, even in troublesome situations with bots and trolls.

Elenora Verdinelli

Marketing Specialist @Pivony
