Social media was conceived as a nirvana for freedom of speech and social justice, one that would transcend race, creed, socioeconomic status, geography, and national borders.  However, the reality has been something quite different.

One of its major problems is that social media is more than just a trendy way for us to share our opinions – about two-thirds of American adults use it as a real-time source of news1.  This is disturbing given that by now most Americans are familiar with how the 2016 elections were meddled with.  If you are not, I recommend you watch Netflix’s 2019 documentary The Great Hack (just be aware that even the documentary has an agenda).  We are also aware, at some level, that companies are capturing information from us to better understand what we will like, what we might buy, and ultimately how we think.

What we are not as familiar with is the extent to which our opinions are being deliberately manipulated via social media bots.  Wikipedia2 defines a social media bot as an “…agent(s) that communicate(s) more or less autonomously on social media, often with the task of influencing the course of discussion and/or the opinions of its readers”.  Thanks to advances in machine learning, these automated algorithms (in conjunction with minor human input) are actively working to manipulate our opinions and steer popular culture.  Just how bad is this issue?  According to a 2018 study by the Pew Research Center3:

  • An estimated 9–15% of all Twitter accounts are automated
  • 66% of tweeted links to popular sites came from bot accounts
  • 89% of links to news aggregation sites were posted by bots

What kinds of issues are these bots manipulating?  It turns out that their influence extends well beyond American elections, covering everything from Brexit and Mueller’s report to Congress to what you choose to buy5.  The problem is that humans are prone to being influenced by what we perceive to be socially and/or popularly accepted.  “One of the big problems for the general public is we mostly believe what we see and what we’re told,” Frank Waddell, assistant professor at the University of Florida’s College of Journalism and Communications, told Engadget4. “And this is kind of amplified on social media where there’s just so much information.”

Furthermore, we are not good at telling the difference between information posted by humans and information posted by bots.  According to the Pew study3, only 7% of us think that we could tell the difference.  This creates a perfect storm for our society, leaving all of us ripe for misinformation and exploitation, because many of these bots may not be working in our best interests.  While some merely reflect the biases of the programmers and/or organizations that developed them, others work under the direction of unfriendly governments and other threat actors to actively seed disinformation or widen societal and ideological chasms.

According to another study, conducted at Indiana University6, there is a solution: reduce the number of bots.  The study’s leader, Dr. Filippo Menczer, stated, “As people across the globe increasingly turn to social networks as their primary source of news and information, the fight against misinformation requires a grounded assessment of the relative impact of the different ways in which it spreads…. this work confirms that bots play a role in the problem — and suggests their reduction might improve the situation.”

Fortunately, we are not left waiting for social media companies to shut down the flood of bots; we have a great deal of control ourselves.  If you are reading a post from a ‘troll’ trying to inflame tensions, think before you blindly respond and add fuel to the fire.  In many instances, you can also look at the activity of the ‘inflammatory’ account: if its activity is limited to a single topic and a handful of content sources, you should be highly suspicious that it is automated and best ignored.
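
To make the “single topic, few sources” test a little more concrete, here is a minimal sketch in Python of how you might score an account’s recent activity for topical and source variety.  The sample posts, the word-counting shortcut, and the thresholds are all illustrative assumptions for the sake of the example, not a real bot detector.

    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical sample of one account's recent posts as (text, linked URL).
    # In practice these would come from the account's public timeline.
    posts = [
        ("Election was RIGGED, share this now!", "http://outrage.example.com/a"),
        ("They are LYING to you about the election", "http://outrage.example.com/b"),
        ("RIGGED election, wake up people", "http://outrage.example.com/c"),
        ("Share before they delete it! #rigged", "http://outrage.example.com/d"),
    ]

    def diversity_report(posts):
        """Rough heuristic: how varied are this account's topics and link sources?"""
        words = Counter()
        domains = Counter()
        for text, url in posts:
            # Count longer words as a crude stand-in for the "topics" discussed.
            words.update(w.lower().strip("!#,.") for w in text.split() if len(w) > 4)
            if url:
                domains[urlparse(url).netloc] += 1
        # A tiny vocabulary plus one or two link domains is a red flag, not proof.
        suspicious = len(words) < 15 and len(domains) <= 2
        return len(words), len(domains), suspicious

    distinct_words, distinct_domains, suspicious = diversity_report(posts)
    print(f"Distinct words: {distinct_words}, distinct link domains: {distinct_domains}")
    print("Single-issue, single-source account: treat with suspicion."
          if suspicious else "Activity looks reasonably varied.")

Real detection systems weigh many more signals (posting cadence, account age, follower networks), but even this crude check captures the pattern to watch for: repetitive language pointing back to the same one or two sources.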

In the meantime, we need to be aware of the risks inherent in basing our opinions and ‘research’ into current news solely on social media or news aggregation services (e.g., Facebook, Apple, or Google).  Just because a social media post has thousands of likes or reposts does not necessarily mean that it is commonly accepted among your peers.  All of us would benefit from forming our own judgments about current events based on our own research, critical thinking, and, above all, well-considered opinions.  If you want to use social media, just be sure you are doing your job to PROTECT IT.

1 https://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/

2 https://en.wikipedia.org/wiki/Social_bot

3 https://www.pewinternet.org/2018/04/09/bots-in-the-twittersphere/

4 https://www.engadget.com/2019/08/15/social-media-bots-are-damaging-our-democracy/

5 https://niccs.us-cert.gov/sites/default/files/documents/pdf/ncsam_socialmediabotsoverview_508.pdf

6 https://news.iu.edu/stories/2018/11/iub/releases/20-twitter-bots-election-misinformation.html