Robert Epstein, a Harvard-trained psychologist, was fooled by chat bots posing as prospective dates online not once but twice. But the rise of social bots isn't just bad for love lives -- it could have broader implications for our ability to trust the authenticity of nearly every interaction we have online.
Bots in general aren't new. Your spam folder is filled to the brim with e-mails from them. But what is new is how difficult it is to identify some social bots and how they are being deployed to influence things outside of our commercial interactions, like political dialogues.
For example, a recent New York Times piece
on social bots reported that thousands of Twitter bots started flooding the digital conversation during a dispute over a Russian parliamentary election in 2011, aiming to drown out anti-Kremlin activists. And similar tactics were deployed by the beleaguered Syrian government.
It's easy to see how tyrants could find such a campaign an attractive way to blunt the radically democratizing power of free speech online. Sure, the general public may be outraged by some government scandal and take to the Internet to voice their complaints. But deploy an even greater number of bots to argue with them online, and real users may give up hope on the cause: If everyone else talking about an issue seems to support the government's policy, what chance is there for reform? Since some researchers estimate that only 35 percent of the average Twitter user's followers are real people right now, and that within two years about 10 percent of the activity on online social networks will come from bots, it's a very real possibility that bots could have a major influence on these kinds of debates.
Of course, this is a familiar concept. For years, governments and companies have been paying actual people
to comment online, creating digital astroturf movements that obscure and influence real public sentiment. But bots are becoming better at imitating real people, fed by news databases and given realistic sleep cycles.