Information Warfare Attack Vectors and the Influence upon Democratic Elections
Published: 8th February 2020
The purpose of this study is to determine whether there are effective information warfare strategies capable of influencing voters and, specifically, whether Twitter can be used to deliver those strategies in the context of an Australian federal election. The study is designed around a review of existing academic literature and assessments based upon the implementation of Twitterbots. The major result of our findings is that the available literature supports a finding that there are effective information warfare strategies capable of influencing voters and that these strategies can be effectively delivered using social media platforms such as Twitter. Our conclusions and interpretations, based upon our literature review, are that there is support for the premise that the use of information warfare strategies through social media platforms is effective in influencing voter sentiment. We also determine that when Twitter is used in conjunction with information warfare strategies through the use of bots, it is important to have an appropriate number of bots in place in order to meaningfully influence voter sentiment.
Keywords: Information Warfare, Social Media, Twitter, Facebook, Australian federal election.
The purpose of this study is to determine whether Information Warfare strategies can influence the voter decision-making process, and whether Twitter can be used to apply Information Warfare strategies to influence people. To make our determinations we use two research methods.
The first research method is based upon the traditional literary review of existing academic papers, reports, and journals, etc.
The second research method is based upon the creation, implementation, monitoring, and analysis of Twitter scrapers, Twitter bots, Facebook scrapers, and an RSS email campaign. At the conclusion of this paper we summarise our findings and deliver a determination by addressing the previously stated purpose of the study based upon information provided from both research methods.
With the advent of the technological age, successive technological breakthroughs provided an environment for economic upheaval and for change affecting the structure of both civil society and military organizations (Taddeo, 2012). It is against this background of continued technological advancement that the weaponization of social media as part of an effective Information Warfare strategy emerged. This process is nowhere more apparent than in the buildup to an election, where political candidates leverage the power and immediacy of social media platforms such as Twitter and Facebook to debate political topics and advance their electoral campaigns (Yang, 2016).
The significance of our research is that it focuses on local Australian content. At the time of writing there is a scheduled Australian federal election in May with voters going to the polls in less than three weeks. Consequently, there has been a marked escalation in political oratory across all communication channels, and in particular that of social media. Our research specifically encompasses the social media platforms of Twitter and Facebook where we look at these social media vehicles in relation to the Australian democratic process, and their ability, or not, to influence users through the use of amplifiers, force multipliers, and persuasion.
A review of the basic features of strategic Information Warfare provides the following summaries, of which low entry cost, blurred traditional boundaries, an expanded role for perception management, and geographical vulnerabilities are of particular relevance in the context of the pending Australian federal election.
- Low entry cost: Unlike conventional weapon technologies, information-based techniques do not need substantial financial assets or state support. Information systems proficiency and access to target networks may be the only requirements.
- Blurred traditional boundaries: Conventional definitions between those of public versus private interests, military versus criminal actions, and national boundaries are complicated by increasing interaction within the information infrastructure.
- Expanded role for perception management: Innovative information-based techniques can significantly leverage the effectiveness of deceit and of image-management behaviours, substantially complicating a nation-state's ability to gain political traction for information security related policy initiatives.
- Geographical vulnerabilities: Information warfare-based techniques render geographical distance irrelevant. With the continuing escalation of reliance upon networked information infrastructure, there follows a corresponding increase in accessible targets for Information Warfare attackers.
An illustration depicting the five realms of Information Warfare is shown in Figure 1.
Figure 1. Information Warfare realms (Molander, 1996)
We have a limited understanding of the factors that make people influential and topics popular in social media, and despite a growing body of research surrounding content and content creators, our understanding of the factors that make messages popular and influential is still incomplete. (Weng, Menczer, & Lambiotte, 2015) Online news information sources have long since become an indispensable part of the public’s media regimen. The results of a 2012 survey conducted by the Pew Research Centre for the People and the Press support this with their findings revealing that 25% of American adults were regularly learning about the presidential candidates and campaigns from the Internet. (Dimitrova & Bystrom, 2013)
Social media content that communicates conflict can be used to cajole, dissuade and influence others. Using these channels, social media content can be packaged in terms of trends in order to either discredit or support a topic. Importantly, the content may or may not be accurate or true. But if it delivers some form of divisiveness, conflict or controversy, then it may succeed in driving its amplification through trends leveraging the theory of homophily. Monge (2003) summarized two main lines of reasoning that support the theory of homophily: Byrne's (1971) similarity-attraction hypothesis and Turner's (1987) theory of self-categorisation. The similarity-attraction hypothesis predicts that people are more likely to interact with those with whom they share similar traits. The self-categorisation theory proposes that people tend to self-categorise themselves and others in terms of race, gender, age, education, etc., and that they use these categories to further differentiate between similar and dissimilar others.

It is the theory of homophily that helps to explain one of the more common criticisms of social media: that it can create an 'echo chamber' environment. In an echo chamber, users see only viewpoints with which they agree, and it is this that may encourage polarisation. If there is one fundamental truth about social media's impact on democracy, it is that it amplifies human intent, for both good and bad. At best, social media provides a platform with which to express ourselves and our views. At worst, social media becomes a vehicle with which to spread misinformation and division, and to erode and obstruct democracy (Chakrabarti, 2018).
Originally designed as a means to connect friends and family, Facebook has long since been exploited by people to channel their political energies, and it is now being used in ways never anticipated at the time of its original design and release. In January 2018, Facebook's Product Manager for Civic Engagement (Chakrabarti, 2018) wrote in a blog hosted on the Facebook Newsroom that 'in 2016, we at Facebook were far too slow to recognize how bad actors were abusing our platform.' Chakrabarti (2018) went on to comment that the 2016 US presidential election served to highlight the risks of foreign meddling, 'fake news,' and political polarisation. Facebook later learned that Russian sock-puppets and meat-puppets had created and promoted fake pages on Facebook in order to influence voter sentiment. He acknowledges that Facebook recognises that the same tools that give people a greater voice can also be used for malicious purposes: to deliver misinformation, create hoaxes, and sow dissension. Debate continues on exactly how much of the information social media users consume is misinformation and to what extent that misinformation influences consumer behaviour (Kollanyi, 2016; Woolley & Guilbeault, 2017).
Twitter promotes itself primarily as a news medium, from its advertising campaign using the marketing slogan 'What's happening?' to its subscriber email promoting current news items (Peterson, 2016). A Pew Research Center study found that 62 percent of U.S. citizens receive their news via social media (Shearer, 2016). In 1963 Cohen argued that the influence of the news media was not necessarily to tell people what to think, but to prioritize information and focus the public's attention on what it considered to be the pressing matters of the day (Cohen, 1963). As a result, editors, editorial boards, television producers and campaign managers acted as 'gatekeepers', hand-selecting the issues that would guide the political agenda and shape elections. Social media platforms like Twitter have bypassed these gatekeepers, disrupting this process and allowing users to guide the political agenda. However, without an editorial process, users can now post information that purports to be news but is in fact disinformation.
Bots are automated accounts that allow for the posting of content, or interaction with other users, without the need for human intervention. It is estimated that two-thirds of all 'tweeted links to popular websites are posted by automated accounts – not human beings' (Wojcik, 2018). Bots can be used in a positive way, for example by providing timely updates of RSS feeds and news events. Conversely, they can also be used for more nefarious activities such as the propagation of fake news, the manipulation of online rating and review systems, and attempts to alter and persuade political discussion.
Widespread public interest in Twitter bots arose when, after the 2016 US presidential election, the Russian news site RBC revealed that a Twitter account purporting to belong to the Tennessee Republican Party (@TEN_GOP), with 136,000 followers, was in fact a Russian bot operated by the Internet Research Agency (IRA) (Timberg, 2017). This particular bot's influence extended to including the U.S. president's son, Donald Trump Jr., amongst its followers (Collins, 2017). He continued to follow the account until its eventual closure on August 23, 2017, and had even retweeted its posts three times.
A study from the University of Oxford's Computational Propaganda Research Project reports that during the 2016 US election, armies of bots allowed the campaigns and their backers to achieve two key objectives: 1) to manufacture consensus and 2) to democratize online propaganda (Woolley, 2017). In the first instance, the artificial amplification of material supporting a candidate made that candidate appear more widely supported and legitimate than they actually were. Secondly, it gave the average citizen with access to, or knowledge of, social media automation techniques a means to create a propaganda network previously available only to governments and large commercial organizations. Trump later stated in a CBS 60 Minutes interview that he believed social media provided him with the key to victory (CBS, 2016).
1) That there are several effective Information Warfare strategies for influencing decisions about whether to vote, and who or what to vote for.
To address this hypothesis:
a) We research and analyse existing literature on the topic.
b) We create, deploy, monitor, and analyse fully automated politically focused bots.
2) Twitter can use Information Warfare strategies to influence people.
To address this hypothesis:
a) We research and analyse existing literature on the topic.
b) We monitor and assess our Twitterbots for Twittersphere interaction and their potential to influence voter sentiment in the forthcoming Australian federal election.
c) We monitor the Twitter accounts used by our bots in order to detect if an account is suspended or cancelled by Twitter. Accounts identified as bots are 4.6 times more likely to be suspended than accounts identified as human. (Wojcik, 2018)
d) We determine if it is possible to manipulate the characteristics used to categorise a Twitterbot's 'bot-ness' in order to reduce the likelihood of detection.
We created two fully automated Twitter bots around the account names 'Kratzen' and 'Pravum11', based upon a Twitterbot application written in Python. Each bot was given a 'Bio', and we also modified the source code in order to prepend random statements to retweets. Both bots were programmed to:
1) Retweet new RSS feeds from the Liberal Party of Australia website every four hours. A log of all Tweets sent via the RSS feed was recorded and referenced prior to retweeting in order to ensure the bot did not retweet previously sent Tweets.
2) Retweet all new tweets found with the hashtag of the current Prime Minister of Australia #ScottMorrisonMP every fifteen minutes. Likewise, a log of all Tweets sent was recorded and referenced prior to retweeting to ensure the bot did not retweet previously sent Tweets.
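The logging-and-deduplication behaviour described in steps 1 and 2 can be sketched in a few lines. The sketch below is a minimal illustration using only the Python standard library: the Twitter client itself is abstracted behind a `post` callable (our actual bots used a Python Twitterbot application with the Twitter API), and the feed XML and file paths are invented for the example.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def load_sent_log(path):
    """Return the set of item links that have already been tweeted."""
    p = Path(path)
    return set(p.read_text().splitlines()) if p.exists() else set()

def process_feed(feed_xml, log_path, post):
    """Tweet each RSS item not yet recorded in the log, then record it."""
    sent = load_sent_log(log_path)
    for item in ET.fromstring(feed_xml).iter("item"):
        link = item.findtext("link")
        if link and link not in sent:
            post(f"{item.findtext('title')} {link}")  # publish via the client
            with Path(log_path).open("a") as f:       # persist for the next run
                f.write(link + "\n")
            sent.add(link)
```

In production this loop would run on a schedule (every four hours for the RSS feed, every fifteen minutes for the hashtag search), with `post` bound to the real Twitter client; the log file is what prevents the bot from retweeting previously sent Tweets across runs.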
The Twitterbots were hosted on separate virtual machines running the Kali Linux OS versions 2017.3 and 2019.1 respectively. The bot account @kratzen was commissioned on April 15, 2019 and a second bot account @pravum11 was commissioned on April 24, 2019.
In order to ascertain whether the accounts might be interpreted as ‘bots’ or ‘human’ we used two applications, ‘Tweetbotornot’ and ‘Botometer’. Tweetbotornot is an R package that uses machine learning and purports to be 93.8% accurate. The algorithm is open source and uses ‘a generalized boosted model, which was trained using thousands of real automated (bot) and non-automated Twitter accounts and returns estimated probabilities of whether supplied accounts are bots.’
Botometer (formerly BotOrNot) is an online application that 'checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. Higher scores are more bot-like.' (https://botometer.iuni.iu.edu/#!/). Botometer has been used by a number of researchers and as such is considered a benchmark (Rizoiu, 2018; Wojcik, 2018; Woolley, 2017). Botometer assigns scores to accounts on a scale of 0 to 1. For this project, we used a score of 0.43 or higher to predict that an account is likely automated, based on a series of validation exercises (Wojcik, 2018). This threshold equates to 0.43 for Tweetbotornot, and to 2.15 for the online version of Botometer, which uses a scale of 0 to 5.
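Because the two tools report on different scales (Tweetbotornot on 0–1, the online Botometer on 0–5), applying the single 0.43 cut-off amounts to normalising the score first. The helper below is our own illustration of that mapping, not part of either tool.

```python
def is_likely_bot(score, scale_max=1.0, threshold=0.43):
    """Normalise a score to the 0-1 range and compare it to the threshold."""
    return score / scale_max >= threshold
```

For example, a Tweetbotornot score of 0.43 and an online-Botometer score of 2.15 (on its 0–5 scale) both sit exactly at the threshold, while a Botometer score of 2.0 (0.40 normalised) falls below it.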
Changes were made to the @kratzen profile and these were measured using both Botometer and Tweetbotornot. Both bot prediction tools were used to account for any bias that may result from the different algorithms that they use.
On April 11, 2019 it was reported that the hashtags #GoBackModi and #TNwelcomesModi were being utilised by competing Twitter bots to drive pro- and anti-Modi traffic ahead of the forthcoming Indian elections, in which Prime Minister Modi was a leading candidate (Ajmal, 2019). 'Inteltag' is a script that identifies the Twitter accounts that post most frequently for a given hashtag (https://github.com/Cignoraptor-ita/inteltag). @MccRoopan came to our attention after posting 59 times on March 30, 2019 under the anti-Modi hashtag #GoBackFascistModi. The account was opened in November 2013, had been almost completely dormant for 63 months, and then made a total of 104 posts on two separate days (see Figure 2). On these grounds it was flagged as a potential sock-puppet or bot and monitored for comparison with our bots.
Figure 2. MccRoopan – Tweet timeline
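The core of what a script like Inteltag does, counting how often each account posts under a given hashtag, can be illustrated in a few lines of Python. The tweet records below are invented examples; the real script's implementation may differ.

```python
from collections import Counter

def top_posters(tweets, hashtag, n=3):
    """Return the n accounts that post the given hashtag most often."""
    counts = Counter(
        t["user"] for t in tweets if hashtag in t["text"]
    )
    return counts.most_common(n)
```

Running this over a day's traffic for #GoBackFascistModi is how an unusually prolific account such as @MccRoopan would surface at the top of the list.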
Figures 3 and 4 illustrate the recorded statistics of the bot predictability measurements for @kratzen and @MccRoopan using Tweetbotornot and Botometer respectively, for the duration of this study.
Figure 3. TweetBotornot – Bot Probability Prediction
Figure 4. Botometer – Bot Probability Prediction
The bot account @kratzen was created on April 15, 2019 with the default profile, i.e. no profile image or bio information, with geolocation disabled by default. The profile was later updated on April 18, 2019 with an image and a bio. On the same date Twitter geolocation was enabled and a VPN was used to make it appear that the account was posting from Costa Rica and Dallas, Texas (see Figure 5). The Botometer algorithm considers Twitter metadata related to user profile, language and geographic location (Davis, 2016). No significant changes were seen in the overall bot scores from either Botometer or Tweetbotornot.
Figure 5. @kratzen geolocation enabled – VPN: Atlanta, Georgia, USA
On April 19, 2019 we made changes to the code of our bot in order to prepend a random text message to the heading of each retweet (Figure 6 and Figure 7). Chosen at random from a database of predefined messages, these prefixes were intended to 'humanise' our bots and thereby lower their 'bot-ness' rating. This change did result in decreases (more human) in the Botometer temporal and sentiment assessment values (see Figure 8). These decreases continued until April 24, 2019, when they appear to plateau. However, they did not significantly impact the overall Botometer analysis score. Interestingly, @pravum11, a clone of @kratzen, did not inherit the lower Botometer temporal and sentiment values seen with @kratzen (see Figure 9).
Figure 6. Example 1 of random prepended text
Figure 7. Example 2 of random prepended text
Figure 8. Botometer – scores for @kratzen
Figure 9. Botometer – scores for @pravum11
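The random-prepend change of April 19 can be sketched as follows. The canned phrases here are invented stand-ins for the study's database of predefined messages.

```python
import random

# Invented example phrases; the study used a database of predefined messages.
PHRASES = [
    "Worth a read:",
    "Interesting take.",
    "Saw this today:",
]

def humanise(retweet_text, phrases=PHRASES):
    """Prepend a randomly chosen canned phrase to a retweet's text."""
    return f"{random.choice(phrases)} {retweet_text}"
```

Varying the leading text in this way breaks the exact repetition of retweeted content, which is one of the temporal and textual regularities that bot-detection algorithms such as Botometer weigh.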
The Tweetbotornot result for @MccRoopan remained below our bot threshold, whereas its Botometer score started below the threshold and then, despite few further posts, approached a level resembling that of our automated bots. These differences could be seen to support our suspicion that this account might be a sock-puppet or a bot. Further analysis falls outside the scope of this study, but the case does demonstrate the complexity of bot-detection algorithms.
More importantly, the study did reveal some interesting data concerning @kratzen's interaction with the Twittersphere and its potential to influence both the Australian federal election and, inadvertently, the 2019 Indian election. During the study @kratzen created 169 tweets in total. These in turn generated further Twitter activity (see Figure 10). In the early stages it retweeted six tweets by the followers of @MccRoopan with #Election2019, a hashtag associated with the Indian elections.
Making the most basic assumptions, if the six #Election2019 tweets are considered we see a third-party retweet ratio of approximately 1:100, whereas the Australian election third-party retweet ratio is approximately 1:1. Comparing Twitter users in India, 300,000,000 (statista.com, 2019), with those in Australia, 4,700,000 (Cowling, 2019), we take a ratio of approximately 15:1. Assuming an engagement ratio of 1:1 and an existing bot ratio of 1:1, we would need approximately 15 cloned bots to achieve similar results.
@pravum11 commenced activity on April 24, 2019, generated 71 tweets and no re-tweets.
Figure 10. Third party Twitter interaction with @kratzen and forthcoming elections
In order to determine the validity or otherwise of our hypotheses we adopted two dissimilar research methods. The first research method is based upon the review of traditional literary material from sources such as business journals, websites, academic reports, books, and articles. The second research method is based upon the creation, implementation, monitoring, and analysis of several social media tools and applications. To this extent, the success of our research can be summarised as follows. The review of literary material supports the hypothesis that effective information warfare strategies are capable of influencing the democratic process, particularly when channelled through the medium of social media. The review of literature also supports the hypothesis that Twitter in particular can be leveraged to exert influence upon voter sentiment using polarisation, the theory of homophily, the similarity-attraction hypothesis and the theory of self-categorisation. Our endeavours in using our own Twitterbots as an independent analysis tool to address the hypotheses produced results that were inconclusive. This result is due to two factors: firstly, the study's very short time frame and secondly, the lack of scale of the Twitterbots. We created two Twitterbots but now believe that 15 or more would provide a more meaningful set of results.
The purpose of this study was to determine whether there are effective information warfare strategies capable of influencing voters, and specifically whether Twitter can be used to deliver those strategies in the context of an Australian federal election. Our research showed that the use of information warfare strategies can influence voter sentiment, particularly when used in conjunction with Twitter. Our research into the use of Twitterbots indicated that, given a sufficient timeframe and sufficient scale, a botnet has the potential to alter and persuade political discussion. The significance of our research is that it draws particularly upon local Australian content in the political context of a nation three weeks away from a federal election.
Our recommendations for continuing study in this area are to expand the time frame and the scale of the botnet. In addition, efforts should be focussed on making the bots more humanlike, both in behaviour and when assessed against bot probability prediction criteria.
- Ajmal, A. (2019). ‘Pro’ and ‘anti-Modi’ bots driving pre-election traffic on Twitter: US think-tank. Retrieved from https://timesofindia.indiatimes.com/elections/news/pro-and-anti-modi-bots-driving-pre-election-traffic-on-twitter-us-think-tank/articleshow/68819462.cms
- Byrne, D. E. (1971). The Attraction Paradigm. New York: Academic Press.
- CBS News. (2016). Trump says his social media power key to victory. Retrieved from https://www.cbsnews.com/news/president-elect-trump-says-social-media-played-a-key-role-in-his-victory/
- Chakrabarti, S. (2018, 22 January). Hard Questions: What Effect Does Social Media Have on Democracy? Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2018/01/effect-social-media-democracy/
- Cohen, B. (1963). Press and Foreign Policy. Princeton NJ: Princeton University Press.
- Collins, B., Poulsen, K., Ackerman, S., & Woodruff, B. (2017). Trump Campaign Staffers Pushed Russian Propaganda Days Before the Election. Retrieved from https://www.thedailybeast.com/trump-campaign-staffers-pushed-russian-propaganda-days-before-the-election
- Cowling, D. (2019). Social Media Statistics Australia – January 2019. Retrieved from https://www.socialmedianews.com.au/social-media-statistics-australia-january-2019/
- Dimitrova, D. V., & Bystrom, D. (2013). The Effects of Social Media on Political Participation and Candidate Image Evaluations in the 2012 Iowa Caucuses. The American Behavioral Scientist, 57(11), 1568. doi:10.1177/0002764213489011
- Kollanyi, B., Howard, P., & Woolley, S. (2016). Bots and Automation over Twitter during the U.S. Election. Data Memo 2016.4: Project on Computational Propaganda. Retrieved from http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2016/11/Data-Memo-US-Election.pdf.
- Molander, R., Riddile, A., & Wilson, P. (1996). Strategic Information Warfare: A New Face of War. RAND Corporation. Retrieved from https://www.rand.org/pubs/monograph_reports/MR661/index2.html.
- Monge, P. R., & Contractor, N. (2003). Theories of Communication Networks. Oxford: Oxford University Press.
- Peterson, T. (2016). Twitter recycles 7-year-old tagline in new ad campaign. Retrieved from https://marketingland.com/twitter-recycles-7-year-old-tagline-new-ad-campaign-185996
- Rizoiu, M., Graham, T., Zhang, R., Zhang, Y., Ackland, R., & Xie, L. (2018). DEBATENIGHT: The Role and Influence of Socialbots on Twitter During the First 2016 U.S. Presidential Debate. Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018). Retrieved from https://arxiv.org/abs/1802.09808.
- Shearer, E., & Gottfried, J. (2016). News Use Across Social Media Platforms 2016. Pew Research Center. Retrieved from https://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/.
- statista.com. (2019). Twitter Users in India. Retrieved from https://www.statista.com/statistics/381832/twitter-users-india/
- Taddeo, M. (2012). Information Warfare: A Philosophical Perspective. Philosophy & Technology, 25(1), 105-120. doi:10.1007/s13347-011-0040-9
- Timberg, C., & Entous, A. (2017). Russian Twitter account pretending to be Tennessee GOP fools celebrities, politicians. Retrieved from https://www.chicagotribune.com/bluesky/technology/ct-russian-twitter-account-tennessee-gop-20171018-story.html
- Turner, J. C. (1987). Rediscovering the Social Group: A Self-Categorization Theory. Oxford: Basil Blackwell.
- Weng, L., Menczer, F., & Lambiotte, R. A. E. (2015). Topicality and Impact in Social Media: Diverse Messages, Focused Messengers. PLOS ONE, 10(2). doi:10.1371/journal.pone.0118410
- Wojcik, S., Messing, S., Smith, A., Rainie, L., & Hitlin, P. (2018). Bots in the Twittersphere. Pew Research Center. Retrieved from https://www.pewresearch.org/search/bots%20in%20the%20twittersphere
- Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufacturing Consensus Online. Working Paper No. 2017.5: Computational Propaganda Research Project. Retrieved from http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-USA.pdf.
- Yang, X., Chen, B. C., Maity, M., & Ferrara, E. (2016). Social Politics: Agenda Setting and Political Communication on Social Media. Paper presented at Social Informatics: 8th International Conference, Bellevue, WA, USA.