Introduction
Computational propaganda is an emerging social media phenomenon that uses automation and algorithms to disseminate and amplify discourse efficiently at scale. This includes the spread of disinformation and state-funded propaganda in the service of ideological control and manipulation. Governments and other actors leverage computational power, Internet resources, and big data to control and manipulate information. Using social media to spread disinformation, consolidate power, exert social control, and promote agendas has become a recognized strategy for many states worldwide.
Propaganda strategies continuously adapt to technological and media changes, emphasizing the need to monitor media discourse, particularly on social media platforms. Recent trends, including the rise of bots, trolls, and other manipulative efforts, underscore the importance of identifying and analyzing these activities, as well as the narratives and communities involved in disseminating malicious information.
Russia’s 2022 invasion of Ukraine underscores the significant role of social media in modern warfare, as both sides use online platforms to manipulate geopolitical dynamics and shape public opinion. Russia-affiliated social media accounts propagate narratives aligned with their motives, undermining support for sanctions against Russia and for Ukraine. Conversely, the Ukrainian side aims to raise global awareness of Russia’s war crimes, garner Western support, emphasize its own military endeavors, and challenge prevailing perceptions of the Russian military.
While extensive research exists on identifying malicious cyber activities, less attention has been given to investigating narratives and their role in broader conversations, particularly concerning Russia’s invasion of Ukraine. This study examines the propagation and discussion of the ‘fascism/Nazism’ narrative on Twitter, covering both the English- and Russian-language segments of the platform.
Methodology and Data Analysis
To achieve this, we employ a mixed-methods pipeline for social media analysis that combines network science, natural language processing, community clustering, and qualitative analysis of tweets and users.
Data Collection and Preprocessing
The Python package twarc was used to collect tweets via an archive search with the updated Twitter academic API. Twarc wraps the Twitter API and simplifies searching, filtering, and collecting tweets. Two datasets, one in English and one in Russian, were compiled, covering the thirteen months from December 24, 2021, to January 24, 2023.
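A minimal sketch of how such a collection might look with twarc’s v2 client follows; the bearer token is a placeholder and the query string is a hypothetical keyword filter for the ‘fascism/Nazism’ narrative, not the study’s actual search query.

    # Illustrative archive collection with twarc's Twarc2 client.
    # The token and the query are placeholders, not the study's values.
    from datetime import datetime, timezone
    from twarc import Twarc2

    client = Twarc2(bearer_token="YOUR_ACADEMIC_API_TOKEN")  # placeholder

    # Hypothetical keyword query for the English 'fascism/Nazism' narrative.
    query = "(nazi OR nazism OR fascist OR fascism) ukraine lang:en"

    start = datetime(2021, 12, 24, tzinfo=timezone.utc)
    end = datetime(2023, 1, 24, tzinfo=timezone.utc)

    # search_all pages through the full-archive endpoint (academic access).
    for page in client.search_all(query=query, start_time=start, end_time=end):
        for tweet in page["data"]:
            print(tweet["id"], tweet["text"][:80])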
Quantitative Analysis
Network Analysis
The ORA software tool was used to analyze the data. ORA provides various features for Twitter data, such as identifying super spreaders (users who frequently generate content that is widely re-shared) and super friends (users who engage in frequent two-way communication, sustaining large or strong communication networks).
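ORA computes these metrics internally; purely as an illustration, the two ideas can be approximated on retweet and mention graphs with networkx. This sketch is not ORA’s actual algorithm, and the accounts and edge weights are invented.

    # Rough approximation of the super-spreader / super-friend ideas.
    import networkx as nx

    # Directed retweet graph: edge u -> v means u retweeted v, weighted by count.
    retweets = nx.DiGraph()
    retweets.add_weighted_edges_from([
        ("alice", "newsbot", 12), ("bob", "newsbot", 9), ("carol", "dave", 2),
    ])

    # Super spreaders: accounts whose content is widely rebroadcast,
    # proxied here by weighted in-degree of the retweet graph.
    spreaders = sorted(retweets.in_degree(weight="weight"),
                       key=lambda kv: kv[1], reverse=True)

    # Directed mention/reply graph for two-way communication.
    mentions = nx.DiGraph([("alice", "bob"), ("bob", "alice"), ("carol", "alice")])

    # Super friends: accounts with many reciprocated (two-way) ties.
    reciprocal = {u: sum(1 for v in mentions.successors(u)
                         if mentions.has_edge(v, u))
                  for u in mentions.nodes}

    print(spreaders[:3])
    print(sorted(reciprocal.items(), key=lambda kv: -kv[1])[:3])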
Community Detection
To identify the network communities participating in conversations on Twitter, the Leiden clustering method was used. The Leiden algorithm alternates moving nodes between communities with refining the resulting partition, guaranteeing that the communities it returns are well connected.
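A minimal sketch of Leiden community detection with python-igraph and leidenalg is shown below; the study ran Leiden on the Twitter communication networks, whereas the toy graph here is purely for illustration.

    # Minimal Leiden community detection on a toy interaction graph.
    import igraph as ig
    import leidenalg

    # Toy undirected graph (e.g., users linked by retweets or mentions).
    g = ig.Graph(edges=[(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])

    # Partition the network by modularity; Leiden's refinement step ensures
    # each returned community is internally connected.
    partition = leidenalg.find_partition(g, leidenalg.ModularityVertexPartition)

    for i, community in enumerate(partition):
        print(f"community {i}: nodes {community}")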
Topic Modeling
A modified BERTopic modeling methodology was used to generate topic networks from the text content of tweets. This enhanced pipeline uses OpenAI’s GPT-4 to generate human-readable topic labels instead of labeling each topic cluster with its most frequent words.
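A sketch of such a pipeline, based on BERTopic’s built-in OpenAI representation model, is given below; the study’s exact prompts and parameters are not specified, the API key is a placeholder, and the generated corpus stands in for the tweet datasets.

    # BERTopic with GPT-generated topic labels (illustrative settings).
    import openai
    from bertopic import BERTopic
    from bertopic.representation import OpenAI

    client = openai.OpenAI(api_key="YOUR_OPENAI_KEY")  # placeholder key

    # Ask GPT-4 for a short human-readable label per topic instead of
    # the default top c-TF-IDF words.
    representation_model = OpenAI(client, model="gpt-4", chat=True)
    topic_model = BERTopic(representation_model=representation_model)

    # Placeholder corpus; in the study this would be the tweet texts.
    docs = [f"tweet {i} about {'nazi' if i % 2 else 'fascist'} narratives"
            for i in range(500)]

    topics, probs = topic_model.fit_transform(docs)
    print(topic_model.get_topic_info().head())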
Qualitative Analysis
The qualitative analysis involved manually reviewing the content of tweets and users through textual and visual analysis. The focus was on the most influential users in each community, as their content is widely disseminated.
Quantitative Outcomes
The network analysis identified super spreaders and super friends in the English and Russian datasets. The super spreaders encompass accounts belonging to influential figures, news organizations, and users propagating anti-West and anti-Ukraine narratives associated with Russia’s invasion.
The Leiden clustering analysis revealed distinct communities with varying attitudes, ranging from pro-Ukraine to pro-Russia sentiments. The BERTopic modeling generated topic networks showing similar discussions in both English and Russian tweets, including many of the expected propaganda narratives. The English topics tend to center on the word “Nazi,” while the Russian topics use variations of “fascist” in narratives that seek to justify the invasion on similar grounds.
Qualitative Outcomes
The qualitative analysis of the identified topics and influential users provided deeper insights into the narratives and discourse strategies employed by different political groups.
The pro-Ukraine accounts actively disseminate narratives about Russia’s war crimes, labeling Russian politicians as fascists and drawing comparisons between Russian actions and those of Nazi Germany. They also ridicule Russia, state propaganda, and state media narratives, likening Putin to Hitler and referring to Russia as a Nazi regime.
In contrast, the pro-Russia state propaganda narratives present the invasion as a “special military operation” necessary to prevent an attack from Ukraine or to stop war and discrimination against the Russian population, aligning with Russia’s official government stance. They also accuse Ukrainian military forces of war crimes and killing their own citizens, depicting the invasion as a “liberation” of Ukraine from “Ukrofascists” and “Ukronazis.”
The alt-right political activists and conspiracy theorists, on the other hand, propagate narratives revolving around Hunter Biden’s emails, alleging the Bidens’ corrupt interests in Ukraine. They also seek to undermine financial support for Ukraine, highlight corruption in Ukraine, and promote disinformation narratives and conspiracy theories about global elites and world order, military biolabs in Ukraine, and other topics.
Discussion
The findings of this study contribute to the broader understanding of disinformation campaigns employed by governments on social media. By shedding light on the strategies, narratives, and communities associated with Russia’s state propaganda discourse during the invasion of Ukraine, it enhances our knowledge of the evolving tactics used to manipulate public opinion and shape geopolitical dynamics.
The analysis confirms that the most ideologically extreme parties tend to use more aggressive language than moderate ones. The pro-Russia state propaganda narratives aim to justify the invasion, accuse Ukraine of war crimes, and undermine Western support, while the pro-Ukraine accounts seek to expose Russia’s war crimes and challenge the prevailing narratives.
Notably, the study found no strong correlation between the tone of the tweets and their level of dissemination, suggesting that the perception of increased aggressiveness in the political discourse may not be entirely supported by empirical data. This highlights the importance of combining quantitative and qualitative approaches to gain a comprehensive understanding of the dynamics at play.
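The paper does not state how tweet tone was measured; purely to illustrate how such a tone-versus-dissemination check could be run, the sketch below pairs VADER sentiment scores with retweet counts using a rank correlation. The tweets and counts are toy examples, not study data.

    # Illustrative tone-vs-spread check: VADER compound sentiment against
    # retweet counts, with a Spearman rank correlation.
    from scipy.stats import spearmanr
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    tweets = [  # (text, retweet_count) -- toy examples only
        ("This is an outrageous war crime!", 1500),
        ("Helpful thread on verified sources.", 300),
        ("Another calm update from the front.", 45),
    ]

    analyzer = SentimentIntensityAnalyzer()
    tones = [analyzer.polarity_scores(text)["compound"] for text, _ in tweets]
    spread = [count for _, count in tweets]

    rho, p = spearmanr(tones, spread)
    print(f"Spearman rho={rho:.2f}, p={p:.2f}")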
Conclusion
This study contributes to the growing body of research on computational propaganda, emphasizing the need for continued monitoring of social media discourse for a nuanced understanding of evolving propaganda tactics. The findings underscore the importance of considering cultural and historical contexts in analyzing propaganda narratives and highlight the role of influential actors in shaping public opinion during geopolitical events.
Moving forward, it is crucial to continue research and efforts aimed at developing effective countermeasures against disinformation campaigns. This includes raising awareness among social media users about the presence and impact of computational propaganda, promoting media literacy, and improving the transparency and accountability of social media platforms. Collaboration between researchers, policymakers, and technology companies is essential in developing comprehensive strategies to combat the spread of harmful disinformation and protect the integrity of information in the digital age.