Election security, influence operations and the Indo-Pacific: New challenges for democracies

A recent report released by Microsoft has once again highlighted the threat that totalitarian states like China and North Korea pose to democracies, both of which have mastered asymmetric warfare as an instrument of state policy. What has alarmed many is the integration of emerging technologies such as generative AI to enhance operational performance, and the use of cyber activities to run mass influence operations (IOs) through new media. These developments should set alarm bells ringing in democratic states and force them to rethink safeguard mechanisms for election security. India is particularly vulnerable because of its recurring and continuous election cycle, especially the Lok Sabha elections, which span several weeks.
Cyber operations serving strategic goals
Motivations for undertaking targeted cyber operations may vary from region to region, as the 2024 report notes. Even within a region, however, Chinese state actors have specific motivations for conducting cyber operations, spanning military, economic, political, intelligence, and social interests. For example, forging strong ties with the South Pacific islands is an economic and strategic motivation for China. In the case of Papua New Guinea, the motivation may be purely economic, as the country is part of the Belt and Road Initiative (BRI). In countries like South Korea, the factors driving cyber operations may be different.
Military interests also play an important role in the state’s cyber calculations. For instance, Raspberry Typhoon, a Chinese nation-state activity group, targeted military entities in Indonesia and Malaysia just a week before a major multilateral maritime exercise. Similarly, Flax Typhoon, another threat actor, targeted US-Philippines military exercises, adding to the long list of attacks. These cyber operations are aimed at extracting either sensitive information or operational details. However, influence operations are another way these threat actors engage in cyberspace, and they present a threat to democratic states’ election security. All of these actions are part of China’s larger strategic goal of maximising its interests by influencing narratives and discourses in democratic states.
Influence operations and election security
Chinese influence operations have become a potent tool to manipulate narratives in democratic states, where people enjoy fundamental rights such as freedom of speech and a free press. In these settings, China is effectively using both traditional and new media, taking inspiration from other IOs such as Doppelganger, a Russia-aligned influence operation network.
In recent years, a pattern has emerged highlighting China’s increasing use of IOs. Meta’s report has identified Spamouflage, a China-based operation, as one of the two largest sources of manipulative information. Social media companies like Twitter (now X) and YouTube have shut down many accounts that are reportedly either state-supported or state-affiliated. Chinese activity has also been found on other platforms, including Pinterest, Quora, Vimeo, Reddit, and Instagram.
Much of China’s cyber activity observed in Taiwan amounts to grey-zone tactics. For instance, during the 2024 Taiwan elections, cyber-attacks against the island state rose sharply in the 24 hours before polling; this is supposedly part of Beijing’s ‘mo hei’ practice, influenced by the Soviet-era strategy of collecting information for political purposes. The practice also finds mention in the 2024 US Intelligence Assessment report, which identifies Beijing’s influence operations as drawing on Moscow’s playbook. The report further highlights Beijing’s attempts to exploit US societal divisions, discredit political institutions, mould public discourse, and sow doubts about US leadership. Most of these tactics may look harmless in isolation; however, when connected to China’s larger political objectives, they are deeply concerning for democratic states.
South Korea, another country mentioned in the report, has also been targeted by Chinese threat actors, in this case to influence public perception of Japan’s decision to release Fukushima wastewater. This was not the first time Chinese activities have been flagged in South Korea. Last year, it was reported that two Chinese public relations companies were running 38 fake Korean-language news websites (mock news sites) to manipulate public opinion towards China and the US. Beyond these cyber influence efforts, South Korea has also identified police stations in Seoul and on Jeju Island operated by China. These police stations were ‘designed to intervene in domestic policies or bend public opinion’ and to conduct ‘intelligence collection.’ Following an uproar in South Korea, the government shut these institutions down.
In the case of India, China has traditionally engaged in IOs through different means, be it co-opting journalists, enticing academia, or paying mainstream newspapers to present its political narrative. However, in a tech-dominated world, it has not been able to expand much into Indian social media discourse compared to Western countries, owing to an increased crackdown on its activities and the banning of Chinese social media and news applications. With fewer avenues to exploit, China is now looking to use existing social media ecosystems such as X and YouTube for its IOs. The scale and scope of Chinese election interference through IOs remain under-researched in India, but they are a major concern for authorities. Election Commissioner Anup Chandra said, “Cyber-attacks and information influence operations are posing an increasing threat to election infrastructure and perceptions of electoral integrity.” Threat actors can be expected to exploit existing political fault lines; ongoing political disagreements over electronic voting machines are one issue that can easily be exploited.
AI-enabled IOs and elections
One of the most important issues concerning IOs is the use of AI to improve their reach and catch users’ attention. Many IO accounts on popular platforms have achieved high levels of engagement using these tools. These operations have become more dangerous with generative AI, as the ability to create visually engaging content has drastically improved. Propaganda dissemination through such accounts has become easier; threat actors can now achieve ‘increasing volume and frequency throughout the year’, which was not the case earlier, and the content appears increasingly effective among Gen Z. Such AI applications were observed on TikTok in the US, where the app was used to target US politicians on both sides of the aisle. China’s APTs have also targeted Biden’s campaign staff through phishing. Similarly, during the Taiwan elections, threat actors like Storm-1376, also known as Spamouflage or Dragonbridge, used AI, a first for a state-affiliated actor attempting to influence a foreign election. Meta has identified that these activities are associated with Chinese law enforcement.
AI-enabled deepfakes are another challenge that has emerged in IOs, making fabrication easier. AI-enabled memes, AI-generated anchors, and AI-enhanced videos have also found their way into Chinese IO tactics. In Taiwan’s elections, actors like Storm-1376 used AI-generated audio, as in the case of a fabricated clip of Terry Gou appearing to endorse another candidate on election day, to manipulate voting preferences. Similarly, a deepfake video was used against Lai Ching-te to spread false rumours and tarnish his reputation.
