Interview with Christine Dugoin-Clément on artificial intelligence systems and influence strategies

Portrait of Christine Dugoin-Clément

Christine Dugoin-Clément is an associate researcher with the Chaire “Risques” at IAE Paris – Sorbonne Business School. At Paris 1 Panthéon-Sorbonne University, she wrote and defended a dissertation (“Analyse d’opérations d’influences modifiant le comportement”) on the impact of digital influence strategies on decision-making in extreme situations, based on case studies of military teams.

A specialist in Eastern European geopolitics, she recently published Influence et manipulations: des conflits armés modernes aux guerres économiques, in which she discusses how the influence strategies used in Ukraine might be reproduced in Western Europe and America. In conversation with Paris 1 Panthéon-Sorbonne’s Artificial Intelligence Observatory, she talks about the connections between algorithmic systems and influence strategies.

The paradox of artificial intelligence systems

Christine Dugoin-Clément: There is a constitutive paradox in the very design of artificial intelligence systems and in what we want them to do. For instance, facial recognition is perceived as a threat when it is used by the State for surveillance purposes, but the use of the very same technology by private businesses, often to further commercial interests, does not seem to generate as much distrust.

For AI to be effective, we usually need a large amount of data, and we need it to be “clean”, i.e. properly classified. But issues persist concerning the origin of this data: it often comes from various sources, and its owners are not always aware of what access and usage they have authorized. Finally, even when consent has been given, further information can be extracted from the data, including information that may have been entered incorrectly. For example, anonymized data can be “de-anonymized” by cross-checking it against other sources, unveiling the supposedly protected identity of individuals.
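To make this cross-checking mechanism concrete, here is a minimal, purely illustrative sketch (the datasets, column names and values are invented) of a so-called linkage attack, in which an “anonymized” dataset is joined to a public auxiliary dataset on shared quasi-identifiers:

```python
# Hypothetical example of re-identification by cross-checking ("linkage attack"):
# an "anonymized" dataset with names removed is joined to a public auxiliary
# dataset on quasi-identifiers (postcode, birth date, sex).
import pandas as pd

# "Anonymized" records: direct identifiers removed, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "postcode":   ["75011", "69003"],
    "birth_date": ["1984-02-17", "1990-07-02"],
    "sex":        ["F", "M"],
    "diagnosis":  ["asthma", "diabetes"],
})

# Public auxiliary data (for example an electoral roll) that still carries names.
public_register = pd.DataFrame({
    "name":       ["A. Martin", "B. Dupont"],
    "postcode":   ["75011", "69003"],
    "birth_date": ["1984-02-17", "1990-07-02"],
    "sex":        ["F", "M"],
})

# Joining on the shared quasi-identifiers links names back to the
# supposedly protected records.
reidentified = anonymized.merge(
    public_register, on=["postcode", "birth_date", "sex"], how="inner"
)
print(reidentified[["name", "diagnosis"]])
```

With only three innocuous-looking attributes in common, every record in this toy example is re-identified; the same mechanism underlies many documented re-identification cases.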

Furthermore, there are reports of AI systems developed with a very high level of technical refinement, but for which issues surrounding ethics, data protection and civil liberties had not been considered. When only the effectiveness and technical capacity of these systems are taken into account during development, the result is often a product that fails to meet requirements on those crucial issues. In other words, when designing technical tools with operational goals within a limited time-frame, the ethical aspects concerning individual freedoms can quickly be left aside. This risk increases with the constant need for massive data sets, since the processing of this kind of data can negatively impact social and fundamental rights.

That being said, we do notice greater awareness of these issues, and an increasing number of projects integrate social considerations in their design. For instance, more and more projects are designed following a HITL (human-in-the-loop) or SITL (society-in-the-loop) approach. Other systems aim to be “ethical by design”, i.e. to integrate ethical norms and fundamental rights at the earliest stages of design to make sure they are respected. Cultural differences add a final difficulty, since different countries have different notions of what is ethical.

The involvement of algorithmic systems in “influence operations”

Christine Dugoin-Clément: First, let us recall that we are dealing with an influence operation when one party uses a variety of strategies to alter the decision-making of a third party for its own benefit (possible targets include competitors, opponents, or enemies in situations of war). Many tools can be employed to achieve this: human, media, financial and economic ones, including the digital tools that developed with the rise of the internet. For example, patents and standards are at the center of truly thorny issues. As the saying goes, whoever controls standards controls markets. This can alter the competitiveness of rivals or opponents, sometimes to the point of affecting an opponent’s entire economic system.

At the informational level, the end goal of these operations is to modify the decisions and behaviors of an individual, a social group or, on a broader scale, an entire population. This goal can be reached through the dissemination of content selected to suit the purposes of the operation’s designer. Such operations also come in various degrees, depending on whether the aim is to persuade a third party, to create distrust between individuals or groups (for instance, between a particular social group and a government), or to undermine the existing trust between stakeholders so as to alter how instructions or orders are carried out. In all cases, however, the result is behavior modification. It can be slight, such as decreased engagement, but it is nonetheless enough to disorganize groups and institutions.

The general public discovered the value of having data, and especially psychometric data, at one’s disposal for influence operations when the Cambridge Analytica scandal broke out; in that case, the data had been collected from millions of Facebook users. Russian interference in favor of Donald Trump in the 2016 American presidential election confirmed that these attempts are real. But influence operations can also aim to sow turmoil or to breach trust, effectively hindering the decision-making of a State, a business or a competitor.

Public opinion is a crucial issue in the field of strategy, especially among the military, who have been interested in this topic for years. Recently, Russian military doctrines have even pointed to the greater importance of non-military operations (including influence operations) over military ones. On the ground, even before the Russian invasion of February 24, 2022, Ukraine had been involved in the Donbass conflict since 2014. This made the country an open-air laboratory for cyberattacks, but also for influence operations, disinformation and misinformation campaigns, and propaganda. Different channels can be used in such cases: blogs, online radio and television stations, and, of course, social media. The well-known troll farm located in Olgino, a Saint Petersburg suburb, was especially focused on social media. As a reminder, this group employed men and women who worked 24/7 to put forward and widely circulate positions favorable to Russian interests online.

Yet beyond the work of human employees, algorithms and software can also be put to use, particularly to optimize online reaction time. Bots can thus be used to shorten response times, or to increase the likelihood of a selected topic, story or hashtag going viral.
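As a purely illustrative toy model (the scoring rule and all numbers below are invented, not taken from any real platform), the sketch below shows why automated accounts are effective at this: naive trending metrics tend to reward post volume and posting velocity, both of which bots can inflate at negligible cost.

```python
# Toy illustration (hypothetical scoring rule and numbers): automated accounts
# inflate both the volume and the velocity of posts carrying a hashtag, which
# is exactly what a naive "trending" metric rewards.

def trending_score(posts_in_window: int, window_minutes: float) -> float:
    """Naive score: post volume weighted by posting velocity (posts per minute)."""
    return posts_in_window * (posts_in_window / window_minutes)

WINDOW_MINUTES = 60
organic_posts = 60            # genuine users posting over one hour
bot_posts = 50 * 20           # 50 automated accounts, 20 posts each

print(f"organic:   {trending_score(organic_posts, WINDOW_MINUTES):.0f}")
print(f"amplified: {trending_score(organic_posts + bot_posts, WINDOW_MINUTES):.0f}")
```

In this toy setting, a modest number of automated accounts makes the score a few hundred times larger, which is why coordinated amplification is both cheap and attractive.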

Where deepfakes (i.e. synthetic content generated with AI to obtain a realistic result, usually in audio or video format) are concerned, however, we have observed how little they have so far been used for political influence or to attack the probity or image of public figures. For example, when the Russian invasion of Ukraine began, a low-quality deepfake was produced and broadcast online, featuring President Zelensky asking Ukrainians to give up the fight and surrender. The quality was very poor: in the recording, the face did not match the body in size or image resolution, for instance. It was so crudely done that this piece of synthetic content was quickly identified as such. Even so, this flawed instance allowed us to see how deepfakes could be used in situations of war.

How to fight influence strategies at the individual and collective levels

Christine Dugoin-Clément: We often tend to think that debunking fake news, or identifying it as such, is enough to counteract influence strategies. It is an honorable goal, but it does not work in every case. The inaccurate information in circulation will already have persuaded those members of the public who were disposed to believe falsified or distorted content. In such cases, and this is especially true in conspiracist networks, attempted debunking may backfire and reinforce the initial adherence to the inaccurate content: whoever is persuaded that a piece of fake news is true will assume that reporters or official representatives are covering up truths which they do not want to be known.

The fight against influence strategies depends on education. We must provide better digital education to raise awareness of these topics and to teach people how to decipher and deconstruct images and other kinds of content. Such education also provides a better understanding of algorithmic systems and the issues they raise, and it helps us be more careful when we watch videos, listen to recordings or read texts online.

Additionally, our digital footprint keeps growing, fed by our daily usage of digital tools, and most of us are not properly aware of the traces we leave behind online. These digital traces make it easier to refine user profiles in ever more specific ways, and those profiles in turn allow the creation of tailor-made narratives that are far more impactful.

Nowadays, information propagates at great speed, a phenomenon described in terms of “virality” or “information overload”. Individuals are used to consuming information which is short, synthesized, immediately available, and which requires little attention or concentration. We rarely take the time necessary to reflect and verify. The speed at which this happens, how habituated to it we have become and the nature of online content itself all reinforce emotional reactions that bypass verification and analysis. But if individuals are more aware of the risks this involves, if they know that they can be targeted by these mechanisms, then it becomes more natural for them to step back and pause.

This will become even more important given the multiplication of content types and media. Some content is created to mislead and is sometimes entirely synthetic; but there is also genuine information that has been removed from its context before being shared. For example, in 2016, images of mass graves were circulated as proof of massacres committed by Ukraine. The images were real, but they had been taken in Bosnia during the Yugoslav wars, not in present-day Ukraine.

Of course, tools have been designed to track and identify false information. Others can detect deepfakes in order to prevent them from going viral and contributing to the spread of false, fragmentary or out-of-context information. But these instruments are still not well known, nor easily accessible. Finally, the time required to verify massive amounts of data is not always compatible with the demand for content.