What Does Information Integrity Mean for Democracies?
Disinformation is only a symptom of a much larger problem.
Democracies around the world are confronting new challenges as technologies evolve. Experts continue to debate how social media has shaped democratic discourse, pointing to how algorithmic recommendations, influence operations, and changing cultural norms of communication alter the way people consume information. Meanwhile, developments in artificial intelligence (AI) raise new concerns about how the technology might affect voters’ decision-making. AI is already being used more and more in political campaigning.
In the run-up to Pakistan’s 2024 general elections, former Prime Minister Imran Khan campaigned from prison using an artificially generated speech. Meanwhile, in the United States, a private company used an AI-generated imitation of President Biden’s voice to discourage people from voting. In response, the Federal Communications Commission declared robocalls using AI-generated voices illegal.
Evolving technologies present new threats. Disinformation, misinformation, and propaganda are all different faces of the same problem: Our information environment—the ecosystem in which we create, disseminate, receive, and process information—is not secure, and we lack coherent goals to direct policy actions. Formulating short-term, reactive policy to counter or mitigate the effects of disinformation or propaganda can take us only so far. Beyond defending democracies from unending threats, we should also be asking what it will take to strengthen them. This raises the question: How do we work toward building secure and resilient information ecosystems? And how can policymakers and democratic governments identify the policy areas that need improvement and shape their actions accordingly?
Policymakers and researchers within democracies, from the White House to the United Nations, have turned to “information integrity” as the approach for improving their information ecosystems. Information integrity is usually defined as “the accuracy, consistency, and reliability of the information content, processes and systems to maintain a healthy information ecosystem.” Such definitions are drawn from the existing literature, but they fall short. Information integrity as a concept was borrowed from the field of information security, but it has not been adapted to fit the complexity of an information environment. Moreover, research on the concept comes largely from the Global North, introducing bias into how we understand it. As used by policymakers and researchers so far, the concept lacks a framework that can guide policy actions and ways to assess the success of those actions. Before information integrity is tossed aside as yet another empty buzzword or politicized for its lack of clarity, it needs to be understood in greater depth and given a clearer path to execution, specifically by examining its relationship to the information environment in the context of democracy.
The term “information integrity” emerged from the field of information security and commonly refers to the internal security systems of corporations—the risks posed to those systems as well as solutions to mitigate and manage such risks. Across several applications of the concept—from information security to finance to library sciences—the “accuracy, consistency, and reliability” of information is the most common definition of information integrity. However, such definitions don’t account for the complexity of an information environment.
In information security, accuracy “can be assessed by identifying an established standard and by determining an acceptable tolerance for deviations from that standard.” In a company, this assessment can be carried out by an individual within a hierarchical system, and deviations can simply be corrected. But when it comes to the integrity of the information environment, making such corrections is only a first step: We must also consider how corrections are made and who is making them. Therefore, in the context of the information environment, accuracy is better defined as the quality or state of information being correct or precise, encompassing efforts such as fact-checking and disinformation monitoring.
In existing definitions, consistency “can be evaluated by identifying the degree to which repeated instances of the same information occur in space, over time, and in relation to one another at the same point in time.” In the information environment, however, the ideal of consistency may have less to do with people receiving exactly the same information and more to do with whether individuals have continuous access to the information they want. Consistency, then, is regular or steady access to information, covering factors such as the ability to stay online, the degree of censorship, and the maintenance and functioning of the infrastructure used to disseminate information.
Reliability in the field of information security is “determined by examining its completeness in relation to a given specification; by assessing its currency or relative newness; and by establishing its verifiability, the degree to which its origin and history can be traced.” The expansiveness of the information environment makes it infeasible for anyone to verify the completeness of every individual’s knowledge at this level. A more fitting approach is to look closely at the infrastructures that produce information for the masses. Therefore, in the information environment, reliability can be defined as the trustworthiness of information and its sources, which includes enabling quality sources of information and media that are sustainable, independent, and transparent.
In the context of information security, information integrity as a standard only has to be applied to a closed system. In democracies, however, the information environment must be open and free. To accommodate this difference, any definition of information integrity must include fidelity, safety, and transparency, along with accuracy, consistency, and reliability.
People who receive the same information can still interpret it differently. Fidelity refers to the degree of exactness with which information is copied or reproduced and, in the context of the information environment, the degree to which audiences understand information as the producer or sender originally intended. This category, which includes media literacy efforts and prebunking, is perhaps the most challenging to measure for impact, as it requires understanding how audiences receive and process information.

Safety is an integral part of information integrity because, while people are rarely put in danger when handling a company’s information systems, the same unfortunately cannot be said of the information environment. Safety in that context is the condition of being protected from, or unlikely to be exposed to, danger or injury. This could include digital safety and cybersecurity.

Lastly, transparency refers to the quality of work being done openly, without secrets. Expectations for accountability are higher for a democratic government than for a private company. Transparency here means that governments are open and accountable about how they engage with civil society and industry in crisis response.
While existing research sometimes provides definitions for the components of accuracy, consistency, and reliability, it rarely includes methods to measure and compare levels of information integrity or practical suggestions for actions that governments, civil society organizations, and industry actors across media types should take to uphold it.
This new definition offers policymakers and researchers practical guidance on the goals they are working toward when using the term. It also helps them develop measurements to evaluate how well different countries’ ecosystems live up to this standard of information integrity. Based on the Partnership for Countering Influence Operations’ work on protecting Ukraine’s information ecosystem from Russia’s 2022 invasion, we demonstrated how a framework could be used to uphold information integrity across the stages of emergency management. Across the four stages of emergency management—namely, prevention and mitigation, preparedness, response, and recovery—we proposed lines of action that can help improve components of information integrity. Following this framework, policymakers can map out how existing initiatives fulfill different aspects of information integrity and identify areas in which programming or resourcing does not yet exist. This facilitates multi-stakeholder coordination. By applying this approach along an emergency management framework, policymakers can evaluate how to prepare for crises and build capacity ahead of time rather than in the moment. The framework also provides clearly defined goals for information integrity, which policymakers can use to develop measurements, suited to their countries’ contexts, of how well existing efforts fulfill those aims.
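To illustrate the kind of mapping exercise this framework enables, the short sketch below cross-tabulates a handful of hypothetical initiatives against the four emergency-management stages and the six components, then flags the stage-and-component pairs that no initiative covers. The initiative names and tags are invented for illustration; in practice such a matrix would be populated through stakeholder consultation, but the structure makes gaps in programming or resourcing visible at a glance.

```python
# Illustrative sketch with hypothetical data: map existing initiatives onto
# emergency-management stages and information-integrity components, then
# flag the pairs where no programming or resourcing currently exists.

STAGES = ["prevention and mitigation", "preparedness", "response", "recovery"]
COMPONENTS = ["accuracy", "consistency", "reliability", "fidelity", "safety", "transparency"]

# Each (hypothetical) initiative is tagged with the stage it serves and the
# components of information integrity it supports.
initiatives = [
    {"name": "Fact-checking coalition", "stage": "response", "components": ["accuracy"]},
    {"name": "Media literacy curriculum", "stage": "prevention and mitigation", "components": ["fidelity"]},
    {"name": "Backup connectivity plan", "stage": "preparedness", "components": ["consistency", "safety"]},
    {"name": "Independent media fund", "stage": "recovery", "components": ["reliability", "transparency"]},
]

# Build the coverage map: (stage, component) -> names of initiatives covering it.
coverage = {(stage, component): [] for stage in STAGES for component in COMPONENTS}
for initiative in initiatives:
    for component in initiative["components"]:
        coverage[(initiative["stage"], component)].append(initiative["name"])

# Report the uncovered components for each stage.
for stage in STAGES:
    missing = [c for c in COMPONENTS if not coverage[(stage, c)]]
    print(f"{stage}: gaps in {', '.join(missing) if missing else 'none'}")
```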
As information integrity is increasingly used in the context of the information environment, it needs a clear definition that provides guidance on how the concept can be translated into policy. This article identifies six components of information integrity—namely, accuracy, consistency, reliability, fidelity, safety, and transparency—and defines them in the context of the information environment. Stakeholders such as policymakers, civil society organizations, and industry actors can structure and coordinate their activities around these components, to bring their democracies closer to the goal of information integrity.