How Will Artificial Intelligence Impact Battlefield Operations?
AI is reshaping warfare, accelerating decision-making, and impacting civilian casualties—but over-reliance poses risks and vulnerabilities.
The rapid advancement of artificial intelligence (AI) is transforming industries at an unprecedented pace. The business of warfare is no exception. Nations are already racing to integrate AI into military operations, with Ukraine and Russia at the forefront of developing autonomous systems to gain an advantage on the battlefield. But as they integrate the technology into combat, one critical question remains: How much should we rely on it, and at what risk?
As Austrian Foreign Minister Alexander Schallenberg warned, “This is the Oppenheimer moment of our generation.” Just as nuclear weapons redefined warfare in the 20th century, AI-enabled weapons are now reshaping battlefields—most notably in Ukraine. Speaking at a Vienna conference on autonomous weapons, Schallenberg cautioned that AI-driven warfare could spiral into an uncontrollable arms race, with autonomous drones and algorithm-driven targeting systems threatening to make mass killing a mechanized, near-effortless process.
AI’s Growing Role in Modern Warfare
The Pentagon is already testing AI-driven decision-making tools in real-world scenarios. In January, for example, it reportedly began using generative AI tools similar to ChatGPT in the Indo-Pacific region to enhance battlefield decision-making against high-tech adversaries like China.
Led by the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO), which was established in 2022, the initiative to integrate AI into the department involves collaborations with companies such as Anduril and Palantir to accelerate commanders’ decision-making processes on the battlefield. The initiative marks an important shift toward leveraging private-sector innovation to enhance military decision-making.
More specifically, Anduril’s Lattice AI software integrates sensor data for real-time decision-making, bringing autonomous sensemaking to command and control. Palantir’s AI-driven data fusion equips commanders with actionable intelligence across multiple domains. By integrating vast amounts of data from land, air, sea, cyber, space, and electronic warfare operations, it enables real-time decision-making, enhances battlefield awareness, and ensures a synchronized response across complex operational environments.
Palantir’s AI software turned Ukraine into what Time magazine referred to as an “AI war lab.” The technology has been instrumental in analyzing satellite imagery, processing drone footage, and fusing open-source intelligence to help Ukrainian forces identify and target Russian positions in real time. Samuel Bendett, a senior fellow at the Center for a New American Security, emphasized the unprecedented volume of data generated by the war in Ukraine, which is fueling AI-driven military innovations. “Over the past three years, an enormous amount of data has been collected—equivalent to hundreds of years’ worth—spanning air, space, ground, and cyber-based sources. Both belligerents are already utilizing this data, shaping military planning and wargaming, particularly in the use of drones and unmanned systems,” Bendett explained.
The AI Arms Race in Ukraine
Ukraine is already locked in an AI-driven drone race against Russia, with both sides leveraging autonomous technologies to gain an edge on the battlefield. Faced with Russia’s numerical superiority, Ukraine turned to drones early in the war, forcing Moscow to follow suit. As Russia advanced its electronic warfare capabilities and began jamming Ukrainian drones, both sides were pushed to innovate at an accelerated pace. Drones now cause most battlefield casualties in the Russian-Ukrainian war, accounting for approximately 70 percent of total deaths and injuries.
This cat-and-mouse game led both sides to adopt fiber-optic cables to bypass jamming. Unsurprisingly, countermeasures to disrupt these cables are already in development.
Now, the next phase of drone warfare is taking shape: AI-powered targeting systems designed to operate even in heavily jammed environments, allowing drones to identify and strike targets with minimal human intervention.
In a November 2023 interview with The Economist, former Ukrainian Commander-in-Chief Valerii Zaluzhnyi likened the battlefield to World War I. “Just like in the first world war, we have reached the level of technology that puts us into a stalemate,” he said. He stressed that overcoming this deadlock would require a major leap in unmanned and robotic systems and acknowledged that “[t]here will most likely be no deep and beautiful breakthrough.”
The stalemate remains today. As a result, both Ukraine and Russia are scrambling for short-term technological breakthroughs wherever possible, and the war’s technological race has become a fight for drone supremacy. In that contest, AI-enabled drones will keep evolving, potentially turning warfare into a battle of algorithms. The side with the fastest and most adaptive AI will dominate, accelerating target identification and execution within the kill chain, where speed and precision become the ultimate advantages. The more data and sensory inputs fed into the algorithms, the more precise and lethal AI-driven targeting systems will become.
AI’s Next Evolution in Combat
One Russian military blogger, writing on Telegram, warned that AI will eliminate traditional forms of warfare, making camouflage, deception, and electronic countermeasures nearly impossible. “Camouflage is impossible. Computing power will allow AI algorithms to constantly review all reconnaissance data and detect even the slightest change,” he said.
According to the Russian blogger, AI could revolutionize electronic warfare, rendering radio reconnaissance obsolete by using machine-learning systems that mimic human voices, intercept communications, and manipulate enemy decision-making. “Radio reconnaissance is impossible—GPT chats will support real radio exchange of voices of real people, hacking radio networks and gaining access to their negotiations will only confuse reconnaissance. Electronic warfare loses its meaning—each combat unit is autonomous.”
The Russian blogger also highlighted the evolution of AI-enhanced swarm weapons, which include drones, missile modifications, and guided munitions that adapt and learn from combat engagements in real time. He argues that AI-driven targeting systems will create an ever-evolving battlefield where countermeasures quickly become obsolete.
“All weapons are learning. An anti-tank missile that a vehicle or tank managed to dodge will instantly transmit data back to the carrier …. Like an Apache helicopter, which will then launch a new missile already ‘knowing’ how to counter the earlier evasion. The same with torpedoes, anti-ship missiles, air-to-air missiles, with any guided weapons,” he added. “This is truly a ‘new atomic bomb’, if not worse.”
Despite these concerns, the defense editor for The Economist, Shashank Joshi, explained that AI’s immediate impact lies not in fully autonomous warfare but in enhancing military strategy and decision-making. One of the most striking real-world applications of AI in warfare has been seen in Israel’s military operations in Gaza, where AI-driven targeting systems have played a key role in the bombing campaign.
While the blogger may overstate AI’s immediate impact, his warnings reflect growing anxieties about the speed at which warfare is evolving. One important aspect of the race is not just deploying AI on the battlefield but also developing countermeasures just as quickly.
A Double-Edged Sword
AI is poised to reshape warfare with unparalleled precision and adaptability, but its rapid integration comes with serious risks. While it may reduce unintended casualties and enhance battlefield efficiency, the same technology could lead to uncontrolled escalation and over-reliance on automation, with unpredictable consequences for the wars of the future.
After all, researchers are still struggling to fully understand how AI itself functions, particularly in training and decision-making processes. The opacity of AI models, often referred to as the “black box” problem, means that even the engineers developing these systems may not fully grasp how they arrive at certain conclusions. This lack of transparency raises significant concerns in military applications, where life-and-death decisions depend on reliability, predictability, and accountability. If militaries become too dependent on AI without a thorough understanding of its limitations, they risk deploying systems that fail in unpredictable ways—whether due to adversarial manipulation, unforeseen biases, or operational breakdowns in contested environments.
Treston Wheat, chief geopolitical officer at Insight Forward and an adjunct professor at Georgetown University, told me in an interview that he believes AI will help reduce unintended casualties. For example, a commander relying on multiple intelligence sources might overlook critical details and order a strike that harms civilians. In contrast, AI systems can process vast amounts of data in real time, identifying nuances that a human might miss, potentially preventing such errors.
“AI will absolutely reduce civilian casualties, and that will be one of the most significant benefits of these weapons,” Wheat explained. “While humans are creative and thoughtful, AI processes information much faster, including evaluating potential scenarios. People also have far more limited vision. As such, AI will allow weapons to distinguish between targets far more effectively.”
However, while AI promises greater precision and fewer unintended casualties, its integration into warfare is not without risks. One issue that will grow as AI operations scale is over-reliance on technology, which could leave militaries vulnerable when these systems fail or are disrupted.
Wheat pointed to historical examples in which militaries prioritized technological dominance, only to be outmaneuvered by low-tech countermeasures. This pattern reflects a broader issue for modern militaries, where excessive dependence on technology can erode fundamental military skills. What happens on the battlefield when the technology stops working? A jammed signal, a depleted battery, or an enemy cyberattack could render sophisticated systems useless, leaving soldiers unprepared to operate without them. As seen in counternarcotics operations in Mexico, reliance on technology has left officers vulnerable when devices fail, highlighting the danger of skill atrophy.
“There are always problems when a military becomes too focused on its technological advantage,” Wheat warned. “Both Israel and the United States have faced this issue, relying heavily on advanced technology while their adversaries used low-tech methods to subvert them.”
For instance, Wheat pointed out that during the “war on terror,” the U.S. led in signals intelligence, so al-Qaeda adapted by using paper communications and couriers to avoid detection. Similarly, in the Second Lebanon War, Hezbollah employed fire-suppression blankets to conceal missile launch sites, preventing Israeli airstrikes from effectively targeting them.
While AI and autonomous weapons will undoubtedly enhance battlefield lethality, Wheat emphasized that ingenuity and low-tech countermeasures can still neutralize technological superiority.
“Advanced technology and autonomous weapons will undoubtedly make a military force more lethal,” he noted, “but leaders should never forget that imagination and low-tech solutions can undermine this advantage.”
Paul Lushenko, assistant professor and director of special operations at the U.S. Army War College, cites Israel’s AI-driven targeting in Gaza as an example of how AI is already influencing real-time battlefield decisions. He notes that machine-learning algorithms trained on military datasets can predict enemy locations, analyze combat doctrine, and optimize targeting solutions.
However, he warns that the integration of AI into lethal operations raises serious ethical concerns, particularly regarding the use of autonomous weapons and algorithm-driven targeting. Israel’s military has relied heavily on AI-driven targeting in its war against Hamas, using a previously undisclosed system called Lavender to identify potential targets. It was reported that Lavender was responsible for selecting up to 37,000 potential Hamas-linked operatives, dramatically accelerating the pace of airstrikes, with many civilians caught in the crossfire.
Lushenko also addressed the concept of “minotaur warfare,” in which AI could assume greater control over combat operations, directing ground patrols, aerial dogfights, and naval engagements. He argued that this shift would require radical changes to military structures, including redefining command and control, creating new career fields, and reconsidering centralized versus decentralized operations.
This approach envisions AI as the central “brain” of military operations, analyzing battlefield data in real time and issuing commands to both human and autonomous units with greater speed and precision than traditional methods. The term “minotaur” suggests a hybrid model, in which AI and human forces work together, balancing automation with human oversight to improve military effectiveness.
As AI integration in warfare continues to accelerate, the central question remains: How much decision-making should be entrusted to machines, and at what cost?
Not every AI model will be trained for every battlefield scenario, and these systems will have limitations. Ironically, the side that becomes overly dependent on AI-driven warfare may also expose itself to new vulnerabilities—ones that its adversary will inevitably learn to exploit.
“There will always be vulnerabilities with technology. What will matter is how effective we are at deploying AI-enhanced cyber defenses, but a successful cyberattack can never be ruled out,” said Wheat. “Especially if there are accidental insider risks or an extremely capable threat actor.”
AI targeting systems that rely on predefined rules of engagement may struggle to adapt to unconventional warfare, and models trained primarily on conventional warfare patterns may fail to identify and counter irregular, rapidly evolving threats. Worse still, if AI systems prioritize efficiency over ethical constraints, they could misclassify nontraditional combatants or objects as legitimate targets, leading to catastrophic miscalculations on the battlefield.
Russia, after all, is sending civilian cars into battle in Ukraine. What happens when an adversary systematically disregards international laws and norms? Should Western AI models be trained to target civilian vehicles if they are being used in combat? What if Russian forces—or another adversary—abandon military uniforms altogether, blending in with civilians before launching attacks? Some Russian soldiers have even attempted to wear Ukrainian uniforms to infiltrate enemy positions.
These tactics expose a fundamental challenge in developing AI for warfare: How can such systems distinguish legitimate military targets when the rules of engagement are deliberately blurred? Adversaries will always seek to exploit technological advancements, and an over-reliance on AI risks creating vulnerabilities that could have devastating consequences in future conflicts.