Methodological Commentary
The forecasts in this essay are based on the convergence of several analytical approaches:
- Expert assessments: Integration of forecasts from leading AI researchers, including Delphi-style surveys by AI Impacts, the Future of Humanity Institute, and the RAND Corporation.
- Technological forecasting: Extrapolation of current trends in AI, considering the exponential growth of computational power and the linear growth of algorithmic efficiency.
- Historical analogy: Comparison of adoption patterns of disruptive military technologies (gunpowder, railways, radio, nuclear weapons) with the current trajectory of AI.
- Document analysis: Examination of open military documents from the USA, China, Russia, and the EU in the field of AI, considering the discrepancy between statements and actual capabilities.
- Complex systems theory: Application of concepts such as network effects, cascading failures, and emergent behavior to military-technical systems.
Probabilistic scenario assessments are based on a synthesis of these approaches but remain subjective and open to continuous adjustment as new data emerges.
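As a minimal illustration of what such a synthesis can look like in practice, the sketch below combines per-approach probability estimates into a weighted consensus (a linear opinion pool). All approach names, weights, and numbers are hypothetical placeholders, not the actual inputs behind this essay's figures.

```python
# Illustrative sketch: combining scenario-probability estimates from several
# analytical approaches into a weighted consensus (linear opinion pool).
# Approach names, weights, and numbers are hypothetical placeholders.

scenarios = ["co-development", "armageddon", "metamorphosis"]

# Each approach contributes its own subjective distribution over the scenarios.
estimates = {
    "expert_surveys":      {"weight": 0.35, "probs": [0.35, 0.15, 0.50]},
    "trend_extrapolation": {"weight": 0.25, "probs": [0.25, 0.25, 0.50]},
    "historical_analogy":  {"weight": 0.20, "probs": [0.30, 0.20, 0.50]},
    "document_analysis":   {"weight": 0.20, "probs": [0.30, 0.20, 0.50]},
}

def weighted_consensus(estimates, n):
    """Linear opinion pool: weighted average of the distributions."""
    total_w = sum(e["weight"] for e in estimates.values())
    combined = [0.0] * n
    for e in estimates.values():
        for i, p in enumerate(e["probs"]):
            combined[i] += e["weight"] / total_w * p
    return combined

for name, p in zip(scenarios, weighted_consensus(estimates, len(scenarios))):
    print(f"{name}: {p:.0%}")
```

Under these invented inputs the pool lands near the 30/20/50 split used later in the essay; changing any weight or distribution shifts the result, which is precisely why such figures must remain indicative.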
The Reflection of Self-Deception
More and more in recent years, I find myself thinking: “original sin” is not just the foundation of a significant portion of humanity’s religious self-awareness, but a very precise metaphor for the deep-seated brokenness of human nature. It is a complex of imperfections that we have not overcome; on the contrary, we continue to cultivate them, repeatedly falling into the same old traps.
When exactly did humanity take a wrong turn? Perhaps at the moment the first hominid picked up a stone – not to build shelter, but to crack another’s skull? Or when the first civilization chose the path to prosperity through the enslavement of others? Or perhaps when we created the atomic bomb, definitively proving that our intellect can override the instinct for self-preservation?
Wars erupt with alarming regularity, as if obeying some secret cycle – a self-destructive code embedded in the collective unconscious. We seem incapable of learning from our mistakes. Lessons are formally drawn, declarations on the value of life are published, books are written, films are made – yet all of this proves powerless again and again in the face of a new cycle of destruction.
Human ambitions, national myths, and technological progress continue to ignite new wars, time after time. Friedrich Nietzsche asserted that humanity is a rope stretched between animal and Übermensch. Today, this rope is stretched between our biological origins and artificial intelligence – the new “supermind” of the digital age.
And so it is especially important to ask ourselves: how will AI help us perfect the art of self-destruction? In what form and over what period will we be able to bring the history of humanity to its logical conclusion?
The mirror of history has been replaced by a screen with an artificial intelligence interface. And if in the mirror we could still see our past, in this screen we can also discern a possible future – if we have the courage not to look away.
Philosophy and Technology: From Heidegger to Anders

In his essay “The Question Concerning Technology,” Martin Heidegger argued that technology is not merely a collection of tools, but a mode of revealing being, a distinct type of relationship to the world that he termed Gestell – “enframing.” This mode of disclosure compels us to perceive our surroundings not as inherently valuable, but as a resource to be inventoried, extracted, and optimized. The world, including nature and humanity itself, appears as a “standing reserve” (Bestand). Military AI systems embody the radical expression of this approach. When an algorithm prioritizes targets, evaluates probabilities of engagement, and decides to attack, human life becomes a variable in an efficiency equation. Being is reduced to data, and death to a loss function. This is precisely what Heidegger feared: that in the age of technology, humanity would lose the capacity to see the world other than through the lens of control, evaluation, and productivity. AI weaponry is not merely a “new type of weapon,” but a symptom of a deeper transformation: the conversion of humanity itself into an object of technical calculation.
In “The Obsolescence of Man,” Günther Anders introduces the concept of “Promethean shame” – a feeling of inferiority that humans experience before the machines they create, which appear more perfect, rational, and consistent than themselves. This emotion is not merely a psychological effect but a symptom of a profound shift: machines become not only extensions of our hands but also a mirror in which humanity sees its “imperfect” nature as a problem. Anders warned that when our tools outpace not only our capabilities but also our understanding, a radical asymmetry emerges – we lose the capacity to be subjects. Control becomes impossible: one can only service or submit. In this sense, military AI is not just a weapon but a symbol of alienated reason, where progress transforms into an acute form of helplessness.
In “The Imperative of Responsibility,” Hans Jonas formulates an ethical principle commensurate with modern humanity’s technological power: “Act in such a way that the effects of your action are compatible with the permanence of genuine human life on Earth.” This is not merely a moral recommendation but an attempt to reframe the very foundation of ethics in an era when technology acquires irreversibility, scale, and autonomy. Military AI systems sharply pose the question: Is such an understanding of responsibility possible if the agent of action is not a human but an algorithm? Who is responsible if the decision is made by a system rather than a subject? How can a moral duty be applied to something that lacks will? Jonas warned that ethics can no longer be based solely on intention – it must account for the ultimate and unpredictable consequences of actions. But precisely here, AI places humanity in a situation of ethical discontinuity: we delegate power without possessing an equal capacity to foresee the consequences of its application.
Intelligent Weaponry
Given our imperfections, it’s unsurprising that the smarter our technologies become, the more foolish our decisions about their application seem. Artificial intelligence in the military sphere may become the ultimate and final expression of this contradiction. We create systems capable of processing petabytes of data per second, recognizing patterns with superhuman accuracy, and predicting event scenarios with unprecedented precision. And for what? To destroy each other more effectively.
Russia and China aim for global AI leadership by 2030. The US invests billions in military AI to maintain technological superiority. These declarations and strategies resemble children boasting about who has the longer stick, with the critical difference that instead of sticks, they wield algorithms capable of deciding who among us is worthy of life and who is not. This new arms race is intellectual militarism: an ambition not merely for superior physical force, but for cognitive dominance, embodied in the advancement of AI technologies. In this new paradigm, wars are no longer just won, but calculated. And the calculator is artificial intelligence.
Already today, we see the first manifestations of this race. The Pentagon’s Project Maven uses machine vision algorithms to analyze drone footage, processing an hour’s worth of video material that would take a human analyst a month to examine. Chinese demonstrations of coordinated flights of 1000+ drones showcase the potential of swarm technologies. Israel’s Iron Dome system integrates AI for instantaneous interception decisions, processing missile trajectories faster than human reaction.
Black Swans of War
Nassim Taleb taught us to recognize “black swans” – highly improbable events with catastrophic consequences. The history of warfare is a succession of “black swans,” often emerging from new technologies. Longbows allowed archers to defeat armored knights at Crécy. Machine guns turned the fields of World War I into meat grinders. The atomic bomb instantly obliterated Hiroshima. What will be the “black swans” of the AI era? We can only speculate, and this very uncertainty makes the situation genuinely perilous.
Assume, for example, that an autonomous weapons system decides, based on algorithmic calculations, that a preemptive strike is the optimal strategy. Or a drone swarm's collective intelligence registers a false-positive threat and responds with lightning speed, leaving no time for human intervention. Or an AI system penetrates an adversary's nuclear command-and-control network to "protect" its own nation. These scenarios resemble science fiction plots, yet they follow logically from current technological trajectories. We are discussing systemic risks capable of undermining the very stability of international relations and human civilization. In a world where algorithms make decisions faster than humans can comprehend them, we face a new type of vulnerability: the algorithmic fragility of our civilization.
Voices of Optimism
To be fair, an alternative perspective on military AI also exists. Technological optimists, most prominently the roboticist Ronald Arkin, contend that AI can, in fact, render warfare more humane and precise. (Stuart Russell and Max Tegmark, sometimes cited in this camp, in fact argue the opposite and campaign for a ban on autonomous weapons.) The optimists' arguments warrant consideration.
Autonomous systems, unburdened by human emotions—fear, anger, vengeance—can make more rational decisions on the battlefield. AI is immune to combat stress, does not commit war crimes in a fit of rage, and does not exact retribution for fallen comrades. Algorithms can adhere to international humanitarian law with mathematical precision, whereas humans often cannot. Moreover, AI is capable of “surgical precision” unattainable by humans. A system analyzing thousands of variables in real time can minimize collateral damage better than any sniper. Swarm drones can neutralize military targets without harming civilians. Predictive analytics can avert conflicts before they escalate.
Proponents of this view also point to historical precedent: every revolutionary military technology—from gunpowder to nuclear weapons—initially provoked apocalyptic fears but ultimately led to a more stable peace through new forms of deterrence.
In my humble opinion, these optimistic scenarios contain critical gaps…
The Evolution of Warfare
The evolution of warfare has consistently mirrored technological advancements. From stone to bronze, bronze to iron, melee weapons to firearms, muscle power to mechanical, and mechanical to electronic. We are now observing a transition from simple electronics to truly intelligent systems: artificial intelligence.
This represents a qualitative leap comparable to the advent of gunpowder or nuclear weapons. We are moving from an era of kinetic warfare to one of cognitive warfare, where the primary objective is not the physical annihilation of the adversary, but the suppression of their decision-making capacity. In this new paradigm, war is less a clash of armies and more a contest of algorithms. Victory goes not to the physically stronger, but to those with superior systems for collecting, processing, and applying information. Traditional metrics of military power—the number of tanks, aircraft, and ships—are yielding to new measures: computational power, algorithmic quality, and decision-making speed. We are entering an era of algorithmic superiority, where the critical resource is not uranium for bombs, but data for training AI.
This introduces a new asymmetry, digital leverage: the ability to create disproportionately large physical effects with minimal resources. We have already seen how a small team of programmers, writing malware aimed at an adversary's infrastructure, can inflict greater damage than an entire army. A small drone costing $1,000, controlled by an algorithm, can cause billions of dollars in infrastructure damage.
The conflict in Ukraine demonstrated the reality of AI-augmented asymmetric warfare. Ukraine utilized machine vision algorithms to analyze satellite imagery of Russian positions, converting commercial Planet Labs data into military intelligence. Auto-targeting systems on FPV drones, costing merely a few hundred dollars, inflicted disproportionate damage.
Metamorphosis of the Battlefield: From Physical Space to Information Continuum
The traditional battlefield—a geographical space where adversaries clash—is transforming beyond recognition. In the AI era, the battlefield expands to include not only physical domains (land, sea, air, space) but also intangible ones (cyberspace, information space, cognitive space). This transformation gives rise to multi-domain conflict—an environment where the boundaries between different spheres of military confrontation blur. War is waged simultaneously in the physical world, in computer networks, in the information space, and in the minds of people.
AI becomes a universal “translator” between these domains, capable of converting information from one form to another. Intelligence data transforms into decisions, decisions into actions, actions into physical effects, and physical effects into new data. This creates a closed feedback loop, an autocatalytic circuit of warfare, where each action generates new information that fuels the next cycle. In this continuum, the very nature of military operations changes. Attacks become less visible but more destructive. The lines between peace and war blur. A state of permanent low-intensity conflict emerges, where open clashes are rare, but hidden confrontation never ceases. A paradox arises: war becomes simultaneously more abstract and more personalized. On one hand, AI creates distance between humans and direct military action. On the other, it allows for maximally precise strikes against specific targets, even down to individuals.
The Algorithm Race: A New Form of Strategic Rivalry
The traditional arms race was measured in numbers of warheads, missile ranges, bomb yields, and sheer production capacity. The new race takes the form of algorithmic escalation: it is measured by machine learning effectiveness, data processing speed, and the accuracy of predictive models. In this race, the United States relies on private-sector innovation and rapid integration of technologies into the military sphere. China employs a military-civil fusion model, blurring the lines between commercial and military development. Russia focuses on asymmetric approaches, relying on unconventional solutions. The European Union attempts to balance technological development with ethical constraints, while currently relying on NATO allies.
In the United States, the Replicator program, launched by the Pentagon in 2023, aims to deploy thousands of small autonomous platforms—drones, naval and ground systems—by 2026. The goal is to create a scalable, inexpensive, and decentralized “mass” on the battlefield capable of countering China’s numerical advantage. Meanwhile, defense AI in the U.S. is developing in close cooperation with leading technology companies: Palantir, Anduril, OpenAI, Microsoft, Google, Scale AI, Shield AI, and others. Furthermore, the Department of Defense, through the CDAO (Chief Digital and Artificial Intelligence Office, which absorbed the former Joint Artificial Intelligence Center in 2022) and the DIU (Defense Innovation Unit), regularly awards multi-million dollar contracts for recognition systems, operational planning, autonomous reconnaissance AI, and even generative models for simulations.
China is implementing a military-civil fusion model, where giants like Baidu, Tencent, and Alibaba are involved in developing dual-use AI solutions for the People’s Liberation Army. This includes target recognition, autonomous navigation, and tactical planning using big data.
Russia is developing heavy attack drones with AI elements: the Sukhoi S-70 Okhotnik UAV is positioned as an unmanned wingman for fifth-generation fighters, capable of performing missions in conjunction with manned aircraft and autonomously engaging targets.
Israel, building on its tradition of technological superiority, integrates its “Iron Beam” laser weapon with AI-guided systems, forming a layered defense against drones and short-range missiles.
This new form of rivalry generates strategic entropy: an increase in the unpredictability of international relations due to emerging technological capabilities. Traditional deterrence mechanisms, based on the concept of mutually assured destruction, lose effectiveness in a world where a preemptive strike can be launched in cyberspace without a single shot fired. In this situation, asymmetry in risk perception arises. Countries assess the dangers of new technologies differently and, consequently, make varying decisions regarding their deployment. Some states may consider the risk of losing control over autonomous weapon systems unacceptable, while others may view it as an acceptable price for military advantage. This asymmetry of perception creates a dangerous potential for miscalculations and misinterpretations of an adversary’s intentions. In a world where decisions are made at incredible speed, such miscalculations can have catastrophic consequences.
The Black Box Conundrum: The Machine Decides Who Lives
One of the most unsettling features of AI is the “black box” problem. Modern deep learning algorithms are often opaque even to their creators. Engineers can observe input data and results, but the internal decision-making processes or the reasons why a model learns things it wasn’t explicitly trained on remain hidden, especially in defense technologies. We are creating systems whose actions we cannot fully explain or predict. And this is a problem, particularly when these systems make life-or-death decisions.
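The asymmetry is easy to demonstrate even on a toy model: inputs and outputs are fully observable, yet the learned decision process is a mass of coupled weights, and post-hoc probes such as permutation importance recover only a coarse external picture. Below is a minimal sketch on synthetic data, assuming scikit-learn is available; nothing here resembles a real defense system.

```python
# Toy illustration of the "black box" asymmetry: we can freely query a model's
# inputs and outputs, but the learned internals resist direct inspection.
# Synthetic data; purely illustrative.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(X, y)

# Inputs and outputs are visible...
print("prediction:", model.predict(X[:1]))
# ...but the internals are thousands of coupled weights with no readable logic:
print("parameters:", sum(w.size for w in model.coefs_))

# A crude external probe: how much does accuracy drop when each input is shuffled?
probe = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(probe.importances_mean):
    print(f"feature {i}: importance ≈ {imp:.3f}")
```

Even this probe only ranks inputs by influence; it says nothing about why a particular decision was made, which is exactly the gap the following questions expose.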
Consider an autonomous weapon system that identifies a target as hostile and decides to attack. How can we be sure that this decision is based on correct criteria? How can we verify that the algorithm does not contain hidden biases or errors? And, most importantly, who bears responsibility if the decision was wrong? This problem creates what lawyers call a responsibility gap—a situation where it’s impossible to unequivocally determine who is at fault in the event of an error. The algorithm’s developer? The system operator? The commander who ordered its deployment? Or the machine itself, if it is capable of autonomous learning? In traditional warfare, even with all its horrors, there was a clear chain of command. In AI-driven warfare, this chain blurs, creating new ethical and legal dilemmas.
Hallucinations: When AI Errs in Combat Conditions
Despite impressive achievements, modern AI models are susceptible to a phenomenon called “hallucinations”—the generation of false or non-existent information. In an everyday context, this might be a funny error. In the context of war, it can be a deadly catastrophe. Imagine a target recognition system “hallucinating” and mistaking a group of civilians for enemy combatants. Or a decision support system recommending conflict escalation based on a false interpretation of intelligence data. Or an autonomous drone identifying a peaceful village as an enemy base.
These scenarios illustrate algorithmic vulnerability—the inability of AI systems to guarantee infallibility in the unstructured, chaotic environment of real conflict. Furthermore, this vulnerability can be intentionally exploited by adversaries. The phenomenon known as “adversarial attacks” allows for radical changes to an AI system’s output through minimal, often human-imperceptible, modifications to input data. A few pixels altered in an image can cause a classification algorithm to mistake a school bus for a tank. This can currently be observed in “prompt injections” in harmless LLM dialogues, but injection into a combat system is a far more dangerous proposition.
In 2017, MIT researchers demonstrated that an image recognition system would confidently classify a 3D-printed turtle as a rifle: a subtle adversarial texture applied to the object's surface was enough to fool the model from nearly every viewing angle. Similar "adversarial attacks" are reportedly already being used in real conflicts: in Syria, militants learned to deceive drones by placing false thermal signatures and creating "decoys" for targeting algorithms.
And while human perception is at least partially protected by experience, intuition, and cultural context, machine perception can be distorted with mathematical precision, creating an absolutely false, yet internally consistent, picture of reality.
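The canonical laboratory form of such an attack is the fast gradient sign method (FGSM) of Goodfellow et al.: a single gradient step taken on the input rather than on the weights. The sketch below shows the mechanics on a toy, untrained PyTorch model with random data (an assumption made to keep the example self-contained); with a sufficient perturbation budget the predicted label typically flips even though each coordinate changes only slightly.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method,
# Goodfellow et al. 2014) against a toy classifier. The model and data are
# synthetic stand-ins; real attacks target trained image classifiers the same way.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 100)            # a "clean" input
label = model(x).argmax(dim=1)     # the model's own prediction for it

# FGSM: one gradient step on the input, not the weights.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), label)
loss.backward()
epsilon = 0.1                      # perturbation budget: tiny per-coordinate change
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
print("max coordinate change:", (x_adv - x).abs().max().item())
```

The unsettling part is the budget: each coordinate moves by at most epsilon, a change that would be imperceptible in an image, yet the perturbation is aimed with mathematical precision at the model's decision boundary.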
The Architecture of Modern Moloch
Data Imperialism: Information Control as a New Form of Power
In the AI era, data is not merely a resource—it is a wellspring of power. It is worth remembering that non-synthetic training data is itself a finite resource; how well military systems will function when trained on synthetic data remains to be seen. Those who control data control algorithms. Those who control algorithms control decisions. Those who control decisions control actions. In a military context, this gives rise to data imperialism: the ambition to control not only one’s own information space but also the information streams of adversaries and neutral parties.
This ambition manifests in large-scale surveillance programs, communication intercepts, and social media analysis. It is evident in the growing importance of cyber espionage and information operations. It is noticeable in efforts to control key elements of information infrastructure: from submarine cables to satellite systems. Nations with advanced digital infrastructure and cutting-edge AI technologies gain a disproportionately large advantage over those lagging in the digital race.
This imbalance is particularly pronounced in military intelligence. Countries with sophisticated data collection and analysis systems can “see” more, further, and deeper than their adversaries. They can detect patterns invisible to the human eye and predict enemy actions with unprecedented accuracy.
The Digital Panopticon: Total Transparency as a Strategic Threat

The proliferation of AI-enhanced surveillance technologies creates a new digital panopticon: a system of total observation where nothing can be reliably concealed. Satellites capable of discerning objects mere centimeters in size. Drones that can maintain surveillance for months without recharging. Metadata analysis systems tracking the movements of millions of people. Real-time facial recognition algorithms. All these technologies, integrated into a unified network and augmented by AI, create unprecedented battlefield transparency.
The Israeli military's "Lavender" system has been described as the first mass application of AI for targeting. According to reporting based on Israeli intelligence sources (reporting that remains contested and a subject of controversy among human rights advocates), the algorithm analyzes phone metadata, social connections, and movement patterns to automatically identify potential targets among the civilian population, processing information on tens of thousands of Palestinians and assigning each a "threat rating." Military operators reportedly receive ready-made lists for engagement, often without additional verification. This precedent demonstrates how AI can transform surveillance into automated lethal action.
In such a world, traditional military concepts like camouflage, stealth, and surprise become increasingly problematic. How can troop movements be concealed if every square meter of territory is under constant surveillance? How can operational secrecy be maintained if any unusual activity is instantly detected by anomaly analysis algorithms? This is a new reality where the increase in available information does not reduce, but rather heightens, uncertainty and the risk of miscalculation.
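To see how mechanical such anomaly flagging is, here is a toy sketch using an off-the-shelf isolation forest on synthetic "activity" features. The feature names, values, and contamination threshold are invented for illustration; no real surveillance pipeline is implied.

```python
# Sketch of the kind of anomaly analysis described above: flagging unusual
# activity in observation data. Purely synthetic data and features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Routine activity, e.g. (vehicles observed per day, night-movement index):
routine = rng.normal(loc=[50, 5], scale=[5, 1], size=(1000, 2))
# A handful of unusual events: sudden concentrations, night convoys:
unusual = rng.normal(loc=[120, 25], scale=[10, 3], size=(5, 2))
observations = np.vstack([routine, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(observations)
flags = detector.predict(observations)   # -1 = anomaly, +1 = normal

print("flagged events:", np.where(flags == -1)[0])
```

Note what the detector cannot do: it flags statistical deviation, not intent. The distinction between "preparation for an attack" and "routine exercises" lies entirely outside the mathematics.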
In a world where every action is visible, the challenge of interpreting intentions emerges. Observing adversary activity, how does one discern whether it is preparation for an attack or merely routine exercises? How does one differentiate a real threat from a deceptive maneuver? And how are decisions made amidst information overload, where the volume of data surpasses human analytical capacity?
The Mathematics of Armageddon: Algorithmic Battles for Dominance
War has always involved an element of calculation: force ratios, probability of success, risk assessment. But in the AI era, these calculations take on a new quality, transforming into conflict prediction at an entirely new level: using mathematical models to foresee and optimize military actions.
We are on a path toward creating systems that analyze terabytes of data about the adversary: troop dispositions, equipment status, personnel training levels, political situations, economic indicators, even leaders’ psychological profiles. Based on this analysis, the system computes the optimal strategy: when to attack, where to strike, what resources to deploy. Already, algorithms are used for logistics planning, equipment maintenance optimization, and intelligence analysis. Tomorrow, they will participate in operational planning and strategic decision evaluation.
However, if two opposing sides use algorithms trained on similar data and based on similar principles, they may make mirrored decisions, creating a situation of perfect prediction of adversary actions. This implies a competition not just of algorithms, but of meta-algorithms that develop and optimize these algorithms. Victory goes not to the algorithm that is superior, but to the system for creating algorithms that is more effective.
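A toy sketch of this mirroring problem: two sides running the same deterministic policy on the same situation choose identically and are therefore perfectly predictable to each other, and the advantage passes to whoever can model and best-respond to the opponent's policy. Everything below, including the policy names and thresholds, is hypothetical.

```python
# Toy illustration of "mirrored decisions": identical deterministic policies
# on identical inputs are mutually predictable; the meta-level wins.
def shared_policy(situation):
    """A deterministic doctrine both sides learned from similar data."""
    return "strike" if situation["advantage"] > 0.5 else "hold"

situation = {"advantage": 0.7}
side_a = shared_policy(situation)
side_b = shared_policy(situation)
print("mirrored:", side_a == side_b)   # True: each side's move is no secret

# Meta-level escape: a side that models the opponent's policy can best-respond
# to it, which is why the contest shifts to the systems that *produce* policies.
def meta_policy(situation, opponent_policy):
    predicted = opponent_policy(situation)
    return "ambush" if predicted == "strike" else "probe"

print("meta response:", meta_policy(situation, shared_policy))
```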
Cognitive Colonialism: The Struggle for Minds in the Digital Age

AI transforms not only the physical battlefield but also the information space, creating a new form of confrontation—cognitive warfare—a struggle for people’s perceptions, beliefs, and decisions. In this new form of warfare, the weapons are not bombs and bullets, but information and disinformation. The objective is not the physical annihilation of the adversary, but the undermining of their capacity to resist through public opinion manipulation, demoralization, and the creation of internal conflicts.
AI radically enhances the capabilities of cognitive warfare. Deepfake technologies enable the creation of convincing videos and audio of political leaders uttering words they never spoke. Social media bots, indistinguishable from humans, create the illusion of widespread support for certain ideas. Targeted advertising algorithms are used to deliver personalized propaganda, tailored to the psychological profile of a specific individual.
Chinese algorithms are employed for the large-scale creation of accounts promoting pro-Chinese narratives. In the U.S., sentiment analysis and behavioral targeting technologies have long been used in political campaigns. Increasingly, these techniques are exported to other regions, including states in Africa, Latin America, and Southeast Asia, where they are applied for personalized political advertising and public opinion manipulation. These cases demonstrate that the information space is becoming a full-fledged new theater of operations, where AI is not merely an analytical tool but a weapon of influence.
This creates the potential for dominant actors to control the information space and, through it, the thinking of entire populations. Unlike classical colonialism, this task does not require a physical presence in the adversary’s territory. It is executed through digital channels, social networks, and global media platforms, utilizing not military force, but recommendation algorithms, search engine results, and news feed curation.
In such a situation, an asymmetry arises: countries with advanced digital literacy, strong traditions of independent journalism, and critical thinking prove less vulnerable to cognitive operations than societies where these factors are less developed.
A Quantum Leap into the Unknown
Artificial intelligence does not exist in a vacuum. It evolves in parallel with other disruptive technologies – quantum computing, biotechnology, nanotechnology. This convergence creates mutually reinforcing effects, each technology amplifying the capabilities of the others.
Quantum computers will radically increase the computational power available to AI, enabling the solution of problems inaccessible to classical systems. Biotechnology, enhanced by AI, will unlock new possibilities for creating targeted biological weapons. Nanotechnology, governed by AI, allows for the creation of invisible surveillance systems and new types of weapons.
This convergence generates cascading technological acceleration – a situation where a breakthrough in one area catalyzes breakthroughs in others, creating a snowball effect.
A plot for a film that is still science fiction: a quantum computer, running AI algorithms, instantly breaks the encryption protecting enemy communications, rendering them completely transparent. Simultaneously, AI-controlled nanorobots penetrate critical infrastructure, remaining invisible to conventional detection systems. And bioengineered systems, optimized by artificial intelligence, create pathogens capable of targeting individuals with specific genetic characteristics.
This synergy will lead to the technological singularity of war – a point beyond which the nature of conflict changes so radically that it becomes unpredictable even for its creators.
In this context, the uneven pace of new technology adoption across different countries and sectors poses a particular danger. Defense technologies are advancing faster than their control systems. Offensive capabilities outpace defensive ones. Technological progress outstrips ethical reflection on its consequences. This asymmetry creates periods of heightened vulnerability where new threats already exist, but protection systems have not yet been developed. In the era of convergent technologies, such periods of vulnerability become particularly dangerous, creating windows of opportunity for those willing to risk stability for advantage.
The Neuromorphic Revolution: Machines Begin to Think Like Humans
Contemporary artificial intelligence systems, despite their impressive capabilities, still fundamentally differ from the human mind. They can process vast amounts of data, find complex correlations, and perform highly specialized tasks with great precision. But they lack the flexibility, adaptability, and intuition of human thought, as well as what scientists call embodied cognition.
However, a new AI paradigm is on the horizon: neuromorphic systems, which model not just the functions but the very architecture of the human brain. These systems use artificial neural networks that operate similarly to biological ones, with spiking signal transmission and synaptic plasticity.
Neuromorphic systems promise to overcome the key limitations of current AI. They can be more energy-efficient, consuming hundreds of times less energy. They can be more adaptive, learning from small amounts of data. But most importantly, they can possess a form of intuitive thinking, the ability to abstract and transfer knowledge from one domain to another.
In a military context, this creates a situation where machines begin to think similarly enough to humans to predict and counter human strategies, yet with sufficient distinction to find solutions inaccessible to human intuition. Is an air combat control system possible that can not only execute programmed maneuvers but also improvise, creating new tactics in real-time? Or a cyber defense system that intuitively identifies potential vulnerabilities before they are exploited? Or a strategic AI that develops asymmetric responses to enemy actions based on a deep understanding of human psychology and social dynamics?
This evolution creates a space for intellectual confrontation where human and machine cognitive styles clash, interact, and adapt to each other. In this space, a dangerous potential arises for a spiral of mutually reinforcing confrontation, where each side strives to create a smarter, more adaptive, more unpredictable system.
Swarm Consciousness: Collective Intelligence as a New Form of Combat Organization

Traditional military organization is built on a hierarchical principle: centralized command, a clear chain of orders, and a fixed unit structure. This model, dating back to Roman legions, dominated for millennia. But AI offers a radically different paradigm—swarm intelligence.
Swarm intelligence, inspired by the behavior of social insects such as bees or ants, is based on the principles of self-organization, distributed decision-making, and emergent behavior. Instead of centralized control, there are local interactions. Instead of a fixed structure, there is dynamic self-organization. Instead of a single plan, there is adaptive response to environmental changes.
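A classic toy demonstration of this principle is a boids-style simulation (after Craig Reynolds): three purely local rules, with no central controller, produce coherent collective motion. The parameters below are arbitrary, and the sketch models no real system.

```python
# Boids-style sketch: global flocking emerges from purely local rules
# (cohesion, separation, alignment) with no central controller.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, size=(30, 2))   # 30 agents on a 2-D field
vel = rng.normal(0, 1, size=(30, 2))

def step(pos, vel, radius=15.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)
        if near.any():
            cohesion   = pos[near].mean(axis=0) - pos[i]    # move toward neighbors
            separation = (pos[i] - pos[near]).sum(axis=0)   # avoid crowding
            alignment  = vel[near].mean(axis=0) - vel[i]    # match headings
            new_vel[i] += 0.01 * cohesion + 0.005 * separation + 0.05 * alignment
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    return pos + new_vel, new_vel / np.maximum(speed, 1e-9) * 2.0  # constant speed

for _ in range(200):
    pos, vel = step(pos, vel)

# After many purely local updates, headings align: emergent order (near 1.0).
print("mean heading agreement:", np.linalg.norm(vel.mean(axis=0)) / 2.0)
```

Removing any single agent changes nothing structurally, which previews the counter-challenge discussed below: there is no node whose destruction decapitates the system.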
In a military context, this translates into swarm technologies—the use of multiple autonomous, interacting units, from drones to nanobots. These technologies are already developing: the Pentagon’s “Replicator” initiative aims to deploy thousands of autonomous systems by 2026, and Chinese swarm drone tests demonstrate the formation of complex aerial patterns.
Swarm technologies create the military system’s ability to reconfigure in real-time in response to environmental changes. A drone swarm can disperse to search for a target, concentrate for an attack, or reorganize after losing some units. This adaptability creates new counter-challenges. How do you neutralize a system that lacks a central control node? How do you predict swarm behavior when it emerges from simple local interactions? How do you defend against an attack distributed across hundreds or thousands of autonomous units?
A situation arises where traditional hierarchical structures are fundamentally vulnerable to swarm systems but cannot effectively utilize swarm organization themselves due to cultural, doctrinal, and technological limitations. This asymmetry could become a key factor in future conflicts, creating an advantage for those who can not only implement swarm technologies but also integrate swarm thinking into their military culture.
The 2020 Nagorno-Karabakh conflict marked a turning point in the use of unmanned systems on the battlefield. Azerbaijan, for the first time in modern warfare, used large-scale coordination of reconnaissance and attack UAVs—such as Turkish Bayraktar TB2s and Israeli Harops and Orbiters. Reconnaissance drones transmitted target coordinates in real-time, which were then struck by attack platforms, including kamikaze drones. The Armenian air defense system, designed to repel air attacks from manned aircraft, proved vulnerable to asymmetric tactics: slow, low-observable UAVs combined with decoys and massive raids exhausted and suppressed the defense. This was not yet full-fledged swarm technology, but it was coordination that yielded significant results.
According to estimates, more Armenian equipment was destroyed in the 44-day conflict than in the entire preceding period since 1994. This was one of the first instances where a technological advantage—primarily in drone systems—played a key role in the outcome of an armed confrontation.
Machine Morality and Military Ethics: Can AI Make Ethical Decisions on the Battlefield?
One of the most complex aspects of military AI application is the question of ethics. Traditionally, ethical decisions in a military context have been made by humans, based on a combination of codified rules (such as international humanitarian law) and moral intuition shaped by culture, education, and personal experience. But what happens when ethical decisions are delegated to machines? Can concepts like “proportionality” or “distinction between combatants and non-combatants” be programmed? Can an algorithm understand the context and intent critical for ethical evaluation?
These questions create a discrepancy between our ethical expectations and technical capabilities. We expect military AI systems to adhere to the same ethical standards as humans, but we don’t always understand how to implement these standards algorithmically.
This gap is particularly evident in the context of Lethal Autonomous Weapon Systems (LAWS). Proponents of LAWS argue that machines, free from human emotions such as fear or anger, can make more rational decisions. Critics counter that these very emotions, along with empathy and moral intuition, are necessary for truly ethical decisions.
This has already led to a situation we observe in recent conflicts, where different actors adhere to different ethical standards regarding AI. Some countries may limit the autonomy of their systems, requiring significant human control, while others may deploy fully autonomous systems guided by the principle of military necessity. The likely result is a race to the ethical bottom: countries gradually lower their standards for fear of being disadvantaged against less scrupulous adversaries, ethical considerations are minimized in favor of military effectiveness, and the risk extends beyond international law to human dignity itself.
In 2020, in Libya, according to a UN report published the following year, a Turkish Kargu-2 kamikaze drone allegedly attacked a human for the first time in history without a direct operator command. The system was designed to target "logistical objects and enemy personnel," but, as stated in the report, it could operate in a fully autonomous mode in conditions where reliable communication with the operator could not be established. Although the document does not directly confirm that the drone independently decided to engage a specific target, the very possibility of such an event sparked a heated debate in military, human rights, and ethical circles.
This case set a precedent in legal and moral terms: if an autonomous system commits an act of violence, who is responsible? The engineer who wrote the code? The commander who gave the order? The politician who signed the contract? Or—the machine itself, acting “within the algorithm”? A disturbing paradox emerges: autonomous weapons are subject to diffused accountability. And with each step forward in military AI, the question becomes sharper: who pulls the trigger if there’s no finger on it anymore?
Emmanuel Levinas argued that ethics begins with the “face of the Other”—with the direct encounter with the vulnerability of another’s life. Can an algorithm “see a face”? Is AI capable of Levinasian ethical responsibility, which precedes all calculation and rule? Jacques Ellul, in “The Technological Society,” predicted the autonomization of technology—a situation where technological logic becomes self-sufficient, subordinating human goals. Autonomous weapon systems could become the embodiment of Ellul’s nightmare: war for war’s sake, efficiency for efficiency’s sake.
On the Cusp of a New Paradigm
The Post-Strategic Era: When Decision Speed Outpaces Deliberation
Speed is one of the key advantages offered by military AI. Automated systems can process information, make decisions, and act on timescales inaccessible to the human mind. This velocity creates a chasm between the speed of machine and human decision-making.
In a military context, this asymmetry is critically important. The side capable of traversing the Observe-Orient-Decide-Act (OODA) loop more rapidly gains a fundamental advantage. AI potentially condenses this cycle to milliseconds, creating a distinctive cognitive superiority.
Yet, this speed comes at a cost: it also compresses the time available for strategic deliberation, diplomacy, and de-escalation. In a world where decisions are made in microseconds, there is decreasing room for human wisdom and prudence. This is particularly perilous in crisis scenarios, where a misinterpretation of an adversary’s actions could lead to rapid escalation.
Consider a scenario where an AI system, detecting signals it interprets as preparation for an attack, automatically raises readiness levels. This is then perceived by an adversary's AI system as an aggressive act, triggering a cyclical escalation without human intervention. We will enter a period where traditional strategic thinking, predicated on reflection, analysis, and foresight, will yield to reactive algorithmic decisions, optimized for speed rather than wisdom.
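The dynamics of such a spiral can be captured in a deliberately crude model: two automated systems, each reacting to a noisy reading of the other's posture with a reaction gain slightly above one and no ability to de-escalate. All numbers here are invented for illustration.

```python
# A crude model of the escalation spiral described above: two automated
# systems each raise their alert level in response to a noisy reading of
# the other's posture. All parameters are invented for illustration.
import random

random.seed(42)

def sensed(opponent_level):
    """Noisy sensor reading of the other side's alert posture."""
    return opponent_level + random.gauss(0, 0.3)

GAIN = 1.1                    # each side slightly overmatches what it perceives
level_a = level_b = 1.0

for t in range(12):
    # The max() ratchet means neither system ever steps back down.
    level_a = max(level_a, GAIN * sensed(level_b))
    level_b = max(level_b, GAIN * sensed(level_a))
    print(f"t={t:2d}  A={level_a:5.2f}  B={level_b:5.2f}")

# With GAIN > 1 the loop diverges geometrically; at machine timescales each
# printed step could be milliseconds, far too fast for human review.
```

Two features do all the work: a reaction gain above one and the absence of any de-escalation path. Both are plausible design outcomes for systems optimized never to be caught under-prepared.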
In this nascent era, the risk of strategic determinism emerges—a situation where decisions of war and peace are dictated less by human agency and more by the logic of algorithms and the dynamics of automated systems. This determinism could become a self-fulfilling prophecy, where algorithms programmed to detect threats begin to interpret the normal uncertainties of international relations as evidence of hostile intent, thereby generating spirals of distrust and escalation.
When Machines Err: The Existential Risks of Algorithmic Warfare
Traditional warfare, with all its horrors, possessed inherent constraints: human psychology, physical geography, logistical limitations. However, AI-driven warfare could transcend many of these constraints, creating a situation where the technological capacity for war surpasses our ability to control its ramifications. This asymmetry manifests in several dimensions. In the spatial dimension, warfare expands, encompassing new domains—from deep space to the nanoscale. In the temporal dimension, warfare accelerates, compressing decision cycles to milliseconds. In the cognitive dimension, warfare becomes more complex, exceeding human comprehension.
These transformations introduce a new category of risk: unforeseen and irreversible consequences of automated military decisions.
Another fragment from a sci-fi film: an autonomous missile defense system falsely identifies a rare meteorological phenomenon as a missile attack and launches a counterstrike. Or a swarm system receives corrupted data and interprets a civilian area as a military target. Or a strategic AI recommends a preemptive strike based on an erroneous analysis of an adversary's intentions.
These scenarios illustrate the fundamental vulnerability of complex AI systems to unexpected situations for which they were not trained. This fragility is particularly dangerous in the context of nuclear weapons. Early warning systems, enhanced by AI, might become more sensitive but also more prone to false positives. AI-managed communication systems could be more efficient but also more susceptible to cyberattacks. Command and control systems, automated for speed, might lose the critically important element of human judgment. The risk emerges of a cascade of automated decisions leading to catastrophic conflict without human intervention. And this risk is not purely theoretical: history already records instances where the world teetered on the brink of nuclear war due to technical failures or human error, most famously in 1983, when Soviet officer Stanislav Petrov correctly judged a satellite warning of an American launch to be a false alarm. Introducing artificial intelligence into this system, with its speed but also its limitations, could make such scenarios more probable.
Post-Human Conflict: Warfare in the Age of Hybrid Intelligence
As artificial intelligence advances and integrates with human decision-making systems, we are crossing into an era that can be termed hybrid or symbiotic intelligence—a fusion of human and machine cognitive capabilities.
In a military context, this is embodied in the concept of a symbiotic system, where humans and AI collaborate, amplifying each other’s strengths and compensating for weaknesses. Human intuition, creativity, and empathy merge with machine speed, precision, and the capacity to process vast amounts of data. This evolution transforms conflict into something that is neither purely human nor purely machine, but represents a new quality, emergent from their interaction.
A bit more science fiction cinema: A pilot controlling not merely one aircraft, but a swarm of drones via a neural interface, translating their intentions into algorithmic form. Or a commander receiving cognitive augmentation through implanted neuroprosthetics, expanding their analytical and decision-making capabilities. Or an intelligence analyst working in symbiosis with an AI that pre-processes and visualizes information, revealing hidden patterns.
These scenarios illustrate what might be called extended or augmented warfare—a conflict waged not merely for physical space, but for cognitive superiority, achieved through the integration of human and artificial intelligence. In this new type of warfare, the ability to translate intellectual superiority into decisive tactical and strategic advantage becomes paramount. The victors in this new era will not be those with the best AI or the best soldiers, but those who most effectively integrate them into a unified cognitive system capable of adapting, learning, and evolving faster than the adversary.
The Epistemological Gap: Can Humanity Comprehend the Wars Waged by Machines?
As military systems become increasingly autonomous and intelligent, a disparity arises between human cognitive capabilities and the complexity of algorithmic military decisions.
This gap manifests in several aspects. In the volumetric aspect, AI can process data volumes exceeding human capabilities. In the speed aspect, AI makes decisions on timescales inaccessible to human consciousness. In the conceptual aspect, AI can identify patterns and strategies that the human mind is unable to visualize or comprehend.
This creates a dangerous situation where humans, nominally controlling military AI systems, do not fully understand their decisions, logic, or potential consequences. Even today, it is not difficult to imagine a general approving an operation plan developed by strategic AI without fully grasping its rationale or long-term implications. Or a political leader making a decision about escalation or de-escalation based on recommendations from a system whose logic remains a “black box” to them.
This raises a serious question for me: Can humanity retain control over a war it does not fully understand? Can we speak of human control if algorithmic decisions extend beyond the bounds of our comprehension?
Three Scenarios for Evolution: Hope, Apocalypse, and Metamorphosis
The future of military AI is not predetermined. We can delineate three fundamental scenarios for its evolution, each with its own probability and implications.
Scenario 1: “Balanced Co-Development” (30% Probability)
In this scenario, humanity successfully maintains control over the development of military AI, ensuring its alignment with human values, international law, and strategic stability. Key elements of this scenario include:
- Development of effective international regimes for controlling military AI, including verifiable restrictions on autonomous lethal weapon systems.
- Progress in “explainable AI” (XAI), ensuring transparency and interpretability of algorithmic decisions.
- Preservation of the principle of “meaningful human control” over the use of force, with AI serving as an advisor, not the ultimate decision-maker.
- Creation of multilateral mechanisms for monitoring and regulating military AI technologies, similar to existing nuclear arms control regimes.
- Development of “ethical AI,” capable of considering the moral aspects of military decisions and adhering to the principles of international humanitarian law.
This scenario presupposes that technological advancements will be accompanied by corresponding developments in normative, legal, and ethical frameworks. AI then becomes a tool for enhancing stability and security, rather than a source of new risks.
Proponents of the positive scenario point to successful precedents of international cooperation in arms control—from the Treaty on the Non-Proliferation of Nuclear Weapons to the Chemical Weapons Convention. They argue that humanity is capable of learning from mistakes and establishing effective mechanisms for regulating new technologies.
Peter Diamandis and other “abundance thinkers” believe that AI will naturally evolve towards cooperation, rather than confrontation, as informational resources, unlike physical ones, are not a zero-sum game. The development of AI may lead not to new forms of warfare, but to their obsolescence.
Scenario 2: “AI Armageddon, or ‘Remember Skynet’” (20% Probability)
In this scenario, the development of military AI outpaces our capacity for control and regulation. Key elements include:
- Accelerated deployment of fully autonomous weapon systems without adequate safety and control measures.
- Escalatory dynamics where one side’s AI interprets the other side’s AI actions as hostile, triggering a cascade of reciprocal reactions.
- “Black swans” of military AI—unforeseen, catastrophic failures of autonomous systems leading to unintentional conflict.
- Algorithmic opacity, where neither side fully understands how or why their systems make certain decisions.
- Undermining strategic stability due to reduced decision-making times and increased asymmetry of capabilities.
This scenario represents the “worst-case,” where technological development outpaces our capacity for control and regulation, leading to new forms of strategic instability and, potentially, existential risks.
Critics of alarming military AI predictions – including Andrew Ng and Demis Hassabis – emphasize that fears of “machine rebellion” are often based on science fiction tropes rather than the reality of current technology. Modern AI systems are highly specialized algorithms capable of performing limited tasks (pattern recognition, navigation, prediction) but lack goals, motivation, or consciousness. As Ng noted, “Fearing AI now is like fearing overpopulation on Mars.” Hassabis, head of DeepMind, while not denying long-term risks, urges distinguishing real challenges—control over autonomous systems, ethics of application, regulatory frameworks—from speculative scenarios unsupported by engineering reality. Thus, the threat of military AI lies not in its becoming a “self-aware entity,” but in how humans delegate critical decisions to it, often without fully understanding or controlling its decision-making mechanisms.
Scenario 3: “Cognitive Metamorphosis” (50% Probability)
This scenario represents a hybrid between the first two but introduces a unique element: the transformation of the very nature of conflict. Key elements include:
- A gradual shift in focus from kinetic warfare to cognitive confrontation, where victory is achieved through information superiority rather than physical destruction.
- Development of “hybrid intelligence”—a symbiosis of human and artificial thinking, creating new forms of cognitive capabilities.
- Transformation of the international system from state-centric to multi-layered, encompassing non-state actors, network structures, and algorithmic entities.
- Emergence of “strategic ecosystems”—complex adaptive systems comprising humans, AI, physical, and informational infrastructure, operating as a unified whole.
- Development of new forms of deterrence, based not on the threat of destruction, but on “cognitive dominance”—the ability to control the information space and decision-making processes.
This scenario entails not so much AI control or uncontrolled development, but a co-evolution of human and technological systems, leading to a fundamental transformation of the very nature of conflict and international relations.
Methodology for Probability Assessment
Scenario probability estimates (30%-20%-50%) are based on:
- Analysis of historical precedents for military technology adoption
- Pace of current developments in military AI
- Effectiveness of existing international arms control regimes
- Dynamics of geopolitical relations among key players
These figures should be understood as indicative rather than precise mathematical calculations.
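One way to make "indicative" concrete: score each scenario against the factors listed above and normalize, as in the hypothetical worksheet below. The weights and support scores are illustrative assumptions, not the author's actual calculations; changing them is exactly the kind of continuous adjustment the methodology envisions.

```python
# Hypothetical worksheet: scoring scenarios against the listed evidence
# factors, then normalizing. All weights and scores are illustrative.
factors = {                       # weight of each line of evidence
    "historical_precedents": 0.3,
    "pace_of_development":   0.3,
    "regime_effectiveness":  0.2,
    "geopolitical_dynamics": 0.2,
}
# How strongly each factor supports each scenario (0..1, assumed values):
support = {
    "balanced_co_development": [0.5, 0.2, 0.6, 0.3],
    "ai_armageddon":           [0.2, 0.3, 0.1, 0.3],
    "cognitive_metamorphosis": [0.6, 0.7, 0.4, 0.6],
}

weights = list(factors.values())
raw = {s: sum(w * v for w, v in zip(weights, vals)) for s, vals in support.items()}
total = sum(raw.values())
for scenario, score in raw.items():
    print(f"{scenario}: {score / total:.0%}")
```

Under these assumed scores the normalized shares land close to the essay's 30/20/50 split, but the exercise mainly shows how sensitive such figures are to subjective inputs.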
Conclusion: Who Will Write the Final Chapter?

We stand at the precipice of an era where war—the oldest human endeavor—may cease to be exclusively human. Artificial intelligence is transforming not merely the methods of warfare, but its very essence, creating new forms of threats and opportunities that we are only beginning to comprehend.
In this new world, the pivotal question is not “who will win?” but “who will make the decisions?”—humans, machines, or some hybrid intelligence we do not yet fully grasp. And the answer to this question will determine not only the outcome of specific conflicts but also the trajectory of human civilization.
If we allow machines to make life-and-death decisions without significant human oversight, we risk not merely military defeat but the forfeiture of control over our own destiny. If we reject technological progress out of fear of its consequences, we may find ourselves defenseless against those who do not share our reservations.
The golden mean—the path of conscious integration, where we harness the capabilities of AI while retaining human control over critical decisions—will demand unprecedented wisdom, foresight, and international cooperation.
Ultimately, the future of military AI presents a profoundly human, not just technological, question. It depends less on what our machines can do than on who we choose to be as a species. Will we use our burgeoning technological power to foster greater security, stability, and prosperity? Or will we allow our tools to transcend our values, our power to supersede our wisdom?
The answer to this question will be penned not by algorithms, but by people—political leaders, military strategists, scientists, civil society. Herein, perhaps, lies the ultimate irony: in an age of artificial intelligence, our most human quality—the capacity for deliberate moral choice—may prove to be our most invaluable resource.
Karl Jaspers wrote about “limit situations”—moments when individuals confront the boundaries of their existence and are compelled to make fundamental choices about who they aspire to be. The era of military AI is precisely such a limit situation for humanity. We can choose the path of what Lewis Mumford called the “megamachine”—a total technological system where humans become cogs in a gigantic mechanism. Or we can pursue Mumford’s ideal of “democratic technics”—technology serving human flourishing, rather than supplanting it.
The choice is ours. For now…
For those who read to the end, a bonus track: a futurological forecast.