War Games Reimagined: How AI Is Reshaping Military Strategy

By Stuart Kerr, Technology Correspondent

📅 Published: July 7, 2025 | Last Updated: July 7, 2025
📧 liveaiwire.com | 🐦 @liveaiwire


The Rise of Algorithmic Warfare

Artificial intelligence is no longer confined to chatbots, language models, or self-driving cars. It is now embedded in one of humanity’s most controversial domains: the battlefield.

Across Ukraine, India, and even NATO research labs, AI is being integrated into everything from drone swarms to battlefield simulations and autonomous weapon systems. It is a transformation taking place largely out of public view, yet one with the power to redefine international law, ethics, and global security.


From Logistics to Lethality

On the front lines in Ukraine, AI-driven combat robots now support troops, carry equipment and, in some cases, engage targets remotely. These machines often mount older weaponry, such as the M2 Browning machine gun, repurposed and directed by AI-based targeting and control software (SIPRI).

According to the Financial Times, nearly two million drones have been deployed in Ukraine alone, with thousands relying on AI for route planning, targeting, and autonomous flight. These drones are designed to operate independently even when the link to their human operators is lost, a capability that is becoming more critical as both sides deploy electronic warfare systems to jam signals.

These trends parallel the shift we explored in AI in Law Enforcement, where AI’s predictive capabilities redefine how force is applied.


Who Decides to Kill?

Two categories dominate modern military AI: autonomous weapon systems (AWS) and AI decision-support systems (AI-DSS).

AI-DSS are already in use: they assist human operators in identifying and ranking targets but still rely on a human command to act. AWS go further, identifying, selecting, and engaging targets on their own.

Human Rights Watch, in its 2025 report “A Hazard to Human Rights,” warns that fully autonomous systems risk violating fundamental principles of international humanitarian law, particularly those governing proportionality and the protection of civilians.

This ethical dilemma echoes concerns raised in The AI Gender Gap, where we asked: if the creators of AI systems don’t reflect the diversity of society, can we trust them to make fair decisions?


Global Rules, Fragmented Action

While nations experiment, the international community is scrambling to catch up. In late 2024, the UN General Assembly passed a resolution urging states to adopt binding limits on AWS use. But the Convention on Certain Conventional Weapons (CCW), the main treaty forum, has stalled because it can act only by consensus among its member states (ASIL).

The International Committee of the Red Cross now advocates a ban on unpredictable autonomous weapons altogether, while encouraging stricter safeguards for those that remain under “meaningful human control.”

The United Kingdom’s House of Lords has urged, and the U.S. Department of Defense requires, that human operators remain in charge of critical decisions, though definitions of “meaningful control” vary.


India’s Leap, and Ukraine’s Testbed

India’s recent deployment of the Akashteer AI defence system showcases how integrated AI warfare has become. The system links drones, satellites, and anti-air defences into a real-time web of automation, reducing human reaction lag in protecting national airspace.

Meanwhile, Ukraine is rapidly becoming a proving ground for AI combat. A Business Insider investigation describes how the war has pushed AI from the lab into live testing, often without clear rules of engagement.

Just as we discussed in AI in Cybersecurity, the boundaries between civilian and military applications are becoming dangerously blurred.


Transparency at Gunpoint

The deeper problem may not be control but clarity. In most cases, no one can fully explain how an autonomous system reached a decision. This “black box” effect leaves operators, commanders, and even courts unable to trace accountability when things go wrong.

According to Human Rights Watch, the lack of explainability in lethal decisions presents an existential challenge to human rights law.

It’s a transparency issue we’ve seen before: as explored in Behind the Facade, trust in AI systems can collapse when the reasoning behind their actions remains hidden.


A Call for Restraint

Several policy experts propose a three-pronged approach:

  • Ban indiscriminate and unpredictable autonomous weapons.

  • Enforce oversight rules for semi-autonomous systems.

  • Introduce audit trails and AI explainability in military operations.

The World Federation of United Nations Associations has proposed a roadmap that includes certification frameworks and independent weapons audits, modelled after existing arms control treaties.


A Future on the Edge

We now stand at a tipping point. AI weapons systems are here. Some are already deployed. Others will follow soon. But regulation, oversight, and international cooperation lag dangerously behind.

As we explored in AI and the Beautiful Game, even recreational use of AI reveals how deeply these systems can alter human judgement and strategy.

In war, that influence can be fatal.


About the Author

Stuart Kerr is LiveAIWire’s Technology Correspondent. You can follow his work at 👉 liveaiwire.com/p/to-liveaiwire-where-artificial.html or reach out via 🐦 @liveaiwire or 📧 liveaiwire.com.


