Serhiy Beskrestnov has issued a stark warning, labeling a recently intercepted Russian drone as “our future threat.” His caution comes after personally examining the newly acquired technology.
As a key consultant for Ukraine’s defence forces, Beskrestnov has personally scrutinized a vast array of drones, and he quickly discerned that this was no ordinary one. Powered by artificial intelligence, the unmanned aerial vehicle is capable of autonomously locating and engaging targets.
Unlike conventional designs, this particular model neither transmits nor receives any signals, making it entirely impervious to jamming, a critical advantage.
Both Russian and Ukrainian military units are actively exploring and, in certain capacities, implementing artificial intelligence within the ongoing conflict. This technology is being leveraged for critical tasks such as target identification, intelligence gathering, and de-mining operations.
Artificial intelligence has become fundamental to the Ukrainian military’s capabilities.
Ukraine’s military is processing more than 50,000 video streams from the front lines each month, with artificial intelligence handling the analysis, Deputy Defence Minister Yuriy Myronenko has confirmed.
The AI swiftly processes these vast volumes of footage, enabling the precise identification of targets and their visualization on a map.

Advanced artificial intelligence is poised to revolutionize strategic planning, optimize the use of resources and, critically, preserve lives.
Yet, autonomous weapon systems are fundamentally altering the landscape of modern warfare.
Ukrainian forces are now deploying artificial intelligence software that enables their drones to independently acquire and lock on to targets. The unmanned aerial vehicles can then navigate autonomously for the final few hundred metres to complete their missions.
Such devices cannot be jammed electronically, and shooting down a small airborne object presents a significant challenge.
These systems are anticipated to eventually transform into fully autonomous weaponry, capable of independently identifying and neutralizing targets.
A soldier’s involvement could be reduced to a single tap on a smartphone app, says Yaroslav Azhnyuk, chief executive of the Ukrainian development firm The Fourth Law.
The drone will do the rest, he says, finding the target, dropping explosives, assessing the damage and then returning to base.
“And it would not even require piloting skills from the soldier,” he adds.

Interceptor drones with that kind of automation could significantly strengthen air defences against Russian long-range attack drones, such as the notorious Shaheds.
“A computer-guided autonomous system can be better than a human in so many ways,” says Azhnyuk. “It can be more perceptive. It can see the target sooner than a human can. It can be more agile.”
Deputy Defence Minister Yuriy Myronenko says such a system does not yet fully exist, but he believes Ukraine is nearing the final stages of its development. Elements of it, he adds, have already been partially implemented in various devices.
Azhnyuk forecasts that the number of operational systems could climb into the thousands by the close of 2026.
Ukrainian developers remain cautious about fully handing defence systems over to artificial intelligence without human oversight. A key concern, voiced by Vadym (who declined to give his surname), is that AI could fail to distinguish between Ukrainian and Russian soldiers, particularly as both may wear similar uniforms.
His company, DevDroid, manufactures remote-controlled machine guns that integrate artificial intelligence for autonomous human detection and tracking. Citing concerns about potential friendly fire, he confirms these systems are not equipped with an automatic shooting function.
“We can enable it, but we need to get more experience and more feedback from the ground forces in order to understand when it is safe to use this feature.”

There are also fears that automated systems will violate the rules of war. How will they avoid harming civilians, or distinguish soldiers who want to surrender?
For the deputy defence minister, the final decision in such circumstances should rest with a human, although AI would make it “easier to decide”. But there are no guarantees that states or armed groups will adhere to international humanitarian norms.
So counteracting these systems becomes even more critical.
How do you stop a “swarm of drones” when jamming or using jets, tanks or missiles is rendered ineffective?
Ukraine’s highly successful “Spider Web” operation, when 100 drones targeted Russian air bases last June, was probably assisted by AI tools.
Many in Ukraine fear that Moscow will copy that tactic, not just on the front line but beyond it too.
Ukraine’s Volodymyr Zelensky warned the UN last month that AI was contributing to “the most destructive arms race in human history”.
He called for global rules for the use of AI in weapons, and said the issue was “just as urgent as preventing the spread of nuclear weapons”.
