Analysis: Combat use of AI in Ukraine 

Sam Cranny-Evans

The war in Ukraine has become an incubator for the use of artificial intelligence (AI) in combat. Developers and programmers on both sides have set about building algorithms for battlefield uses including autonomous navigation, target identification and engagement, and intelligence processing.

From the available evidence, the AI in use is beneficial on a small scale, but it remains a maturing technology that requires further development to realise its full potential. However, properly deployed and managed, AI can improve rapidly. Because of this, the uses of AI in Ukraine are worth monitoring in order to understand its beneficial use cases as well as its limitations.

Tactical

The crunchiest applications for AI in Ukraine can be observed on the mass of small drones produced in their thousands every month by volunteers and the small companies set up to meet the needs of the war. The presence of Russian electronic warfare (EW) along the frontline is significant, and in some areas several systems are layered and operated at full power to provide more comprehensive jamming of satellite navigation signals. Where this is the case, Ukrainian units might be forced to sacrifice drones to test whether Russian EW is active, or may be unable to fly anything effectively for real-time targeting. EW effects are occasionally deactivated, but this is usually to allow a Russian air strike. Because of this, autonomous navigation using AI has made its way into some small drones. It typically relies on algorithms trained on imagery of the operating area to enable the drone to navigate without satellite guidance.
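
As a rough illustration of the underlying idea, the sketch below estimates a drone's position by matching a downward-looking camera frame against pre-loaded, georeferenced imagery of the operating area using OpenCV template matching. The file names, georeferencing constants and matching method are assumptions for illustration; fielded systems are more sophisticated than this.

# Minimal sketch: estimate position by matching the live camera frame
# against a pre-loaded, georeferenced reference image of the area.
# File names and georeferencing constants are hypothetical.
import cv2

METRES_PER_PIXEL = 2.0   # ground sample distance of the reference imagery (assumed)

reference = cv2.imread("reference_map.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)        # hypothetical

# Slide the camera frame across the reference imagery and keep the location
# where the normalised cross-correlation is highest.
result = cv2.matchTemplate(reference, frame, cv2.TM_CCOEFF_NORMED)
_, confidence, _, top_left = cv2.minMaxLoc(result)

# Convert the best-match pixel offset into a rough position estimate
# relative to the top-left corner of the reference image.
centre_x = top_left[0] + frame.shape[1] / 2
centre_y = top_left[1] + frame.shape[0] / 2
east_m, south_m = centre_x * METRES_PER_PIXEL, centre_y * METRES_PER_PIXEL

print(f"match confidence {confidence:.2f}, "
      f"~{east_m:.0f} m east and {south_m:.0f} m south of the reference origin")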

Both Russia and Ukraine have developed algorithms to assist with the targeting of first-person view (FPV) drones. One Russian system is referred to as the Gadfly, and its AI homing capabilities were publicly displayed in the summer of 2023. Ukraine has developed and deployed the Saker drone, which uses AI to locate and identify targets and may also have some form of AI-assisted targeting. Furthermore, videos that appear to show FPVs displaying bounding boxes during their final approach have increased since early 2024; the use of AI is assumed based on the presence of a bounding box around the identified vehicle. Bounding boxes are used in object detection for computer vision; a box typically takes the form of a green rectangle drawn around the vehicle to indicate its location, accompanied by a confidence rating for the classification. Their presence suggests that an algorithm is locating the vehicle in question, which means it is theoretically possible that the algorithm is helping the drone track and engage the target.
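
To show what this looks like in practice, the sketch below runs a generic pretrained detector over a single frame and draws the familiar green bounding box and confidence score. The detector, confidence threshold and input file are illustrative assumptions, not a claim about what is flying on any particular FPV.

# Sketch: run a generic pretrained detector on a frame and draw a green
# bounding box with a confidence score, as seen in the FPV videos described above.
# The detector, threshold and input file are illustrative assumptions only.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]

frame = cv2.imread("frame.jpg")                          # hypothetical input frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    detections = model([tensor])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score < 0.5:                                      # confidence threshold (assumed)
        continue
    x1, y1, x2, y2 = map(int, box.tolist())
    name = labels[int(label)]
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)   # green bounding box
    cv2.putText(frame, f"{name} {score:.2f}", (x1, y1 - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("frame_with_boxes.jpg", frame)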

It is worth noting that many of these efforts are launched by small volunteer organisations. The Gadfly has been developed by volunteers, as has Saker. This does not necessarily mean that these efforts are inferior to those of larger defence or AI companies, but it may mean that the systems are developed using commercially available algorithms and that there are fewer AI specialists able to work on them. This can affect the pace of development and the iteration cycles used to rapidly bring algorithms and software up to the standards that users expect. “In Ukraine, the technology and the market both move fast and are only getting faster. To have any relevance you have to commit quickly, build collaboratively and deploy an MVP[1] into users’ hands,” Will Blyth, CEO of Arondite, a defence AI company working in Ukraine, told EDR Magazine. “That’s when the most important part of the work actually starts: collecting lots of real-world feedback, incorporating it into your product and redeploying something before the world changes again. This calls for pragmatic, practical and dedicated AI engineers who are focussed on outcomes and build fast,” he added, underlining the rapidly changing nature of the technology in use.

Deep strikes

On 2 April, CNN published a report indicating that Ukrainian drones used to strike Russian oil refineries had employed computer vision to autonomously navigate to their targets and engage them. This is essentially an AI-enabled version of the radar-based terrain matching capabilities of the Tomahawk land attack cruise missile (TLAM). However, it is not clear whether the computer vision on Ukraine’s strike drones enables them to fly a terrain-hugging profile, as is the case for the TLAM. Russia also employs the Lancet loitering munition, which in some cases carries an Nvidia Jetson TX2, a small computer built for running AI at the edge of a network. This means that the Lancet, which has a range of 50 km, may also be capable of some form of autonomous navigation. Both cases provide an indication of the potential for AI to improve the resistance of precision strike munitions to jamming and spoofing.
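
A toy sketch of why even occasional visual position fixes matter for a long-range strike drone is given below: dead reckoning drifts over time, while a periodic scene-matching fix keeps the error bounded without any satellite signal. All of the numbers are invented for illustration.

# Toy sketch of the navigation idea described above: dead reckoning between
# periodic visual position fixes keeps a long-range drone's error bounded
# without satellite navigation. All values are illustrative.
import random

true_pos = [0.0, 0.0]
est_pos = [0.0, 0.0]
speed = 40.0            # m/s ground speed (assumed)
heading = (1.0, 0.0)    # flying due east

for second in range(1, 3601):                        # one hour of flight
    true_pos[0] += speed * heading[0]
    true_pos[1] += speed * heading[1]

    # Dead reckoning: integrate a slightly erroneous velocity estimate.
    est_pos[0] += (speed + random.gauss(0, 0.5)) * heading[0]
    est_pos[1] += (speed + random.gauss(0, 0.5)) * heading[1] + random.gauss(0, 0.5)

    # Every five minutes, a scene-matching fix (simulated here) pulls the
    # estimate back towards the true position with roughly 20 m accuracy.
    if second % 300 == 0:
        est_pos[0] = true_pos[0] + random.gauss(0, 20)
        est_pos[1] = true_pos[1] + random.gauss(0, 20)

error = ((est_pos[0] - true_pos[0]) ** 2 + (est_pos[1] - true_pos[1]) ** 2) ** 0.5
print(f"position error after one hour: {error:.0f} m")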

Air defence

Respeecher is a Ukrainian company that developed an app to mimic celebrity voices using AI. The technology has found an interesting new use in Ukraine, where it is employed as part of the Zvook acoustic detection system. Algorithms have been trained on the sounds of Russian cruise missiles and paired with acoustic sensors located around Ukraine. The network is designed to track Russian missiles and provide an indication of their flight path. In theory, this allows Ukraine to carefully position its radars and air defence systems and to use Zvook to monitor any potential weak points in its radar coverage. Again, this system was developed by volunteers from Respeecher, an IT company called i3 and Ukraine’s Territorial Defence Force.
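
The basic pattern behind such an acoustic network can be sketched as follows: convert a short microphone clip into a spectrogram and score it with a classifier trained on missile sound signatures. The model file, labels and threshold below are hypothetical and do not reflect Zvook's actual implementation.

# Sketch of the acoustic classification idea behind a system such as Zvook:
# turn a short microphone clip into a spectrogram and score it with a model
# trained on missile sound signatures. Model file and threshold are hypothetical.
import numpy as np
import librosa
import joblib

SAMPLE_RATE = 16000
clip, _ = librosa.load("sensor_clip.wav", sr=SAMPLE_RATE, mono=True)    # hypothetical clip

# Log-mel spectrogram: a compact time-frequency picture of the sound.
mel = librosa.feature.melspectrogram(y=clip, sr=SAMPLE_RATE, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)

# Summarise the spectrogram into a fixed-length feature vector and score it
# with a previously trained classifier (training is not shown here).
features = np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)]).reshape(1, -1)
classifier = joblib.load("cruise_missile_classifier.joblib")             # hypothetical model
probability = classifier.predict_proba(features)[0][1]

if probability > 0.8:
    print(f"possible cruise missile, confidence {probability:.2f} - forward to air picture")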

In May 2023, Russian press excitedly reported on an S-350 air defence system that had engaged a target autonomously using AI. As is typical for Russian announcements, details were scarce, but if the system did perform this engagement with the help of AI it raises interesting prospects. It is theoretically possible to fuse the outputs from multiple sensors into a single recognised air picture. The US Army’s IBCS achieved this in 2020, when it used radars from Patriot and Sentinel to engage a pair of cruise missiles. It is not clear whether IBCS uses any machine learning to help process its targeting information, but similar effects can be attained through AI-enabled sensor fusion: edge computing and sets of algorithms designed to interpret the data generated by radars and other sensors and combine those outputs into a single picture. Going one step further and allowing AI to make decisions about air defence engagements could prove more successful than relying on human operators – especially after several learning cycles.
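
A simplified sketch of that kind of sensor fusion is shown below: track reports from two radars that fall within an association gate are treated as the same object and merged into a single, accuracy-weighted track. It is a generic illustration, not a description of how IBCS or the S-350 actually work.

# Simplified sketch of fusing track reports from two radars into one air picture:
# reports closer together than an association gate are treated as one object and
# merged with an accuracy-weighted average. Generic illustration only.
from dataclasses import dataclass
from itertools import chain

@dataclass
class TrackReport:
    sensor: str
    x_km: float
    y_km: float
    error_km: float      # reported positional uncertainty

GATE_KM = 2.0            # association gate (assumed)

radar_a = [TrackReport("radar_a", 40.2, 11.9, 0.5), TrackReport("radar_a", 85.0, 3.1, 0.8)]
radar_b = [TrackReport("radar_b", 40.9, 12.3, 0.3)]

fused = []
for report in chain(radar_a, radar_b):
    for track in fused:
        if abs(track["x"] - report.x_km) < GATE_KM and abs(track["y"] - report.y_km) < GATE_KM:
            # Weight each contribution by the inverse of its reported error.
            w_new, w_old = 1 / report.error_km, track["weight"]
            track["x"] = (track["x"] * w_old + report.x_km * w_new) / (w_old + w_new)
            track["y"] = (track["y"] * w_old + report.y_km * w_new) / (w_old + w_new)
            track["weight"] = w_old + w_new
            track["sensors"].add(report.sensor)
            break
    else:
        fused.append({"x": report.x_km, "y": report.y_km,
                      "weight": 1 / report.error_km, "sensors": {report.sensor}})

for track in fused:
    print(f"track at ({track['x']:.1f}, {track['y']:.1f}) km seen by {sorted(track['sensors'])}")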

This also touches on an important distinction: the Aegis air defence system onboard the US Navy’s Arleigh Burke-class destroyers can operate autonomously. In this mode, which is rarely used, the ship’s battle management system takes control of its radars and interceptors. It is programmed to protect the ship, which means it may exhaust its missile cells unnecessarily or engage targets that are not actually threats. To do this, the Arleigh Burke employs thousands of lines of code written by programmers; it is not possible for the ship to do anything other than what its code states. Put simply, “if target matches these conditions, engage in the following way.” An AI-enabled air defence system, on the other hand, has taught itself how to conduct air defence from a blend of synthetic and real data. Once deployed operationally, it would theoretically be capable of learning and improving its capabilities based on data from previous engagements, improving its response to the next set of threats and its overall success rate.
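
The hand-written logic described above can be illustrated with a small rule set like the one below; every response is fixed in advance by a programmer, which is precisely the contrast with a system that has learned its engagement policy from data. The threat types, ranges and responses are made up for the example.

# Illustration of hand-written "if target matches these conditions, engage"
# logic. Every response is fixed in advance by the programmer; nothing is
# learned from data. Threat types, ranges and doctrine are invented.
from dataclasses import dataclass

@dataclass
class AirTrack:
    kind: str            # e.g. "anti_ship_missile", "aircraft", "unknown"
    range_km: float
    closing: bool        # True if the track is heading towards the ship

def engagement_decision(track: AirTrack) -> str:
    # Rule 1: fast inbound missiles inside 30 km are always engaged.
    if track.kind == "anti_ship_missile" and track.closing and track.range_km < 30:
        return "engage with interceptor salvo"
    # Rule 2: closing aircraft inside 50 km trigger a soft-kill response first.
    if track.kind == "aircraft" and track.closing and track.range_km < 50:
        return "engage with electronic countermeasures"
    # Rule 3: anything else is tracked but not engaged.
    return "continue tracking"

for track in [AirTrack("anti_ship_missile", 22.0, True),
              AirTrack("aircraft", 70.0, True),
              AirTrack("unknown", 15.0, False)]:
    print(track.kind, "->", engagement_decision(track))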

Analysis

Ukraine is showing what is possible with AI in a military context, but many of these efforts would be regarded as concept demonstrators, or as low technology readiness level (TRL) and under-developed by some industry standards. Proper infrastructure allows for rapid upgrades and iterations of algorithms to improve their capabilities after each new operational experience. The extent to which either side possesses this infrastructure, or the engineers necessary to fully realise the potential of AI-enabled capabilities, is unclear. Nevertheless, it does appear that having many different clusters all innovating independently is valuable for the rapid development of AI for military applications.

Technically, the systems on display provide resilience to EW and likely improve flight performance and vehicle engagements. However, they do not represent a game-changing capability. Alongside the videos of FPV strikes there are many others of conventional artillery or precision strikes against conventionally reconnoitred targets. That said, the uses described above provide an insight into the potential of AI to shape warfare, and an indication of where developments are likely to be seen.

Photo courtesy АрміяInform, CC BY 4.0 via Wikimedia Commons


[1] Minimum Viable Product
