In the coming years the Navy will gain access to a rapidly growing profusion of sensors, not just through new fleets of unmanned vehicles combined with existing systems, but through
multi-service sensors as well, as part of a joint operating environment. If the Navy is to maintain dominance in the INDOPACOM AOR, it must be able to extract maximum insight from those sensor assets.
One of the key challenges in gaining that insight is resolving the inconsistencies that frequently arise when multiple sensors are looking at the same contact. Different sensors often have their own inherent strengths and weaknesses. One sonar sensor might have more precise bearing resolution on a contact, for example, allowing for a better targeting solution. But a different sonar sensor might have better narrowband frequency information, making contact classification more accurate. The greater the number of sensors, the more valuable data is available, but also the greater the number of differences in the data and the more noise that operators have to sort out to make the best identification.
Machine learning and other forms of artificial intelligence will aid this process, but they also contribute to the problem themselves. In many cases there will be multiple algorithms looking at the same stream of sensor data, each making its own prediction of classification, location track, and mission intent—all based on the algorithm’s particular strengths and weaknesses. It may not be easy to reconcile their differences.
One advantage of machine learning is its ability to present a confidence value, or score, that a commander can use in decision-making. For example, machine learning algorithms—based on data from multiple surface and undersea sensors—might say that there is a 99.99 percent chance the contact is a manmade object, a 95 percent chance the contact is a Chinese submarine, and an 85 percent chance the contact is a Han Class SSN. But how can a commander know whether that conclusion is reliable when there is so much variability among the sensors, and among the algorithms themselves?
The Navy can address this challenge by using AI in another way. The AI first fuses the outputs of the multiple algorithms processing each sensor’s data (algorithm fusion), then fuses that result with the results from other sensors using non-linear models such as deep neural networks (sensor-data fusion). Finally, the AI refines that result with a third layer (context fusion), which brings together and analyzes additional Navy datasets for contact identification.
The result of this multi-layer, AI-enabled fusion is a far more accurate score for the commander. Because it can rapidly bring together a large number of sensors from manned and unmanned systems, it also significantly shortens the time to decision-making and action.
The three-step process works in a particular order: first algorithm fusion, then sensor-data fusion, then context fusion. Each step is critical to the final score.
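As a rough sketch of that ordering, the pipeline might be wired together along the following lines. The function names, data shapes, and placeholder math are assumptions for illustration only, not an actual Navy system, and each layer is sketched in more detail in the sections that follow.

```python
# Illustrative sketch only: function names and data shapes are assumptions,
# not an actual Navy or vendor API. Each layer is elaborated in later sections.

def algorithm_fusion(algorithm_scores: list[float]) -> float:
    # Placeholder: a real implementation would weight each algorithm's score
    # dynamically; here we simply average.
    return sum(algorithm_scores) / len(algorithm_scores)

def sensor_data_fusion(per_sensor_scores: list[float]) -> float:
    # Placeholder: a real implementation would weight streams by data quality.
    return sum(per_sensor_scores) / len(per_sensor_scores)

def context_fusion(score: float, context_adjustment: float) -> float:
    # Placeholder: raise or lower the score based on contextual evidence.
    return min(max(score + context_adjustment, 0.0), 1.0)

def fuse_contact(streams: list[list[float]], context_adjustment: float) -> float:
    """Run the three layers in their fixed order: algorithm, sensor-data, context."""
    per_sensor = [algorithm_fusion(scores) for scores in streams]
    fused = sensor_data_fusion(per_sensor)
    return context_fusion(fused, context_adjustment)

# Example: two sensors, each watched by several algorithms.
print(fuse_contact([[0.92, 0.88, 0.95], [0.80, 0.86]], context_adjustment=0.03))
```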
ALGORITHM FUSION
Machine learning algorithms identify objects by looking for patterns in historical and current data, and then finding those same patterns in real-world situations. As the Navy rolls out machine learning for sensor data, there will likely be multiple algorithms for each radar, sonar or other sensor stream. This gives the AI more ways to detect, classify and analyze a contact, but it also adds complexity: each algorithm will generate its own, possibly different, confidence score for the contact.
Algorithm fusion addresses this complexity through ensemble learning approaches that produce a single, overarching score. It doesn’t do this by averaging the algorithms’ scores. Rather, it uses a dynamic weighting scheme applied to each score, based partly on how well the algorithm has performed historically in similar situations. For example, there may be five algorithms looking at the same sonar data of a contact. One algorithm might have proved more accurate at identifying submarines based on the particular frequencies the contact is emitting. Another algorithm might be more accurate for the particular angle on the bow between the sensor and the contact. A third algorithm might be more accurate for the particular combination of environmental factors, such as water depth, sound-velocity profile, and arrival path.
The weighting is also based on mission and domain knowledge that has been programmed into the fusion process. In the sonar example, this means weighing the relative importance of each of those factors (emitted frequencies, angle on the bow, and environmental conditions) in making an identification.
The fusion process doesn’t throw out any of the algorithms, but instead identifies the strengths of each one in the current situation, and then brings those strengths together to produce the single confidence score. Fusion uses all the available algorithms to full advantage.
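As a rough illustration of such a dynamically weighted ensemble, the sketch below weights each algorithm’s score by its historical accuracy under conditions like the current ones. The algorithm names, condition labels, accuracy figures, and scores are invented for illustration and are not drawn from any fielded system.

```python
# Sketch of dynamically weighted algorithm fusion. All names and numbers below
# are invented for illustration.

def fuse_algorithm_scores(scores: dict[str, float],
                          historical_accuracy: dict[str, dict[str, float]],
                          conditions: dict[str, str]) -> float:
    """Combine per-algorithm confidence scores into one score.

    Each algorithm's weight reflects how well it has performed historically
    under conditions like the current ones, so no algorithm is discarded;
    the ones that are stronger in this situation simply count for more.
    """
    weights = {}
    for name in scores:
        # Average the algorithm's historical accuracy across the current
        # conditions (frequency content, angle on the bow, environment, ...).
        accs = [historical_accuracy[name].get(cond, 0.5) for cond in conditions.values()]
        weights[name] = sum(accs) / len(accs)

    total = sum(weights.values())
    return sum(scores[name] * weights[name] / total for name in scores)


# Five algorithms looking at the same sonar data of a contact.
scores = {"alg_a": 0.97, "alg_b": 0.88, "alg_c": 0.91, "alg_d": 0.72, "alg_e": 0.85}
historical_accuracy = {
    "alg_a": {"narrowband": 0.95, "bow_aspect": 0.70, "shallow_water": 0.60},
    "alg_b": {"narrowband": 0.65, "bow_aspect": 0.92, "shallow_water": 0.68},
    "alg_c": {"narrowband": 0.70, "bow_aspect": 0.66, "shallow_water": 0.93},
    "alg_d": {"narrowband": 0.55, "bow_aspect": 0.58, "shallow_water": 0.61},
    "alg_e": {"narrowband": 0.78, "bow_aspect": 0.74, "shallow_water": 0.72},
}
conditions = {"frequency": "narrowband", "aspect": "bow_aspect", "environment": "shallow_water"}

print(round(fuse_algorithm_scores(scores, historical_accuracy, conditions), 3))
```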
SENSOR-DATA FUSION
Often, multiple sensors may be looking at the same contact—radars on different manned and unmanned surface vehicles in a group, for example, or different types of sensors, such as radar and SIGINT, on the same platform. In the next phase—sensor-data fusion—the AI brings together and evaluates all the relevant data streams, to produce a more comprehensive score for the commander.
Sensor-data fusion assigns a weight to each data stream, largely based on the quality of its data. There are a number of reasons why sensor data quality can vary. For example, one sensor might generate lower-resolution data than others because of its location. Or, the sensor might be older, and have lower sensitivity than newer versions. Some sensors—such as those on unmanned vehicles—may have smaller optics than large, complex sensors, and so might generate less robust results. Once the AI assigns weights to the different data streams, based on their strengths and weaknesses, it fuses the results, refining the overarching confidence score.
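A non-linear model such as a deep neural network would typically handle this step, as described above. As a simplified stand-in, the sketch below uses a plain quality-weighted combination just to show the weighting idea; the quality factors (resolution, sensitivity, aperture) and the numbers are invented for illustration.

```python
from dataclasses import dataclass

# Sketch of quality-weighted sensor-data fusion. A fielded system would likely
# use a learned, non-linear model; this weighted combination only illustrates
# how stream quality could shape the result. All values are invented.

@dataclass
class SensorResult:
    name: str
    score: float        # confidence score from algorithm fusion for this stream
    resolution: float   # 0..1, relative resolution given the sensor's location
    sensitivity: float  # 0..1, e.g. lower for older hardware
    aperture: float     # 0..1, e.g. smaller optics on unmanned vehicles

def fuse_sensor_results(results: list[SensorResult]) -> float:
    """Weight each stream by a crude data-quality estimate, then combine."""
    def quality(r: SensorResult) -> float:
        return (r.resolution + r.sensitivity + r.aperture) / 3.0

    total = sum(quality(r) for r in results)
    return sum(r.score * quality(r) / total for r in results)

# Example: a high-end hull-mounted sensor and a smaller unmanned-vehicle sensor.
streams = [
    SensorResult("destroyer_sonar", score=0.94, resolution=0.9, sensitivity=0.9, aperture=0.9),
    SensorResult("uuv_sonar",       score=0.78, resolution=0.6, sensitivity=0.7, aperture=0.4),
]
print(round(fuse_sensor_results(streams), 3))
```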
CONTEXT FUSION
In the same way that Navy operators of radar, sonar and other sensors look at the larger context of a contact to help make an identification, the AI brings in disparate data sources to refine the score. Data sources can range from known military training routes (for both friend and foe), to previous operational data collected on missions, to the seasonal migration of dolphins and whales.
The AI can bring together and analyze large numbers of relevant datasets at once—far more than an individual operator could review. The results of the context fusion may lower or raise the final confidence score for the commanding officer.
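One simple way to picture this layer is as a set of contextual checks that nudge the fused score up or down. In the sketch below, the dataset names, checks, and adjustment values are invented purely for illustration.

```python
# Sketch of context fusion: contextual datasets raise or lower the fused score.
# The dataset names, checks, and adjustment values are invented for illustration.

def apply_context(score: float, context: dict[str, bool]) -> float:
    """Adjust a fused confidence score using contextual evidence, clamped to [0, 1]."""
    adjustments = {
        "on_known_adversary_transit_route": +0.04,   # consistent with known training routes
        "matches_prior_mission_detection":  +0.05,   # seen before in operational data
        "overlaps_marine_mammal_migration": -0.06,   # seasonal whale/dolphin activity nearby
    }
    for factor, present in context.items():
        if present:
            score += adjustments.get(factor, 0.0)
    return min(max(score, 0.0), 1.0)

# Example: the contact sits on a known transit route, but whales migrate here this season.
print(apply_context(0.88, {
    "on_known_adversary_transit_route": True,
    "matches_prior_mission_detection": False,
    "overlaps_marine_mammal_migration": True,
}))
```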
Ultimately, AI-enabled fusion squeezes more insight from the Navy’s existing and growing sensor assets—resolving conflicting data and creating a clearer understanding of the INDOPACOM AOR tactical environment.
ADAM WEINER
[email protected], a Vice President at Booz Allen, leads the firm’s Navy Sensor Fusion, Human Signatures, and Navy Warfare Center business.
DR. NATHANIEL J. SHORT
[email protected] is a Senior Lead Scientist at Booz Allen, where he conducts research and development in sensor exploitation, computer vision and data fusion for Department of Defense and other government clients.
BOOZALLEN.COM/DEFENSE