A distributed surface action group relies on “having information superior to that of the enemy in order to be hard to find and thus avoid attack and achieve the offensive advantage of surprise.”1 The better it can sense and understand the operating environment, the more likely it is to complete the targeting kill chain before the adversary does.
In today’s fleet, battlespace awareness is accomplished through the combined efforts of many platforms, and the current fleet architecture possesses “limited organic capability and capacity to conduct intelligence, surveillance, reconnaissance, and targeting (ISR&T); instead, it relies on national and joint-force assets for this mission.”2 Three units provide the overwhelming majority of warning intelligence to afloat units: the aircraft carrier intelligence center, the joint intelligence center on amphibious ships, and the fleet’s shore-based maritime operations center (MOC).3 Concentrating intelligence collection and analysis in only a few places seems counterintuitive in an operating concept based on the smart distribution of offensive capability. A hunter-killer surface action group (SAG), dispersed from the main fleet forces and operating under strict electromagnetic emissions control (EMCON), may not be able to communicate with an intelligence center or receive timely, relevant warnings of adversary activity.
The Four Vs of Battlespace Awareness
Further complicating the battlespace awareness problem is the sheer quantity of information produced by organic, theater, and national collection sensors. Four elements of data overload make battlespace awareness so challenging: variety, volume, velocity, and veracity.4 The various types of raw data available and the resulting combinations of that data (variety) increase exponentially over time, and the amount of usable data available (volume) makes it difficult to know in advance which combinations may have intelligence value. Sensors also produce these large quantities of data faster than ever before (velocity), often without the processing power necessary to exploit it at near-real-time speed. This can create trust issues (veracity) when intelligence is not available because of system latency. Overcoming this sensor data overload requires autonomous technological solutions if a SAG is to maintain battlespace awareness.
To maintain and improve battlespace awareness in a distributed lethality environment, the Navy should develop solutions using the latest breakthroughs in machine learning and artificial intelligence—an automated surveillance imagery exploitation capability, an organic distributed unmanned underwater vehicle (UUV) sensor capability, and a common operating picture (COP) enhanced with better predictive analysis tools.
Automated Imagery Interpretation
One technology that would enhance battlespace awareness is the automated processing and exploitation of surveillance imagery. This capability exists today in private industry and academia through the use of advanced computer science research in machine learning and neural networks.
Researchers at Google used machine learning techniques to develop the Google Cloud Vision software, which helps users understand the content of any image.5 The software “quickly classifies images into thousands of categories, detects individual objects and faces within images, and finds and reads printed words contained within images” through powerful machine-learning models.6 Any image can be uploaded into the Cloud Vision program and it will provide a customizable analysis of the image content. Imagery tasks associated with order-of-battle management, such as identification of military equipment and the analysis of an enemy’s composition, disposition, and strength, are simple yet time-consuming tasks for humans that could be handed over to machine-learning tools.
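A minimal sketch of the kind of categorization such tools perform is shown below. This is not the Cloud Vision API itself but a toy nearest-centroid classifier over hypothetical image feature vectors; the labels, vectors, and distances are illustrative assumptions only, standing in for the learned representations a real neural network would produce.

```python
import math

# Hypothetical category centroids in a learned feature space.
# Real systems derive these vectors from deep neural networks;
# the numbers and labels here are illustrative only.
CENTROIDS = {
    "surface combatant": [0.9, 0.1, 0.2],
    "merchant vessel":   [0.2, 0.8, 0.1],
    "fishing vessel":    [0.1, 0.3, 0.9],
}

def classify(features):
    """Return the category whose centroid is closest to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label], features))

print(classify([0.85, 0.15, 0.25]))  # closest to "surface combatant"
```

The point of the sketch is that once categories are learned, assigning an image to one of them is a mechanical comparison, exactly the kind of repetitive order-of-battle task that could be handed to a machine.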
However, the automated understanding of images is “still immature for defense needs.”7 The growth of imagery acquisition through organic, theater, and national reconnaissance has outpaced the ability of naval intelligence to process and exploit it. With demand for sensor collection incessant, naval intelligence relies on the expert work of imagery interpreters to provide battlespace awareness.8
The current process of imagery analysis in the Navy is inefficient. An imagery interpreter “analyzes full-motion video, identifies and measures objects of intelligence interest found in imagery, prepares imagery interpretation reports, and maintains files related to imagery interpretation.”9 Interpreters’ training is extensive and highly specialized, and their skills are in high demand. These skills, however, are not unique to humans. Machine-learning software could tackle many (if not all) of the repetitive tasks associated with imagery exploitation, making imagery analysts better able to provide the human-specific capability of delivering intelligence relevant to operating forces.
To reduce the workload of imagery analysts, scientists at the Office of Naval Research are focused on detecting and recognizing objects, tracking activities of people and vehicles, inferring intentions and threats, and creating concise descriptions of activities taking place in images.10 Similar to how a human analyzes an image, this understanding process must discriminate between objects and their associated activities. A typical image could contain hundreds of objects that can vary with changes in illumination and other environmental factors, challenging even the most advanced software’s ability to keep pace.11 However, a sufficiently powerful machine-learning tool should be able to exploit images faster than even the most capable human analysts.
Automating the interpretation of surveillance imagery would enable imagery interpreters to forward deploy away from the centralized fleet intelligence centers and into hunter-killer SAGs. These specialized sailors could then assist with the localized exploitation of imagery pulled from organic intelligence, surveillance, and reconnaissance (ISR) assets, such as Fire Scout or Scan Eagle. Additional intelligence specialists in a SAG would help create knowledge for commanders and watchstanders, maintain complicated intelligence systems, and help make sense of a complex information environment when separated from normal reach-back capabilities. In addition, more intelligence specialists afloat could help operate another much-needed autonomous capability: UUVs.
Autonomous UUVs and Sensors
In a 2015 Proceedings article, Admirals Rowden, Gumataotao, and Fanta argued for persistent organic airborne ISR to support distributed lethality.12 This recommendation ignored the autonomous capability being developed for use in UUVs. Airborne ISR assets such as the MQ-8 Fire Scout do provide a rapid response capability, but true persistent wide-area surveillance is best accomplished through the distributed use of autonomous UUVs.
An autonomous UUV organic to a hunter-killer SAG would provide several advantages over airborne ISR. The first is stealth: UUVs are able to operate with a low probability of detection or compromise within the coastal waters of competing nations, as well as in close proximity to adversary targets. Airborne platforms likely will not be able to operate as freely in contested or restricted environments. Another advantage of UUVs is the cost-benefit trade-off, especially when compared to more complex airborne systems. The Defense Science Board highlights this advantage by noting that “it may cost the adversary more to detect, track, and defeat a low-cost platform than it costs the U.S. to acquire and deploy it.”13 A low-cost UUV platform with reliable sensors would significantly complicate an adversary’s targeting problem, forcing it to expend resources against an elusive threat.
An autonomous UUV tasked with ISR needs to understand the undersea and sea surface environment to provide intelligence. Platform sensors will actively hunt for specific conditions or signatures, defined in advance, and then automatically broadcast that information to the consumer. The intent is to deliver only select information to help intelligence personnel overcome the challenges associated with data overload.
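The report-by-exception scheme described above can be sketched as a simple filter: contacts are compared against predefined signatures of interest, and only matches are queued for broadcast. The field names, signature types, and threshold values below are illustrative assumptions, not real collection parameters.

```python
# Sketch of "report by exception": only contacts matching predefined
# signatures of interest are queued for broadcast, reducing data volume.
# Signature definitions and frequency bands are hypothetical.
SIGNATURES_OF_INTEREST = [
    {"type": "acoustic", "min_freq_hz": 48.0,   "max_freq_hz": 52.0},
    {"type": "radar",    "min_freq_hz": 9300.0, "max_freq_hz": 9500.0},
]

def matches(contact, signature):
    """True if the contact falls inside one signature's type and band."""
    return (contact["type"] == signature["type"]
            and signature["min_freq_hz"] <= contact["freq_hz"] <= signature["max_freq_hz"])

def select_for_broadcast(contacts):
    """Return only the contacts that match at least one predefined signature."""
    return [c for c in contacts
            if any(matches(c, s) for s in SIGNATURES_OF_INTEREST)]

raw = [
    {"type": "acoustic", "freq_hz": 50.1},    # matches the acoustic band
    {"type": "acoustic", "freq_hz": 120.0},   # filtered out
    {"type": "radar",    "freq_hz": 9410.0},  # matches the radar band
]
print(select_for_broadcast(raw))  # only the two contacts of interest
```

The design choice is the same one the paragraph describes: filtering at the sensor keeps scarce communications bandwidth for information that was defined in advance as worth sending.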
This requires robust on-board sensor processing and a different method of data relay. Currently, data collected by UUV sensors cannot be processed until after the vehicle has been recovered, delaying delivery of critical intelligence. To operate as designed, UUVs must have long-range communication systems for covert and occasional high-bandwidth exchanges.14 In addition, the likelihood of a severely contested electromagnetic spectrum requires some form of direct peer-to-peer communications.15 The specific details of how UUVs could meet this requirement remain a significant challenge. Could legacy broadcast systems reliably penetrate the water column to a submerged UUV? Or, rather, does an inverse relationship exist between stealth and communications, akin to that in the submarine force? Methods that use existing communications technology should be incorporated into UUVs with an awareness of the likely engineering and operational trade-off between stealth and timeliness of information.
Intelligent Common Operating Picture
Tackling the large volume of sensor data produced is a challenging task for humans; AI algorithms will help. AI has the ability to make sense of conflicting evidence through constraint satisfaction and algorithmic programming logic and does not need explicit instructions, definitions, or lists of necessary conditions to make sense of messy evidence. And AI can sort through huge amounts of data to recognize damaged or incomplete patterns in much the same way humans can recognize and recall the melody of a song after hearing only a few notes. Put into a maritime context, an AI algorithm might be able to predict a range of possible outcomes for an adversary operating in a dynamic environment. It could potentially distinguish between training missions and combat patrols, routine maintenance and significant system casualty, or possibly even hostile intent and erratic behavior. The proper role for naval intelligence is to help manage and understand the relevancy of AI-derived information.
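The melody analogy above — recognizing a pattern from only a few notes — can be illustrated with a toy matcher that scores each known activity pattern by how well an incomplete observation sequence fits it, ignoring gaps where data is missing. The pattern labels and event names are hypothetical.

```python
# Toy illustration of recognizing a damaged or incomplete pattern:
# score each known activity pattern by how well the observed fragment
# fits, skipping positions (None) where data is missing.
# Pattern names and events are illustrative assumptions.
KNOWN_PATTERNS = {
    "training mission": ["depart", "transit", "exercise_area", "transit", "return"],
    "combat patrol":    ["depart", "transit", "loiter", "loiter", "transit"],
}

def fit_score(fragment, pattern):
    """Fraction of observed (non-None) events that match the pattern position-wise."""
    observed = [(f, p) for f, p in zip(fragment, pattern) if f is not None]
    if not observed:
        return 0.0
    return sum(f == p for f, p in observed) / len(observed)

def best_match(fragment):
    """Return the known pattern that best explains the fragment."""
    return max(KNOWN_PATTERNS, key=lambda k: fit_score(fragment, KNOWN_PATTERNS[k]))

# Incomplete track: two events missing, yet the pattern is recoverable.
print(best_match(["depart", None, "loiter", None, "transit"]))  # "combat patrol"
```

Even with two of five observations missing, the fragment fits one pattern perfectly and the other only partially, which is the essence of recalling a song from a few notes.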
An AI COP tool could help with several analytical problems related to human cognitive bias. Some fundamental notions of statistics are not part of a person’s intuition and decision-making abilities. Human cognitive biases may be amplified through the challenges of distance, operational tempo, and relative experience of analysts. Richard Heuer, formerly of the CIA, contends that even when human analysts are keenly aware of their own biases they likely will be unable to effectively counter them.16 AI could assist in the efficient “structuring [of] information, challenging assumptions, and exploring alternative interpretations.”17 The crucial limiting factor becomes the processing speed of the machine, not the heuristic biases of human analysts.
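One concrete example of statistics that defies intuition is base-rate neglect, which a COP tool could counter by applying Bayes’ rule consistently. The hypothesis, prior, and sensor reliability figures below are hypothetical, chosen only to show the arithmetic.

```python
# Illustrative Bayes-rule update -- the kind of base-rate arithmetic that
# human intuition handles poorly but a COP tool can apply consistently.
# All probabilities here are hypothetical.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(hypothesis | evidence) via Bayes' rule."""
    num = p_evidence_given_h * prior
    den = num + p_evidence_given_not_h * (1.0 - prior)
    return num / den

# Hypothesis: the contact is hostile. The base rate is low (5 percent);
# the sensor cue is fairly diagnostic (80 percent true positive,
# 10 percent false positive) but not conclusive.
p = posterior(prior=0.05, p_evidence_given_h=0.8, p_evidence_given_not_h=0.1)
print(round(p, 3))  # about 0.296 -- far below the intuitive "80 percent"
```

An analyst anchoring on the 80 percent cue would overestimate the threat fourfold; a tool that always carries the base rate through the calculation does not.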
A challenge for an AI COP is to make relevant information observable to human and machine teammates.18 Battlespace awareness requires “the capability to automatically correlate relevant active and passive information from organic and nonorganic sensors with intelligence at all classifications and compartments for presentation to the commander.”19 A SAG must be able to manage its own COP and not rely completely on regular networked updates from other fleet nodes. AI tools could forecast potential adversary behavior based on algorithmic modeling, organic sensor information, and the last known-good updates from a networked COP. Understanding the environment, and how an adversary’s behavior is shaped by it, through the use of AI could provide an asymmetric advantage to SAG commanders through better situational awareness and faster decision-making cycles.
Most analysis looks backward in time to interpret data. Predictive analytics instead seeks to identify the relationships underlying threat behavior, quite different from the correlation-based forecasting models of today.20 Moreover, the quality of predictions improves given sufficient time to compute the likelihood of all possible outcomes. Time, reliable data, and behavioral modeling are the elements of the process human intelligence professionals use to analyze complex environments; AI conceivably can complete the process faster and with greater accuracy. The ability to predict adversary behavior, potentially with machine-like precision, is a critical enabling tool for a SAG operating at EMCON or in a communications-degraded environment.
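Computing the likelihood of all possible outcomes can be sketched with a simple first-order Markov model: given historical transition frequencies between activity states, propagate a probability distribution forward in time. The states and transition probabilities below are illustrative assumptions, not real data.

```python
# Hedged sketch: a first-order Markov model forecasting the likelihood of
# each possible next activity state from historical transition frequencies.
# States and probabilities are illustrative assumptions, not real data.
TRANSITIONS = {
    "in_port":   {"in_port": 0.6, "local_ops": 0.3, "deployed": 0.1},
    "local_ops": {"in_port": 0.5, "local_ops": 0.3, "deployed": 0.2},
    "deployed":  {"in_port": 0.2, "local_ops": 0.1, "deployed": 0.7},
}

def forecast(state, steps):
    """Probability distribution over states after `steps` transitions from `state`."""
    dist = {s: 1.0 if s == state else 0.0 for s in TRANSITIONS}
    for _ in range(steps):
        nxt = {s: 0.0 for s in TRANSITIONS}
        for s, p in dist.items():
            for t, q in TRANSITIONS[s].items():
                nxt[t] += p * q
        dist = nxt
    return dist

two_step = forecast("in_port", 2)
print(max(two_step, key=two_step.get))  # most likely state after two transitions
```

A model like this produces not a single prediction but a full distribution over outcomes, which is what lets a SAG at EMCON weigh alternative adversary courses of action without waiting for a networked update.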
Challenges and Opportunities
Despite advances in modern technology, causal predictive models of adversary behavior may be too complex for current AI. This challenge spotlights the continued need for human intelligence professionals, whose judgment and expertise, in most cases, still exceed those of computers. To facilitate a smooth integration of new capabilities, intelligence derived from AI or autonomous sources should be treated as just another intelligence discipline. An intelligence officer should treat AI collection and analysis as part of a multidiscipline, all-source fusion process to better understand the threat environment and reduce uncertainty for afloat commanders.
Operators and leaders need time to experiment with new tactics, techniques, and procedures for autonomous technology, as well as to build trust and familiarity with new tools. Recently, the Navy selected an experimental large displacement unmanned underwater vehicle (LDUUV) named Snakehead as one of three rapid acquisition projects. The Maritime Accelerated Capabilities Office hopes the Snakehead LDUUV will have advanced capabilities in antisubmarine and mine warfare, but the first phase of the project “will focus on intelligence preparation of the environment and intelligence, surveillance and reconnaissance mission sets.”21 The first operational mission for this platform should be in support of a SAG. Quickly moving the technology out of development and into operational use will allow experimentation in an environment where success or failure will be apparent.
Any expansion of the battlespace will complicate the targeting and decision-making process for adversary forces but also “put a premium on obtaining and distributing highly accurate and timely information.”22 Investments in offensive lethality capability alone will not guarantee success in the next war; they must be matched by similar investments in AI and autonomous technology.
1. Richard Mosier, “Distributed Lethality and Situational Awareness,” Center for International Maritime Security, 21 February 2017, http://cimsec.org/distributed-lethality-situational-awareness/30949.
2. Navy Project Team, “Alternative Future Fleet Platform Architecture Study,” (Washington, DC: Department of Defense, 2016), 7.
3. Department of the Navy, “Navy Warfare Publication 2-01: Intelligence Support to Naval Operations” (Washington, DC: Office of the Chief of Naval Operations, 2010).
4. Gary Toth, “Information Fusion Challenges” (Washington, DC: Office of Naval Research Science & Technology, 8 July 2014), 3, www.fusion2014.org/sites/default/files/Gary-Toth_Fusion-Challenges-Strategies.pdf.
5. “Cloud Vision API,” Google Cloud Platform, https://cloud.google.com/vision/.
6. “Cloud Vision API.”
7. Paul Kaminski, James Shields, and James Tegnelia, “Study on Technology and Innovation Enablers for Superiority in 2030” (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, 2013), 69.
8. Navy Enlisted Classification 3910—Intelligence Specialist, Naval Imagery Interpreter.
9. Department of the Navy, “Navy Warfare Publication 2-01: Intelligence Support to Naval Operations” (Washington, DC: Office of the Chief of Naval Operations, 2010), 2–9.
10. “Automated Image Understanding,” Office of Naval Research, www.onr.navy.mil/Media-Center/Fact-Sheets/Automated-Image-Understanding.
11. “Automated Image Understanding.”
12. Vice Admiral Thomas Rowden, Rear Admiral Peter Gumataotao, and Rear Admiral Peter Fanta, USN, “Distributed Lethality,” U.S. Naval Institute Proceedings 141, no. 1 (January 2015): 18–23.
13. Michael Anastasio, Christopher Day, Eric Evans, Craig Fields, James Gosler, John Miriam, Anita Jones, et al., “Seven Defense Priorities for the New Administration” (Washington, DC: Defense Science Board, 2016), 49.
14. Paul Kaminski, James Shields, and James Tegnelia, “Study on Technology and Innovation Enablers for Superiority in 2030” (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, 2013), 41.
15. Ruth David and Paul Nielsen, “Report of the Defense Science Board Summer Study on Autonomy” (Washington, DC: Defense Science Board, 2016), 84.
16. Richards J. Heuer Jr., “Psychology of Intelligence Analysis” (Washington, DC: Center for the Study of Intelligence, 1999), xx.
17. Heuer, “Psychology of Intelligence Analysis,” xxi.
18. David and Nielsen, “Report of the Defense Science Board Summer Study on Autonomy,” 15.
19. Richard Mosier, “Tactical Information Warfare and Distributed Lethality,” Center for International Maritime Security.
20. David and Nielsen, “Report of the Defense Science Board Summer Study on Autonomy,” 80.
21. Megan Eckstein, “Navy Accelerating Work on ‘Snakehead’ Large Displacement Unmanned Underwater Vehicle,” USNI News, 4 April 2017, https://news.usni.org/2017/04/04/navy-splits-lduuv-into-rapid-acquisition-program-at-peo-lcs-rd-effort-at-onr.
22. Milan Vego, Joint Operational Warfare: Theory and Practice (Newport, RI: Naval War College Press, 2009), III–24.
Lieutenant Commander Wilson is a naval intelligence officer currently serving at U.S. Central Command Joint Operations Center in Tampa, Florida. He won third prize in the 2018 Emerging and Disruptive Technology Essay Contest (sponsored by Leidos) with this essay.