Sunday, May 6, 2018

Can the U.S. Navy Brave the Waves of Autonomous Warfare?

Olivia Miltner, Ozy
1 May 2018

It was January 1945, and the Nazis knew the end was near. The German ocean liner Wilhelm Gustloff, designed to accommodate 1,900, was packed with more than 10,000 soldiers and refugees when it ventured into the freezing Baltic Sea, part of efforts to evacuate two million Germans out of East Prussia and away from an advancing Soviet army. But a Soviet submarine spotted the ship and fired three torpedoes into it, killing more than 9,000 people, including 5,000 children.
More than seven decades later, the United States Navy is trying to reduce some of the risks of maritime warfare highlighted by the Gustloff’s end, which remains the deadliest maritime disaster in history, at a time when Secretary of Defense James Mattis has signaled a return of America’s security focus to “great power competition.” Traditional ships are expensive to build and carry enough personnel to turn any midsea mishap into a potential financial and human disaster. So the U.S. Navy, which has sought automated solutions to technical and operational challenges for decades, is increasingly turning to autonomous vehicles in the hope that they can improve the efficiency and range of naval capabilities while decreasing their cost.
The Navy’s plans span surface, air and undersea platforms. In early February, the Office of Naval Research took over a prototype of the Sea Hunter, a submarine-hunting surface drone ship, from the Defense Advanced Research Projects Agency. In December 2017, the ONR successfully demonstrated an autonomous helicopter flight as part of its Autonomous Aerial Cargo/Utility System (AACUS) program, developed in collaboration with American technology firm Aurora Flight Sciences. Apart from dozens of disclosed autonomous underwater vehicles already in operation, the Navy established its first underwater drone squadron in September 2017. In December, President Trump signed a bill authorizing almost $8 billion for submarine programs. And defense contractors such as Lockheed Martin and Boeing are developing fully automated submarines called extra-large unmanned underwater vehicles (XLUUVs).
The shift toward autonomous vehicles is sparking ethical questions for the Navy. How much can you trust a machine loaded with other machines to always function properly, and how much power should it have? But autonomous vehicles could prove cheaper to run, and because the lives of sailors wouldn’t be at stake, they could assume a greater level of risk than a manned ship at a time when the U.S. is particularly vulnerable at sea. AUVs will help “improve and expand undersea superiority,” the Navy said in 2016 testimony to Congress. The Navy’s new focus on these technologically advanced weapons systems comes at a time when the Department of Defense has unveiled the Trump administration’s first National Defense Strategy, summarized by Mattis in a January speech in which he said that “great power competition, not terrorism, is now the primary focus of U.S. national security.”
“We’re in a period now where war at sea is dangerous,” says Steven Wills, strategy and policy analyst at CNA, a nonprofit research and analysis organization located in Arlington, Virginia. “Potential adversaries have better weapons than they’ve had in the past, and these weapons have proliferated to more places.”
Countries such as China, Russia, North Korea and Iran have large arsenals of cruise missiles, which are relatively cheap but can cause a significant amount of damage. Other groups, like Yemen’s Houthi rebels, have also been able to acquire them — and tend to use them indiscriminately. Autonomous vehicles, in response to these threats, are more expendable. They can augment a fleet and do search and reconnaissance, says Dan McLeod, Lockheed Martin’s program manager for the Orca, an XLUUV the company is designing for the Navy.
In the air, an unmanned helicopter armed with AACUS sensors and software can take supplies from a base, select the optimal route and the best landing site closest to fighters on the front lines, land, resupply and return to base, all with a finger tap on a hand-held tablet. The Sea Hunter drones are designed to autonomously carry out sea surface patrols lasting up to 70 days at a time, ranging as far as 10,000 nautical miles from base. And the XLUUVs that Boeing and Lockheed Martin are building for the Navy would have an extended range, the ability to deliver a variety of payloads and the capability to operate independently of manned ships. In July, DARPA also contracted BAE Systems to build a small unmanned underwater vehicle that would help detect enemy submarines.
This concerted rush marks a departure from the isolated use of unmanned underwater vehicles in the past. The Navy sent UUVs to search for an Argentine submarine that disappeared in South Atlantic waters in November, and had used them as far back as 2003 to clear an Iraqi port of mines. But many of its AUVs are working on sea-sensing and mine-countermeasure tasks “with human-in-the-loop supervision,” the Navy said in the 2016 report to Congress. By 2025, it expects AUVs to support undersea warfare by going into denied waters that are too deep or too shallow for manned platforms, and the military, some experts anticipate, will lead the development of these technologies rather than the commercial sector. AUVs that can comprehend “purpose,” execute missions and make decisions are already on the way; they will present their own ethical dilemmas, apart from questions of trust and responsibility.
The extent to which warfare functions will become automated is a moral issue for much of the military, says naval historian and strategist Norman Friedman. “If you actually kill somebody, in theory, anyway, you’d prefer to have someone responsible for doing it,” Friedman says, adding that this is already becoming increasingly difficult to do. Putting human lives at the mercy of a machine also requires trusting that the system will do what it’s supposed to do, while balancing that trust against the level of risk one is willing to accept. To McLeod, the question is, “Trust under what risk profile?”
Still, the debate over the specifics of what autonomy will, and should, look like isn’t challenging the fundamental argument for such technology: that it could help the U.S. maintain its dominance at sea. The conundrum? The technology could end up posing as many questions as it answers.
