
What humans can create seems bound only by the limits of human imagination. Advances in computing and connectivity have thrust us into a new world of increasingly sophisticated robots and other "autonomous systems." Such progress, while exciting, can provoke anxiety and pose ethical challenges. Erin Hahn, a senior national security analyst at The Johns Hopkins University Applied Physics Laboratory, described the challenges she has identified in her research with the U.S. Defense Department. Rather than explore the vast possibilities of what can be built, she focused on determining what should be built.

Autonomous systems can perceive complex environments and react without human intervention. Hahn began her session with videos of robots created by Boston Dynamics showing a robotic dog, "Spot," and a humanoid, "Atlas," performing everyday tasks. The images underscored the range of emotions prompted by autonomous systems. While initially we may find the robots' capabilities exciting—even feeling sympathy when they are pushed and prodded by humans—eventually we may feel alarm when considering the long-term risks posed by such lifelike machines. This same range of emotions has found expression in popular culture. On the one hand, there are C-3PO and R2-D2, the friendly autonomous robots who help Luke Skywalker save the galaxy in "Star Wars." On the other hand, there are the lethal autonomous systems in "The Terminator" that threaten to destroy the world.

Given this broad spectrum of associations, Hahn emphasized the imperative of aligning government policy with public attitudes. We must create systems responsibly so that we can trust the technology we develop as we interact with it and deploy it on a large scale. Hahn highlighted four elements of trust to consider when mobilizing autonomous systems for military purposes:

We must determine how the system handles intent. When a system operates without human direction, we must ensure that it can identify an enemy's unarmed surrender. Such a safeguard would help us meet our commitments under international law.

We must determine who is accountable if something goes wrong. Who is ultimately responsible for the actions of an autonomous system? Can autonomous systems fully understand commander intent? Are developers at fault if there is a failure?

We must ensure that autonomy will not desensitize us to violence. Will it be easier to hit the "kill button" with these technologies than to pull the trigger on a pistol? Will our behavior and ethics change as we mobilize machines on military missions?

We must ensure that autonomous machines will not lead us to dystopia. Some experts in technology, including Bill Gates and Elon Musk, believe that if not properly controlled, sophisticated autonomous systems and artificial intelligence could spell the end of the human race. Are we thinking through the potential consequences of what we are building?

Hahn emphasized the importance of setting standards to ensure the systems we create are worthy of our trust over the long term. With a steady pace of innovation, generations will be born that know only a world with autonomous technology. Their dependence may help speed creation of more adept autonomous systems. Eventually, humans may deploy systems that could operate beyond our control. If we were to create machinery that crosses that threshold, would we be able to rein it in? The question underscores the essential distinction at the core of Hahn's research—not what can we build, but what should we build.


