
Defining the Future

By: Dave Johnson

Soon after the terrorist attacks of September 11, 2001, Congress passed the Authorization for Use of Military Force (AUMF). In Section 2(a), it authorized the President to use “all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided the terrorist attacks…” This authorization is unique because, as a Congressional Research Service memorandum has highlighted, the AUMF allowed the President to target non-state actors and did not specify which nations or organizations could be targeted. The AUMF has been the primary legal justification that presidents since George W. Bush have invoked for targeted killings, specifically those carried out by unmanned aerial vehicles, better known as drones.

From the strategic to the tactical level, using drones to conduct targeted killings offers advantages over raids such as the 2011 operation in Abbottabad: American lives are kept distant from dangerous situations, drones are cheaper to produce than manned aircraft, they have enhanced surveillance capabilities, and they carry lower risk in cross-border strikes. But drone strikes have also become controversial, particularly given their increased use since 2008 in countries including, but not limited to, Afghanistan, Pakistan, Yemen, and Somalia. Drone strikes implicate many legal challenges: sovereignty, the imminence requirement for self-defense, human rights violations, and more.

In a Journal of Ethics and International Affairs article, Rosa Brooks states in her conclusion that “U.S. drone strikes thus present not an issue of law-breaking, but of law’s brokenness…After all, though these strikes (or, more accurately, the legal theories that underlie them) challenge the international rule of law, they also represent an effort to respond to gaps and failures in the international system.” If that was the case in 2013, when the article was written, it is certainly the case that these gaps and failures are widening as technology continues to advance.

Drone operators in the military and in various U.S. agencies fly these aircraft today. It may seem an obvious fact, but it is important to realize that, for now, a human operates the unmanned aerial vehicle, just not from inside it. One day that will change. More than likely, the United States or another country will develop the technology for an artificial intelligence (AI) to operate a strike drone independently. We have yet to agree on the appropriateness of drone strikes within a national and international legal framework, but in the near future there will be AIs making life-and-death decisions about potential targets.

The first key to understanding the problems arising from truly unmanned aerial vehicles is understanding what an artificial intelligence is. In a white paper titled Exploring Legal, Ethical, and Policy Implications of Artificial Intelligence, the authors state that AI can be understood as a “science and a set of computational technologies that are inspired by…the ways people use their nervous systems to sense, learn, reason, and take action.” Current forms of AI, such as Siri, are defined as “narrow” AI in that they are created to solve specific problems within limited parameters. AI could one day reach the equivalent of human-level awareness, but that has not happened yet and may be some time away.

The second key to understanding the problems arising from AIs conducting drone strikes is how we define liability, civil or criminal. Do we legally define an AI as a person, like a corporation, or as its own category of entity? Or do we leave the AI undefined and instead hold the programmer who wrote the software liable? How do we define mens rea in an AI, and how do we prove it? I cannot answer these questions, because there is a significant gap in the legal community surrounding this topic. Perhaps in the past the legal community could put off asking these questions because we did not expect technology to become as advanced as it is, but it is here now and will only grow more advanced.

Today, various forms of AI are performing tasks with serious legal implications, e.g., the Uber self-driving car that struck and killed a pedestrian. But there is a difference between an AI that has been programmed to be safe yet accidentally kills a person, and an AI that is meant to conduct war by targeting and killing humans. This technology will not be limited to drones operating in the sky; it will extend to tanks and humanoid robots on the ground, and to ships and submarines in the ocean. There must be incentives and repercussions in place so that no AI commits war crimes, or disregards collateral damage, on the excuse that it was “just following” its programmed orders.
