Conclusion
If a military robot refuses an order, e.g., because it has better situational awareness, then who would be responsible for its subsequent actions? How strictly should we apply the generally accepted "eyes on target" requirement, i.e., under what circumstances might we allow robots to make attack decisions on their own? What precautions ought to be taken to prevent robots from running amok or turning against our own side, whether through malfunction, programming error, or capture and hacking? To the extent that military robots can help reduce instances of war crimes, what harm may arise if the robots also unintentionally erode squad cohesion, given their role as an outside observer? Should robots be programmed to defend themselves (contrary to Arkin's position), given that they represent costly assets? Would using robots be counterproductive to winning the hearts and minds of occupied populations, or result in more desperate terrorist tactics given an increasing asymmetry in warfare?
1. Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal, at least for the foreseeable future and in contrast to the more demanding goal of a perfectly ethical robot. However, there are still daunting challenges in meeting even this relatively low standard, such as the key difficulty of programming a robot to reliably distinguish enemy combatants from non-combatants, as required by the Laws of War and most Rules of Engagement.
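To make the discrimination difficulty concrete, consider the following minimal sketch (ours, not a design from this report) of the conservative decision logic such a requirement implies; the classifier, labels, and confidence threshold are all hypothetical placeholders:

def classify_contact(features: dict) -> tuple[str, float]:
    """Stand-in for a perception model; returns (label, confidence)."""
    # A real system would perform sensor fusion and classification here;
    # this placeholder defaults to the most conservative possible answer.
    return ("non-combatant", 0.0)

def may_engage(features: dict, min_confidence: float = 0.99) -> bool:
    """Permit engagement only on a high-confidence 'combatant' classification;
    any uncertainty defaults to withholding force."""
    label, confidence = classify_contact(features)
    return label == "combatant" and confidence >= min_confidence

The point of the sketch is the default: when identification is uncertain, the only lawful behaviour is to withhold force, and it is precisely this guarantee that is hard to deliver in unstructured combat environments.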
2. While a faster introduction of robots into military affairs may save more lives of human soldiers and reduce the war crimes committed, we must be careful not to unduly rush the process. Unlike rushing technology products to commercial markets, design and programming bugs in military robotics would likely have serious, fatal consequences. Therefore, a rigorous testing phase for robots is critical, as is thorough study of related policy issues, e.g., how the US Federal Aviation Administration (FAA) handles UAVs flying in our domestic National Airspace System (which we have not addressed here).
3. Understandably, much ongoing work in military robotics is likely shrouded in secrecy, but a balance between national security and public disclosure needs to be maintained in order to accurately anticipate and address issues of risk and other societal concerns. For instance, there is little information on US military plans to deploy robots in space, yet this seems to be a highly strategic area in which robots could lend tremendous value; however, important environmental and political sensitivities would surround such a program.
4. Serious conceptual challenges exist with the two primary programming approaches today: top-down (e.g., rule-following) and bottom-up (e.g., machine learning). Thus, a hybrid approach should be considered in creating a behavioural framework. To this end, we need a clear understanding of what a warrior code of ethics might entail, if we take a virtue-ethics approach to programming.
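As an illustration of what such a hybrid might look like in outline (our sketch; all names and rules below are hypothetical), a learned bottom-up policy could propose actions while an explicit top-down rule layer vetoes any proposal that violates a hard constraint:

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    lethal: bool
    target_is_combatant: Optional[bool]  # None = identity uncertain

# Top-down layer: explicit, auditable rules (e.g., distilled from the Laws
# of War or Rules of Engagement). A rule returns True if it permits the action.
RULES: List[Callable[[Action], bool]] = [
    # No lethal force without a positive combatant identification.
    lambda a: not a.lethal or a.target_is_combatant is True,
]

def permitted(action: Action) -> bool:
    """Top-down check: every hard rule must allow the action."""
    return all(rule(action) for rule in RULES)

def choose_action(proposals: List[Action]) -> Action:
    """Hybrid decision: take the bottom-up policy's highest-ranked proposal
    that survives the rule layer, else fall back to a safe default."""
    for action in proposals:  # assumed pre-sorted by the learned policy's score
        if permitted(action):
            return action
    return Action("hold_and_report", lethal=False, target_is_combatant=None)

The division of labour mirrors the conceptual argument: the bottom-up component supplies flexible behaviour, while the top-down component keeps that behaviour inside an explicit, inspectable ethical envelope.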
5. In the meantime, as we wait for technology to sufficiently advance in order to create a workable behavioural framework, it may be an acceptable proxy to program robots to comply with the Laws of War and appropriate Rules of Engagement. However, this too is much easier said than done, and at least the technical challenge of proper discrimination would persist and require resolution.
6. Given technical limitations, such as programming a robot with the ability to sufficiently discriminate between valid and invalid targets, we expect that accidents will continue to occur, which raises the question of legal responsibility. More work needs to be done to clarify the chain of responsibility in both military and civilian contexts. Product liability laws are informative but untested as they relate to robotics with any significant degree of autonomy.
7. Assessing technological risks, whether through the basic framework we offer in section 6 or some other framework, depends on identifying potential issues in risk and ethics. These issues range from foundational questions of whether autonomous robotics can be legally and morally deployed in the first place, to theoretical questions about adopting precautionary approaches, to forward-looking questions about giving rights to truly autonomous robots. These discussions need to be more fully developed and expanded.
8. Specifically, the challenge of creating a robot that can properly discriminate among targets is one of the most urgent, particularly if one believes that the (increased) deployment of war robots is inevitable. While this is a technical challenge whose resolution depends on advances in programming and AI, there are some workaround policy solutions that can be anticipated and further explored, such as: limiting the deployment of lethal robots to inside a kill box only; designing a robot to target only other machines or weapons; not giving robots a self-defence mechanism, so that they may act more conservatively; or creating robots with only non-lethal or less-than-lethal strike capabilities, at least initially until they are proven to be reliable.
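These workarounds are policy choices, but each one reduces to a checkable condition. The sketch below (again ours; the fields and their ordering are hypothetical) shows how they might be layered so that any single failed condition vetoes an engagement:

from dataclasses import dataclass

@dataclass
class Engagement:
    in_kill_box: bool   # inside a geofenced area pre-cleared for strikes
    target_type: str    # e.g., "machine", "weapon", "person", "unknown"
    lethal: bool

def engagement_allowed(e: Engagement, non_lethal_only: bool = True) -> bool:
    """Apply the conservative deployment constraints; any failure vetoes."""
    if e.lethal and not e.in_kill_box:
        return False    # lethal strikes permitted only inside a kill box
    if e.target_type not in ("machine", "weapon"):
        return False    # target only other machines or weapons, not people
    if non_lethal_only and e.lethal:
        return False    # non-lethal capabilities only, until proven reliable
    return True

Expressing the constraints this way also makes the underlying policy trade-off visible: each condition that is relaxed (e.g., setting non_lethal_only to False once a system is proven reliable) is an explicit, reviewable decision rather than an emergent property of the software.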