My Research

Areas of Specialization: Ethics and Applied Ethics, especially Military Ethics, AI Ethics, and Bioethics

“The Moral Case for the Development of Autonomous Weapon Systems” in The Journal of Military Ethics (link)

  • A short post based on the above article can be found on the Blog of the American Philosophical Association (link)

Work in Progress:

  1. “A Technological Systems Approach to AI Ethics: Reframing Some Important Debates”

  2. “The Moral Case Revisited: Why Lethal Autonomy is a Good Idea”

  3. “Consciousness and Considerability: Why AI Ethics is Asking the Wrong Questions”

My Papers

My Dissertation

“Systems and Machines: The Moral Import of Autonomous AI Technologies”

My dissertation comprises four related papers investigating the ethics of autonomous AI systems (AAISs). Chapter 1 develops a systems-based account of AI technology that conceives of AAISs as extended socio-technical systems that include human beings and machines as parts. I argue that the systems framework does a better job of capturing the metaphysics of AI technology than the dominant machine-based conception. Furthermore, the systems conception opens the door to solutions to classic problems in AI ethics—such as the responsibility gap problem—whereas machine-based conceptions often lead to fatalism. Chapter 2 applies my systems framework to autonomous weapon systems (AWS). I argue that there are strong moral reasons in favor of AWS technology that have been underemphasized in the literature. I then respond to a number of objections to AWS, showing how they stem from an inaccurate (machine-based) portrayal of the technology as “killer robots,” a portrayal that obscures the important roles that humans play in the design, operation, and testing of AWS.

Chapters 3 and 4 investigate ethics for AI: they explore whether AAISs, now or in the future, might themselves deserve moral consideration. Chapter 3 argues that current and foreseeable AAISs, including humanoid machines, are unlikely to have phenomenally conscious mental states, and further, that having such states is the best reason for thinking an entity is morally considerable. Chapter 4 assumes that intelligent machines will not be phenomenally conscious and investigates whether there might be other grounds on which to grant them moral status or moral rights. I examine a variety of reasons recently offered by philosophers and technologists and find them all wanting. The overarching conclusion of chapters 3 and 4, then, is that AAISs are unlikely, either now or in the future, to be moral patients or to deserve moral rights, suggesting that philosophers and technologists would do better to spend their time doing ethics of AI as opposed to ethics for AI.

My Work

My current research examines the ethics of AI-based military technologies, especially the moral debate over autonomous weapon systems. A survey of the ethical literature turns up a host of reasons not to deploy such systems, and international campaigns (e.g., the Campaign to Stop Killer Robots), NGOs, and governments are calling for a ban. I disagree with this trend. Banning autonomous weapons now would mean forgoing large moral benefits in the future: such weapons have the potential to massively reduce psychological, moral, and lethal risk on both sides of future conflicts.

I have been developing a systems-based account of AI technology that reflects the nature of existing and foreseeable AI research products far better than the dominant, machine-focused perspective. The tendency to start theorizing by looking at machines and their intrinsic features—for example, the tendency to talk of autonomous machines as “self-sufficient, self-reliant, and independent”—stems from a misunderstanding of both autonomy (at least as the notion applies to AI agents) and the human-machine relationship. By focusing on the complex causal relations that obtain between the machine and human parts of technological systems, the systems account also dispels some common moral objections to autonomous AI technologies, such as the idea that they open up an unacceptable “gap” where no one can be held morally responsible for a system’s behavior. By keeping the connections between humans and machines in focus, my account opens the door to novel ideas about how to maintain control of, and moral responsibility for, ever more intelligent systems.