
My Research
Areas of Specialization: Ethics and Applied Ethics, especially Military Ethics, AI Ethics, and Bioethics
2025 “A Sociotechnological System Approach to AI Ethics” (forthcoming) in AI & Society
2025 “Aligning with Ideal Values: A Proposal for Anchoring AI in Moral Expertise” (w/ Mark Boespflug) in AI & Ethics
2022 “The Moral Case for the Development of Autonomous Weapon Systems” in The Journal of Military Ethics
A short post based on the above article can be found on the Blog of the American Philosophical Association (link)
Op-eds:
2025 “The Moral Argument to Opt Out of Facebook, Instagram, and X” in The Austin Chronicle
Under Review (draft available upon request)
“The Morality of Autonomous Weapons: An Answer to the Threshold Objection”
Work in Progress
“The Case for Moral Expertise” (w/ Mark Boespflug)
“Expert Voting: A Better Metanormative Approach to Practical Ethics”
“The Social Model: How Voting Among Moral Experts Informs What We Ought to Do” (w/ Mark Boespflug)
My Papers
My Dissertation
“Systems and Machines: The Moral Import of Autonomous AI Technologies”
My dissertation comprises four related papers investigating the ethics of autonomous AI systems (AAISs). Chapter 1 develops a systems-based account of AI technology that conceives of AAISs as extended socio-technical systems that include human beings and machines as parts. I argue that the systems framework does a better job than the dominant machine-based conception at capturing the metaphysics of AI technology. Furthermore, the systems conception opens the door to solutions to classic problems in AI ethics—such as the responsibility gap problem—whereas machine-based conceptions often lead to fatalism. Chapter 2 applies my systems framework to autonomous weapon systems (AWS). I argue that there are actually strong moral reasons in favor of AWS technology that have been underemphasized in the literature. I then respond to a number of objections to AWS, showing how they stem from an inaccurate (machine-based) portrayal of the technology as “killer robots,” one that obscures the important roles that humans play in the design, operation, and testing of AWS. Chapters 3 and 4 investigate ethics for AI, meaning they explore whether AAISs now or in the future might themselves deserve moral consideration. Chapter 3 argues that current and foreseeable AAISs, including humanoid machines, are unlikely to have phenomenally conscious mental states, and further, that having such states is the best reason for thinking an entity is morally considerable. Chapter 4 assumes that intelligent machines will not be phenomenally conscious and investigates whether there might be other grounds on which to grant them moral status or moral rights. I examine a variety of reasons recently offered by philosophers and technologists and find them all wanting. The overarching conclusion of Chapters 3 and 4, then, is that AAISs are unlikely, now or in the future, to be moral patients or to deserve moral rights, suggesting that philosophers and technologists would do better to spend their time doing ethics of AI as opposed to ethics for AI.
My Work
My research focuses on the ethics of emerging technologies, particularly AI technologies. My paper “A Sociotechnological System Approach to AI Ethics” (forthcoming in AI & Society) conceptualizes AI research products as distributed sociotechnological systems with human and artifactual components. The tendency to start theorizing with machines and their intrinsic features—for example, talking of autonomous systems as “self-sufficient, self-reliant, and independent”—stems from a misunderstanding of both autonomy (at least as the notion applies to AI agents) and the human-machine relationship. The sociotechnological systems account, by focusing on the relationships that obtain between the machine and human elements of AI systems, dispels common moral objections to autonomous AI technologies, such as the idea that they open up an unacceptable “gap” where no one can be held morally responsible for their behavior. By keeping the connections between humans and machines in focus, this perspective opens the door to novel ideas about how to maintain the safety of, and human control over, increasingly intelligent AI systems.
My research also examines the ethics of AI-based military technologies, especially the moral debate over autonomous weapons. A survey of the ethical literature reveals a host of reasons not to deploy such systems, and international campaigns (e.g., The Campaign to Stop Killer Robots), NGOs, and governments are calling for a ban. I disagree with these trends (Riesen, 2022). If we ban autonomous weapons now, we forgo large moral benefits down the road, since such systems have the potential to massively reduce psychological, moral, and lethal risk on both sides of future conflicts. There is, therefore, an extremely strong positive moral case for continued development, even when weighed against the objections leveled against such systems by ethicists.
I’m also engaged in projects related to the existence of moral expertise. According to recent surveys, roughly two out of three ethicists deny their own expertise. At the same time, preliminary evidence suggests that four in five ethicists are cognitivists (i.e., believe there are moral facts). If there are moral facts, and if humans can come to know them (or at least hold epistemically warranted beliefs about them), then why wouldn’t making their study one’s life occupation make one’s moral beliefs more likely to be true? The epistemic conditions constitutive of expertise generally apply no less to morality than to science or the law. And the existence of moral expertise has important ramifications for technology ethics, particularly the AI safety research program known as value alignment. We want AI systems to do what is best, or at least what is permissible, when operating in morally complex domains. Mark Boespflug and I argue that AI behavior ought to be aligned with ideal values, at least for AI agents confronting difficult moral choices. Ideal values are the moral preferences, beliefs, virtues, judgments, intuitions, etc. that humans ought to have. In our paper “Aligning with Ideal Values: A Proposal for Anchoring AI in Moral Expertise” (2025, AI & Ethics), we argue that the best way to approximate the content of ideal values for the purposes of alignment is to study the aggregated judgments of moral experts.