Thomas Powers

Associate Professor, Philosophy | University of Delaware

Newark, DE, UNITED STATES

Prof. Powers specializes in the ethics of information technology (including AI), ethics in science and engineering, and research ethics.

Media

Videos:

What's in an engineer's mind?: Dr. Tom Powers at TEDxUD

Biography

Thomas Powers, an associate professor of philosophy, specializes in scientific ethics. Powers directs UD's Center for Science, Ethics and Public Policy.

Industry Expertise (2)

Education/Learning

Biotechnology

Areas of Expertise (6)

Scientific Ethics

Biomedical Ethics

Environmental Ethics

Research Ethics

Computer Ethics

Immanuel Kant

Media Appearances (2)

Climate Panel Details Its Review Plan

The Wall Street Journal, online

2010-03-11

The InterAcademy Council, a body representing scientific academies around the world, is to conduct a wide-ranging review of the procedures and management of the U.N.'s Intergovernmental Panel on Climate Change. The review, to be done by August, comes in response to revelations of questionable behavior and factual errors by some scientists who contributed to the IPCC's 2007 report, which won a Nobel Peace Prize.

Interdisciplinary team to address global issues in STEM research

University of Delaware, online

2014-10-15

“As research, practice, and education in science and engineering become increasingly global, we must expand our efforts to address their ethical dimensions,” says Powers. “Two important factors are the cultural and linguistic diversity among engineering and science researchers. Through this collaboration, the UD teams will help the NAE to undertake a worldwide project in ethics to address emerging global issues.”

Articles (8)

Toward a rational and ethical sociotechnical system of autonomous vehicles: A novel application of multi-criteria decision analysis

PLoS ONE

2021 The impacts of autonomous vehicles (AVs) are widely anticipated to be socially, economically, and ethically significant. A reliable assessment of the harms and benefits of their large-scale deployment requires a multi-disciplinary approach. To that end, we employed Multi-Criteria Decision Analysis to make such an assessment. We obtained opinions from 19 disciplinary experts to assess the significance of 13 potential harms and eight potential benefits that might arise under four deployment schemes. Specifically, we considered: (1) the status quo, i.e., no AVs are deployed; (2) unfettered assimilation, i.e., no regulatory control would be exercised and commercial entities would “push” the development and deployment; (3) regulated introduction, i.e., regulatory control would be applied and either private individuals or commercial fleet operators could own the AVs; and (4) fleets only, i.e., regulatory control would be applied and only commercial fleet operators could own the AVs. Our results suggest that the two regulated scenarios, (3) and (4), i.e., regulated introduction of privately owned AVs or fleet-only ownership, would be less likely to cause harm than either the status quo or the unfettered option.
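
As a rough illustration of the weighted-sum aggregation commonly used in MCDA (the criteria, weights, and scores below are hypothetical placeholders, not the study's elicited expert judgments), consider:

```python
# Generic weighted-sum MCDA sketch. All numbers are illustrative
# placeholders, NOT the elicited expert judgments from the study.

# Harms carry negative weights, benefits positive weights.
weights = {"crash harm": -0.3, "congestion": -0.2,
           "mobility access": 0.3, "emissions reduction": 0.2}

# Hypothetical 0-1 performance scores per deployment scheme.
schemes = {
    "status quo":        {"crash harm": 0.8, "congestion": 0.7,
                          "mobility access": 0.3, "emissions reduction": 0.1},
    "unfettered":        {"crash harm": 0.6, "congestion": 0.6,
                          "mobility access": 0.6, "emissions reduction": 0.4},
    "regulated private": {"crash harm": 0.3, "congestion": 0.4,
                          "mobility access": 0.7, "emissions reduction": 0.6},
    "fleets only":       {"crash harm": 0.2, "congestion": 0.3,
                          "mobility access": 0.8, "emissions reduction": 0.7},
}

def mcda_score(scores):
    """Aggregate weighted criterion scores into a single value."""
    return sum(weights[c] * v for c, v in scores.items())

# With these placeholder numbers, the regulated schemes rank highest.
for name in sorted(schemes, key=lambda s: mcda_score(schemes[s]), reverse=True):
    print(f"{name}: {mcda_score(schemes[name]):+.2f}")
```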

Modelling Ethical Algorithms in Autonomous Vehicles Using Crash Data

IEEE Transactions on Intelligent Transportation Systems

2021 In this paper we provide a proof of principle of a new method for addressing the ethics of autonomous vehicles (AVs), the Data-Theories Method, in which vehicle crash data is combined with philosophical ethical theory to provide a guide to action for AV algorithm design. We use this method to model three scenarios in which an AV is exposed to risk on the road, and determine possible actions for the AV. We then examine how different philosophical perspectives on agent partiality, or the degree to which one can act in one’s own self-interest, might address each scenario. This method shows why modelling the ethics of AVs using data is essential. First, AVs may sometimes have options that human drivers do not, and designing AVs to mimic the most ethical human driver would not ensure that they do the right thing. Second, while ethical theories can often disagree about what should be done, disagreement can be reduced and compromises found with a more complete understanding of the AV’s choices and their consequences. Finally, framing problems around thought experiments may elicit preferences that diverge from what individuals would prefer once they are provided with information about the real risks of a scenario. Our method provides a principled and empirical approach to addressing these problems productively and offers guidance on AV algorithm design.
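
A toy example of how the agent-partiality parameter can flip the recommended action (the harm values below are invented for illustration, not the paper's crash data):

```python
# Toy model: choose the AV action minimizing expected harm, weighted by a
# partiality parameter. Harm values are invented, not real crash data.

# Probability-weighted harm to the AV's occupants vs. to others, per action.
options = {
    "brake in lane": {"occupant": 0.4, "others": 0.1},
    "swerve left":   {"occupant": 0.1, "others": 0.5},
    "swerve right":  {"occupant": 0.2, "others": 0.2},
}

def expected_harm(harms, partiality):
    """partiality=0.5 is impartial; values toward 1 favor the occupants."""
    return partiality * harms["occupant"] + (1 - partiality) * harms["others"]

for p in (0.5, 0.8):  # impartial vs. occupant-partial perspective
    best = min(options, key=lambda a: expected_harm(options[a], p))
    print(f"partiality={p}: choose '{best}'")
# The impartial setting picks "swerve right"; the partial one "swerve left".
```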

Taxonomy of Trust-Relevant Failures and Mitigation Strategies

Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction

2020 We develop a taxonomy that categorizes human-robot interaction (HRI) failure types and their impact on trust, in order to structure the broad range of knowledge contributions. We further identify research gaps to support fellow researchers in the development of trustworthy robots. Trust repair in HRI has only recently begun to receive sustained attention, and we propose a taxonomy of potential trust violations and suitable repair strategies to support researchers during the development of interaction scenarios. The taxonomy distinguishes four failure types (Design, System, Expectation, and User failures) and outlines potential mitigation strategies. Based on these failures, strategies for autonomous failure detection and repair are presented, employing explanation, verification, and validation techniques. Finally, a research agenda for HRI is outlined, discussing identified gaps concerning the relationship between failures and human-robot trust.
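
The four failure types named in the abstract could be encoded along these lines (the mitigation mapping is an illustrative guess, not the paper's actual taxonomy table):

```python
from enum import Enum

# The four failure types named in the abstract; the mitigation mapping
# below is an illustrative guess, not the paper's actual taxonomy.
class FailureType(Enum):
    DESIGN = "design"
    SYSTEM = "system"
    EXPECTATION = "expectation"
    USER = "user"

MITIGATIONS = {
    FailureType.DESIGN:      "verification and validation before deployment",
    FailureType.SYSTEM:      "runtime fault detection and graceful degradation",
    FailureType.EXPECTATION: "explanations that calibrate user expectations",
    FailureType.USER:        "interaction redesign and user guidance",
}

print(MITIGATIONS[FailureType.EXPECTATION])
```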

The Oxford Handbook of Ethics of AI

Oxford Handbooks

2020 This 44-chapter volume tackles a quickly evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context, while at the same time breaking new ground by taking on novel subjects and pursuing fresh approaches. The term "AI" is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and are capable of tasks that require learning and "intelligence," presents difficult ethical questions, and has drawn concerns from many quarters about individual and societal welfare, democratic decision-making, moral agency, and the prevention of harm. The work ranges from explorations of normative constraints on specific applications of machine learning algorithms today (in everyday medical practice, for instance) to reflections on the (potential) status of AI as a form of consciousness with attendant rights and duties and, more generally still, on the conceptual terms and frameworks necessary to understand tasks requiring intelligence, whether "human" or "AI."

On the Autonomy and Threat of "Killer Robots"

APA Newsletters

2018 In the past, renowned scientists such as Albert Einstein and Bertrand Russell publicly engaged, with courage and determination, the existential threat of nuclear weapons. In more recent times, scientists, industrialists, and business leaders have called on states to institute a ban on what are, in the popular imagination, "killer robots." In technical terms, they are objecting to LAWS (Lethal Autonomous Weapons Systems), and their posture seems similar to that of their earlier, courageous counterparts. During the 2015 International Joint Conference on Artificial Intelligence (IJCAI), the premier international conference on artificial intelligence, some researchers in the field of AI announced an open letter warning of a new AI arms race and proposing a ban on offensive lethal autonomous systems. To date, this letter has been signed by more than 3,700 researchers and by more than 20,000 others, including (of note) Elon Musk, Noam Chomsky, Steve Wozniak, and Stephen Hawking.

Introduction: Intersecting Traditions in the Philosophy of Computing

Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics

2017 This volume consists of selected papers from the 2015 joint international conference—the first-ever joint meeting of the Computer Ethics-Philosophical Enquiry (CEPE) conference series of the International Society for Ethics and Information Technology and the International Association for Computing and Philosophy (IACAP)—held at the University of Delaware, June 22–25, 2015. The organizing themes of the conference are well represented in the volume. They include theoretical topics at the intersection of computing and philosophy, with essays that explore current issues in epistemology, philosophy of mind, logic, and philosophy of science, as well as normative topics on matters of ethical, social, economic, and political import. All of the essays view their subject matter through the lens of computation.

Prospects for a Kantian Machine

Intelligent Systems, IEEE

2006 Rule-based ethical theories like Kant's appear to be promising for machine ethics because of the computational structure of their judgments. Kant's categorical imperative is a procedure for mapping action plans (maxims) onto traditional deontic categories (forbidden, permissible, obligatory) by a simple consistency test on the maxim. This test alone, however, would be trivial. We might enhance it by adding a declarative set of "buttressing" rules. The ethical judgment is then an outcome of the consistency test, in light of the supplied rules. While this kind of test can generate nontrivial results, it might do no more than reflect the prejudices of the builder of the declarative set; the machine will "reason" straightforwardly, but not intelligently. A more promising (though speculative) option would be to build a machine with the power of nonmonotonic inference. But this option too faces formal challenges. The author discusses these challenges to a rule-based machine ethics, starting from a Kantian framework. This article is part of a special issue on Machine Ethics.
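
One way to picture the buttressed consistency test is the following toy sketch; the propositional encoding and the universalization step are simplified illustrations, not the article's formalism:

```python
# Toy sketch of a rule-buttressed consistency test over maxims; the
# encoding here is an illustrative simplification, not the article's.

FORBIDDEN, PERMISSIBLE, OBLIGATORY = "forbidden", "permissible", "obligatory"

def closure(facts, rules):
    """Expand a set of commitments under declarative 'buttressing' rules,
    each rule being a (premises, conclusion) pair."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def consistent(facts):
    """Inconsistent if the set contains both p and 'not p'."""
    return not any("not " + p in facts for p in facts)

def judge(maxim, rules):
    """Map a maxim (a set of commitments) onto a deontic category via
    consistency testing in light of the supplied rules."""
    if not consistent(closure(maxim, rules)):
        return FORBIDDEN      # the universalized maxim defeats itself
    if not consistent(closure({"not " + p for p in maxim}, rules)):
        return OBLIGATORY     # rejecting the maxim defeats itself
    return PERMISSIBLE

# Example: universalized promise-breaking undermines the trust it relies on.
rules = [({"break promises when convenient"}, "not promises are trusted"),
         ({"break promises when convenient"}, "promises are trusted")]
print(judge({"break promises when convenient"}, rules))  # forbidden
```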

Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics

Philosophical Studies Series

2017 This book features papers from CEPE-IACAP 2015, a joint international conference focused on the philosophy of computing. Inside, readers will discover essays that explore current issues in epistemology, philosophy of mind, logic, and philosophy of science through the lens of computation. Coverage also examines applied issues of ethical, social, and political interest. The contributors first explore how computation has changed philosophical inquiry. Computers are now capable of joining humans in exploring foundational issues. Thus, we can ponder machine-generated explanation, thought, agency, and other fascinating concepts. The papers are also concerned with normative aspects of the computer and information technology revolution. They offer technology-specific analyses of key challenges, from Big Data to autonomous robots to expert systems for infrastructure control and financial services.

Education (2)

University of Texas, Austin: PhD, Philosophy 1995

College of William and Mary, Williamsburg: BA, Philosophy 1987

Languages (2)

  • English
  • German

Event Appearances (2)

“Can AIs Behave Themselves? Towards a Genuine Machine Ethics”

(2022) European and North American Workshop on the Ethics of Artificial Intelligence, Rome, Italy

“Corporate Robotic Responsibility”

(2022) International Association for Computing and Philosophy (IACAP), Santa Clara, CA