Thomas Powers

Associate Professor of Philosophy, University of Delaware

  • Newark, DE

Prof. Powers specializes in ethics in information technology (including AI), ethics in science and engineering, and research ethics.

Biography

Thomas Powers, an associate professor of philosophy, specializes in scientific ethics. He directs UD's Center for Science, Ethics and Public Policy.

Industry Expertise

Education/Learning
Biotechnology

Areas of Expertise

Scientific Ethics
Biomedical Ethics
Environmental Ethics
Research Ethics
Computer Ethics
Immanuel Kant

Media Appearances

Climate Panel Details Its Review Plan

The Wall Street Journal (online)

2010-03-11

The InterAcademy Council, a body representing scientific academies around the world, is to conduct a wide-ranging review of the procedures and management of the U.N.'s Intergovernmental Panel on Climate Change. The review, to be done by August, comes in response to revelations of questionable behavior and factual errors by some scientists who contributed to the IPCC's 2007 report, which won a Nobel Peace Prize.


Interdisciplinary team to address global issues in STEM research

University of Delaware (online)

2014-10-15

“As research, practice, and education in science and engineering become increasingly global, we must expand our efforts to address their ethical dimensions,” says Powers. “Two important factors are the cultural and linguistic diversity among engineering and science researchers. Through this collaboration, the UD teams will help the NAE to undertake a worldwide project in ethics to address emerging global issues.”


Articles

Toward a rational and ethical sociotechnical system of autonomous vehicles: A novel application of multi-criteria decision analysis

PLoS ONE

2021

The impacts of autonomous vehicles (AVs) are widely anticipated to be socially, economically, and ethically significant. A reliable assessment of the harms and benefits of their large-scale deployment requires a multi-disciplinary approach. To that end, we employed Multi-Criteria Decision Analysis to make such an assessment. We obtained opinions from 19 disciplinary experts to assess the significance of 13 potential harms and 8 potential benefits that might arise under four deployment schemes. Specifically, we considered: (1) the status quo, i.e., no AVs are deployed; (2) unfettered assimilation, i.e., no regulatory control would be exercised and commercial entities would “push” the development and deployment; (3) regulated introduction, i.e., regulatory control would be applied and either private individuals or commercial fleet operators could own the AVs; and (4) fleets only, i.e., regulatory control would be applied and only commercial fleet operators could own the AVs. Our results suggest that the two regulated scenarios, (3) and (4), i.e., regulated introduction with private or fleet ownership of autonomous vehicles, would be less likely to cause harm than either the status quo or the unfettered option.

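For illustration only, the weighted-sum scoring at the core of a generic MCDA can be sketched in a few lines of Python. The criteria names, weights, and scores below are hypothetical placeholders, not the study's expert-elicited data:

    # Minimal sketch of weighted-sum multi-criteria decision analysis (MCDA).
    # All criteria, weights, and scores are hypothetical; the study elicited
    # its actual judgments from 19 disciplinary experts.
    weights = {"crash_harm": -0.4, "congestion": -0.2, "mobility_access": 0.4}

    scores = {  # 0-10 rating of each deployment scheme on each criterion
        "status_quo":        {"crash_harm": 7, "congestion": 6, "mobility_access": 4},
        "unfettered":        {"crash_harm": 5, "congestion": 7, "mobility_access": 6},
        "regulated_private": {"crash_harm": 3, "congestion": 4, "mobility_access": 7},
        "fleets_only":       {"crash_harm": 2, "congestion": 3, "mobility_access": 8},
    }

    def mcda_score(scheme: str) -> float:
        """Weighted sum over all criteria; higher is better."""
        return sum(weights[c] * v for c, v in scores[scheme].items())

    # Rank deployment schemes from best to worst under these toy numbers.
    for scheme in sorted(scores, key=mcda_score, reverse=True):
        print(f"{scheme}: {mcda_score(scheme):+.2f}")

With these invented inputs the two regulated schemes rank highest, mirroring the shape (though not the substance) of the paper's conclusion.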

Modelling Ethical Algorithms in Autonomous Vehicles Using Crash Data

IEEE Transactions on Intelligent Transportation Systems

2021

In this paper we provide a proof of principle of a new method for addressing the ethics of autonomous vehicles (AVs), the Data-Theories Method, in which vehicle crash data is combined with philosophical ethical theory to provide a guide to action for AV algorithm design. We use this method to model three scenarios in which an AV is exposed to risk on the road, and determine possible actions for the AV. We then examine how different philosophical perspectives on agent partiality, or the degree to which one can act in one’s own self-interest, might address each scenario. This method shows why modelling the ethics of AVs using data is essential. First, AVs may sometimes have options that human drivers do not, and designing AVs to mimic the most ethical human driver would not ensure that they do the right thing. Second, while ethical theories can often disagree about what should be done, disagreement can be reduced and compromises found with a more complete understanding of the AV’s choices and their consequences. Finally, framing problems around thought experiments may elicit preferences that diverge from what individuals would prefer once they are given information about the real risks of a scenario. Our method provides a principled and empirical approach to productively address these problems and offers guidance on AV algorithm design.

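As a rough illustration of the agent-partiality idea (not the paper's actual model or crash data), a single partiality parameter can weight expected harm to the AV's occupant against expected harm to others. All numbers below are invented:

    # Hypothetical sketch: trading off occupant risk against risk to others
    # with a partiality parameter. Probabilities are invented, not crash data.
    actions = {
        "swerve": {"p_harm_occupant": 0.10, "p_harm_others": 0.02},
        "brake":  {"p_harm_occupant": 0.05, "p_harm_others": 0.08},
    }

    def weighted_risk(action: str, partiality: float) -> float:
        """partiality = 1.0 weighs only the occupant; 0.0 weighs only others."""
        p = actions[action]
        return partiality * p["p_harm_occupant"] + (1 - partiality) * p["p_harm_others"]

    def choose(partiality: float) -> str:
        """Pick the action with the lowest partiality-weighted risk."""
        return min(actions, key=lambda a: weighted_risk(a, partiality))

    for partiality in (0.0, 0.5, 1.0):  # impartial, balanced, fully self-interested
        print(partiality, choose(partiality))

The point of the sketch is only that the recommended action can flip as the partiality weighting changes, which is the kind of disagreement among ethical perspectives the paper examines.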

Taxonomy of trust-relevant failures and mitigation strategies

Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction

2020

We develop a taxonomy that categorizes HRI failure types and their impact on trust, in order to structure the broad range of knowledge contributions in this area. We further identify research gaps to support fellow researchers in the development of trustworthy robots. Trust repair in HRI has only recently received sustained attention, and we propose a taxonomy of potential trust violations and suitable repair strategies to support researchers during the development of interaction scenarios. The taxonomy distinguishes four failure types, Design, System, Expectation, and User failures, and outlines potential mitigation strategies. Based on these failures, strategies for autonomous failure detection and repair are presented, employing explanation, verification, and validation techniques. Finally, a research agenda for HRI is outlined, discussing identified gaps related to the relationship between failures and human-robot trust.

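The four failure types named in the abstract can be read as a simple lookup structure. The pairings of failure type to repair strategy below are illustrative guesses, not the paper's taxonomy:

    # Sketch: the abstract's four failure types mapped to candidate repair
    # strategies. The specific pairings here are illustrative only.
    from enum import Enum

    class FailureType(Enum):
        DESIGN = "design"
        SYSTEM = "system"
        EXPECTATION = "expectation"
        USER = "user"

    MITIGATIONS = {
        FailureType.DESIGN: ["verification and validation before deployment"],
        FailureType.SYSTEM: ["autonomous failure detection", "explanation to the user"],
        FailureType.EXPECTATION: ["explanation of the robot's actual capabilities"],
        FailureType.USER: ["guidance and corrective feedback"],
    }

    def repair_strategies(failure: FailureType) -> list:
        """Look up candidate trust-repair strategies for a detected failure."""
        return MITIGATIONS[failure]

    print(repair_strategies(FailureType.EXPECTATION))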


Education

University of Texas at Austin

PhD

Philosophy

1995

College of William and Mary, Williamsburg

BA

Philosophy

1987

Languages

  • English
  • German

Event Appearances

“Can AIs Behave Themselves? Towards a Genuine Machine Ethics”

(2022) European and North American Workshop on the Ethics of Artificial Intelligence, Rome, Italy

“Corporate Robotic Responsibility”

(2022) International Association for Computing and Philosophy (IACAP), Santa Clara, CA