Spotlight
Areas of Expertise (6)
Artificial Intelligence
Philosophical Foundations of AI
Computational Economics
AI Systems
Formal Logic
Computational Logic
Biography
Selmer Bringsjord specializes in the logico-mathematical and philosophical foundations of artificial intelligence (AI) and cognitive science, and in collaboratively building AI systems on the basis of computational logic. Though he spends considerable “engineering” time in pursuit of ever-smarter computing machines, he claims that “armchair” reasoning time has enabled him to deduce that the human mind will forever be superior to such machines.
“Soon enough, much of what many humans do for a living will be better done by indefatigable machines who require not a cent in pay,” Bringsjord said. “I figure the ultimate growth industry will be building smarter and smarter such machines on the one hand, and philosophizing about whether they are truly conscious and free on the other. Job security is nice. I’ve worked in this two-fold industry for a long time, and plan to continue as long as my health holds out.”
Bringsjord is the author of papers and essays ranging in approach from the mathematical to the informal, covering such areas as AI, logic, gaming, philosophy of mind, philosophy of religion, robotics, and ethics. He has of late begun to move into computational economics, for which he has invented a new paradigm based on formal logic.
He is the author of What Robots Can & Can't Be, concerned with the future of attempts to create robots that behave as humans, and also of Superminds: People Harness Hypercomputation, and More. Before the second of these books, he wrote, with IBM's David Ferrucci, Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, A Storytelling Machine.
Bringsjord currently holds appointments in the Department of Cognitive Science, the Department of Computer Science, and the Lally School of Management & Technology, and teaches AI, formal logic and other topics relating to it, human and machine reasoning, philosophy of AI, and the intellectual history of New York City and the Hudson Valley. Funding for his research and development has come from the Luce Foundation, the National Science Foundation, the Templeton Foundation, AT&T, IBM, Apple, AFRL, ARDA/DTO/IARPA, ONR, DARPA, AFOSR, and other sponsors. Bringsjord has consulted for and advised many companies in the general realm of intelligent systems, and continues to do so.
Education (2)
Brown University: PhD, Philosophy
University of Pennsylvania: BA, Philosophy
Media Appearances (10)
How to Slow Down Time
Popular Science print
2021-12-08
...Challenge yourself and engage your brain. For Selmer Bringsjord, a professor of logic and philosophy, as well as director of the Rensselaer Artificial Intelligence and Reasoning Laboratory, the longest days often involve spending time tackling problems known in mathematical computer science to be lengthy, difficult solves. ...
Can AI Teach Us To Be More Human? Maybe.
Lifewire online
2021-04-16
... At Rensselaer Polytechnic Institute, Selmer Bringsjord’s laboratory is building mathematical models of human emotion. The research is intended to create an AI that can score high on emotional intelligence tests and apply them to humans. But Bringsjord, an AI expert, says any teaching AI does is inadvertent. "But this is pure engineering work, and I'm under no such illusion that the AI in question itself has emotions or genuinely understands emotions," he said in an email interview.
Conciencia robótica: Sentir y razonar como humano (Robotic Consciousness: Feeling and Reasoning Like a Human)
Al límite de la ficción tv
2021-04-01
Selmer Bringsjord and his doctoral student Mike Giancola, of the Rensselaer Artificial Intelligence and Reasoning (RAIR) Laboratory, discuss whether robots and AI can feel and think like a human in "Al límite de la ficción," a multi-part science series from the Chilean news channel T13.
Ethical AI: Should Artificial Intelligence Be Used in Weapons
NPR - Academic Minute radio
2021-03-22
Engineers succeed by making pessimistic assumptions. Today, AI, artificial intelligence, where my research lies, follows suit. A new Acura auto has AI designed under the pessimistic assumption that its human driver will sooner or later fail to brake for some obstacle; hence the car is engineered as an artificial agent that can stop itself. With collaborators, I design and engineer ethically correct artificial agents under the assumption that human agents will behave badly. When they do, our moral machines can intervene, and save the day.
The Turing Test is Dead. Long Live the Lovelace Test.
Mind Matters podcast online
2020-04-02
The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from a human. Many think that Turing’s proposal for intelligence, especially creativity, has been proven inadequate. Is the Lovelace test a better alternative? Robert J. Marks and Dr. Selmer Bringsjord discuss the Turing test, the Lovelace test, and machine creativity.
When AI Goes Wrong, We Won’t Be Able to Ask It Why
Vice Motherboard
2016-07-06
I called Selmer Bringsjord, computer scientist and chair of the Department of Cognitive Science at Rensselaer Polytechnic Institute, to hear his thoughts on the matter. He told me that all of this means one thing: "We are heading into a black future, full of black boxes."...
Watch the moment when a really cute robot becomes self-aware
Quartz
2015-07-17
Rensselaer Polytechnic Institute professor Selmer Bringsjord has conducted a self-awareness experiment with a commercially available Nao robot that he says proves that the robot has the faintest glimmer of self-awareness...
Robot homes in on consciousness by passing self-awareness test
New Scientist
2015-07-15
Selmer Bringsjord of Rensselaer Polytechnic Institute in New York, who ran the test, says that by passing many tests of this kind – however narrow – robots will build up a repertoire of abilities that start to become useful. Instead of agonising over whether machines can ever be conscious like humans, he aims to demonstrate specific, limited examples of consciousness...
Forget Turing, the Lovelace Test Has a Better Shot at Spotting AI
Vice Motherboard
2014-07-08
“This is unfortunate. I’m a huge fan of Turing, but his test is indeed inadequate,” Selmer Bringsjord, one of the designers of the Lovelace Test, a more rigorous AI detector, told me in an interview...
AI researcher says amoral robots pose a danger to humanity
Computerworld
2014-03-07
"I'm worried about both whether it's people making machines do evil things or the machines doing evil things on their own," said Selmer Bringsjord, professor of cognitive science, computer science and logic and philosophy at RPI in Troy, N.Y. "The more powerful the robot is, the higher the stakes are. If robots in the future have autonomy..., that's a recipe for disaster...
Articles (3)
Beyond the Doctrine of Double Effect: A Formal Model of True Self-sacrifice
Robotics and Well-Being
Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh, Matthew Peveler
2019
The doctrine of double effect (DDE) is an ethical principle that can account for human judgment in moral dilemmas: situations in which all available options have large good and bad consequences. We have previously formalized DDE in a computational logic that can be implemented in robots. DDE, as an ethical principle for robots, is attractive for a number of reasons: (1) empirical studies have found that DDE is used by untrained humans; (2) many legal systems use DDE; and finally, (3) the doctrine is a hybrid of the two major opposing families of ethical theories (consequentialist/utilitarian theories versus deontological theories). In spite of all its attractive features, we have found that DDE does not fully account for human behavior in many ethically challenging situations. Specifically, standard DDE fails in situations wherein humans have the option of self-sacrifice. Accordingly, we present an enhancement of our DDE formalism to handle self-sacrifice; we end by looking ahead to future work.
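As a rough illustration of the four conditions the classical doctrine of double effect imposes, here is a minimal Python sketch. The names (Action, dde_permissible) and the boolean/utility encoding are hypothetical and purely illustrative; this is not the computational-logic formalization the paper describes.

    # Toy check of the four classical DDE conditions (illustrative only; the
    # paper formalizes DDE in a computational logic, not in Python).
    from dataclasses import dataclass

    @dataclass
    class Action:
        intrinsically_bad: bool    # is the act itself forbidden?
        bad_effect_intended: bool  # is the harm intended, or merely foreseen?
        bad_effect_is_means: bool  # is the harm the means to the good effect?
        good_utility: float        # magnitude of the good effect
        bad_utility: float         # magnitude of the bad effect

    def dde_permissible(a: Action) -> bool:
        """True iff the action satisfies all four classical DDE conditions."""
        return (not a.intrinsically_bad
                and not a.bad_effect_intended
                and not a.bad_effect_is_means
                and a.good_utility > a.bad_utility)

    # Example: an act whose harm is foreseen but neither intended nor used as a means.
    print(dde_permissible(Action(False, False, False, good_utility=5.0, bad_utility=1.0)))  # True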
Toward the Engineering of Virtuous Machines
Preprint
Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh
2018
While various traditions under the 'virtue ethics' umbrella have been studied extensively and advocated by ethicists, it has not been clear that there exists a version of virtue ethics rigorous enough to be a target for machine ethics (which we take to include the engineering of an ethical sensibility in a machine or robot itself, not only the study of ethics in the humans who might create artificial agents). We begin to address this by presenting an embryonic formalization of a key part of any virtue-ethics theory: namely, the learning of virtue by a focus on exemplars of moral virtue. Our work is based in part on a computational formal logic previously used to formally model other ethical theories and principles therein, and to implement these models in artificial agents.
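As a loose illustration of learning from moral exemplars, here is a toy Python sketch in which an agent simply imitates the action an exemplar took in the most similar recorded situation. The feature encoding and the function imitate_exemplar are hypothetical; this nearest-exemplar imitation only gestures at, and is far weaker than, the formal-logic account the paper develops.

    # Toy "learn from an exemplar" policy: act as the exemplar did in the most
    # similar recorded situation (illustrative only; not the paper's formalism).
    from typing import Dict, List, Tuple

    Situation = Dict[str, float]  # hypothetical numeric encoding of a situation

    def similarity(a: Situation, b: Situation) -> float:
        # Negative L1 distance over the union of features: larger means more alike.
        keys = set(a) | set(b)
        return -sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)

    def imitate_exemplar(current: Situation,
                         exemplar_record: List[Tuple[Situation, str]]) -> str:
        """Return the action the exemplar took in the situation most like the current one."""
        _, best_action = max(exemplar_record, key=lambda pair: similarity(current, pair[0]))
        return best_action

    # The exemplar helped when someone was in need, even at some personal cost.
    record = [({"someone_in_need": 1.0, "personal_cost": 0.2}, "help"),
              ({"someone_in_need": 0.0, "personal_cost": 0.0}, "carry_on")]
    print(imitate_exemplar({"someone_in_need": 0.9, "personal_cost": 0.3}, record))  # "help"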
Tentacular Artificial Intelligence, and the Architecture Thereof, Introduced
Preprint
Selmer Bringsjord, Naveen Sundar Govindarajulu, Atriya Sen, Matthew Peveler, Biplav Srivastava, Kartik Talamadupula
2018
We briefly introduce herein a new form of distributed, multi-agent artificial intelligence, which we refer to as "tentacular." Tentacular AI is distinguished by six attributes, which among other things entail a capacity for reasoning and planning based in highly expressive calculi (logics), and which enlists subsidiary agents across distances circumscribed only by the reach of one or more given networks.