UNIVERSITY PARK, Pa. — As artificial intelligence (AI) continues to influence all facets of our personal and professional lives, questions abound, such as "Can people have actual feelings for robots?" and "Can a chatbot comfort someone in distress?"
Penn State faculty members and Rock Ethics Institute senior research associates Daryl Cameron and Alan Wagner are working to answer these questions. Recently, Cameron, associate professor of psychology, and Wagner, associate professor of aerospace engineering, collaborated with other scholars on an article examining empathy for and from robots from an interdisciplinary perspective. They published their work in the journal Current Directions in Psychological Science.
The article was co-written with Martina Orlandi, a former postdoctoral scholar in the Rock Ethics Institute; Eliana Hadjiandreou and Stephen Anderson, doctoral alumni of the Department of Psychology and Cameron’s Empathy and Moral Psychology Lab; and India Oates, a former post-baccalaureate research associate in Cameron’s lab.
Cameron and Wagner discussed the potential for empathy between humans and AI in the Q&A below.
Q: How did you end up collaborating on this article?
Cameron: Alan and I have been talking about empathy and robots for about 10 years now and have worked on several projects together. We applied for and received the Center for Socially Responsible Artificial Intelligence’s Moral Psychology of Human-Technology Interaction seed grant, which allowed us to develop this theory paper on the complexities of human-robot relationships.
In the paper, we go into what empathy is and how difficult it is to define. If we're talking about compassion, we can picture those cute little robots that make expressions that seem to convey warmth. Then there's perspective taking, which is being able to predict and understand the minds of others around you. That doesn't involve emotion in the same way; it's a collection of processes for predicting emotions.
Someone might say, "I value this in terms of my personal wellbeing. This robot makes me feel cared for; it makes me feel happy." But then you might have some scientists or philosophers or engineers who say, "Well, you're not really happy." We suggest it's important to consider how people relate to their own feelings when they empathize with robots and to recognize that those experiences may have some value.
Q: How do humans and AI agents interact in ways that resemble empathy?
Wagner: People are interacting a lot with AI agents like chatbots and treating them like their best friends. If you made the AI more empathetic, could that help people? It's tough to say; we don't know at this point. AI can say, "Sorry you're having a bad day," but those are just words from an algorithm. It doesn't have the experiences to feel those things; it doesn't know what it feels like to have a family member die of cancer. On the other hand, even just saying "sorry" can help people. It could be helpful, but there's still a lot of territory to explore.