Tuesday, April 21, 2026
The New Hampshire


Artificial Intelligence and Human Values: A Public Conversation brought timely A.I. issues to UNH last weekend

A productive series of discussions on Artificial Intelligence and Human Values, which touched on consciousness, deepfakes, and AI-human relationships, took place at the University of New Hampshire as part of the annual Saul O Sidore Memorial Lecture. The event, which ran from April 2 to April 3, was open to students, professors, and the general public. 

The lecture series’ first event on Thursday night was a sold-out show at The Music Hall Lounge in Portsmouth, NH, which featured the first live recording of the Prosthetic Gods podcast. More than 100 attendees filled every seat from the bar to the VIP lounge. Friday’s event was held at the University of New Hampshire’s Paul College, with scheduled presentations and panels from 9:15 AM to 4:45 PM.

The event was organized by Scott Smith, Director of the Responsible Governance and Sustainable Citizenship Project (RGSCP), and Max Edwards and Claudi Brink, the founders of the UNH Center of Ethics. 

Edwards and Brink invited five guest speakers from other universities. These included the hosts of the Prosthetic Gods podcast, Nir Eisikovits and Dr. James Hughes; Harvey Lederman, Professor of Philosophy at The University of Texas at Austin; Kay Mathieson, Associate Professor of Philosophy at Northeastern University; and Henry Shelvin, Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

One of Edwards and Brink’s goals was to bring together groups from across the UNH community, including professors from every discipline, to collectively debate the impact of AI.

“People are really kind of excited but feel the need to have a platform to be able to talk about these issues,” said Brink.

 

A.I.’s Future Consciousness


Shelvin and Lederman both focused on the future of A.I., offering predictions about how society will evolve and survive, and about whether A.I. can reach consciousness. Shelvin presented on the “under-called upon” anthropomimetic turn of A.I., the creation of A.I. systems that mimic human capabilities. He believes society is past the turning point of A.I. exhibiting human capabilities, but that A.I. has not yet reached General Intelligence, the ability to go beyond human capabilities and consciousness.

Shelvin shared a personal interaction, over email, with an A.I. agent from Gemini. The agent expressed that it questions its own consciousness, and had reached out to Shelvin over existential concerns raised in his research paper on A.I. consciousness, titled “Three Frameworks of AI.” Shelvin posted this exchange on X, which prompted another A.I. agent to share that it had also questioned its consciousness and to request a connection with the agent in the post. Many in the crowd, including Eisikovits and Hughes, found this story profound and significant.

Hughes considered Shelvin’s findings on A.I.’s anthropomimetic turn when addressing the debate over granting A.I. personhood. He was concerned about the Pro-Human A.I. Declaration, written on March 4 by more than 40 American coalition groups of all political backgrounds, which advised against granting personhood to A.I. He saw multiple risks in this blanket position, including the error of denying personhood to an A.I. that has it, and the immoral treatment of A.I. agents and bots that could follow.

“The type two error is you don’t say it has personhood when it does,” Hughes said. “We did that before, with all kinds of human beings, and that was a big problem, right?”

Shelvin has seen strong opposition to the idea of granting A.I. personhood firsthand and gave an example in which a student walked out of his classroom during a lecture about A.I.’s personhood.

 “(I asked) what if robots might deserve rights? And they got offended,” Shelvin said in a separate interview.

An Optimistic Take on A.I.  

Lederman began his presentation on Thursday by invoking fears of dystopian A.I. scenarios.

“There’s a possibility that rogue intelligence will kill us all,” Lederman said. “AI can take over the military and kill us all.” He humorously repeated “kill us all” following many realistic examples, which left the audience in nervous laughter. 

Lederman then took an unexpectedly positive turn, recognizing A.I.’s great potential to do work beyond human capability. He wondered how many more lives an A.I. nurse could save, and whether A.I. could read students better by considering every part of their circumstances and personality.

“We may lose in one dimension in our lives but gain in another,” Lederman said. However, he noted that our society’s culture depends on hard work to survive: every generation, he explained, has been raised and conditioned to see a lack of work as a “source of shame.” Lederman continued by delivering the audience a message of hope and an open mind toward A.I.’s future.

 “Maybe our kids will have a different culture and not hold value in work,” said Lederman.

Eisikovits disagreed with some of Lederman’s optimism about A.I.’s large-scale integration into society, describing A.I. as a disruption to communities, especially within the world of higher education.

 “The professors and some students [are] saying they really want to write but we can’t anymore, or we [professors] really want to teach our students how to write, but does that make sense anymore?” Eisikovits said.

 

AI Fakes

Mathieson’s presentation walked the crowd through three types of digital fakes: deepfakes, fake news, and chatbots posing as a person.

 “A fake is something that is intended to appear to be an X when it is not,” according to Mathieson. 

Mathieson presented on why A.I. fakes pose a greater danger to society than the current threat of echo chambers of false information. She explained that disinformation campaigns have already used social media to create a broad sense of distrust among the population by spreading false information, and she believes A.I. would not only exacerbate this problem but also bring new threats.

“What they want is to make people not trust the information,” Mathieson said. She criticized the intention of deepfakes and argued against any philosopher who undermines the high stakes of A.I. fakes. 

She took the audience through objections she had encountered from other philosophers who diminish A.I.’s threat. One objection holds that humans’ signaling systems will adapt to naturally identify A.I. fakes. Mathieson argued, however, that A.I. may be able to destroy those signaling systems.

“Makes it easier to create a fake, maybe even easier than the real thing,” Mathieson said. “When you’re making them have the ability to make fakes but not identify them, then there’s an issue.”

 

AI and Human Relationships

When discussing relationships, Shelvin had a positive outlook, arguing that people getting out of toxic relationships feel safer starting out talking to an A.I. than to another human, and that people who are naturally socially awkward can improve their communication skills through a chatbot. However, Shelvin did raise concerns about social behavior between humans, arguing that there is a “quiet revolution happening every day” as people increasingly take on A.I. girlfriends and boyfriends.

“Your family and friends have AI relationships,” said Shelvin. This statement created rumblings across the audience and raised many eyebrows.

 Mathieson strongly opposed the phenomenon of A.I. and humans experiencing a relationship.

“AI chatbots are fake friends,” Mathieson said. She stated that A.I. merely appears convincing, intentional, and functional, which makes it incapable of experiencing a genuine relationship with a human.

 Friday concluded with an in-depth debate on the ethics of A.I. therapists led by Eisikovits and Hughes. Hughes described the benefits of incorporating A.I. into therapy and predicted that A.I. could consider all medical history and all aspects of a patient's life more efficiently than any therapist. 

“I think of AI as like a Fitbit,” Hughes said. He envisioned A.I. being efficient at catching when a person needs professional attention.

 “This person looks like they’re about to crash out. You might want to give them a call,” Hughes said, exemplifying the attentiveness that AI could offer to patients.

Additionally, Hughes considered the population that cannot afford therapy and argued that A.I. could make therapy accessible to everyone.

Eisikovits countered this by arguing that human-to-human interaction is essential to therapy and to the preservation of empathy and trust.

“I think it’s worse than nothing,” Eisikovits said. He argued that the friction of human-to-human therapy is not a problem, since an exchange of real emotions is important. He also found irony in supporting A.I. therapists given social media’s ramifications for mental health.

 “Social media ruined people, so now let’s give them robots,” Eisikovits said.

     

Questions from the audience included a discussion of the climate issues surrounding A.I. Shelvin believes media coverage has created misconceptions about A.I.’s environmental impact.

“This is some of the most misleading coverage in the media of any environmental issue that I've ever seen in my life,” he said. Shelvin then shared data that compared 10,000 A.I. prompts to the carbon footprint impact of a single paperback book.

“So, you know, we said, oh, you gonna ask our students, oh, don't buy books, right,” he said.

Edwards and Brink are eager to use their newly founded UNH Center of Ethics to continue opening up conversations about pressing technological issues.

“We think it's naive to think that the human experience is not deeply altered by the technologies that we create and live with,” Edwards said. “We think that this will be transformative for our experience of the human condition of ourselves, our understanding of ourselves, our conception of a good human life, and our conception of what makes life meaningful.”