LSC Researchers Study User Experience and Public Perceptions of AI in Recent Publications

LSC students and faculty are at the forefront of AI research.

From evaluating public trust in Artificial Intelligence (AI) to highlighting its potential equity implications, researchers in the Department of Life Sciences Communication (LSC) are paving the way for research at the intersection of AI and science communication. By examining how people interact with AI, and more broadly the effects the technology has at the individual and societal levels, LSC's AI research focuses on producing data that can inform regulatory practice surrounding large language models (LLMs), while also assessing how humans encounter AI in their day-to-day lives. Many LSC projects have focused on AI in communication contexts over the last two years (see this international study, and this one on public opinion in the USA, for instance). Below, we present some of the most recent ones.

Anqi Shao, a 2025 PhD graduate of the LSC Department (advised by Professor Dietram Scheufele), focused her dissertation research on AI hallucinations: inaccurate AI-generated outputs. She examined the epistemic risk, or risk of inaccuracy, that these hallucinations produce. Using a 2 × 4 experimental design, the study randomly assigned participants to one of two contested science topics, “Measles, Mumps & Rubella (MMR) vaccines” or “holistic or complementary health.” Building on inaccurate claims related to each topic, the experiment compared four communicative behaviors of generative AI and asked participants to evaluate the AI responses on perceived accuracy, their trust in the message, the AI’s performance, and their current attitudes toward the topic. The study concluded that AI hallucinations lowered perceived accuracy, trust in AI performance, and attitudes toward the science topics. “In both conditions of hallucination and disinformation in which AI generated inaccurate information,” Shao said, “people’s attitudes towards the science topic decreased.”

LSC PhD graduate Anqi Shao recently focused on AI hallucinations for her dissertation research.

The study also compared the effects of different AI behaviors, including hallucinations, on user experiences and information processing. Professor Dominique Brossard, who co-directs SciLab, where Shao worked as a research assistant, stresses the importance of this type of research: “We all need to come together and have conversations about what values matter to us and how we want to harness the immense potential of these technologies.” Brossard also poses a guiding question for this work: “How do we take into account people’s values while producing new science in a productive and ethical way?”

LSC researchers are also exploring concerns about inequities in user experiences with AI. A recent publication in Scientific Reports (a Nature Portfolio journal) led by Shao and LSC professor Kaiping Chen, with collaborators in the Department of Computer Sciences, focused on potential inequities in AI.

The researchers conducted an algorithm auditing study to evaluate equity in human–AI communication based on user interactions from different sociodemographic backgrounds.

In the study, participants engaged in conversations with the GPT-3 chatbot on two contested topics: “climate change” and “Black Lives Matter (BLM).” The study used a three-step auditing design that included a pre-dialogue survey measuring participant demographics, participants’ dialogue with GPT-3 on their assigned topics, and a post-dialogue survey assessing user experience with the chatbot.

The results of the study indicate a substantially worse chatbot experience for users from opinion or education minorities. Opinion minority groups (those in the bottom quartile of belief in climate change or BLM) reported lower satisfaction when interacting with the chatbot and a lower intention to use or recommend the chatbot in the future. Education minority groups also reported more negative experiences with the chatbot. Yet although both opinion and education minority groups reported worse user experiences, their attitudes toward climate change and BLM became more positive after the dialogue. This suggests participants’ issue stances converged toward the GPT-3 responses, which more often aligned with views rooted in scientific consensus.

Through studies such as the ones summarized above, LSC researchers seek to better understand the diversity of user experiences and public perceptions of AI, while also examining the potential of this emerging technology to produce positive societal impacts.

Written by: Catie Stumpf, LSC Lenore Landry Intern
Published: March 2026