A research participant sits at a computer, inflating a virtual balloon that occasionally emits a startling “pop” when it gets too big. The goal is to collect as many points as possible by inflating the balloon to its largest size before it bursts. As the participant goes through the exercise, sensors measure their heart rate, perspiration and pupil dilation.
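The task described resembles the Balloon Analogue Risk Task, a standard risk-taking paradigm in psychology. A minimal sketch of its reward logic, with the pop threshold, point values, and stopping rule all illustrative assumptions rather than the study's actual settings:

```python
import random

def run_bart_trial(max_pumps=128, points_per_pump=1, stop_after=32, seed=None):
    """Simulate one balloon trial: each pump earns points but risks a pop.

    The balloon's hidden burst point is drawn at random, so the hazard of
    popping rises with every additional pump. All parameter values here
    are illustrative, not the lab's configuration.
    """
    rng = random.Random(seed)
    pop_at = rng.randint(1, max_pumps)      # hidden burst point for this balloon
    banked = 0
    for pump in range(1, max_pumps + 1):
        if pump >= pop_at:                  # balloon bursts: all points lost
            return 0
        banked += points_per_pump
        if pump >= stop_after:              # simple fixed stopping rule
            return banked
    return banked
```

The tension between banking points early and pumping for a larger payoff is what makes the task a useful probe of risk-taking under uncertainty.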
The research going on inside this Department of Psychological and Brain Sciences lab at Indiana University Bloomington is part of the Trusted Artificial Intelligence Initiative, a partnership between Naval Surface Warfare Center, Crane Division; IU; University of Notre Dame; and Purdue University. The workforce development research initiative aims to answer several questions about human trust in artificial intelligence, while getting more students interested in STEM fields. The research could help the government and military determine how to apply AI technologies to benefit the defense and security of the United States.
In Bertenthal’s lab, participants are tested on a series of decision-making and risk-taking tasks that help the research team understand the body’s responses to uncertain or untrustworthy information.
“Very few, if any, of the individual physiological variables are really predictive of what we’re looking at by themselves, but together they are able to help us predict people’s judgments on whether something is trustworthy or not,” Bertenthal said. “The physiological markers we’re measuring give us a look under the hood into how people are processing information — even when it’s unconscious to them — that influences their decisions.”
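The idea that several weakly informative signals can be combined into a usable predictor can be illustrated with a toy model. The sketch below uses synthetic data standing in for the three channels mentioned (heart rate, perspiration, pupil dilation) and a plain logistic regression; none of it reflects the lab's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the physiological channels. Each carries only
# a weak hint of the hypothetical trust judgment, mirroring the point
# that no single signal is very predictive on its own.
n = 4000
trust = rng.integers(0, 2, n)                 # 1 = judged trustworthy
effect = 0.4 * trust                          # weak per-channel shift
X = np.column_stack([effect + rng.normal(0, 1, n) for _ in range(3)])

# Logistic regression combining all three channels, fit by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))        # predicted P(trustworthy)
    w -= 0.5 * (X.T @ (p - trust) / n)
    b -= 0.5 * np.mean(p - trust)

p = 1 / (1 + np.exp(-(X @ w + b)))
acc_combined = np.mean((p > 0.5) == trust)
acc_single = np.mean((X[:, 0] > 0.2) == trust)  # one channel, best threshold
```

With these settings the combined model classifies noticeably better than any single channel, which is the statistical intuition behind pooling physiological markers.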
Lewis is also measuring physiological responses in his lab, where two participants are paired in a video game that gives them incentives to work against each other. Rather than using sensors on the body to track someone’s responses, he’s using cameras and microphones.
“It’s a wonderful aspect of this, the fact that we’re using a range of different methods to answer a single question,” Lewis said. “I measure the pulse by seeing through the camera the subtle changes in a person’s face. You can pick up on danger cues, facial expressions and see how they’re feeling about the other person.”
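Reading a pulse from a camera is known as remote photoplethysmography: the blood-volume pulse produces tiny periodic color changes in the skin, so the average green-channel value of a face region oscillates at the heart rate. A minimal sketch of the frequency-analysis step, using a synthetic signal (72 bpm) in place of the per-frame face measurements a real pipeline would extract:

```python
import numpy as np

# Synthetic per-frame mean green value for a face region: a faint
# 72-bpm oscillation buried in camera noise (values are illustrative).
fps, seconds, true_bpm = 30, 20, 72
t = np.arange(fps * seconds) / fps
green = 120 + 0.3 * np.sin(2 * np.pi * (true_bpm / 60) * t) \
            + np.random.default_rng(1).normal(0, 0.2, t.size)

# Detrend, then pick the dominant frequency within a plausible
# heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
spectrum = np.abs(np.fft.rfft(green - green.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
est_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
```

Restricting the search to the physiological band is what lets the method ignore lighting drift and other slow changes in the video.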
The goal of the research happening in both labs is to eventually develop software that can assess in real time whether a person perceives information as trustworthy.
“It’s nice to see the different perspectives people in psychology and neuroscience bring to the table,” Kiefer said. “It’s widened my view of how I look at the work I do on the computer science side.”
Macie Schmitt, who graduated with a bachelor’s degree in neuroscience in December, said being part of the research team exposed her to new disciplines and skills, inspiring her to learn coding and consider pursuing a Ph.D.
“This has been the most influential experience of my college career,” Schmitt said. “And I feel like because of the people that I’ve been able to meet and learn from with this experience, I’m much more confident as a student and upcoming researcher.”
The Trusted AI Initiative launched in 2021 and is intended to scale over several years. The initiative is part of SCALE, the Scalable Asymmetric Lifecycle Engagement workforce development program funded by the Office of the Under Secretary of Defense for Research and Engineering’s Trusted and Assured Microelectronics program.