
IU psychologist awarded $1.7M to study connection between visual attention and language learning

Apr 3, 2018

An Indiana University psychologist has been awarded $1.7 million from the National Institutes of Health to better understand the earliest phases of language learning in children.

The grant will support research on subjects such as the connection between where infants look at the moment a parent names an object, the number of words they learn, and later outcomes such as cognitive development, vocabulary size and success in school. Chen Yu, a professor in the IU Bloomington College of Arts and Sciences’ Department of Psychological and Brain Sciences, is leading the project.

“One problem that young children deal with in language learning is that they live in a world that is visually cluttered,” Yu said. “When they hear a label, there are so many objects in their environment that as language learners, they need to figure out what the label may refer to. What we want to understand is how they map what they hear to what they see in a cluttered environment.”

In previous studies, Yu’s Computational Cognition and Learning Laboratory found wide variation in how clearly an object appeared in an infant’s field of vision at the moment their parent labeled it.

“Some naming moments were clear since the named object was visually dominant in the infant’s view; others were ambiguous since several objects were competing for the infant’s attention at the same time,” Yu said. “Moreover, some naming moments contained misleading information as the parent named object A while the infant visually attended to object B.”

To understand how young learners handle different kinds of “naming moments,” IU researchers outfit infants with head-mounted cameras to track the direction of their gaze as they play with a parent in a toy room. They then use data-mining techniques to infer a learner’s “state of knowledge” when exposed to different types of learning situations. Yayun Zhang, a Ph.D. student studying psychological and brain sciences and cognitive science at IU Bloomington, has conducted much of this work with parents and children.

Instead of using artificial stimuli or pictures on a screen – common methods in many language learning experiments – the research conducted under the new grant will take a “naturalistic approach”: video vignettes recorded from head-mounted cameras, capturing an infant’s eye movements from the infant’s own perspective, will be played back to another group of young learners. Separate infants will thus individually view the same vignettes, collected in a naturalistic environment, after which their visual attention will be measured and analyzed.

Photos: A lab researcher attaches a head-mounted camera to a child; a child wears a head-mounted camera. (Chris Meyer, IU Communications)

“Even if they’re shown the same video vignette, different children will attend to different aspects of the scenes – and what they attend to is directly fed into their learning system,” Yu said. “We know in the real world there are many different situations. We’re taking data from the real world, but also controlling it in a scientifically systematic way to leverage the best of both worlds.”

Linda Smith, IU Distinguished Professor and Chancellor’s Professor of Psychological and Brain Sciences, is co-investigator on the project. The study is part of the “Learning: Brains, Machines and Children” initiative within IU’s Emerging Areas of Research program. Smith leads the initiative, which aims to apply research on toddler learning to improving machine learning and artificial intelligence.

“This new grant focuses directly on one of the core questions in the Emerging Areas of Research: how learners generate their own data,” Smith said. “When an infant looks at a scene, they select a small portion of the available information for learning. Learners who can select just the right information, based on their own current state of knowledge, provide potentially important computational pathways for optimal learning.”

One of the primary goals of the new study is to examine individual differences in the information infants select at the moment a parent names an object. To achieve good “naming moments,” Yu said, parents and children need to work together.

“Even for the same age, some kids will get 50 words, while others may acquire 200 to 300,” Yu said. “Those who get more words are the children who attend to the right information at the right time when facing uncertainty and ambiguity in their learning environment.

“Children learn language in a social context, in contact with their parents,” he added. “When they play with toys, they naturally create good naming moments. If parents let their child lead and name the object in that moment, it helps learning.”

He also noted that the lab’s preliminary research has found that children who direct their own course of play are significantly more successful at matching a label to the correct object.

“All this research shows that early language learning is predictive of later development and outcomes such as school achievement,” Yu said. “It’s really the starting point of development, and the goal of the project is to understand the underlying cognitive processes that support early word learning.”
