Can we use artificial intelligence (AI) to help drive student success? A panel of higher education and tech industry leaders met on the first day of the 27th annual Statewide IT Conference to explore this topic.
Since the release of ChatGPT in late 2022, the integration of AI into everyday life is fast becoming our new reality. Will faculty embrace or resist AI adoption? What will students need to know about the use of AI tools in the classroom and in the workplace? The panel addressed these questions and more.
Associate Vice President for Learning Technologies Jay Gladden moderated and began the discussion by mentioning the “wide variety of early happenings” in the AI space, including the announcement of Microsoft’s Copilot project and the recently launched Turnitin AI plagiarism detection tool.
Joanna Millunchick, dean of the Luddy School of Informatics, Computing, and Engineering at IU, believes that the age of AI is an exciting time for education. She compared AI technology to tools like the printing press, books, and the internet, each of which was perceived as an “existential threat to universities.”
When asked what we should be doing with our students regarding AI, Millunchick said we should be clear about how we want them to use the technology. “Instructors should talk about ChatGPT and what it is good for and what it isn’t good for,” she said. She recommended creating instruction around using the tool.
On the question of AI taking jobs away, Millunchick stated that she thinks AI will make our jobs easier. For example, academic advisors can focus on a student’s more complex needs while leaving class selection to AI.
Clayton Nicholas, an industry research development specialist at Indiana University-Purdue University Indianapolis, said that AI will be a requirement in industry and that students need to learn how to use the technology. “Part of it is making sure that we really train people on how to use these tools, what the limitations are, and how to best utilize the tools,” he said.
Nicholas described how a program that used automated vehicles to deliver food to the needy could not provide the necessary human service of checking on the welfare of food recipients.
Stefano Fiorini, manager of research and analytics in the Office of Institutional Analytics at IU, talked about using data to understand the needs of different types of students. He said that AI and machine learning can help educators access data that has been difficult to analyze because of limited human capabilities. “We can tap into video recordings of activities that happened in the classroom to identify what is working and what is not working.” He added that AI can be used to create learning environments tailored to an individual student’s needs.
Gladden directed a question to IU’s Chief Privacy Officer Mark Werling about areas of concern or caution around AI. Werling mentioned a 2019 review of hospitals and health insurance companies that used data to make decisions about patient care, insurance premiums, and more. He said the study found that “AI was generating systemic bias on a racial basis.”
He added, “One of the things we always want to think about as an IU community is: ‘are we conscious ethically of what these impacts are and are we thinking about them and mitigating them so that we have an equitable outcome and not perpetuation of these systemic problems?’” Werling advised setting up safety protocols when adding institutional data to an AI system.
Other topics discussed included the socioeconomic divide in higher education, AI and intellectual property, using AI as a collaborator, and the effect AI will have on student assessment.