IU advancing safe, trustworthy AI as part of new US government consortium

Feb 15, 2024

Indiana University is advancing the development and deployment of safe, trustworthy artificial intelligence as a member of the recently announced U.S. AI Safety Institute Consortium, established by the U.S. Department of Commerce's National Institute of Standards and Technology.

Scott Shackelford is co-leading IU's involvement in the U.S. AI Safety Institute Consortium. Photo by Wendi Chitwood, Indiana University

IU joins more than 200 other companies and organizations in the consortium. The new initiative brings together artificial intelligence creators and users, academics, government and industry researchers, and civil society organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, with the goal of improving AI safety across the world.

“AI technology is hurtling forward, while efforts to ensure trust and safety are advancing globally, though not at the same pace,” said Scott Shackelford, provost professor of business law and ethics at the IU Kelley School of Business, executive director of the Center for Applied Cybersecurity Research and executive director of the Ostrom Workshop. “IU has a tremendous amount to offer in this space given our diverse community of researchers who are undertaking groundbreaking work on the development of generative AI, its application to addressing challenges in the public and private sectors, and how it should be governed.

“We are thrilled to join colleagues from across the nation in the U.S. AI Safety Institute Consortium to help ensure that this technology is developed with sufficient guardrails to protect Hoosiers, and all Americans, while giving them powerful new productivity tools.”

IU is a leader in AI research, taking a human-centered approach to improving lives and bettering communities in Indiana and around the world. Ongoing initiatives like the Trusted AI Initiative, the Ostrom Workshop's focus on AI governance and the new IU Indianapolis Artificial Intelligence Consortium are tackling today's AI challenges in a variety of ways.

David Crandall is one of the university experts leading IU's participation in the U.S. AI Safety Institute Consortium. Photo by Wendi Chitwood, Indiana University

Shackelford — along with co-leads David Crandall, Angie Raymond, Sagar Samtani and Rob Templeman — will leverage IU researchers’ extensive knowledge and institutional strengths to tackle the critical issues being addressed through the consortium. Working with colleagues at the Luddy Artificial Intelligence Center, Kelley’s Data Science and AI Lab, IU’s Center for Applied Cybersecurity Research and others, their plans for the consortium are far-reaching: developing best practices for industry and government efforts, developing secure testing environments, addressing workforce shortages and more.

“IU has a long history in AI that positions us nicely to participate in this consortium,” said Crandall, Luddy professor of computer science, director of the Luddy AI Center and director of the Center for Machine Learning. “Not only are we researching and creating new AI technologies, but we’re doing so with an intentional human- and society-centered focus. With our strong multidisciplinary approach, we ask questions that ensure society remains the focal point.”

Author

IU Newsroom

Kelsey Cook

Deputy director for research communication
