UGA computing professor wins Google Research Award to advance AI safety literacy

Image: The Google award will be used to increase AI safety literacy. (Photo by Chamberlain Smith)

By Alan Flurry

The award will support the development and evaluation of a program focused on artificial intelligence safety literacy

University of Georgia faculty member Ari Schlesinger is a recipient of a 2025 Google Academic Research Award. This year's awards will support 56 projects led by 84 researchers across 12 countries working on innovative computing and technology research.

Schlesinger, an assistant professor in the UGA School of Computing, received the award in the Trust, Safety, Security and Privacy Research category, which centers on research to improve digital trust, safety, privacy and security across the online ecosystem.

Schlesinger is leading the project, called Cultivating AI Safety Literacy for University Computing Students, in collaboration with Nick Falkner, an associate professor in the School of Computer and Mathematical Sciences at the University of Adelaide, Australia.

AI safety is an interdisciplinary field that strives to prevent harm arising from AI systems. The goals of the research are to develop a repeatable AI safety literacy program and to evaluate it for use in a variety of contexts and environments.

According to the researchers, even though AI already plays a role in everyday safety features such as email spam filters, modern AI systems pose new safety challenges for the tech workforce. To prevent unanticipated harms, university computer science students need interdisciplinary training on how to build digital safety into AI systems from the beginning.

“Harm from things like deepfakes, misinformation and scams has posed safety risks for years, but modern AI systems like large language models increase the scale and scope of online risk,” Schlesinger said. “We want the benefits of tech systems for communication and efficiency, which means we need to design technology where the benefits aren’t outweighed by the harms.”

A repeatable literacy program and evaluation method will help narrow the AI safety literacy skills gap in tech-sector careers.

“When we have new technology, we get new risks, but we can mitigate those risks through an interdisciplinary approach to safety,” Schlesinger said. “Cybersecurity, privacy and content moderation all play a role in this process. Safety provides a holistic framework for thinking about and addressing digital risk. We need specialists in cybersecurity, but we also need people who are trained to address broader social and interpersonal harms, in addition to stopping malicious actors.”

"We need specialists in cybersecurity, but we also need people that are trained to address broader social harms.”

—Ari Schlesinger,

Franklin College of Arts and Sciences

The project will focus on a set of research studies examining the effects of an AI safety literacy program that trains undergraduate and graduate computer science students in:

  • understanding sociotechnical harm
  • identifying and assessing safety risks
  • implementing safe design principles in the AI development lifecycle
  • evaluating safe AI implementation
  • communicating AI safety principles and practices to external stakeholders

By implementing this research in two countries and across different university and course settings, the team will be able to identify opportunities and barriers to cultivating AI safety literacy in real-world learning environments.

Each award recipient receives up to $100,000 in funding to support their work. In addition, awardees are paired with a Google research sponsor, providing a direct connection to Google’s research community and fostering long-term collaboration. 
