
Pictured from left to right: Helen Jin, Arjun Krishna, Christopher Watson, Neelay Velingker, Vivian Lin, Ezra Edelman, Alex Shypula, Nicolas Menand and Thomas Zhang. Not pictured: Behrad Moniri, Joey Velez-Ginorio and Yue Yang.
As AI continues to evolve, its integration into safety-critical domains, such as autonomous vehicles or medical diagnosis tools, has raised pressing concerns. Current machine learning algorithms can offer statistical guarantees, but they do not always provide the predictable behavior required for high-risk applications. Additionally, beyond just making predictions, AI systems need to offer explanations that are understandable by a wide range of stakeholders, from engineers to the general public. To gain and retain public trust, AI systems must prioritize safety, transparency and accountability in their design and decision-making processes.
The ASSET (AI-Enabled Systems: Safe, Explainable and Trustworthy) Center is designed to address these challenges by bringing together a multidisciplinary group of experts spanning machine learning, data science, formal methods, human-computer interaction, natural language processing and domain-specific expertise. By focusing on key research areas such as fairness, privacy, safety guarantees and explainability, ASSET is working to lay the groundwork for AI systems that users can rely on with confidence.
In recognition of the initiative’s importance, Amazon Web Services (AWS) has generously provided $840,000 in funding this year to support the next generation of AI researchers. The funding is intended to help the 12 selected Ph.D. students push the boundaries of knowledge in AI safety, robustness and interpretability, among other key areas.
“In the first two years of this fruitful partnership, the support from AWS AI was for 10 Ph.D. students,” says Rajeev Alur, Zisman Family Professor in Computer and Information Science (CIS) and Founding Director of the ASSET Center. “This year, we are excited that, thanks to the interest from the Automated Reasoning group in AWS, two more students can be funded with a specific focus on research at the intersection of logical reasoning and AI.”
Meet the 2025 ASSET-AWS Ph.D. Scholars
Alex Shypula, a third-year CIS Ph.D. student, is advised by Osbert Bastani, Assistant Professor in CIS. Shypula’s work focuses on enhancing the trustworthiness of code generation models. His innovative dataset for adapting large language models to program optimization was recently spotlighted at the International Conference on Learning Representations (ICLR) 2024. Currently, he is improving the interpretability of these models by breaking down complex optimizations into smaller, more understandable steps.
Arjun Krishna, a second-year CIS Ph.D. student, is advised by Dinesh Jayaraman, Assistant Professor in CIS. Krishna’s research delves into the resource costs involved in sequential decision-making policies, such as those used by robots in unstructured environments. By studying these systems, he aims to reduce their resource usage while maintaining or improving performance.
Behrad Moniri, a fourth-year Electrical and Systems Engineering (ESE) Ph.D. student, is advised by Hamed Hassani, Associate Professor in ESE. Moniri is investigating the foundations of deep learning, specifically focusing on out-of-distribution generalization (robustness). His work aims to predict performance in unfamiliar environments and improve feature learning as models scale.
Ezra Edelman, a second-year CIS Ph.D. student, is advised by Surbhi Goel, Magerman Term Assistant Professor in CIS. Edelman’s research is centered on improving our understanding of large language models, aiming to develop new algorithmic insights to enhance the efficiency and robustness of these systems.
Helen Jin, a fifth-year CIS Ph.D. student, is advised by Eric Wong, Assistant Professor in CIS. Jin seeks to bridge the gap between AI explanations and real-world applications in medicine and science. Her research focuses on developing AI explanations that are understandable to practitioners, such as surgeons or cosmologists, who require clear and actionable insights.
Neelay Velingker, a first-year CIS Ph.D. student, is advised by Mayur Naik, Misra Family Professor in CIS. Velingker is investigating the challenges of adapting large language models for deployment on consumer devices, aiming to address issues of quality, safety and resource usage to make these models more accessible across various hardware platforms.
Nicolas Menand, a second-year CIS Ph.D. student, is advised by Erik Waingarten, Assistant Professor in CIS. Menand is working on clustering and optimization algorithms for massive datasets, with a focus on developing solutions that generalize well from smaller sub-sampled datasets.
Thomas T.C.K. Zhang, a fifth-year ESE Ph.D. student, is advised by Nikolai Matni, Assistant Professor in ESE. Zhang is investigating multi-task transfer learning for safety-critical dynamical systems, such as robotics and autonomous vehicles. His goal is to develop algorithms that are not only efficient but also provably safe and explainable.
Vivian Lin, a fifth-year ESE Ph.D. student, is advised by Insup Lee, Cecilia Fitler Moore Professor in CIS. Lin’s research focuses on trustworthy machine learning applications for cyber-physical systems and medicine. She is developing a predictive safety monitor for autonomous vehicles and a survival analysis model for glaucoma patients.
Yue Yang, a fifth-year CIS Ph.D. student, is advised by Mark Yatskar, Assistant Professor in CIS, and Chris Callison-Burch, Professor in CIS. Yang is researching multimodal systems that are interpretable, robust and controllable. His work aims to leverage natural language as a tool for enhancing the interpretability and reliability of AI systems, especially in medical imaging.
Christopher Watson, a fourth-year CIS Ph.D. student, is advised by Rajeev Alur, Zisman Family Professor in CIS. Watson is developing safe reinforcement learning theories for long-horizon robotic tasks, with applications in areas such as autonomous navigation and manipulation.
Joey Velez-Ginorio, a third-year CIS Ph.D. student, is advised by Stephan Zdancewic, Schlein Family President’s Distinguished Professor in CIS, and Konrad Kording, Nathan Francis Mossell University Professor in Bioengineering and in Neuroscience. Velez-Ginorio is pioneering the development of a programming language defined by ReLU-style neural networks with attention mechanisms, a novel approach to AI system design.
Follow these student research projects and more news from the ASSET Center by visiting their website.