Purdue-led team creates interpretable AI for emotion recognition

02-09-2026

From left to right: Sumit Dalal, Bharat Bhargava, Bhavya Jain and Mijanur R. Palash


A new artificial intelligence system, developed with guidance from Purdue University Computer Science Professor Bharat Bhargava, could transform how technology recognizes and explains human emotions, offering a transparent and trustworthy tool for mental health assessment.

The system combines voice, facial expressions, and language to detect signs of depression and other emotional states. Unlike traditional “black box” models, which make predictions without explanation, it provides reasoning and evidence for its conclusions, helping healthcare professionals understand why the AI reached a particular result.

“We wanted to build an AI system that behaves more like a human observer,” said Bhargava. “It looks for patterns such as a flat or monotone voice, reduced facial expressions, or shorter, sadder language — and connects them to known emotional states. But most importantly, it explains how it made that connection.”

At the core of the project is a knowledge map, or ontology, that links measurable behaviors, such as tone of voice or gaze direction, with emotional and psychological patterns associated with depression. By training the neural network to focus on meaningful signals and filter out random noise, the system can produce results that are both accurate and interpretable.
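The article does not give implementation details, but the idea of ontology regularization can be illustrated with a minimal sketch: a simple classifier whose loss adds a penalty on weights for features the knowledge map does not link to depression, pushing the model to rely on ontology-backed cues. All feature names and ontology links below are illustrative assumptions, not the project's actual ontology or architecture.

```python
import numpy as np

# Hypothetical cues; only the first four are linked to depression in this toy ontology.
FEATURES = ["pitch_variability", "gaze_down_ratio", "smile_frequency",
            "negative_word_ratio", "background_noise"]
ONTOLOGY_RELEVANT = {"pitch_variability", "gaze_down_ratio",
                     "smile_frequency", "negative_word_ratio"}
relevance = np.array([1.0 if f in ONTOLOGY_RELEVANT else 0.0 for f in FEATURES])

lam = 0.5  # strength of the ontology penalty (assumed value)

def loss(w, X, y):
    """Logistic loss plus a penalty that shrinks weights on off-ontology features."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return ce + lam * np.sum((1.0 - relevance) * w ** 2)

# Toy data: the label depends only on ontology-linked cues, not the noise feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 3] - X[:, 0] > 0).astype(float)

# Plain gradient descent on the regularized loss.
w = np.zeros(5)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y) + 2 * lam * (1.0 - relevance) * w
    w -= 0.5 * grad

print(dict(zip(FEATURES, np.round(w, 3))))  # off-ontology weight stays near zero
```

Because the penalty applies only to the off-ontology feature, its learned weight is driven toward zero while the ontology-backed weights remain free to fit the data. The interpretability benefit is that every large weight then corresponds to a cue the knowledge map can explain.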

The research was conducted under Purdue’s REALM (Real Applications of Machine Learning) project, funded by the Northrop Grumman Corporation consortium.

Outside Purdue, the work involved collaboration with Bhavya Jain, an undergraduate student at the Indian Institute of Technology, Bhilai, and Sumit Dalal, Assistant Professor at Bennett University, India. Dalal proposed incorporating audio–video ontology into the system, while Jain led the implementation, analysis, and experimentation under Bhargava’s guidance.

Potential applications extend beyond healthcare. Bhargava notes that similar systems could help law enforcement, military personnel, and first responders recognize individuals experiencing distress or crisis.

“This work lays the foundation for ethical, explainable AI in mental healthcare and beyond,” Bhargava said. “It provides not just a prediction, but a reason, and that’s what makes it trustworthy.”

The work has already earned international recognition. A related paper, “Ontology-Regularized Multimodal Neural Networks for Explainable Mental Health Assessment,” received the Best Student Paper Award at the International Semantic Intelligence Conference (ISIC 2025).

Related publications include Ontology-Regularized Multimodal Neural Networks for Explainable Mental Health Assessment (ISIC 2025), EMERSK – Explainable Multimodal Emotion Recognition with Situational Knowledge (IEEE Transactions on Multimedia, 2024), and a Ph.D. dissertation by Purdue’s Mijanur Palash titled A Deep Learning Based Framework for Novelty-Aware Explainable Multimodal Emotion Recognition with Situational Knowledge.


About the Department of Computer Science at Purdue University

Founded in 1962 as the first degree-awarding computer science program in the United States, the Department of Computer Science was created to be an innovative base of knowledge in the emerging field of computing. The department continues to advance the field through research. U.S. News & World Report ranks the department No. 16 in undergraduate and No. 19 in graduate computer science. Graduates of the program can solve complex and challenging problems in many fields. Our consistent success in an ever-changing landscape is reflected in record undergraduate enrollment, increased faculty hiring, innovative research projects, and the creation of new academic programs. Learn more at cs.purdue.edu.

Last Updated: Feb 9, 2026 2:22 PM