Ramya Hebbalaguppe

I am a researcher specializing in visual computing and machine learning. My doctoral research, supervised by Prof. Chetan Arora at IIT Delhi, focused on novel methods to enhance trust and reliability in deep neural network (DNN) classifiers. During my doctoral studies, I developed techniques for out-of-distribution detection, uncertainty quantification, and the refinement of DNN models, backed by theoretical insights.

Before pursuing my doctoral research, I had the privilege of working with Prof. Ramakrishna Kakarala at Nanyang Technological University on High Dynamic Range (HDR) imaging algorithms. This work formed part of an image processing pipeline aimed at smartphone cameras, and was recognized with the Best Student Paper award at the 2012 SPIE conference in Burlingame, California. I completed my Master’s degree at the School of Electronic Engineering and Computing at Dublin City University in 2014, under the guidance of Prof. Noel O'Connor and Prof. Alan Smeaton. My focus during this time was on reducing false alarms in surveillance camera networks; a portion of this research was licensed to Netwatch Systems, and the team received the Invent award for the project.

Currently, I am a Senior Scientist at TCS Research Labs, within the Deep Learning and Artificial Intelligence (DLAI) Group located at the Research and Development Park, IIT Delhi, India. At TCS, my work spans various areas, including efficient inference of DNNs through model compression, trustworthy ML, continual learning, and, notably, the development of algorithms for creative and immersive content generation, such as images, videos, and 3D/4D data.

Outside work, I enjoy painting, traveling, cooking and baking, composting, planting tree saplings, and music.

> Email  /  Google Scholar  /  Twitter  /  Github  /  Travel

Research

Representative papers are highlighted below, spanning the following themes: reliable machine learning (out-of-distribution detection, uncertainty quantification, continual learning) and 2D/3D/4D computer vision.

Refine and Align: Confidence Calibration Through Multi-Agent Interaction in VQA
Ayush Pandey, Jai Bardhan, Ishita Jain, Ramya Hebbalaguppe, Rohan Raju Dhanakshirur, Lovekesh Vig
AAAI 2026 (Alignment Track)

Calibration in VQA measures how well a model’s confidence reflects correctness, which is critical for autonomous, high-stakes applications where models are often overconfident. We propose AlignVQA, a debate-based multi-agent framework where specialized VLMs generate answers and generalist agents critique and aggregate them to produce better-calibrated confidence estimates. Additionally, our AlignCal loss improves calibration during training, leading to significantly reduced calibration errors across VQA benchmarks.
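As background, calibration is commonly quantified with the Expected Calibration Error (ECE): predictions are binned by confidence, and the size-weighted gap between each bin's accuracy and mean confidence is averaged. A minimal NumPy sketch of this standard metric follows; the function name and the equal-width 10-bin scheme are my own illustrative choices, not code from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the size-weighted average
    of |empirical accuracy - mean confidence| over the bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# Perfectly calibrated batch: 75% confidence, 75% accuracy -> ECE = 0
conf = np.array([0.75, 0.75, 0.75, 0.75])
correct = np.array([1, 1, 1, 0])
print(expected_calibration_error(conf, correct))  # 0.0
```

An overconfident model (e.g. 95% confidence at 50% accuracy) would score an ECE of 0.45 here, which is the kind of gap calibration methods aim to close.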

Lifelong Learning in StyleGAN through Latent Subspaces
Adarsh K, Anmol Garg, Ramya Hebbalaguppe, Prathosh AP
Transactions on Machine Learning Research, 2024

StyleGAN is a powerful generative model but suffers from catastrophic forgetting when trained continuously on new data distributions. We propose StyleCL, which enables lifelong learning by learning task-specific latent subspace dictionaries and lightweight feature adaptors, while reusing prior knowledge when beneficial. This approach avoids forgetting, improves generation quality across datasets, and requires significantly fewer additional parameters per task.

Calibration Transfer via Knowledge Distillation
Ramya Hebbalaguppe, Mayank Baranwal, Kartik Anand, Chetan Arora
ACCV, 2024 -- [Oral Presentation] (top 5.6%)
[paper] [Suppl. Material (proofs)]

Knowledge Distillation for Calibration (KD(C)) extends distillation beyond accuracy transfer to produce lightweight, well-calibrated models. We show—both theoretically and empirically—that calibration can be improved without sacrificing accuracy. In some cases, KD(C) even surpasses standard training, achieving both better calibration and higher accuracy.

LoMOE: Localized Multi-Object Editing via Multi-Diffusion
Goirik Chakrabarty, Aditya Chandrasekar, Ramya Hebbalaguppe, Prathosh AP
ACM International Conference on Multimedia, 2024
[paper] [project page]

Diffusion models enable high-quality prompt-based image editing but struggle with precise, fine-grained changes using text alone. We propose a zero-shot multi-diffusion framework for localized multi-object editing, allowing additions, replacements, and edits in a single pass. Our method, along with the LoMOE-Bench dataset, outperforms prior approaches in both editing quality and speed.

ReMOVE: A Reference-free Metric for Object Erasure
Aditya Chandrasekar, Goirik Chakrabarty, Jai Bardhan, Ramya Hebbalaguppe, Prathosh AP
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), The First Workshop on the Evaluation of Generative Foundation Models, 2024
project page

We introduce ReMOVE, a novel reference-free metric for assessing object erasure efficacy in diffusion-based image editing models post-generation. Unlike existing measures such as LPIPS and CLIPScore, ReMOVE addresses the challenge of evaluating inpainting without a reference image, a common situation in practical scenarios. ReMOVE effectively distinguishes between object removal and replacement, a key issue in diffusion models due to the stochastic nature of image generation.

Transfer4D: A framework for frugal motion capture and deformation transfer
Shubh Maheshwari, Rahul Narain, Ramya Hebbalaguppe
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023  
project page

Animating a virtual character based on a real performance of an actor is a challenging task that currently requires expensive motion capture setups and additional effort by expert animators, rendering it accessible only to large production houses. The goal of our work is to democratize this task by developing a frugal alternative, termed Transfer4D, that uses only commodity depth sensors and further reduces animators' effort by automating the rigging and animation transfer process. Our approach can transfer motion from an incomplete, single-view depth video to a semantically similar target mesh, unlike prior works that make stricter assumptions, requiring the source to be noise-free and watertight.

Calibrating Deep Neural Networks Using Explicit Regularisation and Dynamic Data Pruning
Rishabh Patra*, Ramya Hebbalaguppe*, Tirtharaj Dash, Gautam Shroff, Lovekesh Vig
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023 -- [Spotlight Presentation] (top 10%)
project page

We demonstrate state-of-the-art deep neural network calibration performance by proposing a differentiable loss term that can be used effectively in gradient-descent optimisation, together with a dynamic data-pruning strategy that not only promotes legitimate high-confidence samples to enhance trust in DNN classifiers but also reduces the training time for calibration.

A Novel Data Augmentation Technique for Out-of-Distribution Sample Detection using Compounded Corruptions
Ramya Hebbalaguppe, Soumya Suvra Ghosal, Jatin Prakash, Harshad Khadilkar, Chetan Arora
European Conference on Machine Learning, 2022
project page

We propose a novel Compounded Corruption (CnC) technique for out-of-distribution (OOD) data augmentation. One of the major advantages of CnC is that it does not require any hold-out data apart from the training set. Our extensive comparison with 20 methods from the major conferences in the last 4 years shows that a model trained using CnC-based data augmentation significantly outperforms the SOTA, both in terms of OOD detection accuracy as well as inference time.

A stitch in time saves nine: A train-time regularizing loss for improved neural network calibration
Ramya Hebbalaguppe*, Jatin Prakash, Neelabh Madan*, Chetan Arora
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 -- [Oral Presentation] (top 4%)
project page

We propose a novel auxiliary loss function, Multi-class Difference in Confidence and Accuracy (MDCA), for deep neural network calibration. The loss can be combined with any application-specific classification loss in the image, NLP, and speech domains. We also demonstrate the utility of the loss in semantic segmentation tasks.
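As a rough illustration of the idea, here is my own NumPy sketch of the batch statistic the loss is built on: per class, the mean predicted probability over the batch is compared against the empirical class frequency. This is not the paper's implementation, which operates on differentiable softmax outputs inside a training framework.

```python
import numpy as np

def mdca_loss(probs, labels):
    """Per class, compare the mean predicted probability over the batch
    with the empirical class frequency, then average over classes."""
    n, k = probs.shape
    onehot = np.eye(k)[labels]  # (n, k) one-hot targets
    return np.abs(probs.mean(axis=0) - onehot.mean(axis=0)).sum() / k

# Overconfident batch: the model puts 0.9 on class 0 for every sample,
# but only half the labels are class 0 -> nonzero calibration penalty
probs = np.array([[0.9, 0.1], [0.9, 0.1]])
labels = np.array([0, 1])
print(mdca_loss(probs, labels))  # 0.4
```

Because the statistic is a smooth function of the predicted probabilities, it can be added to a standard cross-entropy objective as an auxiliary term.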

Current Research Team, Research and Innovation Park, IIT Delhi

  1. Ayush Pandey - Topic: Calibration in Agentic AI and LLMs
    Interns (Summer '25): Arman Fatima (IIT Dharwad), Ishita Jain (GaTech), Ayush Pandey (IISER Bhopal), Khuushi Maheshwari (MIT), and Nishant Singh (IIIT Delhi)

Alumni

(The list includes full-time researchers, pre-doctoral fellows, and research interns.)

  1. Srinidhi Hegde → Doctoral student at Purdue University (With Prof. Sotiris Nousias)
  2. Jai Bardhan (IIIT Hyderabad) → Researcher at the Czech Technical University in Prague (With Dr. Josef Sivic)
  3. Adarsh Kappiyath → Doctoral student at the University of Surrey, UK (With Dr. Lu Yin and Dr. Xiatian Zhu.)
  4. Pranay Gupta → Doctoral student at CMU (With Prof. Henny Admoni)
  5. Meghal Dani → Doctoral student at IMPRS-IS, Max Planck School (With Prof. Zeynep Akata and Prof. Bernhard Schölkopf)
  6. Surabhi Nath → Doctoral student at the Max Planck School of Cognition and the MPI for Biological Cybernetics (With Prof. Peter Dayan)
  7. Goirik Chakrabarty → Doctoral student, University of Göttingen (With Prof. Fabian Sinz)
  8. Jatin Prakash → Doctoral student at New York University (With Prof. Rajesh Ranganath)
  9. Neelabh Madan → Doctoral student at New York University
  10. Gaurav Gupta → Doctoral student at Rice University
  11. Apoorv Khattar → Doctoral student at University of Manchester, UK
  12. Neel Rakholia → Masters Student at Stanford
  13. Sharan Yalburgi → Visiting researcher at MIT (probabilistic ML project)
  14. Shubh Maheshwari → Graduate student at UCSD
  15. Jitender Maurya → Researcher, Toshiba
  16. Archie Gupta → SDE, Microsoft
  17. Varun Jain → Masters student at CMU → Microsoft Fellow
  18. Additya Popli → SDE at Google
  19. Kshitiz Jain → Grad student at IITD
  20. Aravind Udupa → Grad student at IITD
  21. Soumya Suvra Ghosal → Masters Student at University of Wisconsin
  22. Gaurav Garg → Accenture
  23. Ramakrishna Perla → TTEC Digital

Academia: Thesis supervision

  1. Aditya C (IISc, Bangalore) - co-supervised M.Tech thesis. Topic: Metrics for image editing (Best Thesis Award)
  2. Shreyash Mohatta (BITS, Goa) - supervised M.Tech thesis on Egocentric Realtime Gesture Recognition with Dr. Ashwin Srinivasan → Masters student at NCSU
  3. Rishabh Patra (BITS, Goa)- supervised B.Tech thesis on uncertainty calibration with Dr. Tirtharaj Dash → SDE Amazon
  4. Ashwin Vaswani (BITS, Goa) - supervised BTP on Data-free Iterative Knowledge Distillation with Prof. Ashwin Srinivasan → Google Research → Masters student at CMU
  5. Het Shah (BITS, Goa) - supervised BTP on Knowledge Distillation, Pruning and Quantization with Prof. Ashwin Srinivasan → Research Associate at Google Research

Website design inspired by Jon Barron's.