I am a Ph.D. candidate at the School of Information at the University of Texas at Austin, co-advised by Dr. Matt Lease and Dr. Jessy Li. I am part of the Laboratory for Artificial Intelligence and Human-Centered Computing (AI&HCC) and affiliated with the UT NLP Group. During my Ph.D., I have also interned at Amazon Alexa Responsible AI Research, Cisco Responsible AI Research, and the Max Planck Institute for Informatics, where I worked with Dr. Gerhard Weikum.
Before joining the Ph.D. program, I worked as a Software Engineer at Microsoft and as a Decision Scientist at Mu Sigma. I received my Bachelor of Engineering degree in Computer Science and Technology from IIEST, Shibpur.
Research
I am interested in the intersection of Natural Language Processing and Human-Computer Interaction, specifically focused on developing NLP technologies that complement the capabilities of human experts. My work centers on three key thrusts of research:
- Human-Centered NLP: How can we identify stakeholder needs for the practical adoption of NLP applications? How can we evaluate whether NLP applications are meeting those needs? How can research in human-centered NLP help push forward basic NLP research? How can we align NLP models to effectively complement human experts in critical fields? [CSCW '24] [IPM Journal]
- Interpretable NLP Models: How can we build NLP models that help stakeholders understand their inner workings? How can we effectively evaluate interpretable models? How can we use insights from interpretable models to steer generative model outputs? How can we build interpretable models that promote responsible and productive human-AI partnerships? [NAACL Findings '25] [ACL '22] [IPM Journal]
- Responsible Language Technologies: How can we detect and mitigate potential harms caused by language technologies? How can we make these models behave responsibly and not perpetuate societal biases? How can we protect workers who contribute to data collection for AI? [FnTIR Journal] [HCOMP '20] [ASIS&T '19]
News
- Our work on localizing and deleting toxic beliefs in autoregressive language models has been accepted to the Findings of NAACL 2025. The arXiv preprint is coming soon.
- Our co-design paper, Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI, has been awarded an honorable mention (top 3%) at CSCW 2024 [arXiv]
- I spent Fall 2023 as a research intern on the Cisco Responsible AI Research team, working on evaluating interpretable NLP models
- I spent Summer 2023 as a research intern on the Amazon Alexa Responsible AI team, working on developing interpretable NLP models
- Paper on Human-Centered NLP for Fact-Checking has been published in a special issue of the Information Processing & Management (IPM) journal (Impact Factor: 6.222) [arXiv]
- Paper on explaining black-box NLP models with case-based reasoning has been accepted at ACL 2022 [arXiv] [code]
- Paper on Interactive AI for Fact-Checking has been accepted at ACM CHIIR 2022 [arXiv]