
Causal Inference for Robust, Reliable, and Responsible NLP


Tuesday, February 27, 2024 4pm to 5pm



Science and Engineering Complex (SEC), SEC LL2.224

Despite remarkable progress in large language models (LLMs), natural language processing (NLP) models are known to fit spurious correlations, which can lead to unstable behavior under domain shift or adversarial attack. In my research, I develop a causal framework for robust and fair NLP that examines whether a model's decision-making mechanism aligns with the causal structure of human decision-making. Under this framework, I develop a suite of stress tests for NLP models across tasks such as text classification, natural language inference, and math reasoning, and I propose to enhance robustness by aligning the model's learning direction with the underlying data-generating direction. Using this causal inference framework, I also test the validity of causal and logical reasoning in models, with implications for fighting misinformation, and I extend the reach of NLP by applying it to analyze the causality behind socially important phenomena, such as causal analysis of policies and measurement of gender bias. Together, this work charts a roadmap toward socially responsible NLP by ensuring the reliability of models and extending their impact to a range of social applications.
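To make the stress-testing idea concrete, here is a minimal toy sketch (an illustration of the general technique, not the speaker's actual method or code): a classifier that has latched onto a spurious token changes its prediction under a label-preserving edit, which a simple invariance check can flag.

    # Toy sketch: a "model" that has fit a spurious correlation
    # (the token "amazing"), plus a stress test that checks whether
    # predictions survive a label-preserving perturbation.

    def toy_classifier(text: str) -> str:
        # Stand-in for a trained model that keys on a spurious token.
        return "positive" if "amazing" in text.lower() else "negative"

    def perturb(text: str) -> str:
        # Label-preserving edit: swap the spurious token for a synonym.
        return text.lower().replace("amazing", "excellent")

    stress_set = [
        "The plot was amazing and deeply moving.",  # gold label: positive
        "An amazing waste of two hours.",           # gold label: negative
    ]

    for text in stress_set:
        before = toy_classifier(text)
        after = toy_classifier(perturb(text))
        status = "UNSTABLE" if before != after else "stable"
        print(f"{status}: {before!r} -> {after!r} | {text}")

Both examples are flagged UNSTABLE: the prediction flips under a meaning-preserving edit, revealing that the model relied on the spurious token rather than the underlying sentiment.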
