Kunal Handa

I am an MSc by Research student at the University of Oxford, advised by Yarin Gal and Sebastian Farquhar. Previously, I was a linguistics and computer science student at Brown University, advised by Ellie Pavlick and Chen Sun. I'm broadly interested in 1) exploring how foundation models can be safely and robustly deployed in complex, real-world scenarios and 2) grounding this discussion in our understanding of human cognition and value systems.

I have been fortunate to work with some amazing people as a part of the Oxford OATML Group, Stanford CoCoLab, Brown LUNAR Lab, Stanford NLP Group, Brown PALM Lab, and Stanford LangCog Lab.

You can contact me at: kunal [underscore] handa [at] alumni [dot] brown [dot] edu


I'm exploring how to improve large language models' preference-learning abilities with Alex Tamkin, Belinda Li, Ellie Pavlick, Noah Goodman, and Jacob Andreas.


Kunal Handa, Margarett Clapper, Jessica Boyle, Rose E Wang, Diyi Yang, David Yeager, Dorottya Demszky. To appear in Empirical Methods in Natural Language Processing (EMNLP), 2023.

Tian Yun*, Zilai Zeng*, Kunal Handa, Ashish V Thapliyal, Bo Pang, Ellie Pavlick, Chen Sun. To appear in Empirical Methods in Natural Language Processing (EMNLP), 2023.

Task Ambiguity in Humans and Language Models
Alex Tamkin*, Kunal Handa*, Avash Shrestha, Noah Goodman (* denotes equal contribution). In the International Conference on Learning Representations (ICLR), 2023.

Peekbank: An open, large-scale repository for developmental eye-tracking data of children’s word recognition
Martin Zettersten... Kunal Handa... & Michael C Frank. In Behavior Research Methods (BRM), 2022.

Other Writing

The Role of Technology in Elections: The Voyage of Voters’ Data
Kunal Handa. In Conduit, the Brown University Computer Science Annual Magazine, Volume 32, 2022.

Racial Bias and the Loaded Language of Gun-Violence Related Reporting
Kunal Handa*, Arun Chintalapati*

Trying to Give a Shit About "Give a Shit": A Compositional Semantics Perspective
Kunal Handa


At Brown, I served as the Socially-Responsible Computing Teaching Assistant for CS1470: Deep Learning. I designed content, some of which is available via the course's website, on deep learning's potential societal harms, and conducted exercises that examined ethical frameworks, the cyclical nature of language models' biases, and the pros and cons of regulating ML advancements.

I also tutor incarcerated and previously incarcerated individuals in HiSET test preparation and essay writing. Intermittently, I teach group classes such as Applying to College and Introduction to Philosophy at juvenile detention centers.

Academic Service

I reviewed for Empirical Methods in Natural Language Processing (EMNLP) 2023 and will review for the Socially Responsible Language Modelling Research (SoLaR) Workshop at the Conference on Neural Information Processing Systems (NeurIPS) 2023.