Karthik Narasimhan

Assistant Professor, Computer Science, Princeton University
Co-director, Princeton Natural Language Processing

My research spans natural language processing and reinforcement learning, with a view towards building intelligent agents that learn to operate in the world through both experience and existing human knowledge (e.g., text). I am especially interested in developing autonomous systems that can 1) acquire an understanding of language through interaction with their environment, and 2) utilize textual knowledge to drive their decision making.

I received my PhD from MIT, where I was advised by Prof. Regina Barzilay, and spent a year as a visiting research scientist at OpenAI before joining Princeton.

Note for prospective students: We are always looking for motivated students to join our lab. If you are not a Princeton student, please apply directly to the PhD/Master's programs. If you are already at Princeton and 1) a PhD student, send me an email, or 2) a Master's/Undergraduate student, fill out this form.


research


Embodied Language Understanding

While traditional NLP tasks have focused on learning representations and making decisions from text alone, understanding language requires situational context to resolve ambiguities, provide appropriate responses, and avoid incorrect inferences. A major part of our research focuses on methods for embodied language understanding, with the goal of teaching agents to understand and use language in grounded, multi-modal environments. Examples include building agents for interactive settings such as text adventure games [1,2,3,4], and tackling tasks that involve visual understanding [1,2] and instruction following [1,2].


Language-guided machine learning

In machine learning, language has predominantly been treated as just another modality for applying statistical learning techniques. The question of ‘what can natural language provide for machine learning?’ remains relatively unexplored. Humans have used language as a means of encoding and communicating task and domain knowledge for centuries: how can we similarly leverage such knowledge to train machines? We explore this from several angles, training computers to use language for semantic supervision [1], read manuals for decision making [1,2], understand safety constraints specified in language [1], and learn from linguistic feedback [1].



Representation learning for NLP

Learning universal semantic representations for words and sentences has been a long-standing challenge in NLP. We develop new methods for representation learning with an emphasis on efficiency and simplicity. Examples include generative pre-training (GPT) [1], attention guidance for Transformers [1], and calibration techniques for language models [1]. We also analyze the representations learned by large LMs, including theoretical questions such as the ability of self-attention networks to capture recursion [1] or the amount of memory in a language model [1], and empirical ones such as the multilingual capabilities of Transformers [1].



Safety and generalization for reinforcement learning

Despite impressive achievements, classic reinforcement learning algorithms remain brittle, slow, and difficult to use for training agents safely. We work on improving reinforcement learning along two axes: a) robustness and b) generalization. Sample projects include constrained policy learning for safe RL [1,2,3], algorithms for multi-stage exploration [1], personalized or multi-objective decision making [1,2], and incorporating task-agnostic dynamics priors into RL agents [1].