Dan Jurafsky - Home Page - Stanford University

Headshot of Dan Jurafsky. Image: Do Pham, Stanford
DAN JURAFSKY Reynolds Professor in Humanities, Professor of Linguistics Professor of Computer Science Stanford University
I study natural language processing (NLP), its implications for society, and its applications to linguistics and the other social and cognitive sciences. I am a past MacArthur Fellow and also work on the language of food.
[email protected] · Margaret Jacks 117, Stanford, CA 94305-2150
BIO · CV · X/BLUESKY: jurafsky, @jurafsky
PEOPLE: NLP group
WHERE'S DAN?
LANGUAGE OF FOOD: blog · seminar class · articles

TEACHING THIS YEAR
AUTUMN 2025: CS 329R: Race and Natural Language Processing (co-taught with Jennifer Eberhardt), Tue 1:30-4:00 PM
WINTER 2026: CS 124 / LING 180: From Languages to Information, Tue/Thu 3:00-4:20 PM, Hewlett 200
SPRING 2026: The Language of Food, at Stanford's Madrid campus!
2026-2027: On sabbatical
WINTER 2028: CS 124 (probably)
Earlier Courses

LECTURE VIDEOS
CS124: YouTube lecture videos
2012 NLP Online (with Chris Manning): YouTube channel lecture videos · Slides
BOOKS
Speech and Language Processing, Dan Jurafsky and James H. Martin, 3rd edition draft chapters
The Language of Food, Dan Jurafsky, James Beard Award Finalist
2026 ARTICLES [ALL PUBS] [GOOGLE SCHOLAR]

Preprints

Myra Cheng, Robert D. Hawkins, Dan Jurafsky. 2026. Accommodation and Epistemic Vigilance: A Pragmatic Account of Why LLMs Fail to Challenge Harmful Beliefs. Preprint

Bianca Datta, Markus J. Buehler, Yvonne Chow, Kristina Gligoric, Dan Jurafsky, David L. Kaplan, Rodrigo Ledesma-Amaro, Giorgia Del Missier, Lisa Neidhardt, Karim Pichara, Benjamin Sanchez-Lengeling, Miek Schlangen, Skyler R. St. Pierre, Ilias Tagkopoulos, Anna Thomas, Nicholas J. Watson, Ellen Kuhl. 2025. AI for Sustainable Future Foods. arXiv preprint arXiv:2509.21556.

Kaitlyn Zhou, Kristina Gligorić, Myra Cheng, Michelle S. Lam, Vyoma Raman, Boluwatife Amin, Caeley Woo, Michael Brockman, Hannah Cha, and Dan Jurafsky. Attention to Non-Adopters. arXiv.

2026

Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky. 2026. Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence. In press, Science.

Isabel O. Gallegos, Chen Shani, Weiyan Shi, Federico Bianchi, Izzy Gainsburg, Dan Jurafsky, Robb Willer. 2026. Labeling messages as AI-generated does not reduce their persuasive effects. PNAS Nexus, Volume 5, Issue 2, February 2026, pgag008.

Moran Mizrahi, Chen Shani, Gabriel Stanovsky, Dan Jurafsky, Dafna Shahaf. 2026. Cooking Up Creativity: A Cognitively-Inspired Approach for Enhancing LLM Creativity through Structured Representations. In press, TACL 2026.

Moussa Koulako Bala Doumbouya, Dan Jurafsky, Christopher D. Manning. 2026. Tversky Neural Networks: Psychologically Plausible Deep Learning with Differentiable Tversky Similarity. To appear, ICLR 2026.

Myra Cheng*, Sunny Yu*, Cinoo Lee, Pranav Khadpe, Lujain Ibrahim, Dan Jurafsky. 2026. ELEPHANT: Measuring and Understanding Social Sycophancy in LLMs. To appear, ICLR 2026. [code] Press coverage by MIT Technology Review and VentureBeat.

Chen Shani, Dan Jurafsky, Yann LeCun, Ravid Shwartz-Ziv. 2026. From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning. To appear, ICLR 2026.

Martijn Bartelds*, Ananjan Nandi*, Moussa K. B. Doumbouya, Dan Jurafsky, Tatsunori Hashimoto, Karen Livescu. 2026. CTC-DRO: Robust Optimization for Reducing Language Disparities in Speech Recognition. To appear, ICLR 2026.

Mirac Suzgun, Mert Yuksekgonul, Federico Bianchi, Dan Jurafsky, James Zou. 2026. Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory. To appear, EACL 2026.

Laya Iyer, Pranav Somani, Alice Guo, Dan Jurafsky, Chen Shani. 2026. Beyond Tokens: Concept-Level Training Objectives for LLMs. To appear, EACL 2026.