Shuqi Wang

About Me

Shuqi Wang 王书琪

I am actively seeking postdoctoral positions starting late 2026 in predictive processing, computational neurolinguistics, and human–LLM comparison.

I am a PhD candidate in the Language Processing Lab at The Chinese University of Hong Kong, supervised by Prof. Zhenguang G. Cai. I am supported by the Hong Kong PhD Fellowship (HKPF).

My research investigates predictive language processing, with a focus on how humans and large language models anticipate upcoming linguistic input during real-time comprehension. Using behavioral methods such as eye-tracking and the visual world paradigm, I compare how humans and multimodal LLMs integrate structural, semantic, and prosodic cues to generate predictions — and where their mechanisms diverge. I am also interested in bilingual and L2 processing, particularly how cognitive load modulates prediction during consecutive interpreting. Looking ahead, I am keen to extend this work using neuroimaging methods to investigate the neural dynamics of predictive coding in language comprehension.

Before my PhD, I studied L2 processing and bilingualism at Peking University (M.A.) and Nanjing University (B.A.). I am also passionate about building academic communities — I co-founded the Computational Neurolinguistics Forum (计算神经语言学, 7,500+ followers), an academic platform bridging psycholinguistics, neuroscience, and computational modeling for the Chinese-speaking research community.

Email: shuqiwang@link.cuhk.edu.hk
Google Scholar

Curated Research

Same Predictions, Different Processes: Comparing Human and Multimodal LLM Prediction

Using the visual world paradigm with spoken Mandarin and visual displays, we show that humans and Qwen-2.5-Omni produce similar prediction outputs but differ in processing dynamics — a 650 ms delay in structural sensitivity and anomalous cue integration in the model reveal algorithmic-level divergence beneath computational-level convergence.

CoNLL 2026 @ ACL (under review)
What to Predict? Sentence Structure Influences Contrast Predictions in Humans and LLMs

Mandarin dative structures (DO/PO) modulate contrastive focus predictions differently in humans and LLMs. Both show the same direction of structural effects, but models amplify them 3× relative to humans — and correlate more with spoken than written human data. 🏆 Best Paper Nomination.

CMCL 2025 @ NAACL
A Multimodal LLM "Foresees" Objects Based on Verb Information but Not Gender

LLaVA 1.5 shows verb-based anticipatory attention in visual scenes — paralleling human VWP patterns — but fails at gender-based prediction. Layer-wise analysis reveals middle transformer layers are most responsible for verb-driven predictive behavior.

CoNLL 2024

Publications

2026
Same predictions, different processes: A multi-level comparison of human and multimodal LLM language prediction
Wang, S. & Cai, Z. G.
CoNLL 2026 Under Review
2025
What to predict? Exploring how sentence structure influences contrast predictions in humans and large language models
Wang, S., Duan, X., & Cai, Z. G.
CMCL 2025 @ NAACL 🏆 Best Paper Nomination
2024
A multimodal large language model "foresees" objects based on verb information but not gender
Wang, S., Duan, X., & Cai, Z. G.
CoNLL 2024, 435–441
2024
Cai, Z. G., Duan, X., Haslett, D., Wang, S., & Pickering, M. J.
CMCL 2024 @ ACL, 37–56 · 169+ citations
2021
Jiang, S., Lu, S., & Wang, S.
Journal of Chinese Language Acquisition
2018
Wu, M., Wang, S., & He, W.
Chinese Language Globalization Studies, (01), 163–173
In prep
Predictive language processing in humans and large language models: A review
Wang, S. & Cai, Z. G.
In Preparation

Conference Talks

Wang, S., Duan, X., & Cai, Z. G. (2025). What to predict? Exploring how sentence structure influences contrast predictions in humans and large language models. CMCL 2025 @ NAACL. Albuquerque, NM. 🏆 Best Paper Nomination.
Wang, S., Duan, X., & Cai, Z. G. (2024). A multimodal large language model "foresees" objects based on verb information but not gender. CoNLL 2024. Miami, FL.
Wu, M., Wang, S., & He, W. (2018). The effects of language background on listener's perception, intelligibility and comprehensibility of accented Chinese speech. 14th International Conference on Chinese Language Teaching.

Poster Presentations

Wang, S. & Cai, Z. G. (2024). Fewer predictions in L1 source language in consecutive interpreting when cognitive load is high. HSP 2024. University of Michigan.