PARAMVEER DHILLON
Associate Professor
Office: 5544 Leinweber, 2200 Hayward St., Ann Arbor, MI 48109
Please follow the links below to navigate to specific subsections of the site or just scroll down to view all the content.
Research Interests | Recent Updates | Publications | Professional Background | Teaching | Awards | Research Group | Service | Software
My current research focuses on how Large Language Models (LLMs) reshape human creativity, decision-making, and information consumption. My work bridges the disciplines of AI, Human-Computer Interaction (HCI), and Information Systems, and is broadly situated in Human-Centered AI.
Key Research Areas:
Human-LLM Collaboration/Co-writing and Creative Labor Markets: We investigate how people create with LLMs, examining questions of authorship, ownership, and market disruption. Our empirical work provides the first systematic evidence that fine-tuned LLMs can produce undetectable, professional-quality writing that competes directly with human authors.
Personalization and Human-Centric Recommender Systems: We design LLM-based systems that respect human agency while providing personalized experiences. This includes developing temptation-aware recommendation algorithms that help users navigate between immediate desires and long-term goals (a toy sketch of this idea appears at the end of this list), and creating personalization methods that adapt to individual preferences without reinforcing filter bubbles.
Causal Methods for Language-Based Interventions: We advance causal inference techniques for high-dimensional text treatments. Our recent work introduces policy learning frameworks for natural language action spaces, enabling LLMs to learn optimal intervention strategies through gradient-based optimization on language embeddings. This line of work supports a variety of applications ranging from therapeutic dialogue refinement to content moderation, where each text-based decision impacts future outcomes.
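To make the causal-methods thread above concrete, here is a minimal, hypothetical sketch of policy learning over a natural-language action space: candidate text interventions are embedded, a toy differentiable outcome model scores embeddings, gradient ascent runs in embedding space, and the result is snapped back to the nearest candidate text. The encoder, outcome model, and decoding step are all illustrative placeholders and assumptions, not the actual framework from our NAACL 2024 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8

# Hypothetical candidate interventions (e.g., alternative counselor replies) and
# their embeddings; in practice these would come from an LLM encoder.
candidates = ["reply_supportive", "reply_neutral", "reply_directive"]
candidate_embs = rng.normal(size=(len(candidates), EMB_DIM))

# Toy differentiable outcome model: the predicted outcome is a linear function of
# the intervention's embedding (a stand-in for a learned outcome model).
w = rng.normal(size=EMB_DIM)

def predicted_outcome(z):
    return float(w @ z)

def outcome_gradient(z):
    return w  # gradient of the linear toy model with respect to the embedding

# Gradient ascent in embedding space, starting from one candidate's embedding.
z = candidate_embs[0].copy()
for _ in range(50):
    z = z + 0.1 * outcome_gradient(z)

# "Decode" by snapping the optimized embedding back to the nearest candidate text.
distances = np.linalg.norm(candidate_embs - z, axis=1)
best = candidates[int(np.argmin(distances))]
print("chosen intervention:", best)
print("predicted outcome:", round(predicted_outcome(candidate_embs[candidates.index(best)]), 2))
```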
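For the recommendation-and-temptation thread above, here is a minimal, hypothetical sketch of what a temptation-aware ranking rule can look like: each item carries an immediate-appeal score and a long-term-value score, and the recommender blends them with a user-controlled weight. The item scores and the blending rule are illustrative placeholders, not the method from our RecSys 2025 paper.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    immediate_appeal: float  # how tempting the item is right now
    long_term_value: float   # how well it serves the user's stated goals

def temptation_aware_rank(items, goal_weight=0.7):
    """Rank items by a convex blend of long-term value and immediate appeal.

    goal_weight=1.0 recommends purely for the user's long-term goals;
    goal_weight=0.0 recommends purely for in-the-moment temptation.
    """
    def score(item):
        return goal_weight * item.long_term_value + (1 - goal_weight) * item.immediate_appeal
    return sorted(items, key=score, reverse=True)

catalog = [
    Item("10-minute workout plan", immediate_appeal=0.3, long_term_value=0.9),
    Item("Celebrity gossip roundup", immediate_appeal=0.9, long_term_value=0.1),
    Item("Intro to personal finance", immediate_appeal=0.4, long_term_value=0.8),
]

for item in temptation_aware_rank(catalog, goal_weight=0.7):
    print(item.title)
```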
Active Research Threads:
I am an Associate Professor in the School of Information at the University of Michigan (tenured 2025) and a Digital Fellow at MIT's Initiative on the Digital Economy. I joined Michigan as an Assistant Professor in 2019.
I hold an A.M. in Statistics, and an M.S.E. and Ph.D. (2015) in Computer Science, all from the University of Pennsylvania, where I was advised by Professors Lyle Ungar, Dean Foster, and James Gee. My dissertation, "Advances in Spectral Learning with Applications to Text Analysis and Brain Imaging," received the Morris and Dorothy Rubinoff Award for outstanding doctoral dissertation. This work introduced theoretically grounded spectral methods for learning word embeddings (JMLR 2015, ICML 2012, NeurIPS 2011) and for brain image segmentation (NeuroImage 2014), achieving both computational efficiency and provable convergence guarantees. My contributions to spectral decomposition and context-dependent representation learning provided early theoretical foundations for understanding how distributed representations capture semantic relationships, principles that remain central to modern transformer architectures. During my Ph.D. I also worked on establishing connections between PCA and ridge regression (JMLR 2013) and on provably faster row- and column-subsampling algorithms for least-squares regression (NeurIPS 2013a,b).
Following my Ph.D., I completed a postdoctoral fellowship at MIT with Professor Sinan Aral, where I worked on problems at the intersection of Machine Learning, Causal Inference, Network Science, and Information Systems. This research program produced several foundational contributions: establishing tractable methods for influence maximization under empirically grounded network models (Nature Human Behaviour 2018); designing optimal digital paywall strategies that balance subscription revenue with content demand, leveraging quasi-experiments (Management Science 2020); developing neural matrix factorization techniques for modeling temporal dynamics in user preferences (Marketing Science 2021); quantifying the information advantages of network brokers through novel diversity metrics (Management Science 2023); and creating surrogate-index methods for optimizing long-term outcomes in sequential decision problems (Management Science 2024).
Long before all this, I was a carefree undergrad studying Electronics & Electrical Communication Engineering at PEC in my hometown of Chandigarh, India. I developed my interest in AI/ML, and the desire to pursue a Ph.D., through three memorable summer internships at the Computer Vision Center @ Barcelona [summer 2006], the Max Planck Institute for Intelligent Systems @ Tuebingen [summer 2008], and the Information Sciences Institute/USC @ Los Angeles [summer 2009].
I supervise students interested in Human-centric AI and Information Systems. For Fall ’26, I am specifically recruiting Ph.D. students with strong AI/HCI skills: e.g., building polished front-ends (React/TypeScript), rapidly prototyping and running online user studies, fine-tuning and evaluating LLMs, and solid data/ML engineering and programming. The research project will be similar in spirit to this paper.
Master’s and undergraduate students (already at the University of Michigan) may email me their CV and transcripts. Prospective Ph.D. students are encouraged to apply to our Ph.D. program here and mention my name as a potential advisor. The application deadline is December 1 each year.
Below is a list of selected publications that highlight my core research interests and contributions. A complete list of all my publications is available here.
*indicates alphabetical author listing.
Recommendation and Temptation.
Sanzeed Anwar, Paramveer Dhillon, and Grant Schoenebeck.
RecSys (ACM Conference on Recommender Systems), 2025.
[PDF]
How Digital Paywalls Shape News Coverage.
Paramveer Dhillon, Anmol Panda, and Libby Hemphill.
PNAS Nexus, 2025.
[PDF]
Causal Inference for Human-Language Model Collaboration.
Bohan Zhang, Yixin Wang, and Paramveer Dhillon.
NAACL (Annual Conference of the North American Chapter of the ACL, Main Conference), 2024.
[PDF]
Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models.
Paramveer Dhillon, Somayeh Molaei, Jiaqi Li, Maximilian Golub, Shaochun Zheng, and Lionel Robert.
CHI (SIGCHI Conference on Human Factors in Computing Systems), 2024.
[PDF]
Filter Bubble or Homogenization? Disentangling the Long-Term Effects of Recommendations on User Consumption Patterns.
Sanzeed Anwar, Grant Schoenebeck, and Paramveer Dhillon.
WWW (The Web Conference), 2024.
[PDF]
Targeting for long-term outcomes.
Jeremy Yang, Dean Eckles, Paramveer Dhillon, and Sinan Aral.
Management Science, 2023.
[PDF]
What (Exactly) is Novelty in Networks? Unpacking the Vision Advantages of Brokers, Bridges, and Weak Ties.
Sinan Aral and Paramveer Dhillon.
Management Science, 2022.
[PDF]
Modeling Dynamic User Interests: A Neural Matrix Factorization Approach.
Paramveer Dhillon and Sinan Aral.
Marketing Science, 2021.
[PDF]
Digital Paywall Design: Implications for Content Demand & Subscriptions.*
Sinan Aral and Paramveer Dhillon.
Management Science, 2020.
[PDF]
Social Influence Maximization under Empirical Influence Models.*
Sinan Aral and Paramveer Dhillon.
Nature Human Behaviour, 2018.
[PDF]
[Supplementary Information]
Eigenwords: Spectral Word Embeddings.
Paramveer Dhillon, Dean Foster, and Lyle Ungar.
JMLR (Journal of Machine Learning Research), 2015.
[PDF]
New Subsampling Algorithms for Fast Least Squares Regression.
Paramveer Dhillon, Yichao Lu, Dean Foster, and Lyle Ungar.
NeurIPS (Advances in Neural Information Processing Systems Conference), 2013.
[PDF]
[Supplementary Information]
Faster Ridge Regression via the Subsampled Randomized Hadamard Transform.
Yichao Lu, Paramveer Dhillon, Dean Foster, and Lyle Ungar.
NeurIPS (Advances in Neural Information Processing Systems Conference), 2013.
[PDF]
[Supplementary Information]
A Risk Comparison of Ordinary Least Squares vs Ridge Regression.
Paramveer Dhillon, Dean Foster, Sham Kakade, and Lyle Ungar.
JMLR (Journal of Machine Learning Research), 2013.
[PDF]
Two Step CCA: A new spectral method for estimating vector models of words.
Paramveer Dhillon, Jordan Rodu, Dean Foster, and Lyle Ungar.
ICML (International Conference on Machine Learning), 2012.
[PDF] [Supplementary Information]
Multi-View Learning of Word Embeddings via CCA.
Paramveer Dhillon, Dean Foster, and Lyle Ungar.
NeurIPS (Advances in Neural Information Processing Systems Conference), 2011.
[PDF] [Supplementary Information]
Minimum Description Length Penalization for Group and Multi-Task Sparse Learning.
Paramveer Dhillon, Dean Foster, and Lyle Ungar.
JMLR (Journal of Machine Learning Research), February 2011.
[PDF]
Last Modified: 1.7.26