Richa Rastogi

Email

Hi! I'm a Computer Science PhD Candidate at Cornell University, where I am fortunate to be advised by Professor Thorsten Joachims.
My research interests broadly lie in learning from feedback in interactive systems, ranging from recommender systems to LLMs. As such, I am a machine learning generalist with a focus on probabilistic modeling and reinforcement learning. I am interested in studying and developing principled methods that tackle real-world complexities such as sparse rewards, long horizons, and computational inefficiency, with an eye towards the social impact of these systems. In 2025, I enjoyed the opportunity to work with several Netflix researchers.
Prior to starting my PhD, I had a wonderful time working at Amazon, Stanford, Georgia Tech, and Delhi University, India.

News

I will be at NeurIPS'25 in San Diego presenting our MultiScale Contextual Bandits paper. If you are interested in collaborating or wish to schedule a meeting, please feel free to reach out.

Selected Works (complete works at Google Scholar)

MultiScale Contextual Bandits for Long-term Objectives
Richa Rastogi, Yuta Saito, Thorsten Joachims.
Neural Information Processing Systems (NeurIPS), 2025; ICML Workshop on Models of Human Feedback for AI Alignment, 2024
We propose MultiScale Policy Learning to contextually reconcile the disconnect between the timescales of short-term interventions (e.g., rankings, token feedback) and long-term feedback (e.g., user retention, sentence feedback). Our method learns interventions and policies at multiple interdependent timescales in settings ranging from recommender systems to LLMs.


Fairness in Ranking under Disparate Uncertainty
Richa Rastogi, Thorsten Joachims.
ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2024; UAI Workshop on Epistemic Uncertainty, 2023, Spotlight (Oral)
We introduce Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking and a practical algorithm that provably reduces the group unfairness induced when the uncertainty of the underlying relevance model differs between groups of candidates.
code | slides | bibtex | poster | Media: Cornell News

Semi-Parametric Inducing Point Networks and Neural Processes
Richa Rastogi, Yair Schiff, Alon Hacohen, Zhaozhi Li, Ian Lee, Yuntian Deng, Mert R. Sabuncu, Volodymyr Kuleshov.
International Conference on Learning Representations (ICLR), 2023
We introduce Semi-Parametric Inducing Point Networks (SPIN), a general-purpose architecture that can query the training set at inference time in a compute-efficient manner.
code | slides | bibtex