<div dir="ltr"><div class="gmail_quote gmail_quote_container"><div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div style="text-align:center"><br><br></div><div style="text-align:center"><img src="cid:ii_195a9b0c05acb971f161" alt="image.png" width="542" height="87"><br></div><div style="text-align:center"><font color="#0b5394" size="4"><b>Please join us for a joint CS/EE faculty candidate lecture next week!</b></font></div><div><br></div><div><font color="#0b5394" size="4"><b>When:</b> Wednesday, March 26th</font></div><div><font color="#0b5394" size="4"><b>Where:</b> CSB 451</font></div><div><font color="#0b5394" size="4"><b>Who:</b> Sadhika Malladi</font></div><div><font color="#0b5394" size="4"><br></font></div><div><font color="#0b5394" size="4"><b>Title: </b></font></div><div><font color="#0b5394">Deep Learning Theory in the Age of Generative AI</font></div><div><br></div><div><b style="color:rgb(11,83,148);font-size:large">Abstract:</b></div><div><font color="#0b5394">Modern deep learning has achieved remarkable results, but the design of training methodologies largely relies on guess-and-check approaches. Thorough empirical studies of recent massive language models (LMs) are prohibitively expensive, underscoring the need for theoretical insights, but classical ML theory struggles to describe modern training paradigms. I present a novel approach to developing prescriptive theoretical results that can directly translate to improved training methodologies for LMs. My research has yielded actionable improvements in model training across the LM development pipeline; for example, my theory motivates the design of MeZO, a fine-tuning algorithm that reduces memory usage by up to 12x and halves the number of GPU-hours required.
Throughout the talk, to underscore the prescriptiveness of my theoretical insights, I will demonstrate the success of these theory-motivated algorithms in novel empirical settings published after the theory.</font></div><div><br></div><div><font color="#0b5394" size="4"><b>Bio:</b></font></div></div><div><font color="#0b5394">Sadhika Malladi is a final-year PhD student in Computer Science at Princeton University advised by Sanjeev Arora. Her research advances deep learning theory to capture modern-day training settings, yielding practical training improvements and meaningful insights into model behavior. She has co-organized multiple workshops, including Mathematical and Empirical Understanding of Foundation Models at ICLR 2024 and Mathematics for Modern Machine Learning (M3L) at NeurIPS 2024. She was named a 2025 Siebel Scholar.</font></div></div></div></div></div></div>
</div></div>