Kerem Dayı


Research

Through my research, I aim to tackle core problems in practical applications from a rigorous perspective, designing faster and more robust algorithms. A unifying theme in my work is learning and optimization, as these two areas underlie many problems at the frontier of CS research. Toward this goal, I have worked on deep learning theory and distributed optimization.

Deep Learning Theory

My most recent work, advised by Sitan Chen, quantifies separations between low-rank fine-tuning (LoRA) with online SGD and feature learning ('learning from scratch'). In this work, we identified a distinct regime under which LoRA operates, different from both the linearized kernel ('NTK') regime and the feature learning regime. Remarkably, while learning from scratch on $d$-dimensional data can require $O(d^{l})$ iterations, where $l$ is governed by the leap complexity (or information exponent), we prove that LoRA can converge in $O(d)$ iterations, marking a separation between these two learning regimes.
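For concreteness, the low-rank setup referred to above can be sketched as follows. This is a toy illustration in plain NumPy, with hypothetical names, a squared-regression loss, and a synthetic data model of my own choosing rather than the specific model analyzed in the paper: the pretrained weight matrix W0 is frozen, and online SGD trains only a rank-r update B @ A on fresh samples.

    import numpy as np

    # Toy sketch of LoRA-style low-rank fine-tuning with online SGD.
    # W0 is a frozen pretrained weight matrix; only the rank-r factors B, A are
    # trained, so the effective weight is W0 + B @ A. All names and the data
    # model below are hypothetical placeholders.

    rng = np.random.default_rng(0)
    d, r, lr, steps = 64, 4, 1e-2, 5000

    W0 = rng.normal(size=(d, d)) / np.sqrt(d)       # frozen pretrained weights
    # Target task: a small rank-one perturbation of the pretrained weights.
    W_target = W0 + rng.normal(size=(d, 1)) @ rng.normal(size=(1, d)) / d

    A = rng.normal(size=(r, d)) / np.sqrt(d)        # trainable factor A (random init)
    B = np.zeros((d, r))                            # trainable factor B (zero init)

    for t in range(steps):
        x = rng.normal(size=(d,))                   # one fresh sample per step (online SGD)
        W = W0 + B @ A
        err = W @ x - W_target @ x                  # residual of the squared regression loss
        # Gradients of 0.5 * ||err||^2 with respect to B and A:
        grad_B = np.outer(err, A @ x)
        grad_A = np.outer(B.T @ err, x)
        B -= lr * grad_B
        A -= lr * grad_A

    # Fitting error of the fine-tuned model vs. the frozen pretrained model;
    # the first number should end up substantially smaller than the second.
    print(np.linalg.norm(W0 + B @ A - W_target), np.linalg.norm(W0 - W_target))

The zero initialization of B is the standard LoRA convention, so fine-tuning starts exactly at the pretrained model and only departs from it through the low-rank update.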

Distributed Optimization

Previously, I was fortunate to work on distributed optimization, advised by Stephanie Gil and Angelia Nedich, focusing on constrained optimization and resilience. These problems involve solving convex optimization problems over distributed networks using only local communication and computation. In particular, our research addressed the following issues:

  • Solving constrained problems over directed graphs, where the asymmetry of communication adds significant challenges (a minimal sketch of this setting follows this list).
  • Making algorithms resilient to malicious attacks using the trust framework from cyber-physical systems.
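As a concrete, heavily simplified illustration of the directed-graph setting, here is a sketch of a push-sum style distributed gradient method with scalar iterates and toy quadratic local costs. The graph, step sizes, and costs are hypothetical placeholders, and this is not the specific algorithm from our papers; it only illustrates how column-stochastic 'push' weights correct for the asymmetry of directed communication.

    import numpy as np

    # Toy sketch of a push-sum style distributed gradient method over a directed graph.
    # Each agent i holds a private quadratic cost f_i(x) = 0.5 * (x - a_i)^2 and
    # communicates only along directed edges; the minimizer of the sum is mean(a).
    # The graph, step sizes, and costs are hypothetical placeholders.

    rng = np.random.default_rng(1)
    n = 5
    a = rng.normal(size=n)                            # private cost parameters

    # Directed ring plus a chord; each node keeps a self-loop.
    out_neighbors = {i: [i, (i + 1) % n] for i in range(n)}
    out_neighbors[0].append(2)
    out_degree = {i: len(out_neighbors[i]) for i in range(n)}

    x = np.zeros(n)    # running iterates
    y = np.ones(n)     # push-sum weights correcting for graph asymmetry

    for t in range(1, 2001):
        alpha = 1.0 / t                               # diminishing step size
        w_new = np.zeros(n)
        y_new = np.zeros(n)
        # Each node "pushes" its values, scaled by its out-degree, to its out-neighbors.
        for j in range(n):
            for i in out_neighbors[j]:
                w_new[i] += x[j] / out_degree[j]
                y_new[i] += y[j] / out_degree[j]
        z = w_new / y_new                             # de-biased local estimates
        grad = z - a                                  # gradient of each f_i at z_i
        x = w_new - alpha * grad
        y = y_new

    print(z)           # all entries approach mean(a), the network-wide minimizer
    print(a.mean())

Each agent's de-biased estimate z_i approaches the minimizer of the sum of the local costs, here the mean of the a_i, even though every message travels along one-directional edges; the auxiliary weights y compensate for the lack of a doubly stochastic mixing matrix.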

Selected Publications

Publications are listed in reverse chronological order; * denotes equal contribution. You can also find a full list of my research projects on my Google Scholar page.