Research
Through my research, I hope to tackle core problems in practical applications from a rigorous perspective, designing faster and more robust algorithms. To that end, a unifying theme in my research is learning and optimization, as these two areas underlie many problems at the frontier of computer science research. In particular, I have worked on deep learning theory and distributed optimization toward this goal.
Deep Learning Theory
My most recent work, advised by Sitan Chen, quantifies separations between low-rank fine-tuning (LoRA) with online SGD and feature learning ('learning from scratch'). In this work, we identified a distinct regime under which LoRA operates, different from both the linearized kernel ('NTK') regime and the feature learning regime. Remarkably, while learning from scratch with $d$-dimensional data can require $\Omega(d^{\ell})$ iterations, where $\ell$ is governed by the leap complexity (or information exponent), we prove LoRA can converge in $O(d)$ iterations, marking a separation between these two learning regimes.
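To make the parameterization concrete, here is a minimal numerical sketch of LoRA trained with online SGD, assuming a simple teacher-student setup: the dimensions, rank, step size, and the planted low-rank target `W_star` below are illustrative placeholders, not the setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lr = 64, 4, 1e-2  # data dimension, LoRA rank, step size (placeholders)

# Frozen pretrained weights; LoRA trains only the rank-r factors B and A.
W0 = rng.standard_normal((d, d)) / np.sqrt(d)
B = np.zeros((d, r))                          # zero init: the update B @ A starts at 0
A = rng.standard_normal((r, d)) / np.sqrt(d)

# Hypothetical fine-tuning target: the base model plus a planted low-rank shift.
W_star = W0 + rng.standard_normal((d, r)) @ rng.standard_normal((r, d)) / d

for t in range(2000):
    x = rng.standard_normal(d)            # online SGD: a fresh sample each iteration
    err = (W0 + B @ A) @ x - W_star @ x   # residual of the fine-tuned model
    # Gradients of the squared loss 0.5 * ||err||^2 w.r.t. the low-rank factors only
    grad_B = np.outer(err, A @ x)
    grad_A = np.outer(B.T @ err, x)
    B -= lr * grad_B
    A -= lr * grad_A
```

The point of the sketch is only that the trainable parameters live on a rank-$r$ set of updates to $W_0$; the paper's analysis of why this escapes the kernel regime is not captured here.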
Distributed Optimization
Previously, I was fortunate to work on distributed optimization advised by Stephanie Gil and Angelia Nedić, focusing on aspects relevant to constrained optimization and resilience. These problems involve solving convex optimization problems over distributed networks using only local communication and computation. In particular, our research addressed the following issues (a toy sketch of the underlying algorithmic template follows the list):
- Solving constrained problems over directed graphs, where the asymmetry of communication adds significant challenges.
- Making algorithms resilient to malicious attacks using the trust framework from cyber-physical systems.
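The sketch below runs a basic, unconstrained push-pull update over a directed ring: a row-stochastic matrix R mixes the agents' decision variables while a column-stochastic matrix C mixes gradient trackers, so no doubly stochastic weights are needed. The topology, weights, quadratic objectives, and step size are illustrative assumptions; the projected and attack-resilient variants developed in the papers are substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lr = 5, 3, 0.1  # agents, decision dimension, step size (placeholders)

# Directed ring: agent i receives messages only from agent i-1.
R = np.zeros((n, n))  # row-stochastic weights for mixing decision variables
C = np.zeros((n, n))  # column-stochastic weights for mixing gradient trackers
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5   # each row of R sums to 1
    C[i, i] = C[(i + 1) % n, i] = 0.5   # each column of C sums to 1

# Agent i privately holds f_i(x) = 0.5 * ||x - b_i||^2; the network's goal is
# to minimize the sum of the f_i, whose minimizer is the mean of the b_i.
b = rng.standard_normal((n, d))
grad = lambda x: x - b  # stacked local gradients; row i belongs to agent i

x = rng.standard_normal((n, d))  # row i is agent i's local iterate
y = grad(x)                      # gradient trackers, initialized at local gradients
for t in range(300):
    x_new = R @ x - lr * y             # mix in-neighbors' iterates, step along tracker
    y = C @ y + grad(x_new) - grad(x)  # track the sum of gradients across agents
    x = x_new

# All rows of x reach (approximate) consensus near b.mean(axis=0).
```

The column-stochastic tracker preserves the invariant that the trackers sum to the current total gradient, which is what lets the method run over directed graphs where doubly stochastic mixing is unavailable.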
Selected Publications
Publications are listed in reverse chronological order; * denotes equal contribution. You can also see a full list of my research projects on my Google Scholar page.
- Dayı, Arif Kerem, and Sitan Chen. Gradient Dynamics for Low-Rank Fine-Tuning Beyond Kernels. arXiv preprint arXiv:2411.15385 (2024).
- Dayı, Arif Kerem*, Orhan Eren Akgün*, Stephanie Gil, Michal Yemini, and Angelia Nedić. Fast Distributed Optimization over Directed Graphs under Malicious Attacks Using Trust. arXiv preprint arXiv:2407.06541 (2024).
- Akgün, Orhan Eren*, Arif Kerem Dayı*, Stephanie Gil, and Angelia Nedić. Projected Push-Pull for Distributed Constrained Optimization over Time-Varying Directed Graphs. In 2024 American Control Conference (ACC), pp. 2082-2089. IEEE, 2024.
- Akgün, Orhan Eren, Arif Kerem Dayı, Stephanie Gil, and Angelia Nedić. Learning Trust over Directed Graphs in Multiagent Systems. In Learning for Dynamics and Control Conference, pp. 142-154. PMLR, 2023.