
Jingzhao Zhang

optimization, learning theory, artificial intelligence

Assistant Professor @ Tsinghua, IIIS

Jointly Affiliated as PI @ Shanghai

Short Bio

Jingzhao Zhang is an assistant professor at Tsinghua, IIIS. He graduated in 2022 from the MIT EECS PhD program under the supervision of Prof. Ali Jadbabaie and Prof. Suvrit Sra. His research focused on providing theoretical analyses of practical large-scale algorithms. He now aims to propose theories that are simple and can predict experimental observations. Jingzhao Zhang is also interested in machine learning applications, specifically those involving dynamical system formulations. He received the Ernst A. Guillemin SM Thesis Award and the George M. Sprowls PhD Thesis Award.

What's new

Fall 2023 optimization class materials are now available.

Please check out our ICLR 2024 workshop on Bridging the Gap between Theory and Practice for Learning.

Uploaded the research project on the two-phase scaling law in the research section (Aug 2023).

If you want to join as an intern, please prepare a 15-minute presentation on a recent DL / ML / AI paper and then send me an email.

If you are interested in joining as a PhD student, please refer to my post.

Research interests

I am interested in theoretical explanations of practical optimization algorithms.

I am working on developing faster training algorithms.

I enjoy applying optimization algorithms to real-world problems.

Our group

PhD students:

Bei Luo

Hongyi Zhou

Undergraduate students:

Huaqing Zhang

Jiazheng Li

Hong Lu

Alumni:

Peiyuan Zhang (PhD at Yale)

Yusong Zhu (PhD at UT Austin)

Kaiyue Wen (PhD at Stanford)

Research Projects

For a complete list of papers, please see my Google Scholar profile (linked below).


2024: Statistical learning in LLMs.

A summary of several recent works.


2023: Two phases of scaling laws for kNN classifiers.

A preprint on arXiv.
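As background on what a scaling curve for a kNN classifier looks like, here is a toy sketch, not the setup studied in the paper: the synthetic Gaussian data, the choice k = 5, and the sample sizes are illustrative assumptions.

```python
# Toy sketch: test error of a kNN classifier as the training set grows.
# The Gaussian-mixture data, k = 5, and sample sizes are illustrative
# assumptions, not the setting analyzed in the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def sample(n):
    # Two overlapping Gaussian classes in 2D.
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 2))
    return x, y

x_test, y_test = sample(5000)
for n in [100, 1000, 10000]:
    x_train, y_train = sample(n)
    clf = KNeighborsClassifier(n_neighbors=5).fit(x_train, y_train)
    err = 1.0 - clf.score(x_test, y_test)
    print(f"n = {n:>6d}, test error = {err:.3f}")
```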


2022: On the nonsmoothness of neural network training.

A summary of three recent works: why is neural network training non-smooth from an optimization perspective, and how should we analyze the process?
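As a minimal illustration of the nonsmoothness itself, not of the analysis in those works, the sketch below evaluates the squared loss of a tiny ReLU network along a random line in parameter space; the kinks come from ReLU activation patterns switching. The two-layer architecture and random data are my own toy assumptions.

```python
# Toy sketch: the loss of a small ReLU network is only piecewise smooth.
# The two-layer architecture and random data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))            # inputs
y = rng.normal(size=(64, 1))             # regression targets
w1, w2 = rng.normal(size=(10, 32)), rng.normal(size=(32, 1))
d1, d2 = rng.normal(size=w1.shape), rng.normal(size=w2.shape)

def loss(t):
    # Squared loss at parameters (w1 + t*d1, w2 + t*d2).
    h = np.maximum(x @ (w1 + t * d1), 0.0)    # ReLU hidden layer
    return float(np.mean((h @ (w2 + t * d2) - y) ** 2))

ts = np.linspace(-0.5, 0.5, 2001)
vals = np.array([loss(t) for t in ts])
# Finite-difference second derivative: large spikes mark nonsmooth kinks.
curvature = np.diff(vals, 2) / (ts[1] - ts[0]) ** 2
print("largest curvature spike:", float(np.max(np.abs(curvature))))
```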


2021: Theoretical understanding of adaptive gradient methods

My PhD defense.
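For readers unfamiliar with the term, an adaptive gradient method rescales each coordinate of the step by accumulated gradient statistics. The sketch below shows a standard Adam-style update on a toy ill-conditioned quadratic; the objective and hyperparameters are illustrative assumptions, not the specific algorithms analyzed in the defense.

```python
# Generic Adam-style adaptive gradient update (Kingma & Ba, 2015) on a toy
# ill-conditioned quadratic; the objective and hyperparameters are
# illustrative assumptions, not the algorithms studied in the defense.
import numpy as np

A = np.diag([1.0, 100.0])      # ill-conditioned quadratic f(x) = 0.5 x^T A x

def grad(x):
    return A @ x

x = np.array([1.0, 1.0])
m = np.zeros_like(x)           # first-moment (mean) estimate
v = np.zeros_like(x)           # second-moment estimate
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = grad(x)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)

print("final iterate:", x)
```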


2019: An ODE perspective for Nesterov's accelerated gradient method

My master's thesis (RQE at MIT).
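As background, a widely cited continuous-time model of Nesterov's accelerated gradient for a smooth convex function f (Su, Boyd, and Candès, 2014) is the second-order ODE below, together with its O(1/t^2) rate; this is standard material and not necessarily the exact formulation used in the thesis.

```latex
% Continuous-time limit of Nesterov's accelerated gradient method
% (Su, Boyd, and Candes, 2014); standard background, not necessarily
% the exact formulation used in the thesis.
\[
  \ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0,
  \qquad X(0) = x_0,\quad \dot{X}(0) = 0,
\]
\[
  f\bigl(X(t)\bigr) - f^{\star} \;\le\; \frac{2\,\lVert x_0 - x^{\star}\rVert^{2}}{t^{2}}.
\]
```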

Teaching

Fall 2023 Introduction to Optimization

References:

Bertsimas, Dimitris, and John N. Tsitsiklis. Introduction to linear optimization.

Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization.

Bubeck, Sébastien. Convex optimization: Algorithms and complexity.

Grading: 40% HW + 30% Midterm + 30% Final

Weekly schedule:

1. Linear programming and Polyhedra.

2. Simplex and Duality.

3. Linear Duality and Ellipsoid.

4. Ellipsoid and Convexity.

5. Convex Optimization, MaxCut.

6. SDP Relaxation; Lagrangian Duality.

7. Lagrangian Duality and KKT.

8. Midterm

9. Newton's method.

10. Self-concordance and Convergence of Newton.

11. Interior Point Method.

12. Gradient Method and Oracle Complexity.

13. Gradient Methods with Stochasticity, Nonconvexity and Mirror Maps.

14. Mirror Descent and Online Learning.

15. Final

Related information

Email

Google Scholar

https://scholar.google.com/citations?user=8NudxYsAAAAJ&hl=en