FOLDS seminar (new!)
Foundations of Optimization, Learning, and Data Science


Quick logistics


About




Schedule for Fall 2025

Date | Speaker | Affiliation | Title
Sep 4 (room AGH 306) | Joel Tropp | Caltech | Positive random walks and positive-semidefinite random matrices
Sep 11 | Rina Barber | UChicago | Algorithmic stability for regression and classification
Sep 18 | Jong-Shi Pang | USC | Heaviside Composite Optimization: A new paradigm of optimization
Sep 25 | Joan Bruna | NYU |
Oct 2 | Yury Polyanskiy | MIT |
Oct 9 | [Fall Break] | |
Oct 16 | Adam Klivans | UT Austin |
Oct 23 | Pratik Chaudhari | UPenn |
Oct 30 | Nati Srebro | TTIC |
Nov 6 | Zhimei Ren | UPenn |
Nov 13 | Eva Tardos | Cornell |
Nov 20 | Rob Nowak | UW Madison |





Abstracts

Joel Tropp: Positive random walks and positive-semidefinite random matrices

On the real line, a random walk that can only move in the positive direction is very unlikely to remain close to its starting point. After a fixed number of steps, the left tail has a Gaussian profile, under minimal assumptions. Remarkably, the same phenomenon occurs when we consider a positive random walk on the cone of positive-semidefinite matrices. After a fixed number of steps, the minimum eigenvalue is also described by a Gaussian model.

This talk introduces a new way to make this intuition rigorous. The methodology provides the solution to an open problem in computational mathematics about sparse random embeddings. The presentation is targeted at a general mathematical audience. Preprint: https://arxiv.org/abs/2501.16578
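The phenomenon can be seen in a quick simulation. The sketch below (purely illustrative; it is not the methodology of the talk or the preprint) builds a positive random walk S = X_1 + ... + X_n on the PSD cone from i.i.d. rank-one steps X_i = g_i g_i^T with Gaussian g_i, an assumed choice of step distribution, and records the minimum eigenvalue of the averaged walk over many trials.

```python
# Minimal simulation sketch (not the paper's proof technique): a "positive
# random walk" on the PSD cone built from rank-one Gaussian steps. The step
# distribution here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 5, 200, 2000          # dimension, steps per walk, Monte Carlo trials

min_eigs = np.empty(trials)
for t in range(trials):
    S = np.zeros((d, d))
    for _ in range(n):
        g = rng.standard_normal(d)
        S += np.outer(g, g)          # each step is PSD, so the walk never "moves left"
    min_eigs[t] = np.linalg.eigvalsh(S / n).min()

# The minimum eigenvalue concentrates around its mean with roughly Gaussian
# fluctuations, mirroring the scalar positive-walk picture described above.
print("mean  :", min_eigs.mean())
print("std   :", min_eigs.std())
print("5%-ile:", np.quantile(min_eigs, 0.05))
```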


Rina Barber: Algorithmic stability for regression and classification

In a supervised learning setting, a model fitting algorithm is unstable if small perturbations to the input (the training data) can often lead to large perturbations in the output (say, predictions returned by the fitted model). Algorithmic stability is a desirable property with many important implications such as generalization and robustness, but testing the stability property empirically is known to be impossible in the setting of complex black-box models. In this work, we establish that bagging any black-box regression algorithm automatically ensures that stability holds, with no assumptions on the algorithm or the data. Furthermore, we construct a new framework for defining stability in the context of classification, and show that using bagging to estimate our uncertainty about the output label will again allow stability guarantees for any black-box model. This work is joint with Jake Soloff and Rebecca Willett.
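For intuition, here is a minimal sketch of the bagging idea described in the abstract: refit an arbitrary black-box regression procedure on many random subsamples of the training data and average the predictions. The subsampling scheme, the placeholder black box (ordinary least squares), and all parameter choices below are illustrative assumptions, not the paper's exact construction or guarantees.

```python
# Hedged sketch of bagging a black-box regressor; the base learner and the
# subsampling details are placeholders, not the authors' precise setup.
import numpy as np

def black_box_fit_predict(X_train, y_train, X_test):
    """Placeholder black-box regressor: ordinary least squares."""
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ coef

def bagged_predict(X_train, y_train, X_test, n_bags=100, subsample=0.8, seed=0):
    """Average the black box's predictions over random subsamples of the data."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    m = int(subsample * n)
    preds = np.zeros(len(X_test))
    for _ in range(n_bags):
        idx = rng.choice(n, size=m, replace=False)   # random subsample of the training set
        preds += black_box_fit_predict(X_train[idx], y_train[idx], X_test)
    return preds / n_bags

# Toy usage: perturbing a single training response moves the bagged prediction
# only slightly, illustrating (not proving) the stability being discussed.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
X_new = rng.standard_normal((1, 3))
y_perturbed = y.copy()
y_perturbed[0] += 5.0
print(bagged_predict(X, y, X_new), bagged_predict(X, y_perturbed, X_new))
```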


Jong-Shi Pang: Heaviside Composite Optimization: A new paradigm of optimization

A Heaviside function is an indicator function of a semi-infinite interval. A Heaviside composite function is a Heaviside function composed with a multivariate function that may be nonsmooth. This talk presents a glimpse of this novel class of discontinuous optimization problems, which borders on continuous and discrete optimization. Among the rich applications of such problems, we mention one pertaining to classification and treatment with constraints, solvable by an elementary yet novel progressive integer programming (PIP) method that can be applied broadly to many other problems, including indefinite quadratic programs and linear programs with linear complementarity constraints. Computational results demonstrate the excellent performance of the PIP idea and its superiority over a full integer programming approach.
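To make the definition concrete, the toy snippet below evaluates a Heaviside composite H(g(x)), where H is the indicator of [0, inf) and g is a nonsmooth (max-affine) inner map, inside a small penalized objective. The particular g, the penalty weight, and the objective are invented for illustration; the talk's problem classes and the progressive integer programming method are not reproduced here.

```python
# Illustrative example of a Heaviside composite function H(g(x)); all specific
# choices below are assumptions made for demonstration only.
import numpy as np

def heaviside(t):
    """Indicator of the semi-infinite interval [0, inf)."""
    return np.where(t >= 0.0, 1.0, 0.0)

def g(x):
    """A nonsmooth inner map: a piecewise-linear (max-affine) function."""
    return np.maximum(x[0] - 1.0, 2.0 * x[1] + 0.5)

def toy_objective(x, lam=3.0):
    """Smooth loss plus a penalty that switches on discontinuously via H(g(x))."""
    return np.sum(x ** 2) + lam * heaviside(g(x))

# The objective jumps by lam as x crosses the surface g(x) = 0, which is what
# places these problems between continuous and discrete optimization.
for x in (np.array([0.0, -1.0]), np.array([0.0, 0.0]), np.array([2.0, 0.0])):
    print(x, "->", toy_objective(x))
```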
