Convex optimization, sparsity and regression in high dimension.
Joseph Salmon (TELECOM ParisTech):

Abstract:

Following seminal works by R. Tibshirani and D. Donoho in the mid-1990s, a tremendous number of new tools has been developed to handle regression when the number of explanatory variables (or features) is potentially larger than the sample size. The key ingredient has been the design of methods leveraging sparsity. In this lecture, I will present a point of view relying mainly on modern convex optimization techniques that provide sparse solutions. Particular emphasis will be placed on non-smooth regularized regression, including l1 regularization (the Lasso) and group-sparse regularization (the Group-Lasso).
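To make the Lasso concrete: it solves min over beta of (1/2n) ||y - X beta||_2^2 + lambda ||beta||_1, where the non-smooth l1 penalty drives most coefficients exactly to zero. Below is a minimal Python sketch using scikit-learn; the simulated data, the alpha value, and the library choice are illustrative assumptions on my part, not material from the lecture itself.

# Minimal Lasso sketch (illustrative; alpha is an arbitrary choice).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200                     # fewer samples than features: high dimension
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                     # only 5 active features: a sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

# The l1 penalty sets most estimated coefficients exactly to zero.
lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.count_nonzero(lasso.coef_))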

Algorithmic challenges that depend on the nature of the data will be addressed, with potential applications in image processing, biostatistics and text mining. Last but not least, statistical results assessing the successes and failures of such methods will be presented.