## Fast Randomized Algorithms for Large-Scale Machine Learning

Martin Wainwright, University of California, Berkeley

Large-scale data sets are now ubiquitous throughout engineering and science, and present a number of interesting challenges at the interface between machine learning, optimization and information theory. In this talk, we discuss the use of randomized dimensionality reduction techniques, also known as sketching, for obtaining fast but approximate solutions to large-scale convex programs. Using information-theoretic techniques, we first reveal a surprising deficiency of the most widely used sketching technique. We then show how a simple iterative variant leads to a much faster algorithm, and one which adapts to the intrinsic dimension of the solution space. This leads naturally to a randomized version of the Newton algorithm with provable guarantees.
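The setting of the abstract can be illustrated with the classical "sketch-and-solve" approach to least squares, in which a random projection compresses the data before solving. Below is a minimal NumPy sketch; the dimensions and the Gaussian sketching matrix are illustrative choices only, and the talk's iterative and Newton-type variants are not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined least-squares problem: min_x ||A x - b||^2
n, d = 10_000, 50
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Classical sketch-and-solve: compress the n rows to m << n with a
# random Gaussian sketch S, then solve the much smaller problem.
m = 500  # sketch size (illustrative choice)
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Compare against the exact least-squares solution.
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(f"relative error of sketched solution: {rel_err:.3f}")
```

Solving the sketched system costs O(md^2) rather than O(nd^2), at the price of an approximate answer; the deficiency the talk refers to concerns how this approximation degrades, which its iterative variant repairs.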

Martin Wainwright joined the faculty at the University of California, Berkeley in Fall 2004, and is currently a Professor with a joint appointment between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences. He received his Bachelor's degree in Mathematics from the University of Waterloo, Canada, and his Ph.D. degree in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology (MIT); his doctoral dissertation was awarded the George M. Sprowls Prize from the MIT EECS department in 2002. He is interested in high-dimensional statistics, information theory and statistics, and statistical machine learning. His honors include an Alfred P. Sloan Foundation Research Fellowship (2005); IEEE Best Paper Awards from the Signal Processing Society (2008) and the Communications Society (2010); the Joint Paper Award from the IEEE Information Theory and Communications Societies (2012); a Medallion Lectureship (2013) from the Institute of Mathematical Statistics; a Section Lecture at the International Congress of Mathematicians (2014); and the COPSS Presidents' Award in Statistics (2014). He is currently serving as an Associate Editor for the Annals of Statistics, the Journal of Machine Learning Research, the Journal of the American Statistical Association, and the Journal of Information and Inference.


## Is Non-Convex Optimization Really Hard?

Emmanuel J. Candès, Stanford University

In recent years, there has been astounding progress in the theory and practice (algorithms, professional-grade software development, applications) of convex optimization, to the point that it has become a real pillar of modern engineering. The field of non-convex optimization, on the other hand, is far less mature and may draw comparisons with 17th-century medicine (ad hoc methods, no performance guarantees, unreliable results, and so on). This is unfortunate because most problems of interest to information scientists are non-convex in nature; for example, many maximum likelihood estimates are, in fact, solutions to non-convex problems, some of which are notoriously hard. This talk will briefly review a rapidly emerging literature showing that, perhaps surprisingly, some important non-convex problems may not be as hard as they seem. We will discuss some of this exciting research, emphasizing applications in signal and image processing, such as phase retrieval, and in machine learning, such as low-rank factorization.
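As one concrete instance of a "benign" non-convex problem of the kind the abstract alludes to, low-rank factorization of a positive semidefinite matrix can often be solved by plain gradient descent from a random starting point, despite the objective being non-convex. A minimal illustrative sketch, with dimensions, step size, and iteration count chosen arbitrarily rather than taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth low-rank PSD matrix M = U* U*^T
n, r = 40, 3
U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T

# Non-convex objective f(U) = ||U U^T - M||_F^2, with gradient
# 4 (U U^T - M) U, minimized by gradient descent from a small
# random initialization.
U = 0.1 * rng.standard_normal((n, r))
step = 1.0 / (8 * np.linalg.norm(M, 2))  # scaled by the spectral norm of M
for _ in range(2000):
    grad = 4 * (U @ U.T - M) @ U
    U -= step * grad

rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.2e}")
```

The factorized objective has saddle points (e.g., U = 0) but, in well-conditioned instances like this one, no spurious local minima that trap the iterates, which is why the naive method succeeds.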

Emmanuel Candès is the Barnum-Simons Chair in Mathematics and Statistics, and professor of electrical engineering (by courtesy) at Stanford University. Until 2009, he was the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. His research interests are in applied mathematics, statistics, information theory, signal processing and mathematical optimization, with applications to the imaging sciences, scientific computing and inverse problems. Candès graduated from the Ecole Polytechnique in 1993 with a degree in science and engineering, and received his Ph.D. in statistics from Stanford University in 1998. He received the 2006 Alan T. Waterman Award from the NSF, which recognizes the achievements of early-career scientists. Other honors include the 2010 George Polya Prize awarded by the Society for Industrial and Applied Mathematics (SIAM), the 2013 Dannie Heineman Prize presented by the Academy of Sciences at Göttingen, and the 2015 AMS-SIAM George David Birkhoff Prize in Applied Mathematics. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.

RESEARCH INTERESTS: Emmanuel’s work lies at the interface of mathematics, statistics, information theory, signal processing and scientific computing, and is about finding new ways of representing information and of extracting information from complex data. For example, he helped launch the field known as compressed sensing, a mathematical technique that has led to advances in the efficiency and accuracy of data collection and analysis, and can be used to significantly speed up MRI scanning times. More broadly, he is interested in theoretical and applied problems characterized by incomplete information. His work combines ideas from probability theory, statistics and mathematical optimization to answer questions such as whether it is possible to recover the phase of a light field from intensity measurements only, as in X-ray crystallography; users’ preferences for items from just a few samples, as in recommender systems; or fine details of an object from low-frequency data, as in microscopy. His most recent research is concerned with the development of statistical techniques addressing the irreproducibility of scientific research (the fact that many follow-up studies cannot reproduce early findings).