Workshop on "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice"
(Under the auspices of FOCS 2012, the 53rd Annual IEEE Symposium on Foundations of Computer Science)
Hyatt Regency, New Brunswick NJ, USA
October 20, 2012
Linear algebra algorithms are the foundation for many methods in data analysis,
scientific computing, and engineering applications, and Numerical
Linear Algebra (NLA) is the key discipline enabling the analysis and
implementation of such algorithms. Without highly efficient and
scalable NLA algorithms, the challenge of big data will not be met. To
meet this challenge, a true boost in performance is necessary, and that
requires new algorithmic paradigms to be developed,
analyzed, and deployed. Randomization is one of only a handful of
paradigms that have the potential to deliver the desired true boost in
performance. In the past, scientists and application users were
distrustful of randomized numerical linear algebra algorithms and shied
away from them, because (i) their output is unpredictable, (ii) it may not be reproducible, (iii) the error analysis is probabilistic, and (iv)
the obtained accuracy is very crude. However, the NLA community is now
seriously considering the use of "radical" ideas like randomization,
and, indeed, recent years have seen much research on randomized numerical
linear algebra algorithms.

[November 15, 2012] Read this blog post (written by Ludwig Schmidt and edited by Michael Mitzenmacher) for a concise summary of the workshop's talks.
These practical developments
represent huge opportunities for the theoretical computer science
community: how can randomization and sampling be leveraged to
design faster numerical algorithms? The goal of the workshop is to
expose the participants to recent progress on developing randomized
numerical linear algebra algorithms, as well as on the application of
such algorithms to a variety of disciplines and domains.
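To make the question above concrete, here is a minimal, self-contained "sketch-and-solve" example of the RandNLA idea for least-squares regression: compress a tall system with a random Gaussian projection and solve the much smaller sketched problem. The dimensions, sketch size, and noise level below are illustrative choices, not taken from any workshop talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 20  # a tall-and-skinny least-squares problem (illustrative sizes)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Exact least-squares solution, for comparison.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Gaussian sketch: S has s << n rows. After sketching, solving
# min_x ||S A x - S b|| costs O(s d^2) instead of O(n d^2).
s = 200
S = rng.standard_normal((s, n)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# With high probability the sketched solution is close to the exact one.
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)
```

The sketch size `s` trades accuracy for speed: larger sketches give tighter approximation guarantees, and more structured sketches (subsampled randomized Hadamard transforms, sparse embeddings) reduce the cost of forming `S @ A` itself, which is one of the themes of the talks below.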
[October 25, 2012] Slides from all talks have been posted.
Session 1 (9:00am-10:30am): "RandNLA: Overview"
| 9:00 - 9:15 | Petros Drineas | Welcome and Introduction (slides) |
| 9:15 - 10:15 | Michael Mahoney | Tutorial: Theory (and Some Practice) of Randomized Algorithms for Matrices and Data (slides, abstract) |
| 10:15 - 10:30 | Q & A | |

Coffee Break (provided) (10:30am-11:00am)

Session 2 (11:00am-12:30pm): "Matrix Concentration: Theory & Applications"
| 11:00 - 11:30 | Nick Harvey | Matrix Concentration and Sparsification (slides, abstract) |
| 11:30 - 12:00 | Nikhil Srivastava | Graph Sparsification (slides, abstract) |
| 12:00 - 12:30 | Ilse Ipsen | Randomly Sampling from Orthonormal Matrices: Coherence and Leverage Scores (slides, abstract) |

Lunch Break (on your own) (12:30pm-2:30pm)

Session 3 (2:30pm-4:00pm): "Randomized Linear Solvers"
| 2:30 - 3:00 | Haim Avron | Randomized Preconditioning (slides, abstract) |
| 3:00 - 3:30 | Ioannis Koutis | Solving Laplacian Linear Equations: Theory and Practice (slides, abstract) |
| 3:30 - 4:00 | Anastasios Zouzias | Randomized Gossip Algorithms for Solving Laplacian Systems (slides, abstract) |

Coffee Break (provided) (4:00pm-4:30pm)

Session 4 (4:30pm-6:00pm): "Matrix Reconstruction"
| 4:30 - 5:00 | Christos Boutsidis | Near-Optimal Column-Based Matrix Reconstruction (slides, abstract) |
| 5:00 - 5:30 | David Woodruff | Low Rank Approximation and Regression in Input Sparsity Time (slides, abstract) |
| 5:30 - 6:00 | Ben Recht | An Incomplete Survey of Matrix Completion (slides, abstract) |
Organizers:
Haim Avron (IBM T. J. Watson Research Center),
Christos Boutsidis (IBM T. J. Watson Research Center),
Petros Drineas (Rensselaer Polytechnic Institute).
Webpage maintained by: Abhisek Kundu (Rensselaer Polytechnic Institute)