The tentative program of the workshop can be found below. More details, including the abstracts of the talks, will be available soon.

All talks will be held in the room BC 01 that is located just to the left of the main entrance to the BC building.

Update: All talks have been moved to the room INM 200; directions from BC 01 are here.

Monday, June 11:

9:00 AM Coffee Break (Provided)
9:30 AM "Spectral Sparsification of Graphs and Approximations of Matrices", Daniel Spielman (Yale University), Video (skip to the 10-minute mark)
We will review the fundamental algorithms for spectrally approximating matrices, with an emphasis on the Laplacian matrices of graphs. We then survey recent improvements in algorithms for sparsifying Laplacian matrices.

We devote the last part of the talk to making connections and stating open problems. In particular, we observe that one can sparsify Maximum Cut problems and Unique Games, as well as the linear equations that occur in many optimization algorithms.
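As a warm-up to the abstract above, here is a small illustrative sketch (not taken from the talk) of the object at its center: the graph Laplacian and its quadratic form, which a spectral sparsifier must preserve up to a (1 ± eps) factor for every vector.

```python
import numpy as np

# Toy example: for the Laplacian L = D - A of an undirected graph,
# x^T L x equals the sum over edges (u, v) of (x_u - x_v)^2.
# A spectral sparsifier H of G is a sparse graph whose Laplacian
# satisfies (1 - eps) x^T L_G x <= x^T L_H x <= (1 + eps) x^T L_G x.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A  # Laplacian matrix

x = np.array([1.0, -2.0, 0.5, 3.0])
quad = x @ L @ x
edge_sum = sum((x[u] - x[v]) ** 2 for u, v in edges)
assert np.isclose(quad, edge_sum)
```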
10:45 AM "Is Machine Learning Easy?", Sanjeev Arora (Princeton University), Video
Machine learning currently relies on many heuristics---algorithms for which we have no provable guarantees on running time and/or performance. In fact, many underlying problems are NP-hard or worse! Is machine learning really intractable?

The talk will present some thoughts on this question, describe recent new algorithms for classic machine learning problems (nonnegative matrix factorization and topic models) under realistic assumptions, and sketch the myriad open problems in this arena.

(Based upon joint papers with Rong Ge, Ravi Kannan, Ankur Moitra, Sushant Sachdeva)
12:00 PM Lunch Break (see Lunch options at EPFL)
2:00 PM "Solve, Walk, Cut", Nisheeth Vishnoi (MSR India), Video
In this talk I will address the following question:

Can one speed-up simulation of random walks given the ability to solve a system of linear equations quickly?

Quickly computing the probability distributions that arise from random walks on undirected graphs is a useful primitive in many combinatorial and algorithmic settings; more recently, it has been used to solve certain convex programs combinatorially.

The answer to the above question will be revealed via an intricate story which stitches together methods from spectral graph theory, numerical linear algebra and approximation theory.

As an application, I will highlight joint work with L. Orecchia and S. Sachdeva: the first near-linear-time approximation algorithm that achieves the Cheeger bound for cutting a graph into two roughly equal parts while minimizing the conductance of the partition.
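For readers unfamiliar with the walk quantities in question, here is a minimal sketch (purely illustrative, not the talk's method): the t-step distribution of a lazy random walk on an undirected graph, computed by repeated matrix-vector products, which converges to the stationary distribution proportional to vertex degrees. The talk asks whether such quantities can be obtained faster via fast Laplacian linear-system solvers.

```python
import numpy as np

# Lazy random walk W = (I + A D^{-1}) / 2 on a small undirected graph.
# p_t = W^t p_0 converges to the stationary distribution pi(v) = deg(v) / 2m.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
W = 0.5 * (np.eye(n) + A / deg)  # A / deg normalizes columns: W[i, j] uses deg[j]

p = np.zeros(n)
p[0] = 1.0                       # walk starts at vertex 0
for _ in range(200):
    p = W @ p

pi = deg / deg.sum()             # stationary distribution
assert np.allclose(p, pi, atol=1e-9)
```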
3:15 PM "The Algorithmic Frontier of Convex Geometry", Santosh Vempala (Georgia Tech), Video
Convex geometry has provided some of the most powerful and general tools for efficient algorithms. In this talk, we discuss developments in algorithmic convex geometry under 5 different general problems --- sampling, optimization, integration, rounding and learning (SOIRL, pronounced SWIRL) --- and their interplay with isoperimetric and probabilistic inequalities. We will highlight several open problems and conjectures on the way.
4:30 PM Coffee Break (Provided)
5:00 PM "Efficient First-Order Methods for Large-Scale Stochastic Convex Optimization", Satyen Kale (IBM Research), Video
The recent ubiquity of big data sets has rendered traditional convex optimization procedures such as interior point methods completely impractical. The current methods of choice for solving large-scale convex programs in practice are first-order methods (FOMs), which rely only on very minimal first-order information (viz. subgradients). Randomization plays a crucial role in these methods, whether as part of the input or as a means of acceleration. Research in FOMs has been growing explosively of late, and a well-developed complexity theory for various kinds of optimization problems has emerged.

In this talk I will survey the state-of-the-art in FOMs (specifically focusing on stochastic convex optimization), provide upper and lower bounds for various classes of problems, and indicate future avenues for research in this exciting and very applicable area.
6:30 PM Workshop Reception and Poster Session, Fourth floor of BC building

Tuesday, June 12:

9:00 AM Coffee Break (Provided)
9:30 AM "Matroidal Approaches for Approximation Algorithms", Michel Goemans (MIT), Video
Matroids are powerful combinatorial structures that make it possible to model many combinatorial optimization problems exactly. Using them to design and analyze approximation algorithms for hard combinatorial optimization problems appears very promising but also very challenging.

In this talk, after describing some of the important features of matroids for optimization purposes, I will discuss some of their applications to the design and analysis of approximation algorithms.
10:45 AM "Adapting Approximation Algorithms", R. Ravi (CMU), Video
Problems modeling several stages of uncertainty typically require solutions that must necessarily adapt to the evolving input to be (near-)optimal. We survey how linear programming methods have recently been successful in designing approximation algorithms for such multi-stage problems, even when the so-called adaptivity gap between non-adapting and adapting solutions is large.
12:00 PM Lunch Break (see Lunch options at EPFL)
1:30 PM IC School seminar of possible interest: "The Computational Universe", Sanjeev Arora (Princeton)
Can we teach introductory computer science without programming?

Conventional wisdom holds that introductory computer science courses must have programming as a key component. This talk will describe an introductory computer science course *with labs* that the speaker created and has taught at Princeton University to non-majors. The course covers a broad array of computer science topics ---theory, AI, robotics, hardware, software, systems, etc.--- and satisfies the university's "Science and Technology (with labs)" requirement. The course does not use any programming per se, though students do learn pseudocode. The message is: "Computer science is about more than programming."
3:00 PM "Query Complexity and Computational Complexity of Combinatorial Auctions", Jan Vondrák (IBM Research), Video
The problem of combinatorial auctions is one of the basic questions in algorithmic mechanism design: how can we allocate m items to n agents with private valuations of different combinations of items, so that the agents are motivated to reveal their true valuations and the outcome is (approximately) optimal in terms of social welfare? While approximation algorithms exist for several non-trivial classes of valuations, they typically do not motivate agents to report truthfully. The classical VCG mechanism, although truthful, is not computationally efficient. Thus the main question is whether the requirements of truthfulness and computational efficiency can be combined, or whether they are incompatible.

Until recently, it was not known whether there is any gap between the power of truthful and non-truthful mechanisms for combinatorial auctions. Recent progress led to the first such separation (by Dobzinski). The initial hardness results were valid only for deterministic mechanisms in the value oracle model. More recently, they have been extended to randomized mechanisms, and also to computational hardness for succinctly represented valuations. I will discuss these developments and some remaining open questions.
4:15 PM Coffee Break (Provided)
4:45 PM "Faster Algorithms for Sparse Fourier Transform", Piotr Indyk (MIT), Video

Wednesday, June 13:

9:00 AM Coffee Break (Provided)
9:30 AM "Sublinear Optimization", Elad Hazan (Technion), Video
In many modern optimization problems, specifically those arising in machine learning, the amount of data is too large to apply standard convex optimization methods. We'll discuss new optimization algorithms that use randomization to prune the data and produce a correct solution while running in time smaller than the size of the data representation, i.e., in sublinear running time. We'll present such sublinear-time algorithms for linear classification, support vector machine training, semidefinite programming and other optimization problems. These new algorithms are based on a primal-dual approach, and use a combination of novel sampling techniques and the randomized implementation of online learning algorithms.
We give lower bounds which show our running times to be nearly best possible in the unit-cost RAM model.
10:45 AM "Edge-Disjoint Paths in Networks", Sanjeev Khanna (University of Pennsylvania), Video
A fundamental problem in combinatorial optimization is the edge-disjoint paths problem (EDP). We are given a network and a collection of source-destination pairs in the network. The goal is to maximize the number of pairs that can be connected by edge-disjoint paths. Even special cases of EDP correspond to non-trivial optimization problems, and the problem becomes NP-hard in very restricted settings. We will survey some recent progress on understanding the approximability threshold of EDP.
12:00 PM Lunch Break (see Lunch options at EPFL)
2:00 PM "Perspectives on Nearest Neighbor Search in High-Dimensional Spaces", Alexandr Andoni (MSR Silicon Valley), Video
Nearest neighbor search in high-dimensional spaces is a ubiquitous problem in searching and analyzing massive data sets. In this problem, the goal is to preprocess a set of objects (such as images), so that later, given a new query object, one can efficiently return the object most similar to the query. This problem is of key importance in several areas, including machine learning, information retrieval, image/video/music clustering, and others. For instance, it forms the basis of a widely used classification method in machine learning: to label a new object, just find a similar but already-labeled object. Nearest neighbor search also serves as a primitive for other computational problems such as closest pair, minimum spanning tree, or variants of clustering.

In this talk, I will survey the state of the art in nearest neighbor search. I will give a flavor of the main algorithms and techniques involved, both classical and more recent. Along the way, I will highlight the current challenges in the area.
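As a point of reference for the problem described above, here is the trivial exact baseline (not from the talk): a brute-force linear scan. The sublinear-time algorithms the talk surveys, such as locality-sensitive hashing, aim to beat this scan in high dimensions at the price of approximation.

```python
import numpy as np

# Exact nearest neighbor by brute force: O(n * d) per query.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 64))   # 1000 points in 64 dimensions
query = rng.normal(size=64)

dists = np.linalg.norm(data - query, axis=1)
nearest = int(np.argmin(dists))      # index of the closest point
```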
3:15 PM "Fast Solvers for SDD Systems and Their Application to Image Processing", Gary Miller (CMU), Video
Spectral graph theory is the interplay between linear algebra and combinatorial graph theory. Laplace's equation and its discrete form, the Laplacian matrix, appear ubiquitously in mathematical physics.

Due to the recent discovery of very fast solvers for these equations, they are becoming increasingly popular in combinatorial optimization, computer vision, computer graphics and machine learning.

This talk will give an overview of our solver algorithms, including a sequential algorithm for solving Symmetric Diagonally Dominant (SDD) linear systems in O(m log n) expected time to constant precision. We will also discuss parallel versions of the solver with near-linear work.

The talk will be motivated by the classic problem of image denoising. More recent convex formulations of this problem will naturally lead us to better algorithms for problems such as approximate multicommodity flow and a faster flow algorithm for graphs with good separator structure.

Topics in this talk represent joint work with Guy Blelloch, Hui Han Chin, Anupam Gupta, Jon Kelner, Yiannis Koutis, Aleksander Madry, Richard Peng and Kanat Tangwongsan.
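The solvers described in the abstract above are far more sophisticated, but the interface they implement is simple: given an SDD matrix M and a vector b, find x with M x = b. A minimal sketch using plain conjugate gradient (a textbook method, not the talk's algorithm) on a small SDD system built from a graph Laplacian:

```python
import numpy as np

def conjugate_gradient(M, b, tol=1e-10, max_iter=1000):
    """Solve M x = b for symmetric positive definite M by conjugate gradient."""
    x = np.zeros_like(b)
    r = b - M @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
# Laplacian plus a positive diagonal: symmetric, diagonally dominant, invertible.
M = np.diag(A.sum(axis=1) + 1.0) - A

b = np.array([1.0, 0.0, -1.0, 0.0])
x = conjugate_gradient(M, b)
assert np.allclose(M @ x, b, atol=1e-8)
```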
4:30 PM Coffee Break (Provided)
5:00 PM "Semidefinite Programming Hierarchies and the Unique Games Conjecture", David Steurer (MSR New England/Cornell University), Video
We survey recent results about semidefinite programming (SDP) hierarchies in the context of the Unique Games Conjecture. This conjecture has emerged as a unifying approach towards settling many central open problems in the theory of approximation algorithms. It posits the hardness of a certain constraint satisfaction problem, called Unique Games.

We explain recent approximation algorithms based on SDP hierarchies that approximate this problem in subexponential time. This result relies on a connection between the spectrum of graphs and the expansion of small sets. Recent constructions show that algorithms based on this connection cannot run much faster than in subexponential time. Finally, we discuss evidence that stronger SDP hierarchies based on sum-of-squares proofs might be able to go beyond this limitation.

Thursday, June 14:

8:30 AM "Algebraic Algorithms and Combinatorial Optimization", Lap Chi Lau (CUHK), Video
First we survey some previous work on designing fast linear-algebraic algorithms for combinatorial optimization problems. We then present a new algebraic formulation for computing edge connectivities, using ideas developed in network coding, followed by a fast "combinatorial" algorithm for computing matrix rank using expander graphs, and discuss its applications. Finally we conclude with some open problems in this area.
9:35 AM Coffee Break (Provided)
9:55 AM "The Traveling Salesman Problem: Low-dimensionality Implies a Polynomial Time Approximation Scheme", Robert Krauthgamer (Weizmann Institute), Video
The Traveling Salesman Problem (TSP) is among the most famous NP-hard optimization problems. We design for this problem a randomized polynomial-time algorithm that computes a (1+eps)-approximation to the optimal tour, for any fixed eps>0, in TSP instances that form an arbitrary metric space with bounded intrinsic dimension.

The celebrated results of Arora and of Mitchell prove that the above result holds in the special case of TSP in a fixed-dimensional Euclidean space. Thus, our algorithm demonstrates that the algorithmic tractability of metric TSP depends on the dimensionality of the space and not on its specific geometry. This result resolves a problem that has been open since the quasi-polynomial time algorithm of Talwar.

Joint work with Yair Bartal and Lee-Ad Gottlieb.
11:05 AM "Randomized Pivoting Rules for the Simplex Algorithm", Uri Zwick (Tel Aviv University), Video
The simplex algorithm is one of the most widely used algorithms for solving linear programs in practice. Its worst-case complexity under essentially all known deterministic pivoting rules, however, is exponential. There is a randomized pivoting rule, called Random-Facet, discovered independently by Kalai and by Matousek, Sharir and Welzl, under which the expected running time of the simplex algorithm is subexponential. We obtain subexponential lower bounds for Random-Facet and two other randomized pivoting rules.

Joint work with Thomas Dueholm Hansen and Oliver Friedmann.
12:10 PM Workshop Adjourns