To enter your research report:

- **sign in** via the link in the upper-right corner
  - **username:** "ow2018" (no quotation marks)
  - **password:** last name (all *lowercase*) of the person who discovered matrix multiplication in time $n^{\log_2 7}$
- click on the **edit** button in the lower-right corner of this page
- **write** your request
- **save** the edited wiki

(Please do not keep the wiki page in edit mode for long, since you may lock other users out of editing the text.)

Abboud, Amir

Recently I've been interested in the following:

- Exploring the sum-of-squares (SOS) algorithm, with recent works on using it for results in machine learning and quantum information theory, on breaking some cryptographic assumptions, and on lower bounds against it.
- I'm interested in SOS in both the worst-case and the average-case setting. In the former, the unique games conjecture is of course an outstanding open problem; in the latter, we seem to be getting a better and better understanding of how to pinpoint exactly the performance of SOS via connections to low-degree polynomial distinguishers. I've also been wondering whether there are reasonable ways to define an "intermediate-case complexity" that would be a more distribution-independent variant of average-case complexity. (Something capturing solving a problem on all instances except a "few degenerate ones".)
- Also exploring to what extent SOS can be an *optimal* algorithm for a wide range of problems.
- Related to the last point, I've been interested in other algorithmic paradigms including those arising from physics.

My main focus currently is on algebraic natural proofs and their relation to circuit lower bounds and Boolean complexity, see. We can show that there are no algebraic natural proofs of polynomial size for all matrices in the variety of matrices of border completion rank bounded by some value $b$, unless $coNP \subseteq \exists BPP$; and none for the variety of matrices with permanent zero, unless $P^{\#P} \subseteq \exists BPP$. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2):496–526, 2001) yields proofs of polynomial size for the latter variety, thereby overcoming the natural proofs barrier in this case.

I worked on derandomising variants of PIT; in particular, with Gorav Jindal and Anurag Pandey, we gave a deterministic approximation scheme for the rank of a matrix space, see.

Moreover, with Vladimir Lysikov, I worked on characterisations of tensors of minimal border rank. Such tensors are supposed to be good starting tensors to design fast matrix multiplication algorithms, see.

Some recent research interests:

- Continued work on understanding the learning with errors (LWE) problem (and variants) and its applications in cryptography (to fully homomorphic encryption, attribute-based encryption, constrained pseudorandom functions and secure multiparty computation, among others). I am also trying to understand the hardness of LWE, and in particular of its variant over polynomial rings (RLWE), when the distribution on secrets is imperfect. I recently showed that some properties of LWE also extend to the learning parity with noise (LPN) problem, but only achieved slightly non-trivial parameters.

- Using cryptography in a quantum and post-quantum world (post-quantum refers to a setting where honest users are classical but adversaries are quantum). This includes understanding the quantum complexity of LWE, constructing quantum fully homomorphic encryption, and using cryptography to allow single-prover protocols for tasks that previously required two entangled non-communicating provers (at the cost of relying on computational soundness). For the latter, a recent work with Christiano, Mahadev, Vazirani and Vidick shows that a quantum entity can prove to a classical verifier that it has quantum powers, and furthermore allows it to generate statistically certifiable randomness.

- Very recent line of work with Applebaum and Tsabary on a formal framework for analyzing secure multiparty computation via an object we call a "multi-party randomized encoding". So far this has allowed us to resolve the round complexity of information-theoretic semi-honest MPC.
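To make the LWE setting above concrete, here is a minimal toy sketch of LWE sample generation (the parameters, function name, and noise bound are purely illustrative; real instantiations need carefully chosen dimensions, moduli and noise distributions):

```python
import random

def lwe_samples(n=16, q=7681, num=32, noise=2, seed=1):
    """Toy LWE instance: samples (a, <a, s> + e mod q) with a uniform
    vector a in Z_q^n, a fixed secret s, and small noise e."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]
    samples = []
    for _ in range(num):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randint(-noise, noise)
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```

The hardness assumption is that, without s, these pairs look like uniformly random pairs; the question of imperfect secret distributions mentioned above corresponds to drawing s from something other than the uniform distribution.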

My main current research interest is in expanding the surprising connections between invariant theory, complexity, and optimization: a picture of the state of the art can be found in a workshop organized by Avi Wigderson in June 2018 at IAS, see. These developments provide efficient numerical algorithms in certain settings of geometric invariant theory for which the classical algebraic methods are not known to lead to efficient algorithms. With Franks, Garg, Oliveira, Walter and Wigderson, we analyzed operator scaling algorithms for tensors in link and link. Recently, we realized that the second-order geodesic convex optimization from here can be generalized to the general framework. A main challenge is to find a polynomial-time algorithm for testing membership in the null cone in the general setting of the action of a reductive group on a complex vector space. Mulmuley had predicted such an algorithm (and more) here. The main evidence for the existence of such an algorithm, besides already knowing one in special settings, is that the null cone problem is in NP and coNP, by an argument similar to the one here. Moreover, for actions of abelian groups, the null cone problem reduces to linear programming. Finding such an efficient algorithm in the setting of tensors would be a great advance!

With Ikenmeyer and Panova, I proved a negative result in geometric complexity theory, which says that one cannot resolve the permanent versus determinant conjecture via representation-theoretic occurrence obstructions, see.

In a different direction, I recently proved that Koiran's Real Tau Conjecture is true “on average”, see.

Another line of research of mine, with Felipe Cucker, Pierre Lairez and Josue Tonelli-Cueto, is to develop numerically stable grid-based algorithms for computing the homology of semialgebraic sets. Our algorithms have “only” singly exponential running time on most inputs, while the best symbolic algorithms (which work for all inputs) have doubly exponential running time. See here and here.

I also worked on developing a Probabilistic Schubert Calculus, see.

I have been working towards hardness of unique games and 2:1 games together with Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra. A key component is "agreement tests" which generalize low degree tests and can be found in essentially all PCP constructions. I have been studying agreement tests more generally and for domains beyond the low degree test Grassmann scheme.

An exciting new focus for me is "high dimensional expansion" which is a generalization of graph expansion to hypergraphs, and is surprisingly related to agreement tests, as discovered in this work with Tali Kaufman. High dimensional expansion is a recently emerging topic connecting several areas in math (combinatorics, group theory, geometry, algebraic topology), and I spent the last year learning more about this. I am currently focused on understanding this phenomenon better and in particular how it can be harnessed towards various applications in TCS: from coding theory to PCPs to derandomization.

Recently I've been working/thinking mainly on Locally Decodable Codes (and PIR), Matrix rigidity and geometric incidence theorems (Kakeya/Sylvester-Gallai) and their applications.

Some recent works:

- With Moran/Filmus: A couple of papers on VC dimension and Sauer-Shelah lemma in various settings (VC dimension of sumsets and VC dimension over lattices).

- With Ben Edelman we showed that certain matrices arising in group algebras are not rigid (at least not rigid enough to get circuit lower bounds).

- With Garg/Oliveira/Solymosi: a generalization of earlier work (with Saraf/Wigderson/Barak/Yehudayoff) on the rank of design matrices to the case of block matrices (matrices whose entries are matrices). This has some applications in geometric rigidity (not matrix rigidity :)) and I hope to find more applications (perhaps in coding theory).

In the past 2–3 years, I have been working on so-called scaling algorithms along with Zeyuan Allen-Zhu, Peter Bürgisser, Cole Franks, Leonid Gurvits, Yuanzhi Li, Rafael Oliveira, Michael Walter and Avi Wigderson. One of the first scaling problems to be studied was that of matrix scaling (by Sinkhorn in 1964), which has since had applications in a variety of areas. We have been generalizing this theory to the "non-commutative" setting and have designed efficient algorithms for operator scaling, tensor scaling and non-uniform versions of these. These scaling problems have rich and surprising connections to a number of areas such as derandomization, invariant theory, convexity on manifolds, quantum information theory, non-commutative algebra, operator theory and functional analysis. Some sample papers can be found here, here, here and here.
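As a concrete illustration of the commutative starting point, here is a minimal sketch of Sinkhorn's matrix scaling iteration (a naive fixed-iteration version for illustration; the papers above analyze convergence rates carefully):

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize the rows and columns of an entrywise-positive
    matrix; the iterates converge to a doubly stochastic matrix."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)  # make rows sum to 1
        A /= A.sum(axis=0, keepdims=True)  # make columns sum to 1
    return A
```

For matrices with zero entries the iteration can fail to converge; deciding scalability, and generalizing the alternating-normalization idea to operators and tensors, is where the connections to invariant theory enter.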

Starting in Summer 2015, I was working on an introductory book on Property Testing, which I completed by April 2017. It was published in November 2017.

Research-wise (see the list of my recent papers), my focus was on property testing as well as on doubly-efficient interactive proof systems and worst-case to average-case reductions for problems in P. By doubly-efficient interactive proof systems we mean interactive proofs in which the (prescribed) prover strategy runs in polynomial time, whereas the verifier runs in almost linear time (or much faster than the known decision procedure for the set). Developments in the study of such proof systems seem to occur alongside developments in the study of worst-case to average-case reductions. Is this a coincidence? Guy has an opinion.

Goldwasser, Shafi

My purpose in life is to prove unconditional lower bounds for concrete models of computation.

Recently, we've developed a new technique to derive monotone circuit lower bounds (which are often hard to prove) from Resolution lower bounds (which are often easy to prove). This is a converse to a connection ("monotone interpolation") known since the 90s. We can now show in upcoming work that a monotone version of XOR-SAT has exponential monotone circuit complexity. Since XOR-SAT is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988).

Can our tools be used to show exponential monotone lower bounds for perfect matching? Or to prove that SETH holds for monotone circuits? Or to make progress in proof complexity?

A broad future project is to develop a theory of the communication analogues of subclasses of TFNP (total NP search problems). This provides a Karchmer-Wigderson style lens on monotone complexity. For example, the aforementioned result can be interpreted as separating the communication analogues of PPA and PLS.

My research work has been mostly around the derandomization question; specifically, derandomization of some special cases of the polynomial identity testing (PIT) problem. The problem is to test whether an implicitly given multivariate polynomial is identically zero. A simple randomized test is to check whether the polynomial evaluates to zero at a random point. The goal is to design a deterministic algorithm.
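The randomized test mentioned above is the Schwartz-Zippel test; here is a minimal sketch, with the polynomial given as a black-box evaluation function (the prime $2^{61}-1$ is just a convenient large field size):

```python
import random

def pit(poly, n_vars, trials=20, field_size=2**61 - 1):
    """Schwartz-Zippel identity test: a nonzero polynomial of total
    degree d vanishes at a uniformly random point of S^n with
    probability at most d/|S|, so random evaluation detects nonzeroness."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n_vars)]
        if poly(point) % field_size != 0:
            return False  # witness found: the polynomial is nonzero
    return True  # identically zero, with high probability
```

The derandomization question is to replace the random points with a small deterministically constructed hitting set, at least for restricted circuit classes.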

Derandomizing the Isolation Lemma: In some recent works, we derandomized a special case of PIT which gives parallel algorithms for bipartite perfect matching (with Fenner, Thierauf) and linear matroid intersection (with Thierauf). This was done by a derandomization of the Isolation Lemma of Mulmuley-Vazirani-Vazirani. Later, we gave a generic approach towards derandomizing the Isolation Lemma by relating it to a question about the number of near-shortest vectors in certain lattices. Towards this, we showed that the number of near-shortest vectors in totally unimodular lattices is polynomially bounded (with Thierauf and Vishnoi, see here and here). This result relies on results from matroid theory, namely Seymour's characterization of regular matroids.
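For intuition: the Isolation Lemma says that if each of m elements receives an independent uniform weight from {1, ..., 2m}, then any fixed family of distinct subsets has a unique minimum-weight member with probability at least 1/2. A hypothetical empirical check of this (the function name and parameters are illustrative):

```python
import random

def unique_min_fraction(m=10, num_sets=50, trials=200, seed=0):
    """Empirically estimate how often a random weight assignment
    isolates a unique minimum-weight set in a fixed family of subsets."""
    rng = random.Random(seed)
    # a fixed family of distinct random subsets of {0, ..., m-1}
    family = list({frozenset(rng.sample(range(m), rng.randint(1, m)))
                   for _ in range(num_sets)})
    unique = 0
    for _ in range(trials):
        w = [rng.randint(1, 2 * m) for _ in range(m)]
        totals = sorted(sum(w[i] for i in s) for s in family)
        if len(totals) == 1 or totals[0] != totals[1]:
            unique += 1
    return unique / trials
```

Derandomizing the lemma means replacing the random weights by a small explicit collection of weight assignments, one of which is guaranteed to isolate.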

More recently, I have been thinking about derandomizing some other applications of PIT, e.g., graph rigidity, matroid representations, non-commutative rational identity testing, odd acyclic orientation etc.

- I have been very interested recently in the interplay between the polymorphisms associated with a problem (loosely, operations which preserve its solution space) and its computational complexity. The algebraic CSP dichotomy conjecture established a precise link between these for constraint satisfaction problems. We have been investigating more general manifestations of this phenomenon: in promise CSPs, e.g. here and here, as well as in fine-grained complexity and optimization. A recent result in the latter vein interpolates between 0-1 and linear programming: given an n-variable LP feasible over {0,1}^n, it finds a solution in E^n, where E is a set containing {0,1}, in time at most (2-measure(E))^n.

- “Algebraic pseudorandomness” which studies linear-algebraic analogs of fundamental Boolean pseudorandom objects is another current interest. Here a recent highlight is a construction of lossless dimension expanders over large fields.

- In approximability, we have been interested in problems related to polynomial optimization and operator norms. In particular, we showed that hypercontractive norms (that don’t straddle 2) are NP-hard to approximate within any constant.

- Coding theory in various forms of course still occupies a lot of my time. Coding models motivated by distributed storage, such as locally repairable codes, maximally recoverable codes, and MSR/regenerating codes, are a recent focus area. Codes for deletions, polar codes, combinatorial results for perfect hashing, etc. are other topics I have been investigating.

During the last few years I have been working on geometric complexity theory and in general on applying algebra and geometry to questions in complexity theory.

With Panova I proved a no-go result on occurrence obstructions link that I improved with Bürgisser and Panova link.

In order to understand GCT better, with Bringmann and Zuiddam I studied simpler algebraic computational models, which resulted in work on width-2 algebraic branching programs and continuantal complexity link. With Bläser, Jindal and Lysikov I recently got interested in algebraic natural proofs link.

With Bläser I gave several introductory lectures on geometric complexity theory, whose lecture notes can be found online link.

I am also continuously working on matrix multiplication algorithms with symmetry, with Ballard, Chiantini, Hauenstein, Landsberg, Lysikov, Ottaviani, and Ryder: link link link link.

Recently with Komarath, Lenzen, Lysikov, Mokhov, and Sreenivasaiah we found an interesting generalization of monotone complexity to non-monotone Boolean functions, called hazard-free complexity link. The proof uses a new type of directional derivative for Boolean functions.

One of the topics I have been obsessed with is that of delegating computation, i.e., how to prove to a (very weak) verifier that a computation was done correctly. In particular, my focus has been on constructing such proofs that are *non-interactive*. This is known to be impossible information-theoretically, and we overcome this impossibility by relying on cryptographic assumptions. I have several works on this topic.

In my most recent one, with Omer Paneth and Lisa Yang, we show how to construct a delegation scheme which is *publicly verifiable*; i.e., anyone can verify the proof.

As a stepping stone, we convert the interactive sum-check protocol of [LFKN92] into a non-interactive (and publicly verifiable) one.
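For reference, here is a minimal interactive sketch of the [LFKN92] sum-check protocol, with prover and verifier merged into one procedure and the polynomial given as a black box (illustrative only; the conversion to a non-interactive, publicly verifiable protocol is precisely the technical content of the work above):

```python
import random

P = 2**61 - 1  # prime modulus; all arithmetic below is over F_P

def lagrange_eval(evals, x):
    """Evaluate, at x, the unique polynomial through (i, evals[i]) mod P."""
    total = 0
    for i, yi in enumerate(evals):
        num, den = 1, 1
        for j in range(len(evals)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(g, n, deg, claimed):
    """Check the claim: sum of g over {0,1}^n equals `claimed` (mod P).
    g is a black-box polynomial in n variables, degree <= deg in each."""
    rs, claim = [], claimed % P
    for i in range(n):
        # Prover's round polynomial h(X) = sum over the remaining boolean
        # variables, sent as its evaluations at X = 0, 1, ..., deg.
        evals = []
        for t in range(deg + 1):
            s = 0
            for rest in range(2 ** (n - i - 1)):
                bits = [(rest >> j) & 1 for j in range(n - i - 1)]
                s = (s + g(rs + [t] + bits)) % P
            evals.append(s)
        if (evals[0] + evals[1]) % P != claim:
            return False  # verifier rejects
        r = random.randrange(P)  # verifier's random challenge
        claim = lagrange_eval(evals, r)
        rs.append(r)
    return g(rs) % P == claim  # final check: one evaluation of g
```

Note the verifier does exponential work here only because it evaluates the prover's sums itself; in the real protocol the prover supplies them, and the verifier's work is n rounds of degree-deg checks plus one evaluation of g.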

I have been working on high dimensional expanders and their potential applicability towards locally testable codes. Sipser and Spielman, in their seminal expander codes construction, were in fact aiming for locally testable codes (LTCs) from expanders (as stated explicitly in Spielman's thesis); however, they discovered that expanders are too weak for that.

High dimensional expanders can be thought of as a much stronger notion than the well-known expanders. They seem useful in TCS applications where expanders are too weak, or in other words, where random objects do NOT yield the desired object (as is the case for LTCs and PCPs, where random constructions are not known). They combine pseudo-randomness with a lot of structure.

Even if you do not care about the high dimensional expansion phenomenon per se, high dimensional expanders should interest you. One example: LPS Ramanujan graphs are well known, but the proof that they are indeed expanders is highly non-elementary. High dimensional expanders give expander graphs of quality similar to the LPS graphs for which there exists a very elementary proof of expansion, one that relies only on LOCAL neighborhoods of vertices.

In the last few years, I have been working on lower bounds for (classes of) arithmetic circuits. Simultaneously, I have also been exploring the related problem of reconstructing/properly learning arithmetic circuits: given blackbox access to the polynomial computed by an unknown circuit C (from some subclass of arithmetic circuits), efficiently reconstruct C.

A general theme here is that the proof techniques used to prove lower bounds are often the starting point of such reconstruction algorithms. Some recent work along this direction (that is still being written up) includes algorithms (which have low smoothed complexity) for polynomial decomposition and reconstructing depth three circuits with large top fanin.

Most recently, I have worked with Nicolas Ressayre on the factorization of polynomials into products of linear forms. This problem has several interesting algorithmic applications.

In particular, it occurs in an algorithm by Neeraj Kayal for the decomposition of n-variate polynomials into sums of powers of n linearly independent linear forms.

One of our factorization algorithms is based on elementary ideas from invariant theory, and can only factor a polynomial into a product of linearly independent forms.

Generalizing this algorithm to arbitrary products of linear forms seems to raise an interesting challenge from the point of view of Geometric Complexity Theory (also, a better understanding of Kayal's algorithm would be interesting for GCT).

In the last couple of years, I have also worked with Timothée Pecatte and Ignacio Garcia Marco on the decomposition of univariate and multivariate polynomials into sums of powers of affine functions.

The tools that we use include a higher-order version of the Hessian determinant, and linear differential equations with polynomial coefficients (the latter can be seen as an algorithmic application of the method of shifted derivatives, originally designed as a lower bound method). We proposed algorithms for a number of special cases, but many questions remain open even for univariate polynomials (both from the point of view of lower bounds and of algorithm design).

Some relevant links: lower bounds, reconstruction algorithms, linear independence, more reconstruction.

I like codes, polynomials and probability and everything in their convex hull. In recent years, I have been interested in locally decodable/testable codes (especially of high rate) and connections to pseudorandomness and PCPs.

A very recent highlight is this paper, with Noga Ron-Zewi, Shubhangi Saraf, Mary Wootters, which shows that some nice and lovable algebraic codes (folded Reed-Solomon codes, multiplicity codes) are list decodable and locally list decodable even better than known before — this leads to locally list-decodable codes of constant rate with the smallest known query complexity.

My interests in the last few years have been in understanding **algorithmic thresholds** for average-case problems. Such problems arise in a number of sub-areas of theoretical computer science, including computational complexity (e.g., what is the threshold "clause-to-variable ratio", or density, for efficiently refuting constraint satisfaction problems?), cryptography (e.g., what is the maximum stretch of low-degree pseudorandom generators?), theoretical machine learning (e.g., can we efficiently cluster mixtures of Gaussians whenever it is statistically possible?) and algorithmic statistics (e.g., can we efficiently estimate basic parameters such as the mean and covariance from samples with adversarial corruptions?).

The state-of-the-art algorithms for most of these problems are based on spectral methods. The "signal-to-noise" ratio (e.g. samples in learning problems, clause-to-variable ratio for constraint satisfaction problems, stretch for pseudorandom generators etc.) required for such algorithms to succeed is markedly higher than the statistically optimal value in many cases. On the other hand, due to the general difficulty of constructing distribution preserving reductions, lower-bounds based on standard assumptions are hard to come by.
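As a toy illustration of the spectral approach (not taken from any of the papers discussed here; the model and parameters are illustrative): in a spiked-matrix model, once the signal-to-noise ratio is large enough, the top eigenvector of the observed matrix correlates well with the planted vector.

```python
import numpy as np

def planted_spike_recovery(n=300, snr=10.0, seed=0):
    """Plant a unit vector x in M = snr * x x^T + W, where W is
    symmetric Gaussian noise with spectral norm about 2, and try to
    recover x as the top eigenvector of M."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)   # Wigner-normalized noise
    M = snr * np.outer(x, x) + W
    _, vecs = np.linalg.eigh(M)      # eigenvalues sorted ascending
    v = vecs[:, -1]                  # top eigenvector
    return abs(float(v @ x))         # correlation with the truth
```

Below the critical snr the top eigenvector becomes essentially uncorrelated with x; pinning down where efficient algorithms stop working, and whether anything beats the spectral threshold, is exactly the "algorithmic thresholds" question.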

With my co-authors, I have focused on an attempt to build a unified story for such problems based on a general-purpose algorithmic scheme called the **Sum-of-Squares** method. This method is based on semi-definite programming and captures the most general techniques known for problems of the kind discussed above.

For some of the problems discussed above, we have had some success in obtaining faster algorithms via the sum-of-squares paradigm. Examples include a quasi-polynomial time algorithm for clustering *mixtures of high-dimensional Gaussians* whenever statistically possible (link), efficient algorithms for *robust estimation of Mean, Covariance and Higher Moments* with statistically optimal error (link), and strong attacks on *local Pseudorandom Generators* (link) that imply that a proposed candidate indistinguishability obfuscation scheme due to Lin and Tessaro is insecure.

On the flip side, with co-authors, we have also had some success in understanding the limitations of the SoS method in the context of average-case complexity. Examples include: a tight *Sum-of-Squares Lower Bound for the Planted Clique Problem* link, *Asymptotically Tight Estimates of Sum-of-Squares Thresholds for Refuting Random Constraint Satisfaction Problems* link, and a general *equivalence between Sum-of-Squares and Spectral Methods* for a large class of average-case problems link.

Please also see Tselil Schramm's report for more details about this last work.

Beyond average-case complexity, I have also been thinking about establishing limitations of linear/semidefinite programming for worst-case combinatorial optimization problems. In this direction, we showed (link) that *arbitrary LPs* of sub-exponential size (formally, linear extended formulations) cannot beat the guarantees of the simple algorithm that returns a random assignment for worst-case constraint satisfaction problems. A somewhat surprising consequence is that sub-exponential size LPs cannot obtain an approximation ratio that noticeably beats 0.5, while a simple poly-size SDP obtains a 0.878 approximation. Please see Raghu Meka's report for some more details about this research direction.

In the past 3–5 years I went from a geometer with a casual interest in complexity to working full time on Valiant's permanent v. determinant hypothesis and the complexity of matrix multiplication, as well as other related complexity problems. The catalyst was the 2014 Simons Institute program "Algorithms and Complexity in Algebraic Geometry", which I co-organized and which resulted in the book "Geometry and Complexity Theory" (Cambridge 2017). Recent progress includes new lower bounds for the complexity of matrix multiplication (with Ottaviani and Michalek) here, a proof of Valiant's hypothesis assuming the supplementary hypothesis of equivariance (with Ressayre) here, the use of geometry to construct efficient matrix multiplication algorithms (with Ballard, Ikenmeyer, and Ryder) here, and the discovery that the exponent of matrix multiplication is also determined by the Waring rank of the matrix multiplication polynomial (with Chiantini, Hauenstein, Ikenmeyer, and Ottaviani) here, which has led my student Austin Conner to construct some remarkable new symmetrized matrix multiplication decompositions, the first of which is here. Current work includes addressing the Ambainis-Filmus-Le Gall challenge of finding new tensors to unblock Strassen's laser method, using geometric methods.

Recently, I've been quite interested in understanding program obfuscation, or more precisely, the notion of Indistinguishability Obfuscation (IO). The literature has demonstrated that IO is an extremely powerful tool. It also has connections to complexity theory, such as implying the hardness of Nash equilibrium. Unfortunately, securely realizing IO remains a major open question. The first IO candidates all rely on a complex cryptographic object called a multilinear map and its variants. Roughly speaking, a multilinear map allows for evaluating high-degree polynomials over secret encoded elements in Z_p and learning whether the output is zero. Multilinear maps are themselves extremely powerful and have long been sought after. So far, we only have sound instantiations of bilinear maps, supporting merely the evaluation of quadratic polynomials. A recent line of works, including mine, showed that a 3-linear map (supporting degree-3 polynomials) suffices for constructing IO, crucially using Pseudo-Random Generators (PRGs) with a certain simple structure. The simple structure, in particular, allows the PRG to be evaluated using a 3-linear map.

What is disappointing is that if we rely on bilinear maps instead, we need PRGs with an even simpler structure (computable by quadratic polynomials, and more), which can be attacked by known algorithms such as the SOS algorithms. My most recent works continue to explore whether we can find other types of simple PRGs that resist known algorithmic techniques and are useful for constructing IO from bilinear maps. In general, I am interested in understanding whether there is an inherent gap between the power of bilinear and trilinear maps, and whether we can base IO on other standard assumptions, such as lattices.

I am also interested in applications of IO, connections of IO with complexity theory, and how to implement special-case obfuscation for different applications. For instance, in a recent work, Fabrice and I showed that 2-round MPC can be based on 2-round OT, which is both necessary and sufficient. What's interesting is that the construction uses IO as an intermediate step and implements this application-specific IO using standard assumptions.

In another recent work with Nir, we proposed a 1-message Zero-Knowledge argument system with weak soundness and zero-knowledge properties. The weakening in soundness and zero-knowledge is necessary, as even 2-message ZK is impossible, as shown by Goldreich and Oren. Using this 1ZK argument, we constructed 1-message non-malleable commitments.

My main research focus in the last few years is the KRW conjecture, which is a promising approach for proving circuit-depth lower bounds. Roughly speaking, the conjecture asserts that the composition of boolean functions increases their depth complexity. Specifically, I have been studying different relaxations of the conjecture in an attempt to develop new techniques that would allow proving new lower bounds.

My first paper on the subject, joint with Dmitry Gavinsky, Omri Weinstein, and Avi Wigderson, proved a variant of the KRW conjecture for the composition of a function and a "universal relation" (a relaxation of a function). A recent joint paper with my postdoc Sajin Koroth improves the parameters of this result significantly. In another recent work, I proved a "derandomized" version of this result.

Together with Irit Dinur, we proved the special case of the KRW conjecture in which the inner function is the parity function. In fact, this result was already implicit in Hastad's work on the shrinkage exponent, but we believe that the techniques developed in our new proof would be useful in the study of the full conjecture.

Finally, the study of the KRW conjecture is based on the Karchmer-Wigderson framework, which allows us to study depth complexity through the lens of communication complexity. In recent years, I have studied the potential of this framework for proving AC0 lower bounds. This study has led to a joint paper with Avi Wigderson, in which we used the KW framework to prove a new result about depth-3 circuits. Along the way, we also proved an information-theoretic theorem on the ability to predict high-entropy strings using non-deterministic queries, which may be of independent interest.

Over the past few years I have been interested in the following directions:

(1) Pseudorandom generators for small-space algorithms. In the most recent work here (joint with Omer Reingold & Avishay Tal) we got a generator that does better than Nisan's PRG for width three branching programs and gets nearly optimal seed-length (for not too small error).

(2) SDP extension complexity. We know a lot about the limitations of LP extension complexity, to the extent that we now know that even beating the approximation factor of the trivial LP requires sub-exponential size extensions for many CSPs (e.g., Max-Cut, 3SAT, 3XOR, …). For SDP extension complexity, on the other hand, the only result we have is due to Lee, Raghavendra, and Steurer, and it only shows a quasi-polynomial size lower bound for CSPs. I've thought more about obtaining a similar result for SDPs and learnt some basic quantum communication/information theory along the way, as this seems connected to many questions in quantum communication and quantum information (much like the LP result drew inspiration from communication complexity). No concrete new results to report since the last workshop though (apart from a simplified proof of the LP result).

(3) Provable learning algorithms. I've been working on several learning problems, mainly joint with Adam Klivans, trying to get algorithms with provable learning guarantees. For example, one recent result was to get the first algorithm with provable guarantees for learning Markov random fields efficiently; our algorithm was a variant of the popular stochastic gradient descent that actually had nearly-optimal sample complexity and nearly-optimal run-time (assuming hardness of sparse LPN). There are many interesting open questions here that I don't think have been studied from a complexity perspective as much as they should have been (and sometimes surprisingly, e.g., the statistical vs computational complexity of sparse linear regression with Gaussian measurements!).

(4) Communication complexity (CC)/Cryptography. In an upcoming work we use CC techniques to construct efficient leakage-resilient secret sharing (LRSS) schemes that are secure against adversaries with much more leakage power than was previously known, and in particular almost reach the 'extent possible' in some important cases (without breaking information-theoretic limits or proving new breakthroughs in communication complexity). These results stem from viewing adversaries leaking information in crypto as running communication protocols, and using CC hardness results to design/analyze appropriate LRSS schemes.

I have been working on the 2-to-2 Games Conjecture that we established recently. A main component therein is Fourier analysis over the Grassmann graph, which is more complex than analysis on the Boolean hypercube.

I have been interested in analysis on such less classical settings.

I am interested in the Minimum Circuit Size Problem (MCSP), which takes the truth table of an $n$-bit function and a size parameter $k \leq 2^n$, and asks whether there is a circuit with at most $k$ gates that computes the function. While it is in NP, it is not known to be NP-hard. In the past I have proven that MCSP is not NP-hard under certain limited reductions, and that even unrestricted polynomial-time reductions to MCSP would imply new lower bounds. I am interested in continuing this line of research on the hardness of MCSP, as well as examining the complexity of the problem directly.
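As a tiny illustration of the problem, here is a hypothetical brute-force sketch. Two caveats: it measures *formula* size over the {AND, OR, NOT} basis (gates are not shared, a simplification of the circuit-size measure in MCSP), and it is only feasible for very small n:

```python
from itertools import product

def min_formula_size(target, n):
    """Minimum number of {AND, OR, NOT} gates in a formula computing the
    function whose truth table is `target` (a bitmask over the 2^n
    input assignments). Brute-force relaxation; feasible only for tiny n."""
    mask = (1 << (1 << n)) - 1
    best = {}
    for i in range(n):                      # input variables cost 0 gates
        tt = 0
        for a in range(1 << n):
            if (a >> i) & 1:
                tt |= 1 << a
        best[tt] = 0
    changed = True
    while changed:                          # relax until a fixed point
        changed = False
        for (ta, ca), (tb, cb) in product(list(best.items()), repeat=2):
            for tt, c in ((ta & tb, ca + cb + 1),   # AND gate
                          (ta | tb, ca + cb + 1),   # OR gate
                          (~ta & mask, ca + 1)):    # NOT gate
                if c < best.get(tt, float("inf")):
                    best[tt] = c
                    changed = True
    return best.get(target)

def mcsp(target, n, k):
    """The MCSP question (formula-size variant): is there a formula
    with at most k gates computing `target`?"""
    size = min_formula_size(target, n)
    return size is not None and size <= k
```

The brute force runs in time exponential in the truth-table length, which is itself $2^n$; whether anything substantially smarter is possible is exactly what makes the complexity of MCSP so intriguing.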

I also have some preliminary results involving sparse NP, that is, NP languages that contain only polynomially many strings of any given length. There are already known relations between sparse NP and NTIME($2^n$) (see [https://www.sciencedirect.com/science/article/pii/S0019995885800048]), and I'm interested in how far these kinds of results can go when applied to circuit lower bounds.

Over the last few years I've been very interested in the problem of *quantum tomography*, i.e., learning/testing an unknown quantum state. This is the quantum analogue of the classical problem of learning/testing an unknown probability distribution. The research area touches on many disparate areas of math: the combinatorics of longest increasing subsequences; representation theory of the symmetric group; integrable systems and free probability; differential privacy. For a (hopefully) gentle survey (joint with John Wright), see here.

I'm also excited about an upcoming work (joint with Sidhanth Mohanty) which gives a new, generalized method for lifting graphs, a new generalization of the matching polynomial of a graph, and some new perspective on the Marcus-Spielman-Srivastava interlacing polynomials line of work and how it connects to free probability (again!). There are also some applications of this work to understanding the ability of SDPs to analyze random CSPs.

Together with Thomas Vidick, I have been looking at various dimension reduction questions for matrices. See his report for the main example, involving dimension reduction for unitary matrices, which is closely connected to the Connes embedding conjecture. A related question we recently started investigating is dimension reduction for the Schatten-1 norm. Analogous results for the $\ell_1$ norm were proven by Brinkman and Charikar, and the matrix case of Schatten-1 has recently been analyzed by Naor, Pisier, and Schechtman.

Closely related to the above are questions on *graph limits*. For instance, we were trying to show that certain problems on graph limits are computable (assuming the Aldous-Lyons conjecture). This led us to classical results such as the uncomputability of Wang tiling, where many basic questions are still unanswered. One vague question we would be interested in answering is the existence of a "robust/PCP" uncomputable problem, but one first has to phrase it properly.

I've also been interested in basic questions in quantum algorithms; for instance, can we speed up Shor's factoring algorithm and get a subquadratic-time algorithm?

The three focal points of my research in the last year or so are: (1) pseudorandomness of space-bounded computations, where I have been pursuing two separate threads. Recent papers: Undirected Laplacian, unordered constant-width branching programs, width-3 branching programs; (2) algorithmic fairness (with a rather complexity-theoretic perspective). Recent papers: multicalibration, Computationally-Bounded Awareness; (3) complexity-theoretic aspects of cryptography. Recent papers: Batch Verification for UP, Communication Complexity of Key-Agreement. More recently (the past two to three years), one of the topics I have studied is robust adaptive data analysis via differential privacy and related notions.

Some focuses of my recent research:

- doubly-efficient interactive proofs, which are interactive proof systems for languages in BPP where the prover runs in polynomial time and the verifier in linear time (or more generally, verification using resources that are not known to be sufficient for deciding the languages).
- differentially private data analysis: recent work with Bun, Dwork and Steinke explores relaxed definitions and their behavior under composition. In recent work with Aaronson we also make and study a new connection to gentle measurement of quantum states.
- worst-case to average-case reductions for languages in BPP, where the reduction runs in linear or nearly-linear time. For example, in recent work with Oded Goldreich we show such a reduction for the problem of counting the number of t-cliques in a graph (where t is constant).
- algorithmic fairness: can we make sure that automated decisions made by machine learning algorithms do not exhibit discriminatory behavior? Recent works with Herbert-Johnson, Kim, Reingold and Yona explore various aspects of this question.

My main area of research is doubly-efficient interactive proofs (interactive proofs in which the verifier is super efficient and the honest prover is relatively efficient).

Recent results include:

- A doubly-efficient interactive proof system for bounded-space computations (together with Omer and Guy), which I briefly presented at the previous Oberwolfach meeting.
- An interactive proof for batch verification of UP statements (also together with Omer and Guy).
- A *computationally-sound* interactive proof for all of P in which the overhead of the prover is close to optimal, in contrast to most prior works, which had a large polynomial overhead (together with Justin Holmgren).
- A line of work on interactive proofs in which the verifier runs in *sub-linear* time.
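Many such protocols are built around the sumcheck protocol. Here is a toy self-contained sketch over a prime field (my own illustrative code, with a brute-force honest prover; not taken from any of the papers mentioned above):

```python
import random
from itertools import product

P = 2**31 - 1  # prime modulus for the toy field

def lagrange_eval(evals, x):
    # evaluate the degree-d polynomial through (0, evals[0]), ..., (d, evals[d])
    # at the point x, mod P
    d = len(evals) - 1
    total = 0
    for i, yi in enumerate(evals):
        num, den = 1, 1
        for j in range(d + 1):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(g, n, deg, rng=random):
    # the prover claims the value of sum of g over {0,1}^n; the verifier
    # checks it in n rounds, ending with a single evaluation of g
    claim = sum(g(*b) for b in product((0, 1), repeat=n)) % P
    fixed, current = [], claim
    for i in range(n):
        # honest prover: send the univariate restriction as deg+1 evaluations
        def gi(t):
            return sum(g(*fixed, t, *rest)
                       for rest in product((0, 1), repeat=n - i - 1)) % P
        evals = [gi(t) for t in range(deg + 1)]
        # verifier: consistency check against the current claim,
        # then a uniformly random challenge point
        if (evals[0] + evals[1]) % P != current:
            return claim, False
        r = rng.randrange(P)
        current = lagrange_eval(evals, r)
        fixed.append(r)
    return claim, g(*fixed) % P == current
```

The point of the protocol is the imbalance it illustrates: the verifier does only n rounds of cheap univariate checks plus one evaluation of g, while the (here deliberately naive) prover sums over exponentially many points.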

I have also been interested recently in other problems at the intersection of complexity and cryptography, including the theoretical foundations of public-key encryption, studying relaxations of collision-resistant hash functions, and trying to construct a more rigorous foundation for the popular Fiat-Shamir heuristic.

I spend most of my time thinking about connections between "fine-grained complexity" and approximation problems.

In our Distributed PCP paper with Amir and Ryan, we showed that some SETH-hard problems still require near-quadratic time even if approximation is allowed. (There have also been several related followups in the last year by myself and others.)

For other SETH-hard problems, I am thinking about faster approximation algorithms (for a good example which is unfortunately not mine, see the recent breakthrough on approximating Edit Distance).

I have been working on PCP and the Unique Games Conjecture.

Recently we have proved the related, albeit weaker, 2-to-2 Games Conjecture, with Minzer and Khot.

The proof involves the Fourier transform on the Grassmann graph.

This led us to think about related phenomena regarding the Johnson graph.

Structure theorems of that type could lead to results on Boolean functions, as well as in complexity theory.

(1) Constant factor approximation algorithm for edit distance in truly subquadratic time.

(2) Discrepancy of random matrices with many columns [https://arxiv.org/abs/1807.04318].

(3) Tight bounds on the number of relevant variables in a bounded-degree Boolean function [https://arxiv.org/abs/1801.08564].

I've been working on questions in algebraic complexity theory (lower bounds for arithmetic circuits, PIT, factoring) and coding theory (locally decodable and locally testable codes).

I also like thinking about questions in incidence geometry (Kakeya, Sylvester-Gallai type problems).

Recently I was thinking about some questions in polynomial factorization.

In a recent work (with Vishwas Bhargava and Ilya Volkovich) we showed that sparse polynomials where each variable has low individual degree must have somewhat sparse factors. Using this we were able to devise a blackbox factorization algorithm for sparse polynomials with low individual degree. Improving the sparsity bound is a very nice question.

In coding theory, in joint work with Swastik Kopparty, Or Meir and Noga Ron-Zewi, we showed that there exist high-rate locally decodable and locally testable codes with only sub-polynomial query complexity. Even more recently, in joint work with Swastik Kopparty, Noga Ron-Zewi and Mary Wootters, we showed that the well-studied folded Reed-Solomon codes achieve list-decoding capacity with only constant list size. We also showed how to obtain capacity-achieving locally list-decodable codes with lower query complexity than previously known.

I have been thinking mainly about average-case and distributional problems, both from the algorithmic side and in terms of providing evidence for information-computation gaps. One area of focus has been understanding the performance of the sum-of-squares SDP hierarchy for problems such as random CSPs, planted clique, and tensor decomposition. A recurring underlying question is: when is the full power of sum-of-squares necessary, and when do simpler algorithms, such as spectral algorithms, linear programs, or local algorithms (such as gradient descent), succeed?

In the wake of the planted clique lower bound of Boaz Barak et al. from 2016, there is a beautiful conjecture that the sum-of-squares hierarchy is no more powerful for many average-case problems than low-degree polynomial statistics (e.g. such as cycle counts for planted clique). In this paper, my co-authors and I go partway to establishing this conjecture, showing an equivalence between sum-of-squares and a restricted class of spectral algorithms for a broad class of average-case problems. Still, the original conjecture is stronger and would give us a more satisfying and precise characterization of the limitations of sum-of-squares, and establishing it remains completely open. (See also the discussion in the survey that I recently co-wrote with Prasad Raghavendra and David Steurer).
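As a toy illustration of such a low-degree statistic: the triangle count of a graph is a degree-3 polynomial in the edge indicators, and thresholding it against its expectation under G(n, 1/2) is a simple distinguisher for a large planted clique (my own illustration, not code from the paper):

```python
import numpy as np

def count_triangles(adj):
    # number of triangles = tr(A^3) / 6, a degree-3 polynomial
    # in the edge-indicator variables of the graph
    A = np.array(adj, dtype=np.int64)
    return int(np.trace(A @ A @ A)) // 6
```

Under G(n, 1/2) the expected count is C(n,3)/8, while a planted k-clique adds C(k,3) extra triangles; comparing the observed count to the null expectation is exactly the kind of low-degree polynomial test the conjecture is about.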

One notable phenomenon is that for these average-case problems, the settings that we predict to be hard for sum-of-squares correspond to the predicted hard regimes for sampling in statistical physics. Curiously, the heuristics we use to predict whether a problem is or isn't difficult appear, on the surface, to be unrelated to those of the physicists. I am interested in understanding why this is the case, and in whether the predictions can be unified, rigorously or even intuitively.

Some things I have been working on include

(1) Pseudorandom generators for geometrically defined classes such as intersections of halfspaces (https://eccc.weizmann.ac.il/report/2018/100 and https://arxiv.org/abs/1808.04035), and other pseudorandomness problems for Boolean functions, such as PRGs for AC0 (https://arxiv.org/abs/1801.03590) and deterministic approximate counting of CNF satisfying assignments (https://arxiv.org/abs/1801.03588)

(2) Various algorithmic and lower bounds problems in property testing https://arxiv.org/abs/1802.04859 and computational learning https://arxiv.org/abs/1807.07013

(3) Problems related to "trace reconstruction" https://arxiv.org/abs/1612.03148 and upcoming work in which trace reconstruction is extended to reconstruct a distribution over strings rather than a single unknown string
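For concreteness, here is a toy sketch of the deletion channel together with a "mean-based" reconstruction step, which inverts an explicit linear system relating the source string to the per-position means of zero-padded traces (my own conventions and simplifications; actual trace reconstruction algorithms are considerably more refined):

```python
import random
from math import comb
import numpy as np

def trace(s, q, rng=random):
    # one draw from the deletion channel: each bit of s is deleted
    # independently with probability q
    return [b for b in s if rng.random() > q]

def survival_matrix(n, q):
    # A[j, i] = Pr[source bit i survives and lands at trace position j]:
    # bit i is kept (prob 1-q) and exactly j of the first i bits are kept
    A = np.zeros((n, n))
    for j in range(n):
        for i in range(j, n):
            A[j, i] = comb(i, j) * (1 - q) ** (j + 1) * q ** (i - j)
    return A

def mean_based_reconstruct(mean_trace, q):
    # invert E[zero-padded trace] = A s to estimate the source string s,
    # then round each coordinate to a bit
    n = len(mean_trace)
    s_hat = np.linalg.solve(survival_matrix(n, q),
                            np.asarray(mean_trace, dtype=float))
    return [int(round(x)) for x in s_hat]
```

With empirical means from enough traces the same inversion recovers the string with high probability; the triangular matrix A is exactly the "mean trace" linear transform that mean-based algorithms analyze.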

Most recently I have been working on extensions of the classical Sylvester-Gallai (SG) theorem to higher-degree polynomials. This question is closely related to derandomizing polynomial identity testing of depth-4 circuits, as noticed by Gupta here. So far I have been able to prove the following SG-type theorem: if Q1,…,Qm are irreducible homogeneous polynomials of degree at most 2 in n variables such that for every i and j there is a k such that whenever Qi and Qj vanish then so does Qk, then the linear span of the {Qi} has dimension O(1). Extending the SG theorem to higher-degree polynomials is an interesting question on the borderline of combinatorics and algebraic geometry. It seems that tools from both worlds are needed to understand the problem.

Another area I've been thinking about is coding theory. Together with Ori Sberlo we showed that Reed-Muller codes achieve capacity for the Binary Erasure Channel and Binary Symmetric Channel for degrees up to linear in the number of variables (i.e. up to some degree O(m)). With Roni Con we give an explicit construction of codes for the insertion/deletion channel achieving better rates than earlier constructions.

Recently I am interested in:

(1) hyperbolic programming, a generalization of SDP which is not very well understood, in particular whether or not it is actually more powerful than semidefinite programming.

(2) spectral graph theory, especially questions related to high girth graphs, the nonbacktracking operator, and irregular Ramanujan graphs.

(3) connections between the "interlacing polynomials" method and free probability / random matrix theory.

Much of my work has focused on better understanding the computational complexity of high-dimensional estimation problems, where the goal is to infer the (high-dimensional) input to a specified randomized process upon observing its (high-dimensional) output.

This notion of high-dimensional estimation captures many interesting computational problems, e.g., learning mixtures of Gaussians, tensor decomposition, community detection, or planted clique.

We have investigated the computational complexity of such estimation problems through the lens of the sum-of-squares meta algorithm.

A surprising general result in this context is that the existence of certain low-degree proofs implies polynomial-time algorithms for estimation problems (whereas one would expect such proofs to imply only NP algorithms, or perhaps decision algorithms).

This perspective on estimation algorithms explains many previously known algorithmic guarantees and has led to significantly improved guarantees for several problems, e.g., learning mixtures of Gaussians and tensor decomposition.

For many estimation problems, it is plausible that the guarantees of sum-of-squares are optimal among polynomial-time algorithms.

This perspective also gives a new way to reason about gaps between guarantees that are statistically feasible and guarantees that are computationally feasible by proving limitations of the sum-of-squares meta-algorithm for particular estimation problems.

(See the reports of Tselil and Pravesh for more details about these developments.)

I have mostly been thinking about Communication with uncertainty. Some papers relating to the topic are here, here and here - though the questions bothering me the most are still to be answered: What is the right model for the kind of proof verification that we do when reading a journal paper (Contextual proofs)? And how to write a proof that is readable to a broad audience (Contextual proofs with uncertainty)? Hopefully I can explain these questions to some of you over beer and find some answers.

A side interest I have developed (that has nothing to do with the focus of the workshop) is Polar codes. Some papers here and here. More upcoming.

I have been thinking about the wide zig-zag product and its application to constructing efficient parity samplers and close-to-optimal error-correcting codes with efficient encoding (see here). More generally, the zig-zag product and the wide zig-zag product are examples of Markovian processes with side information (non-backtracking walks and exploration sequences are other examples of such walks). I wonder what else can be economically achieved with such walks.

I was also interested in understanding and using the Chattopadhyay and Zuckerman paradigm. In particular, I showed, together with Avraham Ben-Aroya and Dean Doron:

- A more efficient reduction from non-malleable extractors to two-source extractors that strengthens and simplifies the CZ reduction, see here.

- A new construction of a strong disperser with one output bit simultaneously having close to optimal seed length and close to optimal entropy loss. This, equivalently, gives close to optimal erasure list-decodable codes. This also gives the first construction of a *non-linear* code, whose parameters greatly improve upon the best possible *linear* code. See here.

I've been working on quantified derandomization, which is a relaxed derandomization problem introduced by Goldreich and Wigderson. In this problem the input is a description of a circuit (or another computational device) on n input bits, and we are guaranteed that the circuit is extremely biased: It either accepts all but B(n) of its inputs or rejects all but B(n) of its inputs, where B(n) « 2^n is extremely small (e.g., subexponential). Our goal is to deterministically detect the direction of this extreme bias - that is, to deterministically decide if the circuit accepts almost all inputs or rejects almost all inputs. This is a relaxation of the classical derandomization problem, in which B(n) = 2^n/3.
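For contrast, the randomized version of the task is trivial precisely because B(n) is so small; a sketch (my own illustration):

```python
import random

def bias_direction_randomized(circuit, n, samples=9, rng=random):
    # promise: `circuit` accepts all but B(n) inputs, or rejects all but
    # B(n) inputs, with B(n) << 2^n. Each random sample lands in the
    # exceptional set with probability only B(n)/2^n, so the majority
    # vote over a few samples identifies the direction of the bias w.h.p.
    ones = sum(circuit(rng.getrandbits(n)) for _ in range(samples))
    return 1 if 2 * ones > samples else 0
```

The whole content of quantified derandomization is to replace these few random samples by a deterministically computable set of queries.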

One highlight from my research so far is that for several classes, such as AC0 or TC0, the results obtained are very tight. That is, the parameter values (such as B(n) or circuit size) for which we can unconditionally solve the problem are very close to parameter values that would yield breakthrough results in standard derandomization of the corresponding class.

Using the techniques that underlie the proofs of these results, for some classes we can significantly relax the requirements in Williams' "algorithmic method" for circuit lower bounds. For example, we show that in order to prove that NEXP is not contained in TC0, it suffices to solve the following problem: given a description of a TC0 circuit whose size is "just beyond" the size required to compute parity, deterministically find a subset of {0,1}^n of sub-exponential size on which the circuit "simplifies" to some function from a simple class.

I have been interested in graph sparsifiers, previously in lower bounds and now in constructions. I am also interested in robust recovery problems, that is, in finding a planted solution in semi-random models.

I continue to work on matrix multiplication, via group theory and representation theory. The effort saw some progress in the direction of ruling out certain families of groups, by extending the 2016 resolution of the cap-set conjecture by Ellenberg and Gijswijt; in our most recent paper we extend this quite significantly to large classes of non-abelian groups. However, the families of groups that remain feasible still cover most of the settings that we have felt for some time should support an eventual exponent-2 algorithm! The new work has generated some nice intermediate questions.

A somewhat related topic is that of finding fast DFTs for general finite groups. With Chloe Hsu, we found a new algorithm that essentially solves the problem for linear groups, which had been a long-standing stumbling block. We also improve the exponent for general finite groups from 1.5 to sqrt(2) (assuming omega = 2). With some recent new ideas, exponent 1 for all finite groups appears close, but there is still a gap to fill.

Finally, I continue to think on and off about L vs. RL; with William Hoza we proved an interesting statement that essentially shows that such a derandomization is equivalent to finding a transformation between two variants of standard pseudorandom generators.

I continue to think actively about the RL vs. L problem. Over the past couple of years, I've been working with my student Jack Murtagh, Omer Reingold, and Aaron Sidford on graph-theoretic approaches, specifically on trying to come up with deterministic, nearly logspace algorithms for problems beyond Undirected S-T Connectivity. In our first work, we did this for solving Undirected Laplacian Systems, and in a new work (to be posted soon) we did it for (spectrally) approximating random walks of arbitrary length on undirected graphs.

Another continuing focus is on using computational aspects of entropy to understand how efficiently we can construct cryptographic primitives from one-way functions. The main efficiency bottleneck in existing constructions is "flattening" - converting Shannon entropy to min- and max-entropy. In a new work with my student Yi-Hsiu Chen, Mika Goos, and Jiapeng Zhang, we prove a lower bound on the efficiency of flattening algorithms, which might be a first step towards proving (black-box) lower bounds on the efficiency of the aforementioned cryptographic constructions.

Finally, much of my research continues to be about both the theory and practice of differential privacy. In particular, I wrote this survey on complexity aspects of differential privacy for Oded's 60th birthday.

My recent work of interest to the complexity community is on two topics:

(1) Matrix multiplication. Together with my student Josh Alman we have been investigating the complexity of n-by-n matrix multiplication (MM). In recent work [appeared in ITCS'18 and to appear in FOCS'18] we consider generalizations of the known methods for designing MM algorithms and give limitations for these methods. In particular, we show that none of the known approaches can prove that the MM exponent \omega is 2. To get to 2, one would have to either radically change the starting tensors, or step away from "combinatorial degenerations".
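For context, the known approaches all descend from Strassen-style identities: Strassen's original one multiplies 2x2 matrices with 7 products instead of 8, and applying it recursively to blocks already gives \omega \leq log_2 7. A sketch of the base identity:

```python
def strassen_2x2(A, B):
    # Strassen's identity: the 2x2 matrix product with 7 multiplications;
    # applied recursively to block matrices it yields an O(n^{log2 7}) algorithm
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

The modern methods (laser method, group-theoretic constructions) replace this 2x2 tensor by much larger starting tensors; the limitation results referred to above concern exactly which exponents such starting points can possibly reach.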

(2) Fine-grained complexity of problems in P. Recently, I have been working on giving conditional lower bounds for estimating graph parameters such as the diameter, radius or girth of a graph, under a few popular hardness hypotheses. For instance, in recent work [appeared in STOC'18] we give an approximation-ratio vs. running-time tradeoff lower bound for the Eccentricities problem (and to a lesser extent for Diameter), under the Strong Exponential Time Hypothesis. The tradeoff shows that several known algorithms are conditionally optimal, up to n^{o(1)} factors.

I have been following lines of work in quantum complexity and quantum cryptography that overlap through the model of interactive proof systems with multiple quantum provers.

A first question in this area is a "quantum PCP theorem for games", which asks for the complexity of the class MIP*, the analogue of MIP with the only difference that provers may use quantum entanglement. This class seems larger than the classical MIP (the conjecture is that it contains the exponential-sized version of QMA, QMA_EXP), but there are no upper bounds on it, so it could in principle be much bigger. Recently, we showed the conjecture, with some caveats, by developing quantum analogues of classical techniques such as the linearity and low-degree tests. This uses some nice work on approximate representations of non-abelian groups by Gowers and Hatami. I wrote about some of this here.
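For reference, the classical linearity test being ported to the entangled setting is the BLR test (a standard sketch; the quantum analogue is of course far more involved):

```python
import random

def blr_linearity_test(f, n, trials=100, rng=random):
    # BLR test: a linear (parity) function satisfies f(x) ^ f(y) == f(x ^ y)
    # for every pair; a function that is far from linear fails a random
    # pair with constant probability per trial
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True
```

A parity function passes with probability 1; a function at constant distance from every parity is rejected after O(1) trials in expectation.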

In cryptography, recently I worked on the problem of certifying properties of a quantum device (= prover) by having a classical interaction with it. In this paper we designed a "cryptographic test for randomness", that certifies the device generates information-theoretic randomness, assuming that it is computationally bounded (the device generating the randomness is bounded, but the randomness is indistinguishable from uniform from the point of view of even a computationally unbounded third party; this is impossible classically).

In a more mathematical direction, I have been thinking about formulations of dimension reduction that arise from the study of quantum games. A particularly simple instance is the following: suppose we are given a Gram matrix G (of constant size) with the promise that each entry Gij can be written as the (dimension-normalized) trace inner product of two unitary matrices. What is the best bound one can show on the dimension of unitaries that achieve this (even up to small additive error)? This asks for a more structured variant of the Johnson-Lindenstrauss lemma, and turns out to be a seemingly much harder question.
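The object in question is easy to illustrate numerically: any tuple of d-dimensional unitaries induces, under the normalized trace inner product, a PSD Gram matrix with unit diagonal, and the question is how small d can be for a given G (a sketch with Haar-random unitaries; my own illustration):

```python
import numpy as np

def haar_unitary(d, rng):
    # QR of a complex Gaussian matrix, with the phases of R's diagonal
    # fixed, gives a Haar-random d x d unitary
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def unitary_gram(us):
    # G[i, j] = tr(U_i^* U_j) / d: the dimension-normalized trace
    # inner product of the unitaries U_i and U_j
    d = us[0].shape[0]
    k = len(us)
    G = np.empty((k, k), dtype=complex)
    for i in range(k):
        for j in range(k):
            G[i, j] = np.trace(us[i].conj().T @ us[j]) / d
    return G
```

The dimension-reduction question asks for the reverse direction: given such a G, what is the smallest d for which unitaries (approximately) realizing it exist?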

Some recent publications are here.

In the past few years I have been writing a text on computational complexity (and beyond) called "Mathematics and Computation", a draft of which is available here: [https://www.math.ias.edu/avi/book]

Research-wise, most of my efforts in recent years went into works expanding the set of techniques, results and connections arising from work on "Operator Scaling" in 2015. This has snowballed into a large and still-growing activity with applications to optimization and complexity theory on the one hand, and to several mathematical areas on the other, including algebra, analysis and quantum information theory. A good picture of where things stand can be gained from the agenda, videos, slides and references of a week-long workshop held at IAS in June 2018, all available here: [https://www.math.ias.edu/ocit2018]

I've been developing the theory of fine-grained complexity (what is the "optimal" exponent for the running time of a problem?), as well as lower bounds via algorithm design. Some highlights:

STOC16: Daniel Kane and I gave a function in depth-3 TC0 which requires n^(1.5-o(1)) gates and n^(2.5-o(1)) wires, the first such lower bounds for this circuit class.

STOC17: among other results, Josh Alman and I proved that the Walsh-Hadamard transform is *not* rigid in the sense of Valiant (contradicting a widespread belief to the contrary).

CCC17: Cody Murray and I gave a problem solvable in n^(1.1) time without logspace-uniform circuits of n^(1+o(1)) size, using a new (non-relativizing) proof method.

STOC18: Cody and I showed how slightly faster Circuit SAT algorithms imply strong circuit lower bounds for nondeterministic quasi-polytime, and even for NP if the algorithms are "good enough". In that paper we showed nondeterministic quasi-polytime is not in ACC^0; in CCC18, I show lower bounds for NP against various depth-2 circuit classes that have been widely studied in other communities.

Recently I've been working on the following:

Hitting sets for RL: My student William Hoza and I gave a simple hitting set generator for RL that has optimal dependence on the error. We also interpolate between the Saks-Zhou and Savitch derandomizations of RL with small success probability.

Randomness extraction: My student Fu Li and I gave simple improved extractors for recognizable sources, including algebraic sources (varieties). My former student Xue Chen and I used chaining to show the existence of simple extractors with small seed length, including linear extractors. I've thought more about two-source extractors and potential applications, but don't have anything new to report since the last Oberwolfach.
