Makrand Sinha

Networks & Optimization Group
Centrum Wiskunde & Informatica
Science Park 123
1098 XG Amsterdam

Email: Makrand.Sinha (at) cwi (dot) nl

About Me

I am a postdoctoral researcher in the Networks and Optimization group at CWI in Amsterdam. My primary research interests lie in Communication Complexity, Lower Bounds for Linear and Semidefinite Programs, and Convex Geometry and Optimization.

I received my PhD in August 2018 from the Paul G. Allen School of Computer Science & Engineering at the University of Washington in Seattle, under the guidance of Anup Rao.

Previously, I completed my Master's degree in the Computer Science Department at ETH Zürich, advised by Thomas Holenstein. Before that, I completed a Bachelor's degree in Computer Science (2005-2009) at IIT Kanpur.

You can find my curriculum vitae here: CV [PDF]

Publications


  • Exponential Separation between Quantum Communication and Logarithm of Approximate Rank
    Makrand Sinha and Ronald de Wolf
    To appear in FOCS 2019
    [arXiv] [ECCC]

    Chattopadhyay, Mande and Sherif (ECCC 2018) recently exhibited a total Boolean function, the sink function, that has polynomial approximate rank and polynomial randomized communication complexity. This gives an exponential separation between randomized communication complexity and logarithm of the approximate rank, refuting the log-approximate-rank conjecture. We show that even the quantum communication complexity of the sink function is polynomial, thus also refuting the quantum log-approximate-rank conjecture.

    Our lower bound is based on the fooling distribution method introduced by Rao and Sinha (ECCC 2015) for the classical case and extended by Anshu, Touchette, Yao and Yu (STOC 2017) for the quantum case. We also give a new proof of the classical lower bound using the fooling distribution method.

  • Simplified Separation of Information and Communication
    Anup Rao and Makrand Sinha
    Appeared in Theory of Computing, Volume 14, Article 20, pp. 1-29, December 2018.
    [Journal] [ECCC]

    We give an example of a Boolean function whose information complexity is exponentially smaller than its communication complexity. This was first proven recently by Ganor, Kol and Raz (J. ACM 2016), and our work gives a simpler proof of the same result. In the course of this simplification, we make several new contributions: we introduce a new communication lower-bound technique, the notion of a fooling distribution, which allows us to separate information and communication complexity, and we give a more direct proof of the information complexity upper bound.

    We also prove a generalization of Shearer's Lemma that may be of independent interest. A version of Shearer's original lemma bounds the expected mutual information of a subset of random variables with another random variable, when the subset is chosen independently of all the random variables that are involved. Our generalization allows some dependence between the random subset and the random variables involved, and still gives us similar bounds with an appropriate error term.

  • Edge Estimation with Independent Set Oracles
    Paul Beame, Sariel Har-Peled, Sivaramakrishnan Natarajan Ramamoorthy, Cyrus Rashtchian and Makrand Sinha
    In Proceedings of the 9th Innovations in Theoretical Computer Science Conference (ITCS '18)
    [arXiv]

    We study the problem of estimating the number of edges in a graph with access to only an independent set oracle. Independent set queries draw motivation from group testing and have applications to the complexity of decision versus counting problems. We give two algorithms to estimate the number of edges in an \(n\)-vertex graph: one that uses only \(\mathrm{polylog}(n)\) bipartite independent set queries, and another that uses \(n^{2/3} \cdot \mathrm{polylog}(n)\) independent set queries.

  • Lower Bounds for Approximating the Matching Polytope
    Makrand Sinha
    In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '18)
    [arXiv] [ECCC]

    We prove that any extended formulation that approximates the matching polytope on \(n\)-vertex graphs up to a factor of \((1+\epsilon)\) for any \(\frac{2}{n} \le \epsilon \le 1\) must have at least \(\binom{n}{\alpha/\epsilon}\) defining inequalities, where \(0<\alpha<1\) is an absolute constant. This is tight, as exhibited by the \((1+\epsilon)\)-approximating linear program obtained by dropping the odd set constraints of size larger than \((1+\epsilon)/\epsilon\) from the description of the matching polytope. Previously, a tight lower bound of \(2^{\Omega(n)}\) was only known for \(\epsilon = \mathcal{O}\left(\frac{1}{n}\right)\) [Rothvoss, STOC '14; Braun and Pokutta, IEEE Trans. Information Theory '15], whereas for \(\frac{2}{n} \le \epsilon \le 1\), the best lower bound was \(2^{\Omega(1/\epsilon)}\) [Rothvoss, STOC '14]. The key new ingredient in our proof is a close connection to the non-negative rank of a lopsided version of the unique disjointness matrix.

  • A Direct-sum Theorem for Read-Once Branching Programs
    Anup Rao and Makrand Sinha
    In Proceedings of the 20th International Workshop on Randomization and Computation (RANDOM'16)
    [pdf]

    We study a direct-sum question for read-once branching programs. If \(M(f)\) denotes the minimum average memory required to compute a function \(f(x_1, x_2, \dotsc, x_n)\), how much memory is required to compute \(f\) on \(k\) independent inputs that arrive in parallel? We show that when the inputs (updates) are sampled independently from some domain \(\mathcal{X}\) and \(M(f) = \Omega(n)\), then computing the value of \(f\) on \(k\) streams requires average memory at least \(\Omega\left(k \cdot \frac{M(f)}{n}\right)\).
    Our results are obtained by defining new ways to measure the information complexity of read-once branching programs. We define two such measures: the transitional and cumulative information content. We prove that any read-once branching program with transitional information content \(\mathtt{I}\) can be simulated using average memory \(\mathcal{O}(n(\mathtt{I}+1))\). On the other hand, if every read-once branching program with cumulative information content \(\mathtt{I}\) can be simulated with average memory \(\mathcal{O}(\mathtt{I}+1)\), then computing \(f\) on \(k\) inputs requires average memory at least \(\Omega(k \cdot (M(f)-1))\).

  • Fooling Pairs in Randomized Communication Complexity
    Shay Moran, Makrand Sinha and Amir Yehudayoff
    23rd International Colloquium on Structural Information and Communication Complexity (SIROCCO'16)
    [ECCC]

    Fooling pairs are one of the standard methods for proving lower bounds in deterministic two-player communication complexity. We study fooling pairs in the context of randomized communication complexity. We show that every fooling pair induces distributions on transcripts of private-coin protocols that are far apart. We then conclude that the private-coin randomized \(\varepsilon\)-error communication complexity of a function \(f\) with a fooling set \(\mathcal{S}\) is at least of order \(\log \frac{\log |\mathcal{S}|}{\varepsilon}\). This is tight, for example, for the equality and greater-than functions.

  • On the Communication Complexity of Greater-Than
    Sivaramakrishnan Natarajan Ramamoorthy and Makrand Sinha
    In Proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing (Allerton '15)
    [pdf]

    We give a simple information theoretic proof that the public-coin randomized communication complexity of the greater-than function is \(\Omega(\log n)\) for bit-strings of length \(n\).

  • Constructing a Pseudorandom Generator Requires an Almost Linear Number of Calls
    Thomas Holenstein and Makrand Sinha
    In Proceedings of the IEEE 53rd Annual Symposium on Foundations of Computer Science (FOCS '12), pp. 698-707, 2012
    [arXiv]

    We show that a black-box construction of a pseudorandom generator from a one-way function needs to make \(\Omega(n/\log n)\) calls to the underlying one-way function. The bound holds even if the one-way function is guaranteed to be regular. In this case it matches the best known construction due to Goldreich, Krawczyk, and Luby (SIAM J. Comp. 22, 1993), which uses \(\mathcal{O}(n/\log n)\) calls.

  • Vertices of Degree \(k\) in Random Unlabeled Trees
    Konstantinos Panagiotou and Makrand Sinha
    Preliminary version in Proceedings of the European Conference on Combinatorics, Graph Theory and Applications (EuroComb '09), Electronic Notes in Discrete Mathematics, Volume 34, pp. 41-45.
    Full version appeared in Journal of Graph Theory, Volume 69, Issue 2, pp. 114-130, February 2012.
    [pdf]

    Let \(\mathcal{H}_n\) be the class of unlabeled trees with \(n\) vertices, and denote by \(H_n\) a tree that is drawn uniformly at random from this class. The asymptotic behavior of the random variable \(\deg_k(H_n)\) that counts vertices of degree \(k\) in \(H_n\) was studied, among others, by Drmota and Gittenberger, who showed that this quantity satisfies a central limit theorem. This result provides a very precise characterization of the "central region" of the distribution, but does not give any non-trivial information about its tails.

    In this work we study further the number of vertices of degree \(k\) in \(H_n\). In particular, for \(k = \mathcal{O}\left(\sqrt{\frac{\log n}{\log\log n}}\right)\) we show exponential-type bounds for the probability that \(\deg_k(H_n)\) deviates from its expectation. On the technical side, our proofs are based on the analysis of a randomized algorithm that generates unlabeled trees in the so-called Boltzmann model. The analysis of such algorithms is quite well understood for classes of labeled graphs; see, e.g., the work by Bernasconi, the first author, and Steger. Comparable algorithms for unlabeled classes are unfortunately much more complex. We demonstrate in this work that they can be analyzed very precisely for classes of unlabeled graphs as well.