Volume 18 (2022) Article 3 pp. 1-29
APPROX-RANDOM 2016 Special Issue
Near-Optimal NP-Hardness of Approximating Max $k$-CSP$_R$
Received: April 23, 2017
Revised: November 7, 2021
Published: February 14, 2022
Keywords: constraint satisfaction, hardness of approximation, approximation algorithm
ACM Classification: F.2.2
AMS Classification: 68Q17, 68W25

Abstract:

We prove almost-optimal hardness for Max $k$-CSP$_R$. In Max $k$-CSP$_R$, we are given a set of constraints, each of which depends on at most $k$ variables. Each variable can take any value from $\{1, 2, \ldots, R\}$. The goal is to find an assignment to the variables that maximizes the number of satisfied constraints.
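
As a toy illustration (this example is ours, not from the paper): take $k = 2$, $R = 3$, variables $x_1, x_2, x_3 \in \{1, 2, 3\}$, and the three constraints $x_1 \neq x_2$, $x_2 + x_3 = 4$, and $x_1 = x_3$. The assignment $(x_1, x_2, x_3) = (1, 3, 1)$ satisfies all three constraints, so the optimum of this Max $2$-CSP$_3$ instance is $3$, and an $\alpha$-approximation algorithm must return an assignment satisfying at least $3\alpha$ constraints.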

We show that, for any $k \geq 2$ and $R \geq 16$, it is NP-hard to approximate Max $k$-CSP$_R$ to within a factor of $k^{O(k)} (\log R)^{k/2} / R^{k-1}$. In the regime where $3 \leq k = o(\log R / \log\log R)$, this ratio improves upon Chan's $O(k/R^{k-2})$-factor NP-hardness of approximation of Max $k$-CSP$_R$ (J. ACM 2016). Moreover, when $k = 2$, our result matches the best known hardness result of Khot, Kindler, Mossel and O'Donnell (SIAM J. Comput. 2007). We remark here that NP-hardness of an approximation factor of $2^{O(k)} \log(kR) / R^{k-1}$ is implicit in the (independent) work of Khot and Saket (ICALP 2015), which is better than our ratio for all $k \geq 3$.
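
To see roughly where the improvement over Chan's bound takes effect (a back-of-the-envelope comparison, with constants suppressed): our ratio beats $O(k/R^{k-2})$ precisely when
$$\frac{k^{O(k)} (\log R)^{k/2}}{R^{k-1}} \ll \frac{k}{R^{k-2}}, \qquad\text{i.e., when}\qquad k^{O(k)} (\log R)^{k/2} \ll k R.$$
Taking logarithms, the left-hand side contributes $O(k \log k + k \log\log R)$, which is $o(\log R)$ exactly when $k = o(\log R / \log\log R)$.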

In addition to the above hardness result, by extending an algorithm for Max $2$-CSP$_R$ by Kindler, Kolla and Trevisan (SODA 2016), we provide an $\Omega(\log R / R^{k-1})$-approximation algorithm for Max $k$-CSP$_R$. Thanks to Khot and Saket's result, this algorithm is tight up to a factor of $O(k^2)$ when $k \leq R^{O(1)}$. In comparison, when $k \geq 3$ is a constant, the previously best known algorithm achieves an $O(k/R^{k-1})$-approximation for the problem, which is a factor of $O(k \log R)$ away from the inapproximability ratio, in contrast to our gap of $O(k^2)$.

-----------------

A conference version of this paper appeared in the Proceedings of the 19th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX'16).