How to Compress Interactive Communication

ABSTRACT
We describe new ways to simulate two-party communication protocols so as to obtain protocols with potentially less communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the inputs to the participating parties can be simulated by a new protocol that communicates at most Õ(√(C·I)) bits. If the protocol reveals I bits of information about the inputs to an external observer who watches the communication, we show how to carry out the simulation with only Õ(I) bits of communication.
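The two notions of "revealed information" above can be made precise as the internal and external information cost of a protocol. The following formalization is a common one in this line of work; the notation is ours, not taken verbatim from the paper:

```latex
% Notation (ours): \Pi is the transcript of protocol \pi on inputs (X,Y)\sim\mu.
% Internal information cost: what the transcript reveals to the parties themselves
% (each party already knows its own input, hence the conditioning).
\mathrm{IC}^{\mathrm{int}}_{\mu}(\pi) \;=\; I(\Pi; X \mid Y) \;+\; I(\Pi; Y \mid X)
% External information cost: what the transcript reveals to an outside observer.
\mathrm{IC}^{\mathrm{ext}}_{\mu}(\pi) \;=\; I(\Pi; X, Y)
```

In these terms, the two simulation results say that a C-bit protocol can be compressed to roughly Õ(√(C·IC^int)) bits, and to Õ(IC^ext) bits when the external information cost is small.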
These results lead to direct sum theorems for randomized communication complexity. Ignoring polylogarithmic factors, we show that for worst-case computation, computing n copies of a function requires √n times the communication needed to compute one copy. For average-case complexity, given any distribution μ on inputs, computing n copies of the function on n inputs sampled independently according to μ requires √n times the communication needed to compute one copy. If μ is a product distribution, computing n copies on n independent inputs sampled according to μ requires n times the communication needed to compute one copy. We also study the complexity of computing the sum (or parity) of n evaluations of f, and obtain results analogous to those above.
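As a toy illustration of the two information measures discussed above, the sketch below computes the internal and external information of a one-message protocol in which Alice simply sends her uniform input bit to Bob. The helper functions and the example protocol are ours, for illustration only; they are not part of the paper.

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idxs):
    """Marginalize a joint pmf over tuples down to the coordinates in idxs."""
    out = defaultdict(float)
    for outcome, p in joint.items():
        out[tuple(outcome[i] for i in idxs)] += p
    return out

def cond_mutual_info(joint, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C); coordinate sets as tuples."""
    return (entropy(marginal(joint, a + c)) + entropy(marginal(joint, b + c))
            - entropy(marginal(joint, a + b + c)) - entropy(marginal(joint, c)))

# Toy protocol: X and Y are independent uniform bits; the whole transcript is
# the single message M = X.  Joint pmf over outcomes (x, y, m).
joint = {(x, y, x): 0.25 for x in (0, 1) for y in (0, 1)}
X, Y, M = (0,), (1,), (2,)

external = cond_mutual_info(joint, M, X + Y, ())     # I(M; XY)
internal = (cond_mutual_info(joint, M, X, Y)         # I(M; X | Y)
            + cond_mutual_info(joint, M, Y, X))      # I(M; Y | X)
print(external, internal)  # both equal 1.0: one bit is revealed either way
```

Here the internal and external costs coincide because Bob knows nothing about X in advance; for correlated inputs the internal cost can be much smaller than the external one, which is what makes the two compression bounds above genuinely different.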