
Impact of Quantized Inter-agent Communications on Game-Theoretic and Distributed Optimization Algorithms

Chapter in Uncertainty in Complex Networked Systems (Systems & Control: Foundations & Applications, Birkhäuser, Cham, 2018)

Abstract

Quantized inter-agent communications in game-theoretic and distributed optimization algorithms generate uncertainty that affects the asymptotic and transient behavior of such algorithms. This chapter uses the information-theoretic notion of differential entropy power to establish universal bounds on the maximum exponential convergence rates of primal-dual and gradient-based Nash seeking algorithms under quantized communications. These bounds depend on the inter-agent data rate and the local behavior of the agents’ objective functions, and are independent of the quantizer structure. The presented results provide trade-offs between the speed of exponential convergence, the agents’ objective functions, the communication bit rates, and the number of agents and constraints. For the proposed Nash seeking algorithm, the transient performance is studied and an upper bound on the average time required to settle inside a specified ball around the Nash equilibrium is derived under uniform quantization. Furthermore, an upper bound on the probability that the agents’ actions lie outside this ball is established; this bound decays doubly exponentially with time.


References

  1. G. N. Nair and R. J. Evans, “Stabilizability of Stochastic Linear Systems with Finite Feedback Data Rates,” SIAM Journal on Control and Optimization, vol. 43, no. 2, pp. 413–436, 2004.

  2. G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans, “Feedback Control Under Data Rate Constraints: An Overview,” Proceedings of the IEEE, vol. 95, no. 1, pp. 108–137, 2007.

  3. E. Nekouei, T. Alpcan, G. N. Nair, and R. J. Evans, “Convergence Analysis of Quantized Primal-Dual Algorithms in Network Utility Maximization Problems,” IEEE Transactions on Control of Network Systems, 2016 (early access).

  4. E. Nekouei, G. N. Nair, and T. Alpcan, “Performance Analysis of Gradient-Based Nash Seeking Algorithms Under Quantization,” IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 3771–3783, 2016.

  5. F. Kelly, A. Maulloo, and D. Tan, “Rate control for communication networks: shadow prices, proportional fairness and stability,” Journal of the Operational Research Society, vol. 49, no. 3, pp. 237–252, 1998.

  6. S. Shakkottai and R. Srikant, “Network Optimization and Control,” Foundations and Trends in Networking, vol. 2, no. 3, pp. 271–379, 2007.

  7. A. Nedić, A. Olshevsky, A. Ozdaglar, and J. N. Tsitsiklis, “Distributed subgradient methods and quantization effects,” in 47th IEEE Conference on Decision and Control (CDC), Dec. 2008, pp. 4177–4184.

  8. P. Yi and Y. Hong, “Quantized Subgradient Algorithm and Data-Rate Analysis for Distributed Optimization,” IEEE Transactions on Control of Network Systems, vol. 1, no. 4, pp. 380–392, 2014.

  9. J. S. Freudenberg, R. H. Middleton, and V. Solo, “Stabilization and Disturbance Attenuation Over a Gaussian Communication Channel,” IEEE Transactions on Automatic Control, vol. 55, no. 3, pp. 795–799, 2010.

  10. E. Nekouei, T. Alpcan, G. N. Nair, and R. J. Evans, “Convergence Analysis of Quantized Primal-Dual Algorithm in Network Utility Maximization Problems,” arXiv:1604.00723, Tech. Rep., Apr. 2016.

  11. C. U. Saraydar, N. B. Mandayam, and D. J. Goodman, “Efficient power control via pricing in wireless data networks,” IEEE Transactions on Communications, vol. 50, no. 2, pp. 291–303, 2002.

  12. J. R. Marden and J. S. Shamma, “Game Theory and Distributed Control,” in Handbook of Game Theory with Economic Applications, vol. 4, H. P. Young and S. Zamir, Eds. Elsevier, 2015, pp. 861–899.

  13. N. D. Stein, “Characterization and Computation of Equilibria in Infinite Games,” Master’s thesis, Massachusetts Institute of Technology, June 2007.

  14. S. Li and T. Başar, “Distributed algorithms for the computation of noncooperative equilibria,” Automatica, vol. 23, no. 4, pp. 523–533, 1987.

  15. J. B. Rosen, “Existence and Uniqueness of Equilibrium Points for Concave N-Person Games,” Econometrica, vol. 33, no. 3, pp. 520–534, 1965.

  16. E. Nekouei, T. Alpcan, and D. Chattopadhyay, “Game-Theoretic Frameworks for Demand Response in Electricity Markets,” IEEE Transactions on Smart Grid, vol. 6, no. 2, pp. 748–758, 2015.

  17. T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Wiley-Interscience, 2006.

  18. A. Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering, 2nd ed. Reading, MA: Addison-Wesley, 1994.


Acknowledgements

The authors would like to thank Prof. Girish Nair from The University of Melbourne for his contributions and fruitful discussions. This work was supported by the Australian Research Council’s Discovery Projects funding scheme (DP140100819).

Author information

Correspondence to Tansu Alpcan.

Appendix

Proof of Theorem 1

This appendix presents the main steps of the proof of Theorem 1. To this end, the notion of conditional differential entropy power of a random vector is first defined; this notion then facilitates establishing a universal lower bound on the DDE of the PD variables. The differential entropy power of the random vector \(\varvec{z}\in \mathbb {R}^{M+N}\) conditioned on the event \(A=a\), denoted by \(\mathsf {N}\left[ \left. \varvec{z}\right| A=a\right] \), is defined as

$$\begin{aligned} \mathsf {N}\left[ \left. \varvec{z}\right| A=a\right] =\frac{1}{2\pi \mathrm{e}}\,\mathrm{e}^{\frac{2}{M+N}\mathsf {h}\left[ \left. \varvec{z}\right| A=a\right] }, \end{aligned}$$

where \(\mathsf {h}\left[ \left. \varvec{z}\right| A=a\right] \) is the conditional differential entropy of \(\varvec{z}\) given \(A=a\), defined as

$$\begin{aligned} \mathsf {h}\left[ \left. \varvec{z}\right| A=a\right] =-\int \log \left( p\left( \left. \varvec{z}\right| A=a\right) \right) p\left( \left. \varvec{z}\right| A=a\right) d\varvec{z}, \end{aligned}$$

where \(p\left( \left. \varvec{z}\right| A=a\right) \) is the conditional distribution of \(\varvec{z}\) given \(A=a\). Using the entropy maximizing property of Gaussian distributions, the conditional entropy power of \(\varvec{z}\) given \(A=a\) can be upper bounded [1] as

$$\begin{aligned} \mathsf {N}\left[ \left. \varvec{z}\right| A=a\right] \le \mathrm{e}^{1/\left( M+N\right) -1}\mathsf {E}\left[ \left. \left\| \varvec{z}\right\| _{2}^2\right| A=a\right] , \end{aligned}$$
(30)

where \(\mathsf {E}\left[ \left. \cdot \right| A=a\right] \) denotes conditional expectation given \(A=a\). Let \(\mathsf {E}_A\left[ \mathsf {N}\left[ \left. \varvec{z}\right| A\right] \right] \) denote the average conditional entropy power of \(\varvec{z}\) given \(A\). Using (30), \(\mathsf {E}_A\left[ \mathsf {N}\left[ \left. \varvec{z}\right| A\right] \right] \) can be upper bounded as

$$\begin{aligned} {\mathsf E}_{A}\left[ \mathsf {N}\left[ \left. \varvec{z}\right| A\right] \right] \le \mathrm{e}^{1/\left( M+N\right) -1}\mathsf {E}\left[ \left\| \varvec{z}\right\| _{2}^2 \right] . \end{aligned}$$
(31)
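As a quick numerical sanity check (an illustration, not part of the original argument), inequality (30) can be verified in closed form when \(\varvec{z}\) is a zero-mean Gaussian vector, since both its entropy power and its second moment are then explicit. A minimal Python sketch, with the dimension \(n\) playing the role of \(M+N\) and an arbitrary covariance chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # plays the role of M + N
A = rng.standard_normal((n, n))
Sigma = A @ A.T + np.eye(n)              # a generic positive-definite covariance

# For z ~ N(0, Sigma):  h[z] = 0.5 * log((2*pi*e)^n * det(Sigma)),
# so the entropy power is N[z] = exp(2*h/n) / (2*pi*e) = det(Sigma)^(1/n).
h = 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(Sigma))
entropy_power = np.exp(2 * h / n) / (2 * np.pi * np.e)

second_moment = np.trace(Sigma)          # E[||z||_2^2] for a zero-mean z
bound = np.exp(1 / n - 1) * second_moment

print(f"N[z] = {entropy_power:.4f} <= bound = {bound:.4f}")
assert entropy_power <= bound            # inequality (30) for this example
```

For a scalar Gaussian (\(n=1\)) the two sides coincide, so the bound is tight in that case.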

Next, inequality (31) is used to establish the universal lower bound on the DDE of the PD variables under OA quantization schemes. To this end, let \(\mathscr {D}_{k-1}=\left\{ \hat{Q}_n=\varvec{\hat{q}}_n\right\} _{n=0}^{k-1}\), where \(\hat{Q}_n=\left[ \hat{Q}^{\varvec{x}}_{1,n},\ldots ,\hat{Q}^{\varvec{x}}_{M,n},\hat{Q}^{\varvec{\lambda }}_{1,n},\ldots ,\hat{Q}^{\varvec{\lambda }}_{N,n}\right] \) and \(\varvec{\hat{q}}_n\) is a possible realization of \(\hat{Q}_n\). Using (31), \(\mathsf {E}\left[ \left\| \varvec{\varepsilon }_k\right\| _{2}^2 \right] \) can be lower bounded as

$$\begin{aligned} \mathsf {E}\left[ \left\| \varvec{\varepsilon }_k\right\| _{2}^2 \right]&\ge \mathrm{e}^{1-\frac{1}{M+N}}\mathsf {E}\left[ \mathsf {N}\left[ \left. \varvec{\varepsilon }_k\right| \mathscr {D}_{k-1}\right] \right] \nonumber \\&{\mathop {\ge }\limits ^{\left( *\right) }} \frac{\mathrm{e}^{1-\frac{1}{M+N}}}{2\pi \mathrm{e}}\mathrm{e}^{\frac{2}{M+N}\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{\varepsilon }_k\right| \mathscr {D}_{k-1}\right] \right] }, \end{aligned}$$
(32)

where \(\left( *\right) \) follows from Jensen’s inequality applied to the convex function \(\mathrm{e}^{x}\), i.e., \(\mathsf {E}\left[ \mathrm{e}^{X}\right] \ge \mathrm{e}^{\mathsf {E}\left[ X\right] }\). The term \(\mathsf {h}\left[ \left. \varvec{\varepsilon }_k\right| \mathscr {D}_{k-1}\right] \) on the right-hand side of (32) can be expanded as

$$\begin{aligned} \mathsf {h}\left[ \left. \varvec{\varepsilon }_k\right| \mathscr {D}_{k-1}\right]&=\mathsf {h}\left[ \left. \varvec{y}_k-\varvec{y}^\star \right| \mathscr {D}_{k-1}\right] \nonumber \\&{\mathop {=}\limits ^{\left( *\right) }}\mathsf {h}\left[ \left. \varvec{y}_k\right| \mathscr {D}_{k-1}\right] , \end{aligned}$$
(33)

where \(\left( *\right) \) follows from the translation invariance property of differential entropy, as \(\varvec{y}^\star \) is a constant vector (see [17, Theorem 8.6.3, p. 253]).

The next lemma establishes a useful relation between \(\mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right] \) and \(\mathsf {h}\left[ \left. \varvec{y}_{n-1}\right| \mathscr {D}_{k-1}\right] \) for \(n\le k\), which is then used to further expand \(\mathsf {h}\left[ \left. \varvec{y}_k\right| \mathscr {D}_{k-1}\right] \).

Lemma 1

For \(n\le k\), \(\mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right] \) can be expanded as

$$\begin{aligned} \mathsf {h}\left[ \left. \varvec{y}_{n}\right| \mathscr {D}_{k-1}\right] =\mathsf {h}\left[ \left. \varvec{y}_{n-1}\right| \mathscr {D}_{k-1}\right] +\mathsf {E}\left[ \left. \sum _{j=1}^M\log \left( 1+\mu _{n-1}\frac{d ^2}{d {x^j}^2}U_{j}\left( x^{j}_{n-1}\right) \right) \right| \mathscr {D}_{k-1}\right] . \end{aligned}$$
(34)

Proof

Let \(\tilde{x}^i_n={x}^{i}_{n}+\mu _{n}\frac{d}{d {x}^{i}}U_i\left( {x}^{i}_{n}\right) \) and \(\varvec{\tilde{x}}_n=\left[ \tilde{x}^1_n,\ldots ,\tilde{x}^M_n\right] ^\top \). Let \(\varvec{\tilde{y}}_n\) be the vector concatenation of \(\varvec{\tilde{x}}_n\) and \(\varvec{\lambda }_n\). This lemma is proved in two steps. First, it is shown that the conditional differential entropy of \(\varvec{y}_n\) given \(\mathscr {D}_{k-1}\) is equal to that of \(\varvec{\tilde{y}}_{n-1}\) given \(\mathscr {D}_{k-1}\) (see (35)). Next, a relation between the conditional differential entropy of \(\varvec{\tilde{y}}_{n-1}\) given \(\mathscr {D}_{k-1}\) and that of \(\varvec{y}_{n-1}\) given \(\mathscr {D}_{k-1}\) is established. Note that \(\mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right] \) can be written as

$$\begin{aligned} \mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right]&=\mathsf {h}\left[ \left. \varvec{x}_n,\varvec{\lambda }_n\right| \mathscr {D}_{k-1}\right] \nonumber \\&{\mathop {=}\limits ^{\left( *\right) }}\mathsf {h}\left[ \left. \varvec{\tilde{x}}_{n-1},\varvec{\lambda }_{n-1}\right| \mathscr {D}_{k-1}\right] \nonumber \\&=\mathsf {h}\left[ \left. \varvec{\tilde{y}}_{n-1}\right| \mathscr {D}_{k-1}\right] , \end{aligned}$$
(35)

where \(\left( *\right) \) follows from the translation invariance property of differential entropy and the fact that the quantizer outputs \(\hat{Q}_{n-1}\) are fixed given \(\mathscr {D}_{k-1}=\left\{ \hat{Q}_n=\varvec{\hat{q}}_n\right\} _{n=0}^{k-1}\). Next, we derive an expression for the probability density function (PDF) of \(\varvec{\tilde{y}}_n\) in terms of the PDF of \(\varvec{y}_{n}\). Let \(p_{\varvec{\tilde{y}}_{n}}\left( \varvec{y}\left| \mathscr {D}_{k-1}\right. \right) \) and \(p_{\varvec{y}_{n}}\left( \varvec{y}\left| \mathscr {D}_{k-1}\right. \right) \) denote the PDFs of \(\varvec{\tilde{y}}_n\) and \(\varvec{y}_{n}\), respectively, conditioned on \(\mathscr {D}_{k-1}\). Let \(\varvec{F}\left( \cdot \right) \) represent the mapping between \(\varvec{y}_n\) and \(\varvec{\tilde{y}}_n\), i.e., \(\varvec{\tilde{y}}_n=\varvec{F}\left( \varvec{y}_n\right) \). Note that \(0<1+\mu _{n}\frac{d^2}{d {x^i}^2}U_i\left( x^i\right) <1\) for all \(i\), since each \(U_i\) is strictly concave and \(0<\mu _{n}< \min _i\frac{1}{\left| U^\mathrm{min}_i\right| }\), which implies that the mapping \(\varvec{F}\left( \cdot \right) \) is invertible. Thus, the change-of-variables formula for invertible transformations of random vectors (see, e.g., (4.63) in [18]) can be applied to write

$$\begin{aligned} p_{\varvec{\tilde{y}}_{n-1}}\left( \varvec{y}\left| \mathscr {D}_{k-1}\right. \right) =\frac{1}{\det J_{\varvec{F}}\left[ \varvec{F}^{-1}\left( \varvec{y}\right) \right] }\,p_{\varvec{y}_{n-1}}\left( \varvec{F}^{-1}\left( \varvec{y}\right) \left| \mathscr {D}_{k-1}\right. \right) , \end{aligned}$$
(36)

where \(J_{\varvec{F}}\left[ \varvec{x}\right] \) is the Jacobian matrix of \(\varvec{F}\) evaluated at \(\varvec{x}\). Using (36), the conditional differential entropy of \(\varvec{\tilde{y}}_{n-1}\) given \(\mathscr {D}_{k-1}\) can be written as

$$\begin{aligned} \mathsf {h}\left[ \left. \varvec{\tilde{y}}_{n-1}\right| \mathscr {D}_{k-1}\right]&=\int \log \left( \det J_{\varvec{F}}\left[ \varvec{F}^{-1}\left( \varvec{y}\right) \right] \right) \frac{1}{\det J_{\varvec{F}}\left[ \varvec{F}^{-1}\left( \varvec{y}\right) \right] } p_{\varvec{y}_{n-1}}\left( \varvec{F}^{-1}\left( \varvec{y}\right) \left| \mathscr {D}_{k-1}\right. \right) d\varvec{y}\nonumber \\&\quad -\int \log \left( p_{\varvec{y}_{n-1}}\left( \varvec{F}^{-1}\left( \varvec{y}\right) \left| \mathscr {D}_{k-1}\right. \right) \right) \frac{1}{\det J_{\varvec{F}}\left[ \varvec{F}^{-1}\left( \varvec{y}\right) \right] }p_{\varvec{y}_{n-1}}\left( \varvec{F}^{-1}\left( \varvec{y}\right) \left| \mathscr {D}_{k-1}\right. \right) d\varvec{y}\nonumber \\&{\mathop {=}\limits ^{\left( *\right) }}\int \log \left( \det J_{\varvec{F}}\left[ \varvec{z}\right] \right) p_{\varvec{y}_{n-1}}\left( \varvec{z}\left| \mathscr {D}_{k-1}\right. \right) d\varvec{z}-\int \log \left( p_{\varvec{y}_{n-1}}\left( \varvec{z}\left| \mathscr {D}_{k-1}\right. \right) \right) p_{\varvec{y}_{n-1}}\left( \varvec{z}\left| \mathscr {D}_{k-1}\right. \right) d\varvec{z}\nonumber \\&=\sum _{j=1}^M\mathsf {E}\left[ \left. \log \left( 1+\mu _{n-1}\frac{d ^2}{d {x^j}^2}U_{j}\left( x^j_{n-1}\right) \right) \right| \mathscr {D}_{k-1}\right] +\mathsf {h}\left[ \left. \varvec{y}_{n-1}\right| \mathscr {D}_{k-1}\right] , \end{aligned}$$
(37)

where \(\left( *\right) \) follows from the change of variable \(\varvec{z}=\varvec{F}^{-1}\left( \varvec{y}\right) \). Combining (35) and (37) completes the proof.
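The entropy identity behind (37), \(\mathsf {h}\left[ \varvec{F}\left( \varvec{y}\right) \right] =\mathsf {h}\left[ \varvec{y}\right] +\mathsf {E}\left[ \log \det J_{\varvec{F}}\left[ \varvec{y}\right] \right] \), can be checked in closed form when \(\varvec{y}\) is Gaussian and \(\varvec{F}\) is affine with a constant diagonal Jacobian, a stand-in for the per-coordinate factors \(1+\mu _{n-1}\frac{d ^2}{d {x^j}^2}U_{j}\). A minimal Python sketch, with all numerical values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Sigma = np.diag(rng.uniform(0.5, 2.0, n))   # covariance of y ~ N(0, Sigma)

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance cov."""
    d = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

# Affine map F(y) = D y + b with constant diagonal Jacobian D; its diagonal
# entries mimic factors in (0, 1), like 1 + mu * U'' for strictly concave U.
# The shift b changes only the mean, which leaves differential entropy intact.
D = np.diag(rng.uniform(0.1, 0.9, n))
b = rng.standard_normal(n)

lhs = gaussian_entropy(D @ Sigma @ D.T)                    # h[F(y)]
rhs = gaussian_entropy(Sigma) + np.log(np.linalg.det(D))   # h[y] + E[log det J_F]
print(f"h[F(y)] = {lhs:.6f}   h[y] + log det J_F = {rhs:.6f}")
assert np.isclose(lhs, rhs)
```

The same identity, applied to the primal coordinates with the dual coordinates left unchanged, gives the last equality in (37).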

Using Lemma 1, \(\mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] \) can be further expanded as

$$\begin{aligned} \mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] =\mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] +\sum _{j=1}^M\sum _{n=0}^{k-1}\mathsf {E}\left[ \left. \log \left( 1+\mu _n\frac{d ^2}{d {x^j}^2}U_{j}\left( x^{j}_{n}\right) \right) \right| \mathscr {D}_{k-1}\right] . \end{aligned}$$
(38)

Using (38), \(\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] \right] \) can be written as

$$\begin{aligned} \mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] \right] =\sum _{j=1}^M\sum _{n=0}^{k-1}\mathsf {E}\left[ \log \left( 1+\mu _n\frac{d ^2}{d {x^j}^2}U_{j}\left( x^{j}_{n}\right) \right) \right] +\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right] . \end{aligned}$$
(39)

The following lemma, adapted from [1], establishes a lower bound on \(\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right] \):

Lemma 2

The average conditional differential entropy of \(\varvec{y}_0\) given \(\mathscr {D}_{k-1}\), i.e., \(\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right] \), can be lower bounded as

$$\begin{aligned} \mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right] \ge \mathsf {h}\left[ \varvec{y}_{0}\right] -\sum _{t=0}^{k-1}\left( \sum _{i=1}^M\log \left| \mathscr {A}^{\varvec{x}}_{i,t}\right| +\sum _{j=1}^N\log \left| \mathscr {A}^{\varvec{\lambda }}_{j,t}\right| \right) . \end{aligned}$$

Proof

Follows directly from the first inequality in Appendix C of [1]; alternatively, it can be derived from (8.48) and (8.89) in [17].
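For completeness, a sketch of the underlying argument (under the assumption, implicit in the OA quantization scheme, that the quantizer output at time \(t\) takes values in the finite alphabets \(\mathscr {A}^{\varvec{x}}_{i,t}\) and \(\mathscr {A}^{\varvec{\lambda }}_{j,t}\)): conditioning on the quantized data reduces differential entropy by at most the mutual information, which is in turn bounded by the discrete entropy of the data,

$$\begin{aligned} \mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right]&=\mathsf {h}\left[ \varvec{y}_{0}\right] -\mathsf {I}\left( \varvec{y}_{0};\mathscr {D}_{k-1}\right) \ge \mathsf {h}\left[ \varvec{y}_{0}\right] -\mathsf {H}\left( \mathscr {D}_{k-1}\right) \nonumber \\&\ge \mathsf {h}\left[ \varvec{y}_{0}\right] -\sum _{t=0}^{k-1}\left( \sum _{i=1}^M\log \left| \mathscr {A}^{\varvec{x}}_{i,t}\right| +\sum _{j=1}^N\log \left| \mathscr {A}^{\varvec{\lambda }}_{j,t}\right| \right) , \end{aligned}$$

where the last step uses the fact that the entropy of a discrete random variable never exceeds the logarithm of its alphabet size ([17, Theorem 2.6.4]).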

Applying Lemma 2 to (39) yields

$$\begin{aligned}&\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] \right] \ge \sum _{j=1}^M\sum _{n=0}^{k-1}\mathsf {E}\left[ \log \left( 1+\mu _n\frac{d ^2}{d {x^j}^2}U_{j}\left( x^{j}_{n}\right) \right) \right] +\mathsf {h}\left[ \varvec{y}_{0}\right] -\sum _{t=0}^{k-1}\left( \sum _{i=1}^M\log \left| \mathscr {A}^{\varvec{x}}_{i,t}\right| +\sum _{j=1}^N\log \left| \mathscr {A}^{\varvec{\lambda }}_{j,t}\right| \right) . \end{aligned}$$
(40)

Since \(\varvec{x}_0\) and \(\varvec{\lambda }_0\) are independent, the differential entropy of \(\varvec{y}_0\) can be written as \(\mathsf {h}\left[ \varvec{y}_{0}\right] =\mathsf {h}\left[ \varvec{x}_{0}\right] +\mathsf {h}\left[ \varvec{\lambda }_{0}\right] \), which is finite since \(\varvec{x}_0\) and \(\varvec{\lambda }_0\) have finite differential entropies. Using (32), (33), (40), and the finiteness of \(\mathsf {h}\left[ \varvec{y}_{0}\right] \), the DDE can be lower bounded as

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{1}{k}\log \mathsf {E}\left[ \left\| \varvec{\varepsilon }_k\right\| _{2}^2 \right] \ge \frac{2}{M+N} \left( \liminf _{k\rightarrow \infty }\frac{1}{k}\sum _{j=1}^M\sum _{n=0}^{k-1}\mathsf {E}\left[ \log \left( 1+\mu _n\frac{d ^2}{d {x^j}^2}U_{j}\left( x^{j}_{n}\right) \right) \right] -R_{\mathscr {Q}}\right) . \end{aligned}$$
(41)

The next lemma characterizes the asymptotic behavior of the first term on the right-hand side of (41).

Lemma 3

([10]) Consider the primal-dual update rule (6) under an OA quantization scheme. Then,

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{1}{k}\sum _{j=1}^M\sum _{n=0}^{k-1}\mathsf {E}\left[ \log \left( 1+\mu _n\frac{d ^2}{d {x^j}^2}U_{j}\left( {x}^{j}_{n}\right) \right) \right] =\sum _{j=1}^M\log \left( 1+\mu ^\star \frac{d ^2}{d {x^j}^2}U_{j}\left( {x^{j}}^\star \right) \right) . \end{aligned}$$

Applying Lemma 3 to (41) yields

$$\begin{aligned} \liminf _{k\rightarrow \infty }\frac{1}{k}\log \mathsf {E}\left[ \left\| \varvec{\varepsilon }_k\right\| _{2}^2 \right] \ge \frac{2}{M+N}\left( \sum _{i=1}^M\log \left( 1+\mu ^\star \frac{d^2}{d {x^i}^2}U_i\left( {x^i}^\star \right) \right) -R_{\mathscr {Q}}\right) , \end{aligned}$$
(42)

which completes the proof.
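To give a feel for the trade-off captured by (42), the short Python sketch below evaluates its right-hand side for a hypothetical configuration; the curvatures \(a_i=-\frac{d^2}{d {x^i}^2}U_i\left( {x^i}^\star \right) \), the step size \(\mu ^\star \), and the rates \(R_{\mathscr {Q}}\) are illustrative assumptions, not data from the chapter:

```python
import numpy as np

# Hypothetical problem data: M agents whose utilities have curvatures
# U_i''(x_i*) = -a_i at the optimum, N constraints, a step size
# mu_star < 1 / max_i(a_i), and an aggregate quantizer rate R_Q
# (nats per iteration).
M, N = 4, 2
a = np.array([0.8, 1.0, 1.2, 1.5])
mu_star = 0.5

def dde_lower_bound(R_Q):
    """Right-hand side of (42): a floor on the decay exponent of E[||eps_k||^2]."""
    return (2.0 / (M + N)) * (np.sum(np.log1p(-mu_star * a)) - R_Q)

for R_Q in (0.5, 1.0, 2.0, 4.0):
    print(f"R_Q = {R_Q:4.1f}  ->  DDE >= {dde_lower_bound(R_Q):8.4f}")
```

A larger communication rate \(R_{\mathscr {Q}}\) pushes the floor down, permitting faster exponential convergence; this is precisely the trade-off between convergence speed and data rate quantified by (42).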

© 2018 Springer Nature Switzerland AG

Cite this chapter

Nekouei, E., Alpcan, T., Evans, R.J. (2018). Impact of Quantized Inter-agent Communications on Game-Theoretic and Distributed Optimization Algorithms. In: Başar, T. (ed.) Uncertainty in Complex Networked Systems. Systems & Control: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-04630-9_15
