Abstract
Quantized inter-agent communications in game-theoretic and distributed optimization algorithms generate uncertainty that affects the asymptotic and transient behavior of such algorithms. This chapter uses the information-theoretic notion of differential entropy power to establish universal bounds on the maximum exponential convergence rates of primal-dual and gradient-based Nash seeking algorithms under quantized communications. These bounds depend on the inter-agent data rate and the local behavior of the agents’ objective functions, and are independent of the quantizer structure. The presented results provide trade-offs between the speed of exponential convergence, the agents’ objective functions, the communication bit rates, and the number of agents and constraints. For the proposed Nash seeking algorithm, the transient performance is studied and an upper bound on the average time required to settle inside a specified ball around the Nash equilibrium is derived under uniform quantization. Furthermore, an upper bound on the probability that the agents’ actions lie outside this ball is established. This bound decays double exponentially with time.
References
G. N. Nair and R. J. Evans, “Stabilizability of Stochastic Linear Systems with Finite Feedback Data Rates,” SIAM Journal on Control and Optimization, vol. 43, no. 2, pp. 413–436, 2004.
G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans, “Feedback Control Under Data Rate Constraints: An Overview,” Proceedings of the IEEE, vol. 95, no. 1, pp. 108–137, 2007.
E. Nekouei, T. Alpcan, G. Nair, and R. J. Evans, “Convergence Analysis of Quantized Primal-dual Algorithms in Network Utility Maximization Problems,” IEEE Transactions on Control of Network Systems, vol. PP, no. 99, pp. 1–1, 2016.
E. Nekouei, G. N. Nair, and T. Alpcan, “Performance Analysis of Gradient-Based Nash Seeking Algorithms Under Quantization,” IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 3771–3783, 2016.
F. Kelly, A. Maulloo, and D. Tan, “Rate control for communication networks: shadow prices, proportional fairness and stability,” Journal of the Operational Research Society, vol. 49, no. 3, pp. 237–252, 1998.
S. Shakkottai and R. Srikant, “Network Optimization and Control,” Found. Trends Netw., vol. 2, no. 3, pp. 271–379, 2007.
A. Nedić, A. Olshevsky, A. Ozdaglar, and J. N. Tsitsiklis, “Distributed subgradient methods and quantization effects,” in 47th IEEE Conference on Decision and Control (CDC), Dec 2008, pp. 4177–4184.
P. Yi and Y. Hong, “Quantized Subgradient Algorithm and Data-Rate Analysis for Distributed Optimization,” IEEE Transactions on Control of Network Systems, vol. 1, no. 4, pp. 380–392, 2014.
J. S. Freudenberg, R. H. Middleton, and V. Solo, “Stabilization and Disturbance Attenuation Over a Gaussian Communication Channel,” IEEE Transactions on Automatic Control, vol. 55, no. 3, pp. 795–799, 2010.
E. Nekouei, T. Alpcan, G. Nair, and R. J. Evans, “Convergence Analysis of Quantized Primal-dual Algorithm in Network Utility Maximization Problems,” arXiv:1604.00723, Tech. Rep., Apr 2016.
C. U. Saraydar, N. B. Mandayam, and D. J. Goodman, “Efficient power control via pricing in wireless data networks,” IEEE Transactions on Communications, vol. 50, no. 2, pp. 291–303, 2002.
J. R. Marden and J. S. Shamma, “Chapter 16 - Game Theory and Distributed Control,” ser. Handbook of Game Theory with Economic Applications, H. P. Young and S. Zamir, Eds. Elsevier, 2015, vol. 4, pp. 861–899.
N. D. Stein, “Characterization and Computation of Equilibria in Infinite Games,” Master’s thesis, M.I.T., June 2007.
S. Li and T. Başar, “Distributed algorithms for the computation of noncooperative equilibria,” Automatica, vol. 23, no. 4, pp. 523–533, 1987.
J. B. Rosen, “Existence and Uniqueness of Equilibrium Points for Concave N-Person Games,” Econometrica, vol. 33, no. 3, pp. 520–534, 1965.
E. Nekouei, T. Alpcan, and D. Chattopadhyay, “Game-Theoretic Frameworks for Demand Response in Electricity Markets,” IEEE Transactions on Smart Grid, vol. 6, no. 2, pp. 748–758, 2015.
T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley-Interscience, 2006.
A. Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering, 2nd ed. Massachusetts: Addison-Wesley, 1994.
Acknowledgements
The authors would like to thank Prof. Girish Nair from The University of Melbourne for his contributions and fruitful discussions. This work was supported by the Australian Research Council's Discovery Projects funding scheme (DP140100819).
Appendix
Proof of Theorem 1
This appendix presents the main steps of the proof of Theorem 1. To this end, the notion of conditional differential entropy power of a random vector is first defined; this notion then facilitates establishing a universal lower bound on the DDE of the PD variables. The differential entropy power of the random vector \(\varvec{z}\in \mathbb {R}^{N+M} \) conditioned on the event \(A=a\), denoted by \(\mathsf {N}\left[ \left. \varvec{z}\right| A=a\right] \), is defined as
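The display equation here is not reproduced above; in standard form (cf. [17]), and consistent with the exponent \(\frac{2}{M+N}\) appearing later in (32), the definition reads:

```latex
\mathsf{N}\left[\left.\varvec{z}\right| A=a\right]
  \;=\; \frac{1}{2\pi \mathrm{e}}\,
  \mathrm{e}^{\frac{2}{N+M}\,\mathsf{h}\left[\left.\varvec{z}\right| A=a\right]}
```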
where \(\mathsf {h}\left[ \left. \varvec{z}\right| A=a\right] \) is the conditional differential entropy of \(\varvec{z}\) given \(A=a\) defined as
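In standard form, the conditional differential entropy given the event \(A=a\) is:

```latex
\mathsf{h}\left[\left.\varvec{z}\right| A=a\right]
  \;=\; -\int_{\mathbb{R}^{N+M}}
  p\left(\left.\varvec{z}\right| A=a\right)
  \log p\left(\left.\varvec{z}\right| A=a\right)\,\mathrm{d}\varvec{z}
```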
where \(p\left( \left. \varvec{z}\right| A=a\right) \) is the conditional distribution of \(\varvec{z}\) given \(A=a\). Using the entropy maximizing property of Gaussian distributions, the conditional entropy power of \(\varvec{z}\) given \(A=a\) can be upper bounded [1] as
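Concretely, since a Gaussian maximizes differential entropy for a given covariance, and the entropy power of an \((N+M)\)-dimensional vector is at most the arithmetic mean of the eigenvalues of its conditional covariance, the bound labeled (30) takes the form (cf. [1], [17]):

```latex
\mathsf{N}\left[\left.\varvec{z}\right| A=a\right]
  \;\le\; \frac{1}{N+M}\,
  \mathsf{E}\left[\left.\left\|\varvec{z}
    - \mathsf{E}\left[\left.\varvec{z}\right| A=a\right]\right\|_2^2
    \,\right| A=a\right]
```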
where \(\mathsf {E}\left[ \left. \varvec{z}\right| A=a\right] \) is the conditional expectation of \(\varvec{z}\) given \(A=a\). Let \(\mathsf {E}_A\left[ \mathsf {N}\left[ \left. \varvec{z}\right| A=a\right] \right] \) denote the average conditional entropy power of \(\varvec{z}\) given \(A\). Using (30), \(\mathsf {E}_A\left[ \mathsf {N}\left[ \left. \varvec{z}\right| A=a\right] \right] \) can be upper bounded as
Next, the inequality (31) is used to establish the universal lower bound on the DDE of the PD variables under OA quantization schemes. To this end, let \(\mathscr {D}_{k-1}=\left\{ \hat{Q}_n=\varvec{\hat{q}}_n\right\} _{n=0}^{k-1}\) where \(\hat{Q}_n=\left[ \hat{Q}^{\varvec{x}}_{1,n},\ldots ,\hat{Q}^{\varvec{x}}_{M,n},\hat{Q}^{\varvec{\lambda }}_{1,n},\ldots ,\hat{Q}^{\varvec{\lambda }}_{N,n}\right] \) and \(\varvec{\hat{q}}_n\) is a possible realization of \(\hat{Q}_n\). Using (31), \(\mathsf {E}\left[ \left\| \varvec{\varepsilon }_k\right\| _{2}^2 \right] \) can be lower bounded as
\[
\mathsf {E}\left[ \left\| \varvec{\varepsilon }_k\right\| _{2}^2 \right] \;\ge\; \frac{\mathrm{e}^{1-\frac{1}{M+N}}}{2\pi \mathrm{e}}\,\mathrm{e}^{\frac{2}{M+N}\mathsf {E}\left[ \mathsf {h}\left[ \varvec{\varepsilon }_k\left| {\mathscr {D}_{k-1}}\right. \right] \right] }
\]
where \(\left( *\right) \) is obtained using Jensen's inequality. The term \(\mathsf {h}\left[ \left. \varvec{\varepsilon }_k\right| \mathscr {D}_{k-1}\right] \) on the right-hand side of (32) can be expanded as
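The Jensen step invoked at \(\left( *\right)\) uses the convexity of \(\mathrm{e}^{(\cdot )}\): the average of the exponential dominates the exponential of the average,

```latex
\mathsf{E}\left[\mathrm{e}^{\frac{2}{M+N}\,
  \mathsf{h}\left[\left.\varvec{\varepsilon}_k\right|\mathscr{D}_{k-1}\right]}\right]
  \;\ge\;
  \mathrm{e}^{\frac{2}{M+N}\,
  \mathsf{E}\left[\mathsf{h}\left[\left.\varvec{\varepsilon}_k\right|\mathscr{D}_{k-1}\right]\right]}
```

which justifies replacing the averaged exponential by the exponential of the averaged conditional entropy in the lower bound.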
where \(\left( *\right) \) follows from the translation invariance property of differential entropy, as \(\varvec{y}^\star \) is a constant vector (see [17], Theorem 8.6.3, p. 253).
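Assuming \(\varvec{\varepsilon}_k = \varvec{y}_k - \varvec{y}^\star\), as the notation suggests, this expansion is simply:

```latex
\mathsf{h}\left[\left.\varvec{\varepsilon}_k\right|\mathscr{D}_{k-1}\right]
  = \mathsf{h}\left[\left.\varvec{y}_k - \varvec{y}^\star\right|\mathscr{D}_{k-1}\right]
  \overset{(*)}{=}
  \mathsf{h}\left[\left.\varvec{y}_k\right|\mathscr{D}_{k-1}\right]
```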
The next lemma establishes a useful relation between \(\mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right] \) and \(\mathsf {h}\left[ \left. \varvec{y}_{n-1}\right| \mathscr {D}_{k-1}\right] \) for \(n\le k\), which is used to further expand \(\mathsf {h}\left[ \left. \varvec{y}_k\right| \mathscr {D}_{k-1}\right] \).
Lemma 1
For \(n\le k\), \(\mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right] \) can be expanded as
Proof
Let \(\tilde{x}^i_n={x}^{i}_{n}+\mu _{n} \left( \frac{d}{d {x}^{i}}U_i\left( {x}^{i}_{n}\right) \right) \) and \(\varvec{\tilde{x}}_n=\left[ \tilde{x}^1_n,\ldots ,\tilde{x}^M_n\right] ^\top \). Let \(\varvec{\tilde{y}}_n\) be the vector concatenation of \(\varvec{\tilde{x}}_n\) and \(\varvec{\lambda }_n\). This lemma is proved in two steps. First, it is shown that the conditional differential entropy of \(\varvec{y}_n\) given \(\mathscr {D}_{k-1}\) is equal to that of \(\varvec{\tilde{y}}_{n-1}\) given \(\mathscr {D}_{k-1}\) (see (35)). Next, a relation between the conditional differential entropy of \(\varvec{\tilde{y}}_{n-1}\) given \(\mathscr {D}_{k-1}\) and that of \(\varvec{y}_{n-1}\) given \(\mathscr {D}_{k-1}\) is established. Note that \(\mathsf {h}\left[ \left. \varvec{y}_n\right| \mathscr {D}_{k-1}\right] \) can be written as
where \(\left( *\right) \) follows from the translation invariance property of the differential entropy and the fact that \(Q_{k-1}\) is fixed given \(\mathscr {D}_{k-1}=\left\{ \hat{Q}_n=\varvec{\hat{q}}_n\right\} _{n=0}^{k-1}\). Next, we derive an expression for the probability density function (PDF) of \(\varvec{\tilde{y}}_n\) in terms of the PDF of \(\varvec{y}_{n}\). Let \(p_{\varvec{\tilde{y}}_{n}}\left( \varvec{y}\left| \mathscr {D}_{k-1}\right. \right) \) and \(p_{\varvec{y}_{n}}\left( \varvec{y}\left| \mathscr {D}_{k-1}\right. \right) \) denote the PDFs of \(\varvec{\tilde{y}}_n\) and \(\varvec{y}_{n}\), respectively, conditioned on \(\mathscr {D}_{k-1}\). Let \(\varvec{F}\left( \cdot \right) \) represent the mapping between \(\varvec{y}_n\) and \(\varvec{\tilde{y}}_n\), i.e., \(\varvec{\tilde{y}}_n=\varvec{F}\left( \varvec{y}_n\right) \). Note that \(0<1+\mu _{n}\frac{d^2}{d {x^i}^2}U_i\left( x^i\right) <1\) since \(0<\mu _{n}< \min _i\frac{1}{\left| U^\mathrm{min}_i\right| }\), which implies that the mapping \(\varvec{F}\left( \cdot \right) \) is invertible. Thus, the change-of-variables formula for invertible diffeomorphisms of random vectors (see, e.g., (4.63) in [18]) can be applied to write
where \(J_{\varvec{F}}\left[ \varvec{x}\right] \) is the Jacobian of \(\varvec{F}\left( \varvec{x}\right) \) evaluated at \(\varvec{x}\). Using (36), the conditional entropy of \(\varvec{\tilde{y}}_{n-1}\) given \(\mathscr {D}_{k-1}\) can be written as
where \(\left( *\right) \) follows from the change of variables \(\varvec{z}=\varvec{F}^{-1}\left( \varvec{x}\right) \).
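The proof of Lemma 1 rests on the change-of-variables identity for differential entropy, \(\mathsf {h}\left[ \varvec{F}\left( \varvec{y}\right) \right] =\mathsf {h}\left[ \varvec{y}\right] +\mathsf {E}\left[ \log \left| \det J_{\varvec{F}}\left[ \varvec{y}\right] \right| \right] \) for an invertible map. A minimal numerical check (not from the chapter; names and the linear-map example are illustrative) for a linear \(\varvec{F}\) applied to a Gaussian vector, using the closed-form Gaussian entropy:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of N(0, cov): 0.5 * log((2*pi*e)^d * det(cov))."""
    d = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d)) + 2 * np.eye(d)  # a generic invertible linear map
Sigma = np.eye(d)                                # y ~ N(0, I_d)

# Left side: F(y) = A @ y is again Gaussian, with covariance A Sigma A^T
h_transformed = gaussian_entropy(A @ Sigma @ A.T)

# Right side: h[y] + log|det J_F|, where J_F = A is constant for a linear map
h_identity = gaussian_entropy(Sigma) + np.log(abs(np.linalg.det(A)))

assert np.isclose(h_transformed, h_identity)
```

Both sides agree exactly here because \(\det (A\varSigma A^\top )=\left( \det A\right) ^2\det \varSigma \); in the lemma the map is nonlinear, so the Jacobian term appears inside the conditional expectation rather than as a constant.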
Using Lemma 1, \(\mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] \) can be further expanded as
Using (38), \(\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{k}\right| \mathscr {D}_{k-1}\right] \right] \) can be written as
The following lemma, adapted from [1], establishes a lower bound on \(\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right] \):
Lemma 2
The average conditional entropy of \(\varvec{y}_0\) given \(\mathscr {D}_{k-1}\), i.e., \(\mathsf {E}\left[ \mathsf {h}\left[ \left. \varvec{y}_{0}\right| \mathscr {D}_{k-1}\right] \right] \), can be lower bounded as
Proof
This follows directly from the first inequality in Appendix C of [1]; alternatively, it can be derived from (8.48) and (8.89) in [17].
Applying Lemma 2 to (39) yields
Since \(\varvec{x}_0\) and \(\varvec{\lambda }_0\) are independent, the differential entropy of \(\varvec{y}_0\) can be written as \(\mathsf {h}\left[ \varvec{y}_{0}\right] =\mathsf {h}\left[ \varvec{x}_{0}\right] +\mathsf {h}\left[ \varvec{\lambda }_{0}\right] \), which implies that \(\varvec{y}_0\) has finite differential entropy. Using (32), (33), (40) and the fact that \(\varvec{y}_{0}\) has finite differential entropy, the DDE can be lower bounded as
The next lemma characterizes the asymptotic behavior of the first term on the right-hand side of (41).
Lemma 3
([10]) Consider the primal-dual update rule (6) under an OA quantization scheme. Then,
Applying Lemma 3 to (41) yields
which completes the proof.
Copyright information
© 2018 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Nekouei, E., Alpcan, T., Evans, R.J. (2018). Impact of Quantized Inter-agent Communications on Game-Theoretic and Distributed Optimization Algorithms. In: Başar, T. (eds) Uncertainty in Complex Networked Systems. Systems & Control: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-04630-9_15
Print ISBN: 978-3-030-04629-3
Online ISBN: 978-3-030-04630-9