
Convergence to Walrasian equilibrium with minimal information


Abstract

We consider convergence to Walrasian equilibrium in a situation where firms know only the market price and their own cost function. We term this a situation of minimal information. We model the problem as a large population game of Cournot competition. The Nash equilibrium of this model is identical to the Walrasian equilibrium. We apply the best response (BR) dynamic as our main evolutionary model. This dynamic can be applied under minimal information because firms need to know only the market price and their own cost to compute payoffs. We show that the BR dynamic converges globally to Nash equilibrium in an aggregative game like the Cournot model. Hence, it converges globally to the Walrasian equilibrium under minimal information. We extend the result to some other evolutionary dynamics using the method of potential games.


Notes

  1. Gintis (2007, 2012) provides simulation based results on such convergence using methods from evolutionary game theory in a general equilibrium framework. We discuss those papers in more detail towards the end of this section.

  2. Such models in which each agent is of measure zero are also called nonatomic games (Aumann and Shapley 1974).

  3. The aggregate form of a best response dynamic has also been applied by Ely and Sandholm (2005) in the context of Bayesian population games. Their best response dynamic is defined for any finite strategy game with a continuum of types. In our case, the BR dynamic is defined only for an aggregative game with a continuous strategy set and a finite number of types.

  4. The standard evolutionary dynamics that have been extended to continuous strategy games are the replicator dynamic (Oechssler and Riedel 2001, 2002; Cheung 2016), the Brown–von Neumann–Nash (BNN) dynamic (Hofbauer et al. 2009), the Smith dynamic (Cheung 2014) and the logit dynamic (Perkins and Leslie 2014; Lahkar and Riedel 2015).

  5. Cheung and Lahkar (2018) define a continuous strategy potential game with a single population and apply that definition to a large population Cournot competition model in which all firms have the same cost function. Lahkar and Mukherjee (2018) extend that definition to multi-type aggregative games. Lahkar (2017) also analyzes a single population Cournot model, but with a finite strategy approximation. We note here that we need a multi-type model in order for the issue of firms lacking information about the type of other firms to be meaningful.

  6. Here, \(\delta _{x_p}\) is the Dirac distribution with probability 1 on \(x_p\).

  7. These derivatives will be required in characterizing the Nash equilibrium of the Cournot competition model.

  8. These assumptions imply that agents are not aware of the population state \(\mu _p\) of any \(p\in \mathcal {P}\), including their own population.

  9. Recall that we have assumed that \(v(x,A(\mu ))\) is concave with respect to x. In (3), \(v(x,A(\mu ))=x\beta (A(\mu ))\) is linear with respect to x. We will use the assumption of bounded derivatives in “Appendix A.2”.

  10. It is possible that best responses are not well defined at every social state in certain games with continuous strategy sets. However, as we will see, this problem does not occur in an aggregative game like (4).

  11. This is because v is concave with respect to x and \(c_p\) is strictly convex.

  12. This is because \(\beta (\bar{x})\) and \(\beta (\underline{x})\) are, respectively, the lowest and highest values of \(\beta (\alpha )\). Hence, \(\beta (\alpha )\ge \beta (\bar{x})\), which rules out the first case of (7) since, by Assumption 2.2(3), \(\beta (\bar{x})>c^{\prime }(\underline{x})\). Similarly, \(\beta (\alpha )\le \beta (\underline{x})\), which rules out the third case of (7) since, by Assumption 2.2(3), \(\beta (\underline{x})<c^{\prime }(\bar{x})\).

  13. This is unlike the case of the best response dynamic for finite strategy games where the best response may not be uniquely defined (Gilboa and Matsui 1991). In that case, the best response dynamic needs to be defined as a differential inclusion.

  14. This is because if \(\mu ^{*}\) is a Nash equilibrium of the aggregative game (4), then by Proposition 3.1, \(A(\mu ^{*})=\alpha ^{*}\) is the solution to (6). In that case, by (10), \(\alpha ^{*}\) is the rest point of the ABR dynamic.

  15. As mentioned earlier, the BR dynamic cannot be generally extended to the continuous strategy case due to the possibility that the best response may not exist. Only in special cases like aggregative games with continuous strategy sets can it be defined.

  16. For the definition of the variational norm, see (17) in “Appendix A.3”.

  17. These extensions are required because the domain of the potential function f is \(\mathcal {M}\). With such an extension, it is possible that \(A(\mu )<0\), which requires us to extend \(\beta \) to \(\mathbf {R}\).

  18. This result has been established for the BNN dynamic and the Smith dynamic by Hofbauer et al. (2009) and Cheung (2014) respectively.

  19. Convergence may not happen from boundary states because, as is well known, non-Nash monomorphic states are also rest points of the replicator dynamic.

  20. Sandholm (2010a) provides a detailed discussion of the informational requirements of revision protocols that generate different evolutionary dynamics.

  21. The average payoff in population p at social state \(\mu \) is \(\bar{F}_p(\mu )=\int _{\mathcal {S}}F_{x,p}(\mu )\mu _p(dx)\).

  22. See Cheung (2016) for continuous strategy versions of these revision protocols.

  23. Of course, as mentioned earlier, we could have applied the potential game argument to the BR dynamic as well. The relevant result establishing convergence in aggregative potential games under this dynamic is in Lahkar and Mukherjee (2018).

  24. Consider \(\mu ,\nu \in \mathscr {M}\). Then, \(|A(\mu )-A(\nu )|=\left| \sum _p\int _\mathcal {S}x\mu _p(dx)-\sum _p\int _\mathcal {S}x\nu _p(dx)\right| \le \sum _p\int _\mathcal {S}x\left| \mu _p-\nu _p\right| (dx)=\Vert \mu -\nu \Vert \).

References

  • Alós-Ferrer C, Ania A (2005) The evolutionary stability of perfectly competitive behavior. Econ Theory 26:497–516
  • Arrow KJ, Hurwicz L (1958) On the stability of the competitive equilibrium, I. Econometrica 26:522–552
  • Askari H, Cummings JT (1977) Estimating agricultural supply response with the Nerlove model: a survey. Int Econ Rev 18:257–292
  • Aumann RJ, Shapley LS (1974) Values of non-atomic games. Princeton University Press, Princeton
  • Björnerstedt J, Weibull JW (1996) Nash equilibrium and evolution by imitation. In: Arrow KJ et al (eds) The rational foundations of economic behavior. St. Martins Press, New York, pp 155–181
  • Brown GW, von Neumann J (1950) Solutions of games by differential equations. In: Kuhn HW, Tucker AW (eds) Contributions to the theory of games I. Annals of mathematics studies, vol 24. Princeton University Press, Princeton, pp 73–79
  • Cheung MW (2014) Pairwise comparison dynamics for games with continuous strategy space. J Econ Theory 153:344–375
  • Cheung MW (2016) Imitative dynamics for games with continuous strategy space. Games Econ Behav 99:206–223
  • Cheung MW, Lahkar R (2018) Nonatomic potential games: the continuous strategy case. Games Econ Behav 108:341–362
  • Corchón L (1994) Comparative statics for aggregative games the strong concavity case. Math Soc Sci 28:151–165
  • Ely JC, Sandholm WH (2005) Evolution in Bayesian games I: theory. Games Econ Behav 53:83–109
  • Fisher FM (1983) Disequilibrium foundations of equilibrium economics. Cambridge University Press, Cambridge
  • Fudenberg D, Levine DK (1998) The theory of learning in games. MIT Press, Cambridge
  • Gilboa I, Matsui A (1991) Social stability and equilibrium. Econometrica 59:859–867
  • Gintis H (2007) The dynamics of general equilibrium. Econ J 117:1289–1309
  • Gintis H (2012) The dynamics of pure market exchange. In: Aoki M, Binmore K, Deakin S, Gintis H (eds) Complexity and institutions: norms and corporations. Palgrave, London
  • Hofbauer J (2000) From Nash and Brown to Maynard Smith: equilibria, dynamics, and ESS. Selection 1:81–88
  • Hofbauer J, Oechssler J, Riedel F (2009) Brown–von Neumann–Nash dynamics: the continuous strategy case. Games Econ Behav 65(2):406–429
  • Kirman AP (1992) Whom or what does the representative individual represent? J Econ Perspect 6:117–136
  • Lahkar R (2017) Large population aggregative potential games. Dyn Games Appl 7:443–467
  • Lahkar R, Riedel F (2015) The logit dynamic for games with continuous strategy sets. Games Econ Behav 91:268–282
  • Lahkar R, Mukherjee S (2018) Evolutionary implementation in a public goods game. Working paper, https://sites.google.com/site/rlahkar/home. Accessed 6 Mar 2019
  • McKenzie LW (1960) Stability of equilibrium and value of positive excess demand. Econometrica 28:606–617
  • Monderer D, Shapley L (1996) Potential games. Games Econ Behav 14:124–143
  • Nax HH, Pradelski BSR (2015) Evolutionary dynamics and equitable core selection in assignment games. Int J Game Theory 44:903–932
  • Nerlove M (1958) Estimates of the elasticities of supply of selected agricultural commodities. J Farm Econ 38:496–508
  • Nikaido H, Uzawa H (1960) Stability and nonnegativity in a Walrasian Tâtonnement process. Int Econ Rev 1:50–59
  • Oechssler J, Riedel F (2001) Evolutionary dynamics on infinite strategy spaces. Econ Theory 17:141–162
  • Oechssler J, Riedel F (2002) On the dynamic foundation of evolutionary stability in continuous models. J Econ Theory 107:223–252
  • Perkins S, Leslie D (2014) Stochastic fictitious play with continuous action sets. J Econ Theory 152:179–213
  • Sandholm WH (2001) Potential games with continuous player sets. J Econ Theory 97:81–108
  • Sandholm WH (2010a) Population games and evolutionary dynamics. MIT Press, Cambridge
  • Sandholm WH (2010b) Pairwise comparison dynamics and evolutionary foundations for Nash equilibrium. Games 1:3–17
  • Schlag KH (1998) Why imitate, and if so, how? A boundedly rational approach to multi-armed bandits. J Econ Theory 78:130–156
  • Smith MJ (1984) The stability of a dynamic model of traffic assignment: an application of a method of Lyapunov. Transp Sci 18:245–252
  • Solymosi T, Raghavan TES (1994) An algorithm for finding the nucleolus of assignment games. Int J Game Theory 23:119–143
  • Taylor PD, Jonker L (1978) Evolutionarily stable strategies and game dynamics. Math Biosci 40:145–156
  • Vega-Redondo F (1997) The evolution of Walrasian behavior. Econometrica 65:375–384
  • Walras L (1954[1874]) Elements of pure economics. George Allen and Unwin, London


Appendix

1.1 Nash equilibrium

Proof of Proposition 3.1

First, consider the only if part. Let \(\mu ^{*}\) be a Nash equilibrium of F. We need to show that \(\mu _p^{*}=m_p\delta _{b_p(\alpha ^{*})}\), where \(\alpha ^{*}\) is a solution to (6). Since \(\mu ^{*}\) is a Nash equilibrium, Definition 2.1 implies that every agent in every population p plays a payoff maximizer. By (5), the unique payoff maximizer for population p is \(b_p(\alpha ^{*})\), where \(\alpha ^{*}=A(\mu ^{*})\). Therefore, \(\mu _p^{*}=m_p\delta _{b_p(\alpha ^{*})}\). But then \(\int _{\mathcal {S}}x\mu _p^{*}(dx)=m_pb_p(\alpha ^{*})\), so that

$$\begin{aligned} \alpha ^{*}=A(\mu ^{*})=\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}x\mu _p^{*}(dx)=\sum _{p\in \mathcal {P}}m_pb_p(\alpha ^{*}). \end{aligned}$$

Hence, \(\alpha ^{*}\) is a solution to (6).

Now consider the if part. We need to show that if \(\alpha ^{*}\) is a solution to (6), then \(\mu ^{*}\) such that \(\mu ^{*}_p=m_p\delta _{b_p(\alpha ^{*})}\) is a Nash equilibrium of F. If \(\mu _p^{*}=m_p\delta _{b_p(\alpha ^{*})}\), then

$$\begin{aligned} A(\mu ^{*})=\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}x\mu _p^{*}(dx)=\sum _{p\in \mathcal {P}}m_pb_p(\alpha ^{*}). \end{aligned}$$

But since \(\alpha ^{*}\) is a solution to (6), this must mean \(A(\mu ^{*})=\alpha ^{*}\). Hence, by the definition of \(b_p(\alpha )\) in (5), if \(\mu _p^{*}=m_p\delta _{b_p(\alpha ^{*})}\), then at \(\mu ^{*}\), every agent in every population p is playing the unique best response to \(\mu ^{*}\). Therefore, \(\mu ^{*}\) is a Nash equilibrium of F. \(\square \)
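To make the fixed point characterization in (6) concrete, the following is a minimal numerical sketch that solves \(\alpha ^{*}=\sum _{p\in \mathcal {P}}m_pb_p(\alpha ^{*})\) under purely illustrative assumptions: a linear inverse demand \(\beta (\alpha )=a-b\alpha \), quadratic costs \(c_p(x)=k_px^2/2\), and population masses summing to one. These functional forms and parameter values are not the paper's specification; they only instantiate (5)–(7).

```python
# Minimal sketch: solve the fixed point equation (6), alpha* = sum_p m_p * b_p(alpha*).
# Assumptions (illustrative, not from the paper): beta(alpha) = a - b*alpha,
# c_p(x) = k_p * x^2 / 2, strategy set S = [x_lo, x_hi], population masses summing to one.

x_lo, x_hi = 0.0, 10.0                 # strategy set [x_lo, x_hi]
a, b = 12.0, 1.0                       # inverse demand beta(alpha) = a - b*alpha
masses = [0.5, 0.3, 0.2]               # m_p, one entry per population
costs = [1.0, 2.0, 4.0]                # k_p in c_p(x) = k_p * x^2 / 2

def beta(alpha):
    return a - b * alpha

def best_response(alpha, k_p):
    # Maximizes x*beta(alpha) - k_p*x^2/2 over [x_lo, x_hi]: the interior first order
    # condition gives x = beta(alpha)/k_p, clipped to the boundary as in (7).
    return min(max(beta(alpha) / k_p, x_lo), x_hi)

def aggregate_br(alpha):
    return sum(m * best_response(alpha, k) for m, k in zip(masses, costs))

# g(alpha) = sum_p m_p b_p(alpha) - alpha is strictly decreasing because beta is
# strictly decreasing, so bisection on [x_lo, x_hi] finds the unique solution of (6).
lo, hi = x_lo, x_hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if aggregate_br(mid) - mid > 0:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)
print("Nash/Walrasian aggregate alpha* =", round(alpha_star, 4))
print("market price beta(alpha*) =", round(beta(alpha_star), 4))
```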

1.2 Best response dynamic

Proof of Lemma 4.1

Consider \(\mu ^{*}\) and denote \(A(\mu ^{*})=\alpha ^{*}\). For the if part, let \(\mu ^{*}\) be a Nash equilibrium of F. Hence, every agent plays the unique best response to \(\mu ^{*}\), which is \(b_p(\alpha ^{*})\). Therefore, \(\mu _p^{*}=m_p\delta _{b_p(\alpha ^{*})}\) so that \(\dot{\mu }_p^{*}=0\).

For the only if part, let \(\mu ^{*}\) be a rest point of the BR dynamic. Hence, \(\mu _p^{*}=m_p\delta _{b_p(\alpha ^{*})}\). Therefore,

$$\begin{aligned} \sum _{p\in \mathcal {P}}m_p\int _{\mathcal {S}}x\delta _{b_p(\alpha ^{*})}(dx)=\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}x\mu _p^{*}(dx)\Rightarrow \sum _{p\in \mathcal {P}}m_pb_p(\alpha ^{*})=A(\mu ^{*})=\alpha ^{*}. \end{aligned}$$

Hence, \(\alpha ^{*}\) is a solution to (6) and so, by Proposition 3.1, \(\mu ^{*}\) is a Nash equilibrium of F. \(\square \)
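As an illustration of convergence to this rest point, the following sketch integrates the aggregate trajectory induced by the BR dynamic. It assumes that (9) takes the standard form \(\dot{\mu }_p=m_p\delta _{b_p(A(\mu ))}-\mu _p\), which is consistent with the rest point characterization in Lemma 4.1 but is an assumption rather than a reproduction of (9); under this form, \(\alpha (\tau )=A(\mu (\tau ))\) satisfies the scalar equation \(\dot{\alpha }=\sum _{p\in \mathcal {P}}m_pb_p(\alpha )-\alpha \). The demand and cost specification is the same hypothetical one used in the previous sketch.

```python
# Minimal sketch: Euler integration of the aggregate equation
#   d(alpha)/d(tau) = sum_p m_p * b_p(alpha) - alpha,
# obtained from the assumed form mu_dot_p = m_p * delta_{b_p(A(mu))} - mu_p of (9).
# Demand/cost specification is the same illustrative one as in the previous sketch.

x_lo, x_hi = 0.0, 10.0
a, b = 12.0, 1.0                                     # beta(alpha) = a - b*alpha
masses, costs = [0.5, 0.3, 0.2], [1.0, 2.0, 4.0]     # m_p and k_p in c_p(x) = k_p*x^2/2

def best_response(alpha, k_p):
    return min(max((a - b * alpha) / k_p, x_lo), x_hi)

def aggregate_drift(alpha):
    return sum(m * best_response(alpha, k) for m, k in zip(masses, costs)) - alpha

alpha, dt = x_hi, 0.01                               # start from the highest feasible aggregate
for _ in range(2000):                                # integrate up to tau = 20
    alpha += dt * aggregate_drift(alpha)
print("alpha at tau = 20:", round(alpha, 4))         # approaches alpha* from the previous sketch
```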

To show that the BR dynamic is well defined in an aggregative game like (4), we first need to impose a norm on \(\mathcal {M}(\mathcal {S})\). This will allow us to establish Lipschitz continuity of the dynamic. Due to the aggregative nature of the underlying population game, the appropriate norm turns out to be

$$\begin{aligned} \Vert \mu \Vert =\sum _{p\in \mathcal {P}}\int _\mathcal {S}x|\mu _p|(dx)=\sum _{p\in \mathcal {P}}a(|\mu _p|)=A(|\mu |), \end{aligned}$$
(14)

where \(\mu \in \mathcal {M}(\mathcal {S})\). We refer to this norm concisely as the AS norm. The following theorem states the relevant result. In writing this theorem, we use \(\tau \) to denote time.

Theorem A.1

Consider the aggregative game F defined in (4). From every initial state \(\mu (0)\in \Delta \), there exists a unique solution trajectory \(\{\mu (\tau )\in \Delta \}_{\tau \ge 0}\) under the BR dynamic (9). Further, the solutions to the dynamic are continuous in the AS norm (14) with respect to initial conditions.

We review the proof of this result, which is given in Lahkar and Mukherjee (2018). First, we extend the domain of the payoff function (4) from \(\Delta \) to \(\mathscr {M}\). With this extension, the range of \(A(\mu )\) extends from \([\underline{x},\bar{x}]\) to \(\mathbf {R}\). The extended payoff function is

$$\begin{aligned} \tilde{F}_{x,p}(\mu )=\left\{ \begin{array}{l l} v(x,\bar{x})-c_p(x), &{} \quad \text {for}~\mu ~\hbox {such that}~A(\mu )>\bar{x},\\ v(x,A(\mu ))-c_p(x), &{} \quad \text {for}~ \mu ~\hbox {such that}~A(\mu )\in [\underline{x},\bar{x}],\\ v(x,\underline{x})-c_p(x), &{} \quad \text {for}~ \mu ~ \hbox {such that}~ A(\mu )<\underline{x}. \end{array} \right. \end{aligned}$$
(15)

We then obtain the following best response function for \(\alpha \in \mathbf {R}\) from (15).

$$\begin{aligned} \tilde{b}_p(\alpha )=\left\{ \begin{array}{l l} b_p(\bar{x}), &{} \quad \text {for}~ \alpha > \bar{x}\\ b_p(\alpha ), &{} \quad \text {for}~ \alpha \in [\underline{x},\bar{x}]\\ b_p(\underline{x}), &{} \quad \text {for}~ \alpha < \underline{x}. \end{array} \right. \end{aligned}$$
(16)

The proof of Theorem A.1 is based on two additional lemmas. Lemma A.2 in Lahkar and Mukherjee (2018) establishes that the aggregate strategy function \(A(\cdot )\) is Lipschitz continuous with respect to the AS norm (14) with Lipschitz constant 1. Thus, for \(\mu ,\nu \in \mathscr {M}\),

$$\begin{aligned} |A(\mu )-A(\nu )|\le \Vert \mu -\nu \Vert . \end{aligned}$$

This follows from (2) and (14).Footnote 24

The second lemma, Lemma A.3 in Lahkar and Mukherjee (2018), establishes that \(\tilde{b}_p(\alpha )\), as defined in (16), is Lipschitz continuous with respect to \(\alpha \in \mathbf {R}\). That is, there exists a constant \(K_{B,p}\) such that for every \(\alpha ^1,\alpha ^2\in \mathbf {R}\)

$$\begin{aligned} |\tilde{b}_p(\alpha ^1)-\tilde{b}_p(\alpha ^2)|\le K_{B,p}|\alpha ^1-\alpha ^2|. \end{aligned}$$

The proof of this result requires us to show that \(\tilde{b}_p(\alpha )\) has a bounded derivative for almost all \(\alpha \in \mathbf {R}\). For this, note from (16) that since \(\tilde{b}_p(\alpha )\in [\underline{x},\bar{x}]\), it must be either (i) \(\underline{x}\), or (ii) \(\bar{x}\), or (iii) \(x_p\in (\underline{x},\bar{x})\) such that \(v_1(x_p,\alpha )=c_p^{\prime }(x_p)\); the third case is possible only if \(\alpha \in [\underline{x},\bar{x}]\). In the first two cases, it is obvious that \(\tilde{b}_p(\alpha )\) has a bounded derivative. In the third case, \(v_1(\tilde{b}_p(\alpha ),\alpha )=c_p^{\prime }(\tilde{b}_p(\alpha ))\). We can then calculate

$$\begin{aligned} \tilde{b}_p^{\prime }(\alpha )=\frac{v_{12}(\tilde{b}_p(\alpha ),\alpha )}{c_p^{\prime \prime }(\tilde{b}_p(\alpha ))-v_{11}(\tilde{b}_p(\alpha ),\alpha )}, \end{aligned}$$

which is bounded by our assumptions that v and \(c_p\) have bounded first and second derivatives on \([\underline{x},\bar{x}]\).
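For completeness, this expression for \(\tilde{b}_p^{\prime }(\alpha )\) is obtained by implicitly differentiating the first order condition \(v_1(\tilde{b}_p(\alpha ),\alpha )=c_p^{\prime }(\tilde{b}_p(\alpha ))\) with respect to \(\alpha \):

$$\begin{aligned} v_{11}(\tilde{b}_p(\alpha ),\alpha )\tilde{b}_p^{\prime }(\alpha )+v_{12}(\tilde{b}_p(\alpha ),\alpha )=c_p^{\prime \prime }(\tilde{b}_p(\alpha ))\tilde{b}_p^{\prime }(\alpha ), \end{aligned}$$

and solving for \(\tilde{b}_p^{\prime }(\alpha )\) gives the displayed formula.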

Lemmas A.2 and A.3 then lead to the Lipschitz continuity of the BR dynamic on \(\mathscr {M}\). Standard results on ODE systems in Banach spaces then imply Theorem A.1. Details are in Appendix A.3 in Lahkar and Mukherjee (2018).

1.3 Potential games

We first define the notion of the Fréchet derivative.

Definition A.2

Let X and Y be Banach spaces. We say that \(g:X\rightarrow Y\) is Fréchet-differentiable at x if there exists a continuous linear map \(T:X\rightarrow Y\) such that \(g(x+\vartheta ) = g(x) + T\vartheta + o(\Vert \vartheta \Vert )\) for all \(\vartheta \) in some neighborhood of zero in X. If it exists, this T is called the Fréchet-derivative of g at x, and is written as Dg(x).

In order to apply the Fréchet derivative, we impose the strong topology on \(\mathcal {M}(\mathcal {S})\). This is the topology induced by the variational norm on \(\mathcal {M}(\mathcal {S})\). For \(\nu \in \mathcal {M}(\mathcal {S})\), the variational norm is given by \(\Vert \nu \Vert =\sup _{g}|\int _\mathcal {S}g d\nu |\), where g is a measurable function \(g:\mathcal {S}\rightarrow \mathbf {R}\) such that \(\sup _{x\in \mathcal {S}}|g(x)|\le 1\). The variational norm on \(\mathscr {M}=\prod _{p=1}^n\mathcal {M}(\mathcal {S})\) is then given by (see, for example, Perkins and Leslie 2014)

$$\begin{aligned} \Vert \mu \Vert =\max \{\Vert \mu _1\Vert ,\ldots ,\Vert \mu _n\Vert \}\text { for }\mu =(\mu _1,\ldots ,\mu _n)\in \mathscr {M}. \end{aligned}$$
(17)
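As a simple worked example of the variational norm on a single \(\mathcal {M}(\mathcal {S})\), take distinct points \(x,y\in \mathcal {S}\) and the signed measure \(\nu =\delta _x-\delta _y\). Then

$$\begin{aligned} \Vert \delta _x-\delta _y\Vert =\sup _{g}\left| \int _\mathcal {S}g\,d(\delta _x-\delta _y)\right| =\sup _{g}|g(x)-g(y)|=2, \end{aligned}$$

with the supremum attained by any measurable g satisfying \(g(x)=1\) and \(g(y)=-1\).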

We now prove Proposition 5.3. Recall that in stating this proposition, we had extended the domain of \(A(\cdot )\) and \(C(\cdot )\) to \(\mathcal {M}\) and that of \(\beta \) to \(\mathbf {R}\).

Proof of Proposition 5.3

Let f be defined as in (13) with appropriate extensions of \(A(\cdot )\), \(C(\cdot )\) and \(\beta (\cdot )\). We need to show that for all \(\mu \in \Delta \) and all \((x,p)\in \mathcal {S}\times \mathcal {P}\),

$$\begin{aligned} \nabla f(\mu )(x,p)=x\beta (A(\mu ))-c_p(x). \end{aligned}$$
(18)

Let \(\zeta =(\zeta _1,\zeta _2,\ldots ,\zeta _n)\in \mathscr {M}\). Here, \(\zeta _p\) represents the direction of change of \(\mu _p\). Then,

$$\begin{aligned} Df(\mu )\zeta =\beta (A(\mu ))DA(\mu )\zeta -DC(\mu )\zeta , \end{aligned}$$
(19)

where \(C(\mu )\) is as defined in (12). Note that

$$\begin{aligned} A(\mu +\zeta )=\sum _{p\in \mathcal {P}}a(\mu _{p}+\zeta _p)=\sum _{p\in \mathcal {P}}\int _\mathcal {S}x(\mu _p+\zeta _p)(dx)=A(\mu )+\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}\tilde{x}\zeta _p(d\tilde{x}). \end{aligned}$$

Therefore,

$$\begin{aligned} DA(\mu )\zeta =\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}\tilde{x}\zeta _p(d\tilde{x}). \end{aligned}$$
(20)

Further,

$$\begin{aligned} C(\mu +\zeta )&=\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}c_p(x)(\mu _p+\zeta _p)(dx)\\&=\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}c_p(x)\mu _p(dx)+\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}c_p(x)\zeta _p(dx)\\&=C(\mu )+\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}c_p(x)\zeta _p(dx). \end{aligned}$$

Hence,

$$\begin{aligned} DC(\mu )\zeta =\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}c_p(x)\zeta _p(dx). \end{aligned}$$
(21)

Inserting (20) and (21) into (19) and using (11), we obtain

$$\begin{aligned} \sum _{p\in \mathcal {P}}\int _\mathcal {S}\nabla f(\mu )(x,p)\zeta _p(dx)=\beta (A(\mu ))\sum _{p\in \mathcal {P}}\int _\mathcal {S}\tilde{x}\zeta _p(d\tilde{x})-\sum _{p\in \mathcal {P}}\int _{\mathcal {S}}c_p(\tilde{x})\zeta _p(d\tilde{x}). \end{aligned}$$

This equation holds for all \(\zeta \in \mathscr {M}\). In particular, it holds for \(\zeta \) such that \(\zeta _p=\delta _x\) and \(\zeta _k=0\) for all \(k\ne p\). With this \(\zeta \), we obtain

$$\begin{aligned} \nabla f(\mu )(x,p)=x\beta (A(\mu ))-c_p(x), \end{aligned}$$

which gives us (18). Hence, \(\nabla f(\mu )(x,p)=F_{x,p}(\mu )\), where \(F_{x,p}(\mu )\) is as defined in (3). This establishes the result. \(\square \)
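To illustrate Proposition 5.3 numerically, the following sketch checks that the directional derivative of a discretized potential matches the payoff \(x\beta (A(\mu ))-c_p(x)\). It assumes that the potential takes the form \(f(\mu )=\int _0^{A(\mu )}\beta (z)dz-C(\mu )\), which is consistent with (19) but is not reproduced from (13), and it again uses the hypothetical linear demand and quadratic cost specification of the earlier sketches, with each \(\mu _p\) discretized on a finite strategy grid.

```python
# Minimal numerical check of the potential-game property in Proposition 5.3:
# perturbing mu by a small mass eps at strategy x in population p changes the potential
# by approximately eps * (x*beta(A(mu)) - c_p(x)).
# Assumptions (illustrative): f(mu) = int_0^{A(mu)} beta(z) dz - C(mu),
# beta(alpha) = a - b*alpha, c_p(x) = k_p*x^2/2, mu_p discretized on a grid.

import numpy as np

x_lo, x_hi = 0.0, 10.0
a_dem, b_dem = 12.0, 1.0                       # beta(alpha) = a_dem - b_dem*alpha
masses, cost_k = [0.5, 0.5], [1.0, 3.0]        # m_p and k_p
grid = np.linspace(x_lo, x_hi, 101)            # discretized strategy set

def beta_integral(alpha):                      # int_0^alpha beta(z) dz
    return a_dem * alpha - 0.5 * b_dem * alpha ** 2

def potential(weights):                        # weights[p][i] = mass of population p on grid[i]
    A = sum(np.dot(w, grid) for w in weights)                               # A(mu)
    C = sum(np.dot(w, k * grid ** 2 / 2) for w, k in zip(weights, cost_k))  # C(mu)
    return beta_integral(A) - C

# An arbitrary state: each population spreads its mass uniformly over the grid.
weights = [m * np.ones_like(grid) / grid.size for m in masses]
A_mu = sum(np.dot(w, grid) for w in weights)

p, i, eps = 1, 40, 1e-6                        # add mass eps at grid[i] in population p
perturbed = [w.copy() for w in weights]
perturbed[p][i] += eps

directional = (potential(perturbed) - potential(weights)) / eps
payoff = grid[i] * (a_dem - b_dem * A_mu) - cost_k[p] * grid[i] ** 2 / 2
print("directional derivative:", round(directional, 4))
print("payoff x*beta(A) - c_p(x):", round(payoff, 4))   # the two values agree
```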

Cite this article

Lahkar, R. Convergence to Walrasian equilibrium with minimal information. J Econ Interact Coord 15, 553–578 (2020). https://doi.org/10.1007/s11403-019-00243-8
