Abstract
We introduce and examine the notion of T-Schur convexity which is naturally connected with Schur convexity. As a particular case, we consider T-Wright convex maps which generalize a well-known and intensively investigated class of t-Wright convex functions. We discuss several properties of this class of functions. In the last part of the paper we give a characterization of T-Wright affine maps i.e. maps satisfying the corresponding functional equation.
1 Introduction
The relations between the inequality of convexity
$$\begin{aligned} f(tx+(1-t)y)\le tf(x)+(1-t)f(y) \end{aligned}$$
assumed for all numbers \(t\in [0,1]\) and the same inequality assumed for one fixed number t are by now well known and classical. We will say a few words about these relations later in the paper.
Similarly natural is the question about the inequality of Schur convexity assumed for one fixed doubly stochastic matrix which, up to our knowledge, has not been studied yet. We will study the properties of such maps (which we call T-Schur convex) in this paper. But before we do this we need to recall some basic facts and notions connected with the notion of Schur convexity.
Let \(x=(x_1,\ldots ,x_n)^T, y=(y_1,\ldots ,y_n)^T\in {\mathbb {R}}^n\) denote column vectors and let \(x_{[1]}\ge \ldots \ge x_{[n]}\), \(y_{[1]}\ge \ldots \ge y_{[n]}\) be the rearrangements of x and y in descending order, respectively. Then x is said to be majorized by y (briefly \(x\prec y\)) if
$$\begin{aligned} \sum _{i=1}^{k}x_{[i]}\le \sum _{i=1}^{k}y_{[i]},\ \ k=1,\ldots ,n-1,\qquad \text {and}\qquad \sum _{i=1}^{n}x_{[i]}=\sum _{i=1}^{n}y_{[i]}. \end{aligned}$$
The relation of majorization defined above turns out to be a pre-order, i.e., it is reflexive and transitive. This notation and terminology were introduced by Hardy et al. [5].
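For readers who wish to experiment, the majorization conditions are easy to test numerically. The following sketch (plain Python; the helper name `is_majorized` is ours, not from the paper) checks the partial-sum inequalities for the decreasing rearrangements:

```python
def is_majorized(x, y, tol=1e-12):
    """Check x ≺ y: the top-k partial sums of the decreasing rearrangement
    of x never exceed those of y, and the total sums agree."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    partial_ok = all(sum(xs[:k]) <= sum(ys[:k]) + tol for k in range(1, len(x)))
    return partial_ok and abs(sum(xs) - sum(ys)) < tol

print(is_majorized([2, 2, 2], [3, 2, 1]))    # True: (2,2,2) ≺ (3,2,1)
print(is_majorized([3, 2, 1], [2, 2, 2]))    # False: the relation is not symmetric
print(is_majorized([1.5, 2.5], [1.5, 2.5]))  # True: reflexivity
```

The asymmetry of the two answers for \((2,2,2)\) and \((3,2,1)\) illustrates why \(\prec \) is only a pre-order on \({\mathbb {R}}^n\), not a total order.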
Recall that an \(n\times n\) matrix \(S=[s_{ij}]\) is said to be doubly stochastic, if
$$\begin{aligned} s_{ij}\ge 0,\ \ i,j=1,\ldots ,n, \end{aligned}$$
and
$$\begin{aligned} \sum _{i=1}^{n}s_{ij}=\sum _{j=1}^{n}s_{ij}=1,\ \ i,j=1,\ldots ,n. \end{aligned}$$
The well-known Hardy, Littlewood and Pólya theorem says that \(x\prec y\) if and only if \(x=Sy\) for some doubly stochastic matrix S (in general, the matrix S is not unique).
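The parenthetical remark that S is not unique can be seen in a small numerical sketch: below, two different doubly stochastic matrices map the same y onto the same x (the concrete matrices are our arbitrary choices):

```python
def mat_vec(S, y):
    """Multiply the matrix S by the column vector y."""
    return [sum(s * yj for s, yj in zip(row, y)) for row in S]

# Two distinct doubly stochastic matrices (rows and columns sum to 1).
S1 = [[1/3, 1/3, 1/3],
      [1/3, 1/3, 1/3],
      [1/3, 1/3, 1/3]]
S2 = [[0.4, 0.2, 0.4],
      [0.2, 0.6, 0.2],
      [0.4, 0.2, 0.4]]

y = [1.0, 0.0, -1.0]
# x = (0,0,0) is majorized by y, and both matrices witness x = Sy.
print(mat_vec(S1, y))  # [0.0, 0.0, 0.0]
print(mat_vec(S2, y))  # [0.0, 0.0, 0.0]
```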
The functions that preserve the order of majorization are said to be convex in the sense of Schur, in honor of Schur, who first considered them in 1923 [16]. Thus we say that a function \(f:W\rightarrow {\mathbb {R}}\), where \(W\subseteq {\mathbb {R}}^n\), is Schur convex if for all \(x, y\in W\) the implication
$$\begin{aligned} x\prec y\ \Longrightarrow \ f(x)\le f(y) \end{aligned}$$
holds. In the case where \(W=I^n\) with some interval \(I\subseteq {\mathbb {R}}\), the above condition is equivalent to the following one:
$$\begin{aligned} f(Sx)\le f(x) \end{aligned}$$
for all \(x\in I^n\) and for all doubly stochastic matrices \(S\in {\mathbb {R}}^{n\times n}\).
Schur convex functions have many important applications in analytic inequalities, elementary quantum mechanics and quantum information theory. A survey of results concerning majorization and Schur convex functions may be found in the extensive monograph by Arnold et al. [1].
Now, assume that D is a convex subset of a real linear space. Recall that a function \(f:D\rightarrow {\mathbb {R}}\) is said to be convex if
$$\begin{aligned} f(tx+(1-t)y)\le tf(x)+(1-t)f(y),\ \ x,y\in D,\ t\in [0,1]. \end{aligned}$$
If the above inequality is satisfied for all \(x, y\in D\) and a fixed number \(t\in (0,1)\), then we say that f is t-convex. If \(t=\frac{1}{2}\), then f is said to be convex in the sense of Jensen.
Obviously, each convex function is t-convex for every \(t\in (0,1)\); in particular, it is convex in the sense of Jensen. The converse implication does not hold in general. Indeed, fix \(t\in (0,1)\). Any discontinuous additive function \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\), i.e., a solution of Cauchy's functional equation
$$\begin{aligned} a(x+y)=a(x)+a(y),\ \ x,y\in {\mathbb {R}}, \end{aligned}$$
satisfying additionally the condition
is an example of a t-convex and Jensen convex function which is not convex (a proof of the existence of such a function can be found, for example, in [7], Theorem 5.4.2). On the other hand, every t-convex function has to be convex in the sense of Jensen. This result was proved by Kuhn in [8]; an easy proof of this fact was given by Daróczy and Páles in [4].
In 1954 Wright [18] introduced a new convexity property. A function \(f:D \rightarrow {\mathbb {R}}\) is called Wright convex if
$$\begin{aligned} f(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+f(y),\ \ x,y\in D,\ t\in [0,1]. \end{aligned}$$
Clearly, each convex and each additive function is Wright convex, and each Wright convex function is convex in the sense of Jensen.
The following theorem shows the connection between the classes of Schur convex and Wright convex functions.
Theorem 1
[12]. Let \(D\subseteq {\mathbb {R}}^m\) be a nonempty open and convex set, \(f:D\rightarrow {\mathbb {R}}\) and \(F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f(x_j)\). The following conditions are equivalent:
- (a)
F is Schur convex for some \(n\ge 2\),
- (b)
F is Schur convex for every \(n\ge 2\),
- (c)
f is convex in the sense of Wright,
- (d)
f admits the representation
$$\begin{aligned} f(x)=w(x)+a(x),\ \ x\in D, \end{aligned}$$where \(w:D\rightarrow {\mathbb {R}}\) is a convex function, and \(a:{\mathbb {R}}^m\rightarrow {\mathbb {R}}\) is an additive function.
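Condition (a) can be illustrated numerically for a convex generator f (convex functions are Wright convex, by (d) with \(a=0\)); the matrix S and the test points below are arbitrary choices of ours:

```python
f = lambda x: x * x                    # convex, hence Wright convex
F = lambda xs: sum(f(t) for t in xs)   # F(x1,...,xn) = sum of f(xj)

# An arbitrary doubly stochastic matrix (rows and columns sum to 1).
S = [[0.5, 0.3, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
y = [3.0, -1.0, 5.0]
x = [sum(s * yj for s, yj in zip(row, y)) for row in S]  # x = Sy, so x ≺ y

# Schur convexity of F: x ≺ y forces F(x) <= F(y).
print(F(x) <= F(y))  # True
```

Here \(F(x)\le F(y)\) reflects exactly the implication \(x\prec y\Rightarrow F(x)\le F(y)\) of condition (a).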
If the inequality (1) is satisfied for all \(x, y\in D\) and a fixed number \(t\in (0,1)\), then we say that f is a t-Wright convex function. The definition of t-Wright convex functions was introduced by Matkowski in [11]. The connection between t-Wright convexity and Jensen convexity was investigated in [10, 15]. In [15] necessary and sufficient topological conditions are given under which every t-Wright convex function has to be Jensen convex. In [10] the authors solved an algebraic problem posed by Matkowski in [11], who asked whether a t-Wright convex function with a fixed \(t \in (0, 1)\) has to be Jensen convex. Maksa et al. [10] gave a positive answer to the problem of Matkowski for all rational \(t \in (0, 1)\) and certain algebraic values of t. However, they proved that if t is either transcendental, or the distance of some algebraic (possibly complex) conjugate of t from \(\frac{1}{2}\) is at least \(\frac{1}{2}\), then there exists a function which is t-Wright convex but not Jensen convex.
In the class of continuous functions the notions of convexity, Jensen convexity, Wright convexity and t-Wright convexity coincide.
The organization of the paper is as follows: in Sect. 2 we introduce the notion of T-Schur convexity and give some background results which are the starting point for our further considerations. In Sect. 3 we introduce the notion of T-Wright convexity as a natural generalization of the usual t-Wright convexity. We prove that local boundedness at a single point of a T-Wright convex map implies its local boundedness at every point; moreover, we show that any semi-continuous T-Wright convex map has to be convex. Section 4 is devoted to a separation theorem for T-Wright convex maps. We show that if f and \(-g\) are T-Wright convex functions satisfying \(g\le f\), then there exists a T-Wright affine function h such that \(g\le h\le f\). In the last section we give a characterization of T-Wright affine maps. This result generalizes a theorem proved by Lajkó [9], who gave a characterization of t-Wright affine functions.
2 T-Schur Convex Maps
Throughout this paper (unless explicitly stated otherwise) D stands for a convex subset of a real linear space, \(n\in {\mathbb {N}},\ n\ge 2\) is a fixed number and \(T\in {\mathbb {R}}^{n\times n}\) is a fixed doubly stochastic matrix. Motivated by the concept of Schur convexity we introduce the notion of T-Schur convexity in the following way.
Definition 1
A function \(f:D^n\rightarrow {\mathbb {R}}\) is said to be T-Schur-convex if
$$\begin{aligned} f(Tx)\le f(x),\ \ x\in D^n. \end{aligned}$$
If \(f:D^n\rightarrow {\mathbb {R}}\) is a function such that \(-f\) is T-Schur convex, then f is called T-Schur concave. If f is at the same time T-Schur convex and T-Schur concave, then we say that it is T-Schur affine. In this case f satisfies the following functional equation
$$\begin{aligned} f(Tx)=f(x),\ \ x\in D^n. \end{aligned}$$
This definition together with some results was presented by the first author at the XII International Symposium on Generalized Convexity and Monotonicity held in Hajdúszoboszló, Hungary, from August 27 to September 2, 2017, whereas a very particular case (\(n=2\)) of the class of T-Schur convex functions was examined by Burai and Makó in [3].
Now, given a function \(f:D^n\rightarrow {\mathbb {R}}\), we consider the set
$$\begin{aligned} W_f=\big \{S\in {\mathbb {R}}^{n\times n}:\ S\ \text {is doubly stochastic and}\ f(Sx)\le f(x),\ x\in D^n\big \}. \end{aligned}$$
Obviously, for any function f the above set is nonempty, because the identity matrix I is a member of \(W_f\).
Proposition 1
For every function \(f:D^n\rightarrow {\mathbb {R}}\) the following implication holds:
$$\begin{aligned} T,S\in W_f\ \Longrightarrow \ TS\in W_f, \end{aligned}$$
in particular,
$$\begin{aligned} T\in W_f\ \Longrightarrow \ T^k\in W_f,\ \ k\in {\mathbb {N}}. \end{aligned}$$
Proof
Take arbitrarily \(T, S\in W_f\). Then, directly from the definition, we get
$$\begin{aligned} f((TS)x)=f(T(Sx))\le f(Sx)\le f(x),\ \ x\in D^n. \end{aligned}$$
\(\square \)
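The proposition is consistent with the elementary fact that the product of doubly stochastic matrices is again doubly stochastic, which can be checked directly; a small sketch with arbitrarily chosen matrices:

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_doubly_stochastic(M, tol=1e-9):
    """Nonnegative entries, every row sum and every column sum equal to 1."""
    n = len(M)
    nonneg = all(e >= -tol for row in M for e in row)
    rows = all(abs(sum(row) - 1) < tol for row in M)
    cols = all(abs(sum(M[i][j] for i in range(n)) - 1) < tol for j in range(n))
    return nonneg and rows and cols

T = [[0.7, 0.3], [0.3, 0.7]]
S = [[0.4, 0.6], [0.6, 0.4]]
print(is_doubly_stochastic(mat_mul(T, S)))  # True
```

In particular, all powers \(T^k\) of a doubly stochastic T stay doubly stochastic, which is what makes the implication \(T\in W_f\Rightarrow T^k\in W_f\) meaningful.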
For a fixed matrix T define the set
where I is the identity matrix. Let us define the relation \(\preceq _T\) on \(D^n\) in the following manner
It is easy to observe that if \(T\in {\mathbb {R}}^{n\times n}\) is a doubly stochastic matrix which is not a permutation matrix, then the relation \(\preceq _T\) defines a partial order on \(D^n\); moreover, a function \(f:D^n\rightarrow {\mathbb {R}}\) is T-Schur convex if and only if
Now, we prove a separation type theorem for T-Schur convex maps.
Theorem 2
Let \(f, -g:D^n\rightarrow {\mathbb {R}}\) be T-Schur convex functions. If
$$\begin{aligned} g(x)\le f(x),\ \ x\in D^n, \end{aligned}$$
then there exists a T-Schur affine function \(h:D^n\rightarrow {\mathbb {R}}\) such that
$$\begin{aligned} g(x)\le h(x)\le f(x),\ \ x\in D^n. \end{aligned}$$
Proof
First we define two sequences of functions \(f_k, g_k:D^n\rightarrow {\mathbb {R}}\) by the formulas
$$\begin{aligned} f_k(x)=f(T^kx),\ \ g_k(x)=g(T^kx),\ \ x\in D^n,\ k\in {\mathbb {N}}. \end{aligned}$$
By the assumption
$$\begin{aligned} g_k(x)\le f_k(x) \end{aligned}$$
for \(k\in {\mathbb {N}},\ x\in D^n\). Then \((g_k)_{k\in {\mathbb {N}}}\) is an increasing and bounded above sequence of functions (similarly, \((f_k)_{k\in {\mathbb {N}}}\) is a decreasing and bounded below sequence of functions). Putting
$$\begin{aligned} h(x)=\lim _{k\rightarrow \infty }g_k(x),\ \ x\in D^n, \end{aligned}$$
we clearly have
$$\begin{aligned} g(x)\le h(x)\le f(x),\ \ x\in D^n, \end{aligned}$$
and
$$\begin{aligned} h(Tx)=\lim _{k\rightarrow \infty }g(T^{k+1}x)=h(x),\ \ x\in D^n, \end{aligned}$$
which means that h is a T-Schur affine map and this finishes the proof. \(\square \)
To quote the next result which we will use in the sequel we need the following definition.
Definition 2
A doubly stochastic matrix \(T\in {\mathbb {R}}^{n\times n}\) is called semi-positive if all entries of some power \(T^m\) are positive.
Theorem 3
[2]. If \(T\in {\mathbb {R}}^{n\times n}\) is a semi-positive doubly stochastic matrix, then
$$\begin{aligned} \lim _{m\rightarrow \infty }T^m=\frac{1}{n}E, \end{aligned}$$
where \(E=(e_{i,j}),\ e_{i,j}=1,\) for all \(i, j=1,\ldots ,n\).
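The convergence in Theorem 3 is easy to observe numerically; in the sketch below the matrix T (an arbitrary choice with all entries positive, hence semi-positive) has powers approaching \(\frac{1}{n}E\):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A semi-positive doubly stochastic matrix: T itself already has
# all entries positive, so the power m = 1 witnesses semi-positivity.
T = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

P = T
for _ in range(60):       # compute a high power of T
    P = mat_mul(P, T)

# Every entry of the power approaches 1/n = 1/3.
print(all(abs(e - 1/3) < 1e-12 for row in P for e in row))  # True
```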
As the following two simple examples show, the above theorem is not true without the assumption of semi-positivity of T.
Example 1
The identity matrix \(I \in {\mathbb {R}}^{n\times n},\ I=(\delta _{i,j}),\ i, j=1,\ldots ,n,\ n\ge 2,\)
has the property \(I=I^2=I^3=\ldots =I^m,\ m\in {\mathbb {N}}\) and therefore \(\lim _{m\rightarrow \infty }I^m=I\).
Example 2
The matrix
is periodic
and \(T^3=T\). So, in general \(T^{2k+1}=T\) and \(T^{2k}=T^2\) for \(k=1,2,\ldots \).
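A matrix with exactly the periodicity described in Example 2 is the 2×2 permutation matrix below (our concrete choice for illustration); its powers oscillate instead of converging:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A doubly stochastic permutation matrix; no power has all entries
# positive, so it is not semi-positive.
T = [[0.0, 1.0],
     [1.0, 0.0]]

T2 = mat_mul(T, T)
T3 = mat_mul(T2, T)
print(T2 == [[1.0, 0.0], [0.0, 1.0]])  # True: T^2 = I
print(T3 == T)                         # True: T^3 = T, the powers are periodic
```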
Theorem 4
Let D be a convex subset of a real linear topological space, let \(T\in {\mathbb {R}}^{n\times n}\) be a semi-positive doubly stochastic matrix, and let \(f:D^n\rightarrow {\mathbb {R}}\) be a T-Schur convex function. If f is lower semi-continuous, then
$$\begin{aligned} f\Big (\frac{x_1+\cdots +x_n}{n},\ldots ,\frac{x_1+\cdots +x_n}{n}\Big )\le f(x_1,\ldots ,x_n),\ \ (x_1,\ldots ,x_n)\in D^n. \end{aligned}$$
Proof
Let \(x=(x_1,\ldots ,x_n)\in D^n\) and denote \({\overline{x}}=\Big (\frac{x_1+\cdots +x_n}{n},\ldots ,\frac{x_1+\cdots +x_n}{n}\Big )\). Using the lower semi-continuity of f and Theorem 3 we get
$$\begin{aligned} f({\overline{x}})=f\Big (\lim _{m\rightarrow \infty }T^mx\Big )\le \liminf _{m\rightarrow \infty }f(T^mx)\le f(x). \end{aligned}$$
\(\square \)
3 T-Wright Convex Maps
In this section, motivated by [12], we consider T-Schur convex sums, i.e., T-Schur convex maps \(g:D^n\rightarrow {\mathbb {R}}\) of the form:
$$\begin{aligned} g(x_1,\ldots ,x_n)=\sum _{i=1}^{n}f(x_i), \end{aligned}$$
where \(f:D\rightarrow {\mathbb {R}}\) is a given function defined on a convex subset D of a real linear space. Let \(T=(t_{ij}),\ i, j=1,2,\ldots ,n\) be a fixed doubly stochastic matrix. The function g of the form (2) is T-Schur convex if and only if it satisfies the following functional inequality:
$$\begin{aligned} \sum _{i=1}^{n}f\Big (\sum _{j=1}^{n}t_{ij}x_j\Big )\le \sum _{i=1}^{n}f(x_i),\ \ x_1,\ldots ,x_n\in D. \end{aligned}$$
Observe that the class of functions satisfying the inequality (3) generalizes the class of t-Wright convex functions. Indeed, an arbitrary doubly stochastic matrix \(T\in {\mathbb {R}}^{2\times 2}\) is of the form
$$\begin{aligned} T=\left[ \begin{array}{cc} t &{} 1-t\\ 1-t &{} t \end{array}\right] \end{aligned}$$
with some \(t\in [0,1]\); thus a function \(f:D\rightarrow {\mathbb {R}}\) satisfies the inequality (3) with \(n=2\) if and only if it is a t-Wright convex function. This allows us to formulate the following definition.
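The \(n=2\) reduction can be tested numerically: for a convex function such as \(f(x)=|x|\) (convex functions are t-Wright convex for every t) the two-point inequality holds at all sampled parameters; the helper name and the sample points are ours:

```python
def t_wright_holds(f, t, x, y, tol=1e-12):
    """Inequality (3) for n = 2:
    f(tx+(1-t)y) + f((1-t)x+ty) <= f(x) + f(y)."""
    lhs = f(t * x + (1 - t) * y) + f((1 - t) * x + t * y)
    return lhs <= f(x) + f(y) + tol

f = abs   # convex, hence t-Wright convex for every t in [0,1]
print(all(t_wright_holds(f, t, x, y)
          for t in (0.2, 0.5, 0.9)
          for x in (-3.0, 0.5, 2.0)
          for y in (-1.0, 4.0)))  # True
```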
Definition 3
A function \(f:D\rightarrow {\mathbb {R}}\) satisfying the inequality (3) is called T-Wright convex. If f is a function such that \(-f\) satisfies the inequality (3), then we say that it is T-Wright concave. A function which is at the same time T-Wright convex and T-Wright concave is said to be T-Wright affine.
In the proof of our next result (and in the sequel) we will use the following theorem which is a particular case of Lemma 3.7 from [6].
Theorem 5
Let D be an open and convex subset of a real linear topological space and let \(f:D\rightarrow {\mathbb {R}}\) be a Jensen convex function. If f is lower semi-continuous in D then it is convex.
As an immediate consequence of the above result and Theorem 4 we obtain the following theorem.
Theorem 6
Let D be an open and convex subset of a real linear topological space and assume that \(f:D\rightarrow {\mathbb {R}}\) is a T-Wright convex function, where \(T=(t_{ij})_{i,j=1,\ldots ,n}\) is a semi-positive doubly stochastic matrix. If f is lower semi-continuous in D then it is convex.
Proof
On account of Theorem 5 it is enough to show that f is convex in the sense of Jensen. It follows from Theorem 4 that
$$\begin{aligned} nf\Big (\frac{x_1+\cdots +x_n}{n}\Big )\le \sum _{i=1}^{n}f(x_i),\ \ x_1,\ldots ,x_n\in D. \end{aligned}$$
Putting \(x_1:=x,\ x_2=\ldots =x_n:=y\) we get
$$\begin{aligned} f\Big (\frac{1}{n}x+\Big (1-\frac{1}{n}\Big )y\Big )\le \frac{1}{n}f(x)+\Big (1-\frac{1}{n}\Big )f(y), \end{aligned}$$
so f is a \(\frac{1}{n}\)-convex function and, due to Kuhn's result [8] (see also [4]), convex in the sense of Jensen. \(\square \)
Let D be a subset of a topological space. Recall that a function \(f:D\rightarrow {\mathbb {R}}\) is called locally bounded (locally bounded above, locally bounded below) at a point \(x_0\in D\) if there exists a neighbourhood U of \(x_0\) such that the function f is bounded (bounded above, bounded below) on \(U\cap D\). The next theorem concerns the local boundedness below of T-Wright convex maps and generalizes the corresponding theorem for t-Wright convex functions obtained in [13] (Theorem 2, p. 404).
Theorem 7
Let D be an open and convex subset of a locally convex real linear topological space, let \(T\in {\mathbb {R}}^{n\times n}\) be a semi-positive doubly stochastic matrix and let \(f:D\rightarrow {\mathbb {R}}\) be a T-Wright convex function. If f is locally bounded below at a point \(x_0\in D\) then it is locally bounded below at every point \(x\in D\).
Proof
By the assumption there is a neighbourhood \(U_{x_0}\) of \(x_0\) and a real number m such that
Without loss of generality we may assume that \(U_{x_0}\) is a convex set. For an arbitrary number \(k\in {\mathbb {N}}_0={\mathbb {N}}\cup \{0\}\) we put
Note that \(V_k\) is a convex neighbourhood of \(x_0\), \(k\in {\mathbb {N}}_0\). We will prove by induction that for all \(k\in {\mathbb {N}}_0\)
If \(k=0\) then the above inequality coincides with (4). Assume (5) for some \(k\in {\mathbb {N}}\). Fix an arbitrary point \(y\in V_{k+1}\). There exists a \(z\in U_{x_0}\) such that
From the convexity of the set \(V_k\) we get
Now, using the fact that the function \(\phi :{\mathbb {R}}\rightarrow X\) given by the formula
is continuous and \(\phi (\frac{1}{n})=\Big (1-\frac{1}{n}\Big )x_0+\frac{1}{n}y\in V_k,\) we can find an \(\varepsilon >0\) such that
Since, by Theorem 3, \(\lim _{k\rightarrow \infty }T^k=\frac{1}{n}E,\) we have
where \(T^k=(t^k_{ij})_{i,j=1,\ldots ,n},\ k\in {\mathbb {N}}\). In view of the fact that \(T^k\in W_f,\) on account of Proposition 1 and from the formula (3) for \(x_1=x, x_2=\ldots =x_n=x_0\) we obtain
which together with the induction assumption implies that
or, equivalently,
and the proof of (5) is complete. We have shown that the function f is bounded below on every set \(V_k,\ k\in {\mathbb {N}}_0\), and since
the proof of the theorem is finished. \(\square \)
Let \((X,\tau )\) be a topological space. By \(\tau _x\) we denote the family of all open subsets of X containing x. Let \(D\subset X\) be an open set and let \(f:D\rightarrow [-\infty ,\infty )\) be a function. Let us recall that the lower hull \(m_f\) of f is defined by the formula
Thus \(m_f\) is a function defined in D and with values in \([-\infty ,\infty )\); \(m_f:D\rightarrow [-\infty ,\infty )\). Note that definition (6) implies that
Observe that, if for some \(U\in \tau _x\), f is bounded below on U then \(m_f(x)>-\infty \).
In the proof of our next result we will use Theorem 5 and the following theorem, which is a particular case of Theorem 4.1 from [6].
Theorem 8
Let D be an open subset of a real linear topological space and let \(f:D\rightarrow [-\infty ,\infty )\) be a function. Then the function \(m_f\) given by (6) is lower semi-continuous in D.
Theorem 9
Let D be an open and convex subset of a locally convex real linear topological space and let \(f:D\rightarrow {\mathbb {R}}\) be a T-Wright convex function with semi-positive doubly stochastic matrix \(T\in {\mathbb {R}}^{n\times n}\). Then the function \(m_f\) given by (6) is convex in D.
Proof
By Theorem 7 \(m_f\equiv -\infty \) in D or \(m_f:D\rightarrow {\mathbb {R}}\) has finite values. In the former case \(m_f\) is convex. Assume that
We will show that \(m_f\) is a T-Wright convex function. To do it, fix \(x_1,\ldots ,x_n\in D\) and \(\varepsilon >0\) arbitrarily. Put
By the definition of \(m_f\) there exists a convex neighbourhood of zero U such that
and
On the other hand, there exist points \(r_i\in U_{x_i}\) such that
By the convexity of U, we get
By (7) and (8) and T-Wright convexity of f we have
Letting \(\varepsilon \) tend to zero, we obtain the T-Wright convexity of \(m_f\). From Theorem 8 we infer that the function \(m_f\) is lower semi-continuous in D. Consequently, \(m_f\) is convex in D on account of Theorem 6. \(\square \)
4 Separation Theorem for T-Wright Convex Functions
In this section we deal with the separation problem for T-Wright convex maps. Let \(n\in {\mathbb {N}},\ n\ge 2\) and let \(k\in \{1,\ldots ,n\}\). Throughout this section \(A_k\) and \(B_k\) will denote two disjoint sets such that \(A_k\cup B_k =\{1,\ldots ,n\} {\setminus } \{k\}\).
In the proof of the main result of this section we will need the following technical lemma (we use here the convention that the sum over an empty set of indexes is equal to zero).
Lemma 1
Assume that \(h:D\rightarrow {\mathbb {R}}\) is a T-Wright convex function and \(g:D\rightarrow {\mathbb {R}}\). For fixed elements \(x_i\in D,\ i\in A_k\), define the function \(h_k:D\rightarrow [-\infty ,\infty )\) by the formula
Then
- (i)
if \(\inf _{x\in D}[h(x)-g(x)]\le 0\), then \(h_k(x)\le h(x),\ \ x\in D\);
- (ii)
if g is a T-Wright concave function then the function \(h_k\) is T-Wright convex.
Proof
To prove the first assertion suppose that h is a T-Wright convex function, i.e.,
Subtracting the expression \(\sum _{i\in A_k}h(x_i)+\sum _{j\in B_k}g(x_j)\) from both sides of the above inequality, we get
Now, taking the infimum with respect to all \(x_j\in D,\ j\in B_k,\) we obtain
In order to prove the second statement suppose that g is a T-Wright concave map. We will show that the function \(h_k\) is T-Wright convex. To this end fix the points \(y_1,\ldots ,y_n\in D\) arbitrarily. Fix arbitrary real numbers \(s_1,\ldots ,s_n\) such that
By definition of \(h_k\) there exist points \(u_j^p\in D\ \text {for}\ p=1,\ldots ,n,\ j\in B_k\) such that
Therefore, using the T-Wright convexity of h and \(-g\) we obtain
Letting \(s_p\) tend to \(h_k(y_p),\ p=1,\ldots ,n,\) in the above inequalities, we get the T-Wright convexity of \(h_k\). \(\square \)
Now, we prove the separation type theorem for T-Wright convex maps. The corresponding theorem for t-Wright convex functions was proved in [14].
Theorem 10
Let D be a convex subset of a real linear space, and let \(f, -g:D\rightarrow {\mathbb {R}}\) be T-Wright convex functions. If
$$\begin{aligned} g(x)\le f(x),\ \ x\in D, \end{aligned}$$
then there exists a T-Wright affine function \(h:D\rightarrow {\mathbb {R}}\) such that
$$\begin{aligned} g(x)\le h(x)\le f(x),\ \ x\in D. \end{aligned}$$
Proof
Observe that without loss of generality we may assume that
$$\begin{aligned} \alpha :=\inf _{x\in D}[f(x)-g(x)]=0, \end{aligned}$$
considering otherwise the function \(f(x)-\alpha \) instead of f(x). Let us define the family of maps
Note that \({\mathcal {H}}\ne \emptyset \), since \(f\in {\mathcal {H}}\). The pair \(({\mathcal {H}},\preceq )\) is a partially ordered set, where the order relation is defined as follows
We will show that any chain in \({\mathcal {H}}\) has a lower bound in \({\mathcal {H}}\). Let \({\mathcal {L}}\subset {\mathcal {H}}\) be an arbitrary chain. Define the function \({\overline{k}}:D\rightarrow [-\infty ,\infty )\) by the formula
Clearly,
which, in particular, implies that \({\overline{k}}\) has finite values. Now, we will show that \({\overline{k}}\) is a T-Wright convex map. To prove it fix \(x_1,\ldots ,x_n\in D\) and a number \(\varepsilon >0\) arbitrarily. There exist \(k_1,\ldots ,k_n\in {\mathcal {H}}\) such that
Therefore, by the definition of \({\overline{k}}\) and T-Wright convexity of \(k_1,\ldots ,k_n\) we obtain
where \(p=\min _{\preceq }\{k_1,\ldots ,k_n\}\). By Kuratowski–Zorn lemma there exists a minimal element h in \({\mathcal {H}}\). Since \(h\in {\mathcal {H}},\) the inequalities
hold true for all \(x_1,\ldots ,x_n\in D\). Now, fix arbitrarily \(k\in \{1,\ldots ,n\}\). We are going to use Lemma 1 with \(B_k:=\{1,\ldots ,n\}\setminus \{k\}\) (clearly, in this case \(A_k=\emptyset \)). Let us rewrite the above inequalities in the form
By Lemma 1 the function
belongs to the family \({\mathcal {H}}\), moreover,
By the minimality of h the inequality
holds for every \(k\in \{1,\ldots ,n\}\) and all \(x_1,\ldots ,x_{k-1},x_k,x_{k+1},\ldots ,x_n\in D\). In particular, putting \(k=1,\) we get
or equivalently,
Using again Lemma 1 for \(k=2,\ A_2=\{1\},\ B_2=\{3,\ldots ,n\}\) we infer that the function
belongs to the family \({\mathcal {H}}\), moreover,
Therefore by the minimality of h we have
or equivalently,
for all \(x_1,\ldots ,x_n\in D\).
Repeating this procedure n times, in the last step we obtain
For arbitrarily chosen \(x_1,\ldots ,x_{n-1}\in D\) we define the function \(h_n:D\rightarrow {\mathbb {R}}\) (as a function of one variable) by the formula
It follows from Lemma 1 (applied for \(k=n, A_k=\{1,\ldots ,n-1\}, B_k=\emptyset \)) that \(h_n\in {\mathcal {H}}\), moreover,
Using again the minimality of h, we obtain
which ends the proof. \(\square \)
5 The Corresponding Functional Equation
Let X be a real linear space and let D be a convex subset of X. In this part of the paper we will deal with the corresponding functional equation:
However, without any additional assumptions on f, nothing interesting can be said about the solutions. For example, every function \(f:D^n\rightarrow {\mathbb {R}}\) of the form
$$\begin{aligned} f(x_1,\ldots ,x_n)=\Phi (x_1+\cdots +x_n), \end{aligned}$$
with arbitrary \(\Phi :X\rightarrow {\mathbb {R}}\), is a solution to the above equation, since every doubly stochastic matrix preserves the sum of the coordinates.
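Functions of the coordinate sum are invariant under every doubly stochastic matrix; a quick numerical sketch (the matrix T and the choice of \(\Phi \) are arbitrary):

```python
def mat_vec(T, x):
    return [sum(t * xj for t, xj in zip(row, x)) for row in T]

def f(xs):
    """f(x1,...,xn) = Phi(x1+...+xn) with an arbitrary Phi."""
    s = sum(xs)
    return s ** 3 - 2 * s

# An arbitrary doubly stochastic matrix: columns sum to 1, so the
# coordinate sum of T x equals the coordinate sum of x.
T = [[0.6, 0.4, 0.0],
     [0.2, 0.3, 0.5],
     [0.2, 0.3, 0.5]]

x = [1.0, -2.0, 3.5]
print(abs(f(mat_vec(T, x)) - f(x)) < 1e-9)  # True: f(Tx) = f(x)
```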
Therefore we will give a representation of T-Wright affine functions i.e. we solve the functional equation
To deal with this equation some definitions and results from the paper of Székelyhidi [17] will be needed. In this part of the paper G will be a topological Abelian group and H will be an Abelian group.
Definition 4
Let \(U\subset G\) be a neighbourhood of zero and n be a positive integer. Then a function \(A:U^n\rightarrow H\) is called locally n-additive if
$$\begin{aligned} A(x_1,\dots ,x_i+{\tilde{x}}_i,\dots ,x_n)=A(x_1,\dots ,x_i,\dots ,x_n)+A(x_1,\dots ,{\tilde{x}}_i,\dots ,x_n) \end{aligned}$$
holds for \(i=1,\dots ,n\) whenever \(x_1,\dots ,x_{i-1},x_i,{\tilde{x}}_i, x_{i+1},\dots ,x_n,x_i+{\tilde{x}}_i\in U\).
We will also use the following notation: if n is a positive integer and \(A:U^n\rightarrow H\) is a given function, then the diagonalization of A, \(\text {diag}A:U\rightarrow H\), is defined by
$$\begin{aligned} \text {diag}A(x)=A(x,\ldots ,x),\ \ x\in U. \end{aligned}$$
In the next definition we introduce the notion of local polynomials.
Definition 5
Let \(D\subset G\) be an open set and let \(f:D\rightarrow H\) be a function. Then f is called a local polynomial of degree at most n at the point \(x_0\in D\) if there exists a neighbourhood of zero \(U\subset G\) and there exist locally k-additive and symmetric functions \(A_k:U^k\rightarrow H,\) \(k=0,\dots ,n,\) such that \(U+x_0\subset D\) and
$$\begin{aligned} f(x)=\sum _{k=0}^{n}\text {diag}A_k(x-x_0) \end{aligned}$$
holds whenever \(x-x_0\in U\) (here \(U^0=U\) and a 0-additive function is a constant).
Now we may quote the main result from [17].
Theorem 11
Let the group H be divisible and torsion free and \(\varphi _j\) be a local isomorphism of G for \(j=1,\dots ,n+1.\) Let \(D\subset G\) be an open set and \(f,f_j:D\rightarrow H\) be functions for which
holds whenever \(x,x+\varphi _j(y)\in D.\) Then f is a local polynomial of order at most n on D.
In the following lemma we use the above result to show that the solutions of our main equation are local polynomials.
Lemma 2
Let X be a real linear space, let \(D\subset X\) be an open and convex set, let \(T=(t_{ij})_{i,j=1,\ldots ,n}\) be a doubly stochastic matrix which is not a permutation matrix. If \(f:D\rightarrow {\mathbb {R}}\) satisfies the equation
then f is a local polynomial of order at most n on D.
Proof
Since T is not a permutation matrix, there exist \(i_0,j_0\in \{1,\dots ,n\}\) such that \(t_{i_0,j_0}\in (0,1).\) Putting in (12) \(z_1\) in place of \(x_{i_0}\) and \(z_2\) in places of \(x_1,x_2,\dots ,x_{i_0-1},x_{i_0+1},\dots ,x_n,\) and denoting \(r_j:=t_{i_0,j},s_j:=\sum _{i\in \{1,\dots ,n\},i\ne i_0}t_{i,j},\) we get
Now, since \(r_j+s_j=1,j=1,\dots ,n,\) we have
further \(z_2\) may be written as \( z_2=z_1+(z_2-z_1).\) Thus, taking \(x=z_1\) and \(y=z_2-z_1,\) we may write (13) in the form
Observe that all numbers \(s_j\) are different from zero. Indeed if for some \(j_1\) we have \(s_{j_1}=0,\) then \(r_{j_1}=1\) which is impossible, since \(r_{j_0}>0\) and all \(r_j\) sum up to 1. Thus we take in Theorem 11 one of the functions \(f_i\) as \((n-1)f\) and all the others as \(-f.\) After these substitutions the proof is finished. \(\square \)
Now we will show that the monomial summands of the solution of Eq. (12) also satisfy this equation.
Lemma 3
Let X be a real linear topological space, let \(D\subset X\) be an open and convex set and let
where \(a_j=diag A_j\) for some j-additive and symmetric functions \(A_j:D^n\rightarrow {\mathbb {R}},j=1,\dots ,m\) and \(a_0\) is a constant. If f satisfies Eq. (12) then each of the functions \(a_j,j=1,\dots ,m\) separately also satisfies this equation.
Proof
Take any \(x_1,\dots ,x_n\in D.\) Since D is an open set, there exists an \(\varepsilon >0\) such that for all \(b\in (1-\varepsilon ,1+\varepsilon )\) we have \(bx_j\in D,\ j=1,\dots ,n\) and, consequently, \(\sum _{i=1}^nt_{i,j}bx_i\in D,\ j=1,\dots ,n.\) Now take a rational number \(q\in (1-\varepsilon ,1+\varepsilon ).\) Then we have
Using this equality in (12) and on account of the form of f, we get
The above equality is satisfied for all rational numbers q from a nondegenerate interval; therefore the coefficients of each power \(q^k,\ k=1,\dots ,m,\) must vanish, which means that our equation is satisfied by each of the functions \(a_k,\ k=1,\dots ,m,\) as claimed. \(\square \)
Now we can present the general solution of Eq. (12).
Theorem 12
Let X be a real linear topological space, let \(D\subset X\) be an open and convex set and let T be a doubly stochastic matrix which is not a permutation matrix. A function \(f:D\rightarrow {\mathbb {R}}\) satisfies Eq. (12) if and only if
where \(a_0\) is a given constant and
where \(A_k:D^k\rightarrow {\mathbb {R}}\) is k-additive and symmetric function such that
and for \(k\ge 2\) and for each \(l_1,\dots ,l_n\in \{0,\dots ,k-1\}\) satisfying \(l_1+\dots +l_n=k\)
Proof
Using Lemma 2 we know that f is of the form (15), and from Lemma 3 we know that each of the functions \(a_k\) satisfies Eq. (12). It is easy to see that each \(a_k\) satisfies the equality
where
Using all the above facts, we may write
Now the summands containing equal numbers of occurrences of each variable at both sides must be equal. Thus, taking
we get
which, in view of (16), yields (17). Now assume that \(k\ge 2\) and take any \((l_1,\dots ,l_n)\) of the form different from (21). Then the corresponding expression is missing from the right-hand side of (20), and therefore all the terms on the left-hand side connected with this \((l_1,\dots ,l_n)\) must sum up to zero, giving us (18).
\(\square \)
Corollary 1
Under the assumptions of Theorem 12 we have the following two assertions:
(i) If a function \(f:D\rightarrow {\mathbb {R}}\) is a continuous solution of (12), then
$$\begin{aligned} f(x)=a(x)+b,\ \ x\in D, \end{aligned}$$
where a is a continuous additive function and b is a constant.
(ii) If \(t_{i,j}\in {\mathbb {Q}}\) and a function \(f:D\rightarrow {\mathbb {R}}\) is a solution of (12), then
$$\begin{aligned} f(x)=a(x)+b,\ \ x\in D, \end{aligned}$$
where a is an additive function and b is a constant.
Proof
From Theorem 12 we know that f is of the form (15). To prove this corollary we need to show that the summands \(a_k\) of f with \(k\ge 2\) must be equal to zero. Thus let \(k\in {\mathbb {N}}\) be greater than or equal to 2. Observe that for continuous functions \(A_k\) as well as for the rational coefficients \(t_{1,j}\) we have
Now, since T is not a permutation matrix, there exist pairs \((i_0,j_0),(i_1,j_0)\) such that \(t_{i_0,j_0},t_{i_1,j_0}\in (0,1).\) Take \(l_1,\dots ,l_n\) of the form: \(l_{i_0}=1,l_{i_1}=k-1\) and \(l_i=0,\) for \(i\ne i_0,i_1.\) Then, from (18), we get
Using here the equality (22), we get
Finally, putting here x in places of \(x_{i_0},x_{i_1},\) we get
Using here the fact that \(t_{i_0,j_0},t_{i_1,j_0}\ne 0\) we obtain \(A_k=0.\) \(\square \)
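The dichotomy behind the corollary can be sampled numerically for \(n=2\): an affine function satisfies the equation for any doubly stochastic T, while \(f(x)=x^2\) does not once T is not a permutation matrix (the matrix and the test points below are our choices):

```python
def eq_defect(f, T, xs):
    """Difference of the two sides of
    sum_i f(sum_j t_{ij} x_j) = sum_i f(x_i)."""
    n = len(T)
    lhs = sum(f(sum(T[i][j] * xs[j] for j in range(n))) for i in range(n))
    return lhs - sum(f(x) for x in xs)

T = [[0.3, 0.7], [0.7, 0.3]]       # doubly stochastic, not a permutation
affine = lambda x: 2.0 * x + 3.0   # a(x) + b with a additive, b constant
square = lambda x: x * x

print(abs(eq_defect(affine, T, [1.0, 4.0])) < 1e-9)  # True: affine solves it
print(abs(eq_defect(square, T, [1.0, 4.0])) > 1e-9)  # True: x^2 does not
```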
We end the paper with a final remark concerning a known result connected with Eq. (12).
Remark 1
Taking in Theorem 12 \(n=2,\) we get the result obtained by Lajkó in [9].
References
Arnold, B.C., Marshall, A.W., Olkin, I.: Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics, 2nd edn. Springer, New York (2011)
Baik, S., Bang, K.: Limit theorem of the doubly stochastic matrices. Kangweon-Kyungki Math. Jour. 11(2), 155–160 (2003)
Burai, P., Makó, J.: On certain Schur-convex functions. Publ. Math. Debr. 89(3), 307–319 (2016)
Daróczy, Z., Páles, Z.: Convexity with given infinite weight sequences. Stochastica 11(1), 5–12 (1987)
Hardy, G.H., Littlewood, J.E., Pólya, G.: Inequalities, 2nd edn. Cambridge University Press, Cambridge (1952)
Kominek, Z.: Convex Functions in Linear Spaces. With Polish and Russian summaries, Prace Naukowe Uniwersytetu Śla̧skiego w Katowicach [Scientific Publications of the University of Silesia], 1087. Uniwersytet Śla̧ski, Katowice, (1989)
Kuczma, M.: An Introduction to the Theory of Functional Equations and Inequalities. Birkhäuser, Basel (2009)
Kuhn, N.: A note on \(t\)-convex functions. In: General inequalities 4 (Oberwolfach, 1983), 269–276, Int. Schriftenreihe Numer. Math., vol. 71. Birkhäuser, Basel (1984)
Lajkó, K.: On a functional equation of Alsina and Garcia-Roig. Publ. Math. Debr. 52, 507–515 (1998)
Maksa, Gy., Nikodem, K., Páles, Zs.: Result on \(t\)-Wright convexity. C. R. Math. Rep. Acad. Sci. Canada 13, 274–278 (1991)
Matkowski, J.: On \(a\)-Wright convexity and the converse of Minkowski's inequality. Aequationes Math. 43, 106–112 (1992)
Ng, C.T.: Functions generating Schur-convex sums. In: Walter, W. (ed.) General Inequalities 5, Oberwolfach, 1986, Int. Ser. Numer. Math. vol. 80. Birkhäuser, Boston, pp. 433–438 (1987)
Olbryś, A.: Some conditions implying the continuity of \(t\)-Wright convex functions. Publ. Math. Debr. 68(3–4), 401–418 (2006)
Olbryś, A.: A support theorem for t-Wright convex functions. Math. Inequal. Appl. 14(2), 399–412 (2011)
Olbryś, A.: Representation theorems for \(t\)-Wright convexity. J. Math. Anal. Appl. 384(2), 273–283 (2011)
Schur, I.: Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinantentheorie. Sitzungsber. Berl. Math. Ges. 22, 9–20 (1923)
Székelyhidi, L.: Local polynomials and functional equations. Publ. Math. Debr. 30, 283–290 (1983)
Wright, E.M.: An inequality for convex functions. Amer. Math. Mon. 61, 620–622 (1954)
Olbryś, A., Szostok, T. On T-Schur Convex Maps. Results Math 75, 30 (2020). https://doi.org/10.1007/s00025-020-1154-0