
Pareto Optimality for Multioptimization of Continuous Linear Operators

2021, Symmetry

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Introduction

Multiobjective optimization problems (MOPs) appear quite often in all areas of pure and applied mathematics, for instance, in the geometry of Banach spaces [1][2][3], in operator theory [4][5][6][7], in lineability theory [8][9][10], in differential geometry [11][12][13][14], and in all areas of Experimental, Medical and Social Sciences [15][16][17][18][19][20]. By means of MOPs, many real-life situations can be modeled accurately. However, the existence of a global solution that optimizes all the objective functions of an MOP at once is very unlikely. This is where Pareto optimal solutions (POS) come into play. Informally speaking, a POS is a feasible solution such that, if any other feasible solution is more optimal at one objective function, then it is less optimal at another objective function. Pareto optimal solutions are sometimes graphically displayed in Pareto charts (PC). In this manuscript, we prove a characterization of POS by relying on orderings and equivalence relations. We also provide a sufficient topological condition to guarantee the existence of Pareto optimal solutions.
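The informal notion of Pareto optimality above is easy to test on a finite feasible set. The following sketch (hypothetical data: one objective f to maximize, one objective g to minimize, four candidate points) filters out the Pareto optimal candidates by checking dominance:

```python
import numpy as np

# Hypothetical finite feasible region: row i holds the objective values
# of candidate x_i. f is to be maximized, g is to be minimized.
f = np.array([[3.0], [2.0], [3.0], [1.0]])
g = np.array([[2.0], [1.0], [3.0], [1.0]])

def is_pareto_optimal(i):
    """x_i is Pareto optimal if no x_j is at least as good in every
    objective and strictly better in at least one of them."""
    for j in range(len(f)):
        if j == i:
            continue
        no_worse = np.all(f[j] >= f[i]) and np.all(g[j] <= g[i])
        strictly_better = np.any(f[j] > f[i]) or np.any(g[j] < g[i])
        if no_worse and strictly_better:
            return False
    return True

pareto = [i for i in range(len(f)) if is_pareto_optimal(i)]
# Candidates 2 and 3 are dominated (by 0 and 1, respectively).
```

Note that no single candidate maximizes f and minimizes g at once, yet Pareto optimal candidates exist; this is exactly the situation the manuscript studies for continuous linear operators.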

This work is mainly motivated by certain MOPs appearing in engineering, such as the design of truly optimal transcranial magnetic stimulation (TMS) coils [18][19][20][21][22][23]. The main goal of this manuscript is to characterize (Theorem 6) the set of Pareto optimal solutions of the MOPs that appear in the design of coils, such as (3). In the case of MOPs in which operators are defined on Hilbert spaces, this characterization is improved (Corollary 1). Under this Hilbert space setting, we also study the relationships between different MOPs involving different operators, but which are defined on the same Hilbert space. These operators can be naturally combined to obtain a new MOP. The set of Pareto optimal solutions of this new MOP is compared (Corollary 2) to the set of Pareto optimal solutions of the initial MOPs.

Materials and Methods

In this section, we compile all necessary tools to accomplish our results. We also develop new and original tools, such as Theorem 1 and Corollary 2, which contribute to enriching the literature on optimization theory.

Formal Description of MOPs

A generic multiobjective optimization problem (MOP) has the following form:

M : max f_i(x), i = 1, . . . , p,   min g_j(x), j = 1, . . . , q,   x ∈ R,   (1)

where f_i, g_j : X → R are called objective functions, defined on a nonempty set X, and R is a nonempty subset of X called the feasible region or region of constraints/restrictions. The set of general solutions of the above MOP is denoted by sol(M). In fact,

sol(M) = (∩_{i=1}^{p} sol(P_i)) ∩ (∩_{j=1}^{q} sol(Q_j)),   (2)

where

P_i : max f_i(x), x ∈ R,   and   Q_j : min g_j(x), x ∈ R,

are single-objective optimization problems (SOPs), and sol(P_i), sol(Q_j) denote the sets of general solutions of P_i, Q_j for i = 1, . . . , p and j = 1, . . . , q, respectively. The set of Pareto optimal solutions of MOP M is defined as

Pos(M) := {x ∈ R : for every y ∈ R, if f_{i_0}(y) > f_{i_0}(x) for some i_0 or g_{j_0}(y) < g_{j_0}(x) for some j_0, then f_{i_1}(y) < f_{i_1}(x) for some i_1 or g_{j_1}(y) > g_{j_1}(x) for some j_1}.

To guarantee the existence of general solutions, it is usually required that X be a Hausdorff topological space, that R be a compact subset of X, that the f_i be upper semicontinuous, and that the g_j be lower semicontinuous. This way, we at least make sure that the SOPs P_i and Q_j have at least one solution (Weierstrass extreme value theorem). Even more, the solution sets sol(P_i) and sol(Q_j) are closed and thus compact, which makes sol(M) compact as well. Nevertheless, even under these conditions, sol(M) might still be empty, as we can easily infer from Equation (2).

Characterizing Pareto Optimal Solutions

A more abstract way to construct the set of Pareto optimal solutions follows. Let X be a nonempty set, f_i, g_j : X → R functions, and R a nonempty subset of X. In R, consider the equivalence relation S given by

x S y ⇔ f_i(x) = f_i(y) for every i = 1, . . . , p and g_j(x) = g_j(y) for every j = 1, . . . , q.

Next, in the quotient set of R by S, R/S, consider the order relation given by

[x]_S ≤ [y]_S ⇔ f_i(x) ≤ f_i(y) for every i = 1, . . . , p and g_j(x) ≥ g_j(y) for every j = 1, . . . , q.

Theorem 1. Consider MOP (1). Then, Pos(M) = {x ∈ R : [x]_S is a maximal element of R/S endowed with ≤}. As a consequence, sol(M) ⊆ Pos(M). If there exists i_1 ∈ {1, . . . , p} or j_1 ∈ {1, . . . , q} such that sol(P_{i_1}) or sol(Q_{j_1}) is a singleton, respectively, then sol(M) is either empty or a singleton.

Proof. Fix an arbitrary x 0 ∈ Pos(M). Let us assume that there is y ∈ R, so that

Conversely, fix an arbitrary x_0 ∈ R such that [x_0]_S is a maximal element of R/S endowed with ≤. Take y ∈ R satisfying that there exists i_0 ∈ {1, . . . , p} or j_0 ∈ {1, . . . , q} with

Lastly, suppose that sol(P i 1 ) is a singleton for some i 1 ∈ {1, . . . , p}, and write sol(

If there is

If there is

Proof. We only prove the first item, since the other follows from a dual proof. Assume that [x_{i_0}]_S is not a maximal element of R/S. Then, we can find y ∈ R in such a way that

Theorem 2. Consider MOP (1). If X is a topological space, R is a compact Hausdorff subset of X and all the objective functions are continuous, then Pos(M) ≠ ∅.

Proof. Fix i_0 ∈ {1, . . . , p}. In accordance with Lemma 1, it suffices to find a maximal element of A :

We rely on Zorn's lemma. Consider a chain in A, that is, a totally ordered subset of elements [x_k]_S, with k ranging over a totally ordered set K in such a way that k_1 < k_2 if and only if [x_{k_1}]_S < [x_{k_2}]_S. Since K is totally ordered, (x_k)_{k∈K} is a net in R. The compactness of R allows for extracting a subnet (y_h)_{h∈H} of (x_k)_{k∈K} convergent to some x_0 ∈ R. Let us first show that

The arbitrariness of ε shows that max

. , p} and suppose to the contrary that

In a similar way, it can be shown that

Since every chain of A has an upper bound, Zorn's lemma ensures the existence of maximal elements in A.

MOPs in a Functional-Analysis Context

A large number of objective functions in an MOP may cause a lack of general solutions, that is, sol(M) = ∅. This happens quite often with MOPs involving matrices. Even if the number of objective functions is small, we might still have sol(M) = ∅. The following theorem ([20], Theorem 2) is a very representative example of this lack of general solutions.

Theorem 3. Let T : X → Y be a nonzero continuous linear operator, where X, Y are normed spaces; then, the following max-min problem is free of general solutions:

max ‖T(x)‖,   min ‖x‖,   x ∈ X.   (3)

Equation (3) describes an MOP that appears in bioengineering quite often after the linearization of forces or fields [18].

Results

We focus on MOPs similar to (3). In fact, we find Pos(3) (Theorem 6 and Corollary 1). If X, Y are Hilbert spaces, say H, K, and T_1, . . . , T_k ∈ B(H, K) are continuous linear operators, then the sets of Pareto optimal solutions of the MOPs

max ‖T_i(x)‖,   min ‖x‖,   x ∈ H,

for i = 1, . . . , k are compared (Corollary 2) with the set of Pareto optimal solutions of the MOP

max ∑_{i=1}^{k} ‖T_i(x)‖²,   min ‖x‖,   x ∈ H,

where

Formatting of Mathematical Components

Let X, Y be normed spaces. Consider a nonzero continuous linear operator T : X → Y. Then

‖T‖ := sup{‖T(x)‖ : x ∈ B_X}

is the norm of T. On the other hand,

suppv(T) := {x ∈ S_X : ‖T(x)‖ = ‖T‖}

stands for the set of supporting vectors of T, where B_X := {x ∈ X : ‖x‖ ≤ 1} is the (closed) unit ball, and S_X := {x ∈ X : ‖x‖ = 1} is the unit sphere. Continuous linear operators are also called bounded because they are bounded on the unit ball. The space of bounded linear operators from X to Y is denoted by B(X, Y). Let H be a Hilbert space, and consider the dual map of H:

J_H : H → H*,   x ↦ J_H(x) := (· | x).

J_H is a surjective linear isometry between H and H* (Riesz representation theorem). In the frame of the geometry of Banach spaces, J_H is called the duality mapping. Consider H, K Hilbert spaces, and let T ∈ B(H, K) be a bounded linear operator. We define the adjoint operator of T as the unique T* ∈ B(K, H) satisfying (T(x)|y) = (x|T*(y)) for every x ∈ H and y ∈ K. If T ∈ B(H) verifies T = T*, then T is self-adjoint. This is equivalent to the equality (T(x)|y) = (x|T(y)) holding for every x, y ∈ H. If T satisfies (T(x)|x) ≥ 0 for each x ∈ H, then T is called positive. If H is complex, then T ∈ B(H) is self-adjoint if and only if (T(x)|x) ∈ R for each x ∈ H. Thus, in complex Hilbert spaces, positive operators are self-adjoint. T is strongly positive if there exists S ∈ B(H, K) with T = S* ∘ S. Typical examples of self-adjoint positive operators are strongly positive operators.
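The notions just introduced can be checked numerically for matrix operators. In the sketch below (a hypothetical 2×2 matrix), the operator norm is the largest singular value, a supporting vector is a corresponding right singular vector, and the adjoint of a real matrix is its transpose:

```python
import numpy as np

# Hypothetical matrix operator T : R^2 -> R^2.
T = np.array([[3.0, 0.0],
              [0.0, 1.0]])

U, s, Vt = np.linalg.svd(T)
norm_T = s[0]        # operator norm ||T|| = sup{||T x|| : ||x|| <= 1}
v = Vt[0]            # unit vector with ||T v|| = ||T||, i.e. v in suppv(T)
assert np.isclose(np.linalg.norm(T @ v), norm_T)

# Adjoint identity (T x | y) = (x | T* y), with T* = T^t for real matrices:
x = np.array([1.0, 2.0])
y = np.array([-1.0, 0.5])
assert np.isclose((T @ x) @ y, x @ (T.T @ y))
```

For this diagonal T, the norm is 3 and the supporting vectors are ±e_1, matching the definition of suppv(T) as the unit vectors at which the norm is attained.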

For each T ∈ B(H), the following set is the spectrum of T:

σ(T) := {λ ∈ C : λI − T ∉ U(B(H))},

where U(B(H)) is the multiplicative group of invertible operators on H. Among its spectral properties, σ(T) is compact and nonempty, and ‖T‖ ≥ max |σ(T)|. We work with a special subset of the spectrum: the point spectrum σ_p(T), that is, the set of eigenvalues of T. If ‖T‖ is an eigenvalue of T or, in other words, ‖T‖ ∈ σ_p(T), then ‖T‖ is the maximal element of |σ(T)|, i.e., ‖T‖ = max |σ(T)|. In this situation, we also write ‖T‖ = λ_max(T).

therefore, ‖T(x)‖ = ‖T‖ and hence x ∈ suppv(T).

In general, ‖T‖ ∉ σ_p(T), unless, for instance, T is compact, self-adjoint, and positive. This is why we have to rely on the adjoint T* and the strongly positive operator T* ∘ T. It is straightforward to verify that the eigenvalues of a positive operator are positive, and, in the case of a self-adjoint operator, the eigenvalues are real. When T is compact, it holds that T* ∘ T is compact, self-adjoint, and positive.
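These facts can be verified numerically for a hypothetical 3×2 matrix: T* ∘ T is self-adjoint with nonnegative eigenvalues, and ‖T‖² equals its largest eigenvalue:

```python
import numpy as np

# Hypothetical matrix operator T : R^2 -> R^3.
T = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
A = T.T @ T                       # T* composed with T (strongly positive)
assert np.allclose(A, A.T)        # self-adjoint
eigvals = np.linalg.eigvalsh(A)   # real eigenvalues (symmetric matrix)
assert np.all(eigvals >= -1e-12)  # eigenvalues are nonnegative
norm_T = np.linalg.norm(T, 2)     # operator norm (largest singular value)
assert np.isclose(norm_T**2, eigvals.max())   # ||T||^2 = lambda_max(T* . T)
```

Here A = T^t T has eigenvalues 1 and 6, so ‖T‖ = √6; the results below exploit precisely this link between ‖T‖ and λ_max(T* ∘ T).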

The next result was obtained by refining ([10], Theorem 9). In particular, we obtain the same conclusions with fewer hypotheses.

Theorem 4. Consider H, K Hilbert spaces, and T ∈ B(H, K). Then:

1. ‖T‖² = ‖T* ∘ T‖.

2. suppv(T) ⊆ suppv(T* ∘ T).

3. suppv(T) ≠ ∅ if and only if ‖T‖² is an eigenvalue of T* ∘ T. In this situation, ‖T‖² = λ_max(T* ∘ T) and suppv(T) is the set of unit eigenvectors of T* ∘ T associated with λ_max(T* ∘ T).

Proof.

1.

Fix an element x ∈ H, and the associated mapping

If the element x is taken in the unit sphere, i.e., x ∈ S_H, then, considering the previous inequalities, we conclude that ‖T‖² = ‖T* ∘ T‖.

Pos (11)

Proof. Consider the bounded linear operator T := T_1 ⊕ · · · ⊕ T_k. The next equality trivially holds for every x ∈ H:

∑_{i=1}^{k} ‖T_i(x)‖² = ‖(T_1 ⊕ · · · ⊕ T_k)(x)‖².

Since the square root is strictly increasing, (11) is equivalent to

which is an MOP of form (3).

2.

Let x ∈ suppv(T) be an arbitrary element; then, Equation (6) implies that ‖(T* ∘ T)(x)‖ = ‖T‖² = ‖T* ∘ T‖. Then, x ∈ suppv(T* ∘ T).

We rely on Theorem 6 and Corollary 1. Fix an arbitrary x ∈ ∩_{i=1}^{k} Pos(12). If x = 0, then x ∈ Pos(11). Suppose that x ≠ 0. In view of Theorem 6, x/‖x‖ ∈ ∩_{i=1}^{k} suppv(T_i). We prove that

Take any y ∈ B H . Since

As a consequence,

In accordance with Theorem 6,

3.

Take v ∈ suppv(T). Before anything else, since suppv(T) ⊆ suppv(T* ∘ T), we have

Following the chain of equalities in (6),

Thanks to the strict convexity of space H,

We implicitly proved that suppv(T)

As we remarked before, T* ∘ T is a strongly positive operator, so the eigenvalues of that operator are real and positive. Therefore, the equality λ_max(T* ∘ T) = ‖T* ∘ T‖ holds, which implies that

Take

This chain of equalities proves that w ∈ suppv(T). Consequently,

The following technical lemma establishes the behavior of the point spectrum of a linear combination of operators. However, we first introduce some notation. Considering a bounded linear operator T ∈ B(H, K) between Hilbert spaces H and K, then

Lemma 2. If we consider Hilbert spaces, H, K, and T 1 , . . . , T k ∈ B(H, K), then, for every α 1 , . . . ,

. If x = 0, there is nothing to prove; x is actually in V_{∑_{i=1}^{k} α_i T_i}. So, assume that x ≠ 0. For every i ∈ {1, . . . , k}, there exists

This shows that

The hypothesis in Lemma 3 is, in fact, very restrictive.

If H is another Hilbert space and T_i : H → H_i is a continuous linear operator for each i = 1, . . . , p, then the direct sum of T_1, . . . , T_p is defined as

T_1 ⊕ · · · ⊕ T_p : H → H_1 ⊕ · · · ⊕ H_p,   x ↦ (T_1(x), . . . , T_p(x)).

If S_i : H_i → H is a continuous linear operator for each i = 1, . . . , p, then the direct sum of S_1, . . . , S_p is now defined as

S_1 ⊕ · · · ⊕ S_p : H_1 ⊕ · · · ⊕ H_p → H,   (x_1, . . . , x_p) ↦ S_1(x_1) + · · · + S_p(x_p).
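Assuming the usual coordinatewise definitions (the first direct sum stacks the images of a single vector; the second sums the images of a tuple of vectors), both constructions can be sketched with block matrices:

```python
import numpy as np

# First direct sum: (T_1 (+) T_2)(x) = (T_1 x, T_2 x), stacking by rows.
T1 = np.array([[1.0, 0.0], [0.0, 1.0]])
T2 = np.array([[2.0, 0.0], [0.0, 2.0]])
x = np.array([1.0, -1.0])
T_sum = np.vstack([T1, T2])                 # operator H -> H_1 x H_2
assert np.allclose(T_sum @ x, np.concatenate([T1 @ x, T2 @ x]))

# Second direct sum: (S_1 (+) S_2)(x_1, x_2) = S_1 x_1 + S_2 x_2,
# stacking by columns.
S1 = np.array([[1.0, 0.0], [0.0, 1.0]])
S2 = np.array([[3.0, 0.0], [0.0, 3.0]])
x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 2.0])
S_sum = np.hstack([S1, S2])                 # operator H_1 x H_2 -> H
assert np.allclose(S_sum @ np.concatenate([x1, x2]), S1 @ x1 + S2 @ x2)
```

The row-stacked block matrix is the matrix form of T_1 ⊕ · · · ⊕ T_p used later for the combined MOP.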

Proof. Fix arbitrary elements x ∈ H and

Theorem 6. Let X, Y be normed spaces, and T : X → Y be a nonzero continuous linear operator. Then, Pos(3) = Rsuppv(T).

Proof. Fix an arbitrary x 0 ∈ Pos(3). Since

, it is sufficient if we show that

x_0/‖x_0‖ ∈ suppv(T). Therefore, we may assume that ‖x_0‖ = 1, so our aim is summed up to proving that ‖T(x_0)‖ = ‖T‖. Since

By the definition of sup, there exists y ∈ B_X such that ‖T(x_0)‖ < ‖T(y)‖ ≤ ‖T‖. Then ‖y‖ ≤ 1 = ‖x_0‖ and ‖T(x_0)‖ < ‖T(y)‖, which contradicts that x_0 ∈ Pos(3). As a consequence, ‖T(x_0)‖ = ‖T‖; hence, x_0 ∈ suppv(T). The arbitrariness of x_0 ∈ Pos(3) shows that Pos(3) ⊆ Rsuppv(T). Conversely, fix an arbitrary x_0 ∈ Rsuppv(T). There exist y_0 ∈ suppv(T) and α ∈ R such that x_0 = αy_0. Observe that ‖x_0‖ = |α|‖y_0‖ = |α|. We prove that x_0 ∈ Pos(3). Let us consider an element y ∈ X satisfying ‖y‖ < ‖x_0‖ = |α|, and we distinguish cases: if y = 0, then ‖T(x_0)‖ = |α|‖T(y_0)‖ = |α|‖T‖ > 0 = ‖T(y)‖. If y ≠ 0, then

Lastly, if there exists y ∈ X such that ‖T(y)‖ > ‖T(x_0)‖, then

When X, Y are Hilbert spaces, the Pareto optimal solutions of (3) are directly obtained via combining Theorems 4 and 6.

Corollary 1. Let T : H → K be a continuous linear operator with H, K Hilbert spaces. Then,
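For matrix operators, Corollary 1 can be read off numerically: a unit eigenvector of T^t T associated with its largest eigenvalue is a supporting vector, and its scalar multiples are Pareto optimal for (3). A sketch with a hypothetical 2×2 matrix:

```python
import numpy as np

# Hypothetical symmetric matrix operator T : R^2 -> R^2.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
A = T.T @ T
w, V = np.linalg.eigh(A)           # eigenvalues in ascending order
v_max = V[:, -1]                   # unit eigenvector of lambda_max(T^t T)
# v_max attains the operator norm, so it is a supporting vector:
assert np.isclose(np.linalg.norm(T @ v_max), np.linalg.norm(T, 2))
# and ||T||^2 = lambda_max(T^t T):
assert np.isclose(np.linalg.norm(T, 2)**2, w[-1])
```

For this T, the largest eigenvalue of T^t T is 9, the operator norm is 3, and v_max is (1, 1)/√2 up to sign; every multiple αv_max is a Pareto optimal solution of max ‖Tx‖, min ‖x‖.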

This last result allows for solving the following MOP (motivated in Section 4), given by

max ∑_{i=1}^{k} ‖T_i(x)‖²,   min ‖x‖,   x ∈ H.   (11)

The Pareto optimal solutions of (11) are related to those of

max ‖T_i(x)‖,   min ‖x‖,   x ∈ H,   (12)

for i = 1, . . . , k.

Corollary 2. If T_1, . . . , T_k ∈ B(H, K) are continuous linear operators between Hilbert spaces H and K, then:

Discussion

In order to design truly optimal TMS coils, and depending on the nature of the coil characteristics that we want to maximize or minimize, a linearization technique is applied to the electromagnetic field [18][23][24][25]; then, MOPs like (3) come out:

max ‖E_x ψ‖₂², min ψᵀLψ;   max ‖E_y ψ‖₂², min ψᵀLψ;   max ‖E_z ψ‖₂², min ψᵀLψ,

where E is a matrix representing the electromagnetic field, E_x, E_y, E_z are the components of E, and L is a positive definite symmetric matrix representing the inductance. Since L is positive definite and symmetric, the Cholesky decomposition guarantees the existence of an invertible matrix C such that L = CᵀC. Then, ψᵀLψ = ψᵀCᵀCψ = (Cψ)ᵀ(Cψ) = ‖Cψ‖₂².

Next, we apply the following change of variables: ϕ := Cψ. Then, the previous problems can be rewritten as follows:

max ‖E_x C⁻¹ϕ‖₂², min ‖ϕ‖₂²;   max ‖E_y C⁻¹ϕ‖₂², min ‖ϕ‖₂²;   max ‖E_z C⁻¹ϕ‖₂², min ‖ϕ‖₂².
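The change of variables can be verified numerically. The sketch below (hypothetical random matrices) builds a positive definite L, factors it as L = CᵀC via Cholesky, and checks that ψᵀLψ = ‖Cψ‖₂² and that an objective Eψ becomes (EC⁻¹)ϕ after setting ϕ := Cψ:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
L = M.T @ M + 4 * np.eye(4)        # positive definite symmetric inductance
C = np.linalg.cholesky(L).T        # upper-triangular factor with L = C^t C
psi = rng.standard_normal(4)
# Quadratic form becomes a squared Euclidean norm:
assert np.isclose(psi @ L @ psi, np.linalg.norm(C @ psi)**2)

# After phi = C psi, an objective E psi becomes (E C^{-1}) phi:
E = rng.standard_normal((3, 4))    # hypothetical field-component matrix
phi = C @ psi
assert np.allclose(E @ psi, E @ np.linalg.inv(C) @ phi)
```

Note that `np.linalg.cholesky` returns the lower-triangular factor, so its transpose is the C with L = CᵀC used in the text.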

Since the square root is strictly increasing, the previous MOPs are equivalent to the following (in the sense that they have the same set of global solutions and the same set of Pareto optimal solutions):

max ‖E_x C⁻¹ϕ‖₂, min ‖ϕ‖₂;   max ‖E_y C⁻¹ϕ‖₂, min ‖ϕ‖₂;   max ‖E_z C⁻¹ϕ‖₂, min ‖ϕ‖₂.

The three MOPs above are of the form (3). Therefore, in view of Corollary 1, the Pareto optimal solutions of each of them are determined by

respectively. On the other hand, we can consider the combined MOP, as in (11):

max ‖E_x C⁻¹ϕ‖₂² + ‖E_y C⁻¹ϕ‖₂² + ‖E_z C⁻¹ϕ‖₂²,   min ‖ϕ‖₂².   (17)

Let us define the following linear operator:

T : ϕ ↦ T(ϕ) := (E_x C⁻¹ϕ, E_y C⁻¹ϕ, E_z C⁻¹ϕ).

The matrix corresponding to T is precisely the block matrix obtained by stacking E_x C⁻¹, E_y C⁻¹, and E_z C⁻¹ by rows. For every ϕ ∈ R^n,

‖T(ϕ)‖₂² = ‖E_x C⁻¹ϕ‖₂² + ‖E_y C⁻¹ϕ‖₂² + ‖E_z C⁻¹ϕ‖₂².

Then, (17) is the same as

max ‖T(ϕ)‖₂²,   min ‖ϕ‖₂²,

which, after taking square roots, is an MOP of the form (3).
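Assuming the combined operator stacks the three blocks E_xC⁻¹, E_yC⁻¹, E_zC⁻¹ by rows, the identity ‖Tϕ‖₂² = ‖E_xC⁻¹ϕ‖₂² + ‖E_yC⁻¹ϕ‖₂² + ‖E_zC⁻¹ϕ‖₂² can be checked numerically (hypothetical random data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Hypothetical field-component matrices and inductance:
Ex, Ey, Ez = (rng.standard_normal((3, n)) for _ in range(3))
M = rng.standard_normal((n, n))
L = M.T @ M + n * np.eye(n)          # positive definite symmetric
C = np.linalg.cholesky(L).T          # L = C^t C
Cinv = np.linalg.inv(C)

# Row-stacked combined operator:
T = np.vstack([Ex @ Cinv, Ey @ Cinv, Ez @ Cinv])
phi = rng.standard_normal(n)
lhs = np.linalg.norm(T @ phi) ** 2
rhs = sum(np.linalg.norm(B @ Cinv @ phi) ** 2 for B in (Ex, Ey, Ez))
assert np.isclose(lhs, rhs)
```

This is the matrix form of the direct sum T_1 ⊕ T_2 ⊕ T_3 used in (11), so Corollary 1 applied to the stacked matrix yields the Pareto optimal currents of the combined problem.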

According to Corollary 1,

A very illustrative example of this situation is displayed in Appendix A.