Article

Learning Rate of Regularized Regression Associated with Zonal Translation Networks

1 School of Mathematical Physics and Information, Shaoxing University, Shaoxing 312000, China
2 Department of Economic Statistics, School of International Business, Zhejiang Yuexiu University, Shaoxing 312000, China
3 School of Information Engineering, Jingdezhen Ceramic University, Jingdezhen 333403, China
* Author to whom correspondence should be addressed.
Submission received: 1 August 2024 / Revised: 8 September 2024 / Accepted: 9 September 2024 / Published: 12 September 2024

Abstract

We give a systematic investigation of the reproducing property of the zonal translation network and apply this property to kernel regularized regression. We propose the concept of the Marcinkiewicz–Zygmund setting (MZS) for scattered nodes collected from the unit sphere. We show that, under the MZ condition, the corresponding convolutional zonal translation network is a reproducing kernel Hilbert space. Based on these facts, we propose a kernel regularized regression learning framework and provide an upper bound estimate for the learning rate. We also prove the density of the zonal translation network using the spherical Fourier–Laplace series.

1. Introduction

It is known that convolutional neural networks provide various models and algorithms for processing data in many fields, such as computer vision (see [1]), natural language processing (see [2]), and sequence analysis in bioinformatics (see [3]). Regularized neural network learning has thus become an attractive research topic (see [4,5,6,7,8,9]). In this paper, we give a theoretical analysis of the learning rate of regularized regression associated with the zonal translation network on the unit sphere.

1.1. Kernel Regularized Learning

Let $\Omega$ be a compact subset of the $d$-dimensional Euclidean space $\mathbb{R}^d$ with the usual norm $\|x\| = \big(\sum_{k=1}^{d} x_k^2\big)^{1/2}$ for $x = (x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$, and let $Y$ be a nonempty closed subset of $[-M, M]$ for a given $M > 0$. The aim of regression learning is to learn the target function, which describes the relationship between the input $x \in \Omega$ and the output $y \in Y$, from a hypothesis function space. In most cases, the target function is offered as a set of observations $\mathbf{z} = \{z_i\}_{i=1}^{m} = \{(x_i, y_i)\}_{i=1}^{m} \in Z^m$ drawn independently and identically distributed (i.i.d.) according to a joint probability distribution (measure) $\rho(x, y) = \rho_\Omega(x)\,\rho(y \mid x)$ on $Z = \Omega \times Y$, where $\rho(y \mid x)$ ($x \in \Omega$) is the conditional probability of $y$ for a given $x$ and $\rho_\Omega(x)$ is the marginal probability of $x$, i.e., for every integrable function $\varphi: \Omega \times Y \to \mathbb{R}$, there holds
$$\int_{\Omega \times Y} \varphi(x, y)\, d\rho = \int_{\Omega} \int_{Y} \varphi(x, y)\, d\rho(y \mid x)\, d\rho_\Omega .$$
For a given normed space $(B, \|\cdot\|_B)$ consisting of real functions on $\Omega$, we define the regularized learning framework with $B$ as
$$f_{\mathbf{z},\lambda}^{(B)} := \arg\min_{f \in B} \Big\{ \mathcal{E}_{\mathbf{z}}(f) + \frac{\lambda}{2}\|f\|_B^2 \Big\}, \tag{1}$$
where $\lambda > 0$ is the regularization parameter and $\mathcal{E}_{\mathbf{z}}(f)$ is the empirical mean
$$\mathcal{E}_{\mathbf{z}}(f) = \frac{1}{m}\sum_{i=1}^{m}\big(y_i - f(x_i)\big)^2 .$$
To analyze the convergence of algorithm (1) quantitatively, we often use the integral framework (see [10,11])
$$f_{\rho,\lambda}^{(B)} := \arg\min_{f \in B} \Big\{ \mathcal{E}_{\rho}(f) + \frac{\lambda}{2}\|f\|_B^2 \Big\},$$
where $\mathcal{E}_{\rho}(f) = \int_Z \big(y - f(x)\big)^2\, d\rho$.
The optimal target function is the regression function
$$f_\rho(x) = \int_Y y\, d\rho(y \mid x),$$
which satisfies
$$\mathcal{E}_\rho(f_\rho) = \inf_{f} \mathcal{E}_\rho(f),$$
where the infimum is taken over all $\rho_\Omega$-measurable functions $f$. Moreover, there holds the well-known equality (see [12])
$$\|f - f_\rho\|_{L^2(\rho_\Omega)}^2 = \mathcal{E}_\rho(f) - \mathcal{E}_\rho(f_\rho), \quad f \in L^2(\rho_\Omega). \tag{2}$$
The choices for the hypothesis space $B$ in (1) are rich. For example, C.P. An et al. choose the algebraic polynomial class as $B$ (see [13,14,15]). In [16], C. De Mol et al. chose a dictionary as $B$. Recently, some papers have chosen a Sobolev space as the hypothesis space $B$ (see [17,18]). By the kernel method, we traditionally mean replacing $B$ with a reproducing kernel Hilbert space (RKHS) $(H_K, \langle\cdot,\cdot\rangle_K)$, which is a Hilbert space consisting of real functions defined on $\Omega$ for which there is a Mercer kernel $K_x(y) = K(x, y)$ on $\Omega \times \Omega$ (i.e., $K(x, y)$ is a continuous and symmetric function on $\Omega \times \Omega$, and for any $n \geq 1$ and any $\{x_1, x_2, \ldots, x_n\} \subset \Omega$ the Mercer matrices $(K(x_i, x_j))_{i,j=1,2,\ldots,n}$ are positive semi-definite) such that
$$f(x) = \langle f, K_x \rangle_K, \quad x \in \Omega,\ f \in H_K, \tag{3}$$
and there holds the embedding inequality
$$|f(x)| \leq c\, \|f\|_K, \quad x \in \Omega,\ f \in H_K, \tag{4}$$
where $c$ is a constant independent of $f$ and $x$. There are two results for the optimal solution $f_{\mathbf{z},\lambda}^{(B)}$. The reproducing property (3) yields the representation
$$f_{\mathbf{z},\lambda}^{(B)}(x) = \sum_{k=1}^{m} c_k K_{x_k}(x), \quad x \in \Omega. \tag{5}$$
The embedding inequality (4) yields the inequality
$$\big|f_{\mathbf{z},\lambda}^{(B)}(x)\big| \leq \frac{\sqrt{2}\, M}{\sqrt{\lambda}}, \quad x \in \Omega. \tag{6}$$
Representation (5) is the theoretical basis for kernel regularized regression (see [19,20]). Inequality (6) is the key inequality for bounding the learning rate with the covering number method (see [21,22,23]). For other techniques of the kernel method, one can consult [10,11,12,24].
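To make the computational content of (1) and (5) concrete, the following minimal Python sketch (our own illustration, not the construction used later in this paper) solves the kernel regularized least squares problem for a Gaussian kernel on synthetic data; the kernel choice, the data, and the helper names are assumptions made only for this example.

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    # Mercer kernel K(x, y) = exp(-||x - y||^2 / (2 * width^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_regularized_regression(X, y, lam, width=1.0):
    """Minimize (1/m) sum_i (y_i - f(x_i))^2 + (lam/2) ||f||_K^2 over the RKHS.

    By representation (5), f = sum_k c_k K_{x_k}, and the first-order
    condition reduces to the linear system (K + (m*lam/2) I) c = y.
    """
    m = len(y)
    K = gaussian_kernel(X, X, width)
    c = np.linalg.solve(K + 0.5 * m * lam * np.eye(m), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, width) @ c

# Toy usage: noisy samples of a smooth target on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
f = kernel_regularized_regression(X, y, lam=1e-2, width=0.2)
print(f(np.array([[0.25], [0.5]])))
```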

1.2. Marcinkiewicz-Zygmund Setting (MZS)

It is particularly important to mention here that the translation network has recently been used as the hypothesis space of regularized learning (see [25,26]). From the viewpoint of approximation theory, a simple single-layer translation network with $n$ neurons is a function class produced by translating a given function $\phi$ and can be written as
$$\Delta_{\phi, \bar{X}}^{\Omega} = \Big\{ \sum_{i=1}^{n} c_i\, T_{x_i}(\phi, \cdot) : c_i \in \mathbb{R},\ x_i \in \Omega,\ i = 1, 2, \ldots, n \Big\},$$
where $\bar{X} = \{x_i\}_{i=1}^{n} \subset \Omega$ is a given node set and, for a given $x \in \Omega$, $T_x(\phi, y)$ is a translation operator associated with $\Omega$. For example, when $\Omega = \mathbb{R}^d$ or $\Omega = [-\pi, \pi]^d$, we choose $T_x(\phi, y)$ to be the usual convolution translation $\phi(x - y)$ for a $\phi$ defined on $\mathbb{R}^d$ or a $2\pi$-periodic function $\phi$ (see [27,28]). When $\Omega = S^{d-1} = \{x \in \mathbb{R}^d : \|x\| = 1\}$ is the unit sphere in $\mathbb{R}^d$, one can choose $T_x(\phi, y)$ to be the zonal translation $\phi(x \cdot y)$ for a given $\phi$ defined on the interval $[-1, 1]$ (see [29]). In [30], we defined a kind of translation operator $T_x(\phi, y)$ for $\Omega = [-1, 1]$. To ensure that the single-layer translation network $\Delta_{\phi, \bar{X}}^{\Omega}$ can approximate constant functions, $\Delta_{\phi, \bar{X}}^{\Omega}$ is modified to
$$N_{\phi, \bar{X}}^{\Omega} = \Big\{ \sum_{i=1}^{n} c_i\, T_{x_i}(\phi, \cdot) + c_0 : c_0, c_i \in \mathbb{R},\ x_i \in \Omega,\ i = 1, 2, \ldots, n \Big\}. \tag{7}$$
In the case of $T_x(\phi, y) = \sigma(w \cdot x + b)$ with $w, x \in \mathbb{R}^d$ and $b \in \mathbb{R}$, R.D. Nowak et al. use (7) to design regularized learning frameworks (see [31]). An algorithm for constructing such networks is provided by S.B. Lin et al. [26] and is applied to build regularized learning algorithms. In [5], $N_{\phi, \bar{X}}^{\Omega}$ was used to construct deep neural network learning frameworks. Investigations of the same type are given in [32,33,34].
It is easy to see that the approximation ability and the construction of a translation network depend upon the node set $\bar{X} = \{x_i\}_{i=1}^{n}$ (see [35,36,37]). On the other hand, according to the viewpoint of [38], the quadrature rule and the Marcinkiewicz–Zygmund (MZ) inequality associated with $\bar{X}$ also influence the construction of the translation network $\Delta_{\phi, \bar{X}}$. Let $\Omega \subset \mathbb{R}^d$ be a bounded closed set with a measure $d\omega$ satisfying $\int_\Omega d\omega = V < +\infty$. We denote by $\mathcal{P}_n \subset L^2(\Omega)$ the linear space of polynomials on $\Omega$ of degree at most $n$, equipped with the $L^2$-inner product $\langle v, z\rangle = \int_\Omega v z\, d\omega$. The $n$-point quadrature rule (QR) is
$$\int_\Omega g\, d\omega \approx \sum_{j=1}^{n} w_j\, g(x_j), \tag{8}$$
where $\bar{X} = \{x_j\}_{j=1}^{n} \subset \Omega$ and the weights $w_j$ are all positive for $j = 1, 2, \ldots, n$. We say the QR (8) has polynomial exactness $n$ if there is a positive integer $\gamma$ such that
$$\int_\Omega g\, d\omega = \sum_{j=1}^{n} w_j\, g(x_j), \quad g \in \mathcal{P}_{\gamma n}. \tag{9}$$
The Marcinkiewicz–Zygmund (MZ) inequality based on the set $\bar{X} = \{x_j\}_{j=1}^{n} \subset \Omega$ is
$$\Big(\sum_{j=1}^{n} A_j\, |g(x_j)|^p\Big)^{\frac{1}{p}} \asymp \Big(\int_\Omega |g(\omega)|^p\, d\omega\Big)^{\frac{1}{p}}, \quad 1 < p < +\infty,\ g \in \mathcal{P}_n, \tag{10}$$
where the weights $A_j$ in (10) may differ from the $w_j$ in (8) and (9). Another important inequality associated with polynomial approximation, called the MZ condition in the case of the unit sphere in [39], is
$$\Big(\sum_{j=1}^{n} A_j\, |g(x_j)|^p\Big)^{\frac{1}{p}} \leq C \Big(\int_\Omega |g(\omega)|^p\, d\omega\Big)^{\frac{1}{p}}, \quad 1 < p < +\infty,\ g \in \mathcal{P}_{rn}, \tag{11}$$
where $C$ is a constant independent of $p$, and $n$ and $r$ are any positive integers.
We give examples to explain (9)–(11); these observations form the main idea of this paper:
(i) In many cases of the domain $\Omega$, relations (9)–(11) coexist. For example, when $\Omega = [-1, 1]$, (9)–(11) hold in the case that $x_j = x_{j,n}$ ($j = 1, 2, \ldots, n$) are the zeros of the $n$-th Jacobi polynomial orthogonal with respect to $d\omega$ and $w_j = A_j$ ($j = 1, 2, \ldots, n$) are the Cotes–Christoffel numbers associated with $d\omega$ (see Theorems A and B in [40] and Theorem 3.4.1 in [41]); a small numerical check of this kind of exactness on $[-1,1]$ is sketched after this list. H.N. Mhaskar et al. first showed in [42] that (9) and (10) coexist on the unit sphere $S^{d-1}$, and the corresponding relation (11) was shown in [43].
(ii) The relations (9)–(11) are equivalent. According to the viewpoint of [38], the quadrature rule (QR) follows automatically from the Marcinkiewicz–Zygmund (MZ) inequality in many cases of $\Omega$. H.N. Mhaskar et al. gave a general method of transition from the MZ inequality to a polynomially exact QR in [44]; see also Theorem 4.1 in [42]. In particular, in the case of $\Omega = [-1, 1]$, (10) may be obtained from (9) and (11) directly (see [45]).
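As a concrete illustration of the polynomial exactness (9) on $\Omega = [-1, 1]$ (with $d\omega = dx$, i.e., the Legendre case), the following minimal Python sketch checks numerically that the $n$-point Gauss–Legendre rule integrates polynomials of degree up to $2n - 1$ exactly; the test monomials are our own choice.

```python
import numpy as np

n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)  # n-point Gauss-Legendre rule on [-1, 1]

# Check exactness on monomials x^k: the rule is exact for deg(g) <= 2n - 1.
for k in range(2 * n + 2):
    quad = np.sum(weights * nodes ** k)             # sum_j w_j g(x_j)
    exact = 0.0 if k % 2 == 1 else 2.0 / (k + 1)    # int_{-1}^{1} x^k dx
    print(k, abs(quad - exact) < 1e-12)             # True up to k = 2n - 1, then it may fail
```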
These observations show that, besides the polynomially exact QR (9), the MZ inequality (10) is also an important feature for describing the node set $\bar{X}$. For this reason, a node set $\bar{X} = \{x_j\}_{j=1}^{n} \subset \Omega$ that yields an MZ inequality is given a special name, the Marcinkiewicz–Zygmund family (MZF) (see [38,46,47,48]). However, from this literature, we know that the MZF does not totally coincide with the Lagrange interpolation nodes in the case of $d > 1$. Hyperinterpolations were then developed with the help of exact QRs (see [49,50,51,52,53]) and are applied in approximation theory and regularized learning (see [13,14,15,54]). On the other hand, the problem of the polynomially exact QR has also been investigated on its own (see [55,56]). The concept of a spherical $t$-design was first defined in [57] and has subsequently been investigated in many papers; one can see the classical references [58,59]. We say $T_t = \{x_i\}_{i=1}^{|T_t|} \subset S^{d-1}$ is a spherical $t$-design if
$$\frac{1}{|T_t|}\sum_{i=1}^{|T_t|} \pi(x_i) = \frac{1}{\omega_{d-1}} \int_{S^{d-1}} \pi(x)\, d\omega(x) \tag{12}$$
for every spherical polynomial $\pi$ of degree at most $t$, where $\omega_{d-1}$ is the surface area of $S^{d-1}$. Moreover, in many applications, the polynomially exact QR and the MZF have been used as assumptions. For example, C.P. An et al. gave an approximation order for hyperinterpolation under the assumptions that (9), (12), and the MZ inequality (10) hold (see [60,61]). Also, in [25], Lin et al. investigated regularized regression associated with a zonal translation network by assuming that the node set $\bar{X} = \{x_j\}_{j=1}^{n} \subset S^{d-1}$ is a type of spherical $t$-design.
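As a minimal illustration of (12) (our own example, not one taken from the cited papers), the six vertices of the regular octahedron form a spherical 3-design on $S^2$; the sketch below checks that their point average reproduces the sphere average of a few polynomials of degree at most 3.

```python
import numpy as np

# Vertices of the regular octahedron: a spherical 3-design on S^2.
T = np.array([[ 1, 0, 0], [-1, 0, 0],
              [ 0, 1, 0], [ 0,-1, 0],
              [ 0, 0, 1], [ 0, 0,-1]], dtype=float)

# Monte Carlo reference for the uniform sphere average (for comparison only).
rng = np.random.default_rng(0)
U = rng.standard_normal((200000, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)

tests = {
    "x1":      lambda p: p[:, 0],                  # sphere average 0
    "x1^2":    lambda p: p[:, 0] ** 2,             # sphere average 1/3
    "x1*x2":   lambda p: p[:, 0] * p[:, 1],        # sphere average 0
    "x1^2*x3": lambda p: p[:, 0] ** 2 * p[:, 2],   # sphere average 0
}
for name, g in tests.items():
    print(f"{name:8s} design={g(T).mean():+.4f}  sphere~{g(U).mean():+.4f}")
```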
The polynomially exact QR is also a good tool in approximation theory. For example, H.N. Mhaskar et al. used a polynomially exact QR to construct the first periodic translation network operators (see [27]) and the zonal translation network operators (see [29]). Along this line, translation operators defined on the unit ball, on the Euclidean space $\mathbb{R}^d$, and on the interval $[-1, 1]$ have been constructed (see [28,30,62]).
The above investigations encourage us to define a terminology that contains both the Marcinkiewicz–Zygmund family (MZF) and the polynomially exact QR: we call it the Marcinkiewicz–Zygmund setting (MZS).
Definition 1
(Marcinkiewicz–Zygmund setting (MZS) on $\Omega$). We say a given finite node set $\Omega(n) \subset \Omega$ forms a Marcinkiewicz–Zygmund setting on $\Omega$ if (9)–(11) simultaneously hold.
In this paper, we design the translation network $N_{\phi, \bar{X}}^{\Omega}$ by taking $\Omega = S^{d-1}$, assuming that $\bar{X} = \Omega(n) = \{x_j\}_{j=1}^{n} \subset S^{d-1}$ satisfies the MZS, and choosing the zonal translation $T_x(\phi, y) = \phi(x \cdot y)$ with $\phi$ a given integrable function on $[-1, 1]$. Under these assumptions, we provide a learning framework with $N_{\phi, \Omega(n)}^{S^{d-1}}$ as the hypothesis space and establish the learning rate.
The contributions of this paper are twofold. First, after absorbing the ideas of [38,46,47,48] and the successful experience of [13,25,27,29,42,60,61,63], we propose the concept of the Marcinkiewicz–Zygmund setting (MZS) for scattered nodes on the unit sphere; based on this assumption, we show the convergence rate of the approximation error for kernel regularized learning associated with spherical Fourier analysis. Second, we give a new application of translation networks and, at the same time, expand the application scope of kernel regularized learning.
The paper is organized as follows. In Section 2, we first show the density of the zonal translation class and then show the reproducing property for the translation network $\Delta_{\phi, \Omega(n)}^{S^{d-1}}$. In Section 3, we present the main results of this paper: a new regression learning framework and learning setting, the error decomposition for the error analysis, and an estimate for the convergence rate. In Section 4, we give several lemmas that are used to prove the main results. The proofs of all theorems and propositions are given in Section 5.
Throughout the paper, we write $A = O(B)$ if there is a positive constant $C$ independent of $A$ and $B$ such that $A \leq C B$. In particular, by $A = O(1)$ we mean that $A$ is a bounded quantity. We write $A \asymp B$ if both $A = O(B)$ and $B = O(A)$.

2. The Properties of the Translation Networks on the Unit Sphere

Let $\phi \in L^1_{W_\eta} = \{\phi : \|\phi\|_{1, W_\eta} = \int_{-1}^{1} |\phi(x)|\, W_\eta(x)\, dx < +\infty\}$, where $W_\eta(x) = (1 - x^2)^{\eta - \frac{1}{2}}$, $\eta > -\frac{1}{2}$. H.N. Mhaskar et al. constructed in [29] a sequence of approximation operators to show that the zonal translation class
$$\Delta_\phi^{S^{d-1}} = \mathrm{cl}\,\big\{\phi(x \cdot y) : y \in S^{d-1}\big\} \cup \{1\} = \mathrm{cl}\,\Big\{\sum_{i=1}^{n} c_i\, \phi(x_i \cdot\, \cdot\,) + c_0 : c_0, c_i \in \mathbb{R},\ x_i \in S^{d-1},\ i = 1, 2, \ldots, n;\ n = 1, 2, \ldots\Big\}$$
is dense in $L^p(S^{d-1})$ ($1 \leq p \leq +\infty$) if $\widehat{a_l^\eta(\phi)} > 0$ for all $l = 0, 1, 2, \ldots$, where
$$\widehat{a_l^\eta(\phi)} = c_\eta \int_{-1}^{1} \phi(x)\, \frac{C_l^\eta(x)}{C_l^\eta(1)}\, W_\eta(x)\, dx, \quad \eta = \frac{d - 2}{2},$$
and $C_n^\eta(x) = p_n^{(\eta - \frac{1}{2}, \eta - \frac{1}{2})}(x)$ is the $n$-th generalized Legendre (Gegenbauer) polynomial, which satisfies the orthogonality relation
$$c_\eta \int_{-1}^{1} C_n^\eta(x)\, C_m^\eta(x)\, W_\eta(x)\, dx = h_n^\eta\, \delta_{n, m},$$
with $c_\eta = \frac{\Gamma(\eta + 1)}{\Gamma(\frac{1}{2})\,\Gamma(\eta + \frac{1}{2})}$ and $h_n^\eta = \frac{\eta}{n + \eta}\, C_n^\eta(1)$; it is known (see (B.2.1), (B.2.2), and (B.5.1) of [64]) that $|C_n^\eta(x)| \leq C_n^\eta(1) \asymp n^{2\eta - 1}$ for $x \in [-1, 1]$. It follows that
$$\phi(t) = \sum_{l=0}^{+\infty} \widehat{a_l^\eta(\phi)}\, \frac{l + \eta}{\eta}\, C_l^\eta(t) = \sum_{l=0}^{+\infty} \widehat{a_l^\eta(\phi)}\, Z_l^\eta(t), \tag{13}$$
where $Z_l^\eta(t) = \frac{l + \eta}{\eta}\, C_l^\eta(t)$ and $\eta = \frac{d - 2}{2}$.
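For instance, on $S^2$ (so $\eta = \frac{1}{2}$, $W_\eta \equiv 1$, $c_\eta = \frac{1}{2}$, and $C_l^{1/2}$ is the Legendre polynomial with $C_l^{1/2}(1) = 1$), the coefficients $\widehat{a_l^\eta(\phi)}$ of a zonal profile can be approximated by Gauss–Legendre quadrature; the sketch below (our own illustration) does this for $\phi(t) = e^t$, whose coefficients are all strictly positive, so this $\phi$ meets the positivity condition above.

```python
import numpy as np
from scipy.special import eval_legendre

def zonal_coefficients(phi, L, quad_points=200):
    """Approximate a_l^eta(phi)^ for eta = 1/2 (the sphere S^2),
    i.e. (1/2) * int_{-1}^{1} phi(t) P_l(t) dt for l = 0..L."""
    t, w = np.polynomial.legendre.leggauss(quad_points)
    return np.array([0.5 * np.sum(w * phi(t) * eval_legendre(l, t)) for l in range(L + 1)])

coeffs = zonal_coefficients(np.exp, L=8)
print(np.round(coeffs, 6))    # rapidly decaying coefficients
print(np.all(coeffs > 0))     # True: phi(t) = e^t satisfies the positivity condition
```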
Let $\mathcal{P}_n^d$ denote the space of all homogeneous polynomials of degree $n$ in $d$ variables. We denote by $L^p(S^{d-1})$ the class of all measurable functions defined on $S^{d-1}$ with finite norm
$$\|f\|_{p, S^{d-1}} = \begin{cases} \Big(\int_{S^{d-1}} |f(x)|^p\, d\sigma(x)\Big)^{\frac{1}{p}}, & 1 \leq p < +\infty, \\ \max_{x \in S^{d-1}} |f(x)|, & p = +\infty, \end{cases}$$
and for $p = +\infty$ we take $L^\infty(S^{d-1})$ to be the space $C(S^{d-1})$ of continuous functions on $S^{d-1}$ with the uniform norm.
For a given integer $n \geq 0$, the restriction to $S^{d-1}$ of a homogeneous harmonic polynomial of degree $n$ is called a spherical harmonic of degree $n$. If $Y \in \mathcal{P}_n^d$, then $Y(x) = \|x\|^n\, Y\big(\frac{x}{\|x\|}\big)$, so that $Y$ is determined by its restriction to the unit sphere. Let $\mathcal{H}_n(S^{d-1})$ denote the space of spherical harmonics of degree $n$. Then,
$$\dim \mathcal{H}_n(S^{d-1}) = \binom{n + d - 2}{n} + \binom{n + d - 3}{n - 1}, \quad n = 1, 2, 3, \ldots.$$
Spherical harmonics of different degrees are orthogonal on the unit sphere. For further properties of spherical harmonics, one can refer to [65].
For $n = 0, 1, 2, \ldots$, let $\{Y_l^n : 1 \leq l \leq \dim \mathcal{H}_n(S^{d-1})\}$ be an orthonormal basis of $\mathcal{H}_n(S^{d-1})$. Then,
$$\frac{1}{\omega_{d-1}} \int_{S^{d-1}} Y_l^n(\xi)\, Y_{l'}^m(\xi)\, d\sigma(\xi) = \delta_{l, l'}\, \delta_{m, n},$$
where $\omega_{d-1}$ denotes the surface area of $S^{d-1}$, $\omega_{d-1} = \frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}$. Furthermore, by (1.2.8) in [64], we have
$$\sum_{l=1}^{\dim \mathcal{H}_n(S^{d-1})} Y_l^n(x)\, Y_l^n(y) = \frac{n + \eta}{\eta}\, C_n^\eta(x \cdot y), \quad x, y \in S^{d-1}, \tag{14}$$
where $C_n^\eta(t)$ is the $n$-th generalized Legendre polynomial, the same as in (13). Combining (13) and (14), we have
$$\phi(x \cdot y) = \sum_{l=0}^{+\infty} \widehat{a_l^\eta(\phi)}\, \frac{l + \eta}{\eta}\, C_l^\eta(x \cdot y) = \sum_{l=0}^{+\infty} \widehat{a_l^\eta(\phi)} \sum_{k=1}^{\dim \mathcal{H}_l(S^{d-1})} Y_k^l(x)\, Y_k^l(y), \quad x, y \in S^{d-1}. \tag{15}$$
Also, there holds the Funk–Hecke formula (see (1.2.11) in [64] or (1.2.6) in [66]):
$$\int_{S^{d-1}} f(x \cdot y)\, Y_n(y)\, d\sigma(y) = \widehat{a_n^\eta(f)}\, Y_n(x), \quad Y_n \in \mathcal{H}_n(S^{d-1}). \tag{16}$$
In particular, there holds
$$\int_{S^{d-1}} |\phi(x \cdot y)|\, d\sigma(y) = \omega_{d-1}\, \|\phi\|_{1, W_\eta}, \quad x \in S^{d-1},\ \phi \in L^1_{W_\eta}. \tag{17}$$
For $f \in L^1(S^{d-1})$ we define $a_l^{(n)}(f) = \int_{S^{d-1}} f(x)\, Y_l^n(x)\, d\sigma(x)$. Then,
$$f(x) \sim \sum_{l=0}^{\infty} \sum_{j=1}^{\dim \mathcal{H}_l(S^{d-1})} a_j^{(l)}(f)\, Y_j^l(x) = \sum_{l=0}^{\infty} Y_l(f, x),$$
where $Y_l(f, x) = \sum_{j=1}^{\dim \mathcal{H}_l(S^{d-1})} a_j^{(l)}(f)\, Y_j^l(x)$. It is known that (see (6.1.4) in [66])
$$\|f\|_{2, S^{d-1}} = \Big(\sum_{l=0}^{\infty} \sum_{j=1}^{\dim \mathcal{H}_l(S^{d-1})} \big|a_j^{(l)}(f)\big|^2\Big)^{\frac{1}{2}} = \Big(\sum_{l=0}^{\infty} \|Y_l(f)\|_{2, S^{d-1}}^2\Big)^{\frac{1}{2}}, \quad f \in L^2(S^{d-1}). \tag{18}$$

2.1. Density

We first give a general discrimination method for density.
Proposition 1
(see Lemma 1 in Chapter 18 of [67]). For a subset V in a normed linear space E, the following two properties are equivalent:
(a) V is fundamental in E (that is, its linear span is dense in E).
(b) $V^{\perp} = \{0\}$ (that is, 0 is the only element of $E^{*}$ that annihilates $V$).
Based on this proposition, we can show the density of $\Delta_\phi^{S^{d-1}}$ in a qualitative way.
Theorem 1.
Let $\phi \in L^2_{W_\eta}$ satisfy $\widehat{a_l^\eta(\phi)} > 0$ for all $l = 0, 1, 2, \ldots$. Then, $\Delta_\phi^{S^{d-1}}$ is dense in $L^2(S^{d-1})$.
Proof. 
See the proof in Section 5. □
We can quantitatively show the density of $\Delta_\phi^{S^{d-1}}$ in $L^\infty(S^{d-1})$.
Let $C([-1, 1]) = L^\infty_{W_\eta}$ denote the set of all continuous functions defined on $[-1, 1]$, with
$$\|\phi\|_{\infty, W_\eta} = \|\phi\|_{C([-1, 1])} = \sup_{x \in [-1, 1]} |\phi(x)|.$$
Define the differential operator $P_\eta(D)$ by
$$P_\eta(D) = -W_\eta(x)^{-1}\, \frac{d}{dx}\Big[ W_\eta(x)\, (1 - x^2)\, \frac{d}{dx} \Big]$$
and $P_\eta(D)^l = P_\eta(D)\big(P_\eta(D)^{l-1}\big)$, $l = 1, 2, \ldots$.
Theorem 2.
Let $\phi \in L^2_{W_\eta}$ be sufficiently smooth (for example, $P_\eta(D)^l \phi \in C([-1, 1])$ for all $l \geq 1$) and satisfy $\widehat{a_l^\eta(\phi)} > 0$ for all $l = 0, 1, 2, \ldots$. Then, for a given $f \in L^\infty(S^{d-1})$ and $\varepsilon > 0$, there is a $g \in \Delta_\phi^{S^{d-1}}$ such that
$$\|f - g\|_{\infty, S^{d-1}} < \varepsilon. \tag{19}$$
Proof. 
See the proof in Section 5. □

2.2. MZS on the Unit Sphere

We first restate a proposition.
Proposition 2.
There are positive constants $N_d$ and $c$ such that, for any given $n \geq N_d$, there exist a finite subset $\Omega(n) \subset S^{d-1}$ and two nonnegative number sets $\{\mu_k^{(n)} : x_k \in \Omega(n)\}$ and $\{A_k^{(n)} : x_k \in \Omega(n)\}$ satisfying
$$\max_{x_k \in \Omega(n)} \frac{\mu_k^{(n)}}{A_k^{(n)}} \leq c, \tag{20}$$
such that
$$\int_{S^{d-1}} f(x)\, d\sigma(x) = \sum_{x_k \in \Omega(n)} \mu_k^{(n)}\, f(x_k), \quad f \in \mathcal{H}_n(S^{d-1}), \tag{21}$$
and
$$\int_{S^{d-1}} |f(x)|^p\, d\sigma(x) \asymp \sum_{x_k \in \Omega(n)} A_k^{(n)}\, |f(x_k)|^p, \quad f \in \mathcal{H}_n(S^{d-1}),\ 1 \leq p < +\infty. \tag{22}$$
Moreover, for any $m \geq n$ and $p \geq 1$, there exists a constant $c_{p,d} > 0$ such that
$$\sum_{x_k \in \Omega(n)} A_k^{(n)}\, |f(x_k)|^p \leq c_{p,d} \Big(\frac{m}{n}\Big)^{d-1} \int_{S^{d-1}} |f(x)|^p\, d\sigma(x), \quad f \in \mathcal{H}_m(S^{d-1}), \tag{23}$$
and
$$\big|\{x_k \in \Omega(n) : \mu_k^{(n)} \neq 0\}\big| \leq \dim \mathcal{H}_n(S^{d-1}). \tag{24}$$
Proof. 
Inequalities (21) and (22) were proved by H.N. Mhaskar et al. in [42] (see also [29]) and have now been extended to other domains (see [68]). Inequality (24) may be found in [29]. Inequality (23) follows from (22) and the following fact (see Theorem 2.1 in [43]):
Suppose that $\Omega$ is a finite subset of $S^{d-1}$, $\{\mu_\omega : \omega \in \Omega\}$ is a set of positive numbers, and $n$ is a positive integer. If for some $0 < p_0 < +\infty$ the inequality
$$\sum_{\omega \in \Omega} \mu_\omega\, |f(\omega)|^{p_0} \leq C_1 \int_{S^{d-1}} |f(x)|^{p_0}\, d\sigma(x), \quad f \in \mathcal{H}_n(S^{d-1}),$$
holds with $C_1$ independent of $f$, then for any $0 < p < +\infty$ and any $f \in \mathcal{H}_m(S^{d-1})$ with $m \geq n$,
$$\sum_{\omega \in \Omega} \mu_\omega\, |f(\omega)|^{p} \leq C_{d,p} \Big(\frac{m}{n}\Big)^{d-1} \int_{S^{d-1}} |f(x)|^{p}\, d\sigma(x), \quad f \in \mathcal{H}_m(S^{d-1}),$$
where $C_{d,p} > 0$ depends only on $d$ and $p$.
Inequalities (21) and (22) show the existence of $\Omega(n) \subset S^{d-1}$ that has the polynomially exact QR (9) and satisfies the MZ inequality (10). Inequality (23) often accompanies (21) and (22) and is needed when discussing the approximation order (see [63]). □
Proposition 3.
For any given $n \geq 1$, there exists a finite subset $\Omega(n) \subset S^{d-1}$ that forms an MZS on $S^{d-1}$.
Proof. 
The results follow from (21)–(23). □

2.3. The Reproducing Property

Let $\phi \in C([-1, 1])$ be a given even function. For a given finite set $\Omega(n) \subset S^{d-1}$, i.e., $|\Omega(n)| < +\infty$, and the corresponding finite number set $\{\mu_k^{(n)} : x_k \in \Omega(n)\}$, we define a zonal translation network as
$$H_{\Omega(n)}^{\phi} := \mathrm{cl}\,\Big\{ f(x) = \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k) + c_0 : c_k \in \mathbb{R},\ k = 0, 1, 2, \ldots, |\Omega(n)| \Big\},$$
where $T_x(\phi)(y) = \phi(x \cdot y)$. Then, it is easy to see that
$$H_{\Omega(n)}^{\phi} = H_{\phi}^{(n)} \oplus \mathbb{R},$$
where for $A, B \subset \mathbb{R}$ with $A \cap B = \{0\}$ we define $A \oplus B = \{a + b : a \in A,\ b \in B\}$, and
$$H_{\phi}^{(n)} := \mathrm{cl}\,\Big\{ f(x) = \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k) : c_k \in \mathbb{R},\ k = 1, 2, \ldots, |\Omega(n)| \Big\}.$$
For $f(x) = \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k) \in H_{\phi}^{(n)}$ and $g(x) = \sum_{x_k \in \Omega(n)} d_k\, \mu_k^{(n)}\, T_x(\phi)(x_k) \in H_{\phi}^{(n)}$, we define a bilinear form by
$$\langle f, g \rangle_{\phi} = \sum_{x_k \in \Omega(n)} c_k\, d_k\, \mu_k^{(n)}$$
and
$$\|f\|_{\phi} = \Big(\sum_{x_k \in \Omega(n)} \mu_k^{(n)}\, |c_k|^2\Big)^{\frac{1}{2}}.$$
Because of (15), we see by Theorem 4 in Chapter 17 of [67] that the matrix $\big(T_{x_i}(\phi, x_j)\big)_{i,j = 1, 2, \ldots, |\Omega(n)|}$ is positive definite for a given $n$.
It follows that the coefficient vector $c = \{\mu_k^{(n)} c_k\}_{x_k \in \Omega(n)}$ is unique. Then, for a given $n$, $(H_{\phi}^{(n)}, \|\cdot\|_{\phi})$ is a finite-dimensional Hilbert space whose dimension is $\dim H_{\phi}^{(n)} = |\Omega(n)| \asymp n^{d-1}$, and it is isometrically isomorphic to $l^2(\Omega(n))$, where
$$l^2(\Omega(n)) = \Big\{ c = \{\mu_k^{(n)} c_k\}_{x_k \in \Omega(n)} : \|c\|_{l^2(\Omega(n))} = \Big(\sum_{x_k \in \Omega(n)} \mu_k^{(n)}\, |c_k|^2\Big)^{\frac{1}{2}} < +\infty \Big\}.$$
Since $H_{\phi}^{(n)}$ is a finite-dimensional Hilbert space, we know by Theorem A in Section 3 of Part I of [69] that $H_{\phi}^{(n)}$ must be a reproducing kernel Hilbert space; what remains is to find the reproducing kernel.
We have a proposition.
Proposition 4.
Let $\phi \in C([-1, 1])$ satisfy $P_\eta(D)^l \phi \in C([-1, 1])$ and
$$\Big(\sum_{x_k \in \Omega(n)} |\mu_k^{(n)}|\Big)^{\frac{1}{2}} = O(n^{\alpha}), \quad 2l > \alpha \geq 0. \tag{25}$$
If $\Omega(n) \subset S^{d-1}$ satisfies the MZ condition (23) with $A_k^{(n)} = |\mu_k^{(n)}|$, then $(H_{\phi}^{(n)}, \|\cdot\|_{\phi})$ is a finite-dimensional reproducing kernel Hilbert space with kernel
$$K_x^{*}(\phi)(y) = K^{*}(\phi, x, y) = \sum_{x_k \in \Omega(n)} \mu_k^{(n)}\, T_{x_k}(\phi, x)\, T_{x_k}(\phi, y), \tag{26}$$
i.e.,
$$f(x) = \big\langle f, K_x^{*}(\phi)(\cdot) \big\rangle_{\phi}, \quad x \in S^{d-1},\ f \in H_{\phi}^{(n)}, \tag{27}$$
and there is a constant $k^{*} > 0$ such that
$$|f(x)| \leq k^{*}\, \|f\|_{\phi}, \quad f \in H_{\phi}^{(n)},\ x \in S^{d-1}. \tag{28}$$
Proof. 
See the proof in Section 5. □
Corollary 1.
Under the assumptions of Proposition 4, $H_{\Omega(n)}^{\phi}$ is a finite-dimensional reproducing kernel Hilbert space with respect to the inner product defined by
$$\langle f, g \rangle_{H_{\Omega(n)}^{\phi}} = \langle f_1, g_1 \rangle_{\phi} + c_0 d_0,$$
where
$$f(x) = f_1(x) + c_0, \quad g(x) = g_1(x) + d_0,$$
$$f_1(x) = \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k), \quad g_1(x) = \sum_{x_k \in \Omega(n)} d_k\, \mu_k^{(n)}\, T_x(\phi)(x_k),$$
and the corresponding reproducing kernel $K_x(\phi)(y)$ is
$$K_x(\phi)(y) = K_x^{*}(\phi)(y) + 1, \quad x, y \in S^{d-1}. \tag{29}$$
Furthermore, there is a constant $\kappa > 0$ such that
$$|f(x)| \leq \kappa\, \|f\|_{H_{\Omega(n)}^{\phi}}, \quad f \in H_{\Omega(n)}^{\phi},\ x \in S^{d-1}. \tag{30}$$
Proof. 
The results follow from Proposition 4 together with the fact that the real line $\mathbb{R}$ is a reproducing kernel Hilbert space whose reproducing kernel is 1 and whose inner product is the usual product of two real numbers. □
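To illustrate the kernel (29) concretely, the following sketch (our own example, with uniform weights $\mu_k^{(n)} = \omega_2 / |\Omega(n)|$ on a random node set used as a stand-in for a genuine MZS, and $\phi(t) = e^t$) assembles the Gram matrix of $K(\phi, x, y) = \sum_k \mu_k^{(n)} \phi(x_k \cdot x)\, \phi(x_k \cdot y) + 1$ on $S^2$ and checks that it is symmetric positive semi-definite.

```python
import numpy as np

def random_sphere_points(m, rng):
    p = rng.standard_normal((m, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

rng = np.random.default_rng(1)
phi = np.exp                                       # zonal profile phi(t) = e^t (positive coefficients)
nodes = random_sphere_points(40, rng)              # stand-in for an MZS node set Omega(n)
mu = np.full(len(nodes), 4 * np.pi / len(nodes))   # stand-in weights mu_k^(n)

def kernel(X, Y):
    # K(phi, x, y) = sum_k mu_k * phi(x_k . x) * phi(x_k . y) + 1
    Tx = phi(nodes @ X.T)                          # shape (|Omega(n)|, |X|)
    Ty = phi(nodes @ Y.T)
    return (mu[:, None] * Tx).T @ Ty + 1.0

X = random_sphere_points(25, rng)
G = kernel(X, X)
print(np.allclose(G, G.T), np.min(np.linalg.eigvalsh(G)) > -1e-10)   # symmetric, PSD
```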
Corollary 2.
Let $\Omega(n) \subset S^{d-1}$ be defined as in Proposition 2 and $\phi \in L^\infty_{W_\eta}$. Then, $H_{\Omega(n)}^{\phi}$ is a finite-dimensional reproducing kernel Hilbert space associated with the kernel (29), and inequality (30) holds.
Proof. 
See the proof in Section 5. □

3. Application to Kernel Regularized Regression

We now apply the above reproducing kernel Hilbert spaces to kernel regularized regression.

3.1. Learning Framework

For a set of observations $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$ drawn i.i.d. according to a joint distribution $\rho(x, y)$ on $Z = S^{d-1} \times Y$, where $Y = [-M, M]$ with a given real number $M > 0$ and $\rho(x, y) = \rho(y \mid x)\, \rho_{S^{d-1}}(x)$, we define the regularized framework
$$f_{\mathbf{z},\lambda}^{\Omega(n)} := \arg\min_{f \in H_{\Omega(n)}^{\phi}} \Big\{ \mathcal{E}_{\mathbf{z}}(f) + \frac{\lambda}{2}\, \|f\|_{H_{\Omega(n)}^{\phi}}^2 \Big\}, \tag{31}$$
where $\lambda = \lambda(m) \to 0$ ($m \to +\infty$) is the regularization parameter and
$$\mathcal{E}_{\mathbf{z}}(f) = \frac{1}{m} \sum_{i=1}^{m} \big(y_i - f(x_i)\big)^2.$$
It can be seen that the $n$ in (31) may be different from the sample number $m$; it can be chosen according to our needs for the purpose of increasing the learning rate.
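Since $H_{\Omega(n)}^{\phi}$ is finite-dimensional, (31) is a finite-dimensional ridge-type problem in the coefficients $(c_0, c_1, \ldots)$; the sketch below (our own illustration, using a hypothetical random node set with uniform stand-in weights and $\phi(t) = e^t$) solves it via the normal equations.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere_points(m):
    p = rng.standard_normal((m, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

phi = np.exp
nodes = sphere_points(40)                          # stand-in for Omega(n)
mu = np.full(len(nodes), 4 * np.pi / len(nodes))   # stand-in weights mu_k^(n)

# Samples z = {(x_i, y_i)}: a smooth target on S^2 plus noise.
X = sphere_points(200)
y = np.cos(3 * X[:, 2]) + 0.05 * rng.standard_normal(len(X))

# f(x) = sum_k c_k mu_k phi(x_k . x) + c_0,  ||f||^2 = sum_k mu_k c_k^2 + c_0^2.
Phi = np.hstack([mu * phi(X @ nodes.T), np.ones((len(X), 1))])   # design matrix; last column is c_0
D = np.diag(np.append(mu, 1.0))                                  # norm-defining diagonal matrix
lam, m = 1e-3, len(y)

# Normal equations of (1/m)||y - Phi c||^2 + (lam/2) c^T D c:
coef = np.linalg.solve((2.0 / m) * Phi.T @ Phi + lam * D, (2.0 / m) * Phi.T @ y)
predict = lambda Xnew: np.hstack([mu * phi(Xnew @ nodes.T), np.ones((len(Xnew), 1))]) @ coef
print(np.sqrt(np.mean((predict(X) - y) ** 2)))    # training RMSE
```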
The general integral model of (31) is
$$f_{\rho,\lambda}^{\Omega(n)} := \arg\min_{f \in H_{\Omega(n)}^{\phi}} \Big\{ \mathcal{E}_{\rho}(f) + \frac{\lambda}{2}\, \|f\|_{H_{\Omega(n)}^{\phi}}^2 \Big\}, \tag{32}$$
where
$$\mathcal{E}_{\rho}(f) = \int_{Z} \big(y - f(x)\big)^2\, d\rho.$$
To carry out the convergence analysis of (31), we need to bound the error
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})},$$
which is an approximation problem whose convergence rate depends upon the approximation ability of $H_{\Omega(n)}^{\phi}$. An error decomposition is given in Section 3.2.

3.2. Error Decompositions

By (2) and the definition of $f_{\rho,\lambda}^{\Omega(n)}$, we have
$$
\begin{aligned}
\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})}
&\leq \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{L^2(\rho_{S^{d-1}})} + \|f_{\rho,\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})} \\
&= \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{L^2(\rho_{S^{d-1}})} + \Big( \mathcal{E}_\rho(f_{\rho,\lambda}^{\Omega(n)}) - \mathcal{E}_\rho(f_\rho) \Big)^{\frac{1}{2}} \\
&\leq \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{L^2(\rho_{S^{d-1}})} + \Big( \mathcal{E}_\rho(f_{\rho,\lambda}^{\Omega(n)}) - \mathcal{E}_\rho(f_\rho) + \frac{\lambda}{2}\|f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2 \Big)^{\frac{1}{2}} \\
&\leq \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{L^2(\rho_{S^{d-1}})} + \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})},
\end{aligned}
\tag{33}
$$
where we have used the fact that for $a > 0$, $b > 0$, $c > 0$, and $0 < p \leq 1$ there holds
$$(a + b + c)^p \leq a^p + b^p + c^p,$$
and
$$\mathcal{D}_{\Omega(n)}(f_\rho, \lambda) = \inf_{g \in H_{\Omega(n)}^{\phi}} \Big\{ \|g - f_\rho\|_{L^2(\rho_{S^{d-1}})} + \sqrt{\frac{\lambda}{2}}\, \|g\|_{H_{\Omega(n)}^{\phi}} \Big\}$$
is a K-functional that describes the approximation error, whose decay will be estimated later. So, the main quantity that we need to estimate is the sample error
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{L^2(\rho_{S^{d-1}})}.$$

3.3. Convergence Rate for the K-Functional

We first provide a convergence rate for the K-functional $\mathcal{D}_{\Omega(n)}(f, \lambda)$.
Proposition 5.
Let $\phi \in L^\infty_{W_\eta}$ satisfy $\widehat{a_l^\eta(\phi)} > 0$ for all $l \geq 0$, and let $N \geq n$ be a positive integer. Then, there is an $\Omega(2(N+n)) \subset S^{d-1}$ that forms an MZS on $S^{d-1}$ such that
$$\mathcal{D}_{\Omega(2(N+n))}(f, \lambda)_{L^2(\rho_{S^{d-1}})} = O\Big( \frac{n^{\frac{1}{2}}\, E_N(\phi)_{2, W_\eta}\, \|f\|_{2, S^{d-1}}}{\phi_n} + E_n(f)_{2, S^{d-1}} + \frac{n^{\frac{1}{2}}\, \sqrt{\lambda}\, \|f\|_{2, S^{d-1}}}{\phi_n} \Big), \tag{34}$$
where $\phi_n = \min_{0 \leq l \leq 2n} \widehat{a_l^\eta(\phi)}$.
Proof. 
See the proof in Section 5. □
Corollary 3.
Let $\phi \in L^\infty_{W_\eta}$ satisfy $\widehat{a_l^\eta(\phi)} > 0$ for all $l \geq 0$, and suppose that for a given $l$ there holds $P_\eta(D)^l \phi \in C([-1, 1])$. If $m = m(n)$ and $N \geq n$ are chosen such that $\frac{n^{1/2}}{N^{2l}\, \phi_n} \to 0$ and $\frac{n^{1/2}\, \sqrt{\lambda}}{\phi_n} \to 0$ as $n \to +\infty$, then
$$\mathcal{D}_{\Omega(2(N+n))}(f, \lambda)_{L^2(\rho_{S^{d-1}})} \to 0, \quad n \to +\infty. \tag{35}$$

3.4. The Learning Rate

Theorem 3.
Let $\Omega(n) \subset S^{d-1}$ satisfy (25) and $\gamma = \big(\int_Z y^2\, d\rho(x, y)\big)^{\frac{1}{2}} < +\infty$.
If $\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} > M\sqrt{\lambda}$, then for any $\delta \in (0, 1)$, with confidence $1 - \delta$, there holds
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}} \leq \Big( \frac{\kappa \gamma}{\lambda \sqrt{m}} + \frac{2\kappa^2\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}}{\lambda^{\frac{3}{2}}\, m} \Big) \log\frac{2}{\delta}. \tag{36}$$
If $\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} \leq M\sqrt{\lambda}$, then for any $\delta \in (0, 1)$, with confidence $1 - \delta$, there holds
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}} \leq \frac{2(M + \gamma)}{\lambda \sqrt{m}} \log\frac{2}{\delta}. \tag{37}$$
Proof. 
See the proof in Section 5. □
Corollary 4.
Let $\Omega(n) \subset S^{d-1}$ satisfy (25) and $\gamma = \big(\int_Z y^2\, d\rho(x, y)\big)^{\frac{1}{2}} < +\infty$.
If $\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} > M\sqrt{\lambda}$, then for any $\delta \in (0, 1)$, with confidence $1 - \delta$, there holds
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})} \leq \Big( \frac{\kappa \gamma}{\lambda \sqrt{m}} + \frac{2\kappa^2\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}}{\lambda^{\frac{3}{2}}\, m} \Big) \log\frac{2}{\delta} + \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}. \tag{38}$$
If $\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} \leq M\sqrt{\lambda}$, then for any $\delta \in (0, 1)$, with confidence $1 - \delta$, there holds
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})} \leq \frac{2(M + \gamma)}{\lambda \sqrt{m}} \log\frac{2}{\delta} + \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}. \tag{39}$$
Proof. 
See the proof in Section 5. □
Corollary 5.
Let $\phi \in L^\infty_{W_\eta}$ satisfy $\widehat{a_l^\eta(\phi)} > 0$ for all $l \geq 0$, and let $\Omega(2(N+n)) \subset S^{d-1}$ be the MZS defined as in Proposition 5. If $f_\rho \in H_{\Omega(n)}^{\phi}$, then for any $\delta \in (0, 1)$, with confidence $1 - \delta$, there holds
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})} = O\Big( \frac{\log\frac{2}{\delta}}{\lambda \sqrt{m}} \Big) + O\Big( \frac{n^{\frac{1}{2}}\, E_N(\phi)_{2, W_\eta}\, \|f_\rho\|_{2, S^{d-1}}}{\phi_n} + E_n(f_\rho)_{2, S^{d-1}} + \frac{n^{\frac{1}{2}}\, \sqrt{\lambda}\, \|f_\rho\|_{2, S^{d-1}}}{\phi_n} \Big). \tag{40}$$
Proof.
In this case, $\mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} = O(\sqrt{\lambda})$. We then obtain (40) from (39) and (34). □

3.5. Comments

We have proposed the concept of the MZS for a scattered node set on the unit sphere, with which we have shown that the related convolutional zonal translation network is a reproducing kernel Hilbert space, and we have established the learning rate for the kernel regularized least squares regression model. We now give further comments on the results.
(1) The zonal translation network that we have chosen is a finite-dimensional reproducing kernel Hilbert space; our discussion therefore belongs to the scope of the kernel method, which combines (neural) translation networks with learning theory.
(2) Compared with existing convergence rate estimates for neural network learning, our upper estimates are dimension-independent (see the Theorem in [25], Theorem 3.1 in [70], Theorem 7 in [71], and Theorem 1 in [26]).
(3) The density derivation in Theorem 1 for the zonal translation network is qualitative; the density deduction in Theorem 2 is quantitative, with the help of spherical Fourier analysis. We think that this method can be extended to other domains, such as the unit ball, the Euclidean space $\mathbb{R}^d$, and $\mathbb{R}_+ = [0, +\infty)$.
(4) It is hoped that, with the help of the MZ condition, one may show the reproducing property for deep translation networks, and thus investigate the performance of deep convolutional translation learning with the kernel method (see [6,7,33]).
(5) We provide a method of constructing a finite-dimensional reproducing kernel Hilbert space with a convolutional kernel on domains that admit a near-best-approximation operator, for example, the interval $[-1, 1]$, the unit sphere $S^{d-1}$, and the unit ball $B^d$ (see [64]). The only assumption that we need is (25). The set $\Omega(n) \subset \Omega$ may be any finite scattered set satisfying the MZ condition (23), whose parameters $A_k^{(n)}$ can be obtained according to (5.3.5) in Theorem 5.3.6 of [64]. As far as we know, this is the first time that the reproducing property of the zonal neural network has been shown.
(6) In much of the literature, to obtain an explicit learning rate, one often assumes that the approximation error (i.e., the K-functional) has a power-type decay, i.e., $\mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} = O(\lambda^{\beta})$, $\lambda \to 0^+$, $0 < \beta \leq 1$. It was proved in [72] that the K-functional is equivalent to a modulus of smoothness. In this paper, an upper estimate for the convergence rate of the K-functional is provided for the first time (see (34)).
(7) One advantage of framework (31) is that it is a finite-dimensional, quadratic, strictly convex optimization problem; because of the structure of $H_{\Omega(n)}^{\phi}$, the optimal solution of (31) is unique and can be obtained by the gradient descent algorithm, as sketched below.
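A minimal sketch of such a gradient descent on the coefficient vector (our own illustration; the node set, weights, step size, and $\phi$ are hypothetical stand-ins) is the following; it iterates on the same finite-dimensional objective that the normal-equation sketch in Section 3.1 solves directly.

```python
import numpy as np

rng = np.random.default_rng(3)
nodes = rng.standard_normal((40, 3)); nodes /= np.linalg.norm(nodes, axis=1, keepdims=True)
mu = np.full(40, 4 * np.pi / 40)                       # stand-in weights mu_k^(n)
X = rng.standard_normal((200, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.cos(3 * X[:, 2]) + 0.05 * rng.standard_normal(200)

Phi = np.hstack([mu * np.exp(X @ nodes.T), np.ones((200, 1))])  # features for f(x_i); last column is c_0
D = np.append(mu, 1.0)                                          # diagonal of the norm-defining matrix
lam, step, m = 1e-3, 5e-2, len(y)

c = np.zeros(Phi.shape[1])
for _ in range(2000):
    grad = -(2.0 / m) * Phi.T @ (y - Phi @ c) + lam * D * c     # gradient of (1/m)||y - Phi c||^2 + (lam/2) c^T diag(D) c
    c -= step * grad
print(np.sqrt(np.mean((Phi @ c - y) ** 2)))                     # training RMSE after gradient descent
```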
(8) It is easy to see that the optimal solution $f_{\rho,\lambda}^{\Omega(n)}$ depends upon both the distribution $\rho$ and the function $\phi$. How to quantitatively describe the level of this influence, i.e., the robustness of $f_{\rho,\lambda}^{\Omega(n)}$ with respect to $\rho$ and $\phi$, is a significant research direction; for this kind of research, one can refer to [73,74,75].
(9) Combining the upper estimate (40) and the convergence (35), we know that if $m = m(n)$ and $N \geq n$ are chosen such that $\frac{n^{1/2}}{N^{2l}\, \phi_n} \to 0$ and $\frac{n^{1/2}\, \sqrt{\lambda}}{\phi_n} \to 0$ as $n \to +\infty$, then for any $\delta \in (0, 1)$, with confidence $1 - \delta$, there holds the convergence
$$\lim_{n \to +\infty} \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_\rho\|_{L^2(\rho_{S^{d-1}})} = 0, \quad f_\rho \in H_{\Omega(n)}^{\phi}. \tag{41}$$
Convergence (41) shows that, under these assumptions, algorithm (31) is convergent.
(10) We now compare the learning framework of the present paper with the general learning framework (1) associated with a reproducing kernel Hilbert space $H_K$. Theorem 1 in [76] gives the sample error bound
$$\|f_{\mathbf{z},\lambda}^{(H_K)} - f_{\rho,\lambda}^{(H_K)}\|_{K} \leq \frac{6 c M \log(2/\delta)}{\lambda \sqrt{m}}. \tag{42}$$
Inequality (42) is a fundamental inequality for obtaining the optimal learning rate with the integral operator approach (see Theorem 2 in [76]). Inequality (37) in Theorem 3 shows that a sample error estimate of the same form as (42) also holds for (31). However, $H_{\Omega(n)}^{\phi}$ is a finite-dimensional proper subset of $\Delta_\phi^{S^{d-1}}$, which itself is a reproducing kernel Hilbert space. This suggests that the learning framework (1) can achieve algorithmic convergence as well as the optimal learning rate if $n$, $N$, and $\phi$ are chosen properly.
(11) In this paper, we have presented our approach to kernel regularized translation network learning with the zonal translation network. The essence is an application of the MZ inequality and the exact QR, i.e., of the MZ condition and the MZS. We conjecture that the results of the present paper may be extended to many other translation networks whose domains satisfy the MZ condition and the MZS, for example, the periodic translation network (see [27]) and the translation networks on the interval $[-1, 1]$ (see [30]), on the unit sphere $S^{d-1}$ (see [29]), and on the unit ball $B^d$ (see [62]).
(12) Recently, exact spherical QRs have been used to investigate the convergence of the spherical scattered data fitting problem (see [25,77,78]). The Tikhonov regularization model used there is of the following type:
$$f_{\Lambda(m),\lambda}^{(N_\phi)} := \arg\min_{f \in N_\phi} \Big\{ \sum_{z_k \in \Lambda(m)} \mu_k^{(m)} \big(f(z_k) - y_k\big)^2 + \lambda\, \|f\|_{N_\phi}^2 \Big\},$$
where $N_\phi$ is a native space of the type $\Delta_\phi^{S^{d-1}}$, $\Lambda(m) = \{z_k\}_{z_k \in \Lambda(m)} \subset S^{d-1}$ is a scattered set, $y_k = f^{*}(z_k) + \varepsilon_k$ ($z_k \in \Lambda(m)$), the $\mu_k^{(m)}$ are the positive numbers given by the polynomially exact QR as in (21), and $f^{*}(x)$ is the target function to be fitted. It is hoped that the method used in the present paper may be applied to investigate the convergence of the algorithm
$$f_{\Lambda(m),\lambda}^{\Omega(n)} := \arg\min_{f \in H_{\Omega(n)}^{\phi}} \Big\{ \sum_{z_k \in \Lambda(m)} \mu_k^{(m)} \big(f(z_k) - y_k\big)^2 + \lambda\, \|f\|_{H_{\Omega(n)}^{\phi}}^2 \Big\},$$
where $\Omega(n)$ and $H_{\Omega(n)}^{\phi}$ are defined as in Section 2.

4. Lemmas

To give a capacity-independent generalization error bound for algorithm (31), we need some concepts from convex analysis.
Gâteaux differentiability. Let $(H, \|\cdot\|_H)$ be a Hilbert space and let $F(f): H \to \mathbb{R} \cup \{+\infty\}$ be a real function. We say that $F$ is Gâteaux differentiable at $f_0 \in H$ if there is a $\xi \in H$ such that for any $g \in H$, there holds
$$\lim_{t \to 0} \frac{F(f_0 + t g) - F(f_0)}{t} = \langle g, \xi \rangle_H,$$
and we write $F_G'(f_0) = \xi$ or $\nabla_f F(f)\big|_{f = f_0} = \xi$. It is known that, for a differentiable convex function $F(f)$ on $H$, $f_0 = \arg\min_{f \in H} F(f)$ if and only if $\big(\nabla_f F(f)\big)\big|_{f = f_0} = 0$ (see Proposition 17.4 in [79]).
To prove the main results, we need some lemmas.
Lemma 1.
Let $(H, \|\cdot\|_H)$ be a Hilbert space, let $\xi$ be a random variable on $(Z, \rho)$ with values in $H$, and let $\{z_i\}_{i=1}^{m}$ be independent samples drawn according to $\rho$. Assume that $\|\xi\|_H \leq \tilde{M} < +\infty$ almost surely. Denote $\sigma^2(\xi) = E(\|\xi\|_H^2)$. Then, for any $0 < \delta < 1$, with confidence $1 - \delta$, there holds
$$\Big\| \frac{1}{m} \sum_{i=1}^{m} \xi(z_i) - E(\xi) \Big\|_H \leq \frac{2\tilde{M} \log\frac{2}{\delta}}{m} + \sqrt{\frac{2\sigma^2(\xi) \log\frac{2}{\delta}}{m}}. \tag{43}$$
Proof. 
This can be found in [76]. □
Lemma 2.
Let $(H, \|\cdot\|_H)$ be a Hilbert space of functions on $X$ with reproducing kernel $K$. If $(E, \|\cdot\|_E)$ and $(F, \|\cdot\|_F)$ are closed subspaces of $H$ such that $E \perp F$ and $E \oplus F = H$, then $K = L + M$, where $L$ and $M$ are the reproducing kernels of $E$ and $F$, respectively. Moreover, for $h = e + f$, $e \in E$, $f \in F$, we have
$$\|h\|_H = \big(\|e\|_E^2 + \|f\|_F^2\big)^{\frac{1}{2}}.$$
Proof.
See Corollary 1 in Chapter 31 of [67] or the Theorem in Section 6 of Part I of [69]. □
Lemma 3.
There hold the following equalities:
$$\nabla_f \mathcal{E}_{\mathbf{z}}(f)(\cdot) = -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f(x_i)\big) K_{x_i}(\phi)(\cdot), \quad f \in H_{\Omega(n)}^{\phi}, \tag{44}$$
and
$$\nabla_f \mathcal{E}_{\rho}(f)(\cdot) = -2 \int_Z \big(y - f(x)\big) K_x(\phi)(\cdot)\, d\rho, \quad f \in H_{\Omega(n)}^{\phi}. \tag{45}$$
Proof of (44). By the equality
$$a^2 - b^2 = 2(a - b)b + (a - b)^2, \quad a, b \in \mathbb{R}, \tag{46}$$
we have
$$\lim_{t \to 0} \frac{\mathcal{E}_{\mathbf{z}}(f + t g) - \mathcal{E}_{\mathbf{z}}(f)}{t} = \lim_{t \to 0} \frac{1}{m} \sum_{i=1}^{m} \frac{(-2) t \big(y_i - f(x_i)\big) g(x_i) + t^2 g(x_i)^2}{t} = -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f(x_i)\big) g(x_i).$$
Since $g(x) = \langle g, K_x(\phi)(\cdot) \rangle_{H_{\Omega(n)}^{\phi}}$, by the definition of the Gâteaux derivative we obtain from the above equality that
$$\lim_{t \to 0} \frac{\mathcal{E}_{\mathbf{z}}(f + t g) - \mathcal{E}_{\mathbf{z}}(f)}{t} = \Big\langle g,\; -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f(x_i)\big) K_{x_i}(\phi)(\cdot) \Big\rangle_{H_{\Omega(n)}^{\phi}}.$$
We then have (44). In the same way, we obtain (45). □
Lemma 4.
Let $(H, \|\cdot\|_H)$ be a Hilbert space consisting of real functions on $X$. Then,
$$\|f\|_H^2 - \|g\|_H^2 = 2\langle f - g, g \rangle_H + \|f - g\|_H^2, \quad f, g \in H, \tag{47}$$
and
$$\nabla_f\big(\|f\|_H^2\big)(\cdot) = 2 f(\cdot), \quad f \in H. \tag{48}$$
Proof.
Equality (47) follows by expanding $\|(f - g) + g\|_H^2$. Equality (48) can be shown with (47). □
Lemma 5.
Framework (31) has a unique solution $f_{\mathbf{z},\lambda}^{\Omega(n)}$ and (32) has a unique solution $f_{\rho,\lambda}^{\Omega(n)}$. Moreover, there holds the bound
$$\|f_{\rho,\lambda}^{\Omega(n)}\|_{C(S^{d-1})} \leq \frac{2\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)}{\sqrt{\lambda}}, \tag{49}$$
where $\kappa$ is defined as in (30). There hold the equalities
$$\lambda\, f_{\mathbf{z},\lambda}^{\Omega(n)}(\cdot) = \frac{2}{m} \sum_{i=1}^{m} \big(y_i - f_{\mathbf{z},\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) \tag{50}$$
and
$$\lambda\, f_{\rho,\lambda}^{\Omega(n)}(\cdot) = 2 \int_Z \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big) K_x(\phi)(\cdot)\, d\rho. \tag{51}$$
Proof. 
Proof of (49). Since $\mathcal{E}_\rho(f_{\rho,\lambda}^{\Omega(n)}) \geq \mathcal{E}_\rho(f_\rho)$, we have by (2) that
$$\frac{\lambda}{2}\, \|f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2 \leq \mathcal{E}_\rho(f_{\rho,\lambda}^{\Omega(n)}) - \mathcal{E}_\rho(f_\rho) + \frac{\lambda}{2}\, \|f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2 = \inf_{g \in H_{\Omega(n)}^{\phi}} \Big\{ \mathcal{E}_\rho(g) - \mathcal{E}_\rho(f_\rho) + \frac{\lambda}{2}\, \|g\|_{H_{\Omega(n)}^{\phi}}^2 \Big\} = \inf_{g \in H_{\Omega(n)}^{\phi}} \Big\{ \|g - f_\rho\|_{L^2(\rho_{S^{d-1}})}^2 + \frac{\lambda}{2}\, \|g\|_{H_{\Omega(n)}^{\phi}}^2 \Big\} \leq \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)^2. \tag{52}$$
By (52) and (30), we have (49).
Proof of (50). By the definition of $f_{\mathbf{z},\lambda}^{\Omega(n)}$ and (48), we have
$$0 = \nabla_f \Big( \mathcal{E}_{\mathbf{z}}(f) + \frac{\lambda}{2}\, \|f\|_{H_{\Omega(n)}^{\phi}}^2 \Big) \Big|_{f = f_{\mathbf{z},\lambda}^{\Omega(n)}},$$
i.e.,
$$0 = \nabla_f \mathcal{E}_{\mathbf{z}}(f)\big|_{f = f_{\mathbf{z},\lambda}^{\Omega(n)}} + \lambda\, \nabla_f \Big( \frac{1}{2}\, \|f\|_{H_{\Omega(n)}^{\phi}}^2 \Big) \Big|_{f = f_{\mathbf{z},\lambda}^{\Omega(n)}} = -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f_{\mathbf{z},\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) + \lambda\, f_{\mathbf{z},\lambda}^{\Omega(n)}(\cdot).$$
Hence, (50) holds. We can prove (51) in the same way. □
Lemma 6.
The solutions $f_{\mathbf{z},\lambda}^{\Omega(n)}$ and $f_{\rho,\lambda}^{\Omega(n)}$ satisfy the inequality
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}} \leq \frac{2\, A(\mathbf{z})}{\lambda}, \tag{53}$$
where
$$A(\mathbf{z}) = \Big\| \int_Z \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big) K_x(\phi)(\cdot)\, d\rho - \frac{1}{m} \sum_{i=1}^{m} \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) \Big\|_{H_{\Omega(n)}^{\phi}}.$$
Proof. 
By (46), we have
$$a^2 - b^2 \geq 2(a - b)b, \quad a, b \in \mathbb{R}.$$
It follows that
$$
\begin{aligned}
\mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},\lambda}^{\Omega(n)}) - \mathcal{E}_{\mathbf{z}}(f_{\rho,\lambda}^{\Omega(n)})
&= \frac{1}{m} \sum_{i=1}^{m} \Big[ \big(y_i - f_{\mathbf{z},\lambda}^{\Omega(n)}(x_i)\big)^2 - \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big)^2 \Big] \\
&\geq -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big)\Big( f_{\mathbf{z},\lambda}^{\Omega(n)}(x_i) - f_{\rho,\lambda}^{\Omega(n)}(x_i) \Big) \\
&= \Big\langle f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)},\; -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) \Big\rangle_{H_{\Omega(n)}^{\phi}},
\end{aligned}
\tag{54}
$$
where we have used the reproducing property
$$f_{\mathbf{z},\lambda}^{\Omega(n)}(x_i) - f_{\rho,\lambda}^{\Omega(n)}(x_i) = \big\langle f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)},\; K_{x_i}(\phi)(\cdot) \big\rangle_{H_{\Omega(n)}^{\phi}}.$$
By the definition of $f_{\mathbf{z},\lambda}^{\Omega(n)}$, we have
$$0 \geq \mathcal{E}_{\mathbf{z}}(f_{\mathbf{z},\lambda}^{\Omega(n)}) + \frac{\lambda}{2}\, \|f_{\mathbf{z},\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2 - \mathcal{E}_{\mathbf{z}}(f_{\rho,\lambda}^{\Omega(n)}) - \frac{\lambda}{2}\, \|f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2.$$
On the other hand, by the above inequality, (54), and (47), we have
$$0 \geq \Big\langle f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)},\; -\frac{2}{m} \sum_{i=1}^{m} \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) \Big\rangle_{H_{\Omega(n)}^{\phi}} + \Big\langle f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)},\; \lambda\, f_{\rho,\lambda}^{\Omega(n)} \Big\rangle_{H_{\Omega(n)}^{\phi}} + \frac{\lambda}{2}\, \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2.$$
Because of (51), we have
$$0 \geq 2\Big\langle f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)},\; \int_Z \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big) K_x(\phi)(\cdot)\, d\rho - \frac{1}{m} \sum_{i=1}^{m} \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) \Big\rangle_{H_{\Omega(n)}^{\phi}} + \frac{\lambda}{2}\, \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2.$$
By the Cauchy–Schwarz inequality, we have
$$\frac{\lambda}{2}\, \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2 \leq 2\Big\langle f_{\rho,\lambda}^{\Omega(n)} - f_{\mathbf{z},\lambda}^{\Omega(n)},\; \int_Z \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big) K_x(\phi)(\cdot)\, d\rho - \frac{1}{m} \sum_{i=1}^{m} \big(y_i - f_{\rho,\lambda}^{\Omega(n)}(x_i)\big) K_{x_i}(\phi)(\cdot) \Big\rangle_{H_{\Omega(n)}^{\phi}} \leq 2\, A(\mathbf{z})\, \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}.$$
We then have (53). □

5. Proofs of the Theorems and Propositions

Proof of Theorem 1.
Suppose $\Delta_\phi^{S^{d-1}}$ is not dense in $L^2(S^{d-1})$, i.e.,
$$\mathrm{cl}\,\mathrm{span}\big(\Delta_\phi^{S^{d-1}}\big) \neq L^2(S^{d-1}).$$
Then, by (b) in Proposition 1, we know $\big(\mathrm{cl}\,\mathrm{span}(\Delta_\phi^{S^{d-1}})\big)^{\perp} \neq \{0\}$, and there is a nonzero functional $F \in \big(L^2(S^{d-1})\big)^{*}$ such that
$$F(f) = 0, \quad f \in \mathrm{cl}\,\mathrm{span}\big(\Delta_\phi^{S^{d-1}}\big).$$
It follows that $F\big(\phi(\langle \cdot, y\rangle)\big) = 0$ for all $y \in S^{d-1}$. By the Riesz representation theorem, $F$ corresponds to a nonzero $h \in L^2(S^{d-1})$ in such a way that
$$F(f) = \int_{S^{d-1}} f(x)\, h(x)\, d\sigma(x), \quad f \in L^2(S^{d-1}).$$
Consequently, $\int_{S^{d-1}} \phi(x \cdot y)\, h(y)\, d\sigma(y) = 0$ for all $x \in S^{d-1}$, which gives
$$\int_{S^{d-1}} \Big( \int_{S^{d-1}} \phi(x \cdot y)\, h(y)\, d\sigma(y) \Big) h(x)\, d\sigma(x) = 0. \tag{55}$$
Combining (55) with (15), we have
$$\sum_{l=0}^{+\infty} \widehat{a_l^\eta(\phi)} \sum_{k=1}^{\dim \mathcal{H}_l(S^{d-1})} \Big( \int_{S^{d-1}} h(y)\, Y_k^l(y)\, d\sigma(y) \Big)^2 = 0.$$
Since $\widehat{a_l^\eta(\phi)} > 0$ for all $l$, it follows that $\int_{S^{d-1}} h(y)\, Y_k^l(y)\, d\sigma(y) = 0$ for all $l \geq 0$ and all $k$. Therefore, $h = 0$, a contradiction. □
Proof of Theorem 2.
For a nonnegative function $\hat{a} \in C(\mathbb{R})$ satisfying (a) $\mathrm{supp}\,\hat{a} \subset [0, 2]$ and $\hat{a}(t) = 1$ for $t \in [0, 1]$, or (b) $\mathrm{supp}\,\hat{a} \subset [\frac{1}{2}, 2]$, we define a near-best-approximation operator $V_n^{(\eta)}(f, x)$ by
$$V_n^{(\eta)}(f, x) = \sum_{l=0}^{+\infty} \hat{a}\Big(\frac{l}{n}\Big)\, \widehat{a_l^\eta(f)}\, \frac{l + \eta}{\eta}\, C_l^\eta(x), \quad x \in [-1, 1].$$
Then, by [80], we know that
$$V_n^{(\eta)}(p, x) = p(x), \quad p \in \mathcal{P}_n,$$
that $V_n^{(\eta)}(f) \in \mathcal{P}_{2n}$, that for any $f \in L^p_{W_\eta}$ there holds $\|V_n^{(\eta)}(f)\|_{p, W_\eta} \leq c\, \|f\|_{p, W_\eta}$,
$$\widehat{a_l^\eta\big(V_n^{(\eta)}(f)\big)} = \widehat{a_l^\eta(f)}, \quad 0 \leq l \leq n,$$
and
$$\|V_n^{(\eta)}(f) - f\|_{p, W_\eta} \leq c\, E_n(f)_{p, W_\eta},$$
where
$$E_n(f)_{p, W_\eta} = \inf_{q \in \mathcal{P}_n} \|f - q\|_{p, W_\eta}, \quad f \in L^p_{W_\eta}.$$
By Theorem 5.4 in [81], we have
$$E_n(f)_{p, W_\eta} \leq C\, K_l\big(f, P_\eta(D), n^{-2l}\big)_{p, W_\eta}, \quad l = 1, 2, \ldots,\ 1 \leq p \leq +\infty, \tag{56}$$
where
$$K_l\big(f, P_\eta(D), \lambda^l\big)_{p, W_\eta} = \inf_{g \in C^{2l}[-1, 1]} \Big\{ \|f - g\|_{p, W_\eta} + \lambda^l\, \big\|P_\eta(D)^l(g)\big\|_{p, W_\eta} \Big\}.$$
By (56), we know that if $P_\eta(D)^l(f) \in C([-1, 1])$, then
$$E_n(f)_{p, W_\eta} = O\Big(\frac{1}{n^{2l}}\Big), \quad l = 1, 2, \ldots,\ 1 \leq p \leq +\infty.$$
In the same way, we define a near-best-approximation operator $V_n(f, x)$ on the sphere by
$$V_n(f, x) = \sum_{l=0}^{+\infty} \hat{a}\Big(\frac{l}{n}\Big)\, Y_l(f, x), \quad x \in S^{d-1},\ f \in L^1(S^{d-1}).$$
Then, it is known (see Lemma 4.1.1 in [66] or Theorem 2.6.3 in [64]) that $V_n(f) \in \mathcal{H}_{2n}(S^{d-1})$, that $V_n(f) = f$ for $f \in \mathcal{H}_n(S^{d-1})$, and that there is a constant $c > 0$ such that for any $f \in L^p(S^{d-1})$
$$\|V_n(f) - f\|_{p, S^{d-1}} \leq c\, E_n(f)_{p, S^{d-1}}, \quad \|V_n(f)\|_{p, S^{d-1}} \leq c\, \|f\|_{p, S^{d-1}}.$$
Since $V_n(f) \in \mathcal{H}_{2n}(S^{d-1})$, we have
$$V_n(f, x) = a_0^{(0)}(V_n(f)) + \sum_{l=1}^{2n} \sum_{j=1}^{\dim \mathcal{H}_l(S^{d-1})} a_j^{(l)}(V_n(f))\, Y_j^l(x), \quad x \in S^{d-1}. \tag{58}$$
On the other hand, by (16), we have for $N \geq 2n$ and $1 \leq l \leq 2n$ that
$$Y_j^l(x) = \frac{1}{\widehat{a_l^\eta\big(V_N^{(\eta)}(\phi)\big)}} \int_{S^{d-1}} V_N^{(\eta)}(\phi, x \cdot y)\, Y_j^l(y)\, d\sigma(y) = \frac{1}{\widehat{a_l^\eta(\phi)}} \int_{S^{d-1}} V_N^{(\eta)}(\phi, x \cdot y)\, Y_j^l(y)\, d\sigma(y), \quad Y_j^l \in \mathcal{H}_l(S^{d-1}).$$
Since $V_N^{(\eta)}(\phi, x \cdot\, \cdot\,)\, Y_j^l(\cdot) \in \mathcal{H}_{2(N+n)}(S^{d-1})$, we have by (21) that
$$\int_{S^{d-1}} V_N^{(\eta)}(\phi, x \cdot y)\, Y_j^l(y)\, d\sigma(y) = \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))}\, V_N^{(\eta)}(\phi, x \cdot x_k)\, Y_j^l(x_k). \tag{59}$$
It follows by (59) and (58) that
$$
\begin{aligned}
V_n(f, x) &= a_0^{(0)}(V_n(f)) + \sum_{l=1}^{2n} \sum_{j=1}^{\dim \mathcal{H}_l(S^{d-1})} a_j^{(l)}(V_n(f))\, Y_j^l(x) \\
&= a_0^{(0)}(V_n(f)) + \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))}\, V_N^{(\eta)}(\phi, x \cdot x_k) \sum_{l=1}^{2n} \frac{\sum_{j=1}^{\dim \mathcal{H}_l(S^{d-1})} a_j^{(l)}(V_n(f))\, Y_j^l(x_k)}{\widehat{a_l^\eta(\phi)}} \\
&= a_0^{(0)}(V_n(f)) + \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))}\, V_N^{(\eta)}(\phi, x \cdot x_k) \sum_{l=1}^{2n} \frac{Y_l(V_n(f), x_k)}{\widehat{a_l^\eta(\phi)}}.
\end{aligned}
$$
Define an operator
$$V_n^{\phi}(f, x) = a_0^{(0)}(V_n(f)) + \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))}\, \phi(x \cdot x_k) \sum_{l=1}^{2n} \frac{Y_l(V_n(f), x_k)}{\widehat{a_l^\eta(\phi)}}.$$
Then, $V_n^{\phi}(f) \in H_{\Omega(2(N+n))}^{\phi} \subset \Delta_\phi^{S^{d-1}}$ and $V_n(f, x) = V_n^{V_N^{(\eta)}(\phi)}(f, x)$.
It follows that
$$\big|V_n^{\phi}(f, x) - f(x)\big| \leq \big|V_n^{\phi}(f, x) - V_n(f, x)\big| + \big|V_n(f, x) - f(x)\big| \leq c\, E_n(f)_{\infty, S^{d-1}} + \big|V_n^{\phi}(f, x) - V_n^{V_N^{(\eta)}(\phi)}(f, x)\big|, \tag{60}$$
where
$$\big|V_n^{\phi}(f, x) - V_n^{V_N^{(\eta)}(\phi)}(f, x)\big| \leq \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \big|\phi(x \cdot x_k) - V_N^{(\eta)}(\phi, x \cdot x_k)\big| \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}} \leq c\, E_N(\phi)_{\infty, [-1,1]} \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}}. \tag{61}$$
Because of (22) and (20), we have
$$\sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}} \leq \Big( \max_{x_k \in \Omega(2(N+n))} \frac{\mu_k^{(2(N+n))}}{A_k^{(2(N+n))}} \Big) \sum_{x_k \in \Omega(2(N+n))} A_k^{(2(N+n))} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}} \leq c \sum_{x_k \in \Omega(2(N+n))} A_k^{(2(N+n))} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}},$$
where we have used the fact that $\max_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))}/A_k^{(2(N+n))} \leq c$. It follows that
$$
\begin{aligned}
\sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}}
&\leq \frac{c}{\phi_n} \sum_{x_k \in \Omega(2(N+n))} A_k^{(2(N+n))} \sum_{l=1}^{2n} |Y_l(V_n(f), x_k)| \leq \frac{c}{\phi_n} \sum_{l=1}^{2n} \int_{S^{d-1}} |Y_l(V_n(f), x)|\, d\sigma(x) \\
&\leq \frac{2 c\, \sqrt{\omega_{d-1}}\, n^{\frac{1}{2}}}{\phi_n} \Big( \sum_{l=1}^{2n} \|Y_l(V_n(f))\|_{2, S^{d-1}}^2 \Big)^{\frac{1}{2}} \leq \frac{2 c\, \sqrt{\omega_{d-1}}\, n^{\frac{1}{2}}}{\phi_n}\, \|V_n(f)\|_{2, S^{d-1}} = O\Big( \frac{n^{\frac{1}{2}}\, \|f\|_{2, S^{d-1}}}{\phi_n} \Big),
\end{aligned}
\tag{62}
$$
where we have used equality (18) and $\phi_n = \min_{1 \leq l \leq 2n} \widehat{a_l^\eta(\phi)}$. Substituting (62) into (61) gives
$$\big|V_n^{\phi}(f, x) - V_n^{V_N^{(\eta)}(\phi)}(f, x)\big| = O\Big( \frac{n^{\frac{1}{2}}\, \|f\|_{2, S^{d-1}}\, E_N(\phi)_{\infty, [-1,1]}}{\phi_n} \Big). \tag{63}$$
Collecting (63) and (60), we have a constant $C > 0$ such that
$$\big|V_n^{\phi}(f, x) - f(x)\big| \leq C \Big( \frac{n^{\frac{1}{2}}\, \|f\|_{2, S^{d-1}}\, E_N(\phi)_{\infty, [-1,1]}}{\phi_n} + E_n(f)_{\infty, S^{d-1}} \Big). \tag{64}$$
Since $\phi_n$ depends upon $n$ and $N \geq n$, we can choose $l$ and $N$ sufficiently large such that
$$\frac{n^{\frac{1}{2}}\, \|f\|_{2, S^{d-1}}\, E_N(\phi)_{\infty, [-1,1]}}{\phi_n} \leq \frac{n^{\frac{1}{2}}\, O\big(\|P_\eta(D)^l \phi\|_{\infty, [-1,1]}\big)\, \|f\|_{2, S^{d-1}}}{N^{2l}\, \phi_n} < \frac{\varepsilon}{2C}. \tag{65}$$
Also, since $E_n(f)_{\infty, S^{d-1}} \to 0$ as $n \to +\infty$, we have for sufficiently large $n$ that
$$E_n(f)_{\infty, S^{d-1}} \leq \frac{\varepsilon}{2C}. \tag{66}$$
Substituting (65) and (66) into (64), we finally arrive at (19). □
Proof of Proposition 4.
By the definition of $\langle\cdot,\cdot\rangle_{\phi}$ and the definition of the kernel $K_x^{*}(\phi, y)$ in (26), we have for any $f(x) = \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k)$ that
$$\langle f, K_x^{*}(\phi, \cdot) \rangle_{\phi} = \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k) = f(x).$$
The reproducing identity (27) then holds. We now show (28). In fact, by the Cauchy–Schwarz inequality, we have
$$|f(x)| = \Big| \sum_{x_k \in \Omega(n)} c_k\, \mu_k^{(n)}\, T_x(\phi)(x_k) \Big| \leq \|c\|_{l^2(\Omega(n))} \Big( \sum_{x_k \in \Omega(n)} |\mu_k^{(n)}|\, |T_x(\phi)(x_k)|^2 \Big)^{\frac{1}{2}} = \|f\|_{\phi} \Big( \sum_{x_k \in \Omega(n)} |\mu_k^{(n)}|\, |\phi(x_k \cdot x)|^2 \Big)^{\frac{1}{2}}.$$
On the other hand, by the Minkowski inequality and inequality (23), we have
$$
\begin{aligned}
\Big( \sum_{x_k \in \Omega(n)} |\mu_k^{(n)}|\, |\phi(x_k \cdot x)|^2 \Big)^{\frac{1}{2}}
&\leq \Big( \sum_{x_k \in \Omega(n)} |\mu_k^{(n)}|\, \big|(\phi - V_n(\phi))(x_k \cdot x)\big|^2 \Big)^{\frac{1}{2}} + \Big( \sum_{x_k \in \Omega(n)} |\mu_k^{(n)}|\, \big|V_n(\phi)(x_k \cdot x)\big|^2 \Big)^{\frac{1}{2}} \\
&\leq c\, E_n(\phi)_{C([-1,1])} \Big( \sum_{x_k \in \Omega(n)} |\mu_k^{(n)}| \Big)^{\frac{1}{2}} + O(1) \Big( \int_{S^{d-1}} \big|V_n(\phi)(x \cdot y)\big|^2\, d\sigma(y) \Big)^{\frac{1}{2}} \\
&= O(n^{\alpha})\, E_n(\phi)_{C([-1,1])} + O(1) \Big( \int_{-1}^{1} |V_n(\phi)(u)|^2\, W_\eta(u)\, du \Big)^{\frac{1}{2}} = O(1),
\end{aligned}
$$
where we have used (16), (17), (57), and (25). Inequality (28) thus holds. □
Proof of Corollary 2.
Since $\Omega(n)$ is defined as in Proposition 2, we know that $\mu_k^{(n)} \geq 0$ and $\sum_{x_k \in \Omega(n)} |\mu_k^{(n)}| = \omega_{d-1} < +\infty$, so condition (25) is satisfied with $\alpha = 0$; we then obtain the results of Corollary 2 from Proposition 4. □
Proof of Proposition 5.
Since $V_n^{\phi}(f) \in H_{\Omega(2(N+n))}^{\phi}$, we have
$$\mathcal{D}_{\Omega(2(N+n))}(f, \lambda) \leq \|f - V_n^{\phi}(f)\|_{L^2(\rho_{S^{d-1}})} + \sqrt{\frac{\lambda}{2}}\, \|V_n^{\phi}(f)\|_{H_{\Omega(2(N+n))}^{\phi}} = A + \sqrt{\frac{\lambda}{2}}\, B, \tag{67}$$
where
$$A = \|f - V_n^{\phi}(f)\|_{L^2(\rho_{S^{d-1}})}$$
and
$$B = \Big( |a_0^{(0)}(V_n(f))|^2 + \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \Big( \sum_{l=1}^{2n} \frac{Y_l(V_n(f), x_k)}{\widehat{a_l^\eta(\phi)}} \Big)^2 \Big)^{\frac{1}{2}}.$$
We first bound $A$. By (60), we have
$$A \leq \|V_n^{\phi}(f) - V_n(f)\|_{2, S^{d-1}} + \|V_n(f) - f\|_{2, S^{d-1}} = \|V_n^{\phi}(f) - V_n(f)\|_{2, S^{d-1}} + O\big(E_n(f)_{2, S^{d-1}}\big), \tag{68}$$
where
$$
\begin{aligned}
\|V_n^{\phi}(f) - V_n(f)\|_{2, S^{d-1}}
&\leq \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \Big( \int_{S^{d-1}} \big|\phi(x \cdot x_k) - V_N^{(\eta)}(\phi, x \cdot x_k)\big|^2\, d\sigma(x) \Big)^{\frac{1}{2}} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}} \\
&\leq c\, E_N(\phi)_{2, W_\eta} \sum_{x_k \in \Omega(2(N+n))} \mu_k^{(2(N+n))} \sum_{l=1}^{2n} \frac{|Y_l(V_n(f), x_k)|}{\widehat{a_l^\eta(\phi)}} = O\Big( \frac{n^{\frac{1}{2}}\, E_N(\phi)_{2, W_\eta}\, \|f\|_{2, S^{d-1}}}{\phi_n} \Big),
\end{aligned}
\tag{69}
$$
where we have used (62) and (17). Collecting (69) and (68), we have
$$A = O\Big( \frac{n^{\frac{1}{2}}\, E_N(\phi)_{2, W_\eta}\, \|f\|_{2, S^{d-1}}}{\phi_n} \Big) + O\big(E_n(f)_{2, S^{d-1}}\big). \tag{70}$$
We now bound $B$. By (20), (22), and (18), we have
$$
\begin{aligned}
B &\leq \Big( |a_0^{(0)}(V_n(f))|^2 + \Big( \max_{x_k \in \Omega(2(N+n))} \frac{\mu_k^{(2(N+n))}}{A_k^{(2(N+n))}} \Big) \sum_{x_k \in \Omega(2(N+n))} A_k^{(2(N+n))} \Big( \sum_{l=1}^{2n} \frac{Y_l(V_n(f), x_k)}{\widehat{a_l^\eta(\phi)}} \Big)^2 \Big)^{\frac{1}{2}} \\
&\leq \Big( |a_0^{(0)}(V_n(f))|^2 + c \sum_{x_k \in \Omega(2(N+n))} A_k^{(2(N+n))} \Big( \sum_{l=1}^{2n} \frac{Y_l(V_n(f), x_k)}{\widehat{a_l^\eta(\phi)}} \Big)^2 \Big)^{\frac{1}{2}} \\
&= O\Big( \Big( |a_0^{(0)}(V_n(f))|^2 + \int_{S^{d-1}} \Big( \sum_{l=1}^{2n} \frac{Y_l(V_n(f), x)}{\widehat{a_l^\eta(\phi)}} \Big)^2 d\sigma(x) \Big)^{\frac{1}{2}} \Big) \\
&= O\Big( \frac{n^{\frac{1}{2}}}{\phi_n} \Big( \sum_{l=1}^{2n} \|Y_l(V_n(f))\|_{2, S^{d-1}}^2 \Big)^{\frac{1}{2}} \Big) = O\Big( \frac{n^{\frac{1}{2}}\, \|V_n(f)\|_{2, S^{d-1}}}{\phi_n} \Big) = O\Big( \frac{n^{\frac{1}{2}}\, \|f\|_{2, S^{d-1}}}{\phi_n} \Big).
\end{aligned}
\tag{71}
$$
Substituting (71) and (70) into (67), we obtain (34). □
Proof of Theorem 3.
Taking $\xi(x, y)(\cdot) = \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big) K_x(\phi)(\cdot)$, by (49) we have
$$\|\xi(x, y)(\cdot)\|_{H_{\Omega(n)}^{\phi}} = \sqrt{K_x(\phi)(x)}\, \big|y - f_{\rho,\lambda}^{\Omega(n)}(x)\big| \leq \kappa \big( M + |f_{\rho,\lambda}^{\Omega(n)}(x)| \big) \leq \kappa \Big( M + \frac{\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}}{\sqrt{\lambda}} \Big). \tag{72}$$
If $\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} > M\sqrt{\lambda}$, then we have by (72) that
$$\|\xi(x, y)(\cdot)\|_{H_{\Omega(n)}^{\phi}} \leq \frac{2\kappa^2\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}}{\sqrt{\lambda}}. \tag{73}$$
Also,
$$\sigma^2(\xi) = E\big(\|\xi\|_{H_{\Omega(n)}^{\phi}}^2\big) \leq \kappa^2 \int_Z \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big)^2\, d\rho(x, y) \leq \kappa^2 \Big( \int_Z \big(y - f_{\rho,\lambda}^{\Omega(n)}(x)\big)^2\, d\rho(x, y) + \frac{\lambda}{2}\, \|f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}}^2 \Big) \leq \kappa^2 \min_{f \in H_{\Omega(n)}^{\phi}} \Big\{ \mathcal{E}_\rho(f) + \frac{\lambda}{2}\, \|f\|_{H_{\Omega(n)}^{\phi}}^2 \Big\} \leq \kappa^2 \int_Z y^2\, d\rho(x, y) < +\infty. \tag{74}$$
Substituting (73) and (74) into (43), we obtain (36).
On the other hand, if $\kappa\, \mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})} \leq M\sqrt{\lambda}$, then we have by (72) that
$$\|\xi(x, y)(\cdot)\|_{H_{\Omega(n)}^{\phi}} \leq 2\kappa M. \tag{75}$$
Substituting (75) and (74) into (43), we obtain (37). □
Proof of Corollary 4.
Because of (36), we have by (30) that
$$\|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{L^2(\rho_{S^{d-1}})} \leq \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{C(S^{d-1})} \leq \kappa\, \|f_{\mathbf{z},\lambda}^{\Omega(n)} - f_{\rho,\lambda}^{\Omega(n)}\|_{H_{\Omega(n)}^{\phi}} \leq \Big( \frac{4\kappa M}{\lambda \sqrt{m}} + \frac{\mathcal{D}_{\Omega(n)}(f_\rho, \lambda)_{L^2(\rho_{S^{d-1}})}}{\lambda^{\frac{3}{2}}\, m} \Big) \log\frac{4}{\delta}. \tag{76}$$
Substituting (76) into (33), we obtain (38). In the same way, we obtain (39). □

Author Contributions

Conceptualization, X.R.; methodology, B.S.; validation, X.R.; formal analysis, X.R.; resources, B.S. and S.W.; writing—original draft preparation, X.R. and B.S.; writing—review and editing, B.S. and S.W.; supervision, B.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the National Natural Science Foundation of China under Grant No. 61877039, the NSFC/RGC Joint Research Scheme (Project No. 12061160462 and N_CityU102/20) of China, and the Natural Science Foundation of Jiangxi Province of China (20232BAB201021).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  2. Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.-V.; Norouzi, M.; Macherey, W.; Cao, Y.; Gao, Q. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv 2016, arXiv:1609.08144. [Google Scholar]
  3. Alipanahi, B.; Delong, A.; Weirauch, M.T.; Frey, B.J. Predicting the sequence specificities of DNA-and RNA-binding proteins by deep learning. Nat. Biotechnol. 2015, 33, 831–838. [Google Scholar] [CrossRef] [PubMed]
  4. Chui, C.K.; Lin, S.-B.; Zhou, D.-X. Construction of neural networks for realization of localized deep learning. arXiv 2018, arXiv:1803.03503. [Google Scholar] [CrossRef]
  5. Chui, C.K.; Lin, S.-B.; Zhou, D.-X. Deep neural networks for rotation-invariance approximation and learning. Anal. Appl. 2019, 17, 737–772. [Google Scholar] [CrossRef]
  6. Fang, Z.-Y.; Feng, H.; Huang, S.; Zhou, D.-X. Theory of deep convolutional neural networks II: Spherical analysis. Neural Netw. 2020, 131, 154–162. [Google Scholar] [CrossRef]
  7. Feng, H.; Huang, S.; Zhou, D.-X. Generalization analysis of CNNs for classification on spheres. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 6200–6213. [Google Scholar] [CrossRef]
  8. Zhou, D.-X. Deep distributed convolutional neural networks: Universality. Anal. Appl. 2018, 16, 895–919. [Google Scholar] [CrossRef]
  9. Zhou, D.-X. Universality of deep convolutional neural networks. Appl. Comput. Harmon. Anal. 2020, 48, 787–794. [Google Scholar] [CrossRef]
  10. Cucker, F.; Zhou, D.-X. Learning Theory: An Approximation Theory Viewpoint; Cambridge University Press: New York, NY, USA, 2007. [Google Scholar]
  11. Steinwart, I.; Christmann, A. Support Vector Machines; Springer: New York, NY, USA, 2008. [Google Scholar]
  12. Cucker, F.; Smale, S. On the mathematical foundations of learning. Bull. Amer. Math. Soc. 2001, 39, 1–49. [Google Scholar] [CrossRef]
  13. An, C.-P.; Chen, X.-J.; Sloan, I.H.; Womersley, R.S. Regularized least squares approximations on the sphere using spherical designs. SIAM J. Numer. Anal. 2012, 50, 1513–1534. [Google Scholar] [CrossRef]
  14. An, C.-P.; Wu, H.-N. Lasso hyperinterpolation over general regions. SIAM J. Sci. Comput. 2021, 43, A3967–A3991. [Google Scholar] [CrossRef]
  15. An, C.-P.; Ran, J.-S. Hard thresholding hyperinterpolation over general regions. arXiv 2023, arXiv:2209.14634. [Google Scholar]
  16. De Mol, C.; De Vito, E.; Rosasco, L. Elastic-net regularization in learning theory. J. Complex. 2009, 25, 201–230. [Google Scholar] [CrossRef]
  17. Fischer, S.; Steinwart, I. Sobolev norm learning rates for regularized least-squares algorithms. J. Mach. Learn. Res. 2020, 21, 8464–8501. [Google Scholar]
  18. Lai, J.-F.; Li, Z.-F.; Huang, D.-G.; Lin, Q. The optimality of kernel classifiers in Sobolev space. arXiv 2024, arXiv:2402.01148. [Google Scholar]
  19. Sun, H.-W.; Wu, Q. Least square regression with indefinite kernels and coefficient regularization. Appl. Comput. Harmon. Anal. 2011, 30, 96–109. [Google Scholar] [CrossRef]
  20. Wu, Q.; Zhou, D.-X. Learning with sample dependent hypothesis spaces. Comput. Math. Appl. 2008, 56, 2896–2907. [Google Scholar] [CrossRef]
  21. Chen, H.; Wu, J.-T.; Chen, D.-R. Semi-supervised learning for regression based on the diffusion matrix. Sci. Sin. Math. 2014, 44, 399–408. (In Chinese) [Google Scholar]
  22. Sun, X.-J.; Sheng, B.-H. The learning rate of kernel regularized regression associated with a correntropy-induced loss. Adv. Math. 2024, 53, 633–652. [Google Scholar]
  23. Wu, Q.; Zhou, D.-X. Analysis of support vector machine classification. J. Comput. Anal. Appl. 2006, 8, 99–119. [Google Scholar]
  24. Sheng, B.-H. Reproducing property of bounded linear operators and kernel regularized least square regressions. Int. J. Wavelets Multiresolut. Inf. Process. 2024, 22, 2450013. [Google Scholar] [CrossRef]
  25. Lin, S.-B.; Wang, D.; Zhou, D.-X. Sketching with spherical designs for noisy data fitting on spheres. SIAM J. Sci. Comput. 2024, 46, A313–A337. [Google Scholar] [CrossRef]
  26. Lin, S.-B.; Zeng, J.-S.; Zhang, X.-Q. Constructive neural network learning. IEEE Trans. Cybern. 2019, 49, 221–232. [Google Scholar] [CrossRef]
  27. Mhaskar, H.N.; Micchelli, C.A. Degree of approximation by neural and translation networks with single hidden layer. Adv. Appl. Math. 1995, 16, 151–183. [Google Scholar] [CrossRef]
  28. Sheng, B.-H.; Zhou, S.-P.; Li, H.-T. On approximation by tramslation networks in Lp(Rk) spaces. Adv. Math. 2007, 36, 29–38. [Google Scholar]
  29. Mhaskar, H.N.; Narcowich, F.J.; Ward, J.D. Approximation properties of zonal function networks using scattered data on the sphere. Adv. Comput. Math. 1999, 11, 121–137. [Google Scholar] [CrossRef]
  30. Sheng, B.-H. On approximation by reproducing kernel spaces in weighted Lp-spaces. J. Syst. Sci. Complex. 2007, 20, 623–638. [Google Scholar] [CrossRef]
  31. Parhi, R.; Nowak, R.D. Banach space representer theorems for neural networks and ridge splines. J. Mach. Learn. Res. 2021, 22, 1–40. [Google Scholar]
  32. Oono, K.; Suzuki, Y.J. Approximation and non-parameteric estimate of ResNet-type convolutional neural networks. arXiv 2023, arXiv:1903.10047. [Google Scholar]
  33. Shen, G.-H.; Jiao, Y.-L.; Lin, Y.-Y.; Huang, J. Non-asymptotic excess risk bounds for classification with deep convolutional neural networks. arXiv 2021, arXiv:2105.00292. [Google Scholar]
  34. Mallat, S. Understanding deep convolutional networks. Phil. Trans. R. Soc. A 2016, 374, 20150203. [Google Scholar] [CrossRef] [PubMed]
  35. Narcowich, F.J.; Ward, J.D.; Wendland, H. Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions. Constr. Approx. 2006, 24, 175–186. [Google Scholar] [CrossRef]
  36. Narcowich, F.J.; Ward, J.D. Scattered data interpolation on spheres: Error estimates and locally supported basis functions. SIAM J. Math. Anal. 2002, 33, 1393–1410. [Google Scholar] [CrossRef]
  37. Narcowich, F.J.; Sun, X.P.; Ward, J.D.; Wendland, H. Direct and inverse Sobolev error estimates for scattered data interpolation via spherical basis functions. Found. Comput. Math. 2007, 7, 369–390. [Google Scholar] [CrossRef]
  38. Gröchenig, K. Sampling, Marcinkiewicz-Zygmund inequalities, approximation and quadrature rules. J. Approx. Theory 2020, 257, 105455. [Google Scholar] [CrossRef]
  39. Gia, Q.T.L.; Mhaskar, H.N. Localized linear polynomial operators and quadrature formulas on the sphere. SIAM J. Numer. Anal. 2008, 47, 440–466. [Google Scholar] [CrossRef]
  40. Xu, Y. The Marcinkiewicz-Zygmund inequalities with derivatives. Approx. Theory Its Appl. 1991, 7, 100–107. [Google Scholar] [CrossRef]
  41. Szegö, G. Orthogonal Polynomials; American Mathematical Society: New York, NY, USA, 1967. [Google Scholar]
  42. Mhaskar, H.N.; Narcowich, F.J.; Ward, J.D. Spherical Marcinkiewicz-Zygmund inequalities and positive quadrature. Math. Comput. 2001, 70, 1113–1130, Corrigendum in Math. Comp. 2001, 71, 453–454. [Google Scholar] [CrossRef]
  43. Dai, F. On generalized hyperinterpolation on the sphere. Proc. Amer. Math. Soc. 2006, 134, 2931–2941. [Google Scholar] [CrossRef]
  44. Mhaskar, H.N.; Narcowich, F.J.; Sivakumar, N.; Ward, J.D. Approximation with interpolatory constraints. Proc. Amer. Math. Soc. 2001, 130, 1355–1364. [Google Scholar] [CrossRef]
  45. Xu, Y. Mean convergence of generalized Jacobi series and interpolating polynomials, II. J. Approx. Theory 1994, 76, 77–92. [Google Scholar] [CrossRef]
  46. Marzo, J. Marcinkiewicz-Zygmund inequalities and interpolation by spherical harmonics. J. Funct. Anal. 2007, 250, 559–587. [Google Scholar] [CrossRef]
  47. Marzo, J.; Pridhnani, B. Sufficient conditions for sampling and interpolation on the sphere. Constr. Approx. 2014, 40, 241–257. [Google Scholar] [CrossRef]
  48. Wang, H.P. Marcinkiewicz-Zygmund inequalities and interpolation by spherical polynomials with respect to doubling weights. J. Math. Anal. Appl. 2015, 423, 1630–1649. [Google Scholar] [CrossRef]
  49. Gia, T.L.; Sloan, I.H. The uniform norm of hyperinterpolation on the unit sphere in an arbitrary number of dimensions. Constr. Approx. 2001, 17, 249–265. [Google Scholar] [CrossRef]
  50. Sloan, I.H. Polynomial interpolation and hyperinterpolation over general regions. J. Approx. Theory 1995, 83, 238–254. [Google Scholar] [CrossRef]
  51. Sloan, I.H.; Womersley, R.S. Constructive polynomial approximation on the sphere. J. Approx. Theory 2000, 103, 91–118. [Google Scholar] [CrossRef]
  52. Wang, H.-P. Optimal lower estimates for the worst case cubature error and the approximation by hyperinterpolation operators in the Sobolev space setting on the sphere. Int. J. Wavelets Multiresolut. Inf. Process. 2009, 7, 813–823. [Google Scholar] [CrossRef]
  53. Wang, H.-P.; Wang, K.; Wang, X.-L. On the norm of the hyperinterpolation operator on the d-dimensional cube. Comput. Math. Appl. 2014, 68, 632–638. [Google Scholar]
  54. Sloan, I.H.; Womersley, R.S. Filtered hyperinterpolation: A constructive polynomial approximation on the sphere. Int. J. Geomath. 2012, 3, 95–117. [Google Scholar] [CrossRef]
  55. Bondarenko, A.; Radchenko, D.; Viazovska, M. Well-separated spherical designs. Constr. Approx. 2015, 41, 93–112. [Google Scholar] [CrossRef]
  56. Hesse, K.; Womersley, R.S. Numerical integration with polynomial exactness over a spherical cap. Adv. Comput. Math. 2012, 36, 451–483. [Google Scholar] [CrossRef]
  57. Delsarte, P.; Goethals, J.M.; Seidel, J.J. Spherical codes and designs. Geom. Dedicata 1977, 6, 363–388. [Google Scholar] [CrossRef]
  58. An, C.-P.; Chen, X.-J.; Sloan, I.H.; Womersley, R.S. Well conditioned spherical designs for integration and interpolation on the two-sphere. SIAM J. Numer. Anal. 2010, 48, 2135–2157. [Google Scholar] [CrossRef]
  59. Chen, X.; Frommer, A.; Lang, B. Computational existence proof for spherical t-designs. Numer. Math. 2010, 117, 289–305. [Google Scholar] [CrossRef]
  60. An, C.-P.; Wu, H.-N. Bypassing the quadrature exactness assumption of hyperinterpolation on the sphere. J. Complex. 2024, 80, 101789. [Google Scholar] [CrossRef]
  61. An, C.-P.; Wu, H.-N. On the quadrature exactness in hyperinterpolation. BIT Numer. Math. 2022, 62, 1899–1919. [Google Scholar] [CrossRef]
  62. Sun, X.-J.; Sheng, B.-H.; Liu, L.; Pan, X.-L. On the density of translation networks defined on the unit ball. Math. Found. Comput. 2024, 7, 386–404. [Google Scholar] [CrossRef]
  63. Wang, H.-P.; Wang, K. Optimal recovery of Besov classes of generalized smoothness and Sobolev class on the sphere. J. Complex. 2016, 32, 40–52. [Google Scholar] [CrossRef]
  64. Dai, F.; Xu, Y. Approximation Theory and Harmonic Analysis on Spheres and Balls; Springer: New York, NY, USA, 2013. [Google Scholar]
  65. Müller, C. Spherical Harmonics; Springer: Berlin/Heidelberg, Germany, 1966. [Google Scholar]
  66. Wang, K.-Y.; Li, L.-Q. Harmonic Analysis and Approximation on the Unit Sphere; Science Press: New York, NY, USA, 2000. [Google Scholar]
  67. Cheney, W.; Light, W. A Course in Approximation Theory; China Machine Press: Beijing, China, 2004. [Google Scholar]
  68. Dai, F.; Wang, H.-P. Positive cubature formulas and Marcinkiewicz-Zygmund inequalities on spherical caps. Constr. Approx. 2010, 31, 1–36. [Google Scholar] [CrossRef]
  69. Aronszajn, N. Theory of reproducing kernels. Trans. Amer. Math. Soc. 1950, 68, 337–404. [Google Scholar] [CrossRef]
  70. Lin, S.-B.; Wang, Y.-G.; Zhou, D.-X. Distributed filtered hyperinterpolation for noisy data on the sphere. SIAM J. Numer. Anal. 2021, 59, 634–659. [Google Scholar] [CrossRef]
  71. Montúfar, G.; Wang, Y.-G. Distributed learning via filtered hyperinterpolation on manifolds. Found. Comput. Math. 2022, 22, 1219–1271. [Google Scholar] [CrossRef]
  72. Sheng, B.-H.; Wang, J.-L. Moduli of smoothness, K-functionals and Jackson-type inequalities associated with kernel function approximation in learning theory. Anal. Appl. 2024, 22, 981–1022. [Google Scholar] [CrossRef]
  73. Christmann, A.; Xiang, D.-H.; Zhou, D.-X. Total stability of kernel methods. Neurocomputing 2018, 289, 101–118. [Google Scholar] [CrossRef]
  74. Sheng, B.-H.; Liu, H.-X.; Wang, H.-M. The learning rate for the kernel regularized regression (KRR) with a differentiable strongly convex loss. Commun. Pure Appl. Anal. 2020, 19, 3973–4005. [Google Scholar] [CrossRef]
  75. Wang, S.-H.; Sheng, B.-H. Error analysis of kernel regularized pairwise learning with a strongly convex loss. Math. Found. Comput. 2023, 6, 625–650. [Google Scholar] [CrossRef]
  76. Smale, S.; Zhou, D.-X. Learning theory estimates via integral operators and their applications. Constr. Approx. 2007, 26, 153–172. [Google Scholar] [CrossRef]
  77. Lin, S.-B. Integral operator approaches for scattered data fitting on sphere. arXiv 2024, arXiv:2401.15294. [Google Scholar]
  78. Feng, H.; Lin, S.-B.; Zhou, D.-X. Radial basis function approximation with distributively stored data on spheres. Constr. Approx. 2024, 60, 1–31. [Google Scholar] [CrossRef]
  79. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2010. [Google Scholar]
  80. Kyriazis, G.; Petrushev, P.; Xu, Y. Jacobi decomposition of weighted Triebel-Lizorkin and Besov spaces. Stud. Math. 2008, 186, 161–202. [Google Scholar] [CrossRef]
  81. Chen, W.; Ditzian, Z. Best approximation and K-functionals. Acta Math. Hung. 1997, 75, 165–208. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
