
Rates for the SLLN for long-memory and heavy-tailed processes

Published online by Cambridge University Press:  24 June 2025

Samir Ben Hariz*
Affiliation:
Université du Maine
Salim Bouzebda**
Affiliation:
Université de technologie de Compiègne

*Postal address: Laboratoire de Statistique et Processus, Département de Mathématiques, Université du Maine, Av. Olivier Messiaen BP 535 72017 Le Mans CEDEX, France. Email: Samir.Ben_Hariz@univ-lemans.fr
**Postal address: Université de technologie de Compiègne, LMAC (Laboratory of Applied Mathematics of Compiègne), 57 Avenue de Landshut CS 60319 60203 Compiègne CEDEX, France. Email: Salim.Bouzebda@utc.fr

Abstract

The present paper develops a unified approach to short- or long-range dependent processes with finite or infinite variance. We are concerned with the convergence rate in the strong law of large numbers (SLLN). Our main result is a Marcinkiewicz–Zygmund law of large numbers for $S_{n}(f)= \sum_{i=1}^{n}f(X_{i})$, where $\{X_i\}_{i\geq 1}$ is a real stationary Gaussian sequence and $f(\!\cdot\!)$ is a measurable function. The key technical tools in the proofs, which may be useful in other problems, are new maximal inequalities for partial sums, employed alongside truncation arguments. The results help to differentiate the effects of long memory and heavy tails on the convergence rate in limit theorems.

© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $\{X_i\}_{i\geq 1}$ be a stationary process defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$, and let $f(\!\cdot\!)$ be a measurable function with respect to the Borel $\sigma$-algebra. We define the partial sum by

\begin{equation*}S_n(f) = \sum_{i=1}^n f(X_i).\end{equation*}

Arcones [Reference Arcones3, Reference Arcones4] investigated the limiting distribution of $S_n(f)$ . Specifically, in [Reference Arcones3] he extended the existing asymptotic distribution theory for partial sums of sequences of random variables, which are functions of a real stationary Gaussian sequence, to include cases where the underlying Gaussian sequence consists of vectors. Further research on this topic can be found in [Reference Ben Hariz7], [Reference Buchsteiner12], [Reference Hu, Nualart, Tindel and Xu19], [Reference Ivanov, Leonenko, Ruiz-Medina and Savich20], [Reference Jirak21], [Reference Kratz and León22], and [Reference Kulik and Soulier23].

In this study, our objective is to investigate the convergence rate in the strong law of large numbers (SLLN) under conditions related to moments and dependence. This investigation provides new insights into the issues of heavy tails [Reference Kulik and Soulier24] and long-range dependence [Reference Beran, Feng, Ghosh and Kulik8]. One way to quantify the strength of memory in a time series is by examining the decay of its correlations. While our focus includes infinite variance cases, in some instances we assume finite variance to facilitate the analysis. When the sequence $\{ X_i \}_{1\leq i\leq n}$ is independent, many inequalities are available that allow the study of almost sure convergence for partial sums, such as those of Kolmogorov; see [Reference Alvarez-Andrade and Bouzebda1, Reference Alvarez-Andrade and Bouzebda2], [Reference Baum and Katz5], and [Reference Gut and Stadtmüller16]. In the weakly dependent case, Rio [Reference Rio28] extended the law of Marcinkiewicz and Zygmund [Reference Marcinkiewicz and Zygmund26] to strong mixing sequences, Shao [Reference Shao29] did so under $\rho$-mixing conditions, and Szewczak [Reference Szewczak31] under $\varphi$-mixing conditions. Hechner and Heinkel [Reference Hechner and Heinkel17] established a necessary and sufficient condition for the Marcinkiewicz–Zygmund SLLN in Banach spaces via a generalized martingale approach. Furthermore, Dedecker and Merlevède [Reference Dedecker and Merlevède13] extended the Marcinkiewicz–Zygmund strong law of large numbers for martingales to Banach space-valued weakly dependent random variables. The question of almost sure convergence was also investigated by Houdré [Reference Houdré18]. For associated sequences, we refer to Birkel [Reference Birkel10]. In the case of long-range dependent processes and infinite variance, the problem was considered by Louhichi and Soulier [Reference Louhichi and Soulier25], who obtained an SLLN for a linear process with possibly infinite variance innovations. More recently, Fazekas and Klesov [Reference Fazekas and Klesov15] and Shuhe and Ming [Reference Shuhe and Ming30] proposed a general approach to the rate in the SLLN, which can handle both weakly dependent and long-memory sequences.

This work provides a high-order expansion for the partial sum process when dealing with infinite variance and long-range dependent (LRD) sequences. In particular, we derive a Marcinkiewicz–Zygmund strong law for sequences that may exhibit long memory and potentially infinite variance. To the best of our knowledge, this problem has remained open until now; filling this gap is the main motivation for our paper.

The rest of the paper is structured as follows. In Section 2 we present a sharp maximal inequality for our model, which is of independent interest. In Section 3 we state the main theorem, which deals with almost sure convergence of partial sums. Section 4 provides some examples. Section 5 offers concluding remarks about limit theorems for long-range dependent and heavy-tailed processes. Section 6 is devoted to the proof of the main results. Some technical results are given in the Appendix.

2. Maximal inequalities

The estimation of moments of the maximum of partial sums is one of the most useful tools in proofs of limit theorems. Here too, the key step in the proof of Theorem 3.1 is the use of the maximal inequalities stated in Theorems 2.1 and 2.2 below. We first introduce the following notation:

\begin{equation*}M_{N}(f)\equiv \max_{k \leq N}| S_{k}(f)| .\end{equation*}

Since the fundamental tool in the analysis of Gaussian functionals is the expansion of the functional in the basis formed by the Hermite polynomials, we first recall a few concepts. Let X be a standard normal random variable with density $\phi(\!\cdot\!)$ and let $f(\!\cdot\!)$ be a measurable real function. Let $H_{k}(\!\cdot\!)$ be the kth Hermite polynomial, that is,

\begin{equation*}H_{k}(x)=(\!-\!1)^{k}\phi ^{(k)}(x)/\phi (x),\end{equation*}

where $\phi(\!\cdot\!)$ denotes the standard normal density. Hence $H_{0}(x)=1$ , $H_{1}(x)=x$ , $H_{2}(x)=x^{2}-1$ , and so on.

For $p>0$ , let $\mathbb L^p=\mathbb L^p(\mathbb R, \phi)$ denote the space of functions satisfying

\begin{equation*}\mathbb{E}( |f(X)|^{p}) =\int_{\mathbb{R}}|f(x)|^{p}\phi(x)\,{\mathrm{d}} x<\infty .\end{equation*}

If f(X) is square-integrable, then it can be expanded as

\begin{equation*} f(x)=\sum_{k=0}^{\infty }{\dfrac{c_{k}}{{k!}}}H_{k}(x),\quad \text{where}\,\, c_{k}\equiv c_{k}(f)=\mathbb{E}( f(X)H_{k}(X)).\end{equation*}

The index of the first non-zero coefficient, called the Hermite rank of $ f(\!\cdot\!) $, is defined as

\begin{equation*}m \equiv m(f) \equiv \inf \{ k > 0 \colon c_k \neq 0 \}.\end{equation*}

It plays a fundamental role, along with the covariance function, in the limit laws of the partial sums $ S_n(f) $; see [Reference Taqqu32] or [Reference Dobrushin and Major14] for further details.
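The following minimal sketch (ours, not part of the paper) may help the reader experiment with these notions numerically: it evaluates the probabilists' Hermite polynomials through the recurrence $H_{k+1}(x)=xH_{k}(x)-kH_{k-1}(x)$, approximates $c_{k}=\mathbb{E}(f(X)H_{k}(X))$ by Gauss–Hermite quadrature, and reads off the Hermite rank. All function names are ours.

```python
# A minimal numerical sketch (ours): probabilists' Hermite polynomials,
# Hermite coefficients c_k = E[f(X) H_k(X)], and the Hermite rank.
import numpy as np

def hermite(k: int, x: np.ndarray) -> np.ndarray:
    """H_k(x) via the recurrence H_{k+1} = x H_k - k H_{k-1}."""
    h_prev, h = np.ones_like(x), x
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h = h, x * h - j * h_prev
    return h

def hermite_coeff(f, k: int, n_nodes: int = 200) -> float:
    """c_k = E[f(X) H_k(X)], X ~ N(0,1), by Gauss-Hermite quadrature."""
    # hermgauss targets the weight exp(-t^2); substitute x = sqrt(2) t.
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    x = np.sqrt(2.0) * t
    return float(np.sum(w * f(x) * hermite(k, x)) / np.sqrt(np.pi))

def hermite_rank(f, k_max: int = 10, tol: float = 1e-8) -> int:
    """Smallest k >= 1 with c_k != 0 (up to numerical tolerance)."""
    for k in range(1, k_max + 1):
        if abs(hermite_coeff(f, k)) > tol:
            return k
    raise ValueError("rank exceeds k_max")

# f(x) = x^2 - 1 = H_2(x): c_1 = 0 and c_2 = 2, so the Hermite rank is 2.
print(hermite_rank(lambda x: x**2 - 1))   # -> 2
```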

Theorem 2.1. Let $f(\!\cdot\!)$ be a function such that $\mathbb{E}[ f ( X ) ] =0$ and $\mathbb{E}[ f^{2}( X ) ] <\infty $, with Hermite rank m. For an integer $ p>2$, we have

(2.1) \begin{equation}\mathbb{E}( S_{n}(f)) ^{p}\leq \Biggl( \sum_{k=m}^{\infty }{\dfrac{{|c_{k}|}}{\sqrt{k!}}}\Biggl( 2n\sum_{i=-n}^{n}|r^{k}(i)|\Biggr)^{1/2}(p-1)^{k/2}\Biggr) ^{p}. \end{equation}

Further, there exists a constant $K=K(p)$ such that

(2.2) \begin{equation}\mathbb{E}( M_{N}(f)) ^{p}\leq K\Biggl( \sum_{k=m}^{\infty }{\dfrac{{|c_{k}|}}{\sqrt{k!}}}\Biggl( N\sum_{i=-N}^{N}|r^{k}(i)|\Biggr)^{1/2}(p-1)^{k/2}\Biggr) ^{p}. \end{equation}

The above theorem proves particularly useful when the expansion of $ f(\!\cdot\!) $ is finite. Relation (2.1) is the discrete analogue of Proposition 2 in [Reference Ben Hariz7] for continuous processes. Due to the similarity of the proofs, we omit the details here. The second part, (2.2), follows from the first part by standard arguments; see e.g. [Reference Billingsley9], [Reference Móricz, Serfling and Stout27], or the proof of Theorem 2.2 below.

The proof of Theorem 2.2 is founded on the following moment inequality, which is formalized in Proposition 2.1. In establishing Proposition 2.1, we require Lemma 4.1 from [Reference Ben Hariz6, page 101], which we restate here as Lemma A.1. For the reader’s convenience, we include the proof of this lemma at the end of the paper.

Proposition 2.1. Assume $f \in \mathbb{L}^{4}$ . Suppose $r^m$ is integrable (i.e. $\sum_{k} \lvert r^m(k)\rvert < \infty$ ), where m denotes the Hermite rank of $f(\!\cdot\!)$ . Then there exists a constant K, depending on r and m, such that, for all $n > 0$ ,

(2.3) \begin{equation}\mathbb{E}( S_{n}(f-\mathbb{E}(f(X)))) ^{4}\leq K(r,m)\bigl(\bigl( \sqrt{n}\| f \| _{2}\bigr) ^{4}+n\| f \|_{4}^{4}\bigr) . \end{equation}

The proof of Proposition 2.1 is postponed to Section 6.

Theorem 2.2. Assume $f \in \mathbb{L}^{4}$. Let $m\geq 1$ and suppose that $r^{m}$ is integrable. Then there exists a constant $K=K(r,m)$ such that, for any measurable function $f(\!\cdot\!)$ with Hermite rank greater than or equal to m and every $N\geq 1$, we have

\begin{equation*}\mathbb{E}| M_{N}(f-\mathbb{E}(f(X)))| ^{4}\leq K(r,m)\bigl[\bigl( \sqrt{N}\| f \| _{2}\bigr) ^{4}+N\| f \| _{4}^{4}\bigr].\end{equation*}

The combination of Theorems 2.1 and 2.2 allows us to handle both long- and short-range dependent sequences, giving a sharp bound for $\mathbb{E}| M_{N}(f-\mathbb{E}(f(X)))| ^{4}$. These inequalities, which are of independent interest, are very useful in limit theorems for partial sum processes.

3. Statement of the results

In Theorem 3.1 below, for a function $ f \in \mathbb{L}^p $ with $ 1 < p < 2 $, it is still possible to define the Fourier coefficients $ c_k $, $ k = 1, \ldots, m^{\ast} - 1 $, and the Hermite rank m of $ f(\!\cdot\!) $ in the same manner, even though the series may no longer converge. Indeed, $ c_k = \mathbb{E}[f(X)H_k(X)]$. Hence, by Hölder’s inequality,

\begin{align*}|c_k|\leq (\mathbb{E}[|f(X)|^{p}] )^{1/p}\bigl(\mathbb{E}\bigl[|H_k(X)|^{p/(p-1)}\bigr] \bigr)^{1-1/p} < \infty .\end{align*}

Note that, in particular, $ c_0 = \mathbb{E}[f(X)] $ .

Throughout the paper (see e.g. equation (3.1) in Theorem 3.1), we let

\begin{equation*} f^K(x) = f(x) - \sum_{k=0}^{K-1} \dfrac{c_k}{k!} H_k(x)\end{equation*}

denote the error term associated with the approximation of f by its projection onto the closed subspace of $ \mathbb{L}^2 $ spanned by $ H_0, \ldots, H_{K-1} $.

We assume that $ \{X_i\}_{i\geq 1} $ is a stationary Gaussian sequence satisfying $ \mathbb{E}(X_n) = 0 $ and $ \mathbb{E}(X_n^2) = 1 $, and we let $ r(n) \equiv \mathbb{E}(X_0 X_n) $ denote its correlation function. Before proceeding further, we recall some properties of the variance of partial sums. Let $ d_{n,m}^2 $ denote the variance of $ S_n(H_m) $. Then

\begin{align*}d_{n,m}^{2} &= m!\sum_{i=-n}^{n}( n-|i|) r^{m}( |i| ) \\&= m!\Biggl( n+2\sum_{i=1}^{n}(n-i)r^{m}( i )\Biggr) \\&\leq m!n\Biggl( 1+2\sum_{i=1}^{n}| r^{m}( i ) |\Biggr) \\& \equiv D_{n,m}^{2}.\end{align*}

Now, for two sequences $u_n$ and $v_n$, we write $u_{n}\sim v_{n}$ if $\lim u_{n}/v_{n}=1$. Hence, if $r(n)\sim n^{-\alpha } L(n)$, where $L(\!\cdot\!)$ is a slowly varying function and $m\alpha <1$, then $d_{n,m}^{2}\sim 2m! (1-m\alpha ) ^{-1} ( 2-m\alpha) ^{-1}n^{2-m\alpha }L^{m}(n)$. If $r^{m}$ is summable, then

\begin{align*} D_{n,m}^{2} \sim m! \, n \Biggl(1 + 2 \sum_{i=1}^{\infty} |r^{m}(i)| \Biggr). \end{align*}
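The following sketch (our illustration, with the concrete choice $r(i)=i^{-\alpha}$ for $i\geq1$ and no slowly varying factor) compares the exact variance $d_{n,m}^{2}$ with the long-memory asymptotic given above; the printed ratios drift toward 1 as n grows.

```python
# Exact d_{n,m}^2 = m!(n + 2 sum_{i<n} (n-i) r(i)^m) versus the asymptotic
# 2 m! n^{2-m*alpha} / ((1-m*alpha)(2-m*alpha)) for r(i) = i^{-alpha}.
import math
import numpy as np

def d2(n: int, m: int, alpha: float) -> float:
    i = np.arange(1, n)
    return math.factorial(m) * (n + 2.0 * np.sum((n - i) * i ** (-alpha * m)))

m, alpha = 2, 0.2                      # m * alpha = 0.4 < 1: long memory
for n in (10**3, 10**4, 10**5):
    asym = (2 * math.factorial(m) * n ** (2 - m * alpha)
            / ((1 - m * alpha) * (2 - m * alpha)))
    print(n, d2(n, m, alpha) / asym)   # ratio tends to 1
```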

Lemma 3.1. Let $\{X_i\}_{i\geq 1}$ be a Gaussian sequence of real random variables and let $f(\!\cdot\!)$ be a real-valued centered function. Assume that $f\in \mathbb{L}^{p}$ for $1<p<2$ and $r^{m}$ is integrable, where m is the Hermite rank of $f(\!\cdot\!)$ . Then,

\begin{equation*}\textit{for all}\ \varepsilon >0,\quad \sum_{k=1}^{\infty }k^{-1}\mathbb{P}\bigl( M_{k}( f ) >\varepsilon k^{1/p}\bigr) <\infty .\end{equation*}

In particular, we have

\begin{equation*}\lim_{n\rightarrow \infty }n^{-1/p}S_{n}(f)=0\quad \textit{a.s.}\end{equation*}

Lemma 3.2. Let $\{X_i\}_{i\geq 1}$ be a Gaussian sequence of real random variables. Then, for all $\beta >0$ and $ \varepsilon >0$ ,

\begin{equation*}\sum_{j=1}^{\infty}j^{-1}\mathbb{P} \bigl( M_{j} ( H_{k}) >\varepsilon j^{\beta}D_{j,k} \bigr) <\infty .\end{equation*}

In particular, we have

\begin{equation*}\lim_{n\rightarrow \infty }n^{-\beta }D_{n,k}^{-1}S_{n}(H_{k})=0 \quad \textit{a.s.}\end{equation*}

The proofs of Lemmas 3.1 and 3.2 are postponed to Section 6. The following theorem is a consequence of Lemmas 3.1 and 3.2.

Theorem 3.1. Let $f(\!\cdot\!)$ be a function such that $\mathbb{E}[ |f( X ) | ^{p}] <\infty $ for some $1<p<2$ . Let $\{X_{i}\}_{i\geq 1}$ be a stationary Gaussian sequence with $\mathbb{E}(X_{n})=0$ , $\mathbb{E}(X_{n}^{2})=1$ , and let $r(n)\equiv \mathbb{E}(X_{0}X_{n})$ be the correlation function. Let m denote the Hermite rank of $f(\!\cdot\!)$ and assume that there exists $m^{\ast }\geq m$ such that $r^{m^{\ast }}$ is integrable. Then

(3.1) \begin{equation}S_{n}(f)\equiv \sum_{i=1}^{n}f(X_{i})=n\mathbb{E}[ f( X )] +\sum_{k=m}^{m^{\ast} -1}{\dfrac{{c_{k}}}{{k!}}}S_{n}(H_{k})+S_{n}(f^{m^{\ast} }), \end{equation}

where

  (i) $n^{-1/p}S_{n}(f^{m^{\ast}})\rightarrow 0$ a.s.,

  (ii) for $m \leq k < m^{\ast}$ and all $ \beta >0$, $({{n^{-\beta }}/{D_{n,k}}})S_{n}(H_{k})\rightarrow 0$ a.s.

Remark 3.1. A key feature of Theorem 3.1 is its unified treatment of almost sure convergence for both short- and long-memory sequences, with finite or infinite variance. Moreover, for each pair of conditions on the intensity of the memory and the moments of the marginal law, we give the optimal convergence rate (see Example 4.1 below). We also note that the expansion (3.1) separates long memory and heavy tails in the partial sum $S_{n}(f)$. The remainder $S_{n}(f^{m^{\ast} })$ keeps the heavy-tail property of the original sequence but weakens the dependence, while the first part represents the eventual LRD component with finite variance. An easy consequence of Theorem 3.1 is the following Marcinkiewicz–Zygmund law of large numbers.

Corollary 3.1. Let $f(\!\cdot\!)$ be a function such that $\mathbb{E}[ |f( X ) |^{p}] <\infty $ for some $1<p<2$ and $\mathbb{E}[ f( X ) ] =0$ . Let m denote the Hermite rank of $f(\!\cdot\!)$ .

  • If $r^{m}$ is integrable, then

    \begin{equation*}n^{-1/p}S_{n}(f)\rightarrow 0\quad \textit{a.s.}\end{equation*}
  • If $r(n)\sim n^{-\alpha }L(n)$ for some $\alpha >0$ with $m\alpha <1$, where $L(\!\cdot\!)$ is a slowly varying function, then, for all $ \epsilon >0$,

    \begin{equation*} n^{-\max (1/p,1-m\alpha /2+\epsilon )}S_{n}(f)\rightarrow 0\quad \textit{a.s.}\end{equation*}
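As a numerical illustration of the first statement, the sketch below (ours, not from the paper; all parameter choices are ours) simulates an AR(1) Gaussian sequence, for which every power of r is summable, applies the centered heavy-tailed function of Example 4.1 with $b=0.6$ (so that $\mathbb{E}|f(X)|^{p}<\infty$ only for $p<1/b$), and tracks $n^{-1/p}S_{n}(f)$.

```python
# Simulation sketch (ours) of Corollary 3.1, first case: n^{-1/p} S_n(f)
# should decay for an AR(1) Gaussian sequence and f with infinite variance.
import numpy as np

rng = np.random.default_rng(0)
n, phi, b = 10**6, 0.5, 0.6
p = 1.2                            # any 1 < p < 1/b gives E|f(X)|^p < infinity

# Stationary AR(1) with unit marginal variance: r(k) = phi^|k|.
x = np.empty(n)
x[0] = rng.standard_normal()
eps = rng.standard_normal(n) * np.sqrt(1 - phi**2)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

f = np.exp(b * x**2 / 2) - (1 - b) ** (-0.5)   # centered, infinite variance
s = np.cumsum(f)
for k in (10**3, 10**4, 10**5, n):
    print(k, s[k - 1] / k ** (1 / p))          # drifts toward 0
```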

One of the surprising consequences of Theorem 3.1 is that we can still obtain a Gaussian or Hermite-type limit law for partial sums even when the marginal distribution is heavy-tailed. This is in sharp contrast to the weakly dependent case, where a finite second moment is required for the central limit theorem. Although this work is not devoted to the weak convergence of partial sums, we state the following result, which is an easy consequence of Theorem 3.1 and the results of [Reference Taqqu32] or [Reference Dobrushin and Major14]. The question of weak convergence for the remainder in the expansion (3.1) remains an open problem.

Corollary 3.2. Let $f(\!\cdot\!)$ be a function such that $\mathbb{E}[ | f( X )| ^{p}] <\infty$. Assume that $r(n)\sim n^{-\alpha }L(n)$, $m\alpha <1$, and $(2-m\alpha )\min (2,p)>2$. Then, in distribution,

\begin{equation*}\dfrac{1}{d_{n,m}}S_{n}(f-\mathbb{E}(f))\rightarrow \dfrac{{c_{m}}}{m{!}}Y_{m}(1), \end{equation*}

where $Y_{m}(\cdot )$ is the Hermite process of order m (for a definition, see [Reference Taqqu32] or [Reference Dobrushin and Major14]).

The result can be explained as follows: the quantity $(2-m\alpha )$ governs the growth of the variance of the partial sums in the finite-variance case, while p measures the moments of the marginal distribution. The convergence to the Hermite process holds if $\min (p,2)(2-m\alpha )>2$. For example, if $p=2$, this reduces to the usual condition $m\alpha <1$. When $p<2$, we need to strengthen the dependence in order to retain this convergence. Otherwise we believe that the limiting law will be some stable distribution, as in the i.i.d. case with stable marginals. We also note the contrast with the i.i.d. case for $m=1$, in which the limiting distribution in the infinite-variance case is a stable law.

4. Examples

In this section we give a few examples to illustrate Theorem 3.1.

Example 4.1. Let $f(\!\cdot\!)$ be defined by $f(x)=\exp ( {{bx^{2}}/{2}}) $, where $b<1$; then ${c_{0}}=(1-b)^{-1/2}$, $c_{1}= 0$, and ${c_{2}}=b(1-b)^{-3/2}$. Moreover, $\mathbb{E}( f^{p}(X)) $ is finite if and only if $pb<1$, in which case we have

\begin{align*}\mathbb{E}( f^{p}(X))=(1-pb)^{-1/2}.\end{align*}
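These closed forms are easy to check numerically; the following quadrature sketch (ours) verifies $c_{0}$, $c_{2}$, and $\mathbb{E}(f^{p}(X))$ for the illustrative choices $b=0.4$ and $p=2$.

```python
# Quadrature check (ours) of the constants in Example 4.1.
import numpy as np

b = 0.4
t, w = np.polynomial.hermite.hermgauss(200)
x = np.sqrt(2.0) * t                                  # X ~ N(0,1) nodes
E = lambda g: float(np.sum(w * g) / np.sqrt(np.pi))   # E[g(X)] from values g

f = np.exp(b * x**2 / 2)
print(E(f), (1 - b) ** -0.5)                   # c_0 = (1-b)^{-1/2}
print(E(f * (x**2 - 1)), b * (1 - b) ** -1.5)  # c_2 = E[f(X) H_2(X)]
p = 2.0                                        # p*b = 0.8 < 1
print(E(f ** p), (1 - p * b) ** -0.5)          # E f^p = (1-pb)^{-1/2}
```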

Let $\{X_{i}\}_{i\geq 1} $ be a Gaussian stationary sequence, and assume for simplicity that $r(n)\sim Cn^{-\alpha }$ , for some $\alpha >0$ . Let $\epsilon $ be any strictly positive number; then from Theorem 3.1 we deduce the following.

(a) Short-range dependence (SRD) and finite variance: $2\alpha >1$ and $b<1/2$. We have $n^{-1/2-\epsilon }S_{n}(f-c_{0})\rightarrow0$ a.s. This result is optimal, since we also know that $n^{-1/2}S_{n}(f-c_{0})\rightarrow N(0,\sigma ^{2}(f))$ in law; see e.g. [Reference Breuer and Major11].

(b) SRD and infinite variance: $2\alpha >1$ and $1/2<b<1$ . We have

\begin{equation*}n^{-b-\epsilon }S_{n}(f-c_{0})\rightarrow 0\quad \text{a.s.}\end{equation*}

Let us now control the size of $ S_{n}(f)$ . To do so, we write

\begin{align*}S_{n}(f)=S_{n}\bigl(f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr)+S_{n}\bigl(f{\unicode{x1D7D9}}_{f>n^{b}}\bigr).\end{align*}

First, observe that

\begin{align*}\textrm{Var}\bigl( S_{n}\bigl(f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) \bigr) &=\sum_{k=2}^{+\infty }\dfrac{c_{k}^{2}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) }{k!}\sum_{i,j=1}^{n}r^{k}(i-j) \\&=n\sum_{k=2}^{+\infty }\dfrac{c_{k}^{2}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) }{k!}+\sum_{k=2}^{+\infty }\dfrac{c_{k}^{2}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) }{k!}2\sum_{i=1}^{n}(n-i)r^{k}(i) \\&=n\textrm{Var}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) +R_{n}.\end{align*}

After some computations, one can show that

\begin{align*}\mathbb{E}\bigl(f{\unicode{x1D7D9}}_{f > n^{b}}\bigr) &\sim ( 2^{3/2}\sqrt{\pi }( 1-b)) ^{-1}\,{n}^{b-1}\ln ^{-{{1}/{2}}}n, \\\mathbb{E}\bigl( f^{2}(X){\unicode{x1D7D9}}_{f(X) < n^{b}}\bigr) &\sim {(2\,b-1) }^{-1}{\pi ^{-1/2}}n^{2b-1}\ln ^{-{{1}/{2}}}n.\end{align*}

Therefore we conclude the following:

\begin{equation*}n\textrm{Var}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) \sim ( 2\,b-1)^{-1}{\pi^{-1/2}}n^{2b}\ln ^{-{{1}/{2}}}n.\end{equation*}

Now we get

\begin{equation*}R_{n}\leq \sum_{k=2}^{+\infty }\dfrac{c_{k}^{2}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr)}{k!} | r^{\ast }(1)| ^{k}2n\sum_{i=1}^{n}\biggl| \dfrac{r(i)}{r^{\ast }(1)}\biggr| ^{2},\end{equation*}

where $r^{\ast }(1)=\max_{i\geq 1}| r(i)| $ . By Hölder’s inequality applied to control $c_{k}^{2}\bigl( f{\unicode{x1D7D9}}_{f\leq n^{b}}\bigr) $ , we can show that $R_{n}=o(n^{2b-\delta })$ , for some small $\delta >0$ . Therefore we conclude that

\begin{equation*}S_{n}(f-c_{0})\sim {( 2\,b-1) }^{-1/2}{\pi ^{-1/4}n}^{b} ( \ln ( n ) ) ^{-1/4}\end{equation*}

in probability; again we can see that the result is sharp.

(c) LRD and finite variance: $2\alpha <1$ and $b<1/2$ . We have

\begin{equation*}n^{-{{(2-2\alpha )}/{2}}-\epsilon }S_{n}(f-c_{0})\rightarrow 0\quad \text{a.s.}\end{equation*}

Observe also that $n^{-{{(2-2\alpha )}/{2}}}S_{n}(f-c_{0})\rightarrow Y_{2}(1) $ in law, which shows that the result is sharp.

(d) LRD and infinite variance: $2\alpha <1$ and $1/2 < b < 1$ . We have

\begin{equation*}n^{-\max (b,{{(2-2\alpha )}/{2}})-\epsilon }S_{n}(f-c_{0})\rightarrow0\quad \text{a.s.,}\end{equation*}

and if $b<{{(2-2\alpha )}/{2}}$ then $n^{-{{(2-2\alpha )}/{2}}}S_{n}(f-c_{0})\rightarrow Y_{2}(1)$ in law.

Example 4.2. If we replace $f(\!\cdot\!)$ in Example 4.1 with $f_{2}(x)\equiv x+f(x)$, then the Hermite rank of $f_{2}(\!\cdot\!)$ becomes 1; that is, we strengthen the dependence while keeping the moment properties unchanged. Hence we obtain the following.

  • If $\alpha >1$ and $b<1/2$ , then $n^{-1/2-\epsilon}S_{n}(f_{2}-c_{0})\rightarrow 0$ a.s. and $n^{-1/2}S_{n}(f_{2}-c_{0})\rightarrow N(0,\sigma ^{2}(f))$ in law.

  • If $\alpha >1$ and $1/2<b<1$ , then $n^{-b-\epsilon}S_{n}(f_{2}-c_{0})\rightarrow 0$ a.s.

  • If $\alpha <1$ and $b<1/2$ , then $n^{-{{(2-\alpha )}/{2}}-\epsilon}S_{n}(f_{2}-c_{0})\rightarrow 0$ a.s. and $n^{-{{(2-\alpha )}/{2}}}S_{n}(f_{2}-c_{0})\rightarrow Y_{1}(1)$ in law ( $Y_{1}(1)$ is Gaussian).

  • If $\alpha <1$ and $1/2<b<1$ , then $n^{-\max (b,{{(2-\alpha )}/{2}})-\epsilon }S_{n}(f_{2}-c_{0})\rightarrow 0$ a.s.

Example 4.3. The Lévy $\theta$ -stable distribution. Assume that the marginal distribution of $\{ f(X_{i})\} _{i\geq 1}$ is Lévy $\theta$ -stable, with characteristic function given by

\begin{equation*}\mathbb{E}(\!\exp ({\mathrm{i}} tf(X))) =\exp ( {\mathrm{i}} t\mu -|ct|^{\theta }) ,\end{equation*}

where $\theta \in (1,2]$, $\mu \in \mathbb{R}$, and $ c\in [ 0,+\infty) $. Then it is well known that $\mathbb{E}( | f(X)|^{p}) <\infty $ for all $p<\theta $. Assume that r is integrable (i.e. $\sum_{k} |r(k)| < \infty$); then, with $c_{0}=\mathbb{E}( f(X)) $, we have, according to Theorem 3.1,

\begin{equation*}n^{-1/\theta -\epsilon }S_{n}(f-c_{0})\rightarrow 0\quad \text{a.s.}\end{equation*}

Moreover, if the sequence is independent, then $n^{-1/\theta }S_{n}(f-c_{0})$ converges in law to a $\theta$-stable Lévy distribution. The limit law in the dependent case, when r is merely integrable, is still an open question.
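A sequence of this kind can be produced by subordinating the stable marginal to a Gaussian sequence via the quantile transform $f=F_{\theta }^{-1}\circ \Phi$, as in the sketch below (ours, not from the paper). We assume SciPy's levy_stable parametrization; its ppf is slow, hence the small sample, and the snippet is illustrative only.

```python
# Construction sketch (ours): {f(X_i)} with symmetric theta-stable marginal.
import numpy as np
from scipy.stats import levy_stable, norm

theta = 1.5                            # stability index in (1, 2]
rng = np.random.default_rng(1)
x = rng.standard_normal(500)           # i.i.d. Gaussian: r(k) = 0 for k >= 1
y = levy_stable.ppf(norm.cdf(x), theta, 0.0)   # f = F_theta^{-1} o Phi
s = np.cumsum(y)                       # symmetric case: c_0 = E f(X) = 0
for k in (100, 300, 500):
    print(k, s[k - 1] / k ** (1 / theta))      # n^{-1/theta} S_n stays O(1)
```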

5. Conclusion

For a fairly general model, which is a natural extension of both independent and Gaussian sequences and which can accommodate finite or infinite variance as well as short- or long-range dependence, we have provided a high-order expansion for the empirical mean. We have also given the best rate of convergence in the almost sure sense for each element of the decomposition. In particular, we observe that the rate in the SLLN is governed by the moments of the marginal law if the dependence is weak, and by both the dependence and the moments if the dependence is strong. In fact we have the following: let $Y_{1},\ldots,Y_{n}$ be a sequence of stationary random variables such that $\mathbb E(| Y_{1}|^{p})<\infty $ for some $1 < p\leq 2$. If there exists a positive constant C, independent of n, such that

\begin{equation*}\textrm{Var}\Biggl( \sum_{i=1}^{n}Y_{i}{\unicode{x1D7D9}}_{\{ | Y_{i}| \leq n^{1/p}\} }\Biggr) \leq Cn\textrm{Var}\bigl( Y_{1}{\unicode{x1D7D9}}_{\{ |Y_{1}| \leq n^{1/p}\} }\bigr),\end{equation*}

then the Marcinkiewicz–Zygmund SLLN holds under the same moment condition and with the same rate as in the i.i.d. case. Otherwise the rate of convergence will be slower than in the i.i.d. case. This is true when Y is a function of a Gaussian sequence, but it is not clear whether this can be successfully extended to other models. The proof of such a statement, however, would require a methodology different from that used in this paper, and we leave this problem open for future study.
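The variance-comparison condition displayed above can be probed by simulation; the sketch below (ours, with all parameter choices hypothetical) estimates the ratio of the two variances for $Y_{i}=f(X_{i})$, with f from Example 4.1 ($b=0.6$) and an AR(1) Gaussian sequence.

```python
# Monte Carlo sketch (ours) of the variance-comparison criterion.
import numpy as np

rng = np.random.default_rng(3)
n, phi, b, p, reps = 2000, 0.5, 0.6, 1.5, 400
M = n ** (1 / p)                       # truncation level n^{1/p}
tot, marg = np.empty(reps), []
for rep in range(reps):
    x = np.empty(n)
    x[0] = rng.standard_normal()
    eps = rng.standard_normal(n) * np.sqrt(1 - phi**2)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    y = np.exp(b * x**2 / 2)
    yt = np.where(np.abs(y) <= M, y, 0.0)   # Y_i 1_{|Y_i| <= n^{1/p}}
    tot[rep] = yt.sum()
    marg.append(yt)
ratio = tot.var() / (n * np.concatenate(marg).var())
print(ratio)                           # an estimate of the constant C
```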

6. Mathematical developments

This section is devoted to the proofs of our results. The previously presented notation continues to be used in the following.

Proof of Proposition 2.1. We assume that $\mathbb{E}( f(X_{i}) ) =0$. For $ k=1,2,3,4$, let $S_{k,n}$ denote the sum over k pairwise distinct indices:

\begin{equation*}S_{k,n}=\sum_{1\leq i_{1}\neq i_{2}\neq \cdots \neq i_{k}\leq n}f( X_{i_{1}} )\cdots f( X_{i_{k}} ) .\end{equation*}

To prove Proposition 2.1, we repeatedly apply Lemma 4.1 from [Reference Ben Hariz6]. For completeness, we state and prove this lemma in the Appendix (see also the proof of Proposition 2 in [Reference Ben Hariz7]).

Case 1. Assume that $\mathbb{E}( f^{6} ( X_{i})) <\infty $ and $r^{\ast }(1)\equiv \sup_{k\geq 1}| r(k)|<(8m)^{-1}$ .

$\bullet $ Obviously, we have $\mathbb{E}( S_{1,n}) =n\mathbb{E}(f( X_{0}) ) ^{4}$ . Now we focus on the control of $S_{2,n}$ :

\begin{align*}\mathbb{E}( S_{2,n}) &= 3\sum_{1\leq i_{1}\neq i_{2}\leq n}\mathbb{E}\bigl[ ( f( X_{i_{1}}) ) ^{2}( f(X_{i_{2}}) ) ^{2}\bigr] +4\sum_{1\leq i_{1}\neq i_{2}\leq n}\mathbb{E}\bigl[ ( f( X_{i_{1}}) ) ^{3}f(X_{i_{2}}) \bigr] \\&\equiv 3E_{2,1}+4E_{2,2}.\end{align*}

Here K denotes some constant that may be different from line to line. For the first term, we write

\begin{align*}E_{2,1} &\leq n^{2} ( \mathbb{E}f^{2}( X_{i}) )^{2}+2\sum_{k=1}^{\infty }\dfrac{c_{k}^{2}(f^{2})}{k!}n\sum_{i=1}^{n} | r^{k}(i) | \\&\leq ( \sqrt{n}\| f \| _{2})^{4}+2\sum_{k=1}^{m-1}\dfrac{c_{k}^{2}(f^{2})}{k!}n\sum_{i=1}^{n}| r^{k}(i)| +2\sum_{k=m}^{\infty }\dfrac{c_{k}^{2}(f^{2})}{k!}\Biggl( n\sum_{i=1}^{n}| r^{m}(i)| \Biggr) \\&\leq ( \sqrt{n}\| f \| _{2})^{4}+2\sum_{k=1}^{m-1}\dfrac{c_{k}^{2}(f^{2})}{k!}n\sum_{i=1}^{n}| r^{k}(i)| +2n\| f \| _{4}^{4}\sum_{i=1}^{n}|r^{m}(i)| .\end{align*}

Now, we apply Hölder’s inequality in order to control $c_{k}(f^{2})$ with $f_{1}=f^{2(1-\theta )}$ , $q_{1}=1/(1-\theta )$ , $f_{2}=f^{2\theta}$ , $q_{2}=2/\theta $ , $f_{3}=H_{k}$ , $q_{3}=2/\theta$ . We have

(6.1) \begin{equation}c_{k}(f^{2})=\mathbb{E}( f^{2}( X ) H_{k}( X )) \leq \| f \| _{2}^{2(1-\theta )}\| f \|_{4}^{2\theta }\| H_{k}\| _{2/\theta }. \end{equation}

On the other hand, we get

(6.2) \begin{equation}\sum_{i=1}^{n}| r^{k}(i)| \leq \Biggl( \sum_{i=1}^{n}|r^{m}(i)| \Biggr) ^{k/m}n^{1-k/m}. \end{equation}

Taking $\theta =k/(2m)$ yields

\begin{align*}c_{k}^{2}(f^{2})n\sum_{i=1}^{n}| r^{k}(i)| &\leq \Biggl(\sum_{i=1}^{n}| r^{m}(i)| \Biggr) ^{k/m}n^{2-k/m}\|f\| _{2}^{4(1-\theta )}\| f \| _{4}^{4\theta }\|H_{k}\| _{2/\theta }^{2} \\&\leq \Biggl( \sum_{i=1}^{n}| r^{m}(i)| \Biggr) ^{k/m}(\sqrt{n}\| f \| _{2}) ^{4(1-\theta )}\| f \|_{4}^{4\theta }\| H_{k}\| _{2/\theta }^{2} \\&\leq K(r,m,k)\bigl( ( \sqrt{n}\| f \| _{2})^{4}+n\| f \| _{4}^{4}\bigr) .\end{align*}

Hence we obtain

(6.3) \begin{equation}E_{2,1}\leq K(r,m)\bigl( ( \sqrt{n}\| f \| _{2})^{4}+n\| f \| _{4}^{4}\bigr) . \end{equation}

We obtain the same bound for $E_{2,2}$. Indeed, since $\| f \|_{6}<\infty $, we have

\begin{equation*}\sum_{1\leq i_{1}\neq i_{2}\leq n}\bigl| \mathbb{E}\bigl[ ( f(X_{i_{1}}) ) ^{3}f( X_{i_{2}}) \bigr] \bigr| \leq2\sum_{k=m}^{\infty }\dfrac{c_{k}(f^{3})c_{k}(f)}{k!}n\sum_{i=1}^{n}|r^{k}(i)| .\end{equation*}

Now, once again, apply Hölder’s inequality to $f_{1}=f^{3(1-\theta)}$, $q_{1}=2/( 3(1-\theta )) $, $f_{2}=f^{3\theta}$, $q_{2}=4/(3\theta) $, $f_{3}=H_{k}$, $q_{3}=4/( 3\theta -2)$, $2/3<\theta <1$. For example, take $\theta =14/15$, in which case we have $q_{3}=5$. Therefore we get

\begin{equation*}| c_{k}(f^{3})| =| \mathbb{E}( f^{3}(X)H_{k}(X))| \leq \| f \| _{2}^{3(1-\theta )}\| f \|_{4}^{3\theta }\| H_{k}\| _{q_{3}}.\end{equation*}

Using the fact that

(6.4) \begin{equation}\| H_{k}\| _{q_{3}}\leq \sqrt{k!}( q_{3}) ^{k/2}\end{equation}

(see e.g. [Reference Ben Hariz7] or [Reference Taqqu32]), we deduce

(6.5) \begin{align}E_{2,2} &\leq 2\| f \| _{2}^{4-3\theta }\| f \|_{4}^{3\theta }\sum_{k=m}^{\infty }( q_{3})^{k/2}n\sum_{i=1}^{n}| r^{k}(i)| \notag \\&\leq 2( \sqrt{n}\| f \| ) ^{4-3\theta }(n^{1/4}\| f \| _{4}) ^{3\theta }\sum_{k=m}^{\infty }(q_{3}) ^{k/2}\sum_{i=1}^{n}| r^{k}(i)|. \end{align}

The right-hand side of equation (6.5) is bounded by

\begin{align*}K(r,m)\bigl( ( \sqrt{n} \| f \| _{2}) ^{4}+n\| f \| _{4}^{4}\bigr)\end{align*}

as soon as $\sum_{i=1}^{\infty }| r^{m}(i)| <\infty $ and $r^{\ast}(1)\equiv \sup_{k\geq 1}| r(k)| <1/5$. From (6.3) and (6.5), we infer

\begin{equation*}\mathbb{E}( S_{2,n}) \leq K(r,m)\bigl( ( \sqrt{n}\|f\| _{2}) ^{4}+n\| f \| _{4}^{4}\bigr) .\end{equation*}

$\bullet $ Using Lemma 4.1 of [Reference Ben Hariz6, page 101], we get

(6.6) \begin{equation}\mathbb{E}( S_{3,n}) \leq K(r,m)\bigl( ( \sqrt{n}\|f\| _{2}) ^{4}+n\| f \| _{4}^{4}\bigr) . \end{equation}

(Lemma 4.1 is stated as Lemma A.1 hereafter. Its proof is given at the end of the paper for the reader’s convenience.) Indeed, by this lemma,

\begin{align*}\mathbb{E}( S_{3,n}) &\leq \sum_{k=m(f^{2})}^{\infty }{\dfrac{{|c_{k}}( f^{2}) {|}}{\sqrt{k!}}}\Biggl(4n\sum_{i=1}^{n}|r^{k}(i)|\Biggr) ^{1/2}2^{k/2} \\&\quad \times\Biggl( \sum_{k=m(f)}^{\infty }{\dfrac{{|c_{k}}( f ) {|}}{\sqrt{k!}}}\Biggl( 4n\sum_{i=1}^{n}|r^{k}(i)|\Biggr) ^{1/2}2^{k/2}\Biggr)^{2}.\end{align*}

Now, by applying (6.1) with $\theta =(2m)^{-1}$ and by (6.2) and (6.4), we deduce

\begin{align*}&\sum_{k=m(f^{2})}^{\infty }{\dfrac{{|c_{k}}( f^{2}) {|}}{\sqrt{k!}}}\Biggl( 4n\sum_{i=1}^{n}|r^{k}(i)|\Biggr) ^{1/2}2^{k/2} \\&\quad \leq 2\| f \| _{2}^{2(1-\theta )}\| f \| _{4}^{2\theta}\sum_{k=1}^{\infty }{\dfrac{\| H_{k}\| _{4m}}{\sqrt{k!}}}( 2r^{\ast }(1)) ^{k/2}( \sqrt{n}) ^{2-1/m}\Biggl(\sum_{i=1}^{n}\biggl| \dfrac{r(i)}{r^{\ast }(1)}\biggr| ^{m}\Biggr) ^{1/(2m)}.\end{align*}

Therefore we obtain

\begin{align*}\mathbb{E}( S_{3,n}) &\leq 2{n}^{1-1/2m}( \|f\| _{2}) ^{2-1/m}\| f \| _{4}^{2/m}\sum_{k=1}^{\infty}{\dfrac{\| H_{k}\| _{4m}}{\sqrt{k!}}}( 2r^{\ast }(1))^{k/2}\Biggl( \sum_{i=1}^{n}\biggl| \dfrac{r(i)}{r^{\ast }(1)}\biggr|^{m}\Biggr) ^{1/(2m)} \\&\quad \times \Biggl( \sum_{k=m(f)}^{\infty }\sqrt{n}\| f \| _{2}(2r^{\ast }(1)) ^{k/2}\Biggl( 4\sum_{i=1}^{n}\biggl| \dfrac{r(i)}{r^{\ast}(1)}\biggr| ^{m}\Biggr) ^{1/2}\Biggr) ^{2} \\&\leq 8\bigl( \max \bigl( \sqrt{n}\| f \| _{2},n^{1/4}\|f\| _{4}\bigr) \bigr) ^{4}\sum_{k=1}^{\infty }( 8mr^{\ast}(1)) ^{k/2}\Biggl( \sum_{i=1}^{n}\biggl| \dfrac{r(i)}{r^{\ast }(1)}\biggr| ^{m}\Biggr) ^{1/(2m)} \\&\quad \times \Biggl( \sum_{k=1}^{\infty }(2r^{\ast }(1)) ^{k/2}\Biggr) ^{2}\Biggl( \sum_{i=1}^{n}\biggl| \dfrac{r(i)}{r^{\ast }(1)}\biggr| ^{m}\Biggr) .\end{align*}

Hence (6.6) is proved as soon as $8mr^{\ast }(1)<1$ .

$\bullet $ Again by Lemma 4.1 of [Reference Ben Hariz6, page 101], we infer

\begin{equation*}\mathbb{E}( S_{4,n}) \leq K(r,m)( \sqrt{n}\| f \|_{2}) ^{4}.\end{equation*}

This completes the proof in the case when $f^{3}(\!\cdot\!)$ can be expanded in terms of Hermite polynomials, namely when $\| f \| _{6}$ is finite.

Case 2. Assume only that $\mathbb{E}( f^{6}( X_{i})) <+\infty $. We split the sample into blocks such that elements within the same block are at least T apart, where T is chosen large enough that $8mr^{\ast }(T)<1$, with $r^{\ast }(T)\equiv \sup_{k\geq T}| r(k)|$. Then we apply the first case to conclude.

General case. For the general case, i.e. when only $\| f \| _{4}$ is assumed finite, we proceed as follows. For a real-valued function $ f(\!\cdot\!) $ and a positive real number M, we define the following decomposition:

\begin{equation*} f = f {\unicode{x1D7D9}}_{\{|f| \leq M\}} + f {\unicode{x1D7D9}}_{\{|f| > M\}} \equiv f_{M} + \tilde{f}_{M}, \end{equation*}

where $ f_{M}(\!\cdot\!) $ represents the truncated part of $ f (\!\cdot\!)$ , and $ \tilde{f}_{M} (\!\cdot\!)$ corresponds to the remaining part. For $ m \in \mathbb{N} $ , we further define

\begin{align*} f_{M}^{m} &\equiv f_{M} - \sum_{l=0}^{m-1} \dfrac{c_{l}(M)}{l!} H_{l}, \\ c_{l}(M) &\equiv \mathbb{E} [ f_{M}(X) H_{l}(X) ], \\ \tilde{c}_{l}(M) &\equiv \mathbb{E} | \tilde{f}_{M}(X) H_{l}(X) |. \end{align*}

Since

\begin{align*}f=f_{M}^{m}+\sum_{l=0}^{m-1}\dfrac{c_{l}( M) }{l{!}}H_{l}+\tilde{f}_{M},\end{align*}

we have

\begin{equation*}\| S_{n}(f)\| _{4}\leq \bigl\| S_{n}\bigl( f_{M}^{m}\bigr)\bigr\| _{4}+\sum_{l=1}^{m-1}\dfrac{| c_{l}( M)| }{l{!}}\| S_{n}( H_{l}) \| _{4}+\bigl\|S_{n}\bigl( \tilde{f}_{M}\bigr) \bigr\| _{4}.\end{equation*}

We apply the previous case to $f_{M}^{m}(\!\cdot\!)$, and we let M go to infinity to end the proof. Indeed, observe that for $ l < m $, since $ c_{l} = 0 $, we have

\begin{align*}0 = c_{l} = \mathbb{E}[ f_{M}(X) H_{l}(X) + \tilde{f}_{M}(X) H_{l}(X) ].\end{align*}

This implies that

\begin{align*}c_{l}(M) = \mathbb{E}[ f_{M}(X) H_{l}(X) ] = -\mathbb{E}[ \tilde{f}_{M}(X) H_{l}(X) ].\end{align*}

Furthermore, as $ M \to \infty $ , we have $ c_{l}(M) \to 0 $ , by the dominated convergence theorem.
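Before turning to the proof of Theorem 2.2, here is a small numerical check (ours) of the mechanism just used: for $f=H_{3}$, of Hermite rank 3, the truncated part $f_{M}$ acquires a non-zero first coefficient $c_{1}(M)=-\mathbb{E}(\tilde{f}_{M}(X)H_{1}(X))$, which vanishes as $M\to\infty$.

```python
# Quadrature sketch (ours): c_1(M) and its bound c~_1(M) vanish as M grows.
import numpy as np

t, w = np.polynomial.hermite.hermgauss(300)
x = np.sqrt(2.0) * t
E = lambda g: float(np.sum(w * g) / np.sqrt(np.pi))   # E[g(X)] from values

f = x**3 - 3 * x                       # H_3: Hermite rank m = 3, c_1 = 0
for M in (1.0, 5.0, 20.0):
    fM = np.where(np.abs(f) <= M, f, 0.0)             # truncated part f_M
    print(M, E(fM * x), E(np.abs((f - fM) * x)))      # c_1(M), c~_1(M)
```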

Proof of Theorem 2.2. Assume without loss of generality that $\mathbb{E}(f(X))=0$. We shall proceed by induction on N. Assume that there exists $K=K(r,m)>1$ such that, for any $f(\!\cdot\!)$ with Hermite rank greater than or equal to m and any $N^{\prime} < N$, we have

(6.7) \begin{equation}\mathbb{E}| M_{N^{\prime }}(f)| ^{4}\leq K^{4}\bigl[ ( \sqrt{N^{\prime }}\| f \| _{2}) ^{4}+N^{\prime }\| f \|_{4}^{4}\bigr] . \end{equation}

We will prove that (6.7) remains true for N. In what follows, $K_{1}$, $K_2$, and $K_{3}$ are constants independent of N that may be different from line to line. Now we focus on $\| M_{N}(f)\| _{4}$. Since

\begin{align*}f=f_{M}^{m}+\sum_{l=0}^{m-1}\dfrac{c_{l}( M ) }{l{!}}H_{l}+\tilde{f}_{M}.\end{align*}

we have

\begin{align*}\| M_{N}(f)\| _{4} &\leq \| M_{N}(f_{M}^{m})\|_{4}+\sum_{l=0}^{m-1}\dfrac{| c_{l}( M ) | }{l{!}}\| M_{N}( H_{l}) \| _{4}+\| M_{N}( \tilde{f}_{M}) \| _{4} \\&\leq \| M_{N}(f_{M}^{m})\| _{4}+\sum_{l=0}^{m-1}\dfrac{| c_{l}( M ) | }{l{!}}\| M_{N}( H_{l})\| _{4}+\| S_{N}( | \tilde{f}_{M}| )\| _{4} \\&\leq \| M_{N}(f_{M}^{m})\| _{4}+\sum_{l=0}^{m-1}\dfrac{| c_{l}( M ) | }{l{!}}\| M_{N}( H_{l})\| _{4} \\&\quad +\sum_{l=0}^{m-1}\dfrac{\tilde{c}_{l}( M ) }{l{!}}\|S_{N}( H_{l}) \| _{4}+ \bigl\| S_{N}\bigl( | \tilde{f}_{M}| ^{m}\bigr) \bigr\| _{4}.\end{align*}

For $l<m$, since

\begin{align*}\mathbb{E}[ f_{M}(X)H_{l}(X) ] +\mathbb{E}[ \tilde{f}_{M}(X)H_{l}(X)] =c_{l}=0,\end{align*}

we have

\begin{align*}|c_{l}( M ) | \leq \tilde{c}_{l}( M ).\end{align*}

Therefore, we obtain

(6.8) \begin{align}\| M_{N}(f)\| _{4} &\leq \| M_{N}(f_{M}^{m})\|_{4}+2\sum_{l=0}^{m-1}\dfrac{\tilde{c}_{l}( M ) }{l{!}}\| M_{N}( H_{l}) \| _{4}+\bigl\| S_{N}\bigl( |\tilde{f}_{M}| ^{m}\bigr) \bigr\| _{4} \notag \\&\equiv A+ 2\sum_{l=0}^{m-1}B_{l} + C.\end{align}

Control of A. Let

\begin{align*}M(m, N, f) \equiv\max_{m < k\leq N}\Biggl| \sum_{i=m+1}^{k}f( X_{i}) \Biggr|.\end{align*}

First we prove that, for $0<j<N$,

(6.9) \begin{equation}| M_{N}(f)| ^{4}\leq | M_{j}(f)| ^{4}+| M(j,N,f) | ^{4}+\sum_{k+l=4,k,l\neq 0}C_{k,l}|S_{j}(f)| ^{k}| M( j,N,f) | ^{l}, \end{equation}

where $C_{k,l}\equiv (k+l)!(k!l!)^{-1}$ . Indeed, for n such that $j < n \leq N$ , we write

\begin{align*}| S_{n}(f)| ^{4} &\leq ( |S_{j}(f)|+|M( j,N,f)|) ^{4} \\&\leq | S_{j}(f)| ^{4}+| M( j,N,f) |^{4}+\sum_{k+l=4,k,l\neq 0}C_{k,l}| S_{j}(f)| ^{k} | M(j,N,f) | ^{l}.\end{align*}

For $n\leq j$, we have $| S_{n}(f)| ^{4}\leq |M_{j}(f)| ^{4}$; then (6.9) follows. By the stationarity of the underlying sequence and (6.9), using Hölder’s inequality, we obtain

(6.10) \begin{align}\mathbb{E}| M_{N}(f)| ^{4} &\leq \mathbb{E}|M_{j}(f)| ^{4}+\mathbb{E}| M_{N-j}(f)| ^{4} \notag \\&\quad +\sum_{k+l=4,k,l\neq 0}C_{k,l}( \mathbb{E}|S_{j}(f)| ^{4}) ^{k/4}( \mathbb{E}| M_{N-j}(f)|^{4}) ^{l/4}. \end{align}

Applying the induction hypothesis to $f_{M}^{m}$, we obtain

\begin{equation*}\mathbb{E}| M_{j}(f_{M}^{m})| ^{4}\leq K^{4}\bigl[ \bigl( \sqrt{j}\| f_{M}^{m}\| _{2}\bigr) ^{4}+j\| f_{M}^{m}\|_{4}^{4}\bigr] .\end{equation*}

Keep in mind that

\begin{align*}\| f_{M}^{m}\| _{2} &\leq \| f_{M}\| _{2}\leq \|f\| _{2}, \\\| f_{M}^{m}\| _{4} &\leq \| f_{M}\|_{4}+\sum_{l=0}^{m-1}\biggl\| \dfrac{c_{l}( M ) }{l{!}}H_{l}\biggr\| _{4} \\&\leq \sqrt{M\| f \| _{2}}\Biggl( 1+\sum_{l=0}^{m-1}\biggl\|\dfrac{H_{l}}{\sqrt{l{!}}}\biggr\| _{4}\Biggr) \\&\equiv C(m)\sqrt{M\| f \| _{2}}.\end{align*}

Then we infer

\begin{equation*}\mathbb{E}| M_{j}(f_{M}^{m})| ^{4}\leq K^{4}\bigl[ \bigl( \sqrt{j}\| f \| _{2}\bigr) ^{4}+j\bigl( C(m)\sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] .\end{equation*}

Similarly, we have

\begin{equation*}\mathbb{E}| M_{N-j}(f_{M}^{m})| ^{4}\leq K^{4}\bigl[ \bigl( \sqrt{N-j}\| f \| _{2}\bigr) ^{4}+( N-j) \bigl( C(m)\sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] .\end{equation*}

From (2.3), we deduce

\begin{align*}\mathbb{E}| S_{j}(f_{M}^{m})| ^{4} &\leq K_{1}^{4}\bigl[ \bigl(\sqrt{j}\| f_{M}^{m}\| _{2}\bigr) ^{4}+j\|f_{M}^{m}\| _{4}^{4}\bigr] \\&\leq K_{1}^{4}\bigl[ \bigl( \sqrt{j}\| f \| _{2}\bigr)^{4}+jC(m)\bigl( \sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] .\end{align*}

Hence, from (6.10) we get

\begin{align*}A^{4} &\leq K^{4}\bigl[ \bigl( \sqrt{j}\| f \| _{2}\bigr)^{4}+j\bigl( C(m)\sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] \\&\quad +K^{4}\bigl[ \bigl( \sqrt{N-j}\| f \| _{2}\bigr) ^{4}+\bigl(N-j\bigr) \bigl( C(m)\sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] \\&\quad +\sum_{k+l=4,k,l\neq 0}C_{k,l}\bigl( K_{1}^{4}\bigl[ \bigl( \sqrt{j}\| f \| _{2}\bigr) ^{4}+j\bigl( C(m)\sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] \bigr) ^{k/4} \\&\quad\quad \times\bigl( K^{4}\bigl[ \bigl( \sqrt{N-j}\| f \| _{2}\bigr) ^{4}+( N-j)\bigl( C(m)\sqrt{M\| f \| _{2}}\bigr) ^{4}\bigr] \bigr) ^{l/4}.\end{align*}

Let $j= \lceil{N/2} \rceil $ be the smallest integer greater than or equal to $N/2$, and let $M=\delta \sqrt{j}\|f\| _{2}$. Then

\begin{align*}A^{4} &\leq K^{4}\bigl[ \bigl( \sqrt{N}\| f \| _{2}\bigr)^{4}2^{-1}(1+C^{4}(m)\delta ^{2})\bigr] \\&\quad +K^{3}\bigl( \sqrt{N}\| f \| _{2}\bigr)^{4}2^{-1}(1+C^{4}(m)\delta ^{2})\sum_{k+l=4,k,l\neq 0}C_{k,l}\bigl(K_{1}^{4}\bigr) ^{k/4} \\&\leq K^{4}\bigl[ \bigl( \sqrt{N}\| f \| _{2}\bigr)^{4}2^{-1}(1+C^{4}(m)\delta ^{2})\bigr] \\&\quad +K^{3}\bigl( \sqrt{N}\| f \| _{2}\bigr)^{4}2^{-1}(1+C^{4}(m)\delta ^{2})( 1+K_{1}) ^{4}.\end{align*}

Finally, we derive

(6.11) \begin{equation}A\leq \bigl( \sqrt{N}\| f \| _{2}\bigr) K\bigl[ (1+C^{4}(m)\delta^{2})^{1/4}( 2^{-1/4}+K^{-1/4}( 1+K_{1})) \bigr] .\end{equation}

Control of $B_{l}$ . For $l=0$ ,

\begin{equation*}B_{0}\leq N\mathbb{E}| \tilde{f}_{M}(X)| \leq \sqrt{4N}\delta^{-1}\| f \| _{2}.\end{equation*}

For $0<l<m$ , relation (2.2) combined with Hölder’s inequality yields

\begin{equation*}\| M_{N}( H_{l}) \| _{4}\leq K_{2}\Biggl(\sum_{i=-N}^{N}|r(i)|^{m}\Biggr) ^{l/2m}N^{1-l/2m}.\end{equation*}

Let $\bar{p}={{2m}/{(2m-l)}}$ and $\bar{q}={{2m}/{l}}$ . By Hölder’s inequality, we obtain

\begin{equation*}\tilde{c}_{l}( M ) \leq M^{1-2/\bar{p}}\| f \| _{2}^{2/\bar{p}}\| H_{l}\| _{\bar{q}}\leq 2 ( \delta \sqrt{N})^{1-2/\bar{p}}\| f \| _{2}\| H_{l}\| _{\bar{q}}.\end{equation*}

Therefore we have

\begin{align*}B_{l} &\leq \dfrac{\tilde{c}_{l}( M ) }{l{!}}\| M_{N}(H_{l}) \| _{4} \\&\leq K_{2}\bigl( \delta \sqrt{N}\bigr) ^{1-2/\bar{p}}\| f \|_{2}\Biggl( \sum_{i=-N}^{N}|r(i)|^{m}\Biggr) ^{1/2}N^{1-l/2m}.\end{align*}

Since $1-1/\bar{p}-l/2m=0$ , we get

\begin{equation*}B_{l}\leq K_{2}\delta ^{-1}\| f \| _{2}\Biggl(N\sum_{i=-N}^{N}|r(i)|^{m}\Biggr) ^{1/2}. \end{equation*}

Hence we obtain the bound

(6.12) \begin{align}B &\equiv 2\sum_{l=0}^{m-1}B_{l} \notag \\&\leq \sqrt{16N}\delta ^{-1}\| f \| _{2}+2K_{2}\delta^{-1}\| f \| _{2}\Biggl( N\sum_{i=-N}^{N}|r(i)|^{m}\Biggr) ^{1/2}\notag \\&\leq \sqrt{N}\| f \| _{2}K_{2}\delta ^{-1}. \end{align}

Control of C. By Proposition 2.1, we have

\begin{equation*}C\leq K_{3}\bigl( \bigl( \sqrt{N}\bigl\| | \tilde{f}_{M}|^{m}\bigr\| _{2}\bigr) +N^{1/4}\bigl\| | \tilde{f}_{M}|^{m}\bigr\| _{4}\bigr) .\end{equation*}

Since $\bigl\| | \tilde{f}_{M}| ^{m}\bigr\| _{2}\leq \|f\| _{2}$ and $\bigl\| | \tilde{f}_{M}| ^{m}\bigr\|_{4}\leq D(m)\| f \| _{4}$ , we get

(6.13) \begin{equation}C\leq K_{3}\bigl( \bigl( \sqrt{N}\| f \| _{2}\bigr)+N^{1/4}D(m)\| f \| _{4}\bigr) . \end{equation}

Combining (6.8), (6.11), (6.12), and (6.13), we obtain

\begin{align*}\| M_{N}(f)\| _{4} &\leq A+B+C \\&\leq \bigl( \sqrt{N}\| f \| _{2}\bigr) K\bigl[(1+C^{4}(m)\delta ^{2})^{1/4}\bigl( 2^{-1/4}+K^{-1/4}( 1+K_{1})\bigr) \bigr] \\&\quad +\sqrt{N}\| f \| _{2}\bigl[ K_{2}\delta ^{-1}+K_{3}\bigr]+K_{3}\bigl( N^{1/4}D(m)\| f \| _{4}\bigr) \\&\leq K\bigl( \sqrt{N}\| f \| _{2}\bigr) \bigl\{(1+C^{4}(m)\delta ^{2})^{1/4}\bigl( 2^{-1/4}+K^{-1/4}( 1+K_{1})\bigr) \\&\quad + K^{-1}\bigl[ K_{2}\delta ^{-1}+K_{3}\bigr] \bigr\}+K_{3}\bigl( N^{1/4}D(m)\| f \| _{4}\bigr) .\end{align*}

Finally, choose K large and $\delta $ small such that

\begin{align*}(1+C^{4}(m)\delta^{2})^{1/4}( 2^{-1/4}+K^{-1/4}( 1+K_{1}) ) +K^{-1}\bigl[ K_{2}\delta ^{-1}+K_{3}\bigr] <1,\end{align*}

and $K_{3}K^{-1}D(m)\leq 1$ , to obtain the desired result.

Proof of Lemma 3.1. Let $M=n^{1/p}$ and, as in the proof of Proposition 2.1, write

\begin{align*} f&= f{\unicode{x1D7D9}}_{\{|f|\leq M\}}+f{\unicode{x1D7D9}}_{\{|f|>M\}}\equiv f_{M}+\tilde{f}_{M},\\f_{M}^{m}&\equiv f_{M}-\sum_{l=0}^{m-1}\dfrac{c_{l}(M) }{l{!}}H_{l} ,\end{align*}

where

\begin{align*}c_{l}( M ) \equiv \mathbb{E}[ f_{M}(X)H_{l}(X) ].\end{align*}

Then, by the union bound inequality, we have

\begin{align*}\mathbb{P}(M_{n}(f) > \varepsilon n^{1/p}) &\leq \mathbb{P}\bigl(M_{n}\bigl( f{\unicode{x1D7D9}}_{| f | \leq M}-\mathbb{E}\bigl(f{\unicode{x1D7D9}}_{| f | \leq M}\bigr)\bigr) > (\varepsilon /2)n^{1/p}\bigr) \\&\quad +\mathbb{P}\bigl(M_{n}\bigl(f{\unicode{x1D7D9}}_{| f | > M}-\mathbb{E}\bigl(f{\unicode{x1D7D9}}_{|f| > M}\bigr)\bigr) > (\varepsilon /2)n^{1/p}\bigr) \\&\equiv E_{1}+E_{2}.\end{align*}

We will control the term $E_1$ in the previous equation using the maximal inequalities established earlier. To handle the term $E_2$ , we leverage the moment condition $f\in\mathbb L^p$ , which justifies our choice of the truncation level $M=n^{1/p}$ . Since the truncated function $f_M$ may have a Hermite rank lower than m, controlling $E_1$ requires a two-step approach: first we address the initial terms in the expansion of $f_M$ , and then we manage the remainder.

Control of ${E}_{{1}}$. By the union bound in combination with the Markov inequality, we infer that

\begin{align*}E_{1} &=\mathbb{P}\bigl(M_{n}\bigl( f{\unicode{x1D7D9}}_{| f | \leq M}-c_{0}(M)\bigr) > (\varepsilon /2)n^{1/p}\bigr) \\&\leq \sum_{l=1}^{m-1}\mathbb{P}\biggl( \biggl| M_{n}\biggl( {\dfrac{c_{l}( M ) }{l{!}}}H_{l}\biggr) \biggr| > \varepsilon n^{1/p}/(2m)\biggr) +\mathbb{P}\bigl( \bigl| M_{n}\bigl( f_{M}^{m}\bigr)\bigr| > \varepsilon n^{1/p}/(2m)\bigr) \\&\leq \sum_{l=1}^{m-1}\biggl( \dfrac{2mc_{l}( M ) }{l{!}\varepsilon n^{1/p}}\biggr) ^{4}\mathbb{E}( | M_{n}(H_{l}) | ) ^{4}+\biggl( \dfrac{2m}{\varepsilon n^{1/p}}\biggr) ^{4}\mathbb{E}\bigl( \bigl| M_{n}\bigl( f_{M}^{m}\bigr) \bigr|\bigr) ^{4} \\&\equiv \sum_{l=1}^{m}E_{1,l}.\end{align*}

For $l < m$ , we use (2.2) to infer that

(6.14) \begin{align}\mathbb{E}( | M_{n}( H_{l}) | ) ^{4} &\leq K\Biggl( n\sum_{i=0}^{n}|r(i)|^{l}\Biggr) ^{2} \notag \\&\leq Kn^{2}\Biggl( \sum_{i=0}^{n}|r(i)|^{m}\Biggr) ^{2l/m}n^{2(1-l/m)}.\end{align}

Now we control $c_{l}( M)$. First, since $c_{l}=0$ for $l<m$, we have

\begin{equation*}c_{l}( M ) =\mathbb{E}( f_{M}( X ) H_{l}(X) ) =-\mathbb{E}( \tilde{f}_{M}( X ) H_{l}(X) ) .\end{equation*}

Let $p^{\prime} $ and $q^\prime $ be such that $1 < p^{\prime} < p$ and $1/p^\prime+1/q^\prime =1$. Using Hölder’s inequality yields

(6.15) \begin{align}| c_{l}( M ) | &\leq \| \tilde{f}_{M}(X) \| _{p^{\prime }}\| H_{l}( X ) \|_{q^{\prime }} \notag \\&\leq M^{1-p/p^{\prime }}\| \tilde{f}_{M}( X ) \|_{p}^{p/p^{\prime }}\| H_{l}( X ) \| _{q^{\prime }}.\end{align}

Combining (6.14) and (6.15), we get

\begin{align*}E_{1,l} &\leq K\biggl( \dfrac{2m}{l{!}\varepsilon n^{1/p}}M^{1-p/p^{\prime}}\| \tilde{f}_{M}( X ) \| _{p}^{p/p^{\prime }}\|H_{l}( X ) \| _{q^{\prime }}\biggr) ^{4}n^{4-2l/m}\Biggl(\sum_{i=0}^{n}|r(i)|^{m}\Biggr) ^{2l/m} \\&\leq K(m,\varepsilon )( \| f( X ) \|_{p}^{p/p^{\prime }}\| H_{l}( X ) \| _{q^{\prime}}) ^{4}n^{4( 1-1/p^{\prime }) -2l/m}\Biggl(\sum_{i=0}^{n}|r(i)|^{m}\Biggr) ^{2l/m}.\end{align*}

Choosing $ p^\prime $ close enough to 1 gives

\begin{equation*}E_{1,l}\leq K(m,\varepsilon ,f)n^{-\delta },\end{equation*}

for some $\delta >0$ . To control the last term, we use the maximal inequality (6.7) to write

\begin{align*}E_{1,m} &\leq \biggl( \dfrac{2m}{\varepsilon n^{1/p}}\biggr) ^{4}\mathbb{E}\bigl( \bigl| M_{n}\bigl( f_{M}^{m}\bigr) \bigr| \bigr) ^{4} \\&\leq \biggl( \dfrac{2m}{\varepsilon n^{1/p}}\biggr) ^{4}K\bigl( \bigl( \sqrt{n}\| f{\unicode{x1D7D9}}_{| f | \leq M}\| _{2}\bigr) ^{4}+\bigl( \sqrt{n}\| f{\unicode{x1D7D9}}_{| f | \leq M}\| _{2}\bigr) ^{2}M^{2}\bigr) \\&\leq \biggl( \dfrac{2m}{\varepsilon n^{1/p}}\biggr) ^{4}Kn\mathbb{E}\bigl( f^{2}{\unicode{x1D7D9}}_{| f | \leq M}\bigr) M^{2}.\end{align*}

Therefore we obtain

\begin{align*}\sum_{n=1}^{\infty }n^{-1}E_{1,m} &\leq K(m,\varepsilon,r)\sum_{n=1}^{\infty }n^{-2/p}\mathbb{E}\bigl( f^{2}{\unicode{x1D7D9}}_{| f | \leq M}\bigr) \\&\leq K(m,\varepsilon ,r)\sum_{k=0}^{\infty }\mathbb{E}\bigl( f^{2}{\unicode{x1D7D9}}_{k\leq| f | ^{p} < k+1}\bigr) \sum_{n\geq k+1}n^{-2/p} \\&\leq K(m,\varepsilon ,r)\sum_{k=0}^{\infty }( k+1) \mathbb{E}\bigl( {\unicode{x1D7D9}}_{k\leq | f | ^{p} < k+1}\bigr) ,\end{align*}

where $K(m,\varepsilon ,r)$ is a constant independent of n but depending on $m$, $\varepsilon $, and r.

Control of ${E}_{{2}}$ . First observe by the Markov inequality that

\begin{equation*}E_{2} =\mathbb{P}\bigl( M_{n}\bigl( f{\unicode{x1D7D9}}_{| f | >M}-\mathbb{E}\bigl(f{\unicode{x1D7D9}}_{| f | >M}\bigr)\bigr)>(\varepsilon /2)n^{1/p}\bigr)\leq 4\varepsilon ^{-1}n^{-1/p}n\mathbb{E}| f{\unicode{x1D7D9}}_{| f |>M}| .\end{equation*}

Therefore we get

\begin{align*}\sum_{n=1}^{\infty }n^{-1}E_{2} &\leq \sum_{n=1}^{\infty }4\varepsilon^{-1}n^{-1/p}\mathbb{E}| f{\unicode{x1D7D9}}_{| f | > M}| \\&\leq 4\varepsilon ^{-1}\sum_{k=1}^{\infty }\sum_{n\leq k}n^{-1/p}\mathbb{E}| f{\unicode{x1D7D9}}_{k\leq | f | ^{p} < k+1}| \\&\leq K\varepsilon ^{-1}\sum_{k=1}^{\infty }( k+1) \mathbb{E}\bigl( {\unicode{x1D7D9}}_{k\leq| f | ^{p} < k+1}\bigr) .\end{align*}

Hence, for all $\varepsilon >0$ , we conclude that

\begin{equation*}\sum_{n=1}^{\infty }n^{-1}\mathbb{P}( M_{n}( f ) > \varepsilon n^{1/p}) < K\Biggl( \sum_{ n=1}^{\infty }n^{-1-\delta }+\sum_{k=1}^{\infty }( k+1) \mathbb{E}\bigl( {\unicode{x1D7D9}}_{k\leq| f | ^{p}< k+1}\bigr) \Biggr) .\end{equation*}

The right-hand side of the above display is finite as soon as $f\in \mathbb{L}^{p}$ .

Proof of Lemma 3.2. Using the maximal inequality (2.2) in connection with the Markov inequality, we readily infer

\begin{equation*}\mathbb{P}( M_{n}( H_{k}) >\varepsilon n^{\beta}D_{n,k}) \leq ( \varepsilon n^{\beta }D_{n,k}) ^{-4}\mathbb{E}( M_{n}( H_{k}) ) ^{4}\leq K( \varepsilon n^{\beta }D_{n,k}) ^{-4}D_{n,k}^{4},\end{equation*}

which proves the lemma.

Appendix A.

Lemma A.1. Let $f_{1},\ldots, f_{p}$ be real centered functions and let $\| f \| _{r,p}$ be defined by

\begin{equation*}\| f \| _{r,p}=\sum_{k=m(f)}^{\infty }{\dfrac{{|c_{k}(f)|}}{\sqrt{k!}}}(p-1)^{k/2}\Biggl( 4\sum_{i=1}^{n}|r^{k}(i)|\Biggr) ^{1/2}. \end{equation*}

Then

\begin{equation*}\sum_{\mathbf{i}\in N(p)}\Biggl| \mathbb{E}\Biggl( \prod_{l=1}^{p}f_{l}(X_{i_{l}}) \Biggr) \Biggr| \leq \prod_{l=1}^{p}\sqrt{n}\|f_{l}\| _{r,p}, \end{equation*}

where

(A.1) \begin{equation}N(p)=\{ \mathbf{i}=(i_{1},\ldots,i_{p})\colon 1\leq i_{k}\leq n;\ i_{k}\neq i_{l}\ \text{if}\ k\neq l\} .\end{equation}

In this part we prove Lemma A.1. First we recall the diagram formula and some related notions.

The diagram technique

Let $k_1,\ldots,k_p$ be positive integers, and let V be a set of points of cardinality $k_1+\cdots+k_p$. Writing G(V) for the set of graphs with vertex set V, an undirected graph of type $\Gamma (k_1,\ldots,k_p)$ is an element of G(V) satisfying the following.

  (i) V is the union of p disjoint levels with respective cardinalities $k_{1},\ldots,k_{p}$:

    \begin{equation*}V=\bigcup _{i=1}^{p}L_{i},\quad L_{i}=\{(i,l)\colon l=1,\ldots,k_{i}\}.\end{equation*}
  (ii) Only edges between different levels are allowed:

    \begin{equation*}w=((i,l),(i^{\prime },l^{\prime }))\Rightarrow i\neq i^{\prime }.\end{equation*}
  (iii) Every point belongs to exactly one edge:

    \begin{equation*}{\text{for all}\ (i,l)\in V,\ \text{there is a unique} \,\, (i^{\prime}\!,l^{\prime})\ \text{such that}\,\,((i,l),(i^{\prime}\!,l^{\prime})) \in G(V).}\end{equation*}

For $w=((i,l),(i^{\prime },l^{\prime }))$ in G(V), we define $n_1(w) \equiv i \vee i^{\prime }$ as the first level of w and $n_2(w)\equiv i \wedge i^{\prime }$ as the second level.

Lemma A.2. (Diagram formula.) Let $(X_{s_{1}},\ldots,X_{s_{p}})$ be a centered Gaussian vector with covariance matrix $( r(s_{i},s_{j})) _{1\leq i,j\leq p}$. Then we have

\begin{equation*}\mathbb{E}\Biggl[ \prod_{i=1}^{p}H_{k_{i}}(X_{s_{i}})\Biggr] =\sum_{G\in\Gamma (k_{1},\ldots,k_{p})}\prod_{w\in G}r(s_{n_{1}(w)},s_{n_{2}(w)}),\end{equation*}

where $n_{1}(w)$ and $n_{2}(w)$ are, respectively, the first and second levels of w.
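As a sanity check of the diagram formula in the simplest case $p=2$ (our illustration, not part of the paper): $\Gamma (j,k)$ is empty unless $j=k$, and for $j=k$ its k! diagrams each contribute $r^{k}$, so $\mathbb{E}(H_{j}(X)H_{k}(Y))=\delta _{jk}k!r^{k}$ for jointly standard Gaussian (X, Y) with correlation r. A Monte Carlo sketch:

```python
# Monte Carlo check (ours) of the diagram formula for p = 2.
import math
import numpy as np

def hermite(k, x):
    """Probabilists' H_k via H_{k+1} = x H_k - k H_{k-1}."""
    h_prev, h = np.ones_like(x), x
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h = h, x * h - j * h_prev
    return h

rng = np.random.default_rng(2)
r, n = 0.6, 10**6
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)   # corr(x, y) = r
for j, k in [(2, 2), (3, 3), (2, 3)]:
    mc = np.mean(hermite(j, x) * hermite(k, y))
    exact = math.factorial(k) * r**k if j == k else 0.0
    print((j, k), round(float(mc), 3), exact)
```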

Proof of Lemma A.1. To prove the lemma, we need the two inequalities stated below. Formula (A.2) is proved in [Reference Taqqu32]; the second is proved below. We write $\mathbf{k}$ for $(k_{1},\ldots,k_{p})$. For G an element of $\Gamma (k_{1},\ldots,k_{p})$, we introduce the following notation: $k_{G}(i)$ is the number of edges going from level i, $g(i)={{{{k_{G}(i)}}/{{k_{i}}}}}$, and $I(G,\mathbf{k},n)$ is defined by

\begin{equation*}I(G,\mathbf{k},n)=\sum_{i\in N(p)}\prod_{w\in G}|r(i_{n_{1}(w)},i_{n_{2}(w)})| , \end{equation*}

where N(p) is defined in (A.1).

Lemma A.3.

  (i) Let X be a standard Gaussian random variable. Then we have

    (A.2) \begin{equation}{\mathbb{E}}( |H_{k_{1}}(X)\cdots H_{k_{p}}(X)|) \leq\prod_{i=1}^{p}(p-1)^{k_{i}/2}\sqrt{k_{i}!}. \end{equation}
  (ii) If $G\in \Gamma (k_{1},\ldots,k_{p})$, then we get

    (A.3) \begin{equation}I^{2}(G,\mathbf{k},n)\leq n^{p}\prod_{l=1}^{p}4\sum_{i=1}^{n}|r^{k_{l}}(i)|.\end{equation}

We have the inequalities

\begin{align*}&\sum_{i\in N(p)}\Biggl| \mathbb{E}\Biggl( \prod_{l=1}^{p}f_{l}(X_{i_{l}}) \Biggr) \Biggr|\\&\quad \leq \sum_{k_{1},\ldots,k_{p}=1,k_{l}\geq m_{l}}^{\infty}\prod_{l=1}^{p}{\dfrac{| {c_{k_{l}}(f_{l})}| }{{k_{l}!}}}\sum_{i\in N(p)}| \mathbb{E}(H_{k_{1}}(X_{i_{1}})\cdots H_{k_{p}}(X_{i_{p}})) | \\&\quad \leq \sum_{k_{1},\ldots,k_{p}=1,k_{l}\geq m_{l}}^{\infty}\prod_{l=1}^{p}{\dfrac{| {c_{k_{l}}(f_{l})}| }{{k_{l}!}}}\sum_{G\in \Gamma (k_{1},\ldots,k_{p})}I(G,\mathbf{k},n).\end{align*}

By combining (A.2) with (A.3), we conclude that

\begin{align*}&\sum_{i\in N(p)}\Biggl| \mathbb E\Biggl( \prod_{l=1}^{p}f_{l}(X_{i_{l}}) \Biggr) \Biggr|\\&\quad \leq \sum_{k_{1},\ldots,k_{p}=1,k_{l}\geq m_{l}}^{\infty}\prod_{l=1}^{p}{\dfrac{| {c_{k_{l}}(f_{l})}| }{{k_{l}!}}}|\Gamma(k_{1},\ldots,k_{p})|\sup_{G\in \Gamma (k_{1},\ldots,k_{p})}I(G,\mathbf{k},n) \\&\quad \leq \sum_{k_{1},\ldots,k_{p}=1,k_{l}\geq m_{l}}^{\infty }\prod_{l=1}^{p}{\dfrac{| {c_{k_{l}}(f_{l})}| }{\sqrt{{k_{l}!}}}}(p-1)^{k_{l}/2}\Biggl( 4n\sum_{i=1}^{n}|r^{k_{l}}(i)|\Biggr) ^{1/2}.\end{align*}

Proof of Lemma A.3. We assume that $k_1\leq \cdots \leq k_p$ . Moreover, without loss of generality, assume that edges go from lower to higher levels (by the symmetry of the covariance function). Therefore we have

\begin{align*} I(G,\mathbf{k},n) &=\sum_{i_{1}=1}^{n}\cdots\sum_{i_{p}=1}^{n}\prod_{w\in G}| r(i_{n_{1}(w)},i_{n_{2}(w)})| \\& = \sum_{i_{1}=1}^{n}\cdots\sum_{i_{p}=1}^{n}\prod_{l=1}^{p}\prod_{\{ w\in G\colon n_{1}(w)=l\} }| r(i_{l},i_{n_{2}(w)})| \\& = \sum_{i_{1}=1}^{n}\cdots\sum_{i_{p}=1}^{n}\prod_{\{ w\in G\colon n_{1}(w)=1\} }| r(i_{1},i_{n_{2}(w)})|\prod_{l=2}^{p}\prod_{\{ w\in G\colon n_{1}(w)=l\} }| r(i_{l},i_{n_{2}(w)})| .\end{align*}

By Hölder’s inequality we obtain

\begin{align*} I(G,\mathbf{k},n) & \leq \sum_{i_{1}=1}^{n}\prod_{\{ w\in G\colon n_{1}(w)=1\} }|r(i_{1},i_{n_{2}(w)})|\sum_{i_{2}=1}^{n}\cdots\sum_{i_{p}=1}^{n}\prod_{l=2}^{p}\prod_{\{ w\in G\colon n_{1}(w)=l\} }|r(i_{l},i_{n_{2}(w)})| \\& \leq2\sum_{i_{1}=1}^{n}|r(i_{1})|^{k_{G}(1)}\sum_{i_{2}=1}^{n}\cdots\sum_{i_{p}=1}^{n}\prod_{l=2}^{p}\prod_{\{ w\in G,n_{1}(w)=l\}}|r(i_{l},i_{n_{2}(w)})|.\end{align*}

Repeating the same for $i_{2},\ldots,i_{p}$ , we get

(A.4) \begin{equation}|I(G,\mathbf{k},n)|\leq \prod_{l=1}^{p}2\sum_{i=1}^{n}|r(i)|^{k_{G}(l)}.\end{equation}

In order to prove the inequality (A.3), for any graph G we write the two symmetric formulas

\begin{align*}|I(G,\mathbf{k},n)| &\leq \prod_{l=1}^{p}2\sum_{i=1}^{n}|r(i)|^{k_{G}(l)}, \\|I(G,\mathbf{k},n)| &\leq\prod_{l=1}^{p}2\sum_{i=1}^{n}|r(i)|^{k_{G}^{\prime }(l)},\end{align*}

where $k_{G}(i)+k_{G}^{\prime }(i)=k_{i}$. The first relation is (A.4). For the second relation, orient the edges from higher to lower levels, proceed as in the proof of (A.4) with $n_{2}(w)$ in place of $n_{1}(w)$, and begin by integrating out $i_{p}$ instead of $i_{1}$. Hence, by Hölder’s inequality, we get

\begin{align*}I^{2}(G,\mathbf{k},n) &\leq \prod_{l=1}^{p}4\sum_{i=1}^{n}\sum_{i^{\prime}=1}^{n}|r(i)|^{k_{G}(l)}|r(i^{\prime })|^{k_{G}^{\prime }(l)} \\&\leq \prod_{l=1}^{p}4\Biggl( n\sum_{i=1}^{n}|r(i)|^{k_{l}}\Biggr) ^{{{{{k_{G}(l)}}/{{k_{l}}}}}}\Biggl( n\sum_{i=1}^{n}|r(i)|^{k_{l}}\Biggr) ^{{{{{k_{G}^{\prime }(l)}}/{{k_{l}}}}}} \\&\leq \prod_{l=1}^{p}4n\sum_{i=1}^{n}|r^{k_{l}}(i)|.\end{align*}

This completes the proof of part (ii) of the lemma.

Acknowledgements

The authors extend their sincere gratitude to the Editor-in-Chief, the Associate Editor, and the referee for their invaluable feedback and for pointing out a number of oversights in the version initially submitted. Their insightful comments have greatly refined and focused the original work, resulting in a markedly improved presentation.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

No competing interests arose during the preparation or publication of this article.

References

Alvarez-Andrade, S. and Bouzebda, S. (2014). Asymptotic results for hybrids of empirical and partial sums processes. Statist. Papers 55, 1121–1143.
Alvarez-Andrade, S. and Bouzebda, S. (2018). Almost sure central limit theorem for the hybrid process. Statistics 52, 519–532.
Arcones, M. A. (1994). Limit theorems for nonlinear functionals of a stationary Gaussian sequence of vectors. Ann. Prob. 22, 2242–2274.
Arcones, M. A. (2000). Distributional limit theorems over a stationary Gaussian sequence of random vectors. Stoch. Process. Appl. 88, 135–159.
Baum, L. E. and Katz, M. (1965). Convergence rates in the law of large numbers. Trans. Amer. Math. Soc. 120, 108–123.
Ben Hariz, S. (1999). Limit theorems for weakly and strongly dependent sequences: Statistical applications. PhD dissertation, Université du Paris Sud, Orsay.
Ben Hariz, S. (2002). Limit theorems for the non-linear functional of stationary Gaussian processes. J. Multivariate Anal. 80, 191–216.
Beran, J., Feng, Y., Ghosh, S. and Kulik, R. (2013). Long-Memory Processes: Probabilistic Properties and Statistical Methods. Springer, Heidelberg.
Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn (Wiley Series in Probability and Statistics). John Wiley, New York.
Birkel, T. (1988). A note on the strong law of large numbers for positively dependent random variables. Statist. Prob. Lett. 7, 17–20.
Breuer, P. and Major, P. (1983). Central limit theorems for nonlinear functionals of Gaussian fields. J. Multivariate Anal. 13, 425–441.
Buchsteiner, J. (2018). The function-indexed sequential empirical process under long-range dependence. Bernoulli 24, 2154–2175.
Dedecker, J. and Merlevède, F. (2007). Convergence rates in the law of large numbers for Banach-valued dependent variables. Teor. Veroyat. Primen. 52, 562–587.
Dobrushin, R. L. and Major, P. (1979). Non-central limit theorems for nonlinear functionals of Gaussian fields. Z. Wahrscheinlichkeitsth. 50, 27–52.
Fazekas, I. and Klesov, O. (2000). A general approach to the strong laws of large numbers. Teor. Veroyat. Primen. 45, 568–583.
Gut, A. and Stadtmüller, U. (2010). On the strong law of large numbers for delayed sums and random fields. Acta Math. Hungar. 129, 182–203.
Hechner, F. and Heinkel, B. (2010). The Marcinkiewicz–Zygmund LLN in Banach spaces: A generalized martingale approach. J. Theoret. Prob. 23, 509–522.
Houdré, C. (1995). On the almost sure convergence of series of stationary and related nonstationary variables. Ann. Prob. 23, 1204–1218.
Hu, Y., Nualart, D., Tindel, S. and Xu, F. (2015). Density convergence in the Breuer–Major theorem for Gaussian stationary sequences. Bernoulli 21, 2336–2350.
Ivanov, A. V., Leonenko, N., Ruiz-Medina, M. D. and Savich, I. N. (2013). Limit theorems for weighted nonlinear transformations of Gaussian stationary processes with singular spectra. Ann. Prob. 41, 1088–1114.
Jirak, M. (2017). On weak invariance principles for partial sums. J. Theoret. Prob. 30, 703–728.
Kratz, M. F. and León, J. R. (2001). Central limit theorems for level functionals of stationary Gaussian processes and fields. J. Theoret. Prob. 14, 639–672.
Kulik, R. and Soulier, P. (2012). Limit theorems for long-memory stochastic volatility models with infinite variance: Partial sums and sample covariances. Adv. Appl. Prob. 44, 1113–1141.
Kulik, R. and Soulier, P. (2020). Heavy-Tailed Time Series (Springer Series in Operations Research and Financial Engineering). Springer, New York.
Louhichi, S. and Soulier, P. (2000). Marcinkiewicz–Zygmund strong laws for infinite variance time series. Statist. Inference Stoch. Process. 3, 31–40.
Marcinkiewicz, J. and Zygmund, A. (1937). Sur les fonctions indépendantes. Fundam. Math. 29, 60–90.
Móricz, F. A., Serfling, R. J. and Stout, W. F. (1982). Moment and probability bounds with quasisuperadditive structure for the maximum partial sum. Ann. Prob. 10, 1032–1040.
Rio, E. (1995). A maximal inequality and dependent Marcinkiewicz–Zygmund strong laws. Ann. Prob. 23, 918–937.
Shao, Q. M. (1995). Maximal inequalities for partial sums of $\rho$-mixing sequences. Ann. Prob. 23, 948–965.
Shuhe, H. and Ming, H. (2006). A general approach rate to the strong law of large numbers. Statist. Prob. Lett. 76, 843–851.
Szewczak, Z. (2011). On Marcinkiewicz–Zygmund laws. J. Math. Anal. Appl. 375, 738–744.
Taqqu, M. S. (1979). Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrscheinlichkeitsth. 50, 53–83.