
Extremes for systemic expected shortfall and marginal expected shortfall in a multivariate continuous-time risk model

Published online by Cambridge University Press:  22 July 2025

Lei Zou
Affiliation:
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, P.R. China
Jiangyan Peng*
Affiliation:
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, P.R. China
Chenghao Xu
Affiliation:
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, P.R. China
*
Corresponding author: Jiangyan Peng; Email: pengjiangyan@uestc.edu.cn

Abstract

In this article, we focus on the systemic expected shortfall and marginal expected shortfall in a multivariate continuous-time risk model with a general càdlàg process. Additionally, we conduct our study under a mild moment condition that is easily satisfied when the general càdlàg process is determined by some important investment return processes. In the presence of heavy tails, we derive asymptotic formulas for the systemic expected shortfall and marginal expected shortfall under the framework that includes wide dependence structures among losses, covering pairwise strong quasi-asymptotic independence and multivariate regular variation. Our results quantify how the general càdlàg process, heavy-tailed property of losses, and dependence structures influence the systemic expected shortfall and marginal expected shortfall. To discuss the interplay of dependence structures and heavy-tailedness, we apply an explicit order 3.0 weak scheme to estimate the expectations related to the general càdlàg process. This enables us to validate the moment condition from a numerical perspective and perform numerical studies. Our numerical studies reveal that the asymptotic dependence structure has a significant impact on the systemic expected shortfall and marginal expected shortfall, but heavy-tailedness has a more pronounced effect than the asymptotic dependence structure.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

1. Introduction

In this article, all stochastic quantities are defined on a complete probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, P)$. The filtration $(\mathcal{F}_t)_{t\geq0}$ is right-continuous, and all stochastic processes throughout the article are adapted.

After the global financial crisis triggered by the collapse of Lehman Brothers, the International Monetary Fund, the Bank for International Settlements, and the Financial Stability Board cooperated to establish an initial framework for evaluating the systemic importance of financial institutions in 2009. In this framework, systemic risk was defined as “[the] risk of disruption to financial services that (i) [is] caused by an impairment of all or parts of the financial system and (ii) has the potential to have serious negative consequences for the real economy.” From the viewpoint of policymakers, risk measures are vital for developing a comprehensive and efficient regulatory framework to manage systemic risk and enhance financial stability. Based on this, numerous scholars have proposed a series of systemic risk measures, such as Value-at-Risk (VaR), Conditional Tail Expectation (CTE), Contagion Risk (CR), Expected Shortfall (ES), and Joint Expected Shortfall (JES), as demonstrated in works such as [Reference Cai, Einmahl, de Haan and Zhou3, Reference Hua and Joe15, Reference Ji, Tan and Yang16, Reference Li, Luo and Yao23], [Reference Landsman, Makov and Tomer19] and [Reference Zhou, Dhaene and Yao31]. In practice, when there are various dependence structures among individual risks, calculating the analytical expressions of systemic risk measures becomes challenging. Therefore, many researchers have shifted their focus to investigating asymptotic expressions with the help of Extreme Value Theory, as shown in works such as [Reference Asimit, Furman, Tang and Vernic1, Reference Chen and Liu4, Reference Fu and Liu9, Reference Fu, Ni and Chen10, Reference Hao and Tang14, Reference Liu and Yang24] and [Reference Tang27]. A significant point to note is that the systemic risk measures mentioned above have traditionally been studied within a static framework. Thus, recent research has extended these measures into continuous-time risk models, as demonstrated by [Reference Li22]. Inspired by [Reference Guo and Wang11], we assume that for any integer $k\in[1,d]$, the loss process $L_k(t)$ of the k-th line of business can be modeled by

(1.1)\begin{align} L_k(t)=\sum_{i=1}^{N_k(t)}X_{ki}e^{-\xi(\tau_{ki})}, \end{align}

where d denotes the total number of lines of business, $X_{ki}$ describes the i-th loss in the k-th line of business, whose arrival times $\tau_{ki}$ constitute a loss-number process $N_k(t)$, and $\xi(t)$ denotes a general càdlàg process, which measures the exposure to common macroeconomic factors. Consequently, the aggregate loss process S(t) can be expressed as follows:

(1.2)\begin{align} S(t)=\sum_{k=1}^{d}L_k(t)=\sum_{k=1}^{d}\sum_{i=1}^{N_k(t)}X_{ki}e^{-\xi(\tau_{ki})} . \end{align}
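To fix ideas, the following minimal Monte Carlo sketch simulates (1.1)-(1.2); it assumes, purely for illustration, homogeneous Poisson arrival processes, Pareto losses, and a single shared Brownian motion with drift playing the role of $\xi(t)$. None of these specific choices is imposed by the model, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_losses(t, lambdas, pareto_alpha, pareto_gammas, drift, sigma, n_paths=10_000):
    """Monte Carlo draws of (L_1(t), ..., L_d(t)) and S(t) in (1.1)-(1.2).
    Illustrative assumptions only: homogeneous Poisson arrivals with rates lambdas,
    Pareto(pareto_alpha, pareto_gammas[k]) losses, and one shared path
    xi(s) = drift*s + sigma*B(s) evaluated at every arrival time."""
    d = len(lambdas)
    L = np.zeros((n_paths, d))
    for p in range(n_paths):
        arrivals, labels = [], []
        for k in range(d):
            n_k = rng.poisson(lambdas[k] * t)
            # conditionally on N_k(t) = n_k, arrival times are uniform order statistics on [0, t]
            arrivals.append(rng.uniform(0.0, t, size=n_k))
            labels.append(np.full(n_k, k))
        arrivals, labels = np.concatenate(arrivals), np.concatenate(labels)
        if arrivals.size == 0:
            continue
        order = np.argsort(arrivals)
        arrivals, labels = arrivals[order], labels[order]
        # one shared Brownian path with drift, sampled at the merged arrival times
        xi = drift * arrivals + sigma * np.cumsum(
            rng.normal(size=arrivals.size) * np.sqrt(np.diff(arrivals, prepend=0.0)))
        u = rng.uniform(size=arrivals.size)
        gammas = np.asarray(pareto_gammas, dtype=float)[labels]
        losses = gammas * (u ** (-1.0 / pareto_alpha) - 1.0)   # Pareto sampling by inverting the tail
        discounted = losses * np.exp(-xi)
        for k in range(d):
            L[p, k] = discounted[labels == k].sum()
    return L, L.sum(axis=1)

L, S = simulate_losses(t=1.0, lambdas=[2.0, 3.0], pareto_alpha=1.8,
                       pareto_gammas=[1.0, 1.5], drift=0.1, sigma=0.15)
print("mean L_k(t):", L.mean(axis=0), " mean S(t):", S.mean())
```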

For a general risk variable X, its VaR at level $0 \lt q \lt 1$ is defined by $VaR_X(q)=\inf\{s\in\mathbb{R}: P(X\leq s)\geq q\}$. Motivated by relations (1.3) and (1.4) in [Reference Chen and Liu4], in the context of ES capital allocations, for any integer $k\in[1,d]$, we introduce two risk measures based on our multivariate continuous-time risk model: the systemic expected shortfall $SES_k(t,q)$ and the marginal expected shortfall $MES_k(t,q)$, as described below:

(1.3)\begin{align}&SES_k(t,q)=\mathbb{E}\left((L_k(t)-A_k(t,q))_+~|~S(t) \gt A(t,q)\right), \end{align}

and

(1.4)\begin{align}&MES_k(t,q)=\mathbb{E}\left(L_k(t)~|~S(t) \gt A(t,q)\right), \end{align}

where the capital allocation of the k-th line of business $A_k(t,q)$ and the total capital allocation $A(t,q)$ can be described by

(1.5)\begin{align}&A_k(t,q)=\mathbb{E}\left(L_k(t)~|~S(t) \gt VaR_S(t,q)\right), \end{align}

and

(1.6)\begin{align}&A(t,q)=\sum_{k=1}^{d}A_k(t,q)=\mathbb{E}\left(S(t)~|~S(t) \gt VaR_S(t,q)\right). \end{align}

Here and throughout the article, we use $x_+=x\mathbb{I}_{\{x\geq0\}}$, where $\mathbb{I}_A$ denotes the indicator function of an event A.
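Since $SES_k(t,q)$ and $MES_k(t,q)$ are defined through the conditional expectations in (1.3)-(1.6), they admit straightforward plug-in estimates once samples of $(L_1(t),\ldots,L_d(t))$ are available, for instance from the simulation sketch above. The helper below is only such an empirical estimator, not the asymptotic formulas derived later.

```python
import numpy as np

def empirical_risk_measures(L, S, q):
    """Plug-in estimates of (1.3)-(1.6) from simulated samples.
    L has shape (n, d) with the L_k(t); S has shape (n,) with S(t)."""
    var_s = np.quantile(S, q)                               # VaR_S(t,q)
    tail = S > var_s
    A_k = L[tail].mean(axis=0)                              # (1.5): E[L_k(t) | S(t) > VaR_S(t,q)]
    A = A_k.sum()                                           # (1.6): total capital allocation A(t,q)
    systemic = S > A                                        # conditioning event {S(t) > A(t,q)}
    MES = L[systemic].mean(axis=0)                          # (1.4)
    SES = np.maximum(L[systemic] - A_k, 0.0).mean(axis=0)   # (1.3)
    return var_s, A_k, A, SES, MES

# e.g. with (L, S) from the simulation sketch above:
# var_s, A_k, A, SES, MES = empirical_risk_measures(L, S, q=0.99)
```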

The multivariate continuous-time risk model (1.1) that includes a general càdlàg process highlights the shared financial environment and the correlations among individual businesses. General càdlàg processes are a class of stochastic processes characterized by right continuity and left limits, with significant applications in mathematics, finance, and engineering. The càdlàg process is often determined by some important investment return processes such as fractional Brownian motion, the Vasicek model, the Cox–Ingersoll–Ross (CIR) model, and the Heston model, as shown in [Reference Cheng, Konstantinides and Wang6, Reference Guo and Wang11] and [Reference Guo and Wang12]. As indicated by Section 5 of [Reference Cheng, Konstantinides and Wang6] and Section 3 of [Reference Fu and Li8], when a general càdlàg process is determined by the classical geometric Brownian motion or the Vasicek model, the value of the general càdlàg process at each time point can be calculated directly. Nevertheless, direct calculation is not practical when the general càdlàg process is determined by more complex models such as the CIR model or stochastic volatility models. In addition, it is emphasized in Section 3 of [Reference Guo and Wang11] that calculating the expectation related to the general càdlàg process is a challenging task. Consequently, these limitations restrict the application of the general càdlàg process in current risk theory.

A key point to mention is that the càdlàg process is often associated with complex and specific stochastic differential equations (SDEs). To overcome the challenge of direct calculation, we employ the explicit order 3.0 weak scheme, which is widely used for solving SDEs. Notably, the explicit order 3.0 weak scheme does not involve derivatives of the drift and diffusion coefficients, which allows us to approximate the values of the càdlàg process at discrete time points in a more convenient way. Additionally, the explicit order 3.0 weak scheme can also be viewed as an alternative method for approximating the expectation related to the general càdlàg process. Therefore, this numerical method contributes to verifying the moment condition related to the general càdlàg process from a numerical standpoint and to understanding the sensitivity of parameters. Owing to these advantages, we believe that the explicit order 3.0 weak scheme extends the applicability of the general càdlàg process in the field of risk theory. For detailed information on the explicit order 3.0 weak scheme, we refer the reader to [Reference Kloeden and Platen17, Reference Platen25], among others.
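We do not reproduce the explicit order 3.0 weak scheme here. To convey the idea of replacing the intractable expectation $\mathbb{E}(e^{-\kappa\xi(t)})$ by a simulated average, the following sketch uses a much simpler Euler-Maruyama discretization as a stand-in, assuming, purely for illustration, that ξ solves a mean-reverting SDE with hypothetical parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_exp_moment(kappa, t, a, b, sigma, n_steps=200, n_paths=100_000):
    """Approximate E[exp(-kappa*xi(t))] by simulating the hypothetical SDE
    d xi = a*(b - xi) dt + sigma dW, xi(0) = 0, with Euler-Maruyama steps
    (a simple stand-in here for the explicit order 3.0 weak scheme)."""
    dt = t / n_steps
    xi = np.zeros(n_paths)
    for _ in range(n_steps):
        xi = xi + a * (b - xi) * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return np.exp(-kappa * xi).mean()

print(mc_exp_moment(kappa=2.0, t=1.0, a=0.5, b=0.05, sigma=0.2))
```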

In previous research on systemic risk measures, the study of dependent losses has primarily focused on two different aspects: asymptotic independence and asymptotic dependence. For instance, [Reference Liu and Yang24] utilized Extreme Value Theory to investigate the asymptotic expressions of systemic risk measures with asymptotically independent losses. Common asymptotically independent structures include pairwise strong quasi-asymptotic independence, the Gaussian copula, and the Johnson-Kotz iterated Farlie-Gumbel-Morgenstern copula, as discussed by [Reference Fu and Li8, Reference Guo, Wang and Yang13], [Reference Li20] and [Reference Li21]. Similarly, [Reference Hao and Tang14] examined the asymptotic relationships of CTE with asymptotically dependent losses. Examples of asymptotically dependent structures can be found in [Reference Asimit, Furman, Tang and Vernic1, Reference Chen and Yuan5, Reference Konstantinides and Li18] and [Reference Li22]. In this article, we explore extremes for $SES_k(t,q)$ and $MES_k(t,q)$ within a framework that considers wide dependence structures among losses, including pairwise strong quasi-asymptotic independence and multivariate regular variation.

The current article extends the existing works on systemic risk measures in four main aspects. First, compared with [Reference Li22], which explores the extremes for systemic risk measures in a multivariate continuous-time risk model with a constant interest rate, we introduce a general càdlàg process into the multivariate continuous-time risk model. Since the general càdlàg process lacks independent and stationary increments, the theoretical proofs become considerably more difficult, but the resulting $SES_k(t,q)$ and $MES_k(t,q)$ are more practical. Our results reveal that the general càdlàg process has a significant impact on $SES_k(t,q)$ and $MES_k(t,q)$. Second, we propose a mild moment assumption for the general càdlàg process. This assumption can be easily satisfied by selecting appropriate model parameters when the general càdlàg process is determined by some investment return processes, such as fractional Brownian motion, the Vasicek model, the CIR model, and the Heston model. In addition, compared to [Reference Cheng, Konstantinides and Wang6] and [Reference Guo and Wang11], who verify the conditions associated with the general càdlàg process from a theoretical viewpoint, our numerical findings suggest that this moment assumption is well-founded from a numerical standpoint. Third, we note that during the European sovereign debt crisis, the high levels of debt in several European countries, along with the interconnectedness of European banks, created a contagion effect that posed a threat to the stability of the entire Eurozone and had widespread impacts on global financial markets. In this context, we explore the extremes for $SES_k(t,q)$ and $MES_k(t,q)$ under the framework of dependence structures, namely pairwise strong quasi-asymptotic independence and multivariate regular variation. Our results indicate that the influence of the pairwise strong quasi-asymptotic independence structure among losses on $SES_k(t,q)$ and $MES_k(t,q)$ is minimal, while the property of multivariate regular variation among losses significantly impacts $SES_k(t,q)$ and $MES_k(t,q)$. Finally, we note that operational risks and large insurance losses are identified as having heavy tails, and that in the 2008 financial crisis, extreme risks were shown to be contagious. Based on this, we aim to discuss the interplay of dependence structures and heavy-tailedness with the help of the explicit order 3.0 weak scheme. Our numerical studies demonstrate that while the asymptotic dependence structure among losses plays a role in $SES_k(t,q)$ and $MES_k(t,q)$, the heavy-tailed property has a more substantial effect.

In the rest of this article, Section 2 introduces the mild moment condition, as well as dependence structures. Section 3 states our main results. Section 4 proves the main results after introducing some useful lemmas. Section 5 applies the explicit order 3.0 weak scheme to perform numerical studies.

2. Preliminaries and assumptions

2.1. Preliminaries

Throughout this article, C represents a generic constant, which may vary with the context. Hereafter, all limit relations are for $x\rightarrow\infty$ or $q\uparrow 1$ unless stated otherwise. For two positive functions $a(\cdot )$ and $b(\cdot )$, we write $a(x)\lesssim b(x)$ if $\limsup\limits_{x\rightarrow\infty}\frac{a(x)}{b(x)}\leq 1$; $a(x)\gtrsim b(x)$ if $\liminf\limits_{x\rightarrow\infty}\frac{a(x)}{b(x)}\geq 1$; and $a(x)\sim b(x)$ if $\lim\limits_{x\rightarrow\infty}\frac{a(x)}{b(x)}=1$. Also, we write $a(x)\asymp b(x)$ if $0 \lt \liminf\limits_{x\rightarrow\infty}\frac{a(x)}{b(x)}\leq \limsup\limits_{x\rightarrow\infty}\frac{a(x)}{b(x)} \lt \infty$. Furthermore, for two positive bivariate functions $a(\cdot,\cdot)$ and $b(\cdot,\cdot)$ satisfying $0\leq L_1\leq \liminf_{x\rightarrow\infty}\inf_{t\in\Delta}\frac{a(x,t)}{b(x,t)}\leq \limsup_{x\rightarrow\infty}\sup_{t\in\Delta}\frac{a(x,t)}{b(x,t)}\leq L_2 \lt \infty$ with $\Delta\neq\emptyset$, we say that the relation $a(x,t)\asymp b(x,t)$ holds uniformly for all $t\in\Delta$ if $0 \lt L_1\leq L_2 \lt \infty$; $a(x,t)\lesssim b(x,t)$ holds uniformly for all $t\in\Delta$ if $L_2\leq1$; $a(x,t)\gtrsim b(x,t)$ holds uniformly for all $t\in\Delta$ if $L_1\geq1$; and $a(x,t)\sim b(x,t)$ holds uniformly for all $t\in\Delta$ if $L_1=L_2=1$. Vectors are denoted by bold letters and assumed to be d-dimensional; for example, $\boldsymbol{a}=(a_1,a_2,\ldots,a_d)$. Furthermore, we denote $\boldsymbol{1}=(1,1,\ldots,1)$. For any integer $k\in[1,d]$, the notation $\boldsymbol{I}_k$ denotes the unit vector whose k-th component is 1, so that $\boldsymbol{1}=\sum_{k=1}^{d}\boldsymbol{I}_{k}$.

First, we recall some related classes of heavy-tailed distribution functions. By definition, a distribution function F belongs to the class of dominated variation, denoted by $F\in \mathcal{D}$, if F has an ultimately positive right tail and, for any $y\in(0,1)$,

\begin{equation*} \limsup_{x\rightarrow\infty}\frac{\overline{F}(xy)}{\overline{F}(x)} \lt \infty. \end{equation*}

An important subclass of the class $\mathcal{D}$ is the class $\mathcal{R}_{-\alpha}$ of distribution functions with regularly varying tails, specified, for some α > 0, by

\begin{equation*} \lim_{x\rightarrow\infty}\frac{\overline{F}(xy)}{\overline{F}(x)}=y^{-\alpha},~\mathrm{for}~y \gt 0. \end{equation*}

For any distribution $F\in \mathcal{R}_{-\alpha}$ with α > 0, Theorem 1.5 of [Reference Bingham, Goldie and Teugels2] ensures that

(2.1)\begin{align} \lim_{x\rightarrow\infty} \sup_{y\in[b,\infty)} \left|\frac{\overline{F}(xy)}{\overline{F}(x)}-y^{-\alpha}\right|=0, \end{align}

holds for arbitrarily fixed $0 \lt b \lt \infty$. According to Karamata’s Theorem, if $F\in\mathcal{R}_{-\alpha}$ for some α > 1, then,

\begin{align*} \lim_{x\rightarrow\infty}\frac{\int_{x}^{\infty}\overline{F}(y)dy}{x\overline{F}(x)}=\frac{1}{\alpha-1}. \end{align*}
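As a quick illustration of Karamata’s Theorem, take the pure Pareto tail $\overline{F}(y)=y^{-\alpha}$ for $y\geq1$ with α > 1; then, for every $x\geq1$,

\begin{align*} \int_{x}^{\infty}\overline{F}(y)dy=\int_{x}^{\infty}y^{-\alpha}dy=\frac{x^{1-\alpha}}{\alpha-1}=\frac{x\overline{F}(x)}{\alpha-1}, \end{align*}

so the ratio in Karamata’s Theorem equals $\frac{1}{\alpha-1}$ exactly, not only in the limit.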

In addition, a distribution function F is said to be long-tailed, denoted by $F\in\mathcal{L}$, if F has an ultimately positive right tail and, for any $b\in\mathbb{R}$,

\begin{align*} \lim_{x\rightarrow\infty}\frac{\overline{F}(x+b)}{\overline{F}(x)}=1. \end{align*}

Next, we introduce the upper and lower Matuszewska indices: $J_{F}^{+}=-\lim_{y\rightarrow\infty}\frac{\log{\overline{F_*}(y)}}{\log{y}} $ and $J_{F}^{-}=-\lim_{y\rightarrow\infty}\frac{\log{\overline{F^*}(y)}}{\log{y}}$, where $ \overline{F_*}(y):=\liminf_{x\rightarrow\infty}\frac{\overline{F}(xy)}{\overline{F}(x)}$ and $\overline{F^*}(y):=\limsup_{x\rightarrow\infty}\frac{\overline{F}(xy)}{\overline{F}(x)}$. Generally, $0\leq{J_{F}^{-}}\leq{J_{F}^{+}}\leq\infty$. In particular, if $F\in \mathcal{D}$, then ${J_{F}^{+}} \lt \infty$, and if $F\in \mathcal{R}_{-\alpha}$ with α > 0, then ${J_{F}^{-}}={J_{F}^{+}}={\alpha}$.

Lastly, we introduce two different dependence structures: Definition 2.1, which is a well-known asymptotic independence structure, and Definition 2.2, which is a common asymptotic dependence structure.

Definition 2.1.

Let $Y_1, Y_2, \ldots$ be non-negative random variables. We say that $Y_1, Y_2, \ldots$ are pairwise strong quasi-asymptotically independent (pSQAI) if, for any $i\neq j$,

\begin{align*} \lim_{\min\{y_i,~y_j\}\rightarrow\infty}P(Y_i \gt y_i~|~Y_j \gt y_j)=0. \end{align*}

For more details on pSQAI, see [Reference Li20].

Definition 2.2.

For a non-negative random vector $(Y_1, \ldots, Y_d)$, assume that there exist some α > 0, a distribution function $G\in \mathcal{R}_{-\alpha}$, and some nonzero Radon measure υ such that the following vague convergence holds as $x\rightarrow\infty$ on Borel sets Q away from 0 satisfying $\upsilon(\partial Q)=0$, where $\partial Q$ denotes the boundary of Q:

(2.2)\begin{equation} \frac{P\left(\frac{(Y_1,Y_2,\ldots, Y_d)}{x}\in\cdot\right)}{\overline{G}(x)}\rightarrow \upsilon(\cdot). \end{equation}

In the context of Definition 2.2, we write $(Y_1,Y_2,\ldots, Y_d)\in MRV_d(\alpha, G, \upsilon)$. For any Borel set Q away from 0, the measure υ exhibits the following property of homogeneity:

(2.3)\begin{align} \upsilon(yQ)=y^{-\alpha}\upsilon(Q),~\mathrm{for~any}~y \gt 0. \end{align}

Besides, if $\upsilon((\boldsymbol{1},\boldsymbol{\infty}]) \gt 0$, then we have

(2.4)\begin{align} \lim_{x\rightarrow\infty}\frac{P\left(\bigcap_{k=1}^{d}\{Y_k \gt b_kx\}\right)}{\overline{G}(x)}=\upsilon((\boldsymbol{b},\boldsymbol{\infty}]) \gt 0, \mathrm{for~any~\boldsymbol{b}~away~from~\boldsymbol{0}}, \end{align}

and

(2.5)\begin{align} \lim_{x\rightarrow\infty}\frac{P(Y_k \gt x)}{\overline{G}(x)}=\upsilon((\boldsymbol{I}_k,\boldsymbol{\infty}]) \gt 0,~\mathrm{for~any~integer}~k\in[1,d]. \end{align}

Relation (2.5) indicates that $Y_1,Y_2,\ldots$, and $Y_d$ have regularly varying tails and are mutually tail-equivalent. From relations (2.4) and (2.5), if $Y_1,Y_2,\ldots$, and $Y_d$ have a joint distribution exhibiting multivariate regular variation with some Radon measure υ such that $\upsilon((\boldsymbol{1},\boldsymbol{\infty}]) \gt 0$, we have for any $1\leq i \lt j\leq d$,

\begin{align*} \lim_{x\rightarrow\infty}\frac{P(Y_i \gt x,Y_j \gt x)}{P(Y_i \gt x)}= \lim_{x\rightarrow\infty}\frac{P(Y_i \gt x,Y_j \gt x)}{\overline{G}(x)}\frac{\overline{G}(x)}{P(Y_i \gt x)}\geq \frac{\upsilon((\boldsymbol{1},\boldsymbol{\infty}])}{\upsilon((\boldsymbol{I}_i,\boldsymbol{\infty}])} \gt 0. \end{align*}

Thus, these random variables are pairwise asymptotically dependent. For more details on multivariate regular variation, see [Reference Chen and Yuan5, Reference Konstantinides and Li18] and [Reference Li22].
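To see the pairwise asymptotic dependence implied by (2.4) and (2.5) numerically, one can sample a bivariate vector with Pareto margins coupled by a survival Clayton copula, whose generator is regularly varying at 0 as in Remark 3.2 below. The sketch below uses Marshall-Olkin frailty sampling and purely illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_survival_clayton_pareto(n, alpha, gammas, beta):
    """Draw (Y_1, Y_2) with Pareto(alpha, gammas[i]) margins whose survival copula is
    Clayton with parameter beta, via Marshall-Olkin frailty sampling."""
    V = rng.gamma(shape=1.0 / beta, scale=1.0, size=n)       # frailty with Laplace transform (1+s)^(-1/beta)
    E = rng.exponential(size=(n, 2))
    U = (1.0 + E / V[:, None]) ** (-1.0 / beta)              # uniforms with the Clayton copula
    # treat U as survival probabilities and invert the Pareto tails
    return np.asarray(gammas) * (U ** (-1.0 / alpha) - 1.0)

Y = sample_survival_clayton_pareto(n=2_000_000, alpha=2.0, gammas=[1.0, 1.5], beta=1.0)
for x in [5.0, 10.0, 20.0, 40.0]:
    joint = np.mean((Y[:, 0] > x) & (Y[:, 1] > x))
    marginal = np.mean(Y[:, 0] > x)
    print(x, joint / marginal)   # ratio stays bounded away from 0: asymptotic dependence
```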

2.2. Assumptions

Here, we present some conditions that are used throughout the article.

(i) Assume that N(t), $N_1(t)$, $\ldots$, and $N_d(t)$ are renewal counting processes with corresponding renewal functions $\lambda(t)$, $\lambda_1(t)$, $\ldots$, $\lambda_d(t)$ and loss inter-arrival times $\theta_{i}, \theta_{1i}, \ldots, \theta_{di}$, $i\in\mathbb{N}^+$, respectively. For any two distinct integers i and j, we suppose that $N_i(t)$ and $N_j(t)$ are mutually independent, as are the arrival times $\tau_{i\cdot}$ and $\tau_{j\cdot}$.

(ii) For any integer $k\in[1,d]$, we assume that losses $\{X_{ki},i\in\mathbb{N}^+\}$ along the k-th line are non-negative and identically distributed with a generic random variable $X_k$, distributed by $F_k$.

(iii) We assume the loss frequencies $\{N_{k}(t), k=1,2,\ldots,d\}$, the losses $\{X_{ki}, k=1,2,\ldots,d, i\in \mathbb{N}^+\}$ and the general càdlàg process are mutually independent.

Next, let us introduce the moment condition for the general càdlàg process.

Assumption 2.1.

Assume there exists a constant κ such that for any $\omega\in[0,\kappa)$, $l\in(0, \kappa]$ and integer $k\in[1,d]$, the following relation holds

\begin{align*} \sum_{i=1}^{\infty}i^{\omega}\mathbb{E}\left(e^{-l\xi(\tau_{ki})}\right) \lt \infty. \end{align*}

Remark 2.1.

Here, we choose several specific instances of the general càdlàg process to illustrate that Assumption 2.1 can be readily met.

(i) Let the general càdlàg process be a Lévy process with characteristic triplet $(\gamma,\sigma^2,\nu_0)$, where γ is a real-valued constant in $(-\infty,\infty),~\sigma$ is a non-negative constant, and $\nu_0$ represents a Lévy measure on $(-\infty,\infty)$ satisfying $\nu_0(\{0\})=0$ and $\int_{-\infty}^{\infty}\min(y^2,1)\nu_0(dy) \lt \infty$. Define the Laplace exponent of the Lévy process as $\phi(s)=\log \mathbb{E}(e^{-s\xi(1)}),s\in(-\infty,\infty)$. If $\phi(s) \lt \infty$, then, for all $t\geq0$, we can obtain $\mathbb{E}(e^{-s\xi(t)})=e^{t\phi(s)}$. For more details of the Lévy process, we refer the reader to [Reference Cont and Tankov7] and [Reference Sato26]. Assume there exists a constant κ such that $\phi(\kappa) \lt 0$. It is easy to verify that $\phi(s)$ is convex and finite in s. Since $\phi(0) = 0$, we see that the condition $\phi(\kappa) \lt 0$ implies that $\phi(s) \lt 0$ for all $s\in (0, \kappa]$. Additionally, suppose that for any integer $k\in[1,d]$, $N_k(t)$ is a renewal counting process. Therefore, it is obvious that for any $\omega\in[0,\kappa)$, $l\in(0, \kappa]$ and integer $k\in[1,d]$,

\begin{align*} \sum_{i=1}^{\infty}i^{\omega}\mathbb{E}\left(e^{-l\xi(\tau_{ki})}\right)=\sum_{i=1}^{\infty}i^{\omega}\mathbb{E}\left(e^{\tau_{ki}\phi(l)}\right)=\sum_{i=1}^{\infty}i^{\omega}\left(\mathbb{E}\left(e^{\theta_{k1}\phi(l)}\right)\right)^i \lt \infty. \end{align*}
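For a concrete instance of item (i), take, purely for illustration, $\xi(t)=ct+\sigma B(t)$ with a standard Brownian motion B and constants $c \gt 0$, $\sigma \gt 0$, that is, the Lévy triplet $(c,\sigma^2,0)$. Then

\begin{align*} \phi(s)=\log \mathbb{E}\left(e^{-s\xi(1)}\right)=-cs+\frac{\sigma^2s^2}{2}, \end{align*}

so $\phi(\kappa) \lt 0$ holds precisely when $0 \lt \kappa \lt 2c/\sigma^2$, and the series displayed above converges by the ratio test, since $\mathbb{E}(e^{\theta_{k1}\phi(l)}) \lt 1$ because $\theta_{k1} \gt 0$ and $\phi(l) \lt 0$.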

(ii) When the general càdlàg process is determined by the fractional Brownian motion, the Vasicek model, the CIR model or the Heston model, it is evident from Section 3 of [Reference Guo and Wang11] that there exist constants κ and η satisfying $\eta \gt \kappa[\omega]/l-1+2\kappa/l \gt 0$, such that the following condition holds,

(2.6)\begin{equation} \int_{0}^{\infty}\max\{s^{\eta},1\}\mathbb{E}(e^{-\kappa\xi(s)})ds \lt \infty, \end{equation}

where $\omega\in[0,\kappa)$, $l\in(0, \kappa]$ and [A] denotes the integer part of A. Assume that for any integer $k\in[1,d]$, $N_k(t)$ is a homogeneous Poisson counting process with arrival times $\tau_{ki},~i\in\mathbb{N}^+$. Consequently, the inter-arrival times $\theta_{ki}$ follow an exponential distribution with parameter $\lambda_k$. Note that $\tau_{ki}$ is a gamma random variable satisfying

(2.7)\begin{align}P(\tau_{ki}\in ds)=\frac{s^{i-1}\lambda_k^i}{(i-1)!}e^{-\lambda_ks}ds. \end{align}

Thus, we have that for any δ > 0, $\omega\in[0,\kappa)$, $l\in(0, \kappa]$ and integer $k\in[1,d]$,

\begin{align*} &\sum_{i=1}^{\infty}i^{\omega}\mathbb{E}\left(e^{-l\xi(\tau_{ki})}\right)\\ &=\sum_{i=1}^{\infty}i^{\omega}\int_{0}^{\infty}\mathbb{E}\left(e^{-l\xi(s)}\right)\frac{s^{i-1}\lambda_k^i}{(i-1)!}e^{-\lambda_ks}ds\\ &\leq C \int_{0}^{\infty}\max\{s^{[\omega]+1},1\}\mathbb{E}\left(e^{-l\xi(s)}\right)ds\\ &\leq C \left(\int_{0}^{2}\max\{s^{[\omega]+1},1\}\left(\mathbb{E}\left(e^{-\kappa\xi(s)}\right)\right)^{l/\kappa}ds+\int_{2}^{\infty}\frac{s^{[\omega]+2-l/\kappa}\log^{1+\delta}s}{s^{1-l/\kappa}\log^{1+\delta}s}\left(\mathbb{E}\left(e^{-\kappa\xi(s)}\right)\right)^{l/\kappa}ds\right)\\ &\leq C\left(\int_{0}^{2}\mathbb{E}\left(e^{-\kappa\xi(s)}\right)ds\right)^{l/\kappa}\\ &+\left(\int_2^{\infty}s^{\kappa[\omega]/l-1+2\kappa/l}\log^{(1+\delta)\kappa/l}s\,\mathbb{E}\left(e^{-\kappa\xi(s)}\right)ds\right)^{l/\kappa} \left(\int_{2}^{\infty}s^{-1}\log^{-(1+\delta)\kappa/(\kappa-l)}s\,ds\right)^{1-l/\kappa}\\ &\leq C\left(\left(\int_0^2\mathbb{E}\left(e^{-\kappa\xi(s)}\right)ds\right)^{l/\kappa}+\left(\int_{2}^{\infty}s^{\eta}\mathbb{E}\left(e^{-\kappa\xi(s)}\right)ds\right)^{l/\kappa}\right) \lt \infty, \end{align*}

where in the first step, we use relation (2.7), in the second step, we utilize the fact that for any non-negative constant n, the following relation holds

\begin{align*} \sum_{i=1}^{\infty}i^{n}\frac{s^{i-1}\lambda_k^i}{(i-1)!}e^{-\lambda_ks}\leq \begin{cases} C \max\{s^n,1\},&n=0,1,2,\ldots\\ C \max\{s^{[n]+1},1\},&\mathrm{else}, \end{cases} \end{align*}

in the third and fourth steps, we apply Hölder’s inequality, and in the last step, we use relation (2.6).
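The verification above can also be checked numerically, in line with validating the moment condition from a numerical perspective: truncate the series in Assumption 2.1, estimate $\mathbb{E}(e^{-l\xi(s)})$ on a time grid by Monte Carlo, and mix over the gamma density in (2.7). The sketch below does this assuming, purely for illustration, that $\xi(t)=\int_0^t r(s)ds$ with r a CIR process, with an Euler full-truncation step standing in for the explicit order 3.0 weak scheme; all parameter values are hypothetical.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)

def exp_moment_curve(l, t_grid, r0, a, b, sigma, n_paths):
    """m(s) ~ E[exp(-l*xi(s))] on t_grid, assuming xi(t) = int_0^t r(u) du with a CIR
    short rate dr = a*(b - r) dt + sigma*sqrt(r) dW (Euler with full truncation)."""
    dt = t_grid[1] - t_grid[0]
    r = np.full(n_paths, r0)
    xi = np.zeros(n_paths)
    m = np.empty(len(t_grid))
    m[0] = 1.0
    for j in range(1, len(t_grid)):
        rp = np.maximum(r, 0.0)
        r = r + a * (b - rp) * dt + sigma * np.sqrt(rp) * rng.normal(0.0, np.sqrt(dt), size=n_paths)
        xi = xi + np.maximum(r, 0.0) * dt
        m[j] = np.exp(-l * xi).mean()
    return m

def truncated_assumption_series(l, omega, lam, t_max=80.0, n_steps=4000, n_paths=20_000, i_max=40):
    """Truncated sum_i i^omega * E[exp(-l*xi(tau_ki))], mixing m(s) over the
    Gamma(i, rate=lam) density of tau_ki from relation (2.7)."""
    t_grid = np.linspace(0.0, t_max, n_steps + 1)
    m = exp_moment_curve(l, t_grid, r0=0.03, a=0.8, b=0.05, sigma=0.15, n_paths=n_paths)
    ds = t_grid[1] - t_grid[0]
    total = 0.0
    for i in range(1, i_max + 1):
        log_dens = ((i - 1) * np.log(np.maximum(t_grid, 1e-300))
                    + i * np.log(lam) - lgamma(i) - lam * t_grid)
        total += i ** omega * np.sum(m * np.exp(log_dens)) * ds
    return total

# increasing i_max should leave the value essentially unchanged when Assumption 2.1 holds
print(truncated_assumption_series(l=2.0, omega=1.0, lam=1.5))
```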

(iii) Suppose that there exists a constant κ such that

(2.8)\begin{align} \mathbb{E}\left(e^{-\kappa\xi(t)}\right)\sim\varphi(\kappa)e^{\vartheta(\kappa)t}, \mathrm{as}~t\rightarrow\infty, \end{align}

where the function $\varphi(\kappa) \gt 0$ is bounded for any fixed κ, and the function $\vartheta(\kappa) \lt 0$. Additionally, it is assumed that there exist a constant T and a sufficiently large constant $H' \gt 0$ such that

(2.9)\begin{align} \sup_{t\in [0, T]}\mathbb{E}\left(e^{-\kappa\xi(t)}\right) \lt H'. \end{align}

Section 4 of [Reference Cheng, Konstantinides and Wang6] implies that when the general càdlàg process is determined by the Vasicek model, the CIR model or the Heston model, the conditions (2.8) and (2.9) can be satisfied. Furthermore, assume that for any integer $k\in[1,d]$, $N_k(t)$ is a renewal counting process. Thus, it can be shown that there exists a large $T_0$ such that for any $\omega\in[0,\kappa)$, $l\in(0, \kappa]$ and integer $k\in[1,d]$, the following relation holds

\begin{align*} \sum_{i=1}^{\infty}i^{\omega}\mathbb{E}\left(e^{-l\xi(\tau_{ki})}\right)& \leq \sum_{i=1}^{\infty}i^{\omega}\left(\mathbb{E}\left(e^{-\kappa\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki}\leq T_0\}}\right)+\mathbb{E}\left(e^{-\kappa\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \gt T_0\}}\right)\right)^{l/\kappa}\\ &\leq C\sum_{i=1}^{\infty}i^{\omega}\left(H' P(\tau_{ki}\leq T_0)+\varphi(\kappa)\mathbb{E}\left(e^{\vartheta(\kappa)\tau_{ki}}\right)\right)^{l/\kappa}\\ &\leq C\sum_{i=1}^{\infty}i^{\omega}\left( \left(P(\theta_{k1}\leq T_0)\right)^i+\left(\mathbb{E}\left(e^{\vartheta(\kappa)\theta_{k1}}\right)\right)^{i}\right)^{l/\kappa}\\ &\leq C\sum_{i=1}^{\infty}i^{\omega}\left( \left(P(\theta_{k1}\leq T_0)\right)^{il/\kappa}+\left(\mathbb{E}\left(e^{\vartheta(\kappa)\theta_{k1}}\right)\right)^{il/\kappa}\right) \lt \infty, \end{align*}

where in the first step, we use Hölder’s inequality, in the second step, we apply relations (2.8) and (2.9), and in the fourth step, we use the $C_r$ inequality.

Remark 2.2.

Under the conditions of Assumption 2.1 with $N_1(t)=N_2(t)=\cdots=N(t)$, by Hölder’s inequality, we have that for any $l_1,l_2\in(0,\kappa/2)$,

\begin{align*} \mathbb{E}\left(e^{-l_1\xi(\tau_i)}e^{-l_2\xi(\tau_j)}\right)\leq \left(\mathbb{E}\left(e^{-2l_1\xi(\tau_i)}\right)\right)^{1/2}\left(\mathbb{E}\left(e^{-2l_2\xi(\tau_j)}\right)\right)^{1/2} \lt \infty,~i\neq j. \end{align*}

Indeed, the relation above is essential in the proof of Theorem 3.2, as evident in Lemmas 4.10-4.11.

3. Main results

Now, we are ready to state our main theorems. In Theorem 3.1, we explore the extremes for the systemic risk measures $SES_k(t,q)$ and $MES_k(t,q)$ under an asymptotic independence structure. In Theorem 3.2, we investigate the extremes for $SES_k(t,q)$ and $MES_k(t,q)$ under an asymptotic dependence structure.

Theorem 3.1.

Consider the $SES_k(t,q)$ and $MES_k(t,q)$ defined by (1.3) and (1.4) with Assumption 2.1. Suppose that random variables X11, X12, …, $X_{d1}$, … are pSQAI. For any integer $k\in[1,d]$, assume that there exists a representing distribution F satisfying $\lim_{x\rightarrow\infty}\frac{\overline{F}_k(x)}{\overline{F}(x)}=: b_k\in(0,\infty)$.

(i) Suppose that $F\in\mathcal{D}\cap\mathcal{L}$ with $\kappa \gt 2\max\{J_{F_1}^+,\ldots,J_{F_d}^+\}$ and $\min\{J_{F_1}^-,\ldots,J_{F_d}^-\} \gt 1$. For any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, if $VaR_S(t,q)$ diverges to $\infty$ as $q\uparrow1$, then the following relations hold

(3.1)\begin{align} SES_k(t,q)&\sim\frac{A(t,q)\left(1-D_k(t,1)\right)\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}\nonumber\\ &+\frac{A(t,q)\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)y\right)\lambda_k(ds)dy}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}, \end{align}

and

(3.2)\begin{align} MES_k(t,q)&\sim\frac{A(t,q)\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}\nonumber\\ &+\frac{A(t,q)\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)y\right)\lambda_k(ds)dy}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}, \end{align}

where

\begin{align*} &D_k(t,1)\\ &=\lim_{q\uparrow1}\frac{\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt VaR_S(t,q)\right)\lambda_k(ds)+\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt VaR_S(t,q)y\right)\lambda_k(ds)dy}{\sum_{k=1}^{d}\left(\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt VaR_S(t,q)\right)\lambda_k(ds)+\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt VaR_S(t,q)y\right)\lambda_k(ds)dy\right)}. \end{align*}

(ii) Assume $F\in\mathcal{R}_{-\alpha}$ with $\alpha\in(1,\kappa/2)$. For any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we have

(3.3)\begin{align} SES_k(t,q)&\sim \frac{\alpha b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}{(\alpha-1)\left(\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)\right)^{1-\frac{1}{\alpha}}}\nonumber\\ &\times\left(\frac{\alpha}{\alpha-1}-\frac{b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}{\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}\right)VaR_X(q), \end{align}

and

(3.4)\begin{align}&MES_k(t,q)\sim\frac{\alpha^2b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}{(\alpha-1)^2\left(\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)\right)^{1-\frac{1}{\alpha}}}VaR_X(q). \end{align}

Actually, when $F\in\mathcal{R}_{-\alpha}$, relation (4.28) makes it clear that the condition “$VaR_S(t,q)$ diverges to $\infty$ as $q\uparrow1$” in Theorem 3.1(i) is automatically satisfied. Consequently, we drop this condition in Theorem 3.1(ii).

Theorem 3.2.

Consider the $SES_k(t,q)$ and $MES_k(t,q)$ defined by (1.3) and (1.4) with Assumption 2.1. Let $N_1(t)=\cdots=N_d(t)=N(t)$. Suppose that $\{(X_1,X_2,\ldots,X_d),(X_{1i},X_{2i},\ldots,X_{di}),i\in\mathbb{N}^+\}$ is a sequence of independent and identically distributed random vectors with $(X_1,X_2,\ldots,X_d)\in MRV_{d}(\alpha,F,\upsilon)$, $\alpha\in(1,\kappa/2)$ such that $\upsilon((\boldsymbol{1},\boldsymbol{\infty}]) \gt 0$. Then, for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\tau_{1}\leq t) \gt 0$, the following relations hold

(3.5)\begin{align}&SES_k(t,q)\sim\frac{\alpha\left(\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\right)^{\frac{1}{\alpha}}\left[(\alpha-1)\int_{B(k,\alpha)}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}{(\alpha-1)^2\left(\upsilon\left(\Delta\right)\right)^{1-\frac{1}{\alpha}}}VaR_X(q), \end{align}

and

(3.6)\begin{align}MES_k(t,q)\sim\frac{\alpha\left(\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\right)^{\frac{1}{\alpha}}\left[(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}{(\alpha-1)^2\left(\upsilon\left(\Delta\right)\right)^{1-\frac{1}{\alpha}}}VaR_X(q), \end{align}

where

(3.7)\begin{align} &\Upsilon_k=\left\{(x_1,x_2,\ldots,x_d)\in\mathbb{R}_{+}^{d}:~x_k \gt 1\right\},~~~\Delta=\left\{(x_1,x_2,\ldots,x_d)\in\mathbb{R}_{+}^{d}:~\sum_{l=1}^{d}x_l \gt 1\right\},\nonumber\\ &V_k(y,1)=\upsilon\left((x_1,x_2,\ldots,x_d)\in\mathbb{R}_{+}^{d}:~x_k \gt y,~\sum_{l=1}^{d}x_l \gt 1\right),~\mathrm{for}~y\in(0,1],~~~~~~~~ \end{align}

and

\begin{align*} B(k,\alpha)=\frac{(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)}{\alpha \upsilon\left(\Delta\right)}. \end{align*}

Indeed, all of our results (3.3)–(3.6) are presented in the concise form $C\times VaR_X(q)$, where C is a constant independent of q, reflecting the combined effects of the general càdlàg process and the heavy-tailed property of losses. In the setting of Theorem 3.1, the influence of the dependence structure among losses on $SES_k(t,q)$ and $MES_k(t,q)$ is minimal, but it is crucial in Theorem 3.2. This indicates that the asymptotic dependence structure among losses has a significant impact on the systemic risk measures.

Corollary 3.1.

(i) For any integer $k\in[1,d]$, we assume $N_k(t)$ is a homogeneous Poisson counting process with parameter λk. Under the remaining conditions of Theorem 3.1(ii), for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, the following relations hold,

\begin{align*} SES_k(t,q)&\sim\frac{\alpha b_k\lambda_k\left[\alpha\sum_{k=1}^{d}b_k\lambda_k-(\alpha-1)b_k\lambda_k\right]\left[\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)ds\right]^{\frac{1}{\alpha}}}{(\alpha-1)^2\left(\sum_{k=1}^{d}b_k\lambda_k\right)^{2-\frac{1}{\alpha}}}VaR_X(q), \end{align*}

and

\begin{align*} MES_k(t,q)\sim\frac{\alpha^2b_k\lambda_k\left[\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)ds\right]^{\frac{1}{\alpha}}}{(\alpha-1)^2\left(\sum_{k=1}^{d}b_k\lambda_k\right)^{1-\frac{1}{\alpha}}}VaR_X(q). \end{align*}

(ii) Assume that N(t) is a homogeneous Poisson counting process with parameter λ. Under the remaining conditions of Theorem 3.2, for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\tau_{1}\leq t) \gt 0$, the following relations hold,

\begin{align*} SES_k(t,q)&\sim\frac{\alpha\left(\lambda\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)ds\right)^{\frac{1}{\alpha}}\left[(\alpha-1)\int_{B(k,\alpha)}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}{(\alpha-1)^2\left(\upsilon\left(\Delta\right)\right)^{1-\frac{1}{\alpha}}}VaR_X(q), \end{align*}

and

\begin{align*} MES_k(t,q)\sim\frac{\alpha\left(\lambda\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)ds\right)^{\frac{1}{\alpha}}\left[(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}{(\alpha-1)^2\left(\upsilon\left(\Delta\right)\right)^{1-\frac{1}{\alpha}}}VaR_X(q). \end{align*}
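To make Corollary 3.1(i) concrete, the sketch below evaluates the constants multiplying $VaR_X(q)$ for illustrative parameters, assuming (as in the Lévy example of Remark 2.1(i), purely for illustration) $\xi(s)=cs+\sigma B(s)$, so that $\mathbb{E}(e^{-\alpha\xi(s)})=e^{s\phi(\alpha)}$ with $\phi(\alpha)=-c\alpha+\sigma^2\alpha^2/2$ and $\int_{0}^{t}\mathbb{E}(e^{-\alpha\xi(s)})ds=(e^{t\phi(\alpha)}-1)/\phi(\alpha)$.

```python
import numpy as np

def corollary31_constants(alpha, b, lam, t, c, sigma):
    """Constants C such that SES_k(t,q) ~ C_k^SES * VaR_X(q) and MES_k(t,q) ~ C_k^MES * VaR_X(q)
    in Corollary 3.1(i), assuming xi(s) = c*s + sigma*B(s), so that
    E[exp(-alpha*xi(s))] = exp(s*phi(alpha)) with phi(alpha) = -c*alpha + sigma^2*alpha^2/2."""
    b, lam = np.asarray(b, dtype=float), np.asarray(lam, dtype=float)
    phi = -c * alpha + 0.5 * sigma ** 2 * alpha ** 2
    I = (np.exp(t * phi) - 1.0) / phi if phi != 0.0 else t   # int_0^t E[exp(-alpha*xi(s))] ds
    blam = b * lam
    total = blam.sum()
    ses = (alpha * blam * (alpha * total - (alpha - 1.0) * blam) * I ** (1.0 / alpha)
           / ((alpha - 1.0) ** 2 * total ** (2.0 - 1.0 / alpha)))
    mes = (alpha ** 2 * blam * I ** (1.0 / alpha)
           / ((alpha - 1.0) ** 2 * total ** (1.0 - 1.0 / alpha)))
    return ses, mes

# alpha = 1.8 < kappa/2 is compatible with phi(kappa) < 0 for, e.g., kappa = 4 when c = 0.1, sigma = 0.15
ses, mes = corollary31_constants(alpha=1.8, b=[1.0, 0.8], lam=[2.0, 3.0], t=1.0, c=0.1, sigma=0.15)
print("SES_k constants:", ses, "\nMES_k constants:", mes)
```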

Remark 3.1.

A financial institution needs to allocate an available capital amount $\mathcal{Q}$ across various business lines, assigning a proportional capital $\mathcal{Q}_k$ to each business unit such that $\mathcal{Q}=\sum_{k=1}^d\mathcal{Q}_k$. As discussed in Section 3.2 of [Reference Zhou, Dhaene and Yao31], for any integer $k\in[1,d]$, the CTE capital allocation rule is

\begin{align*} \mathcal{Q}_k&=\mathcal{Q}\frac{A_k(t,q)}{A(t,q)}=\mathcal{Q}\frac{\mathbb{E}(L_k(t)~|~S(t) \gt VaR_S(t,q))}{\mathbb{E}(S(t)~|~S(t) \gt VaR_S(t,q))}. \end{align*}

(i) Under the conditions of Theorem 3.1(ii), for any integer $k\in[1,d]$, applying relation (4.27) yields that

\begin{align*} \mathcal{Q}_k &\sim\mathcal{Q}\frac{b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}{\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}. \end{align*}

(ii) Under the conditions of Theorem 3.2, for any integer $k\in[1,d]$, using relation (4.32) gives that

\begin{align*} \mathcal{Q}_k \sim\mathcal{Q}\frac{\left[(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}{\sum_{k=1}^{d}\left[(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}. \end{align*}

Next, we give a specific example for Theorem 3.2, in which the constant before $VaR_X(q)$ can be calculated explicitly.

Remark 3.2.

Consider a special case where the economic system has only two lines of business (i.e., d = 2). Let the loss $X_i$ follow a Pareto distribution

(3.8)\begin{equation} F_i(x)=1-\left(\frac{\gamma_i}{x+\gamma_i}\right)^{\alpha},~\mathrm{for}~x \gt 0~\mathrm{and}~\gamma_i \gt 0,~i=1,2. \end{equation}

We assume that the survival copula of $(X_1,X_2)$ is an Archimedean one with a regularly varying generator function $\psi(\cdot)$; that is, for some β > 0, the relation

\begin{align*} \lim_{u\rightarrow0^+}\frac{\psi(yu)}{\psi(u)}=y^{-\beta}, \end{align*}

holds for any y > 0. According to [Reference Asimit, Furman, Tang and Vernic1], we can obtain that $(X_1,X_2)\in MRV_2(\alpha,F_1,\upsilon)$ with the measure υ satisfying, for any $(y_1,y_2)\in[0,\infty]^2\setminus{\{(0,0)\}}$,

(3.9)\begin{equation} \upsilon\left((y_1,\infty]\times(y_2,\infty]\right)=\left(y_1^{\alpha\beta}+\vartheta^{-\beta} y_2^{\alpha\beta}\right)^{-\frac{1}{\beta}}=:H(y_1,y_2),~~\beta \gt 0, \end{equation}

where $\vartheta=\big(\frac{\gamma_2}{\gamma_1}\big)^{\alpha}$. We write

(3.10)\begin{align} H^{(1)}(y_1,y_2)=-\frac{\partial H(y_1,y_2)}{\partial y_1}=\alpha \left(y_1^{\alpha\beta}+\vartheta^{-\beta}y_2^{\alpha\beta}\right)^{-1-1/\beta}y_1^{\alpha\beta-1}. \end{align}

Relations (3.9)-(3.10) imply that

(3.11)\begin{align}\int_{0+}^{1}V_1(y,1)dy&=\int_{0+}^{1}\int_{x}^{1}\upsilon(dy,(1-y,\infty])dx\nonumber\\ &=\int_{0}^{1}\int_{x}^{1}H^{(1)}(y,1-y)dydx\nonumber\\ &=\int_{0}^{1}\int_{x}^{1}\alpha \left(y^{\alpha\beta}+\vartheta^{-\beta}(1-y)^{\alpha\beta}\right)^{-1-1/\beta}y^{\alpha\beta-1}dydx, \end{align}

and

(3.12)\begin{align} \upsilon\left(\Delta\right)&=\upsilon\left((1,\infty]\times(0,\infty]\right)+\int_{0}^{1}\upsilon(dy,(1-y,\infty])\nonumber\\ &=1+\int_{0}^{1}\alpha \left(y^{\alpha\beta}+\vartheta^{-\beta}(1-y)^{\alpha\beta}\right)^{-1-1/\beta}y^{\alpha\beta-1}dy. \end{align}

Similarly, we can obtain explicit expressions for $B(k,\alpha)$, $\int_{0+}^{1}V_2(y,1)dy$, $\int_{B(k,\alpha)}^{1}V_1(y,1)dy$ and $\int_{B(k,\alpha)}^{1}V_2(y,1)dy$. Thus, we are able to compute the quantities on the right-hand side of Theorem 3.2. Admittedly, the integrals in relations (3.11)-(3.12) are difficult to evaluate in closed form. However, leveraging MATLAB software, we present the numerical calculation results in Section 5.
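For readers without MATLAB, the quantities in (3.11)-(3.12), together with $\upsilon(\Upsilon_1)=H(1,0)=1$ and $B(1,\alpha)$, can be evaluated with standard numerical integration; the following sketch is one such illustration in Python/SciPy with hypothetical parameter values.

```python
import numpy as np
from scipy import integrate

def H1(y1, y2, alpha, beta, theta):
    """H^{(1)}(y1, y2) from (3.10)."""
    return (alpha * (y1 ** (alpha * beta) + theta ** (-beta) * y2 ** (alpha * beta)) ** (-1.0 - 1.0 / beta)
            * y1 ** (alpha * beta - 1.0))

def remark32_quantities(alpha, beta, gamma1, gamma2):
    theta = (gamma2 / gamma1) ** alpha
    # upsilon(Delta) from (3.12)
    edge, _ = integrate.quad(lambda y: H1(y, 1.0 - y, alpha, beta, theta), 0.0, 1.0)
    ups_delta = 1.0 + edge
    # int_{0+}^{1} V_1(y,1) dy from (3.11): iterated integral over {0 < x < y < 1}
    intV1, _ = integrate.dblquad(lambda y, x: H1(y, 1.0 - y, alpha, beta, theta),
                                 0.0, 1.0, lambda x: x, lambda x: 1.0)
    ups_Upsilon1 = 1.0               # upsilon(Upsilon_1) = H(1, 0) = 1 by (3.9), as used in (3.12)
    B1 = ((alpha - 1.0) * intV1 + ups_Upsilon1) / (alpha * ups_delta)
    return ups_delta, intV1, ups_Upsilon1, B1

print(remark32_quantities(alpha=2.0, beta=1.0, gamma1=1.0, gamma2=1.5))
```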

4. Proof of main results

In this section, we provide detailed proofs of the main results. Section 4.1 proves Theorem 3.1, supported by three key lemmas. In Section 4.2, we prove Theorem 3.2 with the aid of three lemmas.

Additionally, we introduce some results essential for proving Theorems 3.1 and 3.2. For a distribution function F, with some simple adjustments to Proposition 2.2 in [Reference Bingham, Goldie and Teugels2], we see that for any fixed $0 \lt p_1 \lt J_F^{-}$ and $J_F^{+} \lt p_2 \lt \infty$, there exist positive constants $C_1$, $C_2$, $D_1$ and $D_2$ such that

(4.1)\begin{eqnarray} \frac{\overline{F}(xy)}{\overline{F}(x)}\leq C_1 y^{-{p_1}},~~~~~~\mathrm{and}~~~~~~~~\frac{\overline{F}(u)}{\overline{F}(uv)}\leq C_2 v^{p_2}, \end{eqnarray}

hold for all $xy\geq x \geq D_1$ and $uv\geq u\geq D_2$. It is easy to see that

(4.2)\begin{eqnarray} x^{-p}=o\left(\overline{F}(x)\right),~\mathrm{for~any}~p \gt J_F^{+}. \end{eqnarray}

Lemma 4.1. Let Y and Z be two independent random variables, where Y is distributed by $F\in\mathcal{D}\cap\mathcal{L}$ and Z is non-negative and nondegenerate at 0 satisfying $\mathbb{E}(Z^p) \lt \infty$ for some $p \gt J_F^{+}$. Then, the distribution of the product YZ belongs to the class $\mathcal{D}\cap\mathcal{L}$ and $P(YZ \gt x)\asymp\overline{F}(x)$.

Proof. See Lemma 4.1 in [Reference Wang and Tang29].

Lemma 4.2. Let Y and Z be two independent random variables, where Y is distributed by F. If $F\in \mathcal{D}$ with $J_F^- \gt 0$, and Z is non-negative and nondegenerate at 0, then, for arbitrarily fixed $0 \lt p_1 \lt J_{F}^-\leq J_{F}^+ \lt p_2 \lt \infty$, there exists a positive constant C irrespective of Z such that for large x,

\begin{eqnarray*} &&P(YZ \gt x\mid Z)\leq C \overline{F}(x)\max\{Z^{p_1}, Z^{p_2}\}. \end{eqnarray*}

Proof. Following the proof of Lemma 4.1 in [Reference Wang and Tang29], we can get this lemma.

Lemma 4.3. Assume that $Y_1, \ldots,$ and $Y_d$ are real-valued and independent random variables, distributed by $F_1, \ldots,$ and $F_d$, respectively, and that the random weights $Z_1, \ldots,$ and $Z_d$ are non-negative, not degenerate at 0, and arbitrarily dependent on each other, but independent of $Y_1, \ldots,$ and $Y_d$. If ${F}_i\in \mathcal{D}\cap\mathcal{L}$ and $\mathbb{E}(Z_i^{\beta_i}) \lt \infty$ for some $\beta_i \gt J_{F_i}^+$ and all $i=1,\ldots,d$, then the following relation holds,

(4.3)\begin{eqnarray} &&P\left(\sum_{i=1}^{d}Y_iZ_i \gt x\right)\sim\sum_{i=1}^{d}P(Y_iZ_i \gt x). \end{eqnarray}

Proof. See Theorem 3 of [Reference Tang and Yuan28].

Lemma 4.4. Let $Y_1, \ldots,$ and $Y_d$ be d real-valued random variables with distribution functions ${F}_i\in\mathcal{D}$ for $1\leq i\leq d$, and $Z_1, \ldots,$ and $Z_d$ be another d non-negative random variables independent of $Y_1, \ldots,$ and $Y_d$ such that $\mathbb{E}(Z_i^p) \lt \infty, 1\leq i\leq d$, for some $p \gt \max\{J_{F_1}^+,\ldots,J_{F_d}^+\}$. If $Y_1, \ldots,$ and $Y_d$ are pSQAI, then $Y_1Z_1$, …, and $Y_dZ_d$ are also pSQAI.

Proof. See Theorem 2.2 of [Reference Li20].

Lemma 4.5. Let $Y_1, \ldots,$ and $Y_d$ be d real-valued and pSQAI random variables with distribution functions ${F}_i\in\mathcal{D}\cap\mathcal{L}$ for $1\leq i\leq d$, and $Z_1, \ldots,$ and $Z_d$ be another d non-negative random variables, independent of $Y_1, \ldots,$ and $Y_d$, such that $\mathbb{E}(Z_i^p) \lt \infty, 1\leq i\leq d$, for some $p \gt \max\{J_{F_1}^+,\ldots,J_{F_d}^+\}$. Then, relation (4.3) holds.

Proof. See Theorem 2.3 of [Reference Li20].

4.1. Proof of Theorem 3.1

Next, we present three key lemmas (i.e., Lemmas 4.6-4.8) in the proof of Theorem 3.1.

Lemma 4.6. (i) Under the conditions of Theorem 3.1(i), for any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we can obtain,

(4.4)\begin{align} P(S(t) \gt x)\sim\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds). \end{align}

(ii) Under the conditions of Theorem 3.1(ii), for any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we can obtain

(4.5)\begin{align} P(S(t) \gt x)\sim\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)\overline{F}(x). \end{align}

Proof. (i) By Assumption 2.1 and Lemma 4.5, choosing a large M, we can obtain

(4.6)\begin{align} P(S(t) \gt x)&\gtrsim\sum_{k=1}^{d}\left(\sum_{i=1}^{\infty}-\sum_{i=M+1}^{\infty}\right)P(X_{ki}e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt x)=:J_1-J_2. \end{align}

For $J_1$, it is obvious that

(4.7)\begin{align} J_1=\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds). \end{align}

For $0 \lt p_1 \lt \min\{J_{F_1}^-,\ldots,J_{F_d}^-\} \lt \max\{J_{F_1}^+,\ldots,J_{F_d}^+\} \lt p_2 \lt \kappa/2$, it follows from Assumption 2.1, Lemma 4.2 and the condition of tail equivalence (i.e., $\lim_{x\rightarrow\infty}\frac{\overline{F}_k(x)}{\overline{F}(x)}=b_k,~k=1,2,\ldots,d$) that there exists a large enough M such that for any small enough ϵ,

(4.8)\begin{align} J_2\leq C\sum_{k=1}^{d}\sum_{i=M+1}^{\infty}\left[\mathbb{E}\left(e^{-p_1\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}}\right)+\mathbb{E}\left(e^{-p_2\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}}\right)\right]\overline{F}_k(x)\leq C\epsilon \overline{F}(x). \end{align}

On the one hand, choosing a constant M, using Assumption 2.1, Lemma 4.1 and the condition of tail equivalence gives that for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$,

(4.9)\begin{align} & \int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds)\geq \sum_{i=1}^{M}P\left(X_ke^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt x\right)\geq C \overline{F}(x). \end{align}

On the other hand, choosing a large constant M, by Assumption 2.1, Lemma 4.1, the condition of tail equivalence and (4.8), we get,

\begin{align*} \int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds)&\leq \sum_{i=1}^{M}P\left(X_ke^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt x\right)+\sum_{i=M+1}^{\infty}P\left(X_ke^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt x\right)\\ &\leq C \overline{F}(x). \end{align*}

Thus, we can obtain that for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, the following relation holds

(4.10)\begin{align} & \int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds) \asymp \overline{F}(x). \end{align}

A combination of the arbitrariness of ϵ, (4.6)-(4.8) and (4.10) gives the lower bound of (4.4) immediately. Choosing a $\delta\in(0,1)$ and a large M satisfying $\sum_{i=M+1}^{\infty}\frac{1}{i^2} \lt 1$, we can get

(4.11)\begin{align} &P(S(t) \gt x)\nonumber\\ &\leq P\left(\sum_{k=1}^{d}\sum_{i=1}^{M}X_{ki}e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt \delta x\right)+P\left(\sum_{k=1}^{d}\sum_{i=M+1}^{\infty}X_{ki}e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt (1-\delta) x\right). \end{align}

Denote the two probabilities on the right-hand side of (4.11) by $K_1$ and $K_2$, respectively. Choose a positive function g(x) satisfying $g(x)\rightarrow\infty$ and $\frac{g(x)}{x}\rightarrow0$ as $x\rightarrow\infty$; clearly, $g(x)=\frac{x}{\ln x}$ satisfies this requirement. Take $\max\{J_{F_1}^+,\ldots,J_{F_d}^+\}+\epsilon_1 \lt p \lt \kappa$ for some $\epsilon_1 \gt 0$. By Assumption 2.1, (4.1) and Lemma 4.5, we have

(4.12)\begin{align} K_1&\sim \sum_{k=1}^{d}\sum_{i=1}^{M}P\left(X_{ki}e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt \delta x\right)\nonumber\\ &\leq \sum_{k=1}^{d}\sum_{i=1}^{M}P\left(e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt g(x)\right)+\sum_{k=1}^{d}\sum_{i=1}^{M}\int_{0}^{g(x)}P\left(X_{ki} \gt \frac{\delta x}{y}\right)P\left(e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}}\in dy\right)\nonumber\\ &\lesssim\delta^{-p}\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds)+o(1)\overline{F}(x), \end{align}

where in the third step, we use the fact that when $g(x)=\frac{x}{\ln x}$, the following relation holds due to Markov’s inequality, Assumption 2.1 and (4.2),

(4.13)\begin{align} P(e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt g(x))\leq x^{-p+\epsilon_1}\mathbb{E}\left(e^{-p\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}}\right)\left(\ln x\right)^{p}x^{-\epsilon_1}=o(1)\overline{F}(x). \end{align}

Similar to relation (4.8), we can obtain that there exists a large enough M such that for any small enough ϵ,

(4.14)\begin{align} K_2&\leq \sum_{k=1}^{d}\sum_{i=M+1}^{\infty}P\left(X_{ki}e^{-\xi(\tau_{ki})}\mathbb{I}_{\{\tau_{ki} \leq t\}} \gt \frac{(1-\delta) x}{di^2}\right) \leq C\epsilon\overline{F}(x). \end{align}

Letting $\delta\rightarrow1$, it follows from the arbitrariness of ϵ and (4.10)–(4.14) that the upper bound in (4.4) holds.

(ii) Following the proof of Lemma 4.6 (i), applying Breiman’s Theorem and the condition of tail equivalence, we establish that for any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, relation (4.5) holds.

Remark 4.1.

(i) Under the conditions of Theorem 3.1(i), for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we can derive that the following relation holds,

(4.15)\begin{align} P(L_k(t) \gt x)\sim\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt x\right)\lambda_k(ds). \end{align}

(ii) Under the conditions of Theorem 3.1(ii), for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we can get

\begin{align*} P(L_k(t) \gt x)\sim b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)\overline{F}(x). \end{align*}

A combination of (4.10), Lemma 4.6(i) and Remark 4.1(i) gives that under the conditions of Theorem 3.1(i), for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$,

(4.16)\begin{align} P(S(t) \gt x)\asymp P(L_k(t) \gt x)\asymp\overline{F}(x). \end{align}

This is essential in the proofs of Lemma 4.7, Lemma 4.8, and relations (4.20) and (4.21).

Lemma 4.7. Under the conditions of Theorem 3.1(i), for any $\delta_0\in(0,1)$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we can get that the following relation

(4.17)\begin{align} P(L_i(t) \gt xy,L_j(t) \gt x)=o(1)P(L_i(t) \gt x),~i\neq j, \end{align}

holds uniformly for all $y\in[\delta_0,1]$.

Proof. Without loss of generality, we assume i = 1 and j = 2. Choosing a large M, $\delta_1\in(0,1)$ and $\delta_2\in(0,1)$, we can obtain

(4.18)\begin{align} &P(L_1(t) \gt xy,L_2(t) \gt x)\nonumber\\ &\leq P\left(\sum_{i=1}^{M}X_{1i}e^{-\xi(\tau_{1i})}\mathbb{I}_{\{\tau_{1i} \leq t\}} \gt \delta_1xy, \sum_{i=1}^{M}X_{2i}e^{-\xi(\tau_{2i})}\mathbb{I}_{\{\tau_{2i} \leq t\}} \gt \delta_2x\right)\nonumber\\ &+P\left(\sum_{i=1}^{M}X_{1i}e^{-\xi(\tau_{1i})}\mathbb{I}_{\{\tau_{1i} \leq t\}} \gt \delta_1xy, \sum_{i=M+1}^{\infty}X_{2i}e^{-\xi(\tau_{2i})}\mathbb{I}_{\{\tau_{2i} \leq t\}} \gt (1-\delta_2)x\right)\nonumber\\ &+P\left(\sum_{i=M+1}^{\infty}X_{1i}e^{-\xi(\tau_{1i})}\mathbb{I}_{\{\tau_{1i} \leq t\}} \gt (1-\delta_1)xy, \sum_{i=1}^{M}X_{2i}e^{-\xi(\tau_{2i})}\mathbb{I}_{\{\tau_{2i} \leq t\}} \gt \delta_2x\right)\nonumber\\ &+P\left(\sum_{i=M+1}^{\infty}X_{1i}e^{-\xi(\tau_{1i})}\mathbb{I}_{\{\tau_{1i} \leq t\}} \gt (1-\delta_1)xy, \sum_{i=M+1}^{\infty}X_{2i}e^{-\xi(\tau_{2i})}\mathbb{I}_{\{\tau_{2i} \leq t\}} \gt (1-\delta_2)x\right). \end{align}

It follows from Assumption 2.1 and Lemma 4.4 that for any finite integer M, the random variables $X_{11}e^{-\xi(\tau_{11})}\mathbb{I}_{\{\tau_{11} \leq t\}}$, …, $X_{1M}e^{-\xi(\tau_{1M})}\mathbb{I}_{\{\tau_{1M} \leq t\}}$, $X_{21}e^{-\xi(\tau_{21})}\mathbb{I}_{\{\tau_{21} \leq t\}}$, …, $X_{2M}e^{-\xi(\tau_{2M})}\mathbb{I}_{\{\tau_{2M} \leq t\}}$ are still pSQAI. For the first term in (4.18), denoted by $K_1$, considering the definitions of pSQAI and the class $\mathcal{D}$, Assumption 2.1 and Lemma 4.1, we can obtain that for any small enough ϵ,

\begin{align*} K_1&\leq\sum_{i=1}^{M}\sum_{j=1}^{M}P\left(X_{1i}e^{-\xi(\tau_{1i})}\mathbb{I}_{\{\tau_{1i} \leq t\}} \gt \frac{\delta_1xy}{M}, X_{2j}e^{-\xi(\tau_{2j})}\mathbb{I}_{\{\tau_{2j} \leq t\}} \gt \frac{\delta_2x}{M}\right)\\ &\leq C\epsilon \sum_{j=1}^{M}P\left(X_{2j}e^{-\xi(\tau_{2j})}\mathbb{I}_{\{\tau_{2j} \leq t\}} \gt \frac{\delta_2x}{M}\right)\\ &\leq C\epsilon \overline{F}(x). \end{align*}

Similar to the proof of (4.8), we can obtain that for any small enough ϵ, the other terms in (4.18) are less than $C\epsilon \overline{F}(x)$. Therefore, using (4.16) yields that for any small enough ϵ,

\begin{align*} P(L_1(t) \gt xy,L_2(t) \gt x)\leq C\epsilon \overline{F}(x)\lesssim C\epsilon P(L_1(t) \gt x). \end{align*}

The proof is concluded by the arbitrariness of ϵ.

Lemma 4.8. Under the conditions of Theorem 3.1(i), for any $\delta_0\in(0,1)$, any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$, we can obtain that it holds uniformly for all $y\in[\delta_0,1]$,

(4.19)\begin{equation} P(L_k(t) \gt xy, S(t) \gt x)\sim P(L_k(t) \gt x). \end{equation}

Proof. Without loss of generality, we assume that k = 1. On the one hand, it is obvious that

\begin{align*} P(L_1(t) \gt xy, S(t) \gt x)\geq P(L_1(t) \gt xy, L_1(t) \gt x)= P(L_1(t) \gt x). \end{align*}

On the other hand, applying Lemma 4.6(i), Lemma 4.7 and (4.16) gives that,

\begin{align*} P(L_1(t) \gt xy, S(t) \gt x) &\leq P(S(t) \gt x)-P\left(L_1(t)\leq xy, \bigcup_{k=2}^{d}\{L_k(t) \gt x\}\right)\\ &\leq P(S(t) \gt x)-\sum_{k=2}^{d}P(L_k(t) \gt x)+\sum_{k=2}^{d}P(L_1(t) \gt xy, L_k(t) \gt x)\\ &+\sum_{2\leq i \lt j\leq d}P(L_i(t) \gt x, L_j(t) \gt x)\\ &\lesssim P(L_1(t) \gt x). \end{align*}

Thus, we can obtain that relation (4.19) holds.

A key point to mention is that if the condition “Under the conditions of Theorem 3.1(i)” is replaced by “Under the conditions of Theorem 3.1(ii)” in Lemmas 4.7-4.8, the results in Lemmas 4.7-4.8 still hold. These are crucial in the proof of Theorem 3.1(ii).

Proof of Theorem 3.1. (i) By (4.1) and (4.16), we have for any y > 1, any $\beta_2\in(1,\min\{J_{F_1}^-,\ldots,J_{F_d}^-\})$, any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$,

(4.20)\begin{equation} P(L_k(t) \gt A(t,q)y)\lesssim C\overline{F}(A(t,q)y)\leq Cy^{-\beta_2}\overline{F}(A(t,q))\leq C y^{-\beta_2}, \end{equation}

which is integrable. First, we prove that relation (3.2) holds. For any $\varepsilon\in(0,1)$ and integer $k\in[1,d]$, on the one hand, using (4.16) and Lemma 4.8 yields that

(4.21)\begin{align} &\int_{0}^{1}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy\nonumber\\ &=\int_{0}^{\varepsilon}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy+\int_{\varepsilon}^{1}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy\nonumber\\ &\lesssim \varepsilon P(S(t) \gt A(t,q))+(1-\varepsilon)P(L_k(t) \gt A(t,q))\leq (1+C\varepsilon)P(L_k(t) \gt A(t,q)). \end{align}

On the other hand, using Lemma 4.8 yields that for any $\varepsilon\in(0,1)$,

\begin{align*} \int_{0}^{1}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy &\gtrsim (1-\varepsilon)P(L_k(t) \gt A(t,q)). \end{align*}

Thus, for any integer $k\in[1,d]$, by the arbitrariness of ɛ, we have

(4.22)\begin{equation} \int_{0}^{1}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy\sim P(L_k(t) \gt A(t,q)). \end{equation}

For any integer $k\in[1,d]$, a combination of (4.20), (4.22), the Dominated Convergence Theorem, Lemma 4.6(i) and Remark 4.1(i) gives that

\begin{align*} &MES_k(t,q)=\frac{A(t,q)\left[\int_{0}^{1}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy+\int_{1}^{\infty}P(L_k(t) \gt yA(t,q))dy\right]}{P(S(t) \gt A(t,q))}\\ &\quad\quad\sim\frac{A(t,q)\left[\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)+\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)y\right)\lambda_k(ds)dy\right]}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}. \end{align*}

Next, we prove that relation (3.1) holds. Following the proof of relation (3.2), it is obvious that there exists a large enough constant $q_0$ such that, for any $q \gt q_0$, any small enough ϵ, any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\max\{\tau_{11},\ldots,\tau_{d1}\}\leq t) \gt 0$,

(4.23)\begin{equation} (1-\epsilon)D_k(t,1)\leq\frac{A_k(t,q)}{A(t,q)}\leq (1+\epsilon)D_k(t,1). \end{equation}

On the one hand, for any $q \gt q_0$, small enough ϵ and integer $k\in[1,d]$, we can establish

(4.24)\begin{align} &SES_k(t,q) =\frac{A(t,q)\int_{\frac{A_k(t,q)}{A(t,q)}}^{\infty}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy}{P(S(t) \gt A(t,q))}\nonumber\\ &\leq \frac{A(t,q)\left[\int_{(1-\epsilon)D_k(t,1)}^{1}P(L_k(t) \gt yA(t,q),S(t) \gt A(t,q))dy+\int_{1}^{\infty}P(L_k(t) \gt yA(t,q))dy\right]}{P(S(t) \gt A(t,q))}\nonumber\\ &\sim\frac{A(t,q)\left(1-(1-\epsilon)D_k(t,1)\right)\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}\nonumber\\ &+\frac{A(t,q)\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)y\right)\lambda_k(ds)dy}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}, \end{align}

where in the second step, we use relation (4.23), and in the last step, we use (4.20), the Dominated Convergence Theorem, Remark 4.1(i), Lemma 4.6(i), and Lemma 4.8. On the other hand, similar to the proof of (4.24), for any $q \gt q_0$, small enough ϵ and integer $k\in[1,d]$, we can obtain

(4.25)\begin{align} SES_k(t,q) &\gtrsim\frac{A(t,q)\left(1-(1+\epsilon)D_k(t,1)\right)\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}\nonumber\\ &+\frac{A(t,q)\int_{1}^{\infty}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)y\right)\lambda_k(ds)dy}{\sum_{k=1}^{d}\int_{0}^{t}P\left(X_ke^{-\xi(s)} \gt A(t,q)\right)\lambda_k(ds)}. \end{align}

Thus, by the arbitrariness of ϵ, together with (4.24) and (4.25), we conclude that relation (3.1) holds.

(ii) Applying Lemma 4.6(ii) yields that the distribution function of S(t) belongs to the class $\mathcal{R}_{-\alpha}$. It follows from Karamata’s Theorem that

(4.26)\begin{align} A(t,q) \sim \frac{\alpha VaR_S(t,q)}{\alpha-1}. \end{align}
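For the reader's convenience, we sketch the Karamata step behind (4.26), under the reading of A(t,q) as the conditional expectation $\mathbb{E}\left(S(t)\mid S(t) \gt VaR_S(t,q)\right)$ used in the Monte Carlo estimator of Section 5.3. Since the distribution of S(t) belongs to $\mathcal{R}_{-\alpha}$ with $\alpha \gt 1$, Karamata's Theorem yields $\int_{x}^{\infty}P(S(t) \gt u)du\sim xP(S(t) \gt x)/(\alpha-1)$, and hence

\begin{align*} A(t,q)&=VaR_S(t,q)+\frac{\int_{VaR_S(t,q)}^{\infty}P(S(t) \gt u)du}{P(S(t) \gt VaR_S(t,q))}\\ &\sim VaR_S(t,q)+\frac{VaR_S(t,q)}{\alpha-1}=\frac{\alpha VaR_S(t,q)}{\alpha-1}. \end{align*}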

Similar to relation (4.22), for any integer $k\in[1,d]$, using Lemma 4.6(ii), Remark 4.1(ii) and Karamata’s Theorem gives that the distribution function of $L_k(t)$ belongs to the class $\mathcal{R}_{-\alpha}$ and

(4.27)\begin{align} A_k(t,q) &\sim \frac{\alpha VaR_S(t,q)b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}{(\alpha-1)\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)}. \end{align}

Applying Lemma 4.6(ii) and Lemma 2.1 of [Reference Asimit, Furman, Tang and Vernic1] yields that

(4.28)\begin{align} VaR_S(t,q)\sim \left(\sum_{k=1}^{d}b_k\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda_k(ds)\right)^{\frac{1}{\alpha}}VaR_X(q). \end{align}

Following the proof of Theorem 3.1(i), a combination of (4.26)-(4.28), Remark 4.1(ii) and Lemma 4.6(ii) gives that relations (3.3) and (3.4) hold.

4.2. Proof of Theorem 3.2

Next, we show three key lemmas (i.e., Lemmas 4.9-4.11) in the proof of Theorem 3.2.

Lemma 4.9. Under the conditions of Theorem 3.2, for any fixed $t\in(0,\infty]$ satisfying $P(\tau_{1}\leq t) \gt 0$, we can get

(4.29)\begin{align} P(S(t) \gt x)\sim \int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x)\upsilon\left(\Delta\right). \end{align}

Proof. On the one hand, by (2.2), (2.3), Assumption 2.1 and Lemma 4.3, we can obtain that there exists a large enough M such that for any small enough ϵ,

\begin{align*} P(S(t) \gt x) &\gtrsim\left(\sum_{i=1}^{\infty}-\sum_{i=M+1}^{\infty}\right) \mathbb{E}\left(e^{-\alpha\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}}\right)\overline{F}(x)\upsilon\left(\Delta\right)\\ &\geq (1-\epsilon)\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x)\upsilon\left(\Delta\right). \end{align*}

On the other hand, choosing $\delta\in(0,1)$, by (2.1), (2.2) and (2.3), we get that there exists a large enough M such that for any small enough ϵ,

\begin{align*} P(S(t) \gt x) &\leq \sum_{i=1}^{M} P\left(\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta x\right)+ \sum_{i=M+1}^{\infty} P\left(\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt (1-\delta) x\right)\\ & \lesssim \delta^{-\alpha}\sum_{i=1}^{M}\mathbb{E}\left(e^{-\alpha\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}}\right)\overline{F}(x)\upsilon\left(\Delta\right)+ C\epsilon \overline{F}(x)\\ &\leq (1+C\epsilon)\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x)\upsilon\left(\Delta\right), \end{align*}

where the second step is analogous to (4.8), and in the last step we let $\delta\rightarrow1$. Thus, by the arbitrariness of ϵ, we conclude that relation (4.29) holds.

Remark 4.2.

Under the conditions of Theorem 3.2, for any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\tau_{1}\leq t) \gt 0$, we can obtain the following relation:

\begin{align*} P(L_k(t) \gt x)\sim \int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x)\upsilon(\Upsilon_k). \end{align*}

A significant point to note is that under the conditions of Theorem 3.2, according to Lemma 4.9 and Remark 4.2, for any integer $k\in[1,d]$, we derive that relation (4.16) holds. This is essential in the proof of (4.20), (4.21), (4.27) and (4.32). Moreover, using Remark 4.2 gives that for any integer $k\in[1,d]$, the distribution function of $L_k(t)$ belongs to the class $\mathcal{R}_{-\alpha}$.

Lemma 4.10. Let $Y_1$ and $Y_2$ be independent random variables with distributions $G_1\in \mathcal{R}_{-\alpha_1}$ and $G_2\in \mathcal{R}_{-\alpha_2}$, respectively, where $0 \lt \alpha_1, \alpha_2 \lt \kappa/2$ and κ is a constant. Also, let $Z_1$ and $Z_2$ be two non-negative random variables independent of $\{Y_1,Y_2\}$. Assume $\max\left(\mathbb{E}\left(Z_1^{\beta_1}\right),\mathbb{E}\left(Z_2^{\beta_1}\right),\mathbb{E}\left(Z_1^{\alpha_1}Z_2^{\alpha_2}\right)\right) \lt \infty$ for any $0 \lt \beta_1 \lt \kappa$. Then, as $(x_1,x_2)\rightarrow(\infty,\infty)$, we have

\begin{align*} P(Y_1Z_1 \gt x_1,Y_2Z_2 \gt x_2)\sim\overline{G_1}(x_1)\overline{G_2}(x_2)\mathbb{E}\left(Z_1^{\alpha_1}Z_2^{\alpha_2}\right). \end{align*}

Proof. For some $0 \lt w \lt 1$, split the probability $P(Y_1Z_1 \gt x_1,Y_2Z_2 \gt x_2)$ into two parts as $I_1+I_2$, with

\begin{align*} I_1=P\left(Y_1Z_1 \gt x_1,Y_2Z_2 \gt x_2,\{Z_1\leq x_1^w\}\bigcap\{Z_2\leq x_2^w\}\right) \end{align*}

and

\begin{align*} ~I_2=P\left(Y_1Z_1 \gt x_1,Y_2Z_2 \gt x_2,\{Z_1 \gt x_1^w\}\bigcup\{Z_2 \gt x_2^w\}\right). \end{align*}

For $I_1$, conditioning on $Z_1$ and $Z_2$ and applying the Dominated Convergence Theorem together with relation (2.1), we get

\begin{align*} I_1&=\int\int_{0 \lt z_1\leq x_1^{w},0 \lt z_2\leq x_2^{w}}P\left(Y_1 \gt \frac{x_1}{z_1}\right)P\left(Y_2 \gt \frac{x_2}{z_2}\right)P(Z_1\in dz_1,Z_2\in dz_2)\\ &\sim \overline{G_1}(x_1)\overline{G_2}(x_2)\int\int_{0 \lt z_1\leq x_1^{w},0 \lt z_2\leq x_2^{w}}z_1^{\alpha_1}z_2^{\alpha_2}P(Z_1\in dz_1,Z_2\in dz_2)\\ &\sim \overline{G_1}(x_1)\overline{G_2}(x_2)\mathbb{E}\left(Z_1^{\alpha_1}Z_2^{\alpha_2}\right). \end{align*}

For $I_2$, obviously,

\begin{align*} I_2\leq P(Y_2Z_2 \gt x_2,Z_1 \gt x_1^w)+P(Y_1Z_1 \gt x_1,Z_2 \gt x_2^w). \end{align*}

We only need to deal with the first probability, denoted by $I_{21}$, as the same procedure can be applied to the second probability. For any $0 \lt p_1 \lt \alpha_2 \lt p_2 \lt \kappa/2$, choosing some $p_3\in(0,\kappa)$ satisfying $2\alpha_1 \lt p_3w \lt \kappa$, we obtain

\begin{align*} I_{21} &\leq C\overline{G_2}(x_2)\left(\mathbb{E}\left(Z_2^{p_1}\mathbb{I}_{\{Z_1 \gt x_1^w\}}\right)+\mathbb{E}\left(Z_2^{p_2}\mathbb{I}_{\{Z_1 \gt x_1^w\}}\right)\right)\\ &\leq C\overline{G_2}(x_2)\left(\left(\mathbb{E}\left(Z_2^{2p_1}\right)\right)^{1/2}\left(P(Z_1 \gt x_1^w)\right)^{1/2}+\left(\mathbb{E}\left(Z_2^{2p_2}\right)\right)^{1/2}\left(P(Z_1 \gt x_1^w)\right)^{1/2}\right)\\ &=o(1)\overline{G_1}(x_1)\overline{G_2}(x_2)\left[\left(\mathbb{E}\left(Z_2^{2p_1}\right)\mathbb{E}\left(Z_1^{p_3}\right)\right)^{1/2}+\left(\mathbb{E}\left(Z_2^{2p_2}\right)\mathbb{E}\left(Z_1^{p_3}\right)\right)^{1/2}\right]\\ &=o(1)\overline{G_1}(x_1)\overline{G_2}(x_2), \end{align*}

where in the first step we use Lemma 4.2, in the second step we use Hölder's inequality, and in the third step we use Markov's inequality and relation (4.2).

Lemma 4.11. Under the conditions of Theorem 3.2, for any $\delta_0\in(0,1)$, any integer $k\in[1,d]$ and any fixed $t\in(0,\infty]$ satisfying $P(\tau_{1}\leq t) \gt 0$, the following relation holds uniformly for all $y\in [\delta_0,1]$:

(4.30)\begin{equation} P(L_k(t) \gt xy, S(t) \gt x)\sim \int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x) V_k(y,1). \end{equation}

Proof. On the one hand, for any integer $k\in[1,d]$, choosing $\delta_1\in(0,1)$ and a large enough M, we obtain

\begin{align*} &P(L_k(t) \gt xy, S(t) \gt x)\\ &\leq P\left(\sum_{i=1}^{M}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1xy,\sum_{i=1}^{M}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1x\right)\\ &+P\left(\sum_{i=M+1}^{\infty}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt (1-\delta_1)xy,\sum_{i=1}^{M}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1x\right)\\ &+P\left(\sum_{i=1}^{M}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1xy,\sum_{i=M+1}^{\infty}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt (1-\delta_1)x\right)\\ &+P\left(\sum_{i=M+1}^{\infty}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt (1-\delta_1)xy,\sum_{i=M+1}^{\infty}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt (1-\delta_1)x\right). \end{align*}

Denote the four probabilities in the above relation by $K_1$, $K_2$, $K_3$, and $K_4$, respectively. Choosing $\delta_2\in(0,1)$, for any integer $k\in[1,d]$, we can get

\begin{align*} &K_1\leq \sum_{i=1}^{M} P\left(X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1\delta_2xy, \sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1\delta_2x\right)\\ &+\sum_{i=1}^{M} \sum_{j=1,j\neq i}^{M}P\left(X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1\delta_2xy, \sum_{k=1}^{d}X_{kj}e^{-\xi(\tau_j)}\mathbb{I}_{\{\tau_{j} \leq t\}} \gt \delta_1\delta_2x\right)\\ &+P\left(\sum_{i=1}^{M}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1 xy,\sum_{i=1}^{M}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1x,\bigcap_{i=1}^{M}\left\{X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}}\leq \delta_1\delta_2xy\right\}\right)\\ &+P\left(\sum_{i=1}^{M}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1xy,\sum_{i=1}^{M}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \delta_1x,\right.\\ &\quad \left.\bigcap_{i=1}^{M}\left\{\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}}\leq \delta_1\delta_2x\right\}\right). \end{align*}

Denote the four terms in the above relation by $K_{11}$, $K_{12}$, $K_{13}$ and $K_{14}$, respectively. For $K_{11}$, applying (2.1), (2.2) and (2.3) gives that

\begin{align*} K_{11}&\sim{(\delta_1\delta_2)}^{-\alpha} \sum_{i=1}^{M}\mathbb{E}\left(e^{-\alpha\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}}\right)\overline{F}(x) V_k(y,1)\\ &\leq (\delta_1\delta_2)^{-\alpha}\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x) V_k(y,1). \end{align*}

For $K_{12}$, using Assumption 2.1, Remark 2.2 and Lemma 4.10 yields that for any small enough ϵ,

\begin{align*} K_{12} &\sim \sum_{i=1}^{M} \sum_{j=1,j\neq i}^{M} P(X_{ki} \gt \delta_1\delta_2xy) P\left(\sum_{k=1}^{d}X_{kj} \gt \delta_1\delta_2x\right) \mathbb{E}\left(e^{-\alpha\xi(\tau_i)}e^{-\alpha\xi(\tau_j)}\mathbb{I}_{\{\max\{\tau_{i}, \tau_{j}\} \leq t\}}\right)\\ & \leq C \epsilon \overline{F}(x). \end{align*}

For $K_{13}$, similar to $K_{12}$, we can get that for any small enough ϵ,

\begin{align*} K_{13} & \leq C \sum_{i=1}^{M}\sum_{l=1,l\neq i}^{M}P\left(X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt \frac{\delta_1xy}{M} ,X_{kl}e^{-\xi(\tau_l)}\mathbb{I}_{\{\tau_{l} \leq t\}} \gt \frac{\delta_1(1-\delta_2) xy}{M-1}\right) \leq C \epsilon \overline{F}(x). \end{align*}

Similarly, we can obtain that for any small enough ϵ, $K_{14} \leq C \epsilon \overline{F}(x)$. Similar to the proof of (4.8), we can obtain that for any small enough ϵ, $K_2$, $K_3$, and $K_4$ are all bounded by $C\epsilon \overline{F}(x)$. Letting $\delta_1\rightarrow1$, $\delta_2\rightarrow1$ and $\epsilon\rightarrow0$ in turn, we obtain the upper bound in relation (4.30).

On the other hand, for any integer $k\in[1,d]$, similar to the proof of $K_{11}$ and $K_{12}$, we can get that there exists a large enough M such that for any small enough ϵ,

\begin{align*} P(L_k(t) \gt xy, S(t) \gt x) &\geq P\left(\sum_{i=1}^{M}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt xy,\sum_{i=1}^{M}\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt x\right)\\ &\geq \sum_{i=1}^{M}P\left(X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt xy,\sum_{k=1}^{d}X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt x\right)\\ &+\sum_{i=1}^{M}\sum_{j=1,j\neq i}^{M}P\left(X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt xy,\sum_{k=1}^{d}X_{kj}e^{-\xi(\tau_j)}\mathbb{I}_{\{\tau_{j} \leq t\}} \gt x\right)\\ &-C\sum_{1\leq j \lt l\leq M}P\left(\sum_{k=1}^{d}X_{kj}e^{-\xi(\tau_j)}\mathbb{I}_{\{\tau_{j} \leq t\}} \gt x, \sum_{k=1}^{d}X_{kl}e^{-\xi(\tau_l)}\mathbb{I}_{\{\tau_{l} \leq t\}} \gt x\right)\\ &-\sum_{1\leq i \lt j\leq M}P\left(X_{ki}e^{-\xi(\tau_i)}\mathbb{I}_{\{\tau_{i} \leq t\}} \gt xy, X_{kj}e^{-\xi(\tau_j)}\mathbb{I}_{\{\tau_{j} \leq t\}} \gt xy\right)\\ &\gtrsim \int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)\lambda(ds)\overline{F}(x) V_k(y,1)- C\epsilon \overline{F}(x). \end{align*}

Letting $\epsilon\rightarrow0$, we obtain the lower bound in relation (4.30).

Proof of Theorem 3.2. Applying Lemma 4.9 and Lemma 2.1 of [Reference Asimit, Furman, Tang and Vernic1] yields that

(4.31)\begin{equation} VaR_{S}(t,q)\sim \left(\int_{0}^{t}\mathbb{E}(e^{-\alpha\xi(s)})\lambda(ds)\upsilon(\Delta)\right)^{\frac{1}{\alpha}}VaR_X(q). \end{equation}
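For intuition, the asymptotic inversion behind (4.31) (Lemma 2.1 of [Reference Asimit, Furman, Tang and Vernic1]) can be sketched as follows. Writing $c_t=\int_{0}^{t}\mathbb{E}(e^{-\alpha\xi(s)})\lambda(ds)\upsilon(\Delta)$, Lemma 4.9 gives $P(S(t) \gt x)\sim c_t\overline{F}(x)$, so that, heuristically (treating the distributions as continuous),

\begin{align*} \overline{F}\left(VaR_{S}(t,q)\right)\sim\frac{P(S(t) \gt VaR_{S}(t,q))}{c_t}=\frac{1-q}{c_t}=\frac{\overline{F}\left(VaR_X(q)\right)}{c_t}, \end{align*}

and since $\overline{F}\in\mathcal{R}_{-\alpha}$, this forces $VaR_{S}(t,q)\sim c_t^{1/\alpha}VaR_X(q)$ as $q\uparrow1$.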

For any integer $k\in[1,d]$, similar to (4.22), it follows from Karamata’s Theorem, Remark 4.2, Lemma 4.9 and Lemma 4.11 that,

(4.32)\begin{align} A_k(t,q) &\sim\frac{VaR_{S}(t,q)\left[(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)\right]}{(\alpha-1)\upsilon(\Delta)}. \end{align}

Applying (4.26) and (4.32) yields that for any integer $k\in[1,d]$,

(4.33)\begin{align} \lim_{q\uparrow1}\frac{A_k(t,q)}{A(t,q)}=\frac{(\alpha-1)\int_{0+}^{1}V_k(y,1)dy+\upsilon(\Upsilon_k)}{\alpha \upsilon(\Delta)}=B(k,\alpha) \gt 0. \end{align}

Thus, it is obvious that there exists a large enough constant $q_1$ such that, for any $q \gt q_1$, small enough ϵ, integer $k\in[1,d]$, and any fixed $t\in(0,\infty]$ satisfying $P(\tau_{1}\leq t) \gt 0$,

(4.34)\begin{align} (1-\epsilon)B(k,\alpha)\leq\frac{A_k(t,q)}{A(t,q)}\leq (1+\epsilon)B(k,\alpha). \end{align}

Similar to relations (4.24) and (4.25), for any integer $k\in[1,d]$ and all $q \gt q_1$, applying Karamata’s Theorem, (4.26), (4.31), (4.34), Remark 4.2, Lemma 4.9 and Lemma 4.11 gives that relation (3.5) holds. Likewise, for any integer $k\in[1,d]$, similar to relation (4.22), using (4.26) and (4.31), Karamata’s Theorem, Remark 4.2, Lemma 4.9 and Lemma 4.11 yields that relation (3.6) holds.

5. Numerical studies

5.1. The explicit order 3.0 weak scheme

In this subsection, we introduce the explicit order 3.0 weak scheme, which is employed to calculate numerical solutions of SDEs at discrete time points as well as their associated expectations. All numerical studies presented in this article are conducted in MATLAB with M = 500 sample paths.

Consider an SDE of the form:

\begin{align*} dX(t)=b(X(t))dt+\sigma(X(t))dW(t),~t\geq0, \end{align*}

where X(0) is the initial value, $b(X(t))$ denotes the drift coefficient, $\sigma(X(t))$ denotes the diffusion coefficient, and W(t) is a standard Brownian motion.

For any fixed time $\mathring{t}$, divide it into N intervals, each denoted as $(\mathring{t}_{i-1}, \mathring{t}_i)$, for $i = 1,2,\ldots,N$. Inspired by Section 15.2 in [Reference Kloeden and Platen17], we present the explicit order 3.0 weak scheme in vector form,

\begin{align*} X(\mathring{t}_{n+1})&=X(\mathring{t}_{n})+b(X(\mathring{t}_{n}))\bigtriangleup_n+\sigma(X(\mathring{t}_{n}))\bigtriangleup\hat{W}_n +\left(b_{\eta}^{+}+b_{\eta}^{-}-\frac{3b(X(\mathring{t}_{n}))}{2}-\frac{\tilde{b}_{\eta}^{+}+\tilde{b}_{\eta}^{-}}{4}\right)\frac{\bigtriangleup_n}{2}\\ &+\sqrt{\frac{2}{\bigtriangleup_n}}\left(\frac{b_{\eta}^{+}-b_{\eta}^{-}}{\sqrt{2}}-\frac{\tilde{b}_{\eta}^{+}-\tilde{b}_{\eta}^{-}}{4}\right)\eta \bigtriangleup\hat{Z}_n\\ &+\frac{1}{6}\left(b\left(X(\mathring{t}_{n})+(b(X(\mathring{t}_{n}))+b_{\eta}^{+})\bigtriangleup_n+(\eta+\rho)\sigma(X(\mathring{t}_{n}))\sqrt{\bigtriangleup_n}\right)-b_{\eta}^{+}-b_{\rho}^{+}+b(X(\mathring{t}_{n}))\right)\\ &\times\left((\eta+\rho)\bigtriangleup \hat{W}_n\sqrt{\bigtriangleup_n}+\bigtriangleup_n+\eta\rho\left((\bigtriangleup \hat{W}_n)^2-\bigtriangleup_n\right)\right), \end{align*}

with

\begin{align*} b_{\phi}^{\pm}=b\left(X(\mathring{t}_{n})+b(X(\mathring{t}_{n}))\bigtriangleup_n\pm\sigma(X(\mathring{t}_{n}))\sqrt{\bigtriangleup_n}\phi\right), \end{align*}

and

\begin{align*} \tilde{b}_{\phi}^{\pm}=b\left(X(\mathring{t}_{n})+2b(X(\mathring{t}_{n}))\bigtriangleup_n\pm\sigma(X(\mathring{t}_{n}))\sqrt{2\bigtriangleup_n}\phi\right), \end{align*}

where $\bigtriangleup_n=\mathring{t}_{n}-\mathring{t}_{n-1}$ and ϕ stands for either η or ρ. Here, we use two correlated Gaussian random variables $\bigtriangleup\hat{W}_n\sim N(0,\bigtriangleup_n)$ and $\bigtriangleup\hat{Z}_n\sim N(0,\frac{\bigtriangleup_n^3}{3})$ with $\mathbb{E}\left(\bigtriangleup\hat{W}_n\bigtriangleup\hat{Z}_n\right)=\frac{\bigtriangleup_n^2}{2}$. We note that there is no difficulty in generating the pair of correlated, normally distributed random variables, $\bigtriangleup \hat{W}_n$ and $\bigtriangleup \hat{Z}_n$, using the transformation

\begin{align*} \bigtriangleup \hat{W}_n=\zeta_{n,1}\bigtriangleup_n^{\frac{1}{2}}~~~~ \mathrm{and}~~~~~\bigtriangleup \hat{Z}_n=\frac{1}{2}\left(\zeta_{n,1}+\frac{\zeta_{n,2}}{\sqrt{3}}\right) \bigtriangleup_n^{\frac{3}{2}}, \end{align*}

where $\zeta_{n,1}$ and $\zeta_{n,2}$ are independent and $N(0,1)$ distributed random variables. Alongside, we have two independent two-point distributed random variables, η and ρ, with

\begin{align*} P(\eta=\pm1)=P(\rho=\pm1)=\frac{1}{2}. \end{align*}
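The original computations are carried out in MATLAB; purely as an illustration, the following Python sketch transcribes a single step of the scheme displayed above for a scalar SDE. The function name order3_weak_step and the use of NumPy are our own choices and are not part of the original implementation; the sketch simply encodes the update formula, the correlated pair $(\bigtriangleup\hat{W}_n,\bigtriangleup\hat{Z}_n)$ and the two-point variables η and ρ.

```python
import numpy as np

def order3_weak_step(x, b, sigma, dt, rng):
    """One step of the explicit order 3.0 weak scheme (scalar sketch).

    b, sigma : drift and diffusion coefficients as callables of the state.
    dt       : step size Delta_n.
    rng      : numpy Generator supplying the driving random variables.
    """
    # Correlated Gaussian increments: dW ~ N(0, dt), dZ ~ N(0, dt^3/3), E[dW dZ] = dt^2/2.
    z1, z2 = rng.standard_normal(2)
    dW = z1 * np.sqrt(dt)
    dZ = 0.5 * (z1 + z2 / np.sqrt(3.0)) * dt ** 1.5

    # Two independent two-point random variables taking the values +1 and -1.
    eta, rho = rng.choice([-1.0, 1.0], size=2)

    bx, sx = b(x), sigma(x)

    def b_pm(phi, sign):        # b_phi^{+/-}
        return b(x + bx * dt + sign * sx * np.sqrt(dt) * phi)

    def b_tilde_pm(phi, sign):  # \tilde{b}_phi^{+/-}
        return b(x + 2.0 * bx * dt + sign * sx * np.sqrt(2.0 * dt) * phi)

    bep, bem = b_pm(eta, +1), b_pm(eta, -1)
    brp = b_pm(rho, +1)
    btep, btem = b_tilde_pm(eta, +1), b_tilde_pm(eta, -1)

    return (x + bx * dt + sx * dW
            + (bep + bem - 1.5 * bx - 0.25 * (btep + btem)) * dt / 2.0
            + np.sqrt(2.0 / dt) * ((bep - bem) / np.sqrt(2.0)
                                   - 0.25 * (btep - btem)) * eta * dZ
            + (b(x + (bx + bep) * dt + (eta + rho) * sx * np.sqrt(dt))
               - bep - brp + bx) / 6.0
            * ((eta + rho) * dW * np.sqrt(dt) + dt
               + eta * rho * (dW ** 2 - dt)))
```

For instance, for the Vasicek model of Example 5.1 below one may take b = lambda x: m * (l - x) and sigma = lambda x: delta.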

Subsequently, we illustrate how the explicit order 3.0 weak scheme can be used to calculate the numerical solutions of SDEs at discrete time points, as well as the corresponding expectations, by means of two common investment return processes.

Example 5.1.

Vasicek model

Assume that the general càdlàg process can be described by $\xi(t)=\int_{0}^{t}r(s)ds$ with $\xi(0)=0$, where r(t) is a stochastic short rate process. The evolution of the interest rate process is given by the following Vasicek model:

(5.1)\begin{equation} dr(t)=m(l-r(t))dt+\delta dW(t),~t\geq0, \end{equation}

where $m,l$ and δ are positive constants, and W(t) denotes a standard Brownian motion. Indeed, the solution of the SDE (5.1) can be determined in closed form as follows

\begin{align*} r(t)=e^{-mt}r(0)+l(1-e^{-mt})+\delta e^{-mt}\int_{0}^{t}e^{ms}dW(s). \end{align*}
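As a benchmark for the weak scheme, the closed-form solution above can be sampled exactly on a time grid, since over a step of length h the process is Gaussian with mean $e^{-mh}r(t)+l(1-e^{-mh})$ and variance $\delta^2(1-e^{-2mh})/(2m)$. A minimal Python sketch (the function name vasicek_exact_paths is ours) is:

```python
import numpy as np

def vasicek_exact_paths(m, l, delta, r0, t_grid, n_paths, rng):
    """Simulate r(t) on t_grid exactly, using the closed-form solution above.

    Over a step of length h the transition is Gaussian:
      r(t+h) = e^{-m h} r(t) + l (1 - e^{-m h}) + N(0, delta^2 (1 - e^{-2 m h}) / (2 m)).
    """
    r = np.full(n_paths, r0, dtype=float)
    out = [r.copy()]
    for h in np.diff(t_grid):
        mean = np.exp(-m * h) * r + l * (1.0 - np.exp(-m * h))
        std = delta * np.sqrt((1.0 - np.exp(-2.0 * m * h)) / (2.0 * m))
        r = mean + std * rng.standard_normal(n_paths)
        out.append(r.copy())
    return np.array(out)   # shape: (len(t_grid), n_paths)
```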

Here, for any fixed time $\mathring{t}$, divide it into $N_0$ intervals, each denoted as $(t_{j-1}, t_j)$, for $j = 1,2,\ldots,N_0$. The parameters for the Vasicek model are set as: $m=5,~l=0.08,~r(0)=0.5,~\delta=0.25,~N=20000,~N_0=10000,~\alpha=2$ and $\mathring{t}=40$. For any integer $k\in[1,M]$, we use $r_k(t)$ to denote the analytical solution of the Vasicek model in the k-th sample path. Likewise, we use $\tilde{r}_k(t)$ to denote the numerical solution of the Vasicek model obtained from the explicit order 3.0 weak scheme in the k-th sample path. Consequently, the analytical and numerical solutions of the Vasicek model are averaged across sample paths as

(5.2)\begin{equation} r(t)=\frac{\sum_{k=1}^Mr_k(t)}{M}~~\mathrm{and} ~~\tilde{r}(t)=\frac{\sum_{k=1}^M\tilde{r}_k(t)}{M}, \end{equation}

respectively. To begin with, we compare the analytical solution with the numerical solution of the Vasicek model. The two solutions at each time point are depicted in the top part of Figure 5.1.1, and the absolute error between the analytical and numerical solutions at each time point is shown in the bottom part of Figure 5.1.1. Observing Figure 5.1.1, we notice that the analytical and numerical solutions of the Vasicek model almost overlap and that the absolute errors are close to 0 almost everywhere. Thus, the explicit order 3.0 weak scheme serves as an effective numerical method for solving SDEs.

Figure 5.1.1. r(t) for the Vasicek model

For $j = 1,2,\ldots,N_0$ and any time point $t_j$, there exists an integer $K\in[0,N-1]$ such that $\mathring{t}_K \lt t_j\leq \mathring{t}_{K+1}$. For any integer $k\in[1,M]$, we set

(5.3)\begin{align} \xi_k(t_j)=\frac{\sum_{i=1}^{K+1}\tilde{r}_k(\mathring{t}_i)t_{K+1}}{K+1}. \end{align}

Consequently, on the one hand, for any $\alpha,~t_j \gt 0$, we approximate the expectation by

(5.4)\begin{align} \mathbb{E}\left(e^{-\alpha\xi(t_j)}\right)= \frac{\sum_{k=1}^Me^{-\alpha\xi_k(t_j)}}{M}. \end{align}

This indicates that for any integer $j\in[1,N_0]$, the expectation on the left side of relation (5.4) can be approximated using the explicit order 3.0 weak scheme. On the other hand, for any integer $j\in[1,N_0]$, Theorem 3.2 in [Reference Guo and Wang11] gives

(5.5)\begin{align} \mathbb{E}\left(e^{-\alpha\xi(t_j)}\right)=\exp\left\{B_1(\alpha)+B_2(\alpha)t_j+B_3(\alpha)e^{-mt_j}+B_4(\alpha)e^{-2mt_j}\right\}, \end{align}

where

\begin{align*} &B_1(\alpha)=\frac{-\alpha(r(0)-l)}{m}-\frac{3\alpha^2\delta^2}{4m^3},~B_2(\alpha)=-\alpha l+\frac{\alpha^2\delta^2}{2m^2},~B_3(\alpha)=\frac{\alpha(r(0)-l)}{m}+\frac{\alpha^2\delta^2}{m^3}, \end{align*}

and

\begin{align*} ~B_4(\alpha)=-\frac{\alpha^2\delta^2}{4m^3}. \end{align*}

Next, we compare the numerical expectation calculated by the explicit order 3.0 weak scheme (5.4) with the analytical expectation given by relation (5.5). The two expectations are presented in the top part of Figure 5.1.2, and the absolute error between the analytical expectation (5.5) and the numerical expectation (5.4) is shown in the bottom part of Figure 5.1.2. Upon reviewing Figure 5.1.2, we observe that the analytical and numerical expectations nearly coincide, and that the absolute errors lie within the interval (0,0.01). As a result, the explicit order 3.0 weak scheme can be viewed as an alternative approach to computing the expectation associated with the general càdlàg process. Additionally, upon examining Figure 5.1.2 again, it is evident that the value of $\mathbb{E}\left(e^{-\alpha\xi(t)}\right)$ decreases rapidly. Therefore, from a numerical standpoint, Assumption 2.1 appears to be reasonable.
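A minimal sketch of this comparison, assuming a matrix of simulated short-rate paths (for example, from the vasicek_exact_paths sketch above or from the weak scheme) and using a simple Riemann-sum version of (5.3)-(5.4) together with the coefficients of (5.5), might read:

```python
import numpy as np

def mc_expectation(r_paths, t_grid, alpha):
    """Monte Carlo estimate of E[e^{-alpha xi(t_j)}] in the spirit of (5.3)-(5.4).

    r_paths : array of shape (len(t_grid), n_paths) with simulated short rates.
    xi(t_j) is approximated here by a right-endpoint Riemann sum of int_0^{t_j} r(s) ds.
    """
    dt = np.diff(t_grid, prepend=0.0)[:, None]
    xi = np.cumsum(r_paths * dt, axis=0)        # Riemann-sum approximation of xi(t_j)
    return np.exp(-alpha * xi).mean(axis=1)     # average over the sample paths

def analytic_expectation(m, l, delta, r0, alpha, t):
    """Closed-form E[e^{-alpha xi(t)}] for the Vasicek model, relation (5.5)."""
    B1 = -alpha * (r0 - l) / m - 3.0 * alpha**2 * delta**2 / (4.0 * m**3)
    B2 = -alpha * l + alpha**2 * delta**2 / (2.0 * m**2)
    B3 = alpha * (r0 - l) / m + alpha**2 * delta**2 / m**3
    B4 = -alpha**2 * delta**2 / (4.0 * m**3)
    return np.exp(B1 + B2 * t + B3 * np.exp(-m * t) + B4 * np.exp(-2.0 * m * t))
```

Plotting the absolute difference of the two outputs over the time grid reproduces the type of comparison shown in Figure 5.1.2.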

Figure 5.1.2. $\mathbb{E}(e^{-\alpha\xi(t)})$ for the Vasicek model

Example 5.2.

CIR model

Assume that the general càdlàg process can be described by $\xi(t)=\int_{0}^{t}r(s)ds$ with $\xi(0)=0$, where r(t) is a stochastic short rate process, which is given by the following CIR model:

(5.6)\begin{align} dr(t)=m(l-r(t))dt+\delta \sqrt{r(t)}dW(t),~t\geq0, \end{align}

where $m,l$ and δ are positive constants, and W(t) denotes a standard Brownian motion. Without loss of generality, we assume that r(t) is non-negative for any $t\geq0$. Define the scalar functions $b(t,x)_{x=r(t)}=m(l-x)$ and $\sigma(t,x)_{x=r(t)}=\delta\sqrt{x}$. It is evident that there exist constants T > 0, $D_1$ and $D_2$ such that for all $x\geq0$, $y\geq0$ and $t\in[0,T]$,

\begin{align*} |b(t,x)|+|\sigma(t,x)|\leq D_1(1+|x|),~~\mathrm{and}~~|b(t,x)-b(t,y)|+|\sigma(t,x)-\sigma(t,y)|\leq D_2|x-y|. \end{align*}

Thus, it follows from Theorem 5.2 of [Reference Øksendal32] that the SDE (5.6) admits a unique solution r(t).

The parameters for the CIR model are set as follows: $m=0.5,~l=0.7,~r(0)=0.01,~\delta=0.025,~N=20000,~N_0=10000,~\alpha=2$ and $\mathring{t}=40$. Since there is no analytical solution for the SDE (5.6), we divide any fixed time $\mathring{t}$ into $2N_0$ intervals, that is, we use the stepsize $\bigtriangleup(t)=\frac{\mathring{t}}{2N_0}$. Applying the explicit order 3.0 weak scheme, we obtain a numerical solution $r_k(t)$ of the CIR model with stepsize $\bigtriangleup(t)=\frac{\mathring{t}}{2N_0}$ in the k-th sample path. Thus, motivated by [Reference Zhang, Wang, Zhou, Liu and Zhang30], we regard the $r_k(t)$ calculated by the explicit order 3.0 weak scheme with stepsize $\bigtriangleup(t)=\frac{\mathring{t}}{2N_0}$ as the analytical solution of the CIR model in the k-th sample path. Similarly, we derive the numerical solution $\tilde{r}_k(t)$ of the CIR model with stepsize $\bigtriangleup(t)=\frac{\mathring{t}}{N_0}$ in the k-th sample path. Consequently, the analytical and numerical solutions of the CIR model can be calculated by (5.2). First, we compare the analytical solution r(t) with the numerical solution $\tilde{r}(t)$. The two solutions are depicted in the top part of Figure 5.1.3, and the absolute error between the analytical solution r(t) and the numerical solution $\tilde{r}(t)$ of the CIR model at each time point is shown in the bottom part of Figure 5.1.3. Observing Figure 5.1.3, we again conclude that the explicit order 3.0 weak scheme serves as an effective numerical method for solving SDEs.

Figure 5.1.3. r(t) for the CIR model

On the one hand, for any $\alpha,~t \gt 0$, we can also calculate $\mathbb{E}\left(e^{-\alpha\xi(t)}\right)$ using the explicit order 3.0 weak scheme according to relation (5.4). On the other hand, Theorem 3.3 in [Reference Guo and Wang11] gives that

(5.7)\begin{align} \mathbb{E}\left(e^{-\alpha\xi(t)}\right)=D_1(\alpha,~t)e^{D_2(\alpha,~t)r(0)}, \end{align}

where

\begin{align*} &\mathring{\varOmega}(\alpha)=\sqrt{m^2+2\alpha\delta^2},~~~ D_1(\alpha,t)=\left(\frac{e^{mt/2}}{\cosh(\mathring{\varOmega}(\alpha)t/2)+m \sinh(\mathring{\varOmega}(\alpha)t/2)/\mathring{\varOmega}(\alpha)}\right)^{2ml/\delta^2} \end{align*}

and

\begin{align*} D_2(\alpha,t)=-\frac{2\alpha}{m+\mathring{\varOmega}(\alpha)\coth(\mathring{\varOmega}(\alpha)t/2)}. \end{align*}

Next, we compare the numerical expectation calculated by the explicit order 3.0 weak scheme (5.4) with the analytical expectation given by relation (5.7). The two expectations are shown in the top part of Figure 5.1.4, and the absolute error between the numerical and analytical expectations is displayed in the bottom part of Figure 5.1.4. Upon examining Figure 5.1.4, it is evident that the explicit order 3.0 weak scheme can be viewed as an alternative method for estimating the expectation associated with the general càdlàg process. Consequently, when dealing with more complex càdlàg processes, the explicit order 3.0 weak scheme proves to be a valuable tool for calculating the associated expectations. Furthermore, observing Figure 5.1.4 again, the rapid decay of the expectation suggests, from a numerical standpoint, that Assumption 2.1 is well-founded.
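The closed-form expectation (5.7) is straightforward to evaluate numerically; a short Python sketch (the function name cir_expectation is ours) reads:

```python
import numpy as np

def cir_expectation(m, l, delta, r0, alpha, t):
    """Closed-form E[e^{-alpha xi(t)}] for the CIR model, relation (5.7)."""
    omega = np.sqrt(m**2 + 2.0 * alpha * delta**2)
    D1 = (np.exp(m * t / 2.0)
          / (np.cosh(omega * t / 2.0)
             + m * np.sinh(omega * t / 2.0) / omega)) ** (2.0 * m * l / delta**2)
    D2 = -2.0 * alpha / (m + omega / np.tanh(omega * t / 2.0))   # coth = 1 / tanh
    return D1 * np.exp(D2 * r0)
```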

Figure 5.1.4. $\mathbb{E}(e^{-\alpha\xi(t)})$ for the CIR model

5.2. Sensitivity analysis

In this subsection, we apply the explicit order 3.0 weak scheme to discuss the interplay of dependence structures and heavy-tailedness. We assume the general càdlàg process is determined by the Vasicek model. For any $\alpha,~t \gt 0$, the values of $\mathbb{E}\left(e^{-\alpha\xi(t)}\right)$ at each time point are calculated using the explicit order 3.0 weak scheme, as described in Section 5.1. Hence, from a numerical standpoint, the constants $\int_{0}^{t}\mathbb{E}\left(e^{-\alpha\xi(s)}\right)ds$ on the right side of Corollary 3.1 can be computed using the explicit order 3.0 weak scheme (a short sketch of this integration step is given below). Additionally, for any integer $k\in[1,d]$, we assume the generic random variable $X_k$ follows a Pareto distribution with parameter $\gamma_k$ (as given in (3.8)). Without loss of generality, we conduct numerical studies for the risk measures $SES_1(t,q)$ and $MES_1(t,q)$.
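As an illustration of this step, once $\mathbb{E}(e^{-\alpha\xi(s)})$ has been evaluated on a grid (for example, by the mc_expectation sketch of Section 5.1), the constant $\int_{0}^{t}\mathbb{E}(e^{-\alpha\xi(s)})ds$ can be approximated by a trapezoidal rule; the helper below is our own illustration and not part of the original MATLAB code:

```python
import numpy as np

def integrated_expectation(t_grid, exp_vals):
    """Trapezoidal approximation of int_0^t E(e^{-alpha xi(s)}) ds
    from values of the expectation evaluated on t_grid."""
    t_grid = np.asarray(t_grid, dtype=float)
    exp_vals = np.asarray(exp_vals, dtype=float)
    return float(np.sum(0.5 * (exp_vals[1:] + exp_vals[:-1]) * np.diff(t_grid)))
```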

Figure 5.2.1. Risk measures for α

5.2.1. Sensitivity analysis under asymptotic independence

Here, we primarily investigate the effects of variations in the parameters α and q on $SES_1(t,q)$ and $MES_1(t,q)$ under the conditions of Corollary 3.1(i). We suppose that the representing distribution F of X also follows a Pareto distribution with parameter $\gamma_0$. Additionally, we assume that $X_1$, $X_2$, $\ldots$, and $X_d$ are mutually independent. The parameters are selected as follows: $m=5,~l=0.08,~r(0)=0.5,~\delta=0.25,~d=5,~N=20000,~N_0=10000,~t=4,~\gamma_0=1,~\gamma_1=2,~\gamma_2=3,~\gamma_3=4,~\gamma_4=5,~\gamma_5=6,~\lambda_1=0.1,\lambda_2=0.2,\lambda_3=0.3,\lambda_4=0.4$ and $\lambda_5=0.5$. Varying q from 0.95 to 0.99, the numerical results are presented in Figure 5.2.1.

From Figure 5.2.1, it is evident that for fixed q, the smaller the heavy-tailed parameter α, the larger the values of the risk measures $SES_1(t,q)$ and $MES_1(t,q)$. This pattern is expected, as a smaller value of α means heavier-tailed losses and hence a higher likelihood of the collapse of the financial system. Moreover, for a fixed heavy-tailed parameter α, Figure 5.2.1 shows that as q increases, the risk measures $SES_1(t,q)$ and $MES_1(t,q)$ also increase. Finally, comparing the definitions of $SES_1(t,q)$ and $MES_1(t,q)$, the value of $MES_1(t,q)$ is larger than that of $SES_1(t,q)$, as confirmed in Figure 5.2.1.

5.2.2. Sensitivity analysis under asymptotic dependence

Here, we analyze how the asymptotic dependence structure and heavy-tailedness affect the risk measures $SES_1(t,q)$ and $MES_1(t,q)$ under the conditions specified in Remark 3.2 and Corollary 3.1(ii). The parameters are chosen as follows: $m=5,~l=0.08,~r(0)=0.5,~\delta=0.25,~N=20000,~N_0=10000,~\gamma_1=2,~\gamma_2=3,~\lambda=1$, t = 4 and d = 2. Varying the parameter q from 0.95 to 0.99, the numerical results are shown in Figures 5.2.2-5.2.3.

Figure 5.2.2. Risk measures for α under β = 10

Figure 5.2.3. Risk measures for β under α = 2

After examining Figure 5.2.2, we can draw conclusions consistent with those in Section 5.2.1. Upon reviewing Figure 5.2.3, it becomes evident that as the dependence parameter β increases, the values of the risk measures $SES_1(t,q)$ and $MES_1(t,q)$ also increase. This reveals that the asymptotic dependence structure among losses plays a crucial role in the systemic risk measures. When comparing Figure 5.2.2 with Figure 5.2.3, it is clear that a minor adjustment of the heavy-tailed parameter α can have a significant impact on $SES_1(t,q)$ and $MES_1(t,q)$, whereas a large adjustment of the dependence parameter β results in only a minor impact on $SES_1(t,q)$ and $MES_1(t,q)$. This reveals that the heavy-tailedness of losses plays a more substantial role than the asymptotic dependence structure among losses.

5.3. Error analysis

In this subsection, we conduct an error analysis for $SES_1(t,q)$ derived from Theorem 3.1(ii) with t = 4. We assume the general càdlàg process is determined by the Vasicek model with parameters $m=5,~l=0.08,~r(0)=0.5$ and δ = 0.25. Without loss of generality, we assume d = 2 and $N_1(t)=N_2(t)$, where $N_1(t)$ is a homogeneous Poisson counting process with parameter λ = 0.15. We suppose that the distributions F, $F_1$ and $F_2$ follow a Pareto distribution with parameters α = 2, $\gamma_0=1$, $\gamma_1=1$ and $\gamma_2=2$. Additionally, we assume that $X_1$ and $X_2$ are mutually independent. Now we are ready to perform some simulation studies on $SES_1(t,q)$ according to the following algorithm (a sketch implementing these steps is given after the algorithm).

  • Select sufficiently large values for the parameters: $M_0=300$ and $N_0=100000$. For any fixed time t = 4, apply the method described in Section 5.1 to generate $e^{-\xi(t_i)}, i=0,1,\ldots,20000$.

  • Generate Exponential random numbers $\theta_{1}^l,\theta_{2}^l,\ldots,\theta_{M_0}^l, l=1,2,\ldots,N_0$, then compute $\tau_{i}^l=\sum_{j=1}^{i}\theta_{j}^l,~i=1,2,\ldots,M_0$ and $N^l(t)=\sum_{i=1}^{M_0}\mathbb{I}_{(\tau_{i}^l\leq t)}$. By (5.3), we can obtain $e^{-\xi{(\tau_{i}^l)}}$.

  • For $l=1,2,\ldots,N_0,~j=1,2,\ldots, N^l(t)$, generate random number pairs $\left(u_{1j}^l,u_{2j}^l\right)$ using an independent copula. Then, generate $X_{1j}^l$ by $F_1^{-1}(u_{1j}^l)$ and $X_{2j}^l$ by $F_2^{-1}(u_{2j}^l)$.

  • Compute $Z_1^l=\sum_{i=1}^{N^l(t)}X_{1i}^le^{-\xi{(\tau_{i}^l)}}$, $Z_2^l=\sum_{i=1}^{N^l(t)}X_{2i}^le^{-\xi{(\tau_{i}^l)}}$ and $S^l=Z_1^l+Z_2^l$, with $S^{(1)}\leq S^{(2)}\leq\cdots\leq S^{(N_0)}$ denoting the order statistics of $S^l, l=1,2,\ldots,N_0$. Suppose $VaR_S=S^{([ N_0q ])}$, where $[ A ]$ represents the integer part after rounding A.

  • Compute

    \begin{align*} &A_1(t,q)=\frac{\sum_{l=1}^{N_0}Z_1^{l}\mathbb{I}_{\left(S^{l} \gt S^{([ N_0q ])}\right)}}{\sum_{l=1}^{N_0}\mathbb{I}_{\left(S^{l} \gt S^{([ N_0q ])}\right)}},~~~~~~~~~~A(t,q)=\frac{\sum_{l=1}^{N_0}S^{l}\mathbb{I}_{\left(S^{l} \gt S^{([ N_0q ])}\right)}}{\sum_{l=1}^{N_0}\mathbb{I}_{\left(S^{l} \gt S^{([ N_0q ])}\right)}},~~~~~~~~~~~~~~~~\\ \mathrm{and}~~~~~~~~~~~&SES_1{(t,q)}=\frac{\sum_{l=1}^{N_0}\left(Z_1^{l}-A_1(t,q)\right)\mathbb{I}_{\left(S^{l} \gt A(t,q),~Z_1^{l} \gt A_1(t,q)\right)}}{\sum_{l=1}^{N_0}\mathbb{I}_{\left(S^{l} \gt A(t,q)\right)}}. \end{align*}
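The following Python sketch implements the steps above. Two ingredients are our own simplifications and should be treated as assumptions: the Pareto inverse pareto_inv uses the parametrization $\overline{F}_k(x)=(\gamma_k/(\gamma_k+x))^{\alpha}$, which may differ from the exact form (3.8), and the discount factor $e^{-\xi(s)}$ is replaced by a deterministic stand-in instead of the Vasicek path produced by the weak scheme of Section 5.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from Section 5.3.
t, lam, alpha = 4.0, 0.15, 2.0
gamma = np.array([1.0, 2.0])          # gamma_1, gamma_2
q, N0, M0 = 0.95, 100_000, 300

def pareto_inv(u, a, g):
    # Inverse of the assumed Pareto distribution function F(x) = 1 - (g / (g + x))^a.
    return g * ((1.0 - u) ** (-1.0 / a) - 1.0)

def discount(s):
    # Toy deterministic stand-in for e^{-xi(s)}; in the actual study this comes
    # from the explicit order 3.0 weak scheme for the Vasicek model (Section 5.1).
    return np.exp(-0.08 * s)

Z1 = np.zeros(N0)
Z2 = np.zeros(N0)
for ell in range(N0):
    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=M0))   # tau_i^l
    arrivals = arrivals[arrivals <= t]
    n = arrivals.size
    if n == 0:
        continue
    disc = discount(arrivals)
    X1 = pareto_inv(rng.random(n), alpha, gamma[0])             # independent copula
    X2 = pareto_inv(rng.random(n), alpha, gamma[1])
    Z1[ell] = np.sum(X1 * disc)
    Z2[ell] = np.sum(X2 * disc)

S = Z1 + Z2
var_s = np.sort(S)[int(np.floor(N0 * q)) - 1]   # empirical VaR_S(t, q)
tail = S > var_s
A = S[tail].mean()                              # A(t, q)
A1 = Z1[tail].mean()                            # A_1(t, q)
ses1 = np.sum((Z1 - A1) * ((S > A) & (Z1 > A1))) / np.sum(S > A)
print("Monte Carlo SES_1(t, q) =", ses1)
```

In the actual study, discount is replaced by the simulated $e^{-\xi(\tau_{i}^l)}$ obtained from Section 5.1, and the resulting Monte Carlo value is compared with the asymptotic estimation as in Table 5.3.1.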

By observing Table 5.3.1, it is evident that the absolute errors between the asymptotic estimation and Monte Carlo estimation of $SES_1(t,q)$ fall within the interval (0, 0.05), and that as q increases, the absolute error decreases. Consequently, the above conclusions validate the accuracy of our obtained asymptotic formula.

Table 5.3.1. Comparison of asymptotic estimation and Monte Carlo estimation.

Funding

Our article is supported by the National Natural Science Foundation of China (No. 71871046), the National Natural Science Foundation of China (No. 12471368), the Intelligent Terminal Key Laboratory of Sichuan Province (No. SCITLAB-30006), and the Intelligent Terminal Key Laboratory of Sichuan Province (No. SCITLAB-30008).

References

Asimit, A., Furman, E., Tang, Q. & Vernic, R. (2011). Asymptotics for risk capital allocations based on conditional tail expectation. Insurance: Mathematics and Economics 49(3): 310-324.
Bingham, N., Goldie, C. & Teugels, J. (1987). Regular Variation. Cambridge: Cambridge University Press.
Cai, J., Einmahl, J., de Haan, L. & Zhou, C. (2015). Estimation of the marginal expected shortfall: the mean when a related variable is extreme. Journal of the Royal Statistical Society, Series B (Statistical Methodology) 77(2): 417-442.
Chen, Y. & Liu, J. (2022). An asymptotic study of systemic expected shortfall and marginal expected shortfall. Insurance: Mathematics and Economics 105: 238-251.
Chen, Y. & Yuan, Z. (2017). A revisit to ruin probabilities in the presence of heavy-tailed insurance and financial risks. Insurance: Mathematics and Economics 73: 75-81.
Cheng, M., Konstantinides, D. & Wang, D. (2022). Uniform asymptotic estimates in a time-dependent risk model with general investment returns and multivariate regularly varying claims. Applied Mathematics and Computation 434: 127436.
Cont, R. & Tankov, P. (2004). Financial Modelling with Jump Processes. London: Chapman and Hall/CRC.
Fu, K. & Li, H. (2016). Asymptotic ruin probability of a renewal risk model with dependent by-claims and stochastic returns. Journal of Computational and Applied Mathematics 306: 154-165.
Fu, K. & Liu, Y. (2022). Ruin probability for a multidimensional risk model with non-stationary arrival and subexponential claims. Probability in the Engineering and Informational Sciences 36(3): 799-811.
Fu, K., Ni, C. & Chen, H. (2020). A particular bidimensional time-dependent renewal risk model with constant interest rates. Probability in the Engineering and Informational Sciences 34(2): 172-182.
Guo, F. & Wang, D. (2016). Finite- and infinite-time ruin probabilities with general stochastic investment return processes and bivariate upper tail dependent and heavy-tailed claims. Advances in Applied Probability 45(1): 241-273.
Guo, F. & Wang, D. (2019). Tail asymptotic for discounted aggregate claims with one-sided linear dependence and general investment return. Science China Mathematics 62(4): 735-750.
Guo, F., Wang, D. & Yang, H. (2017). Asymptotic results for ruin probability in a two-dimensional risk model with stochastic investment returns. Journal of Computational and Applied Mathematics 325: 198-221.
Hao, X. & Tang, Q. (2022). Asymptotic risk decomposition for regularly varying distributions with tail dependence. Applied Mathematics and Computation 427: 127164.
Hua, L. & Joe, H. (2014). Strength of tail dependence based on conditional tail expectation. Journal of Multivariate Analysis 123: 143-159.
Ji, L., Tan, K. & Yang, F. (2021). Tail dependence and heavy tailedness in extreme risks. Insurance: Mathematics and Economics 99: 282-293.
Kloeden, P. & Platen, E. (1999). Numerical Solution of Stochastic Differential Equations, 3rd printing. Applications of Mathematics (Stochastic Modelling and Applied Probability), Vol. 23. Berlin: Springer-Verlag.
Konstantinides, D. & Li, J. (2016). Asymptotic ruin probabilities for a multidimensional renewal risk model with multivariate regularly varying claims. Insurance: Mathematics and Economics 69: 38-44.
Landsman, Z., Makov, U. & Tomer, S. (2016). Multivariate tail conditional expectation for elliptical distributions. Insurance: Mathematics and Economics 70: 216-223.
Li, J. (2013). On pairwise quasi-asymptotically independent random variables and their applications. Statistics and Probability Letters 83(9): 2081-2087.
Li, J. (2018). On the joint tail behavior of randomly weighted sums of heavy-tailed random variables. Journal of Multivariate Analysis 164: 40-53.
Li, J. (2022). Asymptotic analysis of a dynamic systemic risk measure in a renewal risk model. Insurance: Mathematics and Economics 107: 38-56.
Li, Z., Luo, J. & Yao, J. (2021). Convex bound approximations for sums of random variables under multivariate log-generalized hyperbolic distribution and asymptotic equivalences. Journal of Computational and Applied Mathematics 391: 113459.
Liu, J. & Yang, Y. (2021). Asymptotics for systemic risk with dependent heavy-tailed losses. ASTIN Bulletin: The Journal of the International Actuarial Association 51(2): 571-605.
Platen, E. (1984). Zur zeitdiskreten Approximation von Itoprozessen. Diss. B., IMath, Akad. der Wiss. der DDR, Berlin.
Sato, K. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge: Cambridge University Press.
Tang, Q. (2006). Asymptotic ruin probabilities in finite horizon with subexponential losses and associated discount factors. Probability in the Engineering and Informational Sciences 20(1): 103-113.
Tang, Q. & Yuan, Z. (2014). Randomly weighted sums of subexponential random variables with application to capital allocation. Extremes 17(3): 467-493.
Wang, D. & Tang, Q. (2006). Tail probabilities of randomly weighted sums of random variables with dominated variation. Stochastic Models 22(2): 253-272.
Zhang, L., Wang, J., Zhou, W., Liu, L. & Zhang, L. (2020). Convergence analysis of parareal algorithm based on Milstein scheme for stochastic differential equations. Journal of Computational Mathematics 38(3): 487-501.
Zhou, M., Dhaene, J. & Yao, J. (2018). An approximation method for risk aggregations and capital allocation rules based on additive risk factor models. Insurance: Mathematics and Economics 79: 92-100.
Øksendal, B. (2003). Stochastic Differential Equations. Berlin: Springer-Verlag.