Recently, there has been much progress in understanding stationary measures for colored (also called multi-species or multi-type) interacting particle systems, motivated by asymptotic phenomena and rich underlying algebraic and combinatorial structures (such as nonsymmetric Macdonald polynomials). In this paper, we present a unified approach to constructing stationary measures for most of the known colored particle systems on the ring and the line, including (1) the Asymmetric Simple Exclusion Process (multi-species ASEP, or mASEP); (2) the $q$-deformed Totally Asymmetric Zero Range Process (TAZRP) also known as the $q$-Boson particle system; (3) the $q$-deformed Pushing Totally Asymmetric Simple Exclusion Process ($q$-PushTASEP). Our method is based on integrable stochastic vertex models and the Yang–Baxter equation. We express the stationary measures as partition functions of new ‘queue vertex models’ on the cylinder. The stationarity property is a direct consequence of the Yang–Baxter equation. For the mASEP on the ring, a particular case of our vertex model is equivalent to the multiline queues of Martin (Stationary distributions of the multi-type ASEP, Electron. J. Probab. 25 (2020), 1–41). For the colored $q$-Boson process and the $q$-PushTASEP on the ring, we recover and generalize known stationary measures constructed using multiline queues or other methods by Ayyer, Mandelshtam and Martin (Modified Macdonald polynomials and the multispecies zero range process: II, Algebr. Comb. 6 (2022), 243–284; Modified Macdonald polynomials and the multispecies zero-range process: I, Algebr. Comb. 6 (2023), 243–284) and Bukh and Cox (Periodic words, common subsequences and frogs, Ann. Appl. Probab. 32 (2022), 1295–1332). Our proofs of stationarity use the Yang–Baxter equation and bypass the Matrix Product Ansatz (used for the mASEP by Prolhac, Evans and Mallick (The matrix product solution of the multispecies partially asymmetric exclusion process, J. Phys. A. 
42 (2009), 165004)). On the line and in a quadrant, we use the Yang–Baxter equation to establish a general colored Burke’s theorem, which implies that suitable specializations of our queue vertex models produce stationary measures for particle systems on the line. We also compute the colored particle currents in stationarity.
We study a two-dimensional discounted optimal stopping zero-sum (or Dynkin) game related to perpetual redeemable convertible bonds expressed as game (or Israeli) options in a model of financial markets in which the behaviour of the ex-dividend price of a dividend-paying asset follows a generalized geometric Brownian motion. It is assumed that the dynamics of the random dividend rate of the asset paid to shareholders are described by the mean-reverting filtering estimate of an unobservable continuous-time Markov chain with two states. It is shown that the optimal exercise (conversion) and withdrawal (redemption) times forming a Nash equilibrium are the first times at which the asset price hits either a lower or an upper stochastic boundary, each a monotone function of the running value of the filtering estimate of the state of the chain. We rigorously prove that the optimal stopping boundaries are regular for the stopping region relative to the resulting two-dimensional diffusion process and that the value function is continuously differentiable with respect to both variables. It is verified by means of a change-of-variable formula with local time on surfaces that the optimal stopping boundaries are determined as the unique solution to the associated coupled system of nonlinear Fredholm integral equations among couples of continuous functions of bounded variation satisfying certain conditions. We also give a closed-form solution to the appropriate optimal stopping zero-sum game in the corresponding model with an observable continuous-time Markov chain.
We study a variant of the classical Markovian logistic SIS epidemic model on a complete graph, which has the additional feature that healthy individuals can become infected without contacting an infected member of the population. This additional ‘self-infection’ is used to model situations where there is an unknown source of infection or an external disease reservoir, such as an animal carrier population. In contrast to the classical logistic SIS epidemic model, the version with self-infection has a non-degenerate stationary distribution, and we derive precise asymptotics for the time to converge to stationarity (mixing time) as the population size becomes large. It turns out that the chain exhibits the cutoff phenomenon, which is a sharp transition in time from one to zero of the total variation distance to stationarity. We obtain the exact leading constant for the cutoff time and show that the window size is of constant (optimal) order. While this result is interesting in its own right, an additional contribution of this work is that the proof illustrates a recently formalised methodology of Barbour, Brightwell and Luczak (2022), ‘Long-term concentration of measure and cut-off’, Stochastic Processes and their Applications 152, 378–423, which can be used to show cutoff via a combination of concentration-of-measure inequalities for the trajectory of the chain and coupling techniques.
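The role of self-infection can be seen in a minimal Gillespie-style simulation. The rate forms below are illustrative assumptions (a standard logistic SIS parametrization with an added per-individual self-infection rate `eps`), not the paper's exact model: for `eps > 0` the state 0 is no longer absorbing, which is what makes the stationary distribution non-degenerate.

```python
import random

def sis_self_infection(n, beta, gamma, eps, t_end, i0=1, seed=0):
    """Gillespie simulation of a logistic SIS chain with self-infection.
    With I infectives, the jump rates used here (illustrative assumptions)
    are: I -> I+1 at rate beta*I*(n-I)/n + eps*(n-I), and I -> I-1 at
    rate gamma*I.  For eps > 0 the state 0 is no longer absorbing."""
    rng = random.Random(seed)
    t, i = 0.0, i0
    while t < t_end:
        up = beta * i * (n - i) / n + eps * (n - i)
        down = gamma * i
        total = up + down
        if total == 0.0:            # only possible when eps == 0 and i == 0
            break
        t += rng.expovariate(total)  # exponential waiting time
        if rng.random() * total < up:
            i += 1
        else:
            i -= 1
    return i

# Without self-infection, state 0 is absorbing; with eps > 0 it is not.
assert sis_self_infection(200, 1.5, 1.0, 0.0, 10.0, i0=0) == 0
```

Running the chain with `eps > 0` from `i0 = 0` shows it immediately escapes the disease-free state, in line with the non-degenerate stationary distribution described above.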
We study continuous-time Markov chains on the nonnegative integers under mild regularity conditions (in particular, the set of jump vectors is finite and both forward and backward jumps are possible). Based on the so-called flux balance equation, we derive an iterative formula for calculating stationary measures. Specifically, a stationary measure $\pi(x)$ evaluated at $x\in\mathbb{N}_0$ is represented as a linear combination of a few generating terms, similarly to the characterization of a stationary measure of a birth–death process, where there is only one generating term, $\pi(0)$. The coefficients of the linear combination are recursively determined in terms of the transition rates of the Markov chain. For the class of Markov chains we consider, there is always at least one stationary measure (up to a scaling constant). We give various results pertaining to uniqueness and nonuniqueness of stationary measures, and show that the dimension of the linear space of signed invariant measures is at most the number of generating terms. A minimization problem is constructed in order to compute stationary measures numerically. Moreover, a heuristic linear approximation scheme is suggested for the same purpose by first approximating the generating terms. The correctness of the linear approximation scheme is justified in some special cases. Furthermore, a decomposition of the state space into different types of states (open and closed irreducible classes, and trapping, escaping and neutral states) is presented. Applications to stochastic reaction networks are illustrated with examples.
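For the birth–death special case mentioned in the abstract, the flux-balance recursion can be sketched in a few lines. This is a minimal illustration of the one-generating-term situation, not the paper's general algorithm: the single term $\pi(0)$ determines the whole measure via $\pi(x+1) = \pi(x)\,\lambda(x)/\mu(x+1)$.

```python
def stationary_measure(birth, death, n):
    """Stationary measure of a birth-death chain via the flux-balance
    recursion pi(x+1) = pi(x) * birth(x) / death(x+1), normalised over
    the truncated state space {0, ..., n}.  The single generating term
    is pi(0), taken as 1 before normalisation."""
    pi = [1.0]
    for x in range(n):
        pi.append(pi[-1] * birth(x) / death(x + 1))
    z = sum(pi)
    return [p / z for p in pi]

# M/M/1 queue with arrival rate 0.5 and service rate 1.0: the stationary
# distribution is geometric, so consecutive ratios equal 0.5.
pi = stationary_measure(lambda x: 0.5, lambda x: 1.0, 20)
assert abs(pi[1] / pi[0] - 0.5) < 1e-12
assert abs(sum(pi) - 1.0) < 1e-12
```

In the general setting of the paper several generating terms are needed, but the recursive structure of the coefficients is the same in spirit.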
This paper characterizes irreducible phase-type representations for exponential distributions. Bean and Green (2000) gave a set of necessary and sufficient conditions for a phase-type distribution with an irreducible generator matrix to be exponential. We extend these conditions to irreducible representations, and we thus give a characterization of all irreducible phase-type representations for exponential distributions. We consider the results in relation to time-reversal of phase-type distributions, PH-simplicity, and the algebraic degree of a phase-type distribution, and we give applications of the results. In particular we give the conditions under which a Coxian distribution becomes exponential, and we construct bivariate exponential distributions. Finally, we translate the main findings to the discrete case of geometric distributions.
For a continuous-time phase-type (PH) distribution, starting with its Laplace–Stieltjes transform, we obtain a necessary and sufficient condition for its minimal PH representation to have the same order as its algebraic degree. To facilitate finding this minimal representation, we transform this condition equivalently into a non-convex optimization problem, which can be effectively addressed using an alternating minimization algorithm. Convergence of the algorithm is also proved. Moreover, the method we develop for continuous-time PH distributions can be used directly for discrete-time PH distributions after establishing an equivalence between the minimal representation problems for continuous-time and discrete-time PH distributions.
We consider time-inhomogeneous ordinary differential equations (ODEs) whose parameters are governed by an underlying ergodic Markov process. When this underlying process is accelerated by a factor $\varepsilon^{-1}$, an averaging phenomenon occurs and the solution of the ODE converges to a deterministic ODE as $\varepsilon$ vanishes. We are interested in cases where this averaged flow is globally attracted to a point. In that case, the equilibrium distribution of the solution of the ODE converges to a Dirac mass at this point. We prove an asymptotic expansion in terms of $\varepsilon$ for this convergence, with a somewhat explicit formula for the first-order term. The results are applied in three contexts: linear Markov-modulated ODEs, randomized splitting schemes, and Lotka–Volterra models in a random environment. In particular, as a corollary, we prove the existence of two matrices whose convex combinations are all stable but are such that, for a suitable jump rate, the top Lyapunov exponent of a Markov-modulated linear ODE switching between these two matrices is positive.
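A scalar toy version of this averaging effect can be simulated directly; the growth rates, jump rate, and Euler scheme below are illustrative assumptions, not the paper's setting. A fast symmetric two-state switch between rates makes the trajectory track the averaged ODE.

```python
import math
import random

def modulated_ode(a0, a1, eps, x0, t_end, dt=5e-4, seed=0):
    """Euler scheme for dx/dt = a_{s(t)} * x, where s(t) is a symmetric
    two-state Markov chain whose jump rate is accelerated by 1/eps.
    As eps -> 0 the solution approaches the averaged ODE
    dx/dt = ((a0 + a1) / 2) * x (toy scalar illustration)."""
    rng = random.Random(seed)
    x, s = x0, 0
    for _ in range(int(t_end / dt)):
        if rng.random() < dt / eps:   # jump of the fast environment
            s = 1 - s
        x += (a0 if s == 0 else a1) * x * dt
    return x

# With fast switching the trajectory is close to the averaged solution
# x0 * exp(((a0 + a1) / 2) * t_end), here exp(-0.5).
x = modulated_ode(-2.0, 1.0, 0.005, 1.0, 1.0)
assert abs(x - math.exp(-0.5)) < 0.35
```

Note that here $a_0 = -2$ gives decay and $a_1 = 1$ gives growth, yet the averaged rate $(a_0+a_1)/2 = -0.5$ is stable; the abstract's Lyapunov-exponent corollary concerns the matrix analogue of exactly this kind of switching.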
The problem of reservation in a large distributed system is analyzed via a new mathematical model. The target application is car-sharing systems. This model is motivated by the large station-based car-sharing system in France called Autolib’. This system can be described as a closed stochastic network where the nodes are the stations and the customers are the cars. The user can reserve a car and a parking space. We study the evolution of the system when the reservation of parking spaces and cars is effective for all users. The asymptotic behavior of the underlying stochastic network is given when the number N of stations and the fleet size M increase at the same rate. The analysis involves a Markov process on a state space with dimension of order $N^2$. It is quite remarkable that the state process describing the evolution of the stations, whose dimension is of order N, although not itself Markov, converges in distribution to a non-homogeneous Markov process. We prove this mean-field convergence. We also prove, using combinatorial arguments, that the mean-field limit has a unique equilibrium measure when the time between reserving and picking up the car is sufficiently small. This result extends the case where only the parking space can be reserved.
The embedding problem of Markov chains examines whether a stochastic matrix $\mathbf{P}$ can arise as the transition matrix from time 0 to time 1 of a continuous-time Markov chain. When the chain is homogeneous, it checks if $\mathbf{P}=\exp(\mathbf{Q})$ for a rate matrix $\mathbf{Q}$ with zero row sums and non-negative off-diagonal elements, called a Markov generator. It is known that a Markov generator may not always exist or be unique. This paper addresses finding $\mathbf{Q}$, assuming that the process has at most one jump per unit time interval, and focuses on the problem of aligning the conditional one-jump transition matrix from time 0 to time 1 with $\mathbf{P}$. We derive a formula for this matrix in terms of $\mathbf{Q}$ and establish that for any $\mathbf{P}$ with non-zero diagonal entries, a unique $\mathbf{Q}$, called the $\mathbb{1}$-generator, exists. We compare the $\mathbb{1}$-generator with the one-jump rate matrix from Jarrow, Lando, and Turnbull (1997), showing which is a better approximate Markov generator of $\mathbf{P}$ in some practical cases.
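In the two-state case the embedding problem is fully explicit, which allows a compact sanity check. This is the standard textbook computation, not the paper's one-jump construction: $\exp(\mathbf{Q})$ has a closed form, and $\mathbf{Q}$ can be recovered from $\mathbf{P}$ exactly when $p_{11}+p_{22}>1$.

```python
import math

def exp_generator(q, r, t=1.0):
    """exp(tQ) for the two-state generator Q = [[-q, q], [r, -r]],
    via the closed form exp(tQ) = I + (1 - e^{-(q+r)t}) / (q + r) * Q."""
    f = (1.0 - math.exp(-(q + r) * t)) / (q + r)
    return [[1.0 - q * f, q * f], [r * f, 1.0 - r * f]]

def embed(P):
    """Recover the unique two-state generator Q with P = exp(Q).
    It exists iff d = p11 + p22 - 1 > 0; the jump rates are then
    q = p12 * s / (1 - d) and r = p21 * s / (1 - d), s = -log(d)."""
    d = P[0][0] + P[1][1] - 1.0
    if d <= 0.0:
        return None                    # P is not embeddable
    s = -math.log(d)
    return P[0][1] * s / (1.0 - d), P[1][0] * s / (1.0 - d)

P = exp_generator(0.3, 0.5)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # rows sum to 1
q, r = embed(P)
assert abs(q - 0.3) < 1e-9 and abs(r - 0.5) < 1e-9    # exact round trip
```

Existence and uniqueness already fail in interesting ways beyond two states, which is where constructions such as the paper's $\mathbb{1}$-generator become relevant.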
Continuous-time Markov chains are frequently used to model the stochastic dynamics of (bio)chemical reaction networks. However, except in very special cases, they cannot be analyzed exactly. Additionally, simulation can be computationally intensive. An approach to address these challenges is to consider a more tractable diffusion approximation. Leite and Williams (Ann. Appl. Prob. 29, 2019) proposed a reflected diffusion as an approximation for (bio)chemical reaction networks, which they called the constrained Langevin approximation (CLA) as it extends the usual Langevin approximation beyond the first time some chemical species becomes zero in number. Further explanation and examples of the CLA can be found in Anderson et al. (SIAM Multiscale Modeling Simul. 17, 2019).
In this paper, we extend the approximation of Leite and Williams to (nearly) density-dependent Markov chains, as a first step to obtaining error estimates for the CLA when the diffusion state space is one-dimensional, and we provide a bound for the error in a strong approximation. We discuss some applications for chemical reaction networks and epidemic models, and illustrate these with examples. Our method of proof is designed to generalize to higher dimensions, provided there is a Lipschitz Skorokhod map defining the reflected diffusion process. The existence of such a Lipschitz map is an open problem in dimensions more than one.
By the technique of augmented truncations, we obtain the perturbation bounds on the distance of the finite-time state distributions of two continuous-time Markov chains (CTMCs) in a type of weaker norm than the V-norm. We derive the estimates for strongly and exponentially ergodic CTMCs. In particular, we apply these results to get the bounds for CTMCs satisfying Doeblin or stochastically monotone conditions. Some examples are presented to illustrate the limitation of the V-norm in perturbation analysis and to show the quality of the weak norm.
Birth–death processes form a natural class where ideas and results on large deviations can be tested. We derive a large-deviation principle under an assumption that the rate of jump down (death) grows asymptotically linearly with the population size, while the rate of jump up (birth) grows sublinearly. We establish a large-deviation principle under various forms of scaling of the underlying process and the corresponding normalization of the logarithm of the large-deviation probabilities. The results show interesting features of dependence of the rate functional upon the parameters of the process and the forms of scaling and normalization.
This paper investigates tail asymptotics of stationary distributions and quasi-stationary distributions (QSDs) of continuous-time Markov chains on subsets of the non-negative integers. Based on the so-called flux-balance equation, we establish identities for stationary measures and QSDs, which we use to derive tail asymptotics. In particular, for continuous-time Markov chains with asymptotic power law transition rates, tail asymptotics for stationary distributions and QSDs are classified into three types using three easily computable parameters: (i) super-exponential distributions, (ii) exponential-tailed distributions, and (iii) sub-exponential distributions. Our approach to establish tail asymptotics of stationary distributions is different from the classical semimartingale approach, and we do not impose ergodicity or moment bound conditions. In particular, the results also hold for explosive Markov chains, for which multiple stationary distributions may exist. Furthermore, our results on tail asymptotics of QSDs seem new. We apply our results to biochemical reaction networks, a general single-cell stochastic gene expression model, an extended class of branching processes, and stochastic population processes with bursty reproduction, none of which are birth–death processes. Our approach, together with the identities, easily extends to discrete-time Markov chains.
A comparison theorem for state-dependent regime-switching diffusion processes is established, which enables us to pathwise-control the evolution of the state-dependent switching component simply by Markov chains. Moreover, a sharp estimate on the stability of Markovian regime-switching processes under the perturbation of transition rate matrices is provided. Our approach is based on elaborate constructions of switching processes in the spirit of Skorokhod’s representation theorem varying according to the problem being dealt with. In particular, this method can cope with switching processes in an infinite state space and not necessarily of birth–death type. As an application, some known results on the ergodicity and stability of state-dependent regime-switching processes can be improved.
We consider an SIR (susceptible $\to$ infective $\to$ recovered) epidemic in a closed population of size n, in which infection spreads via mixing events, comprising individuals chosen uniformly at random from the population, which occur at the points of a Poisson process. This contrasts sharply with most epidemic models, in which infection is spread purely by pairwise interaction. A sequence of epidemic processes, indexed by n, and an approximating branching process are constructed on a common probability space via embedded random walks. We show that under suitable conditions the process of infectives in the epidemic process converges almost surely to the branching process. This leads to a threshold theorem for the epidemic process, where a major outbreak is defined as one that infects at least $\log n$ individuals. We show further that there exists $\delta > 0$, depending on the model parameters, such that the probability that a major outbreak has size at least $\delta n$ tends to one as $n \to \infty$.
In this paper, we propose new Metropolis–Hastings and simulated annealing algorithms on a finite state space via modifying the energy landscape. The core idea of landscape modification rests on introducing a parameter c, such that the landscape is modified once the algorithm is above this threshold parameter to encourage exploration, while the original landscape is utilized when the algorithm is below the threshold for exploitation purposes. We illustrate the power and benefits of landscape modification by investigating its effect on the classical Curie–Weiss model with Glauber dynamics and external magnetic field in the subcritical regime. This leads to a landscape-modified mean-field equation, and with appropriate choice of c the free energy landscape can be transformed from a double-well into a single-well landscape, while the location of the global minimum is preserved on the modified landscape. Consequently, running algorithms on the modified landscape can improve the convergence to the ground state in the Curie–Weiss model. In the setting of simulated annealing, we demonstrate that landscape modification can yield improved or even subexponential mean tunnelling time between global minima in the low-temperature regime by appropriate choice of c, and we give a convergence guarantee using an improved logarithmic cooling schedule with reduced critical height. We also discuss connections between landscape modification and other acceleration techniques, such as Catoni’s energy transformation algorithm, preconditioning, importance sampling, and quantum annealing. The technique developed in this paper is not limited to simulated annealing, but is broadly applicable to any difference-based discrete optimization algorithm by a change of landscape.
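The thresholding idea can be sketched on a finite state space. The specific logarithmic damping below is an illustrative assumption, not the transformation used in the paper: because the map is strictly increasing in the energy, the set of global minimisers is preserved while barriers above c are flattened, which is the mechanism the abstract describes for reducing the critical height.

```python
import math

def modify_landscape(energy, c):
    """Return a modified energy function: unchanged below the threshold c,
    logarithmically damped above it (illustrative choice).  The map
    e -> c + log(1 + e - c) is strictly increasing, so the global
    minimisers of the landscape are preserved."""
    def energy_c(x):
        e = energy(x)
        return e if e <= c else c + math.log1p(e - c)
    return energy_c

# A double-well landscape on {0, ..., 20} with global minimum at x = 15.
def energy(x):
    return 0.01 * (x - 5.0) ** 2 * (x - 15.0) ** 2 + 0.1 * (15 - x)

states = range(21)
energy_c = modify_landscape(energy, c=1.0)
# The global minimum is preserved on the modified landscape...
assert min(states, key=energy) == min(states, key=energy_c) == 15
# ...while the barrier between the two wells is lowered.
assert energy_c(10) < energy(10)
```

Plugging `energy_c` instead of `energy` into a standard Metropolis acceptance ratio $\min(1, e^{-\beta(E(y)-E(x))})$ then gives a chain that crosses the (flattened) barrier more easily at low temperature.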
A system of interacting multi-class finite-state jump processes is analyzed. The model under consideration consists of a block-structured network with dynamically changing multi-color nodes. The interactions are local and described through local empirical measures. Two levels of heterogeneity are considered: between and within the blocks where the nodes are labeled into two types. The central nodes are those connected only to nodes from the same block, whereas the peripheral nodes are connected to both nodes from the same block and nodes from other blocks. Limits of such systems as the number of nodes tends to infinity are investigated. In particular, under specific regularity conditions, propagation of chaos and the law of large numbers are established in a multi-population setting. Moreover, it is shown that, as the number of nodes goes to infinity, the behavior of the system can be represented by the solution of a McKean–Vlasov system. Then, we prove large deviations principles for the vectors of empirical measures and the empirical processes, which extends the classical results of Dawson and Gärtner (Stochastics 20, 1987) and Léonard (Ann. Inst. H. Poincaré Prob. Statist. 31, 1995).
We consider a class of processes describing a population consisting of k types of individuals. The process is almost surely absorbed at the origin within finite time, and we study the expected time taken for such extinction to occur. We derive simple and precise asymptotic estimates for this expected persistence time, starting either from a single individual or from a quasi-equilibrium state, in the limit as a system size parameter N tends to infinity. Our process need not be a Markov process on ${\mathbb Z}_+^k$; we allow the possibility that individuals’ lifetimes may follow more general distributions than the exponential distribution.
Consider a two-type Moran population of size N with selection and mutation, where the selective advantage of the fit individuals is amplified at extreme environmental conditions. Assume selection and mutation are weak with respect to N, and extreme environmental conditions rarely occur. We show that, as $N\to\infty$, the type frequency process with time sped up by N converges to the solution to a Wright–Fisher-type SDE with a jump term modeling the effect of the environment. We use an extension of the ancestral selection graph (ASG) to describe the genealogical picture of the model. Next, we show that the type frequency process and the line-counting process of a pruned version of the ASG satisfy a moment duality. This relation yields a characterization of the asymptotic type distribution. We characterize the ancestral type distribution using an alternative pruning of the ASG. Most of our results are stated in annealed and quenched form.
For a quadratic Markov branching process (QMBP), we show that the decay parameter is equal to the first eigenvalue of a Sturm–Liouville operator associated with the partial differential equation that the generating function of the transition probability satisfies. The proof is based on the spectral properties of the Sturm–Liouville operator. Both the upper and lower bounds of the decay parameter are given explicitly by means of a version of Hardy’s inequality. Two examples are provided to illustrate our results. The Hardy index, an important quantity closely linked to the decay parameter of the QMBP, is investigated in detail and estimated.