Review

Master equations and the theory of stochastic path integrals

Markus F Weber and Erwin Frey

Published 17 March 2017 © 2017 IOP Publishing Ltd
Citation: Markus F Weber and Erwin Frey 2017 Rep. Prog. Phys. 80 046601, DOI 10.1088/1361-6633/aa5ae2


Abstract

This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers–Moyal or the system size expansion, or require ad hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman–Kolmogorov equation, we show how the two master equations can be cast into any one of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals by analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. In addition, we derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers–Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker–Planck equation. One can rewrite this path integral in terms of an Onsager–Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.


1. Introduction

1.1. Scope of this review

The theory of continuous-time Markov processes is largely built on two equations: the Fokker–Planck [1–4] and the master equation [4, 5]. Both equations assume that the future of a system depends only on its current state, memories of its past having been wiped out by randomizing forces. This Markov assumption is sufficient to derive either of the two equations. Whereas the Fokker–Planck equation describes systems that evolve continuously from one state to another, the master equation models systems that perform jumps in state space.

Path integral representations of the master equation were first derived around 1980 [6–15], shortly after such representations had been derived for the Fokker–Planck equation [16–19]. Both approaches were heavily influenced by quantum theory, introducing such concepts as the Fock space [20] with its 'bras' and 'kets' [21], coherent states [22–24], and 'normal-ordering' [25] into non-equilibrium theory. Some of these concepts are now well established and the original 'bosonic' path integral representation has been complemented with a 'fermionic' counterpart [26–31]. Nevertheless, we feel that the theory of these 'stochastic' path integrals may benefit from a step back and a closer look at the probabilistic structures behind the integrals. Therefore, the objects imported from quantum theory make way for their counterparts from probability theory in this review. For example, the coherent states give way to the Poisson distribution. Moreover, we use the bras and kets as particular basis functionals and functions whose choice depends on the stochastic process at hand (a functional maps functions to numbers). Upon choosing the basis functions as Poisson distributions, one can thereby recover both a classic path integral representation of averaged observables and the Poisson representation of Gardiner and Chaturvedi [32, 33]. The framework presented in this review integrates a variety of different approaches to the master equation. Besides the Poisson representation, these approaches include a spectral method for the computation of stationary probability distributions [34], WKB approximations and other 'semi-classical' methods for the computation of rare event probabilities [35–38], and a variational approach that was proposed in the context of stochastic gene expression [39]. All of these approaches can be treated within a unified framework. Knowledge about this common framework makes it possible to systematically search for new ways of solving the master equation.

Before outlining the organization of this review, let us note that by focusing on the above path integral representations of master and Fokker–Planck equations, we neglect several other 'stochastic' or 'statistical' path integrals that have been developed. These include Edwards' path integral approach to turbulence [40, 41], a path integral representation of Haken [42], path integral representations of non-Markov processes [43–58] and of polymers [59–62], and a representation of 'hybrid' processes [63–65]. The dynamics of these stochastic hybrid processes are piecewise-deterministic. Moreover, we do not discuss the application of renormalization group techniques, despite their importance. Excellent texts exploring these techniques in the context of non-equilibrium critical phenomena [66–68] are provided by the review of Täuber, Howard, and Vollmayr-Lee [69] as well as the book by Täuber [70]. Our main interest lies in a mathematical framework unifying the different approaches from the previous paragraph and in two path integrals that are based on this framework. Both of these path integrals provide exact representations of the conditional probability distribution solving the master equation. We exemplify the use of the path integrals for elementary processes, which we choose for their pedagogic value. Most of these processes do not involve spatial degrees of freedom, but the application of the presented methods to processes on spatial lattices or networks is straightforward. A process with diffusion and linear decay serves as an example of how path integrals can be evaluated perturbatively using Feynman diagrams. The particles' linear decay is treated as a perturbation to their free diffusion. The procedure readily extends to more complex processes. Moreover, we show how the two path integrals can be used for the computation of rare event probabilities. Let us emphasize that we only consider Markov processes obeying the Chapman–Kolmogorov equation and associated master equations [4]. It may be interesting to extend the discussed methods to 'generalized' or 'physical' master equations with memory kernels [71–74].

1.2. Organization of this review

The organization of this review is summarized in figure 1 and is as follows. In the next section 1.3, we introduce the basic concepts of the theory of continuous-time Markov processes. After discussing the roles of the forward and backward Fokker–Planck equations for processes with continuous sample paths, we turn to processes with discontinuous sample paths. The probability of finding such a 'jump process' in a generic state n at time $\tau >{{t}_{0}}$ , given that the process has been in state n0 at time t0, is represented by the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ . Whereas the forward master equation evolves this probability distribution forward in time, starting out at time $\tau ={{t}_{0}}$ , the backward master equation evolves the distribution $p\left(t,n|\tau,{{n}_{0}}\right)$ backward in time, starting out at time $\tau =t$ . Both master equations can be derived from the Chapman–Kolmogorov equation (see left side of figure 1). In section 1.4, we discuss two explicit representations of the conditional probability distribution solving the two master equations. Moreover, we comment on various numerical methods for the approximation of this distribution and for the generation of sample paths. Afterwards, section 1.5 provides a brief historical overview of contributions to the development of stochastic path integrals.

Figure 1.

Figure 1. Roadmap and summary of the methods considered in this review. The arrows represent possible routes for derivations. Labelled arrows represent derivations that are explicitly treated in the respective sections. For example, the forward and backward master equations are derived from the Chapman–Kolmogorov equation in section 1.3. In this section, we also discuss the path summation representation (19) of the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$. This representation can be derived by examining the stochastic simulation algorithm (SSA) of Gillespie [75–78] or by performing a Laplace transformation of the forward master equation (10) (see appendix B). In sections 2 and 3, the forward and the backward master equations are cast into four linear PDEs, also called 'flow equations'. These equations are obeyed by a probability generating function, a probability generating functional, a marginalized distribution, and a further series expansion. The flow equations can be solved in terms of a forward and a backward path integral as shown in sections 4 and 5. Upon performing inverse transformations, the path integrals provide two distinct representations of the conditional probability distribution solving the master equations. Moreover, they can be used to represent averaged observables as explained in section 6. Besides the methods illustrated in the figure, we discuss path integral representations of processes with continuous state spaces whose master equations admit Kramers–Moyal expansions (sections 4.4 and 5.3). A truncation of the backward Kramers–Moyal expansion at the level of a diffusion approximation results in a path integral representation of the (backward) Fokker–Planck equation whose original development goes back to works of Martin, Siggia, and Rose [16], de Dominicis [17], Janssen [18, 19], and Bausch, Janssen, and Wagner [19]. The representation can be rewritten in terms of an Onsager–Machlup function [79], and it simplifies to Wiener's path integral [80, 81] for purely diffusive Brownian motion [82]. Renormalization group techniques are not considered in this review. Information on these techniques can be found in [69, 70].


The main part of this review begins with section 2. We first exemplify how a generalized probability generating function can be used to determine the stationary probability distribution of an elementary chemical reaction. This example introduces the bra-ket notation used in this review. In section 2.1, we formulate conditions under which a general forward master equation can be transformed into a linear partial differential equation (PDE) obeyed by the generating function. This function is defined as the sum of the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ over a set of basis functions $\left\{|n\rangle \right\}$, the 'kets' (see middle column of figure 1). The explicit choice of the basis functions depends on the process being studied. We discuss different choices of the basis functions in section 2.2, first for a random walk, afterwards for chemical reactions and for processes whose particles locally exclude one another. Several methods have recently been proposed for the analysis of the PDE obeyed by the generating function. These methods include the variational method of Eyink [83] and of Sasai and Wolynes [39], the WKB approximations [84] and spectral methods of Elgart and Kamenev [35] and of Assaf and Meerson [36–38], and the spectral method of Walczak, Mugler, and Wiggins [34]. We comment on these methods in section 2.3.

In section 3.1, we formulate conditions under which a general backward master equation can be transformed into a novel, backward-time PDE obeyed by a 'marginalized distribution'. This object is defined as the sum of the conditional probability distribution $p\left(t,n|\tau,{{n}_{0}}\right)$ over a set of basis functions $\left\{|{{n}_{0}}\rangle \right\}$ (see middle column of figure 1). If the basis function $|{{n}_{0}}\rangle $ is chosen as a probability distribution, the marginalized distribution also constitutes a true probability distribution. Different choices of the basis function are considered in section 3.2. In section 3.3, the use of the marginalized distribution is exemplified in the calculation of mean extinction times. Afterwards, in section 3.4, we derive yet another linear PDE, which is obeyed by a 'probability generating functional'. This functional is defined as the sum of the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ over a set of basis functionals $\left\{\langle n|\right\}$ , the 'bras'. In section 3.5, we show that the way in which the generating functional 'generates' probabilities generalizes the Poisson representation of Gardiner and Chaturvedi [32, 33].

Sections 4 and 5 share the goal of representing the master equations' solution by path integrals. In section 4.1, we first derive a novel backward path integral representation from the PDE obeyed by the marginalized distribution (see right side of figure 1). Its use is exemplified in sections 4.2, 4.3 and 4.5, in which we solve several elementary processes. Although we do not discuss the application of renormalization group techniques, section 4.5.4 includes a discussion of how the backward path integral representation can be evaluated in terms of a perturbation expansion. The summands of the expansion are expressed by Feynman diagrams. Besides, we derive a path integral representation for Markov processes with continuous state spaces in the 'intermezzo' section 4.4. This representation is obtained by performing a Kramers–Moyal expansion of the backward master equation and it comprises a classic path integral representation [16–19] of the (backward) Fokker–Planck equation as a special case. One can rewrite the representation of the Fokker–Planck equation in terms of an Onsager–Machlup function [79] and, for purely diffusive Brownian motion [82], the representation simplifies to the path integral of Wiener [80, 81]. Moreover, we recover a Feynman–Kac-like formula [85], which solves the (backward) Fokker–Planck equation in terms of an average over the paths of an Itô stochastic differential equation [86–88] (or of a Langevin equation [89]).

In section 5, we complement the backward path integral representation with a forward path integral representation. Its derivation in section 5.1 starts out from the PDE obeyed by the generalized generating function (see right side of figure 1). The forward path integral representation can, for example, be used to compute the generating function of generic linear processes as we demonstrate in section 5.2. Besides, we briefly outline how a Kramers–Moyal expansion of the (forward) master equation can be employed to derive a path integral representation for processes with continuous state spaces in section 5.3. This path integral can be expressed in terms of an average over the paths of an SDE proceeding backward in time. Its potential use remains to be explored.

Before proceeding, let us briefly point out some properties of the forward and backward path integral representations. First, the paths along which these path integrals proceed are described by real variables and all integrations are performed over the real line. Grassmann path integrals [26–31] for systems whose particles locally exclude one another are not considered. It is, however, explained in section 2.2.4 how such systems can be treated without the need for Grassmann variables, based on a method recently proposed by van Wijland [90]. Second, transformations of the path integral variables such as the 'Doi-shift' [91] are implemented on the level of the basis functions and functionals. Third, our derivations of the forward and backward path integral representations do not involve coherent states or combinatoric relations for the commutation of exponentiated operators. Last, the path integrals allow for time-dependent rate coefficients of the stochastic processes.

In section 6, we derive a path integral representation of averaged observables (see right side of figure 1). This representation can be derived both from the backward and forward path integral representations (see section 6.1), and by representing the forward master equation in terms of the eigenvectors of creation and annihilation matrices ('coherent states'; see section 6.2). The duality between these two approaches resembles the duality between the wave [92] and matrix [93–95] formulations of quantum mechanics. Let us note that our resulting path integral does not involve a 'second-quantized' or 'normal-ordered' observable [96]. In fact, we show that this object agrees with the average of an observable over a Poisson distribution. In section 6.3, we then explain how the path integral can be evaluated perturbatively using Feynman diagrams. Such an evaluation is demonstrated for the coagulation reaction $2\,A\to A$ in section 6.4, restricting ourselves to the 'tree level' of the diagrams.

In section 7, we review and extend a recent method of Elgart and Kamenev for the computation of rare event probabilities [35]. As explained in section 7.1, this method evaluates a probability distribution by expanding the forward path integral representation from section 5 around 'stationary', or 'extremal', paths. In a first step, one thereby acquires an approximation of the ordinary probability generating function. In a second step, this generating function is transformed back into the underlying probability distribution. The evaluation of this back transformation typically involves an additional saddle-point approximation. In section 7.2, we demonstrate both of the steps for the binary annihilation reaction $2\,A\to \varnothing $ , improving an earlier approximation of the process by Elgart and Kamenev [35] by terms of sub-leading order. In section 7.3, we then extend the 'stationary path method' to the backward path integral representation from section 4. The backward path integral provides direct access to a probability distribution without requiring an auxiliary saddle-point approximation. However, the leading order term of its expansion is not normalized. We demonstrate the procedure for the binary annihilation reaction in section 7.4.

Finally, section 8 closes with a summary of the different approaches discussed in this review and outlines open challenges and promising directions for future research.

1.3. Continuous-time Markov processes and the forward and backward master equations

Our main interest lies in a special class of stochastic processes, namely in the class of continuous-time Markov processes with discontinuous sample paths. These processes are also called 'jump processes'. In the following, we outline the mathematical theory of jump processes and derive the central equations obeyed by them: the forward and the backward master equation. Before going into the mathematical details, let us explain when a system's time evolution can be modelled as a continuous-time Markov process with discontinuous sample paths and what that phrase actually means.

First of all, if the evolution of a system is to be modelled as a continuous-time Markov process, it must be possible to describe the system's state by some variable n. In fact, it must be possible to do so at every point τ in time throughout an observation period [t0,t]. A variable $n\in \mathbb{Z}$ could, for example, represent the position of a molecular motor along a cytoskeletal filament, or a variable $n\in {{\mathbb{R}}_{\geqslant 0}}$ the price of a stock between the opening and closing times of an exchange. The assumption of a continuous time parameter τ is rather natural and conforms to our everyday experience. Still, a discrete time parameter may sometimes be preferred, for example, to denote individual generations of an evolving population [97]. By allowing τ to take on any value between the initial time t0 and the final time t, we can choose it to be arbitrarily close to one of those times. Below, this possibility will allow us to describe the evolution of the process in terms of a differential equation.

The (unconditional) probability of finding the system in state n at time τ is represented by the 'single-time' probability distribution $p(\tau,n)$. Upon demanding that the system has visited some state n0 at an earlier time ${{t}_{0}}<\tau $, the probability of observing state n at time τ is instead encoded by the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$. If the conditional probability distribution is known, the single-time distribution can be inferred from any given initial distribution $p\left({{t}_{0}},{{n}_{0}}\right)$ via $p(\tau,n)={\sum}_{{{n}_{0}}}p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)p\left({{t}_{0}},{{n}_{0}}\right)$. A stochastic process is said to be Markovian if a distribution conditioned on multiple points in time actually depends only on the state that was realized most recently. In other words, a conditional distribution $p\left(t,n|{{\tau}_{k}},{{m}_{k}};\cdots ;{{\tau}_{1}},{{m}_{1}};{{t}_{0}},{{n}_{0}}\right)$ must agree with $p\left(t,n|{{\tau}_{k}},{{m}_{k}}\right)$ whenever $t>{{\tau}_{k}}>{{\tau}_{k-1}}>\cdots >{{t}_{0}}$. Therefore, a Markov process is fully characterized by the single-time distribution $p\left({{t}_{0}},{{n}_{0}}\right)$ and the conditional distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$. The latter function is commonly referred to as the 'transition probability' for Markov processes [98].

The stochastic dynamics of a system can be modelled in terms of a Markov process if the system has no memory. Let us explain this requirement with the example of a Brownian particle suspended in a fluid [82]. Over a very short time scale, the motion of such a particle is ballistic and its velocity highly auto-correlated [99]. But as the particle collides with molecules of the fluid, that memory fades away. A significant move of the particle due to fluctuations in the isotropy of its molecular bombardment then appears to be completely uncorrelated from previous moves (provided that the observer does not look too closely [100]). Thus, on a sufficiently coarse time scale, the motion of the particle looks diffusive and can be modelled as a Markov process. However, the validity of the Markov assumption does not extend beyond the coarse time scale.

The Brownian particle exemplifies only two of the properties that we are looking for: its position is well-defined at every time τ and its movement is effectively memoryless on the coarse time scale. But the paths of the Brownian particle are continuous, meaning that it does not spontaneously vanish and then reappear at another place. If the friction of the fluid surrounding the Brownian particle is high (over-damped motion), the probability of observing the particle at a particular place can be described by the Smoluchowski equation [101]. This equation coincides with the simple diffusion equation when the particle is not subject to an external force. In the general case, the probability of observing the particle at a particular place with a particular velocity obeys the Klein–Kramers equation [102, 103] (the book of Risken [104] provides a pedagogic introduction to these equations). From a mathematical point of view, all of these equations constitute special cases of the (forward) Fokker–Planck equation [1–4, 104]. For a single random variable $x\in \mathbb{R}$, e.g. the position of the Brownian particle, this equation has the generic form

$$\partial_{\tau}p\left(\tau,x|t_0,x_0\right)=-\partial_{x}\left[\alpha_{\tau}(x)\,p\left(\tau,x|t_0,x_0\right)\right]+\frac{1}{2}\partial_{x}^{2}\left[\beta_{\tau}(x)\,p\left(\tau,x|t_0,x_0\right)\right].\tag{1}$$

The initial condition of this equation is given by the Dirac delta distribution (or generalized function) $p\left({{t}_{0}},x|{{t}_{0}},{{x}_{0}}\right)=\delta \left(x-{{x}_{0}}\right)$ . Here we used the letter x for the random variable because the letter n would suggest a discrete state space. The function ${{\alpha}_{\tau}}$ is often called a drift coefficient and ${{\beta}_{\tau}}$ a diffusion coefficient (note, however, that in the context of population genetics, ${{\beta}_{\tau}}$ describes the strength of random genetic 'drift' [105, 106]). For reasons addressed below, the diffusion coefficient must be non-negative at every point in time for every value of x (for a multivariate process, ${{\beta}_{\tau}}$ represents a positive-semidefinite matrix). In the mathematical community, the Fokker–Planck equation is better known as the Kolmogorov forward equation [105], honouring Kolmogorov's fundamental contributions to the theory of continuous-time Markov processes [4]. Whereas the above Fokker–Planck equation evolves the conditional probability distribution forward in time, one can also evolve this distribution backward in time, starting out from the final condition $p\left(t,x|t,{{x}_{0}}\right)=\delta \left(x-{{x}_{0}}\right)$ . The corresponding equation is called the Kolmogorov backward or backward Fokker–Planck equation. It has the generic form

$$\partial_{-\tau}p\left(t,x|\tau,x_0\right)=\alpha_{\tau}\left(x_0\right)\partial_{x_0}p\left(t,x|\tau,x_0\right)+\frac{1}{2}\beta_{\tau}\left(x_0\right)\partial_{x_0}^{2}p\left(t,x|\tau,x_0\right).\tag{2}$$

The forward and backward Fokker–Planck equations provide information about the conditional probability distribution but not about the individual paths of a Brownian particle. The general theory of how partial differential equations connect to the individual sample paths of a stochastic process goes back to works of Feynman and Kac [85, 107]. Their theory allows us to write the solution of the backward Fokker–Planck equation (2) in terms of the following Kolmogorov formula, which constitutes a special case of the Feynman–Kac formula [108–110]:

$$p\left(t,x|\tau,x_0\right)=\left\langle\left\langle \delta\left(x-x(t)\right)\right\rangle\right\rangle_{W}.\tag{3}$$

The brackets ${{\langle \langle \cdot \rangle \rangle}_{W}}$ represent an average over realizations of a Wiener process W, which evolves through uncorrelated Gaussian increments $\text{d}W$. The Wiener process drives the evolution of the sample path x(s) from $x(\tau )={{x}_{0}}$ to x(t) via the Itô stochastic differential equation (SDE) [86–88]

$$\text{d}x(s)=\alpha_{s}(x(s))\,\text{d}s+\sqrt{\beta_{s}(x(s))}\,\text{d}W(s).\tag{4}$$

The diffusion coefficient ${{\beta}_{s}}$ must be non-negative because x(s) describes the position of a real particle. Otherwise, the sample path heads off into imaginary space (for a multivariate process, $\sqrt{{{\beta}_{s}}}$ may be chosen as the unique symmetric and positive-semidefinite square root of ${{\beta}_{s}}$ [111]). Algorithms for the numerical solution of SDEs are provided in [108]. In the physical sciences, SDEs are often written as Langevin equations [89]. For a discussion of stochastic differential equations, the reader may refer to a recent report on progress [112].
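
To make the connection between the Kolmogorov formula (3) and the SDE (4) concrete, the following minimal sketch propagates an ensemble of sample paths with the Euler–Maruyama scheme and histograms their end points; the drift $\alpha(x)=-x$ and the constant diffusion coefficient $\beta=2D$ are illustrative assumptions made only for this example, not choices made elsewhere in this review.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the Ito SDE (4),
#   dx(s) = alpha_s(x(s)) ds + sqrt(beta_s(x(s))) dW(s),
# for the assumed choice alpha(x) = -x and beta(x) = 2*D (an
# Ornstein-Uhlenbeck process); other coefficients plug in directly.
rng = np.random.default_rng(seed=1)
D, dt, steps, samples = 0.5, 1e-3, 1000, 10_000
x = np.full(samples, 1.0)                       # x(tau) = x0 = 1 for every path
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), samples)  # uncorrelated Gaussian increments
    x += -x * dt + np.sqrt(2.0 * D) * dW        # Euler-Maruyama update

# Histogram approximation of p(t, x | tau, x0), i.e. the average over
# realizations of the Wiener process in the Kolmogorov formula (3).
hist, edges = np.histogram(x, bins=50, density=True)
```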

After this brief detour to continuous-time Markov processes with continuous sample paths, let us return to jump processes, whose sample paths are discontinuous. An example of a system that can be modelled as such a process is provided by motor proteins on cytoskeletal filaments [113–115]. The uni-directional walk of a molecular motor such as myosin, kinesin, or dynein along an actin filament or a microtubule is driven by the hydrolysis of adenosine triphosphate (ATP) and is intrinsically stochastic [116]. Once a sufficient amount of energy is available, one of the two 'heads' of the motor unbinds from its current binding site on the filament and moves to the next binding site. Each binding site can only be occupied by a single head. On a coarse-grained level, the state of the system at time τ is therefore characterized by the occupation of its binding sites. With only a single cytoskeletal filament whose binding sites are labelled as $\left\{0,1,2,\ldots \right\}=:\mathbb{L}$, the variable $n\equiv \left({{n}_{0}},{{n}_{1}},{{n}_{2}},\ldots \right)\in {{\left\{0,1\right\}}^{|\mathbb{L}|}}$ can be used to represent the occupied and unoccupied binding sites. Here, $|\mathbb{L}|$ denotes the total number of binding sites along the filament and ${{n}_{i}}=1$ signifies that the ith binding site is occupied. Since the state space of all the binding site configurations is discrete, a change in the binding site configuration involves a 'jump' in state space. Provided that the jumps are uncorrelated from one another (which needs to be verified experimentally), the dynamics of the system can be described by a continuous-time Markov process with discontinuous sample paths. Before addressing further systems for which this is the case, let us derive the fundamental equations obeyed by these processes: the forward and the backward master equation.

In his classic textbook [110], Gardiner presents a succinct derivation of both the (forward) master and the (forward) Fokker–Planck equation by distinguishing between discontinuous and continuous contributions to sample paths. In the following, we are only interested in the master equation, which governs the evolution of systems whose states change discontinuously. To prevent the occurrence of continuous changes, we assume that the state of our system is represented by a discrete variable n and that the space of all states is countable. With the state space being chosen as the set of integers $\mathbb{Z}$, n could, for example, represent the position of a molecular motor along a cytoskeletal filament. On the other hand, a non-negative integer $n\in {{\mathbb{N}}_{0}}$ could represent the number of molecules in a chemical reaction. The minimal jump size is one in both cases. By keeping the explicit role of n unspecified, the following considerations also apply to systems harbouring different kinds of molecules (e.g. $n\equiv \left({{n}^{A}},{{n}^{B}},{{n}^{C}}\right)\in \mathbb{N}_{0}^{3}$), and to systems whose molecules perform random walks in a (discrete) space (e.g. $n\equiv {{\left\{{{n}_{i}}\in {{\mathbb{N}}_{0}}\right\}}_{i\in \mathbb{Z}}}$).

To derive the master equation, we start out by marginalizing the joint conditional distribution $p\left(t,n;\tau,m|{{t}_{0}},{{n}_{0}}\right)$ over the state m at the intermediate time τ ($t\geqslant \tau \geqslant {{t}_{0}}$ ), resulting in

$$p\left(t,n|t_0,n_0\right)=\sum_{m}p\left(t,n;\tau,m|t_0,n_0\right).\tag{5}$$

Whenever the range of a sum is not specified, it shall cover the whole state space of its summation variable. The above equation holds for arbitrary stochastic processes. But for a Markov process, one can employ the relation between joint and conditional distributions to turn the equation into the Chapman–Kolmogorov equation

$$p\left(t,n|t_0,n_0\right)=\sum_{m}p\left(t,n|\tau,m\right)p\left(\tau,m|t_0,n_0\right).\tag{6}$$

Letting $p\left(t|{{t}_{0}}\right)$ denote the matrix with elements $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ , the Chapman–Kolmogorov equation can also be written as $p\left(t|{{t}_{0}}\right)=p(t|\tau )p\left(\tau |{{t}_{0}}\right)$ (semigroup property). Note that the matrix notation requires a mapping between the state space of n and n0 and an appropriate index set $I\subset \mathbb{N}$ . However, we also make use of this notation when the state space is countably infinite.
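
As a concrete illustration of the semigroup property, the sketch below checks $p\left(t|{{t}_{0}}\right)=p(t|\tau )p\left(\tau |{{t}_{0}}\right)$ numerically for a hypothetical two-state process with time-independent rates; the rate values are assumptions made for this example, and the sketch anticipates the matrix exponential solution (15) discussed in section 1.4.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the Chapman-Kolmogorov/semigroup property
# p(t|t0) = p(t|tau) p(tau|t0) for an assumed two-state process.
# Q[n, m] is the rate of jumps m -> n; its columns sum to zero.
Q = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])
t0, tau, t = 0.0, 0.4, 1.0
p_t_t0   = expm(Q * (t - t0))      # matrix with elements p(t, n | t0, n0)
p_t_tau  = expm(Q * (t - tau))
p_tau_t0 = expm(Q * (tau - t0))
assert np.allclose(p_t_t0, p_t_tau @ p_tau_t0)
```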

To derive the (forward) master equation from the Chapman–Kolmogorov equation (6), we define

$$Q_{\tau,\Delta t}(n,m):=\frac{p\left(\tau + \Delta t,n|\tau,m\right)-\delta_{n,m}}{\Delta t}\tag{7}$$

for all values of n and m and assume the existence and finiteness of the limits

$$Q_{\tau}(n,m):=\lim_{\Delta t\to 0}Q_{\tau,\Delta t}(n,m).\tag{8}$$

These are the elements of the transition (rate) matrix ${{Q}_{\tau}}$ , which is also called the infinitesimal generator of the Markov process or is simply referred to as the ${{Q}_{\tau}}$ -matrix. Its off-diagonal elements ${{w}_{\tau}}(n,m):={{Q}_{\tau}}(n,m)$ denote the rates at which probability flows from a state m to a state $n\ne m$ . The 'exit rates' ${{w}_{\tau}}(m):=-{{Q}_{\tau}}(m,m)$ , on the other hand, describe the rates at which probability leaves state m. Both ${{w}_{\tau}}(n,m)$ and ${{w}_{\tau}}(m)$ are non-negative for all n and m. All of the processes considered here shall conserve the total probability, requiring that ${\sum}_{n}{{Q}_{\tau}}(n,m)=0$ or, equivalently, ${{w}_{\tau}}(m)={\sum}_{n}{{w}_{\tau}}(n,m)$ (with ${{w}_{\tau}}(m,m):=0$ ). The finiteness of the exit rate ${{w}_{\tau}}(m)$ and the conservation of total probability imply that we consider a stable and conservative Markov process [117]. In the natural sciences, the master equation is commonly written in terms of ${{w}_{\tau}}$ , but most mathematicians prefer ${{Q}_{\tau}}$ . These matrices can be converted into one another by employing

$$Q_{\tau}(n,m)=w_{\tau}(n,m)-\delta_{n,m}\,w_{\tau}(m).\tag{9}$$

We refer to both of the matrices as transition (rate) matrices and to their (identical) off-diagonal elements as transition rates. The transition rates fully specify the stochastic process.
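
For concreteness, the following sketch assembles the transition matrix $Q_\tau$ from the rates $w_\tau$ via the conversion (9) for a linear birth–death process on a truncated state space; the particular rates and the truncation are assumptions made only for this illustration.

```python
import numpy as np

# Transition rates w[n, m] (jumps m -> n) for a birth-death process with
# birth rate gamma and per-capita death rate mu on the truncated state
# space {0, ..., N} (rates and truncation are illustrative assumptions).
N, gamma, mu = 50, 1.0, 0.1
w = np.zeros((N + 1, N + 1))
for m in range(N + 1):
    if m < N:
        w[m + 1, m] = gamma         # birth:  m -> m + 1
    if m > 0:
        w[m - 1, m] = mu * m        # death:  m -> m - 1

exit_rates = w.sum(axis=0)          # w(m) = sum_n w(n, m)
Q = w - np.diag(exit_rates)         # conversion (9)
assert np.allclose(Q.sum(axis=0), 0.0)   # total probability is conserved
```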

Assuming that the limit in (8) interchanges with a sum over the state m, the (forward) master equation now follows from the Chapman–Kolmogorov equation (6) as

$$\partial_{\tau}p\left(\tau,n|t_0,n_0\right)=\sum_{m}Q_{\tau}(n,m)\,p\left(\tau,m|t_0,n_0\right).\tag{10}$$

Thus, the master equation constitutes a set of coupled, linear, first-order ordinary differential equations (ODEs). The time evolution of the distribution starts out from $p\left({{t}_{0}},n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ . In matrix notation, the equation can be written as ${{\partial}_{\tau}}p\left(\tau |{{t}_{0}}\right)={{Q}_{\tau}}p\left(\tau |{{t}_{0}}\right)$ . In terms of ${{w}_{\tau}}$ , it assumes the intuitive gain-loss form

$$\partial_{\tau}p(\tau,n|\cdot)=\sum_{m}\left[w_{\tau}(n,m)\,p(\tau,m|\cdot)-w_{\tau}(m,n)\,p(\tau,n|\cdot)\right].\tag{11}$$

The dot inside the probability distribution's argument abbreviates the initial parameters t0 and n0, which are of secondary concern right here. That will change below in the derivation of the backward master equation. An omission of the parameters also makes it impossible to distinguish the conditional distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ from the single-time distribution $p(\tau,n)$. The single-time distribution obeys the master equation as well, as can be inferred directly from the relation $p(\tau,n)={\sum}_{{{n}_{0}}}p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)p\left({{t}_{0}},{{n}_{0}}\right)$ or by summing the above master equation over an initial distribution $p\left({{t}_{0}},{{n}_{0}}\right)$. In fact, the single-time distribution would even obey the master equation if the process were not Markovian, but without providing a complete characterization of the process [70, 74]. The master equation (10) or (11) is particularly interesting for transition rates causing an imbalance between forward and backward transitions along closed cycles of states, i.e. for rates violating Kolmogorov's criterion [118] for detailed balance [70]. Such systems are truly out of thermal equilibrium. If detailed balance is instead fulfilled, the system eventually converges to a stationary Boltzmann–Gibbs distribution with vanishing probability currents between states [70]. Whether or not detailed balance is actually fulfilled is, however, not relevant for the methods discussed in this review. Information on the existence and uniqueness of an asymptotic stationary distribution of the master equation can be found in [117].

The name 'master equation' was originally coined by Nordsieck, Lamb, and Uhlenbeck [5] in their study of the Furry model of cosmic rain showers [119]. Shortly before, Feller applied an equation of the same structure to the growth of populations [120] and Delbrück to well-mixed, auto-catalytic chemical reactions [121]. Delbrück's line of research was followed by several others [122–125], most notably by McQuarrie [126–128] (see also the books [98, 110, 129]). In these articles, several elementary chemical reactions are solved by methods that also appear later in this review. When the particles engaging in a reaction can also diffuse in space, their density may exhibit dynamics that are not expected from observations made in well-mixed environments. Hence, reaction–diffusion master equations have been the focus of intense research and have been analysed using path integrals (see, for example, [96, 130–133] and the references in section 1.5). Master equations, and simulation algorithms based on master equations, are now being used in numerous fields of research. They are being applied in the contexts of spin dynamics [134–137], gene regulatory networks [34, 138–143], the spreading of diseases [144–147], epidermal homeostasis [148], nucleosome repositioning [149], ecological [150–163] and bacterial dynamics [158, 164–167], evolutionary game theory [168–179], surface growth [180], and social and economic processes [181–184]. Queuing processes are also often modelled in terms of master equations, but in this context, the equations are typically referred to as Kolmogorov equations [185]. Moreover, master equations and the SSA have helped to understand the formation of traffic jams on highways [186, 187], the walks of molecular motors along cytoskeletal filaments [113–115, 188–190], and the condensation of bosons in driven-dissipative quantum systems [178, 191, 192]. The master equation that was found to describe the coarse-grained dynamics of these bosons coincides with the master equation of the (asymmetric) inclusion process [193–196]. Transport processes are commonly modelled in terms of the (totally) asymmetric simple exclusion process (ASEP or TASEP) [197–200]. The ASEP describes the biased hopping of particles along a one-dimensional lattice, with each lattice site providing space for at most one particle. The ASEP and the TASEP are regarded as paradigmatic models in the field of non-equilibrium statistical mechanics, with many exact mathematical results having been established [201–212]. Some of these results were established by applying the Bethe ansatz to the master equation of the ASEP [205, 209]. The review of Chou, Mallick, and Zia provides a comprehensive account of the ASEP and of its variants [190]. The master equation of the TASEP with Langmuir kinetics was recently used to understand the length regulation of microtubules [213].

Unlike deterministic models, the master equations describing the dynamics of the above systems take into account that discrete and finite populations are prone to 'demographic fluctuations'. The populations of the above systems consist of genes or proteins, infected persons, bacteria or cars and they are typically small, at least compared to the number of molecules in a mole of gas. For example, the copy number of low abundance proteins in Escherichia coli cytosol was found to be in the tens to hundreds [214]. Therefore, the presence or absence of a single protein is much more important than the presence or absence of an individual molecule in a mole of gas. A demographic fluctuation may even be fatal for a system, for example, when the copy number of an auto-catalytic reactant drops to zero. The master equation (10) provides a useful tool to describe such an effect.

Up to this point, we have only considered the forward master equation. But just as the (forward) Fokker–Planck equation (1) is complemented by the backward Fokker–Planck equation (2), the (forward) master equation (10) is complemented by a backward master equation. This equation can be derived from the Chapman–Kolmogorov equation (6) as

$$\partial_{-\tau}p\left(t,n|\tau,n_0\right)=\sum_{m}p\left(t,n|\tau,m\right)Q_{\tau}\left(m,n_0\right).\tag{12}$$

Here, the transition rate is obtained in the limit ${{\lim}_{ \Delta t\to 0}}{{Q}_{\tau - \Delta t, \Delta t}}\left(m,{{n}_{0}}\right)$ (see (7)). In matrix notation, the backward master equation reads ${{\partial}_{-\tau}}p(t|\tau )=p(t|\tau ){{Q}_{\tau}}$ . In terms of ${{w}_{\tau}}$ , it assumes the form (see (9))

$$\partial_{-\tau}p\left(\cdot|\tau,n_0\right)=\sum_{m}w_{\tau}\left(m,n_0\right)\left[p\left(\cdot|\tau,m\right)-p\left(\cdot|\tau,n_0\right)\right].\tag{13}$$

In this equation, the dots abbreviate the final parameters t and n. The backward master equation evolves the conditional probability distribution backward in time, starting out from the final condition $p\left(t,n|t,{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ . Just as the backward Fokker–Planck equation, the backward master equation proves useful for the computation of mean extinction and first passage times (see [110, 215] and section 3.3). Furthermore, it follows from the backward master equation (12) that the (conditional) average $\langle A\rangle \left(t|\tau,{{n}_{0}}\right):={\sum}_{n}A(n)p\left(t,n|\tau,{{n}_{0}}\right)$ of an observable A fulfils an equation of just the same form, namely

$$\partial_{-\tau}\langle A\rangle\left(t|\tau,n_0\right)=\sum_{m}w_{\tau}\left(m,n_0\right)\left[\langle A\rangle\left(t|\tau,m\right)-\langle A\rangle\left(t|\tau,n_0\right)\right].\tag{14}$$

The final condition of the equation is given by $\langle A\rangle \left(t|t,{{n}_{0}}\right)=A\left({{n}_{0}}\right)$ . The validity of equation (14) is the reason why we later employ a 'backward' path integral to represent the average $\langle A\rangle $ (see section 6).
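
The following sketch illustrates equation (14) for a hypothetical three-state process with time-independent rates (the matrix and the observable are assumptions made for this example): the conditional average $\langle A\rangle \left(t|\tau,{{n}_{0}}\right)$ is computed once with the matrix exponential and once by stepping (14) backward in time with an explicit Euler scheme.

```python
import numpy as np
from scipy.linalg import expm

# Backward evolution (14) of <A>(t | tau, n0) for an assumed three-state
# process; a[n0] collects the conditional averages for all initial states.
Q = np.array([[-1.0,  0.5,  0.0],
              [ 1.0, -1.5,  2.0],
              [ 0.0,  1.0, -2.0]])
A = np.array([0.0, 1.0, 2.0])           # observable A(n) = n
t, tau = 1.0, 0.25

# In matrix notation, (14) reads d<A>/d(-tau) = Q^T <A> with the final
# condition <A>(t | t, n0) = A(n0), so <A>(t | tau, .) = exp(Q^T (t-tau)) A.
a = expm(Q.T * (t - tau)) @ A

# The same result from explicit Euler steps backward in time:
a_num, nsteps = A.copy(), 5000
ds = (t - tau) / nsteps
for _ in range(nsteps):
    a_num = a_num + ds * (Q.T @ a_num)
assert np.allclose(a, a_num, atol=1e-2)
```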

1.4. Analytical and numerical methods for the solution of master equations

If the dynamics of a system are restricted to a finite number of states and if its transition rates are independent of time, both the forward master equation ${{\partial}_{\tau}}p\left(\tau |{{t}_{0}}\right)=Qp\left(\tau |{{t}_{0}}\right)$ and the backward master equation ${{\partial}_{-{{\tau}_{0\,}}}}p\left(t|{{\tau}_{0}}\right)=p\left(t|{{\tau}_{0}}\right)Q$ are solved by [216]

$$p\left(\tau|\tau_0\right)=\text{e}^{Q\left(\tau-\tau_0\right)}\,\mathbb{1}\tag{15}$$

(recall that $p\left(\tau |{{\tau}_{0}}\right)$ is the matrix with elements $p\left(\tau,n|{{\tau}_{0}},{{n}_{0}}\right)$ ). The Chapman–Kolmogorov equation (6) is also solved by the distribution. Although the matrix exponential inside this solution can in principle be evaluated in terms of the (convergent) Taylor expansion ${\sum}_{k=0}^{\infty}\frac{{{\left(\tau -{{\tau}_{0}}\right)}^{k}}}{k!}{{Q}^{k}}$ , the actual calculation of this series is typically infeasible for non-trivial processes, both analytically and numerically (a truncation of the Taylor series may induce severe round-off errors and serves as a lower bound on the performance of algorithms in [217]). Consequently, alternative numerical algorithms have been developed to evaluate the matrix exponential. Moler and Van Loan reviewed 'nineteen dubious ways' of computing the exponential in [217, 218]. Algorithms that can deal with very large state spaces are considered in [219, 220]. For time-dependent transition rates, the matrix exponential generalizes to a Magnus expansion [221, 222].
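
As a minimal illustration of the solution (15) and of the caveat about the Taylor series, the sketch below compares scipy's matrix exponential with a naively truncated series for an assumed two-state switching process; for this benign example the two agree, but for stiff or large transition matrices the naive series is prone to round-off errors [217].

```python
import numpy as np
from scipy.linalg import expm

# Solution (15), p(t | t0) = exp(Q (t - t0)), for an assumed two-state
# switching process, evaluated with scipy and with a truncated Taylor series.
Q = np.array([[-0.5,  1.5],
              [ 0.5, -1.5]])
dt = 2.0
p = expm(Q * dt)

taylor, term = np.eye(2), np.eye(2)
for k in range(1, 30):              # sum_k (Q dt)^k / k!
    term = term @ (Q * dt) / k
    taylor = taylor + term
assert np.allclose(p, taylor)       # agrees here; see [217] for the pitfalls
```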

In the previous paragraph, we restricted the dynamics to a finite state space to ensure the existence of the matrix exponential in (15). Provided that the supremum ${{\sup}_{m}}|Q(m,m)|$ of all the exit rates is finite (uniformly bounded Q-matrix), the validity of the above solution extends to state spaces comprising a countable number of states [223]. To see that, we define the left stochastic matrix $P:={\mathbb {1}}+{{\lambda}^{-1}}Q$ , with the parameter λ being larger than the above supremum. Writing ${{\text{e}}^{Q \Delta t}}={{\text{e}}^{-\lambda \Delta t}}{{\text{e}}^{\lambda \Delta tP}}$ with $ \Delta t:=\tau -{{\tau}_{0}}$ , the matrix exponential can be evaluated in terms of the convergent Taylor series

$$p\left(\tau|\tau_0\right)=\text{e}^{-\lambda \Delta t}\sum_{k=0}^{\infty}\frac{\left(\lambda \Delta t\right)^{k}}{k!}P^{k}\,\mathbb{1}.\tag{16}$$

Effectively, one has thereby decomposed the continuous-time Markov process with transition matrix Q into a discrete-time Markov chain with transition matrix P, subordinated to a continuous-time Poisson process with rate coefficient λ (the Poisson process acts as a 'clock' with sufficiently high ticking rate λ). Such a decomposition is called a uniformization or randomization and was first proposed by Jensen [224]. The series (16) can be evaluated via numerically stable algorithms and truncation errors can be bounded [225, 226]. Nevertheless, the uniformization method requires the computation of the powers of a matrix having as many rows and columns as the system has states. Consequently, a numerical implementation of the method is only feasible for sufficiently small state spaces. Further information on the method and on its improvements can be found in [224–229].
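
The sketch below implements the uniformization (16) and verifies it against the matrix exponential; the two-state Q-matrix and the choice of λ are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Uniformization (16): exp(Q dt) as a Poisson-weighted sum over powers of
# the left stochastic matrix P = 1 + Q/lambda (Q is an assumed example).
Q = np.array([[-2.0,  1.0],
              [ 2.0, -1.0]])
dt = 1.5
lam = 1.1 * np.max(np.abs(np.diag(Q)))   # lambda > sup_m |Q(m, m)|
P = np.eye(2) + Q / lam                  # non-negative, columns sum to one

p, Pk = np.zeros_like(Q), np.eye(2)
weight = np.exp(-lam * dt)               # Poisson weight for k = 0
for k in range(200):                     # truncated, numerically stable sum
    p += weight * Pk
    Pk = P @ Pk
    weight *= lam * dt / (k + 1)         # recursion for Poisson(k; lam dt)
assert np.allclose(p, expm(Q * dt))
```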

The mathematical study of the existence and uniqueness of solutions of the forward and backward master equations was pioneered by Feller and Doob in the 1940s [230, 231]. Feller derived an integral recurrence formula [117, 230], which essentially constitutes a single step of the 'path summation representation' that we derive further below. In the following, we assume that the forward and the backward master equations have the same unique solution and we restrict our attention to processes performing only a finite number of jumps during any finite time interval. These conditions, and the conservation of total probability, are, for example, violated by processes that 'explode' after a finite time. More information on such processes is provided in [117, 216].

In the following, we complement the above representations of the master equations' solution with a 'path summation representation'. This representation can be derived by examining the steps of the stochastic simulation algorithm (SSA) of Gillespie (its 'direct' version) [75, 76, 78] or by performing a Laplace transformation of the forward master equation. Here we follow the former, qualitative, approach. A formal derivation of the representation via the master equation's Laplace transform is provided in appendix B. Although the basic elements of the SSA had already been known before Gillespie's work [230–236], its popularity largely increased after Gillespie applied it to the study of chemical reactions. As the SSA is restricted to time-independent transition rates, so is the following derivation.

To derive the path summation representation, we prepare a system in state n0 at time t0 as illustrated in figure 2. Since the process is homogeneous in time, we may choose t0  =  0. The total rate of leaving state n0 is given by the exit rate $w\left({{n}_{0}}\right)={\sum}_{{{n}_{1}}}w\left({{n}_{1}},{{n}_{0}}\right)$. In order to determine how long the system actually stays in state n0, one may draw a random waiting time ${{\tau}_{0}}$ from the exponential distribution ${{\mathcal{W}}_{{{n}_{0}}}}\left({{\tau}_{0}}\right):=w\left({{n}_{0}}\right){{\text{e}}^{-w\left({{n}_{0}}\right){{\tau}_{0}}}} \Theta \left({{\tau}_{0}}\right)$. The Heaviside step function $\Theta$ prevents the sampling of negative waiting times and is here defined as $\Theta (\tau )=1$ for $\tau \geqslant 0$ and $\Theta (\tau )=0$ for $\tau <0$. Thus far, we only know that the system leaves n0 but not where it ends up. It could end up in any state n1 for which the transition rate $w\left({{n}_{1}},{{n}_{0}}\right)$ is positive. The probability that a particular state n1 is realized is given by $w\left({{n}_{1}},{{n}_{0}}\right)/w\left({{n}_{0}}\right)$. In a numerical implementation of the SSA, the state n1 is determined by drawing a second (uniformly-distributed) random number. Our goal is to derive an analytic representation of the probability $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ of finding the system in state n at time t. Thus, after taking J  −  1 further steps, the sample path ${{\mathcal{P}}_{J}}:=\left\{{{n}_{J}}\leftarrow \cdots \leftarrow {{n}_{1}}\leftarrow {{n}_{0}}\right\}$ should eventually visit state nJ   =  n at some time ${{t}_{J}}\leqslant t$. The total time ${{\tau}_{J-1}}+\cdots +{{\tau}_{0}}$ until the jump to state n occurs is distributed by the convolution of the individual waiting time distributions, i.e. by $\overset{J-1}{\underset{j=0}{\star}}\,{{\mathcal{W}}_{{{n}_{j}}}}$. For example, $\tau :={{\tau}_{1}}+{{\tau}_{0}}$ is distributed by

$$\left(\mathcal{W}_{n_1}\star\mathcal{W}_{n_0}\right)(\tau)=\int_{\mathbb{R}}\text{d}\tau_{0}\,\mathcal{W}_{n_1}\left(\tau-\tau_{0}\right)\mathcal{W}_{n_0}\left(\tau_{0}\right)\tag{17}$$

$$=\frac{w\left(n_0\right)w\left(n_1\right)}{w\left(n_1\right)-w\left(n_0\right)}\left(\text{e}^{-w\left(n_0\right)\tau}-\text{e}^{-w\left(n_1\right)\tau}\right)\Theta(\tau)\qquad\left(\text{for }w\left(n_1\right)\neq w\left(n_0\right)\right).\tag{18}$$
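
A minimal direct-method SSA sketch for the illustrative decay reaction $A\to \varnothing$ with rate coefficient μ, i.e. $w(n-1,n)=\mu n$, implementing the two steps described above (draw a waiting time, then an arrival state); for this reaction the arrival state is unique, so the second random number is not needed.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def ssa_path(n0, t0, t, mu=1.0):
    """Sample one path of the decay reaction A -> 0 and return n at time t."""
    n, s = n0, t0
    while n > 0:
        exit_rate = mu * n                     # w(n) = mu * n
        s += rng.exponential(1.0 / exit_rate)  # waiting time drawn from W_n
        if s > t:                              # the path survives in n past t
            break
        n -= 1    # unique arrival state here; in general, pick n' with
                  # probability w(n', n) / w(n) using a second random number
    return n

# Histogram approximation of p(t, n | t0, n0) from many sample paths.
samples = [ssa_path(n0=20, t0=0.0, t=1.0) for _ in range(20_000)]
hist = np.bincount(samples, minlength=21) / len(samples)
```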

Figure 2.

Figure 2. Illustration of the stochastic simulation algorithm (SSA) and of the path summation representation of the probability distribution $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ . In a numerical implementation of the SSA, the system is prepared in state n0 at time t0. After a waiting time ${{\tau}_{0}}$ that is drawn from the exponential distribution ${{\mathcal{W}}_{{{n}_{0}}}}\left({{\tau}_{0}}\right)=w\left({{n}_{0}}\right){{\text{e}}^{-w\left({{n}_{0}}\right){{\tau}_{0}}}} \Theta \left({{\tau}_{0}}\right)$ , the system transitions into a new state. An arrival state n1 is chosen with probability $w\left({{n}_{1}},{{n}_{0}}\right)/w\left({{n}_{0}}\right)$ . The procedure is repeated until after J steps, the current time ${{t}_{J}}\leqslant t$ plus an additional waiting time exceeds t. The sample path has thus resided in state nJ at time t. This information is recorded in a histogram approximation of $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ . The path summation representation of $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ requires nJ to coincide with n. The probability that the system has remained in state n over the last time interval [tJ , t] is given by the survival probability ${{\mathcal{S}}_{n}}\left(t-{{t}_{J}}\right)={{\text{e}}^{-w(n)\left(t-{{t}_{J}}\right)}} \Theta \left(t-{{t}_{J}}\right)$ . The total probability of the generated path, integrated over all possible waiting times, is represented by ${{p}_{\tau}}\left({{\mathcal{P}}_{J}}\right)$ in (20). A summation of this probability over all possible sample paths results in the path summation representation (19).


The probability that the system still resides in state nJ   =  n at time t is determined by the 'survival probability' ${{\mathcal{S}}_{n}}\left(t-{{t}_{J}}\right)={{\text{e}}^{-w(n)\left(t-{{t}_{J}}\right)}} \Theta \left(t-{{t}_{J}}\right)$ . After putting all of these pieces together, we arrive at the following path summation representation of the conditional probability distribution:

$$p\left(t,n|t_0,n_0\right)=\sum_{J=0}^{\infty}\sum_{\left\{\mathcal{P}_J\right\}}p_{\tau}\left(\mathcal{P}_J\right)\tag{19}$$

$$p_{\tau}\left(\mathcal{P}_J\right)=\left[\mathcal{S}_{n}\star\overset{J-1}{\underset{j=0}{\star}}\mathcal{W}_{n_j}\right](\tau)\,\prod_{j=0}^{J-1}\frac{w\left(n_{j+1},n_j\right)}{w\left(n_j\right)},\qquad\tau:=t-t_0.\tag{20}$$

Here, ${\sum}_{\left\{{{\mathcal{P}}_{J}}\right\}}:={\sum}_{{{n}_{1}}}\cdots {\sum}_{{{n}_{J-1}}}$ generates every path with J jumps between n0 and nJ   =  n. The probability of such a path, integrated over all possible waiting times, is represented by ${{p}_{\tau}}\left({{\mathcal{P}}_{J}}\right)$ . By an appropriate choice of integration variables, the probability ${{p}_{\tau}}\left({{\mathcal{P}}_{J}}\right)$ can also be written as

$$p_{\tau}\left(\mathcal{P}_J\right)=\int_{t_0}^{t}\text{d}t_{J}\int_{t_0}^{t_J}\text{d}t_{J-1}\cdots\int_{t_0}^{t_2}\text{d}t_{1}\,\mathcal{S}_{n}\left(t-t_{J}\right)\prod_{j=1}^{J}w\left(n_j,n_{j-1}\right)\text{e}^{-w\left(n_{j-1}\right)\left(t_j-t_{j-1}\right)}.\tag{21}$$

The survival probability ${{\mathcal{S}}_{n}}$ is included in these integrations. Without the integrations, the expression would represent the probability of a path with J jumps and fixed waiting times. That probability is, for example, used in the master equation formulation of stochastic thermodynamics in associating an entropy to individual paths [237]. In appendix B, we formally derive the path summation representation (19) from the Laplace transform of the forward master equation (11).
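
To illustrate equations (19)–(21) concretely, the sketch below evaluates the path summation numerically for a Poisson (pure birth) process with rate coefficient γ, an assumption made for this example because only a single path then connects $n_0=0$ to each n; the result should reproduce the Poisson distribution $(\gamma t)^{n}\text{e}^{-\gamma t}/n!$.

```python
import math
import numpy as np

# Path summation (19)-(20) for a pure birth process with rate gamma: the
# arrival-time density at state n is the n-fold convolution of identical
# exponential waiting-time densities, and convolving with the survival
# probability gives p(t, n | 0, 0).
gamma, t, dt = 2.0, 1.0, 1e-4
grid = np.arange(0.0, t + dt, dt)
W = gamma * np.exp(-gamma * grid)      # waiting-time density  W(tau)
S = np.exp(-gamma * grid)              # survival probability  S(tau)

density = np.zeros_like(grid)
density[0] = 1.0 / dt                  # delta distribution: no jump yet (J = 0)
for n in range(4):
    p_n = np.convolve(density, S)[len(grid) - 1] * dt   # equation (20) at time t
    exact = (gamma * t) ** n * math.exp(-gamma * t) / math.factorial(n)
    assert abs(p_n - exact) < 1e-3
    density = np.convolve(density, W)[: len(grid)] * dt  # one more jump
```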

The path summation representation (19) forms the basis not only of the SSA but also of some alternative algorithms [238–243]. These algorithms either infer the path probability ${{p}_{\tau}}\left({{\mathcal{P}}_{J}}\right)$ numerically from its Laplace transform or evaluate the convolutions in (20) analytically. The analytic expressions that arise are, however, rather cumbersome generalizations of the convolution in (18) [244, 245]. They simplify only for the most basic processes (e.g. for a random walk or for the Poisson process). Moreover, care has to be taken when the analytic expressions are evaluated numerically because they involve pairwise differences of the exit rate w(n) (see (18)). When these exit rates differ only slightly along a path, a substantial loss of numerical significance may occur due to finite precision arithmetic. Future studies could explore how the convolutions of exponential distributions in (20) can be approximated efficiently (for example, in terms of a Gamma distribution or by analytically determining the Laplace transform of (20), followed by a saddle-point approximation [246] of the corresponding inverse Laplace transformation). In general, both the SSA and its competitors suffer from the enormous number of states of non-trivial systems, as well as from the even larger number of paths connecting different states. In [240], these paths were generated using a deterministic depth-first search algorithm, combined with a filter to select the paths that arrive at the right place at the right time. In [242], a single path was first generated using the SSA and then gradually changed into new paths through a Metropolis Monte Carlo scheme. Thus far, the two methods have only been applied to relatively simple systems and their prevalence is low compared to the prevalence of the SSA. Further research is needed to explore how relevant paths can be sampled more efficiently.

The true power of the SSA lies in its generation of sample paths with the correct probability of occurrence. Thus, just a few sample paths generated with the SSA are often sufficient to infer the 'typical' dynamics of a process. A look at individual paths may, for example, reveal that the dynamics of a system are dominated by some spatial pattern, e.g. by spirals [168]. Efficient variations of the above 'direct' version of the SSA are, for example, described in [78, 247–249]. Algorithms for the fast simulation of biochemical networks or processes with spatial degrees of freedom are implemented in the simulation packages [250–257].

The evaluation of the average $\langle A\rangle ={\sum}_{n}A(n)p(t,n|\cdot )$ of an observable A typically requires the computation of a larger number of sample paths. However, since the occurrence probability of sample paths generated with the SSA is statistically correct, such an average typically converges comparatively fast. Furthermore, each path can be sampled independently of every other path. Therefore, the computation of paths can be distributed to individual processing units, saving real time, albeit no computation time. A distributed computation of the sample paths is most often required, but possibly not even sufficient, if one wishes to compute the full probability distribution $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$. Vastly more sample paths are required for this purpose, especially if 'rare event probabilities' in the distribution's tails are sought. In particular, if the probability of finding a system in state n at time t is only $10^{-10}$, an average of $10^{10}$ sample paths are needed to observe that event just once. Moreover, the probability of observing any particular state decreases with the size of a system's state space. Thus, the sampling of full distributions becomes less and less feasible as systems become larger. Various other challenges remain open as well; for example, the efficient simulation of processes evolving on multiple time scales. These processes are typically simulated using approximate techniques such as τ-leaping [78, 249, 258–267]. Another challenge is posed by the evaluation of processes with time-dependent transition rates [268–270].

For completeness, let us mention yet another numerical approach to the (forward) master equation. Since the master equation (10) constitutes a set of coupled linear first-order ODEs, it can of course be treated as such and be integrated numerically. The integration is, however, only feasible if the state space is sufficiently small (or appropriately truncated) and if all transitions occur on comparable time scales (otherwise, the master equation is quite probably stiff [271]). Nevertheless, a numerical integration of the master equation may be preferable over the use of the SSA if the full probability distribution is to be computed.
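
As a minimal sketch of this approach, the following integrates the forward master equation (10) on a truncated state space with a standard ODE solver; the decay reaction $A\to \varnothing$ and its rate coefficient are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward master equation (10) as a coupled set of linear ODEs for the
# decay reaction A -> 0 with rate mu (illustrative assumption), truncated
# to the state space {0, ..., N}.
N, mu, n0 = 30, 1.0, 20
Q = np.zeros((N + 1, N + 1))
for n in range(1, N + 1):
    Q[n - 1, n] = mu * n            # gain of state n - 1 from state n
    Q[n, n] = -mu * n               # loss from state n

p0 = np.zeros(N + 1)
p0[n0] = 1.0                        # p(t0, n | t0, n0) = delta_{n, n0}
sol = solve_ivp(lambda s, p: Q @ p, t_span=(0.0, 1.0), y0=p0,
                method="LSODA", rtol=1e-8, atol=1e-10)
p_t = sol.y[:, -1]                  # approximates p(t, n | t0, n0) at t = 1
assert np.isclose(p_t.sum(), 1.0)   # sanity check: probability conservation
```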

Neither the matrix exponential representation $p\left(t|{{t}_{0}}\right)={{\text{e}}^{Q\left(t-{{t}_{0}}\right)}}{\mathbb {1}}$ of the conditional probability distribution, nor its path summation representation (19) is universally applicable. Moreover, even if the requirements of these solutions are met, the size of the state space or the complexity of the transition matrix may make it infeasible to evaluate them. In the next sections, we formulate conditions under which the conditional probability distribution can be represented in terms of the 'forward' path integral

$$p\left(t,n|t_0,n_0\right)=\left\langle n\right|\overset{t}{\underset{t_0}{\iint}}\text{e}^{-\mathcal{S}^{+}}\left|n_0\right\rangle\tag{22}$$

and in terms of the 'backward' path integral

$$p\left(t,n|t_0,n_0\right)=\overset{t}{\underset{t_0}{\iint}}\left\langle n\right|\text{e}^{-\mathcal{S}^{-}}\left|n_0\right\rangle.\tag{23}$$

The meaning of the integral signs and of the bras $\langle n|$ and kets $|n\rangle $ will become clear over the course of this review. Let us only note that the integrals do not proceed along paths of the discrete variable n, but over the paths of two continuous auxiliary variables that are introduced for this purpose. The relevance of each path is weighed by the exponential factors inside the integrals.

Besides these exact representations of the conditional probability distribution solving the master equations, there exist powerful ways of approximating this distribution and the values of averaged observables. These methods include the Kramers–Moyal [103, 272] and the system-size expansion [98, 273], as well as the derivation of moment equations. Information on these methods can be found in classic textbooks [98, 110] and in a recent review [274]. Although moment equations encode the complete information about a stochastic process, they typically constitute an infinite hierarchy whose evaluation requires a truncation by some closure scheme [275–283]. On the other hand, the Kramers–Moyal and the system-size expansion approximate the master equation in terms of a Fokker–Planck equation. Both expansions work best if the system under consideration is 'large' (more precisely, they work best if the dynamics are centred around a stable or meta-stable state at a distance $N\gg 1$ from a potentially absorbing state; the standard deviation of its surrounding distribution is then of order $\sqrt{N}$ ). An extension of the system-size expansion to absorbing boundaries has recently been proposed in [284]. In sections 4.4 and 5.3, we show how Kramers–Moyal expansions of the backward and forward master equations can be used to derive path integral representations of processes with continuous state spaces. When the expansion of the backward master equation stops (or is truncated) at the level of a diffusion approximation, one recovers a classic path integral representation of the (backward) Fokker–Planck equation [16–19].

1.5. History of stochastic path integrals

The oldest path integral, both in the theory of stochastic processes and beyond [107, 285–287], is presumably Wiener's integral for Brownian motion [80, 81]. The path integrals we consider here were devised somewhat later, namely in the 1970s and 80s: first for the Fokker–Planck (or Langevin) equation [16–19] and soon after for the master equation. For the master equation, the theoretical basis of these 'stochastic' path integrals was developed by Doi [6, 7]. He first expressed the creation and annihilation of molecules in a chemical reaction by the corresponding operators for the quantum harmonic oscillator [288] (modulo a normalization factor), introducing the concept of the Fock space [20] into non-equilibrium theory. Furthermore, he employed coherent states [22–24] to express averaged observables. Formalisms similar to that of Doi were independently developed by Zel'dovich and Ovchinnikov [8], as well as by Grassberger and Scheunert [10]. Introductions to the Fock space approach, which are in part complementary to our review, are, for example, provided in [70, 91, 289–291]. The review of Mattis and Glasser [289] provides a chronological list of contributions to the field. These contributions include Rose's renormalized kinetic theory [9], Mikhailov's path integral [11, 12, 14], which is based on Zel'dovich's and Ovchinnikov's work, and Goldenfeld's extension of the Fock space algebra to polymer crystal growth [13]. Furthermore, Peliti reviewed the Fock space approach and provided derivations of path integral representations of averaged observables and of the probability generating function [15]. Peliti also expressed the hope that 'rediscoveries' of the path integral formalism would become unnecessary in the future. However, we believe that the probabilistic structures behind path integral representations of stochastic processes have not yet been clearly exposed. As illustrated in figure 1, we show that the forward and backward master equations admit not only one but two path integral representations: the forward representation (22) and the novel backward representation (23). Although the two path integrals resemble each other, they differ conceptually. While the forward path integral representation provides a probability generating function in an intermediate step, the backward representation provides a distribution that is marginalized over an initial state. Both path integrals can be used to represent averaged observables as shown in section 6. The backward path integral, however, will turn out to be more convenient for this purpose (upon choosing a Poisson distribution as the basis function, i.e. $|n\rangle :=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ , the representation is obtained by summing (23) over an observable A(n)). Let us note that even though we adopt some of the notation of quantum theory, our review is guided by the notion that quantum (field) theory is 'totally unnecessary' for the theory of stochastic path integrals. Michael E Fisher once stated the same about the relevance of quantum field theory for renormalization group theory [292] (while acknowledging that familiarity with quantum field theory can be very useful for certain types of calculations; the same applies to the theory of stochastic path integrals).

Thus far, path integral representations of the master equation have primarily been applied to processes whose transition rates can be decomposed additively into the rates of simple chemical reactions. 'Simple' means that the transition rate of such a reaction is determined by combinatorial counting. Consider, for example, a reaction of the form $k\,A\to l\,A$ in which $k\in {{\mathbb{N}}_{0}}$ molecules of type A are replaced by $l\in {{\mathbb{N}}_{0}}$ molecules of the same type. Assuming that the k reactants are drawn from an urn with a total number of m molecules, the total rate of the reaction should be proportional to the falling factorial ${{(m)}_{k}}:=m(m-1)\cdots (m-k+1)$ . The global time scale of the reaction is set by the rate coefficient ${{\gamma}_{\tau}}$ , which we allow to depend on time. Thus, the rate at which the chemical reaction $k\,A\to l\,A$ induces a transition from state m to state n is ${{w}_{\tau}}(n,m)={{\gamma}_{\tau}}{{(m)}_{k}}{{\delta}_{n,m-k+l}}$ . The Kronecker delta inside the transition rate ensures that k molecules are indeed replaced by l new ones. Note that the number of particles in the system can never become negative, provided that the initial number of particles was non-negative. Hence, the state space of n is ${{\mathbb{N}}_{0}}$ . Insertion of the above transition rate into the forward master equation (11) results in the 'chemical' master equation

Equation (24): $\partial_{\tau}p(\tau,n|\centerdot )=\gamma_{\tau}\left[(n+k-l)_{k}\,p(\tau,n+k-l|\centerdot )-(n)_{k}\,p(\tau,n|\centerdot )\right]$

Microphysical arguments for its applicability to chemical reactions can be found in [77]. According to the chemical master equation, the mean particle number $\langle n\rangle ={\sum}_{n}n\,p(\tau,n|\centerdot )$ obeys the equation ${{\partial}_{\tau}}\langle n\rangle ={{\gamma}_{\tau}}(l-k)\langle {{(n)}_{k}}\rangle $ . For a system with a large number of particles ($n\gg k$ ), fluctuations can often be neglected in a first approximation, leading to the deterministic rate equation

Equation (25): $\partial_{\tau}\bar{n}=\gamma_{\tau}(l-k)\,\bar{n}^{k}$

obeyed by a continuous particle number $\bar{n}(\tau )\in \mathbb{R}$ .
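To make the step from the chemical master equation (24) to these two equations explicit, one may multiply (24) by n, sum over n, and shift the summation index of the gain term via $m:=n+k-l$ (a short intermediate calculation of ours):

$$\partial_{\tau}\langle n\rangle =\gamma_{\tau}\sum_{m}\left[(m-k+l)-m\right](m)_{k}\,p(\tau,m|\centerdot )=\gamma_{\tau}(l-k)\,\langle (n)_{k}\rangle .$$

For $n\gg k$, the factorial moment approximately factorizes as $\langle (n)_{k}\rangle \approx \langle n\rangle^{k}\approx \bar{n}^{k}$, which turns this equation into the rate equation (25).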

Path integral representations of the chemical master equation (24) are sometimes said to be 'bosonic'. First, because an arbitrarily large number of molecules may in principle be present in the system (both globally and, upon extending the system to a spatial domain, locally). Second, because the path integral representations are typically derived with the help of 'creation and annihilation operators' fulfilling a 'bosonic' commutation relation (see section 2.2.2). 'Fermionic' path integrals, on the other hand, have been developed for systems in which the particles exclude one another. Thus, the number of particles in these systems is locally restricted to the values 0 and 1. For systems with excluding particles, the master equation's solution may be represented in terms of a path integral whose underlying creation and annihilation operators fulfil an anti-commutation relation [26–31]. However, van Wijland recently showed that the use of operators fulfilling the bosonic commutation relation is also possible [90]. These approaches are considered in section 2.2.4.

We do not intend to delve further into the historic development and applications of stochastic path integrals at this point. Doing so would require a proper introduction to renormalization group theory, which is of pivotal importance for the evaluation of the path integrals. Readers can find information on the application of renormalization group techniques in the review of Täuber, Howard, and Vollmayr-Lee [69] and in the book of Täuber [70]. Introductory texts are also provided by Cardy's (lecture) notes [91, 293]. Roughly speaking, path integral representations of the chemical master equation (24) have been used to assess how a macroscopic law of mass action changes due to fluctuations, both below [96, 130, 132, 294–305] and above the (upper) critical dimension [295, 306, 307], using either perturbative [14, 70, 130–133, 294–301, 303–305, 308–316] or non-perturbative [306, 307, 317–319] techniques. All of these articles focus on stochastic processes with spatial degrees of freedom for which alternative analytical approaches are scarce. Path integral representations of these processes, combined with renormalization group techniques, have been pivotal in understanding non-equilibrium phase transitions and they contributed significantly to the classification of these transitions in terms of universality classes [67, 70, 320]. Moreover, path integral representations of master equations have recently been employed in such diverse contexts as the study of neural networks [321–323], of ecological populations [156, 157, 162, 163, 324, 325], and of the differentiation of stem cells [326].

1.6. Résumé

Continuous-time Markov processes with discontinuous sample paths describe a broad range of phenomena, ranging from traffic jams on highways [186] and on cytoskeletal filaments [113–115, 190] to novel forms of condensation in bosonic systems [178, 191, 192]. In the introduction, we laid out the mathematical theory of these processes and derived the fundamental equations governing their evolution: the forward and the backward master equations. Whereas the forward master equation (10) evolves a conditional probability distribution forward in time, the backward master equation (12) evolves the distribution backward in time. In the following main part of this review, we represent the conditional probability distribution solving the master equations in terms of path integrals. The framework upon which these path integrals are based unifies a broad range of approaches to the master equations, including, for example, the spectral method of Walczak, Mugler, and Wiggins [34] and the Poisson representation of Gardiner and Chaturvedi [32, 33].

2. The probability generating function

The following two sections 2 and 3 are devoted to mapping the forward and backward master equations (10) and (12) to linear partial differential equations (PDEs). For brevity, we refer to such linear PDEs as 'flow equations'. In sections 4 and 5, the derived flow equations are solved in terms of path integrals.

It has been known since at least the 1940s that the (forward) master equation can be cast into a flow equation obeyed by the ordinary probability generating function

Equation (26): $g\left(\tau ;q|t_{0},n_{0}\right):=\sum_{n}q^{n}\,p\left(\tau,n|t_{0},n_{0}\right)$

at least when the corresponding transition rate describes a simple chemical reaction [122–128, 327]. The generating function effectively replaces the discrete variable n by the continuous variable q. The absolute convergence of the sum in (26) is ensured (at least) for $|q|\leqslant 1$ . The generating function 'generates' probabilities in the sense that

Equation (27): $p\left(\tau,n|t_{0},n_{0}\right)=\frac{1}{n!}\,\partial _{q}^{n}\,g\left(\tau ;q|t_{0},n_{0}\right)\Big|_{q=0}$

This inverse transformation from g to p involves the application of the (real) linear functional ${{L}_{n}}[\,f]:=\frac{1}{n!}\partial _{q}^{n}\,f(q){{|}_{q=0}}$ , which maps a (real-valued) function f to a (real) number. Moreover, it fulfils ${{L}_{n}}[\,f+\alpha g]={{L}_{n}}[\,f]+\alpha {{L}_{n}}[g]$ for two functions f and g, and $\alpha \in \mathbb{R}$ . A more convenient notation for linear functionals is introduced shortly. In the following, we generalize the probability generating function (26) and formulate conditions under which the generalized function obeys a linear PDE, i.e. a flow equation. But before proceeding to the general case, let us exemplify the use of a generalized probability generating function for a specific process (for brevity, we often drop the terms 'probability' and 'generalized' in referring to this function).

As an example, we consider the bi-directional reaction $\varnothing \rightleftharpoons A$ in which molecules of type A form at rate $\gamma \geqslant 0$ and degrade at per capita rate $\mu >0$ . According to the chemical master equation (24), the probability of observing $n\in {{\mathbb{N}}_{0}}$ such molecules obeys the equation

Equation (28): $\partial_{\tau}p(\tau,n|\centerdot )=\gamma \left[\,p(\tau,n-1|\centerdot )-p(\tau,n|\centerdot )\right]+\mu \left[(n+1)\,p(\tau,n+1|\centerdot )-n\,p(\tau,n|\centerdot )\right]$

with initial condition $p\left({{t}_{0}},n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ . This master equation respects the fact that the number of molecules cannot become negative through the reaction $A\to \varnothing $ . By differentiating the probability generating function $g\left(\tau ;q|{{t}_{0}},{{n}_{0}}\right)$ in (26) with respect to the current time τ, one finds that it obeys the flow equation

Equation (29): $\partial_{\tau}g=(q-1)\left(\gamma -\mu \,\partial_{q}\right)g$

Its time evolution starts out from $g\left({{t}_{0}};q|{{t}_{0}},{{n}_{0}}\right)={{q}^{{{n}_{0}}}}$ . Instead of solving the flow equation right away, let us first simplify it by changing the basis function qn of the generating function (26). As a first step, we change it to (q  +  1)n , turning the corresponding flow equation into ${{\partial}_{\tau}}g=q\left(\gamma -\mu {{\partial}_{q}}\right)g$ . As a second step, we multiply the new basis function by ${{\text{e}}^{-\frac{\gamma}{\mu}(q+1)}}$ and arrive at the simplified flow equation

Equation (30): $\partial_{\tau}g=-\mu \,q\,\partial_{q}g$

The generating function obeying this equation reads

Equation (31): $g\left(\tau ;q|\centerdot \right)=\sum_{n}(q+1)^{n}\,\text{e}^{-\frac{\gamma}{\mu}(q+1)}\,p(\tau,n|\centerdot )$

As before, the dots inside the functions' arguments abbreviate the initial parameters t0 and n0.
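The second of the above steps can be checked by direct substitution (an intermediate calculation of ours): writing $g=\text{e}^{-\frac{\gamma}{\mu}(q+1)}h$, where h denotes the generating function with basis function $(q+1)^{n}$ and obeys $\partial_{\tau}h=q\left(\gamma -\mu \partial_{q}\right)h$, one has $\partial_{q}h=\text{e}^{\frac{\gamma}{\mu}(q+1)}\left(\partial_{q}+\frac{\gamma}{\mu}\right)g$ and therefore

$$\partial_{\tau}g=\text{e}^{-\frac{\gamma}{\mu}(q+1)}\,q\left(\gamma -\mu \partial_{q}\right)h=q\left(\gamma -\gamma -\mu \partial_{q}\right)g=-\mu \,q\,\partial_{q}g,$$

which is the simplified flow equation (30).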

The simplified flow equation (30) is now readily solved by separation of variables. But before doing so, let us introduce some new notation. From now on, we write the basis function as

Equation (32): $|n\rangle_{q}:=(q+1)^{n}\,\text{e}^{-\frac{\gamma}{\mu}(q+1)}$

and the corresponding generalized probability generating function (31) as

Equation (33): $|g\left(\tau |t_{0},n_{0}\right)\rangle_{q}:=\sum_{n}|n\rangle_{q}\,p\left(\tau,n|t_{0},n_{0}\right)$

In quantum mechanics, an object written as $|\centerdot \rangle $ is called a 'ket', a notation that was originally introduced by Dirac [21]. In the above two expressions, the kets simply represent ordinary functions. For brevity, we write the arguments of the kets as subscripts and occasionally drop these subscripts altogether. In principle, the basis function could also depend on time (i.e. $|n{{\rangle}_{\tau,q}}$ ).

Later, in section 2.2, we introduce various basis functions for the study of different stochastic processes, including the Fourier basis function $|n{{\rangle}_{q}}:={{\text{e}}^{\text{i}nq}}$ for the solution of a random walk (with $n\in \mathbb{Z}$ ). Moreover, we consider a 'linear algebra' approach in which $|n\rangle :={{\widehat{\boldsymbol{e}}}_{n}}$ represents the unit column vector in direction $n\in {{\mathbb{N}}_{0}}$ (this vector equals one at position n  +  1 and is zero everywhere else; see section 2.2.3). The generating function (33) corresponding to this 'unit vector basis' coincides with the column vector $\boldsymbol{p}\left(\tau |{{t}_{0}},{{n}_{0}}\right)$ of the probabilities $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ . The unit vector basis will prove useful later on in recovering a path integral representation of averaged observables in section 6.2.

In our present example and in most of this review, however, the generating function (33) represents an ordinary function and obeys a linear PDE. For the basis function (32), this PDE reads

Equation (34): $\partial_{\tau}|g\rangle =-\mu \,q\,\partial_{q}|g\rangle$

Its time evolution starts out from $|g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)\rangle =\,|{{n}_{0}}\rangle $ .

In order to recover the conditional probability distribution from the generating function (33), we now complement the 'kets' with 'bras'. Such a bra is written as $\langle \centerdot |$ and represents a linear functional in our present example. In particular, we define a bra $\langle m|$ for every $m\in {{\mathbb{N}}_{0}}$ by its following action on a test function f:

Equation (35): $\langle m|\,f:=\frac{1}{m!}\,\partial _{q}^{m}\left[\text{e}^{\frac{\gamma}{\mu}(q+1)}\,f(q)\right]_{q=-1}$

The evaluation at q  =  −1 could also be written in integral form as ${{{\int}^{}}_{\mathbb{R}}}\text{d}q\,\delta (q+1)(\cdots )$ . The functional $\langle m|$ is obviously linear and it maps the basis function (32) to

Equation (36): $\langle m|n\rangle =\delta_{m,n}$

Thus, the 'basis functionals' in ${\left\{\langle m|\right\}}_{m\in {{\mathbb{N}}_{0}}}$ are orthogonal to the basis functions in $\{|n\rangle {{\}}_{n\in {{\mathbb{N}}_{0}}}}$ . The orthogonality condition can be used to recover the conditional probability distribution from (33) via

Equation (37): $p\left(\tau,n|t_{0},n_{0}\right)=\langle n|g\left(\tau |t_{0},n_{0}\right)\rangle$

Besides being orthogonal to one another, the kets (32) and bras (35) fulfil the completeness relation ${\sum}_{n}|n{{\rangle}_{q}}\langle n|\,f=f(q)$ with respect to analytic functions (note that $\langle n|\,f$ as defined in (35) is just a real number and does not depend on q). Above, we mentioned that we later introduce alternative basis kets, including the Fourier basis function $|n{{\rangle}_{q}}={{\text{e}}^{\text{i}nq}}$ for the solution of a random walk (with $n\in \mathbb{Z}$ ), and the unit column vector $|n\rangle ={{\widehat{\boldsymbol{e}}}_{n}}$ with $n\in {{\mathbb{N}}_{0}}$ . These kets can also be complemented to obtain orthogonal and complete bases, namely by complementing the Fourier basis function with the basis functional $\langle m|\,f:={\int}_{-\pi}^{\pi}\frac{\text{d}q}{2\pi}{{\text{e}}^{-\text{i}mq}}f(q)$ ($m\in \mathbb{Z}$ ), and by complementing the unit column vector with the unit row vector $\langle m|:=\widehat{\boldsymbol{e}}_{m}^{\intercal}$ ($m\in {{\mathbb{N}}_{0}}$ ). For the unit vector basis $\{|n\rangle,{{\langle m|\}}_{n,m\in {{\mathbb{N}}_{0}}}}$ , the completeness condition ${\sum}_{n}|n\rangle \langle n|={\mathbb {1}}$ involves the (infinitely large) unit matrix ${\mathbb {1}}$ .

Thanks to the new basis function (32), the simplified flow equation (34) can be easily solved by separation of variables. Making the ansatz $|g\rangle =f(\tau )h(q)$ , one obtains an equation whose two sides depend either on f or on h but not on both. The equation is solved by $f(\tau )={{\text{e}}^{-k\mu \left(\tau -{{t}_{0}}\right)}}$ and h(q)  =  qk , with k being a non-negative parameter. The non-negativity of k ensures the finiteness of the initial condition $|g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right){{\rangle}_{q}}=\,|{{n}_{0}}{{\rangle}_{q}}$ in the limit $q\to 0$ . By the completeness of the polynomial basis, the values of k can be restricted to ${{\mathbb{N}}_{0}}$ . It proves convenient to represent also the standard polynomial basis in terms of bras and kets, namely by defining $\langle \langle k|\,f:=\frac{1}{k!}\partial _{q}^{k}f(q){{|}_{q=0}}$ and $|k\rangle {{\rangle}_{q}}:={{q}^{k}}$ . These bras and kets are again orthogonal to one another in the sense of $\langle \langle k|l\rangle \rangle ={{\delta}_{k,l}}$ and they also fulfil a completeness relation (${\sum}_{k}|k\rangle \rangle \langle \langle k|$ represents a Taylor expansion around q  =  0 and thus acts as an identity on analytic functions). Using the auxiliary bras and kets, the solution of the flow equation (34) can be written as

Equation (38): $|g\left(\tau |t_{0},n_{0}\right)\rangle =\sum_{k\in \mathbb{N}_{0}}|k\rangle \rangle \,\text{e}^{-k\mu \left(\tau -t_{0}\right)}\,\langle \langle k|n_{0}\rangle$

We wrote the expansion coefficient in this solution as $\langle \langle k|{{n}_{0}}\rangle $ to respect the initial condition $|g\left({{t}_{0}}|\centerdot \right)\rangle =\,|{{n}_{0}}\rangle $ .

The conditional probability distribution can be recovered from the generating function (38) via the inverse transformation (37) as

Equation (39): $p\left(\tau,n|t_{0},n_{0}\right)=\sum_{k\in \mathbb{N}_{0}}\langle n|k\rangle \rangle \,\text{e}^{-k\mu \left(\tau -t_{0}\right)}\,\langle \langle k|n_{0}\rangle$

The coefficients $\langle n|k\rangle \rangle $ and $\langle \langle k|{{n}_{0}}\rangle $ can be computed recursively as explained in [328]. Here we are interested in the asymptotic limit $\tau \to \infty $ of the distribution (39) for which only the k  =  0 'mode' survives. Therefore, the distribution converges to the stationary Poisson distribution

Equation (40): $p(\tau,n|\centerdot )\xrightarrow{\tau \to \infty}\frac{(\gamma /\mu )^{n}}{n!}\,\text{e}^{-\gamma /\mu}$

The above example illustrates how the master equation can be transformed into a linear PDE obeyed by a generalized probability generating function and how this PDE simplifies for the right basis function. The explicit choice of the basis function depends on the problem at hand. Moreover, the above example introduced the bra-ket notation used in this review. In section 4.2, the reaction $\varnothing \rightleftharpoons A$ will be reconsidered using a path integral representation of the probability distribution. We will then see that this process is not only solved by a Poisson distribution in the stationary limit, but actually for all times (at least, if the number of molecules in the system was initially Poisson distributed).

In the remainder of this section, as well as in section 3, we generalize the above approach and derive flow equations for the following four series expansions (with dynamic time variable $\tau \in \left[{{t}_{0}},t\right]$ ):

Equation (41): $|g\left(\tau |t_{0},n_{0}\right)\rangle :=\sum_{n}|n\rangle \,p\left(\tau,n|t_{0},n_{0}\right)$

Equation (42): $|p(t,n|\tau )\rangle :=\sum_{m}|m\rangle \,p(t,n|\tau,m)$

Equation (43): $\langle p\left(\tau |t_{0},n_{0}\right)|:=\sum_{n}p\left(\tau,n|t_{0},n_{0}\right)\,\langle n|$

Equation (44): $\langle p(t,n|\tau )|:=\sum_{m}p(t,n|\tau,m)\,\langle m|$

Apparently, the series (41) coincides with the generalized probability generating function (33). In the next section 2.1, we formulate general conditions under which this function obeys a linear PDE. The remaining series (42)–(44) may not be as familiar. We call the series (42) a 'marginalized distribution'. It will be shown in section 3 that this series not only solves the (forward) master equation, but also obeys a backward-time PDE under certain conditions. The marginalized distribution proves useful in the computation of mean extinction times as we demonstrate in section 3.3. In section 3.4, we consider the 'probability generating functional' (43). For a 'Poisson basis function', the inverse transformation, which maps this functional to the conditional probability distribution, coincides with the Poisson representation of Gardiner and Chaturvedi [32, 33]. The potential use of the series (44) remains to be explored.

The goal of the subsequent sections 4 and 5 lies in the solution of the derived flow equations by path integrals. In section 4, we first solve the flow equations obeyed by the marginalized distribution (42) and by the generating functional (43) in terms of a 'backward' path integral. Afterwards, in section 5, the flow equations obeyed by the generating function (41) and by the series expansion (44) are solved in terms of a 'forward' path integral. Inverse transformations, such as (37), will then provide distinct path integral representations of the forward and backward master equations.

2.1. Flow of the generating function

We now formulate general conditions under which the forward master equation (10) can be cast into a linear PDE obeyed by the generalized probability generating function

Equation (45): $|g\left(\tau |t_{0},n_{0}\right)\rangle :=\sum_{n}|n\rangle_{\tau,q}\;p\left(\tau,n|t_{0},n_{0}\right)$

The basis function $|n{{\rangle}_{\tau,q}}$ is a function of the variable q and possibly of the time variable τ. But unless one of these variables is of direct relevance, its corresponding subscript will be dropped. The explicit form of the basis function depends on the problem at hand and is chosen so that the four conditions (O), (C), (E), and (Q) below are satisfied (the conditions (O) and (C) concern the orthogonality and completeness of the basis, which we already required in the introductory example). The variable n again represents some state from a countable state space. For example, n could describe the position of a molecular motor along a cytoskeletal filament ($n\in \mathbb{Z}$ ), the copy number of a molecule ($n\in {{\mathbb{N}}_{0}}$ ), the local copy numbers of the molecule on a lattice ($n\equiv {{\left\{{{n}_{i}}\in {{\mathbb{N}}_{0}}\right\}}_{i\in \mathbb{Z}}}$ ), or the copy numbers of multiple kinds of molecules ($n\equiv \left({{n}^{A}},{{n}^{B}},{{n}^{C}}\right)\in \mathbb{N}_{0}^{3}$ ). For the multivariate configurations, the basis function is typically decomposed into a product $|{{n}_{1}}\rangle |{{n}_{2}}\rangle \cdots $ of individual basis functions $|{{n}_{i}}\rangle $ , each depending on its own variable qi . A process with spatial degrees of freedom is considered in section 4.5.2. Besides, we also consider a system of excluding particles in section 2.2.4. There, the (local) number of particles n is restricted to the values 0 and 1. Note that the sum in (45) extends over the whole state space.

The definition of the generating function (45) assumes the existence of a set ${{\{|n\rangle}_{\tau}}\}$ of basis functions for every time $\tau \in \left[{{t}_{0}},t\right]$ . In addition, we assume that there exists a set $\left\{\langle m{{|}_{\tau}}\right\}$ of linear basis functionals for every time $\tau \in \left[{{t}_{0}},t\right]$ . These bras shall be orthogonal to the kets in the sense that at each time point τ, they act on the kets as

$\langle m|_{\tau}\,|n\rangle_{\tau}=\delta_{m,n}$ (O)

(for all m and n). Here we note the possible time-dependence of the basis because the (O)rthogonality condition will only be required for equal times of the bras and kets. In addition to orthogonality, the basis shall fulfil the (C)ompleteness condition

$\sum_{n}|n\rangle_{\tau}\,\langle n|_{\tau}\,f=f$ (C)

where f represents an appropriate test function. The completeness condition implies that the function f can be decomposed in the basis functions $|n\rangle $ with expansion coefficients $\langle n|\,f$ . In the introduction to this section, we introduced various bases fulfilling both the orthogonality and the completeness condition. As in the introductory example, the orthogonality condition allows one to recover the conditional probability distribution via the inverse transformation

Equation (48): $p\left(\tau,n|t_{0},n_{0}\right)=\langle n|_{\tau}\,|g\left(\tau |t_{0},n_{0}\right)\rangle$

Before deriving the flow equation obeyed by the generating function, let us note that the (O)rthogonality condition differs slightly from the corresponding conditions used in most other texts on stochastic path integrals (see, for example, [13, 15] or the 'exclusive scalar product' in [10]). Typically, the orthogonality condition includes an additional factorial $n!$ on its right hand side. The inclusion of this factorial is advantageous in that it accentuates a symmetry between the bases that we consider in sections 2.2.2 and 3.2.2 for the study of chemical reactions. Its inclusion would be rather unusual, however, for the Fourier basis introduced in sections 2.2.1 and 3.2.1. The Fourier basis will be used to solve a simple random walk. Moreover, the factorial obscures a connection between the probability generating functional introduced in section 3.4 and the Poisson representation of Gardiner and Chaturvedi [32, 33]. We discuss this connection in section 3.5.

To derive the flow equation obeyed by the generating function $|g\rangle $ , we differentiate its definition (45) with respect to the time variable τ. The resulting time derivative of the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ can be replaced by the right-hand side of the forward master equation (10). In matrix notation, this equation reads ${{\partial}_{\tau}}p\left(\tau |{{t}_{0}}\right)={{Q}_{\tau}}p\left(\tau |{{t}_{0}}\right)$ . Eventually, one finds that

Equation (49): $\partial_{\tau}|g\left(\tau |t_{0},n_{0}\right)\rangle =\sum_{n}\left[\left(\partial_{\tau}|n\rangle \right)p(\tau,n|\centerdot )+|n\rangle \sum_{m}Q_{\tau}(n,m)\,p(\tau,m|\centerdot )\right]$

Our goal is to turn this expression into a partial differential equation for $|g\rangle $ . For this purpose, we require two differential operators. First, we require a differential operator ${{\mathcal{E}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ encoding the time evolution of the basis function. In particular, this operator should fulfil, for all values of n,

$\mathcal{E}_{\tau}\left(q,\partial_{q}\right)|n\rangle_{\tau,q}=\partial_{\tau}|n\rangle_{\tau,q}$ (E)

By the (O)rthogonality condition, one could also define this operator in a 'constructive' way as

Equation (51): $\mathcal{E}_{\tau}\left(q,\partial_{q}\right):=\sum_{n}\left(\partial_{\tau}|n\rangle_{\tau,q}\right)\langle n|_{\tau}$

We call ${{\mathcal{E}}_{\tau}}$ the basis (E)volution operator. In order to arrive at a proper PDE for $|g\rangle $ , ${{\mathcal{E}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ should be polynomial in ${{\partial}_{q}}$ (later, in section 2.2.4, we also encounter a case in which it constitutes a power series with arbitrarily high powers of ${{\partial}_{q}}$ ). For now, the pre-factors of $1$, ${{\partial}_{q}}$ , $\partial _{q}^{2}$ , … may be arbitrary functions of q. Later, in our derivation of a path integral in section 5, we will also require that the pre-factors can be expanded in powers of q. Note that for a multivariate configuration $n\equiv \left({{n}^{A}},{{n}^{B}},\ldots \right)$ , the derivative ${{\partial}_{q}}$ represents individual derivatives with respect to $q\equiv \left({{q}^{A}},{{q}^{B}},\ldots \right)$ .

The actual dynamics of a jump process are encoded by its transition rate matrix ${{Q}_{\tau}}$ (see section 1.3). The off-diagonal elements of this matrix are the transition rates ${{w}_{\tau}}(n,m)$ from a state m to a state n, and its diagonal elements are the negatives of the exit rates ${{w}_{\tau}}(m)={\sum}_{n}{{w}_{\tau}}(n,m)$ from a state m (with ${{w}_{\tau}}(m,m)=0$ ; see (9)). We encode the information stored in ${{Q}_{\tau}}$ by a second differential operator called ${{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ . This operator should fulfil, for all values of n,

$\mathcal{Q}_{\tau}\left(q,\partial_{q}\right)|n\rangle_{\tau}=\sum_{m}|m\rangle_{\tau}\,Q_{\tau}(m,n)$ (Q)

In analogy with the transition (rate) matrix ${{Q}_{\tau}}$ , we call ${{\mathcal{Q}}_{\tau}}$ the transition (rate) operator (note that we only speak of 'operators' with respect to differential operators, but not with respect to matrices). Just like the basis (E)volution operator, ${{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ should be polynomial in ${{\partial}_{q}}$ . In section 5, it will be assumed that ${{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ can be expanded in powers of both q and ${{\partial}_{q}}$ . In a constructive approach, one could also define the operator as

Equation (53): $\mathcal{Q}_{\tau}\left(q,\partial_{q}\right):=\sum_{m,n}|m\rangle_{\tau,q}\,Q_{\tau}(m,n)\,\langle n|_{\tau}$

This constructive definition does not guarantee, however, that ${{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ has the form of a differential operator. This property is, for example, not immediately clear for the Fourier basis function $|n{{\rangle}_{q}}={{\text{e}}^{\text{i}nq}}$ (with $n\in \mathbb{Z}$ ), which we complemented with the functional $\langle n|\,f={\int}_{-\pi}^{\pi}\frac{\text{d}q}{2\pi}{{\text{e}}^{-\text{i}nq}}f(q)$ in the introduction to this section. Most of the processes that we solve in later sections have polynomial transition rates. Suitable bases and operators for these processes are provided in the next section 2.2. It remains an open problem for the field to find such bases and operators for processes whose transition rates have different functional forms. That is, for example, the case for transition rates that saturate with the number of particles and have the form of a Hill function.

Provided that one has found a transition operator ${{\mathcal{Q}}_{\tau}}$ and a basis (E)volution operator ${{\mathcal{E}}_{\tau}}$ for a (C)omplete and (O)rthogonal basis, it follows from (49) that the generalized generating function $|g\left(\tau |{{t}_{0}},{{n}_{0}}\right)\rangle $ obeys the flow equation

Equation (54): $\partial_{\tau}|g\rangle =\left(\mathcal{Q}_{\tau}+\mathcal{E}_{\tau}\right)|g\rangle =:\tilde{\mathcal{Q}}_{\tau}\,|g\rangle$

Its initial condition reads $|g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)\rangle =\,|{{n}_{0}}\rangle $ , with the possibly time-dependent basis function $|{{n}_{0}}\rangle $ being evaluated at time t0. Although time-dependent bases prove useful in section 7, we mostly work with time-independent bases in the following. The operators ${{\tilde{\mathcal{Q}}}_{\tau}}$ and ${{\mathcal{Q}}_{\tau}}$ then agree because ${{\mathcal{E}}_{\tau}}$ is zero. Therefore, we refer to both ${{\tilde{\mathcal{Q}}}_{\tau}}$ and ${{\mathcal{Q}}_{\tau}}$ as transition (rate) operators.

In our above derivation, we assumed that the ket $|n\rangle $ represents an ordinary function and that the bra $\langle n|$ represents a linear functional. To understand why we chose similar letters for the ${{Q}_{\tau}}$ -matrix and the ${{\mathcal{Q}}_{\tau}}$ -operator, it is insightful to consider the unit column vectors $|n\rangle :={{\widehat{\boldsymbol{e}}}_{n}}$ and the unit row vectors $\langle n|\,:=\widehat{\boldsymbol{e}}_{n}^{\intercal}$ (with $m,n\in {{\mathbb{N}}_{0}}$ ). For these vectors, the right hand side of the transition operator (53) simply constitutes a representation of the ${{Q}_{\tau}}$ -matrix. Hence, ${{\mathcal{Q}}_{\tau}}$ is not a differential operator in this case but coincides with the ${{Q}_{\tau}}$ -matrix. This observation does not come as a surprise because we already noted that the generating function (45) represents the vector $\boldsymbol{p}\left(\tau |{{t}_{0}},{{n}_{0}}\right)$ of all probabilities in this case. Moreover, the corresponding flow equation (54) does not constitute a linear PDE but a vector representation of the forward master equation (10).

Following Doi, the transition operator ${{\tilde{\mathcal{Q}}}_{\tau}}$ could be called a 'time evolution' or 'Liouville' operator [6, 7], or, following Zel'dovich and Ovchinnikov, a 'Hamiltonian' [8]. The latter name is due to the formal resemblance of the flow equation (54) to the Schrödinger equation in quantum mechanics [92] (this resemblance holds for any linear PDE that is first order in ${{\partial}_{\tau}}$ ). The name 'Hamiltonian' has gained in popularity throughout the recent years, possibly because the path integrals derived later in this review share many formal similarities with the path integrals employed in quantum mechanics [107, 285287]. Nevertheless, let us point out that ${{\tilde{\mathcal{Q}}}_{\tau}}$ is generally not Hermitian and that the generating function $|g\rangle $ does not represent a wave function and also not a probability (unless one chooses the unit vectors $|n\rangle ={{\widehat{\boldsymbol{e}}}_{n}}$ and $\langle n|\,=\widehat{\boldsymbol{e}}_{n}^{\intercal}$ as basis). In this review, we stick to the name transition (rate) operator for ${{\tilde{\mathcal{Q}}}_{\tau}}$ (and ${{\mathcal{Q}}_{\tau}}$ ) to emphasize its connection to the transition (rate) matrix ${{Q}_{\tau}}$ .

2.2. Bases for particular stochastic processes

In the previous section, we formulated four conditions under which the master equation (10) can be cast into the linear PDE (54) obeyed by the generalized generating function (45). In the following, we illustrate how these conditions can be met for various stochastic processes.

2.2.1. Random walks.

A simple process that can be solved by the method from the previous section is the one-dimensional random walk. We model this process in terms of a particle sitting at position n of the one-dimensional lattice $\mathbb{L}:=\mathbb{Z}$ . The particle may jump to the nearest lattice site on its right with the (possibly time-dependent) rate ${{r}_{\tau}}>0$ , and to the site on its left with the rate ${{l}_{\tau}}>0$ . Given that the particle was at position n0 at time t0, the probability of finding it at position n at time τ obeys the master equation

Equation (55): $\partial_{\tau}p(\tau,n|\centerdot )=r_{\tau}\,p(\tau,n-1|\centerdot )+l_{\tau}\,p(\tau,n+1|\centerdot )-\left(r_{\tau}+l_{\tau}\right)p(\tau,n|\centerdot )$

with initial condition $p\left({{t}_{0}},n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ . One can solve this equation by solving the associated flow equation (54). But for this purpose, we first require an (O)rthogonal and (C)omplete basis, as well as a basis (E)volution operator ${{\mathcal{E}}_{\tau}}$ and a transition operator ${{\mathcal{Q}}_{\tau}}$ (condition (Q)).

An appropriate choice of the orthogonal and complete basis proves to be the time-independent Fourier basis

Equation (56): $|n\rangle_{q}:=\text{e}^{\text{i}nq}\quad \text{and}\quad \langle m|\,f:=\int_{-\pi}^{\pi}\frac{\text{d}q}{2\pi}\,\text{e}^{-\text{i}mq}\,f(q)$

with $n\in \mathbb{Z}$ and test function f. For this basis, the generalized generating function $|g\rangle ={\sum}_{n}{{\text{e}}^{\text{i}nq}}p(\tau,n|\centerdot )=\langle {{\text{e}}^{\text{i}nq}}\rangle $ coincides with the characteristic function. Moreover, the corresponding orthogonality condition $\langle m|n\rangle ={{\delta}_{m,n}}$ agrees with a common integral representation of the Kronecker delta. The completeness of the basis can be shown with the help of a Fourier series representation of the 'Dirac comb'

Equation (57): $\sum_{k\in \mathbb{Z}}\delta \left(q-2\pi k\right)=\frac{1}{2\pi}\sum_{n\in \mathbb{Z}}\text{e}^{\text{i}nq}$

It thereby follows that

Equation (58): $\sum_{n\in \mathbb{Z}}|n\rangle_{q}\,\langle n|\,f=f(q)$

Since the Fourier basis function is time-independent, the condition (E) is trivially fulfilled for the evolution operator ${{\mathcal{E}}_{\tau}}:=0$ . The only piece still missing is the transition operator ${{\mathcal{Q}}_{\tau}}$ . Its defining condition (Q) requires knowledge of the transition matrix ${{Q}_{\tau}}$ whose elements

Equation (59): $Q_{\tau}(n,m)=r_{\tau}\,\delta_{n,m+1}+l_{\tau}\,\delta_{n,m-1}-\left(r_{\tau}+l_{\tau}\right)\delta_{n,m}$

can be inferred by comparing the master equation (55) with its general form (10). The condition (Q) therefore reads

Equation (60): $\mathcal{Q}_{\tau}\,|n\rangle =r_{\tau}\,|n+1\rangle +l_{\tau}\,|n-1\rangle -\left(r_{\tau}+l_{\tau}\right)|n\rangle$

One can construct a transition operator with this property with the help of the functions $c(q):={{\text{e}}^{\text{i}q}}$ and $a(q):={{\text{e}}^{-\text{i}q}}$ . The function c shifts the basis function $|n{{\rangle}_{q}}={{\text{e}}^{\text{i}nq}}$ to the right via $c|n\rangle =\,|n+1\rangle $ , and the function a shifts it to the left via $a|n\rangle =\,|n-1\rangle $ . Thus, an operator with the property (60) can be defined as

Equation (61): $\mathcal{Q}_{\tau}(q):=r_{\tau}\left(\text{e}^{\text{i}q}-1\right)+l_{\tau}\left(\text{e}^{-\text{i}q}-1\right)$

This operator can also be inferred from its constructive definition (53) by making use of the Dirac comb.

After putting the above pieces together, one finds that the generating function $|g\left(\tau |{{t}_{0}},{{n}_{0}}\right){{\rangle}_{q}}$ obeys the flow equation

Equation (62): $\partial_{\tau}|g\rangle =\left[r_{\tau}\left(\text{e}^{\text{i}q}-1\right)+l_{\tau}\left(\text{e}^{-\text{i}q}-1\right)\right]|g\rangle$

for ${{t}_{0}}\leqslant \tau \leqslant t$ with the initial value $|g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)\rangle =|{{n}_{0}}\rangle ={{\text{e}}^{\text{i}{{n}_{0}}q}}$ . The flow equation is readily solved for $|g\rangle $ by

$|g\left(\tau |t_{0},n_{0}\right)\rangle_{q}=\exp \left(\text{i}n_{0}q+\int_{t_{0}}^{\tau}\text{d}s\left[r_{s}\left(\text{e}^{\text{i}q}-1\right)+l_{s}\left(\text{e}^{-\text{i}q}-1\right)\right]\right)$

The conditional probability distribution can be recovered from this generating function by performing the inverse Fourier transformation (48), i.e. by evaluating $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)=\langle n|g\rangle $ . A series expansion of all of the involved exponentials and some laborious rearrangement of sums eventually result in a Skellam distribution as the solution of the process [329]. We outline the derivation of this distribution in appendix C. The distribution's mean is $\mu ={{n}_{0}}+{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}-{{l}_{s}}\right)$ and its variance ${{\sigma}^{2}}={\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}+{{l}_{s}}\right)$ . These moments also follow from the fact that the Skellam distribution describes the difference of two Poisson random variables; one for jumps to the right, the other for jumps to the left. The two moments can also be obtained more easily by deriving equations for their time evolution from the master equation. Those equations do not couple for the simple random walk. Let us also note that if the two jump rates ${{r}_{\tau}}$ and ${{l}_{\tau}}$ agree, the Skellam distribution tends to a Gaussian for large times.
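For constant jump rates, this solution can also be cross-checked numerically. The sketch below (our own illustration) integrates the master equation (55) on a truncated lattice via the matrix exponential and compares the result with the Skellam distribution implemented in scipy:

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import skellam

r, l, tau, M = 1.0, 0.5, 5.0, 60         # constant rates; lattice truncated to |n| <= M
size = 2 * M + 1
Q = np.zeros((size, size))
for i in range(size):
    if i + 1 < size:
        Q[i + 1, i] = r                  # jump to the right
    if i - 1 >= 0:
        Q[i - 1, i] = l                  # jump to the left
    Q[i, i] = -(r + l)                   # exit rate

p = expm(Q * tau)[:, M]                  # initial condition n0 = 0 (index M)

n = np.arange(-M, M + 1)
p_exact = skellam.pmf(n, mu1=r * tau, mu2=l * tau)
print(np.max(np.abs(p - p_exact)))       # agrees up to the truncation error
```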

2.2.2. Chemical reactions.

As a second example, we turn to processes whose transition rates can be decomposed additively into the transition rates of simple chemical reactions. In such a reaction, k1, k2,... molecules of types A1, A2,... come together to be replaced by l1, l2,... molecules of the same types (with ${{k}_{j}},{{l}_{j}}\in {{\mathbb{N}}_{0}}$ ). Besides reacting with each other, the molecules could also diffuse in space, which can be modelled in terms of hopping processes on a regular, d-dimensional lattice such as $\mathbb{L}:={{\mathbb{Z}}^{d}}$ . Upon labelling particles on different lattice sites by their positions, the hopping of a molecule of type A1 from lattice site $i\in \mathbb{L}$ to lattice site $j\in \mathbb{L}$ could be regarded as the chemical reaction $A_{1}^{(i)}\to A_{1}^{(\,j)}$ . We consider the 'chemical' master equation associated to such hopping processes in section 4.5.2.

To demonstrate the generating function approach from section 2.1, we focus on a system with only a single type of molecule A engaging in the 'well-mixed' reaction $k\,A\to l\,A$ ($k,l\in {{\mathbb{N}}_{0}}$ ). Since the forward master equation (10) is linear in the transition rate ${{Q}_{\tau}}(n,m)$ , the following considerations readily extend to networks of multiple reactions, multiple types of molecules, and processes with spatial degrees of freedom. The basis functions and functionals introduced below can, for example, be used to study branching and annihilating random walks, which are commonly modelled in terms of diffusing particles that engage in the binary annihilation reaction $2\,A\to \varnothing $ and the linear growth process $A\to (1+m)A$ [96, 132, 317, 330, 331]. For an odd number of offspring, these walks exhibit an absorbing state phase transition falling into the universality class of directed percolation (according to perturbative calculations in one and two spatial dimensions [96, 132], and according to non-perturbative calculations in up to six dimensions [318]). The decomposition of a process into elementary reactions of the form $k\,A\to l\,A$ is also possible in the contexts of growing polymer crystals [13], aggregation phenomena [294], and predator-prey ecosystems [163].

As explained in section 1.5, the transition rate ${{w}_{\tau}}(m,n)={{\gamma}_{\tau}}{{\delta}_{m,n-k+l}}{{(n)}_{k}}$ of the reaction $k\,A\to l\,A$ is determined by combinatorial counting. Its proportionality to the falling factorial ${{(n)}_{k}}=n(n-1)\cdots (n-k+1)$ derives from picking k molecules out of a population of n molecules. The overall time scale of the reaction is set by the rate coefficient ${{\gamma}_{\tau}}$ (this coefficient may also absorb a factorial $k!$ accounting for the indistinguishability of molecules). The above transition rate guarantees that the number of molecules (or 'particles') in the system never drops below zero, provided that it was non-negative initially. Thus, the state space of n is ${{\mathbb{N}}_{0}}$ and we require basis functions $|n\rangle $ and basis functionals $\langle n|$ only for such values.

Before specifying the (C)omplete and (O)rthogonal basis as well as the basis (E)volution operator ${{\mathcal{E}}_{\tau}}$ , let us first specify an appropriate transition operator ${{\mathcal{Q}}_{\tau}}$ . Its corresponding condition (Q) depends on the transition matrix ${{Q}_{\tau}}$ , whose elements ${{Q}_{\tau}}(m,n)={{\gamma}_{\tau}}{{(n)}_{k}}\left({{\delta}_{m,n-k+l}}-{{\delta}_{m,n}}\right)$ can be inferred from the rate ${{w}_{\tau}}(m,n)$ with the help of the relation (9). Consequently, the condition (Q) reads

Equation (63): $\mathcal{Q}_{\tau}\,|n\rangle =\gamma_{\tau}\,(n)_{k}\left(|n-k+l\rangle -|n\rangle \right)$

This condition is met by the transition operator

Equation (64): $\mathcal{Q}_{\tau}(c,a):=\gamma_{\tau}\left(c^{l}-c^{k}\right)a^{k}$

provided that there exist a 'creation' operator c and an 'annihilation' operator a acting as

Equation (65): $c\,|n\rangle =|n+1\rangle$

Equation (66): $a\,|n\rangle =n\,|n-1\rangle$

The second of these relations ensures that basis functions with n  <  0 do not appear because $a|0\rangle $ vanishes. Both c and a may depend on time, just as the basis functionals and functions $\langle n|$ and $|n\rangle $ may do. It follows from (65) and (66) that the two operators fulfil the commutation relation

Equation (67): $[a,c]:=a\,c-c\,a=1$

at a fixed time τ. The commutation relation is meant with respect to functions that can be expanded in the basis functions yet to be defined. For brevity, we commonly drop the subscript τ of the creation and annihilation operators. Together with the orthogonality condition, the commutation relation implies that the operators act on the basis functionals as

Equation (68): $\langle n|\,c=\langle n-1|$

Equation (69): $\langle n|\,a=(n+1)\,\langle n+1|$
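With these actions at hand, the condition (63) on the transition operator (64) can be verified directly (a short check of ours): applying (66) k times gives $a^{k}|n\rangle =(n)_{k}\,|n-k\rangle$, and applying (65) repeatedly then yields

$$\gamma_{\tau}\left(c^{l}-c^{k}\right)a^{k}\,|n\rangle =\gamma_{\tau}\,(n)_{k}\left(|n-k+l\rangle -|n\rangle \right),$$

which is precisely the right-hand side of (63).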

In quantum mechanics, creation and annihilation operators prove useful in solving the equation of motion of a quantum particle in a quadratic potential (i.e. in solving the Schrödinger equation of the quantum harmonic oscillator) [22, 332]. There, the ket $|n\rangle $ is interpreted as carrying n quanta of energy $\hbar \omega $ in addition to the ground state energy $\hbar \omega /2$ of the oscillator's 'vacuum state' $|0\rangle $ (with Dirac constant $\hbar $ and angular frequency ω). The creation operator then adds a quantum of energy to state $|n\rangle $ via the relation $c|n\rangle =\sqrt{n+1}|n+1\rangle $ , and the annihilation operator removes an energy quantum via $a|n\rangle =\sqrt{n}|n-1\rangle $ (a also destroys the vacuum state). These relations differ from the ones in (65) and (66) because in quantum mechanics, the creation and annihilation operators are defined in terms of self-adjoint position and momentum operators, forcing the former operators to be hermitian adjoints of each other (i.e. $c={{a}^{\dagger}}$ and $a={{c}^{\dagger}}$ ). Consequently, the relations corresponding to (68) and (69) read $\langle n|{{a}^{\dagger}}=\sqrt{n}\langle n-1|$ and $\langle n|a=\sqrt{n+1}\langle n+1|$ . Nevertheless, the commutation relation (67) also holds in the quantum world in which its validity ultimately derives from the non-commutativity of the position operator Q and the momentum operator P (i.e. from $[Q,P]=\text{i}\hbar $ , which follows from P being the generator of spatial displacements in state space [332]). Besides, let us note that in quantum field theory, the creation operator is interpreted as adding a bosonic particle to an energy state, and the annihilation operator as removing one [333].

One may wonder whether the above interpretations can be transferred to the theory of stochastic processes in which the creation and annihilation operators need not be each other's adjoints. For example, the creation operator in (65) could be interpreted as adding a molecule to a system, and the corresponding annihilation operator (66) as removing a molecule. The commutation relation (67) may then be interpreted as expressing that the addition of a particle to the system (one way to do it) and the removal of a particle (many ways to do it) do not commute. Let us, however, note that these interpretations only apply to the processes discussed in the present section, which can be decomposed into reactions of the form $k\,A\to l\,A$ (with possibly multiple types of particles and spatial degrees of freedom). The interpretations do not apply, for example, to the random walk of a single particle with state space $\mathbb{Z}$ , which we discussed in the previous section (there, the 'shift operators' with actions $c|n\rangle =\,|n+1\rangle $ and $a|n\rangle =\,|n-1\rangle $ commute).

Whether creation and annihilation operators with the properties (65) and (66) exist depends on the choice of the basis functions. For the study of chemical reactions, a useful choice proves to be the basis function

Equation (70): $|n\rangle_{\tau,q}:=\left(\zeta q+\tilde{q}\right)^{n}\text{e}^{-\tilde{x}\left(\zeta q+\tilde{q}\right)}$

Here, $\tilde{q}(\tau )$ and $\tilde{x}(\tau )$ are arbitrary, possibly time-dependent functions and $\zeta \ne 0$ is a free parameter. This parameter only becomes relevant in section 6 in recovering a path integral representation of averaged observables. There, its value is set to $\zeta :=\text{i}$ but typically we choose $\zeta :=1$ . For the latter choice, the basis function (70) simplifies to

Equation (71): $|n\rangle_{\tau,q}=\left(q+\tilde{q}\right)^{n}\text{e}^{-\tilde{x}\left(q+\tilde{q}\right)}$

Alternatively, the parameter ζ could be used to rescale the variable q by a system size parameter N if such a parameter is available.

The two functions $\tilde{q}$ and $\tilde{x}$ may prove helpful in simplifying the flow equation obeyed by the generating function. For example, we chose $\tilde{q}:=1$ and $\tilde{x}:=\gamma /\mu $ in the introduction to section 2 to simplify the flow equation of the reaction $\varnothing \rightleftharpoons A$ (with growth rate coefficient γ and decay rate coefficient μ). Later, in section 7, $\tilde{q}$ and $\tilde{x}$ will act as the stationary paths of a path integral with q and an auxiliary variable x being deviations from them. If both $\tilde{q}$ and $\tilde{x}$ are chosen as zero, the basis function (71) simplifies to the basis function

Equation (72): $|n\rangle_{q}=q^{n}$

of the ordinary probability generating function.

For the general basis function (70), creation and annihilation operators with the properties (65) and (66) can be defined as

Equation (73): $c(q):=\zeta q+\tilde{q}$

Equation (74): $a\left(q,\partial_{q}\right):=\zeta^{-1}\partial_{q}+\tilde{x}$

Apparently, these operators are not each other's adjoints. For the basis function $|n{{\rangle}_{q}}={{q}^{n}}$ of the ordinary probability generating function, the operators simplify to c  =  q and $a={{\partial}_{q}}$ . The corresponding transition operator (64) reads ${{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)={{\gamma}_{\tau}}\left({{q}^{l}}-{{q}^{k}}\right)\partial _{q}^{k}$ , resulting in the flow equation

Equation (75): $\partial_{\tau}|g\rangle =\gamma_{\tau}\left(q^{l}-q^{k}\right)\partial _{q}^{k}\,|g\rangle$

Thus far, we have not specified the basis functionals. These functionals can be defined using the annihilation operator (74). In particular, we complement the basis function (70) with the functionals

Equation (76): $\langle m|\,f:=\frac{1}{m!}\left[a^{m}f\right](q)\Big|_{q=-\tilde{q}/\zeta}$

where f represents a test function. The (O)rthogonality of the basis ${{\left\{\langle m|,|n\rangle \right\}}_{m,n\in {{\mathbb{N}}_{0}}}}$ follows directly from the action of the annihilation operator on the basis function $|n\rangle $ , and from the vanishing of this function at $q=-\tilde{q}/\zeta $ , except for n  =  0. The (C)ompleteness of the basis is also readily established.
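The defining relations (65)–(67) of the operators (73) and (74) with respect to the basis function (70) can also be verified symbolically. The following snippet (our own illustration, using the sympy library) performs the check for a fixed particle number; any $n\geqslant 1$ works the same way.

```python
import sympy as sp

q, zeta, qt, xt = sp.symbols('q zeta q_tilde x_tilde')  # qt, xt stand for q~, x~
n = 3

ket = lambda m: (zeta * q + qt)**m * sp.exp(-xt * (zeta * q + qt))  # basis function (70)
c = lambda f: (zeta * q + qt) * f                                   # creation operator (73)
a = lambda f: sp.diff(f, q) / zeta + xt * f                         # annihilation operator (74)

assert sp.simplify(c(ket(n)) - ket(n + 1)) == 0                 # c|n> = |n+1>, relation (65)
assert sp.simplify(a(ket(n)) - n * ket(n - 1)) == 0             # a|n> = n|n-1>, relation (66)
assert sp.simplify(a(c(ket(n))) - c(a(ket(n))) - ket(n)) == 0   # [a, c] = 1, relation (67)
```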

The only piece still missing is the basis (E)volution operator ${{\mathcal{E}}_{\tau}}$ to encode the time-dependence of the basis function (70). Differentiation of this function with respect to time and use of the creation and annihilation operators (73) and (74) show that this operator is given by

Equation (77): $\mathcal{E}_{\tau}=\left(\partial_{\tau}\tilde{q}\right)\left(a-\tilde{x}\right)-\left(\partial_{\tau}\tilde{x}\right)c$

Let us illustrate the use of the above basis for a process whose transition operator can be reduced to a mere constant by an appropriate choice of $\tilde{q}$ and $\tilde{x}$ . This simplification is, however, bought at the price of making the basis functions time-dependent. In particular, we consider the simple growth process $\varnothing \to A$ for a time-independent growth rate coefficient γ. By our previous discussions, one can readily verify that the ordinary probability generating function with basis function $|n{{\rangle}_{q}}={{q}^{n}}$ obeys the flow equation ${{\partial}_{\tau}}|g\rangle =\gamma (q-1)|g\rangle $ for this process. This flow equation can be simplified by redefining the basis function of the generating function as $|n{{\rangle}_{\tau,q}}:={{(q+1)}^{n}}{{\text{e}}^{-\tilde{x}(q+1)}}$ , with $\tilde{x}$ solving the rate equation ${{\partial}_{\tau}}\tilde{x}=\gamma $ of the process. Hence, the basis function depends explicitly on time through its dependence on $\tilde{x}(\tau )=\tilde{x}\left({{t}_{0}}\right)+\gamma \left(\tau -{{t}_{0}}\right)$ . Upon combining the transition operator (64) with the basis evolution operator (77), one finds that the generating function now obeys the flow equation ${{\partial}_{\tau}}|g\rangle =-\gamma |g\rangle $ . The equation is readily solved by $|g\left(\tau |{{t}_{0}},{{n}_{0}}\right)\rangle ={{\text{e}}^{-\gamma \left(\tau -{{t}_{0}}\right)}}|{{n}_{0}}{{\rangle}_{{{t}_{0}}}}$ . The conditional probability distribution can be recovered via the inverse transformation (48) as

Equation (78): $p\left(\tau,n|t_{0},n_{0}\right)=\text{e}^{-\gamma \left(\tau -t_{0}\right)}\,\langle n|_{\tau}\,|n_{0}\rangle_{t_{0}}$

The coefficient ${{\langle n{{|}_{\tau}}|{{n}_{0}}\rangle}_{{{t}_{0}}}}$ can be evaluated by determining how the functional $\langle n{{|}_{\tau}}$ acts at time t0. Using ${{a}_{\tau}}-\tilde{x}(\tau )={{a}_{{{t}_{0}}}}-\tilde{x}\left({{t}_{0}}\right)$ and the binomial theorem, one can rewrite the action of this functional as

Equation (79): $\langle n|_{\tau}=\sum_{m=0}^{n}\frac{\left[\gamma \left(\tau -t_{0}\right)\right]^{n-m}}{(n-m)!}\,\langle m|_{t_{0}}$

By the (O)rthogonality condition, the conditional probability distribution (78) therefore evaluates to the shifted Poisson distribution

Equation (80): $p\left(\tau,n|t_{0},n_{0}\right)=\frac{\left[\gamma \left(\tau -t_{0}\right)\right]^{n-n_{0}}\text{e}^{-\gamma \left(\tau -t_{0}\right)}}{\left(n-n_{0}\right)!}\,\Theta_{n-n_{0}}$

where ${{ \Theta }_{n}}:=1$ for $n\geqslant 0$ and ${{ \Theta }_{n}}:=0$ otherwise. The validity of this solution can be verified by solving the corresponding flow equation of the ordinary generating function.
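Numerically, the shifted Poisson distribution (80) can indeed be recovered from a truncated matrix exponential, as in the following sketch of ours:

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

gamma, dt, n0, N = 1.0, 3.0, 5, 80      # growth rate, elapsed time, initial state, truncation
Q = np.zeros((N, N))
for n in range(N - 1):
    Q[n + 1, n] = gamma                 # 0 -> A: n -> n + 1
    Q[n, n] = -gamma                    # exit rate (the last state is a truncation artefact)

p = expm(Q * dt)[:, n0]                 # p(t0 + dt, . | t0, n0)

n = np.arange(N)
p_exact = np.where(n >= n0, poisson.pmf(n - n0, gamma * dt), 0.0)   # equation (80)
print(np.max(np.abs(p - p_exact)))      # small, up to the truncation at N
```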

In our above approach, we first established the form of the transition operator because the basis function (70) is not the only possible choice. Ohkubo recently proposed the use of orthogonal polynomials for this purpose [334]. In analogy to the eigenfunctions of the quantum harmonic oscillator [22, 332], one can, for example, choose the basis function as the Hermite polynomial

Equation (81): $|n\rangle_{q}:=He_{n}(q)=(-1)^{n}\,\text{e}^{q^{2}/2}\,\partial _{q}^{n}\,\text{e}^{-q^{2}/2}$

with $n\in {{\mathbb{N}}_{0}}$ . Hermite polynomials constitute an Appell sequence, i.e. they fulfil ${{\partial}_{q}}H{{e}_{n}}(q)=nH{{e}_{n-1}}(q)$ (18.9.27 in [335]). With $a:={{\partial}_{q}}$ , this property coincides with the defining relation $a|n\rangle =n|n-1\rangle $ of the annihilation operator in (66). Furthermore, Hermite polynomials obey the recurrence relation $H{{e}_{n+1}}(q)=qH{{e}_{n}}(q)-nH{{e}_{n-1}}(q)$ (18.9.1 in [335]). Combined with the Appell property, $c:=q-{{\partial}_{q}}$ therefore fulfils the defining relation $c|n\rangle =\,|n+1\rangle $ of a creation operator in (65). After complementing the basis function (81) with the functional

Equation (82): $\langle m|\,f:=\frac{1}{m!}\int_{\mathbb{R}}\frac{\text{d}q}{\sqrt{2\pi}}\,\text{e}^{-q^{2}/2}\,He_{m}(q)\,f(q)$

the (O)rthogonality and (C)ompleteness of the basis are also established (18.3 and 18.18.6 in [335]). Thus, the chemical master equation (24) can be transformed into a flow equation obeyed by a generating function based on Hermite polynomials. To our knowledge, however, no stochastic process has thus far been solved or been approximated along these lines. Besides Hermite polynomials, Ohkubo also proposed the use of Charlier polynomials and mentioned their relation with certain birth-death processes [334].
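The Appell property, the recurrence relation, and the orthogonality of this basis are easily checked symbolically; the snippet below (our own illustration, with the probabilists' Hermite polynomials generated from their recurrence) does so with sympy:

```python
import sympy as sp

q = sp.symbols('q')

def He(n):
    """Probabilists' Hermite polynomial from He_{n+1} = q He_n - n He_{n-1}."""
    a, b = sp.Integer(1), q
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, sp.expand(q * b - k * a)
    return b

n = 4
assert sp.expand(sp.diff(He(n), q) - n * He(n - 1)) == 0          # Appell property: a = d/dq
assert sp.expand(q * He(n) - sp.diff(He(n), q) - He(n + 1)) == 0  # recurrence: c = q - d/dq

# orthogonality <m|n> = delta_{m,n} with the Gaussian weight of (82)
w = sp.exp(-q**2 / 2) / sp.sqrt(2 * sp.pi)
inner = lambda m, k: sp.integrate(w * He(m) * He(k), (q, -sp.oo, sp.oo)) / sp.factorial(m)
assert inner(2, 2) == 1 and inner(2, 3) == 0
```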

2.2.3. Intermezzo: the unit vector basis.

One may wonder why we actually bother with explicit representations of the bras $\langle n|$ and kets $|n\rangle $ . Often, these objects are introduced only formally as the basis of a 'bosonic Fock space' [6, 7, 91, 289, 336], leaving the impression that the particles under consideration are in fact bosonic quantum particles. Although this impression takes the analogy with quantum theory too far, the analogy has helped in developing new methods for solving the master equations. In the previous sections, we showed how the forward master equation can be cast into a linear PDE obeyed by a probability generating function. Later, in section 5, this PDE is solved in terms of a path integral by which we then recover a path integral representation of averaged observables in section 6. This 'analytic' derivation of the path integral is, however, not the only possible way. In the following, we sketch the mathematical basis of an alternative approach, which employs the unit vectors ${{\widehat{\boldsymbol{e}}}_{n}}$ as the basis. The approach ultimately results in the same path integral representation of averaged observables as we show in section 6.2. The duality between the two approaches resembles the duality between the matrix mechanics formulation of quantum mechanics by Heisenberg, Born, and Jordan [9395] and its analytic formulation in terms of the Schrödinger equation [92].

The alternative derivation of the path integral also starts out from the forward master equation ${{\partial}_{\tau}}p\left(\tau |{{t}_{0}}\right)=Qp\left(\tau |{{t}_{0}}\right)$ . As before, $p\left(\tau |{{t}_{0}}\right)$ represents the matrix of the conditional probabilities $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ , and Q is the transition rate matrix. For simplicity, we assume the transition rate matrix to be independent of time. Moreover, we focus on processes that can be decomposed additively into chemical reactions of the form $k\,A\to l\,A$ with time-independent rate coefficients. The elements of the transition matrix associated to this reaction read $Q(m,n)=\gamma {{(n)}_{k}}\left({{\delta}_{m,n-k+l}}-{{\delta}_{m,n}}\right)$ . Since the particle numbers n and n0 may assume any values in ${{\mathbb{N}}_{0}}$ , the probability matrix $p\left(\tau |{{t}_{0}}\right)$ and the transition matrix Q have infinitely many rows and columns.

In the introductory section 1.4, we formulated conditions under which the forward master equation is solved by the matrix exponential $p\left(\tau |{{t}_{0}}\right)={{\text{e}}^{Q\left(\tau -{{t}_{0}}\right)}}{\mathbb {1}}$ (in the sense of the expansion (16), the state space must be countable and the exit rates bounded). For the above chemical reaction, those conditions are not necessarily met, and thus the matrix exponential may not exist. Nevertheless, we regard $p\left(\tau |{{t}_{0}}\right)={{\text{e}}^{Q\left(\tau -{{t}_{0}}\right)}}{\mathbb {1}}$ as a 'formal' solution in the following. With the help of certain mathematical 'tricks', this solution will be cast into a path integral in section 6. But before doing so, let us rewrite the transition matrix in the exponential in terms of creation and annihilation matrices. For this purpose, we employ the (infinitely large) unit column vectors $|n\rangle :={{\widehat{\boldsymbol{e}}}_{n}}$ and their orthogonal unit row vectors $\langle n|\,\,:\,=\widehat{\boldsymbol{e}}_{n}^{\intercal}$ as basis (with $n\in {{\mathbb{N}}_{0}}$ ). Individual probabilities can therefore be inferred from the above solution as

Equation (83): $p\left(\tau,n|t_{0},n_{0}\right)=\langle n|\,\text{e}^{Q\left(\tau -t_{0}\right)}\,|n_{0}\rangle$

For multivariate or spatial processes, the basis vectors can be generalized to tensors (i.e. $|\boldsymbol{n}\rangle =\,|{{n}_{1}}\rangle \otimes |{{n}_{2}}\rangle \otimes \ldots $ ).

The orthogonality of the basis allows us to rewrite the elements of the transition rate matrix of the reaction $k\,A\to l\,A$ as

Equation (84): $Q(m,n)=\gamma \,(n)_{k}\left(\langle m|n-k+l\rangle -\langle m|n\rangle \right)$

A matrix with these elements can be written as

Equation (85): $Q=\gamma \left(c^{l}-c^{k}\right)a^{k}$

with c acting as a 'creation matrix' and a acting as an 'annihilation matrix'. In particular, we define c in terms of its sub-diagonal $(1,1,1,\ldots )$ and a in terms of its super-diagonal $(1,2,3,\ldots )$ . All of their other matrix elements are set to zero. It is readily shown that these matrices fulfil $c|n\rangle =\,|n+1\rangle $ and $a|n\rangle =n|n-1\rangle $ with respect to the basis vectors. These relations coincide with our previous relations (65) and (66). Besides, the matrices also fulfil $\langle n|c=\langle n-1|$ and $\langle n|a=(n+1)\langle n+1|$ , as well as the commutation relation $[a,c]={\mathbb {1}}$ . Further properties of the matrices are addressed in section 6.2. By our previous comments in section 2.1, it is not a surprise that the transition matrix (85) has the same form as the transition operator (64). Both of them are normal-ordered polynomials in c and a. This property will help us in section 6.2 in recovering a path integral representation of averaged observables.
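In a truncated form, these matrices are easily written down and the above relations checked numerically. The following numpy sketch (our own illustration; the truncation at N states is an artefact of the finite representation) also assembles the transition matrix (85) for the coagulation reaction $2\,A\to A$:

```python
import numpy as np

N = 6
c = np.diag(np.ones(N - 1), k=-1)       # sub-diagonal (1, 1, 1, ...): c e_n = e_{n+1}
a = np.diag(np.arange(1.0, N), k=1)     # super-diagonal (1, 2, 3, ...): a e_n = n e_{n-1}

# the commutator [a, c] equals the unit matrix, except for the last
# diagonal entry, where the truncation of the state space is felt
print(a @ c - c @ a)

gamma, k, l = 1.0, 2, 1                 # 2A -> A
Q = gamma * (np.linalg.matrix_power(c, l)
             - np.linalg.matrix_power(c, k)) @ np.linalg.matrix_power(a, k)
```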

2.2.4. Processes with locally excluding particles.

As noted in the introductory section 1.3, one can model the movement of molecular motors along a cytoskeletal filament in terms of a master equation [113–115]. In its simplest form, such a model describes the movement of mutually excluding motors as a hopping process on a one-dimensional lattice $\mathbb{L}\subset \mathbb{Z}$ , with attachment and detachment of motors at certain boundaries. To respect the mutual exclusion of motors, their local number ni on a lattice site $i\in \mathbb{L}$ is restricted to 0 and 1. Various other processes can be modelled in similar ways, for example, aggregation processes [26], adsorption processes [337, 338], and directed percolation [29]. In the following, we show how the generating function approach from section 2.1 and its complementary approach from section 2.2.3 can be applied to such processes. To illustrate the mathematics behind these approaches while not burdening ourselves with too many indices, we demonstrate the approaches for the simple, non-spatial telegraph process [339, 340]. This process describes a system that randomly switches between two states that are called the 'on' and 'off' states, or, for brevity, the '1' and '0' states. The rate at which the system switches from state 1 to state 0 is denoted as μ, and the rate of the reverse transition as γ. For simplicity, we assume these rate coefficients to be time-independent. Consequently, the master equation of the telegraph process reads

$\partial_\tau\,\boldsymbol{p}(\tau|\centerdot)=\left(\begin{matrix}-\mu&\gamma\\ \mu&-\gamma\end{matrix}\right)\boldsymbol{p}(\tau|\centerdot)$ (86)

with $\boldsymbol{p}(\tau |\centerdot )={{\left(\,p(\tau,1|\centerdot ),p(\tau,0|\centerdot )\right)}^{\intercal}}$ being the probability vector. Alternatively, the master equation can be written as ${{\partial}_{\tau}}p(\tau,n|\centerdot )={\sum}_{m\in \left\{1,0\right\}}Q(n,m)p(\tau,m|\centerdot )$ with transition matrix elements

$Q(n,m)=\gamma\left(\delta_{n,1}-\delta_{n,0}\right)\delta_{m,0}+\mu\left(\delta_{n,0}-\delta_{n,1}\right)\delta_{m,1}$ (87)

One can solve the above master equation in various ways, for example, by evaluating the matrix exponential in $p\left(\tau |{{\tau}_{0}}\right)={{\text{e}}^{Q\left(\tau -{{\tau}_{0}}\right)}}{\mathbb {1}}$ after diagonalizing the transition matrix. In the following, the telegraph process serves as the simplest representative of processes with excluding particles and it helps in explaining how the methods from the previous sections are applied to such processes. The inclusion of spatial degrees of freedom into the procedure is straightforward.

To apply the generating function technique from section 2.1, we require orthogonal and complete basis functions $\left\{|0\rangle,|1\rangle \right\}$ and basis functionals $\left\{\langle 0|,\langle 1|\right\}$ , as well as a transition operator $\mathcal{Q}\left(q,{{\partial}_{q}}\right)$ fulfilling condition (Q), i.e. for $n\in \left\{0,1\right\}$ ,

$\mathcal{Q}\left(q,\partial_q\right)|n\rangle=\sum_{m\in\{0,1\}}Q(m,n)\,|m\rangle=\gamma\left(|1\rangle-|0\rangle\right)\delta_{n,0}+\mu\left(|0\rangle-|1\rangle\right)\delta_{n,1}$ (88)

An operator with this property can be constructed in (at least) two ways: using creation and annihilation operators fulfilling the 'bosonic' commutation relation [a,c]  =  ac  −  ca  =  1, or using operators fulfilling the 'fermionic' anti-commutation relation $\left\{a,c\right\}:=ac+ca=1$ .

Let us first outline an approach based on the commutation relation. The approach was recently proposed by van Wijland [90]. To illustrate the approach, we take the 'analytic' point of view with $c\left(q,{{\partial}_{q}}\right)$ and $a\left(q,{{\partial}_{q}}\right)$ being differential operators. However, the following considerations also apply to the approach from the previous section where c and a represented creation and annihilation matrices. To cast a master equation such as (86) into a path integral, van Wijland employed creation and annihilation operators with the same actions as in section 2.2.2, i.e. $c|n\rangle =\,|n+1\rangle $ and $a|n\rangle =n|n-1\rangle $ . Besides complying with the commutation relation [a,c]  =  1, these relations imply that the 'number operator' $\mathcal{N}:\,=ca$ fulfils $\mathcal{N}\,|n\rangle =n|n\rangle $ . This operator may be used to define the 'Kronecker operator'

$\delta_{\mathcal{N},m}:=\frac{1}{2\pi}\int_0^{2\pi}\mathrm{d}x\;\mathrm{e}^{\mathrm{i}x\left(\mathcal{N}-m\right)}$ (89)

in terms of a series expansion of its exponential. The Kronecker operator acts on the basis functions as ${{\delta}_{\mathcal{N},m}}|n\rangle ={{\delta}_{n,m}}|n\rangle $ , and thus it can be used to replace the Kronecker deltas in (88). One thereby arrives at the flow equation

$\partial_\tau|g\rangle=\left[\gamma\left(c-1\right)\delta_{\mathcal{N},0}+\mu\left(a-1\right)\delta_{\mathcal{N},1}\right]|g\rangle$ (90)

On the downside, the flow equation now involves power series with arbitrarily high derivatives with respect to q (upon choosing the operators as c  =  q and $a={{\partial}_{q}}$ ). In section 5, we show how the solution of a flow equation such as (90) can be expressed in terms of a path integral. The derivation of the path integral requires that the transition operator $\mathcal{Q}\left(q,{{\partial}_{q}}\right)$ is normal-ordered with respect to q and ${{\partial}_{q}}$ , i.e. that all the q are to the left of all the ${{\partial}_{q}}$ in every summand. This order can be achieved by the repeated use of $\left[{{\partial}_{q}},q\right]\,f(q)=f(q)$ . More information on the procedure can be found in [90]. Van Wijland applied the resulting path integral to the asymmetric diffusion of excluding particles on a one-dimensional lattice. Moreover, Mobilia, Georgiev, and Täuber have employed the method in studying the stochastic Lotka–Volterra model on d-dimensional lattices [153].

An alternative to the above approach lies in the use of operators fulfilling the anti-commutation relation {a, c}  =  ac  +  ca  =  1. This relation is clearly not fulfilled by $c:=q$ and $a:={{\partial}_{q}}$ , at least not if q represents an ordinary real variable. However, ${{\partial}_{q}}q+q{{\partial}_{q}}=1$ holds true if q represents a Grassmann variable (see [341, 342] for details). Grassmann variables commute with real and complex numbers (i.e. $[q,\alpha ]=0$ for $\alpha \in \mathbb{C}$ ), but they anti-commute with themselves and with other Grassmann variables (i.e. $\left\{q,\tilde{q}\right\}=0$ ). Grassmann variables have, for example, proven useful in calculating the correlation functions of kinetic Ising models [343], even when the system is driven far from equilibrium [344].

Instead of using Grassmann variables for the basis, let us consider a representation of the basis defined by the unit row vectors $\langle 0|\,=(0,1)$ and $\langle 1|\,=(1,0)$ , and by the unit column vectors $|0\rangle ={{(0,1)}^{\intercal}}$ and $|1\rangle ={{(1,0)}^{\intercal}}$ . Obviously, these vectors fulfil the orthogonality condition $\langle m|n\rangle ={{\delta}_{m,n}}$ , and ${\sum}_{n}|n\rangle \langle n|\,={\mathbb {1}}$ is the 2-by-2 unit matrix. The anti-commutation relation {a, c}  =  1 is met by the creation and annihilation matrices

$c=\left(\begin{matrix}0&1\\0&0\end{matrix}\right),\qquad a=\left(\begin{matrix}0&0\\1&0\end{matrix}\right)$ (91)

The creation matrix acts as $c|0\rangle =\,|1\rangle $ and $c|1\rangle =\mathbf{0}$ , and the annihilation matrix as $a|1\rangle =\,|0\rangle $ and $a|0\rangle =\mathbf{0}$ (zero vector). Consequently, the matrix product ca fulfils $ca|0\rangle =\mathbf{0}$ and $ca|1\rangle =|1\rangle $ , and thus it serves the same purpose as the Kronecker delta ${{\delta}_{n,1}}$ in (88). Analogously, the operator ac serves the same purpose as ${{\delta}_{n,0}}$ . The master equation (86) can therefore be written in a form resembling the flow equation (90), namely as

$\partial_\tau|g\rangle=\left[\gamma\left(c-1\right)ac+\mu\left(a-1\right)ca\right]|g\rangle$ (92)

In this form, the stochastic process mimics a spin-1/2 problem (especially, if the creation and annihilation matrices are written in terms of the Pauli spin matrices ${{\sigma}_{x}}$ , ${{\sigma}_{y}}$ , and ${{\sigma}_{z}}$ ). More information on spin-representations of master equations is provided in [66, 337, 338, 345–348]. A spin-representation has, for example, been employed in the analysis of reaction–diffusion master equations via the density matrix renormalization group [349, 350]. In their study of directed percolation of excluding particles on the one-dimensional lattice $\mathbb{Z}$ , Brunel, Oerding, and van Wijland performed a Jordan–Wigner transformation of the spatially extended 'spin' matrices ${{c}_{i}}:=\sigma _{i}^{+}$ and ${{a}_{i}}:=\sigma _{i}^{-}$ ($i\in \mathbb{Z}$ ) [29]. The transformed operators fulfil anti-commutation relations not only locally but also non-locally, and thus represent the stochastic process in terms of a 'fermionic' (field) theory. While the Jordan–Wigner transformation provides an exact reformulation of the stochastic process, its applicability is largely limited to systems with one spatial dimension. Moreover, the transformation requires that the stochastic process is first rewritten in terms of a spin-1/2 chain (typically possible only for processes with a single species). Based on the (coherent) eigenstates of the resulting fermionic creation and annihilation operators, Brunel et al then derived a path integral representation of averaged observables. The paths of these integrals proceed along the values of Grassmann variables. Further information on the Jordan–Wigner transformation, Grassmann path integrals, and alternative approaches can be found in [26–28, 30, 31, 351–353].

2.3. Methods for the analysis of the generating function's flow equation

In the following, we outline various approaches that have recently been proposed for the analysis of the generating function's flow equation.

2.3.1. A spectral method for the computation of stationary distributions.

In their study of a linear transcriptional regulatory cascade of genes and proteins, Walczak, Mugler, and Wiggins developed a spectral method for the computation of stationary probability distributions [34]. They described the regulatory cascade on a coarse-grained level in terms of the copy numbers of certain chemical 'species' at its individual steps. By imposing a Markov approximation, the dynamics of the cascade was reduced to a succession of two-species master equations. The solution of each master equation served as input for the next equation downstream. Each of the reduced master equations allowed for the following processes: First, each of the master equation's two species $i\in \left\{1,2\right\}$ is produced in a one-step process whose rate ${{\gamma}_{i}}\left({{n}_{1}}\right)$ depends only on the copy number n1 of the species coming earlier along the cascade. Second, each of the two species degrades at a constant per-capita rate ${{\mu}_{i}}$ . With $\boldsymbol{n}:={{\left({{n}_{1}},{{n}_{2}}\right)}^{\intercal}}\in \mathbb{N}_{0}^{2}$ , the corresponding master equation reads

$\partial_\tau p(\tau,\boldsymbol{n}|\centerdot)=\sum_{i\in\{1,2\}}\Big[\gamma_i\left(n_1-\delta_{i,1}\right)p\left(\tau,\boldsymbol{n}-\widehat{\boldsymbol{e}}_i|\centerdot\right)-\gamma_i\left(n_1\right)p(\tau,\boldsymbol{n}|\centerdot)+\mu_i\left(n_i+1\right)p\left(\tau,\boldsymbol{n}+\widehat{\boldsymbol{e}}_i|\centerdot\right)-\mu_i\,n_i\,p(\tau,\boldsymbol{n}|\centerdot)\Big]$ (93)

Here, the unit vector ${{\widehat{\boldsymbol{e}}}_{i}}$ points in the direction of the ith species. The master equation can be cast into a flow equation for the generating function $|g(\tau |\centerdot )\rangle ={\sum}_{\boldsymbol{n}}p\left(\tau,\boldsymbol{n}|\centerdot \right)|\boldsymbol{n}\rangle $ by following the steps in section 2.1. For this purpose, we choose the basis function as a multivariate extension of the basis function from the introduction to section 2, i.e. as $|\boldsymbol{n}{{\rangle}_{\boldsymbol{q}}}:=\,|{{n}_{1}}{{\rangle}_{{{q}_{1}}}}|{{n}_{2}}{{\rangle}_{{{q}_{2}}}}$ with

Equation (94)

The values of the auxiliary parameters ${{\bar{\gamma}}_{1}}$ and ${{\bar{\gamma}}_{2}}$ are only specified in a numerical implementation of the method and affect its stability. Information on how their values are chosen is provided in [34]. For our present purposes, the values of the parameters remain unspecified. By differentiating the generating function with respect to time, one finds that the generating function obeys the flow equation ${{\partial}_{\tau}}|g\rangle =\left({{\mathcal{Q}}_{0}}+{{\mathcal{Q}}_{1}}\right)|g\rangle $ with the transition operators

Equation (95)

Equation (96)

Here, the two new operators ${{\hat{\gamma}}_{1}}$ and ${{\hat{\gamma}}_{2}}$ are defined in terms of their actions ${{\hat{\gamma}}_{i}}|{{n}_{1}}\rangle ={{\gamma}_{i}}\left({{n}_{1}}\right)|{{n}_{1}}\rangle $ on the basis functions. Surprisingly, the explicit form of these operators is not needed. Thus, the spectral method even allows for non-polynomial growth rates ${{\gamma}_{i}}\left({{n}_{1}}\right)$ .

The operator ${{\mathcal{Q}}_{0}}$ has the same form as the transition operator of the bi-directional reaction $\varnothing \rightleftharpoons A$ from the introductory example. Without the perturbation ${{\mathcal{Q}}_{1}}$ , the above flow equation could thus be solved by extending the previous ansatz (38) to two species. To accommodate ${{\mathcal{Q}}_{1}}$ as well, it proves useful to generalize that ansatz to

$|g(\tau|\centerdot)\rangle=\sum_{\boldsymbol{k}\in\mathbb{N}_0^2}G_{\boldsymbol{k}}(\tau|\centerdot)\,|\boldsymbol{k}\rangle\rangle$ (97)

with yet to be determined expansion coefficients ${{G}_{\boldsymbol{k}}}(\tau |\centerdot )$ . The auxiliary ket is defined as $|\boldsymbol{k}\rangle {{\rangle}_{\boldsymbol{q}}}:=\,|{{k}_{1}}\rangle {{\rangle}_{{{q}_{1}}}}|{{k}_{2}}\rangle {{\rangle}_{{{q}_{2}}}}$ with $|{{k}_{i}}\rangle {{\rangle}_{{{q}_{i}}}}=q_{i}^{{{k}_{i}}}$ and is orthogonal to the bra $\langle \langle \boldsymbol{k}|:=\langle \langle {{k}_{1}}|\langle \langle {{k}_{2}}|$ with $\langle \langle {{k}_{i}}|\,f:=\frac{1}{{{k}_{i}}!}\partial _{{{q}_{i}}}^{{{k}_{i}}}\,f\left({{q}_{i}}\right){{|}_{{{q}_{i}}=0}}$ . These bras can be used to extract the expansion coefficients via ${{G}_{\boldsymbol{k}}}(\tau |\centerdot )=\langle \langle \boldsymbol{k}|g(\tau |\centerdot )\rangle $ . Differentiating this coefficient with respect to time and imposing stationarity eventually results in a recurrence relation for ${{G}_{\boldsymbol{k}}}$ that can be solved iteratively. The steady-state probability distribution of the stochastic process is then recovered from the generating function via the inverse transformation (48).

In [34], Walczak et al computed the steady-state distribution of the transcriptional regulatory cascade by solving the recurrence relation for $G_{\boldsymbol{k}}^{s}$ numerically. The resulting distribution was compared to distributions acquired via an iterative method and via the stochastic simulation algorithm (SSA) of Gillespie. The spectral method was found to be about $10^8$ times faster than the SSA in achieving the same accuracy. But as we already mentioned in the introductory section 1.4, the SSA generally performs poorly in the estimation of full distributions, especially in the estimation of their tails. As the spectral method has only been applied to cascades with steady-state copy numbers below $n\approx 30$ thus far, it may be challenged by a direct integration of the two-species master equation (after introducing a reasonable cut-off in the copy numbers). The integration of  ∼1000 coupled ODEs does not pose a problem for modern integrators and the integration provides the full temporal dynamics of the process (see [271] for efficient algorithms). In its current formulation, the spectral method is limited to the evaluation of steady-state distributions and to simple one-step birth-death dynamics. It would be interesting to advance the method for the application to more complex processes (possibly with spatial degrees of freedom), and to extend its scope to the temporal evolution of distributions.

2.3.2. WKB approximations and related approaches.

The probability distribution describing the transcriptional regulatory cascade from the previous section approaches a non-trivial stationary shape in the asymptotic time limit $\tau \to \infty $ (see figure 2 in [34]). Often, however, a non-trivial shape of the probability distribution persists only transiently and is said to be quasi-stationary or metastable. The lifetime and shape of such a distribution can often be approximated in terms of a WKB approximation [84]. A WKB approximation of a jump process starts out from an exponential (eikonal) ansatz for the shape of the metastable probability distribution ('real-space' approach) or for the generating function discussed in the previous sections ('momentum-space' approach). Information on the real-space approach can be found in [63, 64, 146, 171, 172, 177, 354–363]. The recent review of Assaf and Meerson provides an in-depth discussion of the applicability of the real- and momentum-space approaches [364].

In the following, we outline the momentum-space WKB approximation for a system in which particles of type A annihilate in the binary reaction $2\,A\to \varnothing $ with rate coefficient μ, and are replenished in the linear reaction $A\to 2\,A$ with rate coefficient $\gamma \gg \mu $ . According to a deterministic model of the combined processes with rate equation ${{\partial}_{\tau}}\bar{n}=\gamma \bar{n}-2\mu {{\bar{n}}^{2}}$ , the particle number $\bar{n}$ converges to an asymptotic value ${{\bar{n}}_{\infty}}=\gamma /(2\mu )\gg 1$ . However, a numerical integration of the (truncated) master equation of the stochastic process shows that the mean particle number stays close to ${{\bar{n}}_{\infty}}$ only for a long but finite time (see figure 3). Asymptotically, all particles become trapped in the 'absorbing' state n  =  0. Consequently, the conditional probability distribution converges to $p\left(\infty,n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,0}}$ . Up to a pre-exponential factor, the time after which the absorbing state is reached can be readily estimated using a momentum-space WKB approximation as shown below [35]. The value of the pre-exponential factor was determined by Turner and Malek-Mansour using a recurrence relation [365], by Kessler and Shnerb using a real-space WKB approximation [357], and by Assaf and Meerson upon combining the generating function technique with Sturm–Liouville theory [366] (the latter method was developed in [36, 37]; see also [367]). Using this method, Assaf and Meerson also succeeded in computing the shape of the metastable distribution. Moreover, Assaf et al showed how a momentum-space WKB approximation can be used to determine mean extinction times for processes with time-modulated rate coefficients [38].


Figure 3. Comparison of the deterministic particle number $\bar{n}(\tau )$ (blue dashed line) and the mean particle number $\langle n\rangle (\tau )$ of the stochastic model (orange line) of the combined processes $2\,A\to \varnothing $ and $A\to 2\,A$ . The annihilation rate coefficient was set to $\mu =1$ , the growth rate coefficient to $\gamma =150$ . The deterministic trajectory started out from $\bar{n}(0)=40$ , the numerical integration of the master equation from $p(0,n|0,40)={{\delta}_{n,40}}$ . The deterministic trajectory converged to ${{\bar{n}}_{\infty}}=75$ for very large times, whereas the mean particle number approached a quasi-stationary value close to ${{\bar{n}}_{\infty}}$ before converging to the absorbing state n  =  0. The inset shows the conditional probability distribution at time $\tau ={{10}^{16}}$ . At this time, a significant share of the probability had already accumulated in the absorbing state and the distribution was bimodal.


Upon rescaling time as $\gamma \,\tau \to \tau $ , the chemical master equation (24) of the combined processes $A\to 2\,A$ and $2\,A\to \varnothing $ translates into the flow equation

$\partial_\tau g=\left(q^2-q\right)\partial_q g+\frac{1}{2\bar{n}_\infty}\left(1-q^2\right)\partial_q^2 g$ (98)

of the ordinary generating function $g(\tau ;q|\centerdot )={\sum}_{n}{{q}^{n}}p(\tau,n|\centerdot )$ (see sections 2.1 and 2.2.2). A WKB approximation of the flow equation can be performed by inserting the ansatz

$g\left(\tau;q|\centerdot\right)=\mathrm{e}^{S\left(\tau,q\right)}$ (99)

with the 'action'

$S\left(\tau,q\right):=2\bar{n}_\infty\left(S_0\left(\tau,q\right)+\bar{n}_\infty^{-1}S_1\left(\tau,q\right)+O\left(\bar{n}_\infty^{-2}\right)\right)$ (100)

into the equation, followed by a successive analysis of terms that are of the same order with respect to the power of the small parameter $1/{{\bar{n}}_{\infty}}$ . Note that the exponential ansatz (99) often includes a minus sign in front of the action, which we omit for convenience. The pre-factor '2' of the action (100) is also included just for convenience. Upon inserting the ansatz (99) into the flow equation (98), one obtains the equality

$\partial_\tau S_0=H\left(q,\partial_q S_0\right)+O\left(\bar{n}_\infty^{-1}\right)$ (101)

with the 'Hamiltonian'

$H(q,x):=\left(q^2-q\right)x+\left(1-q^2\right)x^2$ (102)

Thus, at leading order of $1/{{\bar{n}}_{\infty}}$ , we have obtained a closed equation for S0, which has the form of a Hamilton–Jacobi equation [368]. A Hamilton–Jacobi equation can be solved by the method of characteristics [368], with the characteristic curves q(s) and x(s) obeying Hamilton's equations

$\partial_s q=-\partial_x H=-\left(q^2-q\right)-2\left(1-q^2\right)x$ (103)

$\partial_s x=\partial_q H=\left(2q-1\right)x-2q\,x^2$ (104)

These equations are solved, for example, by q(s)  =  1 with x(s) a solution of ${{\partial}_{s}}x=x-2{{x}^{2}}$ . Note that this equation corresponds to a rescaled rate equation. The Hamiltonian vanishes along the characteristic curve because H(1, x)  =  0. Further 'zero-energy' lines of the Hamiltonian are given by x  =  0 and by $x(q)=\frac{q}{1+q}$ . These lines partition the phase portrait of Hamilton's equations into separate regions as shown in figure 4. The path $\left(q,\frac{q}{1+q}\right)$ from the 'active state' $(q,x)=\left(1,\frac{1}{2}\right)$ to the 'passive state' (0,0) constitutes the 'optimal path to extinction' (see below) [35, 38, 369].


Figure 4. Phase portrait of Hamilton's equations (103) and (104). The Hamiltonian H(q, x) in (102) vanishes along 'zero-energy' lines (orange dotted lines). These lines connect fixed points of Hamilton's equations (green disks). Hamilton's equations reduce to the (rescaled) rate equation ${{\partial}_{s}}x=x-2{{x}^{2}}$ along the line connecting the fixed points (1, 0) and (1, 1/2). The zero-energy line connecting the fixed point (1, 1/2) to (0, 0) denotes the 'optimal path to extinction' and can be used to approximate the mean extinction time of the process.


In addition to Hamilton's equations, the method of characteristics implies that the action S0 obeys

$\partial_s S_0=-\left(-x\,\partial_s q-H(q,x)\right)$ (105)

along characteristic curves (the minus signs are only included to emphasize a similarity with an action encountered later in section 7.1). According to Elgart and Kamenev, the (negative) value of the action (100) along the optimal path to extinction determines the logarithm of the mean extinction time $\bar{\tau}$ from the metastable state in leading order of ${{\bar{n}}_{\infty}}$ [35]. As this path proceeds along a zero-energy line, the action (100) evaluates to

$S=2\bar{n}_\infty\int_1^0\mathrm{d}q\,\frac{q}{1+q}=-2\bar{n}_\infty\left(1-\ln 2\right)$ (106)

Upon returning to the original time scale via $\tau \to \gamma \,\tau $ , the mean extinction time follows as

$\bar{\tau}=\frac{A}{\gamma}\,\mathrm{e}^{2\bar{n}_\infty\left(1-\ln 2\right)}$ (107)

with pre-exponential factor A. The value of this factor has been determined in [357, 365, 366] and reads, in terms of our rate coefficients, $A=2\sqrt{\pi /{{{\bar{n}}}_{\infty}}}$ . More information on the above procedure can be found in [35, 38]. The method has also been applied to classic epidemiological models [369] and to a variant of the Verhulst logistic model [370] (exact results on this model with added immigration have been obtained in [371] using the generating function technique).

2.3.3. A variational method.

The generating function's flow equation can also be analysed using a variational method as proposed by Sasai and Wolynes [39, 372]. Their approach is based on a method that Eyink had previously developed (primarily) for Fokker–Planck equations [83]. Instead of dealing with the flow equation ${{\partial}_{\tau}}|g\rangle ={{\tilde{\mathcal{Q}}}_{\tau}}|g\rangle $ of the generating function in its differential form (or in terms of its spin matrix representation from section 2.2.4), the variational method involves a functional variation of the 'effective action' $ \Gamma ={\int}_{{{t}_{0}}}^{t}\text{d}\tau \,\langle {{\psi}^{L}}|\left({{\partial}_{\tau}}-{{\tilde{\mathcal{Q}}}_{\tau}}\right)|{{\psi}^{R}}\rangle $ with $|{{\psi}^{R}}\rangle =|g\rangle $ . To perform this variation, one requires an ansatz for the objects $\langle {{\psi}^{L}}|$ and $|{{\psi}^{R}}\rangle $ , being parametrized by the variables ${{\left\{\alpha _{i}^{L}\right\}}_{i=1,\ldots,S}}$ and ${{\left\{\alpha _{i}^{R}\right\}}_{i=1,\ldots,S}}$ . The values of these parameters are determined by requiring that the action is stationary with respect to variations of the parameters. Besides its application to networks of genetic switches [39], the variational method has been applied to signalling in enzymatic cascades in [372]. An extension of the method to multivariate processes is described in [373]. Whether or not the variational method provides useful information mainly depends on making the right ansatz for $\langle {{\psi}^{L}}|$ and $|{{\psi}^{R}}\rangle $ . The method itself does not suggest their choice. It remains to be seen whether the method can be applied to processes for which only little is known about the generating function.

2.4. Résumé

In the present section, we formulated general conditions under which the forward master equation (10) can be transformed into a linear partial differential equation, a 'flow equation', obeyed by the probability generating function

$|g\left(\tau|t_0,n_0\right)\rangle=\sum_n p\left(\tau,n|t_0,n_0\right)\,|n\rangle$ (108)

First, the conditions (C) and (O) require that there exists a complete and orthogonal basis comprising a set of basis functions $\left\{|n\rangle \right\}$ and a set of basis functionals $\left\{\langle n|\right\}$ . The basis functionals recover the conditional probability distribution via $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)=\langle n|g\left(\tau |{{t}_{0}},{{n}_{0}}\right)\rangle $ and the right choice of the basis functions may help to obtain a simplified flow equation. We introduced different bases for the study of random walks, of chemical reactions, and of processes with locally excluding particles in section 2.2. Moreover, the conditions (E) and (Q) require that there exist two differential operators: a basis evolution operator ${{\mathcal{E}}_{\tau}}$ encoding the possible time-dependence of the basis, and a transition operator ${{\mathcal{Q}}_{\tau}}$ encoding the actual dynamics of the process. The generating function (108) then obeys the flow equation

$\partial_\tau|g\left(\tau|t_0,n_0\right)\rangle=\left(\mathcal{E}_\tau+\mathcal{Q}_\tau\right)|g\left(\tau|t_0,n_0\right)\rangle$ (109)

Various methods have recently been proposed for the study of such a flow equation and were outlined in section 2.3. These methods include the variational approach of Eyink [83] and of Sasai and Wolynes [39], WKB approximations and spectral formulations of Elgart and Kamenev and of Assaf and Meerson [35–37], and the spectral method of Walczak, Mugler, and Wiggins [34]. Thus far, most of these methods have only been applied to systems without spatial degrees of freedom and with only one or a few types of particles. Future research is needed to overcome these limitations. Later, in section 5, we show how the solution of the flow equation (109) can be represented by a path integral. The evaluation of the path integral is demonstrated in computing the generating function of general linear processes. Moreover, we explain in section 7 how this path integral connects to a recent method of Elgart and Kamenev for the computation of rare event probabilities [35]. One can also use the path integral to derive a path integral representation of averaged observables. But a simpler route to this representation starts out from a different flow equation, a flow equation obeyed by the 'marginalized distribution'.

3. The marginalized distribution and the probability generating functional

In the following, we discuss two further ways of casting the forward and backward master equations into linear PDEs. The first of these flow equations is obeyed by a 'marginalized distribution' and is easily derived from the backward master equation (12). The equation proves useful in the computation of mean extinction times. Moreover, it provides a most direct route to path integral representations of the conditional probability distribution and of averaged observables. To our knowledge, the flow equation of the marginalized distribution has not been considered thus far. In section 3.4, we then introduce a probability generating 'functional' whose flow equation is derived from the forward master equation. The transformation mapping the functional to the conditional probability distribution is shown to generalize the Poisson representation of Gardiner and Chaturvedi [32, 33].

3.1. Flow of the marginalized distribution

The generalized probability generating function (45) was defined by summing the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ over a set of basis functions, the kets $\left\{|n\rangle \right\}$ . In the following, we consider the distribution $p\left(t,n|\tau,{{n}_{0}}\right)$ instead, i.e. we fix the final time t while keeping the initial time τ variable. By summing this distribution over a set of basis functions ${{\{|{{n}_{0}}\rangle}_{\tau}}\}$ , one can define the series

$|\,p(t,n|\tau)\rangle:=\sum_{n_0}p\left(t,n|\tau,n_0\right)\,|n_0\rangle_\tau$ (110)

As before, the subscript denoting the time-dependence of the basis function is typically dropped. The variables n and n0 again represent states from some countable state space. Assuming that the basis functions $\left\{|n\rangle \right\}$ and appropriately chosen basis functionals $\left\{\langle n|\right\}$ form a (C)omplete and (O)rthogonal basis, the conditional probability distribution can be recovered from (110) via

$p\left(t,n|\tau,n_0\right)=\langle n_0|\,p(t,n|\tau)\rangle$ (111)

We call the function $|\,p(t,n|\tau )\rangle $ a 'marginalized distribution' because it proves most useful when the summation in (110) constitutes a marginalization of the conditional probability distribution $p\left(t,n|\tau,{{n}_{0}}\right)$ over a probability distribution $|{{n}_{0}}\rangle $ . The marginalized distribution then represents a 'single-time distribution' with respect to the random variable n in the sense of section 1.3. A basis function to which these considerations apply is the 'Poisson basis function' $|{{n}_{0}}{{\rangle}_{x}}:=\frac{{{x}^{{{n}_{0}}}}{{\text{e}}^{-x}}}{{{n}_{0}}!}$ . We make heavy use of this basis function in the study of chemical reactions in section 3.2.2. Since the definition of the marginalized distribution in (110) does not affect the random variable n, the marginalized distribution of course solves the forward master equation ${{\partial}_{t}}|\,p(t,n|\tau )\rangle ={\sum}_{m}{{Q}_{t}}(n,m)|\,p(t,m|\tau )\rangle $ with the initial condition $|\,p(\tau,n|\tau )\rangle =\,|{{n}_{0}}{{\rangle}_{\tau}}$ . In the following, we formulate conditions under which $|\,p\rangle $ also obeys a linear PDE evolving backward in time.

Before proceeding, let us briefly note that if the basis function of the marginalized distribution is not chosen as a probability distribution, the name marginalized 'distribution' is somewhat of a misnomer; that is, for example, the case for the Fourier basis $|{{n}_{0}}{{\rangle}_{x}}:={{\text{e}}^{\text{i}{{n}_{0}}x}}$ , which we consider in section 3.2.1.

The derivation of the linear PDE obeyed by the marginalized distribution proceeds analogously to the derivation in section 2.1. But instead of employing the forward master equation, we now employ the backward master equation ${{\partial}_{-\tau}}p(t|\tau )=p(t|\tau ){{Q}_{\tau}}$ for this purpose (recall that $p(t|\tau )$ is the matrix with elements $p\left(t,n|\tau,{{n}_{0}}\right)$ ). Upon differentiating the marginalized distribution (110) with respect to the time parameter τ, one obtains the equation

$\partial_{-\tau}|\,p(t,n|\tau)\rangle=\sum_{n_0}p\left(t,n|\tau,n_0\right)\left(\sum_m Q_\tau^\intercal\left(m,n_0\right)|m\rangle-\partial_\tau|n_0\rangle\right)$ (112)

The rate $Q_{\tau}^{\intercal}\left(m,{{n}_{0}}\right)={{Q}_{\tau}}\left({{n}_{0}},m\right)$ represents an element of the transposed transition matrix $Q_{\tau}^{\intercal}$ .

As in section 2.1, two differential operators are required to turn the above expression into a linear PDE. First, we require a basis evolution operator ${{\mathcal{E}}_{\tau}}\left(x,{{\partial}_{x}}\right)$ fulfilling ${{\mathcal{E}}_{\tau}}|{{n}_{0}}\rangle ={{\partial}_{\tau}}|{{n}_{0}}\rangle $ for all values of n0. The previous evolution operator (E) serves this purpose. A second differential operator $\mathcal{Q}_{\tau}^{\dagger}\left(x,{{\partial}_{x}}\right)$ is required to encode the information stored in the transition matrix. This operator should be a power series in ${{\partial}_{x}}$ and fulfil, for all n0,

$\mathcal{Q}_\tau^\dagger\left(x,\partial_x\right)|n_0\rangle=\sum_m Q_\tau^\intercal\left(m,n_0\right)\,|m\rangle$ (113, condition (${{\text{Q}}^{\intercal}}$))

By the (C)ompleteness of the basis, one could also define this operator constructively as

$\mathcal{Q}_\tau^\dagger:=\sum_{m,n_0}Q_\tau^\intercal\left(m,n_0\right)\,|m\rangle\langle n_0|$ (114)

We wrote these expressions in terms of the transposed transition matrix because for the unit column vectors $|m\rangle ={{\widehat{\boldsymbol{e}}}_{m}}$ and the unit row vectors $\langle n|=\widehat{\boldsymbol{e}}_{n}^{\intercal}$ , $\mathcal{Q}_{\tau}^{\dagger}$ and $Q_{\tau}^{\intercal}$ coincide (with $m,n\in {{\mathbb{N}}_{0}}$ ). As we mostly consider bases whose kets and bras represent functions and functionals, we call $\mathcal{Q}_{\tau}^{\dagger}$ the 'adjoint' transition operator in the following. Often, an operator ${{O}^{\dagger}}\left(x,{{\partial}_{x}}\right)$ is said to be the adjoint of an operator $O\left(x,{{\partial}_{x}}\right)$ if the following relation holds with respect to two test functions f and g:

$\int_{\mathbb{R}}\mathrm{d}x\,\left(O^\dagger f\right)(x)\,g(x)=\int_{\mathbb{R}}\mathrm{d}x\,f(x)\left(O g\right)(x)$ (115)

However, whether an operator ${{\mathcal{Q}}_{\tau}}$ complementing the above $\mathcal{Q}_{\tau}^{\dagger}$ actually exists will not be important in the following (except in our discussion of the Poisson representation in section 3.5).

Provided that both a basis evolution operator ${{\mathcal{E}}_{\tau}}$ and an adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}$ are found for a particular process, it follows from the backward-time equation (112) that the marginalized distribution $|\,p(t,n|\tau )\rangle $ obeys the flow equation

$\partial_{-\tau}|\,p(t,n|\tau)\rangle=\left(\mathcal{Q}_\tau^\dagger-\mathcal{E}_\tau\right)|\,p(t,n|\tau)\rangle$ (116)

The evolution of this equation proceeds backward in time, starting out from the final condition $|\,p(t,n|t)\rangle =\,|n\rangle $ .

In section 4, we show how the flow equation (116) can be solved in terms of a 'backward' path integral. Provided that the basis function $|n\rangle $ is chosen as a probability distribution, this path integral represents a true probability distribution: the marginalized distribution. The fact that the backward path integral represents a probability distribution distinguishes it from the 'forward' path integral in section 5. The forward path integral represents the probability generating function (45). Both the forward path integral and the backward path integral can be used to derive a path integral representation of averaged observables, as we show in section 6. The derivation of this representation from the backward path integral, however, is significantly easier. In fact, the path integral representation of the average of an observable A will follow directly by summing the backward path integral representation of the marginalized distribution over A(n), i.e. via $\langle A\rangle ={\sum}_{n}A(n)|\,p(t,n|\tau )\rangle $ . Note that this average also obeys the flow equation (116). In section 3.3, we demonstrate how the flow equation can be used to compute mean extinction times.

Before introducing bases for the analysis of different stochastic processes, let us briefly note that if a process is homogeneous in time, its marginalized distribution depends only on the difference $t-\tau $ . The above flow equation can then be rewritten so that it evolves $|\,p(\tau,n|0)\rangle $ forward in time τ, starting out from the initial condition $|\,p(0,n|0)\rangle =\,|n\rangle $ . In section 3.3, we make use of this property to compute mean extinction times.

3.2. Bases for particular stochastic processes

To demonstrate the application of the marginalized distribution (110), let us reconsider the random walk from section 2.2.1 and the chemical reaction from section 2.2.2. The 'Poisson basis function' introduced in the latter section will be employed in the computation of mean extinction times in section 3.3. Moreover, it will allow us to recover the Poisson representation of Gardiner and Chaturvedi [32, 33] in section 3.5.

3.2.1. Random walks.

The following solution of the random walk largely parallels the previous derivation in section 2.2.1. In particular, we again use the orthogonal and complete Fourier basis $|n{{\rangle}_{x}}={{\text{e}}^{\text{i}nx}}$ and $\langle n|\,f={\int}_{-\pi}^{\pi}\frac{\text{d}x}{2\pi}{{\text{e}}^{-\text{i}nx}}f(x)$ with $n\in \mathbb{Z}$ . Due to the time-independence of the basis, the (E)volution operator ${{\mathcal{E}}_{\tau}}$ is zero. The condition (${{\text{Q}}^{\intercal}}$ ) on the adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}$ is specified by the transition matrix (59) of the process and reads

$\mathcal{Q}_\tau^\dagger\left(x,\partial_x\right)|n_0\rangle=r_\tau\left(|n_0-1\rangle-|n_0\rangle\right)+l_\tau\left(|n_0+1\rangle-|n_0\rangle\right)$ (117)

As before, we employ the operators $c(x)={{\text{e}}^{\text{i}x}}$ and $a(x)={{\text{e}}^{-\text{i}x}}$ , which act on the basis functions as $c|n\rangle =\,|n+1\rangle $ and $a|n\rangle =\,|n-1\rangle $ , respectively. Therefore, the adjoint transition operator with the above property can be defined as

$\mathcal{Q}_\tau^\dagger(x)=r_\tau\left(a-1\right)+l_\tau\left(c-1\right)=r_\tau\left(\mathrm{e}^{-\mathrm{i}x}-1\right)+l_\tau\left(\mathrm{e}^{\mathrm{i}x}-1\right)$ (118)

As this operator does not contain any derivatives, it is self-adjoint in the sense of (115). Due to a mismatch in signs, it is, however, not the adjoint of the previous operator ${{\mathcal{Q}}_{\tau}}$ in (61). This mismatch could be corrected by redefining the above basis function as $|n{{\rangle}_{x}}:={{\text{e}}^{-\text{i}nx}}$ . Ignoring this circumstance, the flow equation of the marginalized distribution follows as

$\partial_{-\tau}|\,p(t,n|\tau)\rangle=\left[r_\tau\left(\mathrm{e}^{-\mathrm{i}x}-1\right)+l_\tau\left(\mathrm{e}^{\mathrm{i}x}-1\right)\right]|\,p(t,n|\tau)\rangle$ (119)

and is solved by

$|\,p(t,n|\tau)\rangle_x=\exp\left(\mathrm{i}nx+\left(\mathrm{e}^{-\mathrm{i}x}-1\right)\int_\tau^t\mathrm{d}s\,r_s+\left(\mathrm{e}^{\mathrm{i}x}-1\right)\int_\tau^t\mathrm{d}s\,l_s\right)$ (120)

The conditional probability distribution is recovered via the inverse Fourier transformation $p\left(t,n|\tau,{{n}_{0}}\right)=\langle {{n}_{0}}|\,p\rangle $ . Upon inserting the explicit representation of the basis, the derivation proceeds as in appendix C (with the substitutions $x\to -q$ , $\tau \to {{t}_{0}}$ and $t\to \tau $ ). Eventually, one recovers a Skellam distribution as the solution of the process.

3.2.2. Chemical reactions.

To prepare the computation of mean extinction times in the next section as well as the derivation of the Poisson representation in section 3.5, we now reconsider processes that can be decomposed additively into chemical reactions of the form $k\,A\to l\,A$ . Our later derivation of a path integral representation of averaged observables is also restricted to such processes. The state space of the number of molecules is again ${{\mathbb{N}}_{0}}$ . Analogous to section 2.2.2, we require an (O)rthogonal and (C)omplete basis, as well as an (E)volution operator ${{\mathcal{E}}_{\tau}}$ and an adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}$ (condition (${{\text{Q}}^{\intercal}}$ )) to specify the flow equation of the marginalized distribution.

As discussed in section 2.2.2, the elements of the transition rate matrix of the reaction $k\,A\to l\,A$ with rate coefficient ${{\gamma}_{\tau}}$ are given by ${{Q}_{\tau}}(m,n)={{\gamma}_{\tau}}{{(n)}_{k}}\left({{\delta}_{m,n-k+l}}-{{\delta}_{m,n}}\right)$ . The condition (${{\text{Q}}^{\intercal}}$ ) on the adjoint transition operator therefore reads

$\mathcal{Q}_\tau^\dagger\left(x,\partial_x\right)|n_0\rangle=\gamma_\tau\left(\left(n_0+k-l\right)_k|n_0+k-l\rangle-\left(n_0\right)_k|n_0\rangle\right)$ (121)

This condition is met by

$\mathcal{Q}_\tau^\dagger\left(x,\partial_x\right)=\gamma_\tau\left(c^k a^l-c^k a^k\right)$ (122)

provided that there exist operators c and a fulfilling

$c\,|n\rangle=(n+1)\,|n+1\rangle$ (123)

$a\,|n\rangle=|n-1\rangle$ (124)

respectively. We again call c the creation and a the annihilation operator, even though the pre-factors in the above relations differ from the ones in (65) and (66). Similar relations also hold with respect to the basis functionals, namely $\langle n|c=n\langle n-1|$ and $\langle n|a=\langle n+1|$ (assuming the orthogonality of the basis). The operators also fulfil the commutation relation [a, c]  =  1.

The actions of the creation and annihilation operators on the basis functions hint at how the basis functions and functionals from section 2.2.2 can be adapted to meet the present requirements. In particular, the relations (123) and (124) can be fulfilled by moving the factorial from the basis functional (76) to the basis function (70) so that

$|n\rangle_x:=\frac{\left(\zeta x+\tilde{x}\right)^n}{n!}\,\mathrm{e}^{-\tilde{q}\left(\zeta x+\tilde{x}\right)}$ (125)

$\langle n|\,f:=\left[\left(\zeta^{-1}\partial_x+\tilde{q}\right)^n f(x)\right]_{\zeta x+\tilde{x}=0}$ (126)

Let us briefly note that we changed the argument of the basis function from q to x as compared to section 2.2.2 because both the generating function approach and the marginalized distribution approach thereby result in the same path integral representation of averaged observables (see section 6). Apart from this notational change, the basis evolution operator

$\mathcal{E}_\tau=\left(\partial_\tau\tilde{x}\right)\left(a-\tilde{q}\right)-\left(\partial_\tau\tilde{q}\right)c$ (127)

keeps the form it had in (77). The creation and annihilation operators also keep their previous forms in (73) and (74), i.e.

$c\left(x\right)=\zeta x+\tilde{x}$ (128)

$a\left(x,\partial_x\right)=\zeta^{-1}\partial_x+\tilde{q}$ (129)

Despite their similar appearance, the operator $\mathcal{Q}_{\tau}^{\dagger}$ in (122) is not the adjoint of the operator ${{\mathcal{Q}}_{\tau}}$ in (64). Nevertheless, the two operators fulfil ${{\mathcal{Q}}_{\tau}}(q,x)=\mathcal{Q}_{\tau}^{\dagger}(x,q)$ for scalar arguments. This relation is essentially the reason why we interchanged the letters x and q as compared to section 2.2.2 (see section 6.1).

The choice of the parameters $\zeta \ne 0$ , $\tilde{x}(\tau )$ , and $\tilde{q}(\tau )$ in the basis function (125) depends on the problem at hand. In section 7, $\tilde{x}$ and $\tilde{q}$ will act as 'stationary' or 'extremal' paths, with x and an auxiliary variable q being deviations from them. For $\zeta :=\tilde{q}:=1$ and $\tilde{x}:=0$ , the basis function instead simplifies to the Poisson distribution

$|n\rangle_x=\frac{x^n\,\mathrm{e}^{-x}}{n!}$ (130)

In the following, we make heavy use of this 'Poisson basis function'. It will play a crucial role in the formulation of a path integral representation of averaged observables in section 6 and in recovering the Poisson representation in section 3.5.

Let us demonstrate the use of the Poisson basis function for the linear decay process $A\to \varnothing $ with rate coefficient ${{\mu}_{\tau}}$ . For the above choice of ζ, $\tilde{x}$ , and $\tilde{q}$ , the creation operator (128) reads c  =  x and the annihilation operator (129) reads $a={{\partial}_{x}}\,+\,1$ . With the transition operator (122), the flow equation (116) obeyed by the marginalized distribution $|\,p(t,n|\tau ){{\rangle}_{x}}$ follows as

$\partial_{-\tau}|\,p(t,n|\tau)\rangle_x=-\mu_\tau\,x\,\partial_x|\,p(t,n|\tau)\rangle_x$ (131)

This equation is solved by the Poisson distribution

$|\,p(t,n|\tau)\rangle_x=\frac{\left(\alpha_{t,\tau}\,x\right)^n\,\mathrm{e}^{-\alpha_{t,\tau}\,x}}{n!}$ (132)

whose mean ${{\alpha}_{t,\tau}}\,x$ decays proportionally to ${{\alpha}_{t,\tau}}:={{\text{e}}^{-{\int}_{\tau}^{t}\text{d}s\,{{\mu}_{s}}}}$ . To interpret this solution, let us recall that the sum in the definition of the marginalized distribution (110) does not affect the particle number n. Therefore, upon relabelling the time parameters, the above solution (132) also solves the forward master equation of the process, namely

$\partial_\tau|\,p\left(\tau,n|t_0\right)\rangle=\mu_\tau\left[(n+1)\,|\,p\left(\tau,n+1|t_0\right)\rangle-n\,|\,p\left(\tau,n|t_0\right)\rangle\right]$ (133)

Unlike the conditional distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ , however, the marginalized distribution $|\,p\left(\tau,n|{{t}_{0}}\right){{\rangle}_{x}}$ describes the dynamics of a population whose particle number at time t0 is Poisson distributed with mean x. The conditional distribution is recovered from it via the inverse transformation $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)=\langle {{n}_{0}}|\,p\left(\tau,n|{{t}_{0}}\right)\rangle $ with $\langle {{n}_{0}}|\,f={{\left({{\partial}_{x}}+1\right)}^{{{n}_{0}}}}f(x){{|}_{x=0}}$ . This transformation results in the binomial distribution

$p\left(\tau,n|t_0,n_0\right)=\binom{n_0}{n}\,\alpha_{\tau,t_0}^{\,n}\left(1-\alpha_{\tau,t_0}\right)^{n_0-n}$ (134)

Both the mean value ${{\text{e}}^{-{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,{{\mu}_{s}}}}{{n}_{0}}$ and the variance $\left(1-{{\text{e}}^{-{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,{{\mu}_{s}}}}\right)\,{{\text{e}}^{-{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,{{\mu}_{s}}}}{{n}_{0}}$ of this distribution decay exponentially for large times, provided that ${{\mu}_{s}}>0$ .

If at most two reactants and two products are involved in a reaction, the flow equation (116) of the marginalized distribution has the mathematical form of a backward Fokker–Planck equation. That is, for example, the case for the coagulation reaction $2\,A\to A$ . For the Poisson basis function, the marginalized distribution $|\,p(t,n|\tau ){{\rangle}_{x}}$ of this process obeys the flow equation

$\partial_{-\tau}|\,p(t,n|\tau)\rangle_x=\left[\alpha_\tau(x)\,\partial_x+\frac{1}{2}\beta_\tau(x)\,\partial_x^2\right]|\,p(t,n|\tau)\rangle_x$ (135)

with the drift coefficient ${{\alpha}_{\tau}}(x):=-{{\mu}_{\tau}}{{x}^{2}}$ and the diffusion coefficient ${{\beta}_{\tau}}(x):=-2{{\mu}_{\tau}}{{x}^{2}}$ . Unlike the diffusion coefficient of the 'true' backward Fokker–Planck equation (2), this diffusion coefficient may be negative (here it is negative for all $x\in \mathbb{R}\backslash \{0\}$ ). The final condition $|\,p(t,n|t){{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ and the Feynman–Kac formula (A.3) imply that the above flow equation is solved by

$|\,p(t,n|\tau)\rangle_x=\left\langle\!\left\langle\,\frac{x(t)^n\,\mathrm{e}^{-x(t)}}{n!}\,\right\rangle\!\right\rangle_W$ (136)

Here, x(s) obeys the Itô SDE

$\mathrm{d}x(s)=-\mu_s\,x(s)^2\,\mathrm{d}s+\sqrt{-2\mu_s\,x(s)^2}\;\mathrm{d}W(s)$ (137)

which evolves x(s) from $x(\tau )=x$ to x(t). The symbol ${{\langle \langle \centerdot \rangle \rangle}_{W}}$ represents an average over realizations of the Wiener process W. Since the diffusion coefficient ${{\beta}_{\tau}}(x)=-2{{\mu}_{\tau}}{{x}^{2}}$ of the coagulation process can take on negative values, the sample paths of the SDE may acquire imaginary components. This circumstance does not prevent the use of (136) for the calculation of $|\,p(t,n|\tau )\rangle $ , although it may complicate numerical evaluations. Recently, Wiese attempted the evaluation of the average in (136) via the generation of sample paths [336]. Over a short time interval $[\tau,t]$ , he found a good agreement between the resulting distribution and a distribution effectively acquired via the stochastic simulation algorithm (SSA) from section 1.3. Over larger time intervals, however, the integration of the SDE encountered problems regarding its numerical convergence. Future research is needed to overcome this limitation. Moreover, it remains an open challenge to specify the boundary conditions of the PDE (135) to enable its direct numerical integration (an analogous problem is encountered for the flow equation of the generating function, see [366]).

3.3. Mean extinction times

One often wishes to know the mean time at which a process first hits some target in state space. Such a target could, for example, be a state in which no more particles are left in the system. If the particles only replenish through auto-catalysis, the process will then come to a halt. The mean time after which that happens is called the mean extinction time. For Markov processes with continuous sample paths, mean extinction times and, more generally, first-passage times, are commonly calculated with the help of the backward Fokker–Planck equation (2). For jump processes, one can use the backward master equation for this purpose. The calculation, however, is typically feasible only for one-step processes and involves the solution of a recurrence relation [110, 215]. In [374], Drummond et al recently showed how the calculation can be simplified using the Poisson representation of Gardiner and Chaturvedi [32, 33]. We introduce this representation in section 3.5. In the following, we outline how mean extinction times can instead be inferred in an analogous way using the marginalized distribution from the previous sections. For the Poisson basis function $|{{n}_{0}}{{\rangle}_{x}}=\frac{{{x}^{{{n}_{0}}}}{{\text{e}}^{-x}}}{{{n}_{0}}!}$ , the marginalized distribution is a true (single-time) probability distribution and is more easily interpreted than the integral kernel of the Poisson representation.

We again consider a process that can be decomposed additively into chemical reactions of the form $k\,A\to l\,A$ . In addition, the transition matrix $\mathcal{Q}_{\tau}^{\dagger}={{\mathcal{Q}}^{\dagger}}$ of the process shall now be time-independent and the particles in the system shall be Poisson distributed with mean x at time $\tau =0$ . Consequently, the probability of finding n particles in the system at time $\tau \geqslant 0$ is described by the marginalized distribution $|\,p(\tau,n|0){{\rangle}_{x}}$ , provided that its basis function is chosen as $|{{n}_{0}}{{\rangle}_{x}}:=\frac{{{x}^{{{n}_{0}}}}{{\text{e}}^{-x}}}{{{n}_{0}}!}$ (see the definition (110) of the marginalized distribution). Since the Poisson basis function is independent of time and the process under consideration homogeneous in time, the marginalized distribution $|\,p(\tau,n|0){{\rangle}_{x}}$ obeys the forward-time flow equation (see (116))

$\partial_\tau|\,p(\tau,n|0)\rangle_x=\mathcal{Q}^\dagger\left(x,\partial_x\right)|\,p(\tau,n|0)\rangle_x$ (138)

We define the probability of finding the system in an 'active' state with n  >  0 particles at time $\tau \geqslant 0$ as

$\alpha(\tau;x):=\sum_{n>0}|\,p(\tau,n|0)\rangle_x=1-|\,p(\tau,0|0)\rangle_x$ (139)

Over time, this probability flows into the 'absorbing' state n  =  0 at rate $f(\tau,x):=-{{\partial}_{\tau}}\alpha (\tau ;x)$ . Since $f(\tau,x)\text{d}\tau $ is the probability of becoming absorbed during the time interval $[\tau,\tau +\text{d}\tau ]$ , one can define the mean extinction time as ${{\langle \tau \rangle}_{x}}:={\int}_{0}^{\infty}\text{d}\tau \,\tau \,f(\tau,x)$ . An integration by parts transforms this average into

$\langle\tau\rangle_x=\int_0^\infty\mathrm{d}\tau\,\alpha(\tau;x)$ (140)

The boundary terms of the integration by parts vanished because we assume that all particles are eventually absorbed (in particular, we assume ${{\lim}_{\tau \to \infty}}\tau \alpha (\tau ;x)=0$ ). The definition of the probability $\alpha (\tau ;x)$ in (139) implies that it fulfils the same forward-time flow equation as the marginalized distribution $|\,p(\tau,n|0){{\rangle}_{x}}$ . Since ${{\lim}_{\tau \to \infty}}\alpha (\tau ;x)=0$ , the mean extinction time (140) therefore obeys

$\mathcal{Q}^\dagger\left(x,\partial_x\right)\langle\tau\rangle_x=-\alpha(0;x)=-\left(1-\mathrm{e}^{-x}\right)$ (141)

The last equality follows from the fact that the system starts out in the absorbing state with probability ${{\text{e}}^{-x}}$ . The above derivation readily extends to higher moments of the extinction time.

For the linear decay process $A\to \varnothing $ with decay rate coefficient μ, the equation (141) for the mean extinction time reads

$\mu\,x\,\partial_x\langle\tau\rangle_x=1-\mathrm{e}^{-x}$ (142)

This equation implies that the mean extinction time $\mu {{\langle \tau \rangle}_{x}}$ increases logarithmically with the particles' mean initial distance x from the absorbing state (see figure 5). The explicit solution of the equation is $\mu {{\langle \tau \rangle}_{x}}=\ln x+\gamma -\text{Ei}(-x)$ with Euler's constant γ and the exponential integral $\text{Ei}$ (6.2.6 in [335]; the exponential integral ensures that ${{\langle \tau \rangle}_{0}}=0$ ). The application of the functional $\langle {{n}_{0}}|\,f={{\left({{\partial}_{x}}\,+\,1\right)}^{{{n}_{0}}}}f(x){{|}_{x=0}}$ to the solution returns the mean extinction time of particles whose initial number is not Poisson distributed but fixed to some value ${{n}_{0}}\geqslant 0$ . The corresponding mean extinction time $\mu {{\langle \tau \rangle}_{{{n}_{0}}}}$ is given by the harmonic number ${{H}_{{{n}_{0}}}}:={\sum}_{i=1}^{{{n}_{0}}}\frac{1}{i}$ . One can also infer this result directly from the backward master equation using the methods discussed in [110, 215].


Figure 5. Mean extinction time $\mu \langle \tau \rangle $ of the linear decay process $A\to \varnothing $ with decay rate coefficient μ. The blue line represents the mean extinction time if the number of particles is initially Poisson distributed with mean $x\in {{\mathbb{R}}_{\geqslant 0}}$ ($\mu {{\langle \tau \rangle}_{x}}=\ln x+\gamma -\text{Ei}(-x)$ ). The red circles represent the mean extinction time if the initial number of particles is set to ${{n}_{0}}\in {{\mathbb{N}}_{0}}$ ($\mu {{\langle \tau \rangle}_{{{n}_{0}}}}={{H}_{{{n}_{0}}}}$ ).


Drummond et al have extended the above computation to chemical reactions with at most two reactants and two products [374]. For these reactions, the flow equation (138) has the mathematical form of a (forward) Fokker–Planck equation.

3.4. Flow of the generating functional

Both the probability generating function (45) and the marginalized distribution (110) were defined by summing the conditional probability distribution over a set of basis functions, either at the final or at the initial time. In addition, one can define the 'probability generating functional'

$\langle g\left(\tau|t_0,n_0\right)|:=\sum_n p\left(\tau,n|t_0,n_0\right)\,\langle n|$ (143)

by summing the conditional distribution over the set $\left\{\langle n|\right\}$ of basis functionals. The definition of the generating functional is meant with respect to test functions that can be expanded in the basis functions $\left\{|n\rangle \right\}$ . As before, we assume that the basis functions and functionals constitute a (C)omplete and (O)rthogonal basis. The probability generating functional 'generates' probabilities in the sense that

$p\left(\tau,n|t_0,n_0\right)=\langle g\left(\tau|t_0,n_0\right)|n\rangle$ (144)

In the next section, we show how this inverse transformation reduces to the Poisson representation of Gardiner and Chaturvedi [32, 33] upon choosing the basis function $|n\rangle $ as a Poisson distribution.

The derivation of the (functional) flow equation of $\langle g|$ proceeds analogously to the derivations in sections 2.1 and 3.1. Differentiation of its definition (143) with respect to τ and use of the forward master equation ${{\partial}_{\tau}}p\left(\tau |{{t}_{0}}\right)={{Q}_{\tau}}p\left(\tau |{{t}_{0}}\right)$ result in

$\partial_\tau\langle g\left(\tau|t_0,n_0\right)|=\sum_n\sum_m Q_\tau(n,m)\,p\left(\tau,m|t_0,n_0\right)\,\langle n|+\sum_n p\left(\tau,n|t_0,n_0\right)\,\partial_\tau\langle n|$ (145)

This equation can be cast into a linear PDE by using the basis (E)volution operator ${{\mathcal{E}}_{\tau}}$ and the adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}$ (condition (${{\text{Q}}^{\intercal}}$ )). In particular, since the Kronecker delta $\langle n|m\rangle ={{\delta}_{n,m}}$ is independent of time, it holds that $\langle n|{{\partial}_{\tau}}|m\rangle =\left(-{{\partial}_{\tau}}\langle n|\right)|m\rangle $ . Consequently, the basis evolution operator in condition (E) fulfils

$\langle n|\,\mathcal{E}_\tau=-\,\partial_\tau\langle n|$ (146)

with respect to functions that can be expanded in the basis functions. Moreover, the (O)rthogonality and (C)ompleteness of the basis imply that the adjoint transition operator in condition (${{\text{Q}}^{\intercal}}$ ) acts on basis functionals as

$\langle n|\,\mathcal{Q}_\tau^\dagger=\sum_m Q_\tau(m,n)\,\langle m|$ (147)

Both of the above expressions hold for all values of n. If the two differential operators ${{\mathcal{E}}_{\tau}}$ and $\mathcal{Q}_{\tau}^{\dagger}$ exist, the generating functional $\langle g\left(\tau |{{t}_{0}},{{n}_{0}}\right)|$ obeys the functional flow equation

$\partial_\tau\langle g\left(\tau|t_0,n_0\right)|=\langle g\left(\tau|t_0,n_0\right)|\left(\mathcal{Q}_\tau^\dagger-\mathcal{E}_\tau\right)$ (148)

with initial condition $\langle g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)|=\langle {{n}_{0}}|$ . Note that this flow equation employs the same operator as the flow equation (116) of the marginalized distribution. Both equations can be used to derive the 'backward' path integral representation considered in section 4.

As a side note, let us remark that the flow equation ${{\partial}_{\tau}}|g\rangle =\left({{\mathcal{E}}_{\tau}}+{{\mathcal{Q}}_{\tau}}\right)|g\rangle $ of the generating function in (54) also admits a functional counterpart. In particular, the series

$\langle\,p(t,n|\tau)|:=\sum_{n_0}p\left(t,n|\tau,n_0\right)\,\langle n_0|$ (149)

obeys the flow equation

$\partial_{-\tau}\langle\,p(t,n|\tau)|=\langle\,p(t,n|\tau)|\left(\mathcal{E}_\tau+\mathcal{Q}_\tau\right)$ (150)

with final value $\langle \,p(t,n|t)|\,=\langle n|$ . The corresponding inverse transformation reads $p\left(t,n|\tau,{{n}_{0}}\right)=\langle \,p(t,n|\tau )|{{n}_{0}}\rangle $ . Both the flow equation obeyed by the generating function and the above flow equation can be used to derive the 'forward' path integral representation in section 5. Further uses of the series (149) remain to be explored.

3.5. The Poisson representation

Assuming that the action of the generating functional $\langle g|$ on a function f can be expressed in terms of an integral kernel (also called g) as

$\langle g\left(\tau|t_0,n_0\right)|\,f=\int_{\mathbb{R}}\mathrm{d}x\,g\left(\tau;x|t_0,n_0\right)f(x)$ (151)

the insertion of the Poisson basis function $|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ into the inverse transformation (144) results in

$p\left(\tau,n|t_0,n_0\right)=\int_{\mathbb{R}}\mathrm{d}x\,g\left(\tau;x|t_0,n_0\right)\,\frac{x^n\,\mathrm{e}^{-x}}{n!}$ (152)

A representation of the probability distribution of this form is called a 'Poisson representation' and was first proposed by Gardiner and Chaturvedi [32, 33]. Since the integration in (152) proceeds along the real line, the above representation is referred to as a 'real' Poisson representation [375]. Although the use of a real variable may seem convenient, it typically results in the kernel being a 'generalized function', i.e. a distribution. For example, the initial condition $p\left({{t}_{0}},n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ is recovered for the integral kernel $g\left({{t}_{0}};x|{{t}_{0}},{{n}_{0}}\right)=\delta (x){{\left({{\partial}_{x}}+1\right)}^{{{n}_{0}}}}$ . By the definition

$\int_{\mathbb{R}}\mathrm{d}x\left[\partial_x^j\,\delta(x)\right]f(x):=(-1)^j\left(\partial_x^j\,f\right)(0)$ (153)

of distributional derivatives (1.16.12 in [335], $j\in {{\mathbb{N}}_{0}}$ ), this kernel can also be written as $\left[{{\left(1-{{\partial}_{x}}\right)}^{{{n}_{0}}}}\delta (x)\right]$ . The integral kernel $\delta \left(x-{{x}_{0}}\right)$ instead results for a Poisson distribution with mean x0. In [32], Gardiner and Chaturvedi used the real Poisson representation to calculate steady-state probability distributions of various elementary reactions. Furthermore, the real Poisson representation was employed by Elderfield [376] to derive a stochastic path integral representation. Droz and McKane [377] later argued that this representation is equivalent to a path integral representation based on Doi's Fock space algebra. Our discussion of the backward and forward path integral representations in sections 4 and 5 clarifies the similarities and differences between these two approaches.

To circumvent the use of generalized functions, various alternatives to the real Poisson representation have been proposed: the 'complex' and 'positive' Poisson representations [110, 375] as well as the 'gauge' Poisson representation [378]. The former two representations as well as the real representation are discussed in the book of Gardiner [110]. The complex Poisson representation is obtained by continuing the Poisson basis function in (152) into the complex domain and performing the integration along a closed path $\mathcal{C}$ encircling 0 (once in counter-clockwise direction). Upon using Cauchy's differentiation formula to redefine the basis functional as

Equation (154)

it becomes apparent that the initial condition $p\left({{t}_{0}},n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ is then recovered for the kernel $g\left({{t}_{0}};x|{{t}_{0}},{{n}_{0}}\right)=\frac{{{n}_{0}}!}{2\pi \text{i}}\frac{{{\text{e}}^{x}}}{{{x}^{{{n}_{0}}+1}}}$ . As the interpretation of this kernel is also not straightforward, we refrain from calling it a 'quasi-probability distribution' [32].

Let us exemplify the flow equation obeyed by the integral kernel for the reaction $k\,A\to l\,A$ with rate coefficient ${{\gamma}_{\tau}}$ . The flow equation can be inferred from the corresponding flow equation (148) of the generating functional and the kernel's definition in (151). Since we employ the Poisson basis function, the results from section 3.2.2 imply that the adjoint transition operator (122) reads

Equation (155)

Consequently, one finds that the integral kernel $g(\tau ;x|\centerdot )$ obeys the flow equation

Equation (156)

Equation (157)

To arrive at this equation, we performed repeated integrations by parts while ignoring any potential boundary terms (see the definition of the adjoint operator in (115)). The importance of boundary terms is discussed in [378].

Thus far, most studies employing the Poisson representation have focused on networks of bimolecular reactions ${\sum}_{j}{{k}_{j}}\,{{A}_{j}}\to {\sum}_{j}{{l}_{j}}\,{{A}_{j}}$ with ${\sum}_{j}{{k}_{j}}\leqslant 2$ and ${\sum}_{j}{{l}_{j}}\leqslant 2$ . For these networks, the flow equation (156) assumes the mathematical form of a forward Fokker–Planck equation with the derivatives being of at most second order (the corresponding diffusion coefficient may be negative). Attempts have been made to map the resulting equation to an Itô SDE [32], but it should be explored whether this procedure is supported by the Feynman–Kac formula in appendix A. The numerical integration of an SDE with a potentially negative diffusion coefficient was attempted for the bi-directional reaction $2\,A\leftrightharpoons \varnothing $ in [379], but the value of this approach has remained inconclusive. Recently, an exponential ansatz for the integral kernel $g(\tau ;x|\centerdot )$ has been considered to approximate its flow equation in the limit of weak noise [380] (see section 2.3.2). Moreover, a gauge Poisson representation was recently employed in a study of the coagulation reaction $2\,A\to A$ [381].

3.6. Résumé

In the previous section, we formulated general conditions under which the master equation can be transformed into a partial differential equation obeyed by the probability generating function (45). In the present section, we complemented this flow equation by a backward-time flow equation obeyed by the marginalized distribution (110) and by a functional flow equation obeyed by the probability generating functional (143). Whereas the marginalized distribution

Equation (158)

was defined as the sum of the conditional probability distribution over a set of basis functions, the generating functional

Equation (159)

was defined as the sum of the distribution over a set of basis functionals. In section 3.2, we introduced (O)rthogonal and (C)omplete basis functions and functionals for the study of different stochastic processes, including the Poisson basis for the study of chemical reactions ($\,|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ and $\langle m|\,f={{\left({{\partial}_{x}}+1\right)}^{m}}f(x){{|}_{x=0}}$ ). Provided that there also exist a basis (E)volution operator ${{\mathcal{E}}_{\tau}}$ and an adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}$ (condition (${{\text{Q}}^{\intercal}}$ )), the marginalized distribution obeys the backward-time flow equation

Equation (160)

with final condition $|\,p(t,n|t)\rangle =\,|n\rangle $ . Moreover, the probability generating functional (143) then obeys the functional flow equation (148). The inverse transformation $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)=\langle g\left(\tau |{{t}_{0}},{{n}_{0}}\right)|n\rangle $ of this functional generalizes the Poisson representation of Gardiner and Chaturvedi [32, 33] as shown in section 3.5. In section 3.3, we showed how the flow equation (160) obeyed by the marginalized distribution can be used to compute mean extinction times. Furthermore, the equation will prove useful in the derivation of path integral representations of the master equation and of averaged observables in sections 4 and 6. Future studies could explore whether the flow equation obeyed by the marginalized distribution can be evaluated in terms of WKB approximations or spectral methods.

4. The backward path integral representation

In the previous two sections, we showed how the forward and backward master equations can be cast into four linear PDEs for the series expansions (41)–(44). In this section, as well as in section 5, the solutions of these four equations are expressed in terms of two path integrals. Upon applying inverse transformations, the path integrals provide distinct representations of the conditional probability distribution solving the master equations. The flow equations obeyed by the generating function (41) and by the series (44) will lead us to the 'forward' path integral representation

Equation (161)

and the flow equations obeyed by the marginalized distribution (42) and by the generating functional (43) to the 'backward' path integral representation

Equation (162)

The meanings of the integral signs ${\rlap{-} \int}_{[{{t}_{0}}}^{t)}$ and ${\rlap{-} \int}_{({{t}_{0}}}^{\,t]}$ , as well as of the exponential weights are explained below. We choose the above terms for the two representations because the forward path integral representation propagates the basis function $|{{n}_{0}}{{\rangle}_{{{t}_{0}}}}$ to time t, where it is then acted upon by the functional $\langle n{{|}_{t}}$ . Counter-intuitively, however, this procedure requires us to solve a stochastic differential equation proceeding backward in time (see sections 5.2 and 5.3). Analogously, the backward path integral representation propagates the basis function $|n{{\rangle}_{t}}$ backward in time to t0, where one then applies the functional $\langle {{n}_{0}}{{|}_{{{t}_{0}}}}$ . This procedure requires us to solve an ordinary, forward-time Itô SDE (see section 4.3). The name 'adjoint' path integral representation may also be used for the backward representation. As we will see below, its 'action' ${{\mathcal{S}}^{\dagger}}$ involves the adjoint transition operator $\tilde{\mathcal{Q}}_{\tau}^{\dagger}$ .

To make the following derivations as explicit as possible, we now assume the discrete variables n and n0 to be one-dimensional. Likewise, the variables x and q of the associated flow equations are one-dimensional real variables. The derivation below employs an exponential representation of the Dirac delta function. Although such a representation also exists for Grassmann variables [382], we do not consider that case. The extension of the following derivations to processes with multiple types of particles is straightforward and proceeds analogously to the inclusion of spatial degrees of freedom. A process with spatial degrees of freedom is considered in sections 4.5.2–4.5.4.

After deriving the backward path integral representation in section 4.1, we exemplify how this representation can be used to solve the bi-directional reaction $\varnothing \rightleftharpoons A$ (section 4.2), the pair generating process $\varnothing \to 2\,A$ (section 4.5.1), and a process with diffusion and linear decay (sections 4.5.2–4.5.4). The forward path integral representation is derived in section 5 and is exemplified in deriving the generating function of linear processes $A\to l\,A$ with $l\geqslant 0$ . For the linear growth process $A\to 2\,A$ , we recover a negative Binomial distribution as the solution of the master equation. Observables of the particle number are considered in section 6.

As an intermezzo, we show in sections 4.4 and 5.3 how one can derive path integral representations for jump processes with continuous state spaces (or processes whose transition rates can be extended to such spaces). The corresponding derivations are based on Kramers–Moyal expansions of the backward and forward master equations. Since the backward and forward Fokker–Planck equations constitute special cases of these expansions, we recover a classic path integral representation whose development goes back to works of Martin, Siggia, and Rose [16], de Dominicis [17], Janssen [18, 19], and Bausch, Janssen, and Wagner [19]. Moreover, we recover the Feynman–Kac formula from appendix A, an Onsager–Machlup representation [79], and Wiener's path integral for Brownian motion [80, 81].

Before starting out with the derivations of the two path integral representations (161) and (162), let us note that these representations only apply if their underlying transition operators ${{\tilde{\mathcal{Q}}}_{\tau}}$ or $\tilde{\mathcal{Q}}_{\tau}^{\dagger}$ can be written as power series in their arguments (x and ${{\partial}_{x}}$ for the backward path integral and q and ${{\partial}_{q}}$ for the forward path integral; the names of the variables differ because both path integrals then lead to the same path integral representation of averaged observables in section 6 without requiring a change of variable names). In addition, the power series need to be 'normal-ordered', meaning that in every summand, all the x are to the left of all the ${{\partial}_{x}}$ (all the q to the left of all the ${{\partial}_{q}}$ ). This order can be established by the repeated use of the commutation relation $\left[{{\partial}_{x}},x\right]\,f(x)=f(x)$ (or, more directly, by invoking ${{\left({{\partial}_{x}}\,x\right)}^{n}}f(x)={\sum}_{m=0}^{n}\left\{\begin{array}{c} n+1 \\ m+1 \end{array}\right\}{{x}^{m}}\partial _{x}^{m}f(x)$ or ${{\left(x{{\partial}_{x}}\right)}^{n}}f(x)={\sum}_{m=0}^{n}\left\{\begin{array}{c} n \\ m \end{array}\right\}{{x}^{m}}\partial _{x}^{m}f(x)$ , with the curly braces representing Stirling numbers of the second kind; see section 26.8 in [335]). The transition operator ${{\mathcal{Q}}_{\tau}}={{\gamma}_{\tau}}\left({{c}^{l}}-{{c}^{k}}\right){{a}^{k}}$ in (64) and the adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}={{\gamma}_{\tau}}{{c}^{k}}\left({{a}^{l}}-{{a}^{k}}\right)$ in (122) of the chemical reaction $k\,A\to l\,A$ are already in their normal-ordered forms (with the creation and annihilation operators in (73) and (74), or (128) and (129), respectively).
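The second of these normal-ordering identities can be verified symbolically. The following sketch (Python with SymPy; the helper apply_x_dx is ours and the value n  =  4 is arbitrary) applies $x{{\partial}_{x}}$ repeatedly to an unspecified function and compares the result with the Stirling-number form:

```python
# Symbolic check (SymPy) of (x d/dx)^n f(x) = sum_m S(n, m) x^m d^m f/dx^m,
# with S(n, m) a Stirling number of the second kind.
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
f = sp.Function('f')(x)

def apply_x_dx(expr, n):
    """Apply the operator (x d/dx) to expr n times."""
    for _ in range(n):
        expr = x * sp.diff(expr, x)
    return sp.expand(expr)

n = 4
lhs = apply_x_dx(f, n)
rhs = sp.expand(sum(stirling(n, m) * x**m * sp.diff(f, x, m) for m in range(n + 1)))
assert sp.simplify(lhs - rhs) == 0
```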

4.1. Derivation

We first derive the backward path integral representation (162) because its application to actual processes is more intuitive than the application of the forward path integral representation. For this purpose, we consider the flow equation ${{\partial}_{-\tau}}|\,p(\centerdot |\tau )\rangle =\tilde{\mathcal{Q}}_{\tau}^{\dagger}|\,p(\centerdot |\tau )\rangle $ in (116) obeyed by the marginalized distribution. Its final condition reads $|\,p(t,n|t)\rangle =\,|n\rangle $ . The conditional probability distribution is recovered from the marginalized distribution via $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)=\langle {{n}_{0}}|\,p\left(t,n|{{t}_{0}}\right)\rangle $ . As the first step, we split the time interval [t0,t] into N pieces ${{t}_{0}}\leqslant {{t}_{1}}\leqslant \cdots \leqslant {{t}_{N}}:=t$ of length $ \Delta t:=\left(t-{{t}_{0}}\right)/N\ll 1$ . Over the time interval $\left[{{t}_{0}},{{t}_{1}}\right]$ , the flow equation is then solved by

Equation (163)

with the generator $\mathcal{L}_{\tau}^{\dagger}:=1+\tilde{\mathcal{Q}}_{\tau}^{\dagger} \Delta t+\text{O}\left({{(\Delta t)}^{2}}\right)$ . Alternatively, the following derivation could be performed by solving the flow equation (148) obeyed by the generating functional over the time interval [tN−1, t] as (see figure 6)

Equation (164)

Figure 6. Outline of the derivation of the backward path integral representation and of its evaluation in terms of an average over the paths of an Itô stochastic differential equation.


Here, the generating functional $\langle g|$ acts on functions f(xN−1), meets the initial condition $\langle g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)|\,=\langle {{n}_{0}}|$ , and is transformed back into the conditional probability distribution via $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)=\langle g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)|n\rangle $ . Both of the above approaches result in the same path integral representation. In the following, we use equation (163) for this purpose.

As the next step of the derivation, we insert the integral representation of a Dirac delta between the generator ${{\mathcal{L}}^{\dagger}}$ and the marginalized distribution $|\,p\rangle $ in (163), turning the right-hand side of this expression into

Equation (165)

The integrations over x1 and q1 are both performed along the real line from $-\infty $ to $+\infty $ . If the adjoint transition operator $\tilde{\mathcal{Q}}_{\tau}^{\dagger}$ , and thus also $\mathcal{L}_{\tau}^{\dagger}$ , are normal-ordered power series, we can replace ${{\partial}_{{{x}_{0}}}}$ by $\text{i}{{q}_{1}}$ in the above expression and pull $\mathcal{L}_{{{t}_{0}}}^{\dagger}$ to the right of the exponential. Making use of the final condition $|\,p(t,n|t){{\rangle}_{{{x}_{N}}}}=\,|n{{\rangle}_{t,{{x}_{N}}}}$ , the above steps can be repeated until (165) reads

Equation (166)

with the abbreviation

Equation (167)

To proceed, we now replace $\mathcal{L}_{\tau}^{\dagger}=1+\tilde{\mathcal{Q}}_{\tau}^{\dagger} \Delta t+\text{O}\left({{(\Delta t)}^{2}}\right)$ by the exponential ${{\text{e}}^{\tilde{\mathcal{Q}}_{\tau}^{\dagger} \Delta t}}$ (for brevity, we drop the correction term in the following). Note that the exponential ${{\text{e}}^{\tilde{\mathcal{Q}}_{\tau}^{\dagger} \Delta t}}$ does not involve differential operators because those were all replaced by the new variables $\text{i}{{q}_{j}}$ . Whether the exponentiation can be made mathematically rigorous is an open question and may possibly be answered positively only for a restricted class of stochastic processes. Upon performing the exponentiation, one obtains the following discrete-time path integral representation of the marginalized distribution:

Equation (168)

Equation (169)

The final condition $|\,p(t,n|t){{\rangle}_{x}}=\,|n{{\rangle}_{t,x}}$ is trivially fulfilled because only N  =  0 time slices fit between t and t. The object $\mathcal{S}_{N}^{\dagger}$ is called an 'action'. The exponential factor ${{\text{e}}^{-\mathcal{S}_{N}^{\dagger}}}$ weighs the contribution of every path $\left({{x}_{1}},{{q}_{1}}\right)\to \left({{x}_{2}},{{q}_{2}}\right)\to \cdots \to \left({{x}_{N}},{{q}_{N}}\right)$ .

As the final step of the derivation, we take the continuous-time limit $N\to \infty $ , or $ \Delta t\to 0$ , at least formally. The following continuous-time expressions effectively serve as abbreviations of the above discretization scheme with the identifications $x\left({{t}_{0}}+j \Delta t\right):={{x}_{j}}$ and $q\left({{t}_{0}}+j \Delta t\right):={{q}_{j}}$ . Further comments on the discretization scheme are provided in section 4.4. Combined with the inverse transformation (111), we thus arrive at the following backward path integral representation of the conditional probability distribution:

Equation (170)

Equation (171)

Equation (172)

Here we included x(t0) as an argument of the functional $\langle {{n}_{0}}{{|}_{{{t}_{0}},x\left({{t}_{0}}\right)}}$ to express that this functional acts on $x(\tau )$ only for $\tau ={{t}_{0}}$ . Moreover, we defined ${\rlap{-} \int}_{({{t}_{0}}}^{\,t]}:={{\lim}_{N\to \infty}}{\rlap{-} \int}_{1}^{N}$ to indicate that the path integral involves integrations over x(t) and q(t), but not over x(t0) and q(t0).

The evaluation of the above path integral representation for an explicit process involves two steps. First, the path integral (171) provides the marginalized distribution $|\,p\left(t,n|{{t}_{0}}\right)\rangle $ . Second, this marginalized distribution is mapped to the conditional probability distribution $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ by the action of the functional $\langle {{n}_{0}}|$ .

4.2. Simple growth and linear decay

Let us demonstrate the above two steps for the bi-directional reaction $\varnothing \rightleftharpoons A$ . The stationary distribution of this process was already derived in the introduction to section 2. Both the growth rate coefficient γ and the decay rate coefficient μ shall be time-independent, but this assumption is easily relaxed. As the first step, the backward path integral (170) requires us to choose an appropriate basis. The time-independent Poisson basis with the basis function $|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ proves to be convenient for this purpose (see section 3.2.2). The adjoint transition operator follows from (122), (128), and (129) as ${{\tilde{\mathcal{Q}}}^{\dagger}}\left(x,{{\partial}_{x}}\right)=(\gamma -\mu x){{\partial}_{x}}$ . Insertion of this operator into the action (172) results in

Equation (173)

The integration over the path $q(\tau )$ from $\tau ={{t}_{0}}$ to $\tau =t$ is performed most easily in the discrete-time approximation. The marginalized distribution $|\,p\left(t,n|{{t}_{0}}\right){{\rangle}_{x\left({{t}_{0}}\right)}}$ with $x\left({{t}_{0}}\right)\equiv {{x}_{0}}$ thereby follows as the following product of Dirac deltas:

The function $|n{{\rangle}_{t,{{x}_{N}}}}$ is also integrated over. Upon taking the continuous-time limit $ \Delta t\to 0$ and performing the integration over the path $x(\tau )$ , one finds that the marginalized distribution is given by the Poisson distribution

Equation (174)

with $x(\tau )$ solving ${{\partial}_{\tau}}x=\gamma -\mu x$ . This equation coincides with the rate equation of the process $\varnothing \rightleftharpoons A$ . The solution $x(\tau )={{\text{e}}^{-\mu \left(\tau -{{t}_{0}}\right)}}\left(x\left({{t}_{0}}\right)-\frac{\gamma}{\mu}\right)+\frac{\gamma}{\mu}$ of the rate equation specifies the mean and the variance of the above Poisson distribution. For $\mu >0$ , both of them converge to the asymptotic value $\gamma /\mu $ . In the next section and in section 4.5, we generalize these results to more general processes.

The marginalized distribution (174) not only solves the master equation of the reaction $\varnothing \rightleftharpoons A$ , but also establishes the link between the path integral variable x and the moments of the particle number n. In particular, the mean particle number evaluates to $\langle n\rangle (t)=x(t)$ , while higher order moments can be determined via $\langle {{n}^{k}}\rangle (t)={\sum}_{l=0}^{k}\left\{\begin{array}{c} k \\ l \end{array}\right\}x{{(t)}^{l}}$ . The curly braces denote a Stirling number of the second kind (see section 26.8 in [335]).
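As a quick numerical cross-check (Python with NumPy/SciPy; all parameter values are arbitrary), the moments of the Poisson distribution (174), with its mean taken from the solution of the rate equation, indeed follow the above Stirling-number formula:

```python
# Check <n^k>(t) = sum_l S(k, l) x(t)^l against the Poisson distribution (174).
import numpy as np
from scipy.stats import poisson
from sympy.functions.combinatorial.numbers import stirling

gamma, mu, t, t0, x0 = 2.0, 0.5, 3.0, 0.0, 1.0
x_t = np.exp(-mu * (t - t0)) * (x0 - gamma / mu) + gamma / mu  # rate-equation solution

n = np.arange(200)                        # truncated state space
pmf = poisson.pmf(n, x_t)                 # Poisson distribution with mean x(t)
for k in range(1, 5):
    direct = np.sum(n**k * pmf)           # <n^k> computed directly
    via_stirling = sum(float(stirling(k, l)) * x_t**l for l in range(k + 1))
    assert np.isclose(direct, via_stirling)
```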

The conditional probability distribution can be calculated from the marginalized distribution by applying the functional $\langle {{n}_{0}}|\,f={{\left({{\partial}_{x\left({{t}_{0}}\right)}}+1\right)}^{{{n}_{0}}}}f\left(x\left({{t}_{0}}\right)\right){{|}_{x\left({{t}_{0}}\right)=0}}$ to the latter (see (126) and (129)). In the limit of a vanishing decay rate ($\mu \to 0$ ), for which $x(\tau )=x\left({{t}_{0}}\right)+\gamma \left(\tau -{{t}_{0}}\right)$ , we thereby recover the shifted Poisson distribution (80), i.e.

Equation (175)

Thus, the mean and variance of the conditional probability distribution grow linearly with time. If the particles decay but are not replenished ($\mu >0$ but $\gamma =0$ ), we instead recover the Binomial distribution (134).

4.3. Feynman–Kac formula for jump processes

We now show how the backward path integral representation (171) of the marginalized distribution can be expressed in terms of an average over the paths of an Itô stochastic differential equation (SDE). The resulting expression bears similarities with the Feynman–Kac formula (3), especially when the adjoint transition operator $\tilde{\mathcal{Q}}_{\tau}^{\dagger}\left(x,{{\partial}_{x}}\right)$ of the stochastic process under consideration is quadratic in ${{\partial}_{x}}$ . In the general case, however, functional derivatives act on the average over paths. These derivatives can, for example, be evaluated in terms of perturbation expansions as we demonstrate in section 4.5.4. The procedure outlined below serves as a general starting point for the exact or approximate evaluation of the backward path integral (171).

In the following, we consider a stochastic process whose adjoint transition operator has the generic form

Equation (176)

As before, we call ${{\alpha}_{\tau}}$ a drift and ${{\beta}_{\tau}}$ a diffusion coefficient. The object $\mathcal{P}_{\tau}^{\dagger}$ is referred to as the (adjoint) perturbation operator, or simply as the perturbation. The perturbation operator absorbs all the terms of higher order in ${{\partial}_{x}}$ and possibly also terms of lower order. Thus, the above form of $\tilde{\mathcal{Q}}_{\tau}^{\dagger}$ is not unique and ${{\alpha}_{\tau}}$ , ${{\beta}_{\tau}}$ , and $\mathcal{P}_{\tau}^{\dagger}$ should be chosen so that the evaluation of the expressions below becomes as simple as possible. If the perturbation operator $\mathcal{P}_{\tau}^{\dagger}$ is zero, those expressions simplify considerably.

As the first step, let us rewrite the backward path integral representation (171) of the marginalized distribution as

Equation (177)

where ${{\mathcal{Z}}_{Q=0,X=0}}$ represents a value of the (Q,X)-generating functional

Equation (178)

Hence, we have singled out the integrations over xN and qN before performing the continuous-time limit (see (168) and (171)). We call ${{\mathcal{Z}}_{Q,X}}$ a generating functional because a functional differentiation of ${{\mathcal{Z}}_{Q,X}}$ with respect to $Q(\tau )$ generates a factor $x(\tau )$ , and a functional differentiation of ${{\mathcal{Z}}_{Q,X}}$ with respect to $X(\tau )$ generates a factor $\text{i}q(\tau )$ .

Given the action (172) with the adjoint transition operator (176), the above properties of the (Q,X)-generating functional can now be used to rewrite this functional as (see appendix D)

Equation (179)

Equation (180)

Here, ${{\langle \langle \centerdot \rangle \rangle}_{W}}$ represents the average over realizations of a Wiener process $W(\tau )$ . This Wiener process influences the evolution of the path $x(\tau )$ through the Itô SDE

Equation (181)

The temporal evolution of $x(\tau )$ starts out from x(t0), which is determined by the argument of the marginalized distribution (177). In section 4.5.4, we demonstrate how the marginalized distribution can be evaluated in terms of a perturbation expansion of the (Q,X)-generating functional (179) for a process of diffusing particles that are also decaying. In order to perform the perturbation expansion, let us already note that the value $x(\tau )$ of the path at time τ depends on the 'source' $X\left({{\tau}^{\prime}}\right)$ only for times ${{\tau}^{\prime}}<\tau $ , and that $\frac{\delta}{\delta Q(\tau )}{\int}_{{{t}_{0}}}^{t}\text{d}{{\tau}^{\prime}}Q\left({{\tau}^{\prime}}\right)f\left({{\tau}^{\prime}}\right)=f(\tau )$ holds for all $\tau \in \left({{t}_{0}},t\right]$ . Moreover, let us note that the (Q,X)-generating functional depends on qN and x(t0) but not on xN . These properties follow from the derivation of the representation (179), which we outline in appendix D.

An important special case of the above representation is constituted by processes whose perturbation operator $\mathcal{P}_{\tau}^{\dagger}$ is zero. For Q  =  X  =  0, the generating functional (179) then simplifies to ${{\mathcal{Z}}_{0,0}}={{\text{e}}^{\text{i}{{q}_{N}}x(t)}}$ and the marginalized distribution (177) follows in terms of the Feynman–Kac like formula

Equation (182)

Here, $x(\tau )$ solves the Itô SDE

Equation (183)

with the initial value x(t0). For the Poisson basis function $|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ , the above representation of the marginalized distribution coincides with our earlier result (136), which we encountered in the discussion of the coagulation reaction $2\,A\to A$ . For the bi-directional reaction $\varnothing \rightleftharpoons A$ from the previous section with adjoint transition operator ${{\tilde{\mathcal{Q}}}^{\dagger}}\left(x,{{\partial}_{x}}\right)=(\gamma -\mu x){{\partial}_{x}}$ , the Itô SDE (183) simplifies to the deterministic rate equation ${{\partial}_{\tau}}x=\gamma -\mu x$ . Thus, no averaging is required to evaluate the marginalized distribution (182) and one recovers our earlier solution (174).

Our above discussion only applies to processes in well-mixed environments with a single type of particle. A generalization of the results to processes with multiple types of particles or with spatial degrees of freedom is straightforward. A spatial process will be discussed in section 4.5. In a multivariate generalization of the above procedure, the adjoint transition operator (176) includes a vector-valued drift coefficient $\boldsymbol{\alpha }_{{\tau}}\left(\boldsymbol{x}\right)$ and a diffusion matrix ${{\beta}_{\tau}}\left(\boldsymbol{x}\right)$ . Moreover, the derivation of the corresponding generating functional (179) requires that there exists a matrix $\sqrt{{{\beta}_{\tau}}}:={{\gamma}_{\tau}}$ fulfilling ${{\gamma}_{\tau}}\gamma _{\tau}^{\intercal}={{\beta}_{\tau}}$ . If the diffusion matrix ${{\beta}_{\tau}}$ is positive-semidefinite, its positive-semidefinite and symmetric square root $\sqrt{{{\beta}_{\tau}}}$ can be determined via diagonalization [111].
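For a symmetric and positive-semidefinite diffusion matrix, this diagonalization can be sketched in a few lines (Python with NumPy; the matrix below is an arbitrary example):

```python
# Square root of a symmetric PSD diffusion matrix via diagonalization:
# find gamma with gamma gamma^T = beta.
import numpy as np

beta = np.array([[2.0, 1.0],
                 [1.0, 2.0]])            # illustrative symmetric PSD matrix

evals, evecs = np.linalg.eigh(beta)      # eigendecomposition
sqrt_beta = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T

assert np.allclose(sqrt_beta @ sqrt_beta.T, beta)
```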

In section 4.5, we exemplify the use of our above results for various well-mixed and spatial processes. But before, let us show how the results from the previous sections can be used to derive a path integral representation for processes with continuous state spaces.

4.4. Intermezzo: the backward Kramers–Moyal expansion

The transition rate ${{w}_{\tau}}(m,n)$ denotes the rate at which probability flows from a state n to a state m, or, considering an individual sample path, the rate at which the system jumps from state n to state m. Thus, ${{\kappa}_{\tau}}(\Delta n,n):={{w}_{\tau}}(n+ \Delta n,n)$ denotes the rate at which the state n is left via jumps of size $ \Delta n$ (with $n\in {{\mathbb{N}}_{0}}$ and $ \Delta n\in \mathbb{Z}$ ). Thus far, we have only considered jumps between the states of a discrete state space. In the following, however, we derive a path integral representation for processes having a continuous state space.

4.4.1. Processes with continuous state spaces.

We consider a process whose state is characterized by a continuous variable $x\in {{\mathbb{R}}_{\geqslant 0}}$ . The change from the letter n to the letter x is purely notational and emphasizes that the state space is now continuous (the change also highlights a formal similarity between the linear PDEs discussed so far and the ones derived below). The conditional probability distribution $p\left(\tau,x|{{t}_{0}},{{x}_{0}}\right)$ describing the system shall be normalized as ${{{\int}^{}}_{\mathbb{R}}}\text{d}x\,p(\tau,x|\centerdot )=1$ and obey the master equation

Equation (184)

with the initial condition $p\left({{t}_{0}},x|{{t}_{0}},{{x}_{0}}\right)=\delta \left(x-{{x}_{0}}\right)$ . The structure of this master equation is equivalent to the structure of the master equation (11). Provided that the product ${{\kappa}_{\tau}}(\Delta x,x)p(\tau,x|\centerdot )$ is analytic in x, one can perform a Taylor expansion of the above master equation to obtain the (forward) Kramers–Moyal expansion [1, 103, 272]

Equation (185)

The 'jump moments' $\mathcal{M}_{\tau}^{(m)}$ are defined as

Equation (186)

By Pawula's theorem [104, 383, 384], the positivity of the conditional probability distribution requires that the Kramers–Moyal expansion either stops at its first or second summand, or that it does not stop at all. If the expansion stops at its second summand, it assumes the form of a (forward) Fokker–Planck equation. The drift coefficient of this Fokker–Planck equation is given by $\mathcal{M}_{\tau}^{(1)}(x)$ and its non-negative diffusion coefficient by $\mathcal{M}_{\tau}^{(2)}(x)$ . The sample paths of the Fokker–Planck equation are continuous, however, contradicting our earlier assumption of the process making discontinuous jumps in state space. For a jump process, the Kramers–Moyal expansion cannot stop. Nevertheless, a truncation of the Kramers–Moyal expansion at the level of a Fokker–Planck equation often provides a decent approximation of a process, provided that fluctuations cause only small relative changes of its state x.

The backward analogue of the master equation (184) reads

Equation (187)

with the final condition $p\left(t,x|t,{{x}_{0}}\right)=\delta \left(x-{{x}_{0}}\right)$ . Given the analyticity of $p\left(\centerdot |\tau,{{x}_{0}}\right)$ in x0, the backward master equation can be rewritten in terms of the backward Kramers–Moyal expansion

Equation (188)

with the adjoint transition operator

Equation (189)

This operator is the adjoint of the operator in the forward Kramers–Moyal expansion (185) in the sense that ${\int}^{}\text{d}x\,\left[\tilde{\mathcal{Q}}_{\tau}^{\dagger}\left(x,{{\partial}_{x}}\right)f(x)\right]g(x)={\int}^{}\text{d}xf(x)\left[{{\tilde{\mathcal{Q}}}_{\tau}}\left(x,{{\partial}_{x}}\right)g(x)\right]$ (provided that all boundary terms in the integrations by parts vanish).
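This adjointness relation is readily verified symbolically for the second-order truncation discussed in section 4.4.3. The following sketch (Python with SymPy) uses Gaussian test functions, for which all boundary terms vanish, together with the illustrative coefficients $\mathcal{M}_{\tau}^{(1)}=1-x$ and $\mathcal{M}_{\tau}^{(2)}=2$:

```python
# Check int dx [Q+ f] g = int dx f [Q g] for the second-order (Fokker-Planck)
# truncation of the Kramers-Moyal expansion; coefficients are illustrative.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2)                        # rapidly decaying test functions
g = sp.exp(-(x - 1)**2)
alpha = 1 - x                            # illustrative drift coefficient
beta = sp.Integer(2)                     # illustrative constant diffusion coefficient

Qdag_f = alpha * sp.diff(f, x) + beta / 2 * sp.diff(f, x, 2)   # adjoint operator
Q_g = -sp.diff(alpha * g, x) + sp.diff(beta / 2 * g, x, 2)     # forward operator

lhs = sp.integrate(Qdag_f * g, (x, -sp.oo, sp.oo))
rhs = sp.integrate(f * Q_g, (x, -sp.oo, sp.oo))
assert sp.simplify(lhs - rhs) == 0
```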

4.4.2. Path integral representation of the backward Kramers–Moyal expansion.

The adjoint transition operator $\tilde{\mathcal{Q}}_{\tau}^{\dagger}\left({{x}_{0}},{{\partial}_{{{x}_{0}}}}\right)$ of the backward Kramers–Moyal expansion (189) is normal-ordered with respect to x0 and ${{\partial}_{{{x}_{0}}}}$ . Moreover, the backward Kramers–Moyal expansion (188) has the same form as the flow equation (116) obeyed by the marginalized distribution. One can therefore follow the steps in section 4.1 to represent the backward Kramers–Moyal expansion by the path integral

Equation (190)

with the action

Equation (191)

The integral sign in (190) is again defined as the continuous-time limit of (167) and traces out all paths of $x(\tau )$ and $q(\tau )$ for $\tau \in \left({{t}_{0}},t\right]$ (note that $x(\tau )$ differs from x and x0, which are fixed parameters; for brevity, however, $x(\tau )$ is occasionally abbreviated as x below). A diagrammatic computation of multi-time correlation functions based on a path integral representation equivalent to (190) has recently been considered in [385].

Let us specify the path integral representation (190) for a model of the chemical reaction $k\,A\to l\,A$ with rate coefficient ${{\gamma}_{\tau}}$ (and $k,l\in {{\mathbb{N}}_{0}}$ ). As we assume the particle 'number' x to be continuous, the model is only reasonable in an approximate sense for large values of x. In defining the transition rate of the reaction, one has to ensure the non-negativity of x and the conservation of probability. A possible choice of the transition rate is

Equation (192)

Here, the Heaviside step function $ \Theta (x-k)$ ensures that a sufficient number of particles are present to engage in a reaction (and it prevents the loss of probability to negative values of x).

Assuming a smooth approximation of the Heaviside step function that vanishes for x  <  0, the jump moments (186) follow as

Equation (193)

The corresponding adjoint transition operator evaluates to

Equation (194)

Path integrals with such an operator have been noted in [386–388], but their potential use remains to be fully explored. Curiously, upon ignoring the step function, the operator (194) has the same structure as the transition operator (122) if one identifies $x{{\text{e}}^{-\text{i}q}}$ with the creation operator c, and ${{\text{e}}^{\text{i}q}}$ with the annihilation operator a (Cole-Hopf transformation [386–390]). Note, however, that the stochastic processes associated to the two transition operators differ from each other (discrete versus continuous state space; different transition rates). The connection between the two associated path integrals (171) and (190) should be further explored.

4.4.3. Path integral representation of the backward Fokker–Planck equation.

With the drift coefficient ${{\alpha}_{\tau}}(x):=\mathcal{M}_{\tau}^{(1)}(x)$ and the non-negative diffusion coefficient ${{\beta}_{\tau}}(x):=\mathcal{M}_{\tau}^{(2)}(x)$ , a truncation of the adjoint transition operator (189) at its second summand reads

Equation (195)

Thus, the corresponding Kramers–Moyal expansion (188) recovers the backward Fokker–Planck equation (2). Since the action (191) evaluates to

Equation (196)

the corresponding path integral (190) coincides with a classic path integral representation of the (backward) Fokker–Planck equation. The original development of this representation goes back to works of Martin, Siggia, and Rose [16], de Dominicis [17], Janssen [18, 19], and Bausch, Janssen, and Wagner [19]. The application of this path integral to stochastic processes is, for example, discussed in the book of Täuber [70].

The transition operator (195) of the backward Fokker–Planck equation has the same form as the transition operator (176) in section 4.3, but it does not involve a perturbation operator $\mathcal{P}_{\tau}^{\dagger}$ . Therefore, one can follow the steps in that section to evaluate the path integral (190) in terms of an average over the paths of an Itô SDE. This procedure shows that the backward Fokker–Planck equation is solved by

Equation (197)

with $x(\tau )$ solving the Itô SDE

Equation (198)

with initial value $x\left({{t}_{0}}\right)={{x}_{0}}$ . Hence, we have recovered the Feynman–Kac formula (3) (apart from a notational change in the time parameters).
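As a minimal numerical illustration (Python with NumPy; the Ornstein–Uhlenbeck coefficients ${{\alpha}_{\tau}}(x)=-x$ and ${{\beta}_{\tau}}=2D$ are an arbitrary choice), the statistics of the solution can be estimated by averaging over Euler–Maruyama sample paths of the SDE (198):

```python
# Monte Carlo sketch of the Feynman-Kac representation: statistics of the
# solution from Euler-Maruyama sample paths of dx = alpha dt + sqrt(beta) dW.
import numpy as np

rng = np.random.default_rng(0)
D, x0, t = 0.5, 1.0, 2.0
N, paths = 1000, 50_000
dt = t / N

x = np.full(paths, x0)
for _ in range(N):
    # one Euler-Maruyama step with alpha(x) = -x and beta = 2D
    x += -x * dt + np.sqrt(2 * D * dt) * rng.standard_normal(paths)

print(x.mean(), x0 * np.exp(-t))            # exact Ornstein-Uhlenbeck mean
print(x.var(), D * (1 - np.exp(-2 * t)))    # exact Ornstein-Uhlenbeck variance
```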

4.4.4. The Onsager–Machlup function.

The connection between the above path integral representation of the (backward) Fokker–Planck equation and the work of Onsager and Machlup [79] becomes apparent upon the completion of a square (as in a Hubbard-Stratonovich transformation [391, 392]). For this purpose, the diffusion coefficient ${{\beta}_{\tau}}$ in the transition operator (195) must not only be non-negative but positive(-definite). One can then complete the square in the variable q and perform the path integration over this variable. Returning to the discrete-time approximation (169) of the action, one obtains the following representation of the conditional probability distribution $p\left(t,x|{{t}_{0}},{{x}_{0}}\right)$ in terms of convolutions of Gaussian distributions:

Equation (199)

The Dirac delta function is included in the integrations. Moreover, ${{\mu}_{j}}:={{\alpha}_{{{t}_{j}}}}\left({{x}_{j}}\right) \Delta t$ acts as the mean and $\sigma _{j}^{2}:={{\beta}_{{{t}_{j}}}}\left({{x}_{j}}\right) \Delta t$ as the variance of the Gaussian distribution

Equation (200)

As before, $ \Delta t=\left(t-{{t}_{0}}\right)/N$ denotes the time intervals between ${{t}_{0}}\leqslant {{t}_{1}}\leqslant \cdots \leqslant {{t}_{N}}=t$ .

Upon identifying $x\left({{t}_{0}}+j \Delta t\right)$ with xj , the representation (199) can be rewritten as

Equation (201)

This integral proceeds only over paths $x(\tau )$ with $\tau \in \left({{t}_{0}},t\right]$ because the q-variables have already been integrated over. Paths are weighed by the exponential factor ${{\text{e}}^{-{{\mathcal{S}}^{\dagger}}}}$ with the action

Equation (202)

One may abbreviate the continuous-time limit of this action by the integral

Equation (203)

The integrand of this action is called an Onsager–Machlup function [79, 393] and the representation (201) may, consequently, be called an Onsager–Machlup representation (or 'functional' in the sense of functional integration). Note that the Onsager–Machlup representation involves the limit

Equation (204)

Mathematically rigorous formulations of the Onsager–Machlup representation have been attempted in [394–396]. For the other path integrals discussed in this review, such attempts have not yet been made to our knowledge.

4.4.5. Alternative discretization schemes.

Up to this point, we have discretized time in such a way that the evaluation of the path integrals eventually proceeds via the solution of an Itô stochastic differential equation (see section 4.3). Alternative discretization schemes have been proposed as well and are, for example, employed in stochastic thermodynamics [237]. To illustrate these schemes, let us, for simplicity, assume that the drift coefficient $\alpha (x)$ does not depend on time and that the diffusion coefficient $D:={{\beta}_{\tau}}$ is constant. A general discretization scheme, commonly called the α-scheme (we write κ for the interpolation parameter to avoid confusion with the drift coefficient), consists of shifting the argument xj of the drift coefficient in the action (202) to ${{\bar{x}}_{j}}:=\kappa \,{{x}_{j+1}}+(1-\kappa ){{x}_{j}}$ by writing

Equation (205)

(with $\kappa \in [0,1]$ ). Afterwards, the drift coefficient is expanded in powers of ${{x}_{j+1}}-{{x}_{j}}$ , which is of order $\sqrt{ \Delta t}$ along relevant paths. Following [397], it suffices to keep only the first two terms of the expansion of $\alpha \left({{x}_{j}}\right)$ so that

Equation (206)

As the next step, the integration variables x1,..., xN are transformed according to

Equation (207)

The Jacobian of this transformation vanishes everywhere except on its diagonal and sub-diagonal. Its determinant therefore contributes an additional factor to the path integral and requires a redefinition of the action (202) as

One may abbreviate the continuous-time limit of the redefined action by

Equation (208)

The Itô version of this action with $\kappa =0$ has proved to be convenient in perturbation expansions of path integrals because so-called 'closed response loops' can be omitted right from the start (see section 4.5 in [70]). Moreover, this prescription does not require a change of variables and makes the connection to the Feynman–Kac formula in appendix A most apparent. The Stratonovich version of the action with $\kappa =\frac{1}{2}$ is, for example, employed in Seifert's review on stochastic thermodynamics [237]. For a recent, more thorough discussion of the above discretization schemes, as well as of conflicting approaches, we refer the reader to [398] (the above action is discussed in the appendices).
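To illustrate that the choice of κ affects the discrete-time action and the Jacobian, but not the continuum limit of the process itself (at least for the constant diffusion coefficient assumed above), the following sketch (Python with NumPy; the cubic drift is an arbitrary example, and the predictor step is one crude way of realizing the $\kappa =\frac{1}{2}$ midpoint evaluation) integrates the same noise realizations with both prescriptions:

```python
# For constant D, the kappa = 0 (Ito) and kappa = 1/2 (midpoint) evaluation
# points of the drift yield the same statistics in the limit dt -> 0.
import numpy as np

rng = np.random.default_rng(2)

def alpha(x):
    return x - x**3                          # illustrative nonlinear drift

D, t, N, paths = 0.2, 1.0, 500, 20_000
dt = t / N

x_ito = np.full(paths, 0.5)                  # kappa = 0
x_mid = np.full(paths, 0.5)                  # kappa = 1/2
for _ in range(N):
    dW = np.sqrt(D * dt) * rng.standard_normal(paths)
    x_ito += alpha(x_ito) * dt + dW
    pred = x_mid + alpha(x_mid) * dt + dW    # predictor for x_{j+1}
    x_mid += alpha(0.5 * (x_mid + pred)) * dt + dW
print(x_ito.mean(), x_mid.mean())            # agree up to O(dt) and sampling error
```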

4.4.6. Wiener's path integral.

Before returning to processes with a discrete state space, let us briefly note how the Onsager–Machlup representation (201) relates to Wiener's path integral of Brownian motion [80, 81], as it is discussed in [399]. The Onsager–Machlup representation in fact coincides with that path integral for one-dimensional Brownian motion, for which the drift coefficient vanishes and the diffusion coefficient $D={{\beta}_{\tau}}$ is constant. Since the convolution of two Gaussian distributions is again a Gaussian distribution, with means and variances being summed, the solution of the process follows as $p\left(t,x|{{t}_{0}},{{x}_{0}}\right)={{\mathcal{G}}_{0,D\left(t-{{t}_{0}}\right)}}\left(x-{{x}_{0}}\right)$ .
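This convolution structure is easily made concrete (Python with NumPy; grid and parameters are arbitrary): a Dirac delta initial condition is propagated by N Gaussian convolution steps as in (199) and compared with the kernel ${{\mathcal{G}}_{0,D\left(t-{{t}_{0}}\right)}}$ :

```python
# Propagate a delta initial condition by N Gaussian convolution steps and
# compare with the Brownian propagator of variance D (t - t0).
import numpy as np

D, t, N = 1.0, 1.0, 100
dt = t / N
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

p = np.zeros_like(x)
p[len(x) // 2] = 1 / dx                  # delta initial condition at x0 = 0
kernel = np.exp(-x**2 / (2 * D * dt)) / np.sqrt(2 * np.pi * D * dt)
for _ in range(N):                       # one Gaussian step of the convolution chain
    p = np.convolve(p, kernel, mode='same') * dx

exact = np.exp(-x**2 / (2 * D * t)) / np.sqrt(2 * np.pi * D * t)
print(np.max(np.abs(p - exact)))         # small discretization error
```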

4.5. Further exact solutions and perturbation expansions

After this detour to processes with continuous state spaces, let us return to the evaluation of the backward path integral representation (171). In the following, we show how the method introduced in section 4.3 can be applied to several elementary jump processes. We already applied a simplified version of the method in section 4.2 to solve the bi-directional reaction $\varnothing \rightleftharpoons A$ . We now consider a generic reaction $k\,A\to l\,A$ with rate coefficient ${{\gamma}_{\tau}}$ . Using the Poisson basis function $|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ , the adjoint transition operator of this reaction can be written as (see section 3.2.2)

Equation (209)

Equation (210)

Here we employed the creation operator c  =  x and the annihilation operator $a={{\partial}_{x}}+1$ , and we performed a series expansion with respect to ${{\partial}_{x}}$ . Terms of third and higher order in ${{\partial}_{x}}$ were absorbed into the perturbation operator $\mathcal{P}_{\tau}^{\dagger}\left(x,{{\partial}_{x}}\right)$ . In the following, we approximate the marginalized distribution of the reaction $k\,A\to l\,A$ by first dropping both the diffusion coefficient and the perturbation operator from (210). Afterwards, we reintroduce the diffusion coefficient and show how the marginalized distribution follows as the average of a Poisson distribution over the paths of an Itô SDE. For the pair generation process $\varnothing \to 2\,A$ , this representation is exact because the perturbation operator associated to this process is zero (see section 4.5.1). Later, in section 4.5.4, we solve a process with a non-vanishing perturbation operator $\mathcal{P}_{\tau}^{\dagger}$ .

As a first approximation of the reaction $k\,A\to l\,A$ , let us drop all the terms of the adjoint transition operator (210) except for the drift coefficient ${{\alpha}_{\tau}}(x)={{\gamma}_{\tau}}(l-k){{x}^{k}}$ . The SDE (181) then simplifies to the deterministic rate equation ${{\partial}_{\tau}}x={{\gamma}_{\tau}}(l-k){{x}^{k}}$ of the reaction. Its solution $x(\tau )$ acts as the mean of the marginalized distribution, which, according to (177)–(180), is again given by the Poisson distribution $|\,p\left(t,n|{{t}_{0}}\right){{\rangle}_{x\left({{t}_{0}}\right)}}=\,|n{{\rangle}_{t,x(t)}}$ in (174).

Going one step further, one may keep the diffusion coefficient in (210). The marginalized distribution then reads

Equation (211)

with $x(\tau )$ solving the Itô SDE

Equation (212)

Hence, the Poisson distribution in (211) is averaged over all possible sample paths of the SDE (212), whose evolution starts out from the initial value x(t0) (see (136)). The above expression has also recently been derived by Wiese [336], although based on the forward path integral that we will discuss in section 5. Apparently, the expression under the square root in (212) is non-negative only if $k\in \left\{0,1\right\}$ or if $l\geqslant k\geqslant 2$ . If that is not the case, $x(\tau )$ strays off into the complex domain. SDEs with imaginary noise (or the corresponding Langevin equations) have been studied in several recent articles, most notably for the binary annihilation reaction $2\,A\to \varnothing $ and for the coagulation process $2\,A\to A$ [336, 400, 401]. The numerical evaluation of such SDEs, however, has often encountered severe convergence problems [336, 379]. The appearance of imaginary noise has been linked to the anti-correlation of particles in spatial systems [400], but as (212) shows, no spatial degrees of freedom are actually required for its emergence. As Wiese pointed out, imaginary noise generally appears when, over time, the marginalized distribution (211) becomes narrower than a Poisson distribution [336].

4.5.1. Pair generation.

For the pair generation process $\varnothing \to 2\,A$ with growth rate coefficient ${{\gamma}_{\tau}}$ , the drift and the (squared) diffusion coefficients agree: ${{\alpha}_{\tau}}={{\beta}_{\tau}}=2{{\gamma}_{\tau}}$ . As neither of them depends on $x(\tau )$ , the Itô SDE is readily solved by $x(t)=x\left({{t}_{0}}\right)+{\int}_{{{t}_{0}}}^{t}\text{d}\tau 2{{\gamma}_{\tau}}+{\int}_{{{t}_{0}}}^{t}\text{d}W(\tau )\sqrt{2{{\gamma}_{\tau}}}$ . After introducing the (rather daunting) parameter

Equation (213)

one can employ the binomial theorem to rewrite the right-hand side of the marginalized distribution (211) as

If the rate coefficient γ is independent of time, one can show that the parameter ${{\eta}_{k}}$ coincides with the kth moment of a Gaussian distribution with zero mean and unit variance (i.e. ${{\eta}_{k}}=0$ for odd k and ${{\eta}_{k}}=(k-1)!!$ for even k; note that the sum of two Gaussian random variables is again a Gaussian random variable, with its mean and variance following additively). The marginalized distribution $|\,p\left(t,n|{{t}_{0}}\right){{\rangle}_{x\left({{t}_{0}}\right)}}$ therefore follows as

Equation (214)

Here, $\lfloor n/2\rfloor $ represents the integral part of n/2. The marginalized distribution (214) solves the master equation of the pair generation process and is initially of Poisson shape. For x(t0)  =  0, the distribution effectively keeps that shape, although only on the set of all even numbers (the reaction $\varnothing \to 2\,A$ cannot create an odd number of particles when starting out from zero particles). Using Mathematica by Wolfram Research, we verified that the distribution also applies when ${{\gamma}_{\tau}}$ depends on time. The evolution of the distribution is shown in figure 7 for the rate coefficient ${{\gamma}_{\tau}}=1/\left(1+\sqrt{\tau}\right)$ . We refrain from computing the conditional distribution from the marginalized distribution because the computation is unwieldy and does not shed any more light on the path integral approach.
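Instead of the analytical resummation leading to (214), the marginalized distribution (211) can also be sampled directly. In the following Monte Carlo sketch (Python with NumPy/SciPy; parameter values are arbitrary), negative samples of x(t) are retained because the Poisson basis function ${{x}^{n}}{{\text{e}}^{-x}}/n!$ remains well defined for them:

```python
# Sample the marginalized distribution of pair generation, 0 -> 2A, with a
# constant rate coefficient gamma: x(t) = x0 + 2 gamma t + sqrt(2 gamma) W(t),
# averaged over the Poisson basis function x^n e^{-x} / n!.
import numpy as np
from scipy.special import factorial

rng = np.random.default_rng(1)
gamma, t, x0, samples = 0.8, 2.0, 0.0, 100_000
x_t = x0 + 2 * gamma * t + np.sqrt(2 * gamma * t) * rng.standard_normal(samples)

n = np.arange(40)
weights = x_t[None, :]**n[:, None] * np.exp(-x_t[None, :]) / factorial(n)[:, None]
pmf = weights.mean(axis=1)               # marginalized distribution (211)

print(pmf[:6].round(4))                  # odd entries vanish up to sampling error
print(pmf.sum())                         # normalization is close to 1
```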

Figure 7. Evolution of the marginalized distribution $|\,p\left(t,n|{{t}_{0}}\right)\rangle $ in (214), which solves the pair generation process $\varnothing \to 2\,A$ (with t0  : =  0). The initial mean of the distribution was chosen as x(t0)  =  0 and the rate coefficient of the reaction as ${{\gamma}_{\tau}}=1/\left(1+\sqrt{\tau}\right)$ .


4.5.2. Diffusion on networks.

The backward path integral can also be used to solve spatial processes. In fact, stochastic path integrals have been most useful in the study of such processes. If particles engaging in a chemical reaction can also diffuse in space, their density may evolve in ways that are not expected from the well-mixed, non-spatial limit. That is, for example, the case for particles that annihilate one another in the reaction $2\,A\to \varnothing $ while diffusing along a one-dimensional line. From the well-mixed limit, one would expect that the particle density decays asymptotically as ${{t}^{-1}}$ with time (see section 7), but instead it decays as ${{t}^{-1/2}}$ [130, 294, 402]. The reason behind this surprising decay law is the rapid condensation of the system into a state in which isolated particles are separated by large voids. From that point on, further decay of the particle density requires that two particles first find each other through diffusion. Therefore, the process is 'diffusion-limited' and exhibits a slower decay of the particles (see e.g. [403] on the related 'first-passage' problem). Path integral representations of the master equation, combined with renormalization group techniques, have been successfully applied to the computation of decay laws in spatially extended systems, both regarding the (universal) exponents and the pre-factors of these laws [130–132, 294, 306, 404]. For a broader discussion of systems exhibiting a transition into an absorbing state see [68]. Before turning to the particle density and, more generally, to observables of the particle number, we now show how the backward path integral helps in computing the full probability distribution of a spatial process.

As a first step, we consider a pure diffusion process with particles hopping between neighbouring nodes of a network $\mathbb{L}$ . For now, the topology of the network may be arbitrary, being either random or regular. In the next section, the network topology will be chosen as a regular lattice. In the limit of a vanishing lattice spacing (and an infinite number of lattice sites), a 'field theory' will be obtained. In section 4.5.4, the particles will also be allowed to decay.

The configuration of particles on the network may be represented by the vector $\boldsymbol{n}\in \mathbb{N}_{0}^{|\mathbb{L}|}$ , with $|\mathbb{L}|$ being the total number of network nodes. The configuration changes whenever a particle hops from some node $i\in \mathbb{L}$ to a neighbouring node $j\in {{{\mathsf {N}}}_{i}}\subset \mathbb{L}$ . The probability $p\left(\tau,\boldsymbol{n}|{{t}_{0}},\boldsymbol{n}_{{0}}\right)$ of finding the system in configuration $\boldsymbol{n}$ then obeys the master equation

Equation (215)

Here, ${{\widehat{\boldsymbol{e}}}_{i}}$ represents a unit vector that points in direction $i\in \mathbb{L}$ . Moreover, ${{\varepsilon}_{\tau,i}}$ acts as a hopping rate and may depend both on time and on the node from which a particle departs. Apparently, the above master equation has the same structure as the chemical master equation (24). It may therefore be cast into a linear PDE by extending the operators and basis functions from section 3.2.2 to multiple variables. In particular, we extend the Poisson basis function to

Equation (216)

and employ the creation and annihilation operators

Equation (217)

According to the flow equation (116), the marginalized distribution evolves via ${{\partial}_{-\tau}}|\,p(t,\boldsymbol{n}|\tau )\rangle =\tilde{\mathcal{Q}}_{\tau}^{\dagger}|\,p(t,\boldsymbol{n}|\tau )\rangle $ with the adjoint transition operator

Equation (218)

Here we introduced the discrete Laplace operator ${\underline \Delta }{{f}_{i}}:={\sum}_{j\in {{{\mathsf {N}}}_{i}}}\left(\,{{f}_{j}}-{{f}_{i}}\right)$ , which acts both on ${{\varepsilon}_{\tau,i}}$ and on xi .

By following the steps in section 4.3, one finds that the marginalized distribution is given by the multivariate Poisson distribution

Equation (219)

Its mean $\boldsymbol{x}(t)$ solves the discrete diffusion equation

Equation (220)

with the initial condition $\boldsymbol{x}\left({{t}_{0}}\right)$ . Let us note that the marginalized distribution solves the master equation (215), but with the initial number of particles being Poisson distributed locally.

The conditional distribution $p\left(t,\boldsymbol{n}|{{t}_{0}},\boldsymbol{n}_{{0}}\right)$ follows from the marginalized distribution (219) by applying the functional

Equation (221)

to it (see (126) with (129)). The evaluation requires a prior solution of the discrete diffusion equation (220). This equation can be written in matrix form as ${{\partial}_{\tau}}\boldsymbol{x}={{M}_{\tau}}\boldsymbol{x}$ with ${{M}_{\tau,ij}}=\left({{A}_{ij}}\,-\,|{{{\mathsf {N}}}_{i}}|{{\delta}_{ij}}\right){{\varepsilon}_{\tau,j}}$ . Here, A represents the symmetric adjacency matrix of the network and $|{{{\mathsf {N}}}_{i}}|$ represents the number of neighbours of node $i\in \mathbb{L}$ . The matrix equation can in principle be solved through a Magnus expansion. The solution has the generic form $\boldsymbol{x}(\tau )=G\left(\tau |{{t}_{0}}\right)\boldsymbol{x}\left({{t}_{0}}\right)$ with the 'propagator' G solving ${{\partial}_{\tau}}G\left(\tau |{{t}_{0}}\right)={{M}_{\tau}}G\left(\tau |{{t}_{0}}\right)$ . We write the elements of the propagator as $G\left(\tau,i|{{t}_{0}},\,j\right)$ . Its flow starts out from $G\left({{t}_{0}}|{{t}_{0}}\right)={\mathbb {1}}$ . For later purposes, let us note the time-reversal property in terms of the matrix inverse $G{{(t|\tau )}^{-1}}=G(\tau |t)$ and also the conservation law ${{\mathbf{1}}^{\intercal}}G\left(\tau |{{t}_{0}}\right)={{\mathbf{1}}^{\intercal}}$ .

It proves insightful to consider the master equation (215) of the multi-particle hopping process for the random walk of a single particle on the one-dimensional lattice $\mathbb{L}=\mathbb{Z}$ , which we already considered in sections 2.2.1 and 3.2.1 (with symmetric hopping rates ${{\varepsilon}_{\tau,i}}={{l}_{\tau}}={{r}_{\tau}}$ ). The presence of only a single particle can be enforced by choosing $\boldsymbol{n}_{{0}}$ as n0,k   =  1 for one $k\in \mathbb{Z}$ and n0,j   =  0 for all $j\ne k$ . By following the above steps, one eventually finds that the master equation (215) is solved by the conditional probability distribution

Equation (222)

with the propagator G solving the master equation (55) of the simple random walk. Hence, the propagator evaluates to a Skellam distribution.
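This can be checked numerically by exponentiating the hopping generator on a large ring as a stand-in for $\mathbb{Z}$ (Python with NumPy/SciPy; for symmetric, time-independent rates the Skellam distribution has parameters ${{\mu}_{1}}={{\mu}_{2}}=\varepsilon t$ , and finite-size effects are negligible for the chosen time):

```python
# Compare the exponentiated hopping generator with the Skellam distribution.
import numpy as np
from scipy.linalg import expm
from scipy.stats import skellam

eps, t, L = 1.0, 2.0, 101                # hopping rate, time, ring size
M = np.zeros((L, L))
for i in range(L):
    M[i, (i + 1) % L] = eps              # inflow from the right neighbour
    M[i, (i - 1) % L] = eps              # inflow from the left neighbour
    M[i, i] = -2 * eps                   # outflow
p = expm(M * t)[:, 0]                    # start the walker at site 0

n = np.arange(-5, 6)
exact = skellam.pmf(n, eps * t, eps * t) # Skellam distribution, mu1 = mu2 = eps t
assert np.allclose(p[n % L], exact, atol=1e-8)
```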

4.5.3. Diffusion in continuous space.

To make the transition to a field theory, one may specify the network as the d-dimensional lattice $\mathbb{L}={{\left(l\mathbb{Z}\right)}^{d}}$ with the lattice spacing l  >  0 going to zero. In order to take this limit, we define the variable $x\left(\tau,\boldsymbol{r}\right):={{x}_{\boldsymbol{r}}}(\tau )$ , the rescaled Laplace operator $ \Delta :={\underline \Delta }/{{l}^{2}}$ , and the rescaled diffusion coefficient ${{D}_{\tau}}\left(\boldsymbol{r}\right):={{l}^{2}}{{\varepsilon}_{\tau,\boldsymbol{r}}}$ for $\boldsymbol{r}\in \mathbb{L}$ . Assuming that these definitions can be continued to $\boldsymbol{r}\in {{\mathbb{R}}^{d}}$ , the discrete diffusion equation (220) becomes a PDE for the 'field' $x\left(\tau,\boldsymbol{r}\right)$ in the limit $l\to 0$ , namely ${{\partial}_{\tau}}x\left(\tau,\boldsymbol{r}\right)= \Delta \left({{D}_{\tau}}\left(\boldsymbol{r}\right)x\left(\tau,\boldsymbol{r}\right)\right)$ . Here, $ \Delta $ represents the ordinary Laplace operator. The solution of the PDE acts as the mean of the multivariate Poisson distribution (219) whose extension to $\boldsymbol{r}\in {{\mathbb{R}}^{d}}$ is, however, not quite obvious. If the diffusion coefficient is homogeneous in space, the PDE is solved by

Equation (224)

with the Gaussian kernel

Equation (225)

The action (172) can also be extended into continuous space. For that purpose, one may rescale the second path integral variable as $q\left(\tau,\boldsymbol{r}\right):={{l}^{-d}}{{q}_{\boldsymbol{r}}}(\tau )$ so that in the limit $l\to 0$ ,

Equation (226)

Equation (227)

The above rescaling of x and q is not unique and depends on the problem at hand. Instead of dividing ${{q}_{\boldsymbol{r}}}$ by the volume factor ${{l}^{d}}$ , this factor is sometimes employed to cast ${{x}_{\boldsymbol{r}}}$ and the particle number ${{n}_{\boldsymbol{r}}}$ into densities [130]. In the study of branching and annihilating random walks with an odd number of offspring, yet another kind of rescaling may bring the action into a Reggeon field theory [70, 405–407] like form [96]. The critical behaviour of these random walks falls into the universality class of directed percolation (DP) [96, 132]. Information on this universality class can be found in the book [68], as well as in the original articles of Janssen and Grassberger, which established the extensive scope of the DP class [408, 409].

4.5.4. Diffusion and decay.

Let us exemplify how one can accommodate a non-vanishing perturbation operator $\mathcal{P}_{\tau}^{\dagger}$ in the adjoint transition operator (176). As in section 4.5.2, we consider a system of particles that are hopping between the nodes of an arbitrary network $\mathbb{L}$ . The corresponding diffusive transition operator (218) shall specify the drift coefficient ${{\alpha}_{\tau,i}}\left(\boldsymbol{x}\right)={\underline \Delta }{{\varepsilon}_{\tau,i}}{{x}_{i}}$ in the multivariate extension of (176). The coefficient ${{\beta}_{\tau,ij}}\left(\boldsymbol{x}\right)$ is zero. Besides allowing for diffusion, we now allow the particles to decay in the linear reaction $A\to \varnothing $ . For the sake of brevity, we assume that the corresponding decay rate coefficient ${{\mu}_{\tau}}$ is spatially homogeneous. This assumption can be relaxed but the equations that result cannot be written in matrix form and involve many indices. We treat the adjoint transition operator of the decay process as the perturbation, which reads, according to the flow equation (131),

Equation (228)

The combined process could also be solved directly by treating the decay process as part of the drift coefficient ${{\alpha}_{\tau,i}}\left(\boldsymbol{x}\right)$ . For pedagogic reasons, however, we wish to outline a perturbative solution using Feynman diagrams.

The first step of the derivation is to solve the differential equation (181) for $\boldsymbol{x}$ , which now reads ${{\partial}_{\tau}}{{x}_{i}}={\underline \Delta }{{\varepsilon}_{\tau,i}}{{x}_{i}}+{{X}_{i}}(\tau )$ . The homogeneous solution of this equation is given by $\boldsymbol{x}_{{h}}(\tau ):=G\left(\tau |{{t}_{0}}\right)\boldsymbol{x}\left({{t}_{0}}\right)$ , with G being the propagator from the end of section 4.5.2. The solution of the full equation can be written as

Equation (229)

As the next step, the evaluation of the marginalized distribution (177) requires us to compute the $\left(\boldsymbol{Q},\boldsymbol{X}\right)$ -generating functional (179) for $\boldsymbol{Q}=\boldsymbol{X}=\mathbf{0}$ . By performing a series expansion of its leading exponential, this functional can be written as

Equation (230)

with $\mathcal{P}_{\tau}^{\dagger}=\frac{\delta}{\delta \boldsymbol{X}(\tau )}\centerdot \left(-{{\mu}_{\tau}}\right)\frac{\delta}{\delta \boldsymbol{Q}(\tau )}$ . To evaluate ${{\mathcal{Z}}_{\mathbf{0},\mathbf{0}}}$ , it helps to note that $\ln \mathcal{Z}_{\boldsymbol{Q},\boldsymbol{X}}^{0}={\int}_{{{t}_{0}}}^{t}\text{d}\tau \,\boldsymbol{Q}\centerdot \boldsymbol{x}$ , from which it follows that for ${{t}_{0}}\leqslant \tau \leqslant t$ :

Equation (231)

Equation (232)

The Heaviside step function is used with $ \Theta (0):=0$ to take into account that $\boldsymbol{x}_{{h}}(\tau )$ depends on $\boldsymbol{X}\left({{\tau}^{\prime}}\right)$ only for ${{\tau}^{\prime}}<\tau $ (see appendix D). All other derivatives of $\ln \mathcal{Z}_{\boldsymbol{Q},\boldsymbol{X}}^{0}$ , as well as $\ln \mathcal{Z}_{\boldsymbol{Q},\boldsymbol{X}}^{0}$ itself, vanish for $\boldsymbol{Q}=\boldsymbol{X}=\mathbf{0}$ .

Upon inserting the perturbation (228) into (230), one observes that every summand of the expansion can be written in terms of a combination of the derivatives (231) and (232), and a terminal factor $\text{i}\boldsymbol{q}_{{N}}$ . For example, the summand with k  =  2 and l  =  1 reads

Equation (233)

with $\boldsymbol{Q}=\boldsymbol{X}=\mathbf{0}$ being taken after the evaluation of the functional derivatives. The evaluation of these derivatives results in

Equation (234)

One can represent this expression graphically by a Feynman diagram according to the following rules. First, every diagram ends in a sink, which contributes the factor $\text{i}\boldsymbol{q}_{{N}}$ . The incoming line, or 'leg', of the sink represents the derivative $\frac{\delta}{\delta \boldsymbol{Q}(t)}$ . This leg may either be left dangling, resulting in a factor $\boldsymbol{x}_{{h}}(t)$ according to (231), or it may be 'contracted' with the outgoing line of a vertex. The two legs of this vertex reflect the two derivatives in the perturbation $\mathcal{P}_{\tau}^{\dagger}=\frac{\delta}{\delta \boldsymbol{X}(\tau )}\centerdot \left(-{{\mu}_{\tau}}\right)\frac{\delta}{\delta \boldsymbol{Q}(\tau )}$ . According to (232), the contraction results in a propagator $G(t,i|\tau,j)$ and the vertex itself contributes a factor $-{{\mu}_{\tau}}$ . The incoming leg of the vertex may again be left dangling or it may be connected to another vertex. For the linear decay process, each Feynman diagram is therefore a straight line. The expression (234) can thus be represented graphically by

Equation (235)

Note that dangling outgoing lines are not permitted because the derivative $\delta \ln \mathcal{Z}_{\boldsymbol{Q},\boldsymbol{X}}^{0}/\delta \boldsymbol{X}(\tau )$ vanishes for $\boldsymbol{Q}=\boldsymbol{X}=\mathbf{0}$ .

For more complex, non-linear processes, the Feynman diagrams may contain multiple kinds of vertices, each representing an individual summand of $\mathcal{P}_{\tau}^{\dagger}$ . If there exist vertices with more than two legs, the diagrams may exhibit internal loops (see section 6.4). It is then usually impossible to evaluate the full perturbation series and it needs to be truncated at a certain order in the number of loops. Further information about these techniques, and about how renormalization group theory comes into play, is provided, for example, by the book of Täuber [70]. Information on a non-perturbative renormalization group technique can be found in [410].

For the simple diffusion process with decay, all the summands of (230) are readily cast into Feynman diagrams. However, it turns out that individual summands of the expansion may be associated with multiple diagrams that are not connected to one another. Furthermore, diagrams that represent summands of lower order keep reappearing as unconnected components of summands of higher order. The expansion thus contains redundant information. This redundancy is removed by a classical theorem from diagrammatic analysis, which states that the logarithm $\ln {{\mathcal{Z}}_{\mathbf{0},\mathbf{0}}}$ is given by the sum of only the connected diagrams [411], i.e. by

Equation (236)

This sum can be evaluated with the help of $G(t|\tau )G\left(\tau |{{t}_{0}}\right)=G\left(t|{{t}_{0}}\right)$ as

Equation (237)

Equation (238)

After inserting this expression into the marginalized distribution (177), one recovers the multivariate Poisson distribution (219). Its mean, however, has now acquired the pre-factor ${{\text{e}}^{-{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}}}$ , reflecting the decay of the particles.
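To make the resummation above concrete, the following minimal Python sketch checks its two ingredients numerically for a constant, spatially homogeneous decay rate: the connected diagrams (236)–(238), whose kth summand contributes $(-\mu)^k (t-t_0)^k/k!$ acting on $G(t|t_0)\boldsymbol{x}(t_0)$ and whose sum resums to the decay factor, and the propagator composition used in (237). The ring lattice and all parameter values are hypothetical and merely illustrative.

```python
# Minimal sketch (hypothetical parameters): resummation of the connected
# diagrams (236)-(238) and the propagator composition used in (237).
import math
import numpy as np
from scipy.linalg import expm

mu, eps, t0, t = 0.7, 1.0, 0.0, 2.0
N = 5                                       # assumed ring lattice with 5 nodes
L = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
L[0, -1] = L[-1, 0] = 1.0                   # periodic boundary conditions

G = lambda dt: expm(eps*L*dt)               # diffusion propagator
x0 = np.linspace(1.0, 2.0, N)               # mean of the initial Poisson distribution

# k-th connected diagram: (-mu)^k (t - t0)^k / k! times G(t|t0) x0
series = sum((-mu*(t - t0))**k / math.factorial(k) for k in range(40))
assert np.allclose(series * (G(t - t0) @ x0),
                   np.exp(-mu*(t - t0)) * (G(t - t0) @ x0))

# propagator composition used in (237): G(t|tau) G(tau|t0) = G(t|t0)
tau = 0.8
assert np.allclose(G(t - tau) @ G(tau - t0), G(t - t0))
```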

4.6. Résumé

In the present section, we introduced the novel backward path integral representation

Equation (239)

of the marginalized distribution (see section 3)

Equation (240)

When the basis function $|n{{\rangle}_{x}}$ is chosen as a probability distribution (e.g. $|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ for all $n\in {{\mathbb{N}}_{0}}$ ), the backward path integral represents a true probability distribution: the marginalized distribution $|\,p\left(t,n|{{t}_{0}}\right)\rangle $ . This distribution solves the forward master equation (10) for the initial condition $|\,p\left({{t}_{0}},n|{{t}_{0}}\right)\rangle =|n\rangle $ and it transforms into the conditional probability distribution as $p\left(t,n|\tau,{{n}_{0}}\right)=\langle {{n}_{0}}|\,p(t,n|\tau )\rangle $ . In section 4.3, we showed how the backward path integral (239) can be expressed in terms of an average over the paths of an Itô stochastic differential equation. This method provided the exact solutions of various elementary stochastic processes, including the simple growth, the linear decay, and the pair generation processes. Moreover, we showed how the path integral can be evaluated perturbatively using Feynman diagrams for a process with diffusion and linear particle decay. We hope that the backward path integral (239) will prove useful in the study of reaction–diffusion processes. Thus far, the critical behaviour of such processes could only be approached via path integral representations of averaged observables or of the generating function [69, 412]. In section 6, we show how the former representation readily follows from the backward path integral (239) upon summing the marginalized distribution over an observable A(n). The corresponding representation is commonly applied in the study of diffusion-limited reactions (e.g. [69, 96, 130–132]), but here we show how it can be freed of some of its quantum mechanical ballast (such as 'second-quantized' or 'normal-ordered' observables and coherent states). Besides, we showed in section 4.4 how one can derive a path integral representation of processes with continuous state spaces whose (backward) master equations admit a Kramers–Moyal expansion. Provided that this expansion stops at the level of a diffusion approximation, one recovers a classic path integral representation of the (backward) Fokker–Planck equation and also the Feynman–Kac formula (3). Moreover, the representation can be rewritten in terms of an Onsager–Machlup function and, for diffusive Brownian motion, it simplifies to the path integral of Wiener.

5. The forward path integral representation

Thus far, we have focused on the backward path integral (162). This integral will be used again in section 6 to derive a path integral representation of averaged observables. Moreover, we use it in section 7.4 to approximate the binary annihilation reaction $2\,A\to \varnothing $ . In the present section, however, we shift our focus to the forward path integral (161). Its derivation proceeds analogously to the derivation of the backward solution, so we keep it brief.

5.1. Derivation

The forward path integral can be derived either from the flow equation (54) obeyed by the generating function or from the flow equation (150) obeyed by the series (149) (see figure 8). As it is more convenient to work with functions than with functionals, we use the flow equation of the generating function for this purpose. The derivation parallels a derivation of Elgart and Kamenev [35]. As in section 4.1, we first split the time interval [t0, t] into N pieces ${{t}_{0}}\leqslant {{t}_{1}}\leqslant \cdots \leqslant {{t}_{N}}:=t$ of length $ \Delta t$ . Over a sufficiently small interval $ \Delta t$ , the flow equation (54) is then solved by

Equation (241)

with the generator ${{\mathcal{L}}_{\tau}}:=1+{{\tilde{\mathcal{Q}}}_{\tau}} \Delta t+\text{O}\left({{(\Delta t)}^{2}}\right)$ . After inserting the integral form of a Dirac delta between $\mathcal{L}$ and $|g\rangle $ , the right-hand side of the equation reads

Equation (242)

Figure 8.

Figure 8. Outline of the derivation of the forward path integral representation and of its evaluation in terms of an average over the paths of a backward-time stochastic differential equation.


Assuming that the transition operator ${{\tilde{\mathcal{Q}}}_{\tau}}$ , and therefore also ${{\mathcal{L}}_{\tau}}$ , are normal-ordered with respect to q and ${{\partial}_{q}}$ , we may replace ${{\partial}_{{{q}_{N}}}}$ by $\text{i}{{x}_{N-1}}$ and interchange ${{\mathcal{L}}_{{{t}_{N-1}}}}$ with the exponential. This procedure is repeated N times before invoking the exponentiation ${{\mathcal{L}}_{\tau}}=\exp \left({{\tilde{\mathcal{Q}}}_{\tau}} \Delta t\right)$ . Using the inverse transformation (48) and the initial condition $|g\left({{t}_{0}}|{{t}_{0}},{{n}_{0}}\right)\rangle =\,|{{n}_{0}}\rangle $ , one obtains the discrete-time path integral representation

Equation (243)

Equation (244)

Equation (245)

Here we again employed the abbreviation (167), i.e.

Equation (246)

Moreover, the initial condition $p\left({{t}_{0}},n|{{t}_{0}},{{n}_{0}}\right)={{\delta}_{n,{{n}_{0}}}}$ is again trivially fulfilled for N  =  0. Upon taking the continuous-time limit $N\to \infty $ , so that $ \Delta t\to 0$ , the forward path integral representation of the master equation follows as

Equation (247)

Equation (248)

Equation (249)

The limit ${\rlap{-} \int}_{[{{t}_{0}}}^{t)}:={{\lim}_{N\to \infty}}{\rlap{-} \int}_{0}^{N-1}$ now involves integrations over x(t0) and q(t0), but not over x(t) and q(t).

5.2. Linear processes

The forward path integral (248) can, for example, be used to derive the generating function of the generic linear process $A\to l\,A$ with rate coefficient ${{\mu}_{\tau}}$ (and $l\in {{\mathbb{N}}_{0}}$ ). For that purpose, we choose the basis function as $|n{{\rangle}_{\tau,q}}={{q}^{n}}$ so that $|g\rangle $ coincides with the ordinary generating function. Since the basis function does not depend on time, (64) alone specifies the transition operator

Equation (250)

of the flow equation ${{\partial}_{\tau}}|g(\tau |\centerdot )\rangle ={{\tilde{\mathcal{Q}}}_{\tau}}\left(q,{{\partial}_{q}}\right)|g(\tau |\centerdot )\rangle $ . Consequently, the action

Equation (251)

is linear in $\text{i}x$ . To evaluate the path integral (248), it helps to reconsider the discrete-time approximation

Equation (252)

of the action. Upon integrating over all the qj -variables and taking the limit $ \Delta t\to 0$ , one obtains the generating function

Equation (253)

with $q(\tau )$ solving ${{\partial}_{-\tau}}q={{\mu}_{\tau}}\left({{q}^{l}}-q\right)$ . The unique real solution of this equation with final value q(t) reads

Equation (254)

For the linear growth, or Yule–Furry [119, 413], process $A\to 2\,A$ (l  =  2), the inverse transformation (247) casts the generating function into

Equation (255)

Equation (256)

for n  >  0 and into ${{\delta}_{0,{{n}_{0}}}}$ for n  =  0. The above distribution has the form of a negative Binomial distribution with probability of success ${{\text{e}}^{-{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}}}$ , number of failures n  −  n0, and number of successes n0 [414]. The mean value ${{\text{e}}^{{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}}}{{n}_{0}}$ of the marginalized distribution (256) grows exponentially with time as long as ${{\mu}_{\tau}}>0$ , and so does its variance $\left({{\text{e}}^{{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}}}-1\right)\,{{\text{e}}^{{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}}}{{n}_{0}}$ .
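The negative binomial form of (256) can be checked against a direct Gillespie simulation of the linear growth process. The sketch below assumes a constant rate coefficient and uses SciPy's nbinom parametrization (number of successes, success probability), which matches the convention stated above; all parameter values are hypothetical.

```python
# Minimal sketch (hypothetical parameters): Gillespie simulation of A -> 2A,
# compared with the negative binomial form of the distribution (256).
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(1)
mu, t, n0 = 1.0, 1.5, 3        # rate coefficient, final time, initial particles

def sample_yule(n0, t):
    n, tau = n0, 0.0
    while True:
        tau += rng.exponential(1.0/(mu*n))   # waiting time of the next A -> 2A event
        if tau > t:
            return n
        n += 1

samples = np.array([sample_yule(n0, t) for _ in range(50_000)])
p_success = np.exp(-mu*t)                    # probability of success in (256)
for n in range(n0, n0 + 8):
    print(n, np.mean(samples == n), nbinom.pmf(n - n0, n0, p_success))
```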

For the linear decay process $A\to \varnothing $ (l  =  0), the inverse transformation in (255) instead recovers the Binomial distribution (134).

5.3. Intermezzo: the forward Kramers–Moyal expansion

The above procedure can be generalized to processes whose transition operator is of the form

Equation (257)

The derivation proceeds analogously to section 4.3 and appendix D. The probability distribution is thereby expressed as an average over the paths of a stochastic differential equation proceeding backward in time. The merit of such a representation remains to be explored. In the following, we briefly outline how the procedure is applied to processes with continuous sample paths.

In section 4.4, we explained how the forward master equation (184) of a process with a continuous state space can be written in terms of the (forward) Kramers–Moyal expansion

Equation (258)

with initial condition $p\left(\tau,q|{{t}_{0}},{{q}_{0}}\right)=\delta \left(q-{{q}_{0}}\right)$ . Here we changed the letter from x to q to keep in line with the notation used in sections 5.1 and 5.2. If the Kramers–Moyal expansion stops with its second summand, it coincides with the (forward) Fokker–Planck equation

Equation (259)

To apply the procedure from the previous section to this equation, it needs to be brought into the form ${{\partial}_{\tau}}p(\tau,q|\centerdot )={{\tilde{\mathcal{Q}}}_{\tau}}\left(q,{{\partial}_{q}}\right)p(\tau,q|\centerdot )$ with a normal-ordered transition operator ${{\tilde{\mathcal{Q}}}_{\tau}}\left(q,{{\partial}_{q}}\right)$ . It is readily established that this operator can be expressed in the form of (257) with the coefficients

Equation (260)

Equation (261)

Equation (262)

The Fokker–Planck equation now has the same form as the flow equation obeyed by the generating function in section 5.1. Therefore, we can follow the steps in that section to represent the solution of the Fokker–Planck equation by the path integral

Equation (263)

Equation (264)

The evaluation of this path integral proceeds analogously to the derivation in section 5.2 and appendix D. In particular, one can rewrite the probability distribution as

Equation (265)

with $q(\tau )$ solving the backward-time SDE

Equation (266)

The time evolution of this SDE starts out from the final value q(t)  =  q. In a discrete-time approximation, the SDE reads

Equation (267)

The increments $ \Delta {{W}_{j}}$ are Gaussian distributed with mean 0 and variance $ \Delta t$ . Let us note that the two path integral representations (190) and (263) of the conditional probability distribution belong to an infinite class of representations [42, 104, 415] (see also section 4.4.5). We focus on the two representations that follow from the backward and forward Kramers–Moyal expansions via the step-by-step derivations in sections 4.1 and 5.1, respectively.

To exemplify the validity of the path integral (263) and of the representation (265), let us consider a process with linear drift and no diffusion, i.e. a process with jump moments $\mathcal{M}_{\tau}^{(1)}=\mu q$ and $\mathcal{M}_{\tau}^{(2)}=0$ . For this process, the representation (265) evaluates to

Equation (268)

Equation (269)

Thus, the leading exponential in (265) converted the argument of the Dirac delta from the solution of a final value problem to the solution of an initial value problem. It is, of course, no surprise that the probability distribution is given by a Dirac delta function because the process is purely deterministic.

Another simple process that can be solved with the help of (265) is the pure diffusion process with $\mathcal{M}_{\tau}^{(1)}=0$ and $\mathcal{M}_{\tau}^{(2)}=D$ . The derivation is performed most easily in the discrete-time approximation and results in the Wiener path integral

Equation (270)

with ${{\tilde{q}}_{N}}:=q$ and Gaussian distribution ${{\mathcal{G}}_{\mu,{{\sigma}^{2}}}}$ (see (200)). An evaluation of the convolutions of Gaussian distributions shows that the process is solved by ${{\mathcal{G}}_{0,D\left(t-{{t}_{0}}\right)}}\left(q-{{q}_{0}}\right)$ . Further uses of the path integral representation (263) of the (forward) Fokker–Planck equation remain to be explored.
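As a numerical illustration, the pure diffusion result can be reproduced by sampling the discrete-time scheme (267) with vanishing drift. The sketch below propagates an ensemble of backward-time paths with Gaussian increments and checks that the spread of q(t0) matches $D\left(t-{{t}_{0}}\right)$; all parameters are hypothetical.

```python
# Minimal sketch (hypothetical parameters): backward-time paths of the
# discrete-time scheme (267) for pure diffusion (first jump moment 0,
# second jump moment D); the spread of q(t0) should match D (t - t0).
import numpy as np

rng = np.random.default_rng(0)
D, t0, t, q_final = 1.0, 0.0, 1.0, 0.0
N, n_paths = 200, 100_000
dt = (t - t0)/N

qs = np.full(n_paths, q_final)               # start from the final value q(t)
for _ in range(N):
    qs += np.sqrt(D) * rng.normal(0.0, np.sqrt(dt), size=n_paths)

print(qs.mean(), qs.var(), D*(t - t0))       # variance ~ D (t - t0)
```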

5.4. Résumé

Here we derived the forward path integral representation

Equation (271)

of the probability generating function $|g\left(t|{{t}_{0}},{{n}_{0}}\right)\rangle =$ ${\sum}_{n}|n{{\rangle}_{t}}\,p\left(t,n|{{t}_{0}},{{n}_{0}}\right)$ . The conditional probability distribution is recovered from the generating function via the inverse transformation $p\left(t,n|{{t}_{0}},{{n}_{0}}\right)=\langle n|g\left(t|{{t}_{0}},{{n}_{0}}\right)\rangle $ . The path integral representation of the generating function has, for example, been employed to compute rare event probabilities [35] by a method that we discuss in section 7. Most often, however, the representation has only served as an intermediate step in deriving a path integral representation of averaged observables. Such a representation is considered in the next section. In section 5.2, we showed how the forward path integral (271) can be evaluated along the paths of a differential equation proceeding backward in time. We thereby obtained the generating function of generic linear processes. Besides, we derived a novel path integral representation of processes with continuous state spaces in section 5.3, based on the (forward) Kramers–Moyal expansion. The potential use of this representation remains to be explored.

6. Path integral representation of averaged observables

The backward and forward path integral representations of the conditional probability distribution provide a full characterization of a Markov process. Yet, an intuitive understanding of how a process evolves is often attained more easily by looking at the mean particle number $\langle n\rangle $ and at its variance $\langle {{(n-\langle n\rangle )}^{2}}\rangle $ . Although both of these averages can in principle be inferred from a given probability distribution, it often proves convenient to bypass the computation of the distribution and to focus directly on the observables. Thus, we now show how one can derive a path integral representation of the average $\langle A\rangle $ of an observable A(n). The path integral representation applies to all processes that can be decomposed additively into reactions of the form $k\,A\to l\,A$ in a well-mixed, non-spatial environment. The extension of the path integral to multiple types of interacting particles and to processes with spatial degrees of freedom is straightforward. In the derivation, we assume that the number of particles in the system is initially Poisson distributed with mean x(t0). This assumption is common in the study of reaction–diffusion master equations and will allow us to focus on the marginalized distribution from section 3 instead of on the conditional probability distribution. The derived path integral has been applied in various contexts, particularly in the analysis of decay laws of diffusion-limited reactions and in the identification of universality classes [69, 70].

6.1. Derivation

Assuming that the number of particles is initially Poisson distributed with mean x(t0), the probability of finding n particles at time t is $p(t,n)={\sum}_{{{n}_{0}}}p\left(t,n|{{t}_{0}},{{n}_{0}}\right)p\left({{t}_{0}},{{n}_{0}}\right)$ . Here we make use of the initial (single-time) distribution

Equation (272)

Consequently, the average value of an observable A(n) at time t evaluates to

Equation (273)

Here we emphasize that the average depends on the mean of the initial Poisson distribution. As the single-time distribution coincides with the marginalized distribution $|\,p\left(t,n|{{t}_{0}}\right)\rangle $ in (110) for the Poisson basis function $|{{n}_{0}}{{\rangle}_{x\left({{t}_{0}}\right)}}=\frac{x{{\left({{t}_{0}}\right)}^{{{n}_{0}}}}{{\text{e}}^{-x\left({{t}_{0}}\right)}}}{{{n}_{0}}!}$ , the above average can equivalently be written as

Equation (274)

The path integral representation of ${{\langle A\rangle}_{x\left({{t}_{0}}\right)}}$ then follows directly from the backward path integral representation (171) of the marginalized distribution as

Equation (275)

Equation (276)

With the adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}(c,a)={{\gamma}_{\tau}}{{c}^{k}}\left({{a}^{l}}-{{a}^{k}}\right)$ of the reaction $k\,A\to l\,A$ (see section 3.2.2), the action ${{\mathcal{S}}^{\dagger}}$ in (172) reads

Equation (277)

The '+1' in the transition operator is called the 'Doi-shift' [91]. In our above derivation, this shift followed from choosing a Poisson distribution as the basis function. The unshifted version of the path integral can be derived by choosing the basis function of the marginalized distribution (110) as $|{{n}_{0}}{{\rangle}_{x\left({{t}_{0}}\right)}}=\frac{x{{\left({{t}_{0}}\right)}^{{{n}_{0}}}}}{{{n}_{0}}!}$ , turning the average (274) into

Equation (278)

Upon rewriting the marginalized distribution in terms of the backward path integral (171), the action (277) acquires the additional summand x(t0)  −  x(t) and now involves the unshifted transition operator $\mathcal{Q}_{\tau}^{\dagger}\left(x,\text{i}q\right)$ .

The average over a Poisson distribution in (276) establishes the link between the particle number n and the path integral variable x. For the simplest observable $A(n):=n$ , i.e. for the particle number itself, it holds that ${{\langle \langle n\rangle \rangle}_{x}}=x$ . This relation generalizes to factorial moments of order $k\in \mathbb{N}$ for which ${{\langle \langle {{(n)}_{k}}\rangle \rangle}_{x}}={{x}^{k}}$ (recall that ${{(n)}_{k}}:=n(n-1)\cdots (n-k+1)$ ). The computation of factorial moments may serve as an intermediate step in obtaining ordinary moments of the particle number. For this purpose, one may use the relation ${{n}^{k}}={\sum}_{l=0}^{k}\left\{\begin{array}{c} k \\ l \end{array}\right\}{{(n)}_{l}}$ , where the curly braces represent a Stirling number of the second kind (see section 26.8 in [335]). An extension of the path integral representation (275) to multi-time averages of the form

Equation (279)

remains open (with ${{\tau}_{2}}>{{\tau}_{1}}>{{t}_{0}}$ ). The results of Elderfield [376] may prove helpful for this purpose.
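As a concrete illustration of the moment conversion above, the following sketch computes an ordinary Poisson moment from the factorial moments ${{\langle \langle {{(n)}_{l}}\rangle \rangle}_{x}}={{x}^{l}}$ via Stirling numbers of the second kind (generated by their standard recurrence) and compares it with a sampled estimate; the mean and moment order are hypothetical.

```python
# Minimal sketch (hypothetical mean and moment order): ordinary Poisson
# moments from factorial moments <<(n)_l>>_x = x^l via Stirling numbers
# of the second kind, computed with their standard recurrence.
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(k, l):
    if k == l:
        return 1
    if l == 0 or l > k:
        return 0
    return l*stirling2(k - 1, l) + stirling2(k - 1, l - 1)

x, k = 2.5, 4
moment = sum(stirling2(k, l) * x**l for l in range(k + 1))

samples = np.random.default_rng(2).poisson(x, size=1_000_000)
print(moment, np.mean(samples.astype(float)**k))    # ~ agreement
```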

In a slightly rewritten form, the Doi-shifted path integral (275) was, for example, employed by Lee in his study of the diffusion-controlled annihilation reaction $k\,A\to \varnothing $ with $k\geqslant 2$ [130, 416]. He found that below the critical dimension dc   =  2/(k  −  1), the particle density asymptotically decays as $n\sim {{A}_{k}}{{t}^{-d/2}}$ with a universal amplitude Ak . At the critical dimension, the particle density instead obeys $n\sim {{A}_{k}}{{(\ln t/t)}^{1/(k-1)}}$ . Neglecting the diffusion of particles, the above action (277) of the reaction $k\,A\to \varnothing $ reads

Equation (280)

Besides using different names for integration variables ($\text{i}q\to \bar{\psi}$ and $x\to \psi $ ), the action employed by Lee involves an additional boundary term. This term can be introduced by rewriting the path integral representation (275) as

Equation (281)

This path integral involves integrations over the variables q(t0) and x(t0), and the mean of the initial Poisson distribution is denoted by ${{\lambda}_{0}}$ . Consequently, one needs to add the boundary term $\text{i}q\left({{t}_{0}}\right)\left(x\left({{t}_{0}}\right)-{{\lambda}_{0}}\right)$ to the action (280) to equate x(t0) with ${{\lambda}_{0}}$ . The factor $\text{i}q\left({{t}_{0}}\right)x\left({{t}_{0}}\right)$ of the new term is, however, often dropped eventually [130, 416]. The convention of Lee is also commonly used, for example, by Täuber [70]. The unshifted version of the path integral with transition operator $\mathcal{Q}_{\tau}^{\dagger}\left(x,\text{i}q\right)$ is recovered via $\text{i}q+1\to \text{i}q$ .

For completeness, let us note that the path integral representation (275) can also be derived from the forward path integral (244), provided that the observable A(n) is analytic in n. It then suffices to consider the factorial moment A(n)  :=  (n)k . To perform the derivation, one may choose the basis function of the generating function (45) as $|n\rangle :={{\left(\text{i}q+1\right)}^{n}}$ (insert $\zeta :=\text{i}$ , $\tilde{q}:=1$ and $\tilde{x}:=0$ into (70)). As the first step, the forward path integral (248) is summed over an initial Poisson distribution with mean x(t0). The average of the factorial moment is then obtained via ${{\langle {{(n)}_{k}}\rangle}_{x\left({{t}_{0}}\right)}}=\partial _{\text{i}{{q}_{N}}}^{k}|g\left(t|{{t}_{0}};x\left({{t}_{0}}\right)\right){{\rangle}_{{{q}_{N}}}}{{|}_{{{q}_{N}}=0}}$ . To recover the action (277), one may note that the operator ${{\mathcal{Q}}_{\tau}}(c,a)$ in (64) and the operator $\mathcal{Q}_{\tau}^{\dagger}(c,a)$ in (122) fulfil ${{\mathcal{Q}}_{\tau}}\left(\text{i}q+1,x\right)=\mathcal{Q}_{\tau}^{\dagger}\left(x,\text{i}q+1\right)$ for scalar arguments. Thus, both the marginalized distribution approach and the generating function approach result in the same path integral representation of averaged observables. A detailed derivation of the path integral (275) from the generating function is, for example, included in the article of Dickman and Vidigal [412] (see their equations (106) and (108)).

6.2. Intermezzo: alternative derivation based on coherent states

We noted previously in section 2.2.3 that the path integral representation (275) of the average

Equation (282)

can be derived without first casting the master equation into a linear PDE [91, 130, 289, 336, 416]. In the following, we outline this derivation for the process $k\,A\to l\,A$ with time-independent rate coefficient γ. Moreover, we assume A(n) to be analytic in n.

The alternative derivation of the path integral representation (275) starts out from the exponential solution of the master equation, i.e. from (see (83))

Equation (283)

The bras are chosen as the unit row vectors $\langle n|=\widehat{\boldsymbol{e}}_{n}^{\intercal}$ and the kets as the unit column vectors $|n\rangle ={{\widehat{\boldsymbol{e}}}_{n}}$ . As before, the transition matrix of the reaction $k\,A\to l\,A$ with rate coefficient γ reads (see (85))

Equation (284)

The creation matrix fulfils $c|n\rangle =\,|n+1\rangle $ and $\langle n|c=\langle n-1|$ , and the annihilation matrix $a|n\rangle =n|n-1\rangle $ and $\langle n|a=(n+1)\langle n+1|$ . Thus, the basis column vectors can be generated incrementally via $|n\rangle ={{c}^{n}}|0\rangle $ and the basis row vectors via $\langle n|\,\,=\langle 0|\frac{{{a}^{n}}}{n!}$ .
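These matrix relations are easy to verify numerically on a truncated state space. The sketch below builds finite versions of c and a and checks their action on the unit vectors, the incremental construction of the basis vectors, and the commutator $[a,c]={\mathbb {1}}$, which holds away from the truncation boundary; the truncation order is arbitrary.

```python
# Minimal sketch (arbitrary truncation order): finite versions of the
# creation and annihilation matrices acting on the unit vectors |n> = e_n.
import numpy as np

n_max = 20
c = np.eye(n_max, k=-1)                  # c |n> = |n+1>
a = np.diag(np.arange(1, n_max), k=1)    # a |n> = n |n-1>

e = lambda n: np.eye(n_max)[:, n]
assert np.allclose(c @ e(3), e(4))
assert np.allclose(a @ e(3), 3*e(2))
assert np.allclose(e(5), np.linalg.matrix_power(c, 5) @ e(0))   # |n> = c^n |0>

# [a, c] = 1 holds away from the truncation boundary
comm = a @ c - c @ a
assert np.allclose(comm[:-1, :-1], np.eye(n_max - 1))
```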

Since the bra $\langle n|$ is a left eigenvector of the number matrix $\mathcal{N}:=ca$ with eigenvalue n, one may write $A(n)\langle n|\,=\langle n|A\left(\mathcal{N}\,\right)$ for an analytic observable A. After inserting the exponential solution (283) into the averaged observable (282), an evaluation of the sums therefore results in

Equation (285)

Here we employed the (infinitely large) unit matrix ${\mathbb {1}}$ . Moreover, we changed the variable x(t0) to x0 in anticipation of a discrete-time approximation.

Following the lecture notes of Cardy [91], we now perform the Doi-shift by shifting the first exponential in the above expression to the right. To do so, we require certain relations, which are all based on the commutation relation $[a,c]={\mathbb {1}}$ . First, it follows from this relation that $\left[a,{{c}^{n}}\right]=n{{c}^{n-1}}{\mathbb {1}}$ holds for all $n\in {{\mathbb{N}}_{0}}$ , and more generally that $\left[a,\left[a\cdots \left[a,{{c}^{n}}\right]\right]\right]={{(n)}_{j}}{{c}^{n-j}}{\mathbb {1}}$ holds for nested commutators with $j\leqslant n$ annihilation matrices. Nested commutators of higher order vanish. The Hadamard lemma [417] can therefore be employed to write (with $z\in \mathbb{C}$ and $n\in \mathbb{N}$ )

Equation (286)

Equation (287)

This expression generalizes to the following shift operations for an analytic function f:

Equation (288)

Equation (289)

Shifting the first exponential in (285) to the right therefore results in

Equation (290)

To proceed, we split the time interval [t0,t] into N pieces of length $ \Delta t:=\left(t-{{t}_{0}}\right)/N$ . The Trotter formula [418] can then be used to express the right-hand side of (290) in terms of the limit

Equation (291)

This expression is now rewritten by inserting N of the identity matrices [15]

Equation (292)

Equation (293)

We obtained the expression in the second line by performing integrations by parts while neglecting any potential boundary terms (note that the derivations in sections 4 and 5 did not involve integrations by parts). Moreover, we introduced the right eigenvector $|z\rangle \rangle :={{\text{e}}^{z\left(c-{\mathbb {1}}\right)}}|0\rangle $ of the annihilation matrix ($a|z\rangle \rangle =z|z\rangle \rangle $ with $z\in \mathbb{C}$ ), and the left eigenvector $\langle \langle {{z}^{\star}}|\,:=\langle 0|{{\text{e}}^{za}}$ of the creation matrix ($\langle \langle {{z}^{\star}}|c=z\langle \langle {{z}^{\star}}|$ ). These vectors are commonly referred to as 'coherent states'. The eigenvector conditions can be easily verified by rewriting the vectors as

Equation (294)

Insertion of the identity matrices into (291) results in

Equation (295)

This expression can be simplified with the help of the shift operations in (288) and (289). Since the Q-matrix (284) is a normal-ordered polynomial in c and a (i.e. all the c are to the left of all the a), and both $\langle 0|c$ and $a|0\rangle $ vanish, the above expression evaluates to

Equation (296)

The factor $\langle 0|A\left(\left(c+{\mathbb {1}}\right)\left(a+{{x}_{N}}{\mathbb {1}}\right)\right)|0\rangle $ could be evaluated by normal-ordering the observable with respect to c and a before employing $\langle 0|c=0$ and $a|0\rangle =0$ again. The resulting object is sometimes called a 'normal-ordered observable' [336]. In the following, we show that this object agrees with the average over a Poisson distribution in (276). The proof of this assertion is based on the observation that ${{f}_{j}}(x):=\langle 0|{{\left(\left(c+{\mathbb {1}}\right)\left(a+x{\mathbb {1}}\right)\right)}^{j}}|0\rangle $ fulfils the following defining relation of Touchard polynomials (see [419, 420] for information on these polynomials):

Equation (297)

Since the jth Touchard polynomial fj (x) agrees with the jth moment of a Poisson distribution with mean x, i.e. with ${{\langle \langle {{n}^{j}}\rangle \rangle}_{x}}$ as defined in (276), our above assertion holds true.
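This agreement is readily checked numerically: with truncated matrices as above, the element $\langle 0|{{\left(\left(c+{\mathbb {1}}\right)\left(a+x{\mathbb {1}}\right)\right)}^{j}}|0\rangle $ should coincide with the jth moment of a Poisson distribution with mean x, provided the truncation order exceeds j. A sketch with hypothetical values of x and j:

```python
# Minimal sketch (hypothetical x and j): the matrix element
# <0| ((c + 1)(a + x 1))^j |0> equals the j-th moment of a Poisson
# distribution with mean x, provided the truncation order exceeds j.
import numpy as np

n_max, x, j = 40, 2.5, 4
c = np.eye(n_max, k=-1)
a = np.diag(np.arange(1, n_max), k=1)
I = np.eye(n_max)

f_j = np.linalg.matrix_power((c + I) @ (a + x*I), j)[0, 0]

samples = np.random.default_rng(3).poisson(x, size=1_000_000)
print(f_j, np.mean(samples.astype(float)**j))       # ~ agreement
```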

The path integral representation (275) of averaged observables is recovered as the continuous-time limit of (296) (upon rewriting $1+Q \Delta t$ as an exponential). The action (277) is also recovered because the transition matrix Q(c,a) in (284) and the adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}(c,a)$ in (122) fulfil $Q\left(\text{i}q+1,x\right)={{\mathcal{Q}}^{\dagger}}\left(x,\text{i}q+1\right)$ for scalar arguments.

6.3. Perturbation expansions

The path integral representation (275) of factorial moments can be rewritten in terms of a (Q, X)-generating functional analogously to section 4.3. The resulting expression may serve as the starting point for a perturbative [15, 70] or a non-perturbative analysis [410] of the path integral. Here we focus on the perturbative approach. To derive the representation, we assume that the adjoint transition operator can be split into drift, diffusion, and perturbation parts as in (176), so that

Equation (298)

By following the steps in appendix D, the path integral representation of a factorial moment can be written as

Equation (299)

where $\mathcal{Z}_{Q,X}^{0}={{\langle \langle {{\text{e}}^{{\int}_{{{t}_{0}}}^{t}\text{d}\tau Q(\tau )x(\tau )}}\rangle \rangle}_{W}}$ represents the (Q, X)-generating functional (180). Moreover, $x(\tau )$ solves the Itô SDE

Equation (300)

with initial condition x(t0). If both the diffusion and perturbation parts of the transition operator (298) vanish, the factorial moment at time t obeys the statistics of a Poisson distribution, i.e. $\langle {{(n)}_{j}}\rangle =x{{(t)}^{j}}$ . In the general case, the representation (299) may be evaluated perturbatively by expanding its exponential as [15]

Equation (301)

Note that this expression involves only a single sum and not a double sum as the expansion (230) did.

6.4. Coagulation

Let us exemplify the perturbation expansion of (299) for particles that diffuse and coagulate in the reaction $2\,A\to A$ . This reaction exhibits the same asymptotic particle decay as the binary annihilation reaction $2\,A\to \varnothing $ and thus belongs to the same universality class [294, 421]. Only a pre-factor differs between the perturbation operators of the two reactions (see below). A full renormalization group analysis of general annihilation reactions $k\,A\to l\,A$ with l  <  k was performed by Lee [130, 416] (see also [69]). The procedure of Lee differs slightly from the one presented below, but it involves the same Feynman diagrams. We restrict ourselves to the 'tree level' of the diagrams. The coagulation reaction $2\,A\to A$ and the annihilation reaction $2\,A\to \varnothing $ eventually cause all but possibly one particle to vanish from a system. The transition into an absorbing steady state can be prevented, for example, by allowing for the creation of particles through the reaction $\varnothing \to A$ . The fluctuations around the ensuing non-trivial steady state have been explored with the help of path integrals in [298, 308].

In the following, we consider particles that diffuse on an arbitrary network $\mathbb{L}$ and that coagulate in the reaction $2\,A\to A$ . Hence, all results from section 4.5.4 apply here as well, except that the transition operator of the coagulation reaction now acts as a perturbation. With the local coagulation rate coefficient ${{\mu}_{\tau,i}}$ at lattice site $i\in \mathbb{L}$ , this perturbation reads (see (122))

Equation (302)

Equation (303)

For the binary annihilation reaction $2\,A\to \varnothing $ , the first rate coefficient ${{\mu}_{\tau,i}}$ in this expression differs by an additional pre-factor 2. Note that we allow the rate coefficient to depend both on time and on the node $i\in \mathbb{L}$ .

As explained in section 4.5.4, summands of the expansion of (299) can be represented by Feynman diagrams. For the mean local particle number $\langle {{n}_{i}}\rangle $ , each of the diagrams is composed of a sink with one incoming leg, and possibly of the vertices

Equation (304)

According to the functional derivative (232), contracted lines, either between two vertices or between a vertex and the sink, are associated with the propagator $G\left(\tau |{{\tau}^{\prime}}\right)$ . Dangling incoming lines, on the other hand, introduce a factor $\boldsymbol{x}_{{h}}(\tau )$ , which represents the homogeneous solution of ${{\partial}_{\tau}}{{x}_{i}}={\underline \Delta }{{\varepsilon}_{\tau,i}}{{x}_{i}}+{{X}_{i}}(\tau )$ (see (229) and (231)).

In general, diagrams constructed from the above building blocks exhibit internal loops. The simplest connected diagram with such a loop and with a single sink is given by

Equation (305)

This loop is part of the m  =  2 summand of (301). Its mathematical expression is obtained by tracing the diagram from right to left and reads

Equation (306)

The combinatorial factor 2 stems from the two possible ways of connecting the two vertices (either outgoing leg can connect to either incoming leg). Note that the sink, which corresponds to the final derivative $\delta /\delta Q(t)$ , is not associated with an additional pre-factor (unlike in section 4.5.4).

In the following, we focus on the contribution of 'tree diagrams' to the mean particle number. These diagrams do not exhibit internal loops. Thus, upon removing all the diagrams containing loops from the expansion of (299), one can define the 'tree-level average'

Equation (307)

The corresponding mathematical expression reads

Equation (308)

The inclusion of diagrams with loops would correct ${{\bar{n}}_{i}}$ to the true mean $\langle {{n}_{i}}\rangle $ . For the treatment of loops, see for example [70, 130, 293, 416].

The tree-level average ${{\bar{n}}_{i}}$ defined above fulfils the deterministic rate equation of the coagulation process, i.e. it fulfils

Equation (309)

If both the rates ${{\varepsilon}_{\tau}}$ and ${{\mu}_{\tau}}$ , and also the mean x(t0) of the initial Poisson distribution are spatially homogeneous, the well-mixed analogue of (309) can be derived by direct resummation of (308). In the general case, the validity of the rate equation (309) can be established by exploiting a self-similarity of ${{\bar{n}}_{i}}$ . For that purpose, we assume that ${{\bar{n}}_{i}}(\tau )$ is known for $\tau <t$ and we want to extend its validity up to time t. To do so, we remove all the sinks from the diagrams in (307) and connect their now dangling outgoing legs with the incoming leg of another three-vertex at time τ. The second incoming leg of this vertex is contracted with every other tree diagram and its outgoing leg with a sink at time t. The corresponding expression reads ${\sum}_{j\in \mathbb{L}}G(t,i|\tau,j)\left(-{{\mu}_{\tau,j}}\right){{\bar{n}}_{j}}{{(\tau )}^{2}}$ and it contributes to ${{\bar{n}}_{i}}(t)$ for every time τ. To respect also the initial condition ${{\bar{n}}_{i}}\left({{t}_{0}}\right)={{x}_{i}}\left({{t}_{0}}\right)$ , we introduce the first diagram of (307) by hand so that in total

Equation (310)

Differentiation of this equation with respect to t confirms the validity of the rate equation (309) (recall the definition of the propagator in section 4.5.2).
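As a consistency check of the rate equation (309), the following sketch integrates its assumed lattice form ${{\partial}_{\tau}}{{\bar{n}}_{i}}=\varepsilon {{\left(\Delta \bar{n}\right)}_{i}}-\mu \bar{n}_{i}^{2}$ on a hypothetical ring lattice with a homogeneous initial condition, for which the Laplacian term vanishes and every node should follow the well-mixed solution ${{x}_{0}}/\left(1+\mu {{x}_{0}}\left(t-{{t}_{0}}\right)\right)$; all parameters are illustrative.

```python
# Minimal sketch (hypothetical ring lattice and rates): assumed lattice form
# of the rate equation (309), d n_i/dt = eps (Laplacian n)_i - mu n_i^2,
# integrated from a homogeneous initial condition.
import numpy as np
from scipy.integrate import solve_ivp

N, eps, mu, x0 = 10, 1.0, 0.5, 2.0
L = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
L[0, -1] = L[-1, 0] = 1.0                 # periodic boundary conditions

rhs = lambda t, n: eps*(L @ n) - mu*n**2
sol = solve_ivp(rhs, (0.0, 3.0), np.full(N, x0), rtol=1e-8, atol=1e-10)

print(sol.y[:, -1])                       # all nodes ~ closed-form value
print(x0/(1 + mu*x0*sol.t[-1]))
```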

6.5. Résumé

Path integral representations of averaged observables have proved useful in a variety of contexts. Their use has deepened our knowledge about the critical behaviour of diffusion-limited annihilation and coagulation reactions [69, 70, 130, 293, 294, 302, 304, 305, 316, 400], of branching and annihilating random walks and percolation processes [69, 70, 96, 132, 133, 301, 303, 422, 423], and of elementary multi-species reactions [131, 295–297, 299, 300, 309–311, 313, 424, 425]. In the present section, we derived a path integral representation of averaged observables for processes that can be decomposed additively into reactions of the form $k\,A\to l\,A$ . Provided that the number of particles n in the system is initially Poisson distributed with mean x(t0), we showed that the average of an observable A(n) can be represented by the path integral

Equation (311)

Equation (312)

We derived this representation in section 6.1 by summing the backward path integral representation (171) of the marginalized distribution over the observable A(n) (using the Poisson basis function $|n\rangle =\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ ). The generalization of the above path integral to reactions with multiple types of particles and spatial degrees of freedom is straightforward. The above path integral representation was found to be equivalent to Doi-shifted path integrals used in the literature [70, 96, 130–133]. Its unshifted version is obtained for a redefined basis function. Unlike path integral representations encountered in the literature, our representation (311) does not involve a so-called 'normal-ordered observable'. By using a defining relation of Touchard polynomials, we could show that this object agrees with the average (312) of A(n) over a Poisson distribution (see section 6.2).

As shown in section 6.3, the path integral representation (311) can be rewritten in terms of a perturbation expansion. We demonstrated the evaluation of this expansion in section 6.4 for diffusing particles that coagulate according to the reaction $2\,A\to A$ . In doing so, we restricted ourselves to the tree-level of the Feynman diagrams associated to the perturbation expansion. Information on perturbative renormalization group techniques for the treatment of diagrams with loops can be found in [69, 70]. Recently, non-perturbative renormalization group techniques have been developed for the evaluation of stochastic path integrals [317, 410]. These techniques have proved particularly useful in studying branching and annihilating random walks [317, 318] and annihilation processes [306, 307].

7. Stationary paths

In the previous sections, we outlined how path integrals can be expressed in terms of averages over the paths of stochastic differential equations (SDEs). Corrections to those paths were treated in terms of perturbation expansions. In the following, we formulate an alternative method in which the variables of the path integrals act as deviations from 'stationary', or 'extremal', paths. The basic equations of the method have the form of Hamilton's equations from classical mechanics. Their application to stochastic path integrals goes back at least to the work of Mikhailov [11].

More recently, Elgart and Kamenev extended the method for the study of rare event probabilities [35] and for the classification of phase transitions in reaction–diffusion models with a single type of particles [320]. These studies are effectively based on the forward path integral representation (244) of the generating function. After reviewing how this approach can be used for the approximation of probability distributions, we extend it to the backward path integral representation (168) of the marginalized distribution. Whereas the generating function approach requires an auxiliary saddle-point approximation to extract probabilities from the generating function, the backward approach provides direct access to probabilities. A proper normalization of the resulting probability distribution is, however, only attained beyond leading order. The generating function technique respects the normalization of the distribution even at leading order, but this normalization may be violated by the subsequent saddle-point approximation.

The methods discussed in the following all apply to the chemical master equation (24) and employ the basis functions that we introduced in sections 2.2.2 and 3.2.2. Moreover, we assume the number of particles to be initially Poisson distributed with mean x(t0). This assumption proves to be convenient in the analysis of explicit stochastic processes but it can be easily relaxed.

7.1. Forward path integral approach

Our goal lies in the approximation of the marginalized probability distribution

Equation (313)

For this purpose, we employ the forward path integral (244) to formulate an alternative representation of the ordinary probability generating function

Equation (314)

As the first step, we rewrite the argument q(t) of this function in terms of a deviation $ \Delta q(t)$ from a 'stationary path' $\tilde{q}(t)$ , i.e. $q(t)=\tilde{q}(t)+ \Delta q(t)$ . The path $\tilde{q}(t)$ and an auxiliary path $\tilde{x}(\tau )$ are chosen so that the action of the resulting path integral is free of terms that are linear in the path integral variables $ \Delta q(\tau )$ and $ \Delta x(\tau )$ (with $\tau \in \left[{{t}_{0}},t\right]$ ). Thus, the approach bears similarities to the stationary phase approximation of oscillatory integrals [426]. In the next section, we apply the method to the binary annihilation process $2\,A\to \varnothing $ .

In order to implement the above steps, we define the basis function

Equation (315)

for yet to be specified paths $\tilde{q}(\tau )$ and $\tilde{x}(\tau )$ , and a free parameter ζ. The basis function has the same form as the basis function (70) but with its second argument being written as $ \Delta q(\tau )$ . Provided that the path $\tilde{q}(\tau )$ fulfils the final condition $\tilde{q}(t)=q(t)$ , one can trivially rewrite the generating function (314) in terms of the above basis function as

Equation (316)

The term in brackets has the form of the generalized generating function (45), and thus it can be rewritten in terms of the forward path integral (248) by following the steps in section 5.1. Using $ \Delta q(\tau )$ and $ \Delta x(\tau )$ as labels for the path integral variables, one arrives at the representation

Equation (317)

In analogy with (246), the integral sign is defined as the continuous-time limit

Equation (318)

The action (249) inside the generating function (317) reads

Equation (319)

As emphasized above, our interest lies in a system whose initial number of particles is Poisson distributed with mean x(t0). Thus, instead of dealing with the ordinary generating function (314), it proves convenient to work in terms of the generating function

Equation (320)

To arrive at the second line, we made use of the path integral representation (317) while requiring that the path $\tilde{x}(\tau )$ meets the initial condition $\tilde{x}\left({{t}_{0}}\right)=x\left({{t}_{0}}\right)$ . Note that the marginalized distribution (313) is recovered from the redefined generating function via

Equation (321)

Thus far, we have not yet specified the paths $\tilde{q}(\tau )$ and $\tilde{x}(\tau )$ , besides requiring that they fulfil the boundary conditions $\tilde{q}(t)=q(t)$ and $\tilde{x}\left({{t}_{0}}\right)=x\left({{t}_{0}}\right)$ . We specify these paths in such a way that the action (319) becomes free of terms that are linear in the deviations $ \Delta q$ and $ \Delta x$ . For this purpose, let us recall the definitions of the creation operator $c=\zeta \Delta q+\tilde{q}$ and of the annihilation operator $a={{\partial}_{\zeta \Delta q}}+\tilde{x}$ from section 2.2.2, as well as the definition of the transition operator

Equation (322)

The basis evolution operator is specified in (77) as

Equation (323)

Upon performing a Taylor expansion of the operator ${{\tilde{\mathcal{Q}}}_{\tau}}\left(\Delta q,\text{i} \Delta x\right)$ in the action (319) with respect to the deviations $ \Delta q$ and $ \Delta x$ , one observes that terms that are linear in the deviations vanish from the action if the paths $\tilde{x}(\tau )$ and $\tilde{q}(\tau )$ fulfil

Equation (324)

Equation (325)

These equations resemble Hamilton's equations from classical mechanics. Just as in classical mechanics, ${{\mathcal{Q}}_{\tau}}$ is conserved along solutions of the equations if it does not depend on time itself. This property follows from $\frac{\text{d}}{\text{d}\tau}{{\mathcal{Q}}_{\tau}}\left(\tilde{q},\tilde{x}\right)={{\partial}_{\tau}}{{\mathcal{Q}}_{\tau}}\left(\tilde{q},\tilde{x}\right)$ , with $\frac{\text{d}}{\text{d}\tau}$ being the total time derivative. Since the ordinary generating function obeys the flow equation ${{\partial}_{\tau}}|g{{\rangle}_{q}}={{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)|g{{\rangle}_{q}}$ , the conservation of total probability requires that ${{\mathcal{Q}}_{\tau}}\left(1,{{\partial}_{q}}\right)=0$ . This condition is, for example, fulfilled by the transition operator ${{\mathcal{Q}}_{\tau}}\left(q,{{\partial}_{q}}\right)={{\gamma}_{\tau}}\left({{q}^{l}}-{{q}^{k}}\right)\partial _{q}^{k}$ of the generic reaction $k\,A\to l\,A$ (see (64)). For the final value q(t)  =  1, Hamilton's equations (324) and (325) are solved by $\tilde{q}(\tau )=1$ with $\tilde{x}(\tau )$ solving the rate equation ${{\partial}_{\tau}}\tilde{x}={{\gamma}_{\tau}}(l-k){{\tilde{x}}^{k}}$ of the process. Elgart and Kamenev analysed the topology of ${{\mathcal{Q}}_{\tau}}\left(\tilde{q},\tilde{x}\right)=0$ lines of reaction–diffusion models with a single type of particles to classify the phase transitions of these models [320] (they use the label p instead of $\tilde{q}$ , and q instead of $\tilde{x}$ ).
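A quick symbolic check of the statement above: assuming the sign convention ${{\partial}_{\tau}}\tilde{x}=\partial {{\mathcal{Q}}_{\tau}}/\partial \tilde{q}$ (with ${{\partial}_{\tau}}\tilde{q}=-\partial {{\mathcal{Q}}_{\tau}}/\partial \tilde{x}$ ), setting $\tilde{q}=1$ in the transition operator of $k\,A\to l\,A$ indeed reduces Hamilton's equations to the rate equation ${{\partial}_{\tau}}\tilde{x}={{\gamma}_{\tau}}(l-k){{\tilde{x}}^{k}}$ . This is a consistency check of the assumed conventions, not a substitute for the derivation.

```python
# Minimal sketch: consistency check of the assumed sign convention
# dx/dtau = dQ/dq (and dq/dtau = -dQ/dx) for Hamilton's equations
# (324)/(325); at q = 1 the rate equation of k A -> l A is recovered.
import sympy as sp

q, x, gamma, k, l = sp.symbols('q x gamma k l', positive=True)
Q = gamma*(q**l - q**k)*x**k     # transition operator (64), scalar arguments

rate = sp.simplify(sp.diff(Q, q).subs(q, 1))
assert sp.simplify(rate - gamma*(l - k)*x**k) == 0
print(rate)                      # gamma * (l - k) * x**k
```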

Provided that Hamilton's equations (324) and (325) are fulfilled, the action (319) evaluates to

Equation (326)

with the definitions

Equation (327)

Equation (328)

The transition operator $ \Delta {{\mathcal{Q}}_{\tau}}$ absorbs all terms of the Taylor expansion of ${{\mathcal{Q}}_{\tau}}$ that are of second or higher order in the deviations. Combined with (320), the above action results in the following path integral representation of the generating function:

Equation (329)

Although this path integral representation may seem daunting, our primary interest lies only in its leading-order approximation $|g\rangle \approx {{\text{e}}^{-\tilde{\mathcal{S}}}}$ . This approximation is exact if $ \Delta {{\mathcal{Q}}_{\tau}}$ vanishes. That is, for example, the case for the simple growth process $\varnothing \to A$ . For the binary annihilation reaction $2\,A\to \varnothing $ , the leading-order approximation was evaluated by Elgart and Kamenev up to a pre-exponential factor [35]. The pre-exponential factor was later approximated by Assaf and Meerson for large times [36]. In the next section, we evaluate the leading-order approximation for the binary annihilation reaction and compute its pre-exponential factor for arbitrary times.

7.2. Binary annihilation

Let us demonstrate the use of the representation (329) for the binary annihilation reaction $2\,A\to \varnothing $ with rate coefficient ${{\mu}_{\tau}}$ . First, we evaluate the generating function in leading order. Afterwards, the generating function is cast into a probability distribution using the inverse transformation (48). The derivatives involved in the inverse transformation are expressed by Cauchy's differentiation formula, which is evaluated in terms of a saddle-point approximation.

The transition operator of the binary annihilation reaction follows from (64) as ${{\mathcal{Q}}_{\tau}}\left({{c}_{\tau}},{{a}_{\tau}}\right)={{\mu}_{\tau}}\left(1-c_{\tau}^{2}\right)a_{\tau}^{2}$ with the annihilation rate coefficient ${{\mu}_{\tau}}$ . Therefore, Hamilton's equations (324) and (325) read

Equation (330)

Equation (331)

A phase portrait of these equations is shown in figure 9. Equation (331) is solved by $\tilde{q}=1$ , for which the previous equation simplifies to the rate equation ${{\partial}_{\tau}}\tilde{x}=-2{{\mu}_{\tau}}{{\tilde{x}}^{2}}$ of the process. This rate equation is solved by

Equation (332)

with the asymptotic limit ${{\bar{x}}_{\infty}}(t):={{\left(2{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}\right)}^{-1}}\to 0$ . Since Hamilton's equations conserve $\left({{\tilde{q}}^{2}}-1\right){{\tilde{x}}^{2}}$ , one can rewrite equation (331) as

Equation (333)

Figure 9.

Figure 9. Phase portrait of Hamilton's equations (330) and (331) for the rate coefficient ${{\mu}_{\tau}}=1$ . The transition operator ${{\mathcal{Q}}_{\tau}}\left(\tilde{q},\tilde{x}\right)={{\mu}_{\tau}}\left(1-{{\tilde{q}}^{2}}\right){{\tilde{x}}^{2}}$ of the binary annihilation process vanishes for $\tilde{x}=0$ and $\tilde{q}=1$ (orange dotted lines). The green disk represents the fixed point $\left(\tilde{q},\tilde{x}\right)=(1,0)$ of Hamilton's equations. The red dash-dotted line exemplifies a particular solution of Hamilton's equations for a given time interval (t0,t), a given initial value x(t0) and a given final value q(t) (black dashed lines).


Here we assume $\tilde{q}(t)=q(t)<1$ but the derivation can also be performed for q(t)  >  1 (these inequalities are preserved along the flow). The above equation only allows for an implicit solution that provides $\tilde{q}\left({{t}_{0}}\right)$ for a given q(t), namely

Equation (334)

The first term of this equation was neglected in previous studies [35, 36] (for large times t, $\tilde{q}\left({{t}_{0}}\right)\approx 1$ ). Using the conservation of $\left({{\tilde{q}}^{2}}-1\right){{\tilde{x}}^{2}}$ once again, the action in the leading-order approximation $|g{{\rangle}_{q(t)}}={{\text{e}}^{-\tilde{\mathcal{S}}\left(q(t)\right)}}$ can be written as

Equation (335)

We have thus fully specified the generating function in terms of its argument q(t) and the mean x(t0) of the initial Poisson distribution. The leading-order approximation $|g{{\rangle}_{q(t)}}={{\text{e}}^{-\tilde{\mathcal{S}}\left(q(t)\right)}}$ respects the normalization of the underlying probability distribution because

Equation (336)

Here we used that for q(t)  =  1, Hamilton's equation (331) is solved by $\tilde{q}(\tau )=1$ , implying that $\tilde{\mathcal{S}}\left(q(t)=1\right)=0$ .
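Before proceeding, the Hamiltonian flow can also be explored numerically. The sketch below integrates Hamilton's equations (330) and (331) as an initial value problem, using the same assumed sign convention as above, and monitors the conserved quantity $\left({{\tilde{q}}^{2}}-1\right){{\tilde{x}}^{2}}$ along the trajectory; the initial values and the integration window are arbitrary (the actual problem is a boundary value problem with $\tilde{x}\left({{t}_{0}}\right)$ and $\tilde{q}(t)$ given).

```python
# Minimal sketch (arbitrary initial values): Hamilton's equations for
# 2A -> 0 with the assumed convention dq/dtau = -dQ/dx, dx/dtau = dQ/dq,
# where Q(q, x) = mu (1 - q^2) x^2; (q^2 - 1) x^2 is conserved.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0
def hamilton(tau, y):
    q, x = y
    return [-2*mu*(1 - q**2)*x,            # dq/dtau = -dQ/dx
            -2*mu*q*x**2]                  # dx/dtau =  dQ/dq

sol = solve_ivp(hamilton, (0.0, 0.8), [0.8, 2.0], rtol=1e-10, atol=1e-10)
q, x = sol.y
conserved = (q**2 - 1)*x**2
print(conserved.min(), conserved.max())    # constant along the trajectory
```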

The probability distribution follows from the generating function via the inverse transformation $|\,p\left(t,n|{{t}_{0}}\right){{\rangle}_{x\left({{t}_{0}}\right)}}=\frac{1}{n!}\partial _{q(t)}^{n}|g{{\rangle}_{q(t)}}{{|}_{q(t)=0}}$ . This transformation is trivial for n  =  0, for which one obtains the probability of observing an empty system. For other values of n, the derivatives can be expressed by Cauchy's differentiation formula so that in leading order

Equation (337)

Here, $\mathcal{C}$ represents a closed path around zero in the complex domain and is traversed once in the counter-clockwise direction.

The contour integral in (337) can be evaluated in a saddle-point approximation as

Equation (338)

The saddle-point qs is found by solving the equation $n/{{q}_{s}}=-{{\tilde{S}}^{\prime}}\left({{q}_{s}}\right)$ . By differentiating the implicit solution (334) with respect to q(t), the saddle-point condition can be rewritten as

Equation (339)

A closed equation for qs is obtained by combining this equation with the implicit solution (334). The resulting equation is solved by qs   =  1 for $n=\bar{x}(t)$ , i.e. if n lies on the trajectory of the rate equation. For other values of n, the equation has to be solved numerically. Once qs has been obtained, the value $\tilde{q}\left({{t}_{0}}\right)$ can be inferred from (339).

One piece is still missing for the numerical evaluation of the probability distribution (338), namely its denominator. It evaluates to

Equation (340)

Equation (341)

The pre-exponential factor of the distribution (338) computed by Assaf and Meerson [36] is recovered for large times for which ${{\bar{x}}_{\infty}}(t)\to 0$ and thus $\alpha \to 1$ . The above expressions hold both for qs   <  1 and qs   >  1. Care has to be taken in evaluating the limit ${{q}_{s}}\to 1$ . A rather lengthy calculation employing L'Hôpital's rule shows that the left-hand side of (340) evaluates to

Equation (342)

The pre-factor of this expression matches the normalization constant that Elgart and Kamenev inserted by hand [35].

After putting all of the above pieces together, the probability distribution (338) provides a decent approximation of the binary annihilation process. Figure 10 compares this approximation with a distribution obtained through a numerical integration of the master equation. For very large times, the quality of the approximation deteriorates. In particular, the approximation does not capture the final state of the process, in which it is equally likely to find a single surviving particle or none at all. Limiting cases of the approximation (338) are provided in [35, 36].

Figure 10.

Figure 10. Solution of the binary annihilation reaction $2\,A\to \varnothing $ via the numerical integration of the master equation (blue lines) and approximation of the process using the stationary path method in section 7.2 (red circles). The figures in the upper row show the marginalized probability distribution $|\,p(t,n|0)\rangle $ at times (a) t  =  0.001, (b) t  =  0.1, and (c) t  =  1 on a linear scale. The figures in the lower row show the distribution at times (d) t  =  0.001, (e) t  =  0.1, and (f) t  =  1 on a logarithmic scale. The rate coefficient of the process was set to ${{\mu}_{\tau}}=1$ . For the numerical integration, the master equation was truncated at n  =  450. The marginalized distribution was initially of Poisson shape with mean x(0)  =  250. Its leading-order approximation is given in (338).

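For reference, the numerical integration underlying the blue curves in figure 10 can be sketched as follows: the master equation of $2\,A\to \varnothing $ with propensity ${{\mu}_{\tau}}n(n-1)$ (consistent with the rate equation ${{\partial}_{\tau}}\bar{x}=-2{{\mu}_{\tau}}{{\bar{x}}^{2}}$ ) is truncated at n  =  450 and integrated from a Poisson initial distribution with mean 250. This is a minimal re-implementation under the stated assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): truncated master equation of
# 2A -> 0 with propensity mu n (n - 1), integrated from a Poisson initial
# distribution, as used for the reference curves in figure 10.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

n_max, mu, x0, t_final = 450, 1.0, 250.0, 0.001
n = np.arange(n_max + 1)

def master(t, p):
    dp = -mu*n*(n - 1)*p                       # loss: n -> n - 2
    dp[:-2] += mu*n[2:]*(n[2:] - 1)*p[2:]      # gain from state n + 2
    return dp

p0 = poisson.pmf(n, x0)
sol = solve_ivp(master, (0.0, t_final), p0, method='BDF', rtol=1e-8, atol=1e-12)
p_t = sol.y[:, -1]
print(p_t.sum(), (n*p_t).sum())   # normalization ~ 1, mean ~ x0/(1 + 2 mu x0 t)
```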

Although the above evaluation of the saddle-point approximation is rather elaborate, it is still feasible. It becomes infeasible if one goes beyond the leading-order term of the generating function. In the section after the next, we show how higher order terms can be included in a dual approach, which is based on the backward path integral (171).

7.3. Backward path integral approach

The above method can be readily extended to a novel path integral representation of the marginalized distribution

Equation (343)

Unlike in the previous section, no saddle-point approximation is required to evaluate $|\,p\left(t,n|{{t}_{0}}\right){{\rangle}_{x\left({{t}_{0}}\right)}}$ . The resulting path integral can also be evaluated beyond leading order, at least numerically. In the next section, we outline such an evaluation for the binary annihilation reaction $2\,A\to \varnothing $ . On the downside, the leading order of the 'backward' approach does not respect the normalization of $|\,p\left(t,n|{{t}_{0}}\right){{\rangle}_{x\left({{t}_{0}}\right)}}$ . The normalization of the distribution has to be implemented by hand or by evaluating higher order terms.

To represent the marginalized distribution (343) by a path integral, we require that for a given particle number n and for a given initial mean x(t0), there exist functions $\tilde{x}(\tau )$ and $\tilde{q}(\tau )$ fulfilling

Equation (344)

Equation (345)

Since the transition operators $\mathcal{Q}_{\tau}^{\dagger}$ in (122) and ${{\mathcal{Q}}_{\tau}}$ in (64) are connected via $\mathcal{Q}_{\tau}^{\dagger}\left(\tilde{x},\tilde{q}\right)={{\mathcal{Q}}_{\tau}}\left(\tilde{q},\tilde{x}\right)$ , the above equations differ from the previous equations (324) and (325) only in the final condition on $\tilde{q}$ . As shown below, the marginalized distribution (343) can then be expressed as

Equation (346)

Equation (347)

Equation (348)

The variables $ \Delta x$ and $ \Delta q$ again represent deviations from the stationary paths $\tilde{x}$ and $\tilde{q}$ . The transition operator $ \Delta \mathcal{Q}_{\tau}^{\dagger}$ encompasses all the terms of an expansion of $\mathcal{Q}_{\tau}^{\dagger}\left(\zeta \Delta x+\tilde{x},{{\zeta}^{-1}}\text{i} \Delta q+\tilde{q}\right)$ that are of second or higher order in the deviations. A Taylor expansion of the action shows that it is free of terms that are linear in $ \Delta x$ and $ \Delta q$ .

The derivation of the above representation proceeds analogously to section 7.1. It begins by using $\tilde{x}\left({{t}_{0}}\right)=x\left({{t}_{0}}\right)$ and the basis function $|n{{\rangle}_{\tau, \Delta x}}=\frac{1}{n!}{{\left(\zeta \Delta x+\tilde{x}\right)}^{n}}{{\text{e}}^{-\tilde{q}\left(\zeta \Delta x+\tilde{x}\right)}}$ from (125) to rewrite the right-hand side of the marginalized distribution (343) as

Equation (349)

The sum in this expression can be represented by the backward path integral (168), turning the expression into

Equation (350)

Recalling the definition of ${{\mathcal{E}}_{\tau}}$ in (127), the transition operator

Equation (351)

can now be expanded in the deviations. We thereby obtain the action

Equation (352)

The path integral representation (346) follows upon inserting this action into (350) and employing the final condition $\tilde{q}(t)=n/\tilde{x}(t)$ .

7.4. Binary annihilation

To complement the approximation of the binary annihilation reaction $2\,A\to \varnothing $ with rate coefficient ${{\mu}_{\tau}}$ in section 7.2, we now approximate the process using the backward path integral representation (346) of the marginalized distribution. We first evaluate the approximation at leading order, which amounts to computing the pre-factor of (346). For the simple growth process $\varnothing \to A$ and for the linear decay process $A\to \varnothing $ , the pre-factor provides exact solutions.

7.4.1. Leading order.

The evaluation of the leading-order approximation

Equation (353)

proceeds very much like the derivation in section 7.2. Using the adjoint transition operator $\mathcal{Q}_{\tau}^{\dagger}\left({{c}_{\tau}},{{a}_{\tau}}\right)={{\mu}_{\tau}}c_{\tau}^{2}\left(1-a_{\tau}^{2}\right)$ from (122), one finds that Hamilton's equations

Equation (354)

Equation (355)

agree with (330) and (331), apart from the final condition on $\tilde{q}$ . The conservation of $\left({{\tilde{q}}^{2}}-1\right){{\tilde{x}}^{2}}$ and the asymptotic deterministic limit ${{\bar{x}}_{\infty}}(t)={{\left(2{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}\right)}^{-1}}$ can be used to rewrite the action (347) as

Equation (356)

In addition, the conservation law can be used to infer the flow equation

Equation (357)

Here we assume $\tilde{q}(t)<1$ , but the derivation can also be performed for $\tilde{q}(t)>1$ . The equation is implicitly solved by

Equation (358)

For a given initial mean x(t0), this equation can be solved numerically for $\tilde{x}(t)$ , which is then inserted into (353). Upon normalizing the distribution by hand, one obtains a reasonable approximation of the process, whose quality comes close to that of the approximation discussed in section 7.2.
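
A minimal sketch of this numerical solution is given below. Since equation (358) is not reproduced here, the function `implicit_eq` is a stand-in: to keep the example runnable it encodes the deterministic mean-field relation $1/\bar{x}(t)=1/x\left({{t}_{0}}\right)+2{\int}_{{{t}_{0}}}^{t}\text{d}\tau \,{{\mu}_{\tau}}$ of the binary annihilation reaction, and it should be replaced by the actual relation (358).

```python
from scipy.optimize import brentq

mu = 1.0

def implicit_eq(x_tilde, x0, t):
    # Placeholder: deterministic mean-field relation for 2A -> 0
    # with constant mu; NOT equation (358) itself.
    return 1.0 / x_tilde - 1.0 / x0 - 2.0 * mu * t

def solve_x_tilde(x0, t):
    # Bracketing root search between zero and the initial mean.
    return brentq(implicit_eq, 1e-12, x0, args=(x0, t))

print(solve_x_tilde(250.0, 1.0))  # ~0.499, close to 1/(2 mu t) = 0.5
```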

7.4.2. Beyond leading order.

Instead of normalizing the function (353) by hand, it can be normalized by evaluating the path integral (346). In the following, we perform this evaluation, but only after restricting the action $ \Delta {{\mathcal{S}}^{\dagger}}$ in (348) to terms that are of second order in the deviations $ \Delta x$ and $ \Delta q$ , i.e. to

Equation (359)

The corresponding transition operator

Equation (360)

is obtained as explained in section 7.3 (with $\zeta :=\text{i}$ ). Its coefficients read

Equation (361)

The path integral can now be evaluated in one of the following two ways.

On the one hand, the transition operator $ \Delta \mathcal{Q}_{\tau}^{\dagger}$ has the same form as the one in section 4.3 and thus can be treated in the same way. Following appendix D, the path integral (346) can be expressed in terms of the following average over a Wiener process W:

Equation (362)

Here, $ \Delta x(\tau )$ solves the Itô SDE

Equation (363)

with initial value $ \Delta x\left({{t}_{0}}\right)=0$ . Unfortunately, the computation of a sufficient number of sample paths to approximate the average (362) was found to be rather slow.

Alternatively, one can discretize the action $ \Delta {{\mathcal{S}}^{\dagger}}$ and cast it into the quadratic form $ \Delta {{\mathcal{S}}^{\dagger}}=\frac{1}{2}{{\xi}^{\intercal}}A\xi $ with $\xi :={{\left\{ \Delta {{q}_{j}}, \Delta {{x}_{j}}\right\}}_{j=1,\ldots,N}}$ . The matrix A is symmetric and tridiagonal. If A is also positive-definite, the path integral (346) evaluates to $1/\sqrt{\det A}$ . Curiously, we found that this factor matches the average (362) even when A is not positive-definite. We therefore used the factor $1/\sqrt{\det A}$ to correct the approximation (353) and evaluated the resulting expression for various times t. A comparison with distributions obtained via a numerical integration of the master equation is provided in figure 11. The quality of this approximation even exceeds that of the approximation from section 7.2, at least for early times. However, upon approaching the asymptotic limit of the process, it becomes increasingly difficult to solve (358) numerically for a fixed point $\tilde{x}(t)$ .
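
The determinant lends itself to standard tridiagonal routines. Below is a minimal sketch, assuming a positive-definite A; the diagonal d and off-diagonal e are random stand-ins, whereas in the actual computation they follow from discretizing the quadratic action (359).

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def inv_sqrt_det_tridiag(d, e):
    # Eigenvalues of a symmetric tridiagonal matrix; det A = prod(w).
    w = eigh_tridiagonal(d, e, eigvals_only=True)
    # Work in log space so that large N does not overflow;
    # assumes det A > 0 (all eigenvalues positive).
    return np.exp(-0.5 * np.sum(np.log(w)))

rng = np.random.default_rng(0)
d = 2.0 + rng.random(200)   # stand-in diagonal entries
e = -0.5 * np.ones(199)     # stand-in off-diagonal entries
print(inv_sqrt_det_tridiag(d, e))
```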

Figure 11. Solution of the binary annihilation reaction $2\,A\to \varnothing $ via the numerical integration of the master equation (blue lines) and approximation of the process using the stationary path method in section 7.4 (red circles). The figures in the upper row show the marginalized probability distribution $|\,p(t,n|0)\rangle $ at times (a) t  =  0.001, (b) t  =  0.1, and (c) t  =  1 on a linear scale. The figures in the lower row show the distribution at times (d) t  =  0.001, (e) t  =  0.1, and (f) t  =  1 on a logarithmic scale. The rate coefficient of the process was set to ${{\mu}_{\tau}}=1$ . For the numerical integration, the master equation was truncated at n  =  450. The marginalized distribution was initially of Poisson shape with mean x(0)  =  250. Its leading-order approximation (353) was corrected by the factor $1/\sqrt{\det A}$ (see section 7.4.2). The numerical solution of the fixed point equation (358) failed for large times (missing circles in (f)).


7.5. Résumé

The solution of a master equation can be approximated by expanding its forward or backward path integral representations around stationary paths of their respective actions. Such an expansion proves particularly useful in the approximation of 'rare-event' probabilities in a distribution's tail. Algorithms such as the stochastic simulation algorithm (SSA) of Gillespie typically perform poorly for this purpose. Whereas the expansion of the forward path integral representation provides the ordinary probability generating function (314) as an intermediate step, the expansion of the backward path integral representation provides the marginalized distribution (343). In both cases, the stationary paths obey differential equations resembling Hamilton's equations from classical mechanics. In sections 7.2 and 7.4, we demonstrated the two approaches in approximating the binary annihilation reaction $2\,A\to \varnothing $ . Our approximation based on the forward path integral amends an earlier computation of Elgart and Kamenev [35]. The computation requires a saddle-point approximation of Cauchy's differentiation formula to extract probabilities from the generating function. The advantage of the approach lies in the fact that its leading order term respects the normalization of the underlying probability distribution. The backward approach does not require the auxiliary saddle-point approximation but its leading order term is not normalized. In the approximation of the binary annihilation reaction, we demonstrated how the expansion of the backward path integral can be evaluated beyond leading order. Future studies are needed to explore whether the two approaches are helpful in analysing processes with multiple types of particles and spatial degrees of freedom. The efficient evaluation of contributions beyond the leading order approximations also remains a challenge.

8. Summary and outlook

On sufficiently coarse time and length scales, many complex systems appear to evolve through finite jumps. Such a jump may be the production of an mRNA or protein in a gene regulatory network [138, 139, 142], the step of a molecular motor along a cytoskeletal filament [113–115], or the flipping of a spin [134–137]. The stochastic evolution of such processes is commonly modelled in terms of a master equation [4, 5]. This equation provides a generic description of a system's stochastic evolution under the following three conditions: First, time proceeds continuously. Second, the system's future is fully determined by the system's present state. Third, changes of the system's state proceed via discontinuous jumps.

In this work, we reviewed the mathematical theory of master equations and discussed analytical and numerical methods for their solution. Special attention was paid to methods that apply even when stochastic fluctuations are strong and to the representation of master equations by path integrals. In the following, we provide brief summaries of the discussed methods, which all have their own merits and limitations. Information on complementary approximation methods can be found in the text books [98, 110] and in the review [274].

The stochastic simulation algorithm (section 1.4)

The SSA [75–78] and its variations [78, 247–249] come closest to being all-purpose tools in solving the (forward) master equation numerically. The SSA enables the computation of sample paths with the correct probability of occurrence. Since these paths are statistically independent of one another, their computation can be easily distributed to individual processing units. Besides providing insight into a system's 'typical' dynamics, the sample paths can be used to compute a histogram approximation of the master equation's solution or to approximate observables. In principle, the SSA can be applied to arbitrarily complex systems with multiple types of particles and possibly spatial degrees of freedom. Consequently, the algorithm is commonly used in biological studies [138, 139, 142] and extensions of it have been implemented in various simulation packages [250–257]. A fast but only approximate alternative to the SSA for the simulation of processes evolving on multiple time scales is τ-leaping [78, 249, 258–267]. Algorithms for the simulation of processes with time-dependent transition rates are discussed in [268, 269]. Let us note, however, that for small systems, a direct numerical integration of the master equation may be more efficient than the use of the SSA.
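
As an illustration, a minimal sketch of Gillespie's direct method for the binary annihilation reaction $2\,A\to \varnothing $ (propensity ${{\mu}_{\tau}}n(n-1)$ , jump $n\to n-2$ ) reads as follows; systems with several reaction channels additionally draw a second random number to select the channel.

```python
import numpy as np

def ssa_binary_annihilation(n0, t_final, mu=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    path = [(t, n)]
    while n > 1:                       # propensity vanishes for n < 2
        a = mu * n * (n - 1)           # propensity of 2A -> 0
        t += rng.exponential(1.0 / a)  # exponential waiting time
        if t > t_final:
            break
        n -= 2                         # execute the reaction
        path.append((t, n))
    return path

print(ssa_binary_annihilation(250, 1.0)[-1])  # state after the last jump
```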

Alternative path summation algorithms (section 1.4)

Given enough time, the SSA could be used to generate every possible sample path of a process (at least if the state space is finite). If every path is recorded just once, the corresponding histogram approximation of the conditional probability distribution $p\left(\tau,n|{{\tau}_{0}},{{n}_{0}}\right)$ would agree with the 'path summation representation' (19). Various alternatives to the SSA have been proposed for the numerical evaluation of this representation, mostly based on its Laplace transform [238–243]. Thus far, these algorithms have remained restricted to rather simple systems. The analytical evaluation of the sums and convolutions that are involved proves to be a challenge.

Exponentiation and uniformization (section 1.4)

If the state space of a system is finite, the (forward) master equation ${{\partial}_{\tau}}p\left(\tau |{{\tau}_{0}}\right)=Qp\left(\tau |{{\tau}_{0}}\right)$ is solved by the matrix exponential $p\left(\tau |{{\tau}_{0}}\right)={{\text{e}}^{Q\left(\tau -{{\tau}_{0}}\right)}}{\mathbb {1}}$ . Here, Q represents the transition matrix of a process and $p\left(\tau |{{\tau}_{0}}\right)$ the matrix of the conditional probabilities $p\left(\tau,n|{{\tau}_{0}},{{n}_{0}}\right)$ . If the state space of the system is countably infinite but the exit rates from all states are bounded (i.e. ${{\sup}_{m}}|Q(m,m)|<\infty $ ), the conditional probability distribution can be represented by the 'uniformization' formula (16). As the evaluation of this formula involves the computation of an infinite sum of matrix powers, its use is restricted to small systems and to systems whose transition matrices exhibit special symmetries. The same applies to the evaluation of the above matrix exponential (note, however, the projection techniques proposed in [219, 220]).
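
A minimal sketch of the matrix-exponential solution for the truncated binary annihilation reaction is given below; the truncation level and the Poisson initial condition are illustrative choices, and for larger state spaces the uniformization formula (16) or the projection techniques cited above are preferable.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

mu, n_max = 1.0, 60
n = np.arange(n_max + 1)

# Transition matrix of 2A -> 0: loss on the diagonal, gain from n + 2.
Q = np.diag(-mu * n * (n - 1.0))
Q[n[:-2], n[2:]] = mu * n[2:] * (n[2:] - 1.0)

p0 = poisson.pmf(n, 20.0)      # truncated Poisson initial condition
p = expm(Q * 0.1) @ p0         # p(tau) = exp(Q tau) p(0)
print(p.sum(), (n * p).sum())  # normalization and mean particle number
```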

Flow equations (sections 2 and 3)

For the chemical master equation (24) of the reaction $k\,A\to l\,A$ with rate coefficient ${{\gamma}_{\tau}}$ , it is readily shown that the probability generating function

Equation (364)

with basis function $|n{{\rangle}_{q}}={{q}^{n}}$ obeys the linear partial differential equation, or 'flow equation'

Equation (365)

Analogously, the marginalized distribution

Equation (366)

with Poisson basis function $|{{n}_{0}}{{\rangle}_{x}}=\frac{{{x}^{{{n}_{0}}}}{{\text{e}}^{-x}}}{{{n}_{0}}!}$ obeys the backward-time flow equation

Equation (367)

After one has solved one of these equations, the conditional probability distribution can be recovered via the inverse transformations $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)=\langle n|g\left(\tau |{{t}_{0}},{{n}_{0}}\right)\rangle $ or $p\left(t,n|\tau,{{n}_{0}}\right)=\langle {{n}_{0}}|\,p(t,n|\tau )\rangle $ . In sections 2 and 3, we generalized the above approaches and formulated conditions under which the forward and backward master equations can be transformed into flow equations. Besides the flow equations obeyed by the generating function and the marginalized distribution, we also derived a flow equation obeyed by the generating functional

Equation (368)

For the Poisson basis function $|n{{\rangle}_{x}}=\frac{{{x}^{n}}{{\text{e}}^{-x}}}{n!}$ , the inverse transformation $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)=\langle g\left(\tau |{{t}_{0}},{{n}_{0}}\right)|n\rangle $ of this functional recovers the Poisson representation of Gardiner and Chaturvedi [32, 33]. The Poisson representation can, for example, be used for the computation of mean extinction times [374], but we found the marginalized distribution to be more convenient for this purpose (see section 3.3).
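
As a concrete illustration of the flow equation (365), consider the linear decay reaction $A\to \varnothing $ (k  =  1, l  =  0), for which the equation reduces to ${{\partial}_{\tau}}g={{\gamma}_{\tau}}(1-q){{\partial}_{q}}g$ . The sketch below verifies symbolically that the binomial generating function of n0 independently decaying particles solves this equation; the explicit solution is a textbook result rather than one derived above.

```python
import sympy as sp

q, tau, t0, gamma = sp.symbols("q tau t_0 gamma", positive=True)
n0 = sp.Symbol("n_0", positive=True, integer=True)

# Candidate solution: generating function of a binomial distribution,
# i.e. of n0 particles decaying independently with rate gamma.
g = (1 + (q - 1) * sp.exp(-gamma * (tau - t0))) ** n0

# Flow equation for A -> 0: d_tau g = gamma (1 - q) d_q g.
residual = sp.diff(g, tau) - gamma * (1 - q) * sp.diff(g, q)
print(sp.simplify(residual))  # prints 0
```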

Thus far, most methods that have been proposed for the analysis of the generating function's flow equation have only been applied to systems without spatial degrees of freedom. These methods include a variational approach [39, 83], spectral formulations and WKB approximations [34, 36–38, 328, 364, 366, 367, 370, 427]. We outlined some of these methods in section 2.3. 'Real-space' WKB approximations, which employ an exponential ansatz for the probability distribution rather than for the generating function, were not discussed in this review (information on these approximations can be found in [63, 64, 146, 171, 172, 177, 354–360, 362–364]). WKB approximations often prove helpful in computing a mean extinction time or a (quasi)stationary probability distribution. Future studies could explore whether the flow equation obeyed by the marginalized distribution can be evaluated in terms of a WKB approximation or using spectral methods. Moreover, further research is needed to investigate whether the above methods may be helpful in studying processes with spatial degrees of freedom and multiple types of particles. For that purpose, it will be crucial to specify satisfactory boundary conditions for the flow equations obeyed by the generating function and the marginalized distribution (see [366] for a discussion of 'lacking' boundary conditions in a study of the branching-annihilation reaction $A\to 2\,A$ and $2\,A\to \varnothing $ ).

Recently, we have been made aware of novel ways of treating the generating function's flow equation based on duality relations [428–432]. These approaches are not yet covered here.

Stochastic path integrals (sections 46)

Path integral representations of the master equation have proved invaluable tools in gaining analytical and numerical insight into the behaviour of stochastic processes, particularly in the vicinity of phase transitions [14, 69, 70, 96, 130–133, 162, 293–319, 321, 323–325, 400, 422–425]. A classical example of such a phase transition is the transition between an active and an absorbing state of a system [68]. In section 6, we showed that for a process whose initial number of particles is Poisson distributed with mean x(t0), the average of an observable A(n) at time t can be represented by the path integral

Equation (369)

Equation (370)

The above path integral can be derived both from the novel backward path integral representation

Equation (371)

of the conditional probability distribution (see section 4), and from the forward path integral representation

Equation (372)

(see section 5). The meanings of the integral signs and of the actions $\mathcal{S}$ and ${{\mathcal{S}}^{\dagger}}$ were explained in the respective sections. We derived both of the above representations of the conditional probability distribution from the flow equations discussed in the previous paragraph. An extension of the path integrals to systems with multiple types of particles or spatial degrees of freedom is straightforward. We demonstrated the use of the integrals in solving various elementary processes, including the pair generation process and a process with linear decay of diffusing particles. Although we did not discuss the application of renormalization group techniques, we showed how the above path integrals can be evaluated in terms of perturbation expansions using Feynman diagrams. Information on perturbative renormalization group techniques can be found in [69, 70], information on non-perturbative techniques in [410]. Besides the above path integrals, we showed how one can derive path integral representations for processes with continuous state spaces. These path integrals were based on Kramers–Moyal expansions of the respective backward and forward master equations. Upon truncating the expansion of the backward master equation at the level of a diffusion approximation, we recovered a classic path integral representation of the (backward) Fokker–Planck equation [1619]. We hope that our exposition of the path integrals helps in developing new methods for the analysis of stochastic processes. Future studies may focus directly on the backward or forward path integral representations (371) and (372) of the conditional probability distribution, or use the representation (369) to explore how (arbitrarily high) moments of the particle number behave in the vicinity of phase transitions.

Stationary path approximations (section 7)

Elgart and Kamenev recently showed how the forward path integral representation (372) can be approximated by expanding its action around 'stationary paths' [35]. The paths obey Hamilton's equations of the form

Equation (373)

with the transition operator ${{\mathcal{Q}}_{\tau}}$ acting as the 'Hamiltonian'. In section 7, we reviewed this 'stationary path method' and showed how it can be extended to the backward path integral representation (371). We found that this backward approach does not require an auxiliary saddle-point approximation if the number of particles is initially Poisson distributed, but that a proper normalization of the probability distribution is only attained beyond leading order. Future work is needed to apply the method to systems with spatial degrees of freedom and multiple types of particles. The latter point also applies to the classification of phase transitions based on phase-space trajectories of the equations (373) as has been proposed in [320].
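
As an illustration of equations (373), the sketch below integrates Hamilton's equations for the binary annihilation reaction with Hamiltonian ${{\mathcal{Q}}_{\tau}}(q,x)={{\mu}_{\tau}}{{x}^{2}}\left(1-{{q}^{2}}\right)$ and checks the conservation of $\left({{q}^{2}}-1\right){{x}^{2}}$ noted in section 7.4. The sign convention ${{\partial}_{\tau}}x=\partial \mathcal{Q}/\partial q$ , ${{\partial}_{\tau}}q=-\partial \mathcal{Q}/\partial x$ is our assumption for this sketch and may differ from (373) by signs; it reproduces the deterministic rate equation for q  =  1.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def hamilton(t, y):
    q, x = y
    dq = -2.0 * mu * x * (1.0 - q ** 2)  # dq/dtau = -dQ/dx
    dx = -2.0 * mu * q * x ** 2          # dx/dtau =  dQ/dq
    return [dq, dx]

sol = solve_ivp(hamilton, (0.0, 0.05), [0.9, 10.0],
                rtol=1e-10, atol=1e-12)
q, x = sol.y
print((q ** 2 - 1.0) * x ** 2)  # approximately constant (here -19)
```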

We hope that our review of the above methods inspires future research on master equations and that it helps researchers who are new to the field of stochastic processes to become acquainted with the theory of stochastic path integrals.

Acknowledgments

For critical reading of a pre-submission manuscript and for providing valuable feedback, we would like to thank Michael Assaf, Léonie Canet, Alexander Dobrinevski, Nigel Goldenfeld, Peter Grassberger, Baruch Meerson, Ralf Metzler, Mauro Mobilia, Harold P. de Vladar, Herbert Wagner, and Royce Zia. Moreover, we would like to thank Johannes Knebel, Isabella Krämer, and Cornelius Weig for helpful discussions during the preparation of the manuscript. This research was financially supported by the German Excellence Initiative via the program 'NanoSystems Initiative Munich' (NIM).

Appendix A.: Proof of the Feynman–Kac formula in section 1.3

Here we provide a brief proof of a special case of the Feynman–Kac, or Kolmogorov, formula (see section 4.3.5 in [110]). In particular, we assume that for $\tau \in \left[{{t}_{0}},t\right]$ , a function $u(\tau,x)$ obeys the linear PDE

Equation (A.1)

Equation (A.2)

The function ${{\alpha}_{\tau}}$ is called a drift coefficient and ${{\beta}_{\tau}}$ a diffusion coefficient. According to the Feynman–Kac formula, the above PDE is solved by

Equation (A.3)

with ${{\langle \langle \centerdot \rangle \rangle}_{W}}$ representing an average over realizations of the Wiener process W. The function X(s) with $s\in [\tau,t]$ and $\tau \geqslant {{t}_{0}}$ represents a solution of the Itô stochastic differential equation (SDE)

Equation (A.4)

Equation (A.5)

If the initial value x of the SDE is chosen as a real number, the drift coefficient ${{\alpha}_{s}}$ is a real function, and the diffusion coefficient ${{\beta}_{s}}$ is a real non-negative function, the 'sample path' X(s) assumes only real values along its temporal evolution. In a multivariate extension of the Feynman–Kac formula, one requires a matrix $\sqrt{{{\beta}_{s}}}:={{\gamma}_{s}}$ fulfilling ${{\gamma}_{s}}\gamma _{s}^{\intercal}={{\beta}_{s}}$ . If the matrix ${{\beta}_{s}}$ is symmetric and positive-semidefinite, one may choose ${{\gamma}_{s}}$ as its unique symmetric and positive-semidefinite square root [111].
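
A minimal numerical sketch of the formula (A.3) is given below: sample paths of the SDE (A.4) are generated with the Euler–Maruyama scheme and averaged. Drift, diffusion, and final condition G are illustrative stand-ins (an Ornstein–Uhlenbeck drift $\alpha (x)=-x$ , constant diffusion $\beta =1$ , and $G(x)={{x}^{2}}$ ), chosen so that the exact value is known.

```python
import numpy as np

def feynman_kac(x, tau, t, G, alpha, beta,
                n_paths=100_000, n_steps=1_000, seed=1):
    rng = np.random.default_rng(seed)
    dt = (t - tau) / n_steps
    X = np.full(n_paths, float(x))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X += alpha(X) * dt + np.sqrt(beta(X)) * dW  # Euler-Maruyama step
    return G(X).mean()                              # u(tau, x) = <<G(X(t))>>_W

u = feynman_kac(1.0, 0.0, 1.0, G=lambda x: x ** 2,
                alpha=lambda x: -x, beta=lambda x: np.ones_like(x))
print(u)  # exact value: exp(-2) + (1 - exp(-2)) / 2 ~ 0.568
```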

The solution (3) of the backward Fokker–Planck equation (2) constitutes a special case of the above formula. There, the independent variable is x0 instead of x, and $u\left(\tau,{{x}_{0}}\right)$ is the conditional probability distribution $p\left(t,x|\tau,{{x}_{0}}\right)$ (with x being an arbitrary parameter). The final value of the distribution is $G\left({{x}_{0}}\right)=\delta \left(x-{{x}_{0}}\right)$ . The Feynman–Kac formula (A.3) then implies that ${{\langle \langle \delta \left(x-X(t)\right)\rangle \rangle}_{W}}$ solves the backward Fokker–Planck equation (2). Note that in the main text, we use a small letter to denote the sample path.

To prove the Feynman–Kac formula (A.3), we assume that X(s) solves the SDE (A.4). By Itô's Lemma and the PDE (A.1), it then holds that

Equation (A.6)

As the next step, we integrate this differential from $s=\tau $ to s  =  t and average the result over realizations of the Wiener process W. Since ${{\langle \langle \text{d}W\rangle \rangle}_{W}}=0$ , it follows that ${{\langle \langle u\left(\tau,X(\tau )\right)\rangle \rangle}_{W}}={{\langle \langle u\left(t,X(t)\right)\rangle \rangle}_{W}}$ . This expression coincides with the Feynman–Kac formula (A.3) because the initial condition (A.5) implies that $u\left(\tau,X(\tau )\right)=u(\tau,x)$ does not depend on the Wiener process, and because the final condition (A.2) implies that u(t,X(t))  =  G(X(t)).

Appendix B.: Proof of the path summation representation in section 1.4

In the following, we prove that the path summation representation (19) solves the forward master equation (11) if the transition rate w(n,m) is independent of time. Hence, the process is homogeneous in time and we may choose t0  :=  0. After defining $d(n,m):={{\delta}_{n,m}}w(m)={{\delta}_{n,m}}{\sum}_{k}w(k,m)$ , we first rewrite the master equation as

Equation (B.1)

with the probability vector $\boldsymbol{p}\left(\tau |{{t}_{0}},{{n}_{0}}\right)$ . Note that the matrix notation assumes some mapping between the state space of n and an index set $I\subset \mathbb{N}$ . Following [239], the Laplace transform

Equation (B.2)

of the above master equation is given by

Equation (B.3)

Here, $\boldsymbol{p}\left(0|0,{{n}_{0}}\right)={{\widehat{\boldsymbol{e}}}_{{{n}_{0}}}}$ represents a unit vector pointing in direction n0. Note that s is a scalar but that w and d are matrices. Making use of the unit matrix ${\mathbb {1}}$ , the above expression can be solved for $\mathcal{L}\boldsymbol{p}(s|\centerdot )$ and be rewritten as

Equation (B.4)

The value of the real part of s is determined by the parameter ε in the inverse Laplace transformation

Equation (B.5)

We assume that ε can be chosen so large that the first factor in the solution (B.4) can be rewritten as a geometric series, i.e. as

Equation (B.6)

This expression may be simplified by noting that

Equation (B.7)

By defining ${\sum}_{\left\{{{\mathcal{P}}_{J}}\right\}}:={\sum}_{{{n}_{1}}}\cdots {\sum}_{{{n}_{J-1}}}$ , (B.6) becomes

Equation (B.8)

Since an exponential function $f(\tau ):={{\text{e}}^{-\alpha \tau}} \Theta (\tau )$ transforms as $\mathcal{L}f(s)={{(s+\alpha )}^{-1}}$ and since the Laplace transform converts convolutions into products, we thus find that (B.8) agrees with the Laplace transform of the path summation representation (19).

Appendix C.: Solution of the random walk in sections 2.2.1 and 3.2.1

Here we provide the conditional probability distribution $p\left(\tau,n|{{t}_{0}},{{n}_{0}}\right)$ solving the random walk from sections 2.2.1 and 3.2.1. In the first of these sections, we showed that this distribution is obtained by applying the functional $\langle n|\,f={\int}_{-\pi}^{\pi}\frac{\text{d}q}{2\pi}{{\text{e}}^{-\text{i}nq}}f(q)$ to the generating function $|g\left(\tau |{{t}_{0}},{{n}_{0}}\right)\rangle $ , which we derived as

Multiple series expansions show that this expression can be rewritten as ${{\text{e}}^{-{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}+{{l}_{s}}\right)}}$ times

We ignore the constant pre-factor ${{\text{e}}^{-{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}+{{l}_{s}}\right)}}$ for now and apply the functional $\langle n|$ to this expression. After carefully noting the restrictions imposed by Kronecker deltas, the resulting expression reads

Equation (C.1)

This sum can be expressed by a modified Bessel function of the first kind (10.25.2 in [335]), resulting in

Multiplied by ${{\text{e}}^{-{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}+{{l}_{s}}\right)}}$ , this expression corresponds to a Skellam distribution with mean $\mu ={{n}_{0}}+{\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}-{{l}_{s}}\right)$ and variance ${{\sigma}^{2}}={\int}_{{{t}_{0}}}^{\tau}\text{d}s\,\left({{r}_{s}}+{{l}_{s}}\right)$ .
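
The identification with a Skellam distribution can be checked numerically; the brief sketch below (our own, not part of the original derivation) compares the Bessel-function expression with the Skellam probability mass function of SciPy for constant hopping rates r and l.

```python
import numpy as np
from scipy.stats import skellam
from scipy.special import ive

r, l, tau, n0 = 2.0, 1.0, 3.0, 5
mu1, mu2 = r * tau, l * tau   # integrated right/left hopping rates
n = np.arange(-10, 31)
k = n - n0                    # net displacement n - n0

# p(k) = exp(-(mu1+mu2)) (mu1/mu2)^(k/2) I_k(2 sqrt(mu1 mu2));
# ive is the exponentially scaled Bessel function I_k(z) exp(-|z|).
z = 2.0 * np.sqrt(mu1 * mu2)
p_bessel = (np.exp(-(mu1 + mu2) + z)
            * (mu1 / mu2) ** (k / 2.0) * ive(k, z))
print(np.allclose(p_bessel, skellam.pmf(k, mu1, mu2)))  # True
```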

Appendix D.: Details on the evaluation of the backward path integral representation in section 4.3

In the following, we fill in the missing steps from section 4.3 and rewrite the backward path integral representation (171) of the marginalized distribution in terms of the (Q,X)-generating functional (179). Upon comparing the discrete-time approximation of the marginalized distribution (168) with its representation in (177), one observes that the corresponding (Q,X)-generating functional should read

Equation (D.1)

Equation (D.2)

A differentiation of (D.1) with respect to $ \Delta t\,{{Q}_{j}}$ generates a factor xj−1, a differentiation with respect to $ \Delta t\,{{X}_{j-1}}$ a factor $\text{i}{{q}_{j}}$ . Upon recalling the definition of the action $\mathcal{S}_{N}^{\dagger}$ in (169), the first exponential in (D.1) can be rewritten via

Equation (D.3)

Equation (D.4)

The equality holds up to corrections of $\text{O}(\Delta t)$ (because of the sums, all remaining terms are of $\text{O}(1)$ ). Note that the right-hand side of (D.4) is independent of ${{x}_{N}}$ . One can linearize the terms that are quadratic in ${{q}_{j}}$ by the completion of a square. In particular, we write

Equation (D.5)

with the average being defined as

Equation (D.6)

The average employs the Gaussian distribution

Equation (D.7)

with zero mean and variance $ \Delta t$ (see (200)). The sequence $ \Delta {{W}_{1}},\ldots, \Delta {{W}_{N-1}}$ can be interpreted as the steps of a discretized Wiener process. To proceed with the derivation, we now move the perturbation operator ${{\mathcal{P}}^{\dagger}}$ to the front of the (Q,X)-generating functional (D.1) by writing

Equation (D.8)

Upon combining all of the above steps, one finds that

Equation (D.9)

with the definition

Equation (D.10)

The sequence of Dirac delta functions implies that ${{x}_{j}}$ depends on ${{X}_{i}}$ only for i  <  j. This property has to be kept in mind when the functional derivatives in (D.9) are evaluated in continuous time. In the continuous-time limit, i.e. for $N\to \infty $ and $ \Delta t\to 0$ , one recovers the marginalized distribution (177) with generating functional (179) and Itô SDE (181).
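
The linearization step (D.5) rests on the Gaussian identity ${{\langle \langle {{\text{e}}^{c \Delta W}}\rangle \rangle}_{W}}={{\text{e}}^{{{c}^{2}} \Delta t/2}}$ for $ \Delta W\sim \mathcal{N}(0, \Delta t)$ , which the sampling check below confirms; the constant c and the sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, c = 0.01, 3.0
dW = rng.normal(0.0, np.sqrt(dt), 1_000_000)  # steps of variance dt
# Sample average of exp(c dW) versus the exact value exp(c^2 dt / 2).
print(np.exp(c * dW).mean(), np.exp(c ** 2 * dt / 2.0))
```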

Footnotes

  • Note that we do not distinguish between random variables and their outcomes. Moreover, we stick to the physicists' convention of ordering times in descending order. In the mathematical literature, the reverse order is more common, see e.g. [4].

  • Due to the central importance of the Feynman–Kac formula, we provide a brief proof of it in appendix A. We also encounter the formula in section 4 in evaluating a path integral representation of the (backward) master equation.

  • The Langevin equation corresponding to the SDE (4) reads ${{\partial}_{s}}x(s)={{\alpha}_{s}}\left(x(s)\right)+\sqrt{{{\beta}_{s}}\left(x(s)\right)}\eta (s)$ , with the Gaussian white noise $\eta (s)$ having zero mean and the auto-correlation function $\langle \eta (s)\eta \left({{s}^{\prime}}\right)\rangle =\delta \left(s-{{s}^{\prime}}\right)$ .

  • Just as a population whose growth is described by the deterministic equation ${{\partial}_{\tau}}n={{n}^{2}}$ explodes after a finite time, so does a population whose growth is described by the master equation (11) with transition rate $w(n,m)={{\delta}_{n,m+1}}m(m-1)$ [117]. This transition rate models the elementary reaction $2\,A\to 3\,A$ as explained in section 1.5. An explosion also occurs for the rate $w(n,m)={{\delta}_{n,m+1}}{{m}^{2}}$ .

  • As before, sums whose range is not specified cover the whole state space.

  • Later, in our derivation of the forward path integral in section 5.1, we employ the finite difference approximation

    This scheme conforms with the derivation of the forward master equation (10).

  • Use ${{\left({{\partial}_{q}}\,+\,\zeta \tilde{{x}}\,\right)}^{n}}f(q){{|}_{q=-\tilde{{q}}\,/\zeta}}=\partial _{q}^{n}\left({{\text{e}}^{\tilde{{x}}\,\left(\zeta q+\tilde{{q}}\,\right)}}f(q)\right){{|}_{q=-\tilde{{q}}\,/\zeta}}$ and the analyticity of ${{\text{e}}^{\tilde{x}\left(\zeta q+\tilde{q}\right)}}f(q)$ for an analytic test function f.

  • In the derivation of the backward path integral in section 4.1, we use the finite difference approximation

    The discretization scheme conforms with the derivation of the backward master equation (12).

  • This discretization of time conforms with the Itô prescription for forward-time SDEs and with the derivation of the backward master equation (12) (see section 4.3 and also the footnote on page 25).

  • With ${{{\mathsf {N}}}}_{\boldsymbol{r}}$ denoting the neighbouring lattice site of $\boldsymbol{r}\in {{\left(l\mathbb{Z}\right)}^{d}}$ , the ordinary Laplace operator follows as the limit of a finite-difference approximation, i.e. as

    Equation (223)

  • If Hamilton's equations are fulfilled, it holds that

    Here, $ \Delta {{\mathcal{Q}}_{\tau}}$ represents all terms of the Taylor expansion of ${{\mathcal{Q}}_{\tau}}$ that are of second or higher order in the deviations.


Biographies

Markus F Weber

Markus F Weber received his PhD in Physics at the Ludwig-Maximilians-Universität München in 2016. He conducted his doctoral research in the group of Erwin Frey. The present review arose out of this work. He is interested in the mathematical theory of stochastic processes and in the mathematical modelling of biological systems. His recent articles focus on bacterial range expansions, evolutionary game theory, and on a non-equilibrium condensation phenomenon of bosons.

Erwin Frey

Erwin Frey received his PhD in Physics at the Technische Universität München in 1989 and was Postdoctoral Fellow at Harvard University. Since 2001, he has been Professor of Theoretical Physics at the Ludwig-Maximilians-Universität München. He is interested in how the interplay between stochastic fluctuations, interactions, and geometry shapes system-level properties and their functional characteristics. A central goal of his research is to identify and characterize universal action principles in biological systems.
