
Linear response theory for quantum Gaussian processes


Published 21 August 2019 © 2019 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Mohammad Mehboudi et al 2019 New J. Phys. 21 083036. DOI: 10.1088/1367-2630/ab30f4


Abstract

Fluctuation dissipation theorems (FDTs) connect the linear response of a physical system to a perturbation with its steady-state correlation functions. Until now, most of these theorems have been derived for finite-dimensional systems. However, many relevant physical processes are described by infinite-dimensional systems in the Gaussian regime. In this work, we develop a linear response theory for quantum Gaussian systems subject to time-dependent Gaussian channels. In particular, we establish an FDT for the covariance matrix that connects its linear response at any time to the steady-state two-time correlations. The theorem covers non-equilibrium scenarios, as it does not require the steady state to be at thermal equilibrium. We further show how our results simplify the study of Gaussian systems subject to a time-dependent Lindbladian master equation. Finally, we illustrate the use of our scheme through some examples. Due to the broad generality of the Gaussian formalism, we expect our results to find applications in many physical platforms, such as opto-mechanical systems in the presence of external noise or driven quantum heat devices.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Fluctuation dissipation theorems (FDTs) provide powerful tools to study the linear response of physical systems close to their steady state. The aim of such theorems is to establish and quantify a connection between (i) the linear response of the system under study to a (time-dependent) perturbation, and (ii) the steady-state correlation functions. Different versions of the FDT exist, depending on whether the system under study is classical [1–6] or quantum [7–13], and on whether the steady state is thermal [7] or a generic non-equilibrium steady state [5, 6, 8]. Response functions have been used to estimate noise [14, 15], to study topological insulators [16], and to witness and quantify non-Markovianity of quantum systems [17]. For a thermal system, or a thermal system subject to a quench, the FDT is connected to the quantum Fisher information [18, 19]. Since the quantum Fisher information is a witness of multipartite entanglement [20–22], one can exploit its connection to the FDT in order to detect multipartite entanglement close to thermal equilibrium. Some recent works report violations of FDTs under certain circumstances [23, 24].

To date, the majority of theoretical works in the quantum domain have focused on finite-dimensional systems close to thermal equilibrium. Many physical processes of interest, however, are described by continuous-variable systems in the Gaussian regime, with an infinite-dimensional Hilbert space. These systems also find applications in quantum information technologies and, in fact, have been successfully used for quantum teleportation [25], for creating cluster states with an enormous number of entangled modes [26], and for secure quantum key distribution [27]. It is therefore relevant and timely to establish FDTs for quantum Gaussian systems. These theorems should be phrased in terms of the natural tools used to describe Gaussian systems, based on first and second moments rather than density matrices.

In this work we address these issues and provide a linear response theory for Gaussian continuous-variable quantum systems. More specifically, we consider processes described by Gaussian quantum channels and derive the linear response of the covariance matrix. The formalism can be applied in many different scenarios: it covers time-dependent perturbations and, since it does not assume the initial state to be thermal, non-equilibrium situations as well.

The structure of the article is as follows: in section 2 we review quantum Gaussian channels. The aim of this section is to provide the minimal necessary tools for this study; for more about Gaussian quantum channels see [28, 29] and the references therein. In section 3 we set our framework and present the main results. We prove these results in section 4. In section 5, we discuss the application of our theorem for those cases in which the channel is described by a Lindbladian master equation. We show how to use our results in section 6 through some examples. Finally, in section 7 we conclude and discuss future directions.

2. Definition of the Gaussian scenario

We consider N-mode bosonic systems with the quadrature vector $R\equiv {\left({q}_{1},...{q}_{N};{p}_{1},...{p}_{N}\right)}^{T}$. Here and throughout the article, T stands for transpose, and the elements qi and pi represent the position and momentum of the ith mode, respectively. The quadratures respect the bosonic algebra: [Ri, Rj] = Ωi, j, where Ω is the symplectic matrix

Equation (1)

In our notation, ${{\mathbb{0}}}_{N}$ is an N × N matrix of zeros, while ${{\mathbb{I}}}_{N}$ is the identity matrix of size N. In the rest of this work we set ${\hslash }=1$ unless otherwise mentioned. We recall that, by definition, Gaussian systems are those with a Gaussian characteristic function [28, 29]. In turn, the characteristic function, denoted by $\chi (\eta )$, reads as:

Equation (2)

with ρ being the density matrix, ${W}_{\eta }\equiv {{\rm{e}}}^{-{\eta }^{T}{\rm{\Omega }}R}$ the Weyl operator, and $\eta \in {{\mathbb{R}}}^{2N}$ a phase-space vector. Accordingly, the characteristic function of a Gaussian system takes the following form:

Equation (3)

Here, we use the displacement vector, denoted by d, and the covariance matrix, denoted by σ, which are given by:

Equation (4)

Equation (5)

where we define ${{\rm{\Sigma }}}_{{ij}}\equiv {R}_{i}\circ {R}_{j}=\tfrac{1}{2}(\bar{{R}_{i}}\,\bar{{R}_{j}}+\bar{{R}_{j}}\,\bar{{R}_{i}})$, and $\bar{{R}_{i}}={R}_{i}-{{d}}_{i}$, in order to lighten our notation. The covariance matrix σ obeys the uncertainty principle: σ + Ω/2 ≥ 0 [28, 30].

As anticipated, Gaussian systems are fully described by their first and second moments. Thus, Gaussian channels are completely identified by their action on the displacement vector and the covariance matrix. Denoting an arbitrary Gaussian quantum channel by ${\mathscr{M}}$, in the most generic case it operates on the quadratures vector and the covariance matrix as follows [28, 29, 31]:

Equation (6)

Equation (7)

Here, ${f}\in {{\mathbb{R}}}^{2N}$, while X and Y are real $2N\times 2N$ matrices. The complete positivity of the map dictates that [28]:

Equation (8)

Hereafter, without loss of generality, we restrict ourselves to zero-mean Gaussian states, and focus on Gaussian channels which map zero-mean states to zero-mean states. That is, d = f = 0. On this account, the map ${\mathscr{M}}$ can alternatively be characterized by the pair {X, Y}. In section 5 we review how to bring the particular case of (quadratic) Lindbladian master equations into the standard form of Gaussian channels.
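For illustration, a minimal numerical sketch of this channel action (in Python with NumPy; the particular single-mode pair {X, Y} below is an arbitrary choice made here for demonstration, not an object taken from the text) could read:

```python
import numpy as np

# An illustrative single-mode channel: X = sqrt(eta) I, Y = (1 - eta) c0 I.
# Any pair {X, Y} obeying the complete-positivity condition (8) would do equally well.
eta, c0 = 0.9, 2.0
X = np.sqrt(eta) * np.eye(2)
Y = (1.0 - eta) * c0 * np.eye(2)

def apply_channel(sigma, d, X, Y, f=None):
    """One application of a Gaussian channel, equations (6) and (7):
    d -> X d + f and sigma -> X sigma X^T + Y."""
    f = np.zeros(len(d)) if f is None else f
    return X @ sigma @ X.T + Y, X @ d + f

# Iterate the channel starting from the vacuum (sigma = I/2 in the hbar = 1 convention
# with vacuum variances 1/2 assumed here).
sigma, d = 0.5 * np.eye(2), np.zeros(2)
for _ in range(500):
    sigma, d = apply_channel(sigma, d, X, Y)

print(sigma)   # converges to c0 * I, the fixed point of this particular channel
```

A channel of exactly this form reappears in the collision model of section 6.1.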

3. Framework and main results

We work with a one-parameter family of Gaussian quantum channels ${{\mathscr{M}}}_{\lambda }$, where λ is a real parameter that can represent, for instance, the strength of an external magnetic field or a temperature. See section 6 for some examples. Let σλ be a fixed point covariance matrix of the Gaussian channel ${{\mathscr{M}}}_{\lambda }$. This implies that:

Equation (9)

or alternatively

Equation (10)

Note that we allow both Xλ and Yλ to depend on the parameter λ. Since we are interested in the linear response, we work in a regime where the parameter λ contributes only linearly to the channel and to its fixed point covariance matrix. Therefore, we assume that ${{\mathscr{M}}}_{\lambda }={{\mathscr{M}}}_{0}+\lambda {M}+{\mathscr{O}}({\lambda }^{2})$ and ${\sigma }_{\lambda }={\sigma }_{0}+\lambda \varsigma +{\mathscr{O}}({\lambda }^{2})$, and we safely ignore the second and higher orders. In other words, if we normalize ${{\mathscr{M}}}_{0}$ and M (and likewise σ0 and ς) such that they have the same operator norm, then $| \lambda | \ll 1$. Notice that neither is M a Gaussian quantum channel on its own, nor is ς a covariance matrix. We refer to ς as the static linear response of the covariance matrix, and we have $\varsigma \equiv {\partial }_{\lambda }{\sigma }_{\lambda }{| }_{\lambda =0}$. In a Markovian scenario, the time evolution of the covariance matrix is described by consecutive applications of the map. Choosing as the initial covariance matrix σ0, i.e. the fixed point of ${{\mathscr{M}}}_{0}$, at discrete time steps t = 1, 2, 3, ... we have:

Equation (11)

where we allow for a time-dependent parameter λ(t). The aim is to characterize the linear response of σ(t) in terms of steady-state correlations, that is, in terms of the elements of the steady-state covariance matrix.

Our main result expresses the linear response of the covariance matrix as follows:

Equation (12)

where $\sum $ stands for summation (not to be mistaken for Σ, the matrix of second order operators), and Φ(t) is the response function, which reads:

Equation (13)

Here, Δt stands for the discrete time derivative, i.e. Δt(f(t)) ≡ f(t + 1) − f(t), and ${X}_{0}^{t}$ represents t applications of ${X}_{\lambda }{| }_{\lambda =0}$. If each map is applied for an infinitesimal time δt → 0, we obtain the continuous version of the response function:

Equation (14)

Equation (15)

Thus, in order to find the response function we simply need to find: (i) the static linear response ς, and (ii) the time evolution of ς under the unperturbed channel X0. Three further comments/results are in order:

3.1. Static linear response

Our result is fully consistent with the static linear response. Consider a scenario in which (i) the perturbation is constant in time, i.e. $\lambda (t)=\lambda $, and (ii) the map has σλ as its unique fixed point5. For an arbitrary time t the linear response simplifies to:

Equation (16)

In the limit $t\to \infty $ the time-dependent contribution vanishes (see section 4.2), that is:

Equation (17)

We immediately recover the static linear response:

Equation (18)
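The recipe above is straightforward to implement numerically. A minimal sketch (in Python with NumPy) is given below; it assumes the discrete convolution structure ${\sigma }(t)\approx {\sigma }_{0}+{\sum }_{s=1}^{t}\lambda (s){\rm{\Phi }}(t-s)$ that follows from expanding equation (11) as in section 4.1, and it recovers the static limit of this subsection as a check:

```python
import numpy as np

def response_function(X0, varsigma, steps):
    """Discrete response function Phi(t) = -Delta_t [X0^t varsigma (X0^T)^t],
    with Delta_t f(t) = f(t + 1) - f(t) as defined in the text."""
    vs = [varsigma]
    for _ in range(steps + 1):
        vs.append(X0 @ vs[-1] @ X0.T)          # varsigma(t) = X0^t varsigma (X0^T)^t
    return [vs[t] - vs[t + 1] for t in range(steps + 1)]

def linear_response(X0, varsigma, lam, steps):
    """delta sigma(t) = sum_{s=1}^{t} lam(s) Phi(t - s); this convolution form is an
    assumption consistent with the expansion of equation (11) in section 4.1."""
    Phi = response_function(X0, varsigma, steps)
    dim = X0.shape[0]
    delta = [np.zeros((dim, dim))]
    for t in range(1, steps + 1):
        delta.append(sum(lam(s) * Phi[t - s] for s in range(1, t + 1)))
    return delta

# Example: a contractive single-mode channel with X0 = sqrt(eta) I and varsigma = I,
# perturbed by a constant lambda(t) = 1 (the static scenario of this subsection).
eta = 0.99
X0, varsigma = np.sqrt(eta) * np.eye(2), np.eye(2)
resp = linear_response(X0, varsigma, lambda s: 1.0, steps=1000)
print(resp[-1])   # approaches varsigma, i.e. the static response of equation (18)
```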

3.2. Kubo's response function

Consider a system at thermal equilibrium with the density matrix ρ0 = exp(−β H0)/Z0. Here, the quadratic Hamiltonian is given by ${H}_{0}=\tfrac{1}{2}{R}^{T}GR$, and β is the inverse temperature. The system is disturbed by adding a perturbative time-dependent term to its Hamiltonian: H(t) = H0 − λ(t) h with $h=\tfrac{1}{2}{R}^{T}gR$. Notice that both G and g are symmetric 2N by 2N real matrices. The response function of the covariance matrix reads as:

Equation (19)

with $\tilde{\sigma }:= ({\rm{\Omega }}g{\sigma }_{0}-{\sigma }_{0}g{\rm{\Omega }})$ and ${S}_{G}^{t}:= \exp (-{\rm{i}}{t}{\rm{\Omega }}G)$. Therefore, the linear response to Hamiltonian drivings is fully determined by σ0, G and g.

3.3. Alternative expression for the linear response

In practice, equations (13) and (15) provide a very useful way of finding the linear response, since they rely on two elements that are easy to calculate; however, they do not follow the usual structure of an FDT. One can nevertheless write down an alternative FDT that resembles the traditional one, in particular as presented in [32]. It reads as:

Equation (20)

where the correlation function is evaluated with respect to the fixed point of the unperturbed map, hence the index '0'. Here the elements of the matrix Σ(t) are built from the quadratures evolved under ${{\mathscr{M}}}_{0}$, such that ${{\rm{\Sigma }}}_{m,n}(t)={R}_{m}(t)\circ {R}_{n}(t)$. In addition, Λ0 is the symmetric logarithmic derivative (SLD). The SLD is a Hermitian operator with a vanishing expectation value, ${\left\langle {{\rm{\Lambda }}}_{0}\right\rangle }_{0}=0$—again the index '0' indicates that the trace is evaluated over the fixed point of the unperturbed map. In our case it can be written down as a linear combination of second order quadratures:

Equation (21)

with the matrix of coefficients C being the solution of the following equation [33, 34]:

Equation (22)

4. Proof of main results

Here we present the proofs of equations (13), (17), and (19). The proof of equation (20) is presented in appendix B.

4.1. The response function

We start by expanding equation (11) and keeping the terms up to the first order in λ. This yields:

Equation (23)

In the last line we define the response function ${\rm{\Phi }}(t)={{\mathscr{M}}}_{0}^{t}{M}{\sigma }_{0}$. To proceed further, we need to identify how M acts on σ0. To this aim, we notice that criterion (9) implies that:

Equation (24)

By substituting this into the response function, we have:

$${\rm{\Phi }}(t)={{\mathscr{M}}}_{0}^{t}\left(\varsigma -{{\mathscr{M}}}_{0}\varsigma \right)=\varsigma (t)-\varsigma (t+1)=-{{\rm{\Delta }}}_{t}\varsigma (t),$$

where we define $\varsigma (t)\equiv {{\mathscr{M}}}_{0}^{t}\varsigma $. In section 2 we explained how the map ${{\mathscr{M}}}_{0}$ acts on covariance matrices; however, ς is not a covariance matrix. Thus, we have to identify how the map acts on the static linear response ς. To this end, we focus on the time evolution of the covariance matrix σλ:

Equation (25)

where we define ${Y}_{0}(t)\equiv {\sum }_{s=0}^{t-1}{X}_{0}^{s}\,{Y}_{0}\,{{X}_{0}^{{T}^{s}}}^{}$. By substituting ${\sigma }_{\lambda }={\sigma }_{0}+\lambda \varsigma +{\mathscr{O}}({\lambda }^{2})$ on the left-hand side of (25), we have:

Equation (26)

Therefore, we identify the first two terms of the right-hand side as ${{\mathscr{M}}}_{0}^{t}{\sigma }_{0}$, and $\varsigma (t)={X}_{0}^{t}\varsigma {{X}_{0}^{{T}^{t}}}^{}$. Plugging this into (25) completes our proof of equation (13).

4.2. Proof of equation (17)

The map ${{\mathscr{M}}}_{0}$ has a unique fixed point σ0. This is to say, for any initial covariance matrix σ we have:

Equation (27)

Particularizing to σλ = σ0 + λς yields:

Equation (28)

Since the equality holds for any value of the parameter, the term proportional to λ should vanish, which proves equation (17).

4.3. FDT for thermal states and Hamiltonian evolutions (Kubo's response function)

Consider a Gaussian system with H(t) = H0 − λ(t) h, where ${H}_{0}=\tfrac{1}{2}{R}^{T}GR$, and $h=\tfrac{1}{2}{R}^{T}gR$. The thermal state ${\rho }_{\lambda }=\exp (-\beta {H}_{\lambda })/{{Z}}_{\lambda }$—with Zλ the partition function—is a fixed point of the unitary dynamics generated by Hλ ≡ H0 − λ h. Recall that such unitary dynamics corresponds to a symplectic transformation acting on the covariance matrix (see appendix A for details). In particular we have ${X}_{0}=\exp (-{\rm{i}}{\rm{\Omega }}G)=: {S}_{G}$. By initially preparing the system in the thermal state ρ0 (corresponding to λ = 0), the linear response reads as:

Equation (29)

where ${\rm{\Sigma }}(t)={S}_{G}^{t}\,{\rm{\Sigma }}{{S}_{G}^{{T}^{t}}}^{}$ is a 2N by 2N matrix that represents the Heisenberg picture evolution of all of the quadratures. From the Heisenberg equation (or by using equation (A.1)) we have:

Equation (30)

By plugging (30) into (29), and using the cyclic property of the trace we have:

Equation (31)

To proceed further, we notice that [H0 − λh, ρλ] = 0, and hence, by taking the derivative with respect to λ, and evaluating at λ = 0, we have:

Equation (32)

By inserting the above identity in (31), and using again the cyclic property of the trace, we have:

Equation (33)

which can be rewritten in the form of the standard Kubo response function:

Equation (34)

The Kubo response function (33) can be brought into a more useful form. To this end, let us look at an individual element of Σ, say ${{\rm{\Sigma }}}_{{lm}}=\tfrac{1}{2}({R}_{l}{R}_{m}+{R}_{m}{R}_{l})$. The response function of this object has two parts, i.e. ${{\rm{\Phi }}}_{{{\rm{\Sigma }}}_{{lm}}}(t)=\tfrac{1}{2}\left({{\rm{\Phi }}}_{{R}_{m}{R}_{l}}(t)+{{\rm{\Phi }}}_{{R}_{l}{R}_{m}}(t)\right)$. We have:

Equation (35)

where from the first to the second line we use the definition of h; from the second to the third, the fact that for unitary dynamics $R(t)={S}_{G}^{t}\,R$; from the third to the fourth we expand the commutators and use the canonical commutation relations; and in the last line we rearrange everything as a product of matrices. By writing the same expression for ${{\rm{\Phi }}}_{{R}_{m}{R}_{l}}(t)$ and adding it to the above result, one obtains:

Equation (36)

with $\tilde{\sigma }:= ({\rm{\Omega }}g{\sigma }_{0}-{\sigma }_{0}g{\rm{\Omega }})$. In a more compact form, for any second order moment we have:

Equation (37)

Equation (37) can be considered the Kubo response function for the covariance matrix of a Gaussian system. Since it is defined purely in terms of the original Hamiltonian (G) and the driving force (g), it does not require finding ς.

5. FDT for Lindbladian master equations

The stationary state—if it exists—and the elements of the Gaussian channel equivalent to a Lindbladian master equation can be found routinely. Let us briefly recall how this is formalized; see also, for instance, [35]. Consider the following master equation:

Equation (38)

Since we are interested in Gaussian dynamics, the Hamiltonian is quadratic in the quadrature operators, and can be written as:

Equation (39)

while the Lindbladian operators can be written as:

Equation (40)

with ${c}_{m}\in {{\mathbb{C}}}^{2N}$ being a vector of size 2N. We have ignored some constants in both expressions above, as well as a linear dependence of H on the quadratures, since they do not significantly affect our results. With these definitions, one can write down the master equation for the covariance matrix and the quadratures vector as follows:

Equation (41a)

Equation (41b)

with the matrix $A=-{\rm{i}}{\rm{\Omega }}(G-\mathrm{Im}({{CC}}^{\dagger }))$, $D={\rm{\Omega }}\mathrm{Re}({{CC}}^{\dagger }){\rm{\Omega }}$, and the rectangular matrix C defined as $C={\left({c}_{1}^{T};{c}_{2}^{T};...;{c}_{m}^{T}\right)}^{T}\in {{\mathbb{C}}}^{2N\times m}$ (see appendix C or [36–39] for the derivation). From here, the application of the map for an infinitesimal time δt can be identified as:

Equation (42)

Finally, the stationary state covariance matrix can be obtained by setting the left-hand side of (41b) equal to zero. This leads to the Lyapunov equation [40]:

Equation (43)

The solution to this equation exists and is unique if all the eigenvalues of A have negative real parts, and it is given by:

Equation (44)
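As an illustration of this recipe, the following minimal sketch (Python with NumPy/SciPy) solves the Lyapunov equation (43) numerically and builds the corresponding infinitesimal Gaussian channel; the drift and diffusion matrices used in the demonstration are those of a standard damped thermal mode, quoted here as an assumption rather than derived from equations (38)–(41):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def steady_state_cm(A, D):
    """Solve the Lyapunov equation A sigma + sigma A^T + D = 0 (equation (43)).
    SciPy's solver handles A X + X A^T = Q, so we pass Q = -D."""
    return solve_continuous_lyapunov(A, -D)

def infinitesimal_channel(A, D, dt):
    """Gaussian-channel elements for a short time step dt:
    X = exp(A dt) ~ 1 + A dt and Y ~ D dt (first order in dt)."""
    return expm(A * dt), D * dt

# Demonstration: a single mode damped at rate gamma towards a thermal state with
# mean occupation n_bar (A and D quoted from standard quantum-optics results).
gamma, n_bar = 1.0, 0.5
A = -0.5 * gamma * np.eye(2)
D = gamma * (n_bar + 0.5) * np.eye(2)

sigma_ss = steady_state_cm(A, D)
print(sigma_ss)                      # (n_bar + 1/2) * identity, as expected

# Sanity check: iterating the infinitesimal channel converges to the same fixed point.
X, Y = infinitesimal_channel(A, D, dt=1e-2)
sigma = 0.5 * np.eye(2)              # start from the vacuum
for _ in range(5000):
    sigma = X @ sigma @ X.T + Y
print(sigma)                         # close to sigma_ss, up to O(dt) discretization error
```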

6. Examples

We conclude our study by applying our formalism to two physically relevant examples: a driven harmonic oscillator and a cascaded optical parametric oscillator.

6.1. Driven harmonic oscillator

The thermalization and/or dynamics of open quantum systems is sometimes described in the collision model framework (see for instance [41–45]). Following [46], we use the collision model to address the dynamics of a system in the presence of thermal noise. The system consists of a single bosonic mode, while the environment consists of an infinite number of bosonic modes. The system mode consecutively interacts for some time $\delta t$ with individual environmental modes. Let σE(t) denote the state of the environmental mode that interacts with the system at time t:

Equation (45)

Here (c(t) − 2)/2 represents the mean photon number of the environment mode. Furthermore, we denote the covariance matrix of the system at time t by σs(t). After colliding with the (t/δt)th mode of the environment, the covariance matrix of the system maps to:

Equation (46)

with the index E meaning that we trace out the environmental mode. Moreover, the symplectic matrix Sη depends on the interaction time δt, but we have dropped this dependence to lighten our notation. It is given by [46]:

Equation (47)

Here, the parameter η quantifies the thermalization rate (which again depends on the interaction time δt). In particular, for η = 0 the system thermalizes after one application of the map (the system and the environment exchange their states), whereas for η = 1 it never thermalizes (the system and the environment do not interact, and their states remain unchanged). By plugging this symplectic transformation into equation (46), we can describe the dynamics of the system covariance matrix by means of a quantum Gaussian channel—i.e. with the form of equation (6). In particular, one can easily check that $X=\sqrt{\eta }\,{{\mathbb{I}}}_{2}$ and $Y=(1-\eta )c(t){{\mathbb{I}}}_{2}$. We notice that consecutive applications of this map with fixed c (i.e. ${\sigma }_{E}(t)={c}_{0}{{\mathbb{I}}}_{2}$, ∀t) bring any initial system covariance matrix to the steady state ${\sigma }_{s}(\infty )={c}_{0}{{\mathbb{I}}}_{2}$. In fact, even if the parameter η is time dependent, the steady state remains the same; therefore, we choose it to be constant. Now let us assume that the time dependence of c(t) is a linear correction, i.e. at any time we have c(t) = c0 + λ(t), with $| \lambda (t)| \ll {c}_{0}$. Moreover, the initial covariance matrix of the system is ${\sigma }_{s}(0)={c}_{0}{{\mathbb{I}}}_{2}$. By using our FDT we aim at identifying the response function. First, notice that $\varsigma ={\partial }_{\lambda }{\sigma }_{\lambda }{| }_{\lambda =0}={{\mathbb{I}}}_{2}$. In addition, since $X=\sqrt{\eta }\,{{\mathbb{I}}}_{2}$, we have $\varsigma (t)={X}^{t/\delta t}\varsigma \,{({X}^{T})}^{t/\delta t}={\eta }^{t/\delta t}\,{{\mathbb{I}}}_{2}$. Therefore, we have ${\rm{\Phi }}(t)=-{\partial }_{t}{\eta }^{t/\delta t}\,{{\mathbb{I}}}_{2}=-\mathrm{log}({\eta }^{1/\delta t}){\eta }^{t/\delta t}\,{{\mathbb{I}}}_{2}$, which vanishes exponentially in time.

Without loss of generality, let $\lambda (t)={\lambda }_{0}\cos \nu t$, with ν being the (potentially tunable) modulation frequency. We choose $\nu \delta t\ll 1$, such that the consecutive interactions with the environment are smooth. Specifically, it is interesting to find the amplitude of the response for different ν. To this aim, it is useful to study the linear response of the covariance matrix to the strength of the perturbation:

Equation (48)

where we define $\tilde{\eta }=-\mathrm{log}({\eta }^{1/\delta t})$ to lighten our notation. In figure 1 we depict the linear response, i.e. ${\partial }_{\lambda }{\sigma }_{s}(t){| }_{\lambda =0}$, versus time for different frequencies. These graphs show how the system responds to a perturbation imposed by manipulating the environment degrees of freedom. Such a perturbation can be realized by, e.g., changing the temperature or the frequency of the modes in the environment. The case ν = 0 specifically illustrates the relaxation of the system to a new thermal state after a quench.
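As a cross-check of this example, a short numerical sketch (in Python with NumPy; the convolution ${\partial }_{\lambda }{\sigma }_{s}(t){| }_{\lambda =0}={\int }_{0}^{t}{\rm{d}}s\,\cos (\nu s)\,{\rm{\Phi }}(t-s)$ with ${\rm{\Phi }}(\tau )=\tilde{\eta }{{\rm{e}}}^{-\tilde{\eta }\tau }$ is assumed, consistent with the expressions above) compares a direct simulation of the collision channel against the FDT prediction, and prints the long-time amplitude $\tilde{\eta }/\sqrt{{\nu }^{2}+{\tilde{\eta }}^{2}}$ discussed below:

```python
import numpy as np

# Parameters of the collision model of this example.
eta, dt = 0.999, 1e-3
c0, lam0, nu = 2.0, 1e-3, 1.0          # lam0 kept small for the finite difference below
eta_t = -np.log(eta) / dt              # \tilde{eta}
steps = 5000
t = np.arange(steps) * dt

def simulate(amp):
    """Direct iteration of sigma -> eta*sigma + (1 - eta)*c(t); only the scalar
    prefactor of the identity is tracked, since the channel is proportional to I_2."""
    sig, out = c0, []
    for k in range(steps):
        out.append(sig)
        sig = eta * sig + (1.0 - eta) * (c0 + amp * np.cos(nu * k * dt))
    return np.array(out)

# Finite-difference estimate of the linear response from the exact channel dynamics.
resp_numeric = (simulate(lam0) - simulate(0.0)) / lam0

# FDT prediction: convolution of cos(nu s) with Phi(tau) = eta_t * exp(-eta_t * tau).
resp_fdt = np.array([
    np.trapz(np.cos(nu * t[:k + 1]) * eta_t * np.exp(-eta_t * (t[k] - t[:k + 1])), t[:k + 1])
    for k in range(steps)
])

print(np.max(np.abs(resp_numeric - resp_fdt)))     # small; the two agree up to O(dt)
print(eta_t / np.sqrt(nu**2 + eta_t**2))           # long-time amplitude of the oscillations
```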

Figure 1. Linear response (${\partial }_{\lambda }\sigma (t){| }_{\lambda =0}$) of the covariance matrix for Example A. The strength of the interaction with the environment is set to η = 0.999, and $\delta t={10}^{-3}$. The largest response occurs when the perturbation is constant, i.e. for ν = 0. The response decreases monotonically with increasing ν, and for large modulation frequencies it vanishes.


Indeed, the linear response is initially zero. For small t, it grows as $\tilde{\eta }t$, regardless of the modulation frequency ν. As time increases, the last term in equation (48) vanishes exponentially. Thus, at long times the system oscillates around its initial state, except in the case ν = 0, i.e. when we have a constant perturbation. The maximum of the response at long times is achieved at t*, the solution of $\tan (\nu {t}^{* })=\nu /\tilde{\eta }$. The value of the linear response at this maximum is $\tilde{\eta }/\sqrt{{\nu }^{2}+{\tilde{\eta }}^{2}}$. Clearly, for any value of $\tilde{\eta }$ the modulation frequency with the largest linear response is ν = 0. Finally, if $\left|\nu /\tilde{\eta }\right|\to \infty $, the response goes to zero, hence the noise is canceled out.

6.2. Cascaded optical parametric oscillator (OPO)

An OPO coupled to a vacuum field is described by the Hamiltonian $H={\rm{i}}\epsilon ({a}^{\dagger 2}-{a}^{2})/4$, with $\epsilon \geqslant 0$ denoting the effective pump intensity. Here, we use the standard definition of the annihilation and creation operators, which read $a=(x+{\rm{i}}\,p)/\sqrt{2}$ and ${a}^{\dagger }=(x-{\rm{i}}\,p)/\sqrt{2}$, respectively. The coupling to the vacuum is described by the Lindbladian operator $L=\sqrt{\kappa }\,a$, with κ > 0 being the cavity damping rate. For this system, it is not difficult to see that the matrices G and C read as follows:

Equation (49)

From here, one can obtain the matrices A and D that appear in equation (43):

Equation (50)

Thus, the steady state covariance matrix exists if $\kappa \gt \epsilon $, and reads as $\sigma =\tfrac{1}{2}\mathrm{diag}(\kappa /(\kappa -\epsilon ),\kappa /(\kappa +\epsilon ))$.
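This steady state is easy to confirm numerically by feeding the single-OPO matrices into the Lyapunov equation of section 5. The sketch below (Python with NumPy/SciPy) uses the real symplectic form J and real-valued expressions for A and D; the explicit G, C, A and D written in the code are our own reconstruction for the ordering R = (x, p), chosen so as to reproduce the steady state quoted above, rather than quantities copied from equations (49) and (50):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

eps, kappa = 0.4, 1.0        # pump intensity and cavity damping rate (kappa > eps)

# Single-mode OPO with R = (x, p):
# H = i eps (a^dag^2 - a^2)/4 = (eps/4)(xp + px), hence G is purely off-diagonal;
# L = sqrt(kappa) a corresponds to the single column c = sqrt(kappa/2) (1, i)^T.
G = 0.5 * eps * np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.sqrt(0.5 * kappa) * np.array([[1.0], [1.0j]])
J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # real symplectic form for one mode

# Drift and diffusion matrices (real-form counterparts of the expressions of section 5).
A = J @ (G - np.imag(C @ C.conj().T))
D = J @ np.real(C @ C.conj().T) @ J.T

sigma_ss = solve_continuous_lyapunov(A, -D)      # solves A sigma + sigma A^T + D = 0
print(sigma_ss)
print(0.5 * np.diag([kappa / (kappa - eps), kappa / (kappa + eps)]))   # value quoted above
```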

The cascaded OPO consists of two interacting optical oscillators with local Lindbladian operators. The Hamiltonian reads as $H={H}_{1}+{H}_{2}+{\rm{i}}({L}_{1}^{\dagger }{L}_{2}-{L}_{1}{L}_{2}^{\dagger })/2$, with ${H}_{j}={\rm{i}}{\epsilon }_{j}({a}_{j}^{\dagger 2}-{a}_{j}^{2})/4$, and ${L}_{j}=\sqrt{{\hslash }\kappa }{a}_{j}$. In turn, the dissipation is given by a single operator L = L1 + L2. For convenience, in what follows we work in a representation where the quadrature vector is R = (x1 ... xN p1 ... pN)T. The matrices G and C read as:

Equation (51)

Therefore, one can find the matrices A and D as follows:

Equation (52)

Again, the criterion for having a steady state is $\kappa \gt \max \{{\epsilon }_{1},{\epsilon }_{2}\}$ [35].

Before solving the Lyapunov equation, we note that the matrices A and G are block diagonal, so the equation can be solved block by block. Therefore, the resulting covariance matrix is a direct sum of two terms [35] (one lives entirely in the position subspace, the other in the momentum subspace, with no correlations between the two). In particular, we need the exponentials of the non-Hermitian matrices Ax and Ap, the position and momentum blocks of the matrix A, respectively. To this end, we use the Jordan canonical form of these matrices, that is ${A}_{\alpha }={V}_{\alpha }{J}_{\alpha }{V}_{\alpha }^{-1}$. After some straightforward algebra, one finds:

Equation (53)

and

Equation (54)

Thus we have the dynamical ingredient of our FDT. Moreover, for the steady-state covariance matrix one finds [35]:

Equation (55)

where we define ${g}_{\pm }=({\epsilon }_{1}+{\epsilon }_{2}\pm 2\kappa )({\epsilon }_{1}\pm \kappa )$, and ${h}_{\pm }=({\epsilon }_{1}^{2}+{\epsilon }_{1}\,{\epsilon }_{2}\,\pm \,{\epsilon }_{1}\,\kappa +2{\kappa }^{2}\,\mp \kappa \,{\epsilon }_{2}){\left({\epsilon }_{2}\mp \kappa \right)}^{-1}$. This state is always entangled [35]. By making use of the above covariance matrix we can find the other necessary element for our FDT, namely $\varsigma ={\partial }_{\lambda }{\sigma }_{\lambda }{| }_{\lambda =0}$. In turn, the response reads as:

Equation (56)

Notice that, since all the matrices involved in the above equation are block diagonal, the response will be block diagonal as well. Therefore, we can deal with the response function in the position and momentum blocks separately. For instance, Φx(t) reads:

Equation (57)

with σx being the first matrix on the rhs of equation (55). Setting κ as the driving parameter, that is, choosing $\kappa (t)=\kappa +\lambda \cos \nu t$, the linear response for different values of the modulation frequency is depicted in figure 2. Our first observation is that the position quadratures are more responsive to the perturbation, with $\left\langle {x}_{2}^{2}\right\rangle $ having the biggest amplitude of oscillations. From a sensing (estimation) point of view, this means that $\left\langle {x}_{2}^{2}\right\rangle $ is the most sensitive quadrature measurement for estimating λ (although one can design measurements that are superpositions of different quadratures and perform even better than $\left\langle {x}_{2}^{2}\right\rangle $). We further notice that the response to a perturbation with larger ν is smaller—except for $\left\langle {p}_{1}{p}_{2}\right\rangle $. In fact, we can prove that, after long enough time, the amplitude of oscillations of the linear response monotonically decreases with ν. To see this, we recall that the amplitude of oscillations at long times is given by the magnitude of the dynamical susceptibility. The latter is defined as the Fourier transform of the response function:

Equation (58)

where the integration over negative times is ignored because Φ(t < 0) = 0 (due to causality). On the other hand, for any perturbation of the form $\lambda (t)=\lambda \cos \nu t$, the linear response at long times reads as:

Equation (59)

where we define $\alpha ={\cot }^{-1}\left(\tfrac{\mathrm{Im}\chi (\omega )}{\mathrm{Re}\chi (\omega )}\right)$. Also, in the first line we use the fact that the response function vanishes exponentially at large enough t, so that we can replace the upper bound of the integral with infinity. Thus, the dynamical susceptibility characterizes the amplitude of oscillations at long times. In figure 3 we depict the dynamical susceptibility of the position and momentum quadratures. The monotonic decrease of χx, p(ω) with ω makes it clear why in figure 2 the amplitude of oscillations decreases with increasing modulation frequency. Thus, in an estimation scenario it is more suitable to modulate the perturbation with a small or vanishing frequency, whereas, if we treat the perturbation as noise, we should modulate it with a higher frequency in order to cancel it out. Comparing the two panels of figure 3 also clarifies why the position quadratures have a considerably larger response than the momentum quadratures.
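Numerically, the long-time amplitude can simply be read off from |χ(ν)|. A small sketch of this procedure (Python/NumPy), applied for concreteness to the exponentially decaying response function ${\rm{\Phi }}(t)=\tilde{\eta }{{\rm{e}}}^{-\tilde{\eta }t}$ of example 6.1, for which the closed form $| \chi (\nu )| =\tilde{\eta }/\sqrt{{\nu }^{2}+{\tilde{\eta }}^{2}}$ is known, could read:

```python
import numpy as np

def susceptibility(phi, t, omega):
    """chi(omega) = int_0^infty dt e^{i omega t} Phi(t), with the integral truncated at
    the last sample of t (justified when Phi decays fast enough)."""
    return np.trapz(phi * np.exp(1j * omega * t), t)

# Response function of the collision model of section 6.1 (scalar prefactor of I_2).
eta_t = 1.0                                    # \tilde{eta}
t = np.linspace(0.0, 30.0 / eta_t, 20000)
phi = eta_t * np.exp(-eta_t * t)

for nu in (0.0, 0.5, 1.0, 2.0):
    chi = susceptibility(phi, t, nu)
    print(nu, abs(chi), eta_t / np.sqrt(nu**2 + eta_t**2))   # numerical vs closed form
```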

Figure 2. The linear response—normalized by the steady state expectation values—of the position and momentum blocks of the covariance matrix for Example B. We consider two different values of the modulation frequency, ν = 0.1 (top) and ν = 0.5 (bottom). Here we have set the parameters ${\epsilon }_{1}=1$ and ${\epsilon }_{2}=1.1$, and the coupling κ is driven with the profile $\kappa =1.5+\lambda \cos (\nu t)$. Overall, the position quadratures have a larger response than the momentum ones. In particular, the position of the second oscillator is the most sensitive one. Moreover, the larger frequency ν = 0.5 leads to a smaller response. In fact, for all observables except $\left\langle {p}_{1}{p}_{2}\right\rangle $, the linear response monotonically decreases with increasing frequency. See figure 3.

Figure 3. The dynamical susceptibility—normalized by the steady state expectation values—of the position and momentum parts of the covariance matrix for Example B. For all of the observables except $\left\langle {p}_{1}{p}_{2}\right\rangle $, the dynamical susceptibility monotonically decreases with the frequency, which explains why the top panel of figure 2 has a larger amplitude of oscillations than the bottom panel. Also, the dynamical susceptibility of the position block is significantly larger than that of the momentum block, which explains why in figure 2 the position operators have a larger response. The parameters are the same as in figure 2.


7. Conclusions

We have derived a linear response theory for the covariance matrix of Gaussian systems subjected to time-dependent Gaussian quantum channels. Our method establishes a connection between the linear response to a time-dependent perturbation on the one hand, and, on the other hand, (i) the static linear response of the system and (ii) the building blocks of the Gaussian channel itself. When dealing with thermal states evolving under unitary dynamics, we recover Kubo's linear response theory. We further present an alternative expression for Kubo's response theory that is more suitable for Gaussian dynamics. We have then shown how, for an arbitrary (Gaussian) Lindbladian master equation, the two ingredients (i) and (ii) can be identified straightforwardly. Through the examples of the thermalization of a harmonic oscillator and the cascaded optical parametric oscillator, we have illustrated how to use our formalism. Since Lindbladian master equations often appear in the description of open quantum systems [47–50], we expect our results to find applications in many different setups. In particular, they can be used to improve our understanding of opto-mechanical systems [51, 52] and quantum heat devices [53–58], or for the study of the open dynamics of quantum systems in the vicinity of non-thermal steady states.

Acknowledgments

We thank Anna Sanpera, Andreu Riera-Campeny, and Janek Kolodynski for fruitful discussions. Support from the Spanish MINECO (QIBEQI FIS2016-80773-P, ConTrAct FIS2017-83709- R, and Severo Ochoa SEV-2015-0522), the ERC CoG QITBOX, the AXA Chair in Quantum Information Science, Fundacio Privada Cellex, and the Generalitat de Catalunya (CERCA Program and SGR1381) is acknowledged.

Appendix A.: Symplectic representation of a Gaussian unitary transformation

Suppose that the density matrix of a Gaussian system evolves under a unitary transformation U(t) = exp(−itH), with the quadratic Hamiltonian $H=\tfrac{1}{2}{R}^{T}G\,R$. This is to say: ρ(t) = U(t)ρ(0)U−1(t). Under this unitary, in the Heisenberg picture, the quadratures evolve as:

Equation (A.1)

Proof. By writing the Heisenberg picture evolution of the jth element of the quadrature vector, and using the Baker–Campbell–Hausdorff formula we have:

Equation (A.2)

where we use the fact that $[{R}^{T}{GR},{R}_{j}]={\left({\rm{\Omega }}{GR}\right)}_{j}$.

Appendix B.: Proof of the alternative shape of the response function

On the one hand, by expanding ς(t) with the help of equation (22) we have:

Equation (B.1)

where we have defined the non-symmetric two-time correlation matrix σt0 with the elements ${\sigma }_{m,r}^{t0}\equiv {\left\langle {R}_{m}(t)\circ {R}_{r}\right\rangle }_{0}$, and Ωt0 with ${{\rm{\Omega }}}_{m,r}^{t0}\equiv {\left\langle \left[{R}_{m}(t),{R}_{r}\right]\right\rangle }_{0}$. In turn, ${R}_{m}(t)\equiv {{\mathscr{M}}}_{0}^{t}{R}_{m}={X}^{t}{R}_{m}$ represents the time evolution of the quadratures vector, under the unperturbed map. On the other hand, with the help of Wick's theorem, one can expand the fourth order moments that appear in the right-hand side of equation (20), in terms of second order moments. Specifically—by breaking the elements of Σ(t) into two parts as in ${\rm{\Sigma }}{\left(t\right)}_{m,n}=\tfrac{1}{2}\,({R}_{m}(t){R}_{n}(t)+{R}_{n}(t){R}_{m}(t))$—we have:

Equation (B.2)

Here, from the first to the second equality we use the fact that the average of the SLD is zero; from the third equality to the fourth we use Wick's theorem; and from the fifth to the sixth we use the fact that C is a symmetric matrix, together with ${\sigma }_{i,j}^{t0}={\left({\sigma }^{t0}\right)}_{j,i}^{T}$ and ${{\rm{\Omega }}}_{i,j}^{t0}=-{\left({{\rm{\Omega }}}^{t0}\right)}_{j,i}^{T}$. Notice that the matrix inside the parentheses in the last line is symmetric, and therefore $\mathrm{Corr}({{\rm{\Lambda }}}_{0},{\rm{\Sigma }}{\left(t\right)}_{m,n})=\mathrm{Corr}({{\rm{\Lambda }}}_{0},{R}_{m}(t){R}_{n}(t))$. Finally, since this identity is true for any element of Σ(t), we have:

Equation (B.3)

which together with (B.1) completes our proof.

Appendix C.: From quadratic Lindbladian master equation to the master equation for the CM

Given a Lindbladian master equation that acts on the density matrix, how can we build up the equivalent master equation that operates on the moments (specifically, on the first and second moments)? To this aim, let us consider the most generic Lindbladian master equation:

Equation (C.1)

with the quadratic Hamiltonian $H=\tfrac{1}{2}{R}^{T}{GR}$, and the linear Lindbladian operators ${L}_{k}={c}_{k}^{T}R$. In the Heisenberg picture, for any observable O—that does not depend explicitly on time—this reads as

Equation (C.2)

C.1. First moments

We start with the first moments. Namely, for the element Rj, we can calculate the terms that appear above individually. For the Hamiltonian part we have

Equation (C.3)

The dissipation part of the dynamics has two contributions. The first part can be written as (we drop the index of the Lindbladian operators to avoid confusion, but we shall reconsider them later on):

Equation (C.4)

In the same manner, we can deal with the second part:

Equation (C.5)

Putting the two parts together, we have:

Equation (C.6)

By considering all of the Lindbladian operators, in a more compact form we have:

Equation (C.7)

with $A:= -{\rm{i}}{\rm{\Omega }}\left(G\,-\mathrm{Im}({{CC}}^{\dagger })\right)$, where $C:= {\left({c}_{1}^{T};{c}_{2}^{T};...;{c}_{m}^{T}\right)}^{T}$ is a 2N × m matrix containing the dissipation coefficients.

C.2. Second moments

We choose the same procedure to deal with the second moments. To begin with, we explore the Hamiltonian term:

Equation (C.8)

Moreover, for the dissipators we have:

Equation (C.9)

Let us deal with the two terms appearing in the anticommutator separately. The first one reads:

Equation (C.10)

and finally the second one reads as:

Equation (C.11)

Putting the last three expressions together in the dissipation part of the master equation yields:

Equation (C.12)

If we add this to the same dissipation part but with the indices $j\leftrightarrow k$ exchanged, and divide by two, we have:

Equation (C.13)

Thus, for the second moments, after adding up all dissipators, we will have the following master equation:

Equation (C.14)

with $A=-{\rm{i}}{\rm{\Omega }}(G-\mathrm{Im}({{CC}}^{\dagger }))$, and $D={\rm{\Omega }}\,\mathrm{Re}({{CC}}^{\dagger }){\rm{\Omega }}$, with the same definition of C that we have for first moments, i.e. $C:= {\left({c}_{1}^{T};{c}_{2}^{T};...{c}_{m}^{T}\right)}^{T}$.

Footnotes

  • The static linear response holds true only for a scenario in which the steady state is unique. Indeed, for unitary dynamics and thermal states this is not the case; nonetheless, our main result—equation (15)—still holds, from which we obtain the Kubo response.
