Stability of a Stochastic Model for Demand-Response

We study the stability of a Markovian model of electricity production and consumption that incorporates production volatility due to renewables and uncertainty about actual demand versus planned production. We assume that the energy producer targets a fixed energy reserve, subject to ramp-up and ramp-down constraints, and that appliances are subject to demand-response signals and adjust their consumption to the available production by delaying their demand. When a constant fraction of the delayed demand vanishes over time, we show that the general state Markov chain characterizing the system is positive Harris and ergodic (i.e., delayed demand is bounded with high probability). However, when delayed demand increases by a constant fraction over time, we show that the Markov chain is non-positive (i.e., there exists a non-zero probability that delayed demand becomes unbounded). We exhibit Lyapunov functions to prove our claims. In addition, we provide examples of heating appliances that, when delayed, have energy requirements corresponding to the two considered cases.


Introduction.
Recent results on modelling the future electricity markets [11] suggest that they may lead to highly undesirable equilibria for consumers, producers or both. A central reason for such an outcome might be the combination of volatility in supply and demand, the delays required for any unplanned capacity increase, and the inflexibility of demand which leads to high dis-utility costs of blackouts. Further, the use of renewable energy sources such as wind and solar increases volatility and worsens these effects [8].
Flexible load is advocated in [5] as a mechanism to reduce ramp-up requirements and adapt to the volatility of electricity supply that is typical of renewable sources. A deployment report [3] shows the feasibility of delaying air conditioners using signals from the distributor. Adaptive appliances combined with simple, distributed adaptation algorithms were advocated in [4,7]; they are assumed to reduce, or delay, their demand when the grid is not able to satisfy them. Some examples might be: e-cars, which may have some flexibility regarding the time and the rate at which their batteries can be charged; heating systems or air conditioners, which can delay their demand if instructed to; hybrid appliances, which use alternative sources as a replacement for the energy that the grid cannot supply. If the alternative energy source is a battery, then it will need to be replenished at a later point in time, which eventually leads to later demand.
The presence of adaptive appliances may help address the volatility of renewable energy supply, however, backlogged demand is likely to be merely delayed, rather than canceled; this introduces a feedback loop into the global system of consumers and producers. Potentially, one might increase the backlogged demand to a point where future demand becomes excessive. In other words, one key question is whether it is possible to stabilize the system. This is the question we address in this paper.
To address this fundamental question, we consider a macroscopic model, inspired by the model in [8]. We assume that electricity supply follows a two-step allocation process: first, in a forecast step (day-ahead market), demand and renewable supply are forecast, and the total supply is planned; second, in a real-time step (real time market), the actual, volatile demand and renewable supply are matched as well as possible. We assume, as in [1], that the rate at which supply can be increased in the real time step is subject to ramp-up and ramp-down constraints. Indeed, it is shown in this reference that such constraints are an essential feature of the real time market. We modify the model in [8] and assume that the whole demand is adaptive. While this is clearly an exaggerated assumption, we make it for simplicity and as a first step, leaving the combination of adaptive and non-adaptive demand to future research. We are interested in simple, distributed algorithms, as suggested in [4]; therefore, we assume that the suppliers cannot directly observe the backlogged demand; in contrast, they see only the effective instantaneous demand. At any point in time where the supply cannot match the effective demand, the backlogged demand increases.
Our model is macroscopic, so we do not model in detail the mechanism by which appliances adapt to the available capacity; several possible directions for achieving this are described in [4]. However we do consider two essential parameters of the adaptation process. First, the delay 1/λ is the average delay after which frustrated demand is expressed again. Second, the evaporation µ is the fraction of backlogged demand that disappears and will not be resubmitted per time unit. The inverse delay λ is clearly positive; in contrast, as discussed in Section 2 and in Appendix A, it is reasonable to assume that some adaptive appliances naturally lead to a positive evaporation (this is the case for a simple model of heating systems), but it is not excluded that inefficiencies in some appliances lead to negative evaporation.
Within these modelling assumptions, the electricity suppliers are confronted with a scheduling issue: how much capacity should be bought in the real time market to match the adaptive demand. The effect of adaptation is to increase the latent demand, due to backlogged demand returning into the system. This is the mechanism by which the system might become unstable. We consider a threshold based mechanism as in [8]. It consists in targeting some fixed supply reserve at any point in time; the target reserve might not be met, due to volatility of renewable supply and of demand, and due to the ramp-up and ramp-down constraints.
Our contribution is to show that if evaporation is positive, then any such threshold policy does stabilize the system. In contrast, if evaporation is negative, then there exists no threshold policy which stabilizes the system. The case where evaporation is exactly equal to 0 remains unsolved.
Our results suggest that evaporation plays a central role. Simple adaptation mechanisms as described in this paper might work if evaporation is positive (as one may perhaps generally expect), but will not work if evaporation is negative, i.e. if the fact that demand is backlogged implies that a higher fraction of demand returns into the system. This suggests that future research be done in order to gain a deeper understanding of evaporation, whether it can truly be assumed to be positive, and if not, how to control it.
We use discrete time, for tractability. We use the theory of Markov chains on general state spaces in [9]. In Section 2 we describe the assumptions and the model, and relate our model to prior work. In Section 3 we study the stability of the system under threshold policies. Proofs are given in Section 4. The appendix discusses whether evaporation can be positive or negative when the appliance is a heating system, including heat pumps.

Model and Assumptions.
2.1. Assumptions and Notation. We use a discrete model, where t ∈ N represents the time elapsed since the beginning of the day. The time unit represents the time scale at which scheduling decisions are made, and is on the order of a second.
The supply is made of two parts: the planned supply G_f(t), forecast in the day-ahead market, and the actual supply G_a(t), which may differ, due to two causes. First, the forecast supply may not be met, due to fluctuations, for example in wind and sunshine. Second, the suppliers attempt to match the demand by adding (or subtracting) some supply, bought in the real time market. We assume that this latter term is limited by the ramp-up and ramp-down constraints. We model the actual supply as

G_a(t) = G_f(t) + M(t) + G(t − 1),    (2.1)

where M(t) is the random deviation from the planned supply due to renewables and G(t − 1) is the supply decision in the real time market. We view G_f(t) as deterministic and given (it was computed yesterday in the day-ahead market), M(t) as an exogenous stochastic process, imposed by nature, and G(t − 1) as a control variable.
We call D_a(t) the "natural" demand. It is the total electricity demand that would exist if the supply were always sufficient. In addition, there is, at every time t, the backlogged demand B(t), which results from adaptation: B(t) is the demand that is expressed at time t due to a previous demand having been backlogged. The total effective demand, or expressed demand, is

E_a(t) = D_a(t) + B(t).    (2.2)

We model the effect of demand adaptation as follows:

B(t) = λ Z(t),    (2.3)
Z(t + 1) = (1 − λ − µ) Z(t) + F(t),    (2.4)
F(t) = (E_a(t) − G_a(t))^+.    (2.5)
In the above equations, F(t) is the frustrated demand, i.e., the demand that is denied satisfaction at time t. Eq. (2.5) expresses that, through adaptation, the demand that is served is equal to the minimum of the expressed demand and the supply. The variable Z(t) is the latent backlogged demand; it is the demand that was delayed, and might later be expressed. It is incremented by the frustrated demand.
The frustrated demand is expressed with some delay; we model this with Eq. (2.3), where λ^{-1} is the average delay, in time slots.
The evolution of the latent backlogged demand is expressed by Eq. (2.4). The expressed demand B(t) is removed from the backlog (some of it may return to the backlog, by means of Eq. (2.5), at some later time). The remaining backlog may evaporate at a rate µ, which captures the effect on total demand of delaying some demand. Delaying a demand may indeed result in a decrease of the total demand, in which case the evaporation factor is positive. For example, this occurs if we delay heating in a building whose heating system has a constant energy efficiency (as we discuss in Appendix A.1); such a heating system will request more energy in the future, but the integral of the energy consumed over time is smaller whenever some heating requests are delayed. In this case, positive evaporation comes at the expense of a (hopefully slight) decrease in comfort (measured by room temperature). In other cases, though, we may not exclude that evaporation be negative. This may occur for example with heat pumps, as discussed in Appendix A.2.
As in [8], we assume that the natural demand can be forecast with some error, so that

D_a(t) = D_f(t) + D(t),    (2.6)

where the forecast demand D_f(t) is deterministic and D(t), the deviation from the forecast, is modelled as an exogenous stochastic process. We assume that the day ahead forecast is done with some fixed safety margin r_0, so that

G_f(t) = D_f(t) + r_0.    (2.7)

2.2. The Stochastic Model. We model the fluctuations in demand D(t) and renewable supply M(t) as stochastic processes such that their difference M(t) − D(t) is an ARIMA(0, 1, 0) Gaussian process, i.e.,

M(t) − D(t) = M(t − 1) − D(t − 1) + N(t),
where N (t) is white Gaussian noise, with variance σ 2 . This is the discrete time equivalent of Brownian motion, as in [8].
Let R(t) be the reserve, i.e., the difference between the actual production and the expressed demand, defined by

R(t) = G_a(t) − E_a(t),    (2.8)

and let H(t) be the increment in supply bought on the real time market, i.e.,

H(t) = G(t) − G(t − 1).    (2.9)
Putting all the above equations together, we obtain the system equations:

R(t + 1) = R(t) + H(t) + λ(λ + µ)Z(t) − λ(−R(t))^+ + N(t + 1),    (2.10)
Z(t + 1) = (1 − λ − µ)Z(t) + (−R(t))^+.    (2.11)

Thus we can describe our system by the two-dimensional stochastic process X(t) = (R(t), Z(t)). The sequence H(t) is the control sequence. It satisfies the ramp-up and ramp-down constraints:

−ξ ≤ H(t) ≤ ζ,    (2.12)

where ζ and ξ are some positive constants.
We assume a simple, threshold based control, which attempts to make the reserve equal to some threshold value r* > ζ; therefore

H(t) = min(ζ, max(−ξ, r* − R(t))).    (2.13)

In summary, we have as model the stochastic sequence X = (X(t))_{t∈N} defined by

R(t + 1) = R(t) + min(ζ, max(−ξ, r* − R(t))) + λ(λ + µ)Z(t) − λ(−R(t))^+ + N(t + 1),    (2.14)
Z(t + 1) = (1 − λ − µ)Z(t) + (−R(t))^+,    (2.15)

where N is an i.i.d. white Gaussian noise sequence. Note that X is a Markov chain on the state space S = R × R^+.
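As a concrete illustration, the one-step update can be sketched in Python. The sketch assumes the threshold control H(t) = min(ζ, max(−ξ, r* − R(t))), the frustrated demand F(t) = max(0, −R(t)) and the backlog recursion Z(t+1) = (1 − λ − µ)Z(t) + F(t); all parameter values below are illustrative choices, not values from the paper.

```python
import random

def step(r, z, n, lam=0.3, mu=0.05, zeta=1.0, xi=1.0, r_star=2.0):
    """One transition of X(t) = (R(t), Z(t)); n is the noise N(t+1)."""
    f = max(0.0, -r)                     # frustrated demand F(t) = (-R(t))^+
    h = min(zeta, max(-xi, r_star - r))  # ramp-constrained threshold control
    r_next = r + h + lam * (lam + mu) * z - lam * f + n
    z_next = (1.0 - lam - mu) * z + f
    return r_next, z_next

def simulate(T=1000, sigma=0.5, seed=1, r0=2.0, z0=0.0, **kw):
    """Trajectory of the chain driven by i.i.d. Gaussian noise."""
    rng = random.Random(seed)
    r, z = r0, z0
    traj = [(r, z)]
    for _ in range(T):
        r, z = step(r, z, rng.gauss(0.0, sigma), **kw)
        traj.append((r, z))
    return traj
```

Note that (r*, 0) is a fixed point of the noise-free dynamics: when R(t) = r*, the control is H(t) = 0 and no demand is frustrated.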

Related Models.
Let us now discuss some similar models which have been considered in the literature.
In [9], the authors consider the so-called Linear State Space model (LSS), which introduces an n-dimensional stochastic process X = {X_k}_k, with X_k ∈ R^n. For matrices F ∈ R^{n×n}, G ∈ R^{n×p}, and for a sequence W = {W_k} of i.i.d. random variables of finite mean taking values in R^p, the process evolves as

X_{k+1} = F X_k + G W_{k+1}.    (2.16)

Our model (2.14)-(2.15) is in fact a superposition of three such LSS models, depending on the current state of the Markov chain. The challenge of showing that our model is stable comes from the fact that in the part of the state space in which R(t) < 0, the corresponding LSS does not satisfy the stability condition (LSS5) of [9] (which requires that the eigenvalues of F be contained in the open unit disk of C).
In [10] a slightly different model for capturing elasticity of demand is proposed. More specifically, the authors consider a scenario in which a deterministically bounded amount of demand arrives at each time step, while the supplier decides whether to buy an additional amount of energy from an external source at a certain cost. Unsatisfied demand is backlogged. A threshold policy is analyzed and found to be stable (the size of the backlog is found to be deterministically bounded) and optimal. Pricing decisions are also explored. The main differences with the present work are the following:
• Additional parameters which model delay and loss of backlogged demand (i.e., λ and µ) enrich our model's expressivity.
• We consider potentially unbounded demand, modeled as a 0-mean Gaussian random variable, which makes proving stability more challenging.
• No results on pricing and cost-optimality are included in the present work.
The continuous time model used in [2] seeks to capture the presence of two types of energy sources, primary and ancillary, the latter being less desirable (i.e. more costly) than the former, both subject to (different) ramp-up constraints. A threshold policy is again discussed in the context of rigid demand, which is simply dropped if not enough energy is available. The analyzed Markov chain is a two-dimensional process having on the first coordinate the quantity of energy used from the ancillary source in order to satisfy as much demand as possible, and on the second coordinate the reserve (i.e. energy surplus).

System Stability under a Threshold Policy.
Let us study how the system stability of (2.14)-(2.15) depends on the evaporation parameter µ.
Define the following four domains:

D_1 := {(r, z) ∈ S : r < 0},
D_2 := {(r, z) ∈ S : 0 ≤ r ≤ r* − ζ},
D_3 := {(r, z) ∈ S : r* − ζ < r < r* + ξ},
D_4 := {(r, z) ∈ S : r ≥ r* + ξ}.

Then the process (2.14)-(2.15) rewrites in matrix form: for X(t) ∈ D_i,

X(t + 1) = A_i X(t) + b_i + N(t + 1) e_1,   e_1 = (1, 0)^T,

with

A_1 = [[1 + λ, λ(λ + µ)], [−1, 1 − λ − µ]],   b_1 = (ζ, 0)^T,
A_2 = A_4 = [[1, λ(λ + µ)], [0, 1 − λ − µ]],   b_2 = (ζ, 0)^T,   b_4 = (−ξ, 0)^T,
A_3 = [[0, λ(λ + µ)], [0, 1 − λ − µ]],   b_3 = (r*, 0)^T.

The main reason for which the analysis of system stability is challenging is the fact that both A_1 and A_2 = A_4 admit 1 as an eigenvalue.
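This piecewise-affine structure lends itself to a quick numerical sanity check. The sketch below makes the four domains and the matrices A_i explicit under one plausible reconstruction (the exact entries should be treated as an assumption, and all parameter values are illustrative) and lets one verify that A_1, A_2 and A_4 admit 1 as an eigenvalue (det(A_i − I) = 0), while A_3 does not.

```python
def domain(r, r_star=2.0, zeta=1.0, xi=1.0):
    """Index i of the region D_i containing a state with reserve r."""
    if r < 0:
        return 1
    if r <= r_star - zeta:
        return 2
    if r < r_star + xi:
        return 3
    return 4

def matrices(i, lam=0.3, mu=0.05, r_star=2.0, zeta=1.0, xi=1.0):
    """Pair (A_i, b_i) of the affine update X(t+1) = A_i X(t) + b_i + N(t+1) e_1."""
    g = 1.0 - lam - mu        # gamma = 1 - lambda - mu
    c = lam * (lam + mu)
    if i == 1:
        return [[1.0 + lam, c], [-1.0, g]], [zeta, 0.0]
    if i == 2:
        return [[1.0, c], [0.0, g]], [zeta, 0.0]
    if i == 3:
        return [[0.0, c], [0.0, g]], [r_star, 0.0]
    return [[1.0, c], [0.0, g]], [-xi, 0.0]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]
```

For instance, det2 applied to A_1 − I vanishes, confirming the unit eigenvalue that makes the stability analysis delicate.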
The first result of this section can be stated in the form of the following theorem.
Theorem 3.1. If µ > 0, the Markov chain (2.14-2.15) is positive Harris and ergodic. For any initial distribution ρ, the chain converges to its unique invariant probability measure π in total variation norm, i.e.,

lim_{t→∞} ‖ρP^t − π‖_{TV} = 0.
The proof uses the theory of general state space Markov chains. The following lemmas are instrumental in proving the result; their proofs are found in Section 4.
Lemma 3.2. The Markov chain (2.14-2.15) is ψ-irreducible.
Lemma 3.3. Every compact subset of the state space is small.
A set C is said to be ν_T-small for some non-trivial measure ν_T and a positive integer T if, for all x ∈ C, the probability of reaching any measurable set B in T steps satisfies

P^T(x, B) ≥ ν_T(B).

Furthermore, a set C is petite if there exists a distribution h on the positive integers and a non-trivial measure ν_h such that, for any x ∈ C and for any Borel set B, the transition kernel of the sampled chain has the following property:

Σ_{n≥1} h(n) P^n(x, B) ≥ ν_h(B).

Two direct consequences follow:
Corollary 3.4. The Markov chain (2.14-2.15) is aperiodic.
Corollary 3.5. Any compact subset of the state space is petite.
In our proof of Theorem 3.1 we exhibit a Lyapunov function for the system, which has negative drift outside of a compact set surrounding the origin. The existence of such a function, along with the previous results, enables us to apply Theorem 13.0.1 of [9], by which we conclude.
Our second result concerns negative evaporation.
Theorem 3.6. If µ < 0, the Markov chain (2.14-2.15) is non-positive.
In the proof, we find a function which has finite increments and whose drift is non-negative outside a certain level set. By Theorem 11.5.2 from [9] we reach the conclusion.
We sum up these results in the following corollary.
Corollary 3.7. If a positive fraction of latent jobs disappears during each time slot (µ > 0), then any threshold policy stabilizes the system. On the other hand, if delaying any job results in an increase of its requirement/workload by a positive fraction (µ < 0), then there exists no threshold policy that stabilizes the system.
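As a purely numerical illustration of this dichotomy (not a proof), one can simulate the backlog for positive and negative evaporation. The sketch below assumes the same reconstructed update rule as in Section 2, with illustrative parameter values.

```python
import random

def backlog_run(mu, lam=0.3, zeta=1.0, xi=1.0, r_star=2.0,
                sigma=0.5, T=5000, seed=7):
    """Simulate the chain and return the sequence of backlogs Z(t)."""
    rng = random.Random(seed)
    r, z, zs = r_star, 0.0, []
    for _ in range(T):
        f = max(0.0, -r)                      # frustrated demand
        h = min(zeta, max(-xi, r_star - r))   # threshold control
        r, z = (r + h + lam * (lam + mu) * z - lam * f + rng.gauss(0.0, sigma),
                (1.0 - lam - mu) * z + f)
        zs.append(z)
    return zs
```

Comparing, say, the running average of backlog_run(0.05) with that of backlog_run(-0.05) over several seeds typically shows a backlog that keeps returning to 0 in the first case and drifts upward in the second, in line with the dichotomy stated above.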

Proofs.
4.1. Proof of Lemma 3.2. Let γ := 1 − λ − µ and note that γ < 1 whenever µ > 0. Fix a bounded closed interval I ⊂ R and a constant a > 0, let E := I × [0, a], and consider the measure ϕ_E defined as follows: for any Borel set A,

ϕ_E(A) := ν(A ∩ E),    (4.1)

where ν denotes the Lebesgue measure on R². Take a measurable set B ⊂ E. We denote the support sets of B on the two dimensions by

B¹ := {r : (r, z) ∈ B for some z},   B² := {z : (r, z) ∈ B for some r}.

We further denote the cross-section of B at some z ∈ R⁺ by B_z := {r : (r, z) ∈ B}. Consider any initial point x = (r, z). We wish to show that whenever ϕ_E(B) is strictly positive, there exists a finite T such that the probability of hitting B in T steps starting from x is also strictly positive. Then, by Proposition 4.2.1 (ii) from [9], we can conclude.
We wish to select a large enough T such that the event (1).
We namely pick a T such that .
Let us now compute a lower bound of the probability of this trajectory. For some set A ⊂ R and some real number b ∈ R, we denote −A := {−a : a ∈ A} and A + b := {a + b : a ∈ A}.
We can now characterize the probability of hitting B in T + 2 steps following the trajectory that we just described: . . .
Denote the density of a normal variable of mean 0 and standard deviation σ by N . The probability that N (1) is such that Z(2) ∈ B δ 2 (γ T ) (i.e., event ω 2 ) can be written as We can lower bound the probability that N (2) is such that R(2) ∈ L(ℓ) as since the infimum is taken over a bounded set. Similarly, after 2 < t ≤ T + 1 steps, the process will remain in D 3 ∪ (D 4 ∩ {(r, z) : r ≤ r * + ℓ}) with probability at least Note that R(t) has a different expression, depending on whether r ≤ r * + ξ, or r > r * + ξ. We introduce the following notation: Then, we can rewrite The strict positiveness of the lower bound follows from the fact that L(ℓ) and B are bounded.
In the last step we want to ensure that R(T + 2) belongs to the corresponding cross-section Let us denote Then, we can finally write: Since L(ℓ) is bounded, there exists a large enough q such that We make the following coordinate change: Then, we can write: . We can then conclude by (4.3): When the initial point x is in D 2 or D 3 we just need to consider N (1), such that (R(1), Z(1)) ∈ D 1 and Z(2) = γ 2 z − r(1) − N (1) ∈ B δ 2 (γ T ), for some T greater than a suitably chosen T 0 , where The rest of the analysis follows.

4.2. Proof of Lemma 3.3.
Assume without loss of generality that C ∩ D 1 has positive Lebesgue measure. Let us show that the set C 1 = C ∩ D 1 satisfies the "smallness" property in the statement.
In Lemma 3.2 we essentially proved that we can reach any bounded Lebesgue-measurable set B of positive measure from any state x in a finite number of steps. However, in order to give an upper bound on the required number of steps, we defined the set E = I × [0, a] and we introduced the measure ϕ E , which is defined as the Lebesgue measure of the set obtained via intersection with E. We obtained that for any 0 < ǫ ≤ 1, for any B such that ϕ E (B) > 0, and for any x, there exists T 0 (B, ǫ, x) > 0, such that for any T ≥ T 0 (B, ǫ, x), there exists a strictly positive constant K = K(T, B, x) > 0 such that (4.7) In order to prove smallness, we need to eliminate the dependence on the specific starting point x and on the destination B of K and T 0 .
The dependence on x can easily be dealt with. Fix some destination set B. Since the set C_1 is bounded, we can prove that both are finite. Indeed, revisiting the proof of Lemma 3.2, we have Furthermore, in (4.6) we have that by the fact that C_1 is bounded. For 2 < t ≤ T + 1, the p_t > 0 do not depend on the starting point. Again, via the boundedness of C_1, we have that there exists a constant η > 0 such that The dependence on the destination set B is trickier. The main difficulty is the dependence on B of the proposed value for T_0, which, as seen in (4.8), depends logarithmically on δ. Hence, for any ǫ, one can always find a set B such that δ needs to be arbitrarily close to 0 in order to satisfy (4.2). This in turn leads to T_0 being arbitrarily large (whereas we would like it to be constant). This is due to the deterministic geometric convergence of Z(t) towards 0 in region D_3. In other words, when started from a strictly positive value, Z(t) cannot reach 0 in finite time along the trajectory we considered.
Let us instead pick a finite closed interval I and constants ∆ > δ > 0 and let us define the set A := I × [δ, ∆]. Clearly, ν(A δ ) = ν(A), so in this case we are allowed to pick ǫ = 1. We also define η(T, A, C 1 ) := inf x∈C1,(v,w)∈A(γ T ) h x (v, w) > 0 and With T 0 = T 0 (A, 1, C 1 ) given by (4.8), but for the chosen δ > 0 (no dependence on any specific destination set), for all T ≥ T 0 we define where ϕ A is defined in (4.1). By (4.6) we can conclude that C 1 is ν T -small. In a similar fashion, we can prove that the entire set C is small.

4.3. Proof of Theorem 3.1.
In the following we define a Lyapunov function V which is unbounded off petite sets, that is, for any n < ∞ the sublevel set C_V(n) := {y : V(y) ≤ n} is small. We show that there exist constants a, b, c > 0 and a set C = [−a, a] × [0, b] such that the drift satisfies

DV(x) ≤ −c, for all x ∉ C.

A consequence of this result is that, by Theorem 9.1.8 of [9] and by Corollary 3.5, which shows that the set C is petite, the chain is Harris recurrent. Furthermore, by Theorem 10.0.1 of [9], it admits a unique invariant measure π. Finally, by Theorem 13.0.1 of [9] and by Corollary 3.4, we get the finiteness of π and we conclude.
Let us now delve into the details. We prove that the function H is a Lyapunov function for the system (2.14-2.15). Since µ ≠ 0, matrix A_1 can be diagonalized as A_1 = M_1 diag(1, 1 − µ) M_1^{−1}. Then, for X(t) ∈ D_1, we can write the system in the coordinates Y(t) := M_1^{−1} X(t). Denote ∆_1 := {y ∈ R² : M_1 y ∈ D_1}. Then, for Y(t) ∈ ∆_1, the system can be rewritten as (4.11). Hence, for a point x = (r, z) ∈ D_1, the Lyapunov function can be rewritten in the new variable y = M_1^{−1} x. Let us compute the drift of H in x ∈ D_1, DH(x) = DW_1(y). Since µ > 0, the set S̄_1 of y ∈ ∆_1 for which DW_1(y) ≤ −1 is delimited by the graph of a second-degree polynomial function. We represent both sets in Figure 4.2(a). In the original coordinates, we denote the corresponding set by C_1 = M_1 S̄_1 = {M_1 y : y ∈ S̄_1}.
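The diagonalization step can be sanity-checked numerically. Assuming A_1 = [[1+λ, λ(λ+µ)], [−1, 1−λ−µ]] and M_1 = [[λ+µ, λ], [−1, −1]] (a reconstruction; λ = 0.3 and µ = 0.05 are illustrative), a pure-Python computation of M_1^{−1} A_1 M_1 recovers diag(1, 1 − µ):

```python
def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

lam, mu = 0.3, 0.05
A1 = [[1.0 + lam, lam * (lam + mu)], [-1.0, 1.0 - lam - mu]]
M1 = [[lam + mu, lam], [-1.0, -1.0]]       # columns: eigenvectors for 1, 1 - mu
DIAG = matmul2(inv2(M1), matmul2(A1, M1))  # expected: diag(1, 1 - mu)
```

The change of variable y = M_1^{−1} x therefore decouples the neutral direction (eigenvalue 1) from the contracting one (eigenvalue 1 − µ), which is what the drift computation exploits.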
4.4. Proof of Theorem 3.6. Notice that if µ ≤ −λ, then the Z coordinate of the Markov chain cannot decrease. Hence the chain is trivially unstable (it is not even ψ-irreducible).
In the case −λ < µ < 0, we turn again to [9] for proving the claim. In this case the Markov chain (2.14-2.15) is ψ-irreducible. We need to find a function H which satisfies the hypothesis of Theorem 11.5.2 from [9] to show non-positivity.
As in the proof of Theorem 3.1, consider the change of coordinates Y(t) = M_1^{−1} X(t), which yields evolution (4.11) for Y(t) = (U(t), V(t)) ∈ ∆_1 defined in (4.10). The state space under these new coordinates becomes ∆. Consider y = (u, v) ∈ ∆ such that v ≥ 1. Then, since 0 < λ + µ < λ, we have that necessarily y ∈ ∆_1. We obtain: , then by the definition of H we have that H(Y(1)) = 0, in which case we are left with the second term (since v ≥ 1 implies log v ≥ 0). For a given v, the argument of the logarithm in the integral of the first term is less than 1 for n comprised between ℓ_0(v) := µ_2 v − ζ and ℓ_1(v). Furthermore, it is lower bounded by 1/v, and, since log a ≤ a − 1 for a > 0, we can write All the terms in the sum above are bounded for v > 1. The last term converges to 0 as v goes to infinity, via two applications of the L'Hopital-Bernoulli rule.
Next consider y = (u, v) ∈ ∆ such that v < 1; thus H(y) = 0. We distinguish four cases: M_1 y ∈ D_1, M_1 y ∈ D_2, M_1 y ∈ D_3, or M_1 y ∈ D_4, which can equivalently be written in the y coordinates. In the case y ∈ ∆_1, we have Both terms are bounded and converge to 0 as v goes to −∞ (the first term again by application of the L'Hopital-Bernoulli rule, the second one trivially).
Let us now examine the drift DH(y) = E_y(H(Y(1))) − H(y) at some point y ∈ ∆_1 ∩ {v ≥ 1}, which we denote by DH(v) by abuse of notation. Taking the limit as v goes to infinity, we find that lim_{v→∞} DH(v) = log(1 − µ) > 0. Indeed, each of the last two terms converges to 0. In other words, for any ǫ > 0, there exists a v_0(ǫ) > 1 such that for all v > v_0, |DH(v) − log(1 − µ)| < ǫ. Taking a sufficiently small ǫ, define the set C accordingly. Then any y in the complement C̄ := C^c ∩ ∆ is such that H(y) > sup_{y_1∈C} H(y_1) and DH(y) > 0. Using (4.16) and (4.17) we can apply Theorem 11.5.2 from [9] and conclude.
Appendix A. Impact of Delayed Heating.
A.1. With Constant Coefficient of Performance. Assume the appliance is a heating system, which is used to maintain a building at temperature T(t). We use the model in [6, Chapter III-E]. Let ǫ be the coefficient of performance of the appliance, T(t) the room temperature, and θ(t) the outside temperature. Assume in this section that the coefficient of performance is constant. Assume the heating appliance consumes an amount of energy equal to d(t) in time slot t; the obtained room temperature T(t) is defined by the following equation:

d(t)ǫ = K(T(t) − θ(t)) + C(T(t) − T(t − 1)),    (A.1)

where K is the building's leakiness and C its thermal inertia. Note that this equation is valid if we assume that the appliance is used for heating only and not for cooling. We make no particular assumption on d(t), which typically depends on thermostat regulation by the building occupants. Consider two scenarios. In the first scenario, there is always enough supply and the appliance does not need to adapt. The energy consumed by the appliance is D_a(t), the naturally occurring demand. Let T*(t) be the temperature obtained. Thus

D_a(t)ǫ = K(T*(t) − θ(t)) + C(T*(t) − T*(t − 1)),   t = 1 . . . τ.    (A.2)

In the second scenario, energy supply is not sufficient at times 1, . . . , τ − 1 and is sufficient at time τ. The energy consumed at times 1, . . . , τ − 1 is D_a(t) − F(t), where F(t) is the frustrated demand. The accumulated frustrated demand at time τ − 1 is

Z(τ − 1) = Σ_{t=1}^{τ−1} F(t).    (A.3)

The obtained temperature is T(t) < T*(t) for t = 1, . . . , τ − 1. We assume that T(0) = T*(0). Assume that at time τ, we use the energy required to obtain the same temperature as in the first scenario; let D_a(τ) + Z(τ) be this amount of energy, i.e., Z(τ) is the amount of backlogged demand that remains at time τ. We have thus

(D_a(t) − F(t))ǫ = K(T(t) − θ(t)) + C(T(t) − T(t − 1)),   t = 1 . . . τ − 1,    (A.4)
(D_a(τ) + Z(τ))ǫ = K(T*(τ) − θ(τ)) + C(T*(τ) − T(τ − 1)).    (A.5)

By summation of the last two equalities over t = 1 . . . τ, using the telescoping of the C terms, it comes:

ǫ ( Σ_{t=1}^{τ−1} (D_a(t) − F(t)) + D_a(τ) + Z(τ) ) = K Σ_{t=1}^{τ−1} (T(t) − θ(t)) + K(T*(τ) − θ(τ)) + C(T*(τ) − T(0)),    (A.6)

while summing (A.2) over t = 1 . . . τ gives

ǫ Σ_{t=1}^{τ} D_a(t) = K Σ_{t=1}^{τ} (T*(t) − θ(t)) + C(T*(τ) − T*(0));    (A.7)

thus, by subtraction, using T(0) = T*(0) and (A.3):

Z(τ) − Z(τ − 1) = (K/ǫ) Σ_{t=1}^{τ−1} (T(t) − T*(t)) < 0.    (A.8)

Therefore, for this simple model, the backlogged demand diminishes, i.e., the total energy consumption is reduced if heating is delayed. In other words, there is positive evaporation.
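This conclusion can be checked numerically. The sketch below iterates the thermal balance of (A.1) with illustrative constants (ǫ = 3, K = 1, C = 20, θ = 0, T* = 20, τ = 4; none of these values come from the source) and compares the total energy bought in the two scenarios:

```python
def room_temp(demands, eps, K, C, theta, T0):
    """Temperature trajectory from (A.1): eps*d(t) = K(T - theta) + C(T - T_prev)."""
    temps = [T0]
    for d in demands:
        temps.append((eps * d + K * theta + C * temps[-1]) / (K + C))
    return temps

def slot_energy(T_prev, T_target, eps, K, C, theta):
    """Energy needed in one slot to reach T_target from T_prev, by (A.1)."""
    return (K * (T_target - theta) + C * (T_target - T_prev)) / eps

eps, K, C, theta, T_star, tau = 3.0, 1.0, 20.0, 0.0, 20.0, 4

# Scenario 1: hold the room at T* for tau slots.
d_star = K * (T_star - theta) / eps        # steady-state demand per slot
energy1 = tau * d_star

# Scenario 2: heating off for tau - 1 slots, single catch-up slot at time tau.
temps = room_temp([0.0] * (tau - 1), eps, K, C, theta, T_star)
energy2 = slot_energy(temps[-1], T_star, eps, K, C, theta)
```

With these constants energy2 < energy1: the delayed scenario consumes strictly less total energy (positive evaporation), at the price of a lower room temperature during the delay.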
A.2. With Heat Pumps. The model above assumes that the coefficient of performance ǫ is constant. For some systems, this does not hold, as it might depend on the initial room temperature T(t), the target room temperature T*(t) and the outside temperature θ(t). In particular, for heat pumps, the coefficient of performance is reduced when the differences T*(t) − T(t) and T*(t) − θ(t) are large. To understand what happens with such systems, we revisit the second scenario above and, for simplicity, consider the simpler case where all demand is frustrated at times 1 to τ − 1. Equations (A.4) and (A.5) are now replaced by

0 = K(T(t) − θ(t)) + C(T(t) − T(t − 1)),   t = 1 . . . τ − 1,    (A.9)
(D_a(τ) + Z(τ))ǫ′(τ) = K(T*(τ) − θ(τ)) + C(T*(τ) − T(τ − 1)),    (A.10)

and D_a(t) = F(t) for t = 1, . . . , τ − 1. Here we assume for simplicity that ǫ is maintained constant in scenario 1, but the coefficient of performance is smaller in scenario 2, i.e., ǫ′(τ) < ǫ. This is because the heat pump needs to produce warmer water at time τ than in scenario 1 in order to reach the target room temperature, and if the outside temperature is low, the efficiency is less. Summing these equations and combining with Eq. (A.7) gives

Z(τ) − Z(τ − 1) = (K/ǫ′(τ)) Σ_{t=1}^{τ−1} (T(t) − T*(t)) + (ǫ/ǫ′(τ) − 1) Σ_{t=1}^{τ} D_a(t).    (A.11)

Compare with Eq. (A.8): the last term is positive, and if large, it may well be that Z(τ) > Z(τ − 1), i.e., there may be some negative evaporation.
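The heat-pump effect can be illustrated with the same toy constants as before (ǫ = 3, K = 1, C = 20, θ = 0, T* = 20, τ = 4; all illustrative, none from the source): degrading the coefficient of performance only in the catch-up slot can flip the sign of the comparison between Z(τ) and the accumulated frustrated demand Z(τ − 1).

```python
def catchup_backlog(eps_catchup, eps=3.0, K=1.0, C=20.0,
                    theta=0.0, T_star=20.0, tau=4):
    """Return (Z(tau), Z(tau - 1)) when heating is off for tau - 1 slots and
    the catch-up slot runs at coefficient of performance eps_catchup."""
    T = T_star
    for _ in range(tau - 1):                 # unheated decay, Eq. (A.9)
        T = (K * theta + C * T) / (K + C)
    d_nat = K * (T_star - theta) / eps       # natural demand per slot
    total = (K * (T_star - theta) + C * (T_star - T)) / eps_catchup  # (A.10)
    return total - d_nat, (tau - 1) * d_nat
```

With eps_catchup = eps the backlog shrinks (positive evaporation), while with eps_catchup = 2 < eps it exceeds the accumulated frustrated demand, i.e., negative evaporation, as in Eq. (A.11).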