
In this section we discuss the Feynman-Kac theorem for BSDEs driven by a Brownian motion and by a Lévy process. The Feynman-Kac theorems are useful in many applications, for example when a PDE cannot be solved in closed form or when an expectation cannot be evaluated in closed form [2]. In financial applications the expectation giving the arbitrage price of a contract cannot always be evaluated in closed form, and numerical approximations are preferred. We either simulate the SDEs or solve the resulting PDE (or system of PDEs) provided by the Feynman-Kac theorems [24].
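To make the simulation route concrete, the following sketch (illustrative only; the coefficient functions, the payoff and all parameter values are our own hypothetical choices and are not taken from the references) approximates an expectation of the form E[Ψ(X_T)], which corresponds to the simplest case in which the generator of the backward equation vanishes, by an Euler-Maruyama discretisation of the forward SDE and Monte Carlo averaging.

import numpy as np

def feynman_kac_mc(b, sigma, psi, t, x, T, n_steps=200, n_paths=50_000, seed=0):
    # Monte Carlo approximation of E[psi(X_T)] for dX = b(s, X) ds + sigma(s, X) dW, X_t = x,
    # using an Euler-Maruyama scheme with n_steps time steps and n_paths simulated paths.
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    X = np.full(n_paths, float(x))
    s = t
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + b(s, X) * dt + sigma(s, X) * dW
        s += dt
    return psi(X).mean()

# Example call: geometric Brownian motion with a call-type payoff (hypothetical parameters).
estimate = feynman_kac_mc(b=lambda s, x: 0.05 * x,
                          sigma=lambda s, x: 0.2 * x,
                          psi=lambda x: np.maximum(x - 100.0, 0.0),
                          t=0.0, x=100.0, T=1.0)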

Model 1

Under Model 1, all theorems, definitions and proofs are due to [9]; here we expand the proofs given in their article. For any (t, x) ∈ [0, T] × R^p, consider the following stochastic differential equation on [0, T]:

dX_s = b(s, X_s) ds + σ(s, X_s) dW_s,   t ≤ s ≤ T,     (18)

X_s = x,   0 ≤ s ≤ t.

We denote the solution of Equation (18) by (X_s^{t,x}, 0 ≤ s ≤ T). We consider the associated BSDE

−dY_s = f(s, X_s^{t,x}, Y_s, Z_s) ds − Z_s dW_s,     (19)

Y_T = Ψ(X_T^{t,x}).

We denote the solution of Equation (19) by {(Y_s^{t,x}, Z_s^{t,x}), 0 ≤ s ≤ T}. The coupled system of Equations (18) and (19) is termed a forward-backward SDE (FBSDE), or a BSDE associated with a forward SDE, and its solution is denoted by {(X_s^{t,x}, Y_s^{t,x}, Z_s^{t,x}), 0 ≤ s ≤ T}.
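Equivalently, the coupled system of Equations (18) and (19) can be written in integral form as

X_s^{t,x} = x + ∫_t^s b(r, X_r^{t,x}) dr + ∫_t^s σ(r, X_r^{t,x}) dW_r,   t ≤ s ≤ T,

Y_s^{t,x} = Ψ(X_T^{t,x}) + ∫_s^T f(r, X_r^{t,x}, Y_r^{t,x}, Z_r^{t,x}) dr − ∫_s^T Z_r^{t,x} dW_r,   t ≤ s ≤ T,

which makes explicit that X is determined forward from the initial condition at time t, while (Y, Z) is determined backward from the terminal condition at time T.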

The function f is an R^d-valued Borel function defined on [0, T] × R^p × R^d × R^{n×d}, and Ψ is an R^d-valued Borel function defined on R^p. The coefficient b is an R^p-valued function defined on [0, T] × R^p, and σ is an R^{p×n}-valued function defined on [0, T] × R^p. Standard Lipschitz assumptions are required on the coefficients; that is, there exists a Lipschitz constant C > 0 such that

|b(t, x) − b(t, y)| + |σ(t, x) − σ(t, y)| ≤ C|x − y|, and

|f(t, x, y_1, z_1) − f(t, x, y_2, z_2)| ≤ C(|z_1 − z_2| + |y_1 − y_2|).

Finally, we suppose that there exists a constant C such that, for each (t, x),

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|), and

|f(t, x, 0, 0)| + |Ψ(x)| ≤ C(1 + |x|^p), for some real p ≥ 1/2.
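For instance (a standard illustrative choice, not taken from [9]), the one-dimensional Black-Scholes setting b(t, x) = μx, σ(t, x) = σ̄x, Ψ(x) = (x − K)^+ and f(t, x, y, z) = −ry, with constants μ, σ̄ > 0, r ≥ 0 and K > 0, fits these assumptions: b and σ are Lipschitz in x with constant |μ| + σ̄ and of linear growth, f is Lipschitz in (y, z) with constant r and f(t, x, 0, 0) = 0, while |Ψ(x)| ≤ K + |x|, so the growth condition holds with p = 1.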

Generalization of the Feynman-Kac formula.

Proposition 2. Let ν be a function of class C^{1,2} (or smooth enough to be able to apply the Itô formula to ν(s, X_s^{t,x})), and suppose that there exists a constant C such that, for each (s, x),

|ν(s, x)| + |σ(s, x) ∂_x ν(s, x)| ≤ C(1 + |x|).

Also, ν is supposed to be the solution of the following quasi-linear parabolic partial differential equation

∂_t ν(t, x) + Lν(t, x) + f(t, x, ν(t, x), σ(t, x) ∂_x ν(t, x)) = 0,     (20)

ν(T, x) = Ψ(x),

where ∂_x ν is the gradient of ν, and L = L_{(t,x)} denotes the second order differential operator

L_{(t,x)} = Σ_{i,j} a_{ij}(t, x) ∂²_{x_i x_j} + Σ_i b_i(t, x) ∂_{x_i},   with a_{ij} = (1/2)[σσ^T]_{ij}.

Then ν(t, x) = Y_t^{t,x}, where {(Y_s^{t,x}, Z_s^{t,x}), t ≤ s ≤ T} is the unique solution of the BSDE Equation (19). Also

(Y_s^{t,x}, Z_s^{t,x}) = (ν(s, X_s^{t,x}), σ(s, X_s^{t,x}) ∂_x ν(s, X_s^{t,x})),   t ≤ s ≤ T.

Proof. X_s^{t,x} is a p-dimensional Itô process. The partial derivatives of ν with respect to time and space are ∂_t ν(s, X_s^{t,x}) and the p-dimensional gradient ∂_x ν(s, X_s^{t,x}), respectively, and ∂²_x ν(s, X_s^{t,x}) denotes the p × p Hessian matrix of ν with respect to the space variable. Applying the Itô formula, Equation (3), in differential form to ν(s, X_s^{t,x}), we have

dν(s, X_s^{t,x}) = ∂_t ν(s, X_s^{t,x}) ds + ∂_x ν(s, X_s^{t,x})^T dX_s^{t,x} + (1/2) (dX_s^{t,x})^T ∂²_x ν(s, X_s^{t,x}) dX_s^{t,x}.

Substituting dX_s^{t,x} = b(s, X_s^{t,x}) ds + σ(s, X_s^{t,x}) dW_s, and using dW_s dW_s^T = I ds, we have

dν(s, X_s^{t,x}) = [∂_t ν(s, X_s^{t,x}) + Lν(s, X_s^{t,x})] ds + ∂_x ν(s, X_s^{t,x})^T σ(s, X_s^{t,x}) dW_s.

Since ν solves Equation (20), it follows that

−dν(s, X_s^{t,x}) = f(s, X_s^{t,x}, ν(s, X_s^{t,x}), σ(s, X_s^{t,x}) ∂_x ν(s, X_s^{t,x})) ds − ∂_x ν(s, X_s^{t,x})^T σ(s, X_s^{t,x}) dW_s,

ν(T, X_T^{t,x}) = Ψ(X_T^{t,x}).

Thus, by uniqueness of solutions of the BSDE, {(ν(s, X_s^{t,x}), σ(s, X_s^{t,x}) ∂_x ν(s, X_s^{t,x})), s ∈ [0, T]} is the unique solution of the BSDE Equation (19), and the result is obtained.
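As a consistency check (a standard special case, not spelled out in [9]), take a generator that does not depend on (x, z). For f ≡ 0 the BSDE (19) reduces to −dY_s = −Z_s dW_s with Y_T = Ψ(X_T^{t,x}), so Y_s^{t,x} = E[Ψ(X_T^{t,x}) | F_s] and Proposition 2 gives the classical Feynman-Kac formula

ν(t, x) = E[Ψ(X_T^{t,x})],   with   ∂_t ν + Lν = 0,   ν(T, ·) = Ψ.

For f(t, x, y, z) = −ry with a constant r ≥ 0, one checks that d(e^{−rs} Y_s) = e^{−rs} Z_s dW_s, hence

ν(t, x) = E[e^{−r(T−t)} Ψ(X_T^{t,x})],   with   ∂_t ν + Lν − rν = 0,   ν(T, ·) = Ψ,

which is the discounted (risk-neutral) pricing formula and the associated pricing PDE alluded to in the introduction of this section.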

We now show that, conversely, in certain cases the solution of the BSDE Equation (19) corresponds to the solution of the PDE Equation (20). If d = 1, we can use the comparison theorem to show that if b, σ, f and Ψ satisfy the assumptions of [9], and if f and Ψ are uniformly continuous with respect to x, then u(t, x) := Y_t^{t,x} is the viscosity solution of Equation (20) [6].

Definition 16. Suppose u ∈ C([0, T] × R^p) satisfies u(T, x) = Ψ(x), x ∈ R^p. Then u is called a viscosity sub-solution (respectively, super-solution) of the PDE Equation (20) if, for each (t, x) ∈ [0, T] × R^p and each φ ∈ C^{1,2}([0, T] × R^p) such that φ(t, x) = u(t, x) and (t, x) is a minimum (respectively, maximum) of φ − u,

∂_t φ(t, x) + Lφ(t, x) + f(t, x, φ(t, x), σ(t, x) ∂_x φ(t, x)) ≥ 0   (sub-solution), or

∂_t φ(t, x) + Lφ(t, x) + f(t, x, φ(t, x), σ(t, x) ∂_x φ(t, x)) ≤ 0   (super-solution).

Moreover, u is called a viscosity solution of the PDE Equation (20) if it is both a viscosity sub-solution and a viscosity super-solution of Equation (20).
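As a quick sanity check on Definition 16 (a standard observation, not taken from [6]), every classical solution is a viscosity solution. Indeed, if u ∈ C^{1,2} solves Equation (20) and φ − u attains a minimum at an interior point (t, x) with φ(t, x) = u(t, x), then ∂_t φ(t, x) = ∂_t u(t, x), ∂_x φ(t, x) = ∂_x u(t, x) and ∂²_x(φ − u)(t, x) ≥ 0, so that Lφ(t, x) ≥ Lu(t, x) because the matrix (a_{ij}) = (1/2)σσ^T is positive semi-definite. Hence

∂_t φ(t, x) + Lφ(t, x) + f(t, x, φ(t, x), σ(t, x) ∂_x φ(t, x)) ≥ ∂_t u(t, x) + Lu(t, x) + f(t, x, u(t, x), σ(t, x) ∂_x u(t, x)) = 0,

and the super-solution inequality follows in the same way at a maximum of φ − u.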

Theorem 2.4.1. We suppose that d = 1 and that f and Ψ are uniformly continuous with respect to x. Then the function u defined by u(t, x) = Y_t^{t,x} is a viscosity solution of PDE (20). Furthermore, if we suppose that for each R > 0 there exists a continuous function m_R : R_+ → R_+ such that m_R(0) = 0 and

|f(t, x, y, z) − f(t, x′, y, z)| ≤ m_R(|x − x′|(1 + |z|)),     (21)

for all t ∈ [0, T], |x| ≤ R, |x′| ≤ R and |z| ≤ R with z ∈ R^n, then u is the unique viscosity solution of Equation (20).
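For example (an illustrative remark, not part of the theorem), if f is globally Lipschitz in x, say |f(t, x, y, z) − f(t, x′, y, z)| ≤ C|x − x′|, then condition (21) holds with the linear modulus m_R(r) = Cr, since

|f(t, x, y, z) − f(t, x′, y, z)| ≤ C|x − x′| ≤ C|x − x′|(1 + |z|) = m_R(|x − x′|(1 + |z|)).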

Proof. The continuity of u with respect to (t, x) is given by [9]. We consider Equation (20) in the viscosity sense to avoid restrictive assumptions on the coefficients of our model. We prove that u is a viscosity sub-solution; the proof for the super-solution is similar. Let (t, x) ∈ [0, T] × R^p and φ ∈ C^{1,2}([0, T] × R^p) be such that

φ(t, x) = u(t, x), and

φ ≥ u on [0, T] × R^p.

Without loss of generality we suppose that φ is smooth with bounded derivatives.

For h > 0, note that u(t + h, X_{t+h}^{t,x}) = Y_{t+h}^{t,x} and that, by the BSDE Equation (19),

Y_{t+h}^{t,x} − Y_t^{t,x} = −∫_t^{t+h} f(s, X_s^{t,x}, Y_s^{t,x}, Z_s^{t,x}) ds + ∫_t^{t+h} Z_s^{t,x} dW_s.

Since φ ≥ u and φ(t, x) = u(t, x) = Y_t^{t,x}, it follows that

φ(t + h, X_{t+h}^{t,x}) − φ(t, x) + ∫_t^{t+h} f(s, X_s^{t,x}, Y_s^{t,x}, Z_s^{t,x}) ds − ∫_t^{t+h} Z_s^{t,x} dW_s ≥ 0;

at this point it is not yet clear whether Z_s^{t,x} converges to σ(t, x) ∂_x φ(t, x), so the stochastic integral cannot be handled directly. Now consider (Ȳ_s, Z̄_s), t ≤ s ≤ t + h, to be the solution of the BSDE

Ȳ_s = φ(t + h, X_{t+h}^{t,x}) + ∫_s^{t+h} f(r, X_r^{t,x}, Ȳ_r, Z̄_r) dr − ∫_s^{t+h} Z̄_r dW_r,   t ≤ s ≤ t + h.

Note that (Ȳ, Z̄) has the same generator as (Y, Z) but a different terminal condition, namely φ(t + h, X_{t+h}^{t,x}) ≥ Y_{t+h}^{t,x} = u(t + h, X_{t+h}^{t,x}). By the Comparison Theorem 2.1.14 we have Ȳ_t ≥ Y_t^{t,x} = u(t, x) = φ(t, x). Set Ỹ_s = Ȳ_s − φ(s, X_s^{t,x}) and Z̃_s = Z̄_s − ∂_x φ(s, X_s^{t,x}) σ(s, X_s^{t,x}), and apply the a priori estimate to this pair; since φ and all the coefficients and their derivatives are uniformly continuous with respect to x, the resulting remainder terms are of order hǫ(h), with ǫ(h) → 0 as h → 0. Taking expectations in Equation (22) we have Ỹ_t = E(Ỹ_t), and since f is Lipschitz the two integrands δ̄(r, h) and δ(r, h) appearing in this expectation satisfy

|δ̄(r, h) − δ(r, h)| ≤ K(|Ỹ_r| + |Z̃_r|),

so that by Equation (23) we have

Ỹ_t = ∫_t^{t+h} G(r, x) dr + hǫ(h),

where G(r, x) = ∂_t φ(r, x) + Lφ(r, x) + f(r, x, φ(r, x), ∂_x φ(r, x) σ(r, x)). Hence, since Ȳ_t ≥ φ(t, x), that is Ỹ_t ≥ 0, we obtain

∫_t^{t+h} G(r, x) dr ≥ −hǫ(h),

so, dividing by h and letting h → 0, we obtain

G(t, x) = ∂_t φ(t, x) + Lφ(t, x) + f(t, x, φ(t, x), ∂_x φ(t, x) σ(t, x)) ≥ 0.

Following the same procedure, we show that u is a viscosity super-solution of Equation (20); therefore u is a viscosity solution of Equation (20).
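To tie Model 1 back to the pricing application mentioned at the beginning of this section, the following sketch (illustrative only; the geometric Brownian motion model, the call payoff and all parameter values are our own hypothetical choices) compares a Monte Carlo estimate of the discounted expectation E[e^{−rT}(X_T − K)^+], which is what Proposition 2 with f(t, x, y, z) = −ry represents, against the closed-form Black-Scholes price.

import numpy as np
from scipy.stats import norm

def bs_call(x, K, r, sigma, T):
    # Closed-form Black-Scholes price of a European call with maturity T and strike K.
    d1 = (np.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return x * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mc_call(x, K, r, sigma, T, n_paths=200_000, seed=1):
    # Monte Carlo price using the exact log-normal law of geometric Brownian motion at time T.
    rng = np.random.default_rng(seed)
    W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
    X_T = x * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
    return np.exp(-r * T) * np.maximum(X_T - K, 0.0).mean()

# The two numbers should agree up to Monte Carlo error (hypothetical parameters).
print(bs_call(100.0, 100.0, 0.05, 0.2, 1.0), mc_call(100.0, 100.0, 0.05, 0.2, 1.0))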

Model 2

All theorems, definitions and proofs under this model are due to [14]. Now consider the BSDE (16), or equivalently

Y_t = g(X_T) + ∫_t^T f(s, Y_{s−}, Z_s) ds − Σ_{i=1}^{∞} ∫_t^T Z_s^{(i)} dH_s^{(i)},

where X_t is a Lévy process without a Brownian motion part, of the form X_t = at + L_t, with L_t a pure jump process with Lévy measure ν(dy). We also have that g(X_T) is square integrable. Consider the partial differential integral equation (PDIE) satisfied by θ = θ(t, x):

∂θ/∂t(t, x) + ∫_R θ^{(1)}(t, x, y) ν(dy) + ā ∂θ/∂x(t, x) + f(t, θ(t, x), {θ^{(i)}(t, x)}_{i=1}^{∞}) = 0,     (24)

θ(T, x) = g(x),

where

θ^{(1)}(t, x, y) = θ(t, x + y) − θ(t, x) − ∂θ/∂x(t, x) y,     (25)

and

ā = a + ∫_{|y|≥1} y ν(dy).
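Two remarks may help to interpret Equations (24) and (25); both are standard observations and are not taken from [14]. First, θ^{(1)}(t, x, y) is the first order Taylor remainder of θ(t, ·) at x, so that, when ∂²θ/∂x² is locally bounded,

|θ^{(1)}(t, x, y)| ≤ (1/2) sup_{|u−x|≤|y|} |∂²θ/∂x²(t, u)| y²,

so the integrand in Equation (24) is of order y² near y = 0, which is exactly the local integrability that a Lévy measure provides (∫ (y² ∧ 1) ν(dy) < ∞); integrability over {|y| ≥ 1} follows from the moment assumptions on ν and the growth of θ. Second, if the jump part is a compound Poisson process, ν(dy) = λF(dy) with intensity λ and jump-size distribution F, then

ā = a + λ ∫_{|y|≥1} y F(dy) = a + λ E[J 1_{|J|≥1}],

where J denotes a generic jump size.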

Thus we have the following.

Proposition 3. Suppose θ is a C^{1,2} function such that its first and second partial derivatives in x are bounded by a polynomial in x, uniformly in t. Then the unique adapted solution of Equation (16) is given by

Y_t = θ(t, X_t).

Proof. We make use of the following lemma from [14].

Lemma 1. Let h : Ω × [0, T] × R → R be a random function, measurable with respect to P ⊗ B(R), such that

|h(s, y)| ≤ a_s (y² ∧ |y|)   a.s.,     (27)

where {a_s; 0 ≤ s ≤ T} is a non-negative predictable process such that E[∫_0^T a_s² ds] < ∞. Then

Σ_{0<s≤t} h(s, ΔX_s) − ∫_0^t ∫_R h(s, y) ν(dy) ds = Σ_{i=1}^{∞} ∫_0^t φ_s^{(i)} dH_s^{(i)},   0 ≤ t ≤ T,     (28)

where

φ_s^{(i)} = ∫_R h(s, y) p_i(y) ν(dy).     (29)

Proof. We prove Lemma 1 and then use it to prove Proposition 3. Equation (27) implies that the compensated jump process on the left-hand side of Equation (28) is a square integrable martingale. Then, by the predictable representation theorem [13], there exist predictable processes φ^{(i)} such that the representation on the right-hand side of Equation (28) holds; identifying the integrands gives Equation (29), that is,

φ_s^{(i)} = ∫_R h(s, y) p_i(y) ν(dy),

and the result follows.

Now we apply the Itô formula, Theorem 2.1.8, to θ(s, X_s) from s = t to s = T, and we have

θ(T, X_T) − θ(t, X_t) = ∫_t^T ∂θ/∂s(s, X_{s−}) ds + ∫_t^T ∂θ/∂x(s, X_{s−}) dX_s + Σ_{t<s≤T} [θ(s, X_s) − θ(s, X_{s−}) − ∂θ/∂x(s, X_{s−}) ΔX_s].     (30)

Applying Lemma 1 to h(s, y) = θ(s, X_{s−} + y) − θ(s, X_{s−}) − ∂θ/∂x(s, X_{s−}) y, so that h(s, ΔX_s) = θ(s, X_s) − θ(s, X_{s−}) − ∂θ/∂x(s, X_{s−}) ΔX_s, we obtain

Σ_{t<s≤T} [θ(s, X_s) − θ(s, X_{s−}) − ∂θ/∂x(s, X_{s−}) ΔX_s] = ∫_t^T ∫_R θ^{(1)}(s, X_{s−}, y) ν(dy) ds + Σ_{i=1}^{∞} ∫_t^T φ_s^{(i)} dH_s^{(i)}.     (31)

Substituting Equation (31) into Equation (30), using θ(T, X_T) = g(X_T), writing dX_s = a ds + dL_s and using the definition of ā, we get

g(X_T) − θ(t, X_t) = ∫_t^T [∂θ/∂s(s, X_{s−}) + ā ∂θ/∂x(s, X_{s−}) + ∫_R θ^{(1)}(s, X_{s−}, y) ν(dy)] ds + Σ_{i=1}^{∞} ∫_t^T Z_s^{(i)} dH_s^{(i)},

where the processes Z_s^{(i)} collect the φ_s^{(i)} from Equation (31) together with the martingale part of ∫_t^T ∂θ/∂x(s, X_{s−}) dX_s.

Now from Equation (24) we have g(X_T) − θ(t, X_t)