
6.3 Electrical Impedance Process Tomography

6.3.4 Space Discretization

The realizations of the concentration distribution $C$ lie in the space $L^2(D)$. Numerical computation requires space discretization: we choose a finite dimensional subspace of $L^2(D)$ and assume that the realizations of the concentration distribution lie in that subspace. This causes a discretization error, which is usually ignored in numerical implementations; the discretized state estimation system is assumed to represent reality. In this subsection we analyse the stochastic nature of the discretization error in the case of electrical impedance process tomography.

Let $\{V_m\}_{m=1}^\infty$ be a sequence of finite dimensional subspaces of $L^2(D)$ such that $V_m \subset V_{m+1}$ for all $m \in \mathbb{N}$ and $\overline{\bigcup_{m=1}^\infty V_m} = L^2(D)$. Since $L^2(D)$ is a separable Hilbert space, such a sequence exists; for example, $V_m$ may be the subspace spanned by the first $m$ functions of an orthonormal basis of $L^2(D)$. Let $\{\varphi_l^m\}_{l=1}^{N_m}$ be an orthonormal basis of $V_m$ for all $m \in \mathbb{N}$. We denote by $(\cdot,\cdot)$ the inner product in $L^2(D)$. We define the orthogonal projection $P_m : L^2(D) \to V_m$ for $m \in \mathbb{N}$ by
\[
P_m f = \sum_{l=1}^{N_m} (f, \varphi_l^m)\,\varphi_l^m
\]
for all $f \in L^2(D)$. The subspaces $V_m$ are appropriate discretization spaces if $P_m f \to f$ in $L^2(D)$ as $m \to \infty$ for all $f \in L^2(D)$, i.e., if the orthogonal projections $P_m$ converge strongly to the identity operator.

Let $X : (\Omega, \mathcal{F}, \mathbb{P}) \to L^2(D)$ be a random variable. Then for all $\omega \in \Omega$
\[
P_m X(\omega) = \sum_{l=1}^{N_m} (X(\omega), \varphi_l^m)\,\varphi_l^m = \sum_{l=1}^{N_m} (X^m(\omega))_l\,\varphi_l^m
\]
where $X^m(\omega) := ((X(\omega), \varphi_1^m), \ldots, (X(\omega), \varphi_{N_m}^m))^T$ is an $\mathbb{R}^{N_m}$-valued random variable. We view $X^m$ as a discretized version of the random variable $X$ at the discretization level $m$. If $X$ is a Gaussian random variable with mean $\bar{x}$ and covariance $\Gamma$, then $X^m$ is also Gaussian [21, Theorem A.5]. Furthermore, the mean of $X^m$ is $\mathbb{E}X^m = ((\bar{x}, \varphi_1^m), \ldots, (\bar{x}, \varphi_{N_m}^m))^T$ and the covariance matrix is given by $(\mathrm{Cov}\,X^m)_{ij} := (\Gamma \varphi_i^m, \varphi_j^m)$ since
\[
\mathbb{E}(X^m)_i (X^m)_j = \mathbb{E}(X, \varphi_i^m)(X, \varphi_j^m) = (\Gamma \varphi_i^m, \varphi_j^m) + (\bar{x}, \varphi_i^m)(\bar{x}, \varphi_j^m)
\]
for all $i, j = 1, \ldots, N_m$.
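As a concrete illustration, the matrix $(\mathrm{Cov}\,X^m)_{ij} = (\Gamma \varphi_i^m, \varphi_j^m)$ can be evaluated numerically. The sketch below is a minimal example under illustrative assumptions not fixed by the text: $D = (0,1)$, the sine basis, and a covariance operator $\Gamma$ defined spectrally. It applies $\Gamma$ through its kernel and projects onto the basis by quadrature:

```python
import math

# Illustrative assumptions: D = (0,1), orthonormal basis phi_l(x) = sqrt(2) sin(l pi x),
# and a covariance operator Gamma defined spectrally by Gamma phi_l = gamma_l phi_l
# with gamma_l = (l pi)^(-2).  None of these choices comes from the EIT model itself.

def phi(l, x):
    return math.sqrt(2.0) * math.sin(l * math.pi * x)

def gamma(l):
    return (l * math.pi) ** -2

def kernel(x, y, terms=20):
    # kernel of Gamma: k(x, y) = sum_l gamma_l phi_l(x) phi_l(y), truncated
    return sum(gamma(l) * phi(l, x) * phi(l, y) for l in range(1, terms + 1))

def inner(f, g, n=100):
    # composite midpoint rule for the L^2(0,1) inner product
    h = 1.0 / n
    return h * sum(f((j + 0.5) * h) * g((j + 0.5) * h) for j in range(n))

def cov_entry(i, j):
    # (Cov X^m)_{ij} = (Gamma phi_i, phi_j): apply Gamma via its kernel, then project
    def Gphi_i(x):
        return inner(lambda y: kernel(x, y), lambda y: phi(i, y))
    return inner(Gphi_i, lambda x: phi(j, x))
```

Because this basis diagonalizes $\Gamma$, `cov_entry(1, 1)` reproduces $\gamma_1 = \pi^{-2}$ and the off-diagonal entries vanish up to quadrature error; with a non-eigenbasis the same routine fills a dense covariance matrix.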

Evolution Equation

We want to space-discretize the discrete evolution equation (6.16) at the discretization level $m$. We form an evolution equation for the discrete $\mathbb{R}^{N_m}$-valued stochastic process $\{C_k^m\}_{k=0}^n$ where $C_k^m := ((C_k, \varphi_1^m), \ldots, (C_k, \varphi_{N_m}^m))^T$ for all $k = 0, \ldots, n$. By the discrete evolution equation (6.16),

\[
\begin{aligned}
(C_{k+1}^m)_i &= (C_{k+1}, \varphi_i^m) = (U(\Delta_k) C_k + W_{k+1}, \varphi_i^m) \\
&= (U(\Delta_k) P_m C_k, \varphi_i^m) + (U(\Delta_k)(I - P_m) C_k, \varphi_i^m) + (W_{k+1}, \varphi_i^m) \\
&= \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_i^m)(C_k^m)_l + (E_{k+1}^m)_i + (W_{k+1}^m)_i
\end{aligned}
\]
for all $i = 1, \ldots, N_m$ and $k = 0, \ldots, n-1$ almost surely, where the discrete stochastic process
\[
E_{k+1}^m = \big( (C_k, (I - P_m) U(\Delta_k)^* \varphi_1^m), \ldots, (C_k, (I - P_m) U(\Delta_k)^* \varphi_{N_m}^m) \big)^T
\]
represents the discretization error and $W_{k+1}^m$ is the state noise vector. Thus the discretized evolution equation is

\[
C_{k+1}^m = A_{k+1}^m C_k^m + E_{k+1}^m + W_{k+1}^m \tag{6.18}
\]
for all $k = 0, \ldots, n-1$ almost surely, where the matrix $A_{k+1}^m$ is defined by
\[
(A_{k+1}^m)_{ij} := (U(\Delta_k) \varphi_j^m, \varphi_i^m) \tag{6.19}
\]
for all $i, j = 1, \ldots, N_m$. The discretized evolution equation (6.18) is used in the evolution updating step of the Bayesian filtering. Thereby we need to determine the statistical quantities of the discrete stochastic processes $\{E_{k+1}^m\}_{k=0}^{n-1}$ and $\{W_{k+1}^m\}_{k=0}^{n-1}$. The state noise $W_{k+1}$ is a Gaussian random variable with mean $0$ and covariance given by Formula (6.15) for all $k = 0, \ldots, n-1$. Hence the state noise vector $W_{k+1}^m$ is Gaussian with mean $0$ and covariance matrix

\[
\begin{aligned}
(\mathrm{Cov}(W_{k+1}^m))_{ij} &= \Big( \int_{t_k}^{t_{k+1}} U(t_{k+1}-s)\, Q\, U(t_{k+1}-s)^*\, ds\; \varphi_i^m, \varphi_j^m \Big) \\
&= \int_{t_k}^{t_{k+1}} \big( U(t_{k+1}-s)\, Q\, U(t_{k+1}-s)^* \varphi_i^m, \varphi_j^m \big)\, ds
\end{aligned}
\]
for all $i, j = 1, \ldots, N_m$ and $k = 0, \ldots, n-1$. We define the matrix $Q_{k,l}^m(s)$ by
\[
(Q_{k,l}^m(s))_{ij} := \big( U(t_k - s)\, Q\, U(t_l - s)^* \varphi_i^m, \varphi_j^m \big)
\]
for all $i, j = 1, \ldots, N_m$, $k, l = 0, \ldots, n-1$ and $s \in [0, t_k \wedge t_l]$. Then
\[
\mathrm{Cov}(W_{k+1}^m) = \int_{t_k}^{t_{k+1}} Q_{k+1,k+1}^m(s)\, ds \tag{6.20}
\]
for all $k = 0, \ldots, n-1$. Since the state noises $W_k$ and $W_l$ are uncorrelated for all $k \neq l$, by Gaussianity the state noise vectors $W_k^m$ and $W_l^m$ are independent if $k \neq l$.
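For intuition, both the evolution matrix (6.19) and the state noise covariance (6.20) take closed forms when $V_m$ is spanned by eigenfunctions of the generator. The sketch below assumes the one-dimensional heat semigroup on $(0,1)$ with Dirichlet conditions, $U(t)\varphi_l = e^{-(l\pi)^2 t}\varphi_l$, and a $Q$ diagonal on the same basis; these choices are illustrative, not the general EIT setting:

```python
import math

# Assumed diagonal model: lam_l = (l pi)^2 are the eigenvalues of the generator
# and q_l the eigenvalues of Q on the shared eigenbasis {phi_l}.
def lam(l):
    return (l * math.pi) ** 2

def q(l):
    return 1.0  # "white in space" illustration

def evolution_matrix(Nm, dt):
    # (6.19): (A)_{ij} = (U(dt) phi_j, phi_i) = exp(-lam_j dt) delta_ij here
    A = [[0.0] * Nm for _ in range(Nm)]
    for j in range(Nm):
        A[j][j] = math.exp(-lam(j + 1) * dt)
    return A

def cov_w_diag(l, dt):
    # closed form of the (l,l) entry of (6.20):
    # int_{t_k}^{t_{k+1}} q_l exp(-2 lam_l (t_{k+1} - s)) ds
    return q(l) * (1.0 - math.exp(-2.0 * lam(l) * dt)) / (2.0 * lam(l))

def cov_w_diag_quad(l, dt, n=2000):
    # midpoint quadrature of the same integral, as a cross-check
    h = dt / n
    return h * sum(q(l) * math.exp(-2.0 * lam(l) * (dt - (j + 0.5) * h))
                   for j in range(n))

A = evolution_matrix(3, 0.01)
```

In a non-eigenbasis (e.g. piecewise polynomials), $A^m_{k+1}$ and $\mathrm{Cov}(W^m_{k+1})$ are dense, and the entries must be computed by quadrature as in `cov_w_diag_quad`.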

We use our knowledge of the stochastic behaviour of the continuous evolution equation (6.13) for the examination of the discretization error $E_{k+1}^m$ for all $k = 0, \ldots, n-1$.

The concentration distribution $C_k$ has a Gaussian modification with mean $U(t_k) c_0$ and covariance given by Formula (6.14) with $t = t_k$ for all $k = 0, \ldots, n-1$. Hence the discretization error $E_{k+1}^m$ has a Gaussian version for all $k = 0, \ldots, n-1$. The mean of the Gaussian version is given by
\[
\begin{aligned}
(\mathbb{E}E_{k+1}^m)_i &= \mathbb{E}(C_k, (I - P_m) U(\Delta_k)^* \varphi_i^m) = (\mathbb{E}C_k, (I - P_m) U(\Delta_k)^* \varphi_i^m) \\
&= (U(t_k) c_0, (I - P_m) U(\Delta_k)^* \varphi_i^m) = (c_0, U(t_k)^* (I - P_m) U(\Delta_k)^* \varphi_i^m)
\end{aligned}
\]
for all $i = 1, \ldots, N_m$ and $k = 0, \ldots, n-1$. Thus
\[
(\mathbb{E}E_{k+1}^m)_i = (c_0, U(t_{k+1})^* \varphi_i^m) - \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_i^m)(c_0, U(t_k)^* \varphi_l^m)
\]
for all $i = 1, \ldots, N_m$ because

\[
\begin{aligned}
U(t)^* (I - P_m) U(s)^* f &= U(t+s)^* f - \sum_{l=1}^{N_m} (U(s)^* f, \varphi_l^m)\, U(t)^* \varphi_l^m \\
&= U(t+s)^* f - \sum_{l=1}^{N_m} (U(s) \varphi_l^m, f)\, U(t)^* \varphi_l^m
\end{aligned}
\]
for all $f \in L^2(D)$ and $s, t \in [0, T]$. Hence

\[
\mathbb{E}E_{k+1}^m = \begin{pmatrix} (U(t_{k+1}) c_0, \varphi_1^m) \\ \vdots \\ (U(t_{k+1}) c_0, \varphi_{N_m}^m) \end{pmatrix} - A_{k+1}^m \begin{pmatrix} (U(t_k) c_0, \varphi_1^m) \\ \vdots \\ (U(t_k) c_0, \varphi_{N_m}^m) \end{pmatrix} \tag{6.21}
\]
for all $k = 0, \ldots, n-1$. The covariance matrix of the Gaussian version is given by
\[
\begin{aligned}
(\mathrm{Cov}\,E_{k+1}^m)_{ij} &= \big( \mathrm{Cov}(C_k)(I - P_m) U(\Delta_k)^* \varphi_i^m, (I - P_m) U(\Delta_k)^* \varphi_j^m \big) \\
&= \big( U(t_k) \Gamma_0 U(t_k)^* (I - P_m) U(\Delta_k)^* \varphi_i^m, (I - P_m) U(\Delta_k)^* \varphi_j^m \big) \\
&\quad + \Big( \int_0^{t_k} U(t_k - s)\, Q\, U(t_k - s)^*\, ds\, (I - P_m) U(\Delta_k)^* \varphi_i^m, (I - P_m) U(\Delta_k)^* \varphi_j^m \Big) \\
&= \big( \Gamma_0 U(t_k)^* (I - P_m) U(\Delta_k)^* \varphi_i^m, U(t_k)^* (I - P_m) U(\Delta_k)^* \varphi_j^m \big) \\
&\quad + \int_0^{t_k} \big( Q\, U(t_k - s)^* (I - P_m) U(\Delta_k)^* \varphi_i^m, U(t_k - s)^* (I - P_m) U(\Delta_k)^* \varphi_j^m \big)\, ds
\end{aligned}
\]
for all $i, j = 1, \ldots, N_m$ and $k = 0, \ldots, n-1$. Since $\Gamma_0$ and $Q$ are self-adjoint, being covariance operators of Gaussian random variables,

\[
\begin{aligned}
(\mathrm{Cov}\,E_{k+1}^m)_{ij}
&= (\Gamma_0 U(t_{k+1})^* \varphi_i^m, U(t_{k+1})^* \varphi_j^m)
- \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_j^m)(\Gamma_0 U(t_k)^* \varphi_l^m, U(t_{k+1})^* \varphi_i^m) \\
&\quad - \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_i^m)(\Gamma_0 U(t_k)^* \varphi_l^m, U(t_{k+1})^* \varphi_j^m) \\
&\quad + \sum_{l,p=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_i^m)(U(\Delta_k) \varphi_p^m, \varphi_j^m)(\Gamma_0 U(t_k)^* \varphi_l^m, U(t_k)^* \varphi_p^m) \\
&\quad + \int_0^{t_k} \big( Q\, U(t_{k+1}-s)^* \varphi_i^m, U(t_{k+1}-s)^* \varphi_j^m \big)\, ds \\
&\quad - \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_j^m) \int_0^{t_k} \big( Q\, U(t_k-s)^* \varphi_l^m, U(t_{k+1}-s)^* \varphi_i^m \big)\, ds \\
&\quad - \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_i^m) \int_0^{t_k} \big( Q\, U(t_k-s)^* \varphi_l^m, U(t_{k+1}-s)^* \varphi_j^m \big)\, ds \\
&\quad + \sum_{l,p=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_i^m)(U(\Delta_k) \varphi_p^m, \varphi_j^m) \int_0^{t_k} \big( Q\, U(t_k-s)^* \varphi_l^m, U(t_k-s)^* \varphi_p^m \big)\, ds
\end{aligned}
\]
for all $i, j = 1, \ldots, N_m$ and $k = 0, \ldots, n-1$. We define the matrix $\Gamma_{0,l}^{m,k}$ by
\[
(\Gamma_{0,l}^{m,k})_{ij} := (U(t_k) \Gamma_0 U(t_l)^* \varphi_i^m, \varphi_j^m)
\]
for all $i, j = 1, \ldots, N_m$ and $k, l = 0, \ldots, n-1$. Then

\[
\begin{aligned}
\mathrm{Cov}\,E_{k+1}^m &= \Gamma_{0,k+1}^{m,k+1} - (A_{k+1}^m \Gamma_{0,k}^{m,k+1})^T - A_{k+1}^m \Gamma_{0,k}^{m,k+1} + A_{k+1}^m \Gamma_{0,k}^{m,k} (A_{k+1}^m)^T \\
&\quad + \int_0^{t_k} Q_{k+1,k+1}^m(s)\, ds - \int_0^{t_k} (A_{k+1}^m Q_{k+1,k}^m(s))^T\, ds \\
&\quad - \int_0^{t_k} A_{k+1}^m Q_{k+1,k}^m(s)\, ds + \int_0^{t_k} A_{k+1}^m Q_{k,k}^m(s) (A_{k+1}^m)^T\, ds
\end{aligned}
\tag{6.22}
\]
for all $k = 0, \ldots, n-1$, where the integration is done componentwise.
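The structure of the mean formula (6.21) can be checked in a small finite-dimensional surrogate: take the "full" space to be $\mathbb{R}^3$ with the standard basis, $V_m = \mathrm{span}(e_1, e_2)$, and $U(t) = e^{tG}$ for a made-up generator $G$ (all concrete numbers below are illustrative assumptions). The direct mean error $(\mathbb{E}C_k, (I - P_m)U(\Delta_k)^* e_i)$ then agrees with the right-hand side of (6.21):

```python
# Finite-dimensional check of (6.21); pure-Python matrix helpers, made-up data.
def mat_vec(M, v):
    return [sum(M[i][c] * v[c] for c in range(len(v))) for i in range(len(M))]

def mat_mul(M, N):
    n = len(M)
    return [[sum(M[i][c] * N[c][j] for c in range(n)) for j in range(n)] for i in range(n)]

def expm(G, t, terms=30):
    # Taylor series for exp(tG); adequate for small matrices and moderate t
    n = len(G)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    T = [row[:] for row in R]
    for k in range(1, terms):
        T = mat_mul(T, [[t * G[i][j] / k for j in range(n)] for i in range(n)])
        R = [[R[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return R

G = [[-1.0, 0.3, 0.1], [0.2, -2.0, 0.4], [0.1, 0.2, -3.0]]   # assumed generator
c0 = [1.0, 0.5, -0.2]                                         # assumed initial mean
tk, dt = 0.4, 0.1
Nm = 2

U_tk = expm(G, tk)
U_dt = expm(G, dt)
x = mat_vec(U_tk, c0)                     # E C_k = U(t_k) c0

# direct mean error: (E E^m_{k+1})_i = (E C_k, (I - P_m) U(dt)^* e_i) = x_3 U(dt)_{i3}
direct = [x[2] * U_dt[i][2] for i in range(Nm)]

# formula (6.21): (U(t_{k+1}) c0)_i - sum_l (A^m_{k+1})_{il} (U(t_k) c0)_l
x_next = mat_vec(expm(G, tk + dt), c0)
formula = [x_next[i] - sum(U_dt[i][l] * x[l] for l in range(Nm)) for i in range(Nm)]
```

In this surrogate the error is nonzero because $G$ does not leave $V_m$ invariant; when $V_m$ is spanned by eigenfunctions of the generator, the semigroup preserves $V_m$ and (6.21) vanishes.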

Since $C_k$ and $W_{k+1}$ are independent, $C_k^m$ and $W_{k+1}^m$ as well as $E_{k+1}^m$ and $W_{k+1}^m$ are mutually independent for all $k = 0, \ldots, n-1$. On the other hand, $C_k^m$ and $E_{k+1}^m$ are correlated for all $k = 0, \ldots, n-1$. The correlation matrix of $C_k^m$ and $E_{k+1}^m$ can be calculated by using the continuous evolution equation (6.13) for all $k = 0, \ldots, n-1$.


Then
\[
\begin{aligned}
(\mathrm{Cor}(C_k^m, E_{k+1}^m))_{ij} &= \big( \mathrm{Cov}(C_k) \varphi_i^m, (I - P_m) U(\Delta_k)^* \varphi_j^m \big) \\
&= \big( U(t_k) \Gamma_0 U(t_k)^* \varphi_i^m, (I - P_m) U(\Delta_k)^* \varphi_j^m \big) \\
&\quad + \Big( \int_0^{t_k} U(t_k - s)\, Q\, U(t_k - s)^*\, ds\, \varphi_i^m, (I - P_m) U(\Delta_k)^* \varphi_j^m \Big) \\
&= \big( \Gamma_0 U(t_k)^* \varphi_i^m, U(t_k)^* (I - P_m) U(\Delta_k)^* \varphi_j^m \big) \\
&\quad + \int_0^{t_k} \big( Q\, U(t_k - s)^* \varphi_i^m, U(t_k - s)^* (I - P_m) U(\Delta_k)^* \varphi_j^m \big)\, ds
\end{aligned}
\]
for all $i, j = 1, \ldots, N_m$ and $k = 0, \ldots, n-1$. Thus

\[
\begin{aligned}
(\mathrm{Cor}(C_k^m, E_{k+1}^m))_{ij} &= (\Gamma_0 U(t_k)^* \varphi_i^m, U(t_{k+1})^* \varphi_j^m)
- \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_j^m)(\Gamma_0 U(t_k)^* \varphi_l^m, U(t_k)^* \varphi_i^m) \\
&\quad + \int_0^{t_k} \big( Q\, U(t_k - s)^* \varphi_i^m, U(t_{k+1} - s)^* \varphi_j^m \big)\, ds \\
&\quad - \sum_{l=1}^{N_m} (U(\Delta_k) \varphi_l^m, \varphi_j^m) \int_0^{t_k} \big( Q\, U(t_k - s)^* \varphi_l^m, U(t_k - s)^* \varphi_i^m \big)\, ds
\end{aligned}
\]
for all $i, j = 1, \ldots, N_m$ and hence

\[
\mathrm{Cor}(C_k^m, E_{k+1}^m) = \Gamma_{0,k}^{m,k+1} - (A_{k+1}^m \Gamma_{0,k}^{m,k})^T + \int_0^{t_k} \big( Q_{k+1,k}^m(s) - (A_{k+1}^m Q_{k,k}^m(s))^T \big)\, ds \tag{6.23}
\]
for all $k = 0, \ldots, n-1$.

According to the discretized evolution equation (6.18) the random variable $C_{k+1}^m$ has a Gaussian version. The mean of the Gaussian version is
\[
\mathbb{E}C_{k+1}^m = \mathbb{E}\big( A_{k+1}^m C_k^m + E_{k+1}^m + W_{k+1}^m \big) = A_{k+1}^m \mathbb{E}C_k^m + \mathbb{E}E_{k+1}^m \tag{6.24}
\]
and the covariance matrix is
\[
\begin{aligned}
\mathrm{Cov}\,C_{k+1}^m &= \mathrm{Cov}\big( A_{k+1}^m C_k^m + E_{k+1}^m + W_{k+1}^m \big) \\
&= \mathrm{Cov}(A_{k+1}^m C_k^m) + \mathrm{Cor}(A_{k+1}^m C_k^m, E_{k+1}^m) + \mathrm{Cor}(E_{k+1}^m, A_{k+1}^m C_k^m) + \mathrm{Cov}\,E_{k+1}^m + \mathrm{Cov}\,W_{k+1}^m \\
&= A_{k+1}^m \mathrm{Cov}(C_k^m)(A_{k+1}^m)^T + A_{k+1}^m \mathrm{Cor}(C_k^m, E_{k+1}^m) + \mathrm{Cor}(C_k^m, E_{k+1}^m)^T (A_{k+1}^m)^T \\
&\quad + \mathrm{Cov}\,E_{k+1}^m + \mathrm{Cov}\,W_{k+1}^m
\end{aligned}
\tag{6.25}
\]
for all $k = 0, \ldots, n-1$.

Observation Equation

Since both the operator $U$ and the mapping $g$ are non-linear, the space discretization of the observation equation (6.17) is far more difficult than that of the evolution equation (6.16), especially when we are interested in the discretization error. We assume that the function $g$ is Fréchet differentiable. Then we can linearize the observation equation (6.17). For all $k = 1, \ldots, n$
\[
\begin{aligned}
V_k &= U(g(C_k); I_k) + S_k \\
&\approx U(g(f); I_k) + U'(g(f); I_k)\, g'(f)(C_k - f) + S_k \\
&= U(g(f); I_k) - U'(g(f); I_k)\, g'(f) f + U'(g(f); I_k)\, g'(f) C_k + S_k
\end{aligned}
\]
where $f \in L^2(D)$. The point $f$ at which the linearization is done should be chosen wisely; it may be, for example, the mean of the initial value. The linearization induces an error; however, in the following we ignore the linearization error. We denote $b_k := U(g(f); I_k) - U'(g(f); I_k)\, g'(f) f$ and $B_k := U'(g(f); I_k)\, g'(f)$ for all $k = 1, \ldots, n$. Then $b_k \in \mathbb{R}^L$ and $B_k : L^2(D) \to \mathbb{R}^L$ is a bounded linear operator for all $k = 1, \ldots, n$. The linearized observation equation is
\[
V_k = B_k C_k + b_k + S_k \tag{6.26}
\]
for all $k = 1, \ldots, n$. Then the discretized observation equation is
\[
V_k = B_k P_m C_k + B_k (I - P_m) C_k + b_k + S_k = [B_k \varphi] C_k^m + \mathcal{E}_k^m + b_k + S_k \tag{6.27}
\]
for all $k = 1, \ldots, n$, where $[B_k \varphi] := [B_k \varphi_1^m \cdots B_k \varphi_{N_m}^m]$ is the $L \times N_m$ matrix whose $l$th column is $B_k \varphi_l^m$ for all $l = 1, \ldots, N_m$, and $\mathcal{E}_k^m := B_k (I - P_m) C_k$ represents the discretization error (written with a calligraphic letter to distinguish it from the evolution error $E_{k+1}^m$). The discretized observation equation (6.27) is used in the observation updating step of the Bayesian filtering. We need to determine the statistical quantities of the processes $\{\mathcal{E}_k^m\}_{k=1}^n$ and $\{V_k\}_{k=1}^n$.
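In practice the matrix $[B_k \varphi]$ is assembled column by column, by applying the linearized observation map to each basis function. The sketch below uses an illustrative stand-in for $B_k$: $L = 2$ integral functionals with made-up weight functions; in the real problem $B_k = U'(g(f); I_k)\, g'(f)$ comes from the linearized complete electrode model and must be evaluated by a forward solver:

```python
import math

# Assumed toy observation operator: (B f)_i = int_0^1 w_i(x) f(x) dx for two
# made-up sensitivity profiles w_i; basis phi_l(x) = sqrt(2) sin(l pi x) on (0,1).
def phi(l, x):
    return math.sqrt(2.0) * math.sin(l * math.pi * x)

weights = [lambda x: 1.0, lambda x: x]  # assumed sensitivity profiles

def B_apply(f, n=1000):
    # midpoint quadrature of (B f)_i = int_0^1 w_i(x) f(x) dx
    h = 1.0 / n
    xs = [(j + 0.5) * h for j in range(n)]
    return [h * sum(w(x) * f(x) for x in xs) for w in weights]

def B_matrix(Nm):
    # column l of [B_k phi] is B applied to phi_l, as in (6.27)
    cols = [B_apply(lambda x, l=l: phi(l, x)) for l in range(1, Nm + 1)]
    return [[cols[l][i] for l in range(Nm)] for i in range(len(weights))]

B = B_matrix(3)
```

The first column, for instance, reproduces $\int_0^1 \sqrt{2}\sin(\pi x)\,dx = 2\sqrt{2}/\pi$ and $\int_0^1 x\sqrt{2}\sin(\pi x)\,dx = \sqrt{2}/\pi$ up to quadrature error.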

We use our knowledge of the stochastic behaviour of the continuous evolution equation (6.13). We assume that the process $S(t)$, $t \geq 0$, is a Gaussian process independent of the process $C(t)$, $t \geq 0$. Then $S_k$ is independent of $C_k^m$ and $\mathcal{E}_k^m$ for all $k = 1, \ldots, n$. On the other hand, $C_k^m$ and $\mathcal{E}_k^m$ are correlated for all $k = 1, \ldots, n$. The concentration distribution $C_k$ has a Gaussian modification with mean $U(t_k) c_0$ and covariance given by Formula (6.14) with $t = t_k$ for all $k = 1, \ldots, n$. Hence the discretization error $\mathcal{E}_k^m$ has a Gaussian version for all $k = 1, \ldots, n$. The mean of the Gaussian version is
\[
\mathbb{E}\mathcal{E}_k^m = B_k (I - P_m) U(t_k) c_0
\]
and the covariance matrix is

\[
\begin{aligned}
\mathrm{Cov}\,\mathcal{E}_k^m &= B_k (I - P_m) U(t_k) \Gamma_0 U(t_k)^* (I - P_m) B_k^* \\
&\quad + \int_0^{t_k} B_k (I - P_m) U(t_k - s)\, Q\, U(t_k - s)^* (I - P_m) B_k^*\, ds
\end{aligned}
\]

for all $k = 1, \ldots, n$. The correlation matrix of $C_k^m$ and $\mathcal{E}_k^m$ can be calculated by using the continuous evolution equation (6.13) for all $k = 1, \ldots, n$. First of all,
\[
\begin{aligned}
\mathrm{Cor}([B_k \varphi] C_k^m, \mathcal{E}_k^m) &= \mathrm{Cor}(B_k P_m C_k, B_k (I - P_m) C_k) = B_k P_m\, \mathrm{Cov}(C_k)(I - P_m) B_k^* \\
&= B_k P_m U(t_k) \Gamma_0 U(t_k)^* (I - P_m) B_k^* \\
&\quad + \int_0^{t_k} B_k P_m U(t_k - s)\, Q\, U(t_k - s)^* (I - P_m) B_k^*\, ds
\end{aligned}
\]


for all $k = 1, \ldots, n$. If the matrix $[B_k \varphi]$ is invertible (which requires $L = N_m$),
\[
\mathrm{Cor}(C_k^m, \mathcal{E}_k^m) = [B_k \varphi]^{-1}\, \mathrm{Cor}([B_k \varphi] C_k^m, \mathcal{E}_k^m)
\]
for all $k = 1, \ldots, n$.

In the observation updating step of the Bayesian filtering we need the joint probability distribution of $C_k^m$ and $V_k$ for all $k = 1, \ldots, n$. By the continuous evolution equation (6.13) the random variable $C_k^m$ has a Gaussian version. According to the discretized observation equation (6.27) the random variable $V_k$ has a Gaussian version and the joint probability distribution of $C_k^m$ and $V_k$ is Gaussian. In addition, the mean of the Gaussian version of $V_k$ is
\[
\mathbb{E}V_k = \mathbb{E}\big( [B_k \varphi] C_k^m + \mathcal{E}_k^m + b_k + S_k \big) = [B_k \varphi]\, \mathbb{E}C_k^m + \mathbb{E}\mathcal{E}_k^m + b_k + \mathbb{E}S_k \tag{6.28}
\]
and the covariance matrix is
\[
\begin{aligned}
\mathrm{Cov}\,V_k &= \mathrm{Cov}\big( [B_k \varphi] C_k^m + \mathcal{E}_k^m + b_k + S_k \big) \\
&= \mathrm{Cov}([B_k \varphi] C_k^m) + \mathrm{Cor}([B_k \varphi] C_k^m, \mathcal{E}_k^m) + \mathrm{Cor}(\mathcal{E}_k^m, [B_k \varphi] C_k^m) + \mathrm{Cov}(\mathcal{E}_k^m) + \mathrm{Cov}(S_k) \\
&= [B_k \varphi]\, \mathrm{Cov}(C_k^m) [B_k \varphi]^T + \mathrm{Cor}([B_k \varphi] C_k^m, \mathcal{E}_k^m) + \mathrm{Cor}([B_k \varphi] C_k^m, \mathcal{E}_k^m)^T \\
&\quad + \mathrm{Cov}(\mathcal{E}_k^m) + \mathrm{Cov}(S_k)
\end{aligned}
\tag{6.29}
\]
for all $k = 1, \ldots, n$. The correlation matrix of $C_k^m$ and $V_k$ is
\[
\begin{aligned}
\mathrm{Cor}(C_k^m, V_k) &= \mathrm{Cor}\big( C_k^m, [B_k \varphi] C_k^m + \mathcal{E}_k^m + b_k + S_k \big) = \mathrm{Cor}(C_k^m, [B_k \varphi] C_k^m) + \mathrm{Cor}(C_k^m, \mathcal{E}_k^m) \\
&= \mathrm{Cov}(C_k^m) [B_k \varphi]^T + [B_k \varphi]^{-1}\, \mathrm{Cor}([B_k \varphi] C_k^m, \mathcal{E}_k^m)
\end{aligned}
\tag{6.30}
\]
for all $k = 1, \ldots, n$.

Bayesian Filtering

The discretized state estimation system for the electrical impedance process tomography problem is
\[
C_{k+1}^m = A_{k+1}^m C_k^m + E_{k+1}^m + W_{k+1}^m, \qquad k = 0, \ldots, n-1, \tag{6.31}
\]
\[
V_k = [B_k \varphi] C_k^m + \mathcal{E}_k^m + b_k + S_k, \qquad k = 1, \ldots, n. \tag{6.32}
\]
The state noise vectors $W_k^m$ and $W_l^m$ are mutually independent and also independent of $C_0^m$ for all $k \neq l$. We assume that the observation noise vectors $S_k$ are chosen in such a way that $S_k$ and $S_l$ are mutually independent and also independent of $C_0^m$ for all $k \neq l$, and that $S_k$ and $W_l^m$ are mutually independent for all $k, l = 1, \ldots, n$. Then the stochastic processes $\{C_k^m\}_{k=0}^n$ and $\{V_k\}_{k=1}^n$ form an evolution–observation model. Therefore we may use the Bayesian filtering method.

In the evolution updating step of the Bayesian filtering it is assumed that we know the conditional probability density of $C_k^m$ with respect to some measurements $D_k := \{v_1, v_2, \ldots, v_k\}$. We need to calculate the conditional probability density of $C_{k+1}^m$ with respect to the data $D_k$. We suppose that the conditional distribution of $C_k^m$ given $D_k$ is Gaussian with mean $\bar{c}_k$ and covariance matrix $\Gamma_k$. According to the discretized evolution equation (6.31) we are able to present the joint distribution of $C_k^m$ and $C_{k+1}^m$ conditioned on the measurements $D_k$ and know that it is Gaussian. By Theorems 6.3 and 6.4 and Formulas (6.24) and (6.25), the Gaussianity of the joint probability density implies that the conditional marginal probability density of $C_{k+1}^m$ is Gaussian with mean
\[
\bar{c}_{k+1} := A_{k+1}^m \bar{c}_k + \mathbb{E}E_{k+1}^m \tag{6.33}
\]
and covariance matrix
\[
\Gamma_{k+1} := A_{k+1}^m \Gamma_k (A_{k+1}^m)^T + A_{k+1}^m \mathrm{Cor}(C_k^m, E_{k+1}^m) + \mathrm{Cor}(C_k^m, E_{k+1}^m)^T (A_{k+1}^m)^T + \mathrm{Cov}\,E_{k+1}^m + \mathrm{Cov}\,W_{k+1}^m. \tag{6.34}
\]
Thus the evolution updating step is defined if we are able to calculate the vector $\mathbb{E}E_{k+1}^m$ and the matrices $A_{k+1}^m$, $\mathrm{Cov}\,E_{k+1}^m$, $\mathrm{Cov}\,W_{k+1}^m$ and $\mathrm{Cor}(C_k^m, E_{k+1}^m)$ given by Formulas (6.19)–(6.23) for all $k = 0, \ldots, n-1$.
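In matrix arithmetic the evolution update (6.33)–(6.34) is a Kalman-type prediction with two extra cross terms coming from the discretization error. A minimal sketch with small made-up matrices (`CorCE` stands for $\mathrm{Cor}(C_k^m, E_{k+1}^m)$; all numbers are illustrative):

```python
# Minimal sketch of (6.33)-(6.34) with 2x2 made-up inputs; pure-Python helpers.
def mat_mul(M, N):
    return [[sum(M[i][c] * N[c][j] for c in range(len(N))) for j in range(len(N[0]))]
            for i in range(len(M))]

def mat_add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def predict(A, c_bar, Gamma, EE, CorCE, CovE, CovW):
    # (6.33): propagated mean with the discretization-error mean EE
    c_next = [sum(A[i][j] * c_bar[j] for j in range(len(c_bar))) + EE[i]
              for i in range(len(A))]
    # (6.34): A Gamma A^T + A CorCE + (A CorCE)^T + Cov E + Cov W
    G_next = mat_add(mat_mul(mat_mul(A, Gamma), transpose(A)),
                     mat_mul(A, CorCE),
                     transpose(mat_mul(A, CorCE)),
                     CovE, CovW)
    return c_next, G_next

# made-up inputs for a 2-dimensional state
A = [[0.9, 0.0], [0.0, 0.8]]
c_bar = [1.0, 2.0]
Gamma = [[1.0, 0.1], [0.1, 2.0]]
EE = [0.01, -0.02]
CorCE = [[0.01, 0.0], [0.0, 0.02]]
CovE = [[0.005, 0.0], [0.0, 0.006]]
CovW = [[0.1, 0.0], [0.0, 0.1]]

c_next, G_next = predict(A, c_bar, Gamma, EE, CorCE, CovE, CovW)
```

Note that $\mathrm{Cor}(C_k^m, E_{k+1}^m)^T (A_{k+1}^m)^T = (A_{k+1}^m \mathrm{Cor}(C_k^m, E_{k+1}^m))^T$, so the two cross terms together keep $\Gamma_{k+1}$ symmetric.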

In the observation updating step of the Bayesian filtering it is assumed that we know the conditional probability density of $C_{k+1}^m$ with respect to the measured data $D_k := \{v_1, v_2, \ldots, v_k\}$. A new measurement $v_{k+1}$ is obtained. We need to calculate the conditional probability density of $C_{k+1}^m$ with respect to the measurements $D_{k+1} := \{v_1, v_2, \ldots, v_{k+1}\}$. We suppose that the conditional distribution of $C_{k+1}^m$ given $D_k$ is Gaussian with mean $\bar{c}_{k+1}$ and covariance matrix $\Gamma_{k+1}$. By Theorems 6.3 and 6.4 the conditional probability density of $C_{k+1}^m$ with respect to the data $D_{k+1}$ is Gaussian with mean
\[
\tilde{c}_{k+1} := \bar{c}_{k+1} + \mathrm{Cor}(C_{k+1}^m, V_{k+1})\, \mathrm{Cov}(V_{k+1})^{-1} (v_{k+1} - \mathbb{E}V_{k+1}) \tag{6.35}
\]
and covariance matrix
\[
\widetilde{\Gamma}_{k+1} := \Gamma_{k+1} - \mathrm{Cor}(C_{k+1}^m, V_{k+1})\, \mathrm{Cov}(V_{k+1})^{-1}\, \mathrm{Cor}(C_{k+1}^m, V_{k+1})^T. \tag{6.36}
\]
Thus the observation updating step is defined if we are able to calculate the vector $\mathbb{E}V_{k+1}$ and the matrices $\mathrm{Cov}(V_{k+1})$ and $\mathrm{Cor}(C_{k+1}^m, V_{k+1})$ given by Formulas (6.28)–(6.30) for all $k = 0, \ldots, n-1$.
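The observation update (6.35)–(6.36) has the same shape as the Kalman filter correction. A sketch with one scalar measurement channel and a two-dimensional state (`CorCV` stands for $\mathrm{Cor}(C_{k+1}^m, V_{k+1})$; all numbers are made up):

```python
# Sketch of (6.35)-(6.36): scalar measurement, 2-dimensional state.
def observation_update(c_bar, Gamma, CorCV, CovV, EV, v):
    # gain K = Cor(C, V) Cov(V)^{-1}  (Cov(V) is a scalar here)
    K = [g / CovV for g in CorCV]
    # (6.35): mean corrected by the gain times the innovation v - E V
    c_tilde = [c_bar[i] + K[i] * (v - EV) for i in range(len(c_bar))]
    # (6.36): covariance reduced by K Cor(C, V)^T
    G_tilde = [[Gamma[i][j] - K[i] * CorCV[j] for j in range(len(c_bar))]
               for i in range(len(c_bar))]
    return c_tilde, G_tilde

c_bar = [1.0, 0.5]
Gamma = [[1.0, 0.0], [0.0, 1.0]]
CorCV = [0.5, 0.2]        # column Cor(C, V)
CovV, EV, v = 2.0, 1.2, 1.5

c_tilde, G_tilde = observation_update(c_bar, Gamma, CorCV, CovV, EV, v)
```

With vector-valued observations the scalar division becomes a linear solve against $\mathrm{Cov}(V_{k+1})$; the subtraction in (6.36) shows the familiar reduction of the prior covariance by the information in the measurement.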

The evaluation of the matrices needed in the Bayesian filtering method depends on the discretization space $V_m$, the analytic semigroup $\{U(t)\}_{t \geq 0}$, the function $c_0$, the operators $\Gamma_0$, $Q$ and $B_k$, the vector $b_k$ and the statistics of the observation noise $S$ for all $k = 1, \ldots, n$. Usually the discretization space $V_m$ is chosen in such a way that the projection $P_m$ is fairly easy to calculate; for example, the basis functions $\varphi_l^m$ may have compact supports and be piecewise polynomial. The function $c_0$ and the operator $\Gamma_0$ represent our prior knowledge of the concentration distribution. The mean $c_0$ should describe the expected concentration distribution in the pipe and hence depends heavily on the application. Since diffusion is a smoothing operation, we may assume that the initial state is rather smooth; thus the covariance operator $\Gamma_0$ should have some smoothness properties. Our certainty about the time evolution model is encoded in the Wiener process and hence in the operator $Q$; the choice of $Q$ depends on the application. The crucial factor in the evaluation of the matrices is the analytic semigroup $\{U(t)\}_{t \geq 0}$. Since it is defined by Formula (2.3), only in some special cases can we present the analytic semigroup in a closed form. In Subsection 6.3.5 we study the one-dimensional version of the problem; there the analytic semigroup is a convolution operator. The operator $B_k$ and the vector $b_k$ for all $k = 1, \ldots, n$ are related to the measurement situation. We need to be able to solve the complete electrode model for a known concentration distribution and also to calculate the Fréchet derivatives of the mappings $U$ and $g$ at that concentration distribution. The function $g$ depends on the application; at least for strong electrolytes and multiphase mixtures, relations between the conductivity and the concentration distribution have been studied and discussed in the literature. The observation noise $S$ represents the accuracy of the measurement equipment.
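The one-dimensional remark can be made concrete: on the whole real line the heat semigroup acts by convolution with the Gaussian kernel $(4\pi t)^{-1/2} e^{-x^2/(4t)}$. The sketch below uses this free-space setting as an illustrative stand-in for the bounded pipe geometry and checks the quadrature of the convolution against the closed form for a Gaussian initial state:

```python
import math

# Free-space heat semigroup: (U(t) f)(x) = int k(x - y, t) f(y) dy with the
# Gaussian kernel k; the initial state f below is an illustrative assumption.
def heat_kernel(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def f(x):
    return math.exp(-x * x / 2.0)

def U_f(x, t, n=4000, half_width=10.0):
    # midpoint quadrature of the convolution integral over [-half_width, half_width]
    h = 2.0 * half_width / n
    return h * sum(heat_kernel(x - ((j + 0.5) * h - half_width), t)
                   * f((j + 0.5) * h - half_width)
                   for j in range(n))

def U_f_exact(x, t):
    # Gaussian convolved with Gaussian: variance 1 becomes 1 + 2t
    s2 = 1.0 + 2.0 * t
    return math.exp(-x * x / (2.0 * s2)) / math.sqrt(s2)
```

This is exactly the structure exploited in a one-dimensional implementation: applying $U(t)$ reduces to a single convolution, so the matrices of the previous paragraphs can be filled by quadrature against the kernel.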