
Photoacoustic imaging is an excellent modality for imaging blood vessels due to its high sensitivity to haemoglobin at certain wavelengths [26, 27]. PAI can detect both tiny and large vessels, so it can provide comprehensive imaging of the vasculature. In addition, the structure of vessels can be visualised using intravascular PA imaging catheters [130–133]. Since increased vascularity, high total blood concentrations and abnormally low haemoglobin oxygen saturation indicate the presence of tumours, PAI can potentially be used for tumour imaging [134].

PAI is well suited for molecular imaging, which measures chemical and biological processes at the molecular and cellular level [18]. Molecular imaging is based on exogenous chromophores, which either change their optical properties in response to changes at the molecular level, or have specific spectral signatures and target specific molecules.

In addition to structural and molecular imaging, PAI can also provide functional information. One important functional parameter that PAI is able to measure is blood oxygen saturation [135]. Since blood oxygen saturation is determined by the concentrations of oxyhaemoglobin and deoxyhaemoglobin, PAI is able to produce an image of blood oxygen saturation. Another functional capability of PAI is the measurement of blood flow velocity [136]. In addition, PAI can provide functional information in the form of temperature [137–139]. The Grüneisen coefficient changes with temperature, and thus PAI can provide images of temperature distributions in tissue [139].

Another major application of PAI is small animal imaging, which enables studying human disease processes, monitoring treatments and developing new treatments [5]. The small size of the animal means that penetration depth requirements are generally modest and signals can usually be recorded around the whole body of the animal. Thus, high-resolution images can be obtained even from deeper anatomical structures. In small animals, for example, viscera [65, 68, 140–143], heart [144, 145], skin [74, 146, 147], and brain [67, 148–150] have been studied.

Breast imaging is a promising application of PAI [22–25]. In fact, breast imaging is one of the most advanced clinical applications of PAI, and clinical measurement devices are available. PAI can provide comprehensive information on the vasculature and tumour properties. Thus, PAI is well suited to diagnostic imaging of the breast.

Another potential application of PAI is brain imaging [151]. Imaging of the brain is challenging due to strong optical scattering and acoustic attenuation by the skull.

Thus, studies have mainly been performed using small animals due to their thin skulls. In these studies, both brain structure and functionality have been imaged [67, 79, 149, 152]. In addition, brain tumours and disorders have been considered [153–157]. Thus far, imaging of the human brain has not been performed, but a few studies support its feasibility, especially if human neonates were the imaged subjects [72, 158].

PAI has also been applied in dermatologic imaging [74, 146, 147, 159, 160]. The superficial vasculature of small animals and humans has been imaged. In addition, skin cancers, burns and other skin lesions have been detected with PAI. Recently, PAI has been applied in the diagnosis and monitoring of psoriasis [161].

Other possible biomedical applications include gynecological [162, 163], urologic [164, 165], endocrinological [166–168], and ocular [169] imaging. In addition, PAI has been utilised in diagnosing joint diseases [170–172] and Crohn's disease [173].

In addition to imaging, PAI can be used as an assisting method. PAI can be utilised in planning and monitoring treatments, and thus it could help to improve treatments and outcomes. In addition, PAI can be a useful tool for assessing the efficacy of treatments. Examples of such applications are guidance of therapy, surgery and biopsy [134, 174–181].

3 Bayesian inversion methods in PAT

In this chapter, the Bayesian approach to the inverse problem of PAT is described following publication I. In addition, extending this approach to three dimensions (3D) is described following publication II. Finally, the Bayesian approximation error approach and its utilisation in modelling uncertainties in the speed of sound is described following publication IV.

3.1 BAYESIAN APPROACH TO PAT

In the inverse problem of PAT, an unobservable parameter p0 (the parameter of interest) is estimated based on the measurements pt. An observation model describes how these depend on each other. Since practical measured pressure signals always contain noise, which is commonly assumed to be additive, the discrete observation model of PAT corresponding to the forward model (2.4) can be written in the form

\[ p_t = K(c)p_0 + e, \tag{3.1} \]

where p_t ∈ R^M is a vector composed of the acoustic pressure waves sampled at the sensors at discrete time points, p_0 ∈ R^N is the discrete initial pressure distribution, K(c) ∈ R^{M×N} is the linear operator which maps the initial pressure distribution to the measurable data by discretising the wave equation (2.4), and e ∈ R^M represents the unknown noise. In practice, the operator K(c) can be assembled by computing the impulse response of a discrete system approximating (2.4) for each of the elements in the reconstructed area and placing the acoustic output on the columns of K(c). In this thesis, the impulse responses were computed using the k-space time domain method implemented in the k-Wave toolbox [95].
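The column-by-column assembly of K(c) can be sketched as follows. This is a minimal illustration, not the thesis implementation: a small random matrix stands in for the wave solver (in practice, `forward` would call an acoustic simulator such as k-Wave), and by linearity the response to each unit initial pressure gives one column of K.

```python
import numpy as np

N = 16    # number of elements in the reconstructed area
M = 32    # number of time samples x sensors

rng = np.random.default_rng(0)
A = rng.standard_normal((M, N))   # placeholder linear "physics"

def forward(p0):
    """Hypothetical forward solver: maps initial pressure to sensor data."""
    return A @ p0

# Impulse response of element n = response to a unit initial pressure there;
# place each response on the corresponding column of K.
K = np.zeros((M, N))
for n in range(N):
    e_n = np.zeros(N)
    e_n[n] = 1.0
    K[:, n] = forward(e_n)

# By linearity, K @ p0 reproduces the forward solver for any p0
p0 = rng.standard_normal(N)
assert np.allclose(K @ p0, forward(p0))
```

Note that each column requires one forward simulation, which is why assembling K explicitly becomes costly in large 3D problems.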

In the Bayesian framework [182, 183], all parameters are modelled as random variables, and information related to these parameters is expressed by probability distributions. The solution of the inverse problem is the posterior density, that is, a conditional probability density for the parameters of interest in each element of the domain. The posterior density combines the information in the measurements, the observation model, and the prior model for the unknown parameters, and thus both extracts information on and assesses the uncertainty of the variables. In the case of PAT, the posterior density reflects information on the unknown initial pressure distribution p0 given the acoustic pressure measurements pt.

According to Bayes' theorem, the posterior density follows the proportionality

\[ \pi(p_0 \mid p_t) \propto \pi(p_0)\,\pi(p_t \mid p_0), \tag{3.2} \]

where π(p_0) is the prior probability density and π(p_t|p_0) is the likelihood density. The prior density models what is known of the parameters of interest before the measurements, while the likelihood describes the probabilities of measurement outcomes over all choices of parameters.

Let us consider a linear observation model (3.1) and assume that p0 and e are mutually independent. Then, the likelihood becomes

\[ \pi(p_t \mid p_0) = \pi_e(p_t - K(c)p_0), \tag{3.3} \]

where π_e is the probability density of the noise e. The posterior density can now be written as

\[ \pi(p_0 \mid p_t) \propto \pi(p_0)\,\pi_e(p_t - K(c)p_0). \tag{3.4} \]

Let us further assume that the noise e and the initial pressure p_0 are Gaussian distributed, i.e. e ∼ N(η_e, Γ_e) and p_0 ∼ N(η_{p_0}, Γ_{p_0}), where η_e and Γ_e are the mean and covariance of the noise, respectively, and η_{p_0} and Γ_{p_0} are the mean and covariance of the prior model, respectively. Now, in the case of a linear observation model and Gaussian distributed noise and prior, the posterior density is a Gaussian N(η_{p_0|p_t}, Γ_{p_0|p_t}) and is described by the mean and covariance

\[ \eta_{p_0|p_t} = \Gamma_{p_0|p_t}\left( K(c)^{\mathrm{T}}\Gamma_e^{-1}(p_t - \eta_e) + \Gamma_{p_0}^{-1}\eta_{p_0} \right), \tag{3.5} \]
\[ \Gamma_{p_0|p_t} = \left( K(c)^{\mathrm{T}}\Gamma_e^{-1}K(c) + \Gamma_{p_0}^{-1} \right)^{-1}. \tag{3.6} \]
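The linear-Gaussian posterior mean and covariance can be computed directly on a toy problem. This is an illustrative sketch only: the matrix K, dimensions, prior and noise levels below are made up, and explicit matrix inverses are used for clarity rather than efficiency.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20, 8
K = rng.standard_normal((M, N))          # toy linear observation model

eta_p0 = np.full(N, 5.0)                 # prior mean
G_p0 = 0.5 * np.eye(N)                   # prior covariance
eta_e = np.zeros(M)                      # noise mean
G_e = 0.01 * np.eye(M)                   # noise covariance

# Simulate data from a "true" initial pressure
p0_true = rng.standard_normal(N) + 5.0
pt = K @ p0_true + rng.multivariate_normal(eta_e, G_e)

Ge_inv = np.linalg.inv(G_e)
Gp0_inv = np.linalg.inv(G_p0)

# Posterior covariance: inverse of (K^T Ge^{-1} K + Gp0^{-1})
G_post = np.linalg.inv(K.T @ Ge_inv @ K + Gp0_inv)
# Posterior mean (= MAP estimate in the Gaussian case)
eta_post = G_post @ (K.T @ Ge_inv @ (pt - eta_e) + Gp0_inv @ eta_p0)
```

In a realistic PAT discretisation these dense inverses are infeasible, which motivates the matrix-free formulation of Section 3.1.2.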

It should be noted that the posterior density has this closed form solution only when the observation model is linear and the noise and prior are Gaussian distributed.

In other situations, i.e. with a non-linear observation model and/or non-Gaussian distributions, the posterior density can in principle be explored by employing Markov chain Monte Carlo (MCMC) techniques, which provide a representative set of samples from the posterior density. However, in tomography, the posterior density is often high dimensional, and thus MCMC methods can be computationally prohibitively expensive. Therefore, point estimates of the posterior density are typically computed. The most commonly used point estimate is the maximum a posteriori (MAP) estimate, defined as

\[ p_{0,\mathrm{MAP}} = \arg\max_{p_0} \pi(p_0 \mid p_t). \tag{3.7} \]

In the Gaussian case, the MAP estimate is the same as the mean of the posterior density, i.e. p_{0,MAP} = η_{p_0|p_t}.

In addition to providing an estimate of the parameter of interest, the posterior density also provides information on the uncertainty of the estimate. This information can be used to assess the accuracy of the estimate. From the posterior covariance, the posterior variances σ²_{p_0|p_t} = diag{Γ_{p_0|p_t}} and further the standard deviations σ_{p_0|p_t} for each unknown parameter can be extracted. In addition, the reliability of the estimates can be assessed by computing credible intervals

\[ \left[ \eta_{p_0|p_t,k} - \alpha\sigma_{p_0|p_t,k},\ \eta_{p_0|p_t,k} + \alpha\sigma_{p_0|p_t,k} \right], \tag{3.8} \]

where η_{p_0|p_t,k} is the value of the posterior mean η_{p_0|p_t} in the kth element, σ_{p_0|p_t,k} is the value of σ_{p_0|p_t} in the kth element, and α = 1, 2, 3 corresponds to 68.3%, 95.5% and 99.7% intervals, respectively. The posterior marginal densities can also be informative for inspecting the uncertainty of the estimate. Since the posterior density is Gaussian, all marginal densities are Gaussian. The marginal density of the kth element is defined as

\[ p_{0,k} \mid p_t \sim \mathcal{N}(\eta_{p_0|p_t,k},\ \Gamma_{p_0|p_t,kk}), \tag{3.9} \]

where η_{p_0|p_t,k} is the value of η_{p_0|p_t} in the kth element and Γ_{p_0|p_t,kk} is the kth diagonal element of the matrix Γ_{p_0|p_t}.
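Extracting elementwise standard deviations and credible intervals from a posterior mean and covariance is a one-line operation each; the following sketch uses a small made-up posterior purely for illustration.

```python
import numpy as np

# Made-up posterior mean and covariance for three unknowns
eta_post = np.array([4.8, 5.3, 5.0])
G_post = np.array([[0.04, 0.01, 0.0],
                   [0.01, 0.09, 0.0],
                   [0.0,  0.0,  0.01]])

# Elementwise standard deviations from the diagonal of the covariance
sigma_post = np.sqrt(np.diag(G_post))

# Credible interval endpoints: eta +/- alpha * sigma
alpha = 3   # alpha = 3 corresponds to the 99.7% interval
lower = eta_post - alpha * sigma_post
upper = eta_post + alpha * sigma_post
```

The off-diagonal entries of the covariance do not enter the intervals; they would matter only for joint (multivariate) credible regions.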

Figure 3.1: Two sample draws of 2D spatial distributions (10×10 mm) from the white noise (first column), Ornstein-Uhlenbeck (second column) and squared exponential (third column) priors with the mean η_{p_0} = 5 and the standard deviation σ_{p_0} = 10/6. The characteristic length scale l = 2 mm was used for the Ornstein-Uhlenbeck and squared exponential priors.

3.1.1 Prior model

A prior model describes our pre-existing knowledge of the imaged target in the form of a probability density. That is, it describes which values are probable for each parameter of interest and how these values are distributed. The prior model should be chosen in such a way that the imaged target is well supported by the model.

Commonly used prior models are Gaussian priors [182]. Three examples of Gaussian priors are the white noise prior, the Ornstein-Uhlenbeck prior and the squared exponential prior [184, 185]. Sample draws of 2D spatial distributions from these priors are shown in Figure 3.1. Gaussian priors can be described by their means η_{p_0} and covariance matrices Γ_{p_0}. If the range of the unknown parameter values is roughly known, this information can be utilised to choose the mean η_{p_0} of the prior. Correspondingly, information related to the expected spatial distribution of the parameters can be utilised to choose the covariance matrix Γ_{p_0}. If the estimated parameters are assumed to be non-smooth (the parameters are independent of each other, or have no spatial correlation), the white noise prior can be an appropriate choice. In the white noise prior, Γ_{p_0} is defined as

\[ \Gamma_{p_0,ij} = \sigma_{p_0}^2 \delta_{ij}, \tag{3.10} \]

where i and j are the element indices, σ_{p_0} is the standard deviation, and δ_{ij} is the Kronecker delta. Prior knowledge of the range of the unknown parameter values can be utilised to choose the standard deviation σ_{p_0}. On the contrary, if the parameters are assumed to be somewhat smooth, the squared exponential prior or the Ornstein-Uhlenbeck prior can be a useful choice. In the squared exponential prior,

Γ_{p_0} is of the form

\[ \Gamma_{p_0,ij} = \sigma_{p_0}^2 \exp\left( -\frac{\lVert r_i - r_j \rVert^2}{2l^2} \right), \tag{3.11} \]

whereas in the case of the Ornstein-Uhlenbeck prior, Γ_{p_0} is defined as

\[ \Gamma_{p_0,ij} = \sigma_{p_0}^2 \exp\left( -\frac{\lVert r_i - r_j \rVert}{l} \right), \tag{3.12} \]

where r_i and r_j are the positions of the elements with indices i and j, and l is the characteristic length scale, which controls the spatial range of correlation. The characteristic length scale l can be chosen based on prior information about the internal structures of the imaged target and the size of these structures.
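The three prior covariances can be assembled from pairwise element distances and sampled via a Cholesky factor. The sketch below uses a small 1D grid for simplicity; the mean, standard deviation and length scale follow the example of Figure 3.1, the white noise covariance is diagonal as in (3.10), the squared exponential kernel is exp(−‖r_i − r_j‖²/(2l²)), and the Ornstein-Uhlenbeck kernel is taken to be the standard exp(−‖r_i − r_j‖/l) form.

```python
import numpy as np

n = 50
r = np.linspace(0.0, 10.0, n)             # element positions (mm)
dist = np.abs(r[:, None] - r[None, :])    # pairwise distances ||r_i - r_j||

eta = 5.0                                  # prior mean
sigma = 10.0 / 6.0                         # prior standard deviation
l = 2.0                                    # characteristic length scale (mm)

G_white = sigma**2 * np.eye(n)                       # white noise
G_se = sigma**2 * np.exp(-dist**2 / (2.0 * l**2))    # squared exponential
G_ou = sigma**2 * np.exp(-dist / l)                  # Ornstein-Uhlenbeck

# Draw one sample from each prior: eta + L z with G = L L^T, z ~ N(0, I).
# A small diagonal jitter keeps the Cholesky factorisation stable.
rng = np.random.default_rng(2)
for G in (G_white, G_se, G_ou):
    L = np.linalg.cholesky(G + 1e-8 * np.eye(n))
    sample = eta + L @ rng.standard_normal(n)
```

Samples from the white noise prior are spatially uncorrelated, Ornstein-Uhlenbeck samples are continuous but rough, and squared exponential samples are smooth, mirroring the columns of Figure 3.1.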

In PAT, some spatial correlation in the parameter values can be expected, since the properties of biological tissues are presumably relatively homogeneous within each tissue type. In addition, it is possible that the target is composed of heterogeneities separated by sharper edges, such as blood vessels. Thus, the squared exponential or Ornstein-Uhlenbeck prior can be regarded as a more appropriate assumption for PAT than the white noise prior.

3.1.2 Matrix-free implementation

In a large dimensional inverse problem, the closed form matrix presentation, i.e. (3.5)-(3.6), might be impractical to evaluate and store. Therefore, a matrix-free method for the computation of the posterior mean and covariance is needed. This can be achieved by solving linear systems corresponding to the mean and covariance using iterative linear solvers, such as conjugate gradient methods, and evaluating the multiplication by the forward model K with the forward operator [96], the multiplication by the transpose of the forward model K^T with the adjoint operator [45], and the multiplication by the covariance of the prior Γ_{p_0} with a 3D convolution using the fast Fourier transform, as follows. The mean of the posterior density (3.5) can be computed by solving the system of linear equations

\[ C\eta_{p_0|p_t} = d, \tag{3.13} \]

where

\[ C = \Gamma_{p_0}K(c)^{\mathrm{T}}\Gamma_e^{-1}K(c) + I, \tag{3.14} \]
\[ d = \Gamma_{p_0}K(c)^{\mathrm{T}}\Gamma_e^{-1}(p_t - \eta_e) + \eta_{p_0}, \tag{3.15} \]

and I is an identity matrix. Correspondingly, the kth column of the posterior covariance (3.6) can be computed by solving the linear system

\[ C\Gamma_{p_0|p_t,k} = \Gamma_{p_0}e_k, \tag{3.16} \]

where C is as in (3.14), Γ_{p_0|p_t,k} denotes the kth column of Γ_{p_0|p_t}, and e_k is a unit vector with value one at the kth element and zeros elsewhere. In this thesis, the linear systems (3.13) and (3.16) are solved using a biconjugate gradient stabilised (BiCGStab(l)) method [186] implemented in MATLAB.
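A minimal matrix-free sketch of solving (3.13) is shown below, here in Python with SciPy's BiCGStab rather than the MATLAB solver used in the thesis. The multiplications by K, K^T and Γ_{p_0} are supplied as functions, so C is never formed explicitly; small dense stand-ins play the role of the forward/adjoint operators and the FFT-based prior convolution, and all sizes and values are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

rng = np.random.default_rng(3)
M, N = 30, 12
K = rng.standard_normal((M, N))          # stand-in for the forward model
G_p0 = 0.5 * np.eye(N)                   # stand-in prior covariance
Ge_inv = 100.0 * np.eye(M)               # Gamma_e^{-1} for i.i.d. noise
eta_p0 = np.full(N, 5.0)
eta_e = np.zeros(M)
pt = K @ (rng.standard_normal(N) + 5.0)  # simulated (noise-free) data

def apply_K(x):   return K @ x           # forward operator
def apply_KT(y):  return K.T @ y         # adjoint operator
def apply_Gp0(x): return G_p0 @ x        # prior covariance (e.g. FFT conv)

def apply_C(x):
    # C x = Gamma_p0 K^T Gamma_e^{-1} K x + x, matching (3.14)
    return apply_Gp0(apply_KT(Ge_inv @ apply_K(x))) + x

C_op = LinearOperator((N, N), matvec=apply_C)
d = apply_Gp0(apply_KT(Ge_inv @ (pt - eta_e))) + eta_p0   # (3.15)

# Iterative solve of C eta = d without ever assembling C
eta_post, info = bicgstab(C_op, d, atol=0.0)
```

The same pattern with the right-hand side Γ_{p_0} e_k yields the kth column of the posterior covariance as in (3.16).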

3.2 BAYESIAN APPROXIMATION ERROR APPROACH FOR