
4.4 Solving clinical chromophore indices

The goal when studying skin optics is often to find out clinically significant parameters about the condition of the skin, the clinical chromophore indices, such as the pigmentation index (PI) and the erythema index (EI). These parameters may be the actual concentrations, or sometimes artificial variables that are either linearly or non-linearly related to the actual concentrations of melanin and haemoglobin. General strategies for solving the clinical chromophore indices are shown in Figure 16.

[Figure 16 diagram: 1) measure → optical measurements R(λ); 2a) optimise model → optical parameters μ_a, μ_s, g; 3) solve concentrations → concentrations c_HbO, c_HbD, c_Mel; 4) calculate clinical indices → clinical indices PI, EI, SatO2; path 2b) calculates the clinical indices directly.]

Figure 16. Schematic diagram of strategies to solve the clinically significant chromophore indices: In phase 1), the reflectance spectrum is measured. In phase 2a), the optical properties of the skin are found by fitting a skin model to the spectrum. In phase 3), the concentrations of the chromophores are solved from the optical properties. In phase 4), the given clinical indices are calculated from the concentrations. Sometimes, as in path 2b), chromophore indices are calculated directly from the shapes of the reflectance or absorption spectra.

Various light transport models of skin make it possible to estimate the reflectance of skin when the optical properties are known. Usually, however, it is the reflectance that is straightforward to measure, and the optical properties are the unknowns that need to be estimated. Because skin, and therefore also the models, are non-linear, the models cannot be easily inverted. Therefore, the optical properties are often searched iteratively, using an optimisation method, through the parameter space, until the reflectance predicted by the model fits the measured reflectance spectrum. Some widespread optimisation methods are linear least squares, Levenberg-Marquardt and genetic algorithms.
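The fitting strategy of phase 2a) can be illustrated with a short sketch. The forward model below is a deliberately simplified toy (a two-chromophore Beer-Lambert-style exponential), not a real skin model; all names, spectra and parameter values are invented for illustration, and the optimiser (Nelder-Mead) stands in for any of the methods discussed next. The point is only the inversion loop: searching the parameter space until the predicted spectrum fits the measurement.

    import numpy as np
    from scipy.optimize import minimize

    # Toy forward model (illustrative only, not a real skin model):
    # reflectance as an exponential of a two-chromophore absorption spectrum.
    wavelengths = np.linspace(450, 700, 60)                 # nm
    ext_mel = (wavelengths / 500.0) ** -3.5                 # mock melanin extinction
    ext_hb = np.exp(-((wavelengths - 560.0) / 40.0) ** 2)   # mock haemoglobin band

    def reflectance(c):
        c_mel, c_hb = c
        return np.exp(-(c_mel * ext_mel + c_hb * ext_hb))

    # A "measured" spectrum generated from known concentrations plus noise.
    rng = np.random.default_rng(0)
    measured = reflectance([0.8, 0.4]) + 0.005 * rng.normal(size=wavelengths.size)

    # Phase 2a): search the parameter space until the model fits the measurement.
    def misfit(c):
        return np.sum((measured - reflectance(c)) ** 2)

    result = minimize(misfit, x0=[0.1, 0.1], method='Nelder-Mead')
    print(result.x)   # recovered concentrations, close to [0.8, 0.4]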

4.4.1 Linear least squares

Linear least squares (LLS) is a method for fitting a linear model to statistical data.

A linear model for predicting the dependent variable y_i from the predictor variables x_{i,j} can be expressed as:

(57) \hat{y}_i = \sum_{j=1}^{N} \beta_j x_{i,j}

The goal of the optimisation is to find optimal values for the coefficients β_j that minimise the squared error:

(58) E = \sum_{i=1}^{M} (y_i - \hat{y}_i)^2

The linear least squares problem can be solved in closed form. The solution is well known:

(59) \beta = (X^T X)^{-1} X^T y

The least squares optimisation method is implemented in most linear algebra packages and suitable libraries are available, including LAPACK (Anderson, Bai & Bischof 1999). Often the LLS problem is solved using an orthogonal decomposition of X, for example the QR decomposition or the singular value decomposition (SVD).
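As a concrete sketch, the closed-form solution of Equation (59) can be compared against an SVD-based solver using NumPy; the data below are synthetic and purely illustrative:

    import numpy as np

    # Illustrative data: M measurements of N predictor variables.
    M, N = 100, 3
    rng = np.random.default_rng(0)
    X = rng.normal(size=(M, N))                      # predictor matrix
    true_beta = np.array([1.5, -2.0, 0.5])
    y = X @ true_beta + 0.01 * rng.normal(size=M)    # noisy observations

    # Closed-form solution via the normal equations, Equation (59):
    beta_normal = np.linalg.inv(X.T @ X) @ X.T @ y

    # Numerically preferable: NumPy's SVD-based LAPACK solver.
    beta_svd, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)

    print(beta_normal, beta_svd)   # both close to [1.5, -2.0, 0.5]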

4.4.2 Levenberg-Marquardt

No general closed-form method is known for solving a non-linear least squares (NLS) problem. Therefore, the methods for NLS are iterative and thus much slower than LLS methods. One popular iterative method for fitting a non-linear model is the Levenberg-Marquardt method (LMA) (Moré 1978).

The Levenberg-Marquardt method combines the steepest descent and Gauss-Newton methods. When the current estimate is still far from the solution, the algorithm behaves like steepest descent, which is slow but guaranteed to converge. Close to the solution, the algorithm behaves like the Gauss-Newton method, which converges faster (Lourakis 2005). The Gauss-Newton method is similar to Newton's method, but it does not require the computation of second derivatives, and it can only be used for minimising a sum of squares.

LMA implementations for many programming languages and computing platforms are available in the LEVMAR library (Lourakis 2004) and the MINPACK library (Moré, Garbow & Hillstrom 1980).
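A minimal sketch of an LMA fit, assuming SciPy is available (its least_squares function with method='lm' wraps the MINPACK Levenberg-Marquardt implementation); the exponential model and the data are invented for illustration:

    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative non-linear model: an exponential decay with two parameters.
    def model(params, t):
        a, k = params
        return a * np.exp(-k * t)

    def residuals(params, t, y):
        # Residual vector y_i - yhat_i; least_squares minimises its squared norm.
        return y - model(params, t)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 4, 50)
    y = model([2.5, 1.3], t) + 0.02 * rng.normal(size=t.size)

    # method='lm' delegates to MINPACK's Levenberg-Marquardt routine.
    fit = least_squares(residuals, x0=[1.0, 1.0], args=(t, y), method='lm')
    print(fit.x)   # estimated parameters, close to [2.5, 1.3]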

4.4.3 Genetic algorithm

Genetic algorithms (GA) are evolutionary algorithms often used for the optimisation of complex systems (Goldberg 1989; Alander 1992). GAs require neither the computation of derivatives nor a continuous fitness space, and they can be applied to many kinds of optimisation problems. If the fitness landscape contains several local optima in addition to the global optimum, a GA is more likely to find the global optimum than LLS or NLS methods; the cost is a much longer computation time.

Instead of using matrix operations or derivatives, GAs use methods inspired by biological evolution, such as inheritance, mutation, selection, and crossover. Genetic algorithms are explained in (Whitley 1994). An overview of the optimisation phases used by a genetic algorithm is shown in Figure 17.

[Figure 17 flowchart: Initialisation (random + a priori guess) → Selection (fitness function evaluation) → Reproduction (crossover + mutation) → End? (end criteria); the loop returns to Selection until the end criterion is met.]

Figure 17. The steps of a genetic algorithm. The GA starts with the initialisation of the population using random individuals, possibly utilising a priori knowledge of the problem domain. The population is then bred in a selection-reproduction loop until the end criterion is reached. In the selection phase, the individuals that score best in the fitness function evaluation are selected for the reproduction step. The new population consists of the results of crossover and mutation of the selected parents.

Genetic algorithms start with a population of solution candidates. The population consists of N individuals. Each individual contains the model parameters, β_j, often encoded as a sequence of bits. The initial population is often sampled at least partly randomly from the parameter space. A priori information can be utilised by including individuals with presumably good genes in the initial population.

The initial population is bred towards higher fitness, population by population, by iterating the selection and reproduction steps. To select the best individuals of the population, the fitness of each individual is evaluated using a fitness function, which can be arbitrary. The sum-of-squares function shown in Equation (58) is a common choice.

Having evaluated the fitness of the individuals, the optimisation proceeds to the reproduction step. When a new population is reproduced, a few of the best individuals are preserved as such if elitism is used. All high-fitness individuals take part in the reproduction process, which uses crossover and mutation operators to produce the next generation, which is expected to share most of the properties of the best individuals of the parent population. Mutation randomly changes one or more genes of the target individual, and crossover composes a new child from blocks of the genes of two parents.

According to the so-called building block hypothesis, introduced by Goldberg (1989) and also described by Whitley (1994), each population explores a part of the parameter space, often much more effectively than a random search. Most of the individuals tend to converge towards the global and local optima, generation by generation, and mutations help GAs avoid getting stuck in the local optima.
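The loop of Figure 17 can be sketched compactly. The snippet below is a minimal, illustrative GA that minimises the sum-of-squares error of Equation (58) with elitism, selection from the fitter half of the population, uniform crossover and Gaussian mutation; it uses a real-valued encoding instead of bit strings for brevity, and all parameter values are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(beta, X, y):
        # Negative sum of squares, Equation (58): higher fitness is better.
        return -np.sum((y - X @ beta) ** 2)

    def evolve(X, y, pop_size=50, n_generations=200, mutation_rate=0.1, elite=2):
        n_params = X.shape[1]
        # Initialisation: individuals sampled randomly from the parameter space.
        pop = rng.uniform(-5.0, 5.0, size=(pop_size, n_params))
        for _ in range(n_generations):
            scores = np.array([fitness(ind, X, y) for ind in pop])
            pop = pop[np.argsort(scores)[::-1]]     # best individuals first
            new_pop = list(pop[:elite])             # elitism: keep the best as such
            while len(new_pop) < pop_size:
                # Selection: draw two parents from the fitter half.
                i, j = rng.integers(0, pop_size // 2, size=2)
                # Crossover: each gene is copied from one of the two parents.
                mask = rng.random(n_params) < 0.5
                child = np.where(mask, pop[i], pop[j])
                # Mutation: randomly perturb some genes of the child.
                mutated = rng.random(n_params) < mutation_rate
                child = child + mutated * rng.normal(scale=0.5, size=n_params)
                new_pop.append(child)
            pop = np.array(new_pop)
        scores = np.array([fitness(ind, X, y) for ind in pop])
        return pop[np.argmax(scores)]

    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.5, -2.0, 0.5])
    print(evolve(X, y))    # approaches [1.5, -2.0, 0.5]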