
Kalman-type filters

Because KF works well for linear systems (3), there are many KF extensions for the nonlinear system (1). We call these extensions Kalman-type filters. Kalman-type filters approximate only the mean (estimate) and covariance (error estimate) of the posterior. If necessary, a Kalman-type filter approximates the posterior as a Gaussian with those parameters.

Many Kalman-type filters are based on BLUE. Let
$$\mathrm{E}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} \quad\text{and}\quad \mathrm{V}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} P_{xx} & P_{xy} \\ P_{yx} & P_{yy} \end{bmatrix}, \tag{4}$$
then the BLUE of the state $x$ is [8]
$$\hat{x}^{+} = \hat{x} + P_{xy}P_{yy}^{-1}\left(y - \hat{y}\right), \qquad \mathrm{E}\left[\left(x - \hat{x}^{+}\right)\left(x - \hat{x}^{+}\right)^{T}\right] = P_{xx} - P_{xy}P_{yy}^{-1}P_{yx}. \tag{5}$$
Kalman-type filters use BLUE recursively: at every time instant, the Kalman-type filter approximates the expectations (4) based on the previous state and error estimates, and computes new state and error estimates using (5), e.g. Algorithms 2-4. Naturally, different Kalman-type filters compute these approximations using different methods. Next, we present some of these approximations.
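To make the update (5) concrete, the following sketch implements a single BLUE update as a standalone function, with the blocks of the joint moments (4) as inputs. This is a minimal illustration in Python/NumPy with hypothetical variable names, not an implementation from the publications.

```python
import numpy as np

def blue_update(x_hat, y_hat, P_xx, P_xy, P_yy, y):
    """BLUE update (5): fuse a measurement y with the joint moments (4)."""
    K = P_xy @ np.linalg.inv(P_yy)      # gain P_xy P_yy^{-1}
    x_post = x_hat + K @ (y - y_hat)    # updated state estimate
    P_post = P_xx - K @ P_xy.T          # updated error covariance (P_yx = P_xy^T)
    return x_post, P_post
```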

Perhaps the best-known KF extension is the Extended Kalman Filter (EKF) [5, 8, 22, 29, 57, 63, 71]. EKF is based on BLUE: it linearizes the measurement function $h_k$ (1c) around the prior mean and uses this linearization to compute the necessary expectations (4) [2, 8]. The EKF algorithm for system (1) is given as Algorithm 2.

Algorithm 2: Extended Kalman filter

1. Start with the mean $\hat{x}_0 = \mathrm{E}(x_0)$ and the covariance $P_0 = \mathrm{V}(x_0)$ of the initial state. Set $k = 1$.
2. Prior mean of the state at $t_k$ is $\hat{x}_k^- = F_{k-1}\hat{x}_{k-1}$, and prior covariance is $P_k^- = F_{k-1}P_{k-1}F_{k-1}^T + Q_{k-1}$.
3. Posterior mean of the state at $t_k$ is $\hat{x}_k = \hat{x}_k^- + P_{xy_k}P_{yy_k}^{-1}\left(y_k - \hat{y}_k\right)$ and posterior covariance is $P_k = P_k^- - P_{xy_k}P_{yy_k}^{-1}P_{xy_k}^T$, where $P_{xy_k} = P_k^- H_k^T$, $H_k = h_k'(\hat{x}_k^-)$, $P_{yy_k} = H_k P_k^- H_k^T + R_k$ and $\hat{y}_k = h_k(\hat{x}_k^-)$.
4. Increment $k$ and repeat from step 2.

This EKF Algorithm 2 is used in publications [P1–P5, R1–R6]. It is also possible to compute a higher-order EKF, which takes into account the higher-order terms of the Taylor series of the measurement function. For example, the Second Order Extended Kalman Filter (EKF2) takes into account the second-order terms of the Taylor series [8]. The EKF2 algorithm for system (1) is given as Algorithm 3; a sketch of its second-order correction terms follows the algorithm below. This EKF2 Algorithm 3 is used in publications [P1–P3, R1, R4–R6]. EKF2 uses the assumption that the posterior is Gaussian, contrary to EKF, which does not use this, usually incorrect, assumption. The Modified Gaussian Second Order Filter [29, 57] is the same as EKF2.

One big drawback of EKF and the higher-order EKF is that the approximation of the measurement function is computed at the prior mean estimate; if that estimate is incorrect, then the approximation of the measurement function is incorrect, the next prior mean estimate is incorrect, and so on. One example of a situation where the EKF estimate veers away from the true route and gets stuck in an incorrect solution branch is given in [P1]. One approach to avoiding this kind of situation is to use linearization points that do not depend on the prior, if possible.

Algorithm 3: Second order extended Kalman filter

1. Start with the mean $\hat{x}_0 = \mathrm{E}(x_0)$ and the covariance $P_0 = \mathrm{V}(x_0)$ of the initial state. Set $k = 1$.
2. Prior mean of the state at $t_k$ is $\hat{x}_k^- = F_{k-1}\hat{x}_{k-1}$, and prior covariance is $P_k^- = F_{k-1}P_{k-1}F_{k-1}^T + Q_{k-1}$.
3. Posterior mean of the state at $t_k$ is $\hat{x}_k = \hat{x}_k^- + P_{xy_k}P_{yy_k}^{-1}\left(y_k - \hat{y}_k\right)$ and posterior covariance is $P_k = P_k^- - P_{xy_k}P_{yy_k}^{-1}P_{xy_k}^T$, where $P_{xy_k} = P_k^- H_k^T$, $H_k = h_k'(\hat{x}_k^-)$,
$$\hat{y}_k = h_k(\hat{x}_k^-) + \frac{1}{2}\sum_{i=1}^{n_{y_k}} e_i \operatorname{tr}\!\left(\ddot{H}_k^i P_k^-\right), \qquad \ddot{H}_k^i = {h_k^i}''(\hat{x}_k^-),$$
and
$$P_{yy_k} = H_k P_k^- H_k^T + R_k + \frac{1}{2}\sum_{i=1}^{n_{y_k}}\sum_{j=1}^{n_{y_k}} e_i e_j^T \operatorname{tr}\!\left(\ddot{H}_k^i P_k^- \ddot{H}_k^j P_k^-\right).$$
Here $e_i$ is the $i$th column of the identity matrix and $n_{y_k}$ is the dimension of the measurement $y_k$.
4. Increment $k$ and repeat from step 2.

A KF extension of that kind is called a Linearized Kalman Filter (LKF) [22, 71]. One possible choice for the linearization point is the likelihood peak(s), see [P3], or the mean of a truncated Gaussian, see [P5]. Of course, if we have more than one linearization point, we usually end up with a Gaussian mixture filter, see Section 3.3. Another possibility is to generate linearization points using the state model; this idea is used in [47, 48]. It is also possible to compute EKF (or LKF), use the posterior mean estimate as the linearization point of a new LKF, and iterate this procedure; a filter of that kind is called an Iterated Extended Kalman Filter (IEKF) [8, 29]. Note that IEKF computes the maximum a posteriori estimate, not the posterior mean estimate [8]; a sketch of the iteration is given below.
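The following is a minimal sketch of the IEKF measurement update, again assuming h and its Jacobian are callables; the fixed iteration count n_iter is a simplification for this illustration (in practice one would iterate until convergence).

```python
import numpy as np

def iekf_update(x_prior, P_prior, h, H_jac, R, y, n_iter=5):
    """Iterated EKF update [8, 29]: relinearize h at the latest estimate."""
    x = x_prior
    for _ in range(n_iter):
        H = H_jac(x)
        S = H @ P_prior @ H.T + R
        K = P_prior @ H.T @ np.linalg.inv(S)
        # Gauss-Newton step towards the maximum a posteriori estimate
        x = x_prior + K @ (y - h(x) - H @ (x_prior - x))
    P_post = P_prior - K @ S @ K.T
    return x, P_post
```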

One drawback of EKF and the higher-order EKF is that we have to compute the derivatives of the measurement function analytically, which is not always possible or practical. Because of this, there are many derivative-free Kalman filter extensions. Note that in some cases, computing the derivatives analytically is not a problem at all [R1]. The central difference filter [68], the first-order divided difference filter [60] and the Unscented Kalman Filter (UKF) [33, 35, 36, 37] are examples of derivative-free Kalman-type filters. These filters are also referred to as linear regression Kalman filters [49] and sigma-point Kalman filters [88]. They use different numerical integration methods to compute the expectations (4). For example, UKF uses the so-called Unscented Transformation (UT) [35], which approximates expectation values of $x \sim \mathrm{N}_n(\hat{x}, P)$ as
$$\mathrm{E}\left(h(x)\right) = \int h(\xi)\,p_x(\xi)\,\mathrm{d}\xi \approx \sum_{i=0}^{2n} \omega_i\, h(\chi_i), \tag{6}$$
where $\omega_i$ and $\chi_i$ are the so-called extended symmetric sigma-point set of the distribution $\mathrm{N}_n(\hat{x}, P)$, given in Table 1.1 [36].

Table 1.1: Extended symmetric sigma-point set of $\mathrm{N}_n(\hat{x}, P)$

Index ($i$)            Weight ($\omega_i$)              Sigma-point ($\chi_i$)
$0$                    $\omega_0$                       $\hat{x}$
$1, \ldots, n$         $\frac{1-\omega_0}{2n}$          $\hat{x} + \sqrt{\frac{n}{1-\omega_0}}\,\sqrt{P}\,e_i$
$n+1, \ldots, 2n$      $\frac{1-\omega_0}{2n}$          $\hat{x} - \sqrt{\frac{n}{1-\omega_0}}\,\sqrt{P}\,e_{i-n}$
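The sigma-point set of Table 1.1 and the approximation (6) are straightforward to implement; the following Python sketch uses the Cholesky factor as one possible matrix square root $\sqrt{P}$, which is an assumption of this illustration.

```python
import numpy as np

def symmetric_sigma_points(x_hat, P, w0):
    """Extended symmetric sigma-point set of N_n(x_hat, P) (Table 1.1)."""
    n = len(x_hat)
    sqrt_P = np.linalg.cholesky(P)              # one choice of sqrt(P)
    scale = np.sqrt(n / (1.0 - w0))
    points = [x_hat]
    points += [x_hat + scale * sqrt_P[:, i] for i in range(n)]
    points += [x_hat - scale * sqrt_P[:, i] for i in range(n)]
    weights = np.full(2 * n + 1, (1.0 - w0) / (2 * n))
    weights[0] = w0
    return np.array(points), weights

def ut_expectation(h, points, weights):
    """Unscented transformation approximation (6) of E(h(x))."""
    return sum(w * np.asarray(h(chi)) for w, chi in zip(weights, points))
```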

Approximation (6) is exact if the function $h$ is a polynomial of order three or less [33]. In Table 1.1, the weight of the mean point $\omega_0 < 1$ is a freely selectable constant. The choice $\omega_0 = 1 - \frac{n}{3}$ is justified because it guarantees that approximation (6) is also exact for some fourth-order polynomials [36]. However, a negative weight $\omega_0$ can produce a posterior covariance $P_k$ that is not positive semidefinite, which usually causes problems. If $\omega_0 = \frac{2}{3}$ and $n = 1$, then UT coincides with the three-point Gauss-Hermite rule, which has also been applied in the Kalman-type filter framework as the Gauss-Hermite filter [28]. UT needs $2n + 1$ points, whereas the generalization of the three-point Gauss-Hermite rule to higher dimensions needs $3^n$ points, so UT and the Gauss-Hermite rule coincide only in the one-dimensional case [28, 36].
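These properties are easy to check numerically with the helpers above; for example, with $n = 1$ and $\omega_0 = 1 - n/3 = 2/3$, the UT reproduces the Gaussian fourth moment $\mathrm{E}(x^4) = 3\sigma^4$ exactly:

```python
# With w0 = 1 - n/3 the UT matches the Gaussian fourth moment (n = 1).
x_hat, P = np.array([0.0]), np.array([[2.0]])
pts, wts = symmetric_sigma_points(x_hat, P, w0=2.0 / 3.0)
print(ut_expectation(lambda x: x**4, pts, wts))  # [12.] = 3 * 2.0**2
```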

The UKF algorithm for system (1) is given as Algorithm 4. This UKF Algorithm 4 is used in publications [P3, P4, P5, R4, R6]. The extended symmetric sigma-point set used in Algorithm 4 is the conventional choice of sigma points [33, 37], but there are also other possible choices [36], such as the simplex sigma-point set, which uses the minimum number of sigma points ($n + 1$) [31]. It is also possible to scale the sigma points so that the predicted covariance is certainly positive semidefinite [30]. Algorithm 4 assumes that the state model noise $w$ is Gaussian and exploits the linearity of the state model (1b), so it is not precisely the same as the conventional UKF. Publication [P4] extends UKF to the Gaussian mixture filter framework, where every sigma point is replaced by a Gaussian distribution.

Algorithm 4: Unscented Kalman filter

1. Start with the mean $\hat{x}_0 = \mathrm{E}(x_0)$ and the covariance $P_0 = \mathrm{V}(x_0)$ of the initial state. Set $k = 1$.
2. Prior mean of the state at $t_k$ is $\hat{x}_k^- = F_{k-1}\hat{x}_{k-1}$, and prior covariance is $P_k^- = F_{k-1}P_{k-1}F_{k-1}^T + Q_{k-1}$.
3. Posterior mean of the state at $t_k$ is $\hat{x}_k = \hat{x}_k^- + P_{xy_k}P_{yy_k}^{-1}\left(y_k - \hat{y}_k\right)$ and posterior covariance is $P_k = P_k^- - P_{xy_k}P_{yy_k}^{-1}P_{xy_k}^T$, where
$$P_{xy_k} = \sum_{i=0}^{2n} \omega_i \left(\chi_i - \hat{x}_k^-\right)\left(h_k(\chi_i) - \hat{y}_k\right)^T, \qquad \hat{y}_k = \sum_{i=0}^{2n} \omega_i h_k(\chi_i)$$
and
$$P_{yy_k} = R_k + \sum_{i=0}^{2n} \omega_i \left(h_k(\chi_i) - \hat{y}_k\right)\left(h_k(\chi_i) - \hat{y}_k\right)^T.$$
Here $\omega_i$ and $\chi_i$ are the extended symmetric sigma-point set of the distribution $\mathrm{N}_n(\hat{x}_k^-, P_k^-)$ given in Table 1.1.
4. Increment $k$ and repeat from step 2.
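A minimal Python sketch of one cycle of Algorithm 4 is given below, reusing the sigma-point helpers defined earlier; the default weight w0 = 0.5 is an arbitrary choice made for this illustration.

```python
import numpy as np

def ukf_step(x_hat, P, F, Q, h, R, y, w0=0.5):
    """One cycle of Algorithm 4 (UKF) for system (1)."""
    # Step 2: prior mean and covariance (linear state model)
    x_prior = F @ x_hat
    P_prior = F @ P @ F.T + Q
    # Step 3: approximate the expectations (4) with sigma points of
    # N(x_prior, P_prior) propagated through the measurement function h
    pts, wts = symmetric_sigma_points(x_prior, P_prior, w0)
    Y = np.array([h(chi) for chi in pts])
    y_hat = wts @ Y
    P_xy = (wts * (pts - x_prior).T) @ (Y - y_hat)
    P_yy = R + (wts * (Y - y_hat).T) @ (Y - y_hat)
    K = P_xy @ np.linalg.inv(P_yy)
    x_post = x_prior + K @ (y - y_hat)
    P_post = P_prior - K @ P_xy.T
    return x_post, P_post
```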

This list of different interpretations, variations and extensions of Kalman-type filters is not exhaustive. For example, it is possible to use UKF to maintain and propagate information about the higher-order moments [32, 34, 87].