
Volume 2008, Article ID 278185, 14 pages, doi:10.1155/2008/278185

Research Article

Measurement Combination for Acoustic Source Localization in a Room Environment

Pasi Pertilä, Teemu Korhonen, and Ari Visa

Department of Signal Processing, Tampere University of Technology, P.O. Box 553, 33101 Tampere, Finland

Correspondence should be addressed to Pasi Pertilä, pasi.pertila@tut.fi

Received 31 October 2007; Revised 4 February 2008; Accepted 23 March 2008

Recommended by Woon-Seng Gan

The behavior of time delay estimation (TDE) is well understood and therefore attractive to apply in acoustic source localization (ASL). A time delay between microphones maps into a hyperbola. Furthermore, the likelihoods for different time delays are mapped into a set of weighted nonoverlapping hyperbolae in the spatial domain. Combining TDE functions from several microphone pairs results in a spatial likelihood function (SLF) which is a combination of sets of weighted hyperbolae. Traditionally, the maximum SLF point is considered as the source location but is corrupted by reverberation and noise. Particle filters utilize past source information to improve localization performance in such environments. However, uncertainty exists on how to combine the TDE functions. Results from simulated dialogues in various conditions favor TDE combination using intersection-based methods over union. The real-data dialogue results agree with the simulations, showing a 45% RMSE reduction when choosing the intersection over the union of TDE functions.

Copyright © 2008 Pasi Pertilä et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

Passive acoustic source localization (ASL) methods are attractive for surveillance applications, which are a constant topic of interest. Another popular application is human interaction analysis in smart rooms with multimodal sensors.

Automating the perception of human activities is a popular research topic also approached from the aspect of localization. Large databases of smart room recordings are available for system evaluations and development [1]. A typical ASL system consists of several spatially separated microphones. The ASL output is either source direction or location in two- or three-dimensional space, which is achieved by utilizing received signal phase information [2] and/or amplitude [3], and possibly sequential information through tracking [4].

Traditional localization methods maximize a spatial likelihood function (SLF) [5] to locate the source. Localization methods can be divided according to the way the spatial likelihood is formed at each time step. The steered beamforming approach sums delayed microphone signals and calculates the output power for a hypothetical location. It is therefore a direct localization method, since microphone signals are directly applied to build the SLF.

Time delay estimation (TDE) is widely studied and well understood and therefore attractive to apply in the source localization problem. The behavior of correlation-based TDE methods has been studied theoretically [6], also in reverberant enclosures [7, 8]. Other TDE approaches include adaptively determining the transfer function between microphone channels [9], or the impulse responses between the source and receivers [10]. For more discussion on TDE methods, see [11].

TDE-based localization methods first transform microphone pair signals into a time delay likelihood function. These pairwise likelihood functions are then combined to construct the spatial likelihood function. It is therefore a two-step localization approach in comparison to the direct approach. The TDE function provides a likelihood for any time delay value. For this purpose, the correlation-based TDE methods are directly applicable. A hypothetical source position maps into a time delay between a microphone pair. Since the TDE function assigns a likelihood for the time delay, the likelihood for the hypothetical source position is obtained.

From a geometrical aspect, a time delay is inverse-mapped as a hyperbola in 3D space. Therefore, the TDE function corresponds to a set of weighted nonoverlapping hyperbolae in the spatial domain. The source location can be solved by utilizing spatially separated microphone pairs, that is, combining pairwise TDE functions to construct a spatial likelihood function (SLF). The combination method varies.

Summation is used in [12–14], multiplication is used in [15, 16], and the determinant, used originally to determine the time delay from multiple microphones in [17], can also be applied for TDE function combination in localization. The traditional localization methods consider the maximum point of the most recent SLF as the source location estimate. However, in a reverberant and noisy environment, the SLF can have peaks outside the source position. Even a moderate increase in the reverberation time may cause dominant noise peaks [7], leading to the failure of the traditional localization approach [15]. Recently, particle filtering (PF)-based sound source localization systems have been presented [13, 15, 16, 18]. This scheme uses information also from the past time frames to estimate the current source location. The key idea is that spatially inconsistent dominant noise peaks in the current SLF do not necessarily corrupt the location estimate. This scheme has been shown to extend the conditions in which an ASL system is usable in terms of signal-to-noise ratio (SNR) and reverberation time (T60) compared to the traditional approach [15].

As noted, several ways of combining TDE functions have been used in the past, and some uncertainty exists about a suitable method for building the SLF for sequential 3D source localization. To address this issue, this work introduces a generalized framework for combining TDE functions in TDE-based localization using particle filtering.

Geometrically, the summation of TDE functions represents the union of pairwise spatial likelihoods, that is, the union of the sets of weighted hyperbolae. Such an SLF does have the maximum value at the correct location but also includes the unnecessary tails of the hyperbolae. Taking the intersection of the sets reduces the unnecessary tails of the hyperbolae, that is, acknowledges that the time delay is eventually related only to a single point in space and not to the entire set of points it gets mapped into (hyperbola). TDE combination schemes are compared using a simulated dialogue. The simulation reverberation time (T60) ranges from 0 to 0.9 second, and the SNR ranges from −10 to +30 dB. Also, real data from a dialogue session is examined in detail.

The rest of this article is organized as follows: Section 2 discusses the signal model and TDE functions along with signal parameters that affect TDE. Section 3 proposes a general framework for combining the TDE functions to build the SLF. Section 4 categorizes localization methods based on the TDE combination operation they apply and discusses how the combination affects the SLF shape. Iterative localization methods are briefly discussed. Particle filtering theory is reviewed in Section 5 for sequential SLF estimation and localization. In Section 6, simulations and real-data measurements are described. Selected localization methods are compared in Section 7. Finally, Sections 8 and 9 conclude the discussion.

2. SIGNAL MODEL AND TDE FUNCTION

The sound signal emitted from a source propagates to the receiving microphone. The received signal is a convolution of the source signal and an impulse response. The impulse response encompasses the measurement equipment response, room geometry, materials, as well as the propagation delay from a source $r_n$ to a microphone $m_i$ and reverberation effects. The $i$th microphone signal is a superposition of convolved source signals [14, 15]:

$x_i(t) = \sum_{n=1}^{N} s_n(t) * h_{i,n}(t) + w_i(t)$,  (1)

where $i \in [1, \ldots, M]$, $s_n(t)$ is the signal emitted by the $n$th source, $n \in [1, \ldots, N]$, $w_i(t)$ is assumed here to be independent and identically distributed noise, $t$ represents the discrete time index, $h_{i,n}(t)$ is the impulse response, and $*$ denotes convolution. The propagation time from a source point $r_n$ to microphone $i$ is

$\tau_{i,r_n} = \|r_n - m_i\| \cdot c^{-1}$,  (2)

where $c$ is the speed of sound, and $\|\cdot\|$ is the Euclidean norm. Figure 1(a) illustrates the propagation delay from source to microphones, using a 2D simplification.

A wavefront emitted from point $r$ arrives at spatially separated microphones $i$, $j$ according to their corresponding distance from point $r$. This time difference of arrival (TDOA) value between the pair $p = \{i, j\}$ in samples is [14]

$\Delta\tau_{p,r} = \lfloor (\|r - m_i\| - \|r - m_j\|) \cdot f_s \cdot c^{-1} \rceil$,  (3)

where $f_s$ is the sampling frequency, and $\lfloor\cdot\rceil$ denotes rounding. Conversely, a delay $\Delta\tau_{p,r}$ between a microphone pair defines a set of 3D locations $H_{p,r}$ forming a hyperbolic surface that includes the unique location $r$. The geometry is illustrated in Figure 1(b), where hyperbolae related to TDOA values $-30, -20, \ldots, 30$ are illustrated.

In this work, a TDE function between microphone pair $p$ is defined as $R_p(\tau_p) \in [0, 1]$, where the delay can have values

$\tau_p \in [-\tau_{\max}, \tau_{\max}]$, $\tau_p \in \mathbb{Z}$,  (4)

$\tau_{\max} = \|m_j - m_i\| \cdot f_s \cdot c^{-1}$.  (5)

The unit of delay is one sample. TDE functions include the generalized cross correlation (GCC) [19], which is defined for a frame of microphone pair $p$ data:

$R_p^{\mathrm{GCC}}(\tau_p) = \mathcal{F}^{-1}\{W_p(k) X_i(k) X_j^{*}(k)\}$,  (6)

where $X_j^{*}(k)$ is the complex conjugate of the DFT of the $j$th microphone signal, $k$ is the discrete frequency, $\mathcal{F}^{-1}\{\cdot\}$ denotes the inverse DFT, and $W_p(k)$ is a weighting function, see [19]. Phase transform (PHAT) weighting $W_p(k) = |X_i(k) X_j^{*}(k)|^{-1}$ causes sharper peaks in the TDE function compared to the nonweighted GCC and is used by several TDE-based localization methods, including the steered response power using phase transform (SRP-PHAT) [14].
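As a concrete illustration of (6) with PHAT weighting, the following Python sketch computes the TDE function for one microphone pair from a single signal frame. It is a minimal sketch rather than the authors' implementation; the zero-padding and the small regularization constant are illustrative choices.

```python
import numpy as np

def gcc_phat(frame_i, frame_j, max_delay):
    """PHAT-weighted GCC (6) for one microphone pair over one frame.

    Returns the TDE function evaluated at integer delays
    -max_delay, ..., max_delay (in samples).
    """
    n = 2 * len(frame_i)                 # zero-pad to avoid circular wrap-around
    X_i = np.fft.rfft(frame_i, n=n)
    X_j = np.fft.rfft(frame_j, n=n)
    cross = X_i * np.conj(X_j)           # cross-power spectrum X_i(k) X_j*(k)
    cross /= np.abs(cross) + 1e-12       # PHAT weighting |X_i(k) X_j*(k)|^-1
    cc = np.fft.irfft(cross, n=n)
    # rearrange so that index 0 corresponds to delay -max_delay
    return np.concatenate((cc[-max_delay:], cc[:max_delay + 1]))
```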


Figure 1: Source localization geometry. Panels: (a) propagation delay from source 1, (b) TDOA values and corresponding hyperbolae, (c) TDE function values $R_p(\tau_p)$, (d) spatial likelihood function (SLF) for a microphone pair. The sampling frequency is 22050 Hz, the speed of sound is 343 m/s, the source signal is colored noise, and the SNR is +24 dB. The sources are located at $r_1 = [3, 2]$ and $r_2 = [1.5, 1.5]$, or at TDOA values $\Delta\tau_1 = 18$ and $\Delta\tau_2 = -6$. In panel (a), the propagation time from the source at $r_1$ is different for the two microphones (values given in samples). This difference is the TDOA value of the source. Panel (b) illustrates how different TDOA values are mapped into hyperbolae. In panel (c), the two peaks at locations $\tau_p = 18$ and $\tau_p = -6$ in the TDE function correspond to the source locations $r_1$ and $r_2$, respectively. Panel (d) displays the TDE function values from panel (c) mapped into a microphone pairwise spatial likelihood function (SLF).

An example of a TDE function is displayed in Figure 1(c). Other weighting schemes include the Roth, Scot, Eckart, the Hannan-Thomson (maximum likelihood) [19], and the Hassab-Boucher methods [20].

Other applicable TDE functions include the modified average magnitude difference function (MAMDF) [21]. Recently, time-frequency histograms have been proposed to increase TDE robustness against noise [22]. For a more detailed discussion on TDE, refer to [11]. The evaluation of different TDE methods and GCC weighting methods is, however, outside the scope of this work. Hereafter, the PHAT-weighted GCC is utilized as the TDE weighting function since it is the optimal weighting function for a TDOA estimator in a reverberant environment [8].

The correlation-based TDOA is defined as the peak location of the GCC-based TDE function [19]. Three distinct SNR ranges (high, low, and the transition range in between) in TDOA estimation accuracy have been identified in a nonreverberant environment [6]. In the high SNR range, the TDOA variance attains the Cramer-Rao lower bound (CRLB) [6]. In the low SNR range, the TDE function is dominated by noise, and the peak location is noninformative. In the transition range, the TDE peak becomes ambiguous and is not necessarily related to the correct TDOA value. TDOA estimators fail rapidly when the SNR drops into this transition SNR range [6]. According to the modified Ziv-Zakai lower bound, this behavior depends on the time-bandwidth product, the bandwidth to center frequency ratio, and the SNR [6]. In addition, the CRLB depends on the center frequency.

In a reverberant environment, the correlation-based TDOA performance is known to decay rapidly when the reverberation time (T60) increases [7]. The CRLB of the correlation-based TDOA estimator in the reverberant case is derived in [8], where PHAT weighting is shown to be optimal. In that model, the signal to noise and reverberation ratio (SNRR) and the signal frequency band affect the achievable minimum variance. The SNRR is a function of the acoustic reflection coefficient, noise variance, microphone distance from the source, and the room surface area.

3. FRAMEWORK FOR BUILDING THE SPATIAL LIKELIHOOD FUNCTION

Selecting a spatial coordinate $r$ assigns a microphone pair $p$ with a TDOA value $\Delta\tau_{p,r}$ as defined in (3). The TDE function (6) indexed with this value, that is, $R_p(\Delta\tau_{p,r})$, represents the likelihood of the source existing at the locations that are specified by the TDOA value, that is, hyperboloid $H_{p,r}$. The pairwise SLF can be written as

$P(R_p \mid r) = R_p(\Delta\tau_{p,r}) \in [0, 1]$,  (7)

where $P(\cdot \mid \cdot)$ represents conditional likelihood, normalized between [0, 1]. Figure 1(d) displays the pairwise SLF of the TDE measurement displayed in Figure 1(c). Equation (7) can be interpreted as the likelihood of a source having location $r$ given the measurement $R_p$.
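As a minimal sketch of (3) and (7), the snippet below maps a candidate location to its TDOA for one microphone pair and indexes the TDE function with it. The argument names and the clamping of out-of-range delays are illustrative assumptions, not details from the paper.

```python
import numpy as np

def pairwise_slf(r, mic_i, mic_j, tde, fs=22050.0, c=343.0):
    """Pairwise spatial likelihood (7): evaluate the TDE function at the
    TDOA (3) implied by the candidate location r.

    r, mic_i, mic_j are 2D/3D numpy arrays; tde holds likelihoods for
    integer delays -max_delay, ..., max_delay, for example as returned
    by a GCC-PHAT routine.
    """
    max_delay = (len(tde) - 1) // 2
    tdoa = (np.linalg.norm(r - mic_i) - np.linalg.norm(r - mic_j)) * fs / c
    k = int(round(tdoa))                        # rounding as in (3)
    k = max(-max_delay, min(max_delay, k))      # keep delay inside the valid range
    return tde[k + max_delay]
```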

The pairwise SLF consists of weighted nonoverlapping hyperbolic objects and therefore has no unique maximum. A practical solution to reduce the ambiguity of the maximum point is to utilize several microphone pairs. The combination operator used to perform fusion between these pairwise SLFs influences the shape of the resulting SLF. Everything in each hyperboloid's shape except the source position is a nuisance.

A binary operator $\otimes$ combining two likelihoods can be defined as

$\otimes : [0, 1] \times [0, 1] \rightarrow [0, 1]$.  (8)

Among such operators, ones that are commutative, monotonic, associative, and bounded between [0, 1] are of interest here.


Figure 2: Three common likelihood combination operators, the normalized sum (s-norm), the product (t-norm), and the Hamacher t-norm ($\gamma = 0.1$), are illustrated along with their resulting likelihoods: (a) sum $0.5(A+B)$, (b) product $AB$, (c) Hamacher t-norm. The contour lines represent constant values of the output likelihood.

For likelihoods $A$, $B$, $C$, $D$, these rules are written as

$A \otimes B = B \otimes A$,  (9)

$A \otimes B \leq C \otimes D$, if $A \leq C$ and $B \leq D$,  (10)

$A \otimes (B \otimes C) = (A \otimes B) \otimes C$.  (11)

Such operations include the t-norm and s-norm. An s-norm operation between two sets represents the union of the sets and has the property $A \otimes 0 = A$. The most common s-norm operation is summation. Other well-known s-norm operations include the Euclidean distance and the maximum value.

A t-norm operation represents the intersection of sets and satisfies the property $A \otimes 1 = A$. Multiplication is the most common such operation. Other t-norm operations include the minimum value and the Hamacher t-norm [23], which is a parameterized norm and is written for two values $A$ and $B$ as

$h(A, B, \gamma) = \dfrac{AB}{\gamma + (1 - \gamma)(A + B - AB)}$,  (12)

where $\gamma > 0$ is a parameter. Note that multiplication is a special case of (12) when $\gamma = 1$.
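The Hamacher t-norm (12) is straightforward to implement; the following minimal sketch also checks that $\gamma = 1$ reduces it to the plain product.

```python
def hamacher(a, b, gamma):
    """Hamacher t-norm (12) for likelihoods a, b in [0, 1], with gamma > 0."""
    return (a * b) / (gamma + (1.0 - gamma) * (a + b - a * b))

# gamma = 1 recovers multiplication: both lines print 0.1
print(hamacher(0.5, 0.2, 1.0))
print(0.5 * 0.2)
```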

Figure 2 illustrates the combination of two likelihood values, $A$ and $B$. The likelihood values are displayed on the axes. The leftmost image represents summation, the middle represents the product, and the rightmost is the Hamacher t-norm ($\gamma = 0.1$). The contour lines represent the joint likelihood.

The summation is the only s-norm here. In general, the t-norm is large only if all likelihoods are large. In contrast, the s-norm can be large even if some likelihood values are small.

The combination of pairwise SLFs can be written (using $\otimes$ with prefix notation) as

$P(R \mid r) = \bigotimes_{p \in \Omega} R_p(\Delta\tau_{p,r})$,  (13)

where each microphone pair $p$ belongs to a microphone pair group $\Omega$, and $R$ represents all the TDE functions of the group. There exist $\binom{M}{2}$ unique microphone pairs in the set of all pairs. Sometimes partitioning the set of microphones into groups or arrays before pairing is justified. The signal coherence between two microphones decreases as the microphone distance increases [24], which favors partitioning the microphones into groups with low sensor distance. Also, the complexity of calculating all pairwise TDE function values is $O(M^2)$, which is lower for partitioned arrays. Selecting too small a sensor separation may lead to over-quantization of the possible TDOA values where only a few delay values exist, see (5).
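Equation (13) can be read as a reduction of the pairwise likelihoods with the chosen operator $\otimes$. The sketch below evaluates it on a grid of candidate locations; the grid layout, the pair bookkeeping, and the pairwise_slf helper are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from functools import reduce

def spatial_likelihood(grid, pairs, pairwise_slf, combine):
    """Evaluate the combined SLF (13) on candidate locations.

    grid         : (K, 3) array of candidate positions r
    pairs        : list of (mic_i, mic_j, tde) tuples forming the group Omega
    pairwise_slf : function (r, mic_i, mic_j, tde) -> likelihood in [0, 1]
    combine      : binary operator, e.g. np.add (s-norm) or np.multiply (t-norm)
    """
    slf = []
    for r in grid:
        vals = [pairwise_slf(r, mi, mj, tde) for (mi, mj, tde) in pairs]
        slf.append(reduce(combine, vals))
    return np.asarray(slf)
```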

4. TDE-BASED LOCALIZATION METHODS

Several TDE-based combination schemes exist in the ASL literature. The most common method is summation. This section presents four distinct operations in the generalized framework.

4.1. Summation operator in TDE-based localization

The method in [12] sums GCC values, which is equivalent to the steered beamformer. The method in [13] sums precedence-weighted GCC values (for direction estimation). The SRP-PHAT method sums PHAT-weighted GCC values [14].

All these methods use the summation operation, which fulfills the requirements (9)–(11). Using (13), the SRP-PHAT is written as

$P_{\text{SRP-PHAT}}(R \mid r) = \sum_{p \in \Omega} R_p^{\text{GCC-PHAT}}(\Delta\tau_{p,r})$.  (14)

Every high value of the pairwise SLF is present in the resulting SLF since the sum represents a union of values. In a multiple source situation with more than two sensors, this approach generates high probability regions outside actual source positions, that is, ghosts. See Figure 3(a) for illustration, where ghosts appear, for example, at $x, y$ coordinates $[3.1, 1.2]$ and $[2.6, 1.3]$.

4.2. Multiplication operator in TDE-based localization

In [15, 16], the product was used as the likelihood combination operator, which is a probabilistic approach. (In [15], negative GCC values are clipped and the resulting positive values are raised to a power $q$.)


Figure 3: A two-source example scenario with three microphone pairs is illustrated. The source coordinates are $r_1 = [3, 2]$ and $r_2 = [1.5, 1.5]$. Two combination operators, sum and product, are used to produce two separate spatial likelihood functions (SLFs). Panels (a) and (e) illustrate the resulting 2D SLF, produced with the sum and product operations, respectively. The SLF contours are presented in panels (d) and (h). Circle and square represent microphone and source locations, respectively. The marginal distributions of the SLFs are presented in panels (b) and (c) for the sum, and (f) and (g) for the product. The panel (a) distribution has ghosts, which are the result of summed observations; see the example ghost at $[3.1, 1.2]$. Also, the marginal distributions are not informative. In panel (e), the SLF has sharp peaks located at the actual sound sources. The marginal distributions carry source position information, though this is not guaranteed in general.

If the likelihoods are independent, the intersection of sets equals their product. The method, termed here multi-PHAT, multiplies the pairwise PHAT-weighted GCC values together, in contrast to summation. The multi-PHAT fulfills (9)–(11) and is written using (13) as

$P_{\text{multi-PHAT}}(R \mid r) = \prod_{p \in \Omega} R_p^{\text{GCC-PHAT}}(\Delta\tau_{p,r})$.  (15)

This approach outputs the common high likelihood areas of the measurements, and so the unnecessary peaks of the SLF are somewhat reduced. The ghosts experienced in the SRP-PHAT method are eliminated in theory by the intersection-based combination approach. This is illustrated in Figure 3(b). The SLF has two distinct peaks that correspond to the true source locations.

4.3. Hamacher t-norm in TDE-based localization

Several other methods that have the properties (9)–(11) can be used to combine likelihoods. These methods include parameterized t-norms and s-norms [23]. Here, the Hamacher t-norm (12) is chosen because it is relatively close to the product and represents the intersection of sets. The Hamacher t-norm is defined as a dual norm, since it operates on two inputs.

The parameter $\gamma > 0$ in the Hamacher t-norm (12) defines how the norm behaves. For example, $h(0.5, 0.2, 0.1) \approx 0.16$, whereas their product equals $0.2 \cdot 0.5 = 0.1$, and $h(0.5, 0.2, 15) \approx 0.085$. Figures 2(b) and 2(c) represent the multiplication and the Hamacher t-norm ($\gamma = 0.1$). The Hamacher t-norm-based TDE localization method is written using (13):

$P_{\text{Hamacher-PHAT}}(R \mid r, \gamma) = h(\ldots h(h(R_1(\Delta\tau_r), R_2(\Delta\tau_r), \gamma), \ldots), R_J(\Delta\tau_r), \gamma)$,  (16)

where $R_J(\Delta\tau_r)$ is abbreviated notation of $R_J^{\text{GCC-PHAT}}(\Delta\tau_{J,r})$, that is, the PHAT-weighted GCC value from the $J$th microphone pair for location $r$, where $J$ is the total number of pairs, and $h(\cdot, \cdot, \gamma)$ is the Hamacher t-norm (12). Since the norm is commutative, the TDE measurements can be combined in an arbitrary order. Any positive $\gamma$ value can be chosen, but values $\gamma < 1$ were empirically found to produce good results.


Note that multi-PHAT is a special case of Hamacher-PHAT when $\gamma = 1$.
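Because the Hamacher t-norm is a binary (dual) norm, (16) is applied by folding it pairwise over the $J$ measurements; commutativity and associativity make the order irrelevant. A minimal sketch, assuming the pairwise likelihoods for the candidate location are already collected in a list:

```python
from functools import reduce

def hamacher(a, b, gamma):
    """Hamacher t-norm (12)."""
    return (a * b) / (gamma + (1.0 - gamma) * (a + b - a * b))

def hamacher_phat(pair_likelihoods, gamma=0.75):
    """Combine PHAT-weighted GCC likelihoods R_p(delta_tau_{p,r}) as in (16).

    gamma = 0.75 matches the value used in the experiments of Section 7;
    gamma = 1 reduces to the multi-PHAT product (15).
    """
    return reduce(lambda a, b: hamacher(a, b, gamma), pair_likelihoods)
```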

4.4. Other combination methods in TDE-based localization

Recently, a spatial correlation-based method for TDOA estimation has been proposed [17], termed the multichannel cross correlation coefficient (MCCC) method. It combines cross correlation values for TDOA estimation and is considered here for localization. The correlation matrix from an $M$-microphone array is here written as

$\mathbf{R} = \begin{bmatrix} R_{1,1}(\Delta\tau_r) & R_{1,2}(\Delta\tau_r) & \cdots & R_{1,M}(\Delta\tau_r) \\ R_{2,1}(\Delta\tau_r) & R_{2,2}(\Delta\tau_r) & \cdots & R_{2,M}(\Delta\tau_r) \\ \vdots & \vdots & \ddots & \vdots \\ R_{M,1}(\Delta\tau_r) & R_{M,2}(\Delta\tau_r) & \cdots & R_{M,M}(\Delta\tau_r) \end{bmatrix}$,  (17)

where $R_{i,j}(\Delta\tau_r)$ equals $R_p^{\text{GCC-PHAT}}(\Delta\tau_{p,r})$. In [17], the matrix (17) is used for TDOA estimation, but here it is interpreted as a function of source position using (13):

$P_{\text{MCCC}}(R \mid r) = 1 - \det \mathbf{R}$.  (18)

The spatial likelihood of, for example, a three-microphone array is

$P_{\text{MCCC}}(R \mid r) = 1 - \det \mathbf{R}_{3\times 3} = R_{1,2}(\Delta\tau_r)^2 + R_{1,3}(\Delta\tau_r)^2 + R_{2,3}(\Delta\tau_r)^2 - 2 R_{1,2}(\Delta\tau_r) R_{1,3}(\Delta\tau_r) R_{2,3}(\Delta\tau_r)$.  (19)

The MCCC method is argued to remove the effect of a channel that does not correlate with the other channels [17]. This method does not satisfy the monotonicity assumption (10).

Also, the associativity (11) does not hold for arrays larger than three microphones.
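A minimal sketch of (17)–(18): the pairwise PHAT-weighted GCC values for one candidate location are arranged into a symmetric matrix with a unit diagonal, and the likelihood is one minus its determinant. The three-microphone numbers below are made up for illustration.

```python
import numpy as np

def mccc_likelihood(R):
    """MCCC spatial likelihood (18) for one candidate location.

    R is an (M, M) symmetric array of pairwise GCC-PHAT values
    R_{i,j}(delta_tau_r), with ones on the diagonal.
    """
    return 1.0 - np.linalg.det(R)

# Three-microphone example; compare with the expansion in (19)
R = np.array([[1.0, 0.8, 0.7],
              [0.8, 1.0, 0.6],
              [0.7, 0.6, 1.0]])
print(mccc_likelihood(R))   # 0.818
```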

4.5. Summary of the TDE combination methods

Four different TDE combination schemes were discussed, and existing localization methods were categorized accordingly. Figure 3 displays the difference between the intersection and the union of TDE functions in localization. The SLF produced with the Hamacher t-norm differs slightly from the multiplication approach and is not illustrated. Also, the SLF produced with the MCCC is relatively close to the summation, as seen later in Figure 10. The intersection results in the source location information. The union contains the same information as the intersection but also other regions, such as the tails of the hyperbolae. This extra information does not help localization. In fact, likelihood mass outside the true source position increases the estimator variance. However, this extra likelihood mass can be considered in other applications, for example, to determine the speaker's head orientation [25].

1  X_t = SIR{X_{t-1}, R_t};
2  for j = 1 to N_j do
3      r_t^j ~ P(r_t | r_{t-1}^j);
4      Calculate w_t^j = P(R_t | r_t^j);
5  end
6  Normalize weights, w_t^{1:N_j} / \sum_{j=1}^{N_j} w_t^j;
7  X_t = RESAMPLE{X_t};

Algorithm 1: SIR algorithm for particle filtering [30].

4.6. Iterative methods for TDE-based source location estimation

A straightforward but computationally expensive approach for source localization is to exhaustively find the maximum value of the SLF. The SRP-PHAT is perhaps the most common way of building the SLF, so many algorithms, including the following ones, have been developed to reduce the computational burden. A stochastic [26] and a deterministic [27] way of reducing the number of SLF evaluations have been presented. These methods iteratively reduce the search volume that contains the maximum point until the volume is small enough. In [28], the fact that a time delay is inverse-mapped into multiple spatial coordinates was utilized to reduce the number of SLF grid evaluations by considering only the neighborhood of the $n$ highest TDE function values. In [29], the SLF is maximized initially at low frequencies that correspond to large spatial blocks. The maximum-valued SLF block is selected and further divided into smaller blocks by increasing the frequency range. The process is repeated until a desired accuracy is reached.

5. SEQUENTIAL SPATIAL LIKELIHOOD ESTIMATION

In the Bayesian framework, the SLF represents the noisy measurement distribution $P(R_t \mid r_t)$ at time frame $t$, where $R_t$ represents the measurement and $r_t$ the state. In the previous section, several means of building the measurement distribution were discussed. The next step is to estimate the source position using the posterior distribution $P(r_{0:t} \mid R_{1:t})$. The subindices emphasize that the distribution includes all the previous measurements and state information, unlike the iterative methods discussed above. The state $r_0$ represents a priori information. The first measurement is available at time frame $t = 1$.

It is possible to estimate the posterior distribution in a recursive manner [4]. This can be done in two steps, termed prediction and update. The prediction of the state distribution is calculated by convolving the posterior distribution with a transition distribution $P(r_t \mid r_{t-1})$, written as

$P(r_t \mid R_{1:t-1}) = \int P(r_t \mid r_{t-1}) P(r_{t-1} \mid R_{1:t-1}) \, dr_{t-1}$.  (20)

The new SLF, that is, $P(R_t \mid r_t)$, is used to correct the prediction distribution:

$P(r_t \mid R_{1:t}) = \dfrac{P(R_t \mid r_t) P(r_t \mid R_{1:t-1})}{\int P(R_t \mid r_t) P(r_t \mid R_{1:t-1}) \, dr_t}$,  (21)


Figure 4: A diagram of the meeting room (floor plan 4.53 × 3.96 m, ceiling height 2.59 m). The room contains furniture, a projector canvas, and three diffusors. Three microphone arrays are located on the walls. Talker positions are given in meters (talker 1 at (2.406, 2.97, 1.118), talker 2 at (0.507, 2.002, 0.965)), and they are identical in the simulations and in the real-data experiments.

where the denominator is a normalizing constant. For each time frame $t$, the two steps (20) and (21) are repeated.

In this work, a particle filtering method is used to numerically estimate the integrals involved [4, 30]. For a tutorial on PF methods, refer to [30]. PF approximates the posterior density with a set of $N_j$ weighted random samples $X_t = \{r_t^j, w_t^j\}_{j=1}^{N_j}$ for each frame $t$. The approximate posterior density is written as

$P(r_{0:t} \mid R_{1:t}) \approx \sum_{j=1}^{N_j} w_t^j \, \delta(r_{0:t} - r_{0:t}^j)$,  (22)

where the scalar weights $w_t^{1,\ldots,N_j}$ sum to unity, and $\delta$ is the Dirac delta function.

In this work, the particles $r_t^{1,\ldots,N_j}$ are 3D points in space. The specific PF method used is the sampling importance resampling (SIR), described in Algorithm 1. The algorithm propagates the particles according to the motion model, which is here selected as a dual-Gaussian distribution (Brownian motion). Both distributions are centered on the current estimate with standard deviations of $\sigma$ and $4\sigma$ (see Algorithm 1, Line 3). The new weights are calculated from the SLF on Line 4.
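A minimal sketch of one SIR iteration (Algorithm 1) is given below, with a dual-Gaussian (Brownian) proposal for Line 3 and SLF weighting for Line 4. The slf callable is assumed to be provided by one of the combination methods above; the even split between the two Gaussian step sizes and the systematic resampler are an illustrative reading of the paper, not its exact implementation.

```python
import numpy as np

def sir_step(particles, slf, sigma=0.05, rng=None):
    """One SIR iteration (Algorithm 1) for an (N_j, 3) array of particles.

    slf(r) returns the spatial likelihood P(R_t | r) for a 3D point r.
    """
    rng = rng or np.random.default_rng()
    n = len(particles)
    # Line 3: propagate with a dual-Gaussian (Brownian) motion model
    wide = rng.random(n) < 0.5
    step = np.where(wide[:, None], 4.0 * sigma, sigma)
    particles = particles + step * rng.standard_normal((n, 3))
    # Line 4: weight each particle by the spatial likelihood function
    weights = np.array([slf(r) for r in particles])
    # Line 6: normalize the weights
    weights /= weights.sum()
    # Line 7: systematic resampling
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)          # guard against floating-point round-off
    return particles[idx]
```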

The resampling is applied to avoid the degeneracy problem, where all but one particle have insignificant weight. In the resampling step, particles of low weight are replaced with particles of higher weight. In addition, a percentage of the particles are randomly distributed inside the room to notice events like the change of the active speaker. After estimating the posterior distribution, a point estimate is selected to represent the source position. Point estimation methods include the maximum a posteriori (MAP), the conditional mean (CM), and the median particle. If the SLF is multimodal, CM will be in the center of the mass and thus not necessarily near any source. In contrast, MAP and median will be inside a mode. Due to the large number of particles, the median is less likely to oscillate between different modes than MAP. In SIR, the MAP would be the maximum weighted particle from the SLF and thus prone to spurious peaks. Also, the MAP cannot be taken after the resampling step since the weights are effectively equal. Therefore, the median is selected as the source state estimate:

$\hat{r}_t = \operatorname{median}\{r_t^1, r_t^2, \ldots, r_t^{N_j}\}$.  (23)

6. SIMULATION AND RECORDING SETUP

A dialogue situation between talkers is analyzed. The localization methods already discussed are compared using simulations and real-data measurements performed in a room environment. The simulation is used to analyze how the different TDE combination methods affect the estimation performance when noise and reverberation are added. The real-data measurements are used to verify the performance difference.

The meeting room dimensions are 4.53 × 3.96 × 2.59 m. The room layout and talker locations are illustrated in Figure 4. The room contains three identical microphone arrays. Each array consists of four microphones, and their coordinates are given in Table 1. The real room is additionally equipped with furniture and other small objects.

6.1. Real-data measurements

The measured reverberation time T60 of the meeting room is 0.25 seconds, obtained with the maximum-length sequence (MLS) technique [31] using the array microphones and a loudspeaker. A sampling rate of 44.1 kHz is used, with 24 bits per sample, stored in linear PCM format. The array microphones are Sennheiser MKE 2-P-C electret condenser microphones with a 48 V phantom feed.


Table 1: Microphone geometry for the arrays is given for each microphone (mm). The coordinate system is the same as used in Figure 4.

        Array 1                     Array 2                     Array 3
Mic   x     y     z     Mic   x     y     z     Mic   x     y     z
 1   1029  3816  1690    5   3127  3816  1715    9   3714   141  1630
 2   1405  3818  1690    6   3507  3813  1715   10   3335   144  1630
 3   1215  3819  2088    7   3312  3814  2112   11   3527   140  2030
 4   1215  3684  1898    8   3312  3684  1940   12   3517   270  1835

Figure 5: The real-data dialogue signal between two speakers is plotted from one microphone (amplitude versus time in seconds). The signal is annotated into "talker 1", "talker 2", and "silence" segments. The annotation is also illustrated. The talkers repeated their own sentence.

A 26-second dialogue between human talkers was recorded. The talkers uttered a predefined Finnish sentence and repeated the sentence in turns six times. The SNR is estimated to be at least 16 dB in each microphone. The recorded signal was manually annotated into three different classes: "talker 1", "talker 2", and "silence". Figure 5 displays the signal and its annotation. The reference position is measured from the talker's lips and contains some errors due to unintentional movement of the talker and the practical nature of the measurement.

6.2. Simulations

The meeting room is simulated using the image method [32]. The method estimates the impulse response $h_{i,n}(t)$ between the source $n$ and the receiving microphone $i$. The resulting microphone signal is calculated using (1). The reverberation time (T60) of the room is varied by changing the reflection coefficient of the walls $\beta_w$, and of the ceiling and floor $\beta_{c,f}$, which are related by $\beta_{c,f} = \sqrt{\beta_w}$. The coefficient determines the amount of sound energy reflected from a surface. Recordings with 10 different T60 values between 0 and 0.9 second are simulated with SNR ranging from −10 dB to +30 dB in 0.8 dB steps for each T60 value. The simulation signals consisted of 4 seconds of recorded babble. The active talker switches from talker 1 to talker 2 at time 2.0 seconds. The total number of recordings is 510. The T60 values are [0, 0.094, 0.107, 0.203, 0.298, 0.410, 0.512, 0.623, 0.743, 0.880]. These are median values of channel T60 values calculated from the impulse responses using Schroeder integration [33].
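A hedged sketch of how one simulated microphone signal can be assembled per (1): the image-method impulse response h is assumed to come from an external room-acoustics routine, and the white noise is scaled to the requested SNR. The scaling convention is an assumption, not a detail stated in the paper.

```python
import numpy as np

def simulate_mic_signal(source, h, snr_db, rng=None):
    """Microphone signal per (1): convolve the source signal with a room
    impulse response h (e.g. from an image-method simulator) and add
    white noise scaled so that 10*log10(P_signal / P_noise) = snr_db."""
    rng = rng or np.random.default_rng()
    clean = np.convolve(source, h)
    noise = rng.standard_normal(len(clean))
    p_signal = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    noise *= np.sqrt(p_signal / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + noise
```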

7. LOCALIZATION SYSTEM FRAMEWORK

The utilized localization system is based on the ASL framework discussed in this work. Microphone pairwise TDE functions are calculated inside each array with GCC-PHAT [19]. Pairwise GCC values are normalized between [0, 1] by first subtracting the minimum value and then dividing by the largest such GCC value of the array. A Hamming-windowed frame of size 1024 samples is utilized (23.2 milliseconds) with no overlapping between sequential frames. The microphones are grouped into three arrays, and each array contains four microphones, see Table 1. Six unique pairs inside each array are utilized. Microphone pairs between the arrays are not included in order to lessen the computational complexity.
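The per-array normalization described above can be sketched as follows; gcc_values is assumed to be the list of the six pairwise GCC-PHAT functions of one array, and the small constant guards against division by zero.

```python
import numpy as np

def normalize_array_gcc(gcc_values):
    """Normalize the pairwise GCC functions of one array to [0, 1]:
    subtract each function's minimum, then divide all of them by the
    largest resulting value within the array."""
    shifted = [g - g.min() for g in gcc_values]
    peak = max(g.max() for g in shifted) + 1e-12
    return [g / peak for g in shifted]
```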

The TDE function values are combined with the following schemes, which are considered for ASL:

(1) SRP-PHAT + PF: PHAT-weighted GCC values are summed to form the SLF (14), and the SIR-PF algorithm is applied.

(2) Multi-PHAT + PF: PHAT-weighted GCC values are multiplied together to form the SLF (15), and the SIR-PF algorithm is applied.

(3) Hamacher-PHAT + PF: PHAT-weighted GCC values are combined pairwise using the Hamacher t-norm (16), with parameter value $\gamma = 0.75$. The SIR-PF algorithm is then applied.

(4) MCCC + PF: PHAT-weighted GCC values are formed into a matrix (17), and the determinant operator is used to combine the pairwise array TDE functions (18). Multiplication is used to combine the resulting three array likelihoods together. In the simulation, multiplication produced better results than using the determinant operator for the array likelihoods. The SIR-PF algorithm is also applied.

The particle filtering algorithm discussed in Section 5 (SIR-PF) is used with 5000 particles. The systematic resampling was applied due to its favorable resampling quality and low computational complexity [34]. The particles are confined to the room dimensions and, in the real-data analysis, also between heights of 0.5–1.5 m to reduce the effects of ventilation noise. The 5000 particles have a Brownian motion model, with empirically chosen standard deviation $\sigma$ values of 0.05 and 0.01 m for the simulations and real-data experiments, respectively. The Brownian motion model was selected since the talkers are somewhat stationary. Different dynamic models could be applied if the talkers move [35]. The particles are uniformly distributed inside the room at the beginning of each run, that is, the a priori spatial likelihood function is uniform.

7.1. Estimator performance

The errors are measured in terms of root mean square (RMS) values of the 3D distance between the point estimate $\hat{r}_t$ and the reference position $r_t$. The RMS error of an estimator is defined as

$\mathrm{RMSE}\{\text{method}\} = \sqrt{\dfrac{1}{T} \sum_{t=1}^{T} \|\hat{r}_t - r_t\|^2}$,  (24)

where $t$ is the frame index, and $T$ represents the number of frames.
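For reference, the RMS error (24) over T frames of 3D estimates and reference positions can be computed directly; the array layout below is an assumption.

```python
import numpy as np

def rmse(estimates, references):
    """RMS error (24): estimates and references are (T, 3) arrays of
    3D positions, one row per time frame."""
    d = np.linalg.norm(estimates - references, axis=1)
    return np.sqrt(np.mean(d ** 2))
```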

In the real-data analysis, the time frames annotated as "silence" are omitted. A 0.3-second segment of data is omitted from the beginning of the simulation and after the speaker change to reduce the effects of particle filter convergence on the RMS error. Omitting nonspeech frames could be performed automatically with a voice activity detector (VAD), see, for example, [36].

7.2. Results for simulations

Results for the simulations using the four discussed ASL methods are given in Figures 6 and 7, for talker locations 1 and 2, respectively. The subfigures (a) to (d) represent the RMS error contours for each of the four methods. The x-axis displays the SNR of the recording, and the y-axis displays the reverberation time (T60) value of the recording. A large RMS error value indicates that the method does not produce meaningful results.

For all methods, talker location 1 results in better ASL performance than location 2. The results of location 1 are examined in detail.

The multi- and Hamacher-PHAT (intersection) methods clearly exhibit better performance. At +14 dB SNR, the intersection methods have RMSE ≤ 20 cm when the reverberation time T60 ≤ 0.4 second. In contrast, the SRP- and MCCC-PHAT attain the same error with T60 ≤ 0.2 second.

The results for talker location 2 are similar, except that there exists a systematic increase in RMS error. The decrease in performance is mainly caused by the slower convergence of the particle filter. At the start of the simulation, talker 1 becomes active and all of the particles are scattered randomly inside the room, according to the a priori distribution. When talker 2 becomes active and talker 1 silent, most of the particles are still at the talker 1 location, and only a percentage of the particles are scattered in the room. Therefore, the particle filter is more likely to converge faster to talker 1 than to talker 2, which is seen in the systematic increase of RMSE.

As evident from the larger area of the RMS error contour below 0.2 m, the multi- and Hamacher-PHAT increase the performance in both noisy and reverberant environments compared to the SRP- and MCCC-PHAT.

7.3. Results for real-data measurements

Since the location estimation process utilizes a stochastic method (PF), the calculations are repeated 500 times and then averaged. The averaged results are displayed for the four methods in Figure 8. The location estimates are plotted with a continuous line, and the active talker is marked with a dashed line. All methods converge to both speakers. The SRP-PHAT and MCCC-PHAT behave smoothly. The multi-PHAT and Hamacher-PHAT adapt to the switch of the active speaker more rapidly than the other methods and also exhibit rapid movement of the estimator compared to the SRP- and MCCC-PHAT methods.

The RMS errors of the real-data segment are SRP-PHAT: 0.31 m, MCCC-PHAT: 0.29 m, Hamacher-PHAT: 0.14 m, and multi-PHAT: 0.14 m. The performance in the real-data scenario is further illustrated in Figure 9. The percentage of estimates outside a sphere centered at the ground truth location of both talkers is examined. The sphere radius is used as a threshold value to determine if an estimate is an outlier. The Hamacher-PHAT outperforms the other methods. SRP-PHAT has 80.6% of estimates inside the 25 cm error threshold, the MCCC-PHAT has 81.8%, the Hamacher-PHAT has 93.1%, and the multi-PHAT has 92.4%.

The results agree with the simulations. The reason for the performance difference can be further examined by looking at the SLF shape. For this analysis, the SLFs are evaluated with a uniform grid of 5 cm density over the whole room area at three different elevations (0.95, 1.05, and 1.15 m). The marginal SLF is generated by integrating the SLFs over the z-dimension and time. The normalized marginal spatial likelihood functions are displayed in Figure 10. In the RMSE sense (24), the likelihood mass is centered around the true position $r$ in all cases. However, the Hamacher- and multi-PHAT likelihood distributions have greater peakiness with more likelihood mass concentrated around the talker. The SRP-PHAT and MCCC-PHAT have a large evenly distributed likelihood mass, that is, large variance. Note that only a single talker was active at a time, and the marginal SLFs are multimodal due to integration over the whole recording time.

8. DISCUSSION

The simulations use the image method, which simplifies the acoustic behavior of the room and source. The simulations neglect that the reflection coefficient is a function of the incident angle and frequency, and that the air itself absorbs sound [37]. The effect of the latter becomes more significant in large enclosures. The human talker is acoustically modeled as a point source. This simplification is valid for the simulations, since the data is generated using this assumption. In the real-data scenario, the sound does not originate from a


Figure 6: Simulation results for talker location 1. The panels show RMSE contours for (a) method 1, SRP-PHAT + PF; (b) method 2, multi-PHAT + PF; (c) method 3, Hamacher-PHAT + PF; and (d) method 4, MCCC-PHAT + PF. The four ASL methods are described in Section 7, and the RMS error is defined in Section 7.1. The signal SNR values range from −10 to 30 dB, with reverberation time T60 between 0 and 0.9 second, see Section 6. The contour lines represent RMS error values at steps [0.2, 0.5] m.

Figure 7: Simulation results for talker location 2. The panels show RMSE contours for (a) method 1, SRP-PHAT + PF; (b) method 2, multi-PHAT + PF; (c) method 3, Hamacher-PHAT + PF; and (d) method 4, MCCC-PHAT + PF. The four ASL methods are described in Section 7, and the RMS error is defined in Section 7.1. The signal SNR values range from −10 to 30 dB, with reverberation time T60 between 0 and 0.9 second, see Section 6. The contour lines represent RMS error values at steps [0.2, 0.5] m.


Figure 8: Real-data results averaged over 500 runs using the four methods described in Section 7 are plotted: (a) method 1, SRP-PHAT + PF; (b) method 2, multi-PHAT + PF; (c) method 3, Hamacher-PHAT + PF; (d) method 4, MCCC-PHAT + PF. The reference is also plotted with a dashed line. Refer to Figure 4 for the room geometry. The x-axis in each picture represents time in seconds. The y-axis displays the corresponding x, y, z coordinates of the result.

Figure 9: ASL performance as a function of the error threshold radius. The figure displays the percentage of the estimates (y-axis) falling outside of a sphere centered at the active speaker, for multi-PHAT + PF, SRP-PHAT + PF, MCCC-PHAT + PF, and Hamacher-PHAT + PF. The sphere radius is plotted on the x-axis (threshold value).
