
Motion Context Adaptive Fusion of Inertial and Visual Pedestrian Navigation

Jesperi Rantanen, Maija Mäkelä, Laura Ruotsalainen, Martti Kirkko-Jaakkola
Department of Navigation and Positioning
Finnish Geospatial Research Institute (FGI), Kirkkonummi, Finland

jesperi.rantanen@nls.fi

Abstract—We use motion context recognition to enhance the result of our infrastructure-free indoor navigation algorithm. Target applications are difficult navigation scenarios such as first responder, rescue, and tactical applications. Our navigation algorithm fuses inertial and visual navigation. A Random Forest classifier is trained with data from an Inertial Measurement Unit and from visual navigation to distinguish between walking, running, and climbing. This information is used both in pedestrian inertial navigation, to perform stationarity detection with an adaptive threshold, and in the particle filter fusion, to exclude visual data during climbing. The methods are evaluated in an indoor navigation test where a person wearing tactical equipment moves through a building. The results show an improvement in positioning accuracy, measured as loop closure error on the test track, especially when the movement is fast paced. The loop closure error was reduced on average by 4 % in two tests when the movement was slow and by 14 % when the movement was fast.

Index Terms—Context awareness, indoor navigation, inertial navigation, sensor fusion, visual odometry

I. INTRODUCTION

Navigation systems that work in any location are especially useful in, for example, first responder, rescue, and tactical applications, where pre-existing navigation infrastructure cannot be expected to be present indoors. Our research aims to develop an infrastructure-free indoor navigation and situational awareness system [1]. Freedom from infrastructure means using only self-contained sensors and no preinstalled equipment, such as Wi-Fi and Bluetooth positioning. Prior knowledge of the location, such as floorplans or maps, is not used either in infrastructure-free navigation. Developing an infrastructure-free navigation system is a challenging task but makes it possible for the system to be used anywhere.

Self-contained sensors include Inertial Navigation Systems (INS) that use Inertial Measurement Units (IMUs) to measure acceleration and turn rate. A camera is also a self-contained sensor and can be used in indoor navigation to obtain turn rate and translation estimates visually [2]. While cameras and INS can be used anywhere and require no infrastructure, they have the limitation that they can only estimate relative position.

Visual navigation and INS require a known starting location, and their result will inevitably drift away from the true location as time passes from the last known location update. For this reason it is necessary to use every possible method to increase the time during which the sensors can give an accurate position estimate.

This work was supported by the Scientific Advisory Board for Defence of the Finnish Ministry of Defence [project INTACT (INfrastructure-free TACTical situational awareness)] and the Finnish Geospatial Research Institute (FGI).

Situational awareness in navigation consists of all the information about the user's surrounding environment and conditions in addition to their location [3]. This can also be called the navigation context. Context information is valuable in itself, but information about the user's context can also be used to improve the navigation solution. Navigation systems always operate in some context they are built for, and many modern systems already navigate in a number of different contexts [4]. For example, a smartphone can be carried by a pedestrian, in a car, and so on [5]. A context adaptive system should be able to recognize such a change automatically.

It is difficult to develop an algorithm that is suited for variable conditions, especially in applications such as rescue and tactical operations. Instead, it may be useful to change the algorithm based on the situation, for example based on how the user moves [6]. Examples of this context adaptive navigation include works such as [7], where the navigation system seamlessly transfers between indoor and outdoor positioning systems when necessary. In [8] the authors detect features in indoor environments, such as elevators, escalators and stairs, through pedestrian activity recognition. This information is used to enhance pedestrian dead reckoning.

In this study we recognize the motion context using machine learning and use this knowledge to determine whether the user is running, walking, or climbing. Several studies have shown that it is possible to detect the motion context of a pedestrian – in other words, how the user moves – from sensor measurements, for example [1], [9]. This information has also been recognized and applied to inertial navigation algorithms in [10], [11]. We use machine learning to create a pedestrian navigation algorithm that can recognize the contexts of walking, running, and climbing and adapt based on the detected context.

We then fuse the results with visual navigation data in the particle filter presented in [12]. For the particle filter we use the visual data based on context: during climbing the visual data is not used.



In this study the objective is to use motion context information to create an adaptive navigation algorithm. We use machine learning to achieve better results in navigation but do not aim to create the best possible motion context classifier. Improving the navigation result, rather than achieving the most accurate classification, is the main goal of this paper, although the classification accuracy and the navigation result are certainly related.

We test our algorithm in a realistic tactical scenario with fast paced movement. This test scenario is chosen because we want to show that the navigation algorithm works in applications where infrastructure-free navigation is necessary: first responder, rescue, and tactical applications. Moreover, in this study we show that context adaptive inertial navigation and context adaptive fusion of visual and inertial navigation are feasible and useful even when the person is moving at a fast pace and carrying a lot of equipment.

In Section II we discuss the related scientific work in the fields of motion context recognition, pedestrian inertial navigation, and visual navigation. In Section III we outline the methods in our context adaptive navigation. In Section IV we discuss the tests we did to evaluate these methods and the test results. In Section V we discuss future work and conclude this paper.

II. RELATED WORK

A. Motion context recognition

In general, an activity recognition system consists of a sensing module, a feature processing and selection module, and a classification module [13]. Wearable accelerometers, as well as sensors such as magnetometers, barometers, thermometers and microphones, have been used for this purpose [14]–[16].

Many of the sensors used for motion recognition in previous research are included in our infrastructure-free navigation system [1], so it is fairly straightforward to use them as a sensing module in an activity recognition system designed for improving navigation outcome.

The raw sensor outputs are not usually used as such in motion recognition, but are transformed into features computed from measurements made over a time period of predefined length. Often these features are time domain features, such as means, variances, medians and other statistical measures of the sensor output [16]. Different frequency domain features, for example frequency peaks and peak power estimates, spectral centroids and spreads, have also been used [13]–[15], [17].
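As an illustration, features of this kind could be computed per window as in the following sketch. This is a generic example in Python; the paper does not enumerate its exact feature set, so the chosen statistics and names here are assumptions.

import numpy as np

def window_features(signal, fs=100.0):
    """signal: 1-D array of one sensor channel over one window (e.g. 1 s at 100 Hz)."""
    feats = {
        "mean": np.mean(signal),           # time domain statistics
        "variance": np.var(signal),
        "median": np.median(signal),
    }
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = spectrum ** 2
    feats["peak_freq"] = freqs[np.argmax(power)]   # dominant frequency
    feats["peak_power"] = np.max(power)
    total = np.sum(power)
    if total > 0:
        centroid = np.sum(freqs * power) / total   # spectral centroid
        feats["spectral_centroid"] = centroid
        feats["spectral_spread"] = np.sqrt(np.sum(((freqs - centroid) ** 2) * power) / total)
    else:
        feats["spectral_centroid"] = 0.0
        feats["spectral_spread"] = 0.0
    return feats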

Usually some feature selection needs to be performed, as the best feature set for the problem at hand often cannot be known in advance [18].

In general, no machine learning algorithm is universally better than another; this is known as the No Free Lunch Theorem [19]. In practice this means that the best suited algorithm for motion recognition has to be found through testing, separately for each problem at hand. In existing research, for example, decision trees [14]–[18], Bayesian classifiers [15], [16], [18], [20] and Support Vector Machines (SVM) [11], [15], [18], [20] have been used for motion recognition. In this work we use a Random Forest (RF) classifier, which is an ensemble of decision trees. For more details see, for example, [19].

B. Zero velocity update

In inertial navigation the simplest method is to integrate the position from the acceleration and turn rate observations of an IMU. However, this method suffers from rapid accumulation of error due to the double integration of noisy measurements and the lack of absolute position updates.

Mounting the inertial sensor on the foot of the user makes it possible to remove some of the accumulated error [21]. Error reduction is possible because during each step there is a moment when both the ball and the heel of the foot are touching the ground and the foot is nearly stationary. The velocity should be zero during this stationary period, so a Zero Velocity Update (ZUPT) can be made. The ZUPT can be used as a pseudo measurement in the navigation algorithm, and the error accumulated in the velocity can be removed.

ZUPT based pedestrian navigation is very dependent on effective detection of the stationary periods. Various methods have been suggested. In [22] several detectors were tested, including the variation of acceleration, the magnitude of acceleration, the angular rate energy, and the generalized likelihood ratio test, which the authors derived and named the Stance Hypothesis Optimal detection (SHOE). The last two were found to be the most reliable.

Angular rate is also used in, for example, [23]–[25].

More adaptive ways to detect the stationary period have been explored because a simple threshold does not always work for all movement types, such as running and walking. In [26] an adaptive threshold taking into account both velocity and acceleration is tested on running, walking and crawling motions. In [27] a hidden Markov model is used to segment the phases of human gait.

In [11] the detection threshold is based on the user's mode of motion (running or walking), which is first detected from the IMU measurements. In this study we explore a similar motion context recognition based adaptive ZUPT method but include climbing in the motion contexts and fuse the results with visual navigation.

C. Visual navigation

Due to the limitations of inertial navigation we use a sensor fusion approach and augment the inertial navigation solution with visual navigation. A video camera can be used as a visual gyroscope and odometer [2]. This visual navigation system works through feature extraction to find the turn rate and translation from consecutive images. The scale problem caused by using a monocular camera is solved in this method by measuring the height of the camera above the ground. This distance remains relatively unchanged if the camera is carried on the user's chest pointing forwards while the person is walking or running.

The visual gyroscope and visual odometer measurements can be used in sensor fusion to support other sensors such as the INS. This type of visual navigation does not need pre-installed infrastructure to operate.
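As a rough illustration of the idea, one relative-motion step of such a monocular visual gyroscope and odometer could be sketched with OpenCV as below. This is not the method of [2]: the feature matching, the ground-plane estimation and the variable plane_dist_unscaled are assumed to be provided by other code, and all names are ours.

import cv2
import numpy as np

def visual_step(pts_prev, pts_curr, K, camera_height_m, plane_dist_unscaled):
    """pts_prev, pts_curr: Nx2 float arrays of matched pixel coordinates; K: 3x3 intrinsics."""
    E, _ = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t_unit, _ = cv2.recoverPose(E, pts_prev, pts_curr, K)

    # Heading change between the frames (visual gyroscope): yaw from the rotation matrix.
    heading_change = np.arctan2(R[1, 0], R[0, 0])

    # Monocular translation is recovered only up to scale. The scale is fixed by
    # requiring the reconstructed ground plane to lie camera_height_m below the
    # camera; plane_dist_unscaled is the camera-to-ground distance in the
    # unscaled reconstruction and is assumed to be estimated elsewhere.
    scale = camera_height_m / max(plane_dist_unscaled, 1e-9)
    translation = scale * t_unit.ravel()
    return heading_change, translation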


The visual gyroscope and odometer are prone to errors if the field of vision is obscured. An obscured field of view may result in a lack of features in the images, which can cause the visual gyroscope and odometer to produce erroneous observations or no observations at all. In [28] an error detection method is developed which can detect these instances based on a value called Line Dilution of Precision (LDOP). However, as stated in [12], errors remain after the LDOP analysis and need to be considered. Moreover, the assumptions made to solve the scale problem do not hold in certain situations, such as when the camera cannot see the ground in front of it.

III. METHODS

In this section we discuss the methods that we use in the tests to obtain motion context recognition and navigation data. First the selection and training of the classifier is described. Then we discuss the details of the navigation algorithm, parts of which were described in general terms in Section II above.

A. Motion context recognition

In this work we use a Random Forest classifier trained using the test data described in Section IV. We use 374 labeled training samples, containing 330 samples of walking, 9 samples of running and 35 samples of climbing, to train an ensemble of 500 random trees. Each sample consists of features computed from sensor readings obtained during a time period of one second. Before training we replace missing values in the data with the corresponding feature means of the training data.
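For illustration, the classifier setup described above could be reproduced as in the following sketch using scikit-learn; the paper does not state which implementation or hyperparameters beyond the ensemble size were used, so those details are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer

def train_context_classifier(X_train, y_train):
    """X_train: one row of features per one-second window; y_train: labels 'walk', 'run', 'climb'."""
    # Replace missing feature values with the feature means of the training data,
    # as described in the text.
    imputer = SimpleImputer(strategy="mean")
    X_filled = imputer.fit_transform(X_train)

    # 500 trees, matching the ensemble size reported in the paper.
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_filled, y_train)
    return imputer, clf

def classify_window(imputer, clf, features):
    # features: 1-D array of features from one one-second window.
    return clf.predict(imputer.transform(np.asarray(features).reshape(1, -1)))[0]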

B. Adaptive ZUPT threshold

The zero velocity detector used in this work is based on the angular rate energy detector, similarly as, for example, in [22]. The test statistic is calculated from the mean of squares of the angular velocities in a fixed time window, normalized with the sensor noise level. The length of the time window in this study covered three or four observations from the sensors, which were updating at a rate of 100 Hz.
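A minimal sketch of such an angular-rate-energy test statistic is given below. The exact normalization in the actual implementation may differ, and the gyro noise level sigma_g is an assumed known sensor parameter.

import numpy as np

def angular_rate_energy(gyro_window, sigma_g):
    """gyro_window: Wx3 array of angular rates [rad/s] over a short window
    (three to four samples at 100 Hz in this study); sigma_g: gyro noise std [rad/s]."""
    return np.mean(np.sum(gyro_window ** 2, axis=1)) / sigma_g ** 2

def is_stationary(gyro_window, sigma_g, threshold):
    # The foot is declared stationary (a ZUPT is triggered) when the test
    # statistic falls below the context-dependent threshold.
    return angular_rate_energy(gyro_window, sigma_g) < threshold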

For the single threshold comparison we used the threshold value 4/3 ∗ 10000. This value was chosen based on previous experimentation to work adequately for walking. For the motion context adaptive inertial navigation the threshold values were set to 4/3 ∗ 10000, 20000, and 50000 for walking, climbing, and running, respectively. The walking threshold is the same as in the single threshold test. The running threshold was found based on experimentation with fast paced movement. For climbing we use a threshold value higher than for walking, since the foot may move more unpredictably while climbing than while walking. Based on trials with the test data we chose not to use a threshold value as high as for running, because climbing is usually relatively slow paced movement.
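The context-to-threshold mapping can be illustrated with the following sketch, using the values reported above; the fallback behaviour for an unknown context is our assumption, not stated in the paper.

ZUPT_THRESHOLDS = {
    "walk": 4.0 / 3.0 * 10000,   # same as the single-threshold baseline
    "climb": 20000.0,            # foot moves less predictably than in walking
    "run": 50000.0,              # fast motion rarely drops below a low threshold
}

def zupt_threshold_for(context):
    # Fall back to the walking threshold if the context is unknown (assumption).
    return ZUPT_THRESHOLDS.get(context, ZUPT_THRESHOLDS["walk"])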

For each second, the ZUPT threshold used is determined by the current motion context of the user. The context is given by the previously trained motion context classifier, which categorizes the current sensor measurements as either walking, running or climbing. This motion context then determines the ZUPT threshold that is used during navigation.

Fig. 1. Flowchart of the presented context adaptive navigation algorithm.

Using the ZUPTs, the foot-mounted INS data is processed with an extended Kalman filter algorithm to obtain a navigation solution.
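The following sketch shows how a ZUPT can enter a Kalman filter as a pseudo-measurement of zero velocity. The state ordering and the noise value are illustrative assumptions and do not describe the authors' exact filter design.

import numpy as np

def zupt_update(x, P, vel_idx, r_zupt=0.01):
    """x: state vector; P: covariance; vel_idx: indices (length 3) of the velocity states."""
    H = np.zeros((3, x.size))
    H[np.arange(3), vel_idx] = 1.0           # measurement picks out the velocity states
    R = (r_zupt ** 2) * np.eye(3)            # pseudo-measurement noise
    z = np.zeros(3)                          # observed velocity is zero during stance

    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new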

C. Particle filter fusion

The INS navigation result of the extended Kalman filter is fused with the visual gyroscope and odometer observations in a particle filter to refine the navigation solution further. The particle filter algorithm we use to fuse the inertial and visual navigation data is the same as presented in [12]. The navigation solution of the INS is divided into translations and heading changes for each video camera frame, to be fused with the corresponding estimates from the visual gyroscope and odometer. The frame rate of the video was 10 Hz. Using a particle filter is justified because it allows the use of non-Gaussian error models, which improves the results in the fusion of inertial and visual navigation [12].

Fig. 2. Equipment and sensors worn by the test persons. On the left, orange colored IMUs are visible on the insteps of the feet. On the right, the torso-mounted camera and IMU are shown.

The particle filter fusion already discards the visual data when the LDOP value is too high. In this study we add context awareness to this algorithm and also choose to discard the visual data in certain contexts. As mentioned in Section II-C, the visual data may be unreliable in certain situations, such as when the user is climbing and the camera faces the ladder and the wall. The method presented in this section is illustrated as a flowchart in Fig. 1.
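The context-gated measurement step can be sketched as follows. The gating conditions follow the description above, but the weighting and error models are placeholders, not the actual filter of [12]; likelihood_fn and the variable names are assumptions.

import numpy as np

def use_visual_measurement(ldop, ldop_limit, context):
    # Skip the visual observation if LDOP flags it (as in [12]) or if the
    # detected context is climbing (proposed here).
    return ldop <= ldop_limit and context != "climb"

def measurement_update(particles, weights, visual_obs, ldop, ldop_limit, context,
                       likelihood_fn):
    """particles: Nx3 array [x, y, heading]; weights: length-N array."""
    if not use_visual_measurement(ldop, ldop_limit, context):
        return weights                       # keep prior weights, no visual update
    # likelihood_fn evaluates p(visual_obs | particle) under the chosen error model.
    weights = weights * np.array([likelihood_fn(visual_obs, p) for p in particles])
    s = weights.sum()
    return weights / s if s > 0 else np.full(len(weights), 1.0 / len(weights))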

IV. TESTS AND RESULTS

The tests were performed in co-operation with the Finnish Defence Forces. The tests consisted of recording the data of navigation sensors worn by a test person and post-processing it. These sensors included three IMUs, attached to both feet and to the chest of the person. The IMU sensors recorded acceleration and turn rate data for motion context recognition and navigation. In addition to the IMUs, the test person carried a forward pointing video camera mounted on the torso. From this video we extracted the visual odometer and visual gyroscope data, that is, the estimates of translation and turn rate. The test equipment worn by the test persons is displayed in Fig. 2. In the image the person is also wearing other measurement sensors that were not used in this study.

Our goal is to test how the adaptive navigation algorithms help in a difficult navigation scenario such as a first responder, rescue, or tactical operation. The test persons wore tactical equipment to make their movement more appropriate for the application. The test track started outdoors, immediately entered a building, travelled through several smaller rooms, narrow corridors and one large hall, and ended outside. The track ended at the same location where it started so that the loop closure error of the navigation solution could be evaluated.

During the route the person also climbed up, moved sideways and then descended on a ladder that covered part of one wall of the large hall. An approximate illustration of the route and the location is shown in Fig. 3. The person travelled the route twice. On the first round they walked slowly and stopped occasionally. On the second round they moved faster and sprinted occasionally to resemble more realistic motion.

One round of the test track was approximately 170 meters in length.

Fig. 3. Approximate illustration of the test site outline, the planned test route, and its features. The arrows point in the direction the route was travelled. The start and finish are outside. Grey rectangles designate mockup airplane frames in the large hall area that the person goes through at one point.

TABLE I
CONFUSION MATRICES FOR THE RF CLASSIFIER

Classification for Person 1
                      Classified as
True class      Walk    Climb    Run
Walk             372       17     15
Climb              0       29      0
Run                3        0      6

Classification for Person 2
                      Classified as
True class      Walk    Climb    Run
Walk             307        1     16
Climb              7       35      0
Run                0        0     13

Two test persons produced in total three sets of test data, each containing two rounds of the test track. Test person 1 recorded the first and third sets. Test person 2 recorded the second set. The third set was not used for navigation but was used as training data for the motion context classification.

A. Motion context recognition

The data was partitioned into windows of 1 s each, and inertial and visual features were extracted from these time windows. From the video we determined the motion type, i.e. the context, that each second of data represented. These features, connected to a context label of running, walking or climbing, were then used in classification. A Random Forest classifier was trained using the third set of data, recorded by test person 1. This model was then tested on the two other datasets, one obtained by test person 1 and one by test person 2. The confusion matrices are shown in Table I.

The overall classification accuracy is high: approximately 92 % for the first dataset and 94 % for the second. Notably, the accuracy for the second set was relatively good considering that the test person was different from the one in the training set.


Fig. 4. Test person climbing a ladder. During the time they climbed up, moved sideways along the ladder, and climbed down, the field of vision of the torso-mounted camera is limited to the wall directly in front of them.

TABLE II
RESULTING LOOP CLOSURE ERRORS FROM PROCESSING THE NAVIGATION TEST DATA WITH AND WITHOUT CONTEXT INFORMATION

Movement pace   Test person   Loop closure error [m]
                              No context info   Context info
Slow            Person 1      5.07              4.87
                Person 2      3.89              3.71
Fast            Person 1      5.19              4.65
                Person 2      4.72              3.82

However, the high classification accuracy is largely due to most of the data belonging to the walking class. Running and climbing are not recognized as well as walking. The trained classifier is used during the navigation tests to determine the current motion type and thus the correct ZUPT threshold.

B. Navigation

We tested using the motion context information obtained in the previous test with our indoor navigation algorithms. The navigation solution was calculated from both test sets that were used to test the classifier. Each second of data received a context label based on the motion type the classifier found during navigation. The information about the recognized motion type was used either to modify the threshold value for the ZUPT, in the case of running and climbing motion, or to omit the visual navigation data, in the case of climbing motion, pictured in Fig. 4.

We processed the navigation data both with the recognized context information and without it to evaluate the difference. With context information we used different ZUPT thresholds for walking, climbing and running motions in the extended Kalman filter for the inertial navigation solution. We also omitted camera observations while climbing in the particle filter that fuses the INS solution with the visual navigation data. Without context information in the processing we used only a single threshold and relied on the LDOP values to recognize poor visual navigation observations. Each run of the particle filter produces a slightly different result depending on the random number seed used in the processing.

Fig. 5. Comparison of one processed navigation result from slow paced movement without context information (a) and with context information (b). Axes are in meters.

The navigation results for the particle filter are computed as the average of 200 processing runs of the particle filter with different random number seeds. Each processing run used 2000 particles.

The results show that the navigation result improved after introducing the context information to the sensor fusion. However, for the first test round, during which the person moves at a slow pace, the results are quite similar with and without the context information. We measured the loop closure error, which is the difference in position between the first and last measurements made at the same location. The results are the mean loop closure errors of 200 processing runs. For the slow paced round the mean loop closure error was 5.07 m without context use and 4.87 m with context use for the first test person. For the second test person the mean loop closure error in the slow paced round was 3.89 m without context use and 3.71 m with context use. Both had approximately a 0.2 m, or on average 4 %, improvement with context use. The results are compared in Table II.

There is walking and climbing but no running in this slow paced test. The lack of running explains why the context use does not improve the results more in the slow paced test.


The resulting navigation route and loop closure error of a single processing run are shown as an example in Fig. 5. This route was obtained by the second test person on the slow paced test round. The plots of the route processed without context use (a) and with context use (b) are quite similar, although there is some difference in the loop closure error. The starting direction is arbitrary but the same for both plots. This is one of the 200 processing runs, shown as an example of the performance. The route can be compared to the actual route shown in Fig. 3.

In the second round of testing, where the person moved at a fast pace, the difference between using and not using context information is considerable. For the fast paced round the mean loop closure error was 5.19 m without context use and 4.65 m with context use for the first test person. For the second test person the mean loop closure error in the fast paced round was 4.72 m without context use and 3.82 m with context use. The decrease in loop closure error with context use is 0.54 m and 0.90 m, respectively, or on average 14 %. The results of this fast paced test are compared with the results of the previous slow paced test in Table II.

One of the 200 processing results from the fast paced test round by test person 2 is shown as an example in Fig. 6. Compared to Fig. 5, the results without context use (a) and with context use (b) differ more. Again the starting direction is arbitrary but the same for both plots. The starting location in this figure is different for the two plots since it is determined by the end location of the first round. However, this does not affect the result, as only the difference between the start and end points of the round is considered.

Most notably, in the upper right corner of plot (a) in Fig. 6 there is a spike in the route. This is due to the person running fast during this part of the test. Because of the low threshold value the algorithm does not trigger a ZUPT for a long time, causing error to accumulate. When the stationary foot is finally detected, the navigation algorithm is able to remove some of the accumulated error. However, some of the error remains, as can be seen in the loop closure error, the difference between the start and end points. The plot for the navigation result processed with context use (b) is cleaner and the loop closure error is smaller due to ZUPTs being made more often.

V. CONCLUSIONS

We tested our motion context adaptive navigation algorithm in a realistic tactical scenario with two test persons wearing application appropriate equipment. The purpose of the test was to evaluate our navigation algorithm for the applications that would benefit most from infrastructure-free navigation: first responder, rescue and tactical applications. The loop closure error results of the navigation tests show an improvement in location accuracy when context information is used in the processing. The loop closure error was reduced on average by 4 % when the movement was slow and by 14 % when the movement was fast. While using context makes the system more complex, this complexity is justified. Context information, such as whether the user is walking, running, or lying down, is already valuable in itself, for example to the command of a rescue operation monitoring the status of the users.

Fig. 6. Comparison of one processed navigation result from fast paced movement without context information (a) and with context information (b). Axes are in meters.

This information should also be used to aid the navigation.

The classification accuracy is good overall but better for walking than for running and climbing. This is likely because the training data contained many times more walking samples than climbing or running samples. However, the objective of this paper was to create an adaptive navigation algorithm, not to create the best possible motion context classifier. The objective was achieved, since navigation accuracy improved with the adaptive algorithm even though not all running or climbing instances were correctly detected. Not every instance needs to be correctly classified as long as enough of them are recognized so that regular ZUPT observations can be made. Context information also appears helpful for visual navigation. The camera is a useful sensor since it is self-contained and can be fused with the INS while having different sources of error. The LDOP value does not detect every instance where the visual odometer and gyroscope values should not be used. In some contexts the camera image should not be used at all, for example while the user climbs a ladder or crawls on the ground, when the field of vision of the camera is too limited.


Context recognition was able to remove some of the poor visual observations during climbing that were not directly detected by the LDOP.

Further research is required to make the infrastructure-free navigation system suited for all situations. For example, crawling should be added as a context to the system. Crawling motion is problematic for both visual and foot-mounted inertial navigation. The motion needs to be recognized so that some other navigation method can be used during crawling. Context information about climbing up or down could also be used to improve vertical navigation. There are many ways new contexts could be introduced to the current navigation system. The results of using motion context information in adaptive navigation are promising.

REFERENCES

[1] L. Ruotsalainen, L. Chen, M. Kirkko-Jaakkola, S. Gröhn, and H. Kuusniemi, "INTACT - Towards infrastructure-free tactical situational awareness," European Journal of Navigation, vol. 14, no. 4, pp. 33–38, 2016.

[2] L. Ruotsalainen, H. Kuusniemi, M. Z. H. Bhuiyan, L. Chen, and R. Chen, "A two-dimensional pedestrian navigation solution aided with a visual gyroscope and a visual odometer," GPS Solutions, vol. 17, no. 4, pp. 575–586, 2013.

[3] R. Chen and R. Guinness, Geospatial Computing in Mobile Devices. Artech House, 2014.

[4] P. D. Groves, H. Martin, K. Voutsis, D. Walter, and L. Wang, "Context detection, categorization and connectivity for advanced adaptive integrated navigation," in Proceedings of the ION GNSS+, Nashville, Tennessee. The Institute of Navigation, 2013.

[5] N. El-Sheimy and C. Goodall, "Everywhere navigation: Integrated solutions on consumer mobile devices," Inside GNSS, vol. 6, no. 5, pp. 74–82, 2011.

[6] J. Rantakokko, J. Rydell, P. Strömbäck, P. Händel, J. Callmer, D. Törnqvist, F. Gustafsson, M. Jobs, and M. Gruden, "Accurate and reliable soldier and first responder indoor positioning: multisensor systems and cooperative localization," IEEE Wireless Communications, vol. 18, no. 2, pp. 10–18, 2011.

[7] P. Peltola, C. Hill, and T. Moore, "Adaptive real-time dual-mode filter design for seamless pedestrian navigation," in Proceedings of the ICL-GNSS 2017, Nottingham, United Kingdom, Jun. 2017.

[8] A. Török, A. Nagy, L. Kováts, and P. Pach, "Drear - towards infrastructure-free indoor localization via dead-reckoning enhanced with activity recognition," in Next Generation Mobile Apps, Services and Technologies (NGMAST), 2014 Eighth International Conference on. IEEE, 2014, pp. 106–111.

[9] K. Frank, M. J. Vera Nadales, P. Robertson, and M. Angermann, "Reliable real-time recognition of motion related human activities using MEMS inertial sensors," 2010.

[10] L. Pei, J. Liu, R. Guinness, Y. Chen, H. Kuusniemi, and R. Chen, "Using LS-SVM based motion recognition for smartphone indoor wireless positioning," Sensors, vol. 12, no. 5, pp. 6155–6175, 2012.

[11] B. Wagstaff, V. Peretroukhin, and J. Kelly, "Improving foot-mounted inertial navigation through real-time motion classification," in 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sep. 2017, pp. 1–8.

[12] L. Ruotsalainen, M. Kirkko-Jaakkola, J. Rantanen, and M. Mäkelä, "Error modelling for multi-sensor measurements in infrastructure-free indoor navigation," Sensors, vol. 18, no. 2, p. 590, 2018.

[13] T. Choudhury, S. Consolvo, B. Harrison, J. Hightower, A. LaMarca, L. LeGrand, A. Rahimi, A. Rea, G. Borriello, B. Hemingway, P. Klasnja, K. Koscher, J. A. Landay, J. Lester, D. Wyatt, and D. Haehnel, "The Mobile Sensing Platform: An Embedded Activity Recognition System," IEEE Pervasive Computing, vol. 7, no. 2, pp. 32–41, Jun. 2008.

[14] J. Pärkkä, M. Ermes, P. Korpipää, J. Mäntyjärvi, J. Peltola, and I. Korhonen, "Activity Classification Using Realistic Data From Wearable Sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 119–128, Jan. 2006.

[15] A. Mannini and A. M. Sabatini, "Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers," Sensors, vol. 10, pp. 1154–1175, 2010.

[16] U. Maurer, A. Smailagic, D. P. Siewiorek, and M. Deisher, "Activity Recognition and Monitoring Using Multiple Sensors on Different Body Positions," in Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks, 2006.

[17] L. Ruotsalainen, R. Guinness, S. Gröhn, L. Chen, M. Kirkko-Jaakkola, and H. Kuusniemi, "Situational Awareness for Tactical Applications," in Proceedings of the ION GNSS+, Portland, Oregon, Sep. 2016.

[18] R. Guinness, "Context Awareness for Navigation Applications," Doctoral dissertation, Tampere University of Technology, Tampere, Finland, 2015.

[19] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. John Wiley & Sons, Inc., 2001.

[20] J. Parviainen, J. Bojja, J. Collin, J. Leppänen, and A. Eronen, "Adaptive Activity and Environment Recognition for Mobile Phones," Sensors, vol. 14, pp. 20753–20778, 2014.

[21] E. Foxlin, "Pedestrian tracking with shoe-mounted inertial sensors," IEEE Computer Graphics and Applications, vol. 25, no. 6, pp. 38–46, Nov. 2005.

[22] I. Skog, P. Händel, J.-O. Nilsson, and J. Rantakokko, "Zero-velocity detection—An algorithm evaluation," IEEE Transactions on Biomedical Engineering, vol. 57, no. 11, pp. 2657–2666, Jul. 2010.

[23] L. Ojeda and J. Borenstein, "Non-GPS navigation for security personnel and first responders," The Journal of Navigation, vol. 60, no. 3, pp. 391–407, 2007.

[24] J. Rantakokko, P. Strömbäck, E. Emilsson, and J. Rydell, "Soldier positioning in GNSS-denied operations," in Proc. of the Sensors and Electronics Technology Panel Symposium (SET-168) on Navigation Sensors and Systems in GNSS Denied Environments, Izmir, Turkey, 2012.

[25] R. Zhang, F. Höflinger, and L. Reindl, "Inertial sensor based indoor localization and monitoring system for emergency responders," IEEE Sensors Journal, vol. 13, no. 2, pp. 838–848, 2013.

[26] U. Walder and T. Bernoulli, "Context-adaptive algorithms to improve indoor positioning with inertial sensors," in Indoor Positioning and Indoor Navigation (IPIN), 2010 International Conference on. IEEE, 2010, pp. 1–6.

[27] S. K. Park and Y. S. Suh, "A zero velocity detection algorithm using inertial sensors for pedestrian navigation systems," Sensors, vol. 10, no. 10, pp. 9163–9178, 2010.

[28] L. Ruotsalainen, J. Bancroft, and G. Lachapelle, "Mitigation of attitude and gyro errors through vision aiding," in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on. IEEE, 2012, pp. 1–9.
