
MSc Thesis

Ekundayo Olufemi A.

Contactless Measurement in Smart Environment for the Elderly People Using Kinect v2 Sensor

School of Computer Science

International Master's Degree Programme in Information Technology

February 2018


Foreword

This thesis was done at the School of Computing, University of Eastern Finland, during autumn 2017.

I want to extend my gratitude to my parents, friends, teachers, and especially my supervisor Prof. Pekka Toivanen.


List of abbreviations

AAL Ambient Assisted Living
API Application Programming Interface
CMOS Complementary Metal-Oxide-Semiconductor
GDL Gesture Description Language
IR Infrared
IMU Inertial Measurement Unit
LIDAR Light Detection and Ranging
RFID Radio-Frequency Identification
RGB Red Green Blue
SDK Software Development Kit
SEAL Smart Environment for Assisted Living
SIFT Scale-Invariant Feature Transform
SSIM Structural Similarity Index-based Measure
TOF Time of Flight


Contents

1 Introduction to the Kinect v2 Sensor ... 1
1.1 Evolution of Kinect Sensor ... 1
1.2 Technology of Kinect ... 2
1.3 Kinect (2.0, 2013) – Designed for Xbox One ... 3
1.4 Non-Commercial Kinect Designed for Microsoft Windows ... 3
1.5 Kinect Versions from 1.5 to 1.8 ... 3
1.6 Kinect v2 ... 3
1.7 Significance of Kinect v2 and Assisted Living Facilities ... 4
1.8 Potential Use of Kinect v2 in Assisted Living ... 4
1.8.1 Different Spheres for Application of Kinect v2 ... 5
2 Review of related literature ... 6
2.1 Introduction ... 6
2.2 Use of Wireless Sensor Networks ... 6
2.3 Kinect v2 Depth Sensor ... 6
2.4 Use in Karate Techniques ... 6
2.5 Advantages of v2 over v1 ... 7
2.6 Pose Estimation of Human Body Part Using Multiple Cameras ... 8
2.7 An Innovative Hearing System Utilizing the Human Body ... 8
2.8 Accuracy and Reliability of Optimum Distance in Kinect v2 ... 9
2.9 Integration of Microsoft Kinect with Simulink ... 10
2.10 Utility and Usability of Kinect v2 and Leap Motion ... 12
2.11 A Depth-Based Fall Detection System Using a Kinect Sensor ... 13
2.12 Experimental Studies on Human Body ... 15
2.13 Body Movement Analysis and Recognition ... 15
2.14 An Integrated Platform for Live 3-D Human Reconstruction ... 18
2.15 Automated Training and Maintenance through Kinect ... 19
2.16 Kinect in the Kitchen and Other Practical Home Environments ... 20
2.17 Kinect Gaming and Physiotherapy ... 21
3 Research Methodology ... 23
3.1 Introduction ... 23
3.2 Model of the Research ... 24
3.3 Research Design ... 24
3.4 Primary Data ... 25
3.5 Summary ... 25
4 Data Analysis and Presentation ... 26
4.1 Introduction ... 26
4.2 Smart Home Environments ... 27
4.3 Movement Detection Models ... 39
4.4 Skeletal Tracking Systems ... 53
5 Findings and Conclusion ... 57
5.1 Findings ... 57
5.2 Conclusions ... 60
References ... 62
Appendices
Appendix 1: Checklist (2 pages)


Table of Figures and Illustrations

Figure 1- Xbox 360, Kinect v1. Klesistern (2014) ... 1
Figure 2- CMOS sensor, PrimeSense. Journal of Sensors (2014) ... 2
Figure 3- Kinect sensor components. Journal of Sensors (2013) ... 4
Figure 4- GDL illustration. Teng et al. (2013) ... 7
Figure 5- Medical application. Lim et al. (2014) ... 9
Figure 6- Simulink Kinect. Joshua et al. (2014) ... 11
Figure 7- Leap Motion sensor. Hughes et al. (2015) ... 12
Figure 8- Motion sensor illustration. Hughes et al. (2015) ... 13
Figure 9- Fall detection illustration. Samuele et al. (2014) ... 14
Figure 10- Movement analysis glove. Yang et al. (2012) ... 16
Figure 11- Humanoid robotics illustration. Clingal et al. (2014) ... 16
Figure 12- RGB-D illustration. Immitrios et al. (2014) ... 19
Figure 13- Smart home system illustration. Berkley University Journal (2013) ... 20
Figure 14- Pose experiments, Kinect tests (2013) ... 23
Figure 15- Research design ... 25
Figure 16- Gradinaru (2016) graphical representation of system ... 26
Figure 17- Conceptual framework of a smart home environment ... 28
Figure 18- Smart home environment layered description ... 28
Figure 19- Smart home environment layout ... 30
Figure 20- Hondori et al. (2013) system setup including inertial sensors and Kinect sensors ... 31
Figure 21- Hondori et al. (2013) 3-D trajectories ... 32
Figure 22- Hondori et al. (2013) experimental data on body movements ... 33
Figure 23- Hondori et al. (2013) limb changes in tasks like drinking and eating ... 34
Figure 24- Hondori et al. (2013) inertial sensor data from individual's items ... 34
Figure 25- Mohamed et al. (2013) smart house used in the experiment ... 35
Figure 26- Mohamed et al. (2013) natural user interface ... 36
Figure 27- Mohamed et al. (2013) waist detection posture ... 37
Figure 28- Mohamed et al. (2013) waist detection posture ... 37
Figure 29- Mohamed et al. (2013) Kinect procedure for gesture recognition ... 38
Figure 30- Mohamed et al. (2013) ... 38
Figure 31- Mohamed et al. (2013) Kinect toolbox recognition of circle gestures ... 39
Figure 32- Chin et al. (2013) three Kinect sensors, IR light, RGB camera, IR detector ... 40
Figure 33- Chin et al. (2013) depth sensor distance ... 40
Figure 34- Chin et al. (2013) depth frame bit pixel ... 41
Figure 35- Chin et al. (2013) algorithm depth distance ... 41
Figure 36- Chin et al. (2013) average depth distance vs actual distance ... 44
Figure 37- Chin et al. (2013) accuracy analysis, AMPE vs distance ... 44
Figure 38- Chin et al. (2013) precision analysis, std vs distance ... 45
Figure 39- Alexiadis et al. (2017) 3-D camera and sensor setup ... 47
Figure 40- Alexiadis et al. (2017) stages of the proposed model ... 48
Figure 41- Alexiadis et al. (2017) image quality reconstruction; Kinect data, watertight geometry and Poisson ... 48
Figure 42- Tahavori et al. (2013) Kinect for Xbox vs Windows ... 49
Figure 43- Sengupta and Ohya (1996) two-staged pose estimation illustration ... 51
Figure 44- Sengupta and Ohya (1996) back-projection method estimation ... 51
Figure 45- Sengupta and Ohya (1996) images used for the experiment ... 52
Figure 46- Sengupta and Ohya (1996) extracted silhouette images ... 52
Figure 47- Sengupta and Ohya (1996) rendered images from the parameter set ... 53
Figure 48- Sengupta and Ohya (1996) rendered images of the transferred model ... 53
Figure 49- Tao et al. (2013) constant camera error ... 54
Figure 50- Tao et al. (2013) variable camera error ... 55
Figure 51- Choe et al. (2014) invariability of IR and RGB images under different lighting conditions ... 56
Figure 52- Choe et al. (2014) data capturing system used to obtain the base mesh ... 56
Figure 53- Choe et al. (2014) input shading image, projected mesh and depth map ... 56


1 Introduction to the Kinect v2 Sensor

Kinect technology was initially named Project Natal during the early phases of its development. It is a series of input devices developed by Microsoft for video game consoles, including the Xbox One and Xbox 360. The device uses gestures and spoken commands to give users a natural interface for interacting with the console or computer (Lange, 2011). In 2010, Kinect was developed to broaden the audience base of the Xbox 360 and was rumored to launch together with a new Xbox 360 console [1]. Microsoft, however, dismissed these reports; at the time the company believed the Xbox would last until 2015. Following the release, various experiments were conducted to evaluate the stability of the device. To demonstrate this stability, several games were shown at the 2009 Tokyo Game Show, among which Beautiful Katamari and Space Invaders Extreme were the most notable (Stowers, 2011). Initially it was planned that the sensor unit would be accompanied by a dedicated microprocessor for Kinect operations such as skeletal mapping; later it was decided that there would be no dedicated processor in the device, and processor cores were developed for this purpose instead.

Research by Stowers (2011) further showed that Kinect used only 10-15 % of the console's computing resources. In the same timeframe, the development of Kinect-like gadgets became a trend.

Figure 1-Xbox 360, Kinect v1. Klesistern (2014)

1.1 Evolution of Kinect Sensor

After the "World Premiere 'Project Natal' for Xbox 360 Experience" event of 2010, Kinect became the official name of the gadget. The word was created as a combination of "kinetic" and "connect". Initially this was considered an imperative initiative, and Microsoft first set the launch date as November 2010 [3].

This, however, changed as the project faced delays. When a new Xbox 360 model was later announced in July 2010, it was ready for Kinect, with a dedicated connector port, and ready for launch.


At the time of the Kinect's release, many companies were working in collaboration with Microsoft to ascertain its possibilities, applications and compatibility with other gadgets. Villaroman (2011) argued that because of its immense appeal and attention, Microsoft announced it would launch a commercial version along with a Software Development Kit (SDK) for these companies [1]. Microsoft eventually released the Windows SDK, the commercial version of Kinect. At that time, different companies were working on different applications for Kinect.

1.2 Technology of Kinect

Kinect v1 was developed by the Israeli company PrimeSense. It was a combination of hardware and software, both brought together by Microsoft. Kinect v1 generated a 3-D view of an object through a combination of components including a camera, an infrared projector and a microchip specially designed for the purpose. 3-D reconstruction of the scene was performed by a scanner system called Light Coding. To capture video data in 3-D regardless of lighting conditions, the depth sensor was designed around a monochrome complementary metal-oxide-semiconductor (CMOS) sensor. The depth sensor was an innovative addition that fitted well with most applications. Taking into account the presence of furniture or other obstacles, the Kinect software can automatically calibrate to the player's physical environment and game play, and it can also adjust the depth range of the sensor.

Figure 2- CMOS sensor, Primesense. Journal of Sensors (2014)

The developer, PrimeSense, clarified that the number of people the software can track is restricted only by how many fit in the camera's field of view. According to Microsoft, the software can track only six players simultaneously, with up to 20 joints per tracked skeleton. The key features regarded as the success of Kinect were its voice recognition, facial recognition and, most importantly, gesture recognition.


1.3 Kinect (2.0 2013) – Designed for Xbox One

It was released in November 2013. The old PrimeSense technology was replaced by Microsoft's own time-of-flight sensor. According to analysts such as Azzari (2013), this innovation uses a time-of-flight camera and can process 2 GB of data per second. It has three times greater accuracy than its predecessor and can track with the help of an infrared (IR) sensor. It can also track six skeletons at a time. Kinect v2 came with improved video communication and applications specifically developed for video analytics. The accompanying microphone is used to provide voice commands.

1.4 Non-Commercial Kinect Designed for Microsoft Windows

In February 2012, Microsoft released a new version with Windows 7-compatible PC drivers. This version provided capabilities for developers building with C++, C# and Visual Basic, and gave access to low-level streams from the depth and other sensors.

Almost 50 companies worked with Microsoft on the development of Kinect (Chang, 2012). The enhanced capabilities covered skeletal tracking and advanced audio. Skeletal tracking allowed gesture-driven applications to track people, while the audio capabilities were integrated with the Windows speech recognition application programming interface (API).

1.5 Kinect Versions from 1.5 to 1.8

These versions were launched in 19 different countries, starting in 2012. A new release, the Kinect for Windows v1.5 SDK, included Kinect Studio, an application that let users record, play back and debug clips of interaction. This version added tracking of the arms, neck and head of the Kinect user through a new joint skeletal system. Versions 1.6 to 1.8 brought further minor improvements.

1.6 Kinect v2

Kinect v2 was first released in 2014. It was designed on the same technology as Kinect for Xbox One.


Figure 3-Kinect sensor components. Journal of Sensors (2013)

1.7 Significance of Kinect v2 in Assisted Living Facilities

According to Biswass (2011), Kinect v2 is an advanced motion sensor capable of measuring the 3-D motion of a person. The Kinect for Windows SDK, made by Microsoft, provides an application programming interface to the Kinect hardware.

Assisted living residences are for people with a disability, or those who have reached old age and cannot, or have chosen not to, live independently (El-laithy, 2012). In the recent past, with scientific developments in this field, there has been a transformation from "care as a service" to "care as a business". It has evolved into a huge industry: a 2012 survey in the US showed the existence of 22,500 such facilities, either standalone services or parts of multi-level senior living communities. The Kinect v2 sensor has emerged as a potential contributor to improving the standards of assisted living. The features of v2 are an enhanced field of view, improved picture resolution, enhanced skeletal tracking and recognition of joints.

1.8 Potential Use of Kinect v2 in Assisted Living

Most researchers, such as Stowers (2014), agree that Kinect v2 can potentially contribute to many more domains that enhance standards of assisted living. It can be used in building smart home environments, in detecting driver fatigue with multi-sensor signal-based methods, and in modeling human body movement using the twin-cylinder method. Kinect v2 can also provide a platform for live 3-D human reconstruction and motion capture. It can help in monitoring patients during external beam radiotherapy and assist in the recognition of karate techniques and similar domains.


In rehabilitation systems, it can support skeletal tracking in virtual reality rehabilitation. Kinect v2 has also been widely used in the geometry refinements required in motion fields and in human body tracking based on the discrete wavelet transform (DWT). Moreover, it can be used in shadow detection and classification, and in estimating the movements of human body parts and propagation along them (Rowe, 2011).

1.8.1 Different Spheres with Scope for Application of the Kinect v2 Sensor

This thesis capitalizes on the potential of Kinect v2 with regard to assisted living. Kinect v2 can facilitate life in assisted living environments for elderly people and support the treatment of illnesses. People can perform their routine exercises under the view of the Kinect sensor, because it can analyze their movements, correct mistakes and pass on instructions accordingly. This can provide much-needed motivation for elderly people to exercise regularly. Another innovation of the v2 sensor in assisted living is a hearing system that uses the human body as a medium of transmission; the Kinect v2 sensor can take over the role of the sound transmitter and transmission line in such a mechanism.

Kinect v2 can also help in the treatment of Parkinson's disease. It can accurately measure clinically relevant movements such as hand clasping and finger tapping, and relative improvement or worsening of these movements over time can also be measured with it.
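As a hypothetical sketch of the kind of measurement described above, the tapping rate can be derived from a per-frame distance between two tracked joints. Everything here is an illustrative assumption, not the clinical method: the input is taken to be thumb-to-fingertip distances in meters at the Kinect v2's 30 Hz frame rate, and the 3 cm closing threshold is invented for the example.

```python
import math

# Hypothetical sketch: estimating a finger-tapping rate from a Kinect v2
# joint stream. The input is assumed to be thumb-to-fingertip distances
# (meters) sampled at the sensor's 30 Hz frame rate; the threshold and
# names are illustrative, not part of any SDK.
FRAME_RATE_HZ = 30.0
CLOSE_THRESHOLD_M = 0.03  # fingers count as "closed" below 3 cm

def tap_rate(distances, fps=FRAME_RATE_HZ, thresh=CLOSE_THRESHOLD_M):
    """Count open -> closed transitions and return taps per second."""
    taps = 0
    was_open = distances[0] >= thresh
    for d in distances[1:]:
        is_open = d >= thresh
        if was_open and not is_open:  # fingers just closed: one tap
            taps += 1
        was_open = is_open
    return taps / (len(distances) / fps)

# Synthetic 2-second recording oscillating at 2 taps per second.
samples = [0.05 + 0.03 * math.sin(2 * math.pi * 2.0 * t / FRAME_RATE_HZ)
           for t in range(60)]
```

On this synthetic trace, `tap_rate(samples)` returns 2.0 taps per second. A real deployment would compute the distances from the SDK's hand-tip and thumb joints; those specifics are outside this sketch.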


2 Review of related literature

2.1 Introduction

A variety of studies have been undertaken to review the efficacy of the Kinect v2 sensor. Researchers have even gone further to recommend uses and applications. However, utilization of Kinect v2 in assisted living is rarely found. Some of the research includes the following.

2.2 Use of Wireless Sensor Networks

Hemant and Ghayvat (2013) analyzed, proposed and implemented Wireless Sensor Networks (WSNs) for smart homes for assisted living. According to them, WSNs are today the backbone of many systems; smart home systems that provide assisted living to patients already use them. The researchers designed a protocol for providing smart homes for assisted living and described its implementation in an old home built specifically to test a wireless sensor network deployment. The protocol targets event- and communication-based protocols and provides smart home solutions. However, sensors alone were not found to be enough: an intelligent sampling and control algorithm must be designed according to the sensor type and structure.

2.3 Kinect v2 Depth Sensor

Research by Lin and Longyu (2013) extensively described the use of the Kinect depth sensor since its launch. Even though Microsoft has released a new version with improved hardware, in their view the accuracy needed testing. They performed experiments to check the Kinect v2 depth sensor and its accuracy, observed some variations in its depth evaluations, and proposed a toleration method to enhance accuracy when evaluating depth [2].

2.4 Use in Karate Techniques

Marek and Tomasz (2010) compared the effectiveness of Kinect v1 and Kinect v2 for the recognition of Oyama karate techniques. The purpose of the study was to evaluate how well Kinect v1 and Kinect v2 recognize the actions of the Oyama karate style. Initially, multimedia cameras were popular for personal computers and game consoles and were also cheaper when used for these purposes. Kinect v2, however, has given the concept a much wider array of uses, and its use for human-computer interaction has given it a new dimension.

According to their research, Kinect can be used in medicine, in education and for controlling robotic arms. Kinect v2 has emerged as one of the best intelligent home solutions and has much potential yet to be explored and fully utilized. Postural segmentation and assessment of postural control capabilities are the most common approaches used. A classification method is used to make gesture recognition possible. To perform tracking and generate motion capture data, Kinect sensor data is preprocessed by Kinect libraries. Kinect v2 has appeared by enhancing the capabilities of its predecessor.

2.5 Advantages of v2 over v1

In Kinect v2, Gesture Description Language (GDL) has been used as a classification algorithm. The data was recorded from two professional belt instructors; the research collected 200 movement samples per person. The data was divided into training and evaluation sets and then thoroughly assessed. Taking stock of the recognition rates of the GDL classifier and of misclassification cases, Kinect v2 proved more reliable than Kinect v1. The major advantage of Kinect v2 over Kinect v1 was the accurate calculation of leg joint positions [3].

Figure 4- GDL illustration. Teng et al. (2013)
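The core idea behind GDL is describing a gesture as a logical condition over joint positions. The following minimal sketch illustrates that idea only; the joint names, coordinates and the "hands up" rule are invented for the example and are not actual GDL syntax.

```python
# Illustrative rule-based gesture check in the spirit of GDL: a gesture
# is a logical condition over skeleton joint positions. Joint names and
# the rule are invented for this example.
def is_above(skeleton, joint, reference):
    """True when `joint` is higher (larger y) than `reference`."""
    return skeleton[joint][1] > skeleton[reference][1]

def is_hands_up(skeleton):
    # The gesture fires when both hands are above the head.
    return (is_above(skeleton, "hand_left", "head") and
            is_above(skeleton, "hand_right", "head"))

pose = {  # (x, y, z) in meters, y pointing up
    "head": (0.0, 1.70, 2.5),
    "hand_left": (-0.3, 1.90, 2.4),
    "hand_right": (0.3, 1.95, 2.4),
}
```

A full GDL script chains many such conditions over time windows, which is what allows it to classify dynamic karate techniques rather than static poses.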

A different study, conducted at the University of North Carolina at Chapel Hill, illustrated the functions and classification of the Kinect shadow detection feature. The research shows that Kinect depth maps often contain holes, missing data or similar gaps. The authors advocate a different idea: turning the holes into useful information (Teng and Hui, 2014). They proposed different types of shadows based on the local patterns shown by the geometry, so that the shadow information can be fully used [4].


2.6 Pose Estimation of Human Body Part Using Multiple Cameras

There is a lot of existing research that uses methods of estimating the pose of multiple 2-D and 3-D images and objects as a starting point (Kuntal and Jun, 2014). In this research, the approximate volume of an object in 3-D is obtained by projecting the silhouettes in the images. The authors note that existing means of communication, such as video conferencing systems, have limitations: the users are often far apart, and one proposed solution to this problem is creating a feeling of co-location between people [2].

The work tackles the issue by modeling the space in 3-D, so that viewpoints in real space correspond to the object in the modeled space. The paper gives an example of pose estimation for a human body part, with pose parameters chosen by random selection. The authors conducted experiments using a CAD model of a human head captured by four cameras placed at equal distances along a semicircle. Any pose estimation algorithm is difficult to extend easily to new applications. The silhouette edges for the experiments were separated manually; three randomly chosen points in the volume are taken, and every fifth point on the edge of the silhouette is used. The results were initially not good but later improved, and the algorithms developed can easily be reused in future work [3].
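A silhouette-consistency check of this kind can be sketched as follows. This is a deliberately simplified illustration, not the authors' algorithm: it assumes orthographic cameras (projection just drops one coordinate) and tiny binary masks, and scores a candidate pose by how many projected model points land inside every camera's silhouette.

```python
# Simplified sketch of silhouette-consistency scoring for a candidate
# pose: project 3-D model points into each camera and count how many
# land inside that camera's binary silhouette mask. All geometry here
# is illustrative (orthographic cameras, tiny 4x4 masks).
def project(point, axis):
    """Orthographic projection: drop the coordinate along `axis`."""
    return tuple(c for i, c in enumerate(point) if i != axis)

def silhouette_score(model_points, masks):
    """Fraction of (point, camera) pairs whose projection falls inside
    the silhouette. 1.0 means the pose is fully consistent."""
    hits = total = 0
    for axis, mask in masks.items():
        for p in model_points:
            u, v = (int(round(c)) for c in project(p, axis))
            total += 1
            if 0 <= v < len(mask) and 0 <= u < len(mask[0]) and mask[v][u]:
                hits += 1
    return hits / total

mask_z = [[0, 0, 0, 0],  # camera looking along z
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0]]
mask_x = [[0, 0, 0, 0],  # camera looking along x
          [0, 1, 1, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
masks = {2: mask_z, 0: mask_x}
good_pose = [(1.0, 1.0, 1.0), (2.0, 2.0, 1.0)]
```

A pose search would then maximize this score over randomly sampled pose parameters, which mirrors the random-selection strategy described in the paper.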

2.7 An Innovative Hearing System Utilizing the Human Body as a Transmission Medium

Some researchers have proposed an innovative hearing system that uses the human body as the medium of transmission (Son and Kwang, 2013). The concept replaces the sound transmitter with the human body itself. Self-demodulation is the basis for generating audible sound: the frequency difference of two waves is demodulated into an audio signal through a non-linear medium. In this way a user can hear sound without a separate transmitter and without making noise. The authors present this as a concept of wireless sound transmission; distortions in the propagation process can be reduced by using ultrasound [19]. The paper successfully presents the human body as the transmission medium of the proposed system.


Figure 5- Medical application. Lim et al. (2014)

2.8 Accuracy and Reliability of Optimum Distance for High-Performance Kinect Sensor

Lim and Shafriza (2013) analyzed the sensor from a different perspective. In a depth camera, each pixel represents a distance that corresponds directly to some point in the physical world [20]. Biomedical application is one of the successful uses of the Microsoft Kinect sensor, as it provides the tools required for measurements of volume, length and more. These technologies have become popular over time; range cameras such as time-of-flight (TOF) devices and the Microsoft Kinect sensor are applicable in the biomedical field. The working principle of a TOF camera is the emission of modulated light onto the scene [17]: the reflected light is measured against a reference signal, and correlating it with the modulated light yields the depth information. The Kinect sensor uses a different technique, an infrared structured-light projector together with a CMOS camera, to compute the depth of the scene. 3-D technologies have now come to market with depth cameras and the Kinect sensor. The primary aim of the Kinect sensor's development is its utilization in biomedical applications; due to its specifications, it is much like a camera [14]. The authors focused on whether the Kinect sensor can provide depth distance values that accurately and reliably match the actual distance, since the analysis of measured depth against actual distance is central to the accuracy of the sensor. The depth array calculated by the researchers had a precision of up to 11 bits, so the depth measurements of the Kinect sensor are expected to be a non-linear function of distance [18]. The research also focused on the default range and near range of distance from the Kinect sensor.

The authors undertook the task of investigating the depth data of the Kinect sensor. They carried out a reliability analysis of the sensor's specification as claimed by Microsoft, providing insight into the authenticity of the data. Experiments with these sensors proved that the error in depth measurements grows as the distance to the sensor increases; the variations range from a few millimetres up to 40 mm at the maximum [15]. The Kuder-Richardson formula was used for the reliability calculations. The study proved very useful, as it provided a methodology for determining 3-D pose estimation in human motion applications through accurate, precise and reliable depth distances.
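The continuous-wave TOF principle mentioned above can be made concrete with the standard phase-shift relation d = c·Δφ / (4π·f): the modulated light's round trip shifts its phase, and the phase encodes depth. The sketch below is a generic illustration of that relation under an assumed 16 MHz modulation frequency; it is not the Kinect v2's internal algorithm, whose modulation scheme is proprietary.

```python
import math

# Generic continuous-wave time-of-flight depth calculation: light is
# modulated at f_mod, and the phase shift between emitted and received
# signal encodes the round-trip time, hence the depth.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, f_mod_hz):
    """Depth from phase shift: d = c * phi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def max_unambiguous_range(f_mod_hz):
    """Beyond this range the phase wraps around (range aliasing)."""
    return C / (2 * f_mod_hz)

# At an assumed 16 MHz modulation, a phase shift of pi radians lands at
# exactly half the unambiguous range (about 4.7 m of roughly 9.4 m).
d = tof_depth(math.pi, 16e6)
```

The relation also explains the error behavior reported above: the same phase-measurement noise translates into a larger distance error as depth grows, and a higher modulation frequency trades unambiguous range for depth resolution.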

2.9 Integration of Microsoft Kinect with Simulink: Real-Time Object Tracking Example

Microsoft Kinect has a lot of potential in system applications due to its introduction as a low-cost, high-resolution 3-D sensing system (Joshua and Tyler, 2015). The purpose of the study was to develop a Kinect block providing access to the depth image streams and the sensor cameras. The available Kinect drivers, with their C-language interface, are an impediment to Kinect application development; the new block makes it possible to incorporate the sensor without difficulty into Simulink-based image processing. The study focused on issues affiliated with the implementation aspects of Kinect: one important aspect is calibration of the sensor, and another is the utility of the Kinect block, demonstrated through a 3-D object tracking example [9].


Figure 6- Simulink Kinect. Joshua et al. (2014)

Navigation in uncertain circumstances depends greatly on a system's capability to detect both moving and stationary obstacles. Among the available sensors, sonar is a low-cost option but is prone to false echoes and reflections due to poor angular resolution. Infrared and laser range finders are also low cost, but their grey area is that they provide measurements from only one point in the scene. Radar and Light Detection and Ranging (LIDAR) systems can provide precise measurements with good angular resolution [14], but they too have grey areas, most importantly their high power consumption and high expense.

This complete picture, together with the revolution of low-cost digital cameras, has produced an interest in vision-based setups for autonomous vehicles. Even here, stereoscopic cameras carry the disadvantage of distance estimation. The release of the Microsoft Kinect addresses this issue by providing both a camera image and a depth image. Kinect was aimed primarily at the entertainment market, but due to its powerful capabilities it has gained much popularity in the sensing and robotics community; examples include applications in robot-human interaction, 3-D virtual environment construction, medicine, robot tracking and sensing. Most Kinect applications are coded in C [15]. In industry and academia, the use of image processing tools is now commonplace; even inexperienced users can use them, and they can target hardware implementations through automatic code generation. Simulink provides a widely accepted environment for designing image processing algorithms as well. In the automotive industry, for example, Simulink's seamless code-generation tools translate a final design into a real-time executable for target hardware. These tools are also useful in educational environments, enabling students to concentrate on major details rather than low-level ones. The major contributions of the study fall into three spheres, the first being interface development that allows Kinect to be involved in refined Simulink designs and gives more users access.

The targets discussed in the paper are Linux-based, as used in mobile autonomous robotic machines. In real time, the Kinect streams parallel camera and depth images.


2.10 Comparing the Utility and Usability of the Microsoft Kinect and Leap Motion Sensor Devices in the Context of Their Application for Gesture Control of Biomedical Images

A study conducted by Hughes and Nextorov (2014) investigated the interaction with medical images in the operating room, where asepsis must be maintained. This requirement has resulted in a complex arrangement of mouse and keyboard use between scrubbed clinicians and non-scrubbed personnel [16]. The Microsoft Kinect or Leap Motion could give clinicians direct control of medical image navigation and manipulation.

Figure 7- Leap Motion sensor. Hughes et al. (2015)

The authors admitted that many studies had already examined the use of Leap Motion and Microsoft Kinect in the operating room, but no study had compared the sensors in terms of their usage. Their study aimed to compare the use, utility, accuracy and acceptance of the two motion sensors. Forty-two persons participated; 30 % were diagnostic radiologists and 70 % were surgeons or interventional radiologists. All participants had good computer skills but limited gaming experience. In the utility analysis, 50 % of participants rated the Microsoft Kinect v2 as very useful in their routine practice, whereas the corresponding figure for the Leap Motion sensor was 38 %. Among surgeons and interventional radiologists, 54 % rated the Microsoft Kinect as useful [13]. Younger participants found the Leap Motion interface more useful than older ones did, and for 37.5 % of participants the perception of the Leap Motion sensor deteriorated after using the Kinect sensor. System acceptability was better for the Kinect sensor than for the Leap Motion sensor. With respect to utility and use, the Microsoft Kinect was found better; the Leap Motion, however, was found to have better accuracy. The Kinect was more acceptable to users, although it was physically more tiring. More than half of the surgeons and interventional radiologists found the Microsoft Kinect v2 very useful; in this study, vascular and orthopedic surgeons found the sensors the most useful. The measurement accuracy was not of a good standard, which can be attributed to many factors, including the system's field of view.

For the Leap Motion sensor, the user needed to place the cursor at the end or start point of an anatomical structure and keep the hand stable before the selection indicator was seen [5]. More time was therefore taken before selecting the measurement point; the Kinect proved better, taking a shorter time. In certain cases a participant moved the hand before selecting the end point, so the measurement command completed prematurely. A few gestures were initially available for both sensors, but they were later disabled and replaced by a discrete input, or click. Because executing the measurement command required four seconds for the start and end measurements, both sensors were found to be slower than the average time. In terms of time to task completion, prior studies have shown that with adequate practice motion sensors can perform better than the mouse. The fastest participant times were 6.38 s for the Leap Motion sensor and 7.54 s for the Microsoft Kinect v2; these are lower than the overall average time to indicate and measure [11].

Figure 8- Motion sensor illustration. Hughes et al. (2015)

System use influences the utility surgeons derive from a system: the relationship between use and utility shows that poor use leads to poor utility. The study showed that the Leap Motion sensor was not equivalent to the Kinect v2, since younger doctors were more comfortable with the Leap Motion sensor than with the Kinect [9].

2.11 A Depth-Based Fall Detection System Using a Kinect Sensor

Researchers have also tested Kinect sensor applications in fall detection systems.

Samuele and Enea (2014), for instance, carried out a study proposing a fall detection system based on Microsoft Kinect. The system is automatic and privacy-preserving.


The raw depth data provided by the sensor are analyzed by an ad hoc algorithm. The system implements a definite solution to classify all the blobs in the scene. Whenever a person is identified, a tracking algorithm follows the person across frames. Using the depth frame makes it possible to extract the human body even when it is interacting with other objects, such as a wall or a tree.

An inter-frame processing algorithm helps to efficiently solve the problem of blob fusion [14]. If a depth blob associated with a person is near the floor, a fall is detected.
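The near-floor rule can be sketched as a simple check on the tracked blob's centroid height across consecutive frames. This is a minimal illustrative sketch, not the authors' actual algorithm: the threshold, the frame count, and the function names are assumptions.

```python
# Hypothetical sketch of the depth-based fall rule: a person's blob
# whose centroid stays near the floor for several consecutive frames
# is flagged as a fall. Thresholds are illustrative, not the paper's.

FLOOR_THRESHOLD_M = 0.4   # centroid height (m) considered "down"
MIN_FRAMES = 5            # frames the blob must stay low before alarming

def detect_fall(centroid_heights, floor_threshold=FLOOR_THRESHOLD_M,
                min_frames=MIN_FRAMES):
    """Return True if the blob stays below the threshold for at least
    `min_frames` consecutive frames."""
    run = 0
    for h in centroid_heights:
        run = run + 1 if h < floor_threshold else 0
        if run >= min_frames:
            return True
    return False

# A standing person (heights ~1 m) followed by a fall (heights ~0.2 m):
heights = [1.1, 1.0, 0.9, 0.3, 0.2, 0.2, 0.2, 0.2]
print(detect_fall(heights))  # True for this sequence
```

Requiring several consecutive low frames mirrors the inter-frame processing described above: a single noisy depth reading near the floor does not trigger an alarm.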

Figure 9- Fall detection illustration. Samuele et al. (2014)

In the study, a method of automatic fall detection using a Kinect sensor in a top-view configuration was proposed. Without relying on wearable sensors, and by exploiting privacy-preserving depth data only, this approach allows a fall event to be detected. With the help of an ad hoc discrimination algorithm, the system could identify and separate stationary objects from human subjects within the scene.

Several human subjects can be tracked and monitored simultaneously.

The authors confirmed through experiments the capability of identifying the human body during a fall event, as well as the capability of the proposed algorithm to tackle blob fusions in the depth domain.

The system proposed in this research was realized and tested on a PC running Windows 7 with an i5 processor and 4 GB of RAM. The proposed algorithm can be adapted to different depth sensors, as it needs only depth information as input data.

Moreover, an embedded real-time implementation was produced featuring Linaro 12.11, a Cortex-A9, and 2 GB of RAM. The authors foresee future research activities focusing on handling several depth sensors simultaneously by improving and enhancing the performance of the algorithm. The system will be extended to support the tracking of subjects whenever they attempt to cross areas covered by adjacent sensors.


2.12 Experimental Studies on Human Body Communication Characteristics based upon Capacitive Coupling

Researchers at the Academy of Sciences, Shenzhen, China studied human body communication and regarded it as a short-range transmission technology for sensor network applications (Wen-cheng and Ze-dong, 2014). There are few full-scale measurements describing body channel propagation based on capacitive coupling [11]. The study focused on experiments on various body parts, investigating the features of the body channel. Using the coupling technique, the characteristics of the body channel can be measured in both the frequency and time domains. Based on the measurement results, it was observed that the body maintained stable characteristics, while the elbow, wrist, and knee affected the attenuation characteristics of the channel [19].

2.13 Body Movement Analysis and Recognition

Different studies have also proposed human-robot interaction based on innovative combinations of sensors. Yang and Hui (2014) conducted a study on non-verbal communication between robots and humans by developing an understanding of human body gestures. The robot can express itself by means of body movements, facial expressions, and verbal expression. For this communication, twelve upper-body gestures are utilized.

Human-object interactions are also included among these. The gestures are characterized by head, arm, and hand posture information. To capture the hand posture, use is made of the Cyber Glove II, while Microsoft Kinect provides the head and arm posture information [12]. This is an up-to-date solution combining sensors for human gesture capture. Based on the captured body posture data, a real-time and effective human gesture recognition method is proposed. The study also conducted experiments to prove the efficacy and effectiveness of the proposed approach.


Figure 10- Movement analysis glove. Yang et al. (2012)

Human-computer interaction has recently gained the interest and attention of industrial and academic communities; as a field it is still fairly young, having started in the 1990s.

The field draws contributions from mechanical engineering, computer science, and mathematics. Unlike earlier interactions, human-robot interaction must be expected to have a stronger social-dynamics aspect. As people want to interact with robots the way they do with other humans, human-robot interaction needs to be made more believable. Robots should be able to make use of verbal and body language as well as facial expressions [10]. Some robots are already being used for this goal; the Nao humanoid robot, for example, can use gestures and body expressions. The main concern of the study was to establish means of communication between robot and human using body language, and one of its main purposes was to apply non-verbal language to human-robot interaction in the social domain. Twelve upper-body gestures are applied in the recommended system, all of them intuitive and natural. They are characterized by arm, head, and posture information, and human-object interactions are involved in these gestures.

Figure 11- Humanoid robotics illustration. Clingal et al. (2014)


A human body gesture dataset was constructed to evaluate the recommended recognition method. The dataset was built from 25 samples covering different body sizes, cultural backgrounds, and genders. The efficiency and effectiveness of the recommended system were proven by the experiments. A few of the major aspects of the study are:

 Kinect and Cyber Glove II are integrated to capture arm, head, and hand posture; a combined human gesture-capture sensor is recommended for this.

 For the recognition of upper-body gestures, a real-time and effective recognition method is recommended.

 A gesture understanding and human-robot interaction system is built to help humans interact with robots.

A scenario was established in which a classroom interaction between a user and a robot was created as a case study of the GUHRI system. The user acts as a student and the robot as a lecturer. The robot can understand the twelve upper-body gestures, and, like a human, it can react by combining facial expressions, verbal language, and body movement. The robot's behavior in class is triggered by the body language of the user [7], and all the actions are fully consistent with the established scenario.

The GUHRI system also has the ability to handle unexpected situations; if a user suddenly answers a phone call, for example, it can react appropriately. Regarding the proper understanding of upper-body gestures, dynamic gestures are important body language components in daily life; they provide communication cues that enhance the performance of this communication. For natural human-robot interaction, the robot should be able to understand static as well as dynamic gestures with the help of movement analysis and the recognition of human gestures. Combined 3-D human body information can be obtained in real time with Microsoft's Kinect SDK, and motion information can be derived from the change in the position of body joints along the temporal axis. Activity recognition has already been performed with this body joint motion information. The possibility of missing hand gestures nevertheless remains. Future work is likely to address the recognition of upper-body gestures and body motion, together with the requisite information from hand gestures. Another dimension is recognition from an egocentric point of view. In the GUHRI system recommended in the paper, the Kinect is used as the vision sensor.

It is not a perfect system and has several limitations, such as the inability to change the viewpoint due to the fixed position of the Kinect sensor. Because of this limitation, it is not always possible to obtain the best viewpoint of the gestures made by the human body. One option for solving this problem is to obtain gesture information from the egocentric perspective of the robot. This makes it possible to change the robot's viewpoint, but it also introduces new problems, as distinguishing between camera motion and real body motion will be difficult for the robot [11]. In the future, further work can also be done on integrating verbal cues into the GUHRI system to further enrich human-robot interaction. If the robot is more autonomous in seeing and hearing, it will become more human-like.
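The idea of deriving motion information from the change of joint positions along the temporal axis can be illustrated with a short sketch. The joint names, the dictionary layout, and the 30 fps frame interval are illustrative assumptions, not the Kinect SDK's actual data structures.

```python
# Hedged sketch: per-joint motion from the change of joint positions
# across frames, as exposed by a body-tracking SDK. Layout and frame
# rate are assumptions for the example.

FRAME_DT = 1.0 / 30.0  # Kinect v2 delivers body frames at ~30 fps

def joint_velocity(prev, curr, dt=FRAME_DT):
    """Per-axis velocity (m/s) of one joint between two frames."""
    return tuple((c - p) / dt for p, c in zip(prev, curr))

def frame_velocities(prev_frame, curr_frame, dt=FRAME_DT):
    """Velocities for every joint present in both frames."""
    return {name: joint_velocity(prev_frame[name], curr_frame[name], dt)
            for name in prev_frame if name in curr_frame}

prev_frame = {"hand_right": (0.30, 0.90, 1.80), "head": (0.00, 1.60, 1.90)}
curr_frame = {"hand_right": (0.33, 0.96, 1.80), "head": (0.00, 1.60, 1.90)}
vels = frame_velocities(prev_frame, curr_frame)
# The right hand moved between the two frames; the head stayed still.
```

Thresholding such per-joint velocities is one simple way to separate static postures from the dynamic gestures discussed above.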


In the overall context, this paper recommends a GUHRI system with an innovative understanding of gestures and of robot-human interaction. The robot can understand twelve upper-body gestures, and through features such as facial expressions, body movements, and verbal expression it also has the ability to express itself. A combination of sensors is recommended, joining Microsoft's Kinect and the Cyber Glove to capture head, arm, and hand posture simultaneously [3]. In this way, an effective, real-time gesture recognition mechanism is obtained. In the experiments, a human body gesture dataset was built, and the efficiency of the gesture recognition was demonstrated by the experimental results. So far, the gestures involved are static ones, such as asking a question, appreciating, calling, or drinking. The study recommends as future work the understanding of dynamic gestures, such as saying no, clapping, or waving a hand. Another important recommended addition is speech recognition, which would make the interaction more realistic.

2.14 An Integrated Platform for Live 3-D Human Reconstruction and Motion Capturing

There are also experiments and studies that show how Kinect technology can be used for live 3-D human reconstruction and motion capture. In their research, Imitrios and Alexadis (2011) investigate the developments in 3-D capturing and processing and provide ways to open pathways for 3-D applications. Their study addresses the tasks of real-time capturing and motion tracking by explaining the main features of an integrated platform targeting future 3-D applications. In addition, an innovative sensor calibration method is discussed. Based on a variation of a volumetric Fourier-transform-based method, an innovative RGB-D reconstruction method is recommended in the paper. The paper also proposes a qualitative evaluation of 3-D reconstruction mechanisms, as existing evaluation methods were found largely inadequate. Overall, an accurate mechanism for real-time human body tracking is recommended, based on a generic, multiple-depth-camera approach. The experiments conducted in the study supported its conclusions.

The study describes the capture and reconstruction of moving humans with multiple Kinect v2 sensors, along with other applications such as the fast reconstruction of humans and skeleton-based motion tracking with depth cameras, and the main elements of the integrated system are described in detail. Based on these elements, innovative approaches are recommended in the paper, and existing approaches are also discussed. In addition, an innovative mechanism for evaluating 3-D reconstruction systems is recommended. Some limitations of ongoing research are also discussed: one of the main limitations is the imperfect synchronization of the RGB-D sensors, which may lower the reconstruction quality. In the skeleton tracking mechanism, the shortcomings caused by topology changes are to be overcome by fitting a skeleton scheme [2]. Moreover, the limitations can be reduced by splitting the body into upper and lower parts and fusing the mechanism with data from inertial measurements.

Figure 12- RGBD illustration. Immitrios et al. (2014)

2.15 Automated Training and Maintenance through Kinect

The availability of the Kinect at a low cost and its provision of high-quality sensors have enabled researchers such as Saket and Jagannath (2011) to conduct a study aiming to reduce the burden on mechanics involved in automobile maintenance undertaken in centralized workshops [1]. A system prototype that works with the Kinect is recommended.

Speech and gesture are the two modes of operation of this system. In speech mode, it can be controlled by various audio commands; it can likewise be controlled in gesture mode. Gesture recognition is performed by the Kinect system, which, together with the RGB depth camera, processes skeletal data by keeping a record of the body joints. Gestures are recognized by checking user movements against predefined situations. Real-time image data streams are captured by a high-density camera, and a 3-D model is generated and superimposed on the data being received in real time.
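Checking user movements against a predefined situation can be pictured as template matching over sampled joint positions. This is a hedged sketch under assumptions of my own: the template contents, the tolerance value, and the function name are illustrative, not the prototype's actual implementation.

```python
# Illustrative gesture check: a user's joint trajectory is compared
# against a predefined template and accepted when every sampled
# position stays within a tolerance. All values are invented.

def matches_template(movement, template, tolerance=0.15):
    """movement/template: equal-length lists of (x, y, z) joint positions."""
    if len(movement) != len(template):
        return False
    for (mx, my, mz), (tx, ty, tz) in zip(movement, template):
        dist = ((mx - tx) ** 2 + (my - ty) ** 2 + (mz - tz) ** 2) ** 0.5
        if dist > tolerance:
            return False
    return True

# A "raise hand" template and a slightly noisy user movement:
template = [(0.3, 0.9, 1.8), (0.3, 1.2, 1.8), (0.3, 1.5, 1.8)]
movement = [(0.32, 0.92, 1.8), (0.29, 1.21, 1.8), (0.31, 1.49, 1.8)]
ok = matches_template(movement, template)
```

A fixed Euclidean tolerance keeps the sketch simple; real systems typically normalize for body size and timing before comparing.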

In the recommended system, the Kinect plays an important role: it works as the tracking instrument for the developed reality system [6]. The system recommended in this paper utilizes several of the most important features of the Kinect, namely speech recognition, joint estimation, and skeletal tracking. Skeletal tracking is one of the most important features of the Kinect, because it can be used to find the user's position, which in turn is used to guide the user in the assembly procedure; it is also used in gesture recognition. The assembly process brings the individual parts together and joins them into a single product. These assemblies can further be divided into full and partial assemblies. The basic mode, also called full assembly mode, teaches technicians the procedure for assembling a particular product. In partial assembly mode, the role of the Kinect becomes more important, as the technician is guided in detail through the assembly of the parts. When the assembly of one part is completed, the assembly of the next part can be started [12]. There are two different modes in which the system can work: gesture mode and speech mode.

Basing on the user’s acquaintance and know-how/ experience on the system, the user can select the mode, according to his/her convenience. If speech mode has been se- lected, the user will command by speaking. On selecting Gesture Mode, user inter- acts by using gestures, whereas the system guides by voice commands. For example, the START word of command to start with the system.

The research discusses in detail the use of the Kinect sensor for tracking and detection. The Kinect is used not only as a tracking device but also as an input device. The study is a step towards automating the repair and maintenance of vehicles. The recommended system will help reduce the workload of skilled experts on routine activities; instead, the system can be used for small jobs, which also simplifies the documentation process. The supervisor has no need to move around in this system [2]. Because the system checks each step, step-wise verification also becomes simpler. The system recommended in the study is likely to bring many opportunities for engineering companies that use Augmented Reality to simplify their complex tasks.

Overall, this system can contribute substantially to improving repair and maintenance processes.

2.16 Kinect in the Kitchen: Testing Depth Camera Interactions in Practical Home Environments

Galen (2013), from the University of California, Berkeley, carried out a study noting that depth cameras are being used in millions of houses owing to the development of the Microsoft Kinect.

Figure 13- Smart Home System illustration. Berkeley University Journal (2013)


This study took the Kinect into real kitchens. Although touchless gestural controls can prove difficult for some, they allow commands to be woven into the movements of cooking. The smart kitchen enables users to alter the scheme and to control it with other limbs when their hands are not free. The recommended system was tested with five different persons, who cooked in their own kitchens; it was found that placing the Kinect was simple, which contributed to the system's success. An important challenge was accidental commands in the kitchen [12].

The experiment showed that the users found the system easy and pleasant, with low levels of frustration. The implementation also made it possible to load music and recipes, which was helpful because the interaction style was general. All subjects expressed that although cooking was difficult and messy, they were quite happy with the experience. The observations were less favorable from the researchers' point of view. Accidental use of the navigational aid caused considerable confusion: besides accidental button presses, sweeping the hand while changing direction also caused problems. Some errors occurred when subjects pushed buttons while focusing elsewhere, and subjects often pushed the wrong buttons, mostly when pushing them too quickly; the authors attributed this to Kinect SDK smoothing. The subjects liked the lock buttons on the screens but rarely used them [17]. During the experiment, a few subjects did not realize that locking was not automatic but resulted from unintentionally pushing the button. It is recommended that, in future use, the locking system be made automatic, especially when the subject turns sideways (so that the position of the axis joint collapses inwards) towards the side counters or towards the counters behind. It is also recommended that unlocking be made a two-step process instead of a single-step one. The Kinect proved to be extremely useful during the experiment; in particular, the ease of positioning the Kinect surprised the users. The camera was placed so that the subject generally remained in the frame. One important aspect was the distance requirement of the experiment; to satisfy it, the cart was generally placed out of the kitchen and out of the way.

2.17 Kinect Gaming and Physiotherapy

Research conducted by Sachin and Singh (2014) from the University of Pune recommended a system that joins two applications of the Kinect: Kinect gaming and the use of the Kinect for physiotherapy. The recommended system performs its tasks based on critical features such as depth recognition, skeletal tracking, and gesture recognition. According to the study, the Kinect camera is the key instrument through which all the operations are implemented [2]. The movement of the subject's body was tracked by implementing skeletal tracking and by identifying key points on the skeleton of the human body. Depth recognition is another important feature of the system; it is carried out to segment the foreground and background of the image. Depending on the pixel values, the system is also able to separate a person from the background.

The Kinect is required to perform these operations, mainly because it can produce RGB and depth streams at a lower cost than the sensors in common use. The Kinect can measure the distance of any given point from the sensor, since it has a time-of-flight camera. To make use of this, the open Kinect driver framework is implemented, which can generate depth images. For running applications, the Kinect is normally used together with a console device [12]. The console device is quite costly; therefore, this study attempts to do away with the console device and instead tackles the problem of human skeletal tracking using the Microsoft Kinect alone. The study aims to make the most of the hardware: by eliminating the console device, the procedures are carried out by combining the Kinect with developed and refined system programs that perform the particular set of operations [15]. The study panel recommended the final project implementation, which can be utilized for the further development of applications.
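The foreground/background segmentation described above can be sketched as a band-pass test on each pixel's depth value. The thresholds and the plain-list "depth frame" are assumptions for the example; a real pipeline would operate on the sensor's depth stream.

```python
# Illustrative depth-based segmentation: every pixel whose depth falls
# inside a band around the subject is kept as foreground. Thresholds
# are invented for the sketch.

def segment_person(depth_frame, near_m=0.8, far_m=2.5):
    """Binary foreground mask from a 2-D grid of depth values (metres)."""
    return [[near_m <= d <= far_m for d in row] for row in depth_frame]

# A tiny 2x2 "frame": too-near and too-far pixels become background.
frame = [[0.5, 1.2],
         [3.0, 2.0]]
mask = segment_person(frame)
```

Because the test uses depth rather than color, the subject is separated from the background even when the colors are similar, which is the advantage the study attributes to the depth stream.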


3 Research Methodology

3.1 Introduction

This section lays out the procedures and methods employed in this research, in which documentary analysis will primarily be used. The section outlines results and findings obtained with previous research methods such as sampling, research design, and data analysis. Additionally, concerns have been raised about the applicability of the different Kinect innovations and discoveries (Bevilacqua, 2014); this research will address those concerns. An experimental analysis of the effectiveness of the Kinect in assisted living environments is crucial, as it helps Ambient Assisted Living (AAL) organizations benchmark against best standards and practices. In his research, Konstantinidis (2015) expressed the need for AAL organizations to adapt to external environments and patient needs as a strategy that helps to improve both the technical and practical application of the Kinect. This is particularly important, as most smart home environments are shifting towards a service culture and a staff reduction strategy with a more demanding clientele. This research will analyze results from clinical experiments with Kinect devices, such as camera tracking. In his research, Anastatiou (2011) analyzed the efficacy of the Kinect camera in tracking hand, elbow, and trunk movements.

In addition, a review of the available research shows that Kinect devices have been extensively researched and documented. Experimental research has been done on 3-D mapping improvements and body tracking. In this context, this research will analyze consequential advances in related technologies, such as GPU systems and sensors, that facilitate technological improvements and new Kinect applications. Technologies such as Mo-cap, Kinect v1, and Kinect v2 have been used to perform experiments in assisted living environments. Tests for these systems involve sitting, walking, and standing.

Figure 14- Pose Experiments, Kinect tests. (2013)


3.2 Model of the research

This research will employ a documentary analysis strategy and will primarily use experimental and clinical studies. Experimental results will be used to determine the impact of the Kinect and its different applications in assisted living environments.

The main advantage of documentary analysis is that it is cost-effective and relies on scientifically approved approaches (Clembers, 2001). Documentary analysis also tends to work with an unlimited scope, making the research simpler and logistically easier than other research methods. Results from clinical tests and applications were also used to answer the research objectives.

The Statistical Package for the Social Sciences (SPSS) was used to analyze all the collected data, after which descriptive metrics such as means, percentages, and frequencies were used for further analysis. Data interpretation was conducted with respect to the frame of reference of the research problem and objectives.

According to researchers such as Robinson (2003), the validity and reliability of the data collection methods directly determine the accuracy of the collected data. Reliability ensures that the instruments used yield consistent results. To ensure the objectivity and accuracy of the research, a separate department was tasked with auditing and inspecting the documents used. Cronbach's alpha was used to check the consistency of the obtained results. The alpha, which ranges from 0 to 1, measures reliability, with higher values indicating greater consistency. According to Dristern (1990), the minimum acceptable reliability value for a study is 0.6.
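The reliability check mentioned above can be made concrete. Cronbach's alpha is computed as alpha = k/(k-1) * (1 - sum of item variances / variance of the totals), where k is the number of items. The following is a minimal pure-Python sketch using the sample variance (n-1 denominator); the item scores are invented for illustration.

```python
# Sketch of Cronbach's alpha from per-item score lists.

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, equal lengths."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(var(item) for item in items)
    # Total score per respondent across all items:
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# Two items whose scores rise together give high internal consistency:
alpha = cronbach_alpha([[2, 4, 6], [1, 2, 3]])  # about 0.89, above the 0.6 floor
```

An alpha near 1 means the items vary together, i.e. the instrument measures one consistent construct, matching the 0.6 minimum cited from Dristern (1990).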

In this research, the research team also corrected inconsistencies and errors and modified the formulas used in order to increase accuracy.

3.3 Research Design

The research design employed in this study will outline the blueprint and plan for answering the research questions and fulfilling research objectives. According to Blumberg (2005), a research design shows the plan that will guide researchers in answering the research questions.

Although researchers concur that performing research using documentary analysis can be technically demanding, they agree that it is an important approach that can give researchers deeper insights, especially when combined with other methodologies (Flinter, 2009).


Figure 15- Research design

3.4 Primary Data

In the collection of data, more emphasis was placed on data that could be analyzed.

Quantitative: numerical data collected from questionnaires, interviews, and surveys. Quantitative data are easy to analyze and can be used to show patterns and trends. Graphs, pie charts, and tables can be used to further illustrate quantitative data, from which inferences can then be drawn. An email survey will be used because of its easy administration and the potential to survey a large number of respondents.

Qualitative: non-numerical data collected through methods such as one-on-one interviews and observations. Qualitative data can help counteract any bias that may result from quantitative data collection methods. Questions are asked directly of the interviewee or respondent.

3.5 Summary

Results from various journals, books, and other literature will be used to form an opinion on the use of the Kinect and its application in smart living environments. Importantly, this research seeks to outline future trends in Kinect applications and use in AAL environments. Although researchers in this field, such as Webster (2014), believe that the application of the Kinect to AAL is still in its infancy, this research will delve into the future of such applications and their relationship with other technologies, such as the Internet of Things (IoT) and the Olympus camera.


4 Data analysis and presentation

4.1 Introduction

For a comprehensive analysis, the following sections of the paper are organized in a documentary analysis manner. In this case, documentary analysis is used as a tool to gather evidence centering on the use of Microsoft Kinect applications, their weaknesses, and their use in assisted living environments. This section analyzes laboratory results from conducted experiments, surveys, and studies of the Kinect and its components.

The most important reason documentary analysis was used for this research is its efficiency: documented research papers and journals are easily accessible and their documented results verifiable. In this section, different research papers are analyzed to form an opinion on the future and applications of the Kinect in assisted living environments. General research data were used to design the final data analysis technique. This section analyzes existing protocols used in assisted living environments and proposes new protocols and areas of research. A key approach for this research is to build on the work of previous researchers such as Yang et al. (2015) and Gradinaru (2016), both of whom proposed new technologies for 3-D representation using sensors. In his research, Gradinaru (2016) designed new systems and software for capturing and displaying animated information.

Other related technologies involved in the development of Kinect applications, such as 3-D sensing tools for video and still cameras, were also analyzed.

Figure 16- Gradinaru (2016) graphical representation of system


Some of the key areas targeted for analysis include:

 Smart Home environments

 Movement Detection Models

 Internet of things and its impact on Kinect

 Skeletal Tracking systems

4.2 Smart Home environments

Smart home systems play a critical role in the creation and continuity of Kinect operations in assisted living environments. According to Kawatsu (2014), a smart home environment is one that creates interconnections between devices within a physical environment. In a smart home environment, people expect the technologies to improve their everyday life. Smart home systems can be applied to communication, safety, welfare, and appliances. The devices used in home system environments consist of communication modules, cameras, sensors, and actuators. Overall, a server is used to manage all the operations of the smart home environment.

In their research, Baeg et al. (2007) constructed from scratch a smart home environment in the research building of KITECH (Korea Institute of Industrial Technology).

This research aimed to demonstrate the efficacy and practicability of a robot-assisted home environment. It featured custom-made sensors, actuators, a robot, and a database.

The researchers made use of RFID (radio-frequency identification) technology to identify, track, and follow objects within the home system. RFID uses radio frequencies to track objects, and RFID tags were used to identify objects in the environment; objects carrying a tag were considered smart appliances. Apart from the smart environment, the conceptual framework consisted of servers and a robot. Smart objects were given sensor capabilities, which meant they could communicate with both the server and the robot.

The figure below shows the conceptual environment:


Figure 17- Conceptual Framework of a smart home environment

The smart environment was divided into layers. The first layer consisted of the real home environment, with its scattered objects and appliances. The second layer consisted of actuators and wireless sensors, including additional sensors such as temperature sensors, RFID readers, smart lights, and humidity and security sensors. The third level contained devices such as tables, chairs, and shelves, all fitted with RFID sensors to ease identification. The fourth level held a communication protocol that ensured reliable and accurate communication between the home server and the other devices in the vicinity. The server, which managed the relationships between the devices and the sensors, was on level five.

Figure 18- Smart home environment layered description


In this experiment, the main use of the robot was to provide several key functions: mapping, localization, object recognition, and interaction. To that end, the robot was equipped with ultraviolet sensors, cameras, ultrasound, good processing speed, and adequate memory.

For this experiment, specific home services replicating real home services were selected to be performed. The objective of the smart home environment was to give users close-to-real-life services. Some of the functions performed in the smart environment included object cleaning, running home errands, and executing home security functions.

Object cleaning: in this scenario, the service robot is tasked with tidying up the room or environment. The robot does this by arranging objects in a required or preset way.

RFID tags installed in the ceiling of the home are used to direct the robot's navigation and to indicate which objects to clean. The purpose of this part of the experiment was to investigate the potential use of robots in tasks such as laundry cleaning, home arrangement, and doing the dishes.

Performing errands: in this case, the robot is tasked with identifying and fetching specific objects or smart items around the smart home. Fetched objects carry RFID tags, which means they are easily identifiable within the network. The fetch function starts when the robot receives a command from a person. The robot then sends a request for the position of the object to be fetched; after receiving that information it moves to where the object is, grabs it and brings it back.
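The fetch sequence above (command, position request, navigation, grab, delivery) can be sketched as a small simulation. The class names, tag identifiers and positions below are hypothetical and serve only to make the sequence concrete; they are not taken from the paper.

```python
# Illustrative simulation of the fetch-errand sequence described above.
# Tag IDs, coordinates and method names are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class SmartItem:
    rfid_tag: str
    position: tuple  # (x, y) in metres within the room

class ServiceRobot:
    def __init__(self, position=(0.0, 0.0)):
        self.position = position

    def fetch(self, item: SmartItem, user_position: tuple) -> list:
        """Carry out the fetch sequence: locate tag, move, grab, deliver."""
        log = [f"requested position of tag {item.rfid_tag}"]
        self.position = item.position           # navigate to the tagged object
        log.append(f"moved to {self.position}, grabbed {item.rfid_tag}")
        self.position = user_position           # bring the object back
        log.append(f"delivered {item.rfid_tag} at {self.position}")
        return log

cup = SmartItem("EPC-0001", (3.0, 2.5))
robot = ServiceRobot()
steps = robot.fetch(cup, user_position=(0.5, 0.5))
```

Keeping each step in a log list mirrors how such a system would report its progress back over the home network.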

In this research, the researchers used two key modules: RFID interfaces and communication modules. The communication modules were operated with the ZigBee protocol, an open standard based on IEEE 802.15.4b that provides low-power wireless interconnection for different applications; ZigBee was used for all the devices. The RFID modules, on the other hand, used EPCglobal Gen2, which defines a standard for the use of RFID modules across applications.
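To make the RFID side concrete, the sketch below models a single tag read in the spirit of EPCglobal Gen2, where each tag reports a unique Electronic Product Code. The field names and reader identifiers are illustrative assumptions, not taken from the EPCglobal specification or the paper.

```python
# Hedged sketch of an RFID tag-read record, loosely following the EPCglobal
# Gen2 idea that each tag carries a unique Electronic Product Code (EPC).
# Field names and reader IDs are illustrative, not from the specification.
from dataclasses import dataclass

@dataclass
class TagRead:
    epc: str          # Electronic Product Code reported by the tag
    reader_id: str    # which fixed reader (e.g. ceiling-mounted) saw the tag
    rssi_dbm: float   # received signal strength, usable for rough locating
    timestamp: float  # seconds since the epoch

def latest_position_hint(reads: list) -> str:
    """Return the reader that most recently (and most strongly) saw the tag."""
    best = max(reads, key=lambda r: (r.timestamp, r.rssi_dbm))
    return best.reader_id

reads = [
    TagRead("urn:epc:id:sgtin:0001", "reader-kitchen", -62.0, 100.0),
    TagRead("urn:epc:id:sgtin:0001", "reader-livingroom", -48.0, 101.0),
]
```

A robot could use such reads as coarse position hints before switching to camera-based navigation for the final approach.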

The team used the physical layout below for the research:


Figure 19- Smart home environment layout

This paper outlines innovative ways to improve assisted living environments. The architecture employed and the use of RFID systems show that smart home systems can be created from readily available materials and technology. The scenarios performed by robots, such as cleaning and arranging, can be adopted in assisted living environments. According to the researchers, the goal was to create an environment in which people are served by robots that keep the environment in the required state. The robots employed in this research could help individuals in assisted living environments perform basic functions such as cleaning, washing or arranging the house.

With such developments in robotics and the creation of smart homes, Kinect v2 can be employed both for navigation and for dense map creation. Kinect v2, as opposed to v1, is built on the time-of-flight principle, which means it can even be used outside homes. The RFID sensors employed in this research can be particularly useful for mobile robot movement.

For robotic applications, the Kinect v2 sensor gives researchers considerably better results, primarily because of the ToF technology it employs. With ToF, accurate distance measurements to objects can be obtained and used. In addition, the high-resolution cameras capture a great deal of information, so home environments are mapped accurately, with fine detail and minimal error. Thanks to Kinect v2's active illumination, surrounding images are captured even in dark environments.
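The core of the time-of-flight principle mentioned above is that emitted light travels to the object and back, so the one-way distance is half the round trip. The sketch below is a didactic simplification: the Kinect v2 actually measures phase shifts of modulated infrared light rather than raw pulse times.

```python
# Simplified time-of-flight depth calculation: the measured round-trip time
# of light covers the distance twice, so the object distance is half of
# c * t. A didactic sketch, not the Kinect v2's actual phase-based pipeline.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to an object from the measured round-trip time of light."""
    return C * round_trip_s / 2.0

# A round trip of about 20 nanoseconds corresponds to roughly 3 metres.
d = tof_distance_m(20e-9)
```

The tiny times involved (tens of nanoseconds for room-scale distances) explain why ToF sensors measure phase shifts of a modulated signal instead of timing individual pulses directly.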

Research conducted by Hondori et al. (2013) gave important insights into the application of Microsoft Kinect v2 in a smart home setting. The research focused on gestures and made use of sensor fusion between Kinect and inertial sensors. The goal of the research was to assess the significance of smart home systems in helping post-stroke patients complete day-to-day activities. To achieve this, Microsoft Kinect was
