
Smart home systems play a critical role in the creation and continuity of Kinect operations in assisted living environments. According to Kawatsu (2014), a smart home environment is one that creates interconnections within a physical environment. In a smart home environment, people expect that the technologies can be used to improve their everyday life. Smart home systems can be applied to communication, safety, welfare and appliances. The devices used in a smart home environment consist of communication modules, cameras, sensors and actuators. Overall, a server is used to manage all the operations of the smart home environment.

In their research, Baeg et al (2007) constructed from scratch a smart home environment in the research building of KITECH (Korea Institute of Industrial Technology). This research aimed to demonstrate the efficacy and practicability of a robot-assisted home environment. The research featured custom-made sensors, actuators, a robot and a database.

The researchers made use of RFID (radio-frequency identification) technology to identify, track and follow objects within the home system. RFID uses radio frequency to track objects; RFID tags were used to identify objects in the environment, and objects with a tag were considered smart appliances. Apart from the smart environment, the conceptual framework consisted of servers and a robot. Smart objects were assigned sensor capabilities, which meant they could communicate with both the server and the robots.
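To make the role of the tags concrete, the following sketch models how a home server might keep track of tagged objects reported by fixed RFID readers. The class names, tag IDs and coordinates are hypothetical illustrations, not details from Baeg et al (2007).

from dataclasses import dataclass

@dataclass
class SmartObject:
    tag_id: str      # EPC-style RFID tag identifier (hypothetical format)
    name: str
    position: tuple  # (x, y) coordinates in the home's floor plan, metres

class HomeServer:
    """Tracks tagged objects so robots can query their last known position."""

    def __init__(self):
        self._objects = {}

    def register(self, obj):
        self._objects[obj.tag_id] = obj

    def on_tag_read(self, tag_id, reader_position):
        # A fixed RFID reader reports a tag sighting; update the object's
        # position to the reader's location (coarse localization).
        if tag_id in self._objects:
            self._objects[tag_id].position = reader_position

    def locate(self, name):
        # Robots ask the server where an object was last seen.
        for obj in self._objects.values():
            if obj.name == name:
                return obj.position
        return None

server = HomeServer()
server.register(SmartObject("EPC-0001", "mug", (1.0, 2.0)))
server.on_tag_read("EPC-0001", (3.5, 0.8))
print(server.locate("mug"))  # (3.5, 0.8)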

The figure below shows the conceptual environment:

Figure 17- Conceptual Framework of a smart home environment

The smart environment was divided into layers. The first layer consisted of the real home environment, which has a scattered arrangement of objects and appliances. The second layer consisted of actuators and wireless sensors; this level includes additional sensors like temperature sensors, RFID readers, smart lights, and humidity and security sensors. In level three there were devices like tables, chairs and shelves, which all had RFID sensors for ease of identification. In the fourth level there was a communication protocol which ensured reliable and accurate communication between the home server and other devices in the vicinity. The server, which managed the relationship between the devices and the sensors, was in level five.

Figure 18- Smart home environment layered description

In this experiment, the main use of the robot was to provide several key functions: mapping, localization, object recognition, and interaction. To that end, the robot was equipped with ultraviolet sensors, cameras, ultrasound, good processing speed and adequate memory.

For this experiment, specific home services were selected to be performed that replicated real home services. The objective of the smart home environment was to give users close-to-real-life services. Some of the functions to be performed in the smart environment included object cleaning, running home errands and executing home security functions.

Object cleaning: in this scenario, the service robot is tasked with tidying up the room or environment. The robot does this by arranging objects in a required or preset way. RFID tags installed in the ceiling of the home are used to direct the robot's navigation and indicate which objects to clean. The purpose of this part of the experiment was to investigate the potential use of robots in tasks such as laundry, arranging the home and doing dishes.

Performing errands: in this case, the robot is tasked with identifying and fetching specific objects or smart items around the smart home. Fetched objects have RFID tags, which means they are easily identifiable within the network. The fetch function works after receiving a command from a person. The robot then sends a request for the position of the object to be fetched; after receiving the information it moves to where the object is, grabs it and brings it back. A simplified version of this sequence is sketched below.
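The following minimal sketch shows the command, lookup, navigate, grab and return flow. All class and method names are hypothetical stand-ins; the paper does not publish its control code.

class Robot:
    """Trivial stand-in for the service robot; real navigation is far richer."""
    def move_to(self, pos): print(f"moving to {pos}")
    def grab(self, name): print(f"grabbing {name}")
    def release(self, name): print(f"releasing {name}")

def fetch(robot, tag_positions, object_name, user_position):
    # 1. A person commands the robot to fetch a tagged object.
    # 2. The robot requests the object's position from the home server
    #    (modelled here as a simple tag -> position lookup).
    target = tag_positions.get(object_name)
    if target is None:
        return f"{object_name} is not registered in the smart home"
    # 3. The robot moves to the object and grabs it...
    robot.move_to(target)
    robot.grab(object_name)
    # 4. ...then brings it back to the person.
    robot.move_to(user_position)
    robot.release(object_name)
    return f"delivered {object_name}"

print(fetch(Robot(), {"mug": (3.5, 0.8)}, "mug", (0.0, 0.0)))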

In this research, the researchers used two key modules: RFID interfaces and communication modules. The protocol used to operate the communication module was the ZigBee protocol. The ZigBee protocol is an open standard protocol based on IEEE 802.15.4b; it provides low-power wireless interconnection for different applications. The ZigBee protocol was used for all the devices. On the other hand, EPCglobal Gen2 was used for the RFID modules. EPCglobal Gen2 defines a standard for the use and operation of RFID modules.

The team used the physical layout below for the research:

Figure 19- Smart home environment layout

This paper outlines innovative ways which can help improve assisted living environments. The architecture employed and the use of RFID systems show that smart home systems can be created from available materials and technology. Scenarios performed by robots, such as cleaning and arranging, can be employed in assisted living environments. According to the researchers, the goal was to create an environment where people are served by robots; the robots work by keeping the environment as required. The robots employed in this research can be used to help individuals in assisted living environments perform basic functions like cleaning, washing or arranging the house.

With such developments in robotics and the creation of smart homes, Kinect v2 can be employed both for navigation and for dense map creation. The Kinect v2, as opposed to v1, is built on the time-of-flight principle, which means it can even be used outside homes. The RFID sensors employed in this research can be particularly useful for mobile robot movement.

For robotic applications, the Kinect v2 sensor has been used by researchers to provide much better results, primarily because of the ToF technology employed. By using ToF, accurate distance measurements for objects can be obtained. Also, due to the high-resolution cameras, a lot of information is captured. The result is that home environments are accurately mapped with fine details and minimal errors. With Kinect v2's active illumination, surrounding images are captured even in dark environments.
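As a worked illustration of the principle, a continuous-wave ToF camera such as the Kinect v2 infers distance from the phase shift of modulated infrared light. The 80 MHz modulation frequency below is an example value, not the sensor's exact specification.

import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    # Light travels to the object and back, so d = c * phase / (4 * pi * f).
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# A phase shift of pi/2 at 80 MHz corresponds to roughly 0.47 m.
print(f"{tof_depth(math.pi / 2, 80e6):.3f} m")

# The unambiguous range is c / (2f), about 1.87 m at 80 MHz, which is one
# reason real ToF sensors combine several modulation frequencies.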

Research conducted by Hondori et al (2013) gave important insights into the application of Microsoft Kinect in a smart home setting. The research focused on gestures and made use of sensor fusion between Kinect and inertial sensors. The goal of the research was to assess the significance of smart home systems in helping post-stroke patients complete day-to-day activities. To achieve this, Microsoft Kinect was used to monitor quantities such as spoon acceleration, wrist position, elbow position, shoulder joints and angular positions. The purpose was to distinguish between healthy and paralyzed individuals, a distinction that is a complex problem in assisted living environments. Microsoft Kinect and inertial sensors were successfully tested in these environments. The use of smart home systems to assist stroke patients was driven by the high cost associated with visiting rehab facilities. The convenience of having smart home systems would allow doctors and therapists to remotely assist clients, monitor patients and analyze improvements and progress.
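As an illustration, an angular position such as the elbow angle can be derived from three skeleton joint positions as follows. This reconstructs the general technique, not the authors' implementation; the joint coordinates are hypothetical.

import numpy as np

def joint_angle(shoulder, elbow, wrist):
    """Angle at the elbow, in degrees, between the upper arm and forearm."""
    upper = np.asarray(shoulder) - np.asarray(elbow)
    fore = np.asarray(wrist) - np.asarray(elbow)
    cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical 3-D joint positions in metres (Kinect skeleton coordinates)
print(f"{joint_angle((0.0, 0.4, 2.0), (0.0, 0.1, 2.0), (0.2, 0.1, 1.8)):.1f} deg")  # 90.0 deg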

As opposed to the smart home systems developed by Zheng et al (2013), the system developed and tested by Hondori et al (2013) did not rely on numeric integration of an inertial measurement unit (IMU). This research made use of inertial and Kinect sensors at the same time. The main activity used to record movements was intake gestures; critical activities like eating and drinking were selected. The setup included a Microsoft Kinect, sensor fusion and inertial sensors. Inertial sensors were placed on items like utensils, which recorded the movements of both the subject and the items they were using. A Kinect sensor was also placed on the table to monitor the individual's movements while eating and drinking.

Figure 20- Hondori et al (2013) system setup including inertial sensors and Kinect sensors

Individuals were asked to perform different tasks in order to record the experimental data.

Eating and drinking task: activities such as eating, cutting steak and drinking water were performed and repeated several times. These movements were then analysed as 3-D trajectories, as seen below.

Figure 21- Hondori et al (2013) 3-D trajectories

Body movements are measured in degrees.

Right elbow: changes in the range of 50-110 degrees. Left elbow: changes in the range of 65-115 degrees.
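To make the notion of a 3-D trajectory concrete, the following sketch computes the path length and mean speed of a short sequence of wrist positions. It illustrates the kind of analysis involved, not the authors' code, and the sample data is invented.

import numpy as np

def path_length(points):
    """Total length of a 3-D trajectory given as an (N, 3) array."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

fps = 30.0  # Kinect skeleton frame rate
wrist = np.array([[0.00, 0.10, 1.50],
                  [0.02, 0.15, 1.48],
                  [0.05, 0.22, 1.45],
                  [0.07, 0.30, 1.44]])  # hypothetical cup-to-mouth motion

length = path_length(wrist)
duration = (len(wrist) - 1) / fps
print(f"path {length:.3f} m, mean speed {length / duration:.2f} m/s")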

Kinect sensor data analysis: the figure above shows body movements and changes. The wrist and joint trajectories illustrate the movements of the individual's limbs while the head remains still.

Figure 22- Hondori et al (2013) experimental data on body movements

Figure 23- Hondori et al (2013) limb changes in task like drinking and eating

Figure 24- Hondori et al (2013) inertial sensor data from the individual's items

Data measured from the inertial sensors is illustrated by figure 24. The bias on the signal is approximately 9.81 m/s² due to gravity; this is adjusted and factored into each of the 3-D measuring units. It was found that the highest frequency was recorded while cutting the steak, while the magnitude stayed steady. The frequency during drinking was constant.
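A minimal sketch of the gravity adjustment follows, assuming for simplicity that the sensor's orientation is known; in practice the orientation is estimated from the IMU itself.

import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def remove_gravity(raw, gravity_dir):
    """Subtract the gravity component along a known unit direction."""
    g = G * gravity_dir / np.linalg.norm(gravity_dir)
    return raw - g

# Utensil at rest, sensor z-axis pointing up: motion acceleration ~ 0
raw = np.array([0.05, -0.02, 9.79])  # hypothetical raw accelerometer sample
print(remove_gravity(raw, np.array([0.0, 0.0, 1.0])))  # ~[0.05, -0.02, -0.02]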

This research showed that smart home environments could lessen the burden incurred by post-stroke patients. The systems could also give vital data to physicians for proper monitoring and study of patients. Microsoft Kinect and inertial sensors are vital for the system. The researchers demonstrated that it is possible to capture movements and positions such as angular displacement and limb gestures. While other researchers have performed similar research using on-body sensing techniques, this research relied solely on Kinect and inertial sensors.

The system used in this research can be used in clinical assisted living environments.

A different study conducted by Mohamed et al (2013) assessed how smart home systems can be used to assist individuals with disabilities while making use of Microsoft Kinect systems. The recommended systems aimed at monitoring elderly individuals; they recognized gestures and body actions and gave feedback through a network. The key goal of the experiment was to monitor elderly individuals in their natural environment. To this end, two projects were initiated, DOMUS and GUARDIAN ANGEL.

The objective of the GUARDIAN ANGEL project was to produce sensors that could be integrated into any media type. Monitoring all the various object parameters was a key objective of the experiment. According to the researchers, Microsoft Kinect was used because of its superior advantages compared to other sensors. Some of the key advantages included its RGB camera, depth camera and infrared transmitter.

Figure 25- Mohamed et al (2013) smart house used in the experiment

Some of the favorable characteristics of the Microsoft Kinect are listed in the table below:

Property                                          Specification
Field of view (horizontal, vertical, diagonal)    58° H, 45° V, 70° D
Depth image size                                  VGA (640x480)
Spatial x/y resolution                            3 mm
Depth z resolution (at 2 m distance from sensor)  1 cm
Maximum image throughput (frame rate)             60 FPS
Color image size                                  UXGA (1600x1200)
Data interface / power supply                     USB 2.0
Power consumption                                 2.25 W
Operating environment                             Indoors

Table 1- Characteristics of Microsoft Kinect components

Processing of the recorded data was done via three data streams generated by IR light reflected from the scene. The data is transmitted in three streams: image, depth and audio. The Kinect system was relied upon to give accurate 3-D information. The natural user interface is shown below.

Figure 26- Mohamed et al (2013) Natural User Interface (layers, top to bottom: application; processed data via the Natural Human Interaction Library; data streams: image, depth and audio; Kinect sensor)

Tested activities included gestures that were performed using hand positions. The Kinect sensor assessed the position using 20 joints, and the X and Y coordinates of each joint were calculated. Below are images of the gestures and postures. The application could detect gestures and postures. Two approaches were used to recognize gestures: algorithm-based and template-based. Because of the flexibility needed, the 1 dollar and N dollar algorithms were used. These algorithms can be implemented in different environments, even in a prototyping context. In this case, an act performed by an individual is recognized and compared to previously recorded sets of points. The 1 dollar algorithm recognizes a single continuous movement (a "unistroke"), while the N dollar algorithm handles gestures composed of several strokes ("multistrokes").

In the 1 dollar unistroke recognizer, four steps are used to match candidate gestures against stored templates (see the sketch after this list):

• Resampling
• Rotation based on the indicative angle
• Scaling and translation
• Score calculation
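A compact sketch of these four steps follows, based on Wobbrock et al's published $1 algorithm rather than Mohamed et al's exact code. The full recognizer additionally refines the rotation with a golden section search (mentioned below in connection with the Kinect toolbox), which is omitted here for brevity.

import math

N = 64        # number of points after resampling
SIZE = 250.0  # side of the reference square used for scaling

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def resample(pts, n=N):
    """Step 1: resample the stroke into n equidistantly spaced points."""
    pts = list(pts)
    interval = sum(math.dist(a, b) for a, b in zip(pts, pts[1:])) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # the new point starts the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def rotate_to_zero(pts):
    """Step 2: rotate so the indicative angle (centroid to first point) is zero."""
    cx, cy = centroid(pts)
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s, (x - cx) * s + (y - cy) * c)
            for x, y in pts]

def scale_and_translate(pts):
    """Step 3: scale to a reference square and move the centroid to the origin."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    pts = [(x * SIZE / w, y * SIZE / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def score(candidate, template):
    """Step 4: convert the mean point-to-point distance into a 0..1 score."""
    d = sum(math.dist(a, b) for a, b in zip(candidate, template)) / len(candidate)
    return 1.0 - d / (0.5 * math.hypot(SIZE, SIZE))

def normalize(pts):
    return scale_and_translate(rotate_to_zero(resample(pts)))

# A crude circle matches a circle template far better than a diagonal line.
circle = [(100 * math.cos(2 * math.pi * t / 20), 100 * math.sin(2 * math.pi * t / 20))
          for t in range(21)]
line = [(10.0 * t, 10.0 * t) for t in range(21)]
print(score(normalize(circle), normalize(circle)))  # 1.0
print(score(normalize(circle), normalize(line)))    # clearly lower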

Figure 27- Mohamed et al (2013) Waist detection posture

Figure 28- Mohamed et al (2013) Waist detection posture

The recognition scenario is performed as shown in the figure 29 below.

Figure 29- Mohamed et al (2013) Kinect procedure for gesture recognition

A toolbox built on the Kinect SDK was used for the experiment. The toolbox utilizes both golden section search and the 1 dollar method. Theoretically, the two methods both facilitate recognition of gestures like a circle, as illustrated below.

Figure 30- Mohamed et al (2013) Kinect toolbox recognition of circle gestures

Figure residue (recognition flow): waiting for skeleton moving; waiting for push/pull gesture; recognizer algorithm detecting unistroke gesture; network communication; device action.

The researchers concluded that ultimately many Kinect sensors may be needed to properly monitor a complete home environment such as a large hospital building. The researchers made use of a WIFI network in a mesh topology. When a gesture was detected, an alert was sent as a text message or a simple notification; communication of all frames was hence not required. The system worked such that in case of an emergency a text alert was transmitted, as sketched below.
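The following sketch illustrates this event-driven behaviour: nothing is transmitted for ordinary frames, and only a detected gesture triggers network traffic. The function names and the print-based transport are stand-ins for the real WIFI-mesh and text-message transmission.

def send_text_alert(message):
    print(f"[SMS] {message}")  # stand-in for the real text transmission

def on_frame(gesture):
    """Called once per processed Kinect frame; gesture is None if nothing
    was recognized in that frame."""
    if gesture is None:
        return  # no detection: no network traffic for this frame
    if gesture == "emergency":
        send_text_alert("Emergency gesture detected, notify caregiver")
    else:
        print(f"[alert] gesture '{gesture}' detected")

for g in [None, None, "push", None, "emergency"]:
    on_frame(g)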

The figure below shows the communication process.

Figure 31- Mohamed et al (2013) Kinect communication process

In general, the program can detect gestures and communicate them via text transmission. Unlike other smart home systems, in this research the sensors were non-intrusive to the users. The researchers successfully found an appropriate algorithm for gesture commands. For future experiments, the researchers aimed to use an EIB/KNX Ethernet gateway to accommodate more actuators.