
Automated Training and Maintenance through Kinect

The availability of Kinect at low cost and its provision of high-quality sensors has enabled researchers like Saket and Jagannath (2011) to conduct a study aimed at reducing the burden on mechanics involved in automobile maintenance undertaken in centralized workshops [1]. A prototype system that works with Kinect has been recommended.

Speech and gesture are the two modes of operation of this system. In speech mode, it can be controlled by various audio commands; it can also be controlled in gesture mode. Gesture recognition is performed by the Kinect system. The system, together with the RGB-depth camera, processes skeletal data by keeping a record of body joints. Gestures are recognized by checking user movements against predefined poses. Real-time image data streams are captured by the high-density camera, a 3-D model is generated, and it is superimposed on the data being received in real time.
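The paper does not give implementation details, but the matching step can be illustrated with a minimal sketch: assuming skeletal frames arrive as dictionaries of joint coordinates (the joint names `hand_right` and `head` and the tolerance value are hypothetical), a predefined gesture reduces to a geometric test that must hold over consecutive frames.

```python
# Illustrative only: a "raise right hand" gesture checked against skeletal frames.
def is_hand_raised(joints: dict, tolerance: float = 0.05) -> bool:
    """True when the right hand joint is held above the head joint (y axis)."""
    return joints["hand_right"][1] > joints["head"][1] + tolerance

def matches_start_gesture(frames: list) -> bool:
    """A gesture counts as recognized when the pose holds across all frames."""
    return bool(frames) and all(is_hand_raised(f) for f in frames)

# Example skeletal frames containing only the joints used by the check.
frames = [
    {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.80, 2.0)},
    {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.75, 2.0)},
]
print(matches_start_gesture(frames))  # True
```

A real recognizer would of course track many more joints and tolerate noise, but the same compare-against-a-predefined-pose idea underlies the description above.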

In the recommended system, Kinect plays an important role: it works as the tracking instrument for the developed reality system [6]. The system recommended in this paper utilizes a few of the most important features of Kinect, namely speech recognition, joint estimation and skeletal tracking. Skeletal tracking is one of the most important features of Kinect, because it allows the user's position to be found, which is used for guiding the user through the assembly procedure; it is also used for gesture recognition. This assembly brings the individual parts together and joins them into a single product. Assemblies can further be divided into full and partial assemblies. The basic mode, also called full assembly mode, teaches technicians the procedure for assembling a particular product. In partial assembly mode, the role of Kinect becomes more important, as the technician is guided in detail through the assembly of the parts. When the assembly of one part is completed, the assembly of the next part can be started [12]. There are two different modes in which the system can work, and these are gesture and speech modes.

Based on the user's acquaintance with and experience of the system, the user can select the mode according to his or her convenience. If speech mode has been selected, the user gives commands by speaking. In gesture mode, the user interacts using gestures, while the system guides by voice commands. For example, the START command starts the system.

The research has discussed in detail the use of the Kinect sensor for tracking and detection. Kinect is used not only as a tracking device but also as an input device. The study is a step towards making the repair and maintenance of vehicles automatic. The recommended system will assist in reducing the workload on skilled experts for routine activities; instead, this system can be used for small jobs. By doing this, the documentation process also becomes simple. The supervisor has no need to roam around in this system [2]. The system keeps a check on each step, so step-wise verification also becomes simpler. The system recommended in the study is likely to bring many opportunities for engineering, as depth cameras are being used in millions of houses due to developments in Microsoft Kinect.

Figure 13- Smart Home System illustration. Berkley University Journal et al. (2013)

This study has taken Kinect to real kitchens. Although touchless gestural controls can prove difficult for some, they enable commands to be transformed into cooking movements. This smart kitchen enables users to alter the scheme and control the system with other limbs when their hands are full. The recommended system was tested with 5 different persons, who cooked in their respective kitchens and identified that placing the Kinect was simple and a reason for their success. An important challenge was accidental commands in the kitchen [12].

The experiment showed that the users found the system easy and pleasing, with low levels of frustration. It was also felt that the system made it possible to load music and recipes, which was helpful, as the interaction style was general. All subjects expressed that although cooking was difficult and messy, they were quite happy with the experience. The observations were not all favorable in the view of those conducting the research. Accidental use of the navigational aid caused a lot of mess. Besides accidental pressing of buttons while changing direction, sweeping hand movements also caused problems. Some errors occurred when the subjects pushed buttons while focusing elsewhere. Another problem was that the subjects often pushed the wrong buttons, mostly because they pushed them too quickly; the authors attributed this to Kinect SDK smoothing. The subjects liked the lock buttons on the screens but rarely used them [17]. During the experiment, a few subjects did not realize that the lock was not automatic but was the result of accidentally pushing the button. It is recommended that, for future use, the locking system be made automatic, especially when the subject turns sideways towards the side counters or the counters behind (as a result of which the position of the axis joint collapses towards the inner side). For this system, it is recommended to make unlocking a two-step process instead of a single-step one. The Kinect proved to be extremely useful during the experiment; in particular, the ease of positioning the Kinect surprised the users. During the experiment, the camera was placed so that the subject generally remained in the frame. One important aspect was the distance requirement of the experiment; to meet it, the cart was generally placed out of the kitchen and out of the way.

2.16 Kinect Gaming and Physiotherapy

Research conducted by Sachin and Singh (2014) from the University of Pune recommended a system that joins two applications of Kinect: Kinect gaming and the use of Kinect for physiotherapy. The recommended system undertakes its tasks based on critical features such as depth recognition, skeletal tracking and gesture recognition. The Kinect camera is the key instrument, as per the studies, through which all the operations are implemented [2]. The movement of the subject's body was tracked by implementing skeletal tracking and by identifying key points on the skeleton of the human body. Depth recognition is another important feature of the system. It is carried out to segment the foreground and background of the image. Depending on pixel color, the system also has the ability to separate a person from the background.

Kinect is required to conduct these operations. One of the main reasons is that it can produce RGB and depth streams at a lower cost than the sensors in common use. Kinect can measure the distance of any given point from the sensor, as it has a time-of-flight camera. To undertake this, the open Kinect driver framework is implemented, which can generate depth images. For most applications, Kinect is normally used together with a console device [12]. The console device is quite costly; therefore, this study attempts to do away with the console device and instead tackle the problem of tracking the human skeleton using Microsoft Kinect alone. In this study, an effort has been made to make the most of the hardware: by eliminating the console device, the procedures are conducted by incorporating Kinect with developed and refined system programs that undertake the particular set of operations [15]. The study panel has recommended a final project implementation which can be utilized for further development of applications.
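As a rough illustration of the foreground/background segmentation described above (not the authors' implementation), a depth frame can simply be thresholded so that pixels within an assumed near/far range are kept as the person and everything else is treated as background. The array layout and the 500-2000 mm range are assumptions made for the sketch.

```python
import numpy as np

def segment_foreground(depth_mm: np.ndarray, near: int = 500, far: int = 2000) -> np.ndarray:
    """Boolean mask of pixels whose depth (in millimetres) lies in [near, far].

    Pixels in that band are treated as the person standing in front of the
    sensor; everything nearer or farther is treated as background.
    """
    return (depth_mm >= near) & (depth_mm <= far)

# Tiny synthetic 3x4 depth frame: the centre values stand in for a person.
depth = np.array([
    [4000, 4000, 1500, 1500],
    [4000, 1400, 1450, 4000],
    [4000, 1600, 1550, 4000],
])
print(segment_foreground(depth).astype(int))
```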

3 Research Methodology

3.1 Introduction

This section lays out the procedures and methods employed in this research. Documentary analysis will primarily be used. The section outlines the results and facts from previous research, covering methods such as sampling, research design and data analysis. Additionally, concerns have been raised about the applicability of the different Kinect innovations and discoveries (Bevilacqua, 2014); this research will address those concerns. An experimental analysis of the effectiveness of Kinect in assisted living environments is crucial, as it helps Ambient Assisted Living (AAL) organizations benchmark against best standards and practices. In his research, Konstantinidis (2015) expressed the need for AAL organizations to adapt to external environments and patient needs as a strategy that helps in improving both the technical and practical application of Kinect. This is particularly important as most smart home environments are shifting towards a service culture and a staff reduction strategy while serving a more demanding clientele. This research will analyze results from clinical experiments with Kinect devices, such as camera tracking. In his research, Anastatiou (2011) analyzed the efficacy of the Kinect camera in tracking hand, elbow and trunk movements.

In addition, a glimpse of the available research shows that Kinect devices have been extensively researched and documented. Experimental research has been done on 3-D mapping, technological improvements and body tracking. In this context, this research will analyze consequential advances in related technologies, such as GPU systems and sensors, that facilitate technological improvements and new Kinect applications. Technologies like Mo-cap, Kinect v1 and Kinect v2 have been used to properly perform experiments in assisted living environments. Tests for this system involve sitting, walking and standing.

Figure 14- Pose Experiments, Kinect tests. (2013)

3.2 Model of the research

This research will employ a documentary analysis strategy and will primarily use experimental and clinical studies. Experimental results will be used to determine the impact of Kinect and its different applications in assisted living environments.

The main advantage of documentary analysis is that it is cost effective and relies on scientifically approved approaches to conduct the study (Clembers, 2001). Documentary analysis also tends to work with an unlimited scope, making the research simple and logistically easier compared to other research methods. Results from clinical tests and applications were also used to answer the research objectives.

The Statistical Package for the Social Sciences (SPSS) was used to analyze all the collected data, after which descriptive metrics like means, averages, percentages and frequencies were used for further analysis. Data interpretation was conducted with respect to the frame of reference of the research problem and objectives.

According to researchers like Robinson (2003), the validity and reliability of data collection methods directly determine the accuracy of the collected data. Reliability ensures that the instruments used yield consistent results. To ensure the objectivity and accuracy of the research, a different department was tasked with auditing and inspecting the documents used in the research. Cronbach's Alpha was used to check for consistency in the obtained results. The Alpha, which ranges from 0 to 1, measures the level of reliability, with higher values indicating greater reliability. According to Dristern (1990), the minimum acceptable reliability value for a research study is 0.6.
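For reference, Cronbach's Alpha can be computed directly from a respondents-by-items score matrix as alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores). The sketch below uses NumPy and synthetic questionnaire scores; the data and the 1-5 scale are illustrative, not taken from the study.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)       # per-item variance
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic example: 5 respondents answering 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # values >= 0.6 taken as acceptable here
```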

In this research, the research team also corrected inconsistencies and errors and modified the formulas used in order to increase accuracy.

3.3 Research Design

The research design employed in this study will outline the blueprint and plan for answering the research questions and fulfilling research objectives. According to Blumberg (2005), a research design shows the plan that will guide researchers in answering the research questions.

Although researchers concur that it is sometimes technically demanding to perform research using documentary analysis, they agree that it is an important approach which can help researchers gain deeper insights, especially if they use a combination of methodologies (Flinter, 2009).

Figure 15- Research design

3.4 Primary Data

In the collection of data, more emphasis was placed on data that could be analyzed.

Quantitative: This will entail numerical data collected from questionnaires, interviews and surveys. Quantitative data are easily analyzable and can be used to show patterns and trends. Graphs, pie charts and tables can be used to further illustrate quantitative data, which can then be used to draw inferences. An email survey will be used because of its easy accessibility and the potential to survey a large number of respondents.

Qualitative: These are non-numerical data collected through methods like one-on-one interviews and observations. Qualitative data can help counter any bias that may result from quantitative data collection methods. Questions are asked directly of the interviewee or respondent.

3.5 Summary

Results from various journals, books and literature sets will be used to form an opinion on the use of Kinect and its application in smart living environments. Importantly, this research seeks to outline the future trends in Kinect applications and use in AAL environments. Although researchers in this field, such as Webster (2014), believe the application of Kinect to AAL is still in its infancy, this research will delve into the future of such applications and their relationship with other technologies like the IoT (Internet of Things) and the Olympus camera.

4 Data analysis and presentation

4.1 Introduction

For comprehensive analysis, the following sections of the paper are organized in a documentary-analysis manner. In this case, documentary analysis is used as a tool to gather evidence centring on the use of Microsoft Kinect, its applications, its weaknesses and its use in assisted living environments. This section will analyze laboratory results from conducted experiments, surveys and studies on Kinect and Kinect components.

The most important reason documentary analysis was used for this research is that it is efficient. In our case, documented research papers and journals are easily accessible and their documented results verifiable. In this section, different research papers are analyzed to form an opinion on the future and applications of Kinect in assisted living environments. General research data was used to design the final data analysis technique. This section analyzed existing protocols used in assisted living environments and proposed new protocols and areas of research. The key approach for this research is to build on the works of previous researchers such as Yang et al. (2015) and Gradinaru (2016), both of whom proposed new technologies for 3-D representation using sensors. In his research, Gradinaru (2016) designed new systems and software for capturing and displaying animated information.

Other sister technologies involved in the development of Kinect applications, such as 3-D sensing tools for video and still cameras, were also analyzed.

Figure 16- Gradinaru (2016) graphical representation of system

Some of the key areas targeted for analysis include:

• Smart Home environments
• Movement Detection Models
• Internet of Things and its impact on Kinect
• Skeletal Tracking systems

4.2 Smart Home environments

Smart home systems play a critical role in the creation and continuity of Kinect operations in assisted living environments. According to Kawatsu (2014), a smart home environment is one that creates interconnections within a physical environment. In a smart home environment, people expect that the technologies can be used to improve their everyday life. Applications of smart home systems include communication, safety, welfare and appliances. The devices used in a home system environment consist of communication modules, cameras, sensors and actuators. Overall, a server is used to manage all the operations of the smart home environment.

In their research, Baeg et al. (2007) constructed from scratch a smart home environment in the research building of KITECH (Korea Institute of Industrial Technology). This research aimed to demonstrate the efficacy and practicability of a robot-assisted home environment. The research featured custom-made sensors, actuators, a robot and a database.

The researchers made use of RFID (radio-frequency identification) technology to identify, track and follow objects within the home system. RFID uses radio frequency to track objects, and an RFID tag was used to identify objects in the environment. Basically, objects with the tag were considered smart appliances. Apart from the smart environment, the conceptual framework consisted of servers and a robot. Smart objects were assigned sensor capabilities, which meant they could communicate with both the server and the robots.

The figure below shows the conceptual environment:

Figure 17- Conceptual Framework of a smart home environment

The smart environment was divided into layers. The first layer consisted of the real home environment, which has a scattered arrangement of objects and appliances. The second layer consisted of actuators and wireless sensors; this level includes additional sensors like temperature sensors, RFID readers, smart lights, and humidity and security sensors. In level three there were devices like tables, chairs and shelves, all of which had RFID sensors for ease of identification. In the fourth level there was a communication protocol which ensured reliable and accurate communication between the home server and the other devices in the vicinity. The server, which managed the relationship between the devices and the sensors, was in level five.

Figure 18- Smart home environment layered description
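One way to picture this layered description is as a simple data structure. The sketch below encodes the five levels as Python objects; the layer names and component lists are paraphrased from the description above, not taken from the original paper's software.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    level: int
    name: str
    components: list = field(default_factory=list)

# Paraphrase of the five levels described above (illustrative labels only).
smart_home_layers = [
    Layer(1, "Real home environment", ["scattered objects", "appliances"]),
    Layer(2, "Actuators and wireless sensors",
          ["temperature sensors", "RFID readers", "smart lights",
           "humidity sensors", "security sensors"]),
    Layer(3, "Tagged furniture", ["tables", "chairs", "shelves"]),
    Layer(4, "Communication protocol", ["reliable server-device messaging"]),
    Layer(5, "Home server", ["manages devices and sensors"]),
]

for layer in smart_home_layers:
    print(f"Level {layer.level}: {layer.name} -> {', '.join(layer.components)}")
```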

In this experiment, the main use of the robot was to enable several key functions: mapping, localization, object recognition, and interaction. To that end, the robot was equipped with ultraviolet sensors, cameras, ultrasound, a good processing speed and adequate memory.

For this experiment, specific home services that replicated real home services were selected to be performed. The objective of the smart home environment was to give users close-to-real-life services. Some of the functions to be performed in the smart environment included object cleaning, running home errands and executing home security functions.

Object cleaning: in this scenario, the service robot is tasked with tidying up the room or environment. The robot does this by arranging objects in a required or preset way. RFID tags installed in the roof of the home are used to direct the robot's navigation and to indicate which objects to clean. The purpose of this part of the experiment was to investigate the potential use of robots in tasks such as laundry cleaning, home arrangement and chores like doing dishes.

Performing errands: in this case, the robot is tasked with identifying and fetching specific objects or smart items around the smart home. The fetched objects have RFID tags, which means they are easily identifiable within the network. The fetch function works after receiving a command from a person. The robot then sends a request for the position of the object to be fetched; after receiving this information, it moves to where the object is, grabs it and brings it back.
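The fetch flow described above can be sketched as a small simulation. The class and method names (`HomeServer`, `ServiceRobot.fetch`, `locate`) are hypothetical and only illustrate the command-request-move-return sequence, not Baeg et al.'s actual software.

```python
from dataclasses import dataclass

@dataclass
class FetchRequest:
    tag_id: str  # RFID tag of the requested smart object

class HomeServer:
    """Toy server that knows where each tagged object currently sits."""
    def __init__(self, object_positions: dict):
        self.object_positions = object_positions  # tag_id -> (x, y) position

    def locate(self, request: FetchRequest) -> tuple:
        return self.object_positions[request.tag_id]

class ServiceRobot:
    def __init__(self, server: HomeServer):
        self.server = server
        self.position = (0.0, 0.0)

    def fetch(self, tag_id: str, user_position: tuple = (0.0, 0.0)) -> None:
        # 1. Ask the home server for the tagged object's position.
        target = self.server.locate(FetchRequest(tag_id))
        # 2. Move to the object and pick it up.
        self.position = target
        print(f"Grabbed {tag_id} at {target}")
        # 3. Bring it back to the user.
        self.position = user_position
        print(f"Delivered {tag_id} to the user at {user_position}")

server = HomeServer({"mug-01": (3.2, 1.5)})
ServiceRobot(server).fetch("mug-01")
```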

In this research, the researchers used two key modules: RFID interfaces and communication modules. The protocol used to operate the communication module was the ZigBee protocol. The ZigBee protocol is an open standard protocol based on 802.15.4b; it provides low-power, wireless interconnections for different applications. The ZigBee protocol was used for all the devices. On the other