
4. IMPLEMENTATION

4.4 Process data history

4.4.2 Process data manipulation

Process data files are transferred into a specific folder inside the TUT-AM-EC2 instance. The folder in question has its own username and usage rights, and it serves only as a landing place for the data files. The folder is monitored with a Node.js backend program. Each new file is handed over to a Line-by-Line reader and handled according to its line count. The steps from the arrival of a new file until the file is entirely processed are detailed in Figure 31.

Figure 31. Flowchart for the History Data storing process

As can be noticed from the figure above, after a line is read it is processed in two independent threads: one for writing the records into the AWS RDS MySQL database and one for forming a data model of the records to be sent to the IoT-Ticket Data Server. For more detail, Figure 32 (below) portrays the program architecture for file processing as a Unified Modeling Language (UML) chart. As can be observed, the architecture is more elementary when compared with the real-time monitoring.

Figure 32. Program architecture for History Data processing

As portrayed in Figure 32, the entire file processing can be comprehended as one component (History Data manipulation and storing). This component is formed from a multitude of individual components acting together towards the outcome. All the software components are deployed under the FileService folder within the TUT-AM-EC2 instance, and they include the log4js logging feature. At the initial phase, mainService.js is launched with the Node.js forever npm module. Inside mainService.js, a watchFolder function is deployed for monitoring changes in the folder dedicated to the FTP server. The function is implemented with an npm module called watch. The watch module can notice a multitude of changes, but here only the arrival of a new file is acted upon. After a file arrives, the watch module provides the file name to the function. The file in question is moved into an AWS S3 bucket with the Node.js aws-sdk module. At the same time, the file name is provided to the Line-by-Line reader function, developed around the Node.js npm linebyline module. A sketch of how these modules could be wired together is shown below.
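The following listing is a minimal sketch of the mainService.js wiring; the folder path, bucket name and handleLine dispatcher are illustrative assumptions, while watch, aws-sdk, linebyline and log4js are the modules named in the text.

```javascript
// mainService.js sketch: folder watching, S3 upload and Line-by-Line reading.
// FTP_FOLDER, the bucket name and handleLine are illustrative assumptions.
var watch = require('watch');
var AWS = require('aws-sdk');
var readline = require('linebyline');
var fs = require('fs');
var path = require('path');
var log4js = require('log4js');

var logger = log4js.getLogger('mainService');
var s3 = new AWS.S3();
var FTP_FOLDER = '/home/ftpuser/processdata'; // assumed FTP-dedicated folder

// watchFolder: react only to new files arriving in the monitored folder
function watchFolder(folder) {
  watch.createMonitor(folder, function (monitor) {
    monitor.on('created', function (file, stat) {
      logger.info('New process data file: ' + file);
      // Move a copy of the file into the AWS S3 bucket (bucket name assumed)
      s3.upload({
        Bucket: 'tut-am-processdata',
        Key: path.basename(file),
        Body: fs.createReadStream(file)
      }, function (err) {
        if (err) { logger.error('S3 upload failed: ' + err); }
      });
      // Hand the same file name to the Line-by-Line reader
      readFile(file);
    });
  });
}

// Line-by-Line reader built around the linebyline module
function readFile(file) {
  var rl = readline(file);
  rl.on('line', function (line, lineCount) {
    handleLine(line, lineCount); // manipulation chosen by line count
  }).on('error', function (err) {
    logger.error('Reader error: ' + err);
  });
}

function handleLine(line, lineCount) {
  // dispatch according to line count; sketched in the later listings
}

watchFolder(FTP_FOLDER);
```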

The Line-by-Line reader then commences accessing the file. According to the line count, a specific manipulation is carried out. The initial line contains the basic information of the finished process. One important record inside this line is the identity of the process in question (CMT or COAXwire); database interactions and IoT-Ticket interactions are handled accordingly. From the line, the start time is extracted and converted into a millisecond record. The reason for this comes from the IoT-Ticket Data Server, where timestamps are handled as Unix time (epoch time) at millisecond accuracy. Together with the rest of the data from the initial line, a data model is structured for later sending the basic process information to the IoT-Ticket Data Server, under the ProcessData device. One additional data model is formulated to hold the start and stop times of the process in millisecond form. This data model is transferred into IoT-Ticket after the file handling has ended. The reason for two distinct data models is the need to update the mentioned timestamp values within the Production path in the ProcessMonitor device: two different device interactions require two different REST requests. Recognition of the first line also immediately updates a Boolean variable inside the ProcessData device. The updated variable is chosen according to the process the file concerns. These timestamp values and Boolean variables are later accessed when triggering a report creation; more detailed information can be found in Chapter 4.5.

Simultaneously, the first line of the file is provided for the DatabaseInteraction component. The database is constructed with a one-to-many architecture, as illustrated in the EER (Enhanced entity-relationship) model in Figure 33. Basic information from the process is stored inside the Production table, where buildPlatform is the primary key. Data from the lines holding the process variables are stored within the corresponding buildPlatform table. A new buildPlatform table is formed each time a new process file is accessed. The build platform is the identification of a certain process run. This identification is formed by the robot when starting a new process; the format comes from the current date and timestamp, which enables each process to be identified with a distinctive ID. When handling the initial line of the file, the DatabaseInteraction component formulates the SQL query and writes the data inside the Production table of the AWS MySQL database.
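The listing below sketches the first-line handling described above. The semicolon-separated field layout, the datanode names and the paths are assumptions, since the exact file format is not reproduced here; only the CMT/COAXwire distinction and the millisecond conversion come from the text.

```javascript
// handleFirstLine sketch: millisecond conversion and the data models.
// The field layout, record names and paths are assumptions.
function handleFirstLine(line) {
  var fields = line.split(';');
  var processName = fields[0];                    // 'CMT' or 'COAXwire'
  var startMs = new Date(fields[1]).getTime();    // Unix time in milliseconds

  // Data model 1: basic process information for the ProcessData device
  var basicInfo = [
    { name: 'process',   path: 'BasicInfo', v: processName, ts: startMs },
    { name: 'startTime', path: 'BasicInfo', v: startMs,     ts: startMs }
    // ...remaining records of the initial line
  ];

  // Data model 2: start/stop times in millisecond form for the
  // Production path of the ProcessMonitor device (stop time added later)
  var productionTimes = [
    { name: 'startTime', path: 'Production', v: startMs, ts: startMs }
  ];

  // Boolean variable chosen according to the process, accessed later
  // when a report creation is triggered (name and value assumed)
  var reportTrigger = [
    { name: processName + 'Finished', path: 'Reporting', v: true, ts: startMs }
  ];

  return { processName: processName, startMs: startMs,
           basicInfo: basicInfo, productionTimes: productionTimes,
           reportTrigger: reportTrigger };
}
```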

Figure 33. Process data - database structure
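A possible shape for the DatabaseInteraction first-line queries, matching the one-to-many structure of Figure 33, is sketched below with the Node.js mysql module. The connection details and the variable column names are assumptions, as the EER model is not reproduced here.

```javascript
// DatabaseInteraction sketch for the initial line; connection details and
// variable column names are assumptions, the structure follows Figure 33.
var mysql = require('mysql');
var connection = mysql.createConnection({
  host: 'tut-am.rds.amazonaws.com',   // assumed AWS RDS endpoint
  user: 'fileservice',
  password: 'secret',
  database: 'processdata'
});

function storeFirstLine(buildPlatform, processName, startMs) {
  // One row in the Production table (buildPlatform is the primary key)
  connection.query(
    'INSERT INTO Production (buildPlatform, process, startTime) VALUES (?, ?, ?)',
    [buildPlatform, processName, startMs],
    function (err) { if (err) { throw err; } }
  );
  // A new table, named after the build platform ID, for the variable lines
  // ('??' escapes an identifier in the mysql module)
  connection.query(
    'CREATE TABLE ?? (timestamp BIGINT, weldingCurrent DOUBLE, ' +
    'voltage DOUBLE, wireFeed DOUBLE)',
    [buildPlatform],
    function (err) { if (err) { throw err; } }
  );
}
```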

Next comes the second line of the file. This is the first line holding the variables from the process. The line is provided for both the DatabaseInteraction component and the IoTdataModel function. The DatabaseInteraction component verifies the line count, forms the SQL query and stores the data inside the corresponding buildPlatform table. The connection with the IoT-Ticket Data Server is slower than the internal connection with AWS RDS MySQL. For this reason, a data model is formed to hold the data from the process records in a JSON array structure. This structure is monitored, and each time it reaches 10 000 JSON array objects the data portion is sent to the IoT-Ticket Data Server within the body of an HTTP message. After the file handling is finished, the remaining data in the JSON array is sent to the IoT-Ticket Data Server. The IoTdataModel function handles the formation of the JSON structure. With the function call, the process start time in Unix-time form is provided.
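The 10 000-object threshold could be monitored along the following lines. The buffer and flush mechanics are an illustrative assumption; only the threshold and the final flush of remaining data come from the text.

```javascript
// JSON array monitoring: the 10 000-object threshold comes from the text,
// the buffer and flush mechanics are an illustrative assumption.
var BATCH_LIMIT = 10000;
var recordBuffer = [];

function pushRecords(records) {
  Array.prototype.push.apply(recordBuffer, records);
  if (recordBuffer.length >= BATCH_LIMIT) {
    flushRecords();
  }
}

function flushRecords() {
  if (recordBuffer.length === 0) { return; }
  var portion = recordBuffer;
  recordBuffer = [];
  // Send the portion in the body of one HTTP message (see the methods
  // component sketch at the end of this section)
  sendToIoTTicket(portion, 'ProcessData', function () {});
}

// Called when the Line-by-Line reader has handled the whole file
function onFileFinished() {
  flushRecords(); // remaining data is sent as the final portion
}
```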

The function extracts the timestamp (milliseconds from the start of the process) from the provided process line and adds the value to the provided start timestamp. A timestamp is thus calculated for all the records within the line. The function forms a temporary JSON structure for each record. These structures are then pushed inside another JSON array holding the records. The reason for such a double-array action arises from the fact that IoT-Ticket datanodes are described with identical keywords; each datanode therefore needs to be handled as an individual object.
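A sketch of the IoTdataModel function under these assumptions (one millisecond offset at the start of each line, followed by the variable values) is shown below.

```javascript
// IoTdataModel sketch: one millisecond offset at the start of each line,
// followed by the variable values (layout assumed). The datanodes argument
// holds the Datanode-path pairs read from datanodes.json.
function IoTdataModel(line, startMs, datanodes) {
  var fields = line.split(';');
  var offsetMs = parseInt(fields[0], 10);   // milliseconds from process start
  var ts = startMs + offsetMs;              // absolute Unix-time timestamp

  var records = [];
  for (var i = 1; i < fields.length; i++) {
    var node = datanodes[i - 1];
    // Identical keywords (name/path/v/ts) for every datanode, so each
    // record must be its own element in the JSON array
    records.push({ name: node.name, path: node.path,
                   v: parseFloat(fields[i]), ts: ts });
  }
  return records; // pushed into the outer JSON array by the caller
}
```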

Each line holding the process variables is processed in a similar way: the database is updated and the records from the line are pushed into the JSON array, while the size of the JSON array is monitored. As the file is handled, the Line-by-Line reader also watches every line for the keyword 'end'. From this record the final line is recognized, and the stop time for the process is found after the keyword. The Production table of the database is then updated with the corresponding time.
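The end-of-file handling could look as follows; the layout of the 'end' line and the stopTime column name are assumptions, and connection refers to the mysql connection of the earlier DatabaseInteraction sketch.

```javascript
// Final-line handling sketch; the 'end' line layout and the stopTime
// column name are assumptions.
function checkForEnd(line, buildPlatform, productionTimes) {
  if (line.indexOf('end') !== 0) { return false; }
  var stopMs = new Date(line.split(';')[1]).getTime(); // time after the keyword
  // Update the Production table with the corresponding time
  connection.query(
    'UPDATE Production SET stopTime = ? WHERE buildPlatform = ?',
    [stopMs, buildPlatform],
    function (err) { if (err) { throw err; } }
  );
  // Complete the data model holding the start and stop times for IoT-Ticket
  productionTimes.push({ name: 'stopTime', path: 'Production',
                         v: stopMs, ts: stopMs });
  return true;
}
```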

Equivalently, the data model for the basic information (for IoT-Ticket) is now updated with the end time. Simultaneously, the second data model holding the start and stop times in millisecond form and the data model holding the variable information for triggering a report creation are updated (for IoT-Ticket). All the data models are now complete, and the records are sent to the IoT-Ticket Data Server via the methods component. The administrator of the solution has beforehand prepared a device inside the IoT-Ticket Data Server for holding the process data. The returned device ID is stored inside the devices.json file. The actual data is stored with Datanode-path combinations, similar to the real-time monitoring data. These Datanode-path combinations are stored inside the datanodes.json file.
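A sketch of such a methods-component send is given below. The /process/write path and the name/path/v/ts record keywords follow the publicly documented IoT-Ticket REST API, while the credentials and the devices.json contents are assumptions.

```javascript
// methods component sketch: write a record array to the IoT-Ticket Data
// Server. The REST path and record keywords follow the public IoT-Ticket
// API; the credentials and file contents are assumptions.
var https = require('https');
var fs = require('fs');

var devices = JSON.parse(fs.readFileSync('devices.json', 'utf8'));

function sendToIoTTicket(records, deviceKey, callback) {
  var body = JSON.stringify(records);
  var request = https.request({
    hostname: 'my.iot-ticket.com',
    path: '/api/v1/process/write/' + devices[deviceKey], // stored device ID
    method: 'POST',
    auth: 'username:password',                           // assumed credentials
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  }, function (response) {
    callback(null, response.statusCode);
  });
  request.on('error', callback);
  request.write(body);
  request.end();
}

// Usage, assuming the ProcessData device ID is stored under that key:
// sendToIoTTicket(basicInfo, 'ProcessData', function (err, status) { ... });
```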