
Markus Walden

Service Platform Implementation for Simulation Systems

Constructive Research for Standalone Application to VTT CFB-Pilot Reactor Model

Master's Thesis in Information Technology

December 22, 2016

University of Jyväskylä


Author: Markus Walden

Contact information: markus.walden@gmx.com

Supervisors: Timo Tiihonen and Oleksiy Mazhelis

Title: Service Platform Implementation for Simulation Systems

Työn nimi (Finnish title): Palvelu-arkkitehtuuri simulaatiojärjestelmän hallintaan (Service architecture for managing a simulation system)

Project: Constructive Research for Standalone Application to VTT CFB-Pilot Reactor Model

Study line: Mathematical Information Technology

Page count: 98+0

Abstract:

At present, simulation systems are harnessed for research in machine learning and knowledge discovery in databases (KDD). For this reason, the impact of simulation systems will continue to increase for the foreseeable future. Enterprises like Alphabet (Google), Facebook, and SpaceX are all developing new technologies, from self-driving cars to space exploration; all require simulation and model driven architecture (MDA) based approaches. All of these innovations share the similarity of being designed to assist humans by performing tasks based on system simulations using distinct mathematical models, rules, and platforms.

The goal of this research was to model the process of designing a service implementation for simulations. This meant analyzing, designing, and implementing an inter-platform solution for simulations. The advanced process simulation environment (APROS) and Simulink were used for this endeavor; these two platforms represent the generic (Simulink) and specialized (APROS) platforms for simulations. The new integrated system solution was designed to offer enhanced capabilities to monitor and analyze data flow in the operational environment at large. The circulating fluidized-bed (CFB) model developed by VTT (VTT Technical Research Centre of Finland) was used to facilitate the process and data flow components.

The means of reaching the goal of this project is to combine the functionalities of the two platforms to work as one process by using service design principles. The modelling concepts of this research are derived from the MDA. The additional concepts associated with the services are based on the service oriented architecture (SOA) and the high level architecture (HLA). This architecture forms the theoretical background to the construct presented in this research.

The result of this project is a synchronized, time-driven process that allows the CFB-model to function as an inter-platform system. The results prove that dedicating resources to addressing the unique characteristics of transforming a simulation to work as an independent application is worth pursuing.

Keywords: SOA, Simulation, Model, System, Service

Suomenkielinen tiivistelmä (Finnish abstract, translated): Existing simulation systems are used today in the development of machine learning and knowledge-discovery (KDD) based platforms. For this reason, the significance and impact of simulations will grow considerably in our everyday environment. Companies such as Alphabet (Google), Facebook, and SpaceX represent a shift toward developing MDA-based products through modelling and simulation, from self-driving cars to space exploration. From a human perspective, this development will change our environment in a way where interaction with machines becomes a natural part of everyday life.

The purpose of the thesis was to design and implement a service platform between two simulation systems. The project used the Simulink and APROS systems to realize the required integration. As systems intended for simulation development, Simulink is general-purpose while APROS is designed for a clearly more restricted use. The project enabled more advanced analysis and management of the data flow produced by the simulation. A simulation developed by VTT, modelling the operation of a CFB reactor, was chosen as the demonstration project.

The thesis combined the functionalities of the systems into a larger whole through a shared service platform. Of the architectures suited to modelling and simulation, MDA was chosen; correspondingly, SOA and HLA were selected as the starting points for the service design. Together the architectures formed the theoretical basis for presenting the work as research.

The project produced an implementation synchronized at individual time steps, in which the execution of the CFB-model is split between two platforms. The achieved result advances the combination of individual simulations implemented on different systems by focusing on the details of the integration.

Avainsanat (Finnish keywords): SOA, Simulaatio, Malli, Järjestelmä, Palvelu


Preface

The main enabler of this research has been VTT. It provided an opportunity to study simulations in a cross-domain environment. Because proper simulations are inherently complex to design and develop, doing this kind of research would otherwise have been impossible. Thus, the collaboration has enabled an industry-driven approach to be adopted as the guiding principle for conducting this research.

Now that the project is due to finish with its major goals completed, it is time to thank the people involved: Mr. Timo Leino for supervision, guidance, and collaboration; Prof. Tommi Kärkkäinen for guiding and sponsoring the project; Prof. Timo Tiihonen and Ph.D. Oleksiy Mazhelis for supervising my thesis work; and finally my friends, colleagues, and family for providing support and guidance throughout the project.

Jyväskylä, December 22, 2016

The Author: Markus Walden


Glossary

AI AI (artificial intelligence) is the simulated capability to perform any mental task as well as a human. The key components of current AI research are (McCarthy 2007):

• Deduction, reasoning, problem solving,

• Knowledge representation,

• Planning,

• Learning,

• Natural language processing (communication), and

• Perception.

APROS APROS (advanced process simulation environment) is a platform designed to model dynamic systems for reactors at nuclear power plants and combustion power plants. The platform is built to allow automation and an interactive design space. APROS can simulate fast transients and steady-state systems (VTT and Fortum 2004).

Artifact With Latin origin, artifact stands for an object made with skill (“The American Heritage Science Dictionary” 2016). Based on the MDA specification, examples of artifacts "include model files, source files, scripts, binary executable files, tables in a database system, development deliverable, ...", made for the system development process (Siegel 2010, Page 4).

CFB Developed to meet environmental standards, like the mercury and air toxics standards (MATS), the CFB (circulating fluidized-bed) technology is designed for energy production by burning waste, biomass, and coal (Wikipedia 2014a).

CFB-model A Simulink model of the physical process under CFB (circulating fluidized-bed) conditions. The model design is based on the CFB-pilot reactor, and the model is deterministic (Tourunen 2010).

CFB-pilot reactor The CFB (circulating fluidized-bed) pilot reactor was designed to function as the benchmark for the CFB-model input and the validated output metrics. The reactor output contains error caused by several factors (nonuniform fuel, uneven burn, measurement and sensor error); as such, the system is stochastic (Tourunen 2010).

CIM CIMs (computation independent models) are general domain and business models. The “instances” of these models are “real things”, not representations of those things in an information system (Siegel 2014, Page 8).

Conservation Laws The conservation laws are used to declare the relations between variables in equations.

Deterministic System "A deterministic system is a system in which no randomness is involved in the development of future states of the system. A deterministic model will thus always produce the same output from a given starting condition or initial state" (Meiss 2007).

The antonym for the deterministic system is the stochastic system.
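The distinction can be illustrated with a minimal sketch (Python is used here purely for illustration; the update rule and noise term are invented for the example, not taken from Meiss 2007):

```python
import random

def deterministic_step(x):
    # The next state depends only on the current state.
    return 0.5 * x + 1.0

def stochastic_step(x, rng):
    # An added noise term makes repeated runs differ.
    return 0.5 * x + 1.0 + rng.gauss(0.0, 0.1)

# Two deterministic runs from the same initial state stay identical.
run_a, run_b = [1.0], [1.0]
for _ in range(10):
    run_a.append(deterministic_step(run_a[-1]))
    run_b.append(deterministic_step(run_b[-1]))

assert run_a == run_b  # same starting condition, same output
```

Replacing `deterministic_step` with `stochastic_step` (each run seeded differently) would make the two trajectories diverge, which is exactly the property the definition above excludes.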

Domain With Latin origin, domain stands for "a field of action, thought, influence" (“Dictionary.com Unabridged” 2016). In the context of this research, domain stands for a field of science (IT) or for a common terminology and functionality to solve a problem.

Environment The interaction between a system and the environment or environments in which it is intended to work is an essential relationship in MDA. In this context, the environment encapsulates a system or systems as the higher-level entity of the two (Siegel 2010, Page 3, Figure 1). In the context of this research, the environment translates to the operating system.

Heuristic Heuristics: an analytical, assumption-based approach to solve, identify, or back-track a known problem. Heuristic operations commonly have cost and benefit metrics that are used to evaluate recursive actions (Patel 2007). Path finding, or the VRP (vehicle routing problem), is a commonly used example of a heuristic process (Puranen 2011).

Initialization Bias Specific to computer-based simulations, initialization bias is caused by abnormal initialization of the output state during simulation start-up. This phenomenon is caused by unprovided input-state and parameter values (Robinson 2004, Page 141).
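A common remedy is to discard the warm-up period before analyzing the output. The heuristic below is a rough sketch, not the procedure from Robinson (2004); the tolerance, the steady-state estimate, and the sample series are all invented for the example:

```python
def remove_initial_transient(series, tol=0.05):
    """Drop leading points until the output enters a band around the
    steady-state level, estimated here from the second half of the run."""
    tail = series[len(series) // 2:]
    steady = sum(tail) / len(tail)
    for i, value in enumerate(series):
        if abs(value - steady) <= tol * abs(steady):
            return series[i:]
    return series

# A run whose start-up values are biased low before settling near 2.0.
biased = [0.2, 0.9, 1.4, 1.8, 1.95, 2.0, 2.01, 1.99, 2.0]
cleaned = remove_initial_transient(biased)
```

Note that this treats the bias as a pattern over the whole series rather than as a property of any single state value, matching the definition above.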

Interface Inherent to the object-oriented paradigm, an interface defines the boundary between two or more class entities.

MDA MDA (model driven architecture) is an environment composed of three layers. The layers contain abstract modelling, infrastructure, and high-level constraints for model-based development (Siegel 2014). The developers of MDA used the following definition: "MDA is an approach to system development and interoperability that uses models to express and direct the course of understanding, requirements elicitation, design, construction, deployment, operation, maintenance, and modification." (Siegel 2010, Page 1).

MDA Transformation The essential identifiable benefit at the core of the MDA is integration. Defined as MDA transformation, the process identifies three commonly used transformations (Siegel 2010, Page 4):

• Model-to-model,

• Model-to-artifact (code, text, or visual document), and

• Artifact-to-model.

Metamodel The metamodel is domain independent and primarily used to describe the elements for models to use (Siegel 2010, Page 4). According to the visualization used to define the relations between the entities of models, metamodels, and systems, the model is defined by a one-to-infinite relation with the metamodel (Siegel 2010, Page 3, Figure 1).


Model The model is defined in correlation to the entities defining the system, metamodel, and view (Siegel 2010, Page 3, Figure 1). According to the MDA specification, "A model is a selective representation of some system whose form and content are chosen based on a specific set of concerns" and "The model is related to the system by an explicit or implicit mapping." (Siegel 2010, Page 2).

MOF According to the MDA standard, the MOF (meta object facility) is defined to allow the transformation process of converting a specific model to another format with more specific details (Siegel 2014, Page 14). Unique to the MDA is the transformation between the PIM and PSM. The conversion process is executed by converting the existing model to a machine-readable XML format and converting the model using the model's correlation to its meta-level entities (Siegel 2010, Page 3, Figure 1).

Ontology The ontology acts as a root concept for lower-level concepts like class, OWL, and RDF. The definition of the ontology is based on the technical specification (Motik, Patel-Schneider, and Parsia 2012). The ontology is foremost defined in correlation to other significant entities such as the IRI (ontology ID), annotation (meta-level details), and axiom (relations to other instances of the ontology) (Motik, Patel-Schneider, and Parsia 2012, Page 8, Figure 1).

PIM The PIM (platform independent model) is designed without considerations for the platform, focused on describing the key functionality of the system(s). According to the MDA architecture, the PIM is for "modelling the way the components of a system interact with each other, with people and with organizations to assist an organization or community in achieving its goals." (Siegel 2010, Page 8).

Platform The platform is a root-level concept of MDA, connected with the PIM and PSM. Characterized by specific technologies such as J2EE, .NET, and CORBA, the platform adds further constraints to the PIM (Siegel 2010, Page 9). In the context of this research, the platform translates to a specialized development environment, such as APROS or Simulink. The platform is used to create the systems.

PSM The PSM (platform specific model) shares a correlation to the PIM, with added constraints attached depending on the specific platform used. The transformation process from PIM to PSM and back is generally based on the "XML Schema Transformation", utilizing identifiable patterns in the original source to replicate the functionality (Siegel 2010, Page 11).
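The PIM-to-PSM idea can be sketched as a toy model-to-model transformation. All names, fields, and platform bindings below are invented for illustration and do not follow the MOF mapping rules:

```python
# Hypothetical PIM: a platform-independent description of one operation.
pim = {"operation": "readMeasurement", "input": "sensorId", "output": "value"}

# Invented platform bindings standing in for real platform constraints.
BINDINGS = {
    "j2ee": {"transport": "RMI", "naming": "camelCase"},
    "corba": {"transport": "IIOP", "naming": "IDL"},
}

def to_psm(pim_model, platform):
    """Annotate a PIM with platform-specific details, yielding a PSM."""
    psm = dict(pim_model)          # the PIM content is preserved...
    psm["platform"] = platform     # ...and platform constraints are added
    psm.update(BINDINGS[platform])
    return psm

psm = to_psm(pim, "j2ee")
```

The point of the sketch is the direction of the mapping: the PIM survives unchanged inside every PSM, which is what makes the reverse (PSM-to-PIM) transformation feasible.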

QoS The QoS (quality of service) in IT is focused on aspects like security, latency, error rate, user experience, reusability, and redundancy of the system. Specific to an implementation, setting priorities for choosing between these qualities is essential, for it is not possible to accommodate them all with equal emphasis. Thus, appending additional constraints to the source requires the capability to attain them from the start (Dobson 2004).

Semantic A high-level construct to define the terminology and the conservation laws within a specific domain. Facilitated by an ontology (language), it commonly uses a dialect called OWL (Web Ontology Language) (Patel-Schneider and Motik 2012, Page 1).

Service Based on the SOA specification, "services are the mechanism by which needs and capabilities are brought together." (OMG 2012, Page 9) Enforced through transactions, the service is defined within constraints by the SLA (service level agreement). As a formal document, the SLA is used as the basis for defining the QoS for the transaction(s) between the client and the provider (Liegener 2012).


Simulation The archetype simulation is the definition of a continuous system with equations measuring the change of variables in correlation to time. Derived from the archetype are the definitions for the lower- and higher-level simulations. The key simulation types are (Balci, Arthur, and Ormsby 2011, Page 158):

• Monte Carlo,

• System dynamics,

• Agent-based, and

• Artificial Intelligence.

Simulations that model phenomena are commonly referred to as system simulations, to account for the domain being incorporated as part of the design.

Solver The solver is an autonomous fragment of software dedicated to running a simulation forward in time. It does this by applying specific algorithmic reasoning to the selected system of equation(s) used to characterize a known phenomenon. The basic solver qualities are Stiff, NDF, and MOD. Distinguished between fixed- and variable-step solvers, the common settings include min/max step size, initial step size, zero-crossing detection, and reset method (Mathworks 2016a).
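A fixed-step solver can be reduced to a few lines. The sketch below is an explicit Euler loop in Python, far simpler than the Simulink solvers described above, but it shows the role of the step size:

```python
import math

def euler_solve(f, x0, t0, t_end, step):
    """Minimal fixed-step explicit Euler solver: advance the state by
    step * f(t, x) for a fixed number of steps."""
    n = round((t_end - t0) / step)
    t, x = t0, x0
    history = [(t, x)]
    for i in range(n):
        x = x + step * f(t, x)
        t = t0 + (i + 1) * step
        history.append((t, x))
    return history

# dx/dt = -x with x(0) = 1; the exact solution is exp(-t).
trace = euler_solve(lambda t, x: -x, 1.0, 0.0, 1.0, 0.001)
```

A variable-step solver would instead adapt `step` from a local error estimate; the fixed-step form trades accuracy control for predictable timing, which matters when two platforms must stay synchronized on shared time steps.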

System In MDA, the system is a mid-level concept between the environment and the model. The system is defined by its internal process and the external connection(s) forming the interface to the environment (Siegel 2010, Page 3, Figure 1). In the context of this research, the system translates to the model instance, such as the CFB-model.


List of Figures

Figure 1. Assignment goal
Figure 2. Dynamic system
Figure 3. Simulation at the technical domain
Figure 4. State space
Figure 5. The three-tier architecture for models (MDA)
Figure 6. Service and simulation at the technical domain
Figure 7. Interfaced service
Figure 8. Collective summary of relevant literature and terminology
Figure 9. UML sequence diagram for running simulations encapsulated inside a service
Figure 10. View describing the service as an ontology graph
Figure 11. View describing the simulation as an ontology graph
Figure 12. Collective view describing the relations between the service and simulation ontology combined in a single graph
Figure 13. Collective view describing instances for the request-response chain as a graph
Figure 14. CFB-model at the abstract level
Figure 15. MDA implementation in Simulink
Figure 16. General workflow for Simulink models
Figure 17. "Schematic picture of the one-dimensional dynamic model"
Figure 18. Detailed workflow for Simulink models - process view
Figure 19. APROS-Simulink workflow running
Figure 20. APROS-Simulink interface code during execution
Figure 21. Automation
Figure 22. Validation of Simulink-APROS output
Figure 23. Simulation performance difference - breakdown to components
Figure 24. Simulation performance difference - breakdown to components
Figure 25. Validation of Simulink-APROS output
Figure 26. Output visualizations for the CFB-model in APROS

List of Tables

Table 1. Automated output evaluation based on single and multiple runs
Table 2. Stateless service architecture at a state-driven frame
Table 3. Technical standard for classifying state-driven-service architecture fitness
Table 4. Transition from SOSA to SOA
Table 5. Class entities for the service ontology
Table 6. Object entities for the service ontology
Table 7. Class entities for the simulation ontology
Table 8. Object entities for the simulation ontology
Table 9. Shared object entities for the interface ontology
Table 10. IT-inspired build process for an artifact
Table 11. Strategies to manage the client-server architecture for simulations during implementation
Table 12. Simulink model differences
Table 13. Simulations as tools of development


Contents

1 INTRODUCTION
  1.1 The Artifact in Design Science Discipline
  1.2 Modeling
  1.3 Structure of the Work
2 SIMULATION
  2.1 Simulation System
  2.2 The Numerical Simulation of Continuous System
  2.3 Simulation Workspace
    2.3.1 Model in Design Space
    2.3.2 The Simulation Output
  2.4 The Model Driven Architecture and Simulation Platform
    2.4.1 The Abstract Layer of MDA
  2.5 Chapter Summary
3 SERVICE
  3.1 Service - SOA Oriented Approach
  3.2 Service, Simulation, and Modelling at a Shared Domain
  3.3 The Server-Client Architecture with the Information Model
  3.4 Chapter Summary
4 IMPLEMENTATION
  4.1 CFB-Model
  4.2 The Matlab/Simulink Platform
    4.2.1 The Model Workflow
    4.2.2 The Simulink/APROS Global Configuration Interface
  4.3 The APROS Platform
  4.4 The Transformation Process to Integrate between Simulink and APROS
    4.4.1 The Strategies Available to Integrate between Simulink and APROS
    4.4.2 Path to Automation
  4.5 Chapter Summary
5 EVALUATION
  5.1 The Quantifiable Performance Difference of the Implementation
  5.2 The Quality of the Implementation and the Path to Automation
6 DISCUSSION
  6.1 Further Work
  6.2 Conclusions
BIBLIOGRAPHY


1 Introduction

The simulation and modelling paradigm is commonly used to illustrate real phenomena using replicated data and specific modelling practices. Hence, modelling and simulation are often used as synonyms within the mathematical domain to describe known phenomena. As an abstraction of reality, the simulation is only an approximate prediction of what might happen; therefore, real systems (see system) are always needed to verify the findings.

Simulations are often implemented using a computing environment (see environment).

A practical approach to simulations (see simulation) is to view them as models (see model) of dynamic processes in real life. Global climate change is one of the most popular use-cases to simulate. Another useful scenario for simulation would be to model a plane during flight.

Essentially, simulations are always based on the concept of deriving results from a model described in the form of mathematical formulas (differential or algebraic equations). During simulation, the model formulas are typically calculated to create time-series constituting the system state space (Wang and Marek-Sadowska 2015, Page 3, Figure 5). As a result of this process, simulations can, in general, be divided into diagnostic and projection-based categories of software. Depending on the selected category, these approaches allow the formation of a process leading to a known ending (a plane crash) or to a projection of a future event (climate change). Because their behavior is defined by mathematical models, simulations are inherently complex to design and develop.

The primary motivation for conducting this research is to study the capability to integrate a simulation (system) between two platforms (see platform). Due to the complexity of modelling and simulation as a discipline, the common challenge for existing simulations is the reliance on specialized development platforms, like Simulink, APROS, and Weka. These platforms are commonly incompatible, and thus having to choose between different (simulation) platforms is a complication. This is the problem regarding integration: the simulations developed for one platform cannot be used on another (Balci, Arthur, and Ormsby 2011, Page 160). Thus, identifying the essential characteristics of simulations to be combined under a common architecture and platform is important.


The effort to improve usability starts from distributing the logic needed to run the specific simulation. While increasing the level of difficulty to master, the approach enables smaller fragments of code to be designed for general use. One of the existing methods to accomplish the distribution of the code-base is to apply a service (see service) architecture to define the criteria for distribution (Balci, Arthur, and Ormsby 2011, Page 160). In this approach, the code-base is divided into three fragments between the roles known as the client, the service (provider), and the interface (Figure 7). The approach has two benefits: it solves the implementation process for stand-alone applications, and it forces an integration component (interface) to combine the stand-alone process (simulation) with another external process1. The outcome of this process can be evaluated by studying the impacts caused by the transformation. The key metrics to evaluate for the two versions of the same simulation are: the comparison of run-time performance, the capability to manage error situations, and the changes in the behavior of the simulation. The conceptual method described in this research can be divided into four parts, representing separate aspects of the build process (denoted by 1 - 4 in Figure 1):

1. The problem -> existing simulations at VTT have been implemented on multiple simulation platforms; simulations developed on one platform cannot be used on another platform;

2. Theoretical approach -> designing a service architecture to allow an individual simulation to be built into an application frame, and thus to be accessed by another simulation platform;

3. Design and development -> implementing the process for a selected simulation; using the results to divide the model under the common interface accessing the two platforms; and

4. Evaluation -> comparing the run-time performance, error, and behavior of simulations made to build and execute inside a tailored development environment.
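The three roles named earlier (client, service provider, and interface) can be sketched as follows. The classes and the one-line dynamics are invented placeholders, not the actual CFB implementation:

```python
class SimulationService:
    """Provider role: owns the state and exposes a narrow step API."""
    def __init__(self, state):
        self.state = state

    def step(self, dt):
        # Placeholder first-order dynamics standing in for the real model.
        self.state += dt * (2.0 - self.state)
        return self.state

class Interface:
    """Integration role: translates client messages into service calls."""
    def __init__(self, service):
        self.service = service

    def request(self, message):
        return {"reply": self.service.step(message["dt"])}

# Client role: drives the simulation only through the interface.
iface = Interface(SimulationService(state=0.0))
reply = iface.request({"dt": 0.1})
```

Because the client never touches `SimulationService` directly, the provider could be swapped for a simulation running on another platform without changing the client code, which is the benefit the service architecture is meant to deliver.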

Hence, the research question pursued in this research is two-fold: can the build be done according to the plan, and what are the functional and metric-based benefits of designing a simulation to operate based on a generic service model?

1. Stand-alone processes (simulations) are developed as platform independent; the platforms commonly hide some of the run-time issues.

Figure 1. Assignment goal used to describe the foundation of this research (Peffers et al. 2007, Page 54, Figure 1)

This research was set to evaluate the integration of the VTT (VTT Technical Research Centre of Finland) circulating fluidized-bed reactor (see CFB-pilot reactor) model built in Simulink with another simulation platform called the advanced process simulation environment (see APROS) (Figure 1). As the commission was based on an existing simulation designed by VTT, the work focused on the integration of the two simulation platforms. The main challenges were associated with redesigning the simulation to operate as a standalone application and defining the interface between the two platforms. The results of this research were a case-specific workflow to illustrate the interface needed by the two simulations and a generic model to define a Service Oriented Architecture (SOA) based approach to using simulations.

The CFB technology (see CFB) was originally designed for power plants to burn coal. The VTT-developed CFB-reactor and simulation system were implemented to expand the types of fuel used by similar reactors. This was done by allowing advanced controls for the oxygen-air mixture within the reactor core to be regulated in real time. Before this integration project started, the simulation used to model the reactor conditions was contained within a simulation platform called Simulink. The goal of this project was to extend the simulation to function in collaboration with another simulation platform called APROS. Overall, there was a reasonable expectation for the integration to be plausible using shared C-language based libraries as the language of the interface. The process of integrating the two platforms was distributed and transaction based, which makes it derivative of the service-driven approach.

The purpose of this research is to study the integration process and evaluate the functional and metric-based benefits of operating simulations via a generic service model. Design science is used as the main method of research to allow both quantitative and qualitative analysis (Figure 1). Central to the method is the artifact (see artifact), which in this case is the updated CFB-model (see CFB-model). The design science approach (Section 1.1) is later applied in the segment on implementation (Chapter 4) to structure the development process. It is also used during the evaluation (Chapter 5) to justify the use of qualitative analysis as part of the process. The modelling aspect (Section 1.2) is defined in the mathematical and information technology (IT) domains. It is further expanded in the segment on MDA (Section 2.4) and further implemented in the segment describing the transformation process for the CFB-model integration with APROS (Section 4.4).

1.1 The Artifact in Design Science Discipline

The word origin of artifact is the Latin phrase arte factum (something made with skill). Whether it is the development of new medicine, the generation of a new artificial substance, or the building process for particle destruction or software, artifact-based development is embedded in design science (Aken 2004, Page 224). As an applied orientation to the basic fields of science like mathematics, physics, chemistry, biology, and other liberal arts, engineering is focused on evaluating the existing theories by using experimentation and evaluation (Hevner et al. 2004, Page 75).

Acting on opposite sides, the combined role of basic research and design science is to expand the pool of validated information as an entity (Hevner et al. 2004, Page 76). Whereas basic research is focused on finding new phenomena and building new theories, the role of design science is to use and explore the conclusions of this research by means of demonstrating, building, and evaluating data generated as the result of the experimentation (Peffers et al. 2007, Page 46). For this discipline the key question is: how to use available knowledge and theories to form something new and unique (Hevner et al. 2004, Page 77)? By understanding this thought pattern, these two statements share causality with cause and effect blocks (Aken 2004, Pages 224-226).

Cause: As human society is in a constant state of change, the pressure to change the systems used by society is ever increasing. As a result, the systems used by it have to evolve as well; if the systems are not developed, they become redundant in an environment that is changing around them.

Effect: The evolution caused by changing conditions requires methods to adapt to those conditions. Thus, there is a need to anticipate the appropriate actions to take for the systems to keep evolving; the purpose of design science is to function as the environment and facilitate the transition needed for the systems to keep evolving.

The high value of cross-validation for artifact-based research is shown by the miscalculations of the Reinhart and Rogoff study on the sustainability of the debt-to-GDP (gross domestic product) ratio (Reinhart and Rogoff 2010). The follow-up research demonstrates how something as simple as a logical programming flaw can drastically alter the grounds for the conclusions drawn in the first article (Reinhart and Rogoff 2010) (Herndon, Ash, and Pollin 2013). The example illustrates the importance of both disciplines in the pursuit of verifiable findings.

1.2 Modeling

Generic modelling translates to defining entities by their characteristics and relations to each other. Whether the technique to set up the model is based on visual or mathematical symbols is in itself irrelevant, for the symbols are only a language of notation. Still, there exist a few notations that are commonly referred to as "standards" for defining models; these include mathematics as a domain, the unified modelling language (UML), and the derivatives of predictive analytics (Siegel 2013, Page 26) and (Balci, Arthur, and Ormsby 2011, Page 160). The root concept used to encapsulate the notation language, domain, and purpose is model driven architecture (see MDA), developed by the Object Management Group (OMG). It defines the common architecture for modelling simulations, software, and business processes.

In this research, the principles of MDA are used to define two models; the first one is based on the theoretical continuous system (Section 2.2) and the second one is based on defining a service discovery model (Section 3.3) using predictive logic (Malham 2016, Pages 39-42). As a result, the observer position changes within the course of this research. Modelling is not studied as subject to a particular view; it is defined as a stand-alone process before being subject to specific domain (see domain) (mathematics and IT) based constraint(s).

1.3 Structure of the Work

The next two chapters are focused on constructing the theoretical approach to bridging the gap between the service and the simulation. In Chapter 2, the focus is set on generic mathematical modelling and the associated MDA. Chapter 3, in turn, is set to view the IT-oriented aspects of MDA and the key considerations for operating simulations through a service interface.

Combined, the two areas are enough to address the CFB-model integration (Chapter 4) and the associated evaluation (Chapter 5). The implementation (Chapter 4) addresses the CFB-model legacy and the Simulink and APROS integration. The evaluation (Chapter 5) is divided into two individual segments, addressing both the quantitative performance and the qualitative (nominal) factors regarding the current state of the CFB-model.


2 Simulation

Machine learning and simulation are both based on the general input-system-output abstraction (Wang and Marek-Sadowska 2015, Page 2, Figure 4). To accomplish this form of interaction, the two entities (machine learning and simulation) share the same basic features, including the concepts of state and time. The essential difference between the two entities, however, is that in simulation modelling the system is constructed explicitly from lower level components and their interactions, whereas in machine learning the system behavior is learned from input-output pairs that are mapped to a given meta-level structure (László et al. 2000, Page 4, Figure 4). This means that whatever input can be processed through simulation can also be processed by a machine learning algorithm.
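The distinction can be made concrete with a minimal sketch (an invented toy relation, not from the thesis): the same input-output mapping is first produced by an explicit rule, as in simulation, and then recovered from observed input-output pairs, as in machine learning.

```python
# Toy illustration (invented example): one relation, obtained two ways.

def simulate(x):
    """Simulation view: the evolution rule y = 2x is known and coded explicitly."""
    return 2.0 * x

# Machine learning view: only observed (input, output) pairs are available.
pairs = [(float(x), simulate(x)) for x in range(1, 10)]

# Least-squares fit of y = a*x through the origin "learns" the rule back.
a = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

print(a)  # -> 2.0: the learned coefficient matches the simulated rule
```

The point is not the trivial fit but the direction of inference: simulation goes from rule to data, machine learning from data to rule.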

In this research, machine learning is used as a mediator technology needed to bridge between the simulation and the service. It is needed to solve problems caused by the state space, such as the initialization bias (see initialization bias). As a pattern, the initialization bias is unlikely to cause the simulation to fail execution; rather, it distorts the simulation output. Therefore, it has to be detected as a pattern within the state space (not as a value of any given state). This is an important distinction, because it rules out some of the programming paradigms as impractical.

Used to solve similar problems regarding the simulation state space, machine learning techniques have been combined with system simulations to address the following problems (László et al. 2000, Page 2):

• "Modelling, simulation and optimisation of production processes and process chains";

• "Design, control and reconfiguration of flexible manufacturing systems (FMSs)"; and

• "Design and control of holonic manufacturing systems (HMSs)".

The concept of these proceedings is to facilitate the reasoning needed to keep the simulations responsive to changing conditions. The same approach that is needed to encapsulate a specific simulation inside a service as a workspace is further expanded in Section 2.3.


2.1 Simulation System

Modeling of dynamic systems is based on three concepts: the state (characteristics of the system subcomponents), evolution rules (dynamics of interaction between the system components), and time (Figure 2). The treatment of time divides the models into two categories (denoting the state by x, the parameter by r, time by t, and the system function by F):

• Discrete - the system utilizes algebraic equations to formulate output (x_{t+1} = r x_t (1 − x_t)) as a map (Meiss 2007).

• Continuous - the system utilizes differential equations and time as the continuous variable (ẋ(t) = F(x(t), t), where x(t_0) = x_0) as a flow (Meiss 2007).
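The two categories can be sketched in code (a hedged toy example; the parameter values and step counts are illustrative, not from the thesis): the logistic map iterated as a discrete system, and a flow approximated by fixed-step numerical integration.

```python
# Sketch of the two model categories (illustrative parameter values).

def logistic_map(x, r=2.5):
    """Discrete system: algebraic map x_{t+1} = r * x_t * (1 - x_t)."""
    return r * x * (1.0 - x)

def euler_flow(F, x0, t0, t1, steps=1000):
    """Continuous system x'(t) = F(x(t), t), x(t0) = x0, approximated
    by fixed-step (explicit Euler) numerical integration."""
    x, dt = x0, (t1 - t0) / steps
    for k in range(steps):
        x += dt * F(x, t0 + k * dt)
    return x

# Discrete: iterate the map; for r = 2.5 it converges to the fixed point
# 1 - 1/r = 0.6.
x = 0.2
for _ in range(100):
    x = logistic_map(x)
print(round(x, 4))  # -> 0.6

# Continuous: exponential decay x' = -x with x(0) = 1, so x(1) ~ exp(-1).
print(round(euler_flow(lambda x, t: -x, 1.0, 0.0, 1.0), 3))  # -> 0.368
```

In the platforms discussed later, the fixed-step loop above corresponds to the role of the solver, which is configured rather than hand-written.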

The state, evolution rule, and time control the internal structure of the system (Figure 2).

Together the internal definitions form the foundation for the state space, equilibrium, and degrees of freedom as the system output (Figure 2). A dynamic system is commonly classified as being deterministic (see deterministic) or stochastic (Figure 2). The classification is derived from the internal evolution rule; for a stochastic system there must be more than one possible consequent for a given state, and for a deterministic system there must be only one consequent (Meiss 2007).

This research is focused on the continuous system - on using differentiability (dx/dt = f(x)) to solve the specific state space (Meiss 2007). The state space is identified as "the set of all possible states of a dynamic system" (Terman 2007)². The scope of the state space is always limited by the degrees of freedom available, where ẋ = dx/dt is the derivative of the state variable (Terman 2007).

The degrees of freedom are the unknown independent variables used in the equations describing the system, and the equilibrium represents a constant solution of the evolution rule F(x) = 0, where ẋ = F(x) (Izhikevich 2007). Besides the mathematical theory and domain knowledge that go into building these simulations, the main thing to emphasize is that the content used as input and output is data distinguished by time-steps. Overall, this translates to a need to use time series as the foundation for building knowledge about the output of the simulation.

2. The state space is like a superposition for the entire system.

Figure 2. Dynamic System (Meiss 2007).

Together these entities form the basis for a vast variety of simulations. The simulations are divided into three basic types (Balci, Arthur, and Ormsby 2011, Page 158):

• Monte Carlo - based on random statistical sampling (stochastic system), with time as a discrete variable.

• System dynamics - based on the principle that any effect (output) of the system is causally connected to past event(s).

• Agent-based - assuming that an agent is intelligent, autonomous, and self-guided. An agent-based simulation is a goal- and motivation-driven system following causality.

The derived simulation types are artificial intelligence and virtual reality. The basis for these two types lies in deriving constraints from earlier experiences in correlation to time and in having the capability to form rules as new programming (Balci, Arthur, and Ormsby 2011, Page 159). Together these traits allow higher cognitive functions like reasoning, learning, and perception to evolve.


2.2 The Numerical Simulation of Continuous System

The available simulation platforms (Simulink, APROS, Weka, ...) utilize the concept of the dynamic system (Figure 3) for modelling. Integrated at the core of these platforms is the solver (see solver). Mathematicians have developed a wide variety of numerical integration techniques for solving the ordinary differential equations (ODEs) that represent the continuous states of dynamic systems (Mathworks 2016d). The solver governs the execution of the time either in fixed-step (map) or variable-step (flow) mode³ (Mathworks 2016c).

Figure 3. Simulation at technical domain.

Specific to linking the dynamic system with the solver is the capability to estimate (sample) the system output at any given time. For instance, the Jacobian method is suitable for this purpose because it transforms the equations into matrix format (series) for the simulation platforms to interpret. The method is iterative: a path to approximating the solution for the equilibrium state. The output produced by the Jacobian method requires the system equations to be defined in implicit form (Equation 2.1) (Terman 2007). The transformation is made to allow the relations between the derivatives and the variables to reflect the physical coupling (Meiss 2007).

3. Continuous solvers use numerical integration to compute a model's continuous states at the current time step based on the states at previous time-steps and the state derivatives. Continuous solvers rely on the individual blocks to compute the values of the model's discrete states at each time step (Mathworks 2016d).

The Jacobian method is used to approximate the system of equation(s) to find a solution: forming the next state from the current state, [Δf/Δx] x_{n+1} = (x_n), where f and x are treated as vectors and Δf/Δx is treated as a matrix. The system in Figure 3 can be transformed into the Jacobian matrix (∂(f_1, ..., f_n)/∂(x_1, ..., x_m)). The Jacobian method presumes that the system of equation(s) has a unique solution and that the coefficient matrix produced by [Δf/Δx] has non-zero entries on its main diagonal.

Based on the concept of equilibrium:

The stability of typical equilibria of smooth ODEs is determined by the sign of real part of eigenvalues of the Jacobian matrix. These eigenvalues are often referred to as the ’eigenvalues of the equilibrium’. The Jacobian matrix of a system of smooth ODEs is the matrix of the partial derivatives of the right-hand side with respect to state variables. (Izhikevich 2007)

The equilibrium is defined as "stable" if all the eigenvalues have negative real parts and "unstable" if at least one eigenvalue has a positive real part (Izhikevich 2007). This means that the Jacobian method can also establish the equilibrium point for the dynamic system visualized in Figure 3. To do so requires defining the time derivative of the continuous state (Equation 2.1) (Mathworks 2016a). The pattern to produce the equilibrium for dynamic systems is iterative, which makes the Jacobian method viable for solvers. The relative impact of the Jacobian transformation on the simulation is visualized in Figure 24.

ẋ = f(x, u, t)    (2.1)
y = g(x, u, t)

J = [ ∂f/∂x   ∂f/∂u ]
    [ ∂g/∂x   ∂g/∂u ]

where

ẋ: the time derivative of the continuous state that forms the state differential.
x: the continuous state - composed by the system of equation(s).
u: the system input - denoted as a and b in Figure 3.
t: the system time - denoted as the second index (t) for the vectors (a, b, c) in Figure 3.
y: the system output - denoted as c in Figure 3.
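The eigenvalue-based stability test described above can be sketched numerically (a hedged illustration using an invented linear system, not the CFB model): the Jacobian of f at the equilibrium is approximated by finite differences, and the equilibrium is classified by the real parts of its eigenvalues.

```python
# Illustrative sketch (invented 2-D linear system): equilibrium stability
# read off from the eigenvalues of the Jacobian, per Izhikevich (2007).

def f(x):
    """A linear 2-D system x' = A x with its equilibrium at the origin."""
    return [-1.0 * x[0] + 0.0 * x[1],
             0.0 * x[0] - 2.0 * x[1]]

def jacobian(f, x, h=1e-6):
    """Finite-difference approximation of the matrix of partial derivatives."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    fx = f(x)
    for j in range(n):
        xp = list(x)
        xp[j] += h
        fp = f(xp)
        for i in range(n):
            J[i][j] = (fp[i] - fx[i]) / h
    return J

def eig2_real_parts(J):
    """Real parts of the eigenvalues of a 2x2 matrix via trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:                       # real eigenvalue pair
        s = disc ** 0.5
        return [(tr + s) / 2.0, (tr - s) / 2.0]
    return [tr / 2.0, tr / 2.0]           # complex pair: real part is tr/2

J = jacobian(f, [0.0, 0.0])
parts = eig2_real_parts(J)
print(all(p < 0.0 for p in parts))  # -> True: the equilibrium is stable
```

For the system above the eigenvalues are approximately −1 and −2, so all real parts are negative and the equilibrium is stable in the sense quoted from Izhikevich.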

The workflow for the system to execute the time-steps produces the layered state space (Wang and Marek-Sadowska 2015, Page 4, Figure 7). The way a dynamic system interfaces with a specific solver configuration has to be considered case by case (Mathworks 2016a). Because solvers are optimized for different problems, reviewing the process for solver selection is relevant. The solver links the mathematical model with the discrete computer architecture. The choice among the available solvers is, in this case, driven by two questions (Mathworks 2016a):

• Does the model have continuous states? and

• Does the system contain physical modelling components?

Since computers process state as binary, the Jacobian based solver mechanism can be used to bridge the gap from discrete to continuous system(s) within the architecture. In fact, the evolution in programming and the development of simulation platforms are transforming the computer middleware from binary to more analog systems. Ideas like sampling instead of measuring, predicting instead of reporting, and deriving instead of calculating are all techniques currently being developed to make the current generation of computers better. Commonly called analytics, the process existing between an autonomous system and the user is addressed next.
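The fixed-step versus variable-step distinction behind the solver selection questions can be sketched in plain Python (an illustrative toy; Simulink and APROS make this choice in their solver configuration, not in user code):

```python
# Toy variable-step solver: explicit Euler with step-doubling error control.
# The step size shrinks where the state changes quickly and grows where the
# state is smooth, which is the essence of variable-step (flow) mode.

def variable_step_euler(F, x0, t0, t1, tol=1e-4):
    x, t, dt = x0, t0, (t1 - t0) / 10.0
    while t < t1:
        dt = min(dt, t1 - t)
        full = x + dt * F(x, t)                        # one full step
        half = x + (dt / 2) * F(x, t)                  # two half steps
        half = half + (dt / 2) * F(half, t + dt / 2)
        err = abs(full - half)                         # local error estimate
        if err > tol:
            dt /= 2.0                                  # reject: refine the step
            continue
        x, t = half, t + dt                            # accept the better value
        if err < tol / 4:
            dt *= 2.0                                  # smooth region: coarsen
    return x

# x' = -x, x(0) = 1 over [0, 1]; the exact answer is exp(-1) ~ 0.3679.
result = variable_step_euler(lambda x, t: -x, 1.0, 0.0, 1.0)
print(abs(result - 0.3679) < 0.01)  # -> True
```

A fixed-step solver would instead use one constant dt for the whole run; the trade-off between the two is exactly what the two selection questions above probe.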

2.3 Simulation Workspace

The simulation is the combination of external and internal components as visualized in Figure 3. The internal components govern the execution of the solver and the system. The external components are the input, time, and output that are required for the simulation to execute.

In typical simulation cases the interaction between input, simulator, and output is managed by a human (expert), who manages the simulator data streams and handles possible exceptions. To implement an autonomous service platform capable of running simulations requires replacing this control with automated logic (analytics). This task, and defining the workspace (feasible inputs and outputs and their processing rules), requires the identification of higher level patterns. The relationship between these three entities (simulation, service, and analytics) is visualized in Figure 8.

During the implementation (Chapter 4), the role of the autonomous analytic procedure is handled by a person as controller. To implement the (autonomous) service platform, the analytics is required to mimic the controller position. In this section, heuristics is used as a low-level analytic process to solve the vehicle routing problem (VRP) and to illustrate the controller role. The VRP is essentially a challenge to automate a delivery system for a regional area with the goal of optimizing the cost of delivery.

Published in 2011 and entitled "Metaheuristics Meet Metamodels", the research utilized models to solve the VRP using applied heuristics (see heuristic) (Puranen 2011). The challenge was characterized as an optimization problem: to find the most effective manner of distributing packages under random circumstances and with fixed parameters (Puranen 2011, Page 52). The criteria for a successful run were defined such that all packages had to be delivered, with the available resources, time, and locations given as parameters. The goal for the used algorithm was to optimize the path between deliveries, but it could also be discovered that no solution would meet the requirement to deliver all packages. It is important to point out that the failure to meet the assignment goals was among the plausible outcomes of the simulation and that, prior to the simulation being completed, this information is not available.

The generic unpredictability of the state space regarding simulation input is visualized in Figure 4. Regarding the routing problem the solution seems clear: either to reduce the number of packages to be delivered or to increase the available resources or time. Thus, the problem to solve is a balancing act between all of the options available. The key is to expect an undetermined number of simulations to fail and to apply the heuristic principle of making changes to the input until an outcome within the boundaries of the state space is found (Puranen 2011, Page 39-41). But given that applying heuristics to more complex systems is likely to start resembling the search for a needle in a haystack, the approach of using heuristics as a general solution has its limits.

Figure 4. State space.

The opposite path to using heuristics is relying on an expertise-driven approach. The information to validate the proper input and state exists in the mathematical model used to describe the simulation.

This approach is also promoted by the research evaluating the capabilities to outsource simulations to be managed by an external application (Kathryn Hoad, Stewart Robinson 2011, Page 9).

2.3.1 Model in Design Space

The IT solutions to assimilate the reasoning necessary to automate the simulation system are data-mining, knowledge discovery in databases (KDD), and machine learning. Combining system simulation with these techniques allows an interactive design process in engineering projects (Burrows et al. 2011, Page 163). This vision is based on the capability to integrate the IT techniques with mathematical models. However, due to its cross-domain aspect, the interdisciplinary angle of combining simulations of continuous systems with the interactive design space is a relatively complex topic.

The purpose of modelling recognizable patterns as knowledge items is based on the similarity measurement, which is produced by comparing the outcomes of previously simulated models (Burrows et al. 2011, Page 164). In the case of designing bridges, the identification of similarity is likely to be based on the design approach (dimensions of the bridge), the used materials (mass density), and the expected stress and strain caused by traffic and the conditions expected during its lifespan (Burrows et al. 2011, Page 166). The natural problem to solve regarding the similarity measurement metric is how to balance the training for the algorithm to select the models with good enough relevance. The capability to derive a similarity measure is here evaluated in six steps (Burrows et al. 2011, Page 169):

1. Design space,

• Sampling - selecting model candidates, providing preliminary results.

2. Models,

• Simulation - calculating output for the implementation of the selected model.

3. Raw simulation space,

• Aggregation - sums and products of simulation output; 12.064 measurements (3*3*5) in the dimension space.

4. Post processed simulation space,

• Cluster analysis - maintaining run-time classification for the executed models.

5. Sampling, and

• Training set - selecting a strategy to collect non-biased records to be further analysed for classification.

6. Machine learning.

• Similarity measure - deriving new knowledge for the next iteration of simulation.
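The aggregation and similarity steps can be sketched as follows (a hedged illustration: the feature choices and archive below are invented; Burrows et al. derive theirs from bridge simulations):

```python
# Sketch of steps 3 and 6: aggregate raw output to features, then compare a
# new run against previously simulated models by distance.

def aggregate(output):
    """Step 3 (aggregation): collapse a raw output signal to summary features."""
    n = len(output)
    mean = sum(output) / n
    var = sum((v - mean) ** 2 for v in output) / n
    return (mean, var, max(output) - min(output))

def similarity(a, b):
    """Step 6 (similarity measure): inverse Euclidean distance between
    feature vectors - 1.0 means identical aggregates."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + d)

# A tiny "raw simulation space" of previously executed runs (invented data).
archive = {
    "model_a": aggregate([1.0, 1.1, 0.9, 1.0]),
    "model_b": aggregate([5.0, 9.0, 1.0, 7.0]),
}

new_run = aggregate([1.0, 1.0, 1.1, 0.9])
best = max(archive, key=lambda k: similarity(archive[k], new_run))
print(best)  # -> model_a: the most similar previously simulated model
```

Steps 4 and 5 (cluster analysis and training-set sampling) would operate on exactly these feature vectors, so the sketch shows the data shape the later steps consume.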

Based on the findings of a civil engineering project involving bridges, the focus is to map the trail for applying simulation and data-mining techniques to assist the formation of an interactive design space (Burrows et al. 2011, Page 165 Figure 2). The generic process of applying data-mining with simulations has two significant factors. First, to apply external processes that utilize simulation data for further analysis, a conceptual understanding of the design space must exist. If the acquired knowledge is unreliable, the process of collecting further knowledge by means of data-mining is useless (Burrows et al. 2011, Page 163). Second, human comprehension has limits, and thus additional design aspects affecting the applied model can be discovered by means of data-mining (Burrows et al. 2011, Page 169). The second aspect increases the interactivity of the design space, making work more meaningful. It is also an aspect to consider while designing the service for the simulations. In this case, the ownership of and knowledge about the specific simulation can be presumed to be included within the service design.

By applying the capability to derive a similarity measure, the concept is to have the ability to evaluate the findings of a simulation run in the design space (Burrows et al. 2011, Page 163). The key points for conceptual modelling are (Burrows et al. 2011, Page 163):

• Develop an understanding of the problem situation;

• Determine the modelling objectives;

• Design the conceptual model: inputs, outputs, and model content; and

• Collect and analyze the data required to develop the model.

For this research the findings made by Burrows are a double-edged sword. The positive side is that the research supports isolating the executing process from the internal logic of producing the state space. The negative side is that the proposed method is based on applying relatively expensive calculations to real-time processing needs. The process of combining the proposed KDD process with the simulation of a continuous system is, therefore, complex. The preprocessing of simulation data alone can be divided into four phases (Robinson 2004, Page 95, chapter 7):

1. Validating the records, looking for missing values, establishing a concept of false values and isolating anomalies (outliers) from the data;

• This process is usually ambiguous, and therefore, must be done by hand by people who understand the domain.

2. Building a process to execute basic calculations (aggregates) with the data, such as mean, mode, median, standard deviation and covariance;


• State and distance.

3. Dividing the available data into learning- and testing-sets; and

• Learning-set is used to create rules that apply information.

• Testing-set is used to evaluate how well the rules apply.

4. Building and improving the information model to be used to improve the simulation output and the design space.
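The first three phases can be sketched in code (a hedged illustration: the outlier rule, the 2-sigma threshold, and the split ratio are invented choices, not from Robinson):

```python
# Sketch of preprocessing phases 1-3 with invented data and thresholds.
import statistics

raw = [1.2, 1.1, None, 1.3, 9.9, 1.2, 1.0, 1.1, 1.2, 1.3]

# Phase 1: validation - drop missing values and isolate outliers (here more
# than 2 sigma from the median; in practice this step needs domain expertise).
values = [v for v in raw if v is not None]
med = statistics.median(values)
sd = statistics.stdev(values)
clean = [v for v in values if abs(v - med) <= 2 * sd]
outliers = [v for v in values if abs(v - med) > 2 * sd]

# Phase 2: basic aggregates over the validated data.
aggregates = {
    "mean": statistics.mean(clean),
    "median": statistics.median(clean),
    "stdev": statistics.stdev(clean),
}

# Phase 3: divide the data into learning and testing sets (here 70/30).
cut = int(len(clean) * 0.7)
learning, testing = clean[:cut], clean[cut:]

print(len(outliers), len(learning), len(testing))  # -> 1 5 3
```

Phase 4, building and improving the information model, would then consume the learning set and be scored against the testing set.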

2.3.2 The Simulation Output

The dynamic systems (Figure 3) produce a complex state space as output. The common challenge for these systems is not only how to interpret the results, but whether the results are valid. The initialization bias is a common pattern that plagues the development of simulations (Robinson 2004, Page 137, chapter 9). It also constitutes an example of the control logic (analytics) that is required to substitute the human in the loop for automated simulations.

The strategies to test simulations are based on either single or multiple run strategies (Table 1). The first step in choosing between the two scenarios is determined by whether the simulation is terminating (plane crash) or non-terminating (climate change) (Table 1). The second step is to determine the model output: whether it is transient (cyclic) or converging towards a value (Table 1). The third step is to determine the possibilities to manage the simulation model initialization bias and whether it is possible to set the simulation to a pre-determined state (Table 1). The fourth step is to determine the volume of output data required to reach a certain accuracy of measurement from the output signals (Table 1).

Table 1. Automated output evaluation based on single and multiple runs (Robinson 2004, Page 137, chapter 9).

Output evaluation: Single run
• Init: warm-up period, initial conditions
• Simulation type: non-terminating
• Output type: steady state

Output evaluation: Multiple runs
• Init: mixed warm-up and initial conditions
• Simulation type: terminating
• Output type: transient

The problem is that solving the initialization bias and calculating the quality of the system output both require reasoning, either algorithmic (static) or learning based (dynamic). The similarity measurement (Section 2.3.1) is a fair example of static reasoning put in place to evaluate whether the simulation instance would be stable. However, using this method requires a previously executed run with close to the same input. This in turn requires a data-warehouse based system to store the simulation output, input, and run-time statistics.
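An example of such static reasoning for the initialization bias is automated warm-up truncation. MSER is one published rule of this kind (the AutoSimOA work by Hoad and Robinson evaluates heuristics of this family); the sketch below is a simplified, hedged variant on invented data:

```python
# Hedged sketch of automated warm-up (initialization bias) truncation: pick
# the cut-off d that minimizes the marginal standard error of the remaining
# data, i.e. the point after which the output looks stationary.

def mser_truncation(series):
    best_d, best_score = 0, float("inf")
    for d in range(len(series) // 2):          # never discard more than half
        tail = series[d:]
        n = len(tail)
        mean = sum(tail) / n
        score = sum((v - mean) ** 2 for v in tail) / (n * n)
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic output (invented): a decaying transient (the bias) on top of an
# oscillating steady state.
output = [10.0 * (0.5 ** t) + 2.0 + (0.5 if t % 2 == 0 else -0.5)
          for t in range(40)]

d = mser_truncation(output)
print(d)  # truncation point falls just past the transient region
```

Everything before index d would be discarded before computing output statistics, which is exactly the "warm-up period" decision listed for the single-run strategy in Table 1.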

It stands to reason that building more dynamic approaches to evaluating the simulation output requires more advanced algorithms than the similarity measurement. Additional factors have to be considered:

• Deterministic or stochastic system,

• Interconnections inside the model (complexity),

• Feedback loops - output from n-1 clock-cycle turned into input at n clock-cycle, and

• Error handling and error types.

The nature of simulation is to define a natural phenomenon as an abstract system. The simplest design for modelling any phenomenon is a system with a single dimension and deterministic output. The problem is that such systems are also furthest from reality. Increasing the theoretical quality of the system thus requires increasing either the dimensionality or the stochasticity of the system. Therefore, the cost of making the system more realistic is an increase in relative complexity.

The process of extending the quality of the system requires experimentation and the competence to analyse the findings. This means having the capability to compare the quality with alternative systems and to experiment with the possible changes to make the system better. It stands to reason that the more the simulation is studied, the more accurate it can be (when compared to the natural phenomenon). However, the potential does not equal actual improvement without the capability to filter the meaningless information out (Robinson 2004, Page 169):

• Specifying construct at domain ->

• Debugging run-time performance (state and error(s)) ->

• Outlier detection ->

• Assess reliability, build validation [technical] ->


• Assess construct, Analyse meaningfulness [functional, domain specific] ->

• Make conclusions and derive new information -> goal.

The dynamic methods to remove initialization bias and to validate the output for an accurate estimate of (system) performance are long-term research goals (Kathryn Hoad, Stewart Robinson 2011, Page 9). Even with a three-year research program, the dynamic reasoning for output evaluation could not be automated; expertise to set up and interpret the findings was still needed:

Deciding what to do. On the basis of a knowledge of the model, the user must determine whether a warm-up analysis is needed and whether multiple replications or a long run is required. The user must also decide the length of run for the multiple replications case. These decisions depend on the nature of the model and the output, for instance, whether the model is terminating or non-terminating, or whether the output data are transient or steady-state. There may be ways of automating these decisions by inspecting the details of the model, but this would require further research into the characteristics of models. (Kathryn Hoad, Stewart Robinson 2011, Page 23)

The view has a causal connection to this research. It makes it justifiable to presume that a service layer (Chapter 3) designed to run simulations does need to rely on the user to make similar setup and interpretation based decisions as for the AutoSimOA (Kathryn Hoad, Stewart Robinson 2011). The burden for the service to provide the client with additional information is part of the information model (Section 3.3). The information model is designed to reflect the functionality that would otherwise need to be added as capabilities for the software users to utilize.

2.4 The Model Driven Architecture and Simulation Platform

In this research, the MDA is used because it is extensive enough to capture the requirements for mathematical modelling, simulation, and service. The areas (boxed in blue) in Figure 5 that are not covered by the MDA are:


• The dynamic system - mathematical modelling (Section 2.2),

• The SOA -> the behavioral and information model (Section 3.1),

• The (case specific) Request - Response (Section 3.3),

• Simulink and APROS as platforms (Sections 4.2 and 4.3), and

• The (case specific) Simulink model transformation to PIM (Section 4.4).

The MDA is primarily designed for model based and modular design practices. This approach to software based development requires both mediator technologies and independent development platforms to be incorporated as part of the architecture design (Figure 5).

Figure 5. The three tier architecture for models (MDA - http://www.omg.org/mda/).

The MDA is commonly divided into three layers. The first layer (high level of abstraction) of the MDA contains the meta object facility (see MOF), the common warehouse metamodel (CWM), and the unified modelling language (UML) (Figure 5). The second layer contains the platforms capable of forming the environment to address specified challenges, including Java, (dot)NET, the common object request broker architecture (CORBA), web services, and the extensible markup language (XML)/XML metadata interchange (XMI). The third layer contains the entities for the (pervasive) service-level, like the transaction, event, security, and directory (Figure 5).


The philosophy of MDA is centered around the system described by one or more models (Siegel 2010, Page 3, Figure 1). At the core of the MDA are the meta-level components (UML, MOF, and CWM). The derivative of these components is the capability to transform the system to a platform independent model (see PIM) or a platform specific model (see PSM) (Siegel 2010, Page 8). The complement to this capability is the model transformation from PIM to PSM or PSM to PIM (see MDA transformation). The process to convert between the model types is therefore a functional property (PSM = PIM + external components for OS integration). Thus, the build process for the system in the MDA is grounded in the meta-level components and defined in model(s).

2.4.1 The Abstract Layer of MDA

Among the constructs in the first layer of MDA is the MOF, which is different from the UML and the CWM in the sense that it is not built to define solutions to known problems.

Instead, MOF is defined to design methods for how to identify solutions for known problems.

The MOF comprises four layers that define it as a development platform for modelling languages. Starting from the highest level of abstraction, the first level of MOF is the meta-metamodel. It is used for describing a model for constructing the meta-level modelling language. Thus, the MOF can also be used to describe itself (Siegel 2014, Page 14).

The second level defines the metamodel (see metamodel), the language needed for constructing the entities which form the backbone of the model. For instance, in the context of a UML class diagram the meta-level definitions could include constructs for classes, interfaces, inheritance and polymorphism. In this case, a class in the diagram could comprise variables and methods. The variables could be defined by a type corresponding to an existing class.

The name of the variable should also be unique within the domain of the class in which it is defined. A method could be assigned an output constituting the type of an existing class and one or more input parameters constituting members of existing classes.

The third level of MOF is the first modelling level for user-defined models. The only limitation in place for the models is that they have a valid meta-level description based on rules and entities. This means that nothing can ever be derived outside the definitions made at the meta-level. In practice, this means that if there is no definition for an abstract class at the meta-level, there cannot be an instance of it made at the model level of the MOF. Thus, the MDA is fit to manage the dynamic systems based on the mathematical theory (Section 2.2).

The fourth level of MOF contains the system implementation layer. The layer is defined by the transformation of the model to an executable process. Here the model, also known as the computation independent model (see CIM), is converted to a PIM or PSM. If all of the entities defined in the system are modelled correctly, the transformation between CIM, PIM, and PSM is automatic.
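The layering described above can be sketched minimally (a hedged illustration: the construct and model names are invented, and MOF itself is a far richer specification): a model is valid only if every construct it uses is defined at the meta-level.

```python
# Sketch of the MOF constraint: nothing can be derived outside the
# definitions made at the meta-level.

# Level 2 (metamodel): the constructs a model is allowed to use.
METAMODEL = {"Class", "Interface", "Inheritance"}

# Level 3 (user model): elements that each claim a meta-level construct
# (names invented for illustration).
user_model = [
    ("Reactor", "Class"),
    ("Sensor", "Class"),
    ("Measurable", "Interface"),
    ("Reactor->Measurable", "Inheritance"),
]

def validate(model, metamodel):
    """Reject any element whose construct is missing from the metamodel."""
    return [name for name, construct in model if construct not in metamodel]

print(validate(user_model, METAMODEL))                # -> [] (model is valid)
print(validate([("X", "AbstractClass")], METAMODEL))  # -> ['X'] (rejected)
```

Only when this validation passes for every element can the level-four transformation between CIM, PIM, and PSM proceed automatically.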

2.5 Chapter Summary

Simulation is an expertise-driven method. It works as a semi-automated process governed by the solver and a specific system characterized by mathematical formulas. It is possible to combine system simulation with machine learning to allow an interactive design process. Doing so requires the knowledge for solving the state driven problem to be defined as patterns.

The initialization bias is such an example of a pattern that can be constructed and identified using machine learning.

Provided that the capability to define an existing phenomenon as a model is available, the natural conclusion to derive from modelling is that it is in essence a linear process. However, this is inherently a faulty conclusion. Just like any natural process, modelling is both an iterative and a repetitive process. Reducing the dimensionality of a model is, therefore, also a non-linear process. The non-linearity characteristic also means that simulations are inherently connected with cognitive reasoning (human in the loop). When Artificial Intelligence (see AI) is developed with cognitive reasoning, it can replace the requirement for the human in the loop to provide optimal results to problems like the bridge engineering challenge. Ultimately this will change the ownership of the problem; humans will no longer have the ability to contest the findings of the simulation. Without the capability to derive the logic used to gain the output, people can only choose whether or not to implement the solution based on the output.


3 Service

Combining simulation and service bridges two scientific domains, mathematics and IT, around the task of isolating the logic of the system dynamics from the environment and the data required to execute the simulation. From a complementing viewpoint, the simulation (system) is required to link with the service (platform) in a bottom-up form. How these two constructs could work together is outlined in Figure 6.

As distributed application logic (service) and a system expressed as the product of mathematical equations (simulation), the process is based on the service encapsulating the simulation. To have this capability, the service must function as a sort of life-support system for the simulation. The key capabilities for meeting this requirement are listed at the service layer (Figure 6)⁴.

Part of the required capabilities is covered in Chapter 2, including the management of the state space and the system input and output. The rest, including model configuration, error handling, and the generic quality metrics to evaluate the service, remain to be studied. These remaining capabilities are addressed in relation to the service and simulation in a constructed model defined using RDF/OWL (see ontology) notation (Section 3.3). Utilizing a language that has a predefined set of entities constituting a metamodel (MOF, Section 2.4) makes the notation compatible with MDA.

At a generic level, the purpose of the service is to enable high-level integration based on the defined system as a super-type for the continuous system to be embedded (Figure 6). To relay actions for the service to take, a request-response protocol is often used. The request is the trigger that forces an action, and the pending response is the service-provider acknowledgment that closes the triggered transaction. These request-response transactions are commonly executed on a server-client architecture, which means that the internal logic and the (process) controlling metrics are isolated.

4. The traits (Figure 6) to evaluate the state and output are founded on the specification for AutoSimOA (Kathryn Hoad, Stewart Robinson 2011); the remaining attributes had to be solved for the implementation (Chapter 4).

Figure 6. Service and simulation at the technical domain.

The specific service capability to process a continuous system therefore inherits the super-type to process the state-space driven input and output, linking the internal logic of the simulation with the client. This leads to a paradigm common to machine learning, where the output is deduced based on a pattern produced by state and time (László et al. 2000, Page 4, Figure 4).
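The request-response transaction around a stateful simulation can be sketched as follows. The class, the message format, and the toy dynamics are assumptions made for illustration, not the thesis implementation.

```python
# Sketch: a request-response loop around a stateful simulation, where the
# server keeps the state space and the client only sees input and output.
# All names and the toy dynamics are illustrative assumptions.

class SimulationService:
    def __init__(self, initial_state):
        self._state = initial_state          # state space stays server-side

    def handle(self, request):
        """Process one request and return the acknowledging response."""
        if request["action"] == "step":
            u = request["input"]
            # toy first-order dynamics: x' = 0.9 x + u, output y = x
            self._state = 0.9 * self._state + u
            return {"status": "ok", "output": self._state}
        if request["action"] == "status":
            return {"status": "ok", "state": self._state}
        return {"status": "error", "cause": "unknown action"}

service = SimulationService(initial_state=0.0)
resp = service.handle({"action": "step", "input": 1.0})
print(resp["output"])   # 0.9 * 0.0 + 1.0 = 1.0
```

Because the state lives only inside the service, the client interaction remains a plain message exchange, which is the isolation of internal logic the text describes.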

The properties associated with the IT-based service paradigm are simple: the client provides the input and validates the output, and the server executes the commands (Figure 9). On top of this, the key issues to resolve for the generic service are:

• Security and ownership - protocol to authenticate and identify users to allow the transactions through the service interface;

• Error management - overloading the default process of halting code execution on error, providing parametrisation for the severity of an error, and defining user-friendly causes for known errors;

• User interface (UI) - console, html-site, or application;

• Data source access - the simulation specific location of the data produced as output or statistics;

• Configuration - the universal settings like the solver, time, and time-step, and the local settings like the input, initial state, and output;

• Status - run-time setup to initialize, run and end the specific simulation.
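The configuration and status concerns above could be separated in code roughly as follows. All field names, defaults, and the status values are assumptions for illustration; they do not reproduce the thesis implementation.

```python
# Sketch of the configuration split described above: universal settings
# (solver, time, time-step) versus local settings (input, initial state,
# output), plus a run-time status field. All names are assumptions.

from dataclasses import dataclass, field

@dataclass
class UniversalConfig:
    solver: str = "euler"
    end_time: float = 10.0
    time_step: float = 0.1

@dataclass
class LocalConfig:
    inputs: dict = field(default_factory=dict)
    initial_state: dict = field(default_factory=dict)
    outputs: list = field(default_factory=list)

@dataclass
class SimulationRun:
    universal: UniversalConfig
    local: LocalConfig
    status: str = "initialized"     # initialized -> running -> ended

run = SimulationRun(UniversalConfig(), LocalConfig(initial_state={"x": 0.0}))
print(run.status)                   # initialized
```

Keeping the universal and local settings in separate structures lets the same solver setup be reused across simulations while each run carries its own input, state, and status.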

Interfacing (Figure 7) with a system that is modeled in terms of state, time, and causality requires a process capable of handling the interchange between the service provider (server) and the client. For stateless systems the de facto architecture is SOA. The architecture defines the service as "a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description." (MacKenzie et al. 2006, Page 12). However, the architecture does not consider the (global) state in its design. Since SOA does not match the requirements of a continuous system, the goal of this chapter is to map and isolate the changes needed to convert SOA into a state driven solution.

Figure 7. Interfaced Service (Partridge and Bailey 2010, Page 4, Figure 1).
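One way the conversion from a stateless SOA interface to a state driven one could look is to add an explicit session handle: the server keeps the (global) state per session, while the prescribed interface itself remains message-based. This sketch is an illustration under invented names, not the design of Chapter 4.

```python
# Sketch: turning a stateless SOA-style interface into a state-driven one
# by introducing a session handle. The server owns the state; the client
# only holds the handle. All names and dynamics are assumptions.

import uuid

class StatefulFacade:
    def __init__(self):
        self._sessions = {}                  # session id -> simulation state

    def open_session(self, initial_state):
        sid = str(uuid.uuid4())
        self._sessions[sid] = initial_state
        return sid                           # opaque handle for the client

    def invoke(self, sid, u):
        """Stateless-looking call; the state lives behind the handle."""
        x = self._sessions[sid]
        x = 0.9 * x + u                      # toy continuous-system step
        self._sessions[sid] = x
        return x

facade = StatefulFacade()
sid = facade.open_session(0.0)
print(facade.invoke(sid, 1.0))               # 1.0
```

The prescribed interface stays consistent with the SOA definition quoted above; only the session handle makes the global state reachable across requests.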

3.1 Service - SOA oriented approach

In this section, SOA is compared to the High Level Architecture (HLA) and the Service-Oriented Simulation Architecture (SOSA) (SISO 2010) and (Gu, Lo, and Yang 2007). As an abstract service architecture, SOA is not bound to any specific platform like J2EE, .NET, or C (MacKenzie et al. 2006, Page 4). SOA is commonly defined as a loosely coupled reference architecture, which makes it more difficult to implement than platform specific or tightly coupled architectures (MacKenzie et al. 2006, Page 5). An SOA based implementation is always dependent on external sources for requirements, platform, and design (MacKenzie et al. 2006, Page 5, Figure 1).

The basic unit of SOA is a machine-to-human or machine-to-machine interaction. To be meaningful, the communication requires a shared language combining rules and relations. SOA refers to shared syntax and semantics (see semantic) facilitated by information and behavioral models (MacKenzie et al. 2006, Page 8). In this research, the information model used to lay out the chain of communication between the provider and the client is based on the RDF/OWL notation (Section 3.3). The model is designed to bind the system and service together using the interface oriented approach constructed in Figure 9. As in the object oriented paradigm, the interface facilitates encapsulating the system instances as stand-alone entities within the specific service request, which also enables the descriptions for service and client to be based on the same root (MacKenzie et al. 2006, Page 10).
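A minimal sketch, in RDF/OWL (Turtle) notation, shows how such an information model could bind service and simulation. The namespace, class names, and property are invented for illustration and do not reproduce the model of Section 3.3.

```turtle
# Illustrative fragment only; names are assumptions, not the Section 3.3 model.
@prefix sim:  <http://example.org/sim#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

sim:Service          a owl:Class .
sim:Simulation       a owl:Class .
sim:ContinuousSystem a owl:Class ;
    rdfs:subClassOf sim:Simulation .      # the super-type relation of Figure 6

sim:encapsulates a owl:ObjectProperty ;   # service encapsulating the simulation
    rdfs:domain sim:Service ;
    rdfs:range  sim:Simulation .
```

Because both the service and the simulation classes hang from the same vocabulary, the descriptions for provider and client can indeed share the same root, as the reference model requires.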

The key components of SOA for human-to-machine or machine-to-machine communication are visibility, interaction, and effect (MacKenzie et al. 2006, Page 13). From an IT perspective, visibility translates to a unique and usable identification (ID), interaction to the request and response process, and effect to self-contained executable code (MacKenzie et al. 2006, Page 14):

• The visibility is equal to the unique and identifiable ID;

• The interaction is described by sequencing the entire function of the service into its individual components, constituting the information model; and

• The effect is described by the behavioral model.
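The three components above can be mapped onto code in one small sketch: visibility as a unique ID, interaction as the recorded request sequence of the information model, and effect as the state change of the behavioral model. The class and names are assumptions for illustration, not part of the reference model.

```python
# Sketch mapping the SOA components to code: visibility = unique ID,
# interaction = recorded request sequence, effect = state change.
# All names are illustrative assumptions.

import itertools

_ids = itertools.count(1)

class Service:
    def __init__(self):
        self.service_id = f"svc-{next(_ids)}"  # visibility: unique, usable ID
        self.state = 0
        self.log = []

    def request(self, command):
        # interaction: every step of the sequence is recorded, which is
        # what the information model describes
        self.log.append(command)
        if command == "increment":
            self.state += 1                     # effect: the behavioral model
        return {"id": self.service_id, "state": self.state}

svc = Service()
svc.request("increment")
print(svc.request("increment")["state"])        # 2
```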

In the context of this research, the behavioral model constitutes the implementation of the dynamic system (Figure 2) and the information model constitutes the service layer design (Section 3.3). The combined role of the behavioral and information models forms the basic stand-alone application (Figure 6). To accommodate both human-to-machine and machine-to-machine interaction, the notation used for the information model is RDF/OWL. Overall, the following guidelines are defined for a usable SOA implementation (MacKenzie et al. 2006, Page 26):

• "Have entities that can be identified as services as defined by this Reference Model;"

• "Be able to identify how visibility is established between service providers and con-
