

5.1.3 Design Space Exploration

One of the benefits of extending the causal graph obtained by DACM is that it enables efficient exploration of the design space. Design space exploration (DSE) is the process of exploring the space of design variables to discover sets of suitable combinations of design alternatives (Sharpe, Morris, Goldsberry, Seepersad, & Haberman, 2017); in other words, it is the process of discovering and evaluating valid design alternatives before implementation (Kang, Jackson, & Schulte, 2011).

Simulation models are often complicated and computationally expensive. Therefore, surrogate modelling is used to map the complicated model onto a model which is simpler but still sufficiently accurate. Several methods have been developed to perform this mapping, including set-based methods (Jawad Qureshi, Dantan, Bruyère, & Bigot, 2014), interval-based methods (Panchal, Fernández, Paredis, Allen, & Mistree, 2007), graph- and grid-based methods (Schulz et al., 2017), and space mapping methods (J. W. Bandler, Cheng, Hailu, & Nikolova, 2004).

The method introduced and implemented in this study can be classified as a space mapping method. For example, Bandler et al. (2013; 2004) developed the Aggressive Space Mapping (ASM) method, in which accurate and complicated models are mapped onto a set of coarse models that are accurate enough but fast to evaluate. The process of ASM can be summarized as follows (J. Bandler, 2013):

1- Preliminaries: Create a library of fast, parameterized coarse models and develop inexpensive means to evaluate them.

2 & 3- In a specific problem: choose a suitable coarse model for the system and extract its parameters. The coarse model is expected to be capable of meeting the system specifications, both in inputs and outputs. The relationship between the two models can be represented as:

x_c = P(x_f)    (84)

in which x_c and x_f are the vectors representing the coarse model and the fine model, respectively. The function P is expected to be a linear mapping between the two models if they are a good match. The coarse model is then optimized with a conventional method, which yields the solution x_c. Finally, the parameters of the fine model can be calculated using the inverse:

x_f = P^(-1)(x_c)    (85)

The optimized parameters are then assigned to the fine model, and it is run. If the specifications are met, the coarse model is sufficiently accurate.

4- Further iterations: use real data from the situation, or data generated from the fine model, to update the coarse model with a mapping. This step is called parameter extraction. The steps in steps 2 & 3 can then be repeated until the optimization specifications are met or a fixed number of iterations is reached. A minimal sketch of this loop is given after this list.
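As a concrete illustration, the following Python sketch runs the ASM loop on a pair of hypothetical one-dimensional models. The model functions, the parameter-extraction scheme (matching responses at a few nearby points), and all numerical values are assumptions made for illustration and are not part of the original case study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 1-D models, invented for illustration only: the "fine"
# model is assumed expensive, the coarse surrogate cheap but misaligned.
def fine_model(x):
    return (x - 2.1) ** 2 + 0.5          # imagine each call takes minutes

def coarse_model(x):
    return (x - 2.0) ** 2 + 0.5          # fast surrogate

# Steps 1-3: optimize the coarse model with a conventional method.
x_c_star = minimize_scalar(coarse_model).x

def parameter_extraction(x_f):
    """Estimate x_c = P(x_f) by aligning the coarse response to the
    fine response at a few points around x_f."""
    pts = x_f + np.array([-0.2, 0.0, 0.2])
    target = fine_model(pts)
    mismatch = lambda d: np.sum((coarse_model(pts + d) - target) ** 2)
    d = minimize_scalar(mismatch, bounds=(-1.0, 1.0), method="bounded").x
    return x_f + d

# Step 4: aggressive space mapping iterations (1-D Broyden update).
x_f, B = x_c_star, 1.0                   # start at the coarse optimum
r = parameter_extraction(x_f) - x_c_star
for _ in range(10):
    if abs(r) < 1e-4:
        break
    h = -r / B                           # quasi-Newton step on P(x_f) = x_c*
    x_f += h
    r_new = parameter_extraction(x_f) - x_c_star
    B += (r_new - r - B * h) / h         # Broyden update of the mapping slope
    r = r_new

print(f"estimated fine-model optimum: x_f = {x_f:.4f}")   # ~2.1 here
```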

Similar methods have been developed using Bayesian networks to empower designers.

Shahan and Seepersad (2009) developed a method using Bayesian networks for collaborative design problems in distributed design projects. In their method, each designer develops a small Bayesian network that represents the regions of interest in their design space. These Bayesian networks are then combined into a global network which shows the interests of all designers. Sharpe et al. (2017) developed kernel-based Bayesian network classifiers, using a genetic algorithm to learn a Bayesian network structure and parameters from a small set of data and then using the BN to explore the design space. Conti and Kaijima (2017) developed a Bayesian network meta-model to enable bidirectional inference in a design analysis system. They used machine learning to learn the network structure and parameters from a set of simulated data and then used the model either to simulate the effect of choosing specific design parameters on the outputs or to find the most probable parameters for a specific output value. Later, they developed a method for building meta-models which is not limited to Bayesian networks and implemented it on a case study using Bayesian networks (Conti & Kaijima, 2018). Another example of Bayesian network structure learning algorithms for early-stage design support can be found in Matthews' (2007) work.

In this study, the mapping is from a space of interactions between continuous variables, described by accurate mathematical equations, into a space of probabilistic interactions between discretized values with a limited range. The benefit of this mapping is that the mapped model is not only easy to evaluate but also enriched with experts' knowledge.
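A minimal sketch of such a mapping is shown below: a hypothetical continuous relation is sampled, both variables are discretized into intervals, and the samples are converted into a conditional probability table. The variable names, ranges, and the equation itself are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous relation, invented for illustration: defect size
# grows with cooling rate, with some process noise.
cooling_rate = rng.uniform(1.0, 10.0, size=5000)
defect = 0.05 * cooling_rate + rng.normal(0.0, 0.05, size=5000)

# Discretize both variables into a small number of intervals.
x_edges = np.linspace(1.0, 10.0, 6)      # 5 intervals for cooling rate
y_edges = np.linspace(0.0, 0.6, 7)       # 6 intervals for defect size
counts, _, _ = np.histogram2d(cooling_rate, defect, bins=[x_edges, y_edges])

# Row-normalize to get the conditional probability table P(defect | rate),
# i.e. the probabilistic counterpart of the original continuous equation.
cpt = counts / counts.sum(axis=1, keepdims=True)
print(np.round(cpt, 3))
```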

As an example, the process can be framed as defining a target tolerance for the defect and then trying to find the combination of the other variables that gives minimal material loss. Assuming that a defect of less than 0.2 mm is acceptable, we can set hard evidence on the first two intervals of the "Curling defect" node in its monitor. The posterior distributions for the other variables are then calculated by the software, as shown in Figure 42.

Figure 42. Posterior distributions after setting a target value of less than 0.2 mm of defect
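The following sketch shows the same kind of update on a toy two-node model: evidence that the defect falls in the first two intervals is applied, and the posterior over the parent variable is computed with Bayes' rule. All probabilities and interval definitions are hypothetical; the actual model in Figure 42 is larger and is evaluated by the BN software.

```python
import numpy as np

# Toy two-node model with hypothetical numbers: a prior over 3
# cooling-rate intervals and P(defect | rate) over 4 defect intervals,
# e.g. [0, 0.1), [0.1, 0.2), [0.2, 0.3), [0.3, inf).
p_rate = np.array([0.3, 0.4, 0.3])
p_defect_given_rate = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.30, 0.35, 0.20, 0.15],
    [0.05, 0.20, 0.35, 0.40],
])

# Evidence: the defect falls in the first two intervals (< 0.2 mm).
likelihood = p_defect_given_rate[:, :2].sum(axis=1)   # P(defect < 0.2 | rate)

# Bayes' rule: posterior over the cooling-rate intervals.
posterior = p_rate * likelihood
posterior /= posterior.sum()
print(np.round(posterior, 3))    # analogous to the monitors in Figure 42
```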

The designer can then start exploring this design space to reach efficient values for all the variables in a systematic DSE process. DOE methods such as the Taguchi method (Mistree, Lautenschlager, Erikstad, & Allen, 1993) or Bayesian methods (Nabifar, 2012) can be used to explore the design space of this model; a minimal enumeration sketch is shown below. Further discussion of these methods is beyond the scope of this study.
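As a minimal illustration of systematic exploration, the sketch below enumerates a full-factorial grid over a few hypothetical discretized design variables; in practice each combination would be scored by querying the BN. The variable names and levels are assumptions, not taken from the case study.

```python
import itertools

# Hypothetical discretized design variables (names and levels assumed).
levels = {
    "cooling_rate": ["low", "mid", "high"],
    "mold_temp":    ["low", "high"],
    "fill_time":    ["short", "long"],
}

# Full-factorial enumeration of the discretized design space; in practice
# each combination would be scored by a BN query such as P(defect < 0.2 mm).
for combo in itertools.product(*levels.values()):
    design = dict(zip(levels.keys(), combo))
    print(design)    # placeholder for the BN query / simulation call
```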

5.2 Reliability engineering case study

A model describing the interrelation between consecutive TTF values and CF values was presented in section 4.2.3. The initial hypothesis of this study was that, although near-perfect maintenance takes place after each failure in the system, consecutive failure times are not independent of each other. This indicates that the maintenance procedures are not perfect, or that the conditions of the working environment and the usage pattern affect the failure times. Consecutive failure times and censored failure times were separated into a set of variables, and the dependencies between them were investigated using a structural learning algorithm. The resulting model of dependencies is shown in Figure 43. A minimal sketch of such a structure-learning step is given below.
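The sketch uses the pgmpy library with hill climbing and a BIC score; the input file name and column layout are assumptions, and the study itself uses BayesiaLab rather than this library.

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical input: one row per pump, with the TTF/CF values already
# discretized into interval labels (file name and columns are assumed).
data = pd.read_csv("pump_failures_discretized.csv")   # TTF1..TTF6, CF2..CF6

# Score-based structure learning: hill climbing over DAGs with a BIC score.
search = HillClimbSearch(data)
dag = search.estimate(scoring_method=BicScore(data))
print(sorted(dag.edges()))    # dependencies analogous to Figure 43
```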

Figure 43. The BN showing the dependencies between TTF and CF variables

Table 20 shows some association and independence metrics of the nodes in the model.

The mutual information (MI) is a symmetric measure that shows how much knowing the state of one node helps in finding out the state of the other node. The minimum value of MI is zero, and its maximum is bounded by the smaller of the two nodes' entropies. The Pearson correlation measures the strength of any linear relationship between two nodes and is likewise symmetric. It ranges between -1 and 1: positive values represent a direct linear relation, and negative values an inverse linear relation. A description of these measures and the formulas for calculating them is presented in section 3.1.2.
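The sketch below computes both metrics for a pair of synthetic discretized variables: MI from the empirical joint distribution and the Pearson correlation with scipy. The data-generating process is invented for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Synthetic paired samples of two discretized variables (invented data).
x = rng.integers(0, 4, size=1000)                # parent states
y = (x + rng.integers(0, 2, size=1000)) % 4      # child depends on parent

# Mutual information from the empirical joint distribution:
# I(X;Y) = sum_xy p(x,y) * log2( p(x,y) / (p(x) p(y)) )
joint = np.histogram2d(x, y, bins=[4, 4])[0] / len(x)
px = joint.sum(axis=1, keepdims=True)
py = joint.sum(axis=0, keepdims=True)
nz = joint > 0                                   # avoid log(0) terms
mi = np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

r, _ = pearsonr(x, y)
print(f"MI = {mi:.4f} bits, Pearson r = {r:.4f}")
```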

Table 20. Association metrics between nodes in the model (Relationship Analysis)

Parent   Child   Mutual Information   Pearson's Correlation
TTF2     CF3     0.7918               -0.7356
TTF1     TTF2    0.7344               -0.0727
TTF2     TTF3    0.5962               -0.5855
TTF3     CF4     0.5387               -0.1319
TTF1     CF2     0.4820               -0.4263
TTF3     TTF4    0.2404               -0.0989
TTF2     TTF5    0.1524                0.2435
TTF1     CF6     0.1286                0.5135
TTF4     CF5     0.0788               -0.2364
TTF5     TTF6    0.0258                0.2269

As shown in Table 20, the relations TTF1−>TTF2 and TTF2−>CF3 are the strongest, with MI values of 0.7344 and 0.7918, respectively. The Pearson correlation for TTF2−>CF3 is high as well, but for TTF1−>TTF2 the value is close to zero, possibly because the relationship between these two variables is highly non-linear. The next strongest relations are TTF2−>TTF3, TTF3−>CF4, TTF1−>CF2 and TTF3−>TTF4. Since data points for the variables TTF5, TTF6, CF5, and CF6 are very scarce, the dependencies found by the algorithm between them and the other parameters are quite weak, as reflected in the MI values between these variables and the other nodes in Table 20.

The relations between the TTFs, and between the TTFs and CFs, do not follow a trend or a general rule that could be captured in an equation. Nevertheless, the model can help estimate the most probable next time to failure, and a distribution for the non-event period, based on the failure history and working hours of a piece of equipment.

For example, imagine that a pump has experienced two failures so far. The first failure (TTF1) occurred between 89 and 326 hours of operation of the pump, and the second failure (TTF2) occurred between 407 and 509 hours after the first failure.

Figure 44. Monitor windows after setting evidence for the failures

As shown in Figure 44, after setting the evidence for the variables TTF1 and TTF2, the posterior distributions for the variables TTF3 and CF3 are calculated by the BayesiaLab software. The monitors show that, given this history of the pump and based on the model, the most probable interval for the next failure is between 239 and 470 hours: according to the historical data, 31.54% of the failures have happened in this interval. The average value calculated for the TTF3 variable is about 424 hours, with a standard deviation of 52.88 hours, which provides a more precise estimate of the most probable value for TTF3. The model also shows that 60% of the data points in TTF3 for such an arrangement of TTF2 and TTF1 are missing.

The monitor for the censored TTF3 (CF3) variable is also shown in Figure 44. The intervals give some insight into the probability of having no event in each interval. The mean value of the no-event hours and its standard deviation are also shown at the top of the monitor.
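As a small illustration of how such summary statistics follow from a discretized posterior, the sketch below computes the mean and standard deviation from interval midpoints and probabilities. The interval edges and probabilities are invented for the example, not the actual values reported above.

```python
import numpy as np

# Hypothetical posterior over TTF3 intervals after setting the evidence
# (interval edges in hours and probabilities are invented).
edges = np.array([0.0, 239.0, 470.0, 700.0, 1000.0])
probs = np.array([0.20, 0.45, 0.25, 0.10])

# Approximate each interval by its midpoint, as a monitor-style summary.
midpoints = (edges[:-1] + edges[1:]) / 2
mean = np.sum(probs * midpoints)
std = np.sqrt(np.sum(probs * (midpoints - mean) ** 2))
print(f"posterior mean ~ {mean:.1f} h, std ~ {std:.1f} h")
```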