
Asset maintenance maturity model as a structured guide to maintenance process maturity

3 Asset maintenance maturity model (AMMM)

This section presents the proposed conceptual asset maintenance maturity model (AMMM), which extends the MPM framework developed by Van Horenbeek (2014). The framework enumerates maintenance objectives and the respective performance measures, derived through a literature survey and validated by industrial case studies. Moreover, the framework aims to address the following deficiencies of existing MPM frameworks, which:

are generic and in many cases describe a list of possible maintenance objectives together with the corresponding maintenance performance indicators;

fail to explicitly link the derived MPI’s with the maintenance objectives;

fail to account for the varying importance of different maintenance performance measures depending on the operating/business context.

The proposed MPM framework depicted in Fig. 2 explicitly links the different maintenance performance measures to all organizational levels, while at the same time taking into consideration the varying importance of the maintenance objectives and respective indicators.

The framework consists of 5 steps (circled in the framework), namely (Van Horenbeek, 2014):

Figure 2. Maintenance performance measurement (MPM) framework (Van Horenbeek, 2014)

1. Translate generic MPM frameworks into a customized MPM system taking into account all organizational levels;

2. Prioritize the maintenance objectives on all organizational levels based on the analytic network process (ANP) methodology;

3. Translate the business-specific maintenance performance measures into MPI's;

4. Measure, monitor, and control maintenance performance based on the MPI's; and

5. Continuously improve by re-defining maintenance targets according to the evolving business environment (performance measurement and benchmarking).

The study by Van Horenbeek (2014) focuses on the first three steps enumerated above. In this paper we extend the MPM framework by focusing on steps 4 and 5, where we propose the inclusion of the AMMM. The AMMM comprises three phases: performance assessment; continual improvement; and benchmarking and standardization.

3.1 Performance assessment

In this phase, the maintenance objectives and respective MPI's are deployed to the respective organizational levels according to the relative importance weightings defined in the ANP's limit supermatrix. As such, maintenance objectives are deployed at the strategic level, while the MPI's are deployed at the tactical and operational levels. The importance weightings sum to a value of 1. Next, each MPI is evaluated independently and assigned a performance score using one of three assessment approaches: (1) objective; (2) subjective; or (3) a combination of both. Depending on the availability of historical data, several MPI's such as reliability may be computed objectively (i.e. reliability analysis).

Thus, here we apply a modified mathematical formulation first proposed by Hsieh (2009), where the weighted global performance assessment score (PAS) is computed as follows:

$\mathrm{PAS} = \sum_{i=1}^{m} \sum_{j=1}^{n_i} w_{ij}\, s_{ij}$  (1)

where:

$m$: the number of maintenance objectives, i.e. people and environment, functional and technical aspects, …, and maintenance budget;

$n_i$: the total number of maintenance performance indicators in each maintenance objective;

$w_{ij}\, s_{ij}$: the weighted score for each maintenance indicator, defined as the product of the assigned performance assessment score $s_{ij}$ and the importance weighting $w_{ij}$ derived from the ANP.

To ensure uniformity of the weighted PAS, it is proposed that each performance assessment score be transformed into a ratio ranging from 0 to 1. For instance, the MPI “maintenance costs” is normally described in economic terms, e.g. Euros, but can instead be expressed as the ratio of the actual maintenance cost incurred to the budgeted maintenance cost.
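As an illustration, the following minimal Python sketch applies equation (1) for the objective assessment route; the maintenance objectives, MPI's, ANP weightings and normalized scores used here are hypothetical and serve only to show the arithmetic.

```python
# Sketch of the weighted global performance assessment score (PAS), equation (1):
# PAS = sum over objectives i and indicators j of w_ij * s_ij.
# All objectives, indicators, weightings and scores below are hypothetical.

# ANP-derived importance weightings per (objective, MPI); they sum to 1 overall.
weights = {
    ("maintenance budget", "maintenance cost ratio"): 0.30,
    ("functional and technical aspects", "reliability"): 0.45,
    ("people and environment", "safety performance"): 0.25,
}

# Objectively assessed performance scores, each expressed as a ratio in [0, 1].
scores = {
    ("maintenance budget", "maintenance cost ratio"): 0.90,     # actual vs. budgeted cost
    ("functional and technical aspects", "reliability"): 0.80,  # e.g. from reliability analysis
    ("people and environment", "safety performance"): 0.60,
}

def weighted_pas(weights, scores):
    """Weighted global performance assessment score in [0, 1] (equation (1))."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "ANP weightings should sum to 1"
    return sum(weights[key] * scores[key] for key in weights)

pas = weighted_pas(weights, scores)
print(f"Weighted PAS = {pas:.3f} ({pas:.1%})")  # Weighted PAS = 0.780 (78.0%)
```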

For subjective performance assessment, domain experts assign performance assessment scores to each of the MPI's defined at the tactical and operational levels. Here, we propose a 5-point Likert scale with a score range of ‘1’ to ‘5’, the latter being the highest rating. Thus, by assigning each MPI a rating based on the Likert scale, a weighted global performance assessment score may be computed using equation (1) defined above. In this case the value of $s_{ij}$ is based on the Likert scale (i.e. a range of 1-5), rather than a ratio (e.g. reliability = 0.85).
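For the subjective route, one possible way (an assumption for illustration, not something prescribed in the text) to keep Likert-based ratings compatible with the 0 to 1 range used above is to rescale them before weighting:

```python
def likert_to_ratio(rating: int) -> float:
    """Map a 1-5 Likert rating onto [0, 1].

    The 1-5 range follows the text; the linear rescaling is an assumption
    made only so that Likert-based scores can be mixed with ratio-based ones.
    """
    if not 1 <= rating <= 5:
        raise ValueError("Likert rating must be between 1 and 5")
    return (rating - 1) / 4  # 1 -> 0.0, 3 -> 0.5, 5 -> 1.0

# Example: an expert rates a hypothetical MPI "spare parts availability" as 4.
print(likert_to_ratio(4))  # 0.75
```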

Introducing the weighted PAS allows for a realistic evaluation of the organization's performance, in that the most important maintenance performance indicators (i.e. those assigned the highest weights) contribute more to the overall score than less important MPI's. Moreover, the weighted PAS assumes a final value ranging from 0 to 1, which is easily transformed into a percentage, thus presenting an intuitive means of situating the organization at a specific level on the maturity ladder depicted in Fig. 3.
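As a further illustration, a weighted PAS expressed as a percentage could be mapped onto the maturity ladder as sketched below; the level names and thresholds are hypothetical, since the actual levels are defined in Fig. 3 rather than reproduced here.

```python
# Hypothetical maturity ladder; the real level definitions belong to Fig. 3.
MATURITY_LEVELS = [
    (0.80, "Level 5 - optimizing"),
    (0.60, "Level 4 - managed"),
    (0.40, "Level 3 - defined"),
    (0.20, "Level 2 - repeatable"),
    (0.00, "Level 1 - initial"),
]

def maturity_level(weighted_pas: float) -> str:
    """Situate an organization on the (hypothetical) maturity ladder."""
    for threshold, label in MATURITY_LEVELS:
        if weighted_pas >= threshold:
            return label
    return MATURITY_LEVELS[-1][1]

print(maturity_level(0.78))  # Level 4 - managed
```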

3.2 Continual improvement process

After being situated on the maturity ladder, the organization may find the need to improve its maintenance programs, i.e. move to the next step on the maturity ladder. However, achieving such improvements is not always straightforward. Several studies report an implicit link between the choice of maintenance policy (e.g. FBM, TBM and CBM) and the MPI's, and as such improvement activities are invariably linked to the selection of appropriate maintenance policy(s). In this paper we propose the use of risk assessment methodologies as a central tool for the continual improvement of maintenance programs. The reasons are three-fold, in that risk assessment:

1. allows for a systematic identification of failure risks and, as a result, focuses maintenance effort on the most important failure modes;

2. constitutes an important tool within the maintenance decision-making framework; for instance, FMEA is an important tool in the RCM concept, while the fault tree is an important reliability analysis tool in RBIM; and

3. allows for the incorporation of important externalities often ignored during the development of maintenance improvement programs, such as economic feasibility, safety factors, and environmental considerations.

Therefore, the proposed structured improvement framework acts as a potential guide for developing improvement programs. A wide variety of risk assessment methodologies are reported in the literature (Tixier et al., 2002). The effectiveness of the selected maintenance strategy(s) on the MPI's is continuously evaluated (i.e. measured and monitored) via the feedback loop depicted in Fig. 3.

Figure 3. Overview of proposed asset maintenance maturity model (AMMM) framework (maturity ladder with increasing level of maturity)

For instance, a particular risk assessment methodology (e.g. RCM)

may suggest implementing combined FBM and TBM, which upon evaluation does not necessarily lead to improved MPI scores. As such, a different risk assessment approach, e.g. RBIM, may instead propose optimizing preventive maintenance schedules, leading to possibly improved scores. Thus, the application of risk assessment methodologies marks an important departure from several conventional maturity models proposed in the literature, which situate novel maintenance strategies, e.g. CBM, at the highest maturity level, while traditional strategies, e.g. FBM and TBM, are situated at the lowest maturity levels. Rather, the selection of maintenance strategy is specific to the business and operating context of the organization and is explicitly linked to maintenance performance measurement.
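Under these assumptions, the feedback loop between strategy selection and performance measurement could be sketched as follows; the candidate strategies, their scores and the assessment function are placeholders rather than part of the published model.

```python
# Sketch of the continual improvement feedback loop: evaluate the maintenance
# strategies suggested by a risk assessment methodology, re-measure the weighted
# PAS, and retain a change only if the score improves. All values are placeholders.

def continual_improvement(current_pas, candidate_strategies, assess_pas):
    """Return the strategy (if any) that improves the weighted PAS, and its score.

    candidate_strategies: strategies proposed by risk assessment (e.g. RCM or RBIM outputs).
    assess_pas: function that measures the weighted PAS after applying a strategy.
    """
    best_strategy, best_pas = None, current_pas
    for strategy in candidate_strategies:
        new_pas = assess_pas(strategy)   # measure and monitor via the MPI's
        if new_pas > best_pas:           # keep only strategies that improve the score
            best_strategy, best_pas = strategy, new_pas
    return best_strategy, best_pas

# Hypothetical usage: the PAS outcomes below are invented for illustration.
outcomes = {"combined FBM and TBM": 0.74, "optimized PM schedule (RBIM)": 0.81}
strategy, pas = continual_improvement(0.78, outcomes, lambda s: outcomes[s])
print(strategy, pas)  # optimized PM schedule (RBIM) 0.81
```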

3.3 Benchmarking and standardization of maintenance programs

Once the organization is satisfied with its current maturity level, an internal or external benchmarking study is considered. Since the MPI's are generic but vary in importance depending on the business context, the AMMM presents an intuitive approach for benchmarking maintenance programs across different organizations. Organizations achieving consistently high weighted PAS scores (i.e. > 80%) may be considered as having high maturity. High maturity does not necessarily imply implementing CBM, but rather the organization-specific maintenance strategy(s) that allows the attainment of high performance, as evidenced by the weighted PAS. Thus the AMMM departs from existing asset maintenance maturity frameworks that rank maintenance policy(s) according to their complexity. Rather, the organization may just as well attain high maturity through implementing effective TBM as opposed to CBM. Indeed, implementing CBM may instead render the organization “over-mature” and may therefore be unnecessary.

4 Conclusion

This paper reviews research work carried out on the development of capability maturity models. Although several CMM's exist, few models specific to the asset maintenance domain are reported in the published literature. Moreover, existing CMM's largely propose subjective assessment criteria, leading to possible ambiguity when applied to maintenance performance measurement and benchmarking studies. Thus, a novel asset maintenance maturity model (AMMM) is proposed specifically for the asset maintenance domain. The proposed model extends recent research work on maintenance performance measurement (MPM) and introduces the concept of the weighted performance assessment score (wPAS) as the basis for maintenance performance measurement and benchmarking studies.

Furthermore, the use of risk assessment methodologies is proposed as part of a structured decision-making framework for selecting the maintenance policy(s) best suited to the organization, considering its operational and business context.

The AMMM marks an important departure from existing CMM's in that organizations in different business contexts are assessed against the same generic list of maintenance objectives (and respective MPI's), but with varying importance weightings. This presents an intuitive approach allowing for a better comparison of the maintenance performance of different organizations. Moreover, situating the capability maturity of the organization on the basis of the weighted PAS allows for better performance assessment and attempts to de-link the performance assessment exercise from the assessor's subjectivity. Future work will focus on validating the proposed AMMM through case studies.

References


Becker, J., Knackstedt, R. and Pöppelbuß, J. (2009) ‘Developing Maturity Models for IT Management’. Business & Information Systems Engineering, Vol. 1, pp. 213-222.

Bevilacqua, M. (2000) ‘The analytic hierarchy process applied to maintenance strategy selection’. Reliability engineering & systems safety, Vol. 70, pp. 71.

BSI (2007) ‘Maintenance Key Performance Indicators’, British Standards Institution.

BSI P. (2008) ‘PAS 55-2: asset management, Part 2: Guidelines for the application of PAS 55-1’, British Standards Institution.

Campbell, J. D. and Reyes-Picknell, J. V. (2006) ‘Uptime: Strategies for Excellence in Maintenance Management’, Productivity Press Inc.

Crosby, P. B. (1980) ‘Quality is free: the art of making quality certain’, Penguin.

De Bruin, T., Freeze, R., Kaulkarni, U. and Rosemann, M. (2005) ‘Understanding the main phases of developing a maturity assessment model’.

Fraser, P. (2002) ‘The use of maturity models/grids as a tool in assessing product development capability’, IEEE International Engineering Management Conference, Vol. 1, pp. 244.

Galar, D., Parida, A., Kumar, U., Stenstrom, C. and Berges, L. (2011) ‘Maintenance metrics: A hierarchical model of balanced scorecard’, Quality and Reliability (ICQR), IEEE International Conference, September 14-17, 2011.

Hauck, J. C. R., Caivano, D., Oivo, M., Baldassarre, M. T. and Visaggio, G. (2011) Proposing an ISO/IEC 15504-2 compliant method for process capability/maturity models customization product-focused software process improvement.

Hsieh, P. J. (2009) ‘The construction and application of knowledge navigator model (KNM): An evaluation of knowledge management maturity’, Expert Systems with Applications, Vol. 36, pp. 4087.

Khan, F. I. (2003) ‘Risk-based maintenance (RBM): a quantitative approach for maintenance/inspection scheduling and planning’, Journal of loss prevention in the process industries, Vol. 16, pp. 561.

Komonen, K. (2002) ‘A cost model of industrial maintenance for profitability analysis and benchmarking’, International journal of production economics, Vol. 79, pp.15.

Koronios, A., Nastasie, D., Chanana, V. and Haider, A. (2007) ‘Integration through standards–an overview of international standards for engineering asset management’, Fourth International Conference on Condition Monitoring, Harrogate, United Kingdom, 2007.

Kumar, U., Galar, D., Parida, A., Stenström, C. and Berges, L. (2011) ‘Maintenance Performance Metrics: A State of the Art Review’, 1st International Conference on Maintenance Performance Measurement and Management (MPMM), 2011.

Macgillivray, B. H. (2007) ‘Benchmarking risk management within the international water utility sector. Part II: A survey of eight water utilities’, Journal of risk research, 10, 105.

Maier, A. M., Clarkson, P. J. and Moultrie, J. (2012) ‘Assessing organizational capabilities: Reviewing and guiding the development of maturity grids’, IEEE Transactions on Engineering Management, Vol. 59, pp. 138.

Maier, A. M., Moultrie, J. & Clarkson, P. J. (2009) ‘Developing maturity grids for assessing organisational capabilities: Practitioner guidance’.

Mettler, T. (2009) ‘Situational maturity models as instrumental artifacts for organizational design’, Proceedings of the 4th International Conference on Design Science Research in Information Systems and Technology - DESRIST '09. 1.

Mishra, R. P., Anand, G. and Kodali, R. (2006) ‘Development of a framework for world-class maintenance systems’, Journal of Advanced Manufacturing Systems, 05, 141-165.

Moubray, J. (2001) ‘Reliability-centered maintenance’, Industrial Press Inc.

Muchiri, P., Pintelon, L., Gelders, L. & Martin, H. (2011) ‘Development of maintenance function performance measurement framework and indicators’, International Journal of Production Economics, 131, 295-302.

Nakajima, S. (1988) ‘Introduction to TPM: total productive maintenance’, Productivity Press.

Oliveira, M. M., Lopes, I. D. S. and Figueiredo, D. (2012) ‘Maintenance management based on organization maturity level’.

Pintelon, L. and Vanpuyvelde, F. (2006) ‘Maintenance decision making’, Leuven, Belgium: Acco.

Schneider, J., Gaul, A. J., Neumann, C., Hogräfer, J., Wellßow, W., Schwan, M. and Schnettler, A. (2006) ‘Asset management techniques’, International Journal of Electrical Power & Energy Systems, Vol. 28, pp. 643-654.

Schuman, C. A. (2005) ‘Asset life cycle management: towards improving physical asset performance in the process industry’, International Journal of Operations & Production Management, Vol. 25, pp. 566.

Steenbergen, M., Bos, R., Brinkkemper, S., Weerd, I. and Bekkers, W. (2010) ‘The Design of Focus Area Maturity Models’, Global Perspectives on Design Science Research, Springer Berlin Heidelberg.

Strutt, J. E. (2006) ‘Capability maturity models for offshore organisational management’, Environment international, Vol. 32, pp. 1094.

Tiku, S. (2007) ‘Using a reliability capability maturity model to benchmark electronics companies’, The International Journal of Quality & Reliability Management, 24, 547.

Tixier, J., Dusserre, G., Salvi, O. & Gaston, D. (2002) ‘Review of 62 risk analysis methodologies of industrial plants’, Journal of Loss Prevention in the Process Industries, Vol. 15, pp. 291-303.

Tomlingson, P. D. (2007) ‘Achieving world class maintenance status’, Coal Age, pp. 112.

Van Horenbeek, A. (2014) ‘Development of a maintenance performance measurement framework using the analytic network process (ANP) for maintenance performance indicator selection’, Omega: The International Journal of Management Science, Vol. 42, pp. 33.

Wendler, R. (2012) ‘The maturity of maturity model research: A systematic mapping study’, Information and Software Technology, Vol. 54, pp. 1317-1339.

Modelling maintenance and operation strategies for high value water industry assets

Helen Cornwell*, Linda B Newnes, Jon Wright**, Paul Cook**

Department of Mechanical Engineering, University of Bath, Claverton Down, Bath, BA2 7AY, UK

*Corresponding author: Helen Cornwell

e-mail: h.cornwell@bath.ac.uk, e-mail: l.b.newnes@bath.ac.uk

** Wessex Water, Claverton Down, Bath, BA2 7WW, UK

e-mail: jon.wright@wessexwater.co.uk, e-mail: paul.cook@wessexwater.co.uk

Abstract

The overall aim of the research introduced in this paper is to estimate the impact of maintenance and operation strategies on the economic sustainment of in-service, long-life, high-value assets in the water sector. The focus of the research presented is to understand and model the utilisation and support costs for such assets. First, the in-service cost, maintenance strategy and maintenance modelling literature is reviewed, and the state of the art in the water sector is compared with that in other industrial sectors. A modelling methodology is then proposed, and an industrial exemplar is used to illustrate the estimation of in-service costs. This is demonstrated through the evaluation of alternative scenarios, and the potential benefits for industry are discussed. Finally, future research is described, aimed at extending the model to include cost and performance perspectives encompassing maintenance and operational costs, reliability, availability, and contractual and regulatory obligations.

Key words: Through life cost for high-value assets, in-service costs, maintenance strategy, maintenance modelling

1 Introduction

Literature states that in-service costs constitute the largest proportion of through life cost (Waghmode and Sahasrabudhe, 2012). In-service costs include all costs incurred from when an asset enters service through to retirement or disposal, such as operation and maintenance costs, technical costs and support costs. In-service costs can account for 75% to 85% of the through life cost of a long life asset (Newnes et al., 2011); therefore, the control and management of these costs have a major influence on the effective use of assets within an enterprise. Figure 1 depicts in-service costs as part of through life cost, and illustrates where the user can influence cost (and performance) from the in-service stage to the end of life.

Figure 1. Asset user sphere of influence in through life cost (life cycle stages from initial design, detailed design and production through in-service, retirement and reuse or recycle; limited user influence on cost in the early stages, major user influence from the in-service stage onwards)

Water is a major global industry, with the world market for drinking and wastewater having an estimated worth of 400-500 billion USD annually (Deutsche Bank Research, 2010). Water is a key input to industry, the economy and the health and well-being of the population, impacting the environment and sustainable development. However, factors such as increasing population and urbanisation and the effects of a changing climate mean that water resources are under increasing pressure. One such pressure is the operation and support costs for the assets used in the production and treatment of clean and wastewater. The control and management of in-service costs is critical to meeting the unprecedented coincident challenges to in-service costs and asset value (Palmer, 2010). Whilst there is little published evidence which quantifies in-service costs in the water industry, Lim et al. (2008) estimate the costs of operation and maintenance to be around 80% of the total Life Cycle Cost, whilst Bennett (2006) proposes 70-80%. Further, many water industry assets have long life cycles of up to 50 years (Englehardt et al., 2002), and the industrial exemplar discussed in this paper considers assets with a typical life cycle of up to 30 years. Such long life cycles increase the expected magnitude of in-service costs in the water industry.

This research builds on existing work for long life assets in aerospace and defence, where the use of through life cost estimating techniques is well established. Techniques used include quantitative parametric approaches (Niazi et al., 2006), where Cost Estimating Relationships (CERs) estimate costs based on defined asset parameters, for example weight. However, the potential of these techniques in other industries sharing similar product life and value characteristics has not been widely explored. Within the water sector, cost estimating research has been found to be inadequate (Cornwell and Newnes, 2012) for fulfilling the aims of the National Infrastructure Plan, namely a resilient and affordable water industry (HM Treasury, 2011).

This research seeks to combine through life cost estimating techniques with techniques from established research into maintenance strategy and policy (Sherwin, 2000; Veldman et al., 2011; Wang, 2002) in order to estimate the impact of maintenance and operational strategies on the economic sustainment of water industry assets. Maintenance policies aim to balance asset reliability and cost for maximum benefit (Sharma et al., 2011), and to assess this trade-off it is necessary to understand and model costs, using, for example, a bottom-up approach that breaks costs down into the constituent parts of maintenance activities (Park and Seo, 2004).
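As a purely illustrative sketch of the two estimating ideas mentioned above, a parametric CER and a bottom-up roll-up of maintenance activity costs, the snippet below uses invented coefficients and cost figures; it is not a model taken from the cited works.

```python
# Illustrative only: a power-law cost estimating relationship (CER) and a
# bottom-up roll-up of maintenance activity costs. All figures are invented.

def cer_cost(weight_kg: float, a: float = 1500.0, b: float = 0.7) -> float:
    """Parametric CER of the common form cost = a * parameter^b (here, asset weight)."""
    return a * weight_kg ** b

def bottom_up_maintenance_cost(activities: list) -> float:
    """Sum the constituent parts (labour, parts, downtime) of each maintenance activity."""
    return sum(act["labour"] + act["parts"] + act["downtime"] for act in activities)

print(round(cer_cost(250.0), 2))  # cost estimated from the hypothetical CER

activities = [
    {"labour": 450.0, "parts": 1200.0, "downtime": 300.0},  # e.g. pump overhaul
    {"labour": 120.0, "parts": 80.0, "downtime": 0.0},      # e.g. routine inspection
]
print(bottom_up_maintenance_cost(activities))  # 2150.0
```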

This research addresses current gaps in knowledge and practice in the following areas: extending quantitative parametric techniques to include performance in addition to cost, developing a robust model to include more than one category of maintenance activity and maintenance policy, and optimising the asset system with regard to both cost and performance. This research aims to answer the following question:

Can through-life cost estimating techniques, performance measurement and maintenance theory be combined to provide a robust estimate of in-service cost and performance?


These estimates can be utilised for informed decision-making, such as setting operational/scheduling conditions, or determining the appropriate level of maintenance to optimise economic conditions whilst maintaining asset availability. The next section of the paper critiques the current literature on both in-service cost estimating and operational maintenance strategies.

2 Review of in-service cost estimating, maintenance strategies and modelling

This section presents a review of in-service cost estimating, the influence of operation and maintenance strategy on in-service costs, and maintenance modelling approaches.

2.1 In-service cost research

In order to understand and model in-service costs, researchers have defined the cost components within a breakdown structure (Asiedu and Gu, 1998; Fabrycky and Blanchard, 1991; Goh et al., 2010). Table 1 summarises the differing views on the specific components of in-service cost.

Table 1. Definition of in-service costs as part of through-life cost (columns: Reference; Defined in-service cost components; Comments)

Goh et al. (2010): Operation, Maintenance and Repair, Operation/Maintenance Management, Operator and Maintenance Training, Technical Data, Modification and Upgrade, Inventory and Obsolescence. Comments: focussed on Operation and Maintenance.

Asiedu and Gu (1998): Transport, Energy, Storage, Maintenance and Materials, Waste, Breakage, Cost of Warranty, Packaging, Waste, Pollution, Health Damages, Inventory, Operator and Maintenance Training, Technical Data, Product Modification, (Downtime, Personnel, Other), Service Contracts (Personnel), Leasing (Other) (personnel, facilities, software, data), Technical (Training and Data), Consumables, Transportation and Handling, Test and Support Equipment, Supply support. Comments: system view, wider scope to include data and support functions.

The majority of models focus on operation and maintenance costs (Fabrycky and Blanchard, 1991; Goh et al., 2010), including repairs, modifications and upgrades. Gitzel and Herbort (2008) split out the costs of external personnel (service contracts) and leased equipment.

Asiedu and Gu (1998) define the factors that are considered during the user experience of the
