1.1 Background

During the last few decades, software has become a critical part of industrial systems.

The use of software has grown rapidly in every part of the ISA-95 automation pyramid model (Figure 1), and the levels have become increasingly dependent on each other. The use of software is also expected to become even more common as the industry seeks greater efficiency through automation.

Figure 1. Automation pyramid (Hollender 2010)

At the same time, the complexity of the software has also increased. This poses unique challenges in maintaining the integrity of the connected software across the different levels of production systems. For example, the configurations made at the MES level have a direct impact on the devices at the field level. The nature of modern, directly connected systems means that even minor errors at the higher levels, such as misspellings or wrong data types, can cause major issues at the field level.

This problem has created a growing need for testing and quality assurance. A 2011 study by Pierre Audoin Consultants found that companies invest up to 50 billion dollars in testing and quality assurance annually and that it is one of the fastest-growing areas in IT services (Pierre Audoin Consultants GmbH 2011). Similarly, a 2013 study by the University of Cambridge estimated that the total cost of debugging software amounted to $312 billion per year and that failure to adopt proper debugging tools cost the economy $41 billion worth of programming time annually (University of Cambridge 2013).

Testing can be done manually or with automated tools. In manual testing, tests are executed by humans according to a predefined set of actions and expected results, supplemented by general visual observation of the interfaces. With complex and interconnected views, manual testing can require a lot of work and can lead to false-positive results if the tester forgets or fails to identify deficiencies. Automated testing is done with tools such as scripts and testing frameworks that execute pre-written tests to perform the same set of actions and expect the same set of results each time. This eliminates human and omission errors and leaves developers more time to focus on other tasks. Automated testing requires planning: creating a testing plan involves defining the coverage, testing technologies, test cases, and testing methods. (Ammann and Offutt 2008)
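
As a concrete illustration of the repeatability described above, the following sketch shows an automated unit test written with Python's built-in unittest framework. The parse_setpoint function and its allowed value range are hypothetical examples invented for illustration, not part of any system discussed in this thesis; the point is that the same actions and expected results are executed identically on every run.

    import unittest

    def parse_setpoint(raw: str) -> float:
        """Hypothetical helper: converts an operator-entered setpoint
        string into a float before it is passed on to a field device."""
        value = float(raw)  # raises ValueError on malformed input
        if not 0.0 <= value <= 100.0:  # illustrative allowed range
            raise ValueError(f"setpoint {value} outside allowed range")
        return value

    class TestParseSetpoint(unittest.TestCase):
        def test_valid_input_is_converted(self):
            # The same action and expected result on every run.
            self.assertEqual(parse_setpoint("42.5"), 42.5)

        def test_malformed_input_is_rejected(self):
            # A misspelled value must fail here, at the high level,
            # instead of propagating down to the field level.
            with self.assertRaises(ValueError):
                parse_setpoint("42,5")

    if __name__ == "__main__":
        unittest.main()

Once written, such a test can be run automatically on every change, catching high-level data errors of the kind mentioned in Section 1.1 before they propagate to the field level.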

1.2 Problem definition

Ideally, testing practices are crafted at the beginning of a new software project. This way, tests can be developed gradually along with the software, allowing developers to refine test cases and learn to prioritize the most critical areas to test, thus making the software easier to develop and maintain. However, this is not always possible, and testing might be neglected for various reasons, e.g. when prioritizing feature development or when lacking a coherent testing plan. According to a survey study by Torkar and Mankefors, 60% of developers said that testing (verification and validation) was the first thing neglected when something had to be discarded due to timeline restrictions (Torkar and Mankefors-Christiernin 2003, 164-173).

The lack of automated testing becomes an especially heavy burden if the software is released as stand-alone software with an offline installer, which is often the case for industrial systems. Providing software updates and bugfixes to an offline application can be difficult and usually requires installing a new version. Also, as the complexity of the software accumulates, so does the risk of introducing a bug and not catching it before the release. Fixing such a bug usually requires help from technical support and is very cost-intensive (Figure 2). The severity of a bug can also be difficult to measure: a bug can, for example, complicate or block the use of the software, but it can also cause physical damage to people or the environment.

Figure 2. Cost of a bug (S.M.K and Farooq 2010)

This thesis focuses on creating an approach for adopting testing practices, developing tests efficiently, and maintaining those practices in a mature industrial software project.

1.3 Objectives

The objective of this thesis is to answer questions related to the development and implementation of a testing plan for software that is in a mature stage of development, as well as to consider the benefits of testing for the related systems in the software ecosystem. The research work aims to answer the following questions, which also define the scope of this thesis:

• How can testing be introduced as a persistent and maintainable part of development in a mature project?

• How can the most critical parts of the software in industrial systems be identified for testing?

• What are the factors that affect the overall design and efficiency of the tests, and what are the challenges in the industrial context?

1.4 Limitations

The scope of this thesis is to provide an approach for adopting testing practices in a software project. The technical details regarding test development are out of scope for this thesis.

The testing in the scope of this thesis is limited to functional testing. Usability, scalability, performance, accessibility, security, and other forms of non-functional testing are not discussed in this thesis.

1.5 Outline

The structure of this thesis is as follows: Chapter 1 introduces this document and establishes the problem, the objectives, and the limitations of the thesis work.

Chapter 2 defines the background and the current state of the art in software testing. Chapter 3 presents a proposed method for the problems discussed in Chapter 1. Chapter 4 describes the implementation of the methods proposed in Chapter 3. Chapter 5 reviews the results and conclusions and offers some reflections and possible directions for further work.