4.6.3 Test-Driven Development and Analysis

To answer the research question, the patient data needs to be analyzed. To address the objective of this thesis, and to follow the learnings from the preceding BML loop, the patient data is analyzed with the technologies presented in Subsection 4.6.2 and the alterations explained in Subsection 4.5.4. This creates a need for comprehensive testing: the results determine the direction of the following research and are intended to eventually have diagnostic power. This subsection states the principles used in the low-level, analysis-related testing and details the analysis steps at the necessary level of detail.

Test-Driven Development (TDD) is a development method in which feature implementation begins by writing the test cases for the feature. The test cases are then run and checked to fail; they should fail because the feature is not implemented yet. After that, the feature is implemented and the tests are run again, now expected to pass. Necessary refactoring, followed by retesting, is the final step. Thus, the tests are run several times, and in practice automating the test cases is therefore required.

There are several variants of applying TDD, differing in how much of the architecture is designed prior to TDD, if at all. A popular variation, also used here, is to predesign the architecture and apply TDD at the unit and module level. [111, pp. 214-216]

TDD is chosen because the programming is done by a single person; writing the tests prior to the feature circumvents the psychological challenge of trying to break one's own code. In addition, the mathematical features are convenient targets for automated testing. The Qt Test module, provided by Qt for testing Qt applications, is used to build and run the tests. The testing is data-driven, and Matlab scripts are used to generate the oracle values. The scripts are a credible oracle for two reasons. Firstly, several of the features developed in C++ are available as ready or almost-ready services in Matlab. Secondly, if the same result is obtained with two languages of different level, the possibility of an identical random error is relatively low.
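As an illustration of the data-driven structure, a minimal Qt Test case could be sketched as below. The tested function scaleToMillimetres, the class name, and the expected values are illustrative assumptions; in the actual tests the oracle values are produced by the Matlab scripts.

```cpp
// A minimal sketch of a data-driven Qt Test case, assuming Qt 5 / Qt Test.
// The tested function and the expected values are illustrative placeholders;
// in the actual tests the oracle values come from Matlab scripts.
#include <QtTest>

// Hypothetical stand-in for an analysis feature under test.
static double scaleToMillimetres(double raw, double gain)
{
    return raw * gain;
}

class TestAnalysis : public QObject
{
    Q_OBJECT
private slots:
    // The _data() slot defines the test rows: inputs plus the oracle value.
    void scaleToMillimetres_data()
    {
        QTest::addColumn<double>("raw");
        QTest::addColumn<double>("gain");
        QTest::addColumn<double>("expected");

        QTest::newRow("typical value") << 2.0 << 0.5  << 1.0;
        QTest::newRow("zero input")    << 0.0 << 0.5  << 0.0;
        QTest::newRow("negative gain") << 2.0 << -0.5 << -1.0; // negative case
    }

    // The test slot is executed once per row defined above.
    void scaleToMillimetres()
    {
        QFETCH(double, raw);
        QFETCH(double, gain);
        QFETCH(double, expected);
        QCOMPARE(::scaleToMillimetres(raw, gain), expected);
    }
};

QTEST_APPLESS_MAIN(TestAnalysis)
#include "testanalysis.moc"
```

Each row added in the _data() slot becomes one executed test case, so new oracle values can be added without touching the test logic.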

The test-case design aims at high code coverage. In practice, that means applying the test-design techniques of equivalence partitioning and boundary value analysis.

Equivalence partitioning is a method to limit the number of test cases; one input and its passing or failing is assumed to represent the whole partitioned group [111, p. 209]. Thus, the number of test cases is reduced, as the partitioned group can be tested with a few representative cases [111, p. 209]. Equivalence partitioning also enables effortless concentration on the negative test cases. Here, the targets for testing are the interfaces, limiting values, handling of error conditions, data structures, and execution paths. There are multiple parameters to be partitioned for a single feature. A ready-made script is used to form combinations of the test parameters in such a way that the total number of test cases stays limited.

In boundary value analysis, the borders of a value range and their surroundings are tested. The borders are typically the borders of equivalence-partitioned groups or other logical groups. Boundary value analysis is a well-reasoned testing approach, as the borders are a very typical location for errors. One way to see boundary value analysis is that it complements equivalence partitioning. [111, pp. 209-210]
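As a sketch of how equivalence partitioning and boundary value analysis translate into concrete test values, consider a hypothetical filter-length parameter; the valid range used below is an assumption made only for illustration, not the application's actual limit.

```cpp
// A sketch of boundary value analysis for a single parameter. The valid
// range [3, 101] and the "must be odd" rule for a hypothetical filter
// length are assumptions made for illustration.
#include <cassert>

// Hypothetical validity check under test.
static bool isValidFilterLength(int length)
{
    return length >= 3 && length <= 101 && (length % 2 == 1);
}

int main()
{
    // Below the lower border, on the border, and just above it.
    assert(!isValidFilterLength(2));
    assert(isValidFilterLength(3));
    assert(!isValidFilterLength(4));   // also hits the "must be odd" rule

    // A representative value from the middle of the valid partition.
    assert(isValidFilterLength(51));

    // Just below the upper border, on the border, and above it.
    assert(!isValidFilterLength(100)); // even, hence invalid
    assert(isValidFilterLength(101));
    assert(!isValidFilterLength(102));
    return 0;
}
```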

The application is developed using object-oriented programming (OOP). The class diagram of the application is shown in Appendix G. OOP brings its own flavor to the testing: the state of the object, encapsulation, inheritance, possible dynamic binding, and exceptions need to be considered. The first three factors are handled in the test-case construction: each test begins by constructing the object in a suitable state and then calling the methods of the constructed object. This approach is an option because the main emphasis of the testing is on logic, due to the analytical nature of the application. In the case of inheritance, the base class is used here simply to distinguish different class types, and for future purposes; thus, the base class is constructed in the tests. Dynamic binding is not used in the application. Tests for exceptions are written as their own test cases.
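A test case for exception behaviour could be sketched as follows, assuming Qt 5.3 or later; the SampleBuffer class and its rule of throwing on empty input are illustrative assumptions rather than the application's actual classes.

```cpp
// A sketch of testing an exception as its own test case (Qt 5.3 or later).
// SampleBuffer and its "throws on empty input" rule are illustrative
// assumptions, not the application's actual classes.
#include <QtTest>
#include <stdexcept>
#include <vector>

class SampleBuffer
{
public:
    explicit SampleBuffer(const std::vector<double> &samples)
    {
        if (samples.empty())
            throw std::invalid_argument("empty sample vector");
        m_samples = samples;
    }
private:
    std::vector<double> m_samples;
};

class TestExceptions : public QObject
{
    Q_OBJECT
private slots:
    // Constructing the object in an invalid state is its own test case.
    void constructorThrowsOnEmptyInput()
    {
        QVERIFY_EXCEPTION_THROWN(SampleBuffer(std::vector<double>{}),
                                 std::invalid_argument);
    }
};

QTEST_APPLESS_MAIN(TestExceptions)
#include "testexceptions.moc"
```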

The preprocessing steps are implemented as methods of the Movement class (see Appendix G) of the application. For each preprocessing step (offset removal, distance conversion, baseline removal, and filtering) there is a public interface method to be called. In addition, there is a method to verify the sanity of the raw data.

In addition, a single public interface method, performPreprocessing, calls the preprocessing methods one by one. The individual preprocessing methods are left in the public interface for unit testing; thus, an encapsulation cost is paid to keep the application testable.
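The resulting public interface could be sketched roughly as follows; only performPreprocessing is named above, so the remaining method names, signatures, and placeholder bodies are assumptions based on the listed preprocessing steps.

```cpp
// A rough sketch of the Movement class's preprocessing interface. Only
// performPreprocessing is named in the text; the other identifiers and the
// placeholder bodies are illustrative assumptions.
#include <vector>

class Movement
{
public:
    // Checks that the raw data is sane before any processing is applied.
    bool verifyRawData() const { return !m_samples.empty(); }

    // Individual preprocessing steps, kept public so that each step can be
    // unit tested separately (the encapsulation cost mentioned above).
    void removeOffset()      { /* placeholder body */ }
    void convertToDistance() { /* placeholder body */ }
    void removeBaseline()    { /* placeholder body */ }
    void filter()            { /* placeholder body */ }

    // Convenience method named in the text; runs the steps one by one.
    void performPreprocessing()
    {
        removeOffset();
        convertToDistance();
        removeBaseline();
        filter();
    }

private:
    std::vector<double> m_samples; // illustrative internal representation
};
```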

In preprocessing, the difference to the preliminary-phase solution is the simplification of the baseline removal approach. In the preliminary study, a CFAR filter is used to compute the baseline, and a Matlab function developed in the dissertation work is available for it. However, no suitable library implementation is found for the C++ implementation, and developing one would be laborious. Therefore, the baseline calculation is reduced to a median filter with an adaptable length. The default length is set to two times the movement duration. The existing Matlab script is used to verify the suitability of the simplification.

In the analysis, the temporal analysis is changed to a more robust one. In the preliminary study, the temporal difference is computed from the locations of the maximum amplitudes on the contralateral side. This method, however, is sensitive to noise and odd waveforms. Thus, the delay of the best cross-correlation coefficient is used instead. In other words, the temporal difference analysis changes from a peak-location difference to the delay of the best waveform match.
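The delay of the best waveform match could be computed along the following lines; the exhaustive lag search and the omission of normalization are simplifying assumptions for illustration.

```cpp
// A sketch of finding the temporal difference as the lag that maximizes the
// cross-correlation between the two sides. The full lag range and the lack
// of normalization are simplifying assumptions.
#include <vector>

// Returns the lag (in samples) at which the cross-correlation of the two
// signals is largest; a positive value means 'right' lags behind 'left'.
static long bestMatchDelay(const std::vector<double> &left,
                           const std::vector<double> &right)
{
    const long n = static_cast<long>(left.size());
    const long m = static_cast<long>(right.size());
    long bestLag = 0;
    double bestCorrelation = -1e300;

    for (long lag = -(m - 1); lag < n; ++lag) {
        double correlation = 0.0;
        for (long i = 0; i < n; ++i) {
            const long j = i - lag;
            if (j >= 0 && j < m)
                correlation += left[i] * right[j];
        }
        if (correlation > bestCorrelation) {
            bestCorrelation = correlation;
            bestLag = lag;
        }
    }
    return bestLag;
}
```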

The second significant change to the analysis is in the usage of spatial references.

In the preliminary study, reference movements are used to compute a personal reference for each participant. However, an observation from the patient data collection is that the prototype shifted notably on the patients' heads during the reference recording; the instruction to perform as big a movement as possible for the reference guided the patients to do exaggerated expressions. Thus, an alternative method to normalize the spatial quantities is used: the median of the spatial quantity value over every repetition is used as the reference. Thus, each patient has their own reference, and the healthy side's values are used for both sides.
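As a sketch of the reference computation, the median of a spatial quantity over the healthy side's repetitions and the corresponding normalization could look like the following; the data layout of one value per repetition is an assumption.

```cpp
// A sketch of the spatial reference: the median of a spatial quantity over
// all repetitions of the healthy side, used as the reference for both sides.
// The layout (one value per repetition) is an illustrative assumption.
#include <algorithm>
#include <vector>

static double spatialReference(std::vector<double> healthySideValues)
{
    if (healthySideValues.empty())
        return 0.0; // no repetitions recorded; left to the caller to handle

    std::sort(healthySideValues.begin(), healthySideValues.end());
    const std::size_t n = healthySideValues.size();
    if (n % 2 == 1)
        return healthySideValues[n / 2];
    return (healthySideValues[n / 2 - 1] + healthySideValues[n / 2]) / 2.0;
}

// Normalization of a measured value against the personal reference.
static double normalize(double value, double reference)
{
    return value / reference;
}
```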