


4.2 Error and bias in the web survey

Error and bias are two different phenomena, though they have some commonalities. An error is the difference between an observed value and the true value, and it can be systematic or non-systematic (random). Bias, on the other hand, is a systematic error: a consistent difference between the collected data and the expected values.

Bias also endangers the accuracy of the collected data. There are several types of bias, such as survey bias, researcher bias, respondent bias, and non-response bias.

Survey bias can arise at any stage from the questionnaire's structure, design, question types, color, and style, and it manifests as information error, data analysis error, and errors in defining the target population and samples. Researcher bias, in turn, stems from improper survey design, inadequate planning, and poor understanding of the research topic, purpose, and objectives. Method bias is also part of researcher bias: it is a source of measurement error (i.e., random and systematic error) and endangers the validity of the relationships between construct measures.

Similarly, common method bias is a subset of method bias and has been recognized as a systematic measurement error in behavioral science for the last 40 years. Its source is a mono-method design applied to a specific target sample, arising from the response format, specific items, general context, and scale types. Respondent bias can occur due to participants' unwillingness or inability to answer the research questions accurately and honestly; the context, format, unfamiliarity, and weaknesses of the questionnaire also give rise to it. Non-response bias, finally, means that the achieved sample is unrepresentative.

Non-response bias occurs when respondents do not answer the questions due to sensitivity or invitation issues. Accordingly, to minimize biases in the cross-border web survey, this research focused intensively on the samples, data criteria, survey administration, web survey process, validity, and reliability, because bias can arise at any stage of the study process and in any measurement technique. Therefore, this study applied common method bias and non-response bias techniques as remedies (Bauer & Matzler, 2014; Mathews & Diamantopoulos, 1995; Min et al., 2016; Podsakoff, MacKenzie, Lee, & Podsakoff, 2003; Slattery et al., 2011).

4.2.1 Common method bias

In behavioral research, assessing common method bias is important because method bias influences respondents' answers. Several techniques can be used to detect common method bias, such as the marker variable technique and Harman's single factor test. The marker variable technique is widely used in covariance-based SEM software such as AMOS. Common method bias can also be tested in PLS-SEM using WarpPLS software by examining VIF values, but WarpPLS has not been updated as actively. Therefore, this study used SmartPLS version 3 because of its established use in marketing and business strategy research, although SmartPLS does not provide a common method bias test (Bauer, Matzler, & Wolf, 2014; Hair, Hult, Ringle, & Sarstedt, 2017; Henseler, Ringle, & Sinkovics, 2009; Kock, 2015; Ringle, Wende, & Becker, 2015; Ringle, Wende, & Will, 2005; Wong, 2013). As a result, scholars have suggested that Harman's single factor test can be used to assess common method bias when SmartPLS is employed.
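The full collinearity VIF approach mentioned above (Kock, 2015) flags common method bias when variance inflation factors exceed 3.3. A minimal sketch of how such VIFs are computed, using synthetic stand-in data rather than this study's responses:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (rows = cases).
    VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing column j
    on all the other columns; this equals SS_tot / SS_res."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # intercept + others
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = ((y - A @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        out[j] = ss_tot / ss_res
    return out

# Illustration with hypothetical indicator scores (100 cases, 5 items)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
v = vif(X)
print(v)  # uncorrelated random data: VIFs close to 1, well below 3.3
```

Under Kock's (2015) criterion, any VIF above 3.3 would signal a potential common method bias problem.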

In practice, no technique is fully adequate; for instance, neither Harman's test nor the marker variable technique eliminates common method bias (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003; Sattler, Völckner, Riediger, & Ringle, 2010).

Though each technique has its own merits and demerits, researchers have mostly applied Harman's test, which is also well known as the single factor test. Bauer and Matzler (2014), Bauer et al. (2014) and Zaheer et al. (2011) also used single-factor assessment in cross-border M&A studies. This study therefore tested common method bias using Harman's single factor technique.

In the assessment, the 47 items of the independent and dependent variables were entered into an exploratory factor analysis using the unrotated solution. The results indicated that the strongest single factor explained 16.490% of the variance.

Since this variance is below the 50% threshold, the study concludes that common method bias is not a problem (see Appendix 1) (Ayazlar & Ayazlar, 2015; Bauer & Matzler, 2014; Bauer et al., 2014; Podsakoff et al., 2003).
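The logic of Harman's check can be sketched as follows. The variance explained by the strongest unrotated factor is approximated here by the largest eigenvalue of the item correlation matrix (a common simplification), and the data are synthetic stand-ins for the study's 47 items:

```python
import numpy as np

def harman_single_factor_variance(X):
    """Share of total variance explained by the strongest unrotated
    factor, approximated by the largest eigenvalue of the item
    correlation matrix (rows = respondents, columns = items)."""
    corr = np.corrcoef(X, rowvar=False)   # item correlation matrix
    eigvals = np.linalg.eigvalsh(corr)    # eigenvalues, ascending order
    return eigvals[-1] / eigvals.sum()    # largest / total (= number of items)

# Hypothetical responses: 124 respondents x 47 items, as in this study
rng = np.random.default_rng(0)
X = rng.normal(size=(124, 47))
share = harman_single_factor_variance(X)
print(f"Strongest factor explains {share:.1%} of the variance")
print("Potential common method bias" if share > 0.50 else "Below 50% threshold")
```

A share below 50%, as reported above (16.490%), is conventionally read as no common method bias concern.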

4.2.2 Non-response bias

The non-response bias test checks whether the respondent sample represents the non-respondents. A difference between respondents and non-respondents is indicated when the observed and expected values differ significantly. As a remedy, this study tested non-response bias by comparing early and late respondents across survey waves, the sectors, and the sizes of the acquiring firms. There were six rounds of the web survey from March to November. Specifically, the study clustered the respondents into two survey waves: early respondents (March to June) and late respondents (September to November). The study then compared the manufacturing and service sectors. The firms' size was measured in terms of annual sales (i.e., less than 49 million to more than 50 billion) on a 7-point Likert scale (Ali, 2013; Bauer & Matzler, 2014).

Table 5. Group statistics

Respondents                              N     Mean    Std. Deviation    Std. Error Mean
Manufacturing and service sectors
  Early                                  72    1.86    .827              .098
  Late                                   52    1.90    .748              .104
Acquiring firm's size
  Early                                  61    3.20    1.787             .229
  Late                                   47    3.04    1.681             .245

Table 5 illustrates that in the manufacturing and service sectors, there were 72 early respondents and 52 late respondents. On the other hand, in terms of the acquiring firm’s size, 61 of the respondents were early and 47 late.

Table 6. The independent samples test

                                     Levene's Test for
                                     Equality of Variances    t-test for Equality of Means
                                     F        Sig.            t       df        Sig. (2-tailed)   Sig. (1-tailed)
Manufacturing and service sectors
  Equal variances assumed            2.697    .103            -.295   122       .768              .384
  Equal variances not assumed                                 -.300   115.949   .765              .3825
Acquiring firm's size
  Equal variances assumed            .790     .376            .456    106       .649              .3245
  Equal variances not assumed                                 .460    101.821   .647              .3235

Table 6 indicates that there were no significant differences between the early and late respondents in the manufacturing and service sectors at the 5% significance level, since Levene's test is non-significant (p = 0.103). The t value is also insignificant in both the two-tailed (p = 0.768, 0.765) and one-tailed (p = 0.384, 0.3825) tests.

Moreover, there were no significant differences between the early and late respondents in terms of firm size, because Levene's test is non-significant (p = 0.376). The two-tailed (p = 0.649, 0.647) and one-tailed (p = 0.3245, 0.3235) t-tests likewise showed non-significance (Ali, 2013; Armstrong & Overton, 1977; Bauer & Matzler, 2014). The results confirm that non-response bias is not a problem in this study.
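The independent samples comparison above can be reproduced from the reported group statistics. A sketch using SciPy, with the rounded values taken from Table 5 (so the t statistics differ marginally from Table 6, which was computed on the raw responses):

```python
from scipy.stats import ttest_ind_from_stats

# Early vs. late respondents, manufacturing and service sectors (Table 5)
t, p = ttest_ind_from_stats(
    mean1=1.86, std1=0.827, nobs1=72,   # early respondents
    mean2=1.90, std2=0.748, nobs2=52,   # late respondents
    equal_var=True,                     # Levene's test was non-significant
)
print(f"Sectors:   t = {t:.3f}, p (2-tailed) = {p:.3f}")

# Early vs. late respondents, acquiring firm's size (Table 5)
t2, p2 = ttest_ind_from_stats(
    mean1=3.20, std1=1.787, nobs1=61,
    mean2=3.04, std2=1.681, nobs2=47,
    equal_var=True,
)
print(f"Firm size: t = {t2:.3f}, p (2-tailed) = {p2:.3f}")
```

Both p-values land well above 0.05, matching the non-significant results reported in Table 6.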