The data analysis in all five publications was conducted by thematic coding and analysis; categories and themes were originally built from the bottom up by organising data into abstract units of information (Creswell, 2009; Leavy, 2014). All data sources (interviews, reflective diaries, and online reviews) were analysed in a similar manner. Specifically, the researcher followed an analytical process in which data are organised into first- and second-order codes and then aggregated into theoretical dimensions. This method, known as the Gioia method, is an inductive approach to data analysis (Gioia, Corley & Hamilton, 2012). Following a clear analytical process enhances qualitative rigour since it demonstrates the progression from raw data to theoretical dimensions (cf. Vafeas et al., 2016; Zimmermann, Raisch & Birkinshaw, 2015).

Given the nature of each individual publication, the data analysis focused on either customer engagement or value co-destruction. In all publications, the data analysis process commenced with the reading of interview transcripts and interview notes, followed by the coding process. Interviews were analysed using the NVivo 10 software.

In Publications I, II, IV, and V, the initial data analysis framework was compared with existing knowledge and was modified in line with the empirical findings and theoretical insights gained during the process; thus, abductive reasoning took place. This process is depicted in Figure 5. Following the abductive approach allowed the researcher to form categories for all the information received from the different data sets, gain insights from the theory, and reflect these back on the empirical findings. Table 6 summarises each publication and its data analysis perspective.

Table 6. Research design of the individual publications

[Table 6 presents, for each publication, its reasoning logic, data source, analysis, and whether it belongs to Study 1 or Study 2. The publication titles listed include I: Customer engagement in the hotel industry: perceptions of hotel staff and guests; II: Customer engagement in B2B and B2G relationships: antecedents and …; IV: When value co-creation fails: reasons that lead to value co-destruction; and V: Value co-destruction in a hotel industry: how scripts shape service …]

The first stage of the coding process focused on identifying the first-order codes.

Publications I, II, and III focused on customer engagement. When creating the first-order codes from the customer engagement data, the focus was on identifying, for example, specific antecedents or ways in which firms engage their customers, ways in which customers demonstrate their engagement with the firm, and specific outcomes of customer engagement. In Publications IV and V, the focus was on investigating the antecedents of value co-destruction. During the first-order coding of the value co-destruction data, the focus was on identifying an action, place, situation, or other factor that could initiate the value co-destruction process. Publications I and V had reflective diaries as additional data sources. Once the first-order codes from the interviews had been created for both respective publications, the reflective diaries were analysed. The diaries were analysed in the same thematic coding fashion as the interviews, that is, the focus was on identifying specific first-order codes that fitted the themes of customer engagement or value co-destruction. Diary data can be analysed in a similar thematic or content analysis style as interview data (e.g. Clayton & Thorne, 2000). The diary data brought up new first-order codes because it represented the customer’s voice. An example of data analysis related to customer engagement is presented in Figure 4, which depicts the service experience, identified as an antecedent of customer engagement.

Figure 4. An example of data analysis related to customer engagement
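To make the mechanics of the first coding stage easier to follow, the sketch below shows one possible way of representing first-order coding programmatically. It is not part of the actual analysis, which was carried out in NVivo 10; the class name, excerpt texts, and code labels are hypothetical illustrations only.

```python
from dataclasses import dataclass

# Hypothetical sketch of first-order coding: each data excerpt (interview,
# reflective diary) is tagged with a first-order code. The excerpt texts
# and code labels below are invented for illustration only.
@dataclass
class CodedExcerpt:
    source: str            # e.g. "interview" or "reflective diary"
    excerpt: str           # quotation from the raw data (invented here)
    first_order_code: str  # label assigned during the first coding stage

excerpts = [
    CodedExcerpt("interview", "The staff remembered my name at check-in.",
                 "personalised service experience"),
    CodedExcerpt("reflective diary", "I told my friends to avoid the hotel.",
                 "negative word of mouth"),
]

# Group excerpts by first-order code as input for the second coding stage.
by_code = {}
for e in excerpts:
    by_code.setdefault(e.first_order_code, []).append(e)

print({code: len(items) for code, items in by_code.items()})
```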

During the second stage of the coding process, the purpose was to group similar first-order codes into second-order codes. At this stage, the first-order codes that had emerged from the interview and diary data (Publications I & V) were treated as equals, meaning they were combined into a larger pool of first-order codes out of which the second-order codes were created. With regard to the customer engagement data, for example, first-order codes concerning WOM and opinions were grouped into a second-order code labelled ‘Feedback’ (Publication II). The data analysis focusing on value co-destruction was continued by combining first-order codes relating to a lack of trust or openness into a second-order code labelled ‘Inability to trust’ (Publication IV). Once the second-order codes had been created, an additional data source, 344 online reviews, was introduced in Publication V. The online reviews were analysed manually, and their purpose was either to bring new insights or to corroborate the existing findings. After all the reviews had been read, it was evident that they corroborated the previous findings from the two other data sources; thus, the reviews served the purpose of triangulation.

The third stage focused on creating the aggregated dimensions based on similar second-order codes. When analysing the customer engagement data, the second-order codes regarding communication and feedback were grouped into an aggregated dimension labelled ‘Dialogue’ (Publication I). With regard to the value co-destruction data, an aggregated dimension labelled ‘Blaming’ was based on the similarity of the second-order codes concerning complaining and blaming (Publication IV). The identified aggregated dimensions and the second-order codes were compared with the existing customer engagement or value co-destruction literature, and based on the identified results and the literature, the aggregated dimensions were either reformulated or retained.
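Purely as an illustration of the resulting three-level structure, the sketch below encodes only the examples mentioned above (WOM and opinions under ‘Feedback’, communication and feedback under ‘Dialogue’, complaining and blaming under ‘Blaming’, and lack of trust or openness under ‘Inability to trust’). The dictionary layout and helper function are hypothetical and do not reproduce the full coding schemes used in the publications.

```python
from typing import Optional

# Illustrative sketch of the coding hierarchy described above, limited to the
# examples named in the text: first-order codes are grouped into second-order
# codes, which are aggregated into dimensions. Empty lists mark first-order
# codes that are not spelled out in the text.
customer_engagement = {
    "Dialogue": {                        # aggregated dimension (Publication I)
        "Communication": [],
        "Feedback": ["WOM", "opinions"], # first-order codes (Publication II)
    },
}

value_co_destruction = {
    "Blaming": {                         # aggregated dimension (Publication IV)
        "Complaining": [],
        "Blaming": [],
    },
}

# Second-order code from Publication IV whose aggregated dimension is not
# specified in the text above.
inability_to_trust = {"Inability to trust": ["lack of trust", "lack of openness"]}

def dimension_of(second_order_code: str, scheme: dict) -> Optional[str]:
    """Return the aggregated dimension a given second-order code belongs to."""
    for dimension, second_order_codes in scheme.items():
        if second_order_code in second_order_codes:
            return dimension
    return None

print(dimension_of("Feedback", customer_engagement))  # -> Dialogue
```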

This process followed the abductive data analysis method. Additionally, during the data analysis, the underlying theory that could explain the phenomenon of either customer engagement or value co-destruction was evaluated. For example, in the case of value co-destruction, the initial pre-understanding was that either service-dominant logic or service logic would be the underlying, explanatory theory, and the data analysis was conducted with this pre-assumption. However, during this phase, it became evident that neither logic was an appropriate match for value co-destruction because both lack a precise focus on the service situation. Thus, a ‘new’ theory was sought, and script theory was found to be a proper match for value co-destruction.

The results were then viewed through the lens of script theory, and the outcome of the abductive research approach was that value co-destruction emerges because of an actor’s inability or failure to adopt the correct script. The abductive data analysis process, using value co-destruction as an example, is depicted in Figure 5.

Figure 5. An example of the abductive research approach related to value co-destruction