
The aim of this thesis was to discover, first, whether the translation quality on Netflix is adequate and, second, whether the translation quality benefits from the added investment in Netflix Original content. To examine this, the conventions of subtitling and the way they affect the Finnish audience's viewing experience needed to be established. This was done through a three-part analysis: firstly, analyzing the translation errors discovered in the material and their effects on the viewing experience; secondly, analyzing the severity of the translation errors, the number of errors in each severity category, and whether the subtitling could be deemed acceptable according to O'Brien's (2012) quality evaluation model; and thirdly, analyzing through statistical significance testing whether the possible quality differences could be attributed to the origin of the content.

From the first part of the analysis, it can be noted that a large number of translation errors were discovered in the material: the total number of translation errors was 578, and 7.1 per cent of the translations included a translation error. In other words, on average, one in every 14 translations contained a translation error. The translation errors were divided into two umbrella categories based on their special features: overt and covert errors. The features used in this division were based on Vehmas-Lehto's (2005) article on the construction of translation errors and on the viewers' assumed ability to notice the errors, as discussed in Heikkilä's (2014) MA thesis. Within the umbrella categories, it was analyzed whether the error was something the viewer would be able to notice in the subtitling, or something that would not necessarily be noticed but could otherwise affect the viewing experience. As overt errors made up 62.6 per cent of all translation errors, it could be assumed that the viewer would be able to notice a translation error in 4.5 per cent of all translations. Additionally, as 37.4 per cent of the errors were covert errors, it could be concluded that 2.7 per cent of the translations included errors that may not be noticed by the viewers, but could still affect the viewing experience negatively.
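To make the relationship between these figures explicit, the sketch below reproduces the arithmetic with the reported values. It is an illustration only: the variable names are arbitrary, and small differences from the thesis's percentages (for example 4.4 versus 4.5 per cent) stem from the rounding of the reported rates.

    # A minimal sketch of the arithmetic behind the reported rates, using the
    # rounded figures given above; minor deviations are due to that rounding.
    total_errors = 578     # translation errors found in the material
    error_rate = 0.071     # share of translations containing an error
    overt_share = 0.626    # overt errors as a share of all errors
    covert_share = 0.374   # covert errors as a share of all errors

    print(f"one error per {1 / error_rate:.1f} translations")                   # ~14.1
    print(f"overt errors in {error_rate * overt_share:.1%} of translations")    # ~4.4%
    print(f"covert errors in {error_rate * covert_share:.1%} of translations")  # ~2.7%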

From the second part of the analysis, it could be concluded that due to the number and severity of the translation errors, the translation quality of the subtitling could not be deemed acceptable according to O'Brien's translation quality evaluation model. In the analysis, the translation errors were separated into three categories based on their severity: minor, major and critical errors. The categories were based on O'Brien's (2012) translation quality evaluation model, and the effects linked to each severity category were applied to the purpose of subtitling. Fortunately, the critical translation error category was left empty in this study. The quality of the subtitling was therefore determined from the minor and major translation errors. To determine whether the quality of the subtitling could be deemed acceptable, the threshold was set to the commonly accepted level discussed in O'Brien's (2012) article: the acceptable quality limit allowed, on average, one major translation error and four minor translation errors per thousand words. A quality assessment based only on the major translation errors would have deemed the two Netflix Original episodes a 'pass', but all three non-Original episodes a 'fail'.

However, when the minor translation errors were included in the evaluation, none of the episodes would have passed, as the lowest number of minor errors per thousand words in an episode was 15.6 and the highest was 24.4, exceeding the limit several times over in all five episodes. The overall quality of the subtitling of Gilmore Girls could therefore be deemed unacceptable. How this may affect Netflix's aim to remain the world's leading subscription service is best described by Mäkelä (2016), who notes that viewers may be reluctant to view foreign content with poor subtitling quality.
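For illustration, a minimal sketch of this per-episode acceptance check is given below. The threshold values follow the limits discussed in O'Brien (2012); the function name and the example error and word counts are hypothetical, chosen only to show how an episode with 15.6 minor errors per thousand words fails the check regardless of its major-error rate.

    # A hedged sketch of the per-episode acceptance check; the thresholds follow
    # O'Brien (2012), while the example counts below are hypothetical.
    MAJOR_LIMIT = 1.0   # allowed major errors per thousand words
    MINOR_LIMIT = 4.0   # allowed minor errors per thousand words

    def passes(major_errors: int, minor_errors: int, word_count: int) -> bool:
        per_thousand = 1000 / word_count
        return (major_errors * per_thousand <= MAJOR_LIMIT
                and minor_errors * per_thousand <= MINOR_LIMIT)

    # An episode of 5,000 words with 78 minor errors has 15.6 minor errors per
    # thousand words and therefore fails, even though its major-error rate passes.
    print(passes(major_errors=5, minor_errors=78, word_count=5000))  # False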

The third part of the analysis was conducted to examine whether the quality of subtitling improves when the subtitled content is produced by Netflix. The numerical data was run through a statistical hypothesis test to examine whether the differences in the number of translation errors per episode could be deemed statistically significant. The results of the test indicated that a correlation between the origin of the content and the quality of the subtitling could not be claimed. As the quality of the subtitling was deemed unacceptable in the previous part of the analysis, it can be assumed that the quality control the subtitling goes through is not as rigorous as one would expect. Even though there are specific guidelines on everything from overall conventions to seemingly smaller aspects, such as indicating that a sentence continues in the following caption (discussed in subsection 5.1.2.4.), adherence to the guidelines appears to go unmonitored. However, clear improvements have been made in order to meet the subtitling quality expectations, and the quality control would therefore be expected to follow.
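As an illustration of this step, the sketch below runs a two-sample t-test on per-episode error counts using SciPy. This section does not restate which hypothesis test was used in the thesis, so the choice of test and the per-episode counts (which merely sum to the reported 578 errors) are assumptions made for demonstration purposes only.

    # Illustrative only: a two-sample t-test comparing hypothetical per-episode
    # error counts for Netflix Original vs. non-Original episodes.
    from scipy import stats

    original_errors = [105, 112]           # hypothetical counts, 2 Original episodes
    non_original_errors = [118, 121, 122]  # hypothetical counts, 3 non-Original episodes

    t_stat, p_value = stats.ttest_ind(original_errors, non_original_errors)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A p-value above the chosen significance level (e.g. 0.05) means the
    # difference cannot be claimed to be statistically significant.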

As has been previously discussed, even though poor subtitling quality is often unfairly blamed on the translator, the reasons may lie deeper in the production process (discussed in subsection 2.1.3.). This is further supported by the fact that despite the number of translation errors found in the material of this study, each of the episodes contained a number of well-executed translation solutions, indicating that the erroneous translations were not necessarily caused by a lack of competence but rather by a lack of revision. The lack of revision can be linked to the 'quantity over quality' mentality often discussed in studies on subtitling production processes (for example, Abdallah 2012 and Lång 2013, discussed in subsection 2.1.3.), where AV translators are under pressure to produce subtitling both fast and cheaply. One aspect of the production process that inevitably rises to the top is money.

If and when AV translators are paid as little as is claimed (Mäkelä 2016), it deeply affects the quality of subtitling in many respects. When translators do not receive adequate compensation for their work, a portion of the professional translators is driven away from the field; additionally, the amount of work the remaining translators must take on to reach a sufficient wage level increases, which in turn decreases the amount of time a translator can dedicate to ensuring the quality of their work (Lång 2013). This is in clear conflict with Netflix's goal of increasing its subscriber base, as, according to Mäkelä (2016), audiences are reluctant to view poorly subtitled foreign content. As Mäkelä (ibid.) further ponders, why save a few hundred dollars per episode and receive inadequate subtitling quality, when the original investment in the content is hundreds or even thousands of times larger? Therefore, if the quality of subtitling does not rise to an acceptable level through Netflix's internal improvements, further investigation should be directed at the procedures in the production process that result in the unacceptable level of quality.
