9. Discussion

9.1 Methodological considerations

9.1.2 Infection surveillance and definition for infected knee

9.1.2.1 Clinical studies

In the single center series the local hospital infection register provided a readily available source for data on infectious complications. The diagnostic criteria (CDC) and surveillance methods (NNIS) applied in the present study are widely used in infection surveillance programs (de Boer et al. 2001, Gastmeier et al. 2005, Huotari 2007) and have been used in clinical studies as well (Saleh et al. 2002b, Babkin et al. 2007, Chesney et al. 2008).

To improve sensitivity in case detection, the records of the local hospital infection register were cross-referenced with the records of the hospital administration database and the Tekoset register. This technique revealed six new cases (25% of all infected knee replacements). In other words, one fourth of infected knee replacements would have remained unidentified if the local hospital infection register had been the only data source. This result concurs with earlier studies (Cadwallader et al. 2001, Curtis et al. 2004, Huotari et al. 2007a) suggesting that not even prospective infection surveillance necessarily identifies all PJI.
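Expressed as a simple calculation, the six additional cases imply that the total number of infected knee replacements in Study I was 6 / 0.25 = 24, and that the sensitivity of the local hospital infection register used alone would have been

\[
\text{sensitivity}_{\text{local register}} = \frac{24 - 6}{24} = \frac{18}{24} = 75\%.
\]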

PJI cannot be identified from the hospital infection register unless they have been entered there by the out-patient clinic or ward staff. According to the present and previous results (Huotari et al. 2007a), reporting appears to be a vulnerable point in the registration process. The reasons for this probably lie in local practices, which may differ between hospitals.

Accurate case detection is essential because, with the present low infection rates, missing even a few infections has a considerable proportional effect (each infection accounted for 4% of the total number of infections in Study I). It is possible that studies which have relied on a single data source (de Boer et al. 2001, Peersman et al. 2001, Huotari et al. 2006, Phillips et al. 2006, Chesney et al. 2008) have underestimated the true burden of infected knee replacements. Collecting data from all relevant sources and manually confirming all suspected cases seems to be the most sensitive way to detect postoperative infections.

The postoperative stay in hospital has shortened considerably over the last decade (Study II), and surgical site infections usually occur after discharge. These infections are identified either in post-discharge surveillance or upon readmission (de Boer et al. 2001, Huotari et al. 2006). The rigor and quality of post-discharge surveillance may dramatically affect the reported infection rate and partly explain the differences in postoperative SSI rates between hospitals and countries (Huotari 2007).

Prosthetic joint infections are usually detected upon readmission or at a follow-up visit (Huotari et al. 2006). As the only unit performing joint replacement surgery within the hospital district area and the referral center for prosthetic joint complications, the hospital of the single center series has probably been successful in detecting cases of infected knee replacement.

The reliability and reproducibility of CDC criteria have been questioned (Wilson et al. 2004). The criticism is related in particular to superficial infections. In the case of deep infections, there is less variation in diagnostic accuracy (Wilson et al. 2004). The problem, instead, is to distinguish between deep incisional infection and organ/space infection (Huotari et al. 2007a).

From the above it can be concluded that any changes in the interpretation and application of the diagnostic criteria or surveillance protocols, or in treatment and follow-up practices, are directly reflected in the detected PJI rate (Huotari 2007). Attention should be paid to the methodology and definitions when comparing infection rates reported in different studies and hospitals. When surveillance practices are kept constant, however, analysis of trends and comparisons with benchmark values are possible and, in the best case, motivate attempts to improve infection control.

9.1.2.2 Register-based studies

Arthroplasty registers regard revision arthroplasty (removal or revision of one or more of the prosthesis components) as the endpoint of follow-up (Robertsson et al. 1999, Puolakka et al. 2001, Espehaug et al. 2006, Robertsson 2007). Although revision arthroplasty is a simple, unambiguous and easily obtainable definition of prosthesis failure, certain problems have been related to its use as a measure of outcome (Robertsson 2007).

As regards infected knee replacement, one of the major concerns is the variety of different treatment approaches (see section 5.6, p. 53). Reoperations in which a new prosthesis is not implanted are poorly reported and registered in arthroplasty registers. In the validation study of the Swedish Knee Arthroplasty Register (Robertsson et al. 1999), 80% of reoperations were registered correctly, but the reoperations most frequently missed were amputation, resection arthroplasty and arthrodesis. A similar result was obtained in Norway, where the arthroplasty register detected only 62% of resection arthroplasties (Espehaug et al. 2006).

The present study (Study II) shows that similar problems are encountered in the Finnish Arthroplasty Register: only revision total knee arthroplasties (92%) and secondary patellar resurfacing procedures (85%) were recorded with acceptable accuracy. This is consistent with a recent Finnish study in which the register captured only 18 of the 60 infected knee replacements identified by combining data from three Finnish health registers (Huotari et al. 2007b).

At worst, ignoring certain types of reoperations leads to underestimation of the postoperative complication rate (Espehaug et al. 2006). Furthermore, changes in treatment practices (e.g. increased popularity of debridement with retention of components in the management of early acute infections) may bias arthroplasty register-based analyses regarding the infection rate if the probability of an infection being detected depends on its type and etiology. If the first reoperation is followed by a subsequent revision knee replacement with implantation of a new prosthesis (e.g. in two-stage revisions) – which is often the case in actual practice – missing the first reoperation has less dramatic consequences and only delays registration of the failure.

This study (Studies II and III) attempted to overcome the above-mentioned problems by supplementing the Finnish Arthroplasty Register data with a search of the Hospital Discharge Register for several surgical reoperations. The decision to ignore hospitalization periods lacking surgical procedure codes may have led to underestimation of the rate of infected knee replacement. However, in most cases removal of the infected prosthesis is required. Thus, cases in which the diagnosis code (T84.5) is recorded in the Hospital Discharge Register without an associated surgical procedure code do not necessarily represent true PJI. This hypothesis could explain the relatively high PJI rate (1.9%) reported in another recent Finnish study using the same data sources (Remes et al. 2007).

The side of the operated joint was infrequently recorded in the Hospital Discharge Register. When corresponding records in the arthroplasty register and the side of the operated joint were unavailable, the linkage between a reoperation and the preceding knee replacement was based on the assumption that infectious failures occur early rather than late after the index procedure (see Study II for details). It is acknowledged that this technique may cause overestimation of the rate of early reoperations. However, most reoperations (79%) could be linked to the preceding surgeries reliably with the help of arthroplasty register records. In addition, the infection rates calculated using Hospital Discharge Register data alone and the endpoint data combined from the two registers were similar to the infection rate based on arthroplasty register data alone (Study II). Therefore, the methodology used does not seem to have significantly biased the present results.
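As an illustration only, the following sketch shows one possible reading of this linkage rule; the function name, data structure and dates are hypothetical and do not reproduce the actual Study II implementation.

# Illustrative sketch, not the Study II implementation: when the side of the
# joint is unknown, link the reoperation to the preceding knee replacement
# with the shortest interval, reflecting the assumption that infectious
# failures occur early rather than late after the index procedure.
from datetime import date

def link_reoperation(reop_date, index_operations):
    # Keep only knee replacements performed before (or on) the reoperation date.
    preceding = [op for op in index_operations if op["date"] <= reop_date]
    if not preceding:
        return None  # no plausible index procedure on record
    # Choose the most recent preceding replacement, i.e. the shortest interval.
    return min(preceding, key=lambda op: (reop_date - op["date"]).days)

# Hypothetical patient with bilateral knee replacements on different dates:
operations = [
    {"side": "right", "date": date(2001, 3, 14)},
    {"side": "left", "date": date(2004, 9, 2)},
]
linked = link_reoperation(date(2005, 1, 20), operations)
print(linked["side"])  # -> "left", the replacement closest in time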

Earlier studies estimating the applicability of administrative register data have reported sensitivities ranging from 35% to 61% for identifying SSI (Romano et al. 2002, Curtis et al. 2004, Sherman et al. 2006). Lack of appropriate complication codes has been one of the main factors explaining the poor sensitivity, and this was also the case in the Hospital Discharge Register (Study II). By contrast, in an Australian study where register entries were made by professional clinical coders, the sensitivity in case detection was similar for administrative register data and infection surveillance data (88% and 84%, respectively) (Cadwallader et al. 2001). The variation in the reported sensitivities is probably attributable to the quality of the registration process and differences in reporting activity. Lack of appropriate diagnosis codes leads to misclassification of PJI as aseptic failures and thereby to underestimation of the true infection rate.

The registers used here lacked the microbiological data that would have enabled case confirmation. This probably has little relevance, as it is the decision to initiate treatment that has clinical consequences and leads to resource utilization, even in clinically unclear situations. Ideally, the endpoint events should have been verified against patient records. However, this study was not intended to validate either of the registers, and therefore case verification was not carried out.

Nevertheless, the reoperations due to infection identified in this study probably represent true cases. The two registers were in most cases concordant in classifying the reasons for reoperations (infection vs. aseptic failure). In earlier studies, the positive predictive values of administrative registers have been relatively good (79–98%) despite compromised sensitivity (Cadwallader et al. 2001, Romano et al. 2002, Curtis et al. 2004).
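For reference, the sensitivity and positive predictive value discussed here can be expressed in terms of true positives (TP), false negatives (FN) and false positives (FP):

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{positive predictive value} = \frac{TP}{TP + FP}.
\]

A register with poor sensitivity misses many true PJI, whereas a high positive predictive value means that the cases it does flag are usually genuine.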

By combining data from three Finnish registers, Huotari and associates (2007b) estimated that the rate of infected knee replacement is around 1.3% in Finland. This figure was considerably higher than that observed in any one of the three registers alone. In the national register series, more reoperations due to infection were detected when the combined endpoint data were used, yet the difference compared with the rates derived from the Finnish Arthroplasty Register or the Hospital Discharge Register alone was not statistically significant (Study II). Combining data from different sources appears to be an effective way to improve sensitivity in case detection, just as it was in the single center series.

The problems related to the recording of postoperative infections in the administrative health registers have less effect in risk factor analyses. The infections that have remained unidentified or misclassified fall into the comparison group (no infection), which decreases the number of case patients and causes heterogeneity within the comparison group. This may lead to false negative, rather than false positive, results in regression analyses. All available follow-up data were used in the national register series to maximize statistical power, even though restricting the follow-up to one year might have been reasonable in the analysis of perioperative and provider-related risk factors.

Finally, it is emphasized that the incidences and hazard ratios based on the national register series data refer to reoperations due to infection, not to postoperative infections per se. Infections in patients who are not eligible for or are unwilling to undergo surgery, as well as cases treated conservatively (e.g. with long-term suppressive antibiotics), are not captured. Thus, the infections treated surgically do not represent a random sample of all infectious cases.