Intra- and inter-assay concordance

A good diagnostic method exhibits strong intra- and inter-assay agreement.

To evaluate this agreement, it is not enough to analyze the percentage of concordant results, because that percentage does not account for the agreement expected by chance alone. Cohen's kappa index addresses this problem.

However, Cohen's kappa index alone is appropriate only if the marginal totals of the 2x2 table are relatively balanced; if the prevalence is very high or very low, kappa may indicate a low level of reliability even when the observed proportion of agreement is quite high. To address this paradox, other values have to be evaluated along with the index: the prevalence index, the bias index, and the prevalence-adjusted bias-adjusted kappa (PABAK) for two raters, which together characterize the extent of the inter-rater reliability in an appropriate context.

Contingency table (2x2) for the evaluation of Cohen's kappa index

                                        Result of the technique (technician 1)
                                        Positive    Negative    Total
Result of the technique     Positive    a           b           a+b
(technician 2)              Negative    c           d           c+d
                            Total       a+c         b+d         N

The kappa index is calculated as (observed agreement - expected agreement) divided by (1 - expected agreement), where the observed agreement is (a+d)/N and the expected agreement is [(a+b)x(a+c) + (c+d)x(b+d)]/(NxN).
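A minimal sketch of this calculation in Python; the function name and the example cell counts are illustrative assumptions, not taken from the text:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 table with cells a, b, c, d (N = a+b+c+d)."""
    n = a + b + c + d
    observed = (a + d) / n                                      # observed agreement: (a+d)/N
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical example: 40 concordant positives, 45 concordant negatives,
# and 8 + 7 discordant results between the two technicians.
print(round(cohens_kappa(40, 8, 7, 45), 3))  # ~0.699
```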

In general, the following criteria, based on the interpretation of Landis and Koch (1977), are used:

Kappa index      Agreement
< 0.00           Less than chance
0.00 – 0.20      Slight
0.21 – 0.40      Fair
0.41 – 0.60      Moderate
0.61 – 0.80      Substantial
0.81 – 1.00      Almost perfect
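As an illustrative aid, these categories can also be assigned in code; this small Python helper simply mirrors the thresholds in the table above (the function name is hypothetical):

```python
def landis_koch_category(kappa):
    """Map a kappa value to the Landis and Koch (1977) agreement category."""
    if kappa < 0.00:
        return "Less than chance"
    if kappa <= 0.20:
        return "Slight"
    if kappa <= 0.40:
        return "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Substantial"
    return "Almost perfect"

print(landis_koch_category(0.699))  # "Substantial"
```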

The prevalence index is calculated as (a-d)/N.

The bias index is calculated as (b-c)/N.

The PABAK index is calculated as [2x(a+d)/N] - 1.
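A short Python sketch of these three indices, using the same 2x2 cell notation; the function name and the example counts are hypothetical, chosen to illustrate the kappa paradox described above:

```python
def agreement_indices(a, b, c, d):
    """Prevalence index, bias index and PABAK for a 2x2 table (cells a, b, c, d)."""
    n = a + b + c + d
    prevalence_index = (a - d) / n       # (a-d)/N
    bias_index = (b - c) / n             # (b-c)/N
    pabak = 2 * (a + d) / n - 1          # [2x(a+d)/N] - 1
    return prevalence_index, bias_index, pabak

# Hypothetical high-prevalence example: 90 concordant positives, no concordant
# negatives and 5 + 5 discordant results. Observed agreement is 0.90 and kappa
# is roughly -0.05, yet PABAK is 0.80, illustrating the paradox described above.
print(agreement_indices(90, 5, 5, 0))  # (0.9, 0.0, 0.8)
```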