
To begin, each researcher coded all 84 comments from 2014 to gauge the quality of the coding methodology. These 84 comments were then tested for intercoder reliability and received acceptable Cohen's Kappa scores, so the team moved on to the bulk of the data set.
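
As background, Cohen's Kappa measures agreement between two coders while correcting for the agreement expected by chance. Below is a minimal sketch of this kind of pilot check, assuming scikit-learn is available; the binary codes are invented for illustration, not the study's actual data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary codes (1 = theme present, 0 = absent) assigned by
# two researchers to the same pilot comments; the real input would be
# each pair of researchers' codes for the 84-comment 2014 sample.
coder_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
coder_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

# Chance-corrected agreement between the two coders for one variable.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's Kappa: {kappa:.2f}")
```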

Each of the three researchers was then randomly assigned 653 comments, with 226 overlapping cases used to assess intercoder reliability. All but two MVP-specific coded variables demonstrated high intercoder reliability via Cohen's Kappa (a score above 0.7 is generally accepted as reliable). Those two variables, Electricity and Corporate Overreach, were still included in the data analysis but should be interpreted with lower confidence, as they were only moderately reliable. Of the non-MVP-specific coded variables, only Politics had strong reliability. Two variables, Strong Profanity and Hypocrisy, were dropped from the data set due to low reliability scores.
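
As a sketch of how such cutoffs might be applied, assuming the conventional bands of above 0.7 for strong reliability and 0.4 to 0.7 for moderate reliability; the kappa values below are placeholders, not the study's actual scores.

```python
# Hypothetical per-variable kappa scores. The cutoffs (>0.7 reliable,
# 0.4-0.7 moderate, <0.4 dropped) are assumed conventions here, not the
# study's published thresholds.
kappas = {"Electricity": 0.62, "Corporate Overreach": 0.58,
          "Politics": 0.81, "Strong Profanity": 0.31, "Hypocrisy": 0.28}

for variable, k in kappas.items():
    if k > 0.7:
        status = "reliable"
    elif k >= 0.4:
        status = "moderate -- interpret with caution"
    else:
        status = "dropped from the data set"
    print(f"{variable}: kappa={k:.2f} ({status})")
```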


The Cohen's Kappa and percentage accuracy scores for the 226-comment reliability sample are shown in the chart below. The bottom row, "accepted reliability," is an average across all variables retained in the data set (all except Strong Profanity and Hypocrisy).
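
For clarity, "percentage accuracy" here is simple percent agreement, and an averaged row like "accepted reliability" is a plain mean over the retained variables. A hedged sketch of both computations follows, using invented data for the 226-comment overlap sample.

```python
import numpy as np

# Two coders' labels for one variable across the 226 overlap comments
# (illustrative random data, not the study's codes).
rng = np.random.default_rng(0)
coder_a = rng.integers(0, 2, size=226)
coder_b = coder_a.copy()
coder_b[:20] = 1 - coder_b[:20]  # inject some disagreement

# Percent agreement: share of comments where the two coders match.
percent_agreement = np.mean(coder_a == coder_b) * 100
print(f"Percent agreement: {percent_agreement:.1f}%")

# "Accepted reliability": mean kappa over the variables kept in the
# data set (placeholder scores).
accepted_kappas = [0.81, 0.62, 0.58]
print(f"Average kappa (accepted variables): {np.mean(accepted_kappas):.2f}")
```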

The Cohen's Kappa scores for the two excluded variables, Strong Profanity and Hypocrisy, showed unacceptable reliability.

[Chart: MVP-specific categories]
