Reliability in Assessment

Reliability is the degree to which an assessment tool produces stable and consistent results.

Types of Reliability

1. Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to a group of individuals.  The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time.

Example:

A test designed to assess student learning in psychology could be given to a group of students twice, with the second administration perhaps coming a week after the first.  The obtained correlation coefficient would indicate the stability of the scores.
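
In practice, this stability check is simply a correlation between the two sets of scores.  A minimal sketch in Python (the student scores below are invented purely for illustration):

```python
# Test-retest reliability: correlate the same students' scores from two
# administrations of the same test.  Scores here are hypothetical.
import numpy as np

time1 = np.array([78, 85, 62, 90, 74])  # first administration
time2 = np.array([80, 83, 65, 88, 70])  # same students, one week later

# Pearson correlation between the two administrations; values near 1.0
# indicate that scores are stable over time.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```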

2. Parallel forms reliability is a measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, knowledge base, etc.) to the same group of individuals.  The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.

Example:

If you wanted to evaluate the reliability of a critical thinking assessment, you might create a large set of items that all pertain to critical thinking and then randomly split the questions up into two sets, which would represent the parallel forms.
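
A minimal sketch of how such forms could be built, scored, and correlated, assuming a hypothetical person-by-item matrix of scored responses (the data are simulated, not real):

```python
# Parallel-forms reliability: randomly split an item pool into two forms,
# score each examinee on both forms, and correlate the two form scores.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 30 students: each student's underlying critical-thinking ability
# drives their chance of answering any given item correctly.
ability = rng.normal(0, 1, size=(30, 1))
responses = (rng.normal(0, 1, size=(30, 20)) < ability).astype(int)  # 1 = correct

# Randomly assign the 20 items to two 10-item forms (the "parallel forms").
item_order = rng.permutation(20)
form_a, form_b = item_order[:10], item_order[10:]

# Each student's total score on each form.
score_a = responses[:, form_a].sum(axis=1)
score_b = responses[:, form_b].sum(axis=1)

# A high correlation suggests the two forms can be used interchangeably.
r = np.corrcoef(score_a, score_b)[0, 1]
print(f"Parallel-forms reliability (Pearson r): {r:.2f}")
```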

3. Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions.  Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed.

Example:

Inter-rater reliability might be employed when different judges are evaluating the degree to which art portfolios meet certain standards.  Inter-rater reliability is especially useful when judgments can be considered relatively subjective.  Thus, this type of reliability check is more likely to be used when evaluating artwork than when evaluating math problems.
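
Agreement between two judges can be summarized with simple percent agreement or with Cohen's kappa, which corrects for agreement expected by chance.  A minimal sketch, assuming two hypothetical judges rating ten portfolios:

```python
# Inter-rater reliability: compare two raters' categorical judgments.
# The ratings below are invented for illustration.
rater1 = ["meets", "meets", "fails", "meets", "fails", "meets", "meets", "fails", "meets", "fails"]
rater2 = ["meets", "fails", "fails", "meets", "fails", "meets", "meets", "meets", "meets", "fails"]

n = len(rater1)

# Percent agreement: proportion of portfolios rated identically.
agreement = sum(a == b for a, b in zip(rater1, rater2)) / n

# Cohen's kappa corrects that figure for the agreement expected by chance,
# based on how often each rater uses each category.
categories = set(rater1) | set(rater2)
p_expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
kappa = (agreement - p_expected) / (1 - p_expected)

print(f"Percent agreement: {agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```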

4. Internal consistency reliability is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.

  • Average inter-item correlation is a subtype of internal consistency reliability.  It is obtained by taking all of the items on a test that probe the same construct (e.g., reading comprehension), determining the correlation coefficient for each pair of items, and finally taking the average of all of these correlation coefficients.  This final step yields the average inter-item correlation.
  • Split-half reliability is another subtype of internal consistency reliability. The process of obtaining split-half reliability is begun by “splitting in half” all items of a test that are intended to probe the same area of knowledge (e.g., World War II) in order to form two “sets” of items. The entire test is administered to a group of individuals, the total score for each “set” is computed, and finally, the split-half reliability is obtained by determining the correlation between the two total “set” scores.  A sketch of both computations follows this list.
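
Both computations reduce to correlations over the item responses.  A minimal sketch, assuming a hypothetical person-by-item matrix of scored responses (the data are simulated):

```python
# Internal consistency: average inter-item correlation and split-half reliability.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 examinees answering 8 items that probe one construct;
# responses depend partly on each examinee's underlying ability.
ability = rng.normal(0, 1, size=(50, 1))
items = (rng.normal(0, 1, size=(50, 8)) < ability).astype(float)

# Average inter-item correlation: correlate every pair of items, then average.
corr = np.corrcoef(items, rowvar=False)   # 8 x 8 item correlation matrix
pairs = corr[np.triu_indices(8, k=1)]     # the 28 unique item pairs
avg_inter_item = pairs.mean()

# Split-half reliability: split the items into two "sets" (here, odd- vs.
# even-numbered items), total each set, and correlate the two set scores.
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)
split_half = np.corrcoef(half1, half2)[0, 1]

print(f"Average inter-item correlation: {avg_inter_item:.2f}")
print(f"Split-half reliability:         {split_half:.2f}")
```

In practice, the split-half correlation is often adjusted upward with the Spearman-Brown formula, since each half is only half as long as the full test.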