Modern assessment methods tend to use rubrics to describe student performance. A rubric is a scoring method that lists the criteria for a piece of work, or “what counts” (for example, purpose, organization, details, voice, and mechanics are often what count in a piece of writing); it also articulates gradations of quality for each criterion, from excellent to poor. Although this sense of “rubric” is not found in most dictionaries, the term has come to be widely accepted, and so we also use it in this book.
Perkins et al. (1994) provide an example of rubric scoring for student inventions, listing the criteria and gradations of quality for verbal, written, or graphic reports on student inventions. This is shown in the succeeding figure as a prototype of rubric scoring. The rubric lists the criteria in the column on the left: the report must explain (1) the purposes of the invention, (2) the features or parts of the invention and how they help it serve its purposes, (3) the pros and cons of the design, and (4) how the design connects to other things past, present, and future. The rubric could easily include criteria related to presentation style and effectiveness, the mechanics of written pieces, and the quality of the invention itself. The four columns to the right of the criteria describe varying degrees of quality, from excellent to poor.
There are many reasons for the popularity of rubric scoring in the Philippine school system. First, rubrics are useful tools for both teaching and the evaluation of learning outcomes. They have the potential to improve student performance, as well as to monitor it, by clarifying teachers’ expectations and by guiding students on how to meet those expectations.
Second, rubrics help students learn to judge and evaluate the quality of their own work in relation to the work of other students. In several experiments involving the use of rubrics, students progressively became more aware of the problems in their own solution to a problem and of the problems inherent in the solutions of other students. In other words, rubrics increase students’ sense of responsibility and accountability.
Third, rubrics are efficient and tend to require less of the teacher’s time in evaluating student performance. Teachers often find that by the time a piece has been self- and peer-assessed against a rubric, they have little left to say about it. When they do have something to say, they can often simply circle an item in the rubric rather than struggle to explain the flaw or strength they have noticed and to figure out what improvements to suggest. Rubrics thus provide students with more informative feedback about their strengths and the areas needing improvement.
Finally, a rubric scoring guide is easy to understand and construct. Most of the items in a rubric scoring guide are self-explanatory and require no further help from outside experts.
In designing a rubric scoring guide, students need to be actively involved. The following steps are suggested for actually creating a rubric:
- Survey models — Show students examples of good and not-so-good work. Identify the characteristics that make the good ones good and the bad ones bad.
- Define criteria — From the discussions on the models, identify the qualities that define good work.
- Agree on the levels of quality — Describe the best and worst levels of quality, then fill in the middle levels based on your knowledge of common problems and the discussion of not-so-good work.
- Practice on models — Using the agreed criteria and levels of quality, evaluate the models presented in step 1 together with the students.
- Use self- and peer-assessment — Give students their task. As they work, stop them occasionally for self- and peer-assessment.
- Revise — Always give students time to revise their work based on the feedback they receive in step 5.
- Use teacher assessment — Use the same rubric students used to assess their work yourself.
Tips in Designing Rubrics
Perhaps the most difficult challenge is to use clear, precise, and concise language. Vague terms like “creative” and “innovative” need to be avoided; if a rubric is to teach as well as evaluate, such terms must be defined for students. Instead, use words that convey ideas that can be readily observed. Patricia Crosby and Pamela Heinz, both seventh grade teachers (cited in Andrade, 2007), solved this problem in a rubric for oral presentations by actually listing concrete ways in which students could meet the criterion. This approach gives students valuable information on how to begin a talk and avoids the need to define elusive terms like “creative.”
Specifying the levels of quality can also be very challenging. Spending time on the criteria helps, but in the end what comes out is often subjective. One clever technique graduates the quality levels through the responses “Yes,” “Yes, but,” “No, but,” and “No.” For example, Figure 4 shows part of a rubric for evaluating a scrapbook that documents a story.
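As an illustrative sketch only, the graduated-response technique can be thought of as mapping each of the four responses to a numeric level, which then yields a per-criterion rating and a total. The criterion names below are invented examples for a hypothetical scrapbook rubric, not taken from Figure 4:

```python
# Map the graduated responses to numeric quality levels.
QUALITY_LEVELS = {
    "Yes": 4,       # criterion fully met
    "Yes, but": 3,  # met, with minor lapses
    "No, but": 2,   # not met, though partly attempted
    "No": 1,        # not met at all
}

def score_rubric(ratings):
    """Convert per-criterion responses into numeric levels and a total score."""
    levels = {criterion: QUALITY_LEVELS[answer]
              for criterion, answer in ratings.items()}
    return levels, sum(levels.values())

# Hypothetical ratings for one student's scrapbook.
ratings = {
    "Tells the story accurately": "Yes",
    "Pictures support the text": "Yes, but",
    "Pages are neatly organized": "No, but",
}
levels, total = score_rubric(ratings)
print(levels)
print(total)  # 4 + 3 + 2 = 9
```

The point of the four graduated responses is that each level is anchored to an observable judgment rather than a vague adjective, which is what makes the numeric conversion defensible.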
Rubrics are scales that differentiate levels of student performance. They contain the criteria that must be met by the student and the judgment process that will be used to rate how well the student has performed. An exemplar is an example that delineates the desired characteristics of quality in ways students can understand. Both are important parts of the assessment process.
Well-designed rubrics include:
- performance dimensions that are critical to successful task completion;
- criteria that reflect all the important outcomes of the performance task;
- a rating scale that provides a usable, easily-interpreted score; and
- criteria that reflect concrete references, in clear language understandable to students, parents, and other teachers.
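One item in the checklist above is a rating scale that provides a usable, easily interpreted score. As a minimal sketch, assuming a four-point scale with the common descriptors Excellent/Good/Fair/Poor (the numbers and labels here are illustrative, not prescribed by the text), per-dimension ratings can be averaged and mapped back to a single descriptor:

```python
from statistics import mean

# Hypothetical four-point scale and descriptors for illustration.
DESCRIPTORS = {4: "Excellent", 3: "Good", 2: "Fair", 1: "Poor"}

def overall_descriptor(scores):
    """Average the per-dimension scores and map to the nearest descriptor."""
    return DESCRIPTORS[round(mean(scores))]

print(overall_descriptor([4, 3, 4]))  # mean 3.67 rounds to 4 -> "Excellent"
print(overall_descriptor([2, 2, 3]))  # mean 2.33 rounds to 2 -> "Fair"
```

Reporting one descriptor alongside the per-dimension ratings keeps the score easy to interpret while still showing which dimensions need improvement.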
In summary, to design problem-based tests we must ensure that both processes and end results are assessed. The tests should be designed carefully enough that proper scoring rubrics can be constructed, so that concerns about subjectivity in performance-based tests are addressed. Indeed, this needs to be done in any case if scoring is to be automated so that performance-based testing can be used widely.