In education terminology, rubric means "a scoring guide used to evaluate the quality of students' constructed responses". Rubrics usually contain evaluative criteria, quality definitions for those criteria at particular levels of achievement, and a scoring strategy. They are often presented in table format and can be used by teachers when marking, and by students when planning their work. Rubrics, when used for formative assessment purposes, have been shown to have a positive impact on students' learning.
A scoring rubric is an attempt to communicate expectations of quality around a task. In many cases, scoring rubrics are used to delineate consistent criteria for grading. Because the criteria are public, a scoring rubric allows teachers and students alike to evaluate work against criteria that can be complex and subjective. A scoring rubric can also provide a basis for self-evaluation, reflection, and peer review. It is aimed at accurate and fair assessment, fostering understanding, and indicating a way to proceed with subsequent learning and teaching. This integration of performance and feedback is called ongoing assessment or formative assessment.
Scoring rubrics typically:
- focus on measuring a stated objective (performance, behavior, or quality)
- use a range to rate performance
- contain specific performance characteristics arranged in levels indicating either the developmental sophistication of the strategy used or the degree to which a standard has been met.
Components of a scoring rubric
Scoring rubrics include one or more dimensions on which performance is rated, definitions and examples that illustrate the attribute(s) being measured, and a rating scale for each dimension. Dimensions are generally referred to as criteria, the rating scale as levels, and definitions as descriptors.
Herman, Aschbacher, and Winters distinguish the following elements of a scoring rubric:
- One or more traits or dimensions that serve as the basis for judging the student response
- Definitions and examples to clarify the meaning of each trait or dimension
- A scale of values on which to rate each dimension
- Standards of excellence for specified performance levels accompanied by models or examples of each level
Since the 1980s, many scoring rubrics have been presented in a graphic format, typically as a grid. Studies of scoring rubric effectiveness now consider the efficiency of a grid over, say, a text-based list of criteria.
Rubrics can be classified as holistic or analytic. Holistic rubrics integrate all aspects of the work into a single overall rating of the work. For example, "the terms and grades commonly used at university (i.e., excellent – A, good – B, average – C, poor – D, and weak – E) usually express an assessor’s overall rating of a piece of work. When a research article or thesis is evaluated, the reviewer is asked to express their opinion in holistic terms – accept as is, accept with minor revisions, require major revisions for a second review, or reject. The classification response is a weighted judgement by the assessor taking all things into account at once; hence, holistic. In contrast, an analytic rubric specifies various dimensions or components of the product or process that are evaluated separately. The same rating scale labels may be used as the holistic, but it is applied to various key dimensions or aspects separately rather than an integrated judgement. This separate specification means that on one dimension the work could be excellent, but on one or more other dimensions the work might be poor to average. Most commonly, analytic rubrics have been used by teachers to score student writing when the teacher awards a separate score for such facets of written language as conventions or mechanics (i.e., spelling, punctuation, and grammar), organisation, content or ideas, and style. They are also used in many other domains of the school curriculum (e.g., performing arts, sports and athletics, studio arts, wood and metal technologies, etc.). By breaking the whole into significant dimensions or components and rating them separately, it is expected that better information will be obtained by the teacher and the student about what needs to be worked on next." (Brown, Irving, & Keegan, 2014, p. 55).
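The structural difference between holistic and analytic rubrics can be sketched as data. This is an illustrative model only: the scale labels and writing dimensions below follow the examples in the passage, but the function and its name are hypothetical.

```python
# Illustrative sketch of the holistic/analytic distinction.
# Scale labels and dimensions follow the examples quoted above;
# the code structure itself is an assumption, not from the source.

# Holistic: one overall judgement on a single scale.
HOLISTIC_SCALE = ["weak", "poor", "average", "good", "excellent"]

# Analytic: the same scale applied separately to each dimension.
ANALYTIC_CRITERIA = ["conventions", "organisation", "content", "style"]

def analytic_profile(ratings: dict) -> dict:
    """Return per-dimension ratings; an analytic rubric keeps a
    separate judgement for every dimension instead of one overall mark."""
    missing = set(ANALYTIC_CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {missing}")
    return {c: ratings[c] for c in ANALYTIC_CRITERIA}

# A piece of work can be excellent on one dimension yet poor on another:
profile = analytic_profile({
    "conventions": "excellent",
    "organisation": "poor",
    "content": "good",
    "style": "average",
})
```

The holistic rating would collapse this profile into a single label; the analytic profile preserves the information about which dimension needs work next.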
Steps to create a scoring rubric
Scoring rubrics may help students become thoughtful evaluators of their own and others' work and may reduce the amount of time teachers spend evaluating student work. Here is a seven-step method for creating and using a scoring rubric for writing assignments:
- Have students look at models of good versus "not-so-good" work. A teacher should provide sample assignments of variable quality for students to review.
- List the criteria to be used in the scoring rubric and allow for discussion of what counts as quality work. Asking for student feedback during the creation of the list also allows the teacher to assess the students’ overall writing experiences.
- Articulate gradations of quality. These hierarchical categories should concisely describe the levels of quality (ranging from bad to good) or development (ranging from beginning to mastery). They can be based on the discussion of the good versus not-so-good work samples or immature versus developed samples. Using a conservative number of gradations keeps the scoring rubric user-friendly while allowing for fluctuations that exist within the average range ("Creating Rubrics").
- Practice on models. Students can test the scoring rubrics on sample assignments provided by the instructor. This practice can build students' confidence by teaching them how the instructor would use the scoring rubric on their papers. It can also aid student/teacher agreement on the reliability of the scoring rubric.
- Ask for self- and peer assessment.
- Revise the work on the basis of that feedback. As students are working on their assignment, they can be stopped occasionally to do a self-assessment and then give and receive evaluations from their peers. Revisions should be based on the feedback they receive.
- Use teacher assessment, which means using the same scoring rubric the students used to assess their work.
Etymology and history
The traditional meanings of the word rubric stem from "a heading on a document (often written in red — from Latin rubrica, red ochre, red ink), or a direction for conducting church services". Drawing on the second OED definition of the word, rubrics referred to the instructions on a test telling the test-taker how questions were to be answered.
In modern education circles, rubrics have recently come to refer to an assessment tool. The first usage of the term in this new sense is from the mid-1990s, but scholarly articles from that time do not explain why the term was co-opted. Perhaps rubrics are seen to act, in both cases, as metadata added to text to indicate what constitutes a successful use of that text. It may also be that the color of the traditional red marking pen is the common link.
As shown in the 1977 introduction to the International Classification of Diseases-9, the term has long been used in medicine as a label for diseases and procedures. The bridge from medicine to education occurred through the construction of "Standardized Developmental Ratings." These were first defined for writing assessment in the mid-1970s and used to train raters for New York State's Regents Exam in Writing by the late 1970s. That exam required raters to use multidimensional standardized developmental ratings to determine a holistic score. The term "rubrics" was applied to such ratings by Grubb (1981) in a book advocating holistic scoring rather than developmental rubrics. Developmental rubrics return to the original intent of standardized developmental ratings, which was to support student self-reflection and self-assessment as well as communication between an assessor and those being assessed. In this new sense, a scoring rubric is a set of criteria and standards typically linked to learning objectives. It is used to assess or communicate about product, performance, or process tasks.
One problem with scoring rubrics is that each level of fulfillment encompasses a wide range of marks. For example, if two students both receive a 'level four' mark on the Ontario system, one might receive an 80% and the other 100%. In addition, a small change in scoring rubric evaluation caused by a small mistake may lead to an unnecessarily large change in numerical grade. Adding further distinctions between levels does not solve the problem, because more distinctions make discrimination even more difficult. Both scoring problems may be alleviated by treating the definitions of levels as typical descriptions of whole products rather than the details of every element in them.
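The band-width problem above can be made concrete with a small sketch. The percentage bands below are illustrative assumptions modelled loosely on the Ontario-style levels mentioned in the text, not official values.

```python
# Sketch of the band-width problem: one rubric level covers a wide
# range of marks. The bands here are illustrative assumptions.
LEVEL_BANDS = {1: (50, 59), 2: (60, 69), 3: (70, 79), 4: (80, 100)}

def mark_range(level: int) -> tuple:
    """Return the (min, max) percentage a given rubric level can map to."""
    return LEVEL_BANDS[level]

# Two students both rated "level four" can differ by 20 percentage points:
lo, hi = mark_range(4)
spread = hi - lo  # 20

# A one-level slip caused by a small mistake moves the band floor by 10:
drop = mark_range(4)[0] - mark_range(3)[0]
```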
Scoring rubrics may also make marking schemes more complicated for students. Showing one mark may be inaccurate, as a perfect score in one section may matter little in the long run if that strand is not weighted heavily. Some students may also find it difficult to comprehend an assignment having multiple distinct marks, which makes this approach unsuitable for some younger children. In such cases it is better to incorporate the rubric into conversation with the child than to give a mark on a paper. For example, a child who writes an "egocentric" story (depending too much on ideas not accessible to the reader) might be asked what her best friend thinks of it (suggesting a move in the audience dimension to the "correspondence" level). Thus, when used effectively, scoring rubrics help students to improve their weaknesses.
Multidimensional rubrics also allow students to compensate for a lack of ability in one strand by improving another one. For instance, a student who has difficulty with sentence structure may still be able to attain a relatively high mark, if sentence structure is not weighted as heavily as other dimensions such as audience, perspective or time frame.
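The compensation effect can be shown as simple weighted arithmetic. The dimensions follow the example in the paragraph, but the specific weights and the four-level scale are hypothetical assumptions for illustration.

```python
# Illustrative sketch of compensatory weighting across rubric dimensions.
# Dimensions follow the example above; weights and levels are assumptions.
WEIGHTS = {"audience": 0.4, "perspective": 0.3,
           "time_frame": 0.2, "sentence_structure": 0.1}

def weighted_score(levels: dict, max_level: int = 4) -> float:
    """Combine per-dimension levels (1..max_level) into a 0-100 mark."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    raw = sum(WEIGHTS[d] * levels[d] for d in WEIGHTS)
    return 100 * raw / max_level

# Weak sentence structure (level 1) but strong performance elsewhere
# still yields a high mark: 0.4*4 + 0.3*4 + 0.2*4 + 0.1*1 = 3.7 of 4,
# which is about 92.5 out of 100.
mark = weighted_score({"audience": 4, "perspective": 4,
                       "time_frame": 4, "sentence_structure": 1})
```

Because `sentence_structure` carries only a tenth of the weight, the level-1 rating costs the student very little overall, which is exactly the compensation the paragraph describes.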
Another advantage of a scoring rubric is that it clearly shows what criteria must be met for a student to demonstrate quality on a product, process, or performance task.
Scoring rubrics can also improve scoring consistency. Grading is more reliable with a rubric than without one. Educators can refer to a rubric while scoring assignments to keep grading consistent between students. Teachers can also use rubrics to keep their scoring consistent with that of other teachers who teach the same class.
- Authentic assessment
- Concept inventory
- Educational assessment
- Educational technology
- Standards-based assessment
- Technology Integration
- Popham, James (October 1997). "What's Wrong - and What's Right - with Rubrics". Educational Leadership. 55 (2): 72–75.
- Dawson, Phillip (December 2015). "Assessment rubrics: towards clearer and more replicable design, research and practice". Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2015.1111294.
- Panadero, Ernesto; Jonsson, Anders (2013). "The use of scoring rubrics for formative assessment purposes revisited: A review". Educational Research Review. 9. doi:10.1016/j.edurev.2013.01.002.
- Herman, Joan; Aschbacher, Pamela; Winters, Lynn (January 1992). A Practical Guide to Alternative Assessment. Association for Supervision & Curriculum Development. ISBN 0871201976.
- Brown, G. T. L., Irving, S. E., & Keegan, P. J. (2014). An introduction to educational assessment, measurement, and evaluation: Improving the quality of teacher-based assessment (3rd ed.). Auckland, NZ: Dunmore Publishing. ISBN 9781927212097
- Goodrich, H. (1996). "Understanding Rubrics". Educational Leadership, 54(4), 14–18.
- Dirlam, D. K. (1980). Classifiers and cognitive development. In S. & C. Modgil (Eds.), Toward a Theory of Psychological Development. Windsor, England: NFER Publishing, 465-498
- Grubb, Mel (1981). Using Holistic Evaluation. Encino, CA: Glencoe Publishing Company, Inc.
- Jonsson, Anders; Svingby, Gunilla (2007). "The use of scoring rubrics: Reliability, validity and educational consequences". Educational Research Review. 2 (2): 130–144. doi:10.1016/j.edurev.2007.05.002.
- Flash, P. (2009) Grading writing: Recommended grading strategies. Retrieved Sep 17, 2011, from http://writing.umn.edu/tww/responding/grading.html
- Stevens, D. & Levi, Antonia J. (2013). Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Sterling, VA: Stylus Publishing.
- University of Minnesota, Center for Advanced Research on Language Acquisition (CARLA), Virtual Assessment Center. (n.d.). Creating Rubrics. Retrieved May, 2015, from http://www.carla.umn.edu/assessment/vac/improvement/p_6.html
- Winter, H. (2002). Using test results for assessment of teaching and learning. Chemical Engineering Education, 36, 188–190.