Writing assessment

From Wikipedia, the free encyclopedia

Writing assessment refers to an area of study that contains theories and practices that guide the evaluation of a writer’s performance or potential through a writing task. Writing assessment can be considered a combination of scholarship from Writing Theory and Measurement Theory within educational assessment.[1] Writing assessment can also refer to the technologies and practices used to evaluate student writing and learning.[2]

Contexts

Writing assessment began as a classroom practice during the first two decades of the 20th century, though high-stakes and standardized tests also emerged during this time.[3] During the 1930s, the College Board shifted from direct to indirect writing assessment because indirect tests were more cost-effective and were believed to be more reliable.[3] Starting in the 1950s, as students from more diverse backgrounds entered colleges and universities, administrators used standardized testing to decide where these students should be placed, what and how to teach them, and how to measure whether they had learned what they needed to learn.[4] The large-scale statewide writing assessments that developed during this time combined direct writing assessment with multiple-choice items, a practice that remains dominant across U.S. large-scale testing programs such as the SAT and GRE.[3] These assessments usually take place outside of the classroom, at the state and national levels. However, as more and more students were placed into courses based on their standardized test scores, writing teachers began to notice a conflict between what students were being tested on (grammar, usage, and vocabulary) and what the teachers were actually teaching (writing process and revision).[4] Because of this divide, educators began pushing for writing assessments that were designed and implemented at the local, programmatic, and classroom levels.[4][5] As writing teachers began designing local assessments, methods of assessment diversified to include timed essay tests, locally designed rubrics, and portfolios.

History

Because writing assessment is used in multiple contexts, the history of writing assessment can be traced through examining specific concepts and situations that prompt major shifts in theories and practices. Writing assessment scholars do not always agree about the origin of writing assessment.

In “Looking Back as We Look Forward: Historicizing Writing Assessment as a Rhetorical Act,” Kathleen Blake Yancey[4] offers a history of writing assessment by tracing three major shifts in the methods used to assess writing. She describes these shifts through the metaphor of overlapping waves, “with one wave feeding into another but without completely displacing waves that came before.” In other words, the theories and practices from each wave remain present in some current contexts, but each wave marks the prominent theories and practices of its time.

The first wave of writing assessment (1950–1970) sought objective tests with indirect measures of assessment. The second wave (1970–1986) focused on holistically scored tests in which students’ actual writing began to be assessed. The third wave (since 1986) shifted toward assessing collections of student work (i.e., portfolio assessment) and toward programmatic assessment.

Bob Broad, in What We Really Value,[6] points to the 1961 publication of Factors in Judgments of Writing Ability by Diederich, French, and Carlton as the birth of modern writing assessment. Diederich, French, and Carlton based much of their book on research conducted at the Educational Testing Service (ETS) over the previous decade. The book is an attempt to standardize the assessment of writing and, according to Broad, created a base of research in writing assessment.[7]

Major Concepts

Validity and Reliability

Yancey traces the major shifts in writing assessment by noting each wave’s swing toward or away from the concepts of validity and reliability.[8] Peggy O’Neill, Cindy Moore, and Brian Huot explain in A Guide to College Writing Assessment that reliability and validity are the most important terms in discussing best practices in writing assessment.[9]

In the first wave of writing assessment, the emphasis was on reliability, which concerns the consistency of a test.[10] In this wave, the central concern was to assess writing with the greatest predictability at the least cost and effort.

The shift toward the second wave marked a move toward considering principles of validity. Validity concerns a test’s appropriateness and effectiveness for its given purpose. Methods in this wave were chiefly concerned with a test’s construct validity: whether the test is an appropriate measure of what it purports to measure. Teachers began to see an incongruence between the material tests prompted in order to measure writing and the material teachers were actually asking students to write. Holistic scoring, championed by Edward M. White, emerged in this wave as a method in which students produce actual writing that is then scored as a measure of their writing ability.[11]

The third wave of writing assessment emerged with continued interest in the validity of assessment methods. This wave began to consider an expanded definition of validity that includes how portfolio assessment contributes to learning and teaching. In this wave, portfolio assessment emerged to emphasize theories and practices in Composition and Writing Studies such as revision, drafting, and process.

Direct and Indirect Assessment

Indirect writing assessments typically consist of multiple choice tests on grammar, usage, and vocabulary.[4] Examples include high-stakes standardized tests such as the ACT, SAT, and GRE, which are most often used by colleges and universities for admissions purposes. Other indirect assessments, such as Compass and Accuplacer, are used to place students into remedial or mainstream writing courses. Direct writing assessments, like the timed essay test, require at least one sample of student writing and are viewed by many writing assessment scholars as more valid than indirect tests because they are assessing actual samples of writing.[4] Portfolio assessment, which generally consists of several pieces of student writing written over the course of a semester, began to replace timed essays during the late 1980s and early 1990s. Portfolio assessment is viewed as being even more valid than timed essay tests because it focuses on multiple samples of student writing that have been composed in the authentic context of the classroom. Portfolios enable assessors to examine multiple samples of student writing and multiple drafts of a single essay.[4]

Writing Assessment as Technology

Methods

Methods of writing assessment vary depending on the context and type of assessment. The following is an incomplete list of writing assessments frequently administered:

Portfolio

Portfolio assessment is typically used to assess what students have learned at the end of a course or over a period of several years. Course portfolios consist of multiple samples of student writing and a reflective letter or essay in which students describe their writing and work for the course.[4][12][13][14] “Showcase portfolios” contain final drafts of student writing, and “process portfolios” contain multiple drafts of each piece of writing.[15] Both print and electronic portfolios can be either showcase or process portfolios, though electronic portfolios typically contain hyperlinks from the reflective essay or letter to samples of student work and, sometimes, outside sources.[13][15]

Timed Essay

Timed essay tests were developed as an alternative to multiple-choice, indirect writing assessments. They are often used to place students into writing courses appropriate for their skill level. These tests are usually proctored: testing takes place in a specific location where students respond to a prompt within a set time limit. The SAT and GRE both contain timed essay portions.

Rubric

A rubric is a tool used in writing assessment across several contexts. A rubric consists of a set of criteria or descriptions that guides a rater in scoring or grading a writer. The origins of rubrics can be traced to early 20th-century attempts in education to standardize and scale writing. Ernest C. Noyes argued in November 1912 for a shift toward more science-based assessment practices. One of the original scales used in education was developed by Milo B. Hillegas in A Scale for the Measurement of Quality in English Composition by Young People, commonly referred to as the Hillegas Scale. The Hillegas Scale and other such scales were used by administrators to compare the progress of schools.[16]

In 1961, Diederich, French, and Carlton of the Educational Testing Service (ETS) published Factors in Judgments of Writing Ability, a rubric compiled from a series of raters whose comments were categorized and condensed into five factors:

Ideas: relevance, clarity, quantity, development, persuasiveness

Form: organization and analysis

Flavor: style, interest, sincerity

Mechanics: specific errors in punctuation, grammar, etc.

Wording: choice and arrangement of words[17]
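A factor-based rubric like this one lends itself to simple programmatic scoring. The sketch below shows how per-factor ratings might be combined into a single weighted score; the 1–5 rating scale and the equal default weights are illustrative assumptions, not Diederich, French, and Carlton's actual scoring procedure.

```python
# Hypothetical sketch of aggregating rubric ratings. The factor names come
# from Diederich, French, and Carlton's five-factor rubric; the 1-5 scale
# and the weighting scheme are illustrative assumptions.
FACTORS = ["ideas", "form", "flavor", "mechanics", "wording"]

def score_essay(ratings, weights=None):
    """Combine per-factor ratings (1-5) into one weighted average score."""
    if weights is None:
        weights = {f: 1.0 for f in FACTORS}  # equal weights by default
    total_weight = sum(weights[f] for f in FACTORS)
    return sum(ratings[f] * weights[f] for f in FACTORS) / total_weight

ratings = {"ideas": 4, "form": 3, "flavor": 5, "mechanics": 2, "wording": 4}
print(score_essay(ratings))  # equal-weight mean of the five ratings
```

A rater (or a negotiated classroom rubric, as discussed below) could supply different weights to emphasize, say, ideas over mechanics.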

As rubrics began to be used in the classroom, teachers began to advocate for criteria to be negotiated with students so that students could stake a claim in how they would be assessed. Scholars such as Chris Gallagher and Eric Turley,[18] Bob Broad,[19] and Asao Inoue[20] (among many others) have argued that effective use of rubrics comes from local, contextual, and negotiated criteria.

Multiple-Choice Test

Multiple-choice tests contain questions about usage, grammar, and vocabulary. Standardized tests like the SAT, ACT, and GRE are typically used for college or graduate school admission. Other tests, such as Compass and Accuplacer, are typically used to place students into remedial or mainstream writing courses.

Automated Essay Scoring

Automated Essay Scoring (AES) is the use of non-human, computer-assisted assessment practices to rate, score, or grade writing tasks.
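Many AES systems work by extracting measurable features from an essay and feeding them to a statistical model trained on human-scored essays. The following is a deliberately minimal sketch of that idea; the surface features and the hand-set weights are invented for illustration and do not represent any real scoring engine.

```python
import re

# Illustrative sketch of feature-based automated essay scoring. Real AES
# systems use far richer linguistic features and models trained on large
# corpora of human-scored essays; everything below is a toy stand-in.
def extract_features(essay):
    """Compute a few surface features of the kind AES research draws on."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

# Invented weights standing in for a model fit to human raters' scores.
WEIGHTS = {"word_count": 0.01, "avg_word_length": 0.3,
           "avg_sentence_length": 0.05}

def predict_score(essay):
    """Linear combination of surface features; purely a toy model."""
    features = extract_features(essay)
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)

print(predict_score("Writing assessment evolved. Portfolios matter."))
```

Critics of AES note that surface features like these reward length and vocabulary rather than the rhetorical qualities human raters value, which is part of the ongoing debate over its validity.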

Race and Writing Assessment

Some scholars in writing assessment focus their research on the influence of race on performance in writing assessments. Scholarship in race and writing assessment seeks to study how categories of race and perceptions of race continue to shape writing assessment outcomes. Scholars in writing assessment recognize that racism in the 21st century is rarely explicit,[21] but point out that writing assessment practices can be implicitly racist. Nicholas Behm and Keith D. Miller, in “Challenging the Frameworks of Color-Blind Racism: Why We Need a Fourth Wave of Writing Assessment Scholarship,”[22] advocate for the recognition of another wave after the three that Yancey offers: a wave in which the intersections of race and writing assessment are brought to the forefront of assessment practices. As the authors explain, racial inequalities in writing assessment are typically justified with non-racial reasons.


References

  1. ^ Behizadeh, Nadia and George Engelhard Jr. "Historical View of the influences of measurement and writing theories on the practice of writing assessment in the United States" Assessing Writing 16 (2011) 189-211.
  2. ^ Huot, B. & Neal, M. (2006). Writing assessment: A techno-history. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of Writing Research (pp. 417-432). New York, NY: Guilford Press.
  3. ^ a b c Behizadeh, Nadia and George Engelhard Jr. "Historical View of the influences of measurement and writing theories on the practice of writing assessment in the United States" Assessing Writing 16 (2011) 189-211
  4. ^ a b c d e f g h Yancey, Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing Assessment as a Rhetorical Act.” College Composition and Communication. 50.3 (1999): 483-503. Web. 23 Feb. 2013.
  5. ^ Huot, Brian. (Re)Articulating Writing Assessment for Teaching and Learning. Logan, Utah: Utah State UP, 2002.
  6. ^ Broad, Bob. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing. Logan, UT: Utah State University Press, 2003. Print.
  7. ^ Diederich, P.G.; French, J. W.; Carlton, S. T. (1961) Factors in Judgments of Writing Ability. Princeton, NJ: Educational Testing Service
  8. ^ Yancey, Kathleen Blake. “Looking Back as We Look Forward”
  9. ^ O’Neill, Peggy, Cindy Moore, and Brian Huot. A Guide to College Writing Assessment. Logan, UT: Utah State University Press, 2009. Print.
  10. ^ Yancey, Kathleen Blake. "Looking Back as We Look Forward"
  11. ^ White, Edward M. "Holisticism." College Composition and Communication 35 (December 1984): 400-409.
  12. ^ Emmons, Kimberly. “Rethinking Genres of Reflection: Student Portfolio Cover Letters and the Narrative of Progress.” Composition Studies 31.1 (2003): 43-62.
  13. ^ a b Neal, Michael. Writing Assessment and the Revolution in Digital Texts and Technologies. NY: Teachers College, 2011.
  14. ^ White, Edward. “The Scoring of Writing Portfolios: Phase 2.” College Composition and Communication 56.4 (2005): 581-599.
  15. ^ a b Yancey, Kathleen. "Postmodernism, Palimpsest, and Portfolios: Theoretical Issues in the Representation of Student Work." ePortfolio Performance Support Systems: Constructing, Presenting, and Assessing Portfolios. Eds Katherine V. Wills and Rich Rice. Fort Collins, Colorado: WAC Clearinghouse. Web. 16 November 2013.
  16. ^ Turley, Eric D. and Chris Gallagher. "On the 'Uses' of Rubrics: Reframing the Great Rubric Debate" The English Journal Vol 97. No. 4. (Mar. 2008) pp 87-92.
  17. ^ Diederich, P.G.; French, J. W.; Carlton, S. T. (1961) Factors in Judgments of Writing Ability.
  18. ^ Turley, Eric D. and Chris Gallagher. "On the 'Uses' of Rubrics: Reframing the Great Rubric Debate"
  19. ^ Broad, Bob. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing
  20. ^ Inoue, Asao B. “Community-based Assessment Pedagogy.” Assessing Writing. 9 (2005): 208-38. Web. 23 Feb 2013.
  21. ^ Bonilla-Silva, Eduardo. Racism Without Racists: Color-Blind Racism and the Persistence of Racial Inequality in the United States. Lanham, MD: Rowman & Littlefield Publishers, Inc., 2006. Print.
  22. ^ Behm, Nicholas, and Keith D. Miller. “Challenging the Frameworks of Color-blind Racism: Why We Need a Fourth Wave of Writing Assessment Scholarship.” Race and Writing Assessment. Asao B. Inoue, and Mya Poe, eds. NYC: Peter Lang Publishing, 2012. 127-38. Print.
