Leslie Cooper Perelman
Los Angeles, California, U.S.A.
Education: University of California, Berkeley; UMass Amherst
Known for: Criticism of standardized testing
Les Perelman is a research affiliate at the Massachusetts Institute of Technology (MIT). Perelman taught writing and composition at MIT, where he served as Director of Writing Across the Curriculum and as an Associate Dean of Undergraduate Education. He was a member of the executive committee of the Conference on College Composition and Communication and Co-Chair of its Committee on Assessment.
Perelman also taught in and directed writing programs at Tulane University and the University of Southern California.
Criticism of the SAT Writing Section
In a 2005 study of sample and graded essays provided by the College Board as references for the writing portion of the SAT, Perelman reported a high correlation between the length of an essay and the score it received. He also noted that essays were not penalized for factual inaccuracies.
Criticism of automated scoring
In 2012, Perelman demonstrated that long, pretentious, incoherent essays could receive higher scores from e-Rater, the ETS automated scoring engine, than well-written essays.
In 2014, Perelman collaborated with students at MIT and Harvard to develop the "Basic Automatic B.S. Essay Language" (BABEL) Generator. The nonsense essays generated by BABEL are claimed to score well when graded by automated essay scoring (AES) systems. Automated graders, Perelman argues, "cannot read meaning, and they cannot check facts. More to the point, they cannot tell gibberish from lucid writing." Perelman's work is cited by the National Council of Teachers of English (NCTE) in its Position Statement on Machine Scoring, which expresses similar concerns about the limitations of AES:
Computer scoring systems can be "gamed" because they are poor at working with human language, further weakening the validity of their assessments and separating students not on the basis of writing ability but on whether they know and can use machine-tricking strategies.