Readability

From Wikipedia, the free encyclopedia
"Code readability" redirects here. For further information about this topic, see Computer programming#Readability of source code.

Readability is the ease with which text can be read and understood. Various factors to measure readability have been used, such as "speed of perception," "perceptibility at a distance," "perceptibility in peripheral vision," "visibility," "the reflex blink technique," "rate of work" (e.g., speed of reading), "eye movements," and "fatigue in reading."[1]

Readability is distinguished from legibility, which is a measure of how easily individual letters or characters can be distinguished from each other. Readability can also describe the ease with which computer program code can be read by humans, for example through embedded documentation.

Definition

Readability has been defined in various ways, e.g. by The Literacy Dictionary,[2] Jeanne Chall and Edgar Dale,[3] G. Harry McLaughlin,[4] and William DuBay.[5]

Easy reading helps learning and enjoyment, so what we write should be easy to understand.[6]

While many writers and speakers since ancient times have used plain language, the 20th century brought a much stronger focus on reading ease. Much of the research has centered on matching texts to readers' skills, producing successful formulas used in research, government, teaching, publishing, the military, medicine, and business, and in many languages.[7][8] By the year 2000, professional journals had published over 1,000 studies on the validity and merit of readability formulas.[9] The study of reading extends beyond teaching: research has shown that companies waste much money on texts that are too hard for the average reader.[10]

Summaries of this research are available; see the links in this section. Many textbooks on reading include pointers to readability.[2][11][12][13]

Early research

In the 1880s, English professor L. A. Sherman found that the English sentence was getting shorter. In Elizabethan times, the average sentence was 50 words long. In his own time, it was 23 words long.

Sherman's work established that:

  • Literature is a subject for statistical analysis.
  • Shorter sentences and concrete terms help people to make sense of what is written.
  • Speech is easier to understand than text.
  • Over time, text becomes easier if it is more like speech.

Sherman wrote: "Literary English, in short, will follow the forms of standard spoken English from which it comes. No man should talk worse than he writes, no man should write better than he should talk.... The oral sentence is clearest because it is the product of millions of daily efforts to be clear and strong. It represents the work of the race for thousands of years in perfecting an effective instrument of communication."[14]

In 1889 in Russia, the writer Nikolai A. Rubakin[15] published his study of over 10,000 texts written by everyday people. From these texts, he took 1,500 words which he thought were understood by most people. He found that the main obstacles to reading ease were 1) unfamiliar words and 2) long sentences.[16] Starting with his own journal at the age of 13, Rubakin published many articles and books on science and many other subjects for the great numbers of new readers throughout Russia. In Rubakin's view, the people were not fools. They were simply poor and in need of cheap books, written at a level they could grasp.[15]

In 1921, Harry D. Kitson published The Mind of the Buyer, one of the first uses of psychology in marketing. Kitson's work showed that each type of reader bought and read their own type of text. On reading two newspapers (the Chicago Evening Post and the Chicago American) and two magazines (the Century and the American), he found that sentence length and word length were the best signs of being easy to read.[17]

Text leveling

The earliest method of assessing the reading ease of texts is the subjective judgment termed text leveling. Formulas do not fully address a text's varied content, purpose, design, visual input, and organization.[18][19][20]

Text leveling is commonly used to rank the reading ease of texts in areas where reading difficulties are easy to identify such as books for young children. At higher levels ranking the reading ease of texts becomes more difficult, as the reading difficulties become harder to identify. For this reason, better ways to assess reading ease were developed.

Vocabulary frequency lists

In the 1920s, the Scientific Movement in education looked for tests to measure students' achievement to aid in curriculum development. Teachers and educators had long known that readers, especially beginning readers, should have reading material that closely matched their ability, to help improve their reading skill. University-based psychologists did much of the early research, which was taken up later by publishers of textbooks.[6]

Educational psychologist Edward Thorndike of Columbia University noted that in Russia and Germany teachers were using word frequency counts to match books with students. Word skill was the best sign of intellectual development and the strongest predictor of reading ease. In 1921, Thorndike published his Teachers Word Book, which contained the frequencies of 10,000 words. It made it easier for teachers to choose books matching the reading skills of their class. It also laid down the basis for all research to come on reading ease.

Until computers came along, word frequency lists were the best aids for grading the reading ease of texts.[21] In 1981 the World Book Encyclopedia listed the grade levels of 44,000 words.[22]

Early children's readability formulas

In 1923, school teachers Bertha A. Lively and Sidney L. Pressey published the first reading ease formula. They had been concerned that science textbooks in junior high school had so many technical words. They felt that teachers spent all class time explaining their meaning. They argued that their formula would help to measure and reduce the “vocabulary burden” of textbooks. Their formula used five variable inputs and six constants. For each thousand words, it counted the number of unique words, the number of words not on the Thorndike list, and the median index number of the words found on the list. Manually, it took three hours to apply the formula to a book.[23]

After the Lively–Pressey study, researchers looked for formulas that were more accurate and easier to apply. By 1980, over 200 formulas had been published in different languages.[citation needed]

In 1928, Carleton Washburne and Mabel Vogel created the first modern readability formula. It was validated by using an outside criterion and correlated .845 with test scores of students who read and liked the criterion books. It was also the first to introduce the variable of interest to the concept of readability.[24]

Between 1929 and 1939, Alfred Lewerenz of the Los Angeles School District published several new formulas.[25][26][27][28]

In 1934, Edward Thorndike published a formula of his own. He wrote that word skills can be increased if the teacher brings in new words and repeats them often.[29] In 1939, W. W. Patty and W. I. Painter published a formula for measuring the vocabulary burden of textbooks. This was the last of the early formulas that used the Thorndike vocabulary-frequency list.[30]

Early adult readability formulas

During the recession of the 1930s, the U.S. government invested in adult education. In 1931, Douglas Waples and Ralph Tyler published What Adults Want to Read About. It was a two-year study of adult reading interests. Their book showed not only what people read but what they would like to read. They found that many readers lacked suitable reading materials: they would have liked to learn but the reading materials were too hard for them.[31]

Lyman Bryson of Teachers College, Columbia University found that many adults had poor reading ability due to poor education. Even though colleges had long taught writing in a clear and readable style, Bryson found that it was very rare. He wrote that such language is the result of a "discipline and artistry that few people who have ideas will take the trouble to achieve... If simple language were easy, many of our problems would have been solved long ago."[21] Bryson helped set up the Readability Laboratory at the College. Two of his students were Irving Lorge and Rudolf Flesch.

In 1934, Ralph Ojemann investigated the reading skills of adults, the factors which most directly affect reading ease, and the causes of each level of difficulty. He did not invent a formula but a method for assessing the difficulty of materials for parent education. He was the first to assess the validity of this method by using 16 magazine passages that had been tested on actual readers. He evaluated 14 measurable and three reported factors affecting reading ease.

Ojemann put great emphasis on the reported features, such as whether the text was coherent or unduly abstract. He used his 16 passages to compare and judge the reading ease of other texts, a method known today as scaling. He showed that even though these factors cannot be measured, they cannot be ignored.[32]

That same year, Ralph Tyler and Edgar Dale published the first adult reading ease formula which was based on passages from adult magazines. Of the 29 factors that had been significant for young readers, they found ten that were significant for adults. Three of them they used in their formula.[33]

In 1935, William S. Gray of the University of Chicago and Bernice Leary of Xavier College in Chicago published What Makes a Book Readable, one of the most important books in readability research. Like Dale and Tyler, they focused on what makes books readable for adults of limited reading ability.

The book included the first scientific study of the reading skills of adults in the U.S. The sample included 1,690 adults from a variety of settings and areas of the U.S. The test used a number of passages from newspapers, magazines, and books as well as a standard reading test. They found a mean grade score of 7.81 (eighth month of the seventh grade). About one-third read at the 2nd to 6th-grade level, one-third at the 7th to 12th-grade level, and one-third at the 13th to 17th grade level.

The authors emphasized that one-half of the adult population lacks suitable reading materials. They wrote, "For them, the enriching values of reading are denied unless materials reflecting adult interests are adapted to their needs." The poorest readers, one-sixth of the adult population, need "simpler materials for use in promoting functioning literacy and in establishing fundamental reading habits."[34]

Gray and Leary then analyzed 228 variables that affect reading ease and divided them into four types: 1. content, 2. style, 3. format, and 4. organization. They found that content was most important, followed closely by style. Third was format, followed closely by organization. They found no way to measure content, format, or organization, but they could measure variables of style. Among the 17 significant measurable variables of style, they selected five to create a formula: 1. average sentence length, 2. number of different hard words, 3. number of personal pronouns, 4. percentage of unique words, and 5. number of prepositional phrases. Their formula had a correlation of .645 with comprehension as measured by reading tests given to about 800 adults.[34]

In 1939, Irving Lorge published an article showing that there were other combinations of variables which were more accurate signs of difficulty than the ones used by Gray and Leary. His research also showed that "the vocabulary load is the most important concomitant of difficulty."[16] In 1944, Lorge published his Lorge Index, a readability formula using three variables, setting the stage for the simpler and more reliable formulas that would follow.[35]

By 1940, investigators had:

  • Successfully used statistical methods to analyze the reading ease of texts.
  • Found that unusual words and sentence length were among the first causes of reading difficulty.
  • Used vocabulary and sentence length in formulas to predict the reading ease of a text.

The popular readability formulas

The Flesch formulas

In 1943, Rudolf Flesch published his Ph.D. dissertation, Marks of a Readable Style, which included a readability formula for predicting the difficulty of adult reading material. Investigators began using it to improve communications in many fields. One of the variables it used was "personal references," such as names and personal pronouns. Another variable was affixes.[36]

In 1948, Flesch published his Reading Ease formula in two parts. Rather than using grade levels, it used a scale from 0 to 100, with 0 equivalent to the 12th grade and 100 equivalent to the 4th grade. It dropped the use of affixes. The second part of the formula predicts human interest by using personal references and the number of personal sentences. The new formula correlated 0.70 with the McCall-Crabbs reading tests.[37] The original formula is:

Reading Ease score = 206.835 − (1.015 × ASL) − (84.6 × ASW)
Where: ASL = average sentence length (number of words divided by number of sentences)
ASW = average word length in syllables (number of syllables divided by number of words)
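The original formula can be sketched in Python. The word, sentence, and syllable counts are assumed to be supplied by the caller, since syllable counting itself varies by implementation:

```python
def reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease: higher scores indicate easier text."""
    asl = total_words / total_sentences      # average sentence length
    asw = total_syllables / total_words      # average syllables per word
    return 206.835 - (1.015 * asl) - (84.6 * asw)

# Example: 100 words, 5 sentences, 130 syllables -> ASL = 20, ASW = 1.3
score = reading_ease(100, 5, 130)            # about 76.6
```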

Publishers discovered that the Flesch formulas could increase readership up to 60 percent. Flesch's work also made an enormous impact on journalism. The Flesch Reading Ease formula became one of the most widely used, and the one most tested and reliable.[38][39] In 1951, Farr, Jenkins, and Patterson simplified the formula further by changing the syllable count. The modified formula is:

New Reading Ease score = 1.599nosw − 1.015sl − 31.517
Where: nosw = number of one-syllable words per 100 words and
sl = average sentence length in words.[40]
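The 1951 simplification can be sketched the same way, with the one-syllable word count taken per 100 words as defined above:

```python
def new_reading_ease(one_syllable_words, avg_sentence_length):
    """Farr-Jenkins-Patterson variant: nosw = one-syllable words per 100 words."""
    return 1.599 * one_syllable_words - 1.015 * avg_sentence_length - 31.517

score = new_reading_ease(70, 15)             # about 65.2
```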

In 1975, in a project sponsored by the U.S. Navy, the Reading Ease formula was recalculated to give a grade-level score. The new formula is now called the Flesch–Kincaid Grade-Level formula.[41] The Flesch–Kincaid formula is one of the most popular and heavily tested formulas. It correlates 0.91 with comprehension as measured by reading tests.[5]
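The recalculated coefficients are not quoted in the text; the commonly published form of the Flesch–Kincaid Grade-Level formula (stated here as an assumption, not drawn from the passage above) can be sketched as:

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid: same inputs as Reading Ease, but yields a U.S. grade level."""
    asl = total_words / total_sentences      # average sentence length
    asw = total_syllables / total_words      # average syllables per word
    return 0.39 * asl + 11.8 * asw - 15.59

grade = flesch_kincaid_grade(100, 5, 130)    # roughly grade 7.5
```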

The Dale–Chall formula

Edgar Dale, a professor of education at Ohio State University, was one of the first critics of Thorndike's vocabulary-frequency lists. He claimed that they did not distinguish between the different meanings that many words have. He created two new lists of his own. One, his "short list" of 769 easy words, was used by Irving Lorge in his formula. The other was his "long list" of 3,000 easy words, which were understood by 80% of fourth-grade students. In 1948, he incorporated this list in a formula which he developed with Jeanne S. Chall, who was to become the founder of the Harvard Reading Laboratory.

To apply the formula:

  1. Select several 100-word samples throughout the text.
  2. Compute the average sentence length in words (divide the number of words by the number of sentences).
  3. Compute the percentage of words NOT on the Dale–Chall word list of 3,000 easy words.
  4. Compute this equation

Raw Score = 0.1579 × PDW + 0.0496 × ASL + 3.6365

Where:

Raw Score = uncorrected reading grade of a student who can answer one-half of the test questions on a passage.
PDW = Percentage of Difficult Words not on the Dale–Chall word list.
ASL = Average Sentence Length

Finally, to compensate for the "grade-equivalent curve," apply the following chart for the Final Score:

Raw Score --- Final Score
4.9 and below --- Grade 4 and below
5.0 to 5.9 --- Grades 5–6
6.0 to 6.9 --- Grades 7–8
7.0 to 7.9 --- Grades 9–10
8.0 to 8.9 --- Grades 11–12
9.0 to 9.9 --- Grades 13–15 (college)
10 and above --- Grades 16 and above.[42]
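The steps and the grade-equivalent chart above can be sketched in Python. The 3,000-word list is not reproduced here, so the percentage of difficult words is assumed to be computed separately by the caller:

```python
def dale_chall_raw(pdw, asl):
    """Raw score from percentage of difficult words (PDW)
    and average sentence length (ASL)."""
    return 0.1579 * pdw + 0.0496 * asl + 3.6365

def final_score(raw):
    """Map a raw score to the grade-equivalent bands in the chart."""
    bands = [(5.0, "Grade 4 and below"), (6.0, "Grades 5-6"),
             (7.0, "Grades 7-8"), (8.0, "Grades 9-10"),
             (9.0, "Grades 11-12"), (10.0, "Grades 13-15 (college)")]
    for upper_bound, grades in bands:
        if raw < upper_bound:
            return grades
    return "Grades 16 and above"

raw = dale_chall_raw(20.0, 15.0)             # about 7.54
level = final_score(raw)                     # "Grades 9-10"
```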

Correlating 0.93 with comprehension as measured by reading tests, the Dale–Chall formula is the most reliable formula and is widely used in scientific research.

In 1995, Dale and Chall published a new version of their formula with an upgraded word list, the New Dale–Chall Readability Formula.[43]

The Gunning Fog formula

Main article: Gunning fog index

In the 1940s, Robert Gunning helped bring readability research into the workplace. In 1944, he founded the first readability consulting firm dedicated to reducing the "fog" in newspapers and business writing. In 1952, he published The Technique of Clear Writing with his own Fog Index, a formula that correlates 0.91 with comprehension as measured by reading tests.[5] The formula is one of the most reliable and simplest to apply:

Grade level = 0.4 × (average sentence length + percentage of hard words)
Where: hard words = words with more than two syllables.[44]
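As a minimal sketch, with both counts supplied by the caller:

```python
def gunning_fog(avg_sentence_length, pct_hard_words):
    """Fog Index: hard words are those with more than two syllables."""
    return 0.4 * (avg_sentence_length + pct_hard_words)

grade = gunning_fog(15, 10)                  # 10.0, i.e. 10th-grade level
```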

Fry Readability Graph

In 1963, while teaching English teachers in Uganda, Edward Fry developed his Readability Graph. It became one of the most popular formulas and easiest to apply.[45][46] The Fry Graph correlates 0.86 with comprehension as measured by reading tests.[5]

McLaughlin's SMOG formula

Harry McLaughlin determined that word length and sentence length should be multiplied rather than added as in other formulas. In 1969, he published his SMOG (Simple Measure of Gobbledygook) formula:

SMOG grading = 3 + square root of polysyllable count.
Where: polysyllable count = number of words of more than two syllables in a sample of 30 sentences.[4]
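A sketch of the formula as stated above, with the polysyllable count over a 30-sentence sample supplied by the caller:

```python
import math

def smog_grade(polysyllable_count):
    """SMOG: count of words with more than two syllables in 30 sentences."""
    return 3 + math.sqrt(polysyllable_count)

grade = smog_grade(25)                       # 8.0
```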

The SMOG formula correlates 0.88 with comprehension as measured by reading tests.[5] It is often recommended for use in healthcare.[47]

The FORCAST formula

In 1973, a study commissioned by the U.S. military of the reading skills required for different military jobs produced the FORCAST formula. Unlike most other formulas, it uses only a vocabulary element, making it useful for texts without complete sentences. The formula satisfied requirements that it would be:

  • Based on Army-job reading materials.
  • Suitable for young adult male recruits.
  • Easy enough for Army clerical personnel to use without special training or equipment.

The formula is:

Grade level = 20 − (N / 10)
Where N = number of single-syllable words in a 150-word sample.[48]
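Because it needs only one count, the formula is a one-liner in Python:

```python
def forcast_grade(single_syllable_words):
    """FORCAST: single-syllable words counted in a 150-word sample."""
    return 20 - single_syllable_words / 10

grade = forcast_grade(100)                   # 10.0
```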

The FORCAST formula correlates 0.66 with comprehension as measured by reading tests.[5]

Consolidation and validation

Beginning in the 1940s, continuing studies in readability confirmed and expanded on earlier research. From these studies, it became obvious that readability is not something embedded in the text but is the result of an interaction between the text and the reader. On the reader's side, readability is dependent on 1. prior knowledge, 2. reading skill, 3. interest, and 4. motivation. On the side of the text, readability is affected by 1. content, 2. style, 3. design, and 4. organization.

Readability and newspaper readership

Several studies in the 1940s showed that even small increases in readability greatly increase readership in large-circulation newspapers.

In 1947, Donald Murphy of Wallace's Farmer used a split-run edition to study the effects of making text easier to read. He found that reducing the reading level from the 9th to the 6th grade increased readership by 43% for an article on 'nylon'. There was a gain of 42,000 readers in a circulation of 275,000. He found a 60% increase in readership for an article on 'corn'. He also found a better response from people under 35.[49]

Wilbur Schramm interviewed 1,050 newspaper readers. He found that an easier reading style helps determine how much of an article is read. This was called reading persistence, depth, or perseverance. He also found that people will read less of long articles than of short ones. A story 9 paragraphs long will lose three out of 10 readers by the 5th paragraph. A shorter story will lose only two. Schramm also found that the use of subheads, bold-face paragraphs, and stars to break up a story actually loses readers.[50]

A study in 1947 by Melvin Lostutter showed that newspapers generally were written at a level five years above the ability of average American adult readers. He also found that the reading ease of newspaper articles had little to do with the education, experience, or personal interests of the journalists writing the stories. It had more to do with the conventions and culture of the industry. Lostutter argued for more readability testing in newspaper writing. He wrote that improved readability has to be a "conscious process somewhat independent of the education and experience of the staff writers."[51]

A study by Charles Swanson in 1948 showed that better readability increases the total number of paragraphs read by 93% and the number of readers reading every paragraph by 82%.[52]

In 1948, Bernard Feld did a study of every item and ad in the Birmingham News of 20 November 1947. He divided the items into those above the 8th-grade level and those at the 8th grade or below. He chose the 8th-grade breakpoint because that was the average reading level of adult readers. An 8th-grade text "will reach about 50 percent of all American grown-ups," he wrote. Among the wire-service stories, the lower group got two-thirds more readers, and among local stories, 75 percent more readers. Feld also believed in drilling writers in Flesch's clear-writing principles.[53]

Both Rudolf Flesch and Robert Gunning worked extensively with newspapers and the wire services to improve readability. Mainly through their efforts, in a few years the readability of U.S. newspapers went from the 16th- to the 11th-grade level, where it remains today.

The two publications with the largest circulations, TV Guide (13 million) and Reader's Digest (12 million), are written at the 9th-grade level.[5] The most popular novels are written at the 7th-grade level. This supports the fact that the average adult reads at the 9th-grade level. It also shows that, for recreation, people read texts that are two grades below their actual reading level.[21]

The George Klare Studies

George Klare and his colleagues looked at the effects of greater reading ease on Air Force recruits. They found that more readable texts resulted in greater and more complete learning. They also increased the amount read in a given time, and made for easier acceptance.[54][55]

Other studies by Klare showed how the reader's skills,[56] prior knowledge,[57] interest, and motivation[56][57] affect reading ease.

Measuring coherence and organization

For centuries, teachers and educators have seen the importance of organization, coherence, and emphasis in good writing. Beginning in the 1970s, cognitive theorists began teaching that reading is really an act of thinking and organization. The reader constructs meaning by mixing new knowledge into existing knowledge. Because of the limits of the reading ease formulas, some research looked at ways to measure the content, organization, and coherence of text. Although this did not improve the reliability of the formulas, their efforts showed the importance of these variables in reading ease.

Studies by Walter Kintsch and others showed the central role of coherence in reading ease, mainly for people learning to read.[58] In 1983, Susan Kemper devised a formula based on physical states and mental states. However, she found this was no better than word familiarity and sentence length in showing reading ease.[59]

Bonnie Meyer and others tried to use organization as a measure of reading ease. While this did not result in a formula, they showed that people read faster and retain more when the text is organized in topics. She found that a visible plan for presenting content greatly helps readers to assess a text. A hierarchical plan shows how the parts of the text are related. It also aids the reader in blending new information into existing knowledge structures.[60]

Bonnie Armbruster found that the most important feature for learning and comprehension is textual coherence, which comes in two types:

  • Global coherence, which integrates high-level ideas as themes in an entire section, chapter, or book.
  • Local coherence, which joins ideas within and between sentences.

Armbruster confirmed Kintsch's finding that coherence and structure are more help for younger readers.[61] R. C. Calfee and R. Curley built on Bonnie Meyer's work and found that an unfamiliar underlying structure can make even simple text hard to read. They brought in a graded system to help students progress from simpler story lines to more advanced and abstract ones.[62]

Many other studies looked at the effects on reading ease of other text variables, including:

  • Image words, abstraction, direct and indirect statements, types of narration and sentences, phrases, and clauses.[34]
  • Difficult concepts.[39]
  • Idea density.[63]
  • Human interest.[44][64]
  • Nominalization.[65]
  • Active and passive voice.[66][67][68][69]
  • Embeddedness.[67]
  • Structural cues.[70][71]
  • The use of images.[72][73]
  • Diagrams and line graphs.[74]
  • Highlighting.[75]
  • Fonts and layout.[76]
  • Document age.[77]

Advanced readability formulas

The John Bormuth formulas

John Bormuth of the University of Chicago looked at reading ease using the new Cloze deletion test developed by Wilson Taylor. His work supported earlier research, including the best degree of reading ease for each kind of reading. The best level for classroom "assisted reading" is a slightly difficult text that causes a "set to learn," and for which readers can correctly answer 50 percent of the questions on a multiple-choice test. The best level for unassisted reading is one for which readers can correctly answer 80 percent of the questions. These cutoff scores were later confirmed by Vygotsky[78] and Chall and Conard.[79] Among other things, Bormuth confirmed that vocabulary and sentence length are the best indicators of reading ease. He showed that the measures of reading ease worked as well for adults as for children: the things that children find hard are the same for adults of the same reading levels. He also developed several new measures of cutoff scores. One of the most well known was the "Mean Cloze Formula," which was used in 1981 to produce the Degree of Reading Power system used by the College Entrance Examination Board.[80][81][82]

The Lexile Framework

In 1988, Jack Stenner and his associates at MetaMetrics, Inc. published a new system, the Lexile Framework, for assessing readability and matching students with appropriate texts.

The Lexile Framework uses average sentence length and average word frequency as found in the American Heritage Intermediate Corpus to predict a score on a 0–2000 scale. The AHI Corpus includes five million words from 1,045 published works often read by students in grades three to nine. Once you know a student's Lexile score, you can search a large database for books that match the score.

The Lexile Framework is one of the largest and most successful systems for the development of reading skills. The Lexile Book Database has more than 100,000 titles from more than 450 publishers. You can search the database for Lexile ratings on their Web site at: http://www.lexile.com.[83]

ATOS Readability Formula for Books

In 2000, researchers of the School Renaissance Institute and Touchstone Applied Science Associates published their Advantage-TASA Open Standard (ATOS) Reading Ease Formula for Books. They worked on a formula that was easy to use and that could be used with any text.

The project was one of the largest reading ease projects ever undertaken. The developers of the formula used 650 normed reading texts and 474 million words from all the text in 28,000 books read by students. The project also used the reading records of more than 30,000 students who read and were tested on 950,000 books.

They found that three variables give the most reliable measure of text reading ease:

  • words per sentence
  • average grade level of words
  • characters per word

They also found that:

  • To help learning, the teacher should match book reading ease with reading skill.
  • Reading often helps with reading gains.
  • For reading alone below the 4th grade, the best learning gain requires at least 85% comprehension.
  • Advanced readers need 92% comprehension for independent reading.
  • Book length can be a good measure of reading ease.
  • Feedback and interaction with the teacher are the most important factors in reading.[84][85]

Coh-Metrix Psycholinguistics Measurements

Coh-Metrix can be used in many different ways to investigate the cohesion of the explicit text and the coherence of the mental representation of the text. "Our definition of cohesion consists of characteristics of the explicit text that play some role in helping the reader mentally connect ideas in the text."[86] The definition of coherence is the subject of much debate. Theoretically, the coherence of a text is defined by the interaction between linguistic representations and knowledge representations. While coherence can be defined as characteristics of the text (i.e., aspects of cohesion) that are likely to contribute to the coherence of the mental representation, Coh-Metrix measurements provide indices of these cohesion characteristics.[86]

Using the readability formulas

While experts agree that the formulas are highly accurate for grading the readability of existing texts, they are not so useful for creating or modifying them. The two variables used in most formulas, sentence length and vocabulary, are the ones most directly related to reading difficulty, but they are not the only ones.

Writing experts have warned that if you "write to the formula," that is, attempt to simplify the text only by changing the length of the words and sentences, you may end up with text that is more difficult to read. All the variables are tightly related. If you change one, you must also adjust the others, including approach, voice, person, tone, typography, design, and organization.

Writing for a class of readers other than one's own is very difficult. It takes training, method, and practice. Writers of novels and children's books are among those who are good at it. The writing experts all advise that, besides using a formula, writers should observe all the norms of good writing, which are essential for writing readable texts. Study the texts used by your audience and their reading habits. This means that, if you are writing for a 5th-grade audience, you should study and learn from good-quality 5th-grade materials.[21][44][64][87][88][89][90]

References

  1. ^ Tinker, Miles A. (1963). Legibility of Print. Iowa: Iowa State University Press. pp. 5–7. ISBN 0-8138-2450-8. 
  2. ^ a b Harris, Theodore L. and Richard E. Hodges, eds. 1995. The Literacy Dictionary, The Vocabulary of Reading and Writing. Newark, DE: International Reading Assn.
  3. ^ Dale, Edgar and Jeanne S. Chall. 1949. "The concept of readability." Elementary English 26:23.
  4. ^ a b McLaughlin, G. H. 1969. "SMOG grading-a new readability formula." Journal of reading 22:639–646.
  5. ^ a b c d e f g DuBay, W. H. 2006. Smart language: Readers, Readability, and the Grading of Text. Costa Mesa:Impact Information.
  6. ^ a b Fry, Edward B. 2006. "Readability." Reading Hall of Fame Book. Newark, DE: International Reading Assn.
  7. ^ Fry, E. B. 1986. Varied uses of readability measurement. Paper presented at the 31st Annual Meeting of the International Reading Association, Philadelphia, PA.
  8. ^ Rabin, A. T. 1988 "Determining difficulty levels of text written in languages other than English." In Readability: Its past, present, and future, eds. B. L. Zakaluk and S. J. Samuels. Newark, DE: International Reading Association.
  9. ^ Klare, G. R. 2000. "Readable computer documentation." ACM Journal of Computer Documentation 24, no. 3: 148–168.
  10. ^ Kimble, Joe. 1996–97. Writing for dollars. Writing to please. Scribes journal of legal writing 6. Available online at: http://www.plainlanguagenetwork.org/kimble/dollars.htm
  11. ^ Ruddell, R. B. 1999. Teaching children to read and write. Boston: Allyn and Bacon.
  12. ^ Manzo, A. V. and U. C. Manzo. 1995. Teaching children to be literate. Fort Worth: Harcourt Brace.
  13. ^ Vacca, J. A., R. Vacca, and M. K. Gove. 1995. Reading and learning to read. New York: Harper Collins.
  14. ^ Sherman, Lucius Adelno. 1893. Analytics of literature: A manual for the objective study of English prose and poetry. Boston: Ginn and Co.
  15. ^ a b Choldin, M.T. (1979), "Rubakin, Nikolai Aleksandrovic", in Kent, Allen; Lancour, Harold; Nasri, William Z. et al., Encyclopedia of library and information science 26 (illustrated ed.), CRC Press, pp. 178–79, ISBN 9780824720261 
  16. ^ a b Lorge, I. 1944. "Word lists as background for communication." Teachers College Record 45:543–552.
  17. ^ Kitson, Harry D. 1921. The Mind of the Buyer. New York: Macmillan.
  18. ^ Clay, M. 1991. Becoming literate: The construction of inner control. Portsmouth, NH: Heinemann.
  19. ^ Fry, E. B. 2002. "Text readability versus leveling." Reading Teacher 56, no. 23:286–292.
  20. ^ Chall, J. S., J. L. Bissex, S. S. Conard, and S. H. Sharples. 1996. Qualitative assessment of text difficulty: A practical guide for teachers and writers. Cambridge MA: Brookline Books.
  21. ^ a b c d Klare, G. R. and B. Buck. 1954. Know Your Reader: The scientific approach to readability. New York: Heritage House.
  22. ^ Dale, E. and J. O'Rourke. 1981. The living word vocabulary: A national vocabulary inventory. World Book-Childcraft International.
  23. ^ Lively, Bertha A. and S. L. Pressey. 1923. "A method for measuring the 'vocabulary burden' of textbooks." Educational administration and supervision 9:389–398.
  24. ^ Washburne, C. and M. Vogel. 1928. "An objective method of determining grade placement of children's reading material." Elementary school journal 28:373–81.
  25. ^ a b Lewerenz, A. S. 1929. "Measurement of the difficulty of reading materials." Los Angeles educational research bulletin 8:11–16.
  26. ^ Lewerenz, A. S. 1930. "Vocabulary grade placement of typical newspaper content." Los Angeles educational research bulletin 10:4–6.
  27. ^ Lewerenz, A. S. 1935. "A vocabulary grade placement formula." Journal of experimental education 3:236.
  28. ^ Lewerenz, A. S. 1939. "Selection of reading materials by pupil ability and interest." Elementary English review 16:151–156.
  29. ^ Thorndike, E. 1934. "Improving the ability to read." Teachers college record 36:1–19, 123–44, 229–41. October, November, December.
  30. ^ Patty. W. W. and W. I. Painter. 1931. "A technique for measuring the vocabulary burden of textbooks." Journal of educational research 24:127–134.
  31. ^ Waples, D. and R. Tyler. 1931. What adults want to read about. Chicago: University of Chicago Press.
  32. ^ Ojemann, R. H. 1934. "The reading ability of parents and factors associated with reading difficulty of parent-education materials." University of Iowa studies in child welfare 8:11–32.
  33. ^ Dale, E. and R. Tyler. 1934. "A study of the factors influencing the difficulty of reading materials for adults of limited reading ability." Library quarterly 4:384–412.
  34. ^ a b c Gray, W. S. and B. Leary. 1935. What makes a book readable. Chicago: Chicago University Press.
  35. ^ Lorge, I. 1944. "Predicting readability." Teachers college record 45:404–419.
  36. ^ Flesch, R. 1943. "Marks of a readable style." Columbia University contributions to education, no. 187. New York: Bureau of Publications, Teachers College, Columbia University.
  37. ^ Flesch, R. 1948. "A new readability yardstick." Journal of applied psychology 32:221–233.
  38. ^ Klare, G. R. 1963. The measurement of readability. Ames, Iowa: Iowa State University Press.
  39. ^ a b Chall, J. S. 1958. Readability: An appraisal of research and application. Columbus, OH: Bureau of Educational Research, Ohio State University.
  40. ^ Farr, J. N., J. J. Jenkins, and D. G. Paterson. 1951. "Simplification of the Flesch Reading Ease Formula." Journal of applied psychology. 35, no. 5:333–357.
  41. ^ Kincaid, J. P., R. P. Fishburne, R. L. Rogers, and B. S. Chissom. 1975. Derivation of new readability formulas (Automated Readability Index, Fog Count, and Flesch Reading Ease Formula) for Navy enlisted personnel. CNTECHTRA Research Branch Report 8-75.
  42. ^ Dale, E. and J. S. Chall. 1948. "A formula for predicting readability." Educational research bulletin Jan. 21 and Feb. 17, 27:1–20, 37–54.
  43. ^ Chall, J. S. and E. Dale. 1995. Readability revisited: The new Dale–Chall readability formula. Cambridge, MA: Brookline Books.
  44. ^ a b c Gunning, R. 1952. The Technique of Clear Writing. New York: McGraw–Hill.
  45. ^ Fry, E. B. 1963. Teaching faster reading. London: Cambridge University Press.
  46. ^ Fry, E. B. 1968. "A readability formula that saves time." Journal of reading 11:513–516.
  47. ^ Doak, C. C., L. G. Doak, and J. H. Root. 1996. Teaching patients with low literacy skills. Philadelphia: J. P. Lippincott Company.
  48. ^ Caylor, J. S., T. G. Sticht, L. C. Fox, and J. P. Ford. 1973. Methodologies for determining reading requirements of military occupational specialties: Technical report No. 73-5. Alexandria, VA: Human Resources Research Organization.
  49. ^ Murphy, D. 1947. "How plain talk increases readership 45% to 60%." Printer's ink. 220:35–37.
  50. ^ Schramm, W. 1947. "Measuring another dimension of newspaper readership." Journalism quarterly 24:293–306.
  51. ^ Lostutter, M. 1947. "Some critical factors in newspaper readability." Journalism quarterly 24:307–314.
  52. ^ Swanson, C. E. 1948. "Readability and readership: A controlled experiment." Journalism quarterly 25:339–343.
  53. ^ Feld, B. 1948. "Empirical test proves clarity adds readers." Editor and publisher 81:38.
  54. ^ Klare, G. R., J. E. Mabry, and L. M. Gustafson. 1955. "The relationship of style difficulty to immediate retention and to acceptability of technical material." Journal of educational psychology 46:287–295.
  55. ^ Klare, G. R., E. H. Shuford, and W. H. Nichols. 1957. "The relationship of style difficulty, practice, and efficiency of reading and retention." Journal of applied psychology 41:222–226.
  56. ^ a b Klare, G. R. 1976. "A second look at the validity of the readability formulas." Journal of reading behavior. 8:129–152.
  57. ^ a b Klare, G. R. 1985. "Matching reading materials to readers: The role of readability estimates in conjunction with other information about comprehensibility." In Reading, thinking, and concept development, eds. T. L Harris and E. J. Cooper. New York: College Entrance Examination Board.
  58. ^ Kintsch, W. and J. R. Miller 1981. "Readability: A view from cognitive psychology." In Teaching: Research reviews. Newark, DE: International Reading Assn.
  59. ^ Kemper, S. 1983. "Measuring the inference load of a text." Journal of educational psychology 75, no. 3:391–401.
  60. ^ Meyer, B. J. 1982. "Reading research and the teacher: The importance of plans." College composition and communication 33, no. 1:37–49.
  61. ^ Armbruster, B. B. 1984. "The problem of inconsiderate text." In Comprehension instruction, ed. G. Duffy. New York: Longman, pp. 202–217.
  62. ^ Calfee, R. C. and R. Curley. 1984. "Structures of prose in content areas." In Understanding reading comprehension, ed. J. Flood. Newark, DE: International Reading Assn., pp. 414–430.
  63. ^ Dolch, E. W. 1939. "Fact burden and reading difficulty." Elementary English review 16:135–138.
  64. ^ a b Flesch, R. 1949. The art of readable writing. New York: Harper.
  65. ^ Coleman, E. B. and P. J. Blumenfeld. 1963. "Cloze scores of nominalization and their grammatical transformations using active verbs." Psychology reports 13:651–654.
  66. ^ Gough, P. B. 1965. "Grammatical transformations and the speed of understanding." Journal of verbal learning and verbal behavior 4:107–111.
  67. ^ a b Coleman, E. B. 1966. "Learning of prose written in four grammatical transformations." Journal of applied psychology 49:332–341.
  68. ^ Clark, H. H. and S. E. Haviland. 1977. "Comprehension and the given-new contract." In Discourse production and comprehension, ed. R. O. Freedle. Norwood, NJ: Ablex Press, pp. 1–40.
  69. ^ Hornby, P. A. 1974. "Surface structure and presupposition." Journal of verbal learning and verbal behavior 13:530–538.
  70. ^ Spyridakis, J. H. 1989. "Signaling effects: A review of the research — Part 1." Journal of technical writing and communication 19, no. 3:227–240.
  71. ^ Spyridakis, J. H. 1989. "Signaling effects: Increased content retention and new answers-Part 2." Journal of technical writing and communication 19, no. 4:395–415.
  72. ^ Halbert, M. G. 1944. "The teaching value of illustrated books." American school board journal 108, no. 5:43–44.
  73. ^ Vernon, M. D. 1946. "Learning from graphic material." British journal of psychology 36:145–158.
  74. ^ Felker, D. B., F. Pickering, V. R. Charrow, V. M. Holland, and J. C. Redish. 1981. Guidelines for document designers. Washington, D. C: American Institutes for Research.
  75. ^ Klare, G. R., J. E. Mabry, and L. M. Gustafson. 1955. "The relationship of patterning (underlining) to immediate retention and to acceptability of technical material." Journal of applied psychology 39, no 1:40–42.
  76. ^ Klare, G. R. 1957. "The relationship of typographic arrangement to the learning of technical material." Journal of applied psychology 41, no 1:41–45.
  77. ^ Jatowt, A. and K. Tanaka. 2012. "Longitudinal analysis of historical texts' readability." In Proceedings of the Joint Conference on Digital Libraries 2012, pp. 353–354.
  78. ^ Vygotsky, L. 1978. Mind in society. Cambridge, MA: Harvard University Press.
  79. ^ Chall, J. S. and S. S. Conard. 1991. Should textbooks challenge students? The case for easier or harder textbooks. New York: Teachers College Press.
  80. ^ Bormuth, J. R. 1966. "Readability: A new approach." Reading research quarterly 1:79–132.
  81. ^ Bormuth, J. R. 1969. Development of readability analysis: Final Report, Project no 7-0052, Contract No. OEC-3-7-0070052-0326. Washington, D. C.: U. S. Office of Education, Bureau of Research, U. S. Department of Health, Education, and Welfare.
  82. ^ Bormuth, J. R. 1971. Development of standards of readability: Towards a rational criterion of passage performance. Washington, D. C.: U. S. Office of Education, Bureau of Research, U. S. Department of Health, Education, and Welfare.
  83. ^ Stenner, A. J., I. Horabin, D. R. Smith, and R. Smith. 1988. The Lexile Framework. Durham, NC: MetaMetrics.
  84. ^ School Renaissance Institute. 2000. The ATOS readability formula for books and how it compares to other formulas. Madison, WI: School Renaissance Institute, Inc.
  85. ^ Paul, T. 2003. Guided independent reading. Madison, WI: School Renaissance Institute, Inc. http://www.renlearn.com/GIRP2008.pdf
  86. ^ a b Graesser, A.C.; McNamara, D.S.; Louwerse, M.M. (2003), "What do readers need to learn in order to process coherence relations in narrative and expository text", in Sweet, A.P.; Snow, C.E., Rethinking reading comprehension (New York: Guilford Publications): 82–98 
  87. ^ Flesch, R. 1946. The art of plain talk. New York: Harper.
  88. ^ Flesch, R. 1979. How to write in plain English: A book for lawyers and consumers. New York: Harpers.
  89. ^ Klare, G. R. 1980. How to write readable English. London: Hutchinson.
  90. ^ Fry, E. B. 1988. "Writeability: the principles of writing for increased comprehension." In Readability: Its past, present, and future, eds. B. I. Zakaluk and S. J. Samuels. Newark, DE: International Reading Assn.