Talk:Program evaluation

Adding in line references[edit]

I added a couple of citations, but didn't know how to put the links on the citations, so I added them to the external links. Could someone move these external links into the citations? —Preceding unsigned comment added by 66.192.112.211 (talk) 16:31, 24 January 2009 (UTC)

Evaluation vs. Analysis[edit]

Someone should make the difference clear, if there is one, and back it up with citations. RedHouse18 18:36, 18 July 2007 (UTC)

Substantive Edits to Address Tone and References[edit]

Because this article was identified as lacking clear references and having an inappropriate tone/style, I made substantive changes to the initial block of text. I divided this text into main headings, leaving a somewhat coherent and useful introduction, and formalized the language throughout. I also reorganized much of the previous information into a common framework used in program evaluation (the five dimensions of evaluation). I included an in-text citation for this framework, referencing one of the definitive books on program evaluation, by Peter Rossi.

I did not address the issue of evaluation vs. analysis. Although this is an important distinction, I was focused on the broader aspects of tone, references, and organization.

Visuals[edit]

What if someone added a visual chart to the page? Perhaps looking at a section like the paradigms, and comparing them in a format other than straight up text. — Preceding unsigned comment added by 2601:681:4303:8F0:FD35:A57E:9095:BC30 (talk) 05:23, 7 December 2016 (UTC)

Reorganization of page[edit]

I was looking at this page and I thought that with some reorganization it could be made clearer for people who are completely unfamiliar with program evaluation. My proposed changes would be the following:


1. Brief History of Evaluation

2. Evaluation Models

  2.1 Empowerment
      2.1.1 Establishing a mission
      2.1.2 Taking stock
      2.1.3 Planning for the future
  2.2 CIPP model of evaluation
      2.2.1 History of the CIPP model
      2.2.2 CIPP model
      2.2.3 Four aspects of CIPP evaluation
      2.2.4 Using CIPP in the different stages of the evaluation

3. Conducting Evaluations

  3.1 Types of evaluators
  3.2 Choosing an evaluation design
      3.2.1 Assessing needs
      3.2.2 Assessing program theory
      3.2.3 Assessing implementation
      3.2.4 Assessing the impact (effectiveness)
      3.2.5 Assessing efficiency
  3.3 Methodological constraints and challenges
      3.3.1 The shoestring approach
      3.3.2 Budget constraints
      3.3.3 Time constraints
      3.3.4 Data constraints
      3.3.5 Five-tiered approach
      3.3.6 Methodological challenges presented by language and culture
  3.4 Utilization/reporting
      3.4.1 Persuasive utilization
      3.4.2 Direct (instrumental) utilization
      3.4.3 Conceptual utilization
      3.4.4 Variables affecting utilization
      3.4.5 Guidelines for maximizing utilization

4. See also

5. References

6. Further reading

7. External links

The one thing I am not sure about is where the discussion of paradigms fits. I think it could go either in the brief history or in the discussion of models. Most of this information is already in the article and would just need to be rearranged, with its references updated.

Efarmosa (talk) 22:52, 8 July 2017 (UTC)

History of Program Evaluation[edit]

Program evaluation is often regarded as having begun in the late 1960s, when the federal government infused large sums of money into a wide range of human service and education programs. However, program evaluation has been maturing as a profession since the nineteenth century. Madaus, Scriven, and Stufflebeam (1983) divide the history of program evaluation from the nineteenth century to the present into six periods.[1]

The first period, from 1800 to 1900, is the Age of Reform. The second, from 1900 to 1930, is known as the Age of Efficiency and Testing. The next, from 1930 to 1945, is the Tylerian Age. The fourth is the Age of Innocence, from 1946 to about 1957; the fifth, from 1958 to 1972, is the Age of Expansion; and the last, from 1973 to the present, is the Age of Professionalization.

The Age of Reform (1800-1900): Many local districts attempted to evaluate educational programs and the performance of schools. In the late 1890s, United States presidential commissions focused on evaluations of human service programs through the examination of evidence.

The Age of Efficiency and Testing (1900-1930): Evaluators in this period sought to eliminate waste in social and education programs in accordance with scientific management. Surveys were commonly used to measure large school systems, focusing on school and teacher efficiency and on precise sets of instructional objectives (Madaus, Scriven, & Stufflebeam, 1983). Large districts established departments or bureaus focused on improving the efficiency and effectiveness of programs or organizations by using standardized achievement tests.

The Tylerian Age (1930-1945): Ralph W. Tyler has had a significant impact on education in general and on educational evaluation in particular. Tyler began with a broad view of both curriculum and evaluation. He believed that a curriculum, as a set of learning experiences tied to educational objectives, can help children meet their educational needs and achieve specific behavioral outcomes. He introduced the term 'educational evaluation' to describe assessing the extent to which valued objectives had been achieved as part of an instructional program.

The Age of Innocence (1946-1957): In this period, small local school districts consolidated with others to provide a broad range of educational services, including public health services, mental health services, food services, and community education (Madaus, Scriven, & Stufflebeam, 1983). During the Age of Innocence, the technical aspects of assessment were developed, and objectives-based and comparative experimental evaluation designs first appeared.

The Age of Expansion (1958-1972): The evaluations of the Age of Innocence raised questions about how to assess the large-scale curriculum development projects funded with federal money. Program evaluations from 1958 to 1972 drew on four approaches: the Tyler approach of defining and assessing objectives, nationally standardized tests, professional-judgment approaches used to check periodically on the efforts of contractors, and field experiments.

The Age of Professionalization (1973-present): From 1973, evaluation crystallized as a distinct profession, though the identity of evaluators remained unsettled in this period, with some evaluators regarding themselves as researchers. Before the 1970s, most evaluations had focused on outcome accountability; from the 1970s on, many evaluators, such as Daniel Stufflebeam with his CIPP model, focused on process accountability as well as goal accountability, toward the improvement of program or institutional performance.

BangsilO (talk) 01:32, 11 July 2017 (UTC)

Evaluation Use and the Role of the Community Dissonance Theory[edit]

The community dissonance theory[2] moves beyond Caplan's (1979) two communities theory,[3] Shonkoff's (2000) three cultures theory,[4] and the elaborated multi-cultural theory[5] to offer a new conceptual framework that attributes the underutilization of evaluative findings and research in policymaking to a lack of communication between knowledge producers (researchers/evaluators) and knowledge consumers (policymakers). This dissonance, or lack of agreement, can be attributed to the differing professional and institutional cultures in which knowledge producers and knowledge consumers reside; these cultures influence how one thinks, acts, and perceives the world.

Jillramey (talk) 00:35, 26 July 2017 (UTC)

  1. ^ Madaus, G. F., Scriven, M. S., & Stufflebeam, D. (1983). Evaluation Models: Viewpoints on educational and human services evaluation. Boston, MA: Springer.
  2. ^ Bogenschneider & Corbett, 2010.
  3. ^ Bogenschneider & Corbett, 2010; Weiss et al., 2008.
  4. ^ Bogenschneider & Corbett, 2010.
  5. ^ Bogenschneider, K., Olson, J. R., Mills, J., & Linney, K. D. (2006).