Pragmatic validity

From Wikipedia, the free encyclopedia

Pragmatic validity in research looks to a different paradigm from more traditional, (post)positivistic research approaches. It tries to ameliorate problems associated with the rigour-relevance debate, and is applicable in all kinds of research streams. Simply put, pragmatic validity looks at research from a prescription-driven perspective. Solutions to problems that actually occur in the complex and highly multivariate field of practice are developed in a way that, while valid for a specific situation, must be adjusted according to the context in which they are to be applied.

The term "validity" is often seen as a sort of catch-all for the question of whether the knowledge claims resulting from research are warranted. Confusion might arise from the mingling of the terms 'internal validity' and 'external validity', where the former refers to proof of a causal link between a treatment and an effect, and the latter is concerned with generalizability. (In this discussion the term 'generalizability' is maintained rather than 'external validity', mainly to avoid any possible confusion between the two terms.) Here, validity is reflected in the question "did we measure the right thing?", or, in other words, can the researcher prove that the observed effect was actually a result of the cause? Positivistic research approaches this question differently than pragmatic research, which is based in a different paradigm. Design Science Research is one example of research firmly situated in a pragmatic perspective.

Validity in (post)positivist research

Postpositivist research typically strives to numerically report empirical observations made within a controlled environment in order to arrive at a universal truth about a causal effect between a limited number of variables. This reflects much of the epistemology on which positivistic science is based: isolating singular variables in order to come to a conclusion that is free of context. Laboratory experiments and quantitative models are the preferred methods for observing and reporting, as they are considered to rule out rival plausible explanations and thus help to guarantee validity.

Validity in pragmatic research

Validity in prescription-driven research is approached differently than in descriptive research. The first difference concerns what some researchers call 'messy situations' (Brown 1992; Collins, Joseph, and Bielaczyc 2004). A messy situation is a real-life, highly multivariate one in which independent variables can neither be minimized nor completely accounted for. In explanatory science, experiments take place in controlled laboratories, where variables can be minimized. The complex nature of a real-life intervention means that the success or failure (effect) of the intervention may be difficult to conclusively link to the intervention itself (cause). This aspect of knowledge claims is seen as extremely problematic for positivist scientists looking for explanations. However, scientists using a pragmatic paradigm respond in two ways: first, by questioning the value of research carried out in a controlled situation (Brown 1992; Hodkinson 2004; Kelly and Lesh 2000; Perrin 2000; Susman and Evered 1978; Walker and Evers 1999; Zaritsky et al. 2003), and second, by looking at causal effects from a different perspective.

The phrase "pragmatic validity" was first discussed by Worren, Moore and Elliott (2002), who contrasted it with scientific validity. This idea has since been taken up in the management literature to a considerable degree.

Many social science researchers assert that testing interventions in controlled laboratory settings is hardly feasible and not a reflection of the real world.[1] For them, real-life settings are needed in order to produce worthwhile research artifacts. These artifacts are validated by the adoption rate of the practitioners within the community of practice associated with the field.[2] Nowotny (2000) calls knowledge that has been validated by the multidisciplinary community of practice 'socially robust', meaning that it has been developed in (and for) contexts outside the laboratory and can be used by practitioners.

In the following statement, Cook (1983) refers to the well-known educational researcher Cronbach on multivariate causal interdependency and validity, and on the need to understand the complexity of the situation being researched.

Lawful statements of causation require full knowledge of this system of variables so that total prediction of the outcome can be achieved. From his belief in the systemic organization of causal connections and the utility of causal explanations of this type, Cronbach questions whether the experimentalists' isolation and manipulation of a small set of specific causal agents is sensitive to the real nature of causal agency, which depends on complex patterns of influence between multiple events and also involves characteristics of respondents, settings and times (p.78).

Thus, Cook (1983) actually questions the validity of causal explanations generated in a context-free setting (the goal of positivistic, explanatory research). Causal relationships in pragmatic research are looked at somewhat differently, which is apparent in the wording alone.

A statement about a causal relationship in positivistic research takes something like the following form: if you perform action X on subject Y, then Z happens. This assumes that confounding variables have been ruled out and that the statement is always true, regardless of the situation (internally and externally valid). The concept of 'technological rules' can be used to illustrate how causality is expressed in prescriptive research.

In pragmatic science, the goal is to develop knowledge that can be used to improve a situation; this can be called prescriptive knowledge. Prescriptive knowledge, according to van Aken (2004, 2004b, 2005), can take the form of a technological rule. A technological rule is "... a chunk of general knowledge linking an intervention or artifact with an expected outcome or performance in a certain field of application" (van Aken, 2005, p. 23). This rule can be formulated in much the same way as the earlier example of a causal statement: 'if you perform action X on subject Y, then Z happens' (note the cause-and-effect formulation). This type of algorithmic formulation is called a design solution (van Aken and Romme 2005). A design solution is usually a statistically proven quantitative model that can be taken as a specific instruction (van Aken & Romme, 2005). On the other hand, there are more abstract technological rules that are used for designing solutions; these are heuristics that guide, but do not determine, the design process (van Aken 2005; van Aken and Romme 2005). Such heuristic rules are formulated in the following way: "If you want to achieve Y in situation Z, then you perform something like X" (van Aken & Romme, 2005, p. 6). In short, the resulting artifacts of pragmatic research can also be causal relationships, just typically not as specific or reductionist as those resulting from positivist research. The words 'something like' in the statement implicitly refer to the complexity in which the causal relationship is enacted. The causal agent (X, in the statement above) can also be seen as complex and multivariate (Cook, 1983). Testing these causal agents is done in context, much the same way as evaluation research tests social or economic programs (van Aken 2003).

References

  1. ^ Brown 1992; Cook 1983; Husen 1999
  2. ^ Brown 1992; Hodkinson 2004; Zaritsky et al. 2003

Sources

  • Brown, A. 1992. "Design Experiments: Theoretical and Methodological Challenges in Creating Complex Interventions in Classroom Settings." The Journal of the Learning Sciences 2 (2):141-178.
  • Collins, A., D. Joseph, and K. Bielaczyc. 2004. "Design Research: Theoretical and Methodological Issues." The Journal of the Learning Sciences 13 (1):15-42.
  • Cook, T.D. 1983. "Quasi-Experimentation: Its Ontology, Epistemology, and Methodology." In Beyond Method: Strategies for Social Research, edited by G. Morgan. London: Sage.
  • Hodkinson, P. 2004. "Research as a form of work: expertise, community and methodological objectivity." British Educational Research Journal 30 (1):9-26.
  • Husen, T. 1999. Research Paradigms in Education. In Issues in Education, edited by J. P. Keeves and G. Lakomski. Amsterdam: Pergamon.
  • Kelly, A.E., and R.A. Lesh. 2000. "Trends and Shifts in Research Methods." In Handbook of Research Design in Mathematics and Science Education, edited by A. E. Kelly and R. A. Lesh. Mahwah, New Jersey: Lawrence Erlbaum.
  • Perrin, B. 2000. "Donald T. Campbell and the Art of Practical 'In-the-Trenches' Program Evaluation." In Validity & Social Experimentation, edited by L. Bickman. Thousand Oaks: Sage.
  • Susman, G.I., and R.D. Evered. 1978. "An Assessment of the Scientific Merits of Action Research." Administrative Science Quarterly 23 (4):582-603.
  • van Aken, J. E. 2005. "Management Research as a Design Science: Articulating the Research Products of Mode 2 Knowledge Production in Management." British Journal of Management 16 (1):19-36.
  • van Aken, J.E., and A.G.L. Romme. 2005. Reinventing the Future: Design Science Research in the Field of Organization Studies (unpublished work): Eindhoven University of Technology/ Tilburg University.
  • Walker, J.C., and C.W. Evers. 1999. "Research in Education: Epistemological Issues." In Issues in Educational Research, edited by J. P. Keeves and G. Lakomski. Amsterdam: Pergamon.
  • Worren, Nicolay, Karl Moore and Richard Elliott. 2002. "When Theories Become Tools: Toward a Framework for Pragmatic Validity." Human Relations 55 (10):1227-1250.
  • Zaritsky, R., A.E. Kelly, W. Flowers, E. Rogers, and P. O'Neill. 2003. "Clinical Design Sciences: A View From Sister Design Efforts." Educational Researcher 32 (1):32-34.