Experiment

From Wikipedia, the free encyclopedia
Even very young children perform rudimentary experiments in order to learn about the world.

An experiment is an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis. Controlled experiments provide insight into cause and effect by demonstrating what outcome occurs when a particular factor is manipulated. They vary greatly in goal and scale but always rely on repeatable procedure and logical analysis of the results. Natural experimental studies also exist.

A child may carry out basic experiments to understand the nature of gravity, while teams of scientists may take years of systematic investigation to advance the understanding of a phenomenon. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists who hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences.

Overview

In the scientific method, an experiment is an empirical method that arbitrates between competing models or hypotheses.[1][2] Experimentation is also used to test existing theories or new hypotheses in order to support them or disprove them.[3][4]

An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment will reveal, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. Similarly, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific control and/or, in randomized experiments, through random assignment.

In engineering and other physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields will focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.

In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition where one or more outcomes are assessed.[5] In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment.[6] A single study will typically not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.
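The average treatment effect mentioned above is simply the difference in mean outcomes between the two randomly assigned groups. A minimal sketch, using made-up outcome values rather than data from any real trial:

```python
# Hypothetical outcomes for units randomly assigned to treatment or control.
treatment = [5.1, 6.3, 5.8, 6.0, 5.5]
control = [4.2, 4.8, 4.5, 5.0, 4.4]

def average_treatment_effect(treated, untreated):
    """Difference in mean outcomes between treatment and control groups."""
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

ate = average_treatment_effect(treatment, control)
# Here the treatment group's mean outcome exceeds the control group's by 1.16.
```

In practice this point estimate would be accompanied by a test statistic or confidence interval, since the difference in sample means varies from one randomization to the next.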

Of course, these differences between experimental practice in each of the branches of science have exceptions. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers). Similarly, experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions.[7]

History

Frontispiece of book showing two persons in robes, one holding a geometrical diagram, the other holding a telescope.
Hevelius's Selenographia, showing Alhasen [sic] representing reason, and Galileo representing the senses.
The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and ... attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.

—Alhazen, [8]

A distinctive aspect of Alhazen's optical research is its systematic and methodological reliance on experimentation (i'tibar) (Arabic: إختبار) and controlled testing in his scientific inquiries. Moreover, his experimental directives rested on combining classical physics (ilm tabi'i) with mathematics (ta'alim; geometry in particular). This mathematical-physical approach to experimental science supported most of his propositions in Kitab al-Manazir (The Optics; De aspectibus or Perspectivae) and grounded his theories of vision, light and colour, as well as his research in catoptrics and dioptrics (the study of the refraction of light).[9] Bradley Steffens, in his book Ibn Al-Haytham: First Scientist, has argued that Alhazen's approach to testing and experimentation made an important contribution to the scientific method. According to Matthias Schramm, Alhazen:

was the first to make a systematic use of the method of varying the experimental conditions in a constant and uniform manner, in an experiment showing that the intensity of the light-spot formed by the projection of the moonlight through two small apertures onto a screen diminishes constantly as one of the apertures is gradually blocked up.[10]

G. J. Toomer expressed some skepticism regarding Schramm's view. He argued that caution is needed to avoid reading particular passages in Alhazen's very large body of work anachronistically, and that, while Alhazen was important in developing experimental techniques, he should not be considered in isolation from other Islamic and ancient thinkers.[11] Francis Bacon was an English philosopher and scientist in the 17th century, and an early and influential supporter of experimental science. He disagreed with the method of answering scientific questions by deduction and described it as follows: "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession."[12] Bacon wanted a method that relied on repeatable observations, or experiments. He was notably the first to order the scientific method as we understand it today.

There remains simple experience; which, if taken as it comes, is called accident, if sought for, experiment. The true method of experience first lights the candle [hypothesis], and then by means of the candle shows the way [arranges and delimits the experiment]; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms [theories], and from established axioms again new experiments.

— Francis Bacon. Novum Organum. 1620.[13]

In the centuries that followed, important advances and discoveries were made by people who applied the scientific method in different areas. For example, Galileo Galilei made accurate measurements of time and used them to draw conclusions about the speed of a falling body. Antoine Lavoisier, a French chemist of the late 1700s, used experiment to describe new areas such as combustion and biochemistry, and to develop the theory of conservation of mass (matter).[14] During the 1800s, Louis Pasteur used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease.[15] Because of the importance of controlling potentially confounding variables, well-designed laboratory experiments are preferred when possible.

Considerable progress in the design and analysis of experiments was made in the early 20th century by statisticians such as Ronald Fisher, Jerzy Neyman, Oscar Kempthorne, Gertrude Mary Cox, and William Gemmell Cochran, among others. This early work has largely been synthesized under the label of the Rubin causal model, which formalizes earlier statistical approaches to the analysis of experiments.

Types of experiment

Experiments might be categorized according to a number of dimensions, depending upon professional norms and standards in different fields of study. In some disciplines (e.g., psychology or political science), a "true experiment" is a method of social research in which there are two kinds of variables: the independent variable, which is manipulated by the experimenter, and the dependent variable, which is measured. The defining characteristic of a true experiment is that it randomly allocates the subjects in order to neutralize the potential for experimenter bias and to ensure, over a large number of iterations of the experiment, that all confounding factors are controlled for.[16][17]

Controlled experiments

A controlled experiment often compares the results obtained from experimental samples against control samples, which are practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example is a drug trial: the sample or group receiving the drug would be the experimental (treatment) group, and the one receiving the placebo or regular treatment would be the control group. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and to have both a positive control and a negative control. The results from replicate samples can often be averaged, or, if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as the result of an experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate.

A positive control is a procedure very similar to the actual experimental test but known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produces one. The negative control demonstrates the baseline result obtained when a test does not produce a measurable positive result. Most often the value of the negative control is treated as a "background" value to be subtracted from the test sample results. Sometimes the positive control takes the form of a standard curve.
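The replicate-averaging and background-subtraction described above amount to simple arithmetic. A minimal sketch with hypothetical readings (the numbers are illustrative, not from any real assay):

```python
def corrected_signal(replicates, negative_control):
    """Average replicate measurements, then subtract the mean of the
    negative control as the "background" value."""
    background = sum(negative_control) / len(negative_control)
    mean = sum(replicates) / len(replicates)
    return mean - background

# Hypothetical duplicate readings from a test sample and a negative control.
test_sample = [0.52, 0.48]
neg_control = [0.05, 0.07]

signal = corrected_signal(test_sample, neg_control)
# Mean reading 0.50 minus background 0.06 gives a corrected signal of 0.44.
```

A reading grossly inconsistent with its replicates would, per the text, be inspected and possibly discarded before this averaging step.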

An example often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in the fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution of known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are prepared in duplicate. The assay is colorimetric: a spectrophotometer can measure the amount of protein in a sample by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. The results for the diluted test samples can then be compared against a standard curve fitted to the positive controls in order to estimate the amount of protein in the unknown sample.
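The standard-curve step can be sketched as a least-squares line fitted to the known standards, which is then inverted to estimate the unknown concentration. The concentrations and absorbances below are hypothetical placeholders, not real assay data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical standards: known protein concentrations (mg/mL) vs. absorbance.
conc = [0.0, 0.25, 0.5, 1.0]
absorbance = [0.02, 0.27, 0.52, 1.02]

slope, intercept = fit_line(conc, absorbance)

# Invert the curve to estimate the unknown sample's concentration
# from its measured absorbance.
unknown_abs = 0.62
estimated_conc = (unknown_abs - intercept) / slope
```

Real assays are only linear over a limited range, which is why the text has students prepare several dilutions of both the standard and the unknown.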

A controlled experiment can also be performed when it is difficult to control all the conditions exactly. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, meaning that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed, and simply splitting a solution into equal parts is assumed to produce identical sample groups.

Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data have been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that he or she is being treated.

In human experiments, a subject (person) may be given a stimulus to which he or she should respond. The goal of the experiment is to measure the response to a given stimulus by a test method.

Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854

In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses for the treatments. For example, an experiment on baking bread could estimate the difference in the responses associated with quantitative variables, such as the ratio of water to flour, and with qualitative variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations – or hypotheses. These hypotheses suggest reasons to explain a phenomenon, or predict the results of an action. An example might be the hypothesis that "if I release this ball, it will fall to the floor": this suggestion can then be tested by carrying out the experiment of letting go of the ball, and observing the results. Formally, a hypothesis is compared against its opposite or null hypothesis ("if I release this ball, it will not fall to the floor"). The null hypothesis is that there is no explanation or predictive power of the phenomenon through the reasoning that is being investigated. Once hypotheses are defined, an experiment can be carried out - and the results analysed - in order to confirm, refute, or define the accuracy of the hypotheses.
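One simple way to compare a hypothesis against its null, in the spirit of the randomization-based analysis discussed later in this article, is a permutation test: if the treatment labels were irrelevant (the null hypothesis), how often would shuffled labels produce a mean difference as large as the one observed? A sketch with invented bread-baking responses (the numbers and the two water-to-flour conditions are hypothetical):

```python
import random

def permutation_p_value(a, b, trials=10_000, seed=0):
    """Fraction of random relabelings whose absolute mean difference
    is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        # Small tolerance so float rounding never excludes a tie.
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed - 1e-12:
            count += 1
    return count / trials

# Hypothetical responses under two water-to-flour ratios.
ratio_low = [3.1, 2.9, 3.4, 3.0]
ratio_high = [4.0, 4.2, 3.8, 4.1]

p = permutation_p_value(ratio_low, ratio_high)
```

A small p-value means the observed difference would rarely arise under the null hypothesis, which counts as evidence against it; it does not "prove" the alternative, consistent with the caveat in the Overview.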

Natural experiments

The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult or impossible. In this case researchers resort to natural experiments or quasi-experiments.[18] Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data. When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in variables under study.

Much research in several important science disciplines, including economics, political science, geology, paleontology, ecology, meteorology, and astronomy, relies on quasi-experiments. For example, in astronomy it is clearly impossible, when testing the hypothesis "suns are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen, and then perform the experiment of waiting a few billion years for it to form a sun. However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect the data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 17th century that light does not travel from place to place instantaneously, but instead has a measurable speed. The appearance of the moons of Jupiter was observed to be slightly delayed when Jupiter was farther from Earth than when Jupiter was closer to Earth, and this phenomenon was used to demonstrate that the difference in the time of appearance of the moons was consistent with a measurable speed.

Field experiments

Field experiments are so named in order to draw a contrast with laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory.

Contrast with observational study

An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). In order for an observational science to be valid, confounding factors must be known and accounted for. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data.

Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually specified also by the experimental protocol.[19] Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model.[19] Inferences from subjective models are unreliable in theory and practice.[20] In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies are inconsistent and also differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit.[21]

A particular problem with observational studies involving human subjects is the great difficulty of attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and the results will not be meaningful if a covariate is neither randomized nor included in the model.
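The claim that randomization tends to balance covariates can be illustrated by simulation. The sketch below invents a single covariate (ages drawn from a normal distribution) for a hypothetical subject pool and checks that a random split produces groups with nearly equal means:

```python
import random

def random_split(units, seed=0):
    """Randomly assign units to two equal-sized groups, as in a
    randomized trial."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical covariate (e.g., age) for 1000 subjects.
pool = random.Random(42)
ages = [pool.gauss(50, 10) for _ in range(1000)]

group_a, group_b = random_split(ages)
mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
# With 500 subjects per group, the two group means land close together,
# even though randomization never looked at the ages themselves.
```

This is exactly what an observational study cannot guarantee: there, group membership may itself depend on the covariates, so the means can differ systematically.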

To avoid conditions that render an experiment far less useful, physicians conducting medical trials, say for U.S. Food and Drug Administration approval, will quantify and randomize the covariates that can be identified. Researchers attempt to reduce the biases of observational studies with complicated statistical methods such as propensity score matching methods, which require large populations of subjects and extensive information on covariates. Outcomes are also quantified when possible (bone density, the amount of some cell or substance in the blood, physical strength or endurance, etc.) and not based on a subject's or a professional observer's opinion. In this way, the design of an observational study can render the results more objective and therefore, more convincing.

Ethics

By placing the distribution of the independent variable(s) under the control of the researcher, an experiment - particularly when it involves human subjects - introduces potential ethical considerations, such as balancing benefit and harm, fairly distributing interventions (e.g., treatments for a disease), and informed consent. For example, in psychology or health care, it is unethical to provide a substandard treatment to patients. Therefore, ethical review boards are supposed to stop clinical trials and other experiments unless a new treatment is believed to offer benefits as good as current best practice.[22] It is also generally unethical (and often illegal) to conduct randomized experiments on the effects of substandard or harmful treatments, such as the effects of ingesting arsenic on human health. To understand the effects of such exposures, scientists sometimes use observational studies instead.

Even when experimental research does not directly involve human subjects, it may still present ethical concerns. For example, the nuclear bomb experiments conducted by the Manhattan Project implied the use of nuclear reactions to harm human beings even though the experiments did not directly involve any human subjects.

Experimental method in Law

The experimental method can be useful in solving juridical problems (R. Zippelius, Die experimentierende Methode im Recht, 1991, ISBN 3-515-05901-6).

Notes

  1. ^ Cooperstock, Fred I. General Relativistic Dynamics: Extending Einstein's Legacy Throughout the Universe. Page 12. World Scientific. 2009. ISBN 978-981-4271-16-5
  2. ^ Griffith, W. Thomas. The Physics of Everyday Phenomena: A Conceptual Introduction to Physics. Page 4. New York: McGraw-Hill Higher Education. 2001. ISBN 0-07-232837-1.
  3. ^ Devine, Betsy. Fantastic realities: 49 mind journeys and a trip to Stockholm. Page 62. Wilczek, Frank. World Scientific. 2006. ISBN 978-981-256-649-2
  4. ^ Griffith, W. Thomas. The Physics of Everyday Phenomena: A Conceptual Introduction to Physics. Page 3. New York: McGraw-Hill Higher Education. 2001. ISBN 0-07-232837-1.
  5. ^ Holland, Paul W. 1986. "Statistics and Causal Inference." Journal of the American Statistical Association 81 (396): 945–960. http://www.jstor.org/stable/2289064.
  6. ^ Druckman, James N., Donald P. Green, James H. Kuklinski, and Arthur Lupia. 2011. Cambridge Handbook of Experimental Political Science. New York: Cambridge University Press.
  7. ^ Dickson, Eric S. 2011. "Economics Versus Psychology Experiments." In Cambridge Handbook of Experimental Political Science, ed. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia. New York: Cambridge University Press.
  8. ^ Cite error: The named reference Sabra was invoked but never defined (see the help page).
  9. ^ (El-Bizri 2005a)
    (El-Bizri 2005b)
  10. ^ (Toomer 1964, pp. 463–4)
  11. ^ (Toomer 1964, p. 465)
  12. ^ Bacon, Francis. Novum Organum. In Durant, Will. The Story of Philosophy. Page 101. Simon & Schuster Paperbacks. 1926. ISBN 978-0-671-69500-2
  13. ^ Durant, Will. The Story of Philosophy. Page 101 Simon & Schuster Paperbacks. 1926. ISBN 978-0-671-69500-2
  14. ^ Bell (2005; p.57)
  15. ^ Dubos (1986; p.155)
  16. ^ http://changingminds.org/explanations/research/design/experiment_types.htm
  17. ^ http://psychology.ucdavis.edu/SommerB/sommerdemo/experiment/types.htm
  18. ^ Dunning, Thad. Natural Experiments in the Social Sciences: A Design-Based Approach. Cambridge University Press.
  19. ^ a b *Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9. 
  20. ^ David A. Freedman, R. Pisani, and R. A. Purves. Statistics, 4th edition (W.W. Norton & Company, 2007) [1] ISBN 978-0-393-92972-0
  21. ^ David A. Freedman (2009) Statistical Models: Theory and Practice, Second edition, (Cambridge University Press) [2] ISBN 978-0-521-74385-3
  22. ^ Bailey, R. A (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9.  Pre-publication chapters are available on-line.
