Scientific control

From Wikipedia, the free encyclopedia
(Redirected from Experimental control)
Methods employed to reduce error in science tests
For other uses, see Control and Treatment and control groups.
Take identical growing plants (Argyroxiphium sandwicense) and give fertilizer to half of them. If there are differences between the fertilized treatment and the unfertilized treatment, these differences may be due to the fertilizer as long as there were no other confounding factors that affected the result. For example, if the fertilizer was spread by a tractor but no tractor was used on the unfertilized treatment, then the effect of the tractor needs to be controlled.

A scientific control is an experiment or observation designed to minimize the effects of variables other than the independent variable (i.e. confounding variables).[1] This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method.

Controlled experiments

See also: Scientific method and Experimental design

Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality.
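
As an illustration of that last point, the following sketch (hypothetical data, using NumPy) estimates the background noise power from a control measurement and uses it to correct a later measurement of signal plus noise:

    import numpy as np

    # Illustrative sketch (hypothetical data): a control recording made with no
    # signal present characterizes the microphone's background noise, which is
    # then used to correct a later measurement of signal plus noise.
    rng = np.random.default_rng(0)

    noise_only = 0.05 * rng.standard_normal(1000)                   # control measurement
    signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 1000))            # the quantity of interest
    signal_plus_noise = signal + 0.05 * rng.standard_normal(1000)   # later measurement

    # Because signal and noise are uncorrelated, their powers add, so the
    # noise power estimated from the control can be subtracted.
    noise_power = np.mean(noise_only ** 2)
    corrected_signal_power = np.mean(signal_plus_noise ** 2) - noise_power
    print(f"Estimated signal power: {corrected_signal_power:.3f}")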

For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, the artificial sweetener might be mixed with a dilutant, and it might be the dilutant that causes the effect. To control for the effect of the dilutant, the same test is run twice: once with the artificial sweetener in the dilutant, and once done exactly the same way but using the dilutant alone. Now the experiment is controlled for the dilutant, and the experimenter can distinguish between sweetener, dilutant, and non-treatment. Controls are most often necessary where a confounding factor cannot easily be separated from the primary treatments. For example, it may be necessary to use a tractor to spread fertilizer where there is no other practicable way to do so. The simplest solution is to have a treatment where a tractor is driven over plots without spreading fertilizer; in that way, the effects of tractor traffic are controlled.

The simplest types of control are negative and positive controls, and both are found in many different types of experiments.[2] When both are successful, these two controls are usually sufficient to eliminate most potential confounding variables: the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Other controls include vehicle controls, sham controls and comparative controls.[2]

Negative control

See also: Placebo-controlled study

Where there are only two possible outcomes, e.g. positive or negative, if the treatment group and the negative control (non-treatment group) both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable is involved in the phenomenon under study, and the positive results are not solely due to the treatment.

In other examples, outcomes might be measured as lengths, times, percentages, and so forth. In a drug test, for example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline upon which the treatment must improve. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group.
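
A minimal sketch of that comparison (hypothetical cure counts, using SciPy's Fisher exact test) might look like this:

    from scipy.stats import fisher_exact

    # Hypothetical counts of cured vs. not cured patients in each group.
    treatment_cured, treatment_not_cured = 30, 20   # 60% cured with the drug
    placebo_cured, placebo_not_cured = 18, 32       # 36% cured with placebo alone

    table = [
        [treatment_cured, treatment_not_cured],
        [placebo_cured, placebo_not_cured],
    ]

    # Fisher's exact test asks whether the two cure rates plausibly differ only
    # by chance; a small p-value suggests the treatment improved on the placebo
    # baseline rather than merely matching it.
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")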

Positive control


Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), it can be compared against a different test that is already known to work. The well-established test is a positive control, since we already know that the answer to the question (whether the test works) is yes.
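
As an illustrative sketch (hypothetical counts), the new test's sensitivity can be estimated from samples already classified as positive by the well-established test:

    # Hypothetical results on 100 samples classified as positive by the
    # well-established reference test (the positive control).
    true_positives = 92    # new test also reports "positive"
    false_negatives = 8    # new test misses the disease

    sensitivity = true_positives / (true_positives + false_negatives)
    print(f"Estimated sensitivity of the new test: {sensitivity:.0%}")   # 92%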

Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity.

If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also be compared with previous experimental results. For example, if the well-established disease test gives the same result as it did for previous experimenters, this indicates that the experiment is being performed in the same way as the earlier work.

When possible, multiple positive controls may be used: if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme.
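
A minimal sketch of such a calibration (hypothetical activity readings, fitting a straight line with NumPy) might look like this:

    import numpy as np

    # Hypothetical standard curve: assays run with known quantities of purified
    # enzyme (the positive controls), with the activity measured for each.
    known_quantity = np.array([0.0, 0.5, 1.0, 2.0, 4.0])            # e.g. micrograms of enzyme
    measured_activity = np.array([0.02, 0.26, 0.49, 1.03, 1.98])    # assay readout

    slope, intercept = np.polyfit(known_quantity, measured_activity, deg=1)

    # The fitted line converts the activity of an unknown extract back into an
    # estimated enzyme quantity.
    sample_activity = 0.75
    estimated_quantity = (sample_activity - intercept) / slope
    print(f"Estimated enzyme quantity: {estimated_quantity:.2f}")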

Randomization

Main article: Random assignment

In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that any differences are due to chance rather than to how the groups were chosen, thus correcting for systematic errors.

For example, in experiments where crop yield is affected (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield.
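
A minimal sketch of such a random assignment (hypothetical plot labels, using Python's standard library):

    import random

    # Hypothetical field of 12 plots to be split evenly between the fertilized
    # and unfertilized treatments.
    plots = [f"plot_{i:02d}" for i in range(1, 13)]

    random.seed(42)       # fixed seed only so the example is reproducible
    random.shuffle(plots)

    fertilized, unfertilized = plots[:6], plots[6:]
    print("Fertilized:  ", sorted(fertilized))
    print("Unfertilized:", sorted(unfertilized))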

Blind experiments

Main article: Blind experiment

Blinding is the practice of withholding information that may bias an experiment. For example, participants may not know who received an active treatment and who received a placebo. If this information were to become available to trial participants, patients could experience a stronger placebo effect, researchers could influence the experiment to meet their expectations (the observer effect), and evaluators could be subject to confirmation bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, sham surgery may be necessary to achieve blinding.
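
As an illustrative sketch (hypothetical identifiers), one common way to implement blinding is to label treatments with neutral codes and keep the key linking codes to contents with a third party:

    import random

    # Hypothetical double-blind allocation: kits are labelled with neutral codes,
    # and only a third party holds the key linking each code to its contents.
    participants = [f"P{i:02d}" for i in range(1, 11)]
    contents = ["active"] * 5 + ["placebo"] * 5

    random.seed(7)        # fixed seed only so the example is reproducible
    random.shuffle(contents)

    kit_codes = [f"KIT-{i:03d}" for i in range(1, 11)]
    blinding_key = dict(zip(kit_codes, contents))     # kept sealed until the study ends
    assignment = dict(zip(participants, kit_codes))   # all that researchers and subjects see

    print(assignment)     # participant -> coded kit, with no treatment information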

During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments and must be measured and reported. Meta-research has revealed high levels of unblinding in pharmacological trials. In particular, antidepressant trials are poorly blinded. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding.[3]

Blinding is an important tool of the scientific method, and is used in many fields of research. In some fields, such as medicine, it is considered essential.[4] In clinical research, a trial that is not blinded is called an open trial.

References

  1. ^ Life, Vol. II: Evolution, Diversity and Ecology: (Chs. 1, 21–33, 52–57). W. H. Freeman. 2006. p. 15. ISBN 978-0-7167-7674-1. Retrieved 14 February 2015.
  2. ^ a b Johnson PD, Besselsen DG (2002). "Practical aspects of experimental design in animal research" (PDF). ILAR J. 43 (4): 202–206. doi:10.1093/ilar.43.4.202. PMID 12391395. Archived from the original (PDF) on 2010-05-29.
  3. ^ Bello, Segun; Moustgaard, Helene; Hróbjartsson, Asbjørn (October 2014). "The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications". Journal of Clinical Epidemiology. 67 (10): 1059–1069. doi:10.1016/j.jclinepi.2014.05.007. ISSN 1878-5921. PMID 24973822.
  4. ^ "Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009)". cebm.net. 11 June 2009. Archived from the original on 26 October 2017. Retrieved 2 May 2018.
  5. ^ Lind, James. "A Treatise of the Scurvy" (PDF). Archived from the original (PDF) on 2 June 2015.
  6. ^ Simon, Harvey B. (2002). The Harvard Medical School Guide to Men's Health. New York: Free Press. p. 31. ISBN 0-684-87181-5.
