
Sifting the evidence—what's wrong with significance tests? Another comment on the role of statistical methods

  1. Jonathan A C Sterne (jonathan.sterne@bristol.ac.uk), senior lecturer in medical statistics,
  2. George Davey Smith, professor of clinical epidemiology

  1. Department of Social Medicine, University of Bristol, Bristol BS8 2PR
  2. Nuffield College, Oxford OX1 1NF

Correspondence to: J Sterne

The findings of medical research are often met with considerable scepticism, even when they have apparently come from studies with sound methodologies that have been subjected to appropriate statistical analysis. This is perhaps particularly the case with respect to epidemiological findings that suggest that some aspect of everyday life is bad for people. Indeed, one recent popular history, the medical journalist James Le Fanu's The Rise and Fall of Modern Medicine, went so far as to suggest that the solution to medicine's ills would be the closure of all departments of epidemiology.1

One contributory factor is that the medical literature shows a strong tendency to accentuate the positive: positive outcomes are more likely to be reported than null results.2-4 By this means alone a host of purely chance findings will be published, as by conventional reasoning examining 20 associations will produce one result that is "significant at P=0.05" by chance alone. If only positive findings are published then they may be mistakenly considered to be of importance rather than being the necessary chance results produced by the application of criteria for meaningfulness based on statistical significance. As many studies contain long questionnaires collecting information on hundreds of variables, and measure a wide range of potential outcomes, several false positive findings are virtually guaranteed. The high volume and often contradictory nature5 of medical research findings, however, is not only because of publication bias. A more fundamental problem is the widespread misunderstanding of the nature of statistical significance.
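The "20 associations" arithmetic above can be made concrete with a short calculation (an illustrative sketch, not part of the original article): if all 20 null hypotheses are true and the tests are independent, the expected number of results significant at P=0.05 is exactly one, and the chance of seeing at least one is well over half.

```python
# Illustration of the multiple-comparisons arithmetic described in the text.
# Assumes 20 independent tests of true null hypotheses at the 0.05 level.
alpha = 0.05
n_tests = 20

# Expected number of spurious "significant" results: 20 * 0.05 = 1
expected_false_positives = n_tests * alpha

# Probability of at least one spurious result: 1 - 0.95^20 ≈ 0.64
prob_at_least_one = 1 - (1 - alpha) ** n_tests

print(expected_false_positives)        # 1.0
print(round(prob_at_least_one, 2))     # 0.64
```

So even a single study examining 20 unrelated associations is more likely than not to produce at least one nominally significant finding by chance alone, which is why selective publication of such findings is so misleading.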

Citation: Sterne J A C, Cox D R, Smith G D. Sifting the evidence—what's wrong with significance tests? Another comment on the role of statistical methods. BMJ 2001;322:226. doi:10.1136/bmj.322.7280.226
