
Citations of:

Is Evidence Historical?

In Peter Achinstein & Laura J. Snyder, Scientific Methods: Conceptual and Historical Problems. Malabar, Fla.: Krieger Pub. Co., pp. 95–117 (1994)

  • Intersubjective corroboration. Darrell Patrick Rowbottom - 2008 - Studies in History and Philosophy of Science Part A 39 (1): 124-132.
    How are we to understand the use of probability in corroboration functions? Popper says logically, but does not show we could have access to, or even calculate, probability values in a logical sense. This makes the logical interpretation untenable, as Ramsey and van Fraassen have argued. If corroboration functions only make sense when the probabilities employed therein are subjective, however, then what counts as impressive evidence for a theory might be a matter of convention, or even whim. So isn’t so-called ‘corroboration’ just a matter of psychology? In this paper, I argue that we can go some way towards addressing this objection by adopting an intersubjective interpretation, of the form advocated by Gillies, with respect to corroboration. I show why intersubjective probabilities are preferable to subjective ones when it comes to decision making in science: why group decisions are liable to be superior to individual ones, given a number of plausible conditions. I then argue that intersubjective corroboration is preferable to intersubjective confirmation of a Bayesian variety, because there is greater opportunity for principled agreement concerning the factors involved in the former.
  • What experiment did we just do? Counterfactual error statistics and uncertainties about the reference class. Kent W. Staley - 2002 - Philosophy of Science 69 (2): 279-299.
    Experimenters sometimes insist that it is unwise to examine data before determining how to analyze them, as it creates the potential for biased results. I explore the rationale behind this methodological guideline from the standpoint of an error statistical theory of evidence, and I discuss a method of evaluating evidence in some contexts when this predesignation rule has been violated. I illustrate the problem of potential bias, and the method by which it may be addressed, with an example from the search for the top quark. A point in favor of the error statistical theory is its ability, demonstrated here, to explicate such methodological problems and suggest solutions, within the framework of an objective theory of evidence.
  • Evidence of expert's evidence is evidence. Luca Moretti - 2016 - Episteme 13 (2): 208-218.
    John Hardwig has championed the thesis (NE) that evidence that an expert EXP has evidence for a proposition P, constituted by EXP’s testimony that P, is not evidence for P itself, where evidence for P is generally characterized as anything that counts towards establishing the truth of P. In this paper, I first show that (NE) yields tensions within Hardwig’s overall view of epistemic reliance on experts and makes it imply unpalatable consequences. Then, I use the Shogenji–Roche theorem of transitivity of incremental confirmation to show that (NE) is false if a natural Bayesian formalization of the above notion of evidence is implemented. I concede that Hardwig could resist my Bayesian objection if he re-interpreted (NE) as a more precise thesis that only applies to community-focused evidence. I argue, however, that this precisification, while diminishing the philosophical relevance of (NE), wouldn’t settle the tensions internal to Hardwig’s views.
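    The transitivity result Moretti invokes can be stated compactly. On one standard formulation (a sketch of Shogenji's screening-off condition as I understand it, not text quoted from Moretti's paper): if A confirms B, B confirms C, and B screens off A from C, then A confirms C. Applied to the testimony case, A would be EXP's testimony that P, B the proposition that EXP has evidence for P, and C the proposition P itself.

    ```latex
    % Screening-off transitivity of incremental confirmation (sketch).
    % Assumes P is a probability function and every conditional
    % probability below is defined.
    \[
    \left.
    \begin{aligned}
    &P(B \mid A) > P(B)\\
    &P(C \mid B) > P(C)\\
    &P(C \mid B \wedge A) = P(C \mid B)\\
    &P(C \mid \neg B \wedge A) = P(C \mid \neg B)
    \end{aligned}
    \right\}
    \;\Longrightarrow\;
    P(C \mid A) > P(C)
    \]
    ```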
  • Swimming in evidence: A reply to Maher. Peter Achinstein - 1996 - Philosophy of Science 63 (2): 175-182.
  • Explanation v. Prediction: Which Carries More Weight? Peter Achinstein - 1994 - PSA Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994 (2): 156-164.
    According to a standard view, predictions of new phenomena provide stronger evidence for a theory than explanations of old ones. More guardedly, a theory that predicts phenomena that did not prompt the initial formulation of that theory is better supported by those phenomena than is a theory by known phenomena that generated the theory in the first place. So say various philosophers of science, including William Whewell (1847) in the 19th century and Karl Popper (1959) in the 20th, to mention just two. Stephen Brush takes issue with this on historical grounds. In a series of fascinating papers he argues that generally speaking scientists do not regard the fact that a theory predicts new phenomena, even ones of a kind totally different from those that prompted the theory in the first place, as providing better evidential support for that theory than is provided by already known facts explained by the theory.
  • Philosophical Aspects of Evidence and Methodology in Medicine. Jesper Jerkert - 2021 - Dissertation, Royal Institute of Technology, Stockholm.
    The thesis consists of an introduction and five papers. The introduction gives a brief historical survey of empirical investigations into the effectiveness of medicinal interventions, as well as surveys of the concept of evidence and of the history and philosophy of experiments. The main ideas of the EBM movement are also presented. Paper I: Concerns have been raised that clinical trials do not offer reliable evidence for some types of treatment, in particular for highly individualised treatments, for example traditional homeopathy. With respect to individualised treatments, it is argued that such concerns are unfounded. There are two minimal conditions related to the nature of the treatments that must be fulfilled for evaluability in a clinical trial, namely the proper distinction of treatment groups and the elimination of confounding variables or variations. These conditions do not preclude the testing of individualised medicine. Paper II: Traditionally, mechanistic reasoning has been assigned a negligible role in the EBM literature. When discussed, mechanistic reasoning has almost exclusively been positive, both in an epistemic sense of claiming that there is a mechanistic chain and in a health-related sense of there being claimed benefits for the patient. Negative mechanistic reasoning has been neglected. I distinguish three main types of negative mechanistic reasoning and subsume them under a new definition. One of the three distinguished types, which is negative only in the health-related sense, has a corresponding positive counterpart, whereas the other two, which are epistemically negative, do not have such counterparts, at least not ones that are particularly interesting as evidence. Accounting for negative mechanistic reasoning in EBM is therefore partly different from accounting for positive mechanistic reasoning. Paper III: Evidence hierarchies are lists of investigative strategies ordered with regard to the claimed strength of evidence. They have been used for a couple of decades within EBM, particularly for the assessment of evidence for treatment recommendations, but they remain controversial. An under-investigated question is what the order in the hierarchy means. Four interpretations of the order are distinguished and discussed. The two most credible ones are, in rough terms, “typically stronger” and “ideally stronger”. The GRADE framework seems to be based on the “typically stronger” reading. Even if the interpretation of an evidence hierarchy were established, hierarchies appear to be rather unhelpful for the task of evidence aggregation. However, specifying the intended order relation may help sort out disagreements. Paper IV: There are three main arguments for randomisation that connect inseparably to theoretical concepts: (1) randomisation is useful for performing null hypothesis testing; (2) randomisation is needed for plausible causal inferences from treatment to effect; (3) randomisation is acceptable and computationally convenient in a Bayesian setting. A critical scrutiny of these arguments shows that (1) is acceptable in the context of clinical trials. As for (2), it is argued that randomisation only provides weak reasons for drawing causal inferences in the context of real clinical trials. Argument (3) is weak because it is controversial among Bayesians, and because formally Bayesian analyses of trial results are rarely asked for. Paper V: Practical arguments for randomisation are arguments with no necessary connections to theoretical frameworks like null hypothesis testing or causal inferences. Four common practical arguments in the context of clinical trials are distinguished and assessed: (1) randomisation contributes to allocation concealment; (2) randomisation contributes to the baseline balance of treatment groups; (3) randomisation decreases self-selection bias; (4) randomisation removes allocation bias. Argument (4) is rejected, while arguments (1) and (3) are approved. Argument (2) is rejected if it is formulated so as to be independent from (1) and (3), but it is true that randomisation contributes to balance through the mechanisms mentioned in (1) and (3). It is judged that (1) may be the strongest single argument.
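    The first argument of Paper IV, that randomisation underwrites null hypothesis testing, can be illustrated with a permutation (randomisation) test, whose validity rests directly on the random allocation of treatment labels. The sketch below is purely illustrative (the function name and the data are invented, not taken from the thesis):

    ```python
    import random

    def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
        """Estimate a p-value for the absolute difference in group means
        under the null hypothesis that treatment labels are exchangeable,
        which random allocation is what licenses. Hypothetical helper for
        illustration only."""
        rng = random.Random(seed)
        observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
        pooled = list(group_a) + list(group_b)
        n_a = len(group_a)
        extreme = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)  # re-randomise the treatment labels
            diff = abs(sum(pooled[:n_a]) / n_a
                       - sum(pooled[n_a:]) / (len(pooled) - n_a))
            if diff >= observed:
                extreme += 1
        # Add-one correction keeps the estimated p-value strictly positive.
        return (extreme + 1) / (n_permutations + 1)

    # Well-separated outcomes yield a small p-value; identical ones do not.
    p_sep = permutation_test([5.1, 5.3, 5.0, 5.2], [7.9, 8.1, 8.0, 7.8])
    p_same = permutation_test([5.0, 5.1, 5.2, 5.3], [5.0, 5.1, 5.2, 5.3])
    ```

    The point of the sketch is that the reference distribution is generated by repeating the very allocation procedure the trial used; without random allocation, shuffling labels would have no justification.
    
    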
