The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational‐level theories of cognition. To utilize this constraint, a precise and workable definition of “computational tractability” is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial‐time computability, leading to the P‐Cognition thesis. This article explains how and why the P‐Cognition thesis may be overly restrictive, risking the exclusion of veridical computational‐level theories from scientific investigation. An argument is made to replace the P‐Cognition thesis by the FPT‐Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed‐parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified.
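For readers unfamiliar with the complexity classes at issue, the contrast can be put roughly as follows (a standard textbook characterization, not a formulation taken from the article itself): a problem is polynomial-time computable when its running time is bounded by a polynomial in the input size n alone, whereas it is fixed-parameter tractable when any super-polynomial growth can be confined to an input parameter k that is expected to remain small in practice.

```latex
% P-Cognition: time polynomial in the overall input size n
T(n) \leq c \cdot n^{d} \quad \text{for constants } c, d

% FPT-Cognition: time may grow arbitrarily in a parameter k of the input,
% but only polynomially in n itself
T(n, k) \leq f(k) \cdot n^{d} \quad \text{for some computable } f \text{ and constant } d
```

On the FPT reading, inputs that are large overall can still be processed tractably provided the relevant parameter stays small.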
In 1983, Dr. J. Robin Warren and Dr. Barry Marshall reported finding a new kind of bacteria in the stomachs of people with gastritis. Warren and Marshall were soon led to the hypothesis that peptic ulcers are generally caused, not by excess acidity or stress, but by a bacterial infection. Initially, this hypothesis was viewed as preposterous, and it is still somewhat controversial. In 1994, however, a U.S. National Institutes of Health Consensus Development Panel concluded that infection appears to play an important contributory role in the pathogenesis of peptic ulcers, and recommended that antibiotics be used in their treatment. Peptic ulcers are common, affecting up to 10% of the population, and evidence has mounted that many ulcers can be cured by eradicating the bacteria responsible for them.
This inaugural handbook documents the distinctive research field that utilizes history and philosophy in the investigation of theoretical, curricular and pedagogical issues in the teaching of science and mathematics. Contributed to by 130 researchers from 30 countries, it provides a logically structured, fully referenced guide to the ways in which science and mathematics education is informed by the history and philosophy of these disciplines, as well as by the philosophy of education more generally. The first handbook to cover the field, it lays down a much-needed marker of progress to date and provides a platform for informed and coherent future analysis and research of the subject. The publication comes at a time of heightened worldwide concern over the standard of science and mathematics education, attended by fierce debate over how best to reform curricula and enliven student engagement in the subjects. There is a growing recognition among educators and policy makers that the learning of science must dovetail with learning about science; this handbook is uniquely positioned as a locus for the discussion. The handbook features sections on pedagogical, theoretical, national, and biographical research, setting the literature of each tradition in its historical context. Each chapter engages in an assessment of the strengths and weaknesses of the research addressed, and suggests potentially fruitful avenues of future research. A key element of the handbook’s broader analytical framework is its identification and examination of unnoticed philosophical assumptions in science and mathematics research. It reminds readers at a crucial juncture that there has been a long and rich tradition of historical and philosophical engagements with science and mathematics teaching, and that lessons can be learnt from these engagements for the resolution of current theoretical, curricular and pedagogical questions that face teachers and administrators.
What features will something have if it counts as an explanation? And will something count as an explanation if it has those features? In the second half of the 20th century, philosophers of science set for themselves the task of answering such questions, just as a priori conceptual analysis was generally falling out of favor. And as it did, most philosophers of science just moved on to more manageable questions about the varieties of explanation and discipline-specific scientific explanation. Often, such shifts are sound strategies for problem-solving. But leaving fallow certain basic conceptual issues can also result in foundational debates.
In robustness analysis, hypotheses are supported to the extent that a result proves robust, and a result is robust to the extent that we detect it in diverse ways. But what precise sense of diversity is at work here? In this paper, I show that the formal explications of evidential diversity most often appealed to in work on robustness – which all draw in one way or another on probabilistic independence – fail to shed light on the notion of diversity relevant to robustness analysis. I close by briefly outlining a promising alternative approach inspired by Horwich’s (1982) eliminative account of evidential diversity.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
This paper evaluates four competing psychological explanations for why the jury in the O.J. Simpson murder trial reached the verdict they did: explanatory coherence, Bayesian probability theory, wishful thinking, and emotional coherence. It describes computational models that provide detailed simulations of juror reasoning for explanatory coherence, Bayesian networks, and emotional coherence, and argues that the latter account provides the most plausible explanation of the jury's decision.
In this paper we present Drama, a distributed model of analogical mapping that integrates semantic and structural constraints on constructing analogies. Specifically, Drama uses holographic reduced representations (Plate, 1994), a distributed representation scheme, to model the effects of structure and meaning on human performance of analogical mapping. Drama is compared to three symbolic models of analogy (SME, Copycat, and ACME) and one partially distributed model (LISA). We describe Drama's performance on a number of example analogies and assess the model in terms of neurological and psychological plausibility. We argue that Drama's successes are due largely to integrating structural and semantic constraints throughout the mapping process. We also claim that Drama is an existence proof of using distributed representations to model high‐level cognitive phenomena.
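As a concrete illustration of the representational machinery mentioned here, the sketch below shows how holographic reduced representations (Plate, 1994) bind and approximately unbind role/filler vectors by circular convolution and its approximate inverse. It is a minimal toy example under assumed parameters (vector dimension, random seed), not Drama's actual implementation.

```python
import numpy as np

def bind(a, b):
    """Bind two HRR vectors via circular convolution, computed with FFTs."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Approximate inverse of an HRR vector: element 0, then the rest reversed."""
    return np.concatenate(([a[0]], a[:0:-1]))

def unbind(trace, a):
    """Recover an approximation of b from trace = bind(a, b)."""
    return bind(trace, involution(a))

rng = np.random.default_rng(0)
n = 1024
role = rng.normal(0.0, 1.0 / np.sqrt(n), n)     # random near-unit-length vectors, as in Plate's scheme
filler = rng.normal(0.0, 1.0 / np.sqrt(n), n)

trace = bind(role, filler)
recovered = unbind(trace, role)

# The recovered vector is noisy, but it correlates strongly with the original filler.
cos = np.dot(recovered, filler) / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(f"cosine similarity between recovered and original filler: {cos:.2f}")
```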
Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not be as bad as Bostrom suggests. If the superintelligence must *learn* complex final goals, then this means such a superintelligence must in effect *reason* about its own goals. And because it will be especially clear to a superintelligence that there are no sharp lines between one agent's goals and another's, that reasoning could therefore automatically be ethical in nature.
Over the past years, a number of probabilistic measures of coherence have been proposed. As shown in the paper, however, many of them do not conform to the intuition that equivalent testimonies are highly coherent, regardless of their prior probability.
Several recent articles on the concept of intentional action center on experimental findings suggesting that intentionality ascription can be affected by moral factors. I argue that the explanation for these phenomena lies in the workings of a tacit moral judgment mechanism, capable under certain circumstances of altering normal intentionality ascriptions. This view contrasts with that of Knobe (2006), who argues that the findings show that the concept of intentional action invokes evaluative notions. I discuss and reject possible objections to the moral mechanism view, and offer arguments supporting the model over Knobe's account on grounds of simplicity and plausibility.
Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, it is unclear how any intelligent system could learn its final values, since to judge one supposedly "final" value against another seems to require a further background standard for judging. Second, it is unclear how to determine the content of a system's values based on its physical or computational structure. Finally, there is the distinctly ethical question of which values we should best aim for the system to learn. I outline a potential answer to these interrelated puzzles, centering on a "miktotelic" proposal for blending a complex, learnable final value out of many simpler ones.
It is shown that the probabilistic theories of coherence proposed up to now produce a number of counter-intuitive results. The last section provides some reasons for believing that no probabilistic measure will ever be able to adequately capture coherence. First, there can be no function whose arguments are nothing but tuples of probabilities, and which assigns different values to pairs of propositions {A, B} and {A, C} if A implies both B and C, or their negations, and if P(B)=P(C). But such sets may indeed differ in their degree of coherence. Second, coherence is sensitive to explanatory relations between the propositions in question. Explanation, however, can hardly be captured solely in terms of probability.
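A toy model (my own illustration, not one drawn from the paper) makes the first point vivid. For a fair six-sided die, let A = 'the die shows 2', B = 'the die shows an even number', and C = 'the die shows 1, 2, or 3'. Then A entails both B and C, and

```latex
P(A) = \tfrac{1}{6}, \qquad P(B) = P(C) = \tfrac{1}{2}, \qquad
P(A \wedge B) = P(A \wedge C) = \tfrac{1}{6}, \qquad
P(A \vee B) = P(A \vee C) = \tfrac{1}{2}.
```

Every probability such a measure could consult is the same for {A, B} as for {A, C}, so any function of probabilities alone must assign the two sets equal coherence, whatever one's intuitions about how well their members hang together.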
The paper introduces an extension of the proposal according to which conceptual representations in cognitive agents should be understood as heterogeneous proxytypes. The main contribution of this paper is that it details how to reconcile, under a heterogeneous representational perspective, different theories of typicality about conceptual representation and reasoning. In particular, it provides a novel theoretical hypothesis - as well as a novel categorization algorithm called DELTA - showing how to integrate the representational and reasoning assumptions of the theory-theory of concepts with those ascribed to the prototype and exemplar-based theories.
This paper proposes that self-deception results from the emotional coherence of beliefs with subjective goals. We apply the HOTCO computational model of emotional coherence to simulate a rich case of self-deception from Hawthorne's The Scarlet Letter. We argue that this model is more psychologically realistic than other available accounts of self-deception, and discuss related issues such as wishful thinking, intention, and the division of the self.
Structural and functional descriptions of technical artefacts play an important role in engineering practice. A complete description of a technical artefact involves a description of both functional and structural features. Engineers, moreover, assume that there is an intimate relationship between the function and structure of technical artefacts, and they reason from functional properties to structural ones and vice versa. This raises the question of how structural and functional descriptions are related. The kinds of inference patterns that establish coherence between structural and functional descriptions are explored in this paper, using the analysis of coherence-creating relations of Thagard et al. Explanatory, analogical and practical inference patterns are discussed, and it is argued that of these three, practical inferences may be the most important. Practical inferences, however, cannot provide a full underpinning of the coherence of structural and functional descriptions of technical artefacts. The paper ends with the suggestion that any account of the coherence of the structural and functional descriptions of technical artefacts must involve reference to their intentional features. Keywords: Technical artefact; Structural description; Functional description; Coherence relation; Practical inference.
Discussion is frequently observed in democratic politics, but change in view is rarely observed. Call this the unchanging minds hypothesis. I assume that a given belief or desire is not isolated, but, rather, is located in a network structure of attitudes, such that persuasion sufficient to change an attitude in isolation is not sufficient to change the attitude as supported by its network. The network structure of attitudes explains why the unchanging minds hypothesis seems to be true, and why it is false: due to the network, the effects of deliberative persuasion are typically latent, indirect, delayed, or disguised. Finally, I connect up the coherence account of attitudes to several topics in recent political and democratic theory. Key Words: deliberation; democracy; persuasion; change in view; coherence.
Although it’s sometimes thought that pluralism about truth is unstable—or, worse, just a non-starter—it’s surprisingly difficult to locate collapsing arguments that conclusively demonstrate either its instability or its inability to get started. This paper exemplifies the point by examining three recent arguments to that effect. However, it ends with a cautionary tale; for pluralism may not be any better off than other traditional theories that face various technical objections, and may be worse off in facing them all.
Single cell recordings in monkeys provide strong evidence for an important role of the motor system in action understanding. This evidence is backed up by data from studies of the (human) mirror neuron system using neuroimaging or TMS techniques, and behavioral experiments. Although the data acquired from single cell recordings are generally considered to be robust, several debates have shown that the interpretation of these data is far from straightforward. We will show that research based on single-cell recordings allows for unlimited content attribution to mirror neurons. We will argue that a theoretical analysis of the mirroring process, combined with behavioral and brain studies, can provide the necessary limitations. A complexity analysis of the type of processing attributed to the mirror neuron system can help formulate restrictions on what mirroring is and what cognitive functions could, in principle, be explained by a mirror mechanism. We argue that processing at higher levels of abstraction needs assistance of non-mirroring processes to such an extent that subsuming the processes needed to infer goals from actions under the label ‘mirroring’ is not warranted.
Belief revision theory and philosophy of science both aspire to shed light on the dynamics of knowledge – on how our view of the world changes in the light of new evidence. Yet these two areas of research have long seemed strangely detached from each other, as witnessed by the small number of cross-references and researchers working in both domains. One may speculate as to what has brought about this surprising, and perhaps unfortunate, state of affairs. One factor may be that while belief revision theory has traditionally been pursued in a bottom-up manner, focusing on the endeavors of single inquirers, philosophers of science, inspired by logical empiricism, have tended to be more interested in science as a multi-agent or agent-independent phenomenon.
The point of this paper is to provide a principled framework for a naturalistic, interactivist-constructivist model of rational capacity and a sketch of the model itself, indicating its merits. Being naturalistic, it takes its orientation from scientific understanding. In particular, it adopts the developing interactivist-constructivist understanding of the functional capacities of biological organisms as a useful naturalistic platform for constructing such higher order capacities as reason and cognition. Further, both the framework and model are marked by the finitude and fallibility that science attributes to organisms, with their radical consequences, and also by the individual and collective capacities to improve their performances that learning organisms display. Part A prepares the ground for the exposition through a critique of the dominant Western analytic tradition in rationalising science, followed by a brief exposition of the naturalist framework that will be employed to frame the construction. This results in two sets of guidelines for constructing an alternative. Part B provides the new conception of reason as a rich complex of processes of improvement against epistemic values, and argues its merits. It closes with an account of normativity and our similarly developing rational knowledge of it, including (reflexively) of reason itself.
The aim of this essay is to develop a coherence theory for the justification of evidentiary judgments in law. The main claim of the coherence theory proposed in this article is that a belief about the events being litigated is justified if and only if it is a belief that an epistemically responsible fact finder might hold by virtue of its coherence in like circumstances. The article argues that this coherentist approach to evidence and legal proof has the resources to meet some of the main objections that may be addressed against attempts to analyze the justification of evidentiary judgments in law in coherentist terms. It concludes by exploring some implications of the proposed version of legal coherentism for a jurisprudence of evidence.
How can we explain the intentional nature of an expert’s actions, performed without immediate and conscious control, relying instead on automatic cognitive processes? How can we account for the differences and similarities with a novice’s performance of the same actions? Can a naturalist explanation of intentional expert action be in line with a philosophical concept of intentional action? Answering these and related questions in a positive sense, this dissertation develops a three-step argument. Part I considers different methods of explanations in cognitive neuroscience (Bennett & Hacker’s philosophical, conceptual analysis; Marr’s three levels of explanation; Neural Correlates of Consciousness research; mechanistic explanation), defending ‘mechanistic explanation’ as a method that provides the necessary tools for integrating interdisciplinary insights into human action. Furthermore, a dynamic, explanatory mechanism allows the assessment of the impact of learning and development on expert action in a valuable way that other methods don’t. Part II continues by scrutinizing several cognitive neuroscientific theories of learning and development (neuroconstructivism; dual-processing theories; simulation theory; extended mind/cognition hypothesis), arguing for the complex interactions between different types of processing and different action representations involved in expert action performances. Moreover, according to our discussion of a particular ‘simulation theory’ these interactions can be influenced in several ways with the use of language, allowing an agent to configure a specific action representation for performance at a later stage. The results of Parts I and II are then applied in Part III to a parallel discussion of philosophical analyses of intentional action (discussing i.a. Frankfurt, Bratman, Pacherie and Ricoeur) and cognitive neuroscientific insights in it. Both approaches are found to converge in emphasizing the importance for an expert to develop stable patterns of actions that comply maximally not only with his intentions, but also with his motor expertise and with situational conditions. Consequently, his actions – automatic, or not – rely on this ‘sculpted space of actions’.
This paper explores the ethical relevance of a precise new characterization of coherence as maximization of satisfaction of positive and negative constraints. A coherence problem can be stated by specifying a set of elements to be accepted or rejected along with sets of positive and negative constraints that incline pairs of elements to be accepted together or rejected together. Computationally tractable and psychologically plausible algorithms are available for determining the acceptance and rejection of elements in a way that reliably approximates coherence maximization. This paper shows how justification of ethical principles and particular judgments can be accomplished by taking into account deductive, explanatory, analogical, and deliberative coherence.
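Stated a little more formally (a standard rendering of the constraint-satisfaction characterization described here, in the spirit of Thagard and Verbeurgt's usual presentation), the task is to partition the elements into an accepted set A and a rejected set R so as to maximize the total weight of satisfied constraints, where a positive constraint is satisfied when its two elements land on the same side and a negative constraint is satisfied when they land on opposite sides:

```latex
W(A, R) \;=\;
\sum_{\substack{(i,j) \in C^{+} \\ \{i,j\} \subseteq A \ \text{or}\ \{i,j\} \subseteq R}} w_{ij}
\;+\;
\sum_{\substack{(i,j) \in C^{-} \\ i \in A,\, j \in R \ \text{or}\ i \in R,\, j \in A}} w_{ij}
```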
The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent work in Bayesian epistemology. This work shows that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These theorems are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these results bear on different conceptions of coherence. I investigate this situation using Thagard’s ECHO model of explanatory coherence. Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports.
The idea that in order to be objective, research should be value-free has recently been questioned in philosophy of science. I concentrate on two senses of objectivity, detached objectivity and interactive objectivity, that do not require value-freedom. I use each of these to assess a young, strongly value-laden and overtly political discipline: indigenous studies (IS). It has been criticised as relativistic and essentialistic, and in consequence, as not objective in the detached sense of objectivity, as values are used in place of evidence. When addressing these critiques, I compare contemporary Sámi IS to early Finnish folkloristics. The interactive objectivity of the Sámi IS research community is increasing, and outside criticism is being taken into account. As a result, the detached objectivity of the conducted research has also increased.
Coherence plays an important role in psychology. In this article, I suggest that coherence takes two main forms in humans’ cognitive system. The first belongs to ‘system 1’: it relies on the degree of coherence between different representations to regulate them, without coherence being represented. By contrast, other mechanisms, belonging to ‘system 2’, allow humans to represent the degree of coherence between different representations and to draw inferences from it. It is suggested that the mechanisms of explicit coherence evaluation have social functions. They are used as means of epistemic vigilance—to evaluate what other people tell us. They can also be turned inwards to examine the coherence of our own beliefs. Their function is then to minimize the chances that we are perceived as being incoherent. Evidence from different domains of psychology is briefly reviewed in support of these hypotheses.
This article is a response to Elijah Millgram's argument that my characterization of coherence as constraint satisfaction is inadequate for philosophical purposes because it provides no guarantee that the most coherent theory available will be true. I argue that the constraint satisfaction account of coherence satisfies the philosophical, computational, and psychological prerequisites for the development of epistemological and ethical theories.
I argue that coherence is truth-conducive in that coherence implies an increase in the probability of truth. Central to my argument is a certain principle for transitivity in probabilistic support. I then address a question concerning the truth-conduciveness of coherence as it relates to (something else I argue for) the truth-conduciveness of consistency, and consider how the truth-conduciveness of coherence bears on coherentist theories of justification.
Recent theorizing about the nature of the cognitive impairment in autism suggests that autistic individuals display abnormally weak central coherence, the capacity to integrate information in order to make sense of one’s environment. Our article shows the relevance of computational models of coherence to the understanding of weak central coherence. Using a theory of coherence as constraint satisfaction, we show how weak coherence can be simulated in a connectionist network that has unusually high inhibition compared to excitation. This connectionist model simulates autistic behaviour on both the false belief task and the homograph task.
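To make the mechanism concrete, here is a minimal sketch of the kind of constraint network at issue: excitatory and inhibitory links between units, a standard synchronous activation update, and inhibition set much stronger than excitation. The link weights, decay rate, and unit labels are arbitrary stand-ins chosen for illustration, not the parameters of the published simulations.

```python
import numpy as np

def settle(weights, decay=0.05, steps=200, a_min=-1.0, a_max=1.0, clamped=0):
    """Relax a constraint network. weights[i, j] > 0 is an excitatory link,
    weights[i, j] < 0 an inhibitory link. Unit `clamped` stands for the given
    evidence and is held at activation 1."""
    n = weights.shape[0]
    a = np.full(n, 0.01)
    for _ in range(steps):
        a[clamped] = 1.0
        net = weights @ a
        a = np.clip(
            np.where(net > 0,
                     a * (1 - decay) + net * (a_max - a),
                     a * (1 - decay) + net * (a - a_min)),
            a_min, a_max)
    a[clamped] = 1.0
    return a

# Toy network: evidence (unit 0) excites two rival interpretations (units 1 and 2),
# which inhibit each other far more strongly than they are excited.
W = np.zeros((3, 3))
W[0, 1] = W[1, 0] = 0.05
W[0, 2] = W[2, 0] = 0.04
W[1, 2] = W[2, 1] = -0.20   # inhibition much stronger than excitation

print(settle(W))  # the slightly better-supported interpretation wins; the rival is driven negative
```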
Situational irony concerns what it is about a situation that causes people to describe it as ironic. Although situational irony is as complex and commonplace as verbal and literary irony, it has received nowhere near the same attention from cognitive scientists and other scholars. This paper presents the bicoherence theory of situational irony, based on the theory of conceptual coherence (Kunda & Thagard, 1996; Thagard & Verbeurgt, 1998). On this theory, a situation counts as ironic when it is conceived as having a bicoherent conceptual structure and adequate cognitive salience, and as evoking an appropriate configuration of emotions. The theory is applied to a corpus of 250 examples of situational ironies gathered automatically from electronic news sources. A useful taxonomy of situational ironies is produced, new predictions and insights into situational irony are discussed, and extensions of the theory to other forms of irony are examined.
Advancement in cognitive science depends, in part, on doing some occasional ‘theoretical housekeeping’. We highlight some conceptual confusions lurking in an important attempt at explaining the human capacity for rational or coherent thought: Thagard & Verbeurgt’s computational-level model of humans’ capacity for making reasonable and truth-conducive abductive inferences (1998; Thagard, 2000). Thagard & Verbeurgt’s model assumes that humans make such inferences by computing a coherence function (f_coh), which takes as input representation networks and their pair-wise constraints and gives as output a partition into accepted (A) and rejected (R) elements that maximizes the weight of satisfied constraints. We argue that their proposal gives rise to at least three difficult problems.
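To fix ideas about the function under discussion, here is a brute-force sketch of the input/output behaviour ascribed to f_coh: it exhaustively scores every accept/reject partition and returns one that maximizes the weight of satisfied constraints. The element names and weights are made up for illustration, and the exhaustive search is exponential in the number of elements, which is precisely the tractability worry raised about computational-level models of this kind.

```python
from itertools import product

def f_coh(elements, positive, negative):
    """Return an accept/reject partition maximizing the weight of satisfied
    constraints. `positive` and `negative` map frozenset({i, j}) -> weight."""
    best_score, best_accepted = float("-inf"), set()
    for bits in product([True, False], repeat=len(elements)):
        accepted = {e for e, keep in zip(elements, bits) if keep}
        score = 0.0
        for pair, w in positive.items():
            i, j = tuple(pair)
            if (i in accepted) == (j in accepted):   # accepted together or rejected together
                score += w
        for pair, w in negative.items():
            i, j = tuple(pair)
            if (i in accepted) != (j in accepted):   # split across the partition
                score += w
        if score > best_score:
            best_score, best_accepted = score, accepted
    return best_accepted, set(elements) - best_accepted, best_score

# Toy example: two rival hypotheses each cohere with the evidence e but incohere with one another.
elements = ["e", "h1", "h2"]
positive = {frozenset({"e", "h1"}): 2.0, frozenset({"e", "h2"}): 1.0}
negative = {frozenset({"h1", "h2"}): 1.5}
print(f_coh(elements, positive, negative))   # accepts e and h1, rejects h2
```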
This paper examines the concept of coherence and its role in legal reasoning. First, it identifies some problem areas confronting coherence theories of legal reasoning about both disputed questions of fact and disputed questions of law. Second, with a view to solving these problems, it proposes a coherence model of legal reasoning. The main tenet of this coherence model is that a belief about the law and the facts under dispute is justified if it is “optimally coherent,” that is, if it is such that an epistemically responsible legal decision-maker would have accepted it as justified by virtue of its coherence in like circumstances. Last, looking beyond the coherence theory, the paper explores the implications of the version of legal coherentism proposed for a general theory of legal reasoning and rationality.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs”.
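For orientation, two of the accounts mentioned are usually stated as follows in this literature (the formulations standardly attributed to Shogenji and to Olsson; the paper's own proposal is not reproduced here):

```latex
% Shogenji's ratio measure: how much more probable the propositions are together
% than they would be if they were statistically independent
C_{S}(A_1, \ldots, A_n) \;=\; \frac{P(A_1 \wedge \cdots \wedge A_n)}{P(A_1) \cdots P(A_n)}

% Olsson's overlap measure: the share of agreement within the total region
% covered by the propositions
C_{O}(A_1, \ldots, A_n) \;=\; \frac{P(A_1 \wedge \cdots \wedge A_n)}{P(A_1 \vee \cdots \vee A_n)}
```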
Within the social sciences, much controversy exists about which status should be ascribed to the rationality assumption that forms the core of rational choice theories. Whilst realists argue that the rationality assumption is an empirical claim which describes real processes that cause individual action, instrumentalists maintain that it amounts to nothing more than an analytically set axiom or ‘as if’ hypothesis which helps in the generation of accurate predictions. In this paper, I argue that this realist-instrumentalist debate about rational choice theory can be overcome once it is realised that the rationality assumption is neither an empirical description nor an ‘as if’ hypothesis, but a normative claim.
This paper considers the prospects for objectivity in reasoning strategies in response to empirical studies that apparently show systematic culture‐based differences in patterns of reasoning. I argue that there is at least one modest class of exceptions to the claim that there are alternative, equally warranted standards of good reasoning: the class that entails the solution of certain well‐structured problems which, suitably chosen, are common, or touchstone, to the sorts of culturally different viewpoints discussed. There is evidence that some cognitive tasks are seen in much the same way across cultures, not least by virtue of the common run of experiences with the world of material objects in early childhood by creatures with similar cognitive endowments. These tasks thus present as similarly structured sets of claims that have similar priority: what is framed, and what is bracketed, or held constant in the background, is shown to be naturally common across cultures. As a consequence, a normative view of reasoning and, by implication, critical thinking can be defended. While this might be a modest sense of objectivity, the high level of intercultural articulation that is able to occur among people of different backgrounds suggests that it provides cognitive scaffolding for a lot of other reasoning tasks as well.
The extent to which belief revision is affected by systematic variability and direct experience of a conditional (if A then B) relation was examined in two studies. The first used a computer-generated apparatus. This presented two rows of 5 objects. Pressing one of the top objects resulted in one of the bottom objects being lit up. The 139 adult participants were given one of two levels of experience (5 or 15 trials) and one of two types of apparatus. One of these was completely uniform, while the other had an element that randomly alternated in its result. Following the testing of the apparatus, participants were asked to rate their certainty of the action of the middle element, which was always uniform (the AB belief). Then they were told of an observation inconsistent with this belief. Participants were then asked whether they considered the AB belief or the anecdotal observation to be more believable. Results showed that increased experience decreased the tendency to reject the AB belief, when the apparatus did not have any randomness. However, the presence of a single element showing random variation in the system strongly increased rejection of this belief. A second study looked at the effect of a single random element on a mechanical system as well as an electronic system using graphical representations. This confirmed the generality of the effect of randomness on belief revision, and provided support for the effects of embedding a belief into a system of relations. These results provide some insight into the complex factors that determine belief revision.
Four articles in this issue of topiCS (volume 4, issue 1) argue against a computational approach in cognitive science in favor of a dynamical approach. I concur that the computational approach faces some considerable explanatory challenges. Yet the dynamicists’ proposal that cognition is self-organized seems to only go so far in addressing these challenges. Take, for instance, the hypothesis that cognitive behavior emerges when brain and body (re-)configure to satisfy task and environmental constraints. It is known that for certain systems of constraints, no procedure can exist (whether modular, local, centralized, or self-organized) that reliably finds the right configuration in a realistic amount of time. Hence, the dynamical approach still faces the challenge of explaining how self-organized constraint satisfaction can be achieved by human brains and bodies in real time. In this commentary, I propose a methodology that dynamicists can use to try to address this challenge.
Drawing inspiration from Lakatos’s philosophy of science, the paper presents a notion of intertheory explanation that is suitable to explain, from the point of view of a successor theory, its predecessor theory’s success (where it is successful) as well as the latter’s failure (where it fails) at the same time. A variation of the Ramsey-test is used, together with a standard AGM belief revision model, to give a semantics for open and counterfactual conditionals and ‘because’-sentences featuring in such intertheory explanations. Pre-theoretically described idealizing assumptions play a crucial role in this model, especially when the predecessor theory and the successor theory contradict each other.