How does science work? Does it tell us what the world is "really" like? What makes it different from other ways of understanding the universe? In Theory and Reality, Peter Godfrey-Smith addresses these questions by taking the reader on a grand tour of one hundred years of debate about science. The result is a completely accessible introduction to the main themes of the philosophy of science. Intended for undergraduates and general readers with no prior background in philosophy, Theory and Reality covers logical positivism; the problems of induction and confirmation; Karl Popper's theory of science; Thomas Kuhn and "scientific revolutions"; the views of Imre Lakatos, Larry Laudan, and Paul Feyerabend; and challenges to the field from the sociology of science, feminism, and science studies. The book then looks in more detail at some specific problems and theories, including scientific realism, the theory-ladenness of observation, scientific explanation, and Bayesianism. Finally, Godfrey-Smith defends a form of philosophical naturalism as the best way to solve the main problems in the field. Throughout the text he points out connections between philosophical debates and wider discussions about science in recent decades, such as the infamous "science wars." Examples and asides engage the beginning student; a glossary of terms explains key concepts; and suggestions for further reading are included at the end of each chapter. Yet this is a textbook that doesn't feel like a textbook, because it captures the historical drama of changes in how science has been conceived over the last one hundred years. Like no other text in this field, Theory and Reality combines a survey of the recent history of the philosophy of science with current key debates, in language that any beginning scholar or critical reader can follow.
What is the point of ideology critique? Prominent Anglo-American philosophers have recently proposed novel arguments for the view that ideology critique is moral critique, and that ideologies are flawed insofar as they contribute to injustice or oppression. We criticize that view and make the case for an alternative, more empirically oriented approach, grounded in epistemic rather than moral commitments. We make two related claims: (i) ideology critique can debunk beliefs and practices by uncovering how, empirically, they are produced by self-justifying power, and (ii) the self-justification of power should be understood as an epistemic rather than moral flaw. Drawing on the recent realist revival in political theory, we argue that this genealogical approach has more radical potential, despite being more parsimonious than morality-based approaches. We demonstrate the relative advantages of our view by discussing the results of empirical studies on the contemporary phenomenon of neopatriarchy in the Middle East and North Africa.
‘Sentience’ sometimes refers to the capacity for any type of subjective experience, and sometimes to the capacity to have subjective experiences with a positive or negative valence, such as pain or pleasure. We review recent controversies regarding sentience in fish and invertebrates and consider the deep methodological challenge posed by these cases. We then present two ways of responding to the challenge. In a policy-making context, precautionary thinking can help us treat animals appropriately despite continuing uncertainty about their sentience. In a scientific context, we can draw inspiration from the science of human consciousness to disentangle conscious and unconscious perception (especially vision) in animals. Developing better ways to disentangle conscious and unconscious affect is a key priority for future research.
The ideal of value-free science states that the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values. It has been criticized on the grounds that scientists have to employ moral judgements in managing inductive risks. The paper seeks to defuse this methodological critique. Allegedly value-laden decisions can be systematically avoided, it argues, by making uncertainties explicit and articulating findings carefully. Such careful uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change (IPCC).
We provide an analysis of the public's having warranted epistemic trust in science, that is, the conditions under which the public may be said to have well-placed trust in scientists as providers of information. We distinguish between basic and enhanced epistemic trust in science and provide necessary conditions for both. We then present the controversy regarding the connection between autism and measles–mumps–rubella vaccination as a case study to illustrate our analysis. The realization of warranted epistemic public trust in science requires various societal conditions, which we briefly introduce in the concluding section.
In this paper, I distinguish three general approaches to public trust in science, which I call the individual approach, the semi-social approach, and the social approach, and critically examine their proposed solutions to what I call the problem of harmful distrust. I argue that, despite their differences, the individual and the semi-social approaches see the solution to the problem of harmful distrust as consisting primarily in trying to persuade individual citizens to trust science and that both approaches face two general problems, which I call the problem of overidealizing science and the problem of overburdening citizens. I then argue that in order to avoid these problems we need to embrace a (thoroughly) social approach to public trust in science, which emphasizes the social dimensions of the reception, transmission, and uptake of scientific knowledge in society and the ways in which social forces influence both positively and negatively the trustworthiness of science.
Proponents of the value-ladenness of science rely primarily on arguments from underdetermination or inductive risk, which share the premise that we should only consider values where the evidence runs out or leaves uncertainty; they adopt a criterion of lexical priority of evidence over values. The motivation behind lexical priority is to avoid reaching conclusions on the basis of wishful thinking rather than good evidence. While this is a real concern, giving lexical priority to evidential considerations over values is a mistake, and is unnecessary for avoiding wishful thinking. Values have a deeper role to play in science.
The controversy over the old ideal of “value-free science” has cooled significantly over the past decade. Many philosophers of science now agree that even ethical and political values may play a substantial role in all aspects of scientific inquiry. Consequently, in the last few years, work in science and values has become more specific: Which values may influence science, and in which ways? Or, how do we distinguish legitimate from illegitimate kinds of influence? In this paper, I argue that this problem requires philosophers of science to take a new direction. I present two case studies in the influence of values on scientific inquiry: feminist values in archaeology and commercial values in pharmaceutical research. I offer a preliminary assessment of these cases: the influence of values was legitimate in the feminist case, but not in the pharmaceutical case. I then turn to three major approaches to distinguishing legitimate from illegitimate influences of values, including the distinction between epistemic and non-epistemic values and Heather Douglas' distinction between direct and indirect roles for values. I argue that none of these three approaches gives an adequate analysis of the two cases. In the concluding section, I briefly sketch my own approach, which draws more heavily on ethics than the others and is more promising as a solution to the current problem. This is the new direction in which I think science and values should move.
When scientists or science reporters communicate research results to the public, this often involves ethical and epistemic risks. One such risk arises when scientific claims cause cognitive or behavioural changes in the audience that contribute to the self-fulfilment of these claims. I argue that the ethical and epistemic problems that such self-fulfilment effects may pose are much broader and more common than hitherto appreciated. Moreover, these problems are often due to a specific psychological phenomenon that has been neglected in the research on science communication: many people tend to conform to ‘descriptive norms’, that is, norms capturing (perceptions of) what others commonly do, think, or feel. Because of this tendency, science communication may frequently produce significant social harm. I contend that scientists have a responsibility to assess the risk of this potential harm and consider adopting strategies to mitigate it. I introduce one such strategy and argue that its implementation is independently well motivated by the fact that it helps improve scientific accuracy.
While it is widely acknowledged that science is not “free” of non-epistemic values, there is disagreement about the roles that values can appropriately play. Several have argued that non-epistemic values can play important roles in modeling decisions, particularly in addressing uncertainties (Risbey 2007; Biddle and Winsberg 2010; Winsberg 2012; van der Sluijs 2012). On the other hand, such values can lead to bias (Bray; Oreskes and Conway 2010). Thus, it is important to identify when it is legitimate to appeal to non-epistemic values in modeling decisions. An approach is defended here whereby such value judgments are legitimate when they promote democratically endorsed epistemological and social aims of research. This framework accounts for why it is legitimate to appeal to non-epistemic values in a range of modeling decisions, while addressing concerns that the presence of such values will lead to bias or give scientists disproportionate power in deciding what values ought to be endorsed.
People increasingly form beliefs based on information gained from automatically filtered Internet sources such as search engines. However, the workings of such sources are often opaque, preventing subjects from knowing whether the information provided is biased or incomplete. Users’ reliance on Internet technologies whose modes of operation are concealed from them raises serious concerns about the justificatory status of the beliefs they end up forming. Yet it is unclear how to address these concerns within standard theories of knowledge and justification. To shed light on the problem, we introduce a novel conceptual framework that clarifies the relations between justified belief, epistemic responsibility, action, and the technological resources available to a subject. We argue that justified belief is subject to certain epistemic responsibilities that accompany the subject’s particular decision-taking circumstances, and that one typical responsibility is to ascertain, so far as one can, whether the information upon which the judgment will rest is biased or incomplete. What this responsibility comprises is partly determined by the inquiry-enabling technologies available to the subject. We argue that a subject’s beliefs that are formed based on Internet-filtered information are less justified than they would be if she either knew how filtering worked or relied on additional sources, and that the subject may have the epistemic responsibility to take measures to enhance the justificatory status of such beliefs.
Animal welfare has a long history of disregard. While in recent decades the study of animal welfare has become a scientific discipline of its own, the difficulty of measuring animal welfare can still be vastly underestimated. There are three primary theories, or perspectives, on animal welfare: biological functioning, natural living, and affective state. These come with their own diverse methods of measurement, each providing a limited perspective on an aspect of welfare. This paper describes a perspectival pluralist account of animal welfare, in which all three theoretical perspectives and their multiple measures are necessary to understand this complex phenomenon and provide a full picture of animal welfare. This in turn will offer us a better understanding of perspectivism and pluralism itself.
Philosophers of science now broadly agree that doing good science involves making non-epistemic value judgments. I call attention to two very different normative standards which can be used to evaluate such judgments: standards grounded in ethics and standards grounded in political philosophy. Though this distinction has not previously been highlighted, I show that the values in science literature contains arguments of each type. I conclude by explaining why this distinction is important. Seeking to determine whether some value-laden determination meets substantive ethical standards is a very different endeavor from seeking to determine whether it is politically legitimate.
As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of science to machine learning programs to make the case that the resources required to respond to these inductive challenges render critical aspects of their design constitutively value-laden. I demonstrate these points specifically in the case of recidivism algorithms, arguing that contemporary debates concerning fairness in criminal justice risk-assessment programs are best understood as iterations of traditional arguments from inductive risk and demarcation, and thereby establish the value-laden nature of automated decision-making programs. Finally, in light of these points, I address opportunities for relocating the value-free ideal in machine learning and the limitations that accompany them.
The value-free ideal in science has been criticised as both unattainable and undesirable. We argue that it can be defended as a practical principle guiding scientific research even if the unattainability and undesirability of a value-free end-state are granted. If a goal is unattainable, then one can separate the desirability of accomplishing the goal from the desirability of pursuing it. We articulate a novel value-free ideal, which holds that scientists should act as if science should be value-free, and we argue that even if a purely value-free science is undesirable, this value-free ideal is desirable to pursue.
Well-being, health, and freedom are some of the many phenomena of interest to science whose definitions rely on a normative standard. Empirical generalizations about them thus present a special case of value-ladenness. I propose the notion of a ‘mixed claim’ to denote such generalizations. Against the prevailing wisdom, I argue that we should not seek to eliminate them from science. Rather, we need to develop principles for their legitimate use. Philosophers of science have already reconciled values with objectivity in several ways, but none of the existing proposals are suitable for mixed claims. Using the example of the science of well-being, I articulate a conception of objectivity for this science and for mixed claims in general. Contents: 1 Introduction; 2 What Are Mixed Claims?; 3 Mixed Claims Are Different (3.1 Values as reasons to pursue science; 3.2 Values as agenda-setters; 3.3 Values as ethical constraints on research protocols; 3.4 Values as arbiters between underdetermined theories; 3.5 Values as determinants of standards of confirmation; 3.6 Values as sources of wishful thinking and fraud); 4 Mixed Claims Should Stay (4.1 Against Nagel); 5 The Dangers of Mixed Claims; 6 The Existing Accounts of Objectivity (6.1 The perils of impartiality); 7 Objectivity for Mixed Claims; 8 Three Rules (8.1 Unearth the value presuppositions in methods and measures; 8.2 Check if value presuppositions are invariant to disagreements; 8.3 Consult the relevant parties); 9 Conclusion.
The concept of bias is pervasive in both popular discourse and empirical theorizing within philosophy, cognitive science, and artificial intelligence. This widespread application threatens to render the concept too heterogeneous and unwieldy for systematic investigation. This article explores recent philosophical literature that attempts to identify a single theoretical category, ‘bias’, unified across these different contexts. To this end, the article provides a comprehensive review of theories of bias that are significant in the fields of philosophy of mind, cognitive science, machine learning, and epistemology. It focuses on key examples such as perceptual bias, implicit bias, explicit bias, and algorithmic bias, scrutinizing their similarities and differences. Although these explorations may not conclusively establish the existence of a natural theoretical kind, pursuing the possibility offers valuable insights into how bias is conceptualized and deployed across diverse domains, thus deepening our understanding of its complexities across a wide range of cognitive and computational processes.
According to the Value-Neutrality Thesis (VNT), technology is morally and politically neutral, neither good nor bad. A knife may be put to bad use to murder an innocent person or to good use to peel an apple for a starving person, but the knife itself is a mere instrument, not a proper subject for moral or political evaluation. While contemporary philosophers of technology widely reject the VNT, it remains unclear whether claims about values in technology are just a figure of speech or nontrivial empirical claims with genuine factual content and real-world implications. This paper provides the missing argument. I argue that by virtue of their material properties, technological artifacts are part of the normative order rather than external to it. I illustrate how values can be empirically identified in technology. The reason why value-talk is not trivial or metaphorical is that, due to the endurance and longevity of technological artifacts, values embedded in them have long-term implications that surpass their designers and builders. I further argue that taking sides in this debate has real-world implications in the form of moral constraints on the development of technology.
The argument from inductive risk (AIR) states that scientists should consider the consequences of hypotheses and methodological choices in the course of ongoing research. It has played a central role in the widespread retreat from the ideal of value-free science. The argument is motivated, to a significant extent, by the laudable concern to use science to better society. I argue that this concern, when taken seriously, tells against the idea that individual working scientists should consider social consequences. First, I show that the AIR assigns scientists a striking task: to frame social decisions and assess their full spectrum of consequences. Then, I argue that scientists should not be assigned such a task. For one thing, if both scientists and policy makers take social consequences into account, we are liable to end up with skewed and biased decisions. For another, scientists are not well-placed to carry out social decision making. I conclude by indicating a context in which the inductive risk line has more force, and by highlighting the importance, in discussing values in science, of attending to units and levels of analysis.
Jonathan Y. Tsou examines and defends positions on central issues in philosophy of psychiatry. The positions defended assume a naturalistic and realist perspective and are framed against skeptical perspectives on biological psychiatry. Issues addressed include the reality of mental disorders; mechanistic and disease explanations of abnormal behavior; definitions of mental disorder; natural and artificial kinds in psychiatry; biological essentialism and the projectability of psychiatric categories; looping effects and the stability of mental disorders; psychiatric classification; and the validity of the DSM's diagnostic categories. The main argument defended by Tsou is that genuine mental disorders are biological kinds with harmful effects. This argument opposes the dogma that mental disorders are necessarily diseases that result from biological dysfunction. Tsou contends that the broader ideal of biological kinds offers a more promising and empirically ascertainable naturalistic standard for assessing the reality of mental disorders and the validity of psychiatric categories.
The public rejection of scientific claims is widely recognized by scientific and governmental institutions to be threatening to modern democratic societies. Intense conflicts between science and the public over diverse health and environmental issues have invited speculation by concerned officials regarding both the source of and the solution to the problem of public resistance towards scientific and policy positions on such hot-button issues as global warming, genetically modified crops, environmental toxins, and nuclear waste disposal. The London Royal Society’s influential report “Public Understanding of Science”, which spearheaded the now-thriving area of science...
This book addresses key conceptual issues relating to the modern scientific and engineering use of computer simulations. It analyses a broad set of questions, from the nature of computer simulations to their epistemological power, including the many scientific, social, and ethical implications of using computer simulations. The book is written in an easily accessible narrative, one that weaves together philosophical questions and scientific technicalities. It will thus appeal equally to academic scientists, engineers, and researchers in industry interested in questions related to the general practice of computer simulations.
Philosophical inquiry strives to be the unencumbered exploration of ideas. That is, unlike scientific research which is subject to ethical oversight, it is commonly thought that it would either be inappropriate, or that it would undermine what philosophy fundamentally is, if philosophical research were subject to similar ethical oversight. Against this, I argue that philosophy is in need of a reckoning. Philosophical inquiry is a morally hazardous practice with its own risks. There are risks present in the methods we employ, risks inherent in the content of the views under consideration, and risks to the subjects of our inquiry. Likely, there are more risks still. However, by starting with the identification of these three risks we can demonstrate not only why an ethics of practice is needed but also which avenues are the most promising for developing an ethics for philosophical practice. Although we might be in the business of asking questions, we do not absolve ourselves of responsibility for the risks that inquiry incurs.
In the philosophy of science, it is a common proposal that values are illegitimate in science and should be counteracted whenever they drive inquiry to the confirmation of predetermined conclusions. Drawing on recent cognitive scientific research on human reasoning and confirmation bias, I argue that this view should be rejected. Advocates of it have overlooked that values that drive inquiry to the confirmation of predetermined conclusions can contribute to the reliability of scientific inquiry at the group level even when they negatively affect an individual’s cognition. This casts doubt on the proposal that such values should always be illegitimate in science. It also suggests that advocates of that proposal assume a narrow, individualistic account of science that threatens to undermine their own project of ensuring reliable belief formation in science.
In this paper, I consider the relationship between Inference to the Best Explanation (IBE) and Bayesianism, both of which are well-known accounts of the nature of scientific inference. In Sect. 2, I give a brief overview of Bayesianism and IBE. In Sect. 3, I argue that IBE in its most prominently defended forms is difficult to reconcile with Bayesianism because not all of the items that feature on popular lists of “explanatory virtues”—by means of which IBE ranks competing explanations—have confirmational import. Rather, some of the items that feature on these lists are “informational virtues”—properties that do not make a hypothesis H more probable than some competitor given evidence E, but that, roughly speaking, give that hypothesis greater informative content. In Sect. 4, I consider as a response to my argument a recent version of compatibilism which argues that IBE can provide further normative constraints on the objectively correct probability function. I argue that this response does not succeed, owing to the difficulty of defending with any generality such further normative constraints. Lastly, in Sect. 5, I propose that IBE should be regarded not as a theory of scientific inference, but rather as a theory of when we ought to “accept” H, where the acceptability of H is fixed by the goals of science and concerns whether H is worthy of commitment as a research program. In this way, IBE and Bayesianism, as I will show, can be made compatible, and thus the Bayesian and the proponent of IBE can be friends.
Philosophers of science debate the proper role of non-epistemic value judgements in scientific reasoning. Many modern authors oppose the value-free ideal, claiming that we should not even try to get scientists to eliminate all such non-epistemic value judgements from their reasoning. W. E. B. Du Bois, on the other hand, offers a defence of the value-free ideal in science that is rooted in a conception of the proper place of science in a democracy. In particular, Du Bois argues that the value-free ideal must be upheld in order to, first, retain public trust in science and, second, ensure that those best placed to make use of scientifically acquired information are able to do so. This latter argument turns out to relate Du Bois' position on the value-free ideal in science to his defence of epistemic democracy. In this essay I elaborate and motivate Du Bois' under-appreciated defence of the value-free ideal, and relate it to the modern debate.
In their attempt to defend philosophy from accusations of uselessness made by prominent scientists, such as Stephen Hawking, some philosophers respond with the charge of ‘scientism.’ This charge makes endorsing a scientistic stance a mistake by definition. For this reason, it begs the question against these critics of philosophy, or anyone who is inclined to endorse a scientistic stance, and turns the scientism debate into a verbal dispute. In this paper, I propose a different definition of scientism, and thus a new way of looking at the scientism debate. Those philosophers who seek to defend philosophy against accusations of uselessness would do philosophy a much better service, I submit, if they were to engage with the definition of scientism put forth in this paper, rather than simply make it analytic that scientism is a mistake.
Theories of scientific rationality typically pertain to belief. In this paper, the author argues that we should expand our focus to include motivations as well as belief. An economic model is used to evaluate whether science is best served by scientists motivated only by truth, only by credit, or by both truth and credit. In many, but not all, situations, scientists motivated by both truth and credit should be judged as the most rational scientists.
Innovative modes of collaboration between archaeologists and Indigenous communities are taking shape in a great many contexts, in the process transforming conventional research practice. While critics object that these partnerships cannot but compromise the objectivity of archaeological science, many of the archaeologists involved argue that their research is substantially enriched by them. I counter objections raised by internal critics and crystallized in philosophical terms by Boghossian, disentangling several different kinds of pluralism evident in these projects and offering an analysis of why they are epistemically productive when they succeed. My central thesis is that they illustrate the virtues of epistemic inclusion central to proceduralist accounts of objectivity, but I draw on the resources of feminist standpoint theory to motivate the extension of these social-cognitive norms beyond the confines of the scientific community.
Pushing back against the current ‘value-laden turn’ (VLT) in the philosophical literature on values in science, and reviving the legacy of the value-free ideal of science (VFI), this paper argues that the influence of extra-scientific values should be minimised—not excluded—in the core phase of scientific inquiry where claims are accepted or rejected. Noting that the original arguments for the VFI (ensuring the truth of scientific knowledge, respecting the autonomy of users of scientific results, preserving public trust in science) have not been satisfactorily addressed by proponents of the VLT, it proposes four prerequisites, stemming from the fundamental requirement to distinguish between facts and values, which any model for values in the acceptance/rejection phase of scientific inquiry should respect: (1) the truth of scientific knowledge must be ensured; (2) the uncertainties associated with scientific claims must be stated clearly; (3) claims accepted into the scientific corpus must be distinguished from claims taken as a basis for action. An additional prerequisite, (4) simplicity and systematicity, is desirable if the model is to be applicable. Methodological documents from international institutions and regulation agencies are used to illustrate the prerequisites. A model combining Betz's conception (stating uncertainties associated with scientific claims) and Hansson's corpus model (ensuring the truth of the scientific corpus and distinguishing it from other claims taken as a basis for action) is proposed. Additional prerequisites, stemming from the requirement that philosophy of science reflect on its own values, are finally suggested for future research: (5) any model for values in science must be descriptively and normatively relevant; and (6) its consequences must be thoroughly assessed.
Drawing on the SAGE minutes and other documents, I consider the wider lessons for norms of scientific advising that can be learned from the UK’s initial response to coronavirus in the period January-March 2020, when an initial strategy that planned to avoid total suppression of transmission was abruptly replaced by an aggressive suppression strategy. I introduce a distinction between “normatively light advice”, in which no specific policy option is recommended, and “normatively heavy advice” that does make an explicit recommendation. I argue that, although scientific advisers should avoid normatively heavy advice in normal times in order to facilitate democratic accountability, this norm can be permissibly overridden in situations of grave emergency. SAGE’s major mistake in early 2020 was not that of endorsing a particular strategy, nor that of being insufficiently precautionary, but that of relying too heavily on a specific set of “reasonable worst-case” planning assumptions. I formulate some proposals that assign a more circumscribed role to “worst-case” thinking in emergency planning. In an epilogue, I consider what the implications of my proposals would have been for the UK’s response to the “second wave” of late 2020.
In this paper I shall defend the idea that there is an abstract and general core meaning of objectivity, and what is seen as a variety of concepts or conceptions of objectivity are in fact criteria of, or means to achieve, objectivity. I shall then discuss the ideal of value-free science and its relation to the objectivity of science; its status can be at best a criterion of, or means for, objectivity. Given this analysis, we can then turn to the problem of inductive risk. Do the value judgements regarding inductive risk really pose a threat to the objectivity of science? I claim that this is not the case because they do not lower the thresholds scientifically postulated for objectivity. I shall conclude the paper with a discussion of under-appreciated influences of values on science, which indeed pose a serious threat to the objectivity of some scientific disciplines.
In recent years, the argument from inductive risk against value-free science has enjoyed a revival. This paper investigates and clarifies this argument by means of a case study: neonicotinoid research. Sect. 1 argues that the argument from inductive risk is best conceptualised as a claim about scientists’ communicative obligations. Sect. 2 then shows why this argument is inapplicable to “public communication”. Sect. 3 outlines non-epistemic reasons why non-epistemic values should not play a role in public communicative contexts. Sect. 4 analyses the implications of these arguments both for the specific case of neonicotinoid research and for understanding the limits of the argument from inductive risk. Sect. 5 sketches the broader implications of my claims for understanding the “Value Free Ideal” for science.
Examining previous discussions of how to construe the concepts of gender and race, we advocate what we call strategic conceptual engineering. This is the employment of a (possibly novel) concept for specific epistemic or social aims, concomitant with the openness to use a different concept (e.g., of race) for other purposes. We illustrate this approach by sketching three distinct concepts of gender and arguing that all of them are needed, as they answer to different social aims. The first concept serves the aim of identifying and explaining gender-based discrimination. It is similar to Haslanger’s well-known account, except that rather than offering a definition of ‘woman’ we focus on ‘gender’ as one among several axes of discrimination. The second concept of gender serves the aim of assigning legal rights and social recognition, and thus is to be trans-inclusive. We argue that this cannot be achieved by previously suggested concepts that include substantial gender-related psychological features, such as awareness of social expectations. Instead, our concept counts someone as being of a certain gender solely on the basis of the person’s self-identification with this gender. The third concept of gender serves the aim of personal empowerment by means of one’s gender identity. In this context, substantial psychological features and awareness of one’s social situation are involved. While previous accounts of concepts have focused on their role in determining extensions, we point to contexts where a concept’s role in explanation and moral reasoning can be more important.
Ethnobiology has become increasingly concerned with applied and normative issues such as climate change adaptation, forest management, and sustainable agriculture. Applied ethnobiology emphasizes the practical importance of local and traditional knowledge in tackling these issues but thereby also raises complex theoretical questions about the integration of heterogeneous knowledge systems. The aim of this article is to develop a framework for addressing questions of integration through four core domains of philosophy - epistemology, ontology, value theory, and political theory. In each of these dimensions, we argue for a model of “partial overlaps” that acknowledges both substantial similarities and differences between knowledge systems. While overlaps can ground successful collaboration, their partiality requires reflectivity about the limitations of collaboration and co-creation. By outlining such a general and programmatic framework, the article aims to contribute to developing “philosophy of ethnobiology” as a field of interdisciplinary exchange that provides new resources for addressing foundational issues in ethnobiology and also expands the agenda of philosophy of biology.
Scientists often diverge widely when choosing between research programs. This can seem to be rooted in disagreements about which of several theories, competing to address shared questions or phenomena, is currently the most epistemically or explanatorily valuable—i.e. most successful. But many such cases are actually more directly rooted in differing judgments of pursuit-worthiness, concerning which theory will be best down the line, or which addresses the most significant data or questions. Using case studies from 16th-century astronomy and 20th-century geology and biology, I argue that divergent theory choice is thus often driven by considerations of scientific process, even where direct epistemic or explanatory evaluation of its final products appears more relevant. Broadly following Kuhn’s analysis of theoretical virtues, I suggest that widely shared criteria for pursuit-worthiness function as imprecise, mutually-conflicting values. However, even Kuhn and others sensitive to pragmatic dimensions of theory ‘acceptance’, including the virtue of fruitfulness, still commonly understate the role of pursuit-worthiness—especially by exaggerating the impact of more present-oriented virtues, or failing to stress how ‘competing’ theories excel at addressing different questions or data. This framework clarifies the nature of the choice and competition involved in theory choice, and the role of alternative theoretical virtues.
Kevin Elliott and others separate two common arguments for the legitimacy of societal values in scientific reasoning as the gap and the error arguments. This article poses two questions: How are these two arguments related, and what can we learn from their interrelation? I contend that we can better understand the error argument as nested within the gap because the error is a limited case of the gap with narrower features. Furthermore, this nestedness provides philosophers with conceptual tools for analyzing more robustly how values pervade science.
Scientists have the ability to influence policy in important ways through how they present their results. Surprisingly, existing codes of scientific ethics have little to say about such choices. I propose that we can arrive at a set of ethical guidelines to govern scientists’ presentation of information to policymakers by looking to bioethics: roughly, just as a clinician should aim to promote informed decision-making by patients, a scientist should aim to promote informed decision-making by policymakers. Though this may sound like a natural proposal, I show it offers guidance that conflicts with standard scientific practices. I conclude by considering one cost of the proposal: that it would prevent scientists from acting as advocates in a way that is currently common in certain fields. I accept that the proposal would restrict scientists’ political advocacy rights, but argue that the benefits of adopting it — promoting democratic governance — justify the restriction.
We call attention to an underappreciated way in which non-epistemic values influence evidence evaluation in science. Our argument draws upon some well-known features of scientific modeling. We show that, when scientific models stand in for background knowledge in Bayesian and other probabilistic methods for evidence evaluation, conclusions can be influenced by the non-epistemic values that shaped the setting of priorities in model development. Moreover, it is often infeasible to correct for this influence. We further suggest that, while this value influence is not particularly prone to the problem of wishful thinking, it could have problematic non-epistemic consequences in some cases.
The aim of this article is to argue that ontological choices in scientific practice undermine common formulations of the value-free ideal in science. First, I argue that the truth values of scientific statements depend on ontological choices. For example, statements about entities such as species, race, memory, intelligence, depression, or obesity are true or false relative to the choice of a biological, psychological, or medical ontology. Second, I show that ontological choices often depend on non-epistemic values. On the basis of these premises, I argue that it is often neither possible nor desirable to evaluate scientific statements independently of non-epistemic values. Finally, I suggest that considerations of ontological choices not only challenge the value-free ideal but also help to specify positive roles of non-epistemic values in an often neglected area of scientific practice.
This essay analyzes and develops recent views about explanation in biology. Philosophers of biology have parted with the received deductive-nomological model of scientific explanation primarily by attempting to capture actual biological theorizing and practice. This includes an endorsement of different kinds of explanation (e.g., mathematical and causal-mechanistic), a joint study of discovery and explanation, and an abandonment of models of theory reduction in favor of accounts of explanatory reduction. Of particular current interest are philosophical accounts of complex explanations that appeal to different levels of organismal organization and use contributions from different biological disciplines. The essay lays out one model that views explanatory integration across different disciplines as being structured by scientific problems. I emphasize the philosophical need to take the explanatory aims pursued by different groups of scientists into account, as explanatory aims determine whether different explanations are competing or complementary and govern the dynamics of scientific practice, including interdisciplinary research. I distinguish different kinds of pluralism that philosophers have endorsed in the context of explanation in biology, and draw several implications for science education, especially the need to teach science as an interdisciplinary and dynamic practice guided by scientific problems and explanatory aims.
Non-epistemic values play important roles in classificatory practice, such that philosophical accounts of kinds and classification should be able to accommodate them. Available accounts fail to do so, however. Our aim is to fill this lacuna by showing how non-epistemic values feature in scientific classification, and how they can be incorporated into a philosophical theory of classification and kinds. To achieve this, we present a novel account of kinds and classification, discuss examples from biological classification where non-epistemic values play decisive roles, and show how this account accommodates the role of non-epistemic values.
When discussing scientific objectivity, many philosophers of science have recently focused on accounts that can be applied in practice when assessing the objectivity of something. It has become clear that in different contexts, objectivity is realized in different ways, and the many senses of objectivity recognized in the recent literature seem to be conceptually distinct. I argue that these diverse ‘applicable’ senses of scientific objectivity have more in common than has thus far been recognized. I combine arguments from philosophical discussions of trust, from negative accounts of objectivity, and from the recent literature on epistemic risks. When we call X objective, we endorse it: we say that we rely on X, and that others should do so too. But the word ‘objective’ is reserved for a specific type of reliance: it is based on the belief that important epistemic risks arising from our imperfections as epistemic agents have been effectively averted. All the positive senses of objectivity identify either some risk of this type, or some efficient strategy for averting one or more such risks.
This paper addresses the problem of judgment aggregation in science. How should scientists decide which propositions to assert in a collaborative document? We distinguish the question of what to write in a collaborative document from the question of collective belief. We argue that recent objections to the application of the formal literature on judgment aggregation to the problem of judgment aggregation in science apply to the latter, not the former question. The formal literature has introduced various desiderata for an aggregation procedure. Proposition-wise majority voting emerges as a procedure that satisfies all desiderata which represent norms of science. An interesting consequence is that not all collaborating scientists need to endorse every proposition asserted in a collaborative document.
In their recent book, Oreskes and Conway describe the ‘tobacco strategy’, which was used by the tobacco industry to influence policymakers regarding the health risks of tobacco products. The strategy involved two parts, consisting of promoting and sharing independent research supporting the industry’s preferred position and funding additional research, but selectively publishing the results. We introduce a model of the tobacco strategy, and use it to argue that both prongs of the strategy can be extremely effective—even when policymakers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the tobacco strategy is particularly successful. In addition, we show how journalists engaged in ‘fair’ reporting can inadvertently mimic the effects of industry on public belief.
In this paper we focus on some new normativist positions and compare them with traditional ones. In so doing, we claim that if normative judgments are involved in determining whether a condition is a disease only in the sense identified by new normativisms, then disease is normative only in a weak sense, which must be distinguished from the strong sense advocated by traditional normativisms. Specifically, we argue that weak and strong normativity are different to the point that one ‘normativist’ label ceases to be appropriate for the whole range of positions. If values and norms are not explicit components of the concept of disease, but only intervene in other explanatory roles, then the concept of disease is no more value-laden than many other scientific concepts, or even any other scientific concept. We call the newly identified position “value-conscious naturalism” about disease, and point to some of its theoretical and practical advantages.
Several decades of work in both philosophy and psychology acutely highlights our limitations as individual inquirers. One way to recognize these limitations is to defer to experts: roughly, to form one’s beliefs on the basis of expert testimony. Yet, as has become salient in the age of Brexit, Trumpist politics, and climate change denial, people are often mistrustful of experts, and unwilling to defer to them. It’s a trope of highbrow public discourse that this unwillingness is a serious pathology. But to what extent is this trope accurate? Answering this requires us to settle both a normative question—under exactly what conditions ought we to defer to experts?—and an empirical question—under what conditions are people willing to defer to experts? The first question has been investigated primarily by philosophers; the second, primarily by psychologists. Yet there is little work integrating these literatures and putting together their results. The aim of this review article is to begin this task, enabling us to begin reaching conclusions about how much real practices of deference diverge from the ideal. We present an opinionated guide to relevant work from both philosophy and psychology, and note places where the literature has important gaps.
Normatively inappropriate scientific dissent prevents warranted closure of scientific controversies and confuses the public about the state of policy-relevant science, such as anthropogenic climate change. Against recent criticism by de Melo-Martín and Intemann of the viability of any conception of normatively inappropriate dissent, I identify three conditions for normatively inappropriate dissent: its generation process is politically illegitimate, it imposes an unjust distribution of inductive risks, and it adopts evidential thresholds outside an accepted range. I supplement these conditions with an inference-to-the-best-explanation account of knowledge-based consensus and dissent to allow policy makers to reliably identify unreliable scientific dissent.