Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected goodness per unit of resource devoted to them than each of the other available interventions, including those that focus on the health and well-being of the current population. In this paper, I argue that, even if we grant the consequentialist ethics upon which this argument depends, and even if we grant one of the axiologies that are typically paired with that ethics to give the argument, we are not morally required to choose an option that maximises expected utility; indeed, we might not even be permitted to do so. Instead, I will argue, if the argument's consequentialism is correct, we should choose using a decision theory that is sensitive to risk, one that allows us to give greater weight to worst-case outcomes than expected utility theory does. And, I will show, such decision theories do not always recommend longtermist interventions. Indeed, sometimes they recommend exactly the opposite: hastening human extinction. Many, though not all, will take this as a reductio of the consequentialism or the axiology of the argument. I remain agnostic on the conclusion we should draw.
The main aim of this book is to introduce the topic of limited awareness, and changes in awareness, to those interested in the philosophy of decision-making and uncertain reasoning.
The thesis of instrumental convergence holds that a wide range of ends have common means: for instance, self-preservation, desire preservation, self-improvement, and resource acquisition. Bostrom contends that instrumental convergence gives us reason to think that "the default outcome of the creation of machine superintelligence is existential catastrophe". I use the tools of decision theory to investigate whether this thesis is true. I find that, even if intrinsic desires are randomly selected, instrumental rationality induces biases towards certain kinds of choices: firstly, a bias towards choices which leave less up to chance; secondly, a bias towards desire preservation, in line with Bostrom's conjecture; and thirdly, a bias towards choices which afford more choices later on. I do not find biases towards any of the other convergent instrumental means on Bostrom's list. I conclude that the biases induced by instrumental rationality at best weakly support Bostrom's conclusion that machine superintelligence is likely to lead to existential catastrophe.
Decision theory and folk psychology both purport to represent the same phenomena: our belief-like and desire- and preference-like states. They also purport to do the same work with these representations: explain and predict our actions. But they do so with different sets of concepts. There's much at stake in whether one of these two sets of concepts can be accounted for with the other. Without such an account, we'd have two competing representations and systems of prediction and explanation, a dubious dualism. Folk psychology structures our daily lives and has proven fruitful in the study of mind and ethics, while decision theory is pervasive in various disciplines, including the quantitative social sciences, especially economics, and philosophy. My interest is in accounting for folk psychology with decision theory -- in particular, for belief and wanting, which decision theory omits. Many have attempted this task for belief. (The Lockean Thesis says that there is such an account.) I take up the parallel task for wanting, which has received far less attention. I propose necessary and sufficient conditions, stated in terms of decision theory, for when you're truly said to want; I give an analogue of the Lockean Thesis for wanting. My account is an alternative to orthodox accounts that link wanting to preference (e.g. Stalnaker (1984), Lewis (1986)), which I argue are false. I argue further that want ascriptions are context-sensitive. My account explains this context-sensitivity, makes sense of conflicting desires, and accommodates phenomena that motivate traditional theses on which 'want' has multiple senses (e.g. all-things-considered vs. pro tanto).
Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
We defend three controversial claims about preference, credence, and choice. First, all agents (not just rational ones) have complete preferences. Second, all agents (again, not just rational ones) have real-valued credences in every proposition in which they are confident to any degree. Third, there is almost always some unique thing we ought to do, want, or believe.
This paper considers two competing views of the relationship between preference and desire. On what I call the “preference-first view,” preference is our most basic form of conative attitude, and desire reduces to preference. This view is widely assumed, and essentially treated as orthodoxy, among standard decision theorists, economists, and others. I argue, however, that the preference-first view has things the wrong way around. I first show that the standard motivation offered for this view—motivation underlying foundational work in decision theory and economics—leaves the view with unacceptable psychological implications. I then introduce an alternative view—the “desire-first view”—on which desire is our most basic form of conative attitude, and preference reduces to desire. On the desire-first view I propose, preferences, as comparisons, are best understood as comparisons of the extents to which alternatives are desired. I show that this desire-first view is simple, ecumenical, and explanatorily powerful.
In “Perceptual Confidence,” I argue that our perceptual experiences assign degrees of confidence. In “Precision, not Confidence, Describes the Uncertainty of Perceptual Experience,” Rachel Denison disagrees. In this reply, I first clarify what I mean by ‘perceptual experiences’, ‘assign’, and ‘confidence’. I then argue, contra Denison, that perception involves automatic categorization, and that there is an intrinsic difference between a blurry perception of a sharp image and a sharp perception of a blurry image.
Issues concerning the putative perception/cognition divide are not only age-old, but also resurface in contemporary discussions in various forms. In this paper, I connect a relatively new debate concerning perceptual confidence to the perception/cognition divide. The term ‘perceptual confidence’ is quite common in the empirical literature, but there is an unsettled question about it, namely: are confidence assignments perceptual or post-perceptual? John Morrison, in two recent papers, puts forward the claim that confidence arises already at the level of perception. In this paper, I first argue that Morrison’s case is unconvincing, and then develop a picture of perceptual precision with the notions of ‘matching profile’ and ‘supervaluation’, highlighting the fact that this is a vagueness account, which is similar to but importantly different from indeterminacy accounts. With this model in hand, there can be rich resources with which to draw a theoretical line between perception and cognition.
This paper is about the alethic aspect of epistemic rationality. The most common approaches to this aspect are either normative (what ought/may a reasoner believe?) or evaluative (how rational is a reasoner?), where the evaluative approaches are usually comparative (one reasoner is assessed compared to another). These approaches often present problems with blindspots. For example, ought a reasoner to believe a currently true blindspot? Is she permitted to? Consequently, these approaches often fail in describing a situation of alethic maximality, where a reasoner fulfills all the alethic norms and could be used as a standard of rationality (as they are, in fact, used in some of these approaches). I propose a function α, which accepts a set of beliefs as input and returns a numeric alethic value. Then I use this function to define a notion of alethic maximality that is satisfiable by finite reasoners (reasoners with cognitive limitations) and does not present problems with blindspots. Function α may also be used in alethic norms and evaluation methods (comparative and non-comparative) that may be applied to finite reasoners and do not present problems with blindspots. A result of this investigation is that the project of providing purely alethic norms is defective. The use of function α also sheds light on important epistemological issues, such as the lottery and the preface paradoxes, and the principles of clutter avoidance and reflection.
Unpleasant dreams occur much more frequently than many people realise. If one is a hedonistic utilitarian – or, at least, one thinks that dreams have positive or negative moral value in virtue of their experiential quality – then one has considerable reason to try to make such dreams more positive. Given it is possible to improve the quality of our dreams, we ought to be promoting and implementing currently available interventions that improve our dream experiences, and conducting research to find new, more effective interventions.
Confronted with the possibility of severe environmental harms, such as catastrophic climate change, some researchers have suggested that we should abandon the principle at the heart of standard decision theory—the injunction to maximize expected utility—and embrace a different one: the Precautionary Principle (PP). Arguably, the most sophisticated philosophical treatment of the Precautionary Principle is due to Steel. Steel interprets PP as a qualitative decision rule and appears to conclude that a quantitative decision-theoretic statement of PP is both impossible and unnecessary. In this article, we propose a decision-theoretic formulation of PP in terms of lexical utilities. We show that this lexical model is largely faithful to Steel’s approach, but also that it corrects three problems with Steel’s account and clarifies the relationship between PP and standard decision theory. Using a range of examples, we illustrate how the lexical model can be used to explore a variety of issues related to precautionary reasoning.
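The abstract's core idea—precaution modelled via lexical utilities—can be sketched in a few lines. The options, probabilities, and two-component utilities below are invented for illustration (the paper's own examples are not given here): the first component tracks catastrophe avoidance and is lexically prior; the second tracks ordinary benefits.

```python
# Hedged sketch of a lexical-utilities reading of the Precautionary Principle.
# All numbers are illustrative assumptions, not taken from the paper.

probs = [0.1, 0.9]  # two states: catastrophe-prone vs. benign

options = {
    # per-state (catastrophe_component, ordinary_component) utilities
    "business_as_usual": [(-1, 10), (0, 10)],  # risks catastrophe in state 1
    "precaution":        [(0, 6),   (0, 6)],   # forgoes some benefit, no risk
}

def expected_vector(name):
    """Componentwise expected utility of the two-component utility vector."""
    components = zip(*options[name])  # catastrophe comps, then ordinary comps
    return tuple(sum(p * u for p, u in zip(probs, comp)) for comp in components)

# Python compares tuples lexicographically, so max() first favours the option
# whose expected catastrophe component is best, breaking ties on benefits.
best = max(options, key=expected_vector)
```

Here `best` is `"precaution"`, even though ranking by the ordinary component alone would favour business as usual; that contrast is roughly how a lexical model departs from standard expected utility maximization.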
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence, we attempt to fill this gap. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We propose a novel formulation of these concepts, and demonstrate its advantages over leading alternatives. We present a sound and complete algorithm for computing explanatory factors with respect to a given context and set of agentive preferences, allowing users to identify necessary and sufficient conditions for desired outcomes at minimal cost. Experiments on real and simulated data confirm our method’s competitive performance against state-of-the-art XAI tools on a diverse array of tasks.
Newcomb’s problem has spawned a debate about which variant of expected utility maximisation should guide rational choice. In this paper, we provide a new argument against what is probably the most popular variant: causal decision theory (CDT). In particular, we provide two scenarios in which CDT voluntarily loses money. In the first, an agent faces a single choice and following CDT’s recommendation yields a loss of money in expectation. The second scenario extends the first to a diachronic Dutch book against CDT.
This paper aims to flesh out the celebrated notion of reflective equilibrium within a probabilistic framework for epistemic rationality. On the account developed here, an agent's attitudes are in reflective equilibrium when there is a certain sort of harmony between the agent's credences, on the one hand, and what the agent accepts, on the other hand. Somewhat more precisely, reflective equilibrium is taken to consist in the agent accepting, or being prepared to accept, all and only claims that follow from a maximally comprehensive theory that is more probable than any other such theory. Drawing on previous work, the paper shows that when an agent is in reflective equilibrium in this sense, the set of claims they accept or are prepared to accept is bound to be logically consistent and closed under logical implication. The paper also argues that this account can explain various features of philosophical argumentation in which the notion of reflective equilibrium features centrally, such as the emphasis on evaluating philosophical theories holistically rather than in a piecemeal fashion.
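The acceptance rule the abstract describes—accept all and only claims that follow from the most probable maximally comprehensive theory—admits a toy model. The atoms, credences, and formulas below are invented for illustration; a "maximally comprehensive theory" is modelled, under that assumption, as a complete truth assignment to the atoms.

```python
# Toy sketch of acceptance via the most probable maximal theory.
# Atoms, credences, and formulas are illustrative assumptions only.

# Credence distribution over complete truth assignments to atoms (A, B).
credence = {
    (True, True): 0.4, (True, False): 0.3,
    (False, True): 0.2, (False, False): 0.1,
}

# The uniquely most probable maximally comprehensive theory.
best_theory = max(credence, key=credence.get)

# Candidate claims, represented as truth functions of the assignment.
formulas = {
    "A": lambda a, b: a,
    "B": lambda a, b: b,
    "A or B": lambda a, b: a or b,
    "A and not B": lambda a, b: a and not b,
}

# Accept exactly the claims true in (i.e. entailed by) that theory.
accepted = {name for name, f in formulas.items() if f(*best_theory)}
```

Because every accepted claim is true in a single assignment, the accepted set is automatically consistent and closed under implication among the listed claims, mirroring in miniature the consistency and closure results the paper proves in general.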
Perfect being theism is the view that the perfect being exists and the property being-perfect is the property being-God. According to the strong analysis of perfection, a being is perfect just in case it exemplifies all perfections. On the other hand, the weak analysis of perfection claims that a being is perfect just in case it exemplifies the best possible combination of compatible perfections. Strong perfect being theism accepts the former analysis while weak perfect being theism accepts the latter. In this paper, I argue that there are good reasons to reject both versions of perfect being theism. On the one hand, strong perfect being theism is false if there are incompatible perfections; I argue that there are. On the other hand, if either no comparison can be made between sets of perfections, or they are equally good, then there is no best possible set of perfections. I argue for the antecedent of this conditional statement, concluding that weak perfect being theism is false. In the absence of other analyses of perfection, I conclude that we have reason to reject perfect being theism.
Hamblin distinguished between formal and descriptive dialectic. Formal normative models of deliberation dialogue have been strongly emphasized as argumentation frameworks in computer science. But making such models of deliberation applicable to real natural language examples has reached a point where the descriptive aspect needs more interdisciplinary work. The new formal and computational models of deliberation dialogue that are being built in computer science seem to be closely related to some already existing and very well established computing technologies such as problem solving and decision making, but whether or how dialectical argumentation can be helpful to support these systems remains an open question. The aim of this paper is to examine some real examples of argumentation that seem to hover on the borderlines between deliberation, problem solving and decision making.
Many people are worried about the harmful effects of climate change but nevertheless enjoy some activities that contribute to the emission of greenhouse gas (driving, flying, eating meat, etc.), the main cause of climate change. How should such people make choices between engaging in and refraining from enjoyable greenhouse-gas-emitting activities? In this chapter, we look at the answer provided by decision theory. Some scholars think that the right answer is given by interactive decision theory, or game theory; and moreover think that since private climate decisions are instances of the prisoner’s dilemma, one rationally should engage in these activities provided that one enjoys them. Others think that the right answer is given by expected utility theory, the best-known version of individual decision theory under risk and uncertainty. In this chapter, we review these different answers, with a special focus on the latter answer and the debate it has generated.
This paper regiments and responds to an objection to skeptical theism. The conclusion of the objection is that it is not reasonable for skeptical theists to prevent evil, even when it would be easy for them to do so. I call this objection a “Dominance-Reasoning Objection” because it can be regimented utilizing dominance reasoning familiar from decision theory. Nonetheless, I argue, the objection ultimately fails because it neglects a distinction between justifying goods that are necessary for the existence of a good and those that are necessary for God’s permission of the good.
In this paper, I argue for a new normative theory of rational choice under risk, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility (CU) of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, I then argue that for any agent, S, faced with any decision under risk, S should rank his or her decision options (in terms of how choiceworthy they are) according to their comparative expected comparative utility (CECU) and should choose whichever option carries the greatest CECU. For any option, a, a’s CECU is the difference between its ECU and that of whichever alternative to a carries the greatest ECU, where a’s ECU is a probability‐weighted sum of a’s CUs across the various possible states of the world. I lastly demonstrate that in some ordinary decisions under risk, ECU theory delivers different verdicts from those of standard decision theory.
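The definitions in the abstract (CU, ECU, CECU) are concrete enough to compute directly. The two-option, two-state decision below is an invented illustration, not an example from the paper.

```python
# Hedged sketch of the abstract's definitions. Numbers are illustrative.

probs = {"G1": 0.5, "G2": 0.5}      # probabilities of the states of the world
utils = {                            # utility of each option in each state
    "a": {"G1": 10, "G2": 0},
    "b": {"G1": 4,  "G2": 5},
}

def cu(option, state):
    """Comparative utility: utility minus the best alternative's utility in that state."""
    best_alt = max(u[state] for o, u in utils.items() if o != option)
    return utils[option][state] - best_alt

def ecu(option):
    """ECU: probability-weighted sum of the option's comparative utilities."""
    return sum(p * cu(option, s) for s, p in probs.items())

def cecu(option):
    """CECU: ECU minus the greatest ECU among the alternatives."""
    best_alt = max(ecu(o) for o in utils if o != option)
    return ecu(option) - best_alt
```

In this toy case `ecu("a") == 0.5` and `ecu("b") == -0.5`, so option a wins, as it does under expected utility. With only two options the CECU ordering in fact always coincides with the expected-utility ordering (the two differences cancel pairwise), so the divergence from standard decision theory that the abstract reports must arise in richer menus of three or more options.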
Bayesian epistemologists support the norms of probabilism and conditionalization using Dutch book and accuracy arguments. These arguments assume that rationality requires agents to maximize practical or epistemic value in every doxastic state, which is evaluated from a subjective point of view (e.g., the agent’s expectancy of value). The accuracy arguments also presuppose that agents are opinionated. The goal of this paper is to discuss the assumptions of these arguments, including the measure of epistemic value. I have designed AI agents based on the Bayesian model and a nonmonotonic framework and tested how they achieve practical and epistemic value in conditions in which an alternative set of assumptions holds. In one of the tested conditions, the nonmonotonic agent, which is not opinionated and fulfills neither probabilism nor conditionalization, outperforms the Bayesian in the measure of epistemic value that I argue for in the paper (α-value). I discuss the consequences of these results for the epistemology of rationality.
This paper presents a decision problem called the holiday puzzle. The decision problem is one that involves incommensurable goods and sequences of choices. This puzzle points to a tension between three prima facie plausible, but jointly incompatible claims. I present a way out of the trilemma which demonstrates that it is possible for agents to have incomplete preferences and to be dynamically rational. The solution also suggests that the relationship between preference and rational permission is more subtle than standardly assumed.
Pondering the question of free will in the context of probability allows us to take a fresh look at a number of old problems. We are able to avoid deterministic entrapments and attempt to look at free will as an outcome of the entire decision-making system. In my paper, I will argue that free will should be considered in the context of a complex system of decisions, not individual cases. The proposed system will be probabilistic in character, so it will be embedded in the calculus of probability. To achieve the stated goal, I will refer to two areas of Carnap’s interest: the relationship between free will and determinism, and the probability-based decision-making system. First, I will present Carnap’s compatibilist position. On this basis, I will show how free will can be examined on deterministic grounds. Then I will present Carnap’s probabilistic project—the so-called logical interpretation of probability. In addition to presenting its characteristics and functionality, I will argue for its usefulness in the context of decision analysis and its immunity to problems associated with determinism. Finally, I will show how the two mentioned elements can be combined, as a result of which I will present a concept for a probabilistic analysis of free will. In this context, I will identify free will with the individual characteristics of the system. My main aim is to present the theme of free will in the light of a formal analysis based on probability rather than metaphysical assumptions.
Moral dilemmas have long been debated in moral philosophy without reaching a definitive consensus. The majority of value pluralists attribute their origin to the incommensurability of moral values, i.e. the statement that, since moral values are many and different in nature, they may conflict and cannot be compared. Neuroscientific studies on the neural common currency show that the comparison between allegedly incompatible alternatives is a practical possibility, namely it is the basis of the way in which the agent evaluates choice options. Indeed, both in economic and moral decision-making, the value of options is represented and directly compared in the ventromedial prefrontal cortex. Therefore, we contend that moral dilemmas do not originate from value incommensurability and, on the basis of the neuroscientific discoveries on the neural currency, we derive the implications for the philosophical debate on moral dilemmas. We also provide a possible connection between the experience of moral dilemmas and their neural representation: one of the causes of the individual’s indecision is the neural tie, i.e. the condition in which two options have the same value at neural level, and her regret could be due to the motivational force of the rejected option that is still signalled by affective processes in the brain. We apply this interpretation and the common currency hypothesis to vocational decisions and propose that, although from the agent’s perspective the options are qualitatively different, they may be nevertheless equivalent at neural level. This can be seen as a reason for downgrading the importance commonly attributed to the risk of making the “wrong choice”.
The aim of this thesis is to improve our understanding of how to assess and communicate uncertainty in areas of research deeply afflicted by it, the assessment and communication of which are made more fraught still by the studies’ immediate policy implications. The IPCC is my case study throughout the thesis, which consists of three parts. In Part 1, I offer a thorough diagnosis of conceptual problems faced by the IPCC uncertainty framework. The main problem I discuss is the persistent ambiguity surrounding the concepts of ‘confidence’ and ‘likelihood’; I argue that the lack of a conceptually valid interpretation of these concepts compatible with the IPCC uncertainty guide’s recommendations has worrying implications for both the IPCC authors’ treatment of uncertainties and the interpretability of the information provided in the AR5. Finally, I show that an understanding of the reasons behind the IPCC’s decision to include two uncertainty scales can offer insights into the nature of this problem. In Part 2, I review what philosophers have said about model-based robustness analysis. I assess several arguments that have been offered for its epistemic import and relate this discussion to the context of climate model ensembles. I also discuss various measures of independence in the climate literature, and assess the extent to which these measures can help evaluate the epistemic import of model robustness. In Part 3, I explore the notion of the ‘weight of evidence’ typically associated with Keynes. I argue that the Bayesian is bound to struggle with this notion, and draw some lessons from this fact. Finally, I critically assess some recent proposals for a new IPCC uncertainty framework that significantly depart from the current one.
This thesis is about making decisions when we are uncertain about what will happen, how valuable it will be, and even how to make decisions. Even the most sure-footed amongst us are sometimes uncertain about all three, but surprisingly little attention has been given to the latter two. The three essays that constitute my thesis hope to do a small part in rectifying this problem. The first essay is about the value of finding out how to make decisions. Society spends considerable resources funding people (like me) to research decision-making, so it is natural to wonder whether society is getting a good deal. This question is so shockingly underresearched that bedrock facts are readily discoverable, such as when this kind of information is valuable. My second essay concerns whether we can compare value when we are uncertain about value. Many people are in fact uncertain about value, and how we deal with this uncertainty hinges on these comparisons. I argue that value comparisons are only sometimes possible; I call this weak comparability. This essay is largely a synthesis of the literature, but I also present an argument which begins with a peculiar view of the self: it is as if each of us is a crowd of different people separated by time (but connected by continuity of experience). I’m not the first to endorse this peculiar view of the self, but I am the first to show how it supports the benign view that value is sometimes comparable. We may be uncertain of any decision rules, even those that would tell us how to act when we face uncertainty in decision rules. We may be uncertain of how to decide how to decide how to... And so on. If so, we might have to accept infinitely many decision rules just to make any mundane decision, such as whether to pick up a five-cent piece from the gutter. My third essay addresses this problem of regress. I think all of our decisions are forced: we must decide now or continue to deliberate. Surprisingly, this allows us to avoid the original problem. I call this solution “when forced, do your best”.
This thesis in philosophy consists of an introduction and five papers on three themes related to transport: valuations of time, the metric of transport justice, and future mobility solutions. The first paper analyses the properties of time as an economic resource, taking into account literature on behaviour concerning time. The intent is to add to the understanding of the underlying assumption of transferability between time and money in the context of transportation. The second paper is on the metric of transport justice. If we are concerned with distributive justice in the context of transportation, what type of good is being distributed? So far, most of the transport literature on transport justice takes accessibility to be the most appropriate metric. However, I argue that many operationalisations of accessibility are insufficient as metrics of justice. They are both too narrow and exclude relevant burdens of transportation. Additionally, accessibility can be achieved by other, non-travel-based means. I end by formulating tentative criteria for an alternative metric of transport justice. The third paper considers temporal justice in the context of transportation. Building on an argument against the claim of substitutability between time and money, I argue that temporal perspectives have been overlooked in the literature on transport justice. In part, this might be due to accessibility being the established metric of justice. Most common measures of accessibility do not capture temporal constraints and might consequently not capture temporal inequalities. Based on the case of gender differences in travel patterns and behaviour, I argue that an alternative account of the appropriate metric of transport justice is needed to capture temporal constraints and reflect gender inequalities sufficiently. The fourth paper argues that the diversity of possible mobility solutions based on self-driving vehicles has been somewhat overlooked in the current literature on the value of travel time. Thus, the complexity of valuing travel time for self-driving vehicles has not been fully addressed. The paper consists of a morphological analysis of the parameters that might impact the value of travel time for self-driving vehicles and a deeper analysis of five plausible self-driving vehicle mobility concepts. It is claimed that not all such concepts can be easily mapped into transport modes. It might be more appropriate to differentiate the value of travel based on travel characteristics. The fifth paper is a literature review of work on attitudes toward automation technology, specifically self-driving vehicles. In particular, I examine the narratives and values related to gender. Generally, women tend to be more sceptical of the prospect of automated vehicles. The review found that this tendency is often explained by women being more risk-averse and less tech-savvy. Moreover, the policy recommendations in the examined literature focus on educational efforts. Such perspectives can downplay or neglect valid reasons why women are less enthusiastic. Moreover, needs related to women's specific travel patterns might not be considered in the design and planning process. In conclusion, more awareness is needed of the gender differences, needs and expectations to ensure that future transport solutions are designed with everyone in mind.
The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what theory can explain the process of moral reasoning, decision and action, for AI entities in virtual, simulated and real-life moral scenarios? This thesis answers these two research questions with its two main contributions to the field of AI ethics, a substantial and a methodological contribution. The substantial contribution is a coherent and novel theory named Ethics of Systems Framework, as well as a possible inception of a new field of study: ethics of systems. The methodological contribution is the creation of its main methodological tool, the Ethics of Systems Interface. The second part of the research effort was focused on testing and demonstrating the capacities of the Ethics of Systems Framework and Interface in modeling and managing moral scenarios in which AI and other entities participate. Further work can focus on building on top of the foundations of the Framework provided here, increasing the scope of moral theories and simulated scenarios, improving the level of detail and parameters to reflect real-life situations, and field-testing the Framework on actual AI systems.
This volume announces a new era in the philosophy of God. Many of its contributions work to create stronger links between the philosophy of God, on the one hand, and mathematics or metamathematics, on the other hand. It is about not only the possibilities of applying mathematics or metamathematics to questions about God, but also the reverse question: Does the philosophy of God have anything to offer mathematics or metamathematics? The remaining contributions tackle stereotypes in the philosophy of religion. The volume includes 35 contributions. It is divided into nine parts: 1. Who Created the Concept of God; 2. Omniscience, Omnipotence, Timelessness and Spacelessness of God; 3. God and Perfect Goodness, Perfect Beauty, Perfect Freedom; 4. God, Fundamentality and Creation of All Else; 5. Simplicity and Ineffability of God; 6. God, Necessity and Abstract Objects; 7. God, Infinity, and Pascal's Wager; 8. God and (Meta-)Mathematics; and 9. God and Mind.