Results for 'Modeling error'

956 found
  1. Structural Modeling Error and the System Individuation Problem. Jon Lawhead - forthcoming - British Journal for the Philosophy of Science.
    Recent work by Frigg et al. and Mayo-Wilson has called attention to a particular sort of error associated with attempts to model certain complex systems: structural modeling error (SME). The assessment of the degree of SME in a model presupposes agreement between modelers about the best way to individuate natural systems, an agreement which can be more problematic than it appears. This problem, which we dub “the system individuation problem,” arises in many of the same contexts as SME, and the two often compound one another. This paper explores the common roots of the two problems in concerns about the precision of predictions generated by scientific models, and discusses how both concerns bear on the study of complex natural systems, particularly the global climate.
  2. Developing a feeling for error: Practices of monitoring and modelling air pollution data. Emma Garnett - 2016 - Big Data and Society 3 (2).
    This paper is based on ethnographic research of data practices in a public health project called Weather Health and Air Pollution. I examine two different kinds of practices that make air pollution data, focusing on how they relate to particular modes of sensing and articulating air pollution. I begin by describing the interstitial spaces involved in making measurements of air pollution at monitoring sites and in the running of a computer simulation. Specifically, I attend to a shared dimension of these practices, the checking of a numerical reading for error. Checking a measurement for error is routine practice and a fundamental component of making data, yet these are also moments of interpretation, where the form and meaning of numbers are ambiguous. Through two case studies of modelling and monitoring data practices, I show that making a ‘good’ measurement requires developing a feeling for the instrument–air pollution interaction in terms of the intended functionality of the measurements made. These affective dimensions of practice are useful analytically, making explicit the interaction of standardised ways of knowing and embodied skill in stabilising data. I suggest that environmental data practices can be studied through researchers’ materialisation of error, which complicates normative accounts of Big Data and highlights the non-linear and entangled relations at work in the making of stable, accurate data.
  3. Complexity and scientific modelling. Bruce Edmonds - 2000 - Foundations of Science 5 (3):379-390.
    It is argued that complexity is not attributable directly to systems or processes but rather to the descriptions of their ‘best’ models, to reflect their difficulty. Thus it is relative to the modelling language and type of difficulty. This approach to complexity is situated in a model of modelling. Such an approach makes sense of a number of aspects of scientific modelling: complexity is not situated between order and disorder; noise can be explicated by approaches to excess modelling error; and simplicity is not truth indicative but a useful heuristic when models are produced by a being with a tendency to elaborate in the face of error.
  4. Varieties of Error and Varieties of Evidence in Scientific Inference. Barbara Osimani & Jürgen Landes - 2023 - British Journal for the Philosophy of Science 74 (1):117-170.
    According to the variety of evidence thesis, items of evidence from independent lines of investigation are more confirmatory, ceteris paribus, than, for example, replications of analogous studies. This thesis is known to fail (Bovens and Hartmann; Claveau). However, the results obtained by Bovens and Hartmann only concern instruments whose evidence is either fully random or perfectly reliable; for Claveau, instead, unreliability is modelled as deterministic bias. In both cases, the unreliable instrument delivers totally irrelevant information. We present a model that formalizes both reliability and unreliability differently. Our instruments either are reliable, but affected by random error, or are biased but not deterministically so. Bovens and Hartmann’s results are counter-intuitive in that in their model a long series of consistent reports from the same instrument does not raise suspicion of ‘too-good-to-be-true’ evidence. This happens precisely because they contemplate neither the role of systematic bias, nor the unavoidable random error of reliable instruments. In our model, the variety of evidence thesis fails as well, but the area of failure is considerably smaller than for Bovens and Hartmann and Claveau, and holds for (the majority of) realistic cases (that is, where biased instruments are very biased). The essential mechanism that triggers variety of evidence thesis failure is the rate of false to true positives for the two kinds of instruments. Our emphasis is on modelling beliefs about sources of knowledge and their role in hypothesis confirmation in interaction with dimensions of evidence, such as variety and consistency.
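    The failure mechanism named in this abstract, the rate of false to true positives, can be made concrete with a toy posterior-odds calculation. The sketch below is not the authors' model: it assumes reports are conditionally independent given the hypothesis, which is exactly the idealization the paper complicates, and all rates are invented.

```python
# Toy Bayesian update for a hypothesis H from positive reports, where each
# reporting instrument has a true-positive rate (tpr) and a false-positive
# rate (fpr). Conditional independence of reports given H is assumed.

def posterior_odds(prior_odds, reports):
    odds = prior_odds
    for tpr, fpr in reports:
        odds *= tpr / fpr  # likelihood ratio of one positive report
    return odds

same_instrument = [(0.8, 0.3)] * 3                     # three replications
varied_sources = [(0.8, 0.3), (0.7, 0.2), (0.9, 0.4)]  # three independent lines

print(posterior_odds(1.0, same_instrument))  # ~18.96
print(posterior_odds(1.0, varied_sources))   # ~21.0
```

    Under this naive independence assumption, repeated and varied reports score similarly; the paper's point is that modelling systematic bias and random error separately changes how such evidence sets should be weighed.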
  5. Modeling Measurement: Error and Uncertainty. Alessandro Giordani & Luca Mari - 2014 - In Marcel Boumans, Giora Hon & Arthur C. Petersen, Error and Uncertainty in Scientific Practice. Pickering & Chatto. pp. 79-96.
    In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. Such a perspective is exploited here to cope with the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as is sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead one to reject the concept of accuracy as non-operational, or to maintain it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but actually the conjoint need for error-based and uncertainty-based modeling emerges.
  6. Error modeling in the ACT-R production system. Christian Lebière, John R. Anderson & Lynne M. Reder - 1994 - In Ashwin Ram & Kurt Eiselt, Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society: August 13 to 16, 1994, Georgia Institute of Technology. Erlbaum. pp. 555-559.
  7. Error statistical modeling and inference: Where methodology meets ontology. Aris Spanos & Deborah G. Mayo - 2015 - Synthese 192 (11):3533-3555.
    In empirical modeling, an important desideratum for deeming theoretical entities and processes real is that they be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is the realization that behind every substantive model there is a statistical model that pertains exclusively to the probabilistic assumptions imposed on the data. It is not that the methodology determines whether to be a realist about entities and processes in a substantive field. It is rather that the substantive and statistical models refer to different entities and processes, and therefore call for different criteria of adequacy.
  8. On the epistemological analysis of modeling and computational error in the mathematical sciences. Nicolas Fillion & Robert M. Corless - 2014 - Synthese 191 (7):1451-1467.
    Interest in the computational aspects of modeling has been steadily growing in philosophy of science. This paper aims to advance the discussion by articulating the way in which modeling and computational errors are related and by explaining the significance of error management strategies for the rational reconstruction of scientific practice. To this end, we first characterize the role and nature of modeling error in relation to a recipe for model construction known as Euler’s recipe. We then describe a general model that allows us to assess the quality of numerical solutions in terms of measures of computational errors that are completely interpretable in terms of modeling error. Finally, we emphasize that this type of error analysis involves forms of perturbation analysis that go beyond the basic model-theoretical and statistical/probabilistic tools typically used to characterize the scientific method; this demands that we revise and complement our reconstructive toolbox in a way that can affect our normative image of science.
  9. Modeling and Error Compensation of Robotic Articulated Arm Coordinate Measuring Machines Using BP Neural Network. Guanbin Gao, Hongwei Zhang, Hongjun San, Xing Wu & Wen Wang - 2017 - Complexity:1-8.
    The articulated arm coordinate measuring machine (AACMM) is a specific robotic structural instrument, which uses the D-H method for kinematic modeling and error compensation. However, it is difficult for existing error compensation models to describe the various factors that affect the accuracy of the AACMM. In this paper, a modeling and error compensation method for the AACMM is proposed based on BP neural networks. According to the available measurements, the poses of the AACMM are used as the input, and the coordinates of the probe are used as the output of the neural network. To avoid tedious training and improve training efficiency and prediction accuracy, a data acquisition strategy is developed according to the actual measurement behavior in the joint space. A neural network model is proposed and analyzed using data generated via the Monte Carlo method in simulations. The structure and parameter settings of the neural network are optimized to improve prediction accuracy and training speed. Experimental studies have been conducted to verify the proposed algorithm with neural network compensation, which show that 97% of the AACMM error can be eliminated after compensation. These experimental results reveal the effectiveness of the proposed modeling and compensation method for the AACMM.
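    The pose-to-output learning scheme just described can be sketched compactly. This is a minimal illustration on synthetic data, not the paper's implementation: here a small network predicts the pose-dependent probe error directly and the prediction is subtracted from the raw reading; the data, network size, and error function are all invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
poses = rng.uniform(-np.pi, np.pi, size=(2000, 6))      # 6 joint angles
# Invented smooth pose-dependent probe error in x, y, z, plus sensor noise.
err = 0.1 * np.sin(poses).sum(axis=1, keepdims=True) * np.ones((1, 3))
observed_err = err + rng.normal(0.0, 0.01, size=err.shape)

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
net.fit(poses, observed_err)                            # learn the error map

pose = rng.uniform(-np.pi, np.pi, size=(1, 6))
raw_xyz = np.array([[100.0, 50.0, 25.0]])               # uncompensated reading
compensated_xyz = raw_xyz - net.predict(pose)           # apply the correction
```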
  10. Modeling and Simulation of Athlete’s Error Motion Recognition Based on Computer Vision. Luo Dai - 2021 - Complexity 2021:1-10.
    Computer vision is widely used in manufacturing, sports, medical diagnosis, and other fields. In this article, a multifeature fusion error-action expression method based on silhouette and optical flow information is proposed to overcome the shortcomings in effectiveness of single-feature error-action expression methods for human body error-action recognition. We analyse the human error-action recognition method based on the idea of template matching to identify the key issues that affect the overall expression of error-action sequences, and we then propose a motion energy model based on the direct motion energy decomposition, through a filter group, of video clips of human error actions in the 3D error-action sequence space. The method avoids preprocessing operations such as target localization and segmentation. We then use MET features combined with an SVM to test the human body error database and compare the experimental results obtained with different feature reduction and classification methods; the results show that the method has a clear advantage in recognition rate and is suitable for other dynamic scenes.
  11. Modeling the interaction of computer errors by four-valued contaminating logics. Roberto Ciuni, Thomas Macaulay Ferguson & Damian Szmuc - 2019 - In Rosalie Iemhoff, Michael Moortgat & Ruy de Queiroz, Logic, Language, Information, and Computation. Folli Publications on Logic, Language and Information. pp. 119-139.
    Logics based on weak Kleene algebra (WKA) and related structures have recently been proposed as a tool for reasoning about flaws in computer programs. The key element of this proposal is the presence, in WKA and related structures, of a non-classical truth-value that is “contaminating” in the sense that whenever the value is assigned to a formula ϕ, any complex formula in which ϕ appears is assigned that value as well. Under such interpretations, the contaminating states represent occurrences of a flaw. However, since different programs and machines can interact with (or be nested into) one another, we need to account for different kinds of errors, and this calls for an evaluation of systems with multiple contaminating values. In this paper, we make steps toward these evaluation systems by considering two logics, HYB1 and HYB2, whose semantic interpretations account for two contaminating values beside the classical values 0 and 1. In particular, we provide two main formal contributions. First, we give a characterization of their relations of (multiple-conclusion) logical consequence—that is, necessary and sufficient conditions for a set Δ of formulas to logically follow from a set Γ of formulas in HYB1 or HYB2. Second, we provide sound and complete sequent calculi for the two logics.
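    The contaminating behaviour is easy to exhibit concretely. The snippet below illustrates contamination with two error values; it is not the official semantics of HYB1 or HYB2, and the dominance order between the two error values is an assumption made purely for the example.

```python
# Two contaminating values, e1 and e2, alongside classical 0 and 1. Any
# compound containing an error value takes that value; when both occur,
# e1 is assumed (for this example only) to dominate e2.

DOMINANCE = ("e1", "e2")

def contamination(*vals):
    for e in DOMINANCE:
        if e in vals:
            return e
    return None

def neg(a):
    return contamination(a) or 1 - a

def conj(a, b):
    return contamination(a, b) or min(a, b)

def disj(a, b):
    return contamination(a, b) or max(a, b)

print(disj(1, "e2"))     # e2: even a true disjunct cannot rescue the compound
print(conj("e2", 0))     # e2: contamination beats falsity
print(conj("e1", "e2"))  # e1: the dominant error value wins
```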
  12. Incorporating measurement error in n = 1 psychological autoregressive modeling. Noémi K. Schuurman, Jan H. Houtveen & Ellen L. Hamaker - 2015 - Frontiers in Psychology 6:152530.
    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: an autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
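    The bias the authors describe is easy to reproduce. The simulation below is a minimal sketch, not their code: a latent AR(1) process is observed with added white measurement noise, and the naive lag-1 autocorrelation of the noisy series visibly underestimates the true autoregressive parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 10_000, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()       # latent AR(1) process
y = x + rng.normal(scale=1.0, size=n)          # observed with measurement error

def lag1_autocorr(z):
    z = z - z.mean()
    return (z[:-1] * z[1:]).sum() / (z * z).sum()

print(lag1_autocorr(x))  # close to the true phi = 0.6
print(lag1_autocorr(y))  # attenuated toward zero by the measurement error
```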
  13. Connectionist modelling of word recognition. Peter Mcleod, David Plaut & Tim Shallice - 2001 - Synthese 129 (2):173-183.
    Connectionist models offer concrete mechanisms for cognitive processes. When these models mimic the performance of human subjects they can offer insights into the computations which might underlie human cognition. We illustrate this with the performance of a recurrent connectionist network which produces the meaning of words in response to their spelling pattern. It mimics a paradoxical pattern of errors produced by people trying to read degraded words. The reason why the network produces the surprising error pattern lies in the nature of the attractors which it develops as it learns to map spelling patterns to semantics. The key role of attractor structure in the successful simulation suggests that the normal adult semantic reading route may involve attractor dynamics, and thus the paradoxical error pattern is explained.
  14. Modeling dopaminergic and other processes involved in learning from reward prediction error: contributions from an individual differences perspective. Alan D. Pickering & Francesca Pesola - 2014 - Frontiers in Human Neuroscience 8.
  15. Elicitation and modelling of imprecise utility of health states. Michał Jakubczyk & Dominik Golicki - 2020 - Theory and Decision 88 (1):51-71.
    Utilities of health states are often estimated to support public decisions in health care. People’s preferences may be imprecise, for lack of actual trade-off experience. We show how to elicit the utilities accounting for imprecision, discover the main drivers of imprecision, and compare several approaches to modelling health state utility data in the fuzzy setting. We extended the time trade-off (TTO) questionnaire to elicit utilities of states defined in the EQ-5D-3L descriptive system in 184 respondents. Our study demonstrates that respondents are capable of assessing their own imprecision and that rigorous mathematical modelling is possible. The imprecision is larger than as inferred from the standard TTO method and is larger than estimation error, even in our smallish sample. Non-trading in TTO often results from imprecision, rather than lexicographic preferences for longevity over quality. People are especially imprecise in assessing the impact of usual activities on utility; also, the internal inconsistency of a health state increases the imprecision. The fuzzy least squares method seems best suited to assign disutilities to individual dimensions, while separately modelling the location of utility and the amount of imprecision seems best to produce value sets. If crisp parameters are estimated, accounting for imprecision changes the results little.
  16. Review of Deborah G. Mayo, Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. [REVIEW] Adam La Caze - 2010 - Notre Dame Philosophical Reviews 2010 (7).
    Deborah Mayo's view of science is that learning occurs by severely testing specific hypotheses. Mayo expounded this thesis in her (1996) Error and the Growth of Experimental Knowledge (EGEK). This volume consists of a series of exchanges between Mayo and distinguished philosophers representing competing views of the philosophy of science. The tone of the exchanges is lively, edifying and enjoyable. Mayo's error-statistical philosophy of science is critiqued in the light of positions which place more emphasis on large-scale theories. The result clarifies Mayo's account and highlights her contribution to the philosophy of science -- in particular, her contribution to the philosophy of those sciences that rely heavily on statistical analysis. The second half of the volume considers the application (or extension) of an error-statistical philosophy of science to theory testing in economics, causal modelling and legal epistemology. The volume also includes a contribution to the frequentist philosophy of statistics written by Mayo in collaboration with Sir David Cox.
  17. Theoretical Modeling of Cognitive Dysfunction in Schizophrenia by Means of Errors and Corresponding Brain Networks. Yuliya Zaytseva, Iveta Fajnerová, Boris Dvořáček, Eva Bourama, Ilektra Stamou, Kateřina Šulcová, Jiří Motýl, Jiří Horáček, Mabel Rodriguez & Filip Španiel - 2018 - Frontiers in Psychology 9.
  18. Optimal Economic Modelling of Hybrid Combined Cooling, Heating, and Energy Storage System Based on Gravitational Search Algorithm-Random Forest Regression. Muhammad Shahzad Nazir, Sami ud Din, Wahab Ali Shah, Majid Ali, Ali Yousaf Kharal, Ahmad N. Abdalla & Padmanaban Sanjeevikumar - 2021 - Complexity 2021:1-13.
    The hybridization of two or more energy sources into a single power station is one of the widely discussed solutions to address the demand and supply havoc generated by renewable production, heating power, and cooling power, and the associated energy storage issues. Hybrid energy sources work based on the complementary existence of renewable sources. The combined cooling, heating, and power (CCHP) system is one of the significant systems and profits from its low environmental impact, high energy efficiency, low economic investment, and sustainability in the industry. This paper presents an economic model of a microgrid (MG) system containing the CCHP system and energy storage that considers the energy coupling and conversion characteristics and the effective characteristics of each microsource and energy storage unit. The random forest regression (RFR) model was optimized by the gravitational search algorithm (GSA). The test results show that the GSA-RFR model improves prediction accuracy and reduces the generalization error. The detail of the MG network and the energy storage architecture connected to the other renewable energy sources is discussed. The mathematical formulation of the energy coupling and energy flow of the MG network, including wind turbines, photovoltaics, the CCHP system, fuel cell, and energy storage devices, is presented. The system has been analysed under load peak cutting and valley filling in terms of the energy utilization index, the energy utilization rate, the heat pump, the natural gas consumption of the microgas turbine, and the energy storage unit. The energy efficiency costs were observed as 88.2% and 86.9% with heat pump and energy storage operation, compared with GSA-RFR-based operation costs of 93.2% and 93% in the summer and winter seasons, respectively. The simulation results confirm the rationality and economy of the proposed model.
  19. Using path diagrams as a structural equation modelling tool. Clark Glymour - unknown.
    Linear structural equation models (SEMs) are widely used in sociology, econometrics, biology, and other sciences. A SEM (without free parameters) has two parts: a probability distribution (in the Normal case specified by a set of linear structural equations and a covariance matrix among the “error” or “disturbance” terms), and an associated path diagram corresponding to the causal relations among variables specified by the structural equations and the correlations among the error terms. It is often thought that the path diagram is nothing more than a heuristic device for illustrating the assumptions of the model. However, in this paper, we will show how path diagrams can be used to solve a number of important problems in structural equation modelling.
  20. Using path diagrams as a structural equation modelling tool. Peter Spirtes, Thomas Richardson, Chris Meek & Richard Scheines - unknown.
    Linear structural equation models (SEMs) are widely used in sociology, econometrics, biology, and other sciences. A SEM (without free parameters) has two parts: a probability distribution (in the Normal case specified by a set of linear structural equations and a covariance matrix among the “error” or “disturbance” terms), and an associated path diagram corresponding to the functional composition of variables specified by the structural equations and the correlations among the error terms. It is often thought that the path diagram is nothing more than a heuristic device for illustrating the assumptions of the model. However, in this paper, we will show how path diagrams can be used to solve a number of important problems in structural equation modelling.
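    Both parts of a SEM can be shown in miniature. In the sketch below (coefficients invented), the structural equations generate the distribution and the path diagram is recorded as edges; the simulated covariance of x and z matches the product of the path coefficients along x → y → z, the kind of fact that path-diagram methods exploit analytically.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
e_x, e_y, e_z = rng.normal(size=(3, n))   # independent disturbance terms

x = e_x                                   # structural equations:
y = 0.8 * x + e_y                         #   x -> y
z = 0.5 * y + e_z                         #   y -> z

path_diagram = [("x", "y"), ("y", "z")]   # edges mirror the equations

print(np.cov(x, z)[0, 1])                 # approx. 0.8 * 0.5 = 0.4
```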
  21. Five-Year-Olds’ Systematic Errors in Second-Order False Belief Tasks Are Due to First-Order Theory of Mind Strategy Selection: A Computational Modeling Study. Burcu Arslan, Niels A. Taatgen & Rineke Verbrugge - 2017 - Frontiers in Psychology 8.
  22. Emotion against reason? Self-control conflict as self-modelling rivalry. J. M. Araya - 2024 - Synthese 204 (1):1-21.
    Divided-mind approaches to the conflict involved in self-control are pervasive. According to an influential version of the divided-mind approach, self-control conflict is a dispute between affective reactions and “cold” cognitive processes. I argue that divided-mind approaches are based on problematic bipartite architectural assumptions. Thus views that understand self-control as “control of the self” might be better suited to account for self-control. I subsequently aim to expand on this kind of view. I suggest that self-control conflict involves a rivalry between narrative self-models aimed at reducing error, analogous to model rivalry in binocular rivalry paradigms. This approach straightforwardly accounts for the sense of conflict that is characteristic of self-control within a unified-mind approach, and among its other explanatory advantages, it directly aligns with current views that account for addiction in terms of maladaptive self-representational processes.
  23. Computer Vision with Error Estimation for Reduced Order Modeling of Macroscopic Mechanical Tests. Franck Nguyen, Selim M. Barhli, Daniel Pino Muñoz & David Ryckelynck - 2018 - Complexity 2018:1-10.
  24. Computer modeling and simulation: towards epistemic distinction between verification and validation. Vitaly Pronskikh - unknown.
    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling and simulation needs to be made. Holding on to that distinction, I propose to relate verification to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
  25. Lethal mutagenesis, error thresholds, and the fight against viruses: Rigorous modeling is facilitated by a firm physical background. Peter Schuster - 2011 - Complexity 17 (2):5-9.
  26. Fuzzy Adaptation Algorithms’ Control for Robot Manipulators with Uncertainty Modelling Errors. Yongqing Fan, Keyi Xing & Xiangkui Jiang - 2018 - Complexity 2018:1-8.
    A novel fuzzy control scheme with adaptation algorithms is developed for the robot manipulator system. First, one adjustable parameter is introduced in the fuzzy logic system, with the robot manipulator system with uncertain nonlinear terms as the master device and a reference model dynamic system as the slave robot system. To overcome limitations such as the online learning computation burden and the logic structure of conventional fuzzy logic systems, a single parameter is used in the fuzzy logic system, which composes a fuzzy logic system with updated parameter laws and forms a new adaptation-algorithm controller. The error closed-loop dynamical system can be stabilized based on Lyapunov analysis, the online learning computation burden can be reduced greatly, and fuzzy logic systems with or without fuzzy rules are both suited. Finally, the effectiveness of the proposed approach is shown in a simulation example.
  27. The limits of probability modelling: A serendipitous tale of goldfish, transfinite numbers, and pieces of string. [REVIEW] Ranald R. Macdonald - 2000 - Mind and Society 1 (2):17-38.
    This paper is about the differences between probabilities and beliefs and why reasoning should not always conform to probability laws. Probability is defined in terms of urn models from which probability laws can be derived. This means that probabilities are expressed in rational numbers, they suppose the existence of veridical representations and, when viewed as parts of a probability model, they are determined by a restricted set of variables. Moreover, probabilities are subjective, in that they apply to classes of events that have been deemed (by someone) to be equivalent, rather than to unique events. Beliefs on the other hand are multifaceted, interconnected with all other beliefs, and inexpressible in their entirety. It will be argued that there are not sufficient rational numbers to characterise beliefs by probabilities and that the idea of a veridical set of beliefs is questionable. The concept of a complete probability model based on Fisher's notion of identifiable subsets is outlined. It is argued that to be complete a model must be known to be true. This can never be the case because whatever a person supposes to be true must be potentially modifiable in the light of new information. Thus to infer that an individual's probability estimate is biased it is necessary not only to show that the estimate differs from that given by a probability model, but also to assume that this model is complete, and completeness is not empirically verifiable. It follows that probability models and Bayes theorem are not necessarily appropriate standards for people's probability judgements. The quality of a probability model depends on how reasonable it is to treat some existing uncertainty as if it were equivalent to that in a particular urn model and this cannot be determined empirically. Bias can be demonstrated in estimates of proportions of finite populations such as in the false consensus effect. However the modification of beliefs by ad hoc methods like Tversky and Kahneman's heuristics can be justified, even though this results in biased judgements. This is because of pragmatic factors such as the cost of obtaining and taking account of additional information which are not included even in a complete probability model. Finally, an analogy is drawn between probability models and geometric figures. Both idealisations are useful but qualitatively inadequate characterisations of nature. A difference between the two is that the size of any error can be limited in the case of the geometric figure in a way that is not possible in a probability model.
  28. Diagnosing errors in climate model intercomparisons. Ryan O’Loughlin - 2023 - European Journal for Philosophy of Science 13 (2):1-29.
    I examine error diagnosis (model-model disagreement) in climate model intercomparisons, including its difficulties, fruitful examples, and prospects for streamlining error diagnosis. I suggest that features of climate model intercomparisons pose a more significant challenge for error diagnosis than do features of individual model construction and complexity. Such features of intercomparisons include, e.g., the number of models involved, how models from different institutions interrelate, and what scientists know about each model. By considering numerous examples in the climate modeling literature, I distill general strategies (e.g., employing physical reasoning and using dimension reduction techniques) used to diagnose model error. Based on these examples, I argue that an error repertoire could be beneficial for improving error diagnosis in climate modeling, although constructing one faces several difficulties. Finally, I suggest that the practice of error diagnosis demonstrates that scientists have a tacit-yet-working understanding of their models which has been under-appreciated by some philosophers.
  29. Graphical causal modeling and error statistics: exchanges with Clark Glymour. Aris Spanos - 2009 - In Deborah G. Mayo & Aris Spanos, Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. New York: Cambridge University Press. p. 364.
  30. Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning (ML) binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is underdetermined by the available data, and that this makes it necessary for ML modellers to make social value judgments in determining the error costs used in ML optimization. I thus suggest that the assessment of the inductive risk with respect to the social values of the intended users is an integral part of the construction and evaluation of ML classification models. I also discuss the implications of this conclusion for the philosophical debate concerning inductive risk.
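    The role of the error costs mentioned above can be illustrated with a one-screen calculation (mine, not the paper's): once false negatives are costed more heavily than false positives, the expected-cost-minimizing decision threshold moves well below 0.5. The risks and costs below are invented.

```python
import numpy as np

def expected_cost(threshold, risk, fn_cost, fp_cost):
    """Mean cost of thresholding predicted disease probabilities `risk`."""
    positive = risk >= threshold
    # Missed cases cost fn_cost; false alarms cost fp_cost.
    return (risk * fn_cost * ~positive + (1 - risk) * fp_cost * positive).mean()

risk = np.linspace(0.01, 0.99, 99)             # hypothetical predicted risks
grid = np.linspace(0.05, 0.95, 19)             # candidate thresholds
costs = [expected_cost(t, risk, fn_cost=10.0, fp_cost=1.0) for t in grid]
print(grid[int(np.argmin(costs))])             # near fp/(fp+fn) = 1/11, not 0.5
```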
  31. Error Statistics Using the Akaike and Bayesian Information Criteria. Henrique Cheng & Beckett Sterner - forthcoming - Erkenntnis.
    Many biologists, especially in ecology and evolution, analyze their data by estimating fits to a set of candidate models and selecting the best model according to the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC). When the candidate models represent alternative hypotheses, biologists may want to limit the chance of a false positive to a specified level. Existing model selection methodology, however, allows for only indirect control over error rates by setting a threshold for the difference in AIC scores. We present a novel theoretical framework for parametric Neyman-Pearson (NP) model selection using information criteria that does not require a pre-data null and applies to three or more non-nested models simultaneously. We apply the theoretical framework to the Error Control for Information Criteria (ECIC) procedure introduced by Cullan et al. (J Appl Stat 47: 2565–2581, 2019), and we show it shares many of the desirable properties of AIC-type methods, including false positive and negative rates that converge to zero asymptotically. We discuss implications for the compatibility of evidentialist and severity-based approaches to evidence in philosophy of science.
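    For readers unfamiliar with the delta-AIC thresholding mentioned above, a minimal sketch follows, with made-up log-likelihoods and parameter counts; the ECIC procedure itself goes well beyond this.

```python
import numpy as np

log_liks = np.array([-120.3, -118.9, -119.8])  # maximized log-likelihoods
ks = np.array([2, 4, 3])                       # parameters per candidate model

aic = 2 * ks - 2 * log_liks                    # AIC = 2k - 2 ln L
delta = aic - aic.min()                        # differences from the best model

# A common rule of thumb treats models within delta < 2 as competitive.
print(aic, delta, np.where(delta < 2)[0])
```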
  32. Modeling Misretrieval and Feature Substitution in Agreement Attraction: A Computational Evaluation. Dario Paape, Serine Avetisyan, Sol Lago & Shravan Vasishth - 2021 - Cognitive Science 45 (8):e13019.
    We present computational modeling results based on a self‐paced reading study investigating number attraction effects in Eastern Armenian. We implement three novel computational models of agreement attraction in a Bayesian framework and compare their predictive fit to the data using k‐fold cross‐validation. We find that our data are better accounted for by an encoding‐based model of agreement attraction, compared to a retrieval‐based model. A novel methodological contribution of our study is the use of comprehension questions with open‐ended responses, so that both misinterpretation of the number feature of the subject phrase and misassignment of the thematic subject role of the verb can be investigated at the same time. We find evidence for both types of misinterpretation in our study, sometimes in the same trial. However, the specific error patterns in our data are not fully consistent with any previously proposed model.
  33. Modeling Truth. Paul Teller - 2017 - Philosophia 45 (1):143-161.
    Many in philosophy understand truth in terms of precise semantic values, true propositions. Following Braun and Sider, I say that in this sense almost nothing we say is, literally, true. I take the stand that this account of truth nonetheless constitutes a vitally useful idealization in understanding many features of the structure of language. The Fregean problem discussed by Braun and Sider concerns issues about application of language to the world. In understanding these issues I propose an alternative modeling tool summarized in the idea that inaccuracy of statements can be accommodated by their imprecision. This yields a pragmatist account of truth, but one not subject to the usual counterexamples. The account can also be viewed as an elaborated error theory. The paper addresses some prima facie objections and concludes with implications for how we address certain problems in philosophy.
  34. Modeling and PID control of quadrotor UAV based on machine learning. Pradeep Kumar Singh, Anton Pljonkin & Lirong Zhou - 2022 - Journal of Intelligent Systems 31 (1):1112-1122.
    The aim of this article is to discuss the modeling and control method of the quadrotor unmanned aerial vehicle (UAV). In the modeling process, mechanism modeling and experimental testing are combined; in particular, the motor and propeller are modeled in detail. Based on an understanding of the body structure and flight principle of the quadrotor UAV, the Newton–Euler method is used to analyze the dynamics of the quadrotor UAV, and the mathematical model of the UAV is established under small-angle rotation. A PID controller is used to control it. First, the attitude angle of the model is controlled by PID, and on this basis, the speed in each direction is controlled by PID. Then, the PID control of the quadrotor aircraft with center-of-gravity offset is simulated in MATLAB. The results show that the pitch angle and roll angle can be controlled together at 5 degrees without center-of-gravity deviation, and the PID can effectively control the control quantity and achieve the desired effect in a short time. The classical BP algorithm, the classical GA-BP algorithm, and the improved GA-BP algorithm were each trained with a total of 150 sets of training data; the training function uses Levenberg-Marquardt, and the performance function uses mean squared error. Against the same background noise, the improved GA-BP algorithm has the highest detection rate, the classical GA-BP algorithm is second, and the classical BP algorithm is worst. The simulation results show that the PID control law can effectively control the attitude angle and speed of the rotor UAV in the case of center-of-gravity deviation.
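    The attitude-angle PID loop described above reduces to a few lines. The sketch below uses an invented toy plant and gains; it is not the paper's quadrotor model or its MATLAB simulation.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One step of a discrete PID controller; state = (integral, prev_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

setpoint, angle, rate = 5.0, 0.0, 0.0   # target pitch of 5 degrees
state, dt = (0.0, 0.0), 0.01
for _ in range(3000):                   # 30 seconds of simulated time
    u, state = pid_step(setpoint - angle, state, kp=2.0, ki=0.5, kd=0.8, dt=dt)
    rate += u * dt                      # toy double-integrator plant
    angle += rate * dt
print(round(angle, 2))                  # settles near the 5-degree target
```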
  35. Modeling the Developmental Patterning of Finiteness Marking in English, Dutch, German, and Spanish Using MOSAIC. Daniel Freudenthal, Julian M. Pine, Javier Aguado-Orea & Fernand Gobet - 2007 - Cognitive Science 31 (2):311-341.
    In this study, we apply MOSAIC (model of syntax acquisition in children) to the simulation of the developmental patterning of children's optional infinitive (OI) errors in 4 languages: English, Dutch, German, and Spanish. MOSAIC, which has already simulated this phenomenon in Dutch and English, now implements a learning mechanism that better reflects the theoretical assumptions underlying it, as well as a chunking mechanism that results in frequent phrases being treated as 1 unit. Using 1 identical model that learns from child‐directed speech, we obtain a close quantitative fit to the data from all 4 languages despite there being considerable cross‐linguistic and developmental variation in the OI phenomenon. MOSAIC successfully simulates the difference between Spanish (a pro‐drop language in which OI errors are virtually absent) and obligatory subject languages that do display the OI phenomenon. It also highlights differences in the OI phenomenon across German and Dutch, 2 closely related languages whose grammar is virtually identical with respect to the relation between finiteness and verb placement. Taken together, these results suggest that (a) cross‐linguistic differences in the rates at which children produce OIs are graded, quantitative differences that closely reflect the statistical properties of the input they are exposed to and (b) theories of syntax acquisition need to consider more closely the role of input characteristics as determinants of quantitative differences in the cross‐linguistic patterning of phenomena in language acquisition.
  36. Modeling Response Time and Responses in Multidimensional Health Measurement. Chun Wang, David J. Weiss & Shiyang Su - 2019 - Frontiers in Psychology 10.
    This study explored calibrating a large item bank for use in multidimensional health measurement with computerized adaptive testing, using both item responses and response time (RT) information. The Activity Measure for Post-Acute Care is a patient-reported outcomes measure comprised of three correlated scales (Applied Cognition, Daily Activities, and Mobility). All items from each scale are Likert type, so that a respondent chooses a response from an ordered set of four response options. The most appropriate item response theory model for analyzing and scoring these items is the multidimensional graded response model (MGRM). During the field testing of the items, an interviewer read each item to a patient and recorded, on a tablet computer, the patient's responses, while the software recorded RTs. Due to the large item bank with over 300 items, data collection was conducted in four batches with a common set of anchor items to link the scale. Van der Linden's (2007) hierarchical modeling framework was adopted. Several models, with or without interviewer as a covariate and with or without interaction between interviewer and items, were compared for each batch of data. It was found that the model with the interaction between interviewer and item, when the interaction effect was constrained to be proportional, fit the data best. Therefore, the final hierarchical model, with a lognormal model for RT and the MGRM for response data, was fitted to all batches of data via a concurrent calibration. Evaluation of parameter estimates revealed that (1) adding response time information did not affect the item parameter estimates and their standard errors significantly; (2) adding response time information helped reduce the standard error of patients' multidimensional latent trait estimates, but adding interviewer as a covariate did not result in further improvement. Implications of the findings for follow-up adaptive test delivery design are discussed.
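    The response-time half of the final model is van der Linden's lognormal RT model, whose log density fits in a few lines; the parameter values below are invented, and the MGRM response part is omitted.

```python
import numpy as np

def log_rt_density(rt, speed, time_intensity, alpha):
    """Lognormal RT model: log RT is normal with mean (time_intensity - speed)
    and standard deviation 1/alpha (alpha is the item's time discrimination)."""
    z = alpha * (np.log(rt) - (time_intensity - speed))
    return np.log(alpha) - np.log(rt) - 0.5 * np.log(2 * np.pi) - 0.5 * z**2

# Log density of a 6-second response under two hypothetical person speeds.
print(log_rt_density(6.0, speed=0.5, time_intensity=1.9, alpha=1.5))
print(log_rt_density(6.0, speed=-0.5, time_intensity=1.9, alpha=1.5))
```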
  37. The error statistical philosopher as normative naturalist. Deborah Mayo & Jean Miller - 2008 - Synthese 163 (3):305-314.
    We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan's (American Philosophical Quarterly 24:19–31, 1987; Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest another approach for meta-methodology based on a conglomeration of tools and strategies (from statistical modeling, experimental design, and related fields) that affords forward-looking procedures for learning from error and for controlling error. The resulting “error statistical” appraisal is empirical—methods are appraised by examining their capacities to control error. At the same time, this account is normative, in that the strategies that pass muster are claims about how actually to proceed in given contexts to reach reliable inferences from limited data.
  38. Belief Systems and the Modeling Relation. Roberto Poli - 2016 - Foundations of Science 21 (1):195-206.
    The paper presents the most general aspects of scientific modeling and shows that social systems naturally include different belief systems. Belief systems differ in a variety of respects, most notably in the selection of suitable qualities to encode and the internal structure of the observables. The following results emerge from the analysis: conflict is explained by showing that different models encode different qualities, which implies that they model different realities; explicitly connecting models to the realities that they encode makes it possible to clarify the relations among models; by understanding that social systems are complex one knows that there is no chance of developing a maximal model of the system; the distinction among different levels of depth implicitly includes a strategy for inducing change; identity-preserving models are among the most difficult to modify; since models do not customarily generate internal signals of error, strategies with which to determine when models are out of synch with their situations are especially valuable; changing the form of power from a zero sum game to a positive sum game helps transform the nature of conflicts.
  39. Modeling Cuteness: Moving towards a Biosemiotic Model for Understanding the Perception of Cuteness and Kindchenschema. Jason Mario Dydynski - 2020 - Biosemiotics 13 (2):223-240.
    This research seeks to expand on the current literature surrounding scientific and aesthetic concepts of cuteness through a biosemiotic lens. By first re-evaluating Konrad Lorenz's Kindchenschema, and identifying the importance of schematic vs featural perception, we identify the presence of a series of perceptual errors that underlie existing research on cuteness. There is, then, a need to better understand the cognitive structure underlying one's perception of cuteness. We go on to employ the methodological framework of Modeling Systems Theory to identify and establish the forms that underlie both the encoding and decoding of cute phenomena. In redefining cuteness as a cohesive code, and establishing Kindchenschema as a schematic metaform, we set the foundation for the incorporation of biological and cultural theories of cuteness. This research offers an initial methodological framework for the examination of cute artifacts that can be utilized in the fields of normative aesthetics, marketing, and design.
  40. Computer Modeling and Simulation: Increasing Reliability by Disentangling Verification and Validation. Vitaly Pronskikh - 2019 - Minds and Machines 29 (1):169-186.
    Verification and validation of computer codes and models used in simulations are two aspects of scientific practice of high importance that recently have been discussed widely by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. Because complex simulations are generally opaque to a practitioner, the Duhem problem can arise with verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of a failure. I argue that a clear distinction between computer modeling and simulation has to be made to disentangle verification and validation. Drawing on that distinction, I suggest associating modeling with verification, and simulation, which shares common epistemic strategies with experimentation, with validation. To explain the reasons for their entanglement in practice, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I examine an approach to mitigate the Duhem problem for verification and validation that is generally applicable in practice and is based on differences in epistemic strategies and scopes. Based on this analysis, I suggest two strategies to increase the reliability of simulation results, namely, avoiding alterations of verified models at the validation stage and performing simulations of the same target system using two or more different models. In response to Winsberg's claim that verification and validation are entangled, I argue that by deploying the methodology proposed in this work it is possible to mitigate the inseparability of V&V in many if not all domains where modeling and simulation are used.
  41. Physical modeling applies to physiology, too. Vincent Hayward - 1992 - Behavioral and Brain Sciences 15 (2):342-343.
    A physical model was utilized to show that the neural system can memorize a target position and is able to cause motor and sensory events that move the arm to a target with more accuracy. However, this cannot indicate in which coordinates the necessary computations are carried out. Turning off the lights causes the error to increase, which is accomplished by cutting off one feedback path. The geometrical properties of arm kinematics and the properties of the kinesthetic and visual sensorial systems should be better known before inferences about higher levels of processing can be drawn.
  42. Brainwave Phase Stability: Predictive Modeling of Irrational Decision. Zu-Hua Shan - 2022 - Frontiers in Psychology 13.
    A predictive model applicable in both neurophysiological and decision-making studies is proposed, bridging the gap between psychological/behavioral and neurophysiological studies. Supposing that electromagnetic waves are carriers of decision-making, electromagnetic waves with the same frequency, individual amplitudes, and constant phases triggered by conditions interfere with each other, and the resultant intensity determines the probability of the decision. Accordingly, a brainwave-interference decision-making model is built mathematically and empirically tested with neurophysiological and behavioral data. Event-related potential data confirmed the stability of the phase differences in a given decision context. Behavioral data analysis shows that phase stability exists across categorization-decision, two-stage gambling, and prisoner's dilemma decisions. Irrational decisions occurring in those experiments are actually rational, as their phases can be quantitatively derived from the phases of the riskiest and safest choices. Model fitting results reveal that the root-mean-square deviations between the fitted and actual phases of irrational decisions are less than 10°, and the mean absolute percentage errors of the fitted probabilities are less than 0.06. The proposed model is similar in mathematical form to the quantum modeling approach, but endowed with a physiological/psychological connection and predictive ability, and promising for the integration of neurophysiological and behavioral research to explore the origin of the decision.
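    The interference rule summarized above has a standard closed form, sketched here with invented amplitudes and phases rather than the paper's fitted values: two waves of equal frequency with amplitudes a1 and a2 and phase difference dphi superpose to intensity a1^2 + a2^2 + 2*a1*a2*cos(dphi), and choice probabilities are taken proportional to such intensities.

```python
import numpy as np

def intensity(a1, a2, dphi):
    """Resultant intensity of two interfering equal-frequency waves."""
    return a1**2 + a2**2 + 2 * a1 * a2 * np.cos(dphi)

# Two candidate decisions with assumed amplitudes and phase differences.
raw = [intensity(1.0, 0.8, np.deg2rad(30)),
       intensity(1.0, 0.8, np.deg2rad(150))]
probs = np.array(raw) / sum(raw)   # normalize intensities to probabilities
print(probs)                       # the nearly in-phase option dominates
```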
  43. Modeling Semantic Containment and Exclusion in Natural Language Inference. Christopher D. Manning - unknown.
    We propose an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence. We evaluate our system on the FraCaS test suite, and achieve a 27% reduction in error from previous work. We also show that hybridizing an existing RTE system with our natural logic system yields significant gains on the RTE3 test suite.
  44. Structural Equation Modeling of Vocabulary Size and Depth Using Conventional and Bayesian Methods. Rie Koizumi & Yo In’Nami - 2020 - Frontiers in Psychology 11.
    In classifications of vocabulary knowledge, vocabulary size and depth have often been separately conceptualized (Schmitt, 2014). Although size and depth are known to be substantially correlated, it is not clear whether they are a single construct or two separate components of vocabulary knowledge (Yanagisawa & Webb, 2020). This issue has not been addressed extensively in the literature and can be better examined using structural equation modeling (SEM), with measurement error modeled separately from the construct of interest. The current (...) study reports on conventional and Bayesian SEM approaches (e.g., Muthén & Asparouhov, 2012) to examine the factor structure of the size and depth of second language vocabulary knowledge of Japanese adult learners of English. A total of 255 participants took five vocabulary tests. One test was designed to measure vocabulary size in terms of the number of words known, while the remaining four were designed to measure vocabulary depth in terms of word association, polysemy, and collocation. All tests used a multiple-choice format. The size test was divided into three subtests according to word frequency. Results from conventional and Bayesian SEM show that a correlated two-factor model of size and depth, with three and four indicators respectively, fits better than a single-factor model of size and depth. In the two-factor model, vocabulary size and depth were strongly correlated (r = .945 for conventional SEM and .943 for Bayesian SEM with cross-loadings), but they were distinct. The implications of these findings are discussed. (shrink)
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark   2 citations  
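A sketch of how the competing factor structures above might be specified in lavaan-style syntax, here via the Python semopy package. The indicator names and the data file are hypothetical placeholders; the comparison of interest is the correlated two-factor model against a single-factor model.

```python
import pandas as pd
import semopy  # assumes the semopy package (lavaan-style SEM in Python)

# Size: three frequency-band subtests; depth: association, polysemy, collocation.
TWO_FACTOR = """
size  =~ size_high + size_mid + size_low
depth =~ assoc + poly1 + poly2 + colloc
size ~~ depth
"""

ONE_FACTOR = """
vocab =~ size_high + size_mid + size_low + assoc + poly1 + poly2 + colloc
"""

data = pd.read_csv("vocab_scores.csv")  # placeholder path
for description in (TWO_FACTOR, ONE_FACTOR):
    model = semopy.Model(description)
    model.fit(data)
    print(semopy.calc_stats(model))  # compare CFI, RMSEA, etc. across models
```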
  45.  38
    Mathematical Modeling of Respiratory System Mechanics in the Newborn Lamb.Virginie Le Rolle,Nathalie Samson,Jean-Paul Praud &Alfredo I. Hernández -2013 -Acta Biotheoretica 61 (1):91-107.
    In this paper, a mathematical model of respiratory mechanics is used to reproduce experimental signal waveforms acquired from three newborn lambs. As the main challenge is to determine lamb-specific parameters, a sensitivity analysis was performed to find the most influential parameters, which were then identified using an evolutionary algorithm. Results show a close match between experimental and simulated pressure and flow waveforms obtained during spontaneous ventilation, as well as pleural pressure variations acquired during the application of positive pressure, with root (...) mean square errors of 0.0119, 0.0052, and 0.0094, respectively. The identified parameters are discussed in light of previous knowledge of respiratory mechanics in the newborn. (shrink)
    Direct download(3 more)  
     
    Export citation  
     
    Bookmark   1 citation  
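The identification procedure described above (tune the most influential model parameters so that simulated waveforms match recorded ones) pairs a root-mean-square error objective with an evolutionary search. A minimal sketch, where simulate stands in for a respiratory-mechanics model and is an assumed callable rather than the authors' code:

```python
import numpy as np

def rmse(simulated, observed):
    return np.sqrt(np.mean((simulated - observed) ** 2))

def identify_parameters(simulate, observed, bounds, pop=50, gens=200, seed=0):
    """Toy evolutionary search: keep the best fifth of each generation and
    resample around it with Gaussian mutation, minimizing waveform RMSE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds: list of (lo, hi) pairs
    population = rng.uniform(lo, hi, size=(pop, lo.size))
    for _ in range(gens):
        fitness = np.array([rmse(simulate(theta), observed) for theta in population])
        elite = population[np.argsort(fitness)[: pop // 5]]
        parents = elite[rng.integers(len(elite), size=pop)]
        population = np.clip(parents + rng.normal(0.0, 0.05 * (hi - lo),
                                                  size=(pop, lo.size)), lo, hi)
    fitness = np.array([rmse(simulate(theta), observed) for theta in population])
    return population[np.argmin(fitness)]
```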
  46. Foundational Issues in Statistical Modeling: Statistical Model Specification.Aris Spanos -2011 -Rationality, Markets and Morals 2:146-178.
    Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily because goodness-of-fit/prediction measures and other substantive and pragmatic criteria are of questionable value when the estimated model is statistically misspecified. Foisting (...) one's favorite model on the data often yields estimated models which are both statistically and substantively misspecified, but one has no way to delineate between the two sources of error and apportion blame. The paper argues that the error-statistical approach can address this Duhemian ambiguity by distinguishing between statistical and substantive premises and viewing empirical modeling in a piecemeal way, with a view to delineating the various issues more effectively. It is also argued that Hendry's general-to-specific procedure does a much better job in model selection than the theory-driven and Akaike-type procedures, primarily because of its error-statistical underpinnings. (shrink)
     
    Export citation  
     
    Bookmark  
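In the error-statistical spirit of the entry above, one probes the statistical premises of an estimated model directly instead of trusting goodness-of-fit alone. A minimal sketch with statsmodels on toy data; the two tests shown (residual autocorrelation, heteroskedasticity) are common misspecification probes, not the paper's specific prescription:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox, het_breuschpagan

# Toy regression: fit first, then test the premises of the error term.
rng = np.random.default_rng(1)
X = sm.add_constant(np.arange(100.0))
y = 0.5 * np.arange(100.0) + rng.normal(size=100)
fit = sm.OLS(y, X).fit()

print(acorr_ljungbox(fit.resid, lags=[5]))  # are residuals autocorrelated?
print(het_breuschpagan(fit.resid, X))       # is the error variance constant?
```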
  47.  13
    Bifactor exploratory structural equation modeling: A meta-analytic review of model fit.Andreas Gegenfurtner -2022 -Frontiers in Psychology 13.
    Multivariate behavioral research often focuses on latent constructs, such as motivation, self-concept, or wellbeing, that cannot be directly observed. Typically, these latent constructs are measured with items in standardized instruments. To test the factorial structure and multidimensionality of latent constructs in educational and psychological research, Morin et al. proposed bifactor exploratory structural equation modeling (B-ESEM). This meta-analytic review aimed to estimate the extent to which B-ESEM model fit differs from that of other model representations, including confirmatory factor analysis (CFA), exploratory structural equation modeling (ESEM), hierarchical (...) CFA, hierarchical ESEM, and bifactor-CFA. The study domains included learning and instruction, motivation and emotion, self and identity, depression and wellbeing, and interpersonal relations. The meta-analyzed fit indices were the χ2/df ratio, the comparative fit index, the Tucker-Lewis index, the root mean square error of approximation, and the standardized root mean squared residual. The findings of this meta-analytic review indicate that B-ESEM model fit is superior to the fit of the reference models. Furthermore, the results suggest that model fit is sensitive to sample size, item number, and the number of specific and general factors in a model. (shrink)
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark  
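For reference, the RMSEA meta-analyzed above is a simple function of a model's chi-square, degrees of freedom, and sample size. A small sketch of the standard formula (with the caveat that software packages vary between N and N − 1 in the denominator):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation: excess misfit per degree
    of freedom, scaled by sample size; 0 when chi2 <= df."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# e.g., chi2 = 123.4 on df = 60 with n = 400 -> RMSEA ~ 0.051 ("close fit" < .06)
print(round(rmsea(123.4, 60, 400), 3))
```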
  48.  35
    Structural Equation Modeling Analysis on Associations of Moral Distress and Dimensions of Organizational Culture in Healthcare: A Cross-Sectional Study of Healthcare Professionals.Tessy A. Thomas,Shelley Kumar,F. Daniel Davis,Peter Boedeker &Satid Thammasitboon -2024 -AJOB Empirical Bioethics 15 (2):120-132.
    Objective: Moral distress is a complex phenomenon experienced by healthcare professionals. This study examined the relationships between key dimensions of Organizational Culture in Healthcare (OCHC) (perceived psychological safety, ethical climate, and patient safety) and healthcare professionals' perception of moral distress. Design: Cross-sectional survey. Setting: Pediatric and adult critical care medicine, and adult hospital medicine, in the United States. Participants: Physicians (n = 260), nurses (n = 256), and advanced practice providers (n = 110) participated in the study. Main outcome measures: Three dimensions of OCHC were (...) measured using validated questionnaires: Olson's Hospital Ethical Climate Survey, the Agency for Healthcare Research and Quality's Patient Safety Culture Survey, and Edmondson's Team Psychological Safety Survey. The perception of moral distress was measured using the Moral Distress Amidst a Pandemic Survey. The hypothesized relationships between the dimensions were tested with structural equation modeling (SEM). Results: Adequate model fit was achieved in the SEM: root-mean-square error of approximation = 0.072 (90% CI 0.069 to 0.075), standardized root mean square residual = 0.056, and comparative fit index = 0.926. Perceived psychological safety (β = −0.357, p < .001) and patient safety culture (β = −0.428, p < .001) were negatively related to the moral distress experience. There was no significant association between ethical climate and moral distress (β = 0.106, p = 0.319). Ethical climate, however, was highly correlated with patient safety culture (factor correlation = 0.82). Conclusions: We used a structural equation model to test a theoretical model of multi-dimensional OCHC and moral distress. Significant associations were found, supporting mitigating strategies that optimize psychological safety and patient safety culture to address moral distress among healthcare professionals. Future initiatives and studies should account for key dimensions of OCHC, with multi-pronged targets to preserve the moral well-being of individuals, teams, and organizations. (shrink)
    No categories
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark  
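The hypothesized structure above corresponds to a measurement model for four latent dimensions plus a structural regression of moral distress on the three culture dimensions. In lavaan-style syntax (all factor and indicator names below are hypothetical placeholders, not the study's items):

```python
import semopy  # assumes the semopy package, as in the earlier SEM sketch

# Measurement model for four latent dimensions, then the structural regression
# of moral distress on the three organizational-culture dimensions.
MODEL = """
psych_safety   =~ ps1 + ps2 + ps3
ethical_clim   =~ ec1 + ec2 + ec3
patient_safety =~ pc1 + pc2 + pc3
distress       =~ md1 + md2 + md3
distress ~ psych_safety + ethical_clim + patient_safety
"""

model = semopy.Model(MODEL)
# model.fit(survey_data)      # survey_data: one column per indicator
# semopy.calc_stats(model)    # then inspect RMSEA, SRMR, CFI as reported above
```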
  49.  58
    The Role of the Anterior Cingulate Cortex in Prediction Error and Signaling Surprise.William H. Alexander &Joshua W. Brown -2019 -Topics in Cognitive Science 11 (1):119-135.
    In the past two decades, reinforcement learning (RL) has become a popular framework for understanding brain function. A key component of RL models, prediction error, has been associated with neural signals throughout the brain, including subcortical nuclei, primary sensory cortices, and prefrontal cortex. Depending on the location in which activity is observed, the functional interpretation of prediction error may change: prediction errors may reflect a discrepancy between the anticipated and actual value of reward, a signal indicating the salience or (...) novelty of a stimulus, and many other interpretations. Anterior cingulate cortex (ACC) has long been recognized as a region involved in processing behavioral error, and recent computational models of the region have expanded this interpretation to include a more general role for the region in predicting likely events, broadly construed, and signaling deviations between expected and observed events. Ongoing modeling work investigating the interaction between ACC and additional regions involved in cognitive control suggests an even broader role for cingulate in computing a hierarchically structured surprise signal critical for learning models of the environment. The result is a predictive coding model of the frontal lobes, suggesting that predictive coding may be a unifying computational principle across the neocortex. (shrink)
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark   8 citations  
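The value-discrepancy reading of prediction error mentioned above is the temporal-difference (TD) error of reinforcement learning. A minimal sketch (state names and parameters are illustrative):

```python
def td_error(reward, value_next, value_current, gamma=0.9):
    # delta = r + gamma * V(s') - V(s): the gap between what was predicted
    # and what was observed, the quantity many neural signals are said to track
    return reward + gamma * value_next - value_current

def td_update(values, state, reward, next_state, alpha=0.1, gamma=0.9):
    delta = td_error(reward, values[next_state], values[state], gamma)
    values[state] += alpha * delta  # bigger surprise -> bigger update
    return delta

values = {"cue": 0.0, "outcome": 0.0}
print(td_update(values, "cue", reward=1.0, next_state="outcome"))  # delta = 1.0
```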
  50.  35
    Causal Modeling, Explanation and Severe Testing.Clark Glymour,Deborah G. Mayo &Aris Spanos -2009 - In Deborah G. Mayo & Aris Spanos,Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. New York: Cambridge University Press. pp. 331-375.