Phronesis, or practical wisdom, is a capacity whose possession enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. A critic might ask: why should we worry that AI undermines phronesis? Why can't we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge this objection but do not address it seriously. In this paper, we argue that there is a way to reconcile Eisikovits and Feldman with their critic by adopting the principle of epistemic heed, according to which we should exercise our rational capacity as much as possible while heeding a superintelligence's output whenever possible.
While AI systems are increasingly assuming roles traditionally occupied by human epistemic authorities (EAs), their epistemological status remains unclear. This paper aims to address this lacuna by assessing the potential for AI systems to be recognized as artificial epistemic authorities. In a first step, I examine the arguments against considering AI systems as EAs, in particular the established model of EAs as engaging in intentional belief transfer via testimony to laypeople, a process seemingly inapplicable to intentionless and beliefless AI. Despite this, AI systems exhibit striking epistemic parallels with human EAs, including epistemic asymmetry and opacity, which give rise to comparable challenges for both laypeople and AI users. These challenges include the identification problem (how to recognize reliable EAs/AI systems) and the deference problem (determining the appropriate epistemic stance towards EAs/AI systems). Faced with this dilemma, I discuss three possible solutions: (1) reject the concept of artificial EAs, (2) accept that AI can possess beliefs and intentions and thus align with the standard model, or (3) develop an alternative model that encompasses artificial EAs. I argue that while each option has its benefits and costs, a particularly strong case can be made for option (3).
Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented that uses vectors to determine whether a verb is fair or unfair. It uses the relational social ontologies already inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises: first, that individuals consider fair those acts that they would be willing to accept if done to themselves; second, that such a construal is ontologically reflected in Word Embeddings, by virtue of their ability to capture the dimensions of such a perception. These dimensions are responsibility vs. irresponsibility, gain vs. loss, reward vs. sanction, and joy vs. pain, combined into a single vector. The paper finds it possible to quantify and qualify a verb as fair or unfair by calculating the cosine similarity of the verb's embedding vector against FairVec, which represents the above dimensions. We apply this to GloVe and Word2Vec embeddings. Testing on a list of verbs produces an F1 score of 95.7, which is improved to 97.0. Lastly, a demonstration of the method's applicability to sentence measurement is carried out.
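As a rough illustration of the cosine-similarity step described in this abstract, the following Python sketch builds a FairVec-style direction vector from the four dimension pairs named above and scores individual verbs against it. The construction (summing the difference vectors of the paired words), the pre-trained GloVe model loaded via gensim, and the zero decision threshold are assumptions made here for illustration, not the authors' published implementation.

```python
# Minimal sketch, not the paper's implementation: build a fairness direction
# vector from the abstract's dimension pairs and score verbs by cosine similarity.
import numpy as np
import gensim.downloader as api

# Assumed setup: a small pre-trained GloVe model; the paper uses GloVe and Word2Vec.
model = api.load("glove-wiki-gigaword-50")

# Dimension pairs from the abstract, ordered fair pole first, unfair pole second.
pairs = [
    ("responsibility", "irresponsibility"),
    ("gain", "loss"),
    ("reward", "sanction"),
    ("joy", "pain"),
]

# FairVec-style vector: sum of the fair-minus-unfair difference vectors
# (assumed construction; the words are assumed to be in the model's vocabulary).
fair_vec = np.sum([model[fair] - model[unfair] for fair, unfair in pairs], axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fairness_score(verb, threshold=0.0):
    """Return the verb's similarity to the fairness direction and a fair/unfair label."""
    score = cosine(model[verb], fair_vec)
    return score, ("fair" if score > threshold else "unfair")

for verb in ["help", "reward", "steal", "punish"]:
    print(verb, fairness_score(verb))
```

Any pre-trained embedding with the relevant vocabulary could stand in for the GloVe model loaded here; the F1 scores reported in the abstract come from the authors' own setup and test list, not from this sketch.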