PhilPapers
  1. Generative AI models should include detection mechanisms as a condition for public release. Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell & Yoshua Bengio - 2023 - Ethics and Information Technology 25 (4):1-7.
    The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.
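The public query tool the abstract describes could take several forms; one design the authors discuss is retrieval-based detection, in which the provider logs every item its model generates and the tool answers membership queries. The sketch below illustrates that idea under stated assumptions: the class name, methods, and hash-based storage are all illustrative, not the paper's specification, and real deployments would need to handle paraphrased or partly generated content.

```python
import hashlib

class GeneratedContentRegistry:
    """Hypothetical retrieval-based detector: the provider records a hash of
    every item its model generates, and a public tool answers membership
    queries. One of several possible designs (watermarking is another)."""

    def __init__(self):
        self._hashes = set()

    def record(self, content: str) -> None:
        # Called by the provider at generation time.
        self._hashes.add(hashlib.sha256(content.encode("utf-8")).hexdigest())

    def was_generated(self, content: str) -> bool:
        # Public query: was this exact item produced by the model?
        return hashlib.sha256(content.encode("utf-8")).hexdigest() in self._hashes

registry = GeneratedContentRegistry()
registry.record("An AI-written paragraph.")
print(registry.was_generated("An AI-written paragraph."))   # True
print(registry.was_generated("A human-written paragraph.")) # False
```

Exact-match hashing only detects verbatim copies; this is why the paper treats the choice of detection mechanism (retrieval, watermarking, classifiers) as an open design question.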
  2. Human-AI coevolution. Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, János Kertész, Alistair Knott, Yannis Ioannidis, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor & Alessandro Vespignani - 2025 - Artificial Intelligence 339 (C):104244.
  3. AI content detection in the emerging information ecosystem: new obligations for media and tech companies. Alistair Knott, Dino Pedreschi, Toshiya Jitsuzumi, Susan Leavy, David Eyers, Tapabrata Chakraborti, Andrew Trotman, Sundar Sundareswaran, Ricardo Baeza-Yates, Przemyslaw Biecek, Adrian Weller, Paul D. Teal, Subhadip Basu, Mehmet Haklidir, Virginia Morini, Stuart Russell & Yoshua Bengio - 2024 - Ethics and Information Technology 26 (4):1-14.
    The world is about to be swamped by an unprecedented wave of AI-generated content. We need reliable ways of identifying such content, to supplement the many existing social institutions that enable trust between people and organisations and ensure social resilience. In this paper, we begin by highlighting an important new development: providers of AI content generators have new obligations to support the creation of reliable detectors for the content they generate. These new obligations arise mainly from the EU’s newly finalised AI Act, but they are enhanced by the US President’s recent Executive Order on AI, and by several considerations of self-interest. These new steps towards reliable detection mechanisms are by no means a panacea—but we argue they will usher in a new adversarial landscape, in which reliable methods for identifying AI-generated content are commonly available. In this landscape, many new questions arise for policymakers. Firstly, if reliable AI-content detection mechanisms are available, who should be required to use them? And how should they be used? We argue that new duties arise for media companies and for Web search companies in the deployment of AI-content detectors. Secondly, what broader regulation of the tech ecosystem will maximise the likelihood of reliable AI-content detectors? We argue for a range of new duties, relating to provenance-authentication protocols, open-source AI generators, and support for research and enforcement. Along the way, we consider how the production of AI-generated content relates to ‘free expression’, and discuss the important case of content that is generated jointly by humans and AIs.
  4. GLocalX - From Local to Global Explanations of Black Box AI Models. Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi & Fosca Giannotti - 2021 - Artificial Intelligence 294 (C):103457.
  5. Integrating induction and deduction for finding evidence of discrimination. Salvatore Ruggieri, Dino Pedreschi & Franco Turini - 2010 - Artificial Intelligence and Law 18 (1):1-43.
    We present a reference model for finding evidence of discrimination in datasets of historical decision records in socially sensitive tasks, including access to credit, mortgage, insurance, labor market and other benefits. We formalize the process of direct and indirect discrimination discovery in a rule-based framework, by modelling protected-by-law groups, such as minorities or disadvantaged segments, and contexts where discrimination occurs. Classification rules, extracted from the historical records, allow for unveiling contexts of unlawful discrimination, where the degree of burden over protected-by-law groups is evaluated by formalizing existing norms and regulations in terms of quantitative measures. The measures are defined as functions of the contingency table of a classification rule, and their statistical significance is assessed, relying on a large body of statistical inference methods for proportions. Key legal concepts and reasonings are then used to drive the analysis on the set of classification rules, with the aim of discovering patterns of discrimination, either direct or indirect. Analyses of affirmative action, favoritism and argumentation against discrimination allegations are also modelled in the proposed framework. Finally, we present an implementation, called LP2DD, of the overall reference model that integrates induction, through data mining classification rule extraction, and deduction, through a computational logic implementation of the analytical tools. The LP2DD system is put at work on the analysis of a dataset of credit decision records.
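The abstract's "measures defined as functions of the contingency table of a classification rule" can be made concrete with one standard example, risk difference: the rate of negative decisions for the protected group minus the rate for the unprotected group in the same context. This is a minimal sketch of one such measure, not the paper's full catalogue; the cell names and example counts are illustrative assumptions.

```python
def risk_difference(a: int, b: int, c: int, d: int) -> float:
    """Risk difference for a 2x2 contingency table of a rule's context:
       a = protected group, negative decision (e.g. credit denied)
       b = protected group, positive decision
       c = unprotected group, negative decision
       d = unprotected group, positive decision
    RD = p1 - p2, where p1 = a/(a+b) and p2 = c/(c+d); values well above 0
    flag the context as a candidate for discrimination analysis."""
    p1 = a / (a + b)  # denial rate, protected group
    p2 = c / (c + d)  # denial rate, unprotected group
    return p1 - p2

# Illustrative counts: 20% of protected applicants denied vs 10% of others.
rd = risk_difference(20, 80, 10, 90)
print(rd)  # ≈ 0.1
```

In the paper's framework such measures are then tested for statistical significance before legal reasoning is applied to the flagged rules.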
  6. Correction: AI content detection in the emerging information ecosystem: new obligations for media and tech companies. Alistair Knott, Dino Pedreschi, Toshiya Jitsuzumi, Susan Leavy, David Eyers, Tapabrata Chakraborti, Andrew Trotman, Sundar Sundareswaran, Ricardo Baeza-Yates, Przemyslaw Biecek, Adrian Weller, Paul D. Teal, Subhadip Basu, Mehmet Haklidir, Virginia Morini, Stuart Russell & Yoshua Bengio - 2024 - Ethics and Information Technology 26 (4):1-2.
  7. Give more data, awareness and control to individual citizens, and they will help COVID-19 containment. Mirco Nanni, Gennady Andrienko, Albert-László Barabási, Chiara Boldrini, Francesco Bonchi, Ciro Cattuto, Francesca Chiaromonte, Giovanni Comandé, Marco Conti, Mark Coté, Frank Dignum, Virginia Dignum, Josep Domingo-Ferrer, Paolo Ferragina, Fosca Giannotti, Riccardo Guidotti, Dirk Helbing, Kimmo Kaski, Janos Kertesz, Sune Lehmann, Bruno Lepri, Paul Lukowicz, Stan Matwin, David Megías Jiménez, Anna Monreale, Katharina Morik, Nuria Oliver, Andrea Passarella, Andrea Passerini, Dino Pedreschi, Alex Pentland, Fabio Pianesi, Francesca Pratesi, Salvatore Rinzivillo, Salvatore Ruggieri, Arno Siebes, Vicenc Torra, Roberto Trasarti, Jeroen van den Hoven & Alessandro Vespignani - 2021 - Ethics and Information Technology 23 (S1):1-6.
    The rapid dynamics of COVID-19 calls for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the “phase 2” of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large scale adoption by many countries. A centralized approach, where data sensed by the app are all sent to a nation-wide server, raises concerns about citizens’ privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and avoiding location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens’ “personal data stores”, to be shared separately and selectively, voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering for infected people in a privacy-preserving fashion; and, in turn this enables both contact tracing, and, the early detection of outbreak hotspots on more finely-granulated geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and allow the user to share spatio-temporal aggregates—if and when they want and for specific aims—with health authorities, for instance. Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to collective good in the measure they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society.
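The decentralized pattern the abstract advocates (data kept on-device, coarse spatio-temporal aggregates shared voluntarily and only after a positive test) can be sketched in a few lines. Everything here is an illustrative assumption — class and method names, the grid-cell coarsening, the hourly granularity — not the architecture the authors specify.

```python
from collections import Counter

class PersonalDataStore:
    """Sketch of an on-device personal data store: raw visits never leave
    the device; only coarsened aggregates are shared, and only if the user
    has tested positive and chooses to share."""

    def __init__(self):
        self._visits = []  # (lat, lon, hour) tuples, stored locally only

    def log_visit(self, lat: float, lon: float, hour: int) -> None:
        self._visits.append((lat, lon, hour))

    def share_aggregates(self, tested_positive: bool, cell_size: float = 0.1) -> dict:
        # Privacy gate: nothing is released unless the user tested positive.
        if not tested_positive:
            return {}
        # Coarsen locations to a grid cell before aggregating.
        coarse = [(round(lat / cell_size) * cell_size,
                   round(lon / cell_size) * cell_size, hour)
                  for lat, lon, hour in self._visits]
        return dict(Counter(coarse))  # cell -> visit count

pds = PersonalDataStore()
pds.log_visit(43.72, 10.40, 9)
pds.log_visit(43.72, 10.40, 9)
print(pds.share_aggregates(tested_positive=False))  # {} — nothing shared
```

The design choice mirrors the paper's argument: central infrastructure only ever handles aggregates from positive cases, which is what makes the approach scale.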
  8. Anonymity preserving sequential pattern mining. Anna Monreale, Dino Pedreschi, Ruggero G. Pensa & Fabio Pinelli - 2014 - Artificial Intelligence and Law 22 (2):141-173.
    The increasing availability of personal data of a sequential nature, such as time-stamped transaction or location data, enables increasingly sophisticated sequential pattern mining techniques. However, privacy is at risk if it is possible to reconstruct the identity of individuals from sequential data. Therefore, it is important to develop privacy-preserving techniques that support publishing of really anonymous data, without altering the analysis results significantly. In this paper we propose to apply the Privacy-by-design paradigm for designing a technological framework to counter the threats of undesirable, unlawful effects of privacy violation on sequence data, without obstructing the knowledge discovery opportunities of data mining technologies. First, we introduce a k-anonymity framework for sequence data, by defining the sequence linking attack model and its associated countermeasure, a k-anonymity notion for sequence datasets, which provides a formal protection against the attack. Second, we instantiate this framework and provide a specific method for constructing the k-anonymous version of a sequence dataset, which preserves the results of sequential pattern mining, together with several basic statistics and other analytical properties of the original data, including the clustering structure. A comprehensive experimental study on realistic datasets of process-logs, web-logs and GPS tracks is carried out, which empirically shows how, in our proposed method, the protection of privacy meets analytical utility.
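The core k-anonymity intuition for sequence data — no published sequence should single out fewer than k individuals — can be illustrated with a deliberately naive suppression step. This sketch only enforces k-anonymity over exact sequences; the paper's actual method is far more refined, guarding against subsequence linking attacks while preserving pattern-mining results. Names and data are illustrative.

```python
from collections import Counter

def k_anonymize_sequences(sequences: list, k: int) -> list:
    """Naive sketch: keep only sequences whose exact pattern is shared by
    at least k individuals, suppressing the rest. Real methods (as in the
    paper) generalize or transform sequences rather than just dropping them."""
    counts = Counter(tuple(s) for s in sequences)
    return [s for s in sequences if counts[tuple(s)] >= k]

# Illustrative location traces: the unique trace ["a", "c"] is suppressed.
traces = [["a", "b"], ["a", "b"], ["a", "c"]]
print(k_anonymize_sequences(traces, k=2))  # [['a', 'b'], ['a', 'b']]
```

Pure suppression illustrates the privacy guarantee but destroys utility, which is exactly the trade-off the paper's construction is designed to avoid.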