Liaison statement
Liaison statement from ETSI ISG SAI on Securing Artificial Intelligence
Additional information about IETF liaison relationships is available on the IETF webpage and the Internet Architecture Board liaison webpage.
| State | Posted |
|---|---|
| Submitted Date | 2019-12-09 |
| From Group | ETSI-ISG-SAI |
| From Contact | Sonia Compan <isgsupport@etsi.org> |
| To Group | IETF |
| To Contacts | The IETF Chair <chair@ietf.org> |
| Cc | The IESG <iesg@ietf.org> The IETF Chair <chair@ietf.org> |
| Response Contact | isgsupport@etsi.org |
| Purpose | For information |
| Attachments | SAI(19)001010r1_Liaison_statement_from_ETSI_ISG_SAI |

Body:

This is to announce that the Kick-off Meeting for the new ETSI ISG on Securing Artificial Intelligence (ISG SAI) was held on 23 October 2019.

The intent of the ISG SAI is to address three aspects of AI in the standards domain:

1. Securing AI from attack, e.g. where AI is a component in the system that needs defending.
2. Mitigating against AI, e.g. where AI is the ‘problem’ (or is used to improve and enhance other, more conventional attack vectors).
3. Using AI to enhance security measures against attack from other things, e.g. where AI is part of the ‘solution’ (or is used to improve and enhance more conventional countermeasures).

The ETSI ISG SAI aims to develop the technical knowledge that acts as a baseline in ensuring that artificial intelligence is secure. Stakeholders impacted by the activity of this group include end users, manufacturers, operators and governments.

At the first meeting the following New Work Items were agreed:

**AI Threat Ontology**

The purpose of this work item is to define what would be considered an AI threat and how it might differ from threats to traditional systems. The rationale for this work is that there is currently no common understanding of what constitutes an attack on AI and how it might be created, hosted and propagated. The AI Threat Ontology deliverable will seek to align terminology across the different stakeholders and multiple industries. This document will define what is meant by these terms in the context of cyber and physical security, with an accompanying narrative that should be readily accessible by both experts and less informed audiences across the multiple industries. Note that this threat ontology will address AI as a system, as an adversarial attacker, and as a system defender.

**Data Supply Chain Report**

Data is a critical component in the development of AI systems. This includes raw data as well as information and feedback from other systems and humans in the loop, all of which can be used to change the function of the system by training and retraining the AI. However, access to suitable data is often limited, creating a need to resort to less suitable sources of data. Compromising the integrity of training data has been demonstrated to be a viable attack vector against an AI system, which means that securing the supply chain of the data is an important step in securing the AI. This report will summarise the methods currently used to source data for training AI, along with the regulations, standards and protocols that can control the handling and sharing of that data. It will then provide a gap analysis on this information to scope possible requirements for standards for ensuring traceability and integrity in the data, its associated attributes, information and feedback, as well as the confidentiality of these.

**Security Testing of AI**

The purpose of this work item is to identify objectives, methods and techniques that are appropriate for security testing of AI-based components. The overall goal is to produce guidelines for security testing of AI and AI-based components, taking into account the different algorithms of symbolic and subsymbolic AI and addressing relevant threats from the work item “AI Threat Ontology”. Security testing of AI has some commonalities with security testing of traditional systems but presents new challenges and requires different approaches, due to:

(a) significant differences between symbolic and subsymbolic AI and traditional systems, which have strong implications for their security and for how to test their security properties;
(b) non-determinism, since AI-based systems may evolve over time (self-learning systems) and security properties may degrade;
(c) the test oracle problem, since assigning a test verdict is different and more difficult for AI-based systems, where not all expected results are known a priori; and
(d) data-driven algorithms, since, in contrast to traditional systems, (training) data shapes the behaviour of subsymbolic AI.

The scope of this work item is to cover the following topics (but is not limited to them):

- security testing approaches for AI
- testing data for AI from a security point of view
- security test oracles for AI
- definition of test adequacy criteria for security testing of AI
- test goals for security attributes of AI

and to provide guidelines for security testing of AI taking the abovementioned topics into account. The guidelines will use the results of the work item “AI Threat Ontology” to cover relevant threats for AI through security testing, and will also address challenges and limitations when testing AI-based systems. The work item starts with a state-of-the-art and gap analysis to identify what is currently possible in the area of security testing of AI and what the limitations are. The work will be coordinated with TC MTS.

The ISG is also discussing adoption of a work item on:

**Securing AI Problem Statement**

This work will define and prioritise potential AI threats along with recommended actions.

ETSI ISG SAI believes that this work will be of interest to many other technical standards groups and looks forward to engaging with such groups.