

PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection

Authors: Zhi Lu and Vrizlynn L. L. Thing

Affiliation: Cyber Security Strategic Technology Centre, ST Engineering, Singapore

Keyword(s): Cyber Security, Explainable AI, Malware Detection, Machine Learning.

Abstract: The explanation of an AI model’s prediction used to support decision making in cyber security is of critical importance, especially when an incorrect prediction can lead to severe damage or even the loss of lives and critical assets. However, most existing AI models lack the ability to explain their prediction results, despite their strong performance in most scenarios. In this work, we propose a novel explainable AI method, called PhilaeX, that provides a heuristic means of identifying an optimized subset of features to form a complete explanation of an AI model’s prediction. It first identifies the features that lead to the model’s borderline prediction and extracts those with positive individual contributions. The feature attributions are then quantified by optimizing a Ridge regression model. We verify explanation fidelity through two experiments. First, we assess the method’s ability to correctly identify the activated features in adversarial samples of Android malware, using the feature attribution values produced by PhilaeX. Second, deduction and augmentation tests are used to assess the fidelity of the explanations. The results show that PhilaeX correctly explains different types of classifiers and produces higher-fidelity explanations than state-of-the-art methods such as LIME and SHAP.

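The abstract only sketches the attribution pipeline, so the following is a minimal, illustrative sketch of how an attribution step of this kind could look in Python with scikit-learn. It is not the authors' implementation: the candidate-screening rule, the random perturbation scheme, and the names (attribute, model_predict, n_samples, alpha) are assumptions made for illustration; only the overall idea (keep features with a positive individual contribution, then fit a Ridge surrogate whose coefficients serve as attribution values) follows the abstract.

# Hypothetical sketch of a Ridge-regression attribution step, loosely
# following the abstract. Screening rule and perturbation scheme are
# assumptions, not the PhilaeX procedure from the paper.
import numpy as np
from sklearn.linear_model import Ridge

def attribute(model_predict, x, n_samples=500, alpha=1.0, seed=0):
    """Estimate attributions for one binary feature vector x.

    model_predict: callable mapping a 2-D feature array to class-1 scores.
    Returns a dict {feature index: attribution value}.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    base = model_predict(x[None, :])[0]

    # 1) Candidate screening: keep active features whose individual removal
    #    lowers the score, i.e. features with a positive contribution.
    candidates = []
    for j in np.flatnonzero(x):
        x_off = x.copy()
        x_off[j] = 0
        if base - model_predict(x_off[None, :])[0] > 0:
            candidates.append(j)
    if not candidates:
        return {}
    candidates = np.array(candidates)

    # 2) Randomly switch the candidate features on/off and query the model.
    Z = rng.integers(0, 2, size=(n_samples, candidates.size))
    X_pert = np.tile(x, (n_samples, 1))
    X_pert[:, candidates] = Z
    y = np.asarray(model_predict(X_pert))

    # 3) Fit a Ridge regression surrogate on the perturbations; its
    #    coefficients are taken as the feature attribution values.
    surrogate = Ridge(alpha=alpha).fit(Z, y)
    return dict(zip(candidates.tolist(), surrogate.coef_.tolist()))

As a usage illustration, model_predict could wrap a trained scikit-learn classifier, e.g. lambda X: clf.predict_proba(X)[:, 1], with x a binary Android feature vector (requested permissions, API calls); these details are hypothetical and not taken from the paper.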

CC BY-NC-ND 4.0


Paper citation in several formats:
Lu, Z. and Thing, V. L. L. (2022). PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection. In Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - IoTBDS; ISBN 978-989-758-564-7; ISSN 2184-4976, SciTePress, pages 37-46. DOI: 10.5220/0010986700003194

@conference{iotbds22,
author={Zhi Lu and Vrizlynn L. L. Thing},
title={PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection},
booktitle={Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - IoTBDS},
year={2022},
pages={37-46},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010986700003194},
isbn={978-989-758-564-7},
issn={2184-4976},
}

TY - CONF

JO - Proceedings of the 7th International Conference on Internet of Things, Big Data and Security - IoTBDS
TI - PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection
SN - 978-989-758-564-7
IS - 2184-4976
AU - Lu, Z.
AU - Thing, V. L. L.
PY - 2022
SP - 37
EP - 46
DO - 10.5220/0010986700003194
PB - SciTePress
