Toward Learning Model-Agnostic Explanations for Deep Learning-Based Signal Modulation Classifiers

Yunzhe Tian, Dongyue Xu, Endong Tong, Rui Sun, Kang Chen, Yike Li, Thar Baker, Wenjia Niu, Jiqiang Liu

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in deep learning (DL) have brought tremendous gains in signal modulation classification. However, DL-based classifiers lack transparency and interpretability, which raises concerns about model reliability and hinders their wide deployment in real-world applications. While explainable methods have recently emerged, little has been done to explain DL-based signal modulation classifiers. In this work, we propose a novel model-agnostic explainer, the Model-Agnostic Signal modulation classification Explainer (MASE), which provides explanations for the predictions of black-box modulation classifiers. Using a subsequence-based interpretable signal representation and in-distribution local signal sampling, MASE learns a local linear surrogate model to derive a class activation vector, which assigns importance values to the timesteps of a signal instance. In addition, constellation-based explanation visualization is adopted to spotlight the important signal features relevant to the model's prediction. We furthermore propose the first generic quantitative explanation evaluation framework for signal modulation classification, which automatically measures the faithfulness, sensitivity, robustness, and efficiency of explanations. Extensive experiments are conducted on two real-world datasets with four black-box signal modulation classifiers. The quantitative results indicate that MASE outperforms two state-of-the-art methods, with a 44.7% improvement in faithfulness, a 30.6% improvement in robustness, and a 44.1% decrease in sensitivity. Through qualitative visualizations, we further demonstrate that the explanations of MASE are more human-interpretable and provide better insight into the reliability of black-box model decisions.
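The surrogate procedure described in the abstract (segment the signal into subsequences, sample perturbed neighbors, query the black-box classifier, fit a weighted linear model, and expand the coefficients into a per-timestep class activation vector) can be sketched as follows. This is a minimal LIME-style illustration, not the paper's implementation: the function name `mase_surrogate`, the zero-masking perturbation (the paper instead samples in-distribution replacements), and the kernel bandwidth are all assumptions for the sketch.

```python
import numpy as np

def mase_surrogate(signal, predict_fn, n_segments=8, n_samples=200, seed=0):
    """Illustrative local linear surrogate for a 1-D signal instance.

    signal:     1-D array of timesteps.
    predict_fn: maps a batch of signals, shape (n, T), to a class
                probability per signal, shape (n,).
    Returns a per-timestep class activation vector (importance values).
    """
    rng = np.random.default_rng(seed)
    T = len(signal)
    # Interpretable representation: contiguous subsequences of the signal.
    seg_ids = np.array_split(np.arange(T), n_segments)

    # Local sampling: randomly mask subsequences. Here masked timesteps are
    # zeroed; MASE samples in-distribution replacements instead (assumption).
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0] = 1  # keep the unperturbed instance in the neighborhood
    perturbed = np.repeat(signal[None, :], n_samples, axis=0)
    for i in range(n_samples):
        for s, idx in enumerate(seg_ids):
            if masks[i, s] == 0:
                perturbed[i, idx] = 0.0

    # Query the black-box classifier on the perturbed neighborhood.
    probs = predict_fn(perturbed)

    # Weight samples by proximity to the original instance, then fit a
    # linear surrogate by weighted least squares.
    dist = 1.0 - masks.mean(axis=1)          # fraction of masked segments
    w = np.exp(-(dist ** 2) / 0.25)          # locality kernel (bandwidth assumed)
    X = masks * np.sqrt(w)[:, None]
    y = probs * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Expand segment coefficients into the per-timestep activation vector.
    cav = np.zeros(T)
    for s, idx in enumerate(seg_ids):
        cav[idx] = coef[s]
    return cav
```

As a sanity check, a toy "classifier" that scores only the first half of the signal yields high importance on the early timesteps and near-zero importance on the late ones, as expected of a faithful local explanation.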
Original language: English
Pages (from-to): 1529-1543
Number of pages: 15
Journal: IEEE Transactions on Reliability
Volume: 73
Issue number: 3
DOIs
Publication status: Published - 8 Mar 2024

Bibliographical note

Publisher Copyright:
© 1963-2012 IEEE.

Keywords

  • Electrical and Electronic Engineering
  • Safety, Risk, Reliability and Quality
