Abstract
Deep Learning predictions with measurable confidence are increasingly desirable for real-world problems, especially in high-risk settings. The Conformal Prediction (CP) framework is a versatile solution that guarantees a maximum error rate given minimal constraints [1]. In this paper, we propose a novel conformal loss function that approximates the traditional two-step CP approach in a single step. By evaluating and penalising deviations from the stringent expected CP output distribution, a Deep Learning model may learn the direct relationship between the input data and the conformal p-values. We carry out a comprehensive empirical evaluation to show our novel loss function’s competitiveness on seven binary and multi-class prediction tasks across five benchmark datasets. On the same datasets, our approach achieves significant training-time reductions of up to 86% compared to Aggregated Conformal Prediction (ACP, [2]), while maintaining comparable approximate validity and predictive efficiency.
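For readers unfamiliar with the two-step CP procedure the abstract refers to (fit a point predictor, then calibrate nonconformity scores into p-values), the following is a minimal sketch of split/inductive conformal prediction. The toy regression data, the least-squares fit, and all variable names are illustrative assumptions, not the paper's actual setup or loss function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise (illustrative assumption).
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Step 1: fit a point predictor on a proper training split
# (here a least-squares line through the origin).
x_train, y_train = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]
slope = (x_train @ y_train) / (x_train @ x_train)

# Step 2: nonconformity scores on a held-out calibration split.
cal_scores = np.abs(y_cal - slope * x_cal)

def p_value(x_new, y_candidate):
    """Conformal p-value: fraction of calibration scores at least as
    large as the candidate's score, counting the candidate itself."""
    score = abs(y_candidate - slope * x_new)
    return (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
```

A prediction set at significance level alpha then collects every candidate label (or value) whose p-value exceeds alpha; CP's validity guarantee is that the true value is excluded at most an alpha fraction of the time. The loss function proposed in the paper aims to let a network output such p-values directly, bypassing the explicit calibration step.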
| Original language | English |
| --- | --- |
| Journal | Annals of Mathematics and Artificial Intelligence |
| DOIs | |
| Publication status | Published - 1 Jul 2023 |
Bibliographical note
Funding Information: This research is funded by the University of Brighton’s ‘Rising Stars’ research grant, and Innovate UK’s AKT2I grant ‘Machine vision segmentation for automated UK train tracking and railway maintenance’.
Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Nature Switzerland AG.
Keywords
- Conformal prediction
- Deep learning
- Prediction confidence