Deep Learning predictions with measurable confidence are increasingly desirable for real-world problems, especially in high-risk settings. The Conformal Prediction (CP) framework is a versatile solution that guarantees a maximum error rate under minimal constraints. In this paper, we propose a novel conformal loss function that approximates the traditionally two-step CP approach in a single step. By evaluating and penalising deviations from the stringent expected CP output distribution, a Deep Learning model may learn the direct relationship between the input data and the conformal p-values. We carry out a comprehensive empirical evaluation to show our novel loss function’s competitiveness on seven binary and multi-class prediction tasks across five benchmark datasets. On the same datasets, our approach achieves significant training time reductions of up to 86% compared to Aggregated Conformal Prediction (ACP), while maintaining comparable approximate validity and predictive efficiency.
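For context, the traditional two-step CP procedure that the proposed loss approximates first computes nonconformity scores on a held-out calibration set, then converts a test point's score into a p-value per candidate label. The sketch below illustrates this standard split-CP baseline (not the paper's single-step loss); the nonconformity measure and variable names are illustrative assumptions.

```python
import numpy as np

def conformal_p_value(calib_scores, test_score):
    """Split-CP p-value: the (smoothed) fraction of calibration
    nonconformity scores at least as large as the test score."""
    n = len(calib_scores)
    return (np.sum(calib_scores >= test_score) + 1) / (n + 1)

def prediction_set(calib_scores_by_label, test_scores_by_label, eps):
    """Keep every label whose p-value exceeds the significance level eps.
    CP guarantees the true label is excluded with probability <= eps."""
    return {y for y, s in test_scores_by_label.items()
            if conformal_p_value(calib_scores_by_label[y], s) > eps}

# Toy example: nonconformity = 1 - predicted probability of the label.
calib = {0: np.array([0.1, 0.2, 0.15, 0.3]),
         1: np.array([0.6, 0.7, 0.8, 0.5])}
test = {0: 0.12, 1: 0.9}  # label 0 conforms well, label 1 does not
print(prediction_set(calib, test, eps=0.2))
```

The single-step approach described in the abstract replaces this calibration pass by training the network to emit such p-values directly.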
Funding Information:
This research is funded by University of Brighton’s ‘Rising Stars’ research grant, and Innovate UK’s AKT2I grant ‘Machine vision segmentation for automated UK train tracking and railway maintenance’.
© 2023, The Author(s), under exclusive licence to Springer Nature Switzerland AG.
Keywords:
- Conformal prediction
- Deep learning
- Prediction confidence