
Out-of-Distribution Segmentation via Pixel-wise Gradient Uncertainty

Kira Maag (a) and Tobias Riedlinger (b)

(a) Ruhr University Bochum, Germany
(b) University of Wuppertal, Germany

In recent years, deep neural networks have defined the state of the art in semantic segmentation, where their predictions are constrained to a predefined set of semantic classes. They are to be deployed in applications such as automated driving, although their categorically confined expressive power runs contrary to such open-world scenarios. The detection and segmentation of objects from outside the predefined semantic space, i.e., out-of-distribution (OoD) objects, is therefore of the highest interest. Since uncertainty estimation methods such as softmax entropy or Bayesian models are sensitive to erroneous predictions, they serve as a natural baseline for OoD detection. Here, we present a method for obtaining uncertainty scores from pixel-wise loss gradients, which can be computed efficiently during inference [1]. Our approach is simple to implement for a large class of models, requires neither additional training nor auxiliary data, and can be readily applied to pre-trained segmentation models. Our experiments show that our method identifies wrongly classified pixels and estimates prediction quality. In particular, we observe superior OoD segmentation performance compared to similar baselines on the SegmentMeIfYouCan benchmark [2], clearly outperforming methods that are comparably easy to implement.
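To illustrate the general idea of a pixel-wise gradient uncertainty score (not the exact formulation of [1]; see the paper for details), the sketch below computes, for each pixel, the gradient of the cross-entropy loss evaluated at the network's own predicted class. For a one-hot pseudo-label, this gradient with respect to the pixel's logits has the closed form softmax(z) minus the one-hot prediction, so its norm can be obtained during inference without a backward pass. The function name, the norm parameter p, and the model/threshold in the usage comment are illustrative assumptions.

```python
import torch

@torch.no_grad()
def pixelwise_gradient_score(logits: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    """Per-pixel uncertainty score from the loss gradient at the predicted class.

    For cross-entropy with a one-hot pseudo-label (the argmax prediction), the
    gradient with respect to the pixel's logits is softmax(z) - onehot(argmax z),
    so its norm is available in closed form at inference time.

    logits: (B, C, H, W) raw segmentation network outputs.
    returns: (B, H, W) gradient-norm scores (larger = more uncertain).
    """
    probs = torch.softmax(logits, dim=1)               # (B, C, H, W)
    pred = probs.argmax(dim=1, keepdim=True)           # (B, 1, H, W) predicted class
    one_hot = torch.zeros_like(probs).scatter_(1, pred, 1.0)
    grad = probs - one_hot                              # closed-form loss gradient w.r.t. logits
    return grad.norm(p=p, dim=1)                        # p-norm over the class dimension

# Usage sketch with a hypothetical pre-trained segmentation model `model`:
# logits = model(image)                 # (1, C, H, W)
# scores = pixelwise_gradient_score(logits)
# ood_mask = scores > threshold         # threshold chosen on validation data
```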

[1] K. Maag and T. Riedlinger. Pixel-wise Gradient Uncertainty for Convolutional Neural Networks applied to Out-of-Distribution Segmentation. 2023.

[2] R. Chan, K. Lis, S. Uhlemeyer, H. Blum, S. Honari, R. Siegwart, P. Fua, M. Salzmann and M. Rottmann. SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation. Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021.