[ICLR-2017]
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks.
Authors: Dan Hendrycks, Kevin Gimpel
Institution: University of California, Berkeley; Toyota Technological Institute at Chicago
The starting point of OOD detection, proposing a baseline that simply uses softmax probabilities to detect OOD samples.
Correctly classified examples tend to have higher maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection (a minimal scoring sketch is given below). However, due to the overconfidence of deep models, the baseline has limited performance. The overconfidence stems from the softmax function, which tends to produce overly sharp (peaked) predictive distributions.
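A minimal sketch of the maximum-softmax-probability (MSP) score, assuming `model` is any trained classifier returning logits:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Higher score -> more likely in-distribution."""
    logits = model(x)                      # (batch, num_classes)
    probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values        # maximum softmax probability

# At test time, flag inputs whose score falls below a threshold chosen
# on in-distribution validation data (e.g., at 95% TPR):
# is_ood = msp_score(model, x) < threshold
```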
[ICLR-2018]
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks.
Authors: Shiyu Liang, Yixuan Li, R. Srikant
Institution: University of Illinois at Urbana-Champaign; University of Wisconsin-Madison
Using temperature scaling on softmax probabilities together with small input perturbations for more reliable detection.
Temperature scaling has a strong smoothing effect that maps the softmax score back toward the logit space, which helps separate ID from OOD inputs. A small perturbation applied to each sample at test time further increases the separability between ID and OOD data (see the sketch below).
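A rough sketch of ODIN-style scoring with temperature scaling and input perturbation, assuming `model` returns logits; T and epsilon are the hyperparameters tuned in the paper:

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, epsilon=0.0014):
    x = x.clone().requires_grad_(True)
    logits = model(x) / T
    # Gradient of the temperature-scaled max log-softmax w.r.t. the input
    loss = -F.log_softmax(logits, dim=-1).max(dim=-1).values.sum()
    loss.backward()
    # Nudge the input in the direction that increases the max softmax score;
    # ID samples respond more strongly to this perturbation than OOD samples.
    x_perturbed = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / T, dim=-1)
    return probs.max(dim=-1).values        # higher -> more likely ID
```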
[NeurIPS-2020]
Energy-based Out-of-distribution Detection.
Authors: Weitang Liu, Xiaoyun Wang, John D. Owens, Yixuan Li
Institution: University of California, San Diego; University of California, Davis; University of Wisconsin-Madison
Using energy scores instead of softmax scores as a simple, effective drop-in detector.
Unlike softmax confidence scores, energy scores are theoretically aligned with the probability density of the inputs and are less susceptible to the overconfidence issue. The paper shows that energy can conveniently replace softmax confidence for any pre-trained network (sketched below), and also proposes an energy-bounded learning objective to fine-tune the network.
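A minimal sketch of the energy score computed from the logits of any pre-trained classifier; T is the temperature (T = 1 in the default post-hoc setting):

```python
import torch

def negative_energy_score(logits, T=1.0):
    # E(x) = -T * logsumexp(f(x)/T); lower energy -> more likely ID,
    # so the negative energy is returned as the detection score.
    return T * torch.logsumexp(logits / T, dim=-1)
```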
[ICML-2020]
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices.
Authors: Chandramouli Shama Sastry, Sageev Oore
Institution: Dalhousie University, Halifax
[CVPR-2021]
MOOD: Multi-level Out-of-distribution Detection.
Authors: Ziqian Lin, Sreya Dutta Roy, Yixuan Li
Institution: University of Wisconsin-Madison
Accelerating inference by finding the optimal exit level according to input data complexity.
The paper establishes a direct relationship between OOD data complexity and the optimal exit level, and shows that easy OOD examples can be effectively detected early without being propagated to deeper layers.
[NeurIPS-2021]
ReAct: Out-of-distribution detection with rectified activations.
Authors: Yiyou Sun, Chuan Guo, Yixuan Li
Institution: University of Wisconsin-Madison, Facebook AI Research
[NeurIPS-2021]
On the Importance of Gradients for Detecting Distributional Shifts in the Wild.
Authors: Rui Huang, Andrew Geng, Yixuan Li
Institution: University of Wisconsin-Madison
[NeurIPS-2021]
Can multi-label classification networks know what they don't know?
Authors: Haoran Wang, Weitang Liu, Alex Bocchieri, Yixuan Li
Institution: Carnegie Mellon University; University of California, San Diego; University of Wisconsin-Madison
[arXiv-2021]
On the Effectiveness of Sparsification for Detecting the Deep Unknowns.
Authors: Yiyou Sun, Yixuan Li
Institution: University of Wisconsin-Madison
[NeurIPS-2022]
RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection.
Authors: Yue Song, Nicu Sebe, Wei Wang
Institution: University of Trento
[arXiv-2017]
Improved Regularization of Convolutional Neural Networks with Cutout.
Authors: Terrance DeVries, Graham W. Taylor
Institution: University of Guelph; Canadian Institute for Advanced Research and Vector Institute
[arXiv-2018]
Learning Confidence for Out-of-Distribution Detection in Neural Networks.
Authors: Terrance DeVries, Graham W. Taylor
Institution: University of Guelph; Vector Institute
A neural network augmented with a confidence-estimation branch.
During training, the predictions are adjusted according to the network's confidence so that low-confidence predictions are pushed closer to the target distribution y. This gradual training procedure yields a better estimate of confidence (a simplified sketch follows).
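A simplified sketch of training with a confidence-estimation branch, assuming the network outputs class logits and a scalar confidence c in (0, 1); `lambda_conf` is a hypothetical weight standing in for the paper's budget-based schedule:

```python
import torch
import torch.nn.functional as F

def confidence_loss(class_logits, confidence, targets, lambda_conf=0.1):
    probs = F.softmax(class_logits, dim=-1)
    c = confidence.view(-1, 1)                          # (batch, 1)
    onehot = F.one_hot(targets, probs.size(-1)).float()
    # Interpolate predictions toward the target in proportion to (1 - c):
    # low confidence lets the model "ask for hints" at a penalty.
    adjusted = c * probs + (1.0 - c) * onehot
    task_loss = F.nll_loss(torch.log(adjusted + 1e-12), targets)
    penalty = -torch.log(confidence + 1e-12).mean()     # discourages always hinting
    return task_loss + lambda_conf * penalty
```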
[ECCV-2018]
Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers.
Authors: Apoorv Vyas, Nataraj Jammalamadaka, Xia Zhu, Dipankar Das, Bharat Kaul, Theodore L. Willke
Institution: Intel Labs, Bangalore, India; Intel Labs, Hillsboro; Idiap Research Institute, Switzerland
Training multiple classifiers for an ensemble, each leaving out a random subset of training data as OOD data and treating the rest as in-distribution.
They add a novel margin-based loss term to maintain a margin between the average entropies of OOD and ID samples. An ensemble of K leave-out classifiers is then used for OOD detection. The weaknesses are the large computational cost and the need for an extra OOD dataset for hyperparameter search.
[CVPR-2019]
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem.
Authors: Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
Institution: University of Tübingen; Saarland University
ReLU networks yield over-confident predictions far away from the training data.
The paper shows that ReLU networks produce high-confidence predictions even for samples far away from the in-distribution data, and proposes methods to mitigate this problem.
[NeurIPS-2019]
On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks.
Authors: Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak
Institution: Los Alamos National Laboratory; University of Washington
Mixup training helps.
The paper observes that mixup-trained DNNs are less prone to over-confident predictions on out-of-distribution and random-noise data.
[CVPR-2019]
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features.
Authors: Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo
Institution: Clova AI Research, NAVER Corp; Clova AI Research, LINE Plus Corp; Yonsei University
[arXiv-2019]
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty.
Authors: Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan
Institution: DeepMind; Google
[arXiv-2019]
Towards neural networks that provably know when they don't know.
Authors: Alexander Meinke, Matthias Hein
Institution: University of Tübingen
Certified Certain Uncertainty.
The paper proposes a Certified Certain Uncertainty (CCU) model with which one can train deep neural networks that provably make low-confidence predictions far away from the training data.
[CVPR-W-2020]
On Out-of-Distribution Detection Algorithms With Deep Neural Skin Cancer Classifiers.
Authors: Andre G. C. Pacheco, Chandramouli S. Sastry, Thomas Trappenberg, Sageev Oore, Renato A. Krohling
Institution: Federal University of Espirito Santo; Dalhousie University; Vector Institute
[CVPR-2020]
Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data.
Authors: Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira
Institution: Georgia Institute of Technology; Samsung Research
Improving ODIN with a decomposed confidence score and a modified input pre-processing method, removing the need for OOD data.
The method observes that previous work relies on the class posterior p(y|x) and does not consider the domain variable at all. It therefore makes the domain variable explicit, rewriting the posterior as the quotient of the joint class-domain probability and the domain probability via the rule of conditional probability, and uses the resulting decomposed confidence scores for OOD detection. The decomposed confidence is ultimately computed from the cosine similarity between sample features and class features (a rough sketch of the cosine head follows). The input pre-processing is also modified so that it is tuned only on in-distribution data, so extra OOD validation samples are not required.
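A rough sketch of a Generalized-ODIN-style "DeConf-C" head, assuming backbone features of dimension `feat_dim`; the detection score is the maximum cosine similarity h_i(x):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeConfCosineHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.class_weights = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.g = nn.Sequential(nn.Linear(feat_dim, 1), nn.BatchNorm1d(1), nn.Sigmoid())

    def forward(self, features):
        # h_i(x): cosine similarity between the feature and each class vector
        h = F.normalize(features, dim=-1) @ F.normalize(self.class_weights, dim=-1).t()
        g = self.g(features)                 # scalar "domain" term per sample
        logits = h / g                       # trained with standard cross-entropy
        ood_score = h.max(dim=-1).values     # decomposed confidence used for detection
        return logits, ood_score
```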
[NeurIPS-2020]
Certifiably Adversarially Robust Detection of Out-of-Distribution Data.
Authors: Julian Bitterwolf, Alexander Meinke, Matthias Hein
Institution: University of Tübingen
[arXiv-2020]
Robust Out-of-distribution Detection for Neural Networks.
Authors: Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
Institution: University of Wisconsin-Madison; Stanford University; Google
Adding optimized adversarial perturbations to in-distribution and OOD samples to train a robust model.
This paper shows that existing detection mechanisms can be extremely brittle when evaluated on inputs with minimal, semantics-preserving adversarial perturbations. To address the problem, the method performs robust training by exposing the model to adversarially perturbed in-distribution and outlier examples.
[ICLR-2020]
Novelty Detection Via Blurring.
Authors: Sungik Choi, Sae-Young Chung
Institution: Korea Advanced Institute of Science and Technology
[NeurIPS-2018]
Reducing network agnostophobia.
Authors: Akshay Raj Dhamija, Manuel Günther, Terrance E. Boult
Institution: Vision and Security Technology Lab, University of Colorado Colorado Springs
An entropic open-set loss plus a loss that suppresses the feature magnitudes of samples from an additional background class.
The paper designs novel losses that maximize the entropy of predictions on unknown inputs while increasing separation in deep feature space by modifying the feature magnitudes of known and unknown samples. In sum, logit entropy and feature magnitudes are used for OOD detection.
[ICLR-2019]
Deep anomaly detection with outlier exposure
Authors: Dan Hendrycks, Mantas Mazeika, Thomas Dietterich
Institution: University of California, Berkeley; University of Chicago; Oregon State University
A baseline that trains the model to produce a uniform posterior distribution on an auxiliary dataset of outliers.
By exposing the model to OOD examples, it learns effective heuristics for detecting out-of-distribution inputs and a more conservative concept of the inliers, enabling the detection of novel forms of anomalies. The approach is shown to be effective on both CV and NLP tasks (the core objective is sketched below).
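A minimal sketch of the Outlier Exposure (OE) objective for classification, assuming in-distribution batches (x_in, y_in) and auxiliary outlier batches x_out; `lambda_oe` is the trade-off weight:

```python
import torch
import torch.nn.functional as F

def oe_loss(model, x_in, y_in, x_out, lambda_oe=0.5):
    loss_in = F.cross_entropy(model(x_in), y_in)
    logits_out = model(x_out)
    # Cross-entropy between the uniform distribution and the model's posterior
    # on outliers, i.e. push predictions on outliers toward uniform.
    loss_out = (torch.logsumexp(logits_out, dim=-1) - logits_out.mean(dim=-1)).mean()
    return loss_in + lambda_oe * loss_out
```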
[ICCV-2019]
Unsupervised out-of-distribution detection by maximum classifier discrepancy
Authors: Qing Yu, Kiyoharu Aizawa
Institution: The University of Tokyo
A network with two classification branches whose entropy discrepancy is enlarged on OOD training data.
It trains a two-head CNN consisting of one shared feature extractor and two classifiers that have different decision boundaries but classify in-distribution samples correctly. Unlabeled data are then used to maximize the discrepancy between the two decision boundaries, pushing OOD samples outside the manifold of the in-distribution samples and enabling detection of OOD samples that lie far from the support of the ID data.
[BMVC-2019]
A Less Biased Evaluation of Out-of-distribution Sample Detectors.
Authors: Alireza Shafaei, Mark Schmidt, James J. Little
Institution: University of British Columbia
[AAAI-2020]
Self-Supervised Learning for Generalizable Out-of-Distribution Detection
Authors: Sina Mohseni, Mandar Pitale, JBS Yadawa, Zhangyang Wang
Institution: Texas A&M University; NVIDIA
Pseudo-labeling an external unlabeled set for subsequent OOD training.
It simultaneously trains an in-distribution classifier and an out-of-distribution detector in one network. Through pseudo-labeling, each unlabeled sample is assigned either a class label or a reject label for later training.
[CVPR-2020]
Background data resampling for outlier-aware classification
Authors: Yi Li, Nuno Vasconcelos
Institution: University of California, San Diego
Using an adversarial resampling approach to obtain a compact yet representative set of background data points.
This work focuses on training with background data and argues that using all background data leads to an inefficient or even impractical solution due to class imbalance and computational complexity. The resampling algorithm takes inspiration from prior work on hard negative mining, performing iterative adversarial weighting of the background examples and using the learned weights to obtain a subset of the desired size.
[Neurocomputing-2021]
Outlier exposure with confidence control for out-of-distribution detection
Authors: Aristotelis-Angelos Papadopoulos, Mohammad Reza Rajati, Nazim Shaikh, Jiamian Wang
Institution: University of Southern California
Performing prediction-confidence calibration on top of OE.
Building on the OE loss, this work adds a second regularization term that minimizes the Euclidean distance between the training accuracy of the DNN and the average confidence of its predictions on the training set.
[ICCV-2021]
Semantically Coherent Out-of-Distribution Detection.
Authors: Jingkang Yang, Haoqi Wang, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang, Ziwei Liu
Institution: Nanyang Technological University; SenseTime Research; Shanghai Jiao Tong University; Shanghai AI Lab
[ECML PKDD-2021]
ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
Authors: Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
Institution: University of Wisconsin-Madison; Google
Using informative auxiliary outlier data to learn a tight decision boundary between ID and OOD data.
By mining informative auxiliary OOD data, one can significantly improve OOD detection performance and, somewhat surprisingly, generalize to unseen adversarial attacks. The key idea is to selectively utilize auxiliary outlier data to estimate a tight decision boundary between ID and OOD data, which leads to robust OOD detection performance.
[arXiv-2021]
An Effective Baseline for Robustness to Distributional Shift
Authors: Sunil Thulasidasan, Sushil Thapa, Sayera Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, Jeff Bilmes
Institution: Los Alamos; New Mexico Tech; University of Washington
An extra abstention (rejection) class combined with outlier training data for effective OOD detection.
This work demonstrates the efficacy of augmenting the classifier with an extra abstention (rejection) class and training it on outlier data for effective OOD detection.
[ICLR-2018]
Training confidence-calibrated classifiers for detecting out-of-distribution samples
Authors: Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin
Institution: KAIST; University of Michigan, Ann Arbor; Google Brain
A confidence loss that encourages uniform predictions on GAN-generated 'boundary' OOD samples.
It proposes a confidence loss that minimizes the KL divergence from the predictive distribution on GAN-generated OOD samples to the uniform distribution, so that the classifier gives less confident predictions on them. The proposed GAN generates 'boundary' samples in low-density regions of the in-distribution.
[NeurIPSW-2018]
Building robust classifiers through generation of confident out of distribution examples
Authors: Kumar Sricharan, Ashok Srivastava
Institution: Central Data Science Organization, Intuit Inc
[NeurIPSW-2019]
Out-of-distribution detection in classifiers via generation
Authors: Sachin Vernekar, Ashish Gaurav, Vahdat Abdelzad, Taylor Denouden, Rick Salay, Krzysztof Czarnecki
Institution: University of Waterloo
[CVPR-2019]
Out-of-distribution detection for generalized zero-shot action recognition
Authors: Devraj Mandal, Sanath Narayan, Saikumar Dwivedi, Vikram Gupta, Shuaib Ahmed, Fahad Shahbaz Khan, Ling Shao
Institution: Indian Institute of Science, Bangalore; Inception Institute of Artificial Intelligence, UAE; Mercedes-Benz R&D India, Bangalore
[ICLR-2022]
VOS: Learning What You Don't Know By Virtual Outlier Synthesis
Authors: Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li
Institution: University of Wisconsin - Madison
A novel framework for OOD detection by adaptively synthesizing virtual outliers that can meaningfully regularize the model's decision boundary during training.
VOS samples virtual outliers from the low-likelihood region of the class-conditional distribution estimated in the feature space (a simplified sampling sketch follows), and achieves strong performance on both object detection and image classification models.
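A simplified sketch of how VOS-style virtual outliers might be drawn, assuming `feats` (N, D) are penultimate-layer features of one class; the full method additionally maintains per-class queues, a shared covariance, and an energy-based regularizer on the sampled outliers:

```python
import torch

def sample_virtual_outliers(feats, num_candidates=10000, num_keep=100):
    mean = feats.mean(dim=0)
    cov = torch.cov(feats.t()) + 1e-4 * torch.eye(feats.size(1))
    dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    candidates = dist.sample((num_candidates,))
    # Keep the candidates with the lowest likelihood under the class-conditional
    # Gaussian: these lie near the boundary of the class in feature space.
    log_probs = dist.log_prob(candidates)
    idx = log_probs.argsort()[:num_keep]
    return candidates[idx]
```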
[ICLR-2018]
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks.
Authors: Shiyu Liang, Yixuan Li, R. Srikant
Institution: University of Illinois at Urbana-Champaign; University of Wisconsin-Madison
Using temperature scaling on softmax probabilities together with small input perturbations for robustness.
Temperature scaling calibrates the softmax probabilities, and the calibrated maximum softmax probability is used as the indicator for OOD detection. A perturbation applied to each sample at test time further helps the model distinguish ID samples. However, the method requires an OOD validation set for hyperparameter tuning.
[NeurIPS-2021]
On the Importance of Gradients for Detecting Distributional Shifts in the Wild.
Authors: Rui Huang, Andrew Geng, Yixuan Li
Institution: University of Wisconsin-Madison
[ICML-2016]
Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.
Authors: Yarin Gal , Zoubin Ghahramani
Institution: University of Cambridge
[NeurIPS-2017]
Simple and scalable predictive uncertainty estimation using deep ensembles.
Authors: Balaji Lakshminarayanan , Alexander Pritzel , Charles Blundell
Institution: DeepMind
[NeurIPS-2018]
Predictive Uncertainty Estimation via Prior Networks
Authors: Andrey Malinin, Mark Gales
Institution: University of Cambridge
[NeurIPS-2019]
Practical deep learning with Bayesian principles
Authors: Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan
Institution: Tokyo Institute of Technology; University of Cambridge; Indian Institute of Technology (ISM); University of Osnabrück; RIKEN Center for AI Project
[NeurIPS-2019]
Reverse KL-divergence training of prior networks: Improved uncertainty and adversarial robustness
Authors: Andrey Malinin, Mark Gales
Institution: Yandex; University of Cambridge
[NeurIPS-2020]
Towards maximizing the representation gap between in-domain & out-of-distribution examples
Authors: Jay Nandy, Wynne Hsu, Mong Li Lee
Institution: National University of Singapore
[CVPR-2021]
Mos: Towards scaling out-of-distribution detection for large semantic space
Authors: Rui Huang, Yixuan Li
Institution: University of Wisconsin-Madison
[arXiv-2021]
Exploring the limits of out-of-distribution detection
Authors: Stanislav Fort, Jie Ren, Balaji Lakshminarayanan
Institution: Stanford University; Google Research
Large-scale pre-trained transformers significantly improve near-OOD detection.
This work explores the effectiveness of large-scale pre-trained transformers, especially when few-shot outlier exposure is available. It also shows that the pre-trained multi-modal image-text transformer CLIP is effective for OOD detection when the names of outlier classes are used as candidate text labels (a rough sketch follows).
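A rough sketch of CLIP-based zero-shot OOD scoring with candidate outlier class names, assuming the `clip` package (openai/CLIP); the ID and outlier class names here are placeholders:

```python
import clip
import torch

model, preprocess = clip.load("ViT-B/32", device="cpu")
id_names = ["cat", "dog"]                    # in-distribution class names (example)
ood_names = ["truck", "ship"]                # candidate outlier class names (example)
prompts = clip.tokenize([f"a photo of a {c}" for c in id_names + ood_names])

@torch.no_grad()
def clip_id_score(image):                    # image: tensor (1, 3, H, W) from `preprocess`
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.t()).softmax(dim=-1)
    # Probability mass assigned to ID class names; higher -> more likely ID.
    return probs[:, :len(id_names)].sum(dim=-1)
```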
[arXiv-2020]
Pretrained transformers improve out-of-distribution robustness
Authors: Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, Dawn Song
Institution: UC Berkeley; Shanghai Jiao Tong University; University of Chicago
[arXiv-2021]
Oodformer: Out-of-distribution detection transformer
Authors: Rajat Koner, Poulami Sinhamahapatra, Karsten Roscher, Stephan Günnemann, Volker Tresp
Institution: Ludwig Maximilian University; Technical University; Fraunhofer, IKS; Siemens AG
[ICML-2016]
Pixel recurrent neural networks
Authors: Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu
Institution: Google DeepMind
[NeurIPS-2018]
Generative probabilistic novelty detection with adversarial autoencoders
Authors: Stanislav Pidhorskyi, Ranya Almohsen, Donald A. Adjeroh, Gianfranco Doretto
Institution: West Virginia University
[NeurIPS-2018]
Glow: Generative flow with invertible 1x1 convolutions
Authors: Diederik P. Kingma, Prafulla Dhariwal
Institution: OpenAI
[ECML/KDD-2018]
Image anomaly detection with generative adversarial networks
Authors: Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, Marius Kloft
Institution: University of Edinburgh; TU Kaiserslautern; Hasso Plattner Institute; University of California
[ICLR-2018]
Deep autoencoding gaussian mixture model for unsupervised anomaly detection
Authors: Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, Haifeng Chen
Institution: NEC Laboratories America; Washington State University
[ICLR-2019]
Do deep generative models know what they don’t know?
Authors: Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan
Institution: DeepMind
[arXiv-2018]
WAIC, but why? Generative ensembles for robust anomaly detection
Authors: Hyunsun Choi, Eric Jang, Alexander A. Alemi
Institution: Google Inc.
[CVPR-2018]
Adversarially learned one-class classifier for novelty detection
Authors: Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, Ehsan Adeli
Institution: Institute for Research in Fundamental Sciences; Amirkabir University of Technology; Stanford University
[NeurIPS-2019]
Likelihood ratios for out-of-distribution detection
Authors: Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan
Institution: Google Research; DeepMind
Using likelihood ratios to cancel out the influence of background statistics.
This work finds that the likelihood score of a generative model is heavily affected by background statistics, so a likelihood ratio is used to cancel out the background influence: the likelihood under a model trained on the full data is contrasted with the likelihood under a background model trained on perturbed inputs, leaving a score driven by the semantic content (sketched below).
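A minimal sketch of the likelihood-ratio score, assuming two trained generative models, `full_model` fit on the raw data and `bg_model` fit on perturbed (background-like) data, each exposing a `log_prob` method:

```python
def likelihood_ratio_score(x, full_model, bg_model):
    # Background statistics contribute to both terms and largely cancel,
    # leaving a score dominated by the semantic content of x.
    return full_model.log_prob(x) - bg_model.log_prob(x)
```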
[CVPR-2019]
Latent space autoregression for novelty detection
Authors: Davide Abati, Angelo Porrello, Simone Calderara, Rita Cucchiara
Institution: University of Modena and Reggio Emilia
[CVPR-2020]
Deep residual flow for out of distribution detection
Authors: Ev Zisselman, Aviv Tamar
Institution: Technion
[NeurIPS-2020]
Why normalizing flows fail to detect out-of-distribution data
Authors: Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
Institution: New York University
[ICLR-2020]
Input complexity and out-of-distribution detection with likelihood-based generative models
Authors: Joan Serra, David Alvarez, Vicenç Gomez, Olga Slizovskaia, Jose F. Nunez, Jordi Luque
Institution: Dolby Laboratories; Telefonica Research; Universitat Politecnica de Catalunya; Universitat Pompeu Fabra
[NeurIPS-2020]
Likelihood regret: An out-of-distribution detection score for variational auto-encoder
Authors: Zhisheng Xiao, Qing Yan, Yali Amit
Institution: University of Chicago
[TPAMI-2020]
Normalizing flows: An introduction and review of current methods
Authors: Ivan Kobyzev, Simon J.D. Prince, Marcus A. Brubaker
Institution: Borealis AI
[NeurIPS-2018]
A simple unified framework for detecting out-of-distribution samples and adversarial attacks
Authors: Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin
Institution: Korea Advanced Institute of Science and Technology (KAIST); University of Michigan; Google Brain; AItrics
[ACCV-2020]
Hyperparameter-free out-of-distribution detection using cosine similarity
Authors: Engkarat Techapanurak, Masanori Suganuma, Takayuki Okatani
Institution: Tohoku University; RIKEN Center for AIP
Using the scaled cosine similarity between test-sample features and class features to determine OOD samples.
This is the first work to employ a softmax over scaled cosine similarities instead of the ordinary softmax over logits, bringing the metric-learning idea into OOD detection. It is concurrent work with Generalized ODIN.
[ECCV-2020]
A boundary based out-of-distribution classifier for generalized zero-shot learning
Authors: Xingyu Chen, Xuguang Lan, Fuchun Sun, Nanning Zheng
Institution: Xian Jiaotong University; Tsinghua University
[ICML-2020]
Uncertainty estimation using a single deep deterministic neural network
Authors: Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal
Institution: University of Oxford
[arXiv-2020]
Feature Space Singularity for Out-of-Distribution Detection
Authors: Haiwen Huang, Zhihan Li, Lulu Wang, Sishuo Chen, Bin Dong, Xinyu Zhou
Institution: University of Oxford; Peking University; MEGVII Technology; etc.
The distance to a Feature Space Singularity measures OOD-ness.
It is observed that, in feature space, OOD samples concentrate near a Feature Space Singularity (FSS) point, and the distance from a sample to the FSS measures its degree of OOD-ness. This can be explained by the observation that the moving speed of a sample's features during training depends on its similarity to the training data. During training, generated uniform noise or validation data are used as OOD.
[CVPR-2021]
Out-of-Distribution Detection Using Union of 1-Dimensional Subspaces
Authors: Alireza Zaeemzadeh, Niccolò Bisagno, Zeno Sambugaro, Nicola Conci, Nazanin Rahnavard, Mubarak Shah
Institution: University of Central Florida; University of Trento
Calculating class-membership probabilities in a union of one-dimensional subspaces.
The cosine similarities between the extracted feature and the class vectors are used to compute class-membership probabilities in a union of one-dimensional subspaces. Each one-dimensional subspace is spanned by the first singular vector of the feature vectors extracted from the training set. Constraining feature vectors to lie on a union of one-dimensional subspaces allows OOD samples to be robustly detected.
[arXiv-2021]
A simple fix to Mahalanobis distance for improving near-OOD detection
Authors: Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, Balaji Lakshminarayanan
Institution: Google Research; Stanford University; Harvard University; Google Health