Generalized Out-of-Distribution Detection - A Survey
Created: =dateformat(this.file.ctime,"dd MMM yyyy, hh:mm a") | Modified: =dateformat(this.file.mtime,"dd MMM yyyy, hh:mm a")
Tags: knowledge
Annotations
The distributional shifts can be caused by semantic shift (e.g., OOD samples are drawn from different classes) [12], or covariate shift (e.g., OOD samples from a different domain) [13], [14], [15].
Fig. 1: Taxonomy of the generalized OOD detection framework, illustrated by classification tasks.
generalized OOD detection, which encapsulates five related sub-topics: anomaly detection (AD), novelty detection (ND), open set recognition (OSR), out-of-distribution (OOD) detection, and outlier detection (OD)
all define a certain in-distribution, with the common goal of detecting out-of-distribution samples
Subtle differences exist among the sub-topics in terms of the specific definition and properties.
Novelty detection aims to detect any test samples that do not fall into any training category. The detected novel samples are usually prepared for future constructive procedures, such as more specialized analysis, or incremental learning of the model itself.
The field of out-of-distribution detection emerges, requiring the model to reject inputs that are semantically different from the training distribution and therefore should not be predicted by the model.
OOD detection takes one dataset as ID and finds several other datasets as OOD, with the guarantee of non-overlapping categories between ID and OOD datasets.
In terms of motivation, novelty detection usually does not perceive "novel" test samples as erroneous, fraudulent, or malicious as AD does, but cherishes them as learning resources for potential future use with a positive learning attitude.
The problem settings in AD, ND, OSR, and OOD detection all aim to detect unseen test samples that are different from the training data distribution.
outlier detection directly processes all observations and aims to select outliers from the contaminated dataset
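A minimal sketch of this transductive setting, assuming a simple MAD-based robust z-score as the outlier criterion (the function name, threshold, and score are illustrative choices, not from the survey): all observations are scored together, with no clean training split.

```python
import numpy as np

def mad_outlier_mask(x, threshold=3.5):
    """Flag outliers in a contaminated 1-D sample via the robust (MAD-based) z-score."""
    median = np.median(x)
    mad = np.median(np.abs(x - median))            # median absolute deviation
    robust_z = 0.6745 * (x - median) / (mad + 1e-12)
    return np.abs(robust_z) > threshold            # True -> flagged as outlier

# All observations are processed at once; the few extreme values are the contamination.
data = np.concatenate([np.random.normal(0, 1, 500), np.array([8.0, -9.5, 12.3])])
print(data[mad_outlier_mask(data)])
```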
Open-world recognition [99] aims to build a lifelong learning machine that can actively detect novel images [100], label them as new classes, and perform continuous learning.
Open set recognition requires the multi-class classifier to simultaneously: 1) accurately classify test samples from "known known classes", and 2) detect test samples from "unknown unknown classes".
Out-of-distribution detection, or OOD detection, aims to detect test samples drawn from a distribution that is different from the training distribution, with the definition of distribution well-defined according to the target application.
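A common post-hoc realization of this rejection idea is the maximum softmax probability (MSP) baseline covered in the survey: score each input by the classifier's top softmax probability and reject low-confidence inputs as OOD. The sketch below only shows the scoring rule; the threshold value and example logits are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution."""
    return softmax(logits).max(axis=-1)

def detect_ood(logits, threshold=0.5):
    """Flag inputs whose confidence falls below the threshold as OOD."""
    return msp_score(logits) < threshold               # True -> predicted OOD

# Illustrative logits: one confident, ID-like input and one flat, uncertain input.
logits = np.array([[9.0, 0.5, 0.1],
                   [1.1, 1.0, 0.9]])
print(msp_score(logits), detect_ood(logits))
```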
OSR benchmarks usually split one multi-class classification dataset into ID and OOD parts.
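To make the benchmark difference concrete, a typical OSR-style split partitions the classes of a single labeled dataset into "known" classes (kept for training) and held-out "unknown" classes (used as OOD at test time). A minimal sketch; the specific class indices are an illustrative assumption.

```python
import numpy as np

def osr_split(labels, known_classes):
    """Return boolean masks for ID (known-class) and OOD (held-out-class) samples."""
    id_mask = np.isin(labels, known_classes)
    return id_mask, ~id_mask

# e.g. a 10-class dataset: classes 0-5 treated as ID, classes 6-9 held out as "unknown".
labels = np.random.randint(0, 10, size=20)
id_mask, ood_mask = osr_split(labels, known_classes=[0, 1, 2, 3, 4, 5])
print(labels[id_mask], labels[ood_mask])
```

By contrast, the OOD detection benchmarks noted above keep the whole training dataset as ID and draw OOD samples from entirely separate datasets with non-overlapping label sets.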
OSR discourages the usage of additional data during training by design.
OOD detection encompasses a broader spectrum of learning tasks.
What about Data drift and Concept drift?