Domain-Agnostic Outlier Ranking Algorithms—A Configurable Pipeline for Facilitating Outlier Detection in Scientific Datasets

Authors
  • Kerner, Hannah R.1
  • Rebbapragada, Umaa2
  • Wagstaff, Kiri L.2
  • Lu, Steven2
  • Dubayah, Bryce1
  • Huff, Eric2
  • Lee, Jake2
  • Raman, Vinay3
  • Kulshrestha, Sakshum1
  • 1 College Park, MD, United States
  • 2 California Institute of Technology, Pasadena, CA, United States
  • 3 Montgomery Blair High School, Silver Spring, MD, United States
Type
Published Article
Journal
Frontiers in Astronomy and Space Sciences
Publisher
Frontiers Media S.A.
Publication Date
May 10, 2022
Volume
9
Identifiers
DOI: 10.3389/fspas.2022.867947
Source
Frontiers
Disciplines
  • Astronomy and Space Sciences
  • Original Research
License
Green

Abstract

Automatic detection of outliers is universally needed when working with scientific datasets, e.g., for cleaning datasets or flagging novel samples to guide instrument acquisition or scientific analysis. We present Domain-agnostic Outlier Ranking Algorithms (DORA), a configurable pipeline that facilitates application and evaluation of outlier detection methods in a variety of domains. DORA allows users to configure experiments by specifying the location of their dataset(s), the input data type, feature extraction methods, and which algorithms should be applied. DORA supports image, raster, time series, or feature vector input data types and outlier detection methods that include Isolation Forest, DEMUD, PCA, RX detector, Local RX, negative sampling, and probabilistic autoencoder. Each algorithm assigns an outlier score to each data sample. DORA provides results interpretation modules to help users process the results, including sorting samples by outlier score, evaluating the fraction of known outliers recovered in the top n selections, clustering groups of similar outliers together, and web visualization. We demonstrated how DORA facilitates application, evaluation, and interpretation of outlier detection methods by performing experiments for three real-world datasets from Earth science, planetary science, and astrophysics, as well as one benchmark dataset (MNIST/Fashion-MNIST). We found that no single algorithm performed best across all datasets, underscoring the need for a tool that enables comparison of multiple algorithms.
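The abstract's core loop, score every sample, rank by outlier score, then check how many known outliers land in the top n selections, can be sketched with one of the listed algorithms. The sketch below is not DORA's actual code; it is a minimal illustration using scikit-learn's Isolation Forest on synthetic feature vectors, with hypothetical variable names (`recall_at_n`, the injected-outlier indices) chosen for this example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic feature-vector dataset: 100 inliers plus 5 injected outliers.
inliers = rng.normal(0.0, 1.0, size=(100, 4))
outliers = rng.normal(8.0, 1.0, size=(5, 4))
X = np.vstack([inliers, outliers])

# Fit Isolation Forest and assign an outlier score to each sample.
# score_samples returns higher values for inliers, so negate it
# so that larger score = more anomalous.
clf = IsolationForest(random_state=0).fit(X)
scores = -clf.score_samples(X)

# Sort samples by outlier score, most anomalous first.
ranking = np.argsort(scores)[::-1]

# Evaluate the fraction of known outliers recovered in the top n selections.
n = 5
known_outliers = set(range(100, 105))  # indices of the injected outliers
recall_at_n = len(set(ranking[:n]) & known_outliers) / len(known_outliers)
print(recall_at_n)
```

Because the injected outliers sit far from the inlier cluster, all five appear among the top five ranked samples here; on real scientific data the same ranking-and-recall evaluation applies, but with domain-specific feature extraction upstream.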
