Publication search
with "Out-of-distribution generalization" as keyword
Kirchmeyer, Matthieu
Deep learning has emerged as a powerful approach for modelling static data like images and, more recently, for modelling dynamical systems like those underlying time series, videos or physical phenomena. Yet, neural networks have been observed not to generalize well outside the training distribution, in other words out-of-distribution. This lack of gener...
Dagaev, Nikolay; Roads, Brett D.; Luo, Xiaoliang; Barry, Daniel N.; Patil, Kaustubh R.; Love, Bradley C.
Published in
Pattern Recognition Letters
Despite their impressive performance in object recognition and other tasks under standard testing conditions, deep networks often fail to generalize to out-of-distribution (o.o.d.) samples. One cause for this shortcoming is that modern architectures tend to rely on "shortcuts": superficial features that correlate with categories without capturing de...
Kook, Lucas; Sick, Beate; Bühlmann, Peter
Published in
Statistics and Computing
Prediction models often fail if train and test data do not stem from the same distribution. Out-of-distribution (OOD) generalization to unseen, perturbed test data is a desirable but difficult-to-achieve property for prediction models and in general requires strong assumptions on the data generating process (DGP). In a causally inspired perspective...
Fröhlich, Herberth Birck; de Oliveira, Bernardo Cassimiro Fonseca; Gonçalves, Armando Albertazzi Jr.
Published in
Journal of Nondestructive Evaluation
Shearography and thermography are two well-established nondestructive testing methods. Yet, both methods involve considerable subjectivity in the interpretation of their results, which makes automation difficult. Many efforts go towards applying intelligent algorithms, which bring many limitations of their own, such as the need for a large data set and fine-tunin...