Monocular depth estimation has improved greatly in recent years, but models predicting metric depth still struggle to generalize across diverse camera poses and datasets. While recent supervised methods mitigate this issue by leveraging ground prior information at inference, their adaptability to self-supervised settings is limited due to the ad...
We address the problem of learning imitation policies that generalize across environments sharing the same underlying causal structure between the system dynamics and the task. We introduce a novel loss for learning invariant state representations that draws inspiration from adversarial robustness. Our approach is algorithm-agnostic and does not re...
The complexity of neural circuits makes it challenging to decipher the brain's algorithms of intelligence. Recent breakthroughs in deep learning have produced models that accurately simulate brain activity, enhancing our understanding of the brain's computational objectives and neural coding. However, these models struggle to generalize beyond the...
Le Bars, Batiste; Bellet, Aurélien; Tommasi, Marc; Scaman, Kevin; Neglia, Giovanni
This paper presents a new generalization error analysis for Decentralized Stochastic Gradient Descent (D-SGD) based on algorithmic stability. The obtained results overhaul a series of recent works that suggested an increased instability due to decentralization and a detrimental impact of poorly-connected communication graphs on generalization. On t...
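For orientation, here is a minimal sketch of one synchronous D-SGD round, the algorithm whose stability is analyzed above, assuming a doubly stochastic mixing matrix W over the communication graph (the names params, grads, and W are illustrative, not the paper's notation):

    import numpy as np

    def dsgd_round(params, grads, W, lr):
        # One synchronous D-SGD round: each node takes a local
        # stochastic gradient step, then averages its model with its
        # neighbors' models according to the mixing weights W[i, j].
        # params, grads: (n_nodes, dim) arrays; W: (n_nodes, n_nodes).
        return W @ (params - lr * grads)

A complete graph with uniform weights (W with all entries 1/n) recovers centralized averaging each round; sparser, poorly-connected graphs make W less contractive, which is precisely the regime where the effect on stability and generalization is debated.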
Rosenberg, Benjamin M; Young, Katherine S; Nusslock, Robin; Zinbarg, Richard E; Craske, Michelle G
Background: Pavlovian fear paradigms involve learning to associate cues with threat or safety. Aberrances in Pavlovian fear learning correlate with psychopathology, especially anxiety disorders. This study evaluated symptom dimensions of anxiety and depression in relation to Pavlovian fear acquisition and generalization. Methods: 256 participants (70.31...
The empirical risk minimization (ERM) problem with relative entropy regularization (ERM-RER) is investigated under the assumption that the reference measure is a $\sigma$-finite measure, and not necessarily a probability measure. Under this assumption, which leads to a generalization of the ERM-RER problem allowing a larger degree of flexibility fo...
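For concreteness, the ERM-RER problem referenced above can be written (in illustrative notation, not necessarily the paper's) as $$\min_{P \ll Q} \int \mathsf{L}(\theta)\, \mathrm{d}P(\theta) + \lambda\, D(P \,\|\, Q),$$ where $Q$ is the reference measure, $\mathsf{L}$ the empirical risk, and $\lambda > 0$ the regularization factor; when the partition function $\int \exp(-\mathsf{L}(\theta)/\lambda)\,\mathrm{d}Q(\theta)$ is finite, the solution is the Gibbs measure with density $\mathrm{d}P^{\star}/\mathrm{d}Q \propto \exp(-\mathsf{L}/\lambda)$. Allowing $Q$ to be merely $\sigma$-finite, rather than a probability measure, is what enlarges the admissible class of reference measures.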
From the information forensics point of view, it is important to correctly distinguish between natural images (outputs of digital cameras) and computer-graphics images (outputs of advanced graphics rendering engines), so as to know the source of the images and the authenticity of the scenes described in them. It is challenging to achieve good cl...
Abstraction is key to human and artificial intelligence, as it allows one to see common structure in otherwise distinct objects or situations; as such, it is a key element for generality in AI. Anti-unification (or generalization) is \textit{the} part of theoretical computer science and AI studying abstraction. It has been successfully applied to ...
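As a minimal illustration of anti-unification (the term encoding and names below are my own, not from the abstract), the following sketch computes a least general generalization of two first-order terms: shared structure is kept, and mismatching subterms are replaced by variables, with the same pair of subterms always mapped to the same variable.

    def lgg(s, t, table=None):
        # Least general generalization of two first-order terms,
        # encoded as nested tuples: ('f', ('a',), ('b',)) is f(a, b).
        if table is None:
            table = {}
        # Same function symbol and arity: keep it, recurse on arguments.
        if s[0] == t[0] and len(s) == len(t):
            return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
        # Mismatch: introduce (or reuse) a variable for this pair of subterms.
        if (s, t) not in table:
            table[(s, t)] = ('X%d' % len(table),)
        return table[(s, t)]

    # Example: the generalization of f(a, b) and f(c, b) is f(X0, b).
    print(lgg(('f', ('a',), ('b',)), ('f', ('c',), ('b',))))  # ('f', ('X0',), ('b',))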