Torres, Bernardo; Peeters, Geoffroy; Richard, Gaël

In neural audio signal processing, pitch conditioning has been used to enhance the performance of synthesizers. However, jointly training pitch estimators and synthesizers is challenging when using a standard audio-to-audio reconstruction loss, leading to reliance on external pitch trackers. To address this issue, we propose using a spectral loss fun...

Nenna, Luca; Pegon, Paul

We investigate the convergence rate of multi-marginal optimal transport costs that are regularized with the Boltzmann-Shannon entropy, as the noise parameter $\varepsilon$ tends to $0$. We establish lower and upper bounds on the difference with the unregularized cost of the form $C\varepsilon\log(1/\varepsilon)+O(\varepsilon)$ for some explicit dim...
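The gap between the entropic and unregularized costs can be observed numerically. The sketch below is an illustrative two-marginal discrete special case (not the paper's multi-marginal setting): it compares a log-domain Sinkhorn transport cost against the exact cost, which for uniform measures of equal size reduces to a linear assignment problem. All sizes and $\varepsilon$ values are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n = 20
x, y = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
C /= C.max()  # normalize so the noise parameter eps is on a fixed scale

# Unregularized cost between uniform measures of equal size:
# this reduces to a linear assignment problem.
row, col = linear_sum_assignment(C)
exact = C[row, col].mean()

def entropic_cost(C, eps, iters=3000):
    """Transport cost <P_eps, C> of the entropic plan (log-domain Sinkhorn)."""
    n = C.shape[0]
    log_ab = -np.log(n) * np.ones(n)  # log of uniform marginals
    f, g = np.zeros(n), np.zeros(n)   # dual potentials
    for _ in range(iters):
        f = -eps * logsumexp((g[None, :] - C) / eps + log_ab[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + log_ab[:, None], axis=0)
    P = np.exp((f[:, None] + g[None, :] - C) / eps
               + log_ab[:, None] + log_ab[None, :])
    return (P * C).sum()

# The gap to the exact cost shrinks as eps decreases.
gap = {eps: entropic_cost(C, eps) - exact for eps in (0.5, 0.05)}
```
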

Genest, Baptiste; Courty, Nicolas; Coeurjolly, David

In machine learning and computer graphics, a fundamental task is the approximation of a probability density function through a well-dispersed collection of samples. Providing a formal metric for measuring the distance between probability measures on general spaces, Optimal Transport (OT) emerges as a pivotal theoretical framework within this contex...

Miclo, Laurent

Helmholtz decompositions break down any vector field into a sum of a gradient field and a divergence-free vector field. Such a result is extended to finite irreducible and reversible Markov processes, where vector fields correspond to anti-symmetric functions on the oriented edges of the underlying graph.
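On a finite graph, such a decomposition can be computed by least squares: project the edge function onto gradients of node potentials, and the remainder is automatically divergence-free. The sketch below treats the plain unweighted case (the reversible-measure weighting of the Markov setting is omitted); the graph and values are illustrative.

```python
import numpy as np

# Oriented edges of a small undirected graph (4-cycle plus a chord).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
# Incidence matrix: row e maps a node potential phi to phi[j] - phi[i].
B = np.zeros((len(edges), n))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = -1.0, 1.0

rng = np.random.default_rng(1)
F = rng.normal(size=len(edges))  # anti-symmetric edge function (one value per oriented edge)

# Gradient part: least-squares potential, i.e. the normal equations
# are the graph Laplacian system B^T B phi = B^T F.
phi, *_ = np.linalg.lstsq(B, F, rcond=None)
grad = B @ phi
rot = F - grad            # divergence-free remainder
div_rot = B.T @ rot       # vanishes at every node, by the normal equations
```
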

Fettal, Chakib; Labiod, Lazhar; Nadif, Mohamed

Clustering is an important task in computer vision and machine learning in general, and new applications are constantly appearing. A common way of obtaining an image dataset partition is through graph cuts, which are also used as a component in more complex clustering paradigms such as subspace clustering. One drawback of classical min-cut algorith...
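A standard way to obtain a graph-cut partition in practice is the spectral relaxation of the normalized cut: split nodes by the sign of the second eigenvector of the normalized Laplacian. The sketch below is a generic illustration of that relaxation, not the method of this paper; the toy affinity matrix is an assumption for demonstration.

```python
import numpy as np

# Toy similarity graph: two dense blocks joined by one weak bridge.
rng = np.random.default_rng(0)
n = 10
W = np.zeros((2 * n, 2 * n))
W[:n, :n] = W[n:, n:] = 1.0          # within-cluster affinities
W[0, n] = W[n, 0] = 0.1              # weak between-cluster edge
np.fill_diagonal(W, 0.0)

# Normalized-cut relaxation: eigenvector of the 2nd-smallest eigenvalue
# of the symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
Dinv = np.diag(1.0 / np.sqrt(d))
L = np.eye(2 * n) - Dinv @ W @ Dinv
vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)   # sign split gives the bipartition
```
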

Caillet, Thibault; Santambrogio, Filippo

We prove an existence result for a large class of PDEs with a nonlinear Wasserstein gradient flow structure. We use the classical theory of Wasserstein gradient flows to derive an EDI formulation of our PDE and prove that, under some integrability assumptions on the initial condition, the PDE is satisfied in the sense of distributions.

Leluc, Rémi; Dieuleveut, Aymeric; Portier, François; Segers, Johan; Zhuman, Aigerim

The Sliced-Wasserstein (SW) distance between probability measures is defined as the average of the Wasserstein distances resulting from the associated one-dimensional projections. As a consequence, the SW distance can be written as an integral with respect to the uniform measure on the sphere and the Monte Carlo framework can be employed for calcula...
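The plain Monte Carlo estimator that this line of work starts from can be sketched in a few lines: draw uniform directions on the sphere, project both samples, and average the closed-form one-dimensional Wasserstein distances. This is a minimal baseline sketch for equal-size uniform-weight point clouds, not the estimator proposed in the paper.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=2000, p=2, seed=0):
    """Monte Carlo estimate of SW_p between two equal-size point clouds
    with uniform weights (baseline sketch, not the paper's estimator)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on the sphere
    # 1D projections; for sorted equal-size samples the 1D W_p is closed-form
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    # average W_p^p over directions, then take the p-th root
    return ((np.abs(px - py) ** p).mean()) ** (1 / p)
```

For instance, translating a cloud by a vector $c$ gives $\mathrm{SW}_2 = \|c\|/\sqrt{d}$, which the estimator recovers up to Monte Carlo error.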

Collas, Antoine; Flamary, Rémi; Gramfort, Alexandre

This paper introduces a novel domain adaptation technique for time series data, called Mixing model Stiefel Adaptation (MSA), specifically addressing the challenge of limited labeled signals in the target dataset. Leveraging a domain-dependent mixing model and the optimal transport domain adaptation assumption, we exploit abundant unlabeled data in...

Nismi, Rimaz

Magnetic Resonance Imaging (MRI) is an essential healthcare technology, with diffusion MRI being a specialized technique. Diffusion MRI exploits the inherent diffusion of water molecules within the human body to produce a high-resolution tissue image. An MRI image contains information about a 3D volume in space, composed of 3D units called voxels. ...

Dumont, Théo; Lacombe, Théo; Vialard, François-Xavier

In this work, we study the structure of minimizers of the quadratic Gromov--Wasserstein (GW) problem on Euclidean spaces for two different costs. The first one is the scalar product, for which we prove that optimizers can always be found among Monge maps, and we detail the structure of such optimal maps. The second cost is the squared Euclidea...