Melnykova, Anna

Multidimensional hypoelliptic diffusions arise naturally as models of neuronal activity. Estimation in these models is complicated by the degenerate structure of the diffusion coefficient. We build a consistent estimator of the drift and variance parameters with the help of a discretized log-likelihood of the continuous process in the case of f...

Donnet, Sophie; Rivoirard, Vincent; Rousseau, Judith

This paper studies nonparametric estimation of the parameters of multivariate Hawkes processes. We consider the Bayesian setting and derive posterior concentration rates. Rates are first derived in L1-metrics for the stochastic intensities of the Hawkes process. We then deduce rates for the L1-norm of the interaction functions of the process. Our results are...
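As a rough illustration of the object studied here (not the paper's multivariate nonparametric construction), the stochastic intensity of a univariate Hawkes process with an exponential interaction function can be sketched as follows; the parameters mu, alpha, beta are illustrative assumptions:

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Stochastic intensity of a univariate Hawkes process with an
    exponential interaction function:
        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    where `events` is the list of past event times t_i."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)
```

Each past event excites the process and its influence decays exponentially; in the multivariate case each component carries one such interaction function per pair of components.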

Poinas, Arnaud

We prove a general inequality on $\beta$-mixing coefficients of point processes depending only on their $n$-th order intensity functions. We apply this inequality in the case of determinantal point processes and show that the rate of decay of the $\beta$-mixing coefficients of a wide class of DPPs is optimal.

Sart, Mathieu

We propose a unified study of three statistical settings by extending the ρ-estimation method developed in [BBS17]. More specifically, we aim at estimating a density, a hazard rate (from censored data), and a transition intensity of a time-inhomogeneous Markov process. We relate the performance of ρ-estimators to deviations of an empirical process. ...

Albert, Clément; Dutfoy, Anne; Girard, Stéphane

We investigate the asymptotic behavior of the (relative) extrapolation error associated with some estimators of extreme quantiles based on extreme-value theory. It is shown that the extrapolation error can be interpreted as the remainder of a first order Taylor expansion. Necessary and sufficient conditions are then provided such that this error te...

Comte, Fabienne; Dion, Charlotte

This paper presents a general methodology for nonparametric estimation of a function s related to a nonnegative real random variable X, under a constraint of type s(0) = c. Three different examples are investigated: the direct observations model (X is observed), the multiplicative noise model (Y = XU is observed, with U following a uniform distributi...
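As a minimal sketch of the second setting only (the multiplicative noise model, with U uniform on (0, 1); the choice of X-distribution below is an illustrative assumption, not from the paper):

```python
import random

def sample_multiplicative_noise(n, sample_x, seed=0):
    """Draw n observations Y = X * U from the multiplicative noise model,
    where U ~ Uniform(0, 1) is independent of X and `sample_x(rng)`
    draws one realization of X."""
    rng = random.Random(seed)
    return [sample_x(rng) * rng.random() for _ in range(n)]
```

For instance, with X exponential of mean 1, the observations Y have mean E[X] E[U] = 1/2; the statistical problem is then to recover a function of the law of X from the Y sample alone.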

Gribonval, Rémi; Blanchard, Gilles; Keriven, Nicolas; Traonmilin, Yann

We describe a general framework, compressive statistical learning, for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task. A near-minimizer of the risk is com...
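One common instance of such a sketch (an assumption for illustration; the framework covers general random moments) uses empirical random Fourier moments, computed in a single pass over the data:

```python
import cmath

def sketch(data, frequencies):
    """One-pass compressive sketch via empirical random Fourier moments:
        z_j = (1/n) * sum_i exp(i * <omega_j, x_i>)
    for each frequency vector omega_j in `frequencies`.
    Each data point x_i and each omega_j is a tuple of floats."""
    n = len(data)
    return [
        sum(cmath.exp(1j * sum(w * x for w, x in zip(omega, xi))) for xi in data) / n
        for omega in frequencies
    ]
```

The sketch dimension is the number of frequencies, independent of the number of samples n, which is what makes the approach resource-efficient at scale.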

Rabier, Charles-Elie

In Quantitative Trait Locus detection, selective genotyping is a way to reduce genotyping costs: only individuals with extreme phenotypes are genotyped. We focus here on statistical inference for selective genotyping. We study, in a very general framework, the performance of different tests suitable for selective genotyping. We prove that we...
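The selection step itself can be sketched as follows (a minimal illustration; the quantile thresholds and this helper are assumptions, not the paper's procedure):

```python
def select_for_genotyping(phenotypes, lower_q=0.1, upper_q=0.9):
    """Selective genotyping: return the indices of individuals whose
    phenotype falls in the lower or upper tail of the empirical
    distribution; only those individuals are genotyped."""
    ordered = sorted(phenotypes)
    lo = ordered[int(lower_q * (len(ordered) - 1))]
    hi = ordered[int(upper_q * (len(ordered) - 1))]
    return [i for i, y in enumerate(phenotypes) if y <= lo or y >= hi]
```

Inference must then account for the fact that genotypes are observed only conditionally on the phenotype being extreme.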

Diel, Roland; Le Corff, Sylvain; Lerasle, Matthieu

In this paper, we estimate the distribution of hidden node weights in large random graphs from the observation of very few edge weights. In this very sparse setting, the first non-asymptotic risk bounds for maximum likelihood estimators (MLE) are established. The proof relies on the construction of a graphical model encoding conditional dependenc...

Azaïs, Jean-Marc; Bachoc, François; Klein, Thierry; Lagnoux, Agnès; Nguyen, Thi Mong Ngoc

We consider the semi-parametric estimation of a scale parameter of a one-dimensional Gaussian process with known smoothness. We suggest an estimator based on quadratic variations and on the moment method. We provide asymptotic approximations of the mean and variance of this estimator, together with asymptotic normality results, for a large class of...
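For the simplest case (scaled Brownian motion, an illustrative assumption; the paper covers a large class of Gaussian processes with known smoothness), a quadratic-variations moment estimator of the scale can be sketched as:

```python
import math
import random

def quadratic_variation(path):
    """Quadratic variation of a discretely observed path: the sum of
    squared increments between consecutive observations."""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

def estimate_sigma(path):
    """Moment-method scale estimate for sigma * B observed at equispaced
    points on [0, 1]: the quadratic variation converges to sigma**2 as
    the grid is refined, so its square root estimates sigma."""
    return math.sqrt(quadratic_variation(path))

def brownian_path(n, sigma, seed=0):
    """Simulate sigma * B at n + 1 equispaced points on [0, 1]."""
    rng = random.Random(seed)
    step = sigma / math.sqrt(n)
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + step * rng.gauss(0.0, 1.0))
    return path
```

For rougher or smoother processes, generalized (higher-order) quadratic variations play the same role, with asymptotic normality governing the estimator's accuracy.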