Moulin-Frier, Clément; Brochard, Jules; Stulp, Freek; Oudeyer, Pierre-Yves
Infant vocal babbling strongly relies on jaw oscillations, especially at the stage of canonical babbling, which underlies the syllabic structure of world languages. In this paper, we propose, model and analyze a hypothesis to explain this predominance of the jaw in early babbling. This hypothesis states that general stochastic optimization princip...
García-Ibáñez, Yaiza; Riesco-Eizaguirre, Garcilaso; Santisteban, Pilar; Casar, Berta; Crespo, Piero
Published in Cancers
RAS mutations are the second most common genetic alteration in thyroid tumors. However, the extent to which they are associated with the most aggressive phenotypes is still controversial. With regard to malignancy, the majority of RAS-mutant tumors are classified as undetermined, which complicates their clinical management and can lead to undesire...
Miech, Antoine; Alayrac, Jean-Baptiste; Bojanowski, Piotr; Laptev, Ivan; Sivic, Josef
Discriminative clustering has been successfully applied to a number of weakly-supervised learning tasks. Such applications include person and action recognition, text-to-video alignment, object co-segmentation and co-localization in videos and images. One drawback of discriminative clustering, however, is its limited scalability. We address this i...
Barbier, Jean; Krzakala, Florent; Macris, Nicolas; Miolane, Léo; Zdeborová, Lenka
We consider generalized linear models where an unknown $n$-dimensional signal vector is observed through the successive application of a random matrix and a non-linear (possibly probabilistic) componentwise function. We consider the models in the high-dimensional limit, where the observation consists of $m$ points, and $m/n \to \alpha$ where ${...
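The observation model described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact setup: the $1/\sqrt{n}$ scaling and the choice of $\mathrm{sign}$ as the componentwise non-linearity (one-bit measurements) are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200            # dimension of the unknown signal
alpha = 2.0        # measurement ratio m / n
m = int(alpha * n) # number of observed points

x_star = rng.standard_normal(n)     # unknown n-dimensional signal
W = rng.standard_normal((m, n))     # random measurement matrix

z = W @ x_star / np.sqrt(n)         # linear projection of the signal
y = np.sign(z)                      # componentwise non-linearity (assumed here)

print(y.shape)  # prints (400,)
```

The high-dimensional limit of the abstract corresponds to letting `n` and `m` grow together while `m / n` stays fixed at `alpha`.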
Klein, John; Albardan, Mahmoud; Guedj, Benjamin; Colot, Olivier
We examine a network of learners that address the same classification task but must learn from different data sets. The learners cannot share data but instead share their models. Models are shared only once, so as to limit network load. We introduce DELCO (standing for Decentralized Ensemble Learning with COpulas), a new approach allowin...
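The setting above (private data shards, one-time model sharing, ensemble prediction) can be sketched as follows. This is a toy illustration only: the local model is a nearest-centroid classifier and the aggregation is plain score averaging, standing in for DELCO's copula-based combination; the data and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_shard(n_per_class):
    """Toy 2-class data: Gaussian blobs around (-2, -2) and (+2, +2)."""
    x0 = rng.normal(-2.0, 1.0, size=(n_per_class, 2))
    x1 = rng.normal(+2.0, 1.0, size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

def fit_centroids(X, y):
    """Local model = one centroid per class; this is all a learner shares."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(models, X):
    """Ensemble: average class scores (negative distances) over all learners."""
    scores = np.zeros((len(X), 2))
    for centroids in models:
        scores -= np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return scores.argmax(axis=1)

# Three learners train on disjoint shards; only the fitted models are shared.
models = [fit_centroids(*make_shard(50)) for _ in range(3)]

X_test, y_test = make_shard(100)
accuracy = (predict(models, X_test) == y_test).mean()
print(accuracy)  # well-separated blobs, so accuracy is close to 1.0
```

Sharing only `models` (six 2-D centroids in total) rather than the raw shards is what keeps the network load constant, as the abstract emphasizes.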
Chan-Hon-Tong, Adrien
One of the most burning issues in the deep learning community is the well-known adversarial example phenomenon, where two very close samples are classified differently. In this paper, I propose to see in this phenomenon an opportunity to help training with a new kind of regularisation and/or semi-supervised training. The main idea is to use both class...
Briot, Jean-Pierre; Hadjeres, Gaëtan; Pachet, François-David
Retoré, Christian
This paper is a reflection on the computability of natural language semantics. It does not contain a new model or new results in the formal semantics of natural language: it is rather a computational analysis of the logical models and algorithms currently used in natural language semantics, defined as the mapping of a statement to logical formulas —...
Lu, Ying; Chen, Liming; Saidi, Alexandre
Training a Deep Neural Network (DNN) from scratch requires a large amount of labeled data. For a classification task where only a small amount of training data is available, a common solution is to perform fine-tuning on a DNN which is pre-trained with related source data. This consecutive training process is time consuming and does not consider expl...
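The common fine-tuning baseline mentioned above can be sketched minimally: freeze the pre-trained part and train only a small head on the target data. In this illustration the "pre-trained" extractor is stood in by a fixed random feature map, and the head is a logistic-regression layer trained by gradient descent; all names, data, and hyper-parameters are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Frozen "pre-trained" feature extractor: fixed random projection + ReLU.
W_frozen = rng.standard_normal((2, 32))
def features(X):
    return np.maximum(X @ W_frozen, 0.0)

# Small labeled target data set (two Gaussian blobs).
X = np.vstack([rng.normal(-1.5, 1.0, (40, 2)), rng.normal(1.5, 1.0, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

# Fine-tuning step: only the head weights w are updated; W_frozen never moves.
F = features(X)
w = np.zeros(F.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-F @ w))     # sigmoid prediction
    w -= 0.1 * F.T @ (p - y) / len(y)    # gradient step on the logistic loss

accuracy = ((1.0 / (1.0 + np.exp(-F @ w)) > 0.5) == y).mean()
print(accuracy)
```

Freezing `W_frozen` is what makes this cheap on a small target set: only the 32 head weights are estimated from the 80 labeled examples.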
Mlynarski, Pawel; Delingette, Hervé; Criminisi, Antonio; Ayache, Nicholas
We present an efficient deep learning approach for the challenging task of tumor segmentation in multisequence MR images. In recent years, Convolutional Neural Networks (CNN) have achieved state-of-the-art performance in a large variety of recognition tasks in medical imaging. Because of the considerable computational cost of CNNs, large volumes s...