Since the seminal work of Zawadzki in the 1970s, the so-called Taylor's "frozen" hypothesis has regularly been used to study the statistical properties of rainfall patterns. This hypothesis yields a drastic simplification in terms of the symmetry of the space-time structure: the large-scale advection velocity is the conversion factor linking the time and space autocorrelation functions (ACFs) of the small-scale variability. This study revisits the frozen hypothesis with a geostatistical model. Using analytical developments and numerical simulations tuned to available case studies from the literature, the role of large- and small-scale rainfall kinematics in the properties of the space-time ACF and the associated fluctuations is investigated. In particular, the merits and limits of the ACF signature classically used to test the frozen hypothesis are examined. The conclusion is twofold. First, Taylor's hypothesis, understood as the quest for a space-time symmetry in rain-field variability, remains important in hydrometeorology four decades after the pioneering work of Zawadzki. Second, the frozen hypothesis, introduced for simplification purposes, appears difficult to verify and too constraining: the methods proposed to test it rely too directly on the use of the advection velocity as a space-time conversion factor instead of considering the ACF signature more globally. The proposed model, which uses two characteristic velocities instead of one, appears more flexible in fitting the ACF behaviors reported in the literature. This remains to be verified against a long-term, high-resolution dataset.
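For orientation, the space-time symmetry invoked above can be sketched as follows (a standard statement of the frozen hypothesis, with notation chosen here for illustration: $R$ a stationary rain field, $\mathbf{V}$ the large-scale advection velocity, and $\rho_s$, $\rho_t$ the spatial and temporal ACFs):

```latex
% Frozen hypothesis: the field is purely advected, without internal evolution
R(\mathbf{x}, t) = R(\mathbf{x} - \mathbf{V}t,\, 0)
% Consequence: the Eulerian temporal ACF is the spatial ACF evaluated
% at the advection displacement, with |V| the space-time conversion factor
\rho_t(\tau) = \rho_s(\mathbf{V}\tau)
```

Testing this equality between the two ACFs, with $\mathbf{V}$ as the sole conversion factor, is the classical signature discussed in the abstract.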