Hierarchical Bayes Ensemble Kalman Filtering

arXiv ID: 1509.00652
Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using such an ad-hoc regularization, we adopt the idea of Myrseth and Omre (2010) and explicitly admit that the ${\bf B}$ matrix is unknown and random, estimating it along with the state (${\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\bf P}$ and ${\bf Q}$ of the ${\bf B}$ matrix in the extended control vector $({\bf x},{\bf P},{\bf Q})$. Similarly, we break the traditional background ensemble into a predictability-error related ensemble and a model-error related ensemble. At the observation update (analysis) step, we specify Inverse Wishart priors for the random matrices ${\bf P}$ and ${\bf Q}$, and a conditionally Gaussian prior for the state ${\bf x}$. Then, we update the prior distribution of $({\bf x},{\bf P},{\bf Q})$ using both observation and ensemble data, so that ensemble members are used as generalized observations and ordinary observations are allowed to influence the covariances. An approximation that leads to a practicable analysis algorithm is proposed. Performance of the new filter is studied in numerical experiments with a one-dimensional model of "truth" and "synthetic" observations. The experiments show that the new filter significantly outperforms EnKF in a wide range of filtering regimes.
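The core ideas above can be illustrated with a minimal sketch. This is not the authors' algorithm: in particular, it uses only the conjugate Inverse Wishart update from the ensemble data and omits the feedback of ordinary observations onto the covariances, and all function names, dimensions, and hyperparameter values are illustrative assumptions. The sketch shows how an Inverse Wishart prior on a covariance combines with ensemble perturbations to give a posterior-mean estimate, how the two ensembles yield estimates of ${\bf P}$ and ${\bf Q}$ whose sum plays the role of ${\bf B}$, and how that estimate enters a standard Kalman analysis of the state.

```python
import numpy as np

def iw_posterior_mean(Psi, nu, perturbations):
    """Posterior-mean covariance under an Inverse Wishart prior IW(Psi, nu).

    Ensemble perturbations (one member per row) are treated as zero-mean
    draws with the unknown covariance; conjugacy gives the posterior
    IW(Psi + S, nu + N), whose mean is (Psi + S) / (nu + N - p - 1).
    """
    N, p = perturbations.shape
    S = perturbations.T @ perturbations           # ensemble scatter matrix
    return (Psi + S) / (nu + N - p - 1)           # mean of the IW posterior

def kalman_analysis(x_b, B, H, R, y):
    """Observation update of the state with background covariance B."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman gain
    return x_b + K @ (y - H @ x_b)

# Toy usage (all numbers are assumptions): separate predictability-error
# and model-error ensembles yield estimates of P and Q; their sum serves
# as B in the analysis of the state.
rng = np.random.default_rng(0)
p = 4                                             # state dimension
nu = 10.0                                         # IW degrees of freedom
Psi = (nu - p - 1) * np.eye(p)                    # prior mean of P, Q is I
pred_ens = rng.standard_normal((20, p))           # predictability-error ensemble
model_ens = rng.standard_normal((20, p))          # model-error ensemble
P_hat = iw_posterior_mean(Psi, nu, pred_ens)
Q_hat = iw_posterior_mean(Psi, nu, model_ens)
B_hat = P_hat + Q_hat                             # B = P + Q
H = np.eye(2, p)                                  # observe first two components
R = 0.1 * np.eye(2)                               # observation-error covariance
y = np.ones(2)                                    # synthetic observations
x_a = kalman_analysis(np.zeros(p), B_hat, H, R, y)
```

With no ensemble members the estimate falls back to the prior mean, and as the ensemble grows the scatter term dominates, so the prior acts as a smooth regularizer in place of ad-hoc localization and inflation.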
