Misspecified Bayesian Cramér-Rao Bound for Sparse Bayesian Learning

We consider a misspecified Bayesian Cramér-Rao bound (MBCRB), which applies when the assumed data model differs from the true generative model. As an example of this scenario, we study a popular sparse Bayesian learning (SBL) algorithm, in which the assumed data model, different from the true model, is constructed to make Bayesian inference of a sparse signal computationally feasible. Formulating SBL as Bayesian inference under a misspecified data model, we derive a lower bound on the mean square error (MSE) of the estimated sparse signal. A simulation study validates the derived bound and shows that the SBL performance approaches the MBCRB at very high signal-to-noise ratios.
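To make the SBL setup concrete, the following is a minimal sketch of an EM-style SBL estimator under an assumed Gaussian prior with per-coefficient variances, evaluated by its empirical MSE. All problem sizes, the measurement matrix, the noise level, and the update rule shown here are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes (illustrative, not from the paper)
n, m, k = 50, 100, 5                      # measurements, signal length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
sigma2 = 1e-4                             # noise variance (high-SNR regime)
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(n)

# EM-style SBL: the assumed (misspecified) model places a zero-mean Gaussian
# prior N(0, diag(gamma)) on x; gamma is learned from the data.
gamma = np.ones(m)
for _ in range(200):
    # Posterior covariance and mean under the assumed Gaussian prior
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    # EM update of the prior variances; floor avoids division by zero
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)

# Empirical MSE of the sparse-signal estimate (the quantity the MBCRB bounds)
mse = np.mean((mu - x_true) ** 2)
print(f"MSE of SBL estimate: {mse:.3e}")
```

At high SNR the learned variances of the inactive coefficients shrink toward zero, pruning them from the model, which is the mechanism behind the sparsity of the SBL estimate.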