Markov networks for low-level vision
MERL Report TR99-08: William T. Freeman, Egon C. Pasztor
We seek a learning-based algorithm that applies to various low-level vision problems. For each problem, we want to find the scene interpretation that best explains the image data. For example, we may want to infer the projected velocities (scene) which best explain two consecutive image frames (image). From synthetic data, we model the relationship between local image and scene regions, and between a scene region and its neighboring scene regions. Three learned probabilities characterize the low-level vision algorithm: the local prior, the local likelihood, and the conditional probabilities of scene neighbors. Given a new image, we propagate likelihood functions in a Markov network to infer the underlying scene. We use a factorization approximation, ignoring the effect of loops. This yields an efficient method to infer low-level scene interpretations, which we always find to be stable. We illustrate the method with different representations, and show it working for three applications: an explanatory example, motion analysis, and estimating high-resolution images from low-resolution ones.
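The inference step described above can be sketched in code. The following is a minimal, hypothetical illustration of propagating likelihoods through a Markov network, simplified to a chain of hidden scene nodes with a local likelihood per node and a learned neighbor-compatibility matrix; the variable names, chain topology, and random stand-in probabilities are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Toy setup: each scene node x_i takes one of N_STATES candidate values
# (e.g. sampled velocities); likelihood[i] plays the role of p(y_i | x_i)
# from the local image patch, compat approximates p(x_j | x_i) between
# neighboring scene regions, and prior is the local prior p(x).
N_STATES = 4
N_NODES = 5

rng = np.random.default_rng(0)
prior = np.full(N_STATES, 1.0 / N_STATES)
compat = rng.random((N_STATES, N_STATES))
compat /= compat.sum(axis=1, keepdims=True)          # rows sum to 1
likelihood = rng.random((N_NODES, N_STATES))

def chain_bp(likelihood, compat, prior):
    """One forward and one backward message pass on a chain,
    then per-node beliefs (normalized posterior marginals)."""
    n, k = likelihood.shape
    fwd = np.ones((n, k))   # messages arriving from the left neighbor
    bwd = np.ones((n, k))   # messages arriving from the right neighbor
    for i in range(1, n):
        m = (likelihood[i - 1] * prior * fwd[i - 1]) @ compat
        fwd[i] = m / m.sum()
    for i in range(n - 2, -1, -1):
        m = compat @ (likelihood[i + 1] * prior * bwd[i + 1])
        bwd[i] = m / m.sum()
    belief = likelihood * prior * fwd * bwd
    return belief / belief.sum(axis=1, keepdims=True)

beliefs = chain_bp(likelihood, compat, prior)
mpe = beliefs.argmax(axis=1)   # most probable scene value at each node
```

On a chain the passes are exact; on a network with loops the same local updates become the factorization approximation the abstract mentions, which ignores the effect of loops.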