Any facial feature localization algorithm needs to incorporate two sources of information: 1) prior shape knowledge, and 2) image observations. Existing methods have primarily focused on different ways of representing and incorporating the image observations into the problem solution. Prior shape knowledge, on the other hand, has mostly been modeled using parametrized shape models. Parametrized shape models have relatively few parameters to control the shape variations, and hence their representational power is limited to the shape variations present in the training data. In this paper, we propose a novel method for modeling the prior shape knowledge. Rather than using a holistic approach, as is the case for parametrized shape models, we model the prior shape knowledge as a set of local compatibility potentials. This "distributed" approach provides greater representational power, as it allows individual landmarks to move more freely. The prior shape knowledge is combined with local image observations in a probabilistic graphical model framework, where inference is performed through nonparametric belief propagation. Through qualitative and quantitative experiments, the proposed approach is shown to outperform state-of-the-art methods in terms of localization accuracy.
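To make the abstract's idea concrete, the following is a minimal, hedged sketch of particle-based (nonparametric) belief propagation between two landmark nodes. It is not the paper's implementation: the toy landmark positions, the Gaussian pairwise potential on landmark displacements, the stubbed image likelihood, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two neighbouring landmarks (e.g. two eye corners).
# True positions and the learned mean displacement are assumptions.
true_pos = {0: np.array([100.0, 50.0]), 1: np.array([130.0, 50.0])}
offset_01 = true_pos[1] - true_pos[0]       # mean displacement 0 -> 1
sigma_pair, sigma_obs = 3.0, 5.0            # potential widths (assumed)
n_particles = 500

# Stub image observations: one noisy "detection" per landmark.
det = {n: true_pos[n] + rng.normal(0.0, 2.0, 2) for n in (0, 1)}

def unary(node, pts):
    """Stub local image likelihood: Gaussian around the detection."""
    d2 = np.sum((pts - det[node]) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma_obs ** 2))

# Each node's belief is a particle set, initialised broadly over the image.
particles = {n: rng.uniform(0.0, 200.0, size=(n_particles, 2)) for n in (0, 1)}

for _ in range(5):                          # a few message-passing sweeps
    for src, dst, off in ((0, 1, offset_01), (1, 0, -offset_01)):
        # Message src -> dst: push src particles through the pairwise
        # compatibility potential (Gaussian on the displacement), then
        # weight by the local image likelihood at the destination.
        proposed = particles[src] + off + rng.normal(0.0, sigma_pair,
                                                     (n_particles, 2))
        w = unary(dst, proposed)
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles[dst] = proposed[idx]      # resampled belief at dst

# Point estimates: the mean of each node's particle set.
est = {n: particles[n].mean(axis=0) for n in (0, 1)}
print(est)
```

Because the shape prior is expressed only through the pairwise displacement potential, each landmark is free to move wherever its particles (and the image evidence) take it, which is the "distributed" property the abstract contrasts with holistic parametrized shape models.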
Computer Vision and Pattern Recognition Workshops
nonparametric facial feature localization, parametrized shape model, shape variation, greater representation power, image observation, local image observation, local compatibility potential, holistic approach, prior shape knowledge, facial feature localization algorithm, face recognition, feature extraction, graph theory, mathematical model, shape, message passing, training data, data models, face, topology, probability
Facial recognition system, Computer vision, Active shape model, Pattern recognition, Computer science, Feature extraction, Nonparametric statistics, Artificial intelligence, Graphical model, Probabilistic logic, Heat kernel signature, Shape analysis (digital geometry)