Most existing biometrics rely on static features of the human face or body. As with gait-based body biometrics, evidence suggests that the dynamics of facial features are also useful for distinguishing between individuals. Facial dynamics typically refers to skin motion; however, it is not the skin motion induced by facial gestures alone, but also the accompanying changes in skin appearance, that carry key identity information. For gestures and their appearance to be usable as a biometric, a robust model combining both appearance and motion information must be established. We are currently exploring two fundamental models of facial dynamics for biometric feature extraction:

    1. G-folds: A low-dimensional, appearance-based model of facial gestures. G-folds may be used as biometric signatures in isolation, or to extract gesture intensities for the construction of higher-order signatures (e.g., asymmetry profiles).

    2. Thin-plate splines (TPS): A model based on facial feature locations that encapsulates the deformation characteristics of the face in a distributed “bending energy”. TPS features constitute a more traditional, skin-motion style of dynamic feature.
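The low-dimensional appearance model in item 1 can be illustrated with a generic PCA embedding of a gesture's image frames. This is only a sketch of the idea, not the actual G-fold construction; `frames` (flattened face images of one gesture) and `k` are hypothetical inputs:

```python
import numpy as np

def appearance_embedding(frames, k=10):
    """Project vectorized face images onto a k-dim appearance subspace (PCA).

    frames: (t, d) array, one flattened face image per row (hypothetical input).
    Returns a (t, k) trajectory of the gesture in the low-dimensional space.
    """
    mean = frames.mean(axis=0)
    X = frames - mean
    # SVD of the centered data; rows of Vt are the principal appearance modes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T
```

The resulting trajectory, rather than any single frame, is what would serve as the dynamic signature.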
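The “bending energy” in item 2 has a standard closed form: for n landmark correspondences it is a quadratic form in the target coordinates, with the upper-left block of the inverted TPS system as its matrix. A minimal numerical sketch (function name and inputs are illustrative; the conventional 1/(8π) factor is omitted):

```python
import numpy as np

def tps_bending_energy(src, dst):
    """Bending energy of the thin-plate spline warping src onto dst.

    src, dst: (n, 2) arrays of corresponding facial feature locations
    (n >= 4, not all collinear). Zero iff the warp is purely affine;
    larger values indicate more non-rigid facial deformation.
    """
    n = src.shape[0]
    # Radial basis U(r) = r^2 log(r^2), with U(0) = 0.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.zeros_like(d2)
    mask = d2 > 0
    K[mask] = d2[mask] * np.log(d2[mask])
    # Standard TPS system L = [[K, P], [P^T, 0]] with affine basis P.
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    B = np.linalg.inv(L)[:n, :n]  # bending-energy matrix
    # Quadratic form summed over the x and y target coordinates.
    return float(np.einsum('ij,ik,jk->', B, dst, dst))
```

Because the energy vanishes for affine motion (global head translation, rotation, scaling), what remains measures exactly the non-rigid component of the facial deformation.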

G-fold and TPS features will be fused with additional sensing modalities (thermal and audio) to reduce the uncertainty of unimodal identity decisions. Multiple sensing modalities provide robustness and adaptability to degradation in any single mode, including (1) visual occlusion, (2) unfavorable lighting conditions, and (3) audible noise.
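One common way to realize such fusion is a reliability-weighted log-opinion pool over per-modality match scores. The following is a hedged sketch of that baseline, not the project's actual fusion rule; `reliabilities` stands in for per-mode noise estimates (e.g., lowered for an occluded camera or a noisy microphone):

```python
import math

def fuse_match_scores(scores, reliabilities):
    """Combine per-modality match probabilities into one identity score.

    scores:        dict mode -> P(match) in (0, 1)
    reliabilities: dict mode -> weight in [0, 1]; 0 drops that mode entirely.
    Weighted log-opinion pool (geometric mean), a generic fusion baseline.
    """
    num = sum(reliabilities[m] * math.log(scores[m]) for m in scores)
    den = sum(reliabilities.values())
    # With no reliable mode at all, fall back to an uninformative score.
    return math.exp(num / den) if den > 0 else 0.5
```

Down-weighting an occluded visual channel or a noisy audio channel then degrades the fused decision gracefully instead of corrupting it.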


For more information, see related publications.