Face Matching

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 24,825 Experts worldwide, ranked by the ideXlab platform

Kevin W. Bowyer - One of the best experts on this subject based on the ideXlab platform.

  • identity document to selfie Face Matching across adolescence
    2020 IEEE International Joint Conference on Biometrics (IJCB), 2020
    Co-Authors: Vitor Albiero, Nisha Srinivas, Esteban Villalobos, Roberto Rosenthal, Domingo Mery, Karl Ricanek, Jorge Perez-facuse, Kevin W. Bowyer
    Abstract:

    Matching live images (“selfies”) to images from ID documents is a problem that can arise in various applications. A challenging instance of the problem arises when the Face image on the ID document is from early adolescence and the live image is from later adolescence. We explore this problem using a private dataset called the Chilean Young Adult (CHIYA) dataset, where we match live Face images taken at age 18–19 to Face images on scanned ID documents created at ages 9 to 18. State-of-the-art deep learning Face matchers (e.g., ArcFace) have relatively poor accuracy for document-to-selfie Face Matching. To achieve higher accuracy, we fine-tune the best available open-source model with triplet loss for few-shot learning. Experiments show that our approach achieves higher accuracy than the DocFace+ model recently developed for this problem. Our fine-tuned model was able to improve the true acceptance rate for the most difficult (largest age span) subset from 62.92% to 96.67% at a false acceptance rate of 0.01%. Our fine-tuned model is available for use by other researchers.
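
The fine-tuning objective mentioned above can be sketched as a standard triplet loss on L2-normalized embeddings. This is a minimal numpy illustration, not the authors' training code; the margin value and the toy embeddings are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.35):
    """Hinge-style triplet loss on L2-normalized embeddings.

    The margin (0.35) is an illustrative choice, not the paper's value.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # anchor-negative distance
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy batch: anchor = document-photo embedding, positive = selfie of the
# same person, negative = selfie of a different person.
rng = np.random.default_rng(0)
anchor = l2norm(rng.normal(size=(4, 128)))
positive = l2norm(anchor + 0.05 * rng.normal(size=(4, 128)))
negative = l2norm(rng.normal(size=(4, 128)))

loss = triplet_loss(anchor, positive, negative)
```

Minimizing this loss pulls document and selfie embeddings of the same person together while pushing different identities at least `margin` apart, which is the few-shot fine-tuning signal the abstract describes.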

  • IJCB - Identity Document to Selfie Face Matching Across Adolescence
    2020 IEEE International Joint Conference on Biometrics (IJCB), 2020
    Co-Authors: Vitor Albiero, Nisha Srinivas, Esteban Villalobos, Jorge Perez-facuse, Roberto Rosenthal, Domingo Mery, Karl Ricanek, Kevin W. Bowyer
    Abstract:

    Matching live images (“selfies”) to images from ID documents is a problem that can arise in various applications. A challenging instance of the problem arises when the Face image on the ID document is from early adolescence and the live image is from later adolescence. We explore this problem using a private dataset called the Chilean Young Adult (CHIYA) dataset, where we match live Face images taken at age 18–19 to Face images on scanned ID documents created at ages 9 to 18. State-of-the-art deep learning Face matchers (e.g., ArcFace) have relatively poor accuracy for document-to-selfie Face Matching. To achieve higher accuracy, we fine-tune the best available open-source model with triplet loss for few-shot learning. Experiments show that our approach achieves higher accuracy than the DocFace+ model recently developed for this problem. Our fine-tuned model was able to improve the true acceptance rate for the most difficult (largest age span) subset from 62.92% to 96.67% at a false acceptance rate of 0.01%. Our fine-tuned model is available for use by other researchers.

  • Identity Document to Selfie Face Matching Across Adolescence
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Vitor Albiero, Nisha Srinivas, Esteban Villalobos, Jorge Perez-facuse, Roberto Rosenthal, Domingo Mery, Karl Ricanek, Kevin W. Bowyer
    Abstract:

    Matching live images (“selfies”) to images from ID documents is a problem that can arise in various applications. A challenging instance of the problem arises when the Face image on the ID document is from early adolescence and the live image is from later adolescence. We explore this problem using a private dataset called the Chilean Young Adult (CHIYA) dataset, where we match live Face images taken at age 18–19 to Face images on ID documents created at ages 9 to 18. State-of-the-art deep learning Face matchers (e.g., ArcFace) have relatively poor accuracy for document-to-selfie Face Matching. To achieve higher accuracy, we fine-tune the best available open-source model with triplet loss for few-shot learning. Experiments show that our approach achieves higher accuracy than the DocFace+ model recently developed for this problem. Our fine-tuned model was able to improve the true acceptance rate for the most difficult (largest age span) subset from 62.92% to 96.67% at a false acceptance rate of 0.01%. Our fine-tuned model is available for use by other researchers.

  • a sparse representation approach to Face Matching across plastic surgery
    Workshop on Applications of Computer Vision, 2012
    Co-Authors: G Aggarwal, Soma Biswas, Patrick J Flynn, Kevin W. Bowyer
    Abstract:

    Plastic surgery procedures can significantly alter facial appearance, thereby posing a serious challenge even to state-of-the-art Face Matching algorithms. In this paper, we propose a novel approach to address the challenges involved in automatic Matching of Faces across plastic surgery variations. In the proposed formulation, part-wise facial characterization is combined with the recently popular sparse representation approach to address these challenges. The sparse representation approach requires several images per subject in the gallery to function effectively, which is often not available in practice, including the problem we address in this work. The proposed formulation utilizes images from sequestered non-gallery subjects with similar local facial characteristics to fulfill this requirement. Extensive experiments conducted on a recently introduced plastic surgery database [17] consisting of 900 subjects highlight the effectiveness of the proposed approach.
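
The sparse-representation step described above can be sketched as follows: express the probe as a sparse combination of gallery (and auxiliary) images, then assign the identity whose atoms yield the smallest reconstruction residual. This is a minimal numpy sketch using greedy orthogonal matching pursuit; the dictionary, sparsity level, and toy data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick up to k dictionary atoms.

    D: (d, n) dictionary with unit-norm columns; y: (d,) probe vector.
    """
    residual = y.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Sparse-representation classification: smallest class-wise residual wins."""
    x = omp(D, y, k)
    labels = np.asarray(labels)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - D[:, mask] @ x[mask])
        if res < best_res:
            best, best_res = int(c), res
    return best

# Toy gallery: two identities, three images each, plus a probe of identity 0.
rng = np.random.default_rng(1)
d = 32
u = rng.normal(size=d); u /= np.linalg.norm(u)
v = rng.normal(size=d); v -= (v @ u) * u; v /= np.linalg.norm(v)

def noisy(base):
    w = base + 0.1 * rng.normal(size=d)
    return w / np.linalg.norm(w)

D = np.stack([noisy(u), noisy(u), noisy(u), noisy(v), noisy(v), noisy(v)], axis=1)
labels = [0, 0, 0, 1, 1, 1]
probe = u + 0.05 * rng.normal(size=d); probe /= np.linalg.norm(probe)
pred = src_classify(D, labels, probe)
```

The paper's contribution of borrowing sequestered non-gallery subjects corresponds to padding `D` with extra columns when the gallery has too few images per identity.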

  • WACV - A sparse representation approach to Face Matching across plastic surgery
    2012 IEEE Workshop on the Applications of Computer Vision (WACV), 2012
    Co-Authors: G Aggarwal, Soma Biswas, Patrick J Flynn, Kevin W. Bowyer
    Abstract:

    Plastic surgery procedures can significantly alter facial appearance, thereby posing a serious challenge even to state-of-the-art Face Matching algorithms. In this paper, we propose a novel approach to address the challenges involved in automatic Matching of Faces across plastic surgery variations. In the proposed formulation, part-wise facial characterization is combined with the recently popular sparse representation approach to address these challenges. The sparse representation approach requires several images per subject in the gallery to function effectively, which is often not available in practice, including the problem we address in this work. The proposed formulation utilizes images from sequestered non-gallery subjects with similar local facial characteristics to fulfill this requirement. Extensive experiments conducted on a recently introduced plastic surgery database [17] consisting of 900 subjects highlight the effectiveness of the proposed approach.

A. Mike Burton - One of the best experts on this subject based on the ideXlab platform.

  • Multiple-image arrays in Face Matching tasks with and without memory.
    Cognition, 2021
    Co-Authors: Kay L. Ritchie, Robin S. S. Kramer, Mila Mileva, Adam Sandford, A. Mike Burton
    Abstract:

    Previous research has shown that exposure to within-person variability facilitates Face learning. A different body of work has examined potential benefits of providing multiple images in Face Matching tasks. Viewers are asked to judge whether a target Face matches a single Face image (as when checking photo-ID) or multiple Face images of the same person. The evidence here is less clear, with some studies finding a small multiple-image benefit, and others finding no advantage. In four experiments, we address this discrepancy in the benefits of multiple images from learning and Matching studies. We show that multiple-image arrays only facilitate Face Matching when arrays precede targets. Unlike simultaneous Face Matching tasks, sequential Matching and learning tasks involve memory and require abstraction of a stable representation of the Face from the array, for subsequent comparison with a target. Our results show that benefits from multiple-image arrays occur only when this abstraction is required, and not when array and target images are available at once. These studies reconcile apparent differences between Face learning and Face Matching and provide a theoretical framework for the study of within-person variability in Face perception.

  • Steps Towards a Cognitive Theory of Unfamiliar Face Matching
    Forensic Face Matching, 2021
    Co-Authors: Markus Bindemann, A. Mike Burton
    Abstract:

    The visual comparison of unfamiliar Faces—or ‘Face Matching’—is utilized widely for person identification in applied settings and has generated substantial research interest in psychology, but a cognitive theory to explain how observers perform this task does not exist. This chapter outlines issues of importance to support the development of a cognitive account of unfamiliar Face Matching. Characteristics of the Face, such as within-person variability and between-person similarity in appearance, are considered as the visual input upon which identification must build. The cognitive mechanisms that observers may bring to bear on Faces during identity comparison are analysed, focusing on attention, perception, evaluation, and decision processes, including sources of individual differences at each of these stages. Finally, the role of different experimental and occupational contexts in understanding Face Matching and for optimizing theory development is discussed.

  • Identity Documents Bias Face Matching.
    Perception, 2019
    Co-Authors: Xinran Feng, A. Mike Burton
    Abstract:

    Unfamiliar Face Matching is a difficult task. In typical experiments, viewers see isolated Face pairs and have to decide whether they show the same or different people. Recent research shows that e...

  • Smiles in Face Matching: Idiosyncratic information revealed through a smile improves unfamiliar Face Matching performance.
    British Journal of Psychology, 2018
    Co-Authors: Mila Mileva, A. Mike Burton
    Abstract:

    Unfamiliar Face Matching is a surprisingly difficult task, yet we often rely on people’s Matching decisions in applied settings (e.g., border control). Most attempts to improve accuracy (including training and image manipulation) have had very limited success. In a series of studies, we demonstrate that using smiling rather than neutral pairs of images brings about significant improvements in Face Matching accuracy. This is true for both match and mismatch trials, implying that the information provided through a smile helps us detect images of the same identity as well as distinguishing between images of different identities. Study 1 compares Matching performance when images in the Face pair display either an open-mouth smile or a neutral expression. In Study 2, we add an intermediate level, closed-mouth smile, to identify the effect of teeth being exposed, and Study 3 explores Face Matching accuracy when only information about the lower part of the Face is available. Results demonstrate that an open-mouth smile changes the Face in an idiosyncratic way which aids Face Matching decisions. Such findings have practical implications for Matching in the applied context where we typically use neutral images to represent ourselves in official documents.

  • Face Matching impairment in developmental prosopagnosia
    Quarterly Journal of Experimental Psychology, 2016
    Co-Authors: David White, A. Mike Burton, Davide Rivolta, Shahd Al-janabi, Romina Palermo
    Abstract:

    Developmental prosopagnosia (DP) is commonly referred to as ‘Face blindness’, a term that implies a perceptual basis to the condition. However, DP presents as a deficit in Face recognition and is diagnosed using memory-based tasks. Here, we test Face identification ability in six people with DP, who are severely impaired on Face memory tasks, using tasks that do not rely on memory. First, we compared DP to control participants on a standardised test of unfamiliar Face Matching using facial images taken on the same day and under standardised studio conditions (Glasgow Face Matching Test; GFMT). DP participants did not differ from normative accuracy scores on the GFMT. Second, we tested Face Matching performance on a test created using images that were sourced from the Internet and so vary substantially due to changes in viewing conditions and in a person’s appearance (Local Heroes Test; LHT). DP participants show significantly poorer Matching accuracy on the LHT relative to control participants, for both unfamiliar and familiar Face Matching. Interestingly, this deficit is specific to ‘match’ trials, suggesting that people with DP may have particular difficulty in Matching images of the same person that contain natural day-to-day variations in appearance. We discuss these results in the broader context of individual differences in Face Matching ability.

Anil K. Jain - One of the best experts on this subject based on the ideXlab platform.

  • Face Matching and retrieval using soft biometrics
    IEEE Transactions on Information Forensics and Security, 2010
    Co-Authors: Unsang Park, Anil K. Jain
    Abstract:

    Soft biometric traits embedded in a Face (e.g., gender and facial marks) are ancillary information and are not fully distinctive by themselves in Face-recognition tasks. However, this information can be explicitly combined with the Face Matching score to improve the overall Face-recognition accuracy. Moreover, in certain application domains, e.g., visual surveillance, where a Face image is occluded or is captured in off-frontal pose, soft biometric traits can provide even more valuable information for Face Matching or retrieval. Facial marks can also be useful to differentiate identical twins whose global facial appearances are very similar. The similarities found from soft biometrics can also be useful as a source of evidence in courts of law because they are more descriptive than the numerical Matching scores generated by a traditional Face matcher. We propose to utilize demographic information (e.g., gender and ethnicity) and facial marks (e.g., scars, moles, and freckles) for improving Face image Matching and retrieval performance. An automatic facial mark detection method has been developed that uses (1) the active appearance model for locating primary facial features (e.g., eyes, nose, and mouth), (2) Laplacian-of-Gaussian blob detection, and (3) morphological operators. Experimental results based on the FERET database (426 images of 213 subjects) and two mugshot databases from the forensic domain (1,225 images of 671 subjects and 10,000 images of 10,000 subjects, respectively) show that the use of soft biometric traits is able to improve the Face-recognition performance of a state-of-the-art commercial matcher.
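
The score-level combination the abstract refers to can be illustrated with a simple weighted sum rule: normalize the primary Face Matching scores and the soft-biometric (e.g., facial-mark) similarities, then fuse them. The weight and the toy scores below are illustrative assumptions, not values from the paper.

```python
def minmax(scores):
    """Min-max normalize a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(face_scores, soft_scores, w=0.7):
    """Weighted sum-rule fusion; w weights the primary Face matcher."""
    f, m = minmax(face_scores), minmax(soft_scores)
    return [w * a + (1 - w) * b for a, b in zip(f, m)]

# The Face matcher ties candidates 0 and 1; facial-mark similarity
# breaks the tie in favour of candidate 1.
face_scores = [0.82, 0.82, 0.40]
mark_scores = [0.20, 0.90, 0.10]
fused = fuse(face_scores, mark_scores)
best = max(range(len(fused)), key=fused.__getitem__)
```

This captures the abstract's point: soft biometrics are not distinctive alone (the `w < 1` weighting keeps them ancillary), but they can resolve candidates the primary matcher cannot separate.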

  • deformation modeling for robust 3d Face Matching
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
    Co-Authors: Anil K. Jain
    Abstract:

    Face recognition based on 3D surface Matching is promising for overcoming some of the limitations of current 2D image-based Face recognition systems. The 3D shape is generally invariant to pose and lighting changes, but not to nonrigid facial movement such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We propose a facial surface modeling and Matching scheme to match 2.5D facial scans in the presence of both nonrigid deformations and pose changes (multiview) to a stored 3D Face model with neutral expression. A hierarchical geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A user-specific (3D) deformable model is built for each subject in the gallery with respect to the control group by combining the templates with synthesized deformations. By fitting this generative deformable model to a test scan, the proposed approach is able to handle expressions and pose changes simultaneously. A fully automatic and prototypic deformable-model-based 3D Face Matching system has been developed. Experimental results demonstrate that the proposed deformation modeling scheme increases 3D Face Matching accuracy, in comparison to Matching with 3D neutral models, by 7 and 10 percentage points, respectively, on a subset of the FRGC v2.0 3D benchmark and the MSU multiview 3D Face database with expression variations.

  • deformation analysis for 3d Face Matching
    Workshop on Applications of Computer Vision, 2005
    Co-Authors: Anil K. Jain
    Abstract:

    Current two-dimensional image-based Face recognition systems encounter difficulties with large facial appearance variations due to pose, illumination and expression changes. Utilizing 3D information of human Faces is promising for handling pose and lighting variations. While the 3D shape of a Face does not change due to head pose (rigid) and lighting changes, it is not invariant to non-rigid facial movement and evolution, such as expressions and aging effects. We propose a Face surface Matching framework to take into account both rigid and non-rigid variations to match a 2.5D Face image to a 3D Face model. The rigid registration is achieved by a modified Iterative Closest Point (ICP) algorithm. The thin plate spline (TPS) model is applied to estimate the deformation displacement vector field, which is used to represent the non-rigid deformation. For the purpose of Face Matching, the non-rigid deformations from different sources are identified, which is formulated as a two-class classification problem: intra-subject deformation vs. inter-subject deformation. The deformation classification results are integrated with the Matching distances to make the final decision. Experimental results on a database containing 100 3D Face models and 98 2.5D scans with smiling expressions show that the number of errors is reduced from 28 to 18.
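
The rigid-registration step (a modified ICP in the paper) can be sketched with plain point-to-point ICP: alternate nearest-neighbour correspondence with a closed-form Kabsch alignment. This is a generic numpy sketch exercised on a synthetic rotation, not the authors' modified algorithm.

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form least-squares rotation R and translation t with R @ p + t ~ q."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, Qc - R @ Pc

def icp(src, dst, iters=30):
    """Basic point-to-point ICP with brute-force nearest neighbours."""
    P = src.copy()
    for _ in range(iters):
        d2 = ((P[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]   # nearest dst point for each P point
        R, t = kabsch(P, matched)
        P = P @ R.T + t
    return P

# Synthetic check: rotate and translate a cloud, then recover the alignment.
rng = np.random.default_rng(2)
src = rng.uniform(-1, 1, size=(60, 3))
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([0.05, -0.03, 0.02])

err_before = np.linalg.norm(src - dst, axis=1).mean()
err_after = np.linalg.norm(icp(src, dst) - dst, axis=1).mean()
```

In the paper the residual after this rigid step is what the TPS model then explains as a non-rigid deformation field; the sketch covers only the rigid part.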

  • WACV/MOTION - Deformation Analysis for 3D Face Matching
    2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV MOTION'05) - Volume 1, 2005
    Co-Authors: Anil K. Jain
    Abstract:

    Current two-dimensional image-based Face recognition systems encounter difficulties with large facial appearance variations due to pose, illumination and expression changes. Utilizing 3D information of human Faces is promising for handling pose and lighting variations. While the 3D shape of a Face does not change due to head pose (rigid) and lighting changes, it is not invariant to non-rigid facial movement and evolution, such as expressions and aging effects. We propose a Face surface Matching framework to take into account both rigid and non-rigid variations to match a 2.5D Face image to a 3D Face model. The rigid registration is achieved by a modified Iterative Closest Point (ICP) algorithm. The thin plate spline (TPS) model is applied to estimate the deformation displacement vector field, which is used to represent the non-rigid deformation. For the purpose of Face Matching, the non-rigid deformations from different sources are identified, which is formulated as a two-class classification problem: intra-subject deformation vs. inter-subject deformation. The deformation classification results are integrated with the Matching distances to make the final decision. Experimental results on a database containing 100 3D Face models and 98 2.5D scans with smiling expressions show that the number of errors is reduced from 28 to 18.

  • ICME (2) - Semantic Face Matching
    Proceedings of the IEEE International Conference on Multimedia and Expo (ICME)
    Co-Authors: R.-l. Hsu, Anil K. Jain
    Abstract:

    The need for efficient methods for archiving and retrieving personal digital photo collections arises due to a significant increase in the number of digital images and videos that people have to manage. We propose a semantic Face Matching approach for managing consumer photographs based on semantic Face attributes. These attributes are organized as a semantic Face graph (derived from a 3D generic Face model) containing facial components such as eyes and mouth in the spatial domain. We align the semantic facial components in the semantic Face graph with the extracted facial features in a given image. Aligned facial components are transformed to a feature space spanned by Fourier descriptors of facial components for Face Matching. The semantic Face graph allows Face Matching based on selected facial components. Our experimental results demonstrate that the proposed semantic representation of the Face is useful for Face Matching and visualization (e.g., generating facial caricatures).
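
The component-shape representation described above can be sketched with classical Fourier descriptors: treat a component's boundary as a complex signal, take its FFT, and normalize out translation and scale. The contours below (a circle vs. an ellipse) are toy stand-ins for extracted facial components, not the paper's data.

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Translation- and scale-invariant Fourier descriptor magnitudes.

    contour: (N, 2) ordered boundary points of a closed component.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    c = np.fft.fft(z)
    c[0] = 0.0                               # drop DC -> translation invariance
    c = c / np.abs(c[1])                     # divide by |c1| -> scale invariance
    # keep magnitudes of the low positive and negative frequencies
    return np.abs(np.concatenate([c[1:n + 1], c[-n:]]))

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([np.cos(t), 0.5 * np.sin(t)], axis=1)

fd_circle = fourier_descriptors(circle)
fd_moved = fourier_descriptors(2.0 * circle + np.array([3.0, -1.0]))
fd_ellipse = fourier_descriptors(ellipse)
```

The same shape at a different position and scale produces identical descriptors, while a genuinely different shape does not, which is what makes such descriptors usable for component-wise Face Matching.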

G Aggarwal - One of the best experts on this subject based on the ideXlab platform.

  • a sparse representation approach to Face Matching across plastic surgery
    Workshop on Applications of Computer Vision, 2012
    Co-Authors: G Aggarwal, Soma Biswas, Patrick J Flynn, Kevin W. Bowyer
    Abstract:

    Plastic surgery procedures can significantly alter facial appearance, thereby posing a serious challenge even to state-of-the-art Face Matching algorithms. In this paper, we propose a novel approach to address the challenges involved in automatic Matching of Faces across plastic surgery variations. In the proposed formulation, part-wise facial characterization is combined with the recently popular sparse representation approach to address these challenges. The sparse representation approach requires several images per subject in the gallery to function effectively, which is often not available in practice, including the problem we address in this work. The proposed formulation utilizes images from sequestered non-gallery subjects with similar local facial characteristics to fulfill this requirement. Extensive experiments conducted on a recently introduced plastic surgery database [17] consisting of 900 subjects highlight the effectiveness of the proposed approach.

  • WACV - A sparse representation approach to Face Matching across plastic surgery
    2012 IEEE Workshop on the Applications of Computer Vision (WACV), 2012
    Co-Authors: G Aggarwal, Soma Biswas, Patrick J Flynn, Kevin W. Bowyer
    Abstract:

    Plastic surgery procedures can significantly alter facial appearance, thereby posing a serious challenge even to state-of-the-art Face Matching algorithms. In this paper, we propose a novel approach to address the challenges involved in automatic Matching of Faces across plastic surgery variations. In the proposed formulation, part-wise facial characterization is combined with the recently popular sparse representation approach to address these challenges. The sparse representation approach requires several images per subject in the gallery to function effectively, which is often not available in practice, including the problem we address in this work. The proposed formulation utilizes images from sequestered non-gallery subjects with similar local facial characteristics to fulfill this requirement. Extensive experiments conducted on a recently introduced plastic surgery database [17] consisting of 900 subjects highlight the effectiveness of the proposed approach.

David A Koolen - One of the best experts on this subject based on the ideXlab platform.

  • computer Face Matching technology using two-dimensional photographs accurately matches the facial gestalt of unrelated individuals with the same syndromic form of intellectual disability
    BMC Biotechnology, 2017
    Co-Authors: Tracy Dudding-Byth, Anne Baxter, Elizabeth G Holliday, Anna Hackett, Sheridan O'Donnell, Susan M White, John Attia, Han G Brunner, Bert B A De Vries, David A Koolen
    Abstract:

    Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second individual, or a cluster of individuals, with an overlapping phenotype or similar facial gestalt. Using computer Face-Matching technology, we report an automated approach to Matching the Faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, the computer Face-Matching technology correctly identified a top match, at least one correct match in the top five, and at least one in the top 10 more often than expected by chance (P < 0.00001). There was low agreement between the technology and clinicians, with higher accuracy of the technology when results were discordant (P < 0.01) for all syndromes except Kabuki syndrome. Although the accuracy of the computer Face-Matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of Face-Matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing of the known developmental disorder genes.
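
The top-match / top-five / top-10 evaluation used above is a standard rank-k retrieval metric and can be sketched directly. The small similarity matrix below is fabricated for illustration only, not data from the study.

```python
import numpy as np

def top_k_rates(sim, labels, ks=(1, 5, 10)):
    """Fraction of probes whose correct match appears in the top k results.

    sim[i, j]: similarity of probe i to gallery image j (self-matches
    assumed already excluded, as in leave-one-out evaluation).
    labels[i]: index of the correct gallery image for probe i.
    """
    order = np.argsort(-sim, axis=1)          # gallery indices, best first
    return {k: float(np.mean([labels[i] in order[i, :k]
                              for i in range(len(labels))]))
            for k in ks}

# Toy example: three probes whose correct matches sit at ranks 1, 4, and 8.
sim = np.zeros((3, 10))
sim[0, 4] = 1.0
sim[1, [0, 1, 3, 2]] = [0.9, 0.8, 0.7, 0.5]
sim[2, :7] = np.linspace(0.9, 0.3, 7)
sim[2, 9] = 0.1
rates = top_k_rates(sim, labels=[4, 2, 9])
```

Comparing each rate against the rate expected under random ranking (k divided by the gallery size) is what supports the "more often than expected by chance" claim in the abstract.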
