Biometric Recognition

The Experts below are selected from a list of 6024 Experts worldwide, ranked by the ideXlab platform.

Dimitris Hatzinakos – One of the best experts on this subject based on the ideXlab platform.

  • Transient Otoacoustic Emissions for Biometric Recognition
    2012 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2012
    Co-Authors: Foteini Agrafioti, Siyuan Wang, Dimitris Hatzinakos

    Abstract:

    This paper investigates the potential use of Transient Otoacoustic Emissions (TEOAE) for Biometric Recognition. Multiresolution decomposition of the TEOAE signal is performed with a modified Bivariate Empirical Mode Decomposition (BEMD) combined with an auditory model. Matching scores are computed by combining ranked correlations across the decomposition levels. The recognition rate with recordings from the left ear is 96.30%, and it improves to 98.15% when the matching scores are fused with information from the right ear.
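
    As a rough illustration of this kind of level-wise matching (not the authors' exact BEMD/auditory-model pipeline), the sketch below correlates a probe recording with an enrolled template at each decomposition level, ranks the per-level correlations, and combines them into one matching score. The decomposition itself, the rank-based weighting and the array shapes are assumptions made for the example.

      import numpy as np

      def level_correlations(probe_levels, gallery_levels):
          """Correlation between probe and template at each decomposition level.

          Both inputs are arrays of shape (n_levels, n_samples), e.g. intrinsic
          mode functions standing in for the modified BEMD output of the paper.
          """
          return np.array([np.corrcoef(p, g)[0, 1]
                           for p, g in zip(probe_levels, gallery_levels)])

      def combined_match_score(probe_levels, gallery_levels):
          """Rank the per-level correlations and combine them with rank weights.

          The paper's exact weighting is not reproduced here, so a simple linear
          rank weighting (higher correlation -> larger weight) is assumed.
          """
          corrs = level_correlations(probe_levels, gallery_levels)
          weights = np.empty_like(corrs)
          weights[np.argsort(corrs)] = np.arange(1, corrs.size + 1)
          weights /= weights.sum()
          return float(np.dot(weights, corrs))

    A probe would be scored against every enrolled template with combined_match_score, and the identity with the highest score accepted if it clears a decision threshold.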

  • ECG in Biometric Recognition: Time Dependency and Application Challenges
    2011
    Co-Authors: Dimitris Hatzinakos, Foteini Agrafioti

    Abstract:

    As Biometric Recognition becomes increasingly popular, the fear of circumvention, obfuscation and replay attacks is a rising concern. Traditional Biometric modalities such as the face, the fingerprint or the iris are vulnerable to such attacks, which defeats the purpose of Biometric Recognition, namely to employ physiological characteristics for secure identity Recognition.
    This thesis advocates the use of the electrocardiogram (ECG) signal for human identity Recognition. The ECG is a vital signal of the human body, and as such it naturally provides liveness detection, robustness to attacks, universality and permanence. In addition, the ECG inherently satisfies uniqueness requirements, because the morphology of the signal is highly dependent on the particular anatomical and geometrical characteristics of the myocardium.
    However, the ECG is a continuous signal, and this presents a great challenge to Biometric Recognition. With this modality, instantaneous variability is expected even within recordings of the same individual, due to factors such as recording noise and physical or psychological activity. While the noise and the heart rate variations caused by physical exercise can be addressed with appropriate feature extraction, the effects of emotional activity on the ECG signal are more obscure.
    This thesis deals with this problem from an affective computing point of view. First, the psychological conditions that affect the ECG and endanger Biometric accuracy are identified. Experimental setups that are targeted to provoke active and passive arousal as well as positive and negative valence are presented. The empirical mode decomposition (EMD) is used as the basis for the detection of emotional patterns, after adapting the algorithm to the particular needs of the ECG signal. Instantaneous frequency and oscillation features are used for state classification in various clustering setups. The result of this analysis is the designation of psychological states which affect the ECG signal to an extent that Biometric matching may not be feasible. An updating methodology is proposed to address this problem, wherein the signal is monitored for instantaneous changes that require the design of a new template.
    Furthermore, this thesis presents the enhanced Autocorrelation/Linear Discriminant Analysis (AC/LDA) algorithm for feature extraction, which incorporates a signal quality assessment module based on the periodicity transform. Three deployment scenarios are considered, namely a) small-scale Recognition systems, b) large-scale Recognition systems, and c) Recognition in distributed systems. The enhanced AC/LDA algorithm is adapted to each setting, and the advantages and disadvantages of each scenario are discussed.
    Overall, this thesis attempts to provide the necessary algorithmic and practical framework for the real-life deployment of the ECG signal in Biometric Recognition.
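
    To make the AC/LDA idea above concrete, here is a minimal sketch, assuming windowed ECG segments, a fixed number of autocorrelation lags and scikit-learn's LDA; it is not the thesis implementation and omits the periodicity-transform quality check and template updating.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def ac_features(ecg_window, n_lags=64):
          """Normalized autocorrelation of one ECG window, truncated to n_lags lags."""
          x = ecg_window - np.mean(ecg_window)
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags only
          return ac[:n_lags] / ac[0]                          # normalize by the zero lag

      def fit_ac_lda(windows, labels, n_lags=64):
          """Fit the LDA projection on autocorrelation features of labeled windows.

          windows: iterable of 1-D ECG segments, one subject label per segment.
          """
          X = np.vstack([ac_features(w, n_lags) for w in windows])
          lda = LinearDiscriminantAnalysis()
          lda.fit(X, labels)
          return lda

      # At recognition time a new window is mapped through ac_features and either
      # classified with lda.predict or compared to enrolled templates in the LDA
      # subspace using a distance threshold.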

Foteini Agrafioti – One of the best experts on this subject based on the ideXlab platform.

  • Transient Otoacoustic Emissions for Biometric Recognition
    2012 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2012
    Co-Authors: Foteini Agrafioti, Siyuan Wang, Dimitris Hatzinakos

    Abstract:

    This paper investigates the potential use of Transient Otoacoustic Emissions (TEOAE) for Biometric Recognition. Multiresolution decomposition of the TEOAE signal is performed with a modified Bivariate Empirical Mode Decomposition (BEMD) combined with an auditory model. Matching scores are computed by combining ranked correlations across the decomposition levels. The recognition rate with recordings from the left ear is 96.30%, and it improves to 98.15% when the matching scores are fused with information from the right ear.
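
    The jump from 96.30% to 98.15% reported above comes from fusing match scores across the two ears. Below is a minimal sketch of score-level fusion, assuming min-max normalization and an equal-weight sum; the paper's actual fusion rule is not reproduced here.

      import numpy as np

      def fuse_ear_scores(left_scores, right_scores, w_left=0.5):
          """Score-level fusion of left- and right-ear match scores.

          left_scores / right_scores: match scores of one probe against the same
          gallery, normalized per ear before a weighted sum (a common fusion
          rule, assumed here for illustration).
          """
          def minmax(s):
              s = np.asarray(s, dtype=float)
              rng = s.max() - s.min()
              return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

          return w_left * minmax(left_scores) + (1.0 - w_left) * minmax(right_scores)

      # The claimed identity is then the gallery entry with the largest fused score.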

  • ECG in Biometric Recognition: Time Dependency and Application Challenges
    2011
    Co-Authors: Dimitris Hatzinakos, Foteini Agrafioti

    Abstract:

    As Biometric Recognition becomes increasingly popular, the fear of circumvention, obfuscation and replay attacks is a rising concern. Traditional Biometric modalities such as the face, the fingerprint or the iris are vulnerable to such attacks, which defeats the purpose of Biometric Recognition, namely to employ physiological characteristics for secure identity Recognition.
    This thesis advocates the use of the electrocardiogram (ECG) signal for human identity Recognition. The ECG is a vital signal of the human body, and as such it naturally provides liveness detection, robustness to attacks, universality and permanence. In addition, the ECG inherently satisfies uniqueness requirements, because the morphology of the signal is highly dependent on the particular anatomical and geometrical characteristics of the myocardium.
    However, the ECG is a continuous signal, and this presents a great challenge to Biometric Recognition. With this modality, instantaneous variability is expected even within recordings of the same individual, due to factors such as recording noise and physical or psychological activity. While the noise and the heart rate variations caused by physical exercise can be addressed with appropriate feature extraction, the effects of emotional activity on the ECG signal are more obscure.
    This thesis deals with this problem from an affective computing point of view. First, the psychological conditions that affect the ECG and endanger Biometric accuracy are identified. Experimental setups that are targeted to provoke active and passive arousal as well as positive and negative valence are presented. The empirical mode decomposition (EMD) is used as the basis for the detection of emotional patterns, after adapting the algorithm to the particular needs of the ECG signal. Instantaneous frequency and oscillation features are used for state classification in various clustering setups. The result of this analysis is the designation of psychological states which affect the ECG signal to an extent that Biometric matching may not be feasible. An updating methodology is proposed to address this problem, wherein the signal is monitored for instantaneous changes that require the design of a new template.
    Furthermore, this thesis presents the enhanced Autocorrelation/Linear Discriminant Analysis (AC/LDA) algorithm for feature extraction, which incorporates a signal quality assessment module based on the periodicity transform. Three deployment scenarios are considered, namely a) small-scale Recognition systems, b) large-scale Recognition systems, and c) Recognition in distributed systems. The enhanced AC/LDA algorithm is adapted to each setting, and the advantages and disadvantages of each scenario are discussed.
    Overall, this thesis attempts to provide the necessary algorithmic and practical framework for the real-life deployment of the ECG signal in Biometric Recognition.
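
    As a hedged sketch of the template-updating idea mentioned above, the snippet below monitors how far incoming ECG feature vectors drift from the enrolled template and flags re-enrollment when the drift persists. The cosine distance and both thresholds are illustrative assumptions, not the procedure from the thesis.

      import numpy as np

      def needs_new_template(template, recent_features, dist_thresh=0.35, frac_thresh=0.6):
          """Flag re-enrollment when most recent windows drift away from the template.

          template:        1-D feature vector from enrollment (e.g. AC/LDA features).
          recent_features: 2-D array with one feature vector per recent ECG window.
          """
          t = template / (np.linalg.norm(template) + 1e-12)
          drift = []
          for f in recent_features:
              f = f / (np.linalg.norm(f) + 1e-12)
              drift.append(1.0 - float(np.dot(t, f)))     # cosine distance to template
          drift = np.array(drift)
          # Re-enroll if a large fraction of recent windows exceed the drift threshold.
          return float(np.mean(drift > dist_thresh)) > frac_thresh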

Rafiqul Islam – One of the best experts on this subject based on the ideXlab platform.

  • Robust ear Biometric Recognition using neural network
    2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2017
    Co-Authors: Mozammel Chowdhury, Rafiqul Islam

    Abstract:

    Ear-based Biometric Recognition has been gaining popularity in recent years because the ear is unique to each individual and its pattern is generally unaffected by aging. This paper proposes a robust and efficient ear Biometric Recognition scheme that uses local features of the human ear together with a neural network. The scheme first estimates the ear region from the facial image and then extracts edge features from the detected ear. Local edge features are used because they are robust to illumination changes and occlusion. A neural network recognizes the user by matching the extracted ear features against a feature database. The performance of the approach is tested on different datasets and compared with well-known existing methods. Experimental evaluation demonstrates the robustness and effectiveness of the proposed scheme over similar techniques.
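
    As a hedged illustration of the kind of pipeline the paper describes (edge features from a cropped ear image fed to a neural network), the sketch below extracts a Sobel edge-magnitude histogram and trains a small multilayer perceptron. The feature choice, histogram size and scikit-learn MLP are assumptions, and ear detection from the face image is not shown.

      import numpy as np
      from scipy.ndimage import sobel
      from sklearn.neural_network import MLPClassifier

      def edge_histogram(ear_image, bins=64):
          """Edge-magnitude histogram of a grayscale ear crop (2-D float array)."""
          gx, gy = sobel(ear_image, axis=1), sobel(ear_image, axis=0)
          mag = np.hypot(gx, gy)
          hist, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-12))
          return hist / (hist.sum() + 1e-12)              # normalized feature vector

      def train_ear_recognizer(ear_images, labels):
          """Train a small MLP on edge histograms of labeled ear crops."""
          X = np.vstack([edge_histogram(img) for img in ear_images])
          clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
          clf.fit(X, labels)
          return clf

      # clf.predict(edge_histogram(new_ear)[None, :]) then returns the best-matching user.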