Transformation Model

The experts below are selected from a list of 324,492 experts worldwide, ranked by the ideXlab platform.

Yasushi Yagi - One of the best experts on this subject based on the ideXlab platform.

  • View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition
    IEEE Transactions on Cybernetics, 2016
    Co-Authors: Daigo Muramatsu, Yasushi Makihara, Yasushi Yagi
    Abstract:

    Cross-view gait recognition authenticates a person using a pair of gait image sequences observed from different views. View differences degrade gait recognition accuracy, and several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multi-view gait features, trained with auxiliary data from multiple training subjects who are different from the test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from one with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on the given gait feature pair and causes an inhomogeneously biased dissimilarity score. Because normalization of such inhomogeneously biased scores is well known to improve recognition accuracy in general, we propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures and use them, together with the biased dissimilarity score, to calculate the posterior probability that both gait features originate from the same subject. The proposed method was evaluated on two gait datasets: a large-population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures contributes to accuracy improvement in many cross-view settings.
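The score-normalization step described above can be illustrated with a small sketch: a biased dissimilarity score and two quality measures are fused into a posterior probability that the pair is a same-subject (genuine) pair. The logistic-regression fusion and all names below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: quality-aware score normalization. A dissimilarity score and two
# quality measures (computed on auxiliary training pairs) are fused into
# P(same subject | score, q1, q2). Hypothetical helper names throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_normalizer(scores, q1, q2, same_subject):
    """Fit P(same | score, q1, q2) on auxiliary training pairs.

    scores, q1, q2 : 1-D arrays with the dissimilarity score and the two
                     quality measures for each training pair.
    same_subject   : 1-D array of {0, 1} ground-truth labels.
    """
    X = np.column_stack([scores, q1, q2])
    return LogisticRegression().fit(X, same_subject)

def normalized_score(model, score, q1, q2):
    # Posterior probability of a genuine (same-subject) pair.
    return model.predict_proba([[score, q1, q2]])[0, 1]
```

Any calibrated probabilistic classifier could stand in for the logistic regression here; the key point is that the quality measures let the model compensate for the pair-dependent bias in the raw dissimilarity score.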

  • Gait-Based Person Recognition Using Arbitrary View Transformation Model
    IEEE Transactions on Image Processing, 2015
    Co-Authors: Daigo Muramatsu, Yasushi Makihara, Akira Shiraishi, Md Zasim Uddin, Yasushi Yagi
    Abstract:

    Gait recognition is a useful biometric trait for person authentication because it is usable even at low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. VTMs work well if the target views are the same as their discrete training views. In real situations, however, gait is observed from arbitrary views, so the target views may not coincide with the discrete training views, degrading recognition accuracy. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from arbitrary views. To realize the AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend the AVTM with a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts and sets a part-dependent destination view for the transformation. Because the appropriate destination view may differ for different body parts, part-dependent destination view selection can suppress transformation errors, leading to higher recognition accuracy. Experiments using datasets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, particularly in verification scenarios.
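The data-generation step, projecting a 3D gait volume onto an arbitrary target view to obtain a 2D silhouette, can be sketched roughly as below. This uses a simplified orthographic projection about the vertical axis; the paper's actual camera model and any perspective effects are not reproduced, and the function is a hypothetical illustration.

```python
# Sketch: render a 2D silhouette from a binary 3D gait volume for an
# arbitrary azimuth angle via orthographic projection (an assumption;
# the paper's projection geometry may differ).
import numpy as np

def silhouette_from_volume(volume, azimuth_deg):
    """volume: binary occupancy array of shape (X, Y, Z), Z vertical."""
    theta = np.deg2rad(azimuth_deg)
    x, y, z = np.nonzero(volume)            # coordinates of occupied voxels
    xr = np.cos(theta) * x - np.sin(theta) * y  # rotate about the vertical axis
    # The rotated y-coordinate is the depth axis; orthographic projection
    # discards it and keeps (xr, z) as image coordinates.
    w = (xr - xr.min()).astype(int)
    sil = np.zeros((volume.shape[2], int(w.max()) + 1), dtype=np.uint8)
    sil[z, w] = 1
    return sil  # rows = height (z), columns = horizontal position
```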

  • Gait Recognition Using a View Transformation Model in the Frequency Domain
    European Conference on Computer Vision, 2006
    Co-Authors: Yasushi Makihara, Ryusuke Sagawa, Yasuhiro Mukaigawa, Tomio Echigo, Yasushi Yagi
    Abstract:

    Gait analysis has recently gained attention as a method of identifying individuals at a distance from a camera. However, appearance changes caused by changes in view direction pose difficulties for gait recognition systems. Here, we propose a method of gait recognition from various view directions using frequency-domain features and a view transformation model. We first construct a spatio-temporal silhouette volume of a walking person and then extract frequency-domain features of the volume by Fourier analysis based on the gait periodicity. Next, a view transformation model is learned from a training set of multiple persons observed from multiple view directions. In the recognition phase, the model transforms gallery features into the same view direction as that of an input feature, so that the features can be matched directly. Experiments involving gait recognition from 24 view directions demonstrate the effectiveness of the proposed method.
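The feature-extraction step, Fourier analysis of the silhouette volume based on gait periodicity, can be sketched as follows: each pixel's intensity over time is transformed with a DFT, and the amplitudes at integer multiples of the gait frequency are kept as features. Gait-period detection and size normalization from the paper are omitted; this is an assumption-level sketch.

```python
# Sketch: per-pixel temporal DFT of a silhouette volume, keeping amplitudes
# at harmonics of the gait frequency. `period` (gait period in frames) is
# assumed to be known from a separate period-detection step.
import numpy as np

def frequency_features(silhouettes, period, n_harmonics=3):
    """silhouettes: array of shape (T, H, W) of stacked binary silhouettes."""
    T = silhouettes.shape[0]
    spectrum = np.fft.fft(silhouettes, axis=0) / T  # per-pixel temporal DFT
    feats = []
    for k in range(n_harmonics + 1):          # k = 0 is the DC (average) term
        bin_k = int(round(k * T / period))    # frequency bin of k-th harmonic
        feats.append(np.abs(spectrum[bin_k])) # amplitude image at harmonic k
    return np.stack(feats)                    # shape (n_harmonics + 1, H, W)
```

Amplitudes are used rather than raw complex coefficients so the feature is invariant to where in the gait cycle the sequence starts.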

Yasushi Makihara - One of the best experts on this subject based on the ideXlab platform.

  • Co-author of the three publications listed under Yasushi Yagi above (View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition, 2016; Gait-Based Person Recognition Using Arbitrary View Transformation Model, 2015; Gait Recognition Using a View Transformation Model in the Frequency Domain, 2006).

Daigo Muramatsu - One of the best experts on this subject based on the ideXlab platform.

  • Co-author of the 2016 IEEE Transactions on Cybernetics and 2015 IEEE Transactions on Image Processing publications listed under Yasushi Yagi above.

Jian Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Multiple Views Gait Recognition Using View Transformation Model Based on Optimized Gait Energy Image
    International Conference on Computer Vision, 2009
    Co-Authors: Worapan Kusakunniran, Qiang Wu, Hongdong Li, Jian Zhang
    Abstract:

    Gait is a well-recognized biometric that has been widely used for human identification. However, current gait recognition methods may have difficulties when the viewing angle changes, because the viewing angle under which the gait signature database was generated may differ from the viewing angle at which the probe data are obtained. This paper proposes a new multi-view gait recognition approach that tackles these problems. Unlike other approaches in the same category, the method creates a view transformation model (VTM) based on the spatial-domain gait energy image (GEI) by adopting the singular value decomposition (SVD) technique. To further improve the performance of the VTM, linear discriminant analysis (LDA) is used to optimize the GEI feature vectors. Implementing the SVD raises practical problems, such as large matrix size and over-fitting; a reduced SVD is therefore introduced to alleviate them. Using the generated VTM, gallery and probe gait data can be transformed into the same viewing direction, so that gait signatures can be compared directly. Extensive experiments show that the proposed algorithm significantly improves multi-view gait recognition performance compared with similar methods in the literature.
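The SVD-based view transformation model at the heart of this paper family can be sketched compactly: training GEI features from M views of N subjects are stacked into one matrix, factorized with a reduced SVD, and the per-view blocks of the left factor are used to map a feature between views. The LDA optimization step from the paper is omitted, and the helper names are illustrative.

```python
# Sketch: SVD-based VTM. Stack training features g[i][n] (view i, subject n)
# into A, factorize A = U S V^T, and use the per-view blocks of U S to map a
# feature from a source view to a destination view.
import numpy as np

def train_vtm(gei, rank):
    """gei: array of shape (M, N, d) - M views, N training subjects,
    d-dimensional GEI feature vectors. Returns per-view blocks P of
    shape (M, d, rank)."""
    M, N, d = gei.shape
    A = gei.transpose(0, 2, 1).reshape(M * d, N)   # rows: view blocks, cols: subjects
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    # Reduced SVD: keep only the top `rank` singular components.
    P = (U[:, :rank] * S[:rank]).reshape(M, d, rank)
    return P

def transform(P, g_src, src_view, dst_view):
    """Estimate the feature at dst_view from one observed at src_view."""
    v = np.linalg.pinv(P[src_view]) @ g_src  # subject's latent coefficients
    return P[dst_view] @ v
```

Truncating to a small `rank` is the reduced SVD mentioned in the abstract: it bounds the matrix sizes involved and limits over-fitting to the training subjects.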

Worapan Kusakunniran - One of the best experts on this subject based on the ideXlab platform.

  • Co-author of the ICCV 2009 publication listed under Jian Zhang above (Multiple Views Gait Recognition Using View Transformation Model Based on Optimized Gait Energy Image).