Human Motions

The Experts below are selected from a list of 10,122 Experts worldwide, ranked by the ideXlab platform.

Jun Morimoto - One of the best experts on this subject based on the ideXlab platform.

  • Real-time stylistic prediction for whole-body Human Motions
    Neural Networks, 2012
    Co-Authors: Takamitsu Matsubara, Sang-ho Hyon, Jun Morimoto
    Abstract:

    The ability to predict Human motion is crucial in several contexts, such as Human tracking by computer vision and the synthesis of Human-like computer graphics. Previous work has focused on off-line processing of well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach, real-time stylistic prediction for whole-body Human Motions, to satisfy these requirements. The approach uses a novel generative model to represent whole-body Human motion, including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of Human motion styles. A real-time adaptation algorithm was derived to estimate both the state variables and the style parameter of the model from non-stationary, unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, adaptation and prediction together take less than 15 ms per observation. Our real-time stylistic prediction was evaluated on Human walking, running, and jumping behaviors. © 2011 Elsevier Ltd.
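
    To make the modeling idea concrete, the sketch below illustrates the general scheme the abstract describes: a low-dimensional phase variable with simple dynamics, a two-factor (basis × style) observation model, online adaptation of the phase and style from streaming, possibly partial, pose observations, and prediction by rolling the phase dynamics forward. It is a minimal sketch under assumed choices (radial-basis phase features, gradient-based adaptation); the function names and parameters are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    # Minimal, illustrative sketch (not the authors' code): a 1-D phase variable
    # advanced at a constant rate, radial-basis features over the phase, and a
    # style-weighted ("two-factor") linear observation model producing joint angles.
    N_BASIS, N_JOINTS, N_STYLES = 20, 30, 3
    centers = np.linspace(0.0, 2.0 * np.pi, N_BASIS, endpoint=False)

    def phase_features(phi):
        """Smooth basis activations over the circular phase variable."""
        return np.exp(np.cos(phi - centers) / 0.25)

    def predict_pose(phi, style, W):
        """Observation model: a style vector blends per-style basis weights W."""
        # W has shape (N_STYLES, N_JOINTS, N_BASIS).
        return np.einsum("s,sjb,b->j", style, W, phase_features(phi))

    def adapt(phi, style, W, y_obs, mask, lr=0.05):
        """One online step: nudge phase and style to reduce the observation error.
        `mask` marks which joints were observed, so partial observations work too."""
        err = mask * (y_obs - predict_pose(phi, style, W))
        eps = 1e-3  # cheap numerical gradient w.r.t. the scalar phase
        loss = np.sum(err ** 2)
        loss_eps = np.sum((mask * (y_obs - predict_pose(phi + eps, style, W))) ** 2)
        phi -= lr * (loss_eps - loss) / eps
        style += lr * np.einsum("j,sjb,b->s", err, W, phase_features(phi))
        return phi, style

    def rollout(phi, style, W, omega, n_steps, dt=1.0 / 120.0):
        """Predict a future motion sequence by advancing the phase dynamics."""
        return np.stack([predict_pose(phi + omega * dt * k, style, W)
                         for k in range(1, n_steps + 1)])
    ```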

Changyu Shen - One of the best experts on this subject based on the ideXlab platform.

  • A highly stretchable and stable strain sensor based on hybrid carbon nanofillers/polydimethylsiloxane conductive composites for large Human Motions monitoring
    Composites Science and Technology, 2018
    Co-Authors: Yanjun Zheng, Chuntai Liu, Kun Dai, Yilong Li, Guoqiang Zheng, Yan Wang, Changyu Shen
    Abstract:

    Stretchable strain sensors have promising potential in wearable electronics for Human motion detection, health monitoring, and related applications. A reliable strain sensor with high flexibility and good stability is needed to detect Human joint Motions involving large deformations. Here, a simple and facile solution mixing-casting method was adopted to fabricate a highly stretchable strain sensor based on composites of polydimethylsiloxane (PDMS) with hybrid carbon nanotube (CNT) and carbon black (CB) conductive nanofillers (CNTs-CB). Morphology observations confirmed a bridged and overlapped hybrid CNTs-CB nanofiller structure in the composite. In monotonic stretching tests, the CNTs-CB/PDMS composite strain sensors exhibited high stretchability, strain-dependent sensitivity over a wide sensing range (ca. 300% strain), and excellent linear current-voltage behavior. During stretching-releasing cycles, the strain sensors showed excellent repeatability, good stability, and superior durability (2500 cycles at 200% strain). Given these outstanding strain-sensing performances, the fabricated stretchable strain sensors were attached to different joints of the Human body to monitor the corresponding Human Motions, demonstrating their promise for large Human Motions detection.
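
    Because this entry is organized around strain-dependent resistance, a short worked example of the standard gauge-factor relation, GF = (ΔR/R₀)/ε, may help; the resistance and strain values below are hypothetical illustration numbers, not measurements from the paper.

    ```python
    # Standard piezoresistive strain-sensor relation (illustration only):
    #   gauge factor GF = (delta_R / R_0) / strain
    # The numbers below are hypothetical, not data from the paper.
    def gauge_factor(r_0, r_strained, strain):
        """GF from baseline resistance, strained resistance, and applied strain."""
        return ((r_strained - r_0) / r_0) / strain

    # Hypothetical reading: resistance rises from 10 kOhm to 40 kOhm at 200% strain.
    gf = gauge_factor(r_0=10e3, r_strained=40e3, strain=2.0)  # 200% strain = 2.0
    print(f"GF = {gf:.2f}")  # prints: GF = 1.50
    ```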

Hubert P. H. Shum - One of the best experts on this subject based on the ideXlab platform.

  • Spatio-temporal manifold learning for Human Motions via long-horizon modeling
    IEEE Transactions on Visualization and Computer Graphics, 2019
    Co-Authors: He Wang, Edmond S. L. Ho, Hubert P. H. Shum
    Abstract:

    Data-driven modeling of Human Motions is ubiquitous in computer graphics and computer vision applications, such as synthesizing realistic Motions or recognizing actions. Recent research has shown that such problems can be approached by learning a natural motion manifold using deep learning on a large amount of data, to address the shortcomings of traditional data-driven approaches. However, previous deep learning methods can be sub-optimal for two reasons. First, the skeletal information has not been fully utilized for feature extraction. Unlike images, it is difficult to define spatial proximity in skeletal Motions in a way that lets deep networks be applied for feature extraction. Second, motion is time-series data with strong multi-modal temporal correlations between frames. On the one hand, a frame could be followed by several candidate frames leading to different Motions; on the other hand, long-range dependencies exist where a number of frames at the beginning correlate with a number of frames later. Ineffective temporal modeling would either under-estimate the multi-modality and variance, resulting in featureless mean motion, or over-estimate them, resulting in jittery Motions, which are a major source of visual artifacts. In this paper, we propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component for feature extraction. It is also equipped with a new batch prediction model that predicts a large number of frames at once, so that long-term, temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. With our system, long-duration Motions can be predicted or synthesized in an open-loop setup while accurately retaining the motion dynamics. It can also be used for denoising corrupted Motions and synthesizing new Motions from given control signals. We demonstrate that our system creates superior results compared to existing work across multiple applications.
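
    As a rough illustration of the batch-prediction idea described above (predicting many future frames in one shot so a long-horizon objective can be applied over the whole predicted window), here is a minimal PyTorch-style sketch. The GRU encoder, layer sizes, and plain MSE loss are assumptions made for brevity; this is not the architecture or training objective from the paper.

    ```python
    import torch
    import torch.nn as nn

    class BatchMotionPredictor(nn.Module):
        """Illustrative sketch (not the paper's network): encode an observed pose
        sequence, then emit a whole block of `horizon` future frames at once so a
        long-horizon objective can be applied over the full predicted window."""

        def __init__(self, pose_dim=63, hidden=256, horizon=30):
            super().__init__()
            self.horizon, self.pose_dim = horizon, pose_dim
            self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
            self.decoder = nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, horizon * pose_dim),  # all frames in one shot
            )

        def forward(self, observed):               # observed: (batch, T, pose_dim)
            _, h = self.encoder(observed)           # h: (1, batch, hidden)
            pred = self.decoder(h[-1])              # (batch, horizon * pose_dim)
            return pred.view(-1, self.horizon, self.pose_dim)

    # Hypothetical training step: loss computed over the entire predicted window.
    model = BatchMotionPredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    observed = torch.randn(8, 60, 63)   # dummy batch: 8 clips, 60 observed frames
    future = torch.randn(8, 30, 63)     # dummy ground-truth future frames
    loss = nn.functional.mse_loss(model(observed), future)
    opt.zero_grad(); loss.backward(); opt.step()
    ```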

  • Spatio-temporal manifold learning for Human Motions via long-horizon modeling
    arXiv: Graphics, 2019
    Co-Authors: He Wang, Edmond S. L. Ho, Hubert P. H. Shum
    Abstract:

    Data-driven modeling of Human Motions is ubiquitous in computer graphics and computer vision applications, such as synthesizing realistic Motions or recognizing actions. Recent research has shown that such problems can be approached by learning a natural motion manifold using deep learning to address the shortcomings of traditional data-driven approaches. However, previous methods can be sub-optimal for two reasons. First, the skeletal information has not been fully utilized for feature extraction. Unlike images, it is difficult to define spatial proximity in skeletal Motions in a way that lets deep networks be applied. Second, motion is time-series data with strong multi-modal temporal correlations. A frame could be followed by several candidate frames leading to different Motions; long-range dependencies exist where a number of frames at the beginning correlate with a number of frames later. Ineffective modeling would either under-estimate the multi-modality and variance, resulting in featureless mean motion, or over-estimate them, resulting in jittery Motions. In this paper, we propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component for feature extraction. It is also equipped with a new batch prediction model that predicts a large number of frames at once, so that long-term, temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. With our system, long-duration Motions can be predicted or synthesized in an open-loop setup while accurately retaining the motion dynamics. It can also be used for denoising corrupted Motions and synthesizing new Motions from given control signals. We demonstrate that our system creates superior results compared to existing work across multiple applications.

Yunping Hu - One of the best experts on this subject based on the ideXlab platform.

  • High-performance strain sensor based on buckypaper for full-range detection of Human Motions
    Nanoscale, 2018
    Co-Authors: Chengwei Li, Dongmei Zhang, Chenghao Deng, Peng Wang, Yunping Hu
    Abstract:

    A high-performance strain sensor based on buckypaper has been fabricated and studied. With an ultrahigh gauge factor of 20,216, the sensor can detect strains from a minimum of 0.1% up to a maximum of 75%. During stretching, the strain sensor achieves high stability and reproducibility over 10,000 cycles, and a fast response time of less than 87 ms. The sensor also shows excellent sensing performance under pressure: the pressure range, pressure sensitivity, and loading–unloading endurance are 0–1.68 MPa, 89.7 kPa⁻¹, and 3000 cycles, respectively. The concept of an optimal value is used to evaluate the strain and pressure performance of the sensor; the optimal values under tensile strain and pressure are calculated to be 3.07 × 10⁸ and 1.35 × 10⁷, respectively, which are much higher than those of most strain and pressure sensors reported in the literature. Precise detection of full-range Human Motions, acoustic vibrations, and even small-scale pulse waves has been successfully demonstrated with the buckypaper-based sensor. Owing to its advantages, including ultrahigh sensitivity, wide detection range, and good stability, the buckypaper-based sensor shows great potential for applications in wearable sensors, electronic skins, micro/nano-electromechanical systems, vibration sensing, and other strain-sensing devices.
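
    For the pressure-sensing figures quoted above, the sensitivity in kPa⁻¹ is conventionally the slope of the relative resistance (or current) change versus applied pressure. The snippet below shows that standard calculation on made-up readings; it is not derived from the paper's data, and the paper's "optimal value" metric is not reproduced here.

    ```python
    import numpy as np

    # Conventional piezoresistive pressure sensitivity (illustration only):
    #   S = d(delta_R / R_0) / dP, reported in kPa^-1.
    # The readings below are hypothetical, not data from the paper.
    pressure_kpa = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
    resistance_ohm = np.array([1000.0, 940.0, 885.0, 780.0, 585.0])

    relative_change = (resistance_ohm - resistance_ohm[0]) / resistance_ohm[0]
    sensitivity, _ = np.polyfit(pressure_kpa, relative_change, 1)  # least-squares slope
    print(f"pressure sensitivity ~ {abs(sensitivity):.1e} kPa^-1")
    ```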

Seung Hwan Ko - One of the best experts on this subject based on the ideXlab platform.

  • A deep-learned skin sensor decoding the epicentral Human Motions
    Nature Communications, 2020
    Co-Authors: Inho Ha, Joonhwa Choi, Sungho Jo, Seung Hwan Ko
    Abstract:

    State monitoring of a complex system requires a large number of sensors. In particular, studies in soft electronics aim at complete measurement of the body, mapping stimuli such as temperature, electrophysiological signals, and mechanical strains. However, the conventional approach requires sensor networks that cover the entire curvilinear surface of the target area. We introduce a new measuring system: a novel electronic skin integrated with a deep neural network that captures dynamic Motions from a distance without building such a sensor network. The device detects minute deformations through its unique laser-induced crack structures. A single skin sensor decodes the complex Motions of five fingers in real time, and rapid situation learning (RSL) ensures stable operation regardless of the sensor's position on the wrist. The sensor is also capable of extracting gait Motions from the pelvis. This technology is expected to provide a turning point for health monitoring, motion tracking, and soft robotics. Real-time monitoring of Human Motions normally demands connecting a large number of sensors in a complicated network. To simplify this, Kim et al. decode finger Motions using a flexible sensor attached to the wrist that measures skin deformation with the help of a deep-learning architecture.
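
    To illustrate only the decoding idea (a single skin-sensor stream regressed to multi-finger motion by a learned model), here is a hedged sketch. The window length, convolutional layer sizes, and training setup are assumptions for illustration; the paper's rapid situation learning (RSL) procedure is not reproduced.

    ```python
    import torch
    import torch.nn as nn

    class SingleSensorDecoder(nn.Module):
        """Illustrative sketch (not the paper's model): regress the bend of five
        fingers from a short window of one crack-sensor signal measured at the wrist."""

        def __init__(self, n_fingers=5, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, hidden), nn.ReLU(),
                nn.Linear(hidden, n_fingers),  # one bend estimate per finger
            )

        def forward(self, sensor_window):      # (batch, 1, n_samples)
            return self.net(sensor_window)     # (batch, n_fingers)

    # Hypothetical usage: a 128-sample normalized sensor window -> 5 finger estimates.
    decoder = SingleSensorDecoder()
    window = torch.randn(1, 1, 128)            # dummy input, not real sensor data
    print(decoder(window).shape)               # torch.Size([1, 5])
    ```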