Real-Time Action

The Experts below are selected from a list of 752,271 Experts worldwide ranked by the ideXlab platform

Dong Huang - One of the best experts on this subject based on the ideXlab platform.

  • Temporal Unet: Sample Level Human Action Recognition using WiFi
    arXiv: Signal Processing, 2019
    Co-Authors: F. Wang, Jimuyang Zhang, Yunpeng Song, Dong Huang
    Abstract:

    Human actions distort WiFi signals, and this distortion is widely exploited for action recognition, for example in fall detection for the elderly, hand sign language recognition, and keystroke estimation. To the best of our survey, past work recognizes human actions by categorizing one complete distortion series as one action, which we term series-level action recognition. In this paper, we introduce a much more fine-grained and challenging action recognition task into the WiFi sensing domain: sample-level action recognition. In this task, every WiFi distortion sample in the whole series must be categorized as one action, which is a critical technique for precise action localization, continuous action segmentation, and real-time action recognition. To achieve WiFi-based sample-level action recognition, we analyze approaches to image-based semantic segmentation as well as video-based frame-level action recognition, and then propose a simple yet efficient deep convolutional neural network, Temporal Unet. Experimental results show that Temporal Unet performs this novel task well. Code has been made publicly available at this https URL.
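
The per-sample labeling described in this entry maps naturally onto a U-Net applied along the time axis: an encoder downsamples the distortion series, a decoder upsamples it back, and skip connections preserve sample-level detail. Below is a minimal sketch of such a temporal U-Net in PyTorch; the layer widths, the 90 CSI input channels, and the 8 action classes are illustrative assumptions, not the paper's configuration.

```python
# A minimal temporal U-Net sketch for sample-level action recognition.
# Layer widths, channel count, and class count are assumptions for
# illustration, not the configuration used in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 1-D convolutions over the time axis, as in one U-Net stage.
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
    )

class TemporalUNet(nn.Module):
    def __init__(self, in_ch=90, num_classes=8):
        super().__init__()
        self.enc1 = conv_block(in_ch, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose1d(256, 128, kernel_size=2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose1d(128, 64, kernel_size=2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv1d(64, num_classes, kernel_size=1)

    def forward(self, x):                      # x: (batch, channels, time)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                   # (batch, num_classes, time)

x = torch.randn(1, 90, 512)                    # 512 WiFi samples, 90 subcarriers
print(TemporalUNet()(x).shape)                 # torch.Size([1, 8, 512]): one label per sample
```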

Gopal Gupta - One of the best experts on this subject based on the ideXlab platform.

  • Design and Implementation of AT: A Real-Time Action Description Language
    Lecture Notes in Computer Science, 2006
    Co-Authors: Luke Simon, Ajay Mallya, Gopal Gupta
    Abstract:

    Real-world applications of action description languages involve systems with real-time constraints: the occurrence of an action is just as important as the time at which the action occurs. To model such real-time systems, the action description language A is extended with real-time clocks and constraints. The formal syntax and semantics of the extended language are defined, and the use of logic programming as a means of implementing real-time A is discussed.

  • LOPSTR - Design and Implementation of AT: A Real-Time Action Description Language
    Logic Based Program Synthesis and Transformation, 2006
    Co-Authors: Luke Simon, Ajay Mallya, Gopal Gupta
    Abstract:

    Real-world applications of action description languages involve systems with real-time constraints: the occurrence of an action is just as important as the time at which the action occurs. To model such real-time systems, the action description language A is extended with real-time clocks and constraints. The formal syntax and semantics of the extended language are defined, and the use of logic programming as a means of implementing real-time A is discussed.
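
As a rough illustration of the idea shared by the two entries above, the sketch below models A-style actions whose effects on fluents are paired with clock constraints on their occurrence times, so that executing a timed plan checks real-time feasibility. It is written in plain Python rather than the logic-programming setting the paper discusses, and the valve domain and all names are invented for illustration; this is not the formal semantics of AT.

```python
# A minimal sketch, assuming a simplified timed-action model: each action has
# an A-style effect on fluents plus a constraint on its occurrence time.
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List, Tuple

State = FrozenSet[str]  # a state is the set of fluents that currently hold

@dataclass
class TimedAction:
    name: str
    effects: Callable[[State], State]   # effect on fluents, as in language A
    clock_ok: Callable[[float], bool]   # real-time constraint on occurrence time

def run(initial: State, plan: List[Tuple[str, float]],
        actions: Dict[str, TimedAction]) -> State:
    """Execute (action, time) pairs, rejecting any clock-constraint violation."""
    state, last_t = initial, 0.0
    for name, t in plan:
        act = actions[name]
        if t < last_t or not act.clock_ok(t):
            raise ValueError(f"{name} at t={t} violates a real-time constraint")
        state, last_t = act.effects(state), t
    return state

# Invented example domain: the valve may only be closed before time 5.0.
actions = {
    "open":  TimedAction("open",  lambda s: s | {"open"},  lambda t: True),
    "close": TimedAction("close", lambda s: s - {"open"},  lambda t: t <= 5.0),
}
print(run(frozenset(), [("open", 1.0), ("close", 4.0)], actions))  # satisfies the constraints
```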

F. Wang - One of the best experts on this subject based on the ideXlab platform.

  • Temporal Unet: Sample Level Human Action Recognition using WiFi
    arXiv: Signal Processing, 2019
    Co-Authors: F. Wang, Jimuyang Zhang, Yunpeng Song, Dong Huang
    Abstract:

    Human actions distort WiFi signals, and this distortion is widely exploited for action recognition, for example in fall detection for the elderly, hand sign language recognition, and keystroke estimation. To the best of our survey, past work recognizes human actions by categorizing one complete distortion series as one action, which we term series-level action recognition. In this paper, we introduce a much more fine-grained and challenging action recognition task into the WiFi sensing domain: sample-level action recognition. In this task, every WiFi distortion sample in the whole series must be categorized as one action, which is a critical technique for precise action localization, continuous action segmentation, and real-time action recognition. To achieve WiFi-based sample-level action recognition, we analyze approaches to image-based semantic segmentation as well as video-based frame-level action recognition, and then propose a simple yet efficient deep convolutional neural network, Temporal Unet. Experimental results show that Temporal Unet performs this novel task well. Code has been made publicly available at this https URL.

Hanli Wang - One of the best experts on this subject based on the ideXlab platform.

  • Real-Time Action Recognition with Deeply Transferred Motion Vector CNNs
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, Hanli Wang
    Abstract:

    Two-stream CNNs have proved very successful for video-based action recognition, but the classical two-stream pipeline is time-consuming, mainly because of the bottleneck of calculating optical flow (OF). In this paper, we propose a two-stream-based real-time action recognition approach that uses motion vectors (MVs) in place of OF. MVs are encoded in the video stream and can be extracted directly without extra calculation. However, directly training a CNN on MVs degrades accuracy severely, owing to the noise and lack of fine detail in MVs. To alleviate this problem, we propose four training strategies that leverage the knowledge learned by an OF CNN to enhance the accuracy of the MV CNN. Our insight is that MV and OF share inherently similar structures, which allows us to transfer knowledge from one domain to the other. To fully utilize the knowledge learned in the OF domain, we develop a deeply transferred MV CNN. Experimental results on various datasets show the effectiveness of our training strategies. Our approach is significantly faster than OF-based approaches and reaches a processing speed of 390.7 frames per second, surpassing the real-time requirement. We release our model and code (https://github.com/zbwglory/MV-release) to facilitate further research.

  • Real-Time Action Recognition with Enhanced Motion Vector CNNs
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, Hanli Wang
    Abstract:

    The deep two-stream architecture has exhibited excellent performance on video-based action recognition. The most computationally expensive step in this approach is the calculation of optical flow, which prevents it from running in real time. This paper accelerates the architecture by replacing optical flow with motion vectors, which can be obtained directly from compressed videos without extra calculation. However, motion vectors lack fine structure and contain noisy, inaccurate motion patterns, leading to an evident degradation in recognition performance. Our key insight for alleviating this problem is that optical flow and motion vectors are inherently correlated, so transferring the knowledge learned with an optical flow CNN to a motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this: initialization transfer, supervision transfer, and their combination. Experimental results show that our method achieves recognition performance comparable to the state of the art while processing 390.7 frames per second, 27 times faster than the original two-stream method.

  • CVPR - Real-Time Action Recognition with Enhanced Motion Vector CNNs
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
    Co-Authors: Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, Hanli Wang
    Abstract:

    The deep two-stream architecture [23] has exhibited excellent performance on video-based action recognition. The most computationally expensive step in this approach is the calculation of optical flow, which prevents it from running in real time. This paper accelerates the architecture by replacing optical flow with motion vectors, which can be obtained directly from compressed videos without extra calculation. However, motion vectors lack fine structure and contain noisy, inaccurate motion patterns, leading to an evident degradation in recognition performance. Our key insight for alleviating this problem is that optical flow and motion vectors are inherently correlated, so transferring the knowledge learned with an optical flow CNN to a motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this: initialization transfer, supervision transfer, and their combination. Experimental results show that our method achieves recognition performance comparable to the state of the art while processing 390.7 frames per second, 27 times faster than the original two-stream method.
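
The transfer strategies in the three entries above can be pictured as teacher-student training: a CNN trained on optical flow supervises the motion-vector CNN through its softened predictions (the "supervision transfer" strategy). The sketch below shows that idea in PyTorch with a toy architecture and assumed loss weights; it is not the papers' exact recipe.

```python
# A minimal supervision-transfer sketch: the optical-flow CNN (teacher) guides
# the motion-vector CNN (student) via distillation. Architecture, temperature,
# and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_stream_cnn(num_classes=101):
    # Toy stand-in for the two-stream temporal network; both streams share it.
    return nn.Sequential(
        nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )

teacher = make_stream_cnn()   # stands in for a CNN pretrained on optical flow
student = make_stream_cnn()   # trained on noisy motion vectors

def transfer_loss(flow, mv, labels, T=4.0, alpha=0.5):
    """Cross-entropy on labels plus distillation toward the teacher's
    temperature-softened predictions."""
    with torch.no_grad():
        t_logits = teacher(flow)               # teacher sees optical flow
    s_logits = student(mv)                     # student sees motion vectors
    hard = F.cross_entropy(s_logits, labels)
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft

flow = torch.randn(8, 2, 224, 224)             # optical-flow batch (teacher input)
mv = torch.randn(8, 2, 224, 224)               # motion vectors from the compressed stream
loss = transfer_loss(flow, mv, torch.randint(0, 101, (8,)))
loss.backward()                                # gradients flow only into the student
```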
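
Initialization transfer, the simplest strategy named above, amounts to starting the motion-vector CNN from the optical-flow CNN's weights before fine-tuning on motion vectors. A minimal sketch, assuming identical architectures for the two streams (the toy model is repeated so the snippet stands alone) and a hypothetical checkpoint path:

```python
# A minimal initialization-transfer sketch: copy the optical-flow CNN's
# weights into the motion-vector CNN, then fine-tune. The architecture and
# checkpoint path are illustrative assumptions.
import torch
import torch.nn as nn

def make_stream_cnn(num_classes=101):
    # Identical architecture for both streams so weights transfer directly.
    return nn.Sequential(
        nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )

of_cnn = make_stream_cnn()
# In practice of_cnn would load weights pretrained on optical flow, e.g.:
# of_cnn.load_state_dict(torch.load("of_cnn.pt"))  # hypothetical checkpoint

mv_cnn = make_stream_cnn()
mv_cnn.load_state_dict(of_cnn.state_dict())        # initialization transfer
optimizer = torch.optim.SGD(mv_cnn.parameters(), lr=1e-3, momentum=0.9)
# mv_cnn is then fine-tuned on motion-vector inputs with cross-entropy loss.
```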

Luke Simon - One of the best experts on this subject based on the ideXlab platform.

  • Design and Implementation of AT: A Real-Time Action Description Language
    Lecture Notes in Computer Science, 2006
    Co-Authors: Luke Simon, Ajay Mallya, Gopal Gupta
    Abstract:

    Real-world applications of action description languages involve systems with real-time constraints: the occurrence of an action is just as important as the time at which the action occurs. To model such real-time systems, the action description language A is extended with real-time clocks and constraints. The formal syntax and semantics of the extended language are defined, and the use of logic programming as a means of implementing real-time A is discussed.

  • LOPSTR - Design and Implementation of AT: A Real-Time Action Description Language
    Logic Based Program Synthesis and Transformation, 2006
    Co-Authors: Luke Simon, Ajay Mallya, Gopal Gupta
    Abstract:

    Real-world applications of action description languages involve systems with real-time constraints: the occurrence of an action is just as important as the time at which the action occurs. To model such real-time systems, the action description language A is extended with real-time clocks and constraints. The formal syntax and semantics of the extended language are defined, and the use of logic programming as a means of implementing real-time A is discussed.