Activity Classification

Senem Velipasalar - One of the best experts on this subject based on the ideXlab platform.

  • Autonomous Human Activity Classification From Wearable Multi-Modal Sensors
    IEEE Sensors Journal, 2019
    Co-Authors: Senem Velipasalar
    Abstract:

    There has been a significant amount of research work on human Activity Classification relying either on Inertial Measurement Unit (IMU) data or on data from static cameras providing a third-person view. There has been relatively little work using wearable cameras, which provide a first-person or egocentric view, and even fewer approaches combining egocentric video with IMU data. Using only IMU data limits the variety and complexity of the activities that can be detected. For instance, the sitting Activity can be detected from IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is. To perform fine-grained Activity Classification, and to distinguish between activities that cannot be differentiated by IMU data alone, we present an autonomous and robust method using data from both wearable cameras and IMUs. In contrast to convolutional neural network-based approaches, we propose to employ capsule networks to obtain features from egocentric video data. Moreover, a Convolutional Long Short-Term Memory (ConvLSTM) framework is employed on both the egocentric videos and the IMU data to capture the temporal aspect of actions. We also propose a genetic algorithm-based approach to autonomously and systematically set various network parameters, rather than using manual settings. Experiments have been conducted to perform 9- and 26-label Activity Classification, and the proposed method, using autonomously set network parameters, has provided very promising results, achieving overall accuracies of 86.6% and 77.2%, respectively. The proposed approach, combining both modalities, also provides increased accuracy compared to using only ego-vision data or only IMU data.
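
    The authors' implementation is not included here, so the following is a minimal Keras sketch of the two-branch, late-fusion design the abstract describes: a ConvLSTM branch over egocentric clips and an LSTM branch over IMU windows, feeding a 9-way softmax. All input shapes and layer sizes are illustrative assumptions, and a plain ConvLSTM stem stands in for the capsule-network feature extractor.

    # Minimal two-branch fusion sketch (not the authors' code); the capsule
    # feature extractor is replaced by a plain ConvLSTM stem, and all shapes
    # and layer sizes are illustrative assumptions.
    from tensorflow.keras import layers, Model

    # Egocentric video branch: ConvLSTM over short clips of 16 frames.
    video_in = layers.Input(shape=(16, 64, 64, 3))      # (time, H, W, C)
    v = layers.ConvLSTM2D(32, kernel_size=3, return_sequences=False)(video_in)
    v = layers.GlobalAveragePooling2D()(v)

    # IMU branch: LSTM over windows of 100 accelerometer/gyroscope samples.
    imu_in = layers.Input(shape=(100, 6))               # (time, channels)
    m = layers.LSTM(64)(imu_in)

    # Late fusion of the two modalities, then a 9-way softmax.
    fused = layers.Concatenate()([v, m])
    out = layers.Dense(9, activation="softmax")(fused)

    model = Model([video_in, imu_in], out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()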

  • Autonomous Human Activity Classification from Ego-vision Camera and Accelerometer Data.
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Senem Velipasalar
    Abstract:

    There has been a significant amount of research work on human Activity Classification relying either on Inertial Measurement Unit (IMU) data or on data from static cameras providing a third-person view. Using only IMU data limits the variety and complexity of the activities that can be detected. For instance, the sitting Activity can be detected from IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is. To perform fine-grained Activity Classification from egocentric videos, and to distinguish between activities that cannot be differentiated by IMU data alone, we present an autonomous and robust method using data from both ego-vision cameras and IMUs. In contrast to convolutional neural network-based approaches, we propose to employ capsule networks to obtain features from egocentric video data. Moreover, a Convolutional Long Short-Term Memory framework is employed on both the egocentric videos and the IMU data to capture the temporal aspect of actions. We also propose a genetic algorithm-based approach to autonomously and systematically set various network parameters, rather than using manual settings. Experiments have been conducted to perform 9- and 26-label Activity Classification, and the proposed method, using autonomously set network parameters, has provided very promising results, achieving overall accuracies of 86.6% and 77.2%, respectively. The proposed approach, combining both modalities, also provides increased accuracy compared to using only ego-vision data or only IMU data.
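
    Since this is the arXiv version of the work above, the sketch here focuses on the other component the abstract highlights: the genetic algorithm that sets network parameters autonomously. The encoding, operators, and fitness function below are generic illustrations; the abstract does not specify the authors' actual GA design.

    # Generic genetic-algorithm sketch for picking network hyperparameters.
    # The encoding, operators, and fitness below are illustrative assumptions,
    # not the authors' actual GA design.
    import random

    SEARCH_SPACE = {
        "lstm_units":   [32, 64, 128],
        "conv_filters": [16, 32, 64],
        "dropout":      [0.2, 0.3, 0.5],
        "lr":           [1e-2, 1e-3, 1e-4],
    }

    def random_individual():
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def fitness(ind):
        # Stand-in for "train the network with these parameters and return
        # validation accuracy"; replace with a real training run.
        return -abs(ind["lstm_units"] - 64) - 100 * abs(ind["lr"] - 1e-3)

    def crossover(a, b):
        return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

    def mutate(ind, rate=0.1):
        return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate
                    else v) for k, v in ind.items()}

    population = [random_individual() for _ in range(20)]
    for generation in range(10):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                    # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    print("best parameters:", max(population, key=fitness))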

  • GlobalSIP - HUMAN Activity Classification INCORPORATING EGOCENTRIC VIDEO AND INERTIAL MEASUREMENT UNIT DATA
    2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2018
    Co-Authors: Senem Velipasalar
    Abstract:

    Many methods have been proposed for human Activity Classification, which rely either on Inertial Measurement Unit (IMU) data or on data from static cameras watching subjects. There has been relatively little work using egocentric videos, and even fewer approaches combining egocentric video and IMU data. Systems relying only on IMU data are limited in the complexity of the activities that they can detect. In this paper, we present a robust and autonomous method for fine-grained Activity Classification that leverages data from multiple wearable sensor modalities to differentiate between activities that are similar in nature, with a level of accuracy that would be impossible with either sensor alone. We use both egocentric videos and IMU sensors on the body. We employ Capsule Networks together with Convolutional Long Short-Term Memory (LSTM) to analyze egocentric videos, and an LSTM framework to analyze IMU data and capture the temporal aspect of actions. We performed experiments on the CMU-MMAC dataset, achieving overall recall and precision rates of 85.8% and 86.2%, respectively. We also present results of using each sensor modality alone, which show that the proposed approach provides 19.47% and 39.34% increases in accuracy compared to using only ego-vision data and only IMU data, respectively.
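
    The abstract reports accuracy gains from combining the two modalities but does not state the fusion rule. The snippet below shows equal-weight score averaging, one common baseline for late fusion, purely as an illustration.

    # Illustration of one simple way to fuse per-modality classifier outputs;
    # score averaging is shown only as a common baseline, not as the paper's
    # actual fusion rule.
    import numpy as np

    rng = np.random.default_rng(0)
    n_windows, n_classes = 5, 8

    # Stand-in softmax outputs from the video and IMU branches.
    p_video = rng.dirichlet(np.ones(n_classes), size=n_windows)
    p_imu   = rng.dirichlet(np.ones(n_classes), size=n_windows)

    p_fused = 0.5 * p_video + 0.5 * p_imu     # equal-weight late fusion
    labels = p_fused.argmax(axis=1)
    print("fused predictions per window:", labels)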

  • Automatic Fall Detection and Activity Classification by a Wearable Camera
    Distributed Embedded Smart Cameras, 2014
    Co-Authors: Koray Ozcan, Anvith Katte Mahabalagiri, Senem Velipasalar
    Abstract:

    Automated monitoring of the everyday physical activities of the elderly has come a long way in the past two decades. These activities range from critical events, such as falls, which require rapid and robust detection, to daily activities, such as walking, sitting, and lying down, whose classification is valuable for long-term prognosis. Researchers have constantly strived to come up with innovative methods based on different sensor systems in order to build a robust automated system. These sensor systems can be broadly classified into wearable and ambient sensors, and both vision- and non-vision-based sensors have been employed. The most popular wearable systems use non-vision-based sensors, such as accelerometers and gyroscopes, and have the advantage of not being confined to restricted environments. However, resource limitations leave them vulnerable to false positives and make the task of classifying activities very challenging. On the other hand, popular ambient vision-based sensors, such as wall-mounted cameras, have the resources for better Activity Classification but are confined to a specific monitoring environment and, by nature, raise privacy concerns. Recently, integrated wearable sensor systems with an accelerometer and a camera on a single device have been introduced, wherein the camera provides contextual information to validate the accelerometer readings. In this chapter, the new idea of using a smart camera as a waist-worn fall detection and Activity Classification system is presented. A methodology to classify sitting and lying down activities with such a system is then introduced, further substantiating the concept of event detection and Activity Classification with wearable smart cameras.

  • Automatic fall detection and Activity Classification by a wearable embedded smart camera
    IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2013
    Co-Authors: Koray Ozcan, Mauricio Casares, Anvith Katte Mahabalagiri, Senem Velipasalar
    Abstract:

    Robust detection of events and activities, such as falling, sitting, and lying down, is key to a reliable elderly Activity monitoring system. While fast and precise detection of falls is critical for providing immediate medical attention, other activities, like sitting and lying down, can provide valuable information for the early diagnosis of potential health problems. In this paper, we present a fall detection and Activity Classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas and extends to wherever the subject may go, indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for Activity Classification. The first set of experiments was performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was tested on a CITRIC platform, with subjects wearing the CITRIC camera and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
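
    The abstract names an optical flow-based method for Activity Classification without giving details, so the following is only a generic sketch of extracting a dense-flow motion descriptor with OpenCV. The synthetic frames and Farneback parameter values are stand-ins, not the authors' settings.

    # Generic dense optical-flow motion descriptor, in the spirit of the
    # paper's flow-based Activity Classification step. The synthetic frames
    # and Farneback parameters are stand-ins, not the authors' settings.
    import cv2
    import numpy as np

    # Synthetic stand-in frames: a random texture shifted two pixels right.
    rng = np.random.default_rng(0)
    prev = (rng.random((120, 160)) * 255).astype(np.uint8)
    curr = np.roll(prev, 2, axis=1)

    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Simple per-frame descriptors a classifier could consume: mean flow
    # magnitude plus a coarse, magnitude-weighted orientation histogram.
    mean_motion = float(magnitude.mean())
    orient_hist, _ = np.histogram(angle, bins=8, range=(0, 2 * np.pi),
                                  weights=magnitude)
    print(mean_motion, orient_hist.round(1))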

Andre Pollok - One of the best experts on this subject based on the ideXlab platform.

  • Automatic adaptive speech separation using beamformer-output-ratio for voice Activity Classification
    Signal Processing, 2015
    Co-Authors: Thuy N Tran, William G. Cowley, Andre Pollok
    Abstract:

    This paper focuses on the practical challenge of adaptation control for speech separation systems. Adaptive beamforming methods, such as minimum variance distortionless response (MVDR), can effectively extract the desired speech signal from interference and noise. However, to avoid the signal cancellation problem, the beamformer adaptation is halted when the desired speaker is active. An automated scheme for this adaptation requires classifying speakers' voice Activity status, which remains a challenge for multi-speaker environments. In this paper, we propose a novel approach to identify voice activities for two speakers based on a new metric, called the beamformer-output-ratio (BOR). Statistical properties of the BOR are studied and used to develop a hypothesis-based method for voice Activity Classification. The method is further refined using an algorithm that detects incorrect beamformer adaptation by analysing changes in the output power of a blindly adapting MVDR beamformer. Based on the new methods, we construct an automatic adaptive beamforming system to simultaneously separate speech for two speakers. The speech separation module of the system uses MVDR beamformers whose adaptation is guided by the voice Activity Classification. Our methods can lead to, in some cases, a 20% reduction in voice Activity Classification error and an 8 dB improvement in the output SINR. The results are verified on both synthesised signals and realistic recordings.

    Highlights:
      • We design an automated adaptive beamforming system to extract the speech of two speakers.
      • The quantity BOR and its roles in active speaker identification are introduced.
      • The BOR-VAC method is developed, in both generic form and practical realisation.
      • We model the beamformer output power behaviour to detect incorrect adaptation.
      • The proposed systems are tested on both real and synthesised recordings.
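
    As a rough illustration of the BOR idea (not the paper's exact formulation), the sketch below compares the short-term output powers of two beamformers, each steered at one speaker, and thresholds the ratio. The frame length and thresholds are placeholders.

    # Beamformer-output-ratio (BOR) sketch: compare short-term output powers
    # of two beamformers, each steered at one speaker, and classify voice
    # Activity by thresholding the ratio. Frame length and thresholds are
    # placeholders, not values from the paper.
    import numpy as np

    def frame_power(x, frame_len=512):
        n = len(x) // frame_len
        frames = x[:n * frame_len].reshape(n, frame_len)
        return (frames ** 2).mean(axis=1)

    def classify_vac(y1, y2, lo=0.5, hi=2.0):
        """y1, y2: outputs of beamformers steered at speakers 1 and 2."""
        bor = frame_power(y1) / (frame_power(y2) + 1e-12)
        labels = np.full(len(bor), "both", dtype=object)
        labels[bor > hi] = "speaker 1"    # beamformer 1 dominates
        labels[bor < lo] = "speaker 2"    # beamformer 2 dominates
        return bor, labels

    rng = np.random.default_rng(1)
    y1, y2 = rng.standard_normal(4096), 0.3 * rng.standard_normal(4096)
    print(classify_vac(y1, y2)[1])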

  • AusCTW - Multi-speaker beamforming for voice Activity Classification
    2013 Australian Communications Theory Workshop (AusCTW), 2013
    Co-Authors: Thuy N Tran, William G. Cowley, Andre Pollok
    Abstract:

    In a multi-speaker environment, voice Activity Classification (VAC) attempts to identify the active speaker(s) in different recording periods. Using a beamformer-output-ratio (BOR) from a multi-beamforming system, an efficient solution for VAC is available by comparing the calculated BOR with pre-specified thresholds. Considering two speakers, this paper derives theoretical results on BOR statistics, including the probability density function and the cumulative distribution function (c.d.f.) of the BOR, under the assumption that the narrow-band signal power in the frequency domain is Gamma distributed. Using the c.d.f. of the BOR, the thresholds for VAC can be calculated automatically via a closed-form expression for given acceptable mis-detection rates. The method is tested with simulated recording setups for a non-reverberant environment and an environment with a 0.3-second reverberation time. Both simulations show high Classification accuracy.
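
    One way to make the closed-form threshold concrete: if the two beamformer output powers are modeled as independent Gamma variables with a common scale, their ratio follows a beta-prime distribution, whose quantile function yields a threshold for a target mis-detection rate. The shape parameters below are illustrative assumptions, not values from the paper.

    # Threshold selection under the abstract's Gamma assumption: the ratio of
    # independent Gamma(a, s) and Gamma(b, s) powers is beta-prime(a, b)
    # distributed, so a quantile gives the VAC threshold. Shape values are
    # illustrative only.
    from scipy.stats import betaprime

    a, b = 6.0, 2.0     # assumed Gamma shapes when speaker 1 is active
    alpha = 0.05        # acceptable mis-detection rate

    # Decide "speaker 1 active" when BOR exceeds tau; choose tau so that a
    # truly active speaker 1 is missed with probability alpha.
    tau = betaprime(a, b).ppf(alpha)
    print(f"BOR threshold tau = {tau:.3f}")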

  • AusCTW - Voice Activity Classification using beamformer-output-ratio
    2012 Australian Communications Theory Workshop (AusCTW), 2012
    Co-Authors: Thuy N Tran, William G. Cowley, Andre Pollok
    Abstract:

    In a conversation between multiple speakers, each person speaks at different times, so the active speakers in each speech segment are unknown. However, identifying the voice Activity (VA) of the speakers of interest is required for adaptive beamforming techniques such as minimum variance distortionless response (MVDR) beamforming and adaptive blocking (AB) beamforming. Considering two speakers, this paper addresses a voice Activity Classification (VAC) problem that focuses on identifying the active speaker(s) in each speech segment. The proposed method is based on a new concept, the beamformer-output-ratio (BOR), calculated from the outputs of two different beamformers steered at the two speakers. The first part of the paper introduces the definition of the BOR, the VAC method using the BOR, and simulation results. The simulations are based on real recordings and show high Classification accuracy. In the second part of the paper, theoretical results on the BOR of delay-and-sum (DS) beamforming are presented, including the BOR formula derived in different environments and its behaviour under parameter errors.
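
    For reference, a delay-and-sum beamformer simply aligns each microphone signal by its steering delay and averages. The toy two-microphone sketch below uses integer-sample delays and a synthetic source; it illustrates DS beamforming generically rather than the paper's setup.

    # Minimal delay-and-sum (DS) beamformer: align each microphone signal by
    # its steering delay and average. Integer-sample delays and the toy
    # two-element setup are simplifications for illustration.
    import numpy as np

    def delay_and_sum(mics, delays):
        """mics: (n_mics, n_samples); delays: per-mic delays in samples."""
        n_mics, n_samples = mics.shape
        out = np.zeros(n_samples)
        for x, d in zip(mics, delays):
            out += np.roll(x, -int(d))    # advance by the steering delay
        return out / n_mics

    fs = 16000
    t = np.arange(fs) / fs
    source = np.sin(2 * np.pi * 440 * t)  # toy source signal
    mic1 = source
    mic2 = np.roll(source, 5)             # arrives 5 samples later at mic 2

    y = delay_and_sum(np.vstack([mic1, mic2]), delays=[0, 5])
    print("alignment gain:", np.dot(y, source) / np.dot(source, source))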

Mihaela Van Der Schaar - One of the best experts on this subject based on the ideXlab platform.

  • Personalized Active Learning for Activity Classification Using Wireless Wearable Sensors
    IEEE Journal of Selected Topics in Signal Processing, 2016
    Co-Authors: Linqi Song, Gregory J. Pottie, Mihaela Van Der Schaar
    Abstract:

    Enabling accurate and low-cost Classification of a range of motion activities is important for numerous applications, ranging from disease treatment and in-community rehabilitation of patients to athlete training. This paper proposes a novel contextual online learning method for Activity Classification based on data captured by low-cost, body-worn inertial sensors and smartphones. The proposed method addresses the unique challenges arising in enabling online, personalized, and adaptive Activity Classification without requiring a training phase for the individual. Another key challenge of Activity Classification is that the labels may change over time, as both the data and the Activity to be monitored evolve continuously, and the true label is often costly and difficult to obtain. The proposed algorithm is able to actively learn when to ask for the true label by assessing the benefits and costs of obtaining it. We rigorously characterize the performance of the proposed learning algorithm, and our experiments show that it outperforms existing algorithms.
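
    The abstract does not spell out the query criterion, so the snippet below is a toy stand-in for "ask for the true label only when it pays off": query when a margin-based uncertainty estimate of the benefit exceeds the labeling cost.

    # Toy active-labeling rule: query the true label only when an
    # uncertainty-driven benefit estimate exceeds the labeling cost. The
    # margin-based proxy is a stand-in for the paper's criterion, which the
    # abstract does not spell out.
    import numpy as np

    def should_query(class_probs, label_cost=0.15):
        p_sorted = np.sort(class_probs)[::-1]
        margin = p_sorted[0] - p_sorted[1]  # small margin => high uncertainty
        expected_benefit = 1.0 - margin     # crude proxy for a label's value
        return expected_benefit > label_cost

    rng = np.random.default_rng(2)
    for step in range(5):
        probs = rng.dirichlet(np.ones(4))   # stand-in classifier output
        print(step, probs.round(2),
              "query" if should_query(probs) else "skip")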

  • GLOBECOM - Context-driven online learning for Activity Classification in wireless health
    2014 IEEE Global Communications Conference, 2014
    Co-Authors: Linqi Song, Gregory J. Pottie, Mihaela Van Der Schaar
    Abstract:

    Enabling accurate and low-cost Classification of a range of motion activities is of significant importance for wireless health through body-worn inertial sensors and smartphones, due to the need of healthcare and fitness professionals to monitor exercises for quality and compliance. This paper proposes a novel contextual multi-armed bandits approach for large-scale Activity Classification. The proposed method addresses the unique challenges arising from scale, lack of training data, and the need for adaptation by melding context augmentation and continuous online learning into traditional Activity Classification. We rigorously characterize the performance of the proposed learning algorithm and prove that the learning regret (i.e., reward loss) is sublinear in time, thereby ensuring fast convergence to the optimal reward as well as providing short-term performance guarantees. Our experiments show that the proposed algorithm outperforms existing algorithms, providing both higher Classification accuracy and lower energy consumption.
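
    As a generic illustration of the contextual-bandit framing (not the authors' algorithm, whose context partitioning and regret bounds are more refined), the sketch below keeps UCB1 statistics per (context bucket, arm) pair.

    # Generic contextual UCB sketch: UCB1 statistics per (context bucket,
    # arm) pair. This illustrates the framing only; it is not the authors'
    # algorithm.
    import math
    import random

    N_CONTEXTS, N_ARMS = 4, 3
    counts = [[0] * N_ARMS for _ in range(N_CONTEXTS)]
    values = [[0.0] * N_ARMS for _ in range(N_CONTEXTS)]

    def select_arm(ctx, t):
        for a in range(N_ARMS):
            if counts[ctx][a] == 0:
                return a                   # play each arm once first
        return max(range(N_ARMS), key=lambda a: values[ctx][a]
                   + math.sqrt(2 * math.log(t) / counts[ctx][a]))

    def update(ctx, arm, reward):
        counts[ctx][arm] += 1
        values[ctx][arm] += (reward - values[ctx][arm]) / counts[ctx][arm]

    random.seed(3)
    for t in range(1, 1001):
        ctx = random.randrange(N_CONTEXTS)         # observed context bucket
        arm = select_arm(ctx, t)
        reward = random.random() * (1 + arm + ctx) / (N_ARMS + N_CONTEXTS)
        update(ctx, arm, reward)
    print(values)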

Yung-jung Chiu - One of the best experts on this subject based on the ideXlab platform.

  • Wearable Sport Activity Classification Based on Deep Convolutional Neural Network
    IEEE Access, 2019
    Co-Authors: Yu-liang Hsu, Hsing-cheng Chang, Yung-jung Chiu
    Abstract:

    This paper develops a wearable sport Activity Classification system and its associated deep learning-based sport Activity Classification algorithm for accurately recognizing sport activities. The proposed wearable system uses two wearable inertial sensing modules, worn on an athlete's wrist and ankle, to collect sport motion signals, and utilizes a deep convolutional neural network (CNN) to extract the inherent features from spectrograms produced by the short-time Fourier transform (STFT) of the sport motion signals. Each wearable inertial sensing module is composed of a microcontroller, a triaxial accelerometer, a triaxial gyroscope, an RF wireless transmission module, and a power supply circuit. All ten participants wore the two wearable inertial sensing modules on their wrist and ankle to collect motion signals generated by sport activities. Subsequently, we developed a deep learning-based sport Activity Classification algorithm composed of sport motion signal collection, signal preprocessing, sport motion segmentation, signal normalization, spectrogram generation, image merging/resizing, and CNN-based Classification to recognize ten types of sport activities. The CNN classifier, consisting of two convolutional layers, two pooling layers, a fully connected layer, and a softmax layer, classifies the sport activities as table tennis, tennis, badminton, golf, batting a baseball, shooting a basketball, volleyball, dribbling a basketball, running, and bicycling. Finally, the experimental results show that the proposed wearable sport Activity Classification system and its deep learning-based Classification algorithm can recognize the ten sport activities with a Classification rate of 99.30%.
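
    The pipeline described above can be sketched end to end: an STFT spectrogram of a motion channel feeds a small CNN with two convolutional layers, two pooling layers, a fully connected layer, and a 10-way softmax, matching the layer counts in the abstract. The sampling rate, window sizes, and layer widths are assumptions.

    # Spectrogram-plus-CNN sketch: STFT of one motion channel into a CNN with
    # two conv layers, two pooling layers, a fully connected layer, and a
    # 10-way softmax. Sampling rate, window sizes, and widths are assumptions.
    import numpy as np
    from scipy.signal import stft
    from tensorflow.keras import layers, models

    fs = 100                                    # assumed IMU sampling rate (Hz)
    accel_x = np.random.randn(fs * 4)           # stand-in 4 s motion segment
    _, _, Z = stft(accel_x, fs=fs, nperseg=64)
    spec = np.log1p(np.abs(Z))[..., np.newaxis] # (freq, time, 1) image

    model = models.Sequential([
        layers.Input(shape=spec.shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),    # fully connected layer
        layers.Dense(10, activation="softmax"), # ten sport activities
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    print(model.predict(spec[np.newaxis]).shape)  # (1, 10)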

Jong-hoon Youn - One of the best experts on this subject based on the ideXlab platform.

  • Mariners’ physical Activity Classification at sea using a wrist-worn wearable sensor.
    Biomedical Research-tokyo, 2017
    Co-Authors: Ik Hyun Youn, Jong-hoon Youn, Jung Min Lee, Teukseob Song
    Abstract:

    A long-term sea voyage imposes a special living environment on mariners that directly influences their physical health. To the best of our knowledge, there have been few research efforts evaluating mariners' physical health during life at sea. This study aims to develop wearable-based mariner physical Activity Classification models. Twenty-eight participants (n=7 females, n=21 males, mean age=21.4, mean BMI=22.9) wore a single accelerometer on the dominant hand. The wrist acceleration data were collected and analyzed to extract wrist motion features, which were compared against criterion measures (i.e., direct observation) covering four major physical Activity types in a maritime setting. Three machine learning algorithms were applied to develop an accurate Classification model. The results of the criterion-based Classification show that more than 95% of mariners' daily physical activities were accurately classified. Based on the experimental results, we conclude that the wrist motion features efficiently differentiate major physical Activity patterns in a maritime environment. The proposed physical Activity Classification models can be used as an objective measure of mariners' physical Activity levels during long voyages.
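
    The abstract does not name the three machine learning algorithms used, so the sketch below pairs sliding-window summary features from wrist acceleration with an off-the-shelf random forest as one plausible stand-in.

    # Wrist-accelerometer pipeline sketch: sliding-window summary features
    # into an off-the-shelf classifier. The feature set and the random-forest
    # choice are common defaults, not the authors' named algorithms.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, fs=30, win_s=5):
        """acc: (n_samples, 3) wrist acceleration; one feature row per window."""
        win = fs * win_s
        feats = []
        for start in range(0, len(acc) - win + 1, win):
            w = acc[start:start + win]
            mag = np.linalg.norm(w, axis=1)
            feats.append(np.concatenate([w.mean(0), w.std(0),
                                         [mag.mean(), mag.std()]]))
        return np.array(feats)

    rng = np.random.default_rng(4)
    acc = rng.standard_normal((30 * 60, 3))   # stand-in: one minute at 30 Hz
    X = window_features(acc)
    y = rng.integers(0, 4, len(X))            # stand-in labels: 4 Activity types

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(clf.predict(X[:3]))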

  • CCNC - On-board processing of acceleration data for real-time Activity Classification
    2013 IEEE 10th Consumer Communications and Networking Conference (CCNC), 2013
    Co-Authors: Sangil Choi, Richelle Lemay, Jong-hoon Youn
    Abstract:

    Assessing a person's ability to consistently perform the fundamental activities of daily living is essential for monitoring a patient's progress and measuring the success of treatment. Many researchers have therefore taken an interest in this issue and proposed various monitoring systems based on accelerometer sensors. However, few systems focus on the energy consumption of the sensor devices. In this paper, we introduce an energy-efficient physical Activity monitoring system using a wearable wireless sensor. The proposed system is capable of monitoring most daily activities of the human body: standing, sitting, walking, lying, running, and so on. To reduce energy consumption and prolong the lifetime of the system, we focus on minimizing the total energy spent on wireless data exchange by processing the acceleration data in real time on the sensor platform. Furthermore, one of our key contributions is that all functionality, including data processing, Activity Classification, wireless communication, and storage of classified activities, is achieved on a single sensor node without compromising the accuracy of Activity Classification. Our experimental results show that the accuracy of our Classification system is over 95%.
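
    The core of the energy argument is that classifying on the node lets the radio carry one label per window instead of raw samples. The toy sketch below makes that payload difference concrete; the window size, rates, and threshold rule are illustrative, not the paper's.

    # On-board idea: classify each window on the node and transmit a one-byte
    # label instead of raw samples, which is where the radio-energy saving
    # comes from. Window size, rates, and the threshold rule are illustrative.
    import numpy as np

    FS, WIN_S = 50, 2                     # assumed 50 Hz, 2 s windows
    WIN = FS * WIN_S

    def classify_window(w):
        """Crude threshold rule on acceleration magnitude variance."""
        var = np.var(np.linalg.norm(w, axis=1))
        return 0 if var < 0.05 else (1 if var < 0.5 else 2)  # still/walk/run

    rng = np.random.default_rng(5)
    acc = rng.standard_normal((FS * 60, 3))          # one minute of samples

    labels = [classify_window(acc[i:i + WIN]) for i in range(0, len(acc), WIN)]
    raw_bytes = acc.size * 2                         # 16-bit raw samples
    label_bytes = len(labels)                        # one byte per window
    print(f"radio payload: {raw_bytes} B raw vs {label_bytes} B labels")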