AI News, guillaume-chevalier/LSTM-Human-Activity-Recognition

guillaume-chevalier/LSTM-Human-Activity-Recognition

Most other research on the activity recognition dataset used here relies on a large amount of feature engineering, which is essentially a signal processing approach combined with classical data science techniques.

The dataset's description goes like this: The sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec and 50% overlap (128 readings/window).
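The windowing scheme described above maps directly to code. Here is a minimal NumPy sketch; the 50 Hz sampling rate, 2.56 s width, and 50% overlap are taken from the dataset description, while the helper function itself is only illustrative:

```python
import numpy as np

def sliding_windows(signal, fs=50, win_sec=2.56, overlap=0.5):
    """Cut a 1-D signal into fixed-width windows: 2.56 s at 50 Hz gives
    128 readings per window, and 50% overlap gives a hop of 64 samples."""
    win = int(win_sec * fs)          # 128 samples per window
    hop = int(win * (1 - overlap))   # 64-sample step between window starts
    n = (len(signal) - win) // hop + 1
    return np.stack([signal[i * hop : i * hop + win] for i in range(n)])

windows = sliding_windows(np.arange(1000))
print(windows.shape)  # (14, 128)
```

Because the hop is half the window, the second half of each window reappears as the first half of the next one.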

The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity.
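That separation can be sketched with SciPy as follows. The 0.3 Hz cutoff and 3rd-order filter are my assumptions, since the excerpt does not state the exact filter parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def separate_gravity(acc, fs=50.0, cutoff=0.3, order=3):
    """Split raw acceleration into gravity and body-motion components with a
    Butterworth low-pass filter (cutoff and order are assumptions)."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    gravity = filtfilt(b, a, acc, axis=0)  # slowly varying gravity component
    body = acc - gravity                   # remaining body-motion component
    return gravity, body

# Synthetic check: constant gravity plus a fast 10 Hz "body motion" oscillation.
t = np.arange(0, 2.56, 1 / 50.0)               # one 2.56 s window at 50 Hz
acc = 9.81 + 0.5 * np.sin(2 * np.pi * 10 * t)
gravity, body = separate_gravity(acc[:, None])
```

The low-pass branch keeps the near-constant gravity term, and subtracting it from the raw signal leaves the fast body-motion term.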

That said, I will use almost-raw data: the only preprocessing is that the gravity effect has been filtered out of the accelerometer signal, yielding another 3D feature as input to help learning.

The architecture can be roughly pictured as in the image below, imagining that each rectangle has a vectorial depth and other special hidden quirks.

In our case, the 'many to one' architecture is used: we accept a time series of feature vectors (one vector per time step) and convert it to a probability vector at the output for classification.
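As a concrete sketch of 'many to one', here is a minimal Keras model under assumed shapes (128 time steps with 9 input features per step and 6 activity classes, as in the UCI HAR setup; the layer size is an illustrative choice, not the repository's exact hyperparameters):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 9)),           # one feature vector per time step
    tf.keras.layers.LSTM(32),                        # keeps only the final hidden state
    tf.keras.layers.Dense(6, activation="softmax"),  # probability vector over classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The whole window of 128 feature vectors is consumed step by step, but only the LSTM's final state feeds the softmax, which is what makes the mapping 'many to one'.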

And accuracy can peak at values such as 92.73% at lucky moments during training, depending on how the neural network's weights were randomly initialized at the start of training.

In another open-source repository of mine, the accuracy is pushed up to 94% using a special deep LSTM architecture which combines the concepts of bidirectional RNNs, residual connections and stacked cells.
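The three ideas named above can be combined in many ways; the following is only a hypothetical Keras sketch of bidirectionality, a residual connection, and stacked cells, not the actual architecture of that repository:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(128, 9))
# Stacked bidirectional cells: each Bidirectional(LSTM(32)) outputs 64 channels.
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(inputs)
h = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)
x = layers.add([x, h])        # residual connection between the stacked layers
x = layers.LSTM(32)(x)        # collapse the sequence ("many to one")
outputs = layers.Dense(6, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```

Keeping both bidirectional layers at the same width (64 output channels) is what allows the elementwise residual addition.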

If you want to learn more about deep learning, I have also compiled a list of the learning resources for deep learning that have proven most useful to me here.

Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

The main findings from the direct comparison of our novel DeepConvLSTM against the baseline model using standard feedforward units in the dense layer are that: (i) DeepConvLSTM reaches a higher F1 score;

Training time, however, increases with the number of layers, and depending on computational resources available, future work may consider how to find trade-offs between system performance and training time.

DeepConvLSTM does not include pooling operations because the input of the network is constrained by the sliding window mechanism defined by the OPPORTUNITY challenge, and this fact limits the possibility of downsampling the data, given that DeepConvLSTM requires a data sequence to be processed by the recurrent layers.

We show how convolution operations are robust enough to be directly applied to raw sensor data, to learn features (salient patterns) that, within a deep framework, successfully outperformed previous results on the problem.
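In that spirit, a DeepConvLSTM-style model can be sketched as convolutions over the raw window followed by recurrent layers. The filter counts, kernel sizes, and input shape below are illustrative assumptions, not the paper's exact configuration; note the absence of pooling, which keeps the sequence long enough for the LSTM layers:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 9)),             # one raw sensor window
    layers.Conv1D(64, 5, activation="relu"),  # learn local features from raw data
    layers.Conv1D(64, 5, activation="relu"),  # no pooling: sequence stays long
    layers.LSTM(128, return_sequences=True),  # model temporal dynamics
    layers.LSTM(128),                         # final state summarizes the window
    layers.Dense(6, activation="softmax"),
])
```

Each valid-mode convolution trims 4 time steps, so the recurrent layers still receive a 120-step feature sequence rather than a pooled-down one.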

This is particularly important, as activity recognition techniques are applied to domains that include more complex activities or open-ended scenarios, where classifiers must adaptively model a varying number of classes.

It is also notable, in terms of data, that the recurrent model is capable of obtaining very good performance with relatively small datasets, since the largest training dataset used during the experiments (the one corresponding to the OPPORTUNITY dataset) is composed of ~80k sensor samples, corresponding to 6 h of recordings.

This seems to indicate that although deep learning techniques are often employed with large amounts of data, (e.g., millions of frames in computer vision [22]), they may actually be applicable to problem domains where acquiring annotated data is very costly, such as in supervised activity recognition.

Removing the dependency on engineered features by exploiting convolutional layers is particularly important if the set of activities to recognise is changing over time, for instance as additional labelled data become available (e.g., through crowd-sourcing [48]).

In an open-ended learning scenario, where the number of classes could be increased after the system is initially deployed, backpropagation of the gradient to the convolutional layers could be used to incrementally adapt the kernels according to the new data at runtime.

Future work may consider the representational limits of such networks for open-ended learning and investigate rules to increase network size (e.g., adding new kernels) to maintain a desired representational power.

Action Detection Using A Deep Recurrent Neural Network

An example of MERL's deep recurrent neural network for action detection working on a test video from the MERL Shopping Dataset. The yellow box in the upper left shows the action that is detected...

Temporal Activity Detection in Untrimmed Videos with Recurrent Neural Networks (NIPS WS 2016)

This thesis explores different approaches using Convolutional and Recurrent Neural Networks to classify and temporally localize activities.

General Sequence Learning using Recurrent Neural Networks

indico's Head of Research, Alec Radford, led a workshop on general sequence learning using recurrent neural networks at Next.ML in San Francisco. His presentation and workshop resources are...

Gesture Recognition using Long Short Term Memory (LSTM)

Implementation of Long Short-Term Memory (LSTM) neural network using PyBrain open source library. The algorithm is able to classify 6 learned behaviors when they are performed by the robot...

Deep Learning for Music Generation

In this episode of the AI show Erika explains how to create deep learning models with music as the input. She begins by describing the problem of generating music by specifically describing...

How to visualize neural network parameters and activity - Justin Shenk

Description: Visualizing neural network parameters and activity using open source software such as Yosinski's Deep Convolutional Toolbox, Karpathy's RNNs, and TensorFlow's tools. Abstract: Learn...

3D CNN-Action Recognition Part-1

This video explains the implementation of a 3D CNN for action recognition. It explains a little theory about 2D and 3D convolution. The implementation of the 3D CNN in Keras continues in the next...

Attentional Push - A Deep Convolutional Network for Augmenting Image Salience | Spotlight 2-2C

Siavash Gorji; James J. Clark. We present a novel visual attention tracking technique based on Shared Attention modeling. By considering the viewer as a participant in the activity occurring...

Signal Processing and Machine Learning Techniques for Sensor Data Analytics


Recurrent Neural Networks (DLAI D7L1 2017 UPC Deep Learning for Artificial Intelligence)

Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable...