AI News, How to Use Convolutional Neural Networks for Time Series Classification

A large amount of data is stored in the form of time series: stock indices, climate measurements, medical tests, etc.

Time series classification has a wide range of applications: from identification of stock market anomalies to automated detection of heart and brain diseases.

Most classification approaches consist of two major stages: in the first stage, you either apply an algorithm that measures the distance between the time series you want to classify (dynamic time warping is a well-known one), or you use whatever tools are at your disposal (simple statistics, advanced mathematical methods, etc.) to represent each time series as a feature vector.

Fortunately, there are models that not only incorporate feature engineering in one framework, but also eliminate any need to do it manually: they are able to extract features and create informative representations of time series automatically.

In a multivariate time series, the number of dimensions depends on the task: for electroencephalography it is the number of channels (electrodes on a person's head), and for a weather time series it can be variables such as temperature, pressure, humidity, etc.

The resulting value becomes an element of a new “filtered” univariate time series, and then the kernel moves forward along the time series to produce the next value.
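This sliding-kernel step can be sketched in a few lines of numpy; the series and kernel values below are illustrative, not from the article:

```python
import numpy as np

def conv1d(series, kernel):
    """Slide a 1-D kernel along a univariate time series.

    Each output element is the dot product of the kernel with the
    window it currently covers ('valid' cross-correlation, as used
    in CNN convolutional layers).
    """
    n, k = len(series), len(kernel)
    return np.array([series[i:i + k] @ kernel for i in range(n - k + 1)])

# A simple difference kernel highlights jumps in the series.
series = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
kernel = np.array([-1.0, 1.0])
print(conv1d(series, kernel))  # -> [ 0.  1.  0. -1.  0.]
```

Each output element is one point of the new "filtered" univariate series; a trained network learns the kernel values rather than using a hand-picked difference kernel.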

A new vector is formed from these values, and this vector of maximums is the final feature vector that can be used as an input to a regular fully connected layer.
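A minimal sketch of this max-pooling step, assuming several illustrative hand-written kernels in place of learned ones:

```python
import numpy as np

def conv1d(series, kernel):
    """'Valid' cross-correlation of a 1-D kernel with a series."""
    k = len(kernel)
    return np.array([series[i:i + k] @ kernel
                     for i in range(len(series) - k + 1)])

def conv_max_features(series, kernels):
    """Apply each kernel, then keep only the maximum of each filtered
    series (global max pooling); the maxima form the feature vector
    that is fed to the fully connected layer."""
    return np.array([conv1d(series, k).max() for k in kernels])

series = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
kernels = [np.array([-1.0, 1.0]),   # rising-edge detector
           np.array([1.0, -1.0]),   # falling-edge detector
           np.array([0.5, 0.5])]    # local average
print(conv_max_features(series, kernels))  # -> [1. 1. 1.]
```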

The smaller the coefficient, the more detailed the new time series is, and, therefore, the more it captures information about the time series features on a smaller time scale.

Down-sampling with larger coefficients results in less detailed new time series which capture and emphasize those features of the original data that exhibit themselves on larger time scales.

More specifically, the pooling kernel size is determined by the formula n/p, where n is the length of the time series and p is a pooling factor, typically chosen from {2, 3, 5}.
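The down-sampling step and the n/p pooling-size rule can be sketched as follows; taking every k-th point as the down-sampling scheme and using integer division for n/p are assumptions, not details given in the text:

```python
import numpy as np

def down_sample(series, k):
    """Keep every k-th point (assumed scheme); larger k gives a
    coarser series that emphasizes features on larger time scales."""
    return series[::k]

def pool_size(n, p):
    """Pooling kernel size n/p, with n the series length and p a
    pooling factor, typically 2, 3, or 5 (integer division assumed)."""
    return n // p

series = np.arange(12, dtype=float)
print(down_sample(series, 3))            # -> [0. 3. 6. 9.]
for p in (2, 3, 5):
    print(p, pool_size(len(series), p))  # pool sizes 6, 4, 2
```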

After all the transformations and convolutions, you are left with a flat vector of deep, complex features that capture information about the original time series in a wide range of frequency and time scale domains.

The second branch processes the medium-length (1024 timesteps) down-sampled version of the time series, and the filter length used here is 16.
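Putting the pieces together, a multi-branch layout can be sketched with numpy: each branch convolves one version of the series and global-max-pools the results, and the branch outputs are concatenated into a single feature vector. The original series length (2048), the first branch's filter length (32), the filter counts, and the random filters are all illustrative assumptions; only the 1024-step branch with filter length 16 comes from the text.

```python
import numpy as np

def conv1d(series, kernel):
    """'Valid' cross-correlation of a 1-D kernel with a series."""
    k = len(kernel)
    return np.array([series[i:i + k] @ kernel
                     for i in range(len(series) - k + 1)])

def branch(series, filter_len, n_filters, rng):
    """One branch: convolve with random stand-in filters of the given
    length, then global-max-pool each filtered series into one feature."""
    kernels = rng.normal(size=(n_filters, filter_len))
    return np.array([conv1d(series, k).max() for k in kernels])

rng = np.random.default_rng(0)
full = rng.normal(size=2048)           # original series (length assumed)
medium = full[::2]                     # 1024-timestep down-sampled version
features = np.concatenate([
    branch(full, 32, 8, rng),          # filter length 32 assumed
    branch(medium, 16, 8, rng),        # filter length 16, as in the text
])
print(features.shape)  # -> (16,)
```

In a real network the filters are learned jointly with the fully connected classifier that consumes the concatenated feature vector.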

Mask-MCNet: Instance Segmentation in 3D Point Cloud of Intra-oral Scans

Accurate segmentation of teeth in dental imaging is a principal element in computer-aided design (CAD) in modern dentistry.

In this paper, we present a new framework based on deep learning models for segmenting tooth instances in 3D point cloud data of an intra-oral scan (IOS).

Consequently, the model is able to localize each object instance by predicting its 3D bounding box and simultaneously segmenting all the points inside each box.