# AI News: The tensor renaissance in data science

- On Monday, June 4, 2018

## The tensor renaissance in data science

After sitting in on UC Irvine Professor Anima Anandkumar's presentation at Strata + Hadoop World 2015 in San Jose, I wrote a post urging the data community to build tensor decomposition libraries for data science.

That was one of the main reasons tensors fell out of favor: when computers were not yet very powerful, tensor operations could not be handled efficiently. Now, however, I think we are seeing a renaissance of tensors because computational capabilities have exploded, and tensor operations are highly parallelizable; they can be run in the cloud.
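To make the parallelism point concrete, here is a minimal NumPy sketch (NumPy is my choice for illustration; the article names no library). A tensor-times-matrix contraction decomposes into many independent matrix multiplies, which is exactly the structure that maps well onto GPUs or distributed cloud workers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-way tensor, e.g. (users x items x time) interaction counts.
T = rng.random((4, 5, 6))

# Mode-1 product with a matrix U: contract the first axis.
# Each slice of the result is an independent matrix multiply,
# which is why such operations parallelize so well.
U = rng.random((3, 4))
out = np.einsum('ia,ajk->ijk', U, T)

print(out.shape)  # (3, 5, 6)
```

The same contraction can be written as `np.tensordot(U, T, axes=(1, 0))`; `einsum` just makes the index pattern explicit.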

On the other hand, what our research, along with that of our collaborators and other researchers in this field, has shown is that many tensor problems arising in machine learning are not hard: we do not encounter the worst-case hard tensors in machine learning applications.

Anandkumar highlights recent contributions of tensor methods to feature learning: The latest set of results we have been looking at is the use of tensors for feature learning as a general concept.

The idea of feature learning is to find transformations of the input data that can be classified more accurately using simpler classifiers. This is an emerging area of machine learning that has seen a lot of interest, and our latest analysis asks how tensors can be employed for such feature learning.
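The "transform, then use a simple classifier" idea can be sketched in a few lines. The random ReLU feature map below is an assumption chosen purely for illustration (the interview does not prescribe a method); the point is that XOR-style data, which no linear classifier separates in the raw input space, becomes linearly separable after the transformation.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR-style labels: not linearly separable in the raw 2-D inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])

# Feature learning idea: lift the inputs into a space where a
# *simple* (linear) classifier suffices. Here: random ReLU
# features, an illustrative stand-in for a learned transformation.
W = rng.normal(size=(2, 50))
b = rng.normal(size=50)
Phi = np.maximum(X @ W + b, 0.0)

# Simple linear readout, fit by least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = np.sign(Phi @ w)
print(pred)  # recovers y exactly
```

The heavy lifting moves into the feature map; the classifier on top stays trivially simple, which is the trade the quoted passage describes.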

For instance, another application we've been looking at combines hierarchical graphical models with features extracted from deep learning … to get better detection of multiple objects in the same image. Most deep learning work so far has focused on benchmark data sets where there is mostly one object per image, whereas we are looking at images containing many objects: how can we learn efficiently by also using the fact that objects tend to co-occur in images?

- On Tuesday, September 17, 2019

**Vectors - The Math of Intelligence #3**

We're going to explore why the concept of vectors is so important in machine learning. We'll talk about how they are used to represent both data and models.

**Deep Learning with Tensorflow - Tensors, Variables and Placeholders**

Enroll in the course for free at: Deep Learning with TensorFlow Introduction The majority of data ..

**Dimensionality Reduction for Matrix- and Tensor-Coded Data [Part 1]**

Alex Williams, Stanford University In many scientific domains, data is coded in large tables or higher-dimensional arrays. Compressing these data into smaller, ...

**Tensor Decompositions for Learning Hidden Variable Models**

In many applications, we face the challenge of modeling the interactions between multiple observations. A popular and successful approach in machine learning ...

**Deep Learning with Tensorflow - Convolution and Feature Learning**

Enroll in the course for free at: Deep Learning with TensorFlow Introduction The majority of data ..

**Dimensionality Reduction for Matrix- and Tensor-Coded Data [Part 2]**

Alex Williams, Stanford University In many scientific domains, data is coded in large tables or higher-dimensional arrays. Compressing these data into smaller, ...
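One common way to compress tensor-coded data of the kind these talks describe is to unfold (matricize) the array along one mode and apply a truncated SVD. This is only one of several approaches (the talks also cover CP and Tucker fits); the shapes and "neurons x time x trials" framing below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic low-rank 3-way data plus noise,
# e.g. (neurons x time x trials) recordings.
A = rng.random((30, 2))
B = rng.random((20, 2))
C = rng.random((10, 2))
T = np.einsum('ir,jr,kr->ijk', A, B, C) \
    + 0.01 * rng.normal(size=(30, 20, 10))

# Mode-1 unfolding: flatten to (30, 200), then truncated SVD.
M = T.reshape(30, -1)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 2
M_hat = (U[:, :r] * s[:r]) @ Vt[:r]

# Relative reconstruction error: small, because almost all
# of the variance lies in the top two components.
err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
print(round(err, 3))
```

Unfolding discards the multi-way structure within each column, which is precisely the limitation that motivates the tensor-native decompositions covered in the talks.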

**Preprocessing cont'd - Deep Learning with Neural Networks and TensorFlow part 6**

Welcome to part six of the Deep Learning with Neural Networks and TensorFlow tutorials. Where we left off, we explained our plan and theory for applying our ...

**How to Make a Text Summarizer - Intro to Deep Learning #10**

I'll show you how you can turn an article into a one-sentence summary in Python with the Keras machine learning library. We'll go over word embeddings, ...

**Lecture 15 | Efficient Methods and Hardware for Deep Learning**

In Lecture 15, guest lecturer Song Han discusses algorithms and specialized hardware that can be used to accelerate training and inference of deep learning ...

**Discovery of Latent Factors in High-dimensional Data via Tensor Decomposition**

Latent or hidden variable models have applications in almost every domain, e.g., social network analysis, natural language processing, computer vision and ...
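As a small end-to-end sketch of recovering latent factors by tensor decomposition: the code below fits a rank-2 CP model with alternating least squares (ALS). ALS is a standard fitting method chosen here for brevity; the talk itself focuses on moment-based methods with provable guarantees, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(3)

def kr(B, C):
    # Column-wise Khatri-Rao product, matching C-order unfoldings.
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

# Ground-truth rank-2 tensor built from hidden factors.
I, J, K, r = 8, 7, 6, 2
A0, B0, C0 = (rng.random((n, r)) for n in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

# CP decomposition via alternating least squares: update one
# factor at a time, holding the other two fixed.
A, B, C = (rng.random((n, r)) for n in (I, J, K))
for _ in range(200):
    A = T.reshape(I, -1) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = T.transpose(1, 0, 2).reshape(J, -1) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = T.transpose(2, 0, 1).reshape(K, -1) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(round(err, 6))  # near zero: the hidden factors are recovered
```

Each update is a linear least-squares problem, and the Hadamard-product trick `(B.T @ B) * (C.T @ C)` keeps the normal equations tiny regardless of how large the tensor is.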