AI News, Building Machine Learning Estimator in TensorFlow

Building Machine Learning Estimator in TensorFlow

(This blog is featured in DataScienceWeekly here, with a Chinese translation here (中文) by Xiatian.) Have you ever wondered what’s the magic behind the tutorials on Large-scale Linear Models and Wide & Deep Learning?

The purpose of this post is to help you better understand the underlying principles of estimators in TensorFlow Learn and point out some tips and hints if you ever want to build your own estimator that’s suitable for your particular application.

It provides the basic functionality of fit(), partial_fit(), evaluate(), and predict(), relying on the logic hidden in graph_actions.py to handle model inference, evaluation, and training, and on data_feeder.py to handle fetching data batches for different types of input (note: in the future, DataFeeder will be replaced by learn.DataFrame).

While providing most of the logic required for building and evaluating a customized model function, it leaves the implementations of _get_train_ops(), _get_eval_ops(), and _get_predict_ops() to its subclasses, giving freedom to subclasses that require custom handling.

For example, _get_train_ops() in Estimator takes features and targets as inputs and returns a tuple of the training Operation and the loss Tensor, built from the customized model function.
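As an illustrative sketch (not the library's internal code), here is how a customized model function plays this role in the later tf.estimator API, which replaced these hooks with a single model_fn; the model, variable names, and hyper-parameters below are our own, and the tf.compat.v1 calls assume a TF 1.x-compatible runtime:

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    """A customized model function for a tiny linear model y = w*x + b."""
    w = tf.compat.v1.get_variable("w", shape=[], dtype=tf.float32,
                                  initializer=tf.zeros_initializer())
    b = tf.compat.v1.get_variable("b", shape=[], dtype=tf.float32,
                                  initializer=tf.zeros_initializer())
    predictions = features["x"] * w + b

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    # The loss Tensor and train Operation: the analog of _get_train_ops().
    loss = tf.reduce_mean(tf.square(predictions - labels))
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01)
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn)
```

The framework then calls this function with the appropriate mode during fit(), evaluate(), and predict(), so the subclass-specific ops live in one place.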

For example, instead of passing all hyper-parameters to the constructor of TensorForestEstimator, they are collected into params in the constructor, filled via params.fill(), and later used by Tensor Forest’s own RandomForestGraphs to construct the whole graph.

Introduction to TensorFlow Estimators

TensorFlow, released by Google, is an open-source numerical computing library for implementing production-ready machine learning models as well as for experimenting with novel architectures.

Its flexible architecture lets users create and deploy machine learning and deep learning models on CPUs, GPUs, distributed clusters, and even mobile devices.

Google uses TensorFlow for search ranking, computer vision (the Inception model), speech recognition, YouTube recommendations, machine translation in Google Translate, and many other areas.

In this article, we’ll explore TensorFlow and work on a regression problem: predicting Airbnb rental listing prices from the Boston Airbnb Open Data.

We’ll learn about basic TensorFlow concepts like tensors and the computational graph, learn how to execute simple programs, and implement a linear regression model from scratch first.

Airbnb is an online marketplace that helps people lease or rent short-term lodging, including vacation and apartment rentals, homestays, and hotels. The data is publicly released on Kaggle.

However, it does expect basic programming skills in Python and familiarity with general machine learning workflow concepts such as feature preprocessing, loss functions, and model training and evaluation.

Machine learning is advancing at a rapid rate, and to remain relevant a good machine learning framework has to strike a balance between flexibility and simplicity.

To implement novel architectures created by researchers, ML frameworks should be extensible and flexible, but regular users often want built-in models that they can readily use on their own datasets.

So TensorFlow has to serve several classes of users with varied interests: users who want to build custom models, users who want to use common models, and users who don’t care much about the specifics of a model but want to integrate its results into their own code infrastructure.

For users who just want the common models, TensorFlow provides pre-made Estimators, or “canned Estimators”, which are implementations of common machine learning models.

For example:

a = 3 (treated as a 0-dimensional tensor, or scalar)
a = [3, 5] (treated as a 1-dimensional tensor, or vector)
a = [[3, 5], [1, 1]] (treated as a 2-dimensional tensor, or matrix)

These tensors are passed to operations that perform computations on them.
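These ranks can be checked directly; here is a quick sketch with NumPy, whose ndarray shapes follow the same conventions as TensorFlow tensors:

```python
import numpy as np

a0 = np.array(3)            # rank 0: a scalar
a1 = np.array([3, 5])       # rank 1: a vector
a2 = np.array([[3, 5],
               [1, 1]])     # rank 2: a matrix

print(a0.ndim, a1.ndim, a2.ndim)  # 0 1 2
```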

We will define two constant tensors a and b with tf.constant, holding the values 5 and 3, and add them up with tf.add as shown in the computational graph.

We can think of TensorFlow core programs as having two distinct sections: first we define a computational graph that specifies the computations we want to perform, then we run the graph to get our actual results.
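A minimal sketch of these two phases, written against the TF 1.x graph-mode API (on TensorFlow 2 it is reached through the tf.compat.v1 compatibility layer):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use graph mode (TF 2 compat layer)

# Phase 1: define the computational graph -- nothing is computed yet.
a = tf.constant(5)
b = tf.constant(3)
c = tf.add(a, b)

# Phase 2: run the graph inside a session to get the actual result.
with tf.compat.v1.Session() as sess:
    print(sess.run(c))  # 8
```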

Since in machine learning we want to update the parameters of a model during training, constants whose values never change are not enough; we need a mechanism to add trainable parameters to the computational graph.

The graph can also be fed external inputs using placeholders, so that we can feed an arbitrary number of inputs from the training set into the model.
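A small sketch combining both ideas, again in TF 1.x-style graph mode (via tf.compat.v1 on TensorFlow 2); the values are illustrative:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A trainable parameter: its value can be updated during training.
w = tf.Variable(2.0, name="w")

# A placeholder: an external input supplied at run time; shape [None]
# lets us feed any number of examples from the training set.
x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = w * x

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]
```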

TensorBoard is a visualization tool that comes packaged with TensorFlow. It’s very useful for visualizing large-scale machine learning models to debug them and understand what’s going on under the hood.

When creating the summary writer, we pass it the directory where our graph log files will be saved and the computational graph we want to record.

sess.graph contains the default computational graph for the session, and the writer saves it into the directory given by the logdir parameter.

In a simple dataset with only one feature and one output to predict, the equation takes the form y = W·x + b. We can see that for different values of the input x we can get predictions by using this equation.

We try to find the best possible values for the weight and bias parameters by optimizing against a loss function, so that the resulting line fits the data. Loss functions tell us how good a predicted value is compared to the actual output.
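Sketching this in plain Python with illustrative numbers, using mean squared error as the loss:

```python
# A single-feature linear model: prediction = w * x + b.
def predict(w, b, x):
    return w * x + b

# Mean squared error: the average squared gap between predictions and targets.
def mse_loss(w, b, xs, ys):
    return sum((predict(w, b, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # generated by y = 2x, so w=2, b=0 is perfect

print(mse_loss(2.0, 0.0, xs, ys))  # 0.0  (perfect fit)
print(mse_loss(1.0, 0.0, xs, ys))  # 7.5  (worse parameters -> larger loss)
```

Training searches for the (w, b) pair that drives this loss down, which is exactly what an optimizer does over the computational graph.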

As mentioned earlier, Estimators are a high-level API integrated with TensorFlow that lets us work with pre-implemented models and provides tools for quickly creating new models as needed by customizing them.

Estimators handle all the details of creating computational graphs, initializing variables, training the model, and saving checkpoint and log files for TensorBoard behind the scenes.

TensorFlow offers pre-made model implementations for this and provides feature columns for representing our features in different ways.
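For example, a canned LinearRegressor can be wired up with a feature column; the column name and toy numbers below are illustrative, not taken from the Boston dataset:

```python
import tensorflow as tf

# Declare how each raw input should be represented.
feature_columns = [tf.feature_column.numeric_column("sqft")]

# A canned estimator: graph construction, variable initialization,
# checkpointing and TensorBoard logging all happen behind the scenes.
model = tf.estimator.LinearRegressor(feature_columns=feature_columns)

def input_fn():
    # Three toy examples: square footage -> nightly price.
    features = {"sqft": [800.0, 1200.0, 1500.0]}
    labels = [100.0, 150.0, 190.0]
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(3).repeat(5)

model.train(input_fn=input_fn)
```

After a few training steps the estimator has already written a checkpoint and TensorBoard event files to its model directory, with no session code on our side.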

Predicting Income with the Census Income Dataset

The Census Income Data Set contains over 48,000 samples with attributes including age, occupation, education, and income (a binary label, either >50K or <=50K).

The wide model is able to memorize interactions among features in the training data, but it is not able to generalize these learned interactions to new data.

The wide and deep model truly shines on larger datasets with high-cardinality features, where a feature may have millions or billions of unique possible values (memorizing these is the specialty of the wide model).
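A sketch of how the two sides are combined with the canned DNNLinearCombinedClassifier, using two columns named after attributes of the census dataset (the exact feature setup and bucket sizes in the official sample differ):

```python
import tensorflow as tf

# Wide (linear) side: a sparse, high-cardinality feature the model can memorize.
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    "occupation", hash_bucket_size=1000)

# Deep side: dense inputs that generalize, including an embedding of the
# sparse occupation column.
age = tf.feature_column.numeric_column("age")
deep_columns = [age, tf.feature_column.embedding_column(occupation, dimension=8)]

model = tf.estimator.DNNLinearCombinedClassifier(
    linear_feature_columns=[occupation],
    dnn_feature_columns=deep_columns,
    dnn_hidden_units=[100, 50])
```

The linear side and the DNN side are trained jointly, so memorization and generalization reinforce each other in a single model.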

It allows you to move from single-worker training to distributed training, and it makes it easy to export model binaries for prediction.

You can run the code locally; the model is saved to /tmp/census_model by default, which can be changed using the --model_dir flag.

You can also experiment with the -inter and -intra flags to explore inter-/intra-op parallelism for potentially better performance. Note that these optional settings do not affect model accuracy.

You can export the model into the TensorFlow SavedModel format using the --export_dir argument. After the model finishes training, use saved_model_cli to inspect and execute the SavedModel.

You can also run this model on Cloud ML Engine, which provides hyperparameter tuning to maximize your model's results and enables deploying your model for prediction.

Writing your own Machine Learning models using TensorFlow - TF Workshop - Session 3

TensorFlow workshop is a three-part series instructed by Dr. Ashish Tendulkar in Chennai, India. In session 3, Ashish explains custom model building with TF ...

Distributed TensorFlow (TensorFlow Dev Summit 2017)

TensorFlow gives you the flexibility to scale up to hundreds of GPUs, train models with a huge number of parameters, and customize every last detail of the ...

On-device machine learning: TensorFlow on Android (Google Cloud Next '17)

In this video, Yufeng Guo applies deep learning models to local prediction on mobile devices. Yufeng shows you how to use TensorFlow to implement a ...

Adapting to video feed - TensorFlow Object Detection API Tutorial p.2

Welcome to part 2 of the TensorFlow Object Detection API tutorial. In this tutorial, we're going to cover how to adapt the sample code from the API's github repo to ...

A Guide to CoreML on iOS

Apple's newly released CoreML framework makes it super simple for developers to run inference of pre-trained models on their iOS devices. Let's talk about ...

Lecture 15: Coreference Resolution

Lecture 15 covers what is coreference via a working example. Also includes research highlight "Summarizing Source Code", an introduction to coreference ...

Lecture 3 | Loss Functions and Optimization

Lecture 3 continues our discussion of linear classifiers. We introduce the idea of a loss function to quantify our unhappiness with a model's predictions, and ...

Lecture 13 | Generative Models

In Lecture 13 we move beyond supervised learning, and discuss generative modeling as a form of unsupervised learning. We cover the autoregressive ...

Continuously Train & Deploy Spark ML and Tensorflow AI Models from Jupyter Notebook to Production

Notebook: ...

Lecture 14 | Deep Reinforcement Learning

In Lecture 14 we move from supervised learning to reinforcement learning (RL), in which an agent must learn to interact with an environment in order to ...