AI News, Retraining a state-of-the-art image classification Neural Network to classify your own images in TensorFlow
- On Tuesday, March 6, 2018
tl;dr I contributed code to the Google TensorFlow project on GitHub that adds TensorBoard visualizations to the existing TensorFlow “How to Retrain Inception’s Final Layer for New Categories” tutorial.
The Google TensorFlow project has a great tutorial which shows you how to quickly get started retraining the Inception v3 model to classify images of flowers and then repurpose the code for your own image classification needs.
Before I added TensorBoard summaries to the TensorFlow image classification tutorial, it was not possible to visualize the model architecture or compare model training performance over many training steps.
To take advantage of the TensorBoard visualization capabilities, I added code to the retraining script that allows you to visualize the model training statistics and overall model architecture.
Once you execute the retraining according to the tutorial, visualizing the retraining process and model architecture is as simple as launching TensorBoard. Once TensorBoard is running, selecting the EVENTS tab allows you to visualize the change in model statistics such as accuracy and cross entropy.
NOTE: The following examples are run from the tensorflow/examples/image_retraining directory of the TensorFlow GitHub project.
Example run 1
In this training run, let's set the learning_rate to 0.01.
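To build intuition for what the learning_rate flag controls, here is a minimal, self-contained sketch (not taken from the retraining script) of how a learning rate scales each gradient-descent weight update:

```python
# Illustrative sketch: the learning rate scales how far each
# gradient-descent step moves a weight. Minimising f(w) = w**2,
# whose gradient is 2*w, with learning_rate=0.01.
def sgd_step(weight, gradient, learning_rate):
    """Move the weight a small step against the gradient."""
    return weight - learning_rate * gradient

w = 5.0
for _ in range(100):
    w = sgd_step(w, 2 * w, learning_rate=0.01)
print(round(w, 4))  # w shrinks toward the minimum at 0
```

A larger learning rate converges faster but can overshoot; a smaller one is slower but more stable, which is why it is worth comparing runs with different values in TensorBoard.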
How to Retrain Inception's Final Layer for New Categories
Modern object recognition models have millions of parameters and can take weeks to fully train. Transfer learning shortcuts much of this work by taking a fully-trained model for a set of categories like ImageNet, and retraining only its final layer for new classes. While not as accurate as a full training run, this is surprisingly effective for many applications, and can be run in as little as thirty minutes on a laptop, without requiring a GPU.
(Image by Kelly Sikkema)
Before you start any training, you'll need a set of images to teach the network about the new classes you want to recognize. To download an example archive of flower photos, run these commands:
Once you have the images, you can clone the tensorflow repository using the following command (these examples are not included in the installation):
Then check out the version of the tensorflow repository matching your installation and this tutorial as follows:
In the simplest cases the retrainer can then be run like this:
The script has many other options.
You can get a full listing with:
This script loads the pre-trained Inception v3 model, removes the old top layer, and trains a new one on the photos you've downloaded. The magic of transfer learning is that lower layers that have been trained to distinguish between some objects can be reused for many recognition tasks without any alteration. The script can take thirty minutes or more to complete, depending on the speed of your machine. The first phase analyzes all the images on disk and calculates a 'bottleneck' value for each of them, an informal term for the output of the layer just before the final classification layer. This penultimate layer has been trained to output a set of values that's good enough for the classifier to use to distinguish between all the classes it's been asked to recognize. The reason our final layer retraining can work on new classes is that it turns out the kind of information needed to distinguish between the original ImageNet classes is often also useful for distinguishing new kinds of objects.
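The idea behind retraining only the final layer can be sketched in plain Python: treat each image's bottleneck as a fixed feature vector and fit a small softmax classifier on top. The data and dimensions below are made up for illustration; the real bottlenecks are 2,048-dimensional Inception activations.

```python
import math
import random

random.seed(0)

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Fake cached bottlenecks: class 0 clusters near (1, 0), class 1 near (0, 1).
data = [([1 + random.gauss(0, 0.1), random.gauss(0, 0.1)], 0) for _ in range(20)] \
     + [([random.gauss(0, 0.1), 1 + random.gauss(0, 0.1)], 1) for _ in range(20)]

# One weight row per class (bias omitted for brevity).
weights = [[0.0, 0.0], [0.0, 0.0]]
lr = 0.5
for _ in range(100):
    for features, label in data:
        probs = softmax([sum(w * f for w, f in zip(row, features)) for row in weights])
        for k in range(2):  # gradient of softmax cross-entropy: (p_k - y_k) * x
            err = probs[k] - (1.0 if k == label else 0.0)
            for j in range(2):
                weights[k][j] -= lr * err * features[j]

accuracy = sum(
    max(range(2), key=lambda k: sum(w * f for w, f in zip(weights[k], feats))) == lab
    for feats, lab in data) / len(data)
print(accuracy)
```

Only the small weight matrix is learned; the "lower layers" that produced the features never change, which is what makes the retraining fast.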
Because every image is reused multiple times during training and calculating each bottleneck takes a significant amount of time, it speeds things up to cache these bottleneck values on disk so they don't have to be repeatedly recalculated. If you rerun the script they'll be reused, so you don't have to wait for this part again.
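The caching idea can be sketched as follows. The function and file names here are invented for illustration; the real script has its own cache layout and computes bottlenecks by running each image through Inception's lower layers.

```python
import hashlib
import json
import tempfile
from pathlib import Path

# Hypothetical cache location for this demo only.
CACHE_DIR = Path(tempfile.gettempdir()) / "bottleneck_cache_demo"
CACHE_DIR.mkdir(exist_ok=True)

def expensive_bottleneck(image_path: str) -> list:
    # Stand-in for the slow part: running the image through the
    # lower layers of the network to get its feature vector.
    digest = hashlib.sha1(image_path.encode()).digest()
    return [b / 255 for b in digest[:4]]

def cached_bottleneck(image_path: str) -> list:
    cache_file = CACHE_DIR / (hashlib.sha1(image_path.encode()).hexdigest() + ".json")
    if cache_file.exists():          # on a rerun, nothing is recomputed
        return json.loads(cache_file.read_text())
    values = expensive_bottleneck(image_path)
    cache_file.write_text(json.dumps(values))
    return values

first = cached_bottleneck("flowers/daisy/img1.jpg")
second = cached_bottleneck("flowers/daisy/img1.jpg")  # served from disk
print(first == second)
```

Since every image is revisited thousands of times across training steps, paying the expensive computation once per image is a large saving.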
Once the bottlenecks are complete, the actual training of the top layer of the network begins. Each step reports a training accuracy -- what percent of the images used in the current training batch were labeled with the correct class -- and a validation accuracy. The distinction matters because the training accuracy is based on images that the network has been able to learn from, so the network can overfit to the noise in the training data. A truer measure of the performance of the network is to measure its performance on a data set not contained in the training data -- this is measured by the validation accuracy. If the training accuracy is high but the validation accuracy stays low, that means the network is overfitting, memorizing particular features of the training images that don't help it generalize. The steps also report the cross entropy, a loss function that gives a glimpse into how well the learning process is progressing: the training's objective is to make the loss as small as possible, so you can tell if the learning is working by keeping an eye on whether the loss keeps trending downwards, ignoring the short-term noise.
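The two numbers reported each step can be computed as below. This is a generic sketch of batch accuracy and average cross entropy, not code lifted from the retraining script:

```python
import math

def batch_accuracy(probs, labels):
    """Fraction of examples whose highest-probability class is the true label."""
    hits = sum(max(range(len(p)), key=p.__getitem__) == y
               for p, y in zip(probs, labels))
    return hits / len(labels)

def cross_entropy(probs, labels):
    """Average negative log-probability assigned to the true label (lower is better)."""
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

probs = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]  # toy predicted probabilities
labels = [0, 1, 1]                            # the last prediction is wrong
print(batch_accuracy(probs, labels))          # 2 of 3 correct
print(cross_entropy(probs, labels))
```

Note that accuracy only counts hard decisions, while cross entropy also rewards confident correct predictions, which is why the loss can keep improving even when accuracy plateaus.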
By default this script will run 4,000 training steps. Each step chooses ten images at random from the training set, finds their bottlenecks from the cache, and feeds them through the final layer to get predictions. Those predictions are then compared against the actual labels to update the final layer's weights through back-propagation. As the process continues you should see the reported accuracy improve, and after all the steps are done, a final test accuracy evaluation is run on a set of images kept separate from the training and validation pictures. You should see an accuracy value of between 90% and 95%, though the exact value will vary from run to run, since there's randomness in the training process. This number is based on the percent of the images in the test set that are given the correct label after the model is fully trained.
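Structurally, the loop just described looks like the sketch below. The update logic is a placeholder comment; the defaults of 4,000 steps and ten images per batch are taken from the text above:

```python
import random

random.seed(0)
training_set = [f"img_{i}.jpg" for i in range(100)]
images_seen = 0

for step in range(4000):
    batch = random.sample(training_set, 10)  # ten random training images
    # ... look up each image's cached bottleneck, run the final layer,
    # compare predictions to the true labels, and update the weights ...
    images_seen += len(batch)
    if step % 1000 == 0:
        pass  # the real script periodically reports train/validation accuracy

print(images_seen)  # 4000 steps x 10 images per batch
```

Because images are sampled with replacement across steps, each training image is seen many times, which is exactly why the bottleneck cache pays off.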
The script includes TensorBoard summaries that make it easier to understand, debug, and optimize the retraining.
For example, you can visualize the graph and statistics, such as how the weights or accuracy varied during training.
To launch TensorBoard, run this command during or after retraining:
Once TensorBoard is running, navigate your web browser to localhost:6006 to view it.
The script will write out a version of the Inception v3 network with a final layer retrained to your categories, along with a text file containing the labels, in a format the image classification examples can read in, so you can start using your new model immediately. Since you've replaced the top layer, you will need to specify the new output layer name in the script, for example when running the label_image example with your retrained graphs:
You should see a list of flower labels, in most cases with daisy on top (though each retrained model may differ slightly).
If you find the default Inception v3 model is too large or slow for your application, see the other model architectures described below for ways to speed up and slim down your network.
If you've managed to get the script working on the flower example images, you can start looking at teaching it to recognize categories you care about instead. In theory all you'll need to do is point it at a set of sub-folders, each named after one of your categories and containing only images from that category. If you do that and pass the root folder of the subdirectories as the argument to --image_dir, the script should train just as it did on the flowers. Here's what the folder structure of the flowers archive looks like, to give you an example of the layout the script expects:
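The expected layout can be sketched programmatically. The label and file names below are examples only; the key point is one sub-folder per label, with the class list derived from the folder names:

```python
import tempfile
from pathlib import Path

# Build an example --image_dir layout in a temporary directory.
root = Path(tempfile.mkdtemp()) / "my_images"
for label in ["daisy", "roses", "tulips"]:
    folder = root / label
    folder.mkdir(parents=True)
    (folder / "photo1.jpg").touch()  # stand-in for a real photo

# The retraining script derives the class list from the sub-folder names.
labels = sorted(p.name for p in root.iterdir() if p.is_dir())
print(labels)
```

Passing `root` as --image_dir would then train a classifier over exactly those three labels.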
The first place to start is by looking at the images you've gathered, since the most common issues we see with training come from the data that's being fed in. For example, if you take all your photos indoors against a blank wall and your users are trying to recognize objects outdoors, you probably won't see good results when you deploy. If all the images of one class share some visual quirk, the network can end up basing its prediction on the background color, not the features of the object you actually care about. Also keep in mind that the retrained model assumes the only things you'll ever be asked to categorize are the classes of object you know about.
A common way of improving the results of image training is by deforming, cropping, or brightening the training inputs in random ways. This has the advantage of expanding the effective size of the training data thanks to all the possible variations of the same images, and tends to help the network learn to cope with the distortions that occur in real-life uses of the classifier. The biggest disadvantage of enabling these distortions is that the bottleneck caching is no longer useful, since input images are never reused exactly. One of the distortion options will randomly mirror half of the images horizontally, which makes sense as long as those mirrored versions are plausible inputs for your application. You can also control how many images are examined during one training step, and because the learning rate is applied per batch you'll need to reduce it if you use larger batches to get the same overall effect.
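The mirroring distortion mentioned above can be sketched with a toy "image" made of nested lists standing in for real pixel data:

```python
import random

def flip_left_right(image):
    """Mirror an image horizontally by reversing each row of pixels."""
    return [list(reversed(row)) for row in image]

def maybe_flip(image, rng):
    """Mirror roughly half of the images, chosen at random."""
    return flip_left_right(image) if rng.random() < 0.5 else image

image = [[1, 2, 3],
         [4, 5, 6]]
print(flip_left_right(image))  # [[3, 2, 1], [6, 5, 4]]

rng = random.Random(0)
flipped = sum(maybe_flip(image, rng) != image for _ in range(1000))
print(flipped)  # roughly half of the 1000 passes flip the image
```

A flipped daisy is still a daisy, so this is a cheap way to double the effective variety of the training data; it would be a poor choice for classes where orientation matters, such as text.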
A common split is to put 80% of the images into the main training set, keep 10% aside to run as validation frequently during training, and use a final 10% less often as a testing set to predict the real-world performance of the classifier. Don't read too much into individual misclassified images in the test set, since they are likely to merely reflect more general problems in the training data.
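One way to implement such a split, sketched here with a hypothetical helper rather than the script's actual code, is to hash each filename into a stable bucket, so an image never migrates between sets across reruns:

```python
import hashlib

def assign_set(filename, validation_pct=10, test_pct=10):
    """Deterministically map a filename to train/validation/test."""
    bucket = int(hashlib.sha1(filename.encode()).hexdigest(), 16) % 100
    if bucket < validation_pct:
        return "validation"
    if bucket < validation_pct + test_pct:
        return "test"
    return "train"

files = [f"rose_{i}.jpg" for i in range(1000)]
counts = {"train": 0, "validation": 0, "test": 0}
for name in files:
    counts[assign_set(name)] += 1
print(counts)  # roughly 800 / 100 / 100
```

Hashing the name (rather than shuffling) matters because it keeps the validation and test sets fixed even when new images are added later.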
By default the script uses a pretrained version of the Inception v3 model architecture, but it can also train smaller Mobilenet-based models. For example:
This will create a 941KB model file in /tmp/output_graph.pb, with 25% of the parameters of the full Mobilenet, taking 128x128 sized input images, and with its weights quantized. You can pick a fraction to control the number of parameters (and to some extent the speed); '224', '192', '160', or '128' for the input image size, with smaller sizes giving faster speeds; and an optional '_quantized' at the end to indicate whether the file should contain 8-bit or 32-bit float values. The speed and size advantages come at a loss to accuracy of course, but for many purposes that isn't critical. To use the model for inference you'll need to feed in an image of the specified size converted to a float range.
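The float conversion can be sketched as below. The mean and scale of 127.5 are an assumption for illustration (a common convention for Mobilenet-style models that map pixel values into [-1, 1]); check the values your model was trained with:

```python
# Assumed normalization constants, not read from any model file.
INPUT_MEAN = 127.5
INPUT_STD = 127.5

def to_float_input(pixels):
    """Rescale 8-bit pixel values from [0, 255] into floats in [-1, 1]."""
    return [(p - INPUT_MEAN) / INPUT_STD for p in pixels]

print(to_float_input([0, 127.5, 255]))  # [-1.0, 0.0, 1.0]
```

Feeding unnormalized 0-255 values into a model trained on normalized inputs is a classic source of silently poor predictions, so this step is worth double-checking.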
TensorBoard has bad graphs after deleting the model directory and retraining #652
Looking into this more, I think it's best resolved by making event purging for out-of-order steps work again, which was reported separately in #803 and which I have a PR out to implement.
The graphs look weird because when the model script re-evaluates, the new data gets picked up as a continuation of the old data even though it reuses the same step sequence, so the data series 'zig-zags' back to 0 on the horizontal axis and then overlays the old data.
Note that strictly speaking this is a different condition because we're actually redoing the whole dataset from scratch, so it would be more correct to dump the entire set of data TensorBoard has loaded, not just data at a higher step count (since in theory, the new run might not start at step 1).
- On Monday, September 23, 2019
Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6
Monet or Picasso? In this episode, we'll train our own image classifier, using TensorFlow for Poets. Along the way, I'll introduce Deep Learning, and add context and background on why the...
Build a TensorFlow Image Classifier in 5 Min
In this episode we're going to train our own image classifier to detect Darth Vader images. The code for this repository is here:
Tensorflow 18 Saver (neural network tutorials)
This tutorial's code: Once you have built and trained a network using TensorFlow, you can actually..
TensorFlow How to Retrain Inception's Final Layer for New Categories Tutorial
This tutorial demonstrates how to setup and use the "How to Retrain Inception's Final Layer for New Categories" example on the TensorFlow website. The code for this tutorial is available below....
Training/Testing on our Data - Deep Learning with Neural Networks and TensorFlow part 7
Welcome to part seven of the Deep Learning with Neural Networks and TensorFlow tutorials. We've been working on attempting to apply our recently-learned basic deep neural network on a dataset...
Deep Learning Tutorial in Python #7 - How To Use Tensorboard Part1
Hi everyone, in this video we will learn how to use TensorBoard to debug and visualize the network and its node values during the session run. For complete code and documentation visit:
How to Deploy a Tensorflow Model to Production
Once we've trained a model, we need a way of deploying it to a server so we can use it as a web or mobile app! We're going to use the Tensorflow Serving library to help us run a model on a...
Python Computer Vision -- Transfer Learning With Tensorflow #1
In this video, I will show you how to use Tensorflow to do transfer learning. Transfer learning is using a pretrained-model and making some adjustments to the end layers to make the model...
Tutorial on CNN implementation for your own data set in Keras (TF & Theano backend) - Part 1
A complete tutorial on using your own dataset to train a CNN from scratch in Keras (TF & Theano backend) - Part 1. Github: It explains..
Serving Models in Production with TensorFlow Serving (TensorFlow Dev Summit 2017)
Serving is the process of applying a trained model in your application. In this talk, Noah Fiedel describes TensorFlow Serving, a flexible, high-performance ML serving system designed for production...