AI News: Transitioning from Academic Machine Learning to AI in Industry
When machine learning and deep learning are employed to solve business problems, you must design systems that account for the overall business operations.
Jupyter notebooks, while wildly popular for rapidly prototyping deep learning models, are not meant to be deployed in production.
For this reason, academics should push themselves to build structured ML modules that follow best practices and demonstrate they can build solutions that others can use.
Action Items: Academics often debug code in an ad hoc manner, but building AI products requires a shift toward using a testing framework to systematically check whether systems are functioning correctly.
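As a minimal illustration of that shift, an ad hoc print-and-inspect workflow can be replaced with small, repeatable pytest-style tests. The `normalize` helper here is hypothetical, just something concrete to test:

```python
import numpy as np

def normalize(features):
    """Scale each column to zero mean and unit variance."""
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / std

# Systematic checks instead of eyeballing output: each test encodes an
# invariant the preprocessing step must satisfy.
def test_normalize_preserves_shape():
    x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    assert normalize(x).shape == x.shape

def test_normalize_zero_mean_unit_std():
    x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    out = normalize(x)
    assert np.allclose(out.mean(axis=0), 0.0)
    assert np.allclose(out.std(axis=0), 1.0)

test_normalize_preserves_shape()
test_normalize_zero_mean_unit_std()
```

In a real project these functions would live in a `tests/` directory and be discovered automatically by `pytest`, so they run on every change rather than only when someone remembers to check.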
Action Items: No matter what company you join, you will have to access their often large data stores to provide the training and testing data you need for your experiments and model building.
To demonstrate industry know-how, academics should show that they can (1) query from large datasets and (2) construct more efficient datasets for deep learning training.
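A minimal sketch of both skills, using an in-memory SQLite table as a stand-in for a company data store (the `events` table, its columns, and the batch size are all illustrative assumptions):

```python
import sqlite3
import random

# Stand-in for a large company data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (feature1 REAL, feature2 REAL, label INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(random.random(), random.random(), i % 2) for i in range(1000)],
)

def batches(cursor, batch_size=128):
    """Stream query results in fixed-size batches instead of loading everything."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield rows

# (1) Query only the rows the experiment needs...
cursor = conn.execute("SELECT feature1, feature2, label FROM events WHERE label = 1")

# (2) ...and stream them in batches suitable for a training loop.
total = sum(len(batch) for batch in batches(cursor))
print(total)  # number of positive examples retrieved
```

The same pattern (select only what you need, then stream it in batches) carries over to real warehouses such as BigQuery or a `tf.data` input pipeline.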
Google’s AutoML lets you train custom machine learning models without having to code
For now the service only supports computer vision models, but Google plans to expand this custom ML model builder under the AutoML brand, and you can expect similar versions of AutoML for all the standard ML building blocks in its repertoire (think speech, translation, video, natural language, etc.).
The basic idea here, Google says, is to allow virtually anybody to upload their images (and import their tags or create them in the app) and then have Google’s systems automatically create a custom machine learning model for them.
The company says that Disney, for example, has used this system to make the search feature in its online store more robust because it can now find all the products that feature a likeness of Lightning McQueen and not just those where your favorite talking race car was tagged in the text description.
Build and train machine learning models on our new Google Cloud TPUs
We’re excited to announce that our second-generation Tensor Processing Units (TPUs) are coming to Google Cloud to accelerate a wide range of machine learning workloads, including both training and inference.
These breakthroughs required enormous amounts of computation, both to train the underlying machine learning models and to run those models once they’re trained (this is called “inference”).
Training a machine learning model is even more difficult than running it, and days or weeks of computation on the best available CPUs and GPUs are commonly required to reach state-of-the-art levels of accuracy.
However, this wasn’t enough to meet our machine learning needs, so we designed an entirely new machine learning system to eliminate bottlenecks and maximize overall performance.
How to build a processor for machine learning
Graphcore CTO Simon Knowles talks about hardware for machine intelligence with Project Juno’s Libby Kinsey, after presenting at their inaugural Machine Intelligence Showcase in London.
We’ve heard that training machine learning models is prohibitively expensive for startups and academics, largely due to the cost of renting or buying hardware.
The results from one recent Google paper were estimated to cost $13k to reproduce; that’s just the final model, not the whole experimentation and hyperparameter optimisation caboodle.
Equally, there are intelligence tasks (training, inference, or prediction) that would ideally happen on a cellphone or remote sensor but are too compute-constrained locally, so they currently rely on uploading data to the cloud for processing.
In particular, machine intelligence permits and requires massively parallel processing, but the design of parallel processors and methods of programming them are nascent arts.
The first is approximate computing – efficiently finding probably good answers where perfect answers are not possible, usually because there is insufficient information, time, or energy.
So baking today’s favourite models or learning algorithms into fixed hardware, by building an application specific integrated circuit (ASIC) would be foolish.
Instead, what is required is silicon architecture that efficiently supports the essential new characteristics of intelligence as a workload, yet is flexible enough to maintain utility as the details evolve.
If we think of the Central Processing Unit (CPU) in your laptop as being designed for scalar-centric control tasks, and the Graphics Processing Unit (GPU) as being designed for vector-centric graphics tasks, then this new class of processor would be an Intelligence Processing Unit (IPU), designed for graph-centric intelligence tasks.
But only a subset of machine intelligence is amenable to wide vector machines, and the high arithmetic precision required by graphics is far too wasteful for the probability processing of intelligence.
It’s especially exciting to see Google advocating tailored processor design for machine learning — they wouldn’t do chip design if it didn’t make a big difference to what they can deliver in application performance, and ultimately cost.
But a basic analysis of energy efficiency immediately concludes that an electrical spike (two edges) is half as efficient for information transmission as a single edge, so following the brain is not automatically a good idea.
A life-long tech enthusiast, she has master's degrees in maths and machine learning and spent 10 years investing in deep tech startups.
- On Tuesday, March 26, 2019
Building Custom AI Models on Azure using TensorFlow and Keras: Build 2018
Learn how to simplify your Machine Learning workflow by using the experimentation, model management, and deployment services from AzureML.
Hello World - Machine Learning Recipes #1
Six lines of Python is all it takes to write your first machine learning program! In this episode, we'll briefly introduce what machine learning is and why it's ...
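The six-line program the series opens with is commonly reproduced along these lines, a toy scikit-learn classifier where the fruit data (weight in grams, texture) is purely illustrative:

```python
from sklearn import tree

# Toy training data: [weight in grams, texture (1 = smooth, 0 = bumpy)]
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]  # 0 = apple, 1 = orange

clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)
print(clf.predict([[160, 0]]))  # a heavy, bumpy fruit: classified as orange
```

The point of the episode is that the classifier is learned from examples rather than written as hand-coded rules.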
Intro to machine learning on Google Cloud Platform (Google I/O '18)
There are revolutionary changes happening in hardware and software that are democratizing machine learning (ML). Whether you're new to ML or already an ...
Andreas Mueller - Machine Learning with Scikit-Learn
PyData Amsterdam 2016 Description Scikit-learn has emerged as one of the most popular open source machine learning toolkits, now widely used in academia ...
The 7 Steps of Machine Learning
How can we tell if a drink is beer or wine? Machine learning, of course! In this episode of Cloud AI Adventures, Yufeng walks through the 7 steps involved in ...
How to Predict Stock Prices Easily - Intro to Deep Learning #7
We're going to predict the closing price of the S&P 500 using a special type of recurrent neural network called an LSTM network. I'll explain why we use ...
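The data-preparation step that feeds such an LSTM can be sketched in plain NumPy; `make_windows` is a hypothetical helper, and the series and lookback length are illustrative stand-ins for real closing prices:

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D price series into (samples, lookback) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X), np.array(y)

prices = np.linspace(100.0, 110.0, 50)  # stand-in for real closing prices
X, y = make_windows(prices, lookback=10)
print(X.shape, y.shape)  # (40, 10) (40,)
```

Adding a trailing feature axis with `X[..., None]` then gives the `(samples, timesteps, features)` shape that a Keras LSTM layer expects.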
Predicting the Winning Team with Machine Learning
Can we predict the outcome of a football game given a dataset of past games? That's the question that we'll answer in this episode by using the scikit-learn ...
Open Source TensorFlow Models (Google I/O '17)
Come to this talk for a tour of the latest open source TensorFlow models for Image Classification, Natural Language Processing, and Computer Generated ...
Generative Adversarial Nets - Fresh Machine Learning #2
This episode of Fresh Machine Learning is all about a relatively new concept called a Generative Adversarial Network. A model continuously tries to fool another ...
Random Forest in R - Classification and Prediction Example with Definition & Steps
Provides steps for applying random forest to do classification and prediction.
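The R workflow described translates almost directly to Python; a minimal scikit-learn sketch on the built-in Iris dataset (a stand-in for the video's data) looks like this:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split the data, fit an ensemble of 100 trees, then score held-out accuracy.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print(round(rf.score(X_test, y_test), 2))
```

As in the R version, each tree votes on a class, and `feature_importances_` on the fitted model gives the variable-importance view the video walks through.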