Amazon Machine Learning: Use Cases and a Real Example in Python

“Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology.” After using AWS Machine Learning for a few hours I can definitely agree with this definition, although I still feel that too many developers have no idea what they could use machine learning for, as they lack the mathematical background to really grasp its concepts.

Here I would like to share my personal experience with this amazing technology, introduce some of the most important, and sometimes misleading, concepts of machine learning, and give this new AWS service a try with an open dataset in order to train and use a real-world AWS Machine Learning model.

In my personal experience, the most crucial and time-consuming part of the job is defining the problem and building a meaningful dataset. The first point may seem trivial, but it turns out that not every problem can be solved with machine learning, even with AWS Machine Learning. Therefore, you will need to understand whether your scenario fits or not.

You might decide to discard some input features in advance and somehow, inadvertently, decrease your model’s accuracy. On the other hand, deciding to keep the wrong column might expose your model to overfitting during training and therefore weaken your new predictions.

For example, if your current dataset mostly contains data about male users, because very few females have signed up, you might end up with an always-negative prediction for every new female user, even though that’s not actually the case.
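As a quick sanity check before training, it is worth inspecting the label distribution of your dataset and, if needed, dropping a problematic column before uploading anything to S3. Here is a minimal pandas sketch; the file name and the column names are hypothetical:

```python
import pandas as pd

# Load the training data (file name and column names are hypothetical).
df = pd.read_csv('dataset.csv')

# Inspect the class balance of the target column: a heavily skewed
# distribution is an early warning sign for biased predictions.
print(df['target'].value_counts(normalize=True))

# Discard a feature you suspect is biasing the model; keep in mind this
# may also throw away useful signal and lower accuracy.
df = df.drop(columns=['gender'])
df.to_csv('dataset_clean.csv', index=False)
```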

You will need to adapt your input data to the kind of simple csv file AWS Machine Learning expects and understand how the input features have been computed, so that you can actually use the model with your own online data to obtain predictions.

It is freely available here. This dataset contains more than 10,000 records, each defined by 560 features and one manually labeled target column, which can take one of six values: walking, walking upstairs, walking downstairs, sitting, standing, and laying. The 560 feature columns are the input data of our model and represent the time- and frequency-domain variables derived from the accelerometer and gyroscope signals.

Also, the usual 70/30 dataset split has already been performed by the dataset authors (you will find four files in total), but in our case, AWS Machine Learning will do all of that for us, so we want to upload the whole set as one single csv file.
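That merge step takes only a few lines of Python. The sketch below assumes the standard file layout of the dataset archive (whitespace-separated X_train.txt/X_test.txt for the features, y_train.txt/y_test.txt for the labels):

```python
import pandas as pd

# Features are whitespace-separated; labels are one integer per line.
X_train = pd.read_csv('train/X_train.txt', sep=r'\s+', header=None)
y_train = pd.read_csv('train/y_train.txt', header=None)
X_test = pd.read_csv('test/X_test.txt', sep=r'\s+', header=None)
y_test = pd.read_csv('test/y_test.txt', header=None)

# Undo the authors' 70/30 split: AWS Machine Learning will perform its
# own training/evaluation split for us.
X = pd.concat([X_train, X_test], ignore_index=True)
y = pd.concat([y_train, y_test], ignore_index=True)

# Name each feature column by its index ("Var001", "Var002", ...) and
# append the target column, then dump everything to a single csv file.
X.columns = ['Var%03d' % (i + 1) for i in range(X.shape[1])]
X['target'] = y[0]
X.to_csv('har_full.csv', index=False)
```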

For example, we could add more “standing” records to help the model distinguish it from “sitting.” Or we might even find out that our data is wrong or biased by some experimental assumptions, in which case we’ll need to come up with new ideas or solutions to improve the data.

In this specific case, we would need to sit down and study how those 560 input features have been computed, code the same logic into our mobile app, and then call our AWS Machine Learning model to obtain an online prediction for the given record.

In order to simplify this demo, let’s assume that we have already computed the features vector, we’re using Python on our server, and we have installed the well-known boto3 library.

That way, you’ll avoid having to deal with meaningless input names such as “Var001” and “Var002.” In my Python script below, I am reading the features record from a local file and generating names based on the column index (you can find the full commented code and the record.csv file here).
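Here is a condensed sketch of that script; the model ID and the endpoint URL are placeholders you would replace with the values shown on your AWS Machine Learning dashboard (or returned by get_ml_model):

```python
import boto3

# Read the precomputed 560-value features record from a local csv file
# and name each value by its column index, matching the training schema.
with open('record.csv') as f:
    values = f.read().strip().split(',')
record = {'Var%03d' % (i + 1): v for i, v in enumerate(values)}

client = boto3.client('machinelearning')

# Ask the real-time endpoint for a prediction on this single record.
response = client.predict(
    MLModelId='ml-XXXXXXXXXXXX',  # placeholder: your model ID
    Record=record,
    PredictEndpoint='https://realtime.machinelearning.us-east-1.amazonaws.com',
)

# For a multiclass model, the predicted activity label is returned here.
print(response['Prediction']['predictedLabel'])
```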

While there are a million use cases with datasets unique to a variety of specific contexts, AWS Machine Learning successfully manages the process to allow you to focus just on your data, without wasting your time trying tons of models and dealing with boring math.

At the moment this is quite painful, as you would need to upload a brand new source to S3 and go through the whole training/testing process every time, ending up with N models, N evaluations, and N*3 data sources on your AWS Machine Learning dashboard.
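That said, the repetitive steps can at least be scripted with the same boto3 client, so each iteration becomes one command instead of a dozen console clicks. A rough sketch, where every ID, bucket path, and the schema location are placeholders:

```python
import boto3

client = boto3.client('machinelearning')

# 1. Register the new csv file (already uploaded to S3) as a data source.
client.create_data_source_from_s3(
    DataSourceId='ds-har-v2',
    DataSourceName='HAR dataset v2',
    DataSpec={
        'DataLocationS3': 's3://my-bucket/har_full.csv',
        'DataSchemaLocationS3': 's3://my-bucket/har_full.csv.schema',
    },
    ComputeStatistics=True,  # required for data sources used in training
)

# 2. Train a new multiclass model from that data source.
client.create_ml_model(
    MLModelId='ml-har-v2',
    MLModelName='HAR model v2',
    MLModelType='MULTICLASS',
    TrainingDataSourceId='ds-har-v2',
)

# 3. Evaluate it; a held-out data source would be the proper choice here.
client.create_evaluation(
    EvaluationId='ev-har-v2',
    EvaluationName='HAR evaluation v2',
    MLModelId='ml-har-v2',
    EvaluationDataSourceId='ds-har-v2',
)

# All three calls are asynchronous: each entity starts in PENDING status.
```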
