Amazon Machine Learning: Use Cases and a Real Example in Python
“Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology.” After using AWS Machine Learning for a few hours, I can definitely agree with this definition, although I still feel that too many developers have no idea what they could use machine learning for, as they lack the mathematical background to really grasp its concepts.

Here I would like to share my personal experience with this amazing technology, introduce some of the most important, and sometimes misleading, concepts of machine learning, and give this new AWS service a try with an open dataset in order to train and use a real-world AWS Machine Learning model.

Luckily, AWS has done a great job of creating documentation that makes it easy for anyone to understand what machine learning is, when it can be used, and what you need to build a useful model. You should check out the official AWS tutorial and its ready-to-use dataset.
In my personal experience, the most crucial and time-consuming part of the job is defining the problem and building a meaningful dataset, which actually means:

- understanding whether your problem can be solved with machine learning at all;
- deciding which input features to include in your dataset and which to discard.

The first point may seem trivial, but it turns out that not every problem can be solved with machine learning, even with AWS Machine Learning, so you will need to understand whether your scenario fits or not. The second point is important as well, since you will often discover unpredictable correlations between your input data (or “features”) and the target value (i.e., the column you are trying to classify or predict). You might decide to discard some input features in advance and somehow, inadvertently, decrease your model’s accuracy. On the other hand, deciding to keep the wrong column might expose your model to overfitting during training and therefore weaken your new predictions.

For example, let’s say that you are trying to predict whether your registered users will pay for your product or not, and you include their “gender” field in the model. If your current dataset mostly contains data about male users, since very few females have signed up, you might end up with an always-negative prediction for every new female user, even though that’s not actually the case. In a few words, overfitting simply means creating a model that is too specific to your current dataset and will not behave well with new data.
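As a toy illustration of this pitfall (the data and the deliberately naive per-group model below are invented for this example, not taken from the article), consider a "model" that simply memorizes the observed payment rate per gender:

```python
# Hypothetical dataset of (gender, paid) pairs, heavily skewed toward male
# users, with only two female sign-ups, neither of whom has paid yet.
users = [
    ("M", True), ("M", False), ("M", True), ("M", True), ("M", False),
    ("M", True), ("M", True), ("M", False), ("M", True), ("M", True),
    ("F", False), ("F", False),
]

def train_per_group_rate(data):
    """Overfit on purpose: memorize the observed payment rate per gender."""
    totals, paid = {}, {}
    for gender, did_pay in data:
        totals[gender] = totals.get(gender, 0) + 1
        paid[gender] = paid.get(gender, 0) + (1 if did_pay else 0)
    return {g: paid[g] / totals[g] for g in totals}

def predict(model, gender):
    """Predict 'will pay' if the memorized rate for that gender is >= 0.5."""
    return model[gender] >= 0.5

model = train_per_group_rate(users)
print(predict(model, "M"))  # True: 7 of 10 male users paid
print(predict(model, "F"))  # False for every female user, based on just 2 samples
```

The model predicts "won't pay" for every future female user on the strength of two data points, which is exactly the kind of behavior a feature like "gender" can smuggle into a skewed dataset.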
The dataset I used is freely available here. It contains more than 10,000 records, each defined by 560 features and one manually labeled target column identifying the recorded activity. The 560 feature columns are the input data of our model and represent the time- and frequency-domain variables derived from the accelerometer and gyroscope signals.
Also, the usual 70/30 dataset split has already been performed by the dataset authors (you will find four files in total), but in our case AWS Machine Learning will do all of that for us, so we want to upload the whole set as one single CSV file. I coded a tiny Python script to convert the four matrix-formatted files into a single comma-separated file.
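A minimal sketch of such a conversion, assuming the four files are whitespace-separated matrices (the file names and the helper function are illustrative, not the article's actual script):

```python
import csv

def merge_to_csv(feature_files, label_files, out_path):
    """Read whitespace-separated feature/label matrices and write one CSV.

    Each feature file holds one record per line (560 values in the original
    dataset); the matching label file holds one target value per line, which
    is appended as the last column of the output.
    """
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for x_path, y_path in zip(feature_files, label_files):
            with open(x_path) as xf, open(y_path) as yf:
                for x_line, y_line in zip(xf, yf):
                    row = x_line.split()          # feature values
                    row.append(y_line.strip())    # target column last
                    writer.writerow(row)

# Example, with file names matching the public dataset layout:
# merge_to_csv(["X_train.txt", "X_test.txt"],
#              ["y_train.txt", "y_test.txt"],
#              "full_dataset.csv")
```

Merging train and test back together is fine here precisely because AWS Machine Learning performs its own split on upload.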
In this specific case, we would need to sit down and study how those 560 input features have been computed, implement the same computation in our mobile app, and then call our AWS Machine Learning model to obtain an online prediction for the given record. In order to simplify this demo, let’s assume that we have already computed the features vector, we’re using Python on our server, and we have installed the well-known boto3 library. All we need to obtain a new prediction is the Model ID and its Prediction Endpoint.
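The real-time call goes through the `machinelearning` boto3 client's `predict` operation; a minimal sketch (the model ID, endpoint URL, and feature names below are placeholders):

```python
def get_prediction(client, model_id, endpoint, record):
    """Request a real-time prediction for one record.

    `record` must map feature names to *string* values, as required by the
    Amazon ML Predict API; `client` is a boto3 'machinelearning' client.
    """
    response = client.predict(
        MLModelId=model_id,
        Record=record,
        PredictEndpoint=endpoint,
    )
    return response["Prediction"]["predictedLabel"]

# Placeholder usage:
# import boto3
# client = boto3.client("machinelearning")
# label = get_prediction(
#     client,
#     model_id="ml-XXXXXXXXXXX",
#     endpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
#     record={"feature1": "0.2873", "feature2": "-0.0145"},
# )
```

For a multi-class model like this one, `predictedLabel` carries the winning class, and the full per-class scores are also available in the `Prediction` structure.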
While there are a million use cases with datasets unique to a variety of specific contexts, AWS Machine Learning successfully manages the process so you can focus just on your data, without wasting your time trying tons of models and dealing with boring math. I am personally very curious to see how this service will evolve and how AWS users will exploit its features. Moreover, I would like to see how AWS will handle another typical need: keeping the model updated with new observations.
Build, Train, and Deploy a Machine Learning Model
In this tutorial, you will learn how to use Amazon SageMaker to build, train, and deploy a machine learning (ML) model in Python 3 using the popular XGBoost algorithm.
Amazon SageMaker removes the complexities that typically slow down ML development, making it easy to build ML models by providing everything you need to quickly connect to your training data and select the best algorithm and framework for your application, while managing all of the underlying infrastructure, so you can train models at petabyte scale.
To build a machine learning model with Amazon SageMaker in this tutorial, you will create a notebook instance, prepare the data, train the model to learn from the data, deploy the model, and then evaluate your machine learning model's performance.
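Those steps follow the fit/deploy pattern of the SageMaker Python SDK; a generic sketch (the helper, its arguments, and the commented SDK snippet are illustrative assumptions, not the tutorial's actual code):

```python
def train_deploy_evaluate(estimator, train_data, test_records, test_labels):
    """Fit the model, deploy an endpoint, then measure simple accuracy.

    `estimator` is any object exposing fit(inputs) and deploy(...) in the
    style of the SageMaker Python SDK; deploy() must return a predictor
    with a predict(record) method.
    """
    estimator.fit(train_data)
    predictor = estimator.deploy(initial_instance_count=1,
                                 instance_type="ml.t2.medium")
    correct = sum(
        1 for record, label in zip(test_records, test_labels)
        if predictor.predict(record) == label
    )
    return correct / len(test_labels)

# With the real SDK, building the XGBoost estimator looks roughly like
# this (untested sketch; role, bucket, and region are placeholders):
# import sagemaker
# from sagemaker.estimator import Estimator
# container = sagemaker.image_uris.retrieve("xgboost", "us-east-1", "1.7-1")
# estimator = Estimator(container, role="...", instance_count=1,
#                       instance_type="ml.m5.xlarge",
#                       output_path="s3://my-bucket/output")
```

Keeping the evaluation step separate from training and deployment mirrors how the tutorial walks through the workflow one stage at a time.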
Amazon Machine Learning Product Details
Amazon Machine Learning is a managed service for building ML models and generating predictions, enabling the development of robust, scalable smart applications.
Amazon Machine Learning combines powerful machine learning algorithms with interactive visual tools to guide you towards easily creating, evaluating, and deploying machine learning models.
Once a model is built, the service's intuitive model evaluation and fine-tuning console helps you understand its strengths and weaknesses and adjust its performance to meet your business objectives.
Registry of Open Data on AWS
The data is stored in columnar storage formats (ORC) to make it straightforward to query using standard tools like Amazon Athena or Apache Spark.
The data is intended to support the development of decision-support tools for farmers and digital agriculture. It has been converted to ORC to optimize storage space and, more importantly, to simplify data access via standard data analytics tools.
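Querying such ORC data with Amazon Athena from Python might look roughly like this (the database, table, and bucket names are placeholders; the helper wraps the standard boto3 Athena calls):

```python
import time

def run_athena_query(client, sql, database, output_s3):
    """Start an Athena query, poll until it finishes, and return the rows.

    `client` is a boto3 'athena' client; `output_s3` is the S3 location
    where Athena writes result files.
    """
    query_id = client.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        state = client.get_query_execution(
            QueryExecutionId=query_id
        )["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"query ended in state {state}")
    return client.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

# Placeholder usage:
# import boto3
# rows = run_athena_query(boto3.client("athena"),
#                         "SELECT * FROM crops LIMIT 10",
#                         database="agriculture_db",
#                         output_s3="s3://my-bucket/athena-results/")
```

Because Athena reads ORC natively, no conversion step is needed before running standard SQL over the registry data.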
The Best Way to Prepare a Dataset Easily
In this video, I go over the three steps you need to prepare a dataset to be fed into a machine learning model: selecting the data, processing it, and transforming it.
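Those three steps can be sketched in plain Python (the tiny dataset, field names, and cleaning choices are invented for illustration):

```python
# Hypothetical raw records, e.g. parsed from a CSV export.
raw = [
    {"age": "34", "income": "52000", "clicked": "1", "notes": "vip"},
    {"age": "",   "income": "48000", "clicked": "0", "notes": ""},  # missing age
    {"age": "29", "income": "61000", "clicked": "1", "notes": "trial"},
]

# 1. Select: keep only the fields the model will use.
selected = [{k: row[k] for k in ("age", "income", "clicked")} for row in raw]

# 2. Process: drop records with missing values (one common strategy;
#    imputation is another).
processed = [row for row in selected if all(v != "" for v in row.values())]

# 3. Transform: cast to numbers and rescale income to thousands.
transformed = [
    {"age": int(r["age"]),
     "income_k": int(r["income"]) / 1000,
     "clicked": int(r["clicked"])}
    for r in processed
]
print(transformed)
```

The same select/process/transform shape applies whatever tooling you use; only the cleaning and scaling rules change per dataset.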
Live Coding with AWS | Training and Deploying AI with Amazon SageMaker
Check out the upcoming schedule, previous recordings, and links to the resources discussed. Build a model to predict a time series ...
Intro to Amazon Machine Learning
The Amazon Web Services Machine Learning platform is finely tuned to enable even the most inexperienced data scientist to build and deploy predictive models ...
Live Coding with AWS | Machine Learning
Join AWS on Twitch every week for interactive live coding ...
Getting started with the AWS Deep Learning AMI
Twitter: @julsimon - What is the AWS Deep Learning AMI? - Running ...
Demystifying Machine Learning on AWS (Level 200)
Learn more about the AWS Innovate Online Conference. Machine learning (ML) is having a major impact on society, so how can ...
Deep Learning on AWS with TensorFlow - 2017 AWS Online Tech Talks
Learning Objectives: - Learn how to set up your deep learning instance using the AWS Deep Learning AMI - Learn the fundamentals of the TensorFlow and ...
Building Real-Time Data Analytics Applications on AWS - September 2016 Webinar Series
Evolving your analytics from batch processing to real-time processing can have a major impact on your business. However, designing and maintaining ...
Deep Learning for Data Scientists: Using Apache MXNet and R on AWS
Learning Objectives: - Deploy a Data science environment in minutes with the AWS Deep Learning AMI - Getting started with Apache MXNet on R - Train and ...
Deploying Python Machine Learning Models in Production | SciPy 2015 | Krishna Sridhar