AI News

Getting Value from Machine Learning Isn’t About Fancier Algorithms — It’s About Making It Easier to Use

Machine learning can drive tangible business value for a wide range of industries — but only if it is actually put to use.

Despite the many machine learning discoveries being made by academics, new research papers showing what is possible, and an increasing amount of data available, companies are struggling to deploy machine learning to solve real business problems.

Our team decided to address this problem by finding patterns in a complex volume of data, building machine learning models, and using them to anticipate the occurrence of critical problems.

The model identified red flags that might indicate an upcoming problem in project performance, including increases in the average time spent resolving a bug and in backlog processing and resolution time.

This lead time allows service provider teams to determine the nature of the upcoming problem, identify the areas that would be impacted, and take remedial actions to prevent it from occurring at all.

Currently, the AI project manager (tested and integrated in Accenture’s myWizard Automation Platform across delivery projects) serves predictions on a weekly basis and correctly predicts red flags 80% of the time, helping to improve KPIs related to project delivery.

The next step for the project will be to use the same data to create models that can predict cost overruns, delays in the delivery schedule, and other aspects of project execution that are critical to an organization's business performance.

Building this system did not call for fancier algorithms. Instead, our biggest requirements were for a robust software engineering practice, automation that allowed domain experts to come in at the right level, and tools that could support comprehensive model testing.

Greater involvement of domain experts: Domain experts determined key variables — for instance, which specific events posed a risk to project performance, how far ahead the model had to be able to predict for the information to be valuable, and which past projects should be used to train the model.

Domain experts are often better than machines at suggesting patterns that hold predictive power — for example, an increase in the average response time for a ticket could eventually lead to poor project performance.

The automated testing suite built into ML 2.0 gave the deployment team the flexibility to simulate previous states of the data, add data that had been withheld from the development process, and conduct their own tests for several points in time.
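The ML 2.0 tooling itself is not public, but the underlying idea of simulating previous states of the data can be illustrated generically. The sketch below is a rough approximation only: the column names and the weekly granularity are assumptions, not Accenture's actual schema. It cuts a dataset off at several points in time, trains on the past, and evaluates on the withheld future.

```python
# Hedged sketch: time-based backtesting of a weekly "red flag" classifier.
# The columns (week, feature list, red_flag) are hypothetical, not the ML 2.0 schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def backtest(df: pd.DataFrame, feature_cols, cutoffs):
    """Train on data up to each cutoff week, then test on the weeks after it."""
    results = {}
    for cutoff in cutoffs:
        past = df[df["week"] <= cutoff]      # simulate the data as it looked back then
        future = df[df["week"] > cutoff]     # data withheld from development
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(past[feature_cols], past["red_flag"])
        preds = model.predict(future[feature_cols])
        results[cutoff] = accuracy_score(future["red_flag"], preds)
    return results
```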

4 Questions to Ask Before You Start a Machine Learning Project

Companies can't turn a blind eye to machine learning anymore, as it is so powerful at certain tasks.

For instance, we argue in this article that solid data engineering alone can be enough to find invaluable business insights for companies across numerous industries.

According to TechCrunch, machine learning use cases can be split into two broad categories. Research institutions and tech companies have made massive progress in certain areas of machine learning, including computer vision, speech recognition, and natural language processing.

Supervised learning, for instance, is a valid approach when you’ve got big datasets of customer information and historical records that reveal who clicked your ads in the past.

A supervised machine learning model analyzes that input data to find patterns and predict what demographic groups are most likely to click your ad.
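As a minimal sketch of that idea (the feature names and records below are invented for illustration, not real ad data), a logistic regression classifier from scikit-learn can be trained on past click outcomes and then asked to score new people:

```python
# Hedged sketch of supervised ad-click prediction; the features and data are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "age":        [22, 35, 48, 29, 53, 41],
    "income_k":   [30, 70, 95, 45, 120, 80],
    "clicked_ad": [1,  0,  1,  1,  0,  1],   # known outcomes = the labels
})

model = LogisticRegression()
model.fit(history[["age", "income_k"]], history["clicked_ad"])

new_people = pd.DataFrame({"age": [25, 50], "income_k": [40, 100]})
print(model.predict_proba(new_people)[:, 1])  # probability that each person clicks
```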

For instance, we can take the input data from the example above, and let the AI engine group people according to demographics and personal interests.
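A minimal sketch of that unsupervised grouping, again with invented demographic features, might use k-means clustering, one of the techniques listed later in this digest:

```python
# Hedged sketch: unsupervised grouping of people by demographics with k-means.
import numpy as np
from sklearn.cluster import KMeans

people = np.array([
    [22, 30], [25, 35],        # younger, lower income
    [48, 95], [53, 120],       # older, higher income
    [35, 70], [41, 80],
])  # columns: age, income in $k (invented values)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(people)
print(kmeans.labels_)          # cluster id assigned to each person, with no labels needed
```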

With reinforcement learning, data scientists specify the rules of the “game”, the environment where the “game” takes place, and the final reward (in chess analogy, that would be the victory).
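Chess is far too large for a short example, but the same three ingredients show up in a toy setting. The sketch below is entirely hypothetical: tabular Q-learning on a five-cell corridor, where the rules are "move left or right", the environment is the row of cells, and the only reward is reaching the rightmost cell.

```python
# Hedged sketch: tabular Q-learning on a toy five-cell corridor.
import numpy as np

n_states, n_actions = 5, 2             # actions: 0 = step left, 1 = step right
q = np.zeros((n_states, n_actions))    # the Q-table the agent learns
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:                        # episode ends at the goal cell
        if rng.random() < epsilon:                      # explore occasionally
            action = int(rng.integers(n_actions))
        else:                                           # act greedily, breaking ties at random
            best = np.flatnonzero(q[state] == q[state].max())
            action = int(rng.choice(best))
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # standard Q-learning update
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

print(np.argmax(q[:-1], axis=1))   # learned policy for cells 0-3: expect all 1s ("move right")
```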

Deep learning, a technique that utilizes artificial neural networks, is applicable to all three machine learning types, but is most often used in supervised learning.

For instance, it can be used to categorize pictures of cats and dogs with high precision. Deep learning is behind Facebook’s Face Recognition technology, which is 99 percent accurate.

The same technology powers advanced natural language processing (NLP) as well as image and speech recognition software, with applications in document processing (e.g., legal documents), sentiment analysis, and word-processing tools.
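As a rough illustration (not Facebook's system), a small convolutional network for a two-class image task such as cats vs. dogs could be defined in Keras like this; the image size and layer sizes are assumptions:

```python
# Hedged sketch: a small convolutional network for a two-class image task
# (e.g., cats vs. dogs). Real use needs a labeled image dataset; shapes are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),                 # RGB images, assumed 128x128
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # probability of one class vs. the other
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # requires a labeled image dataset
```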

Before getting to data science proper, you need to extract data from fragmented sources, transform it into usable datasets, and load it into the AI engine.
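A minimal extract-transform-load sketch with pandas might look like the following; the sources, columns, and target database are invented, and in a real pipeline the extract step would read from live systems rather than from inline data:

```python
# Hedged ETL sketch; columns and values are invented. In a real pipeline the
# "extract" step would be pd.read_csv / pd.read_sql / API calls against live sources.
import pandas as pd
import sqlite3

# Extract: two fragmented sources (inline here so the sketch runs as-is).
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [120.0, 80.0, 200.0, 50.0],
    "date": ["2023-01-15", "2023-02-03", "2023-01-20", "2023-02-10"],
})
customers = pd.DataFrame({"customer_id": [1, 2, 3], "region": ["EU", "US", "EU"]})

# Transform: join the sources, fix types, and aggregate into a usable dataset.
df = orders.merge(customers, on="customer_id", how="left")
df["date"] = pd.to_datetime(df["date"])
monthly = (df.groupby([df["date"].dt.to_period("M").astype(str), "region"])["amount"]
             .sum().reset_index())

# Load: write the prepared table where the modeling engine can read it.
with sqlite3.connect("analytics.db") as conn:
    monthly.to_sql("monthly_sales", conn, if_exists="replace", index=False)
```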

On the downside, you cannot freely configure system parameters. For instance, Amazon Machine Learning supports only logistic regression models, so it is of little use if a project calls for a different type of model.

Machine Learning

Supervised learning algorithms are trained using labeled examples, that is, inputs for which the desired output is known.

The learning algorithm receives a set of inputs along with the corresponding correct outputs, and the algorithm learns by comparing its actual output with correct outputs to find errors.

Through methods like classification, regression, prediction and gradient boosting, supervised learning uses patterns to predict the values of the label on additional unlabeled data.

Popular techniques include self-organizing maps, nearest-neighbor mapping, k-means clustering and singular value decomposition.

Machine Learning Project Structure: Stages, Roles, and Tools

Various businesses use machine learning to manage and improve operations.

For example, a small data science team would have to collect, preprocess, and transform data, as well as train, validate, and (possibly) deploy a model, in order to produce a single prediction.

Netflix data scientists would follow a similar project scheme to provide personalized recommendations to the service’s audience of 100 million.

While a business analyst defines the feasibility of a software solution and sets the requirements for it, a solution architect organizes the development.

The job of a data analyst is to find ways and sources of collecting relevant and comprehensive data, interpreting it, and analyzing results with the help of statistical techniques.

In turn, the number of attributes data scientists will use when building a predictive model depends on the attributes’ predictive value.

For example, those who run an online-only business and want to launch a personalization campaign can try out web analytics tools such as Mixpanel, Hotjar, CrazyEgg, the well-known Google Analytics, etc.

It stores data about users and their online behavior: time and length of visit, viewed pages or objects, and location.

Tools: Visualr, Tableau, Oracle DV, QlikView, Chart.js, dygraphs, D3.js

Supervised machine learning, which we’ll talk about below, entails training a predictive model on historical data with predefined target answers.

Data labeling takes much time and effort as datasets sufficient for machine learning may require thousands of records to be labeled.

For instance, if your image recognition algorithm must classify types of bicycles, these types should be clearly defined and labeled in a dataset.

This technique, known as transfer learning, involves reusing knowledge that other data science teams gained while solving similar machine learning problems.

A data scientist needs to define which elements of the source training dataset can be used for a new modeling task.

Transfer learning is mostly applied for training neural networks — models used for image or speech recognition, image segmentation, human motion modeling, etc.
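A common concrete form of this, sketched below with torchvision (the pretrained-weights API assumes a recent torchvision release, and the five-class target task is hypothetical), is to take an ImageNet-pretrained network, freeze its learned features, and retrain only the final layer for the new task:

```python
# Hedged transfer-learning sketch: reuse an ImageNet-pretrained ResNet-18 and
# retrain only its final layer for a new, assumed 5-class problem.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():                 # freeze the knowledge transferred from ImageNet
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)    # new, trainable head for 5 classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A training loop over your own labeled images would go here; only model.fc updates.
```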

For example, if you were to open your analog of Amazon Go store, you would have to train and deploy object recognition models to let customers skip cashiers.

Tools: crowdsourcing labeling platforms, spreadsheets

After having collected all information, a data analyst chooses a subgroup of data to solve the defined problem.

For instance, if you save your customers’ geographical location, you don’t need to add their cell phones and bank card numbers to a dataset.
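In pandas, that kind of attribute selection is a short operation; the columns below are invented for illustration:

```python
# Hedged sketch: keep only the attributes relevant to the problem (invented columns).
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2],
    "city": ["Berlin", "Austin"],
    "cell_phone": ["+49 160 0000", "+1 512 0000"],   # not needed for a location-based model
    "card_number": ["4111 0000", "5500 0000"],       # sensitive and irrelevant, so exclude it
})

dataset = customers.drop(columns=["cell_phone", "card_number"])
print(dataset.columns.tolist())                      # ['customer_id', 'city']
```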

A data scientist, who is usually responsible for data preprocessing and transformation as well as model building and evaluation, can also be assigned data collection and selection tasks in small data science teams.

A data scientist uses this technique to select a smaller but representative data sample to build and run models much faster, and at the same time to produce accurate outcomes.
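One common way to draw such a sample is stratified sampling, which keeps the subset's class proportions close to those of the full dataset; the sketch below uses synthetic data:

```python
# Hedged sketch: draw a smaller but representative (stratified) sample of a dataset.
import pandas as pd
from sklearn.model_selection import train_test_split

full = pd.DataFrame({
    "feature": range(1000),
    "label": [0] * 900 + [1] * 100,        # imbalanced classes: 90% vs. 10%
})

# Keep 10% of the rows while preserving the 90/10 class ratio.
sample, _ = train_test_split(full, train_size=0.1, stratify=full["label"], random_state=0)
print(sample["label"].value_counts(normalize=True))   # still roughly 0.9 / 0.1
```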

Tools: spreadsheets, automated solutions (Weka, Trim, Trifacta Wrangler, RapidMiner), MLaaS (Google Cloud AI, Amazon Machine Learning, Azure Machine Learning)

In this final preprocessing phase, a data scientist transforms or consolidates data into a form appropriate for mining (creating algorithms to get insights from data) or machine learning.

For example, to estimate monthly demand for air conditioners, a market research analyst converts data representing demand per quarter into monthly figures.
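As a rough pandas sketch of that conversion (the figures are invented), one simple choice is to spread each quarter's demand evenly across its three months:

```python
# Hedged sketch: convert quarterly demand into monthly figures by spreading each
# quarter's total evenly over its three months (values are invented).
import pandas as pd

quarterly = pd.Series(
    [300, 360, 420, 390],                                  # units demanded per quarter
    index=pd.date_range("2023-01-01", periods=4, freq="QS"),
)

months = pd.date_range("2023-01-01", "2023-12-01", freq="MS")
monthly = quarterly.reindex(months).ffill() / 3            # even split within each quarter
print(monthly)
```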

The choice of applied techniques and the number of iterations depend on a business problem and therefore on the volume and quality of data collected for analysis.

A dataset used for machine learning should be partitioned into three subsets — training, test, and validation sets.

A data scientist uses a training set to train a model and define its optimal parameters — parameters it has to learn from data.

Generalization, in this context, means a model’s ability to identify patterns in new, unseen data after having been trained on the training data.

The purpose of a validation set is to tweak a model’s hyperparameters — higher-level structural settings that can’t be directly learned from data.

At the same time, machine learning practitioner Jason Brownlee suggests using 66 percent of data for training and 33 percent for testing.
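With scikit-learn, a three-way partition is usually done with two successive splits. The sketch below uses synthetic data and an assumed 60/20/20 ratio; the 66/33 train/test split mentioned above is simply a different choice of proportions:

```python
# Hedged sketch: partition a dataset into training, validation, and test subsets
# (roughly 60/20/20 here; other ratios, such as 66/33 train/test, are equally valid).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)        # synthetic features
y = (X.ravel() % 2 == 0).astype(int)      # synthetic labels

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 600 200 200
```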

Tools: MLaaS (Google Cloud AI, Amazon Machine Learning, Azure Machine Learning), ML frameworks (TensorFlow, Caffe, Torch, scikit-learn)

During this stage, a data scientist trains numerous models to define which one of them provides the most accurate predictions.

An algorithm will process data and output a model that is able to find a target value (attribute) in new data — an answer you want to get with predictive analysis.

The goal of model training is to find hidden interconnections between data objects and structure objects by similarities or differences.

Tools: MLaaS (Google Cloud AI, Amazon Machine Learning, Azure Machine Learning), ML frameworks (TensorFlow, Caffe, Torch, scikit-learn)

Data scientists mostly create and train one or several dozen models to be able to choose the optimal model among the well-performing ones.

Also known as stacked generalization, this approach suggests developing a meta-model or higher-level learner by combining multiple base models.

A data scientist first uses subsets of an original dataset to develop several average-performing models and then combines them, for instance by majority vote, to increase their overall performance.
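In scikit-learn terms, the meta-model variant described here corresponds to StackingClassifier (a plain majority vote would be VotingClassifier instead). The sketch below uses a synthetic dataset for illustration:

```python
# Hedged sketch: stacked generalization - several base models plus a meta-model,
# trained on a synthetic dataset for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # the higher-level learner (meta-model)
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```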

Once a data scientist has chosen a reliable model and specified its performance requirements, he or she delegates its deployment to a data engineer or database administrator.

Besides working with big data and building and maintaining a data warehouse, a data engineer takes part in model deployment.

Model productionalization also depends on whether your data science team performed the above-mentioned stages (dataset preparation and preprocessing, modeling) manually using in-house IT infrastructure or automatically with one of the machine-learning-as-a-service products.

Machine learning as a service is an automated or semi-automated cloud platform with tools for data preprocessing, model training, testing, and deployment, as well as forecasting.

With real-time streaming analytics, you can instantly analyze live streaming data and quickly react to events that take place at any moment.

Real-time prediction allows for processing of sensor or market data, data from IoT or mobile devices, as well as from mobile or desktop applications and websites.
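One common pattern for serving such real-time predictions, sketched below with Flask, is a small HTTP endpoint that applications, devices, or websites can call as events arrive; the model file and the expected JSON fields are hypothetical:

```python
# Hedged sketch: a minimal HTTP endpoint for real-time predictions.
# "model.pkl" and the expected JSON fields are hypothetical placeholders.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("model.pkl", "rb") as f:          # a previously trained scikit-learn model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()            # e.g. {"features": [0.3, 1.7, 5.2]}
    prediction = model.predict([payload["features"]])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=5000)
```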

Tools: MLaaS (Google Cloud AI, Amazon Machine Learning, Azure Machine Learning), ML frameworks (TensorFlow, Caffe, Torch, scikit-learn), open-source cluster computing frameworks (Apache Spark), cloud or in-house servers

Regardless of a machine learning project’s scope, its implementation is a time-consuming process consisting of the same basic steps with a defined set of tasks.

The distribution of roles in a data science team is flexible and may depend on the project’s scale, budget, time frame, and the specific problem.

Even after a project’s key goal — the development and deployment of a predictive model — is achieved, the project continues.

Building a Business Case for your Machine Learning Idea

This presentation will discuss building a business model for your machine learning idea. In this talk, our presenter, Neeti Gupta, will provide a 10-step checklist ...

AI in Industry - Lessons from 50+ Companies and Example Projects

Recent progress in deep learning has created a compelling opportunity for scientists and engineers with machine learning skills who are looking to join data ...

Predicting Stock Prices - Learn Python for Data Science #4

In this video, we build an Apple Stock Prediction script in 40 lines of Python using the scikit-learn library and plot the graph using the matplotlib library.

How business people can deploy machine learning models

Machine learning is a complex topic, but companies are trying to make it more accessible by enabling business people to design and deploy such models.

Model Management and Deployment in Watson Studio

Build models that learn over time with Watson Machine Learning and Watson Studio (ibm.com/cloud/watson-studio). We hear from many clients that one of the ...

How AI is changing Business: A look at the limitless potential of AI | ANIRUDH KALA | TEDxIITBHU

Now a household name in the Indian computer science scene, Anirudh Kala offers us a sneak peek into the mind-boggling commercial potential of Artificial ...

Making Predictions with Data and Python : Predicting Credit Card Default | packtpub.com

This playlist/video has been uploaded for Marketing purposes and contains only selective videos. For the entire video course and code, visit ...

Predicting the Winning Team with Machine Learning

Can we predict the outcome of a football game given a dataset of past games? That's the question that we'll answer in this episode by using the scikit-learn ...

Lifecycle of a machine learning model (Google Cloud Next '17)

In this video, you'll hear lessons learned from our experience with machine learning and how having a great model is only a part of the story. You'll see how ...

Deploying Python Machine Learning Models in Production | SciPy 2015 | Krishna Sridhar