AI News

AI in the Enterprise – Making Corporations Smart Again

Machine Learning uses algorithms to detect patterns in old data and build models that can be used to make predictions from new data.
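That train-on-old-data, predict-on-new-data split can be shown with a deliberately tiny sketch: fit a straight line to historical points, then apply it to an unseen input. The data and the `fit_line` helper are invented for illustration, not any production pipeline.

```python
# Minimal sketch: "learn" a linear pattern from historical (x, y) pairs,
# then predict y for a new x. All numbers here are made up.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Old" data: the model is built from these examples.
history_x = [1, 2, 3, 4, 5]
history_y = [2.1, 3.9, 6.2, 8.0, 9.9]  # roughly y = 2x

a, b = fit_line(history_x, history_y)

def predict(x):
    """Apply the learned pattern to new data."""
    return a * x + b

print(round(predict(6), 1))  # prediction for an input the model never saw
```

Real systems use far richer models and features, but the shape is the same: historical examples in, a fitted model out, predictions on fresh inputs.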

Understanding the algorithms behind Machine Learning is difficult, and running the infrastructure needed to build accurate models and serve them at scale is even more challenging.

At Uber and Amazon, my teams built Machine Learning services that allow business teams to easily embed intelligence into their applications, powering important functions such as ETA prediction, fraud detection, churn prediction, demand forecasting, and much more.


One AWS customer story describes a web store that ensures visitors can find and buy the products they want easily, regardless of traffic, thanks to a back-end infrastructure running on Amazon EC2 instances with Auto Scaling, an Amazon S3 data repository, and Amazon Kinesis capturing and processing web-store clickstreams in real time.

The evolution of machine learning

Major tech companies have actively reoriented themselves around AI and machine learning: Google is now “AI-first,” Uber has ML running through its veins and internal AI research labs keep popping up.

Engineers still use traditional software engineering tools for machine learning engineering, and those tools are a poor fit: the pipelines that take data to model to result end up built out of scattered, incompatible pieces.

Change is coming, though, as big tech companies smooth out this process by building new machine learning-specific platforms with end-to-end functionality.

Despite the focus on deep learning at the big tech company AI research labs, most applications of machine learning at these same companies do not rely on neural networks and instead use traditional machine learning models.

These are the models behind friend suggestions, ad targeting, user interest prediction, supply/demand simulation, and search result ranking, among other services tech companies run.

Furthermore, the large scale of big tech companies compounds errors, making careful deployment and monitoring of models in production imperative.

So instead of unit tests, engineers take a less structured approach: They manually monitor dashboards and program alerts for new models.

And shifts in real-world data may make trained models less accurate, so engineers re-train production models on fresh data on a daily to monthly basis, depending on the application.
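A minimal sketch of that monitor-and-retrain loop: watch a deployed model's recent accuracy and raise a flag when it drifts below a floor, which for many teams is the cue to retrain on fresh data. The 90% floor and the toy predictions are invented for illustration.

```python
# Sketch of manual model monitoring: alert when recent accuracy drifts
# below a threshold. Thresholds and data are illustrative assumptions.

def accuracy(predictions, actuals):
    """Fraction of predictions that matched the observed outcome."""
    hits = sum(p == a for p, a in zip(predictions, actuals))
    return hits / len(actuals)

def needs_retraining(recent_preds, recent_actuals, floor=0.90):
    """True when live accuracy has fallen below the alert floor."""
    return accuracy(recent_preds, recent_actuals) < floor

# Last week the model matched reality; this week real-world data shifted.
print(needs_retraining([1, 0, 1, 1, 0], [1, 0, 1, 1, 0]))  # False (100%)
print(needs_retraining([1, 0, 1, 1, 0], [1, 1, 0, 1, 1]))  # True  (40%)
```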

But a lack of machine learning-specific support in the existing engineering infrastructure can create a disconnect between models in development and models in production: many engineers still rely on rudimentary methods of deploying models, like saving a serialized version of the trained model or model weights to a file.
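That file-based deployment style can be sketched with Python's standard pickle module. The TrainedModel class here is a stand-in, not any particular library's API.

```python
# Sketch of "deployment" by serializing a trained model to a file.
# TrainedModel is a hypothetical stand-in for a real fitted model.
import os
import pickle
import tempfile

class TrainedModel:
    def __init__(self, weights):
        self.weights = weights

    def predict(self, x):
        # A trivial linear scorer standing in for real inference.
        return sum(w * xi for w, xi in zip(self.weights, x))

model = TrainedModel(weights=[0.5, -1.2, 3.0])

# "Deploy": dump the fitted model to disk...
path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# ...and load it in the serving process. Any skew between training and
# serving environments (library versions, feature code) is invisible to
# this file, which is exactly the disconnect described above.
with open(path, "rb") as f:
    served = pickle.load(f)

print(served.predict([1.0, 1.0, 1.0]))
```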

To address these issues, a few big companies, with the resources to build custom tooling, have invested time and engineering effort into creating their own machine learning-specific tools.

These tools allow engineers to construct training and validation data sets with an intuitive user interface, decreasing time spent on this stage from days to hours.

Services like Azure Machine Learning and Amazon Machine Learning are publicly available alternatives that provide similar end-to-end platform functionality, but they integrate only with other Microsoft or Amazon services for the data storage and deployment components of the pipeline.

Most organizations still use traditional machine learning models instead of more advanced deep learning, and still depend on a traditional infrastructure of tools poorly suited to machine learning.

With these internal tools, or potentially with third-party machine learning platforms that are able to integrate tightly into their existing infrastructures, organizations can realize the potential of AI.

The Business of Artificial Intelligence

For more than 250 years the fundamental drivers of economic growth have been technological innovations.

The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs.

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML): that is, the machine's ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it's given.

The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning.

We see business plans liberally sprinkled with references to machine learning, neural nets, and other forms of the technology, with little connection to its real capabilities.

The term artificial intelligence was coined in 1955 by John McCarthy, a math professor at Dartmouth who organized the seminal conference on the topic the following year.

A study by the Stanford computer scientist James Landay and colleagues found that speech recognition is now about three times as fast, on average, as typing on a cell phone.

Vision systems, such as those used in self-driving cars, formerly made a mistake when identifying a pedestrian as often as once in 30 frames (the cameras in these systems record about 30 frames a second); today's best systems err far less often.

The error rate for recognizing images from a large database called ImageNet, with several million photographs of common, obscure, or downright weird images, fell from higher than 30% in 2010 to about 4% in 2016 for the best systems.

Google’s DeepMind team has used ML systems to improve the cooling efficiency at data centers by more than 15%, even after they were optimized by human experts.

A system using IBM technology automates the claims process at an insurance company in Singapore, and a system from Lumidatum, a data science platform firm, offers timely advice to improve customer support.

Infinite Analytics developed one ML system to predict whether a user would click on a particular ad, improving online ad placement for a global consumer packaged goods company, and another to improve customers' search and discovery process for an online retailer.

For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards.

More fundamentally, we can marvel at a system that understands Chinese speech and translates it into English, but we don't expect such a system to know what a particular Chinese character means.

The fallacy that a computer’s narrow understanding implies broader understanding is perhaps the biggest source of confusion, and exaggerated claims, about AI’s progress.

The most important thing to understand about ML is that it represents a fundamentally different approach to creating software: The machine learns from examples, rather than being explicitly programmed for a particular outcome.

For most of the past 50 years, advances in information technology and its applications have focused on codifying existing knowledge and procedures and embedding them in machines.

In this second wave of the second machine age, machines built by humans are learning from examples and using structured feedback to solve, on their own, problems such as Polanyi's classic one of recognizing a face.

Artificial intelligence and machine learning come in many flavors, but most of the successes in recent years have been in one category: supervised learning systems, in which the machine is given lots of examples of the correct answer to a particular problem.
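A toy illustration of that supervised setup: the system is handed labeled examples of the correct answer and generalizes to new cases, here via a one-nearest-neighbor rule. The churn features and labels are invented.

```python
# Supervised learning in miniature: labeled examples in, predictions out.
# Here the "learning" is just memorizing examples and answering with the
# label of the closest one (1-nearest-neighbor). Data is made up.

def nearest_neighbor(train, query):
    """train: list of (features, label); returns label of the closest example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Labeled examples: (hours of weekly use, support tickets) -> outcome.
examples = [
    ((10.0, 0.0), "stays"),
    ((8.0, 1.0), "stays"),
    ((1.0, 5.0), "churns"),
    ((0.5, 3.0), "churns"),
]

print(nearest_neighbor(examples, (9.0, 0.5)))  # near the "stays" cluster
print(nearest_neighbor(examples, (1.0, 4.0)))  # near the "churns" cluster
```

The production versions of friend suggestions, ad targeting, and churn prediction mentioned earlier follow the same pattern, just with vastly more examples, features, and model capacity.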

How data and machine learning are 'part of Uber's DNA'

A year ago, Danny Lange took over as the head of machine learning at Uber.

Machine learning has been there from the beginning, but what we're doing is to take it a notch up—to make sure that it's not just in one part, say, in marketplace management, but that it's in every part of the company.

We have found over time that machine learning does add value in areas where people initially didn't think of machine learning as an option.

But we actually now have the data about how long it takes to make noodles, how long it takes to make a hamburger, and how long it takes to deliver it in different parts of town at different times of day.

You can start building machine learning models that can give you a more accurate prediction based on the data, not on some finite computation.

You can see how an application, in a short time span, goes from being a hardwired application to becoming a smart and dynamic application that benefits from knowing your behavior and from knowing other people's behavior.

When you request a car and we tell you it's going to be 14 minutes or 12 minutes before it shows up, we want to make sure that that estimate is as precise as it can be.

We basically use data to build models that estimate the time it will typically take for the car to reach you at any given time of the day, any given time of the week.
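The idea Lange describes, typical travel time at a given time of day and day of week, can be illustrated very loosely by bucketing historical trips and predicting each bucket's average. The trip records below are invented; Uber's real models are far more sophisticated.

```python
# Loose sketch of ETA estimation: bucket historical trips by
# (weekday, hour) and predict the bucket mean. Data is invented.
from collections import defaultdict

def build_eta_model(trips):
    """trips: list of (weekday, hour, minutes). Returns bucket -> mean minutes."""
    buckets = defaultdict(list)
    for weekday, hour, minutes in trips:
        buckets[(weekday, hour)].append(minutes)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

trips = [
    ("Mon", 8, 14.0), ("Mon", 8, 12.0),   # weekday rush hour
    ("Sun", 14, 6.0), ("Sun", 14, 8.0),   # quiet weekend afternoon
]
model = build_eta_model(trips)

print(model[("Mon", 8)])   # 13.0 minutes
print(model[("Sun", 14)])  # 7.0 minutes
```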

Also, being smart around fraud detection, basically detecting fraudulent behavior as it happens, so that we don't accept rides with a stolen credit card, as an example.
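Scoring a ride request before accepting it might look something like the sketch below. The features, weights, and threshold are all invented; a production system would learn them from labeled fraud cases rather than hard-code them.

```python
# Hedged sketch of pre-ride fraud scoring. Every weight and threshold
# here is an illustrative assumption, not a real fraud model.

def fraud_score(request):
    """Higher score = more suspicious. Weights are illustrative only."""
    score = 0.0
    if request["card_country"] != request["rider_country"]:
        score += 0.4
    if request["account_age_days"] < 1:
        score += 0.3
    if request["failed_card_attempts"] >= 3:
        score += 0.5
    return score

def accept_ride(request, threshold=0.6):
    """Accept only requests that score below the suspicion threshold."""
    return fraud_score(request) < threshold

legit = {"card_country": "US", "rider_country": "US",
         "account_age_days": 400, "failed_card_attempts": 0}
shady = {"card_country": "RO", "rider_country": "US",
         "account_age_days": 0, "failed_card_attempts": 4}

print(accept_ride(legit))  # True
print(accept_ride(shady))  # False
```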

A lot of existing maps and map services are really good, but there is certain information on those maps that's not important to us, and then there's other information kind of missing from those maps—where they will say, 'You are at your destination,' but that's within a block of your destination.

One of the surprises is basically a positive one, which is amongst the engineers at Uber, there's a very strong desire to use this kind of technology to improve apps and services.

There's a lot of open-mindedness looking at the business challenges and then jumping on board and using a technology that was essentially almost unknown five years ago.

As I gave the example with improving the pickup spots, I think it's really incredible that you can use a piece of technology at a scale that no human can do, you know?

We talk about decay of data, so basically the data this month is much better than the data from the previous month, but some data you have to look at over a longer time span to get seasonal understanding.

Uber Technology Day: Automatic Algorithm Selection for Anomaly Detection

Yiren Lu, a New York City-based software engineer on Uber's Observability Operations team, presented on how Uber is automating anomaly detection through a ...

Uber Technology Day: Building a Scalable, Reliable Data Platform

Deepti Chheda and Ayesha Yasmeen, engineers with the Data Workflow Management Platform and Rider Experience teams, discussed their experience ...

AWS re:Invent 2017: Real-Time Anomaly Detection Using Amazon Kinesis (ABD335)

Amazon Kinesis Analytics offers a built-in machine learning algorithm that you can use to easily detect anomalies in your VPC network traffic and improve ...

Dynamic Pricing | Uber Prices

Uber uses an automated algorithm to increase prices to "surge" price levels, responding rapidly to changes of supply and demand in the market, and to attract ...
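The supply/demand response can be sketched as a simple multiplier that rises as open requests outstrip available drivers. The formula and the cap are assumptions made for the sketch, not Uber's actual algorithm.

```python
# Illustrative surge multiplier: price scales with the demand/supply
# ratio, capped. This formula is an assumption, not Uber's real one.

def surge_multiplier(open_requests, available_drivers, cap=3.0):
    """Return a price multiplier >= 1.0, capped at `cap`."""
    if available_drivers == 0:
        return cap
    ratio = open_requests / available_drivers
    # No surge while supply covers demand; scale up (to a cap) beyond that.
    return min(cap, max(1.0, ratio))

print(surge_multiplier(50, 100))   # plenty of drivers: no surge
print(surge_multiplier(180, 100))  # demand exceeds supply: 1.8x
print(surge_multiplier(900, 100))  # extreme imbalance: capped at 3.0x
```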

How Search Works

The life span of a Google query is less than 1/2 second, and involves quite a few steps before you see the most ...

Deep Learning & Supply Optimization at Instacart

From the SF Bay Area Machine Learning Meetup at Instacart. Deep ...

Machine Learning with Python - Part 1: Spotify EDA

In this series, we'll explore machine learning with Python by building a classifier to determine whether or not we might like a song based on its attributes, which ...

Building a Machine Learning Platform at Quora

Each month, over 100 million people use Quora to share and grow their knowledge. Machine learning has played a critical role in enabling the company to grow ...

When HAL Met Sally - How AI will Optimize Marketplaces - MITCNC Tech Conference 2017

Bob Phillips, Head of Marketplace Data Science, Uber; Jeremy Stanley, VP Data Science, Instacart; Monica Rogati, Data Science Advisor, Data ...