Artificial Intelligence Plus the Internet of Things (IoT) – 3 Examples Worth Learning From
The purpose of this article is to flesh out the actual applications of combined AI and IoT in industry today, and to highlight important trends and future use cases for technologists who want to get a lay of the land of how artificial intelligence might help facilitate and make sense of the myriad connected devices in the coming decade.
Dozens of our interviews with emerging technology executives and researchers, and hours of combing the insights of major market research firms (which I’ve compiled conveniently at the end of this article), point to one overarching theme in the connection of artificial intelligence and the internet of things: artificial intelligence will be functionally necessary to wield the vast number of connected “things.”
This AI application is certainly novel, but its pragmatic benefit (home comfort, potentially serious reduction in energy use) and effective marketing could be said to be the biggest factors behind its sales success (an estimated 100,000 sales per month as of January 2014).
This isn’t necessarily because autonomous vehicles will be the easiest IoT innovation to bring to life (with legal and ethical concerns, the jury is still out on how soon we’ll see driverless highways), but with nearly all major car manufacturers throwing billions of dollars at the problem, it certainly has momentum (pun intended, I suppose).
Though we weren’t able to find key fob / access key technologies integrating artificial intelligence or predictive analytics, we would suppose that as fob technology and adoption improve, this area may be rife with security insight (particularly for larger firms assessing data across many locations).
We provide broad definitions (and related links) below:

Internet of things: a network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment (Gartner)

Artificial intelligence: an area of computer science that deals with giving machines the ability to seem like they have human intelligence (Merriam-Webster)

The famed John McCarthy (a Stanford professor, and argued to have originally dubbed the term “artificial intelligence”) has articulated some of the difficulties of defining “artificial intelligence.”
Machine learning: using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look (SAS)

Deep learning: a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations (Wikipedia)

Ambient intelligence: refers to electronic environments that are sensitive and responsive to the presence of people … In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way using information and intelligence that is hidden in the network connecting these devices (Wikipedia)

Smart objects: an object that enhances interaction not only with people but also with other smart objects.
Can Artificial Intelligence “Think”?
Sci-fi and science can’t seem to agree on the way we should think about artificial intelligence.
Sci-fi wants to portray artificial intelligence agents as thinking machines, while businesses today use artificial intelligence for more mundane tasks like filling out forms with robotic process automation or driving your car.
When interacting with these artificial intelligence interfaces at our current level of AI technology, our human inclination is to treat them like vending machines, rather than to treat them like a person.
Today’s AI is very narrow, and so straying across the invisible line between what these systems can and can’t do leads to generic responses like “I don’t understand that” or “I can’t do that yet”.
In the book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman describes the two systems in our brains that do thinking: a fast, automated thinking system (System 1) and a slow, more deliberative thinking system (System 2).
Just like we have a left and right brain stuck in our one head, we also have these two types of thinking systems baked into our heads, talking to each other and forming the way we see the world.
Today's AI systems learn to think fast and automatically (like System 1), but artificial intelligence as a science doesn’t yet have a good handle on how to do the thinking slow approach we get from System 2.
In the future, thinking algorithms that teach themselves may themselves represent most of the value in an AI system, but for now, you still need data to make an AI system, and the data is the most valuable part of the project.
A colleague of mine has a funny story from her undergraduate math degree at a respected university: the students would play a game called “stats chicken,” delaying their statistics course until the fourth year and hoping each year that the requirement would be dropped from the program.
When we see a really relevant movie or product recommendation, we feel impressed by this amazing recommendation magic trick, but don’t get to see the way the magic trick is performed.
In fact, there are accusations even in respected academic circles (slide 24, here) that the basic theory of artificial intelligence as a field of science is not yet rigorously defined.
Engineers don’t tend to ask questions like “Is it thinking?”; instead they ask questions like “Is it broken?” and “What is the test score?”

Supervised learning is a very popular type of artificial intelligence that makes fast predictions in some narrow domain.
Explicit models like decision trees are a common approach to developing an interpretable AI system, where a set of rules is learned that defines your path from observation to prediction, like a choose your own adventure story where each piece of data follows a path from the beginning of the book to the conclusion.
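To make that concrete, here is a toy sketch (illustrative only, not any production system) of the simplest possible decision tree, a one-rule “decision stump” learned from made-up loan data:

```python
# A minimal, hand-rolled "decision stump": the simplest decision tree,
# learning one if/else rule from data (toy example with made-up numbers).

def learn_stump(xs, ys):
    """Find the threshold on a single feature that best separates the labels."""
    best = (None, -1)  # (threshold, number of correctly classified examples)
    for t in sorted(set(xs)):
        correct = sum((x >= t) == y for x, y in zip(xs, ys))
        if correct > best[1]:
            best = (t, correct)
    return best[0]

# Toy data: applicant income (in thousands) -> was the loan approved?
incomes = [15, 20, 30, 80, 90, 120]
approved = [False, False, False, True, True, True]

threshold = learn_stump(incomes, approved)
# The learned rule reads like a single page of a choose-your-own-adventure
# book: "if income >= threshold, approve; otherwise decline".
print(f"if income >= {threshold}: approve")
```

The learned rule is the entire model, which is exactly what makes this family of models interpretable: you can read the full path from observation to prediction.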
Another type of artificial intelligence called reinforcement learning involves learning the transition from one decision to the next based on what’s going on in the environment and what happened in the past.
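A minimal sketch of that idea, using only the standard library and a made-up five-state corridor environment: tabular Q-learning updates its estimate of each (state, action) decision based on the reward that follows it and the value of the next decision.

```python
# Illustrative sketch: tabular Q-learning on a tiny 5-state corridor.
# The agent learns which move to make next from environment feedback.
import random

random.seed(0)
N_STATES, GOAL = 5, 4                 # states 0..4, reward only at state 4
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}  # actions: left/right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(500):                  # episodes of trial and error
    s = 0
    while s != GOAL:
        if random.random() < eps:     # occasionally explore at random
            a = random.choice((-1, 1))
        else:                         # otherwise act greedily
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Core update: value of (state, action) moves toward the reward
        # plus the discounted value of the best next decision.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# The greedy policy after training: which way to move in each state.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

After enough episodes the greedy policy points toward the goal in every state, learned purely from transitions and rewards rather than from labeled examples.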
We know that without much better “environment” models of the world, these approaches are going to learn super slowly, even for the most basic tasks.
In a game playing simulator an AI model can play against itself very quickly to get smart, but in human-related applications the slow pace of data collection gums up the speed of the project.
Regardless of the underlying technical machinery, when you interact with a trained artificial intelligence model in the vast majority of real-life applications today, the model is pre-trained and is not learning on the fly.
It is useful to consider more general mathematical models, like rainfall estimation and sovereign credit risk modeling, to see how such models are carefully designed by humans, encoding huge amounts of careful and deliberative human thinking.
AstraLaunch is a pretty advanced product involving both supervised and unsupervised learning for matching technologies with company needs on a very technical basis. I asked Kurt a lot of technology questions, leading up to the question “Does the system think like people do?”
Everyday Examples of Artificial Intelligence and Machine Learning
With all the excitement and hype about AI that’s “just around the corner”—self-driving cars, instant machine translation, etc.—it can be difficult to see how AI is affecting the lives of regular people from moment to moment. What are examples of artificial intelligence that you’re already using—right now?
You’ve also likely used AI on your way to work, communicating online with friends, searching on the web, and making online purchases. We distinguish between AI and machine learning (ML) throughout this article when appropriate.
According to a 2015 report by the Texas Transportation Institute at Texas A&M University, commute times in the US have been steadily climbing year-over-year, resulting in 42 hours of rush-hour traffic delay per commuter in 2014—more than a full work week per year, with an estimated $160 billion in lost productivity.
An optimal commute may involve multiple modes of transportation (e.g. driving to a train station, riding the train to the optimal stop, and then walking or using a ride-share service from that stop to the final destination), not to mention the expected and the unexpected: construction and other delays.
Engineering lead for Uber ATC Jeff Schneider discussed in an NPR interview how the company uses ML to predict rider demand to ensure that “surge pricing” (short periods of sharp price increases to decrease rider demand and increase driver supply) will soon no longer be necessary.
Glimpse into the future

In the future, AI will shorten your commute even further via self-driving cars that result in up to 90% fewer accidents, more efficient ride sharing to reduce the number of cars on the road by up to 75%, and smart traffic lights that reduce wait times by 40% and overall travel time by 26% in a pilot study.
Simple rules-based filters (e.g. “filter out messages with the words ‘online pharmacy’ and ‘Nigerian prince’ that come from unknown addresses”) aren’t effective against spam, because spammers can quickly update their messages to work around them.
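The brittleness of such hand-written rules is easy to see in a sketch (a toy filter, not Gmail’s actual system):

```python
# Toy rules-based spam filter: a fixed list of blocked phrases.
BLOCKED_PHRASES = ["online pharmacy", "nigerian prince"]

def is_spam(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(is_spam("Cheap meds from our online pharmacy!"))   # caught
print(is_spam("Cheap meds from our 0nline pharm4cy!"))   # slips through
```

A trivial respelling defeats the fixed rule, which is why filters that learn to generalize beyond exact phrases are needed.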
In a research paper titled “The Learning Behind Gmail Priority Inbox,” Google outlines its machine learning approach and notes “a huge variation between user preferences for volume of important mail… Thus, we need some manual intervention from users to tune their threshold.”
The researchers tested the effectiveness of Priority Inbox on Google employees and found that those with Priority Inbox “spent 6% less time reading email overall, and 13% less time reading unimportant email.”

Glimpse into the future

Can your inbox reply to emails for you?
Smart Reply uses machine learning to automatically suggest three different brief (but customized) responses to answer the email. As of early 2016, 10% of mobile Inbox users’ emails were sent via Smart Reply.
A brute-force search comparing every string of text to every other string of text in a document database would be highly accurate, but far too computationally expensive to use in practice. One MIT paper highlights the possibility of using machine learning to optimize this algorithm.
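As a rough illustration of the cost (a toy sketch, not the paper’s method), a pairwise scan touches every one of the n*(n-1)/2 document pairs:

```python
# Brute-force similarity search: compare every document to every other.
# Accurate, but the nested loop makes it O(n^2) in the number of documents,
# which is what becomes impractical at database scale.
from difflib import SequenceMatcher

docs = [
    "machine learning optimizes search",
    "deep learning optimises search",
    "the cat sat on the mat",
]

def most_similar_pair(documents):
    best, best_score = None, -1.0
    for i in range(len(documents)):
        for j in range(i + 1, len(documents)):   # every pair: n*(n-1)/2
            score = SequenceMatcher(None, documents[i], documents[j]).ratio()
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

pair, score = most_similar_pair(docs)
print(pair, round(score, 2))
```

With three documents this is instant; with millions, the quadratic number of comparisons is the bottleneck a learned approach would try to avoid.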
– Credit Decisions

Whenever you apply for a loan or credit card, the financial institution must quickly determine whether to accept your application and, if so, what specific terms (interest rate, credit line amount, etc.) to offer. FICO uses ML both in developing your FICO score, which most banks use to make credit decisions, and in determining the specific risk assessment for individual customers.
In early 2016, Wealthfront announced it was taking an AI-first approach, promising “an advice engine rooted in artificial intelligence and modern APIs, an engine that we believe will deliver more relevant and personalized advice than ever before.” While there is no data on the long-term performance of robo-advisors (Betterment was founded in 2008, Wealthfront in 2011), they may well become the norm for regular people looking to invest their savings.
In a short video highlighting their AI research (below), Facebook discusses the use of artificial neural networks—ML algorithms that mimic the structure of the human brain—to power facial recognition software.
In June 2016, Facebook announced a new AI initiative: DeepText, a text understanding engine that, the company claims “can understand with near-human accuracy the textual content of several thousand posts per second, spanning more than 20 languages.” DeepText is used in Facebook Messenger to detect intent—for instance, by allowing you to hail an Uber from within the app when you message “I need a ride” but not when you say, “I like to ride donkeys.” DeepText is also used for automating the removal of spam, helping popular public figures sort through the millions of comments on their posts to see those most relevant, identify for sale posts automatically and extract relevant information, and identify and surface content in which you might be interested.
– Pinterest

Pinterest uses computer vision, an application of AI where computers are taught to “see,” in order to automatically identify objects in images (or “pins”) and then recommend visually similar pins. Other applications of machine learning at Pinterest include spam prevention, search and discovery, ad performance and monetization, and email marketing.
– Instagram

Instagram, which Facebook acquired in 2012, uses machine learning to identify the contextual meaning of emoji, which have been steadily replacing slang (for instance, a laughing emoji could replace “lol”).
This may seem like a trivial application of AI, but Instagram has seen a massive increase in emoji use among all demographics, and being able to interpret and analyze it at large scale via this emoji-to-text translation sets the basis for further analysis on how people use Instagram.
A few months later, it opened its messenger platform to developers, allowing anyone to build a chatbot and integrate Wit.ai’s bot training capability to more easily create conversational bots.
– Recommendations

You see recommendations for products you’re interested in as “customers who viewed this item also viewed” and “customers who bought this item also bought,” as well as via personalized recommendations on the home page, bottom of item pages, and through email.
While Amazon doesn’t reveal what proportion of its sales come from recommendations, research has shown that recommenders increase sales (in this linked study, by 5.9%, but in other studies recommenders have shown up to a 30% increase in sales) and that a product recommendation carries the same sales weight as a two-star increase in average rating (on a five-star scale).
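The “customers who bought this item also bought” idea can be approximated surprisingly simply with co-occurrence counts; the sketch below is illustrative with made-up baskets and is not Amazon’s actual recommender:

```python
# Minimal "also bought" recommender: count how often pairs of items
# appear together in the same purchase history.
from collections import Counter
from itertools import combinations

purchases = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"novel", "bookmark"},
    {"camera", "tripod"},
]

co_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1     # count the pair in both directions
        co_counts[(b, a)] += 1

def also_bought(item, k=2):
    """Top-k items most often purchased alongside `item`."""
    ranked = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [b for b, _ in ranked.most_common(k)]

print(also_bought("camera"))
```

Production recommenders add personalization, ratings, and scale, but co-occurrence counting is the intuition behind the feature.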
Square, a credit card processor popular among small businesses, charges 2.75% for card-present transactions, compared to 3.5% + 15 cents for card-absent transactions.
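To see the gap concretely, here is the fee arithmetic for a hypothetical $100 charge:

```python
# Comparing Square's stated fees on a hypothetical $100 transaction.
amount = 100.00
card_present = 0.0275 * amount           # 2.75% when the card is present
card_absent = 0.035 * amount + 0.15      # 3.5% + $0.15 when it is not
print(f"present: ${card_present:.2f}, absent: ${card_absent:.2f}")
```

The higher card-absent rate prices in the extra fraud risk of transactions where the physical card can’t be verified.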
By utilizing AI that can learn your purchasing habits, credit card processors minimize the probability of falsely declining your card while maximizing the probability of preventing somebody else from fraudulently charging it.
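That trade-off can be sketched as a threshold on a model’s fraud score (toy numbers from a hypothetical model, not any processor’s real system):

```python
# Moving a fraud-score threshold trades false declines against missed fraud.
transactions = [
    # (fraud score from a hypothetical model, was it actually fraud?)
    (0.05, False), (0.10, False), (0.40, False), (0.55, False),
    (0.60, True), (0.85, True), (0.95, True),
]

def evaluate(threshold):
    """Count legitimate charges declined and fraudulent charges approved."""
    false_declines = sum(s >= threshold and not f for s, f in transactions)
    missed_fraud = sum(s < threshold and f for s, f in transactions)
    return false_declines, missed_fraud

for t in (0.3, 0.5, 0.7):
    fd, mf = evaluate(t)
    print(f"threshold {t}: {fd} false declines, {mf} missed fraud")
```

Learning your purchasing habits effectively sharpens the score itself, so the threshold can sit where both error counts stay low.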
We may soon see retailers take it one step further and design your entire experience individually for you. Google already does this with search, even with users who are logged out, so this is well within the realm of possibility for retailers.
However, a month later Amazon’s press release boasted a 9x increase in Echo family sales over the previous year’s holiday sales, suggesting that 5 million sold is a significant underestimate.
For example, casual chess players regularly use AI powered chess engines to analyze their games and practice tactics, and bloggers often use mailing-list services that use ML to optimize reader engagement and open-rates.
Fighting Fires and Saving Elephants: How 12 Companies are Using the AI Drone to Solve Big Problems
More than ever before, drones play key problem-solving roles in a variety of sectors — including defense, agriculture, natural disaster relief, security and construction.
Now, thanks to artificial intelligence software, they can perceive their surroundings, which enables them to map areas, track objects and provide analytical feedback in real-time.
This new wave of drones, for instance, can map up to 2.7 million square miles (an area roughly as large as the contiguous 48 U.S. states).
Additionally, the military deploys them in war zones, and emergency response teams, such as firefighters battling forest fires, use them in containment and recovery efforts.

Below are five companies that install AI technology in drones.
Location: Austin, Texas How it's using AI: DroneSense is a drone software platform for public safety officials that takes raw data captured by drones and turns it into actionable insights for police, fire and other emergency teams.
The AI-powered software assists SWAT teams in gathering scene intelligence, assessing damage after hurricanes and tornadoes and even employs thermal imaging to locate missing persons.
The company claims that, in order to scan crowds for an individual, its AI-powered software needs only 20 minutes to learn an individual’s image, rather than the industry-standard hours or days.
The artificially intelligent drones use the company’s image recognition technology to monitor elephant herds and spot possible poachers miles before they reach the elephants.
Industry impact: The Scale machine learning platform is used for drone training purposes by insurance companies like Liberty Mutual, which employs the UAVs to identify and quantify insurance claims.
The company’s drone software comes with 2D/3D modeling, terrain mapping, image recognition technology and even soil analysis sensors, enabling the drones to inspect everything from crop yields to wind turbines.
Drones are surveying land and equipment at greater and more accurate rates than humanly possible, assisting military personnel in war and even flying themselves without a human operator.
How it's using AI: Airlitix makes AI- and machine learning-enabled drones for greenhouse management that can fly around greenhouses and collect data on temperature, humidity and carbon dioxide levels to ensure that plants are growing in an optimal ecosystem.
Industry impact: In addition to ecosystem measurements, the Airlitix AI drones can analyze soil and crop health to ensure that plants are disease-free and unhindered in their growth.
The zeppelin-like drone can fly over forests, mountains, quarries or farms to assess damage from natural disasters or climate change and then report back using the analytics platform.
Industry impact: Besides environmental data-gathering, Above's UAV can be used to inspect railroads, power lines or oil pipelines for quality-control and upkeep purposes.