AI News

Artificial intelligence: Rise of the machines

Elon Musk, a serial entrepreneur who made his first fortune in the early days of the world wide web, has since helped found a solar-power company to generate green electricity, an electric-car firm to liberate motorists from the internal-combustion engine, and a rocketry business—SpaceX—to pursue his desire to see a human colony on Mars within his lifetime.

In a speech in October at the Massachusetts Institute of Technology, Mr Musk described artificial intelligence (AI) as “summoning the demon”, and the creation of a rival to human intelligence as probably the biggest threat facing the world.

Nick Bostrom, a philosopher at the University of Oxford who helped develop the notion of “existential risks”—those that threaten humanity in general—counts advanced artificial intelligence as one such, alongside giant asteroid strikes and all-out nuclear war.

The business of the firms and researchers working on AI is not so much making new sorts of minds as removing some of the need for the old sort, by taking tasks that only people could once do and making them amenable to machines.

The torrent of data thrown off by the world’s internet-connected computers, tablets and smartphones, and the huge amounts of computing power now available to process that torrent, mean that algorithms are more and more capable of understanding languages, recognising images and the like.

If computers replace some of the people now doing this sort of work, either by providing an automated alternative or by making a few such workers far more productive, there will be more white collars in the dole queue.

Firms such as Narrative Science, in Chicago, which hopes to automate the writing of reports (and which is already used by Forbes, a business magazine, to cover basic financial stories), and Kensho, of Cambridge, Massachusetts, which aims to automate some of the work done by “quants” in the financial industry, have been showered in cash by investors.

Much of the current excitement concerns a subfield of AI called “deep learning”, a modern refinement of “machine learning”, in which computers teach themselves tasks by crunching large sets of data.

Algorithms created in this manner are a way of bridging a gap that bedevils all AI research: by and large, tasks that are hard for humans are easy for computers, and vice versa.

At the same time, the most powerful computers have, in the past, struggled with things that people find trivial, such as recognising faces, decoding speech and identifying objects in images.

Frustrated by the difficulty of coming up with a legally watertight definition of pornography, the American Supreme Court justice Potter Stewart threw up his hands and wrote that, although he could not define porn in the abstract, “I know it when I see it.” Machine learning is a way of getting computers to know things when they see them, by producing for themselves the rules their programmers cannot specify.
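
The point can be made concrete in a few lines of Python. Below is a minimal sketch using scikit-learn with invented toy data: the programmer supplies labelled examples, and the model derives the rules.

```python
# Minimal sketch of machine learning as "rules from examples".
# Assumes scikit-learn; the data below is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy labelled examples: [hour_sent, message_length], labelled
# 1 if a human marked the message as spam, 0 otherwise (made-up data).
X = [[3, 900], [4, 850], [14, 120], [15, 80], [2, 950], [13, 100]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)

# The tree has induced its own decision rules from the examples;
# nobody wrote "messages this long, sent at this hour, are spam".
print(model.predict([[3, 880]]))   # -> [1]
```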

In the past few years, however, the remarkable number-crunching power of chips developed for the demanding job of drawing video-game graphics has revived interest.

By working from the bottom up in this way, machine-learning algorithms learn to recognise features, concepts and categories that humans understand but struggle to define in code.

Programs often needed hints from their designers, in the form of hand-crafted bits of code that were specific to the task at hand—one set of tweaks for processing images, say, and another for voice recognition.
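
The difference is visible even in a toy example. The small network below, written with numpy only, gets no task-specific tweaks, just raw inputs and labels for XOR (a function no single linear rule captures), and its hidden layer works out its own internal features.

```python
# Bottom-up feature learning, as a toy: a two-layer network taught XOR
# by gradient descent. numpy only; sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # learned intermediate features
    p = sigmoid(h @ W2 + b2)             # predicted probability
    grad_out = p - y                     # cross-entropy gradient at output
    grad_hid = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * X.T @ grad_hid; b1 -= 0.5 * grad_hid.sum(0)

print(p.round(2).ravel())                # approaches [0, 1, 1, 0]
```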

In 2014 Facebook unveiled an algorithm called DeepFace that can recognise specific human faces in images around 97% of the time, even when those faces are partly hidden or poorly lit.
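
A common way to frame this kind of face matching, sketched here with made-up numbers rather than DeepFace's actual pipeline, is to map each photo to a numeric "embedding" and compare embeddings.

```python
# Sketch of embedding-based face verification (illustrative numbers;
# real systems get these vectors from a trained deep network).
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

face_a = np.array([0.90, 0.10, 0.30])    # photo 1 of person X
face_b = np.array([0.88, 0.12, 0.31])    # photo 2 of person X, poor lighting
face_c = np.array([-0.20, 0.90, 0.10])   # photo of person Y

THRESHOLD = 0.8                          # tuned on validation data in practice
print(cosine_similarity(face_a, face_b) > THRESHOLD)  # True  -> same person
print(cosine_similarity(face_a, face_c) > THRESHOLD)  # False -> different person
```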

Microsoft likes to boast that the object-recognition software it is developing for Cortana, a digital personal assistant, can tell its users the difference between a picture of a Pembroke Welsh Corgi and a Cardigan Welsh Corgi, two dog breeds that look almost identical (see pictures).

A report published on May 5th showed how America’s spies use voice-recognition software to convert phone calls into text, in order to make their contents easier to search.

The machine learned to categorise common things it saw, including human faces and (to the amusement of the internet’s denizens) the cats—sleeping, jumping or skateboarding—that are ubiquitous online.

Being able to break down and interpret a scene would be useful for robotics researchers, for instance, helping their creations—from industrial helpmeets to self-driving cars to battlefield robots—to navigate the cluttered real world.

Deep learning is a general-purpose pattern-recognition technique, which means, in principle, that any activity with access to large amounts of data—from running an insurance business to research into genetics—might find it useful.
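
That generality is easy to demonstrate: the identical training code below runs unchanged on two unrelated scikit-learn toy datasets, standing in for, say, insurance records and genetic data.

```python
# The same pattern-recognition code applied to two unrelated domains.
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

for load in (load_breast_cancer, load_wine):
    data = load()
    model = LogisticRegression(max_iter=5000)
    score = cross_val_score(model, data.data, data.target).mean()
    print(f"{load.__name__}: mean accuracy {score:.2f}")
```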

At a recent competition held at CERN, the world’s biggest particle-physics laboratory, deep-learning algorithms did a better job of spotting the signatures of subatomic particles than the software written by physicists—even though the programmers who created these algorithms had no particular knowledge of physics.

There is no result from decades of neuroscientific research to suggest that the brain is anything other than a machine, made of ordinary atoms, employing ordinary forces and obeying the ordinary laws of nature.

Computers can now do some narrowly defined tasks which only human brains could manage in the past (the original “computers”, after all, were humans, usually women, employed to do the sort of tricky arithmetic that the digital sort find trivially easy).

Such mistakes offer insight into how the algorithms operate—by matching patterns to other patterns, but doing so blindly, with no recourse to the sort of context (like realising that a baseball is a physical object, not just an abstract pattern vaguely reminiscent of stitching) that stops people falling into the same traps.

It is even possible to construct images that, to a human, look like meaningless television static, but which neural networks nevertheless confidently classify as real objects.
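
A toy illustration of why such images can exist, using numpy only (the linear "detector" below is an invented stand-in, not a real vision model): start from noise and nudge each pixel in whatever direction raises the model's confidence.

```python
# Turning static into a "confident" detection by gradient ascent.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)                   # toy 8x8-"pixel" object detector
sigmoid = lambda z: 1 / (1 + np.exp(-z))

x = 0.01 * rng.normal(size=64)            # meaningless static
for _ in range(100):
    s = sigmoid(x @ w)                    # current confidence in "object"
    x += 0.1 * s * (1 - s) * w            # follow the confidence gradient

print(f"confidence: {sigmoid(x @ w):.3f}")  # near 1.0, yet x still looks like noise
```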

Kensho’s system is designed to interpret natural-language search queries such as, “What happens to car firms’ share prices if oil drops by $5 a barrel?” It will then scour financial reports, company filings, historical market data and the like, and return replies, also in natural language, in seconds.

Yseop, a French firm, uses its natural-language software to interpret queries, chug through data looking for answers, and then write them up in English, Spanish, French or German at 3,000 pages a second.
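
In spirit, such report writers combine data lookups with per-language templates. The sketch below is hypothetical and much simplified, not Yseop's actual system.

```python
# Hypothetical template-based report generation in two languages.
TEMPLATES = {
    "en": "{name}'s revenue {verb} {pct:.1f}% to {curr}m this quarter.",
    "es": "Los ingresos de {name} {verb} un {pct:.1f}%, hasta {curr} millones.",
}
VERBS = {"en": ("rose", "fell"), "es": ("subieron", "bajaron")}

def write_summary(name, prev, curr, lang="en"):
    pct = (curr - prev) / prev * 100
    verb = VERBS[lang][0 if pct >= 0 else 1]
    return TEMPLATES[lang].format(name=name, verb=verb, pct=abs(pct), curr=curr)

print(write_summary("Acme", prev=120, curr=138))              # English report
print(write_summary("Acme", prev=120, curr=138, lang="es"))   # Spanish report
```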

Forecasting how many more jobs might go the same way is much harder—although a paper from the Oxford Martin School, published in 2013, scared plenty of people by concluding that up to half of the job categories tracked by American statisticians might be vulnerable.

Perhaps the best way to think about AI is to see it as simply the latest in a long line of cognitive enhancements that humans have invented to augment the abilities of their brains.

10 Companies Using Machine Learning in Cool Ways

If science-fiction movies have taught us anything, it’s that the future is a bleak and terrifying dystopia ruled by murderous sentient robots.

Classifying images into simple exterior/interior categories is easy for humans, but surprisingly difficult for computers. Since images are almost as vital to Yelp as user reviews themselves, it should come as little surprise that Yelp is always trying to improve how it handles image processing.

Yelp’s machine learning algorithms help the company’s human staff to compile, categorize, and label images more efficiently — no small feat when you’re dealing with tens of millions of photos.

Twitter has been at the center of numerous controversies of late (not least of which were the much-derided decisions to round out everyone’s avatars and changes to the way people are tagged in @ replies), but one of the more contentious changes we’ve seen on Twitter was the move toward an algorithmic feed.

Rob Lowe was particularly upset by the introduction of algorithmically curated Twitter timelines. Whether you prefer to have Twitter show you “the best tweets first” (whatever that means) or a reasonably chronological timeline, these changes are being driven by Twitter’s machine learning technology.
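
Conceptually, an algorithmic timeline just reorders posts by a model's predicted engagement rather than by timestamp. The sketch below uses invented fields and a hand-written stand-in for the learned scoring model; it is not Twitter's actual system.

```python
# "Best tweets first": rank by a predicted-engagement score.
tweets = [
    {"text": "breaking industry news", "likes": 900, "age_hours": 10},
    {"text": "photo of my lunch",      "likes": 3,   "age_hours": 1},
    {"text": "clever one-liner",       "likes": 45,  "age_hours": 2},
]

def predicted_engagement(tweet):
    # stand-in for a learned model: popularity decayed by age
    return tweet["likes"] / (1 + tweet["age_hours"])

for tweet in sorted(tweets, key=predicted_engagement, reverse=True):
    print(tweet["text"])
```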

(Image: a selection of images created by Google’s neural network.)

The most visible development in Google’s neural network research has been DeepDream, the “machine that dreams”: the same network that produced those psychedelic images everybody was talking about a while back.

According to Google, the company is researching “virtually all aspects of machine learning,” which will lead to exciting developments in what Google calls “classical algorithms” as well as other applications including natural language processing, speech translation, and search ranking and prediction systems.

In addition to streamlining the ecommerce experience in order to improve conversion rates, Edgecase plans to leverage its tech to provide a better experience for shoppers who may only have a vague idea of what they’re looking for. By analyzing certain behaviors and actions that signify commercial intent, it aims to make casual browsing online more rewarding and closer to the traditional retail experience.

(Figure: a simplified five-step diagram illustrating the key stages of a natural language processing system.)

One of the most interesting (and disconcerting) developments at Baidu’s R&D lab is what the company calls Deep Voice, a deep neural network that can generate entirely synthetic human voices that are very difficult to distinguish from genuine human speech.

Far from an idle experiment, Deep Voice 2 — the latest iteration of the Deep Voice technology — promises to have a lasting impact on natural language processing, the underlying technology behind voice search and voice pattern recognition systems.

Predictive lead scoring is just one of the many potential applications of AI and machine learning. HubSpot plans to use Kemvi’s technology in a range of applications — most notably, integrating Kemvi’s DeepGraph machine learning and natural language processing tech in its internal content management system.

This, according to HubSpot’s Chief Strategy Officer Bradford Coffey, will allow HubSpot to better identify “trigger events” — changes to a company’s structure, management, or anything else that affects day-to-day operations — to allow HubSpot to more effectively pitch prospective clients and serve existing customers.

Salesforce Einstein allows businesses that use Salesforce’s CRM software to analyze every aspect of a customer’s relationship — from initial contact to ongoing engagement touch points — to build much more detailed profiles of customers and identify crucial moments in the sales process.
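
In outline, predictive lead scoring of the kind HubSpot and Salesforce describe can be sketched as a classifier trained on past outcomes. The features and data below are invented for illustration, not either company's actual model.

```python
# Score a new lead by the probability that similar past leads converted.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [pages_viewed, emails_opened, days_since_last_contact];
# label 1 means the historical deal closed (all numbers invented).
X = [[12, 5, 2], [1, 0, 60], [8, 3, 7], [2, 1, 45], [15, 6, 1], [3, 0, 30]]
y = [1, 0, 1, 0, 1, 0]

model = GradientBoostingClassifier().fit(X, y)
new_lead = [[10, 4, 3]]
print(f"conversion probability: {model.predict_proba(new_lead)[0, 1]:.2f}")
```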

Can Artificial Intelligence Identify Pictures Better than Humans?

Computer-based artificial intelligence (AI) has been around since the 1940s, but the current innovation boom around everything from virtual personal assistants and visual search engines to real-time translation and driverless cars has led to new milestones in the field.

As image recognition experiments have shown, computers can identify hundreds of breeds of cats and dogs faster and more accurately than humans, but does that mean that machines are better than us at recognizing what’s in a picture?

Loosely based on human brain processes, deep learning implements large artificial neural networks (hierarchical layers of interconnected nodes) that rearrange themselves as new information comes in, enabling computers to teach themselves.
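
Those "hierarchical layers of interconnected nodes" can be shown directly. Below is a forward-pass-only sketch in numpy with arbitrary random weights (a trained network would have learned them).

```python
# Data flowing through stacked layers, each re-representing the last.
import numpy as np

rng = np.random.default_rng(0)
layer_weights = [rng.normal(size=(64, 32)),   # pixels -> low-level features
                 rng.normal(size=(32, 16)),   # -> mid-level features
                 rng.normal(size=(16, 1))]    # -> final score

x = rng.normal(size=64)                       # e.g. a flattened 8x8 image
for W in layer_weights:
    x = np.maximum(0, x @ W)                  # linear map + ReLU nonlinearity

print(x)                                      # the network's one-number output
```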

Considering that more than three billion images are shared across the internet every day (Google Photos alone saw uploads of 50 billion photos in its first four months of existence), it’s safe to say that the amount of data available for training these days is phenomenal.

Then, in 2012, a team at the Google X research lab approached the task a different way, by feeding 10 million randomly selected thumbnail images from YouTube videos into an artificial neural network with more than 1 billion connections spread over 16,000 CPUs.

The computer looked for the most recurring images and accurately identified ones that contained faces 81.7 percent of the time, human body parts 76.7 percent of the time, and cats 74.8 percent of the time.

The accomplishment was not simply correctly identifying images containing dogs, but correctly identifying around 200 different dog breeds in images, something that only the most computer-savvy canine experts might be able to accomplish in a speedy fashion.

That may not sound impressive, but as in the previous example with dog breeds, the computer was able to correctly identify which type of bird was drawn in the sketch 42.5 percent of the time, nearly twice the 24.8 percent accuracy of the people in the study.

What computers are better at is sorting through vast amounts of data and processing it quickly, which comes in handy when, say, a radiologist needs to narrow down a list of x-rays with potential medical maladies or a marketer wants to find all the images relevant to his brand on social media.

From robot disaster relief and large-object avoidance in cars to high-tech criminal investigations and augmented reality (AR) gaming leaps and bounds beyond Pokemon GO, computer vision’s future may well lie in things that humans simply can’t (or won’t) do.

Related videos

Google's DeepMind AI Just Taught Itself To Walk; Google's Deep Mind Explained! - Self Learning A.I.; Google and NASA's Quantum Artificial Intelligence Lab; What Makes A Machine Intelligent?; Deep image reconstruction: Natural images; English for Technology - VV 54: Artificial Intelligence (AI) | Business English Vocabulary; Avi Goldfarb & Ajay Agrawal: "Prediction Machines: The Simple Economics of AI" | Talks at Google; Introducing ML.NET: Build 2018; The computer that mastered Go; What Google's Hallucinating Computers See.