Rise of the machines
- On Sunday, June 3, 2018
Elon Musk, a serial entrepreneur who made his first fortune in the early days of the world wide web, has since helped found a solar-power company to generate green electricity, an electric-car firm to liberate motorists from the internal-combustion engine, and a rocketry business—SpaceX—to pursue his desire to see a human colony on Mars within his lifetime.
In a speech in October at the Massachusetts Institute of Technology, Mr Musk described artificial intelligence (AI) as “summoning the demon”, and the creation of a rival to human intelligence as probably the biggest threat facing the world.
Nick Bostrom, a philosopher at the University of Oxford who helped develop the notion of “existential risks”—those that threaten humanity in general—counts advanced artificial intelligence as one such, alongside giant asteroid strikes and all-out nuclear war.
The business of the firms now commercialising AI is not so much making new sorts of minds as removing some of the need for the old sort, by taking tasks that used to be things only people could do and making them amenable to machines.
The torrent of data thrown off by the world’s internet-connected computers, tablets and smartphones, and the huge amounts of computing power now available for processing that torrent, mean that their algorithms are more and more capable of understanding languages, recognising images and the like.
If computers replace some of the people now doing such white-collar work, either by providing an automated alternative or by making a few such workers far more productive, there will be more white collars in the dole queue.
Firms such as Narrative Science, in Chicago, which hopes to automate the writing of reports (and which is already used by Forbes, a business magazine, to cover basic financial stories), and Kensho, of Cambridge, Massachusetts, which aims to automate some of the work done by “quants” in the financial industry, have been showered in cash by investors.
Much of the current excitement concerns a subfield of AI called “deep learning”, a modern refinement of “machine learning”, in which computers teach themselves tasks by crunching large sets of data.
Algorithms created in this manner are a way of bridging a gap that bedevils all AI research: by and large, tasks that are hard for humans are easy for computers, and vice versa.
At the same time, the most powerful computers have, in the past, struggled with things that people find trivial, such as recognising faces, decoding speech and identifying objects in images.
Frustrated by the difficulty of coming up with a legally watertight definition of pornography, the American judge Potter Stewart threw up his hands and wrote that, although he could not define porn in the abstract, “I know it when I see it.” Machine learning is a way of getting computers to know things when they see them, by producing for themselves the rules their programmers cannot specify.
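The idea can be sketched with the simplest learning rule of all, a perceptron: instead of being handed a rule, the program is handed labelled examples and derives its own decision boundary by nudging weights whenever it errs. The toy data below is invented purely for illustration.

```python
# A perceptron: given labelled examples rather than rules, it produces
# its own classification rule by adjusting weights on each mistake.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else 0
            err = label - pred          # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented toy data: points near the origin are class 0, far ones class 1.
data = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.8, 0.9], 1)]
w, b = train_perceptron(data)
print(predict(w, b, [0.95, 0.85]))  # → 1 (resembles the class-1 examples)
```

The programmer never writes down what separates the two classes; the weights encode the learned rule. Deep learning stacks many layers of units like this, so the features themselves are learned too.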
In the past few years, however, the remarkable number-crunching power of graphics processing units (GPUs)—chips developed for the demanding job of drawing video-game graphics—has revived interest in the approach.
By working from the bottom up in this way, machine-learning algorithms learn to recognise features, concepts and categories that humans understand but struggle to define in code.
Programs often needed hints from their designers, in the form of hand-crafted bits of code that were specific to the task at hand—one set of tweaks for processing images, say, and another for voice recognition.
In 2014 Facebook unveiled an algorithm called DeepFace that can recognise specific human faces in images around 97% of the time, even when those faces are partly hidden or poorly lit.
Microsoft likes to boast that the object-recognition software it is developing for Cortana, a digital personal assistant, can tell its users the difference between a picture of a Pembroke Welsh Corgi and a Cardigan Welsh Corgi, two dog breeds that look almost identical (see pictures).
A report published on May 5th showed how America’s spies use voice-recognition software to convert phone calls into text, in order to make their contents easier to search.
In one celebrated Google experiment, a machine left to rummage through millions of stills taken from YouTube videos learned to categorise common things it saw, including human faces and (to the amusement of the internet’s denizens) the cats—sleeping, jumping or skateboarding—that are ubiquitous online.
Being able to break down and interpret a scene would be useful for robotics researchers, for instance, helping their creations—from industrial helpmeets to self-driving cars to battlefield robots—to navigate the cluttered real world.
Deep learning is a general-purpose pattern-recognition technique, which means, in principle, that any activity with access to large amounts of data—from running an insurance business to research into genetics—might find it useful.
At a recent competition held at CERN, the world’s biggest particle-physics laboratory, deep-learning algorithms did a better job of spotting the signatures of subatomic particles than the software written by physicists—even though the programmers who created these algorithms had no particular knowledge of physics.
There is no result from decades of neuroscientific research to suggest that the brain is anything other than a machine, made of ordinary atoms, employing ordinary forces and obeying the ordinary laws of nature.
Computers can now do some narrowly defined tasks which only human brains could manage in the past (the original “computers”, after all, were humans, usually women, employed to do the sort of tricky arithmetic that the digital sort find trivially easy).
Such misclassifications offer insight into how the algorithms operate: they match patterns to other patterns, but do so blindly, with no recourse to the sort of context (like realising a baseball is a physical object, not just an abstract pattern vaguely reminiscent of stitching) that stops people falling into the same traps.
It is even possible to construct images that, to a human, look like meaningless television static, but which neural networks nevertheless confidently classify as real objects.
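The effect is easy to reproduce even with the simplest possible classifier. A linear model scores an input by a weighted sum, so an input built by copying the sign pattern of the weights themselves—structureless static to a human eye—scores enormously. The "cat detector" below, with its random invented weights, is a deliberately crude stand-in for a trained network.

```python
import random

# A toy linear "cat detector": score = w . x, call it a cat if score > 1.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(100)]   # invented "learned" weights

def cat_score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Honest random noise: its score hovers near zero, since the terms cancel.
noise = [random.uniform(-1, 1) for _ in range(100)]

# Adversarial "static": each pixel simply copies the sign of its weight.
# To a person it looks like television static; every term in the sum is
# positive, so the model is supremely confident it is seeing a cat.
static = [0.5 if wi > 0 else -0.5 for wi in w]

print(cat_score(noise) > 1)    # usually False for random input
print(cat_score(static) > 1)   # True: the score is half the sum of |w_i|
```

Real attacks on deep networks use gradients rather than this sign trick, but the principle is the same: the pattern-matcher can be driven to certainty by inputs that mean nothing to a human.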
Kensho’s system is designed to interpret natural-language search queries such as, “What happens to car firms’ share prices if oil drops by $5 a barrel?” It will then scour financial reports, company filings, historical market data and the like, and return replies, also in natural language, in seconds.
Yseop, a French firm, uses its natural-language software to interpret queries, chug through data looking for answers, and then write them up in English, Spanish, French or German at 3,000 pages a second.
Forecasting how many more jobs might go the same way is much harder—although a paper from the Oxford Martin School, published in 2013, scared plenty of people by concluding that up to half of the job categories tracked by American statisticians might be vulnerable.
Perhaps the best way to think about AI is to see it as simply the latest in a long line of cognitive enhancements that humans have invented to augment the abilities of their brains.
- On Tuesday, March 26, 2019
Google's DeepMind AI Just Taught Itself To Walk
Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...
Google's Deep Mind Explained! - Self Learning A.I.
The wonderful and terrifying implications of computers that can learn | Jeremy Howard
What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of ...
The jobs we'll lose to machines -- and the ones we won't | Anthony Goldbloom
Machine learning isn't just for simple tasks like assessing credit risk and sorting mail anymore -- today, it's capable of far more complex applications, like grading ...
Google DeepMind's Deep Q-learning playing Atari Breakout
Google DeepMind created an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level.
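The Q-learning in question can be sketched in its classic tabular form; DeepMind's "deep" variant replaces the lookup table with a neural network fed raw screen pixels. The five-cell corridor below is an invented toy environment, not the Atari setup: the agent starts at the left end and earns a reward only on reaching the right end.

```python
import random

random.seed(1)
N = 5                        # corridor cells 0..4; reaching cell 4 pays reward 1
ACTIONS = (-1, +1)           # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                     # episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit the table, occasionally explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)   # walls at both ends
        reward = 1.0 if s2 == N - 1 else 0.0
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy per cell after training; +1 means "step right".
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

After training, the greedy policy steps right from every cell, because the discounted value of the goal has propagated back through the table. The self-improvement the video describes is exactly this loop, scaled up: play, observe reward, update estimates, play again.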
Impact of Algorithms in Dermatology | Mircea Popa | TEDxEroilor
How AI combined with simple technology can help us even when trying to identify skin cancer. Mircea Popa is the cofounder of SkinVision, a company which ...
Deep image reconstruction: Geometric shapes
Reconstruction of visual images from human brain activity measured by fMRI. To reconstruct visual images, we first decoded (translated) measured brain activity ...
The computer that mastered Go
Go is an ancient Chinese board game, often viewed as the game computers could never play. Now researchers from Google-owned company DeepMind have ...
Deep Blue vs Kasparov: How a computer beat best chess player in the world - BBC News
Twenty years ago IBM's Deep Blue defeated previously unbeaten chess grandmaster Garry Kasparov. Its designers tell the BBC how they won and what it means ...
IBM Watson: How it Works
Learn how IBM Watson works and has similar thought processes to a human.