AI News, 22 Best Artificial Intelligence images in 2019
- On 28 May 2019
DeepMind and Google: the battle to control artificial intelligence
One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage.
Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: “So today I’m going to be talking about different approaches to building…” He stalled, as though just realising that he was stating his momentous ambition out loud.
AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.
AGI will be able to complete discrete tasks – such as recognising photos or translating languages – that are today the single-minded focus of the multitude of narrow artificial intelligences (AIs) that inhabit our phones and computers.
It will also understand physics papers, compose novels, devise investment strategies and make delightful conversation with strangers.
It will monitor nuclear reactions, manage electricity grids and traffic flow, and effortlessly succeed at everything else.
But soon enough it will discover new sources of energy by digesting more physics papers in a second than a human could in a thousand lifetimes.
Hassabis told the Observer, a British newspaper, that he expected AGI to master, among other disciplines, “cancer, climate change, energy, genomics, macro-economics [and] financial systems”.
Since this future is constructed entirely on a scaffolding of untested presumptions, it is a matter of almost religious belief whether one considers the Singularity to be Utopia or hell.
On one track, known as symbolic AI, human researchers tried to describe and program all the rules needed for a system that could think like a human.
Instead, Hassabis proposed a middle ground: AGI should take inspiration from the broad methods by which the brain processes information – not the physical systems or the particular rules it applies in specific situations.
New techniques like functional magnetic resonance imaging (fMRI), which made it possible to peer inside the brain while it engaged in activities, had started to make this kind of understanding feasible.
The latest studies, he told the audience, showed that the brain learns by replaying experiences during sleep, in order to derive general principles.
In a talk at the Singularity Summit in 2009, Thiel had said that his biggest fear for the future was not a robot uprising (though with an apocalypse-proof bolthole in the New Zealand outback, he’s better prepared than most people).
Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies.
It was a huge success, selling 15m copies and forming part of a new genre of simulation games in which the goal is not to defeat an opponent but to optimise the functioning of a complex system like a business or a city.
As a teen, he’d run between floors at board-game competitions to compete in simultaneous bouts of chess, Scrabble, poker and backgammon.
When Hassabis met Masahiko Fujiwara, a Japanese board-game master, he spoke of a plan that would combine his interests in strategy games and artificial intelligence: one day he would build a computer program to beat the greatest human Go player.
Years earlier, when still in school, Hassabis had told his friend Mustafa Suleyman that the world needed grand simulations in order to model its complex dynamics and solve the toughest social problems.
One paper, which has since been cited over 1,000 times, showed that people with amnesia also had difficulty imagining new experiences, suggesting that there is a connection between remembering and creating mental images.
Such a program is built to gather information about its environment, then learn from it by repeatedly replaying its experiences, much like the description that Hassabis gave of human-brain activity during sleep in his Singularity Summit lecture.
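The replay mechanism described above can be sketched in a few lines. This is a minimal, illustrative version of an experience-replay buffer of the kind used in reinforcement learning; the class and parameter names are ours, not DeepMind's, and a real agent would pair this with a learning algorithm.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so an agent can learn from them repeatedly."""

    def __init__(self, capacity=10_000):
        # Old experiences are discarded automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Replaying a random batch of past experiences, rather than only the
        # most recent one, breaks up correlations between consecutive
        # observations -- the rough analogue of "replaying during sleep".
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Toy usage: record some transitions, then replay a batch of them.
buf = ReplayBuffer()
for step in range(100):
    buf.add(state=step, action=step % 4, reward=1.0, next_state=step + 1)
batch = buf.sample(32)
```

The agent interleaves acting in the environment with learning passes over batches drawn from this buffer.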
DeepMind’s work culminated in 2016 when a team built an AI program that used reinforcement learning alongside other techniques to play Go.
Its grace and sophistication, the transcendence of its computational muscle, seemed to show that DeepMind was further ahead than its competitors on the quest for a program that could treat disease and manage cities.
Far from being a cosmetic concession from Google, the Ethics Board gives DeepMind solid legal backing to keep control of its most valuable and potentially most dangerous technology, according to the same source.
(DeepMind refused to answer a detailed set of questions about the Review Agreement but said that “ethics oversight and governance has been a priority for us from the earliest days.”) Hassabis can determine DeepMind’s destiny by other means too.
His programme, which offers fascinating and important work free from the pressures of academia, has attracted hundreds of the world’s most talented experts.
Another program learned to play chess from scratch using a similar architecture to AlphaGo's, becoming the greatest chess player of all time after just nine hours of playing against itself.
In December 2018 a program called AlphaFold proved more accurate than competitors at predicting the three-dimensional structure of proteins from a list of their constituent parts, potentially paving the way to treat diseases such as Parkinson’s and Alzheimer’s.
DeepMind is particularly proud of the algorithms it developed that calculate the most efficient means to cool Google’s data centres, which contain an estimated 2.5m computer servers.
Google Fiber, an effort to build an internet-service provider, was put on hiatus after it became clear that it would take decades to make a return on investment.
Some researchers complain that it can be difficult to publish their work: they have to battle through layers of internal approval before they can even submit work to conferences and journals.
The firm’s founders and early employees are approaching earn-out, when they can leave with the financial compensation that they received from the acquisition (Hassabis’s stock was probably worth around £100m).
Suleyman, whose mother was an NHS nurse, hoped to create a program called Streams that would warn doctors when a patient’s health deteriorated.
Because this work required access to sensitive information about patients, Suleyman established an Independent Review Panel (IRP) populated by the great and good of British health care and technology.
Suleyman had written in 2016 that “at no stage will patient data ever be linked or associated with Google accounts, products or services.” His promise seemed to have been broken.
(“Streams becoming a Google service does not mean the patient data...can be used to provide other Google products or services.”) Google’s annexation has angered employees at DeepMind Health.
When Bracken asked Suleyman if he would give panel members the accountability and governance powers of non-executive directors, Suleyman scoffed. (A spokesperson for DeepMind said they had “no recollection” of the incident.) Julian Huppert, the head of the IRP, argues that the panel delivered “more radical governance” than Bracken expected because members were able to speak openly and not bound by a duty of confidentiality.
DeepMind said in a statement that “we all agreed that it makes sense to bring these efforts together in one collaborative effort, with increased resources.” This raises the question of whether Google will apply the same logic to DeepMind’s work on AGI.
A Breakout player controls a bat that she can move horizontally across the bottom of the screen, using it to bounce a ball against blocks that hover above it, destroying them on impact.
Without human instruction, DeepMind’s program not only learned to play the game but also worked out how to cannon the ball into the space behind the blocks, taking advantage of rebounds to break more blocks.
The skill learned by DeepMind’s program is so restricted that it cannot react even to tiny changes to the environment that a person would take in their stride – at least not without thousands more rounds of reinforcement learning.
The second caveat, which DeepMind rarely talks about, is that success within virtual environments depends on the existence of a reward function: a signal that allows software to measure its progress.
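To make the idea of a reward function concrete, here is a toy sketch: a tabular Q-learning agent in a five-cell corridor, where the reward function is simply +1 for reaching the rightmost cell. The environment, algorithm and hyperparameters are entirely illustrative and have nothing to do with DeepMind's systems; the point is only that the agent's whole notion of "progress" reduces to this one scalar signal.

```python
import random

# Toy environment: a corridor of 5 cells. The reward function is the
# scalar signal the agent optimises: +1 at the rightmost cell, 0 elsewhere.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: move left or move right
GOAL = N_STATES - 1

def reward(state):
    return 1.0 if state == GOAL else 0.0

# Tabular Q-learning: the agent improves purely by chasing that signal.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        # Standard Q-learning update toward reward plus discounted future value.
        target = reward(s2) + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned policy: the preferred action in each cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
```

With a well-defined reward like this, learning is straightforward; the article's point is that for problems like climate or public health, no such single uncontested scalar exists.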
Reconciling the reward signal for climate health (the concentration of CO₂ in the atmosphere) with the reward signal for oil companies (share price) requires satisfying many human beings with conflicted motivations.
Decisions taken early in the game have ramifications later on, which is closer to the sort of convoluted and delayed feedback that characterises many real-world tasks.
But putting human instruction in the loop risks losing the effects of scale and speed that unadulterated computer-processing offered.
Current and former researchers at DeepMind and Google, who requested anonymity due to stringent non-disclosure agreements, have also expressed scepticism that DeepMind can reach AGI through such methods.
To these individuals, the focus on achieving high performance within simulated environments makes the reward-signal problem hard to tackle.
The pursuit of AGI may eventually lose its way, having invented some useful medical technologies and out-classed the world’s greatest board-game players.
- On 9 August 2020
The Next Frontier Of Artificial Intelligence Is Here, And It's A Bit Eerie
Hello, welcome to NeoScribe. Using our imagination is easy. We can all close our eyes and think of ice cream, or cake, or even better, cake and ice cream.
How smart is today's artificial intelligence?
Current AI is impressive, but it's not intelligent.
Research at NVIDIA: AI Reconstructs Photos with Realistic Results
Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that ...
MIT 6.S093: Introduction to Human-Centered Artificial Intelligence (AI)
Introductory lecture on Human-Centered Artificial Intelligence (MIT 6.S093) I gave on February 1, 2019. For more lecture videos on deep learning, reinforcement ...
Google and NASA's Quantum Artificial Intelligence Lab
A peek at the early days of the Quantum AI Lab: a partnership between NASA, Google, USRA, and a 512-qubit D-Wave Two quantum computer. Learn more at ...
Worship Artificial Intelligence Or Else?! Google Exec Creates A.I. Church - Image Of The Beast
NVIDIA's Image Restoration AI: Almost Perfect
The paper "Noise2Noise: Learning Image Restoration without Clean Data" and its source code are available here: 1. 2
AI Codes its Own ‘AI Child’ - Artificial Intelligence breakthrough!
5 CREEPIEST Things Done By Artificial Intelligence Robots...
Artificial Intelligence in Google reverse image search
Objects in a screenshot from a ReCap point cloud can be recognized in the Google reverse image search. Is this the way software works for us in the future?