AI News: Artificial General Intelligence
Tomorrow's ‘general’ AI revolution will grow from today's technology
In today's pop culture, machines with artificial general intelligence (AGI) are typically portrayed as walking, talking human analogs replete with personalities -- from The Terminator's murderous intent to Vision's noble heroism.
AGI isn't so much a singular standalone system -- it's no digital Athena bursting forth from Zeus' forehead -- but rather a threshold of capability derived from a collection of narrow AIs working together, Michael said.
'It's the combination of specialized AI that creates increasingly sophisticated specialized intelligence that allows this organic intelligent system to become increasingly capable,' he said.
Michael pointed out that, thanks to advances in data storage, 'we start to get to this area of asking, how do we understand the nature of the data that we have, how that influences the performance of the algorithms, and how that change in performance impacts system performance overall.'
'If we understand the nature of the algorithm then we understand how it behaves given different types of data inputs and how that variation will ultimately impact the performance of the algorithm.'
Just as we today expect Siri to give us accurate weather reports and our GPS systems to keep us from driving off cliffs, the general AIs of tomorrow will need to earn the trust of their users if they're to be widely adopted.
Sure, we bellyache about robots coming to take all our most menial, dangerous and low-paying jobs and flip out with T2 memes every time Boston Dynamics releases a new robo-hound -- but humans have shown themselves to be more than ready and willing to adapt to new tech.
However, the people of the time (if not the horses) were able to progressively adapt to the presence of the new technology, in part due to its incremental introduction.
Yet within 12 months, we've not only seen the service spread across the smartphone ecosystem and 43 states, but the introduction of a complementary AI service for businesses, dubbed CallJoy, as well.
And with the company's latest breakthrough in machine learning, the Assistant will soon be unshackled from network connectivity, an advancement that will pack the full power of Google's AI into any smartphone.
DeepMind and Google: the battle to control artificial intelligence
One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage.
Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: “So today I’m going to be talking about different approaches to building…” He stalled, as though just realising that he was stating his momentous ambition out loud.
AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.
AGI will be able to complete discrete tasks, such as recognising photos or translating languages, which are the single-minded focus of the multitude of artificial intelligences (AIs) that inhabit our phones and computers.
It will also understand physics papers, compose novels, devise investment strategies and make delightful conversation with strangers.
It will monitor nuclear reactions, manage electricity grids and traffic flow, and effortlessly succeed at everything else.
But soon enough it will discover new sources of energy by digesting more physics papers in a second than a human could in a thousand lifetimes.
Hassabis told the Observer, a British newspaper, that he expected AGI to master, among other disciplines, “cancer, climate change, energy, genomics, macro-economics [and] financial systems”.
Since this future is constructed entirely on a scaffolding of untested presumptions, it is a matter of almost religious belief whether one considers the Singularity to be Utopia or hell.
On one track, known as symbolic AI, human researchers tried to describe and program all the rules needed for a system that could think like a human.
Instead, Hassabis proposed a middle ground: AGI should take inspiration from the broad methods by which the brain processes information – not the physical systems or the particular rules it applies in specific situations.
New techniques like functional magnetic resonance imaging (fMRI), which made it possible to peer inside the brain while it engaged in activities, had started to make this kind of understanding feasible.
The latest studies, he told the audience, showed that the brain learns by replaying experiences during sleep, in order to derive general principles.
In a talk at the Singularity Summit in 2009, Thiel had said that his biggest fear for the future was not a robot uprising (though with an apocalypse-proof bolthole in the New Zealand outback, he’s better prepared than most people).
Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies.
It was a huge success, selling 15m copies and forming part of a new genre of simulation games in which the goal is not to defeat an opponent but to optimise the functioning of a complex system like a business or a city.
As a teen, he’d run between floors at board-game competitions to compete in simultaneous bouts of chess, Scrabble, poker and backgammon.
When Hassabis met Masahiko Fujiwara, a Japanese board-game master, he spoke of a plan that would combine his interests in strategy games and artificial intelligence: one day he would build a computer program to beat the greatest human Go player.
Years earlier, when still in school, Hassabis had told his friend Mustafa Suleyman that the world needed grand simulations in order to model its complex dynamics and solve the toughest social problems.
One paper, which has since been cited over 1,000 times, showed that people with amnesia also had difficulty imagining new experiences, suggesting that there is a connection between remembering and creating mental images.
Such a program is built to gather information about its environment, then learn from it by repeatedly replaying its experiences, much like the description that Hassabis gave of human-brain activity during sleep in his Singularity Summit lecture.
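The replay mechanism described above is commonly implemented as an "experience replay buffer" in DQN-style agents. The sketch below is a generic illustration of the idea, not DeepMind's actual code: past transitions are stored and later sampled in random order for training.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=10000):
        # deque with maxlen silently drops the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation between
        # consecutive experiences, which stabilises learning -- the agent
        # "replays" old episodes out of order, loosely analogous to the
        # sleep-replay described above.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Hypothetical usage with dummy integer states:
buffer = ReplayBuffer(capacity=1000)
for t in range(50):
    buffer.add(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)
batch = buffer.sample(8)
```

In a full agent, each sampled batch would feed a gradient update of the value network; here the buffer alone shows the storage-and-replay pattern.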
DeepMind’s work culminated in 2016 when a team built an AI program that used reinforcement learning alongside other techniques to play Go.
Its grace and sophistication, the transcendence of its computational muscle, seemed to show that DeepMind was further ahead than its competitors on the quest for a program that could treat disease and manage cities.
Far from being a cosmetic concession from Google, the Ethics Board gives DeepMind solid legal backing to keep control of its most valuable and potentially most dangerous technology, according to the same source.
(DeepMind refused to answer a detailed set of questions about the Review Agreement but said that “ethics oversight and governance has been a priority for us from the earliest days.”) Hassabis can determine DeepMind’s destiny by other means too.
His programme, which offers fascinating and important work free from the pressures of academia, has attracted hundreds of the world’s most talented experts.
Another program learned to play chess from scratch using similar architecture to AlphaGo, becoming the greatest chess player of all time after just nine hours playing against itself.
In December 2018 a program called AlphaFold proved more accurate than competitors at predicting the three-dimensional structure of proteins from their amino-acid sequences, potentially paving the way to treat diseases such as Parkinson’s and Alzheimer’s.
DeepMind is particularly proud of the algorithms it developed that calculate the most efficient means to cool Google’s data centres, which contain an estimated 2.5m computer servers.
Google Fiber, an effort to build an internet-service provider, was put on hiatus after it became clear that it would take decades to make a return on investment.
Some researchers complain that it can be difficult to publish their work: they have to battle through layers of internal approval before they can even submit work to conferences and journals.
The firm’s founders and early employees are approaching earn-out, when they can leave with the financial compensation that they received from the acquisition (Hassabis’s stock was probably worth around £100m).
Suleyman, whose mother was an NHS nurse, hoped to create a program called Streams that would warn doctors when a patient’s health deteriorated.
Because this work required access to sensitive information about patients, Suleyman established an Independent Review Panel (IRP) populated by the great and good of British health care and technology.
Suleyman had written in 2016 that “at no stage will patient data ever be linked or associated with Google accounts, products or services.” His promise seemed to have been broken.
(“Streams becoming a Google service does not mean the patient data...can be used to provide other Google products or services.”) Google’s annexation has angered employees at DeepMind Health.
When Bracken asked Suleyman if he would give panel members the accountability and governance powers of non-executive directors, Suleyman scoffed. (A spokesperson for DeepMind said they had “no recollection” of the incident.) Julian Huppert, the head of the IRP, argues that the panel delivered “more radical governance” than Bracken expected because members were able to speak openly and not bound by a duty of confidentiality.
DeepMind said in a statement that “we all agreed that it makes sense to bring these efforts together in one collaborative effort, with increased resources.” This raises the question of whether Google will apply the same logic to DeepMind’s work on AGI.
A Breakout player controls a bat that she can move horizontally across the bottom of the screen, using it to bounce a ball against blocks that hover above it, destroying them on impact.
Without human instruction, DeepMind’s program not only learned to play the game but also worked out how to cannon the ball into the space behind the blocks, taking advantage of rebounds to break more blocks.
The skill learned by DeepMind’s program is so restricted that it cannot react even to tiny changes to the environment that a person would take in their stride – at least not without thousands more rounds of reinforcement learning.
The second caveat, which DeepMind rarely talks about, is that success within virtual environments depends on the existence of a reward function: a signal that allows software to measure its progress.
Reconciling the reward signal for climate health (the concentration of CO₂ in the atmosphere) with the reward signal for oil companies (share price) requires satisfying many human beings with conflicted motivations.
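To make the reward-signal caveat concrete, here is a minimal, assumed toy example (not DeepMind's code): tabular Q-learning on a five-state corridor. The `reward` function is the entire "signal" the text describes; the agent optimises whatever that function returns and nothing else, which is why choosing it for messy real-world goals is so hard.

```python
import random

random.seed(0)

# A toy 1-D corridor: states 0..4, goal is state 4.
def reward(state, next_state):
    # The reward function IS the task definition from the agent's viewpoint.
    return 1.0 if next_state == 4 else 0.0

N_STATES, ACTIONS = 5, (-1, +1)   # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
        r = reward(s, s2)
        # Q-learning update: nudge the estimate toward reward plus
        # the discounted value of the best next action.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The greedy policy learned purely from the reward signal: move right everywhere.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
```

Swap in a reward function for "climate health" or "share price" and the same loop happily optimises it, which is exactly why reconciling competing human reward signals is the hard part.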
Decisions taken early in the game have ramifications later on, which is closer to the sort of convoluted and delayed feedback that characterises many real-world tasks.
But putting human instruction in the loop risks losing the effects of scale and speed that unadulterated computer-processing offered.
Current and former researchers at DeepMind and Google, who requested anonymity due to stringent non-disclosure agreements, have also expressed scepticism that DeepMind can reach AGI through such methods.
To these individuals, the focus on achieving high performance within simulated environments makes the reward-signal problem hard to tackle.
The pursuit of AGI may eventually lose its way, having invented some useful medical technologies and out-classed the world’s greatest board-game players.
On 10 April 2021
GOTO 2018 • On the Road to Artificial General Intelligence • Danny Lange
This presentation was recorded at GOTO Copenhagen 2018. Danny Lange - VP of AI and ML at Unity Technologies and ..
MIT AGI: Artificial General Intelligence
This is the opening lecture for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach ...
Possible Paths to Artificial General Intelligence
Yoshua Bengio (MILA), Irina Higgins (DeepMind), Nick Bostrom (FHI), Yi Zeng (Chinese Academy of Sciences), and moderator Joshua Tenenbaum (MIT) ...
General Artificial Intelligence: Making sci-fi a reality | Darya Hvizdalova | TEDxTrencin
Darya Hvizdalova is a member of the international research & development company GoodAI which focuses on building a general artificial intelligence software ...
Artificial Superintelligence - Why It's Already Too Late - 2019 FACTS
What is Artificial Intelligence? AI is something Elon Musk said many times would be mankind's last invention. When we get to the Artificial Superintelligence ...
Artificial general intelligence: What it really takes to program the future | Ben Goertzel
How artificial intelligence will change your world in 2019, for better or worse
From a science fiction dream to a critical part of our everyday lives, artificial intelligence is everywhere. You probably don't see AI at work, and that's by design.
Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel
For all the talk of AI, it always seems that gossip is faster than progress. But it could be that within this century, we will fully realize the visions science fiction has ...
Greg Brockman: OpenAI and AGI | Artificial Intelligence Podcast (MIT AI)
Greg Brockman is the Co-Founder and CTO of OpenAI, a research organization developing ideas in AI that lead eventually to a safe & friendly artificial general ...
How Far Away is Artificial General Intelligence? - Expert Opinions
Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary ...