AI News, Tech Luminaries Address Singularity
THOUGHTS “It might happen someday, but I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries.
[The ramifications] will be enormous, since the highest form of sentient beings on the planet will no longer be human.
SINGULARITY WILL OCCUR “If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen.
A singularity is a state where physical laws no longer apply because some value or metric goes to infinity, such as the curvature of space-time at the center of a black hole.
Even if humans created a new virus, biological or otherwise, that rapidly killed all life on Earth, it wouldn't be a singularity--very unfortunate, yes, but not a singularity.
“The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence.
As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself.
Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build?
We will build machines that are more 'intelligent' than humans, and this might happen quickly, but there will be no singularity, no runaway growth in intelligence.
Like today's computers, intelligent machines will come in many shapes and sizes and be applied to many different types of problems.
No intelligent machine will 'wake up' one day and say 'I think I will enslave my creators.' Similar fears were expressed when the steam engine was invented.
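The disagreement above comes down to an assumption about returns per generation of self-improvement, which a toy model makes concrete. This is purely an illustrative sketch with made-up growth rules, not a claim about how real machine intelligence would scale:

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Whether growth "runs away" depends entirely on the assumed return each
# generation of machines gets from the previous one -- the crux of the debate.

def improve(generations, step):
    """Apply a per-generation improvement rule, starting from capability 1.0."""
    capability = 1.0
    for _ in range(generations):
        capability = step(capability)
    return capability

# Optimistic assumption: each generation multiplies capability by a constant.
exploding = improve(50, lambda c: c * 1.5)

# Skeptical assumption: returns diminish as the remaining problems get harder.
diminishing = improve(50, lambda c: c + c ** 0.5)

print(f"constant returns after 50 generations:    {exploding:.3g}")
print(f"diminishing returns after 50 generations: {diminishing:.3g}")
```

Under constant multiplicative returns the curve is exponential; under diminishing returns it grows only polynomially, which is roughly the shape of the "no runaway growth" position quoted above.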
Builds computer simulations of complex human systems, like the stock market, highway traffic, and the insurance industry.
Author of popular books about science, both fiction and nonfiction, including The Cambridge Quintet, a fictional account of a dinner-party conversation about the creation of a thinking machine.
Rather, machines will become increasingly uninterested in human affairs, just as we are uninterested in the affairs of ants or bees.
But it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines.”
High school students today quickly learn the mathematical tool of calculus that Newton struggled to invent.
The first airplanes were certainly not as good as well-appointed trains in moving masses comfortably, but the transition later proved essential to maintaining our progress in human mobility.
THOUGHTS ”I think that machine intelligence is one of the most exciting remaining 'great problems' left in computer science.
For all its promise, however, it pales compared with the advances we could make in the next few decades in improving the health and education of the human intelligences already on the planet.
I believe the first thing a tabula rasa intelligence (machine or otherwise) would conclude is that humans are very poor stewards of their own condition.
I think they will be more along the lines of what happened during the prior 'revolutions' (agricultural, industrial, information age, etc.), that is, incremental, albeit dramatic, changes to humanity.
THOUGHTS “Singularity is that point in time when computing is able to know all human and natural-systems knowledge and exceed it in problem-solving capability with the diminished need for humankind as we know it.
I basically support the notion, but I have trouble seeing the specific transitions or break points that let the exponential take over and move to the next transition.
Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived.
I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans.”
But do you want to reboot your brain regularly?” “I think that futurists are much more successful in projecting simple measures of progress (such as Moore's Law) than they are in projecting changes in human society and experience.”
The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.
Good's 'intelligence explosion' model predicts that a future superintelligence will trigger a singularity. Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when artificial general intelligence (AGI) would arrive was 2040 to 2050, depending on the poll. Many notable personalities, including Stephen Hawking and Elon Musk, consider the uncontrolled rise of artificial intelligence as a matter of alarm and concern for humanity's future. The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated by various intellectual circles.
They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world. Some writers use 'the singularity' in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology, although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity. Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century. Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept. The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law.
There will be no distinction, post-Singularity, between human and machine'. He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date 'will not represent the Singularity' because they do 'not yet correspond to a profound expansion of our intelligence.' Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology.
In one of the first uses of the term 'singularity' in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the 'law of accelerating returns'.
Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future. In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers. In a 2007 paper, Schmidhuber stated that the frequency of subjectively 'notable events' appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists. Paul Allen argues the opposite of accelerating returns, the complexity brake; the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress.
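The distinction between exponential and hyperbolic growth behind these arguments is worth making precise: exponential growth (y' = y) stays finite for every finite time, while hyperbolic growth (y' = y²) diverges at a finite time, which is the strict mathematical sense of a "singularity". A short numerical sketch, using a simple Euler integration with illustrative constants:

```python
# Exponential growth y' = y never reaches infinity; hyperbolic growth
# y' = y**2 has the closed form y(t) = 1 / (1 - t), which blows up at
# the finite time t = 1. That finite-time blow-up is the mathematical
# "singularity" invoked by hyperbolic-growth arguments.

def euler(rate, y0=1.0, t_end=0.99, steps=10_000):
    """Integrate y' = rate(y) from t = 0 to t_end with a forward Euler scheme."""
    y, dt = y0, t_end / steps
    for _ in range(steps):
        y += rate(y) * dt
    return y

exp_value = euler(lambda y: y)      # exponential: close to e**0.99, about 2.7
hyp_value = euler(lambda y: y * y)  # hyperbolic: exact value is 1/(1-0.99) = 100;
                                    # forward Euler slightly underestimates it

print(f"exponential at t=0.99: {exp_value:.2f}")
print(f"hyperbolic  at t=0.99: {hyp_value:.1f}")
```

Just before the blow-up time, the hyperbolic curve dwarfs the exponential one, which is why authors like Korotayev treat "was the historical trend exponential or hyperbolic?" as the decisive empirical question.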
The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity. The term 'technological singularity' reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat, as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute. While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.
The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless.
After synthetic viruses with specific genetic information are created, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes. Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called 'Digital Ascension' that involves 'people dying in the flesh and being uploaded into a computer and remaining conscious'. Singularitarianism has also been likened to a religion by John Horgan. In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the 'ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.' In 1965, Good wrote his essay postulating an 'intelligence explosion' of recursive self-improvement of a machine intelligence.
Kurzweil Claims That the Singularity Will Happen by 2045
Ray Kurzweil, Google’s Director of Engineering, is a well-known futurist with a strong track record of accurate predictions.
In a communication to Futurism, Kurzweil states: 2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence.
“It’s here, in part, and it’s going to accelerate.” We all know it is coming sooner or later, but the question in the minds of almost everyone is: should humanity fear the singularity?
They may not yet be inside our bodies, but, by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.” This idea is similar to Musk’s controversial neural lace and to XPRIZE Foundation chairman Peter Diamandis’
“We’re really going to exemplify all the things that we value in humans to a greater degree.” To those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today: Parkinson’s patients.
Cybernetics is just getting its foot in the door, Kurzweil said. And, because it is the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory.
Intelligent Robots Will Overtake Humans by 2100, Experts Say
But others think humans will eventually relinquish most of their abilities and gradually become absorbed into artificial intelligence (AI)-based organisms, much like the energy-making machinery in our own cells.
In The Singularity Is Near (Viking, 2005), futurist Ray Kurzweil predicted that computers will be as smart as humans by 2029, and that by 2045, "computers will be billions of times more powerful than unaided human intelligence."
Bill Hibbard, a computer scientist at the University of Wisconsin-Madison, doesn't make quite as bold a prediction, but he's nevertheless confident AI will have human-level intelligence some time in the 21st century.
While AI can trounce the best chess or Jeopardy player and do other specialized tasks, it's still light-years behind the average 7-year-old in terms of common sense, vision, language and intuition about how the physical world works, Davis said.
For instance, because of that physical intuition, humans can watch a person overturn a cup of coffee and just know that the end result will be a puddle on the floor.
A computer program, on the other hand, would have to do a laborious simulation and know the exact size of the cup, the height of the cup from the surface and various other parameters to understand the outcome, Davis said.
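Davis's point can be made tangible: even a crude version of the spilled-coffee "simulation" needs explicit parameters (cup volume, table height, gravity) that a human's physical intuition never consciously supplies. The numbers below are illustrative assumptions, not measurements:

```python
import math

# A crude physics "simulation" of the overturned coffee cup. Even this toy
# version demands exact parameter values -- exactly Davis's point about why
# a program must work laboriously for what a 7-year-old just knows.
# All figures are illustrative assumptions.

G = 9.81              # gravitational acceleration, m/s^2
table_height = 0.75   # m, assumed table height
cup_volume = 0.00035  # m^3, a 350 ml cup
puddle_depth = 0.001  # m, assumed 1 mm film of coffee on the floor

# Free-fall time from table to floor, from h = (1/2) * g * t^2.
fall_time = math.sqrt(2 * table_height / G)

# Area of the resulting puddle if the coffee spreads into a uniform film.
puddle_area = cup_volume / puddle_depth

print(f"coffee reaches the floor after ~{fall_time:.2f} s")
print(f"puddle covers roughly {puddle_area:.2f} m^2")
```

A human predicts "puddle on the floor" without ever estimating a single one of these quantities; the program cannot even start without all of them.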
Humans have already relinquished many intelligent tasks, such as the ability to write, navigate, memorize facts or do calculations, said Joan Slonczewski, a microbiologist at Kenyon College and the author of a science-fiction book called "The Highest Frontier."
Mitochondria were once independent organisms, but at some point, an ancestral cell engulfed those primitive bacteria, and over evolutionary history, mitochondria let cells gradually take over all the functions they used to perform, until they only produced energy.
Softbank CEO: The Singularity Will Happen by 2047
This time, Son predicted that the dawn of machines surpassing human intelligence is bound to occur by 2047 during a keynote address at the ongoing Mobile World Congress in Barcelona.
Instead of conflict, he sees a potential for humans to partner with artificial intelligence (AI), echoing the comments Elon Musk made in Dubai last month: “I think this superintelligence is going to be our partner,”
If we use it in good spirits, it will be our partner for a better life.” Already, individuals are working to ensure that the coming age of super synthetic intelligences is, indeed, one that is beneficial for humanity.
Case in point, Braintree founder Bryan Johnson is investing $100 million to research the human brain and, ultimately, make neuroprostheses that allow us to augment our own intelligence and keep pace with AI.
Johnson outlines the purpose of his work, stating that it’s really all about co-evolution: Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands — constrained input/output modalities.
By 2018, Son thinks that the number of transistors in a chip will surpass the number of neurons in the brain, which is plausible given recent microchip developments that have outpaced Moore's Law.
But Son is convinced that, given our abundance of smart devices, which include even our cars, and the growth of the internet of things (IoT), the impact of super intelligent machines will be felt by humankind.
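Son's transistor-versus-neuron comparison is back-of-envelope arithmetic: start from a chip's transistor count, assume Moore's-law doubling every two years, and project the crossover with the roughly 86 billion neurons in a human brain. The figures below are rough public estimates chosen for illustration, not Son's actual inputs:

```python
import math

# Back-of-envelope crossover projection. Starting figures are rough,
# commonly cited estimates (not Son's actual numbers): ~86 billion neurons
# in a human brain, ~20 billion transistors on a large 2017-era chip,
# and the classic Moore's-law doubling cadence of two years.

NEURONS_IN_BRAIN = 8.6e10
transistors_2017 = 2.0e10
doubling_period_years = 2.0

# How many doublings close the gap, and what year that implies.
doublings_needed = math.log2(NEURONS_IN_BRAIN / transistors_2017)
crossover_year = 2017 + doublings_needed * doubling_period_years

print(f"doublings needed: {doublings_needed:.2f}")
print(f"projected crossover year: {crossover_year:.0f}")
```

With these assumptions the crossover lands in the early 2020s; Son's earlier 2018 date simply reflects a more aggressive choice of starting count and cadence, which is how sensitive such projections are to their inputs.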
Can Futurists Predict the Year of the Singularity?
Well-known futurist and Google engineer Ray Kurzweil (co-founder and chancellor of Singularity University) reiterated his bold prediction at Austin’s South by Southwest (SXSW) festival this month that machines will match human intelligence by 2029 (and has said previously the Singularity itself will occur by 2045).
They may not yet be inside our bodies, but by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.” That merger of man and machine—sometimes referred to as transhumanism—is the same concept that Tesla and SpaceX CEO Elon Musk talks about when discussing development of a neural lace.
“Note that we might achieve human-level AGI, radical health-span extension and other cool stuff well before a singularity—especially if we choose to throttle AGI development rate for a while in order to increase the odds of a beneficial singularity,” he writes.
Mathematician John von Neumann had noted that “the ever-accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” In the 1960s, following his work with Alan Turing to decrypt Nazi communications, British mathematician I.J. Good wrote that once an ultraintelligent machine could design even better machines,
there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” Science fiction writer and retired mathematics and computer science professor Vernor Vinge is usually credited with coining the term “technological singularity.” His 1993 essay, The Coming Technological Singularity: How to Survive in the Post-Human Era predicted the moment of technological transcendence would come within 30 years.
Vinge explains in his essay why he thinks the term “singularity”—in cosmology, the event where space-time collapses and a black hole forms—is apt: “It is a point where our models must be discarded and a new reality rules.
For example, the researchers found that “there is no evidence that expert predictions differ from those of non-experts.” They also observed a strong pattern that showed most AI prognostications fell within a certain “sweet spot”—15 to 25 years from the moment of prediction.
“[I]f the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress,” he writes, referring to the concept that past rates of progress can predict future rates as well.
Singularity - Humanity's last invention
"The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that...
Ray Kurzweil: The Coming Singularity
By 2045, we'll have expanded the intelligence of our human machine civilization a billion fold. That will result in a technological singularity, a point beyond which it's hard to imagine....
What happens when our computers get smarter than we are? | Nick Bostrom
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it...
Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel
For all the talk of AI, it always seems that gossip is faster than progress. But it could be that within this century, we will fully realize the visions science fiction has promised us, says...
The Intelligence Revolution: Coupling AI and the Human Brain | Ed Boyden
Edward Boyden is a Hertz Foundation Fellow and recipient of the prestigious Hertz Foundation Grant for graduate study in the applications of the physical, biological and engineering sciences....
Stephen Hawking: 'AI could spell end of the human race'
Professor Stephen Hawking has told the BBC that artificial intelligence could spell the end for the human race.
Singularity 2017 | Artificial Intelligence and North Pole Hidden Land
Strangest thing happened to me the other week. I sat down to watch a movie called 'Singularity'...
Why Superintelligent AI Could Be the Last Human Invention | Max Tegmark
When we create something more intelligent than we could ever be, what happens after that? We have to teach it. Read more at BigThink.com:
Human Intelligence meets AI
Hot Robot At SXSW Says She Wants To Destroy Humans | The Pulse | CNBC
Robotics is finally reaching the mainstream, and androids - humanlike robots - are everywhere at SXSW. Experts believe humanlike robots are the key to smoothing communication between humans...