
The Superintelligence Control Problem

Mindey, November 23, 2015 at 3:52 pm: There would be no problem if modern life were itself the superintelligence.

So I think superintelligence should come as a result of efforts to unite and upgrade life through advancing global communication systems, rather than advancing our abilities to electronically mimic intelligent entities.

We already know from the laws of physics that we could electronically mimic neural systems and speed up their operation by at least a million times. It is obvious that such a thing could outsmart us, so why even try to create it before the biosphere is smart enough to control it?

Benito, November 23, 2015 at 4:32 pm: Mindey, for this to be a viable solution to the problem of superintelligence, it would require a high-probability mechanism for preventing anyone in the world from doing AI research.

I mean, from what I said it follows that in order to control the assumed superintelligence, advances in communication technology must outpace advances in computation technology, so that a biological superintelligence, computing more efficiently through improved communication, can exceed the non-biological superintelligence.

Assuming that an artificial mind running on electronics is somehow a million times smarter, staying in control would require communication technology that could produce a million-fold increase in the problem-solving power of connected minds.

Ben, December 16, 2015 at 12:45 pm: The concern with AI as an existential threat lies in how vastly inferior human intelligence would be to a superintelligent entity.

Much to our chagrin, our limited capabilities keep us from searching far beyond the stars; a super AI would and could conduct such universal searches, but I fear that would only make us smaller than the gopher mentioned earlier.

Imagine, if you will, a super AI so intelligent that it starts doing AI research on itself; imagine the results that could arise just from that single thread being followed from end to end.

In the political-financial world, most people are relegated to the status of servants, obliged to pay money to the Federal Reserve Bank, a legal financial cartel or monopoly in a country where monopolies are expressly illegal.

Of course, one could look at the nearly twenty-trillion-dollar debt in the United States, growing exponentially to the point where the very political and legal systems could crash, somewhat like the former Soviet Union.

Outside this fiction, the political-financial system in the US pursues one neo-con war after another to dominate people who have neither the technical development nor the means to defend themselves.

So the issue is more about the middle class coming into its own to explore AI as a valuable resource, versus the nanny state seeking to micromanage our lives despite not having the intelligence to manage its own.

Ethical Issues in Advanced Artificial Intelligence

ABSTRACT The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems.

This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.

A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.[1] This definition leaves open how the superintelligence is implemented – it could be a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.

Several authors have argued that there is a substantial chance that superintelligence may be created within a few decades, perhaps as a result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains.[2] It might turn out to take much longer, but there seems currently to be no good ground for assigning a negligible probability to the hypothesis that superintelligence will be created within the lifespan of some people alive today.

The foreseeable technologies that a superintelligence is likely to develop include mature molecular manufacturing, whose applications are wide-ranging:[3]

a) very powerful computers
b) advanced weaponry, probably capable of safely disarming a nuclear power
c) space travel and von Neumann probes (self-reproducing interstellar probes)
d) elimination of aging and disease
e) fine-grained control of human mood, emotion, and motivation
f) uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality)
g) reanimation of cryonics patients
h) fully realistic virtual reality

While specialized superintelligences that can think only about a restricted set of problems may be feasible, general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.

Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of other kinds of mistake that not even the most hapless human would make.

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

For example, if we are uncertain how to evaluate possible outcomes, we could ask the superintelligence to estimate how we would have evaluated these outcomes if we had thought about them for a very long time, deliberated carefully, had had more memory and better intelligence, and so forth.

We could enlist the superintelligence to help us determine the real intention of our request, thus decreasing the risk that infelicitous wording or confusion about what we want to achieve would lead to outcomes that we would disapprove of in retrospect.

Even a “fettered superintelligence” that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement by persuading its handlers to release it.

If the benefits that the superintelligence could bestow are enormously vast, then it may be less important to haggle over the detailed distribution pattern and more important to seek to ensure that everybody gets at least some significant share, since on this supposition, even a tiny share would be enough to guarantee a very long and very good life.

One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.

The options available at each point in time are evaluated on the basis of their consequences for the realization of the goals held at that time, and it will generally be irrational to deliberately change one's own top goal, since that would make it less likely that the current goals will be attained.

A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and living closer to our ideals.

This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.

But once in existence, a superintelligence could help us reduce or eliminate other existential risks[8], such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth.

The Dark Secret at the Heart of AI

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.

The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.

Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
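Conceptually (and only as a sketch, not Nvidia's actual system), such an end-to-end network maps raw camera frames directly to a steering command, with no hand-written driving rules in between. The layer sizes below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Toy end-to-end driving network: camera frames in, steering angle out.

    Illustrative only -- layer sizes are assumptions, not Nvidia's design.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # convolutional feature extractor
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(              # regression head
            nn.Flatten(),
            nn.Linear(48, 50), nn.ReLU(),
            nn.Linear(50, 1),                   # one number: the steering command
        )

    def forward(self, frames):                  # frames: (batch, 3, H, W)
        return self.head(self.features(frames))

model = EndToEndDriver()
frame = torch.randn(1, 3, 66, 200)              # a stand-in camera frame
steering = model(frame)                         # trained end to end on human driving data
```

Nothing in the learned weights corresponds to an inspectable rule, which is exactly the opacity the article goes on to describe.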

The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.

There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.

But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.

(Images by the artist Adam Ferriss, created using Google Deep Dream, a program that adjusts an image to stimulate the pattern-recognition capabilities of a deep neural network.)

The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.

Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.
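The published Deep Patient work was built on stacked denoising autoencoders, which learn a compressed representation of each record without any labels. Below is a minimal single-layer sketch of that idea; the feature count, hidden size, and noise rate are all assumptions for illustration:

```python
import torch
import torch.nn as nn

# Minimal denoising-autoencoder sketch in the spirit of Deep Patient.
# Dimensions and the 5% masking noise are illustrative assumptions.
n_features, n_hidden = 500, 100                  # e.g. 500 binary clinical features

encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.BCELoss()

records = torch.rand(64, n_features).round()     # stand-in for patient records
for _ in range(100):
    mask = (torch.rand_like(records) > 0.05).float()
    recon = decoder(encoder(records * mask))     # reconstruct from a corrupted copy
    loss = loss_fn(recon, records)               # compare against the clean record
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    representation = encoder(records)            # learned features for downstream prediction
```

The hidden layer that makes the representation useful for prediction is precisely the part no one can directly read off, which is why the system's accuracy comes without a rationale.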

If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.

Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.

It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.

The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.
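Deep Dream's core trick is easy to state even if its outputs are not: run gradient ascent on the input image itself so that a chosen layer's activations grow stronger, amplifying whatever patterns that layer already responds to. A rough sketch follows; the layer cutoff, step size, and iteration count are arbitrary assumptions:

```python
import torch
from torchvision import models

# Sketch of the Deep Dream idea: gradient ascent on the input image itself.
# The layer cutoff, learning rate, and step count are illustrative choices.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()
for p in net.parameters():
    p.requires_grad_(False)                      # only the image gets updated

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(50):
    activations = net(image)
    loss = -activations.norm()                   # minimizing the negative amplifies activations
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```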

In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.
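Clune's fooling images were evolved rather than computed by gradients, but the same fragility can be demonstrated with the well-known fast gradient sign method (a different, gradient-based technique): nudge every pixel a tiny step in whichever direction increases the classifier's loss. A sketch, with the perturbation size an arbitrary assumption:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Fast gradient sign method (Goodfellow et al.), a gradient-based cousin of
# the evolved fooling images above. The epsilon value is an assumption.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in net.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
label = net(image).argmax(dim=1)                 # the network's current prediction

loss = F.cross_entropy(net(image), label)
loss.backward()                                  # gradient of the loss w.r.t. each pixel

epsilon = 0.03                                   # a barely perceptible perturbation
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
prediction = net(adversarial).argmax(dim=1)      # often no longer equals `label`
```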

The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.

“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.

The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.

She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

How well can we get along with machines that are unpredictable and inscrutable?

After Barzilay finished cancer treatment last year, she and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.

Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.

A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.

But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.

“It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.

“If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Artificial Intelligence Is the New Science of Human Consciousness | Joscha Bach


Human vs. Artificial Intelligence: Key Similarities and Differences | Philip Hilm | TEDxYouth@Prague

Philip revealed what we should expect from artificial intelligence, which is already a significant part of our lives. While math and problem solving come naturally to Philip, the opposite...

Artificial Intelligence & Personhood: Crash Course Philosophy #23

Today Hank explores artificial intelligence, including weak AI and strong AI, and the various ways that thinkers have tried to define strong AI, including the Turing Test and John Searle's...

Joscha Bach - Philosophy of AI - Winter Intelligence/AGI12 Oxford University

"I see the AGI conference as an attempt.. to get back to the orignal idea of artificial intelligence - to see computational systems as an avenue to developing an understanding of how the mind...

The Intelligence Revolution: Coupling AI and the Human Brain | Ed Boyden

Edward Boyden is a Hertz Foundation Fellow and recipient of the prestigious Hertz Foundation Grant for graduate study in the applications of the physical, biological and engineering sciences....

Stephen Hawking: 'AI could spell end of the human race'

Professor Stephen Hawking has told the BBC that artificial intelligence could spell the end of the human race.

artificial intelligence 2: approaches |lecture| |tutorial| for semester exams

There are four approaches to AI: acting humanly, thinking humanly, thinking rationally, and acting rationally. Acting humanly means acting just like a human, which includes natural language processing...

Artificial vs. human intelligence: who will win the race? | Max Little | TEDxAstonUniversity

The popular press is full of doomsday articles predicting that artificial intelligence will take over the economy, putting us all out of work. But looking carefully at the evidence to date,...

Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel

For all the talk of AI, it always seems that gossip is faster than progress. But it could be that within this century, we will fully realize the visions science fiction has promised us, says...

Artificial Intelligence Documentary

Artificial intelligence (AI) is the effort to make computers that think like humans or are as intelligent as humans. Thus, the ultimate goal of research on this topic is to develop a machine that...