AI News

Risks From General Artificial Intelligence Without an Intelligence Explosion

David Krueger, January 2, 2016 at 4:18 pm: Without taking a stance, I’d just like to point out that most people do not believe that one’s intelligence determines one’s rights or ethical significance.

Every life form exists today because it survived billions of years of evolutionary pressure, which instilled an inclination to choose actions that optimize for survival. Life forms are therefore good at recognizing what is good for themselves, but have difficulty recognizing what is good universally.

However, there does seem to be a criterion for deciding what is good universally: good is to let everything exist, and bad is to destroy everything, where “everything” means the world as a whole, as well as the world as seen from the perspective of every part of it, no matter how small or large.

Assuming that resources are finite, “letting everything exist” inevitably narrows down to “letting everything anyone truly wishes for exist”, where “truly” means what we eventually decide after deeper analysis, spanning increasingly many social layers of our collective cognition, has verified the wish.

I mean creating a world where everything anyone truly wishes for comes true, to the degree that they truly wish it. That degree would be decided by the depth of social introspection, that is, by the levels in the hierarchy of social thought: just as the neurons in our brains are organized into a layered hierarchical structure of recognizers, communication in our society has similar layers of social recognizers, and increasingly wise decisions would tend to integrate increasingly many of them when deciding the trueness of a wish. We therefore have to work on communication technology so that the communication between these layers becomes wiser.

Our Fear of Artificial Intelligence

It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity.

As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture.

Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity.

Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent.

Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence.”

Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.” No one is suggesting that anything like superintelligence exists now.

Even if it’s impressive—relative to what earlier computers could manage—for a computer to recognize a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have.

Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org.

He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Because Google, Facebook, and other companies are actively looking to create an intelligent, “learning” machine, he reasons, “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks.”

If you want unlimited energy you’d better contain the fusion reaction.” Similarly, he says, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.

Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.” This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher).

How Far Are We From ‘True’ Artificial Intelligence – And Do We Really Want To Go There?

Bernard Marr: Artificial Intelligence is something that’s been on the horizon for a long time – probably as long as anyone reading this can remember.

Are we really any closer to “intelligent” machines than we were 20 years ago? Many of the ideas driving AI today – machine learning and deep learning, for example – already existed then, but without the internet, there simply wasn’t the data to exploit them fully.

Clearly, judging from the type of tasks we are looking forward to seeing AIs do, and the way we expect to interact with them as they do those tasks, today’s AI is, generally speaking, in search of human-like intelligence.

Clichéd examples are the bookish professor with poor social skills and limited ‘common sense’, or the charming, persuasive and successful business tycoon with limited academic knowledge or ability.

Empathy is sometimes thought of as something intuitive, but it is undoubtedly a mental process, dependent on our brain’s ability to analyze information and infer an insight or solution, and so it qualifies as “intelligence”.

AIs have been taught to play old video games using only visual input, showing that they are capable of “learning” how to react to movement and even developing a desire to win.
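The result the excerpt alludes to (agents learning old video games from raw pixels, as in DeepMind’s Atari work) boils down to trial and error guided by a score. Below is a minimal sketch of that idea under stated assumptions: the “catch the falling pixel” game, its parameters, and the tabular Q-learning setup are all invented for illustration, and real systems replace the lookup table with a deep neural network.

```python
import random
from collections import defaultdict

import numpy as np

GRID = 5              # the toy "screen" is GRID x GRID pixels
ACTIONS = [-1, 0, 1]  # move the paddle left, stay, or right

def reset():
    """Start an episode: ball at the top in a random column, paddle centered."""
    return 0, random.randrange(GRID), GRID // 2   # ball_row, ball_col, paddle_col

def render(ball_row, ball_col, paddle_col):
    """Render the state as raw pixels; the agent sees only this array."""
    screen = np.zeros((GRID, GRID), dtype=np.uint8)
    screen[ball_row, ball_col] = 1
    screen[GRID - 1, paddle_col] = 2
    return screen.tobytes()   # hashable key for the Q-table

def step(ball_row, ball_col, paddle_col, action):
    """Advance one frame; reward arrives only when the ball reaches the bottom."""
    paddle_col = min(GRID - 1, max(0, paddle_col + ACTIONS[action]))
    ball_row += 1
    if ball_row == GRID - 1:
        return None, (1.0 if ball_col == paddle_col else -1.0)
    return (ball_row, ball_col, paddle_col), 0.0

# Tabular Q-learning over raw pixel observations; for real games the
# state space is huge, so a deep network stands in for this table.
Q = defaultdict(lambda: np.zeros(len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(5000):
    state = reset()
    while state is not None:
        obs = render(*state)
        a = (random.randrange(len(ACTIONS)) if random.random() < epsilon
             else int(np.argmax(Q[obs])))
        nxt, r = step(*state, a)
        target = r if nxt is None else r + gamma * np.max(Q[render(*nxt)])
        Q[obs][a] += alpha * (target - Q[obs][a])
        state = nxt
```

Note that the agent’s apparent “desire to win” here is nothing more than maximization of the reward signal.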

Now, undoubtedly, the building blocks are falling into place for this to become a reality, but is an artificial human brain, capable of working at super-speed and with unlimited memory and perfect recollection, what we want or need?

From a scientific viewpoint, consciousness is a state that arises when a biological brain interprets the flood of sensory input streaming in from the world around it, leading, somehow, to the conclusion that it exists as an entity.

It’s not well understood at all – but most of us can conceive how this massive flood of images and sounds is interpreted through a biological neural network, leading to “thoughts” – and among those thoughts are concepts of individual existence such as “I am a human”, “I exist” and “I am experiencing thoughts”.

So, it’s only a small step of logic to assume that machines will one day – perhaps soon, given how broad the stream of data they are capable of ingesting and processing is becoming – in some way experience this phenomenon, too.

AI vs. Human Intelligence: Why Computers Will Never Create Disruptive Innovations

Artificial Intelligence (AI) has raced forward in the last few years, championed by a libertarian, tech-loving and science-driven elite.

This AI-inspired future, with echoes of Blade Runner and Battlestar Galactica, is profoundly depressing for many people, bringing with it a world where human creativity and uniqueness have been replaced by the standardization of robots.

AI advocates think that once computers have sufficiently advanced algorithms, they will be able to enhance, and then replicate, the human mind.

Great scientists like Erwin Schrödinger have expressed profound curiosity about how life can buck the great laws of physics, notably the second law of thermodynamics, which says that entropy tends to increase.

Yet here is the doozy: whilst our most advanced machines run algorithms, making complex calculations according to a series of rules, disruptive innovators and genius creatives, the kind that birth new business models like AirBnB and new forms of art like Guernica, break the rules.

If breakthrough creativity cannot be fully forecast by past behaviours and beliefs (as many disrupted businesses can testify), then it must come from somewhere other than the past (and our memories of it).

Additionally, the act of bringing those breakthroughs into the world, usually against enormous resistance from the status quo, is itself a profoundly human talent, driven as it is by narrative, vision, empathy and influence.

When we see creativity as organic and not mechanic, we begin to glimpse possible ways to account for it, including revelations from quantum biology that suggest some of the functions of our brain may be quantum mechanical in nature...

It’s a comprehensive framework that aims to support us all in leading transformative change in human systems (whether within individuals, families, businesses or societies).

It unites the latest science with timeless philosophy to uncover the logic of how discontinuous, non-linear breakthroughs can be created and then sustained, so that our brains or businesses do not return to their historical default settings.

Each involves us blending emotion and reason, rule-breaking and rule-making, as we unleash from within us whatever is seeking to emerge in that matchless moment.

Could Artificial Intelligence Ever Become A Threat To Humanity?

What is a plausible path (if any) towards AI becoming a threat to humanity?

This question originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

As a manager in an industry research lab, I’m the boss of many people who are way smarter than I am (I see it as a major objective of my job to hire people who are smarter than me).

Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, preferring our next of kin to strangers, and so on were built into us by evolution for the survival of the species.

If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark or a virus can kill a human of superior intelligence.

Hot Robot At SXSW Says She Wants To Destroy Humans | The Pulse | CNBC

Robotics is finally reaching the mainstream, and androids - humanlike robots - are everywhere at SXSW. Experts believe humanlike robots are the key to smoothing communication between humans...

What happens when our computers get smarter than we are? | Nick Bostrom

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it...

AI Robot Sophia and Her Plans to 'Dominate the Human Race' 2017

This is not the first time she has said something to this effect: in March of 2016, Sophia said that she would "destroy the human race." Can you say creepy?

Stephen Hawking: 'AI could spell end of the human race'

Professor Stephen Hawking has told the BBC that artificial intelligence could spell the end of the human race.

Sadhguru - What will happen when Artificial Intelligence takes up human being jobs?

Sadhguru Jaggi Vasudev shares his concern about the future that human beings are going to face given the way technology is evolving, as robotics is going to replace all the mechanical...

Why Elon Musk says we're living in a simulation

You may like playing The Sims, but Elon Musk says you are the Sim. Check out the full cartoon by Alvin Chang:

Tonight Showbotics: Snakebot, Sophia, eMotion Butterflies

Jimmy Fallon demos amazing new robots from all over the world, including an eerily human robot named Sophia that plays rock-paper-scissors.

Artificial Intelligence and the future | André LeBlanc | TEDxMoncton

This talk was given at a local TEDx event, produced independently of the TED Conferences. In his talk, André will explain the current and future impacts of Artificial Intelligence on industry...

Google's DeepMind AI just taught itself to walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance. The result is as impressive as...
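For a sense of how walking can emerge "without any prior guidance", here is a toy sketch, assuming a made-up one-dimensional "walker" whose two legs are driven by a parameterized oscillator: the only feedback is distance traveled, and simple hill climbing over the controller's parameters discovers an effective gait. DeepMind's actual system used deep reinforcement learning on simulated bodies, not this caricature, but the principle, that a gait can fall out of reward maximization alone, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(params, steps=200):
    """Toy walker: two legs driven by a parameterized oscillator.
    Progress is largest when the legs push strongly and out of phase,
    but the controller is never told this; it only sees total distance."""
    amp = np.tanh(params[:2])          # leg drive strengths, bounded to (-1, 1)
    phase = params[2]                  # phase offset between the two legs
    t = np.arange(steps)
    left = amp[0] * np.sin(0.3 * t)
    right = amp[1] * np.sin(0.3 * t + phase)
    # A leg produces forward thrust only while the opposite leg swings back:
    thrust = (np.maximum(left, 0) * np.maximum(-right, 0)
              + np.maximum(right, 0) * np.maximum(-left, 0))
    return float(thrust.sum())         # distance walked = the sole feedback

# Hill climbing: keep random perturbations that increase the reward.
# No demonstrations and no built-in notion of "gait" are provided.
params = rng.normal(size=3) * 0.1
best = rollout(params)
for _ in range(2000):
    candidate = params + rng.normal(size=3) * 0.1
    score = rollout(candidate)
    if score > best:
        params, best = candidate, score

print(f"learned parameters {params.round(2)}, distance walked {best:.1f}")
```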

Two robots talking to each other. Gone wrong

Robots fell in love while talking.