AI News, BOOK REVIEW: The Open Mind artificial intelligence

Artificial Intelligence keeps Gov.-elect Newsom up at night. Here’s what he can do about it | The Sacramento Bee

An out-of-control trolley rushes toward five people tied to the tracks.

This ethical thought experiment — the “trolley problem” — has new relevance following the first self-driving car fatality, a 2018 incident that did little to slow the race to a world of autonomous vehicles and artificial intelligence.

And no single person in 2019 has a more important role to play in making sure we craft policy that gets AI right – pursuing its opportunities, protecting against its risks – than Governor-elect Gavin Newsom.

Indiana’s future-of-work task force is looking for ways to boost growth and productivity while protecting vulnerable populations.

By sharing their views, collecting feedback and considering tradeoffs alongside the public, our state leaders will be in a better position to craft policy which reflects the public’s priorities.

Among its key points, the commission calls on California to develop a holistic AI plan that looks at risks and opportunities in equal measure.

A holistic plan should offer views of how to apply state resources to high-priority projects not generally viewed as AI, such as forecasting floods and wildfires in disaster-prone areas or detecting lead in drinking water.

One possible model is an advisory board of business leaders, educators, community leaders, worker representatives and policy experts.

California — from local school districts to the UC system, regional workforce development organizations and beyond — will need a tactical plan to upskill the state’s current and future workforce.

Artificial general intelligence

Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can.

Interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in).[11]

Many of these capabilities exist in some form (see computational creativity, automated reasoning, decision support systems, robotics, evolutionary computation, intelligent agents), but not yet at human levels.

The most difficult problems for computers are informally known as 'AI-complete' or 'AI-hard', implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.[16]

AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[17]

In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it could produce verifiable results and commercial applications, such as artificial neural networks, computer vision or data mining.[30]

Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems using an integrated agent architecture, cognitive architecture or subsumption architecture.

Hans Moravec wrote in 1988: 'I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs.'

For example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: 'The expectation has often been voiced that 'top-down' (symbolic) approaches to modeling cognition will somehow meet 'bottom-up' (sensory) approaches somewhere in between.

A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).'[32]

Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.[1]

A 2017 survey of AGI categorized forty-five known 'active R&D projects' that explicitly or implicitly (through published research) pursue AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.[43]

A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device.

The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.[44]

An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).[47]

In 1997 Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps).[48]

(For comparison, if a 'computation' was equivalent to one 'floating point operation' – a measure used to rate current supercomputers – then 10^16 'computations' would be equivalent to 10 petaFLOPS, achieved in 2011).

He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
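As a rough sanity check on this kind of projection (and treating one 'computation' as one floating point operation, as the comparison above does), the sketch below extrapolates supercomputer performance forward from an assumed 1997 baseline. The baseline machine (ASCI Red, roughly 1.3 teraFLOPS) and the roughly 1.1-year doubling time are assumptions chosen for illustration, not figures from the text.

```python
import math

# Kurzweil's 1997 target: hardware matching the human brain.
target_cps = 1e16  # computations per second

# Assumed baseline: ASCI Red, the fastest supercomputer in 1997 (~1.3 teraFLOPS).
baseline_flops = 1.3e12
baseline_year = 1997

# Assumed growth rate: top supercomputer performance has historically doubled
# roughly every 1.1 years (an assumption here, not a figure from the text).
doubling_time_years = 1.1

doublings_needed = math.log2(target_cps / baseline_flops)
year_reached = baseline_year + doublings_needed * doubling_time_years
print(round(year_reached))  # 2011, matching the 10-petaFLOPS milestone noted above
```

Under these assumed inputs the crossover lands in 2011, consistent with the parenthetical comparison above and inside Kurzweil's 2015–2025 window; with a slower doubling time the crossover slides later, which is the sensitivity his prediction depends on.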

The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate.

In addition, the estimates do not account for glial cells, which are at least as numerous as neurons, may outnumber them by as much as 10:1, and are now known to play a role in cognitive processes.[49]

The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10^8 synapses in 2006.[51]
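To make the idea of 'running a simulation model' of neural tissue concrete, here is a deliberately tiny spiking-network sketch. It is not Blue Brain's model: Blue Brain simulated detailed multi-compartment neurons, whereas this uses point leaky integrate-and-fire neurons, toy all-to-all coupling, and made-up parameters throughout.

```python
import random

# Minimal leaky integrate-and-fire (LIF) network sketch. Every parameter is
# illustrative only.
N = 100                      # neurons (the real column had ~10,000)
dt, tau = 1.0, 20.0          # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0
w = 0.1                      # uniform synaptic weight (toy all-to-all coupling)
drive = 0.06                 # constant external input current

random.seed(0)
v = [random.random() for _ in range(N)]  # initial membrane potentials
total_spikes = 0

for step in range(100):
    spiking = [i for i in range(N) if v[i] >= v_thresh]
    total_spikes += len(spiking)
    for i in spiking:
        v[i] = v_reset                   # reset neurons that fired
    recurrent = w * len(spiking) / N     # mean-field input from last step
    for i in range(N):
        v[i] += (dt / tau) * (-v[i]) + recurrent + drive  # leak + input
```

Even this caricature hints at why scale is the bottleneck: each time step touches every neuron and (implicitly) every synapse, so moving from 100 point neurons to 10,000 biophysically detailed ones with 10^8 synapses multiplies the work by many orders of magnitude.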

A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: 'It is not impossible to build a human brain and we can do it in 10 years,' Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford.[52]

Hans Moravec addressed the above arguments ('brains are more complicated', 'neurons have to be modeled in more detail') in his 1997 paper 'When will computer hardware match the human brain?'.[54]

A fundamental criticism of the simulated brain approach derives from embodied cognition, where human embodiment is taken as an essential aspect of human intelligence.

The first is called 'the strong AI hypothesis' and the second 'the weak AI hypothesis', because the first makes the stronger claim: it assumes something special has happened to the machine that goes beyond all the abilities we can test.

Since the launch of AI research in 1956, the field's growth has slowed over time, stalling the aim of creating machines capable of intelligent action at the human level.[68]

Furthermore, AI researchers have created computers that can perform jobs that are complicated for people, but conversely have struggled to develop computers capable of tasks that are simple for humans, such as everyday perception and common-sense reasoning.[68]

The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers' attempts to emulate the function of the human brain in computer hardware.[71]

Many researchers tend to underestimate the doubt involved in predictions about AI's future, but without taking those issues seriously, people may overlook solutions to problematic questions.[37]

A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics remain an area with a significant gap between computer performance and human performance.[71]

There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence:

It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent.

Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require 'unforeseeable and fundamentally unpredictable breakthroughs' and a 'scientifically deep understanding of cognition'.[76]

Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.[77]

Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they'd be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081.
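The gap between the median (2040–2050) and the mean (2081) is what a right-skewed distribution produces: a minority of experts forecasting very distant dates pulls the mean far past the median without moving it. A sketch with entirely hypothetical numbers (not the actual poll data):

```python
import statistics

# Hypothetical expert predictions: most cluster mid-century, a few forecast
# very distant dates. Illustrative only.
predictions = [2035, 2040, 2042, 2045, 2048, 2050, 2055, 2060, 2150, 2300]

print(statistics.median(predictions))  # 2049.0 -- robust to the outliers
print(statistics.mean(predictions))    # 2082.5 -- dragged up by 2150 and 2300
```

This is why forecasting surveys usually report the median: two distant-future answers out of ten shift the mean by decades while leaving the median almost untouched.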

But this has not stopped philosophers and researchers from guessing what the smart computers or robots of the future may do, including forming a utopia by being our friends or overwhelming us in an AI takeover.

A growing population of intelligent robots could conceivably out-compete inferior humans in job markets, in business, in science, in politics (pursuing robot rights), technologically, sociologically (by acting as one), and militarily.

Looking to the Future: Cognitive Artificial Intelligence

AI systems can generally only respond to the questions we know to ask and are constrained by the data we feed into them. With cognitive AI, systems can act based on pure learning, so they can proactively deliver information, detect and prevent potential problems, identify data patterns, and more. The AI capabilities we’re getting to know today will undoubtedly expand over time.

Greatest Opportunities for Cognitive AI

Big Data: By combining processing capacity with cognitive learning capabilities, AI will be able to identify patterns in data that would be difficult or nearly impossible for humans to see alone.
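As a minimal, hypothetical illustration of machine pattern-spotting (far simpler than anything one would call cognitive AI), the sketch below flags statistical outliers in a stream of readings; the data and the two-standard-deviation threshold are arbitrary choices for the example.

```python
import statistics

# Toy anomaly detection: flag readings more than 2 standard deviations from
# the mean. A crude stand-in for the pattern-spotting described above; real
# systems use far richer models.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 25.7, 10.1, 9.7, 10.0]

mu = statistics.mean(readings)
sigma = statistics.stdev(readings)
anomalies = [x for x in readings if abs(x - mu) > 2 * sigma]
print(anomalies)  # [25.7]
```

Note how the anomaly itself inflates both the mean and the standard deviation, which is why even this trivial detector needs a forgiving threshold; at scale, that kind of self-masking is one reason pattern detection across large datasets is hard for simple rules and humans alike.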

Have an Open Mind: While artificial intelligence is still fairly “unknown” in some ways, keeping an open mind to new technologies will set your company up to be more productive, efficient, and profitable.

Johan Oldenkamp at the Open Mind Conference 2016

The Brussels terror attacks happened on March 22nd this year. In numbers, this date is represented by 3/22. The number 322 is on the Skull & Bones logo.

Google's Deep Mind Explained! - Self Learning A.I.

The Open Mind: Angels and Demons of A.I. - Wendell Wallach

Yale scholar Wendell Wallach talks about how to keep technology from slipping beyond our control. (Taped: 03-10-16) Premiered in May 1956, Open Mind was ...

Google's DeepMind AI Just Taught Itself To Walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...

Elon Musk’s A.I. Destroys Champion Gamer!

John Searle: "Consciousness in Artificial Intelligence" | Talks at Google

John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google is focused on the philosophy of mind and the ...

Keynote: AI in the Open World

Fielding AI solutions in the open world requires systems to grapple with incompleteness and uncertainty. This session will address several promising areas of ...

Joe Rogan - Elon Musk on Artificial Intelligence

Taken from Joe Rogan Experience #1169:

Billionaires on Artificial Intelligence, AI (Elon Musk, Bill Gates, Jack Ma)

Bill Gates, Jack Ma, Elon Musk, Jeff Bezos talk about Artificial Intelligence, automation and its impacts. Billionaires on... A series exploring the billionaire mindset ...

Scientists Put the Brain of a Worm Into a Robot… and It MOVED

This robot contains the digitized brain of a worm, and without any outside input it just... works! Here's what this could mean for the future of AI. This Is How Your ...