AI News: Understanding the Brain with the Help of Artificial Intelligence

From Inception in the Brain to a Less Artificial Intelligence

This effort has led to seminal discoveries in the field such as Barlow’s “fly detector” cells in the retina, Hubel and Wiesel’s orientation-selective cells in primary visual cortex and Gross’s “face cells” in inferotemporal cortex.

They call them 'inception' because they allow us to implant a desired activity pattern in the brain ('Inception' à la the movie by Nolan), and 'loops' because in one pass of our protocol we start with in vivo experiments, optimize in silico responses and return to in vivo experiments in the same animal.

They applied inception loops in the visual system and found that the optimal stimuli in mouse V1 exhibited complex, high spatial frequency details such as sharp corners, checkerboard patterns, irregular pointillist textures, and a variety of curved strokes, features that deviate strikingly from the current de facto standard model of V1 as a bank of Gabor filters.
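As a rough illustration of the in silico step of such an inception loop, the sketch below runs gradient ascent on an input image to maximize the response of a single unit in a stand-in convolutional network; the architecture, image size, optimizer settings, and neuron index are illustrative assumptions, not the published model.

```python
# A rough sketch of the in-silico step of an inception loop: gradient ascent on
# a stimulus to maximize one model neuron's response. The network, image size,
# optimizer settings, and neuron index are illustrative assumptions.
import torch
import torch.nn as nn

# Hypothetical stand-in for a CNN that would first be fit to recorded V1 responses.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 100),                       # 100 model neurons
)
model.eval()

stimulus = torch.randn(1, 1, 64, 64, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([stimulus], lr=0.05)
target_neuron = 7                                           # arbitrary choice

for step in range(200):
    optimizer.zero_grad()
    response = model(stimulus)[0, target_neuron]
    (-response).backward()                    # ascend the neuron's response
    optimizer.step()
    stimulus.data.clamp_(-1.0, 1.0)           # keep the image in a valid range

# `stimulus` now approximates the model's most exciting input for that neuron,
# ready to be shown back to the same animal in vivo.
```

In the actual protocol, a predictive model of this kind would first be trained on recorded responses, and the optimized stimuli would then be presented back to the same animal to test the predictions.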

To this end, his lab is deciphering the structure of microcircuits in visual cortex (defining cell types and connectivity), elucidating the computations they perform, and applying these principles to develop novel machine learning algorithms.

Technological singularity

According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

Stanislaw Ulam reports a discussion with von Neumann 'centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue'.[4]

The concept and the term 'singularity' were popularized by Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of.

These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[15]

A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading.

Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the 'low-hanging fruit' of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[32]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[33]

Kurzweil reserves the term 'singularity' for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that 'The Singularity will allow us to transcend these limitations of our biological bodies and brains ...'

He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date 'will not represent the Singularity' because they do 'not yet correspond to a profound expansion of our intelligence.'[36]

He predicts paradigm shifts will become increasingly common, leading to 'technological change so rapid and profound it represents a rupture in the fabric of human history'.[37]

First, software self-improvement does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.[citation needed]

While not necessarily actively malicious, AIs have no inherent reason to promote human goals unless they could be programmed as such; if not, they might use the resources currently used to support mankind to promote their own goals, causing human extinction.[45][46][47]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research.

They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI was developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[48]

Some critics, like philosopher Hubert Dreyfus, assert that computers or machines can't achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same.[50]

Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived.

Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[59][60]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[61]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively 'notable events' appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[62]

Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.

In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).[75][76][77]

We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race.

One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world.

Hawking believed that in the coming decades, AI could offer 'incalculable benefits and risks' such as 'technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.'

In a hard takeoff scenario, an AGI rapidly self-improves, 'taking control' of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals.

In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.[90][91]

Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that 'creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.'[93]

Storrs Hall believes that 'many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process' in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.

Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[94]

Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.

Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called 'Digital Ascension' that involves 'people dying in the flesh and being uploaded into a computer and remaining conscious'.[100]

In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the 'ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.'[4]

Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.

In 1985, in 'The Time Scale of Artificial Intelligence', artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an 'infinity point': if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.[5][103]
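A quick check of the arithmetic behind the 'infinity point' (a sketch using Solomonoff's stated assumption that each successive doubling takes half as long as the previous one): the total time for infinitely many doublings is a convergent geometric series,

$$4 + 2 + 1 + \tfrac{1}{2} + \cdots \;=\; \sum_{k=0}^{\infty} \frac{4}{2^{k}} \;=\; 8 \text{ years},$$

so capability doubles infinitely many times, and hence grows without bound, within a finite eight-year window.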

Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[6]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is 'to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges.'[107]

The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

Can Artificial Intelligence (AI) Help In Finding Extraterrestrial Intelligence (ETI)?

Many researchers who specialize in the search for extraterrestrial intelligence (SETI), like Jill Tarter, opine that a conventional approach like searching for electromagnetic signal transmissions might be good for detecting technosignatures, but not necessarily intelligence itself.

Machines running AI programs learn activities like speech recognition, planning, problem-solving, and perception all by themselves, and can work efficiently without getting lost in the labyrinth of data.

For instance, spiders are known to process information through their webs, crows understand analogies, cetaceans like dolphins have their own dialects, and primates like chimpanzees skillfully use tools to complete tasks.

SETI researchers are convinced that artificial intelligence could turn out to be the game changer in the search, as AI algorithms are ideal for spotting differences and forming patterns out of the massive sea of data.

That is why researchers at the SETI Institute are partnering with tech giants like Intel, IBM, and others to discuss ways in which AI can be used to solve pertinent space and science problems, including the discovery of extraterrestrial intelligence.

The idea at the heart of this strategy is to scan for anomalous patterns that may not necessarily be communication signals sent by aliens, but rather subtle manifestations of technological advancement that have been achieved in other parts of the universe.

That being said, designing an efficient AI anomaly detection engine that works ingeniously with multivariate data remains a dark art for even the best minds working in the SETI field.
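As a very rough sketch of what such an engine might look like (an illustrative toy, not a SETI Institute pipeline), the snippet below uses an isolation forest to flag multivariate feature vectors that look unlike the bulk of the observations; the feature names, data, and contamination setting are assumptions made for the example.

```python
# A toy sketch of multivariate anomaly detection with an isolation forest:
# flag observation-derived feature vectors that look unlike the bulk of the
# data. Feature names, values, and the contamination setting are illustrative
# assumptions, not a SETI pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend features extracted from radio observations:
# [bandwidth_hz, drift_rate_hz_per_s, duration_s, peak_snr]
background = rng.normal(loc=[1e3, 0.1, 5.0, 8.0],
                        scale=[2e2, 0.05, 1.0, 2.0],
                        size=(10_000, 4))
oddball = np.array([[5.0, 3.2, 300.0, 40.0]])      # a narrowband, drifting outlier
observations = np.vstack([background, oddball])

detector = IsolationForest(contamination=1e-4, random_state=0)
detector.fit(observations)

scores = detector.decision_function(observations)  # lower = more anomalous
candidates = np.argsort(scores)[:5]                # five most anomalous rows
print("candidate rows for human follow-up:", candidates)
```

The hard part alluded to above is not the algorithm itself but choosing features, thresholds, and follow-up procedures so that rare but mundane events do not swamp any genuinely interesting outliers.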

The limitation of autocorrelators, at the moment, is that they work best with limited data size and still lack the flexibility needed to correctly spot the outliers that could provide some hint of extraterrestrial intelligence.

It is possible that proof of alien intelligence could be lurking within the petabytes of space data we’ve already gathered, and the sophisticated AI of the future will reveal all the secrets hiding there!

DARPA Thinks AI Could Help Troops Telepathically Control Machines

The Pentagon’s research office is exploring how artificial intelligence can improve technologies that link troops’ brains and bodies to military systems.

The Defense Advanced Research Projects Agency recently began recruiting teams to research how AI tools could augment and enhance “next-generation neurotechnology.” Through the program, officials ultimately aim to build AI into neural interfaces, a technology that lets people control, feel and interact with remote machines as though they were a part of their own body.

The brain receives a constant stream of sensory information from the maze of nerves spread throughout the body, but there are only so many feelings a given nerve can express.

Under the program, teams will also build an AI-powered interface that can stimulate “artificial signals” within the body—creating a sense of burning without heat or touch without physical contact, for example.

When Natural Intelligence blows Artificial Intelligence out of the water

That is why ‘red’ tends to be associated with ‘apple’: their repeated co-occurrence causes synapses between the neurons that code for those features and objects to be linked. This mechanistic substrate explains how our statistically inclined minds work: it helps us find regularity in our otherwise confusing world.
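A toy sketch of that co-occurrence mechanism, in the spirit of Hebbian learning ("neurons that fire together wire together"); the feature set, learning rate, and decay below are illustrative assumptions rather than a biological model.

```python
# A toy Hebbian co-occurrence model: repeated pairing of 'red' and 'apple'
# strengthens the weight linking them. Learning rate, decay, and features are
# illustrative assumptions, not a biological model.
import numpy as np

features = ["red", "round", "apple", "banana"]
n = len(features)
weights = np.zeros((n, n))                  # "synaptic" weights between units

learning_rate, decay = 0.1, 0.001

def present(active):
    """Hebbian update for one experience: co-active units get linked."""
    global weights
    x = np.array([1.0 if f in active else 0.0 for f in features])
    weights += learning_rate * np.outer(x, x)   # strengthen co-active pairs
    np.fill_diagonal(weights, 0.0)              # ignore self-connections
    weights *= (1.0 - decay)                    # slow forgetting

for _ in range(100):
    present({"red", "round", "apple"})          # apples: frequently red and round
for _ in range(10):
    present({"round", "banana"})                # bananas: occasionally round

i, j = features.index("red"), features.index("apple")
print("red-apple weight:", weights[i, j])       # strong after repeated pairing
```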

Assuming an average of 100 billion neurons, 500 trillion synapses, an average lifespan of 2 billion seconds for a typical human being, and that each neuron in the brain emits a spike every second (a low estimate… our neurons are more active than that!), the total number of spikes in the brain that are able to change a synapse (read: cause learning) comes to a staggering septillion!
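Spelled out, and under the additional assumption that each spike can influence all of a neuron's roughly 5,000 synapses (500 trillion synapses spread over 100 billion neurons), the back-of-the-envelope arithmetic behind that figure is

$$10^{11}\ \text{neurons} \times 1\ \tfrac{\text{spike}}{\text{s}} \times 2\times10^{9}\ \text{s} \times 5\times10^{3}\ \tfrac{\text{synapses}}{\text{neuron}} \;=\; 10^{24}\ \text{synaptic events},$$

on the order of a septillion opportunities for a spike to modify a synapse.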

While the time scale on which learning occurs varies from minutes to much longer periods, the basic neural and synaptic mechanisms outlined above form the basis of the underlying learning machinery.

Designing and fielding AI that can exhibit this “continual” (or lifelong, or persistent) learning is an unsolved problem: AI can be trained to achieve high levels of performance when presented with predetermined datasets, but it cannot learn continually in the settings typical of the world inhabited by humans.
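A minimal illustration of the problem (a toy demonstration, not the article's experiment): train a small classifier on one task, then on a second task without revisiting the first, and accuracy on the first task collapses; this is the "catastrophic forgetting" that continual-learning research tries to avoid. The data, model size, and training schedule below are assumptions made for the demonstration.

```python
# A toy illustration of why continual learning is hard: train a small
# classifier on task A, then on task B with no replay of A, and accuracy on
# task A collapses (catastrophic forgetting). Data, model size, and schedule
# are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset):
    """Two Gaussian blobs to classify; `offset` shifts them to define a new task."""
    x0 = torch.randn(200, 2) + torch.tensor([offset, 0.0])
    x1 = torch.randn(200, 2) + torch.tensor([offset + 3.0, 0.0])
    y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])
    return torch.cat([x0, x1]), y

def train(model, x, y, steps=300):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
task_a, task_b = make_task(0.0), make_task(-6.0)

train(model, *task_a)
print("task A accuracy after training on A:", accuracy(model, *task_a))   # near 1.0
train(model, *task_b)                                                     # no replay of A
print("task A accuracy after training on B:", accuracy(model, *task_a))   # near chance
```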

Ultimately, machines and, more generally, AI systems that need to co-exist alongside humans in challenging deployment scenarios will need to exhibit the same staggering ability of humans to learn during their lifetime. Currently, traditionally designed AI cannot do that, limiting the ability of machines not only to work alongside humans, but to challenge us for the highest step of the Intelligence podium.

Neuroscience and Artificial Intelligence Need Each Other | Marvin Chun | TEDxKFAS

Big data and fast computing have advanced both neuroscience and artificial intelligence. The use of machine learning to compute vast amounts of brain data ...

The Intelligence Revolution: Coupling AI and the Human Brain | Ed Boyden

Edward Boyden is a Hertz Foundation Fellow and recipient of the prestigious Hertz Foundation Grant for graduate study in the applications of the physical, ...

What is Artificial Intelligence (or Machine Learning)?

What is AI? What is machine learning and how does it work? You've probably heard the buzz. The age of artificial intelligence has arrived. But that doesn't mean ...

Science Documentary: Cognitive science, a documentary on mind processes, artificial intelligence

Science Documentary: Cognitive science, a documentary on mind processes and artificial intelligence. Cognitive science is the study of mind processes as they relate to ...

Google's Deep Mind Explained! - Self Learning A.I.

MIT AI: Brains, Minds, and Machines (Tomaso Poggio)

Tomaso Poggio is a professor at MIT and is the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound ...

Can we build an artificial brain?

Films like Ex Machina, AI, and Transcendence revolve around artificial intelligence, and recreating the human brain in electronic form. The question is - can we ...

Neuroscience, AI and the Future of Education | Scott Bolland | TEDxSouthBank

Currently around 63% of students are disengaged at school, meaning that they withdraw either physically or mentally before they have mastered the skills that ...

Scientists Put the Brain of a Worm Into a Robot… and It MOVED

This robot contains the digitized brain of a worm, and without any outside input it just... works! Here's what this could mean for the future of AI. This Is How Your ...

The Rise of Artificial Intelligence | Documentary HD

AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the ...