AI News, Klaus Rohde


Some famous people, among them the eminent physicist Stephen Hawking, have warned of the danger posed by artificial intelligence, which may soon surpass human intelligence and take over the world, making us superfluous and dispensable.

For example, look at drones as they exist at this moment: fairly simple and certainly not highly intelligent relative to how they could potentially be in the not too distant future, nevertheless extremely dangerous if used for the wrong purposes in warfare or even in civil life.

In Stanley Kubrick's film 2001: A Space Odyssey, the onboard computer HAL tries to take control of the space vehicle after lip-reading a conversation in which the crew members conclude that HAL was wrong in its analysis of a technical fault and decide to disconnect it.

I have discussed this problem in an earlier post: https://krohde.wordpress.com/2016/04/10/intelligence-and-consciousness-artifical-intelligence-and-conscious-robots-soul-and-immortality/ Arthur Schopenhauer, in the first half of the 19th century, discussed the problem in the context of Lamarck’s theory of the inheritance of acquired characters, and I believe his thoughts are very relevant and convincing in the context of modern evolutionary theory.

According to him (my translation): Lamarck ‘puts the animal equipped with ‘Wahrnehmung’ (ability to perceive) but without any organs and ‘entschiedene Bestrebungen’ (clear aims) first: this enables it to perceive the conditions under which it has to live, leading to the development of aims, i.e.

He further claims that ‘Aus diesem Grunde lässt sich auch annehmen, dass nirgends, auf keinem Planeten, oder Trabanten, die Materie in den Zustand endloser Ruhe gerathen werde, sondern die ihr innewohnenden Kräfte (d.h. der Wille, dessen blosse Sichtbarkeit sie ist) werden der eingetretenen Ruhe stets wieder ein Ende machen … um als mechanische, physikalische, chemische, organische Kräfte ihr Spiel von neuem zu beginnen, da sie allemal nur auf den Anlass warten’ (for this reason we must assume that nowhere, on no planet or satellite, will matter fall into a state of endless rest; rather, the forces dwelling within it (i.e. the Will, of which matter is the mere visibility) will always put an end to any rest that has set in … in order to begin their play anew as mechanical, physical, chemical, organic forces, since they are always only waiting for the occasion).

It seems to me that the only way AIs can become dangerous per se is by combining them with organic entities that have evolved over time and possess a strong Will (for example, by implanting into humans quantum computers aligned with their brains).

An optimistic outlook was also presented by Marta Lenartowicz, who proposed that ‘Contrary to the prevailing pessimistic AI takeover scenarios, the theory of the Global Brain (GB) argues that this foreseen collective, distributed superintelligence is bound to include humans as its key beneficiaries.

As a result, it is foreseen that the cognitive architecture of the GB will include human beings and such technologies, which will best prove to advance our collective wellbeing.’ But I would go further: humans in this ‘superintelligence’ or ‘Global Brain’ are not only part of it, they are the only component of such a postulated superintelligence that can – in principle – evolve by their own initiative, as long as the other components are not evolved organic entities themselves.

The Evolutionary Argument Against Reality

Amanda Gefter, Contributing Writer, April 21, 2016. The cognitive scientist Donald Hoffman uses evolutionary game theory to show that our perceptions of an independent reality must be illusions.

As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world.

Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality.

Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality.

On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience.

This is the aptly named “hard problem.” On the other side are quantum physicists, marveling at the strange fact that quantum systems don’t seem to be definite objects localized in space until we come along to observe them — whether we are conscious humans or inanimate measuring devices.

Experiment after experiment has shown — defying common sense — that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers.

As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.” So while neuroscientists struggle to understand how there can be such a thing as a first-person reality, quantum physicists have to grapple with the mystery of how there can be anything but a first-person reality.

The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately, and thus were more likely to pass on the genes coding for those more accurate perceptions; after thousands of generations, we can be quite confident that we are the offspring of those who saw accurately, and so we see accurately.

This argument misunderstands a fundamental fact about evolution: it is driven by fitness functions, mathematical functions that describe how well a given strategy achieves the goals of survival and reproduction.

The mathematical physicist Chetan Prakash proved a theorem that I devised that says: According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness.

Now suppose your fitness function is linear, so a little water gives you a little fitness, medium water gives you medium fitness, and lots of water gives you lots of fitness — in that case, the organism that sees the truth about the water in the world can win, but only because the fitness function happens to align with the true structure in reality.

For example, an organism tuned to fitness might see small and large quantities of some resource as, say, red, to indicate low fitness, whereas they might see intermediate quantities as green, to indicate high fitness.
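Hoffman's actual result rests on evolutionary game theory, but the red/green water example above can be sketched as a toy Monte Carlo simulation (my own construction, not Hoffman's model). It assumes a resource quantity uniform on 0–100 and a non-monotonic fitness function peaking at 50; both agents get exactly two percepts, one tracking quantity, the other tracking fitness:

```python
import random

def fitness(q):
    # Non-monotonic fitness: peaks at q = 50, falls to 0 at the extremes
    # (too little or too much water is equally bad).
    return 100 - 2 * abs(q - 50)

def truth_percept(q):
    # Truth-tuned agent: two percepts that faithfully track quantity.
    return "high" if q >= 50 else "low"

def fitness_percept(q):
    # Fitness-tuned agent: two percepts that track payoff, not quantity.
    return "green" if fitness(q) >= 50 else "red"

def choose_truth(q1, q2):
    # By symmetry, E[fitness | "low"] == E[fitness | "high"] == 50, so the
    # truth percept carries no payoff information; when percepts differ
    # the agent (arbitrarily) prefers "high".
    p1, p2 = truth_percept(q1), truth_percept(q2)
    if p1 == p2:
        return random.choice([q1, q2])
    return q1 if p1 == "high" else q2

def choose_fitness(q1, q2):
    # "Green" signals high expected payoff, so prefer it.
    p1, p2 = fitness_percept(q1), fitness_percept(q2)
    if p1 == p2:
        return random.choice([q1, q2])
    return q1 if p1 == "green" else q2

def average_payoff(chooser, trials=100_000):
    # Each trial: the agent picks one of two random resource patches.
    total = 0.0
    for _ in range(trials):
        q1, q2 = random.uniform(0, 100), random.uniform(0, 100)
        total += fitness(chooser(q1, q2))
    return total / trials

random.seed(0)
truth_avg = average_payoff(choose_truth)      # ~50: no better than chance
fit_avg = average_payoff(choose_fitness)      # ~62.5: exploits the payoff signal
print(f"truth-tuned: {truth_avg:.1f}, fitness-tuned: {fit_avg:.1f}")
```

With equal perceptual complexity (two percepts each), the fitness-tuned agent earns a higher expected payoff simply because its categories are aligned with the fitness function rather than with the true quantity.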

Suppose there’s a blue rectangular icon on the lower right corner of your computer’s desktop — does that mean that the file itself is blue and rectangular and lives in the lower right corner of your computer?

I noticed that they seemed to share a common mathematical structure, so I thought it might be possible to write down a formal structure for observation that encompassed all of them, perhaps all possible modes of observation.

When he invented the Turing machine, he was trying to come up with a notion of computation, and instead of putting bells and whistles on it, he said, Let’s get the simplest, most pared down mathematical description that could possibly work.

Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world.
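The loop described above can be sketched in a few lines. Everything here is illustrative, not Hoffman's formalism: the world is reduced to a single number, the decision step is my own addition to close the loop, and the maps P and A are placeholders:

```python
# Hypothetical sketch of the perception-action loop described above.
World = int        # a world state (here: quantity of some resource)
Experience = str   # what the agent actually perceives
Action = str       # what the agent can do

def P(w: World) -> Experience:
    # Perception map P: world -> experiences (a lossy interface).
    return "green" if 25 <= w <= 75 else "red"

def decide(x: Experience) -> Action:
    # A decision step (not named in the text) linking percepts to actions.
    return "approach" if x == "green" else "avoid"

def A(a: Action, w: World) -> World:
    # Action map A: actions (applied to the world) -> new world state.
    return max(0, w - 10) if a == "approach" else w

# One turn of the loop: world -> percept -> action -> changed world.
w = 60
w2 = A(decide(P(w)), w)
```

The point of the structure is only that perception and action are two maps running in opposite directions between the world and the agent, with the agent's experiences in between.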

The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go.

They are certain that it’s got to be classical properties of neural activity, which exist independent of any observers — spiking rates, connection strengths at synapses, perhaps dynamical properties as well.

The neuroscientists are saying, “We don’t need to invoke those kinds of quantum processes, we don’t need quantum wave functions collapsing inside neurons, we can just use classical physics to describe processes in the brain.” I’m emphasizing the larger lesson of quantum mechanics: Neurons, brains, space … these are just symbols we use, they’re not real.