AI News, Cyberbotics' Robot Curriculum/What is Artificial Intelligence?

Artificial Intelligence (AI) is an interdisciplinary field of study that includes computer science, engineering, philosophy and psychology.

Early in the 17th century, René Descartes envisioned the bodies of animals as complex but reducible machines, thus formulating the mechanistic theory, also known as the 'clockwork paradigm'.

Wilhelm Schickard created the first mechanical digital calculating machine in 1623, followed by machines of Blaise Pascal (1643) and Gottfried Wilhelm von Leibniz (1671), who also invented the binary system.

In 1931 Kurt Gödel showed that sufficiently powerful consistent formal systems contain true statements that are unprovable by any theorem-proving AI that systematically derives all possible theorems from the axioms.

Leonard Uhr and Charles Vossler published 'A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators' in 1963, which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of Rosenblatt's simple perceptrons.

Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy in what is sometimes called the first expert system.

In 1995, one of Ernst Dickmanns' robot cars drove more than 1000 miles in traffic at up to 110 mph, tracking and passing other cars (simultaneously Dean Pomerleau of Carnegie Mellon tested a semi-autonomous car with human-controlled throttle and brakes).

Hence, the observer will interact with the machine, for example by chatting using the keyboard and the screen, to try to understand whether or not there is a human intelligence behind this machine writing the answers to his questions.

Hence he will want to ask very complicated questions, see what the machine answers, and try to determine whether the answers are generated by an AI program or come from a real human being.

Although the original Turing test is often described as a computer chat session, the interaction between the observer and the machine may take many different forms, including a chess game, playing a virtual reality video game, interacting with a mobile robot, etc.

Unlike adults who will generally say that the robots were programmed in some way to perform this behavior, possibly mentioning the sensors, actuators and micro-processor of the robot, the children will describe the behavior of the robots using the same words they would use to describe the behavior of a cat running after a mouse.

They will attribute feelings to the robots, such as "he is afraid of", "he is angry", "he is excited", "he is quiet", "he wants to...", etc.

For example, if a benchmark consists of playing chess against the Deep Blue program, some observers may think that this requires some intelligence and hence is a cognitive benchmark, whereas other observers may object that it does not require intelligence and hence is not a cognitive benchmark.

They include IQ tests developed by psychologists as well as animal intelligence tests developed by biologists to evaluate, for example, how well rats remember the path to a food source in a maze, or how monkeys learn to press a lever to get food.

The last chapter of this book will introduce you to a series of robotics cognitive benchmarks (especially the Rat's Life benchmark) for which you will be able to design your own intelligent systems and compare them to others.

Artificial intelligence

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.

In computer science AI research is defined as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1]

Colloquially, the term 'artificial intelligence' is applied when a machine mimics 'cognitive' functions that humans associate with other human minds, such as 'learning' and 'problem solving'.[2]

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring 'intelligence' are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, 'AI is whatever hasn't been done yet.'[3]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13]

Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics.

This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity.[19]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[22][11]

The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as '0' and '1', could simulate any conceivable act of mathematical deduction.

The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38]

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from 'sporadic usage' in 2012 to more than 2,700 projects.

He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11]

An AI's intended goal function can be simple ('1 if the AI wins a game of Go, 0 otherwise') or complex ('Do actions mathematically similar to the actions that got you rewards in the past').

This is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits.[51]
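
As a rough illustration of the contrast between these two kinds of goal function, here is a minimal Python sketch; the function names, the feature vectors, and the squared-distance similarity measure are illustrative assumptions, not anything specified in the text.

```python
def go_reward(ai_won: bool) -> float:
    """Simple goal function: 1 if the AI wins a game of Go, 0 otherwise."""
    return 1.0 if ai_won else 0.0


def similarity_reward(action, past_actions, past_rewards):
    """Complex goal function (sketch): score an action by how mathematically
    similar it is to actions that earned rewards in the past."""
    def similarity(a, b):
        # Illustrative similarity: 1 / (1 + squared Euclidean distance).
        return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(r * similarity(action, a)
               for a, r in zip(past_actions, past_rewards))


print(go_reward(True))                                 # 1.0
print(similarity_reward((1.0, 0.0),
                        past_actions=[(0.9, 0.1), (0.0, 1.0)],
                        past_rewards=[1.0, 0.0]))      # roughly 0.98
```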

Some of the 'learners' described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world.

In practice, it is almost never possible to consider every possibility, because of the phenomenon of 'combinatorial explosion', where the amount of time needed to solve a problem grows exponentially.

The third major approach, extremely popular in routine business AI applications, uses analogizers such as SVM and nearest-neighbor: 'After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza'.
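
A minimal sketch of a nearest-neighbor analogizer in the spirit of the quoted example; the toy patient records, the features, and the use of Euclidean distance are illustrative assumptions, not data or choices from the source.

```python
import math
from collections import Counter


def knn_predict(records, query, k=3):
    """Classify a new case by majority vote of the k most similar past records.

    records: list of (feature_vector, label); query: feature_vector.
    Features are assumed to be pre-scaled so distances are comparable.
    """
    nearest = sorted(records, key=lambda rec: math.dist(rec[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]


# Hypothetical past patients: (temperature in C, age) -> diagnosis.
past = [((38.9, 30), "influenza"), ((37.0, 45), "healthy"),
        ((39.1, 25), "influenza"), ((36.8, 60), "healthy"),
        ((38.5, 35), "influenza")]
print(knn_predict(past, (38.7, 28)))  # "influenza"
```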

A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial 'neurons' that learn by comparing the network's output to the desired output and altering the strengths of the connections between its internal neurons to 'reinforce' connections that seemed to be useful.

Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[61]
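
A hedged sketch of the fit-versus-complexity trade-off described above; the linear penalty and the example numbers are illustrative assumptions in the spirit of AIC/BIC-style criteria, not a method named in the text.

```python
def penalized_score(fit_error, n_parameters, complexity_weight=1.0):
    """Score a candidate theory: reward fit (low error), penalize complexity.

    Lower is better; complexity_weight controls how strongly extra
    parameters are punished.
    """
    return fit_error + complexity_weight * n_parameters


# A more complex theory must fit substantially better to be preferred.
simple = penalized_score(fit_error=10.0, n_parameters=2)    # 12.0
complex_ = penalized_score(fit_error=9.5, n_parameters=20)  # 29.5
print(simple < complex_)  # True: keep the simpler theory here
```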

A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[62]

Instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects.

Humans also have a powerful mechanism of 'folk psychology' that helps them to interpret natural-language sentences such as 'The city councilmen refused the demonstrators a permit because they advocated violence'.

For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[71][72][73]

By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[75]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a 'combinatorial explosion': they became exponentially slower as the problems grew larger.[55]

In addition, some projects attempt to gather the 'commonsense knowledge' known to the average person into a database containing extensive knowledge about the world.

by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern).

They need a way to visualize the future (a representation of the state of the world, with the ability to predict how their actions will change it) and to make choices that maximize the utility (or 'value') of the available choices.[97]

A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts.

Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well.

Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications.

Machine perception is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world.

A giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its 'object model' to assess that fifty-meter pedestrians do not exist.[113]

Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[115]

The paradox is named after Hans Moravec, who stated in 1988 that 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility'.[119][120]

Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[129]

Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[130]

These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI.

Many researchers predict that such 'narrow AI' work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[17][133]

One high-profile example is that DeepMind in the 2010s developed a 'generalized artificial intelligence' that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[134][135][136]

Hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to 'slurp up' a comprehensive knowledge base from the entire unstructured Web.[5]

Finally, a few 'emergent' approaches aim to simulate human intelligence extremely closely, on the theory that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[139][140]

For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence).

A problem like machine translation is considered 'AI-complete', because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science.

Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[14]

His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[147]

Other researchers found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[151]

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.

This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[153][154][155][156]

Artificial neural networks are an example of soft computing: they are solutions to problems that cannot be solved with complete logical certainty, and where an approximate solution is often sufficient.

Much of traditional GOFAI got bogged down in ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results.

However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures.

The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d]

Compared with GOFAI, new 'statistical learning' techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets.

The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models;

In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.[161][162]

These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top.
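
A minimal Python sketch of blind hill climbing on a toy one-dimensional landscape; the scoring function, the neighbor moves, and the step budget are illustrative assumptions.

```python
import random


def hill_climb(score, start, neighbors, max_steps=10_000):
    """Blind hill climbing: repeatedly try a neighboring guess and keep it
    whenever it scores higher than the current one."""
    current = start
    for _ in range(max_steps):
        candidate = random.choice(neighbors(current))
        if score(candidate) > score(current):
            current = candidate  # take a step uphill
    return current


# Toy landscape: maximize -(x - 3)^2 over the integers; the peak is at x = 3.
best = hill_climb(score=lambda x: -(x - 3) ** 2,
                  start=0,
                  neighbors=lambda x: [x - 1, x + 1])
print(best)  # 3
```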

Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[175][176]

Fuzzy set theory assigns a 'degree of truth' (between 0 and 1) to vague statements such as 'Alice is old' (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false.

Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as 'if you are close to the destination station and moving fast, increase the train's brake pressure'.
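
A small sketch of a fuzzy 'degree of truth' and a vague control rule like the one quoted above; the membership thresholds and the use of min() as fuzzy AND are common conventions assumed here for illustration.

```python
def degree_old(age):
    """Membership of 'Alice is old': 0 below 40, 1 above 80, linear in between."""
    if age <= 40:
        return 0.0
    if age >= 80:
        return 1.0
    return (age - 40) / 40


def brake_pressure(closeness, fastness):
    """Toy fuzzy rule: brake pressure follows 'close to the station AND moving
    fast', with min() playing the role of fuzzy AND."""
    return min(closeness, fastness)


print(degree_old(65))            # 0.625
print(brake_pressure(0.9, 0.7))  # 0.7
```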

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[194]
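
As one concrete instance of filtering a noisy stream of data over time, here is a minimal one-dimensional Kalman filter sketch; the random-walk state model and the noise variances are assumptions chosen for illustration.

```python
def kalman_1d(measurements, process_var=1e-4, meas_var=0.01,
              init_estimate=0.0, init_var=1.0):
    """Track a slowly drifting scalar from noisy measurements."""
    x, p = init_estimate, init_var  # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the hidden state may drift, so uncertainty grows.
        p = p + process_var
        # Update: blend prediction and measurement, weighted by uncertainty.
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates


readings = [1.2, 0.9, 1.1, 1.4, 1.0]
print(kalman_1d(readings)[-1])  # smoothed estimate near the readings' average
```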

Complicated graphs with diamonds or other 'loops' (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities.

Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as 'naive Bayes' on most practical data sets.[210][211]
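
A hedged sketch of such an accuracy comparison using scikit-learn on synthetic data; the library, the generated dataset, and the cross-validation setup are assumptions, since the text does not prescribe any particular tooling.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic binary classification problem standing in for "practical data".
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for name, model in [("naive Bayes", GaussianNB()), ("SVM", SVC())]:
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {accuracy:.3f}")
```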

A simple 'neuron' N accepts input from multiple other neurons, each of which, when activated (or 'fired'), casts a weighted 'vote' for or against whether neuron N should itself activate.

One simple algorithm (dubbed 'fire together, wire together') is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another.
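
A minimal sketch of the weighted 'vote' and the 'fire together, wire together' update just described; the threshold, learning rate, and example values are illustrative assumptions.

```python
def activate(inputs, weights, threshold=1.0):
    """A simple 'neuron': fire if the weighted vote of its inputs reaches the threshold."""
    return sum(x * w for x, w in zip(inputs, weights)) >= threshold


def hebbian_update(weights, inputs, post_fired, rate=0.1):
    """'Fire together, wire together': when the neuron fires, strengthen the
    weights coming from the inputs that were active at the same time."""
    if not post_fired:
        return weights
    return [w + rate * x for w, x in zip(weights, inputs)]


weights = [0.6, 0.5, 0.2]
inputs = [1, 1, 0]                         # the first two upstream neurons fired
fired = activate(inputs, weights)          # 0.6 + 0.5 = 1.1 >= 1.0, so it fires
weights = hebbian_update(weights, inputs, fired)
print(fired, weights)                      # True, roughly [0.7, 0.6, 0.2]
```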

In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending;

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events).

Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ('fire together, wire together'), GMDH or competitive learning.[216]

However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches.

For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a 'credit assignment path' (CAP) depth of seven.

Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[225][226][224]

In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.[232]

Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[233]

In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[243]

The main areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.

The 'imitation game' (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[262]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[268]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[271]

In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[273]

Another study is using artificial intelligence to try to monitor multiple high-risk patients, which is done by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[276]

The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.[277]

However, Google has been working on an algorithm aimed at eliminating the need for pre-programmed maps and instead creating a device that can adjust to a variety of new surroundings.[284]

Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[285]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

Other areas where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

A report by the Guardian newspaper in the UK in 2018 found that online gambling companies were using AI to predict the behavior of customers in order to target them with personalized promotions.[299]

Developers of commercial AI platforms are also beginning to appeal more directly to casino operators, offering a range of existing and potential services to help them boost their profits and expand their customer base.[300]

He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down.

If this AI's goals do not reflect humanity's – one example is an AI told to compute as many digits of pi as possible – it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.

For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[326][327]

Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[336]

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[342]

The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.

In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines.

Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence.

Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.'[345]

The philosophical position that John Searle has named 'strong AI' states: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'[347]

The technological singularity is the point at which accelerating progress in technologies causes a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, thus radically changing or even ending civilization.

Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[353]

Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts.

In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the organic human form with lifelike muscular metallic skins; his later book 'the Gynoids' was used by or influenced movie makers including George Lucas and other creatives.

Sorayama never considered these organic robots to be a real part of nature, but always an unnatural product of the human mind: a fantasy existing in the mind even when realized in actual form.

2029: the year when robots will have the power to outsmart their makers

The entrepreneur and futurologist has predicted that in 15 years' time computers will be more intelligent than we are and will be able to understand what we say, learn from experience, make jokes, tell stories and even flirt.

Kurzweil, 66, who is considered by some to be the world's leading artificial intelligence (AI) visionary, is recognised by technologists for popularising the idea of 'the singularity' – the moment in the future when men and machines will supposedly converge.

This month it bought the cutting-edge British artificial intelligence startup DeepMind for £242m and hired Geoffrey Hinton, a British computer scientist and the world's leading expert on neural networks.

In 1990 he predicted that a computer would defeat a world chess champion by 1998 (in 1997, IBM's Deep Blue defeated Garry Kasparov), and he predicted the future prominence of the world wide web at a time when it was only an obscure system that was used by a few academics.

"We'll give you the independence you've had with your own company, but you'll have these Google-scale resources." In 2009 Kurzweil co-founded the Singularity University, partly funded by Google, an unaccredited graduate school devoted to his ideas and the aim of exploring exponential technologies.

Mark Halpern

In the October 1950 issue of the British quarterly Mind, Alan Turing published a 28-page paper titled “Computing Machinery and Intelligence.”

In 1956, less than six years after its publication in a small periodical read almost exclusively by academic philosophers, it was reprinted in The World of Mathematics, an anthology of writings on the classic problems and themes of mathematics and logic, most of them written by the greatest mathematicians and logicians of all time.

(In an act that presaged much of the confusion that followed regarding what Turing really said, James Newman, editor of the anthology, silently re-titled the paper “Can a Machine Think?”) Since then, it has become one of the most reprinted, cited, quoted, misquoted, paraphrased, alluded to, and generally referenced philosophical papers ever published.

Turing’s paper claimed that suitably programmed digital computers would be generally accepted as thinking by around the year 2000, achieving that status by successfully responding to human questions in a human-like way.

The part that has seized our imagination, to the point where thousands who have never seen the paper nevertheless clearly remember it, is Turing’s proposed test for determining whether a computer is thinking —

If the interrogator cannot distinguish computers from humans any better than he can distinguish, say, men from women by the same means of interrogation, then we have no good reason to deny that the computer that deceived him was thinking.

Turing does not argue for the premise that the ability to convince an unspecified number of observers, of unspecified qualifications, for some unspecified length of time, and on an unspecified number of occasions, would justify the conclusion that the computer was thinking —

Some of his defenders have tried to supply the underpinning that Turing himself apparently thought unnecessary by arguing that the Test merely asks us to judge the unseen entity in the same way we regularly judge our fellow humans: if they answer our questions in a reasonable way, we say they’re thinking.

If his responses seemed like nothing more than reshufflings and echoes of the words we had addressed to him, or if they seemed to parry or evade our questions rather than address them, we might conclude that he was not acting in good faith, or that he was gravely brain-damaged and thus accidentally deprived of his birthright ability to think.

Turing expressed his judgment that computers can think in the form of a prediction: namely, that the general public of fifty years hence will have no qualms about using “thinking”

Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Note that Turing bases that prediction not on an expectation that the computer will perform any notable mathematical, scientific, or logical feat, such as playing grandmaster-level chess or proving mathematical theorems, but on the expectation that it will be able, within two generations or so, to carry on a sustained question-and-answer exchange well enough to leave most people, most of the time, unable to distinguish it from a human being.

The belief that a hidden entity is thinking depends heavily on the words he addresses to us being not re-hashings of the words we just said to him, but words we did not use or think of ourselves —

When members of the AI community need some illustrious forebear to lend dignity to their position, Turing’s name is regularly invoked, and his paper referred to as if it were holy writ.

His ideas, we are then told, are no longer the foundation of AI work, and his paper may safely be relegated to the shelf where unread classics gather dust, even while we are asked to pay its author the profoundest respect.

Perhaps the key to successful discrimination between a programmed computer and a human being is to ask the unseen entity the sort of questions that humans find easy to answer (not necessarily correctly), but that an AI programmer will find impossible to predict and handle, and to use such questions to unmask evasive and merely word-juggling answers.

The second question is likewise without discriminatory value, since neither man nor machine would have any trouble with this arithmetic task, given 30 seconds to perform it;

The questions Turing puts in the interrogator’s mouth seem almost deliberately designed to keep him from understanding what he’s dealing with, and Turing endows the computer with enough cleverness to fool the interrogator forever.

And if we read him with some care, we note also a glaring contradiction in Turing’s position: that between his initial refusal to respect the common understanding of key words and concepts, and his appeal at the conclusion of his argument to just such common usage.

Turing’s initial repudiation of common usage (circa 1950) gets forgotten as soon as he imagines an era (circa 2000) in which common usage supports his thesis.

Wilkes, himself a winner of the Turing Award, put it thus in 1992, in a statement as true today as it was then: Originally, the term AI was used exclusively in the sense of Turing’s dream that a computer might be programmed to behave like an intelligent human being.

Indeed, it is difficult to escape the conclusion that, in the 40 years that have elapsed since 1950, no tangible progress has been made towards realizing machine intelligence in the sense that Turing had envisaged.

Hayes does not even mention the Test as a goal for AI workers, but does conclude with a respectful quotation from Turing, thus exemplifying the double attitude toward the master: ignore his specific proposal even while donning his mantle to cover your own nakedness.

In a survey article in the Proceedings of the IRE in 1961, Minsky defends the idea that computers might think by saying that “we cannot assign all the credit to its programmer if the operation of a system comes to reveal structures not recognizable nor anticipated by the programmer,”

As an illustration, he mentions a wide variety of accomplishments, such as playing high-level chess, guiding an automobile down a road, and making possible the “electronic book.”

Instead, he attacks those who minimize AI’s achievements, like Hubert Dreyfus, author of What Computers Can’t Do: The trouble with those people who think that computer intelligence is in the future is that they have never done serious research on human intelligence.

As Jaron Lanier told the New York Times: “Turing assumed that the computer in this case [i.e., having passed the Test] has become smarter or more humanlike, but the equally likely conclusion is that the person has become dumber and more computerlike.”

Wilks (not to be confused with Maurice Wilkes, quoted earlier) offers us here a reductio ad absurdum: the Turing Test asks us to evaluate an unknown entity by comparing its performance, at least implicitly, with that of a known quantity, a human being.

For Raj Reddy, the question of defining intelligence has been answered by the late Herbert Simon, and he uses Simon’s definition as the basis for his sweeping claims about AI success: Can a computer exhibit real intelligence?

I know my friend is intelligent because he plays pretty good chess (can keep a car on the road, can diagnose symptoms of a disease, can solve the problem of the Missionaries and Cannibals, etc.).

Lenat is dedicated to building a computing system with enough facts about the world, and enough power of drawing inferences from those facts, to be able to arrive at reasonable conclusions about matters it has not been explicitly informed about.

In 1987, Peter Wegner, a computer scientist at Brown University, declared with charming candor: The bottom line is that we can answer the question [of whether computers understand] either way, depending on our interpretation of the term “understanding.”

This argument brushes aside both Turing and his critics: Turing’s operational approach to AI is treated as just another fuzzy-minded, metaphysical piece of wool-gathering, and his critics are rejected because, true or false, their negativity dampens the enthusiasm of AI workers, and thus impedes the progress of computer science.

If they come to believe that the doctrine that machines can think is simply a carrot being dangled in front of them to get them to pull the wagon, and that even if they pass the Test the carrot will remain out of reach —

If you’re going to give a patient a placebo, you don’t tell him you’re doing so, and if you’re going to take a position you don’t really believe in, hoping that it will motivate other people, you don’t publish a letter announcing your plan.

An observer’s surprise at learning that the interlocutor he thought was human is in fact a computer, or his surprise at learning that a computer has performed some feat that he thought only humans could perform, is the very essence of the Test.

This thought experiment demonstrates, Searle claims, that the ability to replace one string of symbols by another, however meaningful and responsive that output may be to human observers, can be done without an understanding of those symbols.

and, somewhat more seriously, that the collection of elements in the thought experiment (the room, its inhabitant, the slips of paper on which symbols are handed in and out, etc.) constitutes a “system”

For those who suspect that I’m making all this up, here is a representative sample from Douglas Hofstadter, found in his and Daniel Dennett’s The Mind’s I: Let us add a little color to this drab experiment and say that the simulated Chinese speaker involved is a woman and that the demons (if animate) are always male.

He gets quite carried away by the brainstorming spirit, and quite careless of the fact that the force of his original thought experiment is diluted by every variation and elaboration he entertains.

What is needed is the simplest thought experiment that will establish his basic proposition: namely, that some results usually obtainable only by the exercise of thought and understanding can be obtained without them.

The man who secretly possessed that sole copy, though completely unmathematical himself, could make a handsome living selling instant sine values to everyone who needed them.

And just as one man acquired an undeserved reputation as a mathematician by responding instantly to any request for a sine value, so the other will be seen as a brilliant Sinologist by responding in perfect Chinese to Chinese-language questions —

This is not to say that thinking has never been involved in the history of the Chinese Room (presumably the lexicon writer could think), only that active thinking is already finished before the Chinese Room opens for business.

In his defense of AI’s achievements, quoted above, Raj Reddy said that, “The trouble with those people who think that computer intelligence is in the future is that they have never done serious research on human intelligence....

Computers are general-purpose algorithm executors, and their apparent intelligent activity is simply an illusion suffered by those who do not fully appreciate the way in which algorithms capture and preserve not intelligence itself but the fruits of intelligence.

In 1991, a New Jersey businessman named Hugh Loebner founded and subsidized an annual competition, the Loebner Prize Competition in Artificial Intelligence, to identify and reward the computer program that best approximates artificial intelligence as Turing defined it.

The officials presiding over the competition had to settle a number of details ignored in Turing’s paper, such as how often the judges must guess that a computer is human before we accept their results as significant, and how long a judge may interact with a hidden entity before he has to decide.

Beyond these practical concerns, there are deeper questions about how to interpret the range of possible outcomes: What conclusions are we justified in reaching if the judges are generally successful in identifying humans as humans and computers as computers?

three competition judges made this mistake, as discussed below.) In addition, the Test calls for the employment of computer-naïve judges, who know virtually nothing of AI and its claims, and who listen to the hidden entities without prejudice.

It does not pretend to be more than a verbatim record of the exchanges between the judges and the terminals, but often it fails to be reliable even at that: a number of passages are impossible to follow because of faulty transcription, bad printing, and similar extraneous mechanical problems.

We are left to wonder: How could any attentive and serious judge fail to see the difference between a lively human exchange and the near-random fragments of verbiage emitted by the computer-driven terminals, whose connection to the questions that elicited them was, at best, the echoing of a few of the questioner’s words?

In another exchange, this one with Judge 1, T4 tries to enlarge and deepen the conversation, but the judge is not prepared to discuss Shakespearean stagecraft in any detail, and cuts off T4’s attempt to enrich the exchange: Judge 1: What is your opinion on Shakespeare’s plays?

At times, a reader of the transcripts finds himself checking an exchange again to be sure which is the terminal and which is the judge, since it is often the judge who seems to be avoiding the kind of closely engaged conversation that a computer program would be incapable of.

Of course, anyone with an understanding of how computers are made to mimic human responses would need no subject-matter expertise whatever to detect a computer posing as a human.

Such a judge would simply demand that the hidden entity respond to the ideas represented by his questions, warning that it would be severely penalized for repeating any of the key words in those questions.

T3’s statement on the nature of machines is supposed to come from an eight-year-old, one whose performance up to this point suggested that, if real, she is amazingly scatter-brained and ignorant even for her age.

While an eight-year-old would be forgiven for not knowing as much about the world as an adult, she would have mental quirks that would be harder for an adult programmer to foresee and mimic than the mature reactions of an adult.

The program, known during the trials as Terminal 5, issued remarks that were more amusing than most made by computer-driven terminals (this is not high praise), but were otherwise perfectly standard for such programs.

T5 relies on a strategy introduced many years ago by Joseph Weizenbaum and (separately) Kenneth Colby, in which the program picks up a fragment from the input (e.g., the X in “I wish I knew more about X”), and inserts it into a canned response (“Why are you interested in X?”).
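
A minimal sketch of this fragment-substitution strategy; the patterns, templates, and fallback remarks are illustrative assumptions and not the actual program behind Terminal 5.

```python
import random
import re

# A couple of illustrative patterns: lift a fragment X from the input
# and drop it into a canned template.
PATTERNS = [
    (re.compile(r"i wish i knew more about (.+)", re.I),
     "Why are you interested in {0}?"),
    (re.compile(r"i am (.+)", re.I),
     "How long have you been {0}?"),
]
FALLBACKS = ["Tell me more.", "Why do you say that?"]


def respond(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?")
    for pattern, template in PATTERNS:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    # No usable fragment: fall back on a non-responsive canned remark.
    return random.choice(FALLBACKS)


print(respond("I wish I knew more about opera"))
# -> "Why are you interested in opera?"
```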

answer, notices that T5 has even reproduced a typo and a grammatical error that were part of his question, but he assumes that T5 is just making fun of him: Judge 2: I getting tired and yes how to live is a topugh one.

When T5 can’t find a usable fragment in its input to incorporate in a therapeutic answer, it falls back on issuing some non-responsive remark, yet one with enough meat in it to have a chance of distracting the judges from noticing its total irrelevance.

And the gambit usually works, since most of the judges simply follow T5 wherever its random response generator takes it, never demanding that a consecutive, rational sequence of exchanges be developed.

Overall, the performance of the judges leaves us to draw some sad conclusions about their inability to engage in sustained conversation, their lack of knowledge on general human subjects, and their need to share their personal concerns even with entities that contribute little more to the “conversation”

The programs remain amazingly simpleminded, and as time goes on fool fewer judges, belying Epstein’s prediction of 1993 that “the confederates will never get much better at the task, but the computers will get better each year.”

What counts more heavily is that it is becoming clear to more and more observers that even if it were to be realized, its success would not signify what Turing and his followers assumed: even giving plausible answers to an interrogator’s questions does not prove the presence of active intelligence in the device through which the answers are channeled.

In the deepest sense, the AI champions see their critics as trying to reverse the triumph of the Enlightenment, with its promise that man’s mind can understand everything, and as retreating to an obscurantist, religious outlook on the world.

How Machines Learn

How do all the algorithms around us learn to do their jobs?

John Searle: "Consciousness in Artificial Intelligence" | Talks at Google

John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google is focused on the philosophy of mind and the ...

Machine Learning Control: Overview

This lecture provides an overview of how to use machine learning optimization directly to design control laws, without the need for a model of the dynamics.

The Dawn of Killer Robots (Full Length)

In INHUMAN KIND, Motherboard gains exclusive access to a small fleet of US Army bomb ..

Artificial Intelligence Is the New Science of Human Consciousness | Joscha Bach

The Mind-Controlled Bionic Arm With a Sense of Touch

In the first episode of Humans+, Motherboard dives into the world of future prosthetics, and the people working on closing the gap between man and machine.

MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars

This is lecture 1 of course 6.S094: Deep Learning for Self-Driving Cars taught in Winter 2017.

The astounding athletic power of quadcopters | Raffaello D'Andrea

In a robot lab at TEDGlobal, Raffaello D'Andrea demos his flying quadcopters: robots that think like athletes, solving physical problems with algorithms that help ...

Ethnography for Artificial Intelligence

An introduction to ethnography for Artificial Intelligence as well as conversational analysis and its relevance to AI.

Noam Chomsky - Can Machines Think?