
This Artificial Intelligence Pioneer Has a Few Concerns

In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful.

The letter states, “Our AI systems must do what we want them to do.” Thousands of people have since signed it, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs, along with top computer scientists, physicists and philosophers around the world.

By the end of March, about 300 research groups had applied to pursue new research into “keeping artificial intelligence beneficial” with funds contributed by the letter’s 37th signatory, the inventor-entrepreneur Elon Musk.

In a bombshell result reported recently in Nature, a simulated network of artificial neurons learned to play Atari video games better than humans in a matter of hours given only data representing the screen and the goal of increasing the score at the top—but no preprogrammed knowledge of aliens, bullets, left, right, up or down.

I think one answer is a technique called “inverse reinforcement learning.” Ordinary reinforcement learning is a process where you are given rewards and punishments as you behave, and your goal is to figure out the behavior that will get you the most rewards.
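To make the distinction concrete, here is a minimal sketch (purely illustrative Python) of ordinary reinforcement learning: a tabular Q-learning agent on a toy five-cell corridor that is told only the rewards it receives and must work out which behavior collects the most of them. Inverse reinforcement learning would run in the other direction, inferring the reward function from observed behavior. The environment, parameters, and names here are assumptions made up for the example.

# Ordinary reinforcement learning: tabular Q-learning on a toy corridor.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left or right along the corridor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # reward only at the far end
    return nxt, reward

for episode in range(500):
    s = 0
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward plus best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy: every state should prefer moving right (+1).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})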

For example, your domestic robot sees you crawl out of bed in the morning and grind up some brown round things in a very noisy machine and do some complicated thing with steam and hot water and milk and so on, and then you seem to be happy.

And then when I was applying to grad school I applied to do theoretical physics at Oxford and Cambridge, and I applied to do computer science at MIT, Carnegie Mellon and Stanford, not realizing that I’d missed all the deadlines for applications to the U.S. Fortunately Stanford waived the deadline, so I went to Stanford.

Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
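A hedged sketch of that look-ahead idea, using a toy subtraction game rather than chess (the game, the depth limit, and the heuristic are all invented for illustration): the program searches a few moves deep, scores the resulting positions with a rough heuristic, and chooses the move that leads toward the best-scoring state.

# Depth-limited look-ahead with a heuristic evaluation, on a toy game:
# players alternately take 1-3 counters; taking the last counter wins.
def moves(state):
    return [m for m in (1, 2, 3) if m <= state]

def heuristic(state):
    # Rough guess at how good a position is for the player to move.
    return 1.0 if state % 4 != 0 else -1.0

def search(state, depth, maximizing=True):
    if state == 0:                        # previous player took the last counter
        return -1.0 if maximizing else 1.0
    if depth == 0:                        # stop searching, trust the guess
        return heuristic(state) if maximizing else -heuristic(state)
    values = [search(state - m, depth - 1, not maximizing) for m in moves(state)]
    return max(values) if maximizing else min(values)

def best_move(state, depth=6):
    return max(moves(state), key=lambda m: search(state - m, depth - 1, False))

print(best_move(10))   # expected: 2, leaving the opponent a multiple of four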

One thing that’s really essential is to think about the decision problem at multiple levels of abstraction, so “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime.

The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.” Are computers currently capable of hierarchical decision making?

There are some games where DQN just doesn’t get it, and the games that are difficult are the ones that require thinking many, many steps ahead in the primitive representations of actions—ones where a person would think, “Oh, what I need to do now is unlock the door,” and unlocking the door involves fetching the key, etcetera.

The basic idea of the intelligence explosion is that once machines reach a certain level of intelligence, they’ll be able to work on AI just like we do and improve their own capabilities—redesign their own hardware and so on—and their intelligence will zoom off the charts.

The most convincing argument has to do with value alignment: You build a system that’s extremely good at optimizing some utility function, but the utility function isn’t quite right.

Otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry.

If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it—that it’s just missing some chunk of what’s obvious to humans—then you’re not going to want that thing in your house.

Then there’s the question, if we get it right such that some intelligent systems behave themselves, as you make the transition to more and more intelligent systems, does that mean you have to get better and better value functions that clean up all the loose ends, or do they still continue behaving themselves?

With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide.

What you would do is write a very conservative mathematical description of the physical world—airplanes can accelerate within such-and-such envelope—and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
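A minimal sketch of how such a conservative envelope might be checked numerically, assuming made-up speed and acceleration limits and a one-dimensional encounter (nothing here reflects real air traffic control software): if even the worst-case closure allowed by the envelope leaves the separation positive, the safety claim holds for every real trajectory inside the envelope.

# Worst-case separation check under a conservative acceleration envelope.
def min_separation(x1, v1, x2, v2, a_max, horizon, dt=0.1):
    """Lower bound on the distance (m) between two aircraft over `horizon`
    seconds, assuming each may accelerate by at most a_max toward the other.
    A positive result means the envelope guarantees no conflict."""
    gap = abs(x1 - x2)
    worst = gap
    t = 0.0
    while t <= horizon:
        # Worst case: both close the gap as fast as the envelope allows.
        closing = (abs(v1) + abs(v2)) * t + a_max * t * t   # two times (1/2) a t^2
        worst = min(worst, gap - closing)
        t += dt
    return worst

sep = min_separation(x1=0.0, v1=250.0, x2=20_000.0, v2=-250.0,
                     a_max=5.0, horizon=10.0)
print("worst-case separation over 10 s: %.0f m" % sep)  # positive => no conflict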

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Late in the afternoon of May 11, 1997, in front of the cameras of a small television studio 35 floors up a Manhattan skyscraper, Garry Kasparov sat down at a chessboard.

The boisterous and temperamental World Chess Champion had never lost an official match, but entering the sixth and final game he was tied with his opponent at two games each (the third game having ended in a draw).

To the reporters present and chess fans following the broadcast in a nearby auditorium, as well as those viewing it live around the world, Kasparov’s frustration was evident as he sighed and held his head in his hands.

During his NPR interview Greengard voiced the question that stumped Kasparov: “How could something play like God, then play like an idiot in the same game?” Long after Deep Blue was retired, Murray Campbell, one of the computer’s engineers, revealed that the illogical sacrifice was neither human interference nor artificial ingenuity.

Engineers packed away their slide rules with the arrival of the punch card–driven IBM 604 electronic calculator, capable of addition, subtraction, multiplication, and division, in 1951.

Similarly, a human chess player might rely on past experience with a certain opponent or a gut feeling to decide on a play, but a computer program like Deep Blue must simulate millions of possible moves and their ripple effects before making a move.

In fact, chess provided one of the first opportunities to show that a task is easier to complete, and potentially more fulfilling, when AI and humans work together rather than in competition.

Within a year he had a flash of insight: if people used computers while playing chess, they could play at a high level without worrying about memorization and small mistakes.

The chess world called them “centaurs” and, more recently, “cyborgs.” The idea of manufactured intelligence has existed for centuries, from Greek automatons to Frankenstein, but in the decades following World War II a collection of academics, philosophers, and scientists set about the task of actually creating an artificial mind.

At the time, many psychologists measured intelligence by focusing on specific skills independently, such as the ability to solve math problems and the ability to navigate social situations.

If we apply Spearman’s definition of general intelligence, a single underlying capacity that supports performance across many different tasks, to Deep Blue and other computer programs that have only one skill, such as playing chess or guessing passwords using brute force, those programs can never be considered intelligent.

In 2009, while giving a lecture at Xiamen University in China, Goertzel declared that general intelligence is a type of behavior, one whose only requirement is “achieving complex goals in complex environments.” As he sees it, intelligence should be measured by a thing’s ability to perform a task, be that thing natural or artificial.

Although the flowchart analogy vastly simplifies how a computer calculates (a series of switches turning on and off), in many ways it echoes how some have imagined the function of the human brain at the most basic level.

If that is the case, it is easy to conclude that all we need to achieve artificial consciousness is to scale up the complexity of a computer until it is on par with the tangle of gray matter in our heads, just as Turing’s flowchart can be scaled up to a program that can beat a grandmaster.

The emerging consensus, supported by Goertzel and other researchers and put forth in the widely used textbook Artificial Intelligence: A Modern Approach, divides AI into three levels, each one paving the way for the next.

You’ll find sections labeled “Inspired by your shopping trends” and “Recommendation for you.” You probably haven’t searched for any of the items suggested, but they eerily reflect your purchasing habits on Amazon.

The theory goes that once software is slightly more intelligent than us, it will be able to exponentially improve itself until it is infinitely more intelligent than us, an event dubbed the singularity by computer scientist and futurist Ray Kurzweil.

The prospect of a singularity scares people and has led to an abundance of funding for philosophers and scientists dedicated to figuring out how to ensure that when ASI arrives, it comes with a sense of ethics.

While the dream (and nightmare) of a superior AI persists, most of the work in the field for the past 40 years has focused on refining ANI and better incorporating it into the human realm, taking advantage of what computers do well (sorting through massive amounts of data) and combining it with what humans are good at (using experience and intuition).

And to understand how those early expectations of AGI transformed into ANI, we have to go back to the spring of 1956, when a mathematics professor with a bold plan organized a conference in the White Mountains of New Hampshire: during two months in the summer the professor hoped to replicate, at least in theory, human intelligence in a machine.

Even so, from behind his thick, horn-rimmed glasses and bushy beard he saw the potential of combining the knowledge of those working on neural networks, robotics, and programming languages.

To entice these young researchers to join him for his unprecedented summer research project (and to generate funding), he included a flashy new term at the top of his proposal: artificial intelligence.

The name implied that human consciousness could be defined and replicated in a computer program, and it replaced the vague automata studies that McCarthy previously used (with little success) to define the field.

The conference proposal outlined his plan for a 10-person team that would work over the summer on the “conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Such a goal seemed achievable in the wave of American postwar exuberance, a time when humans wielded the power of the atom and seemed destined to control even the weather.

McCarthy believed that in the wake of such breakthroughs a descriptive model of the human brain would quickly lead to what would become known as AGI, just as Niels Bohr’s model of the atom led to the atomic bomb.

However, when the program’s designers submitted one of their proofs to the Journal of Symbolic Logic, it was rejected because it proved the existing and well-known theorem that the angles opposite the equal sides of an isosceles triangle are also equal.

Despite the apparent success of the Logic Theorist, it was a small program that used brute force to solve a relatively simple problem, similar to how Deep Blue determined the best moves in its game against Kasparov.

Searle’s point is that a computer will always be the transcriber in that room, writing responses without understanding their meaning, and will therefore never be a conscious being, despite how it may appear to the computer’s user.

Strikingly, his conference proposal also included a section on creativity and imagination: McCarthy believed that some aspect of consciousness that generates hunches and educated guesses was programmable in a computer and that creativity was a necessary part of building AI.

The AI researcher Ben Goertzel used this concept in 2009 when he stated that AGI will someday be a type of intelligence that can complete complex goals in complex environments, no different from the intelligence we observe in humans and other animals.

The Dartmouth conference failed to produce an intelligent machine, but after the success of narrow programs similar to the Logic Theorist, universities, foundations, and governments bought into McCarthy’s optimism and money began rolling in.

The government saw plenty of potential for smart machines, whether to analyze masses of geological data for oil and coal exploration or to speed the search for new drugs.

Computer-science lore has it that when the biblical saying “the spirit is willing but the flesh is weak” was put through a Russian translation program, it returned with the Russian equivalent of “the vodka is good but the meat is rotten.” Since then, language processing has improved markedly, but today’s researchers face the same issues.

Human languages arise naturally and with irregular features, like a small town that is slowly built into a metropolis with winding, narrow roads instead of a logical grid.

Researchers at McCarthy’s Artificial Intelligence Project and elsewhere ran into another problem.

In order to survive what later became known as “AI winter” (a name meant to evoke the apocalyptic vision of a “nuclear winter”), computer scientists adopted the more conservative definition of intelligence as a type of narrow behavior—the approach championed by Searle.

Xiaoice, part of a phone application, listens to you vent, offers reassuring advice, and is simultaneously available to millions of people at a time, just as Samantha interacted with thousands of people in Her.

She records details from conversations with each user and weaves them in later, creating the illusion of memory, and keeps track of the positive responses she gets so she can serve up the best ones to other users.

(If you mention that you broke up with your significant other, for example, she will ask how you are holding up.) Essentially, Xiaoice is the transcriber in the Chinese room: thanks to the vast amount of data readily available on the web, she is able to carry on intimate and relatively believable conversations without understanding what they mean.

Communicating with a computer that appeared almost human made me uneasy, like navigating an automated menu over the phone where you must repeat yourself because the machine doesn’t understand that you just want to talk to a real person.

Goertzel sees two potential paths to AGI: either a major breakthrough in our understanding of intelligence brings about the virtual brain McCarthy once dreamed of, or computer scientists make individual programs for every task an intelligent being would ever need to do and then someone mushes them all together somehow.

Programs that play games well, like Deep Blue and more recently AlphaGo—a Google-made AI that defeated one of the best players in the world at Go, an ancient game many times more complex than chess—are symbols of progress in one tiny area.

Making a thinking machine

On Black Friday 2017, Amazon’s best-selling item was its Echo Dot, the voice-activated 'smart speaker' that, like similar devices, acts as a mini personal assistant for the digital age—always at the ready to read you a recipe, order pizza, call your mom, adjust your thermostat and much more.

Now, many experts believe that AI is on the cusp of joining the human world in ways that may have more profound—even life-and-death—consequences, such as in self-driving cars or in systems that could evaluate medical records and suggest diagnoses.

There are many things that humans do exceptionally well that computers can’t even begin to match, such as creative thinking, learning a new concept from just one example ('one-shot learning') and understanding the nuances of spoken language.

Indeed, the systems that have driven nearly all the recent progress in AI—known as deep neural networks—are inspired by the way that neurons connect in the brain and are related to the 'connectionist' way of thinking about human intelligence.

Connectionist theories essentially say that learning—human and artificial—is rooted in interconnected networks of simple units, either real neurons or artificial ones, that detect patterns in large amounts of data.
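A minimal sketch of that connectionist picture, assuming a deliberately tiny network and toy data: a handful of simple units whose connection weights are nudged, example by example, until the network detects the XOR pattern. Real deep networks differ mainly in scale and architecture.

# A tiny two-layer network learning the XOR pattern from data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # the XOR pattern

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)               # 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                                 # hidden activations
    out = sigmoid(h @ W2 + b2)                               # network output
    # Backpropagation: push each weight in the direction that reduces error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(np.round(out, 2))    # typically approaches [[0], [1], [1], [0]] after training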

But today, the enormous increase in computing power and the amount and type of data available to analyze have made deep neural networks increasingly powerful, useful and—with technology giants such as Google and Facebook leading the way—ubiquitous.

A deep neural network called AlphaGo, created by the Google-affiliated company DeepMind, analyzed millions of games of the complex board game Go to beat the human world champion in 2016, a feat long thought impossible.

Because neural networks are not programmed with explicit rules, and instead develop their own rules as they extract patterns from data, no one—not even the people who program them—can know exactly how they arrive at their conclusions.

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

Lake’s system, which he developed after studying hundreds of videos of how people write characters, instead proposes multiple series of pen strokes that are likely to produce the character shown.

Using an algorithm based on this method, his AI system was able to recognize characters from many different alphabets after seeing just one example of each and then produce new versions that were indistinguishable from human-drawn examples (Science).

People learn by asking questions, and while curiosity might seem like an abstract concept, Lake and his colleagues have grounded it by building an AI system that plays 'Battleship,' the game in which players locate their opponent’s battleship on a hidden board by asking questions.

Only certain questions are allowed in the original game, but Lake and his colleagues allowed human players to ask any open-ended questions that they wanted to, and then used those questions to build a model of the types of questions that elicit the most useful information.
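One way to make "the most useful information" precise, sketched here under toy assumptions (a one-dimensional six-cell board, a length-two ship, and yes/no questions about single cells), is to score each candidate question by its expected information gain: the expected drop in uncertainty over the remaining ship placements.

# Choosing the most informative yes/no question on a toy Battleship board.
import math

CELLS = range(6)
# Hypotheses: every placement of a length-2 ship on a 6-cell row.
hypotheses = [set(range(s, s + 2)) for s in range(5)]

def entropy(n):
    return math.log2(n) if n > 0 else 0.0   # uniform belief over n hypotheses

def info_gain(question):
    """question: predicate mapping a hypothesis (set of occupied cells) to True/False."""
    gain = entropy(len(hypotheses))
    for answer in (True, False):
        consistent = [h for h in hypotheses if question(h) == answer]
        if consistent:
            gain -= (len(consistent) / len(hypotheses)) * entropy(len(consistent))
    return gain

questions = {f"is cell {c} occupied?": (lambda h, c=c: c in h) for c in CELLS}
best = max(questions, key=lambda q: info_gain(questions[q]))
print(best, "gain =", round(info_gain(questions[best]), 2))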

'There’s some fixed contribution that comes from the literal meaning of the words, but actually uncovering the interpretation that the speaker intends is a complicated process of inference that invokes our knowledge about the world,' Goodman says.

Take the concept of hyperbole: When someone says, 'It cost a million dollars,' how do you decide whether they mean that the item literally cost a million dollars or only that it cost a lot of money?
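A minimal sketch of that inference, with entirely made-up probabilities: given rough priors over what things cost and how often speakers exaggerate, Bayes' rule yields the probability that "it cost a million dollars" was meant literally.

# Literal or hyperbole? A toy Bayesian inference with illustrative numbers.
prior = {"about $100": 0.50, "about $10,000": 0.45, "about $1,000,000": 0.05}

def p_utterance_given_price(price):
    # Chance the speaker says "a million dollars" at each true price:
    # literal if it really cost that much, hyperbole if it was merely expensive.
    return {"about $100": 0.01, "about $10,000": 0.30, "about $1,000,000": 0.95}[price]

evidence = sum(prior[p] * p_utterance_given_price(p) for p in prior)
posterior = {p: prior[p] * p_utterance_given_price(p) / evidence for p in prior}
for price, prob in posterior.items():
    print(f"P({price} | 'it cost a million dollars') = {prob:.2f}")
# Under these assumptions the literal reading gets only about 25 percent.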

Humans may be able to understand jokes and recognize pineapples after seeing just one example, but they do so with decades (or, in the case of children, months or years) of experience observing and learning about the world in general.

So connectionist-oriented AI researchers believe that if we want to build machines with truly flexible, humanlike intelligence, we will need to not only write algorithms that reflect human reasoning, but also understand how the brain develops those algorithms to begin with.

In one study, for example, they found that during mealtimes, 8- to 10-month-old babies look preferentially at a limited number of scenes and objects—their chair, utensils, food and more—in a way that may later help them learn their first words.

Smith is collaborating with machine learning researchers to try to understand more about how the structure of this kind of visual and other data—the order in which babies choose to take in the world—helps babies (and, eventually, machines) develop the mental models that will underlie learning throughout their lives.

When the solution to the problem was unexpected (more than one object was required to make the machine light up), then children were more likely than adults to arrive at the right answer, and younger children were better at it than older children were (PNAS).

Building models that reflect this and other unique aspects of how children learn could help AI researchers develop computers that capture some of children’s creativity, flexible thinking and learning ability, Gopnik says.

In fact, according to Matthew Botvinick, PhD, a cognitive scientist and the director of neuroscience research at DeepMind, AI systems are moving in the direction of deep neural networks that can build their own mental models of the sort that currently must be programmed in by humans.

Botvinick believes that we have a long way to go before we can sort out which threats are genuine and which are not, but he says that tech companies are beginning to take such safety issues and larger societal issues seriously.

As society ponders those questions, it’s also important to remember that the knowledge that psychologists and other AI researchers are gaining as they aim to build thinking machines is also helping us to better understand ourselves.

Turing test

The paper opens with the words: 'I propose to consider the question, 'Can machines think?'' Because 'thinking' is difficult to define, Turing chooses to 'replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.'[4] Turing's new question is: 'Are there imaginable digital computers which would do well in the imitation game?'[5] This question, Turing believed, is one that can actually be answered.

Researchers in the United Kingdom had been exploring 'machine intelligence' for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956.[14] It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing, after whom the test is named.[15] Turing, in particular, had been tackling the notion of machine intelligence since at least 1941,[16] and one of the earliest-known mentions of 'computer intelligence' was made by him in 1947.[17] In Turing's report 'Intelligent Machinery',[18] he investigated 'the question of whether or not it is possible for machinery to show intelligent behaviour'[19] and, as part of that investigation, proposed what may be considered the forerunner to his later tests: 'It is not difficult to devise a paper machine which will play a not very bad game of chess.[20] Now get three men as subjects for the experiment.'

Turing proposed replacing the question 'Can machines think?' with 'Can machines do what we (as thinking entities) can do?'[22] The advantage of the new question, Turing argues, is that it draws 'a fairly sharp line between the physical and intellectual capacities of a man.'[23] To demonstrate this approach Turing proposes a test inspired by a party game, known as the 'imitation game', in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back.

In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man.[26] Turing's paper considered nine putative objections, which include all the major arguments against artificial intelligence that have been raised in the years since the paper was published (see 'Computing Machinery and Intelligence').[6] In 1966, Joseph Weizenbaum created a program which appeared to pass the Turing test.

If a keyword is not found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments.[27] In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be 'free to assume the pose of knowing almost nothing of the real world.'[28] With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being 'very hard to convince that ELIZA [...] is not human.'[28] Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing test,[28][29] even though this view is highly contentious (see below).
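A toy sketch of that keyword mechanism (not Weizenbaum's actual script; the keywords and replies are invented): scan the input for a keyword, return its canned reply, and fall back on a generic riposte when nothing matches.

# An ELIZA-style keyword responder, reduced to its bare mechanism.
import random

RULES = {
    "mother":  "Tell me more about your family.",
    "always":  "Can you think of a specific example?",
    "sad":     "I am sorry to hear you are sad. Why do you feel that way?",
    "because": "Is that the real reason?",
}
GENERIC = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(utterance):
    words = utterance.lower().split()
    for keyword, reply in RULES.items():
        if keyword in words:
            return reply
    return random.choice(GENERIC)        # no keyword found: generic riposte

print(respond("I am sad because my mother ignores me"))
print(respond("The weather was fine today"))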

'CyberLover', a malware program, preys on Internet users by convincing them to 'reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers'.[33] The program has emerged as a 'Valentine-risk' flirting with people 'seeking relationships online in order to collect their personal data'.[34] John Searle's 1980 paper Minds, Brains, and Programs proposed the 'Chinese room' thought experiment and argued that the Turing test could not be used to determine if a machine can think.

Therefore, Searle concludes, the Turing test cannot prove that a machine can think.[35] Much like the Turing test itself, Searle's argument has been both widely criticised[36] and highly endorsed.[37] Arguments such as Searle's and others working on the philosophy of mind sparked off a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.[38] The Loebner Prize provides an annual platform for practical Turing tests with the first competition held in November 1991.[39] It is underwritten by Hugh Loebner.

As Loebner described it, one reason the competition was created is to advance the state of AI research, at least in part, because no one had taken steps to implement the Turing test despite 40 years of discussing it.[40] The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing test and the value of pursuing it, in both the popular press[41] and academia.[42] The first contest was won by a mindless program with no identifiable intelligence that managed to fool naïve interrogators into making the wrong identification.

Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in 'Computing Machinery and Intelligence' and one that he describes as the 'Standard Interpretation'.[45] While there is some debate regarding whether the 'Standard Interpretation' is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent,[45] and their strengths and weaknesses are distinct.[46] Huma Shah points out that Turing himself was concerned with whether a machine could think and was providing a simple method to examine this: through human-machine question-answer sessions.[47] Shah argues there is one imitation game which Turing described could be practicalised in two different ways: a) one-to-one interrogator-machine test, and b) simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator.[24] Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalises naturally to all of human performance capacity, verbal as well as nonverbal (robotic).[48] Turing's original article describes a simple party game involving three players.

Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human.[7] While there is some dispute whether this interpretation was intended by Turing, Sterrett believes that it was[49] and thus conflates the second version with this one, while others, such as Traiger, do not[45] – this has nevertheless led to what can be viewed as the 'standard interpretation.'

The general structure of the OIG test could even be used with non-verbal versions of imitation games.[51] Still other writers[52] have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test that he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that imitation game, rather than a capacity to succeed at one round of the game.

To return to the original imitation game, he states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement.[23] When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation.[55] As Ayse Saygin, Peter Swirski,[56] and others have highlighted, this makes a big difference to the implementation and outcome of the test.[7] In an experimental study looking at Gricean maxim violations, using transcripts of Loebner's one-to-one (interrogator-hidden interlocutor) Prize for AI contests between 1994 and 1999, Ayse Saygin found significant differences between the responses of participants who knew and did not know about computers being involved.[57] The power and appeal of the Turing test derives from its simplicity.

The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility—both of which qualities are on display in this snippet of dialogue which Turing has imagined: When Turing does introduce some specialised knowledge into one of his imagined dialogues, the subject is not maths or electronics, but poetry. Turing thus once again demonstrates his interest in empathy and aesthetic sensitivity as components of an artificial intelligence.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term 'average interrogator': '[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning'.[69] Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings.

Nonetheless, some of these experts have been deceived by the machines.[70] Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: They talk to their cars, ascribe desire and intentions to natural forces (e.g., 'nature abhors a vacuum'), and worship the sun as a human-like being with intelligence.

If a machine 'takes the fifth' and remains silent, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess.[72] Even taking into account a parallel/hidden human as part of the test may not help the situation, as humans can often be misidentified as being a machine.[73] Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research.[43] Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: 'AI researchers have devoted little attention to passing the Turing test.'[74] There are several reasons.

Turing wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence.[75] John McCarthy observes that the philosophy of AI is 'unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.'[76] Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science.

Software that could reverse CAPTCHA with some accuracy by analysing patterns in the generating engine started being developed soon after the creation of CAPTCHA.[80] In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo!, and PayPal up to 90% of the time.[81] In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy.[82] In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud.[83] Another variation is described as the subject matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field.

A related approach to Hutter's prize, which appeared much earlier in the late 1990s, is the inclusion of compression problems in an extended Turing test,[90] or tests which are completely derived from Kolmogorov complexity.[91] Other related tests in this line are presented by Hernandez-Orallo and Dowe.[92] Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence.[93] Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

Turing predicted that machines would eventually be able to pass the test; in fact, he estimated that by the year 2000, machines with around 100 MB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase 'thinking machine' contradictory.[4] (In practice, from 2009 to 2012, the Loebner Prize chatterbot contestants only managed to fool a judge once,[95] and that was only due to the human contestant pretending to be a chatbot.[96]) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.[69] In a 2008 paper submitted to the 19th Midwest Artificial Intelligence and Cognitive Science Conference, Dr. Shane T.

This essay examines the underlying principles of artificial intelligence and argues that as now conceived it is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy.

Any thought of resisting this inevitable evolution is just a form of "speciesism," born from a romantic and irrational attachment to the peculiarities of the human organism.

But its shortcomings are far more mundane: we have not yet been able to construct a machine with even a modicum of common sense or one that can converse on everyday topics in ordinary language.

I will argue that artificial intelligence as now conceived is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy in its rigidity, obtuseness, and inability to adapt to changing circumstances.

In their quest for mechanical explanations of (or substitutes for) human reason, researchers in artificial intelligence are heirs to a long tradition.

Although Descartes himself did not believe that reason could be achieved through mechanical devices, his understanding laid the groundwork for the symbol-processing machines of the modern age.

The ideal is that of Euclidean geometry, in which a small set of clear and self-evident postulates provides a basis for generating the right answers.

This label subsumes the varied (and at times hotly opposed) inheritors of Descartes' legacy: those who seek to achieve rational reason through a precise method of symbolic calculation.

Researchers in operations research and decision theory addressed policy questions by developing complex mathematical models of social and political systems and calculating the results of proposed alternatives.

Although there are still attempts to quantify matters of social import (for example in applying mathematical risk analysis to decisions about nuclear power), there is an overall disillusionment with the potential

Leibniz's "Let us calculate" is taken in Hobbes's broader sense to include not just numbers but also "affirmations" and "syllogisms." Attempts to duplicate formal non-numerical reasoning on a machine date back to the earliest computers, but the endeavor began in earnest with the artificial intelligence (AI) projects of the mid-1950s. The goals were ambitious: to fully duplicate the human capacities of thought and language on a digital computer.

Early claims that a complete theory of intelligence would be achieved within a few decades have long since been abandoned, but the reach has not diminished.

For example, a recent book by Minsky (one of the founders of AI) offers computational models for phenomena as diverse as conflict, pain and pleasure, the self, the soul, consciousness, confusion, genius, infant emotion, foreign accents, and freedom of will.

On the one hand is the quest to explain human mental processes as thoroughly and unambiguously as physics explains the functioning of ordinary mechanical devices.

On the other hand is the drive to create intelligent tools: machines that apply intelligence to serve some purpose, regardless of how closely they mimic the details of human intelligence.

Researchers such as Newell and Simon (two other founding fathers of artificial intelligence) have sought precise and scientifically testable theories of more modest scope than Minsky suggests.

In reducing the study of mind to the formulation of rule-governed operations on symbol systems, they focus on detailed aspects of cognitive functioning, using empirical measures such as memory capacity and reaction time.

A new term, "knowledge engineering," was coined to indicate a shift to the pragmatic interests of the engineer, rather than the scientist's search for theoretical knowledge.

These systems do not attempt to explain human intelligence in detail, but are justified in terms of their practical applications, for which extravagant claims have been made.

At least one high-performance medical diagnosis program sits unused because the physicians it was designed to assist didn't perceive that they needed such assistance;

The high hopes and ambitious aspirations of knowledge engineering are well documented, and the claims are often taken at face value, even in serious intellectual discussions.

Although such systems illustrate specific potentials, the successes are still isolated pinnacles in a landscape of research prototypes and feasibility studies.

Artificial intelligence draws its appeal from the same ideas of mechanized reasoning that attracted Descartes, Leibniz and Hobbes, but it differs from the more classical forms of rationalism in a critical way.

The new patchwork rationalism is built upon mounds of "micro-truths" gleaned through common sense introspection, ad hoc programming and so-called "knowledge acquisition" techniques for interviewing experts.

Intelligence, on this view, is an "accumulation of different, useful ways to chain things together." In the days before computing, "ways to chain things together" would have remained a vague metaphor.

It is easy to build a program to which we enter "Most birds can fly" and "Tweety is a bird" and which then produces "Tweety can fly" according to a simple rule of inference.
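A minimal sketch of that kind of chaining, with the facts and the single rule invented for illustration: a forward-chaining loop applies the rule to whatever facts it has until nothing new can be derived. The brittleness discussed next shows up as soon as an exception (a penguin, say) has to be accommodated.

# Forward chaining over toy facts and one rule: bird(X) => can_fly(X).
facts = {("bird", "Tweety")}
rules = [(("bird", "?x"), ("can_fly", "?x"))]    # if ?x is a bird, ?x can fly

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, _var), (new_pred, _) in rules:
            for fact in list(derived):
                if fact[0] == pred:
                    conclusion = (new_pred, fact[1])
                    if conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
    return derived

print(forward_chain(facts, rules))   # includes ('can_fly', 'Tweety')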

Minsky places the blame for lack of success in explaining ordinary reasoning on the rigidity of logic, and does not raise the more fundamental questions about the nature of all symbolic representations and of formal (though possibly "non-logical") systems of rules for manipulating them.

Before elaborating on the problems, let us first review some assumptions on which this work proceeds. The fundamental principle is the identification of intelligence with the functioning of a rule-governed symbol-manipulating device.

This "physical symbol system hypothesis" presupposes materialism: the claim that all of the observed properties of intelligent beings can ultimately be explained in terms of lawful physical processes.

It adds the claim that these processes can be described at a level of abstraction in which all relevant aspects of physical state can be understood as the encoding of symbol structures.

Newell and Simon's physical symbol systems aspire not to an idealized rationality, but to "behavior appropriate to the ends of the system and adaptive to the demands of the environment." This shift reflects the formulation that won Simon a Nobel Prize in economics.

He replaced decision theories based on optimization with a theory of "satisficing": effectively using finite decision-making resources to come up with adequate, but not necessarily optimal, plans of action.

The "problem space" is a formal structure that can be thought of as enumerating the results of all possible sequences of actions that might be taken by the program.

The number of possibilities grows exponentially with the number of moves, and is beyond practical reach after a small number.

However, one can limit search in this space by following heuristics that operate on the basis of local cues ("If one of your pieces could be taken on the opponent's next move, try moving it....").

A lawyer will have many questions about whether a plaintiff was "negligent," but for the program it is a simple matter of whether a certain symbolic expression of the form "Negligent(x)" appears in the store of representations, or whether there is a rule of the form "If ...."

There has been a great deal of technical debate over the detailed form of rules, but two principles are taken for granted in essentially all of the work. For example, there may be cases in which the "sulfate ion test is positive" even though the spill is not sulfuric acid.

The question is not whether each of the rules is true, but whether the output of the program as a whole is "appropriate." The knowledge engineers hope that by devising and tuning such rules they can capture more than the deductive logic of the domain:

the rules of thumb, the hunches, the intuition and capacity for judgement that are seldom explicitly laid down but which form the basis of an expert's skill, acquired over a lifetime's experience.
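One technique classic expert systems used for this kind of tuning, sketched here with made-up rules and weights rather than anything described in the essay, is to attach a certainty factor to each rule of thumb and combine the factors of whichever rules fire.

# Toy certainty-factor rules for a hypothetical chemical-spill assessor.
RULES = [
    # (condition over observed findings, conclusion, certainty factor)
    (lambda f: f.get("sulfate_ion_test") == "positive", "sulfuric_acid", 0.6),
    (lambda f: f.get("ph", 7.0) < 2.0,                  "sulfuric_acid", 0.5),
    (lambda f: f.get("oily_sheen") is True,             "sulfuric_acid", -0.4),
]

def combine(cf_old, cf_new):
    # Classic combination: positive evidence accumulates but never exceeds 1.0.
    if cf_old >= 0 and cf_new >= 0:
        return cf_old + cf_new * (1 - cf_old)
    return cf_old + cf_new          # simplified handling of negative evidence

def assess(findings):
    cf = 0.0
    for condition, _conclusion, weight in RULES:
        if condition(findings):
            cf = combine(cf, weight)
    return cf

print(assess({"sulfate_ion_test": "positive", "ph": 1.5}))   # about 0.8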

This ad hoc nature of the logic applies equally to the cognitive models of Newell and Simon, in which a large collection of separate "production rules" operates on symbolic structures.

The symbols don't stand for chemical spills and law, but for hypothesized psychological features, such as the symbolic contents of short-term memory.

The cognitive modeler does not build an overall model of the system's performance on a task, but designs the individual rules in hopes that appropriate behavior will emerge from their interaction.

Minsky illustrates his view in a simple "micro-world" of toy blocks, populated by agents such as BUILDER (which stacks up the blocks) and ADD (which adds a single block to a stack).

It takes an almost childish leap of faith to assume that the modes of explanation that work for the details of block manipulation will be adequate for understanding conflict, consciousness, genius, and freedom of will.

In looking at the development of computer technology, one cannot help but be struck by the successes at reducing complex and varied tasks to systematic combinations of elementary operations.

If minds are such systems, the reduction must be possible, and only our current lack of knowledge prevents us from explicating it in detail, all the way from BUILDER's clever ploy down to the logical circuitry.

All of the approaches described above depend on interactions among large numbers of individual elements: rules, productions, or agents.

[emphasis in the original] This statement is typical of much writing on expert systems, both in the parochial perspective that inflates a homily into a "conceptual

They are hard at work on techniques of "knowledge acquisition" and see it as just a matter of sufficient money and effort: We have the opportunity at this moment to do a new version

The optimistic claims for artificial intelligence have far outstripped the achievements, both in the theoretical enterprise of cognitive modelling and in the practical enterprise of knowledge engineering.

It is all too easy to write a program that would produce that particular behavior, and all too hard to build one that covers a sufficiently general range to inspire confidence.

Newell and his colleagues' painstaking attention to the detailed architecture of production systems is an attempt to better constrain the computational model, in hopes that experiments can then test detailed predictions.

Proponents argue that the methods and theoretical foundations that are being applied to micro-behavior will eventually be extended and generalized to cover the full range of cognitive phenomena.

Governmental and other organizations are mounting serious efforts to build expert systems for tasks such as air traffic control, nuclear power plant operation and, most distressingly, the control of weapons systems.

An expert system for dealing with acid spills may not consider the possibility of rain leaking into the building, or of a power failure, or of any number of other circumstances its designers did not anticipate.

A human expert faced with a problem in such a circumstance falls back on common sense and a general background of knowledge.

To build such systems (computational or not) we must assume the possibility of reducing all forms of tacit knowledge (skills, intuition, and the like) to explicit facts and rules.

The breakdown may not even provide sharp criteria for knowing what to change, as with a chess program that is just failing to come up with good moves.

The problem is one of human understanding--- the ability of a person to understand how a new situation experienced in the world is related to an existing set of representations, and to possible modifications of those representations.

The breakdown comes not because a reactor operator falls asleep, but because a knowledge engineer didn't think of putting in a rule specifying how to handle a particular failure when the emergency system is undergoing its periodic test and the backup system is out of order.

The hope that a system based on patchwork rationalism will respond "appropriately" in such cases is just that: a hope, and one that can engender dangerous illusions of safety and security.

Consider a doctor who asks a nurse, "Is the patient eating?" If they are deciding whether to perform an examination, the request might be paraphrased "Is she eating at this moment?" If the patient is in the hospital for anorexia and the doctor is checking the efficacy of the treatment, it might be more like "Has the patient eaten some minimal amount in the past day?" If the patient has recently undergone surgery, it might mean "Has the patient taken any nutrition by mouth," and so on.

A medical expert system might have a rule of the form "IF Eating(x) THEN ...," which is to be applied only if the patient is eating, along with others of the form "IF ...."

Such approaches work for the cases that programmers anticipate, but of course are subject to the infinite regress of trying to decontextualize context.

One consequence of decontextualized representation is the difficulty of creating AI programs in any but the most carefully restricted domains, where almost all of the knowledge required to perform the task is special to that domain (i.e., little common sense knowledge is required).

Programs that attempt to deal with everyday human situations, such as those "involving friendship and adultery," proceed by replacing the real situation with a cartoon-like caricature, governed by simplistic rules whose inadequacy is immediately obvious (even to the creators, who argue that they simply need further elaboration).

This is of concern not only when actions are based directly on the output of the computer system (as in one controlling weapons systems), but also when, for example, medical expert systems are used to evaluate the work of physicians. Since the system is based on a reduced representation of the situation, it systematically (if invisibly) values some aspects of care while remaining blind to others.

Every representation bears within it a background of cultural orientation that does not appear as explicit claims, but is manifest in the very terms in which the "facts" are expressed and in the judgment of what constitutes a relevant fact.

"Just as scientific management found its idealization in automation and programmable production robots, one might consider an artificially intelligent knowledge-based system as the ideal bureaucrat..." Lee's stated goal is "improved bureaucratic administration."

But in his classic work on bureaucracy, Weber argued its great advantages over earlier, less formalized systems, calling it the "unambiguous yardstick for the modernization of the state." He notes that "bureaucracy has a 'rational' character, with rules, means-ends calculus, and matter-of-factness predominating," and that it succeeds in "eliminating from official business love, hatred, and all purely personal, irrational, and emotional elements which escape calculation."

Precision, speed, unambiguity, knowledge of the files, continuity, discretion, unity, strict subordination, reduction of friction and of material and personal costs: these are raised to the optimum point in the strictly bureaucratic administration.

There are striking similarities here with the arguments given for the benefits of expert systems, and equally striking analogies with the shortcomings as pointed out, for example, by March and Simon: The reduction in personalized relationships, the increased

[emphasis in original] Given Simon's role in artificial intelligence, it is ironic that he notes these weaknesses of human-embodied rule systems, but sees the behavior of rule-based physical symbol systems as "adaptive to the demands of the environment." Indeed, systems based on symbol manipulation exhibit the rigidities of bureaucracies, and are most problematic in dealing with "client satisfaction": the mismatch between the decontextualized application of rules and the human interpretation of the symbols that appear in them.

Michie's claim that expert systems can encode "the rules of thumb, the hunches, the intuition and capacity for judgement..." is wrong in the same way that it is wrong to seek a full account of an organization in its formal rules and procedures.

We have seen how this question has been reformulated in the pursuit of artificial intelligence, to reflect a particular design based on patchwork rationalism.

Humans are (in Minsky's provocative words) nothing but "meat machines." If we take "machine" to stand for any physically constituted device subject to the causal laws of nature, then the question reduces to one of materialism, and is not to be resolved through computer research.

These ideas have been rehabilitated in "connectionist" theories, based on "massively parallel distributed processing." In this work, each computing element (analogous to a neuron) operates on simple general principles, and intelligence emerges from the interactions among large numbers of these elements.

Connectionism is one manifestation of what Turkle calls "emergent AI." The fundamental intuition guiding this work is that cognitive structure in organisms emerges through learning and experience, not through explicit programming.

It is not yet clear whether we will see a turn back towards the heritage of cybernetics or simply a "massively parallel" variant of current approaches.

Although connectionism may breathe new life into cognitive modelling research, it suffers from an uneasy balance between symbolic and physiological description.

Connectionism, like its parent cognitive theory, must be placed in the category of brash unproved hypotheses, which have not really begun to deal with the complexities of mind, and whose current explanatory power is extremely limited.

We need not channel our hopes into encoding "a sufficient part of the world's knowledge" or into a quest for the philosopher's stone of "massively parallel processing." Discussions of the problems and dangers of computers often leave the impression that on the whole we would be better off if we could return to the pre-computer era.

The very notion of "symbol system" is inherently linguistic, and what we duplicate in our programs with their rules and propositions is really a form of verbal argument, not the workings of mind.

This grounding is especially evident for statements of the kind that Roszak characterizes as "ideas" rather than "information." "All men are created equal" cannot be judged as a true or false description of the objective world.

But instead we can see the computer as a way of organizing, searching and manipulating texts that are created by people, in a context, and ultimately intended for human interpretation.

The medical diagnosis system described above is being converted from "Internist" (a doctor specializing in internal medicine) to an "advisory system" called "QMR" (for "Quick Medical Reference"). The rules can be thought of as constituting an automated textbook, which can access and logically combine entries relevant to a particular case.

In a similar vein, an interactive computer-based encyclopedia need not cover all of human knowledge or provide general purpose deduction in order to take advantage of the obvious computer capacities of speed, volume, and sophisticated inferential indexing.

A request, for example, initiates a sequence of speech acts in a logic of "conversation for action" oriented towards completion (a state in which neither party is awaiting further action by the other).

The theory of such conversations has been developed as the basis for a computer program called The Coordinator, which is used for facilitating and organizing computer-message conversations in an organization. It emphasizes the role of commitment by the speaker in each speech act and provides the basis for timely and effective action.

He argues that their use of computers while on field missions increases the _transparency_ of their decision-making process, hence increasing their accountability and enhancing opportunities for meaningful negotiation.

As a result, the dialogue between them [the bankers and their clients] suddenly becomes less about the final results (the numbers) and more about the assumptions behind the numbers and the criteria on which decisions are based.

In asking questions like "Are people machines?" we engage in a kind of projection: understanding humanity by projecting an image of ourselves onto the machine and the image of the machine back onto ourselves.

In the tradition of artificial intelligence, we project an image of our language activity onto the symbolic manipulations of the machine, then project that back onto the full human mind.

In projecting language as a rule-governed manipulation of symbols, we all too easily dismiss the concerns of human meaning that make up the humanities, and indeed of any socially grounded understanding of human language and action.

Some researchers, for example, are exploring new forms of logic that attempt to preserve the rigor of ordinary deduction while dealing with some of the properties of commonsense reasoning, as described in the papers in Bobrow (ed.), Special Issue on Nonmonotonic Logic.

Thinking Machines Corporation

The CM-1 and CM-2 first came in models with 64K (65,536) bit-serial processors (16 processors per chip), and later in smaller 16K and 4K configurations.

The CM-1 through CM-200 were examples of SIMD architecture (Single Instruction Multiple Data), while the later CM-5 and CM-5E were MIMD (Multiple Instructions Multiple Data) that combined commodity SPARC processors and proprietary vector processors in a 'fat tree' network.
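A rough illustration of that distinction in ordinary Python (nothing to do with actual Connection Machine code): a SIMD-style step applies one operation to every element of an array in lockstep, while a MIMD-style step lets independent workers run different instruction streams on different data.

# SIMD vs. MIMD, illustrated with an array operation and two independent workers.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.arange(16)

# SIMD-style: a single instruction ("multiply by 2, add 1") applied to every element.
simd_result = data * 2 + 1

# MIMD-style: separate workers each run a different function on their own slice.
def double(chunk):
    return [x * 2 for x in chunk]

def square(chunk):
    return [x ** 2 for x in chunk]

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(double, data[:8].tolist()),
               pool.submit(square, data[8:].tolist())]
    mimd_result = [f.result() for f in futures]

print(simd_result.tolist())
print(mimd_result)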

Thinking Machines also introduced an early commercial RAID2 disk array, the DataVault, circa 1988.[3] In May 1985, Thinking Machines became the third company to register a .com domain name (think.com).

It became profitable in 1989, in part because of its DARPA contracts.[4] The following year, the company sold $65 million (USD) worth of hardware and software, making it the market leader in parallel supercomputers.

The hardware portion of the company was purchased by Sun Microsystems, and TMC re-emerged as a small software company specializing in parallel software tools for commodity clusters and data mining software for its installed base and former competitors' parallel supercomputers.

DARPA's Connection Machines were decommissioned by 1996.[5] In the 1993 film Jurassic Park, Connection Machines (non-functioning dummies) are visible in the park's control room, scientist Dennis Nedry mentions 'eight Connection Machines'[6] and a video about dinosaur cloning mentions 'Thinking Machines supercomputers'.

In the 1996 film Mission: Impossible, Luther Stickell asks Franz Krieger for 'Thinking Machine laptops' to help hack into the CIA's Langley supercomputer.[7] Tom Clancy's novel Rainbow Six speaks of the NSA's 'star machine from a company gone bankrupt, the Super-Connector from Thinking Machines, Inc., of Cambridge, Massachusetts' in the NSA's basement.

In addition, The Bear and the Dragon says the National Security Agency could crack nearly any book or cipher with one of three custom operating systems designed for a Thinking Machines supercomputer.

The Thinking Machine (Artificial Intelligence in the 1960s)

Can machines really think? Here is a series of interviews with some of the AI pioneers, including Jerome Wiesner, Oliver Selfridge, and Claude Shannon, and a view of the future.

How Machines Learn

How do all the algorithms around us learn to do their jobs?

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34

So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions on this data? From spam filters and self-driving ...

11. Introduction to Machine Learning

MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: Eric Grimson.

How Do You Programme Intelligence? - Horizon: The Hunt for AI - BBC Two


Machine Intelligence: Stronger, Faster, Smarter?

What do we mean when we say a machine thinks? Computers and robots have long been able to crunch impossibly large numbers and execute complex, ...

The Rise of the Machines – Why Automation is Different this Time

Automation in the Information Age is different. Books used for this video: The Rise of the Robots and The Second Machine Age.

Thinking Machines Summit on Artificial Intelligence and Robotics, Day 2

Experts from some of the world's leading companies and research institutions are discussing ...

Hamming, "Artificial Intelligence - Part I" (April 7, 1995)

Intro: Today is the beginning of talking about artificial intelligence; it is a very different topic. I spent much of last night and today thinking about it, and talking to ...

Google's DeepMind AI Just Taught Itself To Walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...