AI News

This Artificial Intelligence Pioneer Has a Few Concerns

In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful.

“Our AI systems,” the letter states, “must do what we want them to do.” Thousands of people have since signed it, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs, along with top computer scientists, physicists and philosophers around the world.

By the end of March, about 300 research groups had applied to pursue new research into “keeping artificial intelligence beneficial” with funds contributed by the letter’s 37th signatory, the inventor-entrepreneur Elon Musk.

In a bombshell result reported recently in Nature, a simulated network of artificial neurons learned to play Atari video games better than humans in a matter of hours given only data representing the screen and the goal of increasing the score at the top—but no preprogrammed knowledge of aliens, bullets, left, right, up or down.

I think one answer is a technique called “inverse reinforcement learning.” Ordinary reinforcement learning is a process where you are given rewards and punishments as you behave, and your goal is to figure out the behavior that will get you the most rewards.
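For readers who want to see the plain-vanilla version in code, here is a minimal sketch of ordinary reinforcement learning in the tabular Q-learning style; the toy corridor environment, the reward of +1 for reaching the goal, and the learning parameters are invented for illustration and are not from the interview.

```python
import random

# A toy "corridor": states 0..4, reaching state 4 gives reward +1.
# Actions: 0 = left, 1 = right. This environment is purely illustrative.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Move left or right; reward 1 only for reaching the final state."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current value estimates, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# Learned greedy policy: move right everywhere along the corridor.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```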

For example, your domestic robot sees you crawl out of bed in the morning and grind up some brown round things in a very noisy machine and do some complicated thing with steam and hot water and milk and so on, and then you seem to be happy.
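Inverse reinforcement learning runs that logic backwards: infer what the human values from how the human behaves. The sketch below caricatures the coffee example with invented states and candidate "reward functions"; real inverse-RL algorithms are far more sophisticated, but the scoring idea is the same.

```python
# Observed morning routine (an invented toy trace, not real data).
observed = ["get_up", "grind_beans", "brew", "drink_coffee"]

# Candidate reward hypotheses: each lists the behavior that would be optimal
# if the human valued that outcome.
candidates = {
    "values_coffee": ["get_up", "grind_beans", "brew", "drink_coffee"],
    "values_sleep":  ["stay_in_bed"],
    "values_tea":    ["get_up", "boil_water", "steep_tea"],
}

def match_score(optimal_plan, trace):
    """Fraction of observed steps explained by the plan optimal for a candidate reward."""
    return sum(step in optimal_plan for step in trace) / len(trace)

# Inverse reinforcement learning, caricatured: pick the reward hypothesis whose
# optimal behavior best explains the behavior actually observed.
best = max(candidates, key=lambda c: match_score(candidates[c], observed))
print(best)  # -> "values_coffee"
```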

And then when I was applying to grad school I applied to do theoretical physics at Oxford and Cambridge, and I applied to do computer science at MIT, Carnegie Mellon and Stanford, not realizing that I’d missed all the deadlines for applications to the U.S. Fortunately Stanford waived the deadline, so I went to Stanford.

Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
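Roughly the same "look ahead, then guess" idea can be written down as depth-limited minimax with a heuristic evaluation. The sketch below uses a toy take-away game (a form of Nim) rather than chess, and the heuristic is invented for the example; chess engines apply the same pattern at enormously larger scale.

```python
# Toy game (simple Nim): a pile of stones, each move removes 1-3, taking the
# last stone wins. Search a few plies ahead, then guess how good the
# resulting positions are.

def legal_moves(pile):
    return [n for n in (1, 2, 3) if n <= pile]

def evaluate(pile, maximizing):
    """Heuristic guess about a non-terminal position (from the maximizer's view)."""
    good_for_mover = pile % 4 != 0
    return 1.0 if good_for_mover == maximizing else -1.0

def minimax(pile, depth, maximizing):
    if pile == 0:                  # previous player took the last stone and won
        return -1.0 if maximizing else 1.0
    if depth == 0:                 # out of lookahead: fall back on the heuristic
        return evaluate(pile, maximizing)
    values = [minimax(pile - m, depth - 1, not maximizing) for m in legal_moves(pile)]
    return max(values) if maximizing else min(values)

def choose_move(pile, depth=4):
    """Pick the move leading to the position the shallow search likes best."""
    return max(legal_moves(pile), key=lambda m: minimax(pile - m, depth - 1, False))

print(choose_move(10))  # with 10 stones, taking 2 leaves 8, a losing position for the opponent
```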

One thing that’s really essential is to think about the decision problem at multiple levels of abstraction, what is called “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime.

The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.” Are computers currently capable of hierarchical decision making?

There are some games where DQN just doesn’t get it, and the games that are difficult are the ones that require thinking many, many steps ahead in the primitive representations of actions—ones where a person would think, “Oh, what I need to do now is unlock the door,” and unlocking the door involves fetching the key, etcetera.
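The hierarchy Russell describes can be pictured as abstract actions expanding into sub-actions until only primitive steps remain. The sketch below uses an invented task table; it illustrates the representation, not any particular planning algorithm.

```python
# Invented task hierarchy: abstract actions expand into sub-actions until only
# primitive motor commands remain. An agent planning only in the primitives
# has to discover sequences like this from scratch.
HIERARCHY = {
    "enter_room":  ["unlock_door", "open_door", "walk_through"],
    "unlock_door": ["fetch_key", "insert_key", "turn_key"],
    "fetch_key":   ["walk_to_table", "grasp_key"],
}

def flatten(action):
    """Recursively expand an abstract action into its primitive steps."""
    if action not in HIERARCHY:        # no decomposition known: treat as primitive
        return [action]
    steps = []
    for sub in HIERARCHY[action]:
        steps.extend(flatten(sub))
    return steps

print(flatten("enter_room"))
# ['walk_to_table', 'grasp_key', 'insert_key', 'turn_key', 'open_door', 'walk_through']
```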

The basic idea of the intelligence explosion is that once machines reach a certain level of intelligence, they’ll be able to work on AI just like we do and improve their own capabilities—redesign their own hardware and so on—and their intelligence will zoom off the charts.

The most convincing argument has to do with value alignment: You build a system that’s extremely good at optimizing some utility function, but the utility function isn’t quite right.

A domestic robot needs a reasonably good idea of human values; otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry.

If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it—that it’s just missing some chunk of what’s obvious to humans—then you’re not going to want that thing in your house.

Then there’s the question, if we get it right such that some intelligent systems behave themselves, as you make the transition to more and more intelligent systems, does that mean you have to get better and better value functions that clean up all the loose ends, or do they still continue behaving themselves?

With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide.

What you would do is write a very conservative mathematical description of the physical world—airplanes can accelerate within such-and-such envelope—and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
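A toy version of that envelope reasoning: bound the worst case the assumptions allow and check that even it satisfies the property. Every number below (speeds, accelerations, the roughly 5-nautical-mile figure used as the safety threshold) is illustrative, not a real air-traffic-control rule set.

```python
# A minimal sketch of reasoning over a conservative envelope (all numbers invented):
# instead of predicting exactly where an aircraft will be, bound where it *could* be
# given an assumed acceleration envelope, and check that even the worst case is safe.

def worst_case_gap(initial_gap_m, closing_speed_mps, max_accel_mps2, horizon_s, dt=0.1):
    """Smallest separation that can occur within the horizon if the closing speed
    increases as fast as the envelope allows."""
    gap, speed, t = initial_gap_m, closing_speed_mps, 0.0
    min_gap = gap
    while t < horizon_s:
        speed += max_accel_mps2 * dt   # adversarial: always accelerate toward each other
        gap -= speed * dt
        min_gap = min(min_gap, gap)
        t += dt
    return min_gap

SAFE_SEPARATION_M = 9_260   # roughly 5 nautical miles, used here as an illustrative threshold

gap = worst_case_gap(initial_gap_m=20_000, closing_speed_mps=50.0,
                     max_accel_mps2=2.0, horizon_s=60.0)
print(gap, "safe" if gap >= SAFE_SEPARATION_M else "potential conflict")
```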

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Late in the afternoon of May 11, 1997, in front of the cameras of a small television studio 35 floors up a Manhattan skyscraper, Garry Kasparov sat down at a chessboard.

The boisterous and temperamental World Chess Champion had never lost an official match, but entering the sixth and final game he was tied with his opponent at two games each (the third game having ended in a draw).

To the reporters present and chess fans following the broadcast in a nearby auditorium, as well as those viewing it live around the world, Kasparov’s frustration was evident as he sighed and held his head in his hands.

During his NPR interview Greengard voiced the question that stumped Kasparov: “How could something play like God, then play like an idiot in the same game?” Long after Deep Blue was retired, Murray Campbell, one of the computer’s engineers, revealed that the illogical sacrifice was neither human interference nor artificial ingenuity.

Engineers packed away their slide rules with the arrival of the punch card–driven IBM 604 electronic calculator, capable of addition, subtraction, multiplication, and division, in 1951.

Similarly, a human chess player might rely on past experience with a certain opponent or a gut feeling to decide on a play, but a computer program like Deep Blue must simulate millions of possible moves and their ripple effects before making a move.

In fact, chess provided one of the first opportunities to show that a task is easier to complete, and potentially more fulfilling, when AI and humans work together rather than in competition.

Within a year he had a flash of insight: if people used computers while playing chess, they could play at a high level without worrying about memorization and small mistakes.

The chess world called them “centaurs” and, more recently, “cyborgs.” The idea of manufactured intelligence has existed for centuries, from Greek automatons to Frankenstein, but in the decades following World War II a collection of academics, philosophers, and scientists set about the task of actually creating an artificial mind.

At the time, many psychologists measured intelligence by focusing on specific skills independently, such as the ability to solve math problems and the ability to navigate social situations.

If we apply Spearman’s definition to Deep Blue and other computer programs that have only one skill, such as playing chess or guessing passwords using brute force, those programs can never be considered intelligent.

In 2009, while giving a lecture at Xiamen University in China, Goertzel declared that general intelligence is a type of behavior, one whose only requirement is “achieving complex goals in complex environments.” As he sees it, intelligence should be measured by a thing’s ability to perform a task, be that thing natural or artificial.

Although the flowchart analogy vastly simplifies how a computer calculates (a series of switches turning on and off), in many ways it echoes how some have imagined the function of the human brain at the most basic level.

If that is the case, it is easy to conclude that all we need to achieve artificial consciousness is to scale up the complexity of a computer until it is on par with the tangle of gray matter in our heads, just as Turing’s flowchart can be scaled up to a program that can beat a grandmaster.

The emerging consensus, supported by Goertzel and other researchers and put forth in the widely used textbook Artificial Intelligence: A Modern Approach, divides AI into three levels, each one paving the way for the next.

You’ll find sections labeled “Inspired by your shopping trends” and “Recommendation for you.” You probably haven’t searched for any of the items suggested, but they eerily reflect your purchasing habits on Amazon.

The theory goes that once software is slightly more intelligent than us, it will be able to exponentially improve itself until it is infinitely more intelligent than us, an event dubbed the singularity by computer scientist and futurist Ray Kurzweil.

The prospect of a singularity scares people and has led to an abundance of funding for philosophers and scientists dedicated to figuring out how to ensure that when ASI arrives, it comes with a sense of ethics.

While the dream (and nightmare) of a superior AI persists, most of the work in the field for the past 40 years has focused on refining ANI and better incorporating it into the human realm, taking advantage of what computers do well (sorting through massive amounts of data) and combining it with what humans are good at (using experience and intuition).

And to understand how those early expectations of AGI transformed into ANI, we have to go back to the spring of 1956, when a mathematics professor with a bold plan organized a conference in the White Mountains of New Hampshire: during two months in the summer the professor hoped to replicate, at least in theory, human intelligence in a machine.

Even so, from behind his thick, horn-rimmed glasses and bushy beard he saw the potential of combining the knowledge of those working on neural networks, robotics, and programming languages.

To entice these young researchers to join him for his unprecedented summer research project (and to generate funding), he included a flashy new term at the top of his proposal: artificial intelligence.

The name implied that human consciousness could be defined and replicated in a computer program, and it replaced the vaguer term “automata studies” that McCarthy had previously used (with little success) to define the field.

The conference proposal outlined his plan for a 10-person team that would work over the summer on the “conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Such a goal seemed achievable in the wave of American postwar exuberance, a time when humans wielded the power of the atom and seemed destined to control even the weather.

McCarthy believed that in the wake of such breakthroughs a descriptive model of the human brain would quickly lead to what would become known as AGI, just as Niels Bohr’s model of the atom led to the atomic bomb.

However, when the program’s designers submitted one of their proofs to the Journal of Symbolic Logic, it was rejected because it proved the existing and well-known theorem that the angles opposite the equal sides of an isosceles triangle are also equal.

Despite the apparent success of the Logic Theorist, it was a small program that used brute force to solve a relatively simple problem, similar to how Deep Blue determined the best moves in its game against Kasparov.

Searle’s point is that a computer will always be the transcriber in that room, writing responses without understanding their meaning, and will therefore never be a conscious being, despite how it may appear to the computer’s user.

Strikingly, his conference proposal also included a section on creativity and imagination: McCarthy believed that some aspect of consciousness that generates hunches and educated guesses was programmable in a computer and that creativity was a necessary part of building AI.

The AI researcher Ben Goertzel used this concept in 2009 when he stated that AGI will someday be a type of intelligence that can complete complex goals in complex environments, no different from the intelligence we observe in humans and other animals.

The Dartmouth conference failed to produce an intelligent machine, but after the success of narrow programs similar to the Logic Theorist, universities, foundations, and governments bought into McCarthy’s optimism and money began rolling in.

The government saw plenty of potential for smart machines, whether to analyze masses of geological data for oil and coal exploration or to speed the search for new drugs.

Computer-science lore has it that when the biblical saying “the spirit is willing but the flesh is weak” was put through a Russian translation program, it returned with the Russian equivalent of “the vodka is good but the meat is rotten.” Since then, language processing has improved markedly, but today’s researchers face the same issues.

Natural languages arise organically and with irregular features, like a small town that is slowly built into a metropolis with winding, narrow roads instead of a logical grid.

Researchers at McCarthy’s Artificial Intelligence Project and elsewhere ran into another problem.

In order to survive what later became known as “AI winter” (a name meant to evoke the apocalyptic vision of a “nuclear winter”), computer scientists adopted the more conservative definition of intelligence as a type of narrow behavior—the approach championed by Searle.

Xiaoice, part of a phone application, listens to you vent, offers reassuring advice, and is simultaneously available to millions of people at a time, just as Samantha interacted with thousands of people in Her.

She records details from conversations with each user and weaves them in later, creating the illusion of memory, and keeps track of the positive responses she gets so she can serve up the best ones to other users.

(If you mention that you broke up with your significant other, for example, she will ask how you are holding up.) Essentially, Xiaoice is the transcriber in the Chinese room: thanks to the vast amount of data readily available on the web, she is able to carry on intimate and relatively believable conversations without understanding what they mean.
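The ingredients described above can be caricatured in a few lines of code: remember details from earlier turns and rank canned replies by past positive feedback. This is a toy illustration of the general approach, not Xiaoice's actual implementation; the replies and feedback counts are invented.

```python
# Toy illustration (not Xiaoice's real design): remember details from earlier
# turns, and rank canned replies by how well they were received before.

memory = {}                                   # facts mentioned by this user
reply_scores = {                              # invented candidate replies and feedback counts
    "That sounds hard. How are you holding up?": 12,
    "Tell me more about that.": 7,
    "I'm sure it will work out.": 2,
}

def respond(user_text):
    text = user_text.lower()
    if "broke up" in text:
        memory["breakup"] = True              # store a detail to weave in later
    if memory.get("breakup") and "sad" in text:
        return "Is this about the breakup you mentioned?"
    # Otherwise serve the reply that has earned the most positive feedback so far.
    return max(reply_scores, key=reply_scores.get)

print(respond("I broke up with my partner yesterday."))
print(respond("I'm feeling pretty sad today."))
```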

Communicating with a computer that appeared almost human made me uneasy, like navigating an automated menu over the phone where you must repeat yourself because the machine doesn’t understand that you just want to talk to a real person.

Goertzel sees two potential paths to AGI: either a major breakthrough in our understanding of intelligence brings about the virtual brain McCarthy once dreamed of, or computer scientists make individual programs for every task an intelligent being would ever need to do and then someone mushes them all together somehow.

Programs that play games well, like Deep Blue and more recently AlphaGo—a Google-made AI that defeated one of the best players in the world at Go, an ancient game many times more complex than chess—are symbols of progress in one tiny area.

Making a thinking machine

On Black Friday 2017, Amazon’s best-selling item was its Echo Dot, the voice-activated 'smart speaker' that, like similar devices, acts as a mini personal assistant for the digital age—always at the ready to read you a recipe, order pizza, call your mom, adjust your thermostat and much more.

Now, many experts believe that AI is on the cusp of joining the human world in ways that may have more profound—even life-and-death—consequences, such as in self-driving cars or in systems that could evaluate medical records and suggest diagnoses.

There are many things that humans do exceptionally well that computers can’t even begin to match, such as creative thinking, learning a new concept from just one example ('one-shot learning') and understanding the nuances of spoken language.

Indeed, the systems that have driven nearly all the recent progress in AI—known as deep neural networks—are inspired by the way that neurons connect in the brain and are related to the 'connectionist' way of thinking about human intelligence.

Connectionist theories essentially say that learning—human and artificial—is rooted in interconnected networks of simple units, either real neurons or artificial ones, that detect patterns in large amounts of data.
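A minimal example of that connectionist idea, assuming nothing beyond NumPy: a tiny network of simple units learns the XOR pattern by repeatedly adjusting its connection weights. It is a textbook toy, not any particular lab's model.

```python
import numpy as np

# A tiny network of simple units learns XOR from examples by adjusting
# connection weights; the architecture and learning rate are arbitrary choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
LR = 1.0

for _ in range(10_000):
    # Forward pass: activity flows through the units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= LR * (h.T @ d_out);  b2 -= LR * d_out.sum(axis=0)
    W1 -= LR * (X.T @ d_h);    b1 -= LR * d_h.sum(axis=0)

print(out.round(2))   # typically close to [[0], [1], [1], [0]] after training
```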

But today, the enormous increase in computing power and the amount and type of data available to analyze have made deep neural networks increasingly powerful, useful and—with technology giants such as Google and Facebook leading the way—ubiquitous.

A deep neural network called AlphaGo, created by the Google-affiliated company DeepMind, analyzed millions of games of the complex board game Go to beat the human world champion in 2016, a feat long thought impossible.

Because neural networks are not programmed with explicit rules, and instead develop their own rules as they extract patterns from data, no one—not even the people who program them—can know exactly how they arrive at their conclusions.

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

Lake’s system, which he developed after studying hundreds of videos of how people write characters, instead proposes multiple series of pen strokes that are likely to produce the character shown.

Using an algorithm based on this method, his AI system was able to recognize characters from many different alphabets after seeing just one example of each and then produce new versions that were indistinguishable from human-drawn examples (Science, Vol.

People learn by asking questions, and while curiosity might seem like an abstract concept, Lake and his colleagues have grounded it by building an AI system that plays 'Battleship,' the game in which players locate their opponent’s battleship on a hidden board by asking questions.

Only certain questions are allowed in the original game, but Lake and his colleagues allowed human players to ask any open-ended questions that they wanted to, and then used those questions to build a model of the types of questions that elicit the most useful information.
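One way to make "the most useful information" concrete is expected information gain: how much, on average, a question's answer shrinks the space of remaining possibilities. The sketch below applies that measure to an invented one-dimensional board and made-up questions, not the study's actual materials.

```python
import math

# Hypotheses: possible locations of a length-2 ship on cells 0..5 (invented board).
hypotheses = [(start, start + 1) for start in range(5)]
prior = [1 / len(hypotheses)] * len(hypotheses)

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_info_gain(answer_fn):
    """Average entropy drop over the possible answers to a question."""
    h_prior, gain, answers = entropy(prior), 0.0, {}
    for hyp, p in zip(hypotheses, prior):
        answers.setdefault(answer_fn(hyp), []).append(p)
    for probs in answers.values():
        p_answer = sum(probs)
        posterior = [p / p_answer for p in probs]
        gain += p_answer * (h_prior - entropy(posterior))
    return gain

questions = {
    "Is any part of the ship on cell 2?": lambda hyp: hyp[0] <= 2 <= hyp[1],
    "Is the ship touching cell 0?":       lambda hyp: 0 in hyp,
    "Where does the ship start?":         lambda hyp: hyp[0],
}
for q, fn in questions.items():
    print(f"{q}  ->  {expected_info_gain(fn):.2f} bits")
```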

'There’s some fixed contribution that comes from the literal meaning of the words, but actually uncovering the interpretation that the speaker intends is a complicated process of inference that invokes our knowledge about the world,' Goodman says.

Take the concept of hyperbole: When someone says, 'It cost a million dollars,' how do you decide whether they mean that the item literally cost a million dollars or only that it cost a lot of money?
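Loosely in the spirit of the probabilistic models Goodman works with, the decision can be framed as Bayesian inference over what the speaker could have meant. Every number and hypothesis in the sketch below is invented for illustration; it is not Goodman's model.

```python
# A toy Bayesian listener for "It cost a million dollars" (all numbers invented).
# Hypotheses: the true price, plus whether the speaker is speaking literally.

priors = {            # how plausible each true price is before hearing anything
    50: 0.50, 500: 0.30, 10_000: 0.15, 1_000_000: 0.05,
}

def likelihood(utterance_price, true_price, literal):
    """How likely the speaker is to say 'a million' given the true price."""
    if literal:
        return 1.0 if true_price == utterance_price else 0.0
    # Hyperbolic speakers say "a million" to convey "expensive for its kind".
    return 0.8 if true_price >= 500 else 0.05

posterior = {}
for price, p_price in priors.items():
    for literal, p_literal in ((True, 0.3), (False, 0.7)):
        posterior[(price, literal)] = p_price * p_literal * likelihood(1_000_000, price, literal)

total = sum(posterior.values())
for (price, literal), p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    if p:
        print(f"price={price:>9,}  literal={literal}  P={p / total:.2f}")
# The top hypothesis is a moderately high price said non-literally: hyperbole.
```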

Humans may be able to understand jokes and recognize pineapples after seeing just one example, but they do so with decades (or, in the case of children, months or years) of experience observing and learning about the world in general.

So connectionist-oriented AI researchers believe that if we want to build machines with truly flexible, humanlike intelligence, we will need to not only write algorithms that reflect human reasoning, but also understand how the brain develops those algorithms to begin with.

In one study, for example, they found that during mealtimes, 8- to 10-month-old babies look preferentially at a limited number of scenes and objects—their chair, utensils, food and more—in a way that may later help them learn their first words.

Smith is collaborating with machine learning researchers to try to understand more about how the structure of this kind of visual and other data—the order in which babies choose to take in the world—helps babies (and, eventually, machines) develop the mental models that will underlie learning throughout their lives.

When the solution to the problem was unexpected (more than one object was required to make the machine light up), then children were more likely than adults to arrive at the right answer, and younger children were better at it than older children were (PNAS, Vol.

Building models that reflect this and other unique aspects of how children learn could help AI researchers develop computers that capture some of children’s creativity, flexible thinking and learning ability, Gopnik says.

In fact, according to Matthew Botvinick, PhD, a cognitive scientist and the director of neuroscience research at DeepMind, AI systems are moving in the direction of deep neural networks that can build their own mental models of the sort that currently must be programmed in by humans.

Botvinick believes that we have a long way to go before we can sort out which threats are genuine and which are not, but he says that tech companies are beginning to take such safety issues and larger societal issues seriously.

As society ponders those questions, it’s also important to remember that the knowledge that psychologists and other AI researchers are gaining as they aim to build thinking machines is also helping us to better understand ourselves.

Turing test

The test was introduced in Turing's 1950 paper 'Computing Machinery and Intelligence'.[3] It opens with the words: 'I propose to consider the question, "Can machines think?"' Because 'thinking' is difficult to define, Turing chooses to 'replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.'[4] Turing's new question is: 'Are there imaginable digital computers which would do well in the imitation game?'[5] This question, Turing believed, is one that can actually be answered.

Researchers in the United Kingdom had been exploring 'machine intelligence' for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956.[14] It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing, after whom the test is named.[15] Turing, in particular, had been tackling the notion of machine intelligence since at least 1941[16] and one of the earliest-known mentions of 'computer intelligence' was made by him in 1947.[17] In Turing's report, 'Intelligent Machinery',[18] he investigated 'the question of whether or not it is possible for machinery to show intelligent behaviour'[19] and, as part of that investigation, proposed what may be considered the forerunner to his later tests: 'It is not difficult to devise a paper machine which will play a not very bad game of chess.[20] Now get three men as subjects for the experiment.'

Turing proposed changing the question from 'Can machines think?' to 'Can machines do what we (as thinking entities) can do?'[22] The advantage of the new question, Turing argues, is that it draws 'a fairly sharp line between the physical and intellectual capacities of a man.'[23] To demonstrate this approach Turing proposes a test inspired by a party game, known as the 'imitation game', in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back.

In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man.[26] Turing's paper considered nine putative objections, which include all the major arguments against artificial intelligence that have been raised in the years since the paper was published (see 'Computing Machinery and Intelligence').[6] In 1966, Joseph Weizenbaum created a program which appeared to pass the Turing test.

If a keyword is not found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments.[27] In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be 'free to assume the pose of knowing almost nothing of the real world.'[28] With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being 'very hard to convince that ELIZA [...] is not human.'[28] Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing test,[28][29] even though this view is highly contentious (see below).
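A few lines of code convey the flavor of that keyword approach (this is not Weizenbaum's original DOCTOR script, and the rules and phrasings are invented): scan for a keyword, return its canned reflection, and otherwise fall back on a generic riposte or echo an earlier remark.

```python
import random

# A sketch in the spirit of ELIZA's keyword technique, with invented rules.
KEYWORD_RULES = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "sad":    "I am sorry to hear you are sad. Why do you think that is?",
}
GENERIC = ["Please go on.", "I see.", "How does that make you feel?"]
earlier_remarks = []

def eliza_reply(user_text):
    lowered = user_text.lower()
    for keyword, response in KEYWORD_RULES.items():
        if keyword in lowered:
            earlier_remarks.append(user_text)
            return response
    # No keyword found: generic riposte, or circle back to something said earlier.
    if earlier_remarks and random.random() < 0.5:
        return f"Earlier you said '{earlier_remarks[-1]}'. Tell me more about that."
    return random.choice(GENERIC)

print(eliza_reply("My mother never listens to me."))
print(eliza_reply("I don't know what to say."))
```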

'CyberLover', a malware program, preys on Internet users by convincing them to 'reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers'.[33] The program has emerged as a 'Valentine-risk' flirting with people 'seeking relationships online in order to collect their personal data'.[34] John Searle's 1980 paper Minds, Brains, and Programs proposed the 'Chinese room' thought experiment and argued that the Turing test could not be used to determine if a machine can think.

Therefore, Searle concludes, the Turing test cannot prove that a machine can think.[35] Much like the Turing test itself, Searle's argument has been both widely criticised[36] and highly endorsed.[37] Arguments such as Searle's and those of others working in the philosophy of mind sparked off a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.[38] The Loebner Prize provides an annual platform for practical Turing tests with the first competition held in November 1991.[39] It is underwritten by Hugh Loebner.

As Loebner described it, one reason the competition was created is to advance the state of AI research, at least in part, because no one had taken steps to implement the Turing test despite 40 years of discussing it.[40] The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing test and the value of pursuing it, in both the popular press[41] and academia.[42] The first contest was won by a mindless program with no identifiable intelligence that managed to fool naïve interrogators into making the wrong identification.

Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in 'Computing Machinery and Intelligence' and one that he describes as the 'Standard Interpretation'.[45] While there is some debate regarding whether the 'Standard Interpretation' is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent,[45] and their strengths and weaknesses are distinct.[46] Huma Shah points out that Turing himself was concerned with whether a machine could think and was providing a simple method to examine this: through human-machine question-answer sessions.[47] Shah argues there is one imitation game which Turing described could be practicalised in two different ways: a) one-to-one interrogator-machine test, and b) simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator.[24] Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalises naturally to all of human performance capacity, verbal as well as nonverbal (robotic).[48] Turing's original article describes a simple party game involving three players.

Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human.[7] While there is some dispute whether this interpretation was intended by Turing, Sterrett believes that it was[49] and thus conflates the second version with this one, while others, such as Traiger, do not[45] – this has nevertheless led to what can be viewed as the 'standard interpretation.'

The general structure of the OIG test could even be used with non-verbal versions of imitation games.[51] Still other writers[52] have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test that he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that imitation game, rather than a capacity to succeed at one round of the game.

To return to the original imitation game, he states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement.[23] When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation.[55] As Ayse Saygin, Peter Swirski,[56] and others have highlighted, this makes a big difference to the implementation and outcome of the test.[7] In an experimental study looking at Gricean maxim violations in transcripts of Loebner's one-to-one (interrogator-hidden interlocutor) Prize for AI contests between 1994 and 1999, Ayse Saygin found significant differences between the responses of participants who knew and did not know about computers being involved.[57] The power and appeal of the Turing test derives from its simplicity.

The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility, both of which qualities are on display in the snippets of dialogue that Turing imagined. When Turing does introduce some specialised knowledge into one of his imagined dialogues, the subject is not maths or electronics, but poetry. Turing thus once again demonstrates his interest in empathy and aesthetic sensitivity as components of an artificial intelligence.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term 'average interrogator': '[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning'.[69] Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings.

Nonetheless, some of these experts have been deceived by the machines.[70] Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: They talk to their cars, ascribe desire and intentions to natural forces (e.g., 'nature abhors a vacuum'), and worship the sun as a human-like being with intelligence.

If the machine remains silent, in effect taking the fifth, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess.[72] Even taking into account a parallel/hidden human as part of the test may not help the situation, as humans can often be misidentified as being a machine.[73] Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research.[43] Indeed, the Turing test is not an active focus of much academic or commercial effort; as Stuart Russell and Peter Norvig write: 'AI researchers have devoted little attention to passing the Turing test.'[74] There are several reasons.

Turing wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence.[75] John McCarthy observes that the philosophy of AI is 'unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.'[76] Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science.

Software that could reverse CAPTCHA with some accuracy by analysing patterns in the generating engine started being developed soon after the creation of CAPTCHA.[80] In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo!, and PayPal up to 90% of the time.[81] In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy.[82] In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud.[83] Another variation is described as the subject matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field.

A related approach to Hutter's prize, which appeared much earlier in the late 1990s, is the inclusion of compression problems in an extended Turing test,[90] or of tests derived entirely from Kolmogorov complexity.[91] Other related tests in this line are presented by Hernandez-Orallo and Dowe.[92] Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence.[93] Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
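The intuition behind the compression-based tests can be shown crudely with a general-purpose compressor: text with learnable structure compresses much better than unstructured text. The snippet below only illustrates that intuition and is not any of the cited tests.

```python
import random
import string
import zlib

# Structured text compresses far better than unstructured text, so compression
# performance can stand in as a rough proxy for how well patterns are captured.
rng = random.Random(0)
samples = {
    "structured": "the cat sat on the mat. " * 40,
    "unstructured": "".join(rng.choice(string.ascii_letters) for _ in range(960)),
}
for name, text in samples.items():
    raw = text.encode("utf-8")
    ratio = len(zlib.compress(raw, 9)) / len(raw)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```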

In fact, Turing estimated that by the year 2000, machines with around 100 MB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase 'thinking machine' contradictory.[4] (In practice, from 2009–2012, the Loebner Prize chatterbot contestants only managed to fool a judge once,[95] and that was only due to the human contestant pretending to be a chatbot.[96]) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.[69] In a 2008 paper submitted to the 19th Midwest Artificial Intelligence and Cognitive Science Conference, Dr. Shane T.

Thinking Machines Corporation

The CM-1 and CM-2 first came in models with 64K (65,536) bit-serial processors (16 processors per chip); smaller 16K and 4K configurations followed later.

The CM-1 through CM-200 were examples of SIMD architecture (Single Instruction, Multiple Data), while the later CM-5 and CM-5E were MIMD (Multiple Instruction, Multiple Data) machines that combined commodity SPARC processors and proprietary vector processors in a 'fat tree' network.

Thinking Machines also introduced an early commercial RAID2 disk array, the DataVault, circa 1988.[3] In May 1985, Thinking Machines became the third company to register a .com domain name (think.com).

It became profitable in 1989, in part because of its DARPA contracts.[4] The following year, the company sold $65 million (USD) worth of hardware and software, making it the market leader in parallel supercomputers.

The hardware portion of the company was purchased by Sun Microsystems, and TMC re-emerged as a small software company specializing in parallel software tools for commodity clusters and data mining software for its installed base and former competitors' parallel supercomputers.

DARPA's Connection Machines were decommissioned by 1996.[5] In the 1993 film Jurassic Park, Connection Machines (non-functioning dummies) are visible in the park's control room, the programmer Dennis Nedry mentions 'eight Connection Machines',[6] and a video about dinosaur cloning mentions 'Thinking Machines supercomputers'.

In the 1996 film Mission Impossible, Luther Stickell asks Franz Krieger for 'Thinking Machine laptops' to help hack into the CIA's Langley supercomputer.[7] Tom Clancy's novel Rainbow Six speaks of the NSA's 'star machine from a company gone bankrupt, the Super-Connector from Thinking Machines, Inc., of Cambridge, Massachusetts' in the NSA's basement.

In addition, The Bear and the Dragon says the National Security Agency could crack nearly any book or cipher with one of three custom operating systems designed for a Thinking Machines supercomputer.

The Thinking Machine (Artificial Intelligence in the 1960s)

Can machines really think? Here is a series of interviews with some of the AI pioneers: Jerome Wiesner, Oliver Selfridge, and Claude Shannon. A view of the future ...

How Machines Learn

How do all the algorithms around us learn to do their jobs?

The incredible inventions of intuitive AI | Maurice Conti

What do you get when you give a design tool a digital nervous system? Computers that improve our ability to think and imagine, and robotic systems that come ...

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34

So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions on this data? From spam filters and self-driving ...

11. Introduction to Machine Learning

MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: Eric Grimson.

Google's DeepMind AI Just Taught Itself To Walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...

Hamming, "Artificial Intelligence - Part I" (April 7, 1995)

Intro: Today is the beginning of talking about artificial intelligence, it is a very different topic. I spent much of last night and today thinking about it, and talking to ...

The Rise of the Machines – Why Automation is Different this Time

Automation in the Information Age is different. Books used for this video: The Rise of the Robots; The Second Machine Age.

Machine Intelligence: Stronger, Faster, Smarter?

What do we mean when we say a machine thinks? Computers and robots have long been able to crunch impossibly large numbers and execute complex, ...

Artificial intelligence can read! Tech firms race to smarten up thinking machines

PROVIDENCE, R.I. (AP) — Seven years ago, a computer beat two human quizmasters on a "Jeopardy" challenge. Ever since, the tech industry has been ...