AI News, The Computer Revolution/Artificial Intelligence/Turing Test


In 1950, the mathematician Alan Turing proposed that the question of machine intelligence could be settled by whether a conversation with a computer could be distinguished from a conversation with a person.

In the test, both human (1) and the computer program are kept out of sight of human (2), so that human (2) has no visual indication of which source, human (1) or the computer, is providing each response.

If human (2) is unable to determine whether the source responding to him is the human or the computer, this demonstrates the computer's ability to exhibit human-like intelligence.

Turing test

The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2]
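As a rough illustration of this text-only set-up, the sketch below (all names are hypothetical) relays typed questions to a hidden respondent, whether human or program, over the same interface; nothing about the channel itself reveals which is which.

# Minimal sketch (hypothetical names) of the text-only channel: the
# judge exchanges typed messages with a hidden respondent and never
# hears a voice or sees a face.
import random

class HiddenRespondent:
    """Wraps either a human or a program behind the same text interface."""
    def __init__(self, reply_fn, label):
        self.reply_fn = reply_fn   # callable: question (str) -> answer (str)
        self.label = label         # 'human' or 'machine', hidden from the judge

    def answer(self, question: str) -> str:
        return self.reply_fn(question)

def run_session(questions, respondent):
    """Relay typed questions and collect typed answers only."""
    return [(q, respondent.answer(q)) for q in questions]

# In a real session the human's replies would be typed live; here both
# respondents are canned functions so the sketch runs on its own.
human = HiddenRespondent(lambda q: "Let me think about that.", "human")
machine = HiddenRespondent(lambda q: "That is an interesting question.", "machine")
hidden = random.choice([human, machine])
transcript = run_session(["Please write me a sonnet on the subject of the Forth Bridge."], hidden)
print(transcript)   # the judge sees only text, never the label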

It opens with the words: 'I propose to consider the question, 'Can machines think?'' Because 'thinking' is difficult to define, Turing chooses to 'replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.'[4]

The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind.

René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method, in which he notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can.

Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing test as such, even if he prefigures its conceptual framework and criterion.

In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: 'The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined.'[13]

To demonstrate this approach Turing proposes a test inspired by a party game, known as the 'imitation game', in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back.

(Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.[24]) Turing then described a new version of the game in which a computer takes the part of one of the hidden players.

In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man.[26]

Turing's paper considered nine putative objections, which include all the major arguments against artificial intelligence that have been raised in the years since the paper was published (see 'Computing Machinery and Intelligence').[6]

'CyberLover', a malware program, preys on Internet users by convincing them to 'reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers'.[33]

Arguments such as Searle's, and those of others working in the philosophy of mind, sparked a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.[38]

As Loebner described it, one reason the competition was created was to advance the state of AI research, at least in part because no one had taken steps to implement the Turing test despite 40 years of discussing it.[40]

However, the competition has awarded the bronze medal every year for the computer system that, in the judges' opinions, demonstrates the 'most human' conversational behaviour among that year's entries.

Shah argues that Turing described a single imitation game that could be put into practice in two different ways: (a) a one-to-one interrogator-machine test, and (b) a simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator.[24]
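As a sketch of the difference between these two readings (the function names are invented for illustration, not taken from Turing or Shah), the first form asks the judge for a verdict on a single hidden witness, while the second asks the judge to pick the human out of a parallel pair.

# (a) One-to-one: the interrogator questions a single hidden candidate
# and must decide whether it is a human or a machine.
def one_to_one_test(ask, judge_verdict, questions):
    transcript = [(q, ask(q)) for q in questions]
    return judge_verdict(transcript)        # returns 'human' or 'machine'

# (b) Simultaneous comparison: the interrogator questions a machine and
# a human in parallel and must say which hidden witness is the human.
def simultaneous_comparison(ask_a, ask_b, judge_pick, questions):
    transcript = [(q, ask_a(q), ask_b(q)) for q in questions]
    return judge_pick(transcript)           # returns 'A' or 'B'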

Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalises naturally to all of human performance capacity, verbal as well as nonverbal (robotic).[48]

Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?[23]

Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human.[7]

The test that employs the party game and compares frequencies of success is referred to as the 'Original Imitation Game Test', whereas the test consisting of a human judge conversing with a human and a machine is referred to as the 'Standard Turing Test'; Sterrett equates the latter with the 'standard interpretation' rather than with the second version of the imitation game.

Sterrett agrees that the standard Turing test (STT) has the problems that its critics cite but feels that, in contrast, the original imitation game test (OIG test) so defined is immune to many of them, due to a crucial difference: Unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence.

A man can fail the OIG test, but it is argued that it is a virtue of a test of intelligence that failure indicates a lack of resourcefulness: The OIG test requires the resourcefulness associated with intelligence and not merely 'simulation of human conversational behaviour'.

Other writers have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that imitation game, rather than a capacity to succeed at one round of the game.

The imitation game also includes a 'social hack' not found in the standard interpretation, as in the game both the computer and the male human are required to pretend to be someone they are not.[54]

In an experimental study looking at Gricean maxim violations, using transcripts of Loebner's one-to-one (interrogator-hidden interlocutor) Prize for AI contests between 1994 and 1999, Ayse Saygin found significant differences between the responses of participants who did and did not know that computers were involved.[57]

The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of 'intelligence' and 'thinking' that are sufficiently precise and general to be applied to machines.

As a Cambridge honours graduate in mathematics, Turing might have been expected to propose a test of computer intelligence requiring expert knowledge in some highly technical field, thereby anticipating a more recent approach to the subject.

Instead, as already noted, the test which he described in his seminal 1950 paper requires the computer to be able to compete successfully in a common party game, and this by performing as well as the typical man in answering a series of questions so as to pretend convincingly to be the woman contestant.

Given the status of human sexual dimorphism as one of the most ancient of subjects, it is thus implicit in the above scenario that the questions to be answered will involve neither specialised factual knowledge nor information processing technique.

The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility, both qualities that Turing puts on display in an imagined snippet of dialogue.

It is further noted, however, that whatever inspiration Turing might be able to lend in this direction depends upon the preservation of his original vision, which is to say, further, that the promulgation of a 'standard interpretation' of the Turing test—i.e., one which focuses on a discursive intelligence only—must be regarded with some caution.

He wanted to provide a clear and understandable alternative to the word 'think', which he could then use to reply to criticisms of the possibility of 'thinking machines' and to suggest ways that research might move forward.

The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term 'average interrogator': '[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning'.[69]
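Read as a quantitative criterion, that sentence can be paraphrased in the small sketch below; the session model and the 0.65 figure are invented placeholders, used only to show how the 70%-after-five-minutes threshold would be checked.

# Rough sketch of the quantitative criterion quoted above: a machine
# meets Turing's threshold if average interrogators make the *right*
# identification no more than 70% of the time after five minutes.
import random

def passes_turing_threshold(p_correct_identification: float) -> bool:
    """p_correct_identification: estimated probability that an average
    interrogator correctly picks out the machine after five minutes."""
    return p_correct_identification <= 0.70

def estimate_p_correct(identify_once, trials=1000):
    """Monte-Carlo estimate over many simulated 5-minute sessions;
    identify_once() returns True if the interrogator guessed right."""
    return sum(identify_once() for _ in range(trials)) / trials

# Example with a made-up session model (65% correct identifications):
p = estimate_p_correct(lambda: random.random() < 0.65)
print(p, passes_turing_threshold(p))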

Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: They talk to their cars, ascribe desire and intentions to natural forces (e.g., 'nature abhors a vacuum'), and worship the sun as a human-like being with intelligence.

If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history.

Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence.

Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science.

This is an extension of the original question that Turing attempted to answer but would, perhaps, offer a high enough standard to define a machine that could 'think' in a way that we typically define as characteristically human.

The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human.

In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud.[83]

The letter states: 'In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?'

The letter further states: 'Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value.'

It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence.

The Turing test inspired the Ebert test, proposed in 2011 by film critic Roger Ebert, which tests whether a computer-based synthesised voice has sufficient skill in terms of intonations, inflections, timing and so forth to make people laugh.[94]

In fact, he estimated that by the year 2000, machines with around 100 MB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase 'thinking machine' contradictory.[4]

(One such result was achieved only because the human contestant was pretending to be a chatbot.[96]) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.[69]

During the Long Now Turing Test, each of three Turing test judges will conduct online interviews of each of the four Turing test candidates (i.e., the computer and the three Turing test human foils) for two hours each for a total of eight hours of interviews.

Two significant events occurred in 1990: the first was the Turing Colloquium, held at the University of Sussex in April, which brought together academics and researchers from a wide variety of disciplines to discuss the Turing test in terms of its past, present, and future; the second was the formation of the annual Loebner Prize competition.

Blay Whitby lists four major turning points in the history of the Turing test – the publication of 'Computing Machinery and Intelligence' in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990.[101]

CAPTCHA

A CAPTCHA (/kæp.tʃə/, an acronym for 'Completely Automated Public Turing test to tell Computers and Humans Apart') is a type of challenge–response test used in computing to determine whether or not the user is human.[1]

This form of CAPTCHA requires that the user type the letters of a distorted image, sometimes with the addition of an obscured sequence of letters or digits that appears on the screen.
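A minimal sketch of this kind of distorted-text challenge, assuming the Pillow imaging library, is shown below; real CAPTCHAs apply far heavier distortion and always keep the expected answer strictly server-side.

# Minimal sketch of a distorted-text CAPTCHA of the kind described
# above, using Pillow. Illustrative only, not a hardened design.
import random, string
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def make_captcha(path="captcha.png", length=5):
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", (40 * length, 60), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for i, ch in enumerate(text):
        # Jitter each character's position so segmentation is harder.
        draw.text((10 + 35 * i + random.randint(-3, 3),
                   20 + random.randint(-8, 8)), ch, fill="black", font=font)
    for _ in range(6):
        # Obscuring lines make naive OCR less reliable.
        draw.line([(random.randint(0, img.width), random.randint(0, img.height)),
                   (random.randint(0, img.width), random.randint(0, img.height))],
                  fill="gray", width=1)
    img.filter(ImageFilter.GaussianBlur(0.8)).save(path)
    return text  # the expected answer, stored server-side

expected = make_captcha()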

This user identification procedure has received many criticisms, especially from disabled people, but also from other people who feel that their everyday work is slowed down by distorted words that are difficult to read.

In 2001, PayPal used such tests as part of a fraud prevention strategy in which they asked humans to 'retype distorted text that programs have difficulty recognizing.'[8]

Looking for a way to make their images resistant to OCR attack, the team looked at the manual of their Brother scanner, which had recommendations for improving OCR's results (similar typefaces, plain backgrounds, etc.).

Their patent application details that 'The invention is based on applying human advantage in applying sensory and cognitive skills to solving simple problems that prove to be extremely hard for computer software.'

Both patents predate other publications by several years; although they do not use the term CAPTCHA, they describe the ideas in detail and precisely depict the graphical CAPTCHAs used on the Web today.

This is done to demonstrate that breaking it requires the solution to a difficult problem in the field of artificial intelligence (AI) rather than just the discovery of the (secret) algorithm, which could be obtained through reverse engineering or other means.

Modern text-based CAPTCHAs are designed such that they require the simultaneous use of three separate abilities—invariant recognition, segmentation, and parsing—to correctly complete the task with any consistency.[10]

In the case of image and text based CAPTCHAs, if an AI were capable of accurately completing the task without exploiting flaws in a particular CAPTCHA design, then it would have solved the problem of developing an AI that is capable of complex object recognition in scenes.[12]

CAPTCHAs based on reading text — or other visual-perception tasks — prevent blind or visually impaired users from accessing the protected resource.[14]

Since it is too hard for most spam robots to parse and execute JavaScript, one proposed approach uses a simple script that fills in the CAPTCHA field automatically and hides the image and the field from human users.

Although these are much easier to defeat using software, they are suitable for scenarios where graphical imagery is not appropriate, and they provide a much higher level of accessibility for blind users than the image-based CAPTCHAs.

Other kinds of challenges, such as those that require understanding the meaning of some text (e.g., a logic puzzle, trivia question, or instructions on how to create a password) can also be used as a CAPTCHA.
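A toy sketch of such a text-understanding challenge follows; the questions are invented examples, not a vetted challenge set, and a production system would need a much larger and harder pool of them.

# Sketch of a text-based challenge: a question the user must understand
# and answer, with no image involved. Toy examples only.
import random

CHALLENGES = [
    ("What is three plus four, written as a digit?", "7"),
    ("Type the last word of this sentence in CAPITALS.", "CAPITALS"),
    ("Which is larger, 19 or 91?", "91"),
]

def issue_challenge():
    question, answer = random.choice(CHALLENGES)
    return question, answer            # the answer is kept server-side

def check_response(expected: str, response: str) -> bool:
    return response.strip().upper() == expected.upper()

q, a = issue_challenge()
print(q)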

There are a few approaches to defeating CAPTCHAs: using cheap human labor to recognize them, exploiting bugs in the implementation that allow the attacker to completely bypass the CAPTCHA, and finally using machine learning to build an automated solver.[20]

In October 2013, artificial intelligence company Vicarious claimed that it had developed a generic CAPTCHA-solving algorithm that was able to solve modern CAPTCHAs with character recognition rates of up to 90%.[23]

Another technique consists of using a script to re-post the target site's CAPTCHA as a CAPTCHA to a site owned by the attacker, which unsuspecting humans visit and correctly solve within a short while, allowing the script to use the answer.[26]

Sometimes, if part of the software generating the CAPTCHA is client-side (the validation is done on a server but the text that the user is required to identify is rendered on the client side), then users can modify the client to display the un-rendered text.

With the demonstration that text distortion based CAPTCHAs are vulnerable to machine learning based attacks, some researchers have proposed alternatives including image recognition CAPTCHAs which require users to identify simple objects in the images presented.

The argument in favor of these schemes is that tasks like object recognition are typically more complex to perform than text recognition and therefore should be more resilient to machine learning based attacks.

Human computing power: Can humans decide the halting problem on Turing Machines?
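For reference, the question concerns the halting problem, which is undecidable for Turing machines by the classical diagonal argument; the sketch below restates that argument in Python form (the halts oracle is hypothetical and deliberately unimplemented).

# Sketch of the diagonal argument: no program halts(prog, arg) can
# correctly decide halting for all programs, because the following
# construction defeats any candidate.

def halts(program, argument) -> bool:
    """Hypothetical oracle: returns True iff program(argument) halts.
    No total, correct implementation can exist."""
    raise NotImplementedError

def paradox(program):
    # Loop forever exactly when the oracle says program(program) halts.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding the construction to itself is contradictory either way:
# if halts(paradox, paradox) is True, paradox(paradox) loops forever;
# if it is False, paradox(paradox) halts. So 'halts' cannot exist.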

If individual atoms can be simulated, then one can simulate the human mind as well by building a computer system big enough to simulate the individual atoms.

The following seems to indicate that it may not be possible: http://hps.org/publicinformation/ate/faqs/faqradbods.html. Even in a basic 'brain in a jar' scenario, you would probably still get truly random processes occurring somewhere in the human brain.

Also, don't forget that a human is in a sense also part of his or her environment: http://en.wikipedia.org/wiki/Human_Microbiome_Project. Perhaps some of these bacteria also influence the inner workings of the human brain in some way, and the composition of these bacteria can change over a human's lifetime (also within certain boundaries, I suppose).

If at least one process within at least one of these organisms is truly random and also somehow indirectly affects the human brain, then one would need a TM with an entropy source to simulate a human mind.
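As a toy illustration of that last distinction, the sketch below contrasts a purely deterministic step function with the same machine given access to an external entropy source; the names and update rule are invented for illustration and say nothing about brains.

# Deterministic step function versus a 'TM with an entropy source'.
import os

def step_deterministic(state: int) -> int:
    # Fully reproducible: the same state always yields the same next state.
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def step_with_entropy(state: int) -> int:
    # Each step also consumes one byte from the OS entropy source, so no
    # deterministic simulation can reproduce the run exactly.
    noise = os.urandom(1)[0]
    return step_deterministic(state) ^ noise

s = 42
print(step_deterministic(s), step_with_entropy(s))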

Did Google’s Duplex AI Demo Just Pass the Turing Test? [Update]

Yesterday, at I/O 2018, Google showed off a new digital assistant capability that’s meant to improve your life by making simple boring phone calls on your behalf.

If you listen to both segments, the male voice booking the restaurant sounds a bit more like a person than the female does, but the gap isn’t large and the female voice is still noticeably better than a typical AI.

The British computer scientist, mathematician, and philosopher Alan Turing devised the Turing test as a means of measuring whether a computer was capable of demonstrating intelligent behavior equivalent to or indistinguishable from that of a human.

This broad formulation allows for the contemplation of many such tests, though the general test case presented in discussion is a conversation between a researcher and a computer in which the computer responds to questions.

The Turing test is not intended to be the final word on whether an AI is intelligent and, given that Turing conceived it in 1950, obviously doesn’t take into consideration later advances or breakthroughs in the field.

The Turing test: Can a computer pass for a human? - Alex Gendler

What is consciousness? Can an artificial machine ..

How the "Most Human Human" passed the Turing Test

To prove he was human, Brian Christian competed against some of the world's most advanced AI.

Computer Passes Turing Test for the First Time

For decades, people have been able to have conversations with computers. However, humans have always been able to tell if they were talking with a person or ...

Bots Are Passing The Turing Test. Here's Why That's a Problem | Answers with Joe

The Turing Test was created to see if a computer or machine could pass itself off as a human. It was created by Alan Turing, a computer pioneer who was way ...

Computer Passes Turing Test For First Time in History, Artificial Intelligence Officially Here

Mark Dice is a media analyst, ...

Alan Turing: Crash Course Computer Science #15

Today we're going to take a step back from programming and discuss the person who formulated many of the theoretical concepts that underlie modern ...

Artificial Intelligence & Personhood: Crash Course Philosophy #23

Today Hank explores artificial intelligence, including weak AI and strong AI, and the various ways that thinkers have tried to define strong AI including the Turing ...

What Makes A Machine Intelligent?

Artificial intelligence is constantly improving, but will it ever be sentient? Will AI ever pass human intelligence? Is Artificial Intelligence the Next Phase of Human ...

The Turing Test PC ULTRA 60 FPS - 2 Girls 1 Let's Play Gameplay Walkthrough Part 1: AI

Enter the mysteries of Europa, one of the moons of Jupiter. Did the scientists find life? And if so, what moral dilemmas do they face now? What is life when it can't ...

Can A Machine Think?

Video taken from the movie The Imitation Game.