AI News: Computer Science and Artificial Intelligence

Computational Sciences - Course Explorer | Minerva Schools

Students learn about models of computation that provide the theoretical basis for modern computer science.

Topics include deterministic and nondeterministic finite state machines, Turing machines, formal language theory, computational complexity and the classification of algorithms.

What role does a grammar play in the way we analyze problems, solve problems, communicate with the computer, and even analyze natural languages?
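
To make the question concrete, here is a minimal sketch, not part of the course materials, of one of the simplest grammars there is, S -> "(" S ")" S | epsilon, together with a recursive-descent recognizer for it in Python. The function names and the choice of balanced parentheses as the example language are my own.

```python
# Sketch of a tiny context-free grammar and a recursive-descent recognizer.
# Grammar (illustrative):   S -> "(" S ")" S   |   epsilon
# It generates exactly the balanced-parenthesis strings.

def parse_s(text: str, pos: int) -> int:
    """Consume one S starting at pos; return the position just after it."""
    if pos < len(text) and text[pos] == "(":
        pos = parse_s(text, pos + 1)           # inner S
        if pos >= len(text) or text[pos] != ")":
            raise SyntaxError("expected ')'")
        return parse_s(text, pos + 1)          # trailing S
    return pos                                 # epsilon case

def is_balanced(text: str) -> bool:
    try:
        return parse_s(text, 0) == len(text)   # the whole input must be one S
    except SyntaxError:
        return False

if __name__ == "__main__":
    for s in ["", "()", "(())()", "(()", ")("]:
        print(f"{s!r}: {is_balanced(s)}")
```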

Computer Science: Artificial Intelligence

Artificial intelligence (AI) is a branch of computer science that seeks to build machines that carry out tasks which, when performed by humans, require intelligence.

“If someone would doubt my results,” Leibniz wrote, “I would say to him, ‘Let us calculate, Sir,’ and thus by taking pen and ink, we should settle the question.” Leibniz did not anticipate decision-making by mechanical means rather than by pen and ink—that is, artificial intelligence—but more than a century later British mathematician George Boole (1815–1864) described the logical rules, today called Boolean algebra, by which true-or-false statements can be identified with binary numbers and manipulated with mathematical rigor.
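
As a small illustration of Boole's idea (my own example, not drawn from the historical sources), the sketch below identifies true and false with the binary digits 1 and 0 and checks De Morgan's law over every combination of inputs:

```python
# Sketch: treating true/false as the binary digits 1/0 and manipulating
# statements algebraically. AND is multiplication, OR is capped addition,
# NOT is subtraction from 1. We verify De Morgan's law
#   not(p and q) == (not p) or (not q)
# exhaustively over all binary inputs.

def b_not(p): return 1 - p
def b_and(p, q): return p * q
def b_or(p, q): return min(p + q, 1)

for p in (0, 1):
    for q in (0, 1):
        left = b_not(b_and(p, q))
        right = b_or(b_not(p), b_not(q))
        print(f"p={p} q={q}  not(p and q)={left}  (not p) or (not q)={right}  equal={left == right}")
```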

In 1950 British mathematician Alan Turing (1912–1954) published one of the most famous papers in computer-science history, “Computing Machinery and Intelligence.” In it he took up the already old question “Can machines think?” and proposed that it could be answered by the now-famous Turing test, which he called the imitation game.

The imitation game would work as follows: if a human interrogator communicating with both a human being and a computer in another room, say by exchanging typed messages, could not reliably tell which was the human and which the computer, even after an extended exchange—that is, if the computer could imitate unrestricted human conversation—then it would be reasonable to say that the computer thinks and is intelligent.
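
Purely as a schematic sketch of the game's structure, and nothing like a real test, the Python below hides a "human" responder and a "machine" responder behind anonymous labels and asks an interrogator function to guess which is which. Both responders and the naive interrogator are placeholder stand-ins invented for illustration.

```python
# Schematic sketch of the imitation game's structure (not a real test):
# an interrogator exchanges typed messages with two hidden responders,
# one human and one machine, and must guess which is which.
import random

def human_responder(message: str) -> str:
    return "I'd rather talk about the weather, honestly."

def machine_responder(message: str) -> str:
    return "That is an interesting question. Could you rephrase it?"

def run_imitation_game(questions, guesser) -> bool:
    """Hide the two responders behind labels A/B and let the guesser decide."""
    labels = {"A": human_responder, "B": machine_responder}
    if random.random() < 0.5:                    # randomize which label is the machine
        labels = {"A": machine_responder, "B": human_responder}
    transcript = {label: [respond(q) for q in questions] for label, respond in labels.items()}
    guess = guesser(transcript)                  # interrogator's verdict: "A" or "B" is the machine
    actually_machine = next(l for l, r in labels.items() if r is machine_responder)
    return guess == actually_machine

if __name__ == "__main__":
    naive_guesser = lambda transcript: random.choice(["A", "B"])
    trials = 1000
    correct = sum(run_imitation_game(["Can machines think?"], naive_guesser) for _ in range(trials))
    print(f"naive interrogator identified the machine in {correct}/{trials} trials (about chance)")
```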

All that remained, some scientists thought, was to code the rules of human thought (human thought was assumed to depend on hidden rules) in binary form, supply the computer with a mass of digitally encoded facts about how the world works, and run some programs.
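
A minimal sketch of that picture, with facts and rules invented for illustration, is a forward-chaining loop that keeps applying if-then rules to a store of encoded facts until nothing new can be concluded:

```python
# Minimal forward-chaining sketch of the early "code the rules, encode the
# facts, run the program" picture of intelligence. Facts and rules are invented.

facts = {"socrates is a man", "it is raining"}

# Each rule: if all premises are already in the fact base, add the conclusion.
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"it is raining"}, "the ground is wet"),
    ({"the ground is wet"}, "the ground is slippery"),
]

changed = True
while changed:                       # keep applying rules until no new fact appears
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```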

For example, AI pioneer Herbert Simon (1916–2001) predicted in 1957 that “within ten years a digital computer will be the world's chess champion.” (A computer did not beat the world chess champion until 1997, 30 years behind schedule.) In 1965 Simon predicted that “Machines will be capable, within twenty years, of doing any work that a man can do.” (They still cannot.) In 1968, Stanley Kubrick's hit movie 2001: A Space Odyssey depicted a conversational computer, the HAL 9000, as a reality in 2001.

As one AI textbook put it in 2005, “The problem of over-promising has been a recurring one for the field of AI.” Jobs that seem hard to people, like handling large sets of numbers rapidly, are easy for computers, while things that seem easy to most people, like having a conversation or cleaning house, are hard—extremely hard—for computers.

The sheer number of facts that any normal person knows, and the number of ways in which they apply those facts in performing a typical task of daily life, including speech, is simply too large for even a modern computer to handle (even assuming that human intelligence can be understood in terms of applying rules to facts, which is a matter of dispute).

A human being washing dishes by hand must deal simultaneously with the mechanical properties of arms and hands, caked-on food, grease, water, soap, scrubbers, utensils of scores of different shapes, plastics, metals, and ceramics, and so on.

In 1970, for example, Life magazine announced excitedly that a turtle-like machine called Shakey, which could navigate a simple indoor environment, was the “first electronic person,” and promised its readers that by 1985 at the latest we would “have a machine with the general intelligence of an average human being.”

AI research restricted to specific problems such as pattern identification, question-answering, navigation, and the like is often called “weak” AI, in contrast to “strong” AI, the claim that a machine could genuinely possess general, human-level intelligence.
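
As a toy illustration of how narrow such “weak” AI systems are (the data, labels, and function names below are made up), a pattern-identification program can be as simple as a nearest-centroid classifier that solves exactly one problem and nothing else:

```python
# Toy "weak AI" pattern-identification sketch: a nearest-centroid classifier
# on made-up 2-D measurements. It handles one narrow task and nothing else.
from statistics import mean

# Invented labelled examples: (x, y) measurements for two classes.
training = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog": [(3.0, 3.2), (2.8, 3.1), (3.2, 2.9)],
}

# One centroid (average point) per class.
centroids = {
    label: (mean(x for x, _ in pts), mean(y for _, y in pts))
    for label, pts in training.items()
}

def classify(point):
    """Assign the label whose centroid is closest to the point."""
    x, y = point
    return min(centroids, key=lambda l: (x - centroids[l][0]) ** 2 + (y - centroids[l][1]) ** 2)

print(classify((1.0, 1.1)))   # -> cat
print(classify((3.1, 3.0)))   # -> dog
```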

GOFAI (“good old-fashioned AI,” the symbolic, rule-based approach) seeks to produce computer programs that apply symbolic rules to coded information in order to make decisions about how to manipulate objects, identify the content of sounds or images, steer vehicles, aim weapons, or the like.
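
A toy sketch of that style, with an invented situation encoding and invented rules, is a set of hand-written symbolic if-then rules that map a coded description of a driving situation to an action:

```python
# Toy GOFAI-style decision sketch: symbolic if-then rules map a coded
# description of a situation to an action. The encoding and rules are invented.

def steer(situation: dict) -> str:
    """Pick a driving action by matching hand-written symbolic rules in order."""
    if situation.get("obstacle_ahead") and situation.get("lane_left_clear"):
        return "change lane left"
    if situation.get("obstacle_ahead"):
        return "brake"
    if situation.get("speed") is not None and situation["speed"] < situation.get("speed_limit", 50):
        return "accelerate"
    return "hold course"

print(steer({"obstacle_ahead": True, "lane_left_clear": True}))   # change lane left
print(steer({"obstacle_ahead": True, "lane_left_clear": False}))  # brake
print(steer({"speed": 40, "speed_limit": 60}))                    # accelerate
```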

The RAND Corporation, a private strategic think-tank often hired by the U.S. military, reported in 2001 that “[t]he increasing sophistication of robotics, coupled with software advances (e.g., in artificial intelligence and speech understanding) removes jobs from the marketplace, both in low-skilled, entry-level positions and more sophisticated specialties.” Military applications of AI are already in use, including the partially self-guided weapons termed “smart bombs” and autonomous navigation by unmanned airplanes and submarines.

In popular culture, Captain Jean-Luc Picard informs viewers of the TV drama Star Trek: The Next Generation that human beings “are machines—just machines of a different type.” Others, including physicist Roger Penrose (1931–) and philosophers John Searle (1932–) and Hubert Dreyfus (1929–), argue that the strong-AI equation of mechanical and human thought rests on fallacious assumptions about thinking, information, and physical systems.

Artificial Intelligence | Research and Which Majors to Pick

Part 2: This video covers artificial intelligence, research being done in the field, what major to pick to get started ..

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34

So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions on this data? From spam filters and self-driving ...

Why study artificial intelligence?

Northwestern Engineering's Kristian Hammond, Bill and Cathy Osborn Professor of Computer Science, discusses the dramatic progress in the rise of artificial ...

Artificial Intelligence Tutorial | AI Tutorial for Beginners | Artificial Intelligence | Simplilearn

This Artificial Intelligence tutorial video will help you understand what Artificial Intelligence is, the types of Artificial Intelligence, and ways of achieving Artificial ...

What is Artificial Intelligence Exactly?

Artificial Intelligence

This lecture from Steve Brunton's series discusses artificial intelligence (AI) in the context of data science and machine learning.

What Is Artificial Intelligence? Crash Course AI #1

Artificial intelligence is everywhere and it's already making a huge impact on our lives. It's autocompleting texts on our cellphones, telling us which videos to ...