AI News: Artificial intelligence could impact half of jobs in NYS

History of artificial intelligence

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols.

In 1973, in response to the criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an 'AI winter'.

Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to the presence of powerful computer hardware.

The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion—Hermes Trismegistus wrote that 'by discovering the true nature of the gods, man has been able to reproduce it.'[10][11]

Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to 'algorithm') and European scholastic philosophers such as William of Ockham and Duns Scotus.[12]

Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all possible knowledge.[14]

The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction.

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s.

This simplified version of the problem allowed Turing to argue convincingly that a 'thinking machine' was at least plausible and the paper answered all the most common objections to the proposition.[30]

Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[32]

When access to digital computers became possible in the mid-1950s, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought.

To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end.
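To make this 'reasoning as search' idea concrete, here is a minimal sketch in Python of depth-first search with backtracking through a tiny maze; the maze layout and the function names are invented for illustration and are not taken from any historical program.

```python
# Depth-first search with backtracking: proceed step by step toward the goal,
# and back up whenever a dead end is reached. The maze below is a toy example.

MAZE = [
    "S.#",
    ".##",
    "..G",
]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")

    def search(pos, path, visited):
        r, c = pos
        if maze[r][c] == "G":                      # goal reached
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in visited):
                result = search((nr, nc), path + [(nr, nc)], visited | {(nr, nc)})
                if result is not None:             # this branch succeeded
                    return result
        return None                                # dead end: backtrack

    return search(start, [start], {start})

print(solve(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```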

Other 'searching' programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961).[49]

They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies.

Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors.

One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90]

Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little 'symbol processing' and a great deal of embodied, instinctive, unconscious 'know how'.[91][92]

However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[100]

A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh; this soon led to a collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[101]

In 1975, in a seminal paper, Minsky noted that many of his fellow 'scruffy' researchers were using the same kind of tool: a framework that captures all our commonsense assumptions about something.

We know these facts are not always true and that deductions using these facts will not be 'logical', but these structured sets of assumptions are part of the context of everything we say and think.

In the 1980s a form of AI program called 'expert systems' was adopted by corporations around the world and knowledge became the focus of mainstream AI research.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts.

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place.
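As a rough illustration of how such a rule-based system operates, here is a minimal forward-chaining inference loop in Python; the rules and facts are invented for the example, and real expert systems such as MYCIN or XCON used far larger rule bases and richer inference machinery.

```python
# Toy forward-chaining rule engine in the spirit of 1980s expert systems.
# Each rule pairs a set of required facts with a conclusion to add.

RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:                                  # repeat until no rule fires
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)               # rule fires: assert conclusion
                changed = True
    return facts

print(forward_chain({"has_feathers", "cannot_fly", "swims"}))
# -> the derived facts include 'is_bird' and 'is_penguin'
```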

'AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,'[112]

'[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay'.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows.

Douglas Lenat, who started and led the project, argued that there is no shortcut ― the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand.

The objectives of Japan's Fifth Generation Computer Project were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[117]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a 'Hopfield net') could learn and process information in a completely new way.

Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called 'backpropagation' (the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos).

Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[120][123]

The term 'AI winter' was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[124]

They were difficult to update, they could not learn, they were 'brittle' (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier.

They believed that, to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world.

They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox).

David Marr rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place.

Robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since 'the world is its own best model'.

In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[135]

Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability.

Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s.

Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of 'artificial intelligence'.[136]

The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second.

Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[140]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 90s.[142]

When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

By this definition, simple programs that solve specific problems are 'intelligent agents', as are human beings and organizations of human beings, such as firms.

It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory.

It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[146][148]
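As a rough sketch of the agent abstraction itself (not of SOAR or any particular published architecture), the following Python fragment treats an agent as anything that maps percepts from its environment to actions; the thermostat example and its names are hypothetical.

```python
# Minimal illustration of the intelligent-agent abstraction: an agent maps
# percepts to actions. A thermostat is about the simplest possible example.

class ThermostatAgent:
    """A simple reflex agent whose percept is a temperature reading."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # Policy: turn the heater on when the room is below the target.
        return "heat_on" if percept < self.target else "heat_off"

agent = ThermostatAgent(target=20.0)
for reading in (18.5, 21.0, 19.9):
    print(reading, "->", agent.act(reading))
```

Under this definition the thermostat, a chess program, and a firm all count as agents; they differ only in how sophisticated the mapping from percepts to actions is.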

There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research.

The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable;

Nick Bostrom explains 'A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.'[160]

In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: 'Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.'[161][162][163]

Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms.

Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[168]

In the first decades of the 21st century, access to large amounts of data (known as 'big data'), faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy.

In fact, McKinsey Global Institute estimated in their famous paper 'Big data: The next frontier for innovation, competition, and productivity' that 'by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data'.

Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[172]

Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers.[172]

A common problem for recurrent neural networks is the vanishing gradient problem: as gradients are propagated back through many layers (or time steps), they shrink geometrically and become so small that the earlier layers receive almost no learning signal.
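A small numerical illustration, assuming a toy chain of saturating (sigmoid) layers with arbitrarily chosen weights and inputs, shows how quickly the gradient shrinks as it is propagated backward:

```python
# Vanishing gradients, numerically: backpropagating through many sigmoid
# layers multiplies the gradient by a small local derivative at each step,
# so it shrinks geometrically. Depth, weight, and input are arbitrary toys.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)       # at most 0.25, attained at x = 0

grad = 1.0      # gradient arriving at the output layer
weight = 1.0    # toy weight shared by every layer
x = 0.5         # toy pre-activation fed to every layer
for layer in range(1, 51):
    grad *= weight * sigmoid_grad(x)   # chain rule through one more layer
    if layer % 10 == 0:
        print(f"after {layer:2d} layers: gradient ~ {grad:.3e}")
# The gradient collapses toward zero long before it reaches the earliest
# layers, which is why those layers learn very slowly (or not at all).
```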

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, for example on benchmarks such as the MNIST handwritten-digit database and traffic sign recognition.[174]

Language processing engines powered by smart search engines, such as IBM Watson, can easily beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competing with humans in games like Go and Doom (which, being a first-person shooter, has sparked some controversy).[175][176][177][178]

Big data refers to a collection of data that cannot be captured, managed, and processed by conventional software tools within a certain time frame.

In other words, if big data is likened to an industry, the key to profitability in this industry is to increase the processing capability of the data and realize its added value through processing.

Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that responds in a manner similar to human intelligence.

It has also been defined as the ability of a machine to perform 'general intelligent action'.[3] Academic sources reserve 'strong AI' to refer to machines capable of experiencing consciousness.

The Rise of the Machines – Why Automation is Different this Time

Automation in the Information Age is different. Books used for this video: The Rise of the Robots and The Second Machine Age.

AI Explained

Here we take a deep dive into the current state of AI.

Humans Need Not Apply

Tech Infrastructure Almost In Place To Control Humans, Experts Warn Of Malicious Use Of AI

The world gets creepier by the day.

The 5 Jobs Robots Will Take Last | Shelly Palmer on Fox 5

Are robots coming for your job? What jobs will AI take last? Shelly Palmer talks about the future of work on Fox 5 NY with Teresa Priolo and Antwan Lewis.

Robot Surgeons are the Future of Medicine

DISCLAIMER: Surgical imagery depicted; not for the easily squeamish. Medical technology is getting weirder.

If Robots Take Our Jobs, What Will Be Left for Humans to Do? | WIRED

Speakers at the WIRED Business Conference grapple with how AI will transform the job market.

#227 Stephen Wolfram & Anthony Scriffignano on Artificial Intelligence (AI)

How do computers think, and how is that changing? A peek into the ethics and governance surrounding artificial intelligence (AI) and advanced computing.

Big Thinkers - Rodney Brooks [Roboticist]

Big Thinkers is a former ZDTV (later TechTV) television program. It featured a half-hour interview with a "big thinker" in science, technology, and other fields.

Case Study: How a Large Brewery Uses Machine Learning for Preventive Maintenance (Cloud Next '18)

Learn how machine learning is used to optimize the beer manufacturing process, with a direct impact on the production line.