AI News, Simons Institute Open Lecture: Does Computational Complexity Restrict Artificial Intelligence (AI) and Machine Learning?

Starting in the 1980s, a long line of work led to the conclusion that many interesting approaches—even modest ones—towards achieving AI were computationally intractable, meaning NP-hard or similar.

In recent years, empirical discoveries have undermined this argument, as computational tasks hitherto considered intractable have turned out to be easily solvable on very large-scale instances.

We survey methods used in recent years to design provably efficient (polynomial-time) algorithms for a host of intractable machine learning problems under realistic assumptions on the input.

History of artificial intelligence

McCorduck (2004) writes 'artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,' expressed in humanity's myths, legends, stories, speculation and clockwork automatons.[3] Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[4] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[5] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.

Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to 'algorithm') and European scholastic philosophers such as William of Ockham and Duns Scotus.[14] Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[15] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[16] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[17] In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[18] Hobbes famously wrote in Leviathan: 'reason is nothing but reckoning'.[19] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that 'there would be no more need of disputation between two philosophers than between two accountants.'

The program, the Logic Theorist, would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[36] Simon said that they had 'solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.'[37] (This was an early statement of the philosophical position John Searle would later call 'Strong AI': that machines can contain minds just as human bodies do.)[38] The Dartmouth Conference of 1956[39] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM.

Among the attendees were Allen Newell and Herbert Simon, both of whom would create important programs during the first decades of AI research.[41] At the conference Newell and Simon debuted the 'Logic Theorist' and McCarthy persuaded the attendees to accept 'Artificial Intelligence' as the name of the field.[42] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[43] The years after the Dartmouth conference were an era of discovery, of sprinting across new ground.

Few at the time would have believed that such 'intelligent' behavior by machines was possible at all.[45] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[46] Government agencies like DARPA poured money into the new field.[47] There were many successful programs and new directions in the late 50s and 1960s.

Researchers would reduce the search space by using heuristics or 'rules of thumb' that would eliminate those paths that were unlikely to lead to a solution.[49] Newell and Simon tried to capture a general version of this algorithm in a program called the 'General Problem Solver'.[50] Other 'searching' programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961).[51] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[52] An important goal of AI research is to allow computers to communicate in natural languages like English.

Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[74] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons.[75] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[76] In the early seventies, the capabilities of AI programs were limited.

After spending 20 million dollars, the NRC ended all support.[86] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its 'grandiose objectives' and led to the dismantling of AI research in that country.[87] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[88] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[89] By 1974, funding for AI projects was hard to find.

One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[92] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little 'symbol processing' and a great deal of embodied, instinctive, unconscious 'know how'.[93][94] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to 'understand' the symbols that it uses (a quality called 'intentionality').

However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to 'rules' and 'production rules') that permit tractable computation.

Gerald Sussman observed that 'using precise language to describe essentially imprecise concepts doesn't make them any more precise.'[106] Schank described their 'anti-logic' approaches as 'scruffy', as opposed to the 'neat' paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107] In 1975, in a seminal paper, Minsky noted that many of his fellow 'scruffy' researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something.

'[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay'.[114] Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s.[115] The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows.

A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or 'MCC') to fund large scale projects in AI and information technology.[120][121] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[122] In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a 'Hopfield net') could learn and process information in a completely new way.

Indeed, some of them, like 'carry on a casual conversation' had not been met by 2010.[129] As with other AI projects, expectations had run much higher than what was actually possible.[129] In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[130] They believed that, to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world.

The trick is to sense it appropriately and often enough.'[134] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[135] The field of AI, now more than half a century old, finally achieved some of its oldest goals.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[137] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second.

The event was broadcast live over the internet and received over 74 million hits.[138] In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[139] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[140] In February 2011, in a Jeopardy!

champions, Brad Rutter and Ken Jennings, by a significant margin.[141] These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today.[142] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[143] This dramatic increase is described by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years.
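The arithmetic behind that comparison is easy to check. A back-of-the-envelope sketch, assuming the two-year doubling period stated above:

```python
import math

# Rough check of the figures above: Deep Blue (1997) vs. the Ferranti
# Mark 1 (1951), a claimed 10-million-fold speedup, compared against one
# doubling of speed every two years.
speedup = 10_000_000
years = 1997 - 1951
doubling_period = 2

doublings = years / doubling_period        # 23 doublings in 46 years
predicted = 2 ** doublings                 # ~8.4 million-fold
needed = math.log2(speedup)                # doublings needed for 10^7x

print(f"{doublings:.0f} doublings predict a {predicted:,.0f}x speedup")
print(f"a {speedup:,}x speedup needs {needed:.2f} doublings")
```

The claimed 10-million-fold increase thus sits almost exactly on the two-year doubling curve.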

It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[146][148] AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[149] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research.

AI had solved a lot of very difficult problems[153] and its solutions proved to be useful throughout the technology industry,[154] such as data mining, industrial robotics, logistics,[155] speech recognition,[156] banking software,[157] medical diagnosis[157] and Google's search engine.[158] The field of AI received little or no credit for these successes in the 1990s and early 2000s.

Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[159] Nick Bostrom explains 'A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.'[160] Many AI researchers in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence.

The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[164] In 2001, AI founder Marvin Minsky asked 'So the question is why didn't we get HAL in 2001?'[165] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms.

John McCarthy, on the other hand, still blamed the qualification problem.[166] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[167] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[168] There were many other explanations and for each there was a corresponding research program underway.

Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[172] Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers.[172] According to the universal approximation theorem, depth is not necessary for a neural network to approximate arbitrary continuous functions.
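For one input dimension the theorem can even be made constructive: the piecewise-linear interpolant of a continuous function is itself a one-hidden-layer ReLU network. A minimal sketch (the target function sin(x) and the unit count are arbitrary illustration choices):

```python
import math

def relu(z):
    return max(0.0, z)

# Build the piecewise-linear interpolant of f on [a, b] as an explicit
# one-hidden-layer ReLU network: unit i has breakpoint t_i, and its
# output coefficient is the change in slope at that knot.
def relu_net(f, a, b, n_units):
    h = (b - a) / n_units
    knots = [a + i * h for i in range(n_units + 1)]
    slopes = [(f(knots[i + 1]) - f(knots[i])) / h for i in range(n_units)]
    coefs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n_units)]
    bias = f(a)
    def net(x):
        return bias + sum(c * relu(x - t) for c, t in zip(coefs, knots))
    return net

net = relu_net(math.sin, 0.0, math.pi, n_units=20)
err = max(abs(net(i * math.pi / 1000) - math.sin(i * math.pi / 1000))
          for i in range(1001))
print(f"max error with 20 hidden units: {err:.4f}")
```

Twenty hidden units already bring the maximum error on sin(x) below 0.01; depth buys efficiency, not possibility.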

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on things like the MNIST database, and traffic sign recognition.[174] Language processing engines powered by smart search engines can easily beat humans at answering general trivia questions (such as IBM Watson), and recent developments in deep learning have produced astounding results in competing with humans, in things like Go and Doom (which, being an FPS, has sparked some controversy).[175][176][177][178] Artificial general intelligence (AGI) research aims to create machines that can solve any problem that requires intelligence.

Artificial intelligence

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring 'intelligence' are often removed from the definition, a phenomenon known as the AI effect, leading to the quip 'AI is whatever hasn't been done yet.'[3] For instance, optical character recognition is frequently excluded from 'artificial intelligence', having become a routine technology.[4] Capabilities generally classified as AI as of 2017[update] include successfully understanding human speech,[5] competing at the highest level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.

Subfields of AI research are organized around particular goals ('robotics' or 'machine learning'),[13] the use of particular tools ('logic' or 'neural networks'), or deep philosophical differences.[14][15][16] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[12] The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field's long-term goals.[17] Approaches include statistical methods, computational intelligence, and traditional symbolic AI.

The field was founded on the claim that human intelligence 'can be so precisely described that a machine can be made to simulate it'.[18] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[19] Some people also consider AI to be a danger to humanity if it progresses unabated.[20] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[21] In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding;

This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[26] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain.[27] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete 'artificial neurons'.[24] The field of AI research was born at a workshop at Dartmouth College in 1956.[28] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[29] They and their students produced programs that the press described as 'astonishing':[30] computers were learning checkers strategies (c.

At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[10] In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[22] The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards.[38] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[39] Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception. By the mid-2010s, machine learning applications were used throughout the world. In a Jeopardy!

quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy champions, Brad Rutter and Ken Jennings, by a significant margin.[40] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[41] as do intelligent personal assistants in smartphones.[42] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][43] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[44] who at the time continuously held the world No.

Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[47] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[47] The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner.

The traits described below have received the most attention.[13] Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[48] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[49] For difficult problems, algorithms can require enormous computational resources—most experience a 'combinatorial explosion': the amount of memory or computer time required becomes astronomical for problems of a certain size.
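The classic illustration of combinatorial explosion is the traveling-salesman problem: with n cities there are (n-1)!/2 distinct undirected tours, so brute-force search blows up long before input sizes look large:

```python
import math

# Combinatorial explosion, concretely: the number of distinct undirected
# tours in a traveling-salesman instance grows factorially with the
# number of cities, so exhaustive search quickly becomes astronomical.
def tour_count(n_cities):
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20, 25):
    print(f"{n:2d} cities -> {tour_count(n):.3e} tours")
```

At 25 cities the count already exceeds 10^23, which is why heuristics and problem-specific structure, not raw speed, are what make such problems tractable in practice.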

Video events are often represented as SWRL rules, which can be used, among other things, to automatically generate subtitles for constrained videos.[60] Among the most difficult problems in knowledge representation are: Intelligent agents must be able to set goals and achieve them.[67] They need a way to visualize the future (a representation of the state of the world with which they can predict how their actions will change it) and to make choices that maximize the utility (or 'value') of the available options.[68] In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[69] However, if the agent is not the only actor, then the agent must be able to reason under uncertainty.

The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[citation needed] Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[76][77][78][79] Natural language processing[80] gives machines the ability to read and understand human language.

These systems require that an agent is able to: Be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[89][90] Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects.[92][93] It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science.[94] While the origins of the field may be traced as far back as the early philosophical inquiries into emotion,[95] the more modern branch of computer science originated with Rosalind Picard's 1995 paper[96] on 'affective computing'.[97][98] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[110][111] Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[14] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[112] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[113] Researchers at MIT (such as Marvin Minsky and Seymour Papert)[114] found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

Roger Schank described their 'anti-logic' approaches as 'scruffy' (as opposed to the 'neat' paradigms at CMU and Stanford).[15] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of 'scruffy' AI, since they must be built by hand, one complicated concept at a time.[115] When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[116] This 'knowledge revolution' led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[37] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Stuart Russell and Peter Norvig describe this movement as nothing less than a 'revolution' and 'the victory of the neats'.[38] Critics argue that these techniques (with few exceptions[120]) are too focused on particular problems and have failed to address the long-term goal of general intelligence.[121] There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.[122][123] In the course of 60 or so years of research, AI has developed a large number of tools to solve the most difficult problems in computer science.

For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[128] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[129] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[88] Many learning algorithms use search algorithms based on optimization.
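The common thread in these examples is state-space search. A minimal breadth-first sketch over a toy derivation graph (the state names and edges below are invented for illustration):

```python
from collections import deque

# Toy state graph: each key is a state, each value the states reachable
# from it in one step (one inference-rule application, one move, etc.).
graph = {
    "premises": ["lemma1", "lemma2"],
    "lemma1": ["lemma3"],
    "lemma2": ["goal"],
    "lemma3": ["goal"],
    "goal": [],
}

def bfs_path(graph, start, goal):
    """Breadth-first search: returns a shortest path from start to goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(bfs_path(graph, "premises", "goal"))  # → ['premises', 'lemma2', 'goal']
```

Heuristic variants (A*, best-first) differ only in how the frontier is ordered, which is exactly where the 'rules of thumb' described earlier enter.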

AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[142] Bayesian networks[143] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[144] learning (using the expectation-maximization algorithm),[145] planning (using decision networks)[146] and perception (using dynamic Bayesian networks).[147] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[147]
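At the heart of these tools is Bayes' rule. A two-node network (cause -> observation) already shows the mechanics; the probabilities below are made-up illustration values, not real statistics:

```python
# Minimal Bayesian inference over a two-node network Disease -> Test.
# All numbers are made-up illustration values.
p_disease = 0.01          # prior P(D)
p_pos_given_d = 0.95      # sensitivity P(+ | D)
p_pos_given_not_d = 0.05  # false-positive rate P(+ | not D)

# Bayes' rule: P(D | +) = P(+ | D) * P(D) / P(+)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(f"P(disease | positive test) = {p_d_given_pos:.3f}")
```

Even with a 95%-sensitive test, a positive result raises the probability of the rare condition only to about 16%: the kind of non-obvious conclusion Bayesian networks compute automatically at scale.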

Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[148] and information value theory.[68] These tools include models such as Markov decision processes,[149] dynamic decision networks,[147] game theory and mechanism design.[150] The simplest AI applications can be divided into two types: classifiers ('if shiny then diamond') and controllers ('if shiny then pick up').

Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[159] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.[160] Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[161][162] and was introduced to neural networks by Paul Werbos.[163][164][165] Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[166] In short, most neural networks use some form of gradient descent on a hand-created neural topology.
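The principle behind such gradient-descent training can be shown with a single weight: compute the gradient of the loss and step against it, which is what backpropagation does layer by layer via the chain rule. A toy sketch (the data and learning rate are arbitrary):

```python
# Gradient descent in miniature: fit one weight w so that w*x matches
# the targets y.  The data follows y = 2x, so w should converge to 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, lr = 0.0, 0.05

for _ in range(200):
    # gradient of the mean squared error  L = mean((w*x - y)^2)  w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step against the gradient

print(f"learned weight: {w:.4f}")
```

A deep network repeats exactly this update for millions of weights at once, with the chain rule supplying each weight's gradient.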

Many deep learning systems need to be able to learn chains ten or more causal links in length.[168] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[169][170][168] According to one overview,[171] the expression 'Deep Learning' was introduced to the Machine Learning community by Rina Dechter in 1986[172] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[173] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V.

In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[179] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[168] CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind's 'AlphaGo Lee', the program that beat a top Go champion in 2016.[180] Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[181] which are in theory Turing complete[182] and can run arbitrary programs to process arbitrary sequences of inputs.

this phenomenon is described as the AI effect.[206] High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[207] and targeting online advertisements.[205][208][209] With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[210] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[211] There are a number of competitions and prizes to promote research in artificial intelligence.

However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device that can adjust to a variety of new surroundings.[222] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[223] Another factor influencing driverless automobiles is passenger safety.

AI can react to changes overnight or when business is not taking place.[225] In August 2001, robots beat humans in a simulated financial trading competition.[226] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[227] The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[228] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[233][234] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[235] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[234] Organizations like SoundHound Inc.

A McKinsey Global Institute study found a shortage of 1.5 million highly trained data and AI professionals and managers,[237] and a number of private bootcamps have developed programs to meet that demand, including free programs like The Data Incubator and paid programs like General Assembly.[238] Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as a platform about artificial intelligence.[239] They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[239] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017.

This concern has recently gained attention after mentions by celebrities including Stephen Hawking, Bill Gates,[252] and Elon Musk.[253] A group of prominent tech figures including Peter Thiel, Amazon Web Services and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development.[254] The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly capable AI.[255] In his book Superintelligence, Nick Bostrom argues that artificial intelligence will pose a threat to mankind.

For example, Michael Osborne and Carl Benedikt Frey estimate that 47% of U.S. jobs are at 'high risk' of potential automation, while an OECD report classifies only 9% of U.S. jobs as 'high risk'.[267][268][269] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[270] Author Martin Ford and others go further and argue that a large number of jobs are routine, repetitive and (to an AI) predictable.

This issue was addressed by Wendell Wallach in his book Moral Machines, in which he introduced the concept of artificial moral agents (AMAs).[271] For Wallach, AMAs have become part of the research landscape of artificial intelligence, guided by the two central questions he identifies as 'Does Humanity Want Computers Making Moral Decisions'[272] and 'Can (Ro)bots Really Be Moral'.[273] For Wallach, the question is not centered on whether machines can demonstrate the equivalent of moral behavior, but rather on the constraints which society may place on the development of AMAs.[274] The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[275] The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.'

The philosophical position that John Searle has named 'strong AI' states: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'[281] Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the 'mind' might be.[282] Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel?
