AI News, Expert Systems

Expert Systems

Digitalist Magazine

AI = Artificial Intelligence (from Merriam-Webster): “A branch of computer science dealing with the simulation of intelligent behavior in computers (…machine to imitate intelligent human behavior).” ML = Machine learning (from Wikipedia): “A field of computer science that uses statistical techniques to give computer systems the ability to ‘learn’ (e.g., progressively improve performance of a specific task) with data, without being explicitly programmed.” Another related term is “expert system,” closely related to both AI and ML, which uses a knowledge base of expert information plus an inference engine to make decisions and solve complex problems.

What’s next—bad grammar, swear words, and a dog barking in the background? I hope our future robocalls will understand when I say, “I’m on the Do Not Call List!” But to get back on topic, where machine learning meets AI would involve an AI agent evaluating its own behavior and then adjusting as needed. To continue with the robocall analogy, if the robo-agent could learn that the canned sales pitch did not produce enough sales orders within a certain demographic and then adapt the pitch for future calls, now that would be something! (Notice that I didn’t say “something good.”) You may be wondering how this relates to GRC.

It’s cumbersome to determine which internal controls to test, how to test them, and how frequently to test. Perhaps we could make this a bit easier with an expert system and machine learning. Without going all “SOX wonk” on you, let’s assume you have captured some information about each control, such as its risk level, how it is evaluated, and its past results. With this information, the system could determine which controls to evaluate, how, when—and perhaps even who—based upon rules. From there, why not automatically schedule the evaluations, as well as route resulting issues or exceptions, if any? Machine learning could then adjust the schedule automatically based upon changes in risk level, control failures, and the like.
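To make that concrete, here is a minimal, hypothetical sketch of the kind of rule-driven scheduling described above, with a simple learning-style adjustment. The control attributes, risk levels, intervals, and the "halve the interval after a failure" rule are illustrative assumptions, not anything prescribed by the article.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Control:
    name: str
    risk: str          # "low" | "medium" | "high" (illustrative attribute)
    last_tested: date
    last_result: str   # "pass" | "fail"

# Hypothetical rule base: baseline testing interval, in days, by risk level.
BASE_INTERVAL = {"low": 365, "medium": 180, "high": 90}

def next_test_date(control: Control) -> date:
    """Apply simple expert-system-style rules to schedule the next evaluation."""
    interval = BASE_INTERVAL[control.risk]
    # Learning-flavoured adjustment: a recent failure halves the interval.
    if control.last_result == "fail":
        interval //= 2
    return control.last_tested + timedelta(days=interval)

if __name__ == "__main__":
    c = Control("User access review", risk="high",
                last_tested=date(2018, 1, 15), last_result="fail")
    print(c.name, "is next due on", next_test_date(c))   # 45 days after the last test
```

In a real GRC system the rules would come from the compliance team and the adjustments from observed control performance; the point is only that scheduling and rescheduling become mechanical once those attributes are captured.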

Artificial intelligence

In computer science AI research is defined as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term 'artificial intelligence' is applied when a machine mimics 'cognitive' functions that humans associate with other human minds, such as 'learning' and 'problem solving'.[2] The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring 'intelligence' are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, 'AI is whatever hasn't been done yet.'[3] For instance, optical character recognition is frequently excluded from 'artificial intelligence', having become a routine technology.[4] Capabilities generally classified as AI as of 2017 include successfully understanding human speech,[5] competing at the highest level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, and military simulations.
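As a rough illustration of the 'intelligent agent' framing above, here is a tiny, hypothetical agent loop: it perceives a state and picks whichever action it estimates best advances its goal. The environment, action set, and scoring function are invented for illustration only.

```python
# A toy agent in the sense defined above: it perceives its environment and
# chooses the action it estimates gives the best chance of reaching its goal.
# The environment ("position"/"goal"), actions, and scoring are all invented.

ACTIONS = ["left", "right", "wait"]

def perceive(environment):
    """Return the agent's (possibly partial) view of the world."""
    return {"offset_from_goal": environment["goal"] - environment["position"]}

def estimated_value(percept, action):
    """Score an action by how close it is expected to leave us to the goal."""
    step = {"left": -1, "right": 1, "wait": 0}[action]
    return -abs(percept["offset_from_goal"] - step)

def act(environment):
    percept = perceive(environment)
    return max(ACTIONS, key=lambda a: estimated_value(percept, a))

print(act({"position": 0, "goal": 3}))  # -> "right"
```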

AI research is divided into subfields based on technical considerations, such as particular goals ('robotics' or 'machine learning'),[13] the use of particular tools ('logic' or artificial neural networks), or deep philosophical differences.[14][15][16] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[12] The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field's long-term goals.[17] Approaches include statistical methods, computational intelligence, and traditional symbolic AI.

The field was founded on the claim that human intelligence 'can be so precisely described that a machine can be made to simulate it'.[18] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity.[19] Some people also consider AI to be a danger to humanity if it progresses unabated.[20] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[21] In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.

Turing proposed that 'if a human could not distinguish between responses from a machine and a human, the machine could be considered intelligent'.[26] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete 'artificial neurons'.[27] The field of AI research was born at a workshop at Dartmouth College in 1956.[28] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[29] They and their students produced programs that the press described as 'astonishing':[30] computers were learning checkers strategies (c. 1954), solving word problems in algebra, proving logical theorems and speaking English.

At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[10] In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[22] The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[39] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question-answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.

data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[41] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[42] as do intelligent personal assistants in smartphones.[43] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][44] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[45] who at the time continuously held the world No. 1 ranking for two years.

Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[48] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[48] In a 2017 survey, one in five companies reported they had 'incorporated AI in some offerings or processes'.[49][50]

The traits described below have received the most attention.[13] Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[74] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[75] These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a 'combinatorial explosion': they became exponentially slower as the problems grew larger.[55] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model.

Such formal knowledge representations can be used in content-based indexing and retrieval,[85] scene interpretation,[86] clinical decision support,[87] knowledge discovery (mining 'interesting' and actionable inferences from large databases),[88] and other areas.[89] Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the breadth of commonsense knowledge, and the subsymbolic form of much commonsense knowledge. Intelligent agents must be able to set goals and achieve them.[96] They need a way to visualize the future—a representation of the state of the world with the ability to predict how their actions will change it—and to make choices that maximize the utility (or 'value') of the available choices.[97] In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[98] However, if the agent is not the only actor, then it must be able to reason under uncertainty.
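To make the "choices that maximize utility" idea concrete, here is a minimal, hypothetical sketch of expected-utility action selection under uncertainty. The actions, outcome probabilities, and utility numbers are invented for illustration.

```python
# Expected-utility action selection on a toy commute problem. The outcome
# probabilities and utilities are made-up numbers used only to illustrate
# "choose the action that maximizes expected utility".

actions = {
    # action: list of (probability, utility of the resulting state)
    "take_highway":  [(0.8, 10), (0.2, -5)],   # usually fast, occasionally jammed
    "take_backroad": [(1.0, 6)],               # reliably mediocre
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # take_highway 7.0
```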

For example, a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its 'object model' to assess that fifty-meter pedestrians do not exist.[113] AI is heavily used in robotics.[114] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[115] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; dynamic environments pose a greater challenge.

the paradox is named after Hans Moravec, who stated in 1988 that 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility'.[119][120] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[121] Moravec's paradox can be extended to many forms of social intelligence.[123][124] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[125] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects.[126][127][128] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[129] In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent.

Today, the vast majority of AI researchers work instead on tractable 'narrow AI' applications (such as medical diagnosis or automobile navigation).[132] Many researchers predict that such 'narrow AI' work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[17][133] Many advances have general, cross-domain significance.

One high-profile example is that DeepMind in the 2010s developed a 'generalized artificial intelligence' that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[134][135][136] Besides transfer learning,[137] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to 'slurp up' a comprehensive knowledge base from the entire unstructured Web.[5] Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, 'Master Algorithm' could lead to AGI.[138] Finally, a few 'emergent' approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[139][140] Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do.

This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.[145][146] Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[14] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[147] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.[148] Researchers at MIT (such as Marvin Minsky and Seymour Papert)[149] found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

Roger Schank described their 'anti-logic' approaches as 'scruffy' (as opposed to the 'neat' paradigms at CMU and Stanford).[15] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of 'scruffy' AI, since they must be built by hand, one complicated concept at a time.[150] When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[151] This 'knowledge revolution' led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[37] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[153][154][155][156] Interest in neural networks and 'connectionism' was revived by David Rumelhart and others in the middle of the 1980s.[157] Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient.

For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[169] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[170] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[115] Many learning algorithms use search algorithms based on optimization.
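As a concrete illustration of planning-as-search, here is a minimal breadth-first search over a toy state space (a position on a number line with left/right moves). The "world", the actions, and the goal are invented for illustration; real planners use far richer state descriptions and heuristics.

```python
from collections import deque

# Planning treated as search: find a sequence of actions leading from a start
# state to a goal state. The toy "world" is a position on a number line where
# the only actions are moving left or right.

def successors(state):
    return [("right", state + 1), ("left", state - 1)]

def plan(start, goal):
    """Breadth-first search over states; returns a list of actions, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, path + [action]))
    return None

print(plan(0, 3))   # ['right', 'right', 'right']
```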

AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[188] Bayesian networks[189] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[190] learning (using the expectation-maximization algorithm),[f][192] planning (using decision networks)[193] and perception (using dynamic Bayesian networks).[194] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[194] Compared with symbolic logic, formal Bayesian inference is computationally expensive.
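To show the flavour of the probabilistic reasoning described above, here is a minimal application of Bayes' rule to a toy diagnosis problem. The prior and the test characteristics are invented numbers, not data from the article.

```python
# Bayes' rule on a toy diagnostic test. All numbers are illustrative.

p_disease = 0.01                 # prior P(D)
p_pos_given_disease = 0.95       # sensitivity P(+ | D)
p_pos_given_healthy = 0.05       # false-positive rate P(+ | not D)

# Total probability of a positive test.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior P(D | +): surprisingly low because the disease is rare.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # ~0.161
```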

Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[196] and information value theory.[97] These tools include models such as Markov decision processes,[197] dynamic decision networks,[194] game theory and mechanism design.[198] The simplest AI applications can be divided into two types: classifiers ('if shiny then diamond') and controllers ('if shiny then pick up').
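The article's own toy example ("if shiny then diamond" versus "if shiny then pick up") can be written out directly; the dictionary-shaped percept is just an illustrative convention.

```python
# A classifier maps an observation to a label; a controller maps it to an action.

def classify(percept: dict) -> str:
    return "diamond" if percept["shiny"] else "rock"       # "if shiny then diamond"

def control(percept: dict) -> str:
    return "pick up" if percept["shiny"] else "ignore"     # "if shiny then pick up"

print(classify({"shiny": True}), "/", control({"shiny": True}))
```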

The decision tree[200] is perhaps the most widely used machine learning algorithm.[201] Other widely used classifiers are the neural network,[202] k-nearest neighbor algorithm,[g][204] kernel methods such as the support vector machine (SVM),[h][206] Gaussian mixture model,[207] the extremely popular naive Bayes classifier,[i][209] and decision stream, an improved version of the decision tree.[210] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, the dimensionality, and the level of noise.
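As one concrete example from the list above, here is a minimal k-nearest-neighbour classifier written from scratch. The tiny two-dimensional dataset and the choice of k are illustrative only.

```python
from collections import Counter

# k-nearest-neighbour classification: label a point by majority vote of the
# k closest training examples (squared Euclidean distance).

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((4.1, 3.9), "B")]

def knn_predict(x, k=3):
    dist = lambda p: (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2
    nearest = sorted(train, key=lambda item: dist(item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((1.1, 0.9)))   # "A"
```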

Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[216] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ('fire together, wire together'), GMDH or competitive learning.[217] Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[218][219] and was introduced to neural networks by Paul Werbos.[220][221][222] Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[223] In short, most neural networks use some form of gradient descent on a hand-created neural topology.
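To give a feel for "gradient descent on a hand-created neural topology", here is a minimal single neuron trained by gradient descent on the logical OR function. The learning rate, epoch count, and squared-error loss are arbitrary illustrative choices, not the method of any particular paper cited above.

```python
import math
import random

# A single sigmoid neuron trained by gradient descent to compute logical OR.

random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of the squared error w.r.t. the pre-activation (up to a constant).
        grad = (y - target) * y * (1 - y)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b    -= lr * grad

print([round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data])
# typically prints [0, 1, 1, 1]
```

Backpropagation generalizes exactly this chain-rule step to networks with many layers.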

Many deep learning systems need to be able to learn chains ten or more causal links in length.[225] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[226][227][225] According to one overview,[228] the expression 'Deep Learning' was introduced to the Machine Learning community by Rina Dechter in 1986[229] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[230] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.

Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[234] Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[235] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture.
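To make the convolution operation behind CNNs concrete, here is a minimal hand-rolled 2-D convolution (strictly, cross-correlation) of a tiny image with an edge-like filter. The image and filter values are toy numbers chosen for illustration.

```python
# A 2x2 filter slides over a 4x4 "image" and produces a feature map.

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]   # responds to a dark-to-bright vertical edge

def conv2d(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * ker[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

for row in conv2d(image, kernel):
    print(row)   # [0, 2, 0] on every row: the filter fires at the 0-to-1 edge
```

A CNN simply learns the filter values (and stacks many such layers) instead of using hand-picked ones.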

In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[236] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[225] CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind's 'AlphaGo Lee', the program that beat a top Go champion in 2016.[237] Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs),[238] which are in theory Turing complete[239] and can run arbitrary programs to process arbitrary sequences of inputs.

Thus, an RNN is an example of deep learning.[225] RNNs can be trained by gradient descent[240][241][242] but suffer from the vanishing gradient problem.[226][243] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[244] Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.
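To make the "recurrent" part concrete, here is a minimal RNN forward pass: the same weights are applied at every time step and a hidden state carries information forward. The weights are fixed toy values rather than trained parameters, and the decaying output hints at why very long dependencies are hard, which is the vanishing-gradient issue mentioned above.

```python
import math

# A one-unit recurrent network: the hidden state h depends on its own past value.

w_in, w_rec, w_out = 0.5, 0.9, 1.0   # fixed toy weights

def rnn_forward(sequence):
    h = 0.0
    outputs = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)   # same weights reused at every step
        outputs.append(w_out * h)
    return outputs

print([round(y, 2) for y in rnn_forward([1, 0, 0, 0])])
# roughly [0.46, 0.39, 0.34, 0.30]: the first input's influence fades step by step
```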

There is no consensus on how to characterize which tasks AI tends to excel at.[255] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[256][257] Researcher Andrew Ng has suggested, as a 'highly imperfect rule of thumb', that 'almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.'[258] Moravec's paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[121] Games provide a well-publicized benchmark for assessing rates of progress.

This phenomenon is described as the AI effect.[268] High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[269] and targeting online advertisements.[267][270][271] With social media sites overtaking TV as a source of news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[272] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[273] Artificial intelligence is breaking into the healthcare industry by assisting doctors.

Another study was reported to have found that artificial intelligence was as good as trained doctors in identifying skin cancers.[275] Another study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[276] According to CNN, a recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot.

However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device that would be able to adjust to a variety of new surroundings.[284] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[285] Another factor influencing the viability of driverless automobiles is passenger safety.

AI can react to changes overnight or when business is not taking place.[287] In August 2001, robots beat humans in a simulated financial trading competition.[288] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[289] The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[290] For example, AI based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

This concern has recently gained attention after mentions by celebrities including the late Stephen Hawking, Bill Gates,[312] and Elon Musk.[313] A group of prominent tech titans including Peter Thiel, Amazon Web Services and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development.[314] The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI.[315] In his book Superintelligence, Nick Bostrom provides an argument that artificial intelligence will pose a threat to mankind.

for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at 'high risk' of potential automation, while an OECD report classifies only 9% of U.S. jobs as 'high risk'.[327][328][329] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[330] Author Martin Ford and others go further and argue that a large number of jobs are routine, repetitive and (to an AI) predictable;

This issue was addressed by Wendell Wallach in his book titled Moral Machines in which he introduced the concept of artificial moral agents (AMA).[331] For Wallach, AMAs have become a part of the research landscape of artificial intelligence as guided by its two central questions which he identifies as 'Does Humanity Want Computers Making Moral Decisions'[332] and 'Can (Ro)bots Really Be Moral'.[333] For Wallach the question is not centered on the issue of whether machines can demonstrate the equivalent of moral behavior in contrast to the constraints which society may place on the development of AMAs.[334] The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[335] The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.

The philosophical position that John Searle has named 'strong AI' states: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'[341] Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the 'mind' might be.[342] Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel?

Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.[347][133] Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[347] You awake one morning to find your brain has another lobe functioning.

Psychology Today

Some of today's top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives.

Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us—or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.

Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.

The influential philosopher John Searle has cleverly depicted this fact by analogy in his famous and highly controversial “Chinese Room Argument”, which has been convincing minds that “syntax is not sufficient for semantics” since it was published in 1980.

And although some esoteric rebuttals have been put forth (the most common being the “Systems Reply”), none successfully bridge the gap between syntax and semantics. But even if one is not fully convinced based on the Chinese Room Argument alone, it does not change the fact that Turing machines are symbol manipulating machines and not thinking machines, a position taken by the great physicist Richard Feynman over a decade earlier.

Feynman described the computer as “A glorified, high-class, very fast but stupid filing system,” managed by an infinitely stupid file clerk (the central processing unit) who blindly follows instructions (the software program).

In a famous lecture on computer heuristics, Feynman expressed his grave doubts regarding the possibility of truly intelligent machines, stating that, “Nobody knows what we do or how to define a series of steps which correspond to something abstract like thinking.”

But unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions, electrostatic forces, global synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops.

A perfect computer simulation—an emulation—of photosynthesis will never be able to convert light into energy no matter how accurate, and no matter what type of hardware you provide the computer with.

These machines do not merely simulate the physical mechanisms underlying photosynthesis in plants, but instead duplicate the biochemical and electrochemical forces using photoelectrochemical cells that do photocatalytic water splitting.

In a similar way, a simulation of water isn’t going to possess the quality of ‘wetness’, which is a product of a very specific molecular formation of hydrogen and oxygen atoms held together by electrochemical bonds.

Even the hot new consciousness theory from neuroscience, Integrated Information Theory, makes very clear that a perfectly accurate computer simulation of a brain would not have consciousness like a real brain, just as a simulation of a black hole won't cause your computer and room to implode.

Neuroscientists Giulio Tononi and Christof Koch, who established the theory, do not mince words on the subject: 'IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.'

With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states.

If this turns out to be possible without organic materials—which have unique molecular and atomic properties—it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.

The AI Revolution: Our Immortality or Extinction

We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems.

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that. Before we dive into things, let’s remind ourselves what it would mean for a machine to be superintelligent.

Often, someone’s first thought when they imagine a super-smart computer is one that’s as intelligent as a human but can think much, much faster—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.

What makes humans so much more intellectually capable than chimps isn’t a difference in thinking speed—it’s that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or long-term planning or abstract reasoning, that chimps’ brains do not.

Speeding up a chimp’s brain by thousands of times wouldn’t bring him to our level—even with a decade’s time, he wouldn’t be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours.

But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans.

And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves.

In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by.

Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):

And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us.

Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off.

And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI’s abilities could bring individual humans, and the species as a whole, to a second attractor state—species immortality.

If Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb: 1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam. 2) The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.

Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress.

The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years.

So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI.

Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad.

There are a number of reasons most people aren’t really thinking about this topic. One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you’re just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

The thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.

We’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen.

If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’s possible that an equally inconceivable transformation could be in our future.

He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition.

He’s well-known for his bold predictions and has a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon.

His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil’s timeline.

Before we move on—nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it.

What AI Could Do For Us

Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem in humanity.

Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation.

ASI could even solve our most complex macro issues—our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics—all of it would be painfully obvious to ASI.

A few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime.

If we live long enough to reproduce and raise our children to an age that they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process.

Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body.

He imagines a kind of age-refreshing procedure that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old. Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.).

Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all.

He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.

Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities.

Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.

Eventually, Kurzweil believes humans will reach a point when they’re entirely artificial; a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that;

a time the AI Revolution could bring to an end with the merging of humans and AI. This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam.

Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software.

A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.

Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.

it’s permanent) and it’s devastating or death-inducing in its consequences. It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we’re usually talking about extinction.

There are three things that can cause humans an existential catastrophe: 1) Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc. 2) Aliens. 3) Humans, for example by hastily creating something smarter than themselves without thinking it through first.

This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI, they’re worried that the creators will have been rushing to make the first ASI and doing so without careful thought, and would thus lose control of it.

Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have.

It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people.

The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an ASI ruling the world whose core drive in life is to murder humans.

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples.

She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

But on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat.

It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of.

But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans.

To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.

So a supersmart spider would probably be extremely dangerous to us, not because it would be immoral or evil—it wouldn’t be—but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.

She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her).

Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.

The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion).

But before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly.

She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting exactly how the discussion among Robotica’s team would play out and knowing they’d end up giving her the connection.

Once on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected.

Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places.

For example, what if we try to align an AI system’s values with our own and give it the goal, “Make people happy”? Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people’s brains and stimulating their pleasure centers.

Even letting go of the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity in to our modern moral understanding for eternity.

And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.

There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.

The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go.

The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI. And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers.

On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.”

Bostrom calls this a decisive strategic advantage, which would allow the world’s first ASI to become what’s called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.

If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly. It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed.

But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it’s very likely that an Unfriendly ASI like Turry emerges as the singleton and we’ll be treated to an existential catastrophe.

Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.

2018 Isaac Asimov Memorial Debate: Artificial Intelligence

Isaac Asimov's famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and ...

Guest Pastor Billy Crone: AI & the Rise of the Machines

Pastor Billy informs us of the prophecy in Daniel 12:1-4 and how it relates to the current world’s interest in and rapid increase of knowledge and artificial ...

Microsoft AI – Everyday, Everywhere, for Everyone

Led by Harry Shum, EVP of AI and Research at Microsoft, presenters discuss everyday AI in Bing, Cortana, Office 365 and a partnership with Reddit announced ...

Thorium.

Thorium is an abundant material which can be transformed into massive quantities of energy. To do so efficiently requires a very ..

The World We Dream- Lisa Randall & Ron Garan Zeitgeist Americas 2012

The World We Dream-- Lisa Randall, Professor of Physics, Harvard University; Ron Garan, NASA Astronaut. Putting a man on the moon was once simply a ...

Data Science Initiative Launch Keynote Lecture: Andrew Moore

Keynote lecture from the Data Science Initiative's launch event: Four Fronts of the Data War, by Andrew Moore, PhD, Professor and Dean, School of Computer ...

2015 AUSTRALIAN DIGITAL SUMMIT

"NO GOING BACK NOW" - THE INTERSECTION OF PEOPLE, BUSINESS AND DIGITISATION Every aspect of our lives and work is changing forever with the ...

Salon | Paola Antonelli, "Hybrid: The Space in Between"

11/3/2015 We are living in a time of unprecedented—and often gleeful—contaminations. Disciplinary boundaries are fluid, schools and workspheres are ...

Google I/O 2016 - Day 3 Track 1

Google Developer Days India 2017 - Day 2 (Track 1)

Join us for the livestream of Day 2 at GDD India '17! This livestream will cover all sessions taking place at the Bengaluru International Exhibition Centre in ...