AI News, Book Review: Deep Learning

Artificial intelligence

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.

In computer science, AI research is defined as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1]

Colloquially, the term 'artificial intelligence' is applied when a machine mimics 'cognitive' functions that humans associate with other human minds, such as 'learning' and 'problem solving'.[2]

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring 'intelligence' are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, 'AI is whatever hasn't been done yet.'[3]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13]

Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics.

The field raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity.[19]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[22][11]

The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as '0' and '1', could simulate any conceivable act of mathematical deduction.

The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38]

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from 'sporadic usage' in 2012 to more than 2,700 projects.

He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11]

An AI's intended goal function can be simple ('1 if the AI wins a game of Go, 0 otherwise') or complex ('Do actions mathematically similar to the actions that got you rewards in the past').

This is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits.[51]
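To make the earlier point about goal functions concrete, here is a minimal Python sketch of both kinds of goal; the function names, the inputs, and the similarity measure are all invented for illustration rather than taken from any real system.

    def go_win_reward(ai_won):
        """The 'simple' kind of goal function: 1 if the AI wins a game of Go, 0 otherwise."""
        return 1.0 if ai_won else 0.0

    def imitation_reward(action_features, past_rewarded_features):
        """A sketch of the 'complex' kind of goal: score an action by how
        mathematically similar it is to actions that earned rewards in the past.
        Similarity here is a plain dot product; real systems use far richer measures."""
        return sum(a * b for a, b in zip(action_features, past_rewarded_features))

    print(go_win_reward(True))                        # 1.0
    print(imitation_reward([0.2, 0.8], [0.5, 0.5]))   # 0.5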

Some of the 'learners' described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world.

In practice, it is almost never possible to consider every possibility, because of the phenomenon of 'combinatorial explosion', where the amount of time needed to solve a problem grows exponentially.

The third major approach, extremely popular in routine business AI applications, comprises analogizers such as SVM and nearest-neighbor: 'After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza'.
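The following is a minimal sketch of the nearest-neighbor idea using scikit-learn; the patient features, labels, and values are invented toy data, not a real clinical dataset.

    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical past records: [temperature in C, cough severity 0-3, age],
    # plus whether each patient turned out to have influenza.
    # (A production system would normalize these features first.)
    past_patients = [
        [39.1, 3, 34],
        [38.7, 2, 61],
        [36.8, 0, 25],
        [37.0, 1, 47],
        [39.4, 3, 19],
        [36.6, 0, 58],
    ]
    had_flu = [1, 1, 0, 0, 1, 0]

    # Classify a new patient by looking at the 3 most similar past records.
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(past_patients, had_flu)

    new_patient = [[38.9, 2, 40]]
    print(model.predict(new_patient))        # predicted label
    print(model.predict_proba(new_patient))  # the "X% of similar patients" figure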

A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial 'neurons' that learn by comparing the network's output to the desired output and altering the strengths of the connections between its internal neurons to 'reinforce' connections that seemed to be useful.
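As a rough illustration of that learning-by-comparison idea, the sketch below trains a single artificial 'neuron' with the classic perceptron rule on a toy task; the task, learning rate, and epoch count are arbitrary choices, not part of any particular system described here.

    import random

    random.seed(0)

    # Toy task: learn the logical AND of two binary inputs.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    learning_rate = 0.1

    def fire(total):
        """Step activation: the neuron 'fires' (1) or stays silent (0)."""
        return 1 if total > 0 else 0

    for epoch in range(100):
        for inputs, target in data:
            output = fire(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - output  # compare the output to the desired output
            # Strengthen or weaken each connection in proportion to its input.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error

    for inputs, target in data:
        prediction = fire(sum(w * x for w, x in zip(weights, inputs)) + bias)
        print(inputs, "->", prediction, "(target", target, ")")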

Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[61]
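One common concrete form of that trade-off is a regularized objective that rewards fit while penalizing large, complex models. The sketch below uses ridge regression from scikit-learn, where the alpha parameter sets the strength of the complexity penalty; the synthetic data are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))
    # Only the first feature truly matters; the rest is noise a model could overfit.
    y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=20)

    plain = LinearRegression().fit(X, y)
    penalized = Ridge(alpha=10.0).fit(X, y)  # alpha = strength of the complexity penalty

    print("unpenalized weights:", np.round(plain.coef_, 2))
    print("penalized weights:  ", np.round(penalized.coef_, 2))
    # The penalized model keeps the irrelevant weights closer to zero,
    # i.e. it prefers the 'simpler theory' unless the data strongly demand otherwise.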

A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[62]

Instead, such classifiers learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects.

Humans also have a powerful mechanism of 'folk psychology' that helps them to interpret natural-language sentences such as 'The city councilmen refused the demonstrators a permit because they advocated violence'.

For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[71][72][73]

By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[75]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a 'combinatorial explosion': they became exponentially slower as the problems grew larger.[55]

In addition, some projects attempt to gather the 'commonsense knowledge' known to the average person into a database containing extensive knowledge about the world.

Such broad knowledge bases can also act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (a field of interest or area of concern).

They need a way to visualize the future (a representation of the state of the world, along with the ability to predict how their actions will change it) and to make choices that maximize the utility (or 'value') of the available choices.[97]

A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts.

Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well.

Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications.

Machine perception is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world.

A giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its 'object model' to assess that fifty-meter pedestrians do not exist.[113]

Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[115]

Moravec's paradox is named after Hans Moravec, who stated in 1988 that 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility'.[119][120]

Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[129]

Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[130]

These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI.

Many researchers predict that such 'narrow AI' work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[17][133]

One high-profile example is that DeepMind in the 2010s developed a 'generalized artificial intelligence' that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[134][135][136]

Hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to 'slurp up' a comprehensive knowledge base from the entire unstructured Web.[5]

Finally, a few 'emergent' approaches aim to simulate human intelligence extremely closely, on the belief that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[139][140]

For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence).

A problem like machine translation is considered 'AI-complete', because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science.

Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[14]

His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[147]

Other researchers found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[151]

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.

This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[154][155][156][157]

Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient.

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results.

However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures.

The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d]

Compared with GOFAI, new 'statistical learning' techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets.

The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models.

In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.[162][163]

Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[171]

The solution, for many problems, is to use 'heuristics' or 'rules of thumb' that prioritize choices in favor of those that are more likely to reach a goal and to do so in a smaller number of steps.

In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called 'pruning the search tree').

These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top.
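A minimal sketch of blind hill climbing on a made-up one-dimensional 'landscape' follows; the objective function, step size, and iteration count are arbitrary illustrative choices.

    import random

    def landscape(x):
        """A made-up objective with a single peak at x = 3."""
        return -(x - 3.0) ** 2

    def hill_climb(start, step=0.1, iterations=1000):
        current = start
        for _ in range(iterations):
            # Propose a small random move and keep it only if it goes 'uphill'.
            candidate = current + random.uniform(-step, step)
            if landscape(candidate) > landscape(current):
                current = candidate
        return current

    best = hill_climb(start=random.uniform(-10.0, 10.0))
    print(round(best, 2))  # ends up close to 3.0, the top of this landscape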

Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[176][177]

Fuzzy set theory assigns a 'degree of truth' (between 0 and 1) to vague statements such as 'Alice is old' (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false.

Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as 'if you are close to the destination station and moving fast, increase the train's brake pressure'.
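The sketch below shows one way such a vague rule might be encoded: membership functions assign degrees of truth to 'close' and 'fast', and the rule combines them. The thresholds and the braking formula are invented for illustration and are not taken from any real controller.

    def degree_close(distance_m):
        """Degree of truth (0..1) of 'the train is close to the station'."""
        if distance_m <= 100:
            return 1.0
        if distance_m >= 1000:
            return 0.0
        return (1000 - distance_m) / 900

    def degree_fast(speed_kmh):
        """Degree of truth (0..1) of 'the train is moving fast'."""
        if speed_kmh >= 120:
            return 1.0
        if speed_kmh <= 30:
            return 0.0
        return (speed_kmh - 30) / 90

    def brake_pressure(distance_m, speed_kmh):
        # Rule: IF close AND fast THEN increase brake pressure.
        # 'AND' is taken as the minimum of the two degrees of truth.
        return min(degree_close(distance_m), degree_fast(speed_kmh))

    print(brake_pressure(distance_m=150, speed_kmh=110))  # high pressure
    print(brake_pressure(distance_m=900, speed_kmh=40))   # low pressure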

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[195]
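As a minimal illustration of the filtering idea, the following sketch runs a one-dimensional Kalman filter over a simulated noisy, slowly drifting signal; the noise levels and the simulated 'true' process are invented.

    import random

    random.seed(0)

    # A slowly drifting true value observed through a noisy sensor.
    true_value, estimate, estimate_var = 0.0, 0.0, 1.0
    process_var, sensor_var = 0.01, 0.5

    for step in range(20):
        true_value += random.gauss(0.0, process_var ** 0.5)   # the world changes slightly
        measurement = true_value + random.gauss(0.0, sensor_var ** 0.5)

        # Predict: uncertainty grows while waiting for the next observation.
        estimate_var += process_var
        # Update: blend prediction and measurement by their relative certainties.
        gain = estimate_var / (estimate_var + sensor_var)
        estimate += gain * (measurement - estimate)
        estimate_var *= (1.0 - gain)

        print(f"step {step:2d}: true={true_value:+.3f} "
              f"measured={measurement:+.3f} filtered={estimate:+.3f}")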

Complicated graphs with diamonds or other 'loops' (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities.

Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as 'naive Bayes' on most practical data sets.[210][211]
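A small sketch of that comparison, contrasting a discriminative classifier (an SVM) with a generative, model-based one (Gaussian naive Bayes) on synthetic data via scikit-learn; because the dataset and split are arbitrary, the printed accuracies only illustrate how such comparisons are run, not which method is better.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    # Synthetic two-class data standing in for a 'practical data set'.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    svm = SVC().fit(X_train, y_train)                 # discriminative classifier
    naive_bayes = GaussianNB().fit(X_train, y_train)  # generative, model-based classifier

    print("SVM accuracy:        ", svm.score(X_test, y_test))
    print("Naive Bayes accuracy:", naive_bayes.score(X_test, y_test))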

A simple 'neuron' N accepts input from multiple other neurons, each of which, when activated (or 'fired'), casts a weighted 'vote' for or against whether neuron N should itself activate.

One simple algorithm (dubbed 'fire together, wire together') is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another.
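A minimal sketch of that 'fire together, wire together' update: the weight between two units grows whenever both are active at the same time. The recorded activity patterns and the learning rate are invented for illustration.

    # Hebbian update for the connection between neuron A and neuron B:
    # strengthen the weight whenever the two fire together.
    learning_rate = 0.05
    weight = 0.0

    # Hypothetical recorded activity (1 = fired, 0 = silent) over ten time steps.
    activity_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    activity_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

    for a, b in zip(activity_a, activity_b):
        weight += learning_rate * a * b  # only co-activation strengthens the link

    print(round(weight, 2))  # grows with how often the two neurons fired together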

In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events).

Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ('fire together, wire together'), GMDH or competitive learning.[216]

However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches.

For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a 'credit assignment path' (CAP) depth of seven.
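A minimal sketch of such a network in PyTorch (assuming PyTorch is available): six hidden layers plus an output layer, giving a credit assignment path of depth seven under the counting convention described above; the layer widths are arbitrary.

    import torch
    import torch.nn as nn

    # Six hidden layers + one output layer: a CAP depth of seven.
    model = nn.Sequential(
        nn.Linear(16, 32), nn.ReLU(),   # hidden layer 1
        nn.Linear(32, 32), nn.ReLU(),   # hidden layer 2
        nn.Linear(32, 32), nn.ReLU(),   # hidden layer 3
        nn.Linear(32, 32), nn.ReLU(),   # hidden layer 4
        nn.Linear(32, 32), nn.ReLU(),   # hidden layer 5
        nn.Linear(32, 16), nn.ReLU(),   # hidden layer 6
        nn.Linear(16, 1),               # output layer
    )

    x = torch.randn(4, 16)   # a batch of four random 16-dimensional inputs
    print(model(x).shape)    # torch.Size([4, 1])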

Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[225][226][224]

In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.[232]

Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[233]

In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[243]

The main areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.

The 'imitation game' (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[262]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, and prediction of judicial decisions.[268]

Social media sites are overtaking TV as a source of news for young people, and news organisations are increasingly reliant on social media platforms for distribution.[271]

In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[273]

Another study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[276]

The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery; the team claimed it did so better than a human surgeon.[277]

However, Google has been working on an algorithm aimed at eliminating the need for pre-programmed maps and instead creating a device that can adjust to a variety of new surroundings.[284]

Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[285]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

A 2018 report by the Guardian newspaper in the UK found that online gambling companies were using AI to predict the behavior of customers in order to target them with personalized promotions.[299]

Developers of commercial AI platforms are also beginning to appeal more directly to casino operators, offering a range of existing and potential services to help them boost their profits and expand their customer base.[300]

He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down.

If this AI's goals do not reflect humanity's – one example is an AI told to compute as many digits of pi as possible – it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.

For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[327][328]

Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[337]

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[343]

The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.

In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines.

Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence.

Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

'I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.'[346]

The philosophical position that John Searle has named 'strong AI' states: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'[349]

The technological singularity is the point at which accelerating progress in technologies causes a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, thus radically changing or even ending civilization.

Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[355]

Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts.

In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the organic human form with lifelike muscular metallic skins; his later book The Gynoids followed and was used by or influenced movie makers including George Lucas and other creatives.

Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.

Benefits and Risks of Artificial Intelligence

et al.[1] Let's keep it that way, lest systems built to protect human rights on millennia of wisdom are brought down by some artificial intelligence engineer trying to clock a milestone on their Gantt chart!

The strength of the FDA, the MDD, the TGA and their likes in the developing nations is a testament to how the rigor of research and regulation grow together, so that another initiative such as the development of the atomic bomb is nipped in the bud before it so much as begins!

 And then I read about the enormous engagement of the global software industry in the areas of Artificial Intelligence and Neuroscience.

These standards would serve as instruments to preserve the simple principle upon which every justice system in the world has been built, viz. that the brain and nervous system of an individual belong to that individual and are not to be accessed by other individuals or machines without stated consent for stated purposes.

The standards would identify the frequency bands or pulse trains to be excluded from all research tools (software or otherwise), commercially available products, regulated devices, tools of trade, and communication infrastructure, such that inadvertent breach of the barriers to an individual's brain and nervous system is prohibited.

Artificial Intelligence

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

Don't Trust Artificial Intelligence? Time To Open The AI 'Black Box'

“Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners,” explains David Gunning, program manager at the Defense Advanced Research Projects Agency (DARPA), an agency of the DoD.

“Explainable AI is going to be extremely important for us in healthcare in actually bridging this gap from understanding what might be possible and what might be going on with your health, and actually giving clinicians tools so that they can really be comfortable and understand how to use,” says Sanji Fernando, vice president and head of the OptumLabs Center for Applied Data Science.

“That’s why we think there’s some amazing work happening in academia, in academic institutions, in large companies, and within the federal government, to safely approve this kind of decision making.”

The Explainability Tradeoff

While organizations like DARPA are actively investing in XAI, there is an open question as to whether such efforts detract from the central priority to make AI algorithms better.

“Computers are increasingly a more important part of our lives, and automation is just going to improve over time, so it’s increasingly important to know why these complicated AI and ML systems are making the decisions that they are.” Some AI research efforts may thus be at cross-purposes with others.

“If you want your system to be explainable, you’re going to have to make do with a simpler system that isn’t as powerful or accurate.” The $2 billion that DARPA is investing in what it calls ‘third-wave AI systems,’ however, may very well be sufficient to resolve this tradeoff.

“XAI is one of a handful of current DARPA programs expected to enable ‘third-wave AI systems,’ where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” DARPA’s Gunning adds.

Association for the Advancement of Artificial Intelligence

Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.

AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.

Major AAAI activities include organizing and sponsoring conferences, symposia, and workshops, publishing a quarterly magazine for all members, publishing books, proceedings, and reports, and awarding grants, scholarships, and other honors.

Contributions make possible projects such as the AI poster, the open access initiative, components of the AAAI annual conference, a lowered membership rate for students as well as student scholarships, and more.

Artificial Intelligence: Mankind's Last Invention


Preparing for a future with Artificial Intelligence | Robin Winsor | TEDxYYC

It's often said that history repeats itself. Many times in the course of our history, new technologies have wiped out entire workforces. For upcoming generations ...

The Current State and Future of Artificial Intelligence | A Documentary by Ashlee Vance

In this documentary the story of AI's rise is told in detail for the first time, as journalist Ashlee Vance heads to the unexpected birthplace of the technology, ...

Elon Musk's Last Warning About Artificial Intelligence


The Birth of Artificial Intelligence


Joe Rogan - Elon Musk on Artificial Intelligence

Taken from Joe Rogan Experience #1169:

Scary Artificial Intelligence Breakthrough - What You're About to See will Shock You


*MUST SEE* Dangers of artificial intelligence documentary (2018)

The future is looking pretty bleak...

True Artificial Intelligence will change everything | Juergen Schmidhuber | TEDxLakeComo

Artificial Intelligence Scientist. Scientific Director of the Swiss AI Lab, IDSIA, DTI, SUPSI; Prof. of AI, Faculty of Informatics, USI, Lugano; Co-founder & Chief ...

5 CREEPIEST Things Done By Artificial Intelligence Robots...
