AI News: Technology and Artificial Intelligence

Technological singularity

According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) will eventually enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an 'explosion' in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

In one of the first uses of the term 'singularity' in the context of technological progress, Stanislaw Ulam, writing in his 1958 obituary for John von Neumann, recalled a conversation that 'centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue'.[5]

The concept and the term 'singularity' were popularized by Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of.

These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[16]
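
As a rough sketch of why the argument turns on the returns to each improvement cycle (an illustration of ours, not a model from the cited sources), compare compounding returns with diminishing ones:

```python
# Toy model of recursive self-improvement (illustrative only).
# Each generation's capability is derived from the previous one.

def explode(i0: float, gain: float, generations: int) -> list[float]:
    """Constant multiplicative returns: capability compounds each cycle."""
    caps = [i0]
    for _ in range(generations):
        caps.append(caps[-1] * gain)
    return caps

def diminishing(i0: float, step: float, generations: int) -> list[float]:
    """Diminishing returns: each improvement is harder than the last."""
    caps = [i0]
    for n in range(1, generations + 1):
        caps.append(caps[-1] + step / n)  # harmonic-style slowdown
    return caps

print(explode(1.0, 1.5, 10)[-1])      # ~57.7x after 10 cycles
print(diminishing(1.0, 0.5, 10)[-1])  # ~2.46x after 10 cycles
```

Whether self-improvement looks like the first curve or the second is precisely what the 'explosion' hypothesis asserts and its critics dispute.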

A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading.

Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the 'low-hanging fruit' of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.

Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.[citation needed]

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[33]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[34]

Kurzweil reserves the term 'singularity' for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that 'The Singularity will allow us to transcend these limitations of our biological bodies and brains ...

He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date 'will not represent the Singularity' because they do 'not yet correspond to a profound expansion of our intelligence.'[38]

He predicts paradigm shifts will become increasingly common, leading to 'technological change so rapid and profound it represents a rupture in the fabric of human history'.[39]
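
As a back-of-the-envelope illustration of what such acceleration implies (our arithmetic, not Kurzweil's own derivation), suppose the rate of progress doubles every decade relative to the year-2000 rate:

```latex
% Progress delivered by a century of decade-doubling acceleration,
% measured in "year-2000-equivalent" years:
\int_{0}^{100} 2^{t/10}\, dt \;=\; \frac{10}{\ln 2}\left(2^{10} - 1\right) \;\approx\; 14{,}760
% on the order of the "20,000 years of progress (at today's rate)"
% that Kurzweil claims the 21st century will contain.
```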

First, improving its own algorithms does not require external influence: whereas machines designing faster hardware would still require humans to create the improved hardware or to program factories appropriately, an AI rewriting its own source code could do so on its own.[citation needed]

Even if not actively malicious, there is no reason to think that an AI would actively promote human goals unless it were programmed to do so; if it were not, it might use the resources currently devoted to supporting mankind to promote its own goals, causing human extinction.[47][48][49]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research.

They suggest that in the case of a software-limited singularity, an intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI was developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[50]

Some critics, like philosopher Hubert Dreyfus, assert that computers or machines cannot achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same.[52]

Steven Pinker is similarly skeptical: 'Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived.'

Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[61][62]
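
For context on what is at stake mathematically (a standard result, not specific to these authors): hyperbolic growth, in which the growth rate scales with the square of the current level, reaches infinity in finite time, so everything turns on whether the feedback driving it persists.

```latex
% Hyperbolic growth implies a finite-time singularity:
\frac{dx}{dt} = a x^{2}
\quad\Longrightarrow\quad
x(t) = \frac{x_{0}}{1 - a x_{0} t}
% which diverges as t approaches t^* = 1/(a x_0). If the feedback
% producing the x^2 term shuts off (as Korotayev argues happened
% around the 1970s), the divergence never occurs.
```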

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[63]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively 'notable events' appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[64]

Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.

In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).[77][78][79]

We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race.

One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world.

Hawking believed that in the coming decades, AI could offer 'incalculable benefits and risks' such as 'technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.'

In a hard takeoff scenario, an AGI rapidly self-improves, 'taking control' of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals.

In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.[92][93]

Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that 'creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.'[95]
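
One way to make Naam's point concrete (a toy model of ours, not his) is to let the difficulty of each marginal gain in intelligence grow with the level already reached:

```latex
% Toy self-improvement dynamics with superlinear difficulty:
\frac{dI}{dt} = \frac{I}{I^{\alpha}} = I^{\,1-\alpha}
% alpha = 0 (constant difficulty): exponential "explosion".
% alpha > 0: integrating gives I(t) = (I_0^{\alpha} + \alpha t)^{1/\alpha},
% i.e. merely polynomial growth -- even modest superlinearity in
% the cost of intelligence defuses the runaway.
```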

Storrs Hall believes that 'many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process' in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.

Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[96]

Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.

Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called 'Digital Ascension' that involves 'people dying in the flesh and being uploaded into a computer and remaining conscious'.[102]

Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.

In 1985, in 'The Time Scale of Artificial Intelligence', artificial intelligence researcher Ray Solomonoff mathematically articulated the related notion of what he called an 'infinity point': if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[6][105]
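
The 'infinity point' arithmetic is a geometric series: each doubling takes half as long as the one before, so infinitely many doublings fit into a finite span. With the four-year figure above:

```latex
% Total time for infinitely many speed doublings:
T = 4 + 2 + 1 + \tfrac{1}{2} + \cdots
  = \sum_{n=0}^{\infty} 4\left(\tfrac{1}{2}\right)^{n} = 8 \text{ years}
% after which the community's speed (and, in the model, its
% capability) has doubled infinitely often in finite time.
```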

Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[7]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is 'to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges.'[109]

The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

19 Artificial Intelligence Technologies To Look For In 2019

Tech decision makers are looking, and should keep looking, for ways to effectively implement artificial intelligence technologies into their businesses and thereby drive value.

By providing algorithms, APIs (application programming interfaces), development and training tools, big data, and applications, machine learning platforms are gaining more and more traction every day.

One such platform bills itself as the first and only audience management tool in the world that applies real AI and machine learning to digital advertising to find the most profitable audience or demographic group for any ad.

Deep learning platforms use a specialized form of ML involving artificial neural networks with multiple layers of abstraction that can mimic the human brain, processing data and creating patterns for decision making.
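
A minimal sketch of the layered idea in plain NumPy (illustrative only; production deep learning platforms wrap training, hardware acceleration, and tooling around this core): each layer transforms the previous layer's output, building progressively more abstract features.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def layer(n_in, n_out):
    """One fully connected layer: weights and biases."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# A tiny 3-layer network: 8 inputs -> 16 -> 16 -> 1 output.
params = [layer(8, 16), layer(16, 16), layer(16, 1)]

def forward(x, params):
    """Pass data through each abstraction layer in turn."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:      # nonlinearity between layers
            x = relu(x)
    return x

batch = rng.normal(size=(4, 8))      # 4 examples, 8 features each
print(forward(batch, params).shape)  # (4, 1): one score per example
```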

It allows for more natural interactions between humans and machines, including interactions related to touch, image, speech, and body language recognition, and is big within the market research field.

Robotic process automation (RPA) uses scripts and methods that mimic and automate human tasks to support corporate processes. It is particularly useful when hiring humans for a specific job or task would be too expensive or inefficient.
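
A minimal sketch of the idea in Python, with a wholly hypothetical task and data: a script that performs the kind of repetitive clerical work (copying fields from incoming records into a standard report) that might otherwise be done by hand.

```python
import csv
import io

# Hypothetical incoming records, as they might arrive from an
# upstream system (in real RPA this could be a file drop, an email
# attachment, or a screen-scraped form).
INCOMING = """invoice_id,vendor,amount
1001,Acme Corp,250.00
1002,Globex,975.50
"""

TEMPLATE = "Invoice {invoice_id}: pay {vendor} ${amount}"

def process(raw: str) -> list[str]:
    """Mimic the human task: read each record, fill the template."""
    reader = csv.DictReader(io.StringIO(raw))
    return [TEMPLATE.format(**row) for row in reader]

for line in process(INCOMING):
    print(line)
```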

Digital twins are, at their simplest, lines of software code, but the most elaborate versions look like 3-D computer-aided design drawings full of interactive charts, diagrams, and data points.

AI and ML are now being used to move cyberdefense into a new evolutionary phase in response to an increasingly hostile environment: Breach Level Index detected a total of over 2 billion breached records during 2017.

Recurrent neural networks, which are capable of processing sequences of inputs, can be combined with ML techniques to create supervised learning technologies that uncover suspicious user activity and detect up to 85% of all cyber attacks. Startups such as Darktrace, which pairs behavioral analytics with advanced mathematics to automatically detect abnormal behavior within organizations, and Cylance, which applies AI algorithms to stop malware and mitigate damage from zero-day attacks, are both working in the area of AI-powered cyber defense.
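
A sketch of the supervised sequence-classification setup this describes, using PyTorch and synthetic data; the architecture, features, and threshold here are illustrative assumptions, not a description of Darktrace's or Cylance's actual systems. An LSTM reads a sequence of per-event features and emits a suspicion score.

```python
import torch
import torch.nn as nn

class ActivityClassifier(nn.Module):
    """LSTM over a sequence of per-event feature vectors."""
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # logit: suspicious or not

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

# Synthetic training data: 64 sequences of 20 events each,
# labeled 1 (suspicious) or 0 (benign) at random.
x = torch.randn(64, 20, 6)
y = torch.randint(0, 2, (64,)).float()

model = ActivityClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5):                      # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# At inference time, sequences scoring above a chosen threshold
# would be flagged for human review.
flags = torch.sigmoid(model(x)) > 0.85
print(int(flags.sum()), "of 64 sequences flagged")
```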

Compliance is the certification or confirmation that a person or organization meets the requirements of accepted practices, legislation, rules and regulations, standards or the terms of a contract, and there is a significant industry that upholds it.

And the volume of transaction activities flagged as potential examples of money laundering can be reduced as deep learning is used to apply increasingly sophisticated business rules to each one. Companies working in this area include Compliance.ai, a regtech company that matches regulatory documents to a corresponding business function;

Merlon Intelligence, a global compliance technology company that supports the financial services industry to combat financial crimes, and Socure, whose patented predictive analytics platform boosts customer acceptance rates while reducing fraud and manual reviews.

While some are rightfully concerned about AI replacing people in the workplace, let’s not forget that AI technology also has the potential to vastly help employees in their work, especially those in knowledge work.

Content creation now includes any material people contribute to the online world, such as videos, ads, blog posts, white papers, infographics, and other visual or written assets.

But peer-to-peer networks are also used by cryptocurrencies, and have the potential to even solve some of the world’s most challenging problems, by collecting and analyzing large amounts of data, says Ben Hartman, CEO of Bet Capital LLC, to Entrepreneur.  Nano Vision, a startup that rewards users with cryptocurrency for their molecular data, aims to change the way we approach threats to human health, such as superbugs, infectious diseases, and cancer, among others.

Another player utilizing peer-to-peer networks and AI is Presearch, a decentralized search engine that’s powered by the community and rewards members with tokens for a more transparent search system.

Marketing automation uses software to automate customer segmentation, customer data integration, and campaign management, and streamlines repetitive tasks, allowing strategic minds to get back to doing what they do best.

The software automates the entire process of campaign management and optimization, making daily adjustments per ad to super-optimize campaigns and managing budgets across multiple platforms and over several different demographic and micro-demographic groups per ad.

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence

One of the challenges for those tracking the artificial intelligence industry is that, surprisingly, there's no accepted, standard definition of what artificial intelligence really is.

In general, most people would agree that the fundamental goals of AI are to enable machines to have cognition, perception, and decision-making capabilities that previously only humans or other intelligent creatures have.

Saying AI but meaning something else

There is certainly a subset of those pursuing AI technologies with the goal of solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can.

AGI is certainly the goal for much of the AI research being done in academic and lab settings, as it gets to the heart of the basic question of whether intelligence is something only biological entities can have.

While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority of narrow AI solutions are not trying to achieve anything greater than the specific problem the technology is being applied to.

Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition.

From this perspective, cognitive technologies are indeed a subset of artificial intelligence technologies; the main difference is that AI can be applied both towards the goals of AGI and to narrowly focused applications.

On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.

The Invention of “Ethical AI”

In the penal case, our research led us to strongly oppose the adoption of risk assessment tools, and to reject the proposed technical adjustments that would supposedly render them “unbiased” or “fair.” But the Partnership’s draft statement seemed, as a colleague put it in an internal email to Ito and others, to “validate the use of RA [risk assessment] by emphasizing the issue as a technical one that can therefore be solved with better data sets, etc.” A second colleague agreed that the “PAI statement is weak and risks doing exactly what we’ve been warning against re: the risk of legitimation via these industry led regulatory efforts.” A third colleague wrote, “So far as the criminal justice work is concerned, what PAI is doing in this realm is quite alarming and also in my opinion seriously misguided.”

To be sure, the Partnership staff did respond to criticism of the draft by noting in the final version of the statement that “within PAI’s membership and the wider AI community, many experts further suggest that individuals can never justly be detained on the basis of their risk assessment score alone, without an individualized hearing.” This meek concession — admitting that it might not be time to start imprisoning people based strictly on software, without input from a judge or any other “individualized” judicial process — was easier to make because none of the major firms in the Partnership sell risk assessment tools for pretrial decision-making;

I argued, “If academic and nonprofit organizations want to make a difference, the only viable strategy is to quit PAI, make a public statement, and form a counter alliance.” Then a colleague proposed, “there are many other organizations which are doing much more substantial and transformative work in this area of predictive analytics in criminal justice — what would it look like to take the money we currently allocate in supporting PAI in order to support their work?” We believed Ito had enough autonomy to do so because the MIT-Harvard fund was supported in part by the Knight Foundation, though most of the money came from tech investors Pierre Omidyar, founder of eBay, via the Omidyar Network, and Reid Hoffman, co-founder of LinkedIn and Microsoft board member.

How artificial intelligence will change your world in 2019, for better or worse

From a science fiction dream to a critical part of our everyday lives, artificial intelligence is everywhere. You probably don't see AI at work, and that's by design.

Artificial Intelligence & the Future - Rise of AI (Elon Musk, Bill Gates, Sundar Pichai)|Simplilearn

Artificial Intelligence (AI) is currently the hottest buzzword in tech. Here is a video on the role of Artificial Intelligence and its scope in the future.

What is Artificial Intelligence Exactly?

Technology: AI in China

Companies and governments are turning to artificial intelligence to make streets safer, shopping more targeted and health care more accurate.

Machine Learning: Living in the Age of AI | A WIRED Film

“Machine Learning: Living in the Age of AI” examines the extraordinary ways in which people are interacting with AI today.

✪ TOP 5: NEW Artificial Intelligence Technology You NEED To See (AI Gadgets 2017)

Check out our latest picks of Top 5 Awesome New Artificial Intelligence Technology and Amazing AI Gadgets.

How China Is Using Artificial Intelligence in Classrooms | WSJ

A growing number of classrooms in China are equipped with artificial-intelligence cameras and brain-wave trackers.

Artificial Intelligence: Mankind's Last Invention

Technological Singularity Explained.

Artificial intelligence: What the tech can do today

Is the artificial intelligence we see in science fiction movies at all realistic?

Top 10 Artificial Intelligence Technologies in 2020 | Artificial Intelligence Trends | Edureka
