AI News, 7 Ways An Artificial Intelligence Future Will Change The World
According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) will eventually enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an 'explosion' in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
Stanislaw Ulam reports a discussion with von Neumann 'centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue'.
The concept and the term 'singularity' were popularized by Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of.
These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.
A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading.
Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the 'low-hanging fruit' of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.
Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular candidate among hypotheses for what would trigger the singularity.
The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law.
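The arithmetic behind such extrapolations is simple compound doubling. The sketch below is illustrative only: the 1971 starting point, the roughly 2,300-transistor count of the Intel 4004, and the two-year doubling period are rough assumptions, not measured data.

```python
# Illustrative only: extrapolate Moore's-law-style doubling.
# Starting assumptions (rough): the 1971 Intel 4004 at ~2,300
# transistors, doubling every 2 years.

def extrapolate(initial: float, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Project a quantity that doubles every `doubling_period` years."""
    return initial * 2 ** ((year - start_year) / doubling_period)

# Roughly 25 doublings between 1971 and 2021 under these assumptions,
# landing in the tens of billions of transistors.
projection = extrapolate(2300, 1971, 2021)
print(f"{projection:,.0f}")
```

The point of the exercise is how sensitive the result is to the doubling period: stretching it from two years to three cuts the 50-year projection by a factor of thousands, which is why generalizations of Moore's law are contested.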
Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.
Kurzweil reserves the term 'singularity' for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that 'The Singularity will allow us to transcend these limitations of our biological bodies and brains ...'
He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date 'will not represent the Singularity' because they do 'not yet correspond to a profound expansion of our intelligence.'
He predicts paradigm shifts will become increasingly common, leading to 'technological change so rapid and profound it represents a rupture in the fabric of human history'.
Improvements to algorithms have one advantage over faster hardware: software self-improvement does not require external influence, whereas machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.
While not necessarily malicious, an AI would have no reason to actively promote human goals unless it were programmed to do so; if it were not, it might use the resources currently devoted to supporting mankind to promote its own goals instead, causing human extinction.
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research.
They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI was developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.
Some critics, like philosopher Hubert Dreyfus, assert that computers or machines cannot achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same.
As Steven Pinker puts it: 'Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived.'
Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.
In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.
In a 2007 paper, Jürgen Schmidhuber stated that the frequency of subjectively 'notable events' appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.
Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.
In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.
Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind rather than inadvertently leading to an AI behaving in a way not intended by its creators. Nick Bostrom gives the whimsical example of an AI originally programmed with the goal of manufacturing paper clips which, upon achieving superintelligence, decides to convert the entire planet into a paper clip manufacturing facility.
We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race.
One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world.
Hawking believed that in the coming decades, AI could offer 'incalculable benefits and risks' such as 'technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.'
In a hard takeoff scenario, an AGI rapidly self-improves, 'taking control' of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals.
In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.
Ramez Naam points out that the computational complexity of higher intelligence may be much greater than linear, such that 'creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.'
J. Storrs Hall believes that 'many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process' in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.
Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.
Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.
Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.
Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called 'Digital Ascension' that involves 'people dying in the flesh and being uploaded into a computer and remaining conscious'.
In his 1958 obituary for John von Neumann, Ulam recalled their conversation about the 'ever accelerating progress of technology' and an approaching 'essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.'
In Stanisław Lem's novel, Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.
Vinge wrote that when this happens, 'human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.'
In 1985, in 'The Time Scale of Artificial Intelligence', artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an 'infinity point': if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.
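Solomonoff's arithmetic can be checked directly: the doubling intervals form a geometric series (4 + 2 + 1 + ... years) that sums to a finite 8 years, while the accumulated speedup grows without bound. A minimal sketch:

```python
# Solomonoff's 'infinity point' arithmetic: if each successive speed
# doubling takes half as long (4 years, then 2, then 1, ...), the total
# elapsed time converges toward 8 years while the speed multiplier
# 2**n grows without bound.

def time_and_speed(n_doublings: int, first_interval: float = 4.0):
    """Return (years elapsed, speed multiplier) after n doublings."""
    elapsed = sum(first_interval / 2 ** k for k in range(n_doublings))
    speedup = 2.0 ** n_doublings
    return elapsed, speedup

for n in (1, 5, 20):
    t, s = time_and_speed(n)
    print(f"{n} doublings: {t:.4f} years elapsed, {s:.0f}x speed")
# Elapsed time approaches 8 years; speed diverges.
```

The 8-year figure is just twice the first interval, the limit of the geometric series; the specific four-year starting interval is Solomonoff's illustrative assumption, not a prediction.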
Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.
In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is 'to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges.'
The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.
The Future of Artificial Intelligence
Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of drones from his college days suspended overhead, Gyongyosi punches some keys on a laptop to pull up grainy video footage of a forklift driver operating his vehicle in a warehouse.
It was captured from overhead courtesy of a Onetrack.AI “forklift vision system.” Employing machine learning and computer vision for detection and classification of various “safety events,” the shoebox-sized device doesn’t see all, but it sees plenty.
The mere knowledge that one of IFM’s devices is watching, Gyongyosi claims, has had “a huge effect.” “If you think about a camera, it really is the richest sensor available to us today at a very interesting price point,” he says.
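The kind of per-frame detection described above can be sketched in outline. This is a hypothetical illustration, not Onetrack.AI's actual pipeline: it assumes some upstream classifier emits a confidence score per video frame (stubbed here as plain floats) and groups consecutive high-scoring frames into discrete 'safety events'.

```python
# Hypothetical sketch: group consecutive frames whose (stubbed)
# classifier confidence exceeds a threshold into discrete events.

def detect_events(scores, threshold=0.8):
    """Return a list of (start_frame, end_frame) index pairs for
    runs of frames scoring at or above `threshold`."""
    events, start = [], None
    for i, score in enumerate(scores):
        if score >= threshold and start is None:
            start = i                      # event begins
        elif score < threshold and start is not None:
            events.append((start, i - 1))  # event ends
            start = None
    if start is not None:                  # event runs to final frame
        events.append((start, len(scores) - 1))
    return events

frame_scores = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.88, 0.2]
print(detect_events(frame_scores))  # [(2, 4), (7, 7)]
```

Grouping frames into events, rather than alerting on every high-scoring frame, is what keeps such a system from flooding operators with duplicate warnings for a single incident.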
Here’s another example: Tesla founder and tech titan Elon Musk recently donated $10 million to fund ongoing research at the non-profit research company OpenAI — a mere drop in the proverbial bucket if his $1 billion co-pledge in 2015 is any indication.
One thing is certain: after more than seven decades marked by hoopla and sporadic dormancy during a multi-wave evolutionary period that began with so-called “knowledge engineering,” progressed to model- and algorithm-based machine learning and is increasingly focused on perception, reasoning and generalization, AI has re-taken center stage as never before.
There’s virtually no major industry modern AI — more specifically, “narrow AI,” which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning — hasn’t already affected.
That’s especially true in the past few years, as data collection and analysis have ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing.
With companies spending a collective total of nearly $20 billion a year on AI products and services, tech giants like Google, Apple, Microsoft and Amazon spending billions to create those products and services, universities making AI a more prominent part of their respective curricula (MIT alone is dropping $1 billion on a new college devoted solely to computing, with an AI focus), and the U.S. Department of Defense upping its AI game, big things are bound to happen.
Of job displacement, AI expert Kai-Fu Lee warned: “The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt with job displacement…The simple question to ask is, ‘How routine is a job?’ And that is how likely [it is] a job will be replaced by AI, because AI can, within the routine task, learn to optimize itself.
And the more quantitative, the more objective the job is—separating things into bins, washing dishes, picking fruits and answering customer service calls—those are very much scripted tasks that are repetitive and routine in nature.
In a matter of five, 10 or 15 years, they will be displaced by AI.” In the warehouses of online giant and AI powerhouse Amazon, which buzz with more than 100,000 robots, picking and packing functions are still performed by humans — but that will change.
“One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs,” says Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana–Champaign and director of the school’s Coordinated Science Laboratory.
“In the future, if you don’t know coding, you don’t know programming, it’s only going to get more difficult.” And while many of those who are forced out of jobs by technology will find new ones, Vandegrift says, that won’t happen overnight.
“The transition between jobs going away and new ones [emerging],” Vandegrift says, “is not necessarily as painless as people like to think.” Mike Mendelson, a “learner experience designer” for NVIDIA, is a different kind of educator than Nahrstedt.
While some of these uses, like spam filters or suggested items for online shopping, may seem benign, others can have more serious repercussions and may even pose unprecedented threats to the right to privacy and the right to freedom of expression and information.
Speaking at London’s Westminster Abbey in late November of 2018, internationally renowned AI expert Stuart Russell joked (or not) about his “formal agreement with journalists that I won’t talk to them unless they agree not to put a Terminator robot in the article.” His quip revealed an obvious contempt for Hollywood representations of far-future AI, which tend toward the overwrought and apocalyptic.
Russell went on: “Once we have that capability, you could then query all of human knowledge and it would be able to synthesize and integrate and answer questions that no human being has ever been able to answer because they haven't read and been able to put together and join the dots between things that have remained separate throughout history.” That’s a mouthful.
More than a few leading AI figures subscribe (some more hyperbolically than others) to a nightmare scenario that involves what’s known as “singularity,” whereby superintelligent machines take over and permanently alter human existence through enslavement or eradication.
The late theoretical physicist Stephen Hawking famously postulated that if AI itself begins designing better AI than human programmers, the result could be “machines whose intelligence exceeds ours by more than ours exceeds that of snails.” Elon Musk believes and has for years warned that AGI is humanity’s biggest existential threat.
“I think that maybe five or ten years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.” While murderous machines may well remain fodder for fiction, many believe they’ll supplant humans in various ways.
As MIT physics professor and leading AI researcher Max Tegmark put it in a 2018 TED Talk, “The real threat from AI isn’t malice, like in silly Hollywood movies, but competence — AI accomplishing goals that just aren’t aligned with ours.” That’s Laird’s take, too.
“I think that’s science fiction and not the way it’s going to play out.” What Laird worries most about isn’t evil AI, per se, but “evil humans using AI as a sort of false force multiplier” for things like bank robbery and credit card fraud, among many other crimes.
Referencing the rapid transformational effect of atom splitting, first achieved by physicist Ernest Rutherford in 1917, Russell added, “It’s very, very hard to predict when these conceptual breakthroughs are going to happen.” But whenever they do, if they do, he emphasized the importance of preparation.
Bonus! Marketing Tools for Retail Marketers
Business owners need to embrace technology and solutions like artificial intelligence (AI) if they want to even think about remaining open in the months and years ahead.
Although promotions, discounts, and various other marketing techniques worked in the past, on their own they no longer deliver the results they once did.
Examples of AI Changing the Retail World
By this stage, we’re sure you know that artificial intelligence is the idea of technology being able to think and react to different scenarios.
Although some films and TV shows might dramatize the notion a little, it does describe machine and computer programs being able to think and learn.
We’re going to talk about visual search, beacon technology, the potential of an augmented reality app, how to remove friction from the checkout process, and more.
As time goes on, it’s expected that more brands will introduce technology that uses images to bring users to the right products.
Compared to other ideas we’re going to explore in this list, visual search is still in the early phases of its life but we’re sure it will continue to contribute to retail’s evolution.
If you haven’t used augmented reality previously, it uses the camera on a smartphone so that users can quite literally place items in their home.
Rather than employees wasting their time stocking and checking shelves, retailers are testing robots that traverse the aisles and flag missing products.
Customers don’t have to look at empty shelves, they don’t need to ask if there’s more of a certain item out the back, and the whole shopping process is easier.
As we’ve said before regarding AI, it isn’t so much replacing employees in retail, it’s making their jobs easier (while also improving the customer experience!).
To keep people entertained and make the shopping experience more fun, SoftBank’s humanoid robot Pepper even lights up, dances, and plays music.
There are many reasons why people have started to shop online — the convenience, delivery to the door, sometimes even cheaper prices — but there’s one problem with physical stores that can be fixed in 2019: the checkout process.
If you weren’t sure of the extent of this problem, one study found three-quarters of respondents citing the checkout process as the biggest pain point in retail.
If a retail outlet can provide a shopping experience that’s pleasant and meaningful, this is the first step to success.
In the future, it’s not beyond the realms of imagination to consider walking into a store, scanning the items with our mobile devices, and then paying through the app.
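At its core, that scan-and-pay flow reduces to looking up scanned barcodes in a product catalog and totalling the cart. A minimal sketch, with an entirely made-up catalog and barcodes:

```python
# Hypothetical sketch of a scan-and-pay cart: the catalog, barcodes,
# and prices below are invented for illustration.

CATALOG = {
    "012345": ("Coffee, 1 lb", 8.99),
    "067890": ("Milk, 1 qt", 2.49),
    "054321": ("Bread", 3.25),
}

def checkout(scanned_barcodes):
    """Total a cart of scanned barcodes, skipping unknown codes."""
    total = 0.0
    for code in scanned_barcodes:
        if code in CATALOG:
            name, price = CATALOG[code]
            total += price
    return round(total, 2)

print(checkout(["012345", "067890", "067890"]))  # 13.97
```

A production system would of course add payment authorization and inventory updates behind the same lookup, but the customer-facing step really is this simple, which is why the friction of a staffed checkout line stands out.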
With cameras giving staff a bird’s-eye view of the store, fewer people will risk stealing or behaving in a way that causes trouble.
While visual search and augmented reality apps help those at home, beacon technology is attracting those nearby to a physical store with clever recommendations.