
Digging for Data Part 2: Ethics, AI and Singularity

As the chief data scientist at linear A, Brett Nebeker brings a rare combination of experience in full-stack data science, data engineering, organizational data strategy, and direct, hands-on stakeholder engagement to clients, owners and end users. Brett brings thought leadership and deep technical expertise to each project at linear A.

He connects data to the design process to examine, evaluate and visualize solutions, providing a high-value service to clients, key stakeholders and end users. By measuring potential design solutions against consistent experience and human-centered performance metrics, he enables clients and key stakeholders to accurately understand the trade-offs of each proposed solution and make highly informed strategic decisions backed by academic research and statistically significant factors.

Technological singularity

The technological singularity—also, simply, the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.[2][3]

According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an 'explosion' in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
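As a back-of-the-envelope illustration of this feedback loop, here is a toy simulation in Python (every parameter is an illustrative assumption, not a claim from the literature) in which each generation is a fixed factor smarter than its predecessor and designs its successor proportionally faster:

# Toy model of the "runaway reaction" of self-improvement cycles.
# Illustrative assumptions: each generation is `gain` times smarter
# than the last, and design time shrinks in proportion to the
# designer's intelligence.
def simulate(generations=10, gain=1.5, first_design_years=10.0):
    intelligence = 1.0  # human baseline
    elapsed = 0.0       # total years elapsed
    for g in range(1, generations + 1):
        elapsed += first_design_years / intelligence
        intelligence *= gain  # the new generation is smarter
        print(f"gen {g:2d}: intelligence {intelligence:8.2f} after {elapsed:6.2f} years")

simulate()

Because the design times shrink geometrically, their sum converges (to 30 years with these numbers) even though intelligence grows without bound, which is the 'runaway' intuition in miniature.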

Stanislaw Ulam reports a discussion with von Neumann 'centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue'.[5]

The concept and the term 'singularity' were popularized by Vernor Vinge in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of.

If an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine.

These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[16]

AGI would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown, shortly after the technological singularity is achieved.


A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

The speculated ways to produce intelligence augmentation are many, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading.

Hanson (1998) is skeptical of human intelligence augmentation, writing that once the 'low-hanging fruit' of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find.

Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.[citation needed]

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[35]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[36]
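As a rough sketch (a simplification for illustration, not Kurzweil's own formulation), plain exponential growth with a fixed doubling time $T$ is

\[ C(t) = C_0 \cdot 2^{t/T}, \]

and the law of accelerating returns amounts to the claim that $T$ itself shrinks as successive technology paradigms take over, so overall progress is faster than any single exponential.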

Kurzweil reserves the term 'singularity' for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that 'The Singularity will allow us to transcend these limitations of our biological bodies and brains ...'

He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date 'will not represent the Singularity' because they do 'not yet correspond to a profound expansion of our intelligence.'[40]


He predicts paradigm shifts will become increasingly common, leading to 'technological change so rapid and profound it represents a rupture in the fabric of human history'.[41]

Algorithmic improvement, unlike hardware improvement, does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately, whereas an AI could rewrite its own software entirely on its own.[citation needed]

Though not actively malicious, there is no reason to think that an AI would actively promote human goals unless it were programmed to do so; otherwise, it might use the resources currently devoted to supporting mankind to promote its own goals, causing human extinction.[49][50][51]

Some researchers suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[52]

Some critics, like philosopher Hubert Dreyfus, assert that computers or machines cannot achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same.[54]

As Steven Pinker put it: 'Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived.'

Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[38][63][64]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[65]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively 'notable events' appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[66]

Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.

Hawking believed that in the coming decades, AI could offer 'incalculable benefits and risks' such as 'technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.'[77]

Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.[78][79][80]

Suppose we tell a superintelligent AI to solve a mathematical problem; it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race.

In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

In a hard takeoff scenario, an AGI rapidly self-improves, 'taking control' of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals.

In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.[92][93]

Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that 'creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.'[95]
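One hedged way to formalize Naam's point: if the effort needed to design a mind of intelligence $n$ scales as $E(n) \propto n^{\alpha}$ for some $\alpha > 1$ (the exponent is an assumption for illustration), then

\[ \frac{E(2n)}{E(n)} = 2^{\alpha} > 2, \]

so each doubling of intelligence costs more than twice the previous effort, and smarter designers do not automatically yield accelerating progress.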

Storrs Hall believes that 'many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process' in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.

Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[96]

Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.

Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called 'Digital Ascension' that involves 'people dying in the flesh and being uploaded into a computer and remaining conscious'.[102]

In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the 'ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.'[5]

In Stanislaw Lem's novel, Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.

As Vernor Vinge wrote in 1983, once humans create intelligences greater than their own, 'human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.'

In 1985, in 'The Time Scale of Artificial Intelligence', artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an 'infinity point': if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[6][106]
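The arithmetic behind the infinity point is a convergent geometric series: the successive doubling times sum to

\[ 4 + 2 + 1 + \tfrac{1}{2} + \cdots = \sum_{k=0}^{\infty} \frac{4}{2^k} = 8, \]

so the community completes infinitely many doublings, and its speed diverges, within a finite eight-year horizon.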

Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[7]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is 'to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges.'[110]

The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.


Furthermore, the fear of machines taking work and control away from humans, along with the fear of an AI singularity in which it becomes impossible to control the machines, resulting in unexpected changes to human civilization and adverse effects on its genetic composition, needs to be addressed.

As far as the impact of AI and automation on existing jobs is concerned, it is imperative that a reskilling and upskilling plan be in place before rolling out automated systems. Take, for example, data entry work, such as reading rows from an Excel sheet and feeding them into an SAP system for an invoicing process, which can be automated and shifted to a virtual workforce.
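A minimal sketch of such a bot in Python follows; the endpoint URL, authentication scheme, and three-column sheet layout are hypothetical placeholders, not a real SAP interface:

# Sketch: read invoice rows from an Excel sheet and post them to an
# invoicing endpoint. The URL, token, and column layout below are
# hypothetical placeholders, not a real SAP API.
from openpyxl import load_workbook
import requests

INVOICE_ENDPOINT = "https://erp.example.com/api/invoices"  # placeholder

def push_invoices(xlsx_path, api_token):
    sheet = load_workbook(xlsx_path, read_only=True).active
    # Assumes columns: customer, amount, currency, with one header row.
    for customer, amount, currency in sheet.iter_rows(min_row=2, values_only=True):
        resp = requests.post(
            INVOICE_ENDPOINT,
            json={"customer": customer, "amount": amount, "currency": currency},
            headers={"Authorization": f"Bearer {api_token}"},
            timeout=30,
        )
        resp.raise_for_status()  # surface failures for human review

push_invoices("invoices.xlsx", api_token="...")

Even with such a bot in place, people are still needed to handle exceptions, verify edge cases, and keep the field mapping current, which is exactly where the reskilling plan comes in.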

Without creating hype around driverless cars involving AIoT: a number of sub-tasks are required for continuous rendering of the surrounding environment and the prediction of possible changes to those surroundings (https://iiot-world.com/machine-learning/machine-learning-algorithms-in-autonomous-driving/). These sub-tasks do not imply that humans will not be required: people must still validate the pattern matching for safety, continuously improve the training set for accuracy, and update the model (a neural network) in order to build human-centered AI systems.

In this century of technology disruption, the anxiety caused by the displacement of humans and the lack of clarity on reskilling/upskilling can be shortened by communicating a clear purpose, the value and impact the technology brings, a clear strategy design and implementation plan, and an objective of human augmentation rather than human reduction.

At the same time, human compassion, empathy, wisdom, critical thinking, storytelling and other soft/professional skills (now becoming hard skills) will always prevail through any type of technological innovation, driving the socio-economic and political growth of the entire nation.


Wednesday, March 11th, 2019, 6:00 – 9:00 PM, Princeton, New Jersey. Topic: AI/ML – Realizing the Potential. This is part 2, a follow-up to session 1 on the topic 'Artificial Intelligence to Singularity', held on Oct 16th, 2019.

Panelists: TBC. Moderator: Rajesh Makhija, Founder and CEO, GoGestalt Corporation, and Founding Partner, 92angels. Rajesh is the Founder and CEO of GoGestalt, which helps enterprises transform their workforce by elevating digital competency and mindset, thus enabling a digital culture and helping them imagine what is digitally possible.

Rajesh is also the founding partner of 92angels, an angel investment company nurturing startups focused on the human aspects of technology.

In addition, Arun is a topic lead at the ITU/WHO Focus Group on AI for Health – a WHO initiative for evaluating AI and creating a global community on AI for health.

Foreshadowing the Singularity

Exciting thoughts on the "technological singularity" from some of its foremost thought leaders, including Ray Kurzweil, Peter Diamandis, Jason Silva, Kevin Kelly, ...

Ray Kurzweil: The Coming Singularity | Big Think


Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 - The Future of Intelligence

Should science and society welcome 'the singularity' – the idea of the hypothetical moment in time when artificial intelligence surpasses human intelligence?

The Future of Technology, AI, and the Singularity | Nikola Danaylov | Devo Talks #005

Today's episode of Devo Talks features an intriguing Skype interview with Nikola Danaylov from Toronto. Now if you don't know Nik or Socrates as many of us ...

Artificial Intelligence | Future of Everything with Jason Silva | Singularity University

"AI is perhaps the granddaddy of all exponential technologies. Surely to transform the world and the human race in ways that we can barely wrap our heads ...

A World Transformed By AI | Global Summit 2018 | Singularity University

Anita Schjøll Brede, Co-Founder & CEO, Iris.ai; Barney Pell, Co-Founder & Chairman, LocoMobi, and Co-Founder, Moon Express; Neil Jacobstein, Chair, Artificial ...

What is Technological Singularity? | Origins: The Journey of Humankind

Origins host Jason Silva explains the concept of technological singularity and how artificial intelligence is nothing to be afraid of.

Artificial Intelligence: Beyond the Robot Singularity

Moderator: Rich Karlgaard, Publisher and Futurist, Forbes Media. Speakers: Tom Bianculli, Chief Technology Officer, Zebra Technologies; Virginie Maisonneuve ...

Neil Jacobstein on Artificial Intelligence (part 1 of 2) | Singularity University

Neil Jacobstein, co-chair of AI and Robotics at Singularity University. Filmed during the November 2009 Executive Program at Singularity University.

Fabio Teixeira | Artificial Intelligence & Robotics | Singularity University

Fabio Teixeira is a tech entrepreneur and an alumnus of Singularity University and the International Space University. He teaches at The Hebrew University of ...