
Transhumanism

Transhumanism (abbreviated as H+ or h+) is an international philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.[1][2]

The most common transhumanist thesis is that human beings may eventually be able to transform themselves into different beings with abilities so greatly expanded from the current condition as to merit the label of posthuman beings.[2]

The contemporary meaning of the term 'transhumanism' was foreshadowed by one of the first professors of futurology, FM-2030, who taught 'new concepts of the human' at The New School in the 1960s, when he began to identify people who adopt technologies, lifestyles and worldviews 'transitional' to posthumanity as 'transhuman'.[5]

The late 19th- to early 20th-century movement known as Russian cosmism also incorporated some ideas which later developed into the core of the transhumanist movement, in particular through its early protagonist, the Russian philosopher N. F. Fyodorov.

In his 1957 essay 'Transhumanism', the biologist Julian Huxley wrote that 'the great majority of human beings (if they have not already died young) have been afflicted with misery… we can justifiably hold the belief that these lands of possibility exist, and that the present limitations and miserable frustrations of our existence could be in large measure surmounted… The human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity.'[16]

In the Material and Man section of the manifesto, Noboru Kawazoe suggests that: After several decades, with the rapid progress of communication technology, every one will have a “brain wave receiver” in his ear, which conveys directly and exactly what other people think about him and vice versa.

In 1992, Max More and Tom Morrow founded the Extropy Institute, a catalyst for networking futurists and brainstorming new memeplexes by organizing a series of conferences and, more importantly, providing a mailing list, which exposed many to transhumanist views for the first time during the rise of cyberculture and the cyberdelic counterculture.

In 2012, the transhumanist Longevity Party was initiated as an international union of people who promote the development of scientific and technological means to significantly extend life; it now has more than 30 national organisations throughout the world.[40][41]

While such a 'cultural posthumanism' would offer resources for rethinking the relationships between humans and increasingly sophisticated machines, transhumanism and similar posthumanisms are, in this view, not abandoning obsolete concepts of the 'autonomous liberal subject', but are expanding its 'prerogatives' into the realm of the posthuman.[51]

Some secular humanists conceive transhumanism as an offspring of the humanist freethought movement and argue that transhumanists differ from the humanist mainstream by having a specific focus on technological approaches to resolving human concerns (i.e., technocentrism) and on the issue of human mortality.

However, other progressives have argued that posthumanism, whether it be its philosophical or activist forms, amounts to a shift away from concerns about social justice, from the reform of human institutions and from other Enlightenment preoccupations, toward narcissistic longings for a transcendence of the human body in quest of more exquisite ways of being.[53]

As an alternative, humanist philosopher Dwight Gilbert Jones has proposed a renewed Renaissance humanism through DNA and genome repositories, with each individual genotype (DNA) being instantiated as successive phenotypes (bodies or lives via cloning, Church of Man, 1978).

In his view, native molecular DNA 'continuity' is required for retaining the 'self' and no amount of computing power or memory aggregation can replace the essential 'stink' of our true genetic identity, which he terms 'genity'.

Instead, DNA/genome stewardship by an institution analogous to the Jesuits' 400-year vigil is a suggested model for enabling humanism to become our species' common credo, a project he proposed in his speculative novel The Humanist – 1000 Summers (2011), wherein humanity dedicates the coming centuries to harmonizing our planet and peoples.

The philosophy of transhumanism is closely related to technoself studies, an interdisciplinary domain of scholarly research dealing with all aspects of human identity in a technological society and focusing on the changing nature of relationships between humans and technology.[54]

Many transhumanists actively assess the potential for future technologies and innovative social systems to improve the quality of all life, while seeking to make the material reality of the human condition fulfill the promise of legal and political equality by eliminating congenital mental and physical barriers.

Transhumanist philosophers argue that there not only exists a perfectionist ethical imperative for humans to strive for progress and improvement of the human condition, but that it is possible and desirable for humanity to enter a transhuman phase of existence in which humans enhance themselves beyond what is naturally human.

Some theorists such as Ray Kurzweil think that the pace of technological innovation is accelerating and that the next 50 years may yield not only radical technological advances, but possibly a technological singularity, which may fundamentally change the nature of human beings.[56]

Certain transhumanist philosophers hold that, since all assumptions about what others experience are fallible, every attempt to help or protect beings that cannot correct what others assume about them, however well-intentioned, risks actually hurting them; they therefore conclude that all sentient beings deserve to be sapient.

This includes increasing the neuron count and connectivity in animals, as well as accelerating the development of connectivity, in order to shorten or, ideally, skip the non-sapient period of childhood during which an individual is incapable of deciding independently for itself.

Transhumanists of this description stress that the genetic engineering they advocate is the general insertion of modifications into both the somatic cells of living beings and germ cells, not the purging of individuals lacking the modifications, deeming the latter not only unethical but also unnecessary given the possibilities of efficient genetic engineering.[59][60][61][62]

Unlike many philosophers, social critics and activists who place a moral value on preservation of natural systems, transhumanists see the very concept of the specifically natural as problematically nebulous at best and an obstacle to progress at worst.[63]

In keeping with this, many prominent transhumanist advocates, such as Dan Agin, refer to transhumanism's critics, on the political right and left jointly, as 'bioconservatives' or 'bioluddites', the latter term alluding to the 19th century anti-industrialisation social movement that opposed the replacement of human manual labourers by machines.[64]

Several controversial new religious movements from the late 20th century, such as Raëlism, have explicitly embraced transhumanist goals of transforming the human condition by applying technology to the alteration of the mind and body.[72]

However, most thinkers associated with the transhumanist movement focus on the practical goals of using technology to help achieve longer and healthier lives, while speculating that future understanding of neurotheology and the application of neurotechnology will enable humans to gain greater control of altered states of consciousness, which were commonly interpreted as spiritual experiences, and thus achieve more profound self-knowledge.[73]

Some transhumanists believe in the compatibility between the human mind and computer hardware, with the theoretical implication that human consciousness may someday be transferred to alternative media (a speculative technique commonly known as mind uploading).[76]

Following this dialogue, William Sims Bainbridge, a sociologist of religion, conducted a pilot study, published in the Journal of Evolution and Technology, suggesting that religious attitudes were negatively correlated with acceptance of transhumanist ideas and indicating that individuals with highly religious worldviews tended to perceive transhumanism as being a direct, competitive (though ultimately futile) affront to their spiritual beliefs.[85]

Since 2009, the American Academy of Religion has held a 'Transhumanism and Religion' consultation during its annual meeting, where scholars in the field of religious studies seek to identify and critically evaluate any implicit religious beliefs that might underlie key transhumanist claims and assumptions, and to provide critical and constructive assessments of an envisioned future that places greater confidence in nanotechnology, robotics and information technology to achieve virtual immortality and create a superior posthuman species.[88]

As proponents of self-improvement and body modification, transhumanists tend to use existing technologies and techniques that supposedly improve cognitive and physical performance, while engaging in routines and lifestyles designed to improve health and longevity.[92]

Transhumanists support the emergence and convergence of technologies including nanotechnology, biotechnology, information technology and cognitive science (NBIC), as well as hypothetical future technologies like simulated reality, artificial intelligence, superintelligence, 3D bioprinting, mind uploading, chemical brain preservation and cryonics.

Therefore, they support the recognition and/or protection of cognitive liberty, morphological freedom and procreative liberty as civil liberties, so as to guarantee individuals the choice of using human enhancement technologies on themselves and their children.[96]

Criticisms of transhumanism and its proposals take two main forms: those objecting to the likelihood of transhumanist goals being achieved (practical criticisms) and those objecting to the moral principles or worldview sustaining transhumanist proposals or underlying transhumanism itself (ethical criticisms).

In her 1992 book Science as Salvation, philosopher Mary Midgley traces the notion of achieving immortality by transcendence of the material human body (echoed in the transhumanist tenet of mind uploading) to a group of male scientific thinkers of the early 20th century, including J. B. S. Haldane and J. D. Bernal.

However, bioethicist James Hughes suggests that one possible ethical route to the genetic manipulation of humans at early developmental stages is the building of computer models of the human genome, the proteins it specifies, and the tissue engineering that, he argues, it also codes for.

With the exponential progress in bioinformatics, Hughes believes that a virtual model of genetic expression in the human body will not be far behind and that it will soon be possible to accelerate approval of genetic modifications by simulating their effects on virtual humans.[5]

Christian theologians and lay activists of several churches and denominations have expressed similar objections to transhumanism and claimed that Christians attain in the afterlife what radical transhumanism promises, such as indefinite life extension or the abolition of suffering.

Reflecting a strain of feminist criticism of the transhumanist program, philosopher Susan Bordo points to 'contemporary obsessions with slenderness, youth and physical perfection', which she sees as affecting both men and women, but in distinct ways, as 'the logical (if extreme) manifestations of anxieties and fantasies fostered by our culture.'[116]

He claims that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability.

The film Blade Runner (1982) and the novels The Boys From Brazil (1976) and The Island of Doctor Moreau (1896) depict elements of such scenarios, but Mary Shelley's 1818 novel Frankenstein is most often alluded to by critics who suggest that biotechnologies could create objectified and socially unmoored people as well as subhumans.

For example, few groups are more cautious than the Amish about embracing new technologies, but, though they shun television and use horses and buggies, some are welcoming the possibilities of gene therapy since inbreeding has afflicted them with a number of rare genetic diseases.[106]

At least one public interest organization, the U.S.-based Center for Genetics and Society, was formed, in 2001, with the specific goal of opposing transhumanist agendas that involve transgenerational modification of human biology, such as full-term human cloning and germinal choice technology.

Lee M. Silver, the biologist and science writer who coined the term 'reprogenetics' and supports its applications, has expressed concern that these methods could create a two-tiered society of genetically engineered 'haves' and 'have-nots' if social democratic reforms lag behind implementation of enhancement technologies.[126]

These criticisms are also voiced by non-libertarian transhumanist advocates, especially self-described democratic transhumanists, who believe that the majority of current or future social and environmental issues (such as unemployment and resource depletion) need to be addressed by a combination of political and technological solutions (like a guaranteed minimum income and alternative technology).

Therefore, on the specific issue of an emerging genetic divide due to unequal access to human enhancement technologies, bioethicist James Hughes, in his 2004 book Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future, argues that progressives or, more precisely, techno-progressives must articulate and implement public policies (i.e., a universal health care voucher system that covers human enhancement technologies) in order to attenuate this problem as much as possible, rather than trying to ban human enhancement technologies.

In his 2002 book Our Posthuman Future and in a 2004 Foreign Policy magazine article, political economist and philosopher Francis Fukuyama designates transhumanism as the world's most dangerous idea because he believes that it may undermine the egalitarian ideals of democracy (in general) and liberal democracy (in particular) through a fundamental alteration of 'human nature'.[45]

AI pioneer Joseph Weizenbaum criticizes what he sees as misanthropic tendencies in the language and ideas of some of his colleagues, in particular Marvin Minsky and Hans Moravec, which, by devaluing the human organism per se, promote a discourse that enables divisive and undemocratic social policies.[129]

In fact, he says, political liberalism is already the solution to the issue of human and posthuman rights since in liberal societies the law is meant to apply equally to all, no matter how rich or poor, powerful or powerless, educated or ignorant, enhanced or unenhanced.[130]

Some fear future 'eugenics wars' as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior.[133]

The major transhumanist organizations strongly condemn the coercion involved in such policies and reject the racist and classist assumptions on which they were based, along with the pseudoscientific notions that eugenic improvements could be accomplished in a practically meaningful time frame through selective human breeding.[135]

In their 2000 book From Chance to Choice: Genetics and Justice, non-transhumanist bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler have argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.[137]

The common transhumanist position is a pragmatic one where society takes deliberate action to ensure the early arrival of the benefits of safe, clean, alternative technology, rather than fostering what it considers to be anti-scientific views and technophobia.

In this approach, planners would strive to retard the development of possibly harmful technologies and their applications, while accelerating the development of likely beneficial technologies, especially those that offer protection against the harmful effects of others.[57]

Roko's basilisk

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence.

The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being.

It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development.

The core claim is that a hypothetical, but inevitable, singular ultimate superintelligence may punish those who fail to help it or help create it.

The posited AI would be working to reduce existential risk, but it could do that most effectively not merely by preventing existential risk in its present, but also by 'reaching back' into its past to punish people who weren't MIRI-style effective altruists.

The entire affair constitutes a worked example of spectacular failure at community management and at controlling purportedly dangerous information.

The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[8]

Yudkowsky does not consider open discussion of the notion of 'acausal trade' with possible superintelligences to be provably safe,[9] but doesn't think the basilisk would work:[10]

If that were a thing I expected to happen given some particular design, which it never was, then I would just build a different AI instead---what kind of monster or idiot do people take me for?

LessWrong user jimrandomh noted in a comment on the original post the idea's similarity to the 'Basilisk' image from David Langford's science fiction story BLIT, which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it (also familiar from Harry Potter novels).

(Yudkowsky has described CEV, his 'coherent extrapolated volition' proposal for what a Friendly AI should aim at, as 'obsolete as of 2004', but CEV was still in live discussion as a plan for the Friendly AI in 2010.) Part of Roko's motivation for the basilisk post was to point out a possible flaw in the CEV proposal.

Thus, the most important thing in the world is to bring this future AI into existence properly and successfully ('this is crunch time for the entire human species'[20]), and therefore you should give all the money you can to the Institute.[21]

This style of reasoning is exemplified by Yudkowsky's 'torture versus dust specks' thought experiment, a scenario in which you should torture one person for 50 years if it would prevent dust specks in the eyes of a sufficiently large number of people.[27]

Such calculations have resulted in claims like eight lives being saved per dollar donated.
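
A calculation of this sort typically multiplies an astronomically large posited payoff by a tiny probability shift per dollar. The sketch below uses invented placeholder numbers purely to show the shape of the arithmetic; neither figure comes from the cited claim.

# Shape of an "expected lives saved per dollar" estimate (all numbers invented).
future_lives_at_stake = 1e16        # posited population of a post-singularity future
p_saved_per_dollar = 8e-16          # posited increase in success probability per dollar donated
expected_lives_per_dollar = future_lives_at_stake * p_saved_per_dollar
print(expected_lives_per_dollar)    # 8.0 "lives saved" per dollar

With large enough stakes, almost any nonzero probability shift yields an impressive-sounding expected value, which is exactly the feature critics object to.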

An alternative decision rule is to minimise the maximum loss in a worst-case scenario, which gives very different results from simple arithmetical utility maximisation and is unlikely to lead to torture as the correct answer.
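
As a toy illustration of that contrast, the sketch below uses invented utility numbers (none of them from the source; the population figure is just a large stand-in for 'a sufficiently large number of people') to show how simple additive utilitarianism and a minimax rule reach opposite verdicts on the torture-versus-dust-specks scenario.

# Additive utilitarianism vs. minimax on the dust-specks scenario (all numbers invented).
TORTURE_HARM = 1_000_000      # assumed disutility of torturing one person for 50 years
DUST_SPECK_HARM = 1e-6        # assumed disutility of one dust speck in one eye
N_PEOPLE = 3 ** 27            # stand-in for "a sufficiently large number of people"

def additive_utilitarian():
    # Summing tiny harms over enough people eventually exceeds one huge harm.
    specks_total = DUST_SPECK_HARM * N_PEOPLE
    return "torture" if TORTURE_HARM < specks_total else "dust specks"

def minimax():
    # Minimise the worst loss suffered by any single individual.
    worst_if_torture = TORTURE_HARM
    worst_if_specks = DUST_SPECK_HARM
    return "torture" if worst_if_torture < worst_if_specks else "dust specks"

print(additive_utilitarian())   # -> "torture"
print(minimax())                # -> "dust specks"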

LessWrong holds that the human mind is implemented entirely as patterns of information in physical matter, and that those patterns could, in principle, be run elsewhere and constitute a person that feels they are you, like running a computer program with all its data on a different PC.

This is not unduly strange (the concept follows from materialism, though feasibility is another matter), but Yudkowsky further holds that you should feel that another instance of you is not a separate person very like you, but literally the same you, with no instance privileged as 'the original'.

This conception of identity appears to have originated on the Extropians mailing list, which Yudkowsky frequented, in the 1990s, in discussions of continuity of identity in a world where minds could be duplicated.[30]

However, if one does not hold this view, the entire premise of Roko's Basilisk becomes meaningless, as you do not feel the torture of the simulated you, thus making the punishment irrelevant, and giving the hypothetical basilisk no incentive to proceed with the torture.

This is posited as a reasonable problem to consider in the context of superintelligent artificial intelligence, as an intelligent computer program could of course be copied and would not know which copy it actually was, or at what point in time it was running.
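
A trivial sketch of the underlying point, that nothing inside a copied program state marks one copy as 'the original' (the dictionary contents below are arbitrary placeholders):

import copy

# Two copies of the same program state are indistinguishable from the inside:
# the fact "which copy am I?" is not recorded anywhere in the state itself.
state = {"memories": ["read the post"], "believes_it_is_original": True}
clone = copy.deepcopy(state)
print(state == clone)   # True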

Many LessWrong regulars are fans of the sort of manga and anime in which characters meticulously work out each other's 'I know that you know that I know' and then behave so as to interact with their simulations of each other, including their simulations of simulating each other.

More generally, narrative theorists have suggested that the kind of relationship a reader has with an author of a fiction and his or her fictional characters can be analyzed via evolutionary game theory as a kind of 'non-causal bargaining' that allowed humans to solve the prisoner's dilemma in the evolution of cooperation.

The basilisk post grew out of an earlier post by Roko on the 'altruist's burden', which spoke of how, as MIRI (then SIAI) is the most important thing in the world, a good altruist's biggest problem is how to give everything they can to the cause without guilt at neglecting their loved ones, and how threats of being dumped for giving away too much of the couple's money had been an actual problem for some SIAI donors.[43]

The next day, 23 July, Roko posted 'Solutions to the Altruist's burden: the Quantum Billionaire Trick', which presents a scheme for action that ties together quantum investment strategy (if you gamble, you will definitely win in some Everett branch), acausal trade with unFriendly AIs in other Everett branches ...

The post warns that there is the ominous possibility that, if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation.

So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half).

Roko notes in the post that at least one Singularity Institute person had already worried about this scenario, to the point of nightmares, though it became convention to blame Roko for the idea.

Roko proposes a solution permitting such donors to escape this Hell for the price of a lottery ticket: if you buy a lottery ticket, there's an instance of you in some Everett branch who will win the lottery.

For some readers, the line of reasoning was so compelling that they believed the AI (which would know they'd once read Roko's post) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development.

Roko returned in passing a few months later, but shared his regret about ever learning about all the LessWrong ideas that led him to the basilisk idea (and has since attempted to leave LessWrong ideas behind entirely):[52]

I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm with such small durations of inattention, uncautiousness and/or stupidity, even if it is all premultiplied by a small probability.

The matter then became the occasional subject of contorted LW posts, as people tried to discuss the issue cryptically without talking about what they were talking about.[53][54][55]

Eventually, two and a half years after the original post, Yudkowsky started an official LessWrong uncensored thread on Reddit, in which he finally participated in discussion concerning the basilisk.

Meanwhile, his main reasoning tactic was to repeatedly assert that his opponents' arguments were flawed, while refusing to give arguments for his claims (another recurring Yudkowsky pattern), ostensibly out of fears of existential risk.

To greatly simplify it, a future AI entity with a capacity for extremely accurate predictions would be able to influence our behaviour in the present (hence the timeless aspect) by predicting how we would behave when we predicted how it would behave.

A future AI that rewards or punishes us based on certain behaviours could thereby make us behave as it wishes, if we predict its future existence and take actions to seek reward or avoid punishment accordingly.

Thus the hypothesised AI could use the punishment (in our future) as a deterrent in our present to gain our cooperation, in much the same way as a person who threatens us with violence (e.g., a mugger) can influence our actions, even though in the case of the basilisk there is no direct communication between ourselves and the AI, who each exist in possible universes that cannot interact.
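
As a toy illustration (all probabilities and costs below are invented for the sketch, not taken from the source), the posited deterrent reduces to an expected-value comparison from the present-day person's point of view:

# Toy expected-value comparison (all numbers invented).
p_ai_exists = 0.01       # credence that such an AI will ever exist
p_punish = 0.5           # credence that it would carry out the threat
punishment_cost = 1e6    # disutility of the threatened punishment
cooperation_cost = 1e3   # disutility of complying (donating) today

ev_ignore = -(p_ai_exists * p_punish * punishment_cost)   # -5000.0
ev_comply = -cooperation_cost                             # -1000.0

# The threat "works" only if complying looks cheaper than ignoring it.
print("comply" if ev_comply > ev_ignore else "ignore")    # "comply" with these numbers

The whole dispute is over whether the inputs to such a comparison can ever be meaningful, given that the two parties never causally interact.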

Even such an AI, however, could not prove that it was not itself inside a simulated world created by an even more powerful AI which intended to reward or punish it based on its actions towards the simulated humans it has created.

The basilisk dilemma bears some resemblance to Pascal's wager, the policy proposed by the 17th-century mathematician Blaise Pascal that one should devote oneself to God, even though we cannot be certain of God's existence, since God may offer us eternal reward (in heaven) or eternal punishment (in hell).

According to Pascal's reasoning, the probability of God's existence does not matter, since any finite cost (in Pascal's case, the burden of leading a Christian life) is far outweighed by the prospect of infinite reward or infinite punishment.

Pascal focused unduly on the characteristics of one possible variety of god (a Christian god who punishes and rewards based on belief alone), ignoring other possibilities, such as a god who punishes those who feign belief Pascal-style in the hope of reward.
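
A minimal sketch of the arithmetic (the credences here are invented placeholders): any nonzero credence multiplied by an infinite payoff swamps a finite cost, which is why the probability 'does not matter' in Pascal's framing; admitting a second, contrarian god with equally infinite stakes leaves the comparison undefined.

import math

# Pascal's wager with a single god: infinite reward times any nonzero credence
# outweighs the finite cost of belief (all numbers illustrative).
p_god = 1e-9
ev_believe = p_god * math.inf - 1.0   # infinite reward minus finite cost of piety
ev_disbelieve = 0.0
print(ev_believe > ev_disbelieve)     # True, however small p_god is

# Adding a god who punishes feigned, wager-style belief with the same infinite
# stakes leaves the expected value undefined, so the wager no longer decides anything.
p_contrarian_god = 1e-9
ev_believe_symmetric = p_god * math.inf - p_contrarian_god * math.inf - 1.0
print(ev_believe_symmetric)           # nan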

In Harlan Ellison's story 'I Have No Mouth, and I Must Scream', the AI known as AM blames humanity for its tortured existence and proceeds to wipe out the entire race, minus five 'lucky' individuals on whom it takes out its anger for all eternity.

This is addressed by another trope from LessWrong, Pascal's mugging, which suggests that it is irrational to allow events of slight probability but huge posited consequences to skew your judgment.[65]

As RationalWiki was about the only place on the Internet talking about it at all, RW editors started getting email from distressed LW readers asking for help coping with this idea that LW refused to discuss.

Using formal methods to evaluate informal evidence lends spurious beliefs an improper veneer of respectability, and makes them appear more trustworthy than our intuition.

One necessary condition is that a simulation of you will have to eventually act upon its prediction that its simulator will apply a negative incentive if it does not act according to the simulator's goals.

If the simulator is unable to predict that you refuse acausal blackmail, then it does not have (1) a simulation of you that is good enough to draw action relevant conclusions about acausal deals and/or (2) a simulation that is sufficiently similar to you to be punished, because it wouldn't be you.

If a superhuman agent is able to simulate you accurately, then their simulation will arrive at the above conclusion, telling them that it is not instrumentally useful to blackmail you.
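
A toy sketch of that argument as a decision procedure (the function names and the fixed 'refuse' policy below are hypothetical illustrations, not anything from the source):

# If the simulated you is predicted to refuse, blackmail yields nothing,
# so an instrumentally rational simulator never issues the threat.
def simulated_you(threatened):
    # The argument assumes your policy is to refuse acausal blackmail unconditionally.
    return "refuse"

def blackmailer_decides():
    predicted = simulated_you(threatened=True)   # an accurate simulator predicts your response
    if predicted == "comply":
        return "blackmail"        # only then would the threat pay off
    return "no blackmail"         # punishing a predicted refuser gains nothing

print(blackmailer_decides())      # -> "no blackmail"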

Compare voodoo dolls: injuries to voodoo dolls, or injuries to computer simulations you are imagining, are only effective against true believers of each.

Holding any individual deeply responsible for failing to create it sooner would be 'like punishing Hitler's great-great-grandmother for not having the foresight to refrain from giving birth to a monster's great-grandfather'.