AI News, Henry Markram Talks Brain Simulation
Artificial intelligence is progressing rapidly, and its impact on our daily lives will only increase.
The brain is an infinite-dimensional network of networks of genes, proteins, cells, synapses, and brain regions, all operating in a dynamically changing cocktail of neurochemicals.
Let’s not speculate when we’ll be able to simulate every single molecule in all its possible states or assume that a coarse-grained simulation, where molecules are simulated in groups, would have enough resolution to capture the brain’s reactions.
To simulate the human brain at that resolution, we would need supercomputers on the yotta scale, with a million times more computing power than the exascale machines now on the horizon.
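That "million times" figure follows directly from the SI prefixes: exa denotes 10^18 and yotta denotes 10^24. A minimal sanity check:

```python
# SI prefixes: exa = 10^18, yotta = 10^24 (operations per second here)
exascale_ops = 1e18    # throughput of an exascale supercomputer
yottascale_ops = 1e24  # throughput of a hypothetical yotta-scale machine

ratio = yottascale_ops / exascale_ops
print(f"{ratio:.0e}")  # 1e+06: a million-fold gap
```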
An important factor to keep in mind is that the computers we would need to run high-resolution simulations of the human brain would probably consume the output of a dedicated nuclear power plant.
If we now assume the intricacies of the cellular structure of the brain and its cells are unnecessary to recreate the essential reactions of the brain, we can treat cells as simple nodes of integration and collapse the whole brain to a point neuron network.
The problem is that in order to collapse the complexity down to this resolution in a formal and systematic series of scientific abstractions, we need a high-resolution digital reconstruction of the human brain to work with.
If we could go further still and bypass billions of years of iterations in biological design, leaving aside all the detailed biological reactions and mimicking just the input/output transfer functions of the human brain in some kind of deep learning network, we might achieve brainlike capabilities even earlier.
A seizure, near-death experience, magic mushroom trip, or wild dream may transiently disrupt the immersion, but the immersion is powerful and profound and we generally snap compulsively back to earth.
Only now are humans realizing that the human brain, as an organ belonging to an individual, already has superhuman capabilities: Every human brain embodies layers upon layers of knowledge and experience developed by all the other brains, present and past, who have contributed to building our societies, cultures, and physical environment.
So even if we create algorithms with higher “IQ” and better problem-solving abilities than any individual, even if we give them the ability to build ever more intelligent versions of themselves, we will still be far from achieving the superhuman intelligence we observe in individual humans today.
To make sure that superintelligent artificial beings don’t become to us what a car is to a bird, or a GPU is to our grandmother, we have to consider how our creations are immersed into society: what tasks we give them, what tasks we reserve for ourselves, how and what we allow them to learn from us, how and what we manage to learn from them.
What we need to look out for are the shallower versions of artificial intelligence, complete with human desires and emotions such as fear, but without the deep immersion that can put these desires and emotions in context.
The technological singularity is the hypothetical moment when the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.
According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.
Stanislaw Ulam reports a discussion with John von Neumann 'centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue'.
Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of.
These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.
A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading.
Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the 'low-hanging fruit' of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.
The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law.
Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.
Kurzweil reserves the term 'singularity' for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that 'The Singularity will allow us to transcend these limitations of our biological bodies and brains ...'
He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date 'will not represent the Singularity' because they do 'not yet correspond to a profound expansion of our intelligence.'
In one of the first uses of the term 'singularity' in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
He predicts paradigm shifts will become increasingly common, leading to 'technological change so rapid and profound it represents a rupture in the fabric of human history'.
First, a software improvement does not require external influence: machines designing faster hardware would still require humans to create the improved hardware or to program factories appropriately.
While not actively malicious, AIs have no particular reason to promote human goals unless they are programmed to do so; if they are not, they might use the resources currently supporting mankind to promote their own goals instead, causing human extinction.
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research.
They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI was developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.
Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.
Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived.
Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.
In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.
In a 2007 paper, Schmidhuber stated that the frequency of subjectively 'notable events' appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.
Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I.J. Good.
In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.
Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).
We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race.
One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world.
Hawking believes that in the coming decades, AI could offer 'incalculable benefits and risks' such as 'technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.'
In a hard takeoff scenario, an AGI rapidly self-improves, 'taking control' of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals.
In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.
Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that 'creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.'
Storrs Hall believes that 'many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process' in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.
Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.
Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.
Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.
Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called 'Digital Ascension' that involves 'people dying in the flesh and being uploaded into a computer and remaining conscious'.
In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the 'ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.'
Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking in internal logical consistency.
When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.
In 1985, in 'The Time Scale of Artificial Intelligence', artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an 'infinity point': if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.
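Solomonoff's argument is a geometric series: if the first doubling takes four years and each subsequent one takes half as long, the cumulative time for any number of doublings never exceeds eight years. A minimal sketch of that convergence (illustrative, not taken from Solomonoff's paper):

```python
def time_for_doublings(first_interval_years, n):
    """Total elapsed time for n speed doublings, each taking
    half as long as the one before (4 + 2 + 1 + 0.5 + ...)."""
    return sum(first_interval_years / 2**k for k in range(n))

# The cumulative time approaches 8 years no matter how many
# doublings occur: the "infinity point" arrives in finite time.
for n in (1, 2, 3, 10, 50):
    print(n, time_for_doublings(4.0, n))
```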
Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.
In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is 'to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges.'
The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.
The AI Revolution: The Road to Superintelligence
Which probably feels pretty normal…

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay.
It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery.
But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything.
The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception.
If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, and their concept of being “inside,” the shock might well have killed him.
19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity. This works on smaller scales too.
The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.
First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line.
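That claim is easy to check numerically: sample a steep exponential over a narrow window and it is nearly indistinguishable from its best-fit straight line (a quick illustration, not tied to any particular growth curve):

```python
import numpy as np

# Sample e^t over a tiny slice of a steep exponential curve
t = np.linspace(100.0, 100.1, 200)
y = np.exp(t)

# Fit a straight line to that slice and measure the worst relative gap
slope, intercept = np.polyfit(t, y, 1)
line = slope * t + intercept
max_rel_error = np.max(np.abs(y - line) / y)

# The slice deviates from a straight line by well under 1%
print(max_rel_error)
```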
The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones.
We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive.
Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth.
And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that’s coming next.
In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply.
Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world.
Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”
AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.
At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).
When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions.
Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.
On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us.
Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock. And everything we just mentioned is still only taking in stagnant information and processing it.
One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total.
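Kurzweil's shortcut is simple proportional scaling: take a trusted cps estimate for one structure, divide by that structure's share of the whole brain, and you get a whole-brain figure. A hedged sketch with placeholder inputs (the numbers below are illustrative assumptions, not Kurzweil's actual measurements):

```python
# Proportional-scaling shortcut for estimating whole-brain cps.
# Both inputs are illustrative placeholders, not measured values.
structure_cps = 1e11       # assumed expert estimate for one brain structure
structure_fraction = 1e-5  # assumed fraction of total brain mass it represents

# If computation scales with mass, the whole brain would run at:
whole_brain_cps = structure_cps / structure_fraction
print(f"{whole_brain_cps:.0e} cps")  # 1e+16 cps under these assumptions
```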
But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build.
Here are the three most common strategies I came across: This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.”
The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.
If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.
If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.
The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.
Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity.
Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons: 1) Exponential growth is intense, and what seems like a snail’s pace of advancement can quickly race upwards. 2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier).
The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans.
The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us.
The median year in a survey of hundreds of scientists, asked when we’d be more likely than not to have reached AGI, was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us.
The Future of Education: How A.I. and Immersive Tech Will Reshape Learning Forever
Mastery-based learning can fix all of those issues — through a deeply personal and individualized approach (intersecting with what we talked about regarding personalized learning), students can move quickly through the subjects they’re better at, and dedicate the required time wherever they face complications.
In scientific research and in the classroom, it has demonstrably been shown to be one of the most effective forms of meaningfully retaining information — experiential learning engages most of the senses, builds social-emotional skills, creates a context for memorization, expands critical thinking, and is unquestionably more relevant to real-life applications of what’s being studied.
Millennials will benefit and suffer due to their hyperconnected lives
“They take a quick glance at it and sort it and/or tag it for future reference if it might be of interest.” Cathy Cavanaugh, an associate professor of educational technology at the University of Florida, noted, “Throughout human history, human brains have elastically responded to changes in environments, society, and technology by ‘rewiring’ themselves.
“This has nothing to do with technology but with the fears we have about young people engaging with strangers or otherwise interacting with people outside of adult purview.” William Schrader, a consultant who founded PSINet in the 1980s, expressed unbridled hope. “A new page is being turned in human history, and while we sometimes worry and most of the time stand amazed at how fast (or how slowly) things have changed, the future is bright for our youth worldwide,” he wrote.
In particular, I have hope for improved collaboration from these new differently ‘wired’ brains, for these teens and young adults are learning in online environments where working together and developing team skills allows them to advance.” David Weinberger, senior researcher at Harvard University’s Berkman Center for Internet & Society,
“The people who will strive and lead the charge will be the ones able to disconnect themselves to focus on specific problems.” Stephen Masiclat, a communications professor at Syracuse University, said, “When the emphasis of our social exchanges shifts from the now to the next, and the social currency of being able to say ‘I was there first’ rises, we will naturally devalue retrospective reflection and the wisdom it imparts.” Masiclat said social systems will evolve to offer even more support to those who can implement deep-thinking skills.
Those who grow up with immediate access to media, quick response to email and rapid answers to all questions may be less likely to take longer routes to find information, seeking ‘quick fixes’ rather than taking the time to come to a conclusion or investigate an answer.” Richard Forno, a long-time cybersecurity expert, agreed with these younger respondents, saying he fears “where technology is taking our collective consciousness and ability to conduct critical analysis and thinking, and, in effect, individual determinism in modern society.” He added, “My sense is that society is becoming conditioned into dependence on technology in ways that, if that technology suddenly disappears or breaks down, will render people functionally useless.
Here’s a collection of comments along those lines: Annette Liska, an emerging-technologies design expert, observed, “The idea that rapidity is a panacea for improved cognitive, behavioral, and social function is in direct conflict with topical movements that believe time serves as a critical ingredient in the ability to adapt, collaborate, create, gain perspective, and many other necessary (and desirable) qualities of life.
Areas focusing on ‘sustainability’ make a strong case in point: slow food, traditional gardening, hands-on mechanical and artistic pursuits, environmental politics, those who eschew Facebook in favor of rich, active social networks in the ‘real’ world.” Enrique Piraces, senior online strategist for Human Rights Watch, said communication and knowledge acquisition are increasingly mediated by technology, noting that by 2020, “a significant part of the knowledge that anyone can discover will be processed by ‘third-party brains.’ Machines will learn from that processing, but I’m afraid the subjects won’t develop deep thinking based on this.” Robert F.
Here is my 2020 prediction: 60% of children over the age of 15 are overweight in the US, and the Web traffic to non-learning sites has grown threefold.” Bruce Nordman, a research scientist at Lawrence Berkeley National Laboratory and active leader in the Internet Engineering Task Force, expressed concerns over people’s information diets, writing: “The overall effect will be negative, based on my own experience with technology, attention, and deep thinking (I am 49), and observing my children and others.
While I am quite willing to believe that some ‘wiring’ differences are occurring and will occur, they will be a modest effect compared to others.” Eugene Spafford, a professor of computer science and engineering at Purdue University, responded that many young adults are unable to function in a confident and direct manner without immediate access to online sources and social affirmation.
“When Millennials remake our educational institutions so that they reflect this internet-based architecture, rather than the broadcast, ‘expert in the center’ framework of today’s K-doctorate educational systems,” he wrote, “then their ability to process, if not actually absorb, a greater amount of information will be used to produce positive outcomes for society.
“The question we face as individuals, organizations, educators and perhaps especially as parents is how we can help today’s kids to prepare for that world—the world they will actually live in and help to create—instead of the world we are already nostalgic for.” Computing pioneer and ACM Fellow Bob Frankston predicted that people will generally take all of this in stride.
Owens, an attorney and author of Internet Gaming Law, also pointed out the dual effects of humans’ uses of technologies, writing, “Good people do good things with their access to the internet and social media—witness the profusion of volunteer and good cause apps and programs which are continually appearing, the investigative journalism, the rallying of pro-democracy forces across the world.
Each new advance in knowledge and technology represents an increase in power, and the corresponding moral choices that go with that power.” Jessica Clark, a media strategist and senior fellow for two U.S. communications technology research centers, was among many who observed that there’s nothing new about concerns over teens and evolving ways they create content and share it.
She wrote: “What is being lost are the skills associated with print literacy, including the ability to organize complex processes in a sustained way over time, engage in detailed and nuanced argumentation, analytically compare and contrast information from diverse sources, etc.
My hypothesis is that high activity in online environments, particularly games, expends any political will or desire to effectively shape the environment so that there is none of that will left for engaging in our actual political environment.” Jesse Drew, an associate professor of technocultural studies at the University of California-Davis, echoed Braman.
Sam Punnett, president of FAD Research, drew out a multilayered, doleful future scenario. Respondents often pointed to formal educational systems as the key driver toward a positive and effective transition to taking full advantage of the fast-changing digital-knowledge landscape.
We will have missed enormous opportunities to produce independent life-long learners.” David Saer, a foresight researcher for Fast Future, said he’s a young adult who predicts a positive evolution but, “education will need to adapt to the wide availability of information, and concentrate on teaching sifting skills.” He added: “The desire for instantaneous content should not be seen as a lack of patience or short attention span but as a liberation from timetables set previously by others.
Learning opportunities could easily continue to be lost unless educators, venture capitalists, taxpayers, volunteers, and businesses all make concerted efforts to leverage the potential of new technology to enhance the critical thinking skills of young people.” Jeniece Lusk, a researcher and PhD in applied sociology at an Atlanta-based information technology company, responded, “Unless the educational paradigms used in our schools are changed to match the non-academic world of the Millennial student, I don’t foresee an increase in students’ abilities to analyze and use critical thinking.
Since they’ve been taught that e-technology has no place in the classroom, they also haven’t learned proper texting/emailing/social networking etiquette, or, most importantly, how to use these resources to their advantage.” Bonnie Bracey Sutton, a technology advocate and education consultant at the Power of US Foundation, said educators have to break through the old paradigm and implement new tools.
The technology makes it all possible, and we can include new areas of learning, computational thinking, problem solving, visualization and learning, and supercomputing.” An anonymous respondent said most teachers today can’t comprehend the necessary paradigm to implement the tools effectively: “Those who are teaching the children who will be teenagers and young adults by 2020 are not all up-to-speed with the internet, mobile technologies, social interfaces, and the numerous other technologies that have recently been made mainstream.
There will be a decline for behavior and cognition until those who have grown up with this type of technology are able to teach the children how to correctly and productively utilize the advantages it presents us.” Another anonymous respondent wrote, “Interactions will definitely be different as a result of kids growing up with all this technology at their fingertips.
I don’t think this will result in less-smart children, but it will definitely result in children who learn differently than those who grew up without constant stimulation from technology.” Tin Tan Wee, an internet expert based at the National University of Singapore, anticipates only a slow effort to adapt to the likely divide.
We are already seeing this manifested in the economic scene, where the rich get richer and the poor poorer.” Ken Friedman, dean of the faculty of design at Swinburne University of Technology in Melbourne, Australia, said, “With an added repertoire of experiences and skills, it might be that technology could lead to a brighter future, but today’s young people generally do not seem to be gaining the added skills and experiences to make this so.” Freelance journalist Melinda Blau said education in internet literacy is key.
“This does represent an evolutionary change, but the focus must be on the fact that learning means knowing how to filter and interpret the vast quantities of data one is exposed to—we must use the fact that the internet has all of this information to spend less time doing rote memorization and more time on critical thinking and analysis of the information that is available to you.” Tom Franke, chief information officer for the University System of New Hampshire, noted that it is up to people to actively set the agenda if they want a positive outcome.
Creativity, demand for high stimulus, rapidly changing environments, and high agency (high touch) will be what makes the next revolution of workers for jobs they will invent themselves, changing our culture entirely at a pace that will leave many who choose not to evolve in the dust.” An anonymous survey respondent said children who grow up with access to technology plus the capacity to use it in a positive manner will generally be more successful than others: “Decision-making will yield better results and those who are adept at integrating knowledge will be very successful.
The result will be positive overall, but a new type of underclass will be created which will be independent of race, gender, or even geography.” Another anonymous respondent echoed those thoughts, writing, “Young people from intellectually weak backgrounds who have no special driving interest in self-development are all too likely to turn out exactly as the purveyors of a debased mass-culture want them to be: shallow, impulse-driven consumers of whatever is being sold as ‘hot’ at the moment.” Tin Tan Wee noted: “The smart people who can adapt to the internet will become smarter, while the rest, probably the majority, will decline.
This in turn fuels the educational divide because only the richer can afford internet access with mobile devices at effective speeds.” Well-known blogger, author, and communications professor Jeff Jarvis said we are experiencing a transition out of the textual era, and that this is altering the way we think, not the physiology of our brains.
But it won’t change our wires.” Jim Jansen, an associate professor of information science and technology at Penn State University and a member of the boards of eight international technology journals, noted, “I disagree with the opening phrase: ‘In 2020 the brains of multitasking teens and young adults are “wired” differently from those over age 35.’ I find it hard to believe that hard wiring, evolved over millions of years, can be re-wired.
Concentration and in-depth thought may be skills that are rare, and thus highly valued in 2020.” “I agree with all of those who say that multitasking is nothing more than switching endlessly from one thought to another—no one can think two things at once—but I don’t agree that this kind of attention-switching is destructive or unhealthy for young minds,” added Susan Crawford, professor at Harvard University’s Kennedy School of Government and formerly on the White House staff.
But once kids get on a skateboard, or start instant messaging, it’s the fall of Western civilization.” Boyd said it seems as if the social aspects of Web use frighten many detractors, adding, “But we have learned a great deal about social cognition in recent years, thanks to advances in cognitive science, and we have learned that people are innately more social than was ever realized.
Social tools are being adopted because they match the shape of our minds, but yes, they also stretch our minds based on use and mastery, just like martial arts, playing the piano, and badminton.” David Ellis, director of communications studies at York University in Toronto, has a front-row seat to observe how hyperconnectivity seems to be influencing young adults.
For now, it seems, addictive responses to peer pressure, boredom, and social anxiety are playing a much bigger role in wiring Millennial brains than problem-solving or deep thinking.” Rich Osborne, senior IT innovator at the University of Exeter in the UK, said his own life and approaches to informing and being informed have changed due to the influence of hyperconnectivity.
In the meantime, though, the immediate and bite-size nature of internet exchanges will make it harder for multitasking teens and young adults to undertake deep thinking in particular, and the ‘top-10’ effect, i.e., people selecting whatever Google proposes on the first page of search results, may lead to a plateau of intellectual thinking as we all start to attend to the same content.” An anonymous respondent agreed, writing, “I find in myself that switching constantly between tasks, and the eyesight and energy issues from sitting in front of a screen all day make it harder for me to concentrate and connect with others in both online and offline settings.
My creativity is zapped and I get very moody if I’m away from the Web for too long.” Debbie Donovan, a marketing blogger based in Mountain View, California, described her experience: “As an over 35-er, I can tell you that I’ve deliberately re-wired my brain and I can manage a more complex and rewarding life situation as a result of the digital skills deliberately acquired.
I am more effective in my personal life because I can reach out and stay in touch with a much larger circle of friends and family and cultivate the level of intimacy I can achieve in those relationships.” Heidi McKee, an associate professor of English at Miami University, said, “Nearly 20 years ago everyone was saying how teens were going to be wired differently, but when you look at surveys done by Pew, AARP, and others, older adults possess just as much ability and desire to communicate and connect with all available means.” Dan Ness, principal researcher at MetaFacts (producers of the Technology User Profile), noted that each generation laments the younger generation and imagines a world that’s either completely better or worse than the current one.
Ferguson, a professor from Texas A&M whose research specialty is technologies’ effects on human behavior, noted, “The tendency to moralize and fret over new media seems to be wired into us.” He added, “Societal reaction to new media seems to fit into a pattern described by moral panic theory.
“There are evolutionary traits and preferences that are hard-wired in, and that’s where the danger lies—not in teenagers wasting their time writing SMSs rather than novels for the ages, but in marketers’ ever-increasing ability to tap in to addictive and deep-seated psychological traits that are common to all of us, to convince us to play just one more round of Angry Birds, or have just one more scoop of salted-caramel ice cream,” wrote an anonymous respondent.
“The pervasive network allows people to build more quickly on the foundations laid by their predecessors, but it also allows more efficient delivery of increasingly addictive media that caters to our troop-of-apes-on-the-savannah social needs for popularity and attention.” One anonymous respondent noted that it is human to take the easy path, writing: “Learning requires three key underlying skill sets—patience, curiosity, and a willingness to question assumptions.
Ensuring that youth understand that really understanding something requires lots of time and substantial amounts of thinking and questioning is going to be a challenge.” Another anonymous respondent added that the easy path generally leads to entertainment more often than education or enlightenment: “We are already beginning to see the short attention spans people have as well as their lack of overall knowledge about their world and local context.
People are distracted from deep engagement and are solely interested in being entertained, most often by viewing the misfortune of others.” Human nature, one anonymous respondent noted, has its sunny side and its dark side: “Those who are interested, driven, engaged, and excited about learning will learn, grow, and develop—for its own sake.
An anonymous respondent wrote, “Our surrounding world is developing and changing, and teens, youth, and children are going to be leading the way through the new world just like they always have.” Another added, “I am an optimist with faith in the deeper motivations of our species to learn, acquire understanding, and be challenged.” And another added: “People will always want the same things—sex, power, affection, fulfillment, etc.—and they will use technologies as they always have, to seek out more of the things they want, which intrinsically involve interacting with other people.
Ask a geeky friendless kid in small-town America 40 years ago if he’d like to have some way of communicating with people who appreciate him.” Richard Titus, a venture capitalist based in London and San Francisco, said the construction of strong social and moral frameworks is necessary for positive evolution.
“The most important thing to bring a positive vision of 2020 is to steer the next generation towards results—meaningful, measurable results, with less focus on how they are arrived at—and to build stronger social, moral frameworks to replace those roles previously held by power structures which relied on the previous models.” Survey respondents say there’s still value to be found in traditional skills but new items are being added to the menu of most-desired capabilities.
Burstein, a student at New York University and author of Fast Future: How the Millennial Generation is Remaking Our World, noted, “A focus on nostalgia for print materials, penmanship, and analog clock reading skills will disappear as Millennials and the generation that follows us will redefine valued skills, which will likely include internet literacy, how to mine information, how to read online, etc.” Collective intelligence, crowd-sourcing, smart mobs, and the “global brain” are some of the descriptive phrases tied to humans working together to accomplish things in a collaborative manner online.
“The skills being honed on social networks today will be critical tomorrow, as work will be dominated by fast-moving, geographically diverse, free-agent teams of workers connected via socially mediating technologies.” Frank Odasz, a consultant and speaker on 21st century workforce readiness, rural e-work and telework, and online learning, said digital tools are allowing human networks to accelerate intelligence.
The ability to focus, to analyze critically, these require learning and practice.” An anonymous survey respondent said talented people will have the ability to work with people on both sides of the technology divide: “There is too much of a gap between the ‘people in charge’ and the ‘wired kids,’ leaving too much room for miscommunication and inevitable friction.
“Probably the most highly valued personal skills,” she wrote, “will be cosmopolitanism, in the way philosopher Kwame Appiah conceives it—the ability to listen to and accommodate to others—and communitarianism, in the way sociologist Amitai Etzioni has outlined—an awareness that there must be a balance between individual rights and social goods.” Tom Hood, CEO of the Maryland Association of CPAs, shared feedback from hundreds of grassroots members of the CPA profession, who weighed in on the critical skills for the future in the CPA Horizons 2025 report and arrived at these: 1) Strategic thinking—being flexible and future-minded, thinking critically and creatively.
Barry Chudakov, a research fellow in the McLuhan Program in Culture and Technology at the University of Toronto, said the challenge we’re facing is maintaining and deepening “integrity, the state of being whole and undivided,” noting: “There will be a premium on the skill of maintaining presence, of mindfulness, of awareness in the face of persistent and pervasive tool extensions and incursions into our lives.
That question, more than multitasking or brain atrophy due to accessing collective intelligence via the internet, will be the challenge of the future.” An anonymous respondent noted, “The ability to concentrate, focus, and distinguish between noise and the message in the ever growing ocean of information will be the distinguishing factor between leaders and followers.” Duane Degler, principal consultant at Design for Context, a designer of large-scale search facilities and interactive applications for clients such as the National Archives and Verisign, said we’re already witnessing a difference in cognitive abilities and perceptions dependent upon the information/communication tools people are using, and not just among the under-35 set.
As a result, the dominant social and information behaviors are likely to be influenced by other factors that we can’t yet see, in the same way current social and information behaviors are now being influenced by capabilities that are predominantly five years (or at most ten years) old.” Pamela Rutledge, director of the Media Psychology Research Center at Fielding Graduate University, says this evolution is creating a new approach to thinking.
Beliefs of agency and competence fuel intrinsic motivation, resilience, and engagement.” New York-based technology and communications consultant Stowe Boyd noted, “Our society’s concern with the supposed negative impacts of the internet will seem very old-fashioned in a decade, like Socrates bemoaning the downside of written language, or the 1950’s fears about Elvis Presley’s rock-and-roll gyrations.
- On Tuesday, January 28, 2020