AI News: A Dialogue with Socrates on the topic of AI, Robotics and Free Will

When Tech Knows You Better Than You Know Yourself

When you are 2 years old, your mother knows more about you than you know yourself.

(And they’ll certainly know them before you’ve told your mother.) Recently, I spoke with Yuval Noah Harari, the author of three best-selling books, and Tristan Harris, who runs the Center for Humane Technology and who has played a substantial role in making “time well spent” perhaps the most debated phrase in Silicon Valley in 2018.

TH: My whole background: I actually spent the last 10 years studying persuasion, starting when I was a magician as a kid, where you learn that there are things that work on all human minds.

And I think the thing that we both share is that the human mind is not the total secure enclave root of authority that we think it is, and if we want to treat it that way, we're going to have to understand what needs to be protected first.

YNH: Because we have built our society, certainly liberal democracy with elections and the free market and so forth, on philosophical ideas from the 18th century which are simply incompatible not just with the scientific findings of the 21st century but, above all, with the technology we now have at our disposal.

Our society is built on the ideas that the voter knows best, that the customer is always right, that ultimate authority, as Tristan said, lies with the feelings of human beings; and this assumes that human feelings and human choices are a sacred arena which cannot be hacked, which cannot be manipulated.

You come back to your computer and you think, OK, I know those other times I ended up watching two or three videos and getting sucked in, but this time it's going to be really different.

You wake up from a trance three hours later and you say, “What the hell just happened?” And it's because you didn't realize you had a supercomputer pointed at your brain.

So when you open up that video you're activating Google's billions of dollars of computing power and they've looked at what has ever gotten 2 billion human animals to click on another video.

But he [Garry Kasparov] can't see beyond a certain point: a mouse can see only so many moves ahead in a maze, a human can see way more moves ahead, and Garry can see even more moves ahead.

But everywhere you turn on the internet there's basically a supercomputer pointing at your brain, playing chess against your mind, and it's going to win a lot more often than not.

And if YouTube is using 2 billion human animals to calculate, based on everybody who's ever wanted to learn how to play ukulele, it can say, “Here's the perfect video to teach you ukulele.” The problem is it doesn't actually care about what you want; it just cares about what will keep you on the screen next.

Seventy percent of what people are watching on YouTube is the recommended videos on the right-hand side. That means that for 1.9 billion users (more than the number of followers of Islam, about the number of followers of Christianity), 70 percent of what they're looking at during the 60 minutes a day the average person spends on YouTube is chosen by the recommendation system.

If you go on with this illusion that human choice cannot be hacked, cannot be manipulated, that we can just trust it completely, that it is the source of all authority, then very soon you end up with an emotional puppet show.

And this is one of the greatest dangers that we are facing and it really is the result of a kind of philosophical impoverishment of just taking for granted philosophical ideas from the 18th century and not updating them with the findings of science.

And it's very difficult, because people don't want to hear this message: that they are hackable animals, that their choices, their desires, their understanding of who they are and what their most authentic aspirations are, these can actually be hacked and manipulated.

I think that AI gets too much attention now, and we should put equal emphasis on what's happening on the biotech front, because in order to hack human beings, you need biology; and some of the most important tools and insights are coming not from computer science but from brain science.

TH: I think one of Yuval's major points here is that biotech lets you understand, by hooking up a sensor to someone, features about that person that they won't know about themselves, and that we're increasingly reverse-engineering the human animal.

Then if I put a supercomputer behind the camera, I can actually run a mathematical equation, and I can find the micro pulses of blood in your face that I as a human can't see but that the computer can see, so I can pick up your heart rate.
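To make that concrete, here is a minimal sketch of the remote-photoplethysmography idea Harris is describing: average a facial region's green channel per video frame, then read the dominant frequency of that signal as a pulse estimate. Everything here (the function name, the 0.7-4 Hz pulse band, the synthetic signal) is an illustrative assumption, not a description of any deployed system.

```python
# Illustrative sketch: estimating heart rate from per-frame colour averages.
import numpy as np

def estimate_heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate pulse (BPM) from mean green-channel values of facial video frames."""
    signal = green_means - green_means.mean()           # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))              # magnitude spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)   # frequency axis in Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)              # plausible pulse range, 42-240 BPM
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a 72 BPM (1.2 Hz) pulse buried in noise, filmed at 30 fps.
fps = 30.0
t = np.arange(int(fps * 10)) / fps
fake_green = 0.02 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.01, t.size)
print(f"Estimated heart rate: {estimate_heart_rate_bpm(fake_green, fps):.0f} BPM")
```

Real systems add face tracking, detrending, and band-pass filtering, but the core signal really is this small.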

I can point to this: there's a woman named Poppy Crum who gave a TED talk this year about the end of the poker face, that we had this idea that there can be a poker face, that we can actually hide our emotions from other people.

But this talk is about the erosion of that: we can point a camera at your eyes and see when your pupils dilate, which actually detects cognitive strain, when you're having a hard time understanding something or an easy time understanding something.

You know, one of the things with Cambridge Analytica, which is all about the hacking of Brexit, Russia, and the US elections, is the idea that if I know your big five personality traits, if I know Nick Thompson's personality through his openness, conscientiousness, extraversion, agreeableness, and neuroticism, then that gives me your personality.

Now, the whole scandal there was that Facebook let this data go, to be taken by a researcher who had people fill in questionnaires to figure out Nick's big five personality traits.

But now there's a woman named Gloria Mark at UC Irvine who has done research showing you can actually get people's big five personality traits from their click patterns alone, with 80 percent accuracy.
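As a concrete illustration of what "traits from click patterns" can mean, here is a minimal sketch: behavioural features go into an ordinary classifier. This is not the cited study's data or method; the features, the synthetic labels, and the logistic-regression choice are all assumptions made for demonstration.

```python
# Illustrative sketch: predicting a single Big Five trait from click behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 1000
# Hypothetical features: clicks per minute, mean dwell time, tab switches per hour.
X = rng.normal(size=(n_users, 3))
# Synthetic ground truth: "high extraversion" loosely tied to rapid clicking.
y = (1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.0%}")
```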

We're going to be able to point AIs at human animals and figure out more and more signals from them, including their micro-expressions, when you smirk and all these things; and we've got Face ID cameras on all of these phones.

So it can be done in a diplomatic setting: say two prime ministers are meeting to resolve the Israeli-Palestinian conflict, and one of them has an earbud, and a computer is whispering in his ear the true emotional state of the person he is talking to.

Like, I don't know, I walk on the beach or even watch television, and there is, what was it in the 1980s, Baywatch or something, and there is a guy in a swimsuit and there is a girl in a swimsuit, and which way are my eyes going?

So there's this big question of how we hand over information about ourselves and say, "I want you to use that to help me." On whose authority can I guarantee that you're going to help me?

But they also watch you throughout your whole day: what you click on, which ads for Coca-Cola or Pepsi, the shirtless man and the shirtless woman, and all the conversations that you have with everybody else in your life (because they have Facebook Messenger, they have that data too). Now imagine a priest in a confession booth whose entire business model is to sell access to the confession booth to another party.

We can design a different political and economic system in order to prevent this immense concentration of data and power in the hands of either governments or corporations that use it without being accountable and without being transparent about what they are doing.

A good example of this is if I put on a VR helmet, and now suddenly I'm in a space where there's a ledge and I'm at the edge of a cliff. I consciously know I'm sitting here in a room with Yuval and Nick; I know that consciously.

And so it's important that we think of this as a new era, a kind of new enlightenment, where we have to see ourselves in a very different way. And that doesn't mean that that's the whole answer.

So if everyone around me believes a conspiracy theory, it's because YouTube is taking 1.9 billion human animals and tilting the playing field so everyone watches Infowars (by the way, YouTube has driven 15 billion recommendations of Alex Jones' InfoWars, and that's recommendations alone).

I can say I don't want to use those things, but I still live in a social fabric where all my other sexual opportunities, social opportunities, homework transmission, the places where people talk about that stuff, happen on Instagram; if everyone else only uses Instagram, I have to participate in that social fabric.

And this is why I began by saying that we suffer from philosophical impoverishment, that we are still running on the ideas of basically the 18th century, which were good for two or three centuries, which were very good, but which are simply not adequate to understanding what's happening right now.

And this is why I also think that, you know, with all the talk about the job market and what people should study today that will be relevant to the job market in 20-30 years, philosophy is maybe one of the best bets.

I think this conversation often makes people conclude that there's nothing about human choice or the human mind's feelings that's worth respecting.

I think the point is that we need a new kind of philosophy that acknowledges a certain kind of thinking or cognitive process or conceptual process or social process that we do want.

Like for example [James] Fishkin is a professor at Stanford who's done work on deliberative democracy and shown that if you get a random sample of people in a hotel room for two days and you have experts come in and brief them about a bunch of things, they change their minds about issues, they go from being polarized to less polarized, they can come to more agreement.

And there's a sort of process there that you can put in a bin and say, that's a social, cognitive sense-making process that we might want to be sampling from, as opposed to sampling from an alienated, lonely individual who's been shown photos of their friends having fun without them all day and is then hit with Russian ads.

So I think, you know, we're still stuck in a mind-body meatsuit, we're not getting out of it, so we had better learn how to use it in a way that brings out the higher angels of our nature, the more reflective parts of ourselves.

Well, there's actually a value here: someone wants to learn how to play the ukulele, but the computer doesn't know that; it's just recommending more ukulele videos.

But if it really knew that about you, instead of just serving infinitely more ukulele videos to watch, it might say, here are 10 of your friends who know how to play ukulele, whom you didn't know played, and you can go hang out with them.

So if you have a weakness for funny cat videos and you spend an enormous amount of time, an inordinate amount of time, just watching (you know it's not very good for you, but you just can't stop yourself clicking), then the AI will intervene, and whenever these funny cat videos try to pop up, the AI says no no no no.

Imagine you had an 80-year-old child-development psychologist, who studied under the best child-development psychologists, thinking about those kinds of moments: the thing that's usually going on for a teenager at age 13 is a feeling of insecurity, identity development, experimentation.

I think from a practical perspective I totally agree with this idea of an AI sidekick, but we have to imagine it while living in the reality, the scary reality, that we're talking about right now.

So if we're actually thinking about how to navigate to a state of affairs that we want, we probably don't want an AI sidekick to be an optional thing that some people who are rich can afford and people who aren't can't. We probably want it to be baked into the way technology works in the first place, so that it does have a fiduciary responsibility to our best, subtle, compassionate, vulnerable interests.

YNH: One thing is to change the way we teach: if you go to university or college and learn computer science, then an integral part of the course should be learning about ethics, about the ethics of coding.

And I think it's extremely irresponsible that you can finish a degree in computer science and in coding, and you can design all these algorithms that now shape people's lives, and you just don't have any background in thinking ethically and philosophically about what you are doing.

TH: So this is where the business-model conversation comes in and why it's so important, and also why Apple's and Google's role is so important: they sit upstream of the business model of all these apps that want to steal your time and maximize attention.

Like, we can't escape this instrument, and it turns out that being inside of a community and having face-to-face contact matters: there's a reason why solitary confinement is the worst punishment we give human beings.

Facebook could also change its business model to be more about payments and people transacting, based on exchanging things, which is something they're looking into with the blockchain stuff that they're theoretically working on, and also Messenger payments.

And there could be whole teams of engineers at News Feed just thinking about what's best for society, and then people would still ask these questions: well, who's Facebook to say what's good for society?

And when you enter an arms-race situation, it very quickly becomes a race to the bottom, because you can very often hear this: OK, it's a bad idea to do this, to develop that, but they are doing it and it gives them some advantage, and we can't stay behind.

And I think that, you know, much like high-frequency trading in the financial markets, you don't want people blowing up whole mountains so they can lay these copper cables so they can trade a microsecond faster.

When you add high-frequency trading to who can program human beings faster, and who's more effective at manipulating culture wars across the world, that just becomes this race to the bottom of the brainstem, of total chaos.

So I think we have to ask how we slow this down and create a sensible pace, and I think this is also about a humane technology: as with the child-development psychologist, ask the psychologist, what are the clock rates of human decision-making at which we actually tend to make good, thoughtful choices?

And so far, you know, with the Time Well Spent stuff, for example, it's: let's help people set a limit on how much time they spend, because they're vulnerable to how much time they spend.

But that doesn't tackle any of these bigger issues about how you can program the thoughts of a democracy, or how mental health problems and alienation can be rampant among teenagers, leading to a doubling of the rate of teen suicide among girls in the last eight years.

But when the British Empire decided to abolish slavery, they had to give up 2 percent of their GDP every year for 60 years, and they were able to make that change over a transition period. I'm not equating advertising or programming human beings to slavery.

But there's a similar structure: a huge chunk of the value in the entire economy now, if you look at the stock market, is driven by these advertising, programming-human-animals-based systems.

I mean, the basic tools were designed by the brightest people in the world, 10 or 20 years ago, cracking this problem of: how do I get people to click on ads?

And then the methods that they initially used to sell us underwear and sunglasses and vacations in the Caribbean and things like that, they were hijacked and weaponized, and are now used to sell us all kinds of things, including political opinions and entire ideologies.

Not to bring it back to slavery, or to equate the two, but when the British Empire decided to abolish slavery and reduce their economy's dependence on it, they were concerned that if we do this, France's economy will still be powered by slavery and they're going to soar way past us.

If there is such a thing, or at least kinds of human freedom that we want to preserve, that I think is something that is actually in everyone's interest; but there isn't necessarily equal capacity to achieve it, because governments are very powerful.

They're 18 years old, they want to devote their life to making sure that the dynamic between machines and humans does not become exploitative, and becomes one in which we continue to live rich, fulfilled lives.

TH: And I think your earlier suggestion captures it: understanding that the philosophy of simple rational human choice is outdated, we have to move from an 18th-century model of how human beings work to a 21st-century model of how human beings work.

NT: Well, you know, we started this conversation on an optimistic note, and I am certainly optimistic that we have covered some of the hardest questions facing humanity, and that you have offered brilliant insights into them.

From Socrates to Expert Systems (Hubert Dreyfus)

It has been half a century since the computer burst upon the world, along with promises that it would soon be programmed to be intelligent, and the related promise or threat that we would soon learn to understand ourselves as computers.

Newell and Simon demonstrated that computers were physical symbol systems whose symbols could be made to stand for anything, including features of the real world, and whose programs could be used as rules for relating these features.

Newell and Simon's early work on problem solving was, indeed, impressive, and by 1965 Artificial Intelligence had turned into a flourishing research program, thanks to a series of micro-world successes such as Terry Winograd's SHRDLU, a program that could respond to English-like commands by moving simulated, idealized blocks.

Given this impasse, it made sense a year later for researchers to return to microworlds - domains isolated from everyday common-sense intuition - and try to develop theories of at least such isolated domains.

This is actually what happened - with the added realization that such isolated domains need not be games like chess nor micro-worlds like Winograd's blocks world, but could, instead, be skill domains like disease diagnosis or spectrograph analysis.

Thus, from the frustrating field of AI has recently emerged a new field called knowledge engineering, which by limiting its goals has applied AI research in ways that actually work in the real world.

Feigenbaum spells out the goal: The machines will have reasoning power: they will automatically engineer vast amounts of knowledge to serve whatever purpose humans propose, from medical diagnosis to product design, from management decisions to education.

What the knowledge engineers claim to have discovered is that in areas which are cut off from everyday common sense and social intercourse, all a machine needs in order to behave like an expert is specialized knowledge of two types: The facts of the domain - the widely shared knowledge ...

Again Feigenbaum puts the point very clearly: "[T]he matters that set experts apart from beginners are symbolic, inferential, and rooted in experiential knowledge."
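To make the two knowledge types concrete, here is a toy sketch in the usual expert-systems scheme: a set of domain facts plus if-then heuristics, forward-chained until no rule fires. The medical facts and rules are invented purely for illustration.

```python
# Toy forward-chaining inference: facts plus if-then heuristics run to a fixpoint.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),                    # symptoms -> hypothesis
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)                             # add what the rule licenses
            changed = True

print(sorted(facts))  # ['cough', 'fever', 'flu_suspected']
```

The Euthyphro episode below is, in effect, Socrates demanding exactly such a rule table from a human expert, and failing to get one.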

Socrates gets annoyed and demands that Euthyphro, then, tell him his rules for recognizing these cases as examples of piety, but although Euthyphro claims he knows how to tell pious acts from impious ones, he cannot state the rules which generate his judgments.

'That's true, but if you see enough patients/rocks/chip designs/instrument readings, you see that it isn't true after all,' and Feigenbaum comments with Socratic annoyance: 'At this point, knowledge threatens to become ten thousand special cases.'

The same story is repeated in every area of expertise, even in areas unlike checkers where expertise requires storing large numbers of facts, which should give an advantage to the computer. In each area where there are experts with years of experience, the computer can do better than the beginner, and can even exhibit useful competence, but it cannot rival the very experts whose facts and supposed heuristics it is processing with incredible speed and unerring accuracy.

Many of our skills are acquired at an early age by trial and error or by imitation, but to make the phenomenology of skillful behavior as clear as possible, let's look at how, as adults, we learn new skills by instruction.

Stage 1: Novice

The student automobile driver learns to recognize such interpretation-free features as speed (indicated by the speedometer) and is given rules such as: shift to second when the speedometer needle points to ten miles an hour.

The novice chess player learns a numerical value for each type of piece regardless of its position, and the rule: 'Always exchange if the total value of pieces captured exceeds the value of pieces lost.'

The player also learns to seek center control when no advantageous exchanges can be found, and is given a rule defining center squares and a rule for calculating extent of control.
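The novice's exchange rule is concrete enough to write down directly. A minimal sketch, assuming the conventional piece values; note that it deliberately ignores position, exactly as the context-free rule does.

```python
# The novice's context-free exchange rule, taken literally.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def novice_should_exchange(captured: list[str], lost: list[str]) -> bool:
    """Exchange whenever total value captured exceeds total value lost."""
    gained = sum(PIECE_VALUES[p] for p in captured)
    given_up = sum(PIECE_VALUES[p] for p in lost)
    return gained > given_up

# Winning a rook for a knight satisfies the rule (5 > 3), whatever the position.
print(novice_should_exchange(captured=["rook"], lost=["knight"]))  # True
```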

Stage 2: Advanced Beginner

As the novice gains experience actually coping with real situations, he begins to note, or an instructor points out, perspicuous examples of meaningful additional aspects of the situation.

The advanced beginner driver, using (situational) engine sounds as well as (non-situational) speed in his gear-shifting rules, learns the maxim: shift up when the motor sounds like it is racing and down when it sounds like it's straining.

With experience, the chess beginner learns to recognize such situational aspects of positions as a weakened king's side or a strong pawn structure despite the lack of a precise and situation-free definition.

Stage 3: Competence

At this point, since a sense of what is important in any particular situation is missing, performance becomes nerve-wracking and exhausting, and the student may well wonder how anyone ever masters the skill.

A competent driver leaving the freeway on an off-ramp curve, after taking into account speed, surface condition, criticality of time, etc., may decide he is going too fast.

The class A chess player, here classed as competent, may decide after studying a position that her opponent has weakened his king's defenses so that an attack against the king is a viable goal.

Stage 4: Proficient

If events are experienced with involvement as the learner practices her skill, the resulting positive and negative experiences will strengthen successful responses and inhibit unsuccessful ones.

As the brain of the performer acquires the ability to discriminate among a variety of situations, each entered into with concern and involvement, appropriate plans spring to mind and certain aspects of the situation stand out as important without the learner standing back and choosing those plans or deciding to adopt that perspective.

Valuable time may be lost while he is working out a decision, but the proficient driver is certainly more likely to negotiate the curve safely than the competent driver who spends additional time considering the speed, angle of bank, and felt gravitational forces, in order to decide whether the car's speed is excessive.

That is, with enough experience in a variety of situations, all seen from the same perspective but requiring different tactical decisions, the brain of the expert performer gradually decomposes this class of situations into subclasses, each of which shares the same action.

[A few years ago Stuart [Dreyfus] performed an experiment in which an international master, Julio Kaplan, was required to add numbers presented to him audibly at the rate of about one number per second, as rapidly as he could, while at the same time playing five-second-a-move chess against a slightly weaker but master-level player.

Almost anyone can add numbers and simultaneously recognize and respond to faces, even though each face will never exactly match the same face seen previously, and politicians can recognize thousands of faces, just as Julio Kaplan can recognize thousands of chess positions similar to ones previously encountered.

The number of classes of discriminable situations, built up on the basis of experience, must be immense.] It has been estimated that a master chess player can distinguish roughly 50,000 types of positions.

	We can see now that a beginner calculates using rules and facts just like a heuristically programmed computer, but that with talent and a great deal of involved experience, the beginner develops into an expert who intuitively sees what to do without recourse to rules.
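To make the contrast concrete, here is a toy sketch in which "expert" response is modeled as recall of the most similar stored situation rather than calculation from rules. The feature encoding and the remembered library are invented for illustration, not a claim about how masters actually represent positions.

```python
# Toy contrast to the novice's rule above: nearest-neighbour recall over
# stored, previously experienced situations instead of calculation.
import numpy as np

# Hypothetical library of remembered situations and the action tied to each.
remembered = np.array([[0.9, 0.1, 0.3],   # weakened king's side  -> attack
                       [0.2, 0.8, 0.5],   # strong pawn structure -> squeeze
                       [0.1, 0.2, 0.9]])  # open centre           -> trade down
actions = ["attack the king", "squeeze the queenside", "trade down"]

def expert_response(situation: np.ndarray) -> str:
    """Recall the action tied to the most similar remembered situation."""
    distances = np.linalg.norm(remembered - situation, axis=1)
    return actions[int(np.argmin(distances))]

print(expert_response(np.array([0.85, 0.15, 0.4])))  # "attack the king"
```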

Children play with water for years, building up the necessary thousands of typical cases. This would explain why research in AI has stalled and why we should expect the attempt to make intelligent computers by using rules and features to be abandoned by the end of this century.]

[In this idealized account of skillful expert coping it might seem that experts needn't think and are always right.

Rather, they reflect upon the goal or perspective that seems evident to them and upon the action that seems appropriate to achieving that goal.

Let us call the kind of inferential reasoning exhibited by the novice, advanced beginner, and competent performer as they apply and improve their theories and rules 'calculative rationality', and what experts exhibit when they have time 'deliberative rationality'.

Deliberative rationality is detached, reasoned observation of one's intuitive, practice-based behavior with an eye to challenging, and perhaps improving, intuition without replacing it by the purely theory-based action of the novice, advanced beginner, or competent performer.

For example, sometimes, due to a sequence of events, one is led to see a situation from an inappropriate perspective.

An expert will try to protect against this by trying to see the situation in alternative ways, sometimes through reflection and sometimes by consulting others and trying to be sympathetic to their perhaps differing views.

A ballistics expert who testified only that he had seen thousands of bullets and the gun barrels that had fired them, and that there was absolutely no doubt in his mind that the bullet in question had come from the gun offered in evidence, would be ridiculed by the opposing attorney and disregarded by the jury.

If he is experienced in legal proceedings, he will know how to construct arguments that convince the jury, but he does not tell the court what he intuitively knows, for he will be evaluated by the jury on the basis of his 'scientific' rationality, not in terms of his past record and good judgment.

	It is ironic that judges hearing a case will expect expert witnesses to rationalize their testimony, for when rendering a decision involving conflicting conceptions of what is the central issue in a case and therefore what is the appropriate guiding precedent, judges will rarely if ever attempt to explain their choice of precedents.

Psychology Today

Some of today's top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives.

Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us—or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.

Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.

The influential philosopher John Searle has cleverly depicted this fact by analogy in his famous and highly controversial “Chinese Room Argument”, which has been convincing minds that “syntax is not sufficient for semantics” since it was published in 1980.

And although some esoteric rebuttals have been put forth (the most common being the “Systems Reply”), none successfully bridge the gap between syntax and semantics. But even if one is not fully convinced based on the Chinese Room Argument alone, it does not change the fact that Turing machines are symbol manipulating machines and not thinking machines, a position taken by the great physicist Richard Feynman over a decade earlier.

Feynman described the computer as “A glorified, high-class, very fast but stupid filing system,” managed by an infinitely stupid file clerk (the central processing unit) who blindly follows instructions (the software program).

In a famous lecture on computer heuristics, Feynman expressed his grave doubts regarding the possibility of truly intelligent machines, stating that “Nobody knows what we do or how to define a series of steps which correspond to something abstract like thinking.”
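A minimal sketch of Feynman's "stupid file clerk": a control loop that blindly matches (state, symbol) pairs against a rule table and acts, with no grasp of what the symbols mean. The unary-increment machine below is an invented example, not taken from the lecture.

```python
# A tiny Turing machine: pure symbol manipulation with no understanding.
def run_turing_machine(tape: list[str], rules: dict, state: str = "start") -> list[str]:
    head = 0
    while state != "halt":
        if head == len(tape):
            tape.append("_")                      # extend the blank tape on demand
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol                   # the "clerk" writes without knowing why
        head += 1 if move == "R" else -1
    return tape

# Rule table to append one mark to a unary numeral: scan right past 1s, write a 1.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print("".join(run_turing_machine(list("111"), rules)))  # prints 1111
```

Nothing in the loop "knows" that the tape encodes a number; the increment exists only in the eye of the programmer, which is Searle's and Feynman's shared point.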

But unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions, electrostatic forces, global synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops.

A perfect computer simulation—an emulation—of photosynthesis will never be able to convert light into energy no matter how accurate, and no matter what type of hardware you provide the computer with.

These machines do not merely simulate the physical mechanisms underlying photosynthesis in plants, but instead duplicate the biochemical and electrochemical forces, using photoelectrochemical cells that do photocatalytic water splitting.

In a similar way, a simulation of water isn’t going to possess the quality of ‘wetness’, which is a product of a very specific molecular formation of hydrogen and oxygen atoms held together by electrochemical bonds.

Even the hot new consciousness theory from neuroscience, Integrated Information Theory, makes very clear that a perfectly accurate computer simulation of a brain would not have consciousness like a real brain, just as a simulation of a black hole won't cause your computer and room to implode.

Neuroscientists Giulio Tononi and Christof Koch, who established the theory, do not mince words on the subject: 'IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.'

With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states.

If this turns out to be possible without organic materials—which have unique molecular and atomic properties—it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.

The importance of human innovation in A.I. ethics

“Our efforts,” Stewart says, “are guided by the principle that our ethics group is obsessed with making sure the impact of our technology is good.” Kay Firth-Butterfield is chief officer of the EAP (Ethics Advisory Panel), and is charged with being on the vanguard of the ethical issues affecting the AI industry and society as a whole.

“Externally,” she notes, “we plan to apply Cyc intelligence (shorthand for ‘encyclopedia,’ Lucid’s AI causal reasoning platform) for research to demonstrate the benefits of AI and to advise Lucid’s leadership on key decisions, such as the recent signing of the LAWS letter and the end use of customer applications.” Ensuring the impact of AI technology is positive doesn’t happen by default.

How can we attain these goals without infringing on the cultural values we hold dear?” Aimee van Wynsberghe, PhD, is assistant professor of philosophy of technology at University of Twente in the Netherlands and a thought leader in the nascent Ethics Adviser industry.

“When an adviser acts as a member of the design team, they’re not placing limitations on the process but helping create utopic visions leading to practical impact.” “You can sometimes do things that are entirely legal yet highly unethical.” Roland van Rijswijk works at SURFnet, the National Research and Education Network in the Netherlands connecting academia and research institutes throughout the country.

He recently worked with van Wynsberghe to create a booklet designed to help staff identify ethical issues concerning how their data would be used by outside researchers.

He admits the Socratic process is not always fun — clients will sometimes complain about his ‘aggravating questions’ — but it’s in posing these difficult scenarios that companies gain clarity about their business decisions.

“You ask a question that might seem silly at first, but when you track it to its logical conclusion you get to basic values you may not have realized you had.” “The risk of not starting a process for defining global ethical standards is that waiting could hinder innovation for AI by resulting in incompatible algorithms across companies.” Konstantinos Karachalios is managing director for the IEEE Standards Association, the consensus building organization that is part of IEEE, the world’s largest professional association of engineers.

If a person doesn’t have agency surrounding their data and communications, how can they contribute to any democratic process?” Karachalios’ perspective is refreshingly philosophical yet intentionally Socratic, designed to inspire global dialogue around AI ethics to help IEEE identify standards that advance and serve humanity.

“I am arguing,” he says, “that there are certain experiences humans will value more highly in other humans even if a computer could do them.” When it comes to scrutinizing our actions and influencing our emotions, the algorithms of the aggregated Internets already control our ethical identity.

The Public Policy Challenges of Artificial Intelligence

A Conversation with Dr. Jason Matheny, Director, Intelligence Advanced Research Projects Activity (IARPA); moderated by Eric Rosenbach, Co-Director, Belfer ...

How will narrow Artificial Intelligence (narrow AI) change the future of business?

In this video featuring our Rutgers Business School Executive Education faculty member Mike Moran, we discuss how Artificial Intelligence is becoming more ...

Martin Ford on Singularity 1on1: Technological Unemployment is an Issue We Need To Discuss

Technological unemployment is an issue that I have mentioned a few times ...

A Transhumanist Manifesto

Preamble: Intelligence wants to be free but everywhere is in chains.

Noam Chomsky on Smart Machines, Programs and AI

This is just a short soundbite from my interview with Noam Chomsky on the technological singularity. You can see the full 30 min interview here: ...

Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Isn't Enough!

If there's ever been a case when I just wanted to jump on a plane and go interview someone in ...

Calum Chace on Pandora’s Brain: AI is Coming and It Could Be the Best or the Worst Thing

Pandora's Brain is one of the most philosophical science fiction novels I have read recently

Authors@Google: Cory Doctorow & Charles Stross | "The Rapture of Nerds"

Cory Doctorow and Charles Stross have just signed with Tor Books to co-author a fix-up novel based on a series of short stories called Rapture of the Nerds.

Ray Kurzweil: "How to Create a Mind" | Talks at Google

About the book: In How to Create a Mind: The Secret of Human Thought Revealed, the bold ...