
Science Non-Fiction

In this article we will talk about artificial intelligence (AI), and about the risks and consequences that could come with its development.

I. J. Good, an English mathematician who worked with the legendary Alan Turing in the mid-twentieth century on the design of the first modern computers, put it this way: “The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

It doesn’t seem normal that we argue constantly over trivialities treated as matters of life and death, while issues that may be matters not of life or death but of immortality or extinction are dismissed by most as stories for nerds.

In the case of the financial industry, which is our main concern at Goonder, AI can help algorithms not only predict market fluctuations better, but also learn from the individual evolution of each investor, allowing us to offer a much more personalized service.

But some other developments already underway can be really scary: police use of facial recognition (especially given clear signs of bias by gender and skin type in these systems), lethal autonomous weapons, or China’s use of social credit scores for citizen behavior. Beyond all these developments and the debate over their possible implications, the general perception among ordinary mortals today seems to be: if one of these AI tools hasn’t taken my job yet, so far so good. Looking into the future, things could get seriously unsettling if we reach the next step, which is not a step but a huge leap, one that would make Neil Armstrong look like a child playing hopscotch.

In fact, one of the strategies making the most progress today is programming systems with neural networks and what is called deep reinforcement learning, so that they learn and improve on their own. Here’s an illustrative example.

The artificial intelligence company DeepMind (owned by Google) achieved a huge breakthrough this last year with AlphaZero, a system that was fed with just the rules of three complex board games: chess, shogi, and Go.

The idea, and hence the name, is that the system learned to play all three games from scratch using neural networks and deep reinforcement learning, without being contaminated by any prior human knowledge.

Thus, it began playing ultra-fast games against itself; it took the thing a few hours to reach the most advanced strategies of the grandmasters, and a little while longer to discover strategies absolutely unthinkable for us pitiful humans.
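To make the self-play idea concrete, here is a minimal sketch of learning purely from one’s own games. It is our illustration, not DeepMind’s code: it uses a plain value table and Monte Carlo updates on tic-tac-toe instead of AlphaZero’s neural networks and tree search, but the training signal comes, as in AlphaZero, entirely from self-play.

```python
# Minimal self-play sketch: a value table learned only from games the
# agent plays against itself (tic-tac-toe stands in for Go/chess/shogi).
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}           # board -> value estimate for the player who just moved
ALPHA = 0.1      # learning rate for the Monte Carlo update

def winner(b):
    return next((b[i] for i, j, k in LINES
                 if b[i] != '.' and b[i] == b[j] == b[k]), None)

def self_play_game():
    board, player, history = '.' * 9, 'X', []
    while winner(board) is None and '.' in board:
        move = random.choice([i for i, c in enumerate(board) if c == '.'])
        board = board[:move] + player + board[move + 1:]
        history.append((board, player))
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    # back the final outcome up through every position the game visited
    for state, mover in history:
        target = 0.0 if w is None else (1.0 if w == mover else -1.0)
        V[state] = V.get(state, 0.0) + ALPHA * (target - V.get(state, 0.0))

for _ in range(50_000):      # no human games anywhere, only self-play
    self_play_game()
print(f"{len(V)} positions evaluated from self-play alone")
```

A real system would pick its moves with the learned values (and a search on top of them) instead of playing randomly, and would replace the table with a deep network; the point here is only that no human knowledge enters the loop.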

And even with a hand tied behind its back, it’s a walk in the park: over 100 games of Go, with a tenth of the hardware and a tenth of the thinking time per move, AlphaZero crushed its older brother AlphaGo 100–0 (AlphaGo being the system that in 2016 defeated the last human champion, Lee Sedol, as you can see in this documentary).

So, summarizing a bit: for a system to reach human-level intelligence, the problem of processing capacity will be solved in the medium term, and the problem of learning ability is making huge qualitative leaps with neural networks and deep reinforcement learning techniques. Now, before we go on with the description of the truck, a word about the growing speed at which it is coming.

Here we can see that the more a system learns and improves, the better it becomes at learning and improving afterwards, since each improvement cycle allows for bigger and faster increases in capacity, eventually leading to exponential growth.
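The arithmetic behind that feedback loop fits in a few lines; the figures below are arbitrary, chosen only to make the compounding visible.

```python
# Toy model of recursive self-improvement: if each cycle's gain is
# proportional to current capability, growth compounds exponentially.
capability = 1.0
rate = 0.05      # assumed 5% gain per improvement cycle (arbitrary)
for cycle in range(101):
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: capability {capability:8.1f}")
    capability *= 1 + rate   # each cycle builds on all previous ones
```

Run it and capability goes from 1.0 to roughly 130 in a hundred cycles; the same constant percentage per cycle, applied to an ever larger base, is all an exponential takeoff needs.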

Tim Urban explains it very well, with the help of really funny and illustrative graphics, on his blog Wait But Why, where he has published two posts about the AI revolution: The Road to Superintelligence and Our Immortality or Extinction (if you’re interested in AI, reading these two posts is highly recommended: they’re quite exhaustive and very entertaining).

Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.” Bill Gates: “I don’t understand why some people are not concerned.” Elon Musk: “We are summoning the demon.” Okay, maybe lately Elon Musk is not the most credible source of news in the world.

For starters, the share of experts who consider that all this remains pure science fiction and that a system will never reach human intelligence varies, depending on the survey you look at, between 2% and 20%.

Regarding the when, the median of the experts’ answers is that we would have an AGI system between 2040 and 2060, and that an ASI would arrive afterwards, either very quickly or at most 30 years later. Put another way: if you are between 20 and 40 years old, there is a 50% chance that your children will be run over by the truck.

He predicts that the whole process will be controlled, and that very soon there will be breakthroughs that overcome the classical barriers of biology, effectively turning us into a new species, with a human base and artificial evolutions.

Therefore, if we aspire to get on the truck rather than be run over by it, any appreciable future evolution, let alone one as radical as this, must necessarily have a technological base. Let’s try a more visual example: who runs faster, Usain Bolt or your grandmother?

And if you have no time at all, maybe you can watch this TED talk for a quick summary. Nick Bostrom sees the arrival of an ASI as an existential risk for the human species: the most likely candidate to be the black ball that we pull out of the bag of new technologies, the one that means our doom.

The Creativity Code - Marcus du Sautoy


Mathematician Marcus du Sautoy's focus here is on the question of whether computers can, or will ever be able to, produce what humans can beyond the familiar and formulaic -- art, for example, or not just calculating mathematical problems but devising proofs of theorems.

Does it come down to code -- essentially, an algorithm -- even in humans (the 'creativity code' of the title), or is something more -- a uniquely human attribute, or at least consciousness -- required to produce actual art?

It's been repeatedly demonstrated that, for many games, vast computing power (and the right algorithm ...) is sufficient to play at and beyond human capabilities.

Tic-tac-toe (noughts and crosses), with its limited number of possible moves, was easy to crack -- in particular because, as is easy to figure out, optimal play invariably leads to a draw.
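That claim is small enough to check directly. Here is a minimal sketch, assuming nothing beyond Python's standard library, that exhausts the whole game tree with minimax and confirms that perfect play from the empty board ends in a draw.

```python
# Exhaustive minimax over tic-tac-toe: the value of the empty board
# under perfect play by both sides comes out to 0, i.e. a draw.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    return next((b[i] for i, j, k in LINES
                 if b[i] != '.' and b[i] == b[j] == b[k]), None)

@lru_cache(maxsize=None)
def value(board, player):
    """Game value from X's point of view: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    children = [value(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X')
                for i, c in enumerate(board) if c == '.']
    return max(children) if player == 'X' else min(children)

print(value('.' * 9, 'X'))   # prints 0: optimal play invariably draws
```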

Because of the far, far greater number of possible moves in the Chinese game of Go, by contrast, "the complexity of Go makes it impossible to analyze the tree of possibilities in any reasonable timeframe", and the game long resisted computers.
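Back-of-envelope numbers make the gap vivid (these are the usual rough estimates, not figures from the book): chess offers roughly 35 legal moves per position over some 80 plies, Go roughly 250 moves over some 150 plies.

```python
# Rough game-tree sizes, branching_factor ** game_length, computed in
# log10 so the results stay printable. Standard rough estimates only.
import math

chess = 80 * math.log10(35)      # ~35 moves per turn, ~80 plies
go = 150 * math.log10(250)       # ~250 moves per turn, ~150 plies
print(f"chess game tree ~ 10^{chess:.0f}")   # ~ 10^124
print(f"Go game tree    ~ 10^{go:.0f}")      # ~ 10^360
```

No number-crunching budget closes a gap of over two hundred orders of magnitude, which is why brute force was never going to work for Go.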

The key to cracking it came with a change of approach: not pure number-crunching, considering all the possible moves -- long the prevalent way of employing computers, since that seems to be where they have such a great edge over humans, able to crunch numbers so much more quickly -- but rather programming the computer to figure the game out for itself.

Instead of taking a purely top-down approach -- a program that is essentially a decision tree, with fixed rules set by the programmer -- the approach was essentially bottom-up: "allowing the algorithm to create its own decision tree based on training data".
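As a loose illustration of that bottom-up idea (scikit-learn and the toy iris dataset are our choices here, not anything from the book or DeepMind), a few lines let an algorithm grow its own decision tree from labeled examples, with no rules written by the programmer.

```python
# A decision tree induced from training data rather than hand-coded rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

# The branching rules printed below were learned, not programmed:
print(export_text(tree, feature_names=list(data.feature_names)))
```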

Among the interesting observations du Sautoy makes is that the game seemed to have been in a bit of a rut when AlphaGo came on the scene: human play was at a high level -- but perhaps at just a local, rather than an absolute, peak.

AlphaGo's self-learned approach to the game actually suggests new vistas for it -- though that perhaps can't quite compensate for the fact that the computer is now clearly better than humans, who will never catch up.

("Humankind now produces in two days the same amount of data it took us from the dawn of civilization until 2003 to generate", he notes -- alas, without any attribution or accounting for the numbers; it is indeed a whole new world -- or many, daily.)

For purely rule-based tasks -- like playing a game like Go -- this bottom-up approach leads to quick success, but other tasks still prove challenging.

It's quite a leap from that to successful automated storytelling -- but mathematician du Sautoy also states: "I think storytelling is actually the closest creative act to proving theorems", and considers how successful computers have been in his own creative field.

Perhaps because it is meant to be a text for the general reader (though published by a university press in the US), there is practically no supporting apparatus -- no foot- or endnotes, and no attributions for citations or data.

The end of humanity: Nick Bostrom at TEDxOxford

Swedish philosopher Nick Bostrom began thinking of a future full of human enhancement, nanotechnology and cloning long before they became mainstream ...

The A.I. Takeover Is Here! Should We Fear Artificial Intelligence?

John Lennox AI discussion starts at 22:30 - This is the shortened version. On Oct. 9, 2018, John Lennox addressed the critical questions surrounding artificial ...

The Promise of AI

Building on our popular primer on artificial intelligence [a16z.com/2016/06/10/ai-deep-learning-machines/] -- and a companion microsite [aiplaybook.a16z.com/] ...

Doha Debates: Artificial Intelligence

Advocates for AI defend it as manageable, arguing that the risks are marginal and the rewards life-improving, empowering more people with instant information.

Kevin Korb - A History of AI

'A history of Artificial Intelligence: AI as a degenerating scientific research program.' The goal of Artificial Intelligence (AI) as a discipline is to produce an Artificial ...

Introduction to AI and AI for Good


2018 Isaac Asimov Memorial Debate: Artificial Intelligence

Isaac Asimov's famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and ...

The long-term future of AI (and what we can do about it): Daniel Dewey at TEDxVienna

Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford.

Cambridge Ideas - The Emotional Computer

Can computers understand emotions? Can computers express emotions? Can they feel emotions? The latest video from the University of Cambridge shows ...

John Searle: "Consciousness in Artificial Intelligence" | Talks at Google

John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google is focused on the philosophy of mind and the ...