DeepMind’s AI has now outcompeted nearly all human players at StarCraft II

In January of this year, DeepMind announced it had hit a milestone in its quest for artificial general intelligence.

The results, published in Nature today, could have important implications for applications ranging from machine translation to digital assistants or even military planning.

A player must choose one of three human or alien races—Protoss, Terran, or Zerg—and alternate between gathering resources, building infrastructure and weapons, and attacking the opponent to win the game.

Every race has unique skill sets and limitations that affect the winning strategy, so players typically pick one and master it.

In order to attain such flexibility, the DeepMind team modified a commonly used technique known as self-play, in which a reinforcement-learning algorithm plays against itself to learn faster.

Taking inspiration from the way pro StarCraft II players train with one another, the researchers instead programmed one of the algorithms to expose the flaws of the other rather than maximize its own chance of winning.

“These friends should show you what your weaknesses are, so then eventually you can become stronger.” The method produced much more generalizable algorithms that could adapt to a broader range of game scenarios.
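This exploiter dynamic can be sketched on a toy zero-sum game. The sketch below is purely illustrative and assumes a drastically simplified setting (rock-paper-scissors stands in for StarCraft II, and names like `best_response` and `main_agent` are inventions, not DeepMind's training code): the exploiter computes the move that best punishes the current main agent's habits, while the main agent is nudged toward a mixture that holds up against the whole pool of past opponents.

```python
# Toy zero-sum game: rock-paper-scissors stands in for StarCraft II.
# A strategy is a probability vector over the three moves.
MOVES = 3
BEATS = {0: 2, 1: 0, 2: 1}  # rock beats scissors, paper beats rock, scissors beats paper

def best_response(q):
    """Pure strategy maximizing the win rate against strategy q.
    This is the exploiter's objective: punish one opponent's habits
    rather than play well in general."""
    target = max(range(MOVES), key=lambda m: q[BEATS[m]])
    p = [0.0] * MOVES
    p[target] = 1.0
    return p

main_agent = [0.8, 0.1, 0.1]   # starts with a lopsided habit (overuses rock)
league = [main_agent[:]]       # pool of past opponents

for _ in range(20):
    exploiter = best_response(main_agent)  # trained only to expose the main agent
    league.append(exploiter)
    # The main agent instead optimizes against the whole league average,
    # slowly mixing toward a strategy with fewer exploitable habits.
    avg = [sum(q[m] for q in league) / len(league) for m in range(MOVES)]
    counter = best_response(avg)
    main_agent = [0.9 * p + 0.1 * c for p, c in zip(main_agent, counter)]

print([round(x, 2) for x in main_agent])
```

After a few rounds the main agent's overused move is punished and its strategy becomes more mixed, which is the generalization effect the quote describes.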

DeepMind’s StarCraft 2 AI is now better than 99.8 percent of all human players

DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II.

DeepMind also limited AlphaStar to viewing only the portion of the map a human player would see, and capped it at 22 non-duplicated actions every five seconds of play, to align it with standard human speed.
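The cap described above amounts to a sliding-window rate limit. Below is a minimal sketch under one simple interpretation: duplicated (repeated) actions are exempt, and actions over the cap are rejected. `ActionLimiter` and its interface are hypothetical, not DeepMind's actual code.

```python
from collections import deque

class ActionLimiter:
    """Illustrative sliding-window cap: at most `limit` non-duplicated
    actions within any `window`-second span; excess actions are rejected."""

    def __init__(self, limit=22, window=5.0):
        self.limit = limit
        self.window = window
        self.history = deque()   # timestamps of accepted, counted actions
        self.last_action = None

    def submit(self, t, action):
        # Expire timestamps that have fallen out of the sliding window.
        while self.history and t - self.history[0] >= self.window:
            self.history.popleft()
        # A repeat of the previous action does not count against the cap.
        if action == self.last_action:
            return True
        if len(self.history) >= self.limit:
            return False  # over the cap: the agent must wait
        self.history.append(t)
        self.last_action = action
        return True

limiter = ActionLimiter()
# 40 distinct actions fired every 0.1 s: only the first 22 get through.
accepted = sum(limiter.submit(t=i * 0.1, action=f"move_{i}") for i in range(40))
print(accepted)
```

Under this reading, a superhuman burst of clicks is simply throttled to a human-plausible rate, which is the point of the restriction.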

DeepMind sees the advancement as more proof that general-purpose reinforcement learning, the machine learning technique underpinning the training of AlphaStar, may one day be used to train self-learning robots and self-driving cars, and to create more advanced image and object recognition systems.

StarCraft II also involves hidden information: “players have less information about their opponents than in poker.” Back in January, DeepMind announced that its AlphaStar system was able to best top pro players 10 matches in a row during a prerecorded session, but it lost to pro player Grzegorz “MaNa” Komincz in a final match streamed live online.

This research milestone closely aligns with a similar one from San Francisco-based AI research company OpenAI, which has been training AI agents using reinforcement learning to play the sophisticated five-on-five multiplayer game Dota 2.

Back in April, the most sophisticated version of the OpenAI Five software, as it’s called, bested the world champion Dota 2 team after only narrowly losing to two less capable e-sports teams the previous summer.

Instead, it’s to prove that — with enough time, effort, and resources — sophisticated AI software can best humans at virtually any competitive cognitive challenge, be it a board game or a modern video game.

It’s also to show the benefits of reinforcement learning, a special brand of machine learning that’s seen massive success in the last few years when combined with huge amounts of computing power and training methods like virtual simulation.

“Though some of AlphaStar’s strategies may at first seem strange, I can’t help but wonder if combining all the different play styles it demonstrated could actually be the best way to play the game.” DeepMind hopes advances in reinforcement learning achieved by its lab and fellow AI researchers may be more widely applicable at some point in the future.

A.I. Mastered Backgammon, Chess and Go. Now It Takes On StarCraft II

Polish pro player Grzegorz “MaNa” Komincz struck a blow for humankind when he defeated a multi-million-dollar artificial intelligence agent known as AlphaStar, designed specifically to pummel human players in the popular real-time strategy game.

The public loss in front of tens of thousands of eSports fans was a blow for Google parent company Alphabet’s London-based artificial intelligence subsidiary, DeepMind, which developed AlphaStar.

In the months since, AlphaStar has only grown stronger and is now able to defeat 99.8 percent of StarCraft II players online, achieving Grandmaster rank in the game on the official site Battle.net, a feat described today in a new paper in the journal Nature.

A.I. agents have slowly but surely come to dominate the world of games, and mastering beloved human strategy games has become one of the chief ways artificial intelligence is assessed.

More recently, in 2016, DeepMind’s AlphaGo beat the best human players of the Chinese game Go, a complex board game with hundreds of possible moves each turn that some believed A.I. would never master.

Late last year, AlphaZero, the next iteration of the A.I., not only taught itself to become the best chess player in the world in just four hours; it also mastered the chess-like Japanese game shogi in two hours, as well as Go in just days.

A.I. research is now moving away from classic board games to video games, which, with their combination of physical dexterity, strategy, and randomness, can be much harder for machines to master.

A.I. agents, including DeepMind’s FTW algorithm, which earlier this year studied teamwork while playing the video game Quake III Arena, learned to master games by playing against versions of themselves.

While some of the opponents in the League were hell-bent on winning the game, others were more willing to take a walloping to help expose weaknesses in AlphaStar’s strategies, like a practice squad helping a quarterback work out plays.

The researchers write that the work advances the field “in several key ways: multi-agent training in a competitive league can lead to great performance in highly complex environments, and imitation learning alone can achieve better results than we’d previously supposed.”

In the meantime, the company’s various machine learning projects have been taking on more earthly problems, like figuring out how to fold proteins, deciphering ancient Greek texts, and learning to diagnose eye diseases as well as or better than doctors.
