# AI News, Using Artificial Intelligence to solve the 2048 Game (JAVA code)

- On Friday, June 8, 2018

## Using Artificial Intelligence to solve the 2048 Game (JAVA code)

In this article I will briefly discuss my approach for building the Artificial Intelligence solver of the game 2048. I will describe the heuristics that I used, and I will provide the complete code, which is written in Java.

When you perform a move, all the values of the grid slide towards that direction; each tile stops either when it reaches the border of the grid or when it reaches another non-empty cell. If that cell has the same value, the two cells are merged into one cell with double the value.

At the end of every move a random value is added to one of the empty cells of the board; the new value is 2 with probability 0.9 or 4 with probability 0.1.

A nice simplification of the algorithm can be performed if we fix the direction towards which we can combine the pieces and rotate the board accordingly to perform the move.
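This simplification can be sketched as follows (a minimal JavaScript illustration of the idea, not the article's actual Java code; the grid representation, rotation counts, and function names are my own):

```javascript
// Fix "left" as the only merge direction; any move becomes:
// rotate the board onto "left", merge each row left, rotate back.
function rotateCW(grid) { // rotate an n x n grid 90 degrees clockwise
  var n = grid.length;
  return grid.map(function (row, i) {
    return row.map(function (_, j) { return grid[n - 1 - j][i]; });
  });
}

function mergeRowLeft(row) { // slide and merge one row towards the left
  var tiles = row.filter(function (x) { return x !== 0; });
  var out = [];
  for (var i = 0; i < tiles.length; i++) {
    if (tiles[i] === tiles[i + 1]) { // equal neighbours merge into double value
      out.push(tiles[i] * 2);
      i++; // skip the merged partner
    } else {
      out.push(tiles[i]);
    }
  }
  while (out.length < row.length) out.push(0); // pad with empty cells
  return out;
}

function moveGrid(grid, rotations) { // rotations: CW turns mapping the move onto "left"
  for (var r = 0; r < rotations; r++) grid = rotateCW(grid);
  grid = grid.map(mergeRowLeft);
  for (var s = 0; s < (4 - rotations) % 4; s++) grid = rotateCW(grid);
  return grid;
}
```

For example, a "right" move is two clockwise rotations, a merge to the left, and two rotations back.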

Below is a high-level description of the architecture of the implementation: the Board class contains the main code of the game and is responsible for moving the pieces, calculating the score, validating whether the game has terminated, and so on.

The Alpha-beta pruning algorithm is an extension of minimax which heavily decreases (prunes) the number of nodes that we must evaluate/expand.
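As a generic sketch (my own JavaScript illustration over a plain tree of `{ value, children }` nodes, not the article's Java implementation), minimax with alpha-beta pruning looks like this:

```javascript
// Minimax with alpha-beta pruning over a generic game tree.
// A node is { children: [...], value: n }; value is only read at leaves.
function alphabeta(node, depth, alpha, beta, maximizing) {
  if (depth === 0 || node.children.length === 0) return node.value;
  if (maximizing) {
    var best = -Infinity;
    for (var i = 0; i < node.children.length; i++) {
      best = Math.max(best, alphabeta(node.children[i], depth - 1, alpha, beta, false));
      alpha = Math.max(alpha, best);
      if (beta <= alpha) break; // beta cut-off: the minimizer will never allow this branch
    }
    return best;
  } else {
    var best = Infinity;
    for (var j = 0; j < node.children.length; j++) {
      best = Math.min(best, alphabeta(node.children[j], depth - 1, alpha, beta, true));
      beta = Math.min(beta, best);
      if (beta <= alpha) break; // alpha cut-off
    }
    return best;
  }
}
```

Called as `alphabeta(root, depth, -Infinity, Infinity, true)`, it returns the same value as plain minimax while skipping branches that cannot change the result.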

On the other hand, the computer in the original game is not specifically programmed to block the user by selecting the worst possible move for him, but rather inserts values randomly on the empty cells.

Nevertheless, despite the fact that only the first player tries to maximize his/her score, the choices of the computer can block the player's progress and stop the user from completing the game.

The main idea is not to use the score alone to evaluate each game-state but instead construct a heuristic composite score that includes the aforementioned scores.

Additionally, unlike other implementations, I don't aggressively prune the choices of the computer using arbitrary rules; instead I take all of them into account in order to find the best possible move for the player.

The heuristic that I found most useful combines the actual score of the board, the number of empty cells/tiles, and a metric called the clustering score, which we will discuss later.

Finally we should note that when the player reaches a terminal game state and no more moves are allowed, we don’t use the above score to estimate the value of the state.

If the game is won we assign the highest possible integer value, while if the game is lost we assign the lowest non-negative value (0 or 1, with similar logic as in the previous paragraph).
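A minimal sketch of such a composite evaluation (the weights and the exact clustering metric here are illustrative assumptions, not the article's actual formula):

```javascript
// Composite evaluation: actual score + bonus for empty cells - penalty for
// poorly clustered tiles. Terminal states short-circuit to extreme values.
function clusteringScore(grid) {
  // Sum of absolute differences between each tile and its non-empty
  // neighbours: lower means similar tiles sit together (easier to merge).
  var total = 0;
  for (var i = 0; i < grid.length; i++) {
    for (var j = 0; j < grid[i].length; j++) {
      if (grid[i][j] === 0) continue;
      [[i - 1, j], [i + 1, j], [i, j - 1], [i, j + 1]].forEach(function (p) {
        var v = (grid[p[0]] || [])[p[1]];
        if (v) total += Math.abs(grid[i][j] - v);
      });
    }
  }
  return total;
}

function evaluate(grid, actualScore, status) {
  if (status === 'won') return Number.MAX_SAFE_INTEGER; // highest possible value
  if (status === 'lost') return 0;                      // lowest non-negative value
  var empty = grid.reduce(function (n, row) {
    return n + row.filter(function (x) { return x === 0; }).length;
  }, 0);
  // Illustrative weights; the article tunes its own.
  return actualScore + 10 * empty - 0.5 * clusteringScore(grid);
}
```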

In my tests, a search of depth 3 takes less than 0.05 seconds but gives a 20% chance of winning; a depth of 5 takes about 1 second but gives a 40% chance of winning; and a depth of 7 takes 27-28 seconds and gives about a 70-80% chance of winning.

- On Thursday, June 21, 2018

## Minimax Algorithm in Game Theory | Set 3 (Tic-Tac-Toe AI – Finding optimal move)

Prerequisites: Minimax Algorithm in Game Theory, Evaluation Function in Game Theory. Let us combine what we have learnt so far about minimax and evaluation functions to write a proper Tic-Tac-Toe AI (Artificial Intelligence) that plays a perfect game.

To check whether or not the current move is better than the best move, we take the help of the minimax() function, which considers all the possible ways the game can go and returns the best value for that move, assuming the opponent also plays optimally.

The code for the maximizer and minimizer in the minimax() function is similar to findBestMove(); the only difference is that, instead of returning a move, it returns a value.

This means that in case of a victory it will choose the victory which takes the least number of moves, and in case of a loss it will try to prolong the game and play as many moves as possible.
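A compact sketch of the pair (my own JavaScript illustration; the article's actual code differs): the board is an array of 9 cells holding 'x', 'o', or null, and the depth term makes the AI prefer quick wins and drawn-out losses.

```javascript
// minimax() returns a value; findBestMove() returns a position.
function winner(b) {
  var lines = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];
  for (var k = 0; k < lines.length; k++) {
    var l = lines[k];
    if (b[l[0]] && b[l[0]] === b[l[1]] && b[l[1]] === b[l[2]]) return b[l[0]];
  }
  return null;
}

function minimax(b, depth, isMax) { // 'x' maximizes, 'o' minimizes
  var w = winner(b);
  if (w === 'x') return 10 - depth; // quicker wins score higher
  if (w === 'o') return depth - 10; // slower losses score higher
  if (b.every(function (c) { return c; })) return 0; // draw
  var best = isMax ? -Infinity : Infinity;
  for (var i = 0; i < 9; i++) {
    if (b[i]) continue;
    b[i] = isMax ? 'x' : 'o';
    var v = minimax(b, depth + 1, !isMax);
    b[i] = null; // undo the move
    best = isMax ? Math.max(best, v) : Math.min(best, v);
  }
  return best;
}

function findBestMove(b) { // best cell for 'x'
  var bestVal = -Infinity, bestMove = -1;
  for (var i = 0; i < 9; i++) {
    if (b[i]) continue;
    b[i] = 'x';
    var v = minimax(b, 0, false);
    b[i] = null;
    if (v > bestVal) { bestVal = v; bestMove = i; }
  }
  return bestMove;
}
```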

Among the possible scenarios in the above example, remember that even though X has a possibility of winning if he plays the middle move, O will never let that happen and will choose to draw instead.

Remember, this implementation of the minimax algorithm can be applied to any two-player board game with some minor changes to the board structure and to how we iterate through the moves.

- On Thursday, June 21, 2018

## Java Graphics Tutorial


To test the various AI strategies, an abstract superclass called AIPlayer is defined, which takes the Board as an argument in its constructor (because you need the board position to compute the next move).

The simplest computer strategy is to place the seed on the first empty cell in this order: the center, one of the four corners, one of the four sides.
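That ordering can be written as a simple table-driven move (a sketch of the rule; the indices assume a 3x3 board in row-major order, which is my own representation):

```javascript
// Rule-based strategy: take the first empty cell in the preference order
// center, corners, then sides (row-major indices of a 3x3 board).
var PREFERRED = [4, 0, 2, 6, 8, 1, 3, 5, 7];

function ruleBasedMove(board) { // board: array of 9 cells, null when empty
  for (var k = 0; k < PREFERRED.length; k++) {
    if (!board[PREFERRED[k]]) return PREFERRED[k];
  }
  return -1; // no empty cell left
}
```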

In this strategy, we need to formulate a heuristic evaluation function which returns a relative score, e.g., +∞ for a computer win and -∞ for a player win.

In Tic-Tac-Toe, a possible heuristic evaluation function for the current board position is: compute the score for each of the 8 lines (3 rows, 3 columns and 2 diagonals) and obtain the sum.

To implement this strategy, you need to compute the score for all the valid moves, and place the seed at the position with the highest score.

For Tic-Tac-Toe, the function could be as simple as returning +1 if the computer wins, -1 if the player wins, or 0 otherwise.

A better evaluation function for Tic-Tac-Toe is: compute the scores for each of the 8 lines (3 rows, 3 columns and 2 diagonals) and obtain the sum.
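One common way to score each line is sketched below (my own illustration; the per-line values 1/10/100 are an assumption in the spirit of the text, and a line containing both sides' seeds scores 0):

```javascript
// Sum the scores of all 8 lines. Per line: 3-in-a-line = 100, 2 = 10,
// 1 = 1 (negated for the opponent); a mixed line contributes 0.
var LINES = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];

function evaluateBoard(b, me, opp) {
  var total = 0;
  LINES.forEach(function (line) {
    var mine = 0, theirs = 0;
    line.forEach(function (i) {
      if (b[i] === me) mine++;
      else if (b[i] === opp) theirs++;
    });
    if (mine && theirs) return;                    // mixed line scores 0
    if (mine) total += Math.pow(10, mine - 1);     // 1, 10, 100
    else if (theirs) total -= Math.pow(10, theirs - 1);
  });
  return total;
}
```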

The leaf nodes (terminal nodes, or nodes at the maximum depth of 4) are evaluated using the heuristic evaluation function, obtaining the values shown.

The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the root node, where it chooses the move with the maximum value.

Alpha-beta pruning seeks to reduce the number of nodes that need to be evaluated in the search tree by the minimax algorithm.

- On Thursday, June 21, 2018

## What is the optimal algorithm for the game 2048?

My attempt uses expectimax like other solutions above, but without bitboards.

Nneonneo's solution can check 10 million moves, which is approximately a depth of 4 with 6 tiles left and 4 moves possible ((2*6*4)^4).

In my case, this depth takes too long to explore, so I adjust the depth of the expectimax search according to the number of free tiles left. The scores of the boards are computed as the weighted sum of the square of the number of free tiles and the dot product of the 2D grid with a snake-shaped weight matrix, which forces the tiles to be organized descendingly in a sort of snake from the top-left tile.

```javascript
// snake-shaped weight matrix (the first rows are truncated in this excerpt)
var snake = [ /* … */
             [-3.8, -3.7, -3.5, -3]]
snake = snake.map(function (a) { return a.map(Math.exp) })
```

```javascript
function move(p, ai) { // 0: up, 1: right, 2: down, 3: left
  var newgrid = mv(p, ai.grid);
  // … (truncated)
}

// from the best-move search (surrounding function truncated in this excerpt):
var x = expandMove(root, ai)
//console.log('number of leaves', x)
//console.log('number of leaves2', countLeaves(root))
if (!root.children.length) return null
var values = root.children.map(expectimax);
```

```javascript
function expectimax(node) {
  if (!node.children.length) {
    return node.score
  } else {
    var values = node.children.map(expectimax);
    if (node.prob) { // we are at a max node
      return Math.max.apply(null, values)
    } else { // we are at a random node
      var avg = 0;
      for (var i = 0; i < values.length; i++)
        avg += node.children[i].prob * values[i]
      return avg / (values.length / 2)
    }
  }
}
```

```javascript
// from the random-tile expansion (beginning truncated; the analogous child2
// with probability .9 is built just above this excerpt)
var child4 = {grid: grid4, prob: .1, path: node.path, children: []}
node.children.push(child2)
node.children.push(child4)
x += expandMove(child2, ai)
x += expandMove(child4, ai)
// … (truncated)
return x;
```

```javascript
// from the move expansion (the head of the enclosing condition is truncated
// in this excerpt: "… ai.depth) {")
for (var move of [0, 1, 2, 3]) {
  var grid = mv(move, node.grid);
  var child = {grid: grid, path: node.path.concat([move]), children: []}
  node.children.push(child)
  x += expandRandom(child, ai)
}
// … (truncated)
if (isLeaf) node.score = dot(ai.weights, stats(node.grid))
return isLeaf ? // … (truncated)
```

```javascript
var map = {
  38: 0, // Up
  39: 1, // Right
  40: 2, // Down
  37: 3, // Left
};
document.addEventListener('keydown', function (event) {
  if (event.which in map) {
    move(map[event.which], ai)
    console.log(stats(ai.grid))
    updateUI(ai)
  }
})
```

```javascript
function dist2(a, b) { // squared 2D distance
  return Math.pow(a[0] - b[0], 2) + Math.pow(a[1] - b[1], 2)
}

function freeCells(grid) { // count the empty (zero) cells
  return grid.reduce(function (v, a) {
    return v + a.reduce(function (t, x) { return t + (x == 0) }, 0)
  }, 0)
}
```

```javascript
// random-tile insertion (surrounding loops truncated in this excerpt):
//   if (!grid[i][j]) { if (_r == r) { grid[i][j] = Math.random() < …

// dot product of a grid with the weight matrix (loop heads truncated):
//   r += a[i][j] * b[3 - i][3 - j]  …  return r

// transformation matrices in the 4 directions
// g[i][j] = [up, right, down, left]
ig[i][j] = [[j, i], [i, n - 1 - j], [n - 1 - j, i], [i, j]]; // the inverse transformations

this.transform = function (k, grid) { return this.transformer(k, grid, g) }
this.itransform = function (k, grid) { // inverse transform
  return this.transformer(k, grid, ig)
}
this.transformer = function (k, grid, mat) {
  var newgrid = [];
  // … (truncated)
}
```

- On Thursday, June 21, 2018

## How to make your Tic Tac Toe game unbeatable by using the minimax algorithm

I struggled for hours scrolling through tutorials, watching videos, and banging my head on the desk trying to build an unbeatable Tic Tac Toe game with a reliable Artificial Intelligence.

It keeps playing ahead until it reaches a terminal arrangement of the board (terminal state) resulting in a tie, a win, or a loss.

Once in a terminal state, the AI will assign an arbitrary positive score (+10) for a win, a negative score (-10) for a loss, or a neutral score (0) for a tie.

Since minimax evaluates every state of the game (hundreds of thousands), a near-end state allows you to follow minimax's recursive calls more easily.

Additionally, you need a function that looks for winning combinations and returns true if it finds one, and a function that lists the indexes of available spots in the board.
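Those two helpers can be sketched as follows (the names follow the article's description; the exact signatures and board representation are my assumption):

```javascript
// winning: true if `player` holds a complete line on `board`
function winning(board, player) {
  var combos = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];
  return combos.some(function (c) {
    return c.every(function (i) { return board[i] === player; });
  });
}

// emptyIndexes: list the indexes of the available spots on the board
function emptyIndexes(board) {
  return board.reduce(function (acc, cell, i) {
    if (cell !== 'x' && cell !== 'o') acc.push(i);
    return acc;
  }, []);
}
```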

Therefore, make an array called moves and loop through empty spots while collecting each move’s index and score in an object called move.

Later, set the empty spot on the newboard to the current player and call the minimax function with the other player and the newly changed newboard.

If the player is aiPlayer, the algorithm sets a variable called bestScore to a very low number and loops through the moves array; if a move has a higher score than bestScore, the algorithm stores that move.

In the next section, let’s go over the code line by line to better understand how the minimax function behaves given the board shown in figure 2.

Note: In figure 3, large numbers represent each function call and levels refer to how many steps ahead of the game the algorithm is playing.

- On Wednesday, September 18, 2019

**Optimal Strategy Game Pick from Ends of array Dynamic Programming**

N pots, each with some number of gold coins, are arranged in ..

**Programming Interviews: Find Path in NXN Maze (Backtracking problem)**

This video is produced by IITian S.Saurabh. He is B.Tech from IIT and MS from USA. Given a NXN maze, find a path from top left cell to bottom right cell given ...

**How to use Q Learning in Video Games Easily**

Only a few days left to signup for my Decentralized Applications course! In this video, I go over the history of reinforcement learning then ..

**JavaScript Tic Tac Toe Project Tutorial - Unbeatable AI w/ Minimax Algorithm**

A full web development tutorial for beginners that demonstrates how to create an unbeatable tic tac toe game using vanilla JavaScript, HTML, and CSS.

**Anomaly Detection: Algorithms, Explanations, Applications**

Anomaly detection is important for data cleaning, cybersecurity, and robust AI systems. This talk will review recent work in our group on (a) benchmarking ...

**CppCon 2017: Nicholas Ormrod “Fantastic Algorithms and Where To Find Them”**

Presentation Slides, PDFs, Source Code and other presenter materials are available at: — Come dive into some ..

**WWDC 2018 Keynote — Apple**

Apple WWDC 2018. Four OS updates. One big day. Take a look at updates for iPhone and iPad, Mac, Apple Watch, and Apple TV. 9:54 — Announcing iOS 12 ...

**Lecture 14 | Deep Reinforcement Learning**

In Lecture 14 we move from supervised learning to reinforcement learning (RL), in which an agent must learn to interact with an environment in order to ...

**Total Ways in Matrix Dynamic Programming**

Given a 2 dimensional matrix, how many ways you can reach bottom right from top left provided you can only move down and right.

**CellProfiler: Classifying Cells with Machine Learning**

Copyright Broad Institute, 2013. All rights reserved. CellProfiler Analyst contains a machine-learning tool for identifying complex and subtle cellular phenotypes.