We're back to continue talking about Reversi performance ideas, this time covering move tree search algorithms.

Finding the Best Moves

The idea of all this thinking ahead is for the AI to determine what it thinks is the best move to make. All of the algorithms mentioned in this section are equivalent in that they will return the same move; some are just more efficient about finding it than others. If you're willing to (potentially) sacrifice accuracy for a significant speed gain, take a look at Multi-ProbCut... just make sure your evaluation function is good, or you will be skipping good moves for potentially bad ones, since that algorithm doesn't do an exhaustive evaluation.

There are many algorithms for searching for the best move, but I am only going to cover the more basic ones. If you're interested in more and better algorithms, look up things like NegaScout, MTD(f), and/or SSS*, all of which should work at least as well as the best listed here.

Before we start, there is a basic concept to understand in order for this section to make sense: Reversi is a zero-sum game - basically, what's good for one player is equally bad for the other and vice-versa. This fact can be exploited for significant gain.
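
To put the zero-sum idea in symbols (score_A and score_B are just notation here, not functions from any particular engine): if score_A(p) is how good position p looks for player A, then score_B(p) = -score_A(p). Maximizing your own score and minimizing your opponent's are the same search seen from opposite sides, which is exactly what the algorithms below rely on.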

Minimax

This is the basic search algorithm used by an AI in a two-person, zero-sum game like Reversi.

If it's thinking more than 1 ply ahead, the AI will need to consider what the opponent will do. Since it's not really possible to know what move the opponent will make, the safest (most conservative) idea is to assume the opponent will make the best possible move (as far as the AI is able to determine it). Because Reversi is a zero-sum game, a simplification of this is to alternate between looking for the move with the maximum score (when it's the AI's turn) and the minimum score (when it's the opponent's turn). When this is done properly, it guarantees that the best possible move is chosen (assuming the game is evaluated to the end by the AI - otherwise it's just the best possible move as the AI sees it). By assuming the best possible opponent, it won't be caught by a surprise move.
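
To make that concrete, here's a minimal minimax sketch in Python. The helpers legal_moves(), apply_move(), and evaluate() are hypothetical placeholders for whatever board representation and evaluation function you're using, and a real Reversi search would also need to handle the forced-pass case when a player has no legal move.

    def minimax(board, depth, maximizing):
        # Score 'board' by searching 'depth' plies ahead.
        moves = legal_moves(board, maximizing)
        if depth == 0 or not moves:
            return evaluate(board)  # static score from the AI's point of view
        if maximizing:
            # The AI's turn: assume it picks the move with the highest score.
            return max(minimax(apply_move(board, m, maximizing), depth - 1, False)
                       for m in moves)
        else:
            # The opponent's turn: assume they pick the move worst for the AI.
            return min(minimax(apply_move(board, m, maximizing), depth - 1, True)
                       for m in moves)

At the root you'd call this once per legal move and keep whichever move scores highest.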

Alpha-Beta Pruning

With minimax, every move position is evaluated... the algorithm itself is not necessarily any faster than any other exhaustive algorithm you might choose. Alpha-Beta (A/B) pruning changes that by eliminating (pruning) certain parts of the move tree. Understanding how this works requires a solid grasp of minimax, so make sure that makes sense to you before going forward.

The general idea behind A/B pruning is to take advantage of the knowledge that you're going to alternate between minimizing and maximizing the move scores used. Wikipedia has a good article explaining A/B pruning, including an animated illustration that is very helpful. I recommend studying it if you're having a hard time understanding A/B pruning.

I'll try an example of how it works: let's say the opponent is moving at depth 2, so you're looking for the move with the lowest score. You search the entire subtree under the first move at depth 2 and get the value 3, then start looking at the 2nd move at depth 2. As you look at the resulting available moves at depth 3, you evaluate the first one and get the score 4. Since you're maximizing at depth 3, the final result for that subtree must be at least 4, the score of its first move. Now look back at depth 2, where we're minimizing: the first move there scored 3. Since 3 is less than 4, and this 2nd move at depth 2 can only end up scoring 4 or more, the 2nd move will never be taken because the first move would be taken instead. The rules of A/B pruning take advantage of this bit of logic and allow the evaluation to skip the rest of the 2nd move's subtree. This can cut out a significant amount of evaluation (the deeper you're searching, the more you save) without missing any moves.
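
Here's the earlier minimax sketch with alpha/beta bounds added, so the logic from the example above becomes an explicit cutoff test. The same hypothetical helpers (legal_moves(), apply_move(), evaluate()) are assumed.

    def alphabeta(board, depth, alpha, beta, maximizing):
        moves = legal_moves(board, maximizing)
        if depth == 0 or not moves:
            return evaluate(board)
        if maximizing:
            best = float('-inf')
            for m in moves:
                score = alphabeta(apply_move(board, m, maximizing), depth - 1,
                                  alpha, beta, False)
                best = max(best, score)
                alpha = max(alpha, best)   # best score the maximizer can force so far
                if alpha >= beta:
                    # The minimizer above already has an option scoring <= beta, so it
                    # will never let play reach this position; skip the remaining moves.
                    break
            return best
        else:
            best = float('inf')
            for m in moves:
                score = alphabeta(apply_move(board, m, maximizing), depth - 1,
                                  alpha, beta, True)
                best = min(best, score)
                beta = min(beta, best)     # best score the minimizer can force so far
                if alpha >= beta:
                    # The maximizer above already has an option scoring >= alpha, so
                    # this branch can't improve on it; skip the remaining moves.
                    break
            return best

The initial call is alphabeta(board, depth, float('-inf'), float('inf'), True). The bounds tighten as the search progresses, and the pruning never changes which move is returned, only how much work it takes to find it.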

Now that the big performance changes are out of the way, it's time for another break. Next time I'll give examples of small-scale optimizations.
