AI Othello: Mick G.D. Remmerswaal April 23, 2020
1 Introduction
Playing games is fun and winning them is even more fun, but how does one win a game? Playing
as well as possible is probably the answer. But what does that mean? How does one play to his
or her best potential? How does one know which move is best? How does one react to the moves
an opponent makes? The answers to these questions form a path to an optimized winning strategy.
Playing according to this strategy would yield more wins, but calculating this strategy is often
far from simple. Games with many variables, such as chess or Go, are tough for humans to handle.
Chess grandmaster G. Kasparov could calculate around 12 moves ahead in certain positions [4];
looking further than that is nearly impossible for a human.
Computers, though, do not have the same problem as humans. They are far faster at calculation
and, one could argue, are not limited by human memory. A computer with a well-built algorithm
can search through different game states and calculate which move is best at that moment.
This report is about developing and implementing a search algorithm for the game Othello. This
is accomplished by implementing the Min Max Algorithm and extending it with Alpha Beta
pruning. Finally, some experiments are performed comparing the different algorithms with each other.
2 Othello
Othello [12], otherwise known as Reversi, is a two-player strategy game usually played on an
eight-by-eight checkered board.
The rules are simple. Black starts by placing a disc in such a way that there is at least one straight
(horizontal, vertical, or diagonal) occupied line between the new disc and another black disc, with
one or more white discs between them.
When this happens, all enclosed white discs flip colour to black and the turn passes to White. White
must place a disc in the same manner, with the colours reversed.
One extra rule is added: if there are no possible moves, the turn is forfeited and the opponent
takes another turn. The game is won by the player holding the most discs when the board is full,
when one player is eliminated from the board, or when neither player can take a turn.
(a) Start situation in the middle of the board. (b) A placement of a Black disc and the flip of a White disc.
An example of a flip or capture is seen in Image 2: the starting position is shown in Image 2a,
and the result after placing one disc in Image 2b. After the placement of a Black disc above the
White disc, the White disc flips colour and White takes its turn [12].
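The placement rule above can be sketched in code. This is a minimal illustration on a plain 8x8 character board ('B', 'W', '.'), not the engine class used later in this report; the function names capturesInDirection and isLegalMove are hypothetical.

```cpp
#include <string>
#include <vector>

// A move by `me` at (i, j) captures along direction (di, dj) when a
// run of one or more opponent discs is closed off by another of the
// mover's own discs.
bool capturesInDirection(const std::vector<std::string>& b,
                         int i, int j, int di, int dj, char me) {
    char opp = (me == 'B') ? 'W' : 'B';
    int r = i + di, c = j + dj, seen = 0;
    while (r >= 0 && r < 8 && c >= 0 && c < 8 && b[r][c] == opp) {
        ++seen;                        // count enclosed opponent discs
        r += di;
        c += dj;
    }
    // at least one enclosed opponent disc, terminated by one of ours
    return seen > 0 && r >= 0 && r < 8 && c >= 0 && c < 8 && b[r][c] == me;
}

// A placement is legal when at least one of the eight directions
// captures; flipping then recolours the enclosed discs.
bool isLegalMove(const std::vector<std::string>& b, int i, int j, char me) {
    if (b[i][j] != '.') return false;
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj)
            if ((di || dj) && capturesInDirection(b, i, j, di, dj, me))
                return true;
    return false;
}
```

A real engine would also perform the flips; this sketch only checks the enclosure condition.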
3 Assignment
The assignment of this report, found at [11], is to develop and implement the Min Max Algorithm
with and without Alpha Beta pruning. At least two functions must be written before the algorithm
can work: an evaluation function and a function that builds the search tree.
• int evaluate, this function evaluates the given game state once a certain depth has been
reached.
• int gametree, this function builds the search tree used to compare the scores given by the
evaluate function, and eventually returns the number of the best move.
As mentioned, the first step is to develop the Min Max algorithm. The Min Max Algorithm, explained
in Chapter 5 of [14], uses depth-first recursion to compute a score for a given game state.
This score is then used to evaluate the goodness of the game state: higher scores are good for MAX
and lower scores are good for MIN, hence the name of the algorithm.
Using this principle, the Min Max algorithm performs a complete depth-first search through the game
tree, with every possible move in the current game state representing a new branch. Eventually the
algorithm reaches the leaves, where it applies the evaluation function, and the resulting score is
backed up to the root node.
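The depth-first back-up described above can be sketched over an explicit tree. The Node type below is a hypothetical stand-in for illustration, not the report's othello class or gametree function:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// A leaf carries an evaluation score; an inner node carries children.
struct Node {
    int score = 0;               // used only at leaves
    std::vector<Node> children;  // empty => leaf
};

// Depth-first recursion: MAX picks the highest child value, MIN the
// lowest; leaf scores are backed up to the root.
int minimax(const Node& n, bool maxToMove) {
    if (n.children.empty())
        return n.score;          // evaluate at the leaf
    int best = maxToMove ? std::numeric_limits<int>::min()
                         : std::numeric_limits<int>::max();
    for (const Node& c : n.children) {
        int v = minimax(c, !maxToMove);
        best = maxToMove ? std::max(best, v) : std::min(best, v);
    }
    return best;
}
```

In a real game the children would be generated from the legal moves instead of stored explicitly.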
The second step of this assignment is the implementation of Alpha Beta pruning alongside the
Min Max algorithm. Alpha Beta pruning is used to reduce the number of game states the
algorithm has to look at before reaching a decision.
Pruning is done on the basis that a certain branch can never influence the outcome
of the algorithm; such a branch is pruned and the algorithm continues. Alpha Beta pruning uses
two variables, α and β, to check the influence of a branch.
α is the best value found so far along the path for MAX, and β is the best value found so far along
the path for MIN. The algorithm continuously updates α in MAX nodes and β in MIN nodes while
using them as checks: as soon as β ≤ α, the remaining branches of the current node cannot influence
the result, so they are pruned and the algorithm continues with a sibling node, if available.
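The pruning test can be sketched the same way, again over an explicit tree with a hypothetical ABNode type rather than the report's gametree function. Once β ≤ α, the remaining siblings of the current node are skipped:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct ABNode {
    int score = 0;                 // used only at leaves
    std::vector<ABNode> children;  // empty => leaf
};

// alpha: best value so far for MAX along the path;
// beta:  best value so far for MIN along the path.
int alphabeta(const ABNode& n, int alpha, int beta, bool maxToMove) {
    if (n.children.empty())
        return n.score;
    if (maxToMove) {
        int best = std::numeric_limits<int>::min();
        for (const ABNode& c : n.children) {
            best = std::max(best, alphabeta(c, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (beta <= alpha) break;   // prune remaining siblings
        }
        return best;
    } else {
        int best = std::numeric_limits<int>::max();
        for (const ABNode& c : n.children) {
            best = std::min(best, alphabeta(c, alpha, beta, true));
            beta = std::min(beta, best);
            if (beta <= alpha) break;   // prune remaining siblings
        }
        return best;
    }
}
```

The result is the same as plain minimax; only the amount of work differs.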
4 Relevant work
Othello/Reversi has often been used as a playing ground for developing search algorithms.
This can be seen in [1], which makes use of genetic programming to calculate winning strategies, or
in [6], where it is used to create several different algorithms as an end project.
Most of the time the Alpha Beta algorithm is used with chess engines, as seen in [13], [8] and [3].
As mentioned before, genetic programming is used a lot to compute Reversi strategies, as seen for
example in [5] and [7].
One of the best search algorithms written for Othello is Logistello [2].
5 Approach
The algorithm makes use of the Min Max algorithm with Alpha Beta pruning, Null Move Pruning
[9] and Quiescence Search [10]. It is a continuation of the skeleton code taken from the
assignment website mentioned in Chapter 3.
Chapter 5 of [14] was used extensively to develop the search algorithm.
6 Implementation
The algorithm uses different functions to search for the best possible move in the current game
state. This is accomplished by building and traversing a game search tree, evaluating every possible
outcome of a move, pruning the tree to cut computation time, and eventually returning the best
possible move.
6.2 Evaluation
The algorithm relies on its ability to assess a certain game state. To make this assessment reliably,
an evaluation function has been written. This evaluation function is given the current state of the
game and assesses every piece and the square it currently occupies. A score is then attributed to the
board and returned to the algorithm.
Because the algorithm makes use of Min Max, a note has to be added here: every time points are
said to be awarded, this means the score is increased when seen from the MAX perspective or
decreased when seen from the MIN perspective.
6.2.4 Danger squares
Because the corners are such important squares, the squares around them are very dangerous to play.
Playing these squares may enable the opponent to take the corner square, which can result in a
detrimental situation and eventually a loss. To make the algorithm avoid these danger squares, 100
points are awarded to the opponent when these squares are occupied.
6.2.5 Mobility
To make sure that there is always a move possible, capitalizing on having more moves may be an
effective strategy. The evaluation function therefore takes into account how many moves are possible:
if there are more moves than the opponent has, twenty points are awarded.
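The two evaluation terms above can be sketched as follows. The plain 8x8 board representation and the helper names are illustrative assumptions, not the report's evaluate function; only the weights (100 points for danger squares, 20 for mobility) come from the text.

```cpp
#include <string>
#include <vector>

const int H = 8, W = 8;

// +1 for a Black disc, -1 for a White disc, 0 for an empty square.
int sideSign(char c) { return c == 'B' ? +1 : c == 'W' ? -1 : 0; }

// 100 points to the opponent for occupying a square adjacent to an
// empty corner, per Section 6.2.4. Scores are from Black's (MAX)
// perspective: positive favours Black, negative favours White.
int dangerSquareScore(const std::vector<std::string>& b) {
    int score = 0;
    const int corners[4][2] = {{0, 0}, {0, W - 1}, {H - 1, 0}, {H - 1, W - 1}};
    for (auto& c : corners) {
        int ci = c[0], cj = c[1];
        if (b[ci][cj] != '.') continue;   // corner taken: no danger left
        int di = (ci == 0) ? 1 : -1, dj = (cj == 0) ? 1 : -1;
        // the three neighbours toward the board centre
        score -= 100 * sideSign(b[ci + di][cj]);
        score -= 100 * sideSign(b[ci][cj + dj]);
        score -= 100 * sideSign(b[ci + di][cj + dj]);
    }
    return score;
}

// 20 points for the side with more legal moves, per Section 6.2.5;
// the move counts are assumed to come from the game engine.
int mobilityScore(int blackMoves, int whiteMoves) {
    if (blackMoves > whiteMoves) return +20;
    if (whiteMoves > blackMoves) return -20;
    return 0;
}
```

A full evaluation would sum these terms with the piece-count, corner and stability terms from the appendix.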
7 Experiments
This chapter will describe the different experiments done with different algorithms. Five different
algorithms were used for these experiments:
• Greedy Algorithm, this algorithm always picks the first possible move.
• Random Algorithm, this algorithm randomly chooses one of its possible moves.
• Monte Carlo Algorithm, this algorithm uses the Monte Carlo principle with different playouts.
• Min Max: Min Max Algorithm without Alpha Beta pruning, described in Chapter 6.
• Alpha Beta: Min Max Algorithm with Alpha Beta Pruning, described in Chapter 6.
These experiments are mainly done to compare the two Min Max Algorithms with each other. To
compare them more easily, two versions of a “do a move” function have been used. The first, the
resource-limited version, sets a limit on the number of times the evaluate function may be called.
The second, the depth-limited version, sets a limit on the depth the game tree function may reach.
The first set of experiments tests how well both algorithms perform against certain opponents.
The second set consists of tests to see how they perform when the restrictions are tightened or
loosened.
Unless stated otherwise, all tests were done on an 8x8 board and limited to 50 runs to save
computational time. The first set of experiments was run against all other algorithms; the second
set against a Monte Carlo algorithm set at 50 playouts.
7.3.1 Resource Limited
The graph below shows the number of nodes each algorithm was able to visit with limited
resources.

[Graph: nodes visited (0 to 1·10^5) versus resources available (0 to 50,000)]
The graph below shows the maximum depth reached with limited resources.

Figure 6: Bar chart of the maximum depth reached when restricted in resources
7.3.2 Depth Limited
The graph below shows how many nodes each algorithm visited before reaching the different
depths.

[Graph: nodes visited (0 to 1·10^5) versus maximum depth possible (1–5)]
The graph below shows the number of evaluations the different algorithms needed to reach the
different depths.

[Graph: evaluations used (0 to 40,000) versus depth to reach (1–5)]
7.4 Alpha Beta vs Alpha Beta
A small challenge was set up with another team of developers. Elham Wasei (student number:
1333828) and Chris Congleton (student number: 2577240) were kind enough to help set up the
challenge. The challenge was simple: who could win the most games out of 100 games of Othello?
Playing the 100 games resulted in 9 wins for the algorithm described in this report and 90 wins for
the algorithm developed by Elham Wasei and Chris Congleton; there was one draw.
I would like to thank Elham and Chris for reaching out to me and setting up this fun challenge.
References
[1] Amit Benbassat and Moshe Sipper. “Evolving both search and strategy for reversi players
using genetic programming”. In: 2012 IEEE Conference on Computational Intelligence and
Games (CIG). IEEE. 2012, pp. 47–54.
[2] Michael Buro. LOGISTELLO’s Homepage. url: https://skatgame.net/mburo/log.html.
(Last accessed: 22.4.2020).
[3] Jeroen WT Carolus. “Alpha-beta with sibling prediction pruning in chess”. In: Amsterdam:
University of Amsterdam (2006).
[4] Math Chiller. How many moves ahead does Anand calculate in his mind? url: https://chess.stackexchange.com/questions/1408/how-many-moves-ahead-does-anand-calculate-in-his-mind. (Last accessed: 21.4.2020).
[5] Karen Farnes. “The Genetic Algorithm vs Alpha-Beta Algorithm When Applied to Othello”.
In: USCCS 2013 (2013), p. 13.
[6] Jacopo Festa and Stanislao Davino. “IAgo vs Othello: An Artificial Intelligence Agent
Playing Reversi”. In: PAI@AI*IA. 2013, pp. 43–50.
[7] Clive Frankland and Nelishia Pillay. “Evolving game playing strategies for othello”. In: 2015
IEEE Congress on Evolutionary Computation (CEC). IEEE. 2015, pp. 1498–1504.
[8] Samuel H Fuller, John G Gaschnig, JJ Gillogly, et al. Analysis of the alpha-beta pruning
algorithm. Department of Computer Science, Carnegie-Mellon University, 1973.
[9] Ernst A Heinz. “Adaptive null-move pruning”. In: ICGA Journal 22.3 (1999), pp. 123–132.
[10] Hermann Kaindl. “Quiescence search in computer chess”. In: SIGART Newsletter 80.124-131
(1982), p. 8.
[11] Walter Kosters. Kunstmatige intelligentie Programmeeropgave 3 van 2020 - Agenten &
Robotica. url: http://liacs.leidenuniv.nl/~kosterswa/AI/othello2020.html. (Last
accessed: 21.4.2020).
[12] Co. Othello and Megahouse. Play Othello online. url: https://www.eothello.com/. (Last
accessed: 21.4.2020).
[13] Werda Buana Putra and Lukman Heryawan. “Applying Alpha-beta Algorithm In A Chess
Engine”. In: Jurnal Teknosains 6.1 (2016), pp. 37–43.
[14] Stuart J Russell and Peter Norvig. Artificial intelligence: a modern approach. Pearson
Education Limited, 2016.
Appendix: Code
Code of Min Max with Alpha Beta pruning
//
// othelloalphabetaspeler.cc provides AlphaBetaSpeler
// Roy van Hal and Walter Kosters and Mick Remmerswaal
// 16 april 2020
//

#include <iostream>
#include <limits>
#include <algorithm>
#include <vector>

using namespace std;

class AlphaBetaSpeler : public Basisspeler {
  public:
    AlphaBetaSpeler (othello* spelPointer);
    int volgendeZet ();
    int AlternatiefvolgendeZet ();
    int gametree (othello* g, int depth, int& themove, int& evaluates,
                  int ALPHA, int BETA, bool NULLMOVE, int& visitedNodes);
    int evaluate (othello* g);
    bool isQuieMove (int moveNum, othello* g);
    bool isNearCornerMove (int i, int j, othello* g);
    vector<int> getStableDiscs (othello* g, char player, int i, int j,
                                vector<int> stableDiscs);
}; // AlphaBetaSpeler

// constructor
AlphaBetaSpeler::AlphaBetaSpeler (othello* spelPointer) {
  spel = spelPointer;
} // AlphaBetaSpeler::AlphaBetaSpeler
// evaluate game *g
// high value > 0: good for BLACK -- MAX
// low value < 0: good for WHITE -- MIN
// for instance: is g->board[0][0] equal to 'W' or 'B'?
int AlphaBetaSpeler::evaluate (othello* g) {
  int score = 0;
  int pointsAwarded = 0;

  int blackPieces = 0;
  int whitePieces = 0;
  int totalPieces = 0;

  // ...
  for (int i = 0; i < g->height; i++) {
    for (int j = 0; j < g->width; j++) {
      if (g->board[i][j] == 'B') {
        blackPieces++;
        score++;
      }
      else if (g->board[i][j] == 'W') {
        whitePieces++;
        score--;
      }
    }
  }
  // +1000 if Black occupies a corner
  // -1000 if White occupies a corner
  #pragma region CornerPieces

  pointsAwarded = 1000;

  // ...
  if (g->board[g->height - 1][0] != '.') {
    switch (g->board[g->height - 1][0]) {
      case 'B':
        score += pointsAwarded;
        break;
      case 'W':
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }

  // ...
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  pointsAwarded = 100;

  if (g->board[0][0] == '.') {    // top left corner
    if (g->board[1][0] == 'B') {
      score -= pointsAwarded;
    }
    else if (g->board[1][0] == 'W') {
      score += pointsAwarded;
    }
    if (g->board[0][1] == 'B') {
      score -= pointsAwarded;
    }
    else if (g->board[0][1] == 'W') {
      score += pointsAwarded;
    }
    if (g->board[1][1] == 'B') {
      score -= pointsAwarded;
    }
    else if (g->board[1][1] == 'W') {
      score += pointsAwarded;
    }
  }

  // ...
  else if (g->board[0][g->width - 2] == 'W') {
    score += pointsAwarded;
  }
  // ...
    score += pointsAwarded;
  }

  // ...
  if (whitePieces == 0) {
    score += 100000;
  }
  if (blackPieces == 0) {
    score -= 100000;
  }

  // ...
  pointsAwarded = 30;

  // ...
  if (oppMoves == 0) {
    switch (kopie.whois ()) {
      case 1:  // Black
        score += pointsAwarded;
        break;
      case 2:  // White
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  pointsAwarded = 20;

  // ...
      case 2:  // White
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  else {
    switch (kopie.whois ()) {
      case 1:  // Black
        score -= pointsAwarded;
        break;
      case 2:  // White
        score += pointsAwarded;
        break;
      default:
        break;
    }
  }
  #pragma endregion MobilityScore
357
  // +5 for every stable piece (piece that cannot be flipped)
  #pragma region StablePieces

  // ...
  if (g->board[0][0] == 'B') {    // left top corner
    score += getStableDiscs (&kopie, 'B', 0, 0, stableDiscs).size () * 5;
  }
  if (g->board[g->height - 1][0] == 'B') {    // left bottom corner
    score += getStableDiscs (&kopie, 'B', g->height - 1, 0, stableDiscs).size () * 5;
  }
  if (g->board[0][g->width - 1] == 'B') {    // right top corner
    score += getStableDiscs (&kopie, 'B', 0, g->width - 1, stableDiscs).size () * 5;
  }
  if (g->board[g->height - 1][g->width - 1] == 'B') {    // right bottom corner
    score += getStableDiscs (&kopie, 'B', g->height - 1, g->width - 1, stableDiscs).size () * 5;
  }
  #pragma endregion StablePieces

  return score;
} // AlphaBetaSpeler::evaluate
// ...
    /* ...
    if (isQuieMove (themove, g)) {
      evaluates--;  // only decreased here
      return evaluate (&kopie);
    }
    else {
      othello kopie2 = *g;
      kopie2.dothemove (themove);

      int dummyMove = 1;
      return gametree (&kopie2, depth + 1, dummyMove, evaluates, ALPHA,
                       BETA, NULLMOVE);
    }
    */

    visitedNodes++;
    evaluates--;  // only decreased here
    return evaluate (&kopie);
  }
  else {
    int dummyMove = 1;
    //TODO
    // traverse the tree
    if (kopie.whois () == 1) {    // Black -- MAX
      int moves = kopie.numberofmoves ();
      besteWaarde = numeric_limits<int>::min ();

      // ...
      int huidigeWaarde = 0;
      for (int i = 1; i < moves + 1; i++) {
        othello kopie2 = *g;
        kopie2.dothemove (i);
        visitedNodes++;
        huidigeWaarde = gametree (&kopie2, depth - 1, dummyMove,
                                  evaluates, ALPHA, BETA, NULLMOVE,
                                  visitedNodes);  // check the best value of the children

        if (besteWaarde < huidigeWaarde) {  // MAX node: check whether the current value is higher
          besteWaarde = huidigeWaarde;      // if so => new best value
          themove = i;
        }

        // setting of alpha
        ALPHA = max (ALPHA, besteWaarde);

        // alpha-beta pruning part
        if (BETA <= ALPHA) {
          break;
        }
      // ...
      int huidigeWaarde = 0;

      // ...
      {
        besteWaarde = huidigeWaarde;  // if so => new best value
        themove = i;
      }

      // setting of beta: smallest value between old beta and besteWaarde
      BETA = min (BETA, besteWaarde);

      // alpha-beta pruning part
      if (BETA <= ALPHA) {
        break;
      }

  // ...
  return besteWaarde;
} // AlphaBetaSpeler::gametree

// ...
  return mymove;
} // AlphaBetaSpeler::volgendeZet
// ...
  return themove;
} // AlphaBetaSpeler::AlternatiefvolgendeZet

// ...
  // top right corner
  if ((i == 0 && j == g->width - 2) || (i == 1 && j == g->width - 2) ||
      (i == 1 && j == g->width - 1)) {
    return true;
  }

  // bottom left corner
  if ((i == g->height - 2 && j == 0) || (i == g->height - 2 && j == 1) ||
      (i == g->height - 1 && j == 1)) {
    return true;
  }

  // bottom right corner
  if ((i == g->height - 1 && j == g->width - 2) ||
      (i == g->height - 2 && j == g->width - 2) ||
      (i == g->height - 2 && j == g->width - 1)) {
    return true;
  }

  return false;
}
646
// ...
  // check discs above
  while (i > 0 && g->board[i][j] == player) {
    val = (i * g->height) + (j + 1);
    candidates.push_back (val);
    i--;
  }

  // ...
  // check discs below
  while (i < g->height && g->board[i][j] == player) {
    val = (i * g->height) + j;
    candidates.push_back (val);
    i++;
  }

  // ...
  // check discs to right
  while (j < g->width && g->board[i][j] == player) {
    val = (i * g->height) + j;
    candidates.push_back (val);
    j++;
  }

  // ...
  // check discs to left
  while (j > 0 && g->board[i][j] == player) {
    val = (i * g->height) + j;
    candidates.push_back (val);
    j--;
  }

  // ...
    {
      stableDiscs.push_back (candidate);
    }
  }
  candidates.clear ();

  return stableDiscs;
} // AlphaBetaSpeler::getStableDiscs
Code of Min Max without Alpha Beta pruning
//
// othellominimaxspeler.cc provides Minimaxspeler
// Roy van Hal and Walter Kosters and Mick Remmerswaal
// 16 april 2020
//

#include <iostream>
#include <limits>
#include <vector>
#include <algorithm>

using namespace std;

class Minimaxspeler : public Basisspeler {
  public:
    Minimaxspeler (othello* spelPointer);
    int volgendeZet ();
    int AlternatiefvolgendeZet ();
    int gametree (othello* g, int depth, int& themove, int& evaluates,
                  int& visitedNodes);
    int evaluate (othello* g);
    vector<int> getStableDiscs (othello* g, char player, int i, int j,
                                vector<int> stableDiscs);
    bool isQuieMove (int moveNum, othello* g);
    bool isNearCornerMove (int i, int j, othello* g);
    int visitedNodes = 0;
}; // Minimaxspeler

// constructor
Minimaxspeler::Minimaxspeler (othello* spelPointer) {
  spel = spelPointer;
} // Minimaxspeler::Minimaxspeler

// evaluate game *g
// high value > 0: good for BLACK -- MAX
// low value < 0: good for WHITE -- MIN
// for instance: is g->board[0][0] equal to 'W' or 'B'?
int Minimaxspeler::evaluate (othello* g) {
  //TODO
  int score = 0;
  int pointsAwarded = 0;

  int blackPieces = 0;
  int whitePieces = 0;
  int totalPieces = 0;
  // ...
      else if (g->board[i][j] == 'W') {
        whitePieces++;
        score--;
      }
    }
  }
  // +1000 if Black occupies a corner
  // -1000 if White occupies a corner
  #pragma region CornerPieces

  pointsAwarded = 1000;

  // ...
  if (g->board[g->height - 1][0] != '.') {
    switch (g->board[g->height - 1][0]) {
      case 'B':
        score += pointsAwarded;
        break;
      case 'W':
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  // ...
  {
    switch (g->board[g->height - 1][g->width - 1]) {
      case 'B':
        score += pointsAwarded;
        break;
      case 'W':
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  #pragma endregion CornerPieces
  pointsAwarded = 100;

  if (g->board[0][0] == '.') {    // top left corner
    if (g->board[1][0] == 'B') {
      score -= pointsAwarded;
    }
    else if (g->board[1][0] == 'W') {
      score += pointsAwarded;
    }
    if (g->board[0][1] == 'B') {
      score -= pointsAwarded;
    }
    else if (g->board[0][1] == 'W') {
      score += pointsAwarded;
    }
    if (g->board[1][1] == 'B') {
      score -= pointsAwarded;
    }
    else if (g->board[1][1] == 'W') {
      score += pointsAwarded;
    }
  }
  // ...
  }
  else if (g->board[1][g->width - 1] == 'W') {
    score += pointsAwarded;
  }
  // ...
  {
    score += pointsAwarded;
  }

  // ...
  if (whitePieces == 0) {
    score += 100000;
  }
  if (blackPieces == 0) {
    score -= 100000;
  }

  // ...
  pointsAwarded = 30;

  // ...
  if (oppMoves == 0) {
    switch (kopie.whois ()) {
      case 1:  // Black
        score += pointsAwarded;
        break;
      case 2:  // White
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  #pragma region MobilityScore

  pointsAwarded = 20;

  // ...
      case 2:  // White
        score -= pointsAwarded;
        break;
      default:
        break;
    }
  }
  else {
    switch (kopie.whois ()) {
      case 1:  // Black
        score -= pointsAwarded;
        break;
      case 2:  // White
        score += pointsAwarded;
        break;
      default:
        break;
    }
  }
  #pragma endregion MobilityScore
  // +5 for every stable piece (piece that cannot be flipped)
  #pragma region StablePieces

  // ...
  if (g->board[0][0] == 'B') {    // left top corner
    score += getStableDiscs (&kopie, 'B', 0, 0, stableDiscs).size () * 5;
  }
  if (g->board[g->height - 1][0] == 'B') {    // left bottom corner
    score += getStableDiscs (&kopie, 'B', g->height - 1, 0, stableDiscs).size () * 5;
  }
  if (g->board[0][g->width - 1] == 'B') {    // right top corner
    score += getStableDiscs (&kopie, 'B', 0, g->width - 1, stableDiscs).size () * 5;
  }
  if (g->board[g->height - 1][g->width - 1] == 'B') {    // right bottom corner
    score += getStableDiscs (&kopie, 'B', g->height - 1, g->width - 1, stableDiscs).size () * 5;
  }
  #pragma endregion StablePieces

  return score;
} // Minimaxspeler::evaluate
// ...
    /* ...
    else {
      othello kopie2 = *g;
      kopie2.dothemove (themove);

      int dummyMove = 1;
      gametree (&kopie2, depth + 1, dummyMove, evaluates);
    }
    */

    visitedNodes++;
    evaluates--;  // only decreased here
    return evaluate (&kopie);
  }
  else {
    int dummyMove = 1;
    //TODO
    // traverse the tree
    if (kopie.whois () == 1) {    // Black
      int moves = kopie.numberofmoves ();
      besteWaarde = numeric_limits<int>::min ();

      // ...
      int huidigeWaarde = 0;
      for (int i = 1; i < moves + 1; i++) {
        othello kopie2 = *g;
        kopie2.dothemove (i);
        visitedNodes++;
        huidigeWaarde = gametree (&kopie2, depth - 1, dummyMove, evaluates,
                                  visitedNodes);  // check the best value of the children
      // ...
      visitedNodes++;
      return gametree (&kopie, depth - 1, dummyMove, evaluates, visitedNodes);
    }

    int huidigeWaarde = 0;

  // ...
  return besteWaarde;
} // Minimaxspeler::gametree

// ...
  return mymove;
} // Minimaxspeler::volgendeZet

// ...
  int evaluates = 1000000000;
  int depth = 5;
  visitedNodes = 0;

  // ...
  return themove;
} // Minimaxspeler::AlternatiefvolgendeZet
556
// ...
  // check discs above
  while (i > 0 && g->board[i][j] == player) {
    val = (i * g->height) + (j + 1);
    candidates.push_back (val);
    i--;
  }

  // ...
  // check discs below
  while (i < g->height && g->board[i][j] == player) {
    val = (i * g->height) + j;
    candidates.push_back (val);
    i++;
  }

  // ...
  // check discs to right
  while (j < g->width && g->board[i][j] == player) {
    val = (i * g->height) + j;
    candidates.push_back (val);
    j++;
  }

  for (int candidate : candidates) {
    if (!(count (stableDiscs.begin (), stableDiscs.end (), candidate))) {
      stableDiscs.push_back (candidate);
    }
  }
  candidates.clear ();

  // check discs to left
  while (j > 0 && g->board[i][j] == player) {
    val = (i * g->height) + j;
    candidates.push_back (val);
    j--;
  }

  // ...
  return stableDiscs;
} // Minimaxspeler::getStableDiscs
// ...
  // top right corner
  if ((i == 0 && j == g->width - 2) || (i == 1 && j == g->width - 2) ||
      (i == 1 && j == g->width - 1)) {
    return true;
  }

  // bottom left corner
  if ((i == g->height - 2 && j == 0) || (i == g->height - 2 && j == 1) ||
      (i == g->height - 1 && j == 1)) {
    return true;
  }

  // bottom right corner
  if ((i == g->height - 1 && j == g->width - 2) ||
      (i == g->height - 2 && j == g->width - 2) ||
      (i == g->height - 2 && j == g->width - 1)) {
    return true;
  }

  return false;
}