# 🎮 Minimax Game AI

Generic game-playing AI with alpha-beta pruning and iterative deepening.
## Overview
A functor-based game AI framework that plays any two-player zero-sum game optimally. The engine uses minimax search with alpha-beta pruning to explore game trees, iterative deepening for time-bounded play, and a transposition table for memoizing previously seen positions.
Two complete games are included: Tic-Tac-Toe (solved — AI never loses) and Connect Four (deep search with heuristic evaluation).
## Concepts Demonstrated
- Module system — the `GAME` signature defines the abstract interface
- Functors — `MakeAI(G : GAME)` builds an AI for any game
- Minimax with alpha-beta — prunes branches that can't affect the result
- Iterative deepening — search depth 1, 2, 3… until time runs out
- Transposition table — hash-consed game states avoid re-evaluation
- Immutable state — functional game state with move/undo via new values
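The transposition-table idea can be sketched with a plain `Hashtbl` keyed by the position hash. The names here (`tt`, `tt_lookup`, `tt_store`) and the depth-validity rule are illustrative assumptions, not the engine's actual internals:

```ocaml
(* Cache of search results keyed by position hash.
   A stored score is only reusable when it was computed to at least
   the depth the current search requires. *)
let tt : (int, int * float) Hashtbl.t = Hashtbl.create 4096

let tt_lookup key ~depth =
  match Hashtbl.find_opt tt key with
  | Some (stored_depth, score) when stored_depth >= depth -> Some score
  | _ -> None

let tt_store key ~depth score =
  Hashtbl.replace tt key (depth, score)
```

Hash collisions would conflate distinct positions here; a real table would also store (part of) the state or a second hash to detect them.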
## The `GAME` Interface
```ocaml
module type GAME = sig
  type state                            (* board representation *)
  type move                             (* legal action *)
  type player = Maximizer | Minimizer

  val initial : state                   (* starting position *)
  val current_player : state -> player
  val legal_moves : state -> move list
  val apply_move : state -> move -> state
  val is_terminal : state -> bool
  val evaluate : state -> float         (* heuristic: +∞ max wins, -∞ min wins *)
  val hash : state -> int               (* for transposition table *)
end

(* The functor builds a complete AI from any GAME *)
module MakeAI (G : GAME) : sig
  val best_move : G.state -> depth:int -> G.move
  val best_move_timed : G.state -> seconds:float -> G.move
end
```
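A minimal instance makes the interface concrete. This toy single-pile Nim (take 1 or 2 stones per turn; whoever takes the last stone wins) is not one of the included games — it's just the smallest structure that satisfies the signature, so `MakeAI (Nim)` would type-check against it:

```ocaml
(* Toy GAME instance: one-pile Nim. *)
module Nim = struct
  type player = Maximizer | Minimizer
  type state = { stones : int; to_play : player }
  type move = int                       (* number of stones to take *)

  let initial = { stones = 7; to_play = Maximizer }
  let current_player s = s.to_play
  let legal_moves s = List.filter (fun m -> m <= s.stones) [ 1; 2 ]

  let apply_move s m =
    { stones = s.stones - m;
      to_play =
        (match s.to_play with Maximizer -> Minimizer | Minimizer -> Maximizer) }

  let is_terminal s = s.stones = 0

  (* Terminal with [to_play = p] means the OTHER player just took the
     last stone and won. Non-terminal positions get a neutral score. *)
  let evaluate s =
    if not (is_terminal s) then 0.
    else match s.to_play with Maximizer -> neg_infinity | Minimizer -> infinity

  let hash s =
    (s.stones * 2) + (match s.to_play with Maximizer -> 0 | Minimizer -> 1)
end

(* module NimAI = MakeAI (Nim)
   let m = NimAI.best_move Nim.initial ~depth:10 *)
```

Because `Nim` is left unsealed, its concrete `state` and `move` types remain visible, so moves returned by the AI can still be inspected at the call site.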
## Alpha-Beta Pruning
```ocaml
(* Minimax without pruning explores ALL branches — O(b^d).
   Alpha-beta skips branches that can't change the result:

     α = best score Maximizer can guarantee so far
     β = best score Minimizer can guarantee so far

   When α ≥ β, the remaining branches are irrelevant — prune.

   With good move ordering, alpha-beta reduces O(b^d) to roughly
   O(b^(d/2)) — effectively doubling the reachable search depth. *)

(* Inside MakeAI (G : GAME): *)
let rec alphabeta state depth alpha beta =
  if depth = 0 || G.is_terminal state then G.evaluate state
  else
    match G.current_player state with
    | G.Maximizer ->
        let rec loop alpha = function
          | [] -> alpha
          | m :: rest ->
              let score =
                alphabeta (G.apply_move state m) (depth - 1) alpha beta in
              if score >= beta then beta          (* cutoff: prune the rest *)
              else loop (max alpha score) rest
        in
        loop alpha (G.legal_moves state)
    | G.Minimizer ->                              (* symmetric *)
        let rec loop beta = function
          | [] -> beta
          | m :: rest ->
              let score =
                alphabeta (G.apply_move state m) (depth - 1) alpha beta in
              if score <= alpha then alpha        (* cutoff *)
              else loop (min beta score) rest
        in
        loop beta (G.legal_moves state)
```
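Iterative deepening wraps a search like the one above: run it to depth 1, then 2, then 3, and keep the best move from the deepest completed pass. A self-contained sketch, using a toy one-pile Nim (take 1 or 2 stones; taking the last stone wins) in place of the functor's abstract `G`, and `Sys.time` for the clock — both are illustrative assumptions:

```ocaml
type player = Max | Min

let legal_moves stones = List.filter (fun m -> m <= stones) [ 1; 2 ]

(* Alpha-beta over one-pile Nim. At stones = 0 the player to move has
   already lost: the opponent took the last stone. *)
let rec alphabeta stones player depth alpha beta =
  if stones = 0 then (match player with Max -> neg_infinity | Min -> infinity)
  else if depth = 0 then 0.                       (* unresolved: neutral *)
  else
    match player with
    | Max ->
        let rec loop alpha = function
          | [] -> alpha
          | m :: rest ->
              let s = alphabeta (stones - m) Min (depth - 1) alpha beta in
              if s >= beta then beta else loop (max alpha s) rest
        in
        loop alpha (legal_moves stones)
    | Min ->
        let rec loop beta = function
          | [] -> beta
          | m :: rest ->
              let s = alphabeta (stones - m) Max (depth - 1) alpha beta in
              if s <= alpha then alpha else loop (min beta s) rest
        in
        loop beta (legal_moves stones)

(* Root search for the Maximizer at a fixed depth. *)
let best_at stones depth =
  match legal_moves stones with
  | [] -> None
  | m0 :: rest ->
      let score m =
        alphabeta (stones - m) Min (depth - 1) neg_infinity infinity in
      let best =
        List.fold_left
          (fun (bm, bs) m ->
            let s = score m in
            if s > bs then (m, s) else (bm, bs))
          (m0, score m0) rest
      in
      Some (fst best)

(* Deepen until the deadline (or a depth cap), keeping the last result. *)
let best_move_timed stones ~seconds =
  let deadline = Sys.time () +. seconds in
  let rec deepen depth best =
    if depth > 32 || Sys.time () >= deadline then best
    else
      deepen (depth + 1)
        (match best_at stones depth with None -> best | b -> b)
  in
  deepen 1 None
```

In the real functor this would operate on `G.state`, check the deadline inside the recursion as well, and reuse transposition-table entries between iterations so shallow passes seed move ordering for deeper ones.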
## Included Games
| Game | Board | Branching | AI Strength |
|---|---|---|---|
| Tic-Tac-Toe | 3×3 | ~4 avg | Perfect play (solved) |
| Connect Four | 7×6 | ~7 avg | Strong with iterative deepening |
## Related Modules
- Genetic Algorithms — evolutionary optimization
- Neural Networks — learning-based approaches
- A* Search — heuristic pathfinding