🧠 Neural Network

Feedforward multilayer perceptron with backpropagation, from scratch

Overview

A complete implementation of a multilayer perceptron (MLP) using only OCaml's standard library. No frameworks, no dependencies: just matrix math and calculus implemented by hand. Demonstrates how neural networks learn through gradient descent and backpropagation.

Features

Concepts Demonstrated

How It Works

(* 1. Initialize network with random weights *)
(*    Xavier: scale = sqrt(2 / (fan_in + fan_out)) *)
(*    He:     scale = sqrt(2 / fan_in)             *)
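One possible stdlib-only sketch of these two schemes (`xavier_init` and `he_init` are illustrative names, not necessarily the project's API); weights are drawn uniformly from `[-scale, scale)`:

```ocaml
(* Sketch: random weight matrices with Xavier / He scaling.
   Row count = fan_out, column count = fan_in. *)
let xavier_init fan_in fan_out =
  let scale = sqrt (2.0 /. float_of_int (fan_in + fan_out)) in
  Array.init fan_out (fun _ ->
    Array.init fan_in (fun _ -> (Random.float 2.0 -. 1.0) *. scale))

let he_init fan_in fan_out =
  let scale = sqrt (2.0 /. float_of_int fan_in) in
  Array.init fan_out (fun _ ->
    Array.init fan_in (fun _ -> (Random.float 2.0 -. 1.0) *. scale))
```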

(* 2. Forward pass: propagate input through layers *)
(*    output_i = activate(W_i · input_i + bias_i)  *)
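A minimal sketch of that single layer step, assuming a sigmoid activation ([w] a weight matrix with one row per output unit, [b] a bias vector):

```ocaml
(* Sketch: y = sigmoid (W x + b), computed row by row. *)
let sigmoid z = 1.0 /. (1.0 +. exp (-.z))

let forward w b x =
  Array.mapi
    (fun i row ->
      let sum = ref b.(i) in
      Array.iteri (fun j wij -> sum := !sum +. wij *. x.(j)) row;
      sigmoid !sum)
    w
```

A full forward pass would fold this over the list of layers, feeding each layer's output into the next.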

(* 3. Compute loss (MSE or cross-entropy) *)

(* 4. Backward pass: compute gradients via chain rule *)
(*    δ_output = (predicted - target) * activate'   *)
(*    δ_hidden = (W_next^T · δ_next) * activate'    *)
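The two delta rules above might look like this, assuming sigmoid activations (so `activate' a = a * (1 - a)` when evaluated at the stored activation `a`); function names here are illustrative:

```ocaml
(* Sketch: output-layer and hidden-layer deltas from the chain rule. *)
let sigmoid' a = a *. (1.0 -. a)

let delta_output predicted target =
  Array.mapi (fun i p -> (p -. target.(i)) *. sigmoid' p) predicted

let delta_hidden w_next d_next a_hidden =
  (* (W_next^T · δ_next)_j, scaled by activate'(a_j) *)
  Array.mapi
    (fun j a ->
      let s = ref 0.0 in
      Array.iteri (fun i row -> s := !s +. row.(j) *. d_next.(i)) w_next;
      !s *. sigmoid' a)
    a_hidden
```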

(* 5. Update weights: W -= learning_rate * δ · input^T *)
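Since `δ · input^T` is an outer product, the update can be done entry by entry without materializing the gradient matrix; a possible in-place sketch (mutates [w] and [b]):

```ocaml
(* Sketch: W_ij <- W_ij - lr * δ_i * input_j,  b_i <- b_i - lr * δ_i. *)
let update w b delta input lr =
  Array.iteri
    (fun i row ->
      Array.iteri
        (fun j _ -> row.(j) <- row.(j) -. lr *. delta.(i) *. input.(j))
        row;
      b.(i) <- b.(i) -. lr *. delta.(i))
    w
```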

(* 6. Repeat until convergence *)
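Putting the loop together, here is a deliberately tiny end-to-end sketch: a single sigmoid neuron trained by gradient descent to approximate logical OR. Every name is illustrative, not the project's actual API:

```ocaml
(* Sketch: train one sigmoid unit on OR with per-sample gradient descent. *)
let sigmoid z = 1.0 /. (1.0 +. exp (-.z))

let train () =
  let inputs = [| [| 0.; 0. |]; [| 0.; 1. |]; [| 1.; 0. |]; [| 1.; 1. |] |] in
  let targets = [| 0.; 1.; 1.; 1. |] in
  let w = [| 0.1; -0.1 |] in
  let b = ref 0.0 in
  let lr = 0.5 in
  for _epoch = 1 to 2000 do
    Array.iteri
      (fun k x ->
        let y = sigmoid (w.(0) *. x.(0) +. w.(1) *. x.(1) +. !b) in
        (* δ = (predicted - target) * sigmoid'(y), with sigmoid' y = y (1 - y) *)
        let d = (y -. targets.(k)) *. y *. (1.0 -. y) in
        w.(0) <- w.(0) -. lr *. d *. x.(0);
        w.(1) <- w.(1) -. lr *. d *. x.(1);
        b := !b -. lr *. d)
      inputs
  done;
  (w, !b)
```

After training, the unit should output below 0.5 for (0, 0) and above 0.5 for the other three inputs; the full MLP applies the same loop per layer with the backpropagated deltas.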

Demo Problems

Key Takeaways