Neural Networks — Perceptrons, Backpropagation & MLPs
Summary
From single perceptrons to multi-layer networks: understanding how neurons compute (weights, input functions, sigmoid activation), why random weight initialization matters, and how backpropagation actually works through a manual 4-step process. Builds a feedforward neural network from scratch to solve XOR — the classic problem that single-layer networks cannot handle — then scales up to an MLP classifier on the Iris dataset, systematically varying hidden neuron counts (1, 2, 4, 8, 16, 32) to see exactly how capacity affects learning.
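The from-scratch XOR network described above can be sketched roughly as follows. This is a minimal illustration, not the notebook's exact code: the hidden-layer width (4), learning rate, epoch count, and squared-error loss are assumptions, and the four numbered comments mirror the manual backpropagation steps (forward pass, output error, hidden error, weight update).

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets — not linearly separable, so a single
# perceptron cannot solve it, but one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initialization breaks symmetry: identical weights would
# make every hidden neuron compute (and update) the same thing.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(10000):
    # 1) Forward pass through hidden and output layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # 2) Output-layer error (squared loss through the sigmoid)
    d_out = (out - y) * out * (1 - out)
    # 3) Backpropagate the error to the hidden layer
    d_h = (d_out @ W2.T) * h * (1 - h)
    # 4) Gradient-descent updates for all weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print((out > 0.5).astype(int).ravel())
```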
Materials
Neural Networks from First Principles
How perceptrons compute, why activation functions matter, and the step-by-step mechanics of backpropagation that make neural networks learn.
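The perceptron computation mentioned here reduces to two steps: a weighted-sum input function, then a nonlinear activation. A minimal sketch with illustrative weights and inputs:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0])   # inputs (illustrative values)
w = np.array([0.4, -0.6])  # weights (illustrative values)
b = 0.1                    # bias

z = w @ x + b              # input function: weighted sum plus bias
a = sigmoid(z)             # activation: the neuron's output
print(z, round(a, 3))      # → 0.5 0.622
```

Without the nonlinearity, stacking layers would collapse into a single linear map — which is exactly why a sigmoid (or similar) activation is what lets multi-layer networks solve problems like XOR.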
Neural Networks Notebook — XOR & Iris MLP
Build a feedforward neural network from scratch with manual backpropagation to solve XOR, then train MLPs on Iris with varying architectures.
Includes notebook
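The Iris capacity sweep could be reproduced along these lines, assuming scikit-learn's MLPClassifier; the hidden-neuron counts match the sweep above, but the solver defaults, max_iter, and train/test split are illustrative choices rather than the notebook's exact setup.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Vary hidden-layer capacity and compare held-out accuracy.
for n_hidden in (1, 2, 4, 8, 16, 32):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        max_iter=2000, random_state=42)
    clf.fit(X_tr, y_tr)
    print(f"{n_hidden:>2} hidden neurons: "
          f"test accuracy = {clf.score(X_te, y_te):.3f}")
```

Very small hidden layers (1–2 neurons) typically underfit the three-class problem, while accuracy plateaus once capacity is sufficient.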