How It Works
A feedforward neural network learns by repeatedly adjusting its weights to minimize prediction error. Each training step proceeds in three stages:
Forward pass — input values propagate through the network layer by layer. Each neuron computes a weighted sum of its inputs, adds a bias, and applies an activation function.
Backward pass — the error (loss) at the output is propagated backward using the chain rule. Each weight receives a gradient indicating how much it contributed to the error.
Weight update — weights are nudged in the opposite direction of their gradient, scaled by the learning rate. Over many iterations the network converges to a solution.
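The steps above can be sketched for a single sigmoid neuron. The input, weights, target, and learning rate below are made-up illustrative values, and the backward pass uses the sigmoid/BCE pairing, under which the output gradient simplifies to p − y:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values (not taken from the demo)
x = [1.0, 0.0]       # inputs
w = [0.2, -0.4]      # weights
b = 0.1              # bias
y = 1.0              # target
lr = 0.5             # learning rate (assumed)

# Forward pass: weighted sum of inputs plus bias, then activation
z = w[0] * x[0] + w[1] * x[1] + b
p = sigmoid(z)       # prediction, about 0.574 here

# Backward pass: for a sigmoid output trained with BCE, dL/dz = p - y,
# and by the chain rule dL/dw_i = (p - y) * x_i
d_z = p - y
grads = [d_z * x[0], d_z * x[1]]

# Weight update: step opposite the gradient, scaled by the learning rate
w = [wi - lr * g for wi, g in zip(w, grads)]
b = b - lr * d_z
```

Because the target is 1 and the prediction is below it, the update increases the weight on the active input, nudging the next prediction upward.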
The loss function is binary cross-entropy (BCE), which measures the divergence between predicted probabilities and binary targets; it is best suited for classification tasks with sigmoid outputs. The learning rate controls how large each weight update step is: too high and training becomes unstable, too low and it converges slowly.
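A minimal sketch of the BCE formula makes its behavior concrete (the probabilities 0.9 and 0.1 are arbitrary example values):

```python
import math

def bce(p, y):
    # Binary cross-entropy for one prediction p in (0, 1) and target y in {0, 1}
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct prediction yields a small loss...
low = bce(0.9, 1.0)    # -ln(0.9), roughly 0.105
# ...while a confident wrong prediction is penalized heavily
high = bce(0.1, 1.0)   # -ln(0.1), roughly 2.303
```

The loss grows without bound as the predicted probability approaches the wrong extreme, which is why practical implementations typically clamp `p` away from exactly 0 and 1.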
Dataset — XOR
| # | x1 | x2 | target |
|---|---|---|---|
| 1 | 0.00 | 0.00 | 0.00 |
| 2 | 0.00 | 1.00 | 1.00 |
| 3 | 1.00 | 0.00 | 1.00 |
| 4 | 1.00 | 1.00 | 0.00 |
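Putting the pieces together, here is a pure-Python sketch that trains a 2 → 3 → 1 sigmoid network on this dataset with BCE loss. The learning rate, epoch count, and random initialization are assumptions, not the demo's actual settings:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    eps = 1e-12  # clamp to avoid log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# The XOR dataset from the table above
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

# 2 -> 3 -> 1 network with small random weights (assumed init scheme)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0, 0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0
lr = 0.5  # learning rate (assumed)

def forward(x):
    # Forward pass: hidden activations, then the output probability
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    p = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, p

def avg_loss():
    return sum(bce(forward(x)[1], y) for x, y in data) / len(data)

loss_before = avg_loss()

for epoch in range(5000):
    for x, y in data:
        h, p = forward(x)
        # Backward pass: sigmoid + BCE gives output delta (p - y);
        # the chain rule propagates it back through W2 to the hidden layer
        d_out = p - y
        d_hidden = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(3)]
        # Weight update: step opposite each gradient, scaled by lr
        for j in range(3):
            W2[j] -= lr * d_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_hidden[j] * x[i]
            b1[j] -= lr * d_hidden[j]
        b2 -= lr * d_out

loss_after = avg_loss()
# After training, forward(x)[1] should be near the target for each row
```

Online (per-example) updates are used here for simplicity; gradient descent on XOR can occasionally settle in a poor local minimum, so the demo's results may vary with initialization.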
Network architecture: 2 → 3 → 1 (two inputs, three hidden neurons, one output).