Visualise how neural network weights map to fixed-point and low-bit formats.
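The mapping can be sketched with symmetric uniform quantisation, a minimal NumPy sketch; the bit-width, scaling rule, and weight values here are illustrative assumptions, not the demo's exact scheme:

```python
import numpy as np

def quantise(w, bits=8):
    # Symmetric uniform quantisation: scale so the largest weight
    # magnitude lands on the top of the signed integer grid.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4))
q, scale = quantise(w, bits=4)
w_hat = q * scale                 # dequantised reconstruction
# Rounding error is bounded by half a quantisation step.
print(np.max(np.abs(w - w_hat)), scale / 2)
```

Fewer bits mean a coarser grid (a larger `scale`), so the worst-case reconstruction error grows as the bit-width shrinks.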
Quantising one column introduces error that propagates to the remaining columns. Watch how compensating updates reduce the total error.
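The compensation step can be sketched as below. This is a simplified single-row variant of the Hessian-based update used by methods such as OBQ/GPTQ (which maintain the inverse of the remaining submatrix rather than a fixed inverse); the calibration data, dimensions, and damping are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rtn(w, scale):
    # Plain round-to-nearest onto a uniform grid.
    return np.round(w / scale) * scale

def quantise_compensated(w, Hinv, scale):
    # Quantise entries left to right; after each one, fold its error into
    # the not-yet-quantised entries so the layer output stays close.
    w = w.copy()
    q = np.empty_like(w)
    for j in range(len(w)):
        q[j] = rtn(w[j], scale)
        err = (w[j] - q[j]) / Hinv[j, j]
        w[j + 1:] -= err * Hinv[j, j + 1:]
    return q

# Toy calibration inputs with correlated features, so errors interact.
m, n = 256, 16
base = rng.normal(size=(m, 1))
X = base + 0.3 * rng.normal(size=(m, n))
H = X.T @ X + 1e-3 * np.eye(n)    # damped proxy Hessian of output error
Hinv = np.linalg.inv(H)

w = rng.normal(0.0, 1.0, n)
scale = np.max(np.abs(w)) / (2 ** 2 - 1)   # coarse 3-bit grid
naive_err = np.linalg.norm(X @ (w - rtn(w, scale)))
comp_err = np.linalg.norm(X @ (w - quantise_compensated(w, Hinv, scale)))
print("round-to-nearest output error:", naive_err)
print("compensated output error:    ", comp_err)
```

The quantity being reduced is the error in the layer's *output* on calibration data, not the raw weight error, which is why the update involves the (inverse) Hessian of the inputs.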
A 3-layer MLP trained on a spiral dataset. Quantise the weights and see how the decision boundary degrades.
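A minimal NumPy sketch of this setup; the dataset shape, layer sizes, and training hyperparameters are assumptions, not the demo's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def spiral(n_per_class=150, noise=0.05):
    # Two interleaved spiral arms, one per class.
    xs, ys = [], []
    for c in range(2):
        r = np.linspace(0.1, 1.0, n_per_class)
        t = r * 2 * np.pi + c * np.pi + rng.normal(0, noise, n_per_class)
        xs.append(np.stack([r * np.cos(t), r * np.sin(t)], axis=1))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

X, y = spiral()

# 3-layer MLP: 2 -> 32 -> 32 -> 2, tanh hidden activations.
sizes = [2, 32, 32, 2]
params = [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, X):
    acts = [X]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if i < len(params) - 1 else z)
    return acts

def accuracy(params, X, y):
    logits = forward(params, X)[-1]
    return float(np.mean(np.argmax(logits, axis=1) == y))

# Full-batch gradient descent on softmax cross-entropy.
for step in range(3000):
    acts = forward(params, X)
    p = np.exp(acts[-1] - acts[-1].max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1            # dL/dlogits
    grad = p / len(y)
    for i in reversed(range(len(params))):
        W, b = params[i]
        gW, gb = acts[i].T @ grad, grad.sum(axis=0)
        if i > 0:
            grad = (grad @ W.T) * (1 - acts[i] ** 2)   # tanh'
        params[i] = (W - 1.0 * gW, b - 1.0 * gb)

def quantise_params(params, bits):
    # Symmetric per-layer weight quantisation; biases stay in float.
    out = []
    for W, b in params:
        s = np.max(np.abs(W)) / (2 ** (bits - 1) - 1)
        out.append((np.round(W / s) * s, b))
    return out

print("float:", accuracy(params, X, y))
for bits in (8, 4, 2):
    print(f"{bits}-bit:", accuracy(quantise_params(params, bits), X, y))
```

Plotting the network's predictions over a grid of 2-D points before and after `quantise_params` is what renders the degrading decision boundary.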
Relative multiplier area vs estimated accuracy retention. The Pareto-optimal frontier defines the best achievable tradeoffs.
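The frontier itself is easy to extract: a design point is on it only if no other point has both smaller area and higher accuracy. A small sketch with made-up (area, accuracy) pairs:

```python
def pareto_frontier(points):
    # points: (relative area, accuracy retention).
    # Keep a point only if nothing cheaper is at least as accurate.
    frontier = []
    for area, acc in sorted(points):          # ascending area
        if not frontier or acc > frontier[-1][1]:
            frontier.append((area, acc))
    return frontier

configs = [(1.0, 0.99), (0.55, 0.97), (0.30, 0.90),
           (0.35, 0.85), (0.15, 0.60)]
print(pareto_frontier(configs))
# → [(0.15, 0.6), (0.3, 0.9), (0.55, 0.97), (1.0, 0.99)]
```

The dominated point `(0.35, 0.85)` drops out: `(0.30, 0.90)` is both smaller and more accurate.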
Assign precision per layer. Sensitive layers benefit from higher precision.
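One way to automate the assignment is a greedy allocation under a total bit budget. The sensitivity scores, the `4**(-bits)` error model, and the budget below are illustrative assumptions:

```python
def assign_bits(sens, total_bits, choices=(2, 4, 8)):
    # Greedy allocation: start every layer at the lowest precision, then
    # repeatedly upgrade the layer with the best error reduction per bit.
    # Error model: layer i contributes sens[i] * 4**(-bits) -- the usual
    # "error variance quarters per extra bit" assumption.
    bits = [choices[0]] * len(sens)
    budget = total_bits - sum(bits)
    err = lambda s, b: s * 4.0 ** (-b)
    while True:
        best = None
        for i, b in enumerate(bits):
            higher = [c for c in choices if c > b]
            if not higher or higher[0] - b > budget:
                continue
            nb = higher[0]
            gain = (err(sens[i], b) - err(sens[i], nb)) / (nb - b)
            if best is None or gain > best[0]:
                best = (gain, i, nb)
        if best is None:
            return bits
        _, i, nb = best
        budget -= nb - bits[i]
        bits[i] = nb

sens = [5.0, 1.0, 0.2]                    # first layer most sensitive
print(assign_bits(sens, total_bits=16))   # → [8, 4, 4]
```

With 16 bits to spread over three layers, the most sensitive layer ends up at 8 bits while the others settle at 4, matching the intuition that sensitive layers deserve the extra precision.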