# Neural network training stability
Bendex monitors your training run in real time, identifies which layer is failing, and corrects it automatically — before your run is lost.
## How it works
Bendex runs alongside your training loop with a single observe() call per step. No wrappers, no rewrites.
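The single-call pattern described above can be sketched as follows. Only the `observe()` name comes from the text; the stand-in `Monitor` class, its constructor, and its arguments are assumptions made so the sketch runs on its own.

```python
# Stand-in monitor so the sketch is self-contained; only the
# one-observe()-call-per-step pattern is from the text, the rest is assumed.
class Monitor:
    def __init__(self):
        self.history = []

    def observe(self, step, loss):
        # Record one (step, loss) sample per training step.
        self.history.append((step, loss))

monitor = Monitor()
for step in range(100):                    # your existing training loop, unchanged
    loss = 1.0 / (step + 1.0)              # placeholder for forward/backward/update
    monitor.observe(step=step, loss=loss)  # the single added call per step
```

The point of the pattern: the training loop itself is untouched, so there is nothing to wrap or rewrite.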
Bendex tracks the geometric trajectory of your model's weights relative to a frozen reference. The discrete curvature κ_t spikes at instability onset — well before your loss curve shows anything.
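The text does not define κ_t exactly, but one standard discretization of trajectory curvature is the turning angle between successive weight updates, normalized by step length. A minimal sketch under that assumption:

```python
import numpy as np

def discrete_curvature(traj):
    """Turning-angle curvature along a weight trajectory.

    traj: (T, D) array of flattened weight snapshots. This is an
    illustrative stand-in for κ_t, not the product's actual definition.
    """
    v = np.diff(traj, axis=0)                                  # step vectors
    n = np.linalg.norm(v, axis=1)
    cos = np.einsum("ij,ij->i", v[:-1], v[1:]) / (n[:-1] * n[1:] + 1e-12)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))                 # turning angle per step
    ds = 0.5 * (n[:-1] + n[1:])                                # local arc-length scale
    return angle / (ds + 1e-12)                                # angle per unit length
```

A trajectory drifting smoothly gives κ_t near zero; a sudden change of direction, as when one layer's updates blow up, produces a sharp spike even while the loss still looks flat.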
When a trigger fires, Bendex identifies which specific module deviated first using per-layer z-scored divergence histories. You know exactly where the problem originated.
Bendex freezes the suspect module, reduces learning rate, and monitors recovery. If the run doesn't stabilize, it escalates. Your training continues — automatically.
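The freeze / reduce / monitor / escalate loop amounts to a small state machine. A minimal sketch, in which the class name, patience window, and learning-rate factor are all assumptions rather than the product's real policy:

```python
class RecoveryController:
    """Freeze the suspect, cut the learning rate, watch, then escalate.

    Illustrative sketch only; thresholds and names are assumptions.
    """

    def __init__(self, lr, patience=50, lr_factor=0.1):
        self.lr = lr
        self.patience = patience
        self.lr_factor = lr_factor
        self.frozen = set()   # modules excluded from optimizer updates
        self.waited = 0

    def on_trigger(self, module_name):
        self.frozen.add(module_name)   # stop updating the suspect module
        self.lr *= self.lr_factor      # cool down the rest of the run
        self.waited = 0

    def on_step(self, is_stable):
        if not self.frozen:
            return "ok"
        if is_stable:
            self.frozen.clear()        # recovered: resume normal training
            return "recovered"
        self.waited += 1
        if self.waited >= self.patience:
            return "escalate"          # hand off to a stronger intervention
        return "monitoring"
```

In a PyTorch run, freezing would mean setting `requires_grad = False` on the module's parameters and lowering the optimizer's per-group learning rates; the sketch keeps only the control flow.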
## Benchmarks
Bendex was benchmarked against loss spike detection, gradient norm monitoring, and patience-based early stopping — the approaches most teams currently use.
| System | Detection rate | False-positive rate | Recovery rate | Attributes module | Intervenes |
|---|---|---|---|---|---|
| No monitor | 0% | 0% | 0% | No | No |
| Loss spike | 100% | 80% | 0% | No | No |
| Gradient norm | 90% | 50% | 0% | No | No |
| Patience / early stop | 100% | 50% | 0% | No | No |
| Bendex | 100% | 0% | 90% | Yes | Yes |
## Research
Bendex is the applied arm of a theoretical program in information geometry. The same Fisher-manifold framework behind Bendex's zero false-positive rate also yields the fine-structure constant α to 8 significant figures, as a blind prediction from pure geometry.
## Request access
Bendex is in early access. Fill out the form and we'll be in touch within 48 hours with onboarding instructions and licensing details.
Questions? Email 9hannahnine@gmail.com