Breaking the Amnesia Cycle in Large Sequence Models
Nested Learning reframes neural models as multi-loop systems that update at different frequencies, revealing how stacking depth hides gradient mechanics and limits continual learning. It interprets optimizers such as Momentum and Adam as associative gradient memories and introduces CMS for incremental abstraction. The HOPE module combines self-modification, multi-clock updates, and deep contextual compression, offering a white-box path beyond static backbones for long-context and continual learning (the gradient-as-memory view is sketched below).

Juan Manuel Ortiz de Zarate
Nov 27 · 9 min read
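
The framing of Momentum as an associative gradient memory is easiest to see in code: the momentum buffer is simply an exponentially decayed store of every gradient the training loop has written into it, and each parameter update reads that store back out. A minimal NumPy sketch of that view; the function name, decay rate, and toy gradients are illustrative and not taken from the article.

```python
import numpy as np

def momentum_step(params, grad, memory, lr=0.01, beta=0.9):
    """One SGD-with-momentum update, written to highlight the memory view:
    the buffer holds an exponentially decayed sum of all past gradients."""
    memory = beta * memory + grad   # write: fold the new gradient into the memory
    params = params - lr * memory   # read: the update replays that memory
    return params, memory

params = np.zeros(3)
memory = np.zeros(3)
for grad in (np.array([1.0, -2.0, 0.5]), np.array([0.5, -1.0, 0.0])):
    params, memory = momentum_step(params, grad, memory)
```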


Optimizing Machine Learning Models
Optimize ML models with Grid Search, Random Search, and Bayesian Optimization to boost performance, reduce overfitting, and improve evaluation metrics (a minimal scikit-learn sketch follows below).

Juan Manuel Ortiz de Zarate
Oct 29, 2024 · 9 min read
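
The difference between the search strategies named in the post above comes down to how candidate hyperparameters are enumerated: Grid Search tries every combination, while Random Search samples a fixed budget from the same space. A minimal scikit-learn sketch, assuming a toy random-forest classifier and illustrative parameter grids (none of this is drawn from the article itself):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# Grid Search: exhaustively evaluates every combination in the grid.
grid = GridSearchCV(model, {"n_estimators": [100, 300], "max_depth": [None, 10]}, cv=3)
grid.fit(X, y)

# Random Search: samples n_iter combinations from the same space.
rand = RandomizedSearchCV(
    model,
    {"n_estimators": [100, 200, 300, 500], "max_depth": [None, 5, 10, 20]},
    n_iter=4, cv=3, random_state=0,
)
rand.fit(X, y)

print(grid.best_params_, grid.best_score_)
print(rand.best_params_, rand.best_score_)
```

Bayesian Optimization instead builds a probabilistic model of the score surface to choose the next candidate, so it typically relies on a dedicated library rather than the two estimators shown here.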