What If Reasoning Doesn’t Need Billion-Parameter Models?
Large language models excel at language but often struggle with structured reasoning tasks. This article explores Tiny Recursive Models (TRMs), a radically simpler approach that uses small neural networks with recursive refinement to outperform massive LLMs on puzzles like Sudoku, mazes, and ARC-AGI. By prioritizing iterative reasoning over scale, TRMs show that deep thinking can emerge from minimal architectures, challenging prevailing assumptions about model size and intelligence.

Juan Manuel Ortiz de Zarate
Dec 18, 2025 · 10 min read


Breaking the Amnesia Cycle in Large Sequence Models
Nested Learning reframes neural models as multi-loop systems updating at different frequencies, revealing that depth stacking hides gradient mechanics and limits continual learning. It interprets optimizers like Momentum and Adam as associative gradient memories and introduces CMS for incremental abstraction. The HOPE module combines self-modification, multi-clock updates, and deep contextual compression, offering a white-box path beyond static backbones for long-context and continual learning.

Juan Manuel Ortiz de Zarate
Nov 27, 2025 · 9 min read


Measuring Controversy in Social Networks through NLP
Discover how NLP tools can identify and analyze contentious topics.

Juan Manuel Ortiz de Zarate
Jul 6, 2024 · 10 min read