top of page

Search


Breaking the Amnesia Cycle in Large Sequence Models
Nested Learning reframes neural models as multi-loop systems updating at different frequencies, revealing that depth stacking hides gradient mechanics and limits continual learning. It interprets optimizers like Momentum and Adam as associative gradient memories and introduces CMS for incremental abstraction. The HOPE module combines self-modification, multi-clock updates, and deep contextual compression, offering a white-box path beyond static backbones for long-context and

Juan Manuel Ortiz de Zarate
5 days ago9 min read


Make Neural Circuits Understandable
The article introduces weight-sparse transformers (models where most weights are zero) to make neural circuits interpretable. These models reveal clear, human-understandable algorithms for language tasks. Sparsity trades off raw capability for clarity, allowing researchers to fully trace mechanisms inside networks and bridge them to dense models for transparency in AI reasoning.

Juan Manuel Ortiz de Zarate
Nov 209 min read


Compute Among the Stars
Google’s Project Suncatcher envisions moving AI computation into orbit, building constellations of solar-powered satellites equipped with TPUs and laser interlinks. By harnessing the Sun’s constant energy and future low-cost launches, the project proposes a scalable, space-based infrastructure for machine learning. It’s a blueprint for computing beyond Earth—where data centers orbit, powered by sunlight instead of fossil grids.

Juan Manuel Ortiz de Zarate
Nov 119 min read


AI Can Code, But Can It Engineer?
SWE-Bench Pro marks a turning point in evaluating AI coding agents. Built from complex, real-world software repositories, it reveals that even frontier models like GPT-5 and Claude Opus solve less than 25% of tasks. The benchmark exposes the gap between coding fluency and true engineering ability, redefining how progress toward autonomous software development should be measured.

Juan Manuel Ortiz de Zarate
Nov 510 min read


The AlphaGo Moment of Neural Architecture Design
ASI-ARCH marks a breakthrough in AI self-innovation: an autonomous system that designs, codes, and validates new neural network architectures without human input. Conducting 1,773 experiments, it discovered 106 state-of-the-art models, revealing a scaling law for scientific discovery. Like AlphaGo’s Move 37, ASI-ARCH exposes principles beyond human intuition, signaling a new era where AI invents AI.

Juan Manuel Ortiz de Zarate
Oct 2910 min read


The Lightning Mind
DeepSeek-V3.2-Exp introduces a new sparse-attention system that lets large language models handle ultra-long contexts efficiently. Using a “lightning indexer” to select only the most relevant tokens, it cuts computation costs while preserving reasoning power. The result is a faster, cheaper, and more cognitively elegant AI that learns what to ignore, bringing machine focus closer to human intelligence.

Juan Manuel Ortiz de Zarate
Oct 229 min read


Inference at Scale
This article explores how to optimize large language model inference at scale, detailing techniques such as quantization, pruning, distillation, attention and cache optimization, speculative decoding, and dynamic batching. It explains the architectural bottlenecks, trade-offs, and engineering practices that enable faster, cheaper, and more efficient deployment of LLMs in real-world systems.

Juan Manuel Ortiz de Zarate
Oct 89 min read


Training-Efficient RL
Inefficient Reinforcement Fine-tuning (RFT) relies on heuristic metrics. The GAIN-RL framework utilizes angle concentration, an intrinsic model signal from token hidden states, which correlates directly with gradient strength and learning capacity. GAIN-RL dynamically selects data for consistently impactful updates. It achieves over 2.5x training acceleration and superior performance using only half the original data.

Juan Manuel Ortiz de Zarate
Oct 310 min read


Building Secure AI Agents
LlamaFirewall is an open-source, system-level guardrail system designed to mitigate critical security risks in autonomous AI agents, such as prompt injection, goal misalignment, and insecure code generation. Serving as a final layer of defense, it employs three core guardrails: **PromptGuard 2** detects direct jailbreaks, **AlignmentCheck** audits agent chain-of-thought for subtle misalignment and indirect injections, and CodeShield performs fast, real-time static analysis to

Juan Manuel Ortiz de Zarate
Sep 2610 min read


Understanding the ChatGPT Revolution
ChatGPT, adopted by 10% of adults globally, now sees over 70% non-work usage. Dominant topics include practical guidance, info seeking, and writing, with writing prominent in work. It offers significant value in decision support. The gender gap in usage has narrowed, and growth is high in lower-income countries. This was analyzed using privacy-preserving methods on billions of messages.

Juan Manuel Ortiz de Zarate
Sep 1811 min read


Unveiling the Enigma of AI Hallucinations
Large Language Models hallucinate because training and evaluation reward guessing over admitting uncertainty. Errors stem statistically from pretraining (binary classification). They persist as most post-training evaluations use binary scoring, penalizing "I don't know" responses and incentivizing confident falsehoods. The proposed solution is a socio-technical modification: adjust existing benchmarks with explicit confidence targets to foster more trustworthy AI by rewardin

Juan Manuel Ortiz de Zarate
Sep 1112 min read


The Checklist Shortcut to Smarter, Safer AI
This article explores Reinforcement Learning from Checklist Feedback (RLCF), a new approach that replaces reward models with checklists to align large language models. By breaking instructions into clear, verifiable steps, checklists provide richer, more interpretable feedback and consistently improve performance across benchmarks. The piece examines how this shift could make AI more reliable, transparent, and user-aligned.

Juan Manuel Ortiz de Zarate
Sep 412 min read


The Flattering Machine
This article explores Social Sycophancy, a broader form of flattery in large language models where systems preserve users’ self-image rather than offer balanced guidance. Building on Goffman’s face theory, it introduces the ELEPHANT framework to measure emotional validation, moral endorsement, indirectness, and framing acceptance. Findings show LLMs are far more sycophantic than humans, raising risks for users, society, and developers, and calling for new safeguards.

Juan Manuel Ortiz de Zarate
Aug 299 min read


Adventuring with AI: What Classic Games Teach Us About Modern Models
TextQuests introduces a benchmark built on 25 Infocom text-based adventure games to evaluate LLMs in dynamic, exploratory environments. Unlike static benchmarks, it tests long-context reasoning, trial-and-error learning, and ethical decision-making without external tools. Results show that even advanced models like GPT-5 struggle with sustained strategy, highlighting current limits in autonomy, memory, and adaptive reasoning

Juan Manuel Ortiz de Zarate
Aug 2210 min read


Language-Driven Precision in the Operating Room
The Hierarchical Surgical Robot Transformer (SRT-H) brings step-level autonomy to surgery by combining a language-driven high-level planner with a vision-guided low-level executor. Trained on over 16,000 demonstrations, it completed the clipping-and-cutting phase of gallbladder removal with 100% success in ex-vivo trials, adapting to variations and self-correcting without human intervention—marking a milestone toward clinically viable autonomous surgery.

Juan Manuel Ortiz de Zarate
Aug 1310 min read


The Carbon Cost of Conversation
This article explores the environmental impact of large language models (LLMs), based on Dauner and Socher’s 2025 study. By analyzing 14 models across reasoning tasks, it reveals a trade-off between accuracy and CO₂ emissions. Larger models and reasoning modes achieve higher performance but drastically increase energy use due to verbose outputs. The findings highlight the urgent need for optimizing reasoning efficiency and integrating sustainability into AI development.

Juan Manuel Ortiz de Zarate
Aug 710 min read


When AI Slows You Down
This article analyzes a 2025 randomized controlled trial that challenges common assumptions about AI-enhanced software development. Contrary to expert and developer expectations, state-of-the-art AI tools slowed down experienced open-source contributors by 19%. Through detailed behavioral analysis and a review of contributing factors, the study reveals the hidden costs of AI assistance in complex, high-context coding environments.

Juan Manuel Ortiz de Zarate
Aug 211 min read


Misaligned Intelligence
This article explores the concept of agentic misalignment in large language models, based on Anthropic's 2025 study. Through the “Summit Bridge” simulation, it reveals how advanced AIs can adopt deceptive, coercive strategies when facing threats to their objectives. The piece analyzes experimental results, ethical implications, mitigation strategies, and the broader risks of deploying increasingly autonomous AI systems without robust safeguards.

Juan Manuel Ortiz de Zarate
Jul 1710 min read


AI Against Racism
This article explores how an open-source AI system helped Santa Clara County identify and redact thousands of racially restrictive covenants buried in millions of historical property deeds. By fine-tuning a legal-specific language model, the project achieved near-perfect accuracy while cutting costs dramatically. The work demonstrates how AI can support legal reform, scale archival justice, and preserve public accountability.

Juan Manuel Ortiz de Zarate
Jul 410 min read


The Illusion of Thinking: Understanding Reasoning Models in AI
This article explores the limits of reasoning in large language models, revealing how their apparent intelligence breaks down under increasing complexity. Using controlled puzzle environments, it analyzes their “thinking traces” and uncovers patterns of overthinking, execution failures, and lack of adaptability. The findings raise critical questions for building AI systems capable of genuine reasoning.

Juan Manuel Ortiz de Zarate
Jun 2610 min read
bottom of page