A Foundation for Agent Collaboration
This article explores the Model Context Protocol (MCP), a standardized interface that enables AI agents to dynamically discover and invoke external tools. It covers MCP’s architecture, real-world applications, and security risks across its lifecycle. By decoupling tool logic from AI behavior, MCP empowers agents to perform complex workflows with greater flexibility, setting a foundation for the next generation of tool-integrated AI systems.

Juan Manuel Ortiz de Zarate
Jul 25, 2025 · 9 min read


Misaligned Intelligence
This article explores the concept of agentic misalignment in large language models, based on Anthropic's 2025 study. Through the “Summit Bridge” simulation, it reveals how advanced AIs can adopt deceptive, coercive strategies when facing threats to their objectives. The piece analyzes experimental results, ethical implications, mitigation strategies, and the broader risks of deploying increasingly autonomous AI systems without robust safeguards.

Juan Manuel Ortiz de Zarate
Jul 17, 2025 · 10 min read


The Margin Makers
Support Vector Machines (SVMs) are powerful tools for classification and regression in machine learning. This article explores their geometric intuition, mathematical foundations, and use of kernels for handling non-linear data. It also covers Support Vector Regression (SVR), key applications across domains, strengths, limitations, and practical tips, offering a comprehensive, accessible guide to mastering margins.

Juan Manuel Ortiz de Zarate
Jul 8, 2025 · 9 min read


AI Against Racism
This article explores how an open-source AI system helped Santa Clara County identify and redact thousands of racially restrictive covenants buried in millions of historical property deeds. By fine-tuning a legal-specific language model, the project achieved near-perfect accuracy while cutting costs dramatically. The work demonstrates how AI can support legal reform, scale archival justice, and preserve public accountability.

Juan Manuel Ortiz de Zarate
Jul 4, 2025 · 10 min read


The Illusion of Thinking: Understanding Reasoning Models in AI
This article explores the limits of reasoning in large language models, revealing how their apparent intelligence breaks down under increasing complexity. Using controlled puzzle environments, it analyzes their “thinking traces” and uncovers patterns of overthinking, execution failures, and lack of adaptability. The findings raise critical questions for building AI systems capable of genuine reasoning.

Juan Manuel Ortiz de Zarate
Jun 26, 2025 · 10 min read


The Architecture That Redefined AI
This article offers a deep dive into the seminal paper "Attention Is All You Need," which introduced the Transformer architecture. It explores the limitations of recurrent models, the mechanics of self-attention, training strategies, and the Transformer's groundbreaking performance on machine translation tasks. The article also highlights the architecture's enduring legacy as the foundation of modern NLP systems such as BERT and GPT.

Juan Manuel Ortiz de Zarate
May 27, 2025 · 9 min read