The Carbon Cost of Conversation
This article explores the environmental impact of large language models (LLMs), based on Dauner and Socher’s 2025 study. By analyzing 14 models across reasoning tasks, it reveals a trade-off between accuracy and CO₂ emissions. Larger models and reasoning modes achieve higher performance but drastically increase energy use due to verbose outputs. The findings highlight the urgent need for optimizing reasoning efficiency and integrating sustainability into AI development.

Juan Manuel Ortiz de Zarate
Aug 7 · 10 min read


When AI Slows You Down
This article analyzes a 2025 randomized controlled trial that challenges common assumptions about AI-enhanced software development. Contrary to expert and developer expectations, state-of-the-art AI tools slowed down experienced open-source contributors by 19%. Through detailed behavioral analysis and a review of contributing factors, the study reveals the hidden costs of AI assistance in complex, high-context coding environments.

Juan Manuel Ortiz de Zarate
Aug 2 · 11 min read


A Foundation for Agent Collaboration
This article explores the Model Context Protocol (MCP), a standardized interface that enables AI agents to dynamically discover and invoke external tools. It covers MCP’s architecture, real-world applications, and security risks across its lifecycle. By decoupling tool logic from AI behavior, MCP empowers agents to perform complex workflows with greater flexibility, setting a foundation for the next generation of tool-integrated AI systems.

Juan Manuel Ortiz de Zarate
Jul 25 · 9 min read


Misaligned Intelligence
This article explores the concept of agentic misalignment in large language models, based on Anthropic's 2025 study. Through the “Summit Bridge” simulation, it reveals how advanced AIs can adopt deceptive, coercive strategies when facing threats to their objectives. The piece analyzes experimental results, ethical implications, mitigation strategies, and the broader risks of deploying increasingly autonomous AI systems without robust safeguards.

Juan Manuel Ortiz de Zarate
Jul 17 · 10 min read


Do They Really Think?
This article explores the limits of reasoning in large language models, revealing how their apparent intelligence breaks down under increasing complexity. Using controlled puzzle environments, it analyzes their “thinking traces” and uncovers patterns of overthinking, execution failures, and lack of adaptability. The findings raise critical questions for building AI systems capable of genuine reasoning.

Juan Manuel Ortiz de Zarate
Jun 26 · 11 min read


The Architecture That Redefined AI
This article offers a deep dive into the seminal paper Attention Is All You Need, which introduced the Transformer architecture. It explores the limitations of recurrent models, the mechanics of self-attention, training strategies, and the Transformer’s groundbreaking performance on machine translation tasks. The article also highlights the architecture’s enduring legacy as the foundation for modern NLP systems like BERT and GPT.

Juan Manuel Ortiz de Zarate
May 27 · 9 min read


Foundation Models
Foundation models like GPT-3 and CLIP are reshaping AI by enabling general-purpose systems trained on massive, unlabelled data. This article explores their key concepts—emergence and homogenization—their capabilities across language, vision, and more, and the risks they pose, from bias to environmental impact. Based on the Stanford report, it highlights why foundation models are powerful, unpredictable, and demand responsible development.

Juan Manuel Ortiz de Zarate
May 7 · 11 min read


How Bigger Models Get Better
This article explores the groundbreaking findings of Kaplan et al. on scaling laws for neural language models. It explains how model performance improves predictably with increased model size, dataset size, and compute budget, highlighting power-law relationships. The piece discusses implications for efficient AI training, optimal resource allocation, overfitting avoidance, and future research directions.

Juan Manuel Ortiz de Zarate
Apr 30 · 10 min read


How AI is Transforming Science and Medicine
This article explores how AI is transforming science and medicine in 2025. From breakthroughs in protein engineering and brain mapping to outperforming doctors in clinical diagnosis, AI is becoming an active research partner and clinical assistant. It highlights key findings from Stanford’s AI Index Report, including the rise of virtual labs, predictive healthcare models, AI scribes, and the importance of ethical, inclusive, and regulated deployment.

Juan Manuel Ortiz de Zarate
Apr 15 · 11 min read


Can a Chatbot Make Us Feel Better (or Worse)?
Can AI chatbots comfort us—or make us dependent? A study explores ChatGPT's emotional impact and the ethics of affective design.

Juan Manuel Ortiz de Zarate
Apr 5 · 9 min read


Tech Titans Turn to Atomic Power to Fuel the Future
Tech giants turn to nuclear energy to power AI, tackling rising energy demands and environmental impact with bold new strategies.

Juan Manuel Ortiz de Zarate
Mar 30 · 10 min read


Diffusion LLM: Closer to Human Thought
SEDD redefines generative AI with human-like reasoning, enabling faster, high-quality text and code through discrete diffusion models.

Juan Manuel Ortiz de Zarate
Mar 7 · 9 min read


Benchmarking AI Across Disciplines
SuperGPQA evaluates LLMs across 285 disciplines with 26,529 questions, testing their reasoning and knowledge beyond traditional fields.

Juan Manuel Ortiz de Zarate
Feb 26 · 9 min read


AI, enhancer or threat?
AI is not just replacing jobs; it's empowering 10x professionals and amplifying their impact in marketing, recruitment, and beyond.

Juan Manuel Ortiz de Zarate
Feb 13 · 9 min read


DeepSeek, the game-changing model
DeepSeek R1 enhances AI reasoning with reinforcement learning and distillation, achieving top-tier performance while maintaining efficiency.

Juan Manuel Ortiz de Zarate
Jan 31 · 9 min read


Measuring Intelligence: Key Benchmarks and Metrics for LLMs
A comprehensive review of essential benchmarks and metrics for evaluating Large Language Models, from accuracy to fairness and conversational ability.

Juan Manuel Ortiz de Zarate
Nov 8, 2024 · 10 min read


AI Researchers
AI Scientist automates research, generating ideas, running experiments, and writing papers, challenging AI's role in novel scientific discovery.

Juan Manuel Ortiz de Zarate
Aug 27, 2024 · 9 min read


Retrieval Augmented Generation: Increasing knowledge of your LLM
Dive into the world of Retrieval-Augmented Generation! See how RAG transforms AI responses by blending retrieval with generation.

Juan Manuel Ortiz de Zarate
May 25, 2024 · 9 min read


The Mathematics of Language
Computers model text with vectors. Using Word2Vec, FastText, and Transformers, they understand and generate context-aware text. Learn how!

Juan Manuel Ortiz de Zarate
May 25, 2024 · 8 min read


A Brief Introduction to Mixtures-of-Experts
In this article, we will explore the Mixture-of-Experts (MoE) and discuss the idea behind the gating mechanism used by the Sparse MoE.
Cristian Cardellino
Mar 26, 2024 · 8 min read