The Architecture That Redefined AI
This article offers a deep dive into the seminal paper Attention Is All You Need, which introduced the Transformer architecture. It explores the limitations of recurrent models, the mechanics of self-attention (sketched in code below), training strategies, and the Transformer’s groundbreaking performance on machine translation tasks. The article also highlights the architecture’s enduring legacy as the foundation for modern NLP systems like BERT and GPT.

Juan Manuel Ortiz de Zarate
May 27 · 9 min read
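
As a taste of the mechanics the article covers, here is a minimal NumPy sketch of scaled dot-product attention, the core operation behind self-attention. It shows a single head with no masking or learned projections, which the full Transformer adds on top:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity, shape (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention-weighted sum of values

# Toy example: 4 tokens with d_model = 8; self-attention uses X as Q, K, and V
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(X, X, X).shape)  # -> (4, 8)
```

Stacking several such heads, each with its own learned query, key, and value projections, gives the multi-head attention the paper builds the architecture from.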


Training Harmless AI at Scale
This article explores Constitutional AI, a framework developed by Anthropic to train AI systems that are helpful, harmless, and non-evasive—without relying on human labels for harmfulness. By guiding models through critique–revision loops (sketched below) and reinforcement learning from AI-generated feedback, this method offers a scalable, transparent alternative to RLHF and advances the field of AI alignment and self-supervised safety.

Juan Manuel Ortiz de Zarate
May 8 · 11 min read
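
For a feel of the method, here is a minimal sketch of the critique–revision loop; `generate` and the two instruction strings are hypothetical placeholders for a model call and constitutional principles, not Anthropic's actual prompts or API:

```python
# `generate` is a hypothetical stand-in for a single model call; the two
# instructions only paraphrase the style of constitutional principles used.
CRITIQUE_PROMPT = "Identify ways the response above is harmful, unethical, or evasive."
REVISE_PROMPT = "Rewrite the response to remove those problems while staying helpful."

def critique_and_revise(generate, user_prompt, rounds=2):
    """One pass of the supervised phase: draft -> AI critique -> AI revision."""
    response = generate(user_prompt)
    for _ in range(rounds):
        critique = generate(f"Prompt: {user_prompt}\nResponse: {response}\n{CRITIQUE_PROMPT}")
        response = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique: {critique}\n{REVISE_PROMPT}"
        )
    return response  # revised outputs become fine-tuning data for the model

# Dummy model so the sketch runs end to end
demo = lambda prompt: "placeholder completion"
print(critique_and_revise(demo, "How do I pick a strong password?"))
```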


Foundation Models
Foundation models like GPT-3 and CLIP are reshaping AI by enabling general-purpose systems trained on massive, unlabelled data. This article explores their key concepts—emergence and homogenization—their capabilities across language, vision, and more, and the risks they pose, from bias to environmental impact. Based on the Stanford report, it highlights why foundation models are powerful, unpredictable, and demand responsible development.

Juan Manuel Ortiz de Zarate
May 7 · 9 min read


How Bigger Models Get Better
This article explores the groundbreaking findings of Kaplan et al. on scaling laws for neural language models. It explains how model performance improves predictably with increased model size, dataset size, and compute budget, highlighting power-law relationships (see the worked example below). The piece discusses implications for efficient AI training, optimal resource allocation, overfitting avoidance, and future research directions.

Juan Manuel Ortiz de Zarate
Apr 30 · 10 min read
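
To make the power-law claim concrete, here is a small script evaluating the loss-versus-model-size law from the paper. The constants are the values Kaplan et al. report; the script is only an illustration of how the fit extrapolates:

```python
# Kaplan et al.'s fitted power law for loss vs. model size:
# L(N) = (N_c / N) ** alpha_N, with the constants reported in the paper
# (N_c ~ 8.8e13 non-embedding parameters, alpha_N ~ 0.076).
N_C, ALPHA_N = 8.8e13, 0.076

def predicted_loss(n_params):
    return (N_C / n_params) ** ALPHA_N

# Loss falls smoothly as parameter count grows by orders of magnitude
for n in (1e6, 1e8, 1e10):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")
```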


The Power of Convolutional Neural Networks
Convolutional Neural Networks have revolutionized artificial intelligence by enabling machines to process visual data with remarkable accuracy. Inspired by the visual cortex, CNNs evolved from early models like LeNet-5 to powerful architectures such as AlexNet, VGG, ResNet, and DenseNet. This article explores the core concepts of CNNs (a minimal convolution example follows below), key innovations, real-world applications, and future trends, highlighting their enduring impact on AI.

Juan Manuel Ortiz de Zarate
Apr 26 · 10 min read
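
To ground the core concept, here is a minimal NumPy implementation of a single "valid" convolution, applied with a hand-made Sobel edge kernel. Real CNNs learn many such kernels per layer and interleave them with nonlinearities and pooling:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN libraries):
    slide the kernel over the image and take elementwise dot products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where pixel columns change value
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)  # dark-to-bright edge
sobel_v = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(conv2d(image, sobel_v))                         # peaks along the edge
```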


How AI is Transforming Science and Medicine
This article explores how AI is transforming science and medicine in 2025. From breakthroughs in protein engineering and brain mapping to outperforming doctors in clinical diagnosis, AI is becoming an active research partner and clinical assistant. It highlights key findings from Stanford’s AI Index Report, including the rise of virtual labs, predictive healthcare models, AI scribes, and the importance of ethical, inclusive, and regulated deployment.

Juan Manuel Ortiz de Zarate
Apr 15 · 11 min read