How Language Models Learned to Reason
The article explores the paper Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, showing that large language models can perform complex reasoning when prompted to generate intermediate reasoning steps in natural language. By providing examples with explicit “chains of thought,” models learn to decompose problems and significantly improve performance on arithmetic, commonsense, and symbolic reasoning tasks—without fine-tuning or architectural changes.

Juan Manuel Ortiz de Zarate
Feb 15 · 10 min read


When Models Learn to Think Before Painting
This article explores HunyuanImage 3.0, Tencent’s groundbreaking open-source multimodal model that unifies language understanding, visual reasoning, and image generation. It examines the model’s data pipeline, architecture, Chain-of-Thought workflow, and progressive training strategy, showing how HunyuanImage 3.0 achieves state-of-the-art text-to-image performance while enabling richer control, coherence, and creativity.

Juan Manuel Ortiz de Zarate
Dec 6, 2025 · 9 min read