The Lightning Mind
DeepSeek-V3.2-Exp introduces a new sparse-attention system that lets large language models handle ultra-long contexts efficiently. Using a “lightning indexer” to select only the most relevant tokens, it cuts computation costs while preserving reasoning power. The result is a faster, cheaper, and more cognitively elegant AI that learns what to ignore, bringing machine focus closer to human intelligence.
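In rough terms, the indexer assigns each cached token a cheap relevance score against the current query and keeps only the top-k tokens for full attention. Below is a minimal NumPy sketch of that idea; the function name, the indexer weights idx_w, and the choice of k are illustrative assumptions, not DeepSeek's actual DSA implementation.

```python
import numpy as np

def lightning_topk_attention(q, K, V, idx_w, k=64):
    """Toy sketch of indexer-guided sparse attention (names and shapes assumed)."""
    # Cheap "indexer" pass: score every cached token with a small projection of the query.
    scores = K @ (idx_w @ q)                 # (seq_len,) relevance scores
    keep = np.argsort(scores)[-k:]           # indices of the k most relevant tokens
    K_s, V_s = K[keep], V[keep]              # sparse key/value subset
    logits = K_s @ q / np.sqrt(q.shape[0])   # scaled dot-product over the subset only
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ V_s                     # attention output built from k tokens, not seq_len

# Example: 4096 cached tokens, head dimension 64, attend to only the top 64.
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K = rng.standard_normal((4096, 64))
V = rng.standard_normal((4096, 64))
out = lightning_topk_attention(q, K, V, idx_w=rng.standard_normal((64, 64)))
```

The point of the sketch is the cost profile: the full sequence is touched only by the cheap scoring pass, while the expensive softmax attention runs over k tokens instead of the whole context.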

Juan Manuel Ortiz de Zarate
Oct 22 · 9 min read


Unveiling the Enigma of AI Hallucinations
Large Language Models hallucinate because training and evaluation reward guessing over admitting uncertainty. Errors arise statistically during pretraining, much like errors in binary classification, and they persist because most post-training evaluations use binary scoring that penalizes "I don't know" responses and incentivizes confident falsehoods. The proposed solution is a socio-technical one: adjust existing benchmarks with explicit confidence targets to foster more trustworthy AI by rewarding honest expressions of uncertainty rather than lucky guesses.
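One way to read a confidence-target rubric: a correct answer earns full credit, abstaining earns nothing, and a wrong answer carries a penalty calibrated so that guessing only pays off above the stated confidence threshold. The sketch below is an assumed illustration of such a rubric; the threshold value and penalty form are hypothetical, not the exact scoring rule from the article.

```python
def confidence_target_score(answer: str, correct: str, t: float = 0.75) -> float:
    """Illustrative confidence-target grading (assumed form, not the exact rubric)."""
    if answer.strip().lower() in {"i don't know", "idk"}:
        return 0.0               # abstaining is neutral, never penalized
    if answer == correct:
        return 1.0               # a correct answer earns full credit
    return -t / (1.0 - t)        # a wrong answer costs enough that guessing only
                                 # has positive expected value above confidence t

# With t = 0.75, a wrong guess costs 3 points, so a model that is less than
# 75% sure does better, in expectation, by answering "I don't know".
```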

Juan Manuel Ortiz de Zarate
Sep 11 · 12 min read


Tech Titans Turn to Atomic Power to Fuel the Future
Tech giants turn to nuclear energy to power AI, tackling rising energy demands and environmental impact with bold new strategies.

Juan Manuel Ortiz de Zarate
Mar 29 · 10 min read