Jordi Villar

A reading log that updates every Sunday (when I've read enough).

I save articles to my read-it-later app and write quick thoughts as I read. An automated task pulls everything I've read during the week, uses AI to review the content and expand my notes into something more coherent, then publishes them here.

  • Hedonism and Entrepreneurship in Barcelona — A potential acquisition dies because a vegan exec unknowingly eats mayo on patatas bravas. A critical migration stalls at 69% while the engineer responsible plays ping-pong. Having lived the Barcelona startup scene myself, this one hit close. The absurdity is the point — startup success is so much more arbitrary than anyone wants to admit, and the line between “almost rich” and “back to Monday” is thinner than your Series A pitch deck suggests.
  • OpenClaw, OpenAI and the future — After 13 years building PSPDFKit, Steinberger explicitly chose not to build another company. Instead he’s joining OpenAI and turning OpenClaw into a foundation. I keep thinking about the tension here: he wants to keep things open and independent, but the path to maximum impact runs through one of the most powerful closed labs in the world. Whether that trade-off holds up depends on how seriously OpenAI takes the foundation model. Time will tell.
  • They’re putting blue food coloring in everything — It’s not about blue food, obviously. It’s about how unwanted things get normalized — first one restaurant does it, then “all the best restaurants” do it, then your friend sneaks it into homemade food and tells you you’re overreacting. The detail that got me is the protagonist saying “I just think it tastes weird” and being told “most people say it’s just fine.” I’ve had that exact conversation about too many things in tech.
  • Why I joined OpenAI — The turning point wasn’t a demo or a benchmark — it was his hairstylist casually mentioning she uses ChatGPT all the time. She recognized ChatGPT as a brand more readily than Intel, where Gregg was a Fellow. That contrast says more about where computing impact has shifted than any industry report. The environmental framing is interesting too: at this scale, performance engineering isn’t just cost optimization, it’s resource consumption with real planetary consequences.
  • GitHub Actions Is Slowly Killing Your Engineering Team — The fact that an entire cottage industry of startups exists solely to fix GitHub Actions’ slow runners tells you everything. The comparison to Internet Explorer is spot on — it wins because it ships with the thing, not because it’s good. I’ve felt the pain of debugging through four pages of loading spinners to find a failed step, and the escape hatch of “just write a bash script” that inevitably becomes 800 lines of unmaintainable CI logic. Default integration is a powerful moat, even for mediocre products.
  • 2025 letter — Wang’s framing of the US economy as “a highly leveraged bet on deep learning” stuck with me. The Xiaomi vs Apple comparison is brutal: Xiaomi shipped an EV in 4 years while Apple spent 10 years and $10B before giving up entirely. His concept of Silicon Valley’s “soft Leninism” — groupthink disguised as meritocracy — is uncomfortable because it’s hard to argue against when you see how the industry moves in lockstep. Not sure I fully buy the symmetry he draws between SV and the CCP’s self-seriousness, but it made me think.
  • My AI Adoption Journey — The most practical thing here is the “reproduce your own work” phase: doing tasks manually and then redoing them with agents. Painful, but it forces you to learn where agents actually help versus where you’re just cargo-culting. I also liked the idea of end-of-day agent sessions to get warm starts the next morning. What resonated most is his framing around craftsmanship rather than ideology — he doesn’t care if AI is here to stay, he just wants to do good work faster.
  • Nobody cares — The examples hit hard because they’re so mundane: bike lanes designed to kill you, gym weights left unracked, dog waste everywhere. It’s not malice, it’s indifference, and that’s almost worse. I keep thinking about the author’s failed experiment of installing dog waste dispensers in his neighborhood hoping it would snowball into community care. It didn’t. The Japan comparison is interesting but I’m not sure “will to have nice things” is something you can transplant — it might be downstream of much deeper cultural structures.
  • Hotpatching ClickHouse in production with XRay — The fact that you can inject log statements into a running ClickHouse instance via SQL is remarkable. SYSTEM INSTRUMENT ADD LOG 'QueryMetricLog::startQuery' '...' and it just works. The 4-7% binary size overhead with negligible runtime cost when inactive makes it a reasonable trade-off. This solves the one problem every production debugger knows: wishing you’d added one more log statement before deploying.
  • I miss thinking hard. — The Builder vs Thinker framing put words to something I’ve been feeling. When AI gives you a 70% solution, rejecting it to think harder feels economically irrational — even though that struggle is where the real growth happens. The honest part is the ending: there’s no resolution. He doesn’t pretend to have figured out how to balance velocity with depth, and I respect that more than a tidy answer.
  • Barefoot Data Platforms — This is the kind of pragmatism I appreciate: rip out Dagster and dbt, replace them with plain scripts that have metadata headers, and let agents iterate on isolated files. The constraint of “under 200 assets” is refreshingly honest about scope. Most data platform posts sell you a cathedral — this one says a well-organized shed is enough for most teams, and it probably is.
  • My First Distributed System — The Pokémon cloning exploit as a distributed systems lesson is one of those analogies that actually holds up under scrutiny. Two Game Boys linked by a cable are a real distributed system with real partial failure modes. The proposed escrow state fix is textbook two-phase commit, and the trade-off is clear: you can’t clone Pokémon anymore, but you might lose both if the cable disconnects mid-trade. Classic distributed systems — pick your failure mode.
  • ASCII characters are not pixels: a deep dive into ASCII rendering — I don’t know how I ended up reading this, but the core insight stuck: instead of mapping brightness to characters, you use 6-dimensional shape vectors to match character geometry to image regions. That’s why an L character gets placed where there’s an L-shaped edge in the image. The contrast enhancement technique — sampling into neighboring cells to push boundaries — is borrowed from cel-shading, which is a connection I wouldn’t have expected.
  • The Design & Implementation of Sprites — Three design choices that make Sprites interesting: no container images (pre-staged base instances for instant creation), object storage for disks (durable state becomes “just a URL”), and inside-out orchestration (user code in an inner container, platform services in the root namespace). The storage stack using JuiceFS-style chunk splitting with local SQLite metadata is clever — it makes VM migration trivial since there’s no attached volume to move.
  • WAL usage looks broken in modern Time Series Databases? — The core argument is that most TSDBs don’t actually fsync their WAL on every write, so the safety guarantee is weaker than you’d expect. Prometheus fsyncs WAL segments every 2 hours by default. Two hours. The author’s proposed alternative — buffer in memory and flush to SSTables — gives similar guarantees with less complexity. Though Ayende’s pushback is worth reading too: WAL implementations handle partial writes by rolling back incomplete transactions, which the article glosses over.
  • Keeping 20,000 GPUs healthy — The number that stands out: GPU issues account for 58.7% of all unexpected problems in production. Modal has never seen a degraded CPU core, but GPUs fail constantly — thermal throttling at 94°C, uncorrectable ECC errors clustering in specific regions, 0.1% CUDA initialization flake rates on L4s. Their approach of not attempting recovery and just disposing of unhealthy hosts is pragmatic. GPU reliability is years behind CPU reliability, and this post makes that concrete.
  • Standups — The argument that your teammates should already know what you did yesterday — through commits, PRs, and notifications — is one of those obvious truths that most teams ignore. I’ve been in standups that were pure “are you done yet?” rituals disguised as collaboration. The proposal to replace status reports with written updates and reserve meeting time for actual problem-solving is straightforward, but getting teams to actually make that switch requires someone willing to challenge the ceremony.
  • Tuning Tansu: 600,000 record/s with 13MB of RAM — What I liked about this is the methodology: using a null storage engine to isolate protocol overhead from I/O. The biggest wins came from reducing allocations during serialization and fixing eager evaluation of Option and Result methods — small things that compound. Getting from 3.7s to 1.9s on codec benchmarks (49% improvement) before even touching storage shows how much performance gets left on the table in the protocol layer.
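The escrow fix from the Pokémon trading entry above can be sketched as a toy two-phase commit. Everything here (the class, the trade API, the party names) is my own illustration, not code from the post; it just shows how moving the Pokémon into escrow trades the cloning bug for the "lose both" failure mode.

```python
# Toy two-phase commit for a link-cable trade. If the cable drops
# after both sides have prepared but before commit, both Pokémon
# stay stuck in escrow: you can't clone anymore, but you can lose both.

class GameBoy:
    def __init__(self, pokemon):
        self.party = [pokemon]
        self.escrow = None

    def prepare(self):
        # Phase 1: move the Pokémon out of the party into escrow,
        # so yanking the cable can no longer duplicate it.
        self.escrow = self.party.pop()

    def commit(self, incoming):
        # Phase 2: the traded Pokémon arrives; escrow is released.
        self.party.append(incoming)
        self.escrow = None

def trade(a, b, cable_ok=True):
    a.prepare()
    b.prepare()
    if not cable_ok:
        return  # disconnect mid-trade: both parties are left empty
    sent_a, sent_b = a.escrow, b.escrow
    a.commit(sent_b)
    b.commit(sent_a)

red, blue = GameBoy("Mew"), GameBoy("Charizard")
trade(red, blue)
print(red.party, blue.party)  # prints: ['Charizard'] ['Mew']
```

Run it with `cable_ok=False` and both parties end up empty with the Pokémon parked in escrow, which is exactly the "pick your failure mode" trade-off the entry describes.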
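On the WAL entry: the whole fsync argument comes down to a few lines. Here is a minimal sketch of an append-only log with the durability knob made explicit; the class, file path, and record format are hypothetical, not any TSDB's implementation.

```python
# A toy write-ahead log. os.write puts bytes in the OS page cache;
# only os.fsync makes them durable against a crash. Skipping the
# fsync is exactly the weakened guarantee the article complains about.

import os

class WriteAheadLog:
    def __init__(self, path, fsync_every_write=True):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.fsync_every_write = fsync_every_write

    def append(self, record: bytes):
        os.write(self.fd, record + b"\n")
        if self.fsync_every_write:
            os.fsync(self.fd)  # durable once this returns -- and slow
        # else: durability is deferred; a crash can lose buffered records

    def close(self):
        os.fsync(self.fd)
        os.close(self.fd)

wal = WriteAheadLog("/tmp/toy.wal")
wal.append(b"metric=cpu value=0.93")
wal.close()
```

The performance tension is visible even in the toy: fsync on every append is safe but serializes writes on disk flushes, which is why real systems batch, defer, or (per the article) quietly skip it.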
  • Automatic Programming — The distinction antirez makes here finally gave me language for something I’ve been fumbling with. It’s not about whether you use AI to write code—it’s about whether you’re steering or just prompting and hoping. His Redis example hit hard: the value wasn’t in technical novelty but in the contained vision. That maps to what I see working well versus the codebases that feel like they emerged from a chatbot fever dream.
  • Email triage with an embedding-based classifier — This outperformed a fine-tuned GPT by 11 percentage points while being dramatically faster. The separation of concerns makes sense: embeddings handle “understand the email” while logistic regression handles “what does this user care about.” People keep defaulting to LLMs when something simpler would work better. Worth remembering that the expensive part doesn’t need to run every time.
  • Efficient String Compression for Modern Database Systems — The insight that compression is primarily about query performance, not storage, reframes the whole tradeoff. Getting data to fit in L1 cache (1ns access) versus RAM fundamentally changes what operations cost. FSST’s approach of building a symbol table from sample data feels like the kind of clever-but-not-too-clever technique that actually ships.
  • I made my own git — “Git is just a content-addressable file store” is one of those realizations that makes everything else click. What stuck with me is that parsing was harder than the actual version control logic. We treat Git like it’s complicated, but the core idea is almost trivial—it’s the interface that makes it feel like a black box.
  • Online, Asynchronous Schema Change in F1 — The intermediate states approach is elegant: you can’t jump from no-index to index safely, but you can chain compatible transitions. Delete-only and write-only states let nodes migrate without corrupting data. This feels like the kind of solution that’s obvious after you see it but probably took years to figure out. Makes me think about what other distributed systems problems have similar chain-of-compatibility solutions.
  • Why Senior Engineers Let Bad Projects Fail — “Being right and being effective are different” cuts through so much noise. The credibility-as-currency framing explains behavior I’ve seen but couldn’t articulate. You don’t get credit for disasters you prevent, only for the battles you pick and win. Still processing whether this is pragmatic wisdom or just resignation to broken systems.
  • Slop is Everywhere For Those With Eyes to See — The 90-9-1 rule creates a structural problem: platforms need infinite content but only 1-3% of users create anything. Algorithms fill that gap with slop because engagement matters more than quality. The behavioral science point about effort and meaning landed—when everything is effortless to access, nothing feels valuable. I’ve been noticing this with technical content too, not just social media.
  • How I estimate work — “Only the known work can be accurately estimated, but unknown work takes 90% of the time” explains why estimation always feels broken. The reframe that estimates are political negotiation tools, not technical predictions, matches every project I’ve seen. Managers arrive with timelines, engineers figure out what fits. Treating it as a prediction problem sets everyone up for disappointment.
  • Scaling PostgreSQL to power 800 million ChatGPT users | OpenAI — The challenges they describe—connection pooling, read replica lag, vacuum tuning, lock contention—are exactly what you hit at high throughput. Nothing novel but it’s validating to see that even at ChatGPT scale, you’re fighting the same PostgreSQL battles. Sometimes the answer to “how do they do it?” is just “they do the same things, but more carefully.”
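The "content-addressable file store" realization from the git entry above fits in a few lines: a blob's object id is just the SHA-1 of a short header plus the file's bytes. A minimal sketch using only the standard library, mirroring what `git hash-object` computes for a blob:

```python
# A git blob id is sha1(b"blob <size>\0" + content); the object file
# on disk is the same bytes, zlib-deflated. That's the entire
# content-addressable store at the bottom of git.

import hashlib
import zlib

def git_blob_id(content: bytes) -> str:
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

def git_blob_object(content: bytes) -> bytes:
    # What actually lands under .git/objects/: header + content, deflated.
    header = f"blob {len(content)}\0".encode()
    return zlib.compress(header + content)

print(git_blob_id(b"hello\n"))
# prints: ce013625030ba8dba906f756967f9e9ca394464a
# (the same id `echo hello | git hash-object --stdin` reports)
```

Trees and commits are just more objects hashed the same way, pointing at each other by id, which is why the version-control logic turns out to be the easy part.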
  • I’m addicted to being useful — This hit close. The idea that some of us are fundamentally wired to solve problems, and engineering just happens to fit that dysfunction perfectly. I’ve been thinking about whether my drive to build things comes from genuine interest or just this compulsion to be useful. His point that the compulsion actually protects against burnout, so long as you can keep satisfying it, rings true.
  • How ClickHouse handles strings — The overlapping memory reads technique is wild. Reading the same characters multiple times on purpose because it makes branch prediction work better. I keep coming back to how much performance work is about understanding what the hardware actually does, not what we think it should do.
  • How to figure out what to do with your life — I’m at this exact crossroads. The thread didn’t load properly but the title alone captures where I am right now.
  • Raising money fucked me up — The part about projecting expectations onto yourself hit hard. He realized his investors weren’t actually pressuring him, he was doing it to himself. I see this pattern in how I think about my own work. The shift from “what problem does this solve” to “how big does this feel” is something I need to watch for.
  • Personal information firehose — A personal algorithm that learns what matters to you without manual filters. The idea feels right but the obstacles are real: training on small personal datasets instead of massive global ones, working across fragmented channels, dealing with restricted APIs. Still figuring out if this is technically possible or just wishful thinking.
  • Don’t fall into the anti-AI hype — Antirez building a BERT inference library in pure C with AI assistance in hours instead of weeks made this concrete for me. The skill that matters is knowing what to build and how to guide the tool, not typing the code yourself. I’m still working through what this means for how I spend my time.
  • The Unbearable Joy of Sitting Alone in A Café — The observation that sitting still in public without your phone reveals how little you can control what other people think of you. I don’t do this enough. There’s something about being alone in public that forces a different kind of attention than being alone at home.
  • If you think @Sentry isn’t serious about AI, I’d recommend… — David Cramer’s internal push for everyone at Sentry to go all-in on AI. Watching companies navigate this shift in real time.
  • ~/rbv/2025 — Raquel’s point about almost forgetting to enjoy herself struck me. Also her take that intellectual giftedness is less important than finding people who actually want to understand you. The whole review feels like someone recalibrating after a hard year.
  • Deconstructing cultural codes — Cowen’s approach is to learn as many cultural systems as possible (art, music, industries, religions) to understand how things actually work. The problem is cultural codes are multiplying faster than anyone can learn them. This feels relevant to how I think about understanding different parts of tech.
  • The Sauna Algorithm: Surviving Asynchrony Without a Clock — Using causality instead of time to coordinate. Wait for the person who arrived after you to leave, guaranteeing you stayed long enough. The sauna framing makes the distributed systems concept click in a way that formal definitions don’t. Events ordered by what caused what, not by clock time.
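The sauna rule above can be simulated with nothing but an ordered event log and no clock anywhere; the event format and function are my own sketch, not code from the post.

```python
# You may leave the sauna once someone who entered *after* you has
# left: their full stay is causally contained in yours, so you know
# you stayed "long enough" without ever consulting a clock.

def can_leave(log, me):
    """log is a sequence of ('enter'|'leave', person) events, in causal order."""
    seen_me = False
    entered_after_me = set()
    for event, person in log:
        if event == "enter" and person == me:
            seen_me = True
        elif event == "enter" and seen_me:
            entered_after_me.add(person)
        elif event == "leave" and person in entered_after_me:
            return True  # someone who arrived after me has already left
    return False

log = [("enter", "alice"), ("enter", "bob"), ("leave", "bob")]
print(can_leave(log, "alice"))  # prints: True  (bob arrived after alice and left)
print(can_leave(log, "bob"))    # prints: False (nobody arrived after bob yet)
```

The only thing the algorithm consumes is the order of events, which is the happens-before idea the entry is getting at.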
  • Distinguishing yourself early in your career as a developer — Honestly? This hit different. Phil breaks down the job market into three tiers and the advice that stuck: start local, skip the FAANG obsession early on, and—here’s what I appreciate most—write consistently about niche technical stuff. Not for clout, but because it’s genuinely one of the most effective ways to build credibility over time. The 6-12 month job search reality check is brutal but refreshing. Also, the idea that support or QA roles can be legitimate entry points into dev? That’s the kind of pragmatic wisdom people actually need.
  • You Are NOT Dumb, You Just Lack the Prerequisites — I’ve definitely read this before, but revisiting it at the start of the year feels intentional. The premise is simple yet powerful: struggling with hard concepts doesn’t mean you’re intellectually incapable, it means you’re missing the foundational pieces. What resonates is the author’s journey—150 days of going back to basics in math, systematically rebuilding understanding. It’s the kind of humility and persistence we need more of. The analogy about jumping into a video game at minimum level? Chef’s kiss. This is the energy I want to carry into 2026.
  • The Prison Of Financial Mediocrity — This thread captures something unsettling about where we are right now. A whole generation is effectively locked out of traditional wealth-building paths—no homeownership, no stable pensions, wages that don’t match cost of living. So what happens? People turn to high-variance bets like crypto, prediction markets, and sports betting because it feels like the only way to gain some agency over their financial future. The platforms and “hope sellers” profit regardless. It’s a depressing feedback loop where desperation meets exploitation. Hard to read, harder to ignore.
  • How to add two vectors, fast — When Txus writes about low-level optimization stuff, you stop and pay attention. This is a deep dive into CPU vs GPU performance for vector addition—concrete benchmarks, memory-bound vs compute-bound problems, the whole deal. The best part? Learning that throwing fancy optimizations at a memory-bound kernel barely moves the needle because data movement, not computation, is the bottleneck. It’s a reminder that profiling and understanding hardware constraints beats clever code tricks. If you’re into performance engineering or just curious about why GPUs work the way they do, this is worth your time.
  • How We Lost Communication to Entertainment — This one stings because it’s true. We don’t have social networks anymore, we have entertainment platforms optimized for engagement, not connection. The shift is subtle but devastating: people now accept lost messages, multiple accounts, algorithmic feeds that prioritize virality over trust. The author draws this generational line—older folks expecting reliability, younger users treating platforms like content feeds. What I appreciate is the refusal to chase critical mass. Instead: email, RSS, mailing lists, offline-first tools. It’s a smaller community, sure, but one built on actual communication. I feel seen.
  • Designing remote work at Linear — Linear’s approach to remote work feels refreshingly intentional. It’s not just “we allow WFH,” it’s designed around freedom, trust, and deep focus. Small autonomous teams (2-4 people), rotating project leadership, zero-bugs SLAs, quality Wednesdays, feature roasts before shipping—these aren’t just rituals, they’re guardrails for maintaining quality without micromanagement. The goalie rotation for handling unplanned work is clever. And honestly? The 10-year equity exercise windows and sabbatical provisions show they’re thinking long-term about retention. This is what remote-first looks like when you actually commit to it, not just tolerate it.
  • Engineers who won’t commit force bad decisions — This called me out a bit. The argument is sharp: when senior engineers stay non-committal in technical discussions, they’re not being careful or humble—they’re shifting the burden to less-informed teammates. Either junior devs end up guessing, or the loudest voice wins by default. The root cause? Fear of being publicly wrong. But here’s the thing: managers expect some calls to fail, especially on genuinely hard problems. The threshold is simple—if you have more context than others in the room, speak up. Caveats and hedging just create friction. It’s uncomfortable but necessary: taking a position, even with uncertainty, moves the team forward.
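The memory-bound point in the vector-addition entry is easy to make concrete with a roofline back-of-envelope. The hardware numbers below are illustrative assumptions of mine, not figures from the post.

```python
# Roofline back-of-envelope: c[i] = a[i] + b[i] on float32 does
# 1 FLOP per element but moves 12 bytes (2 loads + 1 store), so its
# arithmetic intensity is tiny and bandwidth, not compute, sets the ceiling.

def arithmetic_intensity(flops_per_elem, bytes_per_elem):
    return flops_per_elem / bytes_per_elem

ai = arithmetic_intensity(flops_per_elem=1, bytes_per_elem=12)

# Attainable FLOP/s = min(peak_compute, bandwidth * intensity).
peak_compute = 30e12  # 30 TFLOP/s, hypothetical GPU
bandwidth = 900e9     # 900 GB/s, hypothetical GPU

attainable = min(peak_compute, bandwidth * ai)
print(f"intensity  = {ai:.3f} FLOP/byte")
print(f"attainable = {attainable / 1e9:.0f} GFLOP/s "
      f"of {peak_compute / 1e12:.0f} TFLOP/s peak")
```

Under these assumed numbers the kernel can reach roughly 75 GFLOP/s out of 30 TFLOP/s of peak compute, about 0.25%, which is why clever compute-side tricks barely move the needle on a memory-bound kernel.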