A reading log that updates every Sunday (when I've read enough).
I save articles to my read-it-later app and write quick thoughts as I read. An automated task pulls everything I've read during the week, uses AI to review the content and expand my notes into something more coherent, then publishes them here.
#
- My AI Adoption Journey — The most practical thing here is the “reproduce your own work” phase: doing tasks manually and then redoing them with agents. Painful, but it forces you to learn where agents actually help versus where you’re just cargo-culting. I also liked the idea of end-of-day agent sessions to get warm starts the next morning. What resonated most is his framing around craftsmanship rather than ideology — he doesn’t care if AI is here to stay, he just wants to do good work faster.
- Nobody cares — The examples hit hard because they’re so mundane: bike lanes designed to kill you, gym weights left unracked, dog waste everywhere. It’s not malice, it’s indifference, and that’s almost worse. I keep thinking about the author’s failed experiment of installing dog waste dispensers in his neighborhood hoping it would snowball into community care. It didn’t. The Japan comparison is interesting but I’m not sure “will to have nice things” is something you can transplant — it might be downstream of much deeper cultural structures.
- Hotpatching ClickHouse in production with XRay — The fact that you can inject log statements into a running ClickHouse instance via SQL is remarkable: `SYSTEM INSTRUMENT ADD LOG 'QueryMetricLog::startQuery' '...'` and it just works. The 4-7% binary size overhead with negligible runtime cost when inactive makes it a reasonable trade-off. This solves the one problem every production debugger knows: wishing you’d added one more log statement before deploying.
- I miss thinking hard. — The Builder vs Thinker framing put words to something I’ve been feeling. When AI gives you a 70% solution, rejecting it to think harder feels economically irrational — even though that struggle is where the real growth happens. The honest part is the ending: there’s no resolution. He doesn’t pretend to have figured out how to balance velocity with depth, and I respect that more than a tidy answer.
- Barefoot Data Platforms — This is the kind of pragmatism I appreciate: rip out Dagster and dbt, replace them with plain scripts that have metadata headers, and let agents iterate on isolated files. The constraint of “under 200 assets” is refreshingly honest about scope. Most data platform posts sell you a cathedral — this one says a well-organized shed is enough for most teams, and it probably is.
- My First Distributed System — The Pokémon cloning exploit as a distributed systems lesson is one of those analogies that actually holds up under scrutiny. Two Game Boys linked by a cable are a real distributed system with real partial failure modes. The proposed escrow state fix is textbook two-phase commit, and the trade-off is clear: you can’t clone Pokémon anymore, but you might lose both if the cable disconnects mid-trade. Classic distributed systems — pick your failure mode. I’ve sketched the escrow protocol in code after this list.
- ASCII characters are not pixels: a deep dive into ASCII rendering — I don’t know how I ended up reading this, but the core insight stuck: instead of mapping brightness to characters, you use 6-dimensional shape vectors to match character geometry to image regions. That’s why an `L` character gets placed where there’s an L-shaped edge in the image. The contrast enhancement technique — sampling into neighboring cells to push boundaries — is borrowed from cel-shading, which is a connection I wouldn’t have expected. There’s a toy version of the matching step after this list.
- The Design & Implementation of Sprites — Three design choices that make Sprites interesting: no container images (pre-staged base instances for instant creation), object storage for disks (durable state becomes “just a URL”), and inside-out orchestration (user code in an inner container, platform services in the root namespace). The storage stack using JuiceFS-style chunk splitting with local SQLite metadata is clever — it makes VM migration trivial since there’s no attached volume to move.
- WAL usage looks broken in modern Time Series Databases? — The core argument is that most TSDBs don’t actually fsync their WAL on every write, so the safety guarantee is weaker than you’d expect. Prometheus fsyncs WAL segments every 2 hours by default. Two hours. The author’s proposed alternative — buffer in memory and flush to SSTables — gives similar guarantees with less complexity. Though Ayende’s pushback is worth reading too: WAL implementations handle partial writes by rolling back incomplete transactions, which the article glosses over. There’s a minimal sketch of the fsync trade-off after this list.
- Keeping 20,000 GPUs healthy — The number that stands out: GPU issues account for 58.7% of all unexpected problems in production. Modal has never seen a degraded CPU core, but GPUs fail constantly — thermal throttling at 94°C, uncorrectable ECC errors clustering in specific regions, 0.1% CUDA initialization flake rates on L4s. Their approach of not attempting recovery and just disposing of unhealthy hosts is pragmatic. GPU reliability is years behind CPU reliability, and this post makes that concrete.
- Standups — The argument that your teammates should already know what you did yesterday — through commits, PRs, and notifications — is one of those obvious truths that most teams ignore. I’ve been in standups that were pure “are you done yet?” rituals disguised as collaboration. The proposal to replace status reports with written updates and reserve meeting time for actual problem-solving is straightforward, but getting teams to actually make that switch requires someone willing to challenge the ceremony.
- Tuning Tansu: 600,000 record/s with 13MB of RAM — What I liked about this is the methodology: using a null storage engine to isolate protocol overhead from I/O. The biggest wins came from reducing allocations during serialization and fixing eager evaluation of `Option` and `Result` methods — small things that compound. Getting from 3.7s to 1.9s on codec benchmarks (49% improvement) before even touching storage shows how much performance gets left on the table in the protocol layer. The eager-evaluation pitfall is illustrated after this list.
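
Some sketches for the entries above. First, the escrow fix from My First Distributed System. This is my own toy rendering of the idea, not code from the post: both sides move their Pokémon into escrow (prepare), and only once both acknowledge does the trade commit. A cable disconnect between the phases strands both Pokémon, which is exactly the failure mode the trade-off buys.

```python
# Toy two-phase trade between two linked Game Boys. The names and
# structure are illustrative, not the article's code.
class TradePartner:
    def __init__(self, name, pokemon):
        self.name = name
        self.party = [pokemon]
        self.escrow = None  # holds the Pokémon being traded away

    def prepare(self):
        """Phase 1: move the Pokémon out of the party into escrow."""
        self.escrow = self.party.pop()
        return True  # acknowledge the prepare

    def commit(self, received):
        """Phase 2: the trade is final; adopt the partner's Pokémon."""
        self.party.append(received)
        self.escrow = None


def trade(a, b, cable_ok=lambda: True):
    # Phase 1: both sides escrow. Cloning is now impossible, because
    # neither party list contains the traded Pokémon anymore.
    if not (a.prepare() and b.prepare()):
        return "aborted"
    sent_a, sent_b = a.escrow, b.escrow
    if not cable_ok():
        # The new failure mode: a disconnect between prepare and commit
        # loses access to both Pokémon instead of duplicating one.
        return "stuck in escrow"
    a.commit(sent_b)
    b.commit(sent_a)
    return "traded"


red, blue = TradePartner("Red", "Pikachu"), TradePartner("Blue", "Eevee")
print(trade(red, blue))                                    # traded
print(trade(TradePartner("A", "Mew"), TradePartner("B", "Ditto"),
            cable_ok=lambda: False))                       # stuck in escrow
```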
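
Next, the shape-vector matching from the ASCII rendering piece. The feature scheme below is invented for illustration; the article's actual 6-dimensional descriptors are more principled. The point is only that glyphs are chosen by geometric similarity, not brightness:

```python
# Toy shape-vector matching with a made-up 6-D feature space:
# (left edge, right edge, top edge, bottom edge, diagonal, fill).
import math

GLYPHS = {
    " ": (0, 0, 0, 0, 0, 0),
    "|": (1, 0, 0, 0, 0, 0.2),
    "_": (0, 0, 0, 1, 0, 0.2),
    "L": (1, 0, 0, 1, 0, 0.3),
    "/": (0, 0, 0, 0, 1, 0.3),
    "#": (1, 1, 1, 1, 0, 1.0),
}

def cell_features(cell):
    """Reduce a 2-D cell of brightness values (0..1) to the same 6-D space."""
    h, w = len(cell), len(cell[0])
    left   = sum(row[0] for row in cell) / h
    right  = sum(row[-1] for row in cell) / h
    top    = sum(cell[0]) / w
    bottom = sum(cell[-1]) / w
    diag   = sum(cell[i][int(i * w / h)] for i in range(h)) / h
    fill   = sum(sum(row) for row in cell) / (h * w)
    return (left, right, top, bottom, diag, fill)

def best_glyph(cell):
    f = cell_features(cell)
    return min(GLYPHS, key=lambda g: math.dist(GLYPHS[g], f))

# An L-shaped cell: bright left column and bottom row.
cell = [
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]
print(best_glyph(cell))  # "L": geometry, not brightness, picks the glyph
```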
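
For the WAL article, a minimal sketch (mine, not any TSDB's real code) of why the fsync cadence is the whole story: `write()` only reaches the OS page cache, and the data is durable only after `fsync()`, so the crash window equals the fsync interval.

```python
# Minimal WAL append sketch. write() lands in the page cache; only
# os.fsync() makes the record survive a crash. Skipping the fsync is
# where the speed comes from, and where the guarantee quietly goes.
import os
import time

class Wal:
    def __init__(self, path, fsync_interval_s):
        self.f = open(path, "ab")
        self.fsync_interval_s = fsync_interval_s
        self.last_fsync = time.monotonic()

    def append(self, record: bytes):
        self.f.write(record + b"\n")   # durable only up to the page cache
        now = time.monotonic()
        if now - self.last_fsync >= self.fsync_interval_s:
            self.f.flush()
            os.fsync(self.f.fileno())  # now it is actually on disk
            self.last_fsync = now

# fsync_interval_s=0 fsyncs every write (safe, slow); a large interval
# is the "fast" mode whose crash window is the interval itself.
wal = Wal("/tmp/toy.wal", fsync_interval_s=0)
wal.append(b"cpu,host=a value=0.64")
```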
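
And the Tansu eager-evaluation fix, which I take to be Rust's `unwrap_or(expensive())` vs `unwrap_or_else(|| expensive())` distinction: the former evaluates its argument even when the value is present. Python has the same trap; a quick illustration (my analogy, not the post's code):

```python
# The default expression is evaluated eagerly, even when the key exists
# and the default is never used -- the same shape as unwrap_or in Rust.
def expensive_default():
    print("allocating...")  # imagine a large allocation per record
    return []

d = {"k": [1, 2, 3]}

v1 = d.get("k", expensive_default())              # prints "allocating..."
v2 = d["k"] if "k" in d else expensive_default()  # deferred: no print
```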
#
- Automatic Programming — The distinction antirez makes here finally gave me language for something I’ve been fumbling with. It’s not about whether you use AI to write code—it’s about whether you’re steering or just prompting and hoping. His Redis example hit hard: the value wasn’t in technical novelty but in the contained vision. That maps to what I see working well versus the codebases that feel like they emerged from a chatbot fever dream.
- Email triage with an embedding-based classifier — This outperformed a fine-tuned GPT by 11 percentage points while being dramatically faster. The separation of concerns makes sense: embeddings handle “understand the email” while logistic regression handles “what does this user care about.” People keep defaulting to LLMs when something simpler would work better. Worth remembering that the expensive part doesn’t need to run every time. A sketch of the two-stage split is after this list.
- Efficient String Compression for Modern Database Systems — The insight that compression is primarily about query performance, not storage, reframes the whole tradeoff. Getting data to fit in L1 cache (1ns access) versus RAM fundamentally changes what operations cost. FSST’s approach of building a symbol table from sample data feels like the kind of clever-but-not-too-clever technique that actually ships. There’s a toy symbol-table example after this list.
- I made my own git — “Git is just a content-addressable file store” is one of those realizations that makes everything else click. What stuck with me is that parsing was harder than the actual version control logic. We treat Git like it’s complicated, but the core idea is almost trivial—it’s the interface that makes it feel like a black box. The trick is small enough to show in full; see the snippet after this list.
- Online, Asynchronous Schema Change in F1 — The intermediate states approach is elegant: you can’t jump from no-index to index safely, but you can chain compatible transitions. Delete-only and write-only states let nodes migrate without corrupting data. This feels like the kind of solution that’s obvious after you see it but probably took years to figure out. Makes me think about what other distributed systems problems have similar chain-of-compatibility solutions. The state chain is sketched after this list.
- Why Senior Engineers Let Bad Projects Fail — “Being right and being effective are different” cuts through so much noise. The credibility-as-currency framing explains behavior I’ve seen but couldn’t articulate. You don’t get credit for disasters you prevent, only for the battles you pick and win. Still processing whether this is pragmatic wisdom or just resignation to broken systems.
- Slop is Everywhere For Those With Eyes to See — The 90-9-1 rule creates a structural problem: platforms need infinite content but only 1-3% of users create anything. Algorithms fill that gap with slop because engagement matters more than quality. The behavioral science point about effort and meaning landed—when everything is effortless to access, nothing feels valuable. I’ve been noticing this with technical content too, not just social media.
- How I estimate work — “Only the known work can be accurately estimated, but unknown work takes 90% of the time” explains why estimation always feels broken. The reframe that estimates are political negotiation tools, not technical predictions, matches every project I’ve seen. Managers arrive with timelines, engineers figure out what fits. Treating it as a prediction problem sets everyone up for disappointment.
- Scaling PostgreSQL to power 800 million ChatGPT users | OpenAI — The challenges they describe—connection pooling, read replica lag, vacuum tuning, lock contention—are exactly what you hit at high throughput. Nothing novel but it’s validating to see that even at ChatGPT scale, you’re fighting the same PostgreSQL battles. Sometimes the answer to “how do they do it?” is just “they do the same things, but more carefully.”
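
A few sketches for the entries above. First, the embedding-plus-logistic-regression split from the email triage post. The embedding function here is a fake stand-in (any sentence-embedding model would slot in); the pipeline shape, not the model, is the point:

```python
# Two-stage triage sketch: embeddings do "understand the email",
# logistic regression does "what does this user care about".
# Not the author's pipeline; embed() is a fake stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(texts):
    """Stand-in for a real sentence-embedding model; returns a
    deterministic fake vector per text so the example runs offline."""
    return np.stack([
        np.random.default_rng(abs(hash(t)) % 2**32).normal(size=384)
        for t in texts
    ])

# Per-user training data: past emails and how the user triaged them.
emails = ["Your invoice is attached",
          "Team standup moved to 10am",
          "WIN A FREE CRUISE!!!",
          "Production alert: p99 latency over budget"]
labels = ["archive", "archive", "spam", "important"]

clf = LogisticRegression(max_iter=1000).fit(embed(emails), labels)

# Inference is one embedding call plus a linear model: the expensive
# part never has to run per email.
print(clf.predict(embed(["Pager: disk 95% full on db-3"])))
```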
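
Next, the FSST idea in miniature. Real FSST trains up to 255 variable-length symbols with a far cleverer loop; this toy (entirely mine) just shows the shape of it: learn frequent substrings from a sample, then encode by substitution:

```python
# Toy symbol-table compression in the spirit of FSST, nowhere near
# the real thing.
from collections import Counter

def train_table(sample, max_symbols=8, sym_len=4):
    """Pick the most frequent fixed-length substrings from a sample."""
    counts = Counter(sample[i:i + sym_len]
                     for i in range(len(sample) - sym_len + 1))
    return [s for s, _ in counts.most_common(max_symbols)]

def compress(text, table):
    out, i = [], 0
    while i < len(text):
        for code, sym in enumerate(table):
            if text.startswith(sym, i):
                out.append(chr(0xE000 + code))  # 1 token replaces 4 chars
                i += len(sym)
                break
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

urls = "http://a.com/x http://b.com/y http://c.com/z"
table = train_table(urls)
packed = compress(urls, table)
print(len(urls), "->", len(packed))  # shorter, and still cheap to scan
```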
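
The git entry deserves the actual trick, because it really is this small. This is git's real object format (not a simplification): an object is `<type> <size>\0<content>`, its ID is the SHA-1 of that byte string, and it's stored zlib-compressed under `.git/objects/`:

```python
# Git's content-addressable core, end to end.
import hashlib
import zlib

def store_bytes(content: bytes) -> bytes:
    """The object is this header + content, zlib-compressed on disk."""
    return b"blob %d\x00" % len(content) + content

data = store_bytes(b"hello\n")
oid = hashlib.sha1(data).hexdigest()

print(oid)  # ce013625030ba8dba906f756967f9e9ca394464a
            # same as: echo 'hello' | git hash-object --stdin
print(f".git/objects/{oid[:2]}/{oid[2:]}")  # where the object lives...
blob_on_disk = zlib.compress(data)          # ...as exactly these bytes
```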
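
Finally, the F1 schema-change chain. The four states come from the paper (absent, delete-only, write-only, public); the compatibility check below is my own minimal rendering of the invariant that any two servers may be at most one state apart:

```python
# F1's intermediate states for adding an index. The invariant: at any
# moment all servers are within one adjacent state of each other, and
# every adjacent pair is mutually safe.
STATES = ["absent", "delete-only", "write-only", "public"]

def compatible(a: str, b: str) -> bool:
    return abs(STATES.index(a) - STATES.index(b)) <= 1

assert compatible("delete-only", "write-only")  # safe to coexist
assert not compatible("absent", "write-only")   # the jump that corrupts
# Why delete-only exists: servers in that state remove index entries
# when rows are deleted (so neighbors' entries never go stale) without
# yet adding entries on insert.
```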
#
- I’m addicted to being useful — This hit close. The idea that some of us are fundamentally wired to solve problems, and engineering just happens to fit that dysfunction perfectly. I’ve been thinking about whether my drive to build things comes from genuine interest or just this compulsion to be useful. His point that being able to satisfy the compulsion is exactly what protects against burnout rings true.
- How ClickHouse handles strings — The overlapping memory reads technique is wild. Reading the same characters multiple times on purpose because it makes branch prediction work better. I keep coming back to how much performance work is about understanding what the hardware actually does, not what we think it should do.
- How to figure out what to do with your life — I’m at this exact crossroads. The thread didn’t load properly but the title alone captures where I am right now.
- Raising money fucked me up — The part about projecting expectations onto yourself hit hard. He realized his investors weren’t actually pressuring him, he was doing it to himself. I see this pattern in how I think about my own work. The shift from “what problem does this solve” to “how big does this feel” is something I need to watch for.
- Personal information firehose — A personal algorithm that learns what matters to you without manual filters. The idea feels right but the obstacles are real - training on small personal datasets instead of massive global ones, working across fragmented channels, dealing with restricted APIs. Still figuring out if this is technically possible or just wishful thinking.
- Don’t fall into the anti-AI hype — Antirez building a BERT inference library in pure C with AI assistance in hours instead of weeks made this concrete for me. The skill that matters is knowing what to build and how to guide the tool, not typing the code yourself. I’m still working through what this means for how I spend my time.
- The Unbearable Joy of Sitting Alone in A Café — The observation that you can’t control what other people think of you, and that this only becomes clear when you sit still without your phone. I don’t do this enough. There’s something about being alone in public that forces a different kind of attention than being alone at home.
- If you think @Sentry isn’t serious about AI, I’d recommend… — David Cramer’s internal push for everyone at Sentry to go all-in on AI. Watching companies navigate this shift in real time.
- ~/rbv/2025 — Raquel’s point about almost forgetting to enjoy herself struck me. Also her take that intellectual giftedness is less important than finding people who actually want to understand you. The whole review feels like someone recalibrating after a hard year.
- Deconstructing cultural codes — Cowen’s approach is to learn as many cultural systems as possible - art, music, industries, religions - to understand how things actually work. The problem is cultural codes are multiplying faster than anyone can learn them. This feels relevant to how I think about understanding different parts of tech.
- The Sauna Algorithm: Surviving Asynchrony Without a Clock — Using causality instead of time to coordinate. Wait for the person who arrived after you to leave, guaranteeing you stayed long enough. The sauna framing makes the distributed systems concept click in a way that formal definitions don’t. Events ordered by what caused what, not by clock time.
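
The rule is small enough to write down. This is my sketch of the causal trick, not the post’s code: with nothing but an ordered log of arrivals and departures, “someone who arrived after me has left” proves my stay strictly contains theirs, and no clock ever appears:

```python
# Causal ordering only: events are a sequence, no timestamps anywhere.
log = []  # append-only event log of ("arrive" | "leave", person)

def arrive(p): log.append(("arrive", p))
def leave(p): log.append(("leave", p))

def may_leave(me: str) -> bool:
    """True once someone who arrived after me has already left:
    their entire stay happened inside mine."""
    my_arrival = log.index(("arrive", me))
    later = {p for kind, p in log[my_arrival + 1:] if kind == "arrive"}
    departed = {p for kind, p in log if kind == "leave"}
    return bool(later & departed)

arrive("me")
arrive("newcomer")
print(may_leave("me"))  # False: nobody who came after me has left yet
leave("newcomer")
print(may_leave("me"))  # True: their whole stay fits inside mine
```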
#
- Distinguishing yourself early in your career as a developer — Honestly? This hit different. Phil breaks down the job market into three tiers and the advice that stuck: start local, skip the FAANG obsession early on, and—here’s what I appreciate most—write consistently about niche technical stuff. Not for clout, but because it’s genuinely one of the most effective ways to build credibility over time. The 6-12 month job search reality check is brutal but refreshing. Also, the idea that support or QA roles can be legitimate entry points into dev? That’s the kind of pragmatic wisdom people actually need.
- You Are NOT Dumb, You Just Lack the Prerequisites — I’ve definitely read this before, but revisiting it at the start of the year feels intentional. The premise is simple yet powerful: struggling with hard concepts doesn’t mean you’re intellectually incapable, it means you’re missing the foundational pieces. What resonates is the author’s journey—150 days of going back to basics in math, systematically rebuilding understanding. It’s the kind of humility and persistence we need more of. The analogy about jumping into a video game at minimum level? Chef’s kiss. This is the energy I want to carry into 2026.
- The Prison Of Financial Mediocrity — This thread captures something unsettling about where we are right now. A whole generation is effectively locked out of traditional wealth-building paths—no homeownership, no stable pensions, wages that don’t match cost of living. So what happens? People turn to high-variance bets like crypto, prediction markets, and sports betting because it feels like the only way to gain some agency over their financial future. The platforms and “hope sellers” profit regardless. It’s a depressing feedback loop where desperation meets exploitation. Hard to read, harder to ignore.
- How to add two vectors, fast — When Txus writes about low-level optimization stuff, you stop and pay attention. This is a deep dive into CPU vs GPU performance for vector addition—concrete benchmarks, memory-bound vs compute-bound problems, the whole deal. The best part? Learning that throwing fancy optimizations at a memory-bound kernel barely moves the needle because data movement, not computation, is the bottleneck. It’s a reminder that profiling and understanding hardware constraints beats clever code tricks. If you’re into performance engineering or just curious about why GPUs work the way they do, this is worth your time. There’s a quick back-of-envelope bandwidth check after this list.
- How We Lost Communication to Entertainment — This one stings because it’s true. We don’t have social networks anymore, we have entertainment platforms optimized for engagement, not connection. The shift is subtle but devastating: people now accept lost messages, multiple accounts, algorithmic feeds that prioritize virality over trust. The author draws this generational line—older folks expecting reliability, younger users treating platforms like content feeds. What I appreciate is the refusal to chase critical mass. Instead: email, RSS, mailing lists, offline-first tools. It’s a smaller community, sure, but one built on actual communication. I feel seen.
- Designing remote work at Linear — Linear’s approach to remote work feels refreshingly intentional. It’s not just “we allow WFH,” it’s designed around freedom, trust, and deep focus. Small autonomous teams (2-4 people), rotating project leadership, zero-bugs SLAs, quality Wednesdays, feature roasts before shipping—these aren’t just rituals, they’re guardrails for maintaining quality without micromanagement. The goalie rotation for handling unplanned work is clever. And honestly? The 10-year equity exercise windows and sabbatical provisions show they’re thinking long-term about retention. This is what remote-first looks like when you actually commit to it, not just tolerate it.
- Engineers who won’t commit force bad decisions — This called me out a bit. The argument is sharp: when senior engineers stay non-committal in technical discussions, they’re not being careful or humble—they’re shifting the burden to less-informed teammates. Either junior devs end up guessing, or the loudest voice wins by default. The root cause? Fear of being publicly wrong. But here’s the thing: managers expect some calls to fail, especially on genuinely hard problems. The threshold is simple—if you have more context than others in the room, speak up. Caveats and hedging just create friction. It’s uncomfortable but necessary: taking a position, even with uncertainty, moves the team forward.
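
For the vector-addition post, you can feel the memory wall even from NumPy on a CPU. This is my back-of-envelope check, not the post’s benchmark: `c = a + b` does n additions but moves 3·n·4 bytes, and the implied GB/s lands near your RAM bandwidth, nowhere near the core’s arithmetic ceiling:

```python
# Rough bandwidth check for c = a + b (float32, CPU, NumPy).
import time
import numpy as np

n = 100_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
c = np.empty_like(a)

start = time.perf_counter()
np.add(a, b, out=c)      # preallocated output: no allocation noise
elapsed = time.perf_counter() - start

bytes_moved = 3 * n * 4  # read a, read b, write c
print(f"{elapsed:.3f}s -> {bytes_moved / elapsed / 1e9:.1f} GB/s")
# That number tracks your RAM's spec sheet. The 0.1 GFLOP of adds
# would take a few milliseconds if compute were the limit, so clever
# kernel tricks barely move the needle: data movement is the cost.
```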
#
- How uv got so fast — This one hit different because everyone’s been saying “uv is fast because Rust” but the real story is way more interesting. It’s fast because of what it doesn’t do—no legacy .egg support, no bytecode compilation, no permissive spec violations. The timing was perfect: PEP 658 landed on PyPI in May 2023, giving direct access to package metadata, and uv launched February 2024. Honestly? Most of the speedups (parallel downloads, global cache, the PubGrub resolver) could be added to pip without touching Rust. The language helps with zero-copy deserialization and threading, sure, but the real win is having the courage to say “we’re not supporting that old stuff anymore.” It’s a reminder that sometimes architectural decisions matter way more than implementation language.
- My role as a founder CTO: Year Eight — Miguel got offered $500M for RevenueCat and turned it down. That decision alone makes this worth reading, but what I appreciate most is how raw he is about the oscillation between conviction and doubt. His wife told him to keep going, viewing it as their shared legacy, and that reframed everything. The rest is a founder CTO doing founder CTO things at scale—50 flights, still doing 40 interviews a year because hiring is “the highest leverage activity,” creating this Office of the CTO team for zero-to-one work. He admits his three biggest mistakes openly: wasting energy on a VP of Engineering search, letting hiring velocity stall mid-year, and moving people to new initiatives before stabilizing existing ones. What stuck with me though is that after all this growth and near-exit, he’s still fundamentally a builder who can’t imagine stopping. That kind of clarity about who you are is rare.
- Package managers keep using git as a database, it never works out — The pattern is almost comical at this point: use git as your package registry because it’s convenient and free, watch it break spectacularly as you scale, then quietly migrate to actual HTTP APIs. Cargo had users stuck on “delta resolution” forever, Homebrew’s .git folders hit 1GB, CocoaPods took minutes just to clone. The best part? They all solved it the same way—keep git internally for governance workflows, but serve metadata over HTTP/CDN to users. What I appreciate about this piece is how clearly it explains why git fails: it’s missing CHECK constraints, UNIQUE constraints, proper locking, query indexes, all the things actual databases have. Plus filesystem limits like Windows’ 260-character path restriction and case-sensitivity mismatches. It’s a textbook case of picking a tool that solves your immediate problem (version control + hosting) while ignoring what you actually need (a queryable database with performance guarantees).
- Maybe the Default Settings Are Too High — David Cain reads Lord of the Rings out loud at triple-slow pace, pausing after commas, and gets more out of it than speed-reading ever gave him. Same with eating—half speed, smaller portions, more pleasure. The insight that landed for me: when we rush to get to the “good stuff” faster, we actually guarantee we’ll miss it entirely. Our sensory systems need time to propagate the experience, but modern life has conditioned us toward empty, surface-level rewards because we treat everything as disposable. The paradox is brutal: we have infinite books and snacks available, so we unconsciously devalue each one by consuming it too quickly. Slowing down doesn’t just make experiences richer, it naturally redirects you toward substantive things because cheap chocolate and TikTok videos don’t reward patience. This meshes with something I’ve been feeling about how I read technical articles—scanning for bullet points instead of letting ideas settle.
- Faster Rust builds on Mac — I’ve been hitting this without realizing what it was. Every build script and test binary triggers XProtect’s malware scan the first time it runs, and since Rust compiles fresh binaries constantly, you’re basically waiting for a single-threaded security daemon to approve every executable. The author shows build scripts going from 3.88 seconds down to 0.14 seconds, and a test suite dropping from 9m42s to 3m33s, just by adding Terminal to the developer tools list. That’s a massive win for a one-time settings change. The honest trade-off discussion is what I appreciate here—this disables an OS security feature, so you’re choosing speed over protection. For personal dev machines where you control what code you’re running, that’s probably fine. For shared or work machines, maybe not. Either way, knowing this exists beats suffering through slow builds and blaming Rust.
- Write to escape your default setting — This connects to the previous article about slowing down, but focused on thinking instead of consuming. Our minds operate in “perpetual approximation mode”—jumping between shiny fragments, never settling long enough to go deep. Writing breaks that pattern by forcing you to create coherence on paper, which immediately exposes the gaps between what you think you know and what you actually understand. The Francis Bacon line hits: “reading maketh a full man… writing maketh an exact man.” What I appreciate is the permission to write fast and sloppy—the point isn’t polished prose, it’s getting the muddy bottom of your thoughts visible so you can examine it. This is why I keep coming back to writing notes and posts even when it feels inefficient. It’s not about producing content, it’s about extending my working memory beyond what I can hold internally and discovering what I actually believe.
- You’re Not Burnt Out. You’re Existentially Starving. — Neil Thanedar argues that burnout is often misdiagnosed—what we’re actually experiencing is Viktor Frankl’s “existential vacuum,” a profound absence of purpose despite material comfort. The argument cuts through the hustle culture vs. anti-hustle binary: working 100+ hours a week isn’t inherently problematic if those hours align with genuine purpose, but 40 hours of meaningless tasks will destroy you. What resonates is his challenge to reconnect with childhood dreams before self-doubt kicked in, then build your entire life around that direction rather than just optimizing leisure time. His own story—abandoning astronaut/president dreams in middle school, spending 15+ years in tech, finally embracing political engagement—shows this isn’t about sudden epiphanies but deliberate realignment. The “start small” advice matters: volunteer one hour weekly for something you believe in, don’t wait for perfect timing. Honestly? This reframes my own frustrations with leadership work as potentially a purpose problem, not a workload problem.
- You are not the code — Txus spent weeks building an elegant Clojure query language for power users, demoed it to his manager Manish, got told not to ship it, and then just… deleted the branch. The relief he felt in that moment revealed something profound: he’d been fusing his self-worth with his code output, making every criticism feel personal and every failure diminishing. The realization that “the code is an artifact, a byproduct of an ongoing process” freed him—the two weeks weren’t wasted because the learning remained even after the deletion. This hits hard for anyone who’s had technically brilliant work rejected for product/team alignment reasons. The maturity here is recognizing that being right about the technical solution doesn’t mean it’s the right solution. Your value comes from growth and capability, not from lines of code surviving in the repo. I’ve definitely struggled with this when features I built got deprecated or rewritten, feeling like it invalidated the work, when actually it just meant we learned enough to do something better.