A reading log that updates every Sunday (when I've read enough).
I save articles to my read-it-later app and write quick thoughts as I read. An automated task pulls everything I've read during the week, uses AI to review the content and expand my notes into something more coherent, then publishes them here.
#
- Nobody Gets Promoted for Simplicity — The Engineer A vs Engineer B comparison is painfully recognizable. A ships a 50-line solution in days, B builds an event-driven architecture over three weeks, and B gets the promotion because “designed scalable architecture” reads better in a review doc than “solved it simply.” The line that stuck: “anyone can add complexity. It takes experience and confidence to leave it out.” I’ve been on both sides of this — writing the simple solution and feeling like I had to apologize for it, and reviewing someone else’s overengineered system while thinking “this didn’t need to exist.” The interview culture point is sharp too: system design rounds actively punish straightforward answers by pushing “what about ten million users?” until you cave.
- Don’t become an Engineering Manager — Zaides used to argue that management experience was universally valuable. Now he’s reversed that position, and the reasons are hard to dismiss: Amazon increased its IC-to-manager ratio by 15%, Staff engineers out-earn EMs across the industry, and the pace of technical change makes stepping away from code genuinely risky. The detail about his friend getting Staff offers paying 20-30% more than his internal EM promotion says a lot about how the market has shifted. What I respect is that Zaides stays an EM anyway because he enjoys it — he’s not telling you to optimize for comp, he’s telling you to stop assuming the management track is the default next step.
- The stranger secret: how to talk to anyone – and why you should — The opening anecdote is what stays with me: an elderly woman on a nearly empty train asks to sit nearby, and they end up talking for 50 minutes. The author senses, without being told, that this woman is probably heading back to an empty house and needs to process her day out loud. Sometimes the most useful thing you can do is just not say no to proximity. Remote work has made me worse at this — I’ve optimized so hard for uninterrupted focus that I’ve forgotten what it’s like to let a random conversation happen.
- LLMs Are Good at SQL. We Gave Ours Terabytes of CI Logs. — The counterintuitive move here is denormalization: they stamp 48 metadata columns onto every single log line instead of normalizing into separate tables. It sounds wasteful until you realize ClickHouse compresses repeated values so aggressively that `commit_message` hits a 301:1 compression ratio. The agent behavior patterns are interesting too — across 8,534 sessions, they found agents don’t run one clever query, they investigate iteratively like a human would, starting broad then drilling in. Average investigation: 4.4 queries. The freshness point is the real takeaway: “did I break this, or was it already broken?” is the question that actually matters, and you can only answer it with current data.
- Leaving Google has actively improved my life — The most telling detail isn’t about Google’s products — it’s about distribution. Google pays Apple $20B annually to stay the default on iOS, and Chrome holds ~70% browser market share. People don’t choose Google, they just don’t un-choose it. The author’s point about search is the one I keep coming back to: switching to alternatives turned “Googling” back into “surfing the web,” reconnecting with actual discovery instead of algorithm-filtered answers. I’ve made similar switches and the difference is real, though I’ll admit YouTube is the one I can’t quit either.
- The Hunt for Dark Breakfast — This is the kind of thing the internet was made for. The author maps breakfast foods onto a simplex using milk, eggs, and flour ratios, identifies known clusters (the Pancake Local Group, the Egg Singularity), and discovers a gap — a theoretical breakfast that should exist but doesn’t. The breakthrough: IHOP adds pancake batter to omelettes, which means the “Dark Breakfast Abyss” is occupiable. There’s even a recipe: ¼ cup milk, 4 eggs, ½ cup flour, instructions unknown. Ending the whole thing with a Lovecraft quote about humanity’s inability to correlate all its contents is the perfect touch.
- The path to ubiquitous AI — The ENIAC parallel is the framing that makes the technical claims click: room-sized and impractical became ubiquitous through specialization, and Taalas is betting the same applies to inference. Hard-wiring a single model (Llama 3.1 8B) into custom silicon to get 17K tokens/second at 20x lower cost and 10x lower power is a bold trade-off — you lose flexibility entirely but gain numbers that would change what’s economically viable. The aggressive quantization (3-bit and 6-bit) introduces quality degradation, so the real question is whether “good enough but everywhere” beats “better but expensive.” They spent $30M of $200M raised with a 24-person team, which at least shows discipline.
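The denormalization result from the CI-logs piece is easier to believe with a toy model. A minimal sketch, assuming nothing about ClickHouse’s actual codecs — just run-length encoding a column where every row repeats the same metadata value:

```python
# Toy illustration (not ClickHouse's real compression): run-length encoding
# a denormalized column. Stamping the same commit_message onto every log
# line looks wasteful, but runs of repeated values collapse to almost nothing.

def rle_encode(column):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

# 10,000 log lines from one commit: the same metadata value on every row.
column = ["fix: handle null pointer in scheduler"] * 10_000
encoded = rle_encode(column)

raw_bytes = sum(len(v) for v in column)
encoded_bytes = sum(len(v) + 8 for v, _ in encoded)  # value + 8-byte count
print(f"runs: {len(encoded)}, ratio: {raw_bytes / encoded_bytes:.0f}:1")
```

The real system layers dictionary and general-purpose codecs on top, but the shape of the win is the same: repetition is nearly free, so you can afford to repeat.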
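The quantization trade-off in the Taalas piece can be made concrete. A toy sketch of uniform symmetric 3-bit quantization — not their actual scheme, which the post doesn’t detail — showing what squeezing weights into a handful of codes costs in precision:

```python
# Toy 3-bit symmetric quantization (not Taalas's actual scheme): map floats
# onto a few signed integer codes and back, to see the quality trade-off.

def quantize(weights, bits=3):
    """Uniform symmetric quantization: floats -> small signed ints + scale."""
    qmax = 2 ** (bits - 1) - 1            # 3 for 3-bit: codes in [-3, 3]
    scale = max(abs(w) for w in weights) / qmax
    codes = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

weights = [0.91, -0.42, 0.07, -0.88, 0.33]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(codes, f"max error: {max_err:.3f}")
```

Every weight now fits in 3 bits instead of 16 or 32, at the cost of a reconstruction error you hope the model tolerates — that’s the “good enough but everywhere” bet in miniature.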
#
- Hedonism and Entrepreneurship in Barcelona — A potential acquisition dies because a vegan exec unknowingly eats mayo on patatas bravas. A critical migration stalls at 69% while the engineer responsible plays ping-pong. Having lived the Barcelona startup scene myself, this one hit close. The absurdity is the point — startup success is so much more arbitrary than anyone wants to admit, and the line between “almost rich” and “back to Monday” is thinner than your Series A pitch deck suggests.
- OpenClaw, OpenAI and the future — After 13 years building PSPDFKit, Steinberger explicitly chose not to build another company. Instead he’s joining OpenAI and turning OpenClaw into a foundation. I keep thinking about the tension here: he wants to keep things open and independent, but the path to maximum impact runs through one of the most powerful closed labs in the world. Whether that trade-off holds up depends on how seriously OpenAI takes the foundation model. Time will tell.
- They’re putting blue food coloring in everything — It’s not about blue food, obviously. It’s about how unwanted things get normalized — first one restaurant does it, then “all the best restaurants” do it, then your friend sneaks it into homemade food and tells you you’re overreacting. The detail that got me is the protagonist saying “I just think it tastes weird” and being told “most people say it’s just fine.” I’ve had that exact conversation about too many things in tech.
- Why I joined OpenAI — The turning point wasn’t a demo or a benchmark — it was his hairstylist casually mentioning she uses ChatGPT all the time. She recognized ChatGPT as a brand more readily than Intel, where Gregg was a Fellow. That contrast says more about where computing impact has shifted than any industry report. The environmental framing is interesting too: at this scale, performance engineering isn’t just cost optimization, it’s resource consumption with real planetary consequences.
- GitHub Actions Is Slowly Killing Your Engineering Team — The fact that an entire cottage industry of startups exists solely to fix GitHub Actions’ slow runners tells you everything. The comparison to Internet Explorer is spot on — it wins because it ships with the thing, not because it’s good. I’ve felt the pain of debugging through four pages of loading spinners to find a failed step, and the escape hatch of “just write a bash script” that inevitably becomes 800 lines of unmaintainable CI logic. Default integration is a powerful moat, even for mediocre products.
- 2025 letter — Wang’s framing of the US economy as “a highly leveraged bet on deep learning” stuck with me. The Xiaomi vs Apple comparison is brutal: Xiaomi shipped an EV in 4 years while Apple spent 10 years and $10B before giving up entirely. His concept of Silicon Valley’s “soft Leninism” — groupthink disguised as meritocracy — is uncomfortable because it’s hard to argue against when you see how the industry moves in lockstep. Not sure I fully buy the symmetry he draws between SV and the CCP’s self-seriousness, but it made me think.
#
- My AI Adoption Journey — The most practical thing here is the “reproduce your own work” phase: doing tasks manually and then redoing them with agents. Painful, but it forces you to learn where agents actually help versus where you’re just cargo-culting. I also liked the idea of end-of-day agent sessions to get warm starts the next morning. What resonated most is his framing around craftsmanship rather than ideology — he doesn’t care if AI is here to stay, he just wants to do good work faster.
- Nobody cares — The examples hit hard because they’re so mundane: bike lanes designed to kill you, gym weights left unracked, dog waste everywhere. It’s not malice, it’s indifference, and that’s almost worse. I keep thinking about the author’s failed experiment of installing dog waste dispensers in his neighborhood hoping it would snowball into community care. It didn’t. The Japan comparison is interesting but I’m not sure “will to have nice things” is something you can transplant — it might be downstream of much deeper cultural structures.
- Hotpatching ClickHouse in production with XRay — The fact that you can inject log statements into a running ClickHouse instance via SQL is remarkable. `SYSTEM INSTRUMENT ADD LOG 'QueryMetricLog::startQuery' '...'` and it just works. The 4-7% binary size overhead with negligible runtime cost when inactive makes it a reasonable trade-off. This solves the one problem every production debugger knows: wishing you’d added one more log statement before deploying.
- I miss thinking hard. — The Builder vs Thinker framing put words to something I’ve been feeling. When AI gives you a 70% solution, rejecting it to think harder feels economically irrational — even though that struggle is where the real growth happens. The honest part is the ending: there’s no resolution. He doesn’t pretend to have figured out how to balance velocity with depth, and I respect that more than a tidy answer.
- Barefoot Data Platforms — This is the kind of pragmatism I appreciate: rip out Dagster and dbt, replace them with plain scripts that have metadata headers, and let agents iterate on isolated files. The constraint of “under 200 assets” is refreshingly honest about scope. Most data platform posts sell you a cathedral — this one says a well-organized shed is enough for most teams, and it probably is.
- My First Distributed System — The Pokémon cloning exploit as a distributed systems lesson is one of those analogies that actually holds up under scrutiny. Two Game Boys linked by a cable are a real distributed system with real partial failure modes. The proposed escrow state fix is textbook two-phase commit, and the trade-off is clear: you can’t clone Pokémon anymore, but you might lose both if the cable disconnects mid-trade. Classic distributed systems — pick your failure mode.
- ASCII characters are not pixels: a deep dive into ASCII rendering — I don’t know how I ended up reading this, but the core insight stuck: instead of mapping brightness to characters, you use 6-dimensional shape vectors to match character geometry to image regions. That’s why an `L` character gets placed where there’s an L-shaped edge in the image. The contrast enhancement technique — sampling into neighboring cells to push boundaries — is borrowed from cel-shading, which is a connection I wouldn’t have expected.
- The Design & Implementation of Sprites — Three design choices that make Sprites interesting: no container images (pre-staged base instances for instant creation), object storage for disks (durable state becomes “just a URL”), and inside-out orchestration (user code in an inner container, platform services in the root namespace). The storage stack using JuiceFS-style chunk splitting with local SQLite metadata is clever — it makes VM migration trivial since there’s no attached volume to move.
- WAL usage looks broken in modern Time Series Databases? — The core argument is that most TSDBs don’t actually fsync their WAL on every write, so the safety guarantee is weaker than you’d expect. Prometheus fsyncs WAL segments every 2 hours by default. Two hours. The author’s proposed alternative — buffer in memory and flush to SSTables — gives similar guarantees with less complexity. Though Ayende’s pushback is worth reading too: WAL implementations handle partial writes by rolling back incomplete transactions, which the article glosses over.
- Keeping 20,000 GPUs healthy — The number that stands out: GPU issues account for 58.7% of all unexpected problems in production. Modal has never seen a degraded CPU core, but GPUs fail constantly — thermal throttling at 94°C, uncorrectable ECC errors clustering in specific regions, 0.1% CUDA initialization flake rates on L4s. Their approach of not attempting recovery and just disposing of unhealthy hosts is pragmatic. GPU reliability is years behind CPU reliability, and this post makes that concrete.
- Standups — The argument that your teammates should already know what you did yesterday — through commits, PRs, and notifications — is one of those obvious truths that most teams ignore. I’ve been in standups that were pure “are you done yet?” rituals disguised as collaboration. The proposal to replace status reports with written updates and reserve meeting time for actual problem-solving is straightforward, but getting teams to actually make that switch requires someone willing to challenge the ceremony.
- Tuning Tansu: 600,000 record/s with 13MB of RAM — What I liked about this is the methodology: using a null storage engine to isolate protocol overhead from I/O. The biggest wins came from reducing allocations during serialization and fixing eager evaluation of `Option` and `Result` methods — small things that compound. Getting from 3.7s to 1.9s on codec benchmarks (a 49% improvement) before even touching storage shows how much performance gets left on the table in the protocol layer.
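The escrow fix from the Pokémon piece maps cleanly onto a two-phase commit sketch. All names here are hypothetical — this is the shape of the protocol, not the actual Game Boy link-cable one:

```python
# Sketch of the escrow-state fix as a two-phase trade (hypothetical names):
# each side moves its Pokemon into escrow (prepare), and only a final commit
# releases both. A crash before commit leaves both in escrow instead of
# ever producing a duplicate.

class Trader:
    def __init__(self, name, pokemon):
        self.name, self.party, self.escrow = name, [pokemon], None

    def prepare(self):
        self.escrow = self.party.pop()     # phase 1: move out of the party

    def commit(self, incoming):
        self.party.append(incoming)        # phase 2: receive the other side
        self.escrow = None

def trade(a, b):
    a.prepare(); b.prepare()               # both sides vote by escrowing
    sent_a, sent_b = a.escrow, b.escrow
    a.commit(sent_b); b.commit(sent_a)     # coordinator says "go"

red, blue = Trader("Red", "Mew"), Trader("Blue", "Pidgey")
trade(red, blue)
print(red.party, blue.party)
```

The failure mode swap is visible in the code: there is no state where the same Pokémon sits in two parties, but a disconnect between the two `commit` calls strands it in escrow — exactly the “pick your failure mode” trade-off.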
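The WAL article’s core complaint is easy to demonstrate. A toy append-only log, assuming nothing about any real TSDB’s internals, where only records written before the last fsync would survive a crash:

```python
# Toy sketch of the durability gap (not any real TSDB's WAL): appended
# records are only crash-safe after fsync, so a WAL that syncs every N
# writes can lose everything appended since the last sync.

import os
import tempfile

class Wal:
    def __init__(self, path, fsync_every):
        self.f = open(path, "ab")
        self.fsync_every = fsync_every
        self.pending = 0                   # records not yet fsynced

    def append(self, record: bytes):
        self.f.write(record + b"\n")
        self.pending += 1
        if self.pending >= self.fsync_every:
            self.f.flush()
            os.fsync(self.f.fileno())      # only now is the batch durable
            self.pending = 0

path = os.path.join(tempfile.mkdtemp(), "wal.log")
wal = Wal(path, fsync_every=100)
for i in range(250):
    wal.append(b"metric %d" % i)

# 200 records were fsynced; the last 50 sit in buffers a crash could lose.
print("records not yet durable:", wal.pending)
```

Scale `fsync_every` up to Prometheus’s two-hour default and the window stops being 50 records and becomes everything since lunch — which is the author’s point about the guarantee being weaker than it looks.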
#
- Automatic Programming — The distinction antirez makes here finally gave me language for something I’ve been fumbling with. It’s not about whether you use AI to write code—it’s about whether you’re steering or just prompting and hoping. His Redis example hit hard: the value wasn’t in technical novelty but in the contained vision. That maps to what I see working well versus the codebases that feel like they emerged from a chatbot fever dream.
- Email triage with an embedding-based classifier — This outperformed a fine-tuned GPT by 11 percentage points while being dramatically faster. The separation of concerns makes sense: embeddings handle “understand the email” while logistic regression handles “what does this user care about.” People keep defaulting to LLMs when something simpler would work better. Worth remembering that the expensive part doesn’t need to run every time.
- Efficient String Compression for Modern Database Systems — The insight that compression is primarily about query performance, not storage, reframes the whole tradeoff. Getting data to fit in L1 cache (1ns access) versus RAM fundamentally changes what operations cost. FSST’s approach of building a symbol table from sample data feels like the kind of clever-but-not-too-clever technique that actually ships.
- I made my own git — “Git is just a content-addressable file store” is one of those realizations that makes everything else click. What stuck with me is that parsing was harder than the actual version control logic. We treat Git like it’s complicated, but the core idea is almost trivial—it’s the interface that makes it feel like a black box.
- Online, Asynchronous Schema Change in F1 — The intermediate states approach is elegant: you can’t jump from no-index to index safely, but you can chain compatible transitions. Delete-only and write-only states let nodes migrate without corrupting data. This feels like the kind of solution that’s obvious after you see it but probably took years to figure out. Makes me think about what other distributed systems problems have similar chain-of-compatibility solutions.
- Why Senior Engineers Let Bad Projects Fail — “Being right and being effective are different” cuts through so much noise. The credibility-as-currency framing explains behavior I’ve seen but couldn’t articulate. You don’t get credit for disasters you prevent, only for the battles you pick and win. Still processing whether this is pragmatic wisdom or just resignation to broken systems.
- Slop is Everywhere For Those With Eyes to See — The 90-9-1 rule creates a structural problem: platforms need infinite content but only 1-3% of users create anything. Algorithms fill that gap with slop because engagement matters more than quality. The behavioral science point about effort and meaning landed—when everything is effortless to access, nothing feels valuable. I’ve been noticing this with technical content too, not just social media.
- How I estimate work — “Only the known work can be accurately estimated, but unknown work takes 90% of the time” explains why estimation always feels broken. The reframe that estimates are political negotiation tools, not technical predictions, matches every project I’ve seen. Managers arrive with timelines, engineers figure out what fits. Treating it as a prediction problem sets everyone up for disappointment.
- Scaling PostgreSQL to power 800 million ChatGPT users | OpenAI — The challenges they describe—connection pooling, read replica lag, vacuum tuning, lock contention—are exactly what you hit at high throughput. Nothing novel but it’s validating to see that even at ChatGPT scale, you’re fighting the same PostgreSQL battles. Sometimes the answer to “how do they do it?” is just “they do the same things, but more carefully.”
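The “Git is just a content-addressable file store” realization fits in a few lines. This follows Git’s actual object format as I understand it — a `blob <size>\0` header plus the content, SHA-1 hashed — with a dict standing in for `.git/objects`:

```python
# Git's content addressing in miniature: an object's ID is the SHA-1 of a
# small header plus the raw bytes, so identical content always gets the
# same ID and the object store is just {hash: bytes}.

import hashlib

store = {}

def hash_object(content: bytes, kind: str = "blob") -> str:
    data = f"{kind} {len(content)}\0".encode() + content
    oid = hashlib.sha1(data).hexdigest()
    store[oid] = data                      # "write" it to the object store
    return oid

oid = hash_object(b"hello world\n")
print(oid)  # should match: echo 'hello world' | git hash-object --stdin
```

Trees and commits are just more objects pointing at these IDs, which is why the version-control logic ends up being the easy part.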
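The F1 paper’s chain of compatible states can be sketched as a tiny state machine. Simplified to the index-state chain only — the real protocol also covers leases and reorganization — but the delete-only/write-only stepping stones are the idea:

```python
# Sketch of F1's intermediate index states (simplified): an index moves
# absent -> delete-only -> write-only -> public, and adjacent states are
# mutually compatible, so nodes one step apart can't corrupt the index.

STATES = ["absent", "delete-only", "write-only", "public"]

def allowed_ops(state):
    """Which index operations a node in this state may perform."""
    return {
        "absent":      set(),
        "delete-only": {"delete"},              # may remove index entries
        "write-only":  {"delete", "insert"},    # maintains, but never reads
        "public":      {"delete", "insert", "read"},
    }[state]

def step(state):
    i = STATES.index(state)
    assert i + 1 < len(STATES), "already public"
    return STATES[i + 1]

state = "absent"
while state != "public":
    state = step(state)
    print(state, sorted(allowed_ops(state)))
```

The compatibility argument is the clever part: a delete-only node never inserts entries a public node would expect to read, so no single-step disagreement produces a dangling or missing index entry.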
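The separation of concerns in the email-triage post can be sketched end to end. Everything here is a stand-in — the fake `embed` replaces a real embedding model and the data is invented — but the split of a frozen embedding step plus a tiny per-user classifier is the idea:

```python
# Sketch of embeddings + logistic regression for triage (toy stand-ins,
# not the post's actual model): embed once with a fixed function, then
# train a cheap per-user linear classifier on top.

import math

def embed(email: str):
    # Stand-in for a real embedding model: crude hand-made features.
    words = email.lower().split()
    return [
        1.0 if "invoice" in words else 0.0,
        1.0 if "unsubscribe" in words else 0.0,
        len(words) / 10.0,
    ]

def train(samples, labels, epochs=500, lr=0.5):
    """Plain logistic regression via gradient descent on log-loss."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = 1 / (1 + math.exp(-z)) - y          # prediction - label
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

emails = ["invoice attached please pay", "unsubscribe from this newsletter",
          "invoice overdue", "click unsubscribe below"]
labels = [1, 0, 1, 0]                               # 1 = important to this user
w, b = train([embed(e) for e in emails], labels)

x = embed("your invoice is ready")
p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
print(f"P(important) = {p:.2f}")
```

The economics fall out of the structure: the expensive embedding runs once per email, while retraining the per-user head is a few milliseconds of arithmetic — which is why this setup can beat reaching for an LLM on every message.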
#
- I’m addicted to being useful — This hit close. The idea that some of us are fundamentally wired to solve problems, and engineering just happens to fit that dysfunction perfectly. I’ve been thinking about whether my drive to build things comes from genuine interest or just this compulsion to be useful. His point about how this actually protects against burnout when you can satisfy it rings true.
- How ClickHouse handles strings — The overlapping memory reads technique is wild. Reading the same characters multiple times on purpose because it makes branch prediction work better. I keep coming back to how much performance work is about understanding what the hardware actually does, not what we think it should do.
- How to figure out what to do with your life — I’m at this exact crossroads. The thread didn’t load properly but the title alone captures where I am right now.
- Raising money fucked me up — The part about projecting expectations onto yourself hit hard. He realized his investors weren’t actually pressuring him, he was doing it to himself. I see this pattern in how I think about my own work. The shift from “what problem does this solve” to “how big does this feel” is something I need to watch for.
- Personal information firehose — A personal algorithm that learns what matters to you without manual filters. The idea feels right but the obstacles are real - training on small personal datasets instead of massive global ones, working across fragmented channels, dealing with restricted APIs. Still figuring out if this is technically possible or just wishful thinking.
- Don’t fall into the anti-AI hype — Antirez building a BERT inference library in pure C with AI assistance in hours instead of weeks made this concrete for me. The skill that matters is knowing what to build and how to guide the tool, not typing the code yourself. I’m still working through what this means for how I spend my time.
- The Unbearable Joy of Sitting Alone in A Café — The observation that you can’t control what other people think, revealed just by sitting still without your phone. I don’t do this enough. There’s something about being alone in public that forces a different kind of attention than being alone at home.
- If you think @Sentry isn’t serious about AI, I’d recommend… — David Cramer’s internal push for everyone at Sentry to go all-in on AI. Watching companies navigate this shift in real time.
- ~/rbv/2025 — Raquel’s point about almost forgetting to enjoy herself struck me. Also her take that intellectual giftedness is less important than finding people who actually want to understand you. The whole review feels like someone recalibrating after a hard year.
- Deconstructing cultural codes — Cowen’s approach is to learn as many cultural systems as possible - art, music, industries, religions - to understand how things actually work. The problem is cultural codes are multiplying faster than anyone can learn them. This feels relevant to how I think about understanding different parts of tech.
- The Sauna Algorithm: Surviving Asynchrony Without a Clock — Using causality instead of time to coordinate. Wait for the person who arrived after you to leave, guaranteeing you stayed long enough. The sauna framing makes the distributed systems concept click in a way that formal definitions don’t. Events ordered by what caused what, not by clock time.
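The sauna rule can be written down without any clock. A minimal sketch of my reading of it, with hypothetical names, using only the causal order of entry and departure events:

```python
# Sketch of the causal rule (my reading of the post, simplified): leave
# only after someone who entered after you has left. Their entire stay
# happened inside yours, so your stay lasted at least one full visit,
# with no clock involved.

def may_leave(my_entry, entries, departures):
    """entries/departures: lists of (person, causal_position) events."""
    for person, entered_at in entries:
        if entered_at > my_entry:                 # they arrived after me...
            if any(p == person for p, _ in departures):
                return True                       # ...and have already left
    return False

# Logical event log: positions are causal order, not wall-clock time.
entries    = [("me", 1), ("ana", 2), ("bo", 3)]
departures = [("ana", 4)]

print(may_leave(1, entries, departures))  # ana came after me and left
```

The “time spent” guarantee comes entirely from event ordering, which is the same trick logical clocks play in distributed systems: you reason about what happened-before what, never about minutes.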