Daniele Polencic

Thoughts on Kubernetes, TypeScript, software design, and AI.

Streaming Zod: How Tambo Actually Works

"Looks like they've hacked Zod to do validation on partial/streaming data. Very clever." by Colin Hacks (@colinhacks), creator of Zod.

Colin tweeted about Tambo, a React toolkit for generative UIs that streams structured data from LLMs into React components. He claims they found a way to use Zod for validating partial, streaming data.

My first thought was: how does that work?

Streaming schema validation seems to require a significant change to how Zod operates.

Maybe it would need something like a SAX-style JSON parser built into the schema?

I looked through the source code and realised there was a way to simplify it.

I also built a minimal demo to show how it works.
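The general trick can be sketched as two steps: repair the partial JSON buffer into something parseable, then validate only the fields that have arrived, as if every field in the schema were optional. The sketch below is my own illustration of that idea, not Tambo's actual code: the `repairPartialJson` helper and the toy `Schema` type stand in for a real partial-JSON parser and a real Zod schema.

```typescript
// Sketch: validate a streaming JSON buffer by (1) repairing the partial
// JSON and (2) checking only the keys present so far ("deep partial").
// The repair is deliberately naive: it closes an unterminated string,
// drops a trailing comma/colon, and closes unbalanced braces/brackets.

type Field = { name: string; type: "string" | "number" };
type Schema = Field[];

function repairPartialJson(buf: string): string {
  const closers: string[] = [];
  let inString = false;
  for (let i = 0; i < buf.length; i++) {
    const ch = buf[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') inString = true;
    else if (ch === "{") closers.push("}");
    else if (ch === "[") closers.push("]");
    else if (ch === "}" || ch === "]") closers.pop();
  }
  let out = buf;
  if (inString) out += '"';
  // drop a trailing comma or colon left by an incomplete key/value pair
  out = out.replace(/[,:]\s*$/, "");
  while (closers.length) out += closers.pop();
  return out;
}

// Returns the partially validated object, or null if the buffer is not
// parseable yet or a field that has arrived has the wrong type.
function validatePartial(
  buf: string,
  schema: Schema
): Record<string, unknown> | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(repairPartialJson(buf));
  } catch {
    return null; // not parseable yet; wait for more tokens
  }
  if (typeof parsed !== "object" || parsed === null) return null;
  const obj = parsed as Record<string, unknown>;
  for (const field of schema) {
    if (field.name in obj && typeof obj[field.name] !== field.type) {
      return null; // a field we've seen so far fails validation
    }
  }
  return obj;
}
```

With a real Zod schema you would get the same effect by making every field optional (so absent keys pass) and re-running `safeParse` on each repaired snapshot of the buffer.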

Software Is Cheap Now

I am the bottleneck now by Thorsten Ball (@thorstenball).

Thorsten shared a story about receiving a bug report on Slack. He took a screenshot, uploaded it to Codex, and had the fix completed in 5 minutes. The code looked solid, all tests passed, and he pushed.

Then he realised he was the bottleneck. The process could have gone directly from Slack to Codex to a review thread, without him in the middle.

His point is that ticketing, triage, and sprints exist because human engineers are costly and have limited time. If that goes away, the whole process needs to change.

I agree with the general idea, but saying "I'm the bottleneck" feels like an exaggeration.

Even if the LLM eventually becomes smarter than me, which seems likely, it still lacks morals, taste, and real-world consequences.

When a human ships a bad deployment, they worry about it afterward.

You’re not really a bottleneck; you’re the only one in the process who faces the consequences.

Humans will still be part of the process, maybe in a different role, but they’ll still be there.

Why Talking to LLMs Has Improved My Thinking

Why Talking to LLMs Has Improved My Thinking by Philip O'Toole, creator of rqlite (via HN).

Philip's thesis: LLMs help articulate tacit knowledge, the understanding we have but can't easily put into words. This isn't learning new things; it's recognition: mapping latent structure to language.

As programmers and developers, we build up a lot of understanding that never quite becomes explicit. This is not a failure. It is how experience operates. The brain compresses experience into patterns that are efficient for action, not for speech. Those patterns are real, but they are not stored in sentences.

This resonates. I already have the knowledge to solve most problems I encounter; I just can't always articulate the path. The LLM helps me find the words for what I already know.

The problem is that reflection, planning, and teaching all require language. If you cannot express an idea, you cannot easily inspect it or improve it.

Once an idea is written down, it becomes easier to work with. Vague intuitions turn into named distinctions. Implicit assumptions become visible. At that point you can test them, negate them, or refine them.

The other thing I've noticed: even when the LLM gets it wrong, the wrong answer itself helps distill the idea. You read its response and think "no, that's not quite it", and suddenly you know what it actually is.

This is not new. Writing has always done this for me. What is different is the speed.

Exactly. Writing has always been my tool for thinking, but it's slow. With an LLM, the loop between "I vaguely know this" and "now I can express it clearly" tightens dramatically.

It is improving the interface between my thinking and language. Since reasoning depends heavily on what one can represent explicitly, that improvement can feel like a real increase in clarity.

I hadn't paid attention to this framing before: the LLM as an interface improvement, not a knowledge source.