Daniele Polencic

Thoughts on Kubernetes, TypeScript, software design, and AI.

World Book Day Props: Building with Claude

For World Book Day, my daughters needed to dress up as brave characters from their favorite books.

One chose Paddington, and the other went with Skye from Paw Patrol.

My wife handled the costumes, while I handled the props.

Paddington needed a suitcase, and Skye needed a jetpack with wings that could fold out, just like Buzz Lightyear.

After about twenty minutes of prompting, I had two design tools ready, complete with sliders, live previews, and downloadable PDF templates for cutting.

The Specification Is the Product Now

The Software Development Lifecycle Is Dead by Boris Tane.

The argument: AI agents haven't merely accelerated the software development lifecycle (SDLC).

They've collapsed it.

Requirements, design, implementation, testing, code review, deployment, and monitoring — those used to be separate phases.

Now the loop is shorter: intent, agent builds and deploys, observe, repeat.

The new skill is context engineering, and the new safety net is observability.

I find the article compelling in parts.

I think it's wrong in the most important part.

You Can't Hide a Secret from a Process That Runs as You

I have a handful of CLI tools I built for myself:

  • gmailctl searches and drafts emails.
  • gdrivectl reads and edits Google Docs.
  • transcriber processes podcast, interviews and announcements for KubeFM.

They all stored their OAuth credentials (the tokens that let them act on my behalf with Google) in plaintext JSON files on disk, the same way the AWS CLI stores credentials in ~/.aws/credentials.

This worked fine until I started using an AI coding agent.

One day, I asked the agent to download an attachment from an email.

My gmailctl could search for and draft emails, but it lacked a command to download attachments.

Instead of telling me, the agent went looking.

It found the config file, read the OAuth credentials, and called the Gmail API directly.

It got the attachment.

But it also pasted my refresh token (a long-lived key that can generate new access tokens indefinitely) into the chat history, which gets sent to the AI provider.

And it bypassed every control my CLI was supposed to enforce.

Nobody tricked the agent into doing this.

It just wanted to finish the job, and going around my tool was the fastest path.

Streaming Zod: How Tambo Actually Works

Looks like they've hacked Zod to do validation on partial/streaming data. Very clever. by Colin Hacks (@colinhacks), creator of Zod.

Colin tweeted about Tambo, a React toolkit for generative UIs that streams structured data from LLMs into React components. He claims they found a way to use Zod for validating partial, streaming data.

My first thought was: how does that work?

Streaming schema validation seems to require a significant change to how Zod operates.

Maybe it would need something like a SAX-style JSON parser built into the schema?

I looked through the source code and realized there was a way to simplify it.

I also built a minimal demo to show how it works.

Software Is Cheap Now

I am the bottleneck now by Thorsten Ball (@thorstenball).

Thorsten shared a story about receiving a bug report on Slack. He took a screenshot, uploaded it to Codex, and had the fix completed in 5 minutes. The code looked solid, all tests passed, and he pushed.

Then he realised he was the bottleneck. The process could have gone directly from Slack to Codex to a review thread, without him in the middle.

His point is that ticketing, triage, and sprints exist because human engineers are costly and have limited time. If that goes away, the whole process needs to change.

I agree with the general idea, but saying "I'm the bottleneck" feels like an exaggeration.

Even if the LLM eventually becomes smarter than me, which seems likely, it still lacks morals, taste, and real-world consequences.

When a human ships a bad deployment, they worry about it afterward.

You’re not really a bottleneck; you’re the only one in the process who faces the consequences.

Humans will still be part of the process, maybe in a different role, but they’ll still be there.

Why Talking to LLMs Has Improved My Thinking

Why Talking to LLMs Has Improved My Thinking by Philip O'Toole, creator of rqlite (via HN).

Philip's thesis: LLMs help articulate tacit knowledge, the understanding we have but can't easily put into words. This isn't learning new things, it's recognition: mapping latent structure to language.

As programmers and developers, we build up a lot of understanding that never quite becomes explicit.This is not a failure. It is how experience operates. The brain compresses experience into patterns that are efficient for action, not for speech. Those patterns are real, but they are not stored in sentences.

This resonates. I already have the knowledge to solve most problems I encounter, I just can't always articulate the path. The LLM helps me find the words for what I already know.

The problem is that reflection, planning, and teaching all require language. If you cannot express an idea, you cannot easily inspect it or improve it.

Once an idea is written down, it becomes easier to work with. Vague intuitions turn into named distinctions. Implicit assumptions become visible. At that point you can test them, negate them, or refine them.

The other thing I've noticed: even when the LLM gets it wrong, the reaction from being wrong helps distill the idea. You read its response and think "no, that's not quite it"—and suddenly you know what it actually is.

This is not new. Writing has always done this for me. What is different is the speed.

Exactly. Writing has always been my tool for thinking, but it's slow. With an LLM, the loop between "I vaguely know this" and "now I can express it clearly" tightens dramatically.

It is improving the interface between my thinking and language. Since reasoning depends heavily on what one can represent explicitly, that improvement can feel like a real increase in clarity.

I hadn't paid attention to this framing before—the LLM as an interface improvement, not a knowledge source.