
The Compounding Value of 'Useless' Tinkering

Something I’ve been thinking about recently: the compounding value of “useless” tinkering.

For years I was messing around with tools that never quite produced anything. Obsidian vaults that got abandoned. n8n workflows that automated nothing important. Half-finished scripts. API integrations that worked but solved no real problem.

By any reasonable measure, most of it was wasted time.

Then LLMs showed up and suddenly all of it clicked. Not because AI taught me these things. I already half-knew them. AI just removed the last obstacle. The gap between “I roughly understand how this works” and “I can actually build with this” collapsed overnight.

APIs make sense now. Workflow architecture makes sense. Data pipelines make sense. The knowledge was already there, sitting dormant, waiting for the activation energy to drop low enough.

I think there’s a general pattern here. We judge learning by whether it produces an immediate result. If it doesn’t, we call it a waste of time. But that’s not how capability actually compounds. The tinkering was the foundation. I just couldn’t see it because nothing was sitting on top of it yet.

Concretely: I now run a system where LLMs compile research into markdown wikis, maintain indexes, answer complex queries against the knowledge base, and file the outputs back in to make the whole thing smarter over time. Obsidian is the frontend. The LLM does the writing. I do the thinking.
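The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual system: `summarize` is a stub standing in for an LLM call, and the vault path, function names, and wikilink-style index are all assumptions for the sketch.

```python
# Minimal sketch of the wiki loop: a note goes in, a markdown page comes out,
# and the index is rebuilt so the vault gets more useful over time.
# All names here are illustrative, not the author's actual code.
from pathlib import Path

def summarize(note: str) -> str:
    """Stand-in for an LLM call that compiles a raw note into wiki prose."""
    return note.strip()  # a real system would call a model here

def file_note(vault: Path, title: str, note: str) -> Path:
    """Write a note as a markdown page, then rebuild the vault index."""
    vault.mkdir(parents=True, exist_ok=True)
    page = vault / f"{title}.md"
    page.write_text(f"# {title}\n\n{summarize(note)}\n")
    # Rebuild the index: every filed page becomes queryable context later.
    titles = sorted(p.stem for p in vault.glob("*.md") if p.stem != "Index")
    index_body = "# Index\n\n" + "\n".join(f"- [[{t}]]" for t in titles) + "\n"
    (vault / "Index.md").write_text(index_body)
    return page
```

The point of the sketch is the shape, not the code: the model does the compiling and filing, the index keeps the vault navigable, and each pass leaves more structure behind for the next query.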

None of that would work if I hadn’t spent years poking at exactly these tools for no good reason.

If you’re a marketing leader, or anyone really, tinkering with AI right now and wondering whether it’s going anywhere: it probably is. You just can’t see what it’s building toward yet. That’s fine. That’s how it works.