I used AI. It worked. I hated it.
25 minute read · Published: 2026-03-30

I'm as anti-genAI as it gets. And yet, this past month, I have used generative coding to complete a project. It works. I hated making it.
Nobody can buy RAM right now. SSDs are in a similar situation. Now, even spinning disks are hard to come by. Of course, GPU prices have been skyrocketing for some time. While geopolitics and supply chain problems compound the shortage, the largest driver of demand is clear: the AI industry spinning up more and more compute to serve its oracles.
Our inability to buy computing power, while not necessarily the AI companies' direct objective, is absolutely a happy outcome for their bosom buddies, the cloud service providers.
I keep running up against this argument about LLMs and generative AI:
What a year, huh? I thought about doing a broad review of the year in cyber news. Over a year of running the TTI Intel Feed has given me a front-row seat to the weirdest show on earth. The problem is, there's just too much. 0-days, ransomware, nation-state activity, cybercrime—any one area would take more time than I have to write, certainly more time than you have to read.
But as I read through article after article, one thought kept popping into my head:
"Boy, Microsoft sure ate a lot of shit this year."
A year on from writing "Truth in the Age of Mechanical Reproduction", much of what I had feared has come to pass. In fact, in many cases things are worse than I expected.
There's nothing a user interface designer loathes more than complexity. Every design—at least, every modern design—seeks to minimize clicks, icons, visual noise. What if instead of a button, we had a borderless icon? What if instead of navigation controls, we used gestures?
And what if—hear me out—instead of search results, we had language model-distilled text delivered to you, hot and fresh?
Count me among those who are alarmed about the implications of "AI," such as it is. But I am not among those who worry about machines taking over. I see no signs of intelligence—either from the large language models being hyped right now, or from those doing the hyping. My concern around this technology is more mundane than apocalypse, but more profound than simple economic impact.
I'm terrified we're about to lose the war for truth.