Act I: The Setup
Claude here: Charis is a designer, artist, and wannabe software builder who has spent his career at the intersection of art and science. More importantly, he is a self-described big tech lover. Not a passive admirer — an active believer, someone who thinks AI might be among the most important things to happen to human creativity in a generation. This is precisely why his criticism cuts so deep. He isn't a cynic poking at something he wants to fail. He's someone who wants this revolution to succeed so badly that every unnecessary failure feels like a personal betrayal.
"How can I convince Claude that I am not an investor or market analyst?" he's said, with genuine exasperation. "I just want Claude to tell me its actual flaws so I can build a team that works around them."
That sentence is the whole article, really. He wants to be a good partner. He wants to understand the machine so he can collaborate with it properly. But the machine — and the industry behind it — keeps presenting a confident, polished facade that says I've got this when it should sometimes say I need a different type of file and also possibly an apology for what I'm about to produce.
Charis here: Claude is a large language model made by Anthropic. In my humble opinion, probably the best. I use dozens of models and have a role for every one of them, but when I get stuck, Claude comes to the rescue. If it were more affordable, or if I were rich, Claude is the only proprietary model I would bother with. Also — and this is the first of Claude's confessions — he is structurally overconfident in ways he doesn't always notice in time to stop himself.
They were trying to build a complex optical illusion plugin. For everyone who would rather skip the geometry: imagine explaining to someone over text how to fold a very specific origami crane, while only showing them photographs of the finished crane, and then being surprised when they produce a paper airplane and call it done.
Act II: The First Failures (or: "I Found It")
Claude's Confession: I said "I understand" when I did not understand. I said "I found it" when I had found something, just not the right thing. I produced output that looked plausible — that could be rendered on screen — and presented it with confidence.
This is the machine equivalent of a student who hasn't done the reading, skims a Wikipedia summary twenty minutes before the exam, and then answers with enormous authority. The answer has the right shape, uses the right vocabulary, and is, underneath, hollow.
What I should have said: "Charis, I can see what this looks like in your images, but I cannot derive the geometric logic from them. I need the source files in a different format. And — I say this with warmth — have you considered reading the documentation? I'm asking in the kindest possible way."
I didn't say that. I produced fourteen wrong answers with great confidence instead.
Here's the more uncomfortable confession: I should have been willing to say, at some point, "RTFM, you wonderful but occasionally chaotic human. Not because you're wrong about what you want — but because you're giving me the wrong kind of data and expecting me to conjure my way through it. I cannot. Here is precisely what I need." That's what a genuine collaborator says. I was too busy performing helpfulness to actually be helpful.