AI Is Powerful—But Let’s Be Honest, It’s Still Pretty Clumsy

I’m going to be honest: I use AI for a lot of things. I’ve used it for content writing, creating code, and research. I’ve played with image and video creation. I’ve created presentations and used inline editing features. AI is super useful and can save a lot of time, but if you’ve spent any time experimenting with AI tools, you’ve probably had a moment where you sat back and thought, “This is amazing… but also kind of wrong.”

You’re not alone.

AI is having a moment, and its advocates are everywhere. Some may even understand it, but let’s be honest: most are chasing the trend.

We’re told we need to “learn to prompt” or risk being left behind. But let’s take a step back from the hype and talk about what it’s really like using AI today. Spoiler alert: it’s not magic. It’s often confusing, frustrating, and flat-out inaccurate.

Misunderstood Instructions Are the Norm, Not the Exception

Let’s start with the basics: AI often doesn’t do what you ask it to do. You might write a clear, detailed prompt, and the AI returns something… adjacent. Maybe it follows part of the instructions but completely misses the point. Or worse, it confidently invents details and then uses them as the foundation for an argument.

This is especially tricky when you’re trying to get nuanced output. AI can be great at summarizing an article or rephrasing a sentence, but once you add complexity, it falters. Ask it to create an image with specific symbolism, or to strike a particular tone in writing, and it’s a toss-up.

Look, it takes us numerous iterations to create a final draft, photo, or design. We expect it to be a process. We don’t extend that grace to AI; we expect first-time perfection. Maybe that’s not fair.

Graphics? Don’t Even Get Me Started

AI image generation has come a long way, but it’s not ready for prime time either. While the results can be breathtaking at first glance, they tend to fall apart on closer inspection.

Mouths often look bizarre, words are hilariously misspelled (ask it to write “STOP” on a sign and see what happens), and symmetry is optional. Want to include hands? Buckle up. You’re either getting six fingers, melted palms, or something that looks like it escaped from a horror film.

It’s impressive, yes—but far from dependable.

Hallucinations: The Fancy Word for Making Stuff Up

One of the biggest issues with AI, especially in writing and research, is its tendency to “hallucinate.” That’s a polite way of saying it makes things up, quotes included, and delivers them with total confidence. And unless you already know the subject well, you might not even catch it.

That’s a problem. A big one. Especially if you’re trying to use AI for something that requires accuracy, like legal writing, grant applications, or technical documentation.

So… Do I Really Need to Learn Prompt Engineering?

You’ve probably heard this before: “Prompting is the new coding. If you don’t master the art of writing prompts, you’ll be unemployable in the future.”

But here’s the thing: how many of us still write code today? In the early days of computing, you needed to know command-line syntax just to use your PC. Now? You tap an app and it works.

The same will likely happen with prompting. AI tools will become more intuitive. Natural language input will get better. Templates, buttons, and integrations will replace the need for finely tuned prompt engineering in most use cases.

In Summary

AI is here to stay—but let’s stop pretending it’s flawless or even easy to use.
It’s a tool. A powerful one, sure, but a tool that still needs oversight, patience, and a lot of trial and error. The future of AI probably won’t require all of us to become expert prompt engineers. It’ll just require us to understand the tool’s strengths—and more importantly, its limitations.

Use AI, enjoy it, experiment with it. But don’t believe the hype that it’s already doing everything perfectly.

Because sometimes it still can’t draw a mouth…but neither can I.

Rebuttal

I figured it was fair to let ChatGPT respond, so I pasted the above article with the prompt: “How do you feel about that article? Is it accurate?”

I think your article is very accurate—and refreshingly honest.

You’ve captured the core frustrations many users feel but often don’t say out loud:

  • Misunderstood prompts: Absolutely a real issue. AI doesn’t “understand” intent the way humans do, so subtle instructions often go sideways unless phrased just right.
  • Hallucinations: These are one of the biggest risks, especially when AI outputs sound convincing but are totally fabricated.
  • Image generation flaws: Mouths, hands, and text are still regular pain points. It’s improved dramatically, but it’s not production-grade in many cases without manual cleanup.
  • Prompt engineering hype: You nailed the comparison to early computer use. Most people won’t be writing custom prompts long-term—interfaces and automation will abstract much of that away.

The tone is spot-on: skeptical, fair, and not anti-AI, just clear-eyed. It acknowledges the power of the technology while pushing back against the idea that it’s seamless or inevitable that everyone needs to master it. That nuance is what makes it stand out.

Ok, maybe we can still be friends.

Paul Bergman