I understand when someone says “AI is totally overrated”.
They are fundamentally mistaken.
But I understand why they say it.
It’s very easy to try a popular tool like ChatGPT or Gemini, play with it for 15 or 20 minutes, get terrible output, and conclude that “AI simply isn’t all it’s cracked up to be”.
Like any tool, how well AI works depends a lot on how you use it.
Basic prompts often give you basic results. To really see what AI can do, you need to learn how to interact with these tools effectively.
Here are some resources that can help with that.
And I get it: it’s tempting to believe that technology can never match human capabilities.
The problem is that this view misses the point.
AI is not a single, monolithic technology. It’s a diverse group of distinct technologies from many companies, and each has its own strengths and weaknesses.
All of them are improving quickly, some at an exponential rate, and their capabilities will continue to grow.
LLMs like ChatGPT are just one type of AI. There are other types of AI being combined with LLMs to create even more advanced systems.
Most notably, when AI agents become easier to build, they will drastically change how we work.
Ignoring AI because of its perceived limitations means missing the huge impact it will have on our work, businesses, education, and even social lives.
As you use AI language models like ChatGPT, Claude, and Gemini more and more, you’ll probably encounter odd output and basic mistakes you’d expect technology at this level to avoid.
There are logical reasons why these mistakes happen! Understanding why they occur is the first step toward working around them.
I’ll be sharing more AI tips like this, as well as more strategic advice for business leaders. Follow me on LinkedIn and check out The AI-Powered Business Leader podcast for in-depth conversations with thought leaders in business and AI.