By Roberto Coronel

AI is tepid: But are we ready for truly critical AI?

In today's Artificial Intelligence (AI) landscape, models like Gemini, Claude, ChatGPT, and Copilot are designed with some very clear boundaries: they must be inoffensive, non-sexual, and non-critical. Essentially, these tools are made to be passive: AI with soft edges.


In a conversation with AI enthusiast Roberto Coronel, we explored the restrictions we've placed on AI, restrictions most users are unaware of and that limit how effectively we use these tools.


For example, you may not realize it, but AI can coach you. Yet how often do you actually treat that machine living in your computer as a tool that gives you feedback? Would you really use AI to sharpen your skills?


The truth is, we're not ready for AI to train us.

We're not actively pushing AI to take on more critical, complex roles. And we won't until we reach the point where we're consistently telling AI things like the prompts below (a rough code sketch follows the list):

  • "Critique my article with no sugarcoating."

  • "Provide me with constructive feedback on what went wrong in my video."

  • "Identify gaps of emotional communication in my marketing so I can close them."


"We're only scratching the surface of AI's potential in fostering deeper thought processes. Sure, AI can assist in many ways, but we're not at the stage where it replaces the nuanced, critical thinking required in high-level tasks," says Coronel.



AI and offense: the learning disconnect

One uncomfortable reality we need to face is that offense—criticism, challenge, pushback—drives growth. Unfortunately, we can't yet depend on AI to deliver that type of feedback.


"While nobody enjoys taking offense, it's necessary for real learning and improvement. And until AI systems can offer tough love, they won't be able to facilitate the kind of learning that leads to true progress," explains Coronel.


We have to accept that critical feedback is a crucial part of any learning process. Whether it's identifying flaws in your work or pointing out areas for improvement you hadn't considered, there's a constructive side to criticism that AI isn't fully equipped to handle, at least not yet.


ChatGPT: a predictor, not a source

Coronel explained AI's limitations using ChatGPT as an example. "It doesn't actually source information from the web like a search engine. Instead, it predicts responses based on patterns it's learned from vast datasets. It's not pulling from 'what's out there' in real time, and it doesn't reproduce the literal text it was trained on; it generates responses from predictive models."


This sounds limiting, but it's also interesting. Why? Because it means ChatGPT infers more than your average marketing intern does. It connects dots in ways that can sometimes surprise you, but it isn't always flawless. Like that same intern, it may occasionally miss instructions or, worse, invent the answers it thinks you need. That's why AI, like that intern, shouldn't be left unsupervised.
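
To make "predictor, not source" concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small public GPT-2 checkpoint as an illustrative stand-in; ChatGPT's own weights aren't publicly inspectable this way.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# transformers library and the public GPT-2 checkpoint (an illustrative
# stand-in; ChatGPT itself cannot be inspected like this).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every possible next token

# The "answer" is simply the most probable next token given the prompt:
# a prediction learned from training data, not a lookup on the live web.
next_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_id))  # likely " Paris", predicted rather than retrieved
```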



Bing AI (enhanced by Copilot): a step forward with real sources, yet still constrained

Now let's consider Copilot. It marks a step forward from ChatGPT because it can search and read content directly from the web, sourcing real-time information, which seems to give it a substantial edge. However, its ability to tap into online resources doesn't equate to unrestricted access or unfettered accuracy.

"While Co-pilot can pull from the web, it is still constrained by the filters through which this information is processed and the inherent limitations of its core algorithms—rooted in OpenAI's predictive models," explains Coronel.

Furthermore, the apparent freedom to scour the web brings its own challenges. The internet is not always a reliable source of truth; it is riddled with inaccuracies and undesirable content. Our caution about what the internet might yield, whether unappealing or incorrect, places additional limits on these tools' utility. Despite its advances, Copilot, like its predecessors, must navigate these complexities and is ultimately limited by the quality and veracity of the information it accesses.
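
To make the contrast with ChatGPT concrete, here is a deliberately simplified, hypothetical sketch of the retrieve-then-generate pattern described above. The search_web and generate functions are toy stand-ins, not Copilot's actual components; the point is that even real-time web snippets are funneled through a predictive model and its filters before you see them.

```python
# A hypothetical sketch of the retrieve-then-generate pattern described above.
# search_web() and generate() are toy stand-ins, NOT Copilot's real components.

def search_web(query: str) -> list[str]:
    # Stand-in for a real search backend; real results vary in quality.
    return [
        f"Snippet A: a plausibly accurate claim about {query}",
        f"Snippet B: an outdated or flat-out wrong claim about {query}",
    ]

def generate(prompt: str) -> str:
    # Stand-in for a predictive language model completing the prompt.
    return f"[model's predicted answer, conditioned on {len(prompt)} chars of prompt]"

def answer(question: str) -> str:
    snippets = search_web(question)    # real-time, but unvetted, sources
    context = "\n".join(snippets[:5])  # only a few snippets fit in the prompt
    prompt = (
        "Answer using only the sources below; flag anything dubious.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)            # still a prediction, now grounded

print(answer("the current state of AI guardrails"))
```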


What's next for AI?

The key to unlocking AI's potential lies in how we allow it to evolve.


"If we keep designing models that are passive, safe, and agreeable, we’ll never see AI’s true capacity for critical thinking, analysis, or even offense-driven growth. And until we embrace a bit of discomfort, AI will remain in the realm of tepid tools—handy, but not transformative."


Maybe it’s time we stop asking AI to be so nice and start demanding it push back.


Want to read more from Roberto? He's contributed other articles to Write Wiser on this same subject.

