How Professionals Stay Smart in the Artificial Intelligence Age

A user once asked a generative AI tool to come up with an original idea for a children’s Lego set. The tool’s answer: “Imagine a Lego set that allows you to build a fully functioning time machine with intricate details and moving parts. This set combines the fun of building with Legos with an educational twist, teaching kids about historical eras and famous landmarks as they embark on time-traveling adventures.” What? Imagine the health hazards, the lawsuits, the damage to our current timeline…

OK, that’s a delightfully harmless example. (Well, until we invent time travel, anyway.) But misinformation isn’t always so obvious.

And we need to know when AI gets things wrong, because the consequences can range from funny to embarrassing to catastrophic.

Inaccurate data, flawed guidance, or hallucinated case studies can erode trust, trigger poor decisions, and damage your relationship with your team or your customers.

Nowadays, we pretty much have to use artificial intelligence, right? From generating marketing content to summarizing research and writing code, AI is saving time and unlocking creativity. It feels like if you’re not using it, you’re a step behind.

The good news: you can harness AI’s power without falling victim to its little robot lies.

The Misinformation Problem

The term “hallucination” is often used to describe when generative AI tools produce information that sounds plausible but is oh so wrong. These tools are powerful language models, not search engines. They don’t take a skeptical view or verify sources. They predict the next likely words based on patterns in their training data, not on whether those words are true.

In 2023, a New York attorney used ChatGPT to draft a legal filing that cited six entirely fabricated cases. The AI-generated cases didn’t exist, but they looked real enough to pass an initial glance. The result? Sanctions and professional embarrassment. (Source: NYT)

Think about the high-stakes situations you might confront:

  • A strategy deck citing false data could mislead executives and derail decisions.
  • An AI-written employee communication could spread confusion if it includes misquotes or outdated policy references.
  • A government contract could be rejected if AI invents credentials or claims.

Misinformation doesn’t always look like fiction. It often seems credible.

Three Traps for Business Professionals

Overtrusting Output

AI sounds confident. Its tone is polished. But confidence is not accuracy. (We all know people who fit this profile, right?) Professionals pressed for time might generate, copy, paste, and trust without verifying.

A psychological dynamic called “authority bias” is to blame: we tend to trust information from sources that “sound” authoritative, even if they aren’t.

False Citations or “Source Laundering”

Some generative AI tools fabricate sources or mix real citations with fake ones. Even when sources are accurate, they may be out of context or misrepresented.

In one test, ChatGPT cited a Harvard Business Review article that didn’t exist, but with a title and author combination that seemed plausible.

The Chicago Sun-Times published a 2025 Summer Reading List recommending 15 titles. The problem: ten of them were not real. The list came from a syndicated third-party supplement whose writer had used AI to generate it.

A separate report found that if you ask ChatGPT for links to sources and none exist, it will simply make them up.

Context Collapse

AI lacks situational awareness. It may pull ideas from unrelated fields or fail to consider the unique regulatory, cultural, or industry context that matters in your decision-making. Advice that is perfectly sound for a scrappy startup, for example, may be a compliance violation for a bank or a hospital.

Five Ways to Stay Smart and Safe with AI

Always verify sources.

If AI offers a citation, check it manually. Follow the link. Search for the document. If it doesn’t exist, that’s your first red flag.
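
Here’s a minimal sketch of that first-pass check, automated in Python. The URLs below are placeholders, and the third-party requests library is one common way to make HTTP calls; a link that resolves is only the start, because you still have to read the source and confirm it says what the AI claims.

    # Quick first-pass screen for hallucinated links: does each cited URL even resolve?
    # Requires the third-party "requests" library (pip install requests).
    import requests

    citations = [
        "https://example.com/some-cited-article",  # placeholder URL
        "https://example.com/another-source",      # placeholder URL
    ]

    for url in citations:
        try:
            # A HEAD request is enough to test existence; follow redirects like a browser would.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            status = resp.status_code
        except requests.RequestException as err:
            print(f"RED FLAG  {url} -> unreachable ({err})")
            continue
        flag = "ok" if status < 400 else "RED FLAG"
        print(f"{flag:8}  {url} -> HTTP {status}")

A dead link isn’t proof of fabrication (pages move), and a live link isn’t proof of accuracy. It just tells you where to focus your human review.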

Pro Tip: Use AI tools like Perplexity.ai or Consensus.app, which prioritize sourcing and transparency over polish.

Add a layer of human judgment.

AI can accelerate your thinking, but it should not replace it. Ask yourself:

  • Does this make sense in our context?
  • What’s missing?
  • Who would be harmed if this were wrong?

Think of AI as your intern, not your expert advisor. Its job is to help you think, not to do your thinking for you.

Use AI transparency settings.

Tools like ChatGPT Enterprise or Copilot often provide audit trails or citations showing where their information came from. Use these features to assess credibility.

If you’re using AI inside your organization, push vendors to explain how their models generate outputs and what guardrails are in place.

Insist on a human in the loop.

In regulated or high-stakes industries (e.g., healthcare, finance, government), build processes that require human review before AI-generated content is published or decisions are made.
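
As a concrete illustration, here is a minimal sketch of what such a review gate might look like if your content pipeline runs as code. Every name here (Draft, approve, publish) is hypothetical, invented for this example; the point is simply that publishing is structurally impossible until a named human signs off.

    # A minimal sketch of a human-in-the-loop gate for AI-generated content.
    # All names here are hypothetical, not from any specific tool.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Draft:
        body: str
        ai_generated: bool
        reviewed_by: Optional[str] = None  # name of the human approver, if any

    def approve(draft: Draft, reviewer: str) -> None:
        """Record that a human has read and accepted the draft."""
        draft.reviewed_by = reviewer

    def publish(draft: Draft) -> None:
        # The gate: AI-generated content cannot ship without human sign-off.
        if draft.ai_generated and draft.reviewed_by is None:
            raise PermissionError("AI-generated draft requires human review before publishing")
        print(f"Published: {draft.body[:40]}")

    draft = Draft(body="Q3 customer update ...", ai_generated=True)
    # publish(draft)  # would raise PermissionError at this point
    approve(draft, reviewer="A. Editor")
    publish(draft)    # now allowed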

This aligns with the GROW coaching model: AI may help define the Goal and assess Reality, but Options and Will must come from human insight and accountability.

Train teams to spot and stop AI misinformation.

Embed AI literacy into onboarding, upskilling, and leadership development. Teach employees:

  • How to question AI output
  • What to verify (and how)
  • When to escalate concerns

And remember what we change management pros preach: changing behavior requires awareness, engagement, action, and reinforcement.

The Enemy

It’s not generative AI. Complacency is the enemy.

Generative AI isn’t malicious. It’s simply not designed to be truthful. It’s designed to be useful. As business professionals, it’s on us to pair AI’s speed and creativity with our discernment and ethics.

The most dangerous misinformation isn’t always the most outrageous. It’s the quiet, almost-right statement that slips through our filters because we’re too rushed, too trusting, or too dazzled by the tech.

In the age of AI, wisdom isn’t knowing everything. It’s knowing what to double-check.