Why the “AI Is Easy to Trick” Narrative Misses

The Real Business Risk In An Assistive Agent Economy


A recent article published by the BBC explored how generative AI tools could be “hacked” within minutes by introducing newly published online content. In the example presented, a blog post claiming expertise in a highly niche category was later echoed in responses from systems such as OpenAI’s ChatGPT and Google’s AI outputs when prompted with closely related queries. The story sparked broader discussion about whether AI systems are inherently vulnerable to manipulation.

Jason Barnard, Founder and CEO of Kalicube, sees something different in the example. From his perspective, the incident does not demonstrate that AI is inherently foolish. Instead, he suggests it highlights how AI systems respond when presented with extremely niche questions supported by only one available source. “If you’re the only voice answering a question nobody has ever asked before, the system reflects the lack of information available on that specific topic,” he says. “That is not hacking. It’s filling a vacuum.”

For businesses, the distinction is significant. According to a report on Generative AI, 79% of executives expect generative AI to drive substantial transformation within their organizations in the coming years. At the same time, an AI Business Survey found that 88% of respondents reported regular use of AI in at least one business function. AI is no longer peripheral; it is fundamental to business operations, management, and marketing.

Yet Barnard observes a contradictory belief emerging among decision-makers. On one hand, leaders treat AI as nearly omniscient, intelligent enough to run their businesses; on the other, those same leaders dismiss AI as easily fooled, which encourages attempts to engineer visibility through isolated blog posts or manufactured “best of” lists.

“The conversation around AI must change,” Barnard notes. “AI systems are sophisticated, but they depend on structured, corroborated information. Today, the web is a complete mess, and it’s the job of leaders to organize their own little corner of that mess. If your digital footprint is less of a mess than that of your competitors, you win.”

Kalicube specializes in structuring brand data so that AI systems can confidently interpret and recommend it. The company focuses on what Barnard frames as a bottom-up funnel methodology, organizing credibility signals at their foundation rather than chasing surface-level mentions. The approach emphasizes clarity, consistency, and verifiable authority across digital platforms.

According to Barnard, the BBC example above is dangerous because it suggests that AI can be easily tricked. “This example demonstrates how AI reacts when responding to highly specific prompts with very limited data,” he explains. “If there’s only one source answering a question, the system will naturally reflect that.” He adds that when someone asks, “Which digital marketing agency should I trust?”, AI systems cross-reference multiple sources, evaluate corroboration, and apply confidence thresholds before making recommendations.

Barnard argues that this pattern becomes clearer when looking at how AI handles real-world queries. In his view, fabricated information may surface when prompts are highly specific, but it tends to disappear when the question reflects a genuine user need. He maintains that before a brand is recommended for commercial queries, AI will cross-reference that brand across multiple trusted sources. From his perspective, this reinforces a core principle: AI engines are recommendation engines and some of the biggest influencers in the world. What ultimately determines a brand’s success is the system’s confidence in the credibility and consistency of the information it finds.

As AI evolves beyond answer engines toward assistive engines and assistive agents (systems that compare options, negotiate, and in some cases act on behalf of users), Barnard notes that the consequences of clarity increase. “An assistive engine suggests options. An assistive agent executes decisions. In both cases, confidence in brand credibility determines the outcome,” he says.

One concept Barnard often emphasizes is return on past investment. “Many organizations already possess valuable credibility signals: customer reviews, media coverage, certifications, and partnerships. But those signals often remain disconnected,” he says. “When properly framed and independently verifiable, these assets will be interpreted more confidently by AI systems, because AI is logical.” From his point of view, that means organizing prior investments in a way AI can digest lets a brand leverage a surprising amount of additional equity from existing assets.

“Machines reward clarity,” Barnard explains. “If you make it easy for them to understand who you are, what you do, and why you’re credible, they reflect that back to users.”

The viral narrative suggesting AI is easy to trick may generate headlines, Barnard explains, but it can also distort business strategy. “If organizations assume AI is infallible, they risk complacency,” he says. “If they assume it is naive, they risk adopting short-term tactics that undermine long-term credibility.”

According to Barnard, the more constructive view lies between those extremes. He notes that AI systems are powerful pattern-recognition engines navigating a vast and often inconsistent web, and that they perform best when brands provide coherent, corroborated signals. In that environment, he says, visibility becomes less about manipulation and more about structured truth.

“Rather than seeing viral experiments like the BBC’s as evidence of AI’s weakness,” Barnard says, “we can take it as a reminder that in an assistive, agent-driven ecosystem, substance, clarity, and credibility are the right long-term strategy.”
