Picture a VP of finance at a large retailer. She asks the company’s new AI analytics agent a simple question: “What was our revenue last quarter?” The answer comes back in seconds. Confident. Clean. Wrong.
That exact scenario happens more often than many organizations would care to admit. AtScale, which helps organizations deploy governed analytics environments with consistent semantics, has found that increasing model parameterization alone cannot address the AI governance and context issues enterprises face.
When AI systems query inconsistent or ungoverned data, adding model complexity doesn’t contain the problem; it compounds it. Organizations across industries have moved quickly on agentic AI, deploying systems that analyze data, generate insights, and trigger automated workflows. Model vendors have responded in kind, with larger parameter counts, more computing power, and additional features. The underlying assumption has been that once a model gets large enough, its answers will eventually be reliable.
However, there are indications that this assumption may not hold up. Recent TDWI research found that nearly half of respondents characterized their AI governance initiatives as either immature or very immature. This may have more to do with data lineage and the business definitions on which these models are based than with the models’ capabilities.
Why bigger models don’t solve governance
The AI industry tends to operate on an unexamined assumption about what drives better performance: as models grow more advanced, they will somehow correct their own errors. In enterprise analytics, that assumption can fall apart quickly.
While scale may improve the breadth of reasoning in a model, it doesn’t automatically enforce which definition of gross margin the business has agreed to use. It doesn’t resolve metric inconsistencies that have lived in separate dashboards for years. Nor does it produce traceable lineage on its own.
Governance problems don’t resolve at scale. Business rules buried in individual tools, inconsistent definitions across teams, and outputs with no audit trail are structural issues, and a larger model doesn’t fix structure. It just produces unreliable answers more fluently.
At AtScale, there’s a consistent theme among our clients: when inconsistent data definitions follow organizations into their AI layer, the problems don’t stop there. They propagate forward, typically faster and with less transparency than the layer before.
Performance and responsibility are separate jobs. A model reasons. A governance layer defines what the model reasons over, constrains how it applies business logic, and ensures outputs can be traced back to a source of record. One cannot substitute for the other.
The real risk: Unconstrained agents in enterprise environments
The problem with AI agents is seldom the model itself. It’s what the model is working with, and whether anyone can see what it did.
Without common context, AI agents may read the same data differently on different systems. In large enterprises, even small differences in definitions can lead to divergent results. Structural risks typically stem from four main causes:
- Ambiguous definitions: Agents pull from sources where the same metric means different things to different teams.
- Conflicting metrics: Departments calculate the same figure differently, so two agents return two answers with no clear way to tell which is right (sketched in code after this list).
- Opaque reasoning: Outputs arrive without a clear lineage showing how a decision was made.
- Audit gaps: When outputs can’t be traced back to a governed source of record, there’s no reliable way to catch errors, assign accountability, or course-correct.
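To make the first two failure modes concrete, here is a minimal Python sketch. The data, team names, and metric rules are hypothetical stand-ins; the point is only that two agents can each apply a locally valid definition of “revenue” and return different, equally confident numbers.

```python
# Hypothetical data and metric rules, for illustration only.
transactions = [
    {"amount": 1200.0, "type": "sale"},
    {"amount": 800.0, "type": "sale"},
    {"amount": -150.0, "type": "refund"},
    {"amount": 300.0, "type": "gift_card_sale"},
]

def revenue_sales_view(rows):
    # The sales team counts everything booked, gift cards included, before refunds.
    return sum(r["amount"] for r in rows if r["amount"] > 0)

def revenue_finance_view(rows):
    # Finance nets out refunds and excludes unredeemed gift cards.
    return sum(r["amount"] for r in rows if r["type"] in ("sale", "refund"))

print(revenue_sales_view(transactions))    # 2300.0
print(revenue_finance_view(transactions))  # 1850.0
# Same data, two confident answers. By its own rules neither agent is wrong,
# and nothing in either system flags the disagreement.
```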
What guardrails actually mean in AI analytics
Guardrails are often viewed as limitations. In many cases, though, they are the very conditions that allow AI agents to operate with greater confidence.
Guardrails can help align AI-generated outputs with established business logic. They also create a structure in which autonomous agents can operate, so that as autonomy increases, so does reliability. In analytics, guardrails typically take several specific forms:
- Shared data definitions: A single definition of terms such as revenue, churn, or margin, shared across all systems.
- Business logic constraints: Rules governing how calculations are performed, regardless of which tool or agent performs them.
- Lineage visibility: The ability to trace where any output came from.
- Access controls: Defined permissions determining what data an agent can query.
- Metric standardization: Consistent definitions that apply across departments and platforms.
The intention isn’t to impede AI’s performance. It’s to give AI a base to stand on.
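As a rough illustration, here is a minimal Python sketch of how several of these guardrails might be expressed. The registry, role names, and SQL expression are hypothetical, not any vendor’s API: one governed formula per metric covers shared definitions and standardization, a role check covers access control, and the lineage record returned with every resolution covers visibility.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedMetric:
    name: str
    expression: str      # the one agreed formula (business logic constraint)
    owner: str           # who is accountable for this definition
    allowed_roles: set = field(default_factory=set)  # access control

# Shared definitions and metric standardization: one registry for every agent.
REGISTRY = {
    "revenue": GovernedMetric(
        name="revenue",
        expression="SUM(amount) FILTER (WHERE type IN ('sale', 'refund'))",
        owner="finance",
        allowed_roles={"finance", "executive"},
    ),
}

def resolve_metric(metric: str, role: str):
    """Return the governed expression plus a lineage record, or refuse."""
    m = REGISTRY.get(metric)
    if m is None:
        raise KeyError(f"'{metric}' has no governed definition; refusing to guess")
    if role not in m.allowed_roles:
        raise PermissionError(f"role '{role}' may not query '{metric}'")
    # Lineage visibility: the answer travels with its source of record.
    return m.expression, {"metric": m.name, "definition": m.expression, "owner": m.owner}
```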
The role of the semantic layer as a constraint framework
A semantic layer sits between the data and the applications and AI agents that use it, defining business concepts, encoding business logic, and providing a common vocabulary for all of those consumers to draw on.
A semantic layer does not manipulate or duplicate data; it defines what the data represents. By querying a governed semantic layer rather than base tables, AI agents generate output grounded in business-defined logic rather than inference. That distinction becomes particularly important when multiple AI agents across multiple systems must produce consistent answers.
From AtScale’s perspective, the semantic layer serves as a context boundary that helps ensure AI agents interpret data according to shared business definitions. Less a guardrail than a common language, it gives every system the same understanding of what the data means.
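A minimal Python sketch of that idea follows. The SEMANTIC_LAYER mapping, the transactions table, and the run_sql callable (standing in for a warehouse connection) are all hypothetical, not AtScale’s implementation. What matters is that the agent never invents a definition: it either resolves a governed one or declines to answer.

```python
# Hypothetical sketch of an agent querying through a semantic layer.
SEMANTIC_LAYER = {
    # One governed definition per business concept, shared by every agent.
    "revenue": "SUM(amount) FILTER (WHERE type IN ('sale', 'refund'))",
}

def agent_answer(metric: str, run_sql) -> dict:
    expression = SEMANTIC_LAYER.get(metric)
    if expression is None:
        # Constrained autonomy: no governed definition, no improvised query.
        return {"error": f"no governed definition for '{metric}'"}
    sql = f"SELECT {expression} AS value FROM transactions"
    return {
        "value": run_sql(sql),  # the warehouse evaluates the governed formula
        "lineage": {"metric": metric, "definition": expression},
    }
```

Because every agent resolves “revenue” to the same expression, their answers can differ only if the underlying data does, and each answer carries the lineage needed to audit it.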
Governance is an architectural question, not a model question
Enterprise organizations are realizing that AI governance is less about building the largest model and more about creating an environment in which the chosen model can work well. A well-designed, governed architecture (with shared definitions for concepts, traceable logic, and common context across systems) will likely deliver better, more reliable results than a larger model running in an uncontrolled data environment.
Scaling models without improving semantic clarity tends to add complexity, not reduce it. As each additional tool, system, or workflow is added to an uncontrolled environment, the opportunities for divergence increase.
In this sense, responsible AI is an infrastructure challenge. Organizations with successful AI deployments treat the meaning of their data as a design decision, made before the model is even chosen.
Economic and operational implications
Governance gaps do not stay abstract for long. They tend to show up in the budget.
Ambiguity in data meaning adds operational friction: agents that produce inconsistent outputs require human review, reconciliation cycles, and rework that compounds across teams and tools. When lineage is unclear, audits cost more. And retrofitting controls after deployment typically costs more than building the right architecture from the start.
In complex enterprise settings, costs can show up in predictable ways: redundant validation when outputs don’t match across systems, excess compute triggered by unclear queries, and slower analysis as teams pause to figure out which answer is actually reliable. Clear semantic constraints can mean fewer validation cycles, and that operational value is becoming easier to measure.
The path forward: Constrained autonomy
AI agents aren’t a future consideration; they’re already in use. What’s still catching up is the infrastructure around them. Agents without clear context and constraints tend to operate beyond what the organization can actually govern. That gap doesn’t close on its own.
The differentiator in enterprise AI, AtScale contends, won’t be model scale; it will be the clarity of the environment models operate in. As agents become more common in business workflows, how well the semantic layer is defined may matter more than how large the model is.
This shift toward governed context and constrained autonomy is explored in more detail in AtScale’s 2026 State of the Semantic Layer report, which examines how open standards, interoperability, and semantic governance are shaping the next phase of enterprise intelligence.
Content provided by Ascend Agency in collaboration with TNW
