When I was at EY building AI products — competitive intelligence platforms, invoice reconciliation tools, commercial optimization systems — governance was barely a conversation. The question was “does it work?” and if the answer was yes, it shipped. That was five years ago. The world has changed.
Today, boards are asking questions about AI that most management teams aren’t prepared to answer. Not “are we using AI?” but “how are we governing the AI we’re already using?” And for companies operating across jurisdictions, the answer often reveals an uncomfortable truth: governance hasn’t kept pace with adoption.
The Real Problem Isn’t Missing Governance — It’s Bad Governance
The knee-jerk response to AI risk is to create governance layers. Approval committees. Review boards. Checklists. I’ve seen organizations where getting an AI use case approved requires sign-off from seven different functions — legal, compliance, risk, data privacy, IT security, the business unit, and a dedicated AI ethics committee.
The result? Innovation stalls. Teams route around the process. Shadow AI proliferates. The governance framework technically exists but practically doesn’t.
The opposite failure is equally common: governance that’s so high-level it means nothing. A one-page AI policy that says “we commit to responsible AI” without defining what that means operationally. No risk classification. No clear decision rights. No mechanism for enforcement.
What Actually Works
The governance frameworks I’ve seen succeed share a few characteristics:
Risk-based classification, not blanket rules. An AI model that recommends product features carries a different risk profile than one that makes lending decisions. Treating them the same way, whether too strictly or too loosely, is the core design error. The EU AI Act gets this right conceptually with its tiered approach (unacceptable, high, limited, and minimal risk). The implementation challenge is mapping your specific use cases to those tiers honestly.
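To make that mapping concrete, here is a minimal sketch of a machine-readable risk register. The four tiers come from the EU AI Act; the use-case names and routing rules are hypothetical placeholders, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Tiers mirror the EU AI Act's four-level classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical register: every use case gets an explicit tier, revisited
# whenever the use case or the regulation changes.
RISK_REGISTER = {
    "product-feature-recommender": RiskTier.MINIMAL,
    "customer-support-chatbot": RiskTier.LIMITED,
    "consumer-lending-scorer": RiskTier.HIGH,
}

def approval_path(use_case: str) -> str:
    """Route a use case to a review path proportionate to its tier."""
    tier = RISK_REGISTER.get(use_case)
    if tier is None:
        return "unclassified: block deployment until classified"
    return {
        RiskTier.UNACCEPTABLE: "prohibited: do not build",
        RiskTier.HIGH: "full review: legal, compliance, model validation",
        RiskTier.LIMITED: "lightweight review plus transparency notice",
        RiskTier.MINIMAL: "self-service approval, logged after the fact",
    }[tier]
```

The design point is the fallback: an unclassified use case blocks by default, so the register stays honest instead of decaying into a formality.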
Clear decision rights. Who can approve a new AI use case? Who decides when a model needs retraining? Who’s accountable if something goes wrong? These questions sound basic, but I’ve worked with organizations where nobody could answer them clearly. Good governance makes the answer obvious.
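One way to make the answer obvious is to write decision rights down in a form anyone can query. A sketch, with placeholder roles standing in for a real org chart:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRight:
    decision: str     # what is being decided
    approver: str     # role that can say yes
    accountable: str  # individual role on the hook if it goes wrong

# Illustrative entries only; the roles are assumptions, not recommendations.
DECISION_RIGHTS = [
    DecisionRight("approve low-risk AI use case", "business unit lead", "business unit lead"),
    DecisionRight("approve high-risk AI use case", "AI review board", "chief risk officer"),
    DecisionRight("trigger model retraining", "model owner", "model owner"),
    DecisionRight("pull a model from production", "on-call ML engineer", "head of ML platform"),
]
```

If a row can't be filled in, that's the governance gap, found before an incident finds it for you.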
Automated monitoring where possible. Manual compliance reviews don’t scale. If your governance depends on someone remembering to check a spreadsheet every quarter, it’s already failing. Model performance monitoring, data drift detection, bias auditing — these need to be built into the pipeline, not bolted on as afterthoughts.
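As one example of monitoring built into the pipeline rather than bolted on: a drift check that runs on every scoring batch and alerts when a feature's live distribution moves away from its training distribution. The population stability index below is one common choice; the 0.10 and 0.25 thresholds are conventional rules of thumb, not mandates, and should be tuned per model.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a feature's training-time distribution and live traffic."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the proportions so empty bins don't blow up the log term.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

def check_drift(reference: np.ndarray, live: np.ndarray) -> str:
    """Run on every scoring batch; wire the ALERT branch to a pager, not a spreadsheet."""
    psi = population_stability_index(reference, live)
    if psi > 0.25:  # conventional threshold: investigate before the next release
        return f"ALERT: PSI={psi:.3f}"
    if psi > 0.10:  # watch zone
        return f"WARN: PSI={psi:.3f}"
    return f"OK: PSI={psi:.3f}"
```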
The Regulatory Landscape Is Converging
The EU AI Act is the most visible regulation, but it’s not the only one. India’s Digital Personal Data Protection Act affects how AI systems handle personal data. Sector-specific regulators — RBI for banking, TRAI for telecom — are developing their own AI guidelines. CERT-In directives are adding cybersecurity requirements that touch AI systems.
For companies operating in multiple jurisdictions, this creates a compliance matrix that’s genuinely complex. The pragmatic approach isn’t to build separate compliance frameworks for each jurisdiction but to design governance that satisfies the most stringent requirements and then adapt downward. Start with EU AI Act rigor and you’ll meet most other standards by default.
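One pragmatic way to operationalize "most stringent first" is a single control set tagged with the regimes each control helps satisfy, so entering a new jurisdiction becomes a gap check rather than a new framework. The mappings below are illustrative assumptions for the sketch, not a legal analysis:

```python
# One control set, tagged with the regimes each control helps satisfy.
# Mappings are illustrative; a real matrix comes from counsel, not a blog post.
CONTROLS = {
    "risk-tier classification for every use case": {"EU_AI_Act"},
    "human oversight for high-risk decisions": {"EU_AI_Act"},
    "consent and purpose limitation for personal data": {"EU_AI_Act", "GDPR", "India_DPDP"},
    "security incident reporting within mandated windows": {"EU_AI_Act", "CERT_In"},
    "model documentation and audit trails": {"EU_AI_Act", "RBI_guidelines"},
}

def controls_for(regime: str) -> list[str]:
    """The subset of the EU-rigor control set a given regime draws on."""
    return [control for control, regimes in CONTROLS.items() if regime in regimes]

print(controls_for("India_DPDP"))  # what DPDP compliance reuses from the EU baseline
```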
What Boards Should Be Asking
If you’re a board member reading this, here are five questions worth asking at your next meeting:
- How many AI models are running in production right now? If nobody can answer quickly, that's your first governance gap; a minimal inventory sketch follows this list.
- How are we classifying AI risk? If the answer is “we’re not,” or “we treat all AI the same,” that’s the second gap.
- Who's accountable when an AI system produces a bad outcome? The answer should name an individual, not a team.
- How would we know if a model is drifting or producing biased results? If the answer involves a manual quarterly review, it’s insufficient.
- What’s our exposure under the EU AI Act and India’s DPDP Act? If nobody has done this analysis, commission it.
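On the first question, "quickly" in practice means a queryable inventory, not a survey. A minimal sketch, assuming a homegrown registry with hypothetical entries; swap in whatever model registry you already run:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str
    owner: str          # the accountable individual (question 3)
    risk_tier: str      # per the classification scheme (question 2)
    in_production: bool
    last_reviewed: date

# Hypothetical entries for illustration only.
REGISTRY = [
    ModelRecord("lending-scorer-v4", "j.doe", "high", True, date(2025, 11, 2)),
    ModelRecord("churn-predictor-v7", "a.rao", "minimal", True, date(2025, 6, 15)),
    ModelRecord("support-chatbot-v2", "s.khan", "limited", False, date(2025, 9, 30)),
]

# The board question, answered in one line instead of a two-week email chain.
print(sum(m.in_production for m in REGISTRY), "models in production")
```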
Governance as Competitive Advantage
I’ll end with a perspective that’s unpopular in AI-enthusiast circles: governance done well is not a constraint on innovation. It’s a competitive advantage. The companies that can deploy AI quickly because they have clear governance — clear risk classification, fast approval paths for low-risk use cases, automated monitoring — will outpace competitors who are either paralyzed by over-governance or exposed by under-governance.
The worst position is the middle: enough governance to create friction, not enough to actually manage risk. If that’s where you are, fix it before a regulator or an incident forces you to.