Generative AI is moving quickly from experimental hype to operational reality; the challenge for enterprises is no longer whether they will adopt it, but how they will do so responsibly. The findings from the 2025 AI Governance Survey paint a stark picture: while enthusiasm for AI remains strong, governance maturity is lagging behind.
The survey exposes critical gaps in how organizations manage AI risk, especially among smaller companies, and highlights the growing need for leadership to prioritize governance as a foundational element of their AI strategy. To build safer, more effective AI systems, we must first understand the current shortcomings and how they ripple across the development lifecycle.
Adoption Is Slow and Governance Is Underdeveloped
Despite the buzz surrounding generative AI, real-world adoption remains modest. Just 30% of surveyed organizations have deployed generative AI in production, and only 13% manage multiple deployments. Larger enterprises are five times more likely than smaller firms to do so.
But this measured pace hasn’t translated into safety. Nearly half of organizations (48%) don’t monitor their AI systems for accuracy, drift, or misuse – basic pillars of responsible governance. Among small firms, only a staggering 9% monitor their systems at all. Limited resources and a lack of in-house expertise amplify these risks in smaller environments.
Pressure to Move Fast Is Outpacing Safety
The greatest barrier to stronger AI governance isn’t technical complexity or regulatory ambiguity; it’s urgency. Nearly 45% of all respondents, and 56% of technical leaders, cited pressure to move quickly as the primary governance obstacle. In many companies, governance is still perceived as a brake on innovation rather than an accelerator of safe deployment.
That perception is misguided. The absence of structured oversight often leads to preventable failures – issues that can stall projects, erode stakeholder trust, and attract regulatory scrutiny. Robust governance frameworks, including monitoring, risk assessments, and incident response protocols, enable teams to move both faster and more safely.
Policies Are Not Preparedness
While 75% of companies report having AI usage policies, fewer than 60% have designated governance roles or defined response playbooks. This signals a clear disconnect between policy and practice. Among small firms, the disparity is even greater – only 36% have governance leads and just 41% conduct annual AI training.
This “check-the-box” mentality suggests many organizations are treating governance as a compliance formality rather than an integral part of the development process. Real governance means assigning ownership, integrating safeguards into workflows, and allocating resources to AI oversight – ideally from the start.
Leadership Gaps Persist
The survey reveals a growing divide between technical leaders and their business counterparts. Engineers and AI leaders are almost twice as likely to be pursuing multiple use cases, leading hybrid build-and-buy strategies, and pushing deployments forward. Yet these same leaders bear the brunt of governance demands – often without the training or tools to fully manage the risks.
For CTOs, VPs, and engineering managers, the lesson is clear: Technical execution must be matched with governance acumen. That means closer alignment with compliance teams, clear accountability structures, and built-in processes for ethical AI development.
Small Firms Pose a Systemic Risk
One of the survey’s most pressing findings is the governance vulnerability of small firms. These organizations are significantly less likely to monitor models, define governance roles, or stay current with emerging regulations. Only 14% reported familiarity with well-known standards like the NIST AI Risk Management Framework.
In a world where even small players can deploy powerful AI systems, this presents a systemic risk. Failures to mitigate bias, data leaks, model degradation, or misuse can cascade across the ecosystem. Larger enterprises must take a leadership role in uplifting the governance capacity of their vendors, partners, and affiliates. Industry-wide collaboration, shared tools, and templates can also help minimize issues.
As these survey results – along with recent governance studies from organizations like Deloitte and McKinsey – make clear, the state of AI governance is grim. Enterprises are taking real regulatory and reputational risks in the name of keeping up, and that is a mistake. The organizations that thrive won’t be those that simply deploy AI fast, but the ones that deploy it responsibly, at scale.
Here are some actions enterprise leaders can take to prioritize AI governance:
- Make AI Governance a Top-Down Initiative: AI governance should be a board-level concern. Assign dedicated leadership, establish cross-functional ownership, and link governance to business outcomes.
- Embed Risk Monitoring into DevOps: Integrate monitoring tools for model drift, hallucination, and injection attacks directly into deployment pipelines.
- Make AI Governance Training Mandatory: Invest in AI training for your entire organization. Ensure teams understand key frameworks like NIST AI RMF, ISO 42001, and applicable local and industry-specific regulations that impact your business.
- Get Ready to Fail: Develop incident response plans tailored to AI-specific risks – bias, misuse, data exposure, and adversarial attacks. The only guarantee is that there will be missteps along the way. Make sure you’re prepared to remediate quickly and effectively.
- Be an Active Participant: Support your AI partners and vendors by sharing best practices, tools, and governance frameworks. Progress is faster when we move together.
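To make the drift-monitoring recommendation concrete, here is a minimal sketch of a check that could run inside a deployment pipeline. It uses the Population Stability Index (PSI), a common drift metric; the function names and the 0.25 threshold are illustrative assumptions, not prescriptions from the survey.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; higher PSI means more drift.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    # Clip production values so out-of-range scores land in the edge buckets.
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    # Small epsilon avoids division by zero on empty buckets.
    e_pct = np.maximum(e_counts / e_counts.sum(), 1e-6)
    a_pct = np.maximum(a_counts / a_counts.sum(), 1e-6)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def drift_gate(baseline, live, threshold=0.25):
    """Return True if the pipeline should alert or block on drift."""
    return population_stability_index(baseline, live) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature values
    stable = rng.normal(0.0, 1.0, 10_000)    # production looks the same
    shifted = rng.normal(1.5, 1.0, 10_000)   # production has drifted
    print(drift_gate(baseline, stable))   # expect False
    print(drift_gate(baseline, shifted))  # expect True
```

In practice, a check like this would run on a schedule against live feature or score logs, with alerts wired into the same on-call channels the engineering team already uses.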
Organizations leading the way in AI adoption share one thing in common: they treat governance as a performance enabler. They weave monitoring, risk evaluation, and incident management into engineering workflows. They use automated checks to prevent flawed models from reaching production. They also prepare for inevitable failures with robust, AI-specific contingency plans.
Most importantly, they embed governance across functions, from product and engineering to AI and compliance, ensuring that responsibility isn’t siloed. With clear roles, proactive training, and integrated observability, these organizations reduce risk and accelerate innovation in a way that is safe and sustainable.
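As a rough illustration of the automated checks described above, the following sketch shows a pre-deployment "release gate" that promotes a model only if every check passes. The check names and thresholds are hypothetical; each organization's governance team would define its own.

```python
# Minimal release-gate sketch: a model ships only if all checks pass.
# Check names and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    passed: bool
    detail: str = ""

def run_release_gate(checks):
    """Print any failures and return True only if every check passed."""
    failures = [c for c in checks if not c.passed]
    for c in failures:
        print(f"BLOCKED by {c.name}: {c.detail}")
    return not failures

def evaluate_model(accuracy, bias_gap):
    # Hypothetical thresholds a governance team might set.
    return [
        Check("accuracy >= 0.90", accuracy >= 0.90, f"got {accuracy:.2f}"),
        Check("bias gap <= 0.05", bias_gap <= 0.05, f"got {bias_gap:.2f}"),
    ]

if __name__ == "__main__":
    print(run_release_gate(evaluate_model(accuracy=0.93, bias_gap=0.02)))  # True
    print(run_release_gate(evaluate_model(accuracy=0.95, bias_gap=0.12)))  # False
```

The point of the pattern is less the specific metrics than the structure: every model faces the same named checks, and a failure produces an auditable reason rather than a silent deployment.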