Key Takeaways
- Small, highly skilled teams can make a big impact in AI governance.
- Building trust and cooperation is key to operationalizing responsible AI governance.
- Proactive risk guidance and easy access to tools make compliance practical and AI more effective.
Getting Started with AI Governance
When it comes to AI governance, most enterprises are still putting the pieces together. The intentions are good: close to 90% of organizations already using AI are working on AI governance, according to IAPP, an organization dedicated to defining, promoting, and improving the professions of privacy, AI governance, and digital responsibility globally. Its AI Governance Profession Report 2025 finds, however, that one significant challenge is access to qualified AI governance talent and skills in the workforce.
While that poses an issue, the good news is that companies don’t need a large group of AI governance experts to get the job done. Small teams can have an outsized impact when the stakes are high, as is the case with AI – it is critical to instill trust in these systems in order to reap business value.
Speaking at DATAVERSITY’s Data Governance & Information Quality Conference, John Hearty, who leads global AI governance at Mastercard, cited research showing that 61% of people are wary of trusting AI and 67% report only low to moderate acceptance of it. Even so, “The enterprise can implement responsible AI,” he said. “Not only that, it’s practical to operationalize it. You can do it with a small team.”
Having started as the only employee in AI governance at Mastercard just a few years ago, Hearty today leads a lean team responsible for mitigating bias, ensuring efficacy, and enabling transparency in an environment where a wide range of tools, practices, and teams creates volume and complexity. The number of AI systems his team needs to assess, advise on, and monitor doubled every year until 2024, when it grew by 60%. He explained:
“We work with many major banks who expect to understand how our models perform. They have model risk management teams, and that’s their focus. But with different teams, different environments, and varying context and consistency, those teams may not document, monitor, or even own their solutions in the same ways, and that can create challenges. For example, record keeping and documentation may be sporadic or non-existent, and then you have to work out how to close the gap to give regulated customers what they need.”
Strategic AI Governance in Action
Amid the complexities of governing existing AI systems, leaders like Hearty find themselves in a position where people within the company may be launching or buying new AI systems without telling the AI governance team. The result: a backlog of legacy products to review, plus increased operational risk from new internally and externally developed products.
In fact, Hearty was in that very position in early 2022, which meant it was time to create an AI governance strategy at Mastercard built around two goals: building influence and making compliance easy. His team focused on building internal sources of influence, earning as strong a reputation as possible to build trust and cooperation. For example, the team partnered with developer peers to review its proposed AI governance framework for operationalizing responsible AI, then worked together on a bias-testing API to distribute across the company. He said:
“If your partnerships deliver something, they don’t just create value. They also create more influence as a result of having done valuable work with other people. Good partnerships, we found, were the ones structured to provide win-wins: something of value to both parties.”
The best partnerships focused on others’ pain points while also serving his team’s needs, such as providing Mastercard’s customers with a more detailed understanding of its models and products. His team partnered with a data science team on a model documentation template, then took it to customers pre-contract to see whether it would meet their information requirements. That upfront information sharing helped create trust.
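The template itself isn’t public; as a rough sketch of what such model documentation might capture, loosely modeled on the familiar model card pattern, consider a structure like the following (every field name here is an illustrative assumption, not Mastercard’s actual schema):

```python
# Illustrative model documentation template, loosely modeled on the
# "model card" pattern; field names are assumptions, not Mastercard's.
MODEL_DOC_TEMPLATE = {
    "model_name": "",
    "owner_team": "",
    "intended_use": "",            # what the model does, and for whom
    "out_of_scope_uses": "",       # uses the owner explicitly disclaims
    "training_data": {
        "sources": [],
        "known_limitations": "",
    },
    "performance": {
        "metrics": {},             # e.g., per-segment accuracy or AUC
        "evaluation_data": "",
    },
    "fairness": {
        "groups_tested": [],
        "bias_test_results": "",
    },
    "monitoring": {
        "drift_checks": "",
        "review_cadence": "",      # e.g., quarterly
    },
}
```

Sharing a skeleton like this with a customer’s model risk management team before a contract is signed surfaces any missing fields while they are still cheap to add.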
Making AI more effective, inclusive, and trusted is a process of enablement, which in this context “means helping people to do the things that you want done. It means helping people to use the processes and support that you provide, and it means helping people to succeed who generally want to do things that are aligned with your goals,” said Hearty.
Next Operationalization Steps
Making compliance easy comes down to job support: ensuring the owner of an AI system has not only guidance on work such as AI bias testing but also easy access to the tools that let them do it properly. Hearty’s team partnered with developers to create an API for bias testing, defined mitigation requirements for large language model (LLM) evaluation, and aligned its testing requirements with the tools it was distributing.
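The bias-testing API itself isn’t public; as a minimal sketch of the kind of check such a tool might standardize, here is a disparate impact calculation using the common “four-fifths rule” threshold (the function, toy data, and threshold are illustrative, not Mastercard’s implementation):

```python
from collections import defaultdict

def disparate_impact_ratio(outcomes, groups):
    """Ratio of the lowest group's favorable-outcome rate to the highest.
    A common rule of thumb (the "four-fifths rule") flags ratios below
    0.8 for further review; it is a screening check, not a verdict."""
    favorable = defaultdict(int)
    totals = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        favorable[group] += outcome
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Toy data: 1 = favorable model decision, 0 = unfavorable.
outcomes = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio, per_group = disparate_impact_ratio(outcomes, groups)
print(f"rates per group: {per_group}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for bias review and mitigation.")
```

Packaging a check like this behind a shared API is what turns a testing requirement into job support: every system owner runs the same test the same way.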
The linchpin of AI governance, though, is issuing guidance on the risks you’ve identified for a given AI project. It starts with having a product owner complete a scorecard, before a system is built or a contract is signed, that ferrets out everything from whether they understand the data and techniques they’ll be using to whether the system has agency to make decisions. “If risk might be present, treat it as though it is,” Hearty said. “And if you’re wrong, you’ll get that in control verification, because the team will give you evidence they mitigated the risk.”
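As a minimal sketch of how such a scorecard could drive controls, assuming fields modeled on the questions Hearty describes (the class, fields, and control names are hypothetical, not Mastercard’s instrument):

```python
from dataclasses import dataclass

@dataclass
class AIRiskScorecard:
    """Illustrative pre-build scorecard; fields are assumptions modeled
    on the questions Hearty describes, not Mastercard's actual form."""
    owner: str
    data_understood: bool           # does the owner understand the data?
    techniques_understood: bool     # ...and the techniques they'll use?
    uses_personal_data: bool
    has_decision_agency: bool       # can the system act on its own?

    def required_controls(self):
        # "If risk might be present, treat it as though it is": every
        # uncertain or risk-bearing answer triggers a control whose
        # mitigation must later be evidenced in control verification.
        controls = []
        if not self.data_understood:
            controls.append("data review before build")
        if not self.techniques_understood:
            controls.append("technique and efficacy review")
        if self.uses_personal_data:
            controls.append("privacy assessment and bias testing")
        if self.has_decision_agency:
            controls.append("human oversight and monitoring")
        return controls

card = AIRiskScorecard("product owner", True, True, True, True)
for control in card.required_controls():
    print("Assign control:", control)
```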
Small Teams Can Do Big Work
And yes, Hearty emphasized, all this can be done with few resources, if they are the right ones.
At Mastercard, five specialists cover a lot of ground. The team includes:
- an architect who makes the framework elements work with other teams’ processes and takes on tasks like system designs;
- a model risk expert who understands the regulatory demands on banking customers;
- a technology specialist who builds solutions, iterates on and improves existing technology, runs development partnerships, and manages technical debt; and
- a communications and networking expert who amplifies the team’s thought leadership and drives responsible AI messaging.
Hearty himself brings a background in R&D leadership in data science. He noted:
“Every member of this team has these same behaviors in common, and I would absolutely recommend hiring for these behaviors: curiosity and self-awareness. That’s absolutely crucial, because you need an AI governance team to be transparent, especially if it’s going to take scope of ownership.”
His takeaway: It is practical to operationalize responsible AI with a small team if you manage your influence well, create a resilient culture, implement smart processes and refine them, and focus on enablement. Said Hearty, “This work is urgent, it is important, it is possible.”