Scaling AI Risk Management without Slowing Innovation

Challenge

Managing AI risks for Generative and Predictive AI

Taking the first steps in generative AI is challenging, especially when you want to do it right. What new risks emerge with this technology? How do we identify them? And what do we do about them? These were some of the key questions our client, a major Belgian telecom provider, was facing. 

They were developing a flagship Generative AI application using cutting-edge technology but needed a clear approach to managing its risks responsibly. However, this was just one of many AI applications in their organization. With over 100 AI models—both generative and predictive—spanning multiple business units and sourced from in-house teams and external vendors, they required an operating model that was scalable and practical. 

The challenge was twofold:

  1. Establish a structured yet lightweight AI risk management approach that defined who needed to do what, and when.
  2. Ensure this framework applied across all AI systems while keeping it pragmatic, avoiding unnecessary complexity, and working within limited additional resources.

Our task was clear: design and implement a practical AI risk management methodology and ensure the flagship GenAI application could go live on time while adhering to these practices.

Approach

A Pragmatic and Actionable Framework

The first step was alignment: defining the AI risks in scope—including robustness, fairness, and environmental impact—while structuring the process into three key phases: identifying intrinsic risks, managing those risks, and measuring residual risk. For each AI system, we identified which risks applied and scored them from low to very high. We then decided on and implemented the controls and measures needed to mitigate those risks. Finally, we measured the residual risk—the risk that remains after mitigation. The residual risk should be lower than the intrinsic risk and should no longer contain any very high-risk elements; if very high risks persist, additional mitigation measures must be put in place.
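To make the three phases concrete, they can be sketched as a simple scoring workflow. The risk categories, level reductions, and function names below are illustrative assumptions for this sketch, not the client's actual framework.

```python
# Illustrative sketch of the three-phase workflow: intrinsic risk
# identification, mitigation via controls, and residual risk measurement.
# Categories, levels, and reductions are hypothetical examples.
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4


def apply_controls(intrinsic: dict[str, Level],
                   mitigations: dict[str, int]) -> dict[str, Level]:
    """Phases 2-3: apply controls and compute residual risk per category.

    `mitigations` maps a risk category to the number of levels the
    implemented controls are judged to reduce it by.
    """
    residual = {}
    for risk, level in intrinsic.items():
        reduction = mitigations.get(risk, 0)
        residual[risk] = Level(max(Level.LOW, level - reduction))
    return residual


def needs_escalation(residual: dict[str, Level]) -> bool:
    """Very high residual risk triggers additional mitigation."""
    return any(level == Level.VERY_HIGH for level in residual.values())


# Phase 1: intrinsic risks identified for a hypothetical GenAI system.
intrinsic = {"robustness": Level.VERY_HIGH, "fairness": Level.HIGH}
residual = apply_controls(intrinsic, {"robustness": 2, "fairness": 1})
```

In this sketch, both risks drop to medium after mitigation, so no escalation is needed; had a very high risk persisted, `needs_escalation` would flag it for further controls.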

Throughout this process, involving the right people is crucial. Effective AI risk management requires both technical expertise and engagement from controlling functions such as compliance and security, as well as decision-makers responsible for resource allocation. To achieve this, we leveraged existing organizational structures, enabling multidisciplinary teams to collaborate in meetings and workshops, drawing on each other's expertise to strengthen the risk management process.

By aligning our approach with established workflows—such as expert review platforms for risk assessment, project management processes with clear decision points, and escalation mechanisms for handling high-risk scenarios—we accelerated adoption, minimized duplication of effort, and ensured seamless integration within the organization.

We further refined the framework through iterative implementation, applying it to different types of AI applications—Generative vs. Predictive AI, in-house development vs. vendor solutions, and AI systems managed by technical teams vs. business owners. Each iteration strengthened the framework, making it more comprehensive while ensuring risks were addressed effectively.

Impact

An Organization Ready for Responsible AI

The result was a scalable AI risk management framework that covered the entire AI portfolio—from predictive models built by data teams to GenAI tools adopted by marketing. The organization now has a clear and actionable process for managing AI risks across all use cases.

Beyond the framework itself, we proactively mitigated risks for multiple AI applications—starting with the flagship GenAI system but also extending to other initiatives involving HR and marketing applications. This ensures critical risks are addressed today while laying the foundation for long-term responsible AI adoption.

The benefits were clear for all involved teams:

  • Business teams successfully launched their AI applications on time, reinforcing operational excellence
  • Data teams ensured the reliability, performance, and security of their AI systems
  • Risk and compliance functions established the necessary governance processes to ensure responsible AI

This project demonstrates how AI risk management, when approached pragmatically, can be embedded into an organization’s existing structure without slowing down innovation.

Ready to accelerate your journey toward responsible AI? Let’s connect.

Shift from data to impact today

Contact datashift