Taking responsibility: Governing AI to generate value while managing risks

19 February 2025

Introduction

Over the last two years, many companies have been experimenting with Artificial Intelligence (AI). Most have recognized the value these systems bring and have generated real business impact. At the same time, we've also seen the challenges and risks that come with AI. Think about chatbots that write poems about how useless they are, discrimination and transparency issues in automated loan approvals, privacy concerns when employees input sensitive information into AI assistants, or criminals stealing millions through deepfake phishing attacks. These aren’t hypothetical scenarios—they’ve all happened in recent years.

These growing concerns have given rise to the field of responsible AI. It’s a young discipline covering a vast range of topics, so the terminology can sometimes be confusing. In this blog post, I’ll introduce some of the core concepts to help you navigate responsible AI and understand the role of AI governance and AI risk management. Keep in mind that terminology is still evolving—some people use different words for the same ideas, or the same words to mean entirely different things.

Responsible AI vs Trustworthy AI

The terms "Responsible AI" and "Trustworthy AI" both refer to building AI that benefits users, organizations, and society while minimizing potential harm. The term Responsible AI is more commonly used in the US, whereas Europe mostly uses the term Trustworthy AI, which also has a formal definition in the context of industry standards.Nevertheless, I prefer the term Responsible AI as the overarching concept for three reasons:

  1. It focuses on the "how" rather than just the "what."
    Responsible implies doing something or taking action—being deliberate about how we develop and deploy AI. In contrast, trustworthy AI describes the end goal: an AI system that people trust. While aiming for trustworthy AI is great, meeting its formal definition might be a stretch for some organizations today.
  2. Trustworthiness is often linked to explainability, but that’s not always necessary.
    People trust air travel without fully understanding aerodynamics. Similarly, not every AI system needs to be explainable to a broad audience to be valuable. Explainability is an important tool, but it’s not the only factor in responsible AI.
  3. Taking ownership matters.
    The concept of responsible AI emphasizes proactive ownership—making deliberate efforts to align AI systems with organizational goals. This goes beyond mere accountability and focuses on actively building AI that reflects business and societal values.

For these reasons, I’ll use responsible AI throughout this post to describe the broad set of activities organizations need to undertake to build AI systems that create value for users, businesses, and society.

AI risk management as a core competency

At the core of responsible AI is risk management. This approach is reflected in the EU AI Act, and it’s easy to see why. AI applications, such as chatbots, introduce new risks—ranging from toxic content to misinformation—that must be managed. While it’s clear that not all AI systems require the same level of scrutiny, determining which risks to manage, and to what extent, isn’t always straightforward.

A useful way to think about AI risk management is through a maturity model, inspired by the Corporate Social Responsibility (CSR) pyramid (a minimal code sketch of these levels follows the list):

  1. "Be profitable" – Managing financial risks.
    AI should not harm a company’s bottom line. A poorly designed customer support chatbot that frustrates users and drives them away is a classic example of an AI risk at this level. When people talk about risk management, this is often the level they mean.
  2. "Obey the law" – Managing regulatory compliance risks.
    Organizations, like individuals, must follow regulations. This is where regulations like the EU AI Act and GDPR come into play. Managing regulatory compliance risks is typically discussed under labels like "AI compliance" or "compliance management". Poor management of these risks can result in significant regulatory fines, which in turn feed back into the financial risks of level one.
  3. "Be ethical" – Aligning AI with societal expectations.
    Beyond legal requirements, AI should align with ethical principles like fairness and safety. This is where terms like "ethical AI" come into play.
  4. "Be a good corporate citizen" – Contributing beyond what’s required.
    Companies can go beyond ethical and legal obligations to use AI for broader societal good. While I haven’t seen the term "philanthropic AI" used yet, I wouldn’t be surprised if it emerges.
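To make the pyramid concrete, here is a minimal sketch of how the four levels could drive a lightweight AI risk register. All class names, entries, and the severity scale are illustrative assumptions, not taken from any standard:

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    """The four levels of the CSR-inspired AI risk pyramid."""
    FINANCIAL = 1      # "Be profitable"
    REGULATORY = 2     # "Obey the law" (e.g. EU AI Act, GDPR)
    ETHICAL = 3        # "Be ethical"
    PHILANTHROPIC = 4  # "Be a good corporate citizen"


@dataclass
class AIRisk:
    """One entry in a lightweight AI risk register."""
    system: str       # which AI system the risk belongs to
    description: str  # what could go wrong
    level: RiskLevel  # where the risk sits on the pyramid
    severity: int     # 1 (low) to 5 (high), illustrative scale
    mitigation: str   # planned or implemented countermeasure


# Illustrative entries for a customer support chatbot
register = [
    AIRisk("support-chatbot", "Frustrating answers drive customers away",
           RiskLevel.FINANCIAL, 3, "Track resolution rate; escalate to humans"),
    AIRisk("support-chatbot", "Personal data stored without a legal basis",
           RiskLevel.REGULATORY, 5, "Redact PII before logging transcripts"),
    AIRisk("support-chatbot", "Tone varies across customer demographics",
           RiskLevel.ETHICAL, 2, "Periodic fairness review of sampled chats"),
]

# Review the most severe risks first, whatever their pyramid level
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{risk.level.name}] {risk.description} -> {risk.mitigation}")
```

Even a register this simple forces the useful questions: which level does a risk belong to, how severe is it, and what is the mitigation?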

Making it happen: AI governance

Recognizing these different levels of AI ambition and risk is great—but how do we translate them into day-to-day operations? That’s where AI governance comes in.

AI governance is about setting up the right people, processes, and tools to systematically manage AI risks and ensure AI aligns with organizational goals—whether those goals are profit, compliance, ethics, or philanthropy.

A significant part of the work in responsible AI is deeply technical. It involves evaluating machine learning models for accuracy, fairness, and robustness, preventing harmful outputs, and using explainability techniques to improve transparency. Until recently, these tasks were handled by AI engineers or data scientists with a personal interest in responsible AI. But now we’re seeing the rise of a new role: the AI safety engineer—a specialist who combines deep technical expertise with an understanding of the broader social dynamics that shape AI risks.
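As a concrete illustration of one such technical task, here is a minimal sketch of a demographic-parity check for the loan-approval example from the introduction. The function, data, and the 0.1 tolerance are all illustrative assumptions, written in plain Python rather than any particular fairness library:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = {}  # group -> (positive predictions, total predictions)
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + pred, total + 1)
    rates = {g: positives / total for g, (positives, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates


# Hypothetical loan-approval predictions (1 = approved) and group labels
predictions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(predictions, groups)
print(f"Approval rate per group: {rates}")   # A: 0.8, B: 0.2
print(f"Demographic parity gap: {gap:.2f}")  # 0.60

if gap > 0.1:  # illustrative tolerance; real thresholds are context-specific
    print("Warning: approval rates differ substantially across groups.")
```

A check like this is cheap to run automatically, but interpreting the result—deciding whether a gap is justified and what to do about it—is exactly where the judgment of an AI safety engineer comes in.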

Conclusion

In the end, terminology matters less than understanding each other. What truly matters is taking responsibility—owning the AI systems you build and contribute to. Setting concrete goals to create trustworthy solutions, managing risks, implementing governance, and having the right people in place are all essential steps in making AI work safely and effectively for users, businesses, and society. If you are still unsure where to start, let's have a coffee.