Expert Insights On Ethics, Tech, And Law


In the fast-moving world of AI development, risk managers are constantly trying to stay ahead of the curve.

As stories of bots and AI tools gone rogue dominate headlines and consumer AI tools flood the market, public trust in conversational AI has taken a hit. A 2024 Gallup/Bentley University survey found that only 23% of U.S. consumers trust companies to handle AI responsibly.

For AI governance and compliance professionals, this is the reality they face every day. With 2025 expected to bring new challenges, from AI agents to new regulatory developments, we spoke with industry leaders to get their views on the future of AI governance.

The regulatory maze will become more complex

In 2025, AI governance will rely heavily on compliance with emerging regulations, predicts Michael Brent, director of responsible AI at the Boston Consulting Group (BCG).

The EU AI Act, with penalties of up to €35 million, is poised to become a defining force in global AI governance.

“The EU’s regulatory approach will serve as a closely watched test, with organizations and nations monitoring its impact on competitive advantage and business operations,” explains Alyssa Lefaivre Škopac, director of AI trust and safety at the Alberta Machine Intelligence Institute (Amii).

Lefaivre Škopac predicts that “soft law” mechanisms – including standards, certifications, collaboration between national AI safety institutes and domain-specific guidance – will play an increasingly important role in filling regulatory gaps. “It’s still going to be fragmented and not completely harmonized for the foreseeable future, if ever,” she admits.

At the same time, the U.S. landscape is expected to remain fragmented.

Alexandra Robinson, who leads the AI governance and cybersecurity policy teams supporting U.S. federal government partners at Steampunk Inc., predicts that “state governments will invest in enacting consumer-focused AI legislation, while Congress will likely prioritize reducing barriers to innovation – as is happening in the U.S. consumer privacy regulatory landscape.”

Experts predict that the compliance landscape will take many forms. Fion Lee-Madan, co-founder of Fairly AI, an AI governance software company, makes a bold prediction: “ISO/IEC 42001 certification will be the hottest issue in 2025, as organizations move past the AI hype to the real security and compliance requirements of AI accountability.”

Standards and certifications, while voluntary, are becoming essential tools for navigating a complex regulatory environment, with procurement teams increasingly demanding them to ensure trust and compliance among AI providers, says Lee-Madan.

Agentic AI will redefine governance priorities

While generative AI has made headlines in 2024, experts believe that 2025 belongs to “agentic AI”. These systems, capable of autonomously planning and executing tasks based on user-defined goals, present unprecedented governance challenges.

“With the rise of agentic workflow research, we expect a surge in AI governance centered around AI agents,” predicts Apoorva Kumar, CEO and co-founder of Inspeq AI, a responsible AI operations platform.

Building on this, Jose Belo, co-chair of the International Association of Privacy Professionals (IAPP) London Chapter, warns that the decision-making capabilities of these systems raise thorny questions about autonomy and the safeguards needed to prevent harm. Likewise, experts like Ms. Lefaivre Škopac of Amii anticipate important research into the balance between the autonomy of these systems and accountability for their actions.

The implications for the workforce are also significant: “This will naturally intensify discussions and research on AI’s impact on the workforce, including the replacement of employees with AI agents and at what scale,” she warns.

AI governance will move from ethics to operational realities

“AI governance is no longer just an ethical afterthought; it is becoming standard business practice,” notes Ms. Lefaivre Škopac.

Companies are integrating responsible AI principles into their strategies, recognizing that governance involves people and processes as much as the technology itself, according to Giovanni Leoni, responsible AI leader and associate director at Accenture.

Framing governance as part of a broader transformation, Mr. Leoni observes: “AI governance is a change management journey.” This shift reflects a growing recognition of AI governance as an essential element of strategic planning rather than an isolated initiative.

This development is also highlighted by Alice Thwaite, head of ethics at Omnicom Media Group UK, who points out that companies are starting to treat AI governance, AI ethics and AI compliance as distinct disciplines. “Each of these areas requires unique frameworks and expertise,” she notes, reflecting a maturing understanding of AI challenges.

At the same time, Mr. Kumar draws attention to the operational aspect of this transformation. With the rise of Responsible AI Operations (RAIops) and platforms like Inspeq AI, businesses now have tools to measure, monitor and audit their AI applications, integrating governance directly into their workflows.

Environmental considerations will play a greater role in AI governance

Environmental considerations are becoming a central concern in governance, experts predict. IAPP’s Belo emphasizes that reducing the environmental impact of AI is a shared responsibility between providers and deployers.

Providers must take the lead by designing energy-efficient systems and adopting transparent carbon reporting practices. Deployers, in turn, must adopt sustainable practices around cloud usage, prioritize greener data centers, and minimize redundancy. Ethical decommissioning of AI systems will also be crucial to avoid unnecessary environmental degradation.

Key Drivers of AI Governance Advances

What will advance AI governance? Industry leaders offer key insights, each emphasizing different but interconnected factors:

BCG’s Michael Brent highlights the role of proactive business involvement: “The most important factor that will accelerate progress in AI governance is proactive investment by businesses, including the creation of responsible AI teams.”

From a practical perspective, Apoorva Kumar of Inspeq AI highlights the real-world implications: “Loss of trust and reputation has already cost companies like DPD, Snapchat and Google Gemini dearly. Continued failures will drive further progress in AI governance.”

On the business side, Ms. Lefaivre Škopac underlines the importance of purchasing power: “Organizations must leverage their purchasing power to demand higher standards from AI providers, including transparency, documentation and test results.”

Finally, as AI becomes more widespread, Mr. Belo highlights the need for education: “Proficiency in AI is increasingly recognized as an essential requirement across all sectors.”

Each perspective reinforces the idea that progress in AI governance requires action on multiple fronts: business engagement, transparency, and a growing focus on literacy and accountability.

The road ahead: clear challenges, complex solutions

In summary, the path to better AI governance is unlikely to be straightforward. Even the most optimistic predictions, such as increased investment in AI compliance, are tempered by the complexity of current theoretical frameworks and operational challenges.

Global harmonization remains an elusive goal, particularly in light of recent developments in the United States. Organizations continue to grapple with a patchwork of “soft law” mechanisms (frameworks, standards and protocols) without clear regulatory guidelines for specific use cases.

At the same time, emerging AI trends, such as agentic AI, are poised to introduce a new wave of complex risks that will test the adaptive capacity of responsible AI practitioners. A key distinction remains between a holistic, human-centered approach to responsible AI development and a narrower focus on risk management at the highest levels.

What is clear is that no team can meet these challenges alone. As Ms. Robinson of Steampunk aptly summarizes: “My motto for 2025 is to move from extractive AI compliance to effective engagement. For those of us working in AI governance, we need to empower technologists to build and deploy secure, trustworthy, and accountable AI. This means meeting people where they are: we can’t hand a product owner a 500-question AI risk assessment and expect anything but frustration.”

Although the AI governance landscape in 2025 promises to be more complex than ever, the outlines of a more structured and practical framework for AI governance are becoming visible.
