Overcoming AI’s Nagging Trust And Ethics Issues


The hype and enthusiasm surrounding artificial intelligence are giving way to more fundamental concerns: helping people and organizations become more successful. Questions now arise: Will AI deliver a superior customer experience, enrich people’s work, and create entrepreneurial opportunities? Or is it just the latest shiny new thing?

When used well, AI can be a very effective tool for wowing customers, pleasing employees, and launching new businesses. The key, however, is to do AI well: ethically and in a way that earns trust.

Trust and ethics in AI are what make business leaders nervous. For example, no fewer than 72% of executives who responded to a recent IBM Institute for Business Value survey declared themselves “willing to forgo the benefits of generative AI for ethical reasons.” Additionally, more than half (56%) say they are delaying major investments in generative AI until AI standards and regulations are clear.

Successful AI is, and always will be, a people-centered process: energizing people in their work, delivering products and services to customers, and making sure things run smoothly. “AI technology is still in its early stages, and we must assume that human input and oversight will continue to be crucial in the development of responsible AI,” said Jeremy Barnes, vice president at ServiceNow.

Although the level of human involvement required may change as AI continues to evolve, “I don’t believe it will ever be a completely hands-off process,” Barnes said. “Continuous improvement in AI requires regular monitoring and updates, relying on user research and human expertise for valuable insights and feedback. This ensures that AI systems can evolve and adapt efficiently and ethically.”

As with everything else in life, trust in AI must be earned. That confidence will likely continue to grow, but it will take years to evolve. Right now, trust is possible, but only in very specific and controlled circumstances, said Doug Ross, US technology director at Capgemini Americas.

“Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” Ross said. “Guardrails can be used in virtually any decision-making area, from examining bias to preventing leaks of sensitive data.”
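To make the idea concrete, here is a minimal sketch of one kind of output guardrail: scanning a model’s response for patterns that look like sensitive data before it reaches a user. The patterns, the `guard_output` function, and the redaction behavior are illustrative assumptions for this article, not any particular vendor’s tooling; production guardrail stacks use validated PII detectors and bias checks rather than a few regexes.

```python
import re

# Hypothetical detection patterns, for illustration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def guard_output(model_output: str) -> str:
    """Scan a model response and redact anything that looks sensitive."""
    cleaned = model_output
    for label, pattern in SENSITIVE_PATTERNS.items():
        cleaned = pattern.sub(f"[REDACTED {label.upper()}]", cleaned)
    return cleaned

print(guard_output("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED EMAIL], SSN [REDACTED SSN].
```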

Right now, generative AI use cases require significant human oversight, agrees Miranda Nash, group vice president for application development and strategy at Oracle. “For example, generative AI integrated into business processes helps users write first drafts of employee performance summaries, financial narrative reports, and customer service summaries.

“The key word here is ‘help,’” Nash continued. “The responsibilities of end users have not changed. They must still review and edit to ensure the accuracy of their work. In situations where AI accuracy has been validated with months or even years of observation, a human may only be needed for exception handling.”

The situation is unlikely to change anytime soon, pointed out Jeremy Rambarran, professor at the Graduate School of Touro University. “Even though the output generated may be unique, depending on how it is presented, it is always possible that part of the results may not be entirely accurate. This will eventually change as the algorithms improve, and they could eventually be updated in an automated manner.”

It is therefore important that “AI decisions are only used as an input into a human orchestration of the overall decision-making process,” Ross said.

How can we best guide AI so that it is ethical and trustworthy? Compliance requirements will of course be a major driver of trust in AI going forward, Rambarran said. “We must ensure that AI-based processes comply with ethical guidelines, legal regulations, and industry standards. Humans must be aware of the ethical implications of AI decisions and be prepared to intervene when ethical issues arise.”

It’s also important to “foster a culture of collaboration between humans and AI systems,” Rambarran said. “It is vital to encourage interdisciplinary teams of domain experts, data scientists, and AI engineers to work together to effectively solve complex problems.”

Dashboards are tools that can make this process easier, Ross said. “We can also segment decisions into low-, medium-, and high-risk categories. High-risk decisions should be escalated to a human for review and approval.”
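As an illustration, a minimal sketch of that segmentation might look like the following. The `Decision` fields, the 0.7 confidence cutoff, and the routing outcomes are hypothetical choices for this example; the point is simply that only low-risk decisions run unattended, while high-risk or low-confidence ones are escalated to a person.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    impact: str        # hypothetical label: "low", "medium", or "high"
    confidence: float  # hypothetical model confidence in [0, 1]

def route(decision: Decision) -> str:
    """Route a decision by risk tier; only low-risk runs unattended."""
    if decision.impact == "high" or decision.confidence < 0.7:
        return "escalate: human review and approval required"
    if decision.impact == "medium":
        return "auto-apply, then log for human audit"
    return "auto-apply"

print(route(Decision("approve small refund", "low", 0.95)))    # auto-apply
print(route(Decision("deny loan application", "high", 0.99)))  # escalate
```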

AI will not progress beyond the shiny new object phase without the governance, ethics and trust that will enable acceptance and innovation from all sides. We are all in this together.
