New report documents business benefits of ‘responsible AI’

The topic of responsible AI implementation is gaining traction as companies adopt AI to drive business strategies.

A new global study defines responsible AI as “a framework of principles, policies, tools and processes to ensure that AI systems are developed and operated in the service of individuals and society, while still enabling transformative business impact.”

The study, conducted by MIT Sloan Management Review and the Boston Consulting Group, found that while AI initiatives are proliferating, responsible AI is lagging behind.

While the majority of companies surveyed said they believed responsible AI (RAI) could help mitigate technological risks, including security, bias, fairness and privacy concerns, they admitted that they had failed to prioritize it. This gap increases the likelihood of failure and exposes companies to regulatory, financial and customer satisfaction risks.

The MIT Sloan/BCG report, which draws on survey findings and interviews with C-level executives and AI experts, found a significant gap between companies’ interest in RAI and their ability to execute the practice across the enterprise.

Conducted in spring 2022, the survey analyzed responses from 1,093 participants across 22 industries in 96 countries, representing organizations with annual revenues of at least $100 million.

The majority of respondents (84%) believe that RAI should be a top management priority, but just over half (56%) confirm that it has reached this status, and only a quarter say they have a fully mature RAI program in place.

More than half of respondents (52%) said their companies have some level of RAI practice, but 79% of them admit that their implementations are limited in size and scope.

Why do companies have a hard time putting RAI into practice? Part of the problem is confusion over the term itself, which overlaps with ethical AI; this barrier was cited by 36% of survey respondents, who acknowledged there is little consistency in how the term is used given that the practice is still evolving.

Other factors that limit RAI implementation fall under the umbrella of general organizational challenges:

  • 54% of respondents struggle to find RAI expertise and talent.
  • 53% cite a lack of training or knowledge among staff.
  • 43% say senior leadership gives RAI limited priority and focus.
  • Inadequate funding (43%) and lack of awareness of RAI programs (42%) also hinder RAI program maturity.

As AI gains prominence in the business, companies are under increasing pressure to bridge these gaps and successfully prioritize and execute RAI, the report said.

“As we navigate an increasingly complex and unknown AI-driven future, establishing a clear ethical framework is not optional — it is critical to that future,” said Riyanka Roy Choudhury, a CodeX fellow at Stanford Law School’s Center for Computational Law and one of the AI experts interviewed for the report.

RAI done correctly

Those companies with the most established RAI programs—about 16% of respondents to the MIT Sloan/BCG survey, which the report describes as “RAI leaders”—have a lot in common. They see RAI as an organizational issue, not a technical one, and they invest time and resources in creating a comprehensive RAI program.

These companies also take a more strategic approach to RAI, guided by corporate values and a broad view of responsibility to a wide range of stakeholders and society as a whole.

Taking a leadership role in RAI can translate into measurable business benefits, such as better products and services, improved long-term profitability, and even better recruitment and retention rates. Some 41% of RAI leaders report achieving a measurable business benefit, compared with only 14% of companies with less RAI investment.

RAI leaders are also better equipped to deal with an increasingly dynamic AI regulatory environment: more than half (51%) of RAI leaders feel ready to meet emerging AI regulations, compared with less than a third of other organizations, the survey found.

Companies with mature RAI programs follow some common best practices, including:

Make RAI part of the leadership agenda. RAI is not just a “checkbox” exercise, but part of an organization’s top management agenda. For example, approximately 77% of RAI leaders invest material resources (training, talent, budget) in their RAI efforts, compared with 39% of respondents overall.

Rather than product managers or software developers guiding RAI decisions, there is a clear message from the top that the responsible implementation of AI is an organization’s top priority.

“Without leadership support, practitioners may lack the necessary incentives, time and resources to prioritize RAI,” said Steven Vosloo, a digital policy specialist at UNICEF’s Global Insights and Policy Office and one of the experts interviewed for the MIT Sloan/BCG report.

In fact, nearly half (47%) of RAI leaders say they involve CEOs in their RAI efforts, more than twice as many as their peers.

Broaden your horizons. The survey found that in addition to top management involvement, mature RAI programs include a broad range of participants in these efforts—an average of 5.8 roles at leading companies, compared with 3.9 roles for non-leaders.

The majority of leading companies (73%) use RAI as part of their corporate social responsibility efforts and even consider society a key stakeholder. For these companies, the values and principles that determine their approach to responsible behavior apply to their entire portfolio of technologies and systems—and to processes such as RAI.

“Many of the core ideas behind responsible AI, such as preventing bias, transparency and fairness, are already aligned with the fundamental principles of corporate social responsibility,” said Nitzan Mekel-Bobrov, eBay’s chief AI officer and one of the experts interviewed for the report. “Therefore, an organization should already be naturally involved in its AI efforts.”

Start early, not after the fact. The survey shows that it takes an average of three years to begin reaping the business benefits of RAI. Companies should therefore start RAI programs as soon as possible, developing the necessary expertise and providing training. The AI experts surveyed also recommend reaching RAI maturity before AI efforts mature, to prevent failures and significantly reduce the ethical and business risks associated with scaling AI.

Given the high stakes surrounding AI, RAI needs to be prioritized as an organizational task, not just a technical issue. Companies that can connect RAI to their mission of being a responsible corporate citizen are the ones that achieve the best results.
