Integration of Responsible AI in the Prioritization of Generative AI Projects

In the past two years, companies have faced a growing need for effective methodologies to prioritize generative artificial intelligence (AI) projects. With such a wide variety of possible applications, the challenge lies in assessing business value against cost, required effort, and the other factors that influence the success of these projects. One crucial consideration is generative AI "hallucinations" — plausible-sounding but incorrect or fabricated outputs — along with the rapidly evolving regulatory context. To address these concerns systematically, it is suggested to incorporate responsible AI practices into prioritization methods.

The Amazon Web Services (AWS) framework defines responsible AI as the practice of designing, developing, and using AI technology with the goal of maximizing benefits while minimizing associated risks. This framework outlines eight key dimensions: fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. At critical stages of the generative AI development lifecycle, teams should assess potential harm and risk along each of these dimensions, implement mitigation measures, and conduct ongoing monitoring.

Applying responsible AI throughout the development cycle is especially relevant for generative AI projects, due to the novel nature of the risks that may arise. Considering responsible AI from the outset provides a clearer view of project risk and the effort required to mitigate those risks, which can reduce the likelihood of incurring additional costs if issues are discovered later.

While many companies already have their own prioritization methods, the WSJF (Weighted Shortest Job First) method from the Scaled Agile framework is widely used. It calculates priority with the formula: Priority = (cost of delay) / (job size). The cost of delay is measured in terms of business value, which includes not only direct value but also urgency and adjacent opportunities the project enables. The job size estimates the effort required to execute the project, including development and infrastructure costs.
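The WSJF calculation above can be sketched in a few lines. This is a minimal illustration, not official Scaled Agile tooling; the function names, the 1–10 relative scale, and the example scores are all assumptions made for demonstration.

```python
# Minimal sketch of WSJF scoring. All names and values are illustrative.

def cost_of_delay(business_value: int, time_criticality: int,
                  opportunity_enablement: int) -> int:
    """Cost of delay = direct business value + urgency + adjacent opportunities."""
    return business_value + time_criticality + opportunity_enablement


def wsjf(business_value: int, time_criticality: int,
         opportunity_enablement: int, job_size: int) -> float:
    """Priority = (cost of delay) / (job size)."""
    return cost_of_delay(business_value, time_criticality,
                         opportunity_enablement) / job_size


# Example: relative scores on a 1-10 scale for a hypothetical project.
priority = wsjf(business_value=8, time_criticality=5,
                opportunity_enablement=3, job_size=4)
print(priority)  # 4.0
```

Because the scores are relative, the absolute number matters less than how it ranks against the other projects in the backlog.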

To illustrate these concepts, two generative AI projects can be compared: one that uses a large language model to generate product descriptions, and another that employs a text-to-image model to create visuals for advertising campaigns. An initial analysis, without considering responsible AI, might make the second project appear more attractive due to its urgent nature. However, when applying a risk assessment approach that incorporates the dimensions of responsible AI, the first project may reveal lower complexity and mitigation costs, suggesting that, after rigorous analysis, it could be the more suitable option to pursue.
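One way to make this comparison concrete is to fold the responsible AI mitigation effort into the WSJF job size. The sketch below uses entirely assumed scores for the two hypothetical projects; the point is only to show how a ranking can flip once mitigation effort is counted.

```python
# Illustrative comparison: naive WSJF vs. WSJF with responsible AI
# mitigation effort added to job size. All scores are assumed values.

def wsjf(cost_of_delay: float, job_size: float) -> float:
    return cost_of_delay / job_size

projects = {
    "product descriptions (LLM)":  {"cod": 10, "size": 4, "rai_effort": 1},
    "ad visuals (text-to-image)":  {"cod": 14, "size": 5, "rai_effort": 6},
}

for name, p in projects.items():
    naive = wsjf(p["cod"], p["size"])
    adjusted = wsjf(p["cod"], p["size"] + p["rai_effort"])
    print(f"{name}: naive={naive:.2f}, risk-adjusted={adjusted:.2f}")
```

With these numbers the image-generation project wins the naive comparison (2.80 vs. 2.50), but once its larger mitigation effort is included, the product-description project ranks higher (2.00 vs. 1.27) — the reversal described above.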

Therefore, it is crucial for companies to begin developing responsible AI policies and adopting appropriate practices for generative AI projects. This preparation not only informs prioritization decisions but is also fundamental to avoiding unnecessary costs and maintaining customer trust in an increasingly demanding regulatory environment.

via: MiMub in Spanish
