The growing presence of agentic artificial intelligence is transforming the technological landscape in ways that traditional AI cannot. This mode of AI is characterized by a high degree of autonomy in specific tasks, which poses significant regulatory challenges. The expert community is divided: some advocate creating specific legislation, while others believe existing regulations are sufficient to address these innovations.
At the international level, governance of artificial intelligence is taking shape: the OECD has identified a total of 668 initiatives across 69 countries and territories, including the European Union. In the United Kingdom, 18 interconnected regulatory frameworks have been established to govern both the development and use of AI, reflecting the government's intent to adopt a more flexible approach than other jurisdictions.
One of the hot topics in AI regulation is the concept of "human out of the loop" (HOOTL), which refers to systems that operate fully independently and make decisions without human intervention. This creates new complexities for policymakers, who must strike a balance between fostering innovation and protecting citizens.
Since the launch of ChatGPT in November 2022, many companies have begun integrating this tool to optimize costs, with projections that generative AI could contribute over $20 trillion to GDP and save 300 billion work hours annually. However, a significant share of generative AI projects is expected to be abandoned by the end of 2025 due to issues with data quality and risk management.
With the expansion of generative AI, there is an urgent need to assess whether the risks associated with agentic AI are adequately covered by existing regulations. In the United Kingdom, the significance of agentic AI has been highlighted in multiple reports, underscoring its relevance from a regulatory perspective.
Recently, passage of the Data (Use and Access) Act 2025 introduced changes that expand data processing capabilities, although it has also raised concerns about its flexibility and a potential increase in civil claims. Specific regulation of agentic AI does not appear to be on the immediate horizon, although the Department for Science, Innovation and Technology has published a Code of Practice on AI cyber security, providing guidelines to help companies oversee their increasingly autonomous uses of AI.
As adoption of agentic AI grows, compliance with current regulations becomes more complex, especially regarding citizens' rights over their data. The increased autonomy of these systems could lead to decisions that are unpredictable and difficult to observe, raising concerns about gaps in data protection impact assessments.
Looking ahead, regulating agentic AI presents a considerable challenge. As new implications emerge and its impact across sectors is assessed, current compliance mechanisms will need to be reviewed to determine how agentic AI fits within existing legislation in the United Kingdom and what adjustments may be necessary in the future.