The Trump Crusade Against ‘Woke AI’: A Threat to Civil Liberties

The White House has unveiled its new “AI Action Plan,” focused in part on what it calls “woke AI.” The term refers to language models that provide information at odds with the current administration’s positions on issues such as climate change and gender equity. One of the plan’s stated objectives is to regulate the generation of content that includes racism, sexism, or hate speech.

Alongside this initiative, an executive order titled “Preventing Woke AI in the Federal Government” has been issued. The order requires technology companies that receive federal contracts to demonstrate that their language models are free of what it categorizes as “ideological biases,” especially on matters of diversity, equity, and inclusion. Rather than improving the models’ accuracy and reliability, however, this form of censorship could limit their development as tools for expression and access to information.

The issue of bias in artificial intelligence is not new. Many models discriminate against racial and gender groups because of patterns learned from their training datasets: if the data contain prejudices, the AI tends to replicate them. For example, “predictive policing” tools, which are trained on arrest data, often recommend heavier surveillance of predominantly Black communities, perpetuating historical injustices.

Generative models face the same issue. Research has shown that certain language models disproportionately associate people of color with criminal activity. Studies reveal that 80% of generated images of prisoners depict individuals with darker skin, while over 90% of generated images of judges are male, even though women actually hold 34% of those positions.

Inaccuracy is not the only challenge. When government agencies use biased systems to make decisions, the repercussions can significantly affect the individuals involved, touching their freedom, access to financial resources, and healthcare. Agencies’ use of these models has grown under the current administration, which could reinforce systemic injustices.

It is vital to establish safeguards that prevent government entities from adopting AI tools that perpetuate bias. The current administration has weakened regulations designed to protect civil rights, increasing the likelihood of AI-related abuses. Furthermore, its new rules could push companies to develop models that are less effective and less accessible to the public.

Experts and digital rights organizations have voiced opposition to the use of algorithms in critical decision-making, emphasizing the need to protect citizens’ rights against determinations shaped by biased machine learning models.

Source: MiMub in Spanish