The use of algorithmic decision-making (ADM) technologies has raised growing concern in 2024, drawing scrutiny from the Electronic Frontier Foundation (EFF). These tools, which allow entities such as landlords, employers, regulators, and law enforcement to make decisions with little or no human intervention, can deeply affect personal freedom and access to essential services such as healthcare and housing.
The EFF has issued reports and statements globally, warning about the risks these technologies pose to human rights. The algorithms behind ADM systems often replicate biased patterns in their training data, which can automate historical injustices. For example, if an algorithm is trained on arrest records or health-insurance approval histories, its automated decisions may perpetuate the inequalities embedded in that data. The lack of transparency in how these systems reason further complicates any attempt to challenge their decisions.
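The mechanism described above can be illustrated with a deliberately simplified sketch (not from the article; the groups, records, and threshold are hypothetical): a "model" that learns approval rates from biased historical decisions will simply reproduce the historical disparity for new cases.

```python
# Toy illustration of bias replication: all data and names are hypothetical.
from collections import defaultdict

# Hypothetical historical decisions: group "A" was approved far less often.
history = [("A", 0), ("A", 0), ("A", 1), ("B", 1), ("B", 1), ("B", 0)]

def train(records):
    """Learn each group's historical approval rate from past decisions."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approvals, count]
    for group, approved in records:
        totals[group][0] += approved
        totals[group][1] += 1
    return {g: a / n for g, (a, n) in totals.items()}

def decide(model, group, threshold=0.5):
    """Approve only if the group's historical rate clears the threshold."""
    return model.get(group, 0.0) >= threshold

model = train(history)
# The automated decision mirrors the historical disparity:
print(decide(model, "A"))  # group A: rate 1/3, below threshold -> denied
print(decide(model, "B"))  # group B: rate 2/3, above threshold -> approved
```

No member of group "A" did anything in this sketch to earn a denial; the model inherits the disparity from the records it was trained on, which is exactly the feedback loop the EFF warns about.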
A critical aspect highlighted by the EFF is that decision-makers often use ADMs to justify their own biases. Although these tools are presented as a solution to optimize government decision-making, their adoption rarely involves the necessary public participation. This poses a significant risk to the most vulnerable groups in society, who may suffer the consequences of technologies implemented without proper evaluation.
The increasing optimism surrounding artificial intelligence has led law enforcement agencies to spend public resources on technologies that hinder accountability. The EFF has denounced the use of generative AI to draft police reports from body-camera footage, warning that this undermines transparency in the public safety sector.
The private sector is not immune to this trend: companies also turn to ADMs for decisions about employment, housing, and healthcare. Public opinion is largely negative. Many Americans are uncomfortable with these technologies, especially when companies continue laying off workers even though ADM tools have not delivered the expected productivity gains.
However, ADMs could also help prevent discriminatory decisions in the workplace, which underscores the need for mechanisms that guard against private-sector discrimination. At the same time, the drive to collect ever more user data has led to privacy-invasive practices, prompting the EFF to promote an approach that prioritizes privacy over harmful applications of these technologies.
Recently, in an episode of its podcast, the EFF discussed the challenges and potential constructive applications of artificial intelligence, emphasizing that its use should align with the protection of human rights and the well-being of individuals. Unless significant changes are made, the impact of AI on decisions about people may continue to cause more harm than benefit.
via: MiMub in Spanish